jeudi 12 novembre 2015

Big Data: InfiniDB vs Spider: What else?

Many of my recent engagements have revolved around strategies for implementing real-time Big Data analytics: comparing the hardware cost of extending a single-table collection with MariaDB and the parallel query feature found in the Spider storage engine, versus offloading to a columnar MPP store like InfiniDB or Vertica.

As of today, parallel query is only available in the MariaDB Spider releases supported by Spiral Arms. The most efficient way to use parallel query with Spider is on GROUP BY and COUNT queries that access a single Spider table. In such cases the Spider engine executes the query as a push-down, AKA map reduce.

Spider offers multiple levels of parallel execution for a single partitioned table.

The first level is per backend server:
To tell Spider to scan different backends concurrently, set spider_sts_bg_mode=1

The second level is per partition:
To tell Spider to scan different partitions concurrently, set spider_casual_read=1
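Putting the two levels together, here is a minimal sketch of a session enabling both (variable names as cited above; scope and defaults vary between Spider releases, so check what your build actually exposes):

```sql
-- First level: background (concurrent) scan of the different backends
SET GLOBAL spider_sts_bg_mode = 1;
-- Second level: independent reads per partition
SET GLOBAL spider_casual_read = 1;
-- Verify which spider_* variables your Spider build actually exposes
SHOW GLOBAL VARIABLES LIKE 'spider_%';
```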

Hey, but that's sometimes not enough! Per-partition parallel query can conflict with Spider's connection recycling feature. Behind the scenes, Spider searches for already open connections via a hash table keyed on each partition's connection parameters: host, user, password, and port or socket, and reuses a connection if it finds one. To really enable concurrent scans inside the same backend, or inside localhost, you need to create different users to defeat Spider's connection recycling, and the server should be addressed with a TCP connection string rather than a socket or named pipe.

In a real-life scenario, it is always good to create a different user per partition and grant it SELECT privileges only on the single table attached to that Spider partition. In the DDL you later map users to partitions by creating one server definition per partition, each referencing its own user.
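As a sketch, here is a more locked-down variant of the grants used in the test case below: one dedicated user per partition, with SELECT only on the single view that partition reads (user names and password are illustrative):

```sql
-- One dedicated user per Spider partition, minimal privileges,
-- forcing distinct TCP connections so recycling cannot merge them
CREATE USER 'spider_p1'@'127.0.0.1' IDENTIFIED BY 'secret';
GRANT SELECT ON test.v1 TO 'spider_p1'@'127.0.0.1';
CREATE USER 'spider_p2'@'127.0.0.1' IDENTIFIED BY 'secret';
GRANT SELECT ON test.v2 TO 'spider_p2'@'127.0.0.1';
```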

Here is a local test case for this scenario, using a single server with multiple cores to produce an aggregate.

create or replace table test (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `a` varchar(1000) DEFAULT NULL,
  PRIMARY KEY (`id`)
);

insert into test select *,'test' from seq_1_to_4000000; 

create or replace view v1 as select * from test where id>=0000000 and id<1000000;
create or replace view v2 as select * from test where id>=1000000 and id<2000000;
create or replace view v3 as select * from test where id>=2000000 and id<3000000;
create or replace view v4 as select * from test where id>=3000000 and id<4000000;

grant all on *.* to root1@'' identified by 'mariadb' ;
grant all on *.* to root2@'' identified by 'mariadb' ;
grant all on *.* to root3@'' identified by 'mariadb' ;
grant all on *.* to root4@'' identified by 'mariadb' ;

create or replace server l1 FOREIGN DATA WRAPPER mysql OPTIONS (USER 'root1',PORT 3307,PASSWORD 'mariadb',HOST '',DATABASE 'test' );
create or replace server l2 FOREIGN DATA WRAPPER mysql OPTIONS (USER 'root2',PORT 3307,PASSWORD 'mariadb',HOST '',DATABASE 'test' );
create or replace server l3 FOREIGN DATA WRAPPER mysql OPTIONS (USER 'root3',PORT 3307,PASSWORD 'mariadb',HOST '',DATABASE 'test' );
create or replace server l4 FOREIGN DATA WRAPPER mysql OPTIONS (USER 'root4',PORT 3307,PASSWORD 'mariadb',HOST '',DATABASE 'test' );

create or replace table test_spd (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  `a` varchar(1000) DEFAULT NULL,
  PRIMARY KEY (`id`)
) engine=spider COMMENT='wrapper "mysql"'
PARTITION BY RANGE (id) (
PARTITION P1 VALUES LESS THAN (1000000) COMMENT='table "v1", srv "l1"',
PARTITION P2 VALUES LESS THAN (2000000) COMMENT='table "v2", srv "l2"',
PARTITION P3 VALUES LESS THAN (3000000) COMMENT='table "v3", srv "l3"',
PARTITION P4 VALUES LESS THAN (4000000) COMMENT='table "v4", srv "l4"');

In this scenario, queries on my 4-million-record table will use up to 4 cores, i.e. the number of views that materialize the partitions used inside the Spider table.
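For instance, aggregates such as the following are pushed down to each of the four view-backed partitions, so each backend connection computes its partial result on its own core before Spider merges them (a sketch; timings depend on your hardware):

```sql
-- Each partition runs its own count concurrently;
-- Spider merges the four partial results
SELECT count(*) FROM test_spd;
-- GROUP BY is also pushed down per partition
SELECT a, count(*) FROM test_spd GROUP BY a;
```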

Now for a real use case, let's ask David Chanial, devops on the Believe Digital platform, who demonstrates this on an 8-node, 64-core cluster.

SVA: How do you initialize the cluster?

DCH: To manage the farm we deploy using the Python Fabric library! We created some deployment scripts that take as input the node list, the number of cores, and the number of replicas.

SVA: What table definitions and data sizing are you using?

DCH: Let's have a look at the 2.5-billion-record Spider table:

MariaDB [spider01_ro]> show table status LIKE 'Sales' \G

           Name: Sales 
         Engine: SPIDER
        Version: 10
     Row_format: Dynamic
           Rows: 2500657155
 Avg_row_length: 91
    Data_length: 228106478935
   Index_length: 40839657478
 Auto_increment: 2618963872
      Collation: utf8_general_ci 
 Create_options: partitioned

SVA: What type of partitioning?

DCH: It's a mixture of technical partitioning on the auto-increment key and partitioning per business segment. Indeed, we use sub-partitioning with double Spider tables that point to TokuDB or InnoDB tables in this case, reducing the shard count to a modulo of the number of cores in the cluster.
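A hypothetical sketch of such a layout (the column list, server names, and split points are invented for illustration; the real DDL is much larger): range partitioning on the auto-increment key for the technical axis, hash sub-partitioning on the business key so each node ends up with a modulo of the core count:

```sql
-- Illustrative only: two range partitions, each hashed into 8 sub-shards
CREATE TABLE Sales (
  id bigint NOT NULL AUTO_INCREMENT,
  idGenre int NOT NULL,
  PRIMARY KEY (id, idGenre)
) ENGINE=SPIDER COMMENT='wrapper "mysql", database "spider01"'
PARTITION BY RANGE (id)
SUBPARTITION BY HASH (idGenre) SUBPARTITIONS 8 (
  PARTITION p15 VALUES LESS THAN (1500000000) COMMENT='srv "node15"',
  PARTITION p16 VALUES LESS THAN MAXVALUE     COMMENT='srv "node16"'
);
```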

SVA: What performance can you get?

MariaDB [spider01_ro]> 
select count(*) from spider01_ro.Sales t; 
select idGenre, count(*) from spider01_ro.Sales GROUP BY idGenre;
| count(*)   |
| 2506437338 |
1 row in set (8.87 sec)

| idGenre | count(*)  |
|       0 |      8137 |
|       1 |  56044584 |
|       2 |  21179162 |
|       3 |  25446110 |
|       4 |  31829221 |
|     293 |   1386236 |
|     294 |     47109 |
|     295 |     50776 |
|     296 |       988 |

|     297 |     47589 |
|     298 |      9610 |
|     299 |      5215 |
|     300 |       224 |
295 rows in set (16.00 sec)

Indeed, 149M records read per second on 8 nodes, and inside a single node all the cores are working hard:

| 2848 | tsrc_p15_c02 | | spider01 | Query   |     1 | Queried about 730000 rows                                                   | select count(0),`idGenre` from `spider01`.`tsrc_p15_c02` group by `idGenre` |    0.000 |
| 2849 | tsrc_p15_c10 | | spider01 | Query   |     1 | Queried about 720000 rows                                                   | select count(0),`idGenre` from `spider01`.`tsrc_p15_c10` group by `idGenre` |    0.000 |
| 2850 | tsrc_p15_c18 | | spider01 | Query   |     1 | Queried about 950000 rows                                                   | select count(0),`idGenre` from `spider01`.`tsrc_p15_c18` group by `idGenre` |    0.000 |
| 2851 | tsrc_p15_c26 | | spider01 | Query   |     1 | Queried about 740000 rows                                                   | select count(0),`idGenre` from `spider01`.`tsrc_p15_c26` group by `idGenre` |    0.000 |
| 2852 | tsrc_p15_c34 | | spider01 | Query   |     1 | Queried about 1060000 rows                                                  | select count(0),`idGenre` from `spider01`.`tsrc_p15_c34` group by `idGenre` |    0.000 |
| 2853 | tsrc_p15_c42 | | spider01 | Query   |     1 | Queried about 920000 rows                                                   | select count(0),`idGenre` from `spider01`.`tsrc_p15_c42` group by `idGenre` |    0.000 |
| 2854 | tsrc_p15_c50 | | spider01 | Query   |     1 | Queried about 530000 rows                                                   | select count(0),`idGenre` from `spider01`.`tsrc_p15_c50` group by `idGenre` |    0.000 |
| 2855 | tsrc_p15_c58 | | spider01 | Query   |     1 | Queried about 790000 rows                                                   | select count(0),`idGenre` from `spider01`.`tsrc_p15_c58` group by `idGenre` |    0.000 |
| 2856 | tsrc_p16_c02 | | spider01 | Query   |     1 | Queried about 760000 rows                                                   | select count(0),`idGenre` from `spider01`.`tsrc_p16_c02` group by `idGenre` |    0.000 |
| 2857 | tsrc_p16_c10 | | spider01 | Query   |     1 | Queried about 660000 rows                                                   | select count(0),`idGenre` from `spider01`.`tsrc_p16_c10` group by `idGenre` |    0.000 |
| 2858 | tsrc_p16_c18 | | spider01 | Query   |     1 | Queried about 940000 rows                                                   | select count(0),`idGenre` from `spider01`.`tsrc_p16_c18` group by `idGenre` |    0.000 |
| 2859 | tsrc_p16_c26 | | spider01 | Query   |     1 | Queried about 930000 rows                                                   | select count(0),`idGenre` from `spider01`.`tsrc_p16_c26` group by `idGenre` |    0.000 |
| 2860 | tsrc_p16_c34 | | spider01 | Query   |     1 | Queried about 910000 rows                                                   | select count(0),`idGenre` from `spider01`.`tsrc_p16_c34` group by `idGenre` |    0.000 |
| 2861 | tsrc_p16_c42 | | spider01 | Query   |     1 | Queried about 800000 rows                                                   | select count(0),`idGenre` from `spider01`.`tsrc_p16_c42` group by `idGenre` |    0.000 |
| 2862 | tsrc_p16_c50 | | spider01 | Query   |     1 | Queried about 770000 rows                                                   | select count(0),`idGenre` from `spider01`.`tsrc_p16_c50` group by `idGenre` |    0.000 |
| 2863 | tsrc_p16_c58 | | spider01 | Query   |     1 | Queried about 740000 rows                                                   | select count(0),`idGenre` from `spider01`.`tsrc_p16_c58` group by `idGenre` |    0.000 |
SVA: Thanks David

Takeaway:

We conclude that in such a scenario, with fewer than 20 columns, the columnar model is only 10 times more efficient for a given hardware cost, while the RDBMS takes the lead on small indexed range queries.

Parallel and distributed queries are never an easy task, but we can make them shine on top of a regular, good old, well-proven OLTP storage engine.

Stay tuned: thanks to Spiral Arms' open source spirit, a foundation sponsor dear to us, and more and more supported customers inside MariaDB, we will get more of that Spider release inside the official MariaDB 10.2 branch.

If you feel like helping finance such a move, ask your MariaDB sales rep about specific Spider support, or ask a Spiral Arms sales rep directly.

lundi 29 juin 2015

Slave Election is welcoming GTID

Slave election is a popular HA architecture. The first MySQL/MariaDB toolkit to manage switchover and failover in a correct way was introduced by Yoshinori Matsunobu in MHA.

Failover and switchover in asynchronous clusters require caution:

- The CAP theorem needs to be satisfied. Getting strong consistency requires the slave election to reject transactions that end up on the old master while electing the candidate master.

- Slave election needs to make sure all events from the old master are applied to the candidate master before switching roles.

- It should be instrumented to find a good candidate master and make sure it is set up to take the master role.

- It needs topology detection: a master role can't be predefined, as the role moves around nodes.

- It needs monitoring to escalate a switchover to a failover.

MHA was coded at a time when no cluster-wide unique event ID was possible; each event was tracked as an independent coordinate on each node, forcing the MHA architecture to embed a way to rematch coordinates across all nodes.

With the introduction of GTID, MHA carries that heritage and looks unnecessarily complex, with an agent-based solution and a requirement for SSH connections to all nodes.

A lighter MHA was needed for MariaDB when replication uses GTID, and that's what my colleague Guillaume Lefranc has been addressing with a new MariaDB toolkit.

In MariaDB GTID usage is as simple as:

#>stop slave;change master to master_use_gtid=current_pos;start slave; 

As a bonus, the code is in Golang and does not require any external dependencies.
We can enjoy a single command-line procedure in interactive mode.

mariadb-repmgr -hosts=,, -user=admin:xxxxx -rpluser=repl:xxxxxx -pre-failover-script="/root/" -post-failover-script="/root/" -verbose -maxdelay 15    
Don't be afraid: the default is to run in interactive mode, and it does not launch anything yet.

In my post-failover configuration script, I usually update some HAProxy configuration stored on a NAS or a SAN and reload, or shoot in the head, all the proxies.

Note that the newly elected master will be passed as the second argument of the script.

I strongly advise against auto-failover based on some monitoring: get a good replication monitoring tool and analyze all master status alerts, checking for false positives, before enjoying pre-coded failover.

Lossless semi-synchronous replication in MDEV-162, and multiple performance improvements to semi-synchronous replication in MDEV-7257, have made it into MariaDB 10.1. It can be used to get much closer to zero data loss in case of failure. Combined with parallel replication, it is now possible to have an HA architecture that is as robust as asynchronous can be and, under replication delay control, crash safe as well.
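As a sketch, enabling semi-synchronous replication in MariaDB 10.1 still goes through the semisync plugins (plugin and variable names as in the stock distribution; tune the timeout to your network):

```sql
-- On the master
INSTALL SONAME 'semisync_master';
SET GLOBAL rpl_semi_sync_master_enabled = ON;
SET GLOBAL rpl_semi_sync_master_timeout = 1000; -- ms before falling back to async
-- On each slave
INSTALL SONAME 'semisync_slave';
SET GLOBAL rpl_semi_sync_slave_enabled = ON;
```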

Galera, aka MariaDB Cluster, has a write speed limit bound to the network speed, but comes with the advantage of always offering crash-safe consistency. Slave-election HA is limited by the master's disk speed and does not suffer from a slower network, but it loses consistency on failover when slaves can't catch up.

Interesting times ahead, to see how flash storage adoption favors one architecture or the other.

vendredi 17 avril 2015

Social Networking Using OQGraph

I was given the chance to experiment with typical social networking queries on an existing 60-million-edge dataset.

How You're Connected

Such algorithms, and others, are simply hardcoded into OQGraph.

With the upgrade to OQGraph v3 in MariaDB 10, we can operate directly on top of the existing tables holding the edges, as a kind of featured VIRTUAL VIEW.

CREATE TABLE `relations` (
  `id1` int(10) unsigned NOT NULL,
  `id2` int(10) unsigned NOT NULL,
  `relation_type` tinyint(3) unsigned DEFAULT NULL,
  KEY `id1` (`id1`),
  KEY `id2` (`id2`)
);

oqgraph=# select count(*) from relations;

| count(*) |
| 59479722 |
1 row in set (23.05 sec)

Very nice integration of table discovery, which saved me from referring to the documentation to find out all the column definitions.

CREATE TABLE `oq_graph`
ENGINE=OQGRAPH `data_table`='relations' `origid`='id1' `destid`='id2';

oqgraph=# SELECT * FROM oq_graph WHERE latch='breadth_first' AND origid=175135 AND destid=7;
| latch         | origid | destid | weight | seq  | linkid |
| breadth_first | 175135 |      7 |   NULL |    0 | 175135 |
| breadth_first | 175135 |      7 |      1 |    1 |      7 |
2 rows in set (0.00 sec)

oqgraph=# SELECT * FROM oq_graph WHERE latch='breadth_first' AND origid=175135 AND destid=5615775;
| latch         | origid | destid  | weight | seq  | linkid   |
| breadth_first | 175135 | 5615775 |   NULL |    0 |   175135 |
| breadth_first | 175135 | 5615775 |      1 |    1 |        7 |
| breadth_first | 175135 | 5615775 |      1 |    2 | 13553091 |
| breadth_first | 175135 | 5615775 |      1 |    3 |  1440976 |
| breadth_first | 175135 | 5615775 |      1 |    4 |  5615775 |
5 rows in set (0.44 sec)

What we first highlight is that the underlying table indexes KEY `id1` (`id1`) and KEY `id2` (`id2`) are used by OQGraph to navigate the vertices via a number of key reads and range scans; this 5-level relation took around 2689 jumps and 77526 range accesses to the table.
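One way to observe this yourself is to reset the handler counters just before the traversal and read them right after:

```sql
FLUSH STATUS;
SELECT * FROM oq_graph
 WHERE latch='breadth_first' AND origid=175135 AND destid=5615775;
-- Handler_read_key tracks the jumps, Handler_read_next the range accesses
SHOW SESSION STATUS LIKE 'Handler_read%';
```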

Meaning the breadth of the traversal was around 2500 vertices, with an average of 30 edges per vertex.


oqgraph=# SELECT * FROM oq_graph_myisam WHERE latch='breadth_first' AND origid=175135 AND destid=5615775;
| latch         | origid | destid  | weight | seq  | linkid   |
| breadth_first | 175135 | 5615775 |   NULL |    0 |   175135 |
| breadth_first | 175135 | 5615775 |      1 |    1 |        7 |
| breadth_first | 175135 | 5615775 |      1 |    2 | 13553091 |
| breadth_first | 175135 | 5615775 |      1 |    3 |  1440976 |
| breadth_first | 175135 | 5615775 |      1 |    4 |  5615775 |
5 rows in set (0.11 sec)

I need to investigate such a speed difference with MyISAM some more. Ideas are welcome!

jeudi 16 avril 2015

Howto - Move a table to different schema with no outage

I remember a time when it was debated whether views could be useful for a web-oriented workload.

This post is about one good use case:

The story is that some tables were created in a schema and used by the application within the same connection.

Later on, more schemas were added to separate data for multiple application domains, but they kept using the original table, a kind of cross-domain universal table.

With the addition of many new domains, a new global schema was added, storing freshly created universal tables.

The question was how to move the old table into the correct new schema without interrupting the availability of the service.

We decided to use a view that points to the physical table, change the application to use the view, and later atomically swap the table and the view.

Here is the test case for doing that :

-- Create schemas
CREATE DATABASE schema1;
CREATE DATABASE schema2;

-- Create table in schema 1
CREATE TABLE schema1.t1 (
  id int
);

-- Create views in schema 2
CREATE VIEW schema2.t1 AS SELECT * FROM schema1.t1;
-- Create dummy view on view in schema 1 
CREATE VIEW schema1.t1_new AS SELECT * FROM schema2.t1;

-- Change the application to use schema2.t1 (the view)

-- Switch schema 1 table and schema 2 view
RENAME TABLE schema2.t1 TO schema2.t1_old,
  schema1.t1 TO schema2.t1,
  schema1.t1_new TO schema1.t1;
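A quick sanity check after the atomic RENAME: both names still resolve, the physical table now lives in schema2, and schema1.t1 is a view over it:

```sql
SELECT count(*) FROM schema2.t1;  -- the moved physical table
SELECT count(*) FROM schema1.t1;  -- the compatibility view
SHOW CREATE VIEW schema1.t1;      -- confirms it points at schema2.t1
```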

Is there another path? Surely triggers + INSERT IGNORE as done in openark-kit (OAK) or pt-online-schema-change, but I also remember a time when it was debated whether triggers could be useful for a web-oriented workload :)

Thanks to Nicolas @ccmbenchmark for contributing the test case.

mardi 25 février 2014

Introduction to Job Queue daemon plugin

Dr. Adrian Partl works in the E-Science group of the Leibniz Institute for Astrophysics Potsdam (AIP), where the key topics are cosmic magnetic fields and extragalactic astrophysics, the branch of astronomy concerned with objects outside our own Milky Way galaxy.

Why did you decide to create a Job Queue plugin? What issues does it solve?

A: Basically, our MySQL databases hold astronomical simulation and observation content. The datasets are multiple terabytes in size and queries can take a long time; astronomers can definitely wait for data acquisition, but they jump on the data as soon as it is available. Job Queue offers protection from too many parallel query executions and prevents our servers from being spammed. Multiple queues give us priority between users; today, queries are executed as soon as a slot is available. Timeouts can be defined per group, and queries are killed past that delay.

Would you like to tell us more about your personal background?

A: I studied astronomy and have a PhD in astrophysics. For my PhD I focused on high-performance computing, parallelizing a radiation transport simulation code to enable running it on a large computational cluster. Nowadays I'm more specialized in programming and managing big datasets. I stopped doing scientific tasks, but I enjoy helping make those publications happen by providing all the IT infrastructure for the job.

How did you come to MySQL?

A: In the past we used SQL Server, but we rapidly reached the performance limits of a single box, and we found out that it can be very expensive to extend it for sharding.

We moved to MySQL, mostly with the MyISAM storage engine. We have also been using the Spider storage engine for 3 years, for creating the shards. We needed true parallel queries, so we created PaQu, a fork of Shard-Query, to better integrate with Spider. The map-reduce tasks in PaQu are all done by submitting multiple subsequent "direct background queries" to the Spider engine, and we shortcut the Gearman layer of Shard-Query. With this in place, it is possible to manage map-reduce tasks using our Job Queue plugin.

S: Spider is now integrated in MariaDB 10 and is making fast improvements regarding map-reduce jobs, using UDF functions with multiple channels on partitions and, for some simple aggregations, push-down query plans. Are you using advanced DBT3 big-query algorithms like BKA joins and MRR? Did you explore new engines like TokuDB, which could bring massive compression and disk IO savings to your dataset?

A: I will definitely have a look at this. In the past we experimented with column stores, but they are not really adapted to what we do. Scientists extract all columns even though they don't use all of them. Better to get more than to have to re-extract :)

When did you start working on Job Queue and how much time did it take? Did you find enough information during the task of developing a plugin? What was useful to you?

A: It took me one and a half years; I started by reading the MySQL source code. Some books helped me: MySQL Internals by Sasha Pachev at Percona, and MySQL 5.1 Plugin Development by Sergei Golubchik at SkySQL and Andrew Hutchings at HP. Reading the source code of the HandlerSocket plugin from Yoshinori Matsunobu definitely put me on a faster track.

S: Yes, we all miss Yoshinori, but he is now more social than ever :). Did you also seek help on our public Freenode IRC MariaDB channel?

A: Not at all, but I will visit now that I know about it.

How is the feedback from the community so far?

A: It has not picked up yet, but I also ported the pgSphere API from PostgreSQL. That project is called mysql_sphere; it still lacks indexes, but it is fully functional, and so far it gets very good feedback.

Any wishes to the core ?   

A: A GiST index API like in PostgreSQL would be very nice to have. I recently started a proxying storage engine to support multi-dimensional R-trees, but I would really like to add indexing on top of an existing storage engine.

S: The CONNECT engine made by Olivier Bertrand shares the same requirement; to create an indexing proxy you still need to create a full engine. We support R-trees in InnoDB and MyISAM, but this is a valid point: we do not have a functional index API like GiST. This has already been discussed internally but never implemented.

The results of the job execution are materialized in tables. Can you force a storage engine for a job result?

A: This is not yet possible at the moment but easy to implement.

What OSes and forks are known to work with Job Queue?

A: It's not very deeply tested, because we mostly use it internally on Linux with MySQL 5.5, and we have tested it on MariaDB recently. I don't see any reason why it would not work on other OSes. Feedback is of course very welcome!

Do you plan to add features in upcoming release?

A: We don't really need additional features nowadays, but we are open to any user requests.

S: Run some query on a scheduler ?

A: It can be done. I could allocate time if it makes sense for users.

Job Queue is part of a bigger project, Daiquiri, using Gearman. Can you elaborate?

A: Yes, Daiquiri is our PHP web framework for the publication of datasets. It is managed by Dr. Jochen Klar and controls dataset permissions and roles independently of MySQL grants. Job Queue is an optional component on top of it, for submitting jobs to multiple predefined datasets. We allow our users to enter free-form queries. Daiquiri is our front office for PaQu and the Job Queue plugin. We use Gearman in Daiquiri to dump user requests to CSV or to specialized data formats.

S: We have recently implemented roles in MariaDB 10; you may enjoy this as well, though it may not fit all specific custom requirements.

Where can we learn more about Job Queue?  


S: Transporting MySQL and MariaDB to space, the last frontier; there are few days like the one when I discovered your work, making me proud to work for an open source company. Many thanks, Adrian, for your contributions!

S: If you find this plugin useful and would like to use it, tell our engineering team by voting for this public Jira task. If you share the same need for a GiST-like indexing API, please vote for this public Jira task.