News

FOSDEM 2016 - MySQL slides about PERFORMANCE_SCHEMA available

FromDual.en - Wed, 2016-02-03 21:56

FOSDEM 2016 in Brussels (Belgium) on January 30/31 is over. It was very interesting and, IMHO, a big success.

For all those who could not attend FOSDEM 2016, our presentation slides about PERFORMANCE_SCHEMA and the sys schema are available here:

PERFORMANCE_SCHEMA and sys schema - What can we do with it? (PDF, 406 kbyte)


FromDual Ops Center for MySQL and MariaDB 0.3 has been released

FromDual.en - Tue, 2016-02-02 16:24

FromDual has the pleasure to announce the release of the new version 0.3 of the FromDual Ops Center for MySQL and MariaDB.

The FromDual Ops Center for MySQL and MariaDB (focmm) is an application for DBAs and system administrators to manage MySQL and MariaDB database farms.

The main task of Ops Center is to support you in your daily MySQL and MariaDB operation tasks.

More information about the FromDual Ops Center can be found here.

Download

The new FromDual Ops Center can be downloaded here.

In the inconceivable case that you find a bug in the FromDual Ops Center, please report it to our bug tracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Installation of Ops Center

A complete guide on how to install the FromDual Ops Center can be found in the Ops Center User Guide.

Upgrade from 0.2 to 0.3

An upgrade path from Ops-Center v0.2 to v0.3 is currently not supported. Please also check Upgrading.

Changes in Ops Center 0.3
  • Ops Center repository database was changed from SQLite to MySQL/MariaDB.
  • Structure and schema comparison introduced.
  • Variables comparison introduced.
  • Processlist tool introduced.
  • VIP is checked over crontab now.
  • Debugging facility improved.
  • Some configuration rules/checks implemented.
  • User management implemented for multiple instances.
  • Clone user implemented.
  • Database backup implemented.
  • Job handling implemented.
  • Blocking backup implemented for MyISAM tables.
  • Skip Slave with GTID works now.

MySQL Environment MyEnv 1.3.0 has been released

FromDual.en - Sun, 2016-01-31 12:34

FromDual has the pleasure to announce the release of the new version 1.3.0 of its popular MySQL, Galera Cluster, MariaDB and Percona Server multi-instance environment MyEnv.

The new MyEnv can be downloaded here.

In the inconceivable case that you find a bug in MyEnv, please report it to our bug tracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 1.1.x or higher to 1.3.0

# cd ${HOME}/product
# tar xf /download/myenv-1.3.0.tar.gz
# rm -f myenv
# ln -s myenv-1.3.0 myenv

If you are using plug-ins for showMyEnvStatus create all the links in the new directory structure:

cd ${HOME}/product/myenv
ln -s ../../utl/oem_agent.php plg/showMyEnvStatus/
Changes in MyEnv 1.3.0
MyEnv
  • NDB Cluster stuff was removed and cleaned up.
  • my.cnf template was slightly adapted to new requirements (query_cache_size, innodb_buffer_pool_instances, binlog_stmt_cache_size, wsrep_sst_method and wsrep_sst_auth).
  • Cgroups stuff was fixed and made more verbose in case of configuration errors.
  • Cgroups template (tpl/cgroups.conf) was extended.
  • Some warnings made more clear.
  • Too verbose error message in case of missing my.cnf was redirected to debug logging.
MyEnv Installer
  • Preparation work for Mac OS X was done.
  • Deleting an installation can be automated now.
  • Installer made ready for MySQL 5.7.
  • Tags angel and default are not attached to each section any more.
  • Schema sys was added to the hideschema variable.
MyEnv Utilities
  • Host tag added to the insert_test.sh script.
  • ORDER BY added to GROUP BY (in query_bench.php) to keep it working the same in future versions.
  • Return code (rc) bug fixed in block_galera_node.sh.

For subscriptions for commercial use of MyEnv, please get in contact with us.


Replication in a star

Cédric Bruderer - Thu, 2016-01-21 21:24

Most of you know that it is possible to synchronize MySQL and MariaDB servers using replication. But with the latest releases, it is also possible to use more than just two servers in a multi-master setup.

Most of you know that both MySQL and MariaDB support replication in a hierarchical master-slave setup to propagate changes across all connected servers.

But with the latest releases, a slave can have more than one master.

The keyword: Multi-Source replication

It is supported from MySQL 5.7 and MariaDB 10.0 on, and this article describes how to set it up.

What does Multi-Source mean?

Multi-Source means that you can take two or more masters and replicate them to one slave, where their changes are merged. This works just like regular MySQL/MariaDB replication.

Well, we are going to exploit that a little. It is still possible to configure a Master-Master setup, which basically allows the following configuration.

Multiple servers are joined in a cluster where every server is a master, and the replication happens over one central node.

Why should we use Multi-Source-Replication, if it's just Master-Master?

With a Master-Master setup, you are limited to two servers or a ring topology. With Multi-Source, you can now use more than two servers without having to use a ring topology, which might break and cause the replication to halt.

Further, it is possible to back everything up in one place, without the risk of interrupting access to the databases.

Layout

The logical topology is a star. The following image shows a possible setup, with servers located in different countries. Thanks to the asynchronous replication, which does not require a broadband connection to work properly, this is possible without a problem.

What has to be considered?
  • The problems for any other Master-Master-Setup apply here as well.
  • If the replication in the cluster is stalled, the problem is usually on more than one server, maybe even the entire cluster.
  • Although the synchronization is asynchronous and does not cause a lot of network traffic itself, the replication of large or heavily accessed databases will cause some traffic at the central node.

How do I set it up?

You set it up like any other Master-Master replication, except that you will have more masters in the cluster.

1) Set up a standard installation of MySQL 5.7 or MariaDB 10.0 or above.

 

2) Prepare the configuration on all servers:

- On all the outer nodes (In Layout: All except server 1)

log_slave_updates = 0

- On the central node (In Layout: server 1)

log_slave_updates = 1

 

NOTE:

  • The central server must forward everything it receives, so that the changes starting on some outer node will also reach all the other outer nodes.
  • The outer nodes must not forward such changes, because they would loop through the cluster forever.

 

- Give each server a unique server ID (server_id)!
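A minimal my.cnf sketch for these settings could look as follows (the server IDs and the role split are just examples, not values from the original article):

# Central node (server 1 in the layout):
[mysqld]
server_id         = 1
log_slave_updates = 1

# One of the outer nodes (e.g. server 2):
[mysqld]
server_id         = 2
log_slave_updates = 0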

 

- Create the user which will be used for the replication:

CREATE USER 'replicator'@'localhost' IDENTIFIED BY 'password';
GRANT REPLICATION SLAVE ON *.* TO 'replicator'@'' IDENTIFIED BY 'password';

- On MySQL 5.7 you have to enable the use of GTIDs (see step 3).

 

3) Set up the replication with all the outer nodes:

I will start with MySQL 5.7:

Source:

  • MySQL Multi-Source-Replication
  • Online enable GTIDs

    Although there is a paragraph which states that it is also possible with file and position, I have experienced something different, which forced me to enable GTIDs.

    You have to change the repositories for "master info" and "relay log" to the format "TABLE":

mysql> SET GLOBAL master_info_repository = 'TABLE';
mysql> SET GLOBAL relay_log_info_repository = 'TABLE';

Further, it is necessary to enable GTID. From MySQL 5.7.6 and higher, it is possible to do this without restarting the server.

mysql> SET GLOBAL ENFORCE_GTID_CONSISTENCY = ON;
mysql> SET GLOBAL gtid_mode = OFF_PERMISSIVE;
mysql> SET GLOBAL gtid_mode = ON_PERMISSIVE;
mysql> SET GLOBAL gtid_mode = ON;

The value of "ENFORCE_GTID_CONSISTENCY" has to be "ON", otherwise the slave will return an error and refuse to work. And don't forget to make those changes persistent in your "my.cnf".
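To make these settings survive a restart, the corresponding my.cnf entries could look like this (a sketch covering only the settings mentioned above):

[mysqld]
gtid_mode                 = ON
enforce_gtid_consistency  = ON
master_info_repository    = TABLE
relay_log_info_repository = TABLE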

Create a new link to a master:

mysql> CHANGE MASTER TO MASTER_HOST='', MASTER_PORT=, MASTER_USER='', MASTER_PASSWORD='', MASTER_AUTO_POSITION=1 FOR CHANNEL 'mysql-master';
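For illustration, a filled-in version might look like this (host, port, user and password are hypothetical examples, not values from the original article):

mysql> CHANGE MASTER TO MASTER_HOST='192.0.2.11', MASTER_PORT=3306, MASTER_USER='replicator', MASTER_PASSWORD='password', MASTER_AUTO_POSITION=1 FOR CHANNEL 'mysql-master';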

Start slave:

mysql> START SLAVE [thread_type] [FOR CHANNEL 'channel_name'];

Stop slave:

mysql> STOP SLAVE [thread_type] [FOR CHANNEL 'channel_name'];

How to modify a link:

mysql> STOP SLAVE FOR CHANNEL 'mysql-master';
mysql> CHANGE MASTER TO MASTER_HOST='', MASTER_PORT=, MASTER_USER='', MASTER_PASSWORD='', MASTER_AUTO_POSITION=1 FOR CHANNEL 'mysql-master';
mysql> START SLAVE FOR CHANNEL 'mysql-master';
mysql> SHOW SLAVE STATUS FOR CHANNEL 'mysql-master'\G

 

Now MariaDB 10.0:

Source: MariaDB Multi-Source-Replication

Create a new link to a master (ATTENTION: MariaDB has a different syntax compared to MySQL):

mariadb> CHANGE MASTER 'maria-master' TO MASTER_HOST='', MASTER_PORT=, MASTER_USER='', MASTER_PASSWORD='';
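Again for illustration, a filled-in version might look like this (host, port and credentials are hypothetical examples):

mariadb> CHANGE MASTER 'maria-master' TO MASTER_HOST='192.0.2.12', MASTER_PORT=3306, MASTER_USER='replicator', MASTER_PASSWORD='password';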

Start all slaves on this server at the same time:

mariadb> START ALL SLAVES;

Check the status of all slaves on this server:

mariadb> SHOW ALL SLAVES STATUS\G

If you want to manage a slave on its own, you can use the regular commands. But you have to make the connection the default one.

Here is an example of what you have to do if you want to modify the connection 'maria-master':

mariadb> STOP SLAVE 'maria-master';
mariadb> CHANGE MASTER 'maria-master' TO MASTER_HOST='', MASTER_PORT=, MASTER_USER='', MASTER_PASSWORD='';
mariadb> START SLAVE 'maria-master';
mariadb> SHOW SLAVE 'maria-master' STATUS\G

After you have done this, you have a normal Master-Slave replication between two servers (as shown in the following picture). To complete the star, repeat those steps on each server you want to connect.

Is it possible to expand an existing setup?

It is no problem to expand an existing master-master or master-slave setup. However, I would recommend creating new connections, so that the replication configuration stays consistent with the other connections.

How do I break it?

Just like any other Master-Master setup. So be careful with writing on more than one server. In general, if a bad command is committed, it will be replicated to the other nodes. This will cause the cluster to stall.

Another problem is auto_increment. Duplicate IDs will cause a "Duplicate Key Error" and stall each server it happens on. This can be prevented by setting the variables "auto_increment_increment" and "auto_increment_offset". In this scenario, "auto_increment_increment" should be 7 (the number of servers in the cluster) and "auto_increment_offset" should be a unique value from 1 to 7 on each server.
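As a sketch, the my.cnf entries on the third of the seven servers could look like this (the server number is only an example):

# Server number 3 of 7:
auto_increment_increment = 7
auto_increment_offset    = 3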

How do I fix it?

Sadly, that is not so easy. In addition, the statements are still replicated over the cluster, which usually causes more than just one server to stall; most of the time it is the entire cluster.

You have to execute every action on each connection. This gets tedious if you have a large number of nodes.

Let's assume you have a duplicate key error, because two inserts happened at the same time.

 

MySQL 5.7:

If you experience a stalled replication on MySQL, you have to skip the GTID of the transaction which caused the stall.

First, stop the slave:

mysql> STOP SLAVE FOR CHANNEL 'failed-transactions-channel-name';

Retrieve the next GTID:

mysql> SHOW SLAVE STATUS\G

The line "Retrieved_Gtid_Set" contains the next GTID which would be executed. Copy the value and tell the server to execute that GTID:

mysql> SET GTID_NEXT="dd57a411-b477-11e5-b518-005056244454";

Execute a blank transaction:

mysql> BEGIN; COMMIT;

Tell the server to take control of the GTIDs:

mysql> SET GTID_NEXT="AUTOMATIC";

And restart the slave:

mysql> START SLAVE FOR CHANNEL 'failed-transactions-channel-name';

 

MariaDB 10.0:

Stop the slave for the connection:

mariadb> STOP SLAVE 'maria-master';

Define which connection you would like to edit:

mariadb> SET @@default_master_connection='maria-master'; # No, it's not a joke.

Skip the failed statement:

mariadb> SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;

Restart the slave:

mariadb> START SLAVE 'maria-master';

Check if the slave is running:

mariadb> SHOW SLAVE 'maria-master' STATUS\G

Please note the "@@default_master_connection". It is really necessary to set this variable, or you will not be able to change the counter you want. I'm not sure if I should be surprised or shocked that this is the recommended solution by MariaDB.

Hot expand

What is needed to add a new node? Is it required to stop the entire cluster? Or can I just add a new server?

If you want to add a new node to the cluster, it is best to take a dump from another node, including the master data. You then import that dump into the new node, make sure the link to the master is configured correctly, and start it. If everything is set up the right way, you should have no problems at all.
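A minimal sketch of this procedure, assuming mysqldump is used and 'node2' / 'newnode' are hypothetical host names:

# Take a consistent dump from an existing node, including the master position:
mysqldump -h node2 --all-databases --single-transaction --master-data=2 --routines --events > newnode_seed.sql
# Import it on the new node:
mysql -h newnode < newnode_seed.sql
# Then configure the link to its master (CHANGE MASTER ... as shown above) and start the slave.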

It is not necessary to stop the entire cluster, since you can add the links between the nodes while the servers are up.

Backup and Restore

Method 1

The thought behind Multi-Source replication was to make the administrator's life easier when it comes to backups. Instead of backing up all data at their respective locations, you can gather them on one server and do the backup on this machine, without interrupting the "productive" servers' work.

To obtain a backup of all the replicated databases in the cluster, it is best to do it on the central server, because everything has to go over it. This method is suitable to protect yourself from losing all data in the cluster. The downside is that when one node dies, you have to obtain the backup from that location and transfer it to the failed server.

Method 2

If you would like to keep your data safe at the location of the server, you can set up a slave at each location and replicate the master to it. For a restore, you could use the existing slave as the new node of the star, while the old server is rebuilt. This setup keeps the downtime of the service as short as possible. Further, you are not required to transfer the backup from one location to another, since it is already stored close to the failed server.

 

 

Versions used:

MariaDB: 10.1.10

MySQL: 5.7.10

How to Get a Galera Cluster Into Split Brain

Jörg Brühe - Fri, 2015-10-23 17:02

"Split Brain" is the term commonly used for a cluster whose nodes have different contents, rather than identical as they should have. Typically, a "split brain" situation is the DBA's nightmare, and the Galera software is designed to avoid it. Galera is very successful in that avoidance, and it needs some special steps by the DBA to achieve "split brain". Here is how to do it - or, for most DBAs, what to avoid doing to not get a split-brain cluster.

Galera's Design

First, let's remember how Galera is operating:

  • The Galera software ensures that all nodes participating in a cluster will start from identical contents, by doing a "snapshot state transfer" (SST) of all current data to a newly joining node.
  • When the cluster is running, Galera transfers all changes (transactions) to all cluster nodes and applies them (or rolls back and ignores, in the case of a conflict).
  • If some connections get lost, all nodes check whether they "have quorum" (belong to a majority), and stop serving requests if they don't.
  • When a disconnected node re-joins the cluster, it gets all meantime changes transferred ("incremental state transfer" IST) and so makes its contents current.
  • Should that be impossible, because some of those changes have become unavailable (log purge), a full transfer (SST) is done.
By this design, the Galera software successfully avoids getting into a "split brain" situation.

Of course, the quorum is a well-known concept. The old term for it is "majority consensus", and the approach is built on a simple principle:
In any set (of cluster nodes), there cannot be two (or more) non-overlapping subsets which both contain a majority of the elements.
So if some loss of connectivity splits the cluster into subsets, at most one of them can "have quorum", all others will stop serving requests, and there cannot be two (or more) different directions in which the contents (data) changes.

What Galera introduced (compared to previous designs of distributed DBMSs) is the efficient transfer of changes and conflict detection / resolution ("certification" in Galera terms) at "commit" time that makes the system fast, while previous designs used "distributed locking" or other principles which added latency to many commands and so made their systems slow.

The Story

Let's get back to the "split brain" issue, which I said Galera avoids, but which I also said can be reached. Sounds contradictory? Well, there are more active components than just Galera. Here is a real-world case, as it happened to (I won't say "achieved by") a customer:

Originally, they had set up a Galera cluster of three nodes; let's call them A, B, and C. This is started by bringing up node A as a stand-alone node, running MySQL with the "wsrep" plugin. Then, one after the other, nodes B and C are configured to join node A in forming a cluster, and started. As a result, there are three nodes communicating with each other that form the cluster. In addition, HAproxy is running somewhere, it will direct the clients to an active cluster node.

So far, so good: The cluster is running, clients connect and issue transactions, everything is ok, and the DBA/s turn/s to other tasks ...

Some time later, node A must be stopped to do some hardware maintenance. No problem, nodes B and C are running fine, they have quorum (2 of 3 is a majority), so the system is still available and operations continue. HAproxy detects that node A does not respond, so it directs all clients to B or C. The cluster architecture is serving its purpose of continuous availability even during a maintenance period.

Maintenance is done, node A is rebooted, its MySQL+Galera server process restarts. It comes up, HAproxy detects it as running and directs clients to it. All seems fine ...

Three hours later, someone has become suspicious, detects trouble, and node A is stopped. Why? What has happened?

Some
large
gap
is
appropriate
to
give
readers
a
break
and
let
them
consider
the
situation.

Analysis

What has happened?

Remember what I wrote about the cluster setup: It was

... started by bringing up node A as a stand-alone node, ... ... nodes B and C are configured to join node A ... ... three nodes communicating with each other that form the cluster.

These steps were sufficient to get the nodes up and running. What was missing, however, was to re-configure A from "stand-alone" to "member of cluster with B and C".

As a consequence, when A was restarted after the maintenance, it again came up stand-alone. It did not try to join the cluster (which would have triggered an IST of the meantime changes) or check for quorum (a stand-alone node is self-sufficient).

Based on A's configuration (as read from disk), all was fine.
Based on the concept of the A+B+C cluster, it was a plain, simple split-brain:
Some changes were done on B+C (which still had quorum), while others were done on A only (which was mis-configured).

Lesson to learn (or rather, to bring back into active brain memory):
If some configuration is changed at run-time, this change must also be done in the configuration files so that it is used on restart.

The typical example for such changes is a "set global" command modifying some dynamic variable, like "max_connections".
But in a Galera cluster, a node joining the others is also a dynamic configuration change, and it should ASAP be reflected in all configuration files. If this isn't done, the consequences might be as described above.
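In the Galera case this means: once node A has joined B and C, its configuration file should point to the other cluster members instead of starting a new, stand-alone cluster. A minimal sketch of the relevant my.cnf line (host names are hypothetical, all other wsrep settings omitted):

# Wrong (bootstraps a new, stand-alone cluster on restart):
# wsrep_cluster_address = gcomm://
# Right (joins the existing cluster on restart):
wsrep_cluster_address = gcomm://nodeB,nodeC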

Happy-End

Now, most stories have a happy ending, and this one shouldn't be an exception:

Luckily, the application uses self-generated keys, similar to UUIDs, so the entries created on A did not conflict with those of B+C. Also, there were no changes of existing data, just inserts. So the situation could be corrected by extracting all new data from A, inserting them on B+C, and then resetting A's state so that it asked B+C for an SST. Uff!

Operational Advice

There are some tools available that will do such a transfer. However, it can be done completely with standard parts coming with MySQL:

  • "mysqldump" will extract the data from A.
  • Suitable options will make sure this extract does not contain "drop" or "create" commands, and generates "insert ignore" (see the example command after this list).
    Check the documentation for the options "--no-create-db", "--no-create-info", "--skip-add-drop-table", and "--insert-ignore".
  • If the old, common data are deleted first, both dumping and loading become faster, and duplicates are reduced / avoided.
  • "mysql" can be used to load these data into B or C.
Note, however, that conflicts will be ignored and not reported. Other tools or approaches might do that.
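Put together, the dump/load step could look roughly like this (host names and the schema name "app_db" are hypothetical; the options are the ones listed above):

mysqldump -h nodeA --no-create-db --no-create-info --skip-add-drop-table --insert-ignore --databases app_db > node_a_extra.sql
mysql -h nodeB < node_a_extra.sql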

Had they used auto-increment keys, or had they modified existing data, it would have been much more complicated, and it might even have been impossible to combine all changes without losing some. I leave it to your imagination to think of such scenarios.

To repeat the lesson in DBMS / DBA terms:

  • The most important property of a database is consistency; it must be kept up at all times.
  • For database operations, the configuration on disk (in files) must be consistent with that in RAM (of the running processes), so any runtime changes must be reflected in the configuration files on disk to maintain consistency.

Percona's "pt-config-diff" can be used to compare a node's current variables to its configuration file.
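A typical invocation compares the running instance with its configuration file, for example (the file path is an assumption):

pt-config-diff /etc/my.cnf h=localhost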

Take care!

How to Get a Galera Cluster Into Split Brain

Jörg Brühe - Fri, 2015-10-23 17:02

"Split Brain" is the term commonly used for a cluster whose nodes have different contents, rather than identical as they should have. Typically, a "split brain" situation is the DBA's nightmare, and the Galera software is designed to avoid it. Galera is very successful in that avoidance, and it needs some special steps by the DBA to achieve "split brain". Here is how to do it - or, for most DBAs, what to avoid doing to not get a split-brain cluster.

Galera's Design

First, let's remember how Galera is operating:

  • The Galera software ensures that all nodes participating in a cluster will start from identical contents, by doing a "snapshot state transfer" (SST) of all current data to a newly joining node.
  • When the cluster is running, Galera transfers all changes (transactions) to all cluster nodes and applies them (or rolls back and ignores, in the case of a conflict).
  • If some connections get lost, all nodes check whether they "have quorum" (belong to a majority), and stop serving requests if they don't.
  • When a disconnected node re-joins the cluster, it gets all meantime changes transferred ("incremental state transfer" IST) and so makes its contents current.
  • Should that be impossible, because some of those changes have become unavailable (log purge), a full transfer (SST) is done.
By this design, the Galera software successfully avoids getting into a "split brain" situation.

Of course, the quorum is a well-known concept. The old term for it is "majority consensus", and the approach is built on a simple principle:
In any set (of cluster nodes), there cannot be two (or more) non-overlapping subsets which both contain a majority of the elements.
So if some loss of connectivity splits the cluster into subsets, at most one of them can "have quorum", all others will stop serving requests, and there cannot be two (or more) different directions in which the contents (data) changes.

What Galera introduced (compared to previous designs of distributed DBMSs) is the efficient transfer of changes and conflict detection / resolution ("certification" in Galera terms) at "commit" time that makes the system fast, while previous designs used "distributed locking" or other principles which added latency to many commands and so made their systems slow.

The Story

Let's get back to the "split brain" issue, of which I said that Galera avoids it, and also said it can be reached. Sounds contradictory? Well, there are more active components than just Galera. Here is a real-world case, as happened to (I won't say "achieved by") a customer:

Originally, they had set up a Galera cluster of three nodes; let's call them A, B, and C. This is started by bringing up node A as a stand-alone node, running MySQL with the "wsrep" plugin. Then, one after the other, nodes B and C are configured to join node A in forming a cluster, and started. As a result, there are three nodes communicating with each other that form the cluster. In addition, HAproxy is running somewhere, it will direct the clients to an active cluster node.

So far, so good: The cluster is running, clients connect and issue transactions, everything is ok, and the DBA/s turn/s to other tasks ...

Some time later, node A must be stopped to do some hardware maintenance. No problem, nodes B and C are running fine, they have quorum (2 of 3 is a majority), so the system is still available and operations continue. HAproxy detects that node A does not respond, so it directs all clients to B or C. The cluster architecture is serving its purpose of continuous availability even during a maintenance period.

Maintenance is done, node A is rebooted, its MySQL+Galera server process restarts. It comes up, HAproxy detects it as running and directs clients to it. All seems fine ...

Three hours later, someone has become suspicious, detects trouble, and node A is stopped. Why? What has happened?

Some
large
gap
is
appropriate
to
give
readers
a
break
and
let
them
consider
the
situation.

Analysis

What has happened?

Remember what I wrote about the cluster setup: It was

... started by bringing up node A as a stand-alone node, ... ... nodes B and C are configured to join node A ... ... three nodes communicating with each other that form the cluster.

These steps were sufficient to get the nodes up and running. What was missing, however, was to re-configure A from "stand-alone" to "member of cluster with B and C".

As a consequence, when A was restarted after the maintenance, it again came up stand-alone. It did not try to join the cluster (which would have triggered an IST of the meantime changes) or check for quorum (a stand-alone node is self-sufficient).

Based on A's configuration (as read from disk), all was fine.
Based on the concept of the A+B+C cluster, it was a plain, simple split-brain:
Some changes were done on B+C (which still had quorum), while others were done on A only (which was mis-configured).

Lesson to learn (or rather, to bring back into active brain memory):
If some configuration is changed at run-time, this change must also be done in the configuration files so that it is used on restart.

The typical example for such changes is a "set global" command modifying some dynamic variable, like "max_connections".
But in a Galera cluster, a node joining the others is also a dynamic configuration change, and it should ASAP be reflected in all configuration files. If this isn't done, the consequences might be as described above.

Happy-End

Now, most stories have a happy ending, and this one shouldn't be an exception:

Luckily, the application uses self-generated keys, similar to UUIDs, so the entries created on A did not conflict with those of B+C. Also, there were no changes of existing data, just inserts. So the situation could be corrected by extracting all new data from A, inserting them on B+C, and then resetting A's state so that it asked B+C for an SST. Uff!

Operational Advice

There are some tools available that will do such a transfer. However, it can be done completely with standard parts coming with MySQL:

  • "mysqldump" will extract the data from A.
  • Suitable options will make sure this extract does not contain "drop" or "create" commands, and generates "insert ignore".
    Check the documentation for the options "--no-create-db", "--no-create-info", "--skip-add-drop-table", and "--insert-ignore".
  • If the old, common data are deleted first, both dumping and loading become faster, and duplicates are reduced / avoided.
  • "mysql" can be used to load these data into B or C.
Note, however, that conflicts will be ignored and not reported; other tools or approaches might report them. A rough sketch of such a transfer follows below.
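
Put together, the transfer could look roughly like this; host names, user and schema are placeholders, and you should of course test such a procedure before relying on it:

# dump only the data from the mis-configured node A, without any DDL, as INSERT IGNORE
mysqldump --no-create-db --no-create-info --skip-add-drop-table --insert-ignore \
          --host=nodeA --user=dba --password mydb > nodeA_data.sql

# load it into a node that still has quorum; conflicting rows are silently skipped
mysql --host=nodeB --user=dba --password mydb < nodeA_data.sql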

Had they used auto-increment keys, or had they modified existing data, it would have been much more complicated, and it might even have been impossible to combine all changes without losing some. I leave it to your imagination to think of such scenarios.

To repeat the lesson in DBMS / DBA terms:

  • The most important property of a database is consistency; it must be maintained at all times.
  • For database operations, the configuration on disk (in files) must be consistent with that in RAM (of the running processes), so any runtime changes must also be reflected in the configuration files on disk.

Percona's "pt-config-diff" can be used to compare a node's current variables to its configuration file.

Take care!

Migration of SQLite to MySQL

Cédric Bruderer - Mon, 2015-10-19 09:38

In my first blog post ever, I am going to cover the migration of an SQLite database to MySQL. The tool used is MySQL Workbench, which you can download from the MySQL website. In this particular case, it is about the upgrade of mocenter 0.2 to 0.3.

In the Workbench on the right side, you have the button “Database Migration”. Once you click on it, the introduction to the migration wizard will show up.

Setting up source and target

On the bottom of the screen there is a button called “Start Migration”. Click it to get to the source selection.

On the first drop-down menu choose "SQLite". The menu will now change and give you the possibility to load a file. After you have done this, you can test the connection with the button on the bottom left, or go to the bottom right and click next.

You get a new menu, where you can choose the target database. You can use a stored connection or a new one. If you have no idea which connection type to use, TCP/IP usually works fine. I recommend testing the connection here, so you see whether you can reach the target. Then click next.

If the schema fetch does not return any error, you get to the schema selection.

There you have to select a target schema, before you can click next and go to fetch the source. Once you have done that too, click next once more.

Object selection

Now the database is ready to copy.

If you want to remove some of the tables from the migration, you can do this under “Show Selection”.

In case you see some warnings, you can ignore them.

Now click next and make sure there are no errors or failures, until you can select how to create the target database.

If you want to, you can create the SQL file to import the database structure somewhere else. I am just going to put it onto my server.

Clicking next after this step will create the database on the server. If you chose to create the SQL file, it would be created now as well.

Click next when the creation is done, and you should get something that looks like this picture. If you select a line, you will be shown the command that was executed.

This was the structure. Now to the data:

To transfer your data from SQLite directly into the MySQL database, you can make an online copy. If you want to, or have to, do it from the command line, you could also write a bash script (see the sketch right below).
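
As a rough illustration only (this is not from the Workbench wizard and would need testing against your own data): such a script could dump the SQLite data and pipe just the INSERT statements into MySQL, assuming the target tables already exist and no values span multiple lines:

# hypothetical sketch: load SQLite data into an existing MySQL schema;
# ANSI_QUOTES lets MySQL accept the double-quoted identifiers sqlite3 emits
sqlite3 mocenter.db .dump \
  | grep '^INSERT INTO' \
  | mysql --init-command="SET SESSION sql_mode='ANSI_QUOTES'" --user=dba --password mocenter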

If you click the option “Truncate target tables”, all the tables that already exist will be cleared of any data, so be careful using that checkbox.

Once complete, you should get a report that looks somewhat like this.

------------------------------------------------------------------------------------
MySQL Workbench Migration Wizard Report
Date: Sun Oct 18 17:36:53 2015
Source: SQLite 1.0.0
Target: MySQL 5.6.24
------------------------------------------------------------------------------------

I. Migration

1. Summary
Number of migrated schemas: 1
1. mocenter
   Source Schema: mocenter
   - Tables: 10
   - Triggers: 0
   - Views: 0
   - Stored Procedures: 0
   - Functions: 0

2. Migration Issues
- versions: warning: Source table has a PRIMARY KEY allowing NULL values, which is not supported by MySQL. Column was changed to NOT NULL.
  - PRIMARY: warning: Truncated key column length for column from 0 to 255
- nodes: warning: Source table has a PRIMARY KEY allowing NULL values, which is not supported by MySQL. Column was changed to NOT NULL.
  - name_ui: warning: Truncated key column length for column from 0 to 255
- clusters: warning: Source table has a PRIMARY KEY allowing NULL values, which is not supported by MySQL. Column was changed to NOT NULL.
  - cluster_name: warning: Truncated key column length for column from 0 to 255
- vips: warning: Source table has a PRIMARY KEY allowing NULL values, which is not supported by MySQL. Column was changed to NOT NULL.
  - ip_address: warning: Truncated key column length for column from 0 to 255
- servers: warning: Source table has a PRIMARY KEY allowing NULL values, which is not supported by MySQL. Column was changed to NOT NULL.
  - server_name: warning: Truncated key column length for column from 0 to 255
- users: warning: Source table has a PRIMARY KEY allowing NULL values, which is not supported by MySQL. Column was changed to NOT NULL.
  - login_name: warning: Truncated key column length for column from 0 to 255
- moc_identifier: warning: Source table has a PRIMARY KEY allowing NULL values, which is not supported by MySQL. Column was changed to NOT NULL.
  - PRIMARY: warning: Truncated key column length for column from 0 to 255
- checks: warning: Source table has a PRIMARY KEY allowing NULL values, which is not supported by MySQL. Column was changed to NOT NULL.
          warning: Source table has a PRIMARY KEY allowing NULL values, which is not supported by MySQL. Column was changed to NOT NULL.
  - PRIMARY: warning: Truncated key column length for column from 0 to 255
  - name_dc: warning: Truncated key column length for column from 0 to 255
- jobs: warning: Source table has a PRIMARY KEY allowing NULL values, which is not supported by MySQL. Column was changed to NOT NULL.

3. Object Creation Issues

4. Migration Details
4.1. Table mocenter.versions (versions)
     Columns: version LONGTEXT, mr_version INT NULL, timestamp INT NULL
     Foreign Keys: -
     Indices: PRIMARY (version(255))
4.2. Table mocenter.licenses (licenses)
     Columns: license LONGTEXT NULL
     Foreign Keys: -
     Indices: -
4.3. Table mocenter.nodes (nodes)
     Columns: node_id INT, name LONGTEXT NULL, last_change_ts INT NULL, status LONGTEXT NULL, node_type LONGTEXT NULL, hostname LONGTEXT NULL, basedir LONGTEXT NULL, datadir LONGTEXT NULL, my_cnf LONGTEXT NULL, port INT NULL, database_user LONGTEXT NULL, database_user_password LONGTEXT NULL, error_log LONGTEXT NULL, pid_file LONGTEXT NULL, read_only INT NULL, server_id INT NULL, role_id INT NULL, cluster_id INT NULL, master_id INT NULL
     Foreign Keys: -
     Indices: PRIMARY (node_id), name_ui (name(255))
4.4. Table mocenter.clusters (clusters)
     Columns: cluster_id INT, name LONGTEXT NULL, last_change_ts INT NULL, type INT NULL
     Foreign Keys: -
     Indices: PRIMARY (cluster_id), cluster_name (name(255))
4.5. Table mocenter.vips (vips)
     Columns: vip_id INT, ip_address LONGTEXT NULL, name LONGTEXT NULL, ipaddr_type INT NULL, interface LONGTEXT NULL, alias INT NULL, primary_id INT NULL, failover_id INT NULL, location_id INT NULL, cluster_id INT NULL, last_change_ts INT NULL, fo_sync_only INT NULL, fo_wait_sync INT NULL
     Foreign Keys: -
     Indices: PRIMARY (vip_id), ip_address (ip_address(255))
4.6. Table mocenter.servers (servers)
     Columns: server_id INT, name LONGTEXT NULL, default_ip LONGTEXT NULL, os_user LONGTEXT NULL, cluster_id INT NULL, last_change_ts INT NULL, myenv_basedir LONGTEXT NULL
     Foreign Keys: -
     Indices: PRIMARY (server_id), server_name (name(255))
4.7. Table mocenter.users (users)
     Columns: user_id INT, login_name LONGTEXT NULL, password_hash LONGTEXT NULL, email_address LONGTEXT NULL, first_name LONGTEXT NULL, last_name LONGTEXT NULL, mobile LONGTEXT NULL, role_id INT NULL
     Foreign Keys: -
     Indices: PRIMARY (user_id), login_name (login_name(255))
4.8. Table mocenter.moc_identifier (moc_identifier)
     Columns: moc_identifier LONGTEXT
     Foreign Keys: -
     Indices: PRIMARY (moc_identifier(255))
4.9. Table mocenter.checks (checks)
     Columns: unit_id INT, type LONGTEXT NULL, name LONGTEXT, last_check_ts INT NULL, last_check_status LONGTEXT NULL, last_successful_check_ts INT NULL, last_successful_check_status LONGTEXT NULL
     Foreign Keys: -
     Indices: PRIMARY (unit_id, name(255)), name_dc (name(255))
4.10. Table mocenter.jobs (jobs)
     Columns: job_id INT, name LONGTEXT NULL, server LONGTEXT NULL, pid INT NULL, start_ts INT NULL, status LONGTEXT NULL, check_interval INT NULL, last_check_ts INT NULL, end_ts INT NULL, error_code INT NULL, error_message LONGTEXT NULL, command LONGTEXT NULL
     Foreign Keys: -
     Indices: PRIMARY (job_id)

II. Data Copy
- `mocenter`.`moc_identifier`: Succeeded: copied 1 of 1 rows from "mocenter"."moc_identifier"
- `mocenter`.`clusters`: Succeeded: copied 1 of 1 rows from "mocenter"."clusters"
- `mocenter`.`jobs`: Succeeded: copied 0 of 0 rows from "mocenter"."jobs"
- `mocenter`.`users`: Succeeded: copied 1 of 1 rows from "mocenter"."users"
- `mocenter`.`vips`: Succeeded: copied 1 of 1 rows from "mocenter"."vips"
- `mocenter`.`versions`: Succeeded: copied 2 of 2 rows from "mocenter"."versions"
- `mocenter`.`servers`: Succeeded: copied 2 of 2 rows from "mocenter"."servers"
- `mocenter`.`checks`: Succeeded: copied 46 of 46 rows from "mocenter"."checks"
- `mocenter`.`licenses`: Succeeded: copied 1 of 1 rows from "mocenter"."licenses"

Conclusion

Migrating from SQLite to MySQL is very easy when using MySQL Workbench.

Taxonomy upgrade extras: focmmmigration

MySQL Environment MyEnv 1.2.2 has been released

FromDual.en - Mon, 2015-10-12 17:48

FromDual has the pleasure to announce the release of the new version 1.2.2 of its popular MySQL, Galera Cluster, MariaDB and Percona Server multi-instance environment MyEnv.

The new MyEnv can be downloaded here.

In the inconceivable case that you find a bug in the MyEnv please report it to our bug tracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 1.1.x to 1.2.2

# cd ${HOME}/product
# tar xf /download/myenv-1.2.2.tar.gz
# rm -f myenv
# ln -s myenv-1.2.2 myenv

If you are using plug-ins for showMyEnvStatus create all the links in the new directory structure:

cd ${HOME}/product/myenv
ln -s ../../utl/oem_agent.php plg/showMyEnvStatus/
Changes in MyEnv 1.2.2

MyEnv
  • IP addresses are shown in the up output now.
  • All values from the [default] section are now inherited by the [dbname] sections.
  • Configuration file parser made more robust.
  • Empty my.cnf is caught correctly now.
MyEnv Installer
  • Mac OSX findings fixed in installer.
  • Fixed suppressed mysql_install_db error output problem.
  • Missing libaio library is found in installer now.
  • Check for missing /var/run/mysqld directory added.
  • Installation script mysql_install_db is called with --no-defaults now to avoid interference with other configuration files.
MyEnv Utilities
  • Script log_maintenance.php will not abort when having a problem with one log file but continue.
  • Help examples and template improved for log_maintenance.php.
  • Function check for mb_strlen added for log_maintenance.php.
  • Script option --truncate with too many lines leads to out of memory and takes ages. Fixed this by limiting lines to 10000 in log_maintenance.php.
  • Bug in drop_partition.php was fixed.

For subscriptions of commercial use of MyEnv please get in contact with us.

Taxonomy upgrade extras: MyEnvoperationMySQL Operationsmulti instanceconsolidationtestingrelease

FromDual Performance Monitor for MySQL and MariaDB 0.10.5 has been released

FromDual.en - Thu, 2015-08-06 18:29

FromDual has the pleasure to announce the release of the new version 0.10.5 of its popular Database Performance Monitor for MySQL, MariaDB, Galera Cluster and Percona Server fpmmm.

You can download fpmmm from here.

In the inconceivable case that you find a bug in fpmmm please report it to our Bug-tracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

This release contains various bug fixes and improvements. The previous release had some major bugs, so we recommend upgrading...

New installation of fpmmm v0.10.5

Please follow our mpm installation guide. A specific fpmmm installation guide will follow with the next version.

Prerequisites

CentOS 6

# yum install php-cli php-process php-mysqli
# cat << _EOF >/etc/php.d/fpmmm.ini
variables_order = "EGPCS"
date.timezone = 'Europe/Zurich'
_EOF
# rpm -Uvh http://repo.zabbix.com/zabbix/2.2/rhel/6/x86_64/zabbix-release-2.2-1.el6.noarch.rpm
# yum update
# yum install zabbix-sender

CentOS 7

# yum install php-cli php-process php-mysqli
# cat << _EOF >/etc/php.d/fpmmm.ini
variables_order = "EGPCS"
date.timezone = 'Europe/Zurich'
_EOF
# rpm -Uvh http://repo.zabbix.com/zabbix/2.2/rhel/7/x86_64/zabbix-release-2.2-1.el7.noarch.rpm
# yum update
# yum install zabbix-sender

Ubuntu 14.04

# apt-get install php5-cli php5-mysqlnd php5-curl
# cat << _EOF >/etc/php5/cli/conf.d/fpmmm.ini
variables_order = "EGPCS"
date.timezone = 'Europe/Zurich'
_EOF
# apt-get install zabbix-agent

OpenSuSE 13.1

# zypper install php5 php5-posix php5-mysql php5-pcntl php5-curl
# cat << _EOF >/etc/php5/conf.d/fpmmm.ini
variables_order = "EGPCS"
date.timezone = 'Europe/Zurich'
_EOF
# zypper addrepo http://download.opensuse.org/repositories/server:/monitoring/openSUSE_13.1 server_monitoring
# zypper update
# zypper install zabbix-agent

Upgrade from fpmmm 0.10.x to fpmmm 0.10.5

# cd /opt
# tar xf /download/fpmmm-0.10.5.tar.gz
# rm -f fpmmm
# ln -s fpmmm-0.10.5 fpmmm

The following templates in your Zabbix monitor should be replaced. Before you replace the templates it is a good idea to first delete all triggers...

  • tpl/Template_FromDual.MySQL.fpmmm.xml
  • tpl/Template_FromDual.MySQL.innodb.xml
  • tpl/Template_FromDual.MySQL.master.xml
  • tpl/Template_FromDual.MySQL.myisam.xml
  • tpl/Template_FromDual.MySQL.mysql.xml
  • tpl/Template_FromDual.MySQL.server.xml
  • tpl/Template_FromDual.MySQL.slave.xml
Changes in fpmmm v0.10.5

fpmmm agent
  • Better and more verbose error handling in various modules.
  • Directory for log file is created automatically if it does not exist yet.
  • All broken SQL queries (from 0.10.4) fixed again.
  • Added a delay for not so frequently changing data.
  • Several triggers which complained after restart are fixed now.
  • Connections to database were now reduced to the minimum.
  • Links for templates fixed.
  • Innodb_flush_log_at_trx_commit, log_queries_not_using_indexes and character_set_server triggers disabled by default.
  • Also sendCachedData is now checked for a too big cache file. Bug from swd.
Slave module
  • Slave error messages are caught and sent to the monitor.
  • Warning is written to the log file if slave module is configured without being a slave.
  • New slave status is reported correctly now.
  • Seconds_behind_master is now only sent when running.
Master module
  • New trigger for binlog_format = MIXED and replication filtering added.
  • Severity increased for STATEMENT based replication filtering.
  • Regexp bug fixed.
  • Master without binary log fixed.
MySQL module
  • Old broken triggers fixed.
Server module
  • IOPS graph added.
  • Device sda5 removed.
  • I/O statistics calculation improved.
  • I/O r/w wait experimental items implemented.
  • CPU count added.
  • NUMA and virtualization information added.
InnoDB module
  • innodb_flush_method item added.
  • Trigger for innodb_flush_method added.
  • innodb_force_recovery trigger severity increased.
  • innodb_log_files_in_group item added for log traffic threshold.
  • InnoDB transaction log traffic trigger and graph added.

For subscriptions of commercial use of fpmmm please get in contact with us.

Taxonomy upgrade extras: mysqlperformancemonitormonitoringfpmmmmaasperformance monitormpmrelease
