Migrating from MySQL Master-Master Replication to Galera Multi-Master Replication

Introduction

Galera is a synchronous Multi-Master Replication solution for MySQL. It therefore competes with several other MySQL high-availability architectures, which can very often be replaced easily by Galera's synchronous Multi-Master Replication for MySQL.

All those architectures have their advantages and disadvantages. Very often MySQL Master-Master Replication is used in the field because it is simple to set up. But after a while one faces its main disadvantage: data inconsistencies between the two Masters. This is not solely the fault of MySQL Replication, but MySQL Replication makes it easy to end up with such data inconsistencies.

In the following article we look at how you can replace a MySQL Master-Master Replication with Galera Multi-Master Replication, with the possibility to fall back if you do not like the solution or if you run into trouble.

Starting point

Some MySQL users have a typical Master-Master Replication set-up, either a) active-passive or b) active-active, for High Availability (HA) reasons, located either in the same data center or even in remote data centers:

mm_to_galera_1.png
a) active-passive b) active-active

Adding Galera synchronous Replication

As a first step you can add a Galera Replication Cluster as a simple Slave:

mm_to_galera_3.png

In this set-up you have to make sure that ALL nodes participating in the replication (Master 1, Master 2 and Galera 1) have the following parameters set:

#
# my.cnf
#
[mysqld]

default_storage_engine = InnoDB    # Galera only supports InnoDB

log_slave_updates      = 1         # also write events received from the Master to the local binary log
log_bin                = bin-log   # enable the binary log
server_id              = <n>       # see the rules below
binlog_format          = ROW       # Galera requires row-based replication
It is very important that the server_id is unique on all MySQL nodes BUT EQUAL on all Galera nodes. Example:

  • Master 1: server_id = 1
  • Master 2: server_id = 2
  • Galera 1: server_id = 3
  • Galera 2: server_id = 3
  • Galera 3: server_id = 3

This is to avoid conflicts during replication: a node skips binary log events carrying its own server_id, so events coming back through the ring are not applied a second time.
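For reference, the Galera-specific part of the configuration on the Galera nodes could look like the following minimal sketch. The cluster name, the node host names and the provider path are assumptions and have to be adapted to your environment:

#
# my.cnf (Galera-specific part, identical on all Galera nodes)
#
[mysqld]

wsrep_provider        = /usr/lib/galera/libgalera_smm.so
wsrep_cluster_name    = my_galera_cluster
wsrep_cluster_address = gcomm://galera1,galera2,galera3
wsrep_sst_method      = rsync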

Galera is set up as described in Installing MySQL/Galera. Please make sure that you do not have any MyISAM tables anymore. Galera cannot cope with any other Storage Engine than InnoDB at the moment.
The following query helps you to find out if you are using any Storage Engine other than InnoDB:

SELECT table_schema, engine, COUNT(*)
  FROM information_schema.tables
 WHERE table_schema NOT IN ('information_schema', 'mysql', 'performance_schema')
 GROUP BY table_schema, engine;
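If such tables show up, the following statement generates the ALTER TABLE commands needed to convert them to InnoDB (leave the tables in the mysql schema alone):

SELECT CONCAT('ALTER TABLE `', table_schema, '`.`', table_name, '` ENGINE = InnoDB;')
  FROM information_schema.tables
 WHERE engine <> 'InnoDB'
   AND table_schema NOT IN ('information_schema', 'mysql', 'performance_schema');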

Then you do a normal dump from Master 2 as follows:

mysqldump --user=root --password --master-data --single-transaction \
          --databases foodmart test > full_dump.sql

When you dump the database, avoid dumping the mysql schema, otherwise you will destroy your Galera Nodes 2 and 3. Then restore the dump on ONE node of the Galera Cluster (preferably on Node 1) after pointing it to its Master:

CHANGE MASTER TO master_host='master2', master_port=3306
     , master_user='replication', master_password='secret';

mysql --user=root --password < full_dump.sql

Then you can attach the Galera node to Master 2.
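A minimal sketch of this step on Galera Node 1; the binary log file and position do not have to be given here because --master-data already wrote them into the dump:

START SLAVE;
-- check Slave_IO_Running, Slave_SQL_Running and Seconds_Behind_Master
SHOW SLAVE STATUS\G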

Now all data arriving on your MySQL Master(s) will automatically be replicated to the Galera Cluster as well.

Adding the Galera Cluster into the ring

In a second step you can add the Galera Cluster into the Replication ring by pointing Master 1 to Galera Node 1:

mm_to_galera_4.png
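A minimal sketch of this step, assuming the host name galera1 and the same replication user as above; the binary log file and position are placeholders and have to be taken from SHOW MASTER STATUS on Galera Node 1:

-- On Galera Node 1: note the current binary log coordinates
SHOW MASTER STATUS;

-- On Master 1: point replication to Galera Node 1
STOP SLAVE;
CHANGE MASTER TO master_host='galera1', master_port=3306
     , master_user='replication', master_password='secret'
     , master_log_file='bin-log.000001', master_log_pos=107;
START SLAVE;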

Application Load Balancing for Galera

To have true High Availability (HA) it makes sense to put a load balancer in front of your Galera Cluster. This can be done either in the application itself or with a hardware or software load balancer.
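As an illustration only (HAProxy is just one possible choice and the host names are assumptions), a minimal TCP load balancer configuration could look like this:

# haproxy.cfg (sketch)
listen galera
    bind *:3306
    mode tcp
    balance leastconn
    server galera1 galera1:3306 check
    server galera2 galera2:3306 check
    server galera3 galera3:3306 check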

Now your Galera Replication Cluster is ready to take some load:

mm_to_galera_5.png

Once you are more familiar with Galera you can move the Virtual IP (VIP) from the MySQL Master-Master Replication to the Galera Replication Cluster:

mm_to_galera_6.png

And if you are happy with the synchronous replication and its scaling performance you can finally drop your old MySQL Master-Master set-up and bypass the VIP during the next downtime of your application.

mm_to_galera_8.png

Shortcuts

A shortcut would be to directly replace Master 2 with a Galera node:

mm_to_galera_7.png

Then you need one server less and you can directly use the MySQL Master node as the base for starting with Galera. You just have to replace the MySQL binaries with the MySQL-Galera binaries and then add 2 more Galera nodes to the set-up.

Important notes

Currently Galera works only with InnoDB tables. So you have to make sure that you convert all your non-InnoDB tables to InnoDB (except the ones in the mysql schema). Otherwise you will run into problems.

The described set-up works starting with Galera v1.1 and wsrep v22.3.

In one of our tests the memory consumption of the Galera node receiving the import grew by 1.5 Gbyte. So be prepared and make sure the system has enough memory! In our first tests the system started to swap heavily, which caused high I/O load. In this situation Galera behaved erratically...

mm_to_galera_9.png

If the Master is under very high load, the Galera Slave cannot catch up and starts lagging... This is not a problem if you run the load only against the Galera Cluster!
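Whether and how far the Galera Slave is lagging can be checked in the usual way on the replicating Galera node:

-- Seconds_Behind_Master in the output shows how far the node lags behind
SHOW SLAVE STATUS\G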

To avoid Split-Brain situations all cluster solutions need at least 3 nodes. This is the same with Galera. So when you move from MySQL Master-Master Replication you need one server more than before to avoid this problem. Theoretically Galera can be run in a 2-node set-up but this is strongly NOT recommended.

One way out of this situation is to use garbd, which acts as an arbitrator in such a scenario. This is called a 2 1/2-node set-up.
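garbd takes part in the quorum voting but does not store any data. A minimal sketch of how to start it; the cluster name and node addresses are assumptions matching the configuration sketch above:

garbd --address gcomm://galera1:4567,galera2:4567 \
      --group my_galera_cluster --daemon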

And now have fun with your synchronous Multi-Master Galera Replication for MySQL...

Comments

Hello, thank you, very useful information... I am planning a migration soon too, but I am wondering because you say it is working with release 1.1, which does not seem to be out yet... So, do you know if it would work with v1.0? Thank you!
Adam Etienne

Yes I know! :) It will NOT work with 1.0... We did some tests earlier with 1.0 and they failed. The results made it into 1.1...
oli

Ok, thank you, I guess I will have to wait for the 1.1 release, hopefully soon :)
Adam Etienne

If you cannot wait you can read Building Galera Replication from scratch...
oli

You don't mention the binary log and the position in this post. Will this work as expected (i.e. like regular MySQL replication)?
gflarity

Hi Geoff, the mysqldump was done with --master-data(=1), which explicitly sets the binary log file and the binary log position. In short: yes, it works as expected. Regards, Oli
admin

Hi,
we are migrating some old MySQL 5.7 servers to new MariaDB 10.5 machines with the new Galera 4, and we are doing the migration the normal way:

- Dumping all databases from the old servers and importing them into the new Galera cluster (no error)
- Configuring traditional SQL replication between the old and new servers so that clients can still connect without downtime (no errors)
- Starting replication and just creating a new database (error: the Galera nodes which are not the replication slave drop out of the cluster with an inconsistency)

Do you know what could be the failure here?
You described it very well here but it is not working that easily.

Greetings
dcz01

dcz01

Hello dcz01, I would say this is probably a bug. I can reproduce it on my system (MySQL 5.7 Master and MariaDB 10.6 Galera Cluster). Please contact your MariaDB support contact... If you need assistance we are happy to sell you our remote-DBA services... Regards, Oli
Shinguz

Hi Oli,
Thanks for your answer.
The problem was already reported and marked as solved, but not for me, it is still not working:
https://jira.mariadb.org/browse/MDEV-28172

Greetings
dcz01

dcz01

Hi dcz01, no, it is not solved... It was closed as "not a bug"... I will try to intervene. Regards, Oli
Shinguz