Murphy’s Law is also valid for Galera Cluster for MySQL
We had a Galera Cluster support case recently. The customer was drenched in tears because his Galera Cluster did not work any more and he could not get it running again.
Oops! What had happened?
A bit of background on this case: The customer wanted to do a rolling restart of the Galera Cluster under load because of an operating system upgrade which required a reboot of the servers.
Let's have a look at the MySQL error log to see what was going on. The customer began by restarting NodeC:
12:20:42 NodeC: normal shutdown --> Group 2/2
12:20:46 NodeC: shutdown complete
12:22:09 NodeC: started
12:22:15 NodeC: start replication
12:22:16 NodeC: CLOSED -> OPEN
12:22:16 all  : Group 2/3 component all PRIMARY
12:22:17 NodeC: Gap in state sequence. Need state transfer.
12:22:18 all  : Node 1 (NodeC) requested state transfer from '*any*'. Selected 0 (NodeB)(SYNCED) as donor.
12:22:18 NodeB: Shifting SYNCED -> DONOR/DESYNCED (TO: 660966498)
12:22:19 NodeC: Shifting PRIMARY -> JOINER (TO: 660966498)
12:22:19 NodeC: Receiving IST: 14761 writesets, seqnos 660951493-660966254
12:22:21 NodeC: 0 (NodeB): State transfer to 1 (NodeC) complete.
Everything went fine so far: NodeC came up again and did an IST as expected. But then the first operational error happened: The customer did not wait for NodeC to recover completely before rebooting NodeB. It seems NodeC needed some time to apply the IST. Whether a node has fully caught up again should be checked on all nodes with SHOW GLOBAL STATUS LIKE 'wsrep%';
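A minimal sketch of such a check (the connection credentials and the 5-second polling interval are assumptions, not from the original case):

# Poll the freshly restarted node until it reports itself as Synced and ready
# again; only then proceed with the next node of the rolling restart.
while true ; do
  state=$(mysql -N -B -e "SHOW GLOBAL STATUS LIKE 'wsrep_local_state_comment'" | awk '{print $2}')
  ready=$(mysql -N -B -e "SHOW GLOBAL STATUS LIKE 'wsrep_ready'" | awk '{print $2}')
  if [ "$state" = "Synced" ] && [ "$ready" = "ON" ] ; then
    break
  fi
  echo "Node not ready yet (state=$state, ready=$ready), waiting ..."
  sleep 5
done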
...
12:22:21 NodeC: Member 0 (NodeB) synced with group.
12:22:21 NodeB: Shifting JOINED -> SYNCED (TO: 660966845)
12:22:21 NodeB: Synchronized with group, ready for connections
--> NodeC seems not to be ready yet!
12:23:21 NodeB: Normal shutdown
12:23:21 all  : Group 1/2
12:23:21 NodeC: Aborted (core dumped)
And now Murphy struck for the first time: We hit a situation in the Galera Cluster which was not handled as expected. Now we had 2 nodes out of 3 not working. As a result the Cluster lost its quorum (non-Primary, because more than 50% of the nodes had disappeared) and did not reply to any SQL queries any more. This is a bug because both nodes left the cluster gracefully. The third node should have stayed Primary:
12:23:21 NodeB: Received SELF-LEAVE. Closing connection.
12:23:23 NodeB: Shifting CLOSED -> DESTROYED (TO: 660973981)
12:23:25 NodeB: Shutdown complete
12:23:29 NodeC: mysqld_safe WSREP: sleeping 15 seconds before restart
12:23:37 NodeA: Received NON-PRIMARY.
12:23:44 NodeC: mysqld_safe mysqld restarted
12:23:48 NodeC: Shifting CLOSED -> OPEN
12:23:48 NodeC: New COMPONENT: primary = no, bootstrap = no, my_idx = 0, memb_num = 2
12:23:48 NodeC: Received NON-PRIMARY.
12:23:48 NodeA: New COMPONENT: primary = no, bootstrap = no, my_idx = 1, memb_num = 2
12:23:48 NodeA: Received NON-PRIMARY.
12:24:30 NodeB: mysqld_safe Starting mysqld daemon
12:24:36 NodeB: Start replication
12:24:37 NodeB: Received NON-PRIMARY.
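On a node that still accepted local connections, the quorum situation could have been confirmed like this (a sketch, credentials assumed):

mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_status'"   # non-Primary = quorum lost
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_cluster_size'"     # number of nodes this node still sees
mysql -e "SHOW GLOBAL STATUS LIKE 'wsrep_ready'"            # OFF = node rejects SQL queries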
As a result the customer decided to shut down the whole cluster. This was not necessary, but it is an acceptable approach:
12:27:55 NodeB: /usr/sbin/mysqld: Normal shutdown
12:27:58 NodeB: /usr/sbin/mysqld: Shutdown complete
12:28:14 NodeA: /usr/sbin/mysqld: Normal shutdown
12:28:19 NodeA: /usr/sbin/mysqld: Shutdown complete
12:31:45 NodeC: /usr/sbin/mysqld: Normal shutdown
12:31:49 NodeC: /usr/sbin/mysqld: Shutdown complete
Now we had a complete cluster outage. And then the next operational error happened: The customer chose the node with the worst (= oldest) data (NodeC) as the starting node for the new Cluster:
12:31:55 NodeC: Starting mysqld daemon
12:31:58 NodeC: PRIMARY, 1/1
12:31:58 NodeC: /usr/sbin/mysqld: ready for connections.
12:33:29 NodeB: mysqld_safe Starting mysqld daemon
12:33:33 NodeB: PRIMARY, 1/2
An alternative approach would have been to run the command SET GLOBAL wsrep_provider_options='pc.bootstrap=yes'; on the node (NodeA) with the most recent data...
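To find out which node holds the most recent data one can compare the last committed seqno of all nodes; a sketch, assuming the default datadir /var/lib/mysql:

# On a node that was shut down cleanly the seqno is recorded in grastate.dat:
grep seqno /var/lib/mysql/grastate.dat

# On a node that is still running (even in non-Primary state) it can be queried:
mysql -N -B -e "SHOW GLOBAL STATUS LIKE 'wsrep_last_committed'"

# The node with the highest seqno holds the most recent data. If it is still
# running but non-Primary, it can be made Primary again with:
mysql -e "SET GLOBAL wsrep_provider_options='pc.bootstrap=yes'"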
After connecting, NodeB (with the newer state) requested a state transfer from the older NodeC:
12:33:35 all  : Node 1 (NodeB) requested state transfer from '*any*'. Selected 0 (NodeC)(SYNCED) as donor.
12:33:35 NodeC: Shifting SYNCED -> DONOR/DESYNCED (TO: 660982149)
12:33:35 NodeC: IST request --> Should be SST, why IST?
12:33:35 NodeB: Shifting PRIMARY -> JOINER (TO: 660982149)
12:33:35 NodeB: Receiving IST: 7914 writesets, seqnos 660973981-660981895
12:33:36 NodeB: Slave SQL: Could not execute Write_rows event on table test.test; Duplicate entry '8994678' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log FIRST, end_log_pos 102, Error_code: 1062
And now Mister Murphy struck a second time: We hit yet another situation: The newer node requested an IST from the older node, which had in the meantime progressed to an even newer state. So the newer joiner node received data from the older donor node, which caused an AUTO_INCREMENT Primary Key violation. As a consequence the node crashed:
12:33:36 NodeB: receiving IST failed, node restart required: Failed to apply app buffer: äJR#, seqno: 660974010, status: WSREP_FATAL
12:33:36 NodeB: Closed send monitor.
12:33:37 NodeB: Closing slave action queue.
12:33:37 NodeB: Aborted (core dumped)
12:33:37 NodeC: PRIMARY 1/1
12:33:44 NodeC: Shifting DONOR/DESYNCED -> JOINED (TO: 660983204)
12:33:59 NodeB: mysqld_safe mysqld restarted
12:34:04 NodeB: Shifting CLOSED -> OPEN
12:34:07 NodeB: Aborted (core dumped)
... Loop
This situation now kept NodeB in a crash loop: it was restarted by the mysqld_safe process again and again, each time requesting an IST. This is another bug, which is fixed in a newer Galera MySQL release (5.5.33). And then the next operational error happened: Instead of stopping NodeB and forcing an SST by deleting its grastate.dat file (see the sketch further below), they started the third node as well...
12:37:12 NodeA: mysqld_safe Starting mysqld daemon
...
--> core dumped ... Loop
NodeB and NodeA now both had the same problem...
As a result, NodeA and NodeB were looping in a crash. But at least NodeC was up and running the whole time.
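To break such a crash loop the joiner has to be forced into a full SST instead of an IST. A sketch, assuming the default datadir /var/lib/mysql and an init-script based installation:

# Stop the crash-looping node (otherwise mysqld_safe keeps restarting it):
service mysql stop

# Remove the Galera state file so the node forgets its local state and
# requests a full SST from the donor on the next start:
rm /var/lib/mysql/grastate.dat

# Start the node again and let the SST run through:
service mysql start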
Learnings
- Most important: Have an ntpd service running on all Cluster nodes so that the clocks of the different nodes do not drift apart while you are investigating errors. This makes problem solving much easier...
- In case of split-brain or quorum loss, choose the node with the most recent data as the initial node of the new Cluster.
- If you have a Cluster in split-brain, you do not have to restart it. You can bring a node out of split-brain with the pc.bootstrap=yes provider option once you have found out which node is the most recent one.
- Analyse the error log files carefully to understand what went wrong. Forcing an SST only takes a few seconds.
- Upgrade your software regularly to avoid hitting old, already-fixed bugs. The rule "Do not touch a running system!" does not apply here, because we are already touching the running system! So a regular upgrade from time to time can be very helpful!
- Be familiar with the operational issues of your Cluster software. A Cluster does not only mean high availability. It also means you have to train your staff to handle it.
- It is always valuable to have support for your business critical systems.