Impacts of max_allowed_packet size problems on your MySQL database

Shinguz - Sun, 2015-01-18 11:18
Taxonomy upgrade extras: max_allowed_packet, connection, Backup, Restore, dump

We recently ran into some max_allowed_packet problems during backups with the FromDual Backup/Recovery Manager, so I investigated the symptoms of such problems a bit further.

Read more about: max_allowed_packet.

A general rule to avoid max_allowed_packet problems is: all clients and the server should use the same value for max_allowed_packet!
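A minimal sketch of how this could look in the option files (the 16M value is just an example and has to be adapted to your largest rows and dump needs):

# /etc/my.cnf (sketch, values are examples only)
[mysqld]
max_allowed_packet = 16M

[mysql]
max_allowed_packet = 16M

[mysqldump]
max_allowed_packet = 16M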

I prepared some data for the test which looked as follows:

mysql> SELECT id, LEFT(data, 30), LENGTH(data), ts FROM test;
+----+--------------------------------+--------------+------+
| id | left(data, 30)                 | length(data) | ts   |
+----+--------------------------------+--------------+------+
|  1 | Anhang                         |            6 | NULL |
|  2 | Anhang                         |            6 | NULL |
|  3 | Anhangblablablablablablablabla |      2400006 | NULL |
|  4 | Anhang                         |            6 | NULL |
+----+--------------------------------+--------------+------+

max_allowed_packet was then set to a value that is too small:

mysql> SHOW GLOBAL VARIABLES WHERE variable_name = 'max_allowed_packet';
+--------------------+---------+
| Variable_name      | Value   |
+--------------------+---------+
| max_allowed_packet | 1048576 |
+--------------------+---------+

The first test was to retrieve the row that is too big:

mysql> SELECT * FROM test WHERE id = 3;
ERROR 2020 (HY000): Got packet bigger than 'max_allowed_packet' bytes

mysql> SELECT CURRENT_USER();
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id:    6
Current database: test

We got an error message AND we were disconnected from the server. This is indicated by the message MySQL server has gone away, which is basically misleading: our client was disconnected, the server did not die or anything similar in this case.

A further symptom is that we get an entry in the MySQL error log about this incident:

[Warning] Aborted connection 3 to db: 'test' user: 'root' host: 'localhost' (Got an error writing communication packets)

So carefully watching for such error messages in your MySQL error log, for example with the script check_error_log_mysql.pl from our Nagios/Icinga plugins, would be a good idea...
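If you do not use our monitoring plugins, a quick manual check of the error log could look like this (a sketch assuming the error log is located at /var/log/mysql/error.log; adapt the path to your set-up):

shell> grep -c 'Aborted connection' /var/log/mysql/error.log
shell> grep 'max_allowed_packet' /var/log/mysql/error.log | tail -5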

The mysqldump utility basically does the same as a SELECT command, so I tried it out and got the same error:

shell> mysqldump -u root test > /tmp/test_dump.sql
mysqldump: Error 2020: Got packet bigger than 'max_allowed_packet' bytes when dumping table `test` at row: 2

And again we get an error message in the error log! This is also a good indicator to see whether a backup made with mysqldump failed for this reason.

To get a proper dump we have to configure the mysqldump utility properly:

shell> mysqldump --max-allowed-packet=5000000 -u root test > /tmp/test_dump.sql

After the backup we tried to restore the data:

shell> mysql -u root test < /tmp/test_dump.sql
ERROR 2006 (HY000) at line 40: MySQL server has gone away

Again we got an error on the command line and in the MySQL error log:

[Warning] Aborted connection 11 to db: 'test' user: 'root' host: 'localhost' (Got a packet bigger than 'max_allowed_packet' bytes)

and furthermore the data are only partially loaded:

mysql> SELECT * FROM test;
+----+--------+------+
| id | data   | ts   |
+----+--------+------+
|  1 | Anhang | NULL |
|  2 | Anhang | NULL |
+----+--------+------+
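To make the restore work, the server-side max_allowed_packet has to be raised as well; the client alone cannot push a packet through that the server refuses. A minimal sketch (16M is again only an example, and the SET GLOBAL value is not persistent across restarts, so it should also go into the my.cnf as shown above):

mysql> SET GLOBAL max_allowed_packet = 16777216;
shell> mysql --max-allowed-packet=16777216 -u root test < /tmp/test_dump.sql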

Another symptom we can see here is that the MySQL status variable Aborted_clients is increased in all 3 situations:

mysql> SHOW GLOBAL STATUS WHERE variable_name = 'aborted_clients';
+-----------------+-------+
| Variable_name   | Value |
+-----------------+-------+
| Aborted_clients | 10    |
+-----------------+-------+

One positive aspect is that with MySQL 5.7.5 the first 2 symptoms do not appear any more...

Further information can be found here: Communication Errors and Aborted Connections.

Avoid temporary disk tables with MySQL

Shinguz - Fri, 2014-12-19 07:38
Taxonomy upgrade extras: temporary table, disk, select, query tuning

For processing SELECT queries MySQL sometimes needs the help of temporary tables. These temporary tables can be created either in memory or on disk.

The number of creations of such temporary tables can be found with the following command:

mysql> SHOW GLOBAL STATUS LIKE 'created_tmp%tables';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 4     |
| Created_tmp_tables      | 36    |
+-------------------------+-------+
There are 2 different reasons why MySQL creates a temporary disk table instead of a temporary memory table:
  • The result is bigger than the smaller one of the MySQL variables max_heap_table_size and tmp_table_size.
  • The result contains columns of type BLOB or TEXT.
In the following example we can see how the temporary disk table can be avoided without changing the column types:

mysql> CREATE TABLE test (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY
, data TEXT
, type TINYINT UNSIGNED
);

mysql> INSERT INTO test VALUES
  (NULL, 'State is green', 1), (NULL, 'State is green', 1)
, (NULL, 'State is red', 3), (NULL, 'State is red', 3)
, (NULL, 'State is red', 3), (NULL, 'State is orange', 2);

mysql> EXPLAIN SELECT data, COUNT(*) FROM test GROUP BY data;
+----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
| id | select_type | table | type | possible_keys | key  | key_len | ref  | rows | Extra                           |
+----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+
|  1 | SIMPLE      | test  | ALL  | NULL          | NULL | NULL    | NULL |    6 | Using temporary; Using filesort |
+----+-------------+-------+------+---------------+------+---------+------+------+---------------------------------+

mysql> SHOW SESSION STATUS LIKE 'created_tmp%tables';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 0     |
| Created_tmp_tables      | 3     |
+-------------------------+-------+

mysql> SELECT data, COUNT(*) FROM test GROUP BY data;
+-----------------+----------+
| data            | count(*) |
+-----------------+----------+
| State is green  |        2 |
| State is orange |        1 |
| State is red    |        3 |
+-----------------+----------+

mysql> SHOW SESSION STATUS LIKE 'created_tmp%tables';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 1     |
| Created_tmp_tables      | 4     |
+-------------------------+-------+

mysql> SELECT SUBSTR(data, 1, 32), COUNT(*) FROM test GROUP BY SUBSTR(data, 1, 32);

mysql> SHOW SESSION STATUS LIKE 'created_tmp%tables';
+-------------------------+-------+
| Variable_name           | Value |
+-------------------------+-------+
| Created_tmp_disk_tables | 1     |
| Created_tmp_tables      | 5     |
+-------------------------+-------+

This method can be used if changing the column type from TEXT to VARCHAR or using a RAM disk are not possible solutions.
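If the result does not contain BLOB or TEXT columns and only exceeds the size limit, the alternative is to raise both size variables, because the smaller of the two applies. A sketch, the values are examples only:

# my.cnf
[mysqld]
tmp_table_size      = 64M
max_heap_table_size = 64M

mysql> SET SESSION tmp_table_size      = 64*1024*1024;
mysql> SET SESSION max_heap_table_size = 64*1024*1024;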

Making HAProxy High Available for MySQL Galera Cluster

Shinguz - Sun, 2014-12-14 18:37
Taxonomy upgrade extras: HAProxy, load balancer, Galera Cluster, VIP, virtual IP, High Availability, ha

After properly installing and testing a Galera Cluster we see that the set-up is not finished yet: it needs something in front of the Galera Cluster that balances the load over all nodes.
So we install a load balancer in front of the Galera Cluster. Typically HAProxy is chosen for this purpose nowadays. But then we find that the whole set-up is still not highly available in case the load balancer fails or dies. So we need a second load balancer for high availability.
But how do we properly fail over when the HAProxy load balancer dies? For this purpose we put a Virtual IP (VIP) in front of the HAProxy load balancer pair. The Virtual IP is controlled and failed over with Keepalived.

Installation of HAProxy and Keepalived

First some preparations: For installing socat we need the repoforge repository:

shell> cd /tmp
shell> wget http://pkgs.repoforge.org/rpmforge-release/rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
shell> yum localinstall rpmforge-release-0.5.3-1.el6.rf.x86_64.rpm
shell> yum update
shell> yum install socat

Then we can start installing HAProxy and Keepalived:

shell> yum install haproxy keepalived
shell> chkconfig haproxy on
shell> chkconfig keepalived on

We can check the installed HAProxy and Keepalived versions as follows:

shell> haproxy -v
HA-Proxy version 1.5.2 2014/07/12

shell> keepalived --version
Keepalived v1.2.13 (10/15,2014)
Configuration of HAProxy

More details can be found in the HAProxy documentation.

shell> cp /etc/haproxy/haproxy.cfg /etc/haproxy/haproxy.cfg.bak
shell> cat << _EOF >/etc/haproxy/haproxy.cfg
#
# /etc/haproxy/haproxy.cfg
#

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    # to have these messages end up in /var/log/haproxy.log you will
    # need to:
    #
    # 1) configure syslog to accept network log events. This is done
    #    by adding the '-r' option to the SYSLOGD_OPTIONS in
    #    /etc/sysconfig/syslog
    #
    # 2) configure local2 events to go to the /var/log/haproxy.log
    #    file. A line like the following can be added to
    #    /etc/sysconfig/syslog
    #
    #    local2.*    /var/log/haproxy.log
    #
    log         127.0.0.1 local2

    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     1020   # See also: ulimit -n
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats.sock mode 600 level admin
    stats timeout 2m

#---------------------------------------------------------------------
# common defaults that all the 'frontend' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode    tcp
    log     global
    option  dontlognull
    option  redispatch
    retries 3
    timeout queue   45s
    timeout connect 5s
    timeout client  1m
    timeout server  1m
    timeout check   10s
    maxconn 1020

#---------------------------------------------------------------------
# HAProxy statistics backend
#---------------------------------------------------------------------
listen haproxy-monitoring *:80
    mode    http
    stats   enable
    stats   show-legends
    stats   refresh 5s
    stats   uri /
    stats   realm Haproxy\ Statistics
    stats   auth monitor:AdMiN123
    stats   admin if TRUE

frontend haproxy1               # change on 2nd HAProxy
    bind *:3306
    default_backend galera-cluster

backend galera-cluster
    balance roundrobin
    server nodeA 192.168.1.61:5201 maxconn 151 check
    server nodeB 192.168.1.61:5202 maxconn 151 check
    server nodeC 192.168.1.61:5203 maxconn 151 check
_EOF
Starting and testing HAProxy

HAProxy can be started as follows:

shell> service haproxy start

and then be checked either over the socket:

shell> socat /var/lib/haproxy/stats.sock readline
prompt
> show info
> show stat
> help

or over your favourite web browser, entering the username and password (monitor:AdMiN123) specified in the configuration file above.

To check the application over the load balancer we can run the following command:

shell> mysql --user=app --password=secret --host=192.168.1.38 --port=3306 --exec="SELECT @@wsrep_node_name;"
+-------------------+
| @@wsrep_node_name |
+-------------------+
| Node C            |
+-------------------+

shell> mysql --user=app --password=secret --host=192.168.1.38 --port=3306 --exec="SELECT @@wsrep_node_name;"
+-------------------+
| @@wsrep_node_name |
+-------------------+
| Node A            |
+-------------------+

shell> mysql --user=app --password=secret --host=192.168.1.38 --port=3306 --exec="SELECT @@wsrep_node_name;"
+-------------------+
| @@wsrep_node_name |
+-------------------+
| Node B            |
+-------------------+
Configuration of a Virtual IP (VIP) with Keepalived

Now we have 2 HAProxy load balancers. But what happens if one of them fails? We do not want to have to reconfigure our application to make it work again; the fail-over should happen automatically. For this we need a Virtual IP which fails over automatically.

More details can be found in the Keepalived documentation and the Keepalived user guide.

shell> cp /etc/keepalived/keepalived.conf /etc/keepalived/keepalived.conf.bak
shell> cat << _EOF >/etc/keepalived/keepalived.conf
#
# /etc/keepalived/keepalived.conf
#

global_defs {

  notification_email {
    remote-dba@fromdual.com
    root@localhost
  }

  # Change email from on lb2:
  notification_email_from lb1@haproxy1
  router_id HAPROXY
}

vrrp_script chk_haproxy {
  script "killall -0 haproxy"
  interval 2
  weight 2
}

vrrp_instance GALERA_VIP {
  interface eth1
  virtual_router_id 42
  # Higher priority on other node
  priority 101   # 102
  advert_int 1
  # notify "/usr/local/bin/VRRP-notification.sh"

  virtual_ipaddress {
    192.168.1.99/32 dev eth1
  }

  track_script {
    chk_haproxy
  }

  authentication {
    auth_type PASS
    auth_pass secret
  }
}
_EOF
Starting and testing Keepalived

To test Keepalived we can run the following command:

shell> keepalived -f /etc/keepalived/keepalived.conf --dont-fork --log-console --log-detail
^C

To finally start it, the following command will do:

shell> service keepalived start

To check the Virtual IP the following command will help:

shell> ip addr show eth1

And then we can check our application over the VIP:

shell> mysql --user=app --password=secret --host=192.168.1.99 --port=3306 --exec="SELECT @@wsrep_node_name;"
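To verify both the round-robin distribution and the fail-over behaviour over the VIP, a small loop like the following can be used (a sketch assuming the application user and the VIP from above; stop HAProxy or Keepalived on the active load balancer while the loop is running):

shell> while true ; do
  mysql --user=app --password=secret --host=192.168.1.99 --port=3306 \
        --skip-column-names --execute="SELECT @@wsrep_node_name;"
  sleep 1
done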

Failed MySQL DDL commands and Galera replication

Shinguz - Tue, 2014-12-09 15:45
Taxonomy upgrade extras: galera, replication, DDL, TOI, RSU

We have recently seen a case where the following command was executed on a Galera Cluster node:

SQL> GRANT SUPER ON userdb.* TO root@127.0.0.111;
ERROR 1221 (HY000): Incorrect usage of DB GRANT and GLOBAL PRIVILEGES

2014-12-09 14:53:55 7457 [Warning] Did not write failed 'GRANT SUPER ON userdb.* TO root@127.0.0.111' into binary log while granting/revoking privileges in databases.
2014-12-09 14:53:55 7457 [ERROR] Slave SQL: Error 'Incorrect usage of DB GRANT and GLOBAL PRIVILEGES' on query. Default database: ''. Query: 'GRANT SUPER ON userdb.* TO root@127.0.0.111', Error_code: 1221
2014-12-09 14:53:55 7457 [Warning] WSREP: RBR event 1 Query apply warning: 1, 17
2014-12-09 14:53:55 7457 [Warning] WSREP: Ignoring error for TO isolated action: source: c5e54ef5-7faa-11e4-97b0-5e5c695f08a5 version: 3 local: 0 state: APPLYING flags: 65 conn_id: 4 trx_id: -1 seqnos (l: 4, g: 17, s: 15, d: 15, ts: 113215863294782)

According to the error messages it looks like this command was executed in Total Order Isolation (TOI) mode and not as a Rolling Schema Upgrade (RSU).

The error log message was written, and a GRA_*.log file was created, only on the nodes which did NOT receive this wrong command directly.

Analysis of the GRA_*.log (failed transactions) files:

hexdump -C GRA_2_16.log
00000000  f3 fe 86 54 02 53 14 00  00 76 00 00 00 76 00 00  |...T.S...v...v..|
00000010  00 00 00 04 00 00 00 00  00 00 00 00 00 00 2a 00  |..............*.|
00000020  00 00 00 00 00 01 00 00  00 40 00 00 00 00 06 03  |.........@......|
00000030  73 74 64 04 21 00 21 00  08 00 0b 04 72 6f 6f 74  |std.!.!.....root|
00000040  09 6c 6f 63 61 6c 68 6f  73 74 00 67 72 61 6e 74  |.localhost.grant|
00000050  20 53 55 50 45 52 20 6f  6e 20 75 73 65 72 64 62  | SUPER on userdb|
00000060  2e 2a 20 74 6f 20 72 6f  6f 74 40 31 32 37 2e 30  |.* to root@127.0|
00000070  2e 30 2e 31 31 31                                 |.0.111          |

dd if=bin-log.000001 of=binlog.header bs=1 count=120
cat binlog.header GRA_2_17.log > GRA_2_17.binlog_events
mysqlbinlog GRA_2_17.binlog_events
...
# at 120
#141209 15:04:54 server id 5201  end_log_pos 118 CRC32 0x3432312e  Query  thread_id=45  exec_time=0  error_code=0
SET TIMESTAMP=1418133894/*!*/;
SET @@session.pseudo_thread_id=4/*!*/;
SET @@session.foreign_key_checks=1, @@session.sql_auto_is_null=0, @@session.unique_checks=1, @@session.autocommit=1/*!*/;
SET @@session.sql_mode=1073741824/*!*/;
SET @@session.auto_increment_increment=1, @@session.auto_increment_offset=1/*!*/;
/*!\C utf8 *//*!*/;
SET @@session.character_set_client=33,@@session.collation_connection=33,@@session.collation_server=8/*!*/;
SET @@session.lc_time_names=0/*!*/;
SET @@session.collation_database=DEFAULT/*!*/;
grant SUPER on userdb.* to root@127.0.0.111
/*!*/;
DELIMITER ;
# End of log file
ROLLBACK /* added by mysqlbinlog */;
/*!50003 SET COMPLETION_TYPE=@OLD_COMPLETION_TYPE*/;
/*!50530 SET @@SESSION.PSEUDO_SLAVE_MODE=0*/;

It further looks like this command was issued by connection ID number 4 (conn_id: 4).

How to recover a deleted tablespace?

Abdel-Mawla Gharieb - Fri, 2014-11-14 22:56

Sometimes, MySQL tablespace files might be deleted by mistake, e.g. the shared tablespace (ibdata1) or an individual tablespace (table_name.ibd).

In this post I will show you how to recover those files (on Linux) under only one condition: the MySQL service must still be running. If the MySQL service was stopped after deleting that file, this method will not work, so it is extremely important to act as quickly as possible to avoid data loss.

The following is a simple table creation (innodb_file_per_table is enabled) and the record count of that table:

SQL> SHOW CREATE TABLE t\G
*************************** 1. row ***************************
       Table: t
Create Table: CREATE TABLE `t` (
  `id` int(11) NOT NULL AUTO_INCREMENT,
  PRIMARY KEY (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=23 DEFAULT CHARSET=latin1
1 row in set (0.00 sec)

SQL> SELECT COUNT(*) FROM t;
+----------+
| COUNT(*) |
+----------+
|       22 |
+----------+
1 row in set (0.02 sec)

Now, let's delete the individual tablespace of that table:

shell> rm -rf /var/lib/mysql/test/t.ibd

At this time, we can still select and modify that table!!

SQL> INSERT INTO t VALUES (NULL);
Query OK, 1 row affected (0.00 sec)

SQL> SELECT COUNT(*) FROM t;
+----------+
| COUNT(*) |
+----------+
|       23 |
+----------+
1 row in set (0.00 sec)

To be more accurate, rm does not actually delete the file, rather it removes the directory entry pointing to the file's inode. The inode - and in consequence the file - will be removed only if this is the last reference, but as long as the MySQL server process has the file opened, there is another reference which is the open file handle (that's why the "mysqld" server process must still be running).

To list the opened files we can use the Linux command lsof (we filter the output to get only the deleted tablespace information):

shell> lsof | grep t.ibd
COMMAND  PID    USER   FD    TYPE  DEVICE  SIZE/OFF  NODE     NAME
mysqld   11401  mysql  25uW  REG   7,0     98304     1010691  /var/lib/mysql/test/t.ibd (deleted)

The file has a tag of (deleted) which means that the directory entry pointing to the file's inode was deleted but there is another reference(s) to that inode, otherwise it won't be listed by the above command.
Now the question is, how can we get the on-disk path to that opened file if the directory entry was removed?

We can use the /proc interface to the running processes and their file handles, following this formula:

  • File path = /proc/PID/fd/FD-number

According to the above formula and using the output of the "lsof" command, the file we just deleted is located here:

shell> ll /proc/11401/fd/25
lrwx------ 1 mysql mysql 64 Oct 28 16:14 /proc/11401/fd/25 -> /var/lib/mysql/test/t.ibd (deleted)

To make sure that this is the on-disk path for the file we deleted, check the reference: it still points to the original path.
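If you do not want to read through the full lsof output, the deleted but still open files of the running mysqld can also be listed directly via /proc (a sketch assuming a single mysqld process on the host):

shell> ls -l /proc/$(pidof mysqld)/fd | grep '(deleted)'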

How can we recover that file?

  • First, we should make sure that no other queries are modifying that table:

    SQL> LOCK TABLE t READ;
    Query OK, 0 rows affected (0.00 sec)

  • Then we copy the data blocks (/proc/11401/fd/25) to a new file (we use the original file path) and change the ownership to the MySQL system user (mysql):

    shell> cp /proc/11401/fd/25 /var/lib/mysql/test/t.ibd
    shell> chown mysql:mysql /var/lib/mysql/test/t.ibd

  • Restart the MySQL service. If we do not restart MySQL directly after recovering the tablespace, all changes on that table will still be redirected to the open file handle instead of the just recovered copy and thus will be lost after the restart:

    shell> service mysql restart
    ..... SUCCESS!
    ..... SUCCESS!

  • The tablespace is now recovered and we can modify the table normally:

    SQL> SELECT COUNT(*) FROM t;
    +----------+
    | COUNT(*) |
    +----------+
    |       23 |
    +----------+
    1 row in set (0.00 sec)

    SQL> INSERT INTO t VALUES (NULL);
    Query OK, 1 row affected (0.00 sec)

    SQL> INSERT INTO t VALUES (NULL);
    Query OK, 1 row affected (0.00 sec)

    SQL> SELECT COUNT(*) FROM t;
    +----------+
    | COUNT(*) |
    +----------+
    |       25 |
    +----------+
    1 row in set (0.00 sec)

Notes:

  • We can use the same procedures above to recover the shared tablespace (ibdata1) but we should lock all tables before the recovery process by using the SQL command "FLUSH TABLES WITH READ LOCK;"
  • If the MySQL server has to deal with more files (.frm, .ibd, .MYI, .MYD, ...) than its open_files_limit allows, it might happen that it closes this handle. In that case the file will also cease to exist, even though the process is still running, and that is why it is extremely important to act as quickly as possible.
  • The same procedure can be used to recover MyISAM files (.MYI and .MYD) but note that the file handle will be released if "FLUSH TABLES;" SQL command was executed.
  • The same procedure can be used as well to recover binary logs, general logs, ... etc but note that the file handle will be released if "FLUSH LOGS;" SQL command was executed.
  • This method can be used to recover any deleted file on Linux, not only MySQL files, as long as the inode still has other references (which can be checked with lsof).

Real life case:

One of our customers had enabled the general query log on his production system. He noticed that the file was growing continuously, and in order not to consume all the available free disk space on his server he removed the file with "rm /path/to/general_query.log". However, the available free space was still being consumed while he could no longer see the general log file. The customer thought that the file was deleted, but in fact the file handle was still open in the MySQL server process.
To solve the problem we only issued the SQL command "FLUSH LOGS;" - which the customer should have issued right after removing the file - then the file handle was closed, the inode was deleted and the consumed disk space was freed back to the system.

Things you should consider before using GTID

Abdel-Mawla Gharieb - Fri, 2014-11-14 16:50

Global Transaction ID (GTID) is one of the major features introduced in MySQL 5.6 and it provides a lot of benefits (I talked about the GTID concept, implementation and possible troubleshooting at Percona Live London 2014; you can download the slides from our presentations repository or from my session page at Percona Live).
On the other hand, there are some important things you should consider before deploying GTID in production. I'm going to list them in this blog post.

Migration to GTID replication

It is required to shut down the MySQL service on all servers in the replication setup in order to perform the migration from classic replication (based on binary log file and position) to transaction-based (GTID) replication, which means that the migration process requires downtime.

Online migration to GTID replication is not yet available.
Facebook and Booking.com provided some MySQL patches for this, but they are not yet included in Oracle's binaries.
So, if you can't afford downtime during the migration process, you might not be able to make the change.
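For completeness, a rough sketch of the offline migration itself (MySQL 5.6; host name, user and password are placeholders): make sure master and slaves are in sync, stop all mysqld processes, add the GTID settings to every my.cnf, restart all servers and point the slaves to the master with MASTER_AUTO_POSITION:

# my.cnf on ALL servers (sketch)
[mysqld]
gtid_mode                = ON
enforce_gtid_consistency = 1
log_bin                  = binlog
log_slave_updates        = 1

-- on each slave after the restart (placeholders, adapt to your set-up)
SQL> CHANGE MASTER TO MASTER_HOST='master_host', MASTER_USER='repl',
     MASTER_PASSWORD='repl_password', MASTER_AUTO_POSITION=1;
SQL> START SLAVE;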

Non-transactionally safe statements will raise errors now

It is required to enable the system variable enforce_gtid_consistency on all servers in the GTID replication setup, which prevents executing non-transactionally safe statements (check the GTID restrictions) like:

  • CREATE TABLE .. SELECT.
  • CREATE TEMPORARY TABLE (inside a transaction).
  • Statements that update non-transactional tables inside a transaction.

So, you will have to fix your application first if it contains any of the above statements before using GTID replication.
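For illustration, this is roughly what such a rejected statement looks like on a server with enforce_gtid_consistency enabled (the table names are made up and the exact error text may vary between versions):

SQL> CREATE TABLE new_t AS SELECT * FROM old_t;
ERROR 1786 (HY000): CREATE TABLE ... SELECT is forbidden when @@GLOBAL.ENFORCE_GTID_CONSISTENCY = 1.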

MySQL Performance in GTID

It is required to enable the variables log_bin and log_slave_updates on - at least - the slave servers, which affects the performance of those slaves negatively.

So, the performance should be tested very well before the production migration to GTID replication.

mysql_upgrade script

The mysql_upgrade script's problem when executed on a server with gtid_mode=ON has been fixed since MySQL 5.6.7, but it is still not recommended to execute mysql_upgrade when gtid_mode=ON as it might change system tables that use MyISAM, which is non-transactional.

Errant transactions

Transactions which are executed directly on a slave (i.e. not executed on the master and thus not coming through replication) are called "errant transactions". Those transactions cause trouble if that slave is later promoted to be the new master in a fail-over process.
Once the other slaves connect to the new master, they send their value of gtid_executed; the new master in turn checks those values, compares them with its own gtid_executed set and sends back all missing transactions (the errant transactions) to the slaves, which leads to one of the following problems:
  • If those transactions still exist in the new master's binary log files, they will be replicated to the other slaves, which was not intended when they were executed only on the slave (the new master).
  • If those transactions no longer exist in the new master's binary log files, replication will break on all slaves.
How to avoid such a problem?
  • Choose some slaves to be candidates for promotion in case of a fail-over. Stand-alone transactions (which do not come from the master) should NOT be executed on them.
  • Use one of the MySQL utilities (mysqlfailover or mysqlrpladmin) to find out whether there are any errant transactions on the slave before the promotion, or check manually as sketched below.
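A manual check can be sketched with the GTID functions by subtracting the master's gtid_executed set from the candidate slave's set (the sets below are placeholders, not real values):

-- on the current master
SQL> SELECT @@GLOBAL.gtid_executed;

-- on the promotion candidate (slave)
SQL> SELECT @@GLOBAL.gtid_executed;

-- anything returned here exists only on the slave, i.e. is an errant transaction
SQL> SELECT GTID_SUBTRACT('<slave gtid_executed set>', '<master gtid_executed set>');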
Filtering on the slave

In some cases we might need to filter replication on the slave(s), i.e. not all table or schema changes are propagated to the slave, using the system variables replicate_ignore_db or replicate_ignore_table. When the slave receives transactions from the master which modify those ignored tables or schemata, it simply skips executing them. When the slave is restarted, it sends its gtid_executed to the master, the master finds the missing transactions (those for the ignored tables or schemata) and sends them back to the slave.
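Such a filter is typically configured in the slave's my.cnf, for example like this (the schema and table names are made up for illustration):

# my.cnf on the slave (sketch)
[mysqld]
replicate_ignore_db    = logging
replicate_ignore_table = app.audit_trail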

Again, that leads to one of the following two conditions:

  • If those transactions still exist in the master's binary log files, there is no problem, as the slave will simply skip executing them again.
  • If those transactions no longer exist in the master's binary log files, replication will break on the slave.

The above problem is supposed to be fixed in MySQL 5.6.18 (Bug #70048). The fix injects empty transactions on the slave for those transactions modifying ignored tables or schemata instead of just skipping them, so that after a slave restart they will not be sent back again by the master.

I listed the above problem although it should be fixed by now because I want to point out that keeping MySQL updated to the most recent release is a good practice to avoid such problems and to get the latest bug fixes.

Conclusion:

The following are the main things which should be considered before using GTID:

  • Migration from classic replication to transaction-based (GTID) replication requires downtime.
  • Non-transactionally safe statements will not be executed in GTID replication.
  • MySQL performance is a little bit slower with GTID replication, especially on the slaves.
  • The mysql_upgrade script might cause troubles on a server having GTID_MODE=ON and it should be tested first.
  • Errant transactions might break the replication in the fail-over process, thus planning which slaves are candidates for promotion will help to avoid such cases.
  • Some GTID bugs are fixed now (like slave filtration issue), thus MySQL server should be updated to the latest version once there is a new release.
  • New bugs are expected to be discovered, so the application should be tested very well with GTID before performing the migration on production.

Galera Cluster and XA Transactions

Abdel-Mawla Gharieb - Thu, 2014-10-23 23:47

A few weeks ago, we received an interesting Galera Cluster support case from one of our customers: their application was not working well and they faced a lot of troubles in their Galera Cluster setup.

After some investigation, we found a lot of INSERT queries in state "query end" lasting for a long time without completing. There were also some other queries which had been sleeping for a long time, showing "XA COMMIT" as their info:

SQL> SHOW PROCESSLIST;
27  user  host:33214  foodmart  Query  14440  sleeping  XA COMMIT 0x31302e31312e31332e34372e746d30303336383030303031,0x31302e31312e31332e34372e746d333638,0x

SQL> SHOW ENGINE INNODB STATUS;
TRANSACTIONS
============
---TRANSACTION 2DE71D, ACTIVE 14459 sec
9 lock struct(s), heap size 1248, 1 row lock(s), undo log entries 115
MySQL thread id 27, OS thread handle 0x7fc21a42c700, query id 96187 host host-ip foodmart sleeping
XA COMMIT 0x31302e31312e31332e34372e746d30303336383030303031,0x31302e31312e31332e34372e746d333638,0x41544f4d ONE PHASE

XA means eXtended Architecture, and "XA COMMIT" is one of the commands for distributed transactions (XA transactions), which are clearly NOT supported in Galera Cluster - this is one of its limitations, because of the possible rollback on commit.
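For readers who have never seen them: an XA transaction issued by such an application looks roughly like the following sketch (the xid 'trx1' and the table are made up for illustration):

SQL> XA START 'trx1';
SQL> INSERT INTO foodmart.some_table VALUES (1);
SQL> XA END 'trx1';
SQL> XA PREPARE 'trx1';
SQL> XA COMMIT 'trx1';
-- or, skipping the prepare phase after XA END:
SQL> XA COMMIT 'trx1' ONE PHASE;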

The following command can be used to check if XA Transactions are being used by your application or not:

SQL> SHOW GLOBAL STATUS LIKE 'Com_xa%';
+-------------------+---------+
| Variable_name     | Value   |
+-------------------+---------+
| Com_xa_commit     | 2828094 |
| Com_xa_end        | 2828221 |
| Com_xa_prepare    | 0       |
| Com_xa_recover    | 2205697 |
| Com_xa_rollback   | 42      |
| Com_xa_start      | 2828305 |
+-------------------+---------+
6 rows in set (0.00 sec)

There are only two possible solutions for this problem:

  • Get rid of all XA transactions in the application to make Galera Cluster work.
  • Use another HA solution (Active/passive, Master/Slave, ... etc) but not Galera Cluster.
Conclusion
  • XA transactions are not supported in Galera Cluster, and that is already stated clearly in the Galera Cluster limitations.
  • Before moving to Galera Cluster, it is advisable to go through all its limitations and check whether your application can cope with them or not.

MySQL Environment MyEnv 1.1.2 has been released

FromDual.en - Thu, 2014-10-23 22:26
Taxonomy upgrade extras: myenv, operation, MySQL Operations, multi instance, consolidation, Backup, catalog

FromDual has the pleasure to announce the release of the new version 1.1.2 of its popular MySQL, Galera, MariaDB and Percona Server multi-instance environment MyEnv.

You can download MyEnv from here.

In the inconceivable case that you find a bug in MyEnv please report it to our Bugtracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 1.1.1 to 1.1.2

# cd ${HOME}/product
# tar xf /download/myenv-1.1.2.tar.gz
# rm -f myenv
# ln -s myenv-1.1.2 myenv

If you are using plug-ins for showMyEnvStatus create all the links in the new directory structure:

cd ${HOME}/product/myenv
ln -s ../../utl/oem_agent.php plg/showMyEnvStatus/
Changes in MyEnv 1.1.2

MyEnv
  • The MySQL Backup Manager was removed from MyEnv and put into its own package which can be downloaded from FromDual Backup/Recovery Manager.
  • OEM Agent is checked now based on process list and not oemagentctl status any more. Makes it much faster.
  • Alias cdc for NDB Cluster directory removed. NDB Cluster is not supported any more.
  • Deprecate alias v and replace by V.
  • Error unlink(/var/lock/subsys/mysql): Permission denied is caught more nicely now.
  • Unknown version in up guessing is improved.
  • MD5 checksum made portable for Darwin OS (Mac OSX).
MyEnv Installer
  • innodb_flush_log_at_trx_commit default in template changed.
  • Version guessing improved for installer.
  • Better download support for Percona Server and MariaDB added.
  • mkdir bug fixed.
  • Version check for RedHat made better.
  • Check for lsb_release and SELinux/AppArmor check added for faster finding problems during MySQL installation.
  • Template my.cnf from the website is used for creating an instance.
  • Option hideschema is automatically added to the myenv.conf file now.
  • Check and warning implemented if non mysql user is used.
  • Error is caught when wrong user is used.
  • mysql_install_db output made more verbose in case of errors for debugging.
  • Default option changes from Add to Save after instance was changed.
  • Missing user's HOME directory is caught now.
  • Question done? can be answered with y now.
  • Comment about waiting during instance installation added.
MyEnv Utilities
  • Table offline/online scripts integrated into MyEnv utilities.
  • alter_engine.pl does hide views from Primary Key check now.
MySQL Backup Manager

Get rid of wrongly deleted InnoDB tables

Shinguz - Wed, 2014-10-22 22:10
Taxonomy upgrade extras: Backup, Restore, Recovery, innodb, table

Precaution: Before you try this out on your production system do a BACKUP first! FromDual Backup Manager can help you with this.

Situation

A MySQL user has deleted his InnoDB table files, for example like this:

shell> rm -f $datadir/test/test.*

Analysis

We do some analysis first:

mysql> DROP TABLE test;
ERROR 1051 (42S02): Unknown table 'test'

mysql> CREATE TABLE test (id INT) ENGINE = InnoDB;
ERROR 1050 (42S01): Table '`test`.`test`' already exists

The MySQL error log shows us the following information:

141022 17:09:04  InnoDB: Operating system error number 2 in a file operation.
InnoDB: The error means the system cannot find the path specified.
InnoDB: If you are installing InnoDB, remember that you must create
InnoDB: directories yourself, InnoDB does not create them.
141022 17:09:04  InnoDB: Error: trying to open a table, but could not
InnoDB: open the tablespace file './test/test.ibd'!
InnoDB: Have you moved InnoDB .ibd files around without using the
InnoDB: commands DISCARD TABLESPACE and IMPORT TABLESPACE?
InnoDB: It is also possible that this is a temporary table #sql...,
InnoDB: and MySQL removed the .ibd file for this.
InnoDB: Please refer to
InnoDB: http://dev.mysql.com/doc/refman/5.5/en/innodb-troubleshooting-datadict.html
InnoDB: for how to resolve the issue.
Fix

The user claims that he does NOT need the table and/or the data any more, but he wants to get rid of the error messages and/or wants to create a new table with the same name.

mysql> CREATE SCHEMA recovery;
mysql> use recovery
mysql> CREATE TABLE test (id INT) ENGINE = InnoDB;
mysql> \! cp $datadir/recovery/test.frm $datadir/test/
mysql> DROP SCHEMA recovery;
mysql> use test
mysql> DROP TABLE test;

Proof

To prove it works we create a new table and fill in some records:

mysql> CREATE TABLE test (id int UNSIGNED NOT NULL AUTO_INCREMENT PRIMARY KEY, data VARCHAR(64), ts TIMESTAMP) ENGINE = InnoDB;
mysql> INSERT INTO test VALUES (NULL, 'Test data', NULL);

GTID Replication talk at Percona Live London 2014

Abdel-Mawla Gharieb - Tue, 2014-10-21 14:58

In a few weeks, I'll be speaking at Percona Live London 2014 (November 3-4) about "Transaction-based REPLICATION (GTID) - IMPLEMENTATION AND TROUBLESHOOTING". I'll talk about how to implement GTID replication in different scenarios and how to troubleshoot most of the common issues in GTID replication.

Anyone interested in learning more about GTID replication or planning to go with GTID replication in production should attend this talk.

Face-to-face meetings are very welcome, please let me know (abdel-mawla.gharieb@fromdual.com) if you are interested so we can schedule a meeting.

Looking forward to seeing you there!!

How to install multiple MySQL instances on a single host using MyEnv?

Abdel-Mawla Gharieb - Thu, 2014-10-16 16:36

We have been asked several times by MySQL users about how to install multiple MySQL instances on a single host.
Typically, this is required when testing different MySQL versions or servers (MySQL Server, Percona Server and MariaDB Server) while only limited hardware resources are available.
Sometimes, it is even required to install multiple MySQL instances on a single production server.

In this article, I'll go through the steps needed to install multiple MySQL instances on a single host (using the tar ball binaries) and show how our popular tool MyEnv makes such a process very easy.

Prepare MySQL environment

[root@centos-temp ~]# groupadd mysql
[root@centos-temp ~]# useradd -g mysql mysql
[root@centos-temp ~]# su - mysql
[mysql@centos-temp:~]$ mkdir ~/product
[mysql@centos-temp:~]$ mkdir ~/data
[mysql@centos-temp ~]$ cd /downloads
[mysql@centos-temp ~]$ wget http://dev.mysql.com/get/Downloads/MySQL-5.6/mysql-5.6.21-linux-glibc2.5-x86_64.tar.gz
[mysql@centos-temp ~]$ cd ~/product
[mysql@centos-temp ~]$ tar xf /downloads/mysql-5.6.21-linux-glibc2.5-x86_64.tar.gz
[mysql@centos-temp ~]$ ln -s mysql-5.6.21-linux-glibc2.5-x86_64 mysql-5.6.21

Install MyEnv

MyEnv can be downloaded from here and the installation steps are listed here.

Install the first instance (named master)
  • Prepare the first instance:

    [mysql@centos-temp ~]$ mkdir ~/data/master
    [mysql@centos-temp ~]$ vi ~/data/master/my.cnf
    [mysqld]
    port    = 3306
    basedir = /home/mysql/product/mysql-5.6.21
    datadir = /home/mysql/data/master
    socket  = /home/mysql/data/master/master.sock
    user    = mysql

  • Launch the MyEnv installer:

    [mysql@centos-temp ~]$ ~/product/myenv/bin/installMyEnv.sh
    PHP is installed on /usr/bin/php
    Starting MyEnv installer: /home/mysql/product/myenv-1.1.2/bin/installMyEnv.php
    Configuration file /etc/myenv/myenv.conf does NOT exist.
    Copy from template or abort (T, a):

  • Since this is the first instance, the MyEnv config file does not exist yet, so we take the template (t):

    Copy from template or abort (T, a): t
    Copy /home/mysql/product/myenv-1.1.2/etc/myenv.conf.template to /etc/myenv/myenv.conf

  • Then MyEnv will detect that no instances exist yet, so we choose the first option (a) to add a new instance:

    No instance exists yet. An instance is the same as a mysqld process.
    What do you want to do next?
      o Add a new instance,
      o change an existing instance,
      o delete an existing instance,
      o save configuration and exit or
      o quit without saving (A/c/d/s/q)? a

  • MyEnv will ask for the new instance name (master):

    You have to enter a new instance name. An instance name can consist of 0-9, a-z, A-Z, - and _.
    Examples: test, prod, dev, [mysqld1] mysqld-3306
    Enter a new instance name: master
  • MyEnv will ask for the MySQL basedir and datadir:

    Changing instance master:
    Set basedir. The basedir is the directory where your MySQL binaries are located.
    Example: /home/mysql/product/myenv
             /home/mysql/product/myenv-1.1.2
             /home/mysql/product/mysql-5.6.21
             /home/mysql/product/mysql-5.6.21-linux-glibc2.5-x86_64
             /usr/local/mysql
             /opt/mysql
             [/usr]
    basedir = /home/mysql/product/mysql-5.6.21
    /home/mysql/product/mysql-5.6.21 seems to exist and has already some files in it.
    Omitting installation of binary tar ball.
    Set datadir. The datadir is the directory where your MySQL data are located.
    Example: /usr/local/mysql/data, /opt/mysql/data or /var/lib/mysql or
             /home/mysql/product/mysql-5.6.21/data or [/home/mysql/data/master] or /home/mysql/data/master
    datadir = /home/mysql/data/master
    WARNING: directory /home/mysql/data/master already exist.

  • MyEnv will detect that no mysql database has been created yet in the datadir and will ask whether it should install one for us (using the normal mysql_install_db script):

    Shall I install a mysql database under /home/mysql/data/master (Y/n)? y
    Installing MySQL instance. Please be patient, this can take a while...
    /home/mysql/product/mysql-5.6.21/scripts/mysql_install_db --datadir=/home/mysql/data/master --basedir=/home/mysql/product/mysql-5.6.21

  • For the new instance, we should specify the port, the socket and the location of my.cnf (all of these must be specific per instance):

    Set port. Example: 3307 or [3306]
    port = 3306
    Set socket. Example: /tmp/mysql-3306.sock /tmp/mysql-master.sock [/tmp/mysql.sock] /var/run/mysqld/mysql.sock
    socket = /home/mysql/data/master/master.sock
    Choose location of my.cnf:
    Example: /etc/my.cnf /etc/mysql/my.cnf /home/mysql/data/master/my-3306.cnf [/home/mysql/data/master/my.cnf]
    my.cnf = /home/mysql/data/master/my.cnf

  • MyEnv will list the just added instance and ask what the next step should be; we save and exit (s):

    The following instances are available: master
    An instance is the same as a mysqld process. What do you want to do next?
      o Add a new instance,
      o change an existing instance,
      o delete an existing instance,
      o save configuration and exit or
      o quit without saving (a/c/d/S/q)? s
    Writing the configuration file...
    Rename /etc/myenv/myenv.conf to /etc/myenv/myenv.conf.2014-10-15-05-50-12
    Write /etc/myenv/myenv.conf
  • MyEnv will propose to add the following lines to the .bash_profile file; we confirm (y):

    Do you want to add the following code to your ~/.bash_profile file?

    . /etc/myenv/MYENV_BASE
    MYENV_PWD=`pwd`
    cd $MYENV_BASE/bin
    . myenv.profile
    cd $MYENV_BASE; $MYENV_BASE/bin/showMyEnvStatus.php; cd - > /dev/null
    cd $MYENV_PWD

    (Y/n)? y
    Writing /etc/myenv/MYENV_BASE

  • We should copy the MyEnv startup script manually to the appropriate location as shown here (the provided commands must be executed as root or another privileged user afterwards):

    Please copy yourself manually the MyEnv start/stop script to the following location:

    shell> sudo cp /home/mysql/product/myenv-1.1.2/bin/myenv.server /etc/init.d/myenv

    done (Y)? y

    and link it to your O/S runlevels:
    RedHat/CentOS: shell> sudo chkconfig --add myenv; chkconfig myenv on
    Ubuntu/Debian: shell> sudo update-rc.d myenv defaults
    SLES/OpenSuSE: shell> sudo chkconfig --add myenv

    done (Y)? y

    Now source your profile as follows: . ~/.bash_profile

  • Now, we should log out and log in again so that the .bash_profile code gets executed (we can copy the startup script in the meantime):

    [mysql@centos-temp ~]$ exit
    [root@centos-temp ~]# cp /home/mysql/product/myenv-1.1.2/bin/myenv.server /etc/init.d/myenv
    [root@centos-temp ~]# chkconfig --add myenv
    [root@centos-temp ~]# su - mysql

    Up      :
    Down    : master (5.6.21)

    master  ( 3306) : test

    mysql@centos-temp:~ [master, 3306]>

  • Now, we can start the just installed instance and check whether it started correctly:

    mysql@centos-temp:~ [master, 3306]> start
    . SUCCESS!
    mysql@centos-temp:~ [master, 3306]> up

    Up      : master (5.6.21)
    Down    :

    master  ( 3306) : test

    mysql@centos-temp:~/data/master [master, 3306]>

Cool! We have got the first instance installed. Let's install the second one ...

Install the second instance (named slave1)
  • Prepare the second instance:

    mysql@centos-temp:~ [master, 3306]> mkdir ~/data/slave1
    mysql@centos-temp:~ [master, 3306]> vi ~/data/slave1/my.cnf
    [mysqld]
    port    = 3307
    basedir = /home/mysql/product/mysql-5.6.21
    datadir = /home/mysql/data/slave1
    socket  = /home/mysql/data/slave1/slave1.sock
    user    = mysql

  • Launch the MyEnv installer and add the new instance:

    mysql@centos-temp:~ [master, 3306]> product/myenv/bin/installMyEnv.sh
    PHP is installed on /usr/bin/php
    Starting MyEnv installer: /home/mysql/product/myenv-1.1.2/bin/installMyEnv.php
    Configuration file /etc/myenv/myenv.conf already exists.
    Use this, overwrite with template or abort (U,t,a): u
    The following instances are available: master
    An instance is the same as a mysqld process. What do you want to do next?
      o Add a new instance,
      o change an existing instance,
      o delete an existing instance,
      o save configuration and exit or
      o quit without saving (A/c/d/s/q)? a
    You have to enter a new instance name. An instance name can consist of 0-9, a-z, A-Z, - and _.
    Examples: test, prod, dev, [mysqld1] mysqld-3306
    Enter a new instance name: slave1
    Changing instance slave1:
    Set basedir. The basedir is the directory where your MySQL binaries are located.
    Example: /home/mysql/product/myenv
             /home/mysql/product/myenv-1.1.2
             /home/mysql/product/mysql-5.6.21
             /home/mysql/product/mysql-5.6.21-linux-glibc2.5-x86_64
             /usr/local/mysql
             /opt/mysql
             [/usr]
    basedir = /home/mysql/product/mysql-5.6.21
    /home/mysql/product/mysql-5.6.21 seems to exist and has already some files in it.
    Omitting installation of binary tar ball.
    Set datadir. The datadir is the directory where your MySQL data are located.
    Example: /usr/local/mysql/data, /opt/mysql/data or /var/lib/mysql or
             /home/mysql/product/mysql-5.6.21/data or [/home/mysql/data/slave1] or /home/mysql/data/slave1
    datadir = /home/mysql/data/slave1
    WARNING: directory /home/mysql/data/slave1 already exist.
    Shall I install a mysql database under /home/mysql/data/slave1 (Y/n)? y
    Installing MySQL instance. Please be patient, this can take a while...
    /home/mysql/product/mysql-5.6.21/scripts/mysql_install_db --datadir=/home/mysql/data/slave1 --basedir=/home/mysql/product/mysql-5.6.21
    Set port. Example: 3307 or [3306]
    port = 3307
    Set socket. Example: /tmp/mysql-3307.sock /tmp/mysql-slave1.sock [/tmp/mysql.sock] /var/run/mysqld/mysql.sock
    socket = /home/mysql/data/slave1/slave1.sock
    Choose location of my.cnf:
    Example: /etc/my.cnf /etc/mysql/my.cnf /home/mysql/data/slave1/my-3307.cnf [/home/mysql/data/slave1/my.cnf]
    my.cnf = /home/mysql/data/slave1/my.cnf
    The following instances are available: master slave1
    An instance is the same as a mysqld process. What do you want to do next?
      o Add a new instance,
      o change an existing instance,
      o delete an existing instance,
      o save configuration and exit or
      o quit without saving (a/c/d/S/q)? s
    Writing the configuration file...
    Rename /etc/myenv/myenv.conf to /etc/myenv/myenv.conf.2014-10-15-06-44-04
    Write /etc/myenv/myenv.conf
    Now source your profile as follows: . ~/.bash_profile

  • Then we log out and log in again to load the new configuration:

    mysql@centos-temp:~ [master, 3306]> exit
    logout
    [root@centos-temp product]# su - mysql

    Up      : master (5.6.21)
    Down    : slave1 (5.6.21)

    master  ( 3306) : test
    slave1  ( 3307) : test

    mysql@centos-temp:~ [master, 3306]>

MyEnv reports that we have two instances (master and slave1): master is up and running while slave1 is stopped. Let's start it then ..

  • First, all commands we execute are sent to the master instance, we should change to the slave1 instance:

    mysql@centos-temp:~ [master, 3306]> slave1
    mysql@centos-temp:~ [slave1, 3307]> start
    . SUCCESS!
    mysql@centos-temp:~ [slave1, 3307]>

  • Both instances are now up and running ...

    mysql@centos-temp:~ [slave1, 3307]> up

    Up      : master (5.6.21) slave1 (5.6.21)
    Down    :

    master  ( 3306) : test
    slave1  ( 3307) : test

    mysql@centos-temp:~ [slave1, 3307]>
Instance navigation and MySQL connection

After installing the two instances, we need to know how to navigate between them and how to open MySQL connections.

  • Open a MySQL connection to the master instance:

    mysql@centos-temp:~ [master, 3306]> mysql -u root
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 7
    Server version: 5.6.21 MySQL Community Server (GPL)

    Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
    Other names may be trademarks of their respective owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    root@localhost master [(none)] SQL>

  • While the instance name is already shown in the prompt, we double check that with a MySQL command (the port used) and exit after that:

    root@localhost master [(none)] SQL> SHOW GLOBAL VARIABLES LIKE 'port';
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | port          | 3306  |
    +---------------+-------+
    1 row in set (0.00 sec)

    root@localhost master [(none)] SQL> exit
    Bye
    mysql@centos-temp:~ [master, 3306]>

  • Open a MySQL connection to the slave1 instance and check the port used:

    mysql@centos-temp:~ [master, 3306]> slave1
    mysql@centos-temp:~ [slave1, 3307]> mysql -u root
    Welcome to the MySQL monitor.  Commands end with ; or \g.
    Your MySQL connection id is 5
    Server version: 5.6.21 MySQL Community Server (GPL)

    Copyright (c) 2000, 2014, Oracle and/or its affiliates. All rights reserved.

    Oracle is a registered trademark of Oracle Corporation and/or its affiliates.
    Other names may be trademarks of their respective owners.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    root@localhost slave1 [(none)] SQL> SHOW GLOBAL VARIABLES LIKE 'port';
    +---------------+-------+
    | Variable_name | Value |
    +---------------+-------+
    | port          | 3307  |
    +---------------+-------+
    1 row in set (0.00 sec)

    root@localhost slave1 [(none)] SQL>

  • Navigate back to the master instance:

    mysql@centos-temp:~ [slave1, 3307]> master
    mysql@centos-temp:~ [master, 3306]>

More information about MyEnv commands can be checked here.

Notes:
  • Using the same procedure as above we can add several instances using either the same binaries or different ones.
  • We used a minimal configuration in the my.cnf file; you are free to make your own changes.

Migration between MySQL/Percona Server and MariaDB

Shinguz - Wed, 2014-10-08 21:40
Taxonomy upgrade extras: migration, sidegrade, upgrade, mysql, mariadb, percona server, conversion, compatibility

This week we did some migrations from MariaDB 10.0 to Percona Server 5.6 at the IT department of a big German bank.

We were perfectly aware that since version 10.0 the MariaDB code base has started to diverge slightly from the MySQL and Percona Server code bases, which are still pretty close to each other.

The reason we wanted to do this migration was the Percona Server option enforce_storage_engine.

We stopped the MariaDB 10.0 server, replaced the binaries with the Percona Server 5.6 binaries and started the Percona Server 5.6 again. After successfully starting the instance we found some error messages in the MySQL error log. Running the mysql_upgrade command fixed some of the problems, but not all of them. The remaining problems were:

  • The MariaDB binary logs provoked some error messages in the Percona Server error log:

    [ERROR] Error in Log_event::read_log_event(): 'Found invalid event in binary log', data_len: 25, event_type: -93
    [Warning] Error reading GTIDs from binary log: -1
    [ERROR] Incorrect definition of table mysql.db: expected column 'User' at position 2 to have type char(16), found type char(80).
    [ERROR] Incorrect definition of table mysql.event: expected column 'definer' at position 3 to have type char(77), found type char(141).
    [ERROR] Incorrect definition of table mysql.event: expected column 'sql_mode' at position 14 to have type set...

    A purge of the binary logs solved this issue.
  • The tables mysql.event, mysql.innodb_table_stats and mysql.innodb_index_stats were not fixed by mysql_upgrade (a bug to fix for Percona and MySQL/Oracle?). We had to replace those tables manually by copying them from another, already working Percona Server 5.6.

Later, in the FromDual technology labs, we investigated further and tried the other direction, from Percona Server 5.6 to MariaDB 10.0. In this direction we found some other errors in the MySQL error log which were also not completely resolved by the mysql_upgrade utility:

  • The mysql.innodb_table_stats and mysql.innodb_index_stats tables had to be recreated manually (here a bug to fix for the MariaDB people?).
  • All error messages from tables affected by the message InnoDB: in InnoDB data dictionary has unknown flags 40/50/52. could be silenced by a run of the OPTIMIZE TABLE command, which can become quite expensive for very big tables (see the example below).
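A sketch of how such a cleanup run could look, either per table or for everything at once with mysqlcheck (the table name is just an example):

SQL> OPTIMIZE TABLE test.big_table;

shell> mysqlcheck --optimize --all-databases --user=root --password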

Sidegrades from MySQL 5.6 to Percona Server 5.6 and back did not provoke any error messages in the MySQL log files. Sidegrades from MariaDB 10.0 to MySQL 5.6 and vice versa behaved exactly the same as MariaDB 10.0 to Percona Server 5.6 and back.

from/to              MySQL 5.6           MariaDB 10.0         Percona Server 5.6
MySQL 5.6            -                   2 tables, OPTIMIZE   OK
MariaDB 10.0         binlog, 3 tables    -                    binlog, 3 tables
Percona Server 5.6   OK                  2 tables, OPTIMIZE   -

During our tests we got rid of the error messages. Whether they caused any technical harm to the tables or the data we cannot say so far. Further testing and experience from real life are needed. Any feedback is welcome!

Observations

It looks like MariaDB 10.0 understands MySQL/Percona Server replication but not the other way around. So replication from MariaDB 10.0 to MySQL 5.6 probably does not work (different implementations of GTID)?

Recommendation

To make sure a sidegrade between these 3 MySQL branches/forks is seamlessly possible, the best method seems to be to dump and restore (NOT xtrabackup!) the data. This can be an issue with huge databases (hundreds of GB).
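A minimal sketch of such a dump/restore sidegrade (user and paths are examples; plan for long run times on big databases):

# on the old binaries
shell> mysqldump --user=root --all-databases --routines --triggers --events > /tmp/full_dump.sql

# stop the old server, install and start the target branch/fork, then load the dump
shell> mysql --user=root < /tmp/full_dump.sql
shell> mysql_upgrade --user=root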

Further aid

Also have a look at our MySQL compatibility matrix and our MySQL upgrade check-list.

If you need any help converting MySQL, MariaDB or Percona Server into one another, do not hesitate to contact the FromDual consulting team. We will be pleased to assist you as a neutral and vendor-independent consulting company.

MySQL Environment MyEnv 1.1.1 has been released

FromDual.en - Mon, 2014-09-08 08:29
Taxonomy upgrade extras: myenv, operation, MySQL Operations, multi instance, consolidation, Backup, catalog

FromDual has the pleasure to announce the release of the new version 1.1.1 of its popular MySQL, MariaDB and Percona Server multi-instance environment MyEnv.

The majority of improvements happened in the MySQL Backup Manager (mysql_bman) where we added a backup catalog.

You can download MyEnv from here.

In the inconceivable case that you find a bug in MyEnv please report it to our Bugtracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 1.1.0 to 1.1.1

# cd ${HOME}/product
# tar xf /download/myenv-1.1.1.tar.gz
# rm -f myenv
# ln -s myenv-1.1.1 myenv

If you are using plug-ins for showMyEnvStatus create all the links in the new directory structure:

cd ${HOME}/product/myenv
ln -s ../../utl/oem_agent.php plg/showMyEnvStatus/

Upgrade from 1.0.5 to 1.1.1

# cd ${HOME}/product
# tar xf /download/myenv-1.1.1.tar.gz
# rm -f myenv
# ln -s myenv-1.1.1 myenv

If you are using plug-ins for showMyEnvStatus create all the links in the new directory structure:

cd ${HOME}/product/myenv
ln -s ../../utl/oem_agent.php plg/showMyEnvStatus/

Replace the following files:

  • cp ${HOME}/product/myenv/etc/variables.conf.template /etc/myenv/variables.conf
  • cp ${HOME}/product/myenv/bin/myenv.server /etc/init.d/myenv
Changes in MyEnv 1.1.1

MyEnv
  • Function to check upgrade requirements to v1.1 implemented.
MyEnv Installer
  • none
MyEnv Utilities
  • none
MySQL Backup Manager
  • End tag added to mysql_bman code in certain places where it was missing.
  • innodbackup bug fixed, pigz bug fixed.
  • Warning message more verbose explaining solution in case of missing xtrabackup binaries.
  • pigz as alternative compression mechanism implemented.
  • Log mysqldump error to stderr.
  • my_exec replaces exec/system calls.
  • Return code exits fixed.
  • Cleanup date is formatted more nicely and cleanup bug is fixed.
  • copy_preserve introduced to preserve timestamp and archive code cleaned up.
  • Retention time can be 0 or any value out of h, d, w, m.
  • Binary log begin and end ts is written to log file.
MySQL Backup Manager Catalog
  • Catalog DB handle was sometimes called without a catalog (ugly bug introduced in v1.1.0!).

MySQL Environment MyEnv 1.1.0 has been released

FromDual.en - Sat, 2014-08-23 19:46
Taxonomy upgrade extras: myenv, operation, MySQL Operations, multi instance, consolidation, Backup, catalog

FromDual has the pleasure to announce the release of the new version 1.1.0 of its popular MySQL, MariaDB and Percona Server multi-instance environment MyEnv.

The majority of improvements happened in the MySQL Backup Manager (mysql_bman) where we added a backup catalog.

You can download MyEnv from here.

In the inconceivable case that you find a bug in MyEnv please report it to our Bugtracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

Upgrade from 1.0.5 to 1.1.0

# cd ${HOME}/product
# tar xf /download/myenv-1.1.0.tar.gz
# rm -f myenv
# ln -s myenv-1.1.0 myenv

If you are using plug-ins for showMyEnvStatus create all the links in the new directory structure:

cd ${HOME}/product/myenv
ln -s ../../utl/oem_agent.php plg/showMyEnvStatus/

Replace the following files:

  • cp ${HOME}/product/myenv/etc/variables.conf.template /etc/myenv/variables.conf
  • cp ${HOME}/product/myenv/bin/myenv.server /etc/init.d/myenv
Changes in MyEnv 1.1.0

MyEnv
  • Error with !includedir fixed (Bug #143).
  • tput is hidden in interactive mode now. No more nasty warning messages.
  • Start/stop script did not pass error code correctly.
  • Timezone complaint in myenv_start_stop.php fixed with a temporary fix.
  • Init script should not throw errors on SuSE any more.
  • Init script is improved. MyEnv should only start if the network is available (runlevel).
  • mysqladmin now complains if my.cnf contains wrong parameters instead of the schema being shown as down.
  • MYSQL_HOME was changed from basedir to datadir. Consider this for the upgrade: this is a significant change and its full impact is not clear!
  • Under some circumstances the up overview did not show all schemata; fixed.
MyEnv Installer
  • No changes.
MyEnv Utilities
  • NUMA maps script from Jeremy Cole added.
  • block_galera_node.sh fixed for more than 1 back-end.
MySQL Backup Manager
  • For many backup types cleanup, archive and compress functionality added.
  • Per-schema backup now writes the log position to the log file and the catalog.
  • Binlog policy added.
  • Backup structure is not created any more for cleanup and catalog operations.
  • Binlog backup should work now with and without default location.
  • Passing strange characters in the password is handled more robustly now.
  • Schema names are handled more robustly by using back-ticks.
  • memory-table-check bug fixed.
  • Command line is displayed in output (without exposing password).
  • Cleanup of archivedir is possible now.
  • Backup for configuration file implemented.
  • Physical backup based on xtrabackup/mysqlbackup implemented.
  • All exits were replaced by returns. Proper error handling should now be possible.
  • Backup logging implemented.
  • Password from command line is not exposed any more to log file. Back-port from v1.0.5.
  • Per schema privilege backup implemented.
MySQL Backup Manager Catalog
  • Catalog creation and catalog upgrade integrated.

Galera Cluster VS PXC VS MariaDB Galera Cluster - Benchmarking

Abdel-Mawla Gharieb - Thu, 2014-08-07 15:36

It is not clear to many MySQL users that Percona XtraDB Cluster (PXC) and MariaDB Galera Cluster depend on the same Galera library as Galera Cluster for MySQL, which is provided by the Codership team:

  • Galera Cluster: MySQL Server (by Oracle) + Galera library.
  • Percona XtraDB Cluster: Percona Server + Galera library.
  • MariaDB Galera Cluster: MariaDB Server + Galera library.

But the question is: are there any performance differences between the three of them?

Let's find out by running a simple benchmark to test MySQL write performance on Galera Cluster, PXC and MariaDB Galera Cluster installations.

System Information:

HW configurations (AWS servers):

Node servers HW configurations:
  • CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz (# of cores 8, # of threads 16, HT enabled).
  • Memory: 16GB RAM.
  • Storage: HDD 120GB/ 5400RPM.
Load balancer Server HW configurations:
  • CPU: Intel(R) Xeon(R) CPU E5-2651 v2 @ 1.80GHz (# of cores 4, # of threads 8, HT enabled).
  • Memory: 16GB RAM.
  • Storage: HDD 10GB/ 5400RPM.
Load generator Server HW configurations:
  • CPU: Intel(R) Xeon(R) CPU E5-2680 v2 @ 2.80GHz (# of cores 16, # of threads 32, HT enabled).
  • Memory: 32GB RAM.
  • Storage: HDD 10GB/ 5400RPM.
Software configurations:
  • OS : Red Hat Enterprise Linux Server release 6.5 (Santiago)
  • Sysbench : 0.5.3
  • GLB : 1.0.0
  • Galera Cluster : 5.5.34 and 5.6.16
  • Percona XtraDB Cluster : 5.5.37 and 5.6.19
  • MariaDB Galera Cluster : 5.5.38 and 10.0.12
  • Galera Library : 3.5
Test Information:
  • The testing environment consists of 5 AWS servers: three servers for a three-node cluster (each node is installed on its own server), one server for the load balancer and the final server for the load generator, on which sysbench is installed to send requests to the load balancer.
  • Sysbench command:
    sysbench --num-threads=64 --max-requests=1000 --db-driver=mysql --test=/usr/share/doc/sysbench/tests/db/oltp.lua --mysql-table-engine=InnoDB --mysql-user=dev --mysql-password='test' --mysql-host=load_balancer_ip run
  • Table structure which was used by the sysbench tests:
    mysql> SHOW CREATE TABLE sbtest.sbtest\G
    CREATE TABLE `sbtest` (
      `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
      `k` int(10) unsigned NOT NULL DEFAULT '0',
      `c` char(120) NOT NULL DEFAULT '',
      `pad` char(60) NOT NULL DEFAULT '',
      PRIMARY KEY (`id`),
      KEY `k` (`k`)
    ) ENGINE=InnoDB AUTO_INCREMENT=8574 DEFAULT CHARSET=latin1
  • The my.cnf used is something like:
    [mysqld]
    key_buffer_size                = 16M
    max_allowed_packet             = 16M
    thread_stack                   = 192K
    thread_cache_size              = 8
    innodb_buffer_pool_size        = 8G
    innodb_flush_log_at_trx_commit = 0
    expire_logs_days               = 10
    max_binlog_size                = 100M
    server-id                      = 1
    log-bin                        = mysql-bin
    binlog_format                  = ROW
    auto_increment_increment       = 3
    auto_increment_offset          = 1
    log_slave_updates
    default_storage_engine         = InnoDB
    # Path to Galera library
    wsrep_provider                 = /usr/lib64/galera/libgalera_smm.so
    # Cluster connection URL contains the IPs of node#1, node#2 and node#3
    wsrep_cluster_address          = gcomm://nodeB-IP,nodeC-IP
    innodb_autoinc_lock_mode       = 2
    # Node #1 address
    wsrep_node_address             = nodeA-IP
    # Cluster name
    wsrep_cluster_name             = test_cluster
    # SST method
    wsrep_sst_method               = rsync
    # Authentication for SST method
    wsrep_sst_auth                 = "sst:password"

Notes:

  • The number of threads used in this test is 64 as it generated the highest throughput on all cluster installations.
  • Each throughput value for each test case is the average of ten (10) executions; a possible way to script such repeated runs is sketched below.
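
A minimal shell sketch of how such repeated runs could be scripted and averaged (this is our own illustration, not the script used for the benchmark; the sysbench call mirrors the command above and the awk parsing assumes the usual sysbench 0.5 summary line):

shell> for i in $(seq 1 10); do
         # run the OLTP test once per iteration against the load balancer
         sysbench --num-threads=64 --max-requests=1000 --db-driver=mysql \
                  --test=/usr/share/doc/sysbench/tests/db/oltp.lua \
                  --mysql-table-engine=InnoDB --mysql-user=dev --mysql-password='test' \
                  --mysql-host=load_balancer_ip run
       done | grep 'transactions:' \
            | awk '{gsub(/\(/, "", $3); sum += $3} END {print "average tps:", sum / NR}'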
Testing Results:

The raw results in Transactions / Sec might be useful:


innodb_flush_log_at_trx_commit:          0        1        2        0        1        2
sync_binlog:                             0        0        0        1        1        1
----------------------------------------------------------------------------------------
Galera Cluster 5.5.34                    525.119  125.615  526.761  242.201  96.829   224.476
PXC 5.5.37                               534.022  131.748  528.858  249.622  96.759   210.904
MariaDB Galera Cluster 5.5.38            534.249  341.384  524.039  262.516  148.815  217.142
Galera Cluster 5.6.16                    519.575  157.001  511.817  220.313  111.995  209.139
PXC 5.6.15                               532.19   162.783  526.06   229.807  114.8    201.596
MariaDB Galera Cluster 10.0.12           520.736  174.972  521.024  220.97   113.056  214.311

Conclusion

According to the above results:

  • innodb_flush_log_at_trx_commit = 1 significantly slows down Galera.
  • sync_binlog = 1 also cuts the throughput roughly in half.
  • All three products deliver more or less equal throughput (a quick way to inspect and adjust these two settings is sketched below).
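
For reference, both settings can be inspected and, for test purposes, changed at runtime; a minimal sketch (not part of the benchmark procedure itself):

mysql> SHOW GLOBAL VARIABLES WHERE variable_name IN ('innodb_flush_log_at_trx_commit', 'sync_binlog');
mysql> SET GLOBAL innodb_flush_log_at_trx_commit = 2;
mysql> SET GLOBAL sync_binlog = 0;

Both variables are dynamic, so they can be changed for a quick test without restarting the nodes; for permanent settings they should of course also be adjusted in my.cnf.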

FromDual invites MySQL community to company meeting 2014 in Barcelona

FromDual.en - Wed, 2014-08-06 14:29

FromDual holds its annual company meeting this year in Barcelona, Spain.

We are pleased to invite everybody interested in MySQL technologies (MySQL, Galera Cluster, FromDual Tools, Percona Cluster, MariaDB, etc.) to participate on Thursday evening September 11 at the HCC MONTBLANC, Via Laietana 61, to meet, exchange ideas and discuss MySQL related topics.

The event starts at 18:00, we will meet in the hotel lobby. The planned schedule is:

  • How to Implement GTID Replication in MySQL 5.6 (25') and 5' Questions and Answers.
  • MySQL backup/restore for anonymized exports (25') and 5' Questions and Answers.
  • Break 15 min
  • Quick presentation (15') of YOUR project or company.
  • Quick presentation (15') of YOUR project or company.
  • Break 15 min
  • Quick presentation (15') of YOUR project or company.
  • Quick presentation (15') of YOUR project or company.

Please feel free to send us your suggestion for a presentation. Any technical or non-technical MySQL-related topic is welcome, for example how you use MySQL in your company, special problems you have faced and solved (or not solved yet), research work you have done on MySQL products, business cases you solve with MySQL products, evaluations or experience you have gained, etc. For the proposal please send us a mail.

Please also send us a short note if you plan to participate, or join us via MeetUp.

This gives us the possibility to arrange and organize the infrastructure with the hotel.

The event is free of charge for all participants.

We are pleased if you can make it to the event,
Your FromDual Team

FromDual: Tools for MySQL and Galera - Backup - Monitoring - Operations

FromDual.en - Sun, 2014-07-27 10:46
Taxonomy upgrade extras: galera, Backup, manager, monitoring, Operations

FromDual tools provide valuable additional functionality which facilitate and optimize daily operations of your MySQL databases. Since our last newsletter a lot of things have changed in the FromDual tools.


Numerous improvements and customer suggestions were added to the MySQL Environment (MyEnv). The most important changes were introduced to the MySQL Backup Manager (mysql_bman).


With the MySQL Ops Center we meet the wishes of our numerous customers who desire a graphical user interface for operating complex MySQL environments.
These users are often not very experienced in handling MySQL but nevertheless want to operate more complex MySQL installations such as master/slave or master/master replications.


In the MySQL Performance Monitor (mpm) numerous little bugs have been fixed which were reported to us by our customers.


Note: The usage and support of our tools is included in our MySQL Service Contracts Business Hour (5x9) and All around the Clock (7x24).
If you would like to know more about our service prices, we will be pleased to send you an offer.



MyEnv v1.0.5

The MySQL Environment (MyEnv) is gaining more and more popularity in the MySQL eco-system. MyEnv is optimized for MySQL and mimics TVD BasEnv, which is well established among bigger Oracle database users.


With MyEnv you can easily consolidate several MySQL instances (mysqld) on one single machine. Thanks to MyEnv this complicated configuration is a piece of cake. Furthermore, MyEnv is more and more commonly used by customers for testing their applications against different MySQL versions (5.5, 5.6 and 5.7) or different MySQL branches (Galera Cluster, MariaDB, Percona Server).


The most important improvements in MyEnv v1.0.5 are:

  • Old PHP functions were replaced to achieve better compatibility with PHP 5.4 and 5.5.
  • MyEnv overview (up) of installed MySQL instances was polished and numerous smaller bugs were fixed.
  • Extensions for active/passive fail-over clusters and Oracle Enterprise Monitor Agents for MySQL were integrated.
  • The user guidance of the MyEnv installer was made more user friendly.
  • Problems of MyEnv with SuSE Linux Enterprise Server (SLES) were removed.
  • The tools for MySQL Partitions were extended and improved.

You can find all improvements in detail in the Release Notes.


Here you can download MyEnv.



MySQL Backup Manager v1.0.5

The MySQL Backup Manager (mysql_bman) is currently attracting the most interest from our customers. It significantly eases MySQL backups for all different backup types.


At this point we would like to quote a MySQL user:

"MySQL Backup Manager is a very nice tool! Congratulations for FromDual! I made my own shell script for catalog and maintained backups by xtrabackup, but mysql_bman is the best! Xtrabackup + mysql_bman!!!"


In mysql_bman version v1.0.5 the following improvements were integrated:

  • Security improvements (password is not exposed any more).
  • Every instance can be tagged with a name and uniquely identified.
  • The MySQL Backup Manager now considers the MySQL configuration file ~/.my.cnf.
  • The compression of backups can be disabled to support de-duplicating drives.
  • The option --no-memory-table-check was introduced to allow inconsistent backups with MEMORY tables.

Download (included in MyEnv).



MySQL Ops Center v0.2

Our MySQL customers have requested a simple user interface to operate and administer many MySQL databases. This is the reason why FromDual has launched the MySQL Ops Center.
The Ops Center can centrally operate and control complex MySQL configurations like master/slave or master/master set-ups, and can monitor, start, stop and reconfigure the replication.
With the MySQL Ops Center you can also easily start and stop virtual IPs and move them to another host.


The most important features added in the first public preview release of the MySQL Ops Center v0.2 are:

  • Starting and stopping of MySQL databases on remote machines by a central management console.
  • Starting and stopping of the MySQL replication.
  • Starting and stopping of a virtual IP (VIP).
  • Fail-over of VIP from active master to slave (master/slave replication) or passive master (master/master replication).
  • Configuration of the master/slave replication.

The MySQL Ops Center can be downloaded here. Further information can be found at MySQL Ops Center.



MySQL Performance Monitor v0.9.3

The MySQL Performance Monitor (mpm) was optimized in many places. Furthermore, known bugs were fixed and the mpm agent was made ready for the newest Zabbix version v2.2:

  • Bugs related to sha/sha1 encryption were fixed.
  • A stopped database is better detected now.
  • DRBD information was improved.
  • New behaviour of zabbix_senders in Zabbix v2.2 is handled correctly now.
  • New measuring points were added (Galera Cluster) and wrong ones were fixed.

You can download the latest version of the MySQL Performance Monitor from here. For more information about the manual installation, follow the steps in the installation guide. To check all changes and improvements of the MySQL Performance Monitor, have a look at the Release Notes.

FromDual Performance Monitor for MySQL 0.9.3 has been released

FromDual.en - Wed, 2014-07-09 12:25
Taxonomy upgrade extras: mysql, performance, monitor, monitoring, mpm, maas, performance monitor

FromDual has the pleasure to announce the release of the new version 0.9.3 of its popular Database Performance Monitor for MySQL, Galera Cluster, MariaDB and Percona Server mpm.

This release contains various minor bug fixes and improvements.

You can download mpm from here.

In the inconceivable case that you find a bug in mpm please report it to our Bugtracker.

Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.

New installation of mpm v0.9.3

Please follow our mpm installation guide.

Upgrade from 0.x to 0.9.3

# cd /download
# tar xf mysql_performance_monitor-0.9.3.tar.gz
# cd /opt
# tar xf /download/mysql_performance_monitor_agent-0.9.3.tar.gz
# rm -f mpm
# ln -s mysql_performance_monitor_agent-0.9.3 mpm

No other upgrade requirements are known.

Changes in mpm v0.9.3

mpm agent
  • Typos fixed.
  • Kill trap reports to the log file as well now.
mpm agent and MaaS
  • Example for timeshift feature added to configuration template.
MySQL module
  • DB down not detected (bug #27/#138).
InnoDB module
  • InnoDB Status module: SHA fix (bug #139).
Master module
  • Missing values in cache file fixed.
mpm templates for Zabbix
  • No changes.

Replication Troubleshooting - Classic VS GTID

Abdel-Mawla Gharieb - Fri, 2014-07-04 15:05

In previous posts, I talked about how to set up MySQL replication: Classic Replication (based on binary log information) and Transaction-based Replication (based on GTID). In this article I'll summarize how to troubleshoot MySQL replication for the most common issues we might face, with a simple comparison of how they can be solved in the two replication methods (Classic vs GTID).

There are two main operations we might need to do in a replication setup:

  • Skip or ignore a statement that causes the replication to stop.
  • Re-initialize a slave when the replication is broken and cannot be started any more.
Skip or Ignore statement

Basically, the slave should always be synchronized with its master and hold the same copy of the data, but for several reasons there might be inconsistencies between the two (unsafe statements in SBR, a slave that is not read_only and was modified outside of the replication queries, etc.) which cause errors and stop the replication, e.g. if the master inserted a record which was already present on the slave (duplicate entry) or updated/deleted a row which did not exist on the slave.

To solve this issue, we have to either reverse what was done on the slave (e.g. delete the inserted rows), if that change is known and was made by mistake, or we can skip executing those statements on the slave and continue the replication again (I'll focus on skipping a statement in this post, as it requires different handling in Classic and GTID replication).

Sample error messages (from the SHOW SLAVE STATUS output):

Last_SQL_Error: Could not execute Write_rows event on table test.t1; Duplicate entry '4' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.000304, end_log_pos 285

Last_SQL_Error: Could not execute Update_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 492

Last_SQL_Error: Could not execute Delete_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 688

How to solve that issue?
CLASSIC REPLICATION

Solving this problem is a straightforward process in a classic replication setup; all we need to do is issue the following SQL commands on the slave:

SQL> SET GLOBAL SQL_SLAVE_SKIP_COUNTER = 1;
SQL> START SLAVE;
GTID REPLICATION

Solving this problem is not as straightforward in GTID replication as it is in classic replication, and the variable SQL_SLAVE_SKIP_COUNTER won't be of any use here.

To get this problem solved in a GTID replication we will need to inject an empty transaction as follows:

  • Check which transaction is causing the problem:
    SQL> SHOW SLAVE STATUS\G
    .
    .
    Retrieved_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:1-7
    Executed_Gtid_Set: 4f6d62ed-df65-11e3-b395-60672090eb04:1,
                       b9b4712a-df64-11e3-b391-60672090eb04:1-6
    Auto_Position: 1

    Retrieved_Gtid_Set means the retrieved GTIDs from the master

    Executed_Gtid_Set means the executed GTIDs on the slave.

    According to the above output, the slave retrieved GTIDs 1-7 (b9b4712a-df64-11e3-b391-60672090eb04:1-7) but executed only 1-6 (b9b4712a-df64-11e3-b391-60672090eb04:1-6), so the problem is in transaction number 7 (the GTID_SUBTRACT() sketch after this list shows how to compute this difference directly).

  • Inject an empty transaction:
    SQL> SET GTID_NEXT='b9b4712a-df64-11e3-b391-60672090eb04:7';
    SQL> BEGIN; COMMIT;
    SQL> SET GTID_NEXT='AUTOMATIC';
    SQL> START SLAVE;

    BE CAUTIOUS: The first part of Executed_Gtid_Set (4f6d62ed-df65-11e3-b395-60672090eb04:1) contains the locally executed GTIDs (not received from the master), while the second part (b9b4712a-df64-11e3-b391-60672090eb04:1-6) contains the executed GTIDs which were retrieved from the master (check the master's UUID either by looking at the UUID in "Retrieved_Gtid_Set", which is the master's UUID, or by issuing SHOW GLOBAL VARIABLES LIKE 'server_uuid'; on the master server). So we have to make sure that we use the master's UUID when injecting the empty transaction, otherwise the problem will remain and the slave won't start.
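
As a side note, instead of comparing the two GTID sets by eye, the missing transactions can also be computed with the GTID_SUBTRACT() function (available since MySQL 5.6); the following is just a convenience sketch using the GTID sets from the example above:

SQL> SELECT GTID_SUBTRACT('b9b4712a-df64-11e3-b391-60672090eb04:1-7',
                          'b9b4712a-df64-11e3-b391-60672090eb04:1-6') AS not_yet_executed;

This returns b9b4712a-df64-11e3-b391-60672090eb04:7, i.e. exactly the transaction we have to skip with the empty transaction.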

Note:

After starting the slave successfully in either classic or GTID replication we might need to use a combination of Percona tools pt-table-checksum and pt-table-sync to fix the inconsistency problem.
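
A minimal sketch of how such a check and resync could look with the Percona Toolkit (host names, the checksum user and the percona.checksums table are placeholders; always review the planned changes with --print before running --execute):

# On the master: calculate per-chunk checksums and replicate them to the slave(s).
shell> pt-table-checksum --replicate=percona.checksums h=master_host,u=checksum_user,p=secret

# Against the slave: show, then apply, the changes needed to bring it back in sync.
shell> pt-table-sync --replicate percona.checksums --sync-to-master h=slave_host,u=checksum_user,p=secret --print
shell> pt-table-sync --replicate percona.checksums --sync-to-master h=slave_host,u=checksum_user,p=secret --execute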

Re-initialize/ re-build a slave

For many reasons, we might end up having to re-build a slave to get the replication working again, e.g. if we stopped a slave for a while and the master purged a binary log file that this slave still needs, or if there are so many duplicate entry errors that pt-table-checksum and pt-table-sync cannot be used. In that case we have to re-initialize the slave from scratch by taking a fresh backup from the master server and restoring it on the slave. Let's check how we can do that with both replication methods.

How to solve that issue?
CLASSIC REPLICATION
Sample error message:
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'

Fix steps:

  • Backup the master server with the following command:
    shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --master-data=2 --flush-logs --triggers --routines --events --hex-blob > /path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Restore the backup file on the slave:
    shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Get the binary log information from when the backup was taken:
    shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql | grep "CHANGE MASTER TO"
    CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;
  • Issue the "CHANGE MASTER TO" command using the new information:
    SQL> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;
  • Start the slave:
    SQL> START SLAVE;

NOTE:

The Xtrabackup tool could be used instead of mysqldump, especially if the database size is big. Check out this link for more information.

GTID REPLICATION
Sample error message:
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'

Fix steps:

  • Backup the master server with the following command:
    shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --flush-logs --triggers --routines --events --hex-blob > /path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Check the GTID value from when the backup was taken:
    shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql | grep PURGED
    SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';
  • Reset the GTID_EXECUTED and GTID_PURGED values on the slave:
    SQL> RESET MASTER;
  • Restore the backup file on the slave:
    shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
  • Make sure that the values of GTID_EXECUTED and GTID_PURGED are the correct ones:
    SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_executed';
    +---------------+------------------------------------------+
    | Variable_name | Value                                    |
    +---------------+------------------------------------------+
    | gtid_executed | b9b4712a-df64-11e3-b391-60672090eb04:1-8 |
    +---------------+------------------------------------------+
    1 row in set (0.00 sec)

    SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_purged';
    +---------------+------------------------------------------+
    | Variable_name | Value                                    |
    +---------------+------------------------------------------+
    | gtid_purged   | b9b4712a-df64-11e3-b391-60672090eb04:1-8 |
    +---------------+------------------------------------------+
    1 row in set (0.01 sec)
  • Start the slave:
    SQL> START SLAVE;

NOTES:

  • If we don't reset the GTID_EXECUTED and GTID_PURGED values on the slave before restoring the backup file, the following error will appear:
    shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
    ERROR 1840 (HY000): @@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty.

    The above error indicates that the statement at the beginning of the backup file - which is "SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';" - failed because GTID_PURGED cannot be set unless GTID_EXECUTED is empty. Since GTID_EXECUTED is a read only variable, the only way to empty its value is to issue "RESET MASTER" on the slave server before restoring the backup file.

  • The Xtrabackup tool could be used as well instead of mysqldump to solve this problem, without the need to reset the GTID_EXECUTED and GTID_PURGED values. Check out this link for more information (a rough command sketch follows below).
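
A rough sketch of how such an xtrabackup-based re-initialization could look (the innobackupex options, paths and the replication user below are illustrative assumptions, not the exact procedure from the linked article):

# On the master: take and prepare a physical backup.
shell> innobackupex --user=root --password=secret /backup/
shell> innobackupex --apply-log /backup/<TIMESTAMP-DIR>/

# The backup records the GTID set it corresponds to:
shell> cat /backup/<TIMESTAMP-DIR>/xtrabackup_binlog_info
mysql-bin.000011        120     b9b4712a-df64-11e3-b391-60672090eb04:1-8

# On the slave: restore the prepared backup into an empty datadir, start mysqld
# (gtid_executed is typically empty on the freshly restored instance, so no RESET
# MASTER is needed) and point the slave to the master using auto-positioning:
SQL> SET GLOBAL gtid_purged = 'b9b4712a-df64-11e3-b391-60672090eb04:1-8';
SQL> CHANGE MASTER TO MASTER_HOST='master_host', MASTER_USER='repl',
     MASTER_PASSWORD='secret', MASTER_AUTO_POSITION=1;
SQL> START SLAVE;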
Conclusion

While GTID provides many benefits over classic replication, it requires different troubleshooting and fix strategies, which must be known before deploying GTID in production systems.

Replication channel fail-over with Galera Cluster for MySQL

Shinguz - Thu, 2014-06-19 07:05
Taxonomy upgrade extras: channel, galera, cluster, fail-over, replication, master, slave

Sometimes it could be desirable to replicate from a Galera Cluster to a single MySQL slave or to another Galera Cluster. Reasons for this measure could be:

  • An unstable network between two Galera Cluster locations.
  • A separation of a reporting slave and the Galera Cluster so that heavy reports on the slave do not affect the Galera Cluster performance.
  • Mixing different sources in a slave or a Galera Cluster (fan-in replication).

This article is based on earlier research work (see MySQL Cluster - Cluster circular replication with 2 replication channels) and uses the old MySQL replication style (without MySQL GTID).

Preconditions
  • Enable the binary logs on 2 nodes of a Galera Cluster (we call them channel masters) with the log_bin variable.
  • Set log_slave_updates = 1 on ALL Galera nodes.
  • It is recommended to have small binary logs and relay logs in such a situation to reduce the overhead of scanning the files (max_binlog_size = 100M); a minimal configuration sketch is shown below.
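
A minimal my.cnf sketch for the two channel masters could look as follows (server_id, file names and the retention value are placeholders chosen for illustration):

[mysqld]
server_id         = 11        # must be unique on every node involved in replication
log_bin           = bin-log   # enable the binary log on the two channel masters
log_slave_updates = 1         # required on ALL Galera nodes
max_binlog_size   = 100M      # keep binary logs (and thus relay logs) small
expire_logs_days  = 7         # illustrative retention, adjust to your needs
binlog_format     = ROW       # Galera requires row-based replication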
Scenarios


Let us assume that for some reason the actual channel master of channel 1 breaks. As a consequence the slave of channel 1 does not receive any replication events any more. But we have to keep the replication stream up and running. So we have to switch the replication channel to channel master 2.

Switching replication channel

For safety reasons we should stop the slave of replication channel 1 first:

mysql> STOP SLAVE;

Then we have to find the actual relay log on the slave:

mysql> pager grep Relay_Log_File
mysql> SHOW SLAVE STATUS\G
mysql> nopager
Relay_Log_File: slave-relay-bin.000019

Next we have to find the last applied transaction on the slave:

mysql> SHOW RELAYLOG EVENTS IN 'slave-relay-bin.000019';
| slave-relay-bin.000019 | 3386717 | Query      | 5201 | 53745015 | BEGIN                          |
| slave-relay-bin.000019 | 3386794 | Table_map  | 5201 | 53745067 | table_id: 72 (test.test)       |
| slave-relay-bin.000019 | 3386846 | Write_rows | 5201 | 53745142 | table_id: 72 flags: STMT_END_F |
| slave-relay-bin.000019 | 3386921 | Xid        | 5201 | 53745173 | COMMIT /* xid=1457451 */       |
+------------------------+---------+------------+------+----------+--------------------------------+

This is transaction 1457451 which is the same on all Galera nodes.

On the new channel master of channel 2 we now have to find the matching binary log. This can best be done by comparing the timestamps of the relay log on the slave with those of the binary logs on the master of channel 2.

On slave:

shell> ll *relay-bin*
-rw-rw---- 1 mysql mysql     336 Mai 22 20:32 slave-relay-bin.000018
-rw-rw---- 1 mysql mysql 3387029 Mai 22 20:37 slave-relay-bin.000019

On master of channel 2:

shell> ll *bin-log*
-rw-rw---- 1 mysql mysql  2518737 Mai 22 19:57 bin-log.000072
-rw-rw---- 1 mysql mysql      143 Mai 22 19:57 bin-log.000073
-rw-rw---- 1 mysql mysql      165 Mai 22 20:01 bin-log.000074
-rw-rw---- 1 mysql mysql 62953648 Mai 22 20:40 bin-log.000075

It looks like binary log 75 of master 2 matches the relay log of our slave.

Now we have to find the same transaction on the master of channel 2:

mysql> pager grep -B 6 1457451
mysql> SHOW BINLOG EVENTS IN 'bin-log.000075';
mysql> nopager
| bin-log.000075 | 53744832 | Write_rows | 5201 | 53744907 | table_id: 72 flags: STMT_END_F |
| bin-log.000075 | 53744907 | Xid        | 5201 | 53744938 | COMMIT /* xid=1457450 */       |
| bin-log.000075 | 53744938 | Query      | 5201 | 53745015 | BEGIN                          |
| bin-log.000075 | 53745015 | Table_map  | 5201 | 53745067 | table_id: 72 (test.test)       |
| bin-log.000075 | 53745067 | Write_rows | 5201 | 53745142 | table_id: 72 flags: STMT_END_F |
| bin-log.000075 | 53745142 | Xid        | 5201 | 53745173 | COMMIT /* xid=1457451 */       |
+----------------+----------+------------+------+----------+--------------------------------+

We successfully found the transaction. Its end position 53745173 is where the next transaction starts and thus where we should continue replicating.

As a last step we have to set the slave to the master of replication channel 2:

mysql> CHANGE MASTER TO master_host='master2', master_port=3306,
       master_log_file='bin-log.000075', master_log_pos=53745173;
mysql> START SLAVE;

After a while the slave has caught up and is ready for the next fail-over back.
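
To verify that the slave has caught up, you could for example check the replication threads and the lag (a small sketch using the same pager trick as above):

mysql> pager grep -e Slave_IO_Running -e Slave_SQL_Running -e Seconds_Behind_Master
mysql> SHOW SLAVE STATUS\G
mysql> nopager

Both threads should report Yes and Seconds_Behind_Master should be (close to) 0 before you attempt the next fail-over.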

Discussion

We found during our experiments that an IST of a channel master does not lead to a gap or loss of events in the replication stream. So restarting a channel master does not require a channel fail-over as long as an IST can be used for resyncing the channel master with the Galera Cluster.

The increase of wsrep_cluster_conf_id is NOT an indication that a channel fail-over is required.
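
To observe these values on a channel master you could, for instance, run the following (a small sketch; wsrep_local_state_comment is only an additional sanity check and not part of the original argument):

mysql> SHOW GLOBAL STATUS WHERE variable_name IN ('wsrep_cluster_conf_id', 'wsrep_local_state_comment');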

An SST resets the binary logs, so after an SST a slave will not replicate any more. Hence this method should be safe to use. If you find any situation where you experience trouble with channel fail-over, please let us know.
