Collection of News Feeds
FromDual Performance Monitor for MySQL 0.9.2 has been released
FromDual is pleased to announce the release of version 0.9.2 of mpm, its popular database performance monitor for MySQL, MariaDB and Percona Server.
This release makes mpm compatible with Zabbix v2.2 and contains various minor bug fixes and improvements.
You can download mpm from here.
In the unlikely case that you find a bug in mpm, please report it to our Bugtracker.
Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.
New installation of mpm v0.9.2
Please follow our mpm installation guide.
Upgrade from 0.x to 0.9.2
# cd /download
# tar xf mysql_performance_monitor-0.9.2.tar.gz
# cd /opt
# tar xf /download/mysql_performance_monitor_agent-0.9.2.tar.gz
# rm -f mpm
# ln -s mysql_performance_monitor_agent-0.9.2 mpm
No other upgrade requirements are known.
Changes in mpm v0.9.2
mpm agent:
- Fixed the trailing newline (EOL) in mpm_mr_version which affected zabbix_sender.
- Fixed the zabbix_sender return code change between Zabbix v2.1.7 and v2.2 (bug #124).
- Log rotation problem is now caught.
- Two exit errors fixed.
- Default file locations adapted to the newest standard.
- GLOBAL VARIABLES are used instead of SESSION VARIABLES.
- The cache file is now removed instead of shrunk, and the "remove lock file" message appears in the right place.
- Agent locking problem fixed.
- Check of the upload file size, with shrinking, introduced.
- Proxy settings are now considered for the MaaS solution.
- Data upload switched from HTTP to HTTPS.
- Sending data is more verbose about HTTP/HTTPS send-method errors and is prepared for HTTPS-only behaviour.
- DRBD information is now reported correctly (bug #133).
- wsrep_last_committed item added.
- Some items added to the templates according to customer needs.
MySQL Environment MyEnv 1.0.4 has been released
FromDual is pleased to announce the release of version 1.0.4 of MyEnv, its popular multi-instance environment for MySQL, MariaDB and Percona Server.
Most of the improvements were made to the MySQL Backup Manager (mysql_bman) utility.
You can download MyEnv from here.
In the unlikely case that you find a bug in MyEnv, please report it to our Bugtracker.
Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.
Upgrade from 1.0.x to 1.0.4
# cd ${HOME}/product
# tar xf /download/myenv-1.0.4.tar.gz
# rm -f myenv
# ln -s myenv-1.0.4 myenv
Upgrade from 1.0.2 or older to 1.0.3 or newer
If you are using plug-ins for showMyEnvStatus, create all the links in the new directory structure:
cd ${HOME}/product/myenv
ln -s ../../utl/oem_agent.php plg/showMyEnvStatus/
Replace the MyEnv section in ~/.bash_profile (make a backup of this file first!) with the following new one:
# BEGIN MyEnv
# Written by the MyEnv installMyEnv.php script.
. /etc/myenv/MYENV_BASE
MYENV_PWD=`pwd`
cd $MYENV_BASE/bin
. myenv.profile
cd $MYENV_BASE; $MYENV_BASE/bin/showMyEnvStatus.php; cd - > /dev/null
cd $MYENV_PWD
# END MyEnv
Changes in MyEnv 1.0.4
MyEnv:
- MyEnv made PHP 5.4 compatible.
- System requirement checks for the installation improved.
- Usage is only displayed with --help, not on every error.
- Installation instructions improved.
- Missing-PHP error message improved.
- No changes.
- System requirement checks for the installation are taken over from MyEnv.
- Installation instructions improved.
- The instance name can be added to the mysql_bman backup file name (--instance-name).
MySQL Environment MyEnv 1.0.3 has been released
FromDual is pleased to announce the release of version 1.0.3 of MyEnv, its popular multi-instance environment for MySQL, MariaDB and Percona Server.
Most of the improvements were made to the MySQL Backup Manager (mysql_bman) utility.
You can download MyEnv from here.
In the unlikely case that you find a bug in MyEnv, please report it to our Bugtracker.
Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.
Upgrade from 1.0.x to 1.0.3
# cd ${HOME}/product
# tar xf /download/myenv-1.0.3.tar.gz
# rm -f myenv
# ln -s myenv-1.0.3 myenv
Upgrade from 1.0.2 or older to 1.0.3 or newer
If you are using plug-ins for showMyEnvStatus, create all the links in the new directory structure:
cd ${HOME}/product/myenv
ln -s ../../utl/oem_agent.php plg/showMyEnvStatus/
Replace the MyEnv section in ~/.bash_profile (make a backup of this file first!) with the following new one:
# BEGIN MyEnv
# Written by the MyEnv installMyEnv.php script.
. /etc/myenv/MYENV_BASE
MYENV_PWD=`pwd`
cd $MYENV_BASE/bin
. myenv.profile
cd $MYENV_BASE; $MYENV_BASE/bin/showMyEnvStatus.php; cd - > /dev/null
cd $MYENV_PWD
# END MyEnv
Changes in MyEnv 1.0.3
MyEnv:
- Comments in aliases.conf and variables.conf made clearer, to avoid conflicts with shell variables.
- Commented lines in aliases.conf and variables.conf are ignored.
- product/lib/mysql path added to LD_LIBRARY_PATH for some set-ups (bug #106).
- CHANGELOG added.
- The up alias pointing to a function in .bash_profile caused trouble during cluster fail-over. Thus we now use variables, which should always work.
- Install comments for MyEnv installer improved.
- root commands changed into sudo commands.
- basedir is guessed from product directory.
- New socket candidate added.
- Kill session procedure added (sql/create_procedures_kill_sessions.sql).
- glb_control.shx added. This utility removes a Galera Cluster node from, or adds it back to, the Galera Load Balancer depending on whether the node is SYNCED.
- Partition maintenance utilities were improved by adding a new partition merge utility (utl/merge_partition.php). Partitions are split one week ahead to avoid copying huge amounts of data. The password can be stored in a file instead of being provided on the command line. Options usage is fixed.
- Better check for invalid options/parameters in utl/slave_monitor.php (bug #114).
- mysql_bman retention time maximum was verified incorrectly.
- Some typos in usage and error messages fixed.
- Removed the target for type --cleanup; clean-up does not require a target.
- Keys and values trimmed while reading config file.
- mysql_bman compression can be switched off for de-duplicated drives.
- mysql_bman considers ~/.my.cnf (user and password) in the client and mysqldump sections.
- Separated mysql_bman into php part and bash wrapper.
- Split mysql_bman code and lib for automated testing.
- If an option was passed twice this led to strange behaviour; fixed (bug #33).
- backupdir creation error caught more nicely (bug #75).
- No option --cleanup removes files after archiving to tape (bug #76).
- Fixed endless loop bug in config read which affected customer.
- Use readlink if realpath is not available (problem with SLES).
- Cleanup only deletes bck_* files.
- Options without values (--no-compress, --per-schema, --archive, etc.) now work in the config file.
- The example in --help is clearer.
- Target handling improved and simplified.
- Catch parse error on .my.cnf caused by ! in password string.
- .my.cnf (user, password) is considered and target adapted accordingly.
- Dump overview feature implemented.
- A full backup now lists all included schemas.
How to Set Up MySQL Master/Slave Replication?
It is not easy to find a simple source on how to set up MySQL replication, so I thought it might be useful, at least for beginners, to write a direct and simple how-to blog on setting up Master/Slave replication in MySQL using the classic method (binary log information). Check out my post GTID In Action for information about transaction-based replication using GTID.
Before going through the replication setup steps, I think it is better to first explain how replication works in MySQL.
MySQL replication mainly consists of a three-part process:
- The master server records all data changes in its binary logs (binary log events) and sends them to the slave, using a thread called the binlog dump thread, once the slave connects to the master.
- The slave copies the binary log events sent by the master's binlog dump thread to its relay logs, using a thread called the slave I/O thread.
- The slave applies these changes from the relay logs to its data by replaying (executing) all events, using a thread called the slave SQL thread. All three threads can be observed live, as shown in the sketch below.
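As a minimal sketch (assuming a running master/slave pair and a user with the PROCESS privilege), the three threads show up in the process list; the states quoted below are typical for MySQL 5.6:
## on the master: the binlog dump thread that serves the slave
mysql> SHOW PROCESSLIST;
## look for a row with Command = 'Binlog Dump' and a state like
## 'Master has sent all binlog to slave; waiting for binlog to be updated'
## on the slave: the I/O and SQL threads run under 'system user'
mysql> SHOW PROCESSLIST;
## typical states: 'Waiting for master to send event' (I/O thread) and
## 'Slave has read all relay log; waiting for the slave I/O thread to update it' (SQL thread)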
Now, let's go through the setup process, which is divided into three main sections (assuming you have already installed MySQL on the master and slave servers):
- Master's side configurations.
- Slave's side configurations.
- Replication checking and troubleshooting.
Master's side configuration:
- Add the following variables to the MySQL configuration file (my.cnf):
[mysqld]
server-id=1 ## must be unique
log-bin=mysql-bin
binlog_format=ROW
- Restart MySQL so that the configuration changes take effect:
shell> /etc/init.d/mysql restart
- Create a MySQL user to be used by the slave:
SQL> GRANT REPLICATION SLAVE ON *.* TO 'slave_user_name'@'slave_ip' IDENTIFIED BY 's3cret';
- Take a full snapshot from the master's databases:
shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --master-data=2 --flush-logs --triggers --routines --events --hex-blob >/path/to/backupdir/full_backup-$TIMESTAMP.sql
Note:
If you have MyISAM tables, you should omit the option --single-transaction; --master-data=2 will then automatically turn on the --lock-all-tables option, which is required for MyISAM tables.
- After preparing the backup file, transfer it to the slave server.
- If you are setting up fresh master/slave servers (no data on the master yet), you do not have to back up the master databases; all you need is to execute the following command on the master server to learn the master's coordinates, which will be used later on the slave:
SQL> SHOW MASTER STATUS\G
*************************** 1. row ***************************
             File: mysql-bin.000275
         Position: 120
     Binlog_Do_DB:
 Binlog_Ignore_DB:
Executed_Gtid_Set:
And that is it on the master's side; let's move on to the slave's work.
Slave's side configuration:
- Add the following variables to the my.cnf file:
[mysqld]
server-id=2 ## must be unique
relay_log=relay-log
skip-slave-start ## useful for making checks before starting the slave (this way, the slave must be started manually after each MySQL restart)
- Restart MySQL so that the configuration changes take effect:
shell> /etc/init.d/mysql restart
- Restore the full backup taken from the master (if performed):
shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
- Get the master's coordinates information from the backup file:
shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql|grep "CHANGE MASTER TO"
OR from the output of "SHOW MASTER STATUS;" in case no backup was taken (check the final step in the "Master's side configuration").
- Set the master information on the slave:
SQL> CHANGE MASTER TO
  -> MASTER_HOST='master_ip',
  -> MASTER_PORT=3306,
  -> MASTER_USER='slave_user_name', ## the user which was created on the master
  -> MASTER_PASSWORD='s3cret',
  -> MASTER_LOG_FILE='mysql-bin.000275', ## taken from the step above
  -> MASTER_LOG_POS=120; ## taken from the step above
- Start replication:
SQL> START SLAVE;
- Once the slave is started, check the replication status on the slave with the following command:
SQL> SHOW SLAVE STATUS\G
*************************** 1. row ***************************
               Slave_IO_State: Waiting for master to send event
                  Master_Host: 127.0.0.1
                  Master_User: slave_user_name
                  Master_Port: 3306
                Connect_Retry: 60
              Master_Log_File: mysql-bin.000275
          Read_Master_Log_Pos: 120
               Relay_Log_File: mysql-relay.000001
                Relay_Log_Pos: 283
        Relay_Master_Log_File: mysql-bin.000275
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
                            . . .
               Last_SQL_Error:
                            . . .
- If Slave_IO_State = "Connecting ...", make sure that the slave user information is set correctly and that there are no firewall restrictions between the two servers (master and slave); this can be checked by connecting to the master's MySQL from the slave server using the replication user (in this example, slave_user_name).
- If both Slave_IO_Running and Slave_SQL_Running = Yes, then replication has been set up correctly.
- If Slave_SQL_Running = No, check the value of Last_SQL_Error for more details about the SQL error.
- If you know that error and you want to ignore it, you can execute "SET GLOBAL sql_slave_skip_counter = 1;" on the slave and then start the slave again with "START SLAVE;".
- To restrict all normal users from changing data on the slave - which might break the replication - the option "read-only" should be added to the slave's my.cnf file (see the sketch after this list).
- The server option "server-id" must be unique among all servers inside the replication setup (masters and slaves).
- If your database size is big (100GB or so), the Xtrabackup tool can be used instead of mysqldump when preparing the master snapshot, for faster backup and restore operations. For more information on how to use Xtrabackup, check out this link.
- For more information on how to setup MySQL replication, check out the manual documentation.
- For more information on how to troubleshoot MySQL replication, check out the Replication Troubleshooting - Classic VS GTID blog.
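A minimal sketch of the read-only point above (assuming a privileged session on the slave); note that users with the SUPER privilege bypass read_only:
SQL> SET GLOBAL read_only = ON; ## dynamic; add "read-only" to my.cnf to make it permanent
SQL> SHOW GLOBAL VARIABLES LIKE 'read_only';
## a write from a normal (non-SUPER) user now fails with:
## ERROR 1290 (HY000): The MySQL server is running with the --read-only option so it cannot execute this statement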
Setting the right GCache size in Galera Cluster
One of our customers had a question about the right value of the Galera Cache size (gcache.size) in Galera Cluster for MySQL, which I would like to share with you.
The question was: my maintenance window takes 4 hours for my 5TB DB. How can I avoid an SST?
Basically, having a too small GCache size will lead to an SST (Snapshot State Transfer) instead of an IST (Incremental State Transfer); thus we can avoid the SST by setting the GCache size to an appropriate value.
To check the current value of the GCache size:
mysql> SHOW GLOBAL VARIABLES LIKE 'wsrep_provider_options'\G
Variable_name: wsrep_provider_options
        Value: base_host = 192.168.1.12; . . . gcache.page_size = 128M; gcache.size = 128M; gcs.fc_debug = 0; . . .
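Since gcache.size is buried inside one long option string, it can be pulled out with string functions; a sketch, assuming MySQL 5.6 where information_schema.GLOBAL_VARIABLES is available:
mysql> SELECT SUBSTRING_INDEX(SUBSTRING_INDEX(VARIABLE_VALUE, 'gcache.size = ', -1), ';', 1) AS gcache_size
    ->   FROM information_schema.GLOBAL_VARIABLES
    ->  WHERE VARIABLE_NAME = 'wsrep_provider_options';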
The value of the GCache size can be changed by adding the following line to the my.cnf file and restarting the node (it can NOT be changed online):
#my.cnf
[mysqld]
wsrep_provider_options="gcache.size=256M"
But the question is: how can we calculate the right value for the GCache size so that it covers the maintenance window, yet is not larger than needed?
To answer that question, we should first find out how much the GCache can handle, which can be calculated by the following formula:
Hold time = GCache size / Replication Rate.
Where:
- Replication Rate = Amount of replicated data / time.
- Amount of replicated data = (wsrep_replicated_bytes + wsrep_received_bytes) after the maintenance window - (wsrep_replicated_bytes + wsrep_received_bytes) before the maintenance window.
The amount of replicated data for the customer's case = 7200MB.
Now, we can find out how long the GCache (default 128M) can hold data in the customer's case:
Hold time = 128MB / (7200MB / 4h) = 128MB / 0.5 MB/s = 256s.
Then, we can calculate the right GCache size value to cover the maintenance window with the following formula:
GCache = Maintenance window * Replication Rate = 14400s * 0.5 MB/s.
GCache = 7200MB.
In other words, the right GCache size should be equivalent to (or not less than) the amount of replicated data.
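That calculation can be scripted directly on the node; a minimal sketch, assuming MySQL 5.6 where the counters are exposed in information_schema.GLOBAL_STATUS (user variables are per-session, so both snapshots must be taken from the same client session):
## before the maintenance window:
mysql> SELECT SUM(VARIABLE_VALUE) INTO @before
    ->   FROM information_schema.GLOBAL_STATUS
    ->  WHERE VARIABLE_NAME IN ('wsrep_replicated_bytes', 'wsrep_received_bytes');
## ... the maintenance window runs ...
## after the maintenance window:
mysql> SELECT SUM(VARIABLE_VALUE) INTO @after
    ->   FROM information_schema.GLOBAL_STATUS
    ->  WHERE VARIABLE_NAME IN ('wsrep_replicated_bytes', 'wsrep_received_bytes');
mysql> SELECT @after - @before AS suggested_gcache_size_bytes;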
A shorter way using the binary log size
Is there any relation between (wsrep_replicated_bytes + wsrep_received_bytes) and the binary log traffic? Let's check that with the following test scenario:
- Start a Galera Cluster; the newest binary log file will be nearly empty (120 bytes) and the status variables close to zero:
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_replicated_bytes';
+------------------------+-------+
| Variable_name          | Value |
+------------------------+-------+
| wsrep_replicated_bytes | 0     |
+------------------------+-------+
1 row in set (0.00 sec)
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_received_bytes';
+----------------------+-------+
| Variable_name        | Value |
+----------------------+-------+
| wsrep_received_bytes | 368   |
+----------------------+-------+
1 row in set (0.00 sec)
- Execute some DML statements on the current node (to increase wsrep_replicated_bytes) and on another node (to increase wsrep_received_bytes), then check the status values:
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_replicated_bytes';
+------------------------+----------+
| Variable_name          | Value    |
+------------------------+----------+
| wsrep_replicated_bytes | 80125192 |
+------------------------+----------+
1 row in set (0.00 sec)
mysql> SHOW GLOBAL STATUS LIKE 'wsrep_received_bytes';
+----------------------+----------+
| Variable_name        | Value    |
+----------------------+----------+
| wsrep_received_bytes | 40062948 |
+----------------------+----------+
1 row in set (0.00 sec)
The amount of replicated data = (80125192 + 40062948) - (0 + 368) = 120187772 bytes.
- Check the growth of the binary log file:
shell> ll /var/lib/mysql/mysql-bin.000243
-rw-rw---- 1 mysql mysql 113769454 Mar 20 13:42 mysql-bin.000243
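In an already running cluster, the same comparison works without a restart; a sketch using only the statements mentioned in the notes below:
mysql> FLUSH BINARY LOGS; ## start a fresh binary log file right before the maintenance window
## after the maintenance window, add up the File_size of the files created since then:
mysql> SHOW BINARY LOGS;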
Notes:
- The variable log_slave_updates MUST be enabled on ALL nodes; otherwise, the writesets counted in wsrep_received_bytes will not be reflected in the binary logs, which will lead to WRONG calculations!!
- Since the cluster was freshly started, MySQL began writing to a new binary log file. In an already running cluster, we can force MySQL to start a new binary log file by issuing the SQL command "FLUSH BINARY LOGS;", so that we can use the total size of the binary logs generated during the maintenance window.
Although the binary log traffic will always be somewhat less than the amount of replicated data, the two are close, so we can use it to get a rough estimate of the right GCache size value. Hence the formula can be written as follows:
- GCache size = Maintenance window * Replication Rate.
- GCache size = (wsrep_replicated_bytes + wsrep_received_bytes) after the maintenance window - (wsrep_replicated_bytes + wsrep_received_bytes) before the maintenance window.
- GCache size = the binary log traffic which occurs during the maintenance window.
Impact of General Query Log on MySQL Performance
Sometimes it is required to enable the general query log (which is disabled by default). If the general query log is enabled, the server writes information to this log when clients connect or disconnect, and logs each SQL statement received from clients.
The question is: does enabling the general query log affect MySQL performance?
Also, it is possible to record the output of this log in either a file or a table in the mysql database (mysql.general_log). What is the performance impact of each one?
Let's run some simple benchmarks for those scenarios to measure the actual impact on MySQL performance.
System information:
HW configuration:
- CPU: Intel® Core™ i7-3520M processor (4M cache, up to 3.60 GHz).
- 2 cores, 4 threads, HT enabled.
- Memory: 8GB RAM (1600).
- Storage: HDD 1TB/ 5400RPM.
- OS: Ubuntu 12.04
- MySQL Server: 5.6.17
- Sysbench: 0.4.12
- Sysbench command:
sysbench --num-threads=1 --max-requests=1000 --db-driver=mysql --test=oltp --mysql-table-engine=InnoDB --mysql-user=root run
- Table structure used by the sysbench tests:
mysql> show create table sbtest.sbtest\G
CREATE TABLE `sbtest` (
  `id` int(10) unsigned NOT NULL AUTO_INCREMENT,
  `k` int(10) unsigned NOT NULL DEFAULT '0',
  `c` char(120) NOT NULL DEFAULT '',
  `pad` char(60) NOT NULL DEFAULT '',
  PRIMARY KEY (`id`),
  KEY `k` (`k`)
) ENGINE=InnoDB AUTO_INCREMENT=8574 DEFAULT CHARSET=latin1
Note:
The tests were run with 1, 2, 4, 8, 16 and 32 threads; each throughput/response-time value for each thread count in each test case is the average of ten (10) executions.
General log disabled:
To make sure that the general query log is disabled:
mysql> show global variables like 'general_log';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| general_log   | OFF   |
+---------------+-------+
Testing results:
General log enabled:
The general query log is a dynamic variable, which means that it can be enabled or disabled ONLINE without restarting MySQL (since MySQL 5.1):
mysql> SET GLOBAL general_log=ON;
We can choose the output of this log to be stored either in a log file (the default) or in a MySQL table (mysql.general_log).
What benefits might we get if the log output is stored in a table rather than in a file?
- We can use normal SQL statements to access the log contents and filter for specific criteria (e.g. using a WHERE condition), which is harder when dealing with files.
- The log contents could be accessed remotely if someone can connect to the MySQL server.
- Standard format for the log entries.
- If the CSV engine is used for the log table, it will be easy to import the CSV file into spreadsheets.
- It is easy to expire the logs by simply truncating the log table (TRUNCATE TABLE).
- Log rotation is possible by using the RENAME TABLE statement (see the sketch after this list).
- Log entries are not replicated to the slave because they are not written to the binary logs.
- mysqldump does not include the log table contents (general_log or slow_log) in the backup, even if the --all-databases backup option is used.
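A minimal sketch of the querying and rotation points above, assuming the log output is already set to TABLE (the backup table name is illustrative):
mysql> SELECT event_time, argument
    ->   FROM mysql.general_log
    ->  WHERE command_type = 'Query' AND argument LIKE 'SELECT%'
    ->  ORDER BY event_time DESC LIMIT 10;
## rotate: swap in an empty copy and keep the old entries aside
## (per the MySQL manual, RENAME TABLE works on a log table in use; disable the log first if in doubt)
mysql> CREATE TABLE mysql.general_log_new LIKE mysql.general_log;
mysql> RENAME TABLE mysql.general_log TO mysql.general_log_backup,
    ->              mysql.general_log_new TO mysql.general_log;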
So, let's check the performance impact of each log output.
Output is FILE:
To check the output destination of the general log, the following command can be used:
mysql> show global variables like 'log_output';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output    | FILE  |
+---------------+-------+
Testing results:
Output is TABLE (CSV table):
To change the output destination of the general log from file to table (CSV by default), the following command should be used:
mysql> SET GLOBAL log_output='TABLE';
mysql> show global variables like 'log_output';
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| log_output    | TABLE |
+---------------+-------+
Testing results:
Output is TABLE (MyISAM table):
The performance issues in the previous case may be due to the nature of the CSV storage engine. Is it possible to change the engine of the general_log table?
The answer is yes, but unfortunately we are restricted to the MyISAM storage engine; no engines other than CSV or MyISAM are allowed. Check this link for more information.
To alter the log table, you must first disable the logging:
mysql> alter table mysql.general_log engine=MYISAM;
ERROR 1580 (HY000): You cannot 'ALTER' a log table if logging is enabled
mysql> SET GLOBAL general_log=OFF;
mysql> alter table mysql.general_log engine=MYISAM;
mysql> SET GLOBAL general_log=ON;
Testing results:
Output is TABLE (MyISAM table with some structure changes):
In general, to make a SQL query faster we might optimize the table structure, add indexes, or rewrite the query.
The following is the general log table structure:
mysql> show create table mysql.general_log\G
CREATE TABLE `general_log` (
  `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `user_host` mediumtext NOT NULL,
  `thread_id` bigint(21) unsigned NOT NULL,
  `server_id` int(10) unsigned NOT NULL,
  `command_type` varchar(64) NOT NULL,
  `argument` mediumtext NOT NULL
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='General log'
Let's see what we can do to optimize the general_log table structure (other suggestions are welcome):
- Can we create partitions on that table to speed up our searches?
Although this is a normal MyISAM table, partitioning IS NOT allowed for the log tables (by the way, partitioning is not permitted for CSV tables either).
- Can we change the datatype of the `user_host` column from mediumtext to e.g. varchar(100)? (The maximum length of this column's data on my machine didn't exceed 50 characters.)
While this is accepted syntax-wise, no logs will be stored in the table afterwards, and the following errors will be printed to the error log file:
2014-03-06 18:44:21 6987 [ERROR] Failed to write to mysql.general_log:
2014-03-06 18:44:23 6987 [ERROR] Incorrect definition of table mysql.general_log: expected column 'user_host' at position 1 to have type mediumtext, found type varchar(100).
- What about creating indexes on the columns we will use most in our searches (`event_time` and `argument`)? Let's try that! (Example queries that these indexes are meant to serve follow the code blocks below.)
- Add index on `event_time` column:
mysql> SET GLOBAL general_log=OFF;
mysql> alter table mysql.general_log add index ev_tm_idx(`event_time`);
mysql> show create table mysql.general_log\G
CREATE TABLE `general_log` (
  `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `user_host` mediumtext NOT NULL,
  `thread_id` bigint(21) unsigned NOT NULL,
  `server_id` int(10) unsigned NOT NULL,
  `command_type` varchar(64) NOT NULL,
  `argument` mediumtext NOT NULL,
  KEY `ev_tm_idx` (`event_time`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='General log'
mysql> SET GLOBAL general_log=ON;
Testing results:
- Add FULLTEXT index on `argument` column:
mysql> SET GLOBAL general_log=OFF;
mysql> alter table mysql.general_log add fulltext index (`argument`);
mysql> show create table mysql.general_log\G
CREATE TABLE `general_log` (
  `event_time` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP,
  `user_host` mediumtext NOT NULL,
  `thread_id` bigint(21) unsigned NOT NULL,
  `server_id` int(10) unsigned NOT NULL,
  `command_type` varchar(64) NOT NULL,
  `argument` mediumtext NOT NULL,
  KEY `ev_tm_idx` (`event_time`),
  FULLTEXT KEY `argument` (`argument`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8 COMMENT='General log'
mysql> SET GLOBAL general_log=ON;
Testing results:
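Hypothetical examples of the searches these two indexes are meant to accelerate (the time window and search term are made up):

-- Range scan served by the ev_tm_idx index:
SELECT event_time, argument
FROM mysql.general_log
WHERE event_time >= NOW() - INTERVAL 1 HOUR;
-- Full-text search served by the FULLTEXT index on `argument`:
SELECT event_time, argument
FROM mysql.general_log
WHERE MATCH(argument) AGAINST ('sbtest');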
To make it clearer, the following combines all the results in one graph, followed by a response-time comparison:
The raw results in Transactions / Sec might be useful:
Threads                                                     1        2        4        8       16       32
General log disabled                                  383.996  814.759 1421.288 1674.733 1414.985 1071.189
General log enabled (File)                            281.642  521.390 1230.743 1406.127 1095.896  923.986
General log enabled (CSV Table)                       231.659  447.173  787.578  507.846  426.324  439.992
General log enabled (MyISAM Table)                    249.470  536.379  933.304  532.912  476.454  454.015
General log enabled (MyISAM Table + index)            238.508  430.050  875.209  465.464  465.464  395.063
General log enabled (MyISAM Table + Fulltext index)   157.436  236.156  210.968  212.273  218.617  220.701
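As a sanity check on the percentages below: at 4 threads the throughput drop for the FILE destination works out to (1421.288 - 1230.743) / 1421.288 ≈ 13.4%, and for the CSV table to (1421.288 - 787.578) / 1421.288 ≈ 44.6%.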
Conclusion:
- The best MySQL performance among all the above test cases is achieved by disabling the general query log. For example, comparing the above results at 4 concurrent threads (the peak value for most scenarios), we find that:
- Enabling the general log (general_log = ON) with the file destination (log_output = FILE) decreased throughput by 13.4% and increased response time by 17.5%.
- Enabling the general log with the CSV-table destination decreased throughput by 44.6% and increased response time by 90%.
- Enabling the general log with the MyISAM-table destination decreased throughput by 34.3% and increased response time by 59%.
- Enabling the general log with the MyISAM-table destination plus an index on the `event_time` column decreased throughput by 38.4% and increased response time by 73%.
- Enabling the general log with the MyISAM-table destination plus an index on `event_time` and a FULLTEXT index on `argument` decreased throughput by 85% and increased response time by 542%.
- Although using a table as the log output destination has many benefits, as described above, it has a bigger negative impact on MySQL performance than a log file.
- Increasing the number of concurrently running threads, in the case of log_output=TABLE, increases contention on the general_log table, which is governed by the table-level locking of the MyISAM and CSV engines.
- As with any other MySQL table, the more rows inserted into the log table, the greater the negative performance impact.
- Although mysqldump does not include the log tables' contents in the backup, this is not the case for full physical backups taken with Xtrabackup or other physical-backup tools.
- Finally, it is best to enable the general query log only when it is really needed; it is not recommended on a production system. It can be enabled (dynamically) for a while and then disabled again once we have found what we are looking for.
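A minimal sketch of that temporary-debugging workflow, assuming the default FILE destination and a user with the SUPER privilege:

mysql> SET GLOBAL log_output='FILE';
mysql> SET GLOBAL general_log=ON;
-- reproduce the behaviour under investigation, then:
mysql> SET GLOBAL general_log=OFF;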
Why FromDual customers are using Galera Cluster for MySQL
We are very satisfied with the functionality and the launch. The entire Cluster is running absolutely stable. This is a significant advantage for our customers because we are able to offer our services without interruption. We will use Galera in further projects and convert our existing Master/Slave installations.
A. Rempening, Database Administrator, KiKxxl GmbH
Download this case study: KiKxxl GmbH: Highly available communication services based on a MySQL database and Galera Cluster (PDF, 420 KB).