FromDual: Tools for MySQL and Galera - Backup - Monitoring - Operations
FromDual tools provide valuable additional functionality that facilitates and optimizes the daily operation of your MySQL databases. A lot has changed in the FromDual tools since our last newsletter.
Numerous improvements and suggestions from our customers have been incorporated into the MySQL Environment (MyEnv). The most important changes were introduced to the MySQL Backup Manager (mysql_bman).
With the MySQL Ops Center we meet the wishes of our numerous customers who would like a graphical user interface for operating complex MySQL environments.
These users are often not very experienced with MySQL but nevertheless want to operate more complex MySQL installations such as master/slave or master/master replication.
In the MySQL Performance Monitor (mpm), numerous small bugs reported to us by our customers have been fixed.
Note: Our MySQL service contracts, Business Hour (5x9) and All around the Clock (7x24), include the usage of and support for our tools.
If you would like to know more about our service prices, we would be pleased to send you an offer.
MyEnv v1.0.5
The MySQL Environment (MyEnv) is gaining more and more popularity in the MySQL eco-system. MyEnv is modelled on TVD BasEnv, which is popular with bigger Oracle database users, and is optimized for MySQL.
With MyEnv you can easily consolidate several MySQL instances (mysqld) on one single machine; thanks to MyEnv this otherwise complicated configuration becomes a piece of cake. Furthermore, MyEnv is becoming more and more common with customers testing their applications against different MySQL versions (5.5, 5.6 and 5.7) or different MySQL branches (Galera Cluster, MariaDB, Percona Server).
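To illustrate what such a consolidation means at the MySQL level, here is a minimal multi-instance sketch using the standard mysqld_multi mechanism; the ports and data directories are made up for the example, and this is not MyEnv's own configuration format:
# /etc/my.cnf - two mysqld instances on one host (example ports and paths)
[mysqld1]
port     = 3306
socket   = /var/lib/mysql1/mysql.sock
datadir  = /var/lib/mysql1
pid-file = /var/lib/mysql1/mysqld.pid

[mysqld2]
port     = 3307
socket   = /var/lib/mysql2/mysql.sock
datadir  = /var/lib/mysql2
pid-file = /var/lib/mysql2/mysqld.pid

shell> mysqld_multi start 1,2    # start both instances
shell> mysqld_multi report       # show which instances are running
MyEnv is intended to make exactly this kind of multi-instance handling a piece of cake, as described above.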
The most important improvements in MyEnv v1.0.5 are:
- Old PHP functions were replaced to achieve better compatibility with PHP 5.4 and 5.5.
- The MyEnv overview (up) of the installed MySQL instances was polished and numerous smaller bugs were fixed.
- Extensions for active/passive fail-over clusters and Oracle Enterprise Monitor Agents for MySQL were integrated.
- The user guidance of the MyEnv installer was made more user-friendly.
- Problems with MyEnv on SUSE Linux Enterprise Server (SLES) were fixed.
- The tools for MySQL Partitions were extended and improved.
You can find all improvements in detail in the Release Notes.
You can download MyEnv here.
MySQL Backup Manager v1.0.5
The MySQL Backup Manager (mysql_bman) is currently attracting the most interest from our customers. It significantly simplifies MySQL backups of all different types.
At this point we would like to quote a MySQL user:
"MySQL Backup Manager is a very nice tool! Congratulations for FromDual! I made my own shell script for catalog and maintained backups by xtrabackup, but mysql_bman is the best!
Xtrabackup + mysql_bman!!!"
In mysql_bman version v1.0.5 the following improvements were integrated:
- Security improvements (password is not exposed any more).
- Every instance can be tagged with a name and uniquely identified.
- The MySQL Backup Manager now considers the MySQL configuration file ~/.my.cnf.
- The compression of backups can be disabled to support de-duplicating drives.
- The option --no-memory-table-check was introduced to allow inconsistent backups with MEMORY tables.
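A minimal usage sketch is shown below. The ~/.my.cnf part is standard MySQL client configuration; of the mysql_bman options, only --no-memory-table-check is taken from the list above, while the other option names and paths are illustrative assumptions that should be checked against mysql_bman --help:
# ~/.my.cnf - read by mysql_bman, so the password does not appear on the command line
[client]
user     = bman
password = secret

# hypothetical full backup run; verify option names with mysql_bman --help
shell> mysql_bman --type=full --no-memory-table-check --backupdir=/backup/daily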
MySQL Ops Center v0.2
Our MySQL customers have requested a simple user interface to operate and administer many MySQL databases. This is the reason why FromDual has launched the MySQL Ops Center.
The Ops Center can centrally operate and control complex MySQL configurations such as master/slave or master/master set-ups, and monitor, start, stop and reconfigure the replication.
With the MySQL Ops Center you can also easily start and stop virtual IPs and move them to another host.
The most important features added to the first public preview release of the MySQL Ops Center v0.2 are:
- Starting and stopping of MySQL databases on remote machines by a central management console.
- Starting and stopping of the MySQL replication.
- Starting and stopping of a virtual IP (VIP).
- Fail-over of VIP from active master to slave (master/slave replication) or passive master (master/master replication).
- Configuration of the master/slave replication.
The MySQL Ops Center can be downloaded here. You can find further information at MySQL Ops Center.
MySQL Performance Monitor v0.9.3
The MySQL Performance Monitor (mpm) was optimized in many places. Furthermore, known bugs were fixed and the mpm agent was made ready for the newest Zabbix version v2.2:
- Bugs related to sha/sha1 encryption were fixed.
- A stopped database is better detected now.
- DRBD information was improved.
- The new behaviour of zabbix_sender in Zabbix v2.2 is now handled correctly.
- New measuring points were added (Galera Cluster) and wrong ones were fixed.
You can download the latest version of the MySQL Performance Monitor here. For more information about the manual installation, just follow the steps in the installation guide. To check all changes and improvements of the MySQL Performance Monitor, check out the Release Notes.
FromDual Performance Monitor for MySQL 0.9.3 has been released
FromDual has the pleasure to announce the release of the new version 0.9.3 of its popular database performance monitor mpm for MySQL, Galera Cluster, MariaDB and Percona Server.
This release contains various minor bug fixes and improvements.
You can download mpm from here.
In the inconceivable case that you find a bug in mpm, please report it to our Bugtracker.
Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.
New installation of mpm v0.9.3
Please follow our mpm installation guide.
Upgrade from 0.x to 0.9.3
# cd /download
# tar xf mysql_performance_monitor-0.9.3.tar.gz
# cd /opt
# tar xf /download/mysql_performance_monitor_agent-0.9.3.tar.gz
# rm -f mpm
# ln -s mysql_performance_monitor_agent-0.9.3 mpm
No other upgrade requirements are known.
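A quick sanity check after the upgrade; the output below is illustrative, and restarting the mpm agent afterwards (in whatever way it is started on your system) is an assumption rather than part of the official upgrade notes:
# ls -l /opt/mpm
... /opt/mpm -> mysql_performance_monitor_agent-0.9.3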
Changes in mpm v0.9.3
mpm agent
- Typos fixed.
- Kill trap reports to the log file as well now.
- Example for timeshift feature added to configuration template.
- DB down not detected (bug #27/#138).
- InnoDB Status module: SHA fix (bug #139).
- Missing values in cache file fixed.
- No changes.
Replication Troubleshooting - Classic VS GTID
In previous posts, I talked about how to set up MySQL replication: classic replication (based on binary log information) and transaction-based replication (based on GTID). In this article I'll summarize how to troubleshoot MySQL replication for the most common issues we might face, with a simple comparison of how to solve them in the two replication methods (classic vs GTID).
There are two main operations we might need to do in a replication setup:
- Skip or ignore a statement that causes the replication to stop.
- Re-initialize a slave when the replication is broken and cannot be started anymore.
Basically, the slave should always be synchronized with its master, holding the same copy of the data, but for various reasons the two can become inconsistent (an unsafe statement in SBR, a slave that is not read_only and was modified outside of the replication, etc.). This causes errors and stops the replication, e.g. if the master inserted a record which already existed on the slave (duplicate entry) or updated/deleted a row which does not exist on the slave.
To solve this issue, we have to either reverse what was done on the slave (e.g. delete the inserted rows), if that change was made by mistake and is known, or skip executing those statements on the slave and continue the replication (I'll focus on skipping a statement in this post, as it requires different handling in classic and GTID replication).
Sample error messages (from the SHOW SLAVE STATUS output):
Last_SQL_Error: Could not execute Write_rows event on table test.t1; Duplicate entry '4' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.000304, end_log_pos 285
Last_SQL_Error: Could not execute Update_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 492
Last_SQL_Error: Could not execute Delete_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 688
How to solve that issue?
CLASSIC REPLICATION
Solving this problem is a straightforward process in the classic replication setup; all we need to do is issue the following SQL commands on the slave:
SQL> SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;
SQL> START SLAVE;
GTID REPLICATION
Solving this problem is not as straightforward in GTID replication as it is in classic replication, and the variable SQL_SLAVE_SKIP_COUNTER is no longer useful here.
To solve this problem in GTID replication, we will need to inject an empty transaction as follows:
- Check which transaction is causing the problem:
SQL> SHOW SLAVE STATUS\G
.
.
Retrieved_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:1-7
Executed_Gtid_Set: 4f6d62ed-df65-11e3-b395-60672090eb04:1,
b9b4712a-df64-11e3-b391-60672090eb04:1-6
Auto_Position: 1
Retrieved_Gtid_Set means the GTIDs retrieved from the master.
Executed_Gtid_Set means the GTIDs executed on the slave.
According to the above output, the slave retrieved GTIDs 1-7 (b9b4712a-df64-11e3-b391-60672090eb04:1-7) but executed only 1-6 (b9b4712a-df64-11e3-b391-60672090eb04:1-6), so the problem is in transaction number 7.
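As a cross-check, the GTID_SUBTRACT() function (available since MySQL 5.6) computes exactly this difference between the retrieved and the executed set:
SQL> SELECT GTID_SUBTRACT('b9b4712a-df64-11e3-b391-60672090eb04:1-7',
    ->                    'b9b4712a-df64-11e3-b391-60672090eb04:1-6') AS missing_gtid;
+----------------------------------------+
| missing_gtid                           |
+----------------------------------------+
| b9b4712a-df64-11e3-b391-60672090eb04:7 |
+----------------------------------------+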
- Inject an empty transaction:
SQL> SET GTID_NEXT='b9b4712a-df64-11e3-b391-60672090eb04:7';
SQL> BEGIN;COMMIT;
SQL> SET GTID_NEXT='AUTOMATIC';
SQL> START SLAVE;
BE CAUTIOUS: The first part of Executed_Gtid_Set (4f6d62ed-df65-11e3-b395-60672090eb04:1) is the locally executed GTIDs (not received from the master), while the second part (b9b4712a-df64-11e3-b391-60672090eb04:1-6) is the executed GTIDs which were retrieved from the master (check the master's UUID either by looking at the UUID in "Retrieved_Gtid_Set", which is by definition the master's UUID, or by issuing SHOW GLOBAL VARIABLES LIKE 'server_uuid'; on the master server). So we should make sure that we use the master's UUID when injecting the empty transaction, otherwise the problem will remain and the slave won't start.
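For example, querying the master of the setup above should return the same UUID that appears in Retrieved_Gtid_Set:
SQL> SHOW GLOBAL VARIABLES LIKE 'server_uuid';
+---------------+--------------------------------------+
| Variable_name | Value                                |
+---------------+--------------------------------------+
| server_uuid   | b9b4712a-df64-11e3-b391-60672090eb04 |
+---------------+--------------------------------------+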
Note:
After starting the slave successfully in either classic or GTID replication, we might need to use a combination of the Percona tools pt-table-checksum and pt-table-sync to fix the remaining inconsistencies, as sketched below.
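A minimal sketch of such a check-and-sync run, executed against the master; the host, user and password below are placeholders, and the full option set should be taken from the Percona Toolkit documentation:
# find chunks that differ between the master and its slaves
shell> pt-table-checksum --replicate=percona.checksums h=master_host,u=root,p=secret
# preview the statements that would fix the differences on the slaves
shell> pt-table-sync --print --replicate percona.checksums h=master_host,u=root,p=secret
# apply them once the preview looks sane
shell> pt-table-sync --execute --replicate percona.checksums h=master_host,u=root,p=secret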
Re-initialize / re-build a slave
For many reasons, we might end up having to re-build a slave to get the replication working again, e.g. if we stopped a slave for a while and the master purged a binary log file that the slave still needs, or if there are so many duplicate entry errors that pt-table-checksum and pt-table-sync cannot be used. In that case we have to re-initialize the slave from scratch by taking a fresh backup from the master server and restoring it on the slave. Let's check how we can do that in both replication methods.
How to solve that issue?
CLASSIC REPLICATION
Sample error message:
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'
Fix steps:
- Back up the master server with the following command:
shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --master-data=2 --flush-logs --triggers --routines --events --hex-blob > /path/to/backupdir/full_backup-$TIMESTAMP.sql
- Restore the backup file on the slave:
shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
- Get the binary log information from the time the backup was taken:
shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql | grep "CHANGE MASTER TO"
CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;
- Issue the "CHANGE MASTER TO" command using the new information:
SQL> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;
- Start the slave:
SQL> START SLAVE;
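After starting the slave, it is worth verifying that both replication threads are running and that the slave is catching up; the values below are illustrative:
SQL> SHOW SLAVE STATUS\G
...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
        Seconds_Behind_Master: 0
...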
NOTE:
The Xtrabackup tool could be used instead of mysqldump, especially if the database size is big. Check out this link for more information.
GTID REPLICATION
Sample error message:
Last_IO_Errno: 1236
Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
Fix steps:
- Back up the master server with the following command:
shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --flush-logs --triggers --routines --events --hex-blob > /path/to/backupdir/full_backup-$TIMESTAMP.sql
- Check the GTID value from the time the backup was taken:
shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql | grep PURGED
SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';
- Reset the GTID_EXECUTED and GTID_PURGED values on the slave:
SQL> RESET MASTER;
- Restore the backup file on the slave:
shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
- Make sure that the values of GTID_EXECUTED and GTID_PURGED are the correct ones:
SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_executed';
+---------------+------------------------------------------+
| Variable_name | Value                                    |
+---------------+------------------------------------------+
| gtid_executed | b9b4712a-df64-11e3-b391-60672090eb04:1-8 |
+---------------+------------------------------------------+
1 row in set (0.00 sec)

SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_purged';
+---------------+------------------------------------------+
| Variable_name | Value                                    |
+---------------+------------------------------------------+
| gtid_purged   | b9b4712a-df64-11e3-b391-60672090eb04:1-8 |
+---------------+------------------------------------------+
1 row in set (0.01 sec)
- Start the slave:
SQL> START SLAVE;
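As with classic replication, it is worth confirming that the slave threads are running; with GTID and MASTER_AUTO_POSITION the Executed_Gtid_Set should now continue to grow beyond the restored 1-8 range (illustrative values):
SQL> SHOW SLAVE STATUS\G
...
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
           Retrieved_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:9-12
            Executed_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:1-12
                Auto_Position: 1
...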
NOTES:
- If we didn't reset the GTID_EXECUTED and GTID_PURGED values on the slave before restoring the backup file, the following error will appear:
shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
ERROR 1840 (HY000): @@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty.
The above error indicates that the statement at the beginning of the backup file - "SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';" - failed because GTID_PURGED cannot be set unless GTID_EXECUTED is empty. Since GTID_EXECUTED is a read-only variable, the only way to empty its value is to issue "RESET MASTER" on the slave server before restoring the backup file.
- The Xtrabackup tool could be used instead of mysqldump to solve this problem as well, without the need to reset the GTID_EXECUTED and GTID_PURGED values. Check out this link for more information.
While GTID provides many benefits over classic replication, it comes with different troubleshooting and fix strategies, which must be understood before deploying GTID in production systems.
Taxonomy upgrade extras: GTIDreplicationReplication Troubleshooting - Classic VS GTID
In previous posts, I was talking about how to set up MySQL replication, Classic Replication (based on binary logs information) and Transaction-based Replication (based on GTID). In this article I'll summarize how to troubleshoot MySQL replication for the most common issues we might face with a simple comparison how can we get them solved in the different replication methods (Classic VS GTID).
There are two main operations we might need to do in a replication setup:
- Skip or ignore a statement that causes the replication to stop.
- Re-initialize a slave when the Replication is broke and could not be started anymore.
Basically, the slave should be always synchronized with its master having the same copy of data, but for some reasons there might be inconsistency between both of them (unsafe statement in SBR, Slave is not read_only and was modified apart of replication queries, .. etc) which causes errors and stops the replication, e.g. if the master inserted a record which was already inserted on the slave (Duplicate entry) or updated/deleted a row which was not exist on the slave, ... etc.
To solve this issue, we have to either reverse what we have done on the slave (e.g. delete the inserted rows) if that was made by mistake and is known or we can skip executing those statements on the slave and continue the replication again (I'll focus on skipping a statement in this post as it needs different interaction in Classic and GTID replication).
Sample error messages (from SHOW SLAVE STATUS output): Last_SQL_Error: Could not execute Write_rows event on table test.t1; Duplicate entry '4' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.000304, end_log_pos 285 Last_SQL_Error: Could not execute Update_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 492 Last_SQL_Error: Could not execute Delete_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 688 How to solve that issue ?CLASSIC REPLICATION
Solving this problem is a straight forward process in the classic replication setup, what only we need is to issue the following SQL commands on the slave's:
SQL> SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; SQL> START SLAVE;GTID REPLICATION
Solving this problem is not a straight forward in GTID replication like it is in the Classic replication and the variable SQL_SLAVE_SKIP_COUNTER wont be useful in this area anymore.
To get this problem solved in a GTID replication we will need to inject an empty transaction as follows:
- Check which transaction is causing the problem:
SQL> SHOW SLAVE STATUS\G
.
.
Retrieved_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:1-7
Executed_Gtid_Set: 4f6d62ed-df65-11e3-b395-60672090eb04:1,
b9b4712a-df64-11e3-b391-60672090eb04:1-6
Auto_Position: 1
Retrieved_Gtid_Set means the retrieved GTIDs from the master
Executed_Gtid_Set means the executed GTIDs on the slave.
According to the above output, the slave retrieved GTIDs from 1:7 (b9b4712a-df64-11e3-b391-60672090eb04:1-7) and executed only from 1:6 (b9b4712a-df64-11e3-b391-60672090eb04:1-6), so the problem is in transaction number 7.
- Inject an empty transaction:
SQL> SET GTID_NEXT='b9b4712a-df64-11e3-b391-60672090eb04:7';
SQL> BEGIN;COMMIT;
SQL> SET GTID_NEXT='AUTOMATIC';
SQL> START SLAVE;
BE CAUTIOUS: The first part of Executed_Gtid_Set (4f6d62ed-df65-11e3-b395-60672090eb04:1) is the local executed GTIDs (not received from the master) while the second part (b9b4712a-df64-11e3-b391-60672090eb04:1-6) is the executed GTIDs which retrieved from the master (check the master's UUID by either checking the UUID value in "Retrieved_Gtid_Set" which is basically for the master's UUID or by issuing SHOW GLOBAL VARIABLES LIKE 'server_uuid'; on the master server). So we should make sure that we are using the master's UUID when injecting an empty transaction, otherwise, the problem will still remain and the slave wont be started.
Note:
After starting the slave successfully in either classic or GTID replication we might need to use a combination of Percona tools pt-table-checksum and pt-table-sync to fix the inconsistency problem.
Re-initialize/ re-build a slaveFor many reasons, we might end up with only re-build a slave to get the replication working, e.g. if we stopped a slave for a while where the master purged the binary log file that is needed by that slave or there are many duplicate entry errors so that pt-table-checksum and pt-table-sync could not be used then we have to re-initialize the slave from the beginning by having a fresh backup from the master server and restore it on the slave. Lets check how can we do that in both replication methods.
How to solve that issue ?CLASSIC REPLICATION
Sample error message:
Last_IO_Errno: 1236 Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'
Fix steps:
- Backup the master server by the following command: shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --master-data=2 --flush-logs --triggers --routines --events --hex-blob >/path/to/backupdir/full_backup-$TIMESTAMP.sql
- Restore the backup file on the slave: shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
- Get the binary logs information when the backup was taken: shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql|grep "CHANGE MASTER TO" CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;
- Issue the "CHANGE MASTER TO" command using the new information: SQL> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;
- Start the slave: SQL> START SLAVE;
NOTE:
Xtrabackup tool could be used instead of mysqldump,especially, if the database size is big. Check out this link for more information.
GTID REPLICATIONSample error message:
Last_IO_Errno: 1236 Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
Fix steps:
- Backup the master server by the following command: shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --flush-logs --triggers --routines --events --hex-blob >/path/to/backupdir/full_backup-$TIMESTAMP.sql
- Check the GTID value when the backup was taken: shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql|grep PURGED SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';
- Reset the GTID_EXECUTED and GTID_PURGED values on the slave: SQL> RESET MASTER;
- Restore the backup file on the slave: shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
- Make sure that the values of GTID_EXEUCTED and GTID_PURGED are the correct ones: SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_executed'; +---------------+------------------------------------------+ | Variable_name | Value | +---------------+------------------------------------------+ | gtid_executed | b9b4712a-df64-11e3-b391-60672090eb04:1-8 | +---------------+------------------------------------------+ 1 row in set (0.00 sec) SHOW GLOBAL VARIABLES LIKE 'gtid_purged'; +---------------+------------------------------------------+ | Variable_name | Value | +---------------+------------------------------------------+ | gtid_purged | b9b4712a-df64-11e3-b391-60672090eb04:1-8 | +---------------+------------------------------------------+ 1 row in set (0.01 sec)
- Start the slave: SQL> START SLAVE;
NOTES:
- If we didn't reset the GTID_EXECUTED and GTID_PURGED values on the slave before restoring the backup file, the following error will be appeared:
shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql. ERROR 1840 (HY000): @@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty.The above error indicates that the statement at the beginning of the backup file - which is "SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';" - failed because GTID_PURGED cannot be set unless GTID_EXECUTED is empty. Since GTID_EXECUTED is a read only variable, the only way to empty its value is to issue "RESET MASTER" on the slave server before restoring the backup file.
- Xtrabackup tool could be used as well instead of mysqldump to get this problem solved and without the need to reset GTID_EXECUTED and GTID_PURGED values . Check out this link for more information.
While GTID provides many benefits over the classic replication but it has different troubleshooting and fix strategies which must be known first before deploying GTID in production systems.
Taxonomy upgrade extras: GTIDreplicationReplication Troubleshooting - Classic VS GTID
In previous posts, I was talking about how to set up MySQL replication, Classic Replication (based on binary logs information) and Transaction-based Replication (based on GTID). In this article I'll summarize how to troubleshoot MySQL replication for the most common issues we might face with a simple comparison how can we get them solved in the different replication methods (Classic VS GTID).
There are two main operations we might need to do in a replication setup:
- Skip or ignore a statement that causes the replication to stop.
- Re-initialize a slave when the Replication is broke and could not be started anymore.
Basically, the slave should be always synchronized with its master having the same copy of data, but for some reasons there might be inconsistency between both of them (unsafe statement in SBR, Slave is not read_only and was modified apart of replication queries, .. etc) which causes errors and stops the replication, e.g. if the master inserted a record which was already inserted on the slave (Duplicate entry) or updated/deleted a row which was not exist on the slave, ... etc.
To solve this issue, we have to either reverse what we have done on the slave (e.g. delete the inserted rows) if that was made by mistake and is known or we can skip executing those statements on the slave and continue the replication again (I'll focus on skipping a statement in this post as it needs different interaction in Classic and GTID replication).
Sample error messages (from SHOW SLAVE STATUS output): Last_SQL_Error: Could not execute Write_rows event on table test.t1; Duplicate entry '4' for key 'PRIMARY', Error_code: 1062; handler error HA_ERR_FOUND_DUPP_KEY; the event's master log mysql-bin.000304, end_log_pos 285 Last_SQL_Error: Could not execute Update_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 492 Last_SQL_Error: Could not execute Delete_rows event on table test.t1; Can't find record in 't1', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log mysql-bin.000304, end_log_pos 688 How to solve that issue ?CLASSIC REPLICATION
Solving this problem is a straight forward process in the classic replication setup, what only we need is to issue the following SQL commands on the slave's:
SQL> SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1; SQL> START SLAVE;GTID REPLICATION
Solving this problem is not a straight forward in GTID replication like it is in the Classic replication and the variable SQL_SLAVE_SKIP_COUNTER wont be useful in this area anymore.
To get this problem solved in a GTID replication we will need to inject an empty transaction as follows:
- Check which transaction is causing the problem:
SQL> SHOW SLAVE STATUS\G
.
.
Retrieved_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:1-7
Executed_Gtid_Set: 4f6d62ed-df65-11e3-b395-60672090eb04:1,
b9b4712a-df64-11e3-b391-60672090eb04:1-6
Auto_Position: 1
Retrieved_Gtid_Set means the retrieved GTIDs from the master
Executed_Gtid_Set means the executed GTIDs on the slave.
According to the above output, the slave retrieved GTIDs 1 to 7 from the master (b9b4712a-df64-11e3-b391-60672090eb04:1-7) but executed only 1 to 6 (b9b4712a-df64-11e3-b391-60672090eb04:1-6), so the problem is in transaction number 7.
- Inject an empty transaction:
SQL> SET GTID_NEXT='b9b4712a-df64-11e3-b391-60672090eb04:7';
SQL> BEGIN;COMMIT;
SQL> SET GTID_NEXT='AUTOMATIC';
SQL> START SLAVE;
BE CAUTIOUS: The first part of Executed_Gtid_Set (4f6d62ed-df65-11e3-b395-60672090eb04:1) is the locally executed GTIDs (not received from the master), while the second part (b9b4712a-df64-11e3-b391-60672090eb04:1-6) is the executed GTIDs which were retrieved from the master. You can check the master's UUID either by looking at the UUID value in "Retrieved_Gtid_Set", which is by definition the master's UUID, or by issuing SHOW GLOBAL VARIABLES LIKE 'server_uuid'; on the master server. So we should make sure that we are using the master's UUID when injecting the empty transaction, otherwise the problem will remain and the slave won't start.
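As a convenience cross-check (a sketch only, using the example values from above), the set of transactions that were retrieved but not yet executed - including the UUID that has to be used with GTID_NEXT - can be computed directly with the GTID_SUBTRACT() function:
SQL> SELECT GTID_SUBTRACT('b9b4712a-df64-11e3-b391-60672090eb04:1-7',
    ->                    '4f6d62ed-df65-11e3-b395-60672090eb04:1,b9b4712a-df64-11e3-b391-60672090eb04:1-6');
With the values from the example above this should return b9b4712a-df64-11e3-b391-60672090eb04:7, which is exactly the GTID to inject as an empty transaction.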
Note:
After starting the slave successfully in either classic or GTID replication we might need to use a combination of Percona tools pt-table-checksum and pt-table-sync to fix the inconsistency problem.
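A minimal sketch of such a consistency check and repair run could look as follows (host, user, password and the checksum schema are placeholders; always review what pt-table-sync would do with --print before running it with --execute):
# On the master: checksum all replicated tables and store the results
# in the replicated table percona.checksums.
shell> pt-table-checksum --replicate=percona.checksums h=master_host,u=checksum_user,p=secret
# Print the statements that would bring the slaves back in sync with the master.
shell> pt-table-sync --replicate percona.checksums h=master_host,u=checksum_user,p=secret --print
# Apply them only if the printed statements look sane.
shell> pt-table-sync --replicate percona.checksums h=master_host,u=checksum_user,p=secret --execute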
Re-initialize / re-build a slave
For many reasons we might end up having to re-build a slave to get the replication working again, e.g. if we stopped a slave for a while and the master purged a binary log file that is needed by that slave, or if there are so many duplicate entry errors that pt-table-checksum and pt-table-sync cannot be used. In that case we have to re-initialize the slave from the beginning by taking a fresh backup from the master server and restoring it on the slave. Let's check how we can do that in both replication methods.
How to solve that issue?
CLASSIC REPLICATION
Sample error message:
Last_IO_Errno: 1236 Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'Could not find first log file name in binary log index file'
Fix steps:
- Backup the master server by the following command: shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --master-data=2 --flush-logs --triggers --routines --events --hex-blob >/path/to/backupdir/full_backup-$TIMESTAMP.sql
- Restore the backup file on the slave: shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
- Get the binary logs information when the backup was taken: shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql|grep "CHANGE MASTER TO" CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;
- Issue the "CHANGE MASTER TO" command using the new information: SQL> CHANGE MASTER TO MASTER_LOG_FILE='mysql-bin.000011', MASTER_LOG_POS=120;
- Start the slave: SQL> START SLAVE;
NOTE:
The Xtrabackup tool could be used instead of mysqldump, especially if the database size is big. Check out this link for more information.
GTID REPLICATION
Sample error message:
Last_IO_Errno: 1236 Last_IO_Error: Got fatal error 1236 from master when reading data from binary log: 'The slave is connecting using CHANGE MASTER TO MASTER_AUTO_POSITION = 1, but the master has purged binary logs containing GTIDs that the slave requires.'
Fix steps:
- Backup the master server by the following command: shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --flush-logs --triggers --routines --events --hex-blob >/path/to/backupdir/full_backup-$TIMESTAMP.sql
- Check the GTID value when the backup was taken: shell> head -n 50 /path/to/backupdir/full_backup-$TIMESTAMP.sql|grep PURGED SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';
- Reset the GTID_EXECUTED and GTID_PURGED values on the slave: SQL> RESET MASTER;
- Restore the backup file on the slave: shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
- Make sure that the values of GTID_EXECUTED and GTID_PURGED are the correct ones:
SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_executed';
+---------------+------------------------------------------+
| Variable_name | Value                                    |
+---------------+------------------------------------------+
| gtid_executed | b9b4712a-df64-11e3-b391-60672090eb04:1-8 |
+---------------+------------------------------------------+
1 row in set (0.00 sec)

SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_purged';
+---------------+------------------------------------------+
| Variable_name | Value                                    |
+---------------+------------------------------------------+
| gtid_purged   | b9b4712a-df64-11e3-b391-60672090eb04:1-8 |
+---------------+------------------------------------------+
1 row in set (0.01 sec)
- Start the slave: SQL> START SLAVE;
NOTES:
- If we didn't reset the GTID_EXECUTED and GTID_PURGED values on the slave before restoring the backup file, the following error will appear:
shell> mysql -u root -p < /path/to/backupdir/full_backup-$TIMESTAMP.sql
ERROR 1840 (HY000): @@GLOBAL.GTID_PURGED can only be set when @@GLOBAL.GTID_EXECUTED is empty.
The above error indicates that the statement at the beginning of the backup file - "SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-8';" - failed because GTID_PURGED cannot be set unless GTID_EXECUTED is empty. Since GTID_EXECUTED is a read-only variable, the only way to empty its value is to issue "RESET MASTER" on the slave server before restoring the backup file.
- The Xtrabackup tool could be used as well instead of mysqldump to solve this problem without the need to reset the GTID_EXECUTED and GTID_PURGED values. Check out this link for more information.
While GTID provides many benefits over classic replication, it has different troubleshooting and fix strategies which must be known before deploying GTID in production systems.
Replication channel fail-over with Galera Cluster for MySQL
Sometimes it can be desirable to replicate from a Galera Cluster to a single MySQL slave or to another Galera Cluster. Reasons for this measure could be:
- An unstable network between two Galera Cluster locations.
- A separation of a reporting slave and the Galera Cluster so that heavy reports on the slave do not affect the Galera Cluster performance.
- Mixing different sources in a slave or a Galera Cluster (fan-in replication).
This article is based on earlier research work (see MySQL Cluster - Cluster circular replication with 2 replication channels) and uses the old MySQL replication style (without MySQL GTID).
Preconditions
- Enable the binary logs on 2 nodes of a Galera Cluster (we call them channel masters) with the log_bin variable.
- Set log_slave_updates = 1 on ALL Galera nodes.
- It is recommended to have small binary logs and relay logs in such a situation to reduce overhead of scanning the files (max_binlog_size = 100M).
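Expressed as configuration, the preconditions above could look roughly as follows on the two channel masters (server_id values and file names are placeholders, not a complete configuration):
# my.cnf on the channel masters (log_slave_updates = 1 on ALL Galera nodes)
[mysqld]
server_id         = 1        # must differ per node
log_bin           = bin-log
log_slave_updates = 1
max_binlog_size   = 100M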
Let us assume that for some reason the actual channel master of channel 1 breaks. As a consequence the slave of channel 1 does not receive any replication events any more. But we have to keep the replication stream up and running. So we have to switch the replication channel to channel master 2.
Switching the replication channel
For security reasons we should stop the slave of replication channel 1 first:
mysql> STOP SLAVE;
Then we have to find the actual relay log on the slave:
mysql> pager grep Relay_Log_File
mysql> SHOW SLAVE STATUS\G
mysql> nopager
Relay_Log_File: slave-relay-bin.000019
Next we have to find the last applied transaction on the slave:
mysql> SHOW RELAYLOG EVENTS IN 'slave-relay-bin.000019';
| slave-relay-bin.000019 | 3386717 | Query      | 5201 | 53745015 | BEGIN                          |
| slave-relay-bin.000019 | 3386794 | Table_map  | 5201 | 53745067 | table_id: 72 (test.test)       |
| slave-relay-bin.000019 | 3386846 | Write_rows | 5201 | 53745142 | table_id: 72 flags: STMT_END_F |
| slave-relay-bin.000019 | 3386921 | Xid        | 5201 | 53745173 | COMMIT /* xid=1457451 */       |
+------------------------+---------+------------+------+----------+--------------------------------+
This is transaction 1457451 which is the same on all Galera nodes.
On the new channel master of channel 2 we have to find now the matching binary log. This can be done best by matching times between the relay log and the binary log of master of channel 2.
On slave:
shell> ll *relay-bin*
-rw-rw---- 1 mysql mysql     336 May 22 20:32 slave-relay-bin.000018
-rw-rw---- 1 mysql mysql 3387029 May 22 20:37 slave-relay-bin.000019
On the master of channel 2:
shell> ll *bin-log*
-rw-rw---- 1 mysql mysql  2518737 May 22 19:57 bin-log.000072
-rw-rw---- 1 mysql mysql      143 May 22 19:57 bin-log.000073
-rw-rw---- 1 mysql mysql      165 May 22 20:01 bin-log.000074
-rw-rw---- 1 mysql mysql 62953648 May 22 20:40 bin-log.000075
It looks like binary log 75 of master 2 matches the relay log of our slave.
Now we have to find the same transaction on the master of channel 2:
mysql> pager grep -B 6 1457451
mysql> SHOW BINLOG EVENTS IN 'bin-log.000075';
mysql> nopager
| bin-log.000075 | 53744832 | Write_rows | 5201 | 53744907 | table_id: 72 flags: STMT_END_F |
| bin-log.000075 | 53744907 | Xid        | 5201 | 53744938 | COMMIT /* xid=1457450 */       |
| bin-log.000075 | 53744938 | Query      | 5201 | 53745015 | BEGIN                          |
| bin-log.000075 | 53745015 | Table_map  | 5201 | 53745067 | table_id: 72 (test.test)       |
| bin-log.000075 | 53745067 | Write_rows | 5201 | 53745142 | table_id: 72 flags: STMT_END_F |
| bin-log.000075 | 53745142 | Xid        | 5201 | 53745173 | COMMIT /* xid=1457451 */       |
+----------------+----------+------------+------+----------+--------------------------------+
We successfully found the transaction. The end_log_pos of its COMMIT, 53745173, is the position of the next transaction and thus the position from which we should continue replicating.
As a last step we have to set the slave to the master of replication channel 2:
mysql> CHANGE MASTER TO master_host='master2', master_port=3306, master_log_file='bin-log.000075', master_log_pos=53745173;
mysql> START SLAVE;
After a while the slave has caught up and is ready for the next fail-over back.
Discussion
We found during our experiments that an IST of a channel master does not lead to a gap or loss of events in the replication stream. So restarting a channel master does not require a channel fail-over as long as an IST can be used for resyncing the channel master with the Galera Cluster.
The increase of wsrep_cluster_conf_id is NOT an indication that a channel fail-over is required.
An SST resets the binary logs, so after an SST a slave will not replicate any more (the replication stops rather than silently losing events). So this method should be safe to use. If you find any situation where you experience troubles with channel fail-over, please let us know.
MySQL Environment MyEnv 1.0.5 has been released
FromDual has the pleasure to announce the release of the new version 1.0.5 of its popular MySQL, MariaDB and Percona Server multi-instance environment MyEnv.
The majority of improvements happened on the MySQL Backup Manager (mysql_bman) utility.
You can download MyEnv from here.
In the inconceivable case that you find a bug in MyEnv please report it to our Bugtracker.
Any feedback, statements and testimonials are welcome as well! Please send them to feedback@fromdual.com.
Upgrade from 1.0.x to 1.0.5
# cd ${HOME}/product
# tar xf /download/myenv-1.0.5.tar.gz
# rm -f myenv
# ln -s myenv-1.0.5 myenv
Upgrade from 1.0.2 or older to 1.0.3 or newer
If you are using plug-ins for showMyEnvStatus create all the links in the new directory structure:
cd ${HOME}/product/myenv
ln -s ../../utl/oem_agent.php plg/showMyEnvStatus/
Exchange the MyEnv section in ~/.bash_profile (make a backup of this file first!) by the following new one:
# BEGIN MyEnv
# Written by the MyEnv installMyEnv.php script.
. /etc/myenv/MYENV_BASE
MYENV_PWD=`pwd`
cd $MYENV_BASE/bin
. myenv.profile
cd $MYENV_BASE; $MYENV_BASE/bin/showMyEnvStatus.php; cd - > /dev/null
cd $MYENV_PWD
# END MyEnv
Changes in MyEnv 1.0.5
MyEnv
- Schema output in up was still ugly.
- Instance output is split correctly similar to up/down display.
- Instance list is now shorter when short instance names are used.
- ignore-passive option added for myEnv to ignore passive databases in an active/passive fail-over cluster. Based on existence of datadir.
- Upgrade instructions have been improved and denormalized.
- Only display existing OEM agents, criteria is directory in oratab must exist.
- Up instances were not reported when the mysqladmin command is missing (Galera binary tar balls), but the reason was not visible. The reason is now displayed as an error message.
- Lists each basedir candidate on a separate line when adding a new instance. More convenient to read if many basedirs are available.
- block_galera_node.sh: Insert instead of Append used for firewall rules. Only block load-balancer ports and not everything else.
- block_galera_node.sh made more flexible.
- Cleanup job errors with missing target. Fixed for MGB.
- Password on command line is not exposed anymore to log file.
- Instance name optionally added to binary-log backup file names.
- Binary logs are not cleaned-up because they are not copied with bck_ prefix (Bug #143).
- Config file example in --help output done more nicely.
- More strict option checking implemented.
- All schemas with non transactional tables are shown instead of just the first one.
- Help typo fixed and example improved.
- --ignore-memory-table-check implemented to avoid error exit with MEMORY tables.
- Preparation work for blocking MyISAM backup done.
GTID In Action
In a previous post I was talking about How to Setup MySQL Replication using the classic method (based on binary logs information). In this article I'll go through the transaction-based replication implementation using GTID in different scenarios.
The following topics will be covered in this blog:
- What is the concept of GTID protocol?
- GTID Replication Implementation
- Migration from classic replication to GTID replication
- GTID Benefits
GTID is a Global Transaction IDentifier which was introduced in MySQL 5.6.5. It is not only unique on the server it originated from, but it is unique among all servers in a replication setup.
GTID also guarantees consistency because once a transaction is committed on a server, any other transaction having the same GTID will be ignored, i.e. a committed transaction on a master will be applied only once on the slaves.
A GTID consists of two parts separated by a colon: {source_id:transaction_id}.
WHERE
- source_id: Normally the server's UUID on which the transaction originates. e.g. "b9b4712a-df64-11e3-b391-60672090eb04" .
- transaction_id: A sequence number determining the order of the committed transaction.
The following is the GTID for the third transaction on a server having the uuid "b9b4712a-df64-11e3-b391-60672090eb04":
b9b4712a-df64-11e3-b391-60672090eb04:3
As a new protocol in MySQL it comes with a set of new related variables; the following are the most important ones (IMHO):
- gtid-mode: ON|OFF to enable or disable GTID. This is not a Boolean variable (0 and 1 are not accepted).
- enforce-gtid-consistency: prevents executing statements that are not transactionally safe, like:
- CREATE TABLE .. SELECT.
- CREATE TEMPORARY TABLE (inside a transaction).
- Statements that update nontransactional tables inside a transaction.
- gtid_purged: The set of transactions that have been purged from the binary logs.
- gtid_executed: The set of transactions which already executed on that server.
- gtid_next: The GTID which will be used for the next transaction.
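To quickly inspect these settings and GTID sets on a running server, something like the following can be used (the values will of course differ per server):
SQL> SHOW GLOBAL VARIABLES LIKE 'gtid_mode';
SQL> SHOW GLOBAL VARIABLES LIKE 'enforce_gtid_consistency';
SQL> SELECT @@GLOBAL.gtid_executed, @@GLOBAL.gtid_purged;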
Fresh installation means that there's no data yet in the master or in other words, we are building a replication setup from scratch.
The implementation process is divided into two parts:
MASTER'S SIDE CONFIGURATION:
- Add the following variables to the MySQL configuration file (my.cnf):
[mysqld]
server-id=1
log-bin=mysql-bin
binlog_format=ROW
gtid-mode=on
enforce-gtid-consistency
log-slave-updates
- Restart MySQL so that configuration changes take place: shell> service mysql restart
- Create a MySQL user to be used by the slave: SQL> GRANT REPLICATION SLAVE ON *.* TO 'slave_user_name'@'slave_ip' IDENTIFIED BY 's3cret';
SLAVE'S SIDE CONFIGURATION:
- Add the following variables to the my.cnf file:
[mysqld]
server-id=2
log-bin=mysql-bin
binlog_format=ROW
relay_log=relay-log
skip-slave-start
gtid-mode=on
enforce-gtid-consistency
log-slave-updates
- Restart MySQL so that configuration changes take place: shell> service mysql restart
Set the master information on the slave:
Unlike the classic method, we don't need the master's binary log information; all we need is to specify MASTER_AUTO_POSITION=1 instead:
SQL> CHANGE MASTER TO
  -> MASTER_HOST='master_ip',
  -> MASTER_PORT=3306,
  -> MASTER_USER='slave_user_name',
  -> MASTER_PASSWORD='s3cret',
  -> MASTER_AUTO_POSITION=1;
- Start replication:
SQL> START SLAVE;
- Check the replication status:
SQL> SHOW SLAVE STATUS\G
    Slave_IO_State: Waiting for master to send event
    Master_Host: 127.0.0.1
    Master_User: gtid_repl
    Master_Port: 3320
    Connect_Retry: 60
    Master_Log_File: mysql-bin.000007
    Read_Master_Log_Pos: 191
    Relay_Log_File: relay-log.000004
    Relay_Log_Pos: 401
    Relay_Master_Log_File: mysql-bin.000007
    Slave_IO_Running: Yes
    Slave_SQL_Running: Yes
    Replicate_Do_DB:
    .
    .
    Retrieved_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:1-2
    Executed_Gtid_Set: b9b4712a-df64-11e3-b391-60672090eb04:1-2
    Auto_Position: 1
It's a very simple process to add a new slave to a running replication (or setup replication with existing data) where GTID is being used:
- Backup the master server shell> mysqldump -u root -p --all-databases --flush-privileges --single-transaction --flush-logs --triggers --routines --events --hex-blob >/path/to/backupdir/full_backup-$TIMESTAMP.sql
- On the new slave, use the same MySQL configuration as described above (except the server id which should be unique) and restart it.
- Restore the backup file taken from the master.
- Use CHANGE MASTER TO with MASTER_AUTO_POSITION=1.
- Start the slave.
Is it really as simple as that?! How did the slave know the backup position? What if some transactions were executed on the master after that backup?
Actually, when GTID is enabled, mysqldump includes the last transaction ID (GTID) at the time of taking the backup:
--
-- GTID state at the beginning of the backup
--
SET @@GLOBAL.GTID_PURGED='b9b4712a-df64-11e3-b391-60672090eb04:1-7';
After restoring the backup the variable GTID_EXECUTED will be equal to GTID_PURGED (the above value). When the slave starts, it first sends the range of GTIDs it has executed (GTID_EXECUTED) to the master so that the master can send back every missing transaction which was not yet applied on the slave.
NOTE:
If the database size is big (100GB or so) then using Xtrabackup tool instead of mysqldump here will be a very good idea. Check out this link for more information on how to use Xtrabackup tool to restore a slave server having GTID enabled.
Migration from classic replication to GTID replication
How to perform the migration?
To migrate an already running replication setup from the classic method to GTID replication, the following steps should be done:
- Ensure that all servers (master and slaves) are at the same point by setting the master server to read only (SET GLOBAL read_only=ON;) and waiting until all slaves have caught up with the master's data (see the sketch after this list for one way to verify that).
- Shut down MySQL on all servers and add the GTID variables to the configuration files.
- Besides the GTID variables, add read-only to the master's configuration and skip-slave-start to the slaves' configurations.
- Start the MySQL service on all servers.
- Issue the CHANGE MASTER command with MASTER_AUTO_POSITION=1 on all slaves and then start them.
- Make the master writable again with SET GLOBAL read_only=OFF; (don't forget to remove it from or comment it out in the master's my.cnf file as well).
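For the first step, one simple way to verify that a slave has really caught up is to take the master's current binary log position after setting it to read only and let each slave wait for it; the file name and position below are just placeholder values:
-- On the master (after SET GLOBAL read_only=ON;):
SQL> SHOW MASTER STATUS;
-- On each slave: blocks until the slave has executed everything up to that
-- master position (returns NULL on error; a timeout in seconds can be added as a third argument):
SQL> SELECT MASTER_POS_WAIT('mysql-bin.000011', 120);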
At the time of writing this article, an online migration is not possible - as you can see from the above steps, we have to shut down ALL servers at the same time - and that is because of two reasons:
- GTID can NOT be enabled online because GTID_MODE is a read only variable (having this variable to be dynamic is already in Oracle's plan).
- Replication can NOT be established between two or more servers having different values for GTID_MODE, i.e. either GTID is enabled on ALL servers or disabled on ALL servers.
There's a feature request (by the MySQL Devs team at Booking.com) for an extra GTID mode (ANONYMOUS_IN-GTID_OUT) which allows a slave to receive anonymous transactions (transactions from a master having GTID_MODE = OFF, which do not have GTIDs) and to assign GTIDs to those transactions. In this case, such a slave could be used as an intermediate server between a master having GTID disabled and slaves having GTID enabled (it would be a slave for the master and a master for the other slaves).
The online migration steps would be:
- Restart a slave (let's name it slaveA) using GTID_MODE = ANONYMOUS_IN-GTID_OUT.
- Do a rolling restart of the other slaves to use the normal GTID_MODE=ON and point them to slaveA as their new master.
- Point the application to write to slaveA instead of the old master.
- Restart the old master to use GTID_MODE=ON with slaveA as its master.
Note: This is not yet available in the Oracle binaries.
More information on this can be found here.
GTID Benefits
- Simplifies the setup of MySQL replication as the master's binary log information is not needed anymore (binary log file name and position).
- Consistency is guaranteed between master and slave as the committed transaction on the master will be applied only once on the slave.
- Simple to determine whether masters and slaves are consistent or not.
- The fail-over process is much easier. When the master fails, there is no need to calculate a slave's binary log file and position before promoting it to be the new master. MASTER_AUTO_POSITION=1 will do the job, as the same transactions carry the same GTIDs on all servers in the replication setup (see the sketch after this list).
- Automatic fail-over scripts are now much easier to implement.
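As an illustration only, re-pointing a remaining slave to the newly promoted master could then be as simple as this (host and credentials are placeholders):
SQL> STOP SLAVE;
SQL> CHANGE MASTER TO
  -> MASTER_HOST='new_master_ip',
  -> MASTER_PORT=3306,
  -> MASTER_USER='slave_user_name',
  -> MASTER_PASSWORD='s3cret',
  -> MASTER_AUTO_POSITION=1;
SQL> START SLAVE;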
To know how to troubleshoot GTID replication, check out the Replication Troubleshooting - Classic VS GTID blog.
Backup Manager for MySQL, MariaDB and Percona Server (mysql_bman)
The MySQL Backup Manager (mysql_bman) is a wrapper script around the standard MySQL backup tools. The problem with MySQL backup tools is that they have many options and are thus overcomplicated, so errors are easily made.
mysql_bman intends to make backups for MySQL easier and technically correct. This means that per default it should not allow inconsistent backups and should complain if some functions or parameters are used in the wrong way, in order to guarantee proper backups.
In addition it has added some nice features which are missing in standard MySQL backup tools or which are only known from Enterprise backup solutions.
Where to download mysql_bman
The Backup Manager for MySQL (mysql_bman) can be downloaded from our website.
What mysql_bman users say about it
Mathias Brem, DBA@DBAOnline, on LinkedIn:
Ow! Nice!
mysql backup manager is a very nice tool! Congratulations for FromDual! I made a shell script for catalog and maintained backups by xtrabackup, but mysql_bman is the best!
Xtrabackup + mysql_bman!!!!
The intention of mysql_bman is to assist you in bigger MySQL set-ups where you have to follow some backup policies and where you need a serious backup concept.
mysql_bman example
To give you an impression of the power of the MySQL Backup Manager let us have a look at a little example:
shell> mysql_bman --target=bman:secret@192.168.1.42 --type=full --mode=logical --policy=daily \
       --no-compress --backupdir=/mnt/slowdisk \
       --archive --archivedir=/mnt/nfsmount
With this backup method we do a logical full backup (mysqldump is triggered in the background). The backup is stored in the location for backups with the daily policy and is NOT compressed, to speed up the backup by saving CPU power AND because the backup device is a de-duplicating drive. Then the backup is archived to an NFS mount.
Backup types
To achieve this we have defined different backup types:
- full: full logical backup (mysqldump) of all schemas
- binlog: binary-log backup
- config: configuration file backup (my.cnf)
- structure: structure backup
- cleanup: clean-up of backup pieces older than n days
- schema: backup of one or more schemas
- privilege: privilege dump (SHOW GRANTS FOR)
A backup type is specified with the option --type=<backup_type>.
Backup modes
A backup can either be logical or physical. A logical backup is typically what you do with mysqldump. A physical backup is typically a physical file copy without looking into the data. That is what for example xtrabackup does.
The backup mode is specified with the option --mode=<backup_mode>. The following backup modes are available:
- logical: do a logical backup (mysqldump).
- physical: do a physical backup (mysqlbackup/innobackup/xtrabackup).
Backup policies
Further we have introduced different backup policies. Policies are there to distinguish how different backups should be treated.
The following backup policies exist:
- daily: directory to store daily backups
- weekly: directory to store weekly backups
- monthly: directory to store monthly backups
- quarterly: directory to store quarterly backups
- yearly: directory to store yearly backups
For example you could plan to do a daily MySQL backup with binary logs with a retention policy of 7 days. But once a week you want to do a weekly backup consisting of a full backup, a configuration backup and a structure dump. But this weekly backup you want to keep for 6 months. And because of legal reasons you want to do a yearly backup with a retention policy of 10 years.
A backup policy is specified with the --policy=<backup_policy> option. This leads us to the retention time:
Options
The retention time which should be applied to a specific backup policy can be specified with the option --retention=<period_in_days>. The retention option means that a backup is not deleted before this amount of days has passed when you run a clean-up job with mysql_bman.
Let us do an example:
shell> mysql_bman --type=cleanup --policy=daily --retention=30
This means that all backups in the daily policy should be deleted when they are older than 30 days.
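To give an idea of how the retention scenario described above could be scheduled, here is a crontab sketch. The times, paths and the configuration file /etc/mysql_bman.cnf (which would hold at least the --target connection settings) are assumptions, not something mysql_bman ships with:
# Daily full and binary-log backup, kept for 7 days
30 1 * * * mysql_bman --config=/etc/mysql_bman.cnf --type=full --policy=daily
45 1 * * * mysql_bman --config=/etc/mysql_bman.cnf --type=binlog --policy=daily
 0 3 * * * mysql_bman --config=/etc/mysql_bman.cnf --type=cleanup --policy=daily --retention=7
# Weekly full, configuration and structure backup, kept for about 6 months
30 2 * * 0 mysql_bman --config=/etc/mysql_bman.cnf --type=full --policy=weekly
40 2 * * 0 mysql_bman --config=/etc/mysql_bman.cnf --type=config --policy=weekly
50 2 * * 0 mysql_bman --config=/etc/mysql_bman.cnf --type=structure --policy=weekly
 0 4 * * 0 mysql_bman --config=/etc/mysql_bman.cnf --type=cleanup --policy=weekly --retention=180
# Yearly full backup, kept for 10 years
 0 2 1 1 * mysql_bman --config=/etc/mysql_bman.cnf --type=full --policy=yearly
 0 4 1 1 * mysql_bman --config=/etc/mysql_bman.cnf --type=cleanup --policy=yearly --retention=3650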
Target
With the --target option you specify the connect string to the database to backup. This database can be located either local (all backup types can be used) or remote (only client/server backup types can be used).
A target looks as follows: user/password@host:port (similar to URI specification) whereas you can omit password and port.
Backup location, archiving, compressing and clean-up
The --backupdir option controls the location of the backup files. The policy folders are automatically created under this --backupdir location.
If you have a second layer of backup stores (e.g. tapes, slow backup drives, de-duplicating drives or NFS drives) you can use the --archive option to copy your backup files to this second-layer storage, which is specified with the --archivedir option. For restore performance reasons it is recommended to always keep one or two generations of backups on your fast local drive. If you want to remove the backed-up files from the --backupdir destination after the archive job, use the --cleanup option.
If you want to omit compressing backups, either to save time or because your location uses de-duplicating drives, you can use the --no-compress option.
Especially for hosting companies a full database backup is typically not the right backup strategy because a restore of one specific customer (= schema) is very complicated. For this case we have the --per-schema option. mysql_bman will do a backup of the whole database schema by schema. Keep in mind: This breaks consistency among schemas!
Sometimes you want to do a schema backup only for some specific schemas for this you can use the --schema option. This option allows you to specify schemas to backup or not to backup. --schema=+a,+b means backup schema a and b. --schema=-a,-b means backup all schemas except a and b.
The second variant is less error prone because you cannot forget to back up a newly added schema.
MySQL does not know the concept of naming an instance (mysqld). But for bigger environments it could be useful to uniquely name each instance. For this purpose we have introduced the option --instance-name=<give_it_a_name>. This instance name should be unique within your whole company. But we do not enforce it atm. The instance name is used to name backup files and later to identify the backup history of an instance in our backup catalog and to allow us to track the files for restore.
mysql_bman configuration file
Specifying everything on the command line is cumbersome. Thus mysql_bman considers a configuration file specified with the --config=<config_file> option.
A mysql_bman configuration file looks for example as follows:
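As an illustration only - the exact file syntax here is an assumption, with option names mirroring the command-line options described above - such a file could look like this:
# /etc/mysql_bman.cnf - hypothetical example
target     = bman:secret@192.168.1.42
type       = full
mode       = logical
policy     = daily
no-compress
backupdir  = /mnt/slowdisk
archive
archivedir = /mnt/nfsmount
log        = /var/log/mysql_bman.log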
Simulate what happens
For the Sissies among us (as for example me) we have the --simulate option. This option simulates nearly all steps as far as possible without executing really anything. This option is either for testing some features or for debugging purposes.
Logging
If you want to track your backup history you can specify with the --log option where your mysql_bman log file should be located.
Using Catalog
It is very useful to store your backup metadata in a database so that you can check it later and find out the backup criteria (type, mode, instance-name, etc.) of specific backup runs. This can be achieved by using the catalog feature.
To activate this feature you have to create a database for the catalog (the default name is bman_catalog) and then create its tables by using the option --create in a special mysql_bman command (check the examples below).
Finally, to store your backup metadata in the catalog, all you have to do is add the option --catalog=catalog_connection_string to the normal mysql_bman command.
Check the examples below for using catalog in mysql_bman.
You can get a little more help with the following command:
shell> mysql_bman --help
Examples
Do a full (logical = default) backup and store it in the daily policy folder:
shell> mysql_bman --target=root/secret@127.0.0.1:3306 --type=full --policy=daily
Do a full physical backup and store it in the weekly policy folder:
shell> mysql_bman --target=root/secret@127.0.0.1 --type=full --mode=physical --policy=weekly
Do a binary-log backup omitting the password in the target and store it in the daily policy folder:
shell> mysql_bman --target=bman@192.168.1.42:3307 --type=binlog --policy=daily
Do a MySQL configuration backup and store it in the weekly policy folder:
shell> mysql_bman --target=root/secret@127.0.0.1:3306 --type=config --policy=weekly
Do a structure backup, store it in the monthly policy folder and name the file with the instance name:
shell> mysql_bman --target=root/secret@127.0.0.1:3306 --type=structure --policy=monthly --instance-name=prod-db
Do a weekly structure backup and archive it to another backup location:
shell> mysql_bman --target=root/secret@127.0.0.1:3306 --type=structure --policy=weekly --archive --archivedir=/mnt/tape
Do a schema backup omitting the mysql schema:
shell> mysql_bman --target=root/secret@127.0.0.1:3306 --type=schema --schema=-mysql --policy=daily --archive --archivedir=/mnt/tape
Do a schema backup only of foodmart and world and write each to its own file. Omit compressing these backups because they are located for example on de-duplicating drives:
shell> mysql_bman --target=root/secret@127.0.0.1:3306 --type=schema --schema=+foodmart,+world --per-schema --policy=daily --no-compress
Creation of a backup catalog (assuming you have already created a catalog database with the default name "bman_catalog"):
shell> mysql_bman --catalog=root/secret@127.0.0.1:3306 --create
Backups against the catalog:
shell> mysql_bman --target=root/secret@127.0.0.1:3306 --catalog=root/secret@127.0.0.1:3306 --instance-name=test --type=full --policy=daily
Privilege backup:
shell> mysql_bman --target=root/secret@127.0.0.1:3306 --type=privilege --policy=daily --mode=logical