Tag Archive for MySQL

MySQL Cluster 7.3.4 Released

The binary and source versions of MySQL Cluster 7.3.4 have now been made available at http://www.mysql.com/downloads/cluster/.

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.3.4 (compared to 7.3.3) is available from the 7.3.4 Change log.





MySQL Cluster: The Latest Developments in Management, Free webinar

On Thursday 23rd January, Thomas Nielsen and I will be hosting a webinar explaining the latest developments in managing MySQL Cluster – including migrating from the MySQL Cluster Auto-Installer to MCM. As always, the webinar is free but please register here.

Note that we’ll be covering what’s new in MySQL Cluster Manager 1.3 which went GA this week.

By their very nature, clustered environments involve more effort and resources to administer than standalone systems, and this holds true for MySQL Cluster, the database designed for web-scale throughput with carrier-grade availability.

The MySQL Cluster Auto-Installer guides you through defining and running a well configured MySQL Cluster database – combining auto-discovery of platform resources with built-in best practices in an intuitive web-based GUI.

MySQL Cluster Manager (available as part of the commercial MySQL Cluster Carrier Grade Edition) simplifies the ongoing management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one! These tasks include configuration, starting & stopping the cluster, upgrades, backup/restore and – new in MCM 1.3 – importing a running cluster.
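
As an illustration only (a sketch, not taken from the webinar itself), upgrading every node in a cluster becomes a single MCM client command of this form, where 7_3_3 is a previously added package and mycluster is a placeholder cluster name:

mcm> upgrade cluster --package=7_3_3 mycluster;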

Join this webcast to get up to speed on the latest developments in these tools and learn how to exploit them to make management of MySQL Cluster simple, efficient and reliable.

Times:

  • Thu, Jan 23: 09:00 Pacific time (America)
  • Thu, Jan 23: 10:00 Mountain time (America)
  • Thu, Jan 23: 11:00 Central time (America)
  • Thu, Jan 23: 12:00 Eastern time (America)
  • Thu, Jan 23: 15:00 São Paulo time
  • Thu, Jan 23: 17:00 UTC
  • Thu, Jan 23: 17:00 Western European time
  • Thu, Jan 23: 18:00 Central European time
  • Thu, Jan 23: 19:00 Eastern European time
  • Thu, Jan 23: 22:30 India, Sri Lanka
  • Fri, Jan 24: 01:00 Singapore/Malaysia/Philippines time
  • Fri, Jan 24: 01:00 China time
  • Fri, Jan 24: 02:00 Japan
  • Fri, Jan 24: 04:00 NSW, ACT, Victoria, Tasmania (Australia)

Even if you can’t join the live webinar, it’s worth registering as you’ll be emailed a link to the replay as soon as it’s available.





MCM 1.3 is GA – Importing a running Cluster into MySQL Cluster Manager

MySQL Cluster Manager 1.3.0 is now Generally Available and can be downloaded from the Oracle Software Delivery Cloud. The release contains a number of enhancements including performance improvements, handling larger clusters and of course bug fixes. The other big feature is that you can now import an existing, running MySQL Cluster instance into MCM without having to stop it first – this is the topic for this post.

In the past, we had a nice browser-based tool (the MySQL Cluster Auto-Installer) to get a well configured cluster up and running (tuned to your environment) and we also had MySQL Cluster Manager to simplify the ongoing management of the cluster. Unfortunately, if you wanted to bring a cluster you’d created with the auto-installer (or built by hand) under MCM’s control then you first had to shut it down and follow a manual procedure. MCM 1.3 introduces an import command that takes a running cluster and brings it under the control of MCM without having to stop the cluster (or suspend reads or writes). There are still some manual steps involved and the first half of this post will step through this process.

Once the import has been completed, the post will then step through a number of MCM tasks to test that everything has gone to plan and also to give a reminder of how simple operations such as upgrades, backup/restore and adding new nodes are once you’re using MCM.

 

Specify Cluster topology in MCM

For this example, a cluster is used that was created using the auto-installer that was part of MySQL Cluster 7.3.2 (the version is significant: from MySQL Cluster 7.3.3 onwards, the auto-installer creates configuration (.cnf) files for the mysqld processes rather than specifying everything on the command-line, which simplifies the import process).

Before going any further, some data is added to the database so that I can later check that it’s not been lost:

[mysql@connect13a ~]$ mysql -h 127.0.0.1 -P3306 -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.34-ndb-7.2.14-cluster-gpl MySQL Cluster Community Server (GPL)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE DATABASE clusterdb;USE clusterdb;
Query OK, 1 row affected (0.06 sec)

mysql> CREATE TABLE simples (id INT AUTO_INCREMENT PRIMARY KEY, time TIMESTAMP) ENGINE=ndb;
Query OK, 0 rows affected (0.15 sec)

mysql> REPLACE INTO simples VALUES ();
Query OK, 1 row affected (0.00 sec)

mysql> REPLACE INTO simples VALUES ();
Query OK, 1 row affected (0.01 sec)

mysql> REPLACE INTO simples VALUES ();
Query OK, 1 row affected (0.00 sec)

mysql> REPLACE INTO simples VALUES ();
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM simples;
+----+---------------------+
| id | time                |
+----+---------------------+
|  3 | 2014-01-07 11:13:30 |
|  1 | 2014-01-07 11:13:28 |
|  2 | 2014-01-07 11:13:29 |
|  4 | 2014-01-07 11:13:30 |
+----+---------------------+
4 rows in set (0.00 sec)

The topology of the resulting cluster can be checked using the show command:

[mysql@connect13b ~]$ cluster_7_2_14/bin/ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @192.168.56.103  (mysql-5.5.34 ndb-7.2.14, Nodegroup: 0, Master)
id=2    @192.168.56.104  (mysql-5.5.34 ndb-7.2.14, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=49   @192.168.56.101  (mysql-5.5.34 ndb-7.2.14)
id=52   @192.168.56.102  (mysql-5.5.34 ndb-7.2.14)

[mysqld(API)]   8 node(s)
id=50 (not connected, accepting connect from 192.168.56.101)
id=51 (not connected, accepting connect from 192.168.56.101)
id=53 (not connected, accepting connect from 192.168.56.102)
id=54 (not connected, accepting connect from 192.168.56.102)
id=55   @192.168.56.101  (mysql-5.5.34 ndb-7.2.14)
id=56   @192.168.56.101  (mysql-5.5.34 ndb-7.2.14)
id=57   @192.168.56.102  (mysql-5.5.34 ndb-7.2.14)
id=58   @192.168.56.102  (mysql-5.5.34 ndb-7.2.14)	

To define (but not create) the cluster in MCM, the following entities need to be defined:

  • site: the list of hosts that the cluster will run on
  • package: location of the cluster binaries on each of the hosts in the site
  • cluster: the cluster itself (the collection of nodes/processes that make up the cluster)

The MCM daemon mcmd must first be started on each of the target hosts and the mcm client run on any host before creating each of these entities:


[mysql@connect13a ~]$ ./mcm1.3/mcm1.3.0/bin/mcmd&
MySQL Cluster Manager 1.3.0 (64bit) started
Connect to MySQL Cluster Manager by running "/home/mysql/mcm1.3/mcm1.3.0/bin/mcm" -a connect13a.localdomain:1862

[mysql@connect13b ~]$ ./mcm1.3/mcm1.3.0/bin/mcmd&
[1] 3068

[mysql@connect13b ~]$ MySQL Cluster Manager 1.3.0 (64bit) started
Connect to MySQL Cluster Manager by running "/home/mysql/mcm1.3/mcm1.3.0/bin/mcm" -a connect13b.localdomain:1862

[mysql@connect13c ~]$  ./mcm1.3/mcm1.3.0/bin/mcmd&
[1] 1922
[mysql@connect13c ~]$ MySQL Cluster Manager 1.3.0 (64bit) started
Connect to MySQL Cluster Manager by running "/home/mysql/mcm1.3/mcm1.3.0/bin/mcm" -a connect13c.localdomain:1862

[mysql@connect13d ~]$  ./mcm1.3/mcm1.3.0/bin/mcmd&
[1] 1936
[mysql@connect13d ~]$ MySQL Cluster Manager 1.3.0 (64bit) started
Connect to MySQL Cluster Manager by running "/home/mysql/mcm1.3/mcm1.3.0/bin/mcm" -a connect13d.localdomain:1862

[mysql@connect13a ~]$ ./mcm1.3/mcm1.3.0/bin/mcm
MySQL Cluster Manager client started.
This wrapper will spawn the mysql client to connect to mcmd

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 1.3.0 MySQL Cluster Manager

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mcm> CREATE SITE --hosts=192.168.56.101,192.168.56.102,192.168.56.103,192.168.56.104 mysite;
+---------------------------+
| Command result            |
+---------------------------+
| Site created successfully |
+---------------------------+
1 row in set (1.34 sec)

mcm> add package --basedir=/home/mysql/cluster_7_2_14 7_2_14;
+----------------------------+
| Command result             |
+----------------------------+
| Package added successfully |
+----------------------------+
1 row in set (0.40 sec)

mcm> CREATE CLUSTER --import --package=7_2_14 --processhosts=ndb_mgmd:49@192.168.56.101,
ndb_mgmd:52@192.168.56.102,ndbmtd:1@192.168.56.103,ndbmtd:2@192.168.56.104,mysqld:55@192.168.56.101,
mysqld:56@192.168.56.101,mysqld:57@192.168.56.102,mysqld:58@192.168.56.102,ndbapi:50@192.168.56.101,
ndbapi:51@192.168.56.101,ndbapi:53@192.168.56.102,ndbapi:54@192.168.56.102 mycluster;
+------------------------------+
| Command result               |
+------------------------------+
| Cluster created successfully |
+------------------------------+
1 row in set (0.34 sec)

mcm> SHOW STATUS -r mycluster;
+--------+----------+-----------------+--------+-----------+---------+
| NodeId | Process  | Host            | Status | Nodegroup | Package |
+--------+----------+-----------------+--------+-----------+---------+
| 49     | ndb_mgmd | 192.168.56.101  | import |           | 7_2_14  |
| 52     | ndb_mgmd | 192.168.56.102  | import |           | 7_2_14  |
| 1      | ndbmtd   | 192.168.56.103  | import | n/a       | 7_2_14  |
| 2      | ndbmtd   | 192.168.56.104  | import | n/a       | 7_2_14  |
| 55     | mysqld   | 192.168.56.101  | import |           | 7_2_14  |
| 56     | mysqld   | 192.168.56.101  | import |           | 7_2_14  |
| 57     | mysqld   | 192.168.56.102  | import |           | 7_2_14  |
| 58     | mysqld   | 192.168.56.102  | import |           | 7_2_14  |
| 50     | ndbapi   | *192.168.56.101 | import |           |         |
| 51     | ndbapi   | *192.168.56.101 | import |           |         |
| 53     | ndbapi   | *192.168.56.102 | import |           |         |
| 54     | ndbapi   | *192.168.56.102 | import |           |         |
+--------+----------+-----------------+--------+-----------+---------+
12 rows in set (0.06 sec)

This post doesn’t attempt to go into details about all of the commands shown above but if you’re new to MCM then check out the MySQL Cluster documentation. The --import option sets the state of each of the nodes to import; they will stay in that state until the import command is run later.

When the auto-installer created the cluster, it changed a number of the configuration parameters from their defaults (either with command-line options or using configuration files). Using the Linux ps -ef command you can identify all of these settings (either directly or by examining the configuration files that are referenced):

[mysql@connect13a ~]$ ps -ef | grep ndb_mgmd
mysql     2766     1  2 Jan07 ?        00:34:27 /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial 
--ndb-nodeid=49 --config-dir=/home/mysql/MySQL_Cluster/49/ 
--config-file=/home/mysql/MySQL_Cluster/49/config.ini

[mysql@connect13b ~]$ ps -ef | grep ndb_mgmd
mysql     2426     1  2 Jan07 ?        00:32:26 /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial 
--ndb-nodeid=52 --config-dir=/home/mysql/MySQL_Cluster/52/ 
--config-file=/home/mysql/MySQL_Cluster/52/config.ini

[mysql@connect13a ~]$ ps -ef | grep mysqld
mysql     3289     1  1 Jan07 ?        00:15:07 /home/mysql/cluster_7_2_14/bin/mysqld --no-defaults 
--datadir=/home/mysql/MySQL_Cluster/55/ --tmpdir=/home/mysql/MySQL_Cluster/55/tmp 
--basedir=/home/mysql/cluster_7_2_14/ --port=3306 --ndbcluster --ndb-nodeid=55 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186, 
--socket=/home/mysql/MySQL_Cluster/55/mysql.socket
mysql     3392     1  1 Jan07 ?        00:15:08 /home/mysql/cluster_7_2_14/bin/mysqld --no-defaults 
--datadir=/home/mysql/MySQL_Cluster/56/ --tmpdir=/home/mysql/MySQL_Cluster/56/tmp 
--basedir=/home/mysql/cluster_7_2_14/ --port=3307 --ndbcluster --ndb-nodeid=56 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186, 
--socket=/home/mysql/MySQL_Cluster/56/mysql.socket

[mysql@connect13b ~]$ ps -ef | grep mysqld
mysql     2884     1  1 Jan07 ?        00:14:41 /home/mysql/cluster_7_2_14/bin/mysqld --no-defaults 
--datadir=/home/mysql/MySQL_Cluster/57/ --tmpdir=/home/mysql/MySQL_Cluster/57/tmp 
--basedir=/home/mysql/cluster_7_2_14/ --port=3306 --ndbcluster --ndb-nodeid=57 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186, 
--socket=/home/mysql/MySQL_Cluster/57/mysql.socket
mysql     2981     1  1 Jan07 ?        00:14:42 /home/mysql/cluster_7_2_14/bin/mysqld --no-defaults 
--datadir=/home/mysql/MySQL_Cluster/58/ --tmpdir=/home/mysql/MySQL_Cluster/58/tmp 
--basedir=/home/mysql/cluster_7_2_14/ --port=3307 --ndbcluster --ndb-nodeid=58 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186, 
--socket=/home/mysql/MySQL_Cluster/58/mysql.socket

[mysql@connect13c ~]$ ps -ef | grep ndbmtd
mysql     1822     1 12 Jan07 ?        02:35:06 /home/mysql/cluster_7_2_14/bin/ndbmtd 
--ndb-nodeid=1 --ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,
mysql     1823  1822 12 Jan07 ?        02:35:06 /home/mysql/cluster_7_2_14/bin/ndbmtd 
--ndb-nodeid=1 --ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,

[mysql@connect13d ~]$ ps -ef | grep ndbmtd | grep nodeid=2
mysql     1835     1  0 Jan07 ?        00:00:52 /home/mysql/cluster_7_2_14/bin/ndbmtd 
--ndb-nodeid=2 --ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,
mysql     1836  1835 12 Jan07 ?        02:30:46 /home/mysql/cluster_7_2_14/bin/ndbmtd 
--ndb-nodeid=2 --ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,

[mysql@connect13a ~]$ cat /home/mysql/MySQL_Cluster/49/config.ini

# Configuration file for MyCluster
#

[NDB_MGMD DEFAULT]
Portnumber=1186

[NDB_MGMD]
NodeId=49
HostName=192.168.56.101
DataDir=/home/mysql/MySQL_Cluster/49/
Portnumber=1186

[NDB_MGMD]
NodeId=52
HostName=192.168.56.102
DataDir=/home/mysql/MySQL_Cluster/52/
Portnumber=1186

[TCP DEFAULT]
SendBufferMemory=4M
ReceiveBufferMemory=4M

[NDBD DEFAULT]
BackupMaxWriteSize=1M
BackupDataBufferSize=16M
BackupLogBufferSize=4M
BackupMemory=20M
BackupReportFrequency=10
MemReportFrequency=30
LogLevelStartup=15
LogLevelShutdown=15
LogLevelCheckpoint=8
LogLevelNodeRestart=15
DataMemory=58M
IndexMemory=9M
MaxNoOfTables=4096
MaxNoOfTriggers=3500
NoOfReplicas=2
StringMemory=25
DiskPageBufferMemory=64M
SharedGlobalMemory=20M
LongMessageBuffer=32M
MaxNoOfConcurrentTransactions=16384
BatchSizePerLocalScan=512
FragmentLogFileSize=64M
NoOfFragmentLogFiles=16
RedoBuffer=32M
MaxNoOfExecutionThreads=2
StopOnError=false
LockPagesInMainMemory=1
TimeBetweenEpochsTimeout=32000
TimeBetweenWatchdogCheckInitial=60000
TransactionInactiveTimeout=60000
HeartbeatIntervalDbDb=15000
HeartbeatIntervalDbApi=15000

[NDBD]
NodeId=1
HostName=192.168.56.103
DataDir=/home/mysql/MySQL_Cluster/1/

[NDBD]
NodeId=2
HostName=192.168.56.104
DataDir=/home/mysql/MySQL_Cluster/2/

[MYSQLD DEFAULT]

[MYSQLD]
NodeId=55
HostName=192.168.56.101

[MYSQLD]
NodeId=56
HostName=192.168.56.101

[MYSQLD]
NodeId=57
HostName=192.168.56.102

[MYSQLD]
NodeId=58
HostName=192.168.56.102

[API]
NodeId=50
HostName=192.168.56.101

[API]
NodeId=51
HostName=192.168.56.101

[API]
NodeId=53
HostName=192.168.56.102

[API]
NodeId=54
HostName=192.168.56.102

Now that we have a view of all of the configuration parameters, a subset of them needs to be applied to the definition of the cluster in MCM (or else MCM will override them). Note that not all of the settings need porting to MCM – for example, Portnumber=1186 can be skipped as that is already the default value, and configdir can be skipped as MCM will use its own. So, based on the command-line options provided to the executables and on the configuration files, the configuration parameters for the cluster defined in MCM are set:

mcm> SET SendBufferMemory:ndbmtd+ndbmtd=4M,ReceiveBufferMemory:ndbmtd+ndbmtd=4M,
SendBufferMemory:ndbmtd+mysqld=4M,ReceiveBufferMemory:ndbmtd+mysqld=4M,BackupLogBufferSize:ndbmtd=4M,
BackupMemory:ndbmtd=20M,BackupReportFrequency:ndbmtd=10,MemReportFrequency:ndbmtd=30,
LogLevelStartup:ndbmtd=15,LogLevelShutdown:ndbmtd=15,LogLevelCheckpoint:ndbmtd=8,
LogLevelNodeRestart:ndbmtd=15,DataMemory:ndbmtd=58M,IndexMemory:ndbmtd=9M,MaxNoOfTables:ndbmtd=4096,
MaxNoOfTriggers:ndbmtd=3500,SharedGlobalMemory:ndbmtd=20M,LongMessageBuffer:ndbmtd=32M,
MaxNoOfConcurrentTransactions:ndbmtd=16384,BatchSizePerLocalScan:ndbmtd=512,
FragmentLogFileSize:ndbmtd=64M,StopOnError:ndbmtd=true,LockPagesInMainMemory:ndbmtd=1,
TimeBetweenEpochsTimeout:ndbmtd=32000,TimeBetweenWatchdogCheckInitial:ndbmtd=60000,
TransactionInactiveTimeout:ndbmtd=60000,HeartbeatIntervalDbDb:ndbmtd=15000,
HeartbeatIntervalDbApi:ndbmtd=15000,DataDir:ndbmtd:1=/home/mysql/MySQL_Cluster/1/,
DataDir:ndbmtd:2=/home/mysql/MySQL_Cluster/2/,DataDir:mysqld:55=/home/mysql/MySQL_Cluster/55/,
DataDir:mysqld:56=/home/mysql/MySQL_Cluster/56/,DataDir:mysqld:57=/home/mysql/MySQL_Cluster/57/,
DataDir:mysqld:58=/home/mysql/MySQL_Cluster/58/,tmpdir:mysqld:55=/home/mysql/MySQL_Cluster/55/tmp,
tmpdir:mysqld:56=/home/mysql/MySQL_Cluster/56/tmp,tmpdir:mysqld:57=/home/mysql/MySQL_Cluster/57/tmp,
tmpdir:mysqld:58=/home/mysql/MySQL_Cluster/58/tmp,
socket:mysqld:55=/home/mysql/MySQL_Cluster/55/mysql.socket,
socket:mysqld:56=/home/mysql/MySQL_Cluster/56/mysql.socket,
socket:mysqld:57=/home/mysql/MySQL_Cluster/57/mysql.socket,
socket:mysqld:58=/home/mysql/MySQL_Cluster/58/mysql.socket,port:mysqld:56=3307,
port:mysqld:58=3307 mycluster;

+-----------------------------------+
| Command result                    |
+-----------------------------------+
| Cluster reconfigured successfully |
+-----------------------------------+
1 row in set (0.44 sec)

Note that as MCM is not yet managing the running cluster, you can break this up into multiple SET commands as it doesn’t need to restart any processes.
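
For example, the single large SET command above could equally have been issued as a series of smaller ones while the cluster is still in the import state – a sketch using a few of the same attributes:

mcm> SET DataMemory:ndbmtd=58M,IndexMemory:ndbmtd=9M mycluster;

mcm> SET port:mysqld:56=3307,port:mysqld:58=3307 mycluster;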
 

Prepare running cluster for import

As MCM takes over some functions from MySQL Cluster, we need to ensure that a couple of rules are imposed on the cluster so that there are no conflicts.

MCM is responsible for making sure that the management nodes are using the correct version of the configuration data and so we don’t want the management nodes holding onto older versions – this leads to the first rule, that the configuration cache should be disabled. This means restarting the management nodes with the config-cache parameter set to FALSE:

[mysql@connect13a ~]$ ps -ef | grep ndb_mgmd
mysql     2766     1  2 Jan07 ?        00:34:27 /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial 
--ndb-nodeid=49 --config-dir=/home/mysql/MySQL_Cluster/49/ 
--config-file=/home/mysql/MySQL_Cluster/49/config.ini
[mysql@connect13a ~]$ kill -9 2766
[mysql@connect13a ~]$  /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial --ndb-nodeid=49 
--config-dir=/home/mysql/MySQL_Cluster/49/ --config-file=/home/mysql/MySQL_Cluster/49/config.ini 
--config-cache=FALSE

[mysql@connect13b ~]$ ps -ef | grep ndb_mgmd
mysql     2426     1  2 Jan07 ?        00:32:26 /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial 
--ndb-nodeid=52 --config-dir=/home/mysql/MySQL_Cluster/52/ 
--config-file=/home/mysql/MySQL_Cluster/52/config.ini
[mysql@connect13b ~]$ kill -9 2426
[mysql@connect13b ~]$  /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial --ndb-nodeid=52 
--config-dir=/home/mysql/MySQL_Cluster/52/ --config-file=/home/mysql/MySQL_Cluster/52/config.ini 
--config-cache=FALSE

MCM is responsible for making sure that a node is restarted in the event of the process stopping (this only happens if StopOnError is set to FALSE) and so the data nodes no longer need their angel processes (the first of the two ndbmtd processes we see for each data node). This means the second rule is that all of these angel processes must be killed:

[mysql@connect13d ~]$ ps -ef | grep ndbmtd | grep nodeid=1
mysql     1822     1  0 Jan07 ?        00:00:52 /home/mysql/cluster_7_2_14/bin/ndbmtd --ndb-nodeid=1 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,
mysql     1823  1822 12 Jan07 ?        02:30:46 /home/mysql/cluster_7_2_14/bin/ndbmtd --ndb-nodeid=1 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,
[mysql@connect13d ~]$ kill -9 1822

[mysql@connect13d ~]$ ps -ef | grep ndbmtd | grep nodeid=2
mysql     1835     1  0 Jan07 ?        00:00:52 /home/mysql/cluster_7_2_14/bin/ndbmtd --ndb-nodeid=2 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,
mysql     1836  1835 12 Jan07 ?        02:30:46 /home/mysql/cluster_7_2_14/bin/ndbmtd --ndb-nodeid=2 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,
[mysql@connect13d ~]$ kill -9 1835

In order to manage the cluster, MCM needs to be able to connect to each of the MySQL Servers and so the mcmd user must be created on each of the MySQL Servers (unless you’re exploiting the ability to share user credentials between multiple MySQL Servers, in which case it only needs doing once):

[mysql@connect13a ~]$ mysql -h 127.0.0.1 -P3306 -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.5.34-ndb-7.2.14-cluster-gpl MySQL Cluster Community Server (GPL)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'mcmd'@'%' IDENTIFIED BY 'super';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'127.0.0.1' IDENTIFIED BY 'super' WITH GRANT OPTION;
Query OK, 0 rows affected (0.01 sec)

[mysql@connect13a ~]$ mysql -h 127.0.0.1 -P3307 -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.5.34-ndb-7.2.14-cluster-gpl MySQL Cluster Community Server (GPL)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'mcmd'@'%' IDENTIFIED BY 'super';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'127.0.0.1' IDENTIFIED BY 'super' WITH GRANT OPTION;
Query OK, 0 rows affected (0.01 sec)

[mysql@connect13b ~]$ mysql -h 127.0.0.1 -P3306 -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.5.34-ndb-7.2.14-cluster-gpl MySQL Cluster Community Server (GPL)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'mcmd'@'%' IDENTIFIED BY 'super';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'127.0.0.1' IDENTIFIED BY 'super' WITH GRANT OPTION;
Query OK, 0 rows affected (0.01 sec)

[mysql@connect13b ~]$ mysql -h 127.0.0.1 -P3307 -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.5.34-ndb-7.2.14-cluster-gpl MySQL Cluster Community Server (GPL)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'mcmd'@'%' IDENTIFIED BY 'super';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'127.0.0.1' IDENTIFIED BY 'super' WITH GRANT OPTION;
Query OK, 0 rows affected (0.01 sec)

As the running cluster was generated by a pre-MySQL Cluster 7.3.3 version of the auto-installer, all of the mysqld settings were specified as command-line options. The MCM import command is very restrictive about what command-line options are allowed and so most of the options need moving to .cnf files and the mysqld processes need to be restarted to use those configuration files (note that the cluster remains available throughout this process but there is a rolling restart of the MySQL Servers, so applications may need to temporarily switch the server they’re connected to – if this isn’t already handled by load balancing). These are the configuration files that were manually created and the commands to restart each MySQL Server:

[mysql@connect13a ~]$ cat  /home/mysql/MySQL_Cluster/55.cnf
[mysqld]
tmpdir=/home/mysql/MySQL_Cluster/55/tmp
datadir=/home/mysql/MySQL_Cluster/55/
basedir=/home/mysql/cluster_7_2_14/
socket=/home/mysql/MySQL_Cluster/55/mysql.socket
port=3306
ndbcluster
ndb-nodeid=55
ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186

[mysql@connect13a ~]$ mysqladmin -h 127.0.0.1 -P 3306 -u root shutdown
[mysql@connect13a ~]$ /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/55.cnf&

[mysql@connect13a ~]$ cat /home/mysql/MySQL_Cluster/56.cnf
[mysqld]
tmpdir=/home/mysql/MySQL_Cluster/56/tmp
datadir=/home/mysql/MySQL_Cluster/56/
basedir=/home/mysql/cluster_7_2_14/
socket=/home/mysql/MySQL_Cluster/56/mysql.socket
port=3307
ndbcluster
ndb-nodeid=56
ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186

[mysql@connect13a ~]$ mysqladmin -h 127.0.0.1 -P 3307 -u root shutdown
[mysql@connect13a ~]$ /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/56.cnf&

[mysql@connect13b ~]$ cat /home/mysql/MySQL_Cluster/57.cnf
[mysqld]
tmpdir=/home/mysql/MySQL_Cluster/57/tmp
datadir=/home/mysql/MySQL_Cluster/57/
basedir=/home/mysql/cluster_7_2_14/
socket=/home/mysql/MySQL_Cluster/57/mysql.socket
port=3306
ndbcluster
ndb-nodeid=57
ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186

[mysql@connect13b ~]$ mysqladmin -h 127.0.0.1 -P 3306 -u root shutdown
[mysql@connect13b ~]$ /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/57.cnf&

[mysql@connect13b ~]$ cat /home/mysql/MySQL_Cluster/58.cnf
[mysqld]
tmpdir=/home/mysql/MySQL_Cluster/58/tmp
datadir=/home/mysql/MySQL_Cluster/58/
basedir=/home/mysql/cluster_7_2_14/
socket=/home/mysql/MySQL_Cluster/58/mysql.socket
port=3307
ndbcluster
ndb-nodeid=58
ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186

[mysql@connect13b ~]$ mysqladmin -h 127.0.0.1 -P 3307 -u root shutdown
[mysql@connect13b ~]$ /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/58.cnf&

 

Ensure correct PID files in place

MCM tracks each of the processes in the cluster using the process IDs held in pid files. For the data nodes, MCM will automatically fetch the process IDs from the running cluster’s pid files and so we need to make sure that they’re accurate (that they’re in the right place and contain the correct IDs):

[mysql@connect13c ~]$ ps -ef | grep ndbmtd
mysql     1823     1 12 Jan07 ?        02:35:06 /home/mysql/cluster_7_2_14/bin/ndbmtd 
--ndb-nodeid=1 --ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,

[mysql@connect13c ~]$ echo 1823 > MySQL_Cluster/1/ndb_1.pid

[mysql@connect13d ~]$ ps -ef | grep ndbmtd
mysql     1836     1 12 Jan07 ?        02:34:03 /home/mysql/cluster_7_2_14/bin/ndbmtd 
--ndb-nodeid=2 --ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,

[mysql@connect13d ~]$ echo 1836 > MySQL_Cluster/2/ndb_2.pid

For the MySQL Servers and management nodes, the pid files need to be created within the MCM directory:

[mysql@connect13a ~]$ ps -ef | grep mysqld
mysql    11145  1226  1 09:49 pts/1    00:00:02 /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/55.cnf
mysql    11178  1226  1 09:50 pts/1    00:00:01 /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/56.cnf

[mysql@connect13a ~]$ echo 11145 > mcm1.3/mcm_data/clusters/mycluster/pid/pid.55
[mysql@connect13a ~]$ echo 11178 > mcm1.3/mcm_data/clusters/mycluster/pid/pid.56

[mysql@connect13b ~]$ ps -ef | grep mysqld
mysql     11247  8477  1 09:55 pts/1    00:00:03 /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/57.cnf
mysql     11253  8477  0 09:57 pts/1    00:00:01 /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/58.cnf

[mysql@connect13b ~]$ echo 11247 > mcm1.3/mcm_data/clusters/mycluster/pid/pid.57
[mysql@connect13b ~]$ echo 11253 > mcm1.3/mcm_data/clusters/mycluster/pid/pid.58

[mysql@connect13a pid]$ ps -ef | grep ndb_mgmd
mysql     8859     1  2 08:23 ?        00:00:23 /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial 
--ndb-nodeid=49 --config-dir=/home/mysql/MySQL_Cluster/49/ 
--config-file=/home/mysql/MySQL_Cluster/49/config.ini --config-cache=FALSE

[mysql@connect13b ~]$ echo 8859 > mcm1.3/mcm_data/clusters/mycluster/pid/pid.49

[mysql@connect13b pid]$ ps -ef | grep ndb_mgmd
mysql     7092     1  1 08:25 ?        00:00:17 /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial 
--ndb-nodeid=52 --config-dir=/home/mysql/MySQL_Cluster/52/ 
--config-file=/home/mysql/MySQL_Cluster/52/config.ini --config-cache=FALSE

[mysql@connect13b ~]$ echo 7092 > mcm1.3/mcm_data/clusters/mycluster/pid/pid.52

The good news is that if there are any mistakes in any of these files then the import cluster --dryrun command will fail with a useful error message and so you can come back to fix things up.
 

Import cluster into MCM

OK – you’ve now done the hard bit, which is the prep work ahead of actually importing the cluster into MCM control; now it’s MCM’s turn to automate the rest.

Before the real import, we use MCM to perform a dry-run to make sure that all of the prep work has been completed successfully:

mcm> import cluster --dryrun mycluster;
+-------------------------------------------------+
| Command result                                  |
+-------------------------------------------------+
| Import checks passed. Cluster ready for import. |
+-------------------------------------------------+
1 row in set (0.80 sec)

All that’s left before running the final import is to take a backup (just in case something should go very wrong):

[mysql@connect13a ~]$ ndb_mgm
-- NDB Cluster -- Management Client --

ndb_mgm> start backup
Waiting for completed, this may take several minutes
Node 1: Backup 1 started from node 49
Node 1: Backup 1 started from node 49 completed
StartGCP: 35747 StopGCP: 35750
#Records: 2063 #LogRecords: 0
Data: 51472 bytes Log: 0 bytes

And now, finally the import itself can be run:

mcm> import cluster mycluster;
+-------------------------------+
| Command result                |
+-------------------------------+
| Cluster imported successfully |
+-------------------------------+
1 row in set (2.98 sec)

After all of the prep work, that seems a bit of an anticlimax! As a first check, make sure that all of the nodes (processes) have been imported correctly:

mcm> show status -r mycluster;
+--------+----------+-----------------+---------+-----------+---------+
| NodeId | Process  | Host            | Status  | Nodegroup | Package |
+--------+----------+-----------------+---------+-----------+---------+
| 49     | ndb_mgmd | 192.168.56.101  | running |           | 7_2_14  |
| 52     | ndb_mgmd | 192.168.56.102  | running |           | 7_2_14  |
| 1      | ndbmtd   | 192.168.56.103  | running | 0         | 7_2_14  |
| 2      | ndbmtd   | 192.168.56.104  | running | 0         | 7_2_14  |
| 55     | mysqld   | 192.168.56.101  | running |           | 7_2_14  |
| 56     | mysqld   | 192.168.56.101  | running |           | 7_2_14  |
| 57     | mysqld   | 192.168.56.102  | running |           | 7_2_14  |
| 58     | mysqld   | 192.168.56.102  | running |           | 7_2_14  |
| 50     | ndbapi   | *192.168.56.101 | added   |           |         |
| 51     | ndbapi   | *192.168.56.101 | added   |           |         |
| 53     | ndbapi   | *192.168.56.102 | added   |           |         |
| 54     | ndbapi   | *192.168.56.102 | added   |           |         |
+--------+----------+-----------------+---------+-----------+---------+
12 rows in set (0.10 sec)

As a second check, make sure that the data hasn’t been lost:

[mysql@connect13a ~]$ mysql -h 127.0.0.1 -P 3307 -u root -e 'SELECT * FROM clusterdb.simples'
+----+---------------------+
| id | time                |
+----+---------------------+
|  3 | 2014-01-07 12:08:40 |
|  5 | 2014-01-07 12:08:41 |
|  1 | 2014-01-07 12:08:38 |
|  2 | 2014-01-07 12:08:39 |
|  4 | 2014-01-07 12:08:41 |
+----+---------------------+

So that’s it – the cluster is now safely under the control of MCM. The next section performs some more tests of MCM and at the same time illustrates how much simpler some of these management steps are now that we have MCM to help.
 

Try out MCM on the imported cluster

This final section shows how to exploit some of the MCM features on the cluster.

On-line backup and (especially) restore are very simple using MCM but before that can be done, we want to add extra ndbapi slots so that the restore command can be executed on the hosts running the data nodes – fortunately this is straightforward to do:

mcm> add process -R ndbapi:59@192.168.56.103,ndbapi:60@192.168.56.104 mycluster;

+----------------------------+
| Command result             |
+----------------------------+
| Process added successfully |
+----------------------------+
1 row in set (2 min 32.91 sec)

Note that this process took more than 2 minutes – the reason for that is that behind the scenes it performed a rolling restart of all of the existing nodes (processes) to make them aware of the new nodes. We can now confirm that these nodes have been added:

mcm> show status -r mycluster;
+--------+----------+-----------------+---------+-----------+---------+
| NodeId | Process  | Host            | Status  | Nodegroup | Package |
+--------+----------+-----------------+---------+-----------+---------+
| 49     | ndb_mgmd | 192.168.56.101  | running |           | 7_2_14  |
| 52     | ndb_mgmd | 192.168.56.102  | running |           | 7_2_14  |
| 1      | ndbmtd   | 192.168.56.103  | running | 0         | 7_2_14  |
| 2      | ndbmtd   | 192.168.56.104  | running | 0         | 7_2_14  |
| 55     | mysqld   | 192.168.56.101  | running |           | 7_2_14  |
| 56     | mysqld   | 192.168.56.101  | running |           | 7_2_14  |
| 57     | mysqld   | 192.168.56.102  | running |           | 7_2_14  |
| 58     | mysqld   | 192.168.56.102  | running |           | 7_2_14  |
| 50     | ndbapi   | *192.168.56.101 | added   |           |         |
| 51     | ndbapi   | *192.168.56.101 | added   |           |         |
| 53     | ndbapi   | *192.168.56.102 | added   |           |         |
| 54     | ndbapi   | *192.168.56.102 | added   |           |         |
| 59     | ndbapi   | *192.168.56.103 | added   |           |         |
| 60     | ndbapi   | *192.168.56.104 | added   |           |         |
+--------+----------+-----------------+---------+-----------+---------+
14 rows in set (0.03 sec)

Taking a backup of the database is a single command:

mcm> backup cluster mycluster;
+-------------------------------+
| Command result                |
+-------------------------------+
| Backup completed successfully |
+-------------------------------+
1 row in set (5.76 sec)

Before performing the restore, the test data can be removed from the database:

[mysql@connect13a ~]$ mysql -h 127.0.0.1 -P 3307 -u root -e 'DELETE FROM clusterdb.simples'

To restore the database, check the available backups and then restore the most recent one (note that as only the data has been deleted, the -M option is used to indicate that the meta-data doesn’t need to be restored). The -I option is used to specify that the backup with ID 2 should be used:

mcm> list backups mycluster;
+----------+--------+----------------+---------------------+---------+
| BackupId | NodeId | Host           | Timestamp           | Comment |
+----------+--------+----------------+---------------------+---------+
| 1        | 1      | 192.168.56.103 | 2014-01-08 08:31:35 |         |
| 1        | 2      | 192.168.56.104 | 2014-01-08 08:31:33 |         |
| 2        | 1      | 192.168.56.103 | 2014-01-09 05:03:13 |         |
| 2        | 2      | 192.168.56.104 | 2014-01-09 05:03:12 |         |
+----------+--------+----------------+---------------------+---------+
4 rows in set (0.19 sec)

mcm> restore cluster -I 2 -M mycluster;
+--------------------------------+
| Command result                 |
+--------------------------------+
| Restore completed successfully |
+--------------------------------+
1 row in set (9.40 sec)

To confirm that the restore has indeed been successful, check that the test data is back in the database:

[mysql@connect13a ~]$ mysql -h 127.0.0.1 -P 3307 -u root -e 'SELECT * FROM clusterdb.simples'
+----+---------------------+
| id | time                |
+----+---------------------+
|  3 | 2014-01-07 12:08:40 |
|  5 | 2014-01-07 12:08:41 |
|  1 | 2014-01-07 12:08:38 |
|  2 | 2014-01-07 12:08:39 |
|  4 | 2014-01-07 12:08:41 |
+----+---------------------+

As a final test, the cluster is upgraded to MySQL Cluster 7.3.3; all that’s needed is to define the package (telling MCM where to find the new binaries on the target hosts) and then perform the upgrade:

mcm> add package --basedir=/home/mysql/mcm1.3_cluster/cluster 7_3_3;
+----------------------------+
| Command result             |
+----------------------------+
| Package added successfully |
+----------------------------+
1 row in set (0.65 sec)

mcm> upgrade cluster --package=7_3_3 mycluster;
+-------------------------------+
| Command result                |
+-------------------------------+
| Cluster upgraded successfully |
+-------------------------------+
1 row in set (4 min 15.96 sec)

Conclusion

The most requested feature for MySQL Cluster Manager was the ability to take an existing (running) cluster and bring it under the control of MCM (rather than having to create the cluster using MCM in the first place). MCM 1.3 delivers this. As you’ll have noticed, the migration process involves a non-trivial amount of prep work; the reason for this is that making configuration changes to a cluster is fairly involved, with a lot of moving parts. Hopefully you’ll also have observed that once the cluster is under the control of MCM, the management process becomes much simpler and less prone to user error.

It would be great to hear how people get on with MCM 1.3 in general and with the import in particular – please download the MySQL Cluster Manager software and try it out.





Webinar – Automated Sharding and High Availability with MySQL Fabric

On Tuesday 17th December, we’ll be presenting a webinar on the latest developments for MySQL Fabric (a framework for managing pools of MySQL Servers, together with two applications: automated sharding and High Availability). As always, the webinar is free and you should register here.

This is your opportunity to hear the details directly from the engineering team and put your questions to them.

This session will present MySQL Fabric and help you understand how you will be able to leverage it to address your scaling needs:

  • Architecture for performance of a sharded deployment
  • Management of MySQL server farms via MySQL Fabric
  • MySQL Fabric as a tool for handling sharding and high-availability
  • Application demands when working with a sharded database
  • Connector demands when working with a sharded database
  • Approaches to mixing sharded and global tables

When:

  • Tue, Dec 17: 10:00 Pacific time (America)
  • Tue, Dec 17: 11:00 Mountain time (America)
  • Tue, Dec 17: 12:00 Central time (America)
  • Tue, Dec 17: 13:00 Eastern time (America)
  • Tue, Dec 17: 16:00 São Paulo time
  • Tue, Dec 17: 18:00 UTC
  • Tue, Dec 17: 18:00 Western European time
  • Tue, Dec 17: 19:00 Central European time
  • Tue, Dec 17: 20:00 Eastern European time
  • Tue, Dec 17: 23:30 India, Sri Lanka
  • Wed, Dec 18: 02:00 Singapore/Malaysia/Philippines time
  • Wed, Dec 18: 02:00 China time
  • Wed, Dec 18: 03:00 Japan
  • Wed, Dec 18: 05:00 NSW, ACT, Victoria, Tasmania (Australia)




Webinar – What’s New in MySQL Replication

On Friday (22/11/2013), we hosted a webinar covering the new replication features in the MySQL 5.7 DMR as well as in MySQL Labs. You can now view the webinar replay here.

It’s a very exciting time for MySQL Replication! MySQL 5.6 contains numerous new replication features and Oracle recently announced…

  • The second Development Milestone Release of MySQL 5.7, introducing yet again new replication features and enhancements including optimized multi-threaded slave, better consistency with semi-synchronous replication, and improved monitoring with new Performance Schema tables.
  • Early access to multi-source replication via labs.mysql.com

During the live webinar, attendees asked questions and the MySQL Replication engineers answered them; this Q&A is included here:

  • Can we have Master & Slave on different versions of MySQL? Like Master on 5.1 & Slave on 5.6? In general, cross-version replication to the next higher version on the slave is supported unless the behaviour of SQL statements changes between the versions. For more details, please look at the documentation on MySQL Replication cross-version compatibility.
  • The MySQL Replication topology discussed here supports asynchronous replication. How is synchronous replication possible with MySQL? With loss-less semi-synchronous replication, you’re getting very close to synchronous replication in that when your transaction has committed, you can be sure that the change has been received by a slave and so is safe. If you want truly synchronous replication then you can use MySQL Cluster (the NDBCLUSTER storage engine); typically this synchronous replication would be within a data center but you can split the cluster if you have a real need. You can watch this demo video of MySQL Cluster.
  • When the Slave says it is x seconds behind, what does that mean? Does it mean the slave would require x seconds to recover and reach the master’s current position? When the slave SQL thread is actively processing updates, this field is the number of seconds that have elapsed since the timestamp of the most recent event on the master executed by that thread.
  • What is the default setting for rpl_semi_sync_master_wait_point? AFTER_SYNC.
  • What is the best mechanism in MySQL Community Edition for backing up InnoDB tables? mysqldump has the --single-transaction option, which is mainly useful for InnoDB tables with the REPEATABLE READ isolation level. In short, for Community Edition mysqldump is the best solution. But for Enterprise Edition, there is MySQL Enterprise Backup.
  • How can I verify that my slave DB is in sync with the master? Is there something similar to the archive log sequence number verification in Oracle? We know there are some parameters in the SHOW SLAVE STATUS output, but is there any other way to verify the sync status using MySQL tables? In MySQL 5.6, if you set master_info_repository and relay_log_info_repository to TABLE, the status information is put into the mysql.slave_relay_log_info and mysql.slave_master_info tables. And in MySQL 5.7 the status is put in the performance_schema, so you can query it using SQL (see the example after this list).
  • Does multi-source replication require the use of GTIDs? No – you just need to make sure that GTIDs are either enabled on the slave and all masters or disabled on all of them.
  • If multi-source can work without GTIDs, how do you prevent a transaction executed on A, which is replicated to B and C, from being replicated back again to C and B via the replication channels from B to C and C to B? Without GTIDs, the transactions are re-executed (in the above scenario) if the schema developer *doesn’t* take care of fixing the keys. If there is a conflict on a key, the slave errors out. So, in short, MSR doesn’t involve itself with conflict detection and resolution. When using positions, the schema developer has to make sure that the multi-sourced slave receives correct data to avoid re-execution of transactions.
  • In loss-less semi-sync, how can the transaction be written in the binary log of the master without the master committing the transaction? Does this stall the transaction on the master (until a commit is written to the binary log)? When using this feature, a commit has two phases (the same as an XA transaction) – first there is a prepare phase and then the commit phase. The binary log is written after the prepare phase but before the commit phase (internally, it is part of the commit phase but happens before the engine commit).
  • With parallel replication, is the binary log order preserved on the slave (compared to the master), even if the transactions are applied in parallel? Asked another way, can two slaves of the same master have different binlogs? No, the order will be different when the multi-threaded slave feature is on.
  • Are you working on an easier way to create a replication setup – like automatically copying the master database to the slave (something like you had for MyISAM before)? Take a look at MySQL Utilities as there are lots of new tools in there to help set up, monitor and manage replication.
  • You guys have multiple blogs and it is a shame to not present this information in a common place. That’s been something that we’ve been discussing as well. Of course they get aggregated through planet.mysql.com and there is also a summary blog pointing to the others.
  • In loss-less semi-sync replication, what happens if the slave dies before it receives the change in its relay log? Is there a timeout which then commits and acks the data to the master? Yes, there is a timeout. After the timeout, it commits and falls back to normal asynchronous replication. The timeout is specified by the variable rpl_semi_sync_master_timeout. Currently you cannot turn off this behavior, but you can set the timeout very, very high. For more information, see the configuration guide.
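
Following up on the question above about checking slave status from tables rather than SHOW SLAVE STATUS, here is a minimal sketch for MySQL 5.6 – it assumes the slave threads are stopped while the repositories are switched to TABLE, and the columns selected are just two of those available:

mysql> STOP SLAVE;
mysql> SET GLOBAL master_info_repository = 'TABLE';
mysql> SET GLOBAL relay_log_info_repository = 'TABLE';
mysql> START SLAVE;

mysql> SELECT Master_log_name, Master_log_pos FROM mysql.slave_relay_log_info;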




MySQL & Friends – FOSDEM 2014 CfP last chance!

Heads up that Friday (6th December) is your last chance to submit a 30 minute session proposal for next year’s MySQL & Friends Devroom at FOSDEM. I presented last year and I was really impressed by the number of attendees, the organisation and the positive reception (I’ll be there again next year – hopefully presenting). If you’ve got a MySQL-related topic that you’d like to present to an enthusiastic open-source community then this is a great opportunity.

Submit your proposal(s) for a 30 minute session here.

FOSDEM 2014 is a free event running in Brussels over the weekend of 1st & 2nd February, with the MySQL content planned for the Saturday. This is a huge event (5,000+ attendees) with a great, informal, developer-focused atmosphere – even if you don’t want to present, it’s worth attending if you can.





MySQL Cluster 7.3.3 Released

The binary and source versions of MySQL Cluster 7.3.3 have now been made available at http://www.mysql.com/downloads/cluster/.

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.3.3 (compared to 7.3.2) is available from the 7.3.3 Change log.





MySQL Cluster 7.2.14 Released

The binary version for MySQL Cluster 7.2.14 has now been made available at http://www.mysql.com/downloads/cluster/.

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.2.14 (compared to 7.2.13) is available from the 7.2.14 Change log.





MySQL Cluster 7.1.29 Released

The binary and source versions of MySQL Cluster 7.1.29 have now been made available at http://www.mysql.com/downloads/cluster/.

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.1.29 (compared to 7.1.28) is available from the 7.1.29 Change log.





MySQL 5.7.2 DMR and Labs – new replication features

With today’s announcement of the second MySQL 5.7 Development Milestone Release and a new labs release it’s a very exciting time for MySQL Replication. MySQL 5.6 contained a lot of new content to make replication faster, easier to use and more reliable (Global Transaction Identifiers, Multi-Threaded Slaves, Binary Log Group Commit, Optimized Row Based Replication, Crash Safe Replication, Replication Event Checksums, Time Delayed Replication & Informational Logs) and now we want to improve things even further.

The new DMR has something for everyone.

With the improvements to Semi-Synchronous Replication, the application developer can be confident that when a transaction has been committed, the changes have been safely copied to one or more slaves and so, whatever happens, that change will not be lost. Further, we now prevent other application threads from seeing those changes until they’ve been received by the slave, so the application cannot start acting on the new data until it’s known to be safe. This is an important improvement in consistency which moves more of the onus from the application developer onto the database.

DBAs want replication to be fast – in particular for the slave(s) not to fall behind the master. MySQL 5.6 made some massive improvements in this area – both on the master and the slave. A number of users, though, were unable to exploit the Multi-Threaded Slave (MTS) feature as it relied on the use of multiple schemas (databases) to get changes applied in parallel. In the new MySQL 5.7 DMR we’ve included a new option for MTS where changes can be applied in parallel – even within the same schema. A second performance feature improves throughput on the master – the dump thread no longer needs to lock the binary log – refer to this engineering blog on the Dump Thread Enhancement in MySQL 5.7.2 for more details.

DBAs also want to monitor the status of replication and for years have relied on the SHOW SLAVE STATUS command. As replication has evolved, SHOW SLAVE STATUS has become less suitable – we needed a solution that could properly model the more sophisticated replication architectures (including GTIDs and MTS) now possible. The approach we’ve taken is to provide this information through the performance_schema.

Note that the earlier MySQL 5.7 DMR added non-blocking SHOW SLAVE STATUS and the idempotent and --rewrite-db options for mysqlbinlog – these are still available in the new DMR.

The new features are described in a little more detail in the following sections (together with links to more technical content from the MySQL Engineering team).

Loss-less Semi-Synchronous Replication


When using semi-synchronous replication in previous releases, the processing of the transaction on the master would wait for the slave’s acknowledgement after the change had been written to the storage engine but before the commit was acknowledged to the client. This satisfied the requirement that the application could be confident that if a transaction had been committed (and an acknowledgement received for the commit) then the update would not be lost. It did however leave a window where a user on another connection could read the new data from the master (because it had been written to the storage engine and the locks had been released) before the change had been received by the slave and safely stored in its relay log – that user could then start acting on that data, but if the master failed at that point then the original update could still be lost and so the user would be acting on what is now inaccurate data.

This feature removes the above race condition by making the master wait for the update to be received by the slave before writing it to the storage engine and releasing the locks.

The functionality is enabled by default and shouldn’t have any negative impacts (for example on performance) but if you’d like to disable it then you can do so by setting rpl_semi_sync_master_wait_point = AFTER_COMMIT (reverting to the previous semi-synchronous behaviour).

You can read more details on this in this Loss-less Semi-Synchronous Replication on MySQL 5.7.2 blog from the MySQL engineering team.
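
As a minimal sketch of how this is controlled (assuming the semi-synchronous master plugin is already installed and enabled), the wait point can be set in the configuration file; leaving it at its default of AFTER_SYNC keeps the new loss-less behaviour, while AFTER_COMMIT reverts to the previous behaviour:

[mysqld]
# AFTER_SYNC (the 5.7.2 default) = loss-less semi-synchronous replication
# AFTER_COMMIT = pre-5.7.2 semi-synchronous behaviour
rpl_semi_sync_master_wait_point = AFTER_COMMIT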

Intra-Schema Multi-Threaded Slaves

DBAs want replication to be fast – in particular for the slave(s) not to fall behind the master. If there is a short but heavy burst of writes on the master then the slave falls behind (and there is a risk of lost data if the master fails during this period) but if the write-rate is sustained then the slave would fall further and further behind indefinitely. The ongoing challenge has been that the master gets faster and faster as more cores and clients are added but applying these changes asynchronously on the slave is more complex as you need to maintain some form of ordering in order to always have a consistent data set.

The earliest solution to maintaining ordering/consistency was for the slave to apply all of the changes serially, in a single thread – this ensured that changes were applied in the same order as on the master and so guaranteed the slave always contained a consistent view. Unfortunately this meant that the slave applier thread could only exploit a single core which is very wasteful in modern systems.

MySQL 5.6 made some massive improvements for many use cases by allowing the slave to apply updates in parallel using multiple threads. The assumption made was that data was held in multiple schemas (databases) and that there were no dependencies between the data in those different schemas. In this way all of the updates for a schema would be applied in order by a single thread (ensuring consistency) but updates to other schemas could be handled by additional threads. This allowed the slave to work many times faster but was limited to those use cases that met the assumptions.

In the second MySQL 5.7 DMR we introduce a new option that enables the slave to safely apply updates in parallel – even when all of the data is held within a single schema and no assumptions can be made about the independence between any rows from any tables. To avoid conflicts/divergence from the master, the slave must ensure that any transactions that are applied in parallel don’t read or write any overlapping rows. The good news is that this grouping of non-overlapping transactions is already being figured out on the master as part of the binary log group commit functionality (introduced in MySQL 5.6) as row-level locking means that overlapping transactions cannot be part of the same group commit.

MySQL 5.7 adds a logical clock/counter to the master which is used to tag transactions that are part of the same group commit’s prepare phase. The slave can then use that information – knowing that it is safe to apply all of the transactions with the same logical clock value in any order (and so can use multiple threads).

Activating the functionality is very straightforward – the key step is to set slave-parallel-type = LOGICAL_CLOCK – but see this blog from the MySQL engineering team on using the intra-schema MTS functionality for more detailed instructions. Note that there is another blog in the series which provides a detailed view of how intra-schema MTS is implemented.
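
As a rough sketch, the slave-side settings might look like this in my.cnf – the worker count of 4 is just an example value and the blogs linked above should be treated as the definitive reference for the options:

[mysqld]
# apply transactions in parallel based on the master's group-commit grouping
# rather than per-schema
slave-parallel-type    = LOGICAL_CLOCK
# number of parallel applier (worker) threads
slave-parallel-workers = 4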

Replication Performance Schema

With the ever increasing sophistication of MySQL Replication, presenting all of the information in SHOW SLAVE STATUS has become unwieldy with data for multiple slave applier threads, GTID sets etc. What would be much more convenient is if this information were presented in tables so that the user could get exactly the information they needed using SQL queries (plus of course we can lay out the data in an understandable (relational) way and can access the information from stored procedures). MySQL 5.7.2 does just this by adding MySQL Replication tables to the MySQL performance_schema.

This feature introduces 6 new performance_schema tables: replication_connection_configuration, replication_connection_status, replication_execute_configuration, replication_execute_status, replication_execute_status_by_coordinator, replication_execute_status_by_worker.

Shivji from the MySQL engineering team has written a great blog post on what’s in the new performance_schema tables and how to interpret the results.
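
For example, the connection and applier status can then be queried directly with SQL – a quick sketch using two of the tables listed above (the exact columns returned are defined by the 5.7.2 DMR):

mysql> SELECT * FROM performance_schema.replication_connection_status\G

mysql> SELECT * FROM performance_schema.replication_execute_status_by_worker\G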

Multi-Source Replication – LABS


MySQL replication is very flexible in the way that networks of masters and slaves can be built up: a master can replicate to multiple slaves, a master can itself be a slave of another master, you can create a replication ring…. The one caveat has always been that a slave server can only have a single master (MySQL Cluster is an exception to this rule).

Why might you want to do this? There are a few use cases around consolidating data from multiple MySQL Servers into one:
– Where each of the masters is for a different shard (where the application is responsible for the sharding) and you want to be able to run reports over all of that data to produce a consolidated view
– You want to avoid the expense of dedicating a slave server to each master server
– A remote location may require less throughput and so a single server can service all of the traffic for all of the data – the ‘super-slave’ gives it a low-overhead, local database to access all of the data
– The ‘super-slave’ is used as a point where you can perform backups for all of the data

It’s possible to have a slave time-slice between multiple masters with a bit of scripting but that isn’t an ideal solution. You could also add an additional replication layer (such as writing your own code to use the binary log API) but it would be much simpler if it were built into MySQL itself.

In this labs (i.e. for test only, not to be deployed!) release we allow a slave to simultaneously receive and process replication events from multiple masters – exactly what our users have been asking for.
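
As an illustration only (this is a labs release, so the exact syntax may differ from what eventually ships), configuring a slave with two masters might look something like the channel-based commands below; the host names, credentials and channel labels are all hypothetical:

mysql> CHANGE MASTER TO MASTER_HOST='master1', MASTER_PORT=3306, MASTER_USER='repl_user',
MASTER_PASSWORD='repl_pwd' FOR CHANNEL 'master-1';

mysql> CHANGE MASTER TO MASTER_HOST='master2', MASTER_PORT=3306, MASTER_USER='repl_user',
MASTER_PASSWORD='repl_pwd' FOR CHANNEL 'master-2';

mysql> START SLAVE FOR CHANNEL 'master-1';
mysql> START SLAVE FOR CHANNEL 'master-2';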

What’s more, we’ve also ensured that this new functionality is compatible with the other enhancements that have been made to the MySQL Replication architecture – this includes loss-less semi-synchronous replication and intra-schema multi-threaded slaves (as well as the existing per-database MTS). This means that you don’t have to choose between all of these tempting features – the keys to the candy store are yours!

It’s likely that not all of the masters will be the same, have the same maintenance schedules, etc., and so it makes sense to be able to manage each of the relationships independently. In this labs release you can manage each master independently, including the relevant server variables, but the same replication filters are currently applied to the replication events from all of the masters – we recognise that this isn’t ideal but this is currently an early access release and it’s something we intend to address in future versions.

At the moment there’s no limit to the number of masters that can replicate to a specific slave – in the final version we’re likely to apply a configurable limit.

Because this further complicates the information that would need to be included in SHOW SLAVE STATUS, most of the detailed information is instead presented in the Performance Schema.

It should be pointed out that there is no conflict detection or resolution built into this feature – it is the responsibility of the application to make sure that the masters are working on distinct data sets (or that they’re comfortable with the results of any conflicts).

For more technical details, refer to this engineering blog post on multi-source replication.

Summary

There’s a lot of exciting new content in the MySQL 5.7.2 DMR (download here) and the Multi-Source Replication labs release (download here). The reason these features have been released is that we value early feedback from our community and customers – please try them out and let us know what we’re getting right and what needs to be enhanced!