Archive for MySQL Cluster

SQL & NoSQL, The Best of Both Worlds with MySQL Cluster – webinar replay now available


I recently presented a webinar explaining how you can enjoy the key benefits of NoSQL data stores without giving up all of the great features provided by a mature RDBMS.

In case you weren’t able to attend (or wanted to refresh your memory) then the webinar replay and charts are now available.

There’s often a lot of excitement around NoSQL Data Stores with the promise of simple access patterns, flexible schemas, scalability and High Availability. The downside can come in the form of losing ACID transactions, consistency, flexible queries and data integrity checks. What if you could have the best of both worlds?

This webinar showed how MySQL Cluster provides simultaneous SQL and native NoSQL access to your data, with a simple key-value API (Memcached), REST, JavaScript, Java or C++. You will hear how the MySQL Cluster architecture delivers in-memory real-time performance, 99.999% availability, on-line maintenance and linear, horizontal scalability through transparent auto-sharding.
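To make the key-value side concrete, here’s a minimal sketch of what access through the Memcached API looks like, using the standard memcached text protocol; it assumes an ndbmemcache-enabled memcached server listening on the default port with the default key-value mapping, and the host and key names are just placeholders:

$ telnet 127.0.0.1 11211
set town:maidenhead 0 0 5
SL6 1
STORED
get town:maidenhead
VALUE town:maidenhead 0 5
SL6 1
END

Because the value lands in an NDB table, the same data is immediately visible through SQL as well – which is the point of the “best of both worlds” argument.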

These webinars are always a good opportunity to get your questions answered; here’s a catch up of the Q&A from this session:

  • Would you suggest using MySQL Cluster to store graph data (150K writes/second)? For graph data, you always have to choose between a specialized graph database and a more general-purpose database. If it’s “almost” relational with a few graph-like connections, your decision might be different than if it’s purely graph-like. In any case, your write load of 150K writes per second can certainly be managed by MySQL Cluster. It only requires a little care to get an appropriate cluster configuration in terms of the number of data nodes, number of API nodes, memory, disk, and networking. Also, the eventual total size of the data is an important factor in the decision about whether to use Cluster, since indexes must always fit in the total distributed memory of the data nodes.
  • Can you please explain the RAM requirements for MySQL Cluster? For example, if my database takes 10GB of disk space, will it require 10GB of RAM in MySQL Cluster? There is overhead on top of the raw data: it’s tricky to summarize, but there is a fixed overhead per row plus space for redo logs and indexes – details are in the online documentation. All indexed columns must be held in memory but other columns can be stored on disk if you choose. Remember that each row is stored on 2 data nodes, so you need to take your total memory requirement, double it, and then divide by the number of data nodes to find how much memory each data node needs. As a rough worked example, 10GB of data doubled for the two replicas is 20GB; spread across four data nodes that is roughly 5GB per node, before the per-row and index overheads. MySQL Cluster Evaluation Guide – Designing, Evaluating and Benchmarking MySQL Cluster is a good white paper to refer to when deciding whether MySQL Cluster is the right database for your application, as well as what you’ll need and what you should do to get the best results.
  • Is there a wizard to migrate InnoDB to MySQL Cluster? There’s not a “wizard” per se, but “ALTER TABLE x ENGINE=ndb” will convert a particular table. (It’s only tricky if you have foreign keys, which may have to be dropped at the beginning and re-created at the end of the process – see the conversion sketch after this list.)
  • Can this be deployed on EC2 instances, or is this for bare metal? MySQL Cluster has been successfully deployed on EC2 (e.g. by PayPal).
  • How difficult is it to do a hardware upgrade? Do you have to do it all at once or can you do each machine in turn? Both hardware and software upgrades are online operations. You can add nodes to a running cluster, and upgrade the software on nodes individually. If you use the MySQL Cluster Manager, many of the upgrade operations can be automated. You won’t be able to exploit some upgrades (e.g. extra hardware on a data node) until you’ve upgraded.
  • Does MySQL Cluster store all data in memory? What scenarios are available for swapping data to disk? Can we choose which tables/columns are stored in memory and which on disk? All indexes are in memory. A table can be entirely in-memory, or it can have its non-indexed columns stored on disk; that’s a per-column choice (see the disk-data sketch after this list).
  • Can MySQL be subscribed/notified when some data is changed/updated? There is a notification API. It is currently only supported in the C++ NDB API (this is the “Event API”), not in the MySQL Server or the other connectors. There are plans to also support it in Node.js, but there is no actual support at this time. If using SQL, then triggers can be defined in the MySQL Server – just as for InnoDB tables.
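As a sketch of the InnoDB-to-NDB conversion mentioned above (the table, column and constraint names here are invented for illustration), migrating a pair of tables linked by a foreign key might look like this:

mysql> ALTER TABLE child DROP FOREIGN KEY fk_child_parent;
mysql> ALTER TABLE parent ENGINE=ndb;
mysql> ALTER TABLE child ENGINE=ndb;
mysql> ALTER TABLE child ADD CONSTRAINT fk_child_parent
    ->   FOREIGN KEY (parent_id) REFERENCES parent(id);

Note that re-creating the foreign key at the end relies on running MySQL Cluster 7.3 or later, where NDB supports foreign keys.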
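And to illustrate the per-column memory/disk choice, here’s a minimal sketch; disk-based columns need a logfile group and tablespace to exist first, and all of the names and sizes below are illustrative rather than recommendations:

mysql> CREATE LOGFILE GROUP lg1
    ->   ADD UNDOFILE 'undo1.log' INITIAL_SIZE=64M ENGINE=ndb;
mysql> CREATE TABLESPACE ts1
    ->   ADD DATAFILE 'data1.dat' USE LOGFILE GROUP lg1 INITIAL_SIZE=128M ENGINE=ndb;
mysql> CREATE TABLE towns (
    ->   id INT PRIMARY KEY,            -- indexed, so always held in memory
    ->   name VARCHAR(50),              -- non-indexed, stored on disk
    ->   population INT STORAGE MEMORY  -- explicitly kept in memory
    -> ) TABLESPACE ts1 STORAGE DISK ENGINE=ndb;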




I’m speaking at OUG Scotland this week

If you’re going to be near Edinburgh this week then consider registering for OUG Scotland. I’ll be presenting on how to achieve the benefits of NoSQL (scalability, HA, ease of use, simple APIs) while at the same time still benefiting from the RDBMS features people have grown to rely on (ACID transactions, rich schemas, flexible access patterns) – the presentation will be at 11:25 on Wednesday as part of the developers’ track.

Hint for those that can’t make it – MySQL Cluster is the key 🙂





MySQL & NoSQL – Best of Both Worlds. Upcoming webinar

On Thursday 22nd May I’ll be hosting a webinar explaining how you can get the best from the NoSQL world while still getting all of the benefits of a proven RDBMS. As always the webinar is free but please register here.

There’s often a lot of excitement around NoSQL Data Stores with the promise of simple access patterns, flexible schemas, scalability and High Availability. The downside can come in the form of losing ACID transactions, consistency, flexible queries and data integrity checks. What if you could have the best of both worlds?

This webinar shows how MySQL Cluster provides simultaneous SQL and native NoSQL access to your data, with a simple key-value API (Memcached), REST, JavaScript, Java or C++. You will hear how the MySQL Cluster architecture delivers in-memory real-time performance, 99.999% availability, on-line maintenance and linear, horizontal scalability through transparent auto-sharding.

This is also an opportunity to pick the brains of the MySQL Cluster engineering team and get your technical questions answered.

Times:

  • Thu, May 22: 09:00 Pacific time (America)
  • Thu, May 22: 10:00 Mountain time (America)
  • Thu, May 22: 11:00 Central time (America)
  • Thu, May 22: 12:00 Eastern time (America)
  • Thu, May 22: 13:00 São Paulo time
  • Thu, May 22: 16:00 UTC
  • Thu, May 22: 17:00 Western European time
  • Thu, May 22: 18:00 Central European time
  • Thu, May 22: 19:00 Eastern European time
  • Thu, May 22: 21:30 India, Sri Lanka
  • Fri, May 23: 00:00 Singapore/Malaysia/Philippines time
  • Fri, May 23: 00:00 China time
  • Fri, May 23: 01:00 日本
  • Fri, May 23: 02:00 NSW, ACT, Victoria, Tasmania (Australia)

Even if you can’t join the live webinar, it’s worth registering as you’ll be emailed a link to the replay as soon as it’s available.





MySQL Cluster Manager 1.3.1 released

MySQL Cluster Manager 1.3.1 is now available to download from My Oracle Support and soon from the Oracle Software Delivery Cloud.

Details are available in the MCM 1.3.1 Release Notes.

Documentation is available here.





MySQL Cluster 7.1.31 Released

The binary and source versions of MySQL Cluster 7.1.31 have now been made available at http://www.mysql.com/downloads/cluster/.

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.1.31 (compared to 7.1.30) is available from the 7.1.31 Change log.





MySQL Cluster 7.4.0 Labs Release

The first version of MySQL Cluster 7.4 has now been released on MySQL Labs. Note that labs loads are not suitable for production use (in fact they’re even less mature than Development Milestone Releases); their purpose is to give users a chance to see what’s in the works, try it for themselves and then provide feedback. Having read that, if you’d like to try it out then Download MySQL Cluster 7.4 from MySQL Labs.

The focus of this first Cluster 7.4 load is performance and data node restart times.

Performance

MySQL Cluster was designed from the outset to be a distributed, in-memory database and has been deployed that way for many, many years (it’s interesting to see that the idea of in-memory databases has now really come into vogue, with excitement around new arrivals on the scene such as Hekaton). Not surprisingly, when people are considering MySQL Cluster, performance and scalability are key features (High Availability is another) and so performance improvements are always a key focus of every release, and MySQL Cluster 7.4 is no exception.

The graphs show what’s already been achieved, with the read-only Sysbench benchmark showing a 47% increase in throughput and the read/write benchmark a 38% improvement. Even better improvements are seen when configuring the data nodes to use even more threads. For those not familiar with Sysbench, you should realise that each of the transactions involves quite a lot of work: 10 primary key lookups and 5 different types of scan, each fetching 100 records (a normal SELECT through an ordered index, followed by ORDER BY, GROUP BY and so forth).
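For reference, an invocation of the (legacy 0.4-style) Sysbench OLTP test against one of the cluster’s MySQL Servers looks roughly like the sketch below – the host, credentials, table size and thread count are placeholders, and option names vary between Sysbench versions:

$ sysbench --test=oltp --mysql-host=127.0.0.1 --mysql-port=3306 \
      --mysql-user=root --mysql-table-engine=ndbcluster \
      --oltp-table-size=1000000 prepare
$ sysbench --test=oltp --oltp-read-only=on --num-threads=64 \
      --mysql-host=127.0.0.1 --mysql-port=3306 --mysql-user=root \
      --max-time=300 --max-requests=0 run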

Restart Times

While less glamorous than raw performance, the time taken for a data node to restart can make a huge difference to how easy it is to manage your cluster. As the size and activity of the database increase, the restart time for a single data node goes up; if you then multiply that time by the number of data nodes you have, maintenance activities can start to take longer than you’d like.

This first MySQL Cluster 7.4 labs release makes some significant improvements to restart times – mostly by allowing more of the work to be done in parallel.





MySQL cluster management – webinar replay available

Thomas Nielsen and I recently presented a webinar explaining the latest developments in managing MySQL Cluster. In case you weren’t able to attend (or wanted to refresh your memory) then the webinar replay and charts are now available.

As a reminder, this webinar covered what’s new in MySQL Cluster Manager 1.3 which recently went GA.

By their very nature, clustered environments involve more effort and resources to administer than standalone systems, and this holds true for MySQL Cluster, the database designed for web-scale throughput with carrier-grade availability.

The MySQL Cluster Auto-Installer guides you through defining and running a well configured MySQL Cluster database – combining auto-discovery of platform resources with built-in best practices in an intuitive web-based GUI.

MySQL Cluster Manager (available as part of the commercial MySQL Cluster Carrier Grade Edition) simplifies the ongoing management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one! These tasks include configuration, starting & stopping the cluster, upgrades, backup/restore and – new in MCM 1.3 – importing a running cluster.
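As a flavour of that single-command experience (the package and cluster names here are whatever you defined when setting MCM up), a rolling upgrade of every node in the cluster – which performed by hand means stopping and restarting each process in the correct order – comes down to:

mcm> upgrade cluster --package=7_3_4 mycluster;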

These webinars are always a good opportunity to get your questions answered; here’s a catch up of the Q&A from this session:

  • One of the biggest problems I am faced with is that many common applications rely heavily on the InnoDB or MyISAM storage engines, so I am concerned about how their behaviors differ from NDB. This is something that we’ve been working to address – for example, JOINs in MySQL Cluster are now a lot faster and MySQL Cluster now supports Foreign Keys.
  • What about transaction handling, locking mechanisms, or even changes in supported statements that relate to instance management rather than being part of DDL, DML, or DQL? Yes, MySQL Cluster (NDB) is slightly different in these respects, mostly due to its real-time heritage and distributed nature.
  • Is the MySQL Cluster Auto-Installer restricted to specific operating system distributions or versions? No, the auto-installer works on all platforms supported by MySQL Cluster.
  • I hope there will be a white paper about the differences, as this is the major reason why I am not changing to NDB right now, having moved from a many-databases-in-one-instance to a one-database-per-instance approach. There are a couple of resources available today: the Reference Manual section comparing InnoDB and MySQL Cluster, and the MySQL Cluster Evaluation Guide.
  • When creating the site in MySQL Cluster Manager, can hosts be specified using IPv6 addresses, or is it IPv4-only? At the moment, MCM only supports IPv4 addresses.
  • Are performance figures available for restoring the nodes in a cluster? I’m not aware of published benchmarks for a database restore, but restores are fairly fast as they happen at the data node level (so, for example, there’s no need to go through SQL). You can also configure a high degree of parallelism for the restore so that many records are restored at once – see the sketch after this list.
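As an illustrative sketch of that parallelism (not taken from the webinar – the connect string, node/backup IDs and backup path are placeholders), restoring one data node’s portion of a native backup with the ndb_restore tool might look like:

$ ndb_restore -c 192.168.56.101:1186 -n 1 -b 1 -r \
      --parallelism=256 --backup_path=/path/to/BACKUP/BACKUP-1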




MySQL Cluster 7.3.4 Released

The binary and source versions of MySQL Cluster 7.3.4 have now been made available at http://www.mysql.com/downloads/cluster/.

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.3.4 (compared to 7.3.3) is available from the 7.3.4 Change log.





MySQL Cluster: The Latest Developments in Management, Free webinar

On Thursday 23rd January, Thomas Nielsen and I will be hosting a webinar explaining the latest developments in managing MySQL Cluster. As always the webinar is free but please register here.

Note that we’ll be covering what’s new in MySQL Cluster Manager 1.3 which went GA this week.

By their very nature, clustered environments involve more effort and resources to administer than standalone systems, and this holds true for MySQL Cluster, the database designed for web-scale throughput with carrier-grade availability.

The MySQL Cluster Auto-Installer guides you through defining and running a well configured MySQL Cluster database – combining auto-discovery of platform resources with built-in best practices in an intuitive web-based GUI.

MySQL Cluster Manager (available as part of the commercial MySQL Cluster Carrier Grade Edition) simplifies the ongoing management of MySQL Cluster by automating common management tasks, delivering higher administration productivity and enhancing cluster agility. Tasks that used to take 46 commands can be reduced to just one! These tasks include configuration, starting & stopping the cluster, upgrades, backup/restore and – new in MCM 1.3 – importing a running cluster.

Join this webcast to get up to speed on the latest developments in these tools and learn how to exploit them to make management of MySQL Cluster simple, efficient and reliable.

Times:

  • Thu, Jan 23: 09:00 Pacific time (America)
  • Thu, Jan 23: 10:00 Mountain time (America)
  • Thu, Jan 23: 11:00 Central time (America)
  • Thu, Jan 23: 12:00 Eastern time (America)
  • Thu, Jan 23: 15:00 São Paulo time
  • Thu, Jan 23: 17:00 UTC
  • Thu, Jan 23: 17:00 Western European time
  • Thu, Jan 23: 18:00 Central European time
  • Thu, Jan 23: 19:00 Eastern European time
  • Thu, Jan 23: 22:30 India, Sri Lanka
  • Fri, Jan 24: 01:00 Singapore/Malaysia/Philippines time
  • Fri, Jan 24: 01:00 China time
  • Fri, Jan 24: 02:00 日本
  • Fri, Jan 24: 04:00 NSW, ACT, Victoria, Tasmania (Australia)

Even if you can’t join the live webinar, it’s worth registering as you’ll be emailed a link to the replay as soon as it’s available.





MCM 1.3 is GA – Importing a running Cluster into MySQL Cluster Manager

MySQL Cluster Manager 1.3.0 is now Generally Available and can be downloaded from the Oracle Software Delivery Cloud. The release contains a number of enhancements including performance improvements, handling larger clusters and of course bug fixes. The other big feature is that you can now import an existing, running MySQL Cluster instance into MCM without having to stop it first – this is the topic for this post.

In the past, we had a nice browser-based tool (the MySQL Cluster Auto-Installer) to get a well configured cluster up and running (tuned to your environment) and we also had MySQL Cluster Manager to simplify the ongoing management of the cluster. Unfortunately, if you wanted to migrate a cluster you’d created with the auto-installer (or built by hand) into MCM then you first had to shut it down and then follow a manual procedure. MCM 1.3 introduces an import command that takes a running cluster and brings it under the control of MCM without having to stop the cluster (or suspend reads or writes). There are still some manual steps involved and the first half of this post steps through the process.

Once the import has been completed, the post then steps through a number of MCM tasks to test that everything has gone to plan and also to give a reminder of how simple operations such as upgrades, backup/restore and adding new nodes are once you’re using MCM.

 

Specify Cluster topology in MCM

For this example, a cluster is used that was created using the auto-installer that was part of MySQL Cluster 7.3.2 (the version is significant as, from MySQL Cluster 7.3.3, the auto-installer creates .cnf files for the mysqld processes rather than specifying everything on the command-line, which simplifies the import process).

Before going any further, some data is added to the database so that I can later check that it’s not been lost:

[mysql@connect13a ~]$ mysql -h 127.0.0.1 -P3306 -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.5.34-ndb-7.2.14-cluster-gpl MySQL Cluster Community Server (GPL)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE DATABASE clusterdb;USE clusterdb;
Query OK, 1 row affected (0.06 sec)

mysql> CREATE TABLE simples (id INT AUTO_INCREMENT PRIMARY KEY, time TIMESTAMP) ENGINE=ndb;
Query OK, 0 rows affected (0.15 sec)

mysql> REPLACE INTO simples VALUES ();
Query OK, 1 row affected (0.00 sec)

mysql> REPLACE INTO simples VALUES ();
Query OK, 1 row affected (0.01 sec)

mysql> REPLACE INTO simples VALUES ();
Query OK, 1 row affected (0.00 sec)

mysql> REPLACE INTO simples VALUES ();
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM simples;
+----+---------------------+
| id | time                |
+----+---------------------+
|  3 | 2014-01-07 11:13:30 |
|  1 | 2014-01-07 11:13:28 |
|  2 | 2014-01-07 11:13:29 |
|  4 | 2014-01-07 11:13:30 |
+----+---------------------+
4 rows in set (0.00 sec)

The topology of the resulting cluster can be checked using the show command:

[mysql@connect13b ~]$ cluster_7_2_14/bin/ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @192.168.56.103  (mysql-5.5.34 ndb-7.2.14, Nodegroup: 0, Master)
id=2    @192.168.56.104  (mysql-5.5.34 ndb-7.2.14, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=49   @192.168.56.101  (mysql-5.5.34 ndb-7.2.14)
id=52   @192.168.56.102  (mysql-5.5.34 ndb-7.2.14)

[mysqld(API)]   8 node(s)
id=50 (not connected, accepting connect from 192.168.56.101)
id=51 (not connected, accepting connect from 192.168.56.101)
id=53 (not connected, accepting connect from 192.168.56.102)
id=54 (not connected, accepting connect from 192.168.56.102)
id=55   @192.168.56.101  (mysql-5.5.34 ndb-7.2.14)
id=56   @192.168.56.101  (mysql-5.5.34 ndb-7.2.14)
id=57   @192.168.56.102  (mysql-5.5.34 ndb-7.2.14)
id=58   @192.168.56.102  (mysql-5.5.34 ndb-7.2.14)	

To define (but not create) the cluster in MCM, the following entities need to be defined:

  • site: the list of hosts that the cluster will run on
  • package: location of the cluster binaries on each of the hosts in the site
  • cluster: the cluster itself (the collection of nodes/processes that make up the cluster)

The MCM daemon mcmd must first be started on each of the target hosts and the mcm client run on any host before creating each of these entities:


[mysql@connect13a ~]$ ./mcm1.3/mcm1.3.0/bin/mcmd&
MySQL Cluster Manager 1.3.0 (64bit) started
Connect to MySQL Cluster Manager by running "/home/mysql/mcm1.3/mcm1.3.0/bin/mcm" -a connect13a.localdomain:1862

[mysql@connect13b ~]$ ./mcm1.3/mcm1.3.0/bin/mcmd&
[1] 3068

[mysql@connect13b ~]$ MySQL Cluster Manager 1.3.0 (64bit) started
Connect to MySQL Cluster Manager by running "/home/mysql/mcm1.3/mcm1.3.0/bin/mcm" -a connect13b.localdomain:1862

[mysql@connect13c ~]$  ./mcm1.3/mcm1.3.0/bin/mcmd&
[1] 1922
[mysql@connect13c ~]$ MySQL Cluster Manager 1.3.0 (64bit) started
Connect to MySQL Cluster Manager by running "/home/mysql/mcm1.3/mcm1.3.0/bin/mcm" -a connect13c.localdomain:1862

[mysql@connect13d ~]$  ./mcm1.3/mcm1.3.0/bin/mcmd&
[1] 1936
[mysql@connect13d ~]$ MySQL Cluster Manager 1.3.0 (64bit) started
Connect to MySQL Cluster Manager by running "/home/mysql/mcm1.3/mcm1.3.0/bin/mcm" -a connect13d.localdomain:1862

[mysql@connect13a ~]$ ./mcm1.3/mcm1.3.0/bin/mcm
MySQL Cluster Manager client started.
This wrapper will spawn the mysql client to connect to mcmd

Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 1.3.0 MySQL Cluster Manager

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mcm> CREATE SITE --hosts=192.168.56.101,192.168.56.102,192.168.56.103,192.168.56.104 mysite;
+---------------------------+
| Command result            |
+---------------------------+
| Site created successfully |
+---------------------------+
1 row in set (1.34 sec)

mcm> add package --basedir=/home/mysql/cluster_7_2_14 7_2_14;
+----------------------------+
| Command result             |
+----------------------------+
| Package added successfully |
+----------------------------+
1 row in set (0.40 sec)

mcm> CREATE CLUSTER --import --package=7_2_14 --processhosts=ndb_mgmd:49@192.168.56.101,
ndb_mgmd:52@192.168.56.102,ndbmtd:1@192.168.56.103,ndbmtd:2@192.168.56.104,mysqld:55@192.168.56.101,
mysqld:56@192.168.56.101,mysqld:57@192.168.56.102,mysqld:58@192.168.56.102,ndbapi:50@192.168.56.101,
ndbapi:51@192.168.56.101,ndbapi:53@192.168.56.102,ndbapi:54@192.168.56.102 mycluster;
+------------------------------+
| Command result               |
+------------------------------+
| Cluster created successfully |
+------------------------------+
1 row in set (0.34 sec)

mcm> SHOW STATUS -r mycluster;
+--------+----------+-----------------+--------+-----------+---------+
| NodeId | Process  | Host            | Status | Nodegroup | Package |
+--------+----------+-----------------+--------+-----------+---------+
| 49     | ndb_mgmd | 192.168.56.101  | import |           | 7_2_14  |
| 52     | ndb_mgmd | 192.168.56.102  | import |           | 7_2_14  |
| 1      | ndbmtd   | 192.168.56.103  | import | n/a       | 7_2_14  |
| 2      | ndbmtd   | 192.168.56.104  | import | n/a       | 7_2_14  |
| 55     | mysqld   | 192.168.56.101  | import |           | 7_2_14  |
| 56     | mysqld   | 192.168.56.101  | import |           | 7_2_14  |
| 57     | mysqld   | 192.168.56.102  | import |           | 7_2_14  |
| 58     | mysqld   | 192.168.56.102  | import |           | 7_2_14  |
| 50     | ndbapi   | *192.168.56.101 | import |           |         |
| 51     | ndbapi   | *192.168.56.101 | import |           |         |
| 53     | ndbapi   | *192.168.56.102 | import |           |         |
| 54     | ndbapi   | *192.168.56.102 | import |           |         |
+--------+----------+-----------------+--------+-----------+---------+
12 rows in set (0.06 sec)

This post doesn’t attempt to go into details about all of the commands shown above but if you’re new to MCM then check out the MySQL Cluster documentation. The --import option sets the state of each of the nodes to import; they will stay in that state until the import command is run later.

When the auto-installer created the cluster, it changed a number of the configuration parameters from their defaults (either with command-line options or using configuration files). Using the Linux ps -ef command you can identify all of these settings (either directly or by examining the configuration files that are referenced):

[mysql@connect13a ~]$ ps -ef | grep ndb_mgmd
mysql     2766     1  2 Jan07 ?        00:34:27 /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial 
--ndb-nodeid=49 --config-dir=/home/mysql/MySQL_Cluster/49/ 
--config-file=/home/mysql/MySQL_Cluster/49/config.ini

[mysql@connect13b ~]$ ps -ef | grep ndb_mgmd
mysql     2426     1  2 Jan07 ?        00:32:26 /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial 
--ndb-nodeid=52 --config-dir=/home/mysql/MySQL_Cluster/52/ 
--config-file=/home/mysql/MySQL_Cluster/52/config.ini

[mysql@connect13a ~]$ ps -ef | grep mysqld
mysql     3289     1  1 Jan07 ?        00:15:07 /home/mysql/cluster_7_2_14/bin/mysqld --no-defaults 
--datadir=/home/mysql/MySQL_Cluster/55/ --tmpdir=/home/mysql/MySQL_Cluster/55/tmp 
--basedir=/home/mysql/cluster_7_2_14/ --port=3306 --ndbcluster --ndb-nodeid=55 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186, 
--socket=/home/mysql/MySQL_Cluster/55/mysql.socket
mysql     3392     1  1 Jan07 ?        00:15:08 /home/mysql/cluster_7_2_14/bin/mysqld --no-defaults 
--datadir=/home/mysql/MySQL_Cluster/56/ --tmpdir=/home/mysql/MySQL_Cluster/56/tmp 
--basedir=/home/mysql/cluster_7_2_14/ --port=3307 --ndbcluster --ndb-nodeid=56 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186, 
--socket=/home/mysql/MySQL_Cluster/56/mysql.socket

[mysql@connect13b ~]$ ps -ef | grep mysqld
mysql     2884     1  1 Jan07 ?        00:14:41 /home/mysql/cluster_7_2_14/bin/mysqld --no-defaults 
--datadir=/home/mysql/MySQL_Cluster/57/ --tmpdir=/home/mysql/MySQL_Cluster/57/tmp 
--basedir=/home/mysql/cluster_7_2_14/ --port=3306 --ndbcluster --ndb-nodeid=57 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186, 
--socket=/home/mysql/MySQL_Cluster/57/mysql.socket
mysql     2981     1  1 Jan07 ?        00:14:42 /home/mysql/cluster_7_2_14/bin/mysqld --no-defaults 
--datadir=/home/mysql/MySQL_Cluster/58/ --tmpdir=/home/mysql/MySQL_Cluster/58/tmp 
--basedir=/home/mysql/cluster_7_2_14/ --port=3307 --ndbcluster --ndb-nodeid=58 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186, 
--socket=/home/mysql/MySQL_Cluster/58/mysql.socket

[mysql@connect13c ~]$ ps -ef | grep ndbmtd
mysql     1822     1 12 Jan07 ?        02:35:06 /home/mysql/cluster_7_2_14/bin/ndbmtd 
--ndb-nodeid=1 --ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,
mysql     1823  1822 12 Jan07 ?        02:35:06 /home/mysql/cluster_7_2_14/bin/ndbmtd 
--ndb-nodeid=1 --ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,

[mysql@connect13d ~]$ ps -ef | grep ndbmtd | grep nodeid=2
mysql     1835     1  0 Jan07 ?        00:00:52 /home/mysql/cluster_7_2_14/bin/ndbmtd 
--ndb-nodeid=2 --ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,
mysql     1836  1835 12 Jan07 ?        02:30:46 /home/mysql/cluster_7_2_14/bin/ndbmtd 
--ndb-nodeid=2 --ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,

[mysql@connect13a ~]$ cat /home/mysql/MySQL_Cluster/49/config.ini

# Configuration file for MyCluster
#

[NDB_MGMD DEFAULT]
Portnumber=1186

[NDB_MGMD]
NodeId=49
HostName=192.168.56.101
DataDir=/home/mysql/MySQL_Cluster/49/
Portnumber=1186

[NDB_MGMD]
NodeId=52
HostName=192.168.56.102
DataDir=/home/mysql/MySQL_Cluster/52/
Portnumber=1186

[TCP DEFAULT]
SendBufferMemory=4M
ReceiveBufferMemory=4M

[NDBD DEFAULT]
BackupMaxWriteSize=1M
BackupDataBufferSize=16M
BackupLogBufferSize=4M
BackupMemory=20M
BackupReportFrequency=10
MemReportFrequency=30
LogLevelStartup=15
LogLevelShutdown=15
LogLevelCheckpoint=8
LogLevelNodeRestart=15
DataMemory=58M
IndexMemory=9M
MaxNoOfTables=4096
MaxNoOfTriggers=3500
NoOfReplicas=2
StringMemory=25
DiskPageBufferMemory=64M
SharedGlobalMemory=20M
LongMessageBuffer=32M
MaxNoOfConcurrentTransactions=16384
BatchSizePerLocalScan=512
FragmentLogFileSize=64M
NoOfFragmentLogFiles=16
RedoBuffer=32M
MaxNoOfExecutionThreads=2
StopOnError=false
LockPagesInMainMemory=1
TimeBetweenEpochsTimeout=32000
TimeBetweenWatchdogCheckInitial=60000
TransactionInactiveTimeout=60000
HeartbeatIntervalDbDb=15000
HeartbeatIntervalDbApi=15000

[NDBD]
NodeId=1
HostName=192.168.56.103
DataDir=/home/mysql/MySQL_Cluster/1/

[NDBD]
NodeId=2
HostName=192.168.56.104
DataDir=/home/mysql/MySQL_Cluster/2/

[MYSQLD DEFAULT]

[MYSQLD]
NodeId=55
HostName=192.168.56.101

[MYSQLD]
NodeId=56
HostName=192.168.56.101

[MYSQLD]
NodeId=57
HostName=192.168.56.102

[MYSQLD]
NodeId=58
HostName=192.168.56.102

[API]
NodeId=50
HostName=192.168.56.101

[API]
NodeId=51
HostName=192.168.56.101

[API]
NodeId=53
HostName=192.168.56.102

[API]
NodeId=54
HostName=192.168.56.102

Now that we have a view of all of the configuration parameters, a subset of them needs to be applied to the definition of the cluster in MCM (otherwise MCM will override them). Note that not all of the settings need porting to MCM – for example, Portnumber=1186 can be skipped as that is already the default value, and configdir as MCM will use its own. So, based on the command-line options provided to the executables and the configuration files, the configuration parameters for the cluster defined in MCM are set:

mcm> SET SendBufferMemory:ndbmtd+ndbmtd=4M,ReceiveBufferMemory:ndbmtd+ndbmtd=4M,
SendBufferMemory:ndbmtd+mysqld=4M,ReceiveBufferMemory:ndbmtd+mysqld=4M,BackupLogBufferSize:ndbmtd=4M,
BackupMemory:ndbmtd=20M,BackupReportFrequency:ndbmtd=10,MemReportFrequency:ndbmtd=30,
LogLevelStartup:ndbmtd=15,LogLevelShutdown:ndbmtd=15,LogLevelCheckpoint:ndbmtd=8,
LogLevelNodeRestart:ndbmtd=15,DataMemory:ndbmtd=58M,IndexMemory:ndbmtd=9M,MaxNoOfTables:ndbmtd=4096,
MaxNoOfTriggers:ndbmtd=3500,SharedGlobalMemory:ndbmtd=20M,LongMessageBuffer:ndbmtd=32M,
MaxNoOfConcurrentTransactions:ndbmtd=16384,BatchSizePerLocalScan:ndbmtd=512,
FragmentLogFileSize:ndbmtd=64M,StopOnError:ndbmtd=true,LockPagesInMainMemory:ndbmtd=1,
TimeBetweenEpochsTimeout:ndbmtd=32000,TimeBetweenWatchdogCheckInitial:ndbmtd=60000,
TransactionInactiveTimeout:ndbmtd=60000,HeartbeatIntervalDbDb:ndbmtd=15000,
HeartbeatIntervalDbApi:ndbmtd=15000,DataDir:ndbmtd:1=/home/mysql/MySQL_Cluster/1/,
DataDir:ndbmtd:2=/home/mysql/MySQL_Cluster/2/,DataDir:mysqld:55=/home/mysql/MySQL_Cluster/55/,
DataDir:mysqld:56=/home/mysql/MySQL_Cluster/56/,DataDir:mysqld:57=/home/mysql/MySQL_Cluster/57/,
DataDir:mysqld:58=/home/mysql/MySQL_Cluster/58/,tmpdir:mysqld:55=/home/mysql/MySQL_Cluster/55/tmp,
tmpdir:mysqld:56=/home/mysql/MySQL_Cluster/56/tmp,tmpdir:mysqld:57=/home/mysql/MySQL_Cluster/57/tmp,
tmpdir:mysqld:58=/home/mysql/MySQL_Cluster/58/tmp,
socket:mysqld:55=/home/mysql/MySQL_Cluster/55/mysql.socket,
socket:mysqld:56=/home/mysql/MySQL_Cluster/56/mysql.socket,
socket:mysqld:57=/home/mysql/MySQL_Cluster/57/mysql.socket,
socket:mysqld:58=/home/mysql/MySQL_Cluster/58/mysql.socket,port:mysqld:56=3307,
port:mysqld:58=3307 mycluster;

+-----------------------------------+
| Command result                    |
+-----------------------------------+
| Cluster reconfigured successfully |
+-----------------------------------+
1 row in set (0.44 sec)

Note that as MCM is not yet managing the running cluster, you can break this up into multiple SET commands, as it doesn’t need to restart any processes.
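For instance, the single long SET above could equally well be issued as a series of smaller commands (the values here are just taken from the configuration shown earlier):

mcm> SET DataMemory:ndbmtd=58M,IndexMemory:ndbmtd=9M mycluster;
mcm> SET port:mysqld:56=3307,port:mysqld:58=3307 mycluster;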
 

Prepare running cluster for import

As MCM takes over some functions from MySQL Cluster, we need to ensure that a couple of rules are imposed on the cluster so that there are no conflicts.

MCM is responsible for making sure that the management nodes are using the correct version of the configuration data and so we don’t want the management nodes holding onto older versions – this leads to the first rule, that the configuration cache should be disabled. This means restarting the management nodes with the config-cache parameter set to FALSE:

[mysql@connect13a ~]$ ps -ef | grep ndb_mgmd
mysql     2766     1  2 Jan07 ?        00:34:27 /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial 
--ndb-nodeid=49 --config-dir=/home/mysql/MySQL_Cluster/49/ 
--config-file=/home/mysql/MySQL_Cluster/49/config.ini
[mysql@connect13a ~]$ kill -9 2766
[mysql@connect13a ~]$  /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial --ndb-nodeid=49 
--config-dir=/home/mysql/MySQL_Cluster/49/ --config-file=/home/mysql/MySQL_Cluster/49/config.ini 
--config-cache=FALSE

[mysql@connect13b ~]$ ps -ef | grep ndb_mgmd
mysql     2426     1  2 Jan07 ?        00:32:26 /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial 
--ndb-nodeid=52 --config-dir=/home/mysql/MySQL_Cluster/52/ 
--config-file=/home/mysql/MySQL_Cluster/52/config.ini
[mysql@connect13b ~]$ kill -9 2426
[mysql@connect13b ~]$  /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial --ndb-nodeid=52 
--config-dir=/home/mysql/MySQL_Cluster/52/ --config-file=/home/mysql/MySQL_Cluster/52/config.ini 
--config-cache=FALSE

MCM is responsible for making sure that a node is restarted in the event of its process stopping (which only happens if StopOnError is set to FALSE) and so the data nodes no longer need their angel processes (the first of the 2 ndbmtd processes we see for each data node). This means the second rule: all of these angel processes must be killed:

[mysql@connect13c ~]$ ps -ef | grep ndbmtd | grep nodeid=1
mysql     1822     1  0 Jan07 ?        00:00:52 /home/mysql/cluster_7_2_14/bin/ndbmtd --ndb-nodeid=1 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,
mysql     1823  1822 12 Jan07 ?        02:30:46 /home/mysql/cluster_7_2_14/bin/ndbmtd --ndb-nodeid=1 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,
[mysql@connect13c ~]$ kill -9 1822

[mysql@connect13d ~]$ ps -ef | grep ndbmtd | grep nodeid=2
mysql     1835     1  0 Jan07 ?        00:00:52 /home/mysql/cluster_7_2_14/bin/ndbmtd --ndb-nodeid=2 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,
mysql     1836  1835 12 Jan07 ?        02:30:46 /home/mysql/cluster_7_2_14/bin/ndbmtd --ndb-nodeid=2 
--ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,
[mysql@connect13d ~]$ kill -9 1835

In order to manage the cluster, MCM needs to be able to connect to each of the MySQL Servers and so the mcmd user must be created on each of the MySQL Servers (unless you’re exploiting the ability to share user credentials between multiple MySQL Servers, in which case it only needs doing once):

[mysql@connect13a ~]$ mysql -h 127.0.0.1 -P3306 -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.5.34-ndb-7.2.14-cluster-gpl MySQL Cluster Community Server (GPL)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'mcmd'@'%' IDENTIFIED BY 'super';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'127.0.0.1' IDENTIFIED BY 'super' WITH GRANT OPTION;
Query OK, 0 rows affected (0.01 sec)

[mysql@connect13a ~]$ mysql -h 127.0.0.1 -P3307 -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.5.34-ndb-7.2.14-cluster-gpl MySQL Cluster Community Server (GPL)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'mcmd'@'%' IDENTIFIED BY 'super';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'127.0.0.1' IDENTIFIED BY 'super' WITH GRANT OPTION;
Query OK, 0 rows affected (0.01 sec)

[mysql@connect13b ~]$ mysql -h 127.0.0.1 -P3306 -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.5.34-ndb-7.2.14-cluster-gpl MySQL Cluster Community Server (GPL)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'mcmd'@'%' IDENTIFIED BY 'super';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'127.0.0.1' IDENTIFIED BY 'super' WITH GRANT OPTION;
Query OK, 0 rows affected (0.01 sec)

[mysql@connect13b ~]$ mysql -h 127.0.0.1 -P3307 -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.5.34-ndb-7.2.14-cluster-gpl MySQL Cluster Community Server (GPL)

Copyright (c) 2000, 2013, Oracle and/or its affiliates. All rights reserved.

Oracle is a registered trademark of Oracle Corporation and/or its
affiliates. Other names may be trademarks of their respective
owners.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CREATE USER 'mcmd'@'%' IDENTIFIED BY 'super';
Query OK, 0 rows affected (0.01 sec)

mysql> GRANT ALL PRIVILEGES ON *.* TO 'mcmd'@'127.0.0.1' IDENTIFIED BY 'super' WITH GRANT OPTION;
Query OK, 0 rows affected (0.01 sec)

As the running cluster was generated by a pre-MySQL Cluster 7.3.3 version of the auto-installer, all of the mysqld settings were specified as command-line options. The MCM import command is very restrictive about which command-line options are allowed, so most of the options need moving to .cnf files and the mysqld processes need to be restarted to pick up those configuration files. Note that the cluster remains available throughout, but there is a rolling restart of the MySQL Servers, so applications may need to temporarily switch which server they’re connected to – if this isn’t already handled by load balancing. These are the configuration files that were manually created and the commands to restart each MySQL Server:

[mysql@connect13a ~]$ cat  /home/mysql/MySQL_Cluster/55.cnf
[mysqld]
tmpdir=/home/mysql/MySQL_Cluster/55/tmp
datadir=/home/mysql/MySQL_Cluster/55/
basedir=/home/mysql/cluster_7_2_14/
socket=/home/mysql/MySQL_Cluster/55/mysql.socket
port=3306
ndbcluster
ndb-nodeid=55
ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186

[mysql@connect13a ~]$ mysqladmin -h 127.0.0.1 -P 3306 -u root shutdown
[mysql@connect13a ~]$ /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/55.cnf&

[mysql@connect13a ~]$ cat /home/mysql/MySQL_Cluster/56.cnf
[mysqld]
tmpdir=/home/mysql/MySQL_Cluster/56/tmp
datadir=/home/mysql/MySQL_Cluster/56/
basedir=/home/mysql/cluster_7_2_14/
socket=/home/mysql/MySQL_Cluster/56/mysql.socket
port=3307
ndbcluster
ndb-nodeid=56
ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186

[mysql@connect13a ~]$ mysqladmin -h 127.0.0.1 -P 3307 -u root shutdown
[mysql@connect13a ~]$ /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/56.cnf&

[mysql@connect13b ~]$ cat /home/mysql/MySQL_Cluster/57.cnf
[mysqld]
tmpdir=/home/mysql/MySQL_Cluster/57/tmp
datadir=/home/mysql/MySQL_Cluster/57/
basedir=/home/mysql/cluster_7_2_14/
socket=/home/mysql/MySQL_Cluster/57/mysql.socket
port=3306
ndbcluster
ndb-nodeid=57
ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186

[mysql@connect13b ~]$ mysqladmin -h 127.0.0.1 -P 3306 -u root shutdown
[mysql@connect13b ~]$ /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/57.cnf&

[mysql@connect13b ~]$ cat /home/mysql/MySQL_Cluster/58.cnf
[mysqld]
tmpdir=/home/mysql/MySQL_Cluster/58/tmp
datadir=/home/mysql/MySQL_Cluster/58/
basedir=/home/mysql/cluster_7_2_14/
socket=/home/mysql/MySQL_Cluster/58/mysql.socket
port=3307
ndbcluster
ndb-nodeid=58
ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186

[mysql@connect13b ~]$ mysqladmin -h 127.0.0.1 -P 3307 -u root shutdown
[mysql@connect13b ~]$ /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/58.cnf&

 

Ensure correct PID files in place

MCM tracks each of the processes in the cluster using the process IDs held in pid files. For the data nodes, MCM will automatically fetch the process IDs from the pid files created by the running cluster, so we need to make sure that they’re accurate (that they’re in the right place and contain the correct IDs):

[mysql@connect13c ~]$ ps -ef | grep ndbmtd
mysql     1823     1 12 Jan07 ?        02:35:06 /home/mysql/cluster_7_2_14/bin/ndbmtd 
--ndb-nodeid=1 --ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,

[mysql@connect13c ~]$ echo 1823 > MySQL_Cluster/1/ndb_1.pid

[mysql@connect13d ~]$ ps -ef | grep ndbmtd
mysql     1836     1 12 Jan07 ?        02:34:03 /home/mysql/cluster_7_2_14/bin/ndbmtd 
--ndb-nodeid=2 --ndb-connectstring=192.168.56.101:1186,192.168.56.102:1186,

[mysql@connect13d ~]$ echo 1836 > MySQL_Cluster/2/ndb_2.pid

For the MySQL Servers and management nodes, the pid files need to be created within the MCM directory:

[mysql@connect13a ~]$ ps -ef | grep mysqld
mysql    11145  1226  1 09:49 pts/1    00:00:02 /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/55.cnf
mysql    11178  1226  1 09:50 pts/1    00:00:01 /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/56.cnf

[mysql@connect13a ~]$ echo 11145 > mcm1.3/mcm_data/clusters/mycluster/pid/pid.55
[mysql@connect13a ~]$ echo 11178 > mcm1.3/mcm_data/clusters/mycluster/pid/pid.56

[mysql@connect13b ~]$ ps -ef | grep mysqld
mysql     11247  8477  1 09:55 pts/1    00:00:03 /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/57.cnf
mysql     11253  8477  0 09:57 pts/1    00:00:01 /home/mysql/cluster_7_2_14/bin/mysqld 
--defaults-file=/home/mysql/MySQL_Cluster/58.cnf

[mysql@connect13b ~]$ echo 11247 > mcm1.3/mcm_data/clusters/mycluster/pid/pid.57
[mysql@connect13b ~]$ echo 11253 > mcm1.3/mcm_data/clusters/mycluster/pid/pid.58

[mysql@connect13a pid]$ ps -ef | grep ndb_mgmd
mysql     8859     1  2 08:23 ?        00:00:23 /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial 
--ndb-nodeid=49 --config-dir=/home/mysql/MySQL_Cluster/49/ 
--config-file=/home/mysql/MySQL_Cluster/49/config.ini --config-cache=FALSE

[mysql@connect13a ~]$ echo 8859 > mcm1.3/mcm_data/clusters/mycluster/pid/pid.49

[mysql@connect13b pid]$ ps -ef | grep ndb_mgmd
mysql     7092     1  1 08:25 ?        00:00:17 /home/mysql/cluster_7_2_14/bin/ndb_mgmd --initial 
--ndb-nodeid=52 --config-dir=/home/mysql/MySQL_Cluster/52/ 
--config-file=/home/mysql/MySQL_Cluster/52/config.ini --config-cache=FALSE

[mysql@connect13b ~]$ echo 7092 > mcm1.3/mcm_data/clusters/mycluster/pid/pid.52

The good news is that if there are any mistakes in any of these files then the import cluster --dryrun command will fail with a useful error message, so you can come back and fix things up.
 

Import cluster into MCM

OK – you’ve now done the hard part, which is the prep work ahead of actually importing the cluster into MCM control; now it’s MCM’s turn to automate the import itself.

Before the real import, we use MCM to perform a dry run to make sure that all of the prep work has been completed successfully:

mcm> import cluster --dryrun mycluster;
+-------------------------------------------------+
| Command result                                  |
+-------------------------------------------------+
| Import checks passed. Cluster ready for import. |
+-------------------------------------------------+
1 row in set (0.80 sec)

All that’s left before running the final import is to take a backup (just in case something should go very wrong):

[mysql@connect13a ~]$ ndb_mgm
-- NDB Cluster -- Management Client --

ndb_mgm> start backup
Waiting for completed, this may take several minutes
Node 1: Backup 1 started from node 49
Node 1: Backup 1 started from node 49 completed
StartGCP: 35747 StopGCP: 35750
#Records: 2063 #LogRecords: 0
Data: 51472 bytes Log: 0 bytes

And now, finally the import itself can be run:

mcm> import cluster mycluster;
+-------------------------------+
| Command result                |
+-------------------------------+
| Cluster imported successfully |
+-------------------------------+
1 row in set (2.98 sec)

After all of the prep work, that seems a bit of an anticlimax! As a first check, make sure that all of the nodes (processes) have been imported correctly:

mcm> show status -r mycluster;
+--------+----------+-----------------+---------+-----------+---------+
| NodeId | Process  | Host            | Status  | Nodegroup | Package |
+--------+----------+-----------------+---------+-----------+---------+
| 49     | ndb_mgmd | 192.168.56.101  | running |           | 7_2_14  |
| 52     | ndb_mgmd | 192.168.56.102  | running |           | 7_2_14  |
| 1      | ndbmtd   | 192.168.56.103  | running | 0         | 7_2_14  |
| 2      | ndbmtd   | 192.168.56.104  | running | 0         | 7_2_14  |
| 55     | mysqld   | 192.168.56.101  | running |           | 7_2_14  |
| 56     | mysqld   | 192.168.56.101  | running |           | 7_2_14  |
| 57     | mysqld   | 192.168.56.102  | running |           | 7_2_14  |
| 58     | mysqld   | 192.168.56.102  | running |           | 7_2_14  |
| 50     | ndbapi   | *192.168.56.101 | added   |           |         |
| 51     | ndbapi   | *192.168.56.101 | added   |           |         |
| 53     | ndbapi   | *192.168.56.102 | added   |           |         |
| 54     | ndbapi   | *192.168.56.102 | added   |           |         |
+--------+----------+-----------------+---------+-----------+---------+
12 rows in set (0.10 sec)

As a second check, make sure that the data hasn’t been lost:

[mysql@connect13a ~]$ mysql -h 127.0.0.1 -P 3307 -u root -e 'SELECT * FROM clusterdb.simples'
+----+---------------------+
| id | time                |
+----+---------------------+
|  3 | 2014-01-07 12:08:40 |
|  5 | 2014-01-07 12:08:41 |
|  1 | 2014-01-07 12:08:38 |
|  2 | 2014-01-07 12:08:39 |
|  4 | 2014-01-07 12:08:41 |
+----+---------------------+

So that’s it – the cluster is now safely under the control of MCM. The next section performs some more tests of MCM and at the same time illustrates how much simpler some of these management steps are now that we have MCM to help.
 

Try out MCM on the imported cluster

This final section shows how to exploit some of the MCM features on the cluster.

On-line backup and (especially) restore is very simple using MCM, but before that can be done we want to add extra ndbapi slots so that the restore command can be executed on the hosts running the data nodes – fortunately this is straightforward to do:

mcm> add process -R ndbapi:59@192.168.56.103,ndbapi:60@192.168.56.104 mycluster;

+----------------------------+
| Command result             |
+----------------------------+
| Process added successfully |
+----------------------------+
1 row in set (2 min 32.91 sec)

Note that this process took more than 2 minutes – the reason for that is that behind the scenes it performed a rolling restart of all of the existing nodes (processes) to make them aware of the new nodes. We can now confirm that these nodes have been added:

mcm> show status -r mycluster;
+--------+----------+-----------------+---------+-----------+---------+
| NodeId | Process  | Host            | Status  | Nodegroup | Package |
+--------+----------+-----------------+---------+-----------+---------+
| 49     | ndb_mgmd | 192.168.56.101  | running |           | 7_2_14  |
| 52     | ndb_mgmd | 192.168.56.102  | running |           | 7_2_14  |
| 1      | ndbmtd   | 192.168.56.103  | running | 0         | 7_2_14  |
| 2      | ndbmtd   | 192.168.56.104  | running | 0         | 7_2_14  |
| 55     | mysqld   | 192.168.56.101  | running |           | 7_2_14  |
| 56     | mysqld   | 192.168.56.101  | running |           | 7_2_14  |
| 57     | mysqld   | 192.168.56.102  | running |           | 7_2_14  |
| 58     | mysqld   | 192.168.56.102  | running |           | 7_2_14  |
| 50     | ndbapi   | *192.168.56.101 | added   |           |         |
| 51     | ndbapi   | *192.168.56.101 | added   |           |         |
| 53     | ndbapi   | *192.168.56.102 | added   |           |         |
| 54     | ndbapi   | *192.168.56.102 | added   |           |         |
| 59     | ndbapi   | *192.168.56.103 | added   |           |         |
| 60     | ndbapi   | *192.168.56.104 | added   |           |         |
+--------+----------+-----------------+---------+-----------+---------+
14 rows in set (0.03 sec)

Taking a backup of the database is a single command:

mcm> backup cluster mycluster;
+-------------------------------+
| Command result                |
+-------------------------------+
| Backup completed successfully |
+-------------------------------+
1 row in set (5.76 sec)

Before performing the restore, the test data can be removed from the database:

[mysql@connect13a ~]$ mysql -h 127.0.0.1 -P 3307 -u root -e 'DELETE FROM clusterdb.simples'

To restore the database, check the available backups and then restore the most recent one (note that as only the data has been deleted, the -M option is used to indicate that the meta-data doesn’t need to be restored). The -I option specifies that the backup with Id 2 should be used:

mcm> list backups mycluster;
+----------+--------+----------------+---------------------+---------+
| BackupId | NodeId | Host           | Timestamp           | Comment |
+----------+--------+----------------+---------------------+---------+
| 1        | 1      | 192.168.56.103 | 2014-01-08 08:31:35 |         |
| 1        | 2      | 192.168.56.104 | 2014-01-08 08:31:33 |         |
| 2        | 1      | 192.168.56.103 | 2014-01-09 05:03:13 |         |
| 2        | 2      | 192.168.56.104 | 2014-01-09 05:03:12 |         |
+----------+--------+----------------+---------------------+---------+
4 rows in set (0.19 sec)

mcm> restore cluster -I 2 -M mycluster;
+--------------------------------+
| Command result                 |
+--------------------------------+
| Restore completed successfully |
+--------------------------------+
1 row in set (9.40 sec)

To confirm that the restore has indeed been successful, check that the test data is back in the database:

[mysql@connect13a ~]$ mysql -h 127.0.0.1 -P 3307 -u root -e 'SELECT * FROM clusterdb.simples'
+----+---------------------+
| id | time                |
+----+---------------------+
|  3 | 2014-01-07 12:08:40 |
|  5 | 2014-01-07 12:08:41 |
|  1 | 2014-01-07 12:08:38 |
|  2 | 2014-01-07 12:08:39 |
|  4 | 2014-01-07 12:08:41 |
+----+---------------------+

As a final test, the cluster is upgraded to MySQL Cluster 7.3.3; all that’s needed is to define the package (telling MCM where to find the new binaries on the target hosts) and then perform the upgrade:

mcm> add package --basedir=/home/mysql/mcm1.3_cluster/cluster 7_3_3;
+----------------------------+
| Command result             |
+----------------------------+
| Package added successfully |
+----------------------------+
1 row in set (0.65 sec)

mcm> upgrade cluster --package=7_3_3 mycluster;
+-------------------------------+
| Command result                |
+-------------------------------+
| Cluster upgraded successfully |
+-------------------------------+
1 row in set (4 min 15.96 sec)

Conclusion

The most requested feature for MySQL Cluster Manager was the ability to take an existing (running) cluster and bring it under the control of MCM (rather than having to create the cluster using MCM in the first place). MCM 1.3 delivers this. As you’ll have noticed, the migration process involves a non-trivial amount of prep work; the reason for this is that making configuration changes to a running cluster is fairly involved, with a lot of moving parts. Hopefully you’ll also have observed that once the cluster is under the control of MCM, the management process becomes much simpler and less prone to user error.

It would be great to hear how people get on with MCM 1.3 in general and with the import in particular – please download the MySQL Cluster Manager software and try it out.