Archive for May 29, 2012

MySQL Cluster Manager 1.1.6 released

MySQL Cluster Manager 1.1.6 is now available to download from My Oracle Support.

Details on the changes will be added to the MySQL Cluster Manager documentation. Please give it a try and let me know what you think.

Note that if you’re not a commercial user then you can still download MySQL Cluster Manager 1.1.5 from the Oracle Software Delivery Cloud and try it out for free. Documentation is available here.





MySQL 5.6 Replication – webinar replay

MySQL 5.6 Replication - Global Transaction IDs

On Wednesday (16th May 2012), Mat Keep and I presented on the new replication features that are previewed as part of the latest MySQL 5.6 Development Release.

The replay for that webinar (together with the chart deck) is now available from here.

In addition, a huge number of great questions were raised and a couple of key engineers answered them on-line – view the Q&A transcript here.

A reminder of the topics covered in the webinar…

MySQL 5.6 delivers new replication capabilities, which we discussed in the webinar:

  • High performance with Multi-Threaded Slaves and Optimized Row Based Replication
  • High availability with Global Transaction Identifiers, Failover Utilities and Crash Safe Slaves & Binlog
  • Data integrity with Replication Event Checksums
  • Dev/Ops agility with new Replication Utilities, Time Delayed Replication and more
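For anyone who wants to try these features against the 5.6 Development Release, the fragment below is a rough sketch of how some of them might be switched on. The option names are taken from the MySQL 5.6 DMRs and could change before GA, so treat this as illustrative rather than definitive:

```ini
# my.cnf fragment (sketch; option names from the MySQL 5.6 DMRs)
[mysqld]
# Multi-Threaded Slave: apply replication events with 4 parallel workers
slave_parallel_workers    = 4
# Replication Event Checksums: write CRC32 checksums into the binlog...
binlog_checksum           = CRC32
# ...and have the slave SQL thread verify them on apply
slave_sql_verify_checksum = 1
# Crash-safe slave: keep replication position info in tables rather than files
master_info_repository    = TABLE
relay_log_info_repository = TABLE
```

Time Delayed Replication is configured per-slave rather than in my.cnf (with `CHANGE MASTER TO MASTER_DELAY = <seconds>`), and the Global Transaction ID option names were still evolving across the DMRs, so check the 5.6 documentation for the release you download.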






MySQL Cluster 7.1.22 is available for download

The binary version of MySQL Cluster 7.1.22 has now been made available at https://www.mysql.com/downloads/cluster/7.1.html#downloads (GPL version) or https://support.oracle.com/ (commercial version).

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.1.22 (compared to 7.1.21) is available in the 7.1.22 Change log.





MySQL Cluster 7.2.6 is available for download

The binary version of MySQL Cluster 7.2.6 has now been made available at http://www.mysql.com/downloads/cluster/ (GPL version) or https://support.oracle.com/ (commercial version).

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.2.6 (compared to 7.2.5) is available in the 7.2.6 Change log.





Upcoming conferences to learn more about MySQL Cluster & Replication

There are a couple of conferences coming up where you can expect to learn about the latest developments in MySQL Cluster and MySQL Replication (as well as what else is happening in MySQL 5.6).

The first is the Oracle MySQL Innovation Day, which is being held at Oracle HQ in Redwood Shores. This is an all-day event on 5th June – unfortunately I won’t be able to attend this one, but there will be lots of great Cluster and replication sessions. If you can’t make it out to California then there will be a live webcast. You can register here to attend in person or join the webcast.

The second is MySQL Connect – this runs the weekend before Oracle OpenWorld in San Francisco; it’s not until 29th September but it’s worth registering now to get the early bird pricing and save $500 (ends 13th July). There are lots of great sessions lined up, both from the MySQL experts within Oracle and from users and community members.





Upcoming webinar: MySQL 5.6 Replication – For Next Generation of Web and Cloud Services

MySQL 5.6 Replication - Global Transaction IDs

On Wednesday (16th May 2012), Mat Keep and I will be presenting the new replication features that are previewed as part of the latest MySQL 5.6 Development Release. If you’d like to attend then register here.

MySQL 5.6 delivers new replication capabilities which we will discuss in the webinar:

  • High performance with Multi-Threaded Slaves and Optimized Row Based Replication
  • High availability with Global Transaction Identifiers, Failover Utilities and Crash Safe Slaves & Binlog
  • Data integrity with Replication Event Checksums
  • Dev/Ops agility with new Replication Utilities, Time Delayed Replication and more

The session will wrap up with resources to get started with MySQL 5.6 and an opportunity to ask questions.

The webinar will last 45-60 minutes and will start on Wednesday, May 16, 2012 at 09:00 Pacific time (America); start times in other time zones:

  • Wed, May 16: 06:00 Hawaii time
  • Wed, May 16: 10:00 Mountain time (America)
  • Wed, May 16: 11:00 Central time (America)
  • Wed, May 16: 12:00 Eastern time (America)
  • Wed, May 16: 16:00 UTC
  • Wed, May 16: 17:00 Western European time
  • Wed, May 16: 18:00 Central European time
  • Wed, May 16: 19:00 Eastern European time

As always, it’s worth registering even if you can’t make the live webcast as you’ll be emailed a link to the replay as soon as it’s available.





On-line add-node with MCM; a more complex example

I’ve previously provided an example of using MySQL Cluster Manager to add nodes to a running MySQL Cluster deployment, but I’ve since received a number of questions about how to do this in more complex circumstances (for example, when ending up with more than one MySQL Server on a single host, where each mysqld process should use a different port). The purpose of this post is to work through one of these more complex scenarios.

The starting point is an existing cluster made up of 3 hosts with the nodes (processes) as described in this MCM report:

mcm> SHOW STATUS -r mycluster;
+--------+----------+-------------------------------+---------+-----------+---------+
| NodeId | Process  | Host                          | Status  | Nodegroup | Package |
+--------+----------+-------------------------------+---------+-----------+---------+
| 1      | ndbmtd   | paas-23-54.osc.uk.oracle.com  | running | 0         | 7_2_5   |
| 2      | ndbmtd   | paas-23-55.osc.uk.oracle.com  | running | 0         | 7_2_5   |
| 49     | ndb_mgmd | paas-23-56.osc.uk.oracle.com  | running |           | 7_2_5   |
| 50     | mysqld   | paas-23-54.osc.uk.oracle.com  | running |           | 7_2_5   |
| 51     | mysqld   | paas-23-55.osc.uk.oracle.com  | running |           | 7_2_5   |
| 52     | mysqld   | paas-23-56.osc.uk.oracle.com  | running |           | 7_2_5   |
| 53     | ndbapi   | *paas-23-56.osc.uk.oracle.com | added   |           |         |
+--------+----------+-------------------------------+---------+-----------+---------+
7 rows in set (0.01 sec)

This same configuration is shown graphically in this diagram:

On-Line scalability with MySQL Cluster - starting point

Original MySQL Cluster deployment


Note that the ‘ndbapi’ node isn’t actually a process but rather a ‘slot’ that can be used by any NDB API client to access the data in the data nodes directly – this could be any of:

  • A MySQL Server
  • An application using the C++ NDB API directly
  • A Memcached server using the direct NDB driver
  • An application using ClusterJ, JPA or the mod_ndb REST API
  • The MySQL Cluster restore command (ndb_restore)
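Under the covers, a free slot is simply an api section in the cluster configuration with no process bound to it. With MCM you never edit config.ini by hand, but the fragment below illustrates what such a slot looks like (values taken from this example):

```ini
# config.ini fragment (illustrative only - MCM generates this for you)
[api]
# An api section with no process attached is a free slot that any
# NDB API client can claim when it connects
NodeId   = 53
HostName = paas-23-56.osc.uk.oracle.com   # restricts the slot to this host
```

Omitting HostName would leave the slot usable from any host, which is why pinning slots to hosts (as MCM does here, shown by the * prefix in the SHOW STATUS output) is the safer default.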

This Cluster is now going to be extended by adding an extra host as well as extra nodes (both processes and ndbapi slots).

The following diagram illustrates what the final Cluster will look like:

MySQL Cluster after on-line scaling

MySQL Cluster after on-line scaling

The first step is to add the new host to the configuration and make it aware of the MySQL Cluster package being used (in this example, 7.2.5). Note that you should already have started the mcmd process on this new host (if not then do that now):

mcm> ADD HOSTS --hosts=paas-23-57.osc.uk.oracle.com mysite;
+--------------------------+
| Command result           |
+--------------------------+
| Hosts added successfully |
+--------------------------+
1 row in set (8.04 sec)

mcm> ADD PACKAGE -h paas-23-57.osc.uk.oracle.com --basedir=/home/oracle/cluster_7_2_5 7_2_5;
+----------------------------+
| Command result             |
+----------------------------+
| Package added successfully |
+----------------------------+
1 row in set (0.68 sec)

At this point the MCM agent on the new host is connected to the existing three, but the host has not yet become part of the Cluster – that happens when we declare which nodes should run on it; at the same time I add some extra nodes to the existing hosts. As there will be more than one MySQL Server (mysqld) running on some of the hosts, I’ll explicitly tell MCM which port number to use for some of the mysqlds (rather than just using the default of 3306).

mcm> ADD PROCESS -R ndbmtd@paas-23-54.osc.uk.oracle.com,
ndbmtd@paas-23-55.osc.uk.oracle.com,mysqld@paas-23-56.osc.uk.oracle.com,
ndbapi@paas-23-56.osc.uk.oracle.com,mysqld@paas-23-57.osc.uk.oracle.com,
mysqld@paas-23-57.osc.uk.oracle.com,ndbapi@paas-23-57.osc.uk.oracle.com 
-s port:mysqld:54=3307,port:mysqld:57=3307 mycluster;
+----------------------------+
| Command result             |
+----------------------------+
| Process added successfully |
+----------------------------+
1 row in set (2 min 34.22 sec)

In case you’re wondering how I was able to predict the node-ids that would be allocated to the new nodes, the scheme is very simple:

  • Node-ids 1-48 are reserved for data nodes
  • Node-ids 49-256 are used for all other node types
  • Within those ranges, node-ids are allocated sequentially
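The scheme above is simple enough to sketch in a few lines of Python – the function below is purely illustrative (it is not MCM code), but it reproduces the ids allocated in this example:

```python
# Sketch of MCM's node-id allocation scheme (illustrative only, not MCM code).
# Data nodes draw from ids 1-48; all other node types draw from 49-256.
# Within each range, the lowest free id is handed out next.

def next_node_id(existing_ids, is_data_node):
    """Return the node-id MCM would allocate for the next node."""
    low, high = (1, 48) if is_data_node else (49, 256)
    for node_id in range(low, high + 1):
        if node_id not in existing_ids:
            return node_id
    raise RuntimeError("no free node-ids in range %d-%d" % (low, high))

# The starting cluster from the SHOW STATUS output above:
in_use = {1, 2, 49, 50, 51, 52, 53}
print(next_node_id(in_use, is_data_node=True))   # first new ndbmtd gets 3
print(next_node_id(in_use, is_data_node=False))  # first new mysqld gets 54
```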

If you look carefully at the results you’ll notice that the ADD PROCESS command took a while to run (2.5 minutes) – the reason is that, behind the scenes, MCM performed a rolling restart, ensuring that all of the existing nodes pick up the new configuration without losing database service. Before starting the new processes, it makes sense to double-check that the correct ports are allocated to each of the mysqlds:

mcm> GET -d port:mysqld mycluster;
+------+-------+----------+---------+----------+---------+---------+---------+
| Name | Value | Process1 | NodeId1 | Process2 | NodeId2 | Level   | Comment |
+------+-------+----------+---------+----------+---------+---------+---------+
| port | 3306  | mysqld   | 50      |          |         | Default |         |
| port | 3306  | mysqld   | 51      |          |         | Default |         |
| port | 3306  | mysqld   | 52      |          |         | Default |         |
| port | 3307  | mysqld   | 54      |          |         |         |         |
| port | 3306  | mysqld   | 56      |          |         | Default |         |
| port | 3307  | mysqld   | 57      |          |         |         |         |
+------+-------+----------+---------+----------+---------+---------+---------+
6 rows in set (0.07 sec)

At this point the new processes can be started and then the status of all of the processes confirmed:

mcm> START PROCESS --added mycluster;
+------------------------------+
| Command result               |
+------------------------------+
| Process started successfully |
+------------------------------+
1 row in set (26.30 sec)

mcm> SHOW STATUS -r mycluster;
+--------+----------+-------------------------------+---------+-----------+---------+
| NodeId | Process  | Host                          | Status  | Nodegroup | Package |
+--------+----------+-------------------------------+---------+-----------+---------+
| 1      | ndbmtd   | paas-23-54.osc.uk.oracle.com  | running | 0         | 7_2_5   |
| 2      | ndbmtd   | paas-23-55.osc.uk.oracle.com  | running | 0         | 7_2_5   |
| 49     | ndb_mgmd | paas-23-56.osc.uk.oracle.com  | running |           | 7_2_5   |
| 50     | mysqld   | paas-23-54.osc.uk.oracle.com  | running |           | 7_2_5   |
| 51     | mysqld   | paas-23-55.osc.uk.oracle.com  | running |           | 7_2_5   |
| 52     | mysqld   | paas-23-56.osc.uk.oracle.com  | running |           | 7_2_5   |
| 53     | ndbapi   | *paas-23-56.osc.uk.oracle.com | added   |           |         |
| 3      | ndbmtd   | paas-23-54.osc.uk.oracle.com  | running | 1         | 7_2_5   |
| 4      | ndbmtd   | paas-23-55.osc.uk.oracle.com  | running | 1         | 7_2_5   |
| 54     | mysqld   | paas-23-56.osc.uk.oracle.com  | running |           | 7_2_5   |
| 55     | ndbapi   | *paas-23-56.osc.uk.oracle.com | added   |           |         |
| 56     | mysqld   | paas-23-57.osc.uk.oracle.com  | running |           | 7_2_5   |
| 57     | mysqld   | paas-23-57.osc.uk.oracle.com  | running |           | 7_2_5   |
| 58     | ndbapi   | *paas-23-57.osc.uk.oracle.com | added   |           |         |
+--------+----------+-------------------------------+---------+-----------+---------+
14 rows in set (0.08 sec)

The enlarged Cluster is now up and running but any existing MySQL Cluster tables will only be stored across the original data nodes. To remedy that, each of those existing tables should be repartitioned:

mysql> ALTER ONLINE TABLE simples REORGANIZE PARTITION;
Query OK, 0 rows affected (0.22 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> OPTIMIZE TABLE simples;
+-------------------+----------+----------+----------+
| Table             | Op       | Msg_type | Msg_text |
+-------------------+----------+----------+----------+
| clusterdb.simples | optimize | status   | OK       |
+-------------------+----------+----------+----------+
1 row in set (0.00 sec)

You can safely perform the repartitioning while the Cluster is up and running (with your application sending in reads and writes), but there is a performance impact (it has been measured at around 50%), so you probably want to do this at a reasonably quiet time of day.
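If you have more than a handful of tables, information_schema makes it easy to list every table stored in MySQL Cluster so that none is missed during the reorganization:

```sql
-- List every table using the ndbcluster storage engine; each one needs
-- an ALTER ONLINE TABLE ... REORGANIZE PARTITION followed by OPTIMIZE TABLE
SELECT TABLE_SCHEMA, TABLE_NAME
  FROM information_schema.TABLES
 WHERE ENGINE = 'ndbcluster';
```

You could feed this list into a small script that issues the two statements per table, working through them one at a time to keep the performance impact bounded.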

As always, please post feedback and questions in the comments section of this post.