Tag Archive for MySQL Cluster CGE

MySQL Cluster 7.4 New Features Webinar Replay

I recently hosted a webinar introducing MySQL Cluster and then looking at what’s new in the latest version (MySQL Cluster 7.4) in more detail. The replay of the MySQL Cluster 7.4 webinar is now available here. Alternatively, if you just want to skim through the charts, then scroll down.

Abstract

MySQL Cluster powers the subscriber databases of major communications service providers as well as next generation web, cloud, social and mobile applications. It is designed to deliver:

  • Real-time, in-memory performance for both OLTP and analytics workloads
  • Linear scale-out for both reads and writes
  • 99.999% High Availability
  • Transparent, cross-shard transactions and joins
  • Update-Anywhere Geographic replication
  • SQL or native NoSQL APIs
  • All that while still providing full ACID transactions.

Understand some of the highlights of MySQL Cluster 7.4:

  • 200 Million queries per second
  • Active-Active geographic replication with conflict detection and resolution
  • 5x faster on-line maintenance activities
  • Enhanced reporting for memory and database operations

Charts

Questions and Answers

  • Can all data be stored on disk rather than in memory? Any column that isn’t part of an index can be stored on disk if you want it to be; there is then an in-memory cache for the disk-based data (see the example after this list).
  • Is in-memory data safe from the whole system shutting down (e.g. power loss for the whole data center)? The in-memory data is (asynchronously) checkpointed to disk (so that there is persistence but latency isn’t impacted).
  • I need to store more than 14K of non-BLOB/TEXT data in a single row – has this been addressed? Not yet – as you say, the options are to use TEXT/BLOB columns (or of course to split the data over multiple rows).
  • Can you comment on improvements for virtualized deployments in the 7.4 version? Only to say that more and more people are deploying on VMs and we’re not seeing issues as a result – if any are found then they’ll be fixed.
  • Can I upgrade from the previous version (7.3) to MySQL Cluster 7.4 or do I have to reinstall the product with the new version (7.4)? You can perform a rolling upgrade from MySQL Cluster 7.3 to MySQL Cluster 7.4 – the database stays up throughout the process and you don’t lose any data or have to stop writing changes.
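To expand on the disk-based data answer above, here’s a minimal sketch of how on-disk columns are declared (the logfile group, tablespace, file names and sizes are invented for illustration). Indexed columns always stay in memory; the remaining columns of a table created with STORAGE DISK are stored in the tablespace:

mysql> CREATE LOGFILE GROUP lg1 ADD UNDOFILE 'undo1.log'
  INITIAL_SIZE 128M ENGINE=ndb;
mysql> CREATE TABLESPACE ts1 ADD DATAFILE 'data1.dat'
  USE LOGFILE GROUP lg1 INITIAL_SIZE 256M ENGINE=ndb;
mysql> CREATE TABLE subs (sub_id INT NOT NULL PRIMARY KEY,
  profile VARCHAR(200)) TABLESPACE ts1 STORAGE DISK ENGINE=ndb;

In this example, sub_id is indexed and so remains in memory while profile is stored on disk.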




MySQL Cluster Manager 1.3.5 Released

MySQL Cluster Manager 1.3.5 is now available to download from My Oracle Support.

Details are available in the MCM 1.3.5 Release Notes. Note that this version of MCM now supports MySQL Cluster 7.4 (as well as earlier versions of MySQL Cluster).

Documentation is available here.





MySQL Cluster Asynchronous Replication – conflict detection & resolution

I was rooting through past blog entries and stumbled across a draft post on setting up multi-master (update anywhere) asynchronous replication for MySQL Cluster. The post never quite got finished and published; while the material is now 4 years old it may still be helpful to some and so I’m posting it now. Note that a lot has happened with MySQL Cluster in the last 4 years. In this area, the most notable change has been the enhanced conflict resolution with MySQL Cluster active-active replication introduced in MySQL Cluster 7.2; if you’re only dealing with a pair of Clusters, that’s your best option, as it removes the need for you to maintain the timestamp columns and backs out entire transactions rather than just the conflicting rows. So when would you use this “legacy” method? The main use case is when you want conflict detection/resolution among a ring of more than 2 Clusters. Note also that MySQL 5.6 (and so MySQL Cluster 7.3) added microsecond precision to timestamps and so you may not need the custom plugin that this post refers to.

Anyway, here’s the original post…

————————————————————
MySQL Cluster asynchronous replication allows you to run in a multi-master mode with the application making changes to both sites (or more than 2 sites using a replication ring). As the replication is asynchronous, if the application(s) modified the same row on both sites at ‘about the same time’ then there is a potential for a collision. Left to their own devices, each site would store (and provide to the application) different data indefinitely. This article explains how to use MySQL Cluster collision detection and resolution to cope with this.

Fig. 1 Multi-master replication leading to inconsistencies

Fig. 1 shows the timeline that can result in a conflict. The same or two different applications make a change to the same row in the table but through the 2 different instances of MySQL Cluster. Each Cluster synchronously replicates the data amongst its local data nodes in order to provide local High Availability (everything there is safe!). At some point later (normally a fraction of a second), the changes are replicated to the remote site asynchronously – this delay opens a window for a conflict where Cluster 2 is updated by the application just before it receives the earlier update from Cluster 1. Cluster 2 will overwrite its row with the value (5) it has received, but only after its earlier change (made directly by the application) has been written to the binary log ready for replication to Cluster 1 – which in turn will cause that value (15) to be stored by Cluster 1. Each Cluster instance replicates what it believes to be the correct data to the other site – overwriting what that site had previously stored. In our example, that leaves one database holding the value 15 for key ‘A’ while the other stores 5.

It’s often the case that an application will tend to access the same data through the same site during any given period, which reduces the chances of a conflict, but the application may still want to guard against (even rare) race conditions. If replication slows down (for example due to a backlog of updates to be applied) or stops temporarily (for example due to a network failure to the geographically remote site) then the chances of a collision greatly increase.

For information on setting up multi-master asynchronous replication with MySQL Cluster, please take a look at Setting up MySQL Asynchronous Replication for High Availability.

Conflict Detection & Resolution using MySQL Cluster

MySQL Cluster provides two different schemes to handle these collisions/conflicts. The first scheme (referred to as “greatest timestamp wins”) detects that a conflict has occurred and automatically resolves it (the change most recently received from the application is stored on both Clusters). The second scheme (referred to as “same timestamp wins”) detects that a conflict has occurred but does not fix it – instead the conflict is recorded in such a way that the application (or user) can figure out how best to resolve it based on a full understanding of the schema, what the data means and how it’s used. It is up to the developer which approach they use (if any) – it is selected on a per-table basis.

Common prerequisite steps

These steps should be followed regardless of whether you want to use conflict resolution or conflict detection (where the application decides how to resolve it).

  1. Set up multi-master replication as described in Setting up MySQL Asynchronous Replication for High Availability
  2. Create the function “inttime” for use in the triggers as described in Creating a MySQL plugin to produce an integer timestamp. Note that you will need to install inttime.so on each host.

Setting up Automatic Conflict Resolution (Greatest timestamp wins)

This is the simplest way to handle conflicts with MySQL Cluster when implementing multi-master asynchronous replication (actually, the simplest is to do nothing and accept that if your application(s) update the same row at about the same time at both Clusters then those Clusters may be left with different data until the application(s) next update that row).

Remember that this mechanism works by checking that the timestamp field of the update received by the slave is later than the one already stored. In the example that follows, the ‘ts’ column is used for the timestamp.

Create the database on either cluster (replication will make sure that it appears in both Clusters):

mysql> create database clusterdb;

Before creating the application tables, set up the ndb_replication system table (again, in either Cluster):

mysql> CREATE TABLE mysql.ndb_replication (db VARBINARY(63), table_name VARBINARY(63),
  server_id INT UNSIGNED, binlog_type INT UNSIGNED, conflict_fn VARBINARY(128),
  PRIMARY KEY USING HASH (db, table_name, server_id)) ENGINE=NDB
  PARTITION BY KEY(db,table_name);

mysql> insert into mysql.ndb_replication values ('clusterdb', 'tab1', 7, NULL, 'NDB$MAX(ts)');

After that, you can create the application table:

cluster1 mysql> use clusterdb;

cluster1 mysql> create table tab1 (NAME varchar(30) not null primary key,VALUE int, ts BIGINT UNSIGNED default NULL) engine=ndb;

To test that the basic replication is working for this table, insert a row into cluster1, check it’s there in cluster2, add a second row to cluster2 and make sure it’s visible in cluster1:

cluster1 mysql> insert into tab1 values ('Frederick', 1, 0);

cluster2 mysql> use clusterdb;
cluster2 mysql> select * from tab1;
+-----------+-------+------+
| NAME      | VALUE | ts   |
+-----------+-------+------+
| Frederick |     1 |    0 |
+-----------+-------+------+
1 row in set (0.00 sec)
cluster2 mysql> insert into tab1 values ('William',20,0);

cluster1 mysql> select * from tab1;
+-----------+-------+------+
| NAME      | VALUE | ts   |
+-----------+-------+------+
| Frederick |     1 |    0 |
| William   |    20 |    0 |
+-----------+-------+------+
2 rows in set (0.00 sec)

For both rows, the timestamp was set to 0 to represent ‘the start of time’; from this point on, whenever making a change to those rows, the timestamp should be increased. Later on in this article, I’ll show how to automate that process.

We’re now ready to test that the conflict resolution is working; to do so, replication is stopped (in both directions) to increase the window for a conflict and the same tuple is updated on each Cluster. Replication is then restarted and I’ll confirm that the last update wins on both Clusters:

cluster1 mysql> stop slave;

cluster2 mysql> stop slave;

cluster1 mysql> update tab1 set VALUE=10,ts=1 where NAME='Frederick';

cluster2 mysql> update tab1 set VALUE=11,ts=2 where NAME='Frederick';

cluster1 mysql> start slave;

cluster2 mysql> start slave;

cluster1 mysql> select * from tab1;
+-----------+-------+------+
| NAME      | VALUE | ts   |
+-----------+-------+------+
| William   |    20 |    0 |
| Frederick |    11 |    2 |
+-----------+-------+------+
2 rows in set (0.00 sec)

cluster2 mysql> select * from tab1;
+-----------+-------+------+
| NAME      | VALUE | ts   |
+-----------+-------+------+
| William   |    20 |    0 |
| Frederick |    11 |    2 |
+-----------+-------+------+
2 rows in set (0.00 sec)

This confirms that the later update (timestamp of 2) is stored in both Clusters – conflict resolved!

Automating the timestamp column

Manually setting the timestamp value is convenient when testing that the mechanism is working as expected but could be a nuisance in a production environment (for example, you would need to get the clocks of all application nodes exactly in sync wherever in the world they’re located). This section describes how that can be automated using triggers (note that triggers don’t fire when using the NDB API to make changes but in that situation it should be straightforward to provide wrapper methods that manage the timestamp field). Note that the timestamp must be an integer field (and needs a high level of precision) and so you can’t use the regular MySQL TIMESTAMP type.

This mechanism assumes that you’ve built “inttime.so” and deployed it to the hosts running the mysqld processes for each cluster (refer to the prerequisite section).

cluster1 mysql> create trigger tab1_insert before insert on tab1 for each row set NEW.ts=inttime();
cluster1 mysql> create trigger tab1_update before update on tab1 for each row set NEW.ts=inttime();
cluster1 mysql> insert into tab1 (NAME,VALUE) values ('James',10),('David',20);
cluster1 mysql> select * from tab1;
+-----------+-------+------------------+
| NAME      | VALUE | ts               |
+-----------+-------+------------------+
| William   |    20 |                0 |
| David     |    20 | 1250090500370307 |
| James     |    10 | 1250090500370024 |
| Frederick |    11 |                2 |
+-----------+-------+------------------+
4 rows in set (0.00 sec)

cluster2 mysql> update tab1 set VALUE=55 where NAME='William';
cluster2 mysql> select * from tab1;
+-----------+-------+------------------+
| NAME      | VALUE | ts               |
+-----------+-------+------------------+
| James     |    10 | 1250090500370024 |
| Frederick |    11 |                2 |
| William   |    55 | 1250090607251846 |
| David     |    20 | 1250090500370307 |
+-----------+-------+------------------+
4 rows in set (0.00 sec)

Setting up Conflict Detection (Same timestamp wins)

With this method, conflicts are detected and recorded but not automatically resolved. The intent is to allow the application to decide how to handle the conflict based on an understanding of what the data means.

Create the database on either cluster (replication will make sure that it appears in both Clusters):

mysql> create database clusterdb;

Before creating the application tables, set up the ndb_replication system table (again, in either Cluster):

mysql> CREATE TABLE mysql.ndb_replication (db VARBINARY(63), table_name VARBINARY(63),
  server_id INT UNSIGNED, binlog_type INT UNSIGNED, conflict_fn VARBINARY(128),
  PRIMARY KEY USING HASH (db, table_name, server_id)) ENGINE=NDB
  PARTITION BY KEY(db,table_name);

mysql> insert into mysql.ndb_replication values ('clusterdb', 'SubStatus', 7, NULL, 'NDB$OLD(ts)');

After that, you can create the application table and its associated exception table:

cluster1 mysql> use clusterdb;
cluster1 mysql> create table SubStatus$EX (server_id INT UNSIGNED, master_server_id INT UNSIGNED,
  master_epoch BIGINT UNSIGNED, count INT UNSIGNED, sub_id int not null,
  notes VARCHAR(30) DEFAULT 'To be resolved',
  PRIMARY KEY (server_id, master_server_id, master_epoch, count)) engine=ndb;
cluster1 mysql> create table SubStatus (sub_id int not null primary key, ActivationStatus varchar(20), ts BIGINT default 0) engine=ndb;

To test that the exception table gets filled in, add some rows to the table and then cause an update conflict (in a similar way to the conflict resolution example but after setting up the timestamp automation):

cluster1 mysql> create trigger SubStatus_insert before insert on SubStatus for each row set NEW.ts=inttime();
cluster1 mysql> insert into SubStatus (sub_id, ActivationStatus) values (1,'Active'),(2,'Deactivated');
cluster1 mysql> select * from SubStatus;
+--------+------------------+------------------+
| sub_id | ActivationStatus | ts               |
+--------+------------------+------------------+
|      1 | Active           | 1250094170589948 |
|      2 | Deactivated      | 1250094170590250 |
+--------+------------------+------------------+
2 rows in set (0.00 sec)

cluster2 mysql> use clusterdb;
cluster2 mysql> select * from SubStatus;
+--------+------------------+------------------+
| sub_id | ActivationStatus | ts               |
+--------+------------------+------------------+
|      1 | Active           | 1250094170589948 |
|      2 | Deactivated      | 1250094170590250 |
+--------+------------------+------------------+
2 rows in set (0.00 sec)

cluster1 mysql> stop slave;

cluster2 mysql> stop slave;

...

(At this point, just go on to test as with the conflict resolution example, but in this case expect to see that the conflict is not resolved and instead an entry is added to the conflict table.)

Of course, you can always add a trigger on the conflict table and use that to spur the application into initiating its own conflict resolution algorithm.
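Alternatively, the application can simply poll the exceptions table. Here is a minimal sketch (reusing the optional notes column defined above) that lists the outstanding conflicts alongside the current row values and then marks one of them as handled:

cluster1 mysql> SELECT ex.sub_id, s.ActivationStatus, s.ts
  FROM SubStatus$EX ex JOIN SubStatus s USING (sub_id)
  WHERE ex.notes='To be resolved';
cluster1 mysql> UPDATE SubStatus$EX SET notes='Resolved' WHERE sub_id=1;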





MySQL Cluster 7.3.2 is released on E-delivery

The binary version for MySQL Cluster 7.3.2 has now been made available at http://www.mysql.com/downloads/cluster/ (GPL version) or Oracle’s Software Delivery Cloud for the commercial version.

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.3.2 (compared to 7.3.1) is available from the 7.3.2 Change log.





MySQL Cluster 7.3 is now Generally Available – an overview

MySQL Cluster 7.3 has now been declared GA! This means that you can deploy it in your live systems and get support from Oracle.

This post briefly describes the main new features in the release; for a deeper dive, refer to the What’s new in MySQL Cluster 7.3 white paper and the more specialised blog posts that you’ll find links to from this post.

I’ll also be giving more details in the MySQL Cluster 7.3 Webinar which is scheduled for 09:00 Pacific / Noon Eastern / 17:00 UK / 18:00 CET this Thursday (20th June). This is a great opportunity to get your questions answered in real-time by the experts. As usual, the webinar is free but you should register here ahead of time. Even if you can’t attend it’s worth registering as you’ll then be sent a link to the replay.

MySQL Cluster Auto-Installer

The MySQL Cluster Auto-Installer is a browser-based GUI that will provision a well configured, distributed Cluster in minutes, ready for test, development or production environments.

A major priority for this release is to make it much easier and faster to provision a cluster that is well tuned for your application and environment; we want you to focus on exploiting the benefits of MySQL Cluster in your application rather than on figuring out how to install, configure and start the database. The MySQL Cluster Auto-Installer provides a browser-based GUI which steps you through creating a Cluster tailored to your requirements. For a really good view of how the tool works, a tutorial video and a detailed worked example are available from the blog post: MySQL Cluster 7.3 MySQL Cluster Auto-Installer.

Foreign Keys

Tables with Foreign Key constraint

Foreign Keys (FKs) are a way of implementing relationships/constraints between columns in different tables. For example, we may want to make sure that the value of the county column in the towns table has an associated entry in the counties table. In this way, no-one can place a town in a non-existent county and similarly no-one can remove a county and leave orphaned towns behind.

We believe that this is going to enable a whole new set of applications to exploit the advantages of MySQL Cluster where:

  • Developers want to simplify their application by pushing referential checks down into the database
  • The application is built upon 3rd party middleware that is dependent on FKs
  • The application is already so dependent on FKs that it would be too complex to remove them

Note that the FK constraints will be applied regardless of how data is subsequently written (i.e. through SQL or any of the available NoSQL APIs that bypass the MySQL Server) – this ensures that the intended data integrity is always maintained.

More details and a worked example of using Foreign Keys with MySQL Cluster can be found in this post: Foreign Keys in MySQL Cluster.

JavaScript Driver for Node.js

Node.js is a platform that allows fast, scalable network applications (typically web applications) to be developed using JavaScript. Node.js is designed for a single thread to serve millions of client connections in real-time – this is achieved by an asynchronous, event-driven architecture – just like MySQL Cluster, making them a great match.

The MySQL Cluster NoSQL Driver for Node.js is implemented as a module for the V8 engine, providing Node.js with a native, asynchronous JavaScript interface that can be used to both query and receive results sets directly from MySQL Cluster, without transformations to SQL. As an added benefit, you can direct the driver to use SQL so that the same API can be used with InnoDB tables.

With the MySQL Cluster JavaScript Driver for Node.js, architects can re-use JavaScript from the client to the server, all the way through to a distributed, fault-tolerant, transactional database supporting real-time, high-scale services.

Developing an application to use this API is very straightforward and an example application with full setup instructions is provided in the post Using JavaScript and Node.js with MySQL Cluster – First steps.

Connection Thread scalability

MySQL Cluster thrives when it is offered as many operations in parallel as possible. To achieve this, parallelism should be configured at each layer. There should be multiple application threads sending work to the MySQL Server (or other API), there should be multiple MySQL Servers and finally multiple connections between the MySQL Server (or other API node) and the data nodes. This is explained in more detail in the MySQL Cluster Performance white paper.

Increased throughput with Connection Thread Scalability

Each of the connections to the data nodes consumes one of the 256 available node-ids and so in some scenarios they could cap the scalability of the Cluster. MySQL Cluster 7.3 greatly increases the throughput of each of these connections, meaning that fewer connections (and therefore node-ids) are needed to handle the same workload; this in turn means that more API nodes and data nodes can be added to the Cluster to scale capacity and performance even further. Benchmarks have shown up to an 8x increase in throughput per connection. The graph illustrates how fewer NDB API connection threads in MySQL Cluster 7.3 can deliver increased throughput compared with MySQL Cluster 7.2; the benchmark was based on DBT2 using a single data node, a single MySQL Server and 128 client connections.
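As a sketch of how that last level of parallelism is configured (the values here are purely illustrative), each mysqld can be told to open a pool of NDB API connections through its my.cnf; remember that each pooled connection consumes one of the [api] slots (node-ids) defined in config.ini:

[mysqld]
ndbcluster
ndb-connectstring=mgmd_host:1186
# 4 NDB API connections, each needing a free [api] node-id
ndb-cluster-connection-pool=4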

MySQL 5.6 Server

MySQL Cluster 7.3 also rebases onto MySQL 5.6: the MySQL Server that comes with MySQL Cluster 7.3 is based on MySQL 5.6. This means that when you mix and match MySQL Cluster (NDB) and InnoDB tables, you’ll be getting the benefits of the latest and greatest version of each storage engine.
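As a simple illustration (table names invented), tables using the two storage engines can live side by side in the same schema and even be joined in a single query:

mysql> CREATE TABLE sessions (id INT PRIMARY KEY, user_name VARCHAR(30)) ENGINE=ndb;
mysql> CREATE TABLE history (id INT PRIMARY KEY, user_name VARCHAR(30),
  action VARCHAR(30)) ENGINE=InnoDB;
mysql> SELECT s.user_name, h.action FROM sessions s
  JOIN history h ON s.user_name=h.user_name;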

Try it out!

Please go ahead and download and start experimenting with MySQL Cluster 7.3. We look forward to hearing how you get on!





Foreign Keys in MySQL Cluster

Foreign Key constraints between tables

The newly announced GA of MySQL Cluster 7.3 (7.3.2) builds upon the second DMR (7.3.1 m2) released earlier in the year, which added Foreign Keys to MySQL Cluster. Foreign Keys are a feature requested by many, many people and their absence has often been cited as the reason for not being able to replace InnoDB with MySQL Cluster when the extra availability or scalability was needed.

Note that this post is an up-version of the original – and was first published with the 7.3 labs release in June 2012.

What’s a Foreign Key

Most readers will already be familiar with Foreign Keys and can skip to the next section.

Foreign Keys are a way of implementing relationships/constraints between columns in different tables. For example, in the above figure, we want to make sure that the value of the county column in the towns table has an associated entry in the counties table. In this way, no-one can place a town in a non-existent county and similarly no one can remove a county and leave orphaned towns behind.

We refer to the towns table as the child and the counties table as the parent.

There are different categories of constraints that influence how they’re enforced when a row is updated or deleted from the parent table:

  • Cascade: If a row is deleted from the parent then any rows in the child table with a matching FK value will also be deleted. Similarly for changes to the value in the parent table.
  • Restrict: A row cannot be deleted from the parent table if this would break a FK constraint with the child table. Similarly for changes to the value in the parent table.
  • No Action: Very similar to “Restrict” except that any events/triggers on the parent table will be executed before the constraint is enforced – giving the application writer the option to resolve any FK constraint conflicts using a stored procedure.
  • Set NULL: If NULL is a permitted value for the FK column in the child table then it will be set to NULL if the associated data in the parent table is updated or deleted (see the sketch after this list).
  • Set Default: If there is a default value for the FK column in the child table then it will be used if the associated data in the parent table is updated or deleted. Note that this is not implemented in this version – the constraint can be added to the schema but any subsequent deletion or update to the column in the parent table will fail.
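As a sketch of the Set NULL option (table and constraint names invented for illustration), deleting a row from the parents table leaves matching child rows in place with their parent_id reset to NULL:

mysql> CREATE TABLE parents (id INT NOT NULL PRIMARY KEY) ENGINE=ndb;
mysql> CREATE TABLE children (id INT NOT NULL PRIMARY KEY, parent_id INT,
  INDEX parent_index (parent_id), CONSTRAINT fk_parent FOREIGN KEY (parent_id)
  REFERENCES parents(id) ON DELETE SET NULL ON UPDATE SET NULL) ENGINE=ndb;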

How to use Foreign Keys with MySQL Cluster

For anyone familiar with using FKs with InnoDB, a lot of this will be very similar – that’s intentional in the design of this Cluster feature. Throughout this section we will use the tables shown in the above figure.

First of all, create the tables and add some data:

mysql> CREATE DATABASE clusterdb;USE clusterdb;
mysql> CREATE TABLE counties (county VARCHAR(30) NOT NULL PRIMARY KEY, country VARCHAR(30))
  ENGINE=ndb;
mysql> REPLACE INTO counties VALUES ('Berkshire','England'),('Shropshire','England'),
  ('Oxfordshire','England'),('Buckinghamshire','England');
mysql> CREATE TABLE towns (town VARCHAR(30) NOT NULL PRIMARY KEY, county VARCHAR(30),
  INDEX county_county_index (county), CONSTRAINT county_town FOREIGN KEY (county) 
  REFERENCES counties(county) ON DELETE RESTRICT ON UPDATE RESTRICT) ENGINE=ndb;
mysql> REPLACE INTO towns VALUES ('Maidenhead','Berkshire'),('Reading','Berkshire'),
  ('Shrewsbury','Shropshire'),('Oxford','Oxfordshire');
mysql> SELECT * FROM counties;SELECT * FROM towns;
+-----------------+---------+
| county          | country |
+-----------------+---------+
| Shropshire      | England |
| Buckinghamshire | England |
| Berkshire       | England |
| Oxfordshire     | England |
+-----------------+---------+
4 rows in set (0.00 sec)

+------------+-------------+
| town       | county      |
+------------+-------------+
| Reading    | Berkshire   |
| Shrewsbury | Shropshire  |
| Maidenhead | Berkshire   |
| Oxford     | Oxfordshire |
+------------+-------------+
4 rows in set (0.00 sec)

Note that the FK is actually defined as part of creating the towns table with CONSTRAINT county_town FOREIGN KEY (county) REFERENCES counties(county) ON DELETE RESTRICT ON UPDATE RESTRICT. county_town is the label assigned to the constraint (if you don’t specify one then one will be automatically created – you can see it by executing SHOW CREATE TABLE towns). The constraint is set up from the county column in the towns (child) table to the county column in the counties (parent) table.

As a first check, we can confirm that values that are not part of the constraint can be modified in the parent table – in this case we’ll move the England/Wales border:

mysql> UPDATE counties SET country='WALES' WHERE county='Shropshire';
Query OK, 1 row affected (0.00 sec)
Rows matched: 1  Changed: 1  Warnings: 0

Next, to illustrate the kind of user error that can be prevented, we attempt to add a town in a county that has not yet been defined – we want this to fail as it would break the constraint we created:

mysql> INSERT towns VALUES ('Cambridge','Cambridgeshire');
ERROR 1452 (23000): Cannot add or update a child row: a foreign key constraint fails
  (Unknown error code)
mysql> SHOW WARNINGS;
+---------+------+---------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                               |
+---------+------+---------------------------------------------------------------------------------------+
| Warning | 1296 | Got error 255 'Foreign key constaint violated: No parent row found' from NDB          |
| Error   | 1452 | Cannot add or update a child row: a foreign key constraint fails (Unknown error code) |
+---------+------+---------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

We can also confirm that the constraint checking works in the opposite direction, by attempting to delete a county that has towns defined for it:

mysql> DELETE FROM counties WHERE county='Berkshire';
ERROR 1451 (23000): Cannot delete or update a parent row: a foreign key constraint fails (Unknown error code)
mysql> SHOW WARNINGS;
+---------+------+-------------------------------------------------------------------------------------------+
| Level   | Code | Message                                                                                   |
+---------+------+-------------------------------------------------------------------------------------------+
| Warning | 1296 | Got error 256 'Foreign key constaint violated: Referenced row exists' from NDB            |
| Error   | 1451 | Cannot delete or update a parent row: a foreign key constraint fails (Unknown error code) |
+---------+------+-------------------------------------------------------------------------------------------+
2 rows in set (0.00 sec)

It is also possible to delete FK constraints and add new ones; we do that next, and at the same time we’ll switch to a CASCADE constraint. Note that adding and deleting FKs are on-line operations. In case you have forgotten the name of the constraint (or had not explicitly named it) we can use the SHOW CREATE TABLE command to remind us:

mysql> SHOW CREATE TABLE towns;
+-------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Table | Create Table                                                                                                                                                                                                                                                                                                                 |
+-------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| towns | CREATE TABLE `towns` (
  `town` varchar(30) NOT NULL,
  `county` varchar(30) DEFAULT NULL,
  PRIMARY KEY (`town`),
  KEY `county_county_index` (`county`),
CONSTRAINT `county_town` FOREIGN KEY(`county`) REFERENCES `counties` (`county`)
  ON DELETE RESTRICT ON UPDATE RESTRICT
) ENGINE=ndbcluster DEFAULT CHARSET=latin1 |
+-------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
1 row in set (0.00 sec)

mysql> ALTER TABLE towns DROP FOREIGN KEY county_town;
Query OK, 0 rows affected (0.47 sec)
Records: 0  Duplicates: 0  Warnings: 0

mysql> ALTER TABLE towns ADD CONSTRAINT county_town FOREIGN KEY (county)
  REFERENCES counties(county) ON DELETE CASCADE ON UPDATE CASCADE;

The data in the tables will not have changed as part of dropping and adding constraints and so you can always DROP and then re-add constraints if you need to turn FKs off for a while:

mysql> SELECT * FROM counties;SELECT * FROM towns;
+-----------------+---------+
| county          | country |
+-----------------+---------+
| Buckinghamshire | England |
| Berkshire       | England |
| Oxfordshire     | England |
| Shropshire      | WALES   |
+-----------------+---------+
4 rows in set (0.00 sec)

+------------+-------------+
| town       | county      |
+------------+-------------+
| Reading    | Berkshire   |
| Shrewsbury | Shropshire  |
| Maidenhead | Berkshire   |
| Oxford     | Oxfordshire |
+------------+-------------+
4 rows in set (0.00 sec)

Now if we delete the county of Berkshire then the CASCADE option will cause all of the associated towns to be deleted too:

mysql> DELETE FROM counties WHERE county='Berkshire';
Query OK, 1 row affected (0.00 sec)

mysql> SELECT * FROM counties;SELECT * FROM towns;
+-----------------+---------+
| county          | country |
+-----------------+---------+
| Shropshire      | WALES   |
| Buckinghamshire | England |
| Oxfordshire     | England |
+-----------------+---------+
3 rows in set (0.00 sec)

+------------+-------------+
| town       | county      |
+------------+-------------+
| Shrewsbury | Shropshire  |
| Oxford     | Oxfordshire |
+------------+-------------+
2 rows in set (0.00 sec)

Note that MySQL Cluster implements FKs in the data nodes and the constraints will be enforced regardless of how the data is written (through SQL, JavaScript, Memcached, ClusterJ, REST or directly through the NDB API). Before illustrating this, we add back the data we have just deleted:

mysql> REPLACE INTO counties VALUES ('Berkshire','England');
Query OK, 1 row affected (0.00 sec)

mysql> REPLACE INTO towns VALUES ('Maidenhead','Berkshire'),('Reading','Berkshire');
Query OK, 2 rows affected (0.00 sec)
Records: 2  Duplicates: 0  Warnings: 0

mysql> SELECT * FROM counties;SELECT * FROM towns;
+-----------------+---------+
| county          | country |
+-----------------+---------+
| Shropshire      | WALES   |
| Buckinghamshire | England |
| Berkshire       | England |
| Oxfordshire     | England |
+-----------------+---------+
4 rows in set (0.00 sec)

+------------+-------------+
| town       | county      |
+------------+-------------+
| Reading    | Berkshire   |
| Shrewsbury | Shropshire  |
| Maidenhead | Berkshire   |
| Oxford     | Oxfordshire |
+------------+-------------+
4 rows in set (0.00 sec)

We’ll then run a simple C++ app to delete the ‘Berkshire’ row from the counties table. The full source for the app is available here but the key part of the code is this:

static void do_change(Ndb &myNdb)
{
  const NdbDictionary::Dictionary* myDict= myNdb.getDictionary();
  const NdbDictionary::Table *countiesTable= myDict->getTable("counties");

  // NDB API VARCHAR values are length-prefixed: the first byte holds the length
  char royalBorough[255];
  royalBorough[0]=9;
  strcpy(royalBorough + 1, "Berkshire");

  if (countiesTable == NULL)
    APIERROR(myDict->getNdbError());

  NdbTransaction *myTransaction= myNdb.startTransaction();
  if (myTransaction == NULL) APIERROR(myNdb.getNdbError());

  NdbOperation *myOperation= myTransaction->getNdbOperation(countiesTable);
  if (myOperation == NULL) APIERROR(myTransaction->getNdbError());

  std::cout << "About to delete " << royalBorough << "\n";

  // Delete the row whose primary key column 'county' matches royalBorough
  myOperation->deleteTuple();
  myOperation->equal("county", (const char *) royalBorough);

  if (myTransaction->execute( NdbTransaction::Commit ) == -1)
    APIERROR(myTransaction->getNdbError());

  myNdb.closeTransaction(myTransaction);
}

We then run this code and confirm that the cascaded delete has worked:

billy@ws2:~/projects/fk$ ./testfk localhost:1186
About to delete         Berkshire

mysql> SELECT * FROM counties;SELECT * FROM towns;
+-----------------+---------+
| county          | country |
+-----------------+---------+
| Shropshire      | WALES   |
| Buckinghamshire | England |
| Oxfordshire     | England |
+-----------------+---------+
3 rows in set (0.00 sec)

+------------+-------------+
| town       | county      |
+------------+-------------+
| Shrewsbury | Shropshire  |
| Oxford     | Oxfordshire |
+------------+-------------+
2 rows in set (0.00 sec)

Comparison with InnoDB’s Foreign Keys

As mentioned, we’ve attempted to mirror the way that InnoDB implements Foreign Keys as much as possible, as that’s what we think will allow people to move more easily to MySQL Cluster when appropriate. There are a few exceptions though:

  • InnoDB doesn’t support “No Action” constraints; MySQL Cluster does
  • You cannot set up FKs between 2 tables where one is stored using MySQL Cluster and the other InnoDB.
  • You cannot change primary keys through the NDB API which means that the MySQL Server actually has to simulate such operations by deleting and re-adding the row. If the PK in the parent table has a FK constraint on it then this causes non-ideal behaviour. With Restrict or No Action constraints, the change will result in an error. With Cascaded constraints, you’d want the rows in the child table to be updated with the new FK value but the implicit delete of the row from the parent table would remove the associated rows from the child table and the subsequent implicit insert into the parent wouldn’t reinstate the child rows. For this reason, an attempt to add an ON UPDATE CASCADE where the parent column is a primary key will be rejected.

MySQL Workbench

Viewing and updating Foreign Key Constraints in MySQL Workbench

While not really the subject of this post, MySQL Workbench is a great tool for database developers and one of its features is the creation, modification and reverse engineering of schemas. For example, you could import your existing schema, add a new table and then introduce a FK constraint between an existing table and your new one. Previously, Workbench would only allow you to play with FKs for InnoDB tables.

Closing remarks

We believe that this is going to enable a whole new set of applications to exploit the advantages of MySQL Cluster (on-line scalability, real-time performance, NoSQL APIs, on-line schema changes,…) either because:

  • they want to simplify their application by pushing these referential checks down into the database – letting the developer focus more of their time on extending the application;
  • they’re stuck with 3rd party middleware that is dependent on FKs;
  • or the application is already so dependent on FKs that it would be too much pain to remove them.

If you are new to MySQL Cluster then here are some useful resources to get you started:





“MySQL & Friends devroom” at FOSDEM 2013

FOSDEM 2013

The Free and Open source Software Developers’ European Meeting (FOSDEM) is a two-day event organized by volunteers to promote the widespread use of Free and Open Source software. As in previous years, there is a dedicated stream of MySQL sessions. On Saturday (2nd Feb) evening there’s a MySQL community dinner and then we have a packed program from 9:15 through 17:30 on Sunday (3rd Feb).

FOSDEM 2013 is a free event and there’s no requirement to pre-register – just get yourself along to Brussels.

This year I’ll be making a presentation introducing MySQL Cluster. Several of my colleagues from Oracle will also be there to present on the latest and greatest MySQL capabilities – including what’s coming in MySQL 5.6. This will be a great opportunity to politely listen to some real technical experts but an even better one to pester them for extra details – whether during the presentations or when you can grab them at other times.

Hope to see some of you there!





MySQL Cluster Manager 1.2 – using the new features

Oracle have just announced that MySQL Cluster Manager 1.2 is Generally Available. For anyone not familiar with MySQL Cluster Manager – it’s a command-line management tool that makes it simpler and safer to manage your MySQL Cluster deployment – use it to create, configure, start, stop, upgrade…. your cluster.

So what has changed since MCM 1.1 was released?

The first thing is that a lot of work has happened under the covers and it’s now faster, more robust and can manage larger clusters. Feature-wise you get the following (note that a couple of these were released early as part of post-GA versions of MCM 1.1):

  • Automation of on-line backup and restore
  • Single command to start MCM and a single-host Cluster
  • Multiple clusters per site
  • Single command to stop all of the MCM agents in a Cluster
  • Provide more details in “show status” command
  • Ability to perform an “initial” restart of the data nodes in order to wipe out the database ahead of a restore

A new version of the MySQL Cluster Manager white paper has been released that explains everything that you can do with it and also includes a tutorial for the key features; you can download it here.

Watch this video for a tutorial on using MySQL Cluster Manager, including the new features:

Using the new features

Single command to run MCM and then create and run a Cluster

A single-host cluster can very easily be created and run – an easy way to start experimenting with MySQL Cluster:

billy@black:~$ mcm/bin/mcmd --bootstrap
		
MySQL Cluster Manager 1.2.1 started
Connect to MySQL Cluster Manager by running "/home/billy/mcm-1.2.1-cluster-7.2.9_32-linux-rhel5-x86/bin/mcm" -a black.localdomain:1862
Configuring default cluster 'mycluster'...
Starting default cluster 'mycluster'...
Cluster 'mycluster' started successfully
        ndb_mgmd        black.localdomain:1186
        ndbd            black.localdomain
        ndbd            black.localdomain
        mysqld          black.localdomain:3306
        mysqld          black.localdomain:3307
        ndbapi          *
Connect to the database by running "/home/billy/mcm-1.2.1-cluster-7.2.9_32-linux-rhel5-x86/cluster/bin/mysql" -h black.localdomain -P 3306 -u root

You can then connect to MCM:

billy@black:~$ mcm/bin/mcm 

Or access the database itself simply by running the regular mysql client.

Extra status information

When querying the status of the processes in a Cluster, you’re now also shown the package being used for each node:

mcm> show status --process mycluster;
+--------+----------+-------+---------+-----------+---------+
| NodeId | Process  | Host  | Status  | Nodegroup | Package |
+--------+----------+-------+---------+-----------+---------+
| 49     | ndb_mgmd | black | running |           | 7.2.9   |
| 50     | ndb_mgmd | blue  | running |           | 7.2.9   |
| 1      | ndbd     | green | running | 0         | 7.2.9   |
| 2      | ndbd     | brown | running | 0         | 7.2.9   |
| 3      | ndbd     | green | running | 1         | 7.2.9   |
| 4      | ndbd     | brown | running | 1         | 7.2.9   |
| 51     | mysqld   | black | running |           | 7.2.9   |
| 52     | mysqld   | blue  | running |           | 7.2.9   |
+--------+----------+-------+---------+-----------+---------+

Simplified on-line backup & restore

MySQL Cluster supports on-line backups (and the subsequent restore of that data); MySQL Cluster Manager 1.2 simplifies the process.

The database can be backed up with a single command (which in turn makes every data node in the cluster back up its data):

mcm> backup cluster mycluster;

The list command can be used to identify what backups are available in the cluster:

mcm> list backups mycluster;

+----------+--------+--------+----------------------+
| BackupId | NodeId | Host   | Timestamp            |
+----------+--------+--------+----------------------+
| 1        | 1      | green  | 2012-11-31T06:41:36Z |
| 1        | 2      | brown  | 2012-11-31T06:41:36Z |
| 1        | 3      | green  | 2012-11-31T06:41:36Z |
| 1        | 4      | brown  | 2012-11-31T06:41:36Z |
| 1        | 5      | purple | 2012-11-31T06:41:36Z |
| 1        | 6      | red    | 2012-11-31T06:41:36Z |
| 1        | 7      | purple | 2012-11-31T06:41:36Z |
| 1        | 8      | red    | 2012-11-31T06:41:36Z |
+----------+--------+--------+----------------------+

You may then select which of these backups you want to restore by specifying the associated BackupId when invoking the restore command:

mcm> restore cluster -I 1 mycluster;

Note that if you need to empty the database of its existing contents before performing the restore then MCM 1.2 introduces the --initial option to the start cluster command, which will delete all data from all MySQL Cluster tables.
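Putting that together, a typical wipe-and-restore sequence might look like this (using the cluster name and BackupId from the examples above):

mcm> stop cluster mycluster;
mcm> start cluster --initial mycluster;
mcm> restore cluster -I 1 mycluster;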

Stopping all MCM agents for a site

A single command will now stop all of the agents for your site:

mcm> stop agents mysite;

Getting started with MySQL Cluster Manager

You can fetch the MCM binaries from edelivery.oracle.com and then see how to use it in the MySQL Cluster Manager white paper.

Please try it out and let us know how you get on!





MySQL Cluster 7.1.23 has been released

The binary & source versions for MySQL Cluster 7.1.23 have now been made available at https://www.mysql.com/downloads/cluster/7.1.html#downloads (GPL version) or https://support.oracle.com/ (commercial version).

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.1.23 (compared to 7.1.22) is available from the 7.1.23 Change log.





MySQL Cluster 7.2.7 released

The binary version for MySQL Cluster 7.2.7 has now been made available at http://www.mysql.com/downloads/cluster/ (GPL version) or https://support.oracle.com/ (commercial version).

A description of all of the changes (fixes) that have gone into MySQL Cluster 7.2.7 (compared to 7.2.6) is available from the 7.2.7 Change log.