On Thursday I’ll be hosting a webinar explaining how you can get the best from the NoSQL world while still getting all of the benefits of a proven RDBMS. As always, the webinar is free, but please register here.
Even if you can’t join the live webinar, it’s worth registering as you’ll be emailed a link to the replay as soon as it’s available.
- Thu, Mar 26: 09:00 Pacific time (America)
- Thu, Mar 26: 10:00 Mountain time (America)
- Thu, Mar 26: 11:00 Central time (America)
- Thu, Mar 26: 12:00 Eastern time (America)
- Thu, Mar 26: 13:00 São Paulo time
- Thu, Mar 26: 16:00 UTC
- Thu, Mar 26: 16:00 Western European time
- Thu, Mar 26: 17:00 Central European time
- Thu, Mar 26: 18:00 Eastern European time
- Thu, Mar 26: 21:30 India, Sri Lanka
- Fri, Mar 27: 00:00 Singapore/Malaysia/Philippines time
- Fri, Mar 27: 00:00 China time
- Fri, Mar 27: 01:00 Japan time
- Fri, Mar 27: 03:00 NSW, ACT, Victoria, Tasmania (Australia)
The binary and source versions of MySQL Cluster 7.4.5 have now been made available at http://www.mysql.com/downloads/cluster/.
MySQL Cluster NDB 7.4.5 is a new maintenance release of MySQL Cluster, based on MySQL Server 5.6 and including features from version 7.4 of the NDB storage engine, as well as fixing a number of recently discovered bugs in previous MySQL Cluster releases.
This release also incorporates all bugfixes and changes made in previous MySQL Cluster releases, as well as all bugfixes and feature changes which were added in mainline MySQL 5.6 through MySQL 5.6.23.
It was found during testing that problems could arise when the
node registered as the arbitrator disconnected or failed during
the arbitration process.
In this situation, the node requesting arbitration could never
receive a positive acknowledgement from the registered
arbitrator; this node also lacked a stable set of members and
could not initiate selection of a new arbitrator.
Now in such cases, when the arbitrator fails or loses contact
during arbitration, the requesting node immediately fails rather
than waiting to time out.
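The before/after behaviour described above can be sketched as a small decision function. This is purely illustrative (not the NDB implementation): the function names and states are assumptions, but they capture the change from waiting on a timeout to failing fast when the arbitrator is lost mid-arbitration.

```python
# Illustrative sketch of the requesting node's new behaviour when the
# registered arbitrator disconnects during arbitration. Previously the
# node waited for an acknowledgement that could never arrive; with the
# fix it fails immediately instead of timing out.

def handle_arbitration(arbitrator_alive: bool, ack_received: bool) -> str:
    if ack_received:
        return "win"        # positive acknowledgement from the arbitrator
    if not arbitrator_alive:
        return "fail-fast"  # fixed behaviour: fail now, don't wait to time out
    return "waiting"        # arbitrator still up; an ack may yet arrive

print(handle_arbitration(False, False))  # fail-fast
```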
DROP DATABASE failed to remove
the database when the database directory contained a
.ndb file which had no corresponding table
in NDB. Now, when executing
DROP DATABASE, NDB
performs a check specifically for leftover
.ndb files, and deletes any that it finds.
References: See also Bug #44529.
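The cleanup step the fix introduces can be modelled in a few lines. This is a hypothetical sketch, not NDB source: the function name and inputs are invented, and it simply shows the check described above (find .ndb files whose base name matches no known table, and mark them for deletion).

```python
# Hypothetical model of the new DROP DATABASE cleanup: identify leftover
# .ndb files in the database directory that have no corresponding table
# in the NDB dictionary, so the directory can finally be removed.
import os

def orphaned_ndb_files(db_dir_files, ndb_tables):
    """Return the .ndb files whose base name is not a known NDB table."""
    orphans = []
    for name in db_dir_files:
        base, ext = os.path.splitext(name)
        if ext == ".ndb" and base not in ndb_tables:
            orphans.append(name)  # leftover file blocking DROP DATABASE
    return orphans

print(orphaned_ndb_files(["t1.ndb", "t2.ndb", "t1.frm"], {"t1"}))  # ['t2.ndb']
```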
The maximum failure time calculation used to ensure that normal
node failure handling mechanisms are given time to handle
survivable cluster failures (before global checkpoint watchdog
mechanisms start to kill nodes due to GCP delays) was
excessively conservative, and neglected to consider that there
can be at most number_of_data_nodes / NoOfReplicas node
failures before the cluster can no longer survive. Now the value
of NoOfReplicas is properly taken into
account when performing this calculation.
(Bug #20069617, Bug #20069624)
References: See also Bug #19858151, Bug #20128256, Bug #20135976.
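The arithmetic behind this fix is simple enough to show directly. The sketch below is an assumed simplification (the function names and the 30-second per-failure allowance are invented for illustration): the point is that the failure-handling time budget only needs to cover the number of node failures the cluster can actually survive, which is bounded by NoOfReplicas.

```python
# Sketch of the bound the fix applies: with each fragment replicated
# NoOfReplicas times, at most num_data_nodes / NoOfReplicas node
# failures can occur before the cluster can no longer survive.

def max_survivable_failures(num_data_nodes: int, no_of_replicas: int) -> int:
    return num_data_nodes // no_of_replicas

def failure_handling_budget(num_data_nodes: int, no_of_replicas: int,
                            per_failure_secs: int) -> int:
    # Total time to allow for node failure handling before the GCP
    # watchdog may intervene: one allowance per survivable failure.
    return max_survivable_failures(num_data_nodes, no_of_replicas) * per_failure_secs

# 4 data nodes, NoOfReplicas=2: budget for 2 failures, not 4.
print(failure_handling_budget(4, 2, 30))  # 60
```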
During a node restart, if there was no global checkpoint
completed between the
START_LCP_REQ for a
local checkpoint and its
LCP_COMPLETE_REP, it
was possible for a comparison of the LCP ID sent in the
LCP_COMPLETE_REP signal with the internal
value SYSFILE->latestLCP_ID to fail.
(Bug #76113, Bug #20631645)
When sending LCP_FRAG_ORD signals as part of
master takeover, it is possible that the master is not
synchronized with complete accuracy in real time, so that some
signals must be dropped. During this time, the master can send an
LCP_FRAG_ORD signal with its
lastFragmentFlag set even after the local
checkpoint has been completed. This enhancement causes this flag
to persist until the start of the next local checkpoint, which
causes these signals to be dropped as well.
This change affects ndbd only; the issue
described did not occur with ndbmtd.
(Bug #75964, Bug #20567730)
When reading and copying transporter short signal data, it was
possible for the data to be copied back to the same signal with
overlapping memory.
(Bug #75930, Bug #20553247)
NDB node takeover code made the assumption that there would be
only one takeover record when starting a takeover, based on the
further assumption that the master node could never perform
copying of fragments. However, this is not the case in a system
restart, where a master node can have stale data and so needs to
perform such copying to bring itself up to date.
(Bug #75919, Bug #20546899)
A scan operation, whether it is a single table scan or a query
scan used by a pushed join, stores the result set in a buffer.
The maximum size of this buffer is calculated and preallocated
before the scan operation is started. This buffer may consume a
considerable amount of memory; in some cases we observed a 2 GB
buffer footprint in tests that executed 100 parallel scans with
2 single-threaded (ndbd) data nodes. This
memory consumption was found to scale linearly with additional fragments.
A number of root causes, listed here, were discovered that led
to this problem:
Result rows were unpacked to full
NdbRecord format before
they were stored in the buffer. If only some but not all
columns of a table were selected, the buffer contained empty
space (essentially wasted).
Due to the buffer format being unpacked,
VARCHAR and VARBINARY columns always had
to be allocated for the maximum size defined for such columns.
BatchByteSize and MaxScanBatchSize
values were not taken into consideration as a limiting
factor when calculating the maximum buffer size.
These issues became more evident in NDB 7.2 and later MySQL
Cluster release series. This was due to the fact that buffer size is
scaled by BatchSize, and
that the default value for this parameter was increased fourfold
(from 64 to 256) beginning with MySQL Cluster NDB 7.2.1.
This fix causes result rows to be buffered using the packed
format instead of the unpacked format; a buffered scan result
row is now not unpacked until it becomes the current row. In
addition, BatchByteSize and
MaxScanBatchSize are now used as limiting
factors when calculating the required buffer size.
Also as part of this fix, refactoring has been done to separate
handling of buffered (packed) from handling of unbuffered result
sets, and to remove code that had been unused since NDB 7.0 or
earlier. The NdbRecord class declaration has
also been cleaned up by removing a number of unused or redundant
member variables.
(Bug #73781, Bug #75599, Bug #19631350, Bug #20408733)
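The memory impact described above can be approximated with a back-of-the-envelope model. The numbers below are assumptions chosen for illustration (an 8 KB maximum row versus 200 bytes of actually selected columns), not measurements from NDB; the model just shows why buffering packed rows, sized by what the query selects, shrinks the preallocated footprint so dramatically.

```python
# Rough model of scan result-buffer preallocation, per the description
# above: unpacked buffering reserves the full row width per batched row,
# while packed buffering only needs the bytes the selected columns use.

def unpacked_buffer_bytes(batch_size: int, parallel_scans: int,
                          full_row_bytes: int) -> int:
    # Old behaviour: every batched row reserves the maximum NdbRecord width.
    return batch_size * parallel_scans * full_row_bytes

def packed_buffer_bytes(batch_size: int, parallel_scans: int,
                        selected_bytes: int) -> int:
    # New behaviour: rows stay packed, so only selected data is buffered.
    return batch_size * parallel_scans * selected_bytes

# Assumed example: 256-row batches (the post-7.2.1 default BatchSize),
# 100 parallel scans, 8 KB max row vs. 200 bytes actually selected.
print(unpacked_buffer_bytes(256, 100, 8192))  # 209715200 (~200 MB)
print(packed_buffer_bytes(256, 100, 200))     # 5120000 (~5 MB)
```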
MySQL Cluster Manager 1.3.4 is now available to download from My Oracle Support and from the Oracle Software Delivery Cloud.
Details are available in the MCM 1.3.4 Release Notes. Note that this version of MCM now supports MySQL Cluster 7.4 (as well as earlier versions of MySQL Cluster).
Documentation is available here.