Sharding & HA – MySQL Fabric Webinar Replay + Q&A

MySQL Fabric - Sharding and High Availability

On 19th June 2014, Mats Kindahl and I presented a free webinar on why and how you should be using MySQL Fabric to add Sharding (scaling out reads & writes) and High Availability to MySQL. The webinar replay is available here. This blog post includes a transcript of the questions raised during the live webinar together with the responses given – if your questions aren’t answered already then please feel free to post them as comments here.


MySQL Fabric is built around an extensible and open source framework for managing farms of MySQL Servers. Currently two features have been implemented – High Availability (built on top of MySQL Replication) and scaling out using data sharding. These features can be used in isolation or in combination. MySQL Fabric aware connectors allow transactions and queries to be routed to the correct servers without the need for a proxy node, so operations run as quickly as ever. In this webinar you will learn what MySQL Fabric is, what it can achieve and how it is used – by DBAs, Dev-Ops and developers. You’ll also be exposed to what is happening under the covers. In addition to the presentation, there will be live on-line Q&A with the engineering team. This is a great opportunity to learn about the latest developments directly from the people building them.
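Because each Fabric-aware connector caches the farm's mapping/state data locally, it can pick the right server itself, with no proxy hop. The idea can be sketched in a few lines of Python (purely illustrative – the group name, host names and selection logic are made up here, not the real connector internals):

```python
import random

# Hypothetical cached state a Fabric-aware connector might hold locally;
# each HA group has exactly one master (writes) and some read slaves.
GROUP_CACHE = {
    "group_id-1": {"master": "db1:3306", "slaves": ["db2:3306", "db3:3306"]},
}

def route(group, mode):
    """Pick a server for a query straight from the cache – no proxy node."""
    servers = GROUP_CACHE[group]
    if mode == "readwrite":
        return servers["master"]              # all writes go to the one master
    return random.choice(servers["slaves"])   # reads are spread over the slaves
```

In the real connectors the cache is refreshed from the Fabric node, which is why queries keep flowing even if that node is briefly unavailable.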


  • Is High availability still based on asynchronous replication?: MySQL Fabric will set up asynchronous replication; you can then manually switch to semisynchronous replication. MySQL Cluster is the only option to get synchronous replication.
  • Is there any possibility of split brain (among slaves) when promoting a slave to be the new master?: For the current solution, based on MySQL Replication, there will at any point be at most a single write server. It can still be the case that one of the slaves is lagging (but if you enable semi-synchronous replication then the slave will be able to catch up, as it has everything at least in its relay log), but there is no split-brain situation.
  • Are there hooks for different stages of the master pivot to update other orchestration/configuration systems?: You’re able to add script files to a directory and they will be invoked; it’s not currently very elegant but if it’s something you want to do then we can help you with it. We hope to make it more elegant/intuitive in the future.
  • Can we use the MySQL Fabric only for HA Solutions?: Yes – sharding is optional
  • What happens if the slave is lagging and the primary goes down?: If using semisynchronous replication then the slave will apply its queued replication events before becoming master; if using asynchronous replication then those changes will not be there once the slave is promoted to master.
  • For promoting a slave, how does Fabric determine which slave to promote? How is it guaranteed to have all available binlog events applied?: Currently, MySQL Fabric picks the slave that has received the most changes from the master and re-directs the other slaves to the promoted slave. The outstanding events in the relay log are processed as normal. Note that, as a user, you can specify which slaves are candidates to be promoted to master and also which use semisynchronous rather than asynchronous replication.
  • What is the typical time to detect primary failure, choose slave candidate and complete the promotion of the new primary?: We haven’t run any benchmarks, but the ping frequency is configurable, so the detection time depends on how often you check the servers. Choosing a new primary and promoting it is straightforward. The deciding factor is the number of outstanding events on the promoted slave and the time to process them.
  • For an HA group, how do you prevent or mitigate false positives for failover of the primary?: You can specify how many connector instances need to report the master as unavailable and how many times – in that way you can decide what constitutes a failure vs. a transient or local issue.
  • Are the HA Groups based on standard replication techniques? GTIDs?: The HA group is based on a set of MySQL Servers – one is a replication master and the rest are replication slaves; for HA, GTIDs are used.
  • Do the HA groups rely on GTIDs set up in combination with the mysqlrpladmin/mysqlfailover tools?: MySQL Fabric uses GTIDs to fail over the slaves correctly, in a similar way to how mysqlfailover does, but has its own code for performing the failover.
  • Can you explain “Zero Data Loss” in MySQL 5.7 when using MySQL Replication?: The enhancement is that with semisynchronous replication in 5.7, the commit will not be externalised until the event has been received by a slave. This means that if any application sees the result of the transaction then you’re guaranteed that the effects of that transaction will not be lost even if the master fails.
  • Is the Fabric Connector node a single point of failure or does it have failover as well?: Each application instance has an associated connector instance; if they fail then the other instances continue to provide service. Of course, if you have a single application process then you have a vulnerability – regardless of what the database provides.
  • Does MySQL Fabric choose a primary for us (i.e. we do not get to choose which instance is primary)?: You get to specify which of the slaves are *candidates* for promotion (others can be there to just act as read-slaves) and so you have control.
  • How many nodes are required to setup Sharding (Minimum number of servers)?: For sharding you need at least one server for each shard. If you want HA then each shard needs at least 2 servers (and the Fabric node should be on a different machine). i.e. minimum of 2 servers for sharding, minimum of 3 machines for HA, minimum of 4 servers/3 machines for HA *and* sharding
  • Do the shards support typical SQL constructs such as joins and FKs across nodes?: Queries (including joins) are local to a shard. If you have global tables (all data replicated to all shards) then you can have joins between that data and the data held within a particular shard.
  • If I start with 2 shards (so 2 HA groups), can I expand this to 4 shards and let Fabric rebuild the shards based on the ranges?: Yes – there is a shard-splitting command – mysqlfabric sharding split_shard – which splits a single shard into 2 shards (on 2 different HA groups).
  • Does a shard split trigger write locks?: Writes can continue through most of the shard split *but* there is a period at the end where they will be locked – the longest this can take is the duration of your longest transaction.
  • Can we run a global SELECT query against multiple shards and get results from all shards?: Currently, a single query can only access global (unsharded) tables or a single shard; the application would need to send multiple queries if it wanted to get results from multiple shards.
  • Can we have a query that combines both global data and data from one of the shards?: Absolutely – yes.
  • This appears to be using inline queries. What about stored procedures?: Any stored procedures would be executed locally on the MySQL Server selected by the connector (i.e. it would not be run across multiple shards).
  • Does MySQL Fabric support the Memcached API to InnoDB?: There’s no support for the Memcached API at this point. However, your app can access the Fabric mapping/state data and use that to route to the correct server.
  • Are there plans to make the connector sharding logic automatic (i.e. the application wouldn’t need to break out the sharding key) in the many cases where there is no ambiguity (returning with an error when the shard cannot be derived)? e.g. a non-transactional SELECT on a single table with the shard key in the WHERE clause could always go to a read-slave in the proper shard.: This is something we’d like to add but we can’t make any commitments.
  • Do MySQL Fabric-aware connectors implement automatic failover?: Yes, the MySQL Fabric-aware connectors fail over by themselves after MySQL Fabric has promoted the new master.
  • In the future will there be a transparent connector for other languages (most applications use default (and dumb) MySQL connectors)?: We plan to make other connectors MySQL Fabric-aware and, for those that aren’t owned by Oracle, we’d love to advise the community owners on how to adapt them.
  • A lot of Open Source applications don’t care about the MySQL back-end, whether it is replicated or not. They use the standard MySQL connector that is supplied in their programming language. A transparent gateway-like connector would benefit here for migration.: We hope to get Fabric support into as many connectors as possible. Note that these aren’t ‘special’ PHP, Java or Python connectors – we add Fabric support to the standard connectors.
  • Are there any size limitations on the MySQL databases that are being used?: There are no additional constraints imposed on the MySQL servers by MySQL Fabric.
  • Are there any plans for a Fabric GUI management interface?: It would be nice to have but we don’t have anything at present; the command-line interface and XML/RPC APIs were the priority so that the functionality can be embedded within a user’s wider infrastructure (e.g. invoking the MySQL Fabric commands from an existing script).
  • Do all machines need to have the same resources?: No, all of the MySQL Servers in an HA group (or the machines they’re running on) do not have to be equal. You can specify which servers are candidates to become the master and also specify weights so that more powerful servers receive more queries.
  • Do you need any specific MySQL setup for using MySQL Fabric?: MySQL Fabric operates using normal MySQL servers and relies on standard replication to handle high availability and sharding.
  • Are there any limitations wrt AWS RDS for MySQL Fabric?: The usual limitation mentioned when using AWS is when virtual IP support is required. MySQL Fabric *does not rely on virtual IP support* and handles the routing in the connectors.
  • Do we need GTIDs for a MySQL Fabric setup?: In the current GA version of MySQL Fabric, GTIDs have to be enabled on the servers being managed in order for the HA features to work.
  • Do we need MySQL GTID for MySQL Fabric ?: For the HA aspects – yes.
  • Any specific MySQL parameters required for MySQL Fabric?: You need to enable GTIDs and you need to have a binary log on the servers that are candidates for being master. You also need to enable UUID support, since MySQL Fabric distinguishes between the servers using their UUID.
  • Can MySQL Fabric support removing nodes?: Yes, MySQL Fabric supports removing servers from HA Groups as well as adding new ones.
  • It was mentioned that the Fabric controller is a single node. How do you avoid SPOF on the controller?: The connectors hold a cache of the mapping and state data and so they can continue to route requests when the Fabric node is unavailable. Of course we would like to add redundancy for the Fabric node but queries and transactions can continue while it is unavailable.
  • Is there any Oracle RAC like (active-active) HA solution for MySQL?: MySQL offers MySQL Cluster as an active-active HA solution; unlike Oracle RAC though it does not depend on shared storage.
  • Is it useful if I use MySQL Cluster 7.3 instead of replication mode, since Cluster has HA functionality itself?: At the moment, you cannot use MySQL Cluster with MySQL Fabric. MySQL Cluster does offer transparent sharding and High Availability. MySQL Fabric is ideal when you want simple HA and sharding but you want to continue using InnoDB as your storage engine.
  • Is MySQL Fabric stable for production deployment ?: It is Generally Available which means that we’ve completed our testing and are confident that it’s ready for live deployment but of course you should do your own testing to satisfy yourself that it’s working correctly *in your environment and with your application*
  • Does Fabric require a commercial license/subscription?: It’s available under the GPLv2 Open Source license or, if you want a commercial license/subscription, it’s also part of MySQL Enterprise Edition and MySQL Carrier Grade Edition.
  • How about Enterprise support for Fabric?: It’s in the community version of MySQL but also part of MySQL Enterprise Edition.
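Several of the answers above touch on HASH sharding and shard splitting. The lookup a connector performs for a hash-sharded table can be sketched as follows – a simplified illustration only, with made-up range data; it mimics the idea of storing UNHEX(MD5(key)) lower bounds in shard_ranges, not Fabric's actual code:

```python
import hashlib

# Illustrative shard_ranges rows: (lower_bound, shard_id), where lower_bound
# is the 16-byte value stored by UNHEX(MD5(key)), in ascending order.
# A key belongs to the range with the highest lower_bound <= MD5(key).
SHARD_RANGES = [
    (bytes.fromhex("00000000000000000000000000000000"), 1),
    (bytes.fromhex("80000000000000000000000000000000"), 2),
]

def shard_for(key: str) -> int:
    digest = hashlib.md5(key.encode()).digest()   # 16 bytes, like UNHEX(MD5(key))
    shard_id = SHARD_RANGES[0][1]
    for lower_bound, candidate in SHARD_RANGES:
        if digest >= lower_bound:
            shard_id = candidate
    return shard_id
```

Because MD5 spreads keys roughly uniformly over the 16-byte space, splitting a shard amounts to inserting a new lower_bound in the middle of an existing range – which is what the split_shard command does behind the scenes.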

One comment

  1. jack says:

    With mysqlfabric hash sharding, the lower_bound column of the shard_ranges table may need to be 32 bytes long.

    How to repeat:
    First, as in your example, add a shard:
    mysqlfabric sharding add_shard 1 group_id-1 --state=enabled

    In the code, there are two ways to add a shard, both inserting into the shard_ranges table:
    #Insert a HASH of keys and the server to which they belong.

    "INSERT INTO shard_ranges("
    "shard_mapping_id, "
    "lower_bound, "
    "shard_id) "
    "VALUES(%s, UNHEX(MD5(%s)), %s)"
    #Insert Split ranges.
    #NOTE: The split lower_bound does not need the md5 algorithm.
    "INSERT INTO shard_ranges("
    "shard_mapping_id, "
    "lower_bound, "
    "shard_id) "
    "VALUES(%s, UNHEX(%s), %s)"

    I found this:
    the lower_bound is UNHEX(MD5('group_id-1')), and the column is defined as
    `lower_bound` varbinary(16) NOT NULL.

    mysql> select shard_mapping_id, hex(lower_bound), shard_id from shard_ranges;
    +------------------+----------------------------------+----------+
    | shard_mapping_id | hex(lower_bound)                 | shard_id |
    +------------------+----------------------------------+----------+
    |                1 | 48B32E5AD3C6EC00A45578E0A34B092F |        1 |
    +------------------+----------------------------------+----------+

    Second, add another shard with the command:
    mysqlfabric sharding split_shard 1 group_id-2
    The code calls:
    def get_upper_bound(lower_bound, shard_mapping_id, type, persister=None)
    The SQL is:
    max_query = "SELECT HEX(MD5(MAX(%s))) FROM %s" % \
    The problem is that MD5() returns 32 ASCII hex characters, so HEX(MD5(...)) is 64 characters,
    and that 64-character value is then used as the upper_bound for the insert.
    The INSERT into shard_ranges is:
    #Insert Split ranges.
    #NOTE: The split lower_bound does not need the md5 algorithm.
    "INSERT INTO shard_ranges("
    "shard_mapping_id, "
    "lower_bound, "
    "shard_id) "
    "VALUES(%s, UNHEX(%s), %s)"
    UNHEX() of that 64-character string returns 32 bytes, but the shard_ranges lower_bound column is only 16 bytes long.
    My error is:
    /usr/local/bin/mysqlfabric sharding split_shard 1 group_id-2
    Procedure :
    { uuid = e01e08d4-8193-4761-b4cf-6f997bd722b4,
    finished = True,
    success = False,
    return = DatabaseError: ("Command (INSERT INTO shard_ranges(shard_mapping_id, lower_bound, shard_id) VALUES(%s, UNHEX(%s), %s), (1, '1a1ab19a1c31b1b2993299321bb33132570a2fde1d15109903c66e0b82579db2', 3)) failed: 1406 (22001): Data too long for column 'lower_bound' at row 1", 1406),
    activities =

    I have tried altering the shard_ranges table's character set to latin1 and to utf8; it still fails.

    Now my questions are:
    1. Why is the first shard's lower_bound the group id?
    2. Is a lower_bound length of 16 correct, or does my test have errors?


    Suggested fix:
    alter table shard_ranges modify lower_bound varbinary(32) not null;

    But I also found that the data is not evenly distributed across the shard servers.
    [8 Jul 2:12] jack tom
    def add(shard_mapping_id, shard_id, persister=None):
    """Add the HASH shard specification. This represents a single instance
    of a shard specification that maps a key HASH to a server.

    :param shard_mapping_id: The unique identification for a shard mapping.
    :param shard_id: A unique identification, a logical representation
    for a shard of a particular table.
    """
    shard = Shards.fetch(shard_id)
    HashShardingSpecification.INSERT_HASH_SPECIFICATION, {

    This is the code; the lower_bound stored for the first shard is the group_id.
    Why is it the group_id?
    I think it may be intended as the shard's minimum value.
