Archive for August 13, 2009

Using NDB API Events to mask/hide column data when replicating

If you have asynchronous replication where the slave database uses MySQL Cluster then you can use the NDB API events functionality to mask/overwrite data. You might do this, for example, if the replica is used for generating reports where some of the data is sensitive and not relevant to those reports. Unlike stored procedures, NDB API events are triggered on the slave.

The first step is to set up replication (master->slave rather than multi-master) as described in Setting up MySQL Asynchronous Replication for High Availability.

In this example, the following table definition is used:

mysql> use clusterdb;
mysql> create table ASSETS (CODE int not null primary key, VALUE int) engine=ndb;

The following code should be compiled and then executed on a node within the slave Cluster:

#include <NdbApi.hpp>
#include <stdio.h>
#include <iostream>
#include <unistd.h>
#include <cstdlib>
#include <string.h>

#define APIERROR(error) \
  { std::cout << "Error in " << __FILE__ << ", line:" << __LINE__ << ", code:" \
  << error.code << ", msg: " << error.message << "." << std::endl; \
  exit(-1); }

int myCreateEvent(Ndb* myNdb,
                  const char *eventName,
                  const char *eventTableName,
                  const char **eventColumnName,
                  const int noEventColumnName);

static void do_blank(Ndb*, int);

int main(int argc, char** argv)
{
  if (argc < 2)
  {
    std::cout << "Arguments are <connect_string cluster>.\n";
    exit(-1);
  }
  const char *connectstring = argv[1];

  ndb_init();

  Ndb_cluster_connection *cluster_connection=
    new Ndb_cluster_connection(connectstring); // Object representing the cluster

  int r= cluster_connection->connect(5 /* retries               */,
                                     3 /* delay between retries */,
                                     1 /* verbose               */);
  if (r > 0)
  {
    std::cout << "Cluster connect failed, possibly resolved with more retries.n";
    exit(-1);
  }
  else if (r < 0)
  {
    std::cout << "Cluster connect failed.n";
    exit(-1);
  }

  if (cluster_connection->wait_until_ready(30,30))
  {
    std::cout << "Cluster was not ready within 30 secs." << std::endl;
    exit(-1);
  }

  Ndb* myNdb= new Ndb(cluster_connection,
                      "clusterdb");  // Object representing the database

  if (myNdb->init() == -1) APIERROR(myNdb->getNdbError());

  const char *eventName= "CHNG_IN_ASSETS";
  const char *eventTableName= "ASSETS";
  const int noEventColumnName= 2;
  const char *eventColumnName[noEventColumnName]=
  {"CODE",
   "VALUE"};

  // Create events
  myCreateEvent(myNdb,
                eventName,
                eventTableName,
                eventColumnName,
                noEventColumnName);

  // Normal values and blobs are unfortunately handled differently..
  typedef union { NdbRecAttr* ra; NdbBlob* bh; } RA_BH;

  int i;

  // Start "transaction" for handling events
  NdbEventOperation* op;
  printf("create EventOperationn");
  if ((op = myNdb->createEventOperation(eventName)) == NULL)
    APIERROR(myNdb->getNdbError());

  printf("get valuesn");
  RA_BH recAttr[noEventColumnName];
  RA_BH recAttrPre[noEventColumnName];

  for (i = 0; i < noEventColumnName; i++) {
    recAttr[i].ra    = op->getValue(eventColumnName[i]);
    recAttrPre[i].ra = op->getPreValue(eventColumnName[i]);
  }

  // set up the callbacks
  // This starts changes to "start flowing"
  if (op->execute())
    APIERROR(op->getNdbError());

  while (true) {
    int r = myNdb->pollEvents(1000); // wait for event or 1000 ms
    if (r > 0) {
      while ((op= myNdb->nextEvent())) {
        NdbRecAttr* ra = recAttr[0].ra;
        if (ra->isNULL() >= 0) { // we have a value
          if (ra->isNULL() == 0) { // we have a non-null value
            printf("CODE: %d ", ra->u_32_value());
            do_blank(myNdb, ra->u_32_value());
          } else
            printf("%-5s", "NULL");
        } else
          printf("%-5s", "-"); // no value
        ra = recAttr[1].ra; // the new VALUE attribute is available here if needed
        printf("\n");
      }
    }
  }
}

int myCreateEvent(Ndb* myNdb,
                  const char *eventName,
                  const char *eventTableName,
                  const char **eventColumnNames,
                  const int noEventColumnNames)
{
  NdbDictionary::Dictionary *myDict= myNdb->getDictionary();
  if (!myDict) APIERROR(myNdb->getNdbError());

  const NdbDictionary::Table *table= myDict->getTable(eventTableName);
  if (!table) APIERROR(myDict->getNdbError());

  NdbDictionary::Event myEvent(eventName, *table);
  myEvent.addTableEvent(NdbDictionary::Event::TE_INSERT);

  myEvent.addEventColumns(noEventColumnNames, eventColumnNames);

  // Add event to database
  if (myDict->createEvent(myEvent) == 0)
    myEvent.print();
  else if (myDict->getNdbError().classification ==
            NdbError::SchemaObjectExists) {
    printf("Event creation failed, event existsn");
    printf("dropping Event...n");
    if (myDict->dropEvent(eventName)) APIERROR(myDict->getNdbError());
    // try again
    // Add event to database
    if ( myDict->createEvent(myEvent)) APIERROR(myDict->getNdbError());
  } else
    APIERROR(myDict->getNdbError());

  return 0;
}

static void do_blank(Ndb* myNdb, int code)
{
  const NdbDictionary::Dictionary* myDict= myNdb->getDictionary();
  const NdbDictionary::Table *myTable= myDict->getTable("ASSETS");

  if (myTable == NULL)
    APIERROR(myDict->getNdbError());

  NdbTransaction *myTransaction= myNdb->startTransaction();
  if (myTransaction == NULL) APIERROR(myNdb->getNdbError());

  printf("Replacing VALUE with 0 for CODE: %d ", code);

  NdbOperation *myOperation= myTransaction->getNdbOperation(myTable);
  if (myOperation == NULL) APIERROR(myTransaction->getNdbError());

  myOperation->updateTuple();
  myOperation->equal("CODE", code);
  myOperation->setValue("VALUE", 0);

  if (myTransaction->execute( NdbTransaction::Commit ) == -1)
    APIERROR(myTransaction->getNdbError());

  myNdb->closeTransaction(myTransaction);
}
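
The exact compile line depends on how and where MySQL Cluster was installed; as a rough sketch, assuming the source above is saved as slave_filter.cpp and a Cluster 7.0 tarball is installed under /usr/local/mysql (adjust the include and library paths to match your installation – you may also need extra libraries such as -lmysqlclient_r depending on the build):

shell> g++ -I/usr/local/mysql/include \
           -I/usr/local/mysql/include/storage/ndb \
           -I/usr/local/mysql/include/storage/ndb/ndbapi \
           -I/usr/local/mysql/include/storage/ndb/mgmapi \
           slave_filter.cpp -o slave_filter \
           -L/usr/local/mysql/lib -L/usr/local/mysql/lib/mysql -lndbclient -lpthread

The resulting binary is then run on a node within the slave Cluster, passing the connect-string of that Cluster's management node: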

shell> slave_filter 127.0.0.1:1186

From the master Cluster, insert some values (note that the example can easily be extended to cover updates too – see the sketch at the end of this post):

mysql> insert into ASSETS values (101, 50),(102, 40), (103, 99);

and then check that on the slave the value has been set to 0 for each of the entries:

mysql> select * from ASSETS;
+------+-------+
| CODE | VALUE |
+------+-------+
|  100 |     0 |
|  103 |     0 |
|  101 |     0 |
|  102 |     0 |
+------+-------+

How this works…. The table data is replicated as normal and the real values are stored in the slave. The “slave_filter” process has registered against insert operations on this table and, when it is triggered, it sets the VALUE field to 0. The event is processed asynchronously from the replication, so there is a very narrow window during which the true values are stored in the slave.
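
As mentioned above, the example can be extended to cover updates as well. A minimal sketch, assuming the same event and table (NdbDictionary::Event::TE_UPDATE and NdbEventOperation::getEventType() are part of the standard NDB API):

// In myCreateEvent(): subscribe to update events as well as inserts
myEvent.addTableEvent(NdbDictionary::Event::TE_UPDATE);

// In the poll loop: act on both inserts and updates
switch (op->getEventType()) {
case NdbDictionary::Event::TE_INSERT:
case NdbDictionary::Event::TE_UPDATE:
  do_blank(myNdb, ra->u_32_value()); // blank VALUE for the changed row
  break;
default:
  break;
}

One caveat: the blanking update performed by do_blank() will itself generate a further update event, so in practice you would want to skip rows whose VALUE is already 0 (for example by checking the post-image in recAttr[1]) to avoid the process re-triggering itself.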





Creating a MySQL plugin to produce an integer timestamp

This article shows how to create a MySQL plugin that can be used to create a function, which can in turn be used in stored routines and triggers. The function produces an integer value representing the time (to the nearest microsecond).

I’m working on an article about conflict detection/resolution when using MySQL Cluster asynchronous replication, which requires an integer column to store a timestamp for comparison purposes. In fact, it doesn’t actually need the timestamp to represent an absolute or even a relative point in time – all that matters is that if the function is called twice on 2 different hosts, the 2nd call always returns a larger number than the 1st. Obviously, in a production environment the clocks on the 2 hosts would need to be kept in sync.

The C code (inttime.c)

#include <mysql.h>
#include <sys/time.h>

my_bool inttime_init(UDF_INIT *initid, UDF_ARGS *args, char *message) {
  return 0;
}

void inttime_deinit(UDF_INIT *initid) {}

/* A UDF declared with RETURNS INTEGER must return a long long */
long long inttime(UDF_INIT *initid, UDF_ARGS *args, char *is_null, char *error)
{
  struct timeval tv;
  gettimeofday(&tv, (void *)0);
  return (long long)tv.tv_sec * 1000000 + tv.tv_usec;
}

Compiling and deploying the shared library

[billy@ws1 timestamp]$ gcc -I/usr/local/mysql/include/ -fPIC -shared -o inttime.so inttime.c
[billy@ws1 timestamp]$ cp inttime.so /usr/local/mysql/lib/plugin/
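
If the CREATE FUNCTION step later complains that it cannot find the shared library, check where your server actually expects plugins to live (the directory varies between installations) and copy the .so file there instead:

mysql> show variables like 'plugin_dir';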

Creating the function

mysql> create function inttime RETURNS INTEGER SONAME 'inttime.so';

Test the function

mysql> select inttime();select inttime();
+------------------+
| inttime()        |
+------------------+
| 1250080524270706 |
+------------------+
1 row in set (0.00 sec)

+------------------+
| inttime()        |
+------------------+
| 1250080524270833 |
+------------------+
1 row in set (0.00 sec)

Note that the results are different and the second result is larger than the first – the function is fit for purpose 🙂

Using the function from a trigger

mysql> CREATE TRIGGER updateTAB1 BEFORE UPDATE ON TAB1 FOR EACH ROW SET NEW.ts = inttime();
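
For this trigger to work, TAB1 needs a column to hold the timestamp. The table below is purely hypothetical – any table with an unsigned BIGINT column named ts would do:

mysql> create table TAB1 (id int not null primary key, data varchar(30), ts bigint unsigned) engine=ndb;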

Acknowledgements

I used the C code found at http://lists.mysql.com/internals/33702 as the starting point.





Setting up MySQL Asynchronous Replication for High Availability

Asynchronous Replication for High Availability

MySQL replication is often positioned as a solution to provide extra throughput for your database (especially when dealing with high numbers of read operations). What tends to be overlooked is how it can be used to provide high availability (HA) – no matter how much redundancy you have at a local level, your system remains at risk from a single catastrophic failure – loss of power, earthquake, terrorist attack etc. By using MySQL asynchronous replication to maintain a geographically remote copy of that data, service can be maintained through any single site failure.

As this replication is asynchronous, there are no low latency demands on the network and so the two sites can be thousands of miles apart while utilising low-cost connections.

This article provides a simple example of how to set up asynchronous replication between 2 MySQL databases in order to provide a highly available solution. The first example uses 2 databases whose tables are stored in the MyISAM storage engine; the second uses 2 MySQL Cluster databases, where I also configure multi-master replication so that changes can be made at either site.

Subsequent articles will build on this to show:

  • Collision detection and resolution when using MySQL Cluster multi-master asynchronous replication
  • Introducing asynchronous replication to a running MySQL Cluster database

Throughout this article, two machines are used: ws1 (192.168.0.3) and ws2 (192.168.0.4). ws1 will be set up as the master and ws2 as the slave (in the multi-master configuration, both act as both master and slave).

Setting up replication for non-Cluster databases

Replication is performed from one MySQL Server to another; the master makes the changes available and then one or more other Servers pick up those changes and apply them to their own databases. In this example, both databases will store the table data using the same storage engine (MyISAM) but it’s possible to mix and match (for example, take a look at  MySQL Cluster – flexibility of replication). As this is intended as a simple introduction to replication, I’m keeping life simple by assuming that this is all being set up before the database goes into production – if that isn’t the case for you and you need to cope with existing data then check out the MySQL documentation or subsequent articles on this site.

The my.cnf files can be set up as normal but the one for the MySQL Server that will act as the Master needs to have binary logging enabled. Also, each of the servers needs a unique server-id. Here are the my.cnf files used for this example:

my.cnf (Master)

[mysqld]
datadir=/home/billy/mysql/myisam/data
basedir=/usr/local/mysql
port=3306
server-id=1
log-bin

my.cnf (Slave)

[mysqld]
datadir=/home/billy/mysql/myisam/data
basedir=/usr/local/mysql
port=3306
server-id=2

Fire up the Master MySQL Server:

[billy@ws1 myisam]$ mysqld --defaults-file=my.cnf&

The slave needs a userid/password in order to access the master server – best practice is to create a dedicated user with just the required privileges:

[billy@ws1 myisam]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 1
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> grant replication slave on *.* to 'repl'@'192.168.0.4'
 -> identified by 'repl';
Query OK, 0 rows affected (0.00 sec)
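
If you want to double-check what the new account can do, SHOW GRANTS will list its privileges:

mysql> show grants for 'repl'@'192.168.0.4';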

The slave can now be told to start processing the replication data that will be staged by the master server:

[billy@ws2 myisam]$ mysqld --defaults-file=my.cnf&
[billy@ws2 myisam]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.34-ndb-7.0.6-cluster-gpl MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> CHANGE MASTER TO
-> MASTER_HOST='192.168.0.3',
-> MASTER_PORT=3306,
-> MASTER_USER='repl',
-> MASTER_PASSWORD='repl',
-> MASTER_LOG_FILE='',
-> MASTER_LOG_POS=4;
090803 15:48:09 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=ws2-relay-bin' to avoid this problem.
Query OK, 0 rows affected (0.01 sec)
mysql> start slave;
090803 15:51:24 [Note] Slave SQL thread initialized, starting replication in log 'FIRST' at position 0, relay log './ws2-relay-bin.000001' position: 4
Query OK, 0 rows affected (0.00 sec)
090803 15:51:24 [Note] Slave I/O thread: connected to master 'repl@192.168.0.3:3306',replication started in log 'FIRST' at position 4
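
Before creating any tables, it's worth confirming that both replication threads are running on the slave – in the output of the command below, Slave_IO_Running and Slave_SQL_Running should both show Yes:

mysql> show slave status\G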

Now to test that replication is up and running, create a table on the master, add some data and then check that the table and data can be read from the slave:

[billy@ws1 myisam]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> create table numbers (num1 int, num2 int);
Query OK, 0 rows affected (0.00 sec)

mysql> insert into numbers values (1,10),(2,20),(3,30);
Query OK, 3 rows affected (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 0
[billy@ws2 myisam]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 5
Server version: 5.1.34-ndb-7.0.6-cluster-gpl MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from numbers;
+------+------+
| num1 | num2 |
+------+------+
|    1 |   10 |
|    2 |   20 |
|    3 |   30 |
+------+------+
3 rows in set (0.00 sec)

Multi-Master Replication with MySQL Cluster

Multi-Master Replication for HA with MySQL Cluster

There are a few asynchronous replication capabilities that are unique to MySQL Cluster – one is that changes are replicated even if they are made directly to the data nodes using the NDB API; another is that replication can be performed in both directions, i.e. multi-master. One of the advantages of this is that you can share both read and write operations across both sites so that no capacity is wasted – it also gives you the confidence that either site is sane and ready to take over for the other at any point. You should aim to minimise how often the same rows are modified at the same time on both sites – conflict detection/resolution has been implemented, but it can only roll back the row changes that conflict with other row changes rather than the full transaction.

It’s important to note that in this article, I’m not talking about the synchronous replication that takes place between data nodes within a single MySQL Cluster site (that happens in parallel and is orthogonal to the asynchronous replication to a remote site).

When performing multi-master asynchronous replication between 2 Clusters, one or more MySQL Servers in each Cluster are nominated as masters and one or more as slaves (the same server can take on both roles, and you can have multiple channels set up in case a channel or MySQL Server is lost). Changes made to the data at either site through any of their MySQL Servers (or directly to the data nodes using the NDB API) will be replicated to the other site.

I will focus on setting up the replication; you can refer to Deploying MySQL Cluster over multiple hosts for the steps to configure and run each MySQL Cluster site.

Most of the steps are very similar to those in the first example – the main differences come when introducing asynchronous replication to a MySQL Cluster instance that already contains data and is up and running (processing updates), which will be covered in a subsequent article.

Binary logging needs to be enabled on the MySQL Server(s) at each site that will act as a replication master:

my.cnf (ws1)

[mysqld]
ndb-nodeid=4
ndbcluster
datadir=/home/billy/mysql/replication/7_0_6/data
basedir=/home/billy/mysql/replication/7_0_6
port=3306
server-id=1
log-bin

my.cnf (ws2)

[mysqld]
ndb-nodeid=4
ndbcluster
datadir=/home/billy/mysql/replication/7_0_6/data
basedir=/home/billy/mysql/replication/7_0_6
port=3306
server-id=2
log-bin

The MySQL Cluster nodes (including the MySQL Servers (mysqld)) from both sites should be started up as normal.

In this case, the replication users should be set up for both Clusters:


[billy@ws1 7_0_6]$ mysqld --defaults-file=my.cnf&
[billy@ws1 7_0_6]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 2

Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> grant replication slave on *.* to 'repl'@'192.168.0.4'
-> identified by 'repl';

Query OK, 0 rows affected (0.01 sec)

[billy@ws2 7_0_6]$ mysqld --defaults-file=my.cnf&
[billy@ws2 7_0_6]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.

Your MySQL connection id is 3

Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> grant replication slave on *.* to 'repl'@'192.168.0.3'
-> identified by 'repl';

Query OK, 0 rows affected (0.00 sec)

Replication can then be setup and started on each of the MySQL Servers (those acting as slaves for each Cluster):

[billy@ws1 7_0_6]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 3
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CHANGE MASTER TO
 -> MASTER_HOST='192.168.0.4',
 -> MASTER_PORT=3306,
 -> MASTER_USER='repl',
 -> MASTER_PASSWORD='repl',
 -> MASTER_LOG_FILE='',
 -> MASTER_LOG_POS=4;
090803 17:25:00 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=ws1-relay-bin' to avoid this problem.
Query OK, 0 rows affected (0.01 sec)
mysql> start slave;
090803 17:27:20 [Note] Slave SQL thread initialized, starting replication in log 'FIRST' at position 0, relay log './ws1-relay-bin.000001' position: 4
Query OK, 0 rows affected (0.02 sec)
090803 17:27:20 [Note] Slave I/O thread: connected to master 'repl@192.168.0.4:3306',replication started in log 'FIRST' at position 4

[billy@ws2 7_0_6]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 4
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> CHANGE MASTER TO
 -> MASTER_HOST='192.168.0.3',
 -> MASTER_PORT=3306,
 -> MASTER_USER='repl',
 -> MASTER_PASSWORD='repl',
 -> MASTER_LOG_FILE='',
 -> MASTER_LOG_POS=4;
090803 17:25:56 [Warning] Neither --relay-log nor --relay-log-index were used; so replication may break when this MySQL server acts as a slave and has his hostname changed!! Please use '--relay-log=ws2-relay-bin' to avoid this problem.
Query OK, 0 rows affected (0.01 sec)
mysql> start slave;
090803 17:27:25 [Note] Slave SQL thread initialized, starting replication in log 'FIRST' at position 0, relay log './ws2-relay-bin.000001' position: 4
Query OK, 0 rows affected (0.00 sec)
090803 17:27:25 [Note] Slave I/O thread: connected to master 'repl@192.168.0.3:3306',replication started in log 'FIRST' at position 4

This time, to make sure that replication is working in both directions, I make changes to both Clusters and then check that they appear at the other site:

[billy@ws1 7_0_6]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test;
Database changed
mysql> create table numbers (num1 int, num2 int) engine=ndb;
090803 17:32:10 [Note] NDB Binlog: CREATE TABLE Event: REPL$test/numbers
090803 17:32:10 [Note] NDB Binlog: logging ./test/numbers (UPDATED,USE_WRITE)
Query OK, 0 rows affected (0.62 sec)

mysql> insert into numbers values (1,10),(2,20),(3,30);
Query OK, 3 rows affected (0.04 sec)
Records: 3  Duplicates: 0  Warnings: 0
[billy@ws2 7_0_6]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 8
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from numbers;
+------+------+
| num1 | num2 |
+------+------+
|    2 |   20 |
|    3 |   30 |
|    1 |   10 |
+------+------+
3 rows in set (0.01 sec)

mysql> insert into numbers values (4,40),(5,50),(6,60);
Query OK, 3 rows affected (0.00 sec)
Records: 3  Duplicates: 0  Warnings: 0
[billy@ws1 7_0_6]$ mysql -u root
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 9
Server version: 5.1.34-ndb-7.0.6-cluster-gpl-log MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
mysql> select * from numbers;
+------+------+
| num1 | num2 |
+------+------+
|    1 |   10 |
|    6 |   60 |
|    3 |   30 |
|    5 |   50 |
|    2 |   20 |
|    4 |   40 |
+------+------+
6 rows in set (0.02 sec)




Webinar: MySQL Cluster 7.0: What’s New? (AS/ANZ/IN)

Mat Keep and I will be presenting a Webinar on Wednesday, August 12, 2009 describing the new capabilities of MySQL Cluster 7.0. The time is designed to be Asia/Australasia-friendly but of course it’s open to everyone.

Register for the MySQL Cluster 7 Webinar here.

By attending this webinar, you will learn more about the next generation of MySQL Cluster, and how it enables you to deliver 99.999% database availability, with real time performance and linear scalability, while slashing TCO.

Boasting a range of breakthrough capabilities, MySQL Cluster 7.0 will enable you to stay ahead of your most demanding, mission-critical application requirements. Enhanced scalability delivers higher database performance with fewer nodes to simplify deployment and administration. Expanded platform support and interoperability delivers more choice in both the development and deployment of MySQL Cluster. Simplified cluster monitoring tools reduce DBA administration overhead and operational costs.

Wednesday, August 12, 2009: 14:00 Singapore/Malaysia/Philippines time

Wed, Aug 12: 11:30 India, Sri Lanka
Wed, Aug 12: 13:00 Thailand/Vietnam/Western Indonesia time
Wed, Aug 12: 14:00 Western Australia
Wed, Aug 12: 15:30 Northern Territory (Australia)
Wed, Aug 12: 15:30 South Australia
Wed, Aug 12: 16:00 Queensland (Australia)
Wed, Aug 12: 16:00 NSW, ACT, Victoria, Tasmania (Australia)
Wed, Aug 12: 18:00 New Zealand time