Tag Archive for Windows

Analyzing Data in Microsoft Excel with the MongoDB Connector for BI

There are many great BI tools out there that enable powerful analytics and data visualization, but sometimes the right tool for the job is the one you already have sitting on your laptop.

MongoDB 3.2 introduced the MongoDB Connector for BI which presents an SQL API to allow BI tools to read data from your database in real-time. Typically, this functionality is showcased with specialized tools such as Tableau, but it’s equally applicable to Microsoft’s ubiquitous Excel.

This video demonstrates how to connect Excel running on Microsoft Windows to the BI connector and then fetch data from MongoDB.


The instructions in the demo assume that you already have a MongoDB database running together with a configured instance of the MongoDB Connector for BI – the documentation explains how to set that up.
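
If you don’t yet have the connector running, the basic flow is to generate a schema file from your data with mongodrdl and then start mongosqld with that schema. A minimal sketch (the host, database and file names here are illustrative – check the documentation for the full set of flags):

mongodrdl --host localhost:27017 -d mydb -o schema.drdl
mongosqld --schema schema.drdl --addr 127.0.0.1:3307

Excel’s ODBC connection would then point at port 3307 in this example.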

Note that Excel is not able to handle the “.” character or capital letters in table or column names. To overcome this, it was necessary to edit the DRDL file produced by the mongodrdl tool to map names to lowercase equivalents and to replace each “.” (used to flatten embedded documents) with a “_”; this requires v1.1 or later of the BI connector.
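
For illustration, the relevant part of such an edit might look like this – a hypothetical DRDL fragment where the field layout follows the DRDL format but the names are invented; the SqlName value is what Excel sees, so that is the value to rewrite:

columns:
- Name: Address.HomeCity
  MongoType: string
  SqlName: address_homecity
  SqlType: varchar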

The MongoDB Connector for BI is part of MongoDB Enterprise Advanced; it can be freely downloaded for evaluation – why not try it out for yourself?

More information on the BI connector as well as other MongoDB 3.2 features can be found in MongoDB 3.2: What’s New.





Holding MySQL HA workshop in Oxford

On 17th October (the day before the All Your Base conference) I’ll be running a hands-on workshop in Oxford (UK) on the various technologies available to provide High Availability using MySQL. The cost is £250 + VAT and you can register here.

This workshop provides an introduction to what High Availability (HA) is, what technology options are available to achieve it with MySQL, and how to actually implement your own HA solutions. The session will be a mixture of presentations, demonstrations and (most importantly) hands-on tutorials.

We’ll start with an overview of High Availability – in general and in the context of MySQL – and then a survey of the technologies to choose from:

  • MySQL Replication
  • MySQL Cluster
  • DRBD
  • Shared storage clustering options
    • Windows Server Failover Clustering
    • Solaris Clusters
    • Oracle Virtual Machine

There will be deep-dives on MySQL Replication and MySQL Cluster and this is where the hands-on parts of the workshop will focus:

  • What these technologies achieve and how
  • Get your own database clusters up and running (hands-on)
  • “What happens when I do this…?” (hands-on)
  • Monitoring (mix of hands-on and demos)

The plan is to get everyone up and running with multiple VMs on their laptops so that they can experiment and test with MySQL Cluster and MySQL Replication over multiple (virtual) hosts. As a stretch, we’ll try creating a Cluster running over everyone’s machines at the same time – though that might be stretching the logistics! I’ve got a supply of 30 USB sticks and hope to have 2-3 unique virtual appliances on each – what could go wrong?





MySQL HA Solutions – webinar replay

If you were unable to attend the live webinar (or you want to go back and listen to it again) then it’s now available to view on-line here.

Databases are the center of today’s web and enterprise applications, storing and protecting an organization’s most valuable assets and supporting business-critical applications. Just minutes of downtime can result in significant lost revenue and dissatisfied customers. Ensuring database high availability is therefore a top priority for any organization. Tune into this webcast to learn more.

The session discusses:

  1. Causes, effects and impact of downtime
  2. Methodologies to map applications to an HA solution
  3. Overview of MySQL HA solutions
  4. Operational best practices to ensure business continuity




MySQL with Windows Server 2008 R2 Failover Clustering


Oracle has announced support for running MySQL on Windows Server Failover Clustering (WSFC); with so many people developing and deploying MySQL on Windows, this offers a great option to add High Availability to MySQL deployments if you don’t want to go as far as deploying MySQL Cluster.

This post will give a brief overview of how to set things up but for all of the gory details a new white paper MySQL with Windows Server 2008 R2 Failover Clustering is available – please give me any feedback. I will also be presenting on this at a free webinar on Thursday 15th September (please register in advance) as well as at an Oracle OpenWorld session in San Francisco on Tuesday 4th October (01:15 PM, Marriott Marquis – Golden Gate C2) – a good opportunity to get more details and get your questions answered.

It sometimes surprises people just how much MySQL is used on Windows; here are a few of the reasons:

  • Lower TCO
    • 90% savings over Microsoft SQL Server
    • If you’re a little skeptical about this then try it out for yourself with the MySQL TCO Savings Calculator
  • Broad platform support
    • No lock-in
    • Windows, Linux, MacOS, Solaris
  • Ease of use and administration
    • < 5 mins to download, install & configure
    • MySQL Enterprise Monitor & MySQL Workbench
  • Reliability
    • 24×7 Uptime
    • Field Proven
  • Performance and scalability
    • MySQL 5.5 delivered over a 500% performance boost on Windows
  • Integration into Windows environment
    • ADO.NET, ODBC & Microsoft Access Integration
    • And now, support for Windows Server Failover Clustering!

Probably the most common form of High Availability for MySQL is MySQL Replication (asynchronous or semi-synchronous) and the option for the highest levels of availability is MySQL Cluster. We are in the process of rolling out a number of solutions that provide levels of availability somewhere in between MySQL Replication and MySQL Cluster; the Oracle VM Template for MySQL Enterprise Edition was the first (overview, webinar replay, white paper) and WSFC is the second.


Solution Overview

MySQL with Windows Failover Clustering requires at least 2 servers within the cluster together with some shared storage (for example FCAL SAN or iSCSI disks). For redundancy, 2 LANs should be used for the cluster to avoid a single point of failure and typically one would be reserved for the heartbeats between the cluster nodes.

The MySQL binaries and data files are stored in the shared storage and Windows Failover Clustering ensures that at most one of the cluster nodes will access those files at any point in time (hence avoiding file corruptions).

Clients connect to the MySQL service through a Virtual IP Address (VIP) and so, in the event of failover, they experience a brief loss of connection but otherwise do not need to be aware that the failover has happened, other than to handle the failure of any in-flight transactions.

Target Configuration

This post will briefly step through how to set up and use a cluster, and this diagram shows how it maps onto physical hardware and network addresses for the lab used later in this post. In this case, iSCSI is used for the shared storage. Note that ideally there would be an extra subnet for the heartbeat connection between ws1 and ws3.

This is only intended to be an overview and the steps have been simplified; refer to the white paper for more details.

Prerequisites

  • MySQL 5.5 & InnoDB must be used for the database (note that MyISAM is not crash-safe and so failover may result in a corrupt database)
  • Windows Server 2008 R2
  • Redundant network connections between nodes and storage
  • WSFC cluster validation must pass
  • iSCSI or FCAL SAN should be used for the shared storage

Step 1. Configure iSCSI in software (optional)

Create 2 clustered disks

This post does not attempt to describe how to configure a highly available, secure and performant SAN. However, shared storage is required for the subsequent steps, so here we look at one way of using software to provide iSCSI targets without any iSCSI/SAN hardware (just using the server’s internal disk). This is a reasonable option to experiment with but probably not what you’d want to deploy for an HA application. If you already have shared storage set up then you can skip this step and use that instead.

As part of this process you’ll create at least two virtual disks within the iSCSI target: one for the quorum file and one for the MySQL binaries and data files. The quorum file is used by Windows Failover Clustering to avoid “split-brain” behaviour.
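
As a hedged example, once the targets have been connected on each server with the iSCSI Initiator, the new disks can be brought online and formatted using diskpart; the disk number and drive letters below are illustrative – confirm yours with “list disk” first:

DISKPART> select disk 1
DISKPART> online disk
DISKPART> attributes disk clear readonly
DISKPART> create partition primary
DISKPART> format fs=ntfs quick label=Quorum
DISKPART> assign letter=Q

(repeat for the second disk, for example assigning E: for the MySQL binaries and data files)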

Step 2. Ensure Windows Failover Clustering is enabled

Ensure that WSFC is enabled

To confirm that Windows Failover Clustering is installed on ws1 and ws3, open the “Features” branch in the Server Manager tool and check if “Failover Cluster Manager” is present.

If Failover Clustering is not installed then it is very simple to add: select “Features” within the Server Manager, click on “Add Features”, select “Failover Clustering” and then “Next”.
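
If you prefer the command line, the same feature can be installed from an elevated PowerShell prompt – a minimal sketch, assuming the ServerManager module that ships with Windows Server 2008 R2:

Import-Module ServerManager
Add-WindowsFeature Failover-Clustering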


Step 3. Install MySQL as a service on both servers

Install MySQL as a Windows Service

If MySQL is already installed as a service on both ws1 and ws3 then this step can be skipped.

The installation is very straightforward using the MySQL Windows Installer and sticking with the default options is fine for this exercise. When you reach the configuration step, check “Create Windows Service”.

The installation and configuration must be performed on both ws1 and ws3, if not already done.
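
For reference, the same service registration can also be performed by hand with mysqld’s --install option (a sketch – the service name and my.ini path here are illustrative):

C:\> mysqld --install MySQL --defaults-file="C:\Program Files\MySQL\MySQL Server 5.5\my.ini"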

Step 4. Migrate MySQL binaries & data to shared storage

If the MySQL Service is running on either ws1 or ws3 then stop it – open the Task Manager using ctrl-shift-escape, select the “Services” tab and then right-click on the MySQL service and choose “Stop Service”.

As the iSCSI disks were enabled on ws1 you can safely access them in order to copy across the MySQL binaries and data files to the shared disk.
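
For example, something like the following copies the existing data directory to the shared disk and then points the service at it – illustrative paths, assuming the shared MySQL disk is mounted as E: and the MySQL 5.5 default locations:

C:\> robocopy "C:\ProgramData\MySQL\MySQL Server 5.5\data" "E:\MySQL\data" /E

and then in my.ini set:

[mysqld]
datadir=E:/MySQL/data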

Step 5. Create Windows Failover Cluster

Create the Cluster (without MySQL)

From the Server Manager on either ws1 or ws3 navigate to “Features -> Failover Cluster Manager” and then select “Validate a Configuration”. When prompted enter ws1 as one name and then ws3 as the other.

In the “Testing Options” select “Run all tests” and continue. If the tests report any errors then these should be fixed before continuing.

Now that the system has been verified, select “Create a Cluster” and provide the same server names as used in the validation step. In this example, “MySQL” is provided as the “Cluster Name” and then the wizard goes on to create the cluster.
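
If you prefer to script these steps, the validation and creation can also be driven from PowerShell using the FailoverClusters module that ships with Windows Server 2008 R2 – a minimal sketch using the node and cluster names from this lab (add -StaticAddress if your network does not supply addresses via DHCP):

Import-Module FailoverClusters
# Run the same validation tests as the wizard
Test-Cluster -Node ws1,ws3
# Create the two-node cluster
New-Cluster -Name MySQL -Node ws1,ws3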

Step 6. Create Cluster of MySQL Servers within Windows Cluster

Cluster the MySQL Service

Adding the MySQL service to the new cluster is very straightforward. Right-click on “Services and applications” in the Server Manager tree and select “Configure a Service or Application…”. When requested by the subsequent wizard, select “Generic Service” from the list and then “MySQL” from the offered list of services; choose an appropriate name for the clustered service – “ClusteredMySQL” is used in this example. The wizard will then offer the shared disk that has not already been established as the quorum disk for use with the clustered service – make sure that it is selected.

Once the wizard finishes, it starts up the MySQL Service. Click on the “ClusteredMySQL” service branch to observe that the service is up and running. You should also make a note of the Virtual IP (VIP) assigned, in this case 192.168.2.18.

Step 7. Test the cluster

As described in Step 6, the VIP should be used to connect to the clustered MySQL service:

C:\> mysql -u root -h 192.168.2.18 -P3306 -pbob

From there create a database and populate some data.

mysql> CREATE DATABASE clusterdb;
mysql> USE clusterdb;
mysql> CREATE TABLE simples (id int not null primary key) ENGINE=innodb;
mysql> REPLACE INTO simples VALUES (1);
mysql> SELECT * FROM simples;
+----+
| id |
+----+
|  1 |
+----+

Migrate MySQL Service Across Cluster

The MySQL service was initially created on ws1 but it can be forced to migrate to ws3 by right-clicking on the service and selecting “Move this service or application to another node”.
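
The same move can also be triggered from the command line – a sketch, assuming the FailoverClusters PowerShell module and the service name chosen in Step 6:

Import-Module FailoverClusters
# Fail the MySQL service (and its shared disk) over from ws1 to ws3
Move-ClusterGroup "ClusteredMySQL" -Node ws3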

As the MySQL data is held in the shared storage (which has also been migrated to ws3), it is still available and can still be accessed through the existing mysql client which is connected to the VIP:

mysql> select * from simples;
ERROR 2006 (HY000): MySQL server has gone away
No connection. Trying to reconnect...
Connection id: 1
Current database: clusterdb
+----+
| id |
+----+
|  1 |
+----+

Note the error shown above – the mysql client loses its connection to the MySQL service as part of the migration and so it automatically reconnects and completes the query. Any application using MySQL with Windows Failover Clustering should also expect to have to cope with these “glitches” in the connection.

Conclusion

More users develop and deploy MySQL on Windows than on any other single platform. Enhancements in MySQL 5.5 increased performance by over 5x compared to previous MySQL releases. With certification for Windows Server Failover Clustering, MySQL can now be deployed to support business-critical workloads demanding high availability, enabling organizations to better meet demanding service levels while also reducing TCO and eliminating single-vendor lock-in.
Please let me know how you get on by leaving comments on this post.





Running MySQL Cluster over multiple Windows servers

Following on from the earlier article on running MySQL Cluster on a single Windows host, this one looks at installing and running over multiple Windows machines.

Target Deployment


In this post, the MySQL Cluster database will be split over 3 Windows machines:

  • 2 hosts with each running 1 data node
  • 3rd host running management node and 2 MySQL Servers


MySQL Cluster Downloads


The first step is to download and install the software on each of the Windows hosts. There are 2 main ways to install MySQL Cluster on Windows – the first is an MSI installer and the second is extracting the files from a Zip archive. This article focuses on the Zip file as the single-host article used the MSI installer. You can get the Zip archive from the Cluster downloads page on mysql.com.

Make sure that you choose the correct file (MSI vs. Zip, 32-bit vs. 64-bit). Also make sure that you don’t download the source code rather than the binaries (you can tell the source archive as it’s a much smaller file).

Repeat this on each of the hosts that will run a Cluster node (or download it once and then copy it to each of the hosts).

Extract Zip archive


Extract the contents of the Zip archive to “c:\mysql\cluster”.

I prefer to keep the configuration and data files separate from the software and so I create a new folder “MySQL_Cluster\My_Cluster” in my home directory and in there create “conf” and “data” folders. Within the “data” folder on 192.168.0.19, create a sub-folder for each of the 2 MySQL Server (mysqld) processes and then copy “c:\mysql\cluster\data\mysql” to each – this creates the mysql database for each of the MySQL nodes, containing data such as privileges and passwords.
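
For example, from a command prompt in your home directory (one way to do this; xcopy /E /I copies the whole folder tree):

C:\> mkdir Documents\MySQL_Cluster\My_Cluster\conf
C:\> mkdir Documents\MySQL_Cluster\My_Cluster\data\mysqld_1
C:\> mkdir Documents\MySQL_Cluster\My_Cluster\data\mysqld_2
C:\> xcopy /E /I c:\mysql\cluster\data\mysql Documents\MySQL_Cluster\My_Cluster\data\mysqld_1\mysql
C:\> xcopy /E /I c:\mysql\cluster\data\mysql Documents\MySQL_Cluster\My_Cluster\data\mysqld_2\mysql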

Add MySQL executables to PATH


To get access to the executables from the command line, add “c:\mysql\cluster\bin” to the PATH environment variable.
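
One way to do this from a command prompt (note that setx only affects newly opened command prompts, not the current one):

C:\> setx PATH "%PATH%;c:\mysql\cluster\bin"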


Create a “config.ini” file within the “conf” folder that was created on the host to run the management node (ndb_mgmd) – which in this example is 192.168.0.19:

[ndbd default]
noofreplicas=2
datadir=E:\am233268\Documents\MySQL_Cluster\My_Cluster\data

[ndbd]
hostname=192.168.0.201
id=2

[ndbd]
hostname=192.168.0.4
id=3

[ndb_mgmd]
id=1
hostname=192.168.0.19

[mysqld]
id=101
hostname=192.168.0.19

[mysqld]
id=102
hostname=192.168.0.19

As both of the MySQL Server nodes will also run on 192.168.0.19, we create a cnf file for each of them in that same “conf” folder:

my.101.cnf

[mysqld]
ndb-nodeid=101
ndbcluster
datadir=E:\am233268\Documents\MySQL_Cluster\My_Cluster\data\mysqld_1
port=3306
ndb-connectstring=192.168.0.19:1186

my.102.cnf

[mysqld]
ndb-nodeid=102
ndbcluster
datadir=E:\am233268\Documents\MySQL_Cluster\My_Cluster\data\mysqld_2
port=3307
ndb-connectstring=192.168.0.19:1186

Note that the “ndb-connectstring” is not strictly needed as the MySQL Servers happen to be on the same host as the management node but they’ve been included to make it clearer what to change if you moved those nodes to another machine.

Ports to open for MySQL Cluster nodes


The information from this table can help you set up the firewall rules to allow the nodes to communicate with each other but, if possible, turn off the firewall for connections between these 3 nodes.
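
If you do keep the firewall enabled, rules along these lines open the fixed ports used in this example – a sketch: the management node listens on 1186 and the two MySQL Servers on 3306/3307, while data nodes use dynamically allocated ports by default, so either allow the other nodes’ IP addresses or pin the ports with the ServerPort configuration parameter:

netsh advfirewall firewall add rule name="NDB management" dir=in action=allow protocol=TCP localport=1186
netsh advfirewall firewall add rule name="MySQL Servers" dir=in action=allow protocol=TCP localport=3306-3307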

The next step is to start the nodes (processes) but before then, a hint.

If you run each command from a different command prompt then your screen will quickly fill with windows. Instead, rather than just typing “C:\> <command>”, use “C:\> start /B <command>” (for example “C:\> start /B ndbd --initial”) so that multiple processes can be run from the same window. To keep things simple, the “start /B” has been skipped for the rest of this article.

The first process to start is the management node; run the following from the “<home>\Documents\MySQL_Cluster\My_Cluster” folder on 192.168.0.19:

192.168.0.19: ndb_mgmd --initial -f conf/config.ini --configdir=./conf

Next, start the data nodes on 192.168.0.201 and 192.168.0.4:

192.168.0.201: ndbd -c 192.168.0.19:1186 --initial

192.168.0.4: ndbd -c 192.168.0.19:1186 --initial

and then finally start the MySQL Server nodes on 192.168.0.19:

192.168.0.19: mysqld --defaults-file=conf\my.101.cnf
192.168.0.19: mysqld --defaults-file=conf\my.102.cnf

Finally, check that all of the nodes are up and running:

192.168.0.19: ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @192.168.0.201  (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0, Master)
id=3    @192.168.0.4  (mysql-5.1.39 ndb-7.0.9, Nodegroup: 0)

[ndb_mgmd(MGM)] 1 node(s)
id=1    @192.168.0.19  (mysql-5.1.39 ndb-7.0.9)

[mysqld(API)]   2 node(s)
id=101  @192.168.0.19  (mysql-5.1.39 ndb-7.0.9)
id=102  @192.168.0.19  (mysql-5.1.39 ndb-7.0.9)

Known limitations of running MySQL Cluster on Windows:

  • No angel process for data nodes -> processes will not be automatically restarted even if you set StopOnError to FALSE
  • Running nodes as Windows services is not really practical (as well as software limitations, you would still need to activate processes through ndb_mgm)
  • Management nodes, MySQL Servers and NDB API applications are safe to run on Windows in production – this will soon be formally supported
  • See the open MySQL Cluster bugs reported for Windows
  • You tell us! Help us get to GA quality on Windows




MySQL Cluster running on Windows – new Webinar


Replay now available from http://www.mysql.com/news-and-events/on-demand-webinars/display-od-445.html

I’ll be presenting a (free) webinar on installing, configuring and running MySQL Cluster on Windows – starting at 10:00 PT on 7th January – that’s 18:00 UK time or 19:00 for most of Europe. To attend, just register ahead of time at mysql.com

In this presentation we will explore the benefits of leveraging MySQL Cluster on Windows. We will also cover step by step, how to get started with installing, configuring, and running MySQL Cluster on Windows. Finally, a review of MySQL Cluster’s architecture and future roadmap for Windows support will be covered. If you are interested in learning how to leverage MySQL Cluster on Windows, this webinar is for you.

Thursday, January 07, 2010: 10:00 Pacific time (America)
Thu, Jan 07: 08:00 Hawaii time
Thu, Jan 07: 11:00 Mountain time (America)
Thu, Jan 07: 12:00 Central time (America)
Thu, Jan 07: 13:00 Eastern time (America)
Thu, Jan 07: 18:00 UTC
Thu, Jan 07: 18:00 Western European time
Thu, Jan 07: 19:00 Central European time
Thu, Jan 07: 20:00 Eastern European time





My first Cluster running on Windows

I figured that it was time to check out how to install, configure, run and use MySQL Cluster on Windows. To keep things simple, this first Cluster will all run on a single host but includes these nodes:

  • 1 Management node (ndb_mgmd)
  • 2 Data nodes (ndbd)
  • 3 MySQL Server (API) nodes (mysqld)

Downloading and installing

Browse to the Windows section of the MySQL Cluster 7.0 download page and download the installer (32 or 64 bit).

MySQL Cluster Windows Installer


Run the .msi file and choose the “Custom” option. Don’t worry about the fact that it’s branded as “MySQL Server 7.0” and that you’ll go on to see adverts for MySQL Enterprise – that’s just an artefact of how the installer was put together.

On the next screen, I decided to change the “Install to” directory to “c:\mysql” – not essential but it saves some typing later.

Go ahead and install the software and then you’ll be asked if you want to configure the server – uncheck that as we’ll want to tailor the configuration so that it works with our Cluster.

There are a couple of changes you need to make to your Windows configuration before going any further:

  1. Add the new bin folder to your path (in my case “C:\mysql\bin”)
  2. Make hidden files visible (needed in order to set up multiple MySQL Server processes on the same machine)

Configure and run the Cluster

Copy the contents of “C:\ProgramData\MySQL\MySQL Server 7.0\data” to “C:\ProgramData\MySQL\MySQL Server 7.0\data4”, “C:\ProgramData\MySQL\MySQL Server 7.0\data5” and “C:\ProgramData\MySQL\MySQL Server 7.0\data6”. Note that this assumes that you’ve already made hidden files visible. Each of these folders will be used by one of the mysqld processes.

Create the folder “c:\mysql\cluster” and then create the following files there:

config.ini

[ndbd default]
noofreplicas=2
[ndbd]
hostname=localhost
id=2
[ndbd]
hostname=localhost
id=3
[ndb_mgmd]
id = 1
hostname=localhost
[mysqld]
id=4
hostname=localhost
[mysqld]
id=5
hostname=localhost
[mysqld]
id=6
hostname=localhost

my.4.cnf

[mysqld]
ndb-nodeid=4
ndbcluster
datadir="C:ProgramDataMySQLMySQL Server 7.0data4"
port=3306
server-id=3306

my.5.cnf

[mysqld]
ndb-nodeid=5
ndbcluster
datadir="C:ProgramDataMySQLMySQL Server 7.0data5"
port=3307
server-id=3307

my.6.cnf

[mysqld]
ndb-nodeid=6
ndbcluster
datadir="C:ProgramDataMySQLMySQL Server 7.0data6"
port=3308
server-id=3308

Those files configure the nodes that make up the Cluster.

From a command prompt window, launch the management node:

C:\Users\Andrew>cd \mysql\cluster
C:\mysql\cluster>ndb_mgmd -f config.ini
2009-06-16 20:01:20 [MgmSrvr] INFO     -- NDB Cluster Management Server. mysql-5.1.34 ndb-7.0.6
2009-06-16 20:01:20 [MgmSrvr] INFO     -- The default config directory 'c:\mysql\mysql-cluster' does not exist. Trying to create it...
2009-06-16 20:01:20 [MgmSrvr] INFO     -- Sucessfully created config directory
2009-06-16 20:01:20 [MgmSrvr] INFO     -- Reading cluster configuration from 'config.ini'

and then from another window, check that the cluster has been defined:

C:\Users\Andrew>ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2 (not connected, accepting connect from localhost)
id=3 (not connected, accepting connect from localhost)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @localhost  (mysql-5.1.34 ndb-7.0.6)
[mysqld(API)]   3 node(s)
id=4 (not connected, accepting connect from localhost)
id=5 (not connected, accepting connect from localhost)
id=6 (not connected, accepting connect from localhost)

Fire up 2 more command prompt windows and launch the 2 data nodes:

C:\Users\Andrew>ndbd
2009-06-16 20:08:57 [ndbd] INFO     -- Configuration fetched from 'localhost:1186', generation: 1
2009-06-16 20:08:57 [ndbd] INFO     -- Ndb started
NDBMT: non-mt
2009-06-16 20:08:57 [ndbd] INFO     -- NDB Cluster -- DB node 2
2009-06-16 20:08:57 [ndbd] INFO     -- mysql-5.1.34 ndb-7.0.6 --
2009-06-16 20:08:57 [ndbd] INFO     -- Ndbd_mem_manager::init(1) min: 84Mb initial: 104Mb
Adding 104Mb to ZONE_LO (1,3327)
2009-06-16 20:08:57 [ndbd] INFO     -- Start initiated (mysql-5.1.34 ndb-7.0.6)
WOPool::init(61, 9)
RWPool::init(22, 13)
RWPool::init(42, 18)
RWPool::init(62, 13)
Using 1 fragments per node
RWPool::init(c2, 18)
RWPool::init(e2, 14)
WOPool::init(41, 8)
RWPool::init(82, 12)
RWPool::init(a2, 52)
WOPool::init(21, 5)

(repeat from another new window for the second data node).

After both data nodes (ndbd) have been launched, you should be able to see them through the management client:

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @127.0.0.1  (mysql-5.1.34 ndb-7.0.6, Nodegroup: 0, Master)
id=3    @127.0.0.1  (mysql-5.1.34 ndb-7.0.6, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @127.0.0.1  (mysql-5.1.34 ndb-7.0.6)
[mysqld(API)]   3 node(s)
id=4 (not connected, accepting connect from localhost)
id=5 (not connected, accepting connect from localhost)
id=6 (not connected, accepting connect from localhost)

Finally, the 3 MySQL Server/API nodes should be launched from 3 new windows:

C:\Users\Andrew>cd \mysql\cluster
C:\mysql\cluster>mysqld --defaults-file=my.4.cnf

C:\Users\Andrew>cd \mysql\cluster
C:\mysql\cluster>mysqld --defaults-file=my.5.cnf

C:\Users\Andrew>cd \mysql\cluster
C:\mysql\cluster>mysqld --defaults-file=my.6.cnf

Now, just check that all of the Cluster nodes are up and running from the management client…

ndb_mgm> show
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=2    @127.0.0.1  (mysql-5.1.34 ndb-7.0.6, Nodegroup: 0, Master)
id=3    @127.0.0.1  (mysql-5.1.34 ndb-7.0.6, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1    @127.0.0.1  (mysql-5.1.34 ndb-7.0.6)
[mysqld(API)]   3 node(s)
id=4    @127.0.0.1  (mysql-5.1.34 ndb-7.0.6)
id=5    @127.0.0.1  (mysql-5.1.34 ndb-7.0.6)
id=6    @127.0.0.1  (mysql-5.1.34 ndb-7.0.6)

Using the Cluster

There are now 3 API nodes/MySQL Servers (mysqld processes) running, all accessing the same data. Each of those nodes can be accessed by the mysql client using the ports that were configured in the my.X.cnf files. For example, we can access the first of those nodes (node 4) in the following way from (yet another) window:

C:\Users\Andrew>mysql -h localhost -P 3306
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.34-ndb-7.0.6-cluster-gpl MySQL Cluster Server (GPL)

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

mysql> use test;
Database changed
mysql> create table assets (name varchar(30) not null primary key, value int) engine=ndb;
Query OK, 0 rows affected (1.44 sec)
mysql> insert into assets values ('car', 950);
Query OK, 1 row affected (0.00 sec)
mysql> select * from assets;
+------+-------+
| name | value |
+------+-------+
| car  |   950 |
+------+-------+
1 row in set (0.00 sec)

To check that everything is working correctly, we can access the same database through another of the API nodes:

C:\Users\Andrew>mysql -h localhost -P 3307
Welcome to the MySQL monitor.  Commands end with ; or \g.
Your MySQL connection id is 2
Server version: 5.1.34-ndb-7.0.6-cluster-gpl MySQL Cluster Server (GPL)
Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.
mysql> use test;
Database changed
mysql> show tables;
+----------------+
| Tables_in_test |
+----------------+
| assets         |
+----------------+
1 row in set (0.06 sec)
mysql> select * from assets;
+------+-------+
| name | value |
+------+-------+
| car  |   950 |
+------+-------+
1 row in set (0.09 sec)

It’s important to note that any table created using the ndb storage engine (and its contents) can be accessed through any of the API nodes, whereas tables created using other storage engines are local to the API node (MySQL Server) where they were created.
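
You can see this for yourself by creating a non-ndb table through one API node and observing that it is not visible through the others – a quick sketch building on the session above (the table name is invented):

mysql> create table local_only (id int not null primary key) engine=innodb;

Running “show tables;” through node 5 (port 3307) will then list assets but not local_only, because the InnoDB table exists only in node 4’s local data directory.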

Your next steps

This is a very simple, contrived set up – in any sensible deployment, the nodes would be spread across multiple physical hosts in the interests of performance and redundancy. You’d also set several more variables in the configuration files in order to size and tune your Cluster. Finally, you’d likely want to have some of these processes running as daemons or services rather than firing up so many windows.

It’s important to note that Windows is not a fully supported platform for MySQL Cluster. If you have an interest in deploying a production system on Windows then please contact me at andrew@clusterdb.com