MySQL Cluster Auto-Installer – labs release

Deploying a well configured cluster has just got a lot easier! Oracle have released a new auto-installer/configurator for MySQL Cluster that makes the process extremely simple while making sure that the cluster is well configured for your application. The installer is part of MySQL Cluster 7.3 and so is not yet GA, but it can also be used with MySQL Cluster 7.2. A single command launches the web-based wizard, which then steps you through configuring the cluster; to keep things even simpler, it will automatically detect the resources on your target machines and use these results, together with the type of workload you specify, to determine values for the key configuration parameters.

Tutorial Video

Before going through the detailed steps, here’s a demonstration of the auto-installer in action…

Downloading and running the wizard

The software can be downloaded from MySQL Labs; just select the MySQL-Cluster-Auto-Installer build, unzip the file and then run it. To run on Windows, just double-click setup.bat – note that if you installed from the MSI and didn’t change the install directory then this will be located somewhere like C:\Program Files (x86)\MySQL\MySQL Cluster 7.2. On Linux, just run ndb_setup.
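For example, on Linux the steps look something like this (a minimal sketch – the archive name and directory below are placeholders for whichever build you downloaded):

tar xzf MySQL-Cluster-Auto-Installer.tar.gz     # placeholder name - use the file you downloaded from MySQL Labs
cd MySQL-Cluster-Auto-Installer                 # placeholder directory name
./ndb_setup                                     # starts the wizard's web server and (if possible) opens a browser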

Creating your cluster

MySQL Cluster auto-installer landing page

Landing page

When you run the auto-installer it starts a small web server and then (if possible) automatically connects your web browser to it – presenting you with the first page of the wizard. If this isn’t possible (for example the server isn’t running a desktop environment), then you can connect to it remotely using the URL http://your-server-name-goes-here:8081/index.html. It may take a number of seconds to load and so please be patient. Note that the machine where you run this doesn’t need to be a host that will be included in the cluster.

From the landing page, just click on the “Create new MySQL Cluster” icon to get started.

On the next page you need to specify the list of servers that will form part of the cluster. The machine where the installer is being run needs to have ssh access to all of the cluster hosts (further, access to those machines must already have been approved from this one – if you’re uncertain, just manually connect to each one using an ssh client first).

By default, the wizard assumes that ssh keys have been set up (so that a password isn’t needed) – if that isn’t the case, just un-check the checkbox and provide your username and password.
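If you do want key-based access but haven’t set it up yet, something along these lines (run from the machine that will host the installer; the user name and address are just examples) is usually all that’s needed:

ssh-keygen -t rsa                   # accept the defaults; creates ~/.ssh/id_rsa and ~/.ssh/id_rsa.pub
ssh-copy-id billy@192.168.1.104     # repeat for each of the target hosts
ssh billy@192.168.1.104 hostname    # confirms password-less login and records the host key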

On this page, you also get to specify what “type” of cluster you want; if you’re experimenting for the first time then it’s probably safest to stick with “Simple testing”, but for a production system you’d want to specify the kind of application and whether it will be a write-intensive application.

Auto-discovery of target host resources

Auto-discovery of target host resources

On the next page, you will see the wizard attempt to auto-detect the resources on your target machines. If this fails then you can enter the data manually.

You can also overwrite the resource values (for example, if you don’t want the cluster to use up a big share of the memory on the target systems then just overwrite the amount of memory).





Overwrite the default directories on the target systems

Overwriting the default directories on the target systems

It’s also on this page that you can specify where the MySQL Cluster software is stored on each of the hosts (if the defaults aren’t correct) – this should be the path to where you unzipped the MySQL Cluster tar-ball/zip file – as well as where the data (and configuration files) should be stored. You can just overwrite the values or select multiple rows and hit the “edit” button.







Defining processes

Defining processes

The following page presents you with a default set of nodes (processes) and how they’ll be distributed across all of the target hosts – if you’re happy with the proposal then just advance to the next page. Here’s what you can change:

  • Add extra nodes
  • Move nodes from one host to another (just drag and drop)
  • Delete nodes
  • Change a node from one type to another


Add process

Add process

The diagram to the right shows an example of adding an extra MySQL Server.












Optionally override recommended configuration parameters

Optionally override recommended configuration parameters

On the next screen you’re presented with some of the key configuration parameters that have been set (behind the scenes, the wizard sets many more) that you might want to override; if you’re happy then just progress to the next screen. If you do want to make any changes then make them here before continuing. If you’d previously selected anything other than “simple” for the kind of cluster to create then you can check the “Show advanced configuration options” box in order to view/modify more parameters.




Deployment in progress

Deployment in progress

On the final screen you can review the details of the final recommended configuration and then just hit “Deploy and start cluster” and it will do just that. Depending on the complexity of the cluster, it can take a while to deploy and start everything but you’re shown a progress bar together with an explanation of what stage the process is at.

If for some reason you prefer or need to start the processes manually, this page also shows you the commands that you’d need to run (as well as the configuration files if you need to create them manually).
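As an illustration only (the paths, node-ids and connect-string below are placeholders – the deployment page shows the exact commands for your cluster), a manual start follows the order management node(s), then data nodes, then MySQL Server(s):

ndb_mgmd --initial --ndb-nodeid=49 --config-dir=/home/billy/MySQL_Cluster/49/ --config-file=/home/billy/MySQL_Cluster/49/config.ini
ndbmtd --ndb-nodeid=1 --ndb-connectstring=192.168.1.104:1186,192.168.1.105:1186
ndbmtd --ndb-nodeid=2 --ndb-connectstring=192.168.1.104:1186,192.168.1.105:1186
mysqld --defaults-file=/home/billy/MySQL_Cluster/55/my.cnf &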

Once the wizard declares the process complete, you can check for yourself before going ahead and starting your testing:

billy@black:~ $ ndb_mgm -e show
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)]     2 node(s)
id=1    @192.168.1.106  (mysql-5.5.25 ndb-7.2.8, Nodegroup: 0, Master)
id=2    @192.168.1.107  (mysql-5.5.25 ndb-7.2.8, Nodegroup: 0)

[ndb_mgmd(MGM)] 2 node(s)
id=49   @192.168.1.104  (mysql-5.5.25 ndb-7.2.8)
id=52   @192.168.1.105  (mysql-5.5.25 ndb-7.2.8)

[mysqld(API)]   9 node(s)
id=50 (not connected, accepting connect from 192.168.1.104)
id=51 (not connected, accepting connect from 192.168.1.104)
id=53 (not connected, accepting connect from 192.168.1.105)
id=54 (not connected, accepting connect from 192.168.1.105)
id=55   @192.168.1.104  (mysql-5.5.25 ndb-7.2.8)
id=56   @192.168.1.104  (mysql-5.5.25 ndb-7.2.8)
id=57   @192.168.1.105  (mysql-5.5.25 ndb-7.2.8)
id=58   @192.168.1.105  (mysql-5.5.25 ndb-7.2.8)
id=59   @192.168.1.106  (mysql-5.5.25 ndb-7.2.8)

As always, it would be great to hear some feedback, especially if you have ideas for improving it or if you hit any problems.





166 comments

  1. Sylvain says:

    Hi,

    First of all, thanks for all your articles !

    This one especially, because I plan to set up a MySQL Cluster and the auto-installer seems to be a dream for a non-expert.

    I started to prepare a test environment in order to test the auto installer before installing a cluster “from my own hands”. I will use mysql cluster on windows environment.
    I will start testing probably tomorrow.

    I’d like to know the differences and the effects on the configuration between “Web App” and “Real Time” when you select them in the auto-installer. Is it just changing roles on the nodes, or other parameters like the way tables are managed: all in memory, or maybe memory for the indexes and disk for the rest of the data (not sure MySQL Cluster can provide this, but let me know).
    Thanks in advance !
    I’ll probably let a new comment after my test.

    • andrew says:

      Hi Sylvain, please let me know how you get on.

      The installer doesn’t do anything to set up disk-based data – you do that when you actually create the tables.

      I don’t have the real answer to your questions (and I can’t check it out at the moment) so I’d suggest trying each option and see how the proposed configuration details change.

      Regards, Andrew.

  2. Steve says:

    I have installed a 4 node cluster with the instructions provided. My question is: if a restart of a node is required, how do I perform the shutdown/startup process? This configuration is great but I’m not sure how to perform rolling restarts if changes are necessary.

    • andrew says:

      Hi Steve,

      To restart a management or data node, first stop the process using the ‘stop’ command within the ndb_mgm tool and then execute the ndb_mgmd or ndbmtd binary again (including the options that the auto-installer would have shown you).

      To stop the mysqld processes use the mysqladmin shutdown command.
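      As a rough sketch (the node-id, binary path and connect-string here are just placeholders – use the values shown for your own cluster on the auto-installer’s deployment page):

      ndb_mgm -e "2 stop"
      /usr/local/mysql/bin/ndbmtd --ndb-nodeid=2 --ndb-connectstring=192.168.1.104:1186,192.168.1.105:1186
      mysqladmin -h 127.0.0.1 -P3306 -u root shutdown     # stops a MySQL Server ahead of restarting it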

      Andrew.

  3. Elavarasan says:

    I’m planning to set up MySQL Cluster for HA in our organization. Currently I have installed two CentOS servers with the MySQL database (default), so please advise me on how to deploy a cluster onto the two MySQL servers and how to monitor it as well.

    Thanks,
    Ela

  4. Ramzy says:

    Hi guys,

    I really appreciate the post and will give it a try tonight.
    I have a question though: is there a release of the auto-installer for Linux? (I am planning to use Ubuntu 12.04.)
    Thanks

  5. jelramzy says:

    well,

    I failed to set up a cluster environment based on VirtualBox VMs (not using the auto installer) – can anyone help?
    Thank you all!

  6. stephane says:

    Hi andrew,

    I tried to install it (downloaded and untarred it) on a Linux server, then tried to browse to the web interface from another computer (with a browser) and I get the message “The connection with the server was reset while the page was loading” (translated from French).
    I made sure the port and IP are open in the firewall… I also installed Apache to check on port 80… and it’s working, but not port 8081…
    I also tried to set the debug level to debug but nothing goes into the /tmp/log file…

    Any advice would be welcome …

    Thanks
    Stéphane

    • andrew says:

      Hi Stéphane,

      did you try connecting from a browser running on the host running the installer? That could help confirm whether it’s a firewall issue. Note that the installer can be run on a machine that isn’t part of the Cluster.

      Regards, Andrew.

  7. stephane says:

    Hi andrew,

    I did install it on another machine (Ubuntu with a desktop), and it worked fine; I even managed to start the cluster BUT … the data nodes don’t connect …

    ndb_mgm> show
    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)] 2 node(s)
    id=1 @192.168.10.15 (mysql-5.5.27 ndb-7.2.8, starting, Nodegroup: 0)
    id=2 @192.168.10.16 (mysql-5.5.27 ndb-7.2.8, starting, Nodegroup: 0)

    [ndb_mgmd(MGM)] 2 node(s)
    id=49 @127.0.0.1 (mysql-5.5.27 ndb-7.2.8)
    id=52 @192.168.10.12 (mysql-5.5.27 ndb-7.2.8)

    [mysqld(API)] 14 node(s)
    id=50 (not connected, accepting connect from node-1)
    id=51 (not connected, accepting connect from node-1)
    id=53 (not connected, accepting connect from node-2)
    id=54 (not connected, accepting connect from node-2)
    id=55 (not connected, accepting connect from node-3)
    id=56 (not connected, accepting connect from node-3)
    id=57 (not connected, accepting connect from node-1)
    id=58 (not connected, accepting connect from node-1)
    id=59 (not connected, accepting connect from node-2)
    id=60 (not connected, accepting connect from node-2)
    id=61 (not connected, accepting connect from node-3)
    id=62 (not connected, accepting connect from node-3)
    id=63 (not connected, accepting connect from node-4)
    id=64 (not connected, accepting connect from node-4)

    ndb_mgm>

    I see “Plugin ‘ndbcluster’ is disabled.” in the .err file for the nodes, which I think is the problem, but I cannot find where the cnf files are for the nodes – I think there are none … but then where can I set ndbcluster to enabled?

    Also, I would advise adding a few comments to the article. For example, I use 6 VMs in a VMware environment (pCC OVH), and first tried the web page with the root ssh information … but then the nodes were not starting up, because the root user cannot start mysql for security reasons.

  8. Steve says:

    I tried doing this on OEL6.3 (server mode, no X). It looks like it starts the python webserver then opens a text based browser which tries to connect to localhost:8081 and just hangs there on the request. Eventually it will timeout. Not quite sure what’s going on there.

    • andrew says:

      The installer will try to launch the installer app within a web browser; on some systems this will result in a text-based browser being launched if you don’t have a desktop environment, on others you won’t get anything. If you don’t have a desktop environment then just launch a browser from a machine that does and then connect to the installer’s web server (you should have been given a message with the port to use).

      Regards, Andrew.

      • andrew says:

        Note that you can tell it not to even bother trying to open a local browser window by providing the -n option.

        Regards, Andrew.

        • Rommel says:

          First, I am trying on both Debian and Oracle Linux to run this standalone installer for the cluster on a single VM with 3 nodes, but when I give the option:

          ./ndb_setup.py -n

          Both apache and tomcat cannot resolve to the host and I receive no message.

          ./ndb_setup.py

          returns the error:

          Could not control your browser. Try to opening http://localhost:8081/welcome.html to launch the application.
          ----------------------------------------
          Exception happened during processing of request from (‘127.0.0.1’, 46162)
          Traceback (most recent call last):
          File “/usr/lib/python2.7/SocketServer.py”, line 593, in process_request_thread
          self.finish_request(request, client_address)
          File “/usr/lib/python2.7/SocketServer.py”, line 334, in finish_request
          self.RequestHandlerClass(request, client_address, self)
          File “/usr/lib/python2.7/SocketServer.py”, line 651, in __init__
          self.finish()
          File “/usr/lib/python2.7/SocketServer.py”, line 704, in finish
          self.wfile.flush()
          File “/usr/lib/python2.7/socket.py”, line 303, in flush
          self._sock.sendall(view[write_offset:write_offset+buffer_size])
          error: [Errno 32] Broken pipe
          ----------------------------------------

          where to go from here?

          • andrew says:

            If you’re running this on a machine that has a desktop environment and a browser installed then you can skip the -n option and it should open a browser window. Otherwise, keep -n but add “-N <hostname>” and then the web server will accept connections from a browser that is trying to access http://<hostname>:8081/welcome.html
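            For example (substituting your own host name or IP address for the one shown):

            ./ndb_setup.py -n -N 192.168.1.104

            and then browse to http://192.168.1.104:8081/welcome.html from any machine that can reach it.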

            Andrew.

  9. Stuart says:

    Hi Andrew,

    I am trying to get the cluster auto-configuration to work in Windows. I have a 4-box Windows Server 2008 R2 test environment.

    All servers have freeSSHd installed and running as a service. I can test the SSH services with Putty; I can log in to the servers and create folders, get the OS and PROCESSOR_ARCHITECTURE, etc. However, when I use the “Cluster Type and SSH Credentials” page with the correct Username and password (I am not using key-based SSH) I get an error stating “AllowDesktopAccess failed” when “echo %OS% %PROCESSOR_ARCHITECTURE%” was called.

    I don’t really understand how the installer fails when I can perform the commands in Putty. This isn’t the only problem; I get Errno13 failed to create directory errors at the end of the wizard and the cluster is not created.

    Thanks for any help you can provide.

  10. Stuart says:

    Update: My issue was with using FreeSSHd.

    After a lot of messing around I scrapped that SSH server and tried CopSSD instead. This is an OpenSSH / CygWin based SSH server, which worked a lot better. The system auto-discovery didn’t work, but after supplying that information manually the auto-installation process did work.

    Happy Days. Now to find a GUI cluster management tool.

    • andrew says:

      Hi Stuart,

      I tried with freeSSHd today and saw the same error as you; I’ve raised a bug report with the engineering team. Thanks for sharing this.

      Regards, Andrew.

      • andrew says:

        Stuart,

        the engineering team have found the cause – freeSSHd behaves very differently (when sending in commands you have to tell it that some of them should be run within cmd.exe) compared to the ssh server that comes with Cygwin (which is what they tested with). They’re now working on a fix to be compatible with freeSSHd but for now you’re right to use Cygwin.

        Thanks again for flagging this.

        Andrew.

  11. stephane says:

    Update : better but not definitively ok

    Everything works fine with the auto-installer (except on Ubuntu without a desktop, where the web server on port 8081 is not usable even from outside, with no firewall in the way).

    The only problem now is with the data nodes not connecting to the manager …

    id=50 (not connected, accepting connect from node-1)
    id=51 (not connected, accepting connect from node-1)
    id=53 (not connected, accepting connect from node-2)
    id=54 (not connected, accepting connect from node-2)
    id=55 (not connected, accepting connect from node-3)
    ….

    if someone finds out …. I used all the defaults for the installation …

    Stéphane

    • andrew says:

      Hi Stephane,

      are you certain that there isn’t a firewall in the way? On the host running your management node, you should see a cluster log file which should report whether the data nodes have attempted to connect. On each of the data node hosts you should also have a log file which should give the data nodes’ view of things.
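      For example, with the directories that the auto-installer uses by default, something like this (node-ids and paths will differ for your hosts):

      tail /home/billy/MySQL_Cluster/49/ndb_49_cluster.log     # cluster log on the management node host
      tail /home/billy/MySQL_Cluster/1/ndb_1_out.log           # output log on a data node host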

      Regards, Andrew.

  12. Yves says:

    Hello,
    At first I would like to thank you for this great work; however, I’m having trouble at the deploy step. I get a message which says:

    Command '[u'/home/administrateur/Documents/softwares/Mysql_cluster/mysql-cluster-7.2.8-linux-x86_64/bin/ndb_mgmd', '--initial', '--ndb-nodeid=49', '--config-dir=/home/administrateur/MySQL_Cluster/data/49/data/', '--config-file=/home/administrateur/MySQL_Cluster/data/49/data/config.ini']' returned non-zero exit status 1

    I googled this but it returned nothing. Do you have any solution?

    Thanks Yves.

  13. Yves says:

    Thanks, where could I find this log ?

  14. Yves says:

    I’m progressing, but now, again at the deploy step, I have:

    Command `/usr/local/mysql/mysql-cluster/bin/ndbmtd --ndb-nodeid=1 --ndb-connectstring=192.168.0.104:1186,192.168.0.103:1186,', running on 192.168.0.102 exited with 1:
    2013-02-01 15:32:21 [ndbd] INFO -- Angel connected to '192.168.0.104:1186'
    2013-02-01 15:32:21 [ndbd] INFO -- Angel allocated nodeid: 1
    2013-02-01 15:32:21 [ndbd] WARNING -- Cannot change directory to '/home/administrateur/MySQL_Cluster/data/1/data/', error: 2
    2013-02-01 15:32:21 [ndbd] ERROR -- Couldn't start as daemon, error: 'Failed to open logfile '/home/administrateur/MySQL_Cluster/data/1/data//ndb_1_out.log' for write, errno: 2'

    I’m working on Ubuntu 12.04 servers and I’m facing several permission issues, but for this last log I don’t understand – the permissions are set to 777.

  15. Allen says:

    Guys, where can I find another installer for MySQL-Cluster-Auto-Installer binary-release-cluster-community_windows-x86-32bit_zipmysql-cluster-gpl-7.2.8-win32.zip?

    When I download this version from the web site the download of the .zip fails and stops – the file is damaged. If you have another web site or torrent that I can download it from, I will be grateful to you.

  16. Allen says:

    Thanks I will be wating for you signal when the right download file is ready. Ok Andrew

  17. Brendan says:

    Hello thanks for the post. I tried to install the cluster using ./ndb_setup on a Linux server via putty and got this error message. Please help.

    # ./ndb_setup
    Traceback (most recent call last):
    File "./ndb_setup", line 29, in <module>
    (pymajor, pyminor) = num_py_major_minor_tuple()
    File "./ndb_setup", line 10, in num_pyver
    return int(filter(str.isdigit, vn))
    TypeError: int() argument must be a string or a number, not 'filter'

    • andrew says:

      Hi Brendan,

      looks strange. Could I ask you to run the following and let me know what the result is….

      > python
      Python 2.6.4 (r264:75706, Oct 21 2010, 03:48:43) [C] on sunos5
      Type "help", "copyright", "credits" or "license" for more information.
      >>> import platform; platform.python_version_tuple()
      ('2', '6', '4')
      >>> filter(str.isdigit, '2')
      '2'
      >>>

  18. Brendan says:

    Hello Andrew, thank you for the quick reply.

    I have version 3.3 of Python installed. Will that be an issue? Here is the output.

    # python
    Python 3.3.0 (default, Feb 28 2013, 11:58:30)
    [GCC 4.1.2 20080704 (Red Hat 4.1.2-54)] on linux
    Type “help”, “copyright”, “credits” or “license” for more information.
    >>> import platform; platform.python_version_tuple(); (‘2’, ‘6’, ‘4’)
    (‘3’, ‘3’, ‘0’)
    (‘2’, ‘6’, ‘4’)
    >>> filter(str.isdigit, ‘2’); ‘2’

    ‘2’
    >>>

    • andrew says:

      Hi Brendan,

      could you please try with Python 2.7 instead? We’ll look into getting the auto-installer to work with 3.3 in the future.

      Thanks, Andrew.

  19. Nils says:

    Hello Andrew,

    if I want to deploy the MySQL Cluster I always get the message:

    Cannot locate ndb_mgmd in /usr/local/bin/['bin', 'scripts', '', '../scripts'] on host 192.168.2.38

    That happens on Debian 6 and CentOS 6.3.

    Config.ini etc. is created.

    Do you know whats going wrong here?

    • andrew says:

      Hi Nils,

      have you first unpacked the Cluster tar ball on 192.168.2.38? If not then do so. If so, is it located in /usr/local/bin? If not then, when you step through the installer pages, override the default and point it to where the Cluster binaries are installed on 192.168.2.38.
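      For example (the archive name and directory are only illustrative – extract whichever release you downloaded and then give that path to the installer):

      ssh user@192.168.2.38
      tar xzf mysql-cluster-gpl-7.2.8-linux2.6-x86_64.tar.gz -C /usr/local/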

      Regards, Andrew.

      • steven says:

        can you be a bit more specific?

        I have the same error. I have 4 VMs and I installed the

        Red Hat Enterprise Linux 7 / Oracle Linux 7 (x86, 64-bit) RPM package (MySQL Server) on all four.

        The docs seem to say that is all I need?

  20. Nils says:

    OK, so I have to unpack it manually? I thought this was done by the auto installer?

    Thanks in advance,
    Nils

  21. jj says:

    Hello,

    In the first phase (define cluster), when I choose key-based SSH, it shows an error message saying “No authentication methods available”. I was hoping the page would ask me to browse for a private key (.pem).

    I am using EC2 ubuntu servers which are all using the same key pair.

    Please guide how I can resolve the error message for connecting to the EC2 machines.

    Thanks!

  22. Michael says:

    I’m new to the MySQL Cluster software and ran through this tutorial with 4 Ubuntu 12.04 virtual machines. I had to use non-root credentials for the SSH access to get it to complete. I have a few questions.

    Should the installer add the startup scripts to the bin directory so they are globally available?

    What is the default root password used by the installer to login to one of the mysql servers?

    • andrew says:

      Hi Michael,

      > I’m new to the MySQL Cluster software and ran through this tutorial with 4 Ubuntu 12.04 virtual machines. I had to use non-root credentials for the SSH access to get it to complete. I have a few questions.
      >
      > Should the installer add the startup scripts to the bin directory so they are globally available?

      No – the installer’s life ends once your Cluster is up and running, after that you use the normal MySQL Cluster commands (such as ndb_mgm) for ongoing management.

      There is a commercial CLI tool (MySQL Cluster Manager) that can create a Cluster and then provide ongoing management. You can see a video of it in use here… http://www.clusterdb.com/mysql-cluster/mysql-cluster-manager-1-2-using-the-new-features/

      > What is the default root password used by the installer to login to one of the mysql servers?

      I believe the password is empty.

      Regards, Andrew.

  23. Bas van den Dikkenberg says:

    Hi,

    great help this install,

    What I am missing is the installation of an init script,

    so that the cluster comes back up when the server is rebooted – or am I wrong?

    Bas

    • andrew says:

      Hi Bas,

      you’re correct that the auto-installer starts up the Cluster but it doesn’t put anything in place to automatically start all of the processes when machines are restarted. Note that you’re given the instructions on exactly how to start each process manually on the final, deployment page of the auto-installer. There is another option if you want things handled automatically and that’s MySQL Cluster Manager, which will automatically restart failed nodes.

      Andrew.

      • steven says:

        Cluster Manager is a paid-for product, however?

        • andrew says:

          Correct – MCM is part of the commercial version of MySQL Cluster – MySQL Cluster Carrier Grade Edition. Like other pieces of Oracle software, you can download it from edelivery.oracle.com and try it out for 30 days before deciding whether the added value justifies the purchase.

  24. NGnasso says:

    Hi Andrew,
    2 simple questions. Do I have to install a MySQL Server on the SQL nodes before unpacking mysql-cluster on them? And may I unpack mysql-cluster into the same MySQL server dir?

    Thanks in advance, i’m new of it.

    • andrew says:

      NGnasso,

      MySQL Server is included within the MySQL Cluster package and you should only use that version (i.e. never use a mysqld unless it comes from a MySQL Cluster package that you’ve installed).

      Andrew.

  25. Camille says:

    Hi Andrew

    First of all, thanks for all your articles !

    Is it only me, or does the MySQL-Cluster-Auto-Installer package not exist anymore?

  26. Michael says:

    Andrew –

    Thank you for the article, it has been very helpful. I have pulled 4 CentOS 6.4 servers that I want to run the cluster on. I have untarred the package on all 4 servers, and can get ndb_setup running easily enough. I walk through the installation options, and when I go to deploy and start (as root, not sure that matters), I end up getting an error that prevents the cluster from starting:

    Command '/usr/local/mysql/bin/mysqld --no-defaults --datadir=/var/lib/mysql-cluster-data/55 --tmpdir=/var/lib/mysql-cluster-data/55/tmp --basedir=/usr/local/mysql/ --port=3306 --ndbcluster --ndb-nodeid=55 --ndb-connectstring=10.66.230.124:1186,10.66.230.177:1186, --socket=/var/lib/mysql-cluster-data/55/mysql.socket', running on 10.66.230.124 exited with 1: 2013-06-27 18:43:34 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).

    2013-06-27 18:43:34 29060 [ERROR] Fatal error: Please read “Security” section of the manual to find out how to run mysqld as root!

    2013-06-27 18:43:34 29060 [ERROR] Aborting

    2013-06-27 18:43:34 29060 [Note] Binlog end
    2013-06-27 18:43:34 29060 [Note] /usr/local/mysql/bin/mysqld: Shutdown complete

    I looked at the manual in the Security section but I must be missing something as I don’t see any pointers to the answer. Would you possibly be able to help?

    Thanks, Michael

  27. Amos says:

    Hi Andrew,
    I’ve run into the same problem as Michael. At the beginning of the auto-install procedure I sshed to my 4 hosts as root. Then I got:

    [ERROR] Fatal error: Please read “Security” section of the manual to find out how to run mysqld as root!

    After that I tried to ssh as a non-root user. But I ran into some privilege problems:
    [Errno 13] Permission denied

    I believe there is something I misunderstand.

    • andrew says:

      Hi Amos,

      what are you running when you get the Permission Denied error (e.g. if you’re trying to connect the mysql client, what exactly are you typing)?

      If you’re still in the auto-installer phase then note that the ssh user you specify should have read/write permissions for whatever datadir is being recommended/set/accepted.

      If connecting with the mysql client, note that while the ssh (e.g. Linux) user shouldn’t be root, the MySQL user can be root. Also, if this is an error you see when connecting the mysql client from a remote host, you need to make sure that the user privileges are set up correctly within the MySQL Servers.
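      For example, something along these lines run against each of the MySQL Servers in the Cluster (the user name, password and address range are just placeholders):

      mysql -h 127.0.0.1 -P3306 -u root -e "GRANT ALL ON *.* TO 'appuser'@'192.168.1.%' IDENTIFIED BY 'secret';"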

      Andrew.

  28. Corne says:

    I also tried with freeSSHd but the version I downloaded didn’t work. The freeSSHd software was running on port 22 but when I tried to start the SSH server on port 22, it couldn’t. I changed the port to 10 but the MySQL installer didn’t like that. I downloaded another SSH server and it worked like a charm.

    On the deploy cluster screen: I have 3 machines to use for the cluster, running Windows 7 64-bit. The deploy doesn’t want to work; I get “files missing” errors. Do I need to do some setup before I start the auto installer? I tried to work around this error by manually creating the directories and copying files so that the files exist. I tried the deploy cluster again; then it complained about “closing attribute missing”. What am I doing wrong?
    What are the steps to execute before starting the auto installer (if any)? This is for the MySQL Cluster 7.3 commercial release – but it is for a test bed system.

    • andrew says:

      Corne,

      could you please raise a bug report on http://bugs.mysql.com/report.php with as much information as possible (set the category to “MySQL Cluster: Configurator”).

      Thanks, Andrew.

    • mm says:

      Can you please tell us which SSH server it “worked like a charm” with?
      I tried freeSSHd, and now Bitvise: and it does not work.
      I specified the account/password in the cluster installer and it correctly logs on to the other Windows machine (I can see it in the SSH server logs), but it gets an “Unable to create directory” error over and over.
      BTW: in the MySQL Cluster installer, at the first step: if you FIRST enter another node’s IP and THEN change the auth type to user/password, then the wizard is unable to fetch defaults from the remote server. When you first change the auth type and then type the node IP, it fetches the defaults correctly. However, it still ends up with the “cannot create directory” error when trying to deploy. Looks like a bug to me.

      • andrew says:

        mm – what directories are you specifying for the installation directory and datadir for the target host? Does the parent directory exist and does the user that you’re using for SSH have the required permissions to read/write to them?

        Andrew.

        • Khalil says:

          Hi Andrew,

          how can we create the users for ssh? I’m using MySQL Cluster on Ubuntu and I installed paramiko but I get a Permission Denied error!
          I do not know how to create ssh users.

          • andrew says:

            Hi Khalil,

            the simplest way is to just run an ssh client from the machine running the auto-installer and use it to connect to each of the target hosts – use a username/password (not specific to ssh) that exists on the target machines.

            Andrew.

  29. chandrasekar says:

    Hi Andrew,

    I am getting the same error even though the tar ball is unpacked.

    Cannot locate ndb_mgmd in /usr/local/bin/[‘bin’, ‘scripts’, ”, ‘../scripts’] on host 10.0.0.162…

    May I know the prerequisites and what things I have to do before running the MySQL Cluster installer?

    Thanks in advance!!!

    Chandrasekar

    • andrew says:

      Chandrasekar,

      it looks like you’ve left the default install directory of /usr/local/bin – is this where you’ve extracted the tar ball? If not then you should change the directory to where the Cluster files have been extracted (you do this right after the hosts have been auto-detected).

      Andrew.

      • chandrasekar says:

        Andrew,

        I changed the directory to where the cluster files were extracted. Even though I changed the directory, it’s still showing the cannot locate ndb_mgmd error…

        Here the things which I have done…

        I have three local IPs: 10.0.0.200, 10.0.0.201, 10.0.0.202,
        and I copied the zip files and extracted them. The username and password are the same on all the host systems and I changed the directory paths just as you showed in the Cluster installer video… yet I am still getting the Cannot locate ndb_mgmd error. Please help me with this.

        Thanks,
        Chandrasekar

        • andrew says:

          Chandrasekar,

          1. Was the auto-installer able to fill in the correct details about the target hosts’ operating system, memory etc.?
          2. Can you please provide the location and contents of the folders (on the target hosts) where you’ve extracted the tar ball?
          3. Can you please paste in the exact path you’re specifying in the auto-installer?

          Thanks, Andrew.

          • chandrasekar says:

            Andrew,

            MySQL Cluster install directory: /home/chandru/mysql_cluster/mysql-cluster-linux64
            MySQL Cluster data directory: /home/chandru/My_Cluster

            Location of the extracted tar ball: /home/chandru/mysql_cluster

            This is the path I specified…

            Two of the hosts have i5 processors and one has an i7 processor; the rest of the hardware is the same. Will that cause a problem?

            Thanks,
            Chandrasekar

          • andrew says:

            What files/directories are in /home/chandru/mysql_cluster/mysql-cluster-linux64 ?

            No problem mixing different CPUs (the only issue comes if you mix Intel with SPARC).

            Andrew.

      • steven says:

        I get this from the rpm install.

        The manual says all I need is the server rpm?

  30. Jesse says:

    I’m running into the same issues as the others that have commented.

    The first thing I’d like to tackle is running the installer as root. For some reason the deployer isn’t adding the default --user=mysql, so it tries to start mysqld as the root user.

    I can’t get the installer to run without root. Some bind address issue when trying to start it up as the mysql user (and myself).

    Can reference same error that Michael posted on June 28, 2013 at 1:37 pm.

  31. Kim says:

    Hi. I’m new to clustering and I’m doing a project on cluster databases. I want to make use of MySQL Cluster. I’m using it for a small-scale database and this is my plan:

    5 node:
    1 management node
    2 SQL node
    2 API node.

    My questions are:
    1) Is my plan for the node process alright?
    2) What should I do when I got the error “Failed to allocate node id…”?
    3) Is it a requirement to use multi-threaded data node?
    4) Where do I place my web server page for the user to access the database?

    Please reply. Thank you so much.

    • andrew says:

      Hi Kim,

      > 1) Is my plan for the node process alright?

      You don’t have any data nodes in your list. If you want fault tolerance then you should have at least 2 data nodes, 2 MySQL Servers and 1 Management Node. You should also have an unused API slot that can be used for restoring backups. To be fully fault tolerant, these should be spread over 3 hosts, making sure that the MySQL Servers aren’t on the same machine and the same for the data nodes. The management node should not be on the same host as any of the data nodes.

      > 2) What should I do when I got the error “Failed to allocate node id…”?

      Show me your config file and how you’re starting the process and I’ll take a look.

      > 3) Is it a requirement to use multi-threaded data node?

      No – only adds value if you want to exploit multiple threads.

      > 4) Where do I place my web server page for the user to access the database?

      From a MySQL Cluster perspective, this can be anywhere you like – just make sure that the user privileges allow your user to connect from that host (this is generic MySQL behaviour – nothing special for Cluster, except that you should set it up in all of the MySQL Servers, or have the user privileges stored in Cluster).

      Andrew.

      • Kim says:

        Error message:
        [ndbd] ERROR -- Failed to allocate nodeid, error: 'Error: Could not alloc node id at 172.16.90.21 port 1186: Id 1 already allocated by another node.'

        Config file:
        #
        # Configuration file for MyCluster
        #

        [NDB_MGMD DEFAULT]
        Portnumber=1186

        [NDB_MGMD]
        NodeId=49
        HostName=172.16.90.21
        DataDir=/home/student/MySQL_Cluster/49/
        Portnumber=1186

        [TCP DEFAULT]
        SendBufferMemory=4M
        ReceiveBufferMemory=4M

        [NDBD DEFAULT]
        BackupMaxWriteSize=1M
        BackupDataBufferSize=16M
        BackupLogBufferSize=4M
        BackupMemory=20M
        BackupReportFrequency=10
        MemReportFrequency=30
        LogLevelStartup=15
        LogLevelShutdown=15
        LogLevelCheckpoint=8
        LogLevelNodeRestart=15
        DataMemory=405M
        IndexMemory=65M
        MaxNoOfTables=4096
        MaxNoOfTriggers=3500
        NoOfReplicas=2
        StringMemory=25
        DiskPageBufferMemory=64M
        SharedGlobalMemory=20M
        LongMessageBuffer=32M
        MaxNoOfConcurrentTransactions=16384
        BatchSizePerLocalScan=512
        FragmentLogFileSize=64M
        NoOfFragmentLogFiles=16
        RedoBuffer=32M
        MaxNoOfExecutionThreads=2
        StopOnError=false
        LockPagesInMainMemory=1
        TimeBetweenEpochsTimeout=32000
        TimeBetweenWatchdogCheckInitial=60000
        TransactionInactiveTimeout=60000
        HeartbeatIntervalDbDb=15000
        HeartbeatIntervalDbApi=15000

        [NDBD]
        NodeId=1
        HostName=172.16.90.24
        DataDir=/home/student/MySQL_Cluster/1/

        [NDBD]
        NodeId=2
        HostName=172.16.90.25
        DataDir=/home/student/MySQL_Cluster/2/

        [MYSQLD DEFAULT]

        [MYSQLD]
        NodeId=59
        HostName=172.16.90.22

        [MYSQLD]
        NodeId=61
        HostName=172.16.90.23

        • andrew says:

          Kim,

          you’ve configured the data node with a node-id to be on 172.16.90.24 but then you’re trying to start it on 172.16.90.21. Either update your config file (and start the ndb_mgmd again with the --initial option) or start the data node on the correct machine.

          Andrew.

          • Kim says:

            Hi.

            1) I can’t seem to set node id 1 to be used on 172.16.90.21, which is my management node. The data node, 172.16.90.24, is still taking node id 1. However, I tried editing the config.ini file and used the “ndb_mgmd --initial” command but I received a new error message:
            “Command ……. running on 172.16.90.22 exited with 1:
            Fatal error: Could not find my_print_defaults.”

            2) What do you mean by starting the data node on the correct machine? My management node is on 172.16.90.21. What I did was set up the processes in the Auto Installer and click “Deploy and start cluster.”

            Thank you.

          • andrew says:

            Hi Kim,

            1) This hints that you haven’t pointed the auto-installer to the correct directory holding the MySQL Cluster installation on the machine(s) that will be running the MySQL Server(s).
            2) Again, check that the correct installation folder is being pointed to. Could you paste in the folder you specify in the auto-installer and the contents of that directory on the target machine?

            Andrew.

  32. Rommel says:

    How do I install all the prerequisites for CentOS / Oracle Linux? Every time I go and try to automate this process I get a million dependency problems, one of which is with Paramiko, and I can’t get that up and running on my CentOS system. Could you guide me in the right direction here please?

  33. Tomas says:

    I cannot put my hands on the auto-installer as http://labs.mysql.com/ doesn’t provide an entry point for the MySQL Cluster Auto-Installer.
    The only things I got were Multi-Source Replication, Hadoop Applier, etc.
    Did I do anything wrong?

  34. Alec says:

    Hi. I’m new to MySQL Cluster and I have been able to start a cluster using the auto-installer. Currently, I am testing the software to get my head around how to use and maintain it. I have created a cluster with 4 Linux hosts: 2 acting as my data nodes and the other 2 each hosting a management node and an SQL node. The cluster starts fine from the auto-installer and I am able to shut down and restart the cluster from a terminal. The issue I am having, however, is that the data nodes do not replicate as the NDBCLUSTER engine isn’t listed or available under SHOW ENGINES. Both my.cnf files contain the ndbcluster option. Do you know where in the process of setting this cluster up I have gone wrong or what I have missed, and what I could do to resolve it please?

    Thank you

    Alec

    • andrew says:

      Hi Alec,

      for your mysqld’s, are you including the connect-string for the 2 management nodes? e.g. in the my.cnf add ndb-connectstring=192.168.0.10:1186,192.168.0.11:1186
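      As a minimal example, the [mysqld] section of each SQL node’s my.cnf might then contain something like this (with your own management node addresses):

      [mysqld]
      ndbcluster
      ndb-connectstring=192.168.0.10:1186,192.168.0.11:1186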

      Andrew.

      • Alec says:

        Hi Andrew,

        Thanks for the reply.

        I am including the connect strings in both my.cnf files. This was done by the auto-installer. All nodes connect to the cluster, as shown by “ndb_mgm -e show”.

        I have looked all over for little bits of information online that could point me in the right direction. In the my.cnf, I have “ndbcluster=on”, which I believe is the same as having the “ndbcluster” option that many recommend including. To be on the safe side I used that option as well at one point, but no change. One thing I have noticed, but I’m not sure if it’s relevant, is that I can stop the cluster, and when I go to restart it by first starting the management nodes then the data nodes, the SQL/API nodes start automatically, so I don’t really have the option to use --ndbcluster to possibly enable ndbcluster.

        Any other ideas/suggestions would be most appreciated.

        Alec.

        • andrew says:

          Alec,

          could it be that you already have a “regular” MySQL service running on the machine and that it’s actually that which you’re connecting to when you run the mysql client? Explicitly provide the options to force it to connect to the mysqld’s in the Cluster. Assuming that you stuck with the default port, run mysql -h 127.0.0.1 -P3306

          Andrew.

          • Alec says:

            Andrew,
            I have thought that this could be the case from what I’ve read. I’ll have a closer look and let you know.

            Alec

          • Alec says:

            Andrew,

            quick update.

            Started from scratch with my hosts for my own benefit. Got it working after setting a cluster up manually and then got it working eventually after the autoinstaller. It was probably a regular mysql service interfering and possibly my inexperience with MySQL and Linux but I can push further with it now. That command you posted helped. Thank you for your time and help.

            Alec

  35. Bob says:

    I was able to get MySQL Cluster up and running with the auto installer but I’m unable to log into the database cluster to create DBs or tables. If I try to log in from the command line, I get “host is not allowed to connect to this MySQL server”. Using a client from a Windows machine, it tells me that the login is invalid, and I’ve tried root with no password.

  36. Max H says:

    Ok. I’m stumped with a final cluster deployment error. I’m on Debian 7.1 trying to install the cluster 7.3.3 (with its included mysql 5.6 server). I’ve tried installing from both the deb package and the linux source code packages, both times on a fresh install of debian. I go through the steps and get to the Deploy and Start Cluster button. The Management and Multithread Data Nodes all appear to be connected, but I get an error every time when it tries to start the SQL nodes. The error is:

    Command `nohup /usr/local/mysql-cluster-gpl-7.3.3-linux-glibc2.5-i686/bin/mysqld --defaults-file=/home/admin/mysql_cluster_data/53/my.cnf' exited with 1:
    nohup: appending output to `nohup.out'
    2013-12-11 12:03:15 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
    2013-12-11 12:03:15 7864 [ERROR] Fatal error: Please read "Security" section of the manual to find out how to run mysqld as root!

    I’ve tried putting the data node directories both in the default /root/mysql_cluster folder as well as in the /home/admin/mysql_cluster_data folder, to no avail. I cannot find any documentation on how to fix this specific error. Anyone have a clue as to what I’m doing wrong?

    Configuration is all on one machine at the moment (MGMT, multi-threaded data nodes, SQL nodes) till I can figure out what I’m doing wrong. All my.cnf files are as generated by the ndb_setup.py script.

    • Max H says:

      Oh, and FYI, during the fresh install of the Debian server each time, I installed only the desktop environment and the basic system files. No database was installed. And yes, I did run apt-get install python python-paramiko python-crypto libaio1 to add those packages before running the python script for the cluster installer.

    • andrew says:

      I always use the Linux binary tar ball (not the source version) – have you tried that?

      Andrew.

      • Max H says:

        Hi Andrew. Yes, I tried it both with the .deb package and the linux generic tar.gz (non source) versions with the same results.

        I attempted installation again today and received the following error message on attempting to deploy and start the cluster:

        Command `/opt/mysql/server-5.6/bin/ndbmtd --ndb-nodeid=2 --ndb-connectstring=', running on 192.168.1.201 exited with 1:
        Unable to connect with connect string: nodeid=2,localhost:1186
        Retrying every 5 seconds. Attempts left: 12 11 10 9 8 7 6 5 4 3 2 1, failed.
        2014-01-02 14:45:25 [ndbd] ERROR -- Could not connect to management server, error: ''

        What do you need to know for troubleshooting?

  37. Kim says:

    Hi, it’s me again. I have checked the installation folders. I think they are correct but I still can’t manage to solve the “Fatal error: Could not find my_print_defaults.” problem.

  38. Cedric says:

    Hi, I’m Cedric here. I’m new to this MySQL Cluster, and I have a problem with my cluster. I tried to deploy and run the cluster, but it showed me this: “[Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).” I would appreciate it if you could help me, thank you.

    • andrew says:

      Hi Cedric,

      you can just ignore that warning.

      Andrew.

      • Cedric says:

        Hi Andrew, Cedric here. Sorry to bother you. When this warning appeared my SQL nodes were offline. Is there a way to bring them up? Here’s the warning for more info:

        2014-01-07 11:02:22 0 [Warning] TIMESTAMP with implicit DEFAULT value is deprecated. Please use --explicit_defaults_for_timestamp server option (see documentation for more details).
        2014-01-07 11:02:22 17058 [Warning] Buffered warning: Changed limits: max_open_files: 1024 (requested 5000)

        2014-01-07 11:02:22 17058 [Warning] Buffered warning: Changed limits: table_cache: 431 (requested 2000)

        2014-01-07 11:02:22 17058 [Note] Plugin ‘FEDERATED’ is disabled.
        2014-01-07 11:02:22 17058 [Note] NDB: Changed global value of binlog_format from STATEMENT to MIXED
        2014-01-07 11:02:22 17058 [Note] NDB: NodeID is 51, management server ‘172.16.90.21:1186’
        2014-01-07 11:02:23 17058 [Note] NDB[0]: NodeID: 51, all storage nodes connected
        2014-01-07 11:02:23 17058 [Warning] NDB: server id set to zero – changes logged to bin log with server id zero will be logged with another server id by slave mysqlds
        2014-01-07 11:02:23 17058 [Note] Starting Cluster Binlog Thread
        2014-01-07 11:02:23 17058 [Note] InnoDB: The InnoDB memory heap is disabled
        2014-01-07 11:02:23 17058 [Note] InnoDB: Mutexes and rw_locks use InnoDB’s own implementation
        2014-01-07 11:02:23 17058 [Note] InnoDB: Compressed tables use zlib 1.2.3
        2014-01-07 11:02:23 17058 [Note] InnoDB: Using Linux native AIO
        2014-01-07 11:02:23 17058 [Note] InnoDB: Not using CPU crc32 instructions
        2014-01-07 11:02:23 17058 [Note] InnoDB: Initializing buffer pool, size = 128.0M
        2014-01-07 11:02:23 17058 [Note] InnoDB: Completed initialization of buffer pool
        2014-01-07 11:02:23 17058 [Note] InnoDB: Highest supported file format is Barracuda.
        2014-01-07 11:02:23 17058 [Note] InnoDB: 128 rollback segment(s) are active.
        2014-01-07 11:02:23 17058 [Note] InnoDB: Waiting for purge to start
        2014-01-07 11:02:23 17058 [Note] InnoDB: 5.6.14 started; log sequence number 1627375
        2014-01-07 11:02:23 17058 [Note] Server hostname (bind-address): ‘*’; port: 3306
        2014-01-07 11:02:23 17058 [Note] IPv6 is available.
        2014-01-07 11:02:23 17058 [Note] – ‘::’ resolves to ‘::’;
        2014-01-07 11:02:23 17058 [Note] Server socket created on IP: ‘::’.

        • Cedric says:

          Sorry Andrew, but I have one more thing to ask: if I am to make any changes, should I make the changes under my installation files, the files I used to install my SQL nodes?

          • andrew says:

            After you’ve used the auto-installer, you can go ahead and make cluster-wide changes to the config.ini file (the auto-installer would have shown you where that’s located (on the page where you start the Cluster)). The mysqld-specific configuration parameters are specified on the command-line (the same auto-installer page shows you them) – for convenience, copy these to a my.cnf file for each mysqld and then point to them using the --defaults-file option when you manually restart your mysqld processes.
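            As a sketch (the port, path and node-id are placeholders – copy the exact options from the deployment page):

            mysqladmin -h 127.0.0.1 -P3306 -u root shutdown
            mysqld --defaults-file=/home/billy/MySQL_Cluster/53/my.cnf &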

            Andrew.

          • Cedric says:

            Andrew, for the config.ini part, are you referring to the config.ini file on the management node, or on the SQL nodes?

          • andrew says:

            The config.ini file(s) are for the management node(s).

            Andrew.

        • andrew says:

          Hi Cedric,

          everything in the output you’ve provided so far looks ok – if there’s an error then the evidence must be further down in the output.

          Andrew.

        • Cedric says:

          Andrew, sorry to bother you, but where do you apply the
          –defaults-file option? I typed in my command-line: mysql –default-file=my.cnf , but it says unknown variable, so how do I use it?

          • Cedric says:

            Sorry Andrew, but now I typed in this command in my command-line:mysql -p –defaults-file=my.cnf, but now it says: ERROR 1049 (42000): Unknown database ‘–defaults-file=my.cnf’.

          • andrew says:

            Try mysql -p --defaults-file=my.cnf rather than mysql -p –defaults-file=my.cnf.

  39. Mark says:

    When I “deploy and start cluster” … I get error 13 Permission Denied.
    Any advice?

    Thanks
    M

    • andrew says:

      Mark,

      make sure that the user (the one you specify in the SSH section on the first screen of the wizard) has permissions to read and write to the directories (on the target hosts) that are being configured in the auto-installer.

      Andrew.

  40. Kris says:

    Hello Andrew,

    I’m having problems running the web interface. Firewalls are off now. I’m using CentOS 6 and I’m unable to browse to the link using my local IP. Did I miss something? Thank you!

    MySQL-Cluster-client-gpl-7.3.3-1.el6.x86_64.rpm
    MySQL-Cluster-embedded-gpl-7.3.3-1.el6.x86_64.rpm
    MySQL-Cluster-shared-gpl-7.3.3-1.el6.x86_64.rpm
    MySQL-Cluster-devel-gpl-7.3.3-1.el6.x86_64.rpm
    MySQL-Cluster-server-gpl-7.3.3-1.el6.x86_64.rpm
    MySQL-Cluster-test-gpl-7.3.3-1.el6.x86_64.rpm

    /usr/bin/ndb_setup.py
    Running out of install dir: /usr/bin
    Starting web server on port 8081
    deathkey=867955
    The application should now be running in your browser.
    (Alternatively you can navigate to http://localhost:8081/welcome.html to start it)

    • andrew says:

      Hi Kris,

      the python web server that’s included in the auto-installer is quite strict about how it’s accessed. If you look at the output, you’re being told to connect to http://localhost:8081 – if you use an IP address instead of “localhost” then the web server doesn’t allow it. You can get around this by running /usr/bin/ndb_setup.py -n -N your-ip-address

      You’ll then be able to connect to http://your-ip-address:8081

      Regards, Andrew.

  41. Roman says:

    Hello Andrew,
    I’m trying to set up a cluster with 4 remote hosts via the Auto-Installer and ssh credentials.
    Version: mysql-5.6-cluster-7.3

    When I try to deploy the cluster I get this error:
    Unable to create directory /home/test/MySQL_Cluster/49/ on host 192.168.11.11: SyntaxError: Unexpected token <

    But when I remove one of the 4 hosts, the cluster deploys successfully on the three remaining hosts (it doesn't matter which host I remove).

    It's a problem with the Auto-Installer, not with the hosts.
    How can I get it to work?

    Thanks and forgive for my mistakes 🙂

    • andrew says:

      Hi Roman,

      Could you please paste in the exact list of hosts you’re providing? Also, what path are you specifying for the Cluster binaries and for the datadir? What operating system are the target machines and the auto-installer running on?

      Andrew.

      • Roman says:

        Hosts system: CentOS 5.10
        The auto-installer is running on Windows 7 x64, but I also tried running the auto-installer on CentOS 5.10 – the same problem.

        list of hosts: 192.168.11.10,192.168.11.11,192.168.11.30,192.168.11.31

        Cluster binaries: /opt/mysql_7_3_3/
        datadir: /opt/cluster/

        Both dirs are owned by the user test, which is used for the ssh connection. With root ssh credentials there is the same problem with creating folders.

  42. Daniel Seybold says:

    Hello Andrew,
    thank you for your helpful video about the mysql cluster auto installer.

    I’ve set up a mysql cluster (simple testing) with only one node (just for very raw tests…) on a cloud instance.

    1 Management Node, 2 Multi threaded data nodes, 2 SQL nodes, 3 Api Nodes

    When I’m checking the status of the cluster with ndb_mgm -e show I get the following message:

    Connected to Management Server at: localhost:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)] 2 node(s)
    id=1 @109.231.122.234 (mysql-5.6.15 ndb-7.3.4, Nodegroup: 0, *)
    id=2 @109.231.122.234 (mysql-5.6.15 ndb-7.3.4, Nodegroup: 0)

    [ndb_mgmd(MGM)] 1 node(s)
    id=49 @109.231.122.234 (mysql-5.6.15 ndb-7.3.4)

    [mysqld(API)] 5 node(s)
    id=50 (not connected, accepting connect from 109.231.122.234)
    id=51 (not connected, accepting connect from 109.231.122.234)
    id=52 (not connected, accepting connect from 109.231.122.234)
    id=53 @109.231.122.234 (mysql-5.6.15 ndb-7.3.4)
    id=54 @109.231.122.234 (mysql-5.6.15 ndb-7.3.4)

    In the admin panel in the auto-installer console all API nodes are shown as not connected and I can’t connect to the cluster via the mysql client.

    Do I have to start the API nodes manually or do I have to configure some additional parameters to get the API nodes running?

    Thanks in advance for your help.

    • andrew says:

      Hi Daniel,

      ndb_mgm is showing that you have 2 MySQL Servers in the Cluster and so you should be able to connect to either of those. Unless you changed them, their ports should be 3306 and 3307. What happens if you run mysql -h 127.0.0.1 -P3306 -u root from 109.231.122.234 ?

      Andrew.

  43. Dimitris says:

    Andrew, thank you very much for your efforts. Awesome work!

    Upon successful completion of the installer on two nodes, I just have two questions, I’m new on this, so be kind 🙂

    ***newbie alert***

    1. What can I do on the systems so that the SQL “service” starts automatically on boot?

    2. I do not see the point of the cluster if there is no single “Cluster IP address” for the clients to connect to so that they never lose connectivity if either of the nodes fails.
    If the clients connect to one of my two nodes and this node fails, they lose connectivity. Is there a workaround for this?

  44. amr says:

    Hi,

    I got this error:
    Unable to create directory c:\users\administrator\mysql_cluster\52 err 13 permission denied
    I’m using 2 Windows servers.
    Please help.

  45. Hiromichi says:

    Hi,
    I’m new to MySQL Cluster and I’m having a problem creating a large NDB table.
    The source table s1 has 1M records.

    mysql> create table ndb2 engine=ndbcluster select * from s1;
    ERROR 1297 (HY000): Got temporary error 1217 ‘Out of operation records in local data manager (increase MaxNoOfLocalOperations)’ from NDBCLUSTER

    It works for 30K records, but not for 40K.

    I modified a couple of parameters but they don’t seem to do anything.

    MaxNoOfConcurrentTransactions=200K
    MaxNoOfLocalOperations=220K

    I checked around a bit but couldn’t find any definitive solutions.

    Do you have any ideas?

    Thanks,
    Hiromichi

    • andrew says:

      Did you perform a rolling restart to push out the change to the data nodes:

      1. Edit config.ini file(s)
      2. Restart each ndb_mgmd with the --initial option
      3. Restart the data nodes (in sequence) without the --initial option
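
      As a rough sketch of those steps from the shell (the node IDs 49, 1, 2 and the config paths are only illustrative; use the ones from your own deployment):

      # after editing config.ini on the management host, restart ndb_mgmd so it re-reads it
      ndb_mgm -e "49 STOP"
      ndb_mgmd --initial --config-file=/var/lib/mysql-cluster/config.ini --config-dir=/var/lib/mysql-cluster
      # then restart each data node in turn, without --initial
      ndb_mgm -e "1 RESTART"
      ndb_mgm -e "2 RESTART"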

      Andrew.

      • Hiromichi says:

        Hi Andrew,

        Thank you for your reply.
        The rolling restart did the trick.
        I also had to increase MaxNoOfConcurrentOperations.

        MaxNoOfConcurrentTransactions=1M
        MaxNoOfLocalOperations=1200K
        MaxNoOfConcurrentOperations=1200K

        Thank you very much for your help.
        Hiromichi

  46. Hiromichi says:

    Hi Andrew,

    I just noticed that an NDB table I created in a previous session
    had persisted to the next session with all the data intact.
    I was expecting the table to lose all its data since it is stored in
    memory. Is this the correct behavior?

    Thanks,
    Hiromichi

    • Hiromichi says:

      p.s.

      I did shut down mysqld, both ndbmtd data nodes and ndb_mgmd
      between the two sessions.

    • andrew says:

      Although the data is held in RAM, it is also asynchronously checkpointed to disk so that the data survives a system shutdown. You can turn that off if you prefer but in general people like to keep hold of the data.
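
      If you really did want a purely in-memory setup, my understanding is that the Diskless option in the [ndbd default] section of config.ini turns disk checkpointing off (with the side effect that backups no longer store any data); a minimal, illustrative fragment:

      [ndbd default]
      NoOfReplicas=2
      Diskless=1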

      Andrew.

      • Hiromichi says:

        Hi Andrew,

        Thanks.
        I was trying to run the cluster with 1 data node but I can’t seem to
        make it work.

        NoOfReplicas=1
        [NDBD]
        NodeId=1
        HostName=127.0.0.1
        DataDir=/home/hwatari/MySQL_Cluster/1/

        #[NDBD]
        #NodeId=2
        #HostName=127.0.0.1
        #DataDir=/home/hwatari/MySQL_Cluster/2/

        The documentation says that this is permitted; am I missing
        something here?

        Thanks,
        Hiromichi

  47. Hiromichi says:

    Hi Andrew,

    I ran the NDB API examples ndbapi_simple.cpp and ndb_api_s_i_ndbrecord/main.cpp against
    2 data nodes. I got the correct result with ndbapi_simple.cpp but I’m getting an
    incorrect result from ndb_api_s_i_ndbrecord/main.cpp (the update of 5 tuples using the
    unique key fails: the table does not get updated at all and no error
    message is generated either).

    Is this a bug, or am I missing something here?

    Thanks,
    Hiromichi

    ======output============================================================

    ATTR1 ATTR2
    0 0 (frag=0)
    1 1 (frag=1)
    2 2 (frag=1)
    3 3 (frag=0)
    4 4 (frag=1)
    5 5 (frag=1)
    6 6 (frag=0)
    7 7 (frag=0)
    8 8 (frag=1)
    9 9 (frag=0)
    ATTR1 ATTR2
    0 0 (should be 10)
    1 1
    2 2 (should be 12)
    Detected that deleted tuple doesn’t exist!
    4 4 (should be 14)
    5 5
    6 6 (should be 16)
    7 7
    8 8 (should be 18)
    9 9

  48. Jon says:

    Hi Andrew

    Using the latest release of the MySQL Cluster Windows installer on Windows Server 2012, I tried the auto-installer and managed to get an SSH session using PuTTY and CopSSH. When it tries to auto-discover the hosts it fails and I get “Garbage received”. Default settings for PuTTY and CopSSH. Any ideas?

    Thanks
    Jon

    • andrew says:

      Hi Jon,

      There seem to be some issues with Windows targets; as a workaround, you can manually specify the host characteristics (OS, memory etc.) and then cut & paste the commands and config files from the final screen.

      Regards, Andrew.

  49. Jon says:

    So, following that approach, I would not need the SSH connection: I’d merely run the auto-installer, wait for it to fail, add the server details, generate the config files and try to load them on the remote server. This would fail, but I’d then have the files to copy and paste.

    Then I would have to start the nodes in sequence. Correct?

    Thanks

    Jon

  50. haruda says:

    hi Andrew,

    I have a problem when my cluster tries to deploy:

    Unable to create directory /usr/local/my_cluster/55/ on host 192.168.100.12: No authentication methods available

    rgds,
    Haruda

  51. Barry says:

    Andrew, many thanks in advance. I’m having a similar issue to others’ which I could not find answered.
    I’m using Bitvise SSH on Windows 2008 R2 servers. Everything goes fine until I ‘Deploy cluster’ or ‘Deploy and start cluster’, when I get the error “Unable to create directory E:/MySQL/Cluster01/Data/49 on host SQWxxxxxx: The request operation failed”. The user I’m using is an administrator, and I also granted full control on E:\MySQL\*.* to ALL users. I also tried creating the folder manually to see if it would go on to the next node. None of that worked. The error is a bit curious to me as the slashes are forward-facing… is that the issue? It did pick up the fact that they are Windows servers. I’m stuck.

  52. Dave Hare says:

    Hi Andrew,

    I am getting an error when starting node 53 (mysqld): Fatal error: Please read “Security” section of the manual to find out how to run mysqld as root!

    command ‘/var/lib/mysql/bin/mysqld --defaults-file=/root/MySQL_Cluster/53/my.cnf’

    -Dave

    I tried to edit the my.cnf in /root/MySQL_Cluster/53/ but when you click deploy it gets overwritten.

  53. Dave Hare says:

    Hi Andrew,

    I am starting over from scratch so I can get into the swing of things and get this installed properly.

    I am getting an error when I try to start the cluster:

    command ‘/usr/bin/mysql_install_db --no-defaults --datadir=/root/MySQL_Cluster/53/ --basedir=/usr/bin/’

    FATAL error: could not find /fill_help_tables.sql

    I have copied help_tables.sql to /usr/bin/ but I still get the error?

    thx

  54. Dave Hare says:

    I had the wrong path in the MySQL install directory. Working now.

  55. Dave Hare says:

    I can get all 4 servers individually to start the cluster with the auto-installer, but when I add all 4 servers to the host list it hangs at 25% when I try to start the cluster. The multi-threaded data nodes in the data layer go yellow and back to red a couple of times but never start (they stay red), and the cluster never gets past 25%. I have restarted the servers and the Cluster installer several times.

  56. Dave Hare says:

    Hi Andrew,

    I am trying to start the cluster manually; below are my configuration files on all 4 servers:

    /etc/my.cnf

    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    user=root
    # Disabling symbolic-links is recommended to prevent assorted security risks
    symbolic-links=0

    default_storage_engine=ndbcluster

    ndbcluster
    ndb-connectstring=xx.xxx.xxx.42

    [mysqld_safe]
    log-error=/var/log/mysqld.log
    pid-file=/var/run/mysqld/mysqld.pid

    [mysql_cluster]
    ndb-connectstring=xx.xxx.xxx.42

    /var/lib/mysql-cluster/config.ini

    [ndb_mgmd default]
    DataDir=/var/lib/mysql-cluster

    [ndbd default]
    NoOfReplicas=2
    DataMemory=256M
    IndexMemory=128M
    DataDir=/var/lib/mysql-cluster

    [ndb_mgmd]
    Nodeid=101
    HostName=xx.xxx.xxx.42

    #[ndb_mgmd]
    #Nodeid=102
    #HostName=xx.xxx.xxx.43

    [ndbd]
    Nodeid=1
    HostName=xx.xxx.xxx.43

    [ndbd]
    Nodeid=2
    HostName=xx.xxx.xxx.44

    [mysqld]
    Nodeid=51
    HostName=xx.xxx.xxx.44

    [mysqld]
    Nodeid=52
    HostName=xx.xxx.xx.45

    Here are the steps I am using to start the cluster:
    1.) on .42 server:

    [root@MySQLCluster1 ~]# ndb_mgmd -f /var/lib/mysql-cluster/config.ini
    MySQL Cluster Management Server mysql-5.6.21 ndb-7.3.7

    2.) on .43 server:

    [root@MySQLCluster2R ~]# ndbd
    2014-11-12 14:47:25 [ndbd] INFO -- Angel connected to 'xx.xxx.xxx.42:1186'
    2014-11-12 14:47:25 [ndbd] ERROR -- Failed to allocate nodeid, error: 'Error: Could not alloc node id at xx.xxx.xxx.42 port 1186: Connection done from wrong host ip xx.xxx.xxx.43.'

    3.) on .42 server:

    [root@MySQLCluster1 ~]# ndb_mgm
    -- NDB Cluster -- Management Client --
    ndb_mgm> show
    Connected to Management Server at: xx.xxx.xxx.42:1186
    Cluster Configuration
    ---------------------
    [ndbd(NDB)] 2 node(s)
    id=1 (not connected, accepting connect from xx.xxx.xxx.42)
    id=2 (not connected, accepting connect from xx.xxx.xxx.42)

    [ndb_mgmd(MGM)] 1 node(s)
    id=101 @xx.xxx.xxx.42 (mysql-5.6.21 ndb-7.3.7)

    [mysqld(API)] 2 node(s)
    id=51 (not connected, accepting connect from xx.xxx.xxx.42)
    id=52 (not connected, accepting connect from xx.xxx.xxx.42)

    Not sure what is wrong. I assume whatever is causing the issue is the same thing causing the message above when I try to use the cluster auto-installer.

    Thanks,
    Dave

  57. Dave Hare says:

    Hi Andrew,

    Any ideas on this? I am still stuck.
    -Dave

    • andrew says:

      In your config.ini file you specify that the data nodes can connect from xx.xxx.xxx.43 and xx.xxx.xxx.44, but ndb_mgm is showing xx.xxx.xxx.42 as the permitted host. It looks like you’re not using the config.ini file that you think you are.

      Andrew.

  58. Dave Hare says:

    Perhaps this will offer a solution to anyone else in this circumstance.
    After countless hours of troubleshooting getting nowhere (see above) I was about to give up. I then deleted everything in /home/admin/MySQL_Cluster, the home folder for the user I used in the MySQL auto-installer, and also deleted ib_logfile* in /var/lib/mysql-cluster. I tried again and the MySQL Cluster with 4 servers is working now.

    I did the above deletion on all 4 servers. I am not exactly sure what was missing, mis-configured, buggy or perhaps corrupt, but this seemed to be the fix as best I can tell.

    • andrew says:

      Yes, it looks like you’d previously run another Cluster and it was using the cached config data from that one. Use ndb_mgmd --initial to make it load the contents of the config.ini file.
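
      For example (a sketch only; the configdir path and node ID are whichever ones your ndb_mgmd was started with):

      # the management server caches the last loaded config as a binary file in its configdir
      ls /var/lib/mysql-cluster/ndb_101_config.bin.*
      # --initial makes it ignore that cache and re-read config.ini
      ndb_mgmd --initial -f /var/lib/mysql-cluster/config.ini --config-dir=/var/lib/mysql-cluster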

  59. Jason says:

    Command `/usr/local/mysql/bin/ndbmtd --ndb-nodeid=1 --ndb-connectstring=192.168.102.215:1186,192.168.102.216:1186,', running on 192.168.102.217 exited with 1:
    2014-12-22 15:23:56 [ndbd] INFO -- Angel connected to '192.168.102.215:1186'
    2014-12-22 15:24:26 [ndbd] ERROR -- Failed to allocate nodeid, error: 'Error: Could not alloc node id at 192.168.102.215 port 1186: Id 1 already allocated by another node.'

    • andrew says:

      Looks like you already have an ndbmtd process with the id of 1 running on 192.168.102.217; try running pkill ndbmtd on that machine first to clean up.
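
      A quick way to check and clean up on that host (sketch):

      pgrep -l ndbmtd   # list any data node processes still running
      pkill ndbmtd      # stop them before retrying the deployment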

      Andrew.

  60. thangict93 says:

    Hi everybody, my name is VietAnh.

    I have been setting up a MySQL Cluster.
    – The architecture includes:
    + MySQL Cluster 7.3.7 32-bit on a 32-bit Linux OS (i386)
    1 management (MGM) server: 2GB RAM
    2 data/API node servers: 6GB RAM each
    + The database includes 30 tables, about 25,500 records per table, and a dump file of 2GB

    I set DataMemory=2056M and IndexMemory=256M. All nodes started; however, my database is large and the data could not all be loaded.
    I then set DataMemory=4096M and IndexMemory=512M, but the data nodes don’t start. I encountered an error; the details are as follows:
    ndbd trying to allocate more than 4g with 32 bit.
    failed to allocate nodeid for api at . Returned error: ‘No free node id found for mysqld(API)’
    How can I set DataMemory >= 4GB on a data node running a 32-bit (i386) OS?

    I hope you can help me to solve this. Thank you in advance!

  61. steven says:

    I have followed this,

    http://dev.mysql.com/doc/refman/5.6/en/mysql-cluster-install-linux-rpm.html
    installing the server RHEL 7 RPM on 4 VMs.

    I kicked off the auto-installer’s ‘deploy and start’, which fails every time, so I have tried kicking off ‘deploy’ only. This seems to work, but I don’t see any signs that the disks have been written to, i.e. /usr/local/bin/ is still empty.

    SSH works; I have tested it.

    I have tried setting SELinux to permissive; no difference.

    If I do a ‘deploy and start cluster’ I get “Cannot locate ndb_mgmd in /usr/local/bin/[‘bin’, ‘sbin’, ‘scripts’, ”, ‘../scripts’] on host 10.100.32.62”.

    Indeed, that dir is empty; it doesn’t look like the auto-installer is actually deploying anything!

    🙁

    • steven says:

      Looking on my data node 2, find shows ndb_mgmd as /usr/sbin/ndb_mgmd.

      So the auto-installer isn’t detecting the layout properly?

      Let’s go and re-run the auto-installer…

      • steven says:

        nope,

        ==========
        Command `/usr/sbin/ndb_mgmd --initial --ndb-nodeid=49 --config-dir=/var/lib/mysql/MySQL_Cluster/49/ --config-file=/var/lib/mysql/MySQL_Cluster/49/config.ini', running on 10.100.32.62 exited with 1:
        MySQL Cluster Management Server mysql-5.6.23 ndb-7.4.4
        2015-03-18 15:42:33 [MgmtSrvr] ERROR -- at line 68: Mixing of localhost (default for [NDBD]HostName) with other hostname(10.100.32.62) is illegal
        2015-03-18 15:42:33 [MgmtSrvr] ERROR -- at line 68: Could not store previous default section of configuration file.
        2015-03-18 15:42:33 [MgmtSrvr] ERROR -- Could not load configuration from '/var/lib/mysql/MySQL_Cluster/49/config.ini'
        ==========

        • steven says:

          I changed the localhost setting to its IP and got a bit further, but now I get this:

          ========
          Cannot locate ndbmtd in /usr/sbin/[‘bin’, ‘sbin’, ‘scripts’, ”, ‘../scripts’] on host 10.100.32.71
          ========

          ?

          • andrew says:

            After the auto-installer has shown you the default paths for each host, make sure that the path for the binaries matches where you’ve actually stored them on the target machines.
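
            For example, running something like this on each target host (just a sketch) shows where the binaries really live, so the right path can be pasted into the installer:

            command -v ndb_mgmd   # e.g. /usr/sbin/ndb_mgmd for an RPM install
            command -v ndbmtd
            command -v mysqld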

            Andrew.

  62. Regan says:

    Hi Andrew,

    I’m trying to install MySQL Cluster 7.4.6-winx64.
    I’m stuck on an error that reads “Error: DLL load failed: The specified module could not be found.” on the remote host. I tried to ignore the error and proceed to the final step, but the error stays the same: “Unable to create directory on host [remote host ip address]: DLL load failed: The specified module could not be found.”
    At first I thought this was an SSH server problem on the remote host,
    but I’ve tried 3 SSH servers (FreeSSHd, OpenSSH, Bitvise SSH) and the error is still the same.

    I did some research related to the error and found that there is a possibility that Python is unable to load the DLL. I’ve been stuck for several days now.

    Are there any suggestions I could try?

    Thanks and Best Regards
    Regan

    • andrew says:

      Sorry, this isn’t an error that I’ve come across; hopefully someone else can comment?

    • arri says:

      I’m facing the same problem on Windows Server 2012 R2.
      I’m using the Cygwin SSH server.

      The frustrating part is that I’ve had it working fine before in a cluster of 4 VMs (all running W2K12R2).

      Any insight would be great!

  63. Sara Borghol says:

    Hello,

    When creating a MySQL Cluster using the Auto-Installer on Hyper-V and trying to add the local machine as a host, it fails to get the resource information for the machine, giving me [error: Number_of_Processors]; it also doesn’t give the right MySQL install directory or MySQL data directory. Is this related to creating the cluster on Hyper-V rather than on a physical machine?

  64. Nezar says:

    Hello,
    I am trying to create my cluster on two hosts: 192.168.1.5 (management node, SQL node, data node) and 192.168.1.6 (SQL node and data node). The cluster deploys successfully, but when I use “Deploy and start cluster” I get this error: SyntaxError: unexpected number
    The error appears just after starting the node 49 service.
    Any suggestions?

    Many thanks
