10 October 2012

Server Tip: MySQL Cluster Setup


Introduction
This HOWTO is designed for a classic setup of two servers behind a load-balancer. The aim is to have true redundancy – either server can be unplugged and yet the site will remain up.
Notes:
You MUST have a third server to act as the management node. This node can be shut down after the cluster starts, although I do not recommend doing so (see the extra notes at the bottom of this document for more information). You cannot run a MySQL Cluster with just two servers and have true redundancy.
Although it is possible to set the cluster up on two physical servers, you WILL NOT be able to kill one server and have the cluster continue as normal. For that you need a third server running the management node.
We are going to work with three servers:
node01.example.com 192.168.0.10
node02.example.com 192.168.0.20
node03.example.com 192.168.0.30
Servers node01 and node02 will be the two that end up "clustered". This would be perfect for two servers behind a load-balancer or using round-robin DNS, and is a good replacement for replication. Server node03 needs only minor changes made to it and does NOT require a MySQL install. It can be a low-end machine and can be carrying out other tasks.
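If these hostnames are not resolvable via DNS, add them to /etc/hosts on all three machines. A minimal sketch, assuming the example names and addresses above (adjust to your own network):
192.168.0.10 node01.example.com node01
192.168.0.20 node02.example.com node02
192.168.0.30 node03.example.com node03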
Get the software:
For Generally Available (GA), supported versions of the software, download from the MySQL Cluster downloads page on dev.mysql.com.
Make sure that you select the correct platform (in this case, "Linux - Generic") and then the correct architecture (for Linux this means x86, 32- or 64-bit).
Note: only use MySQL Server executables (mysqld) that come with the MySQL Cluster installation.
STAGE1: Installation of Data and SQL nodes on node01 and node02
On each of the machines designated to host data or SQL nodes (in our case, node01 and node02), perform the following steps as the system root user (a quick verification sketch follows these steps):
  1. Create a new mysql user group, and then add a mysql user to this group:
    shell> groupadd mysql
    shell> useradd -g mysql mysql

  2. Change location to the directory containing the downloaded file, unpack the archive into /usr/local, and create a symlink named mysql pointing to the unpacked directory. Note that the actual file and directory names vary according to the MySQL Cluster version number.
    shell> cd /var/tmp
    shell> tar -C /usr/local -xzvf mysql-cluster-gpl-7.1.5-linux-x86_64-glibc23.tar.gz
    shell> ln -s /usr/local/mysql-cluster-gpl-7.1.5-linux-x86_64-glibc23 /usr/local/mysql
    shell> export PATH=$PATH:/usr/local/mysql/bin
    shell> echo "export PATH=\$PATH:/usr/local/mysql/bin" >> /etc/bash.bashrc
  3. Change location to the mysql directory and run the supplied script for creating the system databases:
    shell> cd /usr/local/mysql
    shell> ./scripts/mysql_install_db --user=mysql
  4. Set the necessary permissions for the MySQL server and data directories:
    shell> chown -R root .
    shell> chown -R mysql data
    shell> chgrp -R mysql .
  5. Copy the MySQL startup script to the appropriate directory, make it executable, and set it to start when the operating system is booted up:
    shell> cp support-files/mysql.server /etc/init.d/mysql
    shell> chmod +x /etc/init.d/mysql
    shell> update-rc.d mysql defaults
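As a quick sanity check after these steps, confirm that the cluster-aware binaries are the ones now on your PATH (a sketch; open a new shell, or re-run the export command, so the PATH change takes effect - the exact version strings vary, but they should mention the NDB/cluster release, here 7.1.5):
shell> which mysql
shell> /usr/local/mysql/bin/mysqld --version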
STAGE2: Installation of Management node on node03
Installation of the management node does not require the mysqld binary; only the MySQL Cluster management server (ndb_mgmd) and management client (ndb_mgm) are needed. I assume that you have placed mysql-cluster-gpl-7.1.5-linux-i686-glibc23.tar.gz in /var/tmp.
As the system root user, perform the following steps to install ndb_mgmd and ndb_mgm on the Cluster management node host (node03); a quick check follows these steps:
  1. Change location to the /var/tmp directory, extract ndb_mgm and ndb_mgmd from the archive, and copy them into a suitable directory such as /usr/local/bin:
    shell> cd /var/tmp
    shell> tar -zxvf mysql-cluster-gpl-7.1.5-linux-i686-glibc23.tar.gz
    shell> cd mysql-cluster-gpl-7.1.5-linux-i686-glibc23
    shell> cp bin/ndb_mgm* /usr/local/bin
  2. Change location to the directory into which you copied the files, and then make both of them executable:
    shell> cd /usr/local/bin
    shell> chmod +x ndb_mgm*
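As a quick check that both tools were copied correctly and are executable, print their versions (a sketch using the standard -V flag; the output should again mention the 7.1.5 cluster release):
shell> ndb_mgm -V
shell> ndb_mgmd -V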
STAGE3: Configuration of Management node
The first step in configuring the management node is to create the directory in which the configuration file will live, and then to create the file itself. For example (running as root):
shell> mkdir /var/lib/mysql-cluster
shell> cd /var/lib/mysql-cluster
shell> vi config.ini
For our setup, the config.ini file should read as follows:
[ndbd default]
NoOfReplicas=2
DataMemory=80M
IndexMemory=18M
[tcp default]
[ndb_mgmd]
hostname=192.168.0.30 # Hostname or IP address of MGM node
datadir=/var/lib/mysql-cluster # Directory for MGM node log files
[ndbd]
hostname=192.168.0.10 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files
[ndbd]
hostname=192.168.0.20 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files
[mysqld]
hostname=192.168.0.10 # Hostname or IP address
[mysqld]
hostname=192.168.0.20 # Hostname or IP address
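Optionally, you can pin each process to a fixed node ID by adding a NodeId line to its section, so that the IDs reported by the management client stay stable across restarts. A sketch for one data node (NodeId is a standard config.ini parameter; the value here simply matches the id=2 shown in the SHOW output later):
[ndbd]
NodeId=2 # fixed node ID for the node01 data node
hostname=192.168.0.10 # Hostname or IP address
datadir=/usr/local/mysql/data # Directory for this data node's data files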
STAGE4: Configuration of Data and SQL nodes
Next, configure the data and SQL nodes by creating a my.cnf file on each of node01 and node02. For example (running as root):
shell> vi /etc/my.cnf
Note :
We show vi being used here to create the file, but any text editor should work just as well.
For each data node and SQL node in our setup, my.cnf should look like this:
[client]
port = 3306
socket = /tmp/mysql.sock
[mysqld]
port = 3306
socket = /tmp/mysql.sock
skip-external-locking
ndbcluster # run NDB storage engine
ndb-connectstring=192.168.0.30 # location of management server
[mysql_cluster]
ndb-connectstring=192.168.0.30 # location of management server
Important :
Once you have started a mysqld process with the ndbcluster and ndb-connectstring options in the [mysqld] section of the my.cnf file as shown previously, you cannot execute any CREATE TABLE or ALTER TABLE statements without having actually started the cluster. Otherwise, these statements will fail with an error.
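For illustration (a sketch; the exact error text depends on the MySQL Cluster version), a statement like the following will fail while the cluster is down:
CREATE TABLE t1 (i INT) ENGINE=NDBCLUSTER;
-- fails with a storage engine error until the cluster is running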
STAGE5: Starting the MySQL Cluster
Starting the cluster is not very difficult after it has been configured. Each cluster node process must be started separately, and on the host where it resides. The management node should be started first, followed by the data nodes, and then finally by any SQL nodes:
  1. On the management host(node03), issue the following command from the system shell to start the management node process:
    shell> ndb_mgmd -f /var/lib/mysql-cluster/config.ini --configdir=/var/lib/mysql-cluster
  2. On each of the data/SQL node hosts, run these commands to start the ndbd and MySQL server processes (see the note on --initial after these steps):
    shell> /usr/local/mysql/bin/ndbd
    shell> /etc/init.d/mysql start
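Note: the very first time a data node is started (and after some kinds of configuration change), it should be started with the --initial option, which initialises the node's data files. Do not use --initial on routine restarts, since it wipes the node's cluster data:
shell> /usr/local/mysql/bin/ndbd --initial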
If all has gone well, and the cluster has been set up correctly, the cluster should now be operational. You can test this by invoking the ndb_mgm management node client. The output should look like that shown here:
node03:~# ndb_mgm
-- NDB Cluster -- Management Client --
ndb_mgm> SHOW
Connected to Management Server at: localhost:1186
Cluster Configuration
---------------------
[ndbd(NDB)] 2 node(s)
id=2 @192.168.0.10 (mysql-5.1.44 ndb-7.1.5, Nodegroup: 0, Master)
id=3 @192.168.0.20 (mysql-5.1.44 ndb-7.1.5, Nodegroup: 0)
[ndb_mgmd(MGM)] 1 node(s)
id=1 @192.168.0.30 (mysql-5.1.44 ndb-7.1.5)
[mysqld(API)] 2 node(s)
id=4 @192.168.0.10 (mysql-5.1.44 ndb-7.1.5)
id=5 @192.168.0.20 (mysql-5.1.44 ndb-7.1.5)
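The same status check can be run non-interactively, which is convenient for scripts or monitoring; ndb_mgm accepts a command via its -e option (a sketch):
node03:~# ndb_mgm -e show
node03:~# ndb_mgm -e "all status"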
STAGE6: Testing the Setup
If everything is OK up to here, it is time to test MySQL. On either node01 or node02, enter the following commands (note that we have no root password yet):
shell> mysql
CREATE DATABASE testdb;
USE testdb;
CREATE TABLE cluster_test (i INT) ENGINE=NDBCLUSTER;
INSERT INTO cluster_test (i) VALUES (1);
SELECT * FROM cluster_test;
You should see 1 row returned (with the value 1).
If this works, go to the other server, run the same SELECT, and check that you see the same row. Then insert a row from that host, go back to the first host, and confirm that it appears there as well. If it does, you have a working cluster. A failover sketch follows.
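Finally, to verify the redundancy claim from the introduction, you can simulate losing a data node. A sketch (node ID 2 matches the SHOW output above; substitute your own IDs). Stop one data node from the management client on node03:
node03:~# ndb_mgm -e "2 stop"
Then, on the surviving server, the SELECT should still return the row:
shell> mysql -e "SELECT * FROM testdb.cluster_test"
To bring the stopped node back into the cluster, run ndbd on it again:
shell> /usr/local/mysql/bin/ndbd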
