Installing Percona XtraDB Cluster 5.7 on CentOS 7
In this tutorial, we will show you how we configured Percona XtraDB Cluster 5.7 on CentOS 7 servers. The install takes advantage of the Percona yum repositories.
For this build we require three CentOS 7 systems. In our case this was done using three virtual machines running on a local LAN, as we planned for the cluster to sit behind a load balancer such as HAProxy.
We used the following IPs for the database servers:
DB1 – 10.10.10.211
DB2 – 10.10.10.212
DB3 – 10.10.10.213
Initial Configuration: Firewalld and SELinux
First of all we SSH into all three servers and configure a few options: disabling SELinux and adding the firewalld rules needed for the database cluster to operate.
Disable SELinux permanently:
nano /etc/selinux/config
Enter the following to set SELINUX to disabled:
SELINUX=disabled
Once complete, reboot the server for the change to take effect:
reboot
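If you would rather not reboot straight away, you can also switch SELinux to permissive mode for the current session with setenforce (the config change above still takes effect on the next boot):
setenforce 0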
The next step is to configure the firewall for the database service by opening the relevant ports. On our minimal install of CentOS 7, firewalld was already installed, so we used the following commands:
systemctl start firewalld
firewall-cmd --zone=public --add-service=mysql --permanent
firewall-cmd --zone=public --add-port=3306/tcp --permanent
firewall-cmd --zone=public --add-port=4567/tcp --permanent
firewall-cmd --zone=public --add-port=4568/tcp --permanent
firewall-cmd --zone=public --add-port=4444/tcp --permanent
firewall-cmd --zone=public --add-port=4567/udp --permanent
firewall-cmd --reload
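To confirm the rules were applied, you can list the active configuration for the public zone (standard firewall-cmd usage):
firewall-cmd --zone=public --list-all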
Once we have configured SELinux and the firewall, we're ready to start the install of Percona XtraDB Cluster 5.7, which is a clustered version of MySQL 5.7.
Installing Percona XtraDB Cluster 5.7
NOTE: These commands need to be run on all 3 nodes.
First of all we need to install the Percona yum repository:
yum install https://repo.percona.com/yum/percona-release-latest.noarch.rpm
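If you want a quick sanity check that the repository is now available, you can list the Percona repos yum sees (standard yum usage):
yum repolist | grep -i percona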
As we don't want to use MySQL 8 at this time (some of our scripts are not compatible with it, so we stick with the original build), we need to limit Percona to the original release only:
percona-release enable-only original release
percona-release enable tools release
We can now install the Percona XtraDB Cluster packages:
yum install Percona-XtraDB-Cluster-57
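As a quick check that the expected version was installed, you can print the server version before starting it:
mysqld --version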
Now we start the Percona XtraDB Cluster server:
service mysql start
Copy the automatically generated temporary password for the superuser account:
grep 'temporary password' /var/log/mysqld.log
Log in with the temporary password, change the password for the superuser account, and log out. For example:
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'rootPass';
Query OK, 0 rows affected (0.00 sec)
mysql> exit
Bye
Stop the mysql service:
service mysql stop
At this stage we have three servers, all able to run MySQL and with a root password set. We are now going to build the clustered nodes, configuring them in stages.
Configuring Node #1
On the first server, which in our example is percona1, edit my.cnf:
nano /etc/my.cnf
Remove everything from the original my.cnf file and replace it entirely with the configuration below. There are two lines you need to change to match your own IP addresses: wsrep_cluster_address and wsrep_node_address.
[mysqld]
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains the IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://10.10.10.211,10.10.10.212,10.10.10.213
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This InnoDB autoincrement locking mode is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node 1 address
wsrep_node_address=10.10.10.211
# SST method
wsrep_sst_method=xtrabackup-v2
# Cluster name
wsrep_cluster_name=my_centos_cluster
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"
Start the first node with the following command:
systemctl start mysql@bootstrap.service
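Before going further you can confirm the bootstrapped unit started cleanly:
systemctl status mysql@bootstrap.service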
After the first node has been started, log in to MySQL so the cluster status can be checked:
mysql -u root -p
To check that the node is up and synced:
SHOW STATUS LIKE 'wsrep_local_state_comment';
Check the number of nodes in the cluster, which at this stage should be 1:
show global status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 1     |
+--------------------+-------+
1 row in set (0.00 sec)
These commands can also be run again at the end to confirm everything is correct.
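Another worthwhile check at this point is that the node is part of the primary component; the value should be Primary (this uses a standard wsrep status variable, queried the same way as above):
show global status like 'wsrep_cluster_status';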
To perform State Snapshot Transfer using XtraBackup, set up a new user with proper privileges:
mysql@percona1> CREATE USER 'sstuser'@'localhost' IDENTIFIED BY 's3cret';
mysql@percona1> GRANT PROCESS, RELOAD, LOCK TABLES, REPLICATION CLIENT ON *.* TO 'sstuser'@'localhost';
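If you want to double-check the SST user before moving on, listing its grants is a simple verification:
mysql@percona1> SHOW GRANTS FOR 'sstuser'@'localhost';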
Configuring Node #2
Now we configure the second node:
nano /etc/my.cnf
Now enter the same configuration as Node #1, except with the IP address we used for our second node.
[mysqld]
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://10.10.10.211,10.10.10.212,10.10.10.213
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This InnoDB autoincrement locking mode is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node 2 address
wsrep_node_address=10.10.10.212
# Cluster name
wsrep_cluster_name=my_centos_cluster
# SST method
wsrep_sst_method=xtrabackup-v2
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"
Then we start MySQL, which on this node can be started normally (no bootstrap needed):
systemctl start mysql
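When the second node starts, it performs a State Snapshot Transfer from the first node and joins the cluster. You can confirm this from either node (the same check as before, now expecting a value of 2):
mysql -u root -p -e "show global status like 'wsrep_cluster_size';"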
Configuring Node #3
Finally, we set up the third server.
nano /etc/my.cnf
Again, we’re configuring this in the same way as the previous two nodes, but with the IP address of Node #3:
[mysqld]
datadir=/var/lib/mysql
user=mysql
# Path to Galera library
wsrep_provider=/usr/lib64/libgalera_smm.so
# Cluster connection URL contains IPs of node#1, node#2 and node#3
wsrep_cluster_address=gcomm://10.10.10.211,10.10.10.212,10.10.10.213
# In order for Galera to work correctly binlog format should be ROW
binlog_format=ROW
# MyISAM storage engine has only experimental support
default_storage_engine=InnoDB
# This InnoDB autoincrement locking mode is a requirement for Galera
innodb_autoinc_lock_mode=2
# Node 3 address
wsrep_node_address=10.10.10.213
# Cluster name
wsrep_cluster_name=my_centos_cluster
# SST method
wsrep_sst_method=xtrabackup-v2
# Authentication for SST method
wsrep_sst_auth="sstuser:s3cret"
And again start MySQL:
systemctl start mysql
If all the nodes start correctly, you should be able to confirm this on any of them with:
mysql> SHOW STATUS LIKE 'wsrep_local_state_comment';
+---------------------------+--------+
| Variable_name             | Value  |
+---------------------------+--------+
| wsrep_local_state_comment | Synced |
+---------------------------+--------+

mysql> show global status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.00 sec)
The next time you restart the first server you no longer need to bootstrap MySQL, provided at least one other node in the cluster is still running.
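For example, to move the first node off the bootstrap unit once the other nodes are up (this assumes at least one other node stays online while node 1 is stopped):
systemctl stop mysql@bootstrap.service
systemctl start mysql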
Testing the Percona setup
To test replication, let's create a new database on the second node, create a table in that database on the third node, and then add some records to the table on the first node.
1. Create a new database on the second node:
mysql@percona2> CREATE DATABASE percona;
Query OK, 1 row affected (0.01 sec)
2. Create a table on the third node:
mysql@percona3> USE percona;
Database changed
mysql@percona3> CREATE TABLE example (node_id INT PRIMARY KEY, node_name VARCHAR(30));
Query OK, 0 rows affected (0.05 sec)
3. Insert records on the first node:
mysql@percona1> INSERT INTO percona.example VALUES (1, 'percona1');
Query OK, 1 row affected (0.02 sec)
4. Retrieve all the rows from that table on the second node:
mysql@percona2> SELECT * FROM percona.example;
+---------+-----------+
| node_id | node_name |
+---------+-----------+
|       1 | percona1  |
+---------+-----------+
1 row in set (0.00 sec)
This simple procedure confirms that all nodes in the cluster are synchronized and replicating as intended.
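If you want to tidy up after the test, dropping the database on any one node will replicate to the others in the same way:
mysql@percona1> DROP DATABASE percona;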
I hope you find this guide helpful. Many thanks to the Percona documentation and a few other resources that helped get this working.
One thing to consider next is how you can use this in production. I am going to test both HAProxy and ProxySQL and will provide a post on this shortly. So why do we need HAProxy or ProxySQL? Currently you have three different IPs you can connect to with MySQL, and we want a single IP that can spread the load across the three servers (or more).
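As a rough idea of what that will look like, here is a minimal HAProxy sketch, assuming HAProxy runs on a separate host and listens on a hypothetical address of 10.10.10.210 (the follow-up post will cover this properly):
# Fragment of /etc/haproxy/haproxy.cfg -- illustrative only
listen mysql-cluster
    bind 10.10.10.210:3306
    mode tcp
    balance roundrobin
    server db1 10.10.10.211:3306 check
    server db2 10.10.10.212:3306 check
    server db3 10.10.10.213:3306 check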