MySQL data replication between virtual machines via SDN | Red Hat Developer (2024)

A software-defined networking approach to database replication between VMs in an OpenShift cluster

July 1, 2024

Kalyanaraman Narayanan

Related topics:
Databases, Virtualization
Related products:
Red Hat OpenShift, Red Hat OpenShift Container Platform

In today's application landscape, ensuring data availability is crucial. For relational databases like MySQL, replication offers a robust way to achieve it. This article explores deploying MySQL replication between virtual machines (VMs) by leveraging software-defined networking in the Red Hat OpenShift platform. While this post focuses on MySQL running in OpenShift virtual machines, the same principles apply to replicating other databases between OpenShift virtual machines.

Setup details

In this example, we are working with a standard OpenShift cluster with three control plane (master) nodes and two or more worker nodes. Because we are replicating between virtual machines, the worker nodes are bare metal servers. A similar replication setup can be configured in an OpenShift compact cluster with bare metal nodes.

Takeaways

The key takeaways here extend beyond just MySQL and apply to various database technologies. By leveraging Red Hat OpenShift Virtualization, you gain a powerful platform for managing database deployments with replication capabilities. These capabilities mirror those of traditional active/passive database configurations within a cluster, offering:

  • High availability: Replication ensures minimal downtime in case of node failures. The secondary database can be promoted to primary quickly, keeping your applications operational.
  • Near zero recovery point objective (RPO): With proper configuration, data loss between the primary and secondary database can be minimized.
  • Very low recovery time objective (RTO): The failover time from primary to secondary can be very short, ensuring a swift recovery process.

This approach is particularly valuable in scenarios where the database must withstand a node failure. Promoting the secondary to the new primary and bringing production back up can be done quickly within the cluster.

Technical approach

We'll delve into the intricacies of setting up a MySQL replication environment using pod networks in OpenShift. We'll cover detailed steps for:

  1. Creating Red Hat Enterprise Linux VMs and installing MySQL in each VM.
  2. Creating the services and network policies that facilitate communication between the MySQL instances in the OpenShift virtual machines.
  3. Configuring MySQL replication step by step.
  4. Testing the replicated MySQL setup in OpenShift.

Benefits of OpenShift Virtualization templates

Before we dive in, let's highlight the power of templates in OpenShift Virtualization.

OpenShift Virtualization templates offer a powerful approach to provisioning virtual machines within your OpenShift cluster. These pre-configured templates come packed with advantages:

  • Efficiency and consistency: Templates eliminate the need for manual configuration, saving you time and ensuring consistent deployments across your virtual machines. Imagine creating a new RHEL 9 virtual machine with just a few clicks, pre-configured with the necessary settings.
  • Repeatability and standardization: Templates allow you to define a standard configuration for your environment. This ensures all your virtual machines have the same base setup, simplifying management and reducing the risk of errors.
  • Customization flexibility: While templates provide a starting point, they're not rigid. You can easily customize them to your specific requirements, tailoring each VM to your needs.

Create the virtual machines and install MySQL

Set up the source VM:

  1. Create the project mysql-source-project (Figure 2).
  2. Select the Red Hat Enterprise Linux 9 VM template (Figure 3).
  3. Name the VM and select Quick Create VirtualMachine (Figure 4).

Set up the destination VM

  1. Create the project mysql-destination-project (Figure 5).
  2. Select the Red Hat Enterprise Linux 9 VM template (Figure 6).
  3. Name the VM and select Quick Create VirtualMachine (Figure 7).

Next, connect to the VMs via SSH (see Connecting to VMs in Red Hat OpenShift Virtualization), register the VMs with Red Hat, and install the MySQL server in both the source and destination VMs:

  1. Register the VMs with the Red Hat repositories:

    # subscription-manager register
  2. Install mysql-server:

    # dnf install mysql-server

Configure MySQL parameters

  1. In the MySQL source VM, set up the mysql-server.cnf. Here, the server-id (100) is an arbitrary number and 10.0.2.2 is the IP address of the VM as seen from within the VM's operating system.

    [root@rhel9-mysql-source-vm ~]# cat /etc/my.cnf.d/mysql-server.cnf
    #
    # This group are read by MySQL server.
    # Use it for options that only the server (but not clients) should see
    #
    # For advice on how to change settings please see
    # http://dev.mysql.com/doc/refman/en/server-configuration-defaults.html
    # Settings user and group are ignored when systemd is used.
    # If you need to run mysqld under a different user or group,
    # customize your systemd unit file for mysqld according to the
    # instructions in http://fedoraproject.org/wiki/Systemd
    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    log-error=/var/log/mysql/mysqld.log
    pid-file=/run/mysqld/mysqld.pid
    bind-address=10.0.2.2
    server-id=100
    log_bin=/var/log/mysql/mysql-bin.log
    gtid_mode=ON
    enforce-gtid-consistency=ON
    [root@rhel9-mysql-source-vm ~]#
  2. In the MySQL destination VM, set up the mysql-server.cnf. Here, the server-id (101) is an arbitrary number:

    [root@rhel9-mysql-destination-vm ~]# cat /etc/my.cnf.d/mysql-server.cnf
    #
    # This group are read by MySQL server.
    # Use it for options that only the server (but not clients) should see
    #
    # For advice on how to change settings please see
    # http://dev.mysql.com/doc/refman/en/server-configuration-defaults.html
    # Settings user and group are ignored when systemd is used.
    # If you need to run mysqld under a different user or group,
    # customize your systemd unit file for mysqld according to the
    # instructions in http://fedoraproject.org/wiki/Systemd
    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    log-error=/var/log/mysql/mysqld.log
    pid-file=/run/mysqld/mysqld.pid
    server-id=101
    relay-log=/var/log/mysql/mysql-relay.log
    log_bin=/var/log/mysql/mysql-bin.log
    gtid_mode=ON
    enforce-gtid-consistency=ON
    log-replica-updates=ON
    skip-replica-start=ON
    [root@rhel9-mysql-destination-vm ~]#
  3. Start mysqld.service in both VMs:

    # systemctl start mysqld.service
  4. In both VMs, run mysql_secure_installation to set a root password:

    # mysql_secure_installation

Configure MySQL replication

Before we begin, navigate to the details of the rhel9-mysql-destination-vm and take note of the pod name (Figure 8).

In this example, the pod name is virt-launcher-rhel9-mysql-destination-vm-45l4s.

Note: The pod's IP address changes after every reboot. We need a static component to keep replication working after a reboot of the destination VM.

Because only the suffix of the pod's name is expected to change, a wildcard (%) is used.
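To make the naming convention concrete, here is a small shell sketch (using the pod name from this example as an assumed input) that strips the random suffix and builds the host pattern used for the replication account:

```shell
# The virt-launcher pod is named "<stable prefix>-<random suffix>"; only the
# suffix changes across VM restarts. Strip the last dash-separated field to
# get the stable prefix, then append the MySQL '%' wildcard.
pod="virt-launcher-rhel9-mysql-destination-vm-45l4s"
prefix="${pod%-*}"        # drops the trailing random suffix ("-45l4s")
echo "${prefix}-%"        # -> virt-launcher-rhel9-mysql-destination-vm-%
```

The resulting pattern is exactly the host part used when creating replication_user below.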

  1. In the MySQL source VM, create the replication_user and provide the permission:

    mysql> CREATE USER 'replication_user'@'virt-launcher-rhel9-mysql-destination-vm-%' IDENTIFIED WITH mysql_native_password BY 'Test@123';
    Query OK, 0 rows affected (0.01 sec)
  2. Still in the MySQL source VM, grant the replication privilege to replication_user (identified by the destination pod's host pattern):

    mysql> GRANT REPLICATION SLAVE ON *.* TO 'replication_user'@'virt-launcher-rhel9-mysql-destination-vm-%';
    Query OK, 0 rows affected (0.01 sec)
  3. Reload the grant tables in the MySQL database:

    mysql> FLUSH PRIVILEGES;
    Query OK, 0 rows affected (0.01 sec)
  4. Set the instance in MySQL source VM (primary) to read-only state:

    mysql> SET @@GLOBAL.read_only = ON;
    Query OK, 0 rows affected (0.00 sec)
  5. From a bastion host or any other host in the network where oc and virtctl are installed and the cluster is accessible, switch to mysql-source-project:

    [root@ocp4-bastion auth]# oc project mysql-source-project
    Now using project "mysql-source-project" on server "https://api.ocp.example.com:6443".
  6. Using virtctl, expose port 3306 of the MySQL source VM through the service mysql-source-service:

    [root@ocp4-bastion auth]# virtctl expose vm rhel9-mysql-source-vm --port=3306 --target-port=3306 --name=mysql-source-service
  7. Confirm that the service mysql-source-service is created and take note of the service's IP address:

    [root@ocp4-bastion auth]# oc get services
    NAME                                                      TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)           AGE
    service/mysql-source-service                              ClusterIP   172.30.52.255    <none>        3306/TCP          1m
    service/rhel9-mysql-source-vm-nodeport-ssh-service-zdo3   NodePort    172.30.221.246   <none>        22000:31399/TCP   29h
    [root@ocp4-bastion auth]#

In this example, 172.30.52.255 is the cluster IP of mysql-source-service.
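Instead of copying the ClusterIP out of the table by hand, the CLUSTER-IP column can be pulled out with awk. The sketch below runs against a captured sample row so it is self-contained; in a live cluster you would pipe the output of `oc get services` in instead:

```shell
# Sample row captured from the `oc get services` output above.
line='service/mysql-source-service   ClusterIP   172.30.52.255   <none>   3306/TCP   1m'
# CLUSTER-IP is the third whitespace-separated column.
ip=$(printf '%s\n' "$line" | awk '{print $3}')
echo "$ip"    # -> 172.30.52.255
```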

  1. Create an ingress network policy in mysql-source-project in OpenShift to allow ingress traffic only from mysql-destination-project:

    kind: NetworkPolicy
    apiVersion: networking.k8s.io/v1
    metadata:
      name: mysql-nw-policy
      namespace: mysql-source-project
    spec:
      podSelector: {}
      ingress:
        - from:
            - podSelector: {}
              namespaceSelector:
                matchLabels:
                  kubernetes.io/metadata.name: mysql-destination-project
      policyTypes:
        - Ingress

Note: Additional network policies may be needed when integrating the database VM with application pods in other projects.
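For instance, if application pods in another project need to reach the database, an additional policy along these lines could be added. This is a sketch only: the app-project namespace name is a hypothetical placeholder, not part of the example above.

```yaml
# Sketch: admit pods from a hypothetical "app-project" namespace to the
# database pods in mysql-source-project, restricted to the MySQL port.
kind: NetworkPolicy
apiVersion: networking.k8s.io/v1
metadata:
  name: mysql-allow-app
  namespace: mysql-source-project
spec:
  podSelector: {}
  ingress:
    - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: app-project
      ports:
        - protocol: TCP
          port: 3306
  policyTypes:
    - Ingress
```

Because network policies are additive, this can coexist with the mysql-nw-policy created above.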

  1. Log in to the MySQL destination VM and switch the instance (secondary) to read-only mode:

    mysql> SET @@GLOBAL.read_only = ON;
    Query OK, 0 rows affected (0.00 sec)
  2. Configure the replication source using the service IP of the mysql-source-service:

    mysql> CHANGE REPLICATION SOURCE TO
        -> SOURCE_HOST='172.30.52.255',
        -> SOURCE_USER='replication_user',
        -> SOURCE_PASSWORD='Test@123',
        -> SOURCE_AUTO_POSITION=1;
    Query OK, 0 rows affected, 2 warnings (0.07 sec)
  3. Start the replica thread in the MySQL destination VM:

    mysql> START REPLICA;
    Query OK, 0 rows affected (0.02 sec)
  4. Unset the read-only state on both the primary and secondary instances:

    mysql> SET @@GLOBAL.read_only = OFF;
    Query OK, 0 rows affected (0.00 sec)
  5. Inspect the status of the replication in the MySQL destination VM:

    mysql> SHOW REPLICA STATUS\G
    *************************** 1. row ***************************
               Replica_IO_State: Waiting to reconnect after a failed registration on source
                    Source_Host: 172.30.52.255
                    Source_User: replication_user
                    Source_Port: 3306
                  Connect_Retry: 60
    – snip –
               Source_Server_Id: 100
                    Source_UUID: a5133440-1903-11ef-af94-02a4b0000000
    – snip –
    mysql>
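The status output lends itself to a simple health check: replication is only healthy when both the Replica_IO_Running and Replica_SQL_Running fields report Yes. The sketch below parses a captured sample of the \G output (the sample values are assumptions; in a live setup you would feed in `mysql -e 'SHOW REPLICA STATUS\G'` instead):

```shell
# Self-contained sketch: sample SHOW REPLICA STATUS\G lines captured as text.
# A healthy replica reports Yes for both the IO and SQL threads.
status='             Replica_IO_Running: Yes
            Replica_SQL_Running: Yes'
io=$(printf '%s\n' "$status"  | awk -F': ' '/Replica_IO_Running/  {print $2}')
sql=$(printf '%s\n' "$status" | awk -F': ' '/Replica_SQL_Running/ {print $2}')
if [ "$io" = "Yes" ] && [ "$sql" = "Yes" ]; then
  echo "replication healthy"
else
  echo "replication degraded"
fi
```

A check like this can be wrapped in a cron job or probe to alert on broken replication before a failover is needed.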

Test the replication

Follow these steps to test replication in the MySQL source and destination VMs.

In the MySQL source VM

  1. List the databases:

    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    | sys                |
    | test_db_1          |
    | test_may24         |
    +--------------------+
    6 rows in set (0.01 sec)
  2. Switch to the database test_may24:

    mysql> use test_may24;
    Database changed
  3. List the tables in the test_may24 database:

    mysql> show tables;
    +----------------------+
    | Tables_in_test_may24 |
    +----------------------+
    | test_may24_table     |
    +----------------------+
    1 row in set (0.00 sec)
  4. Describe table test_may24_table:

    mysql> desc test_may24_table;
    +----------+--------------+------+-----+---------+-------+
    | Field    | Type         | Null | Key | Default | Extra |
    +----------+--------------+------+-----+---------+-------+
    | name     | varchar(255) | YES  |     | NULL    |       |
    | location | varchar(255) | YES  |     | NULL    |       |
    | number   | int          | YES  |     | NULL    |       |
    +----------+--------------+------+-----+---------+-------+
    3 rows in set (0.00 sec)
  5. Check if there are any records in the table:

    mysql> select * from test_may24_table;
    Empty set (0.00 sec)
  6. Create some records:

    mysql> insert into test_may24_table values ('Ramu','Delhi','01');
    Query OK, 1 row affected (0.02 sec)
    mysql> insert into test_may24_table values ('Raju','Mumbai','02');
    Query OK, 1 row affected (0.01 sec)
    mysql> insert into test_may24_table values ('Ragu','Chennai','03');
    Query OK, 1 row affected (0.01 sec)
  7. Check if the records are created in the table:

    mysql> select * from test_may24_table;
    +------+----------+--------+
    | name | location | number |
    +------+----------+--------+
    | Ramu | Delhi    |      1 |
    | Raju | Mumbai   |      2 |
    | Ragu | Chennai  |      3 |
    +------+----------+--------+
    3 rows in set (0.00 sec)

Because we are replicating all the databases from the source, we should see the test_may24 database, the test_may24_table table, and its records in the destination instance.

In the MySQL destination VM

  1. Check if the test_may24 database is replicated:

    mysql> show databases;
    +--------------------+
    | Database           |
    +--------------------+
    | information_schema |
    | mysql              |
    | performance_schema |
    | sys                |
    | test_db_1          |
    | test_may24         |
    +--------------------+
    6 rows in set (0.01 sec)
  2. Switch to database test_may24:

    mysql> use test_may24;
    Database changed
  3. List the tables in the test_may24 database:

    mysql> show tables;
    +----------------------+
    | Tables_in_test_may24 |
    +----------------------+
    | test_may24_table     |
    +----------------------+
    1 row in set (0.00 sec)
  4. Check if the records are replicated to the destination:

    mysql> select * from test_may24_table;
    +------+----------+--------+
    | name | location | number |
    +------+----------+--------+
    | Ramu | Delhi    |      1 |
    | Raju | Mumbai   |      2 |
    | Ragu | Chennai  |      3 |
    +------+----------+--------+
    3 rows in set (0.00 sec)

Here we can see that the records created in the MySQL source VM are replicated to the MySQL destination VM.

Conclusion

Regardless of the specific database technology you choose, replication within your OpenShift cluster can achieve high availability and disaster recovery. This ensures minimal downtime and data protection, keeping your applications running smoothly.

We hope this article has equipped you with an understanding of how to leverage software-defined networking for database replication between virtual machines in an OpenShift cluster.
