Running high-availability WordPress and MySQL on Kubernetes


WordPress is one of the most popular platforms for editing and publishing web content. In this tutorial, I'll step through how to use Kubernetes to build a high-availability (HA) WordPress deployment.


WordPress consists of two main components: a WordPress PHP server and a database that stores user information, posts, and site data. To make the whole application highly available, both of these components need to be fault tolerant.


Running high-availability services can be difficult when hardware and IP addresses change: they are very hard to maintain. With Kubernetes and its powerful networking components, we can deploy a highly available WordPress site and MySQL database without typing (almost) a single IP address.


In this tutorial, I'll show you how to create storage classes, Services, ConfigMaps, and StatefulSets in Kubernetes; how to run highly available MySQL; and how to attach a highly available WordPress cluster to the database service. If you don't have a Kubernetes cluster yet, you can easily spin one up on Amazon, Google, or Azure, or use Rancher Kubernetes Engine (RKE) on any servers.


Architecture Overview


Now let me briefly introduce the technologies we will use and their capabilities:


· Storage for WordPress application files: NFS storage backed by a GCE persistent disk

· Database cluster: MySQL, with xtrabackup keeping the nodes in sync

· Application layer: the WordPress DockerHub image mounted on the NFS storage

· Load balancing and networking: Kubernetes-based load balancers and Service networking


The architecture is as follows:


Creating storage classes, Services, and ConfigMaps in Kubernetes


In Kubernetes, a StatefulSet provides a way to define the order in which pods are initialized. We will use a StatefulSet for MySQL because it ensures that our data nodes have enough time to replicate records from the preceding pods when they start up. The way we configure this StatefulSet lets the MySQL master start before the slave machines, so that when we scale up, clones can be sent directly from the master to the slaves.


First, we need to create a persistent volume storage class and a ConfigMap to apply the master and slave configurations as needed.


We use persistent volumes so that the data in the database is not tied to any particular pod in the cluster. This protects the database from losing data when the MySQL master pod is lost; when that happens, it can reconnect to the slaves with xtrabackup and copy the data from a slave back up to the master. MySQL replication handles master-to-slave replication, while xtrabackup handles slave-to-master replication.


To dynamically provision persistent volumes, we create a storage class using GCE persistent disks, although Kubernetes offers a variety of other storage providers for persistent volumes.
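The storage class manifest itself is not reproduced on this page, so here is a minimal sketch of what storage-class.yaml might look like for GCE persistent disks. The class name, disk type, and zone are assumptions; adjust them to your own environment.

kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: gce-standard          # assumed name; referenced by the volume claims later on
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard           # or pd-ssd for faster disks
  zone: us-central1-a         # assumed zone; match the zone of your cluster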


Create the class and deploy it with the command $ kubectl create -f storage-class.yaml.


Next, we will create the ConfigMap, which specifies the variables set in the MySQL configuration file. Each pod chooses its own configuration from it, and the ConfigMap also gives us a convenient place to manage the potential configuration variables.


Create a YAML file named mysql-configmap.yaml to hold this configuration, as follows:
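The ConfigMap itself is missing from this page. The sketch below is modeled on the standard Kubernetes MySQL StatefulSet example, which this setup appears to follow; treat the exact settings as assumptions. It enables binary logging on the master and makes the slaves read-only.

apiVersion: v1
kind: ConfigMap
metadata:
  name: mysql
  labels:
    app: mysql
data:
  master.cnf: |
    # Applied only on the master: enable binary logging for replication.
    [mysqld]
    log-bin
  slave.cnf: |
    # Applied only on the slaves: reject writes from regular clients.
    [mysqld]
    super-read-only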



Create the ConfigMap and deploy it with the command $ kubectl create -f mysql-configmap.yaml.


Next, we want to set up the Services so that the MySQL pods can communicate with each other, and so that our WordPress pods can talk to MySQL, using mysql-services.yaml. This also starts a Service load balancer for the MySQL service.

With this Service declaration, we lay the groundwork for a multi-write, multi-read cluster of MySQL instances. This configuration is necessary because every WordPress instance can potentially write to the database, so every node must be ready to read and write.
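The Service manifest is not shown on this page either. A sketch of mysql-services.yaml, again modeled on the standard Kubernetes MySQL StatefulSet example, might look like the following: a headless Service that gives each pod a stable DNS name, plus a mysql-read Service that load-balances connections across all pods. Both names are assumptions.

# Headless service that gives StatefulSet members stable DNS names
# (mysql-0.mysql, mysql-1.mysql, ...).
apiVersion: v1
kind: Service
metadata:
  name: mysql
  labels:
    app: mysql
spec:
  clusterIP: None
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql
---
# Regular service that load-balances connections across all MySQL pods.
apiVersion: v1
kind: Service
metadata:
  name: mysql-read
  labels:
    app: mysql
spec:
  ports:
  - name: mysql
    port: 3306
  selector:
    app: mysql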


Run $ kubectl create -f mysql-services.yaml to create the services above.


So far, we have created a storage class for volume claims that hands persistent disks to every container that requests one, we have configured a ConfigMap that sets the variables in the MySQL configuration file, and we have configured a network-layer Service that load-balances requests to the MySQL servers. All of this is just scaffolding for the StatefulSet, where the MySQL servers actually run, and which we'll explore next.


Configuring MySQL with a StatefulSet


In this section, we will write the YAML configuration file for the MySQL instances run as a StatefulSet.


Let's define our StatefulSet first. It will:

· Create three pods and register them with the MySQL Service.

· Define each pod according to the following template:

  - Create an init container for the master MySQL server, named init-mysql:
    - Use the mysql:5.7 image for this container
    - Run a bash script to set up xtrabackup
    - Mount two new volumes for the configuration file and the ConfigMap

  - Create a second init container, named clone-mysql:
    - Use the xtrabackup:1.0 image from the Google Cloud registry for this container
    - Run a bash script to clone the existing xtrabackup data from the previous sibling pod
    - Mount two new volumes for the data and configuration files
    - This container effectively hosts the cloned data so that new slave containers can pick it up

  - Create the main containers for the slave MySQL servers:
    - Create a MySQL slave container and configure it to connect to the MySQL master
    - Create an xtrabackup slave container and configure it to connect to the xtrabackup master

· Create a volume claim template describing each volume as a 10 GB persistent disk.


The following configuration file defines the behavior of the master and slave nodes of the MySQL cluster, provides the bash scripts that run the slave clients, and ensures the master node is healthy before cloning. The slave nodes and the master node each get their own 10 GB volume, requested from the persistent volume storage class we defined earlier.
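The full manifest is not reproduced here because it embeds long bash scripts for init-mysql, clone-mysql, and the xtrabackup sidecar. The abridged skeleton below, modeled on the standard Kubernetes MySQL StatefulSet example, only sketches the structure described above; the "..." strings are placeholders for those scripts, and the API version, image names, and storage class name are assumptions to verify against your cluster.

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mysql
spec:
  serviceName: mysql            # the headless Service defined earlier
  replicas: 3
  selector:
    matchLabels:
      app: mysql
  template:
    metadata:
      labels:
        app: mysql
    spec:
      initContainers:
      - name: init-mysql        # picks master or slave config, assigns a server-id
        image: mysql:5.7
        command: ["bash", "-c", "..."]   # placeholder for the init script
        volumeMounts:
        - name: conf
          mountPath: /mnt/conf.d
        - name: config-map
          mountPath: /mnt/config-map
      - name: clone-mysql       # clones data from the previous pod via xtrabackup
        image: gcr.io/google-samples/xtrabackup:1.0
        command: ["bash", "-c", "..."]   # placeholder for the clone script
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      containers:
      - name: mysql
        image: mysql:5.7
        ports:
        - name: mysql
          containerPort: 3306
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      - name: xtrabackup        # sidecar that serves backups and bootstraps replication
        image: gcr.io/google-samples/xtrabackup:1.0
        ports:
        - name: xtrabackup
          containerPort: 3307
        command: ["bash", "-c", "..."]   # placeholder for the replication script
        volumeMounts:
        - name: data
          mountPath: /var/lib/mysql
          subPath: mysql
        - name: conf
          mountPath: /etc/mysql/conf.d
      volumes:
      - name: conf
        emptyDir: {}
      - name: config-map
        configMap:
          name: mysql
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      storageClassName: gce-standard   # assumed name from storage-class.yaml
      resources:
        requests:
          storage: 10Gi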

Save the file as mysql-statefulset.yaml, run kubectl create -f mysql-statefulset.yaml, and let Kubernetes deploy your database.


Now when you run $ kubectl get pods, you should see 3 pods spinning up or ready, each with two containers.


The master node pod is named mysql-0, while the slave pods are mysql-1 and mysql-2.


Give the pods a few minutes to make sure the xtrabackup service has synchronized properly between them, then move on to the WordPress deployment.


You can check the logs of an individual container to confirm that no error messages have been thrown. The command to view a log is $ kubectl logs -f <pod_name> -c <container_name>.


The xtrabackup container on the master node should show the two connections from the slaves, and no errors should appear in the logs.


Deploying high-availability WordPress


The final step in the process is deploying our WordPress pods to the cluster. To do this, we need to define a Service and a Deployment for WordPress.


For WordPress to be highly available, every running container must be fully replaceable, meaning we can terminate one and start another without affecting the availability of data or services. We also want to tolerate at least one container failure, with a redundant container ready to pick up the slack.


WordPress stores important site data in its application directory, /var/www/html. For two WordPress instances to serve the same site, that folder must contain identical data.


When running highly available WordPress, we need to share the /var/www/html folder across instances, so we define an NFS service as the mount point for these volumes.


Here is the configuration for setting up the NFS service:
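Since the manifest is not reproduced on this page, here is a sketch of what nfs.yaml could look like, using the volume-nfs server image from the Kubernetes examples repository. The image tag, claim size, and storage class name are assumptions.

# Persistent disk that backs the NFS export.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pv-claim
spec:
  storageClassName: gce-standard    # assumed class from storage-class.yaml
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 20Gi
---
# Single-replica NFS server that exports the disk above.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-server
spec:
  replicas: 1
  selector:
    matchLabels:
      role: nfs-server
  template:
    metadata:
      labels:
        role: nfs-server
    spec:
      containers:
      - name: nfs-server
        image: gcr.io/google_containers/volume-nfs:0.8
        ports:
        - name: nfs
          containerPort: 2049
        - name: mountd
          containerPort: 20048
        - name: rpcbind
          containerPort: 111
        securityContext:
          privileged: true
        volumeMounts:
        - name: nfs-export
          mountPath: /exports
      volumes:
      - name: nfs-export
        persistentVolumeClaim:
          claimName: nfs-pv-claim
---
# Exposes the NFS server inside the cluster.
apiVersion: v1
kind: Service
metadata:
  name: nfs-server
spec:
  ports:
  - name: nfs
    port: 2049
  - name: mountd
    port: 20048
  - name: rpcbind
    port: 111
  selector:
    role: nfs-server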

Use the command $ kubectl create -f nfs.yaml to deploy the NFS service.


Now we need to run kubectl describe services nfs-server to get the IP address of the service, which we will use later.


Note: in the future we will be able to tie these together using the service name, but for now you need to hard-code the IP address.


We now create a persistent volume claim that maps to the NFS service we just created, and then attach the volume to the WordPress pods at /var/www/html, the root directory where WordPress is installed. This preserves the installation and environment across all the WordPress pods in the cluster. With this configuration we can start and tear down any WordPress node, and the data will remain. Because the NFS service keeps the physical volume in constant use, the volume will be retained and will not be reclaimed or reassigned.
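A sketch of what wordpress.yaml might contain follows: an NFS-backed persistent volume and claim, a WordPress Deployment that mounts the claim at /var/www/html, and a LoadBalancer Service. The image tag, database host, capacity, and empty password are assumptions; substitute the NFS server IP you noted above and point WORDPRESS_DB_HOST at your own MySQL master.

# NFS-backed volume; replace <nfs-server-ip> with the IP from
# `kubectl describe services nfs-server`.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs
spec:
  capacity:
    storage: 20Gi
  accessModes:
  - ReadWriteMany
  nfs:
    server: <nfs-server-ip>
    path: "/"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs
spec:
  storageClassName: ""          # bind to the pre-created NFS volume, not a dynamic one
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  replicas: 1
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress
        image: wordpress:4.9-apache
        env:
        - name: WORDPRESS_DB_HOST
          value: mysql-0.mysql      # assumed: the master pod's stable DNS name
        - name: WORDPRESS_DB_PASSWORD
          value: ""                 # assumed: matches a passwordless test setup
        ports:
        - name: wordpress
          containerPort: 80
        volumeMounts:
        - name: wordpress-persistent-storage
          mountPath: /var/www/html
      volumes:
      - name: wordpress-persistent-storage
        persistentVolumeClaim:
          claimName: nfs
---
apiVersion: v1
kind: Service
metadata:
  name: wordpress
  labels:
    app: wordpress
spec:
  type: LoadBalancer
  ports:
  - port: 80
  selector:
    app: wordpress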


Use the command $ kubectl create -f wordpress.yaml to deploy the WordPress instance.


The default deployment runs only a single WordPress instance; you can scale up the number of WordPress instances with the command $ kubectl scale --replicas=<number of replicas> deployment/wordpress.


To get the address of the WordPress service's load balancer, run $ kubectl get services wordpress and take the EXTERNAL-IP field from the output to navigate to your WordPress site.


Testing resilience


OK, now that we've deployed the services, let's start tearing them down and see how our highly available architecture handles the chaos. The only remaining single point of failure in this deployment is the NFS service (for reasons summarized in the conclusion at the end of the article). You can try killing any of the other services to see how the application responds. Right now I have three replicas of the WordPress service running, along with one master and two slave nodes in the MySQL service.


First, let's kill all but one WordPress node and see how the application responds:

$ kubectl scale --replicas=1 deployment/wordpress


Now we should see the number of deployed WordPress pods drop.

$ kubectl get pods

You should see the WordPress pods now running at 1/1.


Visit the WordPress service IP, and you will see the same site and database as before.


To scale back up, you can use kubectl scale --replicas=3 deployment/wordpress.


Once again, we can see that the data is preserved across all three instances.


To test the MySQL StatefulSet, we scale down the number of replicas with the following command:

$ kubectl scale statefulsets mysql --replicas=1

We will see both slaves drop from the StatefulSet. If the master node were lost at this point, the data it holds would still be preserved on the GCE persistent disk; however, you would have to recover the data from the disk manually.

If all three MySQL nodes go down, there is nothing to replicate from when new nodes come up. However, if only the master node fails, a new master is started automatically and its data is restored from the slave nodes via xtrabackup. Therefore, when running a production database, I do not recommend running with a replication factor lower than 3.

In the conclusions section, we'll talk about better solutions for stateful data, since Kubernetes wasn't really designed with state in mind.

Conclusions and recommendations


At this point, you have finished building and deploying a high-availability WordPress and MySQL installation on Kubernetes!


Despite these results, your research journey may be far from over. You may not have noticed it, but our installation still has a single point of failure: the NFS server sharing the /var/www/html directory among the WordPress pods. This service represents a single point of failure because, if it stops running, the html directory disappears on every pod that uses it. In this tutorial we chose a very stable image for the server that can be used in a production environment, but for a real production deployment you might consider using GlusterFS instead, to enable multi-read, multi-write on the directory shared by the WordPress instances.


That approach involves running a distributed storage cluster on Kubernetes, which Kubernetes was not really built for, so although it works, it is not ideal for long-term deployments.


For the database, I personally recommend using a managed relational database service to host the MySQL instances; both Google Cloud SQL and AWS RDS provide high availability and redundancy at a reasonable price, without you having to worry about data integrity. Kubernetes is not designed around stateful applications, and any state built into it is more of an afterthought. There are plenty of solutions available that offer the assurances you need when choosing a database service.


In other words, this was an exercise in pulling together the Kubernetes tutorials and examples found around the web into a cohesive, realistic use of Kubernetes, one that touches on all the new features in Kubernetes 1.8.x.


I hope this guide gives you a few pleasant surprises as you deploy WordPress and MySQL, and of course I hope everything runs smoothly for you.

