MySQL Operations Management: Heartbeat High-Availability Cases and Maintenance Essentials for Web Services

Source: Internet
Author: User
Tags: inotify, rsync

1. DRBD Introduction

DRBD (Distributed Replicated Block Device) is software that synchronizes and mirrors data between pairs of highly available servers at the block-device level, performing real-time synchronous or asynchronous replication between two servers over the network. As a piece of system architecture it is similar to rsync+inotify, but DRBD sits beneath the file system and synchronizes at the block level, whereas rsync+inotify synchronizes physical files on top of the file system; DRBD is therefore more efficient.
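For contrast, here is a minimal sketch of the file-level approach mentioned above (an illustration only: it assumes inotify-tools and rsync are installed, and the watched path /data and the peer name backup-host are placeholders, not taken from this article):

#!/bin/bash
# Watch /data recursively and push every change to the peer with rsync.
# This is file-level synchronization on top of the file system, unlike
# DRBD, which replicates beneath the file system at the block-device layer.
inotifywait -mrq -e create,delete,modify,move /data |
while read -r event; do
    rsync -az --delete /data/ backup-host:/data/
done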
A block device can be a disk partition, an LVM logical volume, or an entire disk.

2. How DRBD Works

DRBD is a distributed storage system that lives in the storage layer of the Linux kernel and can be used to share file systems and data between two Linux servers. It works much like a network RAID-1: on a DRBD-based high-availability (HA) pair, when we write data to the local disk, the data is also sent in real time to the other host over the network and written to its disk in the same form, keeping the local host (master node) and the remote host (standby node) synchronized. If the local system (master node) fails, the remote host (standby node) still holds an identical copy of the data and can continue serving; not only is no data lost, but the users' access to the data is preserved. For more details, see the official DRBD website, http://www.drbd.org/
DRBD working principle diagram (image omitted).

3. DRBD Replication Modes

Protocol A:

Asynchronous replication protocol. A write is considered complete as soon as the local disk write has finished and the packet has been placed in the send queue. If a node fails, data loss can occur because data written toward the remote node may still be sitting in the send queue. The data on the failover node is consistent, but not up to date. Protocol A is typically used for geographically separated nodes.

Protocol B:

Memory-synchronous (semi-synchronous) replication protocol. A write on the master node is considered complete once the local disk write has finished and the replication packet has reached the peer node. Data loss can occur if both participating nodes fail simultaneously, because the data in transit may not yet have been committed to the peer's disk.

Protocol C:

Synchronous replication protocol. A write is considered complete only when both the local and the remote disks have confirmed the write. No data is lost, which makes this the popular mode for cluster nodes, but I/O throughput is limited by network bandwidth.

Protocol C is generally used, but choosing it affects traffic and therefore network latency. For data reliability, be cautious about which protocol you use in a production environment.
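To make the choice concrete, here is a minimal sketch of where the protocol is declared; the resource shown is the data resource configured later in section 10.2, and only the protocol line matters here:

resource data {
    protocol C;    # A = asynchronous, B = memory-synchronous, C = fully synchronous
    # (disk and on sections omitted; see section 10.2 for the full definition)
}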

4. DRBD Enterprise Application Scenarios

In production scenarios, DRBD is most often used as the data synchronization layer of a highly available server pair.

For example: heartbeat+DRBD+NFS/MFS/GFS, heartbeat+DRBD+MySQL/Oracle, and so on. In fact, DRBD can be combined with any service whose data needs to be synchronized.

5. Common Data Synchronization Tools

(1) rsync (SERSYNC,INOTIFY,LSYNCD)

(2) SCP

(3) NC

(4) NFS (Network File system)

(5) Union dual machine synchronization

(6) CSYNC2 Multi-Machine synchronization

(7) The software's own synchronization mechanism (MySQL, Oracle, MongoDB, TTserver, Redis): write the file into the database, replicate it to the slave, then read the file back out.

(8) DRBD

6. DRBD Service Deployment Requirements

6.1 Business Requirements

The business requirement is to build the DRBD service on top of the heartbeat setup configured earlier; heartbeat installation and deployment were covered in my previous article. The primary server is heartbeat-1-130 and the standby server is heartbeat-1-129.

6.2 DRBD Deployment Structure Diagram

(1) The DRBD services synchronize data with each other in real time over a direct link or Ethernet.

(2) The two storage servers back each other up; under normal circumstances, each end provides a primary partition for NFS use.

(3) Between the storage servers, and between the storage servers and the switch, dual gigabit NICs are bonded.

(4) The application servers access storage through NFS.

7. DRBD Software Installation: Experiment Preparation

7.1 Operating System

CentOS-6.8-x86_64

7.2 DRBD Service Host Resource Preparation

Primary Server A:

Host Name: heartbeat-1-130

eth0 NIC address: 192.168.1.130 (management IP)

eth1 NIC address: 10.0.10.4 (heartbeat IP)

Standby Server B:

Host Name: heartbeat-1-129

eth0 NIC address: 192.168.1.129 (management IP)

eth1 NIC address: 10.0.10.5 (heartbeat IP)

VIP:

The VIP resides on the master server heartbeat-1-130.

VIP: 192.168.1.131

You need to change the hostnames and turn off the firewall and SELinux; these preparations are the same as for heartbeat and were covered in my earlier heartbeat article, so they are not repeated here (a sketch of the usual commands follows below). Heartbeat is installed on both machines, with heartbeat-1-130 as the primary server and heartbeat-1-129 as the standby.
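A minimal sketch of those preparation steps on CentOS 6 (these are standard CentOS 6 administration commands, not copied from the earlier article; run the equivalents on both machines, substituting each machine's own hostname):

# Set the hostname (use heartbeat-1-129 on the standby)
hostname heartbeat-1-130
sed -i 's/^HOSTNAME=.*/HOSTNAME=heartbeat-1-130/' /etc/sysconfig/network

# Stop the firewall now and keep it off across reboots
/etc/init.d/iptables stop
chkconfig iptables off

# Disable SELinux now and after the next reboot
setenforce 0
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config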

7.3 Creating an Available Partition

DRBD works on top of a disk partition or logical volume, so partitions must be available. First shut down the heartbeat-1-130 and heartbeat-1-129 virtual machines, then add a 1 GB disk to the master node heartbeat-1-130 and a 2 GB disk to the standby node heartbeat-1-129 (adding the disks is not demonstrated here), and start both machines again.

7.4 Partitioning /dev/sdb

(1) Create the partitions (using the standby server as the example)

The master server's /dev/sdb is divided into two partitions, sdb1 and sdb2: sdb1 is 768 MB and sdb2 takes the rest of the disk. Below, the partitions are created on the standby server as the example.

[root@heartbeat-1-129 ~]# fdisk /dev/sdb
Device contains neither a valid DOS partition table, nor Sun, SGI or OSF disklabel
Building a new DOS disklabel with disk identifier 0xd3ee6f66.
Changes will remain in memory only, until you decide to write them.
After that, of course, the previous content won't be recoverable.

Warning: invalid flag 0x0000 of partition table 4 will be corrected by w(rite)

WARNING: DOS-compatible mode is deprecated. It's strongly recommended to
         switch off the mode (command 'c') and change display units to
         sectors (command 'u').

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 1
First cylinder (1-261, default 1):
Using default value 1
Last cylinder, +cylinders or +size{K,M,G} (1-261, default 261): +1536M

Command (m for help): p

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd3ee6f66

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         197     1582371   83  Linux

Command (m for help): n
Command action
   e   extended
   p   primary partition (1-4)
p
Partition number (1-4): 2
First cylinder (198-261, default 198):
Using default value 198
Last cylinder, +cylinders or +size{K,M,G} (198-261, default 261):
Using default value 261

Command (m for help): p

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd3ee6f66

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         197     1582371   83  Linux
/dev/sdb2             198         261      514080   83  Linux

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.

[root@heartbeat-1-129 ~]# partprobe
Warning: WARNING: the kernel failed to re-read the partition table on /dev/sda (Device or resource busy).
As a result, it may not reflect all of your changes until after reboot.
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
Warning: Unable to open /dev/sr0 read-write (Read-only file system). /dev/sr0 has been opened read-only.
Error: Invalid partition table - recursive partition on /dev/sr0.

[root@heartbeat-1-129 ~]# fdisk -l

Disk /dev/sda: 21.5 GB, 21474836480 bytes
255 heads, 63 sectors/track, 2610 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x0007ed95

   Device Boot      Start         End      Blocks   Id  System
/dev/sda1   *           1          64      512000   83  Linux
Partition 1 does not end on cylinder boundary.
/dev/sda2              64        2611    20458496   8e  Linux LVM

Disk /dev/sdb: 2147 MB, 2147483648 bytes
255 heads, 63 sectors/track, 261 cylinders
Units = cylinders of 16065 * 512 = 8225280 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0xd3ee6f66

   Device Boot      Start         End      Blocks   Id  System
/dev/sdb1               1         197     1582371   83  Linux
/dev/sdb2             198         261      514080   83  Linux

(2) Format /dev/sdb1; note: do NOT format /dev/sdb2

[root@heartbeat-1-129 ~]# mkfs.ext4 /dev/sdb1
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=1024 (log=0)
Fragment size=1024 (log=0)
...output omitted...
Writing inode tables: done
Creating journal (8192 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first.

Extension: when a disk is larger than 2 TB, fdisk can no longer be used; instead we use the parted command.

Non-interactive partitioning with parted:

[root@heartbeat-1-129 ~]# parted /dev/sdb mklabel gpt
Warning: The existing disk label on /dev/sdb will be destroyed and all data on this disk will be lost. Do you want to continue?
Yes/No? yes
[root@heartbeat-1-129 ~]# parted /dev/sdb mkpart primary 0G 2G
[root@heartbeat-1-129 ~]# parted -s /dev/sdb print
Model: VMware, VMware Virtual S (scsi)
Disk /dev/sdb: 5369MB
Sector size (logical/physical): 512B/512B
Partition Table: gpt

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  2000MB  1999MB               primary

Tips:

A. The meta data partition must not be formatted with a file system.

B. Even the formatted data partition cannot be mounted directly at this point.

C. In a production environment, the DRBD meta data partition can generally be set to 1-2 GB.

D. How to check that the meta data partition is as expected:

[root@heartbeat-1-129 ~]# mount /dev/sdb2 /mnt
mount: you must specify the filesystem type

The result above shows that the meta data partition is correct: it has no file system, so it cannot be mounted.

8. Installing the DRBD Software

The DRBD software can either be compiled from source or installed with yum from a repository that carries it; this time we compile from source (a sketch of the yum route follows below).
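As an aside, a hedged sketch of the yum route on CentOS 6, assuming the third-party ELRepo repository, which is not used in this article (the exact release RPM version and URL may differ):

# Enable ELRepo, then install the DRBD 8.4 userland tools and kernel module
rpm -ivh http://www.elrepo.org/elrepo-release-6-8.el6.elrepo.noarch.rpm
yum install -y drbd84-utils kmod-drbd84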

8.1 Compile and Install the DRBD Software (note: run the following steps on both machines)

(1) Download the DRBD software (both machines)

It can be downloaded from the official site: http://oss.linbit.com/drbd/
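For example, a minimal fetch into the tools directory used below (the 8.4/ subdirectory is an assumption about the site's layout, not stated in the article):

cd /home/linzhongniao/tools
wget http://oss.linbit.com/drbd/8.4/drbd-8.4.4.tar.gz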

(2) Installing GCC and gcc-c++

[root@heartbeat-1-130 tools]# yum install gcc gcc-c++ -y

Besides gcc and gcc-c++, some other dependency packages are needed. To avoid compilation errors it is best to install them in advance; the following are the packages I needed when compiling.

yum install dpkg dpkg-dev dpkg-devel gcc gcc-c++ git rpm-build kernel-devel kernel-headers flex -y

(3) Compiling DRBD

[root@heartbeat-1-130 tools]# pwd
/home/linzhongniao/tools
[root@heartbeat-1-130 tools]# export LC_ALL=C
[root@heartbeat-1-130 tools]# ls
drbd-8.4.4.tar.gz
[root@heartbeat-1-130 tools]# tar -xf drbd-8.4.4.tar.gz
[root@heartbeat-1-130 tools]# cd drbd-8.4.4
[root@heartbeat-1-130 drbd-8.4.4]# ./configure --prefix=/usr/local/drbd8.4.4 --with-km --with-heartbeat --sysconfdir=/etc/

(4) Problems with compilation

If the following warning appears, install dpkg, dpkg-dev and dpkg-devel with yum and recompile:

checking for udevinfo... false
configure: WARNING: No dpkg-buildpackage found, building Debian packages is disabled.

If the following error appears, install flex with yum and recompile:

configure: error: Cannot build utils without flex, either install flex or pass the --without-utils option.

(5) Loading the kernel

A. Find the kernel source first

[root@heartbeat-1-130 drbd-8.4.4]# ls -ld /usr/src/kernels/$(uname -r)/
ls: cannot access /usr/src/kernels/2.6.32-642.el6.x86_64/: No such file or directory

The kernel source path does not exist, so install kernel-devel and kernel-headers with yum and check again:

[root@heartbeat-1-130 drbd-8.4.4]# ls -ld /usr/src/kernels/$(uname -r)/
drwxr-xr-x 22 root root 4096 Mar  5 05:55 /usr/src/kernels/2.6.32-696.20.1.el6.x86_64/

If the kernel shown by uname -r is not the same as the kernel source found under /usr/src/kernels/, simply upgrade the system kernel, reboot, and then check the kernel again.

[root@heartbeat-1-130 drbd-8.4.4]# ls -ld /usr/src/kernels/2.6.32-696.20.1.el6.x86_64/
drwxr-xr-x 22 root root 4096 Mar  6 19:24 /usr/src/kernels/2.6.32-696.20.1.el6.x86_64/
[root@heartbeat-1-130 drbd-8.4.4]# uname -r
2.6.32-642.el6.x86_64
[root@heartbeat-1-130 drbd-8.4.4]# yum -y install kernel
[root@heartbeat-1-130 ~]# uname -r
2.6.32-696.20.1.el6.x86_64

B. Build the DRBD module against the system kernel

[root@heartbeat-1-130 drbd-8.4.4]# make KDIR=/usr/src/kernels/$(uname -r)/
[root@heartbeat-1-130 drbd-8.4.4]# echo $?
0

(6) Installing DRBD

[root@heartbeat-1-130 drbd-8.4.4]# make install

If echo $? returns zero, the installation succeeded.

9. Planning the data Resource and /data Configuration Parameters (diagram)

The important parts are the highlighted (yellow) items in the diagram (image omitted). We also reuse the heartbeat environment deployed earlier; heartbeat deployment was covered in the previous article and is not demonstrated again here.

10. Configuring DRBD Parameters (both machines)

10.1 Loading the DRBD Module into the Kernel

The DRBD module is not automatically loaded into the kernel after a reboot. For this experiment we can make it take effect at boot via /etc/rc.local. In production, do not put it in /etc/rc.local and do not let it start automatically; automatic loading can cause unnecessary problems. Use lsmod | grep drbd to check; output like the following means the module is loaded.

[root@heartbeat-1-130 drbd-8.4.4]# lsmod | grep drbd
[root@heartbeat-1-130 drbd-8.4.4]# modprobe drbd
[root@heartbeat-1-130 drbd-8.4.4]# lsmod | grep drbd
drbd                  327370  0
libcrc32c               1246  1 drbd
[root@heartbeat-1-130 drbd-8.4.4]# echo 'modprobe drbd' >> /etc/rc.local
[root@heartbeat-1-130 drbd-8.4.4]# tail -1 /etc/rc.local
modprobe drbd
10.2 Writing the DRBD Configuration File drbd.conf

(1) Configuring the configuration file for DRBD

DRBD's configuration file lives in the directory specified at compile time, /etc/.

[root@heartbeat-1-130 etc]# pwd
/etc
[root@heartbeat-1-130 etc]# cat drbd.conf
global {
    usage-count no;
}
common {
    syncer {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        rate 1000M;
        verify-alg crc32c;
    }
}
resource data {
    protocol C;
    disk {
        on-io-error detach;
    }
    on heartbeat-1-130 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.10.4:7788;
        meta-disk /dev/sdb2[0];
    }
    on heartbeat-1-129 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.10.5:7788;
        meta-disk /dev/sdb2[0];
    }
}

(2) configuration file parameter description

global {
    usage-count no;
}

The first three lines are the global section. usage-count controls whether this installation participates in the official usage statistics; setting it to no opts out.

common {
    syncer {
        # wfc-timeout degr-wfc-timeout outdated-wfc-timeout wait-after-sb
        rate 1000M;
        verify-alg crc32c;
    }
}

The common section sets the synchronization rate, here 1000M; crc32c is the checksum algorithm used for verification.
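As a hedged aside (not part of the original walkthrough): verify-alg is the checksum DRBD's online verification feature uses, and that verification can be triggered once the resource is connected, with progress reported in /proc/drbd:

# Start an online verify of the resource named "data"
drbdadm verify data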

resource data {
    protocol C;
    disk {
        on-io-error detach;
    }
    on heartbeat-1-130 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.10.4:7788;
        meta-disk /dev/sdb2[0];
    }
    on heartbeat-1-129 {
        device    /dev/drbd0;
        disk      /dev/sdb1;
        address   10.0.10.5:7788;
        meta-disk /dev/sdb2[0];
    }
}

The resource section above defines a DRBD resource. protocol C is the real-time synchronous replication protocol; A and B are asynchronous and semi-synchronous respectively, and both can lose data, so use them only if the business does not demand strong consistency under high concurrency. The disk sub-section defines how disk I/O errors are handled. The word after resource, here data, is the resource name, and there can be multiple resources: to add another one, copy this resource section and change the resource name, the disk, the meta-disk, the synchronization address, and the port (here 7788), as sketched below. The name after on must be exactly what uname -n returns on that machine. device is the DRBD device; disk is the partition backing drbd0 on that machine; address is the synchronization address; meta-disk is the meta data partition (the machine's second partition), and [0] is the meta device index format.
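A minimal sketch of such a second resource (the name data2, device /dev/drbd1, partitions /dev/sdc1 and /dev/sdc2, and port 7789 are all hypothetical, not from this article):

resource data2 {
    protocol C;
    disk {
        on-io-error detach;
    }
    on heartbeat-1-130 {
        device    /dev/drbd1;      # hypothetical second DRBD device
        disk      /dev/sdc1;       # hypothetical backing partition
        address   10.0.10.4:7789;  # same heartbeat IP, new port
        meta-disk /dev/sdc2[0];    # hypothetical meta data partition
    }
    on heartbeat-1-129 {
        device    /dev/drbd1;
        disk      /dev/sdc1;
        address   10.0.10.5:7789;
        meta-disk /dev/sdc2[0];
    }
}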

11. Enabling the DRBD Resource (both machines)

Both machines must run the following steps; heartbeat-1-130 is used as the example.

11.1 Initialize the DRBD Metadata (Create Device Metadata)

Initialize the resource. Note that the resource we initialize is data, the name that follows resource in drbd.conf.

[root@heartbeat-1-130 drbd-8.4.4]# drbdadm create-md data
Writing meta data...
initializing activity log
NOT initializing bitmap
New drbd meta data block successfully created.
11.2 Starting the DRBD service

drbdadm up data

drbdadm up data brings up the resource named data as defined by resource; you can also bring up every resource with drbdadm up all.

[root@heartbeat-1-130 ~]# drbdadm up data
/usr/local/drbd8.4.4/var/run/drbd: No such file or directory
/usr/local/drbd8.4.4/var/run/drbd: No such file or directory

Looking at the error message, /usr/local/drbd8.4.4/var/run/drbd: No such file or directory means this directory does not exist on CentOS 6, so create it and bring DRBD up again.

[root@heartbeat-1-130 ~]# mkdir -p /usr/local/drbd8.4.4/var/run/drbd
[root@heartbeat-1-130 ~]# drbdadm up data
11.3 Viewing DRBD Status via /proc/drbd

[root@heartbeat-1-130 ~]# cat /proc/drbd
version: 8.4.4 (api:1/proto:86-101)
GIT-hash: 74402fecf24da8e5438171ee8c19e28627e1c98a build by root@heartbeat-1-130, 2018-03-06 21:32:41
 0: cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----
    ns:0 nr:0 dw:0 dr:0 al:0 bm:0 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:795188

This status, ro:Secondary/Secondary ds:Inconsistent/Inconsistent, is correct at this stage: both nodes are still in the non-master (Secondary) state.

11.4 Synchronizing DRBD Data to the Peer Server to Keep Both Sides Consistent

11.4.1 Specify a Resource and Synchronize Its Data to the Peer

Notes:

1. If the disks are empty, there is no data to worry about and the synchronization can be done in either direction.

2. If the data on the two sides differs, pay special attention to the direction of synchronization; otherwise you may lose data.

11.4.2 A Resource Can Only Be Synchronized From One End to the Other

Note: run this on the primary server; our primary server is heartbeat-1-130.

(1) Synchronizing data

[root@heartbeat-1-130 ~]# drbdadm -- --overwrite-data-of-peer primary data
[root@heartbeat-1-130 ~]# cat /proc/drbd
version: 8.4.4 (api:1/proto:86-101)
GIT-hash: 74402fecf24da8e5438171ee8c19e28627e1c98a build by root@heartbeat-1-130, 2018-03-06 21:32:41
 0: cs:SyncSource ro:Primary/Secondary ds:UpToDate/Inconsistent C r-----
    ns:344064 nr:0 dw:0 dr:344724 al:0 bm:21 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:451124
    [=======>............] sync'ed: 43.6% (451124/795188)K
    finish: 0:00:10 speed: 43,008 (43,008) K/sec
[root@heartbeat-1-130 ~]# cat /proc/drbd
version: 8.4.4 (api:1/proto:86-101)
GIT-hash: 74402fecf24da8e5438171ee8c19e28627e1c98a build by root@heartbeat-1-130, 2018-03-06 21:32:41
 0: cs:Connected ro:Primary/Secondary ds:UpToDate/UpToDate C r-----
    ns:795186 nr:0 dw:0 dr:795846 al:0 bm:49 lo:0 pe:0 ua:0 ap:0 ep:1 wo:f oos:0

(2) Parameter description

Take the master node as an example

cs:Connected — the connection state. When monitoring with Zabbix, Connected is the main thing to watch.

ro:Primary/Secondary — the roles: Primary is the master and Secondary the standby; the local node's role is shown first and the peer's second.

ds:UpToDate/UpToDate — the disk states; UpToDate on both sides means both copies are fully up to date.
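Building on the Zabbix remark above, a minimal hedged sketch of such a check (the script name and exit-code convention are my own, not from the article):

#!/bin/bash
# drbd_check.sh - exit 0 if the DRBD connection state is Connected, 1 otherwise.
grep -q 'cs:Connected' /proc/drbd && exit 0
exit 1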

12. Possible Problems and Solutions

The two nodes stay stuck in a state such as:

cs:Connected ro:Secondary/Secondary ds:Inconsistent/Inconsistent C r-----

Workaround:

1. Check that the physical network connections, IP addresses, and host routes of the two machines are correct.

2. Stop the iptables firewall.

3. It may also be the result of a split-brain.

You can try the following to recover.

On the standby (secondary) node:

drbdadm secondary data

drbdadm -- --discard-my-data connect data    # discard the local data

Then operate on the master node:

Check the status with cat /proc/drbd; if the connection state is not WFConnection, you need to connect manually:

drbdadm connect data

Then check cat /proc/drbd on both sides to confirm the status.

13. Mount Test: Verify Data Synchronization and Check the Standby Node's Sync Status

(1) Creating a DRBD file system

[root@heartbeat-1-130 ~]# mkdir /data
[root@heartbeat-1-130 ~]# mkfs.ext3 /dev/drbd0
mke2fs 1.41.12 (17-May-2010)
Filesystem label=
OS type: Linux
Block size=4096 (log=2)
Fragment size=4096 (log=2)
Stride=0 blocks, Stripe width=0 blocks
49728 inodes, 198796 blocks
9939 blocks (5.00%) reserved for the super user
First data block=0
Maximum filesystem blocks=205520896
7 block groups
32768 blocks per group, 32768 fragments per group
7104 inodes per group
Superblock backups stored on blocks:
    32768, 98304, 163840
Writing inode tables: done
Creating journal (4096 blocks): done
Writing superblocks and filesystem accounting information: done

This filesystem will be automatically checked every 24 mounts or
180 days, whichever comes first.  Use tune2fs -c or -i to override.
[root@heartbeat-1-130 ~]# tune2fs -c 0 -i 0 /dev/drbd0
tune2fs 1.41.12 (17-May-2010)
Setting maximal mount count to -1
Setting interval between checks to 0 seconds
[root@heartbeat-1-130 ~]# mount /dev/drbd0 /data

(2) Test data synchronization on the standby node

First create data in the master node's DRBD file system; we create 20 files:

[root@heartbeat-1-130 data]# touch `seq 10`
[root@heartbeat-1-130 data]# ls
1  10  2  3  4  5  6  7  8  9  lost+found
[root@heartbeat-1-130 data]# touch `seq 10 20`

To view the synchronized data on the standby node, we need to mount the standby's storage. While DRBD holds the device the backing partition cannot be mounted directly, so take the resource down first and then mount /dev/sdb1; the listing shows that the data has been synchronized.

[root@heartbeat-1-129 ~]# mount /dev/sdb1 /mnt
mount: you must specify the filesystem type
[root@heartbeat-1-129 ~]# drbdadm down data
[root@heartbeat-1-129 ~]# mount /dev/sdb1 /mnt
[root@heartbeat-1-129 ~]# ll /mnt/
total 16
-rw-r--r-- 1 root root     0 Mar  7 00:41 1
-rw-r--r-- 1 root root     0 Mar  7 00:43 10
-rw-r--r-- 1 root root     0 Mar  7 00:43 11
-rw-r--r-- 1 root root     0 Mar  7 00:43 12
-rw-r--r-- 1 root root     0 Mar  7 00:43 13
-rw-r--r-- 1 root root     0 Mar  7 00:43 14
-rw-r--r-- 1 root root     0 Mar  7 00:43 15
-rw-r--r-- 1 root root     0 Mar  7 00:43 16
-rw-r--r-- 1 root root     0 Mar  7 00:43 17
-rw-r--r-- 1 root root     0 Mar  7 00:43 18
-rw-r--r-- 1 root root     0 Mar  7 00:43 19
-rw-r--r-- 1 root root     0 Mar  7 00:41 2
-rw-r--r-- 1 root root     0 Mar  7 00:43 20
-rw-r--r-- 1 root root     0 Mar  7 00:41 3
-rw-r--r-- 1 root root     0 Mar  7 00:41 4
-rw-r--r-- 1 root root     0 Mar  7 00:41 5
-rw-r--r-- 1 root root     0 Mar  7 00:41 6
-rw-r--r-- 1 root root     0 Mar  7 00:41 7
-rw-r--r-- 1 root root     0 Mar  7 00:41 8
-rw-r--r-- 1 root root     0 Mar  7 00:41 9
drwx------ 2 root root 16384 Mar  7 00:38 lost+found
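Since this check took the DRBD resource down on the standby, here is a minimal sketch of putting the node back into the cluster afterwards (my own follow-up, not shown in the article):

# On the standby node: release the raw partition and rejoin the resource
umount /mnt
drbdadm up data
cat /proc/drbd    # should return to cs:Connected with roles Secondary/Primary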

