Virtual Machine Migration Technology: Part 1 (migrating KVM virtual machines between physical hosts)


Preface

Virtual machine migration technology provides a simple method for server virtualization management. Popular virtualization products such as VMware, Xen, Hyper-V, and KVM all provide their own migration tools. Among them, KVM, the open-source virtualization solution on the Linux platform, has developed rapidly, and the migration features of KVM-based virtual machines have become increasingly mature. This article describes both static migration (offline migration) and dynamic migration (online migration) of KVM virtual machines in different application environments, and demonstrates on the latest SuSE Linux Enterprise Server 11 SP1 how to perform migrations with the libvirt/virt-manager graphical tools and the command line-based QEMU-KVM tool.

Introduction to V2V Virtual Machine migration

V2V virtual machine migration means moving a virtual machine system running on one VMM (Virtual Machine Monitor) to a VMM running on another physical host. The VMM abstracts and isolates hardware resources, shielding the underlying hardware details. Migration technology allows an operating system to move dynamically between different hosts, further removing the dependency between software and hardware resources. The first article in this series, "Virtual Machine Migration Technology", introduced three methods of V2V migration; this article details the differences among these methods and how to implement them.

Classification of V2V migration methods

Static migration

Static migration: also called conventional or offline migration. The virtual machine is moved from one physical machine to another while it is shut down or paused. Because the virtual machine's file system is built on a virtual machine image, when the VM is shut down you only need to copy the image and the corresponding configuration file to the other physical host. If the state before migration must be preserved, pause the VM first, copy its state to the target host, and then re-establish the VM state on the target host to resume execution. Either way, the migration process explicitly stops the virtual machine; from the user's perspective there is a clear period of downtime during which services on the virtual machine are unavailable. This migration method is simple and suitable for scenarios where service availability requirements are not strict.

Dynamic migration of shared storage

Live migration, also called dynamic or online migration, is the process of moving a virtual machine system from one physical host to another while keeping the services on the virtual machine running normally. The process has no significant impact on end users, so an administrator can take a physical server offline for repair or upgrade without disrupting users. Unlike static migration, the migration process has only a very short downtime in order to keep virtual machine services available. In the earlier phase of the migration, the services run on the virtual machine on the source host; once the migration reaches the stage where the target host has all the resources needed to run the virtual machine system, control is transferred to the target host after a very short switchover, and the VM continues to run there. Because the switchover time is very short, the migration is transparent to users, who do not perceive any service interruption. Dynamic migration is suitable for scenarios that require high availability of virtual machine services.

Currently, mainstream dynamic migration tools such as VMware VMotion and Citrix XenMotion all rely on centralized shared external storage between physical machines, such as a SAN (Storage Area Network) or NAS (Network-Attached Storage). With shared storage, only the memory execution state of the virtual machine system needs to be migrated, which yields better migration performance.

Figure 1. dynamic migration of shared storage

As shown in Figure 1, to shorten the migration time and the service interruption time, the source and target hosts share SAN storage. Dynamic migration then only needs to transfer the memory execution state of the virtual machine system, achieving better performance.

Dynamic migration of local storage

Dynamic migration based on shared storage devices accelerates the migration process and minimizes downtime. In some cases, however, virtual machines must be migrated dynamically using only local storage, which requires dynamic migration of storage blocks, referred to as block migration.

  • For example, some servers do not use SAN storage and are migrated only rarely, and the services on their virtual machines have no strict requirements on migration time; in such cases block migration can be used. SAN storage is expensive: although it improves migration performance and system stability, configuring costly SAN storage just to accelerate migration is not cost-effective for small and medium-sized enterprises.
  • In an environment with centralized shared external storage, dynamic migration based on shared storage clearly works well. However, some computer clusters do not use shared external storage; they are composed of physical hosts with independent local storage. Shared-storage-based migration is restricted in this case: after a virtual machine is migrated to the target host, it can no longer access its original external storage device unless the source host continues to serve its external storage accesses.

To broaden the applicability of dynamic migration, a full-system dynamic migration solution is needed that also migrates the virtual machine's external storage. In a cluster environment with distributed local storage, migration can then still be used to move the virtual machine environment while keeping virtual machine system services available during the migration.

Figure 2. dynamic migration of local storage

Compared with dynamic migration based on shared storage, block migration must transfer both the virtual machine disk image and the memory state of the virtual machine system, which prolongs the migration time and reduces migration performance.

Management tools for KVM virtual machines

Strictly speaking, KVM is only a module of the Linux kernel; additional tools are required to create and manage a complete KVM virtual machine.

  • QEMU-KVM: On Linux, the KVM module can first be loaded with the modprobe system tool; if KVM is installed from an RPM package, the system loads the module automatically at startup. After the module is loaded, virtual machines can be created with other tools. However, the kernel module alone is far from enough, because users cannot drive a kernel module directly; a user-space tool is needed. For this, the KVM developers chose the mature open-source virtualization software QEMU. QEMU is powerful virtualization software that can emulate different CPU architectures; for example, it can emulate a Power CPU and run programs compiled for Power. KVM uses the x86 portion of QEMU to form the user-space tool qemu-kvm, which controls the KVM kernel module. Linux distributions therefore ship the KVM kernel module and the QEMU-KVM user-space tool separately; this is the relationship between KVM and QEMU. (A minimal command-line sketch of loading the module and listing virtual machines follows this list.)
  • libvirt, virsh, virt-manager: Although the QEMU-KVM tool can create and manage KVM virtual machines, Red Hat has developed further auxiliary tools for KVM, such as libvirt and libguestfs, because the raw QEMU tool is neither efficient nor easy to use. libvirt is a set of APIs with bindings for multiple languages that provides convenient and reliable programming interfaces for various virtualization tools; it supports not only KVM but also other hypervisors such as Xen. With libvirt you simply connect to the KVM or Xen host through the functions it provides and then control different virtual machines with the same commands. libvirt not only provides APIs but also comes with a text-based virtual machine management command, virsh, which exposes all libvirt functionality. End users, however, often prefer a graphical user interface, and that is virt-manager, a virtual machine management GUI written in Python that lets users operate different virtual machines; virt-manager is implemented on top of the libvirt APIs.
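As a minimal command-line sketch of this tool chain (assuming an Intel host; use kvm_amd on AMD processors), the following loads the KVM modules and asks libvirt for the list of defined virtual machines:

 # modprobe kvm
 # modprobe kvm_intel
 # lsmod | grep kvm
 # virsh -c qemu:///system list --all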

This is the general architecture of KVM virtualization technology on Linux. This article demonstrates how to use these tools to migrate KVM virtual machines.

Introduction to the experiment environment in this article

The KVM virtual machine software in this article is based on Novell's SuSE Linux Enterprise Server 11 Service Pack 1 release. SLES11 SP1 was released on May 19, 2010; it is based on Linux kernel 2.6.32.12 and includes kvm-0.12.3, libvirt-0.7.6, and virt-manager-0.8.4, with full support for KVM virtual machines. The physical servers and external shared storage used in this article are configured as follows:

Table 1. hardware configuration

Physical host       Hardware configuration               Host OS      Host name   IP address
Source host         Xeon(R) E5506, 4 cores; Mem: 10 GB   SLES11 SP1   Victory3    192.168.0.73
Destination host    Xeon(R) E5506, 8 cores; Mem: 18 GB   SLES11 SP1   Victory4    192.168.0.74
NFS server          Pentium(R) D, 2 cores; Mem: 2 GB     SLES11 SP1   Server17    192.168.0.17

Create a KVM VM

Before migrating a virtual machine, we need to create one. A virtual machine can be created either with QEMU-KVM commands or with the virt-manager graphical management tool.

  • QEMU-KVM creation of virtual machine image files: see the reference "KVM Virtual Machine on IBM System x".
  • virt-manager: see the virt-manager help manual.

KVM Virtual Machine static migration

Static migration is relatively simple because it allows virtual machines to be interrupted. First, shut down the VM on the source host, move the storage image and configuration file of the VM to the target host, and then start the VM on the target host to restore the service. The implementation of static migration is slightly different depending on the virtual machine image storage method.

Use shared storage between virtual machines

If both the source host and the target host can access the virtual machine's image, you only need to migrate the virtual machine configuration file. For example, on the SLES11 SP1 system in this example, the configuration file of a virtual machine managed by virt-manager is stored in /etc/libvirt/qemu/<your VM name>.xml. Copy the XML configuration file to the same directory on the target host and make the appropriate modifications, such as to files or paths that are specific to the source host. Whenever you modify a virtual machine's XML file under /etc/libvirt/qemu/, you must re-run the define command to activate the new virtual machine configuration.

List 1. Activate the virtual machine configuration file

 # virsh define /etc/libvirt/qemu/"your vm name.xml"
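For illustration, assuming a virtual machine named sles11 (a placeholder name) whose image already resides on storage visible to both hosts, the full static migration could look roughly as follows; this sketch is not from the original article:

 On the source host, shut the virtual machine down and copy its configuration file:
     victory3:~ # virsh shutdown sles11
     victory3:~ # scp /etc/libvirt/qemu/sles11.xml root@victory4:/etc/libvirt/qemu/

 On the target host, register and start the virtual machine:
     victory4:~ # virsh define /etc/libvirt/qemu/sles11.xml
     victory4:~ # virsh start sles11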

Use local storage for VM Images

Local storage means that the virtual machine's file system is built on local disks, either as a file or as a disk partition.

  • Local file storage: If the virtual machine is based on an image file, simply copy the image file and the XML configuration file from the source host to the target host, then modify and activate the XML file.
  • Local disk partition: If the virtual machine uses a disk partition (physical or logical) as its storage device, first use a dump tool such as dd to convert the disk partition into an image file and copy it to the target host. When the virtual machine is restored on the target host, the image file is written back into a disk partition of the target host. If the virtual machine system uses multiple disk partitions, each partition must be dumped into its own image file. For example, if the LVM logical volume "/dev/VolGroup00/lv001" is used as a storage device, it can be exported to an image file with the following command:

List 2. convert a logical volume to an image file

 dd if=/dev/VolGroup00/lv001 of=lv001.img bs=1M 
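On the target host, the image file can then be written back into a partition or logical volume of at least the same size before the virtual machine is activated; a minimal sketch, assuming the same logical volume has been prepared on the target host:

 dd if=lv001.img of=/dev/VolGroup00/lv001 bs=1M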

Save the running status of the Virtual Machine

During static migration, the virtual machine system is shut down, so the running state before shutdown is not retained. If you want to preserve the system state before the migration and restore it afterwards, you need to take a snapshot of the virtual machine or suspend the system to disk (hibernation) before migrating; the details and implementation methods are described in the fifth part of this series.
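If the virtual machines are managed through libvirt, one simple way to preserve the running state is the virsh save/restore pair. The sketch below is only an illustration of that idea (the VM name sles11 and the state file path are placeholders), not the method detailed later in this series:

 # virsh save sles11 /var/lib/libvirt/save/sles11.state
 (copy the image, configuration file, and state file to the target host)
 # virsh restore /var/lib/libvirt/save/sles11.state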

Dynamic migration based on shared storage

As described in the previous section "Classification of V2V migration methods", dynamic migration can be divided into dynamic migration based on shared storage and block migration based on local storage. This section implements the most widely used form, dynamic migration based on shared storage. One precondition for live migration is that the virtual machine's storage files reside on shared storage. You therefore need to set up a shared storage space that both the source and target hosts can connect to, holding the virtual media files of the virtual machine, including virtual disks, virtual optical discs, and virtual floppy disks. Otherwise, even after the migration completes, the migrated virtual machine cannot start because its virtual devices cannot be attached.

Set lab environment

Dynamic migration encapsulates the configuration of a virtual machine in a file and then quickly transfers the configuration and the memory running state of the virtual machine from one physical machine to another over a high-speed network, during which the VM keeps running. Under current technical conditions, most virtual machine software, such as VMware, Hyper-V, and Xen, requires shared storage for dynamic migration. Typical shared storage includes network file systems using the NFS or SMB/CIFS protocols, or a SAN connected through iSCSI. Which network file system to use depends on the actual situation; this experiment uses NFS as the shared storage between the source host and the target host.

Figure 3. dynamic migration experiment configuration of shared storage

    1. Ensure that the network connections are correct and that the source host, destination host, and NFS server can reach one another.
    2. Ensure that the VMM on the source host and on the target host is running properly.
    3. Set up the shared directory on the NFS server. The NFS server in this article also runs SLES11 SP1.

Listing 3. configuring the NFS service

 Modify the /etc/exports file and add:

     /home/image *(rw,sync,no_root_squash)

 Option meanings:
     rw: read/write permission; ro: read-only permission
     no_root_squash: a root user logging in from an NFS client keeps root privileges on the
                     exported directory; this option is insecure and not generally recommended
     sync: data is written to storage synchronously
     async: data is buffered in memory rather than written to disk immediately

 Restart the NFS server service:

     # service nfsserver restart
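After editing /etc/exports, the export table can be re-read and the share checked from the other hosts; a minimal verification sketch (not part of the original procedure), using the directory and server address from the lab setup above:

 On the NFS server, re-export the shared directories:
     # exportfs -r

 On the source or target host, confirm that the share is visible:
     # showmount -e 192.168.0.17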

Use virt-manager for dynamic migration

virt-manager is graphical virtual machine management software based on libvirt. Note that virt-manager versions may differ between distributions, and the graphical interface and operation steps may differ accordingly. This article uses virt-manager-0.8.4 on the SLES11 SP1 release.

First, add the shared storage to the source host and the target host. The steps below use the source host as the example; configure the target host in the same way.

  • Add the NFS storage pool in the virt-manager of the source host and of the target host. Click the Edit menu -> Host Details -> Storage tab.

    Figure 4. storage pool Configuration

  • Add a new storage pool. Click "+" in the lower left corner to bring up a new window. Enter the following parameters:
    • Name: name of the storage pool.
    • Type: Select netfs: Network exported directory. This article uses NFS as the shared storage protocol.

      Figure 5. Add a shared storage pool

  • Click "Forward" and enter the following parameters:
    • Target path: the local directory to which the shared storage is mapped (mounted). In this article, this directory must be the same on the source host and the target host.
    • Format: select the storage type; nfs is required here.
    • Host Name: Enter the shared storage server, that is, the IP address or hostname of the NFS server.
    • Source Path: the shared directory output on the NFS server.

      Figure 6. storage pool settings

  • Click "finish" to add the shared storage. In this case, you can view the file system list of the Linux system on the physical machine to view the shared storage ing directory.

Create a KVM VM based on shared storage on the source host.

  • Select the shared storage pool and click "New Volume" to create a new storage volume.
  • Enter the storage volume parameters. In this example, a 10 GB storage volume in qcow2 format is created for the VM. (A virsh equivalent of creating the volume is sketched after this list.)

    Figure 7. Add a storage volume

  • Create a virtual machine on this shared storage volume. This article creates a virtual machine running Windows 2008 R2. For details about how to create a VM, see "Create a KVM VM" above.
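For reference, a storage volume like the one above can also be created from the command line; a minimal sketch, reusing the placeholder pool name nfspool from the previous sketch:

 # virsh vol-create-as nfspool disk0.qcow2 10G --format qcow2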

Connect to the VMM on the remote host. The steps below use the source host as the example; configure the target host in the same way.

  • Open the virt-manager application on the source host; it connects to the local virtual machine list on localhost. Click File -> Add Connection. The Add Connection window is displayed. Enter the following items:
    • Hypervisor: Select qemu.
    • Connection: select the connection mode. In this article, select SSH connection.
    • Hostname: Enter the host name or IP address to be connected. Enter the target host name victory4 here.

      Figure 8. Add a remote vmm connection

  • Click "Connect" and enter the SSH connection password. The lists of virtual machines on the source and target hosts are then displayed. (An equivalent virsh connection command is sketched after the figure.)

    Figure 9. Manage remote vmm
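The same remote connection can also be made from the command line; for example, the following minimal sketch lists the virtual machines on the target host over SSH, using the host name from Table 1:

 # virsh -c qemu+ssh://root@victory4/system list --all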

Dynamically migrate the KVM virtual machine from the source host to the target host.

  • Start the virtual machine Windows 2008 R2 on the source host.
  • Enable real-time network services in the virtual machine (to verify the availability of services during the migration).
    • Enable remote access and remotely connect to the VM on other hosts.
    • Enable real-time network services. For example, open a browser and play a real-time online video.
  • Prepare for dynamic migration by making sure that all virtual storage devices, including ISO images and CD-ROMs, are on shared storage at this point.
  • In the virt-manager window of the source host, right-click the VM waiting for migration and select "Migrate".

    • New Host: select the hostname of the target host.
    • Address: Enter the IP address of the target host.
    • Port and bandwidth: specify the port and bandwidth used to connect to the target host. This article does not set them; the defaults are used.

    Figure 10. Virtual Machine migration settings

  • Click "migrate" and "yes" to start dynamic migration of virtual machines.

    Figure 11. VM migration progress

  • The dynamic migration time is related to network bandwidth, physical host performance, and virtual machine configuration. The network connection in this experiment is based on 100 Mbps Ethernet, and the entire migration process takes about 150 seconds. Remote Desktop Connection (RDC) is used to remotely connect to the virtual machine without interruption during migration. The real-time network video played in the virtual machine is basically smooth, and the pause time is very short, only about 1 second. If 1000 Mbps Ethernet or optical fiber network is used, the migration time will be greatly reduced, and the pause time of the virtual machine service is almost negligible.
  • After the migration completes, a Windows 2008 R2 virtual machine with the same name is automatically created in the VMM of the target host, and the remote connection service and online video playback continue. The VM on the source host changes to the paused state and no longer provides services. Dynamic migration has completed successfully. (A command-line sketch of the same migration follows this list.)
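For environments managed from the command line, the same shared-storage live migration can be triggered with virsh instead of the virt-manager GUI; a hedged sketch using the host name from this setup and a placeholder VM name (win2008r2):

     victory3:~ # virsh migrate --live win2008r2 qemu+ssh://root@victory4/system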

Dynamic migration based on data blocks

Block migration (migrating storage blocks along with the VM) was introduced in qemu-kvm 0.12.2. In the previous section, "Dynamic migration based on shared storage", the source host and the target host had to be connected to a shared storage service to achieve dynamic migration. With block migration, the virtual disk files can be migrated from the source host to the target host during the dynamic migration itself. Thanks to this QEMU-KVM feature, shared storage is no longer a prerequisite for dynamic migration, which lowers the barrier to dynamic migration and broadens its applicability. SLES11 SP1 integrates kvm-0.12.3, which supports block migration; however, libvirt-0.7.6 and virt-manager-0.8.4 on SLES11 SP1 do not yet expose the block migration function, so the block migration experiment below uses only the command-line QEMU-KVM tool.

Set lab environment

During block migration, the virtual machines use only local storage, so the physical environment is very simple: the source host and the target host only need to be connected over Ethernet, as shown in Figure 2 (dynamic migration of local storage).

QEMU control terminal and migration commands

To enter the QEMU control terminal (monitor), add the "-monitor" parameter to the QEMU-KVM command line.

  • -monitor stdio: direct the monitor to the text console.
  • -monitor vc: direct the monitor to the graphics console.
  • The command for switching between the graphic console and the virtual machine VNC window is:
    • CTRL + ALT + 1: VNC window
    • CTRL + ALT + 2: monitor Console
    • CTRL + ALT + 3: serial0 Console
    • CTRL + ALT + 4: parallel0 Console

QEMU-KVM provides the -incoming parameter to listen on a specified port for migration data. This parameter is required on the target host so that it can receive the migration data from the source host.

Listing 4. Migration-related qemu commands

 (qemu) help migrate
 migrate [-d] [-b] [-i] uri -- migrate to URI (using -d to not wait for completion)
                 -b for migration without shared storage with full copy of disk
                 -i for migration without shared storage with incremental copy of disk
                    (base image shared between src and destination)

Dynamic migration of data blocks using QEMU-KVM

Create and start a virtual machine on the source host.

  • Create a virtual machine image file on the local disk. In this document, a local image file in qcow2 format is created.

    List 5. Create a VM on the source host

    victory3:~ # qemu-img create -f qcow2 /var/lib/kvm/images/sles11.1ga/disk0.qcow2 10G

  • Install the virtual machine onto the image file. In this article, SLES11 SP1 is installed in the virtual machine.

    Listing 6. Installing virtual machines on the source host

     victory3:~ # /usr/bin/qemu-kvm -enable-kvm -m 512 -smp 4 -name sles11.1ga \
         -monitor stdio -boot c \
         -drive file=/var/lib/kvm/images/sles11.1ga/disk0.qcow2,if=none,id=drive-virtio-disk0,boot=on \
         -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 \
         -drive file=/media/83/software/Distro/SLES-11-SP1-DVD-x86_64-GM-DVD1.iso,if=none,media=cdrom,id=drive-ide0-1-0 \
         -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 \
         -device virtio-net-pci,vlan=0,id=net0,mac=52:54:00:13:08:96 \
         -net tap -vnc 127.0.0.1:3

  • After the virtual machine is installed, start it with the following command. "-monitor stdio" is included to enable the text console. The ISO file is removed from the virtual optical drive to keep the virtual devices consistent between the source and target hosts during migration. If an ISO file with the same name exists at the same path on the target host, the ISO file parameter can be kept during the migration.

    Listing 7. Starting a VM on the source host

     victory3:~ # /usr/bin/qemu-kvm -enable-kvm -m 512 -smp 4 -name sles11.1ga \
         -monitor stdio -boot c \
         -drive file=/var/lib/kvm/images/sles11.1ga/disk0.qcow2,if=none,id=drive-virtio-disk0,boot=on \
         -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 \
         -drive if=none,media=cdrom,id=drive-ide0-1-0 \
         -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 \
         -device virtio-net-pci,vlan=0,id=net0,mac=52:54:00:13:08:96 \
         -net tap -vnc 127.0.0.1:3

Create and start a virtual machine on the target host.

  • Create an image file for the migrated system on the target host. The file size must be greater than or equal to the image file size of the source host.

    Listing 8. Create a virtual machine on the target host

     victory4:~ # qemu-img create -f qcow2 dest.img 20G
     Formatting 'dest.img', fmt=qcow2 size=21474836480 encryption=off cluster_size=0

  • Start the virtual machine on the target host with the same qemu-kvm parameters as on the source host, changing the drive file to the image file created on the target host and adding the -incoming parameter to specify the protocol, IP address, and port used for dynamic migration. Because the -incoming parameter makes QEMU listen on the port, the virtual machine on the target host is in the paused state as soon as it starts, waiting for the migration from the source host. In this example, block migration uses the TCP protocol and the target host listens on port 8888.

    Listing 9. Migration commands on the target host

     victory4:~ # /usr/bin/qemu-kvm -enable-kvm -m 512 -smp 4 -name sles11.1ga \
         -monitor stdio -boot c \
         -drive file=/root/dest.img,if=none,id=drive-virtio-disk0,boot=on \
         -device virtio-blk-pci,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0 \
         -drive if=none,media=cdrom,id=drive-ide0-1-0 \
         -device ide-drive,bus=ide.1,unit=0,drive=drive-ide0-1-0,id=ide0-1-0 \
         -device virtio-net-pci,vlan=0,id=net0,mac=52:54:00:13:08:96 \
         -net tap -vnc 127.0.0.1:8 -incoming tcp:0:8888
     QEMU 0.12.3 monitor - type 'help' for more information
     (qemu) info status
     VM status: paused

Migrate virtual machines on the source host to the target host.

  • Return to the source host and enable some real-time services in the VM that is about to be migrated, to verify that dynamic migration does not interrupt them. In this example, run "top -d 1" in a terminal window of the virtual machine so that top refreshes the system process information every second.

    Figure 12. Enable the top service in the VM waiting for Migration

  • Enter the following migration command in the qemu console of the source host to start migration.

    Listing 10. source host migration commands

    (qemu) migrate -d -b tcp:victory4:8888

    -d                  allows the migration status to be queried during the migration;
                        without it, the monitor prompt returns only after the migration completes
    -b                  also migrates the virtual machine storage files (block migration)
    tcp:victory4:8888   migration protocol, destination host, and port; these must match
                        the -incoming parameter of the virtual machine on the target host

  • During the dynamic migration, the VM of the source host continues to run, and the top service is not interrupted. You can also query the migration status on the qemu console of the source host. The virtual machine of the target host is in the paused state. The percentage of migration progress is displayed on the qemu console of the target host.

    Listing 11. Monitor the migration process of virtual machines

    The qemu console of the source host shows the data being migrated:

    (qemu) info migrate
    Migration status: active
    transferred ram: 52 kbytes
    remaining ram: 541004 kbytes
    total ram: 541056 kbytes
    transferred disk: 2600960 kbytes
    remaining disk: 5787648 kbytes
    total disk: 8388608 kbytes

    The qemu console of the target host shows the percentage of the block migration completed:

    (qemu) Receiving block device images
    Completed 28 %

  • After the dynamic migration is completed, the source host's virtual machine changes to the paused state, and the virtual machine on the target host changes from the paused state to the running state. The top service continues to run without interruption.
  • Shut down the VM on the source host. All services have now been migrated to the target host, and the migration is complete.

Summary

This article implemented static migration and dynamic migration of KVM virtual machines on the SuSE Linux Enterprise Server 11 SP1 release, including block-based dynamic migration, which makes virtual machine resource configuration more flexible. Similar migration operations can be performed on other Linux distributions that support KVM, such as Ubuntu and Fedora. KVM virtual machines are constantly being enhanced and improved, and the open-source community and Linux system integrators are developing a variety of KVM-based management tools; in the future, KVM migration tools can be expected to improve greatly in performance, functionality, operability, and automation.

References

Learning

  • Refer to the article "libvirt virtualization library profiling" on developerworks to learn about the usage and architecture of libvirt.

  • Refer to the article "KVM Virtual Machine on IBM system x" on developerworks to learn how to install and configure a KVM virtual machine with a QEMU-KVM on Linux.
  • Refer to the article "explore Linux kernel virtual machines" on developerworks to learn the KVM architecture and its advantages.

Source: http://www.ibm.com/developerworks/cn/linux/l-cn-mgrtvm2/index.html
