Docker Source Analysis (VI): Docker Daemon Network


1. Preface

As an open-source, lightweight virtualization container engine, Docker has brought new development models to the cloud computing field. With container technology, Docker has unlocked the power of lightweight virtualization, making container scaling and application operation easier and more efficient than ever before. At the same time, Docker's powerful image technology makes it easy to distribute, deploy, and manage applications. However, Docker is still a relatively young technology, and not every problem in the Docker world is solved once and for all; the most typical example is Docker networking.

Undoubtedly, managing Docker containers and their network interactions effectively and efficiently has been a huge challenge for Docker administrators and developers. At present, most cloud computing systems are designed and implemented with distributed technology, yet in the pristine Docker world, Docker has no native cross-host networking capability, which lags more or less behind Docker's rapid adoption in the cloud computing field.

In industry, solving Docker's network problem is imperative, and in this environment many IT enterprises have developed new products to help improve Docker networking. Some of these companies are top internet companies like Google, while many startups are also exploring relentlessly at the forefront. Among these new products are Kubernetes, Google's open-source container management and orchestration project; Weave, a tool developed by Zett.io for connecting containers across hosts through a virtual network; Flannel, the CoreOS team's network overlay tool designed for Kubernetes; pipework, the SDN network solution built by Docker engineer Jérôme Petazzoni; as well as the SocketPlane project.

For Docker administrators and developers, Docker's cross-host communication capability is important, but Docker's own network architecture is just as important. Only through a deep understanding of Docker's network design and implementation is it possible to extend Docker's cross-host capability on that basis.

Docker's own network consists of two main parts: the Docker daemon's network configuration and the Docker container's network configuration. This article mainly analyzes the network of the Docker daemon.

2. Docker Daemon Network: Analysis Outline

This article analyzes, from the point of view of the source code, how the Docker daemon configures the network environment during its boot process. The chapters are arranged as follows:

(1) Docker daemon network configuration;

(2) Docker daemon network initialization;

(3) Create a Docker bridge.

This article is the sixth in the "Docker Source Analysis" series, covering the Docker daemon network; the seventh will cover the Docker container network.

3. Docker Daemon Network Configuration

In a Docker environment, the Docker administrator has full control over how the network is configured when the Docker daemon runs. The most familiar of Docker's network modes is the "bridge" mode. For bridge mode, the topology of Docker's network environment (including the Docker daemon network environment and the Docker container network environment) is shown below:

Figure 3.1 Docker network topology in bridge mode

Bridging, however, is only the most common mode in the Docker network model. Docker offers more options for users, which are described below one by one.

3.1 Docker Daemon Network Configuration Interface

Each time the Docker daemon starts, it initializes its own network environment, which ultimately provides network communication services to Docker containers.

Docker administrators can configure Docker's network environment through the interface Docker provides when the Docker daemon is started. In other words, this is done by running the Docker binary as "docker -d" and adding the corresponding flag parameters.

The flag parameters involved are EnableIptables, EnableIpForward, BridgeIface, BridgeIP, and InterContainerCommunication. These five parameters are defined in ./docker/daemon/config.go, with the following code:

flag.BoolVar(&config.EnableIptables, []string{"#iptables", "-iptables"}, true, "Enable Docker's addition of iptables rules")
flag.BoolVar(&config.EnableIpForward, []string{"#ip-forward", "-ip-forward"}, true, "Enable net.ipv4.ip_forward")
flag.StringVar(&config.BridgeIP, []string{"#bip", "-bip"}, "", "Use this CIDR notation address for the network bridge's IP, not compatible with -b")
flag.StringVar(&config.BridgeIface, []string{"b", "-bridge"}, "", "Attach containers to a pre-existing network bridge\nuse 'none' to disable container networking")
flag.BoolVar(&config.InterContainerCommunication, []string{"#icc", "-icc"}, true, "Enable inter-container communication")

The effects of these 5 flags are described below:

    • EnableIptables: ensure that Docker may add iptables rules on the host;
    • EnableIpForward: ensure that net.ipv4.ip_forward is enabled, so that on a host with multiple network interface devices, datagrams can be forwarded between them;
    • BridgeIP: configure a CIDR network address for the bridge created during the Docker daemon boot process;
    • BridgeIface: specify a particular communication bridge for the Docker network environment; if the value of BridgeIface is "none", no bridge is created for Docker containers and the containers' networking capability is turned off;
    • InterContainerCommunication: ensure that Docker containers can communicate with each other.
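
For instance, the following invocation (illustrative values only) keeps iptables management and IP forwarding enabled while disabling inter-container communication:

docker -d --iptables=true --ip-forward=true --icc=false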

In addition to the five flag parameters above, Docker uses a DefaultIp variable when creating the network environment, as follows:

opts.IPVar(&config.DefaultIp, []string{"#ip", "-ip"}, "0.0.0.0", "Default IP address to use when binding container ports")

This variable's role is to use DefaultIp as the default IP address when binding container ports.
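
As a usage illustration (the address 10.0.0.10 and the image name are placeholders), starting the daemon with --ip and then publishing a container port binds that port on the given address rather than on 0.0.0.0:

docker -d --ip=10.0.0.10
docker run -d -p 8080:80 some-image    # host port 8080 is bound on 10.0.0.10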

With this background on the Docker daemon's network, the following examples show how BridgeIP and BridgeIface can be used to configure the network when starting the Docker daemon:

Command to start Docker Daemon                     Purpose
docker -d                                          Start Docker Daemon with the default bridge docker0; no CIDR network address specified
docker -d -b="xxx"                                 Start Docker Daemon with the bridge xxx; no CIDR network address specified
docker -d --bip="172.17.42.1/16"                   Start Docker Daemon with the default bridge docker0, bound to the specified CIDR network address "172.17.42.1/16"
docker -d --bridge="xxx" --bip="10.0.42.1/16"      Error: compatibility issue; BridgeIface and BridgeIP are mutually exclusive and cannot both be specified
docker -d --bridge="none"                          Start Docker Daemon without creating a Docker network environment

Understanding BridgeIface and BridgeIP in depth, and using the corresponding flag parameters skillfully, is what configuring the Docker daemon's network environment amounts to. It is important to note that the Docker daemon network differs greatly from the Docker container network. The Docker daemon prepares the environment in which Docker containers create their networks; the Docker container network needs the support of the Docker daemon network, but is not uniquely determined by it. As an example, the Docker daemon can create the docker0 bridge to support subsequent bridge-mode containers, while a Docker container can still build its own network according to the user's needs: it may use a bridge-mode network, directly share the host's network interface, or use other modes, all of which will be covered in detail in the seventh article of the "Docker Source Analysis" series, on the Docker container network.

3.2 Docker Daemon Network Initialization

As noted in the previous section, Docker administrators can shape the Docker daemon's network environment through the network-related flag parameters BridgeIface and BridgeIP. In the simplest case, the administrator runs "docker -d", and the Docker daemon, when started, creates the appropriate network environment based on the values of these two flags.

The Docker daemon network initialization flowchart is as follows:

Figure 3.2 Docker Daemon Network initialization flowchart

Broadly, Docker daemon network initialization determines what type of network environment to build based on the parsed flag parameters. As the flowchart shows, network creation has two branches, and it is not hard to see what they represent: creating a network driver for Docker, or doing nothing with Docker's network.

The following subsections analyze the concrete implementation steps by reference to the Docker daemon network initialization flowchart.

3.2.1 Starting the Docker Daemon and Passing Flag Parameters

The user launches the Docker daemon, selectively passing the desired flag parameters on the command line.

3.2.2 Parsing the Network Flag Parameters

The flag package parses the flag parameters on the command line. Five flag parameters relate to the Docker daemon's network configuration: EnableIptables, EnableIpForward, BridgeIP, BridgeIface, and InterContainerCommunication; the role of each has been described above.

3.2.3 Preprocessing the Flag Parameters

Preprocessing the flag information related to network configuration includes detecting the compatibility of the configuration values and determining whether to create a Docker network environment.

First, verify that there is no mutually incompatible configuration information; the source code is located at ./docker/daemon/daemon.go#L679-L685.

Two kinds of compatibility are checked here. The first is the compatibility of the BridgeIP and BridgeIface settings: a conflict arises when the user starts the Docker daemon and specifies both BridgeIP and BridgeIface. The two are mutually exclusive: if the user specifies the name of a bridge via BridgeIface, that bridge must already exist, so there is no need to specify its IP address via BridgeIP; if the user specifies a network IP address via BridgeIP, the bridge must not have been created yet, and the Docker daemon will use the default bridge name "docker0" when creating it. The code is as follows:

// Check for mutually incompatible config options
if config.BridgeIface != "" && config.BridgeIP != "" {
    return nil, fmt.Errorf("You specified -b & --bip, mutually exclusive options. Please specify only one.")
}

The second is the compatibility of the EnableIptables and InterContainerCommunication settings: these two flags cannot both be false. The reason is simple: if InterContainerCommunication is set to false, the Docker daemon must forbid communication between Docker containers, and Docker implements this restriction with iptables filtering rules. Setting EnableIptables to false at the same time would turn off the use of iptables, producing a contradiction. The code is as follows:

if !config.EnableIptables && !config.InterContainerCommunication {
    return nil, fmt.Errorf("You specified --iptables=false with --icc=false. ICC uses iptables to function. Please set --icc or --iptables to true.")
}

After verifying the compatibility of the configuration, the Docker daemon determines whether it needs to create a network environment at all. The judgment is based on whether the value of BridgeIface equals the value of DisableNetworkBridge, a constant defined in ./docker/daemon/config.go#L13 with the string value "none". Therefore, if BridgeIface is "none", DisableNetwork is true and the Docker daemon does not create a network environment; if BridgeIface is not "none", DisableNetwork is false and the Docker daemon creates a network environment (bridge mode).
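
Condensed to its essence, the check amounts to the following sketch (a simplification, using the field names from config.go):

const DisableNetworkBridge = "none" // from ./docker/daemon/config.go#L13

// sketch: networking is disabled exactly when the user passed -b="none"
config.DisableNetwork = config.BridgeIface == DisableNetworkBridge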

3.2.4 Determining the Docker Network Mode

The Docker network mode is determined by the configuration value DisableNetwork. Since DisableNetwork was set in the previous step, the network mode can now be decided. The source code for this step is located at ./docker/daemon/daemon.go#L792-L805, as follows:

if !config.DisableNetwork {
    job := eng.Job("init_networkdriver")

    job.SetenvBool("EnableIptables", config.EnableIptables)
    job.SetenvBool("InterContainerCommunication", config.InterContainerCommunication)
    job.SetenvBool("EnableIpForward", config.EnableIpForward)
    job.Setenv("BridgeIface", config.BridgeIface)
    job.Setenv("BridgeIP", config.BridgeIP)
    job.Setenv("DefaultBindingIP", config.DefaultIp.String())

    if err := job.Run(); err != nil {
        return nil, err
    }
}

If DisableNetwork is false, a network environment needs to be created, specifically in Docker's bridge mode. The steps are:

(1) Create a job named "init_networkdriver";

(2) Set environment variables for this job: EnableIptables, InterContainerCommunication, EnableIpForward, BridgeIface, BridgeIP, and DefaultBindingIP;

(3) Run the job.

Running "Init_network" is the creation of the Docker Bridge, which will be analyzed in more detail in the next section.

If DisableNetwork is true, no network environment needs to be created, and the network mode is none.

This completes the process of Docker daemon network initialization.

3.3 Creating a Docker Bridge

Docker's network is one of the topics Docker developers mention most often, and bridge mode is the most commonly used mode in a Docker network. This section examines the creation process of the Docker bridge in detail.

The creation of the Docker bridge is accomplished by running the "init_networkdriver" job. The implementation of "init_networkdriver" is the InitDriver function, located at ./docker/daemon/networkdriver/bridge/driver.go#L79, whose execution flow is as follows:

Figure 3.3 Docker Daemon bridge creation flowchart

3.3.1 Extracting Environment Variables

In the InitDriver function, Docker first extracts the environment variables of the "init_networkdriver" job. There are six such variables, whose roles were described in detail above. The code is:

var (
    network        *net.IPNet
    enableIPTables = job.GetenvBool("EnableIptables")
    icc            = job.GetenvBool("InterContainerCommunication")
    ipForward      = job.GetenvBool("EnableIpForward")
    bridgeIP       = job.Getenv("BridgeIP")
)

if defaultIP := job.Getenv("DefaultBindingIP"); defaultIP != "" {
    defaultBindingIP = net.ParseIP(defaultIP)
}

bridgeIface = job.Getenv("BridgeIface")

3.3.2 Determining the Docker Bridge Device Name

After extracting the job's environment variables, Docker determines the name of the bridge device it will ultimately use. Docker first creates a bool variable named usingDefaultBridge, meaning whether the default bridge device is used, with a default value of false. If the value of BridgeIface in the environment variables is empty, the user did not specify a bridge device name when launching Docker, so Docker sets usingDefaultBridge to true and uses the default bridge device name DefaultNetworkBridge, i.e. docker0; if the value of BridgeIface is not empty, the condition does not hold and execution continues downward. This part of the code is:

usingDefaultBridge := false
if bridgeIface == "" {
    usingDefaultBridge = true
    bridgeIface = DefaultNetworkBridge
}

3.3.3 Finding the BridgeIface Bridge Device

Having determined the bridge device name bridgeIface, Docker looks for a device with that name on the host to see whether it exists. If it exists, the bridge device's IP address is returned; if not, nil is returned. The implementation is located at ./docker/daemon/networkdriver/bridge/driver.go#L99, as follows:

addr, err := networkdriver.GetIfaceAddr(bridgeIface)

The implementation of GetIfaceAddr is located in ./docker/daemon/networkdriver/utils.go. Its steps: first obtain the network device named bridgeIface through the InterfaceByName method of Golang's net package, then:

    • if no network device named bridgeIface exists, return the error directly;
    • if a device named bridgeIface exists, return that device's IP address.
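
To make this behavior concrete, here is a simplified sketch of what GetIfaceAddr does (an approximation based on the description above; the real version in ./docker/daemon/networkdriver/utils.go additionally filters for a single IPv4 address):

import (
    "fmt"
    "net"
)

// getIfaceAddr: simplified sketch of GetIfaceAddr
func getIfaceAddr(name string) (net.Addr, error) {
    iface, err := net.InterfaceByName(name)
    if err != nil {
        return nil, err // no bridge device with this name exists on the host
    }
    addrs, err := iface.Addrs()
    if err != nil {
        return nil, err
    }
    if len(addrs) == 0 {
        return nil, fmt.Errorf("interface %s has no IP addresses", name)
    }
    return addrs[0], nil // the bridge device's IP address
}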

It should be emphasized that when GetIfaceAddr returns an error, no bridge device named bridgeIface exists on the current host. This covers two different situations: first, the user specified BridgeIface, so usingDefaultBridge is false and the specified bridge device does not exist on the host; second, the user did not specify BridgeIface, so usingDefaultBridge is true, bridgeIface is named docker0, and the docker0 bridge does not exist on the host.

Conversely, if GetIfaceAddr returns an IP address, a bridge device named bridgeIface exists on the current host. This also covers two situations: first, the user specified BridgeIface, usingDefaultBridge is false, and the specified bridge device already exists on the host; second, the user did not specify BridgeIface, usingDefaultBridge is true, bridgeIface is named docker0, and the docker0 bridge already exists on the host. The second situation typically arises as follows: the first time the Docker daemon is launched on the host, it creates the default bridge device docker0, which remains on the host afterwards; when the Docker daemon is later restarted without specifying a bridge device, docker0 already exists.

The following two subsections analyze the two scenarios separately: bridgeIface already created, and bridgeIface not yet created.

3.3.4 BridgeIface Already Created

When the bridgeIface bridge device already exists on the Docker daemon's host, the Docker daemon still needs to check whether the user specified an IP address for the bridge device in the configuration.

If the user did not specify BridgeIP when launching the Docker daemon, the Docker daemon uses the existing IP address of the bridge named bridgeIface.

If the user did specify BridgeIP, it must be verified that the specified BridgeIP matches the existing IP address of the bridgeIface bridge device. If the two match, validation passes and execution continues; if they do not match, validation fails and an error is thrown, stating that the bridge IP does not match the existing bridge configuration. This code is located at ./docker/daemon/networkdriver/bridge/driver.go#L119-L129:

network = addr.(*net.IPNet)
// validate that the bridge ip matches the ip specified by BridgeIP
if bridgeIP != "" {
    bip, _, err := net.ParseCIDR(bridgeIP)
    if err != nil {
        return job.Error(err)
    }
    if !network.IP.Equal(bip) {
        return job.Errorf("bridge ip (%s) does not match existing bridge configuration %s", network.IP, bip)
    }
}

3.3.5 BridgeIface Not Created

When the bridgeIface bridge device has not been created on the Docker daemon's host, there are the two scenarios described above:

    • the user-specified bridgeIface has not been created;
    • the user did not specify bridgeIface, and docker0 has not been created.

When the user-specified bridgeIface does not exist on the host, that is, the default bridge device name docker0 is not being used, Docker logs "bridge not found" and returns the error. The code is:

if !usingDefaultBridge {
    job.Logf("bridge not found: %s", bridgeIface)
    return job.Error(err)
}

When the default bridge device name is used and the docker0 bridge device has not yet been created, the Docker daemon immediately creates the bridge and returns the docker0 bridge device's IP address. The code is:

// If the iface is not found, try to create it
job.Logf("creating new bridge for %s", bridgeIface)
if err := createBridge(bridgeIP); err != nil {
    return job.Error(err)
}

job.Logf("getting iface addr")
addr, err = networkdriver.GetIfaceAddr(bridgeIface)
if err != nil {
    return job.Error(err)
}
network = addr.(*net.IPNet)

Creating the Docker daemon's bridge device docker0 is implemented entirely by createBridge(bridgeIP), whose implementation is located at ./docker/daemon/networkdriver/bridge/driver.go#L245.

The main steps of the createBridge function are:

(1) Determine the IP address of the bridge device docker0;

(2) Create the docker0 bridge device through the createBridgeIface function, assigning a random MAC address to it;

(3) Add the IP address determined in step (1) to the newly created docker0 bridge device;

(4) Bring the docker0 bridge device up.

The following analyzes the implementation of these four steps in detail.

First, the Docker daemon determines the IP address of docker0 by checking whether the user specified BridgeIP. If the user did not specify BridgeIP, a suitable network segment is chosen from addrs, Docker's pre-prepared list of candidate IP segments. The code is located at ./docker/daemon/networkdriver/bridge/driver.go#L257-L278, as follows:

if len(bridgeIP) != 0 {
    _, _, err := net.ParseCIDR(bridgeIP)
    if err != nil {
        return err
    }
    ifaceAddr = bridgeIP
} else {
    for _, addr := range addrs {
        _, dockerNetwork, err := net.ParseCIDR(addr)
        if err != nil {
            return err
        }
        if err := networkdriver.CheckNameserverOverlaps(nameservers, dockerNetwork); err == nil {
            if err := networkdriver.CheckRouteOverlaps(dockerNetwork); err == nil {
                ifaceAddr = addr
                break
            } else {
                log.Debugf("%s %s", addr, err)
            }
        }
    }
}

The candidate network segments addrs for the bridge device are:

addrs = []string{
    "172.17.42.1/16", // Don't use 172.16.0.0/16, it conflicts with EC2 DNS 172.16.0.23
    "10.0.42.1/16",   // Don't even try using the entire /8, that's too intrusive
    "10.1.42.1/16",
    "10.42.42.1/16",
    "172.16.42.1/24",
    "172.16.43.1/24",
    "172.16.44.1/24",
    "10.0.42.1/24",
    "10.0.43.1/24",
    "192.168.42.1/24",
    "192.168.43.1/24",
    "192.168.44.1/24",
}

If the above process finds an available IP segment, it is assigned to ifaceAddr; if none is found, an error is returned indicating that no appropriate IP address could be assigned to the docker0 bridge device.
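
The overlap tests themselves reduce to a simple containment check. A minimal sketch, assuming two *net.IPNet values (the real helpers are CheckNameserverOverlaps and CheckRouteOverlaps in ./docker/daemon/networkdriver):

import "net"

// networksOverlap: two networks overlap when either one
// contains the other's base address
func networksOverlap(a, b *net.IPNet) bool {
    return a.Contains(b.IP) || b.Contains(a.IP)
}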

The second step creates the docker0 bridge device through the createBridgeIface function, implemented as follows:

func createBridgeIface(name string) error {
    kv, err := kernel.GetKernelVersion()
    // only set the bridge's mac address if the kernel version is > 3.3
    // before that it was not supported
    setBridgeMacAddr := err == nil && (kv.Kernel >= 3 && kv.Major >= 3)
    log.Debugf("setting bridge mac address = %v", setBridgeMacAddr)
    return netlink.CreateBridge(name, setBridgeMacAddr)
}

The code above uses the host's Linux kernel information to determine whether setting the bridge device's MAC address is supported: kernels newer than 3.3 support it, older ones do not. Since Docker is only stable on kernels no older than 3.8, it can be assumed that the kernel supports configuring MAC addresses. Finally, the docker0 bridge is created via netlink's CreateBridge function.

Netlink is a special socket-based communication mechanism in Linux that provides bidirectional data transfer between user-space applications and the kernel. In this mode, user space can use the standard socket API to access netlink's features, while kernel space uses a dedicated kernel API.
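
For illustration, a minimal sketch of opening a netlink socket from user space, assuming Linux and Go's standard syscall package (libcontainer's netlink package wraps this pattern in higher-level helpers):

import "syscall"

// openNetlinkSocket opens and binds a NETLINK_ROUTE socket, over which
// link and address requests can be exchanged with the kernel.
func openNetlinkSocket() (int, error) {
    fd, err := syscall.Socket(syscall.AF_NETLINK, syscall.SOCK_RAW, syscall.NETLINK_ROUTE)
    if err != nil {
        return -1, err
    }
    // bind so the kernel can address replies to this process
    if err := syscall.Bind(fd, &syscall.SockaddrNetlink{Family: syscall.AF_NETLINK}); err != nil {
        syscall.Close(fd)
        return -1, err
    }
    return fd, nil
}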

CreateBridge in libcontainer's netlink package performs the actual bridge device creation, using a system call as follows:

syscall.Syscall(syscall.SYS_IOCTL, uintptr(s), SIOC_BRADDBR, uintptr(unsafe.Pointer(nameBytePtr)))
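
Expanded slightly for readability, the surrounding logic looks roughly like the sketch below. SIOC_BRADDBR is the "add bridge" ioctl request number (0x89a0); the error handling here is an assumption for illustration, not a verbatim quote of the netlink package:

import (
    "syscall"
    "unsafe"
)

const SIOC_BRADDBR = 0x89a0 // ioctl request: create a bridge device

// createBridgeDevice: sketch of bridge creation via ioctl
func createBridgeDevice(name string) error {
    // any AF_INET socket descriptor can carry this ioctl
    s, err := syscall.Socket(syscall.AF_INET, syscall.SOCK_STREAM, 0)
    if err != nil {
        return err
    }
    defer syscall.Close(s)

    nameBytePtr, err := syscall.BytePtrFromString(name)
    if err != nil {
        return err
    }
    if _, _, errno := syscall.Syscall(syscall.SYS_IOCTL, uintptr(s),
        SIOC_BRADDBR, uintptr(unsafe.Pointer(nameBytePtr))); errno != 0 {
        return errno
    }
    return nil
}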

After the bridge device is created, the MAC address is configured for the docker0 bridge device; the implementing function is setBridgeMacAddress.

The third step binds the IP address to the newly created docker0 bridge device. The previous step only created a bridge device named docker0; an IP address still needs to be bound to it. The code is:

if err := netlink.NetworkLinkAddIp(iface, ipAddr, ipNet); err != nil {
    return fmt.Errorf("Unable to add private network: %s", err)
}

NetworkLinkAddIp is also implemented in libcontainer's netlink package; its main function is to bind an IP address to a network interface device through the netlink mechanism.

The fourth step brings the docker0 bridge device up. The code is:

if err := netlink.NetworkLinkUp(iface); err != nil {
    return fmt.Errorf("Unable to start network bridge: %s", err)
}

NetworkLinkUp is likewise implemented in libcontainer's netlink package; its function is to bring the Docker bridge device up.

At this point, through the four steps of determining the IP, creating the device, binding the IP, and bringing the device up, createBridge has completed its work on the docker0 bridge device.

3.3.6 Getting the Network Address of the Bridge Device

After the bridge device is created, it has a network address. The bridge's network address is used by the Docker daemon to assign IP addresses to Docker containers when creating them.

Docker obtains the network address of the bridge device with the code network = addr.(*net.IPNet).

3.3.7 Configuring the Docker Daemon's iptables

After the bridge is created, the Docker daemon configures iptables for containers and the host, including rules supporting the link operations needed between containers and forwarding rules for all external inbound traffic on the host. Details can be found in "Docker Source Analysis (IV): Docker Daemon NewDaemon Implementation". The code is located at ./docker/daemon/networkdriver/bridge/driver.go#L133, as follows:

// Configure iptables for link support
if enableIPTables {
    if err := setupIPTables(addr, icc); err != nil {
        return job.Error(err)
    }
}

// We can always try removing the iptables
if err := iptables.RemoveExistingChain("DOCKER"); err != nil {
    return job.Error(err)
}

if enableIPTables {
    chain, err := iptables.NewChain("DOCKER", bridgeIface)
    if err != nil {
        return job.Error(err)
    }
    portmapper.SetIptablesChain(chain)
}

3.3.8 Configuring Datagram Forwarding Between Network Devices

On Linux systems, packet forwarding is disabled by default. Packet forwarding means that when a host has multiple network devices and one of them receives a packet destined elsewhere, the packet is forwarded to another network device. By writing 1 to /proc/sys/net/ipv4/ip_forward, packets can be forwarded within the system. The code is:

if ipForward {
    // Enable IPv4 forwarding
    if err := ioutil.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte{'1', '\n'}, 0644); err != nil {
        job.Logf("WARNING: unable to enable IPv4 forwarding: %s\n", err)
    }
}

3.3.9 Registering Network Handlers

The last step in creating the Docker daemon's network environment is to register four network-related handlers: allocate_interface, release_interface, allocate_port, and link. Their respective roles are: allocating network devices for Docker containers, reclaiming Docker containers' network devices, assigning port resources to Docker containers, and performing link operations between Docker containers.
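
The registration itself is a short loop over a name-to-handler map, roughly as follows (a sketch following driver.go; Allocate, Release, AllocatePort, and LinkContainers are the bridge driver's handler functions):

for name, f := range map[string]engine.Handler{
    "allocate_interface": Allocate,
    "release_interface":  Release,
    "allocate_port":      AllocatePort,
    "link":               LinkContainers,
} {
    if err := job.Eng.Register(name, f); err != nil {
        return job.Error(err)
    }
}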

At this point, the Docker daemon's network environment initialization is complete.

4. Summary

In industry, Docker's network problems are a matter of broad concern. Docker's network environment can be divided into the Docker daemon network and the Docker container network. This article started with the Docker daemon's network and analyzed the familiar Docker bridge mode.

Docker's container and image technologies have brought many benefits to Docker practitioners, but Docker networking still has great potential for development. The next article, on the Docker container network, will present Docker's more flexible network configuration.

