Large-Scale Application of SDN and NFV Technology in the Cloud Data Center


Neo | 2016-01-29

Editor's note: Taking the cloud data center as the point of entry, this paper first examines three important SDN technologies: overlay networks, the SDN controller, and VxLAN. It then analyzes two key NFV issues: general-purpose server performance and the service chain. Finally, it describes the progress and conclusions of SDN/NFV technology testing and looks ahead to VDC application products.

1 Introduction

With the rise of cloud computing, data tends to become concentrated, and the traditional telecommunication network architecture has become a major barrier to the development of the cloud data center. To meet the requirements of virtual network resource scheduling and sharing in the cloud computing environment, the following key problems need to be solved in future data center resource pool operation.

    • Fast response to user needs. The network devices inside the cloud resource pool are numerous and their features complex; point-to-point manual configuration delays the response to user demand.
    • A clear view of the network topology. The network topology of the cloud resource pool is difficult to present clearly; in particular, the tenant network and the cloud resource network cannot be mapped onto each other, which complicates operation.
    • Flexible resource sharing and scheduling. It is difficult for the resource pool to achieve mutual isolation between tenants in a multi-tenant environment, and difficult to share and schedule network resources flexibly when networking across data centers.
    • Dynamic awareness of tenant network resource requirements. Different tenants differ in network traffic, security policy, performance requirements, and so on; the resource pool network cannot dynamically perceive tenant needs, resulting in resource waste or overload.

Technologies such as SDN (software-defined networking) and NFV (network functions virtualization) can enhance the network's ability to deliver differentiated services and to coordinate cloud, pipe, and endpoint. Introducing SDN/NFV technology into the cloud data center can therefore effectively solve the above problems, and the cloud data center network based on SDN/NFV technology is also an important direction for the future evolution of telecom networks.

2 Important Technologies of SDN

Among the many network environments (backbone network, data center, enterprise network, metropolitan area network edge, access network), the data center is the first place where traditional network technology runs into its limits, and it is also the first commercial application scenario for SDN. To meet the ever-increasing demands of Internet services, data centers have been developing toward large scale, automation, virtualization, and multi-tenancy, and face challenges in many respects, such as network performance and flexibility. SDN has become the preferred solution for meeting virtualized, multi-tenant, on-demand flexible networking requirements, especially when building infrastructure for IaaS services.

SDN is better understood not as a single network technology but as a next-generation network architecture formed by a number of important technologies and solutions. This section focuses on three important SDN technologies used in cloud data center networking: overlay networks, SDN controllers, and VxLAN.

2.1 Overlay Network
An overlay network superimposes a logical network (overlay logical network) on top of an existing IP network, shielding the differences of the underlying physical network and virtualizing network resources, so that multiple logically isolated network partitions and multiple heterogeneous virtual networks can coexist on the same shared network infrastructure. The concept of a logical overlay network was not invented by SDN; VLAN (virtual local area network) is a typical early example. In the cloud data center network domain, however, the overlay network has become one of the important driving forces of SDN today. Its main ideas can be summed up in three aspects: decoupling, independence, and control.

Decoupling: control of the network is separated from the physical network hardware and handled by a virtual network layer. This virtualized network layer is loaded on top of the physical network, shielding the underlying physical differences and rebuilding the entire network in virtual space. The physical network resources are thereby generalized into a pool of network capability, just as server virtualization turns server resources into a pool of computing capability, making the invocation of network resources more flexible and satisfying users' demand for on-demand delivery of network resources.

Independence: the overlay network scheme is carried on the IP network, so wherever IP is reachable the corresponding virtual network can be deployed, without any changes to the existing physical network architecture (existing network hardware, server virtualization solution, network management system, IP addressing, and so on). Such a solution can be deployed and implemented easily on the live network, which is its biggest advantage.

Control: the superimposed logical network is controlled in a software-programmable manner. With this scheme, network resources can be uniformly scheduled and delivered on demand, together with computing and storage resources. Virtualized network devices, represented by virtual switches, can either be integrated into the server virtualization hypervisor or be deployed at the gateway to integrate with the external physical network in software. These two deployment models represent the industry's two major schools in overlay network solutions: the former is the "soft" camp represented by VMware, the latter the "hard" camp represented by Huawei and Ruijie. All the virtual network devices work together under the unified control of the resource management platform: the virtual network is assembled from the required nodes, and the network resources are virtualized.

2.2 SDN Controller
The separation of the control plane from the data plane is one of the most basic principles of SDN. The idea is to move the control plane of network devices into a centralized controller, replace the control plane in the devices with a standardized southbound interface, and add a programmable northbound interface through which upper layers can call the controller. The controller is therefore central to the SDN architecture; in an SDN system, the interfaces between the layers are defined around it. An idealized SDN controller should be a collection of software systems providing the following functionality.

    • Network state management. The management and distribution of network state can be implemented with a database, which stores information from the controlled network element devices and related software.
    • An advanced data model. This data model describes the relationships between managed resources, policies, and other services provided by the controller. The YANG modeling language can be used to build it.
    • A RESTful (Representational State Transfer) API that provides control services to applications, facilitating interaction between the controller and the applications.
    • A secure control session, that is, a TCP session between the controller and the corresponding agent in the network element device.
    • A standards-based stateful protocol for application-driven configuration of network element devices.
    • Device, topology, and service discovery mechanisms, and a path computation system.
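The northbound API idea above can be illustrated with a minimal sketch. The JSON field names and the endpoint implied here are hypothetical, for illustration only, and do not match any specific controller's schema:

```python
import json

def make_flow_rule(switch_id, match, action):
    """Build the JSON body for a flow-programming request to a
    hypothetical RESTful northbound API; the field names are
    illustrative, not a real controller's schema."""
    return json.dumps(
        {"switch": switch_id, "match": match, "action": action},
        sort_keys=True,
    )

# Example: a rule steering one tenant subnet out of port 2.
rule = make_flow_rule("00:00:00:00:00:01",
                      {"ipv4_dst": "10.0.1.0/24"},
                      {"output": 2})
```

An application would then POST such a body to the controller's northbound endpoint with an ordinary HTTP client; the controller translates it into southbound configuration of the network elements.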

Currently, in cloud data center SDN solutions, the choice of controller falls into two camps. One is the closed SDN controllers dominated by commercial companies, including NSX Controller (VMware), VCFC (H3C), NETMATRIX/SNC (Huawei), Contrail (Juniper), and many other products. The other is the open-source SDN controllers being pushed by the open-source community, including projects such as NOX/POX (Nicira), Ryu (NTT), and Floodlight (Big Switch Networks). From a practical perspective, both have their pros and cons. Commercial SDN controllers, developed independently by vendors, often offer high maturity and reliability, but because they adopt proprietary technology, interoperability between vendors is greatly reduced. Open-source SDN controllers ensure a high level of interoperability thanks to open protocols and code, but still fall short in reliability.

2.3 VxLAN
VxLAN (Virtual eXtensible Local Area Network) is a network virtualization technology that aims to solve the scalability problems existing VLAN technology encounters in large-scale cloud data center deployments. The technology originated as an IETF draft jointly proposed by VMware, Cisco, Arista, Broadcom, and Citrix (since published as RFC 7348). It encapsulates a MAC-based Layer 2 Ethernet frame inside a Layer 3 UDP packet. With this MAC-in-UDP encapsulation, VxLAN provides a location-independent Layer 2 abstraction for virtual machines, so that virtual machines in different data centers can communicate over a large Layer 2 network, which makes it easier to migrate virtual machines across sites. Like VLAN, VxLAN is a logical network for multi-tenant isolation, but because VxLAN uses a 24-bit identifier, it can mark about 16 million virtual networks, far beyond the 4096 allowed by a VLAN tag. The VxLAN packet format is shown in Figure 1.

Figure 1 Vxlan Message Encapsulation format
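The scale advantage of the 24-bit VNI over the 12-bit VLAN ID follows directly from the identifier widths:

```python
# VLAN segment IDs are 12 bits wide; the VxLAN VNI is 24 bits wide.
vlan_segments = 2 ** 12    # 4096
vxlan_segments = 2 ** 24   # 16777216, roughly 16 million
print(vlan_segments, vxlan_segments)
```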

Typically, VxLAN operation relies on the VTEP (VxLAN Tunnel End Point) component, which provides all the functionality required for a Layer 2 Ethernet service to the end systems. It works roughly as follows: the VTEP examines the destination MAC address in a frame to find the IP address of the destination VTEP. When a virtual machine communicates with another virtual machine, it usually first sends a broadcast ARP packet, which its VTEP forwards to the multicast group of the corresponding VNI. From this packet, all other VTEPs learn the inner MAC address of the sending virtual machine and the outer IP address of its VTEP. The destination virtual machine then returns a unicast ARP response to the sender, from which the original VTEP likewise learns the address mapping for the destination.
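The flood-and-learn behavior described above can be modeled in a few lines. This is a toy sketch with illustrative addresses, not a real data-plane implementation:

```python
class Vtep:
    """Toy flood-and-learn model: a VTEP maps inner (VM) MAC addresses
    to the IP address of the remote VTEP behind which each VM lives."""

    def __init__(self, ip):
        self.ip = ip
        self.mac_to_vtep = {}           # inner MAC -> remote VTEP IP

    def learn(self, inner_mac, outer_ip):
        # Called for every frame received over the tunnel (ARP included).
        self.mac_to_vtep[inner_mac] = outer_ip

    def next_hop(self, inner_mac):
        # Known MAC: unicast to that VTEP. Unknown MAC: flood to the
        # VNI's multicast group, modeled here as returning None.
        return self.mac_to_vtep.get(inner_mac)

vtep_a = Vtep("10.0.0.1")
vtep_a.learn("52:54:00:aa:bb:cc", "10.0.0.2")   # learned from a received ARP
known = vtep_a.next_hop("52:54:00:aa:bb:cc")    # unicast to 10.0.0.2
unknown = vtep_a.next_hop("52:54:00:00:00:99")  # None: would be flooded
```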

At present, the industry's VTEP implementations fall into three types. The first implements the function in the virtualization software (hypervisor) layer: VMware's solution, for example, creates a VTEP component in its ESXi kernel; being closest to the virtualization layer, this approach is the most efficient. The second integrates the VTEP function into the virtual switch; the solution provided by H3C is of this type, and its advantage is support for multiple virtualization environments (vSphere, KVM, Xen, Hyper-V). The third adds VTEP capability to hardware network devices; this approach is better suited to non-virtualized environments, and traditional network equipment manufacturers such as Huawei and Ruijie take this route.

Here it is necessary to remind operations personnel that since the VTEP re-encapsulates ordinary IP packets, it adds roughly 50 bytes of overhead per packet. When the MTU of devices on the live network is set to the standard 1500 bytes and a packet exceeds the size that would require fragmentation, VxLAN-encapsulated IP packets can fail to reach their destination. There are two solutions: one is to lower the MTU in the sender's operating system to 1450 bytes or below; the other is to enable jumbo-frame support on all network devices along the path. The second method is recommended, because jumbo-frame support is usually enabled by default on the core devices of the backbone network, so operations staff only need to confirm that it is enabled on the terminating device and the peer access-layer device.
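The arithmetic behind this MTU caveat is simple: the roughly 50-byte figure is the sum of the outer headers VxLAN adds, and subtracting it from the standard Ethernet MTU gives the safe inner packet size:

```python
# Outer headers added by VxLAN encapsulation (standard sizes, in bytes):
ETH, IP, UDP, VXLAN = 14, 20, 8, 8
overhead = ETH + IP + UDP + VXLAN        # 50 bytes total
underlay_mtu = 1500                      # common Ethernet default
max_inner_packet = underlay_mtu - overhead
print(overhead, max_inner_packet)        # 50 1450
```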

3 NFV Key Issues

NFV and SDN are both next-generation network technologies proposed in recent years to meet the needs of new applications. In general they focus on solving different network problems from different angles, yet they are closely related. Both can improve the overall manageability of the network, but their goals and methods differ: SDN achieves centralized network control by separating the control plane from the data plane, while NFV, which originated from operator needs, separates software from hardware to virtualize network functions, focusing on optimizing the network services themselves. The two technologies operate in different dimensions but are highly complementary: combining the flexibility of SDN in traffic steering with the NFV architecture can improve network efficiency and overall agility. The relationship between NFV and SDN is shown in Figure 2.

Figure 2 NFV and SDN relationships

The industry generally holds that two key issues must be addressed before NFV technology is truly commercially viable: the hardware-related issue of general-purpose server performance, and the software-related issue of the service chain. Both are analyzed below.

3.1 General-Purpose Server Performance
By introducing NFV technology into the cloud data center, any type of network function, such as routers, firewalls, and load balancers, can run on shared general-purpose servers and be divided on demand into virtual machine software instances. The benefit is an effective reduction in capital expenditure (CAPEX) and operating expense (OPEX), and shorter business deployment and turn-up times. At the same time, a harder question must be faced: can the general-purpose server completely replace the old dedicated hardware network devices?

Three classes of chips, ASIC, NP, and CPU, form the foundation of the IT architecture. ASICs and NPs, based on the traditional pipeline model, achieve very high performance in packet forwarding and processing, but their functions are fixed and they lack the flexibility to load applications. The essence of the new converged IT architecture is "application oriented", so general-purpose CPU architectures represented by x86 have attracted wide attention. The CPU's advantage in computing power is obvious, and it is well suited to processing Layer 4 to Layer 7 services, but its weakness is the lack of a dedicated data-plane operating system, which results in weak I/O forwarding capability. DPDK (Data Plane Development Kit), officially launched by Intel in 2013, compensates for this weakness: it is designed for fast packet processing and can significantly improve packet-processing performance. According to Intel's published lab test data, DPDK-based data-plane forwarding has reached rates of tens of Gbit/s. While this is still some distance from large-scale commercial use, the industry has seen the potential for Moore's law to continue its success in the NFV field. In addition, because Intel open-sourced the DPDK code, network vendors at large can use DPDK to improve the forwarding performance of their equipment. This technology makes it possible for general-purpose servers to fully replace dedicated hardware network devices, and even to deliver higher-performance network applications, laying a solid foundation for NFV to move out of the laboratory and into large-scale commercial use.
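To see why software I/O forwarding is the hard part, consider the packet rate a single port demands at line rate. The calculation below uses the standard Ethernet framing overhead; it is background arithmetic, not a figure from the article's tests:

```python
def line_rate_pps(link_bits_per_s, frame_bytes):
    """Packets per second needed to saturate a link: on the wire each
    Ethernet frame also carries an 8-byte preamble and a 12-byte
    inter-frame gap."""
    bits_on_wire = (frame_bytes + 8 + 12) * 8
    return link_bits_per_s // bits_on_wire

# A single 10 Gbit/s port at minimum-size 64-byte frames:
pps = line_rate_pps(10_000_000_000, 64)
print(pps)  # 14880952, i.e. roughly 14.88 million packets per second
```

Sustaining tens of millions of small packets per second is far beyond the native kernel network stack, which is the gap DPDK's poll-mode, kernel-bypass design is meant to close.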

3.2 Service Chain
With the delivery of cloud services, especially in multi-tenant environments, network services are becoming increasingly complex. As data packets travel through the network, they need to pass through a variety of service nodes to ensure that the network provides users with secure, fast, and stable service as designed. Typical service nodes include firewalls, load balancers, and intrusion detection systems. Network traffic usually has to traverse these service nodes in the order established by the business logic; this ordered sequence is called a service chain. To implement varied business logic, the service chain needs to be programmable and flexibly composable. As SDN and NFV continue to evolve, the service chain becomes ever more important. Traditional networks host individual functions on dedicated hardware and deploy them in the physical network as a fixed topology. With the introduction of service orchestration and service chaining, the network can be abstracted, and operators can define the required network functions and processing sequence for each business flow.

Through the SDN controller, multiple NFV software-based service function modules are linked together so that services can be operated flexibly. The service chain is the key to multi-service integration and lays the foundation for developing personalized application platforms. The roles in a service chain, namely the service function nodes, the flow classification nodes, the control plane, and the proxy nodes, work together to realize the defined service.
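The ordered-traversal idea can be sketched as follows. Packets are modeled as dicts and service functions as plain Python functions applied in a fixed order; all names and rules here are illustrative, not any product's API:

```python
def firewall(pkt):
    # Only permit web traffic in this toy example.
    return pkt if pkt.get("port") in (80, 443) else None

def load_balancer(pkt):
    pkt["backend"] = hash(pkt["src"]) % 2   # choose one of two backends
    return pkt

def ids(pkt):
    return pkt                              # inspection stub: pass through

CHAIN = [firewall, load_balancer, ids]      # order fixed by business logic

def apply_chain(pkt, chain=CHAIN):
    for service in chain:
        pkt = service(pkt)
        if pkt is None:                     # dropped by a service node
            return None
    return pkt

allowed = apply_chain({"src": "10.0.1.5", "port": 80})
blocked = apply_chain({"src": "10.0.1.5", "port": 23})  # dropped by firewall
```

Making the chain a plain, reorderable list is the point: a programmable service chain lets the operator recompose the same functions per tenant or per flow, instead of wiring them into a fixed physical topology.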

4 Cloud data Center scenarios based on SDN and NFV technology

With the gradual maturation of the technology, more and more operators, Internet companies, and enterprises are carrying out experimental network deployment, testing, and validation in SDN/NFV-related areas, and significant progress has been made in key links such as demand-scenario identification, product verification, and commercial preparation. AT&T's Domain 2.0 program, launched in 2013, aims to use SDN/NFV technology to transform the network infrastructure from hardware-centric into a software-centric, open, cloud-based network. Unlike traditional telecom operators' business demands on SDN networks, Internet companies, as technology pioneers, are relatively advanced in commercializing SDN. For example, Google announced that deploying SDN raised link utilization between its data centers to more than 90%, and in April 2014 the company introduced the Andromeda network virtualization platform based on SDN and NFV technology.

4.1 Test progress and conclusion
Jiangsu Branch of China Telecom Co., Ltd. (hereinafter Jiangsu Telecom) has been actively exploring the application of SDN/NFV technology in the cloud data center since 2014, focusing on problems such as fragmented machine-room resources, high idle rates in non-core areas, network resources that cannot be allocated on demand, network architecture that is difficult to adjust flexibly, and overly complex network configuration. By piloting several vendors' solutions in cloud resource pools in different cities across the province, it achieved a large Layer 2 network architecture spanning data centers and summarized a cloud data center networking scheme based on SDN/NFV technology. The test environment topologies are shown in Figure 3 and Figure 4.

Figure 3 Cloud Data Center network architecture based on SDN/NFV technology (vendor a)

Figure 4 Cloud Data Center network architecture based on SDN/NFV technology (vendor B)

In the SDN field, the pilot work revolved around a cross-data-center large Layer 2 network based on SDN+VxLAN technology, verifying functional scenarios such as cross-room east-west interoperability, north-south interoperability, and live virtual machine migration. Although the schemes provided by the various vendors differ slightly, the test results show that the expected outcomes can be achieved. The basic functional scenarios and their value are summarized in Table 1.

Table 1 Basic SDN+VxLAN feature scenarios and their value

Scenario: cross-room, same-subnet east-west interoperability (Server A and Server B located in different rooms).
Value: two business systems deployed in different data centers can interoperate via private IP addresses, effectively saving public IP address resources.

Scenario: same-room north-south interoperability (server and egress equipment in the same room); cross-room north-south interoperability (server and egress equipment in different rooms).
Value: enables centralized control of the egress (gateway) for servers hosted in multiple data centers and on-demand deployment of value-added services (service chain, load balancing, etc.), avoiding repeated investment in each data center; also satisfies active-active business system and primary/standby disaster recovery requirements, improving system reliability.

At the same time, taking full advantage of the direct fiber connection between two cloud resource pools (located in different machine rooms), performance comparison tests of several vendors' SDN+VxLAN solutions were carried out, simulating four classes of application system business scenarios: website, FTP, video, and database. The test results are shown in Table 2.

Table 2 Performance test results of the fiber direct connection vs. the vendors' large Layer 2 interconnection schemes

Solution                      Website         FTP                           Video         Database
                              visit time/s    upload/download rate (MB/s)   packet loss   tpm/tps
Fiber direct (within 20 km)   0.71            581.1/244.3                   <3%           6829/115
Vendor A                      0.56            90.4/82.7                     <3%           5866/99
Vendor B                      0.51            90.9/108.2                    <3%           7113/120
Vendor C                      0.58            51.6/13.5                     <3%           6754/114

In the NFV field, using the Jiangsu Telecom business cloud as the test environment, the pilot deployed the relatively mature VMware NSX solution and pioneered a network self-service cloud process based on the NFV architecture, with secondary development carried out against the relevant IT support systems. At present, three types of virtual network element functions have been developed: virtual firewall, virtual load balancer, and virtual DHCP. The relevant configuration interfaces are shown in Figure 5 and Figure 6.

Figure 5 Virtual Firewall Self-service configuration Interface

Figure 6 Virtual Load Balancer Self-service configuration Interface

4.2 VDC Application Product Outlook
While carrying out the technical verification work described above, the plan is to take the integration of cloud computing and traditional data center resources as a breakthrough point and, combined with IDC product features, launch an innovative product for enterprise customers based on SDN/NFV technology: the VDC (virtual data center). A VDC is a dedicated virtual resource pool built from physical devices, a new data center form that applies cloud computing concepts to the IDC. The VDC network uses VxLAN-based large Layer 2 technology to interconnect multiple IDC machine rooms, providing east-west Layer 2 interoperability between rooms at the underlay and consolidating fragmented IDC resources. Through the application of SDN, NFV, and automated deployment technologies, a unified, innovative VDC operation and management system is built on a scalable virtualized infrastructure, providing users with flexible, secure private cloud services that allow them to configure and use cloud resources within a virtual network they define themselves. Users can take full control of their virtual network environment, including choosing their own IP address ranges, creating subnets, and configuring routing tables, gateways, and even complex Layer 4 to Layer 7 application delivery services. On this basis, value-added services such as application environment configuration, security management, and maintenance can also be provided according to user needs. The VDC logical model is shown in Figure 7.

Figure 7 VDC Logic model

The requirements description for the VDC business scenarios is shown in Table 3.

Table 3 VDC business scenario requirements

1. Multi-room cloud services (business ownership: cloud business)
Scenarios:
  • Scenario 1: build a VDC across sites to provide users with virtual machine services, meeting cloud business needs;
  • Scenario 2: form a VDC within the same city to provide users with virtual machine services, meeting cloud business needs;
  • Scenario 3: VDC virtual machine migration, resource migration, dynamic resource deployment (minute-level), and similar needs.
Requirements analysis and technical indicators:
  • the VDC packet loss rate should be below 0.1%;
  • the VDC should manage user IP addresses, tunnel identifiers (e.g. VNI), and global MAC addresses, with dynamic discovery; address spaces may overlap;
  • the VDC should interoperate with the cloud resource pool, with interfaces supporting resource coordination;
  • the VDC should support one-click deployment, and the VDC's own configuration should be decoupled as far as possible from user business parameters (such as IP and MAC addresses).

2. Multi-room IDC resource integration (business ownership: IDC business)
Scenario: the existing machine room cannot meet a customer's rack needs, so racks in another room in the same city must be used for expansion, with Layer 2 interoperability between the two rooms.
Requirements analysis and technical indicators:
  • consolidate and manage the resources of existing data center rooms;
  • the system should be able to collect and utilize the scattered rack resources of the machine rooms in real time and flexibly;
  • provide a unified resource view so that users can perceive resource usage.

3. Automatic redundancy and backup of user business (business ownership: IDC business, cloud business)
Scenario: customer application systems require data backup and disaster recovery, in two forms: data disaster recovery (offsite high-end storage, low-cost cloud storage) and application disaster recovery (non-real-time), for price-sensitive customers with data and application disaster-recovery needs such as banks and securities firms.
Requirements analysis and technical indicators:
  • near-term demand for offsite real-time data backup and real-time application backup is not obvious; high-end customers' personalized needs are more common and are generally met on a project basis;
  • secure bandwidth for the transmission channels and IP links during data backup.

4. Hybrid cloud requirements
Scenarios:
  • Scenario 1: users rent the telecom public cloud and interconnect it at Layer 2 with their own private cloud, sharing application-layer business resources;
  • Scenario 2: when a user's private cloud connects to another private cloud or the telecom public cloud, there are quality (SLA) requirements on the connecting link;
  • Scenario 3: intensive deployment of value-added business platforms (e.g. single-point deployment of a DDoS scrubbing system serving the whole DC via traffic diversion).
Requirements analysis and technical indicators:
  • users' private clouds need Layer 2 interoperability with the public cloud; users plan their own IP addresses and establish Layer 3 connections;
  • the network must provide security for cloud-to-cloud connectivity;
  • Layer 7 traffic diversion of user business is required.

5. Rapid adjustment of customer leased bandwidth
Scenarios:
  • Scenario 1: east-west traffic bandwidth adjustment. Application systems A and B (or parts A and B) of the same customer are deployed in different IDC rooms or cloud resource pools, with internal traffic between A and B. Deployment cases include: physical server to physical server; cloud host to cloud host; physical server to cloud host; distributed deployment of physical servers. In these cases, link bandwidth (transport, IP) between the customer's A and B systems/parts must be increased or decreased quickly;
  • Scenario 2: north-south traffic bandwidth adjustment. Customers leasing IDC egress bandwidth require rapid increases in egress bandwidth at (10-)minute granularity;
  • Scenario 3: multi-room shared Internet egress. A user rents resources in several rooms but leases an Internet egress in only one, with the other rooms sharing this north-south egress; QoS protection is required for both north-south and inter-room traffic.
Requirements analysis and technical indicators:
  • support rapid scheduling and activation of users' leased east-west traffic bandwidth (transport, IP) while guaranteeing link QoS;
  • cloud users and hosting users need to schedule north-south Internet egress bandwidth, so the VDC needs a fast scheduling mechanism;
  • multi-room shared resources must be combined with the resource systems and support scheduling in multi-tenant scenarios, preventing scheduling conflicts.

6. Automated operation and maintenance (business ownership: IDC business, cloud business)
Scenario: when users use the virtual data center, the location of virtual machines or physical hosts is hard to determine in advance and is allocated by the system in real time (minute-level), so there is a need for operation services provided on the user's behalf by the telecom operator.
Requirements analysis and technical indicators:
  • users lack means to maintain cloud virtual machines and network devices; the cloud management platform needs to provide corresponding O&M methods;
  • users lack means to maintain hosts and their network devices; the VDC itself, the resource systems, and the DC management system need to provide new O&M methods.
5 Concluding Remarks

Traditional network equipment has supported the development and application of networks over the past few decades. With the rise of cloud computing and the mobile Internet, users' business demands have become diversified, flexible, and uncertain; the closed network architecture can no longer meet the needs of practical applications and faces more and more problems and challenges. As a revolutionary new technology, SDN/NFV will have a significant impact on the future evolution of the network. Cloud-network convergence has become a trend: whether operator or Internet enterprise, whoever takes the lead in deeply integrating cloud computing with the network infrastructure will win the opportunity to lead in the "Internet+" era.


Author profiles:
Sean (b. 1984), male, engineer at the Operation and Maintenance Center of China Telecom Co., Ltd. Jiangsu Branch, platform maintenance technical support engineer, individual member of the China Institute of Communications. Main research directions: cloud computing platform maintenance and optimization, SDN/NFV technology pre-research, and product development.

Ding (b. 1977), male, engineer at the Operation and Maintenance Center of China Telecom Co., Ltd. Jiangsu Branch, director of the Platform Maintenance Department. Main research direction: platform maintenance and management.

Cheng (b. 1975), male, engineer at the Operation and Maintenance Center of China Telecom Co., Ltd. Jiangsu Branch, deputy director of the Platform Maintenance Department. Main research direction: platform maintenance and management.

Lu Ling (b. 1980), male, engineer at the Operation and Maintenance Center of China Telecom Co., Ltd. Jiangsu Branch, platform maintenance technical support engineer. Main research directions: cloud computing platform architecture planning and new technology research.

Cong Sheng (b. 1984), male, engineer at the Operation and Maintenance Center of China Telecom Co., Ltd. Jiangsu Branch, platform maintenance technical support engineer. Main research directions: cloud computing platform maintenance and optimization, and SDN/hybrid cloud pre-research.

This article was originally published by SDNLab.
