Developing a Highly Scalable Winsock Application Using Completion Ports


Writing a simple network application is not hard; you only need to master a few key steps: create a socket, connect it, and then send and receive data. What is genuinely difficult is writing a network application that can handle anywhere from a handful of connections to many thousands. This article discusses how to use Winsock2 on Windows NT and Windows 2000 to develop highly scalable Winsock applications. The main focus is the server side of the client/server model, although many of the points apply to both sides.
APIs and Scalability

Through the Win32 overlapped I/O mechanism, an application can submit an I/O request and continue doing other work while the operation completes in the background; once the overlapped operation finishes, the thread receives a notification. This mechanism is especially useful for time-consuming operations. Functions such as WSAAsyncSelect() on Windows 3.1 and select() on UNIX are easy to use, but they cannot meet the demands of scalability. The completion port mechanism, on the other hand, is optimized inside the operating system, and on Windows NT and Windows 2000, overlapped I/O used together with completion ports is what truly allows an application's responsiveness to scale.

Completion Ports

A completion port is essentially a notification queue into which the operating system places notifications for completed overlapped I/O requests. Once an I/O operation completes, a worker thread that can process the result receives the notification. A socket can be associated with a completion port at any point after it has been created.

Typically, an application creates a certain number of worker threads to process these notifications; how many depends on the application. Ideally there is one thread per processor, but that also assumes no thread ever performs a blocking operation such as a synchronous read or write or a wait on an event. Each thread is given a time slice during which it can run, after which another thread is given its own slice and starts executing. If a thread performs a blocking operation, the operating system takes away its unused time slice and lets other threads run; in other words, the blocked thread fails to use its full time slice, and when that happens the application should have additional threads available to take advantage of the idle time.
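
As a rough sketch of sizing the worker pool, the code below creates one worker thread per processor; it assumes a completion-port handle hIocp (created as shown in the next section) and the WorkerThread routine shown later in Figure 2.

void StartWorkerThreads(HANDLE hIocp)
{
    SYSTEM_INFO si;
    DWORD       i;

    GetSystemInfo(&si);

    // One worker thread per processor, as suggested above.
    for (i = 0; i < si.dwNumberOfProcessors; i++)
    {
        HANDLE hThread = CreateThread(NULL, 0, WorkerThread,
                                      (LPVOID)hIocp, 0, NULL);
        if (hThread == NULL)
        {
            // Error
        }
        // The thread keeps running; we do not need to keep its handle.
        CloseHandle(hThread);
    }
}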

Using a completion port is a two-step process. First, create the completion port, as shown in the following code:

HANDLE hIocp;

hIocp = CreateIoCompletionPort(
    INVALID_HANDLE_VALUE,
    NULL,
    (ULONG_PTR)0,
    0);
if (hIocp == NULL) {
    // Error
}

After the completion port has been created, associate each socket that will use it with the port. This is done by calling CreateIoCompletionPort() again, with the first parameter (FileHandle) set to the socket handle and the second parameter (ExistingCompletionPort) set to the handle of the completion port just created.

The following code creates a socket and associates it with the completion port:

SOCKET s;

s = socket(AF_INET, SOCK_STREAM, 0);
if (s == INVALID_SOCKET) {
    // Error
}

if (CreateIoCompletionPort((HANDLE)s,
                           hIocp,
                           (ULONG_PTR)0,
                           0) == NULL)
{
    // Error
}

At this point the socket is associated with the completion port, and any overlapped operations on the socket will post their completion notifications through the port. Note that the third parameter of CreateIoCompletionPort() sets a "completion key" for the socket; the completion key can be any value that fits in a ULONG_PTR. The application gets this completion key back with every completion notification, so it can be used to pass per-socket context information along with the socket.
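
For instance, here is a minimal sketch of passing a pointer to a per-socket context structure as the completion key; the PER_SOCKET_CONTEXT type and its contents are hypothetical, not part of any Winsock API.

// Hypothetical per-socket state passed as the completion key.
typedef struct _PER_SOCKET_CONTEXT {
    SOCKET s;
    // ... other per-connection state (counters, buffers, and so on)
} PER_SOCKET_CONTEXT;

PER_SOCKET_CONTEXT *ctx = (PER_SOCKET_CONTEXT *)malloc(sizeof(PER_SOCKET_CONTEXT));
if (ctx == NULL)
{
    // Error
}
ctx->s = s;

if (CreateIoCompletionPort((HANDLE)s,
                           hIocp,
                           (ULONG_PTR)ctx,   // completion key
                           0) == NULL)
{
    // Error
}

// GetQueuedCompletionStatus() later returns this same pointer in its
// completion-key output parameter for every notification on this socket.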

After creating the completion port and associating one or more sockets with it, we need to create several worker threads to process the completion notifications. These threads repeatedly call GetQueuedCompletionStatus(), which returns the next completion notification.

Next, let's look at how the application keeps track of these overlapped operations. When an application calls an overlapped operation function, a pointer to an OVERLAPPED structure is included among its parameters, and after the operation completes GetQueuedCompletionStatus() hands that pointer back. From the OVERLAPPED structure alone, however, the application cannot tell which operation just completed. To track operations, define your own structure that contains an OVERLAPPED structure plus whatever tracking information is needed.

Whenever an overlapped operation function such as WSASend() or WSARecv() is called, pass an OVERLAPPEDPLUS structure through its lpOverlapped parameter. This lets you attach state information to each overlapped call. When the operation completes, GetQueuedCompletionStatus() gives you back a pointer from which your custom structure can be recovered. Note that the OVERLAPPED field does not have to be the first field of the extended structure: once you have the pointer to the OVERLAPPED structure, use the CONTAINING_RECORD macro to extract a pointer to the enclosing extended structure.

The OVERLAPPEDPLUS structure is defined as follows:

typedef struct _OVERLAPPEDPLUS {
    OVERLAPPED ol;
    SOCKET     s, sclient;
    int        OpCode;
    WSABUF     wbuf;
    DWORD      dwBytes, dwFlags;
    // other useful information
} OVERLAPPEDPLUS;

#define OP_READ   0
#define OP_WRITE  1
#define OP_ACCEPT 2

Now let's take a look at the worker thread in Figure 2.

Figure 2: Worker Thread

DWORD WINAPI WorkerThread(LPVOID lpParam)
{
    ULONG_PTR      *PerHandleKey;
    OVERLAPPED     *Overlap;
    OVERLAPPEDPLUS *OverlapPlus,
                   *newolp;
    DWORD           dwBytesXfered;
    int             ret;

    while (1)
    {
        ret = GetQueuedCompletionStatus(
            hIocp,
            &dwBytesXfered,
            (PULONG_PTR)&PerHandleKey,
            &Overlap,
            INFINITE);
        if (ret == 0)
        {
            // Operation failed
            continue;
        }
        OverlapPlus = CONTAINING_RECORD(Overlap, OVERLAPPEDPLUS, ol);

        switch (OverlapPlus->OpCode)
        {
        case OP_ACCEPT:
            // Client socket is contained in OverlapPlus->sclient
            // Add client to completion port
            CreateIoCompletionPort(
                (HANDLE)OverlapPlus->sclient,
                hIocp,
                (ULONG_PTR)0,
                0);

            // Need a new OVERLAPPEDPLUS structure
            // for the newly accepted socket. Perhaps
            // keep a look-aside list of free structures.
            newolp = AllocateOverlappedPlus();
            if (!newolp)
            {
                // Error
            }
            newolp->s      = OverlapPlus->sclient;
            newolp->OpCode = OP_READ;

            // This function prepares the data to be sent
            PrepareSendBuffer(&newolp->wbuf);

            ret = WSASend(
                newolp->s,
                &newolp->wbuf,
                1,
                &newolp->dwBytes,
                0,
                &newolp->ol,
                NULL);

            if (ret == SOCKET_ERROR)
            {
                if (WSAGetLastError() != WSA_IO_PENDING)
                {
                    // Error
                }
            }

            // Put structure in look-aside list for later use
            FreeOverlappedPlus(OverlapPlus);

            // Signal accept thread to issue another AcceptEx
            SetEvent(hAcceptThread);
            break;

        case OP_READ:
            // Process the data read
            // ...

            // Repost the read if necessary, reusing the same
            // receive buffer as before
            memset(&OverlapPlus->ol, 0, sizeof(OVERLAPPED));
            ret = WSARecv(
                OverlapPlus->s,
                &OverlapPlus->wbuf,
                1,
                &OverlapPlus->dwBytes,
                &OverlapPlus->dwFlags,
                &OverlapPlus->ol,
                NULL);

            if (ret == SOCKET_ERROR)
            {
                if (WSAGetLastError() != WSA_IO_PENDING)
                {
                    // Error
                }
            }
            break;

        case OP_WRITE:
            // Process the data sent, etc.
            break;
        } // switch
    } // while
} // WorkerThread

The PerHandleKey variable receives the completion key that was set when the completion port was associated with the socket; the Overlap parameter returns a pointer into the OVERLAPPEDPLUS structure that was used to issue the overlapped operation.

Remember that if an overlapped operation fails immediately (that is, the call returns SOCKET_ERROR and the error is something other than WSA_IO_PENDING), no completion notification is ever posted to the completion port. If the call succeeds, or fails with WSA_IO_PENDING, the completion port will always receive a completion notification.

Socket Architecture on Windows NT and Windows 2000
When developing a highly scalable Winsock application, a basic understanding of the socket architecture of Windows NT and Windows 2000 is very helpful.

Unlike some other operating systems, Windows NT and Windows 2000 transport protocols do not expose a socket-style interface that applications talk to directly; instead they expose a lower-level API called the Transport Driver Interface (TDI). Winsock's kernel-mode driver provides the socket emulation (it is implemented in AFD.SYS) and is responsible for connection and buffer management as well as for communicating with the underlying transport drivers.

Who Manages the Buffers?

As mentioned above, applications talk to the transport protocol drivers through Winsock, and AFD.SYS handles buffer management on the application's behalf. That is, when an application calls send() or WSASend(), AFD.SYS copies the data into its own internal buffer (sized according to the SO_SNDBUF setting) and the send() or WSASend() call returns immediately; AFD.SYS then sends the data in the background. If, however, the application asks to send more data than the SO_SNDBUF buffer can hold, the send()/WSASend() call blocks until all of the data has been sent.

The same applies to receiving data from a remote client. As long as the application has not posted a receive, and the incoming data does not exceed the SO_RCVBUF setting, AFD.SYS copies the data into its internal buffer first; when the application later calls recv() or WSARecv(), the data is copied from the internal buffer into the buffer supplied by the application.

In most cases this architecture works very well, especially for applications written in the traditional socket style with non-overlapped send() and receive() calls. But a programmer should be careful: although the SO_SNDBUF and SO_RCVBUF options can be set to 0 through the setsockopt() API (turning off the internal buffering), the programmer must fully understand the consequences of disabling AFD.SYS's internal buffers and the buffer copying that takes place while data is being sent or received, or serious problems can result.

For example, suppose an application disables send buffering by setting SO_SNDBUF to 0 and then issues a blocking send() call. In this case the kernel locks the application's buffer, and the send() call does not return until the receiver has acknowledged the entire buffer. This might look like a convenient way of confirming that your data has been received in full by the other side, but it is not. A TCP acknowledgment only means the data has reached the remote TCP stack; it does not mean the data has been successfully delivered to the remote application, which may, for example, be in a state where AFD.SYS cannot yet copy the data up to it. An even more important problem is that each thread can only have one send in progress at a time, which is extremely inefficient.

Setting SO_RCVBUF to 0 to disable AFD.SYS's receive buffering does not improve performance either. It merely forces the received data to be buffered at a lower level than Winsock, and when you do post a receive the data still has to be copied into your buffer, so the buffer-copy avoidance you were hoping for never happens.

It should be clear by now that disabling buffering is not a good idea for most applications. As long as the application keeps a few overlapped WSARecv() calls outstanding on each connection at all times, there is usually no need to disable the receive buffer anyway: if AFD.SYS always has an application-supplied buffer available, it has no reason to use its internal buffers.

A high-performance server application can disable the send buffer without losing performance. However, such an application must take great care to always have multiple overlapped sends outstanding, rather than posting the next send only after the previous one completes. If the application posts sends strictly one after another, it wastes the idle time between one send completing and the next being submitted; keeping several sends outstanding ensures that the transport driver can move on to the next buffer the moment it finishes sending the current one.
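
As a small sketch, disabling the AFD.SYS send buffer for a connected socket s looks like this; the same call with SO_RCVBUF would disable receive buffering, which, as discussed above, is rarely worthwhile.

int nZero = 0;

// Turn off AFD.SYS send buffering for this socket. Only do this if the
// application keeps multiple overlapped sends outstanding at all times.
if (setsockopt(s, SOL_SOCKET, SO_SNDBUF,
               (const char *)&nZero, sizeof(nZero)) == SOCKET_ERROR)
{
    // Error: check WSAGetLastError()
}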

Resource Constraints
Robustness is a primary goal in the design of any server application: your application must be able to cope with unexpected conditions such as a peak in concurrent client requests, a temporary shortage of available memory, and other short-lived problems. This requires the designer to be aware of the resource limits of Windows NT and Windows 2000 and to handle such emergencies calmly.

The most basic resource is network bandwidth. An application using the User Datagram Protocol (UDP) generally has to watch its bandwidth usage itself in order to minimize packet loss. A server using TCP connections must also be careful not to overload the network for any length of time, or it will suffer heavy packet loss or a large number of dropped connections. The right bandwidth-management approach depends on the application and is beyond the scope of this article.

Virtual memory use must also be managed very carefully. Allocating and freeing memory conservatively, and reusing already-allocated memory through lookaside lists (a form of cache), helps keep the memory footprint of a server application under control; this keeps the operating system from constantly paging the application's physical memory out to disk, so more of the application's address space stays resident. You can also use the SetProcessWorkingSetSize() Win32 API to ask the operating system to grant your application more physical memory.
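
For example, here is a rough sketch of requesting a larger working set; the 16 MB and 32 MB figures are arbitrary illustrative values, not recommendations.

// Ask for a larger working set so more of the application's pages stay
// resident. The minimum and maximum values below are only examples.
if (!SetProcessWorkingSetSize(GetCurrentProcess(),
                              16 * 1024 * 1024,    // minimum working set
                              32 * 1024 * 1024))   // maximum working set
{
    // Error: check GetLastError()
}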

Two other, less direct, resource shortages can affect you when using Winsock. The first is the limit on locked memory pages. If you have disabled AFD.SYS buffering, then whenever the application sends or receives data, all of the pages of the application's buffer are locked into physical memory. They must be, because kernel-mode drivers will access them, and during that time they cannot be paged out. This becomes a problem when the operating system needs to allocate pageable physical memory and cannot find enough. The goal is to keep a badly written program from locking down all of physical memory and bringing the system down, so the system imposes a limit on locked pages, and your program must stay below it.

On Windows NT and Windows 2000, the total amount of memory that all applications can lock is roughly 1/8 of physical memory (this is only a rough figure, not something to base your calculations on). If your application ignores this and posts too many overlapped sends and receives, you will occasionally see ERROR_INSUFFICIENT_RESOURCES errors while the I/O is still pending; in that case you need to avoid locking excessive amounts of memory. Also note that the system locks the entire memory page containing your buffer, so there is a cost when buffers sit near page boundaries: if a buffer spills over a page boundary by even one byte, the page holding that extra byte is locked as well.

The other limit your program may run into is exhaustion of the non-paged pool. The non-paged pool is a region of memory that is never paged out; it holds data accessed by various kernel components, some of which cannot tolerate having their data paged out to disk. Windows NT and Windows 2000 drivers are able to allocate memory from this special pool. When an application creates a socket (or, similarly, opens a file), the kernel allocates a certain amount of non-paged pool memory, and more is allocated when the socket is bound or connected. Watch this behavior and you will see that posting I/O requests (such as sends and receives) allocates additional non-paged pool as well (for example, a small structure is needed to track each pending operation, much like the custom structure described earlier). Eventually this adds up, because the operating system has only a limited amount of non-paged memory.

The amount of non-paged pool allocated per connection differs between Windows NT and Windows 2000, and may differ again in future versions of Windows, so for the sake of your application's longevity you should not try to calculate its exact non-paged pool requirements. Instead, your program must simply keep its consumption well away from the non-paged pool limit. When the remaining non-paged pool becomes too small, kernel drivers that have nothing to do with your application can misbehave and even crash the system, especially when third-party devices or drivers are present; such failures are unpredictable. And remember that other applications on the same computer may be consuming non-paged pool too, so be especially conservative and cautious when estimating how much of this resource your application can use.

Handling a resource shortage is complicated by the fact that none of the situations above produce a distinctive error code; in general all you get back is WSAENOBUFS or ERROR_INSUFFICIENT_RESOURCES. To handle these errors, first tune your application's configuration to a reasonable maximum (see http://msdn.microsoft.com/msdnmag/issues/1000/Bugslayer/Bugslayer1000.asp for memory optimization). If the errors persist, check whether network bandwidth is the problem, and after that make sure you have not posted too many outstanding send and receive calls. Finally, if you are still getting resource errors, you have most likely run out of non-paged pool. To free non-paged pool space, close a sizable portion of the application's connections and wait for the system to ride out and recover from the transient condition.

Accepting Connections
One of the most common things a server does is accept connection requests from clients. The only API that accepts a connection on a socket using overlapped I/O is the AcceptEx() function. Interestingly, while the ordinary synchronous accept() function returns a new socket, AcceptEx() requires you to supply one as a parameter. Because AcceptEx() is an overlapped operation, you create a socket beforehand (but do not bind or connect it) and pass it to AcceptEx() as a parameter. A small piece of typical pseudocode for using AcceptEx() looks like this:

do {
    - Wait for the previous AcceptEx to complete
    - Create a new socket and associate it with the completion port
    - Set up the per-operation context structure, and so on
    - Post an AcceptEx request
} while (TRUE);
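
To make one iteration of that loop concrete, here is a hedged sketch of posting a single AcceptEx() on a pre-created socket. PostAccept() is a hypothetical helper, the OVERLAPPEDPLUS structure is the one defined earlier, and buf must be large enough to hold the two address blocks (at least 2 * (sizeof(SOCKADDR_IN) + 16) bytes).

// Post one overlapped AcceptEx() on the listening socket sListen.
// olp is an OVERLAPPEDPLUS prepared by the caller; buf receives the
// local and remote addresses (no client data is read here).
BOOL PostAccept(SOCKET sListen, OVERLAPPEDPLUS *olp, char *buf)
{
    DWORD dwBytes = 0;

    olp->sclient = socket(AF_INET, SOCK_STREAM, 0);  // pre-created, unbound
    if (olp->sclient == INVALID_SOCKET)
    {
        return FALSE;
    }
    olp->OpCode = OP_ACCEPT;
    memset(&olp->ol, 0, sizeof(OVERLAPPED));

    if (!AcceptEx(sListen,
                  olp->sclient,
                  buf,
                  0,                          // accept only, receive no data
                  sizeof(SOCKADDR_IN) + 16,   // local address buffer size
                  sizeof(SOCKADDR_IN) + 16,   // remote address buffer size
                  &dwBytes,
                  &olp->ol))
    {
        if (WSAGetLastError() != ERROR_IO_PENDING)
        {
            return FALSE;                     // real failure
        }
    }
    return TRUE;
}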

A highly responsive server must keep enough AcceptEx() calls outstanding so that it can respond the instant a client connection request arrives. How many AcceptEx() requests to post depends on the type of traffic your server expects. If the rate of incoming connections is high (because connections are short-lived or arrive in bursts), more AcceptEx() calls need to be waiting than for a server that only sees the occasional client connection. The smart approach is to have the application analyze its traffic and adjust the number of outstanding AcceptEx() calls, rather than fixing it at a constant.

On Windows 2000, Winsock provides a mechanism to help you judge whether the number of outstanding AcceptEx() calls is sufficient. When creating the listening socket, create an event and associate it with the socket using the WSAEventSelect() API, registering for FD_ACCEPT notification. If a connection request arrives and no AcceptEx() is waiting to accept it, the event is signaled. This tells you either that you have not posted enough AcceptEx() calls, or that you are seeing an abnormal flood of client requests. This mechanism is not available on Windows NT 4.0.
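
A brief sketch of registering for that notification on the listening socket (Windows 2000 only, per the text above); the event and variable names are illustrative.

WSAEVENT hAcceptsExhausted = WSACreateEvent();

// This event becomes signaled when a connection arrives and no AcceptEx()
// is currently outstanding to accept it.
if (WSAEventSelect(sListen, hAcceptsExhausted, FD_ACCEPT) == SOCKET_ERROR)
{
    // Error: check WSAGetLastError()
}

// A monitoring thread can then wait on hAcceptsExhausted and respond by
// posting additional AcceptEx() calls, for example:
//     WaitForSingleObject(hAcceptsExhausted, INFINITE);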

One of the big advantages of AcceptEx() is that it can accept a client connection and receive the client's first chunk of data in a single call (via the lpOutputBuffer parameter). That is, if the client sends data as soon as it connects, your AcceptEx() call can return only after the connection has been established and the client's data has been received. This can be very useful, but it can also cause problems, because the AcceptEx() call will not complete until client data has arrived. Specifically, if you pass a receive buffer to AcceptEx() via lpOutputBuffer, AcceptEx() is no longer an atomic operation; it splits into two steps: accept the client connection, then wait for its data. There is no mechanism to notify your application that it is stuck in the state "connection established, waiting for client data", and a client might send only a connection request and never any data. If your server accumulates enough connections of this kind, it will start refusing legitimate client requests. This is a common form of denial-of-service (DoS) attack.

To guard against such attacks, the accepting thread should check the sockets waiting in AcceptEx() from time to time by calling getsockopt() with the SO_CONNECT_TIME option. The option value returned by getsockopt() is set to the number of seconds the socket has been connected, or to -1 if the socket is not yet connected. The WSAEventSelect() mechanism described above works well as the trigger for this check. If a connection has been established but no data has been received for a long time, terminate it by closing the socket that was supplied to AcceptEx(). Note that, except in emergencies, your application should not close sockets that have been handed to AcceptEx() and are waiting but have not yet received a connection: for performance reasons, even if those sockets are closed, the corresponding kernel-mode data structures are not cleaned up until a connection actually arrives or the listening socket itself is shut down.
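
Here is a small sketch of that periodic check for one socket that has been handed to AcceptEx(); ConnectionLooksIdle() is a hypothetical helper and the timeout threshold is an arbitrary example.

// Returns TRUE if the connection on sAccept has been established for more
// than dwTimeoutSecs without delivering its data, and should be closed.
BOOL ConnectionLooksIdle(SOCKET sAccept, DWORD dwTimeoutSecs)
{
    int nSeconds = 0;
    int nLen     = sizeof(nSeconds);

    if (getsockopt(sAccept, SOL_SOCKET, SO_CONNECT_TIME,
                   (char *)&nSeconds, &nLen) == SOCKET_ERROR)
    {
        return FALSE;   // could not query; leave the socket alone
    }

    // -1 means no connection has been established on this socket yet.
    return (nSeconds != -1 && (DWORD)nSeconds > dwTimeoutSecs);
}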

The thread that posts the AcceptEx() calls can be the same thread that is associated with the completion port and processes the other I/O completion notifications, but remember to keep blocking operations out of that thread. One side effect of Winsock2's layered architecture is that the overhead of calling the socket() or WSASocket() API can be considerable. Since each AcceptEx() call requires creating a new socket, it is best to have a dedicated thread that does nothing but issue AcceptEx() calls and is not involved in other I/O processing. You can also use this thread for other tasks, such as event logging.

A final note about AcceptEx(): Winsock2 providers other than Microsoft's are not required to implement these APIs. The same applies to other Microsoft-specific APIs such as TransmitFile() and GetAcceptExSockaddrs(), and to new APIs that may be added in future versions of Windows. On Windows NT and Windows 2000, these APIs are implemented in Microsoft's provider DLL (mswsock.dll) and can be called either by linking against mswsock.lib at compile time, or by dynamically obtaining a pointer to the function through WSAIoctl() with the SIO_GET_EXTENSION_FUNCTION_POINTER option.

Calling the function without obtaining its pointer in advance (that is, linking statically against mswsock.lib and calling it directly) hurts performance badly: because AcceptEx() sits outside the Winsock2 architecture, it is forced to look up the function pointer through WSAIoctl() on every single call. To avoid this penalty, applications that use these APIs should obtain the function pointers from the underlying provider themselves by calling WSAIoctl().
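
For example, here is a sketch of obtaining the AcceptEx() pointer from the provider once, using the WSAID_ACCEPTEX GUID and the LPFN_ACCEPTEX type from mswsock.h.

// Requires <winsock2.h> and <mswsock.h>.
LPFN_ACCEPTEX lpfnAcceptEx = NULL;
GUID          guidAcceptEx = WSAID_ACCEPTEX;
DWORD         dwBytes      = 0;

// Ask the underlying provider for its AcceptEx implementation once,
// then call through lpfnAcceptEx to avoid the per-call lookup.
if (WSAIoctl(sListen,
             SIO_GET_EXTENSION_FUNCTION_POINTER,
             &guidAcceptEx, sizeof(guidAcceptEx),
             &lpfnAcceptEx, sizeof(lpfnAcceptEx),
             &dwBytes, NULL, NULL) == SOCKET_ERROR)
{
    // Error: check WSAGetLastError()
}

// Later: lpfnAcceptEx(sListen, sAccept, buf, 0, ..., &olp->ol);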

Figure 3 shows the socket architecture:

Figure 3: Socket Architecture

            Application
                 |
                 v
    Winsock 2.0 DLL (ws2_32.dll)
                 |
                 v
       Layered/Base Providers
  RSVP | Proxy | Default Microsoft Providers (mswsock.dll / msafd.dll)
                 |
                 v
  Windows Sockets Kernel-Mode Driver (AFD.sys)
                 |
                 v
        Transport Protocols
        TCP/IP | ATM | Other

TransmitFile and TransmitPackets
Winsock provides two functions optimized specifically for transmitting file and memory data. The TransmitFile() API is available on Windows NT 4.0 and Windows 2000, while TransmitPackets() will appear in a future version of Windows.

TransmitFile() transmits the contents of a file over a socket through Winsock. The usual approach would be to call CreateFile() to open the file and then call ReadFile() and WSASend() repeatedly until all the data has been sent. That is inefficient, because every call to ReadFile() and WSASend() involves a transition from user mode to kernel mode. With TransmitFile(), you simply hand it a handle to the opened file and the number of bytes to send; the only mode transitions involved are the CreateFile() call that opens the file and then the single TransmitFile() call. The result is far more efficient.
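
As a rough sketch (the file name is hypothetical and error handling is abbreviated), sending an entire file over a connected, blocking socket s might look like this:

// Send the whole file with a single kernel transition after CreateFile().
HANDLE hFile = CreateFile("reply.dat", GENERIC_READ, FILE_SHARE_READ,
                          NULL, OPEN_EXISTING,
                          FILE_FLAG_SEQUENTIAL_SCAN, NULL);
if (hFile == INVALID_HANDLE_VALUE)
{
    // Error: check GetLastError()
}

if (!TransmitFile(s,
                  hFile,
                  0,        // 0 = send the entire file
                  0,        // 0 = use the default block size
                  NULL,     // no OVERLAPPED: the call blocks until done
                  NULL,     // no head/tail buffers
                  0))       // flags (TF_DISCONNECT | TF_REUSE_SOCKET possible)
{
    // Error: check WSAGetLastError()
}
CloseHandle(hFile);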

TransmitPackets() goes one step beyond TransmitFile(): it lets you send multiple files and memory buffers in a single call. Its prototype is as follows:
BOOL TransmitPackets(
    SOCKET                     hSocket,
    LPTRANSMIT_PACKETS_ELEMENT lpPacketArray,
    DWORD                      nElementCount,
    DWORD                      nSendSize,
    LPOVERLAPPED               lpOverlapped,
    DWORD                      dwFlags
);
Here lpPacketArray is an array of structures, each element of which can describe either a file handle or a memory buffer. The structure is defined as follows:
typedef struct _TRANSMIT_PACKETS_ELEMENT {
    DWORD dwElFlags;
    DWORD cLength;
    union {
        struct {
            LARGE_INTEGER nFileOffset;
            HANDLE        hFile;
        };
        PVOID pBuffer;
    };
} TRANSMIT_PACKETS_ELEMENT;
The fields are mostly self-describing:
dwElFlags: specifies whether the current element is a file handle or a memory buffer (using the constants TP_ELEMENT_FILE and TP_ELEMENT_MEMORY);
cLength: the number of bytes to send from the data source (for a file, 0 means send the entire file);
the unnamed union: holds either the file handle (with an optional file offset) or the memory buffer pointer.
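
Purely as an illustration (TransmitPackets() is not yet available at the time of writing), filling an array with one memory buffer followed by one whole file might look like the sketch below. hFile and s are assumed to be an open file handle and a connected socket, the header text is invented, and lpfnTransmitPackets is assumed to be a function pointer obtained from the provider.

TRANSMIT_PACKETS_ELEMENT elems[2];
char header[] = "HTTP/1.0 200 OK\r\n\r\n";     // hypothetical in-memory header

// Element 0: a memory buffer
elems[0].dwElFlags = TP_ELEMENT_MEMORY;
elems[0].pBuffer   = header;
elems[0].cLength   = sizeof(header) - 1;

// Element 1: an open file, sent in full (cLength of 0)
elems[1].dwElFlags = TP_ELEMENT_FILE;
elems[1].hFile     = hFile;
elems[1].nFileOffset.QuadPart = 0;
elems[1].cLength   = 0;

// lpfnTransmitPackets is assumed to have been obtained from the provider
// (for example, via WSAIoctl with SIO_GET_EXTENSION_FUNCTION_POINTER).
if (!lpfnTransmitPackets(s, elems, 2, 0, NULL, 0))
{
    // Error: check WSAGetLastError()
}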

Another advantage of these two APIs is that the socket handle can be reused by specifying the TF_DISCONNECT and TF_REUSE_SOCKET flags. Each time the API finishes transmitting the data, it disconnects the connection at the transport level, and the socket can then be handed to AcceptEx() again. Programming with this optimization reduces the load on the thread dedicated to creating sockets (described above).

Both APIs share one weakness: on Windows NT Workstation and Windows 2000 Professional, they will only process two outstanding call requests at a time; full functionality is available only on Windows NT Server, Windows 2000 Server, Windows 2000 Advanced Server, and Windows 2000 Datacenter Server.

Putting It All Together

In the previous sections we discussed the functions and methods needed to develop a high-performance, highly scalable application, as well as the resource bottlenecks you may run into. What does all this mean for you? That depends on how you structure your server and your clients. The more control you have over the design of both client and server, the better you can avoid the bottlenecks.

Let's look at a sample scenario. We are designing a server that handles clients which connect, send a request, receive data from the server, and disconnect. The server creates a listening socket, associates it with a completion port, creates one worker thread per CPU, and creates one more thread dedicated to issuing AcceptEx() calls. Since we know the clients will send data immediately after connecting, supplying a receive buffer with each AcceptEx() makes things easier. And, of course, do not forget to poll the sockets used in the AcceptEx() calls from time to time (using the SO_CONNECT_TIME option) to make sure no maliciously idle connections build up.

One important question in this design is how many AcceptEx() calls to keep waiting. Because each AcceptEx() is posted together with a receive buffer, a large number of them means a large number of locked pages (as noted above, every overlapped operation consumes a small amount of non-paged pool and locks all of the buffers involved). The question is hard to answer and has no single correct value; the best approach is to make this number tunable and, through repeated performance testing, find the optimum for your typical workload.

Once you have a clear estimate there, the next issue is sending data, and the focus is on how many concurrent connections you want the server to handle at once. Generally the server should limit the number of concurrent connections as well as the number of outstanding send calls: more connections mean more non-paged pool consumed, and more outstanding sends mean more locked pages (be careful not to exceed the limits). This, too, takes repeated testing to answer.

In this scenario there is usually no need to disable per-socket buffering, because the only receive takes place inside AcceptEx() and it is not hard to guarantee a receive buffer for every incoming connection. If, however, the client/server interaction changes so that the client must send more data after its first transmission, disabling the receive buffer becomes a bad idea, unless you can make sure that every connection always has an overlapped receive outstanding to take the additional data.

Conclusion

Developing a highly scalable Winsock server is not as daunting as it sounds: in essence it amounts to setting up a listening socket, accepting connection requests, and issuing overlapped send and receive calls. The main challenge is managing the number of outstanding overlapped calls so that the non-paged pool is never exhausted. Following the principles discussed here, you can develop server applications that scale to very large numbers of connections.
