Architecture Design of game servers with millions of users (2)


Login Server Design-functional requirements

As discussed earlier, the login server provides one simple function: account verification. For ease of description, we will set aside the optimizations mentioned before, implement things in the simplest way first, and use the mangos code as a reference throughout.

Consider how account verification might be implemented. The easiest way is for the client to send the account and password to the login server in plaintext. The server fetches the password for that account from the database and compares it with the one the user entered.

The security risk of this method is too great: a plaintext password in transit is far too easy to intercept. So we try encrypting the password before transmission. For the server to compare passwords, we would have to use a reversible encryption algorithm: on the server side, the encrypted string is restored to the original plaintext and then compared with the database password. Since the process is reversible, cheat-tool makers will always find a way to learn our encryption scheme, so this method is still not safe enough.

If all we want is for the password to be unrecoverable, that is easy: use an irreversible hash algorithm. When a user logs in, the client sends the account in plaintext together with the hashed password string; the server hashes the stored password with the same algorithm and compares the two. For example, we could use the most widely deployed algorithm, MD5. And no, don't worry about Wang Xiaoyun's paper on MD5 collisions. If I had that kind of luck, would I still be wrestling with this damn server design?
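The hash-and-compare flow can be sketched in a few lines of C++. Note this is only an illustration of the flow: std::hash is NOT a cryptographic digest, it merely stands in for MD5 here, and the class and function names (HashPassword, AccountDB) are invented for the sketch.

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>

// Stand-in for a real digest such as MD5: std::hash is NOT cryptographic;
// it only illustrates "hash on the client, compare hashes on the server".
std::string HashPassword(const std::string& plain)
{
    return std::to_string(std::hash<std::string>{}(plain));
}

// The account database stores the hash, never the plaintext password.
class AccountDB
{
public:
    void CreateAccount(const std::string& account, const std::string& password)
    {
        m_hashes[account] = HashPassword(password);
    }

    // The client sends the account in plaintext plus the hashed password;
    // the server compares hash against hash.
    bool Verify(const std::string& account, const std::string& passwordHash) const
    {
        auto it = m_hashes.find(account);
        return it != m_hashes.end() && it->second == passwordHash;
    }

private:
    std::unordered_map<std::string, std::string> m_hashes;
};
```

Of course, as the next paragraph points out, this scheme is replayable: the hash itself becomes the password.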

It seems like a perfect solution: cheat tools can no longer steal our passwords. Slow down, though. What is the purpose of stealing a password? To use our account to enter the game! If we always hash passwords with a fixed algorithm, the cheat tool only needs to capture and remember the hashed string, then replay it as the password to log in successfully.

So the real fix is to stop hashing with a fixed, replayable scheme. The difficulty is that the values produced on the server and the client must still match, or at least be verifiable against each other. Fortunately, great mathematicians have prepared many excellent algorithms for us, proven safe enough in both theory and practice.

One of them is SRP, the Secure Remote Password protocol. WoW uses version 6, the SRP6 algorithm. I would be very grateful if someone could walk me through the mathematical proof and make it clear to me. The implementation steps, however, are not complex, and the code in mangos is quite clear, so we will not go into detail here.
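For reference, the core SRP-6 exchange, as I recall it from the specification (so treat the details as a sketch to be checked against RFC 5054): $N$ is a large safe prime, $g$ a generator, $s$ the salt, $p$ the password, and $H$ a hash. At registration the server stores the verifier, never the password:

$$x = H(s, p), \qquad v = g^x \bmod N$$

At login the client picks a random $a$ and sends $A = g^a \bmod N$; the server picks a random $b$ and sends $B = (kv + g^b) \bmod N$, with $k = 3$ in SRP-6. Both compute the scrambler $u = H(A, B)$ and then:

$$S_{\text{client}} = (B - k g^x)^{\,a + ux} \bmod N, \qquad S_{\text{server}} = (A v^u)^{\,b} \bmod N$$

Both expressions reduce to $g^{\,b(a+ux)} \bmod N$, so the two sides derive the same session key $K = H(S)$ even though the password, and even the hash of the password, never crosses the wire, and a captured exchange cannot be replayed because $a$, $b$, and therefore $u$ differ every time.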

In addition to account verification, the login server provides another function: after a player's account is verified, it returns a server list for the player to choose from. The status of this list needs to be refreshed regularly. A new game world may open, or some game worlds may unfortunately shut down, and all these status changes should reach players as soon as possible. Whatever happens, users have the right to know. Especially paying users; we shouldn't hide anything from them, should we?

The game world list itself is provided by the region server. Its structure was described before and will not be repeated here. The login server simply forwards the world list obtained from the region server to each verified client. So the functions the login server must implement are simple, aren't they?

Indeed, almost too simple. But this simple structure makes it a good place to examine the module structure inside a game server, as well as the implementation of some common server components. That will be the next article.

Server public component implementation-main game loop of mangos

When reading the source code of a project, most of us choose to start from the main function; when starting a new project, the first function we write is usually also main. So let's look at what main does in the game server code.

What I dislike most when reading a technical article is a huge slab of code, especially code pasted in unmodified with Ctrl+C and Ctrl+V. No craft in that! So in the discussions that follow we will try to avoid quoting code directly, and where code really is needed, we will use pseudocode.

Start with the login server code of mangos. The mangos login server is single-threaded in structure. Although the database connection can run an independent thread, that thread only handles fire-and-forget SQL statements with no result set; query-type SQL statements that need results still block in the main logic thread.

The only thread in the login server, the main loop thread, performs a select on the listening socket and on each connected client, reading and processing their data immediately. It exits when the server receives a SIGABRT or SIGBREAK signal.

So the main loop of the mangos login server already contains the logic the later game servers will share. The key code of the main loop is in SocketHandler, specifically the Select function: check all connections, call the OnAccept method for new connections, and call the OnRead method for connections with pending data. The socket handler then defines how the received data is processed.

The structure is simple and easy to understand.

However, on servers with high performance requirements, select is generally not the best choice. On Windows, IOCP would be the first choice; on Linux, epoll. We do not plan to discuss IOCP-based or epoll-based server implementations here. If you only want basic server functionality, a few very simple API calls suffice, and there are many good tutorials online; if you want to build a mature network server product, that is more than I can cover in a few short technical articles.

In addition, in a real server implementation, network I/O and logic processing are generally placed in different threads, so that time-consuming I/O never blocks game logic that must respond immediately.

Database handling is similar: asynchronous processing is used so that time-consuming queries never block the game server's main loop. Imagine how terrible it would be if every online player on the server froze because one player's login triggered a database query!
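A minimal sketch of that asynchronous pattern: the logic thread hands the query to a worker thread and keeps running, and the result comes back later as just another message in the logic thread's queue. All names here (BlockingQueue, AsyncQuery, DBResult) are invented for the sketch, and ExecuteQuery is a hypothetical stand-in for a real database call.

```cpp
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>
#include <thread>

// Result messages the DB thread posts back to the game loop's queue.
struct DBResult { int queryId; std::string rows; };

class BlockingQueue
{
public:
    void Put(DBResult r)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_items.push(std::move(r));
        m_cond.notify_one();
    }
    DBResult Get()
    {
        std::unique_lock<std::mutex> lock(m_mutex);
        m_cond.wait(lock, [this] { return !m_items.empty(); });
        DBResult r = std::move(m_items.front());
        m_items.pop();
        return r;
    }
private:
    std::mutex m_mutex;
    std::condition_variable m_cond;
    std::queue<DBResult> m_items;
};

// Hypothetical stand-in for a real blocking database call.
std::string ExecuteQuery(const std::string& sql)
{
    return "rows for: " + sql;
}

// The logic thread calls this and returns immediately; the worker thread
// runs the slow query and posts the result back as an ordinary message.
void AsyncQuery(int queryId, const std::string& sql, BlockingQueue& resultQueue)
{
    std::thread([queryId, sql, &resultQueue] {
        resultQueue.Put(DBResult{queryId, ExecuteQuery(sql)});
    }).detach();
}
```

In a real server the logic thread would not block on Get either; it would simply find the DBResult in its normal message queue on a later iteration of the main loop.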

There are also common components such as events, scripts, message queues, state machines, logging, and exception handling. We will discuss them in due course.

Server public component implementation-continue with the main loop

Previously we took a brief look at the program structure of the mangos login server and found some shortcomings. Now let's see how to do better.

As discussed before, all time-consuming I/O operations are moved into separate threads so the game's main logic loop runs smoothly: network I/O, database I/O, and log I/O. Of course, you could also move them into separate processes.

Also, most server programs run as daemon processes or system services, so the server does not need to handle console user input; all the data to be processed arrives from the network.

So all the main logic loop has to do is keep taking message packets and processing them. These packets include not only player operation packets from clients, but also management commands from the GM server and result packets returned by the database query thread. The loop continues until a packet arrives telling the server to shut down.

The structure of the main logic loop is still very simple; the complicated part is the logic for handling the message packets themselves. The loop can be described with a little pseudocode:

while (Message* msg = GetMessage())
{
    if (msg is a shutdown message)
        break;
    process msg;
}

One question needs discussion here: where does GetMessage get its messages? We have at least three message sources, and as discussed, the I/O for each of them happens in its own thread; the main thread should never go to those sources and perform blocking I/O itself.

It is much easier to have those I/O threads deliver the data after they receive it. An analogy: I run a warehouse with many suppliers. Whenever they have goods for me, they just drop them at the warehouse, and I go to the warehouse to pick them up. That warehouse is the message queue. A message queue is an ordinary queue implementation, except that it must be safe for mutually exclusive access from multiple threads. Its basic interface looks something like this:

IMessageQueue
{
    void PutMessage(Message*);

    Message* GetMessage();
}

The network I/O and database I/O threads put every assembled message packet into the main logic thread's message queue. Message queues and inter-thread message passing have complete implementations and documentation in ACE, with examples that make a good reference.

With that, our main loop is very clear: take a message from the main thread's queue, process it, take the next message…

Server public component implementation-Message Queue

Now that message queues have come up, let's dwell on them a little longer.

The simplest message queue we can think of is probably one built on an STL list: the queue holds a list and a mutex. PutMessage appends the message to the tail of the list; GetMessage returns a message from the head; and both must acquire the lock first.

The implementation is simple and the functionality entirely sufficient, but the performance may be less satisfying. The biggest problem is frequent lock contention.
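The list-plus-mutex queue just described fits in a few lines of C++; the Message type and class name are placeholders for this sketch. Note that every single Put and Get takes the lock, which is exactly the contention the optimizations below try to avoid.

```cpp
#include <list>
#include <mutex>

struct Message { int id; };   // placeholder payload

// Simplest thread-safe queue: one std::list guarded by one mutex.
class MessageQueue
{
public:
    void PutMessage(Message* msg)
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_messages.push_back(msg);           // producer appends at the tail
    }

    Message* GetMessage()                    // returns nullptr when empty
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        if (m_messages.empty())
            return nullptr;
        Message* msg = m_messages.front();   // consumer takes the head
        m_messages.pop_front();
        return msg;
    }

private:
    std::list<Message*> m_messages;
    std::mutex m_mutex;
};
```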

Ghost Cheng proposed an optimization to reduce the number of lock acquisitions: provide a queue container holding multiple queues, each able to store a fixed number of messages. When the network I/O thread wants to deliver messages to the logic thread, it takes an empty queue from the container and uses it until the queue is full, then puts it back and takes another empty one. When the logic thread wants messages, it takes a filled queue from the container, reads it, clears it after processing, and puts it back.

This way the lock is only needed when operating on the queue container; neither the I/O thread nor the logic thread needs a lock while operating on the queue it currently holds, so the opportunities for lock contention drop dramatically.

Each queue has a maximum message count, and the intent seems to be that the I/O thread returns a queue to the container only once it has filled it. But sometimes the I/O thread holds a queue that is not yet full while the logic thread has nothing to process, especially when traffic is light. Ghost Cheng's description does not discuss how to solve this, so let's look at another scheme.

This scheme is similar to the previous one but drops the queue container, because only two queues are used. Arthur described the implementation, with partial code, in his email. Of the two queues, one is read by the logic thread and one is written by the I/O thread; when the logic thread finishes reading its queue, it swaps it with the I/O thread's queue. So in this scheme locking is more frequent: the I/O thread locks on every write, and the logic thread locks when swapping the queues, though not while reading.

Although the number of lock calls looks much higher than in the previous scheme, most of them do not actually block; a thread can only be blocked at the moment the logic thread swaps the queues. The overhead of an uncontended lock call is completely negligible; what we cannot tolerate is the blocking a lock can cause.
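The two-queue swap scheme can be sketched as follows, assuming a single I/O thread. The class and method names are invented; int stands in for a real message type. The I/O thread locks briefly on every write; the logic thread locks only once per batch, to swap, then reads without the lock.

```cpp
#include <mutex>
#include <vector>

// Sketch of the two-queue swap scheme: one queue written by the IO
// thread, one read by the logic thread, exchanged under the lock.
class SwapQueue
{
public:
    void Put(int msg)                      // called by the IO thread
    {
        std::lock_guard<std::mutex> lock(m_mutex);
        m_writeQueue.push_back(msg);
    }

    // Called by the logic thread: exchange the (already consumed) read
    // queue with the write queue, then read the batch lock-free.
    std::vector<int>& SwapAndRead()
    {
        m_readQueue.clear();               // reader owns this exclusively
        std::lock_guard<std::mutex> lock(m_mutex);
        m_readQueue.swap(m_writeQueue);
        return m_readQueue;
    }

private:
    std::mutex m_mutex;
    std::vector<int> m_writeQueue;
    std::vector<int> m_readQueue;
};
```

Because the swap also hands over queues that are only partially full, this scheme avoids the "queue never fills up" problem of the container approach.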

Both are excellent optimizations, but each has its own applicability. Ghost Cheng's scheme provides multiple queues, so several I/O threads can each hold their own queue without interfering with one another; however, it has the leftover problem just noted, of a queue that never fills. Arthur's scheme solves that problem, but because there is only one write queue, multiple I/O threads would all contend to write to it, increasing the chance of contention. Of course, with just one I/O thread, it is perfect.

Server public component implementation-circular buffer

With the problem of over-frequent message queue locking solved, the other annoyance is the sheer number of memory allocations and releases. Frequent allocation not only increases system overhead but also fragments memory, which is bad for the long-term stable operation of our servers. Maybe a memory pool would help, such as the small-object allocator that comes with SGI STL. But for such strictly first-in-first-out processing, with blocks that are not small and not uniformly sized, the more common solution is the ring buffer. The mangos network code has one, and its principle is fairly simple.

Picture two people chasing each other around a circular table. The runner is the network I/O thread: writing data means running forward. The chaser is the logic thread: it keeps chasing but must never fully catch up, for catching up means there is no data left to read; in that case it waits a while for the runner to take a few steps and tries again. The game cannot go idle in the other direction either: what if the chaser is too slow and the runner comes all the way around and catches the chaser from behind? Then the runner must rest first; if it pressed on, it would overwrite data the chaser has not yet read, and the game would be over.

We stressed earlier that a ring buffer requires strictly first-in-first-out processing. In other words, everyone must follow the rules: the chaser may not cut across the table, and the runner certainly may not run the other way around. As for why, no further explanation should be needed.

The ring buffer is a fine technique that avoids frequent memory allocation; in most cases, reusing the same memory also lets us do more with fewer memory blocks.
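The chase logic can be sketched with monotonically increasing read/write indices: the ring is full when the writer is exactly one lap ahead, and empty when the two indices meet. This single-threaded sketch only shows the index arithmetic; for a real IO-thread/logic-thread pairing the indices would need to be atomic. Names are illustrative.

```cpp
#include <cstddef>
#include <vector>

// Fixed-size ring buffer: the writer (IO thread) runs ahead, the reader
// (logic thread) chases it, and neither may overtake the other.
class RingBuffer
{
public:
    explicit RingBuffer(std::size_t capacity)
        : m_data(capacity), m_read(0), m_write(0) {}

    bool Write(char byte)                  // fails when the ring is full
    {
        if (m_write - m_read == m_data.size())
            return false;                  // runner has lapped the chaser
        m_data[m_write % m_data.size()] = byte;
        ++m_write;
        return true;
    }

    bool Read(char& byte)                  // fails when there is no data
    {
        if (m_read == m_write)
            return false;                  // chaser has caught the runner
        byte = m_data[m_read % m_data.size()];
        ++m_read;
        return true;
    }

private:
    std::vector<char> m_data;
    std::size_t m_read;   // total bytes consumed
    std::size_t m_write;  // total bytes produced
};
```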

In the network I/O thread, we prepare one ring buffer per connection, to temporarily hold received data and cope with partial packets and packets stuck together. After unpacking and decryption, the packet is copied into the logic thread's message queue. If we used only one queue, it too would be a ring buffer: the I/O thread writes ahead, the logic thread reads behind it, chasing each other around. But if we adopt the optimized schemes described above, we may no longer need ring buffers here, or at least they need not be circular, because the same queue is never read and written at the same time: each queue is handed to the logic thread once full, cleared after reading, and handed back to the I/O thread for writing. A plain fixed buffer suffices. No matter; such a good technique will certainly find use elsewhere.

Server public component implementation-packet sending Method

So far we have only discussed how data is received: a dedicated I/O thread assembles complete message packets and adds them to the main thread's message queue. How the main thread sends data has not been discussed.

The most direct method is for the logic thread to call the relevant socket API whenever it wants to send data, which requires the server's player object to store its connection's socket handle. But a direct send call can run into problems, such as the system's send buffer being full and blocking, or only part of the data being sent. We can instead cache the outgoing data first, so that whatever was not sent can be retried on the logic thread's next pass.

If we cache outgoing data, there are two ways to do it: give each player a buffer, or keep a single global buffer, where each piece of data added must specify the socket it is destined for. With a global buffer we can go one step further and use an independent thread for sending, structured just like the logic thread: the send thread maintains its own message queue, the logic thread merely appends data to that queue, and the send thread loops taking packets and calling send. Any blocking then has no effect on the logic thread.

The second method also admits an optimization. Broadcast messages sent to all nearby players are generally identical. With per-player buffer queues, such a packet must be copied many times; with a global send queue, we enqueue the message once and specify the list of sockets it should go to. This optimization is also described in Yun Feng's blog post on the implementation of his connection server; read it if you are interested.
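The enqueue-once broadcast idea can be sketched with a shared, reference-counted payload: the packet body is never copied, only the pointer is, and each task carries the list of destination sockets. The names (SendTask, SendQueue) are invented, and a real version would guard the queue with a mutex since the logic and send threads share it.

```cpp
#include <memory>
#include <queue>
#include <string>
#include <vector>

// One entry in the global send queue: a shared packet plus its targets.
struct SendTask
{
    std::shared_ptr<const std::string> packet;  // shared, never copied
    std::vector<int> sockets;                   // destination socket handles
};

class SendQueue
{
public:
    // Queue a broadcast once, however many recipients it has.
    void Broadcast(std::shared_ptr<const std::string> packet,
                   std::vector<int> sockets)
    {
        m_tasks.push(SendTask{std::move(packet), std::move(sockets)});
    }

    // The send thread pops tasks and calls send() per socket (omitted).
    bool Pop(SendTask& task)
    {
        if (m_tasks.empty())
            return false;
        task = std::move(m_tasks.front());
        m_tasks.pop();
        return true;
    }

private:
    std::queue<SendTask> m_tasks;   // a real version guards this with a mutex
};
```

shared_ptr also answers the lifetime question: the payload stays alive exactly until the last queued task referring to it has been sent.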

Server public component implementation-state machine

The design intent and implementation of the State pattern will not be excerpted from Design Patterns here; let's just look at how to use it in game server programming.

First, in the mangos code, the login server uses this struct when processing messages from the client:

struct AuthHandler
{
    eAuthCmd cmd;
    uint32 status;
    bool (AuthSocket::*handler)(void);
};

This struct defines, for each message code, its handler function and the status it requires. The handler is called only when the current status matches; otherwise the message code is invalid. The status identifier is defined as a macro with two valid values, STATUS_CONNECTED and STATUS_AUTHED, that is, not-yet-authenticated and authenticated. The status changes at runtime, specifically after a message is received and correctly processed.

Now look at the description of the State pattern in Design Patterns. One point in its applicability section reads: when an operation contains a large multi-branch conditional statement, and the branches depend on the object's state, which is usually represented by one or more enumerated variables.

That matches what we are dealing with here, so maybe we can try it. The State pattern's solution is to put each conditional branch into an independent class.

Since there are only two status identifiers, we need only two independent state classes to represent the two states. Then, following the pattern, we also need a context class, the state machine manager, that holds the current state. After a little tidying, the code looks roughly like this:

State base interface:
StateBase
{
    void Enter() = 0;
    void Leave() = 0;
    void Process(Message* msg) = 0;
};

State machine base interface:
MachineBase
{
    void ChangeState(StateBase* state) = 0;

    StateBase* m_curState;
};

Our logic processing class derives from MachineBase; when a packet is taken off the queue, it is handed to the current state for processing. The two state classes derive from StateBase, and each handles only the messages valid under its status identifier. To perform a transition, a state calls MachineBase's ChangeState() method, explicitly telling the manager which state to go to; the state class therefore needs to hold a pointer to the state machine manager, which can be passed in when the state is constructed. Further implementation details we will not belabor.
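A compilable version of that sketch, reduced to the essentials. The message strings ("LOGON_PROOF", "REALM_LIST") are made up for illustration; mangos actually drives this with its handler table rather than state classes, so this only shows the pattern, not its code.

```cpp
#include <string>

class MachineBase;

// State base interface: each state handles only the messages valid in it.
class StateBase
{
public:
    explicit StateBase(MachineBase* machine) : m_machine(machine) {}
    virtual ~StateBase() {}
    virtual void Process(const std::string& msg) = 0;
protected:
    MachineBase* m_machine;   // so a state can request a transition
};

// The state machine manager: holds the current state, delegates to it.
class MachineBase
{
public:
    MachineBase() : m_curState(nullptr) {}
    void ChangeState(StateBase* state) { m_curState = state; }
    void Process(const std::string& msg) { m_curState->Process(msg); }
    StateBase* Current() const { return m_curState; }
private:
    StateBase* m_curState;
};

// Authenticated: only world-list requests make sense here.
class AuthedState : public StateBase
{
public:
    using StateBase::StateBase;
    void Process(const std::string& msg) override
    {
        if (msg == "REALM_LIST") { /* send the world list */ }
        // anything else is an invalid message code in this state
    }
};

// Connected but not yet authenticated: a successful proof moves us on.
class ConnectedState : public StateBase
{
public:
    ConnectedState(MachineBase* machine, StateBase* next)
        : StateBase(machine), m_next(next) {}
    void Process(const std::string& msg) override
    {
        if (msg == "LOGON_PROOF")           // verification succeeded:
            m_machine->ChangeState(m_next); // go to the authenticated state
    }
private:
    StateBase* m_next;
};
```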

Although a state machine avoids convoluted conditionals, it also introduces new trouble. When switching states, we may need to transfer some data from the old state object to the new one, which must be considered when defining the interface. If you would rather not copy the data, the shared fields can also live in the state machine class itself, though that may be less elegant to use.
As Design Patterns itself says, every pattern is one solution among many to an existing problem, not the only one. For today's example, with only the two states the login server handles, mangos's table lookup over handler entries may well be simpler; but as the number of states and status identifiers in a system grows, the State pattern earns its keep.

For instance, player state management on the game world server, and the various states involved in implementing NPC AI, are reserved for future topics.

Common server components-events and Signals

This section has been drafted several times over the past few days; I keep feeling I cannot organize the content clearly. Still, it is necessary to strike while the iron is hot, before the enthusiasm fades: first set down a rough version here and move on to the next theme, then come back and complete it if I get the chance. The content of this section is not settled; I hope you will offer plenty of comments.

Somewhat as in Qt's events and signals, I define action-request messages as events and state-change notifications as signals. For example, in a Qt application, a mouse click generates an event that is added to the event queue; while processing that event, a button control may emit a clicked() signal.

The corresponding example on our server: when a player logs in, the client sends a login request packet. The server can treat it as a user-login event; after the event is processed, a user-logged-in signal may be emitted.

In this way, just as in Qt, we can redefine how an event is handled, or even filter out certain events so they are never processed; whereas for a signal we only receive a notification, somewhat like the observers in the Observer pattern: upon an update notification we can only update our own state, with no way to affect the event that has already occurred.

Look more closely at the difference between events and signals. From the standpoint of our needs, all we require is the ability to register response functions for events or signals and be notified when they occur. The difference is that the return value of an event handler is meaningful: it decides whether processing of the event continues. In Qt, if the event handler returns true, the event is considered handled and QApplication moves on to the next event; if it returns false, the event dispatcher keeps looking for the next registered handler that can process the event. The return value of a signal handler, by contrast, means nothing to the signal dispatcher.

Simply put, we can define filters for events so that an event can be intercepted. That capability is needed everywhere on a game server.

There are plenty of open-source implementations of event and signal mechanisms around, such as FastDelegate, sigslot, and boost::signal. sigslot is even used by Google; the libjingle code shows how.

When implementing events and signals, you might consider using a single shared implementation for both; as analyzed above, the only difference between the two lies in the handling of return values.

Another issue deserving attention is priority during event and signal processing. In Qt, events are bound to windows, so the event callback starts at the current window and is propagated upward level by level until some window returns true and consumes the event. Signal handling is simpler and by default unordered; if a definite order is needed, the slot's position can be specified when it is connected.

In our setting there is no concept of a window, so event handling resembles signal handling: registered handlers must be called back in order, which means a priority mechanism is required.
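A unified dispatcher along the lines discussed, shared by both mechanisms, might look like this. Everything here is an invented sketch: handlers are registered with a priority, events stop at the first handler that returns true (the filter), and signals ignore return values and notify everyone.

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// One dispatcher for both events and signals; the only difference is
// whether the handler's return value is honored.
class Dispatcher
{
public:
    using Handler = std::function<bool(int)>;

    void Register(int priority, Handler h)
    {
        m_handlers.push_back({priority, std::move(h)});
        std::stable_sort(m_handlers.begin(), m_handlers.end(),
            [](const Entry& a, const Entry& b)
            { return a.priority < b.priority; });   // lower value runs first
    }

    bool FireEvent(int arg)     // stops at the first handler returning true
    {
        for (auto& e : m_handlers)
            if (e.handler(arg))
                return true;    // event filtered/consumed
        return false;
    }

    void FireSignal(int arg)    // notifies everyone, return values ignored
    {
        for (auto& e : m_handlers)
            e.handler(arg);
    }

private:
    struct Entry { int priority; Handler handler; };
    std::vector<Entry> m_handlers;
};
```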

Finally, consider when events and signals are actually processed. In Qt, events are kept in an event queue; if a new event is generated while one is being handled, it is appended to the tail, and when the current event is done, QApplication takes the next one from the head. Signals are different: they are delivered by immediate callback, so the moment a signal is emitted, every slot connected to it is called right away. That introduces a recursion problem: a signal handler may emit another signal, so signal processing fans out like a tree, and one important thing to watch for is whether this can create a cycle of calls.

There are many more considerations around the event mechanism, but they are all immature ideas. The text above uses three similar concepts, message, event, and signal, and in practice we often cannot decide which is which; the real situation is far messier than described here.

These are rough notes thrown out for discussion; I hope they spark some exchange.

Login server implementation

We have wandered quite far from our login server implementation; let's pull things back.

We have discussed the structure formed by the login server, region server, and game world servers. Here we list their responsibilities and relationships.

Gateway/WorldServer   Gateway/WorldServer   LoginServer   DNSServer   WorldServerMgr
         |                     |                 |             |              |
         ----------------------------------------------------------------------
                                         |
                                     Internet
                                         |
                                      clients

DNSServer provides load-balanced domain name resolution, returning a LoginServer IP to the client. WorldServerMgr maintains the current region's world server list, which LoginServer sends to clients. LoginServer handles player login and world server selection requests. Gateway/WorldServer is either a standalone world server or one fronted by a gateway that connects to the backend servers.

In the mangos code we notice the login server reads the world list from the database. But on official WoW servers the world list is clearly not fixed from the start; it is generated dynamically. After each weekly maintenance you can watch the list being built: at first it is empty, then world servers are added one by one. A world server that goes down is shown as offline rather than removed from the list; but after the next maintenance, the list starts empty again and every server is re-added.

From that description it is easy to think of keeping the world server information in a transient, in-memory list, which is exactly why the WorldServerMgr server was added. When a Gateway/WorldServer starts, it registers itself with WorldServerMgr, thereby adding the game world it represents to the list. Similarly, if DNSServer let LoginServers register themselves, then bringing up a temporary LoginServer would require no change to DNSServer's configuration file.

WorldServerMgr's internals are very simple: it listens on a fixed port, accepts active connections from world servers, and monitors their status. Heartbeat packets work well here: if a world server disconnects, or no heartbeat arrives within the allowed window, it is marked offline. WorldServerMgr also serves list requests from LoginServers. Since the world list rarely changes, a LoginServer need not query WorldServerMgr every time it sends the list; each LoginServer keeps its own copy, and whenever the list on WorldServerMgr changes, it notifies all LoginServers to update theirs. That notification could use the event mechanism described earlier, or the Observer pattern.
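The heartbeat-based status table can be sketched as follows. The class name is invented, and time is modeled as an integer tick passed in by the caller, an assumption made purely so the timeout logic is deterministic and testable; a real server would use the wall clock.

```cpp
#include <map>
#include <string>

// WorldServerMgr-style status table: record each world server's last
// heartbeat; a server silent longer than the timeout is reported offline.
class WorldList
{
public:
    explicit WorldList(int timeoutTicks) : m_timeout(timeoutTicks) {}

    // Registration and refresh look the same: note the time we heard it.
    void OnHeartbeat(const std::string& world, int now)
    {
        m_lastSeen[world] = now;
    }

    // Online means: known, and heard from within the timeout window.
    bool IsOnline(const std::string& world, int now) const
    {
        auto it = m_lastSeen.find(world);
        return it != m_lastSeen.end() && now - it->second <= m_timeout;
    }

private:
    int m_timeout;
    std::map<std::string, int> m_lastSeen;   // world name -> last heartbeat
};
```

A real WorldServerMgr would also fire the change notification to LoginServers whenever an entry flips between online and offline.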

That is all there is to WorldServerMgr. Now for the LoginServer, today's main subject.

We have discussed several common server components; let's actually try them here rather than stay purely theoretical. Start with the state machine. As mentioned, a connection on the login server has two states, account/password verification and server list selection. In fact there is a third state we have not discussed because it has little to do with the login flow itself: sending the update package. The three states convert roughly as follows:

LogonState -- verification succeeds -- version check -- version below latest -- go to UpdateState
                                                    |
                                                    -- version equals latest -- go to WorldState

The version check happens in LogonState, and LogonState itself decides the next state: the next state is determined by the current one. Password verification uses the SRP6 protocol; we will not detail the process, and each game does it differently anyway. The version check is even less worth exploring; it is just an if-else.

The update state is really a file transfer: after the file is sent, the client is told to run the update file, and the connection is closed. The world selection state sends the client the list of all game world gateway servers, including IP addresses, ports, and current load. As long as the client stays connected, the list is refreshed to it every 5 seconds; whether that is worth doing is open to debate.

The flow so far looks unremarkable, but it is not complete. What happens when the client picks a world? WoW's approach is this: when the client chooses a game world, it actively connects to that world server's IP and port and enters the game world, while the connection to the login server is kept alive until the client has truly connected to the chosen world server and finished queuing. This is a very necessary design: if we fail to reach the world server for some unexpected reason, or find it has a long queue and want to try another one, we will not have to verify our password all over again.

But what deserves our attention is not that; it is how the server identifies us when the client connects to a game world's gateway server. Suppose a careless player ignores the rules, skips account and password verification, and connects directly to the world server, like a passenger running straight to the boarding gate without a boarding pass. The gate attendant will send you back to the check-in counter, where staff verify your identity first and only then issue a boarding pass. Our login server works the same way: after verifying the client's identity, it issues the client a "boarding pass". This boarding pass has a technical name: the session key.
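Issuing the "boarding pass" can be sketched as generating a random key after successful authentication. This random-hex scheme is a simplification for illustration; real implementations typically derive the session key from the SRP6 exchange itself rather than generating it independently.

```cpp
#include <cassert>
#include <random>
#include <string>

// Sketch: after the account/password check succeeds, the login server
// issues a random session key for this session. 40 hex characters is
// an assumed length, chosen only for the example.
std::string IssueSessionKey(std::mt19937_64& rng) {
    static const char* kHex = "0123456789abcdef";
    std::string key(40, '0');
    for (char& c : key)
        c = kHex[rng() % 16];  // fill each position with a random hex digit
    return key;
}
```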

Can the client now log on to the world server's gateway with this session key? One question remains: how does the gateway know whether the key is forged? It cannot just trust it; forgeries have to be assumed possible. The simple method is to go back to the issuer, the login server, and ask whether it really issued this pass. But with many login servers, asking them one by one is far too slow, and the long queue behind you would surely start screaming. So instead, the login server could store the key in the database and let the gateway server verify it against the database itself. That looks like a workable solution.

If you think this puts too much pressure on the database, you could instead do something similar to the worldservermgr approach: keep the keys in an in-memory list, or even store the whole list on worldservermgr itself, since it is unique within the region. The two schemes are essentially the same; the difference is only where you choose to put the load. Either way the query pressure is considerable: think of every player in the entire region going through it. So it is worth considering a new scheme, one that does not funnel every query through a single point in the region.

Since we cannot store the session keys in one central place, a feasible alternative is this: at any moment, an account's session key is stored in exactly one place, either the server the client is currently connected to or the server it is about to connect to. Let's walk through the process. When the client passes verification on the loginserver, the loginserver generates a session key for this session, but keeps it only locally: it neither writes it to the database nor sends it to worldservermgr. When the client wants to enter a game world, it must first tell the currently connected loginserver which world server it is heading to. The loginserver then reliably transfers the session key to that target server; "reliably" means it must confirm the target has received the key, and it must delete its own local copy. Once the transfer succeeds, the loginserver tells the client to connect to the target server. Now, when the target server validates the session key, it does not need to query anywhere else; it only looks the key up in its own local session-key list.
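The hand-off described above can be sketched as moving an entry between two key tables. Here a "server" is reduced to just its key table, and the reliable-transfer messaging between machines is omitted; only the ownership rule (exactly one copy exists at any time) is modeled.

```cpp
#include <cassert>
#include <string>
#include <unordered_map>

// account -> session key, one table per server in this sketch.
using KeyTable = std::unordered_map<std::string, std::string>;

// Move the key from the login server's table to the target world
// server's table. The local copy must be erased once delivery
// succeeds, so the key exists in exactly one place.
bool TransferSessionKey(KeyTable& login, KeyTable& world,
                        const std::string& account) {
    auto it = login.find(account);
    if (it == login.end()) return false;  // nothing to transfer
    world[account] = it->second;          // target server now owns the key
    login.erase(it);                      // delete the local copy
    return true;
}

// The world server then validates purely against its local table,
// with no query to the login server, database, or worldservermgr.
bool ValidateSessionKey(const KeyTable& world, const std::string& account,
                        const std::string& key) {
    auto it = world.find(account);
    return it != world.end() && it->second == key;
}
```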

Of course, for the sake of security, every server sets a validity period when it receives a new session key; if the key has not been authenticated by the time the period expires, it is deleted automatically. Likewise, every server deletes its session keys when the corresponding connection closes, ensuring that each session key is used for only one connection session.
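The validity window can be sketched as below. The 30-second window is an assumed value, not taken from the article; a real server would also hook key deletion into its connection-close path.

```cpp
#include <cassert>
#include <chrono>
#include <iterator>
#include <string>
#include <unordered_map>

using Clock = std::chrono::steady_clock;

// A key that has been received but not yet authenticated, with the
// deadline by which authentication must happen.
struct PendingKey {
    std::string       key;
    Clock::time_point expires;
};

struct PendingTable {
    std::unordered_map<std::string, PendingKey> keys;

    // Record a newly transferred key with an assumed 30-second window.
    void Add(const std::string& account, const std::string& key) {
        keys[account] = {key, Clock::now() + std::chrono::seconds(30)};
    }

    // Called periodically: drop every key whose window has passed
    // without the client completing authentication.
    void Sweep(Clock::time_point now) {
        for (auto it = keys.begin(); it != keys.end();)
            it = (now >= it->second.expires) ? keys.erase(it) : std::next(it);
    }
};
```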

Clearly, though, WoW did not adopt this scheme, because the client sends no confirmation request to the server when it selects a world server. WoW's session keys are presumably stored in something like worldservermgr, or, as in mangos, in the database. Either way, we understand the process, and the code implementation is straightforward, so we won't go into further detail.

 
