Introduction to ZooKeeper (reproduced)

Source: Internet
Author: User

I have always been confused about ZooKeeper's applications and underlying principles. I recently read an article that explains them very well, and I am sharing it below:

Scenario 1

There is a scenario: the system has about 100 million users, each with an average of three email accounts, and every account must poll for mail every 5 minutes, so up to 300 million mailboxes may need to be fetched per cycle (not counting attachments and message bodies). The work is split across 20 machines behind different network egress points, so the computing pressure on each machine is not very high.

Through discussion and past experience, we determined that parallel computing fits this scenario, and we also want to be able to dynamically add/remove compute nodes and update the set of computing tasks online without affecting the other nodes in the computing unit. However, four problems must be solved first, or serious failures may occur:

  1. When one of the 20 machines goes down, how do the other machines take over its computing tasks? Otherwise some users' mail will not be processed and their service will ultimately be interrupted.
  2. As the number of users grows, adding machines relieves the computing bottleneck, but doing so requires restarting every compute node; if that is necessary, the whole system may become unavailable.
  3. When the number of users grows or shrinks, some compute nodes may be overloaded while others sit idle, because the nodes know nothing about each other's running load.
  4. How do we notify every node of the others' load status, and how do we guarantee that these notifications reach each compute node reliably and in real time?

Setting the jargon aside, what we need in plain terms is: 1) state recording, 2) event notification, 3) a reliable and stable central coordinator, 4) ease of use and simple management.
ZooKeeper can solve all of these problems. Coordinator, observer, and distributed lock are all keywords associated with ZooKeeper in distributed computing. In our system, ZooKeeper handles event notification, queues, and priority queues, and these features play an important role in distributed computing.

Scenario 2

Suppose we have 20 search-engine servers (each responsible for part of the total index), a master server (which fans search requests out to the 20 search servers and merges their result sets), a backup master server (which replaces the master when it goes down), and a Web CGI front end (which sends search requests to the master). At any time, 15 of the search servers are providing the search service while 5 are generating indexes; the 20 servers regularly switch between serving search and generating indexes. With ZooKeeper, the master can automatically detect how many servers are currently providing search and send requests only to them; when the master goes down, the backup master is enabled automatically; and the Web CGI automatically learns of the master's network-address change. How is this achieved?

 

1. Every server providing search creates an ephemeral znode:

zk.create("/search/nodes/node1", "hostname".getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

2. The master obtains a list of child znodes from ZooKeeper and sets a watch: zk.getChildren("/search/nodes", true);

3. The master traverses these child nodes and reads their data to build the list of servers currently providing search.

4. When the master receives a child-node-changed event, it returns to step 2.

5. The master creates its own ephemeral node in ZooKeeper: zk.create("/search/master", "hostname".getBytes(), Ids.OPEN_ACL_UNSAFE, CreateMode.EPHEMERAL);

6. The backup master monitors the "/search/master" node in ZooKeeper. When that znode changes (for instance, the ephemeral node disappears because the master died), the backup starts itself as the new master and writes its own network address into the node.

7. The Web CGI reads the master's network address from the "/search/master" node and sends search requests to it.

8. The Web CGI also monitors the "/search/master" node. When the znode's data changes, it re-reads the master's network address from the node and switches over to the new master.
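Steps 1–4 above can be sketched in plain Java. The sketch below simulates the /search/nodes subtree with an in-memory sorted map; a real implementation would use the org.apache.zookeeper.ZooKeeper client with watches, and the class and method names here are illustrative:

```java
import java.util.*;

// Simulated sketch of steps 1-4: an in-memory SortedMap stands in for the
// /search/nodes subtree; real code would use org.apache.zookeeper.ZooKeeper.
public class SearchRegistry {
    // znode path -> data ("hostname" in the article's example)
    private final SortedMap<String, String> znodes = new TreeMap<>();

    // Step 1: a search server registers itself (ephemeral in real ZK).
    public void register(String node, String hostname) {
        znodes.put("/search/nodes/" + node, hostname);
    }

    // A server crashing = its ephemeral node disappearing.
    public void crash(String node) {
        znodes.remove("/search/nodes/" + node);
    }

    // Steps 2-3: the master lists the children and reads their data
    // to build the current list of search servers.
    public List<String> liveServers() {
        return new ArrayList<>(znodes.values());
    }

    public static void main(String[] args) {
        SearchRegistry zk = new SearchRegistry();
        zk.register("node1", "host-a");
        zk.register("node2", "host-b");
        System.out.println(zk.liveServers()); // [host-a, host-b]
        zk.crash("node1");                    // step 4: child-changed event fires,
        System.out.println(zk.liveServers()); // master rebuilds the list: [host-b]
    }
}
```

Removing an entry plays the role of an ephemeral node vanishing when a search server's session dies; in real ZooKeeper the master would re-run getChildren when the child-changed watch fires.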

In my tests: in a ZooKeeper cluster of three nodes, one leader and two followers, I stopped the leader; the two followers then elected a new leader, and the data retrieved afterwards was unchanged. I would like ZooKeeper to help Hadoop in the following ways:

 

Hadoop: use ZooKeeper event handling to ensure that the whole cluster has only one NameNode, and to store configuration information.

HBase: use ZooKeeper event handling to ensure that the whole cluster has only one HMaster, to detect HRegionServers coming online and going down, and to store the access control list.

What is zookeeper?

Official description: the ZooKeeper distributed service framework is a sub-project of Apache Hadoop. It mainly solves data-management problems frequently encountered by distributed applications, such as unified naming, state synchronization, cluster management, and management of distributed application configuration.

A nice abstraction. Let's approach it differently: first see what functions ZooKeeper provides, then see what can be built on top of them.

What does zookeeper provide?

In short, ZooKeeper = file system + notification mechanism.

1. File System

Zookeeper maintains a data structure similar to a file system:

 

Each directory entry, such as NameService above, is called a znode. As in a file system, znodes can be freely added and deleted, and child znodes can be created under a znode; the one difference is that a znode can also store data.

There are four types of znodes:

1. PERSISTENT: persistent node

The node still exists after the client disconnects from ZooKeeper.

2. PERSISTENT_SEQUENTIAL: persistent sequential node

The node still exists after the client disconnects, and ZooKeeper appends a sequence number to the node name.

3. EPHEMERAL: ephemeral node

The node is deleted when the client's session with ZooKeeper ends.

4. EPHEMERAL_SEQUENTIAL: ephemeral sequential node

The node is deleted when the client disconnects, and ZooKeeper appends a sequence number to the node name.
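For the sequential types, ZooKeeper appends a zero-padded 10-digit counter (maintained per parent node) to the requested name. A minimal sketch of that naming rule, with an illustrative helper name:

```java
public class SequentialNames {
    // *_SEQUENTIAL znodes get a 10-digit, zero-padded counter appended
    // to the requested name; the counter is maintained per parent node.
    public static String sequentialName(String prefix, int counter) {
        return String.format("%s%010d", prefix, counter);
    }

    public static void main(String[] args) {
        System.out.println(sequentialName("/locks/lock-", 0));  // /locks/lock-0000000000
        System.out.println(sequentialName("/locks/lock-", 11)); // /locks/lock-0000000011
    }
}
```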

 

2. Notification Mechanism

A client registers watches on the znodes it cares about. When a znode changes (its data changes, it is deleted, or a child node is added or removed), ZooKeeper notifies the client.

That's all there is to it. Now let's see what we can do with these two things.

 

What can we do with ZooKeeper?

1. Naming Service

This seems the simplest. Creating a directory in the ZooKeeper file system yields a unique path. When we cannot determine in advance (for example, under tborg) which machines the upstream programs are deployed on, upstream and downstream programs can simply agree on a path and use it to discover each other.

 

2. Configuration Management

Programs always need configuration, and when a program is deployed on many machines, changing the configuration machine by machine becomes painful. So we put all of this configuration on ZooKeeper, saving it in a znode, and have every application watch that node. Once the configuration changes, each application receives a notification from ZooKeeper, fetches the new configuration, and applies it to the system.

 

3. Cluster Management

So-called cluster management comes down to two things: knowing whether machines have joined or left, and electing a master.

For the first point, all machines agree to create ephemeral nodes under a parent directory, say GroupMembers, and then watch the parent node for child-node change messages. When a machine dies, its connection to ZooKeeper is broken and the ephemeral node it created is deleted; all other machines are notified that a sibling directory has been removed, so everyone knows that machine has left. New machines joining work the same way: everyone is notified that a new sibling directory has been added.

For the second point, we change things slightly: all machines create ephemeral sequential nodes, and each time the machine with the smallest sequence number is selected as the master.
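The "smallest sequence number wins" rule can be sketched as a pure function over the children of the election node (the suffix format follows the sequential-node naming convention; the class and method names are mine):

```java
import java.util.*;

public class MasterElection {
    // Given the children of the election parent (ephemeral sequential nodes),
    // the master is the node with the smallest sequence suffix.
    public static String pickMaster(List<String> children) {
        return children.stream()
                .min(Comparator.comparingLong(MasterElection::sequenceOf))
                .orElseThrow(NoSuchElementException::new);
    }

    static long sequenceOf(String name) {
        // the suffix is the trailing 10-digit counter, e.g. "n-0000000003"
        return Long.parseLong(name.substring(name.length() - 10));
    }

    public static void main(String[] args) {
        List<String> children = Arrays.asList("n-0000000007", "n-0000000003", "n-0000000012");
        System.out.println(pickMaster(children)); // n-0000000003
        // if the master dies, its ephemeral node vanishes and the next-smallest wins:
        System.out.println(pickMaster(Arrays.asList("n-0000000007", "n-0000000012"))); // n-0000000007
    }
}
```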

 

4. Distributed Lock

With ZooKeeper's consistent file system, the locking problem becomes easy. Lock services fall into two categories: one maintains exclusivity, the other controls ordering.

For the first kind, we treat a znode on ZooKeeper as the lock, implemented via create. All clients try to create the /distribute_lock node, and the client whose create succeeds owns the lock. When it is done, it deletes the /distribute_lock node it created, releasing the lock.

For the second kind, /distribute_lock exists in advance, and all clients create ephemeral sequential nodes under it. As with master election, the client with the smallest sequence number obtains the lock, and deletes its node when finished.

5. Queue Management

Two types of Queues:

1. Synchronization queue: the queue becomes usable only when all of its members have gathered; otherwise everyone waits until all members arrive.

2. FIFO queue: entries are enqueued and dequeued in first-in, first-out order.

For the first kind, create an ephemeral node under an agreed directory and watch whether the number of child nodes has reached the required count.

The second kind follows the same basic principle as the ordering scenario in the distributed-lock service: entries are assigned sequence numbers on enqueue and are dequeued in sequence-number order.
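Both queue checks reduce to simple operations over the child list (a sketch; the class and method names are mine):

```java
import java.util.*;

public class Queues {
    // Synchronization queue: usable only once the agreed member count is reached.
    public static boolean ready(List<String> members, int required) {
        return members.size() >= required;
    }

    // FIFO queue: dequeue in the sequence-number order assigned on enqueue.
    public static List<String> fifoOrder(List<String> entries) {
        List<String> sorted = new ArrayList<>(entries);
        sorted.sort(Comparator.comparingLong(e -> Long.parseLong(e.substring(e.length() - 10))));
        return sorted;
    }

    public static void main(String[] args) {
        System.out.println(ready(Arrays.asList("m1", "m2"), 3)); // false: keep waiting
        System.out.println(fifoOrder(Arrays.asList("q-0000000002", "q-0000000000", "q-0000000001")));
        // [q-0000000000, q-0000000001, q-0000000002]
    }
}
```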

Now we know what we can do with ZooKeeper. But as programmers, we also want to know how ZooKeeper achieves it. Maintaining a file system on a single machine is not hard; maintaining a file system across a cluster while keeping the data consistent is very hard.

 

Distributed and Data Replication

As a cluster providing a consistent data service, ZooKeeper naturally has to replicate data across all of its machines. The benefits of data replication:

1. Fault tolerance
A fault on one node does not stop the entire system; other nodes can take over its work;

2. Improve system scalability
Distribute the load to multiple nodes or add nodes to improve the load capacity of the system;

3. Improve Performance
Allow the client to access the nearest node locally to speed up user access.

 

From the perspective of how transparent read/write access is to clients, data-replicated cluster systems fall into two types:

1. Write master (WriteMaster)
Data modifications are submitted to a designated node; reads are unrestricted and may go to any node. The client must therefore distinguish reads from writes, commonly known as read/write splitting;

2. Write any (WriteAny)
Data modifications may be submitted to any node, just like reads. The client is then fully transparent to the roles of cluster nodes and to changes in those roles.

 

ZooKeeper uses the write-any style. By adding machines, its read throughput and responsiveness scale very well, while write throughput necessarily falls as machines are added (this is also why the Observer role was introduced). Responsiveness depends on the implementation strategy: delayed replication keeps eventual consistency, while immediate replication responds quickly.

What we care about is how ZooKeeper keeps data consistent across every machine in the cluster, which brings us to the Paxos algorithm.

 

Data Consistency and the Paxos Algorithm

It is said that the Paxos algorithm's difficulty is as renowned as the algorithm itself. So let's start with how data consistency can be maintained at all. Here is the basic principle:

In a distributed database system, if the initial status of each node is the same, and each node executes the same operation sequence, they can finally get a consistent state.

What problem does the Paxos algorithm solve? Exactly this: guaranteeing that every node executes the same operation sequence. Well, isn't that simple? Have a master maintain a global write queue and number every write operation through it; no matter how many nodes we have, as long as writes are globally numbered, consistency is assured. True, but what if the master crashes?

The Paxos algorithm assigns global numbers to write operations by voting. Only one write can be approved at a time; concurrent writes must compete for votes, and only a write that receives more than half of the votes is approved (so at any moment there is only one approved write). Writes that lose the competition simply initiate another round of voting, so across round after round of voting all writes end up strictly numbered, with numbers increasing monotonically. If a node has applied the write numbered 100 and then receives the write numbered 99 (due to network delay or other unforeseeable reasons), it immediately realizes its data is inconsistent, stops serving clients, and restarts the synchronization process. The failure of any minority of nodes does not affect the consistency of the whole cluster (with 2n+1 nodes in total, up to n failures are tolerated).
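The inconsistency check described above, a node seeing write 99 after write 100, is a one-line rule (a sketch; the names are mine):

```java
public class WriteOrder {
    // A node applies writes in strictly increasing global-number order;
    // receiving a smaller (or equal) number after a larger one signals
    // inconsistency, so the node must stop serving and resynchronize.
    public static boolean mustResync(long lastApplied, long incoming) {
        return incoming <= lastApplied;
    }

    public static void main(String[] args) {
        System.out.println(mustResync(100, 101)); // false: in order, apply it
        System.out.println(mustResync(100, 99));  // true: stop serving, resynchronize
    }
}
```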

Summary

As a sub-project of the Hadoop project, ZooKeeper is an essential module for Hadoop cluster management. It is mainly used to coordinate data in the cluster: for example, it manages the NameNode in a Hadoop cluster, as well as master election in HBase and state synchronization between servers.

 

How ZooKeeper Works

ZooKeeper is a distributed, open-source coordination service for distributed applications. It exposes a simple set of primitives on which distributed applications can build synchronization, configuration maintenance, and naming services. ZooKeeper is a Hadoop sub-project, and its history needs no elaboration here. In distributed applications, engineers often cannot use lock mechanisms well, and message-based coordination does not suit every application, so a reliable, scalable, distributed, and configurable coordination mechanism is needed to unify the system's state. That is ZooKeeper's purpose. This article briefly analyzes how ZooKeeper works; it does not cover how to use it.

1. Basic Concepts of ZooKeeper

1.1 Roles

ZooKeeper has the following three roles:

  • Leader: initiates and resolves votes, and updates the system state.
  • Follower: receives client requests and returns results, and votes during elections and on proposals.
  • Observer: accepts client connections and forwards write requests to the leader, but does not take part in voting; it exists only to scale read throughput.

System model: (diagram not reproduced)

1.2 Design Goals

1. Eventual consistency: no matter which server a client connects to, it is shown the same view. This is ZooKeeper's most important characteristic.

2. Reliability: ZooKeeper is simple, robust, and performs well. If a message m is accepted by one server, it will eventually be accepted by all servers.

3. Timeliness: ZooKeeper guarantees that a client obtains server updates or failure information within a bounded time interval. However, due to network delay and other factors, it cannot guarantee that two clients see a newly updated value at the same moment. A client that needs the latest data should call the sync() interface before reading.

4. Wait-free: slow or failed clients cannot interfere with the requests of fast clients, so every client's requests are served effectively.

5. Atomicity: an update either succeeds or fails; there is no intermediate state.

6. Ordering: this includes global order and partial order. Global order means that if message a is published before message b on one server, then a is published before b on every server. Partial order means that if message b is published by the same sender after message a, then a is ordered before b.

2. How ZooKeeper Works

The core of ZooKeeper is atomic broadcast, which keeps the servers in sync. The protocol implementing this is called the Zab protocol. Zab has two modes: recovery mode (leader election) and broadcast mode (synchronization). When the service starts or the leader crashes, Zab enters recovery mode; recovery mode ends once a leader has been elected and a majority of servers have synchronized state with the leader. State synchronization ensures that the leader and the servers hold the same system state.

To guarantee transaction ordering, ZooKeeper uses a monotonically increasing transaction id (zxid) to identify transactions; a zxid is attached to every proposal when it is issued. In the implementation, the zxid is a 64-bit number: the high 32 bits are the epoch, which identifies a leadership period (each newly elected leader gets a new epoch marking its reign), and the low 32 bits are an incrementing counter.
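That 32/32 zxid split can be expressed directly with bit operations (a sketch of the layout described above; the helper names are mine):

```java
public class Zxid {
    // zxid layout per the text: high 32 bits = epoch, low 32 bits = counter.
    public static long make(long epoch, long counter) {
        return (epoch << 32) | (counter & 0xFFFFFFFFL);
    }

    public static long epochOf(long zxid)   { return zxid >>> 32; }
    public static long counterOf(long zxid) { return zxid & 0xFFFFFFFFL; }

    public static void main(String[] args) {
        long zxid = make(5, 42);
        System.out.println(Long.toHexString(zxid)); // 50000002a
        System.out.println(epochOf(zxid));          // 5
        System.out.println(counterOf(zxid));        // 42
        // a new leader starts a new epoch and resets the counter:
        System.out.println(epochOf(make(6, 0)));    // 6
    }
}
```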

Each server has three States during its work:

  • Looking: The current server does not know who the leader is.
  • Leading: The current server is the selected leader.
  • Following: The leader has been elected and the current server is synchronized with it.
2.1 Leader Election

When the leader crashes or loses a majority of its followers, ZK enters recovery mode, in which a new leader is elected and all servers are restored to a correct state. ZK has two election algorithms: one based on basic Paxos and one based on fast Paxos; fast Paxos is the default. First, the basic Paxos flow:

  1. The election thread is the thread with which the current server initiates the election; its main job is to tally the votes and select the recommended server;
  2. The election thread first sends a query to all servers (including itself);
  3. On receiving a reply, it verifies that the reply belongs to its own query (by checking the zxid), obtains the other side's id (myid), stores it in the list of queried servers, and finally records the leader proposed by the other side (id, zxid) in the vote record of the current election;
  4. After receiving replies from all servers, it computes the server with the largest zxid and sets that server's information as the candidate for the next vote;
  5. The thread sets the server with the largest zxid as the leader it currently recommends. If that server receives n/2 + 1 votes, it is set as the elected leader and each server adjusts its own state based on the winner's information; otherwise the process repeats until a leader is elected.

From this flow we can conclude that for the leader to obtain majority support, the total number of servers must be odd, 2n+1, and no fewer than n+1 servers must survive (i.e., at most n may fail).
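The quorum arithmetic can be captured in two small functions (a sketch; the names are mine):

```java
public class Quorum {
    // With 2n+1 servers, a majority is n+1 votes and up to n failures are tolerated.
    public static int majority(int total)  { return total / 2 + 1; }
    public static int tolerated(int total) { return (total - 1) / 2; }

    public static void main(String[] args) {
        System.out.println(majority(3));  // 2
        System.out.println(tolerated(3)); // 1 (the earlier test: 3 nodes survived 1 leader crash)
        System.out.println(majority(5));  // 3
        System.out.println(tolerated(5)); // 2
    }
}
```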

Every server repeats the above process after it starts. In recovery mode, a server that is recovering from a crash or has just started restores data and session information from disk snapshots; ZK records a transaction log and takes snapshots periodically so that state can be restored on recovery. The flowchart of leader election is as follows:

In the fast Paxos election, a server first proposes to all servers that it become the leader. When the other servers receive the proposal, they resolve any epoch and zxid conflicts, accept the proposal, and send the proposer a message accepting it; this repeats until the leader is elected. The flowchart is as follows:

2.2 synchronization process

After selecting the leader, ZK enters the State synchronization process.

  1. The leader waits for servers to connect;
  2. A follower connects to the leader and sends it its largest zxid;
  3. The leader determines the synchronization point from the follower's zxid;
  4. After synchronization completes, the leader notifies the follower that it is now up to date (UPTODATE);
  5. On receiving the UPTODATE message, the follower can again accept client requests and serve them.

The flowchart is as follows:
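The synchronization point chosen in step 3 can be sketched as a log diff: everything committed after the follower's largest zxid is replayed. This is a simplification (real Zab also handles truncation and full-snapshot cases), and the names are mine:

```java
import java.util.*;

public class SyncPoint {
    // The leader picks the sync point from the follower's largest zxid:
    // every committed transaction after that zxid must be replayed.
    public static List<Long> toReplay(List<Long> committedLog, long followerMaxZxid) {
        List<Long> diff = new ArrayList<>();
        for (long zxid : committedLog)
            if (zxid > followerMaxZxid) diff.add(zxid);
        return diff;
    }

    public static void main(String[] args) {
        List<Long> log = Arrays.asList(1L, 2L, 3L, 4L, 5L);
        System.out.println(toReplay(log, 3)); // [4, 5] -- then the follower is UPTODATE
        System.out.println(toReplay(log, 5)); // [] -- nothing to replay
    }
}
```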

2.3 Workflow

2.3.1 Leader Workflow

The leader has three main functions:

  1. Restore data;
  2. Maintain a heartbeat with learners, receive learner requests, and determine each request's message type;
  3. Process the learner's message types: PING, REQUEST, ACK, and REVALIDATE messages, each handled differently.

A PING message carries a learner's heartbeat. A REQUEST message is a proposal forwarded by a follower, covering write requests and sync requests. An ACK message is a follower's reply to a proposal; once more than half of the followers have acked, the proposal is committed. A REVALIDATE message extends a session's validity.
The leader's workflow is shown in the figure below. The actual implementation is considerably more complex, starting three threads to carry out these functions.

2.3.2 Follower Workflow

A follower has four main functions:

  1. Send requests to the leader (PING, REQUEST, ACK, and REVALIDATE messages);
  2. Receive and process messages from the leader;
  3. Receive client requests; if a request is a write, forward it to the leader for voting;
  4. Return results to the client.

A follower processes the following message types from the leader in a loop:

  1. PING message: heartbeat;
  2. PROPOSAL message: a proposal initiated by the leader, which the follower must vote on;
  3. COMMIT message: information about the server's latest committed proposal;
  4. UPTODATE message: synchronization is complete;
  5. REVALIDATE message: depending on the leader's revalidate result, close the session awaiting revalidation or allow it to accept messages again;
  6. SYNC message: return the sync result to the client; this message is originally initiated by the client to force a read of the latest data.

The follower's workflow is shown in the figure below; the actual implementation uses five threads to carry out these functions.

The observer's flow is not described separately; the only difference from a follower is that an observer does not take part in the voting initiated by the leader.

Appendix: overview of typical use cases of zookeeper

ZooKeeper is a highly available framework for distributed data management and system coordination. Built on the Paxos algorithm, it guarantees strong data consistency in a distributed environment, which makes it applicable to many scenarios. There are many write-ups of ZK use cases online; this appendix describes the use cases seen in the author's own projects. It is worth noting that ZK was not designed specifically for these scenarios; they are usage patterns that many developers later worked out from its features. You are very welcome to share your own clever uses of ZK.

Each scenario type below is described in terms of the ZK features it uses, followed by concrete applications.

Data publishing and subscription

Publish/subscribe here is just configuration management: as the name suggests, data is published to a ZK node for subscribers to fetch dynamically, giving centralized management and dynamic updating of configuration information. Global configuration data and service address lists are a natural fit.

1. Index information and the state of machine nodes in the cluster are stored on designated ZK nodes for clients to subscribe to.

2. Processed system logs are stored; these logs are usually cleared after 2–3 days.

3. Configuration used by applications is managed centrally: an application fetches the configuration once at startup and registers a watcher on the node, so every time the configuration is updated the application is notified in real time and fetches the latest version.

4. Global variables needed in business logic, such as the message-queue offsets common in messaging middleware, are stored on ZK so that every sender in the cluster knows the current sending progress.

5. Some system information must be obtained dynamically and also be manually modifiable. Previously this usually meant exposing an interface such as JMX; with ZK, you only need to store the information on a ZK node.

Name Service

This is mainly used as a distributed naming service: by calling ZK's create-node API it is easy to create a globally unique path, which can then serve as a name.

 

Distributed notification/coordination

ZooKeeper has a distinctive watcher registration and asynchronous notification mechanism, which effectively implements notification and coordination between different systems in a distributed environment, giving real-time handling of data changes. Typically, different systems register the same znode on ZK and listen for changes to it (the znode's content or its children); when one system updates the znode, the others receive the notification and act on it.

1. Another heartbeat-detection mechanism: the detecting and detected systems are not directly linked, but are associated through a node on ZK, greatly reducing system coupling. 2. Another system-scheduling mode: a system consists of a console and a push system; the console controls what the push system pushes. Operations performed in the console actually modify the state of certain nodes on ZK, and ZK notifies the clients that registered watchers on those nodes, i.e., the push system, which then carries out the corresponding push tasks.

 

3. Another work-report mode: similar to a task-distribution system. After a subtask starts, it registers an ephemeral node in ZK and periodically reports its progress by writing it back to that node, so the task manager knows the progress in real time.

In short, using ZooKeeper for distributed notification and coordination can greatly reduce coupling between systems.

Distributed Lock

A distributed lock mainly relies on ZooKeeper's strong data consistency: clients can fully trust that, at every moment, every node of a ZK cluster holds identical data for the same znode. Lock services fall into two categories: one keeps exclusivity, the other controls ordering.

 

Exclusivity means that of all the clients trying to obtain the lock, only one can succeed. The usual practice is to treat a znode on ZK as the lock, implemented via the create-znode call: all clients try to create the /distribute_lock node, and the client whose create succeeds owns the lock.

Ordering control means that every client trying to obtain the lock will eventually be scheduled to run, but in a global order. The approach is largely the same as above, except that /distribute_lock exists in advance and each client creates an ephemeral sequential node under it (controlled by the node-creation attribute: CreateMode.EPHEMERAL_SEQUENTIAL). The parent node (/distribute_lock) maintains a sequence guaranteeing the creation order of its children, which forms the global order of the clients.

 

Cluster Management

1. Cluster machine monitoring: commonly used in scenarios with high demands on machine state and online rate, where changes in the cluster must be detected quickly. In such scenarios there is usually a monitoring system that checks in real time whether cluster machines are alive. Traditionally, the monitoring system periodically probes each machine (e.g., by ping), or each machine periodically reports "I am still alive" to the monitoring system. This works, but has two obvious problems: 1) when the machines in the cluster change, a lot has to be modified; 2) there is a certain delay.

 

Two ZooKeeper features make possible a different, real-time cluster-machine monitoring system: (a) a client can register a watcher on node x and is notified when x's children change; (b) an ephemeral node disappears as soon as the session between the client and the server ends or expires.

For example, the monitoring system registers a watcher on the /clusterServers node, and every machine that is added dynamically creates an ephemeral node under it: /clusterServers/{hostname}. The monitoring system then learns of machine additions and removals in real time; what it does with that information is the monitoring system's own business.
2. Master election: the most classic ZooKeeper use case.

In a distributed environment, the same business application is deployed on different machines, yet some business logic (such as a time-consuming computation or network I/O) typically needs to be executed by only one machine in the whole cluster, with the rest sharing the result. This greatly reduces duplicated work and improves performance, and electing that one machine is the master-election problem.

Thanks to ZooKeeper's strong consistency, node creation is guaranteed to be globally unique even under heavy distributed concurrency: when multiple clients simultaneously request creation of /currentmaster, only one request can succeed.

With this feature, master election in a distributed environment becomes easy.

Furthermore, this scenario has evolved into dynamic master election, which relies on EPHEMERAL_SEQUENTIAL nodes.

As noted above, only one create request can succeed. The variation here is to let all requests succeed, but with a creation order, so the nodes that end up on ZK look like: /currentmaster/{sessionId}-1, /currentmaster/{sessionId}-2, /currentmaster/{sessionId}-3, ... Each time, the machine with the smallest serial number is selected as the master. If that machine dies, the node it created disappears immediately, and afterwards the machine with the smallest remaining number becomes the master.

1. In a search system, if every machine in the cluster generated a full index it would not only be time-consuming but could not guarantee that the index data stayed consistent between machines, so we let the cluster's master generate the full index and synchronize it to the other machines. 2. A disaster-tolerance measure for master election: the master can also be specified manually at any time, i.e., when master information cannot be obtained from ZK, it can be fetched from a configured location over HTTP.

Distributed Queue

For queues, I currently see two kinds: the conventional first-in-first-out queue, and a queue that waits until all of its members have gathered and then executes in order. The first, FIFO, kind works on the same basic principle as the ordering scenario in the distributed-lock service.

 

The second kind of queue is actually an enhancement built on the FIFO queue. Usually a /queue/num node is created in advance under the znode /queue and assigned the value n (or n is assigned directly to /queue) to indicate the queue's size; each time a member joins, it checks whether the queue size has been reached, i.e., whether execution can begin. A typical use: in a distributed environment, a large task A can proceed only once many subtasks have finished (or their preconditions are ready). Whenever a subtask finishes (becomes ready), it creates its own ephemeral sequential node (CreateMode.EPHEMERAL_SEQUENTIAL) under /taskList; once /taskList has the specified number of children, the next step can proceed in order.

 

 

 

Refer:

http://zookeeper.apache.org/
http://blog.csdn.net/cutesource/article/details/5822459
http://blog.csdn.net/pwlazy/article/details/8080626
http://nileader.blog.51cto.com/1381108/795265
http://nileader.blog.51cto.com/1381108/926753
http://nileader.blog.51cto.com/1381108/795230
http://netcome.iteye.com/blog/1474255

