Open source software: NoSQL database Cassandra

Source: Internet
Author: User
Tags: cassandra, memcached, neo4j



In a previous article, "Introduction to the Graph Database Neo4j", we introduced neo4j, a very popular graph database. In this article, we'll briefly introduce another type of NoSQL database: Cassandra.



The reason we looked at Cassandra is the same as for neo4j: our products need to record large amounts of data that relational databases cannot process quickly enough. Cassandra, along with MongoDB (which will be introduced in a later article), was a candidate in our technology selection process. Although we did not choose Cassandra in the end, the internal mechanisms and ways of thinking we encountered during the selection process were very interesting. The selection also drew on the experience of the CAM (Cloud Availability Manager) group from their actual use. So I have organized my notes into this article to share.






Technology selection



Technology selection is often a very rigorous process. Since a project is usually developed in collaboration by dozens or even hundreds of developers, a careful technology selection can significantly improve overall development efficiency. When trying to design a solution for a particular class of requirements, we often have a wide variety of technologies to choose from. In order to select a suitable technology for these requirements, we need to weigh the learning curve, development cost, maintenance cost and many other factors. These factors mainly include:


    • Whether the functionality provided by the technology can solve the problem in its entirety.
    • How extensible the technology is: whether it allows users to add custom components to satisfy special needs.
    • Whether the technology has rich and complete documentation, and whether professional support is available, free or even paid.
    • Whether the technology is used by many people, especially in large enterprises, and whether there are successful cases.


Through this process, we gradually sift through the technologies available on the market and ultimately determine the one that suits our needs.



For the requirement we have just mentioned, recording and processing the large amounts of data generated automatically by the system, we had many options in the initial stage of technology selection: key-value databases such as Redis, document-based databases such as MongoDB, and column-based databases such as Cassandra. For many features, we could build a solution from any of the databases listed above. It can be said that choosing between these three kinds of databases is often the biggest headache for NoSQL database beginners. One reason is that key-value, document-based and column-based are actually rather general classifications of NoSQL databases. NoSQL databases from different providers have slightly different implementations and provide different sets of functions, which in turn makes the boundaries between these database types less clear.



As the name implies, a key-value database stores data in the form of key-value pairs, often using a hash table internally. When using it, the user simply reads or writes the corresponding data via its key. As a result, CRUD operations on a single piece of data are very fast. The flaw is just as obvious: we can only access data by key; the database knows nothing else about the data. So if we need to filter the data based on a particular pattern, a key-value database performs poorly, because it typically has to scan all the data it holds.
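To make this weakness concrete, here is a toy key-value store in Java. It is only a sketch (a HashMap, not any real product's API): a lookup by key is constant-time, while any filter over the values is forced to scan every entry.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy key-value store: fast access by key, full scan for anything else.
public class KeyValueScan {
    private final Map<String, String> store = new HashMap<>();

    public void put(String key, String value) { store.put(key, value); }

    // Fast path: direct access by key.
    public String get(String key) { return store.get(key); }

    // Slow path: the store knows nothing about the values,
    // so filtering must touch every record.
    public List<String> keysWithValue(String value) {
        List<String> result = new ArrayList<>();
        for (Map.Entry<String, String> e : store.entrySet()) {
            if (e.getValue().equals(value)) {
                result.add(e.getKey());
            }
        }
        return result;
    }

    public static void main(String[] args) {
        KeyValueScan kv = new KeyValueScan();
        kv.put("user:1", "Mexico");
        kv.put("user:2", "Canada");
        kv.put("user:3", "Mexico");
        System.out.println(kv.get("user:1"));                  // one lookup
        System.out.println(kv.keysWithValue("Mexico").size()); // scans everything
    }
}
```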



So in a service, a key-value database is often used as a server-side cache to record the results of a series of time-consuming, complex calculations. The most famous example is Redis. Of course, MemcacheDB, which adds persistence to memcached, is also a key-value database.



The difference between a document-based database and a key-value database is that the stored data is no longer an opaque string, but a document in a particular format, such as XML or JSON. These documents can record key-value pairs, arrays, and even embedded documents. For example:


{
    name: "Jefferson",
    children: [{
        name: "Hillary",
        age: 14
    }, {
        name: "Todd",
        age: 12
    }],
    age: 45,
    address: {
        number: 1234,
        street: "Fake Road",
        city: "Fake City",
        state: "NY",
        country: "USA"
    }
}


Some readers may wonder: can't we also store JSON- or XML-formatted data in a key-value database? The answer lies in the fact that document-based databases often support indexing. As we just mentioned, a key-value database is inefficient at looking up and filtering data; with the help of indexes, a document-based database supports these operations well. Some document-based databases even allow join-like operations as in relational databases. Compared with relational databases, a document-based database also retains the flexibility of a key-value database.
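The indexing point can be sketched in Java. This is a hypothetical, minimal document store, not the API of MongoDB or any real database: documents are plain maps, and a hand-maintained index on an assumed city field turns a filter into a direct lookup instead of a full scan.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Toy document store with one secondary index on the "city" field.
public class DocIndex {
    private final Map<Integer, Map<String, String>> docs = new HashMap<>();
    private final Map<String, List<Integer>> cityIndex = new HashMap<>();

    public void insert(int id, Map<String, String> doc) {
        docs.put(id, doc);
        // The index is maintained at write time; that is the price of fast reads.
        String city = doc.get("city");
        if (city != null) {
            cityIndex.computeIfAbsent(city, k -> new ArrayList<>()).add(id);
        }
    }

    // Indexed lookup: no scan over all documents.
    public List<Integer> findByCity(String city) {
        return cityIndex.getOrDefault(city, Collections.emptyList());
    }

    public static void main(String[] args) {
        DocIndex db = new DocIndex();
        db.insert(1, Map.of("name", "Jefferson", "city", "Fake City"));
        db.insert(2, Map.of("name", "Hillary", "city", "Albany"));
        System.out.println(db.findByCity("Fake City")); // [1]
    }
}
```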



The column-based database is very different from the previous two kinds. We know that the data recorded in a relational database is organized in rows: each row contains multiple columns with different meanings and is recorded sequentially in the persistence file. A common operation in a relational database is to filter and manipulate data with specific characteristics, often via a WHERE clause:


SELECT * FROM customers WHERE country = 'Mexico';


In a traditional relational database, the table that the statement operates on may resemble the following:






In the database file corresponding to the table, the values of each row are recorded sequentially, resulting in a data file as shown below:






Therefore, when executing the above SQL statement, the relational database cannot read the relevant data contiguously from the file:






This greatly reduces the performance of the relational database: in order to run the SQL statement, it has to read past the ID and Name fields in each row. This significantly increases the amount of data to read and requires a series of offset calculations to reach the required data. Moreover, the example above is only one of the simplest tables. If a table contains dozens of columns, the data read volume increases by dozens of times, and the offset calculations become more complex.



So how do we solve this problem? The answer is to store the data of each column together contiguously:






This is the core idea of the column-based database: data is recorded in the data file by column for better lookup and traversal efficiency. Two things are worth noting here. First, a column-based database does not mean that all data is organized by column, nor is that necessary; only the data against which such requests are executed needs column storage. Second, Cassandra's support for queries is tied to the data model it uses; in other words, its query support is limited. We will cover these limits in the sections below.
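The row-versus-column idea can be illustrated with a small Java sketch, assuming a hypothetical customers table with id, name and country columns: when each column is stored contiguously, a filter on country only ever touches the country column.

```java
import java.util.ArrayList;
import java.util.List;

// Columnar layout sketch: each column lives in its own contiguous list,
// like the column-based data file described above.
public class ColumnStore {
    private final List<Integer> ids = new ArrayList<>();
    private final List<String> names = new ArrayList<>();
    private final List<String> countries = new ArrayList<>();

    public void addRow(int id, String name, String country) {
        ids.add(id);
        names.add(name);
        countries.add(country);
    }

    // SELECT name FROM customers WHERE country = ?
    public List<String> namesByCountry(String country) {
        List<String> result = new ArrayList<>();
        for (int i = 0; i < countries.size(); i++) { // scans one column only
            if (countries.get(i).equals(country)) {
                result.add(names.get(i));            // fetch only matching rows
            }
        }
        return result;
    }

    public static void main(String[] args) {
        ColumnStore t = new ColumnStore();
        t.addRow(1, "Ana", "Mexico");
        t.addRow(2, "Bob", "USA");
        t.addRow(3, "Carlos", "Mexico");
        System.out.println(t.namesByCountry("Mexico")); // [Ana, Carlos]
    }
}
```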



So far, you should be able to choose the right NoSQL database for your needs based on the features of the various databases.






Cassandra First Experience



OK, after briefly introducing the three different types of NoSQL databases (key-value, document-based and column-based), let's start experimenting with Cassandra. Since I personally use a number of NoSQL databases and often run into version updates that break API compatibility, I use the DataStax Java driver samples directly here. This also allows readers to view sample code for the latest version of the client from that page.



The simplest Java code to read a record looks like this:


Cluster cluster = null;
try {
    // Create a client connected to Cassandra
    cluster = Cluster.builder()
            .addContactPoint("127.0.0.1")
            .build();
    // Create a user session
    Session session = cluster.connect();
    // Execute a CQL statement
    ResultSet rs = session.execute("SELECT release_version FROM system.local");
    // Take the first row from the returned result
    Row row = rs.one();
    System.out.println(row.getString("release_version"));
} finally {
    // Close the cluster and all connections associated with it
    if (cluster != null) {
        cluster.close();
    }
}


It looks simple, doesn't it? In fact, with the help of the client, operating Cassandra is not very difficult. What really requires careful consideration is how to design the model for the data Cassandra records. Unlike the relational modeling methods we are most familiar with, data model design in Cassandra needs to be join-less. Simply put, because the data is distributed across different Cassandra nodes, join operations cannot be executed efficiently.



So how do we define models for the data? First, we need to understand the basic data models Cassandra supports: Column, Super Column, Column family and Keyspace. Here is a brief introduction to each of them.



Column is the most basic data model Cassandra supports. It contains a name, a value and a timestamp:


{
    "name": "author_name",
    "value": "Sam",
    "timestamp": 123456789
}


A Super Column contains a series of Columns; the value of a Super Column is a collection of Columns:


{
    "name": "Cassandra Introduction",
    "value": {
        "author": {"name": "author", "value": "Sam", "timestamp": 123456789},
        "publisher": {"name": "publisher", "value": "China Press", "timestamp": 234567890}
    }
}


It is important to note that the Cassandra documentation no longer recommends heavy use of Super Columns, without directly explaining why. It is said that accessing a Super Column requires deserializing it as a whole. The most common supporting evidence is that some developers put too much data into a Super Column, which in turn slows down all requests touching that Super Column. Of course this is just speculation. But since the official documentation has already become cautious about Super Columns, we should also avoid using them in daily work.



A Column family is a collection of Columns. In the collection, each Column has a row key associated with it:


authors = {
    "1332": {
        "name": "author_name",
        "value": "Sam",
        "timestamp": 123456789
    },
    "1452": {
        "name": "author_name",
        "value": "Lucy",
        "timestamp": 123434371
    }
}


The Column family example above contains a series of Columns. In addition, a Column family can also contain a series of Super Columns (please use them sparingly).



Finally, Keyspace is a collection of a series of column family.



Did you notice? There is no way for one Column (or Super Column) to reference another Column (or Super Column); the only way to include such information is through a Super Column that contains the other Columns. This is very different from the way we associate records through foreign keys in relational database design. Remember the name of the method by which we create data associations with foreign keys? That's right: normalization. That method effectively eliminates redundant data in a relational database through the relationships expressed by foreign keys. In Cassandra, the method we use is denormalization, that is, allowing an acceptable amount of data redundancy. In other words, the associated data is recorded directly in the current data type.



When using Cassandra, which data should not be abstracted into a Cassandra data model, and which data deserves an independent abstraction? It all depends on the read and write requests our application frequently performs. Think about why we use Cassandra, or rather, its advantage over relational databases: fast execution of read and write requests on massive amounts of data. If we abstract the data model based only on what the data is, ignoring how efficiently Cassandra can execute requests against these models, or even making these models unable to support the corresponding business logic, then our use of Cassandra has no practical significance. A more correct approach is this: first define abstractions based on the application's needs, and design the requests that will run on those abstractions along with the application's business logic; then software developers can decide, based on those requests, how to model the abstractions.



When designing models for these abstractions, we often face another problem: how to specify the various keys of each Column family. In documents related to Cassandra, we often encounter the following key-related nouns: partition key, clustering key, primary key and composite key. So what do they refer to? Let's start with a simple example:




CREATE TABLE sample (
    key text PRIMARY KEY,
    data text
);


In the example above, we specified the key column as the primary key of the sample table. If needed, a primary key can also be composed of multiple columns:


CREATE TABLE sample (
    key_one text,
    key_two text,
    data text,
    PRIMARY KEY (key_one, key_two)
);


In the example above, the primary key is a composite key consisting of the two columns key_one and key_two. The first component of a composite key is called the partition key, and the subsequent components are called clustering keys. The partition key is used to determine which node in the cluster Cassandra uses to record the data; each partition key corresponds to a specific partition. Clustering keys are used to sort data inside the partition. If a primary key contains only one column, it has only a partition key and no clustering key.



The partition key and the clustering keys can each also consist of multiple columns:


CREATE TABLE sample (
    key_primary_one text,
    key_primary_two text,
    key_cluster_one text,
    key_cluster_two text,
    data text,
    PRIMARY KEY ((key_primary_one, key_primary_two), key_cluster_one, key_cluster_two)
);


In a CQL statement, the conditions in a WHERE clause can only use columns that are part of the primary key. Therefore, depending on your data distribution, you need to decide which columns should form the partition key and which should serve as clustering keys to sort the data.



A good partition key design often greatly improves performance. First, because the partition key controls which nodes record the data, it determines whether the data can be distributed evenly across Cassandra's nodes to make the most of them. At the same time, with the help of the partition key, a read request should touch as few nodes as possible: when executing a read, Cassandra needs to coordinate the result sets obtained from each node, so the fewer nodes involved, the higher the performance. Therefore, in model design, specifying each model's partition key according to the requests that need to run is a key step in the whole design process. A column whose values are evenly distributed, and which is often used as an input condition in requests, is a candidate partition key.



In addition, we should consider how to set the model's clustering keys properly. Because clustering keys sort data within a partition, they provide good support for requests that filter by range.






Cassandra Internal Mechanisms



In this section, we will briefly describe a series of Cassandra's internal mechanisms. Many of them are common industry solutions, so after understanding how Cassandra uses them, you can easily understand how other libraries use the same mechanisms, and even apply them in your own projects.



These common internal mechanisms are: log-structured merge-tree, consistent hash, virtual node, and so on.






Log-structured Merge-tree



One of the most interesting of these data structures is the log-structured merge-tree. Cassandra uses a similar structure internally to improve the efficiency of service instance operations. So how does it work?



Simply put, a log-structured merge-tree consists primarily of two tree-structured components: C0, which resides in memory, and C1, which resides primarily on disk:






When a new node is added, the log-structured merge-tree first appends a record of the insertion to the log file, and then inserts the node into the C0 tree. The log record exists mainly for data recovery: after all, the C0 tree lives in memory and is very susceptible to system outages and similar factors. When reading data, the log-structured merge-tree first tries to find it in the C0 tree, and then looks in the C1 tree.



When the C0 tree satisfies a certain condition, for example when it occupies too much memory, the data it contains is migrated to C1. In a log-structured merge-tree, this operation is called a rolling merge: it merges a series of records from the C0 tree into the C1 tree, and the result of the merge is written to new contiguous disk space.
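A minimal sketch of the C0/C1 interplay in Java: writes are logged and land in an in-memory sorted tree, and a rolling merge pushes them into the "disk" tree once a threshold is reached. The threshold, the in-memory stand-in for the log file, and the class names are all illustrative; a real implementation writes merged runs to contiguous disk space.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

// Toy log-structured merge-tree: C0 in memory, C1 standing in for disk.
public class LsmSketch {
    private static final int C0_LIMIT = 4;          // illustrative threshold
    private final List<String> log = new ArrayList<>(); // stand-in for the log file
    private TreeMap<String, String> c0 = new TreeMap<>();
    private final TreeMap<String, String> c1 = new TreeMap<>();

    public void put(String key, String value) {
        log.add(key + "=" + value); // record first, for recovery
        c0.put(key, value);         // then insert into the in-memory tree
        if (c0.size() >= C0_LIMIT) {
            rollingMerge();
        }
    }

    public String get(String key) {
        String v = c0.get(key);                 // search C0 first...
        return v != null ? v : c1.get(key);     // ...then C1
    }

    // Merge C0's records into C1 and start a fresh C0.
    private void rollingMerge() {
        c1.putAll(c0);
        c0 = new TreeMap<>();
        log.clear(); // merged data no longer needs the recovery log
    }

    public static void main(String[] args) {
        LsmSketch tree = new LsmSketch();
        for (int i = 0; i < 6; i++) {
            tree.put("k" + i, "v" + i);
        }
        System.out.println(tree.get("k1")); // served from C1 after the merge
        System.out.println(tree.get("k5")); // still in C0
    }
}
```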






The figure is nearly identical to the original artwork in the paper.



Looked at on its own, doesn't the C1 tree resemble the familiar B-tree or B+ tree?



I wonder if you noticed that the introduction above emphasizes one word: contiguous. Nodes at the same level of the C1 tree are recorded contiguously on disk. This allows the disk to be read sequentially, avoiding excessive seeking, which greatly improves operating efficiency.






memtable and sstable



We have just mentioned that Cassandra internally uses a data structure similar to the log-structured merge-tree. In this section, we will introduce Cassandra's main data structures and operating procedures. If you have a general understanding of the previous section on the log-structured merge-tree, understanding these data structures will also be very easy.



There are three very important data structures in Cassandra: the memtable, kept in memory, and the commit log and SSTable, saved on disk. The memtable records recent changes in memory, while SSTables record most of the data Cassandra hosts on disk. Inside an SSTable is a series of key-value pairs sorted by key. Typically, one Cassandra table corresponds to one memtable and multiple SSTables. In addition, to improve the speed of searching and accessing data, Cassandra allows developers to create indexes on specific columns.



Since the data may still be in the memtable or may already have been persisted to SSTables, Cassandra needs to merge data from the memtable and SSTables when reading. At the same time, to improve read speed and reduce unnecessary access to SSTables, Cassandra provides a component called a Bloom filter: each SSTable has an associated Bloom filter, used to determine whether that SSTable may contain any of the data requested by the current query. If it may, Cassandra will attempt to read the data from that SSTable; if not, Cassandra will skip the SSTable to reduce unnecessary disk access.
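A minimal Bloom filter can be sketched in a few lines of Java. The double-hashing scheme and sizes below are illustrative, not Cassandra's actual implementation; what matters is the asymmetry: a negative answer is definite, so the SSTable can be skipped, while a positive answer only means "possibly present".

```java
import java.util.BitSet;

// Toy Bloom filter: k derived hash functions set k bits per key.
public class BloomSketch {
    private final BitSet bits;
    private final int size;
    private final int hashes;

    public BloomSketch(int size, int hashes) {
        this.bits = new BitSet(size);
        this.size = size;
        this.hashes = hashes;
    }

    // Derive the i-th bit index from two base hashes (double hashing).
    private int index(String key, int i) {
        int h1 = key.hashCode();
        int h2 = (h1 >>> 16) | 1; // force odd so the indexes spread
        return Math.floorMod(h1 + i * h2, size);
    }

    public void add(String key) {
        for (int i = 0; i < hashes; i++) {
            bits.set(index(key, i));
        }
    }

    public boolean mightContain(String key) {
        for (int i = 0; i < hashes; i++) {
            if (!bits.get(index(key, i))) {
                return false; // definitely not in this SSTable: skip it
            }
        }
        return true; // possibly in this SSTable: go read it
    }

    public static void main(String[] args) {
        BloomSketch filter = new BloomSketch(1024, 3);
        filter.add("partition-42");
        System.out.println(filter.mightContain("partition-42"));  // true
        System.out.println(filter.mightContain("partition-999")); // almost certainly false
    }
}
```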



Once the Bloom filter indicates that an SSTable may contain the requested data, Cassandra begins attempting to extract the data from that SSTable. First, Cassandra checks whether the partition key cache has cached the index entry of the requested data. If so, Cassandra queries the compression offset map directly for the data's address and retrieves the data from that address. If the partition key cache has not cached the index entry, Cassandra first locates the approximate position of the index entry via the partition summary, then searches the partition index from that position to find the data's index entry. After the index entry is found, Cassandra can look up the corresponding entry in the compression offset map and read the required data based on the offset recorded in that entry:






The figure is adapted, with slight adjustments, from the official documentation.



Did you notice? The data recorded in an SSTable is still a sequential record of the various fields; the difference is that lookups first go through components such as the partition key cache and the compression offset map. These components contain only a series of mappings, which is effectively a contiguous record of the data that requests need, thereby improving lookup speed.



Cassandra's write process is also very similar to that of the log-structured merge-tree: the log in the log-structured merge-tree corresponds to the commit log, the C0 tree corresponds to the memtable, and the C1 tree corresponds to the collection of SSTables. When writing, Cassandra first writes the data to the memtable and appends a record of the write to the end of the commit log. This way, Cassandra can still recover the data in the memtable from the commit log after a power failure or other anomaly.



As data is continuously written, the memtable gradually grows. When its size reaches a certain threshold, Cassandra's data migration process is triggered: on one hand, the data in the memtable is flushed to the corresponding SSTables; on the other hand, the write records in the commit log are removed.



This raises a confusing question: if new data is appended to the SSTables, how does the data migration process handle updates to existing data? The answer: when data needs to be updated, Cassandra appends a record carrying the current timestamp to the SSTables, so that it can be identified as the most recent record, while the original records in the SSTables become obsolete.



This leads to another problem: a large number of updates causes the disk space consumed by SSTables to grow rapidly, with much of the recorded data already out of date. As a result, disk space utilization drops dramatically after some time. At that point we need to compact the SSTables to free the space occupied by the outdated data:






Now one question remains: we can judge which of the duplicated records is the latest based on timestamps, but how do we handle deletions? In Cassandra, data is deleted through a component called a tombstone. If a piece of data has a tombstone added to it, it is considered deleted at the next compaction and is not added to the compacted SSTable.
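The timestamp and tombstone rules can be sketched as follows, treating an SSTable as an append-only list of (key, value, timestamp) cells and a null value as a tombstone. This is an illustration of the idea, not Cassandra's storage format.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Compaction sketch: keep only the newest cell per key, drop tombstoned keys.
public class CompactionSketch {
    record Cell(String key, String value, long timestamp) {}

    static List<Cell> compact(List<Cell> sstable) {
        Map<String, Cell> newest = new HashMap<>();
        for (Cell c : sstable) {
            Cell seen = newest.get(c.key());
            if (seen == null || c.timestamp() > seen.timestamp()) {
                newest.put(c.key(), c); // higher timestamp wins
            }
        }
        List<Cell> result = new ArrayList<>();
        for (Cell c : newest.values()) {
            if (c.value() != null) {    // drop tombstones during compaction
                result.add(c);
            }
        }
        return result;
    }

    public static void main(String[] args) {
        List<Cell> sstable = List.of(
            new Cell("a", "1", 100),
            new Cell("a", "2", 200),  // update: supersedes the record above
            new Cell("b", "9", 150),
            new Cell("b", null, 300)  // tombstone: "b" was deleted
        );
        System.out.println(compact(sstable).size()); // only the latest "a" survives
    }
}
```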



During compaction, the original SSTables and the new SSTable exist on disk simultaneously. The original SSTables continue to serve reads; once the new SSTable is created, the old SSTables are deleted.



There are a few things to keep in mind in daily use of Cassandra. First, because rebuilding the memtable from the commit log is a time-consuming process, before any operation that requires rebuilding the memtable, we should manually trigger a flush to persist the node's memtable data into SSTables. The most common operation that requires rebuilding the memtable is restarting the node on which Cassandra runs.



Another thing to be aware of is not to use indexes excessively. Although an index can greatly increase read speed, it must also be maintained when data is written, incurring a certain performance cost. In this respect, Cassandra is not much different from a traditional relational database.






Cassandra Cluster



Of course, running Cassandra on a single DB instance is not a good choice. A single server is a single point of failure for the service, and it cannot take advantage of Cassandra's scale-out capability. So starting from this section, we'll briefly explain Cassandra clusters and the various mechanisms used within them.



A Cassandra cluster usually comprises the following components: node, data center, and cluster. A node is the most basic structure used to store data in a Cassandra cluster; a data center is a collection of nodes in the same geographic region; and a cluster is often composed of multiple data centers in different regions:






The Cassandra cluster shown above comprises three data centers. Two of the three data centers are in the same region, while the third is in a different region. Admittedly, having two data centers in the same region is not common, but the official Cassandra documentation does not rule out building a cluster this way. Each data center contains a series of nodes to store the data the cluster hosts.



With a cluster, we need a series of mechanisms to coordinate it, taking into account the cluster's non-functional requirements: node state maintenance, data distribution, scalability, high availability, disaster recovery, and more.



Probing node state is the first step toward high availability, and the basis for distributing data among nodes. Cassandra uses a peer-to-peer communication protocol called Gossip to share and propagate the state of each node among the nodes of the cluster. Only in this way can Cassandra know which nodes can effectively store data, and then distribute data operations to each node.



When saving data, Cassandra uses a component called a partitioner to determine which nodes the data should be distributed to. Another component related to data storage is the snitch, which provides a way to decide how to read and write data based on the topology and performance of all nodes in the cluster.



These components use a range of methods common in the industry. For example, Cassandra internally uses vnodes to handle differences in hardware performance, achieving at the physical hardware level something similar to the weighted round robin solution mentioned in the article "Enterprise Load Balancing Introduction". Another example is its internal use of consistent hashing, which we have also introduced in the article "memcached Introduction".



Well, the overview is complete. In the following sections, we will describe each of these mechanisms used by Cassandra.






Gossip



First, Gossip: the protocol used to transmit node states between the nodes of a Cassandra cluster. It runs once per second and exchanges the state of the current node, and the states of the other nodes it knows about, with up to three other nodes. In this way, Cassandra's live nodes can quickly learn the states of the other nodes in the cluster. The state information also contains a timestamp, allowing Gossip to determine which state is the newer one.



In addition to exchanging node states within the cluster, Gossip also needs to handle a series of operations on the cluster: node addition, removal, re-joining, and so on. To handle these situations better, Gossip introduces the concept of a seed node, which provides a starting point for Gossip exchange for each newly added node. After joining the Cassandra cluster, a new node can first try to exchange state with the seed nodes it has recorded. This gives it information about the other nodes in the cluster, lets it communicate with those nodes, and spreads the news of its joining through the seed nodes. Because the node states a node has learned are recorded on persistent media such as disk, after a restart it can use these persisted node records to communicate and rejoin the Gossip exchange. When a node fails, the other nodes periodically send probe messages to it to try to re-establish the connection. This, however, causes trouble when we want to permanently remove a node: the other Cassandra nodes always believe the node will rejoin the cluster at some point, and keep sending probe messages to it. At this point we need to use the nodetool utility provided by Cassandra to remove it.



So how does Gossip determine that a node has failed? If the peer in an exchange does not answer for a long time, the current node marks the target node as failed and propagates that state through the Gossip protocol. Because the topology of a Cassandra cluster can be complex, for example spanning regions, the criterion for failure is not a fixed timeout. After all, a fixed value would cause a big problem: state exchanges between two nodes in the same lab are very fast, while cross-region exchanges are slower. If we set the timeout too short, cross-region exchanges are frequently misreported as failures; if we set it longer, Gossip's sensitivity to node failures decreases. To avoid this, Gossip uses a decision logic based on multiple factors, such as the exchange history between the two nodes. Two distant nodes thereby get a larger time window and produce no false positives, while for two nearby nodes Gossip uses a smaller time window to improve detection sensitivity.






Consistent Hash



The next thing we're going to talk about is consistent hashing. Ordinary hashing schemes usually include the concept of buckets: each hash computation determines which bucket a specific piece of data is stored in. If the number of buckets changes, all previous hash computations are invalidated. Consistent hashing solves this problem nicely.



How does consistent hashing work? First consider a circle on which multiple points are distributed to represent the integers 0 through 1023. These integers are evenly distributed around the entire circle:






In the figure, we highlight six blue points that divide the circle into six parts, representing the six nodes used to record data. Each of these nodes is responsible for a range; for example, the node corresponding to blue point 512 records data whose hash value falls in the interval from 512 to 681. In Cassandra and some other fields, this circle is called a ring. Next we hash the data we currently need to store to get its hash value. For example, if the hash value of a piece of data is 900, it falls between 853 and 1024:






So the data will be recorded by the node corresponding to blue point 853. This way, when some other node fails, the node on which this data resides does not change:






How is the hash value of each piece of data calculated? The answer is the partitioner. Its input is the data's partition key, and the position of its result on the ring determines which nodes store the data.
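The ring described above maps naturally to a sorted map in Java. In this sketch a node placed at position p owns the range starting at p, matching the example where a hash of 900 lands on the node at 853; the hash function and node spacing are simplified stand-ins for a real partitioner.

```java
import java.util.Map;
import java.util.TreeMap;

// Consistent-hash ring sketch over positions 0..1023.
public class HashRing {
    private static final int RING_SIZE = 1024;
    private final TreeMap<Integer, String> ring = new TreeMap<>();

    public void addNode(String name, int position) {
        ring.put(position, name);
    }

    // Simplified stand-in for a real partitioner.
    private int hash(String key) {
        return Math.floorMod(key.hashCode(), RING_SIZE);
    }

    // The owner is the node at or before the key's position,
    // wrapping around to the last node if none precedes it.
    public String nodeFor(String key) {
        int h = hash(key);
        Map.Entry<Integer, String> e = ring.floorEntry(h);
        return e != null ? e.getValue() : ring.lastEntry().getValue();
    }

    public static void main(String[] args) {
        HashRing ring = new HashRing();
        for (int i = 0; i < 6; i++) {
            ring.addNode("node-" + i, i * 170); // six nodes, roughly even spacing
        }
        System.out.println(ring.nodeFor("some-partition-key"));
    }
}
```

Note the design choice: removing one node only reassigns the range that node owned to its neighbor; every other key keeps its owner, which is exactly the property the bucket-based scheme lacks.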






Virtual Node



Above we introduced how consistent hashing works. But here is another question: what about the data on a failed node? Can we no longer access it? That depends on the cluster's data replication settings. Typically we enable replication, so that multiple nodes simultaneously hold copies of a piece of data. If one of those nodes fails, the others can still serve reads of that data.



One thing to deal with here is that physical nodes have different capacities. Simply put, if a node can provide far less service capacity than the others, assigning it the same load will overwhelm it. To handle this situation, Cassandra provides a solution called vnodes (virtual nodes). In this solution, each physical node is divided, based on its actual capacity, into a series of vnodes of equal capacity, and each vnode is responsible for one range on the ring. For example, for the six-node ring just shown, the relationship between the individual vnodes and the physical machines might look like this:






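One way to picture vnodes: each machine receives a number of tokens proportional to its capacity, so larger machines own more slices of the ring. In the sketch below, the machine names, capacities, and fixed random seed are all invented for illustration (in practice Cassandra's `num_tokens` setting plays this role):

```python
import random

RING_SIZE = 1024

def assign_vnodes(capacities: dict[str, int]) -> list[tuple[int, str]]:
    """Give each machine one randomly placed vnode token per unit of
    capacity, so a bigger machine covers more slices of the ring."""
    random.seed(7)  # fixed seed only to make the example repeatable
    tokens = [(random.randrange(RING_SIZE), machine)
              for machine, capacity in capacities.items()
              for _ in range(capacity)]
    return sorted(tokens)  # walking this list is walking the ring

tokens = assign_vnodes({"big-node": 8, "small-node": 2})
# big-node ends up owning four times as many vnode ranges as small-node.
print(sum(1 for _, m in tokens if m == "big-node"))    # 8
print(sum(1 for _, m in tokens if m == "small-node"))  # 2
```

Because ownership is split into many small ranges, a failed machine's load is also redistributed across many survivors instead of dumped on a single neighbor.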
One thing we often need to pay attention to when using vnodes is the replication factor setting. In meaning, the replication factor in Cassandra is no different from that in other common databases: its value indicates how many copies of each piece of data Cassandra records. For example, if it is set to 1, Cassandra saves only one copy of the data; if it is set to 2, Cassandra saves one additional copy.



The following are the main factors we need to consider when deciding which replication factor to use for a Cassandra cluster:


    • The number of physical machines. Imagine setting the replication factor larger than the number of physical machines: some physical machine is then bound to store two copies of the same piece of data. This does not actually help much, because once that physical machine fails, it loses multiple copies at once. Therefore, from a high-availability standpoint, a replication factor greater than the number of physical machines has little additional value.
    • The heterogeneity of the physical machines. Differences in machine capacity often affect the effect of the replication factor you set. To cite an extreme example: suppose a Cassandra cluster is made up of five physical machines, one of which has four times the capacity of the others. Then setting the replication factor to 3 will cause the higher-capacity machine to store more than one copy of the same data, which is not much better than setting it to 2.


So when determining the replication factor of a Cassandra cluster, we should carefully choose an appropriate value based on the number and capacities of the physical machines in the cluster. Otherwise it will only result in more useless copies of the data.






Note: This article was written in August 2015. NoSQL databases evolve very quickly and often introduce backward-incompatible changes (for example, Spring Data Neo4j no longer supports @Fetch). So if you find that any description here is out of date, please leave a comment for other readers to refer to. Thank you.





