MongoDB: The Definitive Guide -- Notes

Source: Internet
Author: User
Tags: bulk insert, findOne, mongodump, mongorestore

MongoDB lacks some features that are common in relational databases, such as joins and complex multi-row transactions.

Collection -> document -> _id: an _id is unique within the collection to which the document belongs.

db.help() -- help at the database level; db.foo.help() -- help at the collection level.

db.foo.update -- typed without the parentheses, prints the function's source, showing what it does and which parameters it takes.

To run a script:
mongo --quiet host:port/foo scripts.js -- connects to the foo database on the specified server and runs the scripts.js file.

The load() function runs a script from interactive mode: load("scripts.js")

db.version() -- returns the server's version information
db.getCollection("version") -- accesses the version collection; use getCollection to reach collections whose names are invalid (valid names use only letters, numbers, $ and _, and must not begin with a number)

x.y == x['y'] -- subscript notation also reaches collections whose names are invalid as JavaScript properties:
var name = "@#&!"
db[name].find()

var collections = ["center", "account", "chat"]
for (var i in collections) {
    print(db.blog[collections[i]]);
}

Insert:
db.center.insert({"test": 1})
Bulk insert:
db.center.batchInsert([{"test": 1}, {"test": 2}, {"test": 3}])
-- batchInsert takes an array and inserts multiple documents into a single collection; one request cannot insert documents into multiple collections.

Drivers limit the size of a bulk insert; if an insert in the middle of a batch fails, the documents before the failure are inserted successfully and everything after it is lost.
Use the continueOnError option to ignore the error and continue with the subsequent inserts.
The shell does not support continueOnError, but all drivers do.

MongoDB performs only minimal validation on inserted data, so inserting illegal data is easy; only trusted sources (application servers) should be allowed to connect to the database.

Delete:
db.center.remove() -- deletes all documents
db.center.remove({"test": 1}) -- deletes documents matching the condition
All deletes are permanent and cannot be undone or restored.
To empty a whole collection, db.center.drop() is much faster than db.center.remove().

$set: sets the value of a field, creating the field if it does not exist.
db.center.update({"_id": 1234}, {"$set": {"favorite book": "hao ma, hao de"}})
-- adds the favorite book field; to modify a field, use "$set".
db.center.update({"_id": 1234}, {"$unset": {"favorite book": 1}}) -- removes the added field
An update document consisting only of key-value pairs replaces (overwrites) the entire target document; to change individual key-value pairs, use the $ modifiers.
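For contrast, a minimal sketch of a replacement update versus a $set update (values are illustrative):
// replacement: the matched document is overwritten wholesale,
// leaving only _id and the new key
db.center.update({"_id": 1234}, {"favorite book": "war and peace"})
// modifier: only the named field changes, the rest of the document survives
db.center.update({"_id": 1234}, {"$set": {"favorite book": "war and peace"}})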

"$inc": increases the value of an existing key.
Db.center.update ({"Name": "Kasumi", "$inc": {"score": 50}})--value 50
Db.center.update ({"Name": "Kasumi", "$inc": {"score": 100}})--value 150
Modify only numeric types

$push: adds an element to the end of an array.
$slice: limits the maximum length the array may reach.
$sort: sorts the array elements before trimming. -- "$slice" and "$sort" must be used together with "$push" and "$each"; see the sketch below.

"$addToSet": avoid duplication of data when inserting.
Db.center.update ({"_id": 1234}, "$addToSet": {"email": {"$each": ["[Email protected]", "[Email protected]"}})

Deleting from arrays:
1) Treat the array as a queue or a stack with "$pop": {"$pop": {"key": 1}} removes an element from the end of the array; {"$pop": {"key": -1}} removes one from the front.
2) Remove elements matching specific criteria with "$pull" -- it removes all matching elements, not just one.

If modifying a document in the middle of a collection makes it outgrow its allocated space, the document is moved to the end of the collection.
Moving documents changes the collection's padding factor: while documents keep growing and being moved, the padding factor rises so that new documents are allocated extra room, and the space a moved document used to occupy is left idle.
If documents stop being moved, the padding factor gradually decreases again.
-- Try to keep the padding factor close to 1.
Moving documents frequently leaves a lot of empty space in the data files; if there is too much empty space that cannot be reused, the log shows errors like: was empty, skipping ahead
db.runCommand({"collMod": collectionName, "usePowerOf2Sizes": true}) -- improves disk space reuse.

By default, only the first document matching the criteria is updated; to update all matching documents, set the fourth parameter of update to true.

db.runCommand({getLastError: 1}) -- reports how many documents the last update touched.
findAndModify -- atomically updates a document and returns it.
db.runCommand({"findAndModify": "center", "query": {"status": "ready"}, "update": {"$set": {"status": "running"}}})


Query:
find with an empty query document ({}) matches everything in the collection. If no query condition is given, the default is {}.

Specifying which keys to return: pass the desired keys as the second parameter of find or findOne.
db.center.find({}, {"username": 1, "email": 1})
A key with value 1 is included in the query results; a key with value 0 is excluded from them.
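Two projection sketches, reusing the fields above:
// exclude a single field, return everything else
db.center.find({}, {"email": 0})
// _id is returned by default; exclude it explicitly if unwanted
db.center.find({}, {"username": 1, "_id": 0})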

Restriction: the values passed in a query document must be constants; they cannot reference the values of other keys in the document.

Query criteria:

Db.center.find ({"Age": {"$gte": $, "$lte": 30}})--scope query
$lt: Less than $lte: less than and Equel $GT: Grate than $gte: grate than and Equel $ne: not Equel

Db.center.find ({"Ticket_no": {"$in": [20,24,45]}})--or query
$nin: Not in
$or: Match multiple key matches
Db.cneter.find ({"$or": [{"Ticket_no": 45},{"username": "Kasumi"}]})

$mod divides the queried value by the first given number; the match succeeds if the remainder equals the second number.
db.center.find({"idnum": {"$mod": [5, 1]}})
Negation: db.center.find({"idnum": {"$not": {"$mod": [5, 1]}}})

A query key can take multiple conditions, but in an update a key cannot take more than one modifier.

The query optimizer does not optimize $and:
db.center.find({"$and": [{"x": {"$lte": 1}}, {"x": 4}]})
db.center.find({"x": {"$lte": 1, "$in": [4]}}) -- this form is more efficient

Specific types of queries:
null:
db.center.find({"z": {"$in": [null], "$exists": true}}) -- matches documents where the key exists and its value is null
db.center.find({"z": null}) -- also returns documents in which the key does not exist

Regular Expressions:
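A minimal sketch of the usual usage (MongoDB uses Perl-compatible regular expressions; the users collection is assumed):
// case-insensitive match
db.users.find({"name": /joe/i})
// prefix expressions such as /^joe/ can use an index; unanchored ones cannot
db.users.find({"name": /^joe/})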

Querying arrays:
db.center.find({fruit: {$all: ["apple", "banana"]}})
$all -- matches arrays against more than one element; using $all with a single element is no different from an ordinary match.
An array containing the queried elements several times over is returned only once; arrays containing only some of the queried elements are not returned.
Query a value at a specific array position: db.center.find({"fruit.2": "peach"})
$size -- queries for arrays of a specific length
db.center.find({"fruit": {"$size": 3}})
$slice -- returns a subset of the array elements for a key
db.blog.findOne(criteria, {"comments": {"$slice": 10}}) -- returns the first 10 comments
db.blog.findOne(criteria, {"comments": {"$slice": -10}}) -- returns the last 10 comments
db.blog.findOne(criteria, {"comments": {"$slice": [20, 30]}}) -- skips the first 20 elements and returns the next 30
Unlike other projections, $slice returns all of the document's other keys as well.
Return the matching array element: db.blog.find({"comments.name": "bob"}, {"comments.$": 1})
This returns only the first match: if bob has left several comments, only the first one comes back.

$elemMatch -- compares the query clause against array elements only; it does not match non-array values.
db.test.find({"x": {"$elemMatch": {"$gt": 10, "$lt": 20}}})

Querying embedded documents:
db.people.find({"name.first": "Joe", "name.last": "Schmoe"}) -- matches regardless of field order or extra fields.
db.people.find({"name": {"first": "Joe", "last": "Schmoe"}}) -- the embedded document must match exactly.

To correctly specify a group of conditions without having to name every key, use $elemMatch:
db.blog.find({"comments": {"$elemMatch": {"author": "joe", "score": {"$gt": 5}}}})

$where:
db.foo.find({"$where": function () {
    for (var current in this) {
        for (var other in this) {
            if (current != other && this[current] == this[other]) {
                return true;
            }
        }
    }
    return false;
}})
-- returns documents in which two keys have equal values.


Server-side JavaScript is vulnerable to injection attacks; it can be disabled by starting mongod with --noscripting.

Cursors:
for (i = 0; i < 100; i++) {
    db.collection.insert({x: i});
}
var cursor = db.collection.find();

db.center.find().limit(3): limits the number of results returned
db.center.find().skip(3): skips the first three results
db.center.find().sort({username: 1, age: -1}): sorts by the named fields; 1 is ascending, -1 is descending

The ordering of mixed types is predefined:
Minimum < null < number < string < object/document < array < binary data < object ID < boolean < date < timestamp < regular expression < Maximum

Avoid using skip to pass over large numbers of documents:
var page1 = db.foo.find().sort({"date": -1}).limit(100)
var latest = null
Show the first page:
while (page1.hasNext()) {
    latest = page1.next();
    display(latest);
}
Get the next page (older documents, since the sort is descending):
var page2 = db.foo.find({"date": {"$lt": latest.date}});
page2.sort({"date": -1}).limit(100);

Selecting a random document:
An inefficient way:

var total = db.foo.count()
var random = Math.floor(Math.random() * total)
db.foo.find().skip(random).limit(1)

A better way is to add an extra random key when inserting data:
db.people.insert({"name": "joe", "random": Math.random()})

var random = Math.random()
result = db.people.findOne({"random": {"$gt": random}})
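If no document has a random value above the threshold, the lookup misses; a sketch of the usual fallback and of the index that keeps the lookup fast:
// wrap around and search below the threshold instead
result = db.people.findOne({"random": {"$lt": random}})
// index the random key so findOne does not scan the collection
db.people.ensureIndex({"random": 1})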

Advanced query options:
$maxscan: caps the number of documents scanned by the query
db.foo.find(criteria)._addSpecial("$maxscan", 20)
$min: start condition for the query
$max: end condition for the query
$showDiskLoc: shows where on disk each result lives
db.foo.find()._addSpecial("$showDiskLoc", true)

While a cursor is being iterated, a document that grows beyond its reserved space is moved to the end of the collection, so the cursor may return it again.
Workaround: use a snapshot. db.foo.find().snapshot()
Snapshotting makes queries slower, so use it only when necessary.
The database automatically destroys cursors that have been inactive for a while; to keep a cursor alive longer, use the immortal function so that it does not time out.

Database commands:
db.listCommands(): lists all database commands
Some commands require administrator privileges and must be executed against the admin database.
To run an administrator command while using another database:
use temp
db.adminCommand({"shutdown": 1})


---------------------------------------------------------------------------------------------------
Design applications
--------
Index
----
Use explain() to see what the database did while answering a query.
db.users.find({username: "kasumi"}).explain()

Build an index: db.users.ensureIndex({"username": 1})
If index creation does not return within a few seconds, run db.currentOp() in another shell or check the mongod log to follow the build's progress.
Drawbacks:
Every index you add makes each write operation (insert, update, delete) take longer: when data changes, MongoDB must update not only the document but every index on the collection.

The default sort direction is ascending.
If the result set of a sort exceeds 32MB, the database errors out and refuses to sort that much data.

An index is essentially a tree, with the smallest value on the leftmost leaf and the largest on the rightmost. If newer data is accessed most, keep new entries at the right edge of the tree (for example by using ascending keys), keeping the index right-balanced as far as possible.

Inefficient operators:
$where and $exists cannot use indexes.
Prefer $in to $or: $or performs two queries and merges their results, which is less efficient than a single query.

Indexes are created on whole embedded documents: only queries matching the complete subdocument use such an index; queries on a subdocument's individual fields need their own indexes.

Multikey indexes: if any document's indexed key holds an array, the index is flagged as multikey. Once flagged, the only way back to a non-multikey index is to drop and rebuild it.

Index cardinality: the number of distinct values a field takes within a collection.
The higher a field's cardinality, the more useful an index on that key.
Index high-cardinality fields, and in a compound index place higher-cardinality keys first (before keys with lower cardinality), as in the sketch below.
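A sketch with assumed field names: username (high cardinality) is placed before gender (only a handful of distinct values):
// the selective key leads, so the index narrows results quickly
db.users.ensureIndex({"username": 1, "gender": 1})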

hint(): forces use of the specified index
db.c.find({"age": 14, "username": /.*/}).hint({"username": 1, "age": 1})
-- by default the query would use the ("age": 1, "username": 1) index: exact match first, then fuzzy match

Query optimizer:
When several indexes are available, MongoDB races the query plans; the first plan to return is cached and the other queries are aborted.
allPlans in the explain() output lists every query plan that was tried.

{"$natural": 1}--force the use of a full table scan, returning results in the order of the disks.

Index types:
---------
Unique indexes:
db.users.ensureIndex({"username": 1}, {"unique": true}) -- duplicate values cannot be inserted
If duplicate values already exist in the collection, creating the unique index fails.
db.users.ensureIndex({"username": 1}, {"unique": true, "dropDups": true}) -- keeps the first value found and deletes the duplicates

Sparse indexes:
db.foo.ensureIndex({"email": 1}, {"unique": true, "sparse": true}) -- the field may be absent, but when present its value must be unique.

Index management:
--------
Index information is stored in the system.indexes collection, a reserved collection: documents cannot be inserted into or removed from it directly;
it can only be manipulated through ensureIndex or dropIndexes.

db.collectionName.getIndexes(): shows all index information for a given collection.

While an index is being created, all read and write requests to the database are blocked until the build finishes.
With the background option, the index build pauses whenever a new database request needs to be handled; a sketch follows below.
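A sketch of a background build (collection and field are illustrative):
// yields periodically to other operations instead of blocking them
db.foo.ensureIndex({"someField": 1}, {"background": true})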


Special indexes and collections:
-----------------
Capped collections:
---------
Inserting into a capped collection that is already full automatically deletes the oldest documents to make room.
db.createCollection("my_collection", {"capped": true, "size": 10000})
Convert an existing regular collection into a capped collection: db.runCommand({"convertToCapped": "test", "size": 10000})
Natural sort:
db.my_collection.find().sort({"$natural": -1})

Collections without an _id index:
When calling createCollection, specify autoIndexId as false (see the sketch below).
A collection without an _id index cannot be replicated by mongod.
Before version 2.2, capped collections did not have an _id index by default.
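A sketch (the collection name is illustrative):
// no _id index is built for this collection
db.createCollection("no_id_coll", {"autoIndexId": false})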

TTL index:
-------
TTL indexes set a timeout for each document: a document is deleted once it reaches its preset age.
db.foo.ensureIndex({"lastUpdated": 1}, {"expireAfterSeconds": 60*60*24})

Full-text index:
----------
Db.admincommand ({"Setparameter": 1, "textsearrchenabled": true})--Enable full-text indexing
Weight: Db.hn.ensureIndex ({"title": "Text", "desc": "Text", "Author": "Text"},{"weights": {"title": 3, "Author": 2})--default value is 1
Once the index is created, you cannot change the weight of the field.
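A sketch of querying the index with the 2.4-era text command (the hn collection and the search string are illustrative):
// runs a full-text search against the "hn" collection in the current database
db.runCommand({"text": "hn", "search": "ask hn"})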

GridFS stores large binary files. Advantages:
1. Using GridFS can simplify your stack: no separate file-storage system to deploy and maintain.
2. GridFS rides on MongoDB's existing replication and automatic sharding, so failover and scale-out of file storage are easier.
3. For storing user-uploaded files, GridFS avoids problems that other file systems can hit, such as limits on the number of files per directory.
4. File storage locality is good, because MongoDB allocates data files in large blocks.
Disadvantages:
1. Performance is lower than accessing the file system directly.
2. To modify a stored file you can only delete the existing document and resave the whole thing.

from pymongo import Connection
import gridfs

db = Connection().test
fs = gridfs.GridFS(db)
file_id = fs.put("Hello, Kasumi", filename="foo.txt")
fs.list()
fs.get(file_id).read()


Aggregation:
-----
Aggregation framework:
$match filtering, $project projecting, $group grouping, $sort sorting, $limit limiting, $skip skipping

db.articles.aggregate({"$project": {"author": 1}},
    {"$group": {"_id": "$author", "count": {"$sum": 1}}},
    {"$sort": {"count": -1}},
    {"$limit": 5})

Mathematical expressions:
$add: +
$subtract: -
$multiply: *
$divide: /
$mod: remainder (modulo)
db.employees.aggregate({"$project": {"totalPay": {"$add": ["$salary", "$bonus"]}}})

String expressions (a combined sketch follows below):
$substr: takes the string given as the first argument and returns the substring starting at the byte offset given by the second argument, with the byte length given by the third argument
$concat: concatenates the given strings together
$toLower: converts to lowercase
$toUpper: converts to uppercase
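A combined sketch, assuming an employees collection with firstName and lastName fields:
db.employees.aggregate({"$project": {
    // build "j.smith@example.com" from the name fields
    "email": {"$concat": [
        {"$substr": ["$firstName", 0, 1]},
        ".",
        "$lastName",
        "@example.com"]}}})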

Logical expression:

Array operators:
$addToSet: adds the argument to the array if it is not already present.
$push: unconditionally appends the argument to the array.

db.runCommand({"distinct": "people", "key": "age"}) -- specify the collection and the key

Application design:
-------------
Normalization improves the speed of data writes; denormalization improves the speed of data reads.

Deleting old data:
Capped collections: old data is squeezed out once the collection fills up.
TTL collections: more precise control over when documents are deleted.
Periodic deletion: use a separate collection per time period (e.g., per month) and drop old ones on a schedule.

Scenarios where MongoDB is a poor fit:
1. MongoDB does not support (multi-document) transactions.
2. Joining many different types of data across several different dimensions.
3. The tools you depend on do not support MongoDB.


Replication:
---------------------------------------------------------------------------------------------
To create a replica set:
-----------
The client cannot perform a write operation on the backup node.
By default, clients cannot read data from the backup node.

Each member in the replica set must be able to connect to all other members.

The composition of the replica set:
--------------
Synchronization:

Replication is implemented with the operation log (oplog), which contains every operation performed on the primary. The oplog is a capped collection in the primary's local database.
Backup nodes fetch the operations to apply from their current sync source, apply them to their own data set, and finally write them into their own oplog.


Connecting to a replica set from an application:
---------------------
When the primary goes down, the application can no longer write; if you want the application to keep serving reads while the primary is down, allow it to read from the backup nodes.
When reading from a backup node, you must not care whether the data read is the very latest.

Low-latency reads and writes: sharding is required; a replica set only allows writes on the primary.

Management:
------
Setting a member's number of votes to 0:
rs.add({"_id": 1, "host": "server-1:27017", "votes": 0})
In replica sets with many members, set some members' votes to 0 to stay within the voting-member limit.

rs.reconfig(config, {"force": true}) -- forces reconfiguration of the replica set

Preventing elections:
If you need to maintain the primary but do not want other members to be elected primary in the meantime, run the freeze command on each backup node to force it to stay a backup.
rs.freeze(10000)
Release: rs.freeze(0)

Monitoring replication:
---------
rs.status()

server1.adminCommand({replSetGetStatus: 1})['syncingTo'] -- shows the server's sync source

Increasing the size of the oplog:
1. If the current node is the primary, make it step down so that the other members can catch up with it as quickly as possible.
2. Shut down the current server.
3. Restart the current server in standalone mode.
4. Temporarily save the last insert operation in the oplog into another collection (see the sketch after this list).
5. Drop the current oplog: db.oplog.rs.drop()
6. Create a new oplog: db.createCollection("oplog.rs", {"capped": true, "size": 10000})
7. Write the last operation record back into the oplog.
Make sure this insert succeeds; if it does not, the server will delete all of its data after it is added back to the replica set.
8. Restart the server as a replica set member.
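A sketch of step 4 in the shell (the tempLastOp collection name is illustrative):
use local
// op: "i" marks an insert entry in the oplog
var cursor = db.oplog.rs.find({"op": "i"})
var lastOp = cursor.sort({"$natural": -1}).limit(1).next()
db.tempLastOp.save(lastOp)
// verify the save went through before dropping the oplog
db.tempLastOp.findOne()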

Master-slave setup:
mongod --master
mongod --slave --source masterIP:port

Switching from master-slave mode to a replica set:
1. Stop all writes to the system.
2. Shut down all mongod services.
3. Restart the master node with the --replSet option.
4. Initialize the replica set with this single member.
5. Start the slave nodes with the --replSet and --fastsync options.
6. Use rs.add() to join the former slave nodes to the replica set.
7. Repeat steps 5-6 for each slave node.
8. When all slaves have become backup nodes, writes can be re-enabled.
9. Remove the fastsync option from the configuration file and command line.

Sharding:
-------------------------------------------------
Sharding:
------
sh.status() shows the state of the cluster: shard summary, database summary, and collection summary.
The primary shard differs from the primary node of a replica set: the primary shard is the entire replica set making up a shard, whereas a replica set's primary is the single server that handles write requests.

To shard a collection, first enable sharding on its database: sh.enableSharding("center")
Before sharding, create an index on the key that will be the shard key: db.users.ensureIndex({"username": 1})
Shard the collection by username: sh.shardCollection("center.users", {"username": 1})

If there are existing replica sets that are not yet shards, they can be added to the cluster as brand-new shards, as long as they contain no databases whose names clash with existing ones.

db.locks.findOne({"_id": "balancer"}) -- inspects the mongos balancer lock.

On a mongos:
use config
db.shards.find() -- all shard information in the cluster.
db.databases.find() -- tracks all database information.
db.collections.find() -- tracks all sharded-collection information.
db.chunks.find() -- records information about every chunk.
db.changelog.find() -- tracks the operations the cluster has performed.
db.tags.find() -- populated once shard tags have been configured for the system.
db.settings.find() -- documents for the current balancer settings and chunk size.

View connection statistics: db.adminCommand({"connPoolStats": 1})
A hostname like host1 is a connection from a config server; a name like name/host1 is a connection from a shard.
The output of this command differs depending on whether it is run against a mongos or a mongod.

To add a mongos server:
As long as the correct set of config servers is given in the mongos --configdb option, the mongos can immediately start accepting client connections.

To change a shard's servers:
You only need to modify config.shards by hand if you are using a standalone server as a shard rather than a replica set.

To convert a standalone-server shard into a replica set shard:
1. Stop sending requests to the system.
2. Shut down the standalone server server1 and all mongos processes.
3. Restart server1 in replica set mode.
4. Connect to server1 and initiate it as a one-member replica set.
5. Connect to the config servers and replace the shard's entry, changing the name of the shard in config.shards.

To remove a shard:
db.adminCommand({"removeShard": "server1"})
Make sure the balancer is on: it migrates the shard's data to the other shards while the shard drains.
To check progress, run the same command again and inspect the returned status.

Data balancing:
sh.setBalancerState(false) -- before performing almost any database administration operation, turn off the balancer first.
db.locks.findOne({"_id": "balancer"})["state"] -- check whether the balancer is off (a state of 0 means it is off).

To refresh the configuration:
db.adminCommand({"flushRouterConfig": 1}) -- manually flushes all caches.


Application Management:
----------------------------------------------------------------------------------------------
Viewing in-progress operations:
db.currentOp()

Use the system profiler to find operations that take too long. Profiling degrades mongod's overall performance, so turn the profiler on only periodically to gather information.
db.setProfilingLevel(2)
Set the profiling level to 0 to turn the profiler off: db.setProfilingLevel(0)
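Profiled operations land in the system.profile collection of the current database; a sketch of reading the newest entry:
// most recent profiled operation first
db.system.profile.find().sort({"ts": -1}).limit(1)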

mongotop -- shows which collections are busiest
mongostat -- provides server-wide statistics; by default it prints a line of current status values every second


Data management:
---------
Authentication:
In a sharded setup, the admin database lives on the config servers, so the mongods in the shards do not even know it exists.
From their point of view, authentication is enabled but there is no administrator user. As a result, shards accept reads and writes from local clients without authentication.
The network configuration should only allow clients to reach the mongos processes. If you are worried about clients running locally on a shard's machine and connecting to the shard directly instead of through mongos, you can add administrator users to the shards.

Moving collections:
A collection cannot be moved between databases, but it can be renamed: db.sourceColl.renameCollection("newName")
Move a collection to another mongod: db.runCommand({"cloneCollection": "collName", "from": "hostname:port"})

Durability:
--------
mongod's built-in repair tool: mongod --dbpath /data/gst/mongodb2.6.12/data --repair [--repairpath /data]: --repairpath specifies the directory the repaired files go to

mongodump --repair

The mongod.lock file: if journaling is enabled, the stale-lock-file problem does not arise.
When mongod exits normally it clears this file; if the file is not cleared, mongod exited abnormally.
A stale mongod.lock file prevents mongod from starting. If mongod.lock is blocking startup, do not simply delete it: the data must be repaired first.


Server Management:
----------------------------------------------------------------------------------------------
Start and stop MongoDB:
-------------------
--dbpath
--port
--fork
--logpath
--directoryperdb: store each database in its own directory
--config
When it starts, MongoDB writes a document to local.startup_log; view it with db.startup_log.find()

Shutting down:
From the admin database: db.shutdownServer()
db.adminCommand({"shutdown": 1, "force": true}) -- forced shutdown


Monitoring MongoDB:
------------

Backup:
-----
Copying data files:
db.fsyncLock() -- locks the database, blocks all writes, and flushes dirty data to disk (fsync); incoming writes queue up and wait.
db.fsyncUnlock()


mongodump/mongorestore: --drop drops a collection before restoring it.

To back up a replica set:
Back up from a backup node, reducing the load on the primary.
When backing up with mongodump, use --oplog to get a point-in-time snapshot; otherwise the backup's state will not match the state of any other cluster member.
When restoring, run mongorestore with --oplogReplay (see the sketch below).
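A sketch of the pair of commands (the hostname is illustrative):
# point-in-time dump taken from a backup node
mongodump --host secondary.example.com:27017 --oplog
# replay the captured oplog entries after restoring the data
mongorestore --oplogReplay dump/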

To back up a sharded cluster:
Whole cluster:
Turn off the balancer and run mongodump through a mongos; to restore the backup, run mongorestore connected to a mongos.


Using mongooplog for incremental backups:
1. Record the time of the last operation in A's oplog:
op = db.oplog.rs.find().sort({$natural: -1}).limit(1).next();
start = op['ts']['t'] / 1000
2. Back up A's data and restore it into the database directory on B.
3. Periodically apply A's operations to B to keep the copy current:
mongooplog --from A --seconds 1234567
The seconds parameter should be the difference between the start variable computed in step 1 and the current time, plus a few extra seconds for safety.


To deploy MongoDB:
------------
