Java Concurrency Control Mechanisms in Detail

Tags: cas, semaphore, sleep, static class, volatile

In everyday development I often see Java developers who, when writing concurrent code, reach only for the basic tools such as volatile and synchronized. Higher-level constructs such as Lock and the atomic classes are used far less often, and I suspect the main reason is unfamiliarity with the underlying principles. Amid busy development work, who has the time to figure out precisely which concurrency model is the right one?

So I recently decided to organize this part of the concurrency-control toolbox into an article, both as a record of my own knowledge and in the hope that it helps other developers.

Parallel program development inevitably involves multithreading, multitask coordination, and data sharing. The JDK offers many ways to coordinate threads, for example: intrinsic locks, reentrant locks, read-write locks, and semaphores.

Java Memory Model

In Java, every thread has a working memory area, which holds copies of variables from the main memory shared by all threads. While a thread executes, it operates on these copies in its own working memory.

To access a shared variable, a thread normally acquires a lock and clears its working memory area. This guarantees that the shared variable is properly loaded from shared main memory into the thread's working memory before use, and that when the thread releases the lock, the values in its working memory are written back to shared main memory.

When a thread uses a variable, the value it obtains is always one that it, or some other thread, actually stored there, whether or not the program uses thread synchronization correctly. For example, if two threads store different values or object references into the same shared variable, the variable afterwards holds either one thread's value or the other's; a shared variable never ends up as a blend of references from two threads.

A variable here means any storage location a Java program can access, covering not only primitive-type variables and reference-type variables but also array elements. Variables held in the main memory area can be shared by all threads, but one thread can never access another thread's parameters or local variables, so developers do not have to worry about the thread safety of local variables.
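A minimal sketch of this distinction (the Counter class and its names are purely illustrative): a field lives in shared main memory and needs synchronization, while local variables are confined to the calling thread's stack:

 public class Counter {
     private int shared = 0; // field: lives in main memory, visible to all threads

     public void increment() {
         int local = shared;  // local variable: lives on this thread's stack, inherently thread-safe
         local = local + 1;
         shared = local;      // the write back to the shared field is the step that needs synchronization
     }
 }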

volatile variables – visible across threads

Because each thread has its own working memory area, a change a thread makes to data in its working memory may be invisible to other threads. To address this, you can declare a variable with the volatile keyword, which forces all threads to read and write the variable in main memory, making a volatile variable visible across threads.

Variables declared volatile come with the following guarantees:

1. Changes other threads make to the variable become promptly visible in the current thread;
2. Changes the current thread makes to the volatile variable are promptly written back to shared memory, where other threads can see them;
3. For variables declared volatile, the compiler guarantees ordering; it will not reorder accesses around them.
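As a minimal sketch of the first two guarantees (the Worker class here is illustrative, not from the JDK), a volatile stop flag lets one thread signal another:

 public class Worker implements Runnable {
     // Without volatile, the loop below might never observe the update made in stop()
     private volatile boolean running = true;

     public void stop() {
         running = false; // promptly written back to main memory, visible to the worker thread
     }

     @Override
     public void run() {
         while (running) {
             // do work ...
         }
     }
 }

Keep in mind that volatile guarantees visibility and ordering, not atomicity: a compound update such as count++ still needs a lock or an atomic class.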

The synchronized keyword

The synchronized keyword is one of the most common synchronization mechanisms in the Java language. In earlier JDK versions its performance was not great, making it suitable only where lock contention was not especially intense. As of JDK 6, the gap between synchronized and ReentrantLock has narrowed considerably. More importantly, synchronized is more concise, and the resulting code is more readable and maintainable.

First, a synchronized method locks the current object:

public synchronized void method() {}

When method() is invoked, the calling thread must first obtain the current object's lock; if the object's lock is held by another thread, the calling thread waits until the lock is released. The method above is equivalent to the following:

 public void method() {
     synchronized (this) {
         // do something ...
     }
 }

Second, synchronized can form a synchronized block, which controls the synchronized region more precisely than a synchronized method does. A small synchronized block enters and leaves the lock quickly, which gives the system higher throughput.

 public void method(Object o) {
     // before
     synchronized (o) {
         // do something ...
     }
     // after
 }

Third, synchronized can also be applied to static methods:

public static synchronized void method() {}

Note carefully that here the lock is taken on the class object of the current class, so every call to the method must obtain the lock on that class object.
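In other words, the static form above is roughly equivalent to synchronizing on the class object explicitly (SomeClass is a placeholder name):

 public static void method() {
     synchronized (SomeClass.class) { // the lock is on the Class object, shared by all instances
         // do something ...
     }
 }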

Although synchronized can guarantee thread safety for an object or a code snippet, synchronized alone is not enough to coordinate threads whose interaction has complex logic. To get threads to cooperate, you also need the wait() and notify() methods of Object.

Typical usage:

 synchronized (obj) {
     while (<?>) {          // condition not yet satisfied
         obj.wait();
     }
     // after receiving the notification, continue execution
 }

Before calling wait(), a thread needs to hold the object's lock. When wait() executes, the current thread releases the exclusive lock on obj so that other threads can use it.

When a thread waiting on obj receives an obj.notify(), it can reacquire the exclusive lock on obj and continue running. Note that notify() wakes one arbitrary thread among those waiting on the current object.

The following is an implementation of a blocking queue:

 public class BlockQueue {
     private List list = new ArrayList();

     public synchronized Object pop() throws InterruptedException {
         while (list.size() == 0) {
             this.wait(); // queue is empty: wait until put() notifies
         }
         if (list.size() > 0) {
             return list.remove(0);
         } else {
             return null;
         }
     }

     public synchronized void put(Object obj) {
         list.add(obj);
         this.notify(); // wake one thread waiting in pop()
     }
 }

synchronized together with wait() and notify() should count as basic skills every Java developer must master.

ReentrantLock reentrant lock

ReentrantLock is, as the name says, a reentrant lock. It is more powerful than synchronized: acquisition can be made interruptible and can carry a timeout, and under high concurrency it can show a significant performance advantage over synchronized.

ReentrantLock offers both fair and unfair locks. With a fair lock, acquisition is first-in-first-out; with an unfair lock, threads may jump the queue. From a performance standpoint the unfair lock does much better, so absent a specific requirement the unfair lock should be preferred; note that the lock synchronized provides is not strictly fair either. ReentrantLock lets you specify fairness when the lock is constructed.
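A quick sketch of the construction choice:

 ReentrantLock unfairLock = new ReentrantLock();     // default: unfair, better throughput
 ReentrantLock fairLock   = new ReentrantLock(true); // fair: waiting threads acquire in FIFO order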

When using a reentrant lock, be sure to release it when the work is done, and in general write the release in a finally block; otherwise, if the program throws an exception, the lock may never be released. A synchronized lock, by contrast, is released automatically by the JVM.

Classic usage is as follows:

 ReentrantLock lock = new ReentrantLock();
 try {
     // try for up to 5 seconds to obtain the lock; tryLock returns false
     // if the lock still cannot be acquired after the timeout
     if (lock.tryLock(5, TimeUnit.SECONDS)) {
         try {
             // operations
         } finally {
             lock.unlock();
         }
     }
     // alternatively, lock.lockInterruptibly() acquires the lock while
     // remaining responsive to interruption
 } catch (InterruptedException e) {
     e.printStackTrace(); // the thread was interrupted while waiting for the lock
 }

ReentrantLock offers a rich set of lock-control features, and applying them flexibly can improve application performance. That is not a blanket recommendation to use ReentrantLock everywhere; the reentrant lock is simply the more advanced locking tool the JDK makes available.

ReadWriteLock read-write lock

Read-write separation is a very common idea in data processing; in the database world it is practically a required technique. ReadWriteLock is the read-write separation lock provided since JDK 5. Separating reads from writes effectively reduces lock contention and so improves system performance; its main use case is a system in which reads vastly outnumber writes. It is used as follows:

 private ReentrantReadWriteLock readWriteLock = new ReentrantReadWriteLock();
 private Lock readLock = readWriteLock.readLock();
 private Lock writeLock = readWriteLock.writeLock();

 public Object handleRead() throws InterruptedException {
     try {
         readLock.lock();    // multiple readers may hold this simultaneously
         Thread.sleep(1000); // simulate a slow read
         return value;
     } finally {
         readLock.unlock();
     }
 }

 public Object handleWrite() throws InterruptedException {
     try {
         writeLock.lock();   // exclusive: blocks readers and other writers
         Thread.sleep(1000); // simulate a slow write
         return value;
     } finally {
         writeLock.unlock();
     }
 }

Condition objects

A Condition object is used to coordinate complex collaboration among threads and is always associated with a lock: the newCondition() method on the Lock interface produces a Condition instance bound to that lock. A Condition object relates to its Lock the way the Object.wait() and Object.notify() pair relates to the synchronized keyword.

The source of ArrayBlockingQueue illustrates this well:

 public class ArrayBlockingQueue<E> extends AbstractQueue<E>
         implements BlockingQueue<E>, java.io.Serializable {

     /** Main lock guarding all access */
     final ReentrantLock lock;
     /** Condition for waiting takes */
     private final Condition notEmpty;
     /** Condition for waiting puts */
     private final Condition notFull;

     public ArrayBlockingQueue(int capacity, boolean fair) {
         if (capacity <= 0)
             throw new IllegalArgumentException();
         this.items = new Object[capacity];
         lock = new ReentrantLock(fair);
         notEmpty = lock.newCondition(); // conditions bound to the lock
         notFull  = lock.newCondition();
     }

     public void put(E e) throws InterruptedException {
         checkNotNull(e);
         final ReentrantLock lock = this.lock;
         lock.lockInterruptibly();
         try {
             while (count == items.length)
                 notFull.await(); // queue full: producers wait for free space
             insert(e);
         } finally {
             lock.unlock();
         }
     }

     private void insert(E x) {
         items[putIndex] = x;
         putIndex = inc(putIndex);
         ++count;
         notEmpty.signal(); // notify a waiting take()
     }

     public E take() throws InterruptedException {
         final ReentrantLock lock = this.lock;
         lock.lockInterruptibly();
         try {
             while (count == 0)    // queue empty:
                 notEmpty.await(); // consumers wait for a non-empty signal
             return extract();
         } finally {
             lock.unlock();
         }
     }

     private E extract() {
         final Object[] items = this.items;
         E x = this.<E>cast(items[takeIndex]);
         items[takeIndex] = null;
         takeIndex = inc(takeIndex);
         --count;
         notFull.signal(); // notify waiting put() threads that there is free space
         return x;
     }

     // other code ...
 }

Semaphore semaphore

Semaphores provide a more powerful means of controlling multithreaded collaboration; a semaphore can be viewed as an extension of a lock. Whether an intrinsic lock (synchronized) or a ReentrantLock, a lock admits one thread at a time to a resource, while a semaphore can let a specified number of threads access a resource simultaneously. You can see this from the constructors:

public Semaphore(int permits) {}
public Semaphore(int permits, boolean fair) {} // fairness can be specified

permits sets the capacity of the semaphore, i.e. how many permits can be held at once. When each thread requests one permit at a time, this amounts to specifying how many threads may access the resource simultaneously. The main methods are:

public void acquire() throws InterruptedException {} // tries to obtain one permit; if none is available, the thread waits until another thread releases a permit or the current thread is interrupted
public void acquireUninterruptibly() {} // like acquire(), but does not respond to interruption
public boolean tryAcquire() {} // tries to obtain a permit, returning true on success and false on failure; never waits, returns immediately
public boolean tryAcquire(long timeout, TimeUnit unit) throws InterruptedException {} // like tryAcquire(), but waits up to the given time for a permit
public void release() // releases one permit after the thread is done with the resource, so other threads waiting for a permit can access it

Here is the object-pool example from the JDK documentation; it is a good illustration of how to control resource access through a semaphore.

 public class Pool {
     private static final int MAX_AVAILABLE = 100;
     private final Semaphore available = new Semaphore(MAX_AVAILABLE, true);

     public Object getItem() throws InterruptedException {
         // apply for a permit: at most 100 threads can hold items at once; the rest wait
         available.acquire();
         return getNextAvailableItem();
     }

     public void putItem(Object x) {
         // put the given item back into the pool and mark it unused
         if (markAsUnused(x)) {
             available.release(); // free a permit, waking a thread waiting for the resource
         }
     }

     // For illustration only:
     protected Object[] items = new Object[MAX_AVAILABLE];  // pooled, reusable objects
     protected boolean[] used = new boolean[MAX_AVAILABLE]; // usage flags

     protected synchronized Object getNextAvailableItem() {
         for (int i = 0; i < MAX_AVAILABLE; ++i) {
             if (!used[i]) {
                 used[i] = true;
                 return items[i];
             }
         }
         return null;
     }

     protected synchronized boolean markAsUnused(Object item) {
         for (int i = 0; i < MAX_AVAILABLE; ++i) {
             if (item == items[i]) {
                 if (used[i]) {
                     used[i] = false;
                     return true;
                 }
                 return false;
             }
         }
         return false;
     }
 }

This example implements a simple object pool with a maximum capacity of 100. When 100 object requests are in flight at once, the pool runs short of resources, and a thread that fails to obtain a permit must wait. When a thread finishes using an object, it returns the object to the pool; since a resource has become available again, one thread waiting on that resource can be woken.

ThreadLocal thread-local variables

When I first ran into ThreadLocal, I found it hard to see the use of these thread-local variables. Looking back now: ThreadLocal is one solution to concurrent access to a variable from multiple threads. Unlike the synchronized family of locks, ThreadLocal provides no lock at all; it trades space for time, giving every thread its own independent copy of the variable to guarantee thread safety. It is therefore not a data-sharing solution.

ThreadLocal is a good way to sidestep thread-safety problems. Conceptually, a ThreadLocal behaves like a map holding one variable copy per thread: the key is the thread and the value is that thread's copy, and since keys cannot repeat, every thread gets exactly its own copy, which yields thread safety. (In the actual JDK implementation the bookkeeping is inverted: each Thread object holds its own ThreadLocalMap, keyed by the ThreadLocal instance.)

One point deserves emphasis: in performance terms ThreadLocal holds no absolute advantage. When concurrency is not very high, a plain lock may perform better. But as a thread-safety approach with no shared state to fight over, ThreadLocal can reduce lock contention to some extent under high concurrency or intense competition.

Here is a simple use of ThreadLocal:

 public class TestNum {
     // Override ThreadLocal's initialValue() via an anonymous inner class to supply the initial value
     private static ThreadLocal<Integer> seqNum = new ThreadLocal<Integer>() {
         public Integer initialValue() {
             return 0;
         }
     };

     // Get the next sequence value
     public int getNextNum() {
         seqNum.set(seqNum.get() + 1);
         return seqNum.get();
     }

     public static void main(String[] args) {
         TestNum sn = new TestNum();
         // Three threads share sn, each producing its own sequence numbers
         TestClient t1 = new TestClient(sn);
         TestClient t2 = new TestClient(sn);
         TestClient t3 = new TestClient(sn);
         t1.start();
         t2.start();
         t3.start();
     }

     private static class TestClient extends Thread {
         private TestNum sn;

         public TestClient(TestNum sn) {
             this.sn = sn;
         }

         public void run() {
             // Each thread produces three sequence values
             for (int i = 0; i < 3; i++) {
                 System.out.println("thread[" + Thread.currentThread().getName()
                         + "] --> sn[" + sn.getNextNum() + "]");
             }
         }
     }
 }

Output:

thread[Thread-0] --> sn[1]
thread[Thread-1] --> sn[1]
thread[Thread-2] --> sn[1]
thread[Thread-1] --> sn[2]
thread[Thread-0] --> sn[2]
thread[Thread-1] --> sn[3]
thread[Thread-2] --> sn[2]
thread[Thread-0] --> sn[3]
thread[Thread-2] --> sn[3]
The output shows that although each thread draws its sequence numbers from the same shared TestNum instance, the threads do not interfere with one another; each produces an independent sequence, because ThreadLocal supplies each of them with a separate copy of the counter.

Performance and optimization of locks

The lock is the most common of all synchronization methods. In everyday development it is common to see people wrap a large slab of code in one lock, or use a single lock to solve every sharing problem. Such coding is plainly unacceptable, above all in high-concurrency environments, where fierce lock contention degrades program performance very noticeably. Using locks sensibly therefore bears directly on program performance.

1. The cost of threads

On multicore machines, using multiple threads can clearly improve system performance. In practice, however, multithreading adds extra overhead of its own: beyond the resource consumption of the task itself, a multithreaded application must also maintain thread metadata, schedule threads, switch thread contexts, and so on.

2. Reduce lock holding time

In programs that use locks for concurrency control, when a lock is contended, the time an individual thread holds it relates directly to system performance: if one thread holds a lock for a long time, the lock becomes that much more contended. During development you should therefore keep the time spent holding a lock as short as possible, to reduce the chance of threads blocking one another. Consider this code:

 public synchronized void syncMethod() {
     beforeMethod();
     mutexMethod();
     afterMethod();
 }

Suppose in this example that only mutexMethod() needs synchronization, while beforeMethod() and afterMethod() need no synchronization control. If beforeMethod() and afterMethod() are heavyweight methods consuming significant CPU time, then under high concurrency this scheme makes the number of waiting threads grow sharply, because the executing thread releases the lock only after all three calls have completed.

The optimized version below synchronizes only where necessary, which markedly reduces the time a thread holds the lock and raises system throughput:

 public void syncMethod() {
     beforeMethod();
     synchronized (this) {
         mutexMethod();
     }
     afterMethod();
 }

3. Reduce lock granularity

Reducing lock granularity is another effective way to weaken lock contention among threads, and the classic application of the technique is ConcurrentHashMap. With an ordinary HashMap made thread-safe by locking the whole collection, every add() or get() acquires the lock on the entire collection object. That is fully serialized behavior: because the lock covers the whole collection, intense lock contention drags down throughput under high concurrency.

Readers who have looked at the source know that HashMap is implemented as an array plus linked lists. On top of that structure, ConcurrentHashMap divides the whole map into several segments (Segment), each segment being a sub-HashMap. When a new entry needs to be added, it does not lock the whole HashMap; instead it first determines from the hashCode which segment the entry belongs in, locks only that segment, and completes the put() operation there. As a result, when several threads write at the same time, as long as the entries being written do not land in the same segment, the threads run in true parallel. For the concrete implementation I would ask readers to spend some time reading the ConcurrentHashMap source; I will not go into more detail here. A deliberately simplified sketch of the segment idea follows.
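This StripedMap is purely illustrative and far cruder than the real ConcurrentHashMap, but it shows how hashing a key to a segment confines locking to that segment:

 import java.util.HashMap;

 public class StripedMap<K, V> {
     private static final int SEGMENTS = 16;
     private final HashMap<K, V>[] segments;

     @SuppressWarnings("unchecked")
     public StripedMap() {
         segments = new HashMap[SEGMENTS];
         for (int i = 0; i < SEGMENTS; i++) {
             segments[i] = new HashMap<K, V>();
         }
     }

     // Locate the segment for a key by its hashCode
     private int segmentFor(Object key) {
         return (key.hashCode() & 0x7fffffff) % SEGMENTS;
     }

     public V put(K key, V value) {
         HashMap<K, V> segment = segments[segmentFor(key)];
         synchronized (segment) { // lock only this segment, not the whole map
             return segment.put(key, value);
         }
     }

     public V get(K key) {
         HashMap<K, V> segment = segments[segmentFor(key)];
         synchronized (segment) {
             return segment.get(key);
         }
     }
 }

Writes to keys that hash to different segments proceed fully in parallel; only writes landing in the same segment contend.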

4. Lock separation

ReadWriteLock, introduced earlier, separates reads from writes; pushing that separation further yields lock separation. A classic example of lock separation in the JDK is LinkedBlockingQueue:

 public class LinkedBlockingQueue<E> extends AbstractQueue<E>
         implements BlockingQueue<E>, java.io.Serializable {

     /** Lock held by take, poll, etc */
     private final ReentrantLock takeLock = new ReentrantLock();
     /** Wait queue for waiting takes */
     private final Condition notEmpty = takeLock.newCondition();
     /** Lock held by put, offer, etc */
     private final ReentrantLock putLock = new ReentrantLock();
     /** Wait queue for waiting puts */
     private final Condition notFull = putLock.newCondition();

     public E take() throws InterruptedException {
         E x;
         int c = -1;
         final AtomicInteger count = this.count;
         final ReentrantLock takeLock = this.takeLock;
         takeLock.lockInterruptibly(); // no two threads may take at the same time
         try {
             while (count.get() == 0) {
                 notEmpty.await(); // no data available: wait for a notification
             }
             x = dequeue();               // remove an element from the head
             c = count.getAndDecrement(); // size minus 1
             if (c > 1)
                 notEmpty.signal();       // notify other take() operations
         } finally {
             takeLock.unlock();           // release the lock
         }
         if (c == capacity)
             signalNotFull(); // notify put() operations that there is now free space
         return x;
     }

     public void put(E e) throws InterruptedException {
         if (e == null) throw new NullPointerException();
         int c = -1;
         Node<E> node = new Node<E>(e);
         final ReentrantLock putLock = this.putLock;
         final AtomicInteger count = this.count;
         putLock.lockInterruptibly(); // no two threads may put at the same time
         try {
             // count is used in the wait guard even though it is not protected by
             // the lock; this works because count can only decrease here (all other
             // puts are shut out by the lock), and we (or some waiting put) are
             // signalled if it ever changes from capacity.
             while (count.get() == capacity) {
                 notFull.await(); // queue full: wait
             }
             enqueue(node);               // join the queue
             c = count.getAndIncrement(); // size plus 1
             if (c + 1 < capacity)
                 notFull.signal();        // still free room: notify another waiting put
         } finally {
             putLock.unlock();            // release the lock
         }
         if (c == 0)
             signalNotEmpty(); // after a successful insert, notify take() that data is ready
     }

     // other code ...
 }

The point here is that take() and put() are independent of each other, with no lock contention between them: take() competes only for takeLock against other take-type methods, and put() only for putLock against other put-type methods. The chance of lock contention is thereby weakened.

5. Lock coarsening

As discussed above, shortening lock hold times and shrinking lock granularity both aim at having each thread hold a lock for as short a time as possible. Granularity, however, has to be kept within a sensible degree: if code requests, synchronizes on, and releases the same lock over and over in quick succession, those operations themselves consume precious system resources and add overhead.
What you should know is that when the JVM encounters a run of consecutive requests and releases of the same lock, it merges all of those lock operations into a single request for the lock, reducing the number of lock requests. This is called lock coarsening. Here is an example of such a merge:

 public void syncMethod() {
     synchronized (lock) {
         method1();
     }
     synchronized (lock) {
         method2();
     }
 }

After coarsening, the JVM treats it as:

 public void syncMethod() {
     synchronized (lock) {
         method1();
         method2();
     }
 }

This kind of merging is also a good illustration for developers of how to strike the right balance in lock granularity.

Lock-free parallel computing

A lot of space above went to locks, along with the observation that locking carries the extra cost of context switching; under high concurrency, fierce competition for a lock can become a system bottleneck. As an alternative there are non-blocking synchronization methods, which still guarantee the consistency of data and program under high concurrency without any lock at all.

1. Non-blocking synchronization / lock-free

Non-blocking synchronization actually already appeared above in ThreadLocal: every thread has its own independent copy of the variable, so threads never wait for one another during parallel computation. Here, though, I mainly want to introduce an important concurrency-control approach based on the Compare-And-Swap (CAS) algorithm.

The CAS algorithm works as follows. It takes three operands, CAS(V, E, N): V is the variable to update, E the expected value, and N the new value. V is set to N only if V's current value equals E; if V differs from E, some other thread has already changed it, and the current thread does nothing. In either case, CAS returns the current true value of V. CAS proceeds optimistically: it always believes it can complete the operation. When multiple threads CAS the same variable at once, only one wins and succeeds; the rest fail. But a failing thread is not suspended; it is merely told it failed and is allowed to try again, or to abandon the operation. On this principle, even without a lock, CAS still discovers interference from other threads in time and handles it appropriately.
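The atomic classes expose CAS directly through compareAndSet(); here is a minimal sketch of the canonical CAS retry loop, mirroring in simplified form what AtomicInteger.incrementAndGet() does internally:

 import java.util.concurrent.atomic.AtomicInteger;

 public class CasCounter {
     private final AtomicInteger value = new AtomicInteger(0); // V: the variable to update

     public int increment() {
         for (;;) {
             int expected = value.get(); // E: the value we expect V to hold
             int next = expected + 1;    // N: the new value
             // CAS(V, E, N): succeeds only if V still equals E
             if (value.compareAndSet(expected, next)) {
                 return next;
             }
             // failure means another thread changed V in the meantime; simply retry
         }
     }
 }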

2. Atomic operations

The JDK's java.util.concurrent.atomic package provides atomic operation classes implemented with this lock-free algorithm; internally they rest mainly on native-level support. Interested readers can follow the code down to the native level; I will not unpack the implementation here.

The following example shows the performance gap between conventional locking and lock-free synchronization:

 import java.util.concurrent.ExecutorService;
 import java.util.concurrent.Executors;
 import java.util.concurrent.atomic.AtomicInteger;
 import org.junit.Test;

 public class TestAtomic {
     private static final int MAX_THREADS = 3;
     private static final int TASK_COUNT = 3;
     private static final int TARGET_COUNT = 100 * 10000;

     private AtomicInteger acount = new AtomicInteger(0);
     private int count = 0;

     synchronized int inc() {
         return ++count;
     }

     synchronized int getCount() {
         return count;
     }

     public class SyncThread implements Runnable {
         String name;
         long startTime;
         TestAtomic out;

         public SyncThread(TestAtomic o, long startTime) {
             this.out = o;
             this.startTime = startTime;
         }

         @Override
         public void run() {
             int v = out.inc();
             while (v < TARGET_COUNT) { // count up to the target with the synchronized counter
                 v = out.inc();
             }
             long endTime = System.currentTimeMillis();
             System.out.println("SyncThread spend: " + (endTime - startTime) + "ms, v=" + v);
         }
     }

     public class AtomicThread implements Runnable {
         String name;
         long startTime;

         public AtomicThread(long startTime) {
             this.startTime = startTime;
         }

         @Override
         public void run() {
             int v = acount.incrementAndGet();
             while (v < TARGET_COUNT) { // count up to the target with the lock-free counter
                 v = acount.incrementAndGet();
             }
             long endTime = System.currentTimeMillis();
             System.out.println("AtomicThread spend: " + (endTime - startTime) + "ms, v=" + v);
         }
     }

     @Test
     public void testSync() throws InterruptedException {
         ExecutorService exe = Executors.newFixedThreadPool(MAX_THREADS);
         long startTime = System.currentTimeMillis();
         SyncThread sync = new SyncThread(this, startTime);
         for (int i = 0; i < TASK_COUNT; i++) {
             exe.submit(sync);
         }
         Thread.sleep(10000);
     }

     @Test
     public void testAtomic() throws InterruptedException {
         ExecutorService exe = Executors.newFixedThreadPool(MAX_THREADS);
         long startTime = System.currentTimeMillis();
         AtomicThread atomic = new AtomicThread(startTime);
         for (int i = 0; i < TASK_COUNT; i++) {
             exe.submit(atomic);
         }
         Thread.sleep(10000);
     }
 }

The test results are as follows:

testSync():
SyncThread spend: 201ms, v=1000002
SyncThread spend: 201ms, v=1000000
SyncThread spend: 201ms, v=1000001

testAtomic():
AtomicThread spend: 43ms, v=1000000
AtomicThread spend: 44ms, v=1000001
AtomicThread spend: 46ms, v=1000002
The performance difference between the intrinsic lock and the non-blocking algorithm is quite obvious in these results, so I recommend reaching for the atomic classes under java.util.concurrent.atomic directly where they fit.

Conclusion

I have finally set down everything I wanted to say; a few classes I meant to cover, such as CountDownLatch, did not make it in. Even so, the material above is absolutely the core of concurrent programming. Much of it can be found scattered around the Internet, but I still feel that only by lining the mechanisms up and comparing them can you find the right scenario for each. That is why I put this article together, and I hope it helps more developers.

