How the Java GC works, how to optimize GC performance, and how to interact effectively with the GC

Source: Internet
Author: User
Tags: cas, jconsole


A good Java programmer must understand how the GC works, how to optimize its performance, and how to interact with it effectively, because some applications, such as embedded and real-time systems, have demanding performance requirements: only by improving overall memory management efficiency can the performance of the whole application be raised. This article first briefly introduces how the GC works, then discusses several key GC issues, and finally offers some Java coding suggestions for improving program performance from the GC's point of view.
Fundamentals of GC
Java memory management is essentially object management, covering both the allocation and the release of objects. For the programmer, objects are allocated with the new keyword; to release an object, you simply stop referencing it, for example by assigning all of its references to null so that the program can no longer reach it. Such an object is called "unreachable", and the GC is responsible for reclaiming the memory of all unreachable objects.
From the GC's point of view, as soon as the programmer creates an object the GC starts tracking its address, size and usage. Typically, a GC uses a graph to record and manage all objects in the heap, and uses this graph to determine which objects are reachable and which are not; the GC is then responsible for reclaiming the memory of the unreachable ones. However, to leave implementations room to differ across platforms, the Java specification does not strictly prescribe much of the GC's behavior: there are, for example, no firm rules about which collection algorithm to use or when to collect. Different JVM implementations therefore use different algorithms, which introduces a fair amount of uncertainty for Java developers. This article examines several issues around how the GC works and tries to reduce the negative impact of that uncertainty on Java programs.
Incremental GC
The GC is typically implemented by a process, or group of processes, inside the JVM. Like the user program, it consumes heap space and CPU time when it runs, and while the GC process is running the user program is stopped. If the GC runs for too long, users will notice the Java program pausing; if it runs for too short a time, the collection rate may be too low, meaning many objects that should be reclaimed are not and continue to occupy memory. When designing a GC there is therefore a trade-off between pause time and collection rate.
A good GC implementation lets users tune it to their own requirements. A memory-constrained device, for example, is very sensitive to memory usage and wants the GC to reclaim memory precisely, even at the cost of slowing the program down; a real-time network game, on the other hand, cannot tolerate long interruptions. Incremental GC uses a particular collection algorithm to break one long pause into many short ones, reducing the GC's impact on the user program. Although an incremental GC may be less efficient overall than a conventional GC, it reduces the program's maximum pause time.
The HotSpot JVM shipped with Sun's JDK supports incremental GC. It is disabled by default; to enable it, add the -Xincgc option when launching the Java program. HotSpot's incremental GC is implemented with the "train" algorithm. The basic idea is to group (layer) all objects in the heap by creation time and usage, keeping frequently used and related objects in the same group and continually adjusting the groups as the program runs. When the GC runs, it always collects the oldest (least recently accessed) objects first, and if an entire group becomes reclaimable it collects the whole group. In this way each GC run collects only a certain proportion of the unreachable objects, which keeps the program running smoothly.
The finalize() function
finalize is a method defined on the Object class with the protected access modifier; since every class is a subclass of Object, user classes can easily override it. Because finalize calls are not chained automatically, we must chain them by hand, so the last statement of an overridden finalize is usually super.finalize(). This gives bottom-up finalize calls: an object releases its own resources before its parent class releases its own.
According to the Java Language Specification, the JVM guarantees that an object is unreachable before its finalize method is called, but it does not guarantee that finalize will be called at all. The specification also guarantees that finalize runs at most once per object.
Many Java beginners treat this method like a C++ destructor and put the release of many objects and resources into it. That is not a good idea, for three reasons: first, to support finalize the GC has to do a lot of extra work for objects that override it; second, after finalize has run the object may have become reachable again, so the GC must check its reachability once more, and both of these reduce GC performance; third, because the time at which the GC calls finalize is indeterminate, releasing resources this way is indeterminate as well.
In general, finalize is suited to releasing important resources that are not easy to control, such as I/O handles and database connections, whose release is critical to the whole application. In those cases the program itself should manage and release these resources, with finalize acting only as a supplementary safety net, forming a "double insurance" management mechanism; you should never rely on finalize alone to release resources.
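To make the chaining and the double-insurance idea concrete, here is a minimal sketch; the Connection class, its field, and the resource it pretends to hold are purely illustrative, not a real API.

    class Connection {
        private boolean released = false;

        // The program itself should call this; finalize() is only the backup.
        public void close() {
            if (!released) {
                released = true;
                // release the underlying I/O handle or database connection here
            }
        }

        @Override
        protected void finalize() throws Throwable {
            try {
                close();            // release our own resource first (backup path)
            } finally {
                super.finalize();   // then chain to the superclass, bottom-up
            }
        }
    }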
How the program interacts with the GC (I don't fully understand this part)
Java 2 strengthened memory management by adding the java.lang.ref package, which defines three reference classes: SoftReference, WeakReference and PhantomReference. By using them, programmers can interact with the GC to a certain extent and improve its efficiency. These reference classes represent reference strengths that lie between a strongly reachable object and an unreachable one.
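A minimal sketch of the three reference classes in use; the cache payload, its size, and the printed labels are arbitrary choices for illustration, and whether a particular reference has been cleared at any given moment still depends on the GC.

    import java.lang.ref.PhantomReference;
    import java.lang.ref.ReferenceQueue;
    import java.lang.ref.SoftReference;
    import java.lang.ref.WeakReference;

    public class ReferenceDemo {
        public static void main(String[] args) {
            // SoftReference: cleared only when memory runs short, good for caches.
            SoftReference<byte[]> cache = new SoftReference<byte[]>(new byte[1024 * 1024]);

            // WeakReference: cleared at the next collection once no strong references remain.
            WeakReference<Object> weak = new WeakReference<Object>(new Object());

            // PhantomReference: never returns its referent; used with a queue to learn
            // when the object has been collected.
            ReferenceQueue<Object> queue = new ReferenceQueue<Object>();
            PhantomReference<Object> phantom = new PhantomReference<Object>(new Object(), queue);

            System.out.println("soft still cached: " + (cache.get() != null));
            System.gc();   // only a hint; the JVM may or may not collect now
            System.out.println("weak cleared:      " + (weak.get() == null));
            System.out.println("phantom enqueued:  " + (queue.poll() == phantom));
        }
    }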
Some suggestions for Java coding
Based on how the GC works, we can use a number of techniques to make it run more efficiently and more in line with the application's requirements. Here are a few coding suggestions:
1. The most basic suggestion is to release references to useless objects as early as possible. Most programmers use temporary variables, whose references simply go out of scope when the enclosing block exits. Pay particular attention, though, to complex object graphs such as arrays, queues, trees and graphs, where objects reference one another; the GC collects these structures less efficiently. If the program allows it, set references that are no longer needed to null as early as possible (see the sketch after this list); this speeds up the GC's work.
2. Use the finalize method as little as possible. finalize gives the programmer an extra chance to release objects or resources, but it increases the GC's workload, so rely on it as little as possible for reclaiming resources.
3. Be careful with collection data types such as arrays, trees, graphs and linked lists; these structures are more complex for the GC to collect. Also watch global and static variables, which tend to keep objects alive (dangling references) and waste memory.
4. When the program has some idle time, the programmer can call System.gc() manually to suggest that the GC run, although the Java language specification does not guarantee that a collection will actually happen; using an incremental GC can also shorten the program's pauses. The sketch below illustrates suggestions 1 and 4.
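This is a minimal, hedged sketch; the stack class (with no growth or bounds checking) and the idle hook are purely illustrative, and System.gc() remains nothing more than a request.

    public class GcFriendlyStack {
        private final Object[] elements = new Object[16];
        private int size = 0;

        public void push(Object o) {
            elements[size++] = o;          // no growth logic; illustration only
        }

        public Object pop() {
            Object result = elements[--size];
            elements[size] = null;         // suggestion 1: drop the stale reference so the GC can reclaim it
            return result;
        }

        // suggestion 4: call this during a known idle period
        public static void onIdle() {
            System.gc();                   // only a request; the JVM may ignore it
        }
    }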

Does the application stop running while the Java GC process is running?

For large applications that create many objects, the JVM spends a lot of time on garbage collection (GC). By default, while a GC is in progress the entire application must wait for it to finish, which may take several seconds or more (the java launcher option -verbose:gc reports each GC event on the console). To minimize these GC-induced pauses, which can disrupt time-critical tasks, you should minimize the number of objects your application creates. It can likewise help to run scheduled code in a separate JVM. Several fine-tuning options can also be tried to reduce GC pauses; an incremental GC, for example, tries to spread the cost of a major collection over several small collections. This of course lowers the GC's overall efficiency, but it may be an acceptable cost for scheduled, time-sensitive tasks.
Data reference: http://www.knowsky.com/362375.html
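In the spirit of the advice above, the sketch below reuses one buffer instead of allocating a fresh object on every iteration; the loop bound and the placeholder process method are arbitrary. Running either version with -verbose:gc prints a line per collection, so the difference in GC activity can be observed directly.

    public class ReuseDemo {
        public static void main(String[] args) {
            StringBuilder line = new StringBuilder();
            for (int i = 0; i < 100000; i++) {
                line.setLength(0);                 // reuse one buffer rather than allocating a new one
                line.append("row ").append(i);
                process(line);
            }
        }

        private static void process(CharSequence s) {
            // placeholder for real work on the assembled line
        }
    }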

Java virtual machine optimization options and GC notes

Reference http://blog.sina.com.cn/s/blog_6d003f3f0100lmkn.html

There are many JVM options that affect benchmark testing. The more important ones are:

* JVM type: server (-server) or client (-client).
* Make sure enough memory is available (-Xmx).
* The type of garbage collector used (advanced JVMs offer many tuning options, but use them with care).
* Whether class garbage collection is allowed (-Xnoclassgc). Class GC is allowed by default; using -Xnoclassgc may hurt performance.
* Whether escape analysis is performed (-XX:+DoEscapeAnalysis).
* Whether large page heaps are supported (-XX:+UseLargePages).
* Whether the thread stack size is changed (for example, -Xss128k).
* How JIT compilation is used: always (-Xcomp), never (-Xint), or only for hot spots (-Xmixed; this is the default and gives the best performance).
* The amount of profiling data collected before JIT compilation (-XX:CompileThreshold), whether compilation runs in the background (-Xbatch disables background compilation), and whether tiered JIT compilation is used (-XX:+TieredCompilation).
* Whether biased locking is used (-XX:+UseBiasedLocking); note that JDK 1.6 and later enable it automatically.
* Whether the latest experimental performance optimizations are activated (-XX:+AggressiveOpts).
* Whether assertions are enabled (-enableassertions and -enablesystemassertions).
* Whether strict native call checking is enabled (-Xcheck:jni).
* Whether memory placement optimization for NUMA multi-CPU systems is enabled (-XX:+UseNUMA).

Class data sharing
Java 5 introduced a class data sharing mechanism: when a Java program is launched for the first time, some of the most commonly used base classes are optimized and written to a shared archive, stored in client/classes.jsa. For the moment it only supports the client VM and the serial GC; this is also why a program runs more slowly the very first time. The feature is switched on with the -Xshare parameter.

One of the main design goals of J2SE 6 (codename Mustang) is to improve the performance and scalability of J2SE, chiefly by maximizing runtime efficiency, improving garbage collection, and improving client-side performance.

1. Biased locking
Before Java 6, every lock operation involved an atomic CAS (compare-and-set) operation, and CAS is expensive even when the lock is never actually contended and is only ever owned by one thread, so this caused significant overhead. To address it, Java 6 introduced biased locking: a lock becomes biased towards the first thread that locks it, and subsequent lock operations by that thread need no synchronization. The implementation is roughly as follows: a lock starts in a neutral state; when the first thread acquires it, the state changes to biased and the thread ID is recorded; on later lock operations, if the state is biased and the recorded thread ID is the current thread's ID, only the lock flag needs to be set and no CAS is required. If another thread wants the lock, it must use a CAS to change the state to revoked and wait for the lock flag to clear, after which the lock returns to the default state and is handled by the ordinary algorithm. This feature can be disabled with -XX:-UseBiasedLocking.
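The scenario biased locking targets looks like the sketch below: a single thread repeatedly acquiring a lock that nobody else ever contends. The class and loop count are illustrative; running the same code with -XX:-UseBiasedLocking forces the ordinary locking path so the two can be compared.

    public class BiasedLockDemo {
        private final Object lock = new Object();
        private long counter;

        public void run() {
            for (int i = 0; i < 1000000; i++) {
                // After the first acquisition the lock is biased to this thread,
                // so later acquisitions avoid the CAS described above.
                synchronized (lock) {
                    counter++;
                }
            }
        }

        public static void main(String[] args) {
            new BiasedLockDemo().run();
        }
    }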

2. Lock coarsening
If a piece of code repeatedly locks and unlocks the same object, with nothing happening between an unlock and the next lock, the multiple lock/unlock operations can be merged into a single pair. This feature can be disabled with -XX:-EliminateLocks.
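The pattern this targets looks like the sketch below: consecutive synchronized operations on the same object with nothing in between. Each StringBuffer.append() takes and releases the same lock, so the JIT may merge them into one lock/unlock pair (the text above notes the flag that switches this off); the method itself is purely illustrative.

    public class CoarseningDemo {
        public static String build(String a, String b, String c) {
            StringBuffer sb = new StringBuffer();   // every append() synchronizes on sb
            sb.append(a);                           // lock ... unlock
            sb.append(b);                           // lock ... unlock  -> candidates for merging
            sb.append(c);                           // lock ... unlock
            return sb.toString();
        }
    }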

3. Adaptive spinning
On multi-CPU machines a lock implementation usually includes a brief spinning phase. The right number of spins is hard to pick: spin too little and threads get suspended and context switches increase; spin too much and CPU is wasted. Java 6 therefore introduced adaptive spinning, which dynamically adjusts the spin count based on how often recent spin attempts on that lock have succeeded.

4. Large page heap
Large pages are used on the x86/AMD64 architecture to reduce the miss rate of the TLB (the cache of virtual-to-physical address translations), which has a limited size. In Java 6 the memory heap can take advantage of this technique.

5. Improved array copy performance
A hand-tuned assembly array-copy routine is provided for each element size.

6. Code optimization in the background
Background compilation in the HotSpot client compiler: code is now optimized in the background.

7. Linear scan register allocation
A new register allocation strategy based on SSA (static single assignment) form improves performance by about 10%. The classical approach treats register allocation as a graph-coloring problem with time complexity around O(n^4), which is unsuitable for Java's JIT compilation. The JVM previously allocated registers using local heuristic rules, with mediocre results; the linear scan algorithm used in Java 6 achieves an effect comparable to graph coloring in linear time.

8. Parallel compacting collector
Full GC now uses parallel garbage collection (in JDK 5 the non-full GCs were parallel but the full GC was serial). Use -XX:+UseParallelOldGC to turn this feature on.

9. Concurrent low-pause collector
Explicitly invoked GCs (such as System.gc()) can also be performed as concurrent mark-sweep collections; use -XX:+ExplicitGCInvokesConcurrent to enable this.

10. Ergonomics in the Java 6 virtual machine
The JVM automatically adjusts the garbage collection strategy, heap size and other settings. This was added in JDK 5 and significantly enhanced in JDK 6; SPECjbb2005 performance improved by 70%.

11. Boot class loader optimization
A meta-index file describing the packages in each jar was added to the JRE, speeding up class loading and improving desktop Java application startup time by about 15%; memory footprint also dropped by about 10%.

12. Graphical program optimization
A splash screen can be displayed before the JVM starts.

OutOfMemoryError indicates that memory has been exhausted, and it can happen in several ways; a small sketch of two of the cases follows this list.
1. Java heap overflow: java.lang.OutOfMemoryError: Java heap space.
2. Permanent generation overflow, usually because heavy use of reflection or proxies generates too many classes: java.lang.OutOfMemoryError: PermGen space.
3. Native heap overflow, which happens when the operating system cannot allocate enough memory, either because the machine is out of memory or because the Java process's address space is exhausted. This case is a little subtle: a 32-bit process has only a 4 GB address space, and since the Java implementation uses the native heap or memory-mapped regions for the Java heap and the kernel mapping must also be subtracted, the Java heap in practice gets only about 2 GB. If the Java heap's -Xmx is set too large, the native heap left for JNI becomes too small and can also overflow. The native heap may be used by JNI code (via new or malloc) or by DirectBuffer instances.
java.lang.OutOfMemoryError: request <size> bytes for <reason>. Out of swap space?
In this case, if the Java heap itself has enough headroom, reducing -Xmx can actually solve the problem.
4. Overflow inside a JNI method. The previous case is a native overflow detected by the JVM itself; this one occurs when memory cannot be allocated while a JNI method is being called:
java.lang.OutOfMemoryError: <reason> <stacktrace> (Native method)
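As a hedged illustration of cases 1 and 3 above, the sketch below shows one allocation that consumes native memory outside the Java heap and one (commented out) that would exhaust the Java heap; the sizes are deliberately extreme and purely illustrative.

    import java.nio.ByteBuffer;

    public class OomDemo {
        public static void main(String[] args) {
            // Case 3: a direct buffer lives in native memory, outside the Java heap,
            // so a very large -Xmx leaves less room for allocations like this one.
            ByteBuffer direct = ByteBuffer.allocateDirect(64 * 1024 * 1024);
            System.out.println("direct capacity: " + direct.capacity());

            // Case 1: uncomment to provoke java.lang.OutOfMemoryError on the Java heap.
            // byte[][] hog = new byte[1024][];
            // for (int i = 0; i < hog.length; i++) hog[i] = new byte[64 * 1024 * 1024];
        }
    }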


JDK 7 performance optimizations

1. (Zero-based) compressed oops
On a 64-bit CPU, the JVM's oops (ordinary object pointers) are 64 bits wide. An oop can simply be thought of as a reference to an object: the sizes of Java's primitive types are fixed, but reference types (simplified C pointers), which hold heap addresses, naturally grow to the machine word size. A 32-bit system can address at most 4 GB of memory, and 64-bit systems have become common precisely to break that limit, but merely widening references from 32 to 64 bits can grow the heap footprint by roughly half; memory itself is cheap, but memory bandwidth and CPU cache are very expensive.

Compressed oops shrink managed references to 32 bits to reduce heap footprint; the JVM inserts encode/decode instructions as it executes, somewhat like 8086 segment addressing, and determines the memory address with the formula:

    <narrow-oop-base (64 bits)> + (<narrow-oop (32 bits)> << 3) + <field-offset>

References are encoded when they are stored into the heap and decoded when they are read back out.

Zero-based compressed oops go one step further and place the base address at 0 (not necessarily physical address 0, but an address that is logically 0 relative to the JVM, using register-relative addressing on the CPU), so the formula becomes:

    (<narrow-oop> << 3) + <field-offset>

This improves performance further, but it requires OS support. If the Java heap is smaller than 4 GB, oops can occupy the low virtual address space and be used directly, with no encoding or decoding at all.

Zero-based compressed oops therefore use different policies for different heap sizes:
1. Heap smaller than 4 GB: no encode/decode operations are needed.
2. Heap between 4 GB and 32 GB: zero-based compressed oops are used.
3. Heap larger than 32 GB: compressed oops are not used.

Escape Analysis Improvements
When a variable (or object) allocated inside a method has its pointer returned or stored in a global reference, so that other methods or threads can refer to it, the pointer (or reference) is said to escape; it means the variable is used beyond that one method. Java objects are conventionally assumed to always be allocated on the heap, which makes every object a candidate for garbage collection. In most cases, however, objects created inside a method are used only within it and could perfectly well be stored on the stack, where allocation and release are the most natural and cost nothing extra (a C struct on the stack behaves this way). With reference escape analysis, an object whose reference does not escape can be allocated on the stack, and no new language construct is needed because the analysis is automatic. JDK 7 enables escape analysis by default. It can also eliminate synchronization: if the analysis shows that an object never escapes, all synchronization on that object can be removed (choosing sensibly in the first place, for example avoiding an unnecessary StringBuffer, is of course still the programmer's job), and some or all of an optimized object's fields can even be kept in CPU registers.
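A small sketch of the distinction: in the first method the object never escapes, so with -XX:+DoEscapeAnalysis the JIT may stack-allocate or scalar-replace it, while the second method lets the reference escape by returning it. Whether the optimization actually fires depends on the JVM; the Point class here is illustrative.

    public class EscapeDemo {
        static class Point {
            final int x, y;
            Point(int x, int y) { this.x = x; this.y = y; }
        }

        // p never leaves this method: a candidate for stack allocation / scalar replacement
        static int lengthSquared(int x, int y) {
            Point p = new Point(x, y);
            return p.x * p.x + p.y * p.y;
        }

        // the returned reference escapes, so this object must stay on the heap
        static Point makePoint(int x, int y) {
            return new Point(x, y);
        }
    }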

NUMA Collector Enhancements
NUMA (non-uniform memory access) is implemented in a variety of computer systems; in short, memory access is divided by locality, somewhat like RAID for hard disks or an Oracle cluster, and the JVM simply takes advantage of it.

The three features above can also be enabled on some JDK 6 releases, depending on the exact version (check the change notes). Java 6 added performance optimizations such as the following:
Lightweight locks, which use the CAS mechanism to reduce the cost of locking.
Biased locking.
Lock coarsening.
Lock elision driven by escape analysis; escape analysis also allows stack allocation, reducing the pressure on memory reclamation.
Adaptive spinning; spin locks only help on physically multi-CPU machines.
Lock elimination.

On multi-core CPUs, acquiring a lock is much more expensive than on a single-core system, because it requires the CPU to lock the data bus and write back caches.
So test cases that show StringBuffer and StringBuilder performing about the same on a single-core system may, thanks to lock elimination and related techniques, show only a small performance difference on multi-core CPUs as well.
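A sketch of the kind of comparison meant here; both methods do identical work, but StringBuffer.append() is synchronized while StringBuilder.append() is not, and lock elimination can remove the StringBuffer locks when the buffer never escapes the method. Actual timings depend on the JVM and hardware, so treat this only as an illustration.

    public class AppendComparison {
        static String withBuffer(int n) {
            StringBuffer sb = new StringBuffer();    // synchronized appends; locks may be elided
            for (int i = 0; i < n; i++) sb.append(i);
            return sb.toString();
        }

        static String withBuilder(int n) {
            StringBuilder sb = new StringBuilder();  // no locking at all
            for (int i = 0; i < n; i++) sb.append(i);
            return sb.toString();
        }
    }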

It is expected that Java 7 will also enable OpenGL acceleration by default.

Class data sharing was added in JDK 1.5: some commonly used Java base classes are cached in a file or in shared memory for use by all Java processes.

Starting with JRE 1.5, when a Java program starts without an explicit -client or -server option, the virtual machine selects the VM type automatically. On 64-bit systems only the server VM is implemented, so the server VM is always used. On 32-bit systems, Windows defaults to the client VM, while Linux and Solaris decide based on the number of CPUs and the amount of memory; JRE 6, for example, requires at least 2 CPUs and 2 GB of physical memory to choose the server VM.

GC
There are three measures of GC quality: throughput (i.e. efficiency), pause time, and footprint, the amount of heap occupied.

GC algorithms
1. Copying: after all surviving objects have been moved to another block of memory, the whole original block can be reclaimed. This is efficient, but it requires some free memory, and the copying itself has overhead.
2. Tracing collectors trace the graph of object references starting from the root nodes. The basic tracing algorithm is called mark-and-sweep, after the two phases of the collection: in the mark phase the collector walks the reference graph and marks every object it encounters; in the sweep phase, unmarked objects are freed. The mark can be kept on the object itself or in a separate bitmap. Optionally there is also a compaction step, since defragmenting the heap is part of the collector's job: mark-and-sweep collectors typically fight fragmentation with one of two strategies, compaction or copying, both of which move objects quickly to reduce fragmentation; the combined scheme is called mark-sweep-compact.
3. There are also reference-counting collectors, which keep a count on every object in the heap: the count goes up when a reference is assigned and down when a reference is set to null or goes out of scope (for example when a method exits and its stack frame is reclaimed). Objects caught in a reference cycle never reach a count of zero, and maintaining the count on every reference assignment adds overhead, so this approach is no longer used (see the sketch below).
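The circular-reference weakness of reference counting can be seen in a tiny sketch: after the last outside references are dropped, each node still holds a reference to the other, so their counts never reach zero, yet a tracing collector finds neither reachable from a root and frees both. The Node class is illustrative.

    public class CycleDemo {
        static class Node {
            Node other;
        }

        public static void main(String[] args) {
            Node a = new Node();
            Node b = new Node();
            a.other = b;           // a and b now reference each other
            b.other = a;
            a = null;              // drop the outside references;
            b = null;              // the cycle is unreachable and a tracing GC can reclaim it
        }
    }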

Generational collectors
Program statistics show that most objects have short lifetimes and are released quickly, while some live for a long time, even for the whole run. For a copying algorithm, every collection moves all live objects. That is fine for short-lived objects, which are usually dealt with in place, but for long-lived objects it is pure busywork: moving them around costs a lot of time and produces no benefit. Generational collection lets long-lived objects sit in one place for a long time, so the GC can spend its effort on collecting short-lived objects.
In this scheme the heap is divided into two or more sub-heaps, each serving one "generation" of objects. The youngest generation is collected most often; because most objects are short-lived, only a small fraction of young objects survive their first collection. If a young object is still alive after several collections, it is promoted to an older generation and moved to a different sub-heap. Older generations are collected far less frequently than the young generation, and each time objects mature in their current generation (by surviving several more collections) they can be promoted to a still older one.
Generational collectors generally use the copying algorithm for the young generation and mark-and-sweep for the old generation. Either way, grouping the heap by object age improves the performance of the basic garbage collection.

In Java the generational collector generally divides the heap into a young generation, an old generation, and a permanent generation. Collecting the young generation is called a minor GC; because object lifetimes there are short, it is relatively efficient. Collecting the old generation is called a full GC; it is less efficient and takes longer, so the number of full GCs should be kept as low as possible.

VM
The client VM suits desktop applications: startup is fast and run times are short, so it does not preload many classes or optimize them heavily.
The server VM suits server programs: startup time matters little and run times are long, so the basic classes are preloaded and heavily optimized.

GC types
Serial collector (default)
While the GC runs, all application logic is paused. A single thread performs copying collection for the young generation, and a single thread performs mark-sweep-compact collection for the old (tenured) generation. Throughput is high, and this collector suits single-CPU hardware.

Parallel collector
The young generation is collected by multiple GC threads using copying, with the application paused while the young-generation GC runs; the old generation is still collected by a single thread using mark-sweep-compact, again with the application paused. On machines with large memory and multiple processors, consider this parallel GC (enabled with -XX:+UseParallelGC): young-generation collection can use several processors, which shortens the pauses, and the emphasis is on throughput, but old-generation collection still uses the same algorithm as the serial GC. JDK 5 update 6 therefore introduced the parallel compacting collector (enabled with -XX:+UseParallelOldGC), which lets old-generation collection benefit from multiple processors as well. Since old-generation collections take much longer than young-generation ones, this collector can in theory deliver better performance, and it is worth noting that the parallel compacting GC is expected to eventually replace the plain parallel GC.

Concurrent mark-sweep (CMS) collector
The young generation is collected by multiple GC threads using copying; this still pauses the application, but because minor GCs are efficient the pauses are short. The old generation is collected with a concurrent mark-sweep mechanism that runs alongside the application, with the work subdivided into phases: some phases (initial mark, remark) still pause the application completely, but they are brief, and for most of the time a single GC thread runs concurrently with the application, reducing the total pause time. This GC uses the same young-generation algorithm as the parallel GC, while the old generation uses a more complex algorithm that yields very short pauses. The complex algorithm has a higher overhead, however, and this collector is non-compacting, so it manages the old-generation heap with free lists, making allocation more expensive. In scenarios where a short pause time matters more than raw throughput, consider this collector, known as the CMS GC.

The incremental collector (train algorithm) is being phased out; since 1.5, -Xincgc selects the concurrent GC instead.

Sun J2SE 5.0 introduced so-called behavior-based parallel collector tuning, based on three goals:

Maximum pause time goal: specified with -XX:MaxGCPauseMillis=n; the default is unset. Once this parameter is given, the GC tries to keep the pause time in each memory area within n milliseconds, and if it cannot, it shrinks the corresponding area to shorten GC pauses.

Throughput goal: specified with -XX:GCTimeRatio=n; the default is 99, which means GC time should be 1% of the total application run time. If the goal is not met, the corresponding memory area is enlarged to lengthen the interval between two GCs.

Footprint goal: with memory as plentiful as it is nowadays, this goal is generally not worth worrying about.

These three goals are prioritized from top to bottom: the maximum pause time goal is satisfied first, then the throughput goal, and finally the footprint goal.


Use the parameters -Xloggc:file and -XX:+PrintGCDetails to produce a GC log, then inspect it with GCViewer. Its advantage is the statistics it generates: throughput, maximum and minimum pause times, full GC time as a percentage of total GC time, and so on. At the moment, however, it only supports logs up to 1.5.

JConsole is a Java monitoring and management console that lets you watch the runtime usage of a wide variety of VM resources. On Java 5 you have to start the target JVM with an extra parameter, whereas on Java 6, thanks to the Attach API that JConsole supports, the JMX agent inside the JVM is loaded automatically.
The jstat command prints a wide variety of VM statistics, including memory usage, garbage collection time, class loading and JIT compiler statistics. The jmap command produces heap histograms and heap dumps at runtime, the jhat command analyses heap dumps, and the jstack command obtains thread stack traces. These diagnostic tools can attach to any application; the application does not need to be started in any special way.
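The same kind of GC statistics can also be read programmatically through the platform MXBeans exposed by the JMX agent mentioned above; this is a hedged sketch, and the collector names it prints vary with the JVM and the GC configuration.

    import java.lang.management.GarbageCollectorMXBean;
    import java.lang.management.ManagementFactory;

    public class GcStats {
        public static void main(String[] args) {
            for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
                System.out.println(gc.getName()
                        + ": collections=" + gc.getCollectionCount()
                        + ", totalTimeMs=" + gc.getCollectionTime());
            }
        }
    }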

Reference http://blog.sina.com.cn/s/blog_6d003f3f0100lmkn.html

Related: four situations that trigger a full GC

From: http://www.cnblogs.com/langtianya/p/4401083.html
