TIJ Reading Notes (Chapter 13)

13: Concurrent Programming

Object-oriented programming lets us divide a program into separate modules. But you will often run into problems that require not only breaking the program apart, but also letting its parts run independently.

Each of these independently running subtasks is a thread. When you program, you can pretend that each thread runs by itself and has its own CPU. In reality some underlying mechanism is dividing up the CPU time for you, but you don't need to know about it, and that is exactly what makes multithreaded programming simpler.

A process is a self-contained program with its own address space. A multitasking operating system creates the illusion that multiple processes (programs) run at the same time by periodically switching the CPU from one task to another. A thread is a single sequential flow of control within a process, so one process can contain several concurrently executing threads.

Multithreading has many uses, but they generally come down to this: one part of the program is waiting for an event or a resource, and you don't want that wait to block the rest of the program. So you create a thread tied to that event or resource and let it run independently of the main program.

Learning concurrent programming is like visiting a new world: at the very least it means learning a new programming language and accepting a new set of ideas. Since most microcomputer operating systems now support multithreading, programming languages and class libraries have been extended accordingly. In summary, multithreaded programming:
seems mysterious and requires you to shift the way you think about programming;
looks similar from language to language, so understanding threads is like learning a common language; and it is about as hard to understand as polymorphism: it looks simple on the surface, but it takes real effort to grasp.

Motivation

One of the most important uses of concurrent programming is creating responsive user interfaces. Imagine a program so busy with CPU-intensive work that it ignores user input and becomes unresponsive. The key to solving this is to let the program, even while it is computing, periodically return control to the user interface so that it can respond to the user promptly. If there is a "Quit" button, you don't want to sprinkle polling code for it throughout the program; you want the button to respond in a timely way, as if you were checking it at regular intervals.

With a conventional approach there is no way to hand control to the rest of the program while a long-running operation is in progress. That sounds impossible, as if the CPU had to be in two places at once, but multithreading creates exactly that effect.

Concurrent programming can also be used to improve throughput.

On a multiprocessor system, threads can also be distributed across the processors.

One thing to keep in mind is that a multithreaded program must still work correctly when it runs on a single-CPU system.

The most valuable aspect of multithreading is its underlying abstraction: the code does not need to know whether it is running on one CPU or on several. Multitasking and multithreading are a good way to make the most of a multiprocessor system.

Multithreading also allows you to design more loosely coupled applications.

Basic Threads

The easiest way to create a thread is to inherit from java.lang.Thread, which has already done the setup needed to create and run threads. run() is the most important method of Thread; if you want the thread to do something for you, you must override it. So run() contains the code that will be executed "concurrently" with the other threads in the program.
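A minimal sketch of this approach is shown below; the class name SimpleThread and the countdown logic are illustrative, not the book's exact listing.

public class SimpleThread extends Thread {
    private int countDown = 5;

    @Override
    public void run() { // the code that runs "concurrently" with other threads
        while (countDown-- > 0) {
            System.out.println(getName() + ": " + countDown);
        }
    }

    public static void main(String[] args) {
        for (int i = 0; i < 3; i++) {
            new SimpleThread().start(); // start() performs the initialization, then run() is called
        }
    }
}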

main() creates the thread but keeps no reference to it. For an ordinary object that would be enough to make it garbage, but not for a thread: each thread "registers" itself, so a reference to it still exists somewhere. The garbage collector cannot collect the thread until its run() exits.

Yielding

If you know that you have accomplished what you need to in one pass through run(), you can give the thread-scheduling mechanism a hint that you are done and that some other thread might as well have the CPU. This hint (note: it is only a hint; there is no guarantee that your JVM will act on it) is given by calling yield().

Java's thread scheduling is preemptive, meaning it interrupts the current thread and switches to another whenever it sees fit. So if I/O (performed through the main() thread) takes too long, the scheduler may stop run() before it ever reaches yield(). In short, yield() helps only in rare cases and cannot be used for serious tuning.

Sleeping

Another way to control a thread is to call sleep() to pause it for a given number of milliseconds.

sleep() must be placed in a try block because it can be interrupted before its time is up. That happens if some other code holds a reference to the thread and calls its interrupt(). (interrupt() also affects a thread that is in wait() or join(), so those calls are wrapped in try blocks as well.) If you plan to wake the thread with interrupt(), it is better to use wait() than sleep(), because their catch clauses differ. The principle followed here is "don't catch an exception unless you know how to handle it," so the exception is rethrown as a RuntimeException.
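A minimal sketch of this pattern, assuming an illustrative class name (SleepingThread); the rethrow as RuntimeException follows the rule quoted above.

public class SleepingThread extends Thread {
    @Override
    public void run() {
        try {
            sleep(100); // pause for at least 100 milliseconds
        } catch (InterruptedException e) {
            // someone called interrupt() on this thread while it was sleeping
            throw new RuntimeException(e);
        }
        System.out.println(getName() + " woke up");
    }

    public static void main(String[] args) {
        new SleepingThread().start();
    }
}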

sleep() is not a way to control the order in which threads execute; it merely pauses the thread. The only guarantee is that the thread sleeps for at least the requested number of milliseconds; it may take longer to resume, because the thread scheduler still needs time to hand the CPU back after the sleep ends.

If you must control the order in which threads execute, the most thorough approach is to not use threads at all: write cooperative routines that hand control to one another in a fixed order.

Priority level

A thread's priority tells the scheduler how important the thread is. Although the order in which the CPU serves threads is indeterminate, when many threads are waiting to run the scheduler tends to run the highest-priority one first. This does not mean low-priority threads never run (so priority cannot cause deadlock); they simply get fewer chances to run.

You can read a thread's priority with getPriority() and change it at any time with setPriority().

Although the JDK defines ten priority levels, they do not map well onto many operating systems. Windows 2000, for example, has seven levels that are not fixed, so the mapping is indeterminate (Sun's Solaris, by contrast, has 2^31 levels). The only portable approach is to stick to MAX_PRIORITY, NORM_PRIORITY, and MIN_PRIORITY when adjusting priorities.
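A minimal sketch of reading and setting a priority using the portable constants; the class name PriorityDemo is illustrative.

public class PriorityDemo {
    public static void main(String[] args) {
        Thread t = new Thread() {
            @Override
            public void run() {
                System.out.println("running at priority " + getPriority());
            }
        };
        t.setPriority(Thread.MAX_PRIORITY); // only MAX_, NORM_ and MIN_PRIORITY are portable
        t.start();
    }
}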

Daemon Threads

The so-called "daemon thread" means that, as long as the program is still running, it should provide some kind of public service thread in the background, but the daemon is not part of the program's core. So, when all the non-daemon threads are running, the program ends. Conversely, the program cannot end as long as there are no daemons running. For example, a thread running main () belongs to a non-daemon thread.

To make a thread a daemon, you must call setDaemon() on it before it is started.

You can call isDaemon() to find out whether a thread is a daemon. Any thread created by a daemon thread is automatically a daemon as well. Take a look at the following example:
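The book's listing is not reproduced in these notes; the sketch below shows the same idea with illustrative names.

public class DaemonDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread daemon = new Thread() {
            @Override
            public void run() {
                while (true) {
                    try {
                        sleep(100);
                    } catch (InterruptedException e) {
                        throw new RuntimeException(e);
                    }
                    System.out.println(getName() + " is a daemon: " + isDaemon());
                }
            }
        };
        daemon.setDaemon(true); // must be called before start()
        daemon.start();
        Thread.sleep(300); // when main() (a non-daemon) exits, the daemon dies with it
    }
}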

Joining Threads

A thread can call join() on another thread and wait for that thread to finish before continuing. If a thread calls t.join() on another thread t, the calling thread is suspended until the target thread ends (that is, until t.isAlive() returns false).

You can also give join() a timeout argument (in milliseconds, or milliseconds plus nanoseconds), so that if the target thread has not finished when the time limit expires, join() returns anyway.

A call to join() can be aborted by calling interrupt() on the calling thread, so join() is also wrapped in a try-catch.
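A minimal sketch of join() with a timeout; the class and variable names are illustrative.

public class JoinDemo {
    public static void main(String[] args) throws InterruptedException {
        Thread sleeper = new Thread() {
            @Override
            public void run() {
                try {
                    sleep(1500); // pretend to work for 1.5 seconds
                } catch (InterruptedException e) {
                    System.out.println("sleeper was interrupted");
                }
            }
        };
        sleeper.start();
        sleeper.join(2000); // wait at most 2000 ms for sleeper to finish
        System.out.println("sleeper.isAlive() = " + sleeper.isAlive()); // false once it has ended
    }
}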

Another way

So far the examples have been simple: the thread classes inherit from Thread, which makes sense because the objects are nothing but threads and do nothing else. However, your class may already extend another class, and since Java does not support multiple inheritance it can no longer extend Thread. In that case you use the Runnable interface. Runnable means the class implements a run() method, and Thread itself is a Runnable.

The Runnable interface has only one method, run(). But if you want to do anything with the Thread object itself (for example, call getName() in toString()), you must obtain its reference with Thread.currentThread(). The Thread class has a constructor that takes a Runnable and the thread's name as arguments.

If an object is a Runnable, that only means it has a run() method. There is nothing special about that: being Runnable does not give the object any built-in threading ability the way a subclass of Thread has. So you must create a thread from the Runnable object, as in the examples: pass the Runnable to a Thread constructor to create a separate Thread object, then call that Thread's start(), which performs the initialization so that the thread scheduler can call run().
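A minimal sketch of creating a thread from a Runnable; the class name RunnableDemo and the thread name "worker-1" are illustrative.

public class RunnableDemo implements Runnable {
    @Override
    public void run() {
        // Thread.currentThread() is needed to reach the Thread object itself
        System.out.println("running in " + Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        Runnable task = new RunnableDemo();
        Thread t = new Thread(task, "worker-1"); // constructor taking a Runnable and a name
        t.start(); // initializes the thread; the scheduler then calls task.run()
    }
}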

The advantage of the Runnable interface is that everything stays in one class; Runnable lets you create a mixin of a base class and other interfaces. If you need to access something else you just use it directly, with no need to work through a separate object. But inner classes have the same ability, and an inner class can also directly access the members of its enclosing class, so this argument alone is not enough to abandon an inner subclass of Thread in favor of a Runnable mixin.

Runnable means you describe a process with code (the run() method) rather than creating an object that represents the process. There has always been some controversy about how to think about threads, and it comes down to whether you treat a thread as an object or as a process. If you think of it as a process, you are freed from the "everything is an object" dogma. At the same time, if you only need the process to take over one part of your program, there is no reason to make the whole class Runnable. For that reason it is often wiser to hide the threading code inside an inner class.

Unless you are forced to use Runnable, prefer Thread.

Creating a responsive user interface

Creating a responsive user interface is one of the main uses of multithreading.

To keep the program responsive, put the long-running operation inside run() and let the preemptive scheduler manage it.

Sharing limited resources

You can think of a single-threaded program as a lone entity that wanders through the problem space doing one thing at a time. Because there is only one entity, you never have to worry about two entities requesting the same resource at the same time, problems like two people trying to park in the same space, walk through the same door, or talk at the same time.

In a multithreaded environment things are not so simple: you must consider the possibility of two or more threads requesting the same resource at the same time. Collisions over resource access must be prevented.

Accessing resources in an incorrect way

Consider the following example. AlwaysEven "guarantees" that every call to getValue() returns an even number. A "watcher" thread periodically calls getValue() and checks whether the value really is even. That may seem superfluous, because it is obvious from the code that the value must be even. But then the accident happens. Here is the source code:
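The original listing is not included in these notes; the sketch below reconstructs the idea (the names AlwaysEven, getValue() and the watcher follow the text, but the code itself is not a verbatim copy of the book).

public class AlwaysEven {
    private int i;

    public void next() { // not synchronized: a thread can be suspended between the two increments
        i++;
        i++;
    }

    public int getValue() {
        return i;
    }

    public static void main(String[] args) {
        final AlwaysEven ae = new AlwaysEven();
        new Thread("Watcher") {
            @Override
            public void run() {
                while (true) {
                    int val = ae.getValue();
                    if (val % 2 != 0) { // the "accident": an odd value is observed
                        System.out.println(val + " is odd!");
                        System.exit(0);
                    }
                }
            }
        }.start();
        while (true) {
            ae.next();
        }
    }
}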

Much of the time you don't have to worry about whether someone else is using a resource. But in a multithreaded environment you need a way to prevent two threads from accessing the same resource at the same time, at least during critical periods.

Preventing this kind of collision is straightforward: lock the resource while a thread is using it. The first thread to access the resource locks it, and until it releases the lock no other thread can touch the resource; then another thread locks it and uses it, and so on.

Test framework

Conflict with resource access

A semaphore is a flag object used for communication between threads. If the semaphore's value is zero, a thread may acquire the resource it guards; if the value is nonzero, the resource is unavailable and the thread must wait. When the resource becomes available, the thread increments the semaphore and then uses the resource, decrementing the semaphore when it is done. Because the increment and decrement are atomic operations (operations that cannot be interrupted), the semaphore prevents two threads from using the same resource at the same time.

As long as the semaphore properly guards the resource it monitors, the object will never be seen in an unstable state.
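A conceptual sketch of such a flag-style semaphore (a hand-written illustration, not java.util.concurrent.Semaphore). The scheme only works if acquire() and release() are truly atomic; in Java that guarantee ultimately comes from the built-in locking described below.

public class ConceptualSemaphore {
    private int semaphore = 0;

    public boolean available() { // the guarded resource is free when the flag is zero
        return semaphore == 0;
    }

    public void acquire() { semaphore++; } // assumed atomic for the scheme to be safe
    public void release() { semaphore--; } // assumed atomic for the scheme to be safe
}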

Resolving conflicts for shared resources

In practice, every multithreaded scheme resolves contention for shared resources by serializing access: only one thread at a time may use the shared resource. This is usually done by surrounding a piece of code with a locking clause so that only one thread at a time can execute it. Because the locking clause produces mutual exclusion, this mechanism is often called a mutex.

The threads waiting outside are not actually lined up in order; because thread scheduling is indeterminate, nobody knows who will be next. You can use yield() and setPriority() to give the scheduler hints, but their effect depends on the platform and the JVM.

Java has a built-in way to prevent resource collisions: the synchronized keyword. It works much like the Semaphore class: when a thread wants to execute code guarded by synchronized, it checks whether the lock is available; if so, it acquires the lock, executes the code, and then releases the lock. Unlike a semaphore you write yourself, synchronized is built into the language, so it is guaranteed to work.

Typically a shared resource is just a piece of memory in the form of an object, but it can also be a file, an I/O port, or even a printer. To control access to a shared resource, first put it inside an object, then make every method that accesses the resource synchronized. As long as one thread is inside any synchronized method, no other thread may enter any of the object's synchronized methods.

You will usually make the fields of the class private and access them only through methods, so you can make those methods synchronized. Synchronized methods are declared like this:

synchronized void f() { /* ... */ }

synchronized void g() { /* ... */ }

Every object has a lock (also called a monitor) that it gets automatically (you never write any code for it). When a synchronized method is called, the object is locked, and no thread can call any other synchronized method on that object until the first method returns and releases the lock. For the two methods above, if f() is called, g() cannot be called on the same object until f() returns and releases the lock. So for any particular object, all the synchronized methods share a single lock, and that lock prevents two or more threads from reading and writing the same shared memory at the same time.

A thread can acquire an object's lock more than once. This happens when one synchronized method calls another synchronized method on the same object, which in turn calls another, and so on. The JVM keeps track of how many times the object has been locked: an unlocked object has a count of zero; when a thread first acquires the lock, the count becomes one; each time the same thread acquires the lock again, the count goes up by one. Naturally, only the thread that first acquired the lock can acquire it multiple times. Each time the thread leaves a synchronized method the count goes down by one, and when it reaches zero the lock is released so other threads can use the object.
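A small sketch of this lock re-acquisition (reentrancy); the class name is illustrative.

public class Reentrant {
    public synchronized void outer() {
        inner(); // the same thread re-acquires the lock it already holds; the count goes from 1 to 2
    }

    public synchronized void inner() {
        System.out.println("lock held twice by " + Thread.currentThread().getName());
    }

    public static void main(String[] args) {
        new Reentrant().outer(); // completes without blocking itself
    }
}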

In addition, each class also has a lock (it belongs to the class's Class object), so that synchronized static methods can guard static data from concurrent access.

Rewriting EvenGenerator with synchronized
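The book's listing is not reproduced here; the sketch below shows what such a synchronized rewrite might look like (the class name follows the heading, but the code is reconstructed, not quoted).

public class SynchronizedEvenGenerator {
    private int currentEvenValue = 0;

    // Both methods that touch the shared field are synchronized, so no thread can
    // observe the value between the two increments.
    public synchronized int next() {
        ++currentEvenValue;
        ++currentEvenValue;
        return currentEvenValue;
    }

    public synchronized int getValue() {
        return currentEvenValue;
    }
}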

Always remember: every method that accesses a shared resource must be synchronized, or the program will inevitably go wrong.

Atomic operations

"Atomic operations (atomic operation) do not need synchronized", which is a cliché of Java multithreaded programming. Atomic operations are operations that are not interrupted by thread-scheduling mechanisms; Once the operation has started, it has been run backwards and there will be no context switch (switch to another thread) in the middle.

Generally speaking, atomic operations are limited to assigning to, and reading from, primitives other than long and double. Those two are excluded because they are larger, and the JVM specification does not require their reads and writes to be atomic (a JVM may provide that guarantee, but it doesn't have to). However, if you declare a long or double as volatile, operations on it are guaranteed to be atomic.

If you come from C++ or have experience with another low-level language, you might assume that incrementing is atomic, since it is usually a single CPU instruction. In the JVM, though, incrementing is not atomic: it involves both a read and a write. So even an operation this simple leaves an opening for other threads to interfere.

If you declare a variable as volatile, the compiler will not optimize away its reads and writes; such optimizations would otherwise skip synchronizing the value with main memory.

The only truly safe atomic operations are reading and assigning primitives. However, an atomic operation may still access an object while it is in an invalid state, so never take anything for granted. And as noted at the start, operations on long and double are not necessarily atomic (some JVMs do guarantee it, but if you rely on that, your code is not portable).

The safest approach is to follow these guidelines:
If you are going to synchronize one method of a class, synchronize all of them. It is usually hard to be sure which methods can safely be left unsynchronized. Be extremely careful when removing synchronized from a method; it is usually done for performance, but the overhead of synchronized dropped significantly in JDK 1.3 and 1.4, and you should do it only after a profiler has confirmed that synchronized really is the bottleneck.
Remember the supreme principle of concurrent programming: never take anything for granted.

Object locks and the synchronized keyword are Java's built-in semaphore, so there is no need to write your own.

Critical sections

Sometimes you only need to prevent multiple threads from accessing part of a method, not the whole method. The code isolated this way is called a critical section, and you create it with the same synchronized keyword. Used this way, synchronized specifies the object whose lock must be acquired before the enclosed code can run.

synchronized (syncObject) {
    // This code can be accessed
    // by only one thread at a time
}



A critical section is also called a "synchronized block"; a thread must acquire syncObject's lock before it can execute the enclosed code. If another thread already holds that lock, the thread cannot enter the critical section until the lock is released.

Synchronization therefore comes in two forms: synchronized blocks and synchronized methods. Compared with synchronizing the entire method, synchronizing only a block of code can significantly increase the amount of time during which other threads can get at the object.

Ultimately, of course, it is up to the programmer: every piece of code that accesses a shared resource must be wrapped in a synchronized block.

Thread states

A thread can be in one of four states:
New: the thread object has been created but not yet started, so it cannot run yet.
Runnable: the thread can run as soon as the time-slicing mechanism gives it CPU cycles. That is, at any particular moment it may or may not actually be running, but nothing prevents it from running; it is neither dead nor blocked.
Dead: the recommended way to end a thread is to return from run(). Before Java 2 you could also call stop(), but that is now discouraged because it can leave the program in an unstable state. There is also destroy(), which has never been implemented and probably never will be; it is effectively abandoned.
Blocked: the thread could run, but something prevents it. The scheduler simply skips blocked threads and gives them no CPU time; a blocked thread can do nothing until it re-enters the runnable state.
Becoming blocked

If a thread is blocked, there is some reason why it cannot continue. A thread can become blocked for any of the following reasons:
You put the thread to sleep with sleep(milliseconds); it cannot run until that time has passed.
You suspended the thread with wait(); it does not become runnable again until it receives a notify() or notifyAll() message (this is covered below).
The thread is waiting for an I/O operation to complete.
The thread is trying to call a synchronized method on another object and has not yet acquired that object's lock.
You may also see suspend() and resume() in older code, but Java 2 deprecated both (because they easily lead to deadlock), so they are not covered here.

Cooperation between threads

Once you understand how threads can collide and how to prevent those collisions, the next step is learning how to make threads cooperate. The key is to let threads "handshake" with each other, and that handshaking is done with Object's wait() and notify().

Wait and notify

The first thing to emphasize is that sleep() does not release the object's lock, while wait() does. That means that while a thread is in wait(), other threads can call the object's synchronized methods. When a thread calls an object's wait(), it suspends itself and releases that object's lock.

Java has two forms of wait(). The first takes a time in milliseconds and means, like sleep(), "pause for a while." The differences are:
wait() releases the object's lock, and a timed wait() can also be ended early by notify() or notifyAll().
The second form of wait() takes no arguments and is more commonly used: after calling it, the thread waits indefinitely until (another thread calls the object's) notify() or notifyAll().

Unlike sleep(), which belongs to Thread, wait(), notify(), and notifyAll() are methods of the root class Object. It may seem odd to put multithreading support into the common root class, but it is necessary because these methods manipulate the lock that every object has. The upshot is that you can call wait() inside any synchronized method, regardless of whether the class extends Thread or implements Runnable. In fact, wait(), notify(), and notifyAll() can only be called inside a synchronized method or synchronized block (sleep() has no such restriction because it does not touch the lock). If you call them outside synchronized code, the program still compiles, but at run time it throws an IllegalMonitorStateException with the rather puzzling message "current thread not owner." The message means that a thread must own (have acquired) an object's lock before it may call that object's wait(), notify(), or notifyAll().

Typically you use wait() when a condition is controlled by forces outside the current method (most often, it is modified by another thread). wait() lets the thread sleep while waiting for the world to change; when (another thread calls the object's) notify() or notifyAll(), the thread wakes up and checks whether the condition has changed. So wait() provides a way to synchronize activity between threads.
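A minimal sketch of this wait()/notifyAll() handshaking; the class name Signal and the field ready are illustrative.

public class Signal {
    private boolean ready = false;

    public synchronized void waitUntilReady() throws InterruptedException {
        while (!ready) { // re-check the condition each time the thread wakes up
            wait();      // releases this object's lock while waiting
        }
    }

    public synchronized void makeReady() {
        ready = true;
        notifyAll(); // wakes every thread waiting on this object's lock
    }
}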

Using pipes for I/O between threads

Threads can also communicate through I/O. Threading libraries often provide "pipes" for I/O between threads. In the Java I/O library the corresponding classes are PipedWriter (which lets a thread write data into the pipe) and PipedReader (which lets another thread read data out of it). You can think of this as a variation of the producer-consumer problem, with the pipe providing a ready-made solution.

Note that if you start the threads before the pipe objects have been created, the pipe's behavior may be inconsistent across platforms.
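A minimal sketch of piped I/O between two threads; the class and variable names are illustrative. Both pipe ends are created and connected before the threads start, as the note above advises.

import java.io.IOException;
import java.io.PipedReader;
import java.io.PipedWriter;

public class PipeDemo {
    public static void main(String[] args) throws IOException {
        final PipedWriter writer = new PipedWriter();
        final PipedReader reader = new PipedReader(writer); // connect the two ends

        Thread sender = new Thread() {
            @Override
            public void run() {
                try {
                    for (char c = 'A'; c <= 'E'; c++) {
                        writer.write(c);
                    }
                    writer.close();
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        };

        Thread receiver = new Thread() {
            @Override
            public void run() {
                try {
                    int c;
                    while ((c = reader.read()) != -1) { // blocks until data arrives
                        System.out.println("read: " + (char) c);
                    }
                } catch (IOException e) {
                    throw new RuntimeException(e);
                }
            }
        };

        sender.start();
        receiver.start();
    }
}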

More sophisticated cooperation

We have covered only the most basic form of cooperation (the producer-consumer pattern, usually implemented with wait() and notify()/notifyAll()). It solves most thread-cooperation problems, but advanced texts describe many more sophisticated techniques.

Deadlock

Because a thread can be blocked, and because a synchronized method keeps other threads out of the object, the following can happen: thread one is waiting for thread two (to release an object), thread two is waiting for thread three, and so on, until some thread in the chain is waiting for thread one. You get a ring of threads, each waiting for another to release a resource, and none of them can run. This is called deadlock.

If a program deadlocks as soon as you run it, that is the easy case: you can start fixing it right away. The real trouble is a program that seems to work but harbors a hidden potential for deadlock. You may be sure no deadlock can occur while the bug simply lies dormant, until one day a user hits it (and it probably won't be reproducible). So for concurrent programming, preventing deadlock is an important part of the design phase.

Let's look at the classic deadlock scenario described by Dijkstra: the dining philosophers problem. The original story has five philosophers (though our example allows any number). The philosophers do only two things: think and eat. While thinking they need no shared resources, but to eat they must sit at the table, where the utensils are limited. In the original story the utensils are forks, and a philosopher needs two forks to take noodles from the bowl. It obviously works better to replace the forks with chopsticks, so: a philosopher needs two chopsticks in order to eat.

Now the key part of the problem: the philosophers are poor and can afford only five chopsticks. They sit in a circle with one chopstick placed between each pair of neighbors. To eat, a philosopher must pick up both the chopstick on his left and the one on his right. If either neighbor is using a chopstick he needs, he must wait.

What makes this interesting is that it demonstrates a program that appears to work but can easily deadlock.

Before looking at how to fix the problem, note that deadlock can occur only when all four of the following conditions hold:
Mutual exclusion: at least one of the resources used by the threads cannot be shared.
Hold and wait: at least one process holds a resource while waiting for another resource that is held by a different process.
No preemption: resources cannot be taken away from a process (by the scheduler or by other processes); a process must release its resources itself.
Circular wait: one process waits for a resource held by a second process, which waits for a resource held by a third, and so on, until some process waits for a resource held by the first, closing the circle and blocking everyone.
Because deadlock requires all four conditions at once, you can prevent deadlock by breaking any one of them.

The Java language provides no mechanism to prevent deadlock; avoiding it is up to your design.

The correct way to stop a thread

To reduce the likelihood of deadlock, Java 2 deprecated the Thread class's stop(), suspend(), and resume() methods.

stop() was abandoned because it does not release the object's locks properly, so if the object is left in an invalid (corrupted) state, other threads may see and modify it. The resulting problems can be very subtle and therefore hard to detect. So instead of calling stop(), you should set a flag that tells the thread when to stop itself.
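A minimal sketch of the flag-based approach; the class and method names are illustrative.

public class StoppableTask extends Thread {
    private volatile boolean stopRequested = false; // volatile so the change is visible to run()

    public void requestStop() {
        stopRequested = true;
    }

    @Override
    public void run() {
        while (!stopRequested) {
            // ... do one unit of work, then re-check the flag ...
        }
        // clean up and fall out of run(), which ends the thread
    }
}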

Interrupting a blocked thread

Sometimes a thread is blocked and can no longer poll anything, for example while waiting for input, so it cannot check a flag the way it normally would. In that case you can use Thread's interrupt() method to break the thread out of its blocked state.
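A minimal sketch of interrupting a blocked thread; the class name Blocked is illustrative.

public class Blocked extends Thread {
    @Override
    public void run() {
        try {
            synchronized (this) {
                wait(); // blocks until notified or interrupted
            }
        } catch (InterruptedException e) {
            System.out.println("interrupted while blocked");
            // run() ends here, so the thread stops
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Blocked blocked = new Blocked();
        blocked.start();
        Thread.sleep(500);   // give it time to block
        blocked.interrupt(); // causes InterruptedException inside wait()
    }
}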

Thread groups

A thread group is a container (a collection) of threads. In Joshua Bloch's words, its significance can be summed up as follows:

"It's best to think of the thread group as an unsuccessful experiment, or just when it doesn't exist." "

One small use remains for thread groups: if a thread in the group throws an exception that no handler catches, ThreadGroup.uncaughtException() is invoked, and by default it prints the stack trace to the standard error stream. To change that behavior, you must override this method.
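A minimal sketch of overriding ThreadGroup.uncaughtException(); the group name and class name are illustrative.

public class GroupDemo {
    public static void main(String[] args) {
        ThreadGroup group = new ThreadGroup("workers") {
            @Override
            public void uncaughtException(Thread t, Throwable e) {
                System.err.println("thread " + t.getName() + " died: " + e);
            }
        };
        Thread worker = new Thread(group, new Runnable() {
            @Override
            public void run() {
                throw new RuntimeException("uncaught");
            }
        });
        worker.start(); // the group's handler prints the message instead of the default stack trace
    }
}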

Summary

It is important to know when to use concurrency and when not to. The main reasons for using it are: managing a number of tasks whose interleaving makes better use of the system (including transparently distributing load across multiple CPUs), organizing the code more sensibly, and making things more convenient for the user. The classic example of load balancing is doing computation while waiting for I/O. The classic example of user convenience is watching a "Stop" button while the user downloads a large file.

An additional benefit of threads is that a thread context switch is "lightweight" (on the order of 100 instructions), whereas a process context switch is "heavyweight" (thousands of instructions). Because all the threads in a process share the same memory space, a lightweight switch changes only the flow of execution and the local variables, while a heavyweight process switch must swap the entire memory space.

The main drawbacks of multithreading are:
Slowdowns while threads wait for shared resources.
Extra CPU overhead for managing the threads.
Unrewarded complexity if the design is poorly thought out.
Pathological failure modes such as starvation, racing, deadlock, and livelock.
Inconsistencies across platforms. For example, while developing the examples for this book I found a race condition that showed up quickly on some machines but never appeared at all on others. If you develop on the latter and ship to the former, the result can be disastrous.
The hard part of threading is that multiple threads share the same resource, such as an object's memory, and you must ensure that no two threads access that resource at the same time. That requires judicious use of the synchronized keyword, which must be thoroughly understood before use, or it will quietly introduce deadlocks.

There is also an art to using threads. Java is designed to let you create as many objects as you need to solve your problem, at least in theory (creating millions of objects for something like an engineering finite-element analysis is not realistic in Java). But there is usually an upper bound on the number of threads you will want to create, because beyond some number threads become unwieldy. That critical point is hard to pin down; it depends on the OS and the JVM and might be under a hundred or in the thousands. Since you often need only a handful of threads to solve a problem, this is usually not much of a limit, but for a more general design it can be.

An important but less intuitive point about threading: you can often improve a program's performance by inserting yield() (or sleep()) calls into the main loop of run(). This is definitely an art, especially when a longer delay improves performance. The reason is that a shorter delay can cause a running thread to receive its wake-up notification before it is actually asleep, forcing the scheduler to stop it and wake it up again after it finally gets to sleep. Each extra context switch costs performance, and yield() and sleep() can prevent that unwanted switch. It takes some thought to appreciate how tricky this can get.

