Hasen's Linux device driver development learning journey -- Concurrency and race conditions in Linux device drivers

Source: Internet
Author: User
Tags: mutex, semaphore

/*
 * Author: hasen
 * Reference: Linux Device Driver Development Details
 * Introduction: an Android novice's Linux device driver development learning journey
 * Topic: Concurrency and race conditions in Linux device drivers
 * Date: 2014-11-04
 */

1. Concurrency and race conditions

Concurrency means that multiple execution units are executed in parallel, and when those concurrent execution units access shared resources (global variables, static variables and the like in software), race conditions easily arise. Races mainly occur in the following situations:
(1) symmetric multi-processing (SMP): several CPUs running at the same time
(2) on a single CPU: a process and the process that preempts it
(3) interrupts (hard interrupts, soft interrupts, tasklets, bottom halves) and the processes they interrupt
Interrupt masking, atomic operations, spin locks and semaphores are the mutual-exclusion mechanisms that can be used in Linux device drivers.

2. Interrupt masking

A simple way to avoid races within a single CPU is to mask the system's interrupts before entering the critical section. Interrupt masking is used as follows:

local_irq_disable()   /* mask interrupts */
...
critical section
...
local_irq_enable()    /* re-enable interrupts */

Masking interrupts for a long time is very dangerous. Instead of simply disabling interrupt delivery, you can also save the current CPU's interrupt state:

local_irq_save(flags)
...
critical section
...
local_irq_restore(flags)

If you only want to disable the bottom half of interrupt handling:

local_bh_disable()
...
critical section
...
local_bh_enable()
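
As a quick illustration of local_irq_save()/local_irq_restore() -- a minimal sketch, not taken from the book, with an invented device name and counter -- the pattern can protect a small piece of data that is updated by an interrupt handler and read from process context on the same CPU:

static unsigned long xxx_events;  /* shared between the handler and process context (hypothetical) */

static irqreturn_t xxx_interrupt(int irq, void *dev_id)
{
        xxx_events++;                 /* updated only in interrupt context */
        return IRQ_HANDLED;
}

static unsigned long xxx_read_and_clear_events(void)
{
        unsigned long flags, n;

        local_irq_save(flags);        /* mask interrupts on this CPU, remembering their state */
        n = xxx_events;               /* critical section: read and reset the shared counter */
        xxx_events = 0;
        local_irq_restore(flags);     /* restore the saved interrupt state */

        return n;
}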

3. Atomic operations

Atomic operations are operations that cannot be interrupted by other code paths while they execute.

Integer atomic operations:
(1) set the value of an atomic variable
void atomic_set(atomic_t *v, int i);   /* set the atomic variable to i */
atomic_t v = ATOMIC_INIT(0);           /* define the atomic variable v and initialize it to 0 */
(2) get the value of an atomic variable
atomic_read(atomic_t *v);              /* return the value of the atomic variable */
(3) add to / subtract from an atomic variable
void atomic_add(int i, atomic_t *v);   /* add i to the atomic variable */
void atomic_sub(int i, atomic_t *v);   /* subtract i from the atomic variable */
(4) increment / decrement an atomic variable
void atomic_inc(atomic_t *v);          /* increment the atomic variable by 1 */
void atomic_dec(atomic_t *v);          /* decrement the atomic variable by 1 */
(5) operate and test
int atomic_inc_and_test(atomic_t *v);        /* increment, then test whether the result is 0; return true if it is */
int atomic_dec_and_test(atomic_t *v);        /* decrement, then test whether the result is 0; return true if it is */
int atomic_sub_and_test(int i, atomic_t *v); /* subtract, then test whether the result is 0; return true if it is */
(6) operate and return
int atomic_add_return(int i, atomic_t *v);   /* add and return the new value */
int atomic_sub_return(int i, atomic_t *v);   /* subtract and return the new value */
int atomic_inc_return(atomic_t *v);          /* increment and return the new value */
int atomic_dec_return(atomic_t *v);          /* decrement and return the new value */

Bit atomic operations:
(1) set a bit
void set_bit(nr, void *addr);    /* set bit nr of the address addr, i.e. write it to 1 */
(2) clear a bit
void clear_bit(nr, void *addr);  /* clear bit nr of the address addr, i.e. write it to 0 */
(3) change a bit
void change_bit(nr, void *addr); /* invert bit nr of the address addr */
(4) test a bit
test_bit(nr, void *addr);        /* return bit nr of the address addr */
(5) test and operate on a bit
int test_and_set_bit(nr, void *addr);
int test_and_clear_bit(nr, void *addr);
int test_and_change_bit(nr, void *addr);

Using atomic variables so that a device can only be opened by one process at a time:

static atomic_t xxx_available = ATOMIC_INIT(1);  /* define an atomic variable */

static int xxx_open(struct inode *inode, struct file *filp)
{
        ...
        if (!atomic_dec_and_test(&xxx_available)) {
                atomic_inc(&xxx_available);
                return -EBUSY;  /* already open */
        }
        ...
        return 0;  /* success */
}

static int xxx_release(struct inode *inode, struct file *filp)
{
        atomic_inc(&xxx_available);  /* release the device */
        return 0;
}
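
The same single-open policy can also be written with the bit operations listed above. The following is a minimal sketch (the device name and flag variable are invented for illustration); test_and_set_bit() performs the test and the set as one atomic step, so no separate lock is needed:

static unsigned long xxx_open_flag;  /* bit 0 set means the device is already open */

static int xxx_open(struct inode *inode, struct file *filp)
{
        if (test_and_set_bit(0, &xxx_open_flag))
                return -EBUSY;   /* the bit was already set: already open */
        return 0;                /* success: we set the bit */
}

static int xxx_release(struct inode *inode, struct file *filp)
{
        clear_bit(0, &xxx_open_flag);  /* release the device */
        return 0;
}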

4. Spin locks

The spin lock is a typical means of mutually exclusive access to a critical resource. To obtain a spin lock, the code running on a CPU executes an atomic test-and-set operation on a memory variable; because the operation is atomic, no other execution unit can access that variable before the operation completes. If the test shows that the lock is free, the program acquires the spin lock and continues; if the test shows that the lock is still held, the program repeats the test-and-set operation in a tight loop, which is the "spinning". When the holder releases the spin lock by resetting the variable, one of the waiting test-and-set operations reports to its caller that the lock has been released.

Spin-lock operations:
(1) define a spin lock
spinlock_t lock;
(2) initialize a spin lock
spin_lock_init(lock);  /* this macro dynamically initializes the spin lock lock */
(3) acquire a spin lock
spin_lock(lock);       /* acquires the spin lock lock; if the lock can be obtained immediately, it returns at once, otherwise it spins until the holder releases the lock */
spin_trylock(lock);    /* tries to acquire the spin lock lock; if the lock can be obtained immediately, it takes it and returns true, otherwise it returns false at once without spinning */
(4) release a spin lock
spin_unlock(lock);     /* releases the spin lock lock; paired with spin_lock() or spin_trylock() */

Spin locks are generally used like this:

spinlock_t lock;
spin_lock_init(&lock);

spin_lock(&lock);   /* acquire the spin lock, protect the critical section */
...                 /* critical section */
spin_unlock(&lock); /* release the lock */

Points to note when using spin locks:
(1) A spin lock is really busy-waiting; when the critical section is large, or the lock on the shared resource is held for a long time, spin locks reduce system performance.
(2) Spin locks may cause system deadlock.
(3) While holding a spin lock you must not call functions that can cause the process to sleep, otherwise a deadlock may occur.

Example: using a spin lock so that a device can only be opened by one process at a time:

static spinlock_t xxx_lock;
static int xxx_count = 0;  /* how many times the device has been opened */

static int xxx_open(struct inode *inode, struct file *filp)
{
        ...
        spin_lock(&xxx_lock);
        if (xxx_count) {  /* already open */
                spin_unlock(&xxx_lock);
                return -EBUSY;
        }
        xxx_count++;      /* increase the usage count */
        spin_unlock(&xxx_lock);
        ...
        return 0;         /* success */
}

static int xxx_release(struct inode *inode, struct file *filp)
{
        ...
        spin_lock(&xxx_lock);
        xxx_count--;      /* decrease the usage count */
        spin_unlock(&xxx_lock);
        return 0;
}

Read-write spin locks

A read-write spin lock allows at most one write execution unit, but for read operations several read execution units may run at the same time. Reading and writing, of course, cannot happen simultaneously. The read-write spin lock operations are as follows:
(1) define and initialize a read-write spin lock
rwlock_t my_rwlock = RW_LOCK_UNLOCKED;  /* static initialization */
rwlock_t my_rwlock;
rwlock_init(&my_rwlock);                /* dynamic initialization */
(2) read lock
void read_lock(rwlock_t *lock);
void read_lock_irqsave(rwlock_t *lock, unsigned long flags);
void read_lock_irq(rwlock_t *lock);
void read_lock_bh(rwlock_t *lock);
(3) read unlock
void read_unlock(rwlock_t *lock);
void read_unlock_irqrestore(rwlock_t *lock, unsigned long flags);
void read_unlock_irq(rwlock_t *lock);
void read_unlock_bh(rwlock_t *lock);
(4) write lock
void write_lock(rwlock_t *lock);
void write_lock_irqsave(rwlock_t *lock, unsigned long flags);
void write_lock_irq(rwlock_t *lock);
void write_lock_bh(rwlock_t *lock);
int write_trylock(rwlock_t *lock);
(5) write unlock
void write_unlock(rwlock_t *lock);
void write_unlock_irqrestore(rwlock_t *lock, unsigned long flags);
void write_unlock_irq(rwlock_t *lock);
void write_unlock_bh(rwlock_t *lock);

Read-write spin locks are used as follows:

rwlock_t lock;       /* define the rwlock */
rwlock_init(&lock);  /* initialize the rwlock */

/* acquire the lock when reading */
read_lock(&lock);
...                  /* critical resource */
read_unlock(&lock);

/* acquire the lock when writing */
write_lock_irqsave(&lock, flags);
...                  /* critical resource */
write_unlock_irqrestore(&lock, flags);
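
To make the read/write split concrete, here is a small hedged sketch (the device and field names are invented): a parameter block that many paths read but that only an occasional configuration path rewrites:

static rwlock_t xxx_param_lock;   /* initialize with rwlock_init(&xxx_param_lock) at init time */
static struct {
        int rate;
        int mode;
} xxx_param;                      /* shared configuration (hypothetical) */

static void xxx_get_param(int *rate, int *mode)
{
        read_lock(&xxx_param_lock);     /* several readers may hold the lock at once */
        *rate = xxx_param.rate;
        *mode = xxx_param.mode;
        read_unlock(&xxx_param_lock);
}

static void xxx_set_param(int rate, int mode)
{
        write_lock(&xxx_param_lock);    /* the writer excludes readers and other writers */
        xxx_param.rate = rate;
        xxx_param.mode = mode;
        write_unlock(&xxx_param_lock);
}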

Sequence locks (seqlocks)

The sequence lock (seqlock) is an optimization of the read-write lock. With a sequence lock, reads are never blocked by a write execution unit: while a critical resource is being written it can still be read, so reading and writing can proceed at the same time. Writers, however, must still exclude one another. If a write execution unit has written during a read operation, the read execution unit must start over; this guarantees data integrity, and the chance of it happening is usually very small. Sequence-lock performance is very good, and since it allows reading and writing at the same time it greatly improves concurrency. It has one limitation: the shared resource must not contain pointers, because the write execution unit may make a pointer invalid, and if the read execution unit is about to dereference that pointer, an oops results (the word literally means a startled exclamation; here it is the kernel error report produced by following the invalid pointer -- in short, unexpected results).

The sequence-lock operations designed for the write execution unit in the Linux kernel are as follows:
(1) obtain the sequence lock
void write_seqlock(seqlock_t *sl);
int write_tryseqlock(seqlock_t *sl);
write_seqlock_irqsave(lock, flags);
write_seqlock_irq(lock);
write_seqlock_bh(lock);
where:
write_seqlock_irqsave() = local_irq_save() + write_seqlock()
write_seqlock_irq()     = local_irq_disable() + write_seqlock()
write_seqlock_bh()      = local_bh_disable() + write_seqlock()
(2) release the sequence lock
void write_sequnlock(seqlock_t *sl);
write_sequnlock_irqrestore(lock, flags);
write_sequnlock_irq(lock);
write_sequnlock_bh(lock);
where:
write_sequnlock_irqrestore() = write_sequnlock() + local_irq_restore()
write_sequnlock_irq()        = write_sequnlock() + local_irq_enable()
write_sequnlock_bh()         = write_sequnlock() + local_bh_enable()

The write execution unit uses the sequence lock in this pattern:

write_seqlock(&seqlock_a);
...  /* write code block */
write_sequnlock(&seqlock_a);

Operations for the read execution unit:

Start a read:
unsigned read_seqbegin(const seqlock_t *sl);
read_seqbegin_irqsave(lock, flags)
The read execution unit calls this function before it accesses the shared resource protected by the sequence lock sl; the function returns the current sequence number of the lock. Here:
read_seqbegin_irqsave() = local_irq_save() + read_seqbegin()

Reread check:
int read_seqretry(const seqlock_t *sl, unsigned iv);
read_seqretry_irqrestore(lock, iv, flags)
After accessing the shared resource protected by the sequence lock, the read execution unit calls this function to check whether a write operation occurred during the read; if one did, the read execution unit reads again. Here:
read_seqretry_irqrestore() = read_seqretry() + local_irq_restore()

The read execution unit uses the sequence lock in this pattern:

do {
        seqnum = read_seqbegin(&seqlock_a);
        /* read code block */
        ...
} while (read_seqretry(&seqlock_a, seqnum));
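
A minimal sketch of the two patterns together (the 64-bit timestamp and the function names are assumptions made for illustration): the value contains no pointers, so it is a safe candidate for a sequence lock, and readers simply retry if a write slipped in between read_seqbegin() and read_seqretry():

static seqlock_t xxx_time_lock;   /* initialize with seqlock_init(&xxx_time_lock) at init time */
static u64 xxx_time_ns;           /* shared value; note: no pointers inside */

static void xxx_update_time(u64 now)
{
        write_seqlock(&xxx_time_lock);    /* writers still exclude one another */
        xxx_time_ns = now;
        write_sequnlock(&xxx_time_lock);
}

static u64 xxx_get_time(void)
{
        unsigned seq;
        u64 val;

        do {
                seq = read_seqbegin(&xxx_time_lock);   /* sample the sequence number */
                val = xxx_time_ns;                     /* read the shared value */
        } while (read_seqretry(&xxx_time_lock, seq));  /* a write occurred: read again */

        return val;
}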

Read-copy-update (RCU)

For the details of RCU see http://www.ibm.com/developerworks/cn/linux/l-rcu/; only a summary is given here. RCU (read-copy-update) is, as the name says, read / copy / update, and it is named after its working principle. For a shared data structure protected by RCU, readers do not need to take any lock to access it. A writer first makes a copy while the data is being accessed, modifies the copy, and finally uses a callback mechanism to, at the appropriate moment, repoint the pointer that referred to the original data at the newly modified data. The appropriate moment is when all the CPUs that were referencing the data have left the shared-data operation.

So RCU is really an improved rwlock: readers have almost no synchronization overhead. They take no lock, use no atomic instructions, and on every architecture except Alpha need no memory barriers, so RCU does not cause lock contention, memory latency or pipeline stalls. It is also easier to use, because without locks there is no deadlock to worry about. The writer's synchronization overhead is comparatively large: it has to defer the freeing of the data structure, copy the structure being modified, and it must still use some locking mechanism to synchronize with other writers' modifications. Readers must provide a signal to writers so that a writer can determine when the data can safely be freed or modified. A dedicated garbage collector watches for these signals; once all readers have signalled that they no longer use the RCU-protected data structure, the garbage collector calls the callback function to complete the final free or modify operation.

The difference between RCU and rwlock is that RCU allows multiple readers to access the protected data at the same time, and also allows multiple readers and one or more writers to access it at the same time (note: whether several writers may run in parallel depends on the synchronization mechanism used among the writers). Readers have no synchronization overhead, while the writer's synchronization cost depends on the mechanism the writers use among themselves. But RCU cannot replace rwlock: if writes are too frequent or too long, the readers' gains cannot make up for the writers' losses.

A reader must not block while accessing shared data protected by RCU; this is a basic premise for the RCU mechanism to work. In other words, while a reader is referencing RCU-protected shared data, the reader's CPU must not perform a context switch; spinlock and rwlock impose the same premise. A writer does not need to compete with readers for any lock when it accesses RCU-protected shared data; only when there is more than one writer does it need to take some kind of lock to synchronize with the other writers. The writer first copies the element to be modified, modifies the copy, and then registers a callback function with the garbage collector so that the real modification happens at the appropriate time. The period spent waiting for that appropriate time is called the grace period, and a CPU performing a context switch is said to pass through a quiescent state; the grace period is the time it takes for every CPU to pass through a quiescent state. The garbage collector completes the real data modification or release by calling the writer's registered callback after the grace period.

The deletion of a linked-list element illustrates this process in detail. The writer wants to remove element B from the list. It first traverses the list to obtain a pointer to B, then makes the next pointer of B's previous element (A) point to B's next element (C), and makes the prev pointer of C point back to A. Readers may still be accessing the list while this happens; each pointer update is atomic, so no synchronization is needed, and B's own pointers are not modified, because a reader may still be using B to reach the next or the previous element. After completing these operations, the writer registers a callback function that will delete element B after the grace period, and from then on the deletion is considered done. When the garbage collector detects that no CPU references the list any more, that is, every CPU has passed through a quiescent state and the grace period has elapsed, it calls the callback the writer just registered and element B is actually removed.
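
The deletion just described can be sketched with the RCU list helpers listed in the next section (the element type, list head and lock are invented for illustration, and synchronize_rcu() is used here instead of registering a call_rcu() callback, to keep the sketch short):

struct xxx_node {
        int key;
        struct list_head list;
};

static LIST_HEAD(xxx_list);             /* the RCU-protected list (hypothetical) */
static DEFINE_SPINLOCK(xxx_list_lock);  /* serializes writers only; readers take no lock */

static void xxx_delete(struct xxx_node *b)
{
        spin_lock(&xxx_list_lock);
        list_del_rcu(&b->list);    /* unlink B; readers may still hold a pointer to it */
        spin_unlock(&xxx_list_lock);

        synchronize_rcu();         /* wait for the grace period: every reader has moved on */
        kfree(b);                  /* only now is it safe to free element B */
}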

The RCU operations are as follows:
(1) read lock
rcu_read_lock()
rcu_read_lock_bh()
(2) read unlock
rcu_read_unlock()
rcu_read_unlock_bh()

The read side uses RCU in this pattern:

rcu_read_lock();
...  /* read-side critical section */
rcu_read_unlock();

Here rcu_read_lock() and rcu_read_unlock() merely disable and re-enable kernel preemption:

#define rcu_read_lock()    preempt_disable()
#define rcu_read_unlock()  preempt_enable()

Their variants rcu_read_lock_bh() and rcu_read_unlock_bh() are defined as:

#define rcu_read_lock_bh()    local_bh_disable()
#define rcu_read_unlock_bh()  local_bh_enable()

(3) synchronize RCU
synchronize_rcu()
/* Called by the RCU write execution unit; it blocks the writer until all read execution units have finished their read-side critical sections, after which the writer can carry on with its next operation. */
synchronize_kernel()
/* Kernel code uses this function to wait until all CPUs can be preempted; synchronize_sched() is recommended instead. */
(4) hook a callback
void call_rcu(struct rcu_head *head, void (*func)(struct rcu_head *rcu));
/* Also called by the RCU write execution unit. It does not block the writer, so it can be used in interrupt or softirq context; it hooks func onto the RCU callback chain and returns immediately. */
void call_rcu_bh(struct rcu_head *head, void (*func)(struct rcu_head *rcu));
/* Similar to call_rcu(); the only difference is that it treats the completion of a softirq as a quiescent state, so if the write execution unit uses it, read execution units in process context must use rcu_read_lock_bh(). */

RCU also adds RCU versions of the linked-list operations:

static inline void list_add_rcu(struct list_head *new, struct list_head *head);
/* insert the list element new at the head of the RCU-protected list head */
static inline void list_add_tail_rcu(struct list_head *new, struct list_head *head);
/* insert the list element new at the tail of the RCU-protected list head */
static inline void list_del_rcu(struct list_head *entry);
/* remove the given element entry from the RCU-protected list */
static inline void list_replace_rcu(struct list_head *old, struct list_head *new);
/* replace the old list element with new; a memory barrier guarantees that the fix-up of the list pointers is visible to all read execution units before the new element is referenced */
list_for_each_rcu(pos, head)
/* traverse the RCU-protected list head; as long as it is used inside a read-side critical section it can safely run concurrently with the other _rcu list operations */
list_for_each_safe_rcu(pos, n, head)
/* like list_for_each_rcu(), except that it allows the current list element pos to be deleted safely */
list_for_each_entry_rcu(pos, head, member)
/* like list_for_each_rcu(), except that it traverses a list of a given data type; the current element pos is of that concrete type, which embeds a struct list_head */
static inline void hlist_del_rcu(struct hlist_node *n)
/* remove the list element n from the RCU-protected hash list */
static inline void hlist_add_head_rcu(struct hlist_node *n, struct hlist_head *h)
/* insert the list element n at the head of the RCU-protected hash list while still allowing read execution units to traverse it; a memory barrier guarantees that the pointer changes are visible to all read execution units before the new element is referenced */
hlist_for_each_rcu(pos, head)
/* traverse the RCU-protected hash list head; as long as it is used inside a read-side critical section it can safely run concurrently with the other _rcu hash-list operations */
hlist_for_each_entry_rcu(tpos, pos, head, member)
/* like hlist_for_each_rcu(), except that it traverses a hash list of a given data type; the current element is of that concrete type, which embeds a struct hlist_node */
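
For completeness, here is a hedged sketch of the read side and of an insertion, reusing the invented xxx_node list from the deletion sketch above; the reader takes no lock, and any pointer obtained inside the read-side critical section must not be used after rcu_read_unlock():

static bool xxx_find(int key)
{
        struct xxx_node *pos;
        bool found = false;

        rcu_read_lock();                               /* enter the read-side critical section */
        list_for_each_entry_rcu(pos, &xxx_list, list) {
                if (pos->key == key) {
                        found = true;                  /* copy out what is needed before unlocking */
                        break;
                }
        }
        rcu_read_unlock();                             /* pos must not be dereferenced after this */
        return found;
}

static void xxx_insert(struct xxx_node *new)
{
        spin_lock(&xxx_list_lock);            /* writers still serialize among themselves */
        list_add_rcu(&new->list, &xxx_list);  /* publish the new element to readers */
        spin_unlock(&xxx_list_lock);
}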

5. Semaphores

The semaphore is a common means of protecting a critical section, and it is used much like a spin lock: as with a spin lock, only the process that has obtained the semaphore may execute the code of the critical section. Unlike with a spin lock, however, a process that fails to obtain the semaphore does not spin in place but goes into a sleeping wait state.

The semaphore-related operations in Linux are:
(1) define a semaphore
struct semaphore sem;
(2) initialize a semaphore
void sema_init(struct semaphore *sem, int val);
/* initialize the semaphore and set its value to val */
#define init_MUTEX(sem)        sema_init(sem, 1)
/* this macro initializes a semaphore used for mutual exclusion and sets its value to 1 */
#define init_MUTEX_LOCKED(sem) sema_init(sem, 0)
/* this macro initializes a semaphore used for mutual exclusion and sets its value to 0 */
The two macros below are "shortcuts" that define and initialize a semaphore in one step:
DECLARE_MUTEX(name)         /* define a semaphore called name and initialize it to 1 */
DECLARE_MUTEX_LOCKED(name)  /* define a semaphore called name and initialize it to 0 */
(3) obtain a semaphore
void down(struct semaphore *sem);
/* obtain the semaphore sem; it may sleep, so it must not be used in interrupt context */
int down_interruptible(struct semaphore *sem);
/* like down(), except that a process put to sleep by down() cannot be interrupted by a signal, whereas a process put to sleep by down_interruptible() can be; in that case the signal also makes the function return, with a non-zero return value */
int down_trylock(struct semaphore *sem);
/* try to obtain the semaphore sem; if it can be obtained immediately, take it and return 0, otherwise return a non-zero value; it never makes the caller sleep and can be used in interrupt context */
When using down_interruptible() to obtain a semaphore, the return value is normally checked, and -ERESTARTSYS is returned if it is non-zero:

if (down_interruptible(&sem))
        return -ERESTARTSYS;

(4) release a semaphore
void up(struct semaphore *sem);
/* release the semaphore sem, waking up any waiter */

Semaphores are generally used like this:

DECLARE_MUTEX(mount_sem);   /* define the semaphore */

down(&mount_sem);           /* obtain the semaphore, protect the critical section */
...
critical section
...
up(&mount_sem);             /* release the semaphore */

Example: using a semaphore so that a device can only be opened by one process at a time:

static DECLARE_MUTEX(xxx_lock);  /* define a mutual-exclusion semaphore */

static int xxx_open(struct inode *inode, struct file *filp)
{
        ...
        if (down_trylock(&xxx_lock))  /* obtain the open lock */
                return -EBUSY;        /* device busy */
        ...
        return 0;                     /* success */
}

static int xxx_release(struct inode *inode, struct file *filp)
{
        up(&xxx_lock);  /* release the open lock */
        return 0;
}

A semaphore initialized to 0 can also be used for synchronization between execution units, as in the sketch below.
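
A minimal sketch of that synchronization use (the names are invented): the semaphore starts at 0, the read path sleeps in down_interruptible() until the interrupt handler posts one unit of data with up(); up() never sleeps, so it is safe in interrupt context:

static DECLARE_MUTEX_LOCKED(xxx_data_ready);  /* semaphore defined with an initial value of 0 */

static irqreturn_t xxx_irq_handler(int irq, void *dev_id)
{
        /* ... fetch the data from the hardware into a driver buffer ... */
        up(&xxx_data_ready);                  /* signal: one unit of data is now available */
        return IRQ_HANDLED;
}

static ssize_t xxx_read(struct file *filp, char __user *buf,
                        size_t count, loff_t *ppos)
{
        if (down_interruptible(&xxx_data_ready))  /* sleep until the handler calls up() */
                return -ERESTARTSYS;
        /* ... copy the buffered data to user space ... */
        return count;
}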

For this kind of synchronization Linux offers a mechanism that is better suited to the job: the completion. Four groups of operations are related to completions:
(1) define a completion
/* the code below defines a completion called my_completion */
struct completion my_completion;
(2) initialize the completion
/* the code below initializes the completion my_completion */
init_completion(&my_completion);
The definition and initialization of my_completion can also be done with the following shortcut:
DECLARE_COMPLETION(my_completion);
(3) wait for the completion
/* the function below waits for a completion to be woken up */
void wait_for_completion(struct completion *c);
(4) signal the completion
/* the two functions below wake up a completion */
void complete(struct completion *c);
void complete_all(struct completion *c);
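
A short hedged sketch of how a completion is typically used (the function names are assumptions): one execution unit blocks in wait_for_completion() until another one, here an interrupt handler, calls complete():

static DECLARE_COMPLETION(xxx_done);

static irqreturn_t xxx_dma_interrupt(int irq, void *dev_id)
{
        complete(&xxx_done);             /* the transfer has finished */
        return IRQ_HANDLED;
}

static int xxx_do_transfer(void)
{
        /* ... program the hardware and start the transfer ... */
        wait_for_completion(&xxx_done);  /* sleep until the interrupt signals completion */
        return 0;
}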

Three principles for choosing between a spin lock and a semaphore:
(1) When the lock cannot be obtained, the cost of a semaphore is the process context-switch time Tsw, while the cost of a spin lock is the time spent waiting for it, Tcs, which is determined by how long the critical section executes. If Tcs is small, a spin lock is preferable; if Tcs is large, a semaphore is preferable.
(2) The critical section protected by a semaphore may contain code that can block, whereas a spin lock must never protect a critical section containing such code. Blocking means a process switch, and if a process is switched out while holding a spin lock and another process then tries to acquire that spin lock, a deadlock can occur.
(3) Semaphores exist for process context, so if the protected shared resource is also used in interrupt or softirq context, only a spin lock can be chosen. If a semaphore really has to be used there, it can only be through down_trylock(), which returns immediately when the semaphore cannot be obtained instead of blocking.

6. Read-write semaphores

A read-write semaphore may cause blocking, but it allows N read execution units to access the shared resource at the same time, while at most one write execution unit may access it. The operations involved are:
(1) define and initialize a read-write semaphore
struct rw_semaphore my_rws;                 /* define a read-write semaphore */
void init_rwsem(struct rw_semaphore *sem);  /* initialize the read-write semaphore */
(2) obtain the semaphore for reading
void down_read(struct rw_semaphore *sem);
int down_read_trylock(struct rw_semaphore *sem);
(3) release the semaphore after reading
void up_read(struct rw_semaphore *sem);
(4) obtain the semaphore for writing
void down_write(struct rw_semaphore *sem);
int down_write_trylock(struct rw_semaphore *sem);
(5) release the semaphore after writing
void up_write(struct rw_semaphore *sem);

Read-write semaphores are generally used like this:

struct rw_semaphore rw_sem;  /* define a read-write semaphore */
init_rwsem(&rw_sem);         /* initialize the read-write semaphore */

/* obtain the semaphore when reading */
down_read(&rw_sem);
...                          /* critical resource */
up_read(&rw_sem);

/* obtain the semaphore when writing */
down_write(&rw_sem);
...                          /* critical resource */
up_write(&rw_sem);

7. Mutexes

The mutex is the genuine mutual-exclusion primitive in Linux.
(1) define and initialize a mutex
struct mutex my_mutex;
mutex_init(&my_mutex);
(2) obtain the mutex
void __sched mutex_lock(struct mutex *lock);
int __sched mutex_lock_interruptible(struct mutex *lock);
int __sched mutex_trylock(struct mutex *lock);
(3) release the mutex
void __sched mutex_unlock(struct mutex *lock);

A mutex is used exactly the way a semaphore is used for mutual exclusion:

struct mutex my_mutex;    /* define the mutex */
mutex_init(&my_mutex);    /* initialize the mutex */

mutex_lock(&my_mutex);    /* obtain the mutex */
...                       /* critical resource */
mutex_unlock(&my_mutex);  /* release the mutex */

8. Summary

Concurrency and races are everywhere. Interrupt masking, atomic operations, spin locks and semaphores are all mechanisms for dealing with concurrency. Interrupt masking is rarely used on its own, and atomic operations only work on integers and individual bits, so spin locks and semaphores are used the most. A spin lock busy-waits and forbids blocking while the lock is held, so the critical section it protects is required to be small. A semaphore allows its critical section to block and is suitable for large critical sections. Read-write spin locks and read-write semaphores relax the conditions of spin locks and semaphores respectively: they allow multiple execution units to read the shared resource concurrently.
