FreeRTOS Advanced Chapter 6---FreeRTOS Semaphore Analysis

Source: Internet
Author: User
Tags: error code, inheritance, mutex, semaphore, set, time

FreeRTOS provides several kinds of semaphores: binary semaphores, counting semaphores, mutexes (mutual-exclusion semaphores), and recursive mutexes. The differences between them are described in the article "FreeRTOS Series 19---FreeRTOS Semaphores".

The semaphore API functions are actually macros that reuse the existing queue mechanism. These macros are defined in semphr.h, so that header file must be included wherever semaphores or mutexes are used.

Binary semaphores, counting semaphores, and mutexes each have their own creation API function, but they share the same take and release API functions. Recursive mutexes have their own creation, take, and release API functions.

1. Semaphore Creation

In FreeRTOS Advanced 5---FreeRTOS Queue Analysis, we analyzed the implementation of queues, including queue creation, enqueue, and dequeue operations. In that article we noted that the queue creation API function actually calls the universal queue creation function xQueueGenericCreate(). In fact, not only queue creation uses this function: binary semaphores, counting semaphores, mutexes, and recursive mutexes also use it, directly or indirectly, as shown in Table 1-1. The red font in Table 1-1 indicates that xQueueGenericCreate() is called indirectly.

Table 1-1: Creation macros for queues, semaphores, and mutexes, and the functions they execute directly (or indirectly)

1.1 Creating a Binary Semaphore

A binary semaphore is created by calling the universal queue creation function xQueueGenericCreate() directly. The binary semaphore creation API is actually a macro, defined as follows:

#define xSemaphoreCreateBinary()                        \
        xQueueGenericCreate(                            \
                ( UBaseType_t ) 1,                      \
                semSEMAPHORE_QUEUE_ITEM_LENGTH,         \
                NULL,                                   \
                NULL,                                   \
                queueQUEUE_TYPE_BINARY_SEMAPHORE )

From this macro definition we can see that creating a binary semaphore actually creates a queue with one queue item, but with a queue item size of 0 (the macro semSEMAPHORE_QUEUE_ITEM_LENGTH is defined as 0).

With what we know about queue creation, we can easily draw the memory layout of an initialized binary semaphore, as shown in Figure 1-1.


Figure 1-1: Initialized binary semaphore object memory

Someone new to this, like me, might wonder: if the queue has no storage for queue items, what represents the semaphore? In fact, releasing and taking a binary semaphore are implemented by manipulating the queue structure member uxMessagesWaiting (shown in red in Figure 1-1; uxMessagesWaiting holds the number of items currently in the queue). After initialization uxMessagesWaiting is 0, which means the queue is empty, i.e. the semaphore is in an invalid (unavailable) state. Before the API function xSemaphoreTake() can obtain the semaphore, the semaphore must first be released. This is described in more detail later, in the sections on releasing and taking binary semaphores.
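
As a quick illustration of this give-before-take behavior, here is a minimal sketch (not from the FreeRTOS sources; the task and handle names are invented for the example) in which one task signals another through a binary semaphore:

#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

static SemaphoreHandle_t xBinarySem;   /* hypothetical handle name */

static void vProducerTask( void *pvParameters )
{
    for( ;; )
    {
        vTaskDelay( pdMS_TO_TICKS( 1000 ) );
        /* Make the semaphore available: uxMessagesWaiting goes from 0 to 1. */
        xSemaphoreGive( xBinarySem );
    }
}

static void vConsumerTask( void *pvParameters )
{
    for( ;; )
    {
        /* Block until the semaphore is given; a successful take sets
           uxMessagesWaiting back to 0. */
        if( xSemaphoreTake( xBinarySem, portMAX_DELAY ) == pdPASS )
        {
            /* ... handle the event ... */
        }
    }
}

void vStartBinarySemDemo( void )
{
    xBinarySem = xSemaphoreCreateBinary();   /* created empty (invalid) */
    configASSERT( xBinarySem != NULL );
    xTaskCreate( vProducerTask, "Prod", configMINIMAL_STACK_SIZE, NULL, 1, NULL );
    xTaskCreate( vConsumerTask, "Cons", configMINIMAL_STACK_SIZE, NULL, 2, NULL );
}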

1.2 Creating a Counting Semaphore

Creating a counting semaphore uses the universal queue creation function xQueueGenericCreate() indirectly. The counting semaphore creation API is also a macro definition:

#define xSemaphoreCreateCounting( uxMaxCount, uxInitialCount )              \
        xQueueCreateCountingSemaphore( ( uxMaxCount ), ( uxInitialCount ), ( NULL ) )

The counting semaphore creation API takes two parameters, with the following meanings:

uxMaxCount: the maximum count value; once the semaphore count reaches this value it will not grow any further.
uxInitialCount: the initial count value assigned when the semaphore is created.

Let's take a look at how the function xQueueCreateCountingSemaphore() is implemented:

QueueHandle_t xQueueCreateCountingSemaphore( const UBaseType_t uxMaxCount, const UBaseType_t uxInitialCount, StaticQueue_t *pxStaticQueue )
{
QueueHandle_t xHandle;

    configASSERT( uxMaxCount != 0 );
    configASSERT( uxInitialCount <= uxMaxCount );

    /* Call the universal queue creation function */
    xHandle = xQueueGenericCreate(
            uxMaxCount,
            queueSEMAPHORE_QUEUE_ITEM_LENGTH,
            NULL,
            pxStaticQueue,
            queueQUEUE_TYPE_COUNTING_SEMAPHORE );

    if( xHandle != NULL )
    {
        ( ( Queue_t * ) xHandle )->uxMessagesWaiting = uxInitialCount;
    }

    configASSERT( xHandle );
    return xHandle;
}

As you can see from the code, creating a counting semaphore still calls the universal queue creation function xQueueGenericCreate() to create a queue. The number of queue items is specified by the parameter uxMaxCount, and the size of each queue item is given by the macro queueSEMAPHORE_QUEUE_ITEM_LENGTH. Looking up this macro, we find it is defined as 0, which means the created queue only has storage for the queue data structure and no storage for queue items.

If the queue is created successfully, the queue structure member uxMessagesWaiting is set to the initial count value. The memory layout of the initialized counting semaphore is shown in Figure 1-2.


Figure 1-2: Initialized counting semaphore object memory
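
To make the roles of uxMaxCount and uxInitialCount concrete, here is a small hypothetical sketch (not from the original article; the names and the pool size are invented) that uses a counting semaphore to guard a pool of three identical resources:

#include "FreeRTOS.h"
#include "semphr.h"

#define poolSIZE    3   /* hypothetical number of resources in the pool */

static SemaphoreHandle_t xPoolSem;

void vPoolInit( void )
{
    /* Maximum count 3, initial count 3: all resources start out available,
       so uxMessagesWaiting is initialized to 3. */
    xPoolSem = xSemaphoreCreateCounting( poolSIZE, poolSIZE );
    configASSERT( xPoolSem != NULL );
}

void vUseOneResource( void )
{
    /* Each successful take decrements uxMessagesWaiting; when it reaches 0,
       further callers block until a resource is given back. */
    if( xSemaphoreTake( xPoolSem, pdMS_TO_TICKS( 100 ) ) == pdPASS )
    {
        /* ... use one resource from the pool ... */
        xSemaphoreGive( xPoolSem );   /* increments uxMessagesWaiting again */
    }
}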

1.3 Creating a Mutex

Creating a mutex uses the universal queue creation function xQueueGenericCreate() indirectly. The mutex creation API is also a macro, defined as follows:

#define xSemaphoreCreateMutex()             \
        xQueueCreateMutex( queueQUEUE_TYPE_MUTEX, NULL )

The macro queueQUEUE_TYPE_MUTEX is passed to the universal queue creation function and indicates that the queue being created is a mutex. This macro was mentioned in the article "FreeRTOS Advanced 5---FreeRTOS Queue Analysis", in the description of the universal queue creation function parameters.

Let's take a look at how the function xQueueCreateMutex() is implemented:

#if ( configUSE_MUTEXES == 1 )
    QueueHandle_t xQueueCreateMutex( const uint8_t ucQueueType, StaticQueue_t *pxStaticQueue )
    {
    Queue_t *pxNewQueue;
    const UBaseType_t uxMutexLength = ( UBaseType_t ) 1, uxMutexSize = ( UBaseType_t ) 0;

        /* Prevent the compiler from generating a warning */
        ( void ) ucQueueType;

        /* Call the universal queue creation function */
        pxNewQueue = ( Queue_t * ) xQueueGenericCreate( uxMutexLength, uxMutexSize, NULL, pxStaticQueue, ucQueueType );

        /* Was a new queue structure allocated successfully? */
        if( pxNewQueue != NULL )
        {
            /* xQueueGenericCreate() sets all the queue structure members as
               for a general queue, but here we are creating a mutex, so some
               structure members must be re-assigned. */
            pxNewQueue->pxMutexHolder = NULL;
            pxNewQueue->uxQueueType = queueQUEUE_IS_MUTEX;

            /* Used for recursive mutexes */
            pxNewQueue->u.uxRecursiveCallCount = 0;

            /* Start the semaphore in the expected state */
            ( void ) xQueueGenericSend( pxNewQueue, NULL, ( TickType_t ) 0U, queueSEND_TO_BACK );
        }

        return pxNewQueue;
    }
#endif /* configUSE_MUTEXES */

This function is conditionally compiled: it is only compiled if the macro configUSE_MUTEXES is defined as 1.

The function first calls the universal queue creation function xQueueGenericCreate() to create a queue with one queue item and a queue item size of 0, meaning the created queue only has storage for the queue data structure and no storage for queue items.

If the queue is created successfully, the universal queue creation function initializes all the queue structure members as it would for a general queue. But since a mutex is being created here, some structure members must be re-assigned. In this code you may wonder whether the queue structure really has members named pxMutexHolder and uxQueueType. In fact, these two identifiers are just macros, defined specifically for mutexes as follows:

#define pxMutexHolder           pcTail
#define uxQueueType             pcHead
#define queueQUEUE_IS_MUTEX     NULL

When the queue structure is used as a mutex, the pcHead and pcTail pointers are no longer needed: the pcHead pointer is set to NULL (queueQUEUE_IS_MUTEX), indicating that the pcTail pointer actually points to the TCB of the task holding the mutex, if any.

The final call to xQueueGenericSend() releases the mutex, which means that as soon as the mutex is created it can be obtained directly with the take-semaphore API function. This makes sense: if a resource may only be accessed by one task at a time, that resource can be protected with a mutex. The resource already exists, so when the mutex is created it is released once, indicating that the resource is available. When a task wants to access the resource it first takes the mutex, then gives it back when it has finished with the resource. In other words, after a mutex is created it is first taken and then given, and the take and give happen in the same task. This is an important difference from a binary semaphore, which can be given in any task and then taken (or given again) in any other task. The mutex also differs from the binary semaphore in two more ways: a mutex has a priority inheritance mechanism while a binary semaphore does not, and a mutex must not be used in an interrupt service routine while a binary semaphore can be.
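
A minimal sketch of that take-then-give pattern, assuming a shared resource guarded by a mutex (the handle and resource names are invented for the example):

#include "FreeRTOS.h"
#include "semphr.h"

static SemaphoreHandle_t xResourceMutex;   /* hypothetical handle */
static uint32_t ulSharedCounter;           /* hypothetical shared resource */

void vResourceInit( void )
{
    /* The mutex is created in the "given" state, so the first take succeeds. */
    xResourceMutex = xSemaphoreCreateMutex();
    configASSERT( xResourceMutex != NULL );
}

void vAccessResource( void )
{
    /* Take and give in the same task, bracketing the critical region. */
    if( xSemaphoreTake( xResourceMutex, pdMS_TO_TICKS( 10 ) ) == pdPASS )
    {
        ulSharedCounter++;                  /* access the protected resource */
        xSemaphoreGive( xResourceMutex );
    }
    else
    {
        /* The mutex could not be obtained within 10 ms; handle as appropriate. */
    }
}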

The memory layout of the mutex after initialization is shown in Figure 1-3.


Figure 1-3: Initialized mutex object memory

1.4 Creating a Recursive Mutex

Creating a recursive mutex uses the universal queue creation function xQueueGenericCreate() indirectly. The recursive mutex creation API is also a macro, defined as follows:

#define xSemaphoreCreateRecursiveMutex()            \
        xQueueCreateMutex( queueQUEUE_TYPE_RECURSIVE_MUTEX, NULL )

The macro queueQUEUE_TYPE_RECURSIVE_MUTEX is passed to the universal queue creation function and indicates that the queue being created is a recursive mutex. This macro was also mentioned in the article "FreeRTOS Advanced 5---FreeRTOS Queue Analysis", in the description of the universal queue creation function parameters.

Creating a mutex and creating a recursive mutex call the same function, xQueueCreateMutex(). As for the parameter queueQUEUE_TYPE_RECURSIVE_MUTEX, as we learned in the queue article, the queue type is only used for visual debugging. Therefore, creating a mutex and creating a recursive mutex can be regarded as the same operation, and the initialized recursive mutex object memory is the same as that of a mutex, as shown in Figure 1-3.

2. Releasing a Semaphore

Binary semaphores, counting semaphores, and mutexes all use the same take and release API functions. Releasing (giving) a semaphore makes the semaphore available. There are two versions: one without interrupt protection and one with interrupt protection.

2.1 xSemaphoreGive()

This function releases a semaphore without interrupt protection. The released semaphore can be a binary semaphore, a counting semaphore, or a mutex. Note that a recursive mutex cannot be released with this API function. The release is actually a macro; the function that is really called is xQueueGenericSend(). The macro is defined as follows:

#define xSemaphoreGive( xSemaphore )                \
        xQueueGenericSend(                          \
                ( QueueHandle_t ) ( xSemaphore ),   \
                NULL,                               \
                semGIVE_BLOCK_TIME,                 \
                queueSEND_TO_BACK )

You can see that releasing a semaphore is actually an enqueue operation with a block time of 0 (the macro semGIVE_BLOCK_TIME is defined as 0).

For binary semaphores and counting semaphores, based on the previous chapter, releasing a semaphore can be simplified to two cases. First, if the queue is not full, the queue structure member uxMessagesWaiting is incremented by 1, any task blocked waiting for the semaphore is unblocked, and success (pdPASS) is returned. Second, if the queue is full, the error code errQUEUE_FULL is returned, indicating that the queue is full.
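
To illustrate the queue-full case, here is a hypothetical snippet (not from the original article): giving a binary semaphore that is already available corresponds to sending to a full queue of length 1, so the second give fails.

SemaphoreHandle_t xSem = xSemaphoreCreateBinary();

/* First give: uxMessagesWaiting goes from 0 to 1, so pdPASS is returned. */
BaseType_t xFirst  = xSemaphoreGive( xSem );

/* Second give: the length-1 queue is already full, so xQueueGenericSend()
   returns errQUEUE_FULL, which evaluates as pdFAIL. */
BaseType_t xSecond = xSemaphoreGive( xSem );

configASSERT( xFirst == pdPASS );
configASSERT( xSecond == pdFAIL );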

Releasing a mutex is more complicated, because a mutex has a priority inheritance mechanism.

What is priority inheritance? Let's take an example. A resource X can only be accessed by one task at a time, and tasks A and C both want to access it; task A has priority 1 and task C has priority 10, so task C has the higher priority. We protect resource X with a mutex, and task A is currently accessing resource X. While task A is accessing resource X an interrupt occurs, and the interrupt event causes task C to run. During its execution task C also wants to access resource X, but because resource X is held exclusively by task A, task C cannot obtain the mutex and enters the blocked state. At this point the low-priority task A inherits the priority of the high-priority task C: task A's priority is temporarily raised to 10. This mechanism minimizes the impact of the priority inversion that has already occurred.

So what is priority inversion? In the example above, task C has a higher priority than task A, but task C is blocked because it cannot obtain the mutex and must wait for the lower-priority task A to release the mutex before it can run. This is priority inversion.

This is why priority inheritance reduces the impact of priority inversion. Consider the example above again, but add a task B with priority 5, and assume all three tasks are ready. Without priority inheritance, the priority order of the three tasks is task C > task B > task A. When task C blocks because it cannot obtain the mutex, task B gets the CPU, and task A does not run until task B yields the CPU, voluntarily or otherwise; only when task A finally releases the mutex can task C run. With priority inheritance, task A inherits task C's priority as soon as task C blocks on the mutex, so the priority order becomes task C = task A > task B. When task C blocks, task A gets the CPU, and as soon as task A releases the mutex, task C runs. As you can see, task C waits a much shorter time.
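
The scenario can be sketched in code. This is only an illustrative setup (the task names, priorities, and delays are invented, and it assumes configMAX_PRIORITIES is greater than 10); it shows the pattern under which priority inheritance kicks in when TaskC blocks on the mutex held by TaskA:

#include "FreeRTOS.h"
#include "task.h"
#include "semphr.h"

static SemaphoreHandle_t xResMutex;   /* protects "resource X" */

static void vTaskA( void *pv )        /* priority 1 */
{
    for( ;; )
    {
        xSemaphoreTake( xResMutex, portMAX_DELAY );
        /* Stand-in for a long access to resource X (busy work, so TaskA
           really needs the CPU while holding the mutex).  If TaskC blocks
           on the mutex here, TaskA temporarily inherits priority 10, so
           TaskB cannot keep it off the CPU. */
        for( volatile uint32_t x = 0; x < 1000000UL; x++ ) { /* busy */ }
        xSemaphoreGive( xResMutex );  /* priority drops back to 1 here */
        vTaskDelay( pdMS_TO_TICKS( 50 ) );
    }
}

static void vTaskB( void *pv )        /* priority 5, pure CPU work, no resource X */
{
    for( ;; )
    {
        /* ... number crunching ... */
    }
}

static void vTaskC( void *pv )        /* priority 10 */
{
    for( ;; )
    {
        xSemaphoreTake( xResMutex, portMAX_DELAY );  /* may block on TaskA */
        /* ... short access to resource X ... */
        xSemaphoreGive( xResMutex );
        vTaskDelay( pdMS_TO_TICKS( 10 ) );
    }
}

void vStartInheritanceDemo( void )
{
    xResMutex = xSemaphoreCreateMutex();
    xTaskCreate( vTaskA, "A", configMINIMAL_STACK_SIZE, NULL, 1,  NULL );
    xTaskCreate( vTaskB, "B", configMINIMAL_STACK_SIZE, NULL, 5,  NULL );
    xTaskCreate( vTaskC, "C", configMINIMAL_STACK_SIZE, NULL, 10, NULL );
}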

With the basic theory above, we can understand why releasing a mutex is more complicated. It can still be reduced to two cases. First, if the queue is not full, in addition to incrementing the queue structure member uxMessagesWaiting, the function also checks whether the task holding the mutex has inherited a priority; if so, the task's priority is restored to its original value. Of course, restoring the original value is conditional: the inherited priority can only be reverted if the task is not still holding other mutexes. Then any task blocked waiting for the semaphore is unblocked, and finally success (pdPASS) is returned. Second, if the queue is full, the error code errQUEUE_FULL is returned, indicating that the queue is full.

2.2 xSemaphoreGiveFromISR()

This function releases a semaphore with interrupt protection. The released semaphore can be a binary semaphore or a counting semaphore. Unlike the normal version of the give API function, it cannot release a mutex, because a mutex must not be used in an interrupt: the priority inheritance mechanism of a mutex only works in tasks and is meaningless in interrupts. The interrupt-protected release is actually a macro, and the function that is really called is xQueueGiveFromISR(). The macro is defined as follows:

#define xSemaphoreGiveFromISR( xSemaphore, pxHigherPriorityTaskWoken )  \
        xQueueGiveFromISR(                          \
                ( QueueHandle_t ) ( xSemaphore ),   \
                ( pxHigherPriorityTaskWoken ) )

Let's look at the source of the function that is actually called (tidied up):

BaseType_t xQueueGiveFromISR( QueueHandle_t xQueue, BaseType_t * const pxHigherPriorityTaskWoken )
{
BaseType_t xReturn;
UBaseType_t uxSavedInterruptStatus;
Queue_t * const pxQueue = ( Queue_t * ) xQueue;

    uxSavedInterruptStatus = portSET_INTERRUPT_MASK_FROM_ISR();
    {
        /* When the queue is used to implement a semaphore no data is ever
           copied into the queue, but we must still check whether the queue
           is full. */
        if( pxQueue->uxMessagesWaiting < pxQueue->uxLength )
        {
            /* A task can hold multiple mutexes, but it can have only one
               inherited priority, and a mutex held by a task must not be
               released from an interrupt service routine. So there is no
               need to decide whether to restore the task's original
               priority here; simply update the queue item counter. */
            ++( pxQueue->uxMessagesWaiting );

            /* If the queue is locked, the queue's event lists cannot be changed. */
            if( pxQueue->xTxLock == queueUNLOCKED )
            {
                if( listLIST_IS_EMPTY( &( pxQueue->xTasksWaitingToReceive ) ) == pdFALSE )
                {
                    if( xTaskRemoveFromEventList( &( pxQueue->xTasksWaitingToReceive ) ) != pdFALSE )
                    {
                        /* The unblocked task has a higher priority, so record
                           that a context switch is required. */
                        if( pxHigherPriorityTaskWoken != NULL )
                        {
                            *pxHigherPriorityTaskWoken = pdTRUE;
                        }
                    }
                }
            }
            else
            {
                /* Increment the lock count so the task that unlocks the queue
                   knows that data was posted while it was locked. */
                ++( pxQueue->xTxLock );
            }

            xReturn = pdPASS;
        }
        else
        {
            xReturn = errQUEUE_FULL;
        }
    }
    portCLEAR_INTERRUPT_MASK_FROM_ISR( uxSavedInterruptStatus );

    return xReturn;
}

Because no mutex is involved and no blocking can occur, the function xQueueGiveFromISR() is remarkably simple: if the queue is full, the error code errQUEUE_FULL is returned directly; if the queue is not full, the queue structure member uxMessagesWaiting is incremented by 1 and then, depending on whether the queue is locked, a blocked task (if there is one) is unblocked. If you find this hard to follow, read FreeRTOS Advanced 5---FreeRTOS Queue Analysis first.
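
The typical usage pattern is to give the semaphore in an ISR and let a task do the heavy processing. Here is a hedged sketch (the ISR name, handle, and handler task are invented; the give/yield sequence is the standard FreeRTOS idiom):

static SemaphoreHandle_t xEventSem;    /* given by the ISR, taken by a task */

void vExampleInterruptHandler( void )  /* hypothetical ISR */
{
BaseType_t xHigherPriorityTaskWoken = pdFALSE;

    /* Make the semaphore available; if this unblocks a task with a higher
       priority than the interrupted task, xHigherPriorityTaskWoken is set
       to pdTRUE. */
    xSemaphoreGiveFromISR( xEventSem, &xHigherPriorityTaskWoken );

    /* Request a context switch on exit from the ISR if one is needed. */
    portYIELD_FROM_ISR( xHigherPriorityTaskWoken );
}

static void vHandlerTask( void *pvParameters )
{
    for( ;; )
    {
        if( xSemaphoreTake( xEventSem, portMAX_DELAY ) == pdPASS )
        {
            /* ... process the event signalled by the interrupt ... */
        }
    }
}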

3. Taking a Semaphore

Binary semaphores, counting semaphores, and mutexes all use the same take and release API functions. Taking a semaphore consumes the semaphore. If the take fails, the task may block; the block time is specified by the function parameter xBlockTime, and if it is 0 the function returns immediately without blocking. Taking a semaphore also comes in two versions: without interrupt protection and with interrupt protection.

3.1 xSemaphoreTake()

This function takes a semaphore without interrupt protection. The semaphore taken can be a binary semaphore, a counting semaphore, or a mutex. Note that a recursive mutex cannot be taken with this API function. Taking a semaphore is actually a macro; the function that is really called is xQueueGenericReceive(). The macro is defined as follows:

#define xSemaphoreTake( xSemaphore, xBlockTime )    \
        xQueueGenericReceive(                       \
                ( QueueHandle_t ) ( xSemaphore ),   \
                NULL,                               \
                ( xBlockTime ),                     \
                pdFALSE )

As you can see from the macro definition above, taking a semaphore is actually a dequeue operation.

For binary semaphores and counting semaphores, this can be simplified to three cases. First, if the queue is not empty, the queue structure member uxMessagesWaiting is decremented by 1, any task blocked waiting on the queue is unblocked, and success (pdPASS) is returned. Second, if the queue is empty and the block time is 0, the error code errQUEUE_EMPTY is returned directly, indicating that the queue is empty. Third, if the queue is empty and the block time is not 0, the task blocks waiting for the semaphore and is appended to the delayed task list.
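
A small hypothetical example of the third case, reusing the xEventSem handle from the earlier sketch: the caller specifies a finite block time and must handle the timeout.

/* Wait up to 500 ms for the semaphore to become available. */
if( xSemaphoreTake( xEventSem, pdMS_TO_TICKS( 500 ) ) == pdPASS )
{
    /* The semaphore was available, or was given within 500 ms;
       uxMessagesWaiting has been decremented. */
}
else
{
    /* The semaphore was never given within the block time; the call returns
       a failure code and the task resumes without the semaphore. */
}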

For a mutex it can also be simplified to three cases, but the process is a little more involved. First, if the queue is not empty, the queue structure member uxMessagesWaiting is decremented by 1, the current task's TCB member uxMutexesHeld is incremented by 1 (it records how many mutexes the task holds), and the queue structure member pointer pxMutexHolder is pointed at the current task's TCB; any task blocked waiting on the queue is unblocked, and success (pdPASS) is returned. Second, if the queue is empty and the block time is 0, the error code errQUEUE_EMPTY is returned directly, indicating that the queue is empty. Third, if the queue is empty and the block time is not 0, the task blocks waiting for the semaphore; before the task is appended to the delayed task list, the priority of the current task is compared with that of the task holding the mutex, and if the current task has the higher priority, the mutex-holding task inherits the current task's priority.

3.2 xSemaphoreTakeFromISR()

This function takes a semaphore with interrupt protection. The semaphore taken can be a binary semaphore or a counting semaphore. Unlike the normal version of the take API function, it cannot take a mutex, because a mutex must not be used in an interrupt: the priority inheritance mechanism of a mutex only works in tasks and is meaningless in interrupts. The interrupt-protected take is actually a macro, and the function that is really called is xQueueReceiveFromISR(). The macro is defined as follows:

#define xSemaphoreTakeFromISR( xSemaphore, pxHigherPriorityTaskWoken )  \
        xQueueReceiveFromISR(                       \
                ( QueueHandle_t ) ( xSemaphore ),   \
                NULL,                               \
                ( pxHigherPriorityTaskWoken ) )

Similarly, because no mutex is involved and no blocking can occur, the function xQueueReceiveFromISR() is also exceptionally simple: if the queue is empty, a failure code (pdFAIL) is returned directly; if the queue is not empty, the queue structure member uxMessagesWaiting is decremented by 1 and then, depending on whether the queue is locked, a blocked task (if there is one) is unblocked.

4. Releasing a Recursive Mutex

The function xSemaphoreGiveRecursive() is used to release a recursive mutex. A task that has already obtained a recursive mutex can obtain it again, repeatedly. The recursive mutex must be given back with xSemaphoreGiveRecursive() as many times as it was successfully taken with xSemaphoreTakeRecursive(); until then, the recursive mutex remains unavailable to other tasks. For example, if a task successfully takes a recursive mutex 5 times, the mutex is not available to any other task until it has been given back 5 times.

Like the other semaphore APIs, xSemaphoreGiveRecursive() is a macro that ultimately uses the existing queue mechanism; the function actually executed is xQueueGiveMutexRecursive(), and the macro is defined as follows:

#define xSemaphoreGiveRecursive( xMutex )           \
        xQueueGiveMutexRecursive( ( xMutex ) )

Let's focus on the implementation of xQueueGiveMutexRecursive(). After tidying up (removing the trace/debug statements), the source is as follows:

#if ( configUSE_RECURSIVE_MUTEXES == 1 )
    BaseType_t xQueueGiveMutexRecursive( QueueHandle_t xMutex )
    {
    BaseType_t xReturn;
    Queue_t * const pxMutex = ( Queue_t * ) xMutex;

        /* Mutexes and recursive mutexes are taken and given in the same task.
           When a mutex or recursive mutex is taken, the queue structure member
           pointer pxMutexHolder points to the TCB of the task that took it.
           Therefore, when releasing a recursive mutex, check whether the TCB
           pointed to by this pointer is the same as the current task's TCB;
           if it is not, the recursive mutex cannot be released!  Note: this
           check is not made when releasing a plain mutex; the FreeRTOS author
           puts it in an assertion instead (configASSERT( pxTCB == pxCurrentTCB );). */
        if( pxMutex->pxMutexHolder == ( void * ) xTaskGetCurrentTaskHandle() )
        {
            /* Each time a task takes the recursive mutex, the queue structure
               member u.uxRecursiveCallCount is incremented by 1 (a plain mutex
               does not use this variable).  It holds the recursion depth, so
               when releasing the recursive mutex it is decremented by 1. */
            ( pxMutex->u.uxRecursiveCallCount )--;

            /* Has the recursion counter reached 0? */
            if( pxMutex->u.uxRecursiveCallCount == ( UBaseType_t ) 0 )
            {
                /* Call the enqueue function to release the mutex; the block
                   time (macro queueMUTEX_GIVE_BLOCK_TIME) is 0. */
                ( void ) xQueueGenericSend( pxMutex, NULL, queueMUTEX_GIVE_BLOCK_TIME, queueSEND_TO_BACK );
            }

            xReturn = pdPASS;
        }
        else
        {
            /* The current task does not hold this mutex; return an error code. */
            xReturn = pdFAIL;
        }

        return xReturn;
    }
#endif /* configUSE_RECURSIVE_MUTEXES */

This function is conditionally compiled: it is only compiled if the macro configUSE_RECURSIVE_MUTEXES is defined as 1.

The biggest difference between a mutex and a recursive mutex is that a recursive mutex can be taken repeatedly by the task that already holds it, and this recursion is implemented through the queue structure member u.uxRecursiveCallCount. This variable stores the recursion depth: each time the recursive mutex is taken it is incremented by 1, and it is decremented by 1 each time the recursive mutex is released. Only when this variable drops to 0, that is, when the number of releases equals the number of takes, is the enqueue function called to actually release the mutex and make it available again.

5. Taking a Recursive Mutex

The function xSemaphoreTakeRecursive() is used to take a recursive mutex. Like the other semaphore APIs, xSemaphoreTakeRecursive() is a macro that ultimately uses the existing queue mechanism; the function actually executed is xQueueTakeMutexRecursive(), and the macro is defined as follows:

#define xSemaphoreTakeRecursive( xMutex, xBlockTime )       \
        xQueueTakeMutexRecursive( ( xMutex ), ( xBlockTime ) )

Taking a recursive mutex has a block time parameter: if the mutex is being used by another task, the caller can block for the specified time. Let's focus on the implementation of xQueueTakeMutexRecursive(). After tidying up (removing the trace/debug statements), the source is as follows:

#if ( configUSE_RECURSIVE_MUTEXES == 1 )
    BaseType_t xQueueTakeMutexRecursive( QueueHandle_t xMutex, TickType_t xTicksToWait )
    {
    BaseType_t xReturn;
    Queue_t * const pxMutex = ( Queue_t * ) xMutex;

        /* Mutexes and recursive mutexes must be taken and given in the same
           task, and a recursive mutex can be taken multiple times within one
           task.  The first time the recursive mutex is taken, the queue
           structure member pointer pxMutexHolder is pointed at the TCB of the
           task that took it.  When taking the recursive mutex here, if that
           pointer already points to the current task's TCB, simply increment
           the recursion counter u.uxRecursiveCallCount; there is no need to
           operate on the queue again. */
        if( pxMutex->pxMutexHolder == ( void * ) xTaskGetCurrentTaskHandle() )
        {
            ( pxMutex->u.uxRecursiveCallCount )++;
            xReturn = pdPASS;
        }
        else
        {
            /* Call the dequeue function */
            xReturn = xQueueGenericReceive( pxMutex, NULL, xTicksToWait, pdFALSE );

            /* After the recursive mutex has been taken successfully,
               increment the recursion counter by 1. */
            if( xReturn != pdFAIL )
            {
                ( pxMutex->u.uxRecursiveCallCount )++;
            }
        }

        return xReturn;
    }
#endif /* configUSE_RECURSIVE_MUTEXES */

This function is conditionally compiled: it is only compiled if the macro configUSE_RECURSIVE_MUTEXES is defined as 1.

The program logic is relatively simple: the first time the recursive mutex is taken, the dequeue function is used and, on success, the recursion counter is incremented by 1; for the second and subsequent takes, only the recursion counter needs to be incremented by 1.
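
To round this off, here is a hypothetical sketch (names invented, not from the original article) showing the balanced take/give discipline: each successful xSemaphoreTakeRecursive() must be matched by one xSemaphoreGiveRecursive() before the mutex becomes available to other tasks.

static SemaphoreHandle_t xRecMutex;

void vRecMutexInit( void )
{
    xRecMutex = xSemaphoreCreateRecursiveMutex();
    configASSERT( xRecMutex != NULL );
}

static void prvInnerHelper( void )
{
    /* Called while the mutex is already held by this task: the take just
       increments u.uxRecursiveCallCount and returns immediately. */
    xSemaphoreTakeRecursive( xRecMutex, portMAX_DELAY );
    /* ... work on the shared resource ... */
    xSemaphoreGiveRecursive( xRecMutex );     /* counter: 2 -> 1 */
}

void vOuterFunction( void )
{
    if( xSemaphoreTakeRecursive( xRecMutex, pdMS_TO_TICKS( 100 ) ) == pdPASS )
    {
        prvInnerHelper();                     /* nested take/give */
        xSemaphoreGiveRecursive( xRecMutex ); /* counter: 1 -> 0, mutex released */
    }
}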

