Linux Kernel Preemption Mechanism: Introduction


This article was first published at http://oliveryang.net. Please include a link to the original text or to the author's website when reproducing it.

This article mainly discusses the implementation of preemption in the Linux kernel scheduler. The general operating system, x86 processor, and hardware concepts involved may also apply to other operating systems.

1. Background knowledge

To understand preemption, you first need a thorough review of the operating system concept of the context switch. With that in place, the differences and connections between preemption and context switching become clear.

1.1 Context Switch

A context switch is the act of saving the execution state of one context and restoring that of another, so that execution can be safely interrupted and later correctly resumed. In a general-purpose operating system, context switches are usually caused in the following three ways,

    • Task scheduling (job scheduling)

      Task scheduling is typically done by the scheduler code in kernel space.
      It usually requires switching the code currently executing on the CPU, the user or kernel stack, and the address space over to those of the next task to run.

    • Interrupts and exceptions

      Interrupts and exceptions are generated by the hardware but are responded to and handled by software.
      This process involves switching from user-mode or kernel-mode code to the interrupt-handling code. It may also involve switching from the user process stack or the kernel stack to a dedicated interrupt stack. On processors that support protected mode it may further involve a privilege-level transition; on x86 this is completed via the Interrupt Gate.

    • System calls

      A system call is initiated by user-mode code and causes the user process to trap into the kernel to invoke one of the system call services the kernel defines. This involves switching, within the same task context, from the task's user-mode code and user stack to the kernel's system call code and the task's kernel stack.

1.2 preemption

Preemption means that the operating system allows a task meeting certain conditions (for example, priority or fairness) to interrupt the task currently running on the CPU and be scheduled in its place. The interruption does not require the cooperation of the currently running task, and the interrupted task can be resumed later.

Multitasking operating systems can be divided into cooperative multitasking and preemptive multitasking systems. In essence, preemption is the ability to let a high-priority task immediately interrupt a low-priority task and run. Operating systems that require low scheduling latency or real-time behavior need to support full preemption.

Of the three kinds of context switch, system calls always occur within the context of the same task; only interrupts/exceptions and task scheduling involve one context being interrupted by another. Preemption ultimately relies on task scheduling to interrupt a task. However, task scheduling is closely tied to all three kinds of context switch, so understanding preemption requires a deep understanding of all three mechanisms.

2. Task Scheduling

Task scheduling is driven by kernel code invoking the scheduler core function schedule. It mainly accomplishes the following work,

    • Complete the context switch required for task scheduling
    • Implement the scheduling algorithm: select the next task to run, maintain task run states and the run queues, etc.

This article focuses on the context switch performed by task scheduling and on what causes task scheduling.

2.1 Task Scheduling Context Switches

One of the important jobs of the kernel schedule function is the task context switch, which mainly does two things,

    • Context switch of the task address space.

      On Linux this is completed by the switch_mm function.
      On x86 CPUs it is implemented by loading the page directory address of the next task to run, mm->pgd, into the CR3 register.

    • Context switch of the task's CPU running state.

      This is mainly the switching of CPU registers, including the general-purpose registers, floating-point registers, and system registers.

      In the Linux x86 64-bit implementation, switching the instruction pointer CS:RIP, the stack pointer SS:RSP, and the other general-purpose registers is done by switch_to. Linux describes a task with the data structure struct task_struct, whose thread member (struct thread_struct) is used to save the task's CPU state across context switches.

      Because floating-point register context switches are expensive, and in many scenarios the scheduled task may never have used the FPU (floating point unit) at all, Linux and many other OSes adopted a lazy FPU context switch design. But after Intel introduced the XSAVE feature to speed up FPU save and restore, the Linux kernel introduced the non-lazy FPU context switch in 3.7: the non-lazy mode is used when the kernel detects that the CPU supports the XSAVE instruction set. This is also the approach suggested in the Intel IA-32 Architectures Software Developer's Manual Volume 3, section 13.4, Designing OS Facilities for Saving x87 FPU, SSE and Extended States on Task or Context Switches.

In general, task scheduling, i.e. the task context switch, happens in one of the following two ways,

    • Voluntary context switch
    • Involuntary (forced) context switch

2.2 Voluntary Context Switching

A voluntary context switch is one initiated by the task itself, by directly or indirectly calling the schedule function. Common occasions that trigger a voluntary context switch are,

    1. The task blocks waiting for an IO operation to complete or for some other resource.

      Before explicitly calling schedule, the task sets its run state to TASK_UNINTERRUPTIBLE. This ensures that while blocked and sleeping the task cannot be interrupted, and thereby woken, by the arrival of a signal. The Linux kernel reaches this state through its various synchronization primitives, such as mutex, semaphore, wait_queue, and r/w semaphore, and through various other kernel functions that can block.

    2. The task waits for a resource or a specific event and sleeps voluntarily.

      Before explicitly calling schedule, the task sets its run state to TASK_INTERRUPTIBLE. This ensures that even if the awaited condition is not yet satisfied, a signal received by the task can wake it and return it to the running state. Again, the kernel's synchronization primitives such as mutex, semaphore, and wait_queue, and various other kernel functions that sleep, reach this state.

    3. Special purposes, such as debugging and tracing.

      The task uses set_current_state to set itself to a non-TASK_RUNNING state before explicitly calling the schedule function; for example, it sets the TASK_STOPPED state and then calls schedule.

2.3 Involuntary Context Switching

An involuntary (forced) context switch is one that is not caused by the task itself calling the schedule function. As the definition suggests, the main causes of forced context switches are related to preemption.

2.3.1 Triggering Preemption

2.3.1.1 Tick preemption

In the periodic clock interrupt, the kernel scheduler checks whether the currently running task has run longer than the limit imposed by the specific scheduling algorithm, and thereby decides whether to deprive it of the CPU. Once that decision is taken, a flag requesting rescheduling, TIF_NEED_RESCHED, is set on the task currently running on the CPU.

It is important to note that after the TIF_NEED_RESCHED flag is set, the schedule function is not called immediately to perform a context switch. The real context switch is carried out later by the user preemption or kernel preemption code.

The user preemption and kernel preemption code paths contain, at many places, logic that checks the current task's TIF_NEED_RESCHED flag and explicitly invokes schedule. The next time one of these places is reached, schedule is called and the task switch happens; only then is the preemption really complete. When the context switch occurs, the next task to run is selected from the run queue by the specific scheduler algorithm.

For example, if the clock interrupt happens to interrupt a process running in user space, the tick preemption code sets the TIF_NEED_RESCHED flag of the interrupted user process. When the clock interrupt handling completes and control heads back to user space, the user preemption code on the interrupt-return path checks TIF_NEED_RESCHED and, if it is set, calls schedule to complete the context switch.

2.3.1.2 Wakeup preemption

When the kernel needs to wake up another process, the kernel function try_to_wake_up selects a CPU run queue for the awakened process, inserts the process into that run queue, and sets it to the TASK_RUNNING state. Both the selection of the CPU run queue and the insertion into it are implemented by invoking callbacks of the specific scheduling algorithm.

After the task is inserted into the run queue, the scheduler immediately hands the newly awakened task and the task currently executing on that CPU to the specific scheduling algorithm, which compares them and decides whether to deprive the current task of the CPU. As with tick preemption, once the decision is taken, a TIF_NEED_RESCHED flag is set on the current task; the actual schedule call is not made at this point. What is special about wakeup preemption is that the task performing the wakeup may insert the awakened task into the run queue of the local CPU, but may equally insert it into the run queue of a remote CPU. Therefore, the behavior of the try_to_wake_up call depends on the relationship between the CPU owning the run queue that received the awakened task and the CPU on which the wakeup is running, in the following two cases,

    • Shared cache

      The target CPU of the awakened task shares a cache with the CPU currently running the wakeup.

      The context switch does not actually occur until, somewhere on the return path of the wakeup, the current task reaches any of the user preemption or kernel preemption code that checks the TIF_NEED_RESCHED flag and invokes schedule. In fact, if kernel preemption is enabled, the spin_unlock at the end of the wakeup operation, or any subsequent interrupt-exit path, gives kernel preemption an opportunity to call schedule.

    • No shared cache

      The target CPU of the awakened task does not share a cache with the CPU currently running the wakeup.

      In this case, after setting the TIF_NEED_RESCHED flag, the wakeup operation immediately sends an IPI (inter-processor interrupt) to the CPU owning the run queue of the awakened task, and only then returns. On the Intel x86 architecture, the RESCHEDULE_VECTOR of the remote CPU is initialized to respond to this interrupt, and the interrupt handler scheduler_ipi finally executes on the remote CPU. In early Linux kernels, scheduler_ipi was actually an empty function, because every exit path where an interrupt returns to user space or kernel space already contained the user preemption and kernel preemption code, so schedule was bound to be called. Later, the Linux kernel began using scheduler_ipi to let the remote CPU perform the main part of the remote wakeup itself, thereby reducing contention on the run queue lock, so scheduler_ipi now contains real code.

When a context switch occurs due to wakeup preemption, the next task to run is selected from the run queue by the specific scheduler algorithm. If wakeup preemption is successfully triggered, some scheduling algorithms give the freshly awakened task an opportunity to be scheduled preferentially.

2.3.2 Executing Preemption

2.3.2.1 User preemption

User preemption occurs in the following two typical situations,

    • Before a system call, interrupt, or exception returns to user space, the TIF_NEED_RESCHED flag of the task currently running on the CPU is checked; if it is set, the schedule function is called directly.

    • schedule is called directly or indirectly while the task is in the TASK_RUNNING state.

      An example of an indirect invocation: kernel code calls cond_resched(), yield(), or similar kernel APIs inside a loop body, giving other tasks a chance to be scheduled and preventing the loop from monopolizing the CPU.

      Kernel code that runs long loops can cause soft lockups or very long scheduling delays, especially when kernel preemption is disabled. Calling the cond_resched() kernel API inside the loop body conditionally yields the CPU. It is conditional because cond_resched checks the TIF_NEED_RESCHED flag to see whether a preemption request is pending. The yield kernel API triggers a task switch unconditionally, without checking TIF_NEED_RESCHED; however, if there are no other runnable tasks on the CPU run queue, no real task switch occurs.

2.3.2.2 Kernel preemption

Early Linux kernels supported only user preemption; kernel preemption support was introduced in the 2.6 kernel.

Kernel preemption occurs in the following cases,

    • An interrupt or exception finishes processing and returns to kernel space.

      Taking x86 as an example, in the common portion of the interrupt and exception handling code (that is, after the specific handler has exited), Linux determines whether it is returning to kernel space and, if so, invokes preempt_schedule_irq, which checks the TIF_NEED_RESCHED flag to decide whether to trigger a task switch.

    • A section in which kernel preemption was disabled ends.

      As a fully preemptible kernel, Linux allows preemption to be disabled, via preempt_disable, only where the current kernel context requires it, and kernel code should call preempt_enable as soon as possible afterwards to avoid introducing high scheduling latency. To handle rescheduling requests that became pending while preemption was disabled, preempt_enable invokes preempt_schedule, which checks the TIF_NEED_RESCHED flag so that the pending task switch is triggered as soon as possible.

      Many kernel contexts pair preempt_disable with preempt_enable; typical and well-known examples are the various kernel lock implementations, such as spin lock, mutex, semaphore, r/w semaphore, and RCU.

Unlike user preemption, when either of the two kernel preemption cases occurs, the task's run state may already have been set to a sleep state other than TASK_RUNNING, such as TASK_UNINTERRUPTIBLE. In that case the subsequent __schedule code has special handling: it checks PREEMPT_ACTIVE and skips the dequeue operation for the preempted task, ensuring the kernel preemption is processed as soon as possible. User preemption never finds the current task in a state other than TASK_RUNNING, because user preemption always happens at specific locations where the current task is TASK_RUNNING.

3. Interrupts and exceptions

An interrupt is usually a signal, triggered by hardware or by a special software instruction, that the processor needs to respond to immediately. In the broad sense, exceptions are classified as interrupts; in the narrow sense, the biggest difference between them is that interrupts occur and are handled asynchronously, while exceptions occur and are handled synchronously.

A system call, which uses a special software instruction to generate a synchronous trap on the processor, can also be classified as an exception. However, because its design and use differ markedly from exception handling, it is introduced separately in a later section. This section mainly describes interrupts and exceptions.

3.1 Context switches for interrupts and exceptions

In many English technical documents and discussions, this kind of interruption is said to "pin" the current task: the task is not switched out but is pinned in place and cannot move. Unlike a scheduler context switch, this interruption involves no address-space switch. It is also tied to the interrupt mechanisms of the processor and peripheral hardware. Depending on the operating system implementation, interrupt or exception handlers may have their own independent kernel stack, as in current Linux on 32-bit and 64-bit x86, or may run on the current kernel stack of the task, as in early Linux on 32-bit x86.

Taking Intel's x64 processors as an example, when a peripheral raises an interrupt, the CPU interrupts the currently executing task through an Interrupt Gate. The interrupt gate unconditionally saves part of the execution context of the current task, regardless of whether it was running in user mode or kernel mode: among the saved registers are the instruction pointer of the next instruction to execute, CS:RIP, and the current task's stack pointer, SS:RSP. The CS:RIP of the new interrupt context is set, by the IDT (Interrupt Descriptor Table) initialization code that runs at system startup, to the common IRQ routine shared by all peripheral interrupts. In Linux 3.19, inside the common interrupt entry irq_entries_start in entry_64.S, the SAVE_ARGS_IRQ macro assigns the kernel IRQ stack (interrupt stack), stored in the per-CPU variable irq_stack_ptr, to SS:RSP. In this way the CPU and the interrupt entry code together perform a complete interrupt context switch. When interrupt handling completes, the return path uses the previously saved context to restore the interrupted task.

The x64 exception mechanism is similar to the interrupt mechanism: it also uses the IDT to save the interrupted task's CS:RIP and SS:RSP, but the common entry function in the IDT table is a different one. Moreover, in the assembly implementation of that entry, the switch of SS:RSP to a kernel stack may use a value pre-initialized by the kernel in the hardware IST (Interrupt Stack Table), which differs from ordinary peripheral interrupt handling.

In addition, the x64 and 32-bit x86 IDT mechanisms differ significantly in how the current task's SS:RSP is handled when an interrupt gate is entered from kernel mode. Furthermore, when the IDT is initialized, a nonzero IST selector in an IDT descriptor means the stack switch is performed by the hardware using the IST; if the IST selector of the descriptor is zero, the switch to the kernel IRQ stack is performed by kernel code using the per-CPU interrupt stack variable.

Because of subject and space limitations, the context switching mechanism for interrupts is not described in further detail here. Understanding it on the x86 platform requires familiarity with the x86 hardware specifications: the Intel IA-32 Architectures Software Developer's Manual Volume 3, section 6.14, Exception and Interrupt Handling in 64-bit Mode, gives a detailed description of the hardware, in particular the differences between 32-bit and 64-bit and between interrupts and exceptions.

3.2 Interrupt-induced task scheduling

In the Linux kernel, when interrupts and exceptions return, they may trigger the following kinds of task scheduling, depending on the context that was interrupted.

    • User preemption

      An interrupt or exception interrupts a task running in user mode; on return, the TIF_NEED_RESCHED flag is checked to decide whether to invoke schedule.

    • Kernel preemption

      An interrupt or exception interrupts a task running in kernel mode; on return, preempt_schedule_irq is called, whose code checks the TIF_NEED_RESCHED flag and decides whether to invoke schedule.

In the Linux kernel, the user and kernel preemption code is implemented in the common processing layer shared by all interrupt and exception handlers, so it executes after the specific handler has returned. Although any type of interrupt can cause a task switch, and task scheduling is closely related to preemption, the following two interrupts are directly associated with the scheduler and form part of the kernel scheduler design.

    • Timer Interrupt (clock interrupt)
    • Scheduler IPI (scheduler inter-processor interrupt)

3.2.1 Clock Interrupt

Timer Interrupt (clock interrupt) has a special meaning for operating system scheduling.
As mentioned earlier, the periodic clock interrupt handler function triggers the Tick preemption request. Subsequent interrupts may execute logic to User preemption and Kernel preemption, depending on the context returned, before returning. The interruption here can be any interruption of the operating system, such as a general peripheral. Because the operating system generally in the specific interrupt processing function before and after the exit has the public interrupt processing logic, so preemption is generally implemented here, and the specific interrupt processing function is not preemption knowledge. And we know that the peripheral interrupts are generally random, so if there is no clock interruption, then the implementation of the preemption is probably difficult to have time to guarantee. As a result, periodic clock interrupts play an important role here. Of course, in addition to the preemption, the clock interrupt also takes care of many important functions of the system, such as scheduling queue equalization, Process time update, software timer execution and so on. The following is a brief discussion of the relationship to the clock interrupt from the perspective of preemption,

    • Clock interrupt source

      The kernel's clock interrupt is built on whatever hardware device the platform provides that can trigger interrupts periodically, so the implementation differs considerably across hardware platforms. Early Linux on x86 supported the PIT and the HPET as clock interrupt sources. Linux now defaults to using the x86 processor's Local APIC Timer as the clock interrupt source. The biggest difference is that the APIC timer interrupt is per-CPU, whereas the PIT and HPET are system-wide; a per-CPU timer interrupt is better suited to implementing preemption on SMP systems.

    • Clock interrupt frequency

      Early Linux and some Unix operating system kernels set the clock interrupt frequency to 100Hz, meaning a clock interrupt period of 10ms. Newer Linux kernels raise the default frequency on x86 to 1000Hz, shortening the period to 1ms. One clock interrupt period is commonly called a tick. Unix/Linux typically uses a global variable to count the number of clock interrupts since the system started; in the Linux kernel this variable is called jiffies, so a tick in the Linux kernel is also called a jiffy.

      Shortening a tick from 10ms to 1ms theoretically increases the overhead of servicing clock interrupts, but it also brings faster, lower-latency preemption. Thanks to improved hardware performance, the negative impact of this change is limited, while the benefits are obvious.

3.2.2 Scheduler Inter-Processor Interrupt

The Scheduler IPI (scheduler inter-processor interrupt) was originally introduced to handle, on SMP systems, the case where wakeup code triggers wakeup preemption and needs the assistance of a remote CPU to generate user preemption or kernel preemption. The specific flow is as follows,

    1. The wakeup code selects a CPU for the awakened task through the specific scheduler algorithm.
    2. When the selected CPU is remote, the sleeping process is woken and placed into the run queue belonging to the remote CPU.
    3. The wakeup code calls into the specific scheduling algorithm to check whether wakeup preemption should be triggered and, if so, sends the Scheduler IPI before returning.
    4. The code executing on the remote CPU is interrupted, and the Scheduler IPI handler runs.
    5. The Scheduler IPI handler itself performs no actual preemption processing.
    6. When the Scheduler IPI handler exits, it enters the common portion of the interrupt-return code, which triggers user preemption or kernel preemption depending on whether the interrupt returns to a user context or a kernel context.

It should be noted that on newer x86 platforms and Linux kernels, both the timer interrupt and the Scheduler IPI are handled through the CPU's Local APIC. In the Linux kernel, their common entry and return paths are handled by apicinterrupt in entry_64.S; apicinterrupt and the common entry/return code of ordinary peripheral interrupts share the user preemption and kernel preemption handling in ret_from_intr.

4. System calls

A system call is a set of relatively stable programming interfaces and service routines through which applications request services from the operating system kernel. This section focuses on the context switch performed by system calls and on the task scheduling they can cause.

4.1 Context switches for system calls

The biggest difference between system calls and interrupts/exceptions is that a system call occurs synchronously: the application actively triggers it through the programming interface, so the currently executing task is not interrupted. The system call itself is thus part of the executing task; it merely runs the task's code in kernel space. Nevertheless, when a task invokes a system call and deliberately sinks into the kernel to execute system call code, a context switch is bound to occur, and in general this context switch is assisted by the hardware.

Taking Intel x86 processors as an example, the user-space-to-kernel context switch performed by a system call is implemented by a hardware mechanism called the trap gate. The Linux operating system supports the following two ways of triggering the Intel x86 trap gate,

    • The int 0x80 instruction, used on older processors that do not support fast system call instructions.
    • The sysenter fast system call instruction, supported by newer processors.

A trap gate is similar to an interrupt gate, but invoking it does not disable interrupts the way an interrupt gate does. When user-mode code issues one of the above instructions, typically through glibc, to trigger a system call, the context switch proceeds as follows:

    1. The current task's CS:RIP is pointed at the common system call entry function that was installed in the trap gate for the system call vector at initialization.
    2. The CPU trap gate automatically saves part of the user task's context, for example the user-space code CS:RIP and the user stack SS:RSP; refer to the hardware manual for the exact layout.
    3. The common system call entry code saves the remaining register context and finally points SS:RSP at the task's kernel stack (located together with struct thread_info), completing the switch from the task's user stack to its kernel stack.
    4. After the common entry code performs the necessary checks, it indexes the global system call table and enters the service routine of the specific system call.

4.2 Scheduling of Tasks Caused by System Calls

Similar to interrupt handling, when a system call service routine exits, the common system call code returns to user space and may trigger user preemption: it checks the TIF_NEED_RESCHED flag to decide whether to invoke schedule. System calls do not trigger kernel preemption, because a system call return always goes back to user space; this is a major difference from interrupts and exceptions.

5. Scheduling Trigger Timing Summary

The comment on schedule in the Linux kernel source is very concise, so rather than restate it, here is the source comment directly,

/*
 * __schedule() is the main scheduler function.
 *
 * The main means of driving the scheduler and thus entering this function are:
 *
 *  1. Explicit blocking: mutex, semaphore, waitqueue, etc.
 *
 *  2. TIF_NEED_RESCHED flag is checked on interrupt and userspace return
 *     paths. For example, see arch/x86/entry_64.S.
 *
 *     To drive preemption between tasks, the scheduler sets the flag in timer
 *     interrupt handler scheduler_tick().
 *
 *  3. Wakeups don't really cause entry into schedule(). They add a
 *     task to the run-queue and that's it.
 *
 *     Now, if the new task added to the run-queue preempts the current
 *     task, then the wakeup sets TIF_NEED_RESCHED and schedule() gets
 *     called on the nearest possible occasion:
 *
 *      - If the kernel is preemptible (CONFIG_PREEMPT=y):
 *
 *        - in syscall or exception context, at the next outmost
 *          preempt_enable(). (this might be as soon as the wake_up()'s
 *          spin_unlock()!)
 *
 *        - in IRQ context, return from interrupt-handler to
 *          preemptible context
 *
 *      - If the kernel is not preemptible (CONFIG_PREEMPT is not set)
 *        then at the next:
 *
 *         - cond_resched() call
 *         - explicit schedule() call
 *         - return from syscall or exception to user-space
 *         - return from interrupt-handler to user-space
 */
6. Associated Reading

This article has focused on the basic concepts needed to understand preemption and on how the Linux kernel implements user preemption and kernel preemption. Because the context switch is closely related to preemption, it was also analyzed in detail for Intel x86 processors. These topics are covered in many Linux kernel books, but they are hard to understand in depth without some processor-architecture background, so learning the related hardware knowledge is necessary.

    • Intel IA-32 Architectures Software Developer's Manual Volume 3, sections 6.14 and 13.4
    • Getting started with x86 system calls
    • Proper Locking under a preemptible Kernel
    • Linux Kernel Stack

