CPU protected mode: an in-depth exploration

Source: Internet
Author: User

The original link is: http://www.chinaunix.net/old_jh/23/483510.html

The architecture of protected mode

Topics:
Register model of protected mode
Descriptors and page table entries in protected mode
Storage management and address translation in protected mode
Multi-task mechanism and protection implementation
Virtual 8086 mode

I. Register model of protected mode
Protected mode adds four new registers (pointers to special data tables in memory):
Global Descriptor Table Register GDTR
Local Descriptor Table Register LDTR
Interrupt Descriptor Table Register IDTR
Task Register TR

1. GDT and GDTR
Protected mode defines a data table in the system's physical memory address space called the Global Descriptor Table (GDT).
The GDTR is a 48-bit register within the CPU:
16-bit limit: bits 15..00 (one less than the actual table size)
32-bit base: bits 47..16 (stored as a 24-bit low part in bits 39..16 and an 8-bit high part in bits 47..40)
Global memory is a general system storage resource that may be shared by many tasks or used independently by one, and it is managed through the GDT. The table contains segment descriptors that describe the segments in global memory. Each descriptor is 8 bytes, so a 64KB table can hold 8,192 descriptors.
The three descriptor-table registers are the global GDTR, the local LDTR, and the interrupt IDTR.
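The limit/count relationship above can be sketched in C. This is a minimal illustration of the 48-bit GDTR contents; the struct and function names are mine, not from the original text.

```c
#include <stdint.h>

/* Hypothetical C view of the 48-bit GDTR described above. */
typedef struct {
    uint16_t limit;  /* bits 15..00: table size in bytes, minus one */
    uint32_t base;   /* bits 47..16: 32-bit linear base address     */
} gdtr_t;

/* Each descriptor is 8 bytes, so a table of (limit + 1) bytes
   holds (limit + 1) / 8 descriptors. */
static unsigned gdt_descriptor_count(gdtr_t g) {
    return ((unsigned)g.limit + 1u) / 8u;
}
```

With the maximum limit of 0xFFFF (a 64KB table), this yields the 8,192 descriptors mentioned above.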

2. LDTR
The Local Descriptor Table Register LDTR is also an organic part of the memory-management support mechanism. Each task accesses the global descriptor table and the memory it governs, as well as its own local descriptor table and the memory that governs. This per-task table is called the Local Descriptor Table (LDT); it defines the local memory address space used by the task, and its segment descriptors are used to access the data and code in the current task's memory segments.
Because each task has its own LDT, a protected-mode system may contain many local descriptor tables.
The LDTR does not directly define a local descriptor table; it holds only a selector that points to the LDT descriptor in the GDT. When the LDTR is loaded with a selector, the corresponding descriptor is read from global memory and loaded into the LDT descriptor cache register. Only the descriptor determines the local descriptor table. Loading the descriptor into the cache establishes the LDT for the current task; that is, each time the LDTR is loaded with a selector, the LDT descriptor is cached and a new LDT is activated.

3. IDTR
The Interrupt Descriptor Table Register IDTR points to the Interrupt Descriptor Table (IDT). The descriptors in the IDT are not segment descriptors but "gate" descriptors, which provide the mechanism for vectoring interrupts to their service routines.
Like the GDTR, the IDTR is 48 bits:
16-bit limit: bits 15..00
32-bit base: bits 47..16
Each interrupt descriptor is also 8 bytes long, so a 64KB table could in principle hold 8,192 descriptors; in practice the x86 supports at most 256 interrupt vectors, so a full IDT is 2KB.

4. Control registers CR0 ~ CR3
The protected-mode model includes four system control registers, CR0 ~ CR3:
CR0: PE (protection enable), MP (math coprocessor present),
EM (coprocessor emulation), TS (task switched),
ET (coprocessor extension type), PG (paging enable)
The low bits of CR0 holding PE, MP, EM, TS, and ET are often called the MSW (machine status word).
CR1: reserved
CR2: page-fault linear address
CR3: Page Directory Base Register (PDBR)
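The CR0 flag positions listed above can be written as bit masks; this is a sketch using the standard x86 bit layout, with the enum names chosen by me.

```c
#include <stdint.h>

/* Bit positions of the CR0 flags listed above (standard x86 layout). */
enum {
    CR0_PE = 1u << 0,   /* protection enable          */
    CR0_MP = 1u << 1,   /* math coprocessor present   */
    CR0_EM = 1u << 2,   /* coprocessor emulation      */
    CR0_TS = 1u << 3,   /* task switched              */
    CR0_ET = 1u << 4,   /* coprocessor extension type */
    CR0_PG = 1u << 31   /* paging enable              */
};

/* The machine status word (MSW) is the low 16 bits of CR0. */
static uint16_t msw_of_cr0(uint32_t cr0) {
    return (uint16_t)(cr0 & 0xFFFFu);
}
```

Note that PG sits in bit 31 and so is outside the MSW, which is why the 80286's `LMSW` instruction cannot enable paging.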

5. TR
The Task Register TR supports the protected-mode task-switching mechanism of high-end microprocessors. TR holds a 16-bit selector used as an index value. The first selector is loaded into TR by software to start the first task; thereafter the selector is modified automatically whenever a task-switch instruction is executed.
The selector in TR indicates the position of a descriptor in the global descriptor table. When a selector is loaded into TR, the TSS descriptor is automatically read from memory and loaded into the task descriptor cache (48-bit). This descriptor defines the storage block of the task state segment (TSS), providing the segment's starting base address and its size limit.
Each task has its own TSS. TSS contains the information that is necessary to start a task.



6. Other registers
Some registers usable in both real mode and protected mode change meaning when the mode is switched.
In real mode the segment registers (CS, SS, DS, ES) hold the base address of a segment: CS = code segment base.
In protected mode the same registers become segment selector registers: their value is no longer the base address of the segment but the segment's selector. CS = code segment selector.

AX, BX, CX, and DX are extended to the 32-bit EAX, EBX, ECX, and EDX,
and the related instructions (the MOV class, the ADD class, and so on) are extended accordingly.
IP, SP, BP, SI, and DI are likewise extended to the 32-bit EIP, ESP, EBP, ESI, and EDI.

II. Descriptors and page table entries in protected mode
The protection mechanism of high-end microprocessors supports a variety of "descriptors" serving different system functions, such as segment descriptors, system segment descriptors, call gate descriptors, task state segment descriptors, task gate descriptors, interrupt gate descriptors, and so on.
Descriptors are the basic elements used to manage the segmentation of the 64TB virtual storage space. A descriptor corresponds to one storage "segment" in the virtual space; it maps the "virtual addresses" of code, data, stack, and task state segments to linear addresses and assigns "access attributes" to each segment.
Each descriptor is 8 bytes:
Bytes 1~0: limit bits 15..00 (or, for a gate, the offset)
Byte 6, low four bits: limit bits 19..16
Bytes 4~2: base bits 23..00
Byte 7: base bits 31..24

Byte 5 (the access byte):
A (bit 0): 1 = segment has been accessed
TYPE: W writable, R readable, C conforming, E executable, ED expansion direction
S: 1 = code/data segment descriptor, 0 = system/gate descriptor
DPL: descriptor privilege level (2 bits)
P: present bit; 1 = segment mapped into physical memory, 0 = not mapped
Byte 6, high four bits:
AVL: available for use by the programmer
G: granularity (0 = byte granularity, 1 = 4KB page granularity)
X: reserved
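The scattered base and limit fields above can be reassembled in C. This is a minimal sketch that decodes the byte layout just described; the function names are mine.

```c
#include <stdint.h>

/* Reassemble the 32-bit base from the 8-byte descriptor layout above:
   bytes 2..4 give bits 23..00, byte 7 gives bits 31..24. */
static uint32_t desc_base(const uint8_t d[8]) {
    return (uint32_t)d[2] | (uint32_t)d[3] << 8
         | (uint32_t)d[4] << 16 | (uint32_t)d[7] << 24;
}

/* Reassemble the 20-bit limit: bytes 0..1 give bits 15..00,
   the low nibble of byte 6 gives bits 19..16. */
static uint32_t desc_limit(const uint8_t d[8]) {
    return (uint32_t)d[0] | (uint32_t)d[1] << 8
         | ((uint32_t)d[6] & 0x0Fu) << 16;
}
```

For example, the descriptor bytes FF FF 00 00 01 9A CF 00 decode to base 0x10000 and limit 0xFFFFF.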


III. Storage management and address translation in protected mode
The storage organization of protected mode is relatively complex. The MMU manages the address space and translates virtual (logical) addresses into physical addresses, through both a "segmentation" model and a "paging" model.


1. Virtual (logical) addresses and segmentation of the space
The MMU uses a 48-bit memory pointer (16-bit selector : 32-bit offset), called the virtual address, to specify the memory location of data or an instruction. The selector is stored in a segment selector register (CS, DS, ES, SS) and chooses the "segment"; the offset can be held in any other addressable register.
The 32-bit offset pointer gives a segment capacity of 1 byte ~ 4GB.
The 16-bit selector divides into a 13-bit index, a 1-bit table indicator (TI), and a 2-bit requested privilege level (RPL).
Thus the 14-bit "segment" part and the 32-bit offset form a 46-bit "address", addressing 64TB (2^40 bytes = 1TB, so 2^46 bytes = 64TB).
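The selector fields and the 46-bit arithmetic above can be sketched in C; the helper names are mine.

```c
#include <stdint.h>

/* Split a 16-bit selector into the fields described above. */
static unsigned sel_index(uint16_t s) { return s >> 3; }         /* 13-bit index    */
static unsigned sel_ti(uint16_t s)    { return (s >> 2) & 1u; }  /* table indicator */
static unsigned sel_rpl(uint16_t s)   { return s & 3u; }         /* requested priv. */
```

The index and TI bits (14 bits) plus the 32-bit offset give a 46-bit virtual address, and 2^46 bytes is indeed 64 x 2^40 = 64TB.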

In the segmented model: 64TB of space = the lower 32TB of global space + the upper 32TB of local space
The TI ("table") bit in the selector chooses "global" or "local".
A maximum of 8,192 segments is allowed in each space.
Segment descriptors define segment properties; not all segments need be in use.
When a task starts, its global memory segments and local memory segments are activated.
Local segments hold the "local contents" of each task, while all tasks share the "global contents".

2. Physical addresses and virtual address translation
The logical space "transparent" to the programmer is 64TB. However, the CPU's 32-bit address bus supports only 4GB of contiguous physical space. Consequently, only part of the "virtual" information resides in physical storage segments at any time; temporarily "unused" segments can be kept on "secondary" storage.
If you access a segment that is not in "physical memory" and physical space is available, the segment is read in; if no physical space is free, a "swap" must occur, sending another unused segment out to "secondary" storage to make room for the new information. The OS memory manager controls the allocation, release, and exchange of memory.

The segmentation and paging mechanisms together map the 48-bit virtual address to a 32-bit physical address.
Mapping process:
(1) Segment translation: the virtual address is translated into a linear address.
(2) Page translation: if paging is disabled, the resulting linear address is the physical address; if paging is enabled, the linear address goes through a further paging translation to produce the physical address.
Along the way, the MMU determines whether the "segment" or "page" corresponding to the virtual address is present in physical storage. If it is, operation proceeds normally; otherwise a swap loads the segment or page.

2.1 Segment address translation
The selector in a segment selector register uses its TI ("table select") bit to choose the "global" or "local" descriptor table, and its index to choose the descriptor that is loaded into the segment descriptor cache register.
(Note: it is the descriptor, not the selector, that defines the segment's properties.)

Each segment selector register has a 64-bit internal segment descriptor cache register that is loaded "transparently" as instructions execute.

For example, MOV DS, AX:
The selector in AX is loaded into DS; the corresponding descriptor is then read from the global or local descriptor table in memory and loaded into the segment descriptor cache register behind DS (if the descriptor is already cached, it can be referenced directly). The MMU then checks the validity of the information in the descriptor.
The location within the segment is determined by the 32-bit offset of the virtual address.

In effect, loading the segment descriptor cache register implements the mapping from the 16-bit selector to the corresponding 32-bit segment base address. The descriptors in the cache registers change dynamically as the task executes. The MMU allows only six memory segments to be active at once, corresponding to CS, DS, ES, SS, FS, and GS.
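The segment stage of translation described above amounts to a bounds check followed by an addition. This is a minimal sketch, assuming an expand-up segment whose cached descriptor supplies a base and a byte-granular limit; the type and function names are mine.

```c
#include <stdint.h>

/* Hypothetical view of a cached segment descriptor's base and limit. */
typedef struct {
    uint32_t base;   /* 32-bit segment base address          */
    uint32_t limit;  /* highest valid byte offset in segment */
} seg_cache_t;

/* Segment translation: check the offset against the limit, then
   add the base. Returns the linear address, or -1 on a violation. */
static int64_t seg_translate(seg_cache_t s, uint32_t off) {
    if (off > s.limit) return -1;  /* would raise a protection fault */
    return (int64_t)s.base + off;
}
```

The returned linear address is the physical address when paging is disabled; otherwise it feeds the page translation described next.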

2.2 Page address translation
If paging is disabled, each segment can be allocated from 1 byte to 4GB of physical space. If paging is enabled, the 4GB physical space is divided into 1,048,576 (2^20) pages of 4,096 bytes each.
The paging mechanism works underneath the segmented storage management mechanism, and the organization of the address space differs when paging is enabled.

Paging simplifies the implementation of memory-management software. The linear address produced by segment translation is no longer used directly as a physical address; it goes through a second translation step, page translation.
Linear address = 10-bit directory field + 10-bit page field + 12-bit offset field

The address of the page directory in memory is given by the Page Directory Base Register, PDBR, in CR3. The upper 20 bits of the PDBR form the directory's base address; the low 12 bits are not used for addressing, since the directory is aligned on a 4KB boundary. The page directory occupies 4K bytes and consists of 1K 32-bit entries, each pointing to a page table in physical memory.

The 10-bit directory field of the linear address indexes the directory located by PDBR, selecting one of the 1K 32-bit page directory entries. The selected entry is cached in the translation lookaside buffer (TLB) and serves as the base address of a page table in memory. Like the page directory, each page table is 4K bytes and contains 1K 32-bit entries, called page frame addresses; each one points to a 4KB page frame in physical memory.

The 10-bit page field of the linear address then selects one of the 1K 32-bit page table entries (also cached in the TLB).

The TLB can hold 32 entries, so a total of 32 x 4KB = 128K bytes of storage can be addressed without reading the page tables at all. If the location to be accessed is not within these pages, there is the additional overhead of reading the page directory and page table entries into the TLB.
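The three-way split of the linear address described above can be sketched directly in C; the helper names are mine.

```c
#include <stdint.h>

/* Split a 32-bit linear address into the fields described above. */
static unsigned lin_dir(uint32_t a)  { return a >> 22; }            /* 10-bit directory */
static unsigned lin_page(uint32_t a) { return (a >> 12) & 0x3FFu; } /* 10-bit page      */
static unsigned lin_off(uint32_t a)  { return a & 0xFFFu; }         /* 12-bit offset    */
```

Each field is an index: the directory field picks a page directory entry, the page field picks a page table entry, and the offset locates the byte within the 4KB page frame.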

IV. The multi-task mechanism and the implementation of protection

The 80286 and later high-end microprocessors implement a multitasking software architecture: the hardware allows multiple tasks to exist in the software system and to be scheduled in time slices, so that after a fixed interval execution moves from one task to another.

For example, tasks can be executed in a round-robin manner: the task that has just run goes to the end of the execution list. Although the switching happens in discrete time slices, the CPU still gives a person the impression that all tasks are running simultaneously.

A task is a collection of programs that performs a particular function; it can also be called a process. Software systems often require many processes. In a protected-mode microprocessor system each process is a separate task, and the CPU provides a highly efficient mechanism called task switching to switch between them. For example, a task-switch operation on an 80386DX running at 16MHz takes only about 19us.

When a task is switched in, it has both global and local memory resources. The local memory address space is divided per task, meaning each task normally has its own local memory segments. Segments in global memory can be shared by all tasks; thus any task can access all the segments in global storage.

1. Protection and protected mode

Safeguards can be added to a protected-mode software system to guard task memory resources against unauthorized or improper access. This memory safeguarding is called protection, and the CPU hardware implements the protection mechanism. In a multitasking environment the mechanism restricts each task's access to local and system resources, isolating the tasks from one another.

Segmentation, paging, and descriptors are the key elements of the CPU protection mechanism. In the segmented memory model, a segment is the smallest unit of the virtual memory address space that carries its own protection attributes, defined by the access information and the limit field in the segment descriptor. The hardware protection mechanism performs a series of checks during memory access: for example, when a segment descriptor is loaded into the descriptor cache its consistency is checked, and each offset is checked to make sure the access falls within the segment limit. The protection checks and restrictions the CPU imposes on software include:
type checks, limit checks, address-domain restrictions, procedure entry-point restrictions, and instruction-set restrictions

Consider the access information in the segment descriptor. The P bit defines whether the segment is in physical memory. Assuming the segment is present, the fourth bit of the Type field distinguishes a code segment from a data segment (0 = data segment, 1 = code segment). Other attributes, such as readable (R), writable (W), conforming (C), expand-up or expand-down (ED), and whether the segment has been accessed (A), are specified by the remaining Type-field bits. The privilege level is specified by the DPL field.

When a segment is accessed, its base address and limit are cached in the CPU. But before the descriptor is loaded, the MMU verifies that the selected segment is currently in physical memory, that the current program runs at a privilege level allowed to access it, that its type is consistent with the target segment selector register (CS for code; DS, ES, FS, GS for data; SS for the stack), and that the reference does not exceed the segment boundary. If a violation occurs, an error is reported; the storage manager can then determine the cause, resolve the issue, and restart the operation.

Violation one: if the selector loaded into CS points to a data segment descriptor, the type check raises a violation.
Violation two: an attempt to read an operand from a code segment marked non-readable.
Violation three: if the offset of a byte being accessed is greater than LIMIT, or the offset of a doubleword is greater than or equal to LIMIT-2, the data segment boundary is exceeded and the protection mechanism is triggered.
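Violation three above is just an arithmetic bounds check; here is a minimal sketch, assuming an expand-up segment where LIMIT is the highest valid byte offset. The function names are mine.

```c
#include <stdint.h>

/* A one-byte access at `off` is in bounds if off <= limit. */
static int byte_ok(uint32_t off, uint32_t limit) {
    return off <= limit;
}

/* A four-byte (doubleword) access touches off..off+3, so it is in
   bounds only if off + 3 <= limit, i.e. it violates the limit
   exactly when off >= limit - 2, matching the text above. */
static int dword_ok(uint32_t off, uint32_t limit) {
    return off + 3u <= limit;
}
```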

The CPU provides four privilege levels for each task: 0, 1, 2, and 3, where level 0 is the highest privilege level and level 3 the lowest.

System software and application software are usually divided into: kernel, system services, general extensions, applications ...

The kernel provides the application-independent, microprocessor-level functions (I/O control, task scheduling, and memory management). For this reason it receives the highest privilege, level 0.
Level 1 holds the processes providing system services such as file access.
Level 2 is used to implement routines supporting system-specific operations.
Level 3 is the level at which the user's applications run.

This division also shows how privilege levels separate system-level software (levels 0 through 2) from user-level applications (level 3). A task at one level can call programs at more privileged levels but cannot modify their contents, so an application at level 3 can use system programs without endangering their integrity.

Finally, some protective measures are built into the instruction set: system control instructions, for example, can only be executed in a code segment at privilege level 0.

Each task has its own local descriptor table. As long as the descriptors in one task's LDT cannot be referenced by another task, the task is isolated from the others; in other words, each task is given a separate virtual address space.

The above shows that segments, privilege levels, and local descriptor tables provide protection for the code and data within a task. This protection improves software reliability, because an error in one application cannot affect the operating system or other applications.

Now consider how privilege levels are assigned to code and data segments.

While a task runs it accesses code segments, data segments, and stack segments in both local and global memory. A privilege level is assigned to each segment through the access information in its segment descriptor: writing a level number into the DPL field gives the segment any desired privilege level. To provide extra flexibility, input/output has two privilege levels of its own.

First, I/O drivers belong to the system resources and are assigned a privilege level; the I/O control program is part of the kernel, which puts it at privilege level 0.

The instructions IN, INS, OUT, OUTS, CLI, and STI are called trusted instructions, because the CPU protection model places additional restrictions on their use: they can only be executed by code running at a privilege level at least as high as IOPL (the input/output privilege level).

IOPL is the second of the two I/O privilege levels. The IOPL field lives in the flags register in protected mode, and software must set it to the privilege level granted to the input/output instructions. The IOPL value can differ from task to task.

If, say, I/O instructions are restricted to privilege levels above 3, applications cannot perform I/O directly; an application wishing to do I/O must request it through the operating system's I/O driver.

2. Accessing code and data in protected mode

During task execution the CPU may need to transfer control to programs at other privilege levels, or access data in segments at different privilege levels. Access to code or data across privilege levels is strictly limited; the rules ensure that code and data at high privilege levels cannot be compromised by programs running at low privilege levels.

Before discussing how a program accesses data at the same or a different privilege level, some terminology. We have already used the terms DPL (descriptor privilege level) and IOPL (I/O privilege level). Two more terms, CPL (current privilege level) and RPL (requested privilege level), appear when discussing the protection checks on data and code access. The CPL is the privilege level at which the task is currently executing;
it is held in the access byte of the descriptor cache and is normally equal to the DPL of the current code segment. The RPL is the privilege level carried in the selector newly loaded into a segment register; for code, the RPL is the privilege level requested of the code segment containing the called program.
A running task may also need to reach a program in a segment at another privilege level. As the program executes, the task's current privilege level changes dynamically: the CPL switches to the DPL of the code segment currently being executed.

The CPU protection rules determine which code and data a program can access. Before looking at control transfers across protection levels, consider how a data segment is accessed by code at the current privilege level.
Protection checks made on data access (Figure 832):

The general rule is that code can only access data at the same or a lower privilege level. For example, if a task's current privilege level is 1, it can access operands in data segments whose DPL is 1, 2, or 3. Whenever DS, ES, FS, or GS is loaded with a new selector, the DPL of the target data segment is checked to ensure it is numerically greater than or equal to the larger (i.e., less privileged) of CPL and RPL. If the DPL satisfies this condition, the descriptor is cached in the CPU and the data can be accessed.

One exception to this rule: when the SS register is loaded, DPL must equal CPL. In other words, the active stack (there is one per privilege level) is always at the CPL level.

"Example 12" assumes dpl=2,cpl=0,rpl=2, can you make data access?
"Solution": the target segment of the DPL is 2, the privilege level is less than the cpl=0,0 level is the maximum privilege level in CPL and RPL.
As a result, protection standards are met and data access is possible.
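The data-access rule and Example 12 can be expressed as a few lines of C; the function name is mine.

```c
/* Data-segment access rule sketched above: the target segment's DPL
   must be numerically >= the larger (less privileged) of CPL and RPL. */
static int data_access_ok(int cpl, int rpl, int dpl) {
    int epl = cpl > rpl ? cpl : rpl;  /* effective privilege level */
    return dpl >= epl;
}
```

Example 12 corresponds to `data_access_ok(0, 2, 2)`, which passes; a level-3 task loading a DPL-2 data segment would fail.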


The rules used by program control transfers (JMP, CALL) differ between code at the same privilege level and code at different privilege levels. To pass control to another instruction within the same code segment, only a jump or call instruction is required; in both cases only the limit is checked, to make sure the target of the transfer or call does not lie beyond the bounds of the segment.

To transfer control to another code segment, at the same or a different privilege level, a far jump or far call instruction must be used. For this type of transfer both the limit and the type are checked, and one of two conditions must hold for the transfer to occur:
first, if CPL equals the target's DPL, the two segments are at the same protection level and the transfer takes place;
second, if the new segment is conforming (the C bit in its Type field is set to 1) and its DPL represents a privilege at least as high as CPL, the transfer is allowed and the program continues to execute at the CPL level.

The general rule for transferring control to code at a different privilege level is that the new code segment must have a higher privilege level. A descriptor called a gate descriptor is used to implement the privilege-level change. The instruction that transfers control to the more privileged code segment is still a far call or far jump, but it does not specify the target code directly; it refers to a gate descriptor, and the CPU performs a more elaborate control-transfer mechanism.

There are four types of gate descriptor: call gates, task gates, interrupt gates, and trap gates. A call gate implements an indirect control transfer from the CPL level to a higher privilege level. It defines a valid entry point into the more privileged segment: the contents of the call gate are the virtual address of that entry point, a target selector and a target offset. The target offset points to the instruction to be executed within the segment. Call gates can be placed in the GDT or an LDT.

The call instruction carries an offset and a selector. When it executes, its selector is loaded into CS and points at the call gate; the call gate in turn causes its own target selector to be loaded into CS. The descriptor of the called code segment is thereby cached, supplying the base address of the code segment in memory. Note that it is the offset in the call gate descriptor that locates the entry point in the code segment.

Whenever the task's current privilege level changes, a new stack is activated. As part of the context-switch sequence, the old ESP and SS are saved on the new stack along with the old EIP, CS, and any parameters needed to return to the old program environment. The procedure at the higher privilege level then begins execution.

At the end of execution a RET instruction returns control to the caller: it pops the old EIP and CS values, any parameters, and the old ESP and SS values from the stack, restoring the original program environment. Execution resumes at the instruction after the call in the lower-privileged code segment. For a call through a gate to succeed, both CPL and the selector's RPL must be at least as privileged as the gate's DPL, and the DPL of the called code segment must represent a privilege at least as high as CPL.

3. Task switching and the task state segment table

The task is the key element of the CPU's multitasking software architecture, and another important feature, already noted, is the high-performance task-switching mechanism. A task can be activated directly or indirectly; direct activation requires only the execution of an inter-segment jump or inter-segment call instruction. When a task switch is initiated with a jump instruction, no return link to the previous task is saved. If the switch to the new task is made with a call, however, the return link information is saved automatically, guaranteeing that when the new task finishes, execution returns to the instruction following the call in the old task.

Each task executed by the CPU is designated by a selector, called the task state selector. The selector is the index of a task state segment descriptor in the global descriptor table.

If a jump or call instruction takes a task state selector as its operand, the task is entered directly. When the call executes, the selector is loaded into the CPU's task register (TR). The corresponding task state segment descriptor is then read from the GDT and loaded into the task register cache, but only if the conditions declared in the descriptor's access information are satisfied: the descriptor is in physical memory (P = 1), the task is not busy (B = 0), and the protection rules are not violated (CPL must equal DPL). The base address and limit in the loaded descriptor define the starting point and size of the task state segment (TSS). The TSS contains all the information needed to start or stop a task.

The contents of the task state segment.
A typical minimal TSS is 104 bytes, so the minimum limit that can be declared in a TSS descriptor is 67H (103). Note that the segment contains the following information: the microprocessor state used to start the task (general registers, segment selectors, instruction pointer, and flags); the return-link TSS selector of the previously active task when this task was invoked by a call; the local descriptor table register selector; the stack selectors and pointers for privilege levels 0, 1, and 2; and the I/O permission bits.
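The 104-byte figure can be checked with a C struct laid out after the standard 32-bit TSS; the field names are my own shorthand, and fields that architecturally occupy 16 bits are shown here as 32-bit slots with their high half reserved, as in the hardware layout.

```c
#include <stdint.h>

/* Sketch of the 32-bit TSS fixed portion described above. */
typedef struct {
    uint32_t prev_task_link;           /* return link: old TSS selector      */
    uint32_t esp0, ss0;                /* level-0 stack pointer and selector */
    uint32_t esp1, ss1;                /* level-1 stack                      */
    uint32_t esp2, ss2;                /* level-2 stack                      */
    uint32_t cr3;                      /* page directory base                */
    uint32_t eip, eflags;              /* instruction pointer and flags      */
    uint32_t eax, ecx, edx, ebx;       /* general registers                  */
    uint32_t esp, ebp, esi, edi;
    uint32_t es, cs, ss, ds, fs, gs;   /* segment selectors                  */
    uint32_t ldt_selector;             /* LDT register selector              */
    uint32_t trap_iomap;               /* T bit and I/O map base             */
} tss32_t;
```

Twenty-six 32-bit slots give 104 bytes, which is why the minimum TSS descriptor limit is 67H (103).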

The process by which a task starts.
Suppose another task is active when the new task is invoked. The new task is then called a nested task, which causes the NT bit of the flags to be set to 1. In this scenario the current task is first suspended, and the user-visible register state of the CPU is saved into the old TSS. Next, the B (busy) bit in the new task's descriptor is set. The TS bit in the machine status word is set to 1 to indicate that a task is active; the state information in the new TSS is loaded into the processor, and the old TSS selector is saved into the new task state segment as the return-link selector. The task switch then completes, and execution continues from the instruction determined by the new values of the code segment selector (CS) and instruction pointer (EIP).

The old program context is preserved by saving the old TSS selector as a return-link selector in the new TSS. When a return instruction executes at the end of the nested task, the return-link selector is automatically reloaded into TR. This reactivates the old TSS, the previous program environment is restored, and execution resumes where the old task left off.

A task is activated indirectly when the jump or call goes through a task gate. This is the way to pass control to a task whose privilege is higher than CPL. The instruction's selector refers to a task gate, not to a task state segment; the gate may reside in the GDT or an LDT. The TSS selector inside the gate is loaded into TR to select the TSS and start the task.

An example of the task-switching principle.
The table in Figure 841 contains the TSS descriptors SELECT0 through SELECT3, holding the access rights and selectors for tasks 0 through 3 respectively. To activate the task represented by the selector SELECT2 stored in a data segment, the following procedure can be used. First the data segment register is loaded with Selector_data_seg_start, which points to the segment containing the selectors:
MOV AX, Selector_data_seg_start
MOV DS, AX
Since each selector entry is 8 bytes long, SELECT2 lies 16 bytes from the start of the segment, and this offset is loaded into register EBX:
MOV EBX, 10H
At this point, you can use the following instructions to achieve inter-segment transfer:
JMP DWORD PTR [EBX]
Executing this instruction passes program control to the task specified by the selector in descriptor SELECT2. In this case the program link is not preserved. If the call instruction is used instead:
Call DWORD PTR [EBX]
The link is retained.

V. Virtual 8086 mode

Applications for the 8086/8088, such as those written for the PC family of operating systems, can run directly on a real-mode CPU. A protected-mode operating system, such as UNIX, can also run DOS applications without modification. This is done through virtual 8086 mode: in this mode the CPU supports the 8086 programming model and can run 8086/8088 programs directly. In effect, it creates a virtual 8086 machine on which to execute them.

For such applications the CPU switches back and forth between protected mode and virtual 8086 mode. The UNIX operating system and UNIX applications run in protected mode; when a DOS operating system and DOS applications are to run, the CPU switches into virtual 8086 mode. The switch is performed under the control of a program called the virtual 8086 monitor.

Virtual 8086 mode is selected by the virtual mode (VM) bit in the extended flags register; setting VM to 1 enables virtual-8086 operation. In fact, software does not set the VM bit in EFLAGS directly, because virtual 8086 mode is normally entered from a protected-mode task: the EFLAGS image to be loaded contains VM = 1, and loading EFLAGS is part of the task-switch process, which in turn starts virtual 8086 mode. A virtual 8086 program runs at privilege level 3. The virtual 8086 monitor is responsible for setting and clearing the VM bit in each task's EFLAGS copy, allowing protected-mode and virtual-8086-mode tasks to coexist in a multitasking environment.

Another way to enter virtual 8086 mode is to return through an interrupt. In this case EFLAGS is reloaded from the stack; again, the VM bit in the EFLAGS image must be set to 1 to enter virtual 8086 mode.
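The VM bit's position in EFLAGS (bit 17 in the standard x86 layout) makes the "enter v86 via a loaded flags image" test above a one-line check; the names are mine.

```c
#include <stdint.h>

/* The VM flag is bit 17 of EFLAGS; a task switch or interrupt return
   that loads an EFLAGS image with this bit set enters virtual 8086 mode. */
#define EFLAGS_VM (1u << 17)

static int enters_v86(uint32_t eflags_image) {
    return (eflags_image & EFLAGS_VM) != 0u;
}
```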

http://blog.csdn.net/sqlserverdiscovery/article/details/8110083

