Computer Hardware Basics (CPU) - DIY Hardware

Source: Internet
Author: User
I. CPU Clock Speed

This is the metric most often quoted by new users. It refers to the clock frequency at which the CPU core operates: when people say a CPU is "so many MHz", that number is its clock speed. Although clock speed is related to CPU speed, the two are not strictly proportional, because actual computing speed also depends on other performance factors (cache, instruction set, CPU bit width, and so on). The clock speed therefore does not represent the overall performance of the CPU by itself, but raising it remains one of the main ways to make a CPU compute faster. The clock speed is calculated as: clock speed = external frequency × multiplier.
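
As a minimal sketch of that formula (the base frequency and multiplier below are illustrative numbers, not the specs of any particular CPU):

    # Illustrative only: clock speed = external frequency (base clock) x multiplier.
    external_frequency_mhz = 200   # hypothetical external frequency in MHz
    multiplier = 16                # hypothetical clock multiplier

    clock_speed_mhz = external_frequency_mhz * multiplier
    print(f"CPU clock speed: {clock_speed_mhz} MHz")   # 3200 MHz, i.e. 3.2 GHz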

II. External Frequency

The external frequency (base clock) is the baseline frequency of the CPU and, indeed, of the entire computer system; its unit is MHz. In early computers, the memory and the motherboard ran synchronously at the external frequency, so the CPU's external frequency could be understood as connecting directly to the memory, with the two running in step. In current computer systems the two can be completely different, but the external frequency is still significant: most frequencies in the system are derived from it, multiplied by some factor.

III. Clock Multiplier

The clock multiplier is the ratio between the CPU core's operating frequency and the external frequency. In theory the multiplier can range from 1.5 upward without limit, in steps of 0.5. Since clock speed = external frequency × multiplier, raising either one raises the CPU clock speed. Originally there was no concept of a multiplier: the CPU ran at the same frequency as the system bus. But as CPUs became faster and faster, multiplier technology emerged. It lets the system bus work at a relatively low frequency while the CPU clock speed is raised by increasing the multiplier. In other words, the multiplier is the factor by which the CPU frequency exceeds the system bus frequency; with the external frequency unchanged, a higher multiplier means a higher CPU clock speed.

IV. Pipeline

The work a CPU does on an instruction can be divided into several steps: instruction fetch, decode, execute, and write-back of the result. The first two steps are handled by the instruction (control) unit and the last two by the execution (arithmetic) unit. In the traditional approach, instructions execute strictly one after another: the control unit completes the first two steps of an instruction, then the execution unit completes the last two, and so on. Clearly, while the control unit is working the execution unit is largely idle, and while the execution unit is working the control unit rests, which wastes a considerable amount of resources. So the CPU borrowed the pipeline design widely used in industrial production: as soon as the control unit finishes the first two steps of the first instruction, it immediately starts on the second instruction, forming a pipeline. Pipelining maximizes the use of CPU resources, keeping every unit busy on every clock cycle and thereby raising the CPU's effective computing rate.
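
A toy model (assuming a hypothetical 4-stage pipeline, not any real microarchitecture) shows why this raises throughput: strictly sequential execution costs stages × instructions cycles, while a filled pipeline completes one instruction per cycle after the initial fill.

    # Hypothetical 4-stage pipeline: fetch, decode, execute, write-back.
    STAGES = 4

    def sequential_cycles(num_instructions: int) -> int:
        # Each instruction passes through every stage before the next one starts.
        return num_instructions * STAGES

    def pipelined_cycles(num_instructions: int) -> int:
        # After the pipeline fills (STAGES cycles), one instruction completes per cycle.
        return STAGES + (num_instructions - 1)

    print("instructions  sequential  pipelined")
    for n in (1, 10, 100):
        print(f"{n:>12}  {sequential_cycles(n):>10}  {pipelined_cycles(n):>9}")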

Just as adding workers and lengthening the line in industrial production increases output per unit of time, a CPU with more pipeline stages can process more instructions in the same period and run at a higher frequency. For example, Intel's Northwood-core Pentium 4 has a 20-stage pipeline and the Prescott-core Pentium 4 a 31-stage pipeline; it is this ultra-long pipeline that gave the Pentium 4 its advantage in the frequency war against the Athlon XP (which has a 10-stage integer pipeline and a 15-stage floating-point pipeline).

When the CPU is working, instructions are not isolated: many must be completed in a specific order. If something goes wrong with one instruction in flight, the whole pipeline may have to stall, and the longer the pipeline, the more instructions are affected and the greater the cost of the error. Within a pipeline, if the second instruction needs the result of the first, the two are said to be dependent; once an error occurs in one instruction, the instructions that depend on it also become meaningless.

Finally, because of propagation delays in the circuitry between stages, the more pipeline stages there are, the more of these delays accumulate, the longer the total latency, and the longer the CPU takes to complete a single instruction. A longer pipeline is therefore not always better.

V. CPU Cache

The cache is a small, fast memory that sits between the CPU and main memory: its capacity is much smaller than main memory's, but its speed is much higher. The data held in the cache is a small portion of main memory, but it is the portion the CPU is about to access in the near future. When the CPU needs data, it can often fetch it from the cache instead of calling main memory directly, which speeds up reading. Adding a cache to the CPU is thus an efficient solution: the whole memory hierarchy (cache + main memory) behaves like a storage system with the speed of the cache and the capacity of main memory. The cache has a great impact on CPU performance, mainly because of the order in which the CPU exchanges data with it and the bandwidth between the CPU and the cache.

The principle of the cache is that when the CPU needs to read a piece of data, it looks in the cache first. If the data is found there, it is read immediately and sent to the CPU for processing. If not, the data is read from main memory at a comparatively slow speed and sent to the CPU, and at the same time the whole block containing that data is transferred into the cache, so that later reads of the same block can be served from the cache without calling main memory again.
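
A simplified sketch of that read path (the block size, addresses, and data structures here are assumptions for illustration, not a model of any real cache):

    # Simplified cache read: look in the cache first; on a miss, fetch the whole
    # block containing the address from (slow) main memory and keep it in the cache.
    BLOCK_SIZE = 64                      # hypothetical block size in bytes

    memory = {addr: addr % 256 for addr in range(1024)}   # fake main memory
    cache = {}                           # maps block number -> list of bytes

    def read(addr: int) -> int:
        block_no = addr // BLOCK_SIZE
        if block_no not in cache:                    # miss: slow path, load whole block
            start = block_no * BLOCK_SIZE
            cache[block_no] = [memory[a] for a in range(start, start + BLOCK_SIZE)]
        return cache[block_no][addr % BLOCK_SIZE]    # hit: fast path

    print(read(130), read(131))   # the second read hits the block loaded by the first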

This reading mechanism gives the CPU a very high cache hit rate (around 90% on most CPUs): roughly 90% of the data the CPU needs next is already in the cache, and only about 10% has to be read from main memory. This saves most of the time the CPU would otherwise spend reading main memory directly and waiting on the data. In general, the order in which the CPU reads data is cache first, then main memory.

The earliest CPU caches were a single undivided block of very small capacity. Intel began splitting the cache in the Pentium era: the cache integrated in the CPU core was not enough to meet the CPU's needs, and manufacturing-process constraints prevented a large increase in its capacity, so the cache integrated with the core was called the level-1 (L1) cache and the external cache the level-2 (L2) cache. The level-1 cache is further divided into a data cache (D-Cache) and an instruction cache (I-Cache), which store data and the instructions to be executed, respectively; both can be accessed by the CPU at the same time, reducing conflicts from contention for the cache and improving processor performance. When Intel launched the Pentium 4, it replaced the instruction cache with a new L1 trace cache with a capacity of 12K µops, meaning it can store 12K micro-operations.

As CPU manufacturing processes advanced, the level-2 cache could be integrated into the CPU core as well, and its capacity has grown year by year. Defining L1 and L2 by whether or not they are integrated into the core is therefore no longer accurate. With the L2 cache on the die, its relationship to the CPU has also changed: it now runs at the same clock speed as the core and can supply data to the CPU at a much higher rate.

The L2 cache is one of the keys to CPU performance: without changing the CPU core, increasing the L2 cache capacity alone can improve performance markedly. High-end and low-end CPUs of the same core often differ precisely in L2 cache size, which shows how important the L2 cache is. When the CPU finds the data it needs in the cache, that is called a hit; when the cache does not contain the required data (a miss), the CPU goes to main memory. In theory, in a CPU with an L2 cache, the L1 hit rate is about 80%: the useful data found in the L1 cache accounts for 80% of all the data the CPU reads, and the remaining 20% is looked up in the L2 cache. Because the data to be executed cannot be predicted exactly, the L2 hit rate is also around 80%, which amounts to about 16% of the total. That still leaves some data to be fetched from main memory, but only a very small proportion. Current high-end CPUs also have an L3 cache, designed to catch data that misses in the L2 cache; in a CPU with an L3 cache, only about 5% of the data has to be fetched from main memory, further improving CPU efficiency.
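
Working through the figures quoted above (80% L1 hit rate, then 80% of the remainder caught by L2); the percentages are the article's illustrative numbers, not measurements:

    l1_hit = 0.80                       # fraction of accesses satisfied by L1
    l2_hit_of_rest = 0.80               # fraction of L1 misses satisfied by L2

    from_l1 = l1_hit                                  # 80% of all accesses
    from_l2 = (1 - l1_hit) * l2_hit_of_rest           # 16% of all accesses
    from_memory = 1 - from_l1 - from_l2               # remaining 4% go to memory

    print(f"L1: {from_l1:.0%}, L2: {from_l2:.0%}, memory: {from_memory:.0%}")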

To keep the hit rate high during CPU accesses, the contents of the cache must be replaced according to some algorithm. A common one is the least recently used (LRU) algorithm, which evicts the row that has been accessed least recently. Each row is therefore given a counter: the LRU algorithm clears the counter of the row that was just hit and adds 1 to the counters of all other rows, and when a replacement is needed, the row with the largest counter value is evicted. This is an efficient and simple scheme; the counter-clearing step keeps frequently used data in the cache while data that is no longer needed gets pushed out, improving cache utilization.
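
A minimal sketch of the counter-based LRU replacement just described (the fixed row count and plain dictionaries are assumptions for illustration only):

    # Counter-based LRU as described above: on a hit, clear that row's counter and
    # increment every other row's counter; on replacement, evict the row with the
    # largest counter (the least recently used one).
    NUM_ROWS = 4

    rows = {}        # maps tag -> cached data
    counters = {}    # maps tag -> age counter

    def access(tag, load_data):
        if tag in rows:                                   # hit
            for t in counters:
                counters[t] += 1
            counters[tag] = 0
            return rows[tag]
        if len(rows) >= NUM_ROWS:                         # full: evict the LRU row
            victim = max(counters, key=counters.get)
            del rows[victim], counters[victim]
        for t in counters:                                # miss: age the others,
            counters[t] += 1
        rows[tag] = load_data()                           # then load from memory
        counters[tag] = 0
        return rows[tag]

    for tag in ["A", "B", "C", "A", "D", "E"]:            # "B" is evicted, not "A",
        access(tag, lambda: f"data-{tag}")                # because "A" was re-used
    print(sorted(rows))   # ['A', 'C', 'D', 'E']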

In current CPU products the L1 cache is generally between 4 KB and 64 KB, while L2 cache capacities are commonly 128 KB, 256 KB, 512 KB, 1 MB, or 2 MB. L1 capacity varies only slightly between products; the L2 cache is the key to improving CPU performance. Increasing L2 capacity is constrained by the manufacturing process: a larger cache means more transistors, and integrating a larger cache into a limited die area places higher demands on the process.

VI. Front-Side Bus

The front-side bus (FSB) is the data channel between the processor and the motherboard's Northbridge chip or memory controller hub, and its frequency directly affects how fast the CPU can access memory. The BIOS can be seen as software closely tied to this part of the system; it lets you adjust the relevant settings and is stored in a chip on the motherboard. That chip is often referred to as the CMOS RAM, although, much as with ATA and IDE, most people confuse the two.

A bus is a set of transmission lines that carries information from one or more source components to one or more destination components; put simply, it is a shared connection used to move information between multiple parts. Bus speed is usually described by a frequency expressed in MHz. There are many kinds of bus. The English name of the front-end bus is "front side bus", usually abbreviated FSB; it is the bus that connects the CPU to the Northbridge chip, and its frequency is determined jointly by the CPU and the Northbridge chip.

The CPU connects to the Northbridge chip through the front-side bus (FSB) and, via the Northbridge, exchanges data with memory and the graphics card. The FSB is therefore the main channel for data exchange between the CPU and the outside world, and its data-transfer capability has a large effect on overall system performance: if the FSB is too slow, even a powerful CPU will not noticeably speed up the computer. The maximum data bandwidth depends on the width of the data transferred per cycle and the transfer frequency, that is, data bandwidth = (bus frequency × data bus width) ÷ 8. Front-side bus frequencies on current PCs include 266 MHz, 333 MHz, 400 MHz, 533 MHz, and 800 MHz. The higher the FSB frequency, the greater the data-transfer capability between the CPU and the Northbridge, and the more fully the CPU's capabilities can be exploited. CPU technology is advancing rapidly and computing speeds keep rising; a sufficiently fast front-side bus ensures that enough data reaches the CPU, while a slow one starves the CPU, limits its performance, and becomes a system bottleneck.
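
Applying the bandwidth formula above to an assumed 800 MHz front-side bus with an assumed 64-bit data path (example numbers only):

    # Data bandwidth = (bus frequency x data bus width) / 8, giving bytes per second.
    bus_frequency_hz = 800_000_000    # assumed 800 MHz effective FSB frequency
    data_width_bits = 64              # assumed 64-bit wide data bus

    bandwidth_bytes_per_s = bus_frequency_hz * data_width_bits // 8
    print(f"{bandwidth_bytes_per_s / 1e9:.1f} GB/s")   # 6.4 GB/s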

The speed of the bus between the CPU and the Northbridge chip represents the speed of data transfer between the CPU and the outside world. The external frequency, by contrast, is the base oscillation rate of the digital pulse signal: a 100 MHz external frequency means the pulse signal oscillates one hundred million times per second, and it also governs the PCI and other bus frequencies. The two concepts, front-side bus and external frequency, are easily confused, mainly because for a long time (before the Pentium 4, and in its early days) the FSB frequency was the same as the external frequency, so people habitually called the FSB the external frequency. As computer technology developed, it became necessary for the FSB frequency to be higher than the external frequency, which is achieved today with QDR (quad data rate) and similar technologies. Their principle is like AGP's 2x or 4x mode: they make the FSB frequency two, four, or even more times the external frequency, and since then the distinction between the front-side bus and the external frequency has drawn attention.
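
A tiny sketch of that relationship (the 200 MHz base clock and the quad-pumping factor are assumptions used purely for illustration):

    # Quad-pumped (QDR-style) bus: four data transfers per tick of the external frequency.
    external_frequency_mhz = 200      # assumed base (external) clock
    transfers_per_clock = 4           # quad data rate

    effective_fsb_mhz = external_frequency_mhz * transfers_per_clock
    print(f"Effective front-side bus: {effective_fsb_mhz} MHz")   # 800 MHz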

VII. CPU Core Types

Core Types of the Athlon XP

The Athlon XP has four different core types, which share some common features: all use the Socket A interface and all are rated by PR (performance rating) numbers rather than actual clock speed.

Palomino

This is the earliest Athlon XP core. It uses a 0.18 µm manufacturing process, a core voltage of about 1.75 V, 256 KB of L2 cache, OPGA packaging, and a 266 MHz front-side bus.

Thoroughbred

This is the first Athlon XP core made on a 0.13 µm process, and it comes in Thoroughbred-A and Thoroughbred-B revisions. The core voltage is about 1.65-1.75 V, the L2 cache is 256 KB, the packaging is OPGA, and the front-side bus runs at 266 MHz or 333 MHz.

Thorton

Made on a 0.13 µm process, with a core voltage of about 1.65 V, 256 KB of L2 cache, OPGA packaging, and a 333 MHz front-side bus. It can be regarded as a Barton with half of its L2 cache disabled.

Barton

Made on a 0.13 µm process, with a core voltage of about 1.65 V, 512 KB of L2 cache, OPGA packaging, and a front-side bus of 333 MHz or 400 MHz.

Core Types of the New Duron

Applebred

It uses a 0.13 µm process, a core voltage of about 1.5 V, 64 KB of L2 cache, OPGA packaging, and a 266 MHz front-side bus. It is not rated with PR numbers but marketed at its actual frequency, in three models: 1.4 GHz, 1.6 GHz, and 1.8 GHz.

Core Types of the Athlon 64 Series CPUs

Clawhammer

It uses a 0.13 µm process, a core voltage of about 1.5 V, 1 MB of L2 cache, and mPGA packaging; it uses the HyperTransport bus and has an integrated memory controller. It is offered with Socket 754, Socket 940, and Socket 939 interfaces.

Newcastle

Its main difference from ClawHammer is that the L2 cache is reduced to 512 KB (a result of AMD's relatively low-price policy to meet market demand and speed up the adoption of 64-bit CPUs); other characteristics are basically the same.

AMD Dual-Core Processors

These are the dual-core Opteron series and the new Athlon 64 X2 series. The Athlon 64 X2 is the dual-core desktop processor line positioned against the Pentium D and Pentium Extreme Edition.

AMD's Athlon 64 X2 essentially combines two Athlon 64 cores (such as the Venice core) on one chip; each core has its own 512 KB (or 1 MB) L2 cache and its own execution units. Apart from the extra core, the architecture is not significantly changed from the Athlon 64.

Most of the Athlon 64 X2's specifications and features are no different from the familiar Athlon 64 architecture: the new dual-core processor still supports the 1 GHz HyperTransport bus, and its DDR memory controller still supports dual-channel configurations.

Unlike Intel's dual-core processors, the two cores of the Athlon 64 X2 do not need an MCH to coordinate them. AMD builds a System Request Queue (SRQ) into the Athlon 64 X2: during operation each core places its requests in the SRQ, and when resources become available the requests are dispatched to the appropriate execution core. In other words, the whole process is handled inside the CPU without involving external devices.

In its dual-core design AMD integrates the two cores on the same silicon die, whereas Intel's approach is more like simply packaging two cores together. Compared with Intel's design, AMD's dual-core processors have no inter-core transfer bottleneck, so in this respect the Athlon 64 X2 architecture is clearly superior to the Pentium D architecture.

Although AMD does not have to worry about heat and power consumption the way Intel does with the Prescott core, it still has to consider reducing the power draw of a dual-core processor. Rather than lowering clock speeds, AMD uses its so-called Dual Stress Liner strained-silicon technology, combined with SOI, in the 90 nm Athlon 64 X2, which yields transistors with higher performance and lower power consumption.

The most tangible advantage of AMD's Athlon 64 X2 is that the new dual-core processor can be used without changing platforms: an older motherboard only needs a BIOS upgrade. Compared with Intel's dual-core processors, which require a new platform, upgrading to a dual-core system this way saves a lot of money.

Intel CPU Cores

Tualatin

This is the famous "Tualatin" core, the last CPU core Intel made for the Socket 370 architecture. It uses a 0.13 µm process, FC-PGA2 and PPGA packaging, and a core voltage reduced to about 1.4 V, with clock speeds from 1 GHz to 1.4 GHz. The external frequency is 100 MHz (Celeron) or 133 MHz (Pentium III), and the L2 cache is 512 KB (Pentium III-S) or 256 KB (Pentium III and Celeron). It is the strongest Socket 370 core, with performance that even surpasses the early low-clocked Pentium 4 CPUs.

Willamette

This is the core of the early Pentium 4 and the Pentium 4-based Celeron. It initially used the Socket 423 interface and later moved to Socket 478 (the Celeron versions exist only at 1.7 GHz and 1.8 GHz, both Socket 478). Built on a 0.18 µm process with a 400 MHz front-side bus, its clock speeds run from 1.3 GHz to 2.0 GHz on Socket 423 and from 1.6 GHz to 2.0 GHz on Socket 478, with 256 KB of L2 cache on the Pentium 4 and 128 KB on the Celeron (note that some Socket 423 Pentium 4 models have no L2 cache at all). The core voltage is about 1.75 V, and the packaging includes OOI, PPGA INT2, and PPGA INT3 for Socket 423, FC-PGA2 for Socket 478, and PPGA for the Celeron. The Willamette core's manufacturing process lagged behind, and it ran hot with relatively low performance; it has been replaced by the Northwood core.

Northwood

This is the core of the mainstream Pentium 4 and Celeron. Its biggest improvement over Willamette is the move to a 0.13 µm process. It uses the Socket 478 interface with a core voltage of about 1.5 V; the L2 cache is 128 KB on the Celeron and 512 KB on the Pentium 4, and the front-side bus runs at 400, 533, or 800 MHz (the Celeron only at 400 MHz). Clock speeds range from 2.0 GHz to 2.8 GHz for the Celeron, 1.6 GHz to 2.6 GHz for the 400 MHz FSB Pentium 4, 2.26 GHz to 3.06 GHz for the 533 MHz FSB Pentium 4, and 2.4 GHz to 3.4 GHz for the 800 MHz FSB Pentium 4 (the 3.06 GHz Pentium 4 and all 800 MHz FSB Pentium 4s support Hyper-Threading). The packaging is PPGA FC-PGA2 and PPGA. According to Intel's roadmap, the Northwood core will soon be replaced by the Prescott core.

Prescott

This is Intel's latest CPU core. It is currently used in the Pentium 4 5xx series (such as the Pentium 4 530) and the Celeron D, as well as a small number of higher-clocked Pentium 4 models. Its biggest differences from Northwood are the 0.09 µm (90 nm) manufacturing process and a longer pipeline. It initially used the Socket 478 interface, but all current production has moved to LGA 775. The core voltage is 1.25-1.525 V, and the front-side bus runs at 533 MHz (without Hyper-Threading support) or 800 MHz (with Hyper-Threading); the Pentium 4 Extreme Edition reaches 1066 MHz. Compared with Northwood, the L1 data cache grows from 8 KB to 16 KB and the L2 cache from 512 KB to 1 MB or 2 MB. The packaging method is PPGA. The Prescott core has replaced Northwood as the mainstream product on the market.

Intel Dual-Core Processors

Intel's dual-core processors are the Pentium D and the Pentium Extreme Edition, launched together with the 945/955 chipsets to support them. Both new dual-core processors are made on the 90 nm process and use the pin-less LGA 775 interface, though the number of capacitors on the bottom of the package increases and their arrangement differs.

On the desktop the core is code-named Smithfield, officially marketed as the Pentium D. Besides the switch from Arabic numerals to a letter to mark the new generation, the "D" also makes it easy to think of "dual core".

Intel's dual-core architecture is more like a dual-CPU platform. The Pentium D continues to use the Prescott architecture and the 90 nm process: its die is effectively two independent Prescott cores, each with its own 1 MB L2 cache and execution units, for a total of 2 MB. However, because the two cores have independent caches, the information in each L2 cache must be kept fully consistent, or computation errors would result.

To solve this problem, Intel hands coordination between the two cores to the external MCH (Northbridge) chip. Although the volume of data exchanged between the caches is not huge, the need to go through the external MCH inevitably adds delay to the whole process and thus affects the processor's overall performance.

Because it is based on the Prescott core, the Pentium D also supports EM64T and the XD-bit security feature. Notably, the Pentium D does not support Hyper-Threading. The reason is clear: correctly distributing data streams and balancing computing tasks across multiple physical and multiple logical processors at the same time is not easy. For example, if an application needs two computing threads, each thread obviously maps to one physical core, but what if there are three? To keep the dual-core Pentium D architecture simple for the mainstream market, Intel decided to drop Hyper-Threading support in the Pentium D.

As is usual with Intel naming, the Pentium D and Pentium Extreme Edition dual-core processors differ in specification, and the biggest difference is Hyper-Threading support: the Pentium D does not support it, whereas the Pentium Extreme Edition does. With Hyper-Threading enabled, the dual-core Pentium Extreme Edition presents two additional logical processors and is recognized by the operating system as a system with four logical processors.

8. CPU Manufacturing Process

This refers to the width of the connecting lines between components produced on the silicon during CPU manufacturing, usually expressed in microns (or nanometers). The smaller the value, the more advanced the process, the higher the frequency the CPU can reach, and the more transistors that can be integrated. At present, Intel's Pentium 4 and AMD's Athlon XP have reached 0.13-micron manufacturing processes, and the newest cores (such as Prescott) have moved to 0.09 micron (90 nm).

9. CPU Extended Instruction Set

The CPU performs computation and controls the system by executing instructions. Every CPU is designed with an instruction set that works with its hardware circuitry; the strength of those instructions is an important indicator of the CPU, and the instruction set is one of the most effective tools for improving a microprocessor's efficiency. In terms of mainstream architecture, instruction sets fall into two camps: complex instruction sets (CISC) and reduced instruction sets (RISC). In terms of specific applications, Intel's MMX (MultiMedia eXtensions), SSE, SSE2 (Streaming SIMD Extensions 2), and SSE3, and AMD's 3DNow!, are CPU extended instruction sets, which enhance the CPU's ability to handle multimedia, graphics, and Internet workloads. These extended instruction sets are usually what people mean by a CPU's "instruction set". SSE3 is currently the smallest of these sets: MMX contains 57 instructions, SSE 50, SSE2 144, and SSE3 13. SSE3 is also the most advanced of them.

10. Pipeline and Superpipeline

Although the pipeline was discussed above, the superpipeline deserves a brief mention here.

Intel first used a pipeline on the 486 chip. A pipeline works like an assembly line in industrial production: five or six circuit units with different functions form an instruction-processing pipeline, an x86 instruction is split into five or six steps, and those units execute the steps in turn, so that on average one instruction can be completed per CPU clock cycle, raising the CPU's computing speed. "Superpipelined" refers to a CPU whose pipeline has more than the usual five or six steps; for example, the Pentium Pro's pipeline has as many as 14 stages. The more finely the pipeline is divided into steps (stages), the less work each stage does toward completing an instruction, which lets the design reach higher clock speeds. However, an excessively long pipeline has side effects: a CPU with a high clock speed may well have a lower real computing speed. This happened with Intel's Pentium 4: although its clock speed could exceed 1.4 GHz, its computing performance fell far short of AMD's 1.2 GHz Athlon, and even of the Pentium III.

11. Packaging

CPU packaging is the process of enclosing the CPU die or module in a specific material to protect it from damage; a CPU must be packaged before it can be delivered to users. The packaging method depends on how the CPU is installed and on its form-factor design. Broadly speaking, CPUs installed in Socket-type sockets usually use PGA (pin grid array) packaging, while CPUs installed in Slot x slots use SEC (single edge contact) cartridge packaging. There are also technologies such as PLGA (plastic land grid array) and OLGA (organic land grid array). Because market competition is increasingly fierce, the main direction of CPU packaging technology today is cost reduction.

12. HT (Hyper-Threading)

Finally, Intel's much-promoted HT technology. Intel's officially released Hyper-Threading technology first appeared on the Xeon processor. With it, Intel offered the first physical processor to integrate two logical processing units (that is, two logical processors on a single chip), claimed to improve processor performance by up to 40%; similar technology was expected to appear on AMD's K8 "Hammer" processors.

What is Hyper-Threading?

Processor development today generally aims to raise the rate at which the processor executes instructions in parallel, but the performance gains are often unsatisfactory because the instruction streams conflict over the CPU resources they use. Hyper-Threading integrates two logical processors (note: processors, not just extra execution units) on one processor, giving a CPU with this technology the ability to execute multiple threads at the same time, something other microprocessors of the time could not do.

Simply put, Hyper-Threading is a simultaneous multi-threading (SMT) technology. Its principle is straightforward: it makes one CPU appear as two, turning a Hyper-Threading-capable "physical" processor into two "logical" processors. To the operating system a logical processor is indistinguishable from a physical one, so the operating system schedules threads onto "both" processors, allowing multiple threads, whether from several applications or from a single one, to execute on the same processor at the same time. The two logical processors, however, share all of the CPU's execution resources.
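
A toy illustration of the idea (not a model of any real processor; the instruction streams are invented and deliberately chosen so their stalls complement each other): when one thread stalls, the shared execution resources can work on the other thread, so fewer cycles are wasted.

    # Two hypothetical instruction streams sharing one physical core.
    # 'x' marks a cycle of useful work, '-' a stall (e.g. waiting on memory).
    thread_a = "xx--xx--xx--"
    thread_b = "--xx--xx--xx"

    # Running the threads one after the other wastes every stalled cycle.
    back_to_back_cycles = len(thread_a) + len(thread_b)

    # An SMT-style core can issue from whichever thread has work ready each cycle,
    # so in this contrived case both threads finish in the time one would take alone.
    smt_cycles = len(thread_a)
    busy = sum(1 for a, b in zip(thread_a, thread_b) if "x" in (a, b))

    print(f"back to back   : {back_to_back_cycles} cycles")
    print(f"SMT-interleaved: {smt_cycles} cycles ({busy} of them doing useful work)")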
