An Overview of High-End UNIX Server Technology

Source: Internet
Author: User

For servers, whether PC servers or UNIX servers, it is becoming more and more difficult to raise performance simply by improving the computing power of a single processor. Manufacturers have made unremitting efforts in materials, process technology, and design, and have kept CPU clock rates growing quickly, but the high power consumption of high-frequency operation brings power-supply and heat-dissipation problems, and the resulting electromagnetic-compatibility issues in the machine as a whole in turn limit how far single-CPU computing power can be pushed. Improving the speed and performance of a single processor is clearly a spent force; developing parallel processing with multiple CPUs is the effective way to raise the processing capacity and speed of modern servers. This is also why multiprocessor designs are no longer the preserve of UNIX servers but are widely used in PC servers as well. The parallel-processing technologies that currently receive the most industry attention are SMP, MPP, COMA, cluster, and NUMA.

1. SMP Technology

SMP (Symmetric Multi-Processing) is a widely used parallel technology, defined in contrast to asymmetric multiprocessing. In this architecture, multiple processors on one computer run a single copy of the operating system and share memory and other resources. All processors have equal access to memory, I/O, and external interrupts.

In an asymmetric multiprocessing system, tasks and resources are managed by different processors: some CPUs handle only I/O, while others handle only tasks submitted to the operating system. Such a system obviously cannot achieve load balancing. In a symmetric multiprocessing system, resources are shared by all CPUs in the system, and the workload can be distributed evenly across all available processors.
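The symmetric scheduling idea above can be sketched in code. The snippet below is an illustrative Python analogy (threads standing in for CPUs, not anything from a real scheduler): a mixed bag of I/O-style and compute-style tasks is handed to a symmetric pool in which no worker is dedicated to any task type, so any free worker can pick up any task.

```python
from concurrent.futures import ThreadPoolExecutor

def handle(task):
    # In an SMP system any processor can run any kind of task:
    # I/O requests and compute jobs go to the same shared pool.
    kind, payload = task
    return f"{kind}:{payload * 2}"

tasks = [("io", 1), ("compute", 2), ("io", 3), ("compute", 4)]

# Four symmetric workers; none is reserved for a particular task type,
# unlike the asymmetric design where some CPUs handle only I/O.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(handle, tasks))

print(results)  # ['io:2', 'compute:4', 'io:6', 'compute:8']
```

Because every worker is interchangeable, the pool balances load automatically; an asymmetric design would instead need one queue per dedicated processor.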

At present, the CPUs in most SMP systems access data over a shared system bus to achieve symmetric multiprocessing. Some RISC server vendors instead use a crossbar or switch to connect multiple CPUs; although this gives better performance and scalability than the shared-bus Intel architecture, the scalability of SMP is still limited.

The difficulty of adding more processors to an SMP system is that the system must spend resources coping with two major problems: memory contention among processors and memory synchronization. Memory contention means that when multiple processors share data in memory, they cannot all read and write it at the same time. While one CPU is reading a piece of data, other CPUs may also read it; but when a CPU is modifying a piece of data, it locks that data, and the other CPUs must wait before they can operate on it.
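The locking behaviour described above can be mimicked with ordinary threads. This is a minimal Python sketch, with threads standing in for CPUs and a `threading.Lock` standing in for the hardware lock on a piece of shared data: while one "CPU" holds the lock to modify the counter, the others must wait, so no update is lost.

```python
import threading

counter = 0
lock = threading.Lock()

def writer(n):
    global counter
    for _ in range(n):
        # The lock plays the role of the CPU locking the shared data:
        # while one "CPU" modifies it, the other "CPUs" must wait.
        with lock:
            counter += 1

# Four "CPUs", each performing 10,000 read-modify-write cycles.
threads = [threading.Thread(target=writer, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 40000: serialized writes mean no update is lost
```

The waiting this serialization imposes is exactly the cost the text describes: the more writers contend for the same data, the more time each spends blocked on the lock.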

Obviously, the more CPUs there are, the more serious this waiting problem becomes; system performance then not only fails to improve but may even fall. To allow as many CPUs as possible to be added, today's SMP systems basically enlarge the server's cache capacity to reduce memory contention, because the cache is the CPU's "local memory" and exchanges data with the CPU far faster than the memory bus. And because caches are not shared, multiple CPUs do not contend for the same cache resource; many data operations can be completed entirely within the CPU's internal or external cache.

However, while caching relieves the memory-contention problem in an SMP system, it creates another hard problem: memory synchronization. When each CPU accesses memory data through its cache, the system must keep the data in memory consistent with the data in the caches: if the cache contents are updated, the corresponding memory contents must be updated as well, or the consistency of the system's data is broken. Every update occupies the CPU and locks the updated region of memory, so updating too frequently inevitably hurts system performance, while too long an update interval risks interleaved reads and writes seeing stale data; the SMP update algorithm is therefore critical. Current SMP systems use a bus-snooping ("interception") algorithm to keep the data in the CPU caches consistent with memory. The larger the cache, the lower the probability that memory contention recurs, and because cache transfers are fast, a larger cache also raises CPU efficiency; but keeping memory synchronized remains very difficult for the system.
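The snooping scheme can be modelled in a few lines. The following Python sketch is a toy write-invalidate protocol (the `Bus` and `Cache` classes and their methods are illustrative inventions, not any real coherence specification): every cache watches the shared bus, and when one CPU writes a line, the other caches invalidate their copies and re-fetch from memory on the next access.

```python
class Bus:
    """Toy shared bus: broadcasts write notices to every snooping cache."""
    def __init__(self):
        self.caches = []

    def broadcast_write(self, writer, addr):
        for cache in self.caches:
            if cache is not writer:
                cache.snoop_invalidate(addr)

class Cache:
    """Toy write-through cache with snoop-based invalidation."""
    def __init__(self, bus, memory):
        self.lines = {}          # addr -> cached value
        self.bus = bus
        self.memory = memory
        bus.caches.append(self)

    def read(self, addr):
        if addr not in self.lines:            # miss: fetch from shared memory
            self.lines[addr] = self.memory[addr]
        return self.lines[addr]

    def write(self, addr, value):
        self.lines[addr] = value
        self.memory[addr] = value             # write through, keeping memory current
        self.bus.broadcast_write(self, addr)  # let the other caches snoop this write

    def snoop_invalidate(self, addr):
        self.lines.pop(addr, None)            # drop the now-stale copy

memory = {0x10: 1}
bus = Bus()
cpu0, cpu1 = Cache(bus, memory), Cache(bus, memory)

cpu1.read(0x10)         # cpu1 caches the old value 1
cpu0.write(0x10, 99)    # cpu0 writes; cpu1's stale copy is invalidated
print(cpu1.read(0x10))  # 99: the re-read after invalidation sees the new value
```

Real snooping hardware (e.g. MESI-style protocols) tracks per-line states rather than simply dropping lines, but the essential mechanism is the same: every cache observes the shared bus and reacts to other processors' writes.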

On the hardware side, SMP can be implemented on UltraSPARC, SPARCserver, Alpha, and PowerPC architectures, and on all Intel chips from the 486 onward.
