Document directory
- 1. Why did processors move from single-core to multi-core?
- 2. Why threads?
- 3. Thread Models
Introduction:
This mirrors the evolution of the operating system and its hardware: from single-core processors to multiprocessors and multi-core chips, and from processes to threads and multithreading. That evolution raises the following questions:
1. Why did processors move from single-core to multi-core?
Power consumption limits how far a single-core processor can keep improving its performance:
The component at the heart of a computer, and the one that most determines its performance, is the processor; its performance comes down to how efficiently it executes instructions.
Processor performance = clock speed × IPC
The two main indicators of processor performance are therefore the number of instructions executed per clock cycle (IPC) and the clock speed.
Accordingly, there are two ways to improve processor performance:
increase the clock speed, or increase the number of instructions executed per clock cycle (IPC).
1) Changes to the processor's microarchitecture can change the IPC: a more efficient microarchitecture yields a higher IPC and therefore higher performance. Within the same architecture generation, however, the IPC gains available from microarchitectural tweaks are very limited. In the single-core era, raising the clock speed therefore became the only practical means of improving performance.
2) But raising the clock speed cannot go on forever. The derivation below shows that a processor's power consumption is proportional to the product of current, voltage, and clock speed, and that both the clock speed and the current are proportional to the voltage:
Processor power consumption ∝ current × voltage × clock speed
Clock speed ∝ voltage, and current ∝ voltage
Therefore, processor power consumption ∝ clock speed³
3) If performance is improved only by raising the clock speed, power consumption rises with the cube of the frequency, i.e., sharply and non-linearly, and the processor soon hits the power wall. Since pushing frequency further costs too much energy, we must look elsewhere for performance, namely at improving IPC.
IPC can be improved by increasing the parallelism of instruction execution. There are two ways to do that: make the processor's microarchitecture itself more parallel, or adopt a multi-core architecture.
With the same microarchitecture, adding cores raises the processor's aggregate IPC while effectively keeping the growth in power consumption under control:
Processor power consumption ∝ current × voltage × clock speed
IPC ∝ current (at a fixed voltage and clock speed, more cores draw proportionally more current)
Therefore, processor power consumption ∝ IPC
Going from a single-core to a dual-core processor at an unchanged clock speed theoretically doubles the IPC and, because power grows linearly with IPC, also roughly doubles the power consumption. In practice, a dual-core processor only needs to match the single-core's performance, so it can run at a lower clock speed, and power consumption then falls with the cube of that frequency. This is why dual-core processors launch at lower clock speeds than single-core ones yet deliver better performance.
The trend in processor design follows: to reach higher performance with the same microarchitecture, increase the number of CPU cores while keeping the frequency modest. More parallelism raises the IPC, and the lower frequency effectively contains the growth in power consumption.
2. Why threads?
The objectives are:
1) As discussed above, better support for SMP and multi-core: the threads of one process can execute simultaneously on multiple CPUs or cores, giving good support for parallel processing.
2) Lower context-switching overhead: threads of the same process share its resources, state, and address space, so similar tasks reuse the same logical memory; processes, by contrast, each have a private virtual address space and can share only explicitly mapped physical memory.
3. Thread Models
For the two purposes above, two major thread models were proposed: user-level threads and kernel-level threads. The classification criterion is whether the thread scheduler sits inside the kernel.
Scheduling inside the kernel enables true parallelism; scheduling outside the kernel keeps context-switching overhead low.
1. With kernel-level threads, the kernel controls thread creation and scheduling.
2. With pure user-level threads, scheduling is implemented entirely outside the kernel, and a switch only requires swapping the thread's running stack. However, because the kernel sees and signals only processes, it cannot address individual user threads, so this model cannot exploit a multiprocessor system.
As a result, most operating systems adopt neither model alone but combine the two. Solaris and Linux, for example, use a hybrid model like this:
1. The kernel provides kernel threads, which carry the basic abstractions such as hardware state and software context, meeting the needs of SMP;
2. A thread library is supported on top, where each kernel thread acts as a dispatcher for multiple user threads, reducing context-switching overhead. Each user thread can be scheduled and executed independently of the process's other threads.
In this model, the kernel schedules a process's kernel threads, and each kernel thread in turn dispatches user threads. The aim is to let a process create a large number of user threads, and thus provide concurrency, while keeping the load on the kernel minimal.