Analyzing the fundamental differences between multi-core and multithreading technology

Keywords: execution, multi-core, multithreading, operating system, acceleration

What is multi-core technology? Multi-core refers to the integration of two or more complete compute engines (cores) in one processor. Multi-core technology grew out of engineers' realization that simply raising the clock speed of a single-core chip generates too much heat without delivering a corresponding performance gain, as earlier processor products had already shown. They recognized that, if the earlier rate of increase continued, the heat produced by a processor would soon rival the surface of the sun. Even setting the heat problem aside, the cost-performance ratio would be hard to accept, because faster processors are disproportionately more expensive.
Intel engineers developed multi-core chips to improve performance by "scaling out" rather than "scaling up". The architecture puts a "divide and conquer" strategy into practice.

By dividing tasks, a threaded application can take full advantage of multiple execution cores and perform more work in a given amount of time. A multi-core processor is a single chip (a single piece of silicon) that plugs directly into one processor socket, yet the operating system treats each of its execution cores, together with the associated resources, as a discrete logical processor. By dividing tasks between its execution cores, a multi-core processor can complete more work in a given number of clock cycles.
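To make the divide-and-conquer idea concrete, here is a minimal Java sketch (not from the original article; the class name and the summation workload are illustrative assumptions) that splits one job into one slice per available core:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Split a summation into one slice per available core, then combine the results.
public class DivideAndSum {
    public static void main(String[] args) throws Exception {
        int cores = Runtime.getRuntime().availableProcessors();
        long n = 100_000_000L;
        long chunk = n / cores;

        ExecutorService pool = Executors.newFixedThreadPool(cores);
        List<Future<Long>> parts = new ArrayList<>();

        for (int i = 0; i < cores; i++) {
            final long start = i * chunk + 1;
            final long end = (i == cores - 1) ? n : (i + 1) * chunk;
            // Each slice is independent, so each core can work on one in parallel.
            parts.add(pool.submit(() -> {
                long sum = 0;
                for (long k = start; k <= end; k++) sum += k;
                return sum;
            }));
        }

        long total = 0;
        for (Future<Long> part : parts) total += part.get();  // gather partial sums
        pool.shutdown();

        System.out.println("sum 1.." + n + " = " + total);
    }
}
```

On a multi-core machine the slices genuinely run in parallel; on a single-core machine the same code still runs, only without the speedup.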

Multi-core architecture lets current software run better and lays a more capable foundation for future software. While software vendors are still exploring new concurrency patterns, existing software can migrate to multi-core platforms without modification. Operating systems designed to exploit multiple processors run unchanged. To take full advantage of multi-core technology, application developers need to put more thought into threading, but the design process is the same one used today for symmetric multiprocessing (SMP) systems, and existing single-threaded applications will continue to run.

Software that already applies threading technology shows superior performance scalability when running on multi-core processors. Such software includes multimedia applications (content creation, editing, and local and streaming playback), engineering and other technical-computing applications, and middle-tier and back-tier server applications such as application servers and databases. Multi-core technology lets servers process in parallel tasks that previously required multiple processors; a multi-core system is easier to scale, and it packs more robust processing power into a smaller form factor that consumes less power and produces less heat per unit of computing.

Multi-core technology is also the natural outcome of the past twenty years of processor development. Two main factors have driven microprocessor performance: the rapid progress of semiconductor process technology and the continuous development of the architecture. Each advance in semiconductor technology poses new problems for architecture research and opens up new territory, and each advance in architecture further improves performance on top of what the process provides; the two factors influence and reinforce each other. Roughly speaking, advances in process and circuit technology have raised processor performance by about 20 times, advances in architecture by about 4 times, and advances in compiler technology by about 1.4 times. Today that pattern is hard to sustain, and the emergence of multi-core is the inevitable result of technological development and application demand.

There is no doubt that "multi-core" and "multithreading" have quickly become the two most prominent terms in processor architecture design, much as Confucianism and Mohism were the two great schools of thought of the Warring States period. But where those schools fought each other to the death, multi-core and multithreading are mutually compatible, and today almost every processor is moving toward more cores and more threads.

Although the two terms are everywhere, how many people know the actual difference between them? Which takes priority in a design: does multi-core come first, or multithreading? Since we would all like to understand this better, this article tries to explain the difference, avoiding complex implementation details as far as possible, so that you come away with a grasp of both mechanisms and of how they differ.

Processes came before threads

As information technology has developed, the units into which software execution is subdivided have grown progressively smaller: first came the process, and only later the thread, a unit smaller than the process. A process can contain multiple threads, and every thread within a process shares that process's memory addressing and memory-management mechanisms, including its execution privileges and memory space. Each thread itself holds only the small amount of private state it needs to execute, essentially its own program counter, registers, and stack, and for everything else it follows the rules laid down by the process it belongs to.
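A minimal Java sketch of that sharing (the class name and the counter are illustrative assumptions): both threads see the same heap object, while each keeps its loop variable on its own private stack.

```java
import java.util.concurrent.atomic.AtomicLong;

// Two threads inside one process: they share the same heap object,
// while each thread's local variables live on its own private stack.
public class SharedAddressSpace {
    static final AtomicLong shared = new AtomicLong();  // visible to every thread in this process

    public static void main(String[] args) throws InterruptedException {
        Runnable work = () -> {
            long local = 0;                       // thread-private, on this thread's stack
            for (int i = 0; i < 1_000_000; i++) {
                local++;
                shared.incrementAndGet();         // both threads update the same memory
            }
            System.out.println(Thread.currentThread().getName() + ": local = " + local);
        };

        Thread a = new Thread(work, "worker-a");
        Thread b = new Thread(work, "worker-b");
        a.start();
        b.start();
        a.join();
        b.join();

        System.out.println("shared = " + shared.get());  // 2000000: same address space
    }
}
```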

Processes, in contrast, each use their own memory configuration, including paging and segmentation settings, execution privileges, stack layout, and so on. If a processor that is executing process A must switch to executing process B, the memory-management configuration has to be swapped out and reloaded. If all of that state fits inside the processor the switch is tolerable, but if it has to be fetched from cache or even main memory, the performance cost is severe: in the time it takes to complete such a relocation and context switch, the processor could have executed anywhere from dozens to thousands of instructions.
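By way of contrast with the thread example above, here is a hedged Java sketch (the class name is an assumption, and it presumes a `java` launcher is on the PATH): a thread touches the parent's variable directly, while a child process started with ProcessBuilder runs in a completely separate address space.

```java
import java.util.concurrent.TimeUnit;

// A thread shares this JVM's memory, while a child process started with
// ProcessBuilder gets an entirely separate address space of its own.
public class ProcessVsThread {
    static int value = 0;  // lives in this process's memory

    public static void main(String[] args) throws Exception {
        // A thread reads and writes `value` directly: same address space.
        Thread t = new Thread(() -> value = 42);
        t.start();
        t.join();
        System.out.println("after thread: value = " + value);  // 42

        // A child process never sees `value`; the operating system has to set up
        // a whole new address space and memory mapping before it can run.
        // (Assumes a `java` launcher is available on the PATH.)
        Process child = new ProcessBuilder("java", "-version").inheritIO().start();
        child.waitFor(10, TimeUnit.SECONDS);
    }
}
```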

Two routes to acceleration

To avoid this kind of switching overhead, there are two lines of thinking. The first is to scale out the system as a whole: build the computer with more processors, let a single operating system control and manage all of them, and have it dispatch each runnable process to a processor for execution. With many processes running simultaneously, one per processor, overall throughput improves.

Of course, this acceleration has a prerequisite: the operating system must be built and configured to manage multiple processors. If it is built as a single-processor system, the operating system cannot control more than one processor in the machine, and there is no question of it dispatching application processes to several processors to run at the same time.

Even if the operating system supports multiple processors, if the application itself consists of only a single process, the situation is no better: the operating system cannot split a single process on its own, so it can still only feed it to one processor, and there is no acceleration.

Using multiple processors at once, with each processor executing one process, is one acceleration method. The other is to avoid, as far as possible, switching the memory-management configuration at all, and threading is the product of that idea.

However, threads also need support from the programming language to be useful. Threading as a concept and as an implementation arrived when C++ was already established and Java had not yet appeared, so C++ could only support multithreading through external API calls (for example, operating-system threading libraries), which meant existing programs had to be rewritten to invoke those APIs. By comparison, Java, which came out later than C++, supports multithreading natively, so programs can exploit threads and their acceleration benefits without rewriting around a separate API.
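A minimal sketch of that native support, using only what ships with the JDK (the class name and the counts are illustrative assumptions):

```java
// Java's threading support is built into the language and the standard runtime:
// java.lang.Thread and the synchronized keyword, with nothing extra to install.
public class NativeThreads {
    private static int hits = 0;

    // Language-level coordination between threads.
    private static synchronized void record() {
        hits++;
    }

    public static void main(String[] args) throws InterruptedException {
        Runnable task = () -> {
            for (int i = 0; i < 10_000; i++) record();
        };

        Thread t1 = new Thread(task);  // java.lang.Thread, no external API required
        Thread t2 = new Thread(task);
        t1.start();
        t2.start();
        t1.join();
        t2.join();

        System.out.println("hits = " + hits);  // 20000
    }
}
```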

With threads, execution can be divided and scheduled in much finer pieces. The benefit shows up not only on multi-processor systems but also on a single processor. On a multi-processor system, each processor can execute not just a separate process but a separate thread; on a single-processor system, switching between threads avoids relocating the memory-management configuration, so the speedup is very noticeable. Threads make the dispatching and distribution of work more precise and flexible.
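As a rough illustration of that finer-grained dispatch (the class name and task counts are assumptions), a small pool of threads can absorb many more tasks than there are cores:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Many small units of work dispatched onto a handful of threads: the scheduler
// interleaves them without any process-level memory-management switching.
public class FineGrainedTasks {
    public static void main(String[] args) throws Exception {
        int workers = Math.max(2, Runtime.getRuntime().availableProcessors());
        ExecutorService pool = Executors.newFixedThreadPool(workers);

        // 200 tiny tasks, far more tasks than threads or cores.
        List<Callable<Integer>> tasks = new ArrayList<>();
        for (int i = 0; i < 200; i++) {
            final int id = i;
            tasks.add(() -> id * id);
        }

        int total = 0;
        for (Future<Integer> f : pool.invokeAll(tasks)) {
            total += f.get();
        }
        pool.shutdown();

        System.out.println("sum of squares 0..199 = " + total);
    }
}
```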
