C++ Concurrent Programming - Chapter 1: Hello, World of C++ Concurrency


Reprinted from the Concurrent Programming Network (ifeve.com)


This article is "C + + concurrent Programming" The first chapter, thanks to the people's post and telecommunications publishing house authorized concurrent Programming network published this article, copyright, please do not reprint. The book will be listed in the near future.

Main contents of this chapter

    • What concurrency and multithreading are
    • Why you would use concurrency and multithreading in your applications
    • The history of C++ concurrency support
    • What a simple multithreaded C++ program looks like

This is an exciting time for C++ users. Thirteen years after the original C++ standard was released in 1998, the C++ Standards Committee has given the language and its supporting library a major overhaul. The new C++ standard (referred to as C++11 or C++0x) was published in 2011 and brings a whole swath of changes that make working with C++ easier and more productive.

One of the most significant new features in the C++11 standard is support for multithreaded programs. For the first time, the C++ standard acknowledges the existence of multithreaded applications in the language and provides components in the library for writing them. This makes it possible to write multithreaded C++ programs without relying on platform-specific extensions, so portable multithreaded code with guaranteed behavior can finally be written. It also comes at a time when programmers are increasingly looking to concurrency in general, and multithreaded programming in particular, to improve application performance.

This book is about writing concurrent programs in C++ using multiple threads, and about the C++ language features and library facilities that make this possible. I'll start by explaining what I mean by concurrency and multithreading and why you would want to use concurrency in your applications. After a quick detour into why you might not want to use it, I'll give an overview of the concurrency support in C++ and round off this chapter with a simple example of C++ concurrency in action. Readers experienced with developing multithreaded applications can skip the early sections. More extensive examples are covered in subsequent chapters, along with a deeper look at the library facilities. The book finishes with an in-depth reference to all the C++ Standard Library facilities for multithreading and concurrency.

So, what do we mean by concurrency and multithreading?

1.1 What is concurrency

At the simplest and most basic level, concurrency is about two or more separate activities happening at the same time. Concurrency is ubiquitous in everyday life: we can talk while walking, we can do different things with each hand at the same time, and we each live our lives independently of one another: I can swim while you watch a game, and so on.

1.1.1 Concurrency in a computer system

When we talk about concurrency in terms of computers, we mean a single system performing multiple independent activities at the same time, rather than sequentially, one after the other. This is not a new phenomenon: multitasking operating systems that allow a single computer to run multiple applications at the same time through task switching have been commonplace for many years, and some high-end multiprocessor servers have provided genuine concurrency for even longer. What is new is the increased prevalence of computers that can genuinely run multiple tasks in parallel rather than merely giving the illusion of doing so.

Historically, most computers have had one processor with a single core, and many desktop machines still do today. Such a machine can really only perform one task at a time, but it can switch between tasks many times per second. By doing a bit of one task, then a bit of another, and so on, it appears that the tasks are happening in parallel. This is called task switching. We still talk about concurrency with such systems, because the task switches are so fast that you can't tell when a task is suspended and when the processor switches to another one. Task switching provides an illusion of concurrency to both the user and the applications themselves. Because it is only an illusion, there is a subtle difference in behavior when an application executes in a single-processor task-switching environment compared with execution in an environment with genuine concurrency. In particular, incorrect assumptions about the memory model (covered in Chapter 5) may not show up in such an environment. This is discussed in more depth in Chapter 10.

Computers containing multiple processors have been used for servers and high-performance computing tasks for years, and computers based on processors with more than one core on a single chip (multicore processors) are becoming increasingly common as desktop machines. Whether they have multiple processors or multiple cores within a processor (or both), these computers are capable of genuinely running more than one task in parallel. We call this hardware concurrency.

Figure 1.1 shows an idealized scenario of a computer with precisely two tasks to do, each divided into 10 equal-sized chunks. On a dual-core machine (one with two processing cores), each task can execute on its own core. On a single-core machine doing task switching, the chunks of each task are interleaved. But they are also spaced out a bit (the gray separator bars in the figure are thicker than those shown for the dual-core machine); in order to do the interleaving, the system has to perform a context switch every time it changes from one task to another, and this takes time. To perform a context switch, the operating system has to save the CPU state and instruction pointer for the currently running task, work out which task to switch to, and reload the processor state for the task being switched to. The CPU may then have to load the instructions and data for the new task into cache, which can prevent the CPU from executing any instructions, causing further delay.

Figure 1.1 Two approaches to concurrency: parallel execution on a dual-core machine versus task switching on a single-core machine

Though the availability of hardware concurrency is most obvious with multiprocessor or multicore systems, some processors can execute multiple threads on a single core. The important factor to consider is the number of hardware threads: the measure of how many independent tasks the hardware can genuinely run concurrently. Even with a system that has genuine hardware concurrency, it's easy to have more tasks than the hardware can run in parallel, so task switching is still used in these cases. For example, on a typical desktop computer there may be hundreds of tasks running, performing background operations, even when the computer is nominally idle. It's the task switching that allows these background tasks to run and lets you run your word processor, compiler, editor, and web browser (or any combination of applications) all at once. Figure 1.2 shows task switching among four tasks on a dual-core machine, again for an idealized scenario with the tasks divided neatly into equal-sized chunks. In practice, many factors make the divisions and the scheduling irregular. Some of these factors are covered in Chapter 8, where we look at what affects the performance of concurrent code.

All the techniques, functions, and classes covered in this book can be used whether your application runs on a single-core processor or on multicore processors, and whether the concurrency is achieved through task switching or through genuine hardware concurrency. But as you may imagine, how you make use of concurrency in your application may well depend on the amount of hardware concurrency available. This is covered in Chapter 8, where we look specifically at the issues of designing concurrent C++ code.

Figure 1.2 Four tasks switching between two cores
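As a small, hedged aside (not part of the original chapter's examples), the amount of hardware concurrency can be queried at run time with std::thread::hardware_concurrency() from C++11; the sketch below simply prints the hint, which may be 0 if the information is unavailable.

    #include <iostream>
    #include <thread>

    int main()
    {
        // Hint at how many threads can truly run in parallel
        // (cores times hardware threads per core); may be 0 if unknown.
        unsigned int n = std::thread::hardware_concurrency();
        std::cout << "hardware threads: " << n << '\n';
    }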

1.1.2 Approaches to concurrency

Imagine for a moment two programmers working together on a software project. If your developers are in separate offices, they can go about their work peacefully without disturbing each other, and they each have their own set of reference manuals. However, communication is not straightforward; rather than turning around and talking to each other, they have to use the phone or e-mail, or walk to each other's office. You also have the overhead of two offices to manage and multiple copies of reference manuals to buy.

Now imagine moving the developers into the same office. They can now talk to each other freely to discuss the design of the application, and they can easily draw diagrams on paper or on a whiteboard to help explain design ideas. You now have only one office to manage, and one set of resources will often suffice. On the negative side, they might find it harder to concentrate, and there may be issues with sharing resources ("Where's the reference manual gone now?").

These two ways of organizing your developers illustrate the two basic approaches to concurrency. Each developer represents a thread, and each office represents a process. The first approach is to have multiple single-threaded processes, which is similar to having each developer in their own office, whereas the second approach is to have multiple threads in a single process, which is like having two developers in the same office. You can combine these in arbitrary ways and have multiple processes, some of which are multithreaded and some single-threaded, but the principles are the same. Let's have a brief look at these two approaches to concurrency within an application.

Multi-process concurrency

The first way to make use of concurrency within an application is to divide the application into multiple, separate, single-threaded processes that run at the same time, much as you can run your web browser and word processor at the same time. These separate processes can then pass messages to each other through all the normal interprocess communication channels (signals, sockets, files, pipes, and so on), as shown in Figure 1.3. One downside is that such communication between processes is often complicated to set up, or slow, or both, because operating systems typically provide a lot of protection between processes to prevent one process from accidentally modifying data belonging to another. Another downside is the inherent overhead of running multiple processes: it takes time to start a process, the operating system must devote internal resources to managing it, and so forth.

Of course, it's not all downside: the added protection the operating system provides between processes and the higher-level communication mechanisms mean that it can be easier to write safe concurrent code with processes than with threads. Indeed, environments such as that provided for the Erlang programming language use processes as the fundamental building block of concurrency to great effect.

Using separate processes for concurrency also has an additional advantage: you can run the separate processes on distinct machines connected over a network. Though this increases the communication cost, on a carefully designed system it can be a cost-effective way of increasing the available parallelism and improving performance.

Figure 1.3 Communication between a pair of concurrently running processes

Multithreaded concurrency

The alternative approach to concurrency is to run multiple threads in a single process. Threads are much like lightweight processes: each thread runs independently of the others, and each may run a different sequence of instructions. But all threads in a process share the same address space, and most of the data can be accessed directly from all threads: global variables remain global, and pointers, references to objects, or data can be passed around between threads. Although it's often possible to share memory between processes, this is complicated to set up and often hard to manage, because memory addresses of the same data aren't necessarily the same in different processes. Figure 1.4 shows two threads within a process communicating through shared memory.

Figure 1.4 Communication between a pair of concurrently running threads in the same process

The shared address space and lack of protection of data between threads make the overhead associated with using multiple threads much smaller than that of using multiple processes, because the operating system has less bookkeeping to do. But the flexibility of shared memory comes at a price: if data is accessed by multiple threads, the programmer must ensure that the view of the data seen by each thread is consistent whenever it is accessed. The issues surrounding sharing data between threads, the tools to use, and the guidelines to follow to avoid problems are covered throughout this book, notably in Chapters 3, 4, 5, and 8. These problems are not insurmountable, provided suitable care is taken when writing the code, but they do mean that a great deal of thought must go into the communication between threads.

The low overhead of launching multiple threads within a process and communicating between them, compared with launching multiple single-threaded processes and communicating between those, means that multithreading is the favored approach to concurrency in mainstream languages, including C++, despite the potential problems arising from shared memory. In addition, the C++ standard doesn't provide any intrinsic support for communication between processes, so applications that use multiple processes have to rely on platform-specific APIs to do so. This book therefore focuses exclusively on using multithreading for concurrency, and later references to concurrency assume that it is achieved with multiple threads.

Having defined what concurrency is, let's now look at why you would use concurrency in your applications.

1.2 Why use concurrency?

There are two main reasons to use concurrency in an application: separation of concerns and performance. In fact, I'd go so far as to say that they are pretty much the only reasons to use concurrency; anything else boils down to one or the other (or maybe even both) when you look hard enough, well, except for reasons like "because I want to".

1.2.1 Using concurrency for separation of concerns

Separation of concerns is almost always a good idea when writing software; by grouping related code together and keeping unrelated code apart, you can make your programs easier to understand and test, and thus less likely to contain bugs. You can use concurrency to separate distinct areas of functionality, even when the operations in these distinct areas need to happen at the same time; without the explicit use of concurrency, you either have to write a task-switching framework yourself or actively call unrelated areas of code in the middle of an operation.

Consider a processing-intensive application with a user interface, such as a DVD player application for a desktop computer. Such an application fundamentally has two sets of responsibilities: it not only has to read the data from the disk, decode the images and sound, and send them to the video and audio hardware in a timely fashion so the DVD plays without glitches, but it also has to accept input from the user, such as when the user clicks Pause or Return To Menu, or even Quit. In a single thread, the application has to check for user input at regular intervals during playback, so the DVD playback code gets entangled with the user interface code. By using multithreading to separate these concerns, the user interface code and the DVD playback code no longer have to be so closely intertwined; one thread can handle the user interface and another can handle the DVD playback. There will still be interaction between them, such as when the user clicks Pause, but now these interactions are directly related to the task at hand.

This gives the illusion of responsiveness, because the user interface thread can typically respond to a user request immediately, even if the response is simply to display a busy cursor or a "please wait" message while the request is conveyed to the thread doing the work. Similarly, separate threads are often used to run tasks that must run continuously in the background, such as monitoring the filesystem for changes in a desktop search application. Using threads in this way generally makes the logic in each thread simpler, because the interactions between them can be limited to clearly identifiable points, rather than spreading the logic of the different tasks everywhere.
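As a minimal sketch of the two-thread structure just described (not the book's actual DVD-player code), the example below keeps the user-interface concern and the playback concern in separate threads; the names play_dvd, handle_user_input, and the atomic quit flag are illustrative assumptions.

    #include <atomic>
    #include <chrono>
    #include <iostream>
    #include <string>
    #include <thread>

    std::atomic<bool> quit{false};      // UI thread signals the playback thread

    void play_dvd()                     // placeholder for the playback concern
    {
        while (!quit) {
            // decode the next frame and send it to the audio/video hardware ...
            std::this_thread::sleep_for(std::chrono::milliseconds(40));
        }
    }

    void handle_user_input()            // placeholder for the UI concern
    {
        std::string cmd;
        while (std::cin >> cmd && cmd != "quit") {
            // handle pause, menu navigation, and so on ...
        }
        quit = true;                    // ask the playback thread to stop
    }

    int main()
    {
        std::thread playback(play_dvd); // playback runs on its own thread
        handle_user_input();            // the UI runs on the initial thread
        playback.join();                // wait for playback to finish
    }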

In this case, the number of threads is independent of the number of CPU cores available, because the division into threads is based on the conceptual design rather than an attempt to increase throughput.

1.2.2 Using concurrency for performance

Multiprocessor systems have existed for decades, but until recently they were found only in supercomputers, mainframes, and large server systems. However, chip manufacturers have increasingly favored multicore designs that integrate 2, 4, 16, or more processors on a single chip over better performance from a single core. Consequently, multicore desktop computers, and even multicore embedded devices, are now increasingly common. The increased computing power of these machines comes not from running a single task faster but from running multiple tasks in parallel. In the past, programmers could sit back and watch their programs get faster with each new generation of processors, without any effort on their part. But now, as Herb Sutter put it, "The free lunch is over."[1] If software is to take advantage of this growing computing power, it must be designed to run multiple tasks concurrently. Programmers must therefore take heed, and those who have so far ignored concurrency must look to add it to their toolbox.

There are two ways to use concurrency for performance. The first, and most obvious, is to divide a single task into parts and run each in parallel, reducing the total running time. This is task parallelism. Although this sounds straightforward, it can be quite a complex process, because there may be many dependencies between the various parts. The divisions may be in terms of processing, where one thread performs one part of the algorithm while another thread performs a different part, or in terms of data, where each thread performs the same operation on different pieces of data. The latter approach is called data parallelism.

Algorithms that are readily susceptible to such parallelism are frequently called embarrassingly parallel. Despite the implication that you might be embarrassed to face code that is so easy to parallelize, this is a good thing: other terms I've encountered for such algorithms are naturally parallel and conveniently concurrent. Embarrassingly parallel algorithms have good scalability properties: the parallelism in the algorithm can be increased to match the number of available hardware threads. Such an algorithm is the perfect embodiment of the adage "many hands make light work". For the parts of the algorithm that aren't embarrassingly parallel, you can divide the algorithm into a fixed (and therefore not scalable) number of parallel tasks. Techniques for dividing tasks between threads are covered in Chapter 8.
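Here is a minimal, hedged sketch of data parallelism (a generic example, not taken from the book): two threads perform the same operation, summation, on different halves of a vector, and the partial results are combined afterwards. The function name sum_range and the fixed two-way split are assumptions made for brevity.

    #include <cstddef>
    #include <functional>
    #include <iostream>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Each thread performs the same operation (accumulate) on a different chunk.
    void sum_range(const std::vector<int>& v, std::size_t first, std::size_t last,
                   long long& result)
    {
        result = std::accumulate(v.begin() + first, v.begin() + last, 0LL);
    }

    int main()
    {
        std::vector<int> data(1000000, 1);
        long long lower = 0, upper = 0;

        std::thread t1(sum_range, std::cref(data), 0, data.size() / 2, std::ref(lower));
        std::thread t2(sum_range, std::cref(data), data.size() / 2, data.size(), std::ref(upper));

        t1.join();
        t2.join();
        std::cout << "sum = " << (lower + upper) << '\n';
    }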

The second way to use concurrency to improve performance is to use the available parallelism to solve bigger problems: rather than processing one file at a time, process 2 or 10 or 20, as appropriate. Although this is really just an application of data parallelism, performing the same operations on multiple sets of data at the same time, the emphasis is different. It still takes the same amount of time to process one chunk of data, but more data can be processed in the same amount of time. Of course, there are limits to this approach, and it won't be beneficial in all cases, but the increase in throughput can make new things possible; for example, it may make it possible to increase the resolution of video processing, if different areas of the picture can be processed in parallel.

1.2.3 When not to use concurrency

Knowing when not to use concurrency is just as important as knowing when to use it. Fundamentally, the only reason not to use concurrency is when the benefit isn't worth the cost. Code using concurrency is harder to understand in many cases, so there's a direct intellectual cost to writing and maintaining multithreaded code, and the additional complexity can also lead to more bugs. Unless the potential performance gain is large enough or the separation of concerns is clear enough to justify the additional development time required to get it right, and the additional costs of maintaining multithreaded code, don't use concurrency.

Similarly, the performance gain might not be as large as expected; there is an inherent overhead associated with launching a thread, because the operating system has to allocate the associated kernel resources and stack space and then add the new thread to the scheduler, all of which takes time. If the task run on the thread finishes quickly, the time taken by the task itself may be dwarfed by the overhead of launching the thread, possibly making the overall performance of the application worse than if the task had been executed directly by the spawning thread.

Furthermore, threads are a limited resource. If too many threads run at once, operating system resources are consumed and the system as a whole may run slower. Not only that, but too many threads can exhaust the available memory or address space for a process, because each thread requires a separate stack space. This is particularly a problem for 32-bit processes with a flat architecture, where the available address space is 4 GB: if each thread has a 1 MB stack (as is typical on many systems), then 4,096 threads would use up the entire address space, leaving no room for code, static data, or heap data. Although 64-bit (or larger) systems don't have this direct address space limit, they still have finite resources: running too many threads will eventually cause problems. Although thread pools (see Chapter 9) can be used to limit the number of threads, they are not a silver bullet, and they have their own issues.

If the server side of a client/server application launches a separate thread for each connection, this works fine for a small number of connections, but when the same technique is used for a high-demand server that has to handle many connections, launching too many threads quickly exhausts system resources. In this scenario, careful use of thread pools can provide optimized performance (see Chapter 9).

Finally, the more threads you have running, the more context switching the operating system has to do. Each context switch takes time that could be spent doing useful work, so at some point adding an extra thread actually reduces the overall application performance rather than increasing it. For this reason, if you are trying to achieve the best possible performance from the system, it's necessary to adjust the number of running threads to take into account the available hardware concurrency (or lack of it).

Using concurrency for performance is just like any other optimization strategy: it has the potential to greatly improve the performance of your application, but it can also complicate the code, making it harder to understand and more prone to bugs. Therefore, it's only worth doing for those performance-critical parts of the application where there is potential for significant gain. Of course, if the potential for a performance gain is only secondary to clarity of design or separation of concerns, it may still be worth using a multithreaded design.

Assuming you've decided that you do want to use concurrency in your application, whether for performance, separation of concerns, or because it's "Multithreading Monday", what does that mean for C++ programmers?

1.3 Using concurrency and multithreading in C++

Standardized support for concurrency through multithreading is a new thing for C++. It is only with the upcoming C++11 standard that you can write multithreaded code without resorting to platform-specific extensions. In order to understand the rationale behind many of the decisions in the new Standard C++ Thread Library, it's important to understand its history.

1.3.1 The history of multithreading in C++

The 1998 C++ standard doesn't acknowledge the existence of threads, and the operational effects of the various language elements are written in terms of a sequential abstract machine. Not only that, but the memory model isn't formally defined, so under the 1998 C++ standard you can't write multithreaded applications without compiler-specific extensions.

Of course, compiler vendors are free to add extensions to the language, and the prevalence of C APIs for multithreading, such as those in POSIX C and the Microsoft Windows API, has led many C++ compiler vendors to support multithreading with various platform-specific extensions. This compiler support has generally been limited to allowing the use of the corresponding C API for the platform and ensuring that the C++ runtime library (such as the code for the exception-handling mechanism) works in the presence of multiple threads. Although very few compiler vendors have provided a formal multithreading-aware memory model, the actual behavior of compilers and processors has been good enough that a large number of multithreaded C++ programs have been written.

Not content with using the platform-specific C APIs for handling multithreading, C++ programmers have looked to their class libraries to provide object-oriented multithreading facilities. Application frameworks such as MFC, and general-purpose C++ class libraries such as Boost and ACE, have accumulated sets of C++ classes that wrap the underlying platform-specific APIs and provide higher-level multithreading facilities to simplify tasks. Although the details of the various libraries vary considerably, particularly in the area of launching new threads, the overall shape of the classes has a lot in common. One particularly important design that is common to many C++ class libraries, and that provides great convenience to the programmer, is the use of the Resource Acquisition Is Initialization (RAII) idiom with locks to ensure that mutexes are unlocked when the relevant scope is exited.
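As a minimal sketch of the RAII-with-locks idiom, shown here with the C++11 standard-library types rather than any particular third-party library, std::lock_guard locks the mutex in its constructor and unlocks it in its destructor, so the lock is released however the scope is exited, including by an exception. The names counter and increment are illustrative only.

    #include <mutex>

    std::mutex m;
    int counter = 0;                          // shared data protected by m

    void increment()
    {
        std::lock_guard<std::mutex> guard(m); // mutex locked here
        ++counter;
    }                                         // guard destroyed: mutex unlocked,
                                              // even if the body throws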

In many cases, the multithreading support of existing C++ compilers, combined with the availability of platform-specific APIs and platform-independent class libraries such as Boost and ACE, has provided a solid foundation on which to write multithreaded C++ code, and as a result there are probably millions of lines of C++ code written as part of multithreaded applications. But the lack of standard support means there are occasions where the lack of a thread-aware memory model causes problems, particularly for those who try to gain higher performance by exploiting knowledge of the processor hardware, or who write cross-platform code where the actual behavior of the compilers varies between platforms.

1.3.2 Concurrency support in the new standard

All of this has changed with the release of the new C++11 standard. Not only is there a brand-new thread-aware memory model, but the C++ Standard Library has been extended to include classes for managing threads (see Chapter 2), protecting shared data (see Chapter 3), synchronizing operations between threads (see Chapter 4), and low-level atomic operations (see Chapter 5).

The new C++ Thread Library is largely based on the prior experience accumulated through the use of the C++ class libraries mentioned previously. In particular, the Boost Thread Library has been used as the primary model on which the new library is based, with many of the classes sharing their names and structure with their counterparts in Boost. This has been a two-way flow as the new standard has evolved, and the Boost Thread Library has itself changed to match the C++ standard in many respects, so users transitioning from Boost should find themselves very much at home.

As mentioned at the beginning of this chapter, support for concurrency is just one of the changes in the new C++ standard; there are many enhancements to the language itself that make programmers' lives easier. Although these are largely outside the scope of this book, some of those changes have had a direct impact on the Thread Library itself and the ways in which it can be used. Appendix A provides a brief introduction to these language features.

Direct support for atomic operations in C++ allows programmers to write efficient code with defined semantics without the need for platform-specific assembly language. This is a real boon for those trying to write efficient, portable code: not only does the compiler take care of the platform specifics, but the optimizer can be written to take the semantics of the operations into account, enabling better optimization of the program as a whole.
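As a small, hedged illustration of the standard atomic operations (a generic example, not drawn from the book), the counter below is incremented from several threads without a mutex and without any platform-specific assembly; the thread and iteration counts are arbitrary.

    #include <atomic>
    #include <iostream>
    #include <thread>
    #include <vector>

    std::atomic<int> hits{0};                // atomic counter shared by all threads

    int main()
    {
        std::vector<std::thread> workers;
        for (int i = 0; i != 10; ++i)
            workers.emplace_back([] {
                for (int j = 0; j != 1000; ++j)
                    ++hits;                  // atomic increment, no data race
            });

        for (auto& t : workers)
            t.join();

        std::cout << hits << '\n';           // always prints 10000
    }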

1.3.3 Efficiency in the C++ Thread Library

One concern that developers involved in high-performance computing often raise regarding C++ in general, and regarding C++ classes that wrap low-level facilities in particular (such as those in the new Standard C++ Thread Library), is efficiency. If you're after the utmost in performance, it's important to understand the implementation costs associated with using high-level facilities compared with using the underlying low-level facilities directly. This cost is the abstraction penalty.

The C++ Standards Committee was very aware of this when designing the C++ Standard Library in general and the Standard C++ Thread Library in particular. One of the design goals is that there should be little or no benefit to be gained from using the lower-level APIs directly, where the same facilities are provided. The library has therefore been designed to allow efficient implementation (with a very low abstraction penalty) on most major platforms.

Another goal of the C++ Standards Committee has been to ensure that C++ provides sufficient low-level facilities for programmers wishing to work closer to the hardware for the ultimate performance. To this end, along with the new memory model comes a comprehensive atomic operations library for direct control over individual bits and bytes, inter-thread synchronization, and the visibility of any changes. These atomic types and the corresponding operations can now be used in many places where developers would previously have chosen to drop down to platform-specific assembly language. Code using the new standard types and operations is thus more portable and easier to maintain.

The C++ Standard Library also provides higher-level abstractions and facilities that make writing multithreaded code easier and less error prone. Sometimes using these facilities does come with a performance cost, because additional code must be executed. But this performance cost doesn't necessarily imply a higher abstraction penalty; in general, the cost is no higher than would be incurred by writing the equivalent functionality by hand, and the compiler may well inline much of the additional code anyway.

In some cases, the high-level facilities provide additional functionality beyond what a specific use requires. Most of the time this isn't an issue: you don't pay for what you don't use. On rare occasions, this unused functionality will impact the performance of other code. If you're going for performance and the cost is too high, you may be better off handcrafting the required functionality from lower-level facilities. In the vast majority of cases, the additional complexity and chance of errors far outweigh the potential benefits of a small performance gain. Even if profiling does show that the bottleneck is in the C++ Standard Library facilities, it may be due to poor application design rather than a poor library implementation. For example, if too many threads compete for a mutex, this will impact performance significantly. Rather than trying to shave a small fraction of time off the mutex operations, it would probably be more beneficial to restructure the application so that there's less contention on the mutex. Designing applications to reduce contention is covered in Chapter 8.

In the very rare cases where the C++ Standard Library doesn't provide the performance or behavior required, it may be necessary to use platform-specific facilities.

1.3.4 Platform-specific facilities

Although the C++ Thread Library provides reasonably comprehensive facilities for multithreading and concurrency, on any given platform there will be additional platform-specific facilities. To make it easy to access those facilities without giving up the benefits of using the Standard C++ Thread Library, the types in the C++ Thread Library may offer a native_handle() member function that allows the underlying implementation to be manipulated directly using a platform-specific API. By its very nature, any operation performed using native_handle() is entirely platform dependent and beyond the scope of this book (and of the Standard C++ Library itself).
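As a hedged sketch of this escape hatch, assuming a GNU/Linux platform where std::thread::native_handle() yields a pthread_t (the actual type is implementation defined), a thread name could be set through the non-portable pthread_setname_np extension; the details differ on other platforms.

    #include <pthread.h>                     // platform-specific: POSIX threads
    #include <thread>

    void work() { /* ... */ }

    int main()
    {
        std::thread t(work);
        // Platform dependent: on GNU/Linux, native_handle() is a pthread_t and
        // pthread_setname_np gives the thread a name visible in debuggers and top.
        pthread_setname_np(t.native_handle(), "worker");
        t.join();
    }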

Of course, before considering the use of platform-related tools, it is important to understand what the standard library can offer, so let's start with an example.

1.4 Getting Started

OK, so now you have a nice, shiny C++11-compatible compiler. What next? What does a multithreaded C++ program look like? It looks pretty much like any other C++ program, with the usual mix of variables, classes, and functions. The only real distinction is that some functions might be running concurrently, so you need to ensure that shared data is safe for concurrent access, as described in Chapter 3. Of course, in order to run functions concurrently, specific functions and objects must be used to manage the different threads.

1.4.1 Hello, concurrent world

Let's start with a classic example: a program that prints "Hello World". A really simple Hello World program that runs in a single thread is shown here; it can serve as a baseline when we move to multiple threads.

    #include <iostream>

    int main()
    {
        std::cout << "Hello World\n";
    }

All this program does is write "Hello World" to the standard output stream. Let's compare it to the simple Hello, Concurrent World program shown in the following listing, which launches a separate thread to display the message.

Listing 1.1 A simple Hello, Concurrent World program

    #include <iostream>
    #include <thread>                        // ①

    void hello()                             // ②
    {
        std::cout << "Hello Concurrent World\n";
    }

    int main()
    {
        std::thread t(hello);                // ③
        t.join();                            // ④
    }

The first difference is the extra #include <thread> ①. The declarations for the multithreading support in the Standard C++ Library are in new headers: the functions and classes for managing threads are declared in <thread>, whereas those for protecting shared data are declared in other headers.

Second, the code for writing the message has been moved to a separate function ②. This is because every thread has to have an initial function, where the new thread of execution begins. For the initial thread in an application, this is main(), but for every other thread it's specified in the constructor of a std::thread object: in this case, the std::thread object named t ③ has the new function hello() as its initial function.

This is the next difference: rather than writing directly to standard output or calling hello() from main(), this program launches a whole new thread to do it, bringing the thread count to two: the initial thread that starts at main() and the new thread that starts at hello().

After the new thread has been launched ③, the initial thread continues execution. If it didn't wait for the new thread to finish, it would merrily run on to the end of main() and thus end the program, possibly before the new thread had had a chance to run. This is why the call to join() is there ④; as described in Chapter 2, this causes the calling thread (in main()) to wait for the thread associated with the std::thread object, in this case t.

If this seems like a lot of work to go to just to write a message to standard output, it is; as described in Section 1.2.3 above, it's generally not worth the effort to use multiple threads for such a simple task, especially if the initial thread has nothing to do in the meantime. Later in the book, we'll work through examples of scenarios where there is a clear gain from using multiple threads.

1.5 Summary

In this chapter, I covered what is meant by concurrency and multithreading and why you would choose to use it (or not) in your applications. I also covered the history of multithreading in C++, from the complete lack of support in the 1998 standard, through various platform-specific extensions, to proper multithreading support in the new C++11 standard. This support comes just in time, allowing programmers to take advantage of the greater hardware concurrency that comes with newer CPUs, as chip manufacturers choose to add processing power in the form of multiple cores that allow more tasks to be executed concurrently, rather than increasing the execution speed of a single core.

The example in Section 1.4 shows how simple it is to use the classes and functions from the C++ Standard Library. In C++, using multiple threads isn't complicated in itself; the complexity lies in designing the code so that it behaves as intended.

After trying out the example in Section 1.4, it's time for something with a bit more substance. In Chapter 2, we'll look at the classes and functions available for managing threads.

[1] "The free Lunch are over:a fundamental Turn toward Concurrency in software," Herb Sutter, Dr. Dobb ' s

Journal, (3), March 2005. Http://www.gotw.ca/publications/concurrency-ddj.htm.

