C++ Series: Boost Thread Programming Guide


Reprinted from: http://www.cppblog.com/shaker/archive/2011/11/30/33583.html

C++ Boost Thread Programming Guide
0 Preface
1 Creating Threads
2 Mutexes
3 Condition Variables
4 Thread-Local Storage
5 Routines That Run Only Once
6 The Future of the Boost Thread Library
7 References
0 Preface

Standard C++ threads are coming soon. CUJ predicts they will be derived from the Boost thread library, and Bill Kempf now leads us on a tour of that library.
Just a few years ago it was unusual for a program to run multiple threads. Today, however, Internet server applications routinely use multithreading to serve many client connections efficiently; transactional servers run each service on a separate thread to maximize throughput; and GUI applications move time-consuming, complex operations onto worker threads so that the user interface stays responsive. Examples of multithreading used in these ways are everywhere.

But the C++ standard says nothing about multithreading, which leaves programmers wondering whether it is even possible to write multithreaded C++ programs. Although standard multithreaded programs cannot be written, programmers write multithreaded C++ anyway, using the multithreading libraries provided by multithreaded operating systems. There are at least two problems with this: most of these libraries are written in C and must be used with great care from C++, and every operating system has its own set of facilities for supporting multithreading. The resulting code is therefore neither standard nor portable. The Boost thread library was designed to solve all of these problems.

Boost was initiated by members of the C++ Standards Committee Library Working Group as an organization for developing new class libraries for C++. It now has nearly 2,000 members. Many libraries can be found in the Boost source distribution, and the Boost thread library was created to make those libraries thread-safe.

Many C++ experts contributed to the development of the Boost thread library. All of the interfaces were designed from scratch and are not simple wrappers around the C threading APIs. Many C++ features (such as constructors and destructors, function objects, and templates) were used to make the interfaces more flexible. The current version works on the POSIX, Win32, and Macintosh Carbon platforms.

1 Creating Threads

Just as the std::fstream class represents a file, the boost::thread class represents an executable thread. The default constructor creates an instance representing the current thread of execution. An overloaded constructor takes a function object that requires no arguments and returns nothing; this constructor creates a new thread of execution, which invokes the function object.

At first glance, the traditional C approach to creating threads seems more useful than this design, because C creates a thread by passing in a void* pointer through which data can be handed to the thread. However, because the Boost thread library uses a function object instead of a function pointer, the function object itself can carry whatever data the thread needs. This approach is more flexible and also type-safe. Combined with a library such as Boost.Bind, it lets you pass any amount of data to the new thread.

Currently, a thread object created with the Boost thread library offers only limited functionality. In fact, it can do just two things. Thread objects can be compared with == and != to determine whether they represent the same thread, and you can call boost::thread::join to wait for a thread to finish. Other thread libraries let you perform additional operations on a thread (such as setting its priority or even cancelling it), but because adding such operations to a portable interface is not simple, it is still being discussed how to add them to the Boost thread library.
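
The two operations can be seen in isolation in the short sketch below. This is only a sketch of the interface as described in this article (a default-constructed boost::thread stands for the current thread, and == / != compare thread identities); the helper compare_with and its output strings are made up for illustration, and boost::ref is used so the thread object itself is never copied.

#include <boost/thread/thread.hpp>
#include <boost/bind.hpp>
#include <boost/ref.hpp>
#include <iostream>

// Hypothetical helper: compares the calling thread's identity with `other`.
void compare_with(const boost::thread& other)
{
    boost::thread self;   // default constructor: represents the current thread
    std::cout << (self == other ? "same thread" : "different thread")
              << std::endl;
}

int main(int argc, char* argv[])
{
    boost::thread self;                                   // the main thread
    compare_with(self);                                   // prints "same thread"

    // The worker thread compares its own identity against the main thread's.
    boost::thread worker(boost::bind(&compare_with, boost::ref(self)));
    worker.join();                                        // prints "different thread"
    return 0;
}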

Example 1 shows one of the simplest uses of the boost::thread class. The newly created thread simply prints "Hello world" to std::cout, and the main function waits for it to finish before exiting.


Example 1:

#include <boost/thread/thread.hpp>
#include <iostream>

void hello()
{
    std::cout <<
        "Hello world, I'm a thread!"
        << std::endl;
}

int main(int argc, char* argv[])
{
    boost::thread thrd(&hello);
    thrd.join();
    return 0;
}
2 Mutexes

Anyone who has written much threaded code knows the importance of preventing different threads from accessing shared data at the same time. If one thread modifies shared data while another thread is reading it, the result is undefined. To prevent this, special primitive types and operations are used. The most fundamental of these is the mutex (short for mutual exclusion). A mutex allows only one thread at a time to access shared data: when a thread wants access, it first locks the mutex, and if another thread has already locked it, the thread must wait until that thread unlocks it. This guarantees that only one thread can access the shared data at a time.

There are many variations on the mutex concept. The Boost thread library supports two broad categories: simple mutexes and recursive mutexes. With a simple mutex, if the same thread locks the mutex twice, a deadlock occurs: every thread waiting for the unlock will wait forever. With a recursive mutex, a single thread may lock the mutex multiple times, but it must of course unlock it the same number of times before other threads can lock it.
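
As a minimal sketch of the recursive case (assuming only boost::recursive_mutex and its scoped_lock typedef; the function names are made up for illustration), the same thread can acquire the lock again in a nested call, which would deadlock with a simple mutex:

#include <boost/thread/recursive_mutex.hpp>
#include <iostream>

boost::recursive_mutex guard;

void inner()
{
    // Second lock by the same thread: allowed for a recursive mutex.
    boost::recursive_mutex::scoped_lock lock(guard);
    std::cout << "inner critical section" << std::endl;
}   // releases one level of the lock

void outer()
{
    boost::recursive_mutex::scoped_lock lock(guard);   // first lock
    inner();   // with boost::mutex this nested lock would deadlock
}   // releases the last level; other threads may now lock

int main(int argc, char* argv[])
{
    outer();
    return 0;
}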

Within each of these two categories there are further variants according to how a thread may lock the mutex. A thread can lock a mutex in three ways:

Wait (block) until no other thread has the mutex locked.
Return immediately if another thread has the mutex locked.
Wait until no other thread has the mutex locked, or give up once a specified time has elapsed.
It might seem that the best mutex type is a recursive one that supports all three locking forms. Every variation has a cost, however, so the Boost thread library lets you choose the most efficient mutex type for your needs. It provides six mutex types, listed below from most to least efficient (a short sketch of the "return immediately" try-lock form follows the list):

boost::mutex,
boost::try_mutex,
boost::timed_mutex,
boost::recursive_mutex,
boost::recursive_try_mutex,
boost::recursive_timed_mutex
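
The article's examples all use the plain blocking lock; the "return immediately" form can be sketched as below. Note that this sketch uses the interface of current Boost.Thread releases, in which boost::mutex (like std::mutex) exposes try_lock() and unlock() directly, rather than the scoped_try_lock wrapper of the older boost::try_mutex described in this article; the function do_work_if_available is made up for illustration.

#include <boost/thread/mutex.hpp>
#include <iostream>

boost::mutex m;

void do_work_if_available()
{
    // try_lock() returns immediately: true if the lock was acquired, false otherwise.
    if (m.try_lock())
    {
        std::cout << "acquired the mutex, doing the work" << std::endl;
        m.unlock();                          // must unlock manually in this style
    }
    else
    {
        std::cout << "mutex busy, doing something else instead" << std::endl;
    }
}

int main(int argc, char* argv[])
{
    do_work_if_available();
    return 0;
}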

If a mutex is locked but never unlocked, a deadlock occurs. This is a common mistake, and the Boost thread library is designed to make it impossible, or at least difficult: users of the library cannot lock or unlock a mutex directly. Instead, each mutex class defines, via typedefs, RAII types that lock and unlock the mutex. This is known as the Scoped Lock pattern. To construct one of these types, you pass in a reference to a mutex; the constructor locks the mutex and the destructor unlocks it. Because C++ guarantees that destructors are called even when an exception is thrown, the mutex is always unlocked correctly.
This approach guarantees correct use of mutexes. Note, however, that although the Scoped Lock pattern guarantees that the mutex is unlocked, it does not guarantee that the shared resources remain in a consistent state after an exception is thrown; just as in a single-threaded program, you must make sure that exceptions do not leave the program in an invalid state. Also, a lock object must not be passed to another thread, because the state it maintains is not protected against such use.
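
A minimal sketch of the exception-safety point (only boost::mutex and its scoped_lock from the library are assumed; the function name and the exception are made up for illustration):

#include <boost/thread/mutex.hpp>
#include <iostream>
#include <stdexcept>

boost::mutex m;

void risky()
{
    boost::mutex::scoped_lock lock(m);                    // locks the mutex
    throw std::runtime_error("something went wrong");     // stack unwinding begins
}                                                         // lock's destructor unlocks the mutex

int main(int argc, char* argv[])
{
    try
    {
        risky();
    }
    catch (const std::exception& e)
    {
        // The mutex can be locked again here: the exception did not leave it locked.
        boost::mutex::scoped_lock lock(m);
        std::cout << "caught: " << e.what() << std::endl;
    }
    return 0;
}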

Example 2 gives a simple example of using boost::mutex. It creates two new threads, each of which loops 10 times, printing its ID and the current loop count to std::cout; the main function waits for both threads to finish. Because std::cout is a shared resource, each thread uses a global mutex to ensure that only one thread writes to it at a time.

Many readers will have noticed that passing data to a thread in Example 2 requires writing a function object by hand. Although the example is simple, writing such code every time is tedious. Fortunately, there is an easy solution: the Boost.Bind library lets you create a new function object by binding another function to the data needed for the call. Example 3 shows how Boost.Bind simplifies the code of Example 2 so that you do not have to write the function object yourself.

Example 2:

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <iostream>

boost::mutex io_mutex;

struct count
{
    count(int id) : id(id) {}

    void operator()()
    {
        for (int i = 0; i < 10; ++i)
        {
            boost::mutex::scoped_lock
                lock(io_mutex);
            std::cout << id << ": "
                      << i << std::endl;
        }
    }

    int id;
};

int main(int argc, char* argv[])
{
    boost::thread thrd1(count(1));
    boost::thread thrd2(count(2));
    thrd1.join();
    thrd2.join();
    return 0;
}
Example 3: the same as Example 2, except that Boost.Bind is used to create the data-carrying callable for the thread, avoiding the hand-written function object.

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/bind.hpp>
#include <iostream>

boost::mutex io_mutex;

void count(int id)
{
    for (int i = 0; i < 10; ++i)
    {
        boost::mutex::scoped_lock
            lock(io_mutex);
        std::cout << id << ": "
                  << i << std::endl;
    }
}

int main(int argc, char* argv[])
{
    boost::thread thrd1(
        boost::bind(&count, 1));
    boost::thread thrd2(
        boost::bind(&count, 2));
    thrd1.join();
    thrd2.join();
    return 0;
}
3 Condition Variables

Sometimes locking a shared resource is not enough; sometimes the resource is usable only when it is in a particular state. For example, if a thread wants to read data from a stack and the stack is empty, it must wait until data has been pushed. A mutex alone cannot provide this kind of synchronization. Another synchronization primitive, the condition variable, can be used in this case.

A condition variable is always used together with a mutex and a shared resource. A thread first locks the mutex and then checks whether the shared resource is in a usable state. If it is not, the thread waits on the condition variable. For this to work, the mutex must be unlocked during the wait so that other threads can access the shared resource and change its state, and it must be guaranteed that the mutex is locked again when the wait returns. When another thread changes the state of the shared resource, it notifies the threads waiting on the condition variable, which then return from their wait.

Example 4 is a simple example of using boost::condition. A class implements a bounded buffer: a fixed-size, first-in, first-out container. The buffer is thread-safe thanks to the mutex boost::mutex, and put and get use a condition variable to make threads wait until the buffer is in the state required for the operation. Two threads are created: one puts 100 integers into the buffer, and the other pulls them out. Because the bounded buffer can hold only 10 integers at a time, the two threads must periodically wait for each other. To show this happening, put and get write diagnostic messages to std::cout. Finally, when both threads have finished, the main function exits.

Example 4:

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/condition.hpp>
#include <iostream>

const int buf_size = 10;
const int iters = 100;

boost::mutex io_mutex;

class buffer
{
public:
    typedef boost::mutex::scoped_lock
        scoped_lock;

    buffer()
        : p(0), c(0), full(0)
    {
    }

    void put(int m)
    {
        scoped_lock lock(mutex);
        if (full == buf_size)
        {
            {
                boost::mutex::scoped_lock
                    lock(io_mutex);
                std::cout <<
                    "Buffer is full. Waiting..."
                    << std::endl;
            }
            while (full == buf_size)
                cond.wait(lock);
        }
        buf[p] = m;
        p = (p + 1) % buf_size;
        ++full;
        cond.notify_one();
    }

    int get()
    {
        scoped_lock lk(mutex);
        if (full == 0)
        {
            {
                boost::mutex::scoped_lock
                    lock(io_mutex);
                std::cout <<
                    "Buffer is empty. Waiting..."
                    << std::endl;
            }
            while (full == 0)
                cond.wait(lk);
        }
        int i = buf[c];
        c = (c + 1) % buf_size;
        --full;
        cond.notify_one();
        return i;
    }

private:
    boost::mutex mutex;
    boost::condition cond;
    unsigned int p, c, full;
    int buf[buf_size];
};

buffer buf;

void writer()
{
    for (int n = 0; n < iters; ++n)
    {
        {
            boost::mutex::scoped_lock
                lock(io_mutex);
            std::cout << "Sending: "
                      << n << std::endl;
        }
        buf.put(n);
    }
}

void reader()
{
    for (int x = 0; x < iters; ++x)
    {
        int n = buf.get();
        {
            boost::mutex::scoped_lock
                lock(io_mutex);
            std::cout << "Received: "
                      << n << std::endl;
        }
    }
}

int main(int argc, char* argv[])
{
    boost::thread thrd1(&reader);
    boost::thread thrd2(&writer);
    thrd1.join();
    thrd2.join();
    return 0;
}

4 Thread-Local Storage

Most functions are not reentrant. This means that while one thread is calling a function, it is not safe for another thread to call the same function at the same time. A non-reentrant function keeps static data across calls or returns a pointer to static data. For example, std::strtok is not reentrant because it uses a static variable to track its position in the string being tokenized.

There are two ways to make a non-reentrant function reentrant. The first is to change the interface so that a pointer or reference takes the place of the static data. For example, POSIX defines strtok_r, a reentrant variant of std::strtok, which replaces the static data with an additional char** parameter. This approach is simple and gives the best performance, but it changes the public interface, which means changing the calling code. The other approach keeps the public interface but replaces the static data with thread-local storage (sometimes called thread-specific storage).
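
A minimal sketch of the first approach, using the POSIX strtok_r mentioned above (this assumes a POSIX system where strtok_r is declared in <string.h>; the sample text is arbitrary):

#include <string.h>
#include <stdio.h>

int main(int argc, char* argv[])
{
    char text[] = "boost threads mutex condition";
    char* save = 0;   // per-call state: replaces std::strtok's hidden static variable

    // Each call passes &save, so nothing is shared between callers or threads.
    for (char* tok = strtok_r(text, " ", &save);
         tok != 0;
         tok = strtok_r(0, " ", &save))
    {
        printf("%s\n", tok);
    }
    return 0;
}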

The Boost thread library provides the smart pointer boost::thread_specific_ptr to access thread-local storage. The first time each thread uses an instance of this smart pointer, its value is NULL, so the code must check for that and assign a value first. The Boost thread library guarantees that the data held in thread-local storage is cleaned up when the thread finishes.

Example 5 is a simple example of using boost::thread_specific_ptr. Two threads are created; each initializes its thread-local storage and then loops 10 times, incrementing the value the smart pointer points to and writing it to std::cout (which is synchronized with a mutex because it is a shared resource). The main thread waits for both threads to finish. The output shows that each thread operates on its own instance of the data, even though both use the same boost::thread_specific_ptr.

Example 5:

#include <boost/thread/thread.hpp>
#include <boost/thread/mutex.hpp>
#include <boost/thread/tss.hpp>
#include <iostream>

boost::mutex io_mutex;
boost::thread_specific_ptr<int> ptr;

struct count
{
    count(int id) : id(id) {}

    void operator()()
    {
        if (ptr.get() == 0)
            ptr.reset(new int(0));

        for (int i = 0; i < 10; ++i)
        {
            (*ptr)++;
            boost::mutex::scoped_lock
                lock(io_mutex);
            std::cout << id << ": "
                      << *ptr << std::endl;
        }
    }

    int id;
};

int main(int argc, char* argv[])
{
    boost::thread thrd1(count(1));
    boost::thread thrd2(count(2));
    thrd1.join();
    thrd2.join();
    return 0;
}
5 Routines That Run Only Once

One problem remains unsolved: how to make initialization (such as a constructor call) thread-safe as well. For example, if an application needs a single global object, then because of instantiation-order issues a function is called to return a static object, and it must be guaranteed that the static object is constructed the first time the function is called. The problem is that if multiple threads call this function at the same time, the static object's constructor may be called more than once, which is an error.

The solution to this problem is the so-called "once routine": an operation that is executed only once within an application. If multiple threads attempt the operation at the same time, only one actually performs it, and the other threads must wait until it has finished. To guarantee that it runs only once, the routine is invoked indirectly through another function, which is passed a pointer to the routine and a special flag indicating whether the routine has already been called. The flag is statically initialized, which ensures it is initialized at compile time rather than at run time, so there is no danger of multiple threads initializing it simultaneously. The Boost thread library provides boost::call_once to support once routines, along with the flag type boost::once_flag and the macro BOOST_ONCE_INIT for initializing the flag.

Example 6 demonstrates boost::call_once. It defines a static global integer initialized to 0 and a static boost::once_flag instance initialized with BOOST_ONCE_INIT. The main function creates two threads, both of which try to "initialize" the global integer by passing a function that increments it to boost::call_once. The main function waits for both threads to finish and writes the final value to std::cout. The result shows that the operation was executed only once, because the value is 1.

Example 6:

#include <boost/thread/thread.hpp>
#include <boost/thread/once.hpp>
#include <iostream>

int i = 0;
boost::once_flag flag =
    BOOST_ONCE_INIT;

void init()
{
    ++i;
}

void thread()
{
    boost::call_once(&init, flag);
}

int main(int argc, char* argv[])
{
    boost::thread thrd1(&thread);
    boost::thread thrd2(&thread);
    thrd1.join();
    thrd2.join();
    std::cout << i << std::endl;
    return 0;
}
6 The Future of the Boost Thread Library

Several new features are planned for the Boost thread library. These include boost::read_write_mutex, which will allow multiple threads to read from shared data simultaneously while permitting only one thread at a time to write to it; boost::thread_barrier, which makes a group of threads wait until all of them have reached the barrier; and boost::thread_pool, which will allow small routines to be executed without the overhead of creating and destroying a thread for each one.

The Boost thread library has been submitted to the C++ Standards Committee for inclusion in the Standard Library Technical Report, a first step toward bringing it into the next version of the C++ standard. Committee members gave the initial draft of the Boost thread library a very favorable review, although they are also considering other multithreading libraries. They are interested in adding multithreading support to the C++ standard, so the future of multithreading in C++ looks bright.

7 References

The Boost.Threads Library, by Bill Kempf
http://www.boost.org

