Thread-Specific Storage for C++

Source: Internet
Author: User
Tags: posix

Citation Source: https://www.cse.wustl.edu/~schmidt/PDF/TSS-pattern.pdf

Summary:

Multithreading can theoretically improve program performance, but in practice a multithreaded program often performs worse than a single-threaded one because of the overhead of acquiring and releasing locks. In addition, multithreaded programming must avoid race conditions and deadlocks, which requires complex concurrency control protocols.

This article introduces the Thread-Specific Storage pattern, which addresses some of these performance and programming-complexity problems. The pattern lets multiple threads use one logically global access point to retrieve thread-specific data without incurring locking overhead on each access.

1 Intent

The Thread-Specific Storage pattern lets multiple threads use one logically global access point to retrieve thread-specific data without incurring locking overhead on each access.

2 Motivation

2.1 Context and forces

The Thread-Specific Storage pattern applies to objects that are frequently accessed by multiple threads, are logically global, but are physically private to each thread. For example, operating systems use errno to report error information: when a system call fails, the OS sets errno and returns an error status, and when the application detects the error status it examines errno to determine which type of error occurred. For example, the following code reads into a buffer from a non-blocking TCP socket:

```cpp
// One global errno per-process.
extern int errno;

void *worker(SOCKET socket)
{
    // Read from the network connection and process
    // the data until the connection is closed.
    for (;;) {
        char buffer[BUFSIZ];
        int result = recv(socket, buffer, BUFSIZ, 0);

        // Check to see if the recv() call failed.
        if (result == -1) {
            if (errno != EWOULDBLOCK)
                // Record error result in thread-specific data.
                printf("recv failed, errno = %d", errno);
        } else
            // Perform the work on success.
            process_buffer(buffer);
    }
}
```

If recv() returns -1, the code checks whether errno != EWOULDBLOCK and, if so, prints the error message; otherwise it processes the receive buffer.

2.2 Common Traps and pitfalls

Although the global-error-variable approach shown above works well for single-threaded applications, subtle problems arise in multithreaded applications. In particular, a race condition in a preemptive multithreaded system may cause an errno value set by a method in one thread to be read and misinterpreted by an application in another thread. Therefore, if multiple threads execute the worker function at the same time, the global version of errno may be set incorrectly due to a race condition.

For example, suppose two threads (T1 and T2) perform recv calls. T1's recv returns -1 and sets errno to EWOULDBLOCK, indicating that there is currently no data queued on the socket. Before T1 can examine this status, it is preempted and T2 runs. Suppose T2 is interrupted and sets errno to EINTR. If T2 is then immediately preempted, T1 resumes and incorrectly concludes that its recv call was interrupted, performing the wrong error handling. This program is both incorrect and non-portable, because its behavior depends on the order in which the threads execute.

The root of the problem is that setting and testing the global errno variable takes two steps: (1) the recv call sets the variable, and (2) the application tests it. A simple lock around errno therefore does not solve the race condition, because the set/test sequence spans multiple operations and is not atomic.

One way to solve this problem is to devise a more sophisticated locking protocol. For example, the recv call could internally acquire an errno mutex, which the application releases after it tests the errno value returned by recv. However, if the application fails to release the lock, starvation and deadlock can result. In addition, if applications must check the error status after every library call, the extra locking overhead can significantly degrade performance even when multiple threads are not used.

2.3 Solution: Thread-Specific Storage

(1) Efficiency: thread-specific storage allows sequential methods within a thread to atomically access thread-specific objects without incurring locking overhead on each access.
(2) Simplified application programming: thread-specific storage is easy for application programmers to use, because system developers can make it entirely transparent through data abstraction or macros at the source-code level.
(3) High portability: thread-specific storage is available on most multithreaded OS platforms and can be implemented easily on platforms that lack it.
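The thread-local errno idea can be sketched with C++11's thread_local keyword, which gives each thread its own copy of a variable behind a single global name. The names my_errno, fake_recv_fail, and run_demo below are invented for illustration; they are not from the paper.

```cpp
#include <cassert>
#include <thread>

// Hypothetical per-thread error variable: each thread gets its own copy,
// so a value set in one thread can never be observed by another.
thread_local int my_errno = 0;

// Simulates a failing system call that reports an error code.
void fake_recv_fail(int code) { my_errno = code; }

// Each worker sets and then reads back its own code. With a shared global
// this check could fail under preemption; with thread_local it cannot.
bool worker_sees_own_code(int code) {
    fake_recv_fail(code);
    return my_errno == code;
}

int run_demo() {
    bool t1_ok = false, t2_ok = false;
    std::thread t1([&] { t1_ok = worker_sees_own_code(11); });  // EWOULDBLOCK-like
    std::thread t2([&] { t2_ok = worker_sees_own_code(4);  });  // EINTR-like
    t1.join();
    t2.join();
    return (t1_ok && t2_ok) ? 1 : 0;
}
```

The two threads can interleave in any order; because each touches only its own copy of my_errno, the race described above disappears without any locking.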

3 Applicability

The thread-specific Storage mode is applicable when the application has the following characteristics:

(1) It was originally written assuming a single thread of control and is being ported to a multithreaded environment without changing the existing API.
(2) It contains multiple preemptive threads of control that can execute concurrently in any scheduling order.
(3) Each thread of control invokes a sequence of methods that share data common only to that thread.
(4) Data shared by objects within each thread must be accessed through a globally visible access point that is "logically" shared with other threads but "physically" unique to each thread.
(5) Data is passed implicitly between methods rather than explicitly through parameters.

Do not apply thread-specific Storage pattern when the application has the following characteristics:

(1) Multiple threads collaborate on a single task that requires concurrent access to shared data. For example, a multithreaded application may perform concurrent reads and writes on an in-memory database. In this case, threads must share records and tables that are not thread-specific. If thread-specific storage were used to hold the database, threads could not share the data. Therefore, synchronization primitives (for example, mutexes) must control access to the database records so that threads can collaborate on shared data.
(2) The physical and logical separation of the data is more intuitive and efficient without the pattern. For example, by explicitly passing data as a parameter to all methods, the data can be confined to each thread's scope. In this case, the Thread-Specific Storage pattern may not be required.

4 Structure and participants

Application Threads

Application threads use TS Object Proxies to obtain TS Objects held in thread-specific storage.

Thread-specific (TS) Object Proxy

The TS Object Proxy defines the interface of a TS Object and is responsible for providing a separate object to each application thread through the getspecific and setspecific methods.
A TS Object Proxy instance mediates access to a thread-specific TS Object. For example, multiple threads may use the same TS Object Proxy to obtain their thread-specific errno variables. The TS Object Collection uses key-value storage; the key is created by the proxy and passed to the collection through the getspecific and setspecific methods.

The purpose of TS Object Proxies is to hide the keys and the TS Object Collection. Without the proxy, application threads would have to obtain the collection and use the keys explicitly.
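The proxy idea can be sketched in modern C++, where thread_local stands in for the key-and-collection machinery that a full implementation would hide. The class and function names below (TSProxy, bump_n_times) are invented for illustration and are not from the paper.

```cpp
#include <cassert>
#include <thread>

// Minimal sketch of a TS Object Proxy: the proxy itself is the logically
// global access point, while the object it hands out is physically unique
// to the calling thread. The key and collection are hidden inside; here
// a thread_local variable plays the role of the per-thread collection.
template <typename T>
class TSProxy {
public:
    // Each calling thread transparently gets its own lazily constructed T.
    T* operator->() { return get(); }
    T& operator*()  { return *get(); }
private:
    T* get() {
        thread_local T instance{};  // per-thread storage, no locking needed
        return &instance;
    }
};

struct Counter { int value = 0; };

// One logically global proxy shared by all threads.
TSProxy<Counter> counter;

// Increments look like operations on a global object, but each thread
// only ever sees its own Counter.
int bump_n_times(int n) {
    for (int i = 0; i < n; ++i) counter->value++;
    return counter->value;
}
```

Because operator-> hides the lookup, client code reads as if Counter were an ordinary global, which is exactly the transparency the pattern aims for.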

Thread-specific (TS) Object

A TS Object is a thread-specific object instance; for example, a thread-specific errno is of type int. It is managed by the TS Object Collection and can only be obtained through a TS Object Proxy.

Thread-specific (TS) Object Collection

In a complex multithreaded application, a thread's errno value may be only one of many pieces of data residing in thread-specific storage. Therefore, for a thread to retrieve its thread-specific error data, it must use a key. This key must be associated with errno so the thread can access the correct entry in the TS Object Collection.

The TS Object Collection contains all the thread-specific objects belonging to a thread. Each thread has a unique TS Object Collection, which maps keys to thread-specific TS Objects. A TS Object Proxy uses a key to retrieve a specific TS Object from the collection via get_object(key) and set_object(key).

5 collaborations

The interaction diagram in Figure 3 illustrates the following collaboration between participants in a thread-specific storage pattern:

(1) Locate the TS Object Collection: each application thread method uses the TS Object Proxy's getspecific and setspecific methods to obtain the TS Object Collection, which is stored either inside the thread itself or in a global structure indexed by thread ID.
(2) Get the TS Object from thread-specific storage: once the TS Object Collection is obtained, the TS Object Proxy uses the key to retrieve the TS Object from the collection.
(3) Set/get TS Object state: the application thread manipulates the TS Object with ordinary C++ method calls. No lock is required, because the object is referenced through a pointer that is accessed only within the calling thread.

6 Consequences

6.1 Benefits

There are several benefits to using thread-specific storage patterns, including:

Efficiency:

The Thread-Specific Storage pattern can be implemented so that no locking is needed to access thread-specific data. For example, by placing errno in thread-specific storage, each thread can reliably set and test the completion status of methods within that thread without using a complex synchronization protocol. This eliminates locking overhead for data shared within a thread, which is faster than acquiring and releasing a mutex.

Easy to use:

Thread-specific storage is easy for application programmers to use, because system developers can expose it at the source level through data abstraction or macros.

7 implementation

Thread-specific storage patterns can be implemented in a variety of ways. This section describes each step that is required to implement the pattern. The steps are summarized as follows:

(1) Establish the TS Object Collection:
If the operating system does not provide a thread-specific storage implementation, it can be built with whatever mechanisms are available, while maintaining the consistency of the data structures in the TS Object Collection.

(2) Encapsulate the details of thread-specific storage:
Thread-specific storage interfaces are usually weakly typed and error-prone. Therefore, once a thread-specific storage implementation is available, use C++ language features (such as templates and overloading) to hide its low-level details behind an OO API.
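One classic way to hide the low-level details at the source level is a macro over a function that returns the address of a per-thread variable, in the style of glibc's `#define errno (*__errno_location())`. The sketch below uses C++11 thread_local in place of a raw key-based API; my_errno_location and my_errno are hypothetical names for illustration.

```cpp
#include <cassert>
#include <thread>

// Returns the address of this thread's private error variable.
// (Hypothetical name; real libraries use e.g. __errno_location().)
int* my_errno_location() {
    thread_local int e = 0;   // one instance per thread
    return &e;
}

// Existing code that reads or assigns `my_errno` keeps compiling
// unchanged, but now transparently touches a per-thread variable.
#define my_errno (*my_errno_location())

int set_and_read(int code) {
    my_errno = code;      // looks like a plain global assignment
    return my_errno;      // reads this thread's copy only
}
```

The macro keeps the familiar "global variable" syntax while routing every access through thread-specific storage, which is exactly the transparency this implementation step calls for.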

7.1 Designing the TS Object collection

The collection is a table of pointers to TS Objects, indexed by keys. A thread must find its TS Object Collection before it can access a thread-specific object by key. Therefore, the first design challenge is determining how to locate and store TS Object Collections.

A collection of TS objects can (1) be stored outside all threads, or (2) stored inside each thread. Each of these methods is described and evaluated below:

(1) External to all threads:

This approach defines a global mapping from each thread's ID to its TS Object Collection table. Finding the correct collection may require a reader/writer lock to prevent race conditions. However, once a collection is found, no additional locking is required, because only one thread is ever active within a given TS Object Collection.

(2) Internal to each thread: This approach requires each thread in the process to store its TS Object Collection alongside its other internal state, such as the run-time thread stack, program counter, general-purpose registers, and thread ID. When a thread accesses a thread-specific object, the object is retrieved by using the corresponding key as an index into the thread's internal TS Object Collection. This approach requires no additional locking.

For both external and internal implementations, if the range of thread-specific keys is relatively small, the TS Object Collection can be stored as a fixed-size array. For example, the POSIX Pthreads standard defines the minimum number of keys that a compliant implementation must support (POSIX_THREAD_KEYS_MAX). If the size is fixed (for example, 128 keys, the POSIX default), lookup takes O(1) time: the object's key simply indexes into the TS Object Collection array.
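The fixed-size-array design can be sketched as follows. This is an illustrative toy, not a POSIX implementation: key allocation is a process-wide atomic counter, keys are never reused, and the per-thread collection is a thread_local array of untyped slots. All names (MAX_KEYS, create_key, set_object, get_object) are invented for the sketch.

```cpp
#include <atomic>
#include <cassert>

// 128 keys, matching the POSIX minimum mentioned above.
constexpr int MAX_KEYS = 128;

// Process-global counter handing out keys (no reuse in this sketch).
std::atomic<int> next_key{0};

int create_key() { return next_key.fetch_add(1); }

// The per-thread TS Object Collection: a fixed-size array of untyped
// object slots. Lookup is a plain O(1) array index with no locking,
// because each thread sees only its own copy of the array.
void** collection() {
    thread_local void* slots[MAX_KEYS] = {};
    return slots;
}

void set_object(int key, void* obj) { collection()[key] = obj; }
void* get_object(int key)           { return collection()[key]; }
```

Only key creation touches shared state (the atomic counter); every subsequent set/get is a lock-free array access, which is the efficiency argument made throughout the paper.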

However, the range of thread-specific keys can be large. For example, Solaris threads place no predefined limit on the number of keys. Solaris therefore uses a variable-size data structure, which can increase the time needed to manage TS Object Collections.

Thread IDs can range from very small to very large values. This poses no problem for the internal implementation, because the thread ID is implicitly associated with the corresponding TS Object Collection contained in the thread's state.

However, for an external implementation, a fixed-size array with an entry for every possible thread ID value may be impractical. Instead, it is more space-efficient to map thread IDs to TS Object Collections with a dynamic data structure. For example, one approach is to apply a hash function to the thread ID to obtain an offset into a table of hash buckets, each holding a chain of tuples that map thread IDs to their corresponding TS Object Collections.
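A minimal sketch of this external variant, using the standard library's hashed containers and a reader/writer lock (C++17), might look like the following. The names Collection, table, and my_collection are invented for illustration; a real implementation would also handle thread exit and key reclamation.

```cpp
#include <cassert>
#include <map>
#include <mutex>
#include <shared_mutex>
#include <thread>

// One process-global table maps a thread's ID to that thread's
// TS Object Collection. Locating the collection needs a reader/writer
// lock, but the objects inside it are only ever touched by one thread.
using Collection = std::map<int, void*>;   // key -> TS Object

std::shared_mutex table_lock;
std::map<std::thread::id, Collection> table;

Collection& my_collection() {
    std::thread::id self = std::this_thread::get_id();
    {   // Fast path: shared (reader) lock while looking up an existing entry.
        std::shared_lock<std::shared_mutex> rd(table_lock);
        auto it = table.find(self);
        if (it != table.end()) return it->second;
    }
    // Slow path: exclusive (writer) lock to insert this thread's collection.
    std::unique_lock<std::shared_mutex> wr(table_lock);
    return table[self];
}
```

The lock is held only while locating the collection, matching the paper's observation that no further locking is needed once a thread has found its own collection.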

The internal approach stores TS Object Collections locally with each thread, whereas the external approach stores them globally. Depending on how the external table is implemented, the global location may allow threads to access other threads' TS Object Collections. Although this appears to violate the whole point of thread-specific storage, it is useful when the implementation provides automatic garbage collection by reclaiming unused keys. This feature is particularly important for implementations that limit the number of keys to a small value.

However, an external table increases the access time for every thread-specific object, because a synchronization mechanism (such as a reader/writer lock) is needed to avoid race conditions whenever the globally accessible table is modified (for example, when a new key is created). On the other hand, keeping TS Object Collections locally in each thread's state requires more per-thread storage, although the total memory consumption is comparable.

8 Boost::thread_specific_ptr

The Boost library provides thread_specific_ptr, an implementation of the thread-specific storage mechanism:

```cpp
// #include <boost/thread/tss.hpp>

namespace boost {
    template <typename T>
    class thread_specific_ptr {
    public:
        thread_specific_ptr();
        explicit thread_specific_ptr(void (*cleanup_function)(T*));
        ~thread_specific_ptr();

        T* get() const;
        T* operator->() const;
        T& operator*() const;

        T* release();
        void reset(T* new_value = 0);
    };
}
```
thread_specific_ptr();

Requires:

delete this->get() is well-formed.

Effects:

Construct a thread_specific_ptr object for storing a pointer to an object of type T specific to each thread. The default delete-based cleanup function will be used to destroy any thread-local objects when reset() is called, or the thread exits.

Throws:

boost::thread_resource_error if an error occurs.

explicit thread_specific_ptr(void (*cleanup_function)(T*));
Requires:

cleanup_function(this->get()) does not throw any exceptions.

Effects:

Construct a thread_specific_ptr object for storing a pointer to an object of type T specific to each thread. The supplied cleanup_function will be used to destroy any thread-local objects when reset() is called, or the thread exits.

Throws:

boost::thread_resource_error if an error occurs.

~thread_specific_ptr();
Requires:

All the thread-specific instances associated with this thread_specific_ptr (except perhaps the one associated with the current thread) must be null.

Effects:

Calls this->reset() to clean up the value associated with the current thread, and destroys *this.

Throws:

Nothing.

Remarks:

The requirement is due to the fact that, in order to delete all these instances, the implementation would be forced to maintain a list of all the threads that have an associated specific pointer, which runs counter to the goal of thread-specific data.

T* get() const;
Returns:

The pointer associated with the current thread.

Throws:

Nothing.

Note:

The initial value associated with an instance of boost::thread_specific_ptr is NULL in each thread.

T* operator->() const;
Returns:

this->get()

Throws:

Nothing.

T& operator*() const;
Requires:

this->get() is not null.

Returns:

*(this->get())

Throws:

Nothing.

void reset(T* new_value=0);  
Effects:

If this->get() != new_value and this->get() is non-NULL, invoke delete this->get() or cleanup_function(this->get()) as appropriate. Store new_value as the pointer associated with the current thread.

Postcondition:

this->get()==new_value

Throws:

boost::thread_resource_errorIf an error occurs.

T* release();

Effects:

Return this->get() and store NULL as the pointer associated with the current thread, without invoking the cleanup function.

Postcondition:

this->get()==0

Throws:

Nothing.
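To make the get()/reset()/release() contract above concrete, here is a minimal stand-in built on C++11 thread_local. This is an illustration of the documented semantics, not Boost's actual implementation: it omits custom cleanup functions and cleanup on thread exit, and for simplicity all tss_ptr<T> instances of the same T share one per-thread slot. The name tss_ptr is invented to avoid suggesting this is Boost code.

```cpp
#include <cassert>

// Minimal stand-in mimicking boost::thread_specific_ptr's documented
// get()/reset()/release() behavior using thread_local storage.
template <typename T>
class tss_ptr {
public:
    // The pointer associated with the current thread; initially null.
    T* get() const        { return slot(); }
    T* operator->() const { return get(); }
    T& operator*() const  { return *get(); }

    // Deletes the old value (if any, and if different) and stores new_value.
    void reset(T* new_value = nullptr) {
        T*& p = slot();
        if (p != new_value) delete p;
        p = new_value;
    }

    // Hands ownership back to the caller and leaves the slot null,
    // without invoking the cleanup.
    T* release() {
        T*& p = slot();
        T* old = p;
        p = nullptr;
        return old;
    }

private:
    static T*& slot() {
        thread_local T* p = nullptr;  // one slot per thread (and per T here)
        return p;
    }
};
```

Each thread that touches the pointer starts from NULL, reset() owns and destroys the previous value, and release() transfers ownership back to the caller, matching the postconditions documented above.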

  
