(10) Learning APUE (Advanced Programming in the UNIX Environment) together: thread control


Directory

(1) Learning APUE together: standard I/O

(2) Learning APUE together: file I/O

(3) Learning APUE together: files and directories

(4) Learning APUE together: system data files and information

(5) Learning APUE together: the process environment

(6) Learning APUE together: process control

(7) Learning APUE together: process relationships and daemon processes

(8) Learning APUE together: signals

(9) Learning APUE together: threads

(10) Learning APUE together: thread control

 

 

Previously, we always created threads with the default attributes. This chapter mainly discusses customizing thread attributes.

The default attributes can handle most of the problems you will run into, so custom attributes are rarely used in real projects.

 

1. Thread attributes

The attributes in the table on page 341 of APUE (3rd edition) can be used to influence how many threads a process is able to create, but the macros that limit the number of threads should not be taken too seriously: as we mentioned in the previous post, the number of threads that can actually be created is affected by many factors and is not necessarily determined by those macro values.

The thread attribute object is represented by the pthread_attr_t type.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>

static void *func(void *p)
{
    puts("Thread is working.");

    pthread_exit(NULL);
}

int main()
{
    pthread_t tid;
    int err, i;
    pthread_attr_t attr;

    pthread_attr_init(&attr);
    /* modify the stack size of each thread */
    pthread_attr_setstacksize(&attr, 1024 * 1024);

    for (i = 0; ; i++)
    {
        /* test how many threads the current process can create */
        err = pthread_create(&tid, &attr, func, NULL);
        if (err)
        {
            fprintf(stderr, "pthread_create(): %s\n", strerror(err));
            break;
        }
    }

    printf("i = %d\n", i);

    pthread_attr_destroy(&attr);

    exit(0);
}

 

The preceding example uses a thread attribute to modify the stack space allocated to each thread, so the number of threads that can be created differs from the default case.

A thread attribute object is initialized with the pthread_attr_init(3) function and destroyed with the pthread_attr_destroy(3) function when it is no longer needed.

Thread attributes can be used not only to set a thread's stack size but also to create detached threads, via pthread_attr_setdetachstate(3), as in the sketch below.
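Here is a minimal sketch (my own, not from the book) of creating a detached thread with pthread_attr_setdetachstate(3); the sleep(3) at the end is just a crude way to keep the process alive long enough for the detached thread to run:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <pthread.h>
#include <unistd.h>

static void *func(void *p)
{
    puts("Detached thread is working.");
    pthread_exit(NULL);
}

int main()
{
    pthread_t tid;
    pthread_attr_t attr;
    int err;

    pthread_attr_init(&attr);
    /* a detached thread releases its resources by itself and must not be joined */
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);

    err = pthread_create(&tid, &attr, func, NULL);
    if (err)
    {
        fprintf(stderr, "pthread_create(): %s\n", strerror(err));
        exit(1);
    }
    pthread_attr_destroy(&attr);

    sleep(1);   /* crude: give the detached thread time to finish */
    exit(0);
}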

 

2. Mutex attributes

The mutex attribute is represented by the pthread_mutexattr_t type. Like the thread attribute, it must be initialized before use and destroyed after use.

The pthread_mutexattr_init(3) function initializes a mutex attribute object. Its usage is similar to that of the thread attribute functions.

 

pthread_mutexattr_getpshared, pthread_mutexattr_setpshared - get and
       set the process-shared attribute

#include <pthread.h>

int pthread_mutexattr_getpshared(const pthread_mutexattr_t *
       restrict attr, int *restrict pshared);

int pthread_mutexattr_setpshared(pthread_mutexattr_t *attr,
       int pshared);

 

The "pshared" in the function names refers to process-shared. These two functions get and set whether a mutex initialized with this attribute can be used across processes. That may sound a bit strange: how can a mutex be used across processes? Don't worry; let's first look at the clone(2) function.

clone, __clone2 - create a child process

#define _GNU_SOURCE
#include <sched.h>

int clone(int (*fn)(void *), void *child_stack,
          int flags, void *arg, ...
          /* pid_t *ptid, struct user_desc *tls, pid_t *ctid */ );

 

If CLONE_FILES is set in the flags argument of clone(2), the parent and child share the file descriptor table. Normally a shared file descriptor table is something threads have, because multiple threads run within the address space of the same process.

Although the man page describes clone(2) as creating a child process, if the flags are set for extreme separation (every resource exclusive), the result is equivalent to creating a child process; if the flags are set for extreme sharing (every resource shared), the result is equivalent to creating a sibling thread. In this sense the kernel has no real notion of a process, only of threads; whether you created a process or a thread makes no difference to kernel scheduling.

If you need to create "something" that shares some resources with the current thread while owning other resources exclusively, you can use clone(2) to create this "thing" that is neither quite a thread nor quite a process, because to the kernel the boundary between processes and threads is inherently blurry. A sketch follows.
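As an illustration only (this example is mine, not APUE's), the sketch below uses clone(2) to create such a "thing"; the flag combination is an assumption chosen so that the child shares the parent's address space and file descriptor table but is still reaped like an ordinary child process:

#define _GNU_SOURCE
#include <sched.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/wait.h>
#include <unistd.h>

#define STACK_SIZE (1024 * 1024)

static int child_fn(void *arg)
{
    /* because of CLONE_VM, this assignment is visible to the parent */
    *(int *)arg = 42;
    return 0;
}

int main()
{
    int shared = 0;
    char *stack = malloc(STACK_SIZE);

    if (stack == NULL)
        exit(1);

    /* the stack grows downward, so pass the top of the allocated block */
    pid_t pid = clone(child_fn, stack + STACK_SIZE,
                      CLONE_VM | CLONE_FS | CLONE_FILES | SIGCHLD, &shared);
    if (pid == -1)
    {
        perror("clone()");
        exit(1);
    }

    waitpid(pid, NULL, 0);              /* reaped like a child process */
    printf("shared = %d\n", shared);    /* prints 42: the memory was shared */
    free(stack);
    exit(0);
}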

Now we can understand how the pthread_mutexattr_setpshared(3) function mentioned above lets a mutex be used across processes: any processes that share the memory in which the mutex lives can all operate on it. A sketch is given below.
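A minimal sketch of the idea (my own, not from the book), assuming the mutex is placed in anonymous shared memory created with mmap(2) so that a parent and its fork(2)ed child can both lock it:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

int main()
{
    /* the mutex must live in memory visible to both processes */
    pthread_mutex_t *mtx = mmap(NULL, sizeof(*mtx),
                                PROT_READ | PROT_WRITE,
                                MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (mtx == MAP_FAILED)
    {
        perror("mmap()");
        exit(1);
    }

    pthread_mutexattr_t attr;
    pthread_mutexattr_init(&attr);
    /* the attribute discussed above: allow use across processes */
    pthread_mutexattr_setpshared(&attr, PTHREAD_PROCESS_SHARED);
    pthread_mutex_init(mtx, &attr);
    pthread_mutexattr_destroy(&attr);

    pid_t pid = fork();
    if (pid == 0)                       /* child */
    {
        pthread_mutex_lock(mtx);
        puts("child holds the mutex");
        pthread_mutex_unlock(mtx);
        _exit(0);
    }

    pthread_mutex_lock(mtx);            /* parent */
    puts("parent holds the mutex");
    pthread_mutex_unlock(mtx);
    wait(NULL);

    pthread_mutex_destroy(mtx);
    munmap(mtx, sizeof(*mtx));
    exit(0);
}

Compile and link with -pthread, as with the other examples in this series.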

 

There are four mutex types, and they behave differently in three situations. Figure 12-5 on page 347 of APUE (3rd edition) summarizes this; it is reproduced here.

Mutex type                                  Relock without unlocking   Unlock when not owned   Unlock when already unlocked
PTHREAD_MUTEX_NORMAL (normal)               deadlock                   undefined               undefined
PTHREAD_MUTEX_ERRORCHECK (error checking)   error returned             error returned          error returned
PTHREAD_MUTEX_RECURSIVE (recursive)         allowed                    error returned          error returned
PTHREAD_MUTEX_DEFAULT (the usual default)   undefined                  undefined               undefined

Table 1: mutex types

Let me explain what the column headings mean:

1) Relock without unlocking: the mutex is already locked by this thread, and the same thread locks it again;

2) Unlock when not owned: the mutex was locked by another thread, and you try to unlock it;

3) Unlock when already unlocked: the mutex is already unlocked, and it is unlocked again.

A small sketch of the error-check and recursive cases follows.
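The sketch below (mine, not from the book) demonstrates the first column of the table for the error-check and recursive types; on typical implementations the error-check relock reports EDEADLK:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main()
{
    pthread_mutexattr_t attr;
    pthread_mutex_t mtx;
    int err;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&mtx, &attr);

    pthread_mutex_lock(&mtx);
    err = pthread_mutex_lock(&mtx);                       /* relock without unlocking */
    printf("error-check relock: %s\n", strerror(err));    /* typically EDEADLK */
    pthread_mutex_unlock(&mtx);
    pthread_mutex_destroy(&mtx);

    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&mtx, &attr);

    pthread_mutex_lock(&mtx);
    err = pthread_mutex_lock(&mtx);                       /* allowed for a recursive mutex */
    printf("recursive relock: %s\n", err ? strerror(err) : "OK");
    pthread_mutex_unlock(&mtx);
    pthread_mutex_unlock(&mtx);                           /* must unlock as many times as locked */
    pthread_mutex_destroy(&mtx);

    pthread_mutexattr_destroy(&attr);
    exit(0);
}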

 

3. Reentrancy

We first encountered the idea of reentrancy back in the chapter on signals.

If a function can be safely called by multiple threads at the same point in time, it is called thread-safe.

After the threading standard was established, POSIX required library functions to support thread safety. Variants that are explicitly not thread-safe carry the _unlocked suffix, and for interfaces that cannot be made thread-safe as they are, a thread-safe replacement is published with the _r suffix appended to the function name.

We have already seen many functions with the _r suffix in the man pages.
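For example, localtime(3) returns a pointer to static storage and is therefore not thread-safe, while localtime_r(3) writes into a caller-supplied buffer, so each thread can use its own. A minimal sketch:

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main()
{
    time_t now = time(NULL);
    struct tm result;                   /* caller-owned buffer, one per thread if needed */
    char buf[64];

    if (localtime_r(&now, &result) == NULL)
        exit(1);

    strftime(buf, sizeof(buf), "%Y-%m-%d %H:%M:%S", &result);
    printf("%s\n", buf);
    exit(0);
}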

 

4. Thread-specific data

Thread-specific data is a mechanism that lets certain data work under multithreaded concurrency. The most typical example is errno: errno was originally a global variable and is now a macro definition.

Let's run errno through the preprocessor to see its true face.

#include <errno.h>

errno;

 

>$ gcc -E errno.c
# 2 "errno.c" 2

(*__errno_location ());
>$
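As the preprocessor output shows, errno expands to an lvalue obtained through __errno_location(), which gives each thread its own copy. The general-purpose POSIX interface for data like this is pthread_key_create(3) / pthread_setspecific(3) / pthread_getspecific(3); the sketch below (my own, not from the book) stores a private integer per thread:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

static pthread_key_t key;

static void destructor(void *p)
{
    free(p);                            /* called for each thread's value when it exits */
}

static void *func(void *arg)
{
    int *p = malloc(sizeof(int));

    *p = (int)(long)arg;
    pthread_setspecific(key, p);        /* private to this thread */

    printf("thread %d sees %d\n", (int)(long)arg,
           *(int *)pthread_getspecific(key));
    pthread_exit(NULL);
}

int main()
{
    pthread_t tid[2];
    long i;

    pthread_key_create(&key, destructor);
    for (i = 0; i < 2; i++)
        pthread_create(tid + i, NULL, func, (void *)i);
    for (i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);

    pthread_key_delete(key);
    exit(0);
}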

 

 

5. Thread Cancellation

As we said in the previous post, the pthread_cancel(3) function only submits a cancellation request; it cannot forcibly cancel a thread.

The response to cancellation falls into two cases: cancellation is allowed, or cancellation is not allowed.

Whether cancellation is allowed is decided by the thread being canceled, not by the caller of pthread_cancel(3).

There is nothing more to say about the disallowed case, so let's talk about the allowed case.

When cancellation is allowed, there are two modes: asynchronous cancellation and deferred cancellation (the default).

1) Asynchronous cancellation: the thread may be canceled at any time. This is closer to how the kernel operates and is not discussed further here.

2) Deferred cancellation: the cancellation is postponed until the thread reaches a cancellation point, where the pending request is acted upon. A cancellation point is simply one of a set of functions; once a cancellation request has been received, the code beyond the next cancellation point will not be executed.

Figure 12-14 in the third edition of APUE lists the cancellation points defined by POSIX; they are mostly system calls that may block. Figure 12-15 on page 363 lists the optional cancellation points defined by POSIX. Whether a particular function actually acts as a cancellation point depends on the platform's implementation.

Why adopt a deferred cancellation policy instead of honoring the request immediately, wherever the thread happens to be? The pseudocode below illustrates the problem:

thr_func()
{
    p = malloc();
                                  <-- a cancellation request arrives here

    pthread_cleanup_push();       --> registers free(p)     // not a cancellation point, keep executing

    fd1 = open();                 // cancellation point: the pending request is honored here,
                                  // after the cleanup handler above has been registered
    pthread_cleanup_push();       --> registers close(fd1)

    fd2 = open();
    pthread_cleanup_push();       --> registers close(fd2)

    pthread_exit();
}

 

A cancellation request may arrive at any moment while the thread is executing this function. Suppose the function has just allocated memory with malloc(3) and the request arrives before the cleanup handler can be registered. If the request were honored immediately, that memory would leak. But the macro pthread_cleanup_push, which registers the cleanup handler, is not a cancellation point, so the request is postponed and the thread keeps working. After registering the handler it goes on to call open(2). Because open(2) is a cancellation point, the request takes effect there: the thread is canceled, and the memory allocated by malloc(3) is released by the cleanup handler. This is the most obvious benefit of deferred cancellation.
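The pseudocode above can be turned into compilable code. This is only a sketch, and /dev/null is used purely as a convenient file to open:

#include <fcntl.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void free_memory(void *p)
{
    free(p);
}

static void close_fd(void *p)
{
    close(*(int *)p);
}

static void *thr_func(void *arg)
{
    char *p = malloc(1024);
    pthread_cleanup_push(free_memory, p);       /* registers free(p) */

    int fd = open("/dev/null", O_RDONLY);       /* open(2) is a cancellation point */
    pthread_cleanup_push(close_fd, &fd);        /* registers close(fd) */

    pause();                                    /* another cancellation point */

    pthread_cleanup_pop(1);                     /* close_fd */
    pthread_cleanup_pop(1);                     /* free_memory */
    pthread_exit(NULL);
}

int main()
{
    pthread_t tid;

    pthread_create(&tid, NULL, thr_func, NULL);
    sleep(1);                                   /* crude: let the thread reach pause() */
    pthread_cancel(tid);
    pthread_join(tid, NULL);
    exit(0);
}

When the cancellation takes effect at pause(), the two cleanup handlers run in reverse order of registration, closing the descriptor and then freeing the memory.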

The pthread_setcancelstate(3) function modifies a thread's cancelability state, which can be set to enabled (PTHREAD_CANCEL_ENABLE) or disabled (PTHREAD_CANCEL_DISABLE).

The pthread_setcanceltype(3) function modifies the cancellation type, choosing between asynchronous and deferred cancellation.

The pthread_testcancel(3) function places a cancellation point manually. If a thread spends ten minutes doing pure computation without calling any function, it cannot respond to cancellation; to let such a thread respond, we can place cancellation points in it with this function.
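A minimal sketch of that idea: the worker does nothing but arithmetic, so pthread_testcancel(3) is placed in the loop to give it a cancellation point (the sleep(3) in main is just a crude way to let the worker start spinning first):

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static void *func(void *p)
{
    unsigned long n = 0;

    for (;;)                            /* long-running computation, no blocking calls */
    {
        n++;
        if (n % 1000000 == 0)
            pthread_testcancel();       /* manually placed cancellation point */
    }
    pthread_exit(NULL);
}

int main()
{
    pthread_t tid;

    pthread_create(&tid, NULL, func, NULL);
    sleep(1);
    pthread_cancel(tid);
    pthread_join(tid, NULL);
    puts("computation thread canceled");
    exit(0);
}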

 

6. Threads and signals

Figure 1: thread-level signal bitmaps

In the earlier post on signals, I sketched the signal-handling process and, for simplicity, drew a thread's standard signals as just two bitmaps. In reality, each thread holds its own mask bitmap and its own pending bitmap, while the process holds a pending bitmap but no mask bitmap. Before returning from kernel mode to user mode, the current thread first checks the process-level pending bitmap against its own mask bitmap and handles any deliverable signal it finds; it then checks its own pending bitmap against its own mask bitmap and handles the corresponding signals.

Therefore, whichever thread happens to be scheduled at the time is the one that responds to a process-level signal.

It also follows that threads can send signals to each other.

 

pthread_kill - send a signal to a thread

#include <signal.h>

int pthread_kill(pthread_t thread, int sig);

Compile and link with -pthread.

The pthread_kill(3) function sends a signal at the thread level: thread specifies the thread to send the signal to, and sig the signal to send.

Since this function is easy to use, I won't walk through a separate example of it on its own; you can write and try one yourself.

 

The pthread_sigmask(3) function manipulates the thread-level signal mask bitmap. It is used much like sigsetmask(3), and you can experiment with it on your own.
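Here is one possible sketch, written by me rather than taken from the book: the main thread blocks SIGUSR1 with pthread_sigmask(3) before creating the worker (which inherits the mask), the worker waits for the signal with sigwait(3), and the main thread delivers it with pthread_kill(3):

#include <pthread.h>
#include <signal.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static sigset_t set;

static void *func(void *p)
{
    int sig;

    sigwait(&set, &sig);                /* wait for a signal from the set */
    printf("worker got signal %d\n", sig);
    pthread_exit(NULL);
}

int main()
{
    pthread_t tid;

    sigemptyset(&set);
    sigaddset(&set, SIGUSR1);
    pthread_sigmask(SIG_BLOCK, &set, NULL);     /* modify the thread-level mask */

    pthread_create(&tid, NULL, func, NULL);
    sleep(1);                                   /* crude: let the worker block in sigwait() */
    pthread_kill(tid, SIGUSR1);                 /* send the signal to that thread */

    pthread_join(tid, NULL);
    exit(0);
}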

 

7. Threads and fork

This section mainly discusses how fork(2) behaves somewhat differently across platforms.

Historically there have been two main camps: one implements fork with copy-on-write, the other uses a strategy similar to vfork(2).

Both strategies were discussed in the earlier post on process relationships, so I won't go over them again here; interested readers can go back and read that description.

 

8. Threads and I/O

This section mainly introduces the pread(2) and pwrite(2) functions, which are not used very often in practice. If you are interested, read the introduction in the book or the man pages; we won't discuss them at length here, and you can leave a comment if you have questions.
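For reference, a minimal sketch of pread(2): unlike lseek(2) followed by read(2), it takes the offset as a parameter and does not move the shared file offset, which is exactly why it is convenient when several threads read from the same descriptor (the file /etc/passwd and the offset 32 below are arbitrary choices for illustration):

#include <fcntl.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int main()
{
    char buf[16];
    ssize_t n;
    int fd = open("/etc/passwd", O_RDONLY);     /* any readable file will do */

    if (fd < 0)
    {
        perror("open()");
        exit(1);
    }

    /* read 16 bytes starting at offset 32 without touching the file offset */
    n = pread(fd, buf, sizeof(buf), 32);
    if (n < 0)
    {
        perror("pread()");
        exit(1);
    }
    printf("read %zd bytes at offset 32\n", n);

    close(fd);
    exit(0);
}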

 

This concludes the introduction to POSIX threads. POSIX is not the only threading standard on *nix platforms; other standards, such as OpenMP, define different threading models.

 

9. OpenMP Standard

We use the OpenMP standard to write a Hello World Program.

#include <stdio.h>
#include <stdlib.h>
#include <omp.h>

int main()
{
#pragma omp parallel sections
    {
#pragma omp section
        printf("[%d]:Hello\n", omp_get_thread_num());
#pragma omp section
        printf("[%d]:World\n", omp_get_thread_num());
    }

    exit(0);
}

OpenMP implements multithreading with #pragma preprocessor directives; the -fopenmp flag must be added when compiling with GCC.

>$ make hello
cc -fopenmp -Wall    hello.c   -o hello
>$ ./hello
[0]:Hello
[1]:World
>$ ./hello
[1]:World
[0]:Hello
>$

 

From the two runs above we can see that the threads were created and that they race with each other: the output order differs between runs.

GCC has supported the OpenMP standard since version 4.0.

Because the OpenMP standard is not covered in APUE, we will not discuss it further here; interested readers can visit http://www.openmp.org to learn more.

 
