Multi-thread programming in C language in Linux


1 Introduction
Thread technology was proposed as early as the 1960s, but multithreading was not really applied in operating systems until the mid-1980s, with Solaris leading the field. Traditional UNIX also supports the concept of a thread, but since only one thread is allowed per process, multithreading there really means multiprocessing. Today multithreading is supported by many operating systems, including Windows NT and, of course, Linux.
Why introduce threads when we already have the concept of a process? What are the advantages of multithreading, and what kinds of systems should adopt it? We must answer these questions first.
One reason for using multithreading is that, compared with processes, it is a very "frugal" way of multitasking. In Linux, starting a new process requires allocating an independent address space and building numerous data tables to maintain its code, stack, and data segments; this is an "expensive" form of multitasking. The threads running within one process, by contrast, share the same address space and most of their data. The space needed to start a thread is far less than that needed to start a process, and switching between threads takes much less time than switching between processes. By one common estimate, creating a process costs roughly 30 times as much as creating a thread, although on a particular system the figure may differ considerably.
The second reason for using multithreading is the convenient communication mechanism between threads. Different processes have independent data spaces, so data can only be passed between them through inter-process communication, which is time-consuming and inconvenient. Threads are different: because threads in the same process share the data space, data produced by one thread can be used directly by the others, which is both fast and convenient. Of course, data sharing also brings problems of its own. Some variables must not be modified by two threads at the same time, and some subroutines declare static data, which can cause catastrophic damage to a multithreaded program. These are the things to watch out for when writing multithreaded code.
In addition to the advantages mentioned above, multithreaded programs, as a form of multitasking and concurrent execution, also offer the following benefits:
1) Improved application responsiveness. This is especially meaningful for programs with a graphical interface. When an operation takes a long time, the whole program would otherwise wait for it and stop responding to keyboard, mouse, and menu input; moving the time-consuming work into a separate thread avoids this embarrassing situation.
2) Better use of multi-CPU systems. The operating system will schedule different threads on different CPUs as long as the number of threads does not exceed the number of CPUs.
3) Improved program structure. A long and complex task can be divided into several threads, each an independent or semi-independent part of the work, which makes the program easier to understand and modify.
Next we will try to write a simple multi-threaded program.

2. Simple multi-thread programming
Multithreading in Linux follows the POSIX thread interface, known as pthreads. To write a multithreaded program in Linux you need the header file pthread.h, and you must link against the pthread library (libpthread.a), typically with the -lpthread option. Incidentally, the Linux implementation of pthreads is built on clone(), a Linux-specific system call used in much the same way as fork(); interested readers can consult the relevant documentation for details. Below is a simple multithreaded program, example1.c.

/* example1.c */
#include <stdio.h>
#include <stdlib.h>
#include <pthread.h>

void *thread(void *arg)
{
    int i;
    for (i = 0; i < 3; i++)
        printf("This is a pthread.\n");
    return NULL;
}

int main(void)
{
    pthread_t id;
    int i, ret;

    ret = pthread_create(&id, NULL, thread, NULL);
    if (ret != 0) {
        printf("Create pthread error!\n");
        exit(1);
    }
    for (i = 0; i < 3; i++)
        printf("This is the main process.\n");
    pthread_join(id, NULL);
    return 0;
}

We compile this program:
gcc example1.c -lpthread -o example1
Run example1 and we get the following results:
This is the main process.
This is a pthread.
This is the main process.
This is the main process.
This is a pthread.
This is a pthread.
Run again and we may get the following results:
This is a pthread.
This is the main process.
This is a pthread.
This is the main process.
This is a pthread.
This is the main process.

The two results are different, which is the result of two threads competing for CPU resources. In the above example, we used two functions, pthread_create and pthread_join, and declared a variable of the pthread_t type.
pthread_t is defined in the header file /usr/include/bits/pthreadtypes.h:
typedef unsigned long int pthread_t;
It is the identifier of a thread. The pthread_create function is used to create a thread. Its prototype is:
extern int pthread_create(pthread_t *thread, const pthread_attr_t *attr,
                          void *(*start_routine)(void *), void *arg);
The first parameter is a pointer to the thread identifier, the second sets the thread attributes, the third is the start address of the function the thread will run, and the last is the argument passed to that function. Here our function thread needs no argument, so the last parameter is a null pointer. We also set the second parameter to a null pointer, which creates a thread with the default attributes; setting and modifying thread attributes is described in the next section. When the thread is created successfully the function returns 0; a nonzero value means creation failed. The common error codes are EAGAIN and EINVAL: the former means the system limits the creation of new threads, for example because there are already too many of them, and the latter means the second parameter refers to an invalid thread attribute object. After the thread is created successfully, the new thread runs the function given by the third parameter with the fourth parameter as its argument, while the original thread continues with the next line of code.
The pthread_join function is used to wait for the end of a thread. Function prototype:
extern int pthread_join(pthread_t th, void **thread_return);
The first parameter is the identifier of the thread being waited for, and the second is a user-supplied pointer that can receive the return value of that thread. This function blocks: the calling thread waits until the target thread terminates, and when the function returns, the resources of the terminated thread are reclaimed. A thread can terminate in two ways. One is that its start function finishes, and the thread ends with it, as in the example above; the other is to call the pthread_exit function, whose prototype is:
extern void pthread_exit(void *retval) __attribute__((__noreturn__));
Its only parameter is the thread's return code; as long as the second parameter of pthread_join, thread_return, is not NULL, this value is passed to thread_return. Finally, note that a thread cannot be waited for by more than one other thread; otherwise the first thread to receive the signal returns successfully and the remaining callers of pthread_join return the error code ESRCH.
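To make the relationship between pthread_exit and the second parameter of pthread_join concrete, here is a minimal sketch; it is not part of the original example, and the worker function and message string are made up for illustration:

#include <stdio.h>
#include <pthread.h>

void *worker(void *arg)
{
    /* Terminate this thread explicitly and hand a value back to the joiner. */
    pthread_exit("worker finished");
}

int main(void)
{
    pthread_t id;
    void *retval;

    pthread_create(&id, NULL, worker, NULL);
    pthread_join(id, &retval);      /* retval receives the value passed to pthread_exit */
    printf("Thread returned: %s\n", (char *)retval);
    return 0;
}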
In this section we have written a simple threaded program and met the three most commonly used functions: pthread_create, pthread_join, and pthread_exit. Next, let's look at some common thread attributes and how to set them.

3. Modifying the attributes of a thread
In the example in the previous section we created the thread with pthread_create and used the default attributes by passing NULL as the second parameter. For most programs the default attributes are indeed sufficient, but it is still worth understanding the attributes that are available.
The attribute structure is pthread_attr_t, also defined in the header file /usr/include/pthread.h; you can inspect it yourself. Attribute values cannot be set directly; the related functions must be used, and the initialization function pthread_attr_init must be called before pthread_create. The attributes mainly cover the binding (scope), the detach state, the stack address, the stack size, and the priority. The defaults are unbound, non-detached, a 1 MB stack, and the same priority as the parent process.
Thread binding involves another concept, the lightweight process (LWP). A lightweight process can be understood as a kernel thread; it sits between the user layer and the system layer. The system allocates thread resources and schedules threads through lightweight processes, and one lightweight process can control one or more threads. By default, how many lightweight processes are started and which threads they control is decided by the system; this is the unbound state. In the bound state, a thread is tied to a particular lightweight process. A bound thread has a higher response speed, because CPU time slices are scheduled to lightweight processes, and a bound thread is guaranteed to have a lightweight process available whenever it needs one. By setting the priority and scheduling class of the bound lightweight process, the bound thread can meet requirements such as real-time response.
The function that sets the binding state of a thread is pthread_attr_setscope. It has two parameters: the first is a pointer to the attribute structure, and the second is the binding type, which has two values, PTHREAD_SCOPE_SYSTEM (bound) and PTHREAD_SCOPE_PROCESS (unbound). The following code creates a bound thread.
#include <pthread.h>

pthread_attr_t attr;
pthread_t tid;

/* Initialize the attribute object with the default values */
pthread_attr_init(&attr);
pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);

pthread_create(&tid, &attr, my_function, NULL);

The detach state of a thread determines how the thread terminates. In the example above we used the default attributes, that is, a non-detached (joinable) thread. In that case the original thread waits for the created thread to finish; only when pthread_join() returns has the created thread terminated and released its system resources. A detached thread is different: it is not waited for by any other thread, and when it finishes running it terminates and releases its resources immediately. Programmers should choose the appropriate detach state according to their needs. The function that sets the detach state is pthread_attr_setdetachstate(pthread_attr_t *attr, int detachstate), and the second parameter can be PTHREAD_CREATE_DETACHED or PTHREAD_CREATE_JOINABLE. Note that if a thread is created detached and runs very quickly, it may terminate before pthread_create returns; after it terminates, its thread ID and system resources may be reused by another thread, so the caller of pthread_create could end up with the wrong thread ID. To avoid this you can add some synchronization; one of the simplest ways is to call pthread_cond_timedwait in the created thread so that it waits a while, leaving enough time for pthread_create to return. Setting a wait time like this is a common technique in multithreaded programming, but do not use functions such as wait(), which put the whole process to sleep and do not solve thread-synchronization problems. A fragment illustrating the detached setting follows.
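As an illustration (not from the original article), creating a detached thread with an explicit attribute object might look like this; my_function is again assumed to have the standard void *(*)(void *) start-routine signature:

#include <pthread.h>

pthread_attr_t attr;
pthread_t tid;

pthread_attr_init(&attr);
/* The thread starts detached: it cannot be joined, and its resources
   are released automatically as soon as it terminates. */
pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
pthread_create(&tid, &attr, my_function, NULL);
pthread_attr_destroy(&attr);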
Another commonly used attribute is the thread priority, which is stored in the structure struct sched_param and is read and written with the functions pthread_attr_getschedparam and pthread_attr_setschedparam. The usual pattern is to fetch the priority, modify the value obtained, and store it back. Here is a simple example.
#include <pthread.h>
#include <sched.h>

pthread_attr_t attr;
pthread_t tid;
struct sched_param param;
int newprio = 20;

pthread_attr_init(&attr);
pthread_attr_getschedparam(&attr, &param);
param.sched_priority = newprio;
pthread_attr_setschedparam(&attr, &param);
pthread_create(&tid, &attr, myfunction, myarg);
  
4. Thread Data Processing
Compared with processes, one of the biggest advantages of threads is data sharing: all the threads of a process share its data segment, so obtaining and modifying data is easy. But this also creates many problems for multithreaded programming. We must be careful whenever several different threads access the same variable. Many functions are not reentrant, that is, it is not safe to run multiple copies of them at the same time (unless each uses a different data segment). Static variables declared inside functions often cause trouble, and so do function return values: if a function returns the address of statically allocated storage, then while one thread is using the data at that address, another thread may call the same function and overwrite that storage. Variables shared between threads should be declared with the keyword volatile to keep the compiler from changing the way they are used during optimization (for example when gcc is invoked with -O options), and to protect variables we must use semaphores, mutexes, and similar mechanisms to ensure they are used correctly. The rest of this section introduces the relevant techniques for handling thread data step by step.
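The reentrancy problem described above can be shown with a small, hypothetical example (the function names are invented and are not part of the original article):

#include <stdio.h>

/* Not reentrant: every caller receives a pointer to the same static buffer,
   so a second thread calling this function overwrites the first one's result. */
char *format_id_unsafe(int id)
{
    static char buf[32];
    sprintf(buf, "id-%d", id);
    return buf;
}

/* Reentrant: the caller supplies the storage, so concurrent calls
   from different threads do not interfere with one another. */
void format_id_safe(int id, char *buf, size_t len)
{
    snprintf(buf, len, "id-%d", id);
}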

4.1 Thread data
In a single-threaded program there are two basic kinds of data: global variables and local variables. In a multithreaded program there is a third kind: thread-specific data (TSD). It resembles a global variable in that, within a thread, every function can access it as if it were global, yet it is invisible to other threads. The need for such data is obvious. Consider, for example, the familiar variable errno, which holds the standard error code: it cannot be a local variable, since almost every function needs access to it, yet it cannot be a global variable either, otherwise thread A might end up reporting thread B's error. To implement such a variable we must use thread-specific data. For each item of thread data we create a key and associate the data with that key: in every thread the same key is used to refer to the thread data, but in different threads the key refers to different data, while within one thread it always refers to the same data.
There are four main functions for thread-specific data: creating a key, binding thread data to a key, reading the thread data from a key, and deleting a key.
The function prototype for creating a key is:
extern int pthread_key_create(pthread_key_t *key, void (*destr_function)(void *));
The first parameter is a pointer to the key, and the second specifies a destructor function: if it is not NULL, the system calls this function when each thread terminates, to free the memory bound to the key. This function is often used together with pthread_once(pthread_once_t *once_control, void (*init_routine)(void)) so that the key is created exactly once. pthread_once declares an initialization routine: the first time pthread_once is called it executes this routine, and subsequent calls ignore it.

In the following example we create a key and associate some data with it. We define a function, createwindow, that creates a graphical window (of type Fl_Window *, a data type from the GUI toolkit FLTK). Because each thread calls this function and needs its own window, we use thread-specific data.
/* Declare a key */
pthread_key_t mywinkey;

/* Function createwindow */
void createwindow(void) {
    Fl_Window *win;
    static pthread_once_t once = PTHREAD_ONCE_INIT;
    /* Call createmykey to create the key, exactly once */
    pthread_once(&once, createmykey);
    /* win points to a new window */
    win = new Fl_Window(0, 0, 100, 100, "MyWindow");
    /* Apply whatever settings this window needs, such as size, position, and title */
    setwindow(win);
    /* Bind the window pointer to the key mywinkey */
    pthread_setspecific(mywinkey, win);
}

/* createmykey creates the key and registers the destructor */
void createmykey(void) {
    pthread_key_create(&mywinkey, (void (*)(void *)) freewinkey);
}

/* freewinkey releases the window's storage */
void freewinkey(Fl_Window *win) {
    delete win;
}

In this way the createwindow function can be called from different threads, and each obtains a window variable visible only inside its own thread. The variable is read back with the pthread_getspecific function; in the example above we have already used pthread_setspecific to bind the thread data to the key. The prototypes of these two functions are:
extern int pthread_setspecific(pthread_key_t key, const void *pointer);
extern void *pthread_getspecific(pthread_key_t key);
The meaning and use of the parameters of these two functions are obvious. Note that when pthread_setspecific is used to bind new thread data to a key, the old thread data should be freed first so that its space is reclaimed. The function pthread_key_delete is used to delete a key; the memory occupied by the key itself is released, but note that it releases only the key. It does not free the memory occupied by the thread data associated with the key, nor does it trigger the destructor registered in pthread_key_create, so the thread data must be released before the key is deleted.
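For instance, reading the data back inside a thread could look like the following small sketch, which builds on the window example above (the helper name getmywindow is invented for illustration):

/* Return the calling thread's own window, or NULL if it has not created one yet */
Fl_Window *getmywindow(void) {
    return (Fl_Window *) pthread_getspecific(mywinkey);
}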

4.2 Mutex locks
A mutex lock ensures that, at any given time, only one thread is executing a particular piece of code. Its necessity is obvious: if several threads wrote to the same file at the same time, the final result would certainly be disastrous.
Consider the following code, a reader/writer pair that shares one buffer. We assume the buffer can hold only one item, so it has only two states: it contains an item or it does not.

void reader_function(void);
void writer_function(void);

char buffer;
int buffer_has_item = 0;
pthread_mutex_t mutex;
struct timespec delay;

int main(void) {
    pthread_t reader;
    /* Define the delay */
    delay.tv_sec = 2;
    delay.tv_nsec = 0;
    /* Initialize the mutex object with the default attributes */
    pthread_mutex_init(&mutex, NULL);
    pthread_create(&reader, NULL, (void *(*)(void *)) reader_function, NULL);
    writer_function();
    return 0;
}

void writer_function(void) {
    while (1) {
        /* Lock the mutex */
        pthread_mutex_lock(&mutex);
        if (buffer_has_item == 0) {
            buffer = make_new_item();
            buffer_has_item = 1;
        }
        /* Unlock the mutex */
        pthread_mutex_unlock(&mutex);
        pthread_delay_np(&delay);
    }
}

void reader_function(void) {
    while (1) {
        pthread_mutex_lock(&mutex);
        if (buffer_has_item == 1) {
            consume_item(buffer);
            buffer_has_item = 0;
        }
        pthread_mutex_unlock(&mutex);
        pthread_delay_np(&delay);
    }
}
The mutex variable mutex is declared here; the structure pthread_mutex_t is an opaque data type containing a system-allocated attribute object. The function pthread_mutex_init creates a mutex, and the NULL argument means the default attributes are used. To create a mutex with specific attributes, call pthread_mutexattr_init, and use pthread_mutexattr_setpshared and pthread_mutexattr_settype to set them. The first function sets the pshared attribute, which has two values, PTHREAD_PROCESS_SHARED and PTHREAD_PROCESS_PRIVATE: the former allows the mutex to synchronize threads belonging to different processes, while the latter restricts it to threads within the same process. In the example above we use the default, PTHREAD_PROCESS_PRIVATE. The second function sets the mutex type, which can be PTHREAD_MUTEX_NORMAL, PTHREAD_MUTEX_ERRORCHECK, PTHREAD_MUTEX_RECURSIVE, or PTHREAD_MUTEX_DEFAULT; they define different locking and unlocking behaviors, and in general the default type is used.
The statement pthread_mutex_lock begins the region protected by the mutex; the code that follows is protected until pthread_mutex_unlock is called, so only one thread at a time can execute it. When a thread reaches pthread_mutex_lock while another thread holds the lock, it blocks, that is, it waits until the other thread releases the mutex. In the example above we also used pthread_delay_np to put each thread to sleep for a while, to keep one thread from monopolizing the buffer.
The example is very simple, so no further explanation is needed, but it is worth pointing out that deadlock can occur when mutexes are used: two threads each try to occupy two resources and lock the corresponding mutexes in opposite orders. For example, both threads need to lock mutex 1 and mutex 2; thread A locks mutex 1 first, thread B locks mutex 2 first, and a deadlock results. In such situations we can use pthread_mutex_trylock, the non-blocking version of pthread_mutex_lock: when it finds that the lock cannot be taken it returns immediately with an appropriate code, and the programmer can deal with the potential deadlock accordingly. In addition, different mutex types deal with deadlock differently, but the most important thing is for the programmer to pay attention to this when writing the code.
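One common way to use pthread_mutex_trylock for this is a back-off scheme. The following sketch is not from the original article; the function lock_both and the two mutex names are invented for illustration:

#include <errno.h>
#include <pthread.h>

pthread_mutex_t mutex1 = PTHREAD_MUTEX_INITIALIZER;
pthread_mutex_t mutex2 = PTHREAD_MUTEX_INITIALIZER;

/* Take the first lock normally; if the second is already held,
   give the first one back instead of waiting, so no deadlock can form. */
int lock_both(void) {
    pthread_mutex_lock(&mutex1);
    if (pthread_mutex_trylock(&mutex2) == EBUSY) {
        pthread_mutex_unlock(&mutex1);
        return -1;          /* caller can back off and retry later */
    }
    return 0;               /* both locks are now held */
}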

4.3 Condition variables
The previous section described how to use mutexes to share data between threads; one obvious drawback of a mutex is that it has only two states, locked and unlocked. Condition variables make up for this by allowing a thread to block and wait for a signal from another thread. They are usually used together with a mutex: when a thread blocks on a condition variable because some condition is not met, it releases the corresponding mutex and waits for the condition to change; once another thread changes the condition, it signals the condition variable, waking one or more threads blocked on it, and those threads re-acquire the mutex and re-test whether the condition now holds. In short, condition variables provide thread-to-thread synchronization.
The condition variable type is pthread_cond_t, and pthread_cond_init() initializes one. Its prototype is:
extern int pthread_cond_init(pthread_cond_t *cond, const pthread_condattr_t *cond_attr);
Here cond is a pointer to a pthread_cond_t, and cond_attr is a pointer to a pthread_condattr_t, the attribute structure of the condition variable. As with mutexes, this attribute determines whether the condition variable is usable within one process or between processes; the default, PTHREAD_PROCESS_PRIVATE, means the condition variable is used by the threads of a single process. Note that a condition variable may be reinitialized or destroyed only when it is no longer in use; the function that destroys one is pthread_cond_destroy(pthread_cond_t *cond).
The pthread_cond_wait () function blocks the thread on a condition variable. Its function prototype is:
extern int pthread_cond_wait(pthread_cond_t *cond, pthread_mutex_t *mutex);
The calling thread releases the lock pointed to by mutex and blocks on the condition variable cond. It can be woken by pthread_cond_signal or pthread_cond_broadcast. Note, however, that the condition variable only blocks and wakes the thread; the actual predicate must be tested by the user, for example whether some variable is 0, as in the example below. After being woken, the thread re-tests the predicate, and if it still does not hold, the thread should block again and wait for the next wakeup. This loop is normally written with a while statement.
Another function used to block threads is pthread_cond_timedwait (). Its prototype is:
extern int pthread_cond_timedwait(pthread_cond_t *cond, pthread_mutex_t *mutex,
                                  const struct timespec *abstime);
It takes one more parameter than pthread_cond_wait(): an absolute time, abstime. When that time is reached, the wait ends even if the condition has not been signaled.
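A typical usage pattern, shown here only as a sketch (it is not from the original article; cond, mutex, and the predicate ready are assumed to be defined elsewhere), builds abstime from the current clock plus the desired interval:

#include <errno.h>
#include <time.h>
#include <pthread.h>

extern pthread_cond_t cond;
extern pthread_mutex_t mutex;
extern int ready;

/* Wait for "ready" to become nonzero, but give up after about 5 seconds. */
int wait_with_timeout(void) {
    struct timespec abstime;
    int timed_out = 0;

    clock_gettime(CLOCK_REALTIME, &abstime);   /* abstime is absolute, not relative */
    abstime.tv_sec += 5;

    pthread_mutex_lock(&mutex);
    while (!ready && !timed_out) {
        if (pthread_cond_timedwait(&cond, &mutex, &abstime) == ETIMEDOUT)
            timed_out = 1;
    }
    pthread_mutex_unlock(&mutex);
    return timed_out ? -1 : 0;
}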
The prototype of the function pthread_cond_signal () is:
extern int pthread_cond_signal(pthread_cond_t *cond);
It wakes one thread blocked on the condition variable cond; when several threads are blocked on it, which one is woken is determined by the scheduling policy. Note that this function must be called while holding the mutex that protects the condition; otherwise the signal corresponding to the condition may be delivered between the test of the condition and the call to pthread_cond_wait, causing an indefinite wait. The following is a simple example of pthread_cond_wait() and pthread_cond_signal().

pthread_mutex_t count_lock;
pthread_cond_t count_nonzero;
unsigned count;

void decrement_count(void) {
    pthread_mutex_lock(&count_lock);
    while (count == 0)
        pthread_cond_wait(&count_nonzero, &count_lock);
    count = count - 1;
    pthread_mutex_unlock(&count_lock);
}

void increment_count(void) {
    pthread_mutex_lock(&count_lock);
    if (count == 0)
        pthread_cond_signal(&count_nonzero);
    count = count + 1;
    pthread_mutex_unlock(&count_lock);
}
When count is 0, decrement_count blocks in pthread_cond_wait and releases the mutex count_lock while it waits. When increment_count is then called, its pthread_cond_signal() call changes the condition and tells decrement_count() to stop blocking. Readers can try running these two functions in two separate threads and see what happens.
The function pthread_cond_broadcast(pthread_cond_t *cond) wakes all the threads blocked on the condition variable cond; after waking, they all compete again for the associated mutex, so this function should be used with care.

4.4 Semaphores
A semaphore is essentially a non-negative integer counter used to control access to a shared resource. When the shared resource becomes available, the sem_post() function is called to increment the semaphore; the resource may be used only while the semaphore value is greater than 0, and sem_wait() decrements the semaphore when the resource is taken. The function sem_trywait() plays the same role as pthread_mutex_trylock(): it is the non-blocking version of sem_wait(). Below we introduce the semaphore functions one by one; they are defined in the header file /usr/include/semaphore.h.
The data type of a semaphore is sem_t, which is essentially a long integer. The sem_init() function initializes a semaphore. Its prototype is:
extern int sem_init(sem_t *sem, int pshared, unsigned int value);
Here sem is a pointer to the semaphore; if pshared is nonzero, the semaphore is shared between processes, otherwise it is shared only among the threads of the current process; value gives the initial value of the semaphore.
The function sem_post(sem_t *sem) increments the semaphore value. If threads are blocked on the semaphore, calling it unblocks one of them; which one is chosen is again determined by the scheduling policy.
The function sem_wait(sem_t *sem) blocks the calling thread until the semaphore value is greater than 0; when it unblocks, it decrements the value by one, indicating that one unit of the shared resource has been consumed. sem_trywait(sem_t *sem) is the non-blocking version: it decrements the semaphore immediately if it can, and returns an error instead of blocking when the value is 0.
The function sem_destroy(sem_t *sem) releases the semaphore.
Here is an example of using semaphores. In this example, there are a total of four threads, two of which are responsible for reading data from the file to the public buffer, the other two threads read data from the buffer for different processing (addition and multiplication ).
/* File sem.c */
#include <stdio.h>
#include <pthread.h>
#include <semaphore.h>

#define MAXSTACK 100

int stack[MAXSTACK][2];
int size = 0;
sem_t sem;

/* Read data from the file 1.dat; each time a pair is read, increment the semaphore */
void readdata1(void) {
    FILE *fp = fopen("1.dat", "r");
    while (!feof(fp)) {
        fscanf(fp, "%d %d", &stack[size][0], &stack[size][1]);
        sem_post(&sem);
        ++size;
    }
    fclose(fp);
}

/* Read data from the file 2.dat */
void readdata2(void) {
    FILE *fp = fopen("2.dat", "r");
    while (!feof(fp)) {
        fscanf(fp, "%d %d", &stack[size][0], &stack[size][1]);
        sem_post(&sem);
        ++size;
    }
    fclose(fp);
}

/* Block until the buffer has data; after handling an item, release the slot and keep waiting */
void handledata1(void) {
    while (1) {
        sem_wait(&sem);
        printf("Plus: %d + %d = %d\n", stack[size][0], stack[size][1],
               stack[size][0] + stack[size][1]);
        --size;
    }
}

void handledata2(void) {
    while (1) {
        sem_wait(&sem);
        printf("Multiply: %d * %d = %d\n", stack[size][0], stack[size][1],
               stack[size][0] * stack[size][1]);
        --size;
    }
}

int main(void) {
    pthread_t t1, t2, t3, t4;
    sem_init(&sem, 0, 0);
    pthread_create(&t1, NULL, (void *(*)(void *)) handledata1, NULL);
    pthread_create(&t2, NULL, (void *(*)(void *)) handledata2, NULL);
    pthread_create(&t3, NULL, (void *(*)(void *)) readdata1, NULL);
    pthread_create(&t4, NULL, (void *(*)(void *)) readdata2, NULL);
    /* Keep the program from exiting too early; wait indefinitely */
    pthread_join(t1, NULL);
    return 0;
}

Under Linux we compile this with gcc sem.c -lpthread -o sem to produce the executable sem. We prepared the data files 1.dat and 2.dat in advance; suppose their contents are 1 2 3 4 5 6 7 8 9 10 and -1 -2 -3 -4 -5 -6 -7 -8 -9 -10 respectively. Running sem, we obtain the following results:
Multiply: -1 * -2 = 2
Plus: -1 + -2 = -3
Multiply: 9 * 10 = 90
Plus: -9 + -10 = -19
Multiply: -7 * -8 = 56
Plus: -5 + -6 = -11
Multiply: -3 * -4 = 12
Plus: 9 + 10 = 19
Plus: 7 + 8 = 15
Plus: 5 + 6 = 11

Here we can see the threads competing with each other: the values are not handled in their original order because size is modified concurrently by the different threads. This is exactly the kind of issue that multithreaded programs must watch out for.
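If the buffer had to be accessed in order and without races, the shared index could itself be protected with a mutex. The following sketch is not part of the original program: stack_lock is an invented mutex, only the consumer side is shown, and the producer functions would need the same locking around their updates of size and the stack:

pthread_mutex_t stack_lock = PTHREAD_MUTEX_INITIALIZER;

void handledata1(void) {
    while (1) {
        sem_wait(&sem);
        pthread_mutex_lock(&stack_lock);
        --size;                               /* pop the slot first, then read it */
        printf("Plus: %d + %d = %d\n", stack[size][0], stack[size][1],
               stack[size][0] + stack[size][1]);
        pthread_mutex_unlock(&stack_lock);
    }
}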

5 Summary
Multithreaded programming is an interesting and useful technique. NetAnts, one of the most widely used download tools, relies on multithreading, and a multithreaded grep can be several times faster than a single-threaded one; there are many other examples like these. I hope that you, too, can use multithreading to write efficient and practical programs.

 

