Linux POSIX Threads


An execution path (routine) in a program is called a thread. A more precise definition is that a thread is "a sequence of control within a process". Every process has at least one thread of execution.
Processes and Threads
A process is the basic unit of resource competition.
A thread is the smallest unit of program execution.
Threads share the process's data, but each thread also has data of its own:
Thread ID
Program counter (PC)
Register set
Stack
errno
Threads within a process share:
Code segment
Data segment
Open files and signals
A program is static data stored on disk: a collection of instructions and data.

A process is a program in dynamic execution: the code segment, data segment and stack segment together with the CPU scheduling context, described by a PCB (process control block).

Linux controls processes through the PCB

In Linux, the thread is the smallest unit of execution; the body of a thread is a function.

Differences between fork and creating a new thread
When a process executes a fork call, a new copy of the process is created with its own variables and its own PID. The new process is scheduled independently and executes almost completely independently of the process that created it. When a new thread is created within a process, the new thread gets its own stack (and therefore its own local variables), but it shares global variables, file descriptors, signal handlers, and the current working directory state with its creator.
Thread advantages
The cost of creating a new thread is much smaller than that of creating a new process.
Switching between threads requires less work from the operating system than switching between processes.
Threads consume far fewer resources than processes.
Threads can exploit the parallelism of multiprocessor machines.
While waiting for a slow I/O operation to finish, the program can carry out other computation.
A compute-intensive application can be decomposed into multiple threads so that it runs well on a multiprocessor system.
An I/O-intensive application can overlap its I/O operations to improve performance; different threads can wait for different I/O operations at the same time.

Thread disadvantages
Performance loss
A compute-intensive thread that rarely blocks on external events often cannot share a processor with other threads. If there are more compute-intensive threads than available processors, there may be a significant performance penalty: the extra synchronization and scheduling overhead grows while the available resources stay the same.
Reduced robustness
Writing multithreaded code requires more comprehensive and careful thought. In a multithreaded program, a small deviation in timing, or the sharing of a variable that should not be shared, can easily cause harm; in other words, there is very little protection between threads.
Lack of access control
The process is the basic granularity of access control, so calling certain OS functions in one thread affects the whole process.
Increased programming difficulty
Writing and debugging a multithreaded program is much harder than writing a single-threaded one.

Thread Scheduling Contention Scope
The operating system provides several models for scheduling the threads an application creates. The main difference between these models is the thread-scheduling contention scope: which threads a thread competes with for system resources (especially CPU time).
Process contention scope: each thread competes for CPU time with the other threads of the same process (it does not compete directly with threads in other processes).
System contention scope: a thread competes directly with all other threads system-wide.
The contention-scope attribute of a thread can be modified; the default is system contention scope.
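
As an illustration only (not from the original text), the contention scope can be requested through a thread-attributes object; note that on Linux (NPTL) only PTHREAD_SCOPE_SYSTEM is actually supported, and requesting PTHREAD_SCOPE_PROCESS fails with ENOTSUP:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void *worker(void *arg)
{
    (void)arg;          /* nothing to do; the point is the attribute setup */
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;
    int rc;

    pthread_attr_init(&attr);
    /* Request system contention scope (the Linux default). */
    rc = pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);
    if (rc != 0)
        fprintf(stderr, "pthread_attr_setscope: %s\n", strerror(rc));

    pthread_create(&tid, &attr, worker, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}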

Getting familiar with the thread API

The pthread_create function creates a new thread
int pthread_create(pthread_t *thread, const pthread_attr_t *attr, void *(*start_routine)(void *), void *arg);
thread: output parameter that receives the ID of the new thread
attr: the attributes of the thread; NULL means use the default attributes
start_routine: the address of the function the thread executes after it starts, i.e. the thread body
arg: the argument passed to the thread start function
Returns 0 on success, an error code on failure

The new thread executes only its start function; when that function returns, the thread disappears. Unlike a child process after fork, it does not go on to execute the code that follows pthread_create. In general, the process's threads run concurrently. A thread depends on its process: if the process dies, its threads die with it. A child process is different: if the parent process dies, the child can keep running, because it has its own independent address space. A thread's lifetime is bound to its process, which is what distinguishes it from a child process.

The pthread_self function returns the calling thread's ID
pthread_t pthread_self(void);
Return value: always succeeds and returns the caller's thread ID (tid)
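
A minimal sketch (the function name worker and the messages are illustrative) that creates one thread and prints the IDs reported by pthread_self in both threads:

#include <pthread.h>
#include <stdio.h>

/* Thread body: print the ID that pthread_self() reports for this thread. */
static void *worker(void *arg)
{
    (void)arg;
    /* pthread_t is opaque; the cast to unsigned long is only for printing on Linux. */
    printf("worker thread, tid = %lu\n", (unsigned long)pthread_self());
    return NULL;
}

int main(void)
{
    pthread_t tid;
    int rc = pthread_create(&tid, NULL, worker, NULL);
    if (rc != 0) {
        fprintf(stderr, "pthread_create failed: %d\n", rc);
        return 1;
    }
    printf("main thread, tid = %lu\n", (unsigned long)pthread_self());
    pthread_join(tid, NULL);   /* wait for the worker to finish */
    return 0;
}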

The pthread_join function waits for a thread to terminate
int pthread_join(pthread_t thread, void **value_ptr);
thread: the ID of the thread to wait for
value_ptr: if not NULL, receives the terminated thread's return value, i.e. the result it hands back to the joining thread
Returns 0 on success, an error code on failure

The pthread_exit function terminates the calling thread
void pthread_exit(void *value_ptr);
value_ptr: the value made available through pthread_join; it must not point to a local variable on the exiting thread's stack
Return value: none; as with a process calling exit, the thread cannot return to its caller once it has terminated
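
A short sketch, assuming an illustrative worker named compute that hands a heap-allocated result back through pthread_exit:

#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

/* Thread body: return a result through pthread_exit.
 * The result must not live on this thread's stack, so it is heap-allocated. */
static void *compute(void *arg)
{
    int *result = malloc(sizeof *result);
    *result = *(int *)arg * 2;
    pthread_exit(result);          /* same effect as: return result; */
}

int main(void)
{
    pthread_t tid;
    int input = 21;
    void *retval;

    pthread_create(&tid, NULL, compute, &input);
    pthread_join(tid, &retval);    /* blocks until the thread terminates */
    printf("thread returned %d\n", *(int *)retval);
    free(retval);
    return 0;
}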

The pthread_cancel function cancels an executing thread
int pthread_cancel(pthread_t thread);
Parameters
thread: the ID of the thread to cancel
Returns 0 on success, an error code on failure

For processes, if the parent process does not want to wait for its child processes, it needs to ignore the SIGCHLD signal.

The pthread_detach function detaches a thread
int pthread_detach(pthread_t thread);
thread: the ID of the thread to detach
Return value: 0 on success, an error code on failure

A detached thread releases its resources by itself when it finishes; no other thread has to wait for it.

In general, call pthread_detach(pthread_self()) right at the start of the thread body. The creating thread then does not have to call pthread_join, which keeps it from blocking.
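
A minimal sketch (illustrative names) in which the thread detaches itself, so no join is needed:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

/* Thread body: detach itself first, so its resources are reclaimed
 * automatically when it terminates and nobody has to join it. */
static void *background_task(void *arg)
{
    (void)arg;
    pthread_detach(pthread_self());
    printf("background task running\n");
    return NULL;
}

int main(void)
{
    pthread_t tid;
    pthread_create(&tid, NULL, background_task, NULL);
    /* No pthread_join here: the thread is detached.
     * Sleep only so main does not exit before the thread has run. */
    sleep(1);
    return 0;
}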

Error checking
Traditionally, many functions return 0 on success and -1 on failure, and set the global variable errno to indicate the error.
The pthreads functions do not set the global errno when they fail (most other POSIX functions do); instead, the error code is delivered through the return value.
Pthreads also provides a per-thread errno to support other code that uses errno. For errors in pthreads functions it is recommended to check the return value, because reading the return value is cheaper than reading the per-thread errno.
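
A brief sketch of the recommended pattern: test the return value of a pthreads call directly and translate it with strerror (names and messages are illustrative):

#include <pthread.h>
#include <stdio.h>
#include <string.h>

static void *task(void *arg)
{
    (void)arg;
    return NULL;
}

int main(void)
{
    pthread_t tid;
    /* pthreads reports errors through the return value, not through errno. */
    int rc = pthread_create(&tid, NULL, task, NULL);
    if (rc != 0) {
        fprintf(stderr, "pthread_create: %s\n", strerror(rc));
        return 1;
    }
    rc = pthread_join(tid, NULL);
    if (rc != 0)
        fprintf(stderr, "pthread_join: %s\n", strerror(rc));
    return 0;
}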

Passing data between a thread and its process

Passing data through global variables

Data can also be passed through the thread's argument. The data passed this way must not be a local variable on the creating function's stack, because that local variable may be destroyed before the thread uses it.

If you use pthread_detach, you cannot hand data out with pthread_exit; the same applies to return.

Within a thread body, calling pthread_exit() has the same effect as return.
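
A minimal sketch (the structure and names are illustrative) of passing data in through the argument and out through the return value; the argument points to storage that stays alive until after the join:

#include <pthread.h>
#include <stdio.h>

struct task_arg {
    int input;
    int output;
};

/* Thread body: read the input from the argument, write the result back,
 * and hand the structure back as the return value. */
static void *square(void *arg)
{
    struct task_arg *t = arg;
    t->output = t->input * t->input;
    return t;                      /* same effect as pthread_exit(t) */
}

int main(void)
{
    /* The argument lives in main's frame, which stays alive across the
     * join, so the thread can safely use it. */
    struct task_arg t = { .input = 7, .output = 0 };
    pthread_t tid;
    void *ret;

    pthread_create(&tid, NULL, square, &t);
    pthread_join(tid, &ret);
    printf("square of %d is %d\n", t.input, ((struct task_arg *)ret)->output);
    return 0;
}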

Testing shows that mixing the detach function with the join function on the same thread is not reliable.

In general, a thread rarely needs to hand large results back; it usually only reports its result to the creating thread, and a single int is enough.

If the process dies, its child threads die too. At that point join and detach need to be considered together, as does the order in which the child threads and the main thread end.

Thread attributes (the default attributes are generally used)

Detach state

Stack size (the default is about 10 MB)

Stack-overflow guard area

Contention scope

Scheduling policy (the default is SCHED_OTHER)

Scheduling priority (the default is 0)

Concurrency level (the implementation maps threads in whatever way suits it best)
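
A small sketch that inspects some of these defaults on the running system (the printed values vary by platform):

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int main(void)
{
    pthread_attr_t attr;
    size_t stacksize, guardsize;
    int detachstate, scope, policy;

    /* A freshly initialized attribute object holds the default settings. */
    pthread_attr_init(&attr);
    pthread_attr_getstacksize(&attr, &stacksize);
    pthread_attr_getguardsize(&attr, &guardsize);
    pthread_attr_getdetachstate(&attr, &detachstate);
    pthread_attr_getscope(&attr, &scope);
    pthread_attr_getschedpolicy(&attr, &policy);

    /* A stack size of 0 means "use the system default limit" on some implementations. */
    printf("stack size      : %zu bytes\n", stacksize);
    printf("guard size      : %zu bytes\n", guardsize);
    printf("detach state    : %s\n",
           detachstate == PTHREAD_CREATE_DETACHED ? "detached" : "joinable");
    printf("contention scope: %s\n",
           scope == PTHREAD_SCOPE_SYSTEM ? "system" : "process");
    printf("sched policy    : %s\n",
           policy == SCHED_OTHER ? "SCHED_OTHER" : "realtime");

    pthread_attr_destroy(&attr);
    return 0;
}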

Processes achieve mutual exclusion with semaphores, and threads can also use semaphores for mutual exclusion, but semaphores are relatively heavyweight.

Threads also have a lock of their own, the thread lock; the most widely used is the POSIX mutex.

The detach attribute determines how a thread terminates itself. In the non-detached (joinable) case, when a thread ends, the system resources it occupies are not released, that is, it has not really terminated; only when pthread_join() returns does the created thread release the resources it occupies. With the detached attribute, the resources a thread occupies are released immediately when it ends. One thing to note: if you set a thread's detach attribute and the thread runs very fast, it may terminate before pthread_create() returns, and after it terminates its thread ID and system resources may already be handed to another thread, so the creator can be left holding a stale thread ID.
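
A minimal sketch (illustrative names) that requests the detached state at creation time through pthread_attr_setdetachstate instead of calling pthread_detach afterwards:

#include <pthread.h>
#include <stdio.h>
#include <unistd.h>

static void *logger(void *arg)
{
    (void)arg;
    printf("detached thread running\n");
    return NULL;
}

int main(void)
{
    pthread_attr_t attr;
    pthread_t tid;

    /* Ask for the detached state before the thread even starts. */
    pthread_attr_init(&attr);
    pthread_attr_setdetachstate(&attr, PTHREAD_CREATE_DETACHED);
    pthread_create(&tid, &attr, logger, NULL);
    pthread_attr_destroy(&attr);

    /* The thread cannot be joined; sleep only so it gets a chance to run
     * before main returns and the process exits. */
    sleep(1);
    return 0;
}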

Multithreading synchronization issues:

(1) Threads share the process's resources and address space

(2) Any thread's operation on a system resource affects the other threads

Multithreading synchronization methods:

Mutex locks (a thread-level mechanism)

Semaphores (a process-level mechanism)

Condition variables

POSIX mutex locks

A mutex is a simple locking mechanism that makes operations on a shared resource atomic. A mutex has only two states, locked and unlocked, and in a sense you can think of it as a kind of global variable. Only one thread can hold a mutex at a time, and only the thread holding the lock may operate on the shared resource. If another thread tries to lock a mutex that is already locked, that thread is suspended until the holder releases the mutex. In this way the mutex guarantees that the threads operate on the shared resource one at a time, atomically.

The mutex mechanism consists mainly of the following basic functions.

Initialize a mutex: pthread_mutex_init()

Lock a mutex: pthread_mutex_lock()

Try to lock a mutex without blocking: pthread_mutex_trylock()

Unlock a mutex: pthread_mutex_unlock()

Destroy a mutex: pthread_mutex_destroy()

Mutexes can be divided into fast mutexes, recursive mutexes and error-checking mutexes. The three types differ mainly in what happens when a thread tries to lock a mutex that is already locked. With a fast mutex, the calling thread blocks until the thread that owns the mutex unlocks it. A recursive mutex returns successfully to its owner and increments the count of times the calling thread has locked it. An error-checking mutex behaves like a non-blocking variant: instead of blocking (or deadlocking) it returns immediately with an error code. The default type is the fast mutex; the type can be changed through the mutex attributes.

Through the locking mechanism you can create a critical section, so that the code inside the critical section executes as an atomic operation.
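
A minimal sketch (the counter, loop count and thread count are illustrative) of protecting a shared counter with a mutex-guarded critical section:

#include <pthread.h>
#include <stdio.h>

#define NUM_THREADS 4
#define LOOPS       100000

static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
static long counter = 0;

/* Each thread increments the shared counter inside the critical section. */
static void *increment(void *arg)
{
    (void)arg;
    for (int i = 0; i < LOOPS; i++) {
        pthread_mutex_lock(&lock);    /* enter the critical section */
        counter++;
        pthread_mutex_unlock(&lock);  /* leave the critical section */
    }
    return NULL;
}

int main(void)
{
    pthread_t tids[NUM_THREADS];

    for (int i = 0; i < NUM_THREADS; i++)
        pthread_create(&tids[i], NULL, increment, NULL);
    for (int i = 0; i < NUM_THREADS; i++)
        pthread_join(tids[i], NULL);

    /* With the mutex, the result is exactly NUM_THREADS * LOOPS. */
    printf("counter = %ld\n", counter);
    return 0;
}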

A thread that competes only within its own process is a user-level thread; a thread that competes system-wide is mapped onto a lightweight process (LWP). Linux controls and manages all of these threads through its thread implementation.

The following combinations were tested for a thread:
No locking, not detached, the process waits (join): OK
Locking, not detached, the process waits (join): OK
No locking, detached, the process waits: not OK
No locking, detached, the process does not wait: not OK

In thread development, avoid having multiple threads modify the value of the same variable at the same time.


int pthread_mutex_init(pthread_mutex_t *restrict mutex, const pthread_mutexattr_t *restrict attr);
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
The pthread_mutex_init() function creates a mutex dynamically; the parameter attr specifies the attributes of the new mutex. If attr is NULL, the default attributes are used, and the default type is the fast mutex. The attributes of a mutex are specified when the lock is created; the LinuxThreads implementation has only one mutex attribute, the lock type, and the different lock types behave differently when a thread attempts to lock an already-locked mutex.
When pthread_mutex_init() completes successfully it returns zero; any other return value indicates that an error occurred.
After the function succeeds, the mutex is initialized and in the unlocked state.
Mutex properties
Using mutexes (mutual exclusion locks) lets threads execute in an orderly way. Typically, a mutex synchronizes multiple threads by ensuring that only one thread at a time executes the critical section of code. Mutexes can also protect single-threaded code.
To change the default mutex attributes, declare and initialize an attribute object. Typically, the mutex attributes are set in one place near the beginning of the application, so they are easy to find and modify.
Destroying the mutex attribute object
pthread_mutexattr_destroy() deallocates the storage used to maintain an attribute object created with pthread_mutexattr_init().
pthread_mutexattr_destroy syntax
int pthread_mutexattr_destroy(pthread_mutexattr_t *mattr);
pthread_mutexattr_destroy() returns zero after successful completion. Any other return value indicates that an error occurred. The function fails and returns EINVAL if the value specified by mattr is invalid.
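
A short sketch (assuming an implementation that supports the standard mutex types) that builds an error-checking mutex through an attribute object and then releases both the attribute object and the mutex:

#include <pthread.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    pthread_mutexattr_t mattr;
    pthread_mutex_t lock;
    int rc;

    /* Build an attribute object, select the error-checking type,
     * then initialize the mutex with it. */
    pthread_mutexattr_init(&mattr);
    pthread_mutexattr_settype(&mattr, PTHREAD_MUTEX_ERRORCHECK);
    pthread_mutex_init(&lock, &mattr);
    pthread_mutexattr_destroy(&mattr);   /* the mutex keeps its own copy */

    pthread_mutex_lock(&lock);
    /* Relocking an error-checking mutex fails instead of deadlocking. */
    rc = pthread_mutex_lock(&lock);
    if (rc != 0)
        printf("second lock rejected: %s\n", strerror(rc));

    pthread_mutex_unlock(&lock);
    pthread_mutex_destroy(&lock);
    return 0;
}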

Thread synchronization and mutual exclusion: a simple producer-consumer model

#include <string.h>
#include <stdlib.h>
#include <stdio.h>
#include <stdint.h>
#include <unistd.h>
#include <pthread.h>

/* Define the lock and the condition variable and initialize them statically. */
pthread_mutex_t mutex = PTHREAD_MUTEX_INITIALIZER;
pthread_cond_t cond = PTHREAD_COND_INITIALIZER;

#define CUSTOM_COUNT  2   /* number of consumer threads */
#define PRODUCT_COUNT 4   /* number of producer threads */

int g_count = 0;          /* number of products currently available */

/* Consumer thread */
void *consume(void *arg)
{
    int inum = (int)(intptr_t)arg;
    while (1) {
        pthread_mutex_lock(&mutex);
        printf("consum %d\n", inum);
        while (g_count == 0) {
            printf("consum: %d started to wait\n", inum);
            pthread_cond_wait(&cond, &mutex);
            printf("consum: %d woke up\n", inum);
        }
        printf("consum: %d consume product begin\n", inum);
        g_count--;                       /* consume a product */
        printf("consum: %d consume product end\n", inum);
        pthread_mutex_unlock(&mutex);
        sleep(1);
    }
    pthread_exit(0);
}

/* Producer thread */
void *produce(void *arg)
{
    int inum = (int)(intptr_t)arg;
    while (1) {
        pthread_mutex_lock(&mutex);
        if (g_count > 20) {              /* upper bound; the exact value was garbled in the source */
            pthread_mutex_unlock(&mutex);
            sleep(1);
        } else {
            pthread_mutex_unlock(&mutex);
        }
        pthread_mutex_lock(&mutex);
        printf("product quantity: %d\n", g_count);
        printf("produce: %d produce product begin\n", inum);
        g_count++;                       /* as soon as a product is made, tell a consumer */
        printf("produce: %d produce product end\n", inum);
        printf("produce: %d condition signal begin\n", inum);
        pthread_cond_signal(&cond);      /* wake a thread waiting on the condition */
        printf("produce: %d condition signal end\n", inum);
        pthread_mutex_unlock(&mutex);
        sleep(1);
    }
    printf("produce %d\n", inum);
    pthread_exit(0);
}

int main(void)
{
    pthread_t tidArray[CUSTOM_COUNT + PRODUCT_COUNT];

    /* Create the consumer threads */
    for (int i = 0; i < CUSTOM_COUNT; i++) {
        pthread_create(&(tidArray[i]), NULL, consume, (void *)(intptr_t)i);
    }
    /* Create the producer threads */
    for (int i = 0; i < PRODUCT_COUNT; i++) {
        pthread_create(&(tidArray[i + CUSTOM_COUNT]), NULL, produce, (void *)(intptr_t)i);
    }
    for (int i = 0; i < CUSTOM_COUNT + PRODUCT_COUNT; i++) {
        pthread_join(tidArray[i], NULL); /* wait for the thread to end */
    }
    return 0;
}
