13th Week Study Notes

Source: Internet
Author: User
Tags: semaphore

Chapter 12: Concurrent Programming

Section I: Process-based concurrent programming

Processes are the simplest way to construct a concurrent program.

Common functions are as follows:

    • fork
    • exec
    • waitpid
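A minimal sketch of the process-per-request idea, assuming a Unix-like system; `handle_request` and `serve_one` are illustrative names, not from the text:

```c
#include <stdio.h>
#include <stdlib.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

/* handle_request is a hypothetical stand-in for real per-client work. */
static void handle_request(int reqid) {
    printf("child %d handling request %d\n", (int)getpid(), reqid);
}

/* Fork one child per request; the parent reaps it with waitpid. */
int serve_one(int reqid) {
    pid_t pid = fork();
    if (pid < 0)
        return -1;                    /* fork failed */
    if (pid == 0) {                   /* child: do the work, then exit */
        handle_request(reqid);
        _exit(0);
    }
    int status;                       /* parent: reap the child */
    if (waitpid(pid, &status, 0) < 0)
        return -1;
    return WIFEXITED(status) ? WEXITSTATUS(status) : -1;
}
```

A real server would `fork` without waiting immediately (reaping children in a SIGCHLD handler) so the parent can keep accepting connections.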
Section II: Concurrent programming based on I/O multiplexing

The idea is to use the select function to ask the kernel to suspend the process, returning control to the application only after one or more I/O events have occurred.

The select function manipulates sets of type fd_set, called descriptor sets. Logically, a descriptor set is a bit vector of size n, where each bit b[k] corresponds to descriptor k; descriptor k is a member of the set if and only if b[k] = 1.

Note:

The read set must be rebuilt each time select is called, because select modifies it in place to indicate which descriptors are ready.
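A sketch of that pattern; `wait_readable` is an illustrative helper, assuming POSIX select:

```c
#include <sys/select.h>
#include <unistd.h>

/* Wait until fd is readable, rebuilding the read set on every call:
   select() overwrites the set to show which descriptors are ready. */
int wait_readable(int fd) {
    fd_set read_set;
    FD_ZERO(&read_set);              /* fresh set each call */
    FD_SET(fd, &read_set);           /* b[fd] = 1 */
    if (select(fd + 1, &read_set, NULL, NULL, NULL) < 0)
        return -1;
    return FD_ISSET(fd, &read_set);  /* 1 if fd is ready for reading */
}
```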

I. Concurrent event-driven server based on I/O multiplexing

Event-driven design: model logical flows as state machines.

State machine:

    • States
    • Input events
    • Transitions

The overall process is:

    • The select function detects an input event
    • The add_client function creates a new state machine
    • The check_clients function performs state transitions (echoing an input line, in the textbook example) and deletes the state machine when it finishes

Other:

    • init_pool: initializes the client pool
    • add_client: adds a new client to the pool of active clients
    • check_clients: echoes a text line from each ready connected descriptor
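A rough sketch of the pool structure and init_pool, following the text's descriptions; the textbook's version also keeps per-client Rio read buffers, which are omitted here:

```c
#include <sys/select.h>

/* A simplified client pool for the event-driven echo server. */
typedef struct {
    int maxfd;                    /* largest descriptor in read_set */
    fd_set read_set;              /* set of all active descriptors */
    fd_set ready_set;             /* subset of descriptors ready for reading */
    int nready;                   /* number of ready descriptors from select */
    int clientfd[FD_SETSIZE];     /* -1 marks an empty slot */
} pool;

void init_pool(int listenfd, pool *p) {
    p->maxfd = listenfd;
    p->nready = 0;
    for (int i = 0; i < FD_SETSIZE; i++)
        p->clientfd[i] = -1;      /* no connected clients yet */
    FD_ZERO(&p->read_set);
    FD_SET(listenfd, &p->read_set);  /* listenfd is the only member */
}
```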
II. Advantages and disadvantages of I/O multiplexing

1. Advantages
    • Gives the programmer more control over program behavior than process-based designs
    • Runs in a single process context, so every logical flow can access the entire address space of the process, making data sharing easy
    • You can use GDB to debug
    • Efficient
2. Disadvantages
    • Complex coding
    • Cannot take full advantage of multi-core processors
Section III: Thread-based concurrent programming

This approach is a hybrid of the two approaches above.

A thread is a logical flow that runs in the context of a process.

Each thread has its own thread context:

    • A unique integer thread ID (TID)
    • Stack
    • Stack pointer
    • Program counter
    • General purpose Registers
    • Condition code
I. Thread execution model

1. Main thread

Each process begins life as a single thread, the main thread. It differs from other threads only in that it is always the first thread to run in the process.

2. Peer threads

At some point the main thread creates a peer thread, and from then on the two threads run concurrently.

Each peer thread can read and write the same shared data.

3. Why control switches from the main thread to a peer thread:
    • The main thread performs a slow system call, such as read or sleep
    • Interrupted by the system interval timer

The switch happens via a context switch.

After a peer thread executes for a while, control passes back to the main thread, and so on.

4. The difference between threads and processes
    • The context switch of a thread is much faster than a process
    • Organization:
      • Process: Strict parent-child hierarchy
      • Threads: the threads associated with a process form a pool of peers, independent of threads created by other processes. A thread can kill any of its peers, or wait for any of its peers to terminate.
II. POSIX threads

Pthreads (POSIX threads) is a standard interface for manipulating threads from C programs. The basic usage is:

    • The code of the thread and the local data are encapsulated in a thread routine
    • Each thread routine takes a generic pointer as input and returns a generic pointer.

Generic thread routine prototype:

void *func(void *param);
typedef void *(func)(void *);

III. Creating threads

1. Creating a thread: the pthread_create function
#include <pthread.h>
typedef void *(func)(void *);
int pthread_create(pthread_t *tid, pthread_attr_t *attr, func *f, void *arg);
// Returns: 0 if OK, nonzero on error

Creates a new thread that runs thread routine f with input argument arg in the context of the new thread.

attr is usually NULL (default attributes).

On return, the location pointed to by tid contains the ID of the newly created thread.

2. Getting the thread ID: the pthread_self function

#include <pthread.h>
pthread_t pthread_self(void);
// Returns: the thread ID (TID) of the caller
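A minimal sketch combining pthread_create, pthread_self, and pthread_join; `spawn_and_wait` is an illustrative name:

```c
#include <pthread.h>
#include <stdio.h>

/* The peer thread prints its own ID via pthread_self. */
static void *routine(void *arg) {
    (void)arg;
    printf("peer thread TID: %lu\n", (unsigned long)pthread_self());
    return NULL;
}

int spawn_and_wait(void) {
    pthread_t tid;
    if (pthread_create(&tid, NULL, routine, NULL) != 0)
        return -1;                    /* nonzero means error */
    return pthread_join(tid, NULL);   /* 0 on success */
}
```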
IV. Terminating threads

1. Ways a thread can terminate:
    • Implicit termination: the top-level thread routine returns
    • Explicit termination: calling the pthread_exit function
      * If the main thread calls pthread_exit, it waits for all other peer threads to terminate, then terminates the main thread and the entire process; the return value is thread_return
    • Some peer thread calls the Unix exit function, which terminates the process and all threads associated with it
    • Another peer thread terminates the current thread by calling pthread_cancel with the current thread's ID as an argument
2. The pthread_exit function

#include <pthread.h>
void pthread_exit(void *thread_return);
// Never returns
3. The pthread_cancel function

#include <pthread.h>
int pthread_cancel(pthread_t tid);
// Returns: 0 if OK, nonzero on error
V. Reaping terminated threads

Use the pthread_join function:

#include <pthread.h>
int pthread_join(pthread_t tid, void **thread_return);
// Returns: 0 if OK, nonzero on error

This function blocks until thread tid terminates, assigns the generic (void *) pointer returned by the thread routine to the location pointed to by thread_return, and then reaps all memory resources held by the terminated thread.
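A small sketch of the join-with-result pattern; `square` and `join_for_result` are illustrative names:

```c
#include <pthread.h>
#include <stdlib.h>

/* The peer thread returns a malloc'd result; pthread_join hands that
   pointer back to the joiner through thread_return. */
static void *square(void *arg) {
    long n = (long)arg;
    long *result = malloc(sizeof(long));
    *result = n * n;
    return result;                       /* becomes *thread_return */
}

long join_for_result(long n) {
    pthread_t tid;
    void *thread_return;
    pthread_create(&tid, NULL, square, (void *)n);
    pthread_join(tid, &thread_return);   /* blocks until tid terminates */
    long value = *(long *)thread_return;
    free(thread_return);                 /* reclaim the result block */
    return value;
}
```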

VI. Detaching threads

At any point in time, a thread is either joinable or detached.

1. Joinable threads
    • Can be reaped and killed by other threads
    • Until it is reaped, its memory resources are not released
    • Every joinable thread is either reaped by another thread or detached by a call to the pthread_detach function
2. Detached threads
    • Cannot be reaped or killed by other threads
    • Memory resources are freed automatically by the system when the thread terminates
3. The pthread_detach function

#include <pthread.h>
int pthread_detach(pthread_t tid);
// Returns: 0 if OK, nonzero on error

This function detaches the joinable thread tid.

Threads can detach themselves by calling pthread_detach with pthread_self() as the argument.

Each peer thread should detach itself before it begins processing its request, so that the system can reclaim its memory resources after it terminates.
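A sketch of that detach-self pattern; `handler` and `spawn_detached` are illustrative, and real code would pass the connected descriptor through arg:

```c
#include <pthread.h>

/* The peer thread detaches itself before doing any work, so its
   resources are reclaimed automatically when it terminates. */
static void *handler(void *arg) {
    pthread_detach(pthread_self());   /* detach before handling the request */
    (void)arg;                        /* ...serve the client via arg here... */
    return NULL;
}

int spawn_detached(void) {
    pthread_t tid;
    return pthread_create(&tid, NULL, handler, NULL);  /* 0 on success */
}
```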

VII. Initializing threads: the pthread_once function

#include <pthread.h>
pthread_once_t once_control = PTHREAD_ONCE_INIT;
int pthread_once(pthread_once_t *once_control, void (*init_routine)(void));
// Always returns 0
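A minimal sketch showing that the init routine runs only once no matter how many times pthread_once is called; names are illustrative:

```c
#include <pthread.h>

static pthread_once_t once_control = PTHREAD_ONCE_INIT;
static int init_count = 0;

/* Would initialize shared state here; counting calls makes the
   once-only behavior observable. */
static void init_shared(void) {
    init_count++;
}

int ensure_init(void) {
    pthread_once(&once_control, init_shared);
    pthread_once(&once_control, init_shared);  /* second call is a no-op */
    return init_count;
}
```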
Section IV: Shared variables in multithreaded programs

A variable is shared when and only if multiple threads refer to an instance of the variable.

I. The thread memory model

Note:

Registers are never shared, and virtual memory is always shared.

II. Mapping variables to memory

III. Shared variables

A variable v is shared if and only if one of its instances is referenced by more than one thread.

Section V: Synchronizing threads with semaphores

I. Progress graphs

A progress graph models the execution of n concurrent threads as a trajectory through an n-dimensional Cartesian space, where the origin corresponds to the initial state in which no thread has completed an instruction.

When n = 2 the state space is simple: a familiar two-dimensional coordinate plane in which each axis corresponds to a thread, and a transition is represented as a directed edge.

Transition rules:
    • A legal transition moves right or up, i.e., one instruction in one thread completes
    • Two instructions cannot complete at the same time, i.e., diagonal edges are not allowed
    • A program cannot run backward, i.e., edges never point down or to the left

The execution history of a program is modeled as a trajectory through the state space.

Decomposition of the thread loop code:
    • H: the instruction block at the head of the loop
    • L: the instruction that loads the shared variable cnt into register %eax of thread i
    • U: the instruction that updates (increments) %eax
    • S: the instruction that stores the updated value of %eax back into the shared variable cnt
    • T: the instruction block at the tail of the loop
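The L/U/S decomposition is why an unsynchronized counter is unsafe: interleavings can lose updates. A sketch that exercises the race (names are illustrative; the final count is unpredictable and often less than 2 * NITERS):

```c
#include <pthread.h>

#define NITERS 100000
static volatile long cnt = 0;        /* unprotected shared variable */

static void *count(void *arg) {
    (void)arg;
    for (long i = 0; i < NITERS; i++)
        cnt++;                       /* load (L), update (U), store (S) */
    return NULL;
}

long run_unsafe_counter(void) {
    pthread_t t1, t2;
    cnt = 0;
    pthread_create(&t1, NULL, count, NULL);
    pthread_create(&t2, NULL, count, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    return cnt;                      /* anywhere between a few and 2*NITERS */
}
```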

II. Semaphores

How semaphores achieve mutual exclusion:

  • Two or more processes cooperate by signaling: a process can be forced to stop at a certain point and block (wait) until it receives a "go ahead" signal (is awakened);
  • The variable used to implement this signaling is called a semaphore. It is usually defined as a record variable s with two fields: an integer count, and a queue whose elements are the processes blocked waiting on the semaphore (FIFO).
  • Semaphore definition:

    type semaphore = record
        count: integer;
        queue: list of process;
    end;
    var s: semaphore;

Two atomic operations are defined on a semaphore: P and V.

P (wait):

wait(s):
    s.count := s.count - 1;
    if s.count < 0 then
    begin
        block the process;
        insert the process into the s.queue queue;
    end

V (signal):

signal(s):
    s.count := s.count + 1;
    if s.count <= 0 then
    begin
        wake up the process at the head of the queue;
        remove that process from the s.queue blocking queue;
    end
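The P and V operations above map directly onto POSIX semaphores, where sem_wait is P and sem_post is V. A minimal sketch, assuming Linux unnamed semaphores:

```c
#include <semaphore.h>

/* Initialize, perform one P/V pair, and read back the count. */
int demo_pv(void) {
    sem_t s;
    if (sem_init(&s, 0, 1) != 0)   /* initialize before use: count = 1 */
        return -1;
    sem_wait(&s);                  /* P(s): count 1 -> 0 */
    sem_post(&s);                  /* V(s): count 0 -> 1 */
    int value;
    sem_getvalue(&s, &value);
    sem_destroy(&s);
    return value;                  /* back to 1 */
}
```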

It is important to note that every semaphore must be initialized before use.

III. Using semaphores for mutual exclusion

1. Basic idea

Associate each shared variable (or set of related shared variables) with a semaphore s, initially 1, and surround the corresponding critical section with P and V operations.
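A sketch of that idea using POSIX semaphores to protect a shared counter (assumes Linux; names are illustrative):

```c
#include <pthread.h>
#include <semaphore.h>

#define NITERS2 100000
static sem_t mutex;                  /* binary semaphore, initially 1 */
static volatile long safe_cnt = 0;

static void *safe_count(void *arg) {
    (void)arg;
    for (long i = 0; i < NITERS2; i++) {
        sem_wait(&mutex);            /* P(mutex) */
        safe_cnt++;                  /* critical section */
        sem_post(&mutex);            /* V(mutex) */
    }
    return NULL;
}

long run_safe_counter(void) {
    pthread_t t1, t2;
    safe_cnt = 0;
    sem_init(&mutex, 0, 1);          /* initialize before use */
    pthread_create(&t1, NULL, safe_count, NULL);
    pthread_create(&t2, NULL, safe_count, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&mutex);
    return safe_cnt;                 /* always exactly 2 * NITERS2 */
}
```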

2. Applying wait(s)/signal(s)
  • Before entering the critical section, a process first executes the wait(s) primitive; if s.count < 0, the process calls the block primitive, blocks itself, and inserts itself into the s.queue queue;
  • Note that a blocked process consumes no processor time; this is not busy-waiting. It stays blocked until some process leaving the critical section executes the signal(s) primitive and wakes it up;
  • When another process executes the s.count + 1 operation inside the signal(s) primitive and finds s.count <= 0, i.e., there are still blocked processes in the queue, it calls the wakeup primitive, changes the first process in s.queue to the ready state, and moves it to the ready queue to execute the critical-section code.

Furthermore:

  • The wait operation requests a resource (or the right to use one); a process may block itself while executing the wait primitive;
  • The signal operation releases a resource (or returns the right to use one); a process executing the signal primitive is responsible for waking up a blocked process.
IV. Using semaphores to schedule shared resources

A semaphore serves two purposes:

    • Implement mutex
    • Scheduling shared resources

Semaphores are accordingly divided into mutex semaphores and resource semaphores.

  • The mutex semaphore is used to request or release the right to use a resource, and is usually initialized to 1;

  • Resource semaphores are used to request or return resources, and can be initialized to a positive integer greater than 1, indicating the number of units of the resource available in the system.

1. The physical meaning of the semaphore value
  • s.count > 0: the value is the number of processes that can execute wait(s) without blocking (the number of available resource units). Each wait(s) operation amounts to requesting one unit of the resource.
  • s.count <= 0: no resources are available, and any process requesting the resource blocks. The absolute value of s.count equals the number of processes blocked in the semaphore's wait queue. Each signal operation releases one unit of the resource; if s.count < 0, there are still blocked processes in s.queue, so the first process in the queue is awakened and moved to the ready queue.
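A sketch of a resource semaphore's count as "units still acquirable"; it uses the nonblocking sem_trywait so it can run to completion (assumes Linux; `count_acquirable` is illustrative):

```c
#include <semaphore.h>

/* Initialize a resource semaphore to nunits, then grab units until
   the count reaches 0 and sem_trywait starts failing. */
int count_acquirable(unsigned int nunits) {
    sem_t s;
    sem_init(&s, 0, nunits);         /* count = number of free resources */
    int acquired = 0;
    while (sem_trywait(&s) == 0)     /* nonblocking P: fails at count 0 */
        acquired++;
    sem_destroy(&s);
    return acquired;
}
```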
Section VII: Thread safety

A function is thread-safe if and only if it always produces correct results when called repeatedly from multiple concurrent threads.

Four disjoint classes of thread-unsafe functions, and their countermeasures:

    • Class 1: functions that do not protect shared variables. Fix: protect the shared variables with synchronization operations such as P and V.
    • Class 2: functions that keep state across multiple invocations. Fix: rewrite them so they use no static data.
    • Class 3: functions that return a pointer to a static variable. Fixes: ① rewrite the function; ② use the lock-and-copy technique.
    • Class 4: functions that call thread-unsafe functions. Fix: see the previous three classes.
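A sketch of the lock-and-copy technique for class 3; `unsafe_lookup` and `safe_lookup` are hypothetical names standing in for a function that returns a pointer to a static buffer:

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

static pthread_mutex_t lookup_lock = PTHREAD_MUTEX_INITIALIZER;

/* Thread-unsafe (class 3): returns a pointer to a shared static buffer. */
static char *unsafe_lookup(int key) {
    static char buf[32];
    snprintf(buf, sizeof buf, "value-%d", key);
    return buf;
}

/* Lock-and-copy wrapper: lock, copy the result into a caller-supplied
   buffer, unlock. Callers never see the shared static buffer. */
void safe_lookup(int key, char *out, size_t outlen) {
    pthread_mutex_lock(&lookup_lock);
    strncpy(out, unsafe_lookup(key), outlen - 1);
    out[outlen - 1] = '\0';
    pthread_mutex_unlock(&lookup_lock);
}
```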
II. Reentrancy

A reentrant function references no shared data when it is called by multiple threads.

1. Explicitly reentrant:

All function arguments are passed by value (no pointers), and all data references are to local automatic stack variables, not to static or global variables.

2. Implicitly reentrant:

Some arguments may be pointers, but the calling threads carefully pass pointers only to non-shared data.

III. Using existing library functions in threaded programs

In short: use the reentrant versions of thread-unsafe functions, whose names end with the suffix _r.

IV. Races

1. Why races occur:

A race occurs when the correctness of a program depends on one thread reaching point x in its control flow before another thread reaches point y. That is, the programmer assumes threads will take some particular trajectory through the execution state space, forgetting the rule that a threaded program must work correctly for every feasible trajectory.

2. How to eliminate the race:

Dynamically allocate a separate block for each integer ID, and pass the thread routine a pointer to that block.
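A sketch of that fix (`record_id` and `run_race_free` are illustrative): each thread receives its own malloc'd copy of the loop index instead of a pointer to the shared counter, and frees it itself.

```c
#include <pthread.h>
#include <stdlib.h>

#define NTHREADS 4
static int seen[NTHREADS];           /* which IDs the threads observed */

static void *record_id(void *vargp) {
    int myid = *(int *)vargp;        /* private copy: no race with the loop */
    free(vargp);                     /* each thread frees its own block */
    seen[myid] = 1;
    return NULL;
}

int run_race_free(void) {
    pthread_t tid[NTHREADS];
    for (int i = 0; i < NTHREADS; i++) {
        int *idp = malloc(sizeof(int));  /* separate block per thread */
        *idp = i;
        pthread_create(&tid[i], NULL, record_id, idp);
    }
    for (int i = 0; i < NTHREADS; i++)
        pthread_join(tid[i], NULL);
    int total = 0;
    for (int i = 0; i < NTHREADS; i++)
        total += seen[i];
    return total;                    /* each ID 0..NTHREADS-1 seen exactly once */
}
```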

V. Deadlock

A set of threads is blocked, waiting for a condition that will never be true.

1. Conditions

2. Ways to resolve deadlock

A. Do not let deadlock occur:
    • Static strategy: design an appropriate resource-allocation algorithm so that deadlock cannot occur (deadlock prevention);
    • Dynamic strategy: when a process requests resources, the system checks whether granting the request could produce a deadlock, and refuses the allocation if it would (deadlock avoidance).
B. Let deadlock occur:

Processes request resources without restriction; the system periodically or occasionally checks whether a deadlock has occurred and, when one is detected, resolves it (deadlock detection and recovery).

