13th Week Study Report


Chapter 12: Concurrent Programming

If logical control flows overlap in time, they are concurrent.

Applications that use application-level concurrency are called concurrent programs. There are three basic approaches to building concurrent programs: processes, I/O multiplexing, and threads.

12.1 Process-Based Concurrent Programming

Step 1: The server accepts a connection request from a client.

Step 2: The server forks a child process to service that client.

Step 3: The server accepts another connection request.

Step 4: The server forks another child process to service the new client.

12.1.1 Process-Based Concurrent Servers

You must include a SIGCHLD handler to reap the resources of zombie child processes.

To avoid leaking file descriptors, both the parent and the child must close their respective copies of connfd.

The connection to the client is not terminated until both the parent's and the child's copies of connfd are closed.

12.1.2 Pros and Cons of Processes

State sharing model between parent and child processes: they share the open file table but do not share user address space.

Pros of separate address spaces: one process cannot accidentally overwrite the virtual memory of another process.

Cons: sharing state information becomes difficult and must be done with explicit IPC mechanisms, which tend to be slow.


12.2 Concurrent programming based on I/O multiplexing

The server must respond to two independent I/O events:

    1. A network client initiating a connection request
    2. A user typing a command line on the keyboard

Basic idea: use the select function to ask the kernel to suspend the process, returning control to the application only after one or more I/O events have occurred.

The select function takes two inputs: (1) a descriptor set called the read set, and (2) the cardinality n of the read set (the maximum descriptor value plus 1).

Descriptor k is ready for reading if and only if a request to read one byte from that descriptor would not block.

12.2.1 A Concurrent Event-Driven Server Based on I/O Multiplexing

I/O multiplexing can be used as the basis for concurrent event-driven programs

A state machine is a set of states, input events, and transitions

A self-loop is a transition whose input and output states are the same

The collection of active clients is maintained in a pool structure

After initializing the pool by calling init_pool, the server enters an infinite loop

In each iteration of the loop, the server calls the select function to detect two different types of input events:

    1. The connection request from a new client arrives
    2. A connected descriptor for an existing client is ready to read

The init_pool function initializes the client pool

The add_client function adds a new client to the pool of active clients

The check_clients function echoes a text line from each ready connected descriptor

Pros and cons of I/O multiplexing:

Pros: (1) it gives the programmer more control over program behavior than process-based designs; (2) every logical flow has access to the entire address space of the process.

Cons: (1) coding complexity; (2) it cannot fully exploit multicore processors.


12.3 Thread-Based Concurrent Programming

A thread is a logical flow running in the context of a process

Each thread has its own thread context, including a unique integer thread ID (TID), stack, stack pointer, program counter, general-purpose registers, and condition codes

All threads running in a process share the entire virtual address space of the process

12.3.1 Threading Execution Model

Each process begins life as a single thread, called the main thread

At some point, the main thread creates a peer thread; from that point on, the two threads run concurrently

Eventually, perhaps because the main thread executes a slow system call, control passes to the peer thread via a context switch

The peer thread executes for a while, and then control passes back to the main thread

12.3.2 POSIX Threads

POSIX threads (Pthreads) is a standard interface for manipulating threads in C programs

Pthreads allows a program to create, kill, and reap threads, and to share data safely with peer threads

12.3.3 Creating Threads

The pthread_create function creates a new thread and runs the thread routine f in the context of the new thread, passing it the input argument arg

When pthread_create returns, the parameter tid contains the ID of the newly created thread

12.3.4 Terminating Threads

A thread terminates in one of four ways:

    1. The top-level thread routine returns (implicit)
    2. The thread calls the pthread_exit function (explicit)
    3. Some peer thread calls the Unix exit function, which terminates the process and all of its threads
    4. Another peer thread calls the pthread_cancel function with the thread's ID as an argument

12.3.5 Reaping Terminated Threads

A thread calls the pthread_join function to wait for another thread to terminate

It blocks until thread tid terminates, assigns the (void *) pointer returned by the thread routine to the location pointed to by thread_return, and then reaps all memory resources held by the terminated thread

12.3.6 Detaching Threads

The pthread_detach function detaches the joinable thread tid

A thread can detach itself by calling pthread_detach with pthread_self() as the argument

12.3.7 Initializing Threads

The pthread_once function ensures that thread-shared state is initialized exactly once, no matter how many threads invoke the initialization


12.4 Shared variables in multi-threaded programs

12.4.1 Thread Memory model

A pool of concurrent threads runs in the context of a process

Each thread has its own separate thread context

12.4.2 Mapping variables to memory

Global variables are variables defined outside of a function; virtual memory contains exactly one instance of each, which any thread can reference

Local automatic variables are variables defined inside a function without the static attribute; each thread's stack contains its own instances of them

Local static variables are variables defined inside a function with the static attribute; as with globals, virtual memory contains exactly one instance, shared by all threads

12.4.3 Shared variables

A variable v is shared if and only if one of its instances is referenced by more than one thread


12.5 Synchronizing Threads with semaphores

In general, there is no way to predict whether the operating system will choose a correct ordering for your threads; a progress graph can be used to illustrate the problem

12.5.1 Progress Graphs

A progress graph models the execution of n concurrent threads as a trajectory through an n-dimensional Cartesian space

Each axis k corresponds to the progress of thread k

Each point represents a state in which thread k has completed instruction Ik

The origin of the graph corresponds to the initial state, in which no thread has completed an instruction

A progress graph models instruction execution as a transition from one state to another

A legal transition moves one step to the right or one step up

For a thread, the instructions (L, U, S) that manipulate the contents of the shared variable cnt form a critical section

The intersection of the two critical sections defines a region called the unsafe region

A trajectory that skirts the unsafe region is a safe trajectory; conversely, one that touches any part of it is an unsafe trajectory


12.5.2 Semaphores

A semaphore s is a global variable with a nonnegative integer value that can be manipulated only by two special operations:

P(s): if s is nonzero, P decrements s and returns immediately; if s is zero, the thread is suspended until s becomes nonzero and the thread is restarted by a V operation, after which P decrements s and returns

V(s): increments s by 1; if there are any threads blocked in a P operation waiting for s to become nonzero, then V restarts exactly one of them, which then decrements s and completes its P operation


12.5.3 using semaphores to achieve mutual exclusion

Basic idea: associate a semaphore s (initially 1) with each shared variable, and surround the corresponding critical section with P(s) and V(s) operations.

The value of a binary semaphore is always 0 or 1

A binary semaphore used to provide mutual exclusion is also known as a mutex: a P operation locks the mutex, and a V operation unlocks it

A semaphore used as a counter for a set of available resources is called a counting semaphore

Key idea: the P and V operations create a forbidden region that trajectories cannot enter, and this region surrounds the unsafe region


12.7 Other concurrency issues

12.7.1 Thread Safety

Four classes of thread-unsafe functions:

    1. Functions that do not protect shared variables
    2. Functions that keep state across multiple invocations
    3. Functions that return a pointer to a static variable
    4. Functions that call thread-unsafe functions

12.7.2 Reentrancy

Reentrant functions reference no shared data when they are called by multiple threads

12.7.3 Using Existing Library Functions in Threaded Programs

See Figure 12.39 (p. 693)

With the exception of rand and strtok, all of these thread-unsafe functions are of class 3

12.7.4 Races

A race occurs when the correctness of a program depends on one thread reaching point x in its control flow before another thread reaches point y

12.7.5 Deadlock

Semaphores introduce the potential for a nasty kind of run-time error called deadlock, in which a collection of threads is blocked, waiting for a condition that will never be true

Mutex lock ordering rule: given a total ordering of all the mutexes in a program, the program is deadlock-free if each thread acquires its mutexes in that order and releases them in reverse order

Reference: Computer Systems: A Programmer's Perspective (CS:APP)

