20135210 Ellis--Fundamentals of Information Security System Design: Week 13 Study Summary

Tags: posix

Chapter 12: Concurrent Programming. There are three basic approaches to building concurrent programs: processes, I/O multiplexing, and threads.

12.1 Process-based concurrent programming

Process-based concurrent servers

Pros and cons of processes

Processes have a very clear model for sharing state information between parent and child: they share the file table but not the user address space. Having separate address spaces is both an advantage and a disadvantage. Because the address spaces are independent, one process cannot accidentally overwrite the virtual memory of another. On the other hand, inter-process communication is cumbersome and relatively costly.
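A hedged sketch of a process-based echo server along these lines (the port number, the echo loop, and the omitted error handling are illustrative choices, not details from the text):

```c
#include <netinet/in.h>
#include <signal.h>
#include <stdlib.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <unistd.h>

/* Reap all terminated children so no zombies accumulate. */
static void sigchld_handler(int sig) {
    while (waitpid(-1, NULL, WNOHANG) > 0)
        ;
}

int main(void) {
    signal(SIGCHLD, sigchld_handler);

    int listenfd = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(8080); /* illustrative port */
    bind(listenfd, (struct sockaddr *)&addr, sizeof(addr));
    listen(listenfd, 128);

    while (1) {
        int connfd = accept(listenfd, NULL, NULL);
        if (connfd < 0)
            continue; /* e.g., accept interrupted by SIGCHLD */
        if (fork() == 0) {   /* child: serve one client */
            close(listenfd); /* child does not need the listener */
            char buf[1024];
            ssize_t n;
            while ((n = read(connfd, buf, sizeof(buf))) > 0)
                write(connfd, buf, n); /* echo the bytes back */
            close(connfd);
            exit(0);
        }
        close(connfd); /* parent must close its copy of connfd */
    }
}
```

Note that the parent must close its copy of each connected descriptor, or the file-table entries are never released, and the SIGCHLD handler reaps finished children so they do not linger as zombies.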

12.2 Concurrent programming based on I/O multiplexing

I/O multiplexing technology

Basic idea: use the select function to ask the kernel to suspend the process, returning control to the application only after one or more I/O events have occurred.
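A minimal sketch of that idea (the function name and the connected descriptor are hypothetical): block in select until one of two descriptors becomes readable, then report which one.

```c
#include <sys/select.h>
#include <unistd.h>

/* Wait until either stdin or the descriptor `connfd` is readable,
 * and return whichever became ready (-1 on error). */
int wait_for_input(int connfd) {
    fd_set read_set;
    FD_ZERO(&read_set);              /* clear the descriptor set */
    FD_SET(STDIN_FILENO, &read_set); /* watch stdin */
    FD_SET(connfd, &read_set);       /* watch the connection */

    int maxfd = (connfd > STDIN_FILENO ? connfd : STDIN_FILENO) + 1;
    if (select(maxfd, &read_set, NULL, NULL, NULL) < 0)
        return -1; /* error, e.g., interrupted by a signal */

    if (FD_ISSET(STDIN_FILENO, &read_set))
        return STDIN_FILENO;
    return connfd; /* connfd must be the ready one */
}
```

An event-driven server builds on exactly this call: it keeps a set of active descriptors, calls select in a loop, and dispatches whichever logical flows are ready.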

Concurrent event-driven server based on I/O multiplexing

Event-driven design: model each logical flow as a state machine.

State machine:

  • States
  • Input events
  • Transitions

The pros and cons of I/O multiplexing

  • One advantage of event-driven designs is that they give programmers more control over program behavior than process-based designs do. For example, we can imagine writing an event-driven concurrent server that gives preferred service to some clients, which would be difficult in a server that spawns a new process for each client.
  • Another advantage is that an event-driven design based on I/O multiplexing runs in a single process context, so every logical flow has access to the entire address space of the process. This makes it easy to share data between flows. A related advantage of running as a single process is that you can debug your concurrent server with familiar tools, such as GDB, just as you would a sequential program. Finally, event-driven designs are often significantly more efficient than process-based designs, because they do not require a process context switch to schedule a new flow.
  • One obvious drawback of event-driven designs is coding complexity: an event-driven concurrent server needs considerably more code than a process-based one, and the complexity grows as the granularity of concurrency (the number of instructions each logical flow executes per time slice) decreases. Another significant drawback is that event-based designs cannot fully exploit multi-core processors.

12.3 Thread-based concurrent programming

Each thread has its own thread context, including a unique thread ID, stack, stack pointer, program counter, general-purpose registers, and condition codes. Because all of a process's threads run within that single process, they share its entire virtual address space, including its code, data, heap, shared libraries, and open files.

Threading Execution Model

The execution model of threads is similar in some ways to that of processes. Each process begins life as a single thread, which we call the main thread.

POSIX threads

POSIX threads (Pthreads) is a standard interface for manipulating threads from C programs. Pthreads defines roughly 60 functions that allow a program to create, kill, and reap threads, to share data safely with peer threads, and to notify peers of changes in system state.

Creating Threads

Threads create additional threads by calling the pthread_create function.
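A minimal sketch in the style of the classic Pthreads "hello" example (error handling omitted):

```c
#include <pthread.h>
#include <stdio.h>

/* Thread routine: every Pthreads routine takes and returns a void *. */
void *thread(void *vargp) {
    printf("Hello from a peer thread!\n");
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, thread, NULL); /* spawn a peer thread */
    pthread_join(tid, NULL);                  /* wait for it and reap it */
    return 0;
}
```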

Terminating a thread

  • The thread terminates implicitly when its top-level thread routine returns.
  • The thread terminates explicitly by calling the pthread_exit function. If the main thread calls pthread_exit, it waits for all other peer threads to terminate and then terminates itself (and the entire process).
  • Some peer thread calls the Unix exit function, which terminates the process and all threads associated with it.

Reclaiming the resources of terminated threads

Detaching threads

Initializing threads
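As a hedged sketch of the APIs these headings refer to: pthread_join (used above) reclaims a terminated peer, pthread_detach marks a thread so the system reclaims it automatically, and pthread_once runs an initialization routine exactly once.

```c
#include <pthread.h>
#include <stdio.h>

static pthread_once_t once = PTHREAD_ONCE_INIT;

/* Runs exactly once, no matter how many threads call pthread_once. */
static void init_routine(void) {
    printf("one-time initialization\n");
}

void *worker(void *vargp) {
    pthread_once(&once, init_routine);
    /* Detach ourselves: the system reclaims our resources at exit,
     * so no other thread needs to call pthread_join on us. */
    pthread_detach(pthread_self());
    return NULL;
}

int main(void) {
    pthread_t tid;
    pthread_create(&tid, NULL, worker, NULL);
    pthread_exit(NULL); /* terminate main, but let peers finish first */
}
```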

12.4 Shared variables in multi-threaded programs

Thread memory model

A set of concurrent threads runs in the context of a process. Each thread has its own separate thread context, including a thread ID, stack, stack pointer, program counter, condition codes, and general-purpose register values. Each thread shares the rest of the process context with the other threads: the entire user virtual address space, which consists of read-only text (code), read/write data, the heap, and all shared library code and data areas. The threads also share the same set of open files.

In an operational sense, it is impossible for one thread to read or write the register values of another thread. On the other hand, any thread can access any location in the shared virtual memory; if some thread modifies a memory location, then every other thread will eventually see the change when it reads that location. Thus registers are never shared, whereas virtual memory is always shared.

The memory model for the separate thread stacks is not as clean. These stacks are kept in the stack area of the virtual address space and are usually accessed independently by their respective threads. We say usually rather than always because different thread stacks are not protected from other threads: if a thread somehow obtains a pointer into another thread's stack, it can read and write any part of that stack.

Mapping variables to memory

In threaded C programs, variables are mapped to virtual memory according to their storage class:

Global variables

A global variable is a variable declared outside of a function. At run time, the read/write area of virtual memory contains exactly one instance of each global variable, which any thread can reference.

Local automatic variables

Local automatic variables are variables defined inside a function without the static attribute. At run time, each thread's stack contains its own instances of all local automatic variables. This is true even when multiple threads execute the same thread routine.

Local static variables

Local static variables are variables defined inside a function with the static attribute. As with global variables, the read/write area of virtual memory contains exactly one instance of each local static variable declared in the program.

Shared variables

We say that a variable v is shared if and only if one of its instances is referenced by more than one thread.
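A sketch in the spirit of the book's sharing example, showing where each storage class lives (the messages and thread count are illustrative):

```c
#include <pthread.h>
#include <stdio.h>

char **ptr; /* global: exactly one instance, visible to every thread */

void *thread(void *vargp) {
    long myid = (long)vargp; /* local automatic: one instance per thread stack */
    static int cnt = 0;      /* local static: one instance shared by all threads */
    /* Note: the unsynchronized ++cnt is itself a race; it is here only
     * to demonstrate that a local static variable is shared. */
    printf("[%ld]: %s (cnt=%d)\n", myid, ptr[myid], ++cnt);
    return NULL;
}

int main(void) {
    pthread_t tid[2];
    char *msgs[2] = {"Hello from foo", "Hello from bar"};
    ptr = msgs; /* peers can reach main's stack through the global ptr */
    for (long i = 0; i < 2; i++)
        pthread_create(&tid[i], NULL, thread, (void *)i);
    for (int i = 0; i < 2; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```

Here ptr, msgs, and cnt are all shared by the definition above, while myid is not, since each thread references only its own instance.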

12.5 Synchronizing threads with semaphores

Progress graphs

A progress graph models the execution of n concurrent threads as a trajectory through an n-dimensional Cartesian space.

Semaphores

A semaphore s is a global variable with a nonnegative integer value that can be manipulated only by two special operations, called P and V:

  • P(s): If s is nonzero, P decrements s by 1 and returns immediately. If s is zero, the thread is suspended until s becomes nonzero, at which point a V operation restarts it. After restarting, the P operation decrements s and returns control to the caller.
  • V(s): The V operation increments s by 1. If there are any threads blocked in a P operation waiting for s to become nonzero, the V operation restarts exactly one of them, which then decrements s and completes its P operation. The test and decrement in P are indivisible: once the semaphore s is found to be nonzero, it is decremented without interruption. The increment in V is likewise indivisible: the semaphore is loaded, incremented, and stored without interruption. Note that the definition of V does not specify the order in which waiting threads are restarted; the only requirement is that V restart exactly one waiting thread.
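In POSIX, these operations correspond to sem_wait and sem_post; a minimal sketch of CSAPP-style wrappers (error handling omitted):

```c
#include <semaphore.h>

void P(sem_t *s) { sem_wait(s); } /* P: block until s > 0, then decrement s */
void V(sem_t *s) { sem_post(s); } /* V: increment s, waking one blocked waiter */
```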
Using semaphores to achieve mutual exclusion

Semaphores provide a convenient way to ensure mutually exclusive access to shared variables.

The basic idea is to associate each shared variable (or a set of related shared variables) with a semaphore.

A semaphore that protects a shared variable in this way is called a binary semaphore, because its value is always 0 or 1.

A binary semaphore whose purpose is to provide mutual exclusion is often called a mutex. Performing a P operation on a mutex is called locking the mutex; performing a V operation is called unlocking it. A thread that has locked a mutex but not yet unlocked it is said to hold the mutex.
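A minimal sketch of a shared counter protected by a binary semaphore (the iteration count is illustrative; error handling omitted):

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>

#define NITERS 100000

volatile long cnt = 0; /* shared counter */
sem_t mutex;           /* binary semaphore protecting cnt */

void *count_thread(void *vargp) {
    for (int i = 0; i < NITERS; i++) {
        sem_wait(&mutex); /* P: lock the mutex */
        cnt++;            /* critical section */
        sem_post(&mutex); /* V: unlock the mutex */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&mutex, 0, 1); /* binary semaphore, initially 1 */
    pthread_create(&t1, NULL, count_thread, NULL);
    pthread_create(&t2, NULL, count_thread, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("cnt = %ld (expected %d)\n", cnt, 2 * NITERS);
    return 0;
}
```

Without the P and V around cnt++, the load-increment-store sequences of the two threads could interleave, and the final count would often be less than expected.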

A semaphore that is used as a counter for a set of available resources is called a counting semaphore.

The key idea is that this combination of P and V operations creates a set of states, called the forbidden region, that a trajectory can never enter.

Because of the semaphore invariant (the semaphore's value is never negative), no feasible trajectory can include a state inside the forbidden region. And since the forbidden region completely encloses the unsafe region, no feasible trajectory can touch any part of the unsafe region.

As a result, every feasible trajectory is safe, and the program correctly increments the counter regardless of the ordering of the instructions at run time.

Using semaphores to schedule shared resources

Semaphores serve two purposes:

  • Implementing mutual exclusion
  • Scheduling shared resources

Concurrent server based on pre-threading

In the basic concurrent server, we create a new thread for every new client, which incurs a nontrivial cost. A prethreaded server reduces this overhead by using a producer-consumer model. The server consists of a main thread and a set of worker threads. The main thread repeatedly accepts connection requests from clients and places the resulting connection descriptors in a bounded shared buffer. Each worker thread repeatedly removes a descriptor from the buffer, services the client, and then waits for the next descriptor.
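A sketch of that shared buffer in the style of the book's sbuf package (here the int items would hold connection descriptors; error handling omitted):

```c
#include <semaphore.h>
#include <stdlib.h>

typedef struct {
    int *buf;    /* buffer array */
    int n;       /* maximum number of slots */
    int front;   /* buf[(front+1) % n] is the first item */
    int rear;    /* buf[rear % n] is the last item */
    sem_t mutex; /* protects accesses to buf */
    sem_t slots; /* counts available slots */
    sem_t items; /* counts available items */
} sbuf_t;

void sbuf_init(sbuf_t *sp, int n) {
    sp->buf = calloc(n, sizeof(int));
    sp->n = n;
    sp->front = sp->rear = 0;
    sem_init(&sp->mutex, 0, 1); /* binary semaphore for mutual exclusion */
    sem_init(&sp->slots, 0, n); /* initially, n empty slots */
    sem_init(&sp->items, 0, 0); /* initially, no items */
}

/* Producer (main thread): insert a descriptor at the rear. */
void sbuf_insert(sbuf_t *sp, int item) {
    sem_wait(&sp->slots); /* wait for an available slot */
    sem_wait(&sp->mutex); /* lock the buffer */
    sp->buf[(++sp->rear) % sp->n] = item;
    sem_post(&sp->mutex); /* unlock the buffer */
    sem_post(&sp->items); /* announce a new item */
}

/* Consumer (worker thread): remove and return the first descriptor. */
int sbuf_remove(sbuf_t *sp) {
    sem_wait(&sp->items); /* wait for an available item */
    sem_wait(&sp->mutex);
    int item = sp->buf[(++sp->front) % sp->n];
    sem_post(&sp->mutex);
    sem_post(&sp->slots); /* announce a newly freed slot */
    return item;
}
```

Note that slots and items are counting semaphores used for scheduling, while mutex is a binary semaphore used for mutual exclusion, matching the two purposes listed above.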

12.6 Using threads to improve parallelism

12.7 Other concurrency issues

Thread safety

In our programs we should write thread-safe functions whenever possible, that is, functions that always produce correct results when called repeatedly from multiple concurrent threads. A function that does not meet this condition is called thread-unsafe.

Four classes of thread-unsafe functions:

  • Functions that do not protect shared variables. The fix is to protect the variables with P and V operations.
  • Functions that keep state across multiple invocations, for example a function that relies on static variables. The fix is to rewrite the function so that it uses no static state, for example by having the caller pass the state in.
  • Functions that return a pointer to a static variable. The fix is lock-and-copy, as sketched below.
  • Functions that call thread-unsafe functions.
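A hedged sketch of lock-and-copy, wrapping a hypothetical class-3 function (all names here are illustrative):

```c
#include <pthread.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical thread-unsafe function: it returns a pointer to a
 * static buffer, so concurrent callers would race on its contents. */
static char result_buf[64];

char *unsafe_lookup(int key) {
    snprintf(result_buf, sizeof(result_buf), "value-%d", key);
    return result_buf;
}

static pthread_mutex_t buf_mutex = PTHREAD_MUTEX_INITIALIZER;

/* Lock-and-copy wrapper: lock, call the unsafe function, copy its
 * result into caller-private storage, then unlock. */
void safe_lookup(int key, char *dst, size_t dstlen) {
    pthread_mutex_lock(&buf_mutex);
    strncpy(dst, unsafe_lookup(key), dstlen - 1);
    dst[dstlen - 1] = '\0';
    pthread_mutex_unlock(&buf_mutex);
}
```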

Reentrancy

There is an important class of thread-safe functions, known as reentrant functions, characterized by the property that they reference no shared data when called by multiple threads. Although the terms thread-safe and reentrant are sometimes used (incorrectly) as synonyms, there is a clear technical distinction between them that deserves attention. The figure in the text shows the set relationships among reentrant, thread-safe, and thread-unsafe functions: the set of all functions is partitioned into the disjoint sets of thread-safe and thread-unsafe functions, and the reentrant functions form a proper subset of the thread-safe functions.

Reentrant functions are typically more efficient than non-reentrant thread-safe functions because they require no synchronization operations.

Using existing library functions in threaded programs

Races
    • A race occurs when the correctness of a program depends on one thread reaching point x in its control flow before another thread reaches point y. Races usually arise because the programmer assumes that the threads will follow some particular trajectory through the execution state space, forgetting the rule that a threaded program must work correctly for any feasible trajectory.
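A sketch of the classic race from the book's discussion (the thread count is illustrative): main passes each peer a pointer to the loop variable i, and each peer races to dereference it before the next iteration overwrites it.

```c
#include <pthread.h>
#include <stdio.h>

#define N 4

void *thread(void *vargp) {
    int myid = *((int *)vargp); /* RACE: main may already have changed i */
    printf("Hello from thread %d\n", myid);
    return NULL;
}

int main(void) {
    pthread_t tid[N];
    int i;
    for (i = 0; i < N; i++)
        pthread_create(&tid[i], NULL, thread, &i); /* passing &i is the bug */
    for (i = 0; i < N; i++)
        pthread_join(tid[i], NULL);
    return 0;
}
```

One standard fix is to pass each thread its own dynamically allocated copy of the ID (or to pass the value itself through the void * argument), so no peer ever reads main's loop variable.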
Deadlock

Semaphores introduce a potentially nasty kind of runtime error called deadlock, in which a collection of threads is blocked, each waiting for a condition that will never be true.

If programmers use P and V operations incorrectly, the forbidden regions of two semaphores can overlap. If an execution trajectory happens to reach the deadlock state d, then no further progress is possible, because the overlapping forbidden regions block progress in every legal direction. In other words, the program is deadlocked because each thread is waiting for a V operation that will never occur.

Deadlock is an especially difficult problem because it is not always predictable: some lucky execution trajectories will skirt the deadlock region, while others will be trapped by it.
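A minimal sketch of such a deadlock, with two binary semaphores acquired in opposite orders (error handling omitted):

```c
#include <pthread.h>
#include <semaphore.h>

sem_t s, t; /* two binary semaphores, both initialized to 1 */

/* Thread 1 takes s then t; thread 2 takes t then s. If each grabs its
 * first semaphore before the other's second P, both block forever. */
void *thread1(void *vargp) {
    sem_wait(&s); /* P(s) */
    sem_wait(&t); /* P(t): blocks forever if thread 2 holds t */
    sem_post(&t); /* V(t) */
    sem_post(&s); /* V(s) */
    return NULL;
}

void *thread2(void *vargp) {
    sem_wait(&t); /* P(t): opposite order -- the bug */
    sem_wait(&s); /* P(s): blocks forever if thread 1 holds s */
    sem_post(&s);
    sem_post(&t);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    sem_init(&s, 0, 1);
    sem_init(&t, 0, 1);
    pthread_create(&t1, NULL, thread1, NULL);
    pthread_create(&t2, NULL, thread2, NULL);
    pthread_join(t1, NULL); /* may never return */
    pthread_join(t2, NULL);
    return 0;
}
```

The standard remedy is the lock ordering rule: if every thread acquires its semaphores in the same global order and releases them in the reverse order, the program is deadlock-free.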

