20135223 He Weizin: Information Security System Design Basics, Week 13 Study Summary


Learning tasks:

1. Master three concurrency models: processes, threads, and I/O multiplexing
2. Master thread control and the related system calls
3. Master thread synchronization with mutexes and the related system calls

1. The meaning of concurrency
    • Concept: whenever logical control flows overlap in time, they are said to run concurrently.
    • Why it matters:
      • Accessing a slow device (such as an I/O device): the CPU can issue a request to the slow device and then switch to other useful work while waiting, instead of sitting idle.
      • Interacting with people: each user request is served by a separate logical flow, so the user is not blocked from other programs while an operation is in progress.
      • Reducing latency by deferring work
      • Serving multiple network clients
    • Three ways to build concurrent programs:
      • Processes: each logical control flow is a process, scheduled and maintained by the kernel. Each process has its own private virtual address space, so processes must use an explicit interprocess communication mechanism to share data.
      • I/O multiplexing: the application schedules its own logical flows in the context of a single process. Because everything runs in one process, all flows share the same address space, and data moves between flows within that shared space.
      • Threads: logical flows that are scheduled by the kernel like processes, but that share a single virtual address space like I/O multiplexing flows.
2. Constructing a concurrent server with processes
    1. (Assume one server and two clients, with the server always listening.) The server listens on descriptor 3. Client 1 sends a connection request, and the server gets back connected descriptor 4;
    2. The server forks a child process to handle the connection. The child inherits a complete copy of the parent's descriptor table, so the two processes must tidy up: the parent closes its copy of connected descriptor 4, and the child closes its copy of listening descriptor 3;
    3. The server accepts a new connection request from client 2, gets back connected descriptor 5, and forks another child process;
    4. The parent continues to wait for requests while the two child processes serve their clients concurrently.

The pros and cons of process-based concurrency programming

    • Advantage: one process cannot accidentally overwrite the virtual memory of another process.
    • Cons: separate address spaces make it harder for processes to share state. To share information they must use an explicit IPC (interprocess communication) mechanism, and process control and IPC are expensive, so this approach tends to be slow.

The advantages and disadvantages of concurrent programming based on I/O multiplexing technology

    • Advantages:
      • It gives programmers more control over program behavior than process-based designs do.
      • Each logical flow can access the entire address space of the process, which makes it easy to share data between flows.
    • Cons: coding is complex.

Threads and Thread Pools

Threads are not organized in a strict parent-child hierarchy. The threads associated with a process form a pool of peers, independent of which thread created which.

The main thread differs from other threads only in that it is always the first one running in the process. In the peer pool, a thread can kill any of its peer threads, or wait for any of its peers to terminate. In addition, each peer thread can read and write the same shared data.

POSIX threads

POSIX threads (Pthreads) is a standard interface for manipulating threads from C programs.

    • Creating Threads

      A thread creates additional threads by calling the pthread_create function.

      The pthread_create function creates a new thread and runs the thread routine f, with input argument arg, in the context of the new thread. The attr argument can be used to change the default attributes of the newly created thread. When pthread_create returns, the parameter tid contains the ID of the new thread. A thread can obtain its own thread ID by calling the pthread_self function.


    • Terminating a thread

      A thread can be terminated in the following ways:

      • The thread terminates implicitly when its top-level thread routine returns.
      • The thread terminates explicitly by calling the pthread_exit function. If the main thread calls pthread_exit, it waits for all other peer threads to terminate, and then terminates the main thread and the entire process, with return value thread_return.

      • Some peer thread calls the Unix exit function, which terminates the process and all threads associated with it.
      • Another peer thread terminates the current thread by calling the pthread_cancel function with the current thread's ID as an argument.

    • Reclaim Resources for terminated threads

      A thread waits for another thread to terminate by calling the pthread_join function.

      The pthread_join function blocks until thread tid terminates, assigns the (void *) pointer returned by the thread routine to the location pointed to by thread_return, and then reaps any memory resources held by the terminated thread.

    • Detach thread

      At any point in time, a thread is either joinable or detached.

      • A joinable thread can be reaped and killed by other threads; its memory resources are not freed until it is reaped by another thread.
      • A detached thread cannot be reaped or killed by other threads; its memory resources are freed automatically by the system when it terminates.

Detach thread

At any point in time, a thread is either joinable or detached.

1. Joinable threads
    • Can be reaped and killed by other threads
    • Until it is reaped, its memory resources are not freed
    • Each joinable thread is either reaped by another thread or detached by a call to the pthread_detach function
2. Detached threads
    • Cannot be reaped or killed by other threads
    • Memory resources are freed automatically by the system when the thread terminates
3. The pthread_detach function

    #include <pthread.h>
    int pthread_detach(pthread_t tid);
    // Returns 0 if OK, nonzero on error

This function detaches the thread tid.

A thread can detach itself by calling pthread_detach with pthread_self() as the argument.

Each peer thread should detach itself before it begins processing its request, so that the system can reclaim its memory resources after the thread terminates.

Initializing threads: the pthread_once function

    #include <pthread.h>
    pthread_once_t once_control = PTHREAD_ONCE_INIT;
    int pthread_once(pthread_once_t *once_control, void (*init_routine)(void));
    // Always returns 0
Considerations for thread-based concurrent servers

1. How do we pass the connected descriptor to a peer thread when calling pthread_create?

Passing pointers.

2. Competition issues?

See the later section on races.

3. Avoid memory leaks?

Each thread must be detached so that its memory resources can be reclaimed when it terminates.

Shared variables in multithreaded programs

A variable is shared if and only if multiple threads reference some instance of the variable.

1. The thread memory model

The point to keep in mind:

Registers are never shared, whereas virtual memory is always shared.

2. Mapping variables to memory

3. Shared variables

A variable v is shared if and only if one of its instances is referenced by more than one thread.

Synchronizing threads with semaphores

In general, there is no way to predict whether the operating system will schedule your threads in a correct order.

Hence the progress graph.

1. The progress graph

A progress graph models the execution of n concurrent threads as a trajectory through an n-dimensional Cartesian space, where the origin corresponds to the initial state in which no thread has completed an instruction.

When n = 2 the state space is simple: it is a familiar two-dimensional coordinate diagram, with each axis representing one thread and each transition represented as a directed edge.

Transition rules:
    • A legal transition moves to the right or up, i.e., one instruction in one thread completes
    • Two instructions cannot complete at the same time, i.e., diagonal moves are not allowed
    • A program cannot run backwards, i.e., transitions never move down or to the left

The execution history of a program is modeled as a trajectory through this state space.

Decomposition of the thread loop code:
    • H: the block of instructions at the head of the loop
    • L: the instruction that loads the shared variable cnt into thread i's register %eax
    • U: the instruction that updates (increments) %eax
    • S: the instruction that stores the updated value of %eax back into the shared variable cnt
    • T: the block of instructions at the tail of the loop
Several concepts:
    • Critical section: for thread i, the instructions L, U, and S that manipulate the shared variable cnt form a critical section with respect to cnt.
    • Unsafe region: the set of states formed by the intersection of the two critical sections
    • Safe trajectory: a trajectory that skirts around the unsafe region

These topics are covered in more detail in the operating systems course, for example:

Principles for using critical sections (mutual exclusion conditions)

  • Free entry: if the critical section is idle, a requesting process should be allowed to enter at once;
  • Busy waiting: at most one process may be inside the critical section at any time;
  • Choose one of many: if no process is in the critical section and several processes request entry at the same time, only one of them may enter; the others must wait;
  • Yielding wait: a process inside the critical section must not linger there waiting on some event, which would make other processes wait indefinitely outside;
    and no assumptions may be made about the number of concurrent processes or their relative execution speeds.
2. Semaphores

The principle of mutual exclusion with semaphores

  • Two or more processes cooperate through signals: a process can be forced to stop temporarily at some point (blocked, waiting) until it receives a "go ahead" signal (is woken up);
  • The variable that implements this signaling is called a semaphore. It is usually defined as a record variable s with two fields: an integer count, and a queue whose elements are the processes blocked waiting on the semaphore (FIFO).
  • Semaphore definition:

    type semaphore = record
        count: integer;
        queue: list of process
    end;
    var s: semaphore;

Two atomic operations are defined on a semaphore: P and V.

P (wait)

    wait(s):
        s.count := s.count - 1;
        if s.count < 0 then
        begin
            block the process;
            insert the process into the s.queue queue;
        end

V (signal)

    signal(s):
        s.count := s.count + 1;
        if s.count <= 0 then
        begin
            wake the process at the head of the queue;
            remove that process from the s.queue blocking queue;
        end

It is important to note that each semaphore must be initialized before it is used .

3. Using semaphores for mutual exclusion

1. Basic idea

Associate each shared variable (or each related set of shared variables) with a semaphore s, initialized to 1, and then surround the corresponding critical section with P and V operations.

2. Several concepts
    • Binary semaphore: a semaphore used in this way to protect shared variables is called a binary semaphore; its value is always 0 or 1.
    • Mutex: a binary semaphore whose purpose is to provide mutual exclusion
    • Locking: performing a P operation on a mutex
    • Unlocking: performing a V operation on a mutex
    • Counting semaphore: a semaphore used as a counter for a set of available resources
    • Forbidden region: because of the semaphore invariant, no actually feasible trajectory can include any state in the forbidden region.
3. Applications of wait(s)/signal(s)
  • Before entering the critical section, a process first executes the wait(s) primitive; if s.count < 0, the process calls the blocking primitive, blocks itself, and inserts itself into the s.queue queue;
  • Note that a blocked process consumes no processor time; it is not "busy waiting". It stays blocked until some process exiting the critical section executes the signal(s) primitive and wakes it up;
  • When another process performs the s.count + 1 operation inside the signal(s) primitive and finds s.count <= 0, meaning there are still processes blocked on the queue, it invokes the wake-up primitive, changes the first process in s.queue to the ready state, and moves it to the ready queue, where it prepares to execute the critical-section code.

In addition:

  • The wait operation is used to request a resource (or the right to use one); a process may block itself while executing the wait primitive;
  • The signal operation is used to release a resource (or return the right to use one); the process executing the signal primitive is responsible for waking up one blocked process.
4. Using semaphores to schedule shared resources

In other words, semaphores serve two purposes:

    • Implement mutex
    • Scheduling shared resources

Semaphores can be divided into mutex semaphores and resource semaphores.

  • A mutex semaphore is used to acquire or release the right to use a resource, and is usually initialized to 1;

  • A resource semaphore is used to request or return resource units; it can be initialized to a positive integer greater than 1, indicating the number of resource units available in the system.

1. The physical meaning of the semaphore
  • s.count > 0 is the number of processes that can still execute wait(s) without blocking (the number of available resource units). Each wait(s) operation means the process requests one unit of the resource.
  • s.count <= 0 means no resources are left, so a process requesting the resource is blocked. The absolute value of s.count then equals the number of processes waiting in the semaphore's blocking queue. Executing a signal operation means releasing one unit of the resource; if s.count <= 0 after the increment, there are still blocked processes in s.queue, so the first process in the queue is woken and moved to the ready queue.
2. Classic problems

The classic problems here are the producer-consumer problem and the readers-writers problem, both of which are described in detail in the operating systems course.

Other concurrency problems

1. Thread safety

A function is thread-safe if and only if it always produces correct results when called repeatedly from multiple concurrent threads.

There are four (disjoint) classes of thread-unsafe functions, with countermeasures:

    • Functions that do not protect shared variables: protect the shared variables with synchronization operations such as P and V.
    • Functions that keep state across multiple invocations: rewrite them so they use no static data.
    • Functions that return a pointer to a static variable: either (1) rewrite the function, or (2) use the lock-and-copy technique.
    • Functions that call thread-unsafe functions: refer to the previous three classes.
2. Reentrancy

A function is reentrant if it references no shared data when called by multiple threads.

1. Explicitly reentrant:

All function arguments are passed by value (there are no pointers), and all data references are to local automatic stack variables, not to static or global variables.

2. Implicitly reentrant:

Some arguments may be pointers, but the calling threads carefully pass pointers to non-shared data.

3. Using existing library functions in threaded programs

In short, use the reentrant versions of thread-unsafe library functions; their names end with the suffix _r.

4. Races

1. Why races occur:

A race occurs when the correctness of a program depends on one thread reaching point x in its control flow before another thread reaches point y. In other words, the programmer assumes that the threads will take some particular trajectory through the execution state space, forgetting the golden rule that a threaded program must work correctly for any feasible trajectory.

2. How to eliminate the race:

Dynamically allocate a separate block for each integer ID, and pass the thread routine a pointer to that block.

5. Deadlock

A set of threads is blocked, each waiting for a condition that will never be true.

Ways to handle deadlock:

a. Prevent deadlock from occurring (for example, follow a mutex lock-ordering rule: every thread acquires the mutexes in the same global order);

b. Allow deadlock to occur, then detect it and recover.

Resources:

1. Textbook, Chapter 11: "Network Programming"

2. Textbook, Chapter 12: "Concurrent Programming"

