20145234 Huangfei, "Information Security System Design Basics", 13th Week Study Summary


Course Content Summary

Concurrency: in process-based concurrent programming, logical control flows overlap in time.

For example, the parent process accepts a client's connection request and then forks a new child process to serve that client.

    • Suppose we have two clients and a server that is listening for connection requests on a listening descriptor, and that the server accepts a connection request from client 1.
    • Process-based concurrent servers:
      1. Must include a SIGCHLD handler to reap terminated (zombie) child processes.
      2. Parent and child must each close their own copy of connfd.
      3. The connection to the client is not terminated until both the parent's and the child's copies of connfd are closed, because only then does the reference count in the socket's file table entry reach zero.
Concurrent programming based on I/O multiplexing
    • The select function asks the kernel to suspend the process, returning control to the application only after one or more I/O events have occurred.
    • The select function operates on sets of type fd_set, called descriptor sets, logically described as a bit vector of size n: b_{n-1}, ..., b_1, b_0.

    • Three things can be done with descriptor sets:

      1. Allocate them
      2. Assign one variable of this type to another
      3. Modify and inspect them with the FD_ZERO, FD_SET, FD_CLR, and FD_ISSET macros
    • Concurrent event-driven servers based on I/O multiplexing

      1. Event-driven designs give programmers more control over the behavior of the program than process-based designs do.

      2. An I/O multiplexing-based event-driven server runs in a single process context, so every logical flow can access the entire address space of the process, making it easy to share data between flows.

      3. Event-driven designs are often significantly more efficient than process-based designs because they do not require a process context switch to schedule a new flow.

Thread-based concurrent programming
  • This approach is a hybrid of the two methods above, combining characteristics of both.

    1. Like processes, threads are scheduled by the kernel, which identifies a thread by an integer ID.
    2. As with flows based on I/O multiplexing, multiple threads run in the context of a single process, sharing the entire contents of the process's virtual address space: code, data, heap, shared libraries, and open files.
  • A thread is a logical flow that runs in the context of a process.
  • Each thread has its own thread context, including:

    1. A unique integer thread ID
    2. Stack
    3. Stack pointer
    4. Program counter
    5. General-purpose registers
    6. Condition codes
  • All threads running in a process share the entire virtual address space of the process.
  • Main thread: each process begins its life cycle as a single thread called the main thread, the first thread to run in the process.
  • Peer thread: is created by the main thread and runs concurrently with the main thread.
  • Peer (thread) pool: A thread can kill any of its peer threads, or wait for any of its peer threads to terminate. Each peer thread can read and write the same shared data.
  • POSIX threads: Both the thread code and the local data are encapsulated in a thread routine. Each thread takes a generic pointer as input and returns a generic pointer.
  • Creating threads: a thread calls the pthread_create function to create a new thread.
  • Terminating a thread

    1. The thread routine of the top-level thread returns, terminating the thread implicitly.
    2. The thread calls the pthread_exit function to terminate explicitly. (If the main thread calls pthread_exit, it waits for all other peer threads to terminate and then terminates the main thread and the entire process, with return value thread_return.)
    3. A peer thread calls the Unix exit function, which terminates the process and all threads associated with the process.
    4. Another peer thread calls pthread_cancel with the current thread's ID as an argument to terminate the current thread.
  • A thread calls the pthread_join function to wait for another thread to terminate. This function blocks until thread tid terminates, assigns the generic (void *) pointer returned by the thread routine to the location pointed to by thread_return, and then reclaims all memory resources held by the terminated thread.
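A small sketch of the create/join pairing described above; `square` and `run_square_in_thread` are illustrative names, not from the course.

```c
#include <pthread.h>

/* Thread routine: takes a generic pointer, returns a generic pointer. */
static void *square(void *arg) {
    long n = (long)arg;
    return (void *)(n * n);       /* this value is collected by pthread_join */
}

/* Create a peer thread, wait for it to terminate, and return its result. */
long run_square_in_thread(long n) {
    pthread_t tid;
    void *result = NULL;
    if (pthread_create(&tid, NULL, square, (void *)n) != 0)
        return -1;
    pthread_join(tid, &result);   /* blocks until thread tid terminates */
    return (long)result;
}
```

Passing the argument and result through the cast long avoids heap allocation; real code often passes a pointer to a struct instead.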
  • Detaching threads: at any point in time, a thread is either joinable or detached.
  • Initialization: a thread calls pthread_once to initialize state associated with a thread routine.

Shared variables in multithreaded programs
    • Variables are mapped into virtual memory according to their storage class:
    1. Global variables: declared outside of any function. At run time, the read/write area of virtual memory contains exactly one instance of each global variable, which any thread can reference.
    2. Local automatic variables: declared inside a function without the static attribute. At run time, each thread's stack contains its own instances of any local automatic variables.
    3. Local static variables: declared inside a function with the static attribute. As with global variables, the read/write area of virtual memory contains exactly one instance of each local static variable declared in the program.
    • Shared variables: a variable v is shared if and only if one of its instances is referenced by more than one thread.
    • Progress graph

      A progress graph models the execution of n concurrent threads as a trajectory through an n-dimensional Cartesian space.

      Each instruction execution is modeled as a transition from one state to another.

      Legal transitions move to the right or up.

      The point (L1, S2) corresponds to the state where thread 1 has completed instruction L1 and thread 2 has completed instruction S2.

    • Semaphore: a mechanism for solving synchronization problems. A semaphore s is a global variable with a nonnegative integer value that can be manipulated only by two special operations, called P and V:

      1. P(s): if s is nonzero, P decrements s by 1 and returns immediately; if s is zero, P suspends the thread until s becomes nonzero.

      2. V(s): V increments s by 1 and, if any threads are blocked in a P operation waiting for s to become nonzero, restarts exactly one of them.

    • Using semaphores for mutual exclusion: associate each shared variable (or set of related shared variables) with a semaphore s (initially 1), and then surround the corresponding critical section with P(s) and V(s) operations.
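A minimal sketch of this mutex pattern, using the POSIX sem_wait/sem_post calls as P and V; `NITERS` and `run_counter_demo` are illustrative names.

```c
#include <pthread.h>
#include <semaphore.h>

#define NITERS 100000

static sem_t mutex;        /* binary semaphore, initialized to 1 */
static long counter = 0;   /* the shared variable the critical section protects */

static void *incr(void *arg) {
    (void)arg;
    for (int i = 0; i < NITERS; i++) {
        sem_wait(&mutex);  /* P(s): enter the critical section */
        counter++;
        sem_post(&mutex);  /* V(s): leave the critical section */
    }
    return NULL;
}

/* Run two threads that each increment the shared counter NITERS times.
 * With the semaphore, the final value is exactly 2 * NITERS. */
long run_counter_demo(void) {
    pthread_t t1, t2;
    counter = 0;
    sem_init(&mutex, 0, 1);   /* initial value 1: mutual exclusion */
    pthread_create(&t1, NULL, incr, NULL);
    pthread_create(&t2, NULL, incr, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    sem_destroy(&mutex);
    return counter;
}
```

Without the P/V pair around `counter++`, the two threads could interleave the load/increment/store instructions and lose updates.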
    • Using semaphores to schedule shared resources
      1. Producer-consumer problem: because both inserting and removing items involve updating shared variables, access to the buffer must be mutually exclusive, and accesses must also be scheduled (producers wait for empty slots, consumers wait for available items).

      2. Readers-writers problem: threads that modify an object are called writers, and threads that only read it are called readers. Writers must have exclusive access to the object, while readers may share the object with an unlimited number of other readers.
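A sketch of a bounded buffer for the producer-consumer pattern, using the slots/items/mutex semaphore arrangement. The `sbuf_t` naming follows the textbook's sbuf package, simplified here to hold ints.

```c
#include <pthread.h>
#include <semaphore.h>
#include <stdlib.h>

typedef struct {
    int *buf;      /* buffer array */
    int n;         /* capacity */
    int front;     /* buf[(front+1) % n] is the first item */
    int rear;      /* buf[rear % n] is the last item */
    sem_t mutex;   /* protects access to buf */
    sem_t slots;   /* counts available (empty) slots */
    sem_t items;   /* counts available items */
} sbuf_t;

void sbuf_init(sbuf_t *sp, int n) {
    sp->buf = calloc(n, sizeof(int));
    sp->n = n;
    sp->front = sp->rear = 0;
    sem_init(&sp->mutex, 0, 1);
    sem_init(&sp->slots, 0, n);   /* initially n empty slots */
    sem_init(&sp->items, 0, 0);   /* initially zero items */
}

void sbuf_insert(sbuf_t *sp, int item) {
    sem_wait(&sp->slots);                       /* wait for a free slot */
    sem_wait(&sp->mutex);                       /* lock the buffer */
    sp->buf[(++sp->rear) % sp->n] = item;       /* insert at the rear */
    sem_post(&sp->mutex);                       /* unlock the buffer */
    sem_post(&sp->items);                       /* announce a new item */
}

int sbuf_remove(sbuf_t *sp) {
    sem_wait(&sp->items);                       /* wait for an item */
    sem_wait(&sp->mutex);                       /* lock the buffer */
    int item = sp->buf[(++sp->front) % sp->n];  /* remove from the front */
    sem_post(&sp->mutex);                       /* unlock the buffer */
    sem_post(&sp->slots);                       /* announce a free slot */
    return item;
}
```

The slots and items semaphores do the scheduling (producers block when the buffer is full, consumers when it is empty), while mutex provides the mutual exclusion.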

    • Concurrent servers based on prethreading: the main thread accepts connections and a fixed pool of worker threads services them.
    • Using threads for parallelism: on multicore processors, threads can run truly in parallel.
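A sketch of thread-level parallelism: summing 0..N-1 with each peer thread accumulating into a private slot of a results array, so no synchronization is needed. `parallel_sum` and `psum` are illustrative names.

```c
#include <pthread.h>

#define NTHREADS 4
#define N 1000000L

static long psum[NTHREADS];   /* each thread writes only its own slot: no races */

static void *sum_range(void *arg) {
    long id = (long)arg;
    long lo = id * (N / NTHREADS), hi = lo + N / NTHREADS;
    long s = 0;
    for (long i = lo; i < hi; i++)
        s += i;                /* accumulate in a local, then publish once */
    psum[id] = s;
    return NULL;
}

/* Sum 0..N-1 with NTHREADS peer threads; returns the total. */
long parallel_sum(void) {
    pthread_t tid[NTHREADS];
    for (long i = 0; i < NTHREADS; i++)
        pthread_create(&tid[i], NULL, sum_range, (void *)i);
    long total = 0;
    for (int i = 0; i < NTHREADS; i++) {
        pthread_join(tid[i], NULL);
        total += psum[i];      /* combine partial sums after all threads finish */
    }
    return total;
}
```

Giving each thread a private accumulator avoids the heavy synchronization cost that a single semaphore-protected shared sum would incur on every iteration.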

Other concurrency issues
    • Classes of thread-unsafe functions

      1. Functions that do not protect shared variables
      2. Functions that keep state across multiple invocations
      3. Functions that return a pointer to a static variable
      4. Functions that call thread-unsafe functions
    • Reentrant functions: reference no shared data when called by multiple threads. (Reentrant functions are a proper subset of thread-safe functions.)
      1. Explicitly reentrant: all function arguments are passed by value (no pointers), and all data references are to local automatic stack variables, with no references to static or global variables.

      2. Implicitly reentrant: some arguments are pointers, but the calling threads carefully pass pointers to nonshared data.
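A small contrast between a class-2 thread-unsafe function and a reentrant counterpart; `unsafe_counter` and `reentrant_counter` are hypothetical names for illustration.

```c
/* Class-2 thread-unsafe: state persists in a static variable that is
 * shared by every thread that calls this function. */
int unsafe_counter(void) {
    static int calls = 0;
    return ++calls;
}

/* Reentrant version: the caller owns the state, so threads that pass
 * pointers to their own (nonshared) counters never interfere. */
int reentrant_counter(int *calls) {
    return ++*calls;
}
```

The standard library uses the same pattern: rand keeps hidden static state, while rand_r takes a caller-supplied seed pointer and is reentrant.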

    • Races: a race occurs when the programmer assumes that threads will follow some particular trajectory through the execution state space, forgetting that a threaded program must work correctly for any feasible trajectory.
    • Deadlock: A set of threads is blocked, waiting for a condition that will never be true.

