
Information Security System Design Foundation 13th Week Study Summary

"Learning time: 5 hours"

"Learning content: Chapter 12 -- Concurrent Programming"

First, textbook knowledge points
1. Concurrency
    • Concept: whenever logical control flows overlap in time, we call it concurrency.
    • Uses:
      • Accessing slow devices (such as I/O devices)
      • Interacting with humans: each operation a user requests is handled by a separate concurrent logical flow.
      • Reducing latency by deferring work
      • Serving multiple network clients
    • Three basic approaches to writing concurrent programs:
      • Processes: each logical control flow is a process, scheduled and maintained by the kernel. Processes have separate address spaces, so flows that want to communicate must use an explicit interprocess communication (IPC) mechanism.
      • I/O multiplexing: the application explicitly schedules its own logical flows in the context of a single process. Because the program is a single process, all flows share the same address space.
      • Threads: logical flows running in the context of a single process, scheduled by the kernel like processes, but sharing one virtual address space like I/O-multiplexed flows.
2. Constructing a concurrent server with processes
    1. (Assume one server and two clients, with the server always listening.) The server listens on descriptor 3; client 1 sends a connection request, and the server obtains connected descriptor 4;
    2. The server forks a child process to handle the connection. The child receives a complete copy of the parent's descriptor table. At this point the two processes must drop the descriptors they no longer need: the parent closes its copy of connected descriptor 4, and the child closes its copy of listening descriptor 3;
    3. The server accepts a new connection request from client 2, obtains connected descriptor 5, and forks another child process;
    4. The server continues waiting for requests while the two child processes service their client connections concurrently.
    5. Code

"The connection to the client is not terminated until both the parent's and the child's copies of the connected descriptor are closed."

3. Concurrent programming based on I/O multiplexing
    • Note: if a server must respond both to the user's keyboard input and to client connection requests, it faces the problem of which event to wait for first. The select function solves this: it asks the kernel to suspend the process, returning control to the application only after one or more I/O events have occurred.
    • The select function

      • Prototype (simplified): int select(int n, fd_set *fdset, NULL, NULL, NULL); returns the nonzero count of ready descriptors, or -1 on error
      • Example:
      • Description: the select function takes two inputs: a descriptor set called the read set, and the cardinality n of that set (the largest descriptor number plus one). select blocks until at least one descriptor in the read set is ready for reading. A descriptor is ready for reading if and only if a request to read one byte from it would not block. As a side effect, select modifies the read set on every call to indicate its ready subset, so the set must be reinitialized before each call.
4. A concurrent echo server based on I/O multiplexing
    • Explanation:
      • Summary: the server uses the select function to detect input events. When a connected descriptor becomes ready for reading, the server executes a state transition for the corresponding state machine, that is, it reads a text line from the descriptor and writes it back. ("What is writeback?" See the explanation of the check_clients function in the code below.)
      • Step by step:
        • The set of active clients is maintained in a pool structure. (Note: for each client whose connection request has been accepted, the pool holds the state needed to read text lines from that connection.)
        • Call the init_pool function to initialize the pool, then enter an infinite loop.
        • In each iteration of the loop, the server calls select to detect two kinds of input events: a connection request arriving from a new client (the server opens the connection first, then calls the add_client function to add it to the pool); or a connected descriptor for an existing client becoming ready for reading. (Because this is an infinite loop, at the top of every iteration the server copies the set of active client descriptors into the ready set, then checks whether a new client has arrived, adding it to the pool if so, and finally echoes a line of text back on each connected descriptor that is ready.)
5. State machine

I/O multiplexing can be used as the basis for concurrent event-driven programs, in which flows make progress as a result of certain events. Typically, each logical flow is modeled as a state machine.

A state machine can be reduced to a set of states, input events, and transitions (each transition maps a state and an input event to a state). A self-loop is a transition between a state and itself.

A state machine is usually drawn as a directed graph, where nodes represent states, arcs represent transitions, and the label on an arc represents an input event. For each new client k, a concurrent server based on I/O multiplexing creates a new state machine s_k and associates it with connected descriptor d_k.

6. Thread-based concurrent programming
    • A thread is a logical flow that runs in the context of a process and is automatically scheduled by the kernel. Each thread has its own thread context, including a unique integer thread ID, a stack, a stack pointer, and so on. All threads running in a process share that process's address space.
    • Thread execution model
      • Each process begins life as a single thread called the main thread. At some point the main thread creates a peer thread, and from that point on the two threads run concurrently.
      • Differences from processes:
        • A thread context is much smaller than a process context, so thread context switches are much faster than process context switches;
        • Threads are not organized in a strict parent-child hierarchy the way processes are. The main thread differs from other threads only in that it is always the first thread to run in the process (that's all). An important consequence is that a thread can kill any of its peer threads.
7. POSIX Threads
    • POSIX threads (Pthreads) is a standard interface for manipulating threads from C programs, available on most Unix systems. It defines about 60 functions that allow a program to create, kill, and reap threads, share data safely with peer threads, and so on.
      • Description: the main thread creates a peer thread and then waits for it to terminate. The peer thread outputs "hello, world!\n" and terminates. After the main thread detects that the peer thread has terminated, it terminates the process by calling exit.
      • Analysis: the thread's code and local data are encapsulated in a thread routine. As line 2 shows, each thread routine takes a generic pointer as input and returns a generic pointer. On line 7 the main thread creates a peer thread; after the pthread_create function returns, the main thread and the peer thread run concurrently.
8. Progress Graphs
    • Concept: a progress graph models the execution of n concurrent threads as a trajectory through an n-dimensional Cartesian space. Axis k corresponds to the progress of thread k. Each point (I1, I2, ..., In) represents the state in which thread k has completed instruction Ik. The graph models instruction execution as a transition from one state to another, drawn as a directed edge from a point to an adjacent point.
9. Mutex

We want each thread to have mutually exclusive access to the shared variable while it executes the instructions in its critical section. This property is called mutual exclusion. In a progress graph, the region of state space formed by the intersection of the two critical sections is called the unsafe region (the points that merely border the unsafe region are not part of it).

10. Semaphores
    • Concept: a semaphore s is a global variable with a nonnegative integer value that can be manipulated only by two special operations:
    • Operations
      • P(s): if s is nonzero, P decrements s and returns immediately. If s is zero, the thread is suspended until s becomes nonzero and a V operation restarts it. After restarting, the P operation decrements s and returns control to the caller.
      • V(s): the V operation increments s. If any threads are blocked in a P operation waiting for s to become nonzero, V restarts exactly one of them, which then decrements s and completes its P operation.

Note: the test and decrement in P are indivisible, as are the restart and increment in V. Also, the definition of V does not specify the order in which waiting threads are restarted; when multiple threads are waiting on the same semaphore, you cannot predict which one a V operation will restart.

11. Using semaphores for mutual exclusion
    volatile int cnt = 0;
    sem_t mutex;
    sem_init(&mutex, 0, 1);   /* mutex = 1 */
    for (int i = 0; i < niters; i++) {
        P(&mutex);
        cnt++;
        V(&mutex);
    }
12. Using semaphores to schedule shared resources: the readers-writers problem
    • Overview: a collection of concurrent threads accesses a shared object. Some threads only read the object (readers), while others only modify it (writers). Writers must have exclusive access to the object, but readers may share it with an unlimited number of other readers.
    • Solution: semaphore w controls access to the critical sections that touch the shared object. Semaphore mutex protects access to the shared variable readcnt, which counts the number of readers currently inside the critical section. A writer locks w each time it enters the critical section and unlocks it when it leaves; this guarantees at most one writer in the critical section at any time. On the reader side, only the first reader to enter the critical section locks w, and only the last reader to leave unlocks it; readers in between enter and leave without touching w, so once one reader holds the lock the others pass through unimpeded.
    • Code:

Second, after-class exercises
1. Exercise 12.1

On line 33 of the code, after the parent process closes the connected descriptor, the child process can still use that descriptor to communicate with the client. Why?

When the parent process forks the child, the child gets a copy of the connected descriptor, which increments the reference count in the associated file table entry from 1 to 2. When the parent closes its copy, the count drops from 2 to 1. Since the kernel does not close a file until the reference count in its file table entry reaches 0, the child's end of the connection remains open.

2. Exercise 12.3

What happens if you type Ctrl-D (end of file) while the program above is blocked in its call to select?

If a request to read a byte from a descriptor would not block, the descriptor is ready for reading. This holds at end of file too: a descriptor at EOF is also ready for reading, because the read operation returns immediately with a return code of 0, which signals EOF. Therefore, typing Ctrl-D causes the select function to return.

Third, reflections

This is the last chapter of the book. Looking back on my earlier study, I find that many changes have happened in me without my noticing. For example, while studying this chapter I was already in the habit of consciously searching for unfamiliar technical terms on my own and looking back through earlier chapters for related material.
