Course Content Summary

Process-based concurrent programming: logical control flows are overlapped in time by running each flow in its own process.
For example, the server accepts a client's connection request in the parent process and then forks a new child process to serve each client.
- Assume we have two clients and one server; the server listens for connection requests on a listening descriptor. Now suppose the server accepts a connection request from client 1.
- Process-based concurrent servers:
- Need to install a SIGCHLD handler to reap terminated (zombie) child processes.
- The parent and child processes must each close their own copy of connfd.
- The connection to the client is terminated only when the reference count in the socket's file table entry drops to zero, which happens once both the parent's and the child's copies of connfd have been closed.
Concurrent programming based on I/O multiplexing
- Using the select function requires the kernel to suspend the process and return control to the application only after one or more I/O events have occurred.
The select function manipulates sets of type fd_set, called descriptor sets, which are logically modeled as bit vectors of size n: b_{n-1}, ..., b_1, b_0.
Descriptor sets can be manipulated in three ways:
- Allocate them
- Assign a variable of this type to another variable
- Use the FD_ZERO, FD_SET, FD_CLR, and FD_ISSET macros to modify and examine them
Concurrent event-driven server based on I/O multiplexing
Event-driven designs give programmers more control over the behavior of their programs than process-based designs do.
An event-driven server based on I/O multiplexing runs in a single process context, so every logical flow has access to the entire address space of the process, making it easy to share data between flows.
Event-driven designs are often much more efficient than process-based designs because they do not require a process context switch to schedule a new flow.
Thread-based concurrent programming
This is a hybrid of the two approaches above, combining characteristics of both:
- Like a process, a thread is scheduled by the kernel, which identifies it by an integer ID.
- As with flows based on I/O multiplexing, multiple threads run in the context of a single process, and therefore share the entire contents of the process's virtual address space: its code, data, heap, shared libraries, and open files.
- A thread is a logical flow that runs in the context of a process.
Each thread has its own thread context, including:
- A unique integer thread ID
- Stack
- Stack pointer
- Program counter
- General purpose Registers
- Condition code
- All threads running in a process share the process's entire virtual address space.
- Main thread: each process begins its life cycle as a single thread called the main thread, the first thread to run in the process.
- Peer thread: created by the main thread and runs concurrently with it.
- Peer (thread) pool: a thread can kill any of its peer threads, or wait for any of its peer threads to terminate. Every peer thread can read and write the same shared data.
- POSIX threads: both the thread's code and its local data are encapsulated in a thread routine. Each thread routine takes a generic (void *) pointer as input and returns a generic pointer.
- Creating threads: a thread calls the pthread_create function to create another thread.
Terminating a thread
- The thread routine of the top-level thread returns, terminating the thread implicitly.
- The thread calls the pthread_exit function to terminate explicitly. (If the main thread calls pthread_exit, it waits for all other peer threads to terminate, then terminates the main thread and the entire process, returning the value thread_return.)
- Some peer thread calls the Unix exit function, which terminates the process and all threads associated with the process.
- Another peer thread calls pthread_cancel with the current thread's ID as an argument to terminate the current thread.
- A thread calls the pthread_join function to wait for another thread to terminate. This function blocks until thread tid terminates, assigns the (void *) pointer returned by the thread routine to the location pointed to by thread_return, and then reclaims all memory resources held by the terminated thread.
- Detached threads: at any point in time, a thread is either joinable or detached.
Initialization: call the pthread_once function to initialize state associated with a thread routine.
Shared variables in multithreaded programs
- Variables are mapped into virtual memory according to their storage class:
- Global variables: declared outside of any function. At run time, the read/write area of virtual memory contains exactly one instance of each global variable, which any thread can reference.
- Local automatic variables: declared inside a function without the static attribute. At run time, each thread's stack contains its own instances of local automatic variables.
- Local static variables: declared inside a function with the static attribute. As with global variables, the read/write area of virtual memory contains exactly one instance of each local static variable declared in the program.
- Shared variables: a variable v is shared if and only if one of its instances is referenced by more than one thread.
Progress graph
A progress graph models the execution of n concurrent threads as a trajectory through an n-dimensional Cartesian space.
The execution of an instruction is modeled as a transition from one state to another.
Legal transitions move to the right or up.
The point (L1, S2) corresponds to the state in which thread 1 has completed instruction L1 and thread 2 has completed instruction S2.
Semaphore: a mechanism for solving synchronization problems. A semaphore s is a global variable with a nonnegative integer value that can be manipulated only by two special operations, called P and V:
P(s): if s is nonzero, P decrements s and returns immediately; if s is zero, P suspends the thread until s becomes nonzero and the thread is restarted by a V operation.
V(s): V increments s by 1 and, if any threads are blocked in a P operation waiting for s to become nonzero, restarts exactly one of them.
- Using semaphores for mutual exclusion: associate each shared variable (or set of related shared variables) with a semaphore s (initially 1), then surround the corresponding critical section with P(s) and V(s) operations.
- Using semaphores to dispatch shared resources
Producer-consumer problem: because both inserting and removing items involve updating shared variables, access to the buffer must be mutually exclusive; in addition, producers must wait for empty slots and consumers must wait for available items.
Readers-writers problem: a thread that modifies the object is called a writer, and a thread that only reads the object is called a reader. Writers must have exclusive access to the object, while a reader may share the object with an unlimited number of other readers.
- Concurrent server based on prethreading
Using threads to improve parallelism: on a multicore processor, threads can run truly in parallel.
Other concurrency issues
20145234 Huangfei, "Information Security System Design Basics" Week 13 Study Summary