Summary of Knowledge Points
Concurrent Programming: Outline
- Concurrency: Logical control flows overlap in time.
- Application-level Concurrency:
- Accessing slow I/O devices
- Interacting with people
- Reduce latency by postponing work
- Serving multiple network clients
- Parallel computing on multicore machines
- Process
- I/O multiplexing
- Thread
Process-based concurrent programming
The simplest way to construct a concurrent program is with processes. The figure below shows a process-based concurrent echo server:
(Figure: process-based concurrent echo server)
A few points to note:
- Because the server typically runs for a long time, it needs a SIGCHLD handler to reap the resources of terminated (zombie) child processes.
- The parent and child must each close their own copy of connfd to avoid leaking file descriptors.
- Because the reference count in the socket's file table entry does not reach zero until both the parent's and the child's copies of connfd are closed, the connection to the client is not terminated until then.
On the pros and cons of processes: for sharing state information between parent and child, processes have a very clean model: they share file tables but not user address spaces. Separate address spaces keep one process from accidentally overwriting another's virtual memory, but they also make it harder to share state; processes must use explicit IPC mechanisms, which tend to be slow.
Concurrent programming based on I/O multiplexing
Basic idea: use the select function to ask the kernel to suspend the process, returning control to the application only after one or more I/O events have occurred.
The select function manipulates sets of type fd_set, called descriptor sets. Logically, a descriptor set is a bit vector; descriptor k is a member of the set if and only if bit b_k = 1.
You can do only three things with descriptor sets: (1) allocate them; (2) assign one variable of this type to another; (3) modify and inspect them with the FD_ZERO, FD_SET, FD_CLR, and FD_ISSET macros.
Here, select takes two inputs: a descriptor set (the read set) and the cardinality of the read set (n).
Descriptor k is ready for reading if and only if a request to read one byte from that descriptor would not block.
(Figure: using select to wait for read events)
Concurrent event-driven server based on I/O multiplexing
The logical flows are modeled as state machines. A state machine consists of a set of states, a set of input events, and a set of transitions that map states and input events to states. Each transition maps an (input state, input event) pair to an output state. A self-loop is a transition between the same input and output state. A state machine begins in some initial state, and each input event triggers a transition from the current state to the next state.
The server uses I/O multiplexing, detecting the occurrence of input events with the select function. As each connected descriptor becomes ready for reading, the server executes the transition for the corresponding state machine; in this case, it reads a line of text from the descriptor and writes it back.
Thread-based concurrent programming
A thread is a logical flow that runs in the context of a process.
POSIX threads
(Figure: POSIX threads example program)
Creating Threads
(Figure: pthread_create)
Terminating a thread
(Figure: pthread_exit)
Reclaim Resources for terminated threads
(Figure: pthread_join)
Detach thread
(Figure: pthread_detach)
Initializing threads
(Figure: pthread_once)
A thread-based concurrent server
(Figure: thread-based concurrent echo server)
Shared variables in multithreaded programs
Thread memory model
A set of concurrent threads runs in the context of a process. Each thread has its own separate thread context, including a thread ID, stack, stack pointer, program counter, and general-purpose register values. Each thread shares the rest of the process context with the other threads. This includes the entire user virtual address space, which consists of read-only text (code), read/write data, the heap, and all shared library code and data areas. The threads also share the same set of open files. Registers are never shared, whereas virtual memory is always shared.
Mapping variables to memory
In threaded C programs, variables are mapped to virtual memory according to their storage class:
- Global variables: a single instance in the read/write area of virtual memory, shared by every thread.
- Local automatic variables: each thread's stack contains its own instance, so they are not shared (unless a thread passes out a pointer to one).
- Local static variables: like globals, a single instance in the read/write area, shared by every thread.
Shared variables
A variable v is shared if and only if one of its instances is referenced by more than one thread.
Synchronizing Threads with semaphores
Information Security System Design Fundamentals: Week 13 Study Summary