Chapter 11: Network Programming
Section 1: The Client-Server Programming Model
- Every network application is based on the client-server model. In this model, an application consists of a server process and one or more client processes. The server manages a resource and provides some service to its clients by manipulating that resource.
- An FTP server manages a set of disk files that it stores and retrieves for clients. Similarly, an e-mail server manages spool files that it reads and updates for clients.
- The fundamental operation in the client-server model is the transaction.
A transaction consists of four steps:
1) When a client needs a service, it initiates a transaction by sending a request to the server. For example, when a Web browser needs a file, it sends a request to a Web server.
2) The server receives the request, interprets it, and manipulates its resource in the appropriate way. For example, when a Web server receives a request from a browser, it reads a disk file.
3) The server sends a response to the client and waits for the next request. For example, the Web server sends the file back to the client.
4) The client receives the response and processes it. For example, after a Web browser receives a page from the server, it displays it on the screen.
Section 2: Networks
- Clients and servers typically run on different hosts and communicate through the hardware and software resources of a computer network. Networks are complex systems, and here we only scratch the surface. Our goal is to give you a workable mental model from a programmer's point of view. To a host, the network is just another I/O device, serving as a source and sink of data. An adapter plugged into an expansion slot on the I/O bus provides the physical interface to the network. Data received from the network is copied from the adapter across the I/O and memory buses into memory, typically via DMA (direct memory access). Similarly, data can be copied from memory to the network.
- An Ethernet segment consists of cables and a hub. Each cable has the same maximum bit bandwidth, and the hub indiscriminately copies every bit received on one port to all of the other ports. Thus every host can see every bit.
- Each Ethernet adapter has a globally unique 48-bit address, stored in non-volatile memory on the adapter. A host can send a chunk of bits called a frame to any other host on the segment; the frame is visible to every host adapter, but only the destination host actually reads it.
- A bridged Ethernet connects multiple Ethernet segments with cables and bridges to form a larger LAN. The cables connecting the bridges can run at different rates (e.g., 1 Gb/s between bridges, 100 Mb/s between a bridge and a hub).
- Bridge function: connecting different network segments. When host A sends data to host B on the same segment, the frame arrives at the bridge's input port and the bridge discards it without forwarding. When A sends data to host C on a different segment, the bridge copies the frame to the port connected to C's segment.
A LAN consists of hubs, bridges, and the cables that connect them.
Section 3: The Global IP Internet
- The global IP Internet is the most famous and successful implementation of an internet. It has existed in one form or another since 1969. While the internal architecture of the Internet is complex and constantly changing, the organization of client-server applications has remained remarkably stable since the early 1980s. A figure in the text shows the basic hardware and software organization of an Internet client-server application. Each Internet host runs software that implements the TCP/IP protocol, which is supported by almost every modern computer system. Internet clients and servers communicate using a mix of socket interface functions and Unix I/O functions. The socket functions are typically implemented as system calls that trap into the kernel and call various kernel-mode TCP/IP functions.
IP Address
- An IP address is a 32-bit unsigned integer.
- Network programs store IP addresses in the IP address structure (struct in_addr) shown in the text.
- Because different Internet hosts can have different host byte orders, TCP/IP defines a uniform network byte order (big-endian) for any integer data item, such as an IP address, that is carried across the network in a packet header. Addresses stored in an IP address structure are always in (big-endian) network byte order, even if the host byte order is little-endian.
Internet domain name
- IP addresses are what Internet clients and servers use to communicate with each other. However, large integers are hard for people to remember, so the Internet also defines a more human-friendly set of domain names, along with a mechanism for mapping domain names to IP addresses. A domain name is a sequence of words (letters, digits, and dashes) separated by periods.
- The set of domain names forms a hierarchy in which each domain name encodes its position in the hierarchy. An example makes this easy to understand. Part of the domain name hierarchy is shown in the text. The hierarchy can be represented as a tree: the nodes of the tree represent domain names, formed by the path back to the root, and a subtree is called a subdomain. The first level in the hierarchy is an unnamed root node. The next level is a collection of first-level domain names defined by a non-profit organization, ICANN (the Internet Corporation for Assigned Names and Numbers). Common first-level domain names include com, edu, gov, org, and net. Second-level domain names are assigned on a first-come first-served basis by various authorized agents of ICANN. Once an organization has received a second-level domain name, it can create any new domain name within its subdomain.
Internet connection
- Internet clients and servers communicate by sending and receiving streams of bytes over connections. A connection is point-to-point in the sense that it connects a pair of processes. It is full-duplex in that data can flow in both directions. And it is reliable in that (barring some catastrophic failure, such as a careless backhoe operator cutting the cable) the stream of bytes sent by the source process is eventually received by the destination process in the order it was sent.
- A socket is an endpoint of a connection. Each socket has a corresponding socket address, which consists of an Internet address and a 16-bit integer port, denoted address:port. When a client initiates a connection request, the port in the client's socket address is assigned automatically by the kernel and is called an ephemeral port. The port in the server's socket address, however, is usually a well-known port permanently associated with the service. For example, a Web server typically uses port 80, and an e-mail server uses port 25.
Section 4: The Sockets Interface
Socket address structures
- From the Unix kernel's point of view, a socket is an endpoint of communication.
Socket function
- The client and server use the socket function to create a socket descriptor.
- Here AF_INET indicates that we are using the Internet, and SOCK_STREAM indicates that the socket will be an endpoint of an Internet connection. The clientfd descriptor returned by socket is only partially opened and cannot yet be used for reading and writing. How the socket gets fully opened depends on whether we are the client or the server.
Connect function
- A client establishes a connection to a server by calling the connect function.
- The connect function attempts to establish an Internet connection to the server at socket address serv_addr, where addrlen is sizeof(sockaddr_in). connect blocks until either the connection is successfully established or an error occurs. If successful, the sockfd descriptor is now ready for reading and writing, and the resulting connection is characterized by a socket pair.
The open_clientfd function
The bind function
The listen function
- The listen function converts sockfd from an active socket into a listening socket that can accept connection requests from clients. The backlog argument is a hint about the number of outstanding connection requests the kernel should queue up before it starts refusing requests.
The open_listenfd function
The accept function
Section 5: Web Servers
Web basics
- Web clients and servers interact using a text-based application-level protocol called HTTP.
- HTTP is a simple protocol. A Web client (i.e., a browser) opens an Internet connection to a server and requests some content. The server responds with the requested content and then closes the connection. The browser reads the content and displays it on the screen.
- The main difference between Web services and conventional file retrieval services is that Web content can be written in HTML. An HTML program (page) contains instructions (tags) that tell the browser how to display the various text and graphical objects on the page.
Web content
The Web server provides content to the client in two different ways:
- It fetches a disk file and returns its contents to the client (serving static content).
- It runs an executable file and returns its output to the client (serving dynamic content).
HTTP transaction
- HTTP request
- HTTP response
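A concrete request/response pair (common-knowledge HTTP/1.1 shape; the host name, content length, and body are placeholders) looks like:

```
Client request:
    GET /index.html HTTP/1.1        request line: method URI version
    Host: www.example.com           request header
                                    empty line terminates the headers

Server response:
    HTTP/1.1 200 OK                 response line: version status-code status-message
    Content-Type: text/html        response headers
    Content-Length: 138

    <html> ... </html>              response body
```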
Serving dynamic content
- How the client passes program parameters to the server
- How the server passes parameters to child processes
- How the server passes additional information to child processes
- Where the child process sends its output
Chapter 12: Concurrent Programming
Three basic approaches to building concurrent programs: processes, I/O multiplexing, and threads.
Section 1: Process-Based Concurrent Programming
The simplest way to build a concurrent program is with processes.
Common functions are as follows:
1. The parent process must close its copy of the connected descriptor (and the child must close the copies it does not need)
2. You must include a SIGCHLD handler to reap the resources of dead child processes
3. The file table is shared between parent and child, but their user address spaces are not, as discussed earlier in the notes on processes
Section 2: Concurrent Programming Based on I/O Multiplexing
The idea is to use the select function to ask the kernel to suspend the process, returning control to the application only after one or more I/O events have occurred.
The select function manipulates sets of type fd_set, known as descriptor sets. Logically, a descriptor set is a bit vector of size n, where each bit b[k] corresponds to descriptor k; descriptor k is a member of the set if and only if b[k] = 1.
Three things you can do with descriptor sets:
- Allocate them
- Assign a variable of this type to another variable
- Modify and inspect them with the FD_ZERO, FD_SET, FD_CLR, and FD_ISSET macros
Concurrent event-driven server based on I/O multiplexing
Event-driven design: model each logical flow as a state machine.
State machine:
- States
- Input events
- Transitions
To build intuition for state machines, refer to the state transition diagrams drawn in the EDA course.
The overall flow is:
- The select function detects an input event
- The add_client function creates a new state machine
- The check_clients function performs state transitions (echoing an input line, in the textbook example) and deletes the state machine when it is finished
Several functions to be aware of:
- init_pool: initialize the pool of clients
- add_client: add a new client to the pool of active clients
- check_clients: echo a text line from each ready connected descriptor
Pros and cons of I/O multiplexing
1. Advantages
- Compared to the process-based design, the programmer is given more control over the program
- It runs in the context of a single process, so every logical flow can access the entire address space of the process, which makes shared data easy to implement
- You can use GDB to debug
- Efficient
2. Disadvantages
- Complex coding
- Cannot take full advantage of multi-core processors
Section 3: Thread-Based Concurrent Programming
This approach mixes characteristics of the two approaches above.
Thread: a logical flow that runs in the context of a process.
Each thread has its own thread context :
- A unique integer thread ID (TID)
- Stack
- Stack pointer
- Program counter
- General purpose Registers
- Condition code
Thread execution model
1. The main thread
Each process begins life as a single thread called the main thread, which differs from other threads only in that it is always the first thread to run in the process.
2. Peer threads
At some point, the main thread creates a peer thread, and from then on the two threads run concurrently.
Each peer thread can read and write the same shared data.
3. Why control passes from the main thread to a peer thread:
- The main thread performs a slow system call, such as read or sleep
- Interrupted by the system interval timer
The switch happens via a context switch.
After a peer thread executes for a while, control passes back to the main thread, and so on.
4. Differences between threads and processes
- The context switch of a thread is much faster than a process
- Organization:
- Process: Strict parent-child hierarchy
- Threads: the threads associated with a process form a pool of peers, independent of the threads created by other processes. A thread can kill any of its peers, or wait for any of its peers to terminate.
POSIX threads
Pthreads (POSIX threads) is a standard interface for manipulating threads from C programs. The basic usage:
- The code of the thread and the local data are encapsulated in a thread routine
- Each thread routine takes a generic pointer as input and returns a generic pointer.
Creating threads
1. The pthread_create function
Creates a new thread, passing it the input argument arg, and runs the thread routine f in the context of the new thread.
The attr argument is usually NULL (the default attributes).
The tid parameter receives the ID of the newly created thread.
2. Viewing the thread ID: the pthread_self function
Returns the caller's thread ID (TID).
Terminating threads
1. Ways a thread can terminate:
- Implicit termination: the top-level thread routine returns
- Explicit termination: the thread calls the pthread_exit function. If the main thread calls pthread_exit, it waits for all other peer threads to terminate before terminating the main thread and the entire process, with return value thread_return
- Some peer thread calls the Unix exit function, which terminates the process and all threads associated with it
- Another peer thread terminates the current thread by calling pthread_cancel with the current thread's ID as the argument
2. The pthread_exit function
3. The pthread_cancel function
Reaping terminated threads
The pthread_join function blocks until thread tid terminates, assigns the (void *) pointer returned by the thread routine to the location pointed to by thread_return, and then reaps all memory resources held by the terminated thread.
Detach thread
At any point in time, a thread is either joinable or detached.
1. Joinable threads
- Can be reaped and killed by other threads
- Until it is reaped, its memory resources are not freed
- Every joinable thread is either reaped by another thread or detached by a call to the pthread_detach function
2. Detached threads
- Cannot be reaped or killed by other threads
- Its memory resources are freed automatically by the system when it terminates
3. The pthread_detach function
A thread can detach itself by calling pthread_detach with pthread_self() as the argument.
Section 4: Shared Variables in Multi-threaded Programs
1. The thread memory model
Registers are never shared, and virtual memory is always shared.
2. Mapping variables to memory
3. Shared variables
A variable v is shared if and only if one of its instances is referenced by more than one thread.
Section 5: Synchronizing Threads with Semaphores
1. Progress graphs
A progress graph models the execution of n concurrent threads as a trajectory through an n-dimensional Cartesian space, where the origin corresponds to the initial state in which no thread has completed an instruction.
When n = 2, the state space is simple: it is the familiar two-dimensional coordinate plane, where each axis corresponds to one thread, and a transition is represented as a directed edge.
Transition rules:
- A legal transition moves right or up, i.e., one instruction in one thread completes
- Two instructions cannot complete at the same time, i.e., diagonal moves are not allowed
- A program cannot run backward, i.e., moves down or to the left cannot occur
The execution history of a program is modeled as a trajectory through the state space.
Decomposition of the thread loop code:
- H: the block of instructions at the head of the loop
- L: the instruction that loads the shared variable cnt into thread i's register %eax
- U: the instruction that updates (increments) %eax
- S: the instruction that stores the updated value of %eax back into the shared variable cnt
- T: the block of instructions at the tail of the loop
Several concepts
- Critical section: for thread i, the instructions L, U, and S that manipulate the contents of the shared variable form a critical section with respect to cnt
- Unsafe region: the states formed by the intersection of the two critical sections
- Safe trajectory: a trajectory that skirts the unsafe region
2. Semaphores
The principle of semaphore mutual exclusion:
Two atomic operations are defined on a semaphore: P and V.
P (wait): the process is blocked and enters the s.queue queue.
V (signal): wakes the process at the head of the queue, removing it from the s.queue blocked queue.
3. Using semaphores for mutual exclusion: applying wait(s)/signal(s)
- Before a process enters its critical section, it first executes the wait(s) primitive; if s.count < 0, the process calls the block primitive to block itself and inserts itself into the s.queue queue.
- Note that a blocked process consumes no processor time; this is not busy-waiting. It remains blocked until some process exiting its critical section executes the signal(s) primitive and wakes it up.
- When another process performs the s.count + 1 operation in the signal(s) primitive and finds s.count ≤ 0, meaning there are still blocked processes in the queue, it calls the wakeup primitive, changes the first process in s.queue to the ready state, and moves it to the ready queue so that it can go on to execute the critical-section code.
- The wait operation is used to request a resource (or the right to use one); a process may block itself while executing the wait primitive.
- The signal operation is used to release a resource (or return the right to use one); a process executing the signal primitive has the responsibility of waking up a blocked process.
4. Using semaphores to schedule shared resources
The semaphore has two functions:
- Implement mutex
- Scheduling shared resources
The physical meaning of a semaphore:
- s.count > 0 gives the number of processes that can still execute wait(s) without blocking (the number of available resource units). Each wait(s) operation means the process requests one unit of the resource.
- s.count ≤ 0 means no resource units are available, and a process requesting the resource blocks. In this case, the absolute value of s.count equals the number of processes blocked in the semaphore's wait queue. Executing a signal operation means releasing one unit of the resource; if after the increment s.count ≤ 0, there are still blocked processes in s.queue, so the first process in the queue is woken and moved to the ready queue.
Section 7: Other Concurrency Issues
1. Thread safety
A function is thread-safe if and only if it always produces correct results when called repeatedly from multiple concurrent threads.
Four disjoint classes of thread-unsafe functions, and countermeasures:
- Functions that do not protect shared variables: protect the shared variables with synchronization operations such as P and V
- Functions that keep state across multiple invocations: rewrite them so they use no static data
- Functions that return a pointer to a static variable: (1) rewrite the function, or (2) use the lock-and-copy technique
- Functions that call thread-unsafe functions: see the previous three classes
2. Reentrancy
A function is reentrant if it references no shared data when called by multiple threads.
1. Explicitly reentrant:
All function arguments are passed by value (no pointers), and all data references are to local automatic stack variables, not static or global variables.
2. Implicitly reentrant:
The calling threads carefully pass pointers that point only to non-shared data.
3. Races
1. Why races occur:
The correctness of the program depends on one thread reaching point x in its control flow before another thread reaches point y. That is, the programmer assumes the threads will take some particular trajectory through the execution state space, forgetting the golden rule that a threaded program must work correctly for any feasible trajectory.
2. How to eliminate the race:
Dynamically allocate a separate block for each integer ID, and pass the thread routine a pointer to that block.
4. Deadlocks
Approaches to dealing with deadlock:
A. Do not let deadlock occur:
- Static strategy: design an appropriate resource allocation algorithm so that deadlock can never occur (deadlock prevention)
- Dynamic strategy: when a process requests a resource, the system checks whether granting it could produce a deadlock, and refuses the allocation if it would (deadlock avoidance)
B. Let deadlock occur:
Processes request resources without restriction; the system periodically or on demand checks whether a deadlock has occurred, and resolves it when one is detected (deadlock detection and recovery).
Information Security System Design Foundation 13th Week Study Summary