In most cases these concepts come up around I/O operations: the computer is waiting for data to travel from disk, another storage device, or a network socket into the address space of the user process.
Roughly, the CPU first issues an I/O request; the file system (or another subsystem) then asks the relevant device to carry out the operation; finally, when the data has arrived in user space, an interrupt raises a completion flag. Between the moment the CPU issues the call and the moment it receives the completion flag there is a time gap. This gives us two important notions: the completion flag and the time gap. Synchronous vs. asynchronous is about how the completion flag is obtained, while blocking vs. non-blocking is about what happens during the time gap.
Synchronous and asynchronous: how the completion flag is obtained. If the caller polls to check whether the I/O operation has finished, that is synchronous; if the completion flag is delivered by a callback notification, that is asynchronous.
Blocking and non-blocking: what the CPU does during the time gap. If it handles nothing else in the meantime, the call is blocking; if it gets other work done, it is non-blocking.
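At the API level, the blocking/non-blocking distinction is visible directly on a socket. A minimal sketch, assuming a local `socketpair` stands in for any I/O channel (all names here are illustrative, not from the original post):

```python
import select
import socket

# A connected socket pair stands in for any I/O channel (demo only).
reader, writer = socket.socketpair()

# Non-blocking mode: if no data is ready, the call returns immediately
# with BlockingIOError instead of suspending the caller.
reader.setblocking(False)
try:
    reader.recv(1024)       # nothing has been written yet
    got_data = True
except BlockingIOError:
    got_data = False        # the CPU is free to do other work here

writer.send(b"hello")
select.select([reader], [], [], 1.0)  # wait until the data is readable
data = reader.recv(1024)

reader.close()
writer.close()
print(got_data, data)
```

In blocking mode the same `recv` would simply suspend the calling thread for the whole time gap.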
(In the figure from the original post, steps 1 and 2 represent polling the status of the I/O operation, and step 3 represents the I/O operation having completed.)
Think of an I/O call as two phases, A and B. Phase A is the CPU issuing the I/O call ("this phase is very fast"). Phase B is the device moving the data from where it lives to user space ("this phase varies widely with the amount of data and how far away the device holding it is"). The four concepts above all describe the process/thread's CPU state during phase B, while the data is in transit. With that in mind, consider the four combinations:
1. Synchronous blocking: during phase B the CPU polls continuously until it obtains the completion flag, so for that whole time it is blocked on this one I/O operation.
2. Synchronous non-blocking: phase B still uses polling to obtain the completion flag, but unlike the polling above, the CPU context-switches between adjacent polls to handle other tasks; hence synchronous but non-blocking.
3. Asynchronous blocking: there is no polling at all; when the I/O operation completes, a callback notifies the CPU ("that is step 3 in the figure"). During this phase, however, the CPU sleeps and handles no other tasks.
4. Asynchronous non-blocking: as above, there is no polling and a callback signals that the I/O operation is complete, but instead of sleeping the CPU handles other tasks during this phase.
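The four combinations above can be sketched with a simulated completion flag: a `threading.Event` stands in for the flag, and `time.sleep` for the device doing phase B (a toy model, not real device I/O):

```python
import threading
import time

def fake_io(done: threading.Event) -> None:
    """Simulates phase B: a device finishing an I/O request after a delay."""
    time.sleep(0.05)
    done.set()  # raise the completion flag

# 1. Synchronous blocking: poll the flag, doing nothing else meanwhile.
done = threading.Event()
threading.Thread(target=fake_io, args=(done,)).start()
while not done.is_set():
    pass  # busy-wait: blocked on this single I/O

# 2. Synchronous non-blocking: still polling, but doing other work
# between adjacent polls.
done = threading.Event()
threading.Thread(target=fake_io, args=(done,)).start()
other_work = 0
while not done.is_set():
    other_work += 1  # context-switch to another task between polls

# 3/4. Asynchronous: no polling -- the I/O side delivers the completion
# flag via a callback.
results = []
def fake_io_with_callback(callback) -> None:
    time.sleep(0.05)           # phase B
    callback("io finished")    # completion delivered by notification

t = threading.Thread(target=fake_io_with_callback, args=(results.append,))
t.start()
# While waiting, the caller may sleep (asynchronous blocking) or keep
# computing (asynchronous non-blocking).
t.join()
print(other_work, results)
```

The only difference between cases 3 and 4 is what the caller does between starting the I/O and the callback firing.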
In summary: Asynchronous non-blocking is the most efficient.
In practice, multithreading is often used to simulate the ideal asynchronous non-blocking mode: one main thread does the computation, while several other threads perform the I/O operations ("each of which may itself be any of the four modes above").
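A minimal sketch of that arrangement using Python's `concurrent.futures` (the request names and delays are made up for illustration): worker threads block on the I/O, and callbacks make the whole thing look asynchronous and non-blocking to the main thread.

```python
from concurrent.futures import ThreadPoolExecutor
import time

def blocking_io(name: str) -> str:
    """Stands in for a blocking read from disk or a socket."""
    time.sleep(0.05)
    return f"{name}: done"

results = []
with ThreadPoolExecutor(max_workers=3) as pool:
    # Worker threads perform the (blocking) I/O operations...
    futures = [pool.submit(blocking_io, f"req{i}") for i in range(3)]
    # ...and each delivers its result via a callback, so to the main
    # thread the I/O looks asynchronous and non-blocking.
    for f in futures:
        f.add_done_callback(lambda fut: results.append(fut.result()))
    # Meanwhile the main thread is free to compute.
    total = sum(range(1000))

# Leaving the with-block waits for all workers to finish.
print(total, sorted(results))
```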
Several common server models:
1. Synchronous: handle one request at a time; the remaining requests wait.
2. One process per request: start a process for each request ("no extra machinery needed, but limited by system resources").
3. One thread per request: start a thread for each request ("each thread occupies some memory, so this is limited by memory and can slow the server down") ("this is the model Apache uses").
4. Event-driven: Node and Nginx are event-driven and do not create a new thread per connection ("this eliminates the overhead of thread creation/teardown and thread context switching, so they can handle many more connections") ("Python's Twisted, Ruby's EventMachine, and Perl's AnyEvent are also event-driven, but have not been as successful").
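The core of such an event-driven loop can be sketched with Python's `selectors` module: a single thread registers interest in many channels and dispatches a callback when one becomes ready (socket pairs stand in for client connections here; this is a toy model of what node/Nginx do, not their actual implementation):

```python
import selectors
import socket

# One selector watches many channels; a single thread dispatches
# callbacks as channels become ready -- the node/Nginx style of loop.
sel = selectors.DefaultSelector()
received = []

def on_readable(conn: socket.socket) -> None:
    received.append(conn.recv(1024))

# Two socket pairs stand in for two client connections (demo only).
pairs = [socket.socketpair() for _ in range(2)]
for server_side, _client in pairs:
    server_side.setblocking(False)
    sel.register(server_side, selectors.EVENT_READ, on_readable)

pairs[0][1].send(b"first")
pairs[1][1].send(b"second")

# The event loop: wake only when some channel is ready, then dispatch
# the callback that was registered for it.
while len(received) < 2:
    for key, _mask in sel.select(timeout=1.0):
        key.data(key.fileobj)

for a, b in pairs:
    a.close()
    b.close()
sel.close()
print(sorted(received))
```

No thread is ever created per connection; adding a thousand more channels only adds entries to the selector.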
Note that high-concurrency ("Note 1") programs tend to adopt "synchronous non-blocking" rather than "multi-threaded synchronous blocking". With sensible scheduling of the different stages of each task, the level of concurrency can far exceed the level of parallelism. Keep in mind that under high concurrency, creating a thread per task is very expensive, which is why multi-threaded synchronous blocking is avoided.
Note 1: concurrency: the number of tasks in progress at the same time; parallelism: the number of physical resources (CPU cores, etc.) that can work simultaneously.
References:
1. http://blog.jobbole.com/99765/
2. In Layman's Node