Advanced HTTP Concepts

Source: Internet
Author: User
Tags: node, server

1. What is a callback?

In Java, suppose class A calls a method of class B, and at some point class B calls back a method of class A. That method of A is called a callback method.

public interface CallBack {
    void callbackMethod();
}

public class A implements CallBack { // A implements the CallBack interface
    private B b = new B();

    public void doWork() {
        // A invokes B.doSomething, passing itself in. B now holds a reference
        // to A and can call back, at any time, the method A implemented from
        // the CallBack interface. (The method is named doWork because "do"
        // is a reserved keyword in Java.)
        b.doSomething(this);
    }

    public void callbackMethod() { // for A, this is the callback method
        System.out.println("callbackMethod is executing!");
    }
}

public class B {
    public void doSomething(CallBack cb) { // B takes a CallBack-typed parameter
        System.out.println("I am processing my affairs...");
        System.out.println("Then, I need to invoke callbackMethod...");
        cb.callbackMethod(); // B calls back into A
    }
}


2. What is synchronous/asynchronous?

Process synchronization is what makes concurrent program execution reproducible.

One. The concepts of synchronization and asynchrony:

1. Synchronization: When a synchronous function call is made, the call does not return until the result is available. That is, things are done one at a time; you can start the next thing only after finishing the previous one. It is like getting up in the morning: you wash first, then eat; you cannot start eating in the middle of washing. By this definition, the vast majority of functions are synchronous calls (sin, isdigit, and so on). Generally, though, when we talk about synchronous and asynchronous we mean tasks that require other components to collaborate, or that take a certain amount of time to complete. The most common example is SendMessage.

The SendMessage function sends a message to a window and does not return until the other party has finished processing the message. When the other party finishes, the function returns the LRESULT value produced by the message handler to the caller.

2. Asynchronous

The concept of asynchrony is the opposite of synchronization. When an asynchronous procedure call is made, the caller does not get the result immediately. The part that actually handles the call notifies the caller of completion through status, a notification, or a callback.

Take the CAsyncSocket class as an example (note that CSocket derives from CAsyncSocket, but its functionality has been converted from asynchronous to synchronous). When a client makes a connection request by calling the Connect function, the calling thread can continue running immediately. When the connection is actually established, the socket layer sends a message to notify the object.

As mentioned, the executing part can return results to the caller in three ways: status, notification, and callback. Which one is used depends on the implementation of the executing part; the caller has no control over it unless the executing part offers a choice. If the executing part reports through status, the caller has to check it over and over, which is very inefficient (some beginners in multithreaded programming like to poll a variable's value in a loop, which is a serious mistake). With notifications, efficiency is high because the executing part needs almost no extra work. A callback function is not much different from a notification.
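The contrast between the status style and the callback style can be sketched in JavaScript (the language the later sections of this document use). `startTask` is a made-up helper for illustration only: it exposes a `status` field a caller could poll, and also accepts a callback that the executing part invokes when the work is done.

```javascript
// Hypothetical "executing part": exposes a status field (which a caller
// could poll, inefficiently) and accepts a callback (invoked on completion,
// so no polling is needed).
function startTask(onDone) {
  const task = { status: "running", result: null };
  setTimeout(() => {        // simulate the work finishing later
    task.result = 42;
    task.status = "done";
    onDone(task.result);    // notify the caller via callback
  }, 10);
  return task;
}

const task = startTask((result) => {
  console.log("callback received result:", result);
});
console.log("status right after the call:", task.status); // still "running"
```

Note how the caller regains control immediately; with the callback style it never has to loop over `task.status` to find out when the work finished.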

Basic concepts of process synchronization

In a computer system, resources are limited, which leads to competition for and sharing of resources between processes. Concurrent execution therefore brings not only randomness in when user programs start and better resource utilization, but also the constraints that resource competition and sharing impose on process execution. So what exactly are those constraints?

Two. Synchronous and asynchronous transfers:

1. Asynchronous transfer

Typically, asynchronous transfer uses the character as its transmission unit. Each character is framed with a 1-bit start bit and a 1-bit stop bit that mark its beginning and end and keep the transfer synchronized. "Asynchronous" refers to the fact that the interval between one character and the next (from the end of one character to the start of the next) is variable; their timing relationship does not need to be strictly constrained. The start bit corresponds to binary 0, is expressed as a low level, and occupies one bit width. The stop bit corresponds to binary 1, is expressed as a high level, and occupies one bit width. A character occupies 5 to 8 bits, depending on the character set used: a telegraph-code character is 5 bits, an ASCII character is 7 bits, and a Chinese character code is 8 bits. In addition, a 1-bit parity bit can be attached, with odd or even parity, to provide simple error control for the character. Besides agreeing on the same data format (the number of bits per character, the number of stop bits, the parity bit and parity mode, and so on), the transmitting and receiving ends must also use the same transmission rate. Typical rates are 9.6 kb/s, 19.2 kb/s, 56 kb/s, and so on.

Asynchronous transmission, also called start-stop asynchronous communication, has the advantage of being simple and reliable, and is suited to character-oriented, low-speed asynchronous communication; the link between a computer and a modem works this way. Its disadvantage is the communication overhead: every character transmitted carries two or three extra bits, so communication efficiency is low. For example, when using a modem to get online, the speed feels slow not only because of the low transmission rate but also because of this framing overhead.
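The overhead claim can be checked with a little arithmetic, sketched here in JavaScript to match the rest of the document: a 7-bit ASCII character framed with one start bit, one parity bit, and one stop bit occupies 10 bits on the wire, so only 70% of the line rate carries data.

```javascript
// Framing efficiency of asynchronous (start-stop) transmission:
// dataBits of payload plus start, parity, and stop bits per character.
function asyncEfficiency(dataBits, startBits = 1, parityBits = 1, stopBits = 1) {
  const total = dataBits + startBits + parityBits + stopBits;
  return dataBits / total;
}

console.log(asyncEfficiency(7)); // 7 data bits in a 10-bit frame → 0.7
```

Dropping the parity bit for an 8-bit character (`asyncEfficiency(8, 1, 0, 1)`) still leaves 20% of the line spent on framing, which is the "two or three extra bits" cost described above.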

2. Synchronous transmission

Typically, synchronous transfer uses a block of data as its transmission unit. The head and tail of each block carry a special character or bit sequence marking the beginning and end of the block, and usually a check sequence (such as a 16-bit or 32-bit CRC) is attached for error control. "Synchronous" refers to the fact that the interval between one data block and the next is fixed; their timing relationship must be strictly specified.

Three. Synchronous blocking and asynchronous non-blocking:

Synchronous corresponds to blocking mode; asynchronous corresponds to non-blocking mode.

My understanding: synchronization means the runs of two threads are related, with one thread blocking while it waits for the other to run; asynchrony means the two threads are unrelated and each runs on its own.

In communication terms, synchronous means that after the sender sends a packet, it waits for the receiver's response before sending the next packet.

Asynchronous means that after the sender sends a packet, it does not wait for the receiver's response and goes on to send the next packet.

To give a rough example:

SendMessage(...);
TRACE0("Just like Send");

PostMessage(...);
TRACE0("Just like WSASend using overlapped");

When SendMessage is called, the call does not return right away; TRACE0 executes only after the message has been handled. That is synchronous.

PostMessage returns immediately after the call, and TRACE0 executes without waiting for the message to be handled. That is asynchronous.
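The same contrast can be sketched in plain JavaScript. The `sendMessage` and `postMessage` below are stand-in functions for illustration, not the real Win32 APIs: the first runs its handler before returning, the second queues it.

```javascript
const order = [];

// Synchronous: the handler runs before sendMessage returns.
function sendMessage(handler) {
  handler();
}

// Asynchronous: the handler is queued and runs after the caller continues.
function postMessage(handler) {
  setTimeout(handler, 0);
}

sendMessage(() => order.push("sync handler"));
order.push("after sendMessage");   // recorded after the handler

postMessage(() => order.push("async handler"));
order.push("after postMessage");   // recorded before the queued handler

setTimeout(() => console.log(order.join(" -> ")), 10);
// sync handler -> after sendMessage -> after postMessage -> async handler
```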

Four. Other explanations:

The difference between synchronous and asynchronous

For example: the ordinary B/S model is synchronous, while Ajax is asynchronous.

Synchronous: submit a request, wait for the server to process it, and wait for the result to come back; during this period the client browser can do nothing else.

Asynchronous: the request is triggered by an event and processed by the server, while the browser can still do other things in the meantime.

Synchronous is when you invite me to dinner by calling out: if I don't hear you, you keep calling until I tell you I heard, and only then do we go to dinner together.

Asynchronous is when you leave me a message and then go to dinner on your own; I get the news when I get it, and may not eat until after work.

So if you invite me synchronously while I invite you asynchronously, you end up doing all the waiting.

For example, making a phone call is synchronous, while sending a text message is asynchronous.


3. What is I/O?
I/O is an abbreviation for input/output, i.e. the input and output ports. Every device has a dedicated I/O address for handling its own input and output. The connections and data exchange between the CPU, memory, and external devices are realized through interface circuits; in everyday usage, "interface" usually just means the I/O interface.


4. What is single thread/multithreading?
For example, single-threading is one person running back and forth in the kitchen, doing both the cooking and the rice; multithreading is two people, one cooking the dishes and one cooking the rice. This should be easier to grasp than pure theory.
Put another way, multithreading lets one CPU present itself as several virtual CPUs. A dual-core chip really does run two hardware threads, and each core can also be virtualized into multiple threads (you can think of them as multiple pipelines).
5. What is blocking/non-blocking?
Blocking: A blocking call means that the current thread is suspended until the result of the call is returned; the function returns only once the result has been obtained. One might equate blocking calls with synchronous calls, but they are different. In a synchronous call, the current thread may well still be active; it is just that, logically, the current function has not returned yet. For example, if we call the Receive function in CSocket and there is no data in the buffer, the function waits until data arrives before returning, yet the current thread keeps processing all kinds of messages in the meantime. If the main window and the calling function are in the same thread, the main interface can still refresh, unless you call from inside a special interface handler. The socket receive function recv, by contrast, is an example of a blocking call: when the socket works in blocking mode and recv is called with no data available, the current thread is suspended until data arrives.

Non-blocking: The concept of non-blocking corresponds to blocking: a non-blocking function returns immediately, without suspending the current thread, even when the result is not yet available.

Blocking mode of an object vs. blocking function calls: Whether an object is in blocking mode and whether a function call is blocking are strongly related, but they do not correspond one to one. A blocking object can be used with non-blocking calls: we can poll its state with a suitable API and call the blocking function only at the right moment, avoiding the block. And for a non-blocking object, calling a special function can still produce a blocking call; the select function is an example of this.

6. What is an event?

My own understanding: an event is, roughly, an occurrence, such as an operation, that produces a corresponding change.

7. What is event-driven?
Early programs used an input-process-output mechanism, with the entire flow fixed in advance by the programmer.
Object-oriented program design uses an event-driven mechanism instead. For example, a left mouse click or a double-click is a specific event; triggering preset actions in response to such events is the event-driven mechanism.



8. What is an event-driven callback?

All of this is because Node.js is event-driven. Honestly, I don't fully grasp every nuance of that sentence either, but I'll try to explain why it makes sense for writing web-based applications with Node.js.

When we use the http.createServer method, of course we don't just want a server that listens on some port; we also want it to do something when an HTTP request arrives.

The problem is that this is asynchronous: a request can arrive at any time, yet our server runs in a single process.

When writing PHP applications we don't worry about this at all: whenever a request comes in, the web server (usually Apache) creates a new process for it, which executes the corresponding PHP script from start to finish.

So in our Node.js program, when a new request arrives on port 8888, how do we control what happens?

Well, this is where the event-driven design of Node.js and JavaScript really helps, although we have to learn a few new concepts to master it. Let's look at how these concepts are applied in our server code.

We created the server and passed a function to the method that created it. Whenever our server receives a request, the function is called.

We don't know in advance when that will happen, but we now have a place to handle requests: the function we passed in earlier. Whether it is a predefined function or an anonymous one doesn't matter.

This is the legendary callback. We pass a function to a method, and the method calls this function back when a corresponding event occurs.

At least for me, it takes some effort to understand it. If you're still unsure, read Felix's blog post again.

Let's try to digest this new concept. How can we prove that our code keeps running after the server is created, even when no HTTP request has come in and our callback has not been invoked? Let's try this:

var http = require("http");

function onRequest(request, response) {
  console.log("Request received.");
  response.writeHead(200, {"Content-Type": "text/plain"});
  response.write("Hello World");
  response.end();
}

http.createServer(onRequest).listen(8888);

console.log("Server has started.");

Note: where onRequest (our callback) is triggered, I output some text with console.log, and after the HTTP server has started I output some text as well.

When we run it as usual with node server.js, it immediately prints "Server has started." on the command line. When we make a request to the server (by visiting http://localhost:8888/ in the browser), the message "Request received." appears on the command line.

This is event-driven, asynchronous, server-side JavaScript with callbacks!

(Note that when we visit the page, the server may print "Request received." twice. That is because most browsers also try to fetch http://localhost:8888/favicon.ico when you visit http://localhost:8888/.)


9. What is an event loop?

One of the basic points you need to understand before learning Node.js is that I/O is "expensive".

So with current programming techniques, the biggest waste comes from waiting for I/O to complete. There are several ways to deal with this problem, any of which can improve performance:

    • Synchronous: handle one request at a time, in order. In this case, any long request "delays" (blocks) all the others.
    • Fork a new process: start a new process to handle each request. This does not scale well; hundreds of connections mean hundreds of processes. fork() is the Unix programmer's hammer and, because it is so handy, every problem looks like a nail, so it tends to be overused.
    • Threads: open a new thread to handle each request. This is easy, and threads are "kinder" to the kernel than forked processes, since they usually cost less overhead. Downsides: your machine may not handle thread-based programming well, threaded programs grow in complexity very quickly, and you have to worry about access to shared resources.

The second point you need to know is that handling each connection with its own thread is "memory-expensive".

Apache uses multithreading to process requests: for each request it spawns a thread (or a process, depending on the configuration). As the number of concurrent connections grows and more threads are needed to serve clients, you can watch that overhead consume memory. Nginx and Node.js are not based on the multithreaded model, because threads and processes carry a heavy memory cost. Both are single-threaded and event-based, which eliminates the overhead of creating hundreds or thousands of threads or processes to handle many requests.

Node.js maintains a single-threaded running environment for your code

It really is single-threaded, and you cannot write code that runs concurrently within it. For example, performing a "sleep" like the following blocks the entire server for one second:

var now = new Date().getTime();
while (new Date().getTime() < now + 1000) {
    // do nothing
}

While such code is running, Node.js cannot respond to other requests from clients, because it has only one thread in which to execute your code. Likewise, a CPU-intensive operation, such as resizing an image, will block all other requests.

...However, everything other than your code runs concurrently.

Within a single request there is no way to make your code run in parallel. However, all I/O is event-based and asynchronous, so code like the following does not block the server:

c.query(
    'SELECT SLEEP(20);',
    function (err, results, fields) {
        if (err) {
            throw err;
        }
        res.writeHead(200, {'Content-Type': 'text/html'});
        res.end('done');
        c.end();
    }
);

While this query sleeps inside one request, other requests are handled just fine.

Why is this a better way? When do we need to move from synchronous to asynchronous/concurrent execution?

Synchronous execution is nice because it makes coding simpler (compared with threads, where concurrency problems have many ways to bite you).

In Node.js, you don't have to worry about what happens in the background: just use callbacks when you do I/O. You are guaranteed that your code is never interrupted and that I/O will not block other requests, all without the per-thread or per-process costs (such as Apache's memory overhead).

Having asynchronous I/O is also good because I/O is more expensive than most computation, and we should be doing something more useful than just waiting for it.

An event loop is an entity that handles external events and converts them into callback invocations. I/O calls are the points at which Node.js can switch from one request to another: at an I/O call, your code saves a callback and returns control to the Node.js runtime environment. The callback is invoked later, once the data is actually available.
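You can see the control handover in a tiny experiment: the synchronous code runs to completion first, and only then does the runtime execute the queued callback, even for a 0 ms timer.

```javascript
// The event loop in action: "start" and "end" are recorded by the
// synchronous code; the timer callback runs only after control has been
// returned to the runtime, even though it was scheduled with a 0 ms delay.
const log = [];
log.push("start");
setTimeout(() => {
  log.push("timer callback");
  console.log(log.join(" -> ")); // start -> end -> timer callback
}, 0);
log.push("end");
```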

Of course, Node.js internally still relies on threads and processes for data access and other task execution, but none of this is exposed explicitly to your code, so you don't have to worry about how the I/O interactions are handled inside. Compared with the Apache model, this removes a great deal of thread and threading overhead, since no separate thread is needed per connection. Threads are needed only when you absolutely must have an operation execute concurrently, and even then they are managed by Node.js itself.

Apart from I/O calls, Node.js expects all requests to return quickly; CPU-intensive work, for example, should be split off to another process that you interact with via events, or handled through an abstraction such as WebWorkers. This obviously means that, without another thread in the background (the Node.js runtime), you cannot parallelize the execution of your code. Basically, every object that can emit events (for example, instances of EventEmitter) supports asynchronous event-based interaction, and you can interact with blocking code in the same way (for example, using a file, a socket, or a child process, which in Node.js is an EventEmitter). With this approach you can also take advantage of multiple cores; see node-http-proxy.

Internal implementation

Internally, Node.js relies on libev to provide the event loop, supplemented by libeio, which uses pooled threads to provide asynchronous I/O. To learn even more, take a look at the libev documentation.

How to implement async in Node.js

Tim Caswell describes the whole pattern in his slides:

    • First-class functions: we pass functions around as data, shuffling them about and executing them when needed.
    • Function composition: what you may know as anonymous functions or closures, executed after an I/O event fires.
    • Callback counters: with event-based callbacks you cannot guarantee the order in which I/O events complete. So when several queries must finish before a flow can proceed, you typically just count the concurrent I/O operations and check, at the point where the final result is really needed, whether all necessary operations have completed (one example is counting returned database queries inside the event callbacks). The queries run concurrently, and the I/O layer supports this, for instance through a pool of connections.
    • Event loops: as mentioned above, you can wrap blocking code in an event-based abstraction (for example, by running a subprocess and firing an event when it is done).
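The callback-counter item above can be sketched as follows. `runAll` and `fakeQuery` are illustrative names, not a real library API: `runAll` starts all tasks concurrently and runs the final callback only when the counter reaches zero.

```javascript
// Callback counter: launch all tasks concurrently, decrement a counter as
// each one calls back, and invoke `done` only once the last one finishes.
function runAll(tasks, done) {
  let remaining = tasks.length;
  const results = new Array(tasks.length);
  tasks.forEach((task, i) => {
    task((result) => {
      results[i] = result;            // slot by task index, not finish order
      remaining -= 1;
      if (remaining === 0) done(results); // all concurrent I/O is complete
    });
  });
}

// Simulated "queries" that complete after different delays.
const fakeQuery = (value, delay) => (cb) => setTimeout(() => cb(value), delay);

runAll(
  [fakeQuery("a", 30), fakeQuery("b", 10), fakeQuery("c", 20)],
  (results) => console.log(results.join(",")) // "a,b,c" whatever finishes first
);
```

The tasks complete in the order b, c, a, yet the results array preserves the original task order, and `done` fires exactly once.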

Again, the original source: http://blog.mixu.net/2011/02/01/understanding-the-node-js-event-loop/
