The difference between GET and POST
(GET) Note that the query string (name/value pairs) is sent in the URL of the GET request: /test/demo_form.asp?name1=value1&name2=value2
GET requests can be cached
GET requests remain in browser history
GET requests can be bookmarked
GET requests should not be used when handling sensitive data
GET requests have a length limit
GET requests should only be used to retrieve data.
(POST) Note that the query string (name/value pairs) is sent in the HTTP message body of the POST request:
POST /test/demo_form.asp HTTP/1.1
Host: w3schools.com
name1=value1&name2=value2
POST requests are not cached
POST requests are not persisted in browser history
POST cannot be bookmarked
POST requests have no restriction on data length
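A hedged illustration in Java (11+) of where the data travels in each case, using java.net.http.HttpClient; the endpoint below reuses the example form path from above, but the host is a placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GetVsPostDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // GET: the name/value pairs travel in the URL's query string.
        HttpRequest get = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/test/demo_form.asp?name1=value1&name2=value2"))
                .GET()
                .build();

        // POST: the same name/value pairs travel in the HTTP message body instead.
        HttpRequest post = HttpRequest.newBuilder()
                .uri(URI.create("https://example.com/test/demo_form.asp"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("name1=value1&name2=value2"))
                .build();

        System.out.println(client.send(get, HttpResponse.BodyHandlers.ofString()).statusCode());
        System.out.println(client.send(post, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}
```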
Protocols used by DNS
DNS uses both TCP and UDP.
The maximum length of a UDP DNS message is 512 bytes, while TCP allows messages longer than 512 bytes. When a DNS response exceeds 512 bytes, the protocol sets the TC (truncated) flag, and the query is then re-sent over TCP. Traditionally, a UDP DNS message is generally no larger than 512 bytes.
The secondary name server queries the primary name server periodically (typically every 3 hours) to check whether the data has changed. If it has, a zone transfer is performed to synchronize the data. Zone transfers use TCP instead of UDP, because the amount of data transferred synchronously is much larger than that of a single query and answer.
TCP is a reliable connection that guarantees the accuracy of the data.
When a client queries a DNS server for a domain name, the response is generally no more than 512 bytes and can be carried over UDP. Skipping the TCP three-way handshake keeps the DNS server's load lower and its responses faster. Although in theory a client can specify TCP when querying a DNS server, in practice many DNS servers are configured to accept only UDP query packets.
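As a quick illustration, a minimal Java sketch of an ordinary application-level lookup (the hostname is a placeholder); such a query is normally resolved over UDP by the system resolver:

```java
import java.net.InetAddress;

public class DnsLookupDemo {
    public static void main(String[] args) throws Exception {
        // An ordinary forward lookup; the system resolver typically sends it over UDP,
        // falling back to TCP only if the answer is truncated (TC flag set).
        InetAddress[] addresses = InetAddress.getAllByName("www.example.com"); // placeholder host
        for (InetAddress address : addresses) {
            System.out.println(address.getHostAddress());
        }
    }
}
```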
Idempotence
One characteristic of an idempotent operation is that performing it any number of times has the same effect as performing it once. Idempotent functions, or idempotent methods, are functions that can be executed repeatedly with the same parameters and obtain the same result. Such functions do not affect the state of the system, so there is no need to worry that repeated execution will change the system. For example, the functions getUserName() and setTrue() are idempotent.
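A minimal Java sketch of the idea, keeping the getUserName()/setTrue() examples from above (the class and field names are illustrative):

```java
// Illustrative sketch of idempotent vs non-idempotent methods.
public class IdempotencyDemo {
    private String userName = "alice";
    private boolean flag = false;
    private int counter = 0;

    // Idempotent: calling it once or many times yields the same result
    // and never changes system state.
    public String getUserName() {
        return userName;
    }

    // Idempotent: repeated calls leave the system in the same state (flag == true).
    public void setTrue() {
        this.flag = true;
    }

    // NOT idempotent: every call changes the result and the state.
    public int increment() {
        return ++counter;
    }
}
```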
The difference between cookies and sessions
A cookie is a technique that lets a Web server store a small amount of data on the client's hard disk or in its memory, or read data back from the client. A cookie is a very small text file that a Web server places on your hard drive when you browse a website; it can record information such as your user ID, password, the pages you visited, and the time you spent on them.
Session: when a user requests a page from an application and does not yet have a session, the Web server automatically creates a Session object. When the session expires or is abandoned, the server terminates it.
The cookie mechanism keeps state on the client side, while the session mechanism keeps state on the server side. Because the server-side scheme still needs to save an identifier on the client, the session mechanism may rely on the cookie mechanism to preserve that identifier.
A session is a means by which the server tracks a user; each session has a unique identifier, the session ID. When the server creates a session, the response sent to the client contains a Set-Cookie header with a key-value pair named SID, whose value is the session ID. After the client receives the cookie, the browser saves it, and every subsequent request carries the session ID. HTTP therefore tracks user state by combining the two: the session lives on the server and the cookie on the client.
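A hedged sketch of this flow using only the JDK's built-in com.sun.net.httpserver; the SID name follows the text above, while the port and cookie attributes are arbitrary choices:

```java
import com.sun.net.httpserver.HttpServer;
import java.net.InetSocketAddress;
import java.nio.charset.StandardCharsets;
import java.util.UUID;

public class SessionCookieDemo {
    public static void main(String[] args) throws Exception {
        HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
        server.createContext("/", exchange -> {
            String cookie = exchange.getRequestHeaders().getFirst("Cookie");
            String body;
            if (cookie == null || !cookie.contains("SID=")) {
                // No session yet: create an ID and hand it to the browser via Set-Cookie.
                String sid = UUID.randomUUID().toString();
                exchange.getResponseHeaders().add("Set-Cookie", "SID=" + sid + "; HttpOnly");
                body = "new session created";
            } else {
                // The browser sent the cookie back, so the server recognises the session.
                body = "existing session: " + cookie;
            }
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            exchange.sendResponseHeaders(200, bytes.length);
            exchange.getResponseBody().write(bytes);
            exchange.close();
        });
        server.start();
    }
}
```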
Causes of TCP packet sticking and splitting
The application writes data whose size is larger than the socket send buffer.
TCP segmentation at MSS size. MSS is short for maximum segment size: it is the maximum length of the data field in a TCP segment, and the data field plus the TCP header makes up the whole TCP segment. So the MSS is not the maximum length of the TCP segment; rather, MSS = TCP segment length - TCP header length.
IP fragmentation when the Ethernet payload is larger than the MTU. The MTU is the maximum packet size that a given layer of a communication protocol can carry. If the IP layer has a packet to transmit whose length exceeds the link-layer MTU, the IP layer fragments it into several pieces so that each piece does not exceed the MTU. Note that IP fragmentation can occur on the original sending host or on an intermediate router.
Strategies for handling TCP packet sticking and splitting
Use fixed-length messages, for example 100 bytes each.
Append a delimiter at the end of each packet, such as a carriage return or another special character; the FTP protocol is a typical example.
Divide the message into a message header and a message body (the header typically carries the total message length; see the sketch after this list).
Use a more complex application protocol, such as RTMP.
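A minimal sketch of the header-plus-body (length-prefixed) strategy in Java; the class and method names are illustrative, and the in-memory streams stand in for a real socket:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

// Each message is written as a 4-byte length followed by the payload, so the
// receiver always knows where one message ends and the next begins, no matter
// how TCP splits or merges the byte stream.
public class LengthPrefixedFraming {

    static void writeMessage(DataOutputStream out, String msg) throws IOException {
        byte[] payload = msg.getBytes(StandardCharsets.UTF_8);
        out.writeInt(payload.length);   // message header: total body length
        out.write(payload);             // message body
    }

    static String readMessage(DataInputStream in) throws IOException {
        int length = in.readInt();      // read the header first
        byte[] payload = new byte[length];
        in.readFully(payload);          // then read exactly that many bytes
        return new String(payload, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buffer);
        writeMessage(out, "hello");
        writeMessage(out, "world");     // two messages end up in one byte stream

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer.toByteArray()));
        System.out.println(readMessage(in));  // "hello"
        System.out.println(readMessage(in));  // "world"
    }
}
```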
Three-way handshake
First handshake: when establishing the connection, the client sends a SYN packet (SYN=J) to the server, enters the SYN_SENT state, and waits for the server to confirm.
Second handshake: when the server receives the SYN packet, it must acknowledge the client's SYN (ACK=J+1) and also send its own SYN packet (SYN=K), i.e. a SYN+ACK packet; the server then enters the SYN_RECV state.
Third handshake: the client receives the server's SYN+ACK packet and sends an acknowledgment packet (ACK=K+1) to the server. Once this packet is sent, both the client and the server enter the ESTABLISHED state, and the three-way handshake is complete.
After the three-way handshake completes, the client and server begin transmitting data.
Four-way wave (closing the connection)
The client sends a FIN first and enters the FIN_WAIT_1 state.
The server receives the FIN, sends an ACK, and enters the CLOSE_WAIT state; the client receives this ACK and enters the FIN_WAIT_2 state.
The server sends a FIN and enters the LAST_ACK state.
The client receives the FIN, sends an ACK, and enters the TIME_WAIT state; the server receives the ACK and enters the CLOSED state.
The TIME_WAIT state belongs to the party that actively closes the connection (here, the client) and begins after the last ACK is sent. It lasts quite a long time: the client stays in TIME_WAIT for twice the MSL (about 60 seconds on Linux) before moving to the CLOSED state.
TIME_WAIT
TIME_WAIT occurs on the side that actively closes the connection; it waits for 2MSL, about 4 minutes. Its main purpose is to prevent the last ACK from being lost. Because TIME_WAIT can last a long time, the server should avoid actively closing connections whenever possible.
CLOSE_WAIT
CLOSE_WAIT occurs on the side that passively closes the connection. According to the TCP state machine, when the server receives the FIN sent by the client, the TCP implementation sends an ACK and the connection enters the CLOSE_WAIT state. However, if the server never calls close(), the connection cannot move from CLOSE_WAIT to LAST_ACK, and many connections accumulate in the CLOSE_WAIT state. This usually happens when the application is busy with read and write operations and never closes the connections on which a FIN has already been received. At that point, recv()/read() on a socket that has received a FIN returns 0.
Why is the TIME_WAIT state needed?
Suppose the final ACK is lost. The server will resend its FIN, so the client must keep its TCP state information in order to resend the final ACK; otherwise it would reply with an RST, and the server would think an error has occurred. TCP must reliably terminate both directions of the connection (a full-duplex close), so the client has to enter the TIME_WAIT state, because it may face the situation of having to resend the final ACK.
Why does the TIME_WAIT state need to last as long as 2MSL?
If the TIME_WAIT state were not held long enough (for example, less than 2MSL), then after the first connection terminates normally, a second connection with the same five-tuple could appear, and a delayed duplicate segment from the first connection might arrive and interfere with it. TCP must prevent duplicate segments of a connection from appearing after that connection has terminated, so the TIME_WAIT state is kept long enough (2MSL) that the TCP segments of the old connection in either direction have either been answered or have disappeared from the network. A second connection established afterwards will then not be confused with the old one.
Too many sockets in the TIME_WAIT and CLOSE_WAIT states
If the server behaves abnormally, 80-90% of the time it is one of the following two situations:
1. The server holds a large number of connections in the TIME_WAIT state.
2. The server holds a large number of connections in the CLOSE_WAIT state; simply put, too many CLOSE_WAIT connections come from handling passively closed connections improperly, as the sketch after this list illustrates.
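A hedged sketch of the usual fix on the Java side: always close the socket once the peer's FIN has been seen, for example with try-with-resources (the port number is arbitrary). Note that in Java the end-of-stream after a FIN shows up as read() returning -1:

```java
import java.io.IOException;
import java.io.InputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class CloseWaitAvoidance {
    public static void main(String[] args) throws IOException {
        try (ServerSocket serverSocket = new ServerSocket(9000)) {  // arbitrary port
            while (true) {
                // try-with-resources guarantees close() is called, so the connection
                // moves from CLOSE_WAIT to LAST_ACK instead of piling up in CLOSE_WAIT.
                try (Socket client = serverSocket.accept();
                     InputStream in = client.getInputStream()) {
                    byte[] buffer = new byte[1024];
                    int n;
                    while ((n = in.read(buffer)) != -1) {  // -1: the peer has sent its FIN
                        // ... handle the n bytes that were read ...
                    }
                } // close() happens here even if handling the data throws
            }
        }
    }
}
```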
A complete HTTP request process
Domain name resolution -- TCP three-way handshake -- the browser sends an HTTP request once the TCP connection is established -- the server responds to the HTTP request and the browser obtains the HTML code -- the browser parses the HTML code and requests the resources referenced in it (such as JS, CSS, and images) -- the browser renders the page for the user
Talk about long connections
First, long connections based on the HTTP protocol
Both HTTP/1.0 and HTTP/1.1 support long connections. HTTP/1.0 requires the request to carry a "Connection: keep-alive" header, while HTTP/1.1 supports them by default.
a) The client issues a request whose header contains "Connection: keep-alive".
b) After the server receives this request, it determines from HTTP/1.0 plus "Connection: keep-alive" that this is a long connection, so it also adds "Connection: keep-alive" to the response header and does not close the established TCP connection.
c) After the client receives the server's response and finds that it contains "Connection: keep-alive", it likewise treats the connection as a long connection and does not close it; it sends the next request over the same connection and goes back to a).
Second, heartbeat packets: send a packet every few seconds to keep the long connection alive.
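A hedged sketch of a client-side heartbeat in Java, using a ScheduledExecutorService to write a small packet at a fixed interval; the host, port, and "PING" payload are placeholders:

```java
import java.io.OutputStream;
import java.net.Socket;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class HeartbeatDemo {
    public static void main(String[] args) throws Exception {
        Socket socket = new Socket("example.com", 9000); // placeholder server
        OutputStream out = socket.getOutputStream();

        ScheduledExecutorService scheduler = Executors.newSingleThreadScheduledExecutor();
        // Send a tiny heartbeat packet every 5 seconds so both sides know
        // the long connection is still alive.
        scheduler.scheduleAtFixedRate(() -> {
            try {
                out.write("PING\n".getBytes());
                out.flush();
            } catch (Exception e) {
                scheduler.shutdown(); // the connection is gone; stop the heartbeat
            }
        }, 0, 5, TimeUnit.SECONDS);
    }
}
```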
How does TCP ensure reliable transmission?
Three-way handshake.
Splitting data into reasonable lengths: application data is divided into the data blocks that TCP considers most suitable for sending (segmented by byte count).
Timeout retransmission: when TCP sends a segment, it starts a timer; if an acknowledgment is not received in time, the segment is retransmitted.
An acknowledgment response is given for the data that is received.
Checksums: if a segment is found to be corrupted, it is discarded and no acknowledgment is sent.
Out-of-order data is reordered before being handed to the application layer.
Duplicate data is discarded.
Flow control. Each side of a TCP connection has a fixed-size buffer space. The receiving side of TCP only allows the other end to send the data that the receiving buffer can accept. This prevents faster hosts from causing buffer overruns for slower hosts.
Congestion control. Reduce the transmission of data when the network is congested.
Detailed Introduction to HTTP
The HTTP protocol is short for Hyper Text Transfer Protocol, and it is used to transfer hypertext from World Wide Web (WWW) servers to the local browser.
Characteristics
Simple and fast: when a client requests a service from the server, it only needs to transmit the request method and path. Commonly used request methods include GET and POST, and each method specifies a different type of contact between client and server. Because the HTTP protocol is simple, an HTTP server's program size can be small, so communication is fast.
Flexible: HTTP allows the transfer of any type of data object. The type being transmitted is marked by Content-type.
No connection: connectionless means that each connection handles only one request. After the server finishes processing the client's request and receives the client's acknowledgment, it closes the connection. This saves transmission time.
Stateless: the HTTP protocol is a stateless protocol. Stateless means the protocol has no memory for transaction processing. The lack of state means that if earlier information is needed for later processing, it must be retransmitted, which may increase the amount of data transferred per connection. On the other hand, the server responds faster when it does not need earlier information.
Supports both B/S and C/S modes.
Request message (Request)
A request line that describes the type of request, the resource to access, and the HTTP version used.
Request headers: the part that follows the request line (the first line), used to give the server additional information. From the second line onward each line is a request header; for example, Host indicates the destination of the request, and User-Agent, which both server-side and client-side scripts can access, is an important basis for browser-type detection logic. This information is defined by the browser and is sent automatically with every request.
Blank line: a blank line after the request headers is required.
Request data, also called the body; any other data may be added here.
Response message (Response)
The status line consists of the HTTP protocol version number, the status code, and the status message.
A message header that describes some additional information that the client will use
Blank line, a blank line after the message header is required
Response body, the text information that the server returns to the client.
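A hedged sketch of inspecting those three parts of a response from Java using java.net.http.HttpClient (Java 11+); the URL is a placeholder:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class ResponseStructureDemo {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder(URI.create("https://example.com/")).GET().build();
        HttpResponse<String> response = client.send(request, HttpResponse.BodyHandlers.ofString());

        // Status line information: protocol version and status code
        System.out.println(response.version() + " " + response.statusCode());
        // Message headers
        response.headers().map().forEach((name, values) -> System.out.println(name + ": " + values));
        // Response body
        System.out.println(response.body());
    }
}
```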
Status code
200 OK // client request succeeded
301 Moved Permanently // permanent redirect, e.g. a domain name jump
302 Found // temporary redirect, e.g. a non-logged-in user visiting the user center is redirected to the login page
400 Bad Request // the client request has a syntax error and cannot be understood by the server
401 Unauthorized // the request is unauthorized; this status code must be used with the WWW-Authenticate header field
403 Forbidden // the server received the request but refuses to provide the service
404 Not Found // the requested resource does not exist, e.g. a wrong URL was entered
500 Internal Server Error // unexpected server error
503 Service Unavailable // the server is currently unable to process the client's request and may return to normal after some time
Methods of HTTP
GET: the client requests from the server the resource identified by the URL.
POST: submits data to the server, appending new data to the resource identified by the URL.
HEAD: requests only the response headers of the resource identified by the URL.
PATCH: requests a partial modification of the resource identified by the URL.
PUT: requests an update of the data of the resource identified by the URL.
DELETE: requests deletion of the resource identified by the URL.
The difference between a URI and a URL
URI stands for Uniform Resource Identifier, a uniform resource identifier used to uniquely identify a resource. Every resource available on the Web, such as an HTML document, an image, a video clip, or a program, is located by a URI.
URIs are generally made up of three parts:
naming mechanism for accessing resources
Host name of the storage resource
The name of the resource itself, represented by a path. The emphasis is on the resource.
URL stands for Uniform Resource Locator. A URL is a specific kind of URI: it identifies a resource and also describes how to locate it. URLs are strings used on the Internet to describe information resources, mainly in WWW client and server programs, most famously the Mosaic browser. A URL can describe various information resources, including files, server addresses, and directories, in a unified format.
The URL is generally composed of three parts:
Protocol (or service mode)
The host IP address (and sometimes the port number) that contains the resource
The specific address of the resource on the host, such as a directory and file name.
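A minimal Java sketch of the distinction and of the three URL parts listed above, using java.net.URI and java.net.URL (the URL itself is a placeholder):

```java
import java.net.URI;
import java.net.URL;

public class UriUrlDemo {
    public static void main(String[] args) throws Exception {
        // A URI merely identifies the resource; a URL additionally says how to locate it.
        URI uri = URI.create("https://www.example.com:8080/docs/index.html?lang=en");
        URL url = uri.toURL();

        System.out.println("protocol: " + url.getProtocol()); // protocol (service mode)
        System.out.println("host:     " + url.getHost());     // host holding the resource
        System.out.println("port:     " + url.getPort());     // port, -1 if not given
        System.out.println("path:     " + url.getPath());     // specific address on the host
    }
}
```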
The difference between HTTPS and HTTP
The HTTPS protocol requires applying to a CA for a certificate; free certificates are rare, so a fee is generally required.
HTTP is the Hypertext Transfer Protocol and transmits information in plaintext, while HTTPS is an SSL-encrypted transport protocol and is therefore secure.
HTTP and HTTPS use completely different connection methods and different ports: the former uses 80, the latter 443.
An HTTP connection is simple and stateless; the HTTPS protocol is built from SSL + HTTP and supports encrypted transmission and identity authentication, making it more secure than HTTP.
HTTP uses port 80 by default, and HTTPS uses port 443 by default.
How HTTPS ensures the security of data transmission
HTTPS actually inserts SSL/TLS between the TCP layer and the HTTP layer to secure the layers above it. It mainly uses symmetric encryption, asymmetric encryption, certificates, and other techniques to encrypt the data exchanged between the client and the server, ultimately ensuring the security of the whole communication:
Authenticate users and servers to ensure that data is sent to the correct client and server;
Encrypt data to prevent the data from being stolen in the middle;
Maintain the integrity of the data to ensure that the data is not changed during transmission.
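A hedged sketch of what this looks like from Java: open an HttpsURLConnection and inspect the negotiated cipher suite (the symmetric encryption in use) and the server certificate chain (what the CA-based authentication verified); the URL is a placeholder:

```java
import java.io.InputStream;
import java.net.URI;
import javax.net.ssl.HttpsURLConnection;

public class HttpsDemo {
    public static void main(String[] args) throws Exception {
        // The TLS handshake (certificate check, key exchange) happens transparently
        // before any HTTP data is sent.
        HttpsURLConnection conn =
                (HttpsURLConnection) URI.create("https://example.com/").toURL().openConnection();
        conn.connect();

        // The negotiated cipher suite reflects the symmetric encryption actually used.
        System.out.println("Cipher suite: " + conn.getCipherSuite());
        // The server certificate chain is what CA-based authentication verified.
        System.out.println("Server certificates: " + conn.getServerCertificates().length);

        try (InputStream in = conn.getInputStream()) {
            System.out.println("First byte of body: " + in.read());
        }
    }
}
```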
Computer network questions frequently asked in Java interviews