Web Crawlers and the HTTP Protocol

Most web crawlers are built on the HTTP protocol; to become proficient at web crawling, familiarity with HTTP is an essential skill.
Web crawlers fall into two basic kinds. The first embeds a browser and is driven visually: it is simple to operate and easy to learn, but its efficiency is low, so it suits small-scale data collection. The second runs as a background process: it is much more efficient and suits large-scale collection, but it calls for more specialized skills. Choose whichever fits your needs.
HTTP is an application-layer protocol built on top of TCP, the reliable transport-layer protocol; in other words, HTTP is implemented over TCP. Accessing a network service over HTTP is therefore, at bottom, opening a TCP connection.

A web server listens on port 80. To make a request, the client connects to the server and sends a data string: the URL to be accessed, the crawler's name (User-Agent), and other information are written as Key: Value pairs, one per line, with the whole string terminated by "\r\n\r\n". The server locates the requested file according to the URL and sends its contents back to the client as a data stream. The file contents are preceded by a response header, separated from the actual content by "\r\n\r\n"; the header records the status of the requested URL along with the content's encoding, length, last-modified time, format, and other information that helps the client parse the response.
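For illustration, a minimal request and response might look like the following on the wire; the host, path, and header values here are hypothetical examples, and "\r\n" marks each line ending:

    GET /index.html HTTP/1.1\r\n
    Host: example.com\r\n
    User-Agent: my-crawler/1.0\r\n
    Connection: close\r\n
    \r\n                          <- blank line ends the request

    HTTP/1.1 200 OK\r\n
    Content-Type: text/html; charset=utf-8\r\n
    Content-Length: 1256\r\n
    \r\n                          <- blank line separates header from body
    <html> ... file contents follow ...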

Following this logic, the basic steps for a client (a crawler's downloader) to access an HTTP server are as follows (a runnable Python sketch follows the list):
1. Resolve the server's IP address from the domain name in the URL.
2. Connect to port 80 at that address, establishing a TCP connection to the web server.
3. Send the request string (URL path, User-Agent, and so on) to the server over the established connection.
4. Read data from the connection until the response header has been received; from it, obtain the length of the returned content, its character encoding, and so on, then read the content itself.
5. Decode the content using the declared character encoding to obtain the downloaded resource (web page, file, etc.).
6. Close the connection to the server; one HTTP access is complete.
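Here is a minimal sketch of these steps using raw sockets from Python's standard library. The host name, path, and User-Agent string are hypothetical examples, and HTTP/1.0 is used so that reading until the connection closes is a valid way to collect the whole response (HTTP/1.1 would require handling Content-Length or chunked transfer encoding):

    import socket

    HOST = "example.com"   # hypothetical server
    PATH = "/index.html"   # hypothetical resource
    PORT = 80

    # Step 1: resolve the server's IP address from the domain name.
    ip = socket.gethostbyname(HOST)

    # Step 2: connect to port 80 at that address (a TCP connection).
    sock = socket.create_connection((ip, PORT))

    # Step 3: send the request string over the established connection.
    request = (
        "GET " + PATH + " HTTP/1.0\r\n"
        "Host: " + HOST + "\r\n"
        "User-Agent: my-crawler/1.0\r\n"
        "\r\n"
    )
    sock.sendall(request.encode("ascii"))

    # Step 4: read until the server closes the connection, then split the
    # response header from the body at the first \r\n\r\n.
    raw = b""
    while True:
        chunk = sock.recv(4096)
        if not chunk:
            break
        raw += chunk
    header, _, body = raw.partition(b"\r\n\r\n")
    status_line = header.split(b"\r\n", 1)[0].decode("ascii")

    # Step 5: decode the body; UTF-8 is assumed here, but a real crawler
    # would read the charset from the Content-Type header.
    page = body.decode("utf-8", errors="replace")

    # Step 6: close the connection; one HTTP access is complete.
    sock.close()

    print(status_line)
    print(page[:200])

In practice a crawler would use a ready-made HTTP client (for example Python's urllib.request or the requests library), but walking through the raw-socket version makes clear that an HTTP access is nothing more than a TCP connection plus a text-based request and response.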

When reprinting, please credit the source: Shuhuiji (professional data provider), http://www.shuhuiji.com/detail.jsp?id=7
