Python crawlers: what you need to know before you learn to crawl


This is the 14th article in the Python series; it introduces the principles behind web crawlers.

Any discussion of crawlers has to start with Web pages, because the crawlers we write target Web pages: parsing a page and extracting its data is exactly what a crawler does.

Most Web pages are built from three languages: HTML, CSS, and JavaScript, and most of the time the data we crawl comes from the HTML and CSS.

So before learning to write a crawler, we need to understand these fundamentals.

First, you need to understand how the client and the server exchange data.

Each time we visit a page, we are actually sending the server a request; when the server receives it, it sends back a response. Together, this request/response exchange is what the HTTP protocol describes.

In other words, HTTP is the protocol through which our client (the browser) and the server converse.

A request to the server can use one of eight methods: GET, POST, HEAD, PUT, OPTIONS, CONNECT, TRACE, and DELETE. Most of the time we use GET; the others will be covered in detail in later hands-on articles.

The response is the information the server sends back to us: when we make a request, the server returns the data we asked for.
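As a minimal sketch of the client side of this exchange, we can build a GET request with the third-party requests library without actually sending it, and inspect what would be transmitted. The URL and User-Agent header here are just placeholders:

```python
import requests

# Build (but do not send) a GET request, then inspect its parts.
req = requests.Request(
    "GET",
    "http://www.pmcaff.com/site/selection",
    headers={"User-Agent": "learning-crawler/0.1"},
)
prepared = req.prepare()

print(prepared.method)         # the HTTP method: GET
print(prepared.url)            # the URL being requested
print(dict(prepared.headers))  # the headers sent along with it
```

Sending it (e.g. with `requests.Session().send(prepared)`) would return a response object carrying the server's status code and body.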

Second, understand the basic structure of a Web page.

A Web page generally consists of three parts: the header at the top, the main content in the middle, and the footer at the bottom.

We can open a Web page, such as Pmcaff's featured page at http://www.pmcaff.com/site/selection, in Google Chrome and look closely: the navigation bar and logo at the top form the header, the articles in the middle are the content, and the partner links at the bottom make up the footer.

Then right-click and choose "Inspect" to see the page's source code. Looking carefully, the common tags include at least the following:

    • <div>...</div> Division (a section of the page)
    • <li>...</li> List item
    • <p>...</p> Paragraph
    • <img src="" /> Image
    • <a href="">...</a> Link
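To make these tags concrete, here is a minimal sketch that parses an invented HTML snippet containing them with BeautifulSoup (the parsing library introduced below); the tag contents are made up for illustration:

```python
from bs4 import BeautifulSoup

# A small HTML snippet using the common tags listed above.
html = """
<div id="content">
  <p>An article teaser.</p>
  <ul>
    <li><a href="/article/1">First article</a></li>
    <li><a href="/article/2">Second article</a></li>
  </ul>
  <img src="/logo.png" />
</div>
"""

soup = BeautifulSoup(html, "html.parser")

# Each tag type can be located by its name.
print(soup.find("p").get_text())                # paragraph text
print([a["href"] for a in soup.find_all("a")])  # link targets
print(soup.find("img")["src"])                  # image source
```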

Finally, before writing the crawler, we need to learn to parse Web pages.

For that, we will learn to use BeautifulSoup to parse a Web page.

The details will be explained in the next article, where we will use Requests + BeautifulSoup to crawl real Web data.

Operating environment: Python 3.6; PyCharm 2016.2; macOS.

-----End-----

Du Wangdan, WeChat public account: Du Wangdan, Internet Product Manager.
