Crawler notes 1: paging with GET requests, and some thoughts

Source: Internet
Author: User

I am just getting started with crawlers and my understanding is not thorough, so here are some early-stage thoughts:

1. With a GET request, the request body carries no data, so you cannot add data through the Request.add_data() function; to page through results you have to modify the URL itself.
2. With a POST request, the data goes in the request body, so it can be added via the Request.add_data() function to achieve paging.

Here is the teacher's standard summary of the differences:

1. GET fetches data from the server; POST sends data to the server.
2. GET request parameters are all visible in the browser's address bar, while POST request parameters go in the request body; the message length is not limited and it is sent implicitly.

Avoid submitting forms via GET where possible, because it can lead to security problems. For example, if a login form uses GET, the username and password the user enters are fully exposed in the address bar.
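As a small illustration of that difference (using Python 3's urllib.request, which replaced the old urllib2 API that had Request.add_data(); the host example.com is a placeholder), the same parameters travel in the URL for GET but in the request body for POST:

```python
from urllib.parse import urlencode
from urllib.request import Request

params = {"user": "alice", "page": 2}

# GET: parameters are appended to the URL and visible in the address bar.
get_req = Request("http://example.com/search?" + urlencode(params))

# POST: the same parameters go into the request body instead.
post_req = Request("http://example.com/search", data=urlencode(params).encode("utf-8"))

print(get_req.get_method())   # GET  (no request body)
print(post_req.get_method())  # POST (body present, so the method switches)
```

Note that urllib decides the method from whether a body is present: attaching `data` turns the request into a POST.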

# coding: utf-8
# 1. Import the required libraries
# 2. Don't loop at first: get the first page working, then loop over the following pages
# 3. Split the URL into the part that changes between pages and the part that stays fixed
# 4. Put the changing part into a dictionary, then URL-encode it into something the server understands
# 5. Build the request URL (the unchanging part of the URL plus the encoded dictionary of changing parameters)
# 6. Open the request with the standard library and read the response
# 7. Parse the response with XPath to extract each individual item
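The outline above can be fleshed out roughly as follows. This is a hedged sketch, not the original code: the host, path, the parameter name `start`, the page size of 25, and the XPath expression are all placeholder assumptions. The third-party lxml import is deferred into the parsing function so the URL-building steps stay runnable on their own.

```python
# coding: utf-8
from urllib.parse import urlencode          # step 1: imports
from urllib.request import urlopen

BASE_URL = "http://example.com/jobs"        # fixed part of the URL (placeholder)

def build_page_url(page, page_size=25):
    # steps 3-5: put the changing part in a dict, encode it,
    # and append it to the unchanging part of the URL
    params = {"start": page * page_size}    # the int value, NOT the string "page"
    return BASE_URL + "?" + urlencode(params)

def crawl_page(url):
    # step 6: open the URL with the standard library and read the response
    html = urlopen(url).read()
    # step 7: parse with XPath to pull out individual items
    from lxml import etree                  # third-party; imported lazily on purpose
    tree = etree.HTML(html)
    return tree.xpath("//a[@class='job-title']/text()")  # placeholder expression

if __name__ == "__main__":
    # step 2: loop over pages only after a single page works
    for page in range(3):
        print(crawl_page(build_page_url(page)))
```
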


Two problems came up while writing the code: 1. I used the dictionary incorrectly because I had misunderstood it; 2. I was not clear about what the xpath() function returns.

First: I had written value = {'start': 'j'}. The quotes make 'j' a literal string, so the value never changes as the loop variable changes, and every page printed was actually the first page.
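The bug can be reproduced in a couple of lines: quoting j turns it into the literal string 'j' instead of the loop variable's value (urlencode shows what would actually reach the server):

```python
from urllib.parse import urlencode

for j in range(0, 50, 25):
    wrong = urlencode({"start": "j"})   # literal string 'j' -> start=j on every pass
    right = urlencode({"start": j})     # the variable's value -> start=0, then start=25
    print(wrong, right)                 # start=j start=0 / start=j start=25
```
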

Second: I did not know the return type of the xpath() function at the time, so I did not understand the trailing [0] in name = ...[0]. Looking it up showed that xpath() returns a list, so you need a subscript to take a value out of it (and .text to get an element's text). My earlier code got away without a subscript only because a for loop was iterating over the list's contents.
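That the query returns a list of elements rather than a single element can be demonstrated with the standard library's ElementTree (lxml's xpath() behaves the same way in this respect: it returns a list you must index into):

```python
import xml.etree.ElementTree as ET

html = "<ul><li>Python developer</li><li>Java developer</li></ul>"
tree = ET.fromstring(html)

matches = tree.findall(".//li")     # returns a LIST of matching elements
name = matches[0].text              # subscript first, then take the text
print(name)                         # Python developer
```

A for loop over `matches` also works, which is why code without a subscript can still appear to behave correctly.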

One more: on line 28 of the code. Since this crawler scrapes a recruitment site and needs the specific requirements of each job, it has to follow each job's own link. Here I used string concatenation: the href attribute scraped from the page is missing the host part, so it cannot be opened directly. Concatenating the host with the scraped href value produces complete links that can be pasted and opened directly.
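For joining the host back onto a relative href, plain string concatenation works, but urllib.parse.urljoin handles the slashes for you; the host and path below are placeholders:

```python
from urllib.parse import urljoin

host = "https://example.com"
href = "/job/12345"                 # scraped href, missing the host part

full = urljoin(host, href)          # safer than host + href when slashes vary
print(full)                         # https://example.com/job/12345
```
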

