Downloading images from Baidu Tieba (Baidu Post Bar) with a Python crawler



The bar crawled this time is Baidu Tieba's beauty bar; consider it a bit of encouragement for our male compatriots.

Before crawling, you need to log in to your Baidu Tieba account in the browser. Alternatively, you can submit the login form via POST in the code, or attach cookies to the requests.
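For the cookie route, a minimal sketch with the Python 3 standard library looks like this. The cookie value below is a placeholder, not a real credential:

```python
import urllib.request

# BDUSS is the cookie Baidu sets for a logged-in session; the value here
# is a placeholder, not a real credential.
req = urllib.request.Request('http://tieba.baidu.com/f?kw=%E7%BE%8E%E5%A5%B3')
req.add_header('Cookie', 'BDUSS=placeholder-login-cookie')
print(req.get_header('Cookie'))  # BDUSS=placeholder-login-cookie
```

Calling urllib.request.urlopen(req) would then send the cookie along with the request.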


Crawling address: http://tieba.baidu.com/f?kw=%E7%BE%8E%E5%A5%B3&ie=utf-8&pn=0

# -*- coding: utf-8 -*-
import urllib2
import re
import requests
from lxml import etree

These are the libraries to import. The code does not use regular expressions; it uses XPath instead. If regular expressions give you trouble, try XPath.
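To see the difference, here is a small self-contained comparison of XPath and a regular expression doing the same job on the same HTML snippet (the snippet itself is made up):

```python
import re
from lxml import etree

html = '<div><a class="j_th_tit" href="/p/111">post 1</a>' \
       '<a class="j_th_tit" href="/p/222">post 2</a></div>'

# XPath: read the href attribute of every matching anchor
hrefs_xpath = etree.HTML(html).xpath('//a[@class="j_th_tit"]/@href')

# A regular expression doing the same job, but more fragile
hrefs_re = re.findall(r'class="j_th_tit" href="([^"]+)"', html)

print(hrefs_xpath)  # ['/p/111', '/p/222']
print(hrefs_re)     # ['/p/111', '/p/222']
```

The XPath version keeps working if attribute order changes; the regex breaks as soon as the markup shifts.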

We recommend writing the first version with the basic libraries, so that you learn more from the exercise.

links = []  # will hold the post URLs to traverse
k = 1
print u'Enter the last page number:'
endPage = int(raw_input())  # the final page number (r'\d+(?=\s*Page)' is a common regex for capturing the total number of pages instead)

# Manually enter the number of pages to avoid excessive content
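The lookahead regex mentioned above can be sketched like this; the sample string is invented, standing in for the page-footer text:

```python
import re

text = 'shared 123456 posts, 2470 Pages in total'  # invented sample
# \d+ matches a run of digits only when it is followed by optional
# whitespace and the word "Page" (the lookahead is not consumed)
match = re.search(r'\d+(?=\s*Page)', text)
print(match.group())  # 2470
```

Note that 123456 is skipped because the lookahead fails there; only the number directly before "Pages" matches.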

for j in range(0, endPage):
    url = 'http://tieba.baidu.com/f?kw=%E7%BE%8E%E5%A5%B3&ie=utf-8&pn=' + str(j)  # URL for this page number
    html = urllib2.urlopen(url).read()  # read the listing page
    selector = etree.HTML(html)  # parse into an element tree for XPath
    links += selector.xpath('//div/a[@class="j_th_tit"]/@href')  # collect the URLs of all posts on the current page

# Use the source-code viewer (developer tools) built into the browser to locate the target elements; it is faster.
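The kw=%E7%BE%8E%E5%A5%B3 part of the page URL is just the URL-encoded bar name. A sketch of building the page URLs with the Python 3 standard library (the /f path is assumed to be the Tieba listing endpoint):

```python
from urllib.parse import quote

kw = quote(u'美女')  # percent-encode the bar name as UTF-8
print(kw)  # %E7%BE%8E%E5%A5%B3

# Build the first few page URLs the same way the script does
for j in range(0, 3):
    print('http://tieba.baidu.com/f?kw=' + kw + '&ie=utf-8&pn=' + str(j))
```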

for i in links:
    url1 = "http://tieba.baidu.com" + i  # the captured addresses are relative, so prepend the Baidu domain
    html2 = urllib2.urlopen(url1).read()  # read the post page
    selector = etree.HTML(html2)  # parse for XPath
    link = selector.xpath('//img[@class="BDE_Image"]/@src')  # capture the image URLs; change the expression for other content you want
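Instead of string concatenation for the relative post addresses, the standard library's urljoin does the same job and also handles already-absolute URLs gracefully:

```python
from urllib.parse import urljoin

base = 'http://tieba.baidu.com'
print(urljoin(base, '/p/1234567'))          # http://tieba.baidu.com/p/1234567
print(urljoin(base, 'http://example.com'))  # absolute URLs pass through unchanged
```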


# Traversal and download

    for each in link:
        # print each
        print u'downloading %d' % k
        fp = open('image/' + str(k) + '.bmp', 'wb')  # save into the image folder in the current directory, in bmp format
        image1 = urllib2.urlopen(each).read()  # read the image bytes
        fp.write(image1)  # write the image to disk
        fp.close()
        k += 1  # k names the file; add 1 after each download
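The download step boils down to fetching bytes and writing them in binary mode. A Python 3 sketch that runs offline by faking the image bytes (the real script would use the bytes returned by urlopen):

```python
import os

os.makedirs('image', exist_ok=True)
image_bytes = b'\x89PNG fake image data'  # stand-in for urllib2.urlopen(each).read()
k = 1
path = os.path.join('image', str(k) + '.bmp')
with open(path, 'wb') as fp:  # 'wb': write binary, as in the script
    fp.write(image_bytes)
print(os.path.getsize(path))
```

The with statement closes the file even if the write fails, which the original open/close pair does not guarantee.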

print u'Download complete!'


If you want to crawl content from other sites, you can use this script as a starting point.
