Crawler: downloading pictures from Baidu Post Bar (Tieba)
The posts crawled this time come from Baidu Tieba's "beauty" (美女) bar, as a bit of encouragement for the male compatriots out there.
Before crawling, you need to log in to your Baidu Tieba account in the browser. Alternatively, you can log in from the code by submitting a POST request or by attaching cookies to each request.
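If you would rather handle the login from the code than from the browser, one option is to attach the login cookie to each request yourself. The sketch below is only an illustration, assuming you copied the cookie string (for example the BDUSS value) out of your browser's developer tools; the cookie value and the User-Agent here are placeholders, not anything the post itself provides.

import requests

headers = {
    'User-Agent': 'Mozilla/5.0',   # pretend to be a normal browser
    'Cookie': 'BDUSS=xxxx',        # placeholder: paste your own login cookie here
}
resp = requests.get('http://tieba.baidu.com/f?kw=%E7%BE%8E%E5%A5%B3&ie=utf-8&pn=0',
                    headers=headers)
print resp.status_code             # 200 means the page came back with your cookie attached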
Crawling address: http://tieba.baidu.com/f?kw=%E7%BE%8E%E5%A5%B3&ie=utf-8&pn=0
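The kw parameter is just the bar name "美女" URL-encoded as UTF-8, so the same address can be built for any other bar. A small sketch of that (the keyword is only an example):

# -*- coding: utf-8 -*-
import urllib

kw = u'美女'.encode('utf-8')       # the bar name, UTF-8 encoded
base = 'http://tieba.baidu.com/f?kw=' + urllib.quote(kw) + '&ie=utf-8&pn='
print base + '0'                   # first list page of that bar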
# -*- coding: utf-8 -*-
import urllib2
import re
import requests
from lxml import etree
These are the libraries to import. The code does not use regular expressions; it uses XPath instead. If regular expressions feel difficult, give XPath a try, as shown in the sketch below.
It is recommended to write the crawler with the basic libraries first, so you learn more along the way.
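For comparison, here is roughly how the same extraction looks with XPath and with a regular expression. The HTML snippet is made up for illustration; the real Tieba markup is more complicated.

import re
from lxml import etree

html = '<div><a class="j_th_tit" href="/p/123456">a post title</a></div>'   # toy snippet

# XPath: address the element by tag and class, then take its href attribute
print etree.HTML(html).xpath('//a[@class="j_th_tit"]/@href')

# Regular expression: pull the same href out of the raw text
print re.findall(r'class="j_th_tit" href="(.*?)"', html)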
links = []   # URLs of the posts to traverse
k = 1        # running counter used to name the downloaded files
print u'Enter the last page number:'
endPage = int(raw_input())   # final page number; r'\d+(?=\s*页)' is a common regex for capturing the total page count
# Enter the number of pages by hand to avoid crawling too much content
for j in range(0, endPage):
    url = 'http://tieba.baidu.com/f?kw=%E7%BE%8E%E5%A5%B3&ie=utf-8&pn=' + str(j)   # URL of the list page
    html = urllib2.urlopen(url).read()      # read the content of the list page
    selector = etree.HTML(html)             # parse into an element tree for XPath
    links = selector.xpath('//div/a[@class="j_th_tit"]/@href')   # URLs of all posts on the current page
    # Use the browser's element inspector to find the target elements; it is faster.
    for i in links:
        url1 = "http://tieba.baidu.com" + i     # the crawled addresses are relative, so prepend the Baidu domain
        html2 = urllib2.urlopen(url1).read()    # read the content of the post page
        selector = etree.HTML(html2)            # parse into an element tree
        link = selector.xpath('//img[@class="BDE_Image"]/@src')  # image URLs; swap in a regex or any other target you want
        # traverse and download
        for each in link:
            # print each
            print u'downloading %d' % k
            fp = open('image/' + str(k) + '.bmp', 'wb')   # save into the image folder in the current directory as .bmp
            image1 = urllib2.urlopen(each).read()         # read the image content
            fp.write(image1)                              # write the image to disk
            fp.close()
            k += 1                                        # k names the file; add 1 after each download
print u'Download complete!'
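The comment next to raw_input() mentions r'\d+(?=\s*页)' as a common pattern for pulling the total page count out of the page itself instead of typing it in. Below is a rough sketch of that idea; it assumes the pager text still contains something like "共12345页", so check the actual markup before relying on it.

# -*- coding: utf-8 -*-
import re
import urllib2

html = urllib2.urlopen('http://tieba.baidu.com/f?kw=%E7%BE%8E%E5%A5%B3&ie=utf-8&pn=0').read()
m = re.search(r'\d+(?=\s*页)', html)   # digits immediately followed by "页" (page)
if m:
    endPage = int(m.group())
    print u'total pages: %d' % endPage
else:
    endPage = int(raw_input(u'Enter the last page number: '))   # fall back to manual input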
If you want to crawl content from other sites, you can use this as a reference.