Python Crawler Experience

Source: Internet
Author: User

I recently built a small crawler in Python that automatically collects some of a site's content and saves it to local documents. Here is a short summary.


The main libraries used are urllib2 and BeautifulSoup.


urllib and urllib2 are very handy libraries for fetching web pages; the core pattern is to send a customized URL request and then read back the page content. The simplest function, to check whether a web page exists:

import urllib2

def isurlexists(url):
    req = urllib2.Request(url, headers=headers)
    try:
        urllib2.urlopen(req)
    except:
        return 0
    return 1

The headers can be customized or left blank. The main purpose of customizing them is to mimic the headers of a typical browser, bypassing some websites' blocking of crawlers.
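For example, a minimal headers dictionary that imitates a desktop browser might look like this (the exact User-Agent string is only an illustration; substitute any real browser's string):

```python
# A minimal set of request headers that imitates a desktop browser.
# The User-Agent value below is illustrative, not authoritative.
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
                  'AppleWebKit/537.36 (KHTML, like Gecko) '
                  'Chrome/91.0.4472.124 Safari/537.36',
}
```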

If you want to fetch the site content and also handle any exception that is raised, you can:

import urllib2

def fetchlink(url):
    req = urllib2.Request(url, headers=headers)
    try:
        response = urllib2.urlopen(req)
    # HTTPError is a subclass of URLError, so it must be caught first
    except urllib2.HTTPError, e:
        print 'Got HTTP Error while retrieving:', url, 'with response code:', e.getcode(), 'and exception:', e.reason
    except urllib2.URLError, e:
        print 'Got URL Error while retrieving:', url, 'and the exception is:', e.reason
    else:
        htmlcontent = response.read()
        return htmlcontent


The code above returns the raw HTML directly.
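The snippets in this article use the Python 2 urllib2 module; under Python 3 the same functionality lives in urllib.request and urllib.error. A rough equivalent sketch (the data: URL is used only so the example runs without a network; in practice you would pass a normal http URL):

```python
import urllib.request
import urllib.error

def fetchlink(url, headers=None):
    req = urllib.request.Request(url, headers=headers or {})
    try:
        response = urllib.request.urlopen(req)
    # HTTPError is a subclass of URLError, so catch it first
    except urllib.error.HTTPError as e:
        print('Got HTTP Error while retrieving:', url, 'code:', e.getcode())
    except urllib.error.URLError as e:
        print('Got URL Error while retrieving:', url, 'reason:', e.reason)
    else:
        return response.read()

# A data: URL needs no network, which makes the sketch easy to try out
html = fetchlink('data:text/html,<p>hello</p>')
print(html)  # b'<p>hello</p>'
```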

BeautifulSoup (documentation: http://www.crummy.com/software/beautifulsoup/bs4/doc/) is a concise HTML parsing library. Once you have the HTML, you could extract the information you want with Python's built-in regular expressions, but that is a little cumbersome. BeautifulSoup parses the HTML directly into a standard tree structure whose elements can be accessed directly; it also supports searching for elements, and so on.


import bs4

# parse the HTML, specifying the source encoding
content = bs4.BeautifulSoup(content, from_encoding='gb18030')
posts = content.find_all(class_='post-content')
for post in posts:
    posttext = post.find(class_='entry-title').get_text()

In this example, the content is first converted into a BS4 object, then all blocks with class="post-content" are found, and the text of the class="entry-title" element inside each is extracted. Note that an encoding can be specified when parsing; GB18030 is used here for Simplified Chinese.
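Putting it together, here is a self-contained sketch on a small inline HTML snippet. The post-content and entry-title class names mirror the example above and are specific to the site being scraped; substitute your own:

```python
from bs4 import BeautifulSoup

# Stand-in HTML; a real crawler would get this from fetchlink()
html = '''
<div class="post-content">
  <h2 class="entry-title">First post</h2>
  <p>Body text one.</p>
</div>
<div class="post-content">
  <h2 class="entry-title">Second post</h2>
  <p>Body text two.</p>
</div>
'''

soup = BeautifulSoup(html, 'html.parser')   # parse into a tree
titles = [post.find(class_='entry-title').get_text()
          for post in soup.find_all(class_='post-content')]
print(titles)  # ['First post', 'Second post']
```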


The above covers retrieving the HTML text content. If the page contains images, the code above will only capture links pointing to the original image locations. If you need to download them, you can use the urlretrieve method. There's not much more to say here.
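A minimal urlretrieve sketch (shown with the Python 3 name urllib.request.urlretrieve; the data: URL stands in for a real image URL so the example runs offline):

```python
import os
import tempfile
import urllib.request

# In a real crawler the first argument would be the image URL pulled
# out of the parsed HTML, e.g. img_tag['src'].
target = os.path.join(tempfile.gettempdir(), 'example_download.txt')
path, resp_headers = urllib.request.urlretrieve('data:text/plain,hello', target)

with open(path, 'rb') as f:
    print(f.read())  # b'hello'
```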

