My first Python crawler -- getting all the articles on my blog



I recently started learning Python and, as a simple exercise, wrote a crawler for my blog on cnblogs.

 

Python crawler script:

1. Get the title and URL of every article;

2. Get the personal information shown in the announcement board on the right side of the page.

#!/usr/bin/env python3.5
# -*- coding: utf-8 -*-

from bs4 import BeautifulSoup
import requests
import re

name = 'artech'
my_cnbolgs = 'http://www.cnblogs.com/' + name

# The article list. Each element is a dictionary of the form {'title': aaa, 'url': bbb}
article_list = []


def get_articles(name):
    page = get_total_pages_v2(name)  # get the total number of pages
    for i in range(1, page + 1):
        url = 'http://www.cnblogs.com/{}/default.html?page={}'.format(name, i)
        get_one_page_articles(url)


def get_one_page_articles(url):
    wb_data = requests.get(url)
    soup = BeautifulSoup(wb_data.text, 'lxml')
    articles = soup.select('a.postTitle2')

    for article in articles:
        data = {
            'title': article.get_text(),
            'url': article.get('href')
        }
        article_list.append(data)
    return article_list


# get_total_pages only slices out one character, so it fails for blogs with more than 9 pages
def get_total_pages(name):
    url = 'http://www.cnblogs.com/{}/default.html?page=2'.format(name)
    web = requests.get(url)
    soup = BeautifulSoup(web.text, 'lxml')
    page = soup.select('div.pager')
    num = page[1].get_text()[4:5]  # total number of pages, as a string
    return int(num)


def get_total_pages_v2(name):
    url = 'http://www.cnblogs.com/{}/default.html?page=2'.format(name)
    web = requests.get(url)
    soup = BeautifulSoup(web.text, 'lxml')
    page = soup.select('div.pager')
    num = page[1].get_text()[4:5]

    # print(page[1].get_text())
    SEARCH_PAT = re.compile(r'\d+')
    pat_search = SEARCH_PAT.search(page[1].get_text())
    if pat_search is not None:
        # print(pat_search.group())
        num = pat_search.group()
    return int(num)


SEARCH_PAT = re.compile(r'iops\s*=\s*(\d+)')  # module-level pattern, unused below


def get_info(name):
    url = 'http://www.cnblogs.com/mvc/blog/news.aspx?blogApp=' + name
    web = requests.get(url)
    soup = BeautifulSoup(web.text, 'lxml')
    info = soup.select('div > a')
    # nick = info[0].get_text()
    # age = info[1].get_text()
    # followers = info[2].get_text()
    # follwees = info[3].get_text()
    data = [info[i].get_text() for i in range(0, 4)]
    # nick, age, followers, follwees = data
    return data


if __name__ == '__main__':
    nick, age, followers, follwees = get_info(name)  # get the personal information
    get_articles(name)                               # get the article list
    print('nickname: ' + nick, 'age: ' + age, 'followers: ' + followers,
          'following: ' + follwees, 'articles: ' + str(len(article_list)))
    print()
    for i in article_list:
        print(i['title'], i['url'], sep='\n')
        print()

 

 

A few notes

1. Getting the information in the announcement board (fetching asynchronously loaded data)

Writing this crawler taught me more about how the page is put together: the blog page at http://www.cnblogs.com/luoxu34 contains only the body (the article list); everything in the sidebar is loaded asynchronously by JavaScript. The personal information therefore has to be fetched from the separate request the page makes (http://www.cnblogs.com/mvc/blog/news.aspx?blogApp=...), which is what get_info() does.
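A quick way to see this is to fetch both the static blog page and the news.aspx fragment and check which one actually contains the profile data. This is only a sketch: it assumes the two URLs from the script still respond the same way, and it uses the '昵称' (nickname) label as an assumed marker of the sidebar content.

# Minimal check (sketch): the static page should not contain the sidebar profile
# data, while the news.aspx endpoint used by get_info() should return it.
import requests

name = 'luoxu34'
page_html = requests.get('http://www.cnblogs.com/' + name).text
sidebar = requests.get('http://www.cnblogs.com/mvc/blog/news.aspx?blogApp=' + name).text

print('昵称' in page_html)  # expected False: the profile is not in the static HTML
print('昵称' in sidebar)    # expected True: it arrives via this asynchronous request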

2. Getting the total number of pages

# get_total_pages only slices out a single character,
# so it breaks once the blog has more than 9 pages.
def get_total_pages(name):
    url = 'http://www.cnblogs.com/{}/default.html?page=2'.format(name)
    web = requests.get(url)
    soup = BeautifulSoup(web.text, 'lxml')
    page = soup.select('div.pager')
    num = page[1].get_text()[4:5]  # total number of pages, as a single-character string
    return int(num)
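A small illustration of why the single-character slice is fragile. The pager strings below are made up; only their general shape matters, since the real text comes from page[1].get_text():

import re

one_digit = 'xxxx3 pages: 1 2 3'
two_digits = 'xxxx12 pages: 1 2 ... 12'

print(one_digit[4:5])                         # '3'  -- happens to work for 9 pages or fewer
print(two_digits[4:5])                        # '1'  -- silently wrong once there are 10+ pages
print(re.search(r'\d+', two_digits).group())  # '12' -- what the regex in get_total_pages_v2 finds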

 

The blog template used here does not display the total number of posts, so the script uses the length of article_list (via len) as the article count.

To get all the articles, you need to know the total number of pages.

def get_total_pages_v2(name):
    url = 'http://www.cnblogs.com/{}/default.html?page=2'.format(name)
    web = requests.get(url)
    soup = BeautifulSoup(web.text, 'lxml')
    page = soup.select('div.pager')
    num = page[1].get_text()[4:5]

    # print(page[1].get_text())
    SEARCH_PAT = re.compile(r'\d+')
    pat_search = SEARCH_PAT.search(page[1].get_text())
    if pat_search is not None:
        # print(pat_search.group())
        num = pat_search.group()
    return int(num)
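If parsing the pager ever becomes unreliable, another approach (a sketch, not part of the original script) is to skip the page count entirely and keep requesting pages until one comes back without any post links. It assumes the same URL pattern and the a.postTitle2 selector used above; if the site redirects out-of-range page numbers back to the last page, a duplicate check or an upper bound on the loop would also be needed.

import requests
from bs4 import BeautifulSoup

def get_all_articles_without_page_count(name):
    articles = []
    page = 1
    while True:
        url = 'http://www.cnblogs.com/{}/default.html?page={}'.format(name, page)
        soup = BeautifulSoup(requests.get(url).text, 'lxml')
        links = soup.select('a.postTitle2')
        if not links:  # an empty page means we have gone past the last page
            break
        for a in links:
            articles.append({'title': a.get_text(), 'url': a.get('href')})
        page += 1
    return articles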
