Python 2.7: crawl the Douban movie Top 250 and write it to TXT, Excel, and a MySQL database

Source: Internet
Author: User

1. Task

  • Crawl the Douban movie Top 250
  • Save the results as a txt file
  • Save them as an Excel document
  • Insert the data into a MySQL database
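The Top 250 list is split across 10 pages of 25 entries each, selected by a start query parameter. A quick sketch of the page URLs the crawler will visit (the helper function name is my own, not from the article):

```python
# Each Top 250 results page shows 25 movies; the start parameter selects the offset.
BASE = 'https://movie.douban.com/top250?start='

def page_urls(pages=10, per_page=25):
    """Return the URL of every Top 250 results page."""
    return [BASE + str(p * per_page) for p in range(pages)]

urls = page_urls()
print(urls[0])   # https://movie.douban.com/top250?start=0
print(urls[-1])  # https://movie.douban.com/top250?start=225
```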
2. Analysis
  • How the Chinese movie titles are extracted is covered here: http://www.cnblogs.com/carpenterworm/p/6026274.html
  • Extracting the movie links:

Looking at the page source, each movie link is placed in an <a class="" href=""> tag, so it can be captured with re.compile(r'<a class="" href="(.*)">').

Note that if you bake the URL prefix into the pattern, e.g. re.compile(r'<a class="" href="https://movie.douban.com/subject/(.*)/">'), the capture is only the numeric subject ID (such as 1292052) rather than the full link.
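A minimal Python sketch contrasting the two patterns on a sample anchor tag (the HTML fragment below is fabricated for illustration, in the shape the Top 250 list pages use):

```python
import re

# A sample <a> tag shaped like those on the Top 250 list pages (fabricated here).
html = '<a class="" href="https://movie.douban.com/subject/1292052/">'

# Pattern 1: capture the whole link.
full = re.findall(r'<a class="" href="(.*)">', html)
print(full)  # ['https://movie.douban.com/subject/1292052/']

# Pattern 2: the prefix is part of the pattern, so only the ID is captured.
ids = re.findall(r'<a class="" href="https://movie.douban.com/subject/(.*)/">', html)
print(ids)   # ['1292052']
```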

3. Code. Overall the program is fairly simple, so it is listed below without a line-by-line walkthrough.
```python
#!/usr/bin/python
# -*- coding: utf-8 -*-
import requests, sys, re, openpyxl, MySQLdb, time
from bs4 import BeautifulSoup

reload(sys)
sys.setdefaultencoding('utf-8')

print 'Fetching data from the Douban movie Top 250 ...'

# -------- lists for storing the scraped data -------- #
nameList = []
linkList = []

# -------- crawler module -------- #
def topMovie():
    for page in range(10):
        url = 'https://movie.douban.com/top250?start=' + str(page * 25)
        print 'Crawling page --- ' + str(page + 1) + ' --- ...'
        html = requests.get(url)
        html.raise_for_status()
        try:
            soup = BeautifulSoup(html.text, 'html.parser')
            soup = str(soup)  # convert the parsed page to a string for regex matching
            name = re.compile(r'<span class="title">(.*)</span>')
            links = re.compile(r'<a class="" href="(.*)">')
            movieNames = re.findall(name, soup)
            movieLinks = re.findall(links, soup)
            for name in movieNames:
                if name.find('/') == -1:  # drop English titles (they contain '/')
                    nameList.append(name)
            for link in movieLinks:
                linkList.append(link)
        except Exception as e:
            print e
    print 'Crawling finished!'
    return nameList, linkList

# -------- save as a text file -------- #
def save_to_txt():
    print 'Saving txt file ...'
    try:
        f = open('data.txt', 'w')
        for i in range(250):
            f.write(nameList[i])
            f.write('\t' * 3)
            f.write(linkList[i])
            f.write('\n')
        f.close()
    except Exception as e:
        print e
    print 'txt file saved!'

# -------- save as an Excel file -------- #
def save_to_Excel():
    print 'Saving Excel file ...'
    try:
        wb = openpyxl.Workbook()
        sheet = wb.get_active_sheet()
        sheet.title = 'movie Top 250'
        for i in range(1, 251):
            one = 'A' + str(i)
            two = 'B' + str(i)
            sheet[one] = nameList[i - 1]
            sheet[two] = linkList[i - 1]
        wb.save(ur'doupai top250.xlsx')  # ur'' prefix so a non-ASCII file name works
    except Exception as e:
        print e
    print 'Excel file saved!'

# -------- store the data in the database -------- #
def save_to_MySQL():
    print 'Saving to MySQL database ...'
    try:
        conn = MySQLdb.connect(host="127.0.0.1", user="root",
                               passwd="******", db="test", charset="utf8")
        cursor = conn.cursor()
        print 'Database connection successful'
        cursor.execute('DROP TABLE IF EXISTS MovieTop250')  # drop the table if it exists
        time.sleep(3)
        cursor.execute('''CREATE TABLE IF NOT EXISTS MovieTop250(
                              movieName VARCHAR(200),
                              link VARCHAR(200))''')
        for i in range(250):
            sql = 'INSERT INTO MovieTop250(movieName, link) VALUES (%s, %s)'
            param = (nameList[i], linkList[i])
            cursor.execute(sql, param)
        conn.commit()
        cursor.close()
        conn.close()
    except Exception as e:
        print e
    print 'MySQL storage finished!'

# -------- main module -------- #
if __name__ == "__main__":
    try:
        topMovie()
        save_to_txt()
        save_to_Excel()
        save_to_MySQL()
    except Exception as e:
        print e
```
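The database routine relies on DB-API parameterized queries (VALUES (%s, %s) with a separate parameter tuple) instead of string formatting, which avoids SQL injection and quoting bugs. The same pattern can be tried with the standard-library sqlite3 module; this is a self-contained sketch using an in-memory database, not the article's MySQL code, and note that sqlite3 uses ? placeholders where MySQLdb uses %s:

```python
import sqlite3

# An in-memory database stands in for the MySQL "test" database.
conn = sqlite3.connect(':memory:')
cursor = conn.cursor()
cursor.execute('DROP TABLE IF EXISTS MovieTop250')
cursor.execute('''CREATE TABLE IF NOT EXISTS MovieTop250(
                      movieName VARCHAR(200),
                      link VARCHAR(200))''')

# Fabricated sample rows in place of the scraped nameList/linkList data.
rows = [('Movie %d' % i, 'https://example.com/%d' % i) for i in range(3)]
for name, link in rows:
    # sqlite3 paramstyle is '?'; MySQLdb would use '%s' here.
    cursor.execute('INSERT INTO MovieTop250(movieName, link) VALUES (?, ?)',
                   (name, link))
conn.commit()

cursor.execute('SELECT COUNT(*) FROM MovieTop250')
count = cursor.fetchone()[0]
print(count)  # 3
conn.close()
```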

 
