Python Web crawler (News capture script)

Source: Internet
Author: User
Tags: python, web crawler

===================== crawler principle =====================

Access the news homepage through Python, extract all of the news links on it, and store them in a URL set.
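A minimal sketch of this first step, using the same libraries as the full script below (the start page may no longer be online, so the request itself can fail):

import re
import bs4
import urllib.request

home = 'http://baijia.baidu.com/'                        # start page
pattern = r'http://\w+\.baijia\.baidu\.com/article/\w+'  # article-link pattern

html = urllib.request.urlopen(home).read().decode('utf8')
soup = bs4.BeautifulSoup(html, 'html.parser')
url_set = {link['href'] for link in soup.find_all('a', href=re.compile(pattern))}
print(len(url_set), 'article links found')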

Pop a URL from the set and fetch the page it points to; parse any new article URLs out of the source code and add them to the set.

To prevent duplicate visits, keep a history set of already-visited URLs and filter newly found links against it.
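That frontier / visited-set discipline boils down to a few lines (the URLs here are made up):

url_set = {'http://baijia.baidu.com/article/a1'}   # frontier: URLs still to crawl
url_old = set()                                    # history: URLs already visited

url = url_set.pop()     # take one URL out of the frontier
url_old.add(url)        # record it so it is never queued again

for href in ['http://baijia.baidu.com/article/a1',    # already seen, so skipped
             'http://baijia.baidu.com/article/a2']:   # new, so queued
    if href not in url_old:
        url_set.add(href)

print(url_set)   # {'http://baijia.baidu.com/article/a2'}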

Parse the DOM tree, extract the article's information, and store it in an Article object.
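For illustration, a minimal parsing sketch against a made-up snippet of markup; the id and class names (page, article-info, name, time, article-detail) are the ones the script below expects and are specific to how the target pages happened to be structured:

import bs4

# Made-up markup standing in for a fetched article page.
html = '''<div id="page">
  <h1>Example title</h1>
  <div class="article-info"><a class="name">Example author</a>
    <span class="time">2016-01-01</span></div>
  <blockquote>Example summary</blockquote>
  <div class="article-detail"><p>First paragraph.</p><p>Second paragraph.</p></div>
</div>'''

soup = bs4.BeautifulSoup(html, 'html.parser')
page = soup.find('div', {'id': 'page'})
title = page.find('h1').get_text()
info = page.find('div', {'class': 'article-info'})
author = info.find('a', {'class': 'name'}).get_text()
date = info.find('span', {'class': 'time'}).get_text()
about = page.find('blockquote').get_text()
content = '\n'.join(p.get_text() for p in
                    page.find('div', {'class': 'article-detail'}).find_all('p'))
print(title, author, date, about, content, sep='\n')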

Save the data held in the Article object to the database through pymysql.
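A rough sketch of that storage step (not the script's exact code): the values travel as a parameter tuple so pymysql escapes any quotes inside the article text, and the connection parameters are placeholders:

import pymysql

# Placeholder connection parameters; adjust to your own MySQL instance.
connect = pymysql.connect(host='localhost', port=3306, user='user',
                          passwd='password', db='python', charset='utf8')
cursor = connect.cursor()

# Values for a hypothetical article, passed separately from the SQL text.
sql = ("INSERT INTO news(url, title, author, date, about, content) "
       "VALUES (%s, %s, %s, %s, %s, %s)")
data = ('http://baijia.baidu.com/article/example', 'Title', 'Author',
        '2016-01-01', 'Summary', "Body text, quotes and all: it's escaped for us.")
cursor.execute(sql, data)   # pymysql quotes and escapes each value
connect.commit()

cursor.close()
connect.close()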

Each time a record is stored successfully, increment a counter and print the article title; otherwise print the error message.

When all URLs in the set have been read, or the number of stored records reaches the configured limit, the program ends.
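Putting the counting and the two stop conditions together, the main loop has roughly this shape; process() here is a made-up stand-in for the fetch/parse/store work:

# A sketch of the control flow only; process() is a hypothetical placeholder.
MaxCount = 10   # stop after this many stored articles
count = 0

def process(url):
    '''Stand-in for fetching, parsing and storing one article.'''
    return 'title of ' + url

url_set = {'http://baijia.baidu.com/article/a1',
           'http://baijia.baidu.com/article/a2'}
url_old = set()

while len(url_set) != 0:              # stop when the frontier is exhausted...
    url = url_set.pop()
    url_old.add(url)
    try:
        title = process(url)
    except Exception as e:
        print(e)                      # on failure, print the error and move on
        continue
    else:
        print(title)                  # on success, print the title...
        count += 1                    # ...and bump the counter
    finally:
        if count == MaxCount:         # ...or stop once enough articles are stored
            break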

===================== storage Structure =====================

CREATE TABLE `news` (
    `id`      int(6) unsigned NOT NULL AUTO_INCREMENT,
    `url`     varchar(255)    NOT NULL,
    `title`   varchar(...)    NOT NULL,
    `author`  varchar(...)    NOT NULL,
    `date`    varchar(...)    NOT NULL,
    `about`   varchar(255)    NOT NULL,
    `content` text            NOT NULL,
    PRIMARY KEY (`id`),
    UNIQUE KEY `url_unique` (`url`)
) ENGINE=InnoDB DEFAULT CHARSET=utf8;
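The widths left unspecified above only need to be large enough for the data. As a sketch, the table can also be created from Python through pymysql, using an illustrative 255 for those columns and placeholder connection parameters:

import pymysql

# Placeholder connection parameters; adjust to your own MySQL instance.
connect = pymysql.connect(host='localhost', port=3306, user='user',
                          passwd='password', db='python', charset='utf8')
cursor = connect.cursor()
cursor.execute('''
    CREATE TABLE IF NOT EXISTS `news` (
        `id`      int(6) unsigned NOT NULL AUTO_INCREMENT,
        `url`     varchar(255) NOT NULL,
        `title`   varchar(255) NOT NULL,   -- illustrative width
        `author`  varchar(255) NOT NULL,   -- illustrative width
        `date`    varchar(255) NOT NULL,   -- illustrative width
        `about`   varchar(255) NOT NULL,
        `content` text NOT NULL,
        PRIMARY KEY (`id`),
        UNIQUE KEY `url_unique` (`url`)
    ) ENGINE=InnoDB DEFAULT CHARSET=utf8''')
connect.commit()
cursor.close()
connect.close()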

===================== script code =====================

" "Baidu Hundreds of news collection" "ImportRe#Network Connection ModuleImportBs4#DOM Parsing moduleImportPymysql#Database Connection ModuleImportUrllib.request#Network access Module#Configuration ParametersMaxcount = 1000#Number of dataHome ='http://baijia.baidu.com/'  #Start Position#Database Connection ParametersDb_config = {    'Host':'localhost',    'Port':'3310',    'username':'Woider',    'Password':'3243',    'Database':'python',    'CharSet':'UTF8'}url_set= Set ()#URL CollectionUrl_old = set ()#Expired URLs#Get Homepage Linkhtml = Urllib.request.urlopen (home). Read (). Decode ('UTF8') Soup= BS4. BeautifulSoup (HTML,'Html.parser') Pattern='http://\w+\.baijia\.baidu\.com/article/\w+'links= Soup.find_all ('a', href=Re.compile (pattern)) forLinkinchLinks:url_set.add (link['href'])#Article Class Definitionsclassarticle (object):def __init__(self): Self.url=None Self.title=None Self.author=None self.date=None self.about=None self.content=None#connecting to a databaseConnect =Pymysql. Connect (Host=db_config['Host'], Port=int (db_config['Port']), user=db_config['username'], passwd=db_config['Password'], DB=db_config['Database'], CharSet=db_config['CharSet']) Cursor=connect.cursor ()#Handling URL InformationCount =0 whileLen (url_set)! =0:Try:        #Get linksURL =Url_set.pop () url_old.add (URL)#Get Codehtml = urllib.request.urlopen (URL). read (). Decode ('UTF8')
#Dom parsingSoup = bs4. BeautifulSoup (HTML,'Html.parser') Pattern='http://\w+\.baijia\.baidu\.com/article/\w+' #Link matching rulesLinks = Soup.find_all ('a', href=Re.compile (pattern))#Get URL forLinkinchLinks:iflink['href'] not inchUrl_old:url_set.add (link['href']) #data anti-weightsql ="SELECT ID from news WHERE url = '%s '"Data=(URL,) cursor.execute (SQL%data)ifCursor.rowcount! =0:RaiseException ('Data Repeat Exception:'+URL)#Get informationArticle =article () Article.url= URL#URL Informationpage = Soup.find ('Div', {'ID':'page'}) Article.title= Page.find ('H1'). Get_text ()#Title Informationinfo = Page.find ('Div', {'class':'Article-info'}) Article.author= Info.find ('a', {'class':'name'}). Get_text ()#Author InformationArticle.date = Info.find ('span', {'class':' Time'}). Get_text ()#date informationArticle.about = Page.find ('blockquote'). Get_text () Pnode= Page.find ('Div', {'class':'Article-detail'}). Find_all ('P') Article.content="' forNodeinchPnode:#Get article paragraphArticle.content + = Node.get_text () +'\ n' #Append paragraph information #Storing Datasql ="INSERT into News (URL, title, author, date, about, content)"SQL= SQL +"VALUES ('%s ', '%s ', '%s ', '%s ', '%s ', '%s ')"Data=(Article.url, Article.title, Article.author, Article.date, Article.about, article.content) cursor.execute (sql %data) Connect.commit ()exceptException as E:Print(e)Continue Else: Print(article.title) Count+ = 1finally: #Determine if data collection is complete ifCount = =Maxcount: Break#To close a database connectioncursor.close () connect.close ( )
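One caveat with the script as written: it builds its SQL strings with the % operator, so an article whose title or body contains a single quote will make the INSERT fail, and the pattern is unsafe against injection in general. If you adapt the script, passing the value tuple as the second argument to cursor.execute, as in the storage sketch earlier, avoids both problems. Also note that the start page may no longer be online in this form, so the URL pattern and DOM selectors may need adapting to whatever site you point the crawler at.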

===================== Running Results =====================

Set parameters MaxCount = 10, home = 'http://baijia.baidu.com/'

Query the data: SELECT title, author FROM python.news;
