Demand is the driving force of learning.
Because we cannot reach the external network at work, reading the news is painful, so I decided to crawl the pages, save them, and read them offline. Today's target is Cnbeta technology news; the crawl address is http://m.cnbeta.com/wap/index.htm?page=1, and the first 5 pages are enough. The code is as follows:
#!/usr/bin/python
# -*- coding: utf-8 -*-
import urllib2
import time
import sys
from bs4 import BeautifulSoup

reload(sys)
sys.setdefaultencoding('utf-8')  # so article titles can be written out as UTF-8

headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
mainurl = 'http://m.cnbeta.com'

n = 0
f = open('cnbeta.txt', 'a')  # index file: one "number,title,URL" line per article
for i in range(1, 6):  # pages 1 to 5
    addr = 'http://m.cnbeta.com/wap/index.htm?page=' + str(i)
    req = urllib2.Request(addr, headers=headers)
    wb = urllib2.urlopen(req).read()
    # keep a copy of the listing page itself
    page_file = open(str(i) + 'cnbetamain.html', 'w')
    page_file.write(wb)
    page_file.close()
    soup = BeautifulSoup(wb, 'html.parser')
    for elv in soup.find_all('div', {'class': 'list'}):
        n += 1
        url = elv.find('a', href=True).get('href')
        name = elv.find('a', href=True).get_text()
        print name + ',' + mainurl + url
        f.write(str(n) + ',' + name + ',' + mainurl + url + '\n')
        try:
            # fetch the article itself and save it under its title
            html = urllib2.urlopen(urllib2.Request(mainurl + url, headers=headers)).read()
            article_file = open(name + '.html', 'w')
            article_file.write(html)
            article_file.close()
        except:
            print 'not found'
        time.sleep(1)  # pause between requests to be polite to the server
f.close()
print 'over'
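Run the script with a Python 2 interpreter (it uses urllib2). It writes a numbered index of titles and URLs to cnbeta.txt, saves each listing page as 1cnbetamain.html through 5cnbetamain.html, and saves every article in the current folder as an HTML file named after its title.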
First we need to fetch the listing pages, looping over the page addresses. Note that many websites block requests that look like they come from a machine, so we need to send request headers; the all-purpose one is:
headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
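As a minimal sketch of sending these headers with urllib2 (the page-1 URL here is just the listing address from above):

import urllib2

headers = {'User-Agent': 'Mozilla/5.0 (Windows; U; Windows NT 6.1; en-US; rv:1.9.1.6) Gecko/20091201 Firefox/3.5.6'}
req = urllib2.Request('http://m.cnbeta.com/wap/index.htm?page=1', headers=headers)
page = urllib2.urlopen(req).read()  # raw HTML of the listing page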
To get the data we need from each page, we have to find the articles and their addresses, so we parse the HTML with BeautifulSoup. BeautifulSoup processes the document hierarchically: you can search by tags such as head, body, div, and title, or by attributes such as href. Here we search for div elements with class="list". After finding an article's address, the script opens that URL and saves the page in the current folder, named after the article title.
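For illustration, here is a small self-contained sketch of that extraction step. The HTML fragment is made up to mimic the listing structure; the real page markup may differ:

from bs4 import BeautifulSoup

# Hypothetical fragment imitating the cnbeta listing page
sample = '''
<div class="list"><a href="/wap/view/123.htm">Article one</a></div>
<div class="list"><a href="/wap/view/456.htm">Article two</a></div>
'''
soup = BeautifulSoup(sample, 'html.parser')
for item in soup.find_all('div', {'class': 'list'}):
    link = item.find('a', href=True)
    # title and absolute article address, as the crawler prints them
    print link.get_text() + ',' + 'http://m.cnbeta.com' + link.get('href')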