I. Using requests to crawl course images from imooc.com
Website: http://www.imooc.com/course/list?page=1
Step analysis
1. Import Module
2. Grab the page source; there are two ways:
Use requests.get to fetch it directly
Or open a TXT file, use Ctrl+F in the page source to locate the part you want to match, and paste that part into the TXT file; the next step matches against it
3. Use regular expressions to match the image URLs in the source
4. Download the images in a for loop and save them to a local folder
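Before running the full script, the matching step (3) can be tried on a small snippet. A minimal sketch (the sample markup and helper name are illustrative, not real imooc source):

```python
import re

def find_image_urls(html):
    """Step 3: return every src="..." value found in the page source."""
    return re.findall('src="(.*?)"', html, re.S)

# Illustrative snippet standing in for the saved page source
sample = '<img src="http://img.example.com/a.jpg"><img src="http://img.example.com/b.jpg">'
print(find_image_urls(sample))
```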
The code is as follows:
```python
# Step 1: import the modules
import re
import requests

# Step 2: read the saved page source (paste the part of the
# source you want to match into this file beforehand)
f = open('Tupian5-4-2.txt', 'r')
html = f.read()
f.close()

# Step 3: match the image URLs
pic_url = re.findall('src="(.*?)"', html, re.S)

# Step 4: download each image and save it locally
i = 0
for each in pic_url:
    print('now downloading: ' + each)
    pic = requests.get(each)
    fp = open('pic\\' + str(i) + '.jpg', 'wb')  # where the image is saved
    fp.write(pic.content)
    fp.close()
    i += 1
print('Well done!')
```
The result is as follows:
Here I only crawled the images on the first page; by changing the page parameter in the URL you can crawl the course images on other pages.
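To crawl several pages, only the page parameter in the URL needs to change. A sketch (the helper names are mine; the download loop needs network access, so it is kept in a separate function):

```python
import re

def page_url(page):
    """Build the course-list URL for a given page number."""
    return 'http://www.imooc.com/course/list?page=' + str(page)

def crawl_pages(first, last):
    """Download every image found on pages first..last (needs network access)."""
    import requests
    count = 0
    for p in range(first, last + 1):
        html = requests.get(page_url(p)).text
        for src in re.findall('src="(.*?)"', html, re.S):
            pic = requests.get(src)
            with open('pic\\' + str(count) + '.jpg', 'wb') as fp:
                fp.write(pic.content)
            count += 1
```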
II. Using requests to crawl web page data
Crawl data from Liao Xuefeng's website. URL: http://www.liaoxuefeng.com/wiki/001374738125095c955c1e6d8bb493182103fac9270762a000
Step analysis
1. Import Module
2. Fetch source Code
3. Regular expressions match the data information to be crawled
4. Use the For loop to save crawled data to a TXT document
The code is as follows:

```python
import re
import requests

# (The original Python 2 code set sys.setdefaultencoding("GB18030") here so
# Chinese text would display correctly; in Python 3 strings are Unicode, and
# setting the response encoding below is enough.)

# Fetch the page source
url = 'http://www.liaoxuefeng.com/wiki/001374738125095c955c1e6d8bb493182103fac9270762a000'
header = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
                        '(KHTML, like Gecko) Chrome/45.0.2454.101 Safari/537.36'}
htmll = requests.get(url, headers=header)
htmll.encoding = 'utf-8'  # the page contains Chinese text
html = htmll.text
# print(html)
```
How to match
First match the URLs of the directory pages, then crawl the data from each directory URL.
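The directory-link pattern can be checked against a tiny hand-made snippet first (the snippet below is illustrative, not real markup from the site):

```python
import re

sample = '<li id="1"><a href="/wiki/001">Intro</a></li><li id="2"><a href="/wiki/002">Setup</a></li>'
links = re.findall('<li id=.*?>.*?<a href="(.*?)">.*?</a>', sample, re.S)
print(links)  # -> ['/wiki/001', '/wiki/002']
```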
```python
# Match the directory-page URLs
pattern = re.findall('<li id=.*?>.*?<a href="(.*?)">.*?</a>', html, re.S)
# content = re.findall('<p>(.*?)</p>', html, re.S)

for each in pattern:
    page = 'http://www.liaoxuefeng.com' + each
    htmll2 = requests.get(page, headers=header)
    html2 = htmll2.text
    print(page)
    # Match the paragraph text on each directory page
    pattern2 = re.findall('<p>(.*?)</p>', html2, re.S)
    for each2 in pattern2:
        print(each2)
        f = open('Liaoxuefeng3.txt', 'a+')
        f.write(each2)
        f.close()
print('Crawl complete')
```
The results are as follows:
Because the matching here uses regular expressions, the crawled data is not very clean; using BeautifulSoup instead would solve that problem.
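BeautifulSoup itself is not shown here, but the same clean-up idea (keep only the text inside each <p> tag, dropping nested markup) can be sketched with just the standard library's html.parser; the class name is mine:

```python
from html.parser import HTMLParser

class ParagraphText(HTMLParser):
    """Collect the text inside <p> tags, similar to what BeautifulSoup's
    find_all('p') plus get_text() would return."""
    def __init__(self):
        super().__init__()
        self.in_p = False
        self.paragraphs = []
    def handle_starttag(self, tag, attrs):
        if tag == 'p':
            self.in_p = True
            self.paragraphs.append('')
    def handle_endtag(self, tag):
        if tag == 'p':
            self.in_p = False
    def handle_data(self, data):
        if self.in_p:
            self.paragraphs[-1] += data

parser = ParagraphText()
parser.feed('<div><p>Hello <b>world</b></p><p>Bye</p></div>')
print(parser.paragraphs)  # ['Hello world', 'Bye']
```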