Python: bulk-download American TV dramas (HR-HDTV) from Renren TV (YYeTs)

Source: Internet
Author: User
Tags: xml, parser

I like watching American TV dramas, especially the two-word HR-HDTV releases at 1024 resolution published by YYeTs (Renren TV). I wrote a script that collects all the HR-HDTV ed2k download links for a given show and writes them to a text file, so a download tool can fetch them in bulk. The source code is as follows:

# Python 3 implementation; the example below crawls three American dramas
import urllib.request
import re

def get_links(url, name='yyets'):
    data = urllib.request.urlopen(url).read().decode()
    pattern = r'"(ed2k://\|file\|[^"]+?\.(s\d+)(e\d+)[^"]+?1024x576[^"]+?)"'
    links_find = set(re.findall(pattern, data))
    links_dict = {}
    total = len(links_find)
    for i in links_find:
        # sort key: season * 100 + episode, e.g. s02e05 -> 205
        links_dict[int(i[1][1:3]) * 100 + int(i[2][1:3])] = i
    with open(name + '.txt', 'w') as f:
        for i in sorted(links_dict.keys()):
            f.write(links_dict[i][0] + '\n')
            print(links_dict[i][0])
    print('Get download links of:', name, str(total))

if __name__ == '__main__':
    # ---------- Prison Break, Shameless, Game of Thrones ----------
    get_links('http://www.yyets.com/resource/10004', 'prison_break')
    get_links('http://www.yyets.com/resource/10760', 'shameless')
    get_links('http://www.yyets.com/resource/d10733', 'game_of_thrones')
    print('All is okay!')
This Python crawler is fairly short and uses just two modules: urllib.request, which fetches the web page, and re, which parses the text. YYeTs does not restrict crawler access, so there is no need to modify the HTTP User-Agent header. For sites that do block crawlers, you need to set a browser-like User-Agent value. One approach is to construct a Request object via the constructor of the Request class in urllib.request, passing the headers to it, and then hand that object to the module's urlopen(); the crawler is thus disguised as a browser. For example, CSDN blocks crawlers, so you need to modify the User-Agent value, like this:
import urllib.request

url = 'http://blog.csdn.net/csdn'
head = {'User-Agent': 'Mozilla/5.0 (compatible; MSIE 10.0; Windows NT 6.1; Trident/6.0)'}
req = urllib.request.Request(url, headers=head)
data = urllib.request.urlopen(req, timeout=2).read().decode()
print(data)

After fetching the page comes parsing of the HTML document. The regular expression module is very convenient for extracting a specific, single piece of content. If you need more complex parsing, you can use pyquery or Beautiful Soup; both are HTML/XML parsers written in Python, and pyquery has a jQuery-style API, which makes it especially handy.
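To illustrate the parser-based approach without any third-party dependency, here is a minimal sketch using the standard library's html.parser; pyquery and Beautiful Soup offer far more convenient selectors, and the HTML fragment below is a made-up stand-in for a downloaded page:

```python
from html.parser import HTMLParser

class Ed2kLinkParser(HTMLParser):
    """Collects href attributes that look like ed2k links."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for the tag's attributes
        for name, value in attrs:
            if name == 'href' and value and value.startswith('ed2k://'):
                self.links.append(value)

# Hypothetical HTML fragment standing in for a crawled page
html = '<a href="ed2k://|file|Show.s01e01.1024x576.mkv|1|AB|/">ep1</a>'
parser = Ed2kLinkParser()
parser.feed(html)
print(parser.links)
```

Unlike a regex, the parser keeps working even if attribute order or whitespace inside the tag changes.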

On regular expressions: RegexBuddy is a tool with powerful regex debugging features, and the regular expression in the script above was debugged with it. This blog post on Python regular expressions is also worth reading: A Python Regular Expression Guide.
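To see what the script's pattern actually captures, here is a small self-contained check against a made-up sample string (the two ed2k links below are fabricated for illustration):

```python
import re

# Same pattern as in the crawler: capture the whole ed2k link plus the
# season (s\d+) and episode (e\d+) tokens, restricted to 1024x576 releases
pattern = r'"(ed2k://\|file\|[^"]+?\.(s\d+)(e\d+)[^"]+?1024x576[^"]+?)"'

sample = ('... "ed2k://|file|Show.s01e02.1024x576.mkv|12345|ABCDEF|/" ...'
          ' "ed2k://|file|Show.s02e01.1024x576.mkv|67890|FEDCBA|/" ...')

matches = re.findall(pattern, sample)
for link, season, episode in matches:
    # season * 100 + episode gives a sortable key: s01e02 -> 102, s02e01 -> 201
    key = int(season[1:]) * 100 + int(episode[1:])
    print(key, link)
```

Each match is a tuple of the three capture groups, which is exactly what the crawler relies on when it builds its sort keys.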

To strengthen the crawler further, you can use the crawler framework Scrapy; see Scrapy's official tutorial. Also, if the page content is generated by JavaScript, you need a JS engine: PyV8 is worth a try, and there are also JS-based crawling tools such as CasperJS and PhantomJS.

Original address: http://blog.csdn.net/thisinnocence/article/details/39997883

