Recently a friend asked me to help scrape a web novel. The first 30 chapters are free to read, but the later chapters require a paid VIP account. The approach: log in with a VIP account, construct the session cookie by hand, then use Python to fetch each chapter's URL, download the content, and parse it with pyquery.
Note: to construct the cookie, first log in with Chrome or Firefox, inspect the site's cookies in the developer console, and copy the name/value pairs into the script by hand.
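A small helper (my own sketch, not part of the original post) can save some of that manual copying: paste the `Cookie` request header from DevTools as one string and split it into name/value pairs.

```python
def parse_cookie_header(header):
    """Split a raw Cookie header like 'a=1; b=2' into [('a', '1'), ('b', '2')]."""
    pairs = []
    for part in header.split(';'):
        part = part.strip()
        if not part:
            continue
        # split at the FIRST '=' only, since cookie values may contain '='
        name, _, value = part.partition('=')
        pairs.append((name.strip(), value.strip()))
    return pairs

print(parse_cookie_header("ASP.NET_SessionId=abc123; userid=42"))
```

Each resulting pair can then be fed to the `make_cookie` helper shown below.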
Step one: manually construct a cookie to bypass the login page. [I am not cracking any accounts here; if you came looking for that, please turn around.]
```python
#!/usr/bin/python
# version 2.7

import HTMLParser
import urlparse
import urllib
import urllib2
import cookielib
import string
import re

cj = cookielib.LWPCookieJar()

def make_cookie(name, value):
    return cookielib.Cookie(
        version=0,
        name=name,
        value=value,
        port=None,
        port_specified=False,
        domain="yourdomain",          # <-- change to the site's domain
        domain_specified=True,
        domain_initial_dot=False,
        path="/",
        path_specified=True,
        secure=False,
        expires=None,
        discard=False,
        comment=None,
        comment_url=None,
        rest=None
    )

# add one set_cookie call per name/value pair copied from the browser
cj.set_cookie(make_cookie("name", "value"))

cookie_support = urllib2.HTTPCookieProcessor(cj)
opener = urllib2.build_opener(cookie_support, urllib2.HTTPHandler)
urllib2.install_opener(opener)

# <-- change to the page you want to crawl
request = "http://vip.xxx.com/m/xxx.aspx?novelid=12&chapterid=100&page=1"

response = urllib2.urlopen(request)
text = response.read()
print text
```
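The script above is Python 2 (`urllib2`, `cookielib`). A Python 3 equivalent of the same approach, using `http.cookiejar` and `urllib.request`, might look like this; the domain, cookie values, and URL are placeholders you must replace with the real site's values, and the GBK charset is an assumption about the target page.

```python
import http.cookiejar
import urllib.request

cj = http.cookiejar.LWPCookieJar()

def make_cookie(name, value, domain="yourdomain"):
    # same field-by-field construction as the Python 2 version
    return http.cookiejar.Cookie(
        version=0, name=name, value=value,
        port=None, port_specified=False,
        domain=domain, domain_specified=True, domain_initial_dot=False,
        path="/", path_specified=True,
        secure=False, expires=None, discard=False,
        comment=None, comment_url=None,
        rest={},  # an empty dict, not None, so cookie-jar helpers don't choke
    )

cj.set_cookie(make_cookie("name", "value"))

# every request made through the installed opener now carries the cookie
opener = urllib.request.build_opener(urllib.request.HTTPCookieProcessor(cj))
urllib.request.install_opener(opener)

def fetch(url):
    with urllib.request.urlopen(url) as response:
        # many Chinese novel sites serve GBK; adjust to the page's charset
        return response.read().decode("gbk", errors="replace")

# fetch("http://vip.xxx.com/m/xxx.aspx?novelid=12&chapterid=100&page=1")
```

The network call is left commented out; once the cookie is in the jar, `fetch` returns the chapter HTML ready for parsing.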
Note: change the `domain` value inside `make_cookie` to the site's domain, add one `cj.set_cookie(make_cookie(...))` call per cookie entry, and replace the request URL with the page you want to crawl. The cookie names and values are the ones visible in Chrome while logged in to the account.
References: Baidu Zhidao; Cnblogs (Blog Park): simulating login with Python.
Python: crawling website content behind a login (a web novel)