The techniques covered in parts (1) and (2) are enough for crawlers targeting sites that serve data without requiring a login.
Social networking sites are different: their distinctive feature is that you must log in, or much of the content cannot be fetched at all. In my testing, Weibo and Zhihu are not easy to log in to: Zhihu sometimes presents a verification code similar to 12306's, and Weibo, in addition to its verification code, base64-encodes the username before sending it as a request parameter. Below is a simple Douban login and a simple crawl.
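As an aside, the base64 step that Weibo applies to the username is easy to reproduce. This is only a sketch of the encoding itself; the surrounding request parameters and field names Weibo actually uses are not shown here, and the username value is illustrative:

```python
import base64

# Encode an account name the way Weibo does before posting it as a
# login parameter (the rest of Weibo's login request is omitted).
username = 'user@example.com'  # illustrative value
encoded = base64.b64encode(username.encode('utf-8')).decode('ascii')
print(encoded)  # dXNlckBleGFtcGxlLmNvbQ==
```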
In a Chromium-based browser, right-click the page, choose Inspect, select the Network tab, and then log in to your account.
The Network panel records every GET and POST request, along with its URL and other details.
Scroll down in the login request to find the Form Data section (on Weibo, the Form Data is encrypted).
The Form Data contains everything needed to log in. The captcha-solution field holds the verification code; it is only sometimes present in the request, so the code must check whether a captcha appeared before filling it in.
import requests
import re
from bs4 import BeautifulSoup
import sys
reload(sys)
sys.setdefaultencoding('utf-8')

s = requests.Session()
url_login = 'http://accounts.douban.com/login'
url_contacts = 'https://www.douban.com/contacts/list'

formdata = {
    'source': 'index_nav',
    'redir': 'https://www.douban.com',
    'form_email': '22222',
    'form_password': '111111',
    'login': u'登录',
}
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}

r = s.post(url_login, data=formdata, headers=headers)
content = r.text
soup = BeautifulSoup(content, 'lxml')
captcha = soup.find('img', id='captcha_image')
if captcha:
    captcha_url = captcha['src']
    re_captcha_id = r'<input type="hidden" name="captcha-id" value="(.*?)"/'
    captcha_id = re.findall(re_captcha_id, content)
    print captcha_id
    print captcha_url
    captcha_text = raw_input('Please enter the verification code: ')
    formdata['captcha-solution'] = captcha_text
    # findall returns a list; send only the first match
    formdata['captcha-id'] = captcha_id[0]
    r = s.post(url_login, data=formdata, headers=headers)
With that, the login succeeds.
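One way to confirm the session really is logged in is to check the returned HTML for markers of Douban's login page. This is only a heuristic sketch; the marker string is an assumption about Douban's markup, not anything documented:

```python
def is_logged_in(html):
    # Heuristic: Douban's login page contains the email field of the
    # login form ('form_email'), so its absence suggests the session
    # is authenticated. The marker is an assumption, not a stable API.
    return 'form_email' not in html
```

Usage would be `is_logged_in(r.text)` after the final post above; if it returns False, the captcha or credentials were likely rejected.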
In fact, there is an even simpler way: check "Remember me" when logging in, then copy the Cookie from the request headers. The cookie stays valid for a long time, which is good enough for personal use.
import requests

headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36'}
cookies = {'cookie': '1'}  # replace '1' with the Cookie value copied from the request headers
url = 'http://www.douban.com'
r = requests.get(url, cookies=cookies, headers=headers)
r.encoding = 'utf-8'
print r.text
with open('douban.txt', 'wb+') as f:  # binary mode, so write the raw bytes
    f.write(r.content)
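The Cookie copied from DevTools arrives as a single semicolon-separated header string, while `requests` expects a dict for its `cookies=` argument. A small helper (hypothetical, not part of `requests`) can do the conversion:

```python
def parse_cookie_header(raw):
    # Split a raw Cookie header such as 'bid=abc; ck=def' into a dict
    # that can be passed to requests via the cookies= argument.
    cookies = {}
    for pair in raw.split(';'):
        if '=' in pair:
            # partition keeps any '=' inside the value intact
            name, _, value = pair.strip().partition('=')
            cookies[name] = value
    return cookies
```

For example, `parse_cookie_header('bid=abc; ck=def')` yields a dict with keys `bid` and `ck`, ready to pass to `requests.get`.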
Python Crawler (3): Douban Login