The main tools are requests and BeautifulSoup (bs4). The biggest problem I ran into is that the target site uses gb2312 encoding; Python 3 handles encodings much better than Python 2, but it was still quite a hassle.
My first idea was to simulate the login with a copied cookie, but that is cumbersome in practice: you have to log in to the target website in a browser first, then copy the cookie into the code... Laziness is the first driving force of progress!
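For illustration, here is a minimal sketch of that cookie-copying approach (the cookie value and site URL are placeholders, not real credentials): log in with a browser, copy the Cookie header out of DevTools, parse it into a dict, and attach it to every request.

```python
# Hypothetical Cookie header value copied from the browser after a manual login
raw_cookie = 'ASP.NET_SessionId=abc123; lang=zh-CN'

# Turn the header string into the dict shape that requests accepts
cookies = dict(pair.split('=', 1) for pair in raw_cookie.split('; '))
print(cookies)  # {'ASP.NET_SessionId': 'abc123', 'lang': 'zh-CN'}

# Every call then has to carry it, e.g.:
# requests.get('http://www.3456.tv/user/list_proxyall.html', cookies=cookies)
```

The obvious drawback, and the reason I abandoned it, is that the copied session cookie expires and has to be refreshed by hand.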
I planned to use Firefox's HttpFox to capture the data and address of the target POST request, but Firefox had auto-upgraded itself to 55.x and the plugin only works up to 35.x, so I switched to Chrome. The next snag: submitting the POST on this site opens a new page, and by the time you press F12 on the new page it is too late to see the POST. After some searching on Baidu I learned that Chrome DevTools can be set to open automatically for new tabs.
Once I knew what data the site POSTs, I started posting it with the requests module, but the login failed every time and the crawled page content came back garbled. `str(content, encoding='utf-8')` fixed the garbling, but the login still failed: the page kept prompting me to enter the account and password.
A flash of inspiration!!!
My guess: my POST data was UTF-8 while the target site expects gb2312, which it cannot decode! The username is Chinese, so after encoding it with `username.encode("gb2312")` the login finally succeeded. And then:
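A quick demonstration of why this matters (the username here is a made-up example): the same Chinese string produces completely different bytes under UTF-8 and gb2312, so a server that decodes the POST body as gb2312 cannot make sense of UTF-8 bytes.

```python
username = '张三'  # hypothetical Chinese user name

utf8_bytes = username.encode('utf-8')   # what gets sent if you post the str directly
gb_bytes = username.encode('gb2312')    # what the target site actually expects

print(utf8_bytes)  # b'\xe5\xbc\xa0\xe4\xb8\x89'
print(gb_bytes)    # b'\xd5\xc5\xc8\xfd'

# only the matching encoding round-trips cleanly
print(gb_bytes.decode('gb2312'))  # 张三
```

Passing the pre-encoded bytes in the form dict sidesteps requests' default UTF-8 encoding of `str` values.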
I opened a session to keep the cookies, giving a persistent login. Then, every minute, the script compares the newest message ID with the last saved ID to decide whether there is anything new to crawl.
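The "is there anything new?" check boils down to a small pattern that can be sketched without any network access (`fetch_last_id` is a stand-in for the real page-scraping function; the state file name matches the script below):

```python
def fetch_last_id():
    # placeholder: the real script parses the listing page for the newest ID
    return 105

def is_new(state_file='lastid.txt'):
    """Compare the newest ID with the saved one; update the file if it changed."""
    newest = fetch_last_id()
    with open(state_file) as f:
        saved = int(f.read())
    if newest != saved:
        with open(state_file, 'w') as f:
            f.write(str(newest))
        return True
    return False

# usage: seed the state file once, then poll
with open('lastid.txt', 'w') as f:
    f.write('100')
print(is_new())  # True  (105 != 100, state file updated)
print(is_new())  # False (state now matches)
```

Saving the ID to disk rather than keeping it in memory means the script can be restarted without re-fetching everything.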
The full script, reconstructed from the garbled original (the `__VIEWSTATE`/`__EVENTVALIDATION` tokens and the image-button click coordinates are site-specific values captured from the browser; they were mangled by the translation and are kept as they appeared):

```python
# -*- coding: utf-8 -*-
import requests, re, time, os
from bs4 import BeautifulSoup
from time import strftime

login_url = 'http://www.3456.tv/Default.aspx'  # URL the login form posts to
username = 'user name'  # the real account name is Chinese
password = 'password'

# login form data; the ASP.NET tokens below were garbled in the original post
data = {
    'web_top_two2$txtName': username.encode('gb2312'),  # site expects gb2312
    'web_top_two2$txtPass': password,
    '__VIEWSTATE': '/wepdwullteynzc4mjm2otbkgaefhl9fq29udhjvbhnszxf1axjlug9zdejhy2tlzxlfxxybbrh3zwjfdg9wx3r3bzikaw1nqnrutg9naw6/pqbjqqv358gfyjdoiok+ek4vwa==',
    '__EVENTVALIDATION': '/wewbal3y5plcglhgt+5bgl3r9v/cglx77pnd5r1xxtegn4lxvbdrb6odryc4xlk',
    'web_top_two2$imgbtnlogin.x': 'A',
    'web_top_two2$imgbtnlogin.y': '8',
}
headers = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.36 '
                         '(KHTML, like Gecko) Chrome/60.0.3112.90 Safari/537.36'}

s = requests.Session()                                 # the session keeps the cookies
login = s.post(login_url, data=data, headers=headers)  # simulated login

def getLast():
    """Return the ID of the newest message on the listing page."""
    url = 'http://www.3456.tv/user/list_proxyall.html'
    content = s.get(url).content
    soup = BeautifulSoup(content, 'html.parser')
    tb = soup.find_all('tr', style='text-align:center;')
    for tag in tb:
        see = tag.find('a', attrs={'class': 'see'})
        seenum = re.sub(r'\D', '', see['onclick'])  # keep only the digits
        break  # the first row is the newest message
    return seenum

def seeInfo(id):
    """Fetch the detail page of a single message."""
    url = 'http://www.3456.tv/user/protel.html'
    res = s.get(url, data={'id': id})
    return res.content

def getNewUser():
    """Save every message newer than the stored last ID to a timestamped file."""
    url = 'http://www.3456.tv/user/list_proxyall.html'
    content = s.get(url).content
    soup = BeautifulSoup(content, 'html.parser')
    tb = soup.find_all('tr', style='text-align:center;')
    with open('lastid.txt') as txt:
        last = txt.read()
    userinfo = ''
    for tag in tb:
        see = tag.find('a', attrs={'class': 'see'})
        seenum = re.sub(r'\D', '', see['onclick'])
        if int(seenum) == int(last):
            break  # reached the newest message we already saved
        userinfo += str(seeInfo(int(seenum)), encoding='utf-8') + '\n'
    userfilename = strftime('%H-%M') + '.txt'
    with open(userfilename, 'w') as f:
        f.write(userinfo)
    os.system(userfilename)  # on Windows this opens the saved file
    with open('lastid.txt', 'w') as f2:
        f2.write(str(getLast()))
    print('Fetch finished, current time: ' + strftime('%H-%M') +
          ', continuing in 60 seconds')

def isNew():
    """Compare the newest ID with the saved one and fetch if it changed."""
    newlastid = getLast()
    with open('lastid.txt') as txt:
        last = txt.read()
    if int(newlastid) != int(last):
        print('Current time: ' + strftime('%H-%M') + ', new message found, fetching!')
        getNewUser()
    else:
        print('Current time: ' + strftime('%H-%M') + ', no new message yet')

setsleep = 60  # change this to set the fetch interval; 60 means 60 seconds
print('Is this the first run today?')
firststr = input('Type yes or no and press Enter: ')
if firststr == 'yes':
    print('Crawling ...')
    lastid = getLast()
    with open('lastid.txt', 'w') as f:
        f.write(str(lastid))
    print('Current time: ' + strftime('%H:%M') +
          ', the newest data ID is ' + lastid)
print(str(setsleep) + ' seconds until the next run')

while 1:
    isNew()
    time.sleep(int(setsleep))
```
20170820_python: fetching a website's message information in real time