Python script to log in to CSDN and automatically comment on downloaded resources


Function

1. Log in to CSDN automatically.

2. Find downloaded resources that have not yet been commented on and comment on them automatically.

Libraries used

1. requests, used to fetch pages and to send data (GET and POST).

2. time, from the standard library, used to sleep: CSDN only allows a resource to be commented on once in a while, so the script rests for a while after each comment.

3. BeautifulSoup, used to parse the HTML text and search it for the specified tags and attributes. (A minimal sketch of how the three libraries fit together follows this list.)
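A quick illustration of how the three libraries cooperate. This is not part of the original script; the variable names are only illustrative, while the URL and the h3 lookup are taken from the full script below.

import requests
import time
from BeautifulSoup import BeautifulSoup   # BeautifulSoup 3, as in the script below

sess = requests.session()                  # one session keeps the login cookies
html = sess.get("http://download.csdn.net/my/downloads/1").text   # page 1 of "my downloads"
soup = BeautifulSoup(html)
for h3 in soup.findAll('h3'):              # each downloaded resource is listed under an <h3>
    print h3
time.sleep(70)                             # rest so CSDN's comment throttling is not hit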

Process

1. Use Chrome's developer tools to capture the login and comment requests, so the packet format is known.

2. Use requests to fetch the HTML of the target page.

3. Use BeautifulSoup to parse the page and extract the fields needed for logging in and for commenting.

4. Filter BeautifulSoup's results, drop the interfering tags, and decide whether a resource has already been commented on (step 4 is shown in isolation right after this list).

5. Assemble the comment message and post it.
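Step 4 in isolation, using the same markup the script relies on: a resource that still needs a comment carries an <a class="btn-comment"> link pointing at its detail page. Here html_page is a placeholder for one already-fetched page of "my downloads".

from BeautifulSoup import BeautifulSoup

soup_page = BeautifulSoup(html_page)               # html_page: one page of "my downloads"
for h3 in soup_page.findAll('h3'):
    if len(h3) > 1:
        href = h3.contents[0].attrs[0][1]          # e.g. "/detail/<author>/<id>"
        parts = href.split('/')
        comment_link = '/detail/' + parts[2] + '/' + parts[3] + '#comment'
        if soup_page.findAll('a', attrs={'href': comment_link, 'class': 'btn-comment'}):
            print "still needs a comment:", parts[3]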

Script

import requests
import time
from BeautifulSoup import BeautifulSoup   # BeautifulSoup 3


def commitfunc(source_id, author):
    # Post one comment. The query string (including the URL-encoded comment
    # text) was captured with Chrome's developer tools.
    commiturl = 'http://download.csdn.net/index.php/comment/post_comment?jsonpcallback=jsonp1419934439524&sourceid=' + source_id + '&content=%e6%88%90%e5%8a%9f%e9%85%8d%e5%af%b9%ef%bc%8c%e5%8f%af%e4%bb%a5%e4%bd%bf%e7%94%a8%e3%80%82&rating=5&t=1419935091974'
    commit_headers = {
        "Accept": "text/javascript, application/javascript, */*",
        "Accept-Encoding": "gzip, deflate, sdch",
        "Accept-Language": "zh-CN,zh;q=0.8",
        "Connection": "keep-alive",
        "Content-Type": "application/x-www-form-urlencoded",
        "Host": "download.csdn.net",
        # the Referer points at the resource's detail page
        "Referer": "http://download.csdn.net/detail/" + author + "/" + source_id,
        "User-Agent": "Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/39.0.2171.95 Safari/537.36",
        "X-Requested-With": "XMLHttpRequest",
        # cookies are handled by the requests session, no Cookie header needed
    }
    commit_data = {
        "jsonpcallback": "jsonp1419934439524",
        "sourceid": source_id,
        "content": "a good resource, it's worth to download it",
        "rating": "5",
        "t": "1419935091974",
    }
    sess_source.post(commiturl, data=commit_data, headers=commit_headers)


def getpagecount():
    # Read the total number of pages from the pager on page 1 of "my downloads".
    url_source_page = url_source + "1"
    html_source = sess_source.get(url_source_page).text
    soup_source = BeautifulSoup(html_source)
    page_count = soup_source.find('div', attrs={'class': "page_nav"}).text
    page_list = page_count.split()
    page_ac = page_list[2].split('&')
    return page_ac[0][1:len(page_ac[0]) - 1]


def commitwholepage(page_nu):
    # Comment on every not-yet-commented resource listed on one page.
    url_source_page = url_source + page_nu
    html_source = sess_source.get(url_source_page).text
    soup_source = BeautifulSoup(html_source)

    resource_once = soup_source.findAll('h3')
    for element in resource_once:
        if len(element) > 1:
            # the first child of the <h3> is the <a> pointing at "/detail/<author>/<id>"
            attr = element.contents[0].attrs[0][1].split('/')
            reftext = '/detail/' + attr[2] + '/' + attr[3] + '#comment'
            # the "comment" button is only shown for resources without a comment yet
            result = soup_source.findAll('a', attrs={'href': reftext, 'class': 'btn-comment'})
            if len(result) != 0:
                commitfunc(attr[3], attr[2])
                print attr[2]
                print attr[3]
                print "sleep"
                time.sleep(70)   # CSDN throttles comments, so rest after each one


def logincsdn():
    # Fetch the login form, pull out the hidden lt/execution fields and post
    # them back together with the user name and password.
    html_login = sess_source.get(url_login).text
    soup_login = BeautifulSoup(html_login)

    lt_value = soup_login.findAll('input', attrs={'name': "lt"})[0]['value']
    execution_value = soup_login.findAll('input', attrs={'name': "execution"})[0]['value']
    data_login = {
        "lt": lt_value,
        "execution": execution_value,
        "_eventId": "submit",
        "username": "xxxxx",   # fill in your own CSDN account
        "password": "xxxxx"
    }
    sess_source.post(url_login, data_login)


# main begin
url_login = "https://passport.csdn.net/account/login"
url_source = "http://download.csdn.net/my/downloads/"
sess_source = requests.session()

logincsdn()
total_page = getpagecount()
for num in range(1, int(total_page)):
    commitwholepage(str(num))


Questions and corrections are welcome; I am still a Python beginner.

BeautifulSoup documentation in Chinese: http://www.crummy.com/software/BeautifulSoup/bs3/documentation.zh.html#Iterating%20over%20a%20tag

