Python3 requests + BeautifulSoup: crawling Sun Net complaint post details (example code)

Source: Internet
Author: User

This example uses requests, BeautifulSoup, urllib, and a few standard-library modules. The full code is as follows.

#-*-coding:utf-8-*-"""Created on Sat 09:13:07 2018@author:brave_manemail: [email protected] Here's a hole first. There is no 404 hole in the page. First, we refer to a page that contains 30 complaints, called a main interface. Each main interface is comprised of 30 complaint stickers, we get a hyperlink to each complaint post, and then, the obtained hyperlink is uploaded to Getdetails () to get the details of each complaint post, including the title, content, processing status, etc. When I climbed the first time, climbed to the tenth page, the index is out of range, went to look for a bit, open the relevant complaint paste, the display is 404, the page does not exist, the program error. In order to enhance the robustness of our spiders, when getting each complaint sticker details, try it first with a try statement, of course, provided you are sure that you are not going to make an error while getting the page elements. """ImportRequests fromBs4ImportBeautifulSoup#Import JSON#From threading Import ThreadImportUrllib fromTimeImportSleepdefgetdetails (URL):Try: Headers= {"user-agent":"mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0) gecko/20100101 firefox/6.0"} res= Requests.get ("{}". Format (URL), headers =headers) res.encoding="GBK"Soup= BeautifulSoup (Res.text,"Html.parser")                    Try: Content= Soup.select (". Contentext") [0].text.strip ()except: Content= Soup.select (". Greyframe") [0].text.split ("\ n") [7].strip ()Try: Imgurl="http://wz.sun0769.com/"+ Soup.select (". Textpic") [0].img["src"] Imgsaveurl="D:\\downloadphotos"+"\\"+ Soup.select (". Textpic") [0].img["src"][-10:] Urllib.request.urlretrieve (Imgurl,"D:\\downloadphotos"+"\\"+ Soup.select (". Textpic") [0].img["src"][-10:]) except: Imgsaveurl="No picture"                        Try: Status= Soup.select (". QGRN") [0].textexcept:            Try: Status= Soup.select (". Qblue") [0].textexcept: Status= Soup.select (". qred") [0].text details= {"Title": Soup.select (". TGRAY14") [0].text[4:-12].strip (),"Code": Soup.select (". TGRAY14") [0].text[-8:-2],                   " Picture": Imgsaveurl,"Content": Content,"Status": status,"Netfriend": Soup.select (". te12h") [0].text.lstrip ("User:") [0:-27],                   " Time": Soup.select (". te12h") [0].text[-21:-2]}#JD = Json.dumps (Details)#Print (Type (JD))        Try: With open ("SaveComplaints.txt","a") as F:f.write (str (details))except:            Print("failed to deposit")    except:        Print("page does not exist") Sleep (5)defgeta (URL): Headers= {"user-agent":"mozilla/5.0 (Windows NT 6.1; WOW64; rv:6.0) gecko/20100101 firefox/6.0"} res= Requests.get ("{}". Format (URL), headers =headers) res.encoding="GBK"Soup= BeautifulSoup (Res.text,"Html.parser")     forIinchSoup.select (". News14"): URL= i["href"] getdetails (URL)defgetpages (): Rurl="http://wz.sun0769.com/index.php/question/questionType?type=4&page="     forIinchRange (30): URL= Rurl + str ((i-1) * 30) geta (URL)if __name__=="__main__":#Geta ("http://wz.sun0769.com/index.php/question/questionType?type=4")#getdetails ("http://wz.sun0769.com/html/question/201807/379074.shtml")GetPages ()

Some details in the code, such as the file reading and writing, are still handled a bit clumsily. Here is a second script that reads the saved data back.

#-*-coding:utf-8-*-"""Created on Sat Jul 13:51:40 2018@author:brave_manemail: [email protected]"""ImportJSONTry: With open ("SaveComplaints.txt","R") as F:Print("start reading") s=F.readline ()#print (s)except:    Print("failed to deposit")#read data from a fileS1 = S.encode ("UTF8"). Decode ("Unicode-escape")Print(S1)#convert to JSON formatJD =json.dumps (S1)Print(JD)#d = {"Name": "Zhang Fei", "Age": "$"}#print (str (d))#JD = Json.dumps (d)#print (JD)#js = json.loads (JD)#print (JS)

The crawler fetches the first 30 listing pages and saves the results to a local file. You could also use multithreading or a thread pool to crawl each listing page separately, which would likely be more efficient; a rough sketch of that idea follows. I am not yet very comfortable with the multithreading part, so it still needs more attention.
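For reference, here is one way the listing pages could be crawled with a thread pool. This is a sketch of the idea, not part of the original script, and it assumes getA() from the code above:

from concurrent.futures import ThreadPoolExecutor

def getPagesThreaded():
    rUrl = "http://wz.sun0769.com/index.php/question/questionType?type=4&page="
    urls = [rUrl + str((i - 1) * 30) for i in range(30)]   # same page offsets as getPages()
    # A small pool keeps the load on the site reasonable; each task crawls one listing page.
    with ThreadPoolExecutor(max_workers=5) as pool:
        pool.map(getA, urls)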
