Python Crawler Exercises (II): Crawling Public-Welfare SRC Vendor Domain URLs (November 22, 2017)


Introduction:

Butian (补天) is a well-known vulnerability response platform in China that aims at a win-win between enterprises and white hats.

White hats submit vendor vulnerabilities here in exchange for platform credits and reputation; vendors publish their programs here and receive vulnerability reports and remediation advice.

Before March 2017, vendor domain URLs were very easy to crawl: even without logging in to the platform you could harvest vendor URLs in bulk, and some white hats then fed them into large-scale vulnerability scanners for batch testing.

Later, probably to keep vendor URLs from being abused, the platform took some countermeasures.

The restrictions are as follows:

1). You must be logged in to the platform (a quick cookie check sketch follows this list).

2). You have to click a vendor's name to reach its vulnerability-submission page.

3). The vendor's URL/domain is displayed only on that submission page.
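
Restriction 1 means everything below depends on a valid logged-in cookie. Here is a minimal pre-flight sketch that reuses the endpoint and form fields from the script further down; the assumption that an expired cookie returns something other than the expected JSON structure is mine:

import requests, json

head = {'User-Agent': 'Mozilla/5.0'}
cook = {'Cookie': 'paste the cookie from your logged-in session here'}

# Same endpoint and POST fields as the crawler script below.
resp = requests.post('http://butian.360.cn/Reward/pub',
                     headers=head,
                     data={'s': '1', 'p': 1, 'token': ''},
                     cookies=cook)
try:
    data = json.loads(resp.content.decode())
    print('Cookie looks valid; page 1 lists', len(data['data']['list']), 'vendors')
except (ValueError, KeyError, TypeError):
    # Assumption: a rejected cookie does not come back as the expected JSON structure.
    print('Cookie rejected or response format changed - log in again and re-copy it')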

Below is a piece of Python code that grabs the latest vendor URLs from Butian; how you use them is up to you.

Notes:

1. Log in to the platform and copy your cookie into the marked place in the code.

2. Only the first three pages are crawled here, at 30 vendors per page.

3. The vendor URL is extracted with a regular expression (a standalone demo follows this list).

4. Results are saved to 'butian_company_url.txt' in the script's directory; if there are fewer than 90 URLs, some vendors probably left the domain field blank.
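
Point 3 uses a plain regular expression rather than an HTML parser. A standalone demo of the extraction, using the sample tag quoted in the script's own comment (the placeholder text follows the translated form in this article and may differ on the live page):

import re

# Sample of the submit-page markup targeted by the script (taken from its comment).
sample_html = '<input class="input-xlarge" type="text" name="host" placeholder="Please enter the vendor domain name" value="www.grgtest.com"/>'

# Slightly looser pattern than the script's full-tag match: key on the host input's value attribute.
pattern = r'name="host"[^>]*value="(.*?)"'

print(re.findall(pattern, sample_html))  # ['www.grgtest.com']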

import requests, re, json, time

head = {'User-Agent': 'Mozilla/5.0 (Windows NT 6.3; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/43.0.2357.130 Safari/537.36'}
cook = {'Cookie': 'paste the cookie from your logged-in session here'}

url = 'http://butian.360.cn/Reward/pub'

for page in range(1, 4):  # first three pages
    data = {'s': '1', 'p': page, 'token': ''}
    html = requests.post(url, headers=head, data=data, cookies=cook).content
    jscont = json.loads(html.decode())
    jsdata = jscont['data']
    for i in jsdata['list']:
        linkaddr = 'http://butian.360.cn/Loo/submit?cid=' + i['company_id']
        print(linkaddr, end='\t')
        shtml = requests.get(linkaddr, headers=head, cookies=cook).content
        # Regex template:
        # <input class="input-xlarge" type="text" name="host" placeholder="Please enter the vendor domain name" value="www.grgtest.com"/>
        company_url = re.findall('<input class="input-xlarge" type="text" name="host" placeholder="Please enter the vendor domain name" value="(.*)"/>', shtml.decode())
        time.sleep(0.5)  # throttle the crawl
        print(company_url[0])
        com_url = company_url[0]
        with open('butian_company_url.txt', 'a+') as f:
            f.write(com_url + '\n')
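
As note 4 warns, some vendors leave the domain field blank, and any failed request or missing match will stop the loop above. A hedged sketch of the per-vendor step, rewritten as a small helper that skips blank or absent values (the header, cookie, and URL names are carried over from the script, and the looser pattern is an assumption about the page format):

import requests, re

head = {'User-Agent': 'Mozilla/5.0'}
cook = {'Cookie': 'paste the cookie from your logged-in session here'}

def fetch_company_url(linkaddr):
    """Return the vendor domain from a submission page, or None if it is absent or blank."""
    shtml = requests.get(linkaddr, headers=head, cookies=cook, timeout=10).content
    # Key on the host input's value attribute instead of matching the whole tag.
    matches = re.findall(r'name="host"[^>]*value="(.*?)"', shtml.decode())
    if matches and matches[0].strip():
        return matches[0].strip()
    return None  # the vendor did not fill in a domain

Calling fetch_company_url(linkaddr) inside the loop and writing only non-None results keeps butian_company_url.txt limited to vendors that actually published a domain, which is also why fewer than 90 lines may appear.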

 

Operation result: (screenshot omitted)
