[Python crawler] A crawler that scrapes website results based on query words

Source: Internet
Author: User

The query term corresponds to the seven remaining unknown positions of the URL. Brute-forcing them all would mean (26 + 10)^7 = 78,364,164,096 — roughly 78 billion mostly useless URLs. Writing a crawler for that would be far too slow, so I gave up on the idea for the time being. Instead, I wanted to simulate a browser and store the results returned for each query word. After digging through a lot of material online, I finally got it working with the mechanize module, which is well suited to simulating a browser: you can use it to do whatever a browser does, such as filling in a form automatically. Its main features include automatic handling of HTTP-EQUIV and refresh. The steps are recorded below.

0. Prepare the environment

Linux, Python 2.7. Install the modules: mechanize, cookielib, BeautifulSoup.

1. Initialize and create a browser object

```python
import re
import sys
import mechanize
import cookielib
from bs4 import BeautifulSoup

br = mechanize.Browser()       # create a Browser object
cj = cookielib.LWPCookieJar()  # cookie jar: with cookies set, requests that
                               # require login do not need to re-authenticate
br.set_cookiejar(cj)           # associate the cookies with the browser

# Set some options. Since we are simulating a client request, we need to
# support common client-side behavior such as gzip and the Referer header.
br.set_handle_equiv(True)
br.set_handle_gzip(True)
br.set_handle_redirect(True)
br.set_handle_referer(True)
br.set_handle_robots(False)

# Debugging options: these print the intermediate steps of each request,
# which is helpful when debugging the code.
br.set_handle_refresh(mechanize._http.HTTPRefreshProcessor(), max_time=1)
br.set_debug_http(True)
br.set_debug_redirects(True)
br.set_debug_responses(True)

br.addheaders = [('User-Agent',
                  'Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.9.0.1) '
                  'Gecko/2008071615 Fedora/3.0.1-1.fc9 Firefox/3.0.1')]
```
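The brute-force estimate at the top of this section is easy to verify: 36 possible characters (26 letters + 10 digits) in each of 7 positions gives 36^7 candidates. A quick stdlib-only sketch (the character set and position count come from the article; everything else is illustrative):

```python
import itertools

# 26 lowercase letters + 10 digits = 36 possible characters per position
charset = "abcdefghijklmnopqrstuvwxyz0123456789"
positions = 7  # the seven unknown positions in the URL

total = len(charset) ** positions
print(total)  # 78364164096 -- about 78 billion candidate URLs

# Enumerating even the first few candidates shows the shape of the search:
first_three = ["".join(p) for p in
               itertools.islice(itertools.product(charset, repeat=positions), 3)]
print(first_three)  # ['aaaaaaa', 'aaaaaab', 'aaaaaac']
```

At any realistic crawl rate (say, a few requests per second) this keyspace would take centuries, which is why the article switches to querying the site directly instead.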
2. Simulate browser behavior (open the page and submit the site's query form)

```python
r = br.open(sys.argv[1])
query = sys.argv[2]
br.select_form(nr=0)    # select the first form on the page
br.form['q'] = query    # fill in the query field
br.submit()
html = br.response().read()
```

Here the form is selected with nr=0, i.e. the first form on the page. You can list the forms to find the number of the one you need:

```python
for f in br.forms():
    print f
```

The name of the query field, 'q', was obtained by reading the site's HTML source.

3. Parse the content you need with the BeautifulSoup module. The code for this step is truncated in the source; it begins:

```python
def parseHtml(html):
    '''@summary: extract structured data'''
    content = ""
    wordpattern = '
```

(the regular expression assigned to wordpattern is cut off here in the source).
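Since the original wordpattern regex is lost, here is a minimal stdlib-only sketch of what such a parser could look like. The `<li class="result">` markup and the pattern itself are hypothetical stand-ins, not the article's actual pattern; in practice BeautifulSoup (imported in step 1) is more robust than a regex for this job:

```python
import re

def parse_html(html):
    """Sketch of the article's parseHtml: pull result strings out of the page.

    The pattern below is a hypothetical stand-in for the truncated
    wordpattern; adapt it to the real markup of the target site.
    """
    word_pattern = r'<li class="result">(.*?)</li>'  # hypothetical pattern
    return re.findall(word_pattern, html, re.S)

page = '<ul><li class="result">foo</li><li class="result">bar</li></ul>'
print(parse_html(page))  # ['foo', 'bar']
```

With `re.findall` and a single capture group, the return value is the list of captured strings, which can then be written out per query word.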
