Python Crawler (1) Brief introduction

Source: Internet
Author: User

Python is easy to get started with: it is free and open source, runs across platforms, is object-oriented, and has a rich ecosystem of frameworks and libraries.

Python: Monty Python's Flying Circus (the source of the language's name, which actually has nothing to do with the snake).

Multiple versions of Python can be maintained side by side with Homebrew and pyenv.

Related knowledge: HTTP

HTTP = Hypertext Transfer Protocol

URI = Uniform Resource Identifier, which emphasizes the identity of the resource.

URL = Uniform Resource Locator, which emphasizes the location of the resource.

In other words, a URL is a specific kind of URI: the URI is the abstract concept, while a URL both identifies a resource and indicates where to find it.
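As a concrete illustration, Python's standard `urllib.parse` splits a URL into the components that make it a locator (the URL below is just an example address):

```python
from urllib.parse import urlparse

# A URL is a URI that also tells you where the resource lives and how
# to reach it: scheme + network location + path (+ query + fragment).
parts = urlparse('http://hq.sinajs.cn/list=sh600001?fmt=text#top')
print(parts.scheme)   # http
print(parts.netloc)   # hq.sinajs.cn
print(parts.path)     # /list=sh600001
print(parts.query)    # fmt=text
```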

The so-called Web API is implemented via HTTP requests.

HEAD: asks the server for the same response it would give to a GET request, but without the response body.

GET: requests a representation of the specified resource.

PUT: uploads the latest content to the specified resource location.

POST: submits data to the specified resource for processing.

DELETE: deletes the resource identified by the specified URI.

PATCH: modifies a resource.

In practice GET and POST cover most cases. A crawler works by sending HTTP requests and processing the responses.
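A sketch of how these verbs map onto the requests library (the URLs here are placeholders; the requests are built with `requests.Request(...).prepare()` but never actually sent):

```python
import requests

# Each HTTP verb corresponds to a method name on the requests API
# (requests.get, requests.post, ...). Preparing a Request object lets
# us inspect what would go over the wire without hitting the network.
get_req = requests.Request('GET', 'http://hq.sinajs.cn/list=sh600001').prepare()
post_req = requests.Request('POST', 'http://example.com/api',
                            data={'code': 'sh600001'}).prepare()

print(get_req.method, get_req.url)     # GET http://hq.sinajs.cn/list=sh600001
print(post_req.method, post_req.body)  # POST code=sh600001
```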

Common HTTP status codes:

200/OK: the request succeeded.

201/Created: the request was fulfilled and a new resource was created.

202/Accepted: the server has accepted the request but has not yet processed it.

400/Bad Request: the request could not be understood by the server.

401/Unauthorized: the request requires authentication; a wrong username or password typically produces this.

403/Forbidden: the server understood the request but refuses to fulfill it.

404/Not Found: the resource was not found.
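The standard library names these codes, and a crawler usually branches on `response.status_code` against them. A minimal sketch (the `should_retry` policy below is an illustrative assumption, not a rule from the source):

```python
from http import HTTPStatus

# http.HTTPStatus gives symbolic names for the numeric codes.
print(HTTPStatus.OK.value, HTTPStatus.OK.phrase)              # 200 OK
print(HTTPStatus.NOT_FOUND.value, HTTPStatus.NOT_FOUND.phrase)  # 404 Not Found

def should_retry(status):
    # 202 means accepted but not yet processed, so polling again later
    # makes sense; 4xx client errors will not improve on retry, while
    # 5xx server errors might.
    return status == HTTPStatus.ACCEPTED or status >= 500
```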

HTML / XML / JSON

HTML is a markup language, not a programming language. XML has a similar format.

<tag attribute="value">content</tag>

DOM = Document Object Model (BeautifulSoup makes it convenient to work with).

In CSS, a dot (.) refers to a class: multiple tags can share the same class, and one tag can carry more than one class.

A hash (#) refers to an ID, which uniquely identifies a single element.
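The class and ID selectors carry over directly into BeautifulSoup's `select` / `select_one` methods. A small sketch, assuming the third-party `bs4` package is installed (the HTML snippet is made up for illustration):

```python
from bs4 import BeautifulSoup  # pip install beautifulsoup4

html = '''
<div class="stock" id="sh600001">Bank A</div>
<div class="stock" id="sh600002">Bank B</div>
'''
soup = BeautifulSoup(html, 'html.parser')

# .stock selects every tag carrying that class;
# #sh600001 selects the single tag with that unique ID.
names = [tag.text for tag in soup.select('.stock')]
first = soup.select_one('#sh600001').text
print(names)   # ['Bank A', 'Bank B']
print(first)   # Bank A
```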

JSON is simpler than XML: smaller, faster, and easier to parse.
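Part of why JSON is easy to parse is that it maps directly onto Python dicts and lists via the standard `json` module (the payload below is invented for illustration):

```python
import json

# json.loads turns a JSON string into native Python objects;
# json.dumps goes the other way.
payload = json.loads('{"code": "sh600001", "price": 10.5, "open": true}')
print(payload['code'])   # sh600001
text = json.dumps(payload)
```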

Databases: MySQL, SQLite, MongoDB, Redis, and so on.

A brief introduction to crawlers

Work Flow:

Put the seed URLs into a work queue; take a URL from the queue, fetch its content, parse it, push any further URLs that need crawling back onto the queue, and store the parsed content.
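The workflow above can be sketched as a breadth-first loop over a work queue. To keep the example runnable offline, the `PAGES` dict stands in for the real fetch-and-parse step:

```python
from collections import deque

# A toy link graph standing in for real pages; looking a URL up in
# PAGES is a placeholder for an HTTP request plus HTML parsing.
PAGES = {
    'http://example.com/':  ['http://example.com/a', 'http://example.com/b'],
    'http://example.com/a': ['http://example.com/b'],
    'http://example.com/b': [],
}

def crawl(seed):
    queue = deque([seed])          # work queue, seeded with the start URL
    seen = {seed}                  # de-duplication via a hash set
    order = []                     # stands in for "store the parsed content"
    while queue:
        url = queue.popleft()      # FIFO queue => breadth-first crawl
        order.append(url)
        for link in PAGES.get(url, []):   # "parse" the page for links
            if link not in seen:          # only enqueue unseen URLs
                seen.add(link)
                queue.append(link)
    return order

print(crawl('http://example.com/'))
```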

Crawl strategies: depth-first, breadth-first, PageRank.

De-duplication: hash table, Bloom filter.
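A hash set (as in the workflow sketch above) is exact but stores every URL; a Bloom filter trades a small false-positive rate for far less memory. A minimal, illustrative implementation (the sizes `m` and `k` are arbitrary choices for the demo):

```python
import hashlib

class BloomFilter:
    """Toy Bloom filter: k hash positions per item in an m-bit array.

    May report false positives, but never false negatives - good enough
    for 'have I probably crawled this URL already?'.
    """

    def __init__(self, m=1024, k=3):
        self.m, self.k = m, k
        self.bits = 0  # m-bit array packed into one Python int

    def _positions(self, item):
        # Derive k positions by salting the item with the hash index.
        for i in range(self.k):
            digest = hashlib.md5(f'{i}:{item}'.encode()).hexdigest()
            yield int(digest, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits |= 1 << pos

    def __contains__(self, item):
        return all(self.bits >> pos & 1 for pos in self._positions(item))

bf = BloomFilter()
bf.add('http://example.com/a')
print('http://example.com/a' in bf)   # True
```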

The robots exclusion standard: through the robots.txt protocol, a website tells search engines which pages may be crawled and which may not. It is the way a site communicates with crawlers, guiding search engines to crawl the site's content better.
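The standard library ships a parser for this protocol. A sketch that feeds robots.txt lines in directly (normally you would call `rp.set_url('http://site/robots.txt')` followed by `rp.read()` to fetch them; the rules and URLs below are invented):

```python
from urllib import robotparser

rp = robotparser.RobotFileParser()
# Parse a robots.txt body line by line instead of fetching it.
rp.parse([
    'User-agent: *',
    'Disallow: /private/',
])

# can_fetch(user_agent, url) answers: may this crawler visit this URL?
print(rp.can_fetch('*', 'http://example.com/index.html'))   # True
print(rp.can_fetch('*', 'http://example.com/private/x'))    # False
```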

The requests package is easier to use than urllib2.

A simple example:

# -*- coding: utf-8 -*-
# (Updated from the original Python 2 source: print is a function in
# Python 3, and the reload(sys)/sys.setdefaultencoding hack is no
# longer needed.)
import requests
import threading


def display_info(code):
    url = 'http://hq.sinajs.cn/list=' + code
    response = requests.get(url).text
    print(response)


def single_thread(codes):
    for code in codes:
        code = code.strip()
        display_info(code)


def multi_thread(tasks):
    threads = [threading.Thread(target=single_thread, args=(codes,))
               for codes in tasks]
    for t in threads:
        t.start()
    for t in threads:
        t.join()


if __name__ == '__main__':
    codes = ['sh600001', 'sh600002', 'sh600003', 'sh600004', 'sh600005']
    thread_len = int(len(codes) / 4)  # how many stocks each thread handles
    t1 = codes[0:thread_len]
    t2 = codes[thread_len:thread_len * 2]
    t3 = codes[thread_len * 2:thread_len * 3]
    t4 = codes[thread_len * 3:]
    multi_thread([t1, t2, t3, t4])
