Building a crawler for Baidu Post Bar (Tieba) is basically the same as building the earlier encyclopedia crawler: the key data is extracted from the page source and stored in a local txt file.
Project content:
A web crawler for Baidu Post Bar, written in Python.
Usage:
Create a new bugbaidu.py file and copy into it the code from:
http://blog.csdn.net/pleasecallmewhy/article/details/8932310
Q&A:
1. Why was the site shown as unavailable for a period of time?
A: Some time ago the encyclopedia site added a header check, which made it impossible to crawl, so the request header has to be simulated in the code (a minimal example follows this Q&A). The code has since been updated and works properly again.
2. Why do we need to create a separate thread?
A: The basic process is this:
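For reference, here is a minimal urllib2 (Python 2) sketch of simulating a browser header so that such a check passes; the URL and User-Agent string are placeholders, not the original crawler's values:
import urllib2

url = 'http://www.example.com/'          # placeholder page URL
headers = {'User-Agent': 'Mozilla/5.0'}  # pretend to be a normal browser
req = urllib2.Request(url, headers=headers)
html = urllib2.urlopen(req).read()
print html[:200]                         # quick check that the page came back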
Python web crawler implementation code
First, let's look at the Python libraries for fetching web pages: urllib and urllib2.
What is the difference between urllib and urllib2? You can think of urllib2 as an extension of urllib. The obvious difference is that urllib2 can accept a Request object, which lets you set headers for the request, while urllib accepts only a URL; on the other hand, urllib provides the urlencode method for encoding query or form data, which urllib2 does not.
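A hedged Python 2 sketch of that division of labor (the URL and form fields are placeholders, not from the article):
import urllib
import urllib2

# urllib supplies urlencode for building the POST body ...
data = urllib.urlencode({'user': 'test', 'pwd': 'secret'})
# ... while urllib2 accepts a Request object, so extra headers can be attached
req = urllib2.Request('http://www.example.com/login', data=data,
                      headers={'User-Agent': 'Mozilla/5.0'})
print urllib2.urlopen(req).read()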
This article mainly introduces [Python] web crawler (3): exception handling and HTTP status code classification. Let's talk about HTTP exception handling.
When urlopen cannot handle a response, a URLError is raised.
However, the usual Python exceptions such as ValueError and TypeError may be raised at the same time.
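A hedged urllib2 sketch of the standard pattern for catching that exception (the URL is a placeholder, not the article's example):
import urllib2

req = urllib2.Request('http://www.example.com/')  # placeholder URL
try:
    response = urllib2.urlopen(req)
except urllib2.URLError as e:
    # HTTPError is a subclass of URLError and carries an HTTP status code
    if hasattr(e, 'code'):
        print 'The server returned an error code:', e.code
    elif hasattr(e, 'reason'):
        print 'Failed to reach the server. Reason:', e.reason
else:
    print response.read()[:200]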
Python is a powerful computer programming language; it can also be seen as an object-oriented general-purpose language. Its outstanding features greatly help developers in their applications. Here, let's take a look at the Python city and county web crawler methods.
Today, I saw a webpage, and it was very troublesome.
    print imglist
    cnt = 1
    for imgurl in imglist:
        urllib.urlretrieve(imgurl, '%s.jpg' % cnt)
        cnt += 1

if __name__ == '__main__':
    html = gethtml('http://www.baidu.com')
    getimg(html)
According to the above method, we can crawl a certain page, and then extract the data we need.
In fact, using the urllib module for web crawling is extremely inefficient. Next, let's introduce Tornado.
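A hedged sketch (not the article's code) of fetching several pages concurrently with Tornado's AsyncHTTPClient; the URLs are placeholders:
from tornado import gen, ioloop
from tornado.httpclient import AsyncHTTPClient

@gen.coroutine
def fetch_all(urls):
    client = AsyncHTTPClient()
    # issue all requests at once and wait for every response
    responses = yield [client.fetch(u, raise_error=False) for u in urls]
    for u, r in zip(urls, responses):
        print(u, r.code)

if __name__ == '__main__':
    urls = ['http://www.baidu.com', 'http://www.example.com']
    ioloop.IOLoop.current().run_sync(lambda: fetch_all(urls))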
Download and save the picture, then open the file. The next step is to recognize the verification code in the image, which requires the Pytesser and PIL libraries. First install Tesseract-OCR (it can be downloaded online); the default installation path is C:\Program Files\Tesseract-OCR. Add that path to the system PATH variable. Then install pytesseract and PIL via pip. Let's see how they are used.
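A minimal hedged sketch of the recognition step (the file name captcha.jpg is an assumed example, not from the article):
from PIL import Image
import pytesseract

# open the downloaded verification-code image
img = Image.open('captcha.jpg')
# convert to grayscale, which often helps recognition
img = img.convert('L')
# run Tesseract OCR and print the recognized text
print(pytesseract.image_to_string(img))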
1. Python code: for example, we crawl data from the site http://gitbook.cn/.
2. Before running the code, download and install the chardet and requests packages (both installation packages can be downloaded from my blog for free); unzip them and place them in the directory where Python is installed (a small sketch of using them follows these steps).
3. Open t
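A hedged sketch (not the article's code) of combining requests and chardet to fetch a page and detect its encoding; the URL is the site mentioned above:
import requests
import chardet

url = 'http://gitbook.cn/'
resp = requests.get(url)
# detect the character encoding of the raw response bytes
resp.encoding = chardet.detect(resp.content)['encoding']
print(resp.text[:200])  # first 200 characters of the decoded page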
The code and tools used: sample site source + framework + book PDF + chapter code.
Link: https://pan.baidu.com/s/1miHjIYk  Password: af35
Environment: Python 2.7, Win7 x64
Sample site setup:
wswp-places.zip is the book's sample-site source code; web2py_src.zip is the framework used by the site.
1. Decompress web2py_src.zip.
2. Go to the web2py/applications directory.
3. Extract wswp-places.zip into the applications directory.
4. Return to the previous-level directory, to the web2py directory
Project directory:
tutorial/: the project's Python module; you will import your code from here later
tutorial/items.py: the project's items file
tutorial/pipelines.py: the project's pipelines file
tutorial/settings.py: the project's settings file
tutorial/spiders/: the directory where the spiders are stored
2. Define the target (Item)
In Scrapy, Items are containers used to load the crawled data.
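A hedged illustration of defining an Item (the field names below are examples, not necessarily the tutorial's actual fields):
import scrapy

class TutorialItem(scrapy.Item):
    # each field is declared with scrapy.Field()
    title = scrapy.Field()  # e.g. the title of a crawled page
    link = scrapy.Field()   # e.g. the page URL
    desc = scrapy.Field()   # e.g. a short description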
The crawler, simply put, consists of two steps: get the web page text, then filter out the data we want.
1. Get the HTML text. Python makes it very easy to fetch HTML; just a few lines of code do what we need.
The code is as follows:
import urllib

def gethtml(url):
    page = urllib.urlopen(url)
    html = page.read()
    page.close()
    return html
A few lines like this are all we need to get the HTML of a page.
# Python 3: import the request package from urllib
from urllib import request
import sys
import io

# If an exception occurs when printing, set the output encoding first
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
# The URL to fetch
url = 'http://www.xxx.com/'
# Request header
headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/64.0.3282.186 Safari/537.36"}
# Build the Request object
req = request.Request(url, headers=headers)
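Presumably the next step, which the truncated snippet above does not show, is to open the request and read the response; a minimal continuation (reusing the request module and req object from above) might be:
# send the request and decode the response body
response = request.urlopen(req)
html = response.read().decode('utf-8')
print(html[:200])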
That was just a simple webpage example. Next, let's look at a novel-reading page: below is a novel from the fast reading network, with the novel text on the left and the corresponding webpage code on the right. The text of the novel is contained in the elements whose tags are
If we have a tool that can automatically download HTML elements like these, we can automatically download the novel. This is what a web crawler (spider) does: it crawls around the web. A web spider finds web pages through their URLs. Starting from one page of a site (usually the home page), it reads the content of that page, finds the other links in it, follows those links to the next pages, and the cycle continues until all the pages of the site have been crawled.
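A hedged Python 3 sketch of that link-following loop (not any of the articles' code; the seed URL and page limit are placeholders):
import re
from collections import deque
from urllib.request import urlopen
from urllib.parse import urljoin

def crawl(seed, max_pages=10):
    queue = deque([seed])   # pages waiting to be visited
    seen = {seed}           # URLs already queued, to avoid revisiting
    while queue and len(seen) < max_pages:
        url = queue.popleft()
        try:
            html = urlopen(url).read().decode('utf-8', errors='ignore')
        except Exception:
            continue        # skip pages that fail to download
        # find the links in the page and queue any we have not seen yet
        for link in re.findall(r'href="(.*?)"', html):
            link = urljoin(url, link)
            if link.startswith('http') and link not in seen:
                seen.add(link)
                queue.append(link)
    return seen

pages = crawl('http://www.example.com/')  # placeholder seed URL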
    if hasattr(e, 'code') and 500 <= e.code < 600:
        # retry 5XX HTTP errors
        html = download4(url, user_agent, num_retries - 1)
    return html
5. Proxy support
Sometimes we need to use a proxy to access a website. For example, Netflix blocks most countries outside the United States. The requests module can handle proxies easily, but here the function is implemented with urllib2:
import urllib2
import urlparse
def download5(url, user_agent='wswp', proxy=None
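Since the download5 snippet above is cut off, here is a hedged Python 2 sketch of the same proxy idea (not the book's complete function; the URL and proxy address are placeholders):
import urllib2
import urlparse

url = 'http://www.example.com/'
proxy = 'http://127.0.0.1:8087'  # placeholder proxy address
# map the URL's scheme (http/https) to the proxy and install it on an opener
proxy_params = {urlparse.urlparse(url).scheme: proxy}
opener = urllib2.build_opener(urllib2.ProxyHandler(proxy_params))
request = urllib2.Request(url, headers={'User-agent': 'wswp'})
html = opener.open(request).read()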
Reprint: please indicate the author and source: http://blog.csdn.net/c406495762
GitHub code: https://github.com/Jack-Cherish/python-spider
Python version: Python 3.x
Running platform: Windows
IDE: Sublime Text 3
PS: This article is a GitChat online sharing article, published on September 19, 2017. Activity address: http://gitbook.cn/m/mazi/activity/59b09bbf015c905277c2cc09