The Baidu Tieba (Baidu Bar) crawler is built on essentially the same principle as the Qiushibaike crawler: dig the key data out of the page source, then store it in a local TXT file.
Project content:
Use Python to write a web crawler for Baidu Tieba.
How to use:
Create a new file named bugbaidu.py, copy the code into it, and double-click to run it.
Program function:
Grab the posts written by the thread starter (the OP), package them, and save them to a local TXT file.
Principle Explanation:
First, take a quick look at any thread. After clicking "view the OP only" and switching to the second page, the URL changes slightly, becoming:
http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1
As you can see, see_lz=1 means "show only the OP's posts" and pn=1 is the page number; remember these two parameters for later. This is the URL pattern we will build on.
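As a quick illustration, here is a minimal sketch of assembling that URL from the thread address and the two query parameters. It is written in Python 3 (the original tutorial almost certainly targeted Python 2, so this is an adaptation, not the author's exact code):

# Assemble the thread URL from its base address and the
# two query parameters observed above.
base_url = 'http://tieba.baidu.com/p/2296712428'
see_lz = 1  # 1 = show only the thread starter's (OP's) posts
pn = 1      # page number

url = '%s?see_lz=%d&pn=%d' % (base_url, see_lz, pn)
print(url)  # http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1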
The next step is to view the page source.
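Before anything can be parsed, the crawler has to download that source. The sketch below uses Python 3's urllib.request (again an assumption; code of the original era would have used urllib2) and decodes the response as GBK, which, as noted below, is the encoding Baidu uses:

import urllib.request

def get_page(url):
    # Download the raw HTML of one thread page and decode it
    # from GBK, the encoding Baidu Tieba serves.
    response = urllib.request.urlopen(url)
    return response.read().decode('gbk', errors='ignore')

html = get_page('http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1')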
First, let's dig out the title, which will be needed later when naming the output file.
You can see that Baidu uses GBK encoding, and that the title is wrapped in an h1 tag:
<h1 class="core_title_txt" title="[Original] Fashion Chief (about fashion, fame, career, love, inspirational)">[Original] Fashion Chief (about fashion, fame, career, love, inspirational)</h1>
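Given that markup, a regular expression can pull the title out. The pattern below is a sketch that assumes the core_title_txt class shown above; re.S lets the dot match across line breaks:

import re
import urllib.request

url = 'http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1'
html = urllib.request.urlopen(url).read().decode('gbk', errors='ignore')

# Assumes the <h1 class="core_title_txt" ...> markup shown above.
title_pattern = re.compile(r'<h1 class="core_title_txt[^>]*>(.*?)</h1>', re.S)
match = title_pattern.search(html)
title = match.group(1).strip() if match else 'untitled'
print(title)  # the title doubles as the TXT file name

Before using the title as a file name, it is worth stripping any characters that are illegal in file names.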
Similarly, the body of each post is marked with a combination of tag and class attributes; what remains is to write the regular expressions that match it.
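As a sketch of that step, the pattern below assumes each post body sits in a <div id="post_content_..."> block. That id is a guess at the markup rather than something confirmed by the text above, so check the actual page source before relying on it:

import re
import urllib.request

url = 'http://tieba.baidu.com/p/2296712428?see_lz=1&pn=1'
html = urllib.request.urlopen(url).read().decode('gbk', errors='ignore')

# Hypothetical pattern: each post body in a <div id="post_content_NNN"> block.
content_pattern = re.compile(r'<div id="post_content_\d+"[^>]*>(.*?)</div>', re.S)

with open('thread.txt', 'w', encoding='utf-8') as f:
    for post in content_pattern.findall(html):
        text = re.sub(r'<br\s*/?>', '\n', post)      # turn <br> into newlines
        text = re.sub(r'<[^>]+>', '', text).strip()  # drop any remaining tags
        f.write(text + '\n\n')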
Screenshot of the program running: