Python Crawler Practice (5): Crawling Zhaopin Job Listings
A few days ago, a colleague asked me to crawl data-analysis job postings from Zhaopin so that he could do some analysis on them. I originally wanted to use Scrapy, but for reasons I never fully worked out the crawled data did not match what the site actually returned: a search for data-analysis jobs in Hangzhou shows roughly 5,000 postings, yet Scrapy collected only a little over 4,000 before my IP address was banned, and the free proxy IPs I tried were unreliable. Perhaps Scrapy was simply crawling too fast (or perhaps I am just not proficient with the framework), so in the end I used plain requests and crawled slowly and carefully, trading speed for data quality. I have already handed the data over to him; if possible, I will share his data-analysis write-up later.
Ideas
In fact, once you take a few conventional anti-crawling countermeasures, the site can be crawled without much trouble. The main thing to watch is your IP address:
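As an example of one such conventional countermeasure, sending a browser-like User-Agent header (rotated at random) helps avoid the most basic blocking. This is a minimal sketch; the User-Agent strings below are illustrative placeholders, not the ones used in the original crawl:

```python
import random

# A small pool of browser-like User-Agent strings (illustrative values,
# not the exact ones used in the original crawl).
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_12) AppleWebKit/537.36",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36",
]

def build_headers():
    """Return request headers with a randomly chosen User-Agent."""
    return {
        "User-Agent": random.choice(USER_AGENTS),
        "Accept-Language": "zh-CN,zh;q=0.9",
    }
```

Passing `headers=build_headers()` with each request makes successive requests look slightly less uniform than a bare client default.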
My approach was to sleep for a randomly chosen 1-3 seconds after crawling each job posting:

import random
import time

p = random.randint(1, 3)
time.sleep(p)
Nothing else was special. I sacrificed crawling speed for reliability, so the crawl was slow; I am not familiar with multithreading or multiprocessing, so I did not use them.
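The delay idea above can be sketched as a small crawl loop. This is an assumed reconstruction, not the author's actual code: it uses stdlib urllib instead of the requests library the post mentions, and the fetch and sleep functions are injectable so the loop can be exercised without hitting the network:

```python
import random
import time
import urllib.request

def fetch(url):
    """Fetch one page with a browser-like User-Agent.
    (Plain urllib here; the original crawl used the requests library.)"""
    req = urllib.request.Request(url, headers={"User-Agent": "Mozilla/5.0"})
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")

def crawl(urls, fetch_func=fetch, sleep_func=time.sleep):
    """Fetch each URL in turn, sleeping a random 1-3 seconds between jobs."""
    pages = []
    for url in urls:
        pages.append(fetch_func(url))
        sleep_func(random.randint(1, 3))  # the polite per-job delay
    return pages
```

Injecting `fetch_func` and `sleep_func` also makes it easy to swap in a retry wrapper or a longer back-off later without touching the loop itself.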
Practical operations
The related code has been modified and debugged.
Target website: Zhaopin
Implementation: crawl the postings returned by a Zhaopin search for "data analysis", extracting fields such as job title, salary, and required work experience. For details, see the complete code.
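Extracting those fields from each posting might look like the sketch below. Zhaopin's real markup is different, so the tag names, class names, and sample fragment here are all hypothetical; the actual selectors are in the full code on GitHub:

```python
import re

# Illustrative HTML fragment; Zhaopin's real markup differs, so the
# tag and class names below are hypothetical.
SAMPLE = """
<div class="job">
  <span class="title">Data Analyst</span>
  <span class="salary">8000-12000</span>
  <span class="experience">1-3 years</span>
</div>
"""

FIELD_RE = re.compile(r'<span class="(title|salary|experience)">([^<]+)</span>')

def parse_job(html):
    """Extract job title, salary, and experience from one posting block."""
    return {name: value.strip() for name, value in FIELD_RE.findall(html)}
```

A real crawler would likely use an HTML parser such as BeautifulSoup rather than regular expressions, but the field-extraction idea is the same.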
Data: stored on Baidu Network Disk. Link: http://pan.baidu.com/s/1i5okiZb password: xnig
For complete code details, see my github: https://github.com/pujinxiao/zhilian
Author: Jin Xiao
Source: http://www.cnblogs.com/jinxiao-pu/p/6682293.html
This article is copyrighted jointly by the author and the blog. You are welcome to repost it, but without the author's consent you must keep this statement and provide a link to the original article on the reposted page.