First Experience with Scrapy
The Python 2 and Python 3 development environments were set up in the previous section.
Step 1: Enter the development environment with workon article_spider:
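Assuming virtualenvwrapper from the previous section, activating the environment looks like this:

```shell
# Activate the virtualenv created earlier (requires virtualenvwrapper)
workon article_spider
# The prompt should now show (article_spider)
```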
Some errors may occur while installing Scrapy. They are usually caused by missing compiled dependencies, and because they come up so often, knowing the fix is very practical: download the matching pre-built wheel from http://www.lfd.uci.edu/~gohlke/pythonlibs/ and install it with pip. The details are not covered here.
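For example, when Twisted fails to build on Windows (a common case). The exact wheel filename depends on your Python version and architecture, so the one below is only illustrative:

```shell
# Download the matching .whl from the Gohlke page first, then:
pip install Twisted-17.1.0-cp35-cp35m-win_amd64.whl
# With the dependency in place, Scrapy itself installs normally
pip install scrapy
```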
Then switch to the project directory, with our new virtual environment active, and create a Scrapy project: ArticleSpider.
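The commands for this step; the workspace path is a placeholder for your own directory:

```shell
cd /path/to/your/workspace   # use your own project directory here
scrapy startproject ArticleSpider
```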
This generates the project skeleton, which we import into PyCharm:
scrapy.cfg: the project's configuration file.
ArticleSpider/: the project's Python module; you will add your code here.
ArticleSpider/items.py: the project's item definitions.
ArticleSpider/pipelines.py: the project's pipelines.
ArticleSpider/settings.py: the project's settings file.
ArticleSpider/spiders/: the directory where spider code is stored.
Return to the DOS window and create a spider from the basic template.
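The template command looks like this; the spider name jobbole and the domain blog.jobbole.com are taken from the jobbole.py file discussed below:

```shell
cd ArticleSpider
# `basic` is the default template, so -t basic may also be omitted
scrapy genspider -t basic jobbole blog.jobbole.com
```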
The newly created files now appear in PyCharm:
For easier development later, create a main.py file for debugging.
This is its code content: sys is imported so the project directory can be added to the path, which makes the scrapy command take effect when called from here. It is best not to hard-code the path; obtaining it through the os module is more flexible. execute() is what actually runs the command.
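A minimal sketch of such a main.py. The spider name jobbole matches the file below, and the execute() call is guarded so the sketch also runs harmlessly outside a real Scrapy project:

```python
import os
import sys

# Derive the project directory from this file's location instead of
# hard-coding it, so the script keeps working if the project moves.
project_dir = os.path.dirname(os.path.abspath(__file__))
sys.path.append(project_dir)

if __name__ == "__main__":
    # Only invoke Scrapy when a project is actually present; this is
    # equivalent to typing `scrapy crawl jobbole` on the command line.
    if os.path.exists(os.path.join(project_dir, "scrapy.cfg")):
        from scrapy.cmdline import execute
        execute(["scrapy", "crawl", "jobbole"])
```

Running main.py under the PyCharm debugger then lets you set breakpoints inside the spider's callbacks.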
jobbole.py contents: XPath expressions are used to extract each article's fields, including the title, posting time, comment count, and like count.
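To illustrate the kind of extraction involved, here is a standalone sketch using only the standard library's limited XPath support. The HTML snippet and class names are invented for illustration; in the real spider the equivalent calls are made on the live page with response.xpath(...), whose selectors must be read off the target site's actual markup:

```python
import xml.etree.ElementTree as ET

# Invented markup standing in for one article page.
html = """
<div class='entry-header'>
  <h1>A Sample Post</h1>
  <p class='entry-meta'>2017/05/18 &#183; tagged</p>
  <span class='comments'>3 comments</span>
  <span class='vote-up'>25</span>
</div>
"""

root = ET.fromstring(html)

# In a Scrapy spider this would be, e.g.:
#   response.xpath("//div[@class='entry-header']/h1/text()").extract()
title = root.find(".//h1").text
date = root.find(".//p[@class='entry-meta']").text.strip().split()[0]
comments = int(root.find(".//span[@class='comments']").text.split()[0])
likes = int(root.find(".//span[@class='vote-up']").text)

print(title, date, comments, likes)  # → A Sample Post 2017/05/18 3 25
```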
While writing this spider we find that re-running the whole crawler in PyCharm for every debugging round is troublesome, because Scrapy has considerable startup overhead; scrapy shell is a much handier way to debug.
Point the shell at the target page's URL, and now we can debug much more happily.
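For example (the article ID in this URL is made up; substitute a real post from the target site):

```shell
# Fetch the page once and drop into an interactive shell
# with `response` already bound to the downloaded page
scrapy shell http://blog.jobbole.com/12345/
# Then selectors can be tried interactively, e.g.:
# >>> response.xpath("//div[@class='entry-header']/h1/text()").extract()
```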
That concludes today's first experience with Scrapy.