Using urllib's robotparser module, we can parse a website's Robots protocol.
1. Robots Protocol
The Robots protocol, also known as the crawler protocol or robot protocol, has the full name Robots Exclusion Standard (web crawler exclusion standard). It is used to tell search-engine crawlers which pages may be crawled and which may not. It is usually a text file named robots.txt, placed in the root directory of the website.
When a search crawler visits a site, it first checks whether a robots.txt file exists in the site's root directory. If it does, the crawler crawls only within the scope that file defines; if not, the crawler will access all directly reachable pages.
robots.txt sample
This example allows all crawlers to crawl only the public directory. Save the content as robots.txt, then place it in the root directory of the website, alongside the site's entry file, such as index.html.
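A minimal sketch of such a robots.txt (the /public/ directory is the one referred to above):

```
User-agent: *
Disallow: /
Allow: /public/
```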
- User-agent: specifies the name of the crawler the rules apply to; there must be at least one such line, and * matches all crawlers
- Disallow: specifies a directory that may not be crawled; / means that no page may be crawled
- Allow: usually used together with Disallow rather than on its own, to carve out exceptions to a restriction; with Disallow set to / and Allow set to /public/, no page may be crawled except those under the public directory
- Forbid all crawlers from accessing any directory (snippets for these four cases are sketched after this list):
- Allow all crawlers to access any directory (an empty robots.txt file also works):
- Forbid all crawlers from accessing certain directories of the website:
- Allow only one specific crawler to access the site:
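The following robots.txt sketches illustrate the four cases above; the directory names /private/ and /tmp/ and the choice of Baiduspider as the permitted crawler are only illustrative assumptions.

Forbid all crawlers from accessing any directory:

```
User-agent: *
Disallow: /
```

Allow all crawlers to access any directory:

```
User-agent: *
Disallow:
```

Forbid all crawlers from accessing certain directories:

```
User-agent: *
Disallow: /private/
Disallow: /tmp/
```

Allow only one crawler (here Baiduspider) to access the site:

```
User-agent: Baiduspider
Disallow:
User-agent: *
Disallow: /
```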
2. Crawler Names
Each search crawler has a fixed name (the User-agent it identifies itself with).
Common search crawler names and corresponding websites
| Crawler Name | Website |
| --- | --- |
| Baiduspider | www.baidu.com |
| Googlebot | www.google.com |
| 360Spider | www.so.com |
| YoudaoBot | www.youdao.com |
| ia_archiver | www.alexa.cn |
| Scooter | www.altavista.com |
3. robotparser
The robotparser module is used to parse robots.txt.
The module provides a RobotFileParser class, which can determine, based on a website's robots.txt file, whether a given crawler has permission to crawl a given page.
- Just pass the robots.txt link in the constructor:
- urllib.robotparser.RobotFileParser(url="")
- You can also leave it empty and set the link later with the set_url() method (both ways are sketched after this list)
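A minimal sketch of both ways of creating the parser (the example.com URL is just a placeholder):

```python
from urllib.robotparser import RobotFileParser

# Pass the robots.txt link directly in the constructor ...
rp = RobotFileParser(url="http://www.example.com/robots.txt")

# ... or create the object empty and set the link afterwards with set_url()
rp = RobotFileParser()
rp.set_url("http://www.example.com/robots.txt")
```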
Common methods:
- set_url(): sets the link to the robots.txt file (if the link was already passed when creating the RobotFileParser object, this method is not needed)
- read(): reads the robots.txt file and parses it; it returns nothing, but it performs the read and parse operation, and if it is not called, every subsequent judgment will be False
- parse(): parses robots.txt content; the argument passed in is the lines of a robots.txt file, which are parsed according to the robots.txt syntax rules
- can_fetch(): takes two parameters, the first being a User-agent and the second the URL to crawl; it returns True or False, indicating whether that crawler may fetch the URL
- mtime(): returns the time at which robots.txt was last fetched and parsed; this matters for long-running search crawlers, which need to check periodically and fetch the latest robots.txt
- modified(): also useful for long-running search crawlers; it sets the time of the last fetch and parse of robots.txt to the current time
Take Blog Park as an example: a result of True indicates that the page can be crawled.
First create the RobotFileParser object, then set the robots.txt link with set_url(), read and parse it with read(), and finally use can_fetch() to determine whether a page can be crawled.
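A minimal sketch of those steps, assuming Blog Park's robots.txt lives at http://www.cnblogs.com/robots.txt (the printed results depend on the live file):

```python
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
rp.set_url('http://www.cnblogs.com/robots.txt')  # link to the site's robots.txt
rp.read()  # fetch and parse; without this call, can_fetch() always returns False

# Ask whether a given crawler may fetch a given URL
print(rp.can_fetch('*', 'http://www.cnblogs.com/'))           # e.g. True
print(rp.can_fetch('Baiduspider', 'http://www.cnblogs.com/'))  # depends on the rules
```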
Alternatively, you can read and parse the file yourself with the parse() method.
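A sketch of that variant, downloading robots.txt with urlopen() and handing its lines to parse() (same hypothetical URL as above):

```python
from urllib.request import urlopen
from urllib.robotparser import RobotFileParser

rp = RobotFileParser()
# Download the robots.txt content ourselves, then pass its lines to parse()
content = urlopen('http://www.cnblogs.com/robots.txt').read().decode('utf-8')
rp.parse(content.splitlines())
print(rp.can_fetch('*', 'http://www.cnblogs.com/'))
```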