The Robots protocol for Web robots
Web robots, also called crawlers, recursively traverse a Web site, following links to fetch page after page.
The robots protocol (robots.txt) is a voluntary-constraint mechanism. Because some Web sites do not want robots to look at private content on their sites, the robots protocol was proposed: a Web site can place a robots.txt file in its root directory that records which files robots may access and which they may not. A robot that is willing to follow the protocol first fetches robots.txt from the site's root directory when it visits the site, and checks whether it has permission to retrieve a given file.
Note: the robots.txt resource does not have to exist as an actual file in the Web site's filesystem; it can be generated dynamically by a gateway application.
The robot requests robots.txt with the GET method. If the resource exists, the Web site returns it in a text/plain body; if it does not exist, the server returns 404, which tells the robot that the site places no restrictions on it.
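A rough sketch of this exchange, using Python's standard http.client module (the host name and the bot's User-Agent string below are made up for illustration):

```python
import http.client

def fetch_robots_txt(host):
    """Fetch /robots.txt from a site with a plain GET request.

    Returns the rules text if the site serves one, or None if the
    resource does not exist (404), meaning the site imposes no
    restrictions on robots.
    """
    conn = http.client.HTTPConnection(host)
    conn.request("GET", "/robots.txt",
                 headers={"User-Agent": "example-bot/1.0"})  # placeholder robot name
    resp = conn.getresponse()
    body = resp.read()
    conn.close()

    if resp.status == 200:
        # The rules come back in a text/plain entity body.
        return body.decode("utf-8", errors="replace")
    if resp.status == 404:
        return None  # no robots.txt: no restrictions on this site
    return None      # other statuses are out of scope for this sketch

rules = fetch_robots_txt("example.com")
```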
robots.txt file format:

User-agent: <robot-name1>   (case-insensitive)
Disallow: /private

User-agent: <robot-name2>
Disallow: /protect
If the robot finds no matching rule in the file, its access is unrestricted.
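Python's standard library ships a parser for this format, urllib.robotparser. The sketch below feeds it the example rules above (the robot names are placeholders) and asks whether particular paths may be fetched:

```python
from urllib import robotparser

# Example rules matching the format shown above; robot names are placeholders.
rules = """\
User-agent: robot-name1
Disallow: /private

User-agent: robot-name2
Disallow: /protect
"""

rp = robotparser.RobotFileParser()
rp.parse(rules.splitlines())

print(rp.can_fetch("robot-name1", "/private/data.html"))     # False: /private is disallowed for this robot
print(rp.can_fetch("robot-name2", "/private/data.html"))     # True: only /protect is off limits for it
print(rp.can_fetch("some-other-bot", "/private/data.html"))  # True: no matching rule, access unrestricted
```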
"HTTP authoritative guide" reading notes five