About the robots.txt file: search engines automatically crawl webpages on the Internet and collect page information through a program called a robot (also known as a spider). You can create a plain text file named robots.txt in your website's root directory to declare which parts of the site you do not want robots to access. In this way, some or all of the site's content is kept out of the search engine's index, or you can ensure that a given search engine indexes only the content you specify.
When you access xxx/robots.txt, you can see that /admin and /bbs exist on the website. /admin is the backend management directory, and exposing it in this plain text file is not safe. In that case, you can block access to .txt files to improve the site's security.
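For illustration only, a robots.txt that leaks these paths might look like the following; the exact rules are an assumption, since only /admin and /bbs are mentioned above:
- User-agent: *
- Disallow: /admin
- Disallow: /bbs
Anyone who reads this file immediately learns that a backend management directory exists.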
Modify the nginx.conf file: vim /usr/local/nginx/conf/nginx.conf
- location ~* \.(txt|doc)$ {
- root /usr/local/nginx/html;
- deny all;
- }
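This location block belongs inside a server block in nginx.conf. A minimal sketch of the surrounding context is shown below; the listen port and server_name are placeholders, not values from this article:
- server {
- listen 80;
- server_name example.com;
- location ~* \.(txt|doc)$ {
- root /usr/local/nginx/html;
- deny all;
- }
- }
The ~* modifier makes the match case-insensitive, so ROBOTS.TXT is blocked as well, and deny all makes nginx answer such requests with 403 Forbidden.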
Specify the configuration for the 403.html error page:
- error_page 403 /403.html;
- location = /403.html {
- root html;
- }
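Because root html; resolves 403.html relative to the nginx installation prefix, the file must actually exist there. Assuming the default prefix used above, a minimal page can be created like this (the page text is just an example):
- echo "403 Forbidden" > /usr/local/nginx/html/403.html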
Reload the configuration file
- /usr/local/nginx/sbin/nginx -s reload
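You can also check the configuration for syntax errors first with nginx's test switch (same install prefix assumed):
- /usr/local/nginx/sbin/nginx -t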
When you access robots.txt again, you are told that the file cannot be accessed.
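You can also verify this from the command line; the hostname below is a placeholder for your own site:
- curl -I http://example.com/robots.txt
The response headers should now start with HTTP/1.1 403 Forbidden.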
This approach protects the website to a certain degree and removes one avenue of attack: with robots.txt readable, an attacker can guess the website's directory structure, or even discover the actual directories and files.
Of course, you can also block other specified file types, such as .doc and .xsl, in the same way.
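For instance, here is a sketch that extends the earlier regular expression to also cover .xsl files (the extension list is illustrative):
- location ~* \.(txt|doc|xsl)$ {
- root /usr/local/nginx/html;
- deny all;
- }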
That is how Nginx disables access to robots.txt.