Question: I want to write my own crawler — can I fill in the User-Agent field with whatever I like?
Answer: No. You must enter the name of the person you like [1]. Wait a day, and you will see their name in the browser statistics.
==========================================================
Well, since some people actually disagree, let me write something serious and explain why the answer above is technically sound.
According to RFC 1945, Hypertext Transfer Protocol -- HTTP/1.0:

"The User-Agent request-header field contains information about the user agent originating the request. This is for statistical purposes, the tracing of protocol violations, and automated recognition of user agents for the sake of tailoring responses to avoid particular user agent limitations. Although it is not required, user agents should include this field with requests."
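As a concrete illustration of what the RFC describes, here is a minimal sketch (the host, path, and product token "MyCrawler/1.0" are all invented for the example) of the raw text of an HTTP/1.0 request carrying a User-Agent header:

```javascript
// Build the raw text of an HTTP/1.0 GET request.
// "MyCrawler/1.0" is a made-up product token, used only for illustration.
function buildRequest(host, path, userAgent) {
  return [
    "GET " + path + " HTTP/1.0",
    "Host: " + host,
    "User-Agent: " + userAgent, // the header described in RFC 1945, section 10.15
    "",                         // blank line terminates the header section
    ""
  ].join("\r\n");
}

var req = buildRequest("example.com", "/", "MyCrawler/1.0");
// The server can now log this User-Agent line for its statistics.
```

Whatever string you pass as the third argument is exactly what shows up in the server's logs — which is the whole basis of the joke above.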
So the User-Agent should not be left blank; it is useful, and the answer's "No" is technically correct. One of the stated purposes of the User-Agent is statistics — for example, identifying which crawler a request came from — and the answer above satisfies even that requirement.
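The "statistics" purpose can be made concrete with a small sketch: tallying the User-Agent values seen in an access log (the log entries below are invented):

```javascript
// Count how often each User-Agent value appears in a list of log entries.
function uaStats(userAgents) {
  var counts = {};
  for (var i = 0; i < userAgents.length; i++) {
    var ua = userAgents[i];
    counts[ua] = (counts[ua] || 0) + 1;
  }
  return counts;
}

// Hypothetical log: one browser visit, two visits from our crawler.
var stats = uaStats([
  "Mozilla/5.0 (Windows NT 10.0) Gecko/20100101 Firefox/89.0",
  "MyCrawler/1.0",
  "MyCrawler/1.0"
]);
// The crawler is now clearly identifiable in the numbers.
```

This is also exactly where the name you put in the field would surface a day later.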
[1] The "must" in the answer is not a technical requirement; I admit it is a bit of programmer humor. My actual recommendation is that you do not fill the field in arbitrarily. Some services are anti-crawler and keep a dictionary of known browser User-Agents: if yours does not match a browser on the market, your requests are simply discarded. So if you follow the top answer and send the girl's name with every request, only to watch the server return 404 to laugh at you, it seems even lonelier. It is best to follow the standard format and modify only some fields. I remember a Chinese K-Touch (Tianyu) phone whose UA declared it to be an "iPhone Android" — detecting that on the server side was painful.

Browser: Hello, I am XX.
Server: Fine, whoever you are — I'm not checking IDs today.

You can also guess why every browser other than Firefox still claims compatibility with "Mozilla"... Yes. And if the website you want to crawl imposes no UA restrictions, you can crawl it freely. When a site does sniff the UA, it can look like this:
<script>
function nochrome() {
    alert('nochrome');
    document.execCommand("stop");
    location.href = "about:blank";
}

var f = false;
if (navigator.userAgent.toLowerCase().indexOf("chrome") > -1) {
    f = true;
}
try {
    // twGetRunPath is an extension API exposed by some Chinese browsers;
    // it returns the path of the running executable.
    if (window.external && window.external.twGetRunPath) {
        var r = external.twGetRunPath();
        if (r && r.toLowerCase().indexOf("chrome") > -1) {
            f = true;
        }
    }
} catch (ign) {
    f = false;
}
f && nochrome();
</script>
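The script above sniffs the UA on the client side. The server-side counterpart mentioned in [1] — an anti-crawler dictionary that drops requests whose User-Agent does not look like a known browser — might be sketched like this (the browser list and function name are my own, not any real service's logic):

```javascript
// A toy anti-crawler filter: accept only UA strings that mention a
// browser from a fixed dictionary, and drop everything else.
var KNOWN_BROWSERS = ["chrome", "firefox", "safari", "msie", "opera"];

function isKnownBrowser(userAgent) {
  var ua = (userAgent || "").toLowerCase();
  for (var i = 0; i < KNOWN_BROWSERS.length; i++) {
    if (ua.indexOf(KNOWN_BROWSERS[i]) > -1) {
      return true;
    }
  }
  return false;
}

// A standard-looking UA passes; a bare girl's name gets rejected.
var passes = isKnownBrowser("Mozilla/5.0 (X11; Linux) Gecko/20100101 Firefox/89.0");
var rejected = !isKnownBrowser("Xiao Ming");
```

This is why a UA that is just a name gets your requests discarded before the server ever laughs at you with a 404.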
But in general the User-Agent is only used for statistics (our own app rewrites it anyway).