[Bkjia.com exclusive article] There are dozens of tools for searching log files for specific activities and events. This article describes the strategies to follow when searching log files, then gives examples of how to search logs manually.
For website optimization, search engine log analysis is essential, whether you run a small site with a few hundred indexed pages or a large or medium-sized site with millions. To do SEO well, you must analyze logs scientifically: the log records every event that happens on the server, including user access records.
Submitting your site to the search engines is an important step in promoting it. For your convenience, I have compiled the submission addresses of the main search engines and hope they help you. The addresses below have been tested and work; if you find dead links while using them, please let us know so we can serve you better. You are also welcome to recommend good ones.
Not every site problem can be diagnosed directly from webmaster tools; the information they provide usually surfaces only after a problem has already occurred. As SEOers, we need to learn to read the site's hidden information. For example, what effect have the external links built over the past few days had? Which parts of our content are search engine spiders most drawn to?
When your site runs into a problem, you may analyze the causes in many ways, but the first check should be whether the crawler has any record of visiting your site. If not, your links are failing to attract the crawler; if so, look at the returned status code and analyze the other causes from there. Finding the root cause lets you solve the problem more effectively.
If you want to check for Baidu's crawler, search for it directly in the log text file (for example, search for the string Baiduspider).
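A minimal sketch of that search using grep; the file name and log lines below are invented sample data, not from any real server:

```shell
# Create a small sample access log (hypothetical lines) to search.
cat > access_sample.log <<'EOF'
61.135.168.39 - - [19/Aug/2008:00:09:12 +0800] "GET /index.html HTTP/1.1" 200 1234 "-" "Baiduspider"
10.0.0.5 - - [19/Aug/2008:00:10:01 +0800] "GET /about.html HTTP/1.1" 200 4321 "-" "Mozilla/5.0"
EOF

# -i: case-insensitive match; -c: print the count of matching lines
grep -ic "baiduspider" access_sample.log
```

The same command works on a real log file; point it at your server's access log instead of the sample.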
Expansion: find /root -size +20k -a -size -50k — the -a is a logical AND, so find matches files larger than 20k and smaller than 50k. find /root -size +20k -a -size -50k -exec ls -lh {} \; shows detailed listings of the files found. The standard form is: -exec [command] {} \;. grep string search (matches containing lines): 1) grep [options] string file; 2) adding -v finds the lines that do NOT contain the string; 3) adding -i makes the match case-insensitive. The difference between grep and find: find locates files by name or attributes, while grep searches the contents of files.
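The commands above can be tried out safely in a throwaway directory instead of /root; the directory and file names here are invented for illustration:

```shell
# Create test files of known sizes.
mkdir -p /tmp/findgrep_demo
# 30 KB file: inside the +20k .. -50k window
dd if=/dev/zero of=/tmp/findgrep_demo/mid.bin bs=1k count=30 2>/dev/null
# 10 KB file: below the window
dd if=/dev/zero of=/tmp/findgrep_demo/small.bin bs=1k count=10 2>/dev/null

# -a is a logical AND: larger than 20k AND smaller than 50k
find /tmp/findgrep_demo -size +20k -a -size -50k

# -exec runs a command on each match; the standard form is: -exec command {} \;
find /tmp/findgrep_demo -size +20k -a -size -50k -exec ls -lh {} \;

# grep basics: plain match, inverted match (-v), case-insensitive (-i)
printf 'Error: disk full\nok\n' > /tmp/findgrep_demo/app.log
grep "Error" /tmp/findgrep_demo/app.log      # lines containing the string
grep -v "Error" /tmp/findgrep_demo/app.log   # lines NOT containing it
grep -i "error" /tmp/findgrep_demo/app.log   # case-insensitive match
```

Note that -size rounds file sizes up to whole 1k blocks, so +20k means "strictly more than 20 blocks".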
Tags: log files, less command, searching a server. At work we often have to track down problems through the logs, but sometimes there are too many logs and we do not know when a given entry was printed. In that case, try this method: 1. Enter the directory where the log file is located.
Log analysis software Secilog 1.15 has been released, adding log search saving and database collection to the web log reports. See the previous article on version 1.13 if you are interested. This upgrade mainly adds the following feature: log search save.
I have the habit of rotating the nginx log on my server every day, so the daily visits from the major search engines always leave some 404-page records. Traditionally I only analyzed the log occasionally, but for friends with a lot of log data, manual screening may not be easy, so I gradually worked out an approach of my own.
PHP IIS log analysis of search engine crawler records, page 12. Note: modify the absolute path of the IIS logs in the iis.php file, for example $folder = "c:/windows/system32/logfiles/site log directory"; remember to include the trailing slash (/).
By default, IIS log files are stored in C:\WINDOWS\system32\LogFiles. Below is the server log of seoer edge; by viewing it, you can see how search engine spiders have crawled the site, for example:
2008-08-19 00:09:12 W3SVC962713505 203.171.226.111 GET /index.html - 80 - 61.135.168.39 Baiduspider+(+http://www.baidu.com/search
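Log lines like the one above can be filtered with awk; this sketch assumes the W3C field layout shown (user agent in field 11), and the file name and second log line are invented sample data:

```shell
# Recreate a small W3C-style IIS log to parse (sample data).
cat > ex080819.log <<'EOF'
2008-08-19 00:09:12 W3SVC962713505 203.171.226.111 GET /index.html - 80 - 61.135.168.39 Baiduspider
2008-08-19 00:09:30 W3SVC962713505 203.171.226.111 GET /style.css - 80 - 10.1.2.3 Mozilla/5.0
EOF

# Field 11 is the user agent in this layout; print method, URI, and agent
# for spider hits only.
awk '$11 ~ /Baiduspider/ {print $5, $6, $11}' ex080819.log
```

Swapping Baiduspider for Googlebot or another agent string filters for a different spider.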
1. Identify the search engine:
Before the LogFormat line in the /etc/httpd/conf/httpd.conf file, add the following to determine whether a request comes from a spider or a real user:
SetEnvIfNoCase User-Agent "(googlebot|mediapartners-google|baiduspider|msnbot|sogou spider|sosospider|YodaoBot|yahoo)" robot
2. Define the log format:
Add a line under the LogFormat entry in the httpd.conf file to set a new log format.
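A sketch of how those two httpd.conf pieces might fit together; the format name "robotlog" and the log file name are illustrative assumptions, not from the original text:

```apache
# Mark spider requests by setting the "robot" environment variable
SetEnvIfNoCase User-Agent "(googlebot|mediapartners-google|baiduspider|msnbot|sogou spider|sosospider|YodaoBot|yahoo)" robot

# Define a log format and write only spider requests to a separate file
LogFormat "%h %l %u %t \"%r\" %>s %b \"%{User-Agent}i\"" robotlog
CustomLog logs/robot_access.log robotlog env=robot
```

With env=robot on the CustomLog line, only requests that matched the SetEnvIfNoCase pattern are written to robot_access.log, separating spider traffic from real users.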
Built on Lucene, SOLR provides a reliable, scalable, and highly customizable solution. SOLR handles creating and persisting indexes from the enterprise database to a local index, reduces the load on the database server, and provides a load-balancing solution: SOLR master and slave servers, distribution servers, and replication of the SOLR index. SOLR and Lucene? Lucene focuses primarily on the underlying design, while SOLR is responsible for the application-layer design; Lucene is essentially the underlying library.
Do you know how to submit your site to the search engines? Of course, search engines do not have a literal login entrance where you simply sign in, and I can tell you very responsibly that getting this wrong will have an adverse effect on your future rankings. Some search engines are also very picky; by nature they do not welcome every site.
Method 1: Use Linux iconv to convert the UTF-8 log to a GBK-encoded file, then run the statistics in a GBK environment.
UTF8() {
    LOG_FILE="/lcims/crontab_shell/outfile/lan_wlan_wo/socketmain.log141114_lan1"
    LOG_FILE_TMP="/lcims/crontab_shell/outfile/lan_wlan_wo/141114_lan1"
    echo "utf-8-----"
    # only Linux OS: use iconv
    iconv -f utf-8 -t gb2312 $LOG_FILE > $LOG_FILE_TMP
}
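A self-contained sketch of the same iconv conversion; the file names here are placeholders rather than the paths from the original script, and the sample content is ASCII so it survives the conversion unchanged:

```shell
# Create a small UTF-8 sample log (placeholder content).
printf 'line one\nline two\n' > utf8_sample.log

# -f: source encoding, -t: target encoding
iconv -f UTF-8 -t GB2312 utf8_sample.log > gbk_sample.log

# The converted file can now be searched in a GBK environment.
grep -c "line" gbk_sample.log
```

For real logs containing Chinese text, the conversion changes the byte encoding, so searches for Chinese keywords must be run against the file that matches the terminal's locale.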
This article mainly introduces how to use PHP to tally, from the nginx access log, the paths of 404 pages crawled by search engine spiders, with separate statistics for each search engine. For more information on nginx 404 pages, see below.
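The same per-spider 404 tally can be sketched with awk alone; the log lines below are invented sample data in the default nginx combined format, where the status code is field 9:

```shell
# Sample nginx access log (hypothetical entries).
cat > nginx_sample.log <<'EOF'
1.2.3.4 - - [22/Mar/2017:10:00:00 +0800] "GET /gone.html HTTP/1.1" 404 162 "-" "Baiduspider"
5.6.7.8 - - [22/Mar/2017:10:00:05 +0800] "GET /old.html HTTP/1.1" 404 162 "-" "Googlebot"
9.9.9.9 - - [22/Mar/2017:10:00:10 +0800] "GET /index.html HTTP/1.1" 200 512 "-" "Baiduspider"
EOF

# $9 is the status code in the combined format; count 404s per spider.
# Prints one "agent count" line per spider seen (order may vary).
awk '$9 == 404 {
  agent = "other"
  if ($0 ~ /Baiduspider/) agent = "Baiduspider"
  else if ($0 ~ /Googlebot/) agent = "Googlebot"
  count[agent]++
}
END { for (a in count) print a, count[a] }' nginx_sample.log
```

The PHP version the article describes follows the same logic: filter on the status code, classify by user agent, and accumulate counts per engine.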
Find the files ending in .txt in the /apps/tomcat/tomcat3/apache-tomcat-7.0.69/logs directory, search them for the keyword ifcmpecrservice, and print the line numbers:
find /apps/tomcat/tomcat3/apache-tomcat-7.0.69/logs -name '*.txt' | xargs grep -n "ifcmpecrservice"
In the result, the first column is the file and the second column is the line number. You can then view the file /apps/tomcat/tomcat3/apache-tomcat-7.0.69/logs/localhost_access_log.2017-03-22.txt