.NET solution for preventing spiders from repeatedly crawling dynamic URLs
Cause:
In the early days, search engine spiders were immature: poorly written site code that generated dynamic URLs could easily trap a crawling spider in an endless loop.
To avoid this, spiders tend not to read dynamic URLs, especially URLs that contain a "?" query string.
Solution:
1): Configure the route
The code is as follows:
routes.MapRoute("RentofficeList",
    "rentofficelist/{aredid}-{priceid}-{acreageid}-{sortid}-{sortnum}.html",
    new { controller = "Home", action = "RentOfficeList" },
    new[] { "Mobile.Controllers" });
The first parameter is the route name.
The second parameter is the URL pattern of the route; each route parameter sits in braces, and the parameters are separated by hyphens, as in {aredid}-{priceid}.
The third parameter is an object that contains the default route values.
The fourth parameter is a set of namespaces for the application.
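For context, the MapRoute call above belongs in the application's route registration. A minimal sketch, assuming the conventional Global.asax-based registration (the class name and the IgnoreRoute line are assumptions, not code from the original article):

using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Produces static-looking URLs such as rentofficelist/3-0-0-0-0.html
        routes.MapRoute("RentofficeList",
            "rentofficelist/{aredid}-{priceid}-{acreageid}-{sortid}-{sortnum}.html",
            new { controller = "Home", action = "RentOfficeList" },
            new[] { "Mobile.Controllers" });
    }

    protected void Application_Start()
    {
        RegisterRoutes(RouteTable.Routes);
    }
}

Register this spider-friendly route before any generic {controller}/{action} route, since routing uses the first pattern that matches.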
2): Set the link
<a href="@Url.Action("RentofficeList", new RouteValueDictionary { { "AredId", 0 }, { "PriceId", 0 }, { "AcreageId", 0 }, { "SortId", 0 }, { "SortNum", 0 } })">Default sorting</a>
The parameter values are written in order, following the URL pattern above.
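With all five values set to 0, the helper renders a static-looking link. The rendered HTML below is illustrative output, not from the original article:

<!-- Rendered HTML (illustrative) -->
<a href="/rentofficelist/0-0-0-0-0.html">Default sorting</a>

Passing, say, AredId = 3 instead would render /rentofficelist/3-0-0-0-0.html, matching the address shown at the end of this article.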
3): Get Parameters
The code is as follows:
int areaId = GetRouteInt("AredId"); // get the parameter

/// <summary>
/// Obtain a value from the route data.
/// </summary>
/// <param name="key">key</param>
/// <param name="defaultValue">default value</param>
/// <returns>The parsed value, or the default value if the key is missing or not numeric.</returns>
protected int GetRouteInt(string key, int defaultValue)
{
    int value;
    return int.TryParse(Convert.ToString(RouteData.Values[key]), out value) ? value : defaultValue;
}

/// <summary>
/// Obtain a value from the route data, defaulting to 0.
/// </summary>
/// <param name="key">key</param>
/// <returns>The parsed value, or 0.</returns>
protected int GetRouteInt(string key)
{
    return GetRouteInt(key, 0);
}
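A short usage sketch inside the action that the route targets; the BaseController name and the action body are assumptions for illustration (GetRouteInt only needs to live on a class the controller inherits from):

public class HomeController : BaseController // BaseController defines GetRouteInt
{
    public ActionResult RentOfficeList()
    {
        // Read each segment of rentofficelist/{aredid}-{priceid}-{acreageid}-{sortid}-{sortnum}.html
        int areaId = GetRouteInt("AredId");
        int priceId = GetRouteInt("PriceId");
        int acreageId = GetRouteInt("AcreageId");
        int sortId = GetRouteInt("SortId");
        int sortNum = GetRouteInt("SortNum");

        // ... filter the office listings with these values ...
        return View();
    }
}

Because every value falls back to 0, a malformed segment degrades gracefully instead of throwing, which is what you want when spiders request these URLs.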
Following the three steps above, the displayed URL is:
http://localhost:3841/rentofficelist/3-0-0-0-0.html
This avoids dynamic query-string parameters entirely: every page the site exposes looks like a static page.