I did not take URL-oriented search engine optimization into account when I wrote Jin Yuan Information Network (www.geofuture.net); I only started dealing with it once optimization work began. In this situation we need to consider static URLs, while keeping the original programs as unchanged as possible so that later maintenance is safer.
The code is as follows:

var id;
function getId() {
    var url = location.search;
    var request = new Object();
    if (url.indexOf("?") != -1) {
        var str = url.substr(1); // strip the leading "?"
        var strs = str.split("&");
        for (var i = 0; i < strs.length; i++) {
            request[strs[i].split("=")[0]] = unescape(strs[i].split("=")[1]);
        }
    }
    id = request["id"]; // the original listing breaks off here; presumably the "id" parameter is read out
}
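As a quick illustration (not from the original article: the query string and parameter names below are made-up examples), the same parsing logic can be tried on a hard-coded string outside a browser:

// A minimal sketch of how getId()'s parsing behaves; "?id=15&color=red" stands in for location.search.
var search = "?id=15&color=red";
var request = {};
var strs = search.substr(1).split("&");
for (var i = 0; i < strs.length; i++) {
    var pair = strs[i].split("=");
    request[pair[0]] = unescape(pair[1]);
}
console.log(request["id"]);    // prints "15"
console.log(request["color"]); // prints "red"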
You may not see it from the title, but anyone who knows a little SEO knows: the fewer URL parameters, the better. Both Baidu's and Google's website optimization guides state this point clearly. In practice, however, it is not necessarily so. Here is what I stumbled upon:
Baidu search results for "dhc120ml" (screenshot)
Google search results for "dhc120ml" (screenshot)
Comparing the two...
An e-commerce site has many search-keyword or category landing pages, and these often have long URLs such as: domain name + /products.html?q=Warrior&showtype=img&sort=istrade-desc. There is a need to turn them into short ones such as: domain name + /daxia/, and to let the operations staff add and modify the mappings themselves. Solution: an Nginx reverse proxy plus include. The added reverse-proxy rules are written to a separate file, which is then pulled into the Nginx configuration with include: location / { include /opt/ng...
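Roughly, such a setup could look like the sketch below. The excerpt above is cut off, so the include path /etc/nginx/short_urls.conf, the backend address, and the /daxia/ rule are illustrative assumptions, not the original configuration.

# Main Nginx configuration (inside the http block): the short-URL rules live in a
# separate file that operators can edit on their own.
server {
    listen 80;
    server_name example.com;                 # assumed domain

    include /etc/nginx/short_urls.conf;      # assumed path for the operator-maintained rules

    location / {
        proxy_pass http://127.0.0.1:8080;    # assumed backend application server
    }
}

# /etc/nginx/short_urls.conf: one location block per short URL, e.g. /daxia/
location /daxia/ {
    # Internally rewrite the short URL to the long search URL; visitors keep seeing /daxia/.
    rewrite ^ /products.html?q=Warrior&showtype=img&sort=istrade-desc? break;
    proxy_pass http://127.0.0.1:8080;        # assumed backend application server
}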
Example: when a student is added and there is no class data yet, you need to jump automatically to the class display page. Problem: students, classes, and the other data are shown in JSP pages loaded inside an iframe, while the Student Management button sits in the menu bar of main.html. Main page menu: Class Management, Student Management; the JSP inside the iframe keeps changing. Solution: in the Student Management JSP, find the Class Management tab of the parent page by id and click it. The code is as follows: ...(window.parent...
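Since the excerpt breaks off there, here is a sketch of the idea; the element id "classManageTab" and the click() call are assumptions, not the original code.

// Runs inside the Student Management JSP that is loaded in the iframe.
// It reaches up to the parent page (main.html) and "clicks" the Class Management menu item.
var tab = window.parent.document.getElementById("classManageTab"); // assumed id of the tab
if (tab) {
    tab.click(); // triggers the same handler as a real click, so the parent switches the iframe page
}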
#!/usr/bin/python
# -*- coding: utf-8 -*-
import sys
import re
import urllib2
from BeautifulSoup import BeautifulSoup

def search(key):
    # Request the search URL, substituting the keyword for the "key" placeholder
    search_url = 'http://www.baidu.com/s?ie=UTF-8&wd=key'
    req = urllib2.urlopen(search_url.replace('key', key))
    # Counter variable, used to record the page number
    count = 1
    # Main loop: crawl the URLs on each page until the last page
    while 1:
        print "\033[1;31mpage %s:\033[0m" % count
        html = req.read()
        soup = BeautifulSoup(html)
        ...