Hello everyone, I am Guangzhou SEO Shuibei. Yesterday I shared "Common Cases of Deleted Baidu Snapshots"; today I will compare div and table layouts and look at how site architecture affects SEO. A good site structure makes it easier for search engines to crawl and index a site and signals a friendly attitude toward them. So what makes a good site structure? Many people know the answer is div layout, but too many nested divs are also bad for SEO: spiders have difficulty following deeply nested code. Below are the problems a site structure should pay attention to:
First: The impact of div layout on SEO.
It can be said that designing a site with divs is structuring the site with SEO in mind. Building a site with SEO in mind means building it around how search engines work, so that the architecture is friendly to them. This is why we have always said that a website's SEO begins with its construction. Search engines read a page's code from top to bottom and left to right, so the position of key content within the page is particularly important. The right page layout is very advantageous for spider crawling, indexing, and keyword ranking. Here are three benefits of using div to design your site.
(1) Code simplification.
Compared with table layout, div layout reduces the amount of page code, which improves loading speed and makes crawling and indexing easier. It raises the spider's crawling efficiency, and efficient crawling in turn benefits how page quality is assessed.
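A minimal sketch of the difference (the IDs and widths here are hypothetical): the same two-column layout takes noticeably less markup with divs, because the presentation moves into an external CSS file.

```html
<!-- Table layout: presentation attributes are repeated in the markup itself -->
<table width="960" border="0" cellpadding="0" cellspacing="0">
  <tr>
    <td width="200" valign="top">sidebar ...</td>
    <td valign="top">main content ...</td>
  </tr>
</table>

<!-- Div layout: lighter markup; widths and floats live in the CSS file -->
<div id="sidebar">sidebar ...</div>
<div id="content">main content ...</div>
```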
(2) More efficient code modification.
With the div method, modifying the code only requires finding the corresponding ID in the CSS, which makes pages easier to change without breaking the layout of the rest of the page. Commenting the code is also necessary: first, it makes the code easier to adjust; second, it makes the location of key content easier to understand.
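For instance (a minimal sketch; the selector names and sizes are my own), widening the sidebar across the whole site means editing one rule in the shared CSS file rather than touching every page:

```css
/* style.css -- shared by every page; one change here updates the whole site */
#sidebar {
  float: left;
  width: 200px;  /* widen to 240px here; no HTML needs to be edited */
}
#content {
  margin-left: 220px;
}
```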
(3) Maintaining visual consistency.
Maintaining visual consistency is another important advantage of div. With nested tables, the display tends to drift between pages and between regions of a page. With the div method, all pages and regions are controlled by a single unified CSS file, which avoids such display deviations.
Second: The impact of directory structure on SEO.
The impact of directories on SEO comes down to directory depth and directory/file names. First, directory depth: search engines crawl content level by level, starting from the root directory. If your page is stored more than three levels deep, the search engine will have more difficulty reaching it and may give up on indexing the page. This is why we often say a site's structure should not exceed three levels.
Next is the impact of directory and file names on SEO. A directory's path and filename are also important factors in keyword ranking, and they are easily overlooked. For example, for an "SEO basics" page the filename seo-basis.html could be used (search engines have improved and now also support Chinese filenames, so a name like "SEO基础.html" or its urlencoded string can be used). Here "SEO basics" is the core keyword, with many related extensions below it, such as SEO basics learning, SEO basics programs, SEO basics promotion, and so on. These extended second-level keywords can then appear as sub-columns, that is, as second-level categories, and individual pages can target third-level keywords.
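To illustrate the urlencode point, here is a minimal sketch using Python's standard library (the keyword "SEO基础", i.e. "SEO basics", is just an example): a Chinese filename is percent-encoded as UTF-8 when it appears in a URL.

```python
from urllib.parse import quote, unquote

# Percent-encode a Chinese keyword so it can serve as a URL filename.
filename = quote("SEO基础") + ".html"
print(filename)           # SEO%E5%9F%BA%E7%A1%80.html

# The encoding is reversible, so the keyword is recoverable from the URL.
print(unquote(filename))  # SEO基础.html
```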
Third: The impact of static pages and robots on SEO.
First, static pages. Many people doing SEO deliberately emphasize static pages. They believe search engines prefer to crawl static pages, and that converting dynamic pages to static ones will make search engines like the site more and index more of it. In fact, search engines can index dynamic pages, the number of sites using dynamic pages is far larger than the number using static ones, and search engines have little preference between the two. Sometimes making pages static simply is not worth it, so do not put the cart before the horse. Of course, static pages can reduce system load to some extent and improve page access speed as well as system performance and stability; the gains and losses need to be weighed.
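One common middle ground is "pseudo-static" URLs produced by server-side rewriting: the URL looks static while the page stays dynamic, so no files need to be generated. A hedged sketch using Apache mod_rewrite (the script name and URL pattern are hypothetical):

```apache
# .htaccess sketch: serve /article/8.html from the dynamic script article.php?id=8
RewriteEngine On
RewriteRule ^article/([0-9]+)\.html$ article.php?id=$1 [L]
```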
Next, how robots rules are written. They come in two forms: robots.txt, and the robots attribute of a meta tag. robots.txt is a plain-text file placed in the root directory of the website. The first thing a search engine does when crawling a site is look for robots.txt in the root directory and then determine its crawling scope from the file's contents; if the file does not exist, the search engine crawls without restriction. It is best not to use robots to block search engine access: in my last case study there was exactly such a case, where robots blocked the search engine from crawling, the snapshot stopped updating, and the number of indexed pages dropped. The more commonly used meta values are nofollow (do not follow the links on the current page), noindex (do not index the current page), noarchive (do not keep a snapshot of the current page), nosnippet (do not show a description of the current page and do not keep a snapshot), and noodp (do not use the DMOZ directory description in search results). It should also be noted that nofollow can be used on its own in a link's rel attribute to stop search engines from following that particular link.
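As a sketch of the forms described above (the paths and URL are placeholders), first the plain-text file at the site root:

```
# robots.txt -- placed at the site root, e.g. http://www.example.com/robots.txt
User-agent: *
Disallow: /admin/
```

And the per-page and per-link forms:

```html
<!-- In a page's <head>: index the page and follow its links, but keep no snapshot -->
<meta name="robots" content="index,follow,noarchive">

<!-- nofollow on a single link via the rel attribute -->
<a href="http://example.com/" rel="nofollow">a link search engines should not follow</a>
```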
This article was originally published by Guangzhou SEO at http://www.shuibeiseo.com/bencandy/1-8.html. Please credit the source when reprinting, thank you.