A user asked: For a website, will a search engine crawl the subdirectories beneath a page? For example, will the search engine index http://www.companyname.com/subWeb/pagename.htm?

The short answer is "yes". For the subdirectories contained in a site, every search engine will traverse them, provided that the links into each subdirectory use a navigation scheme and URL structure the search engine can follow.

Website directory structure

Ideally, especially for a smaller site, the directory structure should be flat: the actual web pages live in the root, or at most one level of subdirectories down. For larger sites, two to three levels of subdirectories are ideal. From the search engine's point of view, a flat structure is best, with one exception: graphics, scripts, cgi-bin, and style sheets should be placed in subdirectories rather than in the root.

The URL structure also shows search engines and your visitors which pages you consider the most important on your site. In other words, if you think a page is very important, give it a top-level URL instead of burying it in a subdirectory.

The URL of a top-level web page generally looks like this:

http://www.companyname.com/pagename.htm

A URL containing one subdirectory generally looks like this:

http://www.companyname.com/subWeb1/pagename.htm

where companyname.com is the domain name, subWeb1 is the subdirectory name, and pagename.htm is the name of the web page.

A URL with two levels of subdirectories generally looks like this:

http://www.companyname.com/subWeb1/subWeb2/pagename.htm

where companyname.com is the domain name, subWeb1 is the first-level subdirectory, subWeb2 is the second-level subdirectory, pagename.htm is the name of a page in the second-level subdirectory, and so on.

When crawling a website, as long as your site provides a navigation scheme and URL structure the search engine can follow, the engine will usually traverse at least three levels of subdirectories. More important than the number of levels, however, is whether the pages in a subdirectory have external links from other sites. If your site has a fourth-level directory whose content is very important and attracts a large number of external links, you can rest assured that search engines will still crawl that fourth-level directory.

A small search engine marketing trick

In search engine marketing, many practitioners like the following trick: knowing that search engines automatically crawl multiple levels of subdirectories, they deliberately create a subdirectory named after a compound keyword phrase, to make sure the search engine sees the target keywords. In my opinion this trick has no real effect and is not advisable.

For example, a company that sells organic teas and uses this strategy might end up with the following URL and directory structure:

http://www.tranquiliteasorganic.com/Oolong-tea/Oolong.html

where:

1. tranquiliteasorganic.com is the domain name.
2. Oolong-tea is the subdirectory name; it contains the keyword phrase "oolong tea", separated by a hyphen.
3. Oolong.html is the name of the page inside that subdirectory.
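As a sketch, here is how the keyword-named subdirectory sits in the site's directory tree, alongside the alternative of keeping the same page at the top level (all names are the hypothetical ones from the example, not a real site):

    tranquiliteasorganic.com/
    ├── images/  css/  cgi-bin/      (graphics, style sheets, and scripts kept out of the root)
    ├── Oolong.html                   the top-level placement: /Oolong.html
    └── Oolong-tea/
        └── Oolong.html               the subdirectory placement: /Oolong-tea/Oolong.html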
So which is better, the URL that uses a subdirectory, http://www.tranquiliteasorganic.com/Oolong-tea/Oolong.html, or the top-level URL, http://www.tranquiliteasorganic.com/Oolong.html? For my part, I would not change the subdirectory structure simply to get a better ranking in search engines, because keywords in a domain name or URL matter little, if at all.

My answer depends on what kind of site this is. If there are many kinds of organic oolong tea, and the site offers a considerable number of unique, high-quality pages about oolong tea, then I recommend the subdirectory structure. For the sake of consistency and usability, I would also want subdirectories set up for every type of tea the site offers. But since I find it hard to believe there will be many unique, high-quality pages about oolong tea, I doubt this subdirectory is necessary.

Using the Robots Exclusion Protocol

On database-driven websites it is quite common to place similar or identical content in different subdirectories, because doing so improves the user experience. Take the tea site above: suppose the site keeps a separate subdirectory for each kind of tea, each offering a wide range of unique, high-quality pages, so the URL structures for oolong tea, green tea, and tea accessories look like this:

1. Oolong tea page: http://www.tranquiliteasorganic.com/Oolong-tea/Oolong.html
2. Green tea page: http://www.tranquiliteasorganic.com/Green-tea/Green.html
3. Tea accessories page: http://www.tranquiliteasorganic.com/Tea-accessories/accessories.html

If the site also sells its oolong and green teas together with tea-ware, it is logical to place the pages covering both teas and tea sets in all three directories: oolong tea, green tea, and tea accessories. From the standpoint of usability and user experience, this is a good strategy. Search engines, however, tend to treat such content as duplicate content. One reason search engines are wary of database-driven sites is precisely that they keep retrieving the same content over and over again.

So if the tea-and-tea-ware pages exist in all three subdirectories, will a search engine consider them duplicates and perhaps penalize the site for serving such content? Most likely the engine will show only the copy that has the most links pointing to it, and not the other copies on the site. At the same time, many unethical search engine marketers abuse exactly this approach, generating large amounts of duplicate content for identical information. A search engine may therefore well treat the site as spam and penalize it.

To keep the site safe, you can place a plain text file named robots.txt (the Robots Exclusion Protocol) at the root of the site, and declare in that file the duplicated sections you do not want robots to visit. This limits the scope of the search engine's crawl of your site. You should, however, analyze your site statistics carefully to see which copy of the duplicated content is used most often, and leave that subdirectory out of the exclusions.

In the case above, using a robots.txt file solves two problems. First, it tells the search engine that you are not deliberately serving duplicate content. Second, the user experience is not adversely affected: the duplicate subdirectories remain available to visitors.
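As a minimal sketch of such a file, assuming the copy under /Tea-accessories/ turns out to be the most used and the copies under the two tea directories are the ones to hide (the file name tea-sets.html is hypothetical, not from the original example):

    # http://www.tranquiliteasorganic.com/robots.txt
    # Rules for all robots
    User-agent: *
    # Hide the duplicate tea-and-tea-ware pages (hypothetical file names)
    # kept under the two tea directories
    Disallow: /Oolong-tea/tea-sets.html
    Disallow: /Green-tea/tea-sets.html
    # /Tea-accessories/ has no Disallow line, so it remains crawlable

Note that robots.txt only asks well-behaved robots to stay away; visitors can still reach every copy through the site's normal navigation, which is exactly the second point above.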
Conclusion: Generally speaking, search engines have no problem crawling subdirectories. If you find that dividing your site into a subdirectory structure gives your users a better experience, go ahead and do so. But do not create subdirectories just to get a search engine's attention; other strategies accomplish the same thing, cost far less of your time, and bring your site a better return on investment (ROI). The user's question touches on a hotly debated issue in the search engine industry: when is it most appropriate for a website to use subdirectories, subdomains, or mini-sites? Should a site owner build the site's URLs around target key phrases? Should subdirectory names contain key phrases? But that is a topic for another time. About the author: Shari Thurow is the marketing manager of Grantastic Designs, a company providing full-service search engine marketing, web design, and graphic design.