Objective
I have recently been doing a lot of SEO-related work. I used to write crawlers to scrape other people's web pages; now I write web pages for crawlers to fetch, and this role reversal feels quite remarkable. Through this work I realized that the crawlers I wrote aimed to extract specific information, while a search engine crawler gathers information from across the whole Internet, so the two differ completely in both their crawling mechanisms and in how they assign weight (ranking authority) to pages.
Here I want to focus on how to improve a website's weight. This is a summary of my own experience, combined with what I learned from conversations with more senior practitioners. It is certainly not complete, but it can serve as a set of SEO optimization suggestions.
SEO Methods for Website Optimization
Basic Tips (very basic, but very important methods)
Page header title, keywords, description: every page should have these three fields filled in, with content that reflects only what is actually on that page; this is very helpful to the crawler. There is some skill to writing this text well (you can study sites that do SEO well), and each field has rough length limits, so these details deserve attention.
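As a minimal sketch of what this looks like in a page's head block (the site name and all text values below are placeholders of my own, not from any real site):

```html
<head>
  <title>Handmade Ceramics | Example Shop</title>
  <meta name="keywords" content="ceramics, handmade pottery, tea cups">
  <meta name="description" content="Handmade ceramic cups and bowls,
    fired in small batches. A concise summary of the page works best here.">
</head>
```

Each page should get its own title and description rather than sharing one site-wide block.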
External links (backlinks): this is a long-established technique, originating with Google's PageRank algorithm. Increasing the number of backlinks effectively raises a site's weight, and backlinks from sites that themselves carry high weight work even better.
Original content: I did not know about this before, but it makes sense on reflection: if a site keeps producing new content, the crawler naturally has to crawl it more frequently, and the site's weight rises accordingly. Concretely, you can build a user community so that active users generate new content and submissions every day.
Static URLs: dynamic links full of query strings (with ? and &) are very unfriendly to SEO; crawlers often simply refuse to index them, so these dynamic query links need to be converted into static-looking URLs. You can follow REST conventions, or simply fold the query parameters directly into the URL path.
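A small sketch of this mapping in Python. The path layout (/category/&lt;id&gt;/page/&lt;n&gt;.html) and the parameter names are hypothetical conventions of my own; a real site would encode whatever parameters its pages actually take:

```python
from urllib.parse import urlparse, parse_qs

def to_static_url(dynamic_url):
    """Map a dynamic query-string URL to a restful-style static path.

    Illustrative only: the /category/<id>/page/<n>.html layout is a
    made-up convention, not a standard.
    """
    parsed = urlparse(dynamic_url)
    params = parse_qs(parsed.query)
    # Fall back to defaults when a parameter is absent.
    category = params.get("category", ["all"])[0]
    page = params.get("page", ["1"])[0]
    return f"/category/{category}/page/{page}.html"

# A URL like /list?category=5&page=2 becomes /category/5/page/2.html
```

In practice this rewrite usually lives in the web server or routing layer rather than in application code, but the idea is the same: the crawler sees a stable, query-free path.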
Detail Tuning (experience gained during this tuning process)
Tag-based optimization: crawlers extract content mostly from tags and anchor text; the crawlers I wrote earlier worked the same way. The specific optimizations are the following:
H tags: these help the crawler recognize the heading information on the page and noticeably improve the quality of the content that gets indexed.
The title attribute of the a tag: writing this clearly improves quality, and of course the anchor text itself matters just as much.
The alt attribute of the img tag: this gives the image a textual description, so image search can match these images against that text.
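The three points above can be illustrated in a few lines of markup (the text, paths, and file names are placeholders of my own):

```html
<!-- H tags signal the page's headline to the crawler -->
<h1>Spring Tea Festival</h1>

<!-- The title attribute plus descriptive anchor text -->
<a href="/tea/longjing" title="Longjing green tea">Longjing tea</a>

<!-- alt text lets image search index the picture -->
<img src="/img/longjing.jpg" alt="Dried Longjing green tea leaves">
```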
Remove useless pages to raise whole-site content quality: besides the pages that should be included, we should keep the crawler away from low-value pages such as "About Us". If only valuable pages end up indexed, the whole site's weight improves. This is done by adding rel="nofollow" to the a tags that link to those pages.
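Concretely, a link like this tells the crawler not to follow it (the href is a placeholder):

```html
<a href="/about-us" rel="nofollow">About Us</a>
```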
Access speed: this one is easy to understand: the faster a site opens, the higher its weight, and a site that loads painfully slowly is the last thing we want. Besides upgrading server configuration and bandwidth, you can also generate static versions of pages at the technical level, which both speeds up access and reduces back-end load.
Advanced Tips (these are genuinely clever tricks I did not know before)
Use URL rules to generate more pages: the total number of indexable pages a site exposes is a very important factor in its weight. Hand-writing static pages gets you only a handful no matter how exhausting the effort, yet some sites have millions of pages; how do they do it? Converting dynamic pages to static URLs is the first step, but that alone may not produce enough data. The really advanced trick is to make a fuss over search: let every distinct combination of search conditions form its own URL, since each set of search results is effectively a different page. The page count then grows as the number of combinations, so the final total is astonishing.
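A sketch of the combinatorial growth, assuming a site with a few search facets (the facet names and values below are invented for illustration; a real site would pull them from its database):

```python
from itertools import product

# Hypothetical search facets for a listings site.
facets = {
    "city": ["beijing", "shanghai", "shenzhen"],
    "category": ["rent", "sale"],
    "rooms": ["1", "2", "3"],
}

def facet_urls(facets):
    """Yield one static-looking URL per combination of facet values.

    With k facets of sizes n1..nk this yields n1*n2*...*nk pages,
    which is how page counts grow into the millions on large sites.
    """
    keys = sorted(facets)  # fixed order so each combination maps to one URL
    for combo in product(*(facets[k] for k in keys)):
        yield "/" + "/".join(f"{k}-{v}" for k, v in zip(keys, combo))

urls = list(facet_urls(facets))
# 2 categories x 3 cities x 3 room counts -> 18 distinct pages
```

Three small facets already yield 18 pages; a site with ten facets of ten values each would expose ten billion URL combinations, which is why real sites cap or curate which combinations they publish.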
URL structure: crawlers are sensitive to directory depth and URL length, but you should not blindly flatten everything to shallow URLs either; a sensible directory hierarchy makes the site's structure appear clearer, and a clearer structure earns higher weight.
Sitemap: popularly known as a site map, this is actually not for people to read but for crawlers. There are currently two kinds. One is built into the page structure itself: a page holding a large number of links to the site's deep pages, often simply a page list ordered by pinyin initial, which surfaces pages that would otherwise sit too deep. The other is an XML file written according to the Sitemap protocol that lists all of the site's pages so the crawler can index them quickly; there are online tools that generate this file in one click. I also found that Baidu and other search engines accept proactive Sitemap submission, which lets the engine learn about your site faster.
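A minimal example of the XML kind, following the sitemaps.org protocol (the domain, paths, and dates are placeholders):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>https://www.example.com/</loc>
    <lastmod>2024-01-15</lastmod>
    <changefreq>daily</changefreq>
    <priority>1.0</priority>
  </url>
  <url>
    <loc>https://www.example.com/articles/hello-world.html</loc>
  </url>
</urlset>
```

Only loc is required per URL; lastmod, changefreq, and priority are optional hints.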
robots.txt: this file declares the crawling rules for your site, so you can keep content that should not be indexed from being crawled at all.
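A short example of what such a file might contain (the paths are placeholders of my own):

```
# Hypothetical robots.txt; adjust the paths to your own site
User-agent: *
Disallow: /admin/
Disallow: /tmp/
Allow: /
Sitemap: https://www.example.com/sitemap.xml
```

The file lives at the site root (https://www.example.com/robots.txt), and the Sitemap line conveniently points crawlers at the XML sitemap mentioned above.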
In Closing
That is all I can think of for now. Some of these things are fairly easy to do, while others take sustained effort; doing SEO seriously is itself a long-term process. Finally, if you urgently need to raise your site's exposure, my impression is that SEM is the more reliable route, though it has its own specific set of skills. I have only just started on that part, so I will write another post once I have more insights to share.