A Detailed Analysis of Keyword Relevance in External Link Building for SEO

Source: Internet
Author: User


In search engine optimization (SEO), discussions of external link building keep returning to the question of link relevance. After all, most of these external links are built for search engines to see, and search engines rely on spider programs to crawl them.

Remember: a "spider" is a crawling program, not an artificial intelligence. It does not judge relevance with human logic; it follows programmed rules. A spider program is, comparatively speaking, not that complex.

With that understanding in place, here are my own views and practical experience on "relevance." Where I am wrong, please correct me.

Part 1. How the spider program judges the relevance between your Web page's keywords and its content:

Suppose your Web page is new: created less than two days ago and just submitted. A spider comes to crawl it, according to its crawl index, either directly or through other links.

First, the spider reaches your page through one of various links. For a new page, it judges the page's characteristics from the <title> tag inside the <head> section of the page code.

    Suppose the spider arrives at your page and crawls this <title> text: My Anime Forum - New Anime - New Ideas in Anime!

    The spider first treats the text crawled from <title> as its "keywords," then scans the page code for occurrences of those keywords.

    Rest assured that the spider can recognize (and largely skip over) JavaScript code and markup on the page. It pays more attention to the visible text and to the heading portion of each div block, that is, the <h1>-<h4> series. This also confirms the common observation that content inside <h1>-<h4> carries more weight than other text in <body>. The spider then takes the words from <title> that repeat most often in the <body> text and designates them as "reference keywords" (there is, of course, a repetition limit, set by the search engine and embedded in the spider program).

    Then the spider derives "extended keywords" from the reference keywords (by analyzing the title keywords against keyword data commonly used in the relevant category). Combining all of the above, it settles on a general set of "keywords" to serve as the reference for the next crawl of the page.
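The title-to-body matching described above can be sketched in a few lines. This is a minimal illustration of the idea, not any real engine's code; the separator characters, the lowercase normalization, and the repetition threshold `min_repeats` are all assumptions standing in for the limits the text says the engine embeds in the spider.

```python
import re

def extract_reference_keywords(title, body_text, min_repeats=2):
    """Sketch of the idea above: split the <title> into candidate keywords,
    then keep those that repeat in the body text often enough.
    The separators and min_repeats threshold are assumptions."""
    # Split the title on common separators (hyphens, punctuation).
    candidates = [t.strip().lower() for t in re.split(r"[-—!|,]", title) if t.strip()]
    body = body_text.lower()
    counts = {kw: body.count(kw) for kw in candidates}
    # Keep only candidates repeated at least min_repeats times in the body.
    return {kw: n for kw, n in counts.items() if n >= min_repeats}
```

For the example title above, only the title phrases that actually recur in the body text would survive as "reference keywords."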

Next, the spider counts how many times the keywords appear anywhere in the page code (a simple counting routine is enough for this) and uses that as the denominator. It then counts the keyword occurrences inside the <body> tag and uses that as the numerator. The result is the page's keyword density.

Once the density is computed, the search engine grades it against its own standard (parameters set by hand): one range is judged most reasonable, others average or poor. The standard changes constantly; in short, it is adjusted periodically based on data (which is why your results can swing after a while).
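The density-plus-grading step can be sketched as follows. Note that the ratio here follows the text (occurrences inside <body> over occurrences anywhere in the page code); the more common definition divides by total word count instead. The grading thresholds are pure assumptions, since the text says only that the engine sets and periodically adjusts them.

```python
def keyword_density(page_code, body_text, keyword):
    """Sketch of the density described above: keyword occurrences inside
    <body> divided by occurrences anywhere in the page code."""
    kw = keyword.lower()
    total = page_code.lower().count(kw)    # denominator: whole page code
    in_body = body_text.lower().count(kw)  # numerator: <body> text only
    return in_body / total if total else 0.0

def density_grade(density, reasonable=(0.5, 0.9)):
    """Assumed, hand-set thresholds; the text says the engine
    adjusts these periodically based on data."""
    lo, hi = reasonable
    if lo <= density <= hi:
        return "reasonable"
    return "low" if density < lo else "high"
```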

    Good. Once the density grade is determined, the page's keywords are compared against each other.

    Based on the keywords in <title>, combined with the reference and "extended" keywords captured from the <body> section, the spider compares each pair for textual difference and grades the differences. Where the difference is small, the phrase becomes a long-tail keyword; where it is large, the phrase is discarded as a search keyword for this page. The grading works the same way as for keyword density. In this way, the page's long-tail words and search keywords are determined.

Once these two results (the density and the page-keyword differences) are in hand, a weighted formula (the engine's own, also adjusted periodically based on data) combines them into a "relevance" score (similar in spirit to the Baidu Index algorithm). That score determines the page's keywords and how relevant the page content is to them. That is how the relevance between a page's keywords and its content is judged.

So, how is the relevance of an external link judged? Let's move on to the second part.

Part 2. How the relevance between external links and Web pages is judged:

One-way link: his Web page links to your page.

On his page, the anchor text of the link pointing to your site must be related (or at least similar) to the keywords of the page he links to. The spider's method of judging is the same as in the first part, except that the starting "keyword" reference becomes the anchor text of your page's link as it appears on his page.

In other words, how does the spider judge how relevant his page is to your site?

The link's anchor text becomes the bridge and the reference point.

The relationship can be pictured like this: your page's keywords → (relevance judged, call it A) → anchor text (the anchor text of the link to your page) ← (relevance judged, call it B) ← his page's keywords.

PS: To judge A and B, the spider uses the method from the first part: how it judges the relevance between a page's keywords and its content.

After A and B are compared, the spider grades the difference against a tiered standard (set by hand). For example, if A and B differ by less than 10%, the correlation is rated highest, both pages gain weight, and the result feeds into a weighted formula (this may only affect page quality; the effect on rankings is unclear and remains to be observed). The remaining tiers follow the same pattern, so I won't spell them out.
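The tiered A-versus-B comparison can be sketched like this. The 10% tier comes from the example above; the second tier and the score scale are assumptions, since the text says only that the engine sets the levels artificially.

```python
def anchor_relevance(a_score, b_score, tiers=((0.10, "highest"), (0.30, "medium"))):
    """Sketch of the comparison above: A is the relevance of your page's
    keywords to the anchor text, B is the relevance of the linking page's
    keywords to the same anchor text. The closer A and B are, the more
    related the two pages are treated as being. Scores are assumed to be
    on a 0-1 scale; the tier cutoffs beyond 10% are assumptions."""
    diff = abs(a_score - b_score)
    for limit, label in tiers:
        if diff <= limit:
            return label
    return "low"
```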

Exchange (reciprocal) links:

These involve the anchor text on two pages, and the method is very similar.

Your page's keywords → (relevance judged, call it A) → anchor text (the anchor text on one of the pages) ← (relevance judged, call it B) ← his page's keywords.

Your page's keywords → (relevance judged, call it C) → anchor text (the anchor text on the other page) ← (relevance judged, call it D) ← his page's keywords.

A and B are combined by a weighted formula into a result, E;

C and D are combined by a weighted formula into a result, F.

Finally, E and F are compared, and a last weighted formula produces the overall result.
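The A/B → E, C/D → F, then E/F flow above can be sketched as follows. The equal weights are pure assumptions; the text says only that the engine uses its own, periodically adjusted formulas.

```python
def weighted(x, y, w=0.5):
    """Assumed simple weighted combination; the real formula is unknown
    and, per the text, periodically re-tuned by the engine."""
    return w * x + (1 - w) * y

def exchange_link_relevance(a, b, c, d):
    """Sketch of the exchange-link comparison described above:
    E combines A and B (first anchor), F combines C and D (second anchor),
    then E and F are combined into the final relevance result."""
    e = weighted(a, b)
    f = weighted(c, d)
    return weighted(e, f)
```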

That final result judges the relevance between the two pages in the link exchange.

The approach above covers both one-way link relevance and friendly (exchange) link relevance.

This is my own experience; if there are mistakes, criticism is welcome!
