Yesterday I talked about the limitations of statistics in SEO. This actually touches on another problem: the feasibility and credibility of SEO experiments.
I believe countless people are trying to decipher search engine algorithms through experiments. There is a specific term for this: reverse engineering, that is, changing certain page parameters under controlled conditions, observing how search engine rankings change, and inferring from that how the ranking algorithm works.
Many experts and large companies keep running this kind of reverse engineering and SEO experiments, and some have been quite fruitful. But in the final analysis, this kind of experimental data is not absolutely reliable.
It's like solving a math problem: a × b = c. When we know the result c (that is, the search engine ranking) and one of a or b, we can calculate the other. But when we know neither a nor b, we can only list a bunch of possibilities; it is impossible to pin down a single pair of values.
What's more, the search engine algorithm considers not two values but one or two hundred parameters, and we outsiders don't even know exactly what those parameters are. How, then, can reverse engineering figure out how these parameters are set and how each one is weighted in the ranking algorithm? Theoretically it is impossible.
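To make the underdetermination concrete, here is a minimal sketch. The two-factor model and all the numbers are hypothetical, purely for illustration; a real ranking algorithm has far more parameters, which only makes the problem worse.

```python
# A minimal sketch of why reverse engineering is underdetermined.
# The two-factor model and the numbers are hypothetical.

observed_score = 12  # c: the only thing we can observe (a ranking outcome)

# If we know one factor, the other follows directly.
a = 3
b = observed_score / a  # b = 4.0, a unique answer

# If we know neither factor, many combinations explain the same observation.
candidates = [
    (a, observed_score / a)
    for a in range(1, observed_score + 1)
    if observed_score % a == 0
]
print(candidates)  # [(1, 12.0), (2, 6.0), (3, 4.0), (4, 3.0), (6, 2.0), (12, 1.0)]
```

With only the outcome in hand, every one of those candidate pairs is equally consistent with the data; adding a hundred more unknown factors multiplies the possibilities rather than narrowing them.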
Take a simple example: suppose we want to test whether a keyword density of 3% or 5% works better. One conceivable experimental design is to register two domain names at the same time, publish articles of the same length on each, with the target keyword density at 3% on one and 5% on the other, and link to both new domains from the same page. After both pages are indexed, query the target keyword and see which page ranks higher. The keyword chosen should be a very obscure word, or even a made-up word that appears on no other page.
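For reference, keyword density in this kind of experiment is usually just the number of keyword occurrences divided by the total word count. The sketch below assumes simple whitespace tokenization, which is a simplification; real search engines tokenize and weight text in far more complex ways.

```python
# A quick sketch of the usual keyword density calculation:
# occurrences of the keyword divided by total word count.
# Whitespace tokenization is a simplifying assumption; it ignores
# stemming, phrases, and HTML structure.

def keyword_density(text: str, keyword: str) -> float:
    words = text.lower().split()
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

article = "blue widget review the blue widget is a widget for testing"
print(f"{keyword_density(article, 'widget') * 100:.1f}%")  # 27.3%
```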
But this seemingly reasonable model ignores many factors that may affect the test results. For example, when two links are placed on the same web page, one must come before the other. Will the two links carry different weight? Will the two domains be crawled and indexed at different times? And will a difference in indexing time itself lead to different rankings?
Once links to the two domains appear on a page, how do you ensure that nobody else links to them from anywhere else? As soon as such a link appears, the experimenter can no longer guarantee that the two domains have exactly the same number and weight of links.
In addition, should the content of these two new domains be identical or different? If it is identical, or mostly identical, will that trigger duplicate content filtering? For duplicate content, search engines pick one page as the original and treat the other as a copy. Is that choice random when all other conditions are exactly the same? And if the content is different, how do the subtle differences factor into semantic analysis?
All these factors are difficult to control, and it is hard to say what effect they will have on the experimental results. Strictly speaking, running an SEO experiment under completely controlled conditions is something we webmasters simply cannot do. SEO experiment results are sometimes highly valuable as a reference, and sometimes very misleading.
Author: Zac @ SEO Every Day