A few days ago a friend named Friagan emailed me with several SEO questions he felt had never been discussed on the internet. There were quite a few questions, and a short reply couldn't answer them clearly, so with Friagan's consent I'm answering them in a public post, which may help others as well.
Question one: Is a link that looks like plain text a hidden link?
Almost all SEO practitioners place plenty of links on their blogs. A link may be given because the article genuinely relates to the linked content, or because the SEO expert deliberately worked in a keyword and linked it while pretending it was natural. Either way, this behavior seems understandable on an SEO practitioner's blog.
But suppose I run an ordinary website and, in order to optimize a keyword, I also link that keyword in an article back to my home page. This may puzzle readers: a reader who sees blue underlined text always wants to click it, and when he clicks and finds he has merely returned to my home page, he will probably be quite annoyed. So I thought: I could still link the keyword, but define it in the CSS as black text with no underline, identical to the unlinked text around it. This serves three purposes:
Purpose 1: search engines will see my link, but readers will not be distracted by it.
Purpose 2: if someone copies my article, he will copy my link along with it, and if he is careless enough he won't even know he has copied it.
Purpose 3: (mentioned in my second question)
Do you think this kind of behavior counts as cheating? It lets the search engine see something the reader cannot see. The link is not completely invisible to the reader: he at least sees the keyword, but he does not see the link.
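For concreteness, here is a minimal sketch of the styling the question describes (the function name and values are hypothetical, not from the original): the inline CSS makes the anchor render as plain black text with no underline, so readers see ordinary text while crawlers still see a link.

```python
# Sketch only: generates an anchor tag styled to look like plain body text.
def disguised_link(keyword: str, url: str) -> str:
    # color:#000 and text-decoration:none make the link visually
    # indistinguishable from black, non-underlined body text
    return (f'<a href="{url}" '
            f'style="color:#000;text-decoration:none;">{keyword}</a>')

print(disguised_link("SEO", "http://www.example.com/"))
```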
Answer: I think this is hiding links, and it already counts as cheating. As long as ordinary users cannot see the link, it is a hidden link.
Question two: Can an article be shown to the search engine first, and only later to readers?
You mentioned that search engines judge originality on many factors, such as time, authority, and PR value. By this theory, if I repost your article to my website within a few hours of your publishing it, I have at least a 40% chance of being mistaken for the original; and if my site is more authoritative than yours, my chances are even greater.
Is there a way to prevent this loss? For example, could you let search engines see your article before making it public?
The method: when you create a page, do not publish its URL. Instead, using the method from my first question, make a few links that only search engines can see pointing to that page. Three months later, make the page's URL public; by then, even if the page gets copied, the search engine can judge who the original is. Of course, the links don't have to be made the way my first question describes; you could also place them on some old pages, since generally nobody reads old pages, everyone prefers the latest news. But this raises my third question, please see below.
Answer: Of course, if you can show a page to search engines first while ordinary users cannot see it, that does increase the chance of it being judged original. The problem is that as long as search engines can see and index it, users will be able to see it too, even if only a few days later. Much of the content copied by AdSense publishers is in fact found through search engines and then copied.
Another, more common way articles get copied is through RSS subscriptions: as soon as new content appears, the copier sees it. I'm afraid it would be hard to build a CMS that, when new content is published, only links it from old pages while keeping it out of the RSS feed; that would defeat the purpose of RSS. Unless you build such a CMS specifically for this.
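If someone did build such a CMS, the feed logic might look like this sketch (the Post type and hold-back window are assumptions for illustration): the feed only exposes posts once they are older than a hold-back period, so RSS subscribers cannot copy content before search engines have had time to index it.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta

HOLD_BACK = timedelta(days=90)  # the "three months" from question two

@dataclass
class Post:
    title: str
    url: str
    published: datetime

def feed_items(posts: list[Post], now: datetime) -> list[Post]:
    # New posts are linked only from old pages; they enter the RSS
    # feed only after the hold-back window has passed.
    return [p for p in posts if now - p.published >= HOLD_BACK]
```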
In addition, time is only one factor in judging originality; domain trust is also very important. A few days ago, in my Grassroots interview, this phenomenon could be seen in the search results.
Question three: How often does a search engine refresh an old page?
The search engine will visit your blog the day a new page is published and index it, but does it also check every day whether your old pages have been updated? That workload would be quite huge. If you have blogged for ten years, you have probably written about 3,650 posts; if the search engine refreshed them all frequently, the workload would be enormous, I'm afraid unbearable, and as the internet's content keeps growing, this workload would multiply endlessly. Have you ever studied how often a search engine checks old pages for updates? Is it that the older the page, the lower the frequency? If so, doesn't your blog place a huge burden on search engines? (Please see my fourth question.)
Answer: Old pages are not refreshed as frequently as new pages. But on a site whose domain carries high weight, old-page updates are not slow either, usually no more than ten days or so apart.
This also depends on the site type. For blogs, forums, and news portals, search engines crawl updates frequently, including old pages. Some corporate sites, on the other hand, go years without updating their content, so search engines reduce the crawl frequency; but this does not mean those sites' weight is low.
The main reason old pages are refreshed less often is actually that they usually sit a little farther from the home page and take more clicks to reach. If an old page is linked directly from the home page, it is refreshed just as frequently as a new page.
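A toy illustration of this "click depth" idea (the link graph below is hypothetical): a breadth-first walk from the home page gives each page its minimum click distance, and linking an old post directly from "/" moves it to depth 1, the same as a new post.

```python
from collections import deque

LINKS = {  # assumed internal link graph, for illustration only
    "/": ["/new-post", "/page/2/"],
    "/page/2/": ["/old-post"],
    "/new-post": [],
    "/old-post": [],
}

def click_depths(start: str = "/") -> dict[str, int]:
    # Breadth-first search: depth = fewest clicks from the home page
    depths, queue = {start: 0}, deque([start])
    while queue:
        page = queue.popleft()
        for nxt in LINKS.get(page, []):
            if nxt not in depths:
                depths[nxt] = depths[page] + 1
                queue.append(nxt)
    return depths

print(click_depths())
# {'/': 0, '/new-post': 1, '/page/2/': 1, '/old-post': 2}
```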
Frequently updated sites with high domain weight make up only a small part of the internet's content, and they are exactly the updated content search engines most want to crawl promptly. I think search engines have deep enough pockets; the load is not unbearable for them.
Question four: Some pages on your blog change every day; what effect does that have on search engines?
Your blog runs on WordPress. Every time you publish a new article, all of your old articles shift position.
For example, the content currently on http://www.xxx/page/2/ will, as you keep updating, move to http://www.xxx/page/3/. That is, when I search Google for content on http://www.xxx/page/2/, click through, and find different content there, it's because the content has long since moved. If Google wants to guarantee that what I searched for matches what I see, it has to update your blog's content in real time, which is a lot of work. If Google feels this isn't worth it, will it lower your weight?
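The mechanics of this shift are simple to model (assuming 10 posts per archive page, a made-up figure): each new post pushes every older post down one slot, so a given post eventually slides from /page/2/ to /page/3/.

```python
POSTS_PER_PAGE = 10  # assumed archive page size

def archive_page(rank_from_newest: int) -> int:
    # rank 1 = the newest post; integer division gives the archive page
    return (rank_from_newest - 1) // POSTS_PER_PAGE + 1

print(archive_page(20))  # 2: today this post is the last item on /page/2/
print(archive_page(21))  # 3: after one more new post, it sits on /page/3/
```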
Of course, I can still view the exact content I searched for through the snapshot, but I think the snapshot is just Google's fallback for when searchers cannot see what they want because content has been lost. Otherwise Google would not be so averse to dead links and broken links, since for dead and broken links we could also just view the snapshot.
Answer: This is exactly why search engines crawl frequently updated sites such as blogs, forums, news portals, and social media more often.
Whether Google crawls often depends on the site's weight and the quality of its content, not on the workload. Weight is a fairly objective thing; it isn't a matter of whether Google feels it is worthwhile. As long as your domain age, external links, content originality, and so on reach a certain level, Google will not refuse to crawl because of workload.
One more point: careful observation shows that pages like the blog's /page/2/ generally rank far worse than the post pages themselves.