The History of Search Engine Development: A Must-Read for Webmasters

The history of search engines is not long, but their contribution to the Internet is obvious: search engines changed the world, changed users' habits, and gave us confidence in the future of the Internet.

Search engines did not work well at the start: the earliest ones did not even analyze the text of a page, and there were no ranking criteria. The drive to tap deeper commercial potential is what pushed search engines to develop, step by step, into more advanced systems.

The first large commercial search engine came out of Stanford University in the United States and was involved in a roughly $6.5 billion deal with @Home around 2001. When it was first being promoted, its biggest competition came from directory sites, mainly because search results contained a great deal of spam and people were not yet accustomed to using search engines.

Meta tags were a tool that helped search engines classify pages: once a keyword was searched, the keyword meta tag told the engine which pages carried that content. For a while the meta tags did their job and delivered relevant results, but as companies gained marketing experience they found they could easily push a keyword's ranking by piling up popular terms such as "loans, loans, loans" in the tag. This abuse, now known as keyword stuffing, flooded the engines with spam and cost them many users' trust.

Important search engines of that era included EINet Galaxy, WebCrawler, Lex, Infoseek, Inktomi, Ask, AllTheWeb, and others.

Each search engine has three main components:

1. Spider

A spider's job is to find new pages, collect snapshots of those pages, and then analyze them.

Spiders crawl pages by scanning them, and all search engines support both deep search and quick retrieval. In a deep search, the spider finds and scans all the content within a web page; in a quick search, it does not follow the deep-search rules and scans only the important keyword-bearing parts, without checking all of the page's content.
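
To make the deep-versus-quick distinction concrete, here is a minimal Python sketch. The sample page, the choice of title/h1/h2 as the "important" parts, and the PageScanner class are all assumptions for illustration, not any real engine's code:

```python
from html.parser import HTMLParser

class PageScanner(HTMLParser):
    """Collect page text; 'deep' mode scans everything, 'quick' mode
    keeps only text inside assumed-important elements (title, h1, h2)."""
    IMPORTANT_TAGS = {"title", "h1", "h2"}

    def __init__(self, mode="deep"):
        super().__init__()
        self.mode = mode
        self.open_tags = []   # tags currently open at this point in the page
        self.text = []

    def handle_starttag(self, tag, attrs):
        self.open_tags.append(tag)

    def handle_endtag(self, tag):
        # close the most recent matching open tag, if any
        for i in range(len(self.open_tags) - 1, -1, -1):
            if self.open_tags[i] == tag:
                del self.open_tags[i]
                break

    def handle_data(self, data):
        data = data.strip()
        if data and (self.mode == "deep"
                     or any(t in self.IMPORTANT_TAGS for t in self.open_tags)):
            self.text.append(data)

html = ("<html><head><title>Cats</title></head>"
        "<body><h1>About cats</h1><p>Long body text about cats...</p></body></html>")
for mode in ("deep", "quick"):
    scanner = PageScanner(mode)
    scanner.feed(html)
    print(mode, "->", scanner.text)
```

Running it shows the deep scan collecting the body paragraph as well, while the quick scan keeps only the title and heading text.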

As everyone knows, the most telling signal for a site is its snapshot time: the faster spiders crawl and index a page, the more important the search engine considers the site. Portals such as Xinhuanet and People's Daily Online are crawled more than four times an hour, while some sites may not be visited by a spider for a month. How often snapshots are captured depends on the popularity of the site's content, its update speed, and the age of its domain.

Under spider crawling rules, a site with many external links pointing to it is considered more important, so it is crawled at a much higher frequency. Search engines do this partly to save money: crawling every site at the same frequency would cost far more time and resources, and prioritizing lets them still obtain reasonably comprehensive results.
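
As a rough illustration of crawling better-linked sites more often, here is a toy scheduler in Python that orders sites by inbound-link count; the site names and counts are invented, and real engines weigh many more signals:

```python
import heapq

# Hypothetical inbound-link counts; more inlinks -> crawled sooner and more often.
inlinks = {
    "news-portal.example": 9200,
    "popular-blog.example": 310,
    "tiny-site.example": 2,
}

# heapq is a min-heap, so negate the score to pop the most-linked site first.
queue = [(-count, site) for site, count in inlinks.items()]
heapq.heapify(queue)

while queue:
    score, site = heapq.heappop(queue)
    print(f"crawl {site} (inbound links: {-score})")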

2. Index

While crawling, the spider may repeatedly check a page's content and examine whether a site's content is copied from other sites, so that the site's original content gets indexed; copied content is generally filtered out of the indexed search results. When you search, the engine does not search the live Internet but selects results from its index, so the number of searchable pages does not represent an entire site, even though the spider scans and saves pages in the background.
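
A minimal sketch of these two ideas, an inverted index plus duplicate-content filtering, might look like the following Python; the SHA-256 fingerprinting and the add_page helper are illustrative assumptions, not how any particular engine deduplicates:

```python
import hashlib
from collections import defaultdict

index = defaultdict(set)   # word -> set of page URLs containing it
seen_hashes = {}           # content fingerprint -> first URL that had it

def add_page(url, text):
    """Index a page, skipping content already seen on another URL."""
    fingerprint = hashlib.sha256(text.lower().encode("utf-8")).hexdigest()
    if fingerprint in seen_hashes:
        print(f"{url} duplicates {seen_hashes[fingerprint]}; keeping the original only")
        return
    seen_hashes[fingerprint] = url
    for word in text.lower().split():
        index[word].add(url)

add_page("http://a.example/cats", "cats are great pets")
add_page("http://b.example/copy", "cats are great pets")   # copied content, skipped
print(sorted(index["cats"]))   # only the original page is indexed
```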

You can see this in the result counts: when Google reports something like "Results 1-10 of about 160,500," that number, together with how results rank in each region, reveals at least part of how the engine's indexing algorithm behaves.

Each search engine builds data centers across its country or around the world. When you enter a keyword to search, data centers updated at different times synchronize the results differently, so different regions can see different search results.

3. Web interface

The interface you see when you use a search engine (such as Google.com or Baidu.com) shows results that depend on complex algorithms: the algorithm is what calls results out of the index and, through querying and analysis, makes them displayable on the front end. Such algorithms take a long time to build, and Google leads in this technical field.

Search engines also handle "stop words," a feature that matters most in English search. Generally speaking, the engine ignores stop words so that search results are more accurate: search for "cat and dog," and the engine discards the "and," searching only for "cat" and "dog."
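
A tiny Python sketch of stop-word removal, assuming an invented stop-word list far smaller than what real engines use:

```python
# Minimal stop-word list for illustration only.
STOP_WORDS = {"and", "or", "the", "a", "of", "in"}

def to_terms(query):
    """Split a query into search terms, dropping stop words."""
    return [w for w in query.lower().split() if w not in STOP_WORDS]

print(to_terms("cat and dog"))   # ['cat', 'dog'] -- the 'and' is ignored
```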

Keyword density measures how frequently a keyword appears on a page. Generally, when a search engine sees a page's keyword density exceed a reasonable range, it will judge the page to be cheating; engines today can do word-relevance processing on any region of a page. So as a rule, keywords should be spread throughout the page, while the title or key paragraphs stay unchanged over the long term.
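
Keyword density is easy to compute. A rough Python sketch, where the tokenization rule and the sample page are assumptions for illustration:

```python
import re

def keyword_density(text, keyword):
    """Fraction of the page's words that are the given keyword."""
    words = re.findall(r"[a-z0-9]+", text.lower())
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)

page = "Loans for homes. Loans for cars. Loans loans loans!"
print(f"{keyword_density(page, 'loans'):.0%}")   # 56% -- far beyond any natural range
```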

Another core analysis technology in search engines is link relevance analysis. Besides PageRank and ordinary links, Google also values anchor-text links; for anchor text, what matters most is the age and position of the link and whether the linking page is an authoritative site.
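
The raw material for anchor-text analysis is simply the (href, anchor text) pairs on a page. A minimal Python sketch of collecting them (the AnchorCollector class is illustrative, not Google's implementation):

```python
from html.parser import HTMLParser

class AnchorCollector(HTMLParser):
    """Collect (href, anchor text) pairs from a page."""
    def __init__(self):
        super().__init__()
        self.links = []
        self._href = None
        self._text = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            self.links.append((self._href, "".join(self._text).strip()))
            self._href = None

collector = AnchorCollector()
collector.feed('<p>See <a href="http://example.edu/cats">a guide to cats</a>.</p>')
print(collector.links)   # [('http://example.edu/cats', 'a guide to cats')]
```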

Links are the biggest indicator of site quality, and search engines watch them closely: good links are now hard to come by, and because everyone needs them, there is relatively little spam among them. For example, university websites carry high weight in Google because universities attract many high-quality external links. Since everyone knows the importance of external links, many sites have started trading links, which is now a headache for search engines, though rankings today are decided more by the quality of the site itself.

All search engines want feedback from users: they hope to understand the user's intent, search query, time intervals, and semantic relationships even before the query is run, and they track users' clicks. If a user clicks a result and then returns to the search page immediately, the engine concludes that the visit was unsuccessful and drops the result from its tracking list; in practice this approach is already close to what e-commerce sites do.
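
The "click, then return immediately" heuristic described above can be sketched in a few lines of Python; the click log and the ten-second threshold are invented for illustration:

```python
# Hypothetical click log: (clicked result, seconds until return to results page).
clicks = [("result-a", 3), ("result-b", 95), ("result-c", 7)]

QUICK_RETURN_SECONDS = 10   # assumed threshold, for illustration only

for result, dwell in clicks:
    if dwell < QUICK_RETURN_SECONDS:
        print(f"{result}: quick return ({dwell}s) -> treat click as unsatisfied")
    else:
        print(f"{result}: dwell {dwell}s -> treat click as satisfied")
```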

Clearly, search engines have begun to pay attention to user experience so that users will affirm the results of their work; this has become an industry standard, and the future of the field may lie in personalized search.

This article was contributed by the webmaster of http://fenghuangren.5d6d.com/.
