Google Vice President on Google Search's Engine Technology
Google attributes part of its success to the proprietary computing infrastructure the company invented, which lets its search engine answer queries from millions of online users in a fraction of a second.
Last Wednesday, Google vice president Urs Hoelzle spoke at the Eclipse conference, giving attendees a look at how Google's search technology was developed and how it operates today.
Hoelzle told attendees that inventing Google's technology required developers to abandon the mindset of traditional large-database systems. Because the queries that will arrive on any given day are unpredictable, keeping roughly 10 billion web pages' worth of data ready to serve at all times is a formidable challenge.
Hoelzle showed a series of photos of Google's early hardware and data centers: two battered desktop computers from 1997; several Intel servers behind a tangle of cables in 1999; and, by 2000, data centers filled with neatly arranged dual-processor servers.
Hoelzle said, "the underlying hardware costs are very low, but a lot of work has been done ." At the same time, the reliability of so many servers is another concern of Google. Hoelzle said, "Google uses an automatic control mechanism to operate. Otherwise, engineers need to be exhausted by restarting the server ."
To withstand catastrophic failures, Google built the Google File System (GFS), which is tightly integrated with Google's search computation systems and highly tolerant of server faults.
All of Google's operations are built on a set of very large files, each split into 64 MB chunks and spread across many "chunk servers." The file metadata, including the chunk count and the location of every chunk, is kept on a central master server. Each 64 MB chunk is replicated on two additional servers, and the master records where all three copies live.
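The chunk layout is easier to see in code. Here is a minimal sketch in Python of a file being split into 64 MB chunks and registered in a master's metadata table; the Master class, its method names, and the random replica placement are illustrative assumptions, not Google's actual API.

```python
import random

CHUNK_SIZE = 64 * 1024 * 1024  # 64 MB chunks, as described in the talk
REPLICAS = 3                   # each chunk lives on three chunk servers

class Master:
    """Toy stand-in for the central master's metadata tables."""

    def __init__(self, chunk_servers):
        self.chunk_servers = chunk_servers
        self.files = {}        # file name -> [(chunk_id, [replica servers])]
        self.next_chunk_id = 0

    def register_file(self, name, size_bytes):
        """Record a file's chunk count and which servers hold each replica."""
        num_chunks = -(-size_bytes // CHUNK_SIZE)   # ceiling division
        chunks = []
        for _ in range(num_chunks):
            replicas = random.sample(self.chunk_servers, REPLICAS)
            chunks.append((self.next_chunk_id, replicas))
            self.next_chunk_id += 1
        self.files[name] = chunks
        return chunks

master = Master(chunk_servers=[f"cs{i}" for i in range(8)])
for chunk_id, replicas in master.register_file("crawl.dat", 200 * 1024 * 1024):
    print(chunk_id, replicas)   # a 200 MB file becomes 4 chunks, 3 replicas each
```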
Because the files are stored on commodity servers running Red Hat Linux, Google gets this reliability at low cost. The master periodically sends a heartbeat to each chunk server to check that it is operating normally. If no response comes back, or the response shows that a chunk server's data is corrupted, the master fetches the affected chunks from the other replica servers and repairs the damaged one. The job usually completes within a minute.
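Continuing the sketch above, the repair loop the article describes might look like the following. The probe and copy_chunk callbacks and the one-pass structure are assumptions for illustration; the real GFS protocol is considerably more involved.

```python
def heartbeat_sweep(master, probe, copy_chunk):
    """One monitoring pass: probe every chunk server and repair losses.

    probe(server) -> set of healthy chunk_ids on that server, or None
    if the server did not respond at all.
    copy_chunk(chunk_id, src, dst) re-replicates one chunk from a
    healthy server onto a replacement.
    """
    healthy = {s: probe(s) for s in master.chunk_servers}
    for name, chunks in master.files.items():
        for chunk_id, replicas in chunks:
            good = [s for s in replicas
                    if healthy[s] is not None and chunk_id in healthy[s]]
            bad = [s for s in replicas if s not in good]
            if not good:
                # the rare case discussed next: all three replicas gone
                print(f"chunk {chunk_id} of {name}: all replicas lost, "
                      "must rebuild from a re-crawl")
                continue
            for dst in bad:
                copy_chunk(chunk_id, src=good[0], dst=dst)  # restore replica
```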
Hoelzle pointed out that Google's service is affected only when the copies on all three servers are damaged at the same time. In that case, rebuilding the damaged files means re-collecting the data from the Internet, which takes far longer.
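A back-of-the-envelope calculation shows why three replicas are usually enough; the 1% daily failure rate below is an assumed figure, not one from the talk.

```python
# Assumed, illustrative numbers: if each of the three replica servers
# independently fails on a given day with probability p, a chunk is lost
# only when all three fail before any repair runs.
p = 0.01          # assumed per-server daily failure probability
loss = p ** 3     # probability that all three replicas fail together
print(loss)       # 1e-06: about one chance in a million per chunk-day
```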
Google indexes the web pages collected by its crawlers, and the crawlers also produce descriptions of those pages. Hoelzle said that building the web index is a heavy job, occupying several hundred computers for several days, and the index must be refreshed frequently.
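The indexing step Hoelzle describes amounts to building an inverted index: a map from each word to the pages that contain it. Below is a minimal single-machine sketch in Python; Google's production pipeline distributes this same work across hundreds of machines and stores far richer per-page data.

```python
from collections import defaultdict

def build_inverted_index(pages):
    """Map each word to the set of page URLs containing it.

    pages: dict of url -> page text.  A real index would also keep
    word positions and ranking signals; this sketch keeps membership only.
    """
    index = defaultdict(set)
    for url, text in pages.items():
        for word in text.lower().split():
            index[word].add(url)
    return index

pages = {
    "example.com/a": "google file system chunk servers",
    "example.com/b": "search index built by web crawlers",
}
index = build_inverted_index(pages)
print(sorted(index["web"]))   # pages containing the query term "web"
```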