Joe Kava found himself on the southern coast of Finland, sending robotic cameras down underground tunnels that stretch into the Baltic Sea. It was not quite what he expected when he joined Google to run its data center operations. In February 2009, Google spent about $52 million on an abandoned paper mill in Hamina, Finland, seeing the 56-year-old building as an ideal site for one of the massive computing facilities that serve its many web services.
Part of the appeal was that the Hamina mill included an underground tunnel once used to draw seawater from the Gulf of Finland. That cold water originally cooled a steam generation plant at the mill, but Google realized it could also be used to cool its servers.
The robotic cameras, remotely operated underwater vehicles of the kind typically used for pipeline maintenance, were sent down to inspect the long-dormant tunnel that runs through the solid rock beneath the mill. As it turned out, all 450 metres of the tunnel were in excellent condition, and by May 2010 it was carrying seawater to heat exchangers inside Google's new data center, helping to cool thousands of machines that handle web traffic. Thanks to the rock tunnel, Google's Hamina facility runs without the energy-hungry chillers found in so many other data centers.
"When someone tells you that we've chosen the next location for the data center, a paper mill that was built in 1953, your first reaction might be: ' What are you talking about? ' , Kava said. "' How can I design this data center? ' But we were very excited to learn that the paper mill was cooled with seawater ... We want to make this facility as greener as possible and reuse existing equipment is an important part of it. ”
Kava offers this as an example of how Google thinks outside the box when building its data centers, working to create facilities that are both efficient and environmentally friendly. But the Hamina data center is more than that: it is a fitting symbol of the internet age. Stora Enso, the Finnish pulp and paper manufacturer, shut down its Summa paper mill in early 2008, citing a decline in newsprint and magazine-paper production that had led to "sustained losses in recent years and pessimistic long-term profitability forecasts." Newspapers and magazines have been slowly ceding the market to web services such as Google, services supported by a new generation of data centers that can handle enormous loads while consuming less energy and putting less pressure on the environment.
Google has led this shift, not only in Finland but also in Belgium, Ireland and the United States. Other internet giants, including Amazon, Microsoft and Facebook, have followed close behind. Last year Facebook opened a data center in Prineville, Oregon, that uses outside air for cooling, and it recently announced plans for a second data center not far from Google's $52 million facility.
The heat exchanger room at Google's Hamina data center
Secrets of Google's data center
Google hired Joe Kava in 2008 to run its data center operations team, a team that soon became an operations and construction team. Google once leased existing data centers and left them to outside specialists to manage, but it now designs and builds its facilities entirely on its own, with its own engineers. "We used to hire architecture and engineering firms to do the work for us," Kava said. "As we've grown over the years, we've developed our own in-house talent and do more and more of it ourselves."
For years, Google rarely discussed the design of its facilities or the hardware inside them. But in April 2009 the search giant released a short video showing the interior of its first custom-designed data center, presumably the one in The Dalles, Oregon, and it has since revealed at least some details of the facilities in Hamina and in Saint-Ghislain, Belgium.
Neither of the two European data centers uses chillers, according to Kava. The Hamina facility relies on seawater cooling, while the Belgian facility uses an evaporative cooling system that draws water from a nearby industrial canal. "We designed and built an on-site water treatment plant," Kava said, "so we don't have to use potable water from the city supply."
For most of the year, the Belgian climate is mild enough to keep the server rooms at an acceptable temperature. Kava points out that data center temperatures no longer need to be as low as they once were. In 2008, the American Society of Heating, Refrigerating and Air-Conditioning Engineers (ASHRAE) recommended keeping data centers between 20 and 25 degrees Celsius, but Google recommends running them at 27 degrees or higher.
"The first step in building an efficient data center ... is simply to improve its temperature, "Kava says," machines, servers, storage arrays, anything that works well in places that are much warmer than the average data center. For me, any data center running below 18 or 20 degrees Celsius is pretty ridiculous. ”
Sometimes the temperature inside the data center climbs too high, and Google tells its employees to leave the building while the servers keep running. "We have what we call excursion hours or excursion days," he said. "Normally we don't have to do anything; we just tell staff not to work in the data center during those especially hot hours and to catch up on the work later."
In Belgium, it sometimes gets too hot to run the servers at all, and Google shifts the workload to other data centers. Kava did not give details, but he said the handoff involves a software platform called Spanner. Google described the platform, which it designed itself, at a symposium in October 2009, but this is the first time the company has publicly confirmed that Spanner is actually in use.
"If it's really hot and we need to reduce the server load, then yes, we have automated tools and systems to handle this, like spanner." "Kava said.
According to Google's 2009 presentation, Spanner is a "storage and computation system that spans all their data centers" and "automatically moves and adds replicas of data and computation based on constraints and usage patterns." Those constraints include bandwidth, packet loss, energy, resources, and "failure modes," such as errors occurring inside a data center.
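Google has never published Spanner's internals beyond that one-line description, but the basic idea of choosing where to place copies of data and computation under constraints can be illustrated with a toy scoring routine. The Python sketch below is purely hypothetical: the site names, weights, and the placement_score heuristic are assumptions made for illustration, not anything Google has disclosed.

```python
from dataclasses import dataclass

@dataclass
class DataCenter:
    name: str
    bandwidth_gbps: float   # spare network capacity
    packet_loss: float      # fraction of packets lost
    energy_cost: float      # relative price of power
    healthy: bool           # False during a failure mode or thermal excursion

def placement_score(dc: DataCenter) -> float:
    """Higher is better. A toy stand-in for whatever constraint solver
    Spanner actually uses: reward spare bandwidth, penalize loss and
    expensive power, and exclude unhealthy sites outright."""
    if not dc.healthy:
        return float("-inf")
    return dc.bandwidth_gbps - 100.0 * dc.packet_loss - 10.0 * dc.energy_cost

def choose_replica_sites(candidates: list[DataCenter], copies: int) -> list[str]:
    """Pick the best-scoring sites for the requested number of replicas."""
    ranked = sorted(candidates, key=placement_score, reverse=True)
    return [dc.name for dc in ranked[:copies] if placement_score(dc) > float("-inf")]

# Hypothetical sites and numbers, purely for illustration:
sites = [
    DataCenter("hamina", bandwidth_gbps=40, packet_loss=0.001, energy_cost=0.8, healthy=True),
    DataCenter("saint-ghislain", bandwidth_gbps=25, packet_loss=0.002, energy_cost=1.0, healthy=False),
    DataCenter("the-dalles", bandwidth_gbps=60, packet_loss=0.004, energy_cost=0.9, healthy=True),
]
print(choose_replica_sites(sites, copies=2))  # ['the-dalles', 'hamina']; the overheated Belgian site is skipped
```

In this toy version, an unhealthy site is simply excluded and the load lands elsewhere, which is the behavior Kava describes when a Belgian facility gets too hot.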
The platform reflects Google's overall approach to its data centers: the company builds things on its own and reveals only so much, treating technology such as Spanner as a competitive advantage. But one thing is clear: Google is rethinking the data center.
That approach has inevitably influenced the rest of the industry. Like Google, Microsoft has begun experimenting with modular data centers: shipping containers preloaded with servers and other equipment that can be pieced together with more containers into larger facilities. And Facebook's decision to publish the design of its Prineville facility, in part a response to Google's secrecy about its own designs, shows that others are following in Google's footsteps. Late last year, Prineville's city engineer, Eric Klann, said that two unnamed companies, code-named "Maverick" and "Cloud," were looking to build server farms based on Facebook's chiller-less design, and "Maverick" appears to be none other than Apple.
Big data centers, small details
This month, in an effort to show the world how environmentally friendly its data centers are, Google announced that all of its company-built facilities in the United States have been certified to ISO 14001 and OHSAS 18001, internationally recognized standards for the environmental and safety performance of any operation, data centers included.
That means tracking everything from engineering tools to the ladders inside the data center. "You actually learn a lot from these audits, things you had never considered," Kava said. His point is that Google pays attention to even the smallest details of data center design, across all of its data centers. The company will soon apply for similar certification for its European facilities as well.
In Finland, the seawater setup is the crowning touch of Google's Baltic project. Kava explains that the seawater is only one part of the system. On the data center floor, the servers give off heat, which is captured by water-cooled units sitting next to the servers; the water from those units is then cooled by the seawater drawn from the gulf. By the end of the process, that seawater has warmed, so before returning it to the sea, Google blends in more cold seawater to bring the temperature back down. "When we discharge the water into the bay, its temperature is very similar to that of the bay," Kava said, "which minimizes the environmental impact."
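The article gives no flow rates or temperatures, but the tempering step Kava describes, blending warm return water with additional cold seawater before discharge, comes down to a simple energy balance: the outlet temperature is the flow-weighted average of the two streams. Here is a minimal sketch with made-up numbers (none of the figures come from Google) showing how one might estimate the extra seawater needed to hit a target discharge temperature:

```python
def mix_temperature(flow_a, temp_a, flow_b, temp_b):
    """Temperature of two blended water streams of the same fluid:
    the flow-weighted average of the two inlet temperatures."""
    return (flow_a * temp_a + flow_b * temp_b) / (flow_a + flow_b)


def tempering_flow(flow_warm, temp_warm, temp_sea, temp_target):
    """Cold seawater flow needed to bring warm return water down to a
    target discharge temperature, from the energy balance:
    flow_warm*temp_warm + flow_cold*temp_sea = (flow_warm + flow_cold)*temp_target
    """
    if not temp_sea < temp_target < temp_warm:
        raise ValueError("target must lie between seawater and return-water temperatures")
    return flow_warm * (temp_warm - temp_target) / (temp_target - temp_sea)


# Hypothetical numbers, purely for illustration: 1.0 m^3/s of return water
# at 18 C, gulf seawater at 8 C, target discharge temperature of 10 C.
extra = tempering_flow(flow_warm=1.0, temp_warm=18.0, temp_sea=8.0, temp_target=10.0)
print(f"Extra seawater needed: {extra:.1f} m^3/s")               # 4.0 m^3/s
print(f"Check: {mix_temperature(1.0, 18.0, extra, 8.0):.1f} C")  # 10.0 C
```

The closer the target is to the gulf temperature, the more tempering water is required, which is why matching the discharge to the bay "very closely" implies pumping substantially more seawater than the cooling loop alone needs.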
According to Kava, the company's environmental permit did not actually require it to temper the water. "That makes me feel good," he said. "We don't just do what we're required to do. We do what we think is right." That is a fairly typical message from Google, but Kava points out that the ISO certifications show the company is backing up its claims. "Onlookers see things most clearly, as the saying goes. Having a third party come in and verify it is essential."
The complaint from companies like Facebook is that Google does not share enough about how it has solved the problems that plague any large web operation. Reports indicate that Google builds not only its own servers but also its own networking equipment, yet the company has disclosed little about either. That said, over the past few years Google has indeed begun to share more.
We asked Joe Kava about the networking equipment, and he declined to answer. But he did confirm the use of Spanner, and he talked at length about the rock tunnel and the Baltic Sea. He even told us that when Google bought the paper mill, he and his team were well aware that the purchase made for a grand internet metaphor. "That wasn't lost on us," he said.
(Author: anonymous; cnBeta editor: Xu Jinyang)