Moderator: Good afternoon everyone, and welcome to the Cloud Base venue of the Third China Cloud Computing Conference. This afternoon SuperCloud will share what we, as a vendor focused on cloud infrastructure, are doing in the era of cloud computing and big data, and what we can help people achieve. Let me first ask: there are many new friends and old friends here today; who is visiting the Cloud Base for the first time?
Let me briefly introduce the Cloud Base. It was formally established two years ago, the third venture founded by Dr. Tian after China Netcom. Its mission is to gather talent, capital, and technology to bring cloud technology to the ground in China and enable the take-off of China's cloud applications and industry. The logos you can see on the board all belong to newly founded companies located in the Cloud Base. Today, besides us at SuperCloud, there are our sister companies as well; there are currently 15 companies, covering everything from infrastructure to virtualization, distributed computing, applications, and even integration and telecommunications services, together forming the entire cloud computing industry chain. These companies are collectively known as the Cloud Base companies. I come from SuperCloud. As I just mentioned, we sit at the bottom of the cloud computing chain, an emerging company mainly providing servers, storage, and related solutions. This is my fourth time participating in the China Cloud Computing Conference, and I have watched the cloud industry go from an ideal to a concept, and from a concept, step by step, to reality. Today when we talk about cloud computing we rarely debate how to build clouds, or whether clouds should be built at all, and similar conceptual questions. Today we are going to talk about the concrete problems we encounter in the process of actually landing the technology.
How do we deal with these problems? In the cloud era, many new companies and established service providers alike will build large numbers of data centers as their own infrastructure, becoming providers of infrastructure, or of the PaaS and SaaS layers above it. The essence of the cloud is ultimately service, and none of those services appears out of thin air: they all end up running on servers, storage, and the rest of the underlying infrastructure, which in turn generates new demands on that infrastructure.
The topic I want to share today is how we, as a server company, can help everyone achieve higher computing density and lower power consumption in the cloud data center, and how to help optimize the servers that make up the bulk of today's data centers. In traditional IT, servers are general-purpose: whether they run OA, ERP, or anything else, the differences between them are small, and that continues in the cloud scenario. But if you look at the new cloud service providers in the US and Europe, such as Google and Yahoo, their data centers are fully customized and optimized: the racks, power supplies, and servers are all redesigned a second time so that these customized servers better match each company's business, whether that is search, social networking, or ordinary web serving. A major change has taken place across the data center: a shift from supporting traditional enterprises and general Internet applications to supporting large-scale Internet services, and with it a major change in the data center market. Today we will talk about some of the important topics in the data center business, such as optimizing data center energy consumption and managing electricity usage. Worldwide, data centers currently account for about 1.5% of global energy consumption. That is a very large number. What does it mean? It is roughly the output of 50 power plants generating electricity all year round.
Along with this comes a corresponding amount of carbon emissions, whether from the air conditioning or from the servers themselves: annual emissions equivalent to 410,000 cars, that is, 210 million tons. And cooling these large-scale data centers down to such low temperatures also consumes enormous amounts of water: data centers use about 300 million tonnes of water a year, a terrifying figure, enough to bathe 6 billion people worldwide.
There are also environmental problems, such as the large quantities of Freon used for refrigeration. In short, data centers consume 27 billion dollars of energy a year. That is why, even though the data center is a sunrise industry of the future, many people are deeply concerned with how to make it green and energy-efficient, and how to build energy-saving data centers. This is reflected in Twelfth Five-Year planning, and not only in Beijing. To cite one example: a few days ago a Beijing municipal commission issued a formal opinion on promoting energy conservation in software and information services. The Twelfth Five-Year Plan in particular encourages the use of warehouse-style and container-style data centers to improve energy efficiency across the data center, reducing power consumption and optimizing application software and server architectures, so as to achieve energy saving and emission reduction while the data center still meets the needs of the business.
Not only in Beijing: the commissions in many other cities are also vigorously promoting this work. You might wonder why data centers are so expensive, and why they cause so much energy waste. Anyone who has been inside a data center knows it is very cold; you no longer need to wear a white coat to enter, but almost all of them run in the 18-21 degree range, with some relatively warm ones running at 25 degrees. Imagine a data center full of servers: keeping the whole room at 18-21 degrees requires a large amount of precision air conditioning to cool the entire facility down to that temperature so the servers can work normally. So the question is: why must a data center be at 18-21 degrees? Is that temperature reasonable? Can we lower it, can we raise it, and what would that mean economically, in money terms? The reason for 18-21 degrees is a historical legacy: early semiconductors and servers were designed with relatively high power consumption, and given their process technology and cooling, their temperature requirements were quite harsh; once outside that range, the old systems simply stopped working.
But think about it the other way round. Many new computers, including desktop graphics cards, may run at 60, 70, 80, or even 90 degrees, and the whole computer still operates well. Why has no one tried this on servers? Would there really be a problem? In fact, not necessarily. A large part of the reason is that server vendors promise SLA service quality guarantees and warranty terms, and all of those terms are premised on the server operating at an air-conditioned temperature in your data center. Is this reasonable? Around the world, there are real doubts.
And there is one last factor we have been missing. Traditional data centers did not hold many servers: a few, dozens, maybe hundreds, with a few precision air conditioners blowing on them, so the power consumption was nothing surprising. As Internet business has grown, the underlying data centers have grown by orders of magnitude, and the cost of running all that air conditioning just to cool the servers now stares every data center owner in the face. So, over the last couple of years, people around the world have wanted to raise the temperature of the next generation of data centers from the traditional 18-21 degrees to something higher. The benefit is obvious: less investment in air conditioning, and more power going to IT equipment rather than refrigeration equipment.
We compiled some statistics, which form a pyramid on a global scale. The first kind is the traditional data center; everyone is familiar with PUE, and the energy efficiency losses are well known. At present most data centers have a PUE of around 2 or 3, meaning they operate at 18 to 21 degrees, or up to 25 degrees; these account for 77% of data centers worldwide. But as companies like Google promote green, energy-saving data centers, many people have begun to optimize the PUE of traditional data centers with corresponding retrofits, such as air-conditioning economizers: when the outdoor temperature is lower than the data center's operating temperature, the air conditioning stops refrigerating and heat is carried away entirely by outside air, with some heat exchange, ensuring both the right temperature and maximum efficiency. Even so, these kinds together account for about 90% of data centers worldwide. There are also new data centers that take a modular approach to the whole structure, such as our sister company's Cloud Box, which turns everything from the IT equipment to the power equipment and battery packs into modules inside a container. With this transformation, the data center temperature can largely be raised to 35 to 40 degrees; such data centers account for 9% worldwide.
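As an aside, the PUE values quoted here are simply the ratio of total facility power to IT equipment power; a minimal sketch, with illustrative figures:

```python
def pue(total_facility_kw, it_equipment_kw):
    """Power Usage Effectiveness: total facility power / IT equipment power.
    1.0 is the theoretical ideal; 2.0 means one watt of overhead
    (cooling, power conversion, lighting) for every watt of IT load."""
    return total_facility_kw / it_equipment_kw

# A traditional room at PUE 2.0: a 500 kW IT load draws 1000 kW in total.
print(pue(1000, 500))  # → 2.0

# A free-cooled, containerized design closer to the figures discussed later:
print(pue(625, 500))   # → 1.25
```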
There are even more aggressive companies that can make the data center work at 40 degrees. That sounds like a very high temperature, and you may think your equipment could never work there, but consider why these companies can raise their data centers to this temperature and promote it worldwide. Beyond what I have just said, many companies in the market are already using high-temperature data centers: Google has switched to at least 27 degrees, not 18-21. Sun did some research and found that every 1-degree increase in data center temperature saves about 4% of energy costs. That is a huge number, especially for today's large Internet companies, telecom companies, and commercial data centers; cutting energy costs by 4% is a very good thing. Microsoft raised its data center temperature by 2-4 degrees, and even a small data center can save up to 250,000 dollars in electricity per year. Last year Facebook open-sourced everything from its data center design down to its racks, and they run at 27 degrees.
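The Sun figure lends itself to a back-of-the-envelope calculation; a sketch assuming a simple linear 4%-per-degree model (the annual bill below is hypothetical, and real behavior is facility-specific):

```python
def estimated_savings(annual_energy_cost, degrees_raised, rate_per_degree=0.04):
    """Rough estimate of the Sun finding cited above: each 1-degree rise in
    data-center temperature saves ~4% of energy cost. A linear approximation,
    capped at 100%."""
    saved_fraction = min(degrees_raised * rate_per_degree, 1.0)
    return annual_energy_cost * saved_fraction

# Raising a room from 21 C to 27 C (the Google setpoint mentioned above)
# on a hypothetical $1M annual energy bill saves roughly $240,000.
print(estimated_savings(1_000_000, 6))
```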
At this point you can see that letting your equipment run at a higher temperature is, from a global perspective, a trend in cloud and big data applications, and it has become irreversible. Yet we still see many large data centers in China, and many traditional servers, running in relatively low-temperature environments. Here we as manufacturers, and server manufacturers in particular, still have a lot of work to do, with specific topics waiting to be developed.
Back to today's topic: facing this global push to optimize data center energy consumption and raise temperatures, what has SuperCloud done to ensure we can embrace higher data center temperatures? We have concluded there are four points. The most basic: SuperCloud will always be first to adopt the latest generation of chips, whether today's 32 nm or tomorrow's 22 nm, because each new generation brings better energy efficiency, better thermal efficiency, and better performance. In March this year we officially released SuperCloud servers based on Intel's next-generation chipset, helping everyone improve CPU power loss and efficiency at the chip level.
At the same time, we are very actively involved in the global standards for high-temperature data centers and high-temperature racks, called HTA, the high temperature ambient operating standards. Based on these standards we have made new improvements and new designs in our servers, so they can run not only in a traditional 18-21 degree data center but, more importantly, operate 7x24 at a higher temperature, fail-safe and stable. For data center owners, this is an obvious benefit.
Beyond the hardware-level optimizations with the latest process technology, the software layer provides a platform supporting data center energy management: monitoring from the entire data center down to the rack, the PDU, the server, the power supply, and even a specific hard disk or CPU socket, with policy-based control of energy consumption. For example, shutting down some boxes during low-traffic periods, or adjusting frequencies when business is light; all of this can be done through the data center energy management platform.
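The policy layer described here can be sketched as a simple rule over per-server load. The server fields and action names below are hypothetical illustrations of the idea, not a real product API:

```python
def apply_power_policy(servers, idle_threshold=0.10, cap_watts=80):
    """Policy sketch: power off idle boxes, cap frequency/power on lightly
    loaded ones, run busy ones at full power.
    servers: list of dicts with 'name', 'load' (0..1), 'powered_on'."""
    actions = []
    for s in servers:
        if not s["powered_on"]:
            continue
        if s["load"] < idle_threshold:
            actions.append((s["name"], "power_off"))        # idle box: shut down
        elif s["load"] < 0.5:
            actions.append((s["name"], f"cap_{cap_watts}W"))  # light load: lower frequency
        else:
            actions.append((s["name"], "full_power"))
    return actions

fleet = [
    {"name": "node1", "load": 0.05, "powered_on": True},
    {"name": "node2", "load": 0.30, "powered_on": True},
    {"name": "node3", "load": 0.85, "powered_on": True},
]
print(apply_power_policy(fleet))
# → [('node1', 'power_off'), ('node2', 'cap_80W'), ('node3', 'full_power')]
```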
All of this work, in fact, serves the major data centers that can adopt natural air cooling. With free-air cooling technology, nearly 60% of the Earth's surface can use natural air instead of air conditioning to carry away heat. Just outside this venue you can see the plans for the future super-large-scale Beijing-Inner Mongolia cloud computing data center, whose site in Inner Mongolia was chosen precisely because natural air cooling is possible there.
In such cases, the temperature of the entire data center, and of the servers, can be raised to a new level. Many friends ask: besides this, what else does SuperCloud do that can help save energy? In hardware, structure, and design, our thinking is rather different: our idea is to target the application, adjusting and optimizing the server architecture around the software the customer actually runs. I don't know whether you agree with this, but I very much do.
For example, one of our larger customers is Taobao. They used to do content caching on very common, standard servers, and found that memory, network, and disk all ran at 100%, completely saturated with almost no headroom left, while CPU utilization was only 10%, a very low level. Virtualization could not help much here, because the hard disks were completely full. So 90% of the CPU's power was wasted, which means electricity was wasted. SuperCloud helped them with the idea of redesigning a server around the application: replacing the CPU with a netbook-class processor to build a server optimized for this cloud computing workload. After the optimization, the memory, disks, and network were completely unchanged and still run at 100%, yet power consumption is far lower than before. Deployed at Taobao's scale, not only was the one-time investment about 30% cheaper than before, but more crucially, in ongoing operations, the electricity saved cuts operating expenditure (OPEX) by about 60%. By optimizing the hardware for the specific software and removing unnecessary functions, we achieve optimal management of server energy consumption.
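The sizing logic behind this example can be sketched as a simple utilization scan that flags the saturated resources and the over-provisioned one; the numbers mirror the figures quoted above, and the function and thresholds are illustrative:

```python
def analyze(utilization, slack_threshold=0.5):
    """Split resources into bottlenecks (near 100% busy) and
    over-provisioned ones (well below capacity).
    utilization: dict of resource name -> fraction busy (0..1)."""
    bottlenecks = [r for r, u in utilization.items() if u >= 0.95]
    overprovisioned = [r for r, u in utilization.items() if u < slack_threshold]
    return bottlenecks, overprovisioned

# The content-cache node described above: everything saturated except CPU.
cache_node = {"cpu": 0.10, "memory": 1.00, "network": 1.00, "disk": 1.00}
bottlenecks, wasted = analyze(cache_node)
print(bottlenecks)  # → ['memory', 'network', 'disk']
print(wasted)       # → ['cpu']  (candidate for a cheaper, lower-power part)
```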
The design thinking behind SuperCloud's next-generation high-temperature server covers the structure, the key components, and the overall server design. The structure has to accommodate a higher temperature, so we gave some things up: the main design choice is a 2U height, and more crucially we re-tuned and optimized the thermal engineering so that the memory and other important local hotspots are cooled well under a completely new air duct design. In component selection we also differ from traditional servers. For high performance, high-end designs often choose CPUs with greater heat output but stronger performance; for our high-temperature server we instead chose the most mainstream and universal standard parts in the 80W to 95W range, which cover about 80% of the mainstream dual-socket server market, and we made new improvements to the memory, so that everything from components to structure can support a higher temperature. Of course, we also know there is one small bottleneck: the traditional mechanical hard drive, whose maximum physically tolerable temperature is 35 degrees. Above 35 degrees the drive's mechanics start to produce errors, failures, and latent risks, so for operation at higher temperatures we use SSDs, allowing the entire server to run hotter and achieve better energy optimization.
There are also subtle parts of the overall design: high-power fans, adjustable fans, and some new heat sinks, all of which we improved. More critically, our next-generation high-temperature servers fully support the energy management system; as I said just now, it can manage not only a server or the power of a particular CPU, but also a specific hard disk or a specific memory module. These are some of the design concepts behind our next generation of data-center-optimized servers.
As a result, in our testing of the current generation of data-center-optimized servers, two models are able to run 7x24x365 in a 0-47 degree environment. That is a very high temperature; they can basically be deployed in an ordinary room, which to a large extent means customers can save a great deal of money on air conditioning, dispensing with the cost of refrigeration.
These servers have now formally entered the market, and we will trial and promote the next-generation high-temperature servers in new data centers, such as the one in Harbin. In addition to the traditional 2U dual-socket server running at 47 degrees, we have also explored and optimized a 1U design; the current 1U can run year-round at 35 degrees, which for ordinary data centers in ordinary cities means cooling almost entirely by natural air, with no air conditioning needed.
You can see that even with all this work, the server's specifications and performance have not shrunk at all. With the same specifications we support the latest DDR3 memory, 1600 MHz UDIMMs, helping customers achieve better memory latency. On the CPU side, with the latest generation of Intel E5 Xeon dual-socket mainstream platforms, we already have 4-5 SuperCloud high-temperature server models on the new Intel platform. Our test results show generally stable operation at 35 degrees at 1U height, and stable operation at as much as 47 degrees at 2U, which is relatively advanced among the high-temperature servers currently on the market.
When it comes to server energy management, we work with proven technology partners such as Intel, whose global Data Center Manager energy management system is now available. SuperCloud's future G9 series servers will support the Intel data center energy management system, with the benefits of fine-grained energy monitoring and policy-based energy management. These capabilities will expose corresponding interfaces to end users, meaning end customers can relatively easily integrate server and machine-room monitoring with their traditional network management into one management platform, achieving better and more effective operations.
From our own data, we found that on servers with the intelligent energy management platform deployed, we can achieve roughly 30% power savings at similar performance, which is a very large benefit for the end customer. At the same time, the overall power policies and adjustments have no impact on the running business and guarantee business continuity. It also means customers can place more machines in a rack while still achieving the corresponding energy savings.
In addition, the high-temperature servers and energy management software I just described all have real customers. We helped a provincial government implement an e-government cloud with a fully customized new-generation server design, because we believe that for future cloud scenarios the traditional server structure and its philosophy simply cannot be adapted: in the future, whether Google or Facebook, everyone redesigns servers around their own applications. We helped this end customer achieve an open-architecture, high-temperature server environment, and the result is that in the same 2U height we can give them two dual-socket servers in one space. We also started from an open structure throughout the design: the chassis architecture is open, which means not only that there is no chassis cover, but a series of architectural changes that help the customer achieve better cooling while increasing rack density. In a server, the heaviest part is often the chassis; if that weight can be removed from the cabinet, it means more servers can be placed on a floor with a fixed load-bearing limit. We have now designed and delivered this series to the customer: 2U housing two physical nodes with no cables at all, each node dual-socket, and the entire open server stable in a 30-35 degree environment, which means the customer does not have to spend much money on refrigeration.
Performance and expandability satisfy current mainstream applications, for example a sufficient 4 or 8 hard drives. All of this ultimately helped that provincial platform achieve hardware customization and optimization for its e-government cloud.
I have said a lot. What happens if we raise the server operating temperature from 21 degrees to 25 degrees, and up to 35 degrees? Some foreign agencies have done the calculations, and found that for a data center with 20,000 servers, a traditional facility without temperature optimization could spend 8 million dollars a year on electricity for cooling. If the operating temperature is designed to 27 degrees, it can save 56%; if the operating temperature is set at 35 degrees, the savings are very large indeed, 85% of refrigeration costs. Once a data center runs at 35 degrees, its PUE is almost 1.25, already a world-leading value. As a result, more and more data centers abroad are raising their temperatures and purchasing servers that support high-temperature operation as their main servers. Starting this year, SuperCloud has officially launched next-generation servers that run perfectly at 35 and 47 degrees, and from the second half of this year we have begun trials and large-scale promotion, from provincial and municipal data centers up to some national large-scale data centers.
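The quoted estimate reduces to simple arithmetic; a sketch using the figures above (the baseline cost is the one cited, everything else follows from it):

```python
# The foreign-agency estimate quoted above: a 20,000-server facility spending
# ~$8M/year on cooling electricity at a traditional ~21 C setpoint.
baseline_cooling_cost = 8_000_000  # USD/year

savings_at_27c = baseline_cooling_cost * 0.56  # 56% of cooling cost saved
savings_at_35c = baseline_cooling_cost * 0.85  # 85% of cooling cost saved

print(f"at 27 C: save ${savings_at_27c:,.0f}/yr")  # → at 27 C: save $4,480,000/yr
print(f"at 35 C: save ${savings_at_35c:,.0f}/yr")  # → at 35 C: save $6,800,000/yr
```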
You have heard a lot about how we, as a new server vendor, make servers run at a higher temperature, 7x24x365, to help customers save costs. A brief introduction to SuperCloud: we are one of the Cloud Base companies, formally established in 2010, with Dr. Tian as chairman. From the start we received not only investment from Broadband Capital but also a series of investments from the Beijing municipal government and from the United States. This has allowed SuperCloud, in less than two years since 2010, to launch not only a series of cloud-oriented high-density, low-power servers, such as the G9 server released on March 8 this year, but also the 35-47 degree high-temperature servers we are discussing today, all in a very short period. Beyond that, we have achieved a series of results in the market: in the centralized procurement of China Mobile and China Unicom, and at 360 and Baidu, you will see SuperCloud's presence; whether in cloud antivirus, cloud services, or various cloud platforms, you can find our next generation of high-density, low-power, application-optimized servers in their data centers.
Today's sharing has mainly been about our servers: higher computational density, lower energy consumption, optimization for applications, and the higher-temperature-optimized servers we are now bringing to market. Beyond the server itself, as I have discussed, we also have software that helps the data center plan and observe its energy consumption, then deploy and adjust power based on policy. And beyond our hardware and infrastructure platform there are several solutions, which we call integrated hardware-software, out-of-the-box solutions. At present there are three major ones. The cloud cabinet integrates switching, storage, computing, and the cloud platform software above them, delivering the entire cabinet, together with the cloud platform management software, to the end user as a one-stop service. Another, which a colleague will share with you shortly, is a cloud storage solution called the cloud silo, based on our cloud storage software and our software-optimized hardware, packaged into a cloud storage system for enterprise private cloud scenarios.
Finally, big data is currently very hot in the market. SuperCloud's research in big data is mostly about how to better optimize our hardware for mainstream software; for example, our Yun Hui product is a Hadoop appliance tailor-made on our hardware, at a lower cost. Servers, our management software, and the integrated hardware-software solutions are SuperCloud's main products at present.
Our customers include Baidu, China Telecom, China Unicom, the Chinese Academy of Sciences, Taobao, the State Grid, and a series of well-known names accumulated in less than two years. Not only do we share exactly the same philosophy, optimizing and designing for the cloud, but many of them also appreciate the development speed and rapid implementation and deployment of an emerging company.
In our partner system, we progress together with major software partners such as Cloud Trends, YOYO, and other companies within the Cloud Base, and with the whole ecosystem of software and hardware partners, including optimizing hardware for specific software. We also have a range of partners developing software optimized for our hardware. Ultimately we believe that in the cloud market, software and hardware are becoming more tightly coupled, which results in better computing, better efficiency, and better economics.
Finally, let me share our latest generation, the first G9 servers we have launched on the market. Gartner's top ten IT trends for 2012 specifically mention several things besides virtualization, big data, and cloud computing; the 8th is compute per square foot, what we usually call computational density. Virtualization, big data, and cloud computing are applications or business models, but this series of applications places very large new demands on the underlying architecture, and especially on compute. Traditional servers and storage will be eliminated in this wave, because they cannot do better: they cannot provide more CPUs, more cores, per square foot or square meter. Only a new generation of cloud computing servers, servers optimized for applications, can.
We call this generation of servers the SuperCloud G9. Computing performance is up 80% over the previous generation, memory performance up 50%, while energy consumption is reduced by at least 15%; these are very specific, very attractive figures. We have moved our various products from the previous Intel architecture to the new generation of architecture, covering our 2U two-node and 2U four-node servers and our other 2U models, and today the next-generation G9 platform has been effectively combined with our high-temperature servers, so that future servers running at 35-47 degrees support the Intel architecture perfectly.
At the same time, responding to market demand, we are about to release an innovative blade server based on the new generation of CPUs, filling out SuperCloud's blade platform to better address virtualization scenarios; it will officially enter the market in the next month or two. There is also our next-generation E7000-G9, an ultra-high-density server with 20 compute nodes. All future new blades are based on the next-generation G9 platform and Intel's products. With this, SuperCloud covers traditional racks, cloud-application-optimized servers, and next-generation blade servers, and we will continue to focus on the CPU, continue to update the Hadoop server, and deliver a new generation of hardware-software optimized solutions.
Next, my colleague will show you how SuperCloud provides optimizations for Hadoop and how we provide one-stop cloud storage. Thank you.