The 10 most common data center cooling problems
Although data center cooling management is far better than it was 10 years ago, many facilities still suffer from problems such as stranded capacity and wasted energy. Data center cooling experts say the ultimate goal of airflow management is tight control of the cooling setpoint at the IT equipment inlet while minimizing the volume of air delivered to the data hall. The following are some of the most common problems in data center cooling:
1. Too many perforated tiles: there is no reason to place perforated tiles in hot aisles or in empty areas; doing so wastes cooling capacity. There may also be too many perforated tiles at the rack intakes. A warning sign is a temperature at the top of an IT rack that is lower than normal.
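A quick way to sanity-check tile counts is to compare the airflow the IT load actually demands against what the installed tiles deliver. The figures below are illustrative assumptions, not vendor data: roughly 140 CFM of cold air per kW of IT load, and on the order of 500 CFM through a standard perforated tile.

```python
import math

CFM_PER_KW = 140    # assumed airflow demand per kW of IT load (illustrative)
CFM_PER_TILE = 500  # assumed delivery of one perforated tile (illustrative)

def tiles_needed(it_load_kw: float) -> int:
    """Return the number of perforated tiles needed to serve a given IT load."""
    required_cfm = it_load_kw * CFM_PER_KW
    return math.ceil(required_cfm / CFM_PER_TILE)

row_load_kw = 40  # example: a row of ten 4 kW racks
print(tiles_needed(row_load_kw))  # -> 12
```

If the row actually has, say, 20 open tiles, the excess air bypasses the IT equipment, which is exactly the wasted cooling capacity described above.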
2. Hidden leaks: cold air leaks out of the plenum beneath the raised floor into adjacent spaces or hollow support columns. Phelps said such leaks are quite common and cause pressure losses in the IT environment, drawing in dusty, hot, or humid air from elsewhere. The only way to avoid this problem is to inspect the perimeter and the support columns under the raised floor and seal any leaks you find.
3. Unsealed raised-floor openings: although many data center operators have tried to seal cable cutouts and other openings in the raised floor, few have actually finished the job. The remaining openings can dump a large amount of cold air into areas where it is not needed. Power distribution units, remote power panels, and other electrical equipment are particularly important places to look for unsealed openings.
4. Poor rack sealing: installing blanking panels in empty rack spaces is common knowledge in airflow management. However, not everyone does it. Some cabinets are poorly designed, with gaps between the mounting rails and the cabinet edges. Efficiency-minded operators seal these openings, as well as potential openings at the bottom of the cabinet.
5. Out-of-calibration temperature and humidity sensors: sometimes suppliers ship uncalibrated sensors, and sometimes calibration drifts over time. Either way, poorly instrumented cooling units cannot work together properly. It is strongly recommended that operators calibrate temperature and relative humidity sensors every six months and adjust them whenever necessary.
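A six-month calibration check can be as simple as placing a calibrated reference instrument next to each installed sensor and flagging the ones that have drifted beyond tolerance. The sensor names and tolerance values below are illustrative assumptions.

```python
TEMP_TOL_C = 0.5   # assumed acceptable temperature error, deg C (illustrative)
RH_TOL_PCT = 3.0   # assumed acceptable relative-humidity error, % (illustrative)

def flag_drifted(sensors, ref_temp_c, ref_rh_pct):
    """Return names of sensors whose readings exceed tolerance vs the reference."""
    drifted = []
    for name, (temp_c, rh_pct) in sensors.items():
        if (abs(temp_c - ref_temp_c) > TEMP_TOL_C
                or abs(rh_pct - ref_rh_pct) > RH_TOL_PCT):
            drifted.append(name)
    return drifted

readings = {
    "crac-01": (22.1, 45.0),
    "crac-02": (24.0, 44.5),  # temperature has drifted
    "crac-03": (22.3, 51.0),  # humidity has drifted
}
print(flag_drifted(readings, ref_temp_c=22.0, ref_rh_pct=45.0))
# -> ['crac-02', 'crac-03']
```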
6. Too many cooling units running: many data center operators provision excess cooling capacity. If the running capacity exceeds what the load requires and the surplus CRACs are not properly staged, the whole cooling plant suffers because too many units operate in an inefficient state. When the underfloor supply temperature is high and some racks are hard to cool, the operator's instinctive response is to switch on more cooling units. Counterintuitively, however, the right approach is often to run fewer CRACs, said Phelps.
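The staging idea above can be sketched as a small calculation: pick the smallest number of units (plus redundancy) that covers the load, so each running unit stays in an efficient load band instead of many units idling at low load. The capacity figure and N+1 policy are illustrative assumptions.

```python
import math

UNIT_CAPACITY_KW = 100  # assumed sensible capacity of one CRAC (illustrative)
REDUNDANT_UNITS = 1     # assumed N+1 redundancy policy (illustrative)

def units_to_run(it_load_kw: float, max_load_fraction: float = 0.9) -> int:
    """Smallest unit count keeping per-unit load <= max_load_fraction, plus N+1."""
    needed = math.ceil(it_load_kw / (UNIT_CAPACITY_KW * max_load_fraction))
    return needed + REDUNDANT_UNITS

print(units_to_run(350))  # -> 5 (4 duty units at <=90% load, plus 1 redundant)
```

Running eight units against this 350 kW load would put each one near 44% load, which is the inefficient state the item describes.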
7. CRACs fighting each other over humidity control: a reliable way to make two adjacent CRACs fight each other is to feed them return air at different temperatures. The CRACs then compute different relative humidity readings, and one ends up humidifying while the other dehumidifies. Solving this problem requires a good grasp of the psychrometric chart and carefully set humidity control points, said Phelps.
8. Empty cabinet positions: this is another obvious factor, yet for some reason not everyone takes it seriously. When one or more cabinet positions sit empty, the airflow balance is disrupted: exhaust air recirculates into the cold aisle, or cold air escapes from it. This leads to overcooling and to supplying more air than is actually needed in order to compensate for the losses.
9. Bad rack layout: ideally, racks are arranged in rows with alternating hot and cold aisles, and the primary CRACs sit at the ends of each row. A scattering of racks with no consistent orientation helps no one; neither does arranging the racks front-to-back, or running the CRACs in the same direction as the IT rows.
10. Cooling management not given due attention: operators who never consider the benefits of better cooling management end up with stranded capacity and higher operating costs. Simple measures such as installing blanking panels pay for themselves but are often ignored. In extreme cases, a well-managed data center cooling system can even defer an expansion or the construction of a new facility.