How can we reduce investment in data center O&M and server costs?
To reduce data center operating and server costs, enterprises often have to spend money in order to save it.
Some cost-saving measures cost nothing to adopt. Others, such as more efficient equipment or management simplified through predictive analytics, require an initial investment that pays for itself over the long term.
Network, storage, and server costs
Efficiency is always a core part of cost saving. The goal of improving efficiency is to sustain workload throughput while maximizing system utilization, and that optimization spans computing, storage, and the network.
Higher utilization lowers total server cost. By balancing utilization, the data center team can buy fewer servers and related peripherals, which reduces energy consumption and cooling costs. Server software costs fall as well, because software licenses are usually priced by the number of CPUs. The spread of virtualization has also greatly improved hardware utilization.
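To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. All figures (workload units, per-CPU license price, CPUs per server) are hypothetical placeholders rather than vendor pricing; it only illustrates how raising average utilization shrinks the fleet and the per-CPU licensing bill.

```python
import math

# Hypothetical inputs -- placeholders for illustration, not real pricing.
WORKLOAD = 1_000          # arbitrary workload units to be carried
CAPACITY = 10             # units one server delivers at 100% utilization
LICENSE_PER_CPU = 2_000   # assumed annual license cost per CPU
CPUS_PER_SERVER = 2

def servers_needed(total_workload: float,
                   capacity_per_server: float,
                   target_utilization: float) -> int:
    """Servers required to carry the workload at a given average utilization."""
    return math.ceil(total_workload / (capacity_per_server * target_utilization))

for util in (0.15, 0.30, 0.60):
    n = servers_needed(WORKLOAD, CAPACITY, util)
    licenses = n * CPUS_PER_SERVER * LICENSE_PER_CPU
    print(f"utilization {util:.0%}: {n} servers, ~${licenses:,}/yr in licenses")
```

Running it shows the fleet (and the license bill) shrinking roughly in proportion as average utilization rises, which is the effect virtualization and consolidation aim for.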
Processor Selection
Another way to reduce server costs is to run each workload on the system best suited to it. Anyone who has followed system design in recent years knows that CPU clock speeds have hit a physical ceiling of roughly 5 GHz, so vendors are taking other measures to improve server performance. Some servers use a redesigned I/O bus, or a bus that connects directly to the CPU and motherboard chipset, such as IBM's CAPI interface. Alternatively, workloads can be classified by the kind of computation they perform and routed to different computing units. Parallel processing, for example, is not a traditional CPU's strong point; using a GPU for parallel and distributed computing can process the same data far more efficiently. A multi-processor accelerated system can work through workloads much faster than a conventional design, and faster workload processing means fewer hardware devices are needed to complete the same data operations.
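As a rough illustration of routing workloads to the best-suited computing unit, the sketch below uses a toy heuristic: only large, highly parallel jobs go to an accelerator. The Workload fields, the thresholds, and the job names are illustrative assumptions, not a real scheduler.

```python
from dataclasses import dataclass

@dataclass
class Workload:
    name: str
    parallel_fraction: float  # share of the job that can run in parallel
    data_elements: int        # independent elements the job touches

def choose_unit(w: Workload) -> str:
    # Heuristic (assumed thresholds): only wide, highly parallel jobs
    # amortize the cost of shipping data to an accelerator such as a GPU;
    # serial, branch-heavy work stays on the CPU.
    if w.parallel_fraction > 0.9 and w.data_elements > 100_000:
        return "GPU"
    return "CPU"

jobs = [
    Workload("transaction commit", parallel_fraction=0.1, data_elements=1),
    Workload("image batch scoring", parallel_fraction=0.98, data_elements=5_000_000),
]
for j in jobs:
    print(f"{j.name} -> {choose_unit(j)}")
```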
Data center operation costs
Worldwide, up to 50% of data center costs go to managing systems, peripherals, applications, and databases. The way to save on operating costs is simplification: replace poor or outdated practices, and buy new management software that accelerates diagnosis, fault isolation, and repair.
In many data centers, a mainframe processes the massive volume of transaction data, and the useful data is stored on that host. Yet many enterprises weigh a high-performance analysis server against parallel processing on the mainframe, and keep host data separate from the other data servers. To analyze the data, it must be extracted from the mainframe database, converted into a format the commodity data server can use, and loaded into separate storage: the extract, transform, and load (ETL) process. Typically, two or more copies of the data pass through ETL and land on the target systems.
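To make the ETL steps concrete, here is a minimal sketch. The record layout, the EBCDIC (cp500) conversion, and the output path are illustrative assumptions, not a description of any particular mainframe environment.

```python
import csv

def extract(raw_records):
    """Pull records out of the source system (here, an in-memory stand-in)."""
    for rec in raw_records:
        yield rec

def transform(rec: bytes) -> dict:
    """Convert a mainframe-style record into the target server's format."""
    text = rec.decode("cp500")          # EBCDIC code page, as an example
    account, amount = text.split("|")
    return {"account": account.strip(), "amount": float(amount)}

def load(rows, path="warehouse_copy.csv"):
    """Write the converted rows to the analysis server's storage."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["account", "amount"])
        writer.writeheader()
        writer.writerows(rows)

# Two fake EBCDIC-encoded transaction records stand in for the host data.
source = ["ACCT-001|125.50".encode("cp500"), "ACCT-002|98.10".encode("cp500")]
load(transform(r) for r in extract(source))
```

Every extra copy repeats the decode, convert, and write steps, which is why the paragraph above notes that two or more copies multiply the cost.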
Data centers can reduce server and operations-management costs by optimizing data at the mainframe host level. Current-generation servers process and analyze workloads in real time, and an optimized ETL process can save millions of dollars a year.
Contrary to popular belief, data migration is not free. The MIPS saved by moving data off the mainframe reappear as costs on the target servers: loading and managing the data on the new storage systems, procuring those systems and the associated storage and network equipment, network transmission, and ongoing server management and storage.
Human Resource Cost
Staff costs are among the largest operating costs of a data center. Depending on skills and role, system, storage, network, database, and application administrators earn around $75,000 a year, and some earn more than $125,000. If an enterprise needs fewer technical staff to manage, tune, and diagnose faulty systems, it can save substantially.
Predictive analytics is a major new development in systems management. The system analyzes itself and takes corrective action automatically, notifying technicians when a fault occurs or is about to occur (anticipating a fault before it happens is what makes the analysis predictive). IBM's Watson cognitive environment, for example, is already an industry-leading technology in data centers, including log analysis for predictive purposes. Using IBM's business analytics, the system scans big data log files for anomalies and then handles them with automated scripts or streamlined notifications. Other data center vendors should promote similar automated analysis and management tools: the more problems the system analyzes and handles on its own, the less manual management the data center needs.
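As a toy sketch of this kind of predictive log analysis, the code below flags time windows whose error count jumps well above the recent baseline and hands them to a stand-in remediation hook. The z-score threshold and the remediate function are assumptions for illustration, not part of any vendor's product.

```python
from statistics import mean, stdev

def find_anomalies(error_counts, window=5, z_threshold=3.0):
    """Yield indices where the error count jumps far above the trailing baseline."""
    for i in range(window, len(error_counts)):
        baseline = error_counts[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma > 0 and (error_counts[i] - mu) / sigma > z_threshold:
            yield i

def remediate(index):
    # Stand-in for an automated script or an operator notification.
    print(f"anomaly in window {index}: notifying on-call / running script")

counts = [4, 5, 3, 6, 4, 5, 4, 47, 5]   # errors per time window; 47 is the spike
for idx in find_anomalies(counts):
    remediate(idx)
```

Production tools apply far richer models, but the principle is the same: establish a baseline of normal behavior, detect deviations early, and trigger automation before a human has to step in.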
IBM zAware can monitor the host environment and record when the host runs best. When a fault occurs, zAware isolates the faulty host and pinpoints the location of the failure. Software of the same kind will likely reach the distributed server world in the near future.
Enterprises should consider adopting this new generation of management software, built around cognitive analytics, to reduce operating costs.