Efficient virtualization strategy for private cloud performance monitoring



Private cloud performance monitoring matters not only for diagnosing failures, but also for ensuring that service levels meet the needs of centralized services. To succeed, savvy IT staff build an efficient virtualization strategy around performance monitoring for the private cloud.




Continuous collection of private cloud performance monitoring data




Private clouds are primarily about processes, automation, people management, and consolidation. Some private clouds mix virtualization technology with physical hosts, so you need to collect data from all of those hosts, regardless of which performance monitoring tool you use. Collect data continuously, not only during consolidation, centralization, or fault diagnosis; a minimal sketch of such a collector follows.
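As a rough illustration of what continuous collection looks like in practice, the sketch below samples every host on a fixed interval and appends timestamped data points. The metric source, output file, and interval are placeholders for whatever tool or API you actually use; the article does not name a specific one.

# Minimal sketch of a continuous collector: sample on a fixed interval and
# append timestamped data points, whether or not anyone is currently
# consolidating, centralizing, or troubleshooting.
import time
import json

def read_host_metrics(host):
    """Placeholder: return a dict of metrics for one host from your tool of choice."""
    return {"cpu_pct": 0.0, "mem_pct": 0.0, "net_mbps": 0.0}

def collect_forever(hosts, interval_seconds=60, out_path="metrics.jsonl"):
    while True:
        ts = time.time()
        with open(out_path, "a") as out:
            for host in hosts:
                point = {"ts": ts, "host": host, **read_host_metrics(host)}
                out.write(json.dumps(point) + "\n")
        time.sleep(interval_seconds)

Keeping the raw, timestamped points around is what makes the historical analysis described next possible.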




Typically, neither users nor monitoring systems notice a problem when it first emerges; it becomes visible only once it is serious enough to affect users. With historical data, you can see when the problem began. Perhaps the CPU load climbed right after a virus scanner upgrade completed a week ago. Spotting that in the historical data helps the troubleshooters quickly locate the cause, fix it, and restore the system to an efficient state.




Private cloud performance monitoring also brings non-technical benefits. Some of the services you want to centralize, such as departmental Web servers, typically have little monitoring in place. When such a service goes down or slows, many departments simply reboot it, which is the wrong approach.




If you make the case for centralizing services on the grounds that monitoring improves availability and performance, it is hard for departments to refuse. After all, you are doing it right, and they are not.




Transparency




Transparency is also important. Open the cloud's performance data to developers and application administrators so they can see how their configuration choices affect performance. In a cloud built on a virtualized architecture, a choice may benefit one application while hurting the performance of the environment as a whole. IT systems are about balance, and performance is part of that balance. Document the performance goals for each application so that you aim to meet them rather than exceed them; exceeding the goals costs additional money and time.




Select the right data points for private cloud performance monitoring




When deploying a private cloud performance monitoring system, collect as many relevant data points as possible, and collect them from the right places. In a virtualized environment, do not read CPU load from inside a virtual machine; the result will be wrong. Get accurate figures from the perspective of the virtualization platform instead, and do the same for memory usage, network I/O, storage I/O, and so on.
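As a rough illustration of reading from the platform rather than the guest, the sketch below samples per-VM CPU time from the hypervisor side using the libvirt Python bindings on a KVM host. The connection URI, sampling interval, and percentage normalization are assumptions for the example, not something the article specifies.

# Minimal sketch: measure per-VM CPU usage from the hypervisor side via libvirt,
# rather than trusting figures reported inside the guest.
import time
import libvirt

URI = "qemu:///system"  # placeholder connection URI for a local KVM host

def hypervisor_cpu_usage(sample_seconds=5):
    conn = libvirt.open(URI)
    try:
        domains = [d for d in conn.listAllDomains() if d.isActive()]
        # cpu_time is cumulative nanoseconds of CPU consumed by the domain
        before = {d.name(): d.getCPUStats(True)[0]["cpu_time"] for d in domains}
        time.sleep(sample_seconds)
        usage = {}
        for d in domains:
            after = d.getCPUStats(True)[0]["cpu_time"]
            # normalize the delta to a percentage of one host CPU over the interval
            usage[d.name()] = 100.0 * (after - before[d.name()]) / (sample_seconds * 1e9)
        return usage
    finally:
        conn.close()

if __name__ == "__main__":
    for vm, pct in hypervisor_cpu_usage().items():
        print(f"{vm}: {pct:.1f}% of one host CPU")

The same principle applies to memory, network, and storage counters: read them from the hypervisor or the storage and network layers, not from agents inside the guests.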




Application behavior, by contrast, is best judged at the level of an individual server, which helps identify whether a particular cluster member is overloaded.




Also, collect data at the finest granularity you can. Many performance monitoring tools keep 5-, 15-, or 60-minute averages as historical data, which smooths the peaks out of the resulting graphs. This flattening is deceptive, because the peaks are exactly what matters.




When an application responds to a request, it should consume all of the CPU available to it as quickly as possible rather than working slowly, which shows up on a graph as peaks of 100% CPU usage. The length of each peak is what matters most, because it usually corresponds to how fast the application feels to the end user: in other words, the latency between the request and the result.




If the monitoring software averages those peaks together with the idle time, you may see 50% CPU usage and wrongly conclude that performance is acceptable. Network and storage links behave the same way: a minute at 100% utilization followed by a minute at 0% averages out to 50%, which looks harmless on paper. Cases like this call for higher-resolution data and deeper analysis. Of course, collecting and retaining large amounts of high-precision data itself consumes CPU, memory, network, and storage resources, so you need to strike a balance.
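A small, self-contained illustration of the effect, using synthetic one-second samples and an assumed five-minute averaging window:

# Minimal sketch: why coarse averaging hides saturation.
# Synthetic one-second CPU samples: the host alternates between one minute
# at 100% (a request burst) and one minute idle, for ten minutes.
samples = ([100.0] * 60 + [0.0] * 60) * 5

def averaged(series, window):
    """Collapse the series into window-sized buckets, as many tools do for history."""
    return [sum(series[i:i + window]) / window for i in range(0, len(series), window)]

five_min = averaged(samples, 300)
print("1-second peak:    ", max(samples), "%")  # 100.0% - the saturation is visible
print("5-minute averages:", five_min)           # [50.0, 50.0] - the saturation disappears

The fine-grained series shows the saturation that users actually feel, while the five-minute averages report a comfortable 50%.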


