IT168 Technology — Virtualization is the de facto standard way to deploy new applications and workloads today. Yet when it comes to building data centers on this still-maturing technology, many companies have yet to realize its full economic benefits. This article offers seven tips to help keep virtualized environments running at their best.
Take the plunge
If you haven't virtualized any systems yet, go ahead and do it. A few companies still have not taken even this step, though it is clear that most have. When it comes to virtualizing particularly dense or particularly sensitive workloads, however, many companies remain hesitant. For example, workloads that require very frequent input/output operations, or that are extremely sensitive to latency (whether network- or storage-related), are sometimes left in the physical environment so administrators can control conditions more directly.
However, a variety of new storage and networking options now help organizations overcome these challenges and virtualize even the largest and most latency-sensitive applications. With hybrid storage arrays and all-flash arrays, for example, companies can eliminate barriers around input/output operations per second (IOPS) without giving up capacity. Hybrid arrays in particular are often a good fit because, rather than relying on hard drives alone, they let businesses strike a balance between performance and capacity.
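The performance-versus-capacity trade-off above can be made concrete with a back-of-the-envelope calculation. This is a minimal sketch with made-up per-device figures (the IOPS and capacity numbers are assumptions for illustration, not vendor specifications):

```python
# Illustrative comparison of disk and flash tiers in a hybrid array.
# Per-device figures below are rough assumptions, not vendor specs.

def tier_totals(devices, iops_per_device, tb_per_device):
    """Return (total IOPS, total capacity in TB) for a storage tier."""
    return devices * iops_per_device, devices * tb_per_device

# Assumed ballpark: a spinning disk delivers ~150 IOPS; a flash
# device ~50,000 IOPS but far less capacity per device.
hdd_iops, hdd_tb = tier_totals(devices=20, iops_per_device=150, tb_per_device=2.0)
ssd_iops, ssd_tb = tier_totals(devices=4, iops_per_device=50_000, tb_per_device=0.8)

print(f"HDD tier:    {hdd_iops:>7} IOPS, {hdd_tb + 0:5.1f} TB")
print(f"Flash tier:  {ssd_iops:>7} IOPS, {ssd_tb:5.1f} TB")
print(f"Hybrid pool: {hdd_iops + ssd_iops:>7} IOPS, {hdd_tb + ssd_tb:5.1f} TB")
```

The point of the sketch: a small flash tier contributes almost all of the pool's IOPS, while the disk tier contributes almost all of the capacity, which is exactly the balance a hybrid array aims for.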
Rethink the data center
Rather than focusing on storage alone, look for broader opportunities to improve: pay attention to emerging options and rethink everything you do. For example, if your existing data center infrastructure is nearing its replacement cycle, consider replacing it with a converged infrastructure solution, which promises to dramatically simplify how the data center is supported. Converged solutions are becoming increasingly common and fall into two broad categories:
Macro-converged. This category includes solutions such as Vblock and NetApp FlexPod. Vblock comes from VCE, a joint venture of Cisco, VMware, and EMC. Products in this category are combinations of existing products that are fully tested and pre-built, and are sold under a single stock-keeping unit (SKU). They are also supported as a single system, so if any part of your environment runs into trouble, you can simply pick up the phone and call VCE. You don't have to figure out where the problem lies first, and you avoid the all-too-common finger-pointing among vendors.
Hyper-converged. This is a relatively new niche, but it is growing thanks to companies such as Nutanix, SimpliVity, Scale Computing, and Pivot3. These companies build appliances on commodity hardware and, most importantly, take a software-based approach to the data center. The appliances move storage into the device itself, close to compute, eliminating the need for a storage area network (SAN). Such solutions typically rely on a custom distributed file system that pools all of the server-based storage and manages it for the entire environment. They promise to greatly simplify the data center and cut costs: when you need more processing power or storage capacity, you simply buy another building block of the infrastructure and add it, practically without effort.
Follow good deployment practices
The ease with which new workloads can be deployed in a virtualized environment is a double-edged sword. Because deployment is so easy, companies often find themselves creating new virtual machines on a whim, without pausing to ask whether the machine will be needed in the long run. This haphazard activity harms the long-term health of the virtualized environment, as the accumulating virtual machines slowly consume resources that could have been put to better use by more important workloads.
To address this problem, companies should implement policies and procedures that limit virtual server sprawl. Require a justification before any new virtual machine is created, and make sure that temporary virtual machines carry some sort of end date, so that when they expire they are removed or archived. Several products on the market can help businesses discover and eliminate virtual server sprawl and zombie virtual machines.
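The end-date policy described above can be enforced with a simple periodic check. This is a minimal sketch: the inventory list and its field names are hypothetical stand-ins for data you would pull from your hypervisor's API:

```python
from datetime import date

# Hypothetical inventory; in practice this would come from your
# hypervisor's API. Names and fields here are illustrative only.
inventory = [
    {"name": "web-01",       "expires": None},              # permanent VM
    {"name": "demo-sandbox", "expires": date(2015, 1, 31)}, # temporary
    {"name": "qa-temp-7",    "expires": date(2015, 6, 30)}, # temporary
]

def expired_vms(vms, today):
    """Return the names of temporary VMs whose end date has passed."""
    return [vm["name"] for vm in vms
            if vm["expires"] is not None and vm["expires"] < today]

for name in expired_vms(inventory, today=date(2015, 7, 1)):
    print(f"{name}: past its end date - archive or remove")
```

Run on a schedule, a check like this turns the "some sort of end date" policy into an actionable report instead of a forgotten intention.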
Automate
How often do you repeat the same operation? Doing the same thing over and over not only wastes valuable time but also costs you the chance to work on activities that deliver more value. As more functionality in today's data centers moves into software, new opportunities arise to automate these activities: software is inherently more flexible than hardware and easier to tailor to your requirements.
As part of this productivity effort, use the tools that ship with your hypervisor to simplify the deployment of new systems. For example, you can take advantage of features such as Host Profiles.
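The template-driven idea behind such features can be sketched in a few lines. Everything here is hypothetical: `create_vm` is a stub standing in for a real provisioning call to your hypervisor's SDK, and the template names and sizes are invented for illustration:

```python
# Minimal sketch of template-driven deployment. `create_vm` is a
# hypothetical stub; a real environment would call the hypervisor
# API (e.g. via its SDK) here instead.

TEMPLATES = {
    "web": {"cpus": 2, "memory_gb": 4,  "disk_gb": 40},
    "db":  {"cpus": 4, "memory_gb": 16, "disk_gb": 200},
}

def create_vm(name, cpus, memory_gb, disk_gb):
    # Stub: in reality this would provision an actual virtual machine.
    return {"name": name, "cpus": cpus, "memory_gb": memory_gb, "disk_gb": disk_gb}

def deploy(name, template):
    """Deploy a VM from a named template instead of re-entering settings."""
    return create_vm(name, **TEMPLATES[template])

vm = deploy("web-02", "web")
print(vm)
```

The design point is the same one the article makes: capture the repeated decisions once, in software, so each new deployment becomes a single call rather than a manual checklist.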
Implement good monitoring tools
Regardless of the size of your virtualized environment, you need good monitoring tools to get the most out of it. Monitoring tools help you identify issues more quickly, whether they threaten the availability of the environment or cause performance problems that affect business workloads. Good monitoring tools can also warn you of looming compute or storage capacity problems. When it comes to capacity management, the right tools can even predict when particular resources will be depleted, so that appropriate measures can be taken proactively.
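The depletion prediction mentioned above often boils down to trend extrapolation. This is a minimal sketch of the idea, assuming daily usage samples; the datastore numbers are made up for illustration:

```python
# Sketch of capacity forecasting: fit a linear trend to recent usage
# samples and estimate when the resource runs out. Sample data is
# invented for illustration.

def days_until_full(samples, capacity):
    """samples: usage measured once per day, oldest first."""
    n = len(samples)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(samples) / n
    # Least-squares slope of usage over time (growth per day).
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, samples))
             / sum((x - mean_x) ** 2 for x in xs))
    if slope <= 0:
        return None  # usage is flat or shrinking: no depletion in sight
    return (capacity - samples[-1]) / slope

# A datastore at 700 GB of 1000 GB, growing roughly 10 GB per day.
usage = [640, 650, 660, 670, 680, 690, 700]
print(f"~{days_until_full(usage, 1000):.0f} days until full")
```

Commercial capacity-management tools use far more sophisticated models, but the proactive principle is the same: act on the forecast, not on the outage.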
Stay up to date
Running outdated software can hurt a system from many angles, including security, availability, and performance. In your virtualized environment, try to keep up with the latest hypervisor version and interim updates. On individual virtual machines, be sure to keep VMware Tools current. This is much easier in recent versions of vSphere: once a new version is available, VMware Tools can now be updated automatically, whereas in the past updating was a manual process.
Hope for the best, plan for the worst
No one wants a failure to disrupt the production environment, and no one wants a natural disaster to destroy the data center. Unfortunately, both happen in real life, and it is the virtualization administrator's job to prepare for such events. Take availability as an example: follow best practices in system design and workload operations to avoid losing functionality. The environment should be built to withstand the loss of a host; after all, hardware eventually fails. In addition, use affinity and anti-affinity rules to ensure that workloads run where they should. For example, use anti-affinity rules to prevent all of your virtualized domain controllers from running on the same physical host.
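What an anti-affinity rule enforces can be expressed as a simple check: no two members of a group may share a physical host. This is a minimal sketch; the VM and host names are illustrative, and in practice the hypervisor's scheduler enforces the rule for you:

```python
from collections import defaultdict

# Sketch of an anti-affinity check: flag any host that carries more
# than one member of the protected group. Placement data is invented.

def anti_affinity_violations(placement, group):
    """placement: VM name -> host name. Returns hosts with >1 group member."""
    by_host = defaultdict(list)
    for vm in group:
        by_host[placement[vm]].append(vm)
    return {host: vms for host, vms in by_host.items() if len(vms) > 1}

# Three domain controllers, two of which ended up on the same host.
placement = {"dc-01": "esx-a", "dc-02": "esx-a", "dc-03": "esx-b"}
violations = anti_affinity_violations(placement, ["dc-01", "dc-02", "dc-03"])
print(violations)
```

If the check returns a non-empty result, losing that one host would take down multiple domain controllers at once, which is exactly the scenario the rule exists to prevent.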
When it comes to disaster recovery, consider some of the hybrid cloud solutions available on the market. If you do, remember to automate: make sure you can fail over seamlessly from your internal environment to the disaster recovery provider's environment.
Summary
Clearly, maintaining an efficient virtualized environment requires the virtualization administrator to think about many tasks. The seven covered here are only a subset, but they are important ones that deserve close attention.