Software-defined data centers face challenges: IT departments need to shift their focus


Virtualization technology has already transformed the data center, but IT departments must overcome several obstacles before the technology can advance to its next stage.

Innovation often comes at a cost. When enterprises introduce solutions to improve the current operating model, new problems can emerge along the way. For example, the transition from horse-drawn carriages to mass-produced cars let us travel faster and farther. Relying on this new mode of transportation, however, brought new challenges: building and maintaining roads, keeping fuel available, finding mechanics when needed, and coping with noise and air pollution. In many cases, those challenges created entirely new markets.

Virtualization technology is similarly revolutionizing today's data center. It has won us over with substantial cost reductions, greater flexibility, operational efficiency, and business continuity. It has also introduced new challenges, however, forcing IT departments to better align data center architecture with the requirements of virtualized workloads.

Virtualization abstracts the data center's hardware components beneath a common software layer. Because the virtualization layer manages the underlying hardware, operations can be controlled entirely in software. Once data center services are virtualized, the islands of processor, memory, storage, and network resources that typically sit in single-purpose devices disappear. A software-defined data center can then deliver services and automate management through software.
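
To make the idea concrete, here is a minimal sketch (my own illustration, not from the article) of what controlling operations through software can look like: a service is described as data, and capacity is carved out of a shared pool of processor, memory, and storage resources rather than a single-purpose device. The `ServiceSpec` and `ResourcePool` names are hypothetical.

```python
"""Illustrative sketch: provisioning a service from pooled resources
instead of dedicated hardware. All names here are hypothetical."""

from dataclasses import dataclass


@dataclass
class ServiceSpec:
    name: str
    vcpus: int
    memory_gb: int
    storage_gb: int


@dataclass
class ResourcePool:
    vcpus: int
    memory_gb: int
    storage_gb: int

    def provision(self, spec: ServiceSpec) -> dict:
        """Reserve capacity from the shared pool for one service."""
        if (spec.vcpus > self.vcpus or spec.memory_gb > self.memory_gb
                or spec.storage_gb > self.storage_gb):
            raise RuntimeError(f"pool cannot satisfy {spec.name}")
        self.vcpus -= spec.vcpus
        self.memory_gb -= spec.memory_gb
        self.storage_gb -= spec.storage_gb
        return {"service": spec.name, "status": "provisioned"}


pool = ResourcePool(vcpus=128, memory_gb=1024, storage_gb=20_000)
print(pool.provision(ServiceSpec("web-tier", vcpus=8, memory_gb=32, storage_gb=200)))
print(pool.provision(ServiceSpec("analytics", vcpus=32, memory_gb=256, storage_gb=4_000)))
```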

So what are the problems? Many IT departments have only partially virtualized on the way to a software-defined data center. Isolated physical assets, including storage and data management devices, hold back that progress. So far, the range of infrastructure components that have been virtualized is limited; in most cases, server virtualization still relies on specialized hardware systems that cannot scale out. The result is greater complexity and higher cost, and growth only makes the problem worse.

Infrastructure Innovation

Over time, the proliferation of specialized devices has added unnecessary complexity and clutter to the data center. Products from different vendors, innovating at different stages, sit at different levels of technical maturity. Although these technologies can interoperate, they often do so very inefficiently.

Backing up data to disk is one example. Companies typically invest heavily in backup hardware, including backup servers, disk storage systems, deduplication appliances, and WAN optimization systems, and this hardware is often deployed at both the primary data center and the remote disaster recovery site. When no backup is running, the processors and memory in many of these specialized systems and appliances sit largely idle.

Capacity efficiency is another example. Over the past decade, IT departments have attacked this problem by deploying a variety of technologies, such as WAN optimization systems and backup deduplication appliances. As a result, data efficiency features have become standard in many different products.

When all of these products are deployed in the same data center, the IT department ends up reprocessing the same data as it flows through each device. The process is complex and costly, requires multiple management touchpoints, and consumes resources in a way that runs contrary to the original intent of virtualization.
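
As a rough illustration (not from the article), the sketch below shows the kind of chunk-and-hash deduplication step these products perform; when a backup appliance, a WAN optimizer, and a storage array each run such a step independently, the same bytes are chunked and hashed several times over. The chunk size and helper names are assumptions.

```python
"""Tiny sketch of block-level deduplication (illustrative only): data is
split into fixed-size chunks, each chunk is hashed, and only previously
unseen chunks are stored."""

import hashlib

CHUNK_SIZE = 4096  # bytes; a typical fixed chunk size, chosen arbitrarily here


def deduplicate(data: bytes, store: dict) -> list:
    """Store unique chunks keyed by hash; return the recipe of hashes."""
    recipe = []
    for offset in range(0, len(data), CHUNK_SIZE):
        chunk = data[offset:offset + CHUNK_SIZE]
        digest = hashlib.sha256(chunk).hexdigest()
        store.setdefault(digest, chunk)   # keep only the first copy of a chunk
        recipe.append(digest)
    return recipe


store: dict = {}
payload = b"A" * 16 * CHUNK_SIZE + b"B" * 4 * CHUNK_SIZE  # highly redundant data
recipe = deduplicate(payload, store)

print("logical chunks:", len(recipe))       # 20
print("unique chunks stored:", len(store))  # 2
```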

Insufficient Resources

Before virtualization was widely adopted, average server utilization below 10% was common. Virtualization has significantly improved average utilization. Even so, IT departments still assign separate teams to manage separate resources: servers, storage systems, networks, and end-user computing.
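
As a back-of-the-envelope illustration of that utilization gain (my own numbers, not from the article; the server count and the 60% target are assumptions):

```python
"""Rough consolidation math using the sub-10% utilization figure above."""

import math

physical_servers = 100       # one-application-per-box servers (assumed count)
avg_utilization = 0.10       # ~10% average utilization, as cited above
target_utilization = 0.60    # assumed post-consolidation target

# Useful work, expressed in "fully busy server" equivalents.
busy_equivalent = physical_servers * avg_utilization            # 10.0

# Virtualized hosts needed to carry that work at the target utilization.
hosts_needed = math.ceil(busy_equivalent / target_utilization)  # 17

print(f"{physical_servers} lightly used servers -> about {hosts_needed} virtualized hosts")
```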

New workloads bring their own resource challenges and push IT departments to build a dedicated infrastructure environment for each service. A virtual desktop infrastructure (VDI) environment, for example, has resource usage patterns quite different from those of a server virtualization project. With that in mind, IT professionals often deploy completely independent environments, from servers through storage, to meet user expectations.

Deployment Difficulty and Latency

For many enterprises, resource challenges remain the number-one source of problems when deploying new applications and services, followed by management overhead. Allocating enough storage to run an application reliably is one example: many virtual machines run on a single logical unit number (LUN), which presents the storage system with a challenging input/output load.

The term "I/O blender" describes this situation: the hypervisor funnels the distinct input/output streams of multiple workloads through a single path, so the now-random streams compete for resources and drive up the input/output operations per second (IOPS) needed to serve virtualized workloads. To overcome this challenge, IT departments often over-provision storage or replace rotating disks with flash/SSD storage to improve performance, which raises the cost of every GB of storage allocated to each virtual machine.
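
A small simulation (my own sketch, not from the article) makes the effect visible: each VM's stream is perfectly sequential on its own, but once the hypervisor interleaves the streams onto one LUN, nearly every transition in the combined stream is a seek. The round-robin interleaving and workload sizes are illustrative assumptions.

```python
"""Rough sketch of the "I/O blender": sequential per-VM streams become
effectively random once interleaved onto a shared LUN."""


def sequential_stream(start_block: int, length: int):
    """One VM reading its own extent sequentially."""
    return list(range(start_block, start_block + length))


def seek_count(stream):
    """Count transitions that are not to the next adjacent block
    (a proxy for head seeks / random I/O on a rotating disk)."""
    return sum(1 for a, b in zip(stream, stream[1:]) if b != a + 1)


# Four VMs, each with a perfectly sequential workload in its own region.
vm_streams = [sequential_stream(vm * 10_000, 1_000) for vm in range(4)]

# Per-VM streams are sequential: zero seeks each.
print("seeks per VM stream:", [seek_count(s) for s in vm_streams])

# The hypervisor interleaves the streams (round-robin here) onto one LUN.
blended = [block for group in zip(*vm_streams) for block in group]

# The blended stream is almost entirely non-sequential.
print("seeks in blended stream:", seek_count(blended),
      "out of", len(blended) - 1, "transitions")
```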

Mobility and Management

Virtual machines are portable, but how far they can move is often constrained by their physical storage. A virtual machine is bound to a datastore, and the datastore is bound to the underlying storage resources. Isolated physical storage is typically managed at the unit level: the LUN, volume, RAID group, or physical disk.

Policies are also configured at that unit level, which means a policy cannot be set for an individual virtual machine, only for a storage unit on which many virtual machines reside. The mobility and management that a software-defined data center requires call for a top-down approach instead: policies defined and managed at the VM and workload level.
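
The contrast can be sketched in a few lines of Python (illustrative only; the `Lun`, `Vm`, and `Policy` types are hypothetical): with unit-level policy, every VM on a LUN inherits the same treatment, whereas VM-level policy lets each workload carry its own.

```python
"""Sketch of unit-level vs. VM-level policy. All names are made up."""

from dataclasses import dataclass, field


@dataclass
class Policy:
    backup_schedule: str        # e.g. "daily-02:00"
    replicate: bool = False


@dataclass
class Lun:
    name: str
    policy: Policy              # unit-level: every VM on the LUN shares this
    vms: list = field(default_factory=list)


@dataclass
class Vm:
    name: str
    policy: Policy              # VM-level: each workload can differ


# Unit-level model: a mission-critical VM and a scratch VM on the same LUN
# necessarily receive the same backup/replication treatment.
shared = Lun("lun-07", Policy("daily-02:00", replicate=True),
             vms=["erp-db", "dev-scratch"])

# VM-level model: policy follows the workload, not the storage unit.
vms = [
    Vm("erp-db", Policy("hourly", replicate=True)),
    Vm("dev-scratch", Policy("weekly", replicate=False)),
]

for vm in vms:
    print(vm.name, "->", vm.policy)
```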

Misaligned Policy

Beyond the performance challenges of the post-virtualization world, enterprises still straddle the physical and virtual worlds. With physical servers there is a direct mapping from application to server, to storage array, to LUN, to storage policy. That approach makes storage upgrades very complicated. For example, a data replication policy might state that the LUN on storage array X at IP address Y must be copied to storage array A at IP address B.

In the virtualized world, a single host runs many applications and many hosts share the same LUN, so applying a policy to a single LUN is inefficient. A better way to manage the virtual environment is to apply backup and replication policies directly to an individual application or virtual machine. The replication policy then names a destination that is abstracted from the infrastructure, in this case a data center. Administrators can upgrade the infrastructure within that data center without reconfiguring policies or migrating data, which improves efficiency and reduces risk.
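
A minimal sketch of that abstraction (my own, not from the article; all names and addresses are made up): the tightly coupled policy encodes a specific array and IP address, while the software-defined policy names a logical data center that a separate mapping resolves to whatever hardware currently backs it.

```python
"""Sketch of abstracting the replication destination from the hardware."""

# Tightly coupled: the policy itself encodes the hardware.
coupled_policy = {
    "source": {"array": "X", "ip": "192.0.2.10", "lun": 7},
    "target": {"array": "A", "ip": "198.51.100.20", "lun": 7},
}

# Software-defined: the per-VM policy names a logical destination...
vm_policy = {"vm": "erp-db", "replicate_to": "dc-west"}

# ...and a mapping, owned by the infrastructure team, resolves it.
datacenter_map = {"dc-west": {"array": "A", "ip": "198.51.100.20"}}


def resolve(policy: dict, dc_map: dict) -> dict:
    """Turn a logical destination into the current physical target."""
    return dc_map[policy["replicate_to"]]


print(resolve(vm_policy, datacenter_map))

# Upgrading the array behind dc-west only changes the mapping;
# the per-VM policy is untouched.
datacenter_map["dc-west"] = {"array": "B", "ip": "198.51.100.30"}
print(resolve(vm_policy, datacenter_map))
```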

Organizational Misalignment

IT departments are often organized around resource silos and the skills that go with them. In a software-defined data center, much of that manual, hardware-specific work should no longer be required of data center staff; the abstraction layer should largely hide the complexity of the underlying hardware resources.

The IT department therefore needs to shift its focus: away from hardware resource silos, each staffed by a team of deep specialists, and toward the broader knowledge needed to manage applications and virtualized environments.

Despite these challenges, IT professionals should not shy away from deploying virtualized environments. They must, however, think through the virtualization architecture carefully in order to stay efficient and truly realize the benefits of virtualization.
