The upcoming Windows Server 2012 R2 focuses on delivering advanced storage and networking functionality, taking on tasks that previously required additional paid software or even dedicated storage systems.
"We see network and storage as the next target, so that we can help customers reduce their cost of use while improving agility," said Michael Schutz, general manager of Microsoft's Windows Server product marketing. "We have summed up the network, storage, and computing experience we have accumulated in creating and operating cloud services and are trying to integrate them into the day-to-day work of our customers." ”
Microsoft announced the news at the recent TechEd North America conference in New Orleans. The company plans to ship a preview of Windows Server 2012 R2 by the end of this month and the final version by the end of the year.
The Windows Azure Pack provides a portal page that administrators can use to offer self-service cloud services to users.
On the storage side, Microsoft has introduced a technology called automatic tiering, which "allows the system to automatically identify the most frequently accessed files," Schutz points out. Based on those results, the operating system keeps the most frequently accessed files on the fastest available storage medium, such as SSDs (solid-state drives). Other files remain on cheaper media, such as traditional disk drives. The idea, Schutz explained, is to maximize system performance while minimizing cost.
Automatic tiering builds on the Storage Spaces feature originally introduced with Windows Server 2012, which lets Windows Server act as a front-end file server for large JBOD (just a bunch of disks) arrays.
Schutz said the feature can serve as an alternative to a full storage area network, stressing that it is best suited to small businesses that do not want to spend money on a SAN. Many web and cloud service providers prefer JBOD arrays over SANs, he said, adding that the technology scales well enough to let Windows Server support a sizeable storage array.
For example, 16 Windows Server machines, divided into four storage instances, can manage a 64-node cluster with up to 15PB of raw storage. Each server connects via SAS (serial attached SCSI) to JBOD arrays of 60 4TB disks, for 960TB per storage instance. Microsoft's official testing used a configuration of 240 drives per storage instance, but the system does not impose a hard limit on cluster size.
If an administrator wants to use automatic tiering to further improve data throughput, one in ten of the 4TB drives can be swapped for faster 500GB SSDs, which still leaves up to 13PB of capacity for infrequently accessed data.
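The capacity figures above can be checked with back-of-the-envelope arithmetic. The sketch below assumes the cluster totals 3,840 data drives (64 nodes at 60 drives each — an assumption, since the article's grouping of servers into storage instances is ambiguous):

```python
# Back-of-the-envelope check of the cluster capacity figures.
# Assumption: 64 nodes x 60 drives = 3,840 drives total.
PB = 1000  # terabytes per petabyte (decimal, as marketing figures use)

nodes = 64
drives_per_node = 60
hdd_size_tb = 4

total_drives = nodes * drives_per_node              # 3,840 drives
raw_pb = total_drives * hdd_size_tb / PB            # ~15.36 PB raw

# Swap one in ten 4TB drives for a 500GB SSD (automatic tiering).
ssd_count = total_drives // 10                      # 384 SSDs
hdd_pb = (total_drives - ssd_count) * hdd_size_tb / PB   # ~13.8 PB on HDD
ssd_tb = ssd_count * 0.5                            # 192 TB SSD tier

print(f"raw: {raw_pb:.2f} PB, HDD after swap: {hdd_pb:.2f} PB, SSD tier: {ssd_tb:.0f} TB")
```

The result roughly matches the article's 15PB raw and "up to 13PB" remaining-HDD figures.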
Just as the operating system automatically uses all available memory, automatic tiering automatically fills the available SSD space. The administrator can set how much SSD space is available, and automatic tiering handles the rest. Notably, the entire process requires no changes to the underlying NTFS file system.
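The tiering behavior described above can be illustrated with a toy model (not Microsoft's actual implementation): files are ranked by access count, and the hottest ones are pinned to a fixed SSD budget while everything else stays on HDD.

```python
# Toy model of frequency-based storage tiering (illustrative only --
# not how Storage Spaces tiering is actually implemented).
def assign_tiers(files, ssd_capacity):
    """files: dict of name -> (size, access_count). Returns name -> tier."""
    # Rank hottest files first.
    ranked = sorted(files, key=lambda f: files[f][1], reverse=True)
    placement, used = {}, 0
    for name in ranked:
        size, _ = files[name]
        if used + size <= ssd_capacity:
            placement[name] = "SSD"     # hot file fits in the SSD budget
            used += size
        else:
            placement[name] = "HDD"     # everything else stays on disk
    return placement

files = {
    "vm-boot.vhd": (20, 900),   # small, very frequently accessed
    "archive.bak": (500, 2),    # large, cold
    "db-index":    (80, 400),   # hot
}
print(assign_tiers(files, ssd_capacity=100))
# → {'vm-boot.vhd': 'SSD', 'db-index': 'SSD', 'archive.bak': 'HDD'}
```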
Another major storage innovation in Windows Server 2012 R2 is that enterprises can now apply data deduplication to virtual hard disks, letting companies running virtualized environments conserve significant storage. In such environments, virtual hard disks often contain many copies of the same operating systems and applications, so much of the data is nearly identical. To avoid wasting resources, Windows Server reduces the duplicates to a single copy, greatly cutting what users spend on storage.
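The savings come from storing each unique block only once and pointing later copies at it. A minimal content-hash sketch of the idea (illustrative only, not the actual Windows Server deduplication engine):

```python
import hashlib

# Minimal block-level deduplication sketch (illustrative only).
def dedup(blocks):
    """Store each unique block once; return (store, reference list)."""
    store, refs = {}, []
    for block in blocks:
        digest = hashlib.sha256(block).hexdigest()
        if digest not in store:
            store[digest] = block   # first copy of the block is kept
        refs.append(digest)         # later copies become references
    return store, refs

# Two VHDs whose OS blocks are identical, as in a virtualized environment:
vhd_a = [b"os-kernel", b"os-libs", b"app-A"]
vhd_b = [b"os-kernel", b"os-libs", b"app-B"]
store, refs = dedup(vhd_a + vhd_b)
print(len(vhd_a + vhd_b), "blocks stored as", len(store))  # → 6 blocks stored as 4
```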
Counterintuitively, data deduplication can also improve boot times for virtual hard disks, Schutz explains. Because a virtual disk boots on the server before its data streams to the end device, the server software can read the virtual disk's boot data directly from working memory and deliver it to the client.
On the networking side, this update also speeds up migration of running virtual machines. Windows Server 2012 already supported live migration of active virtual machines between servers; the new release drastically reduces the time a migration takes. One technique is to compress the virtual machine data on the source server and decompress it on the target server after the transfer completes, significantly reducing the amount of data the wire has to carry.
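The compress-then-transfer idea can be sketched with a standard codec such as zlib (illustrative; the article does not specify which compression algorithm Windows Server uses):

```python
import zlib

# Sketch of compressed live migration: compress VM memory on the source,
# send fewer bytes over the wire, decompress on the target. Illustrative only.
def migrate(pages):
    wire = zlib.compress(pages, 6)        # source-side compression
    restored = zlib.decompress(wire)      # target-side decompression
    return wire, restored

# VM memory is typically highly compressible (zeroed and repeated pages).
memory = b"\x00" * 4096 * 100 + b"guest-os-state" * 1000
wire, restored = migrate(memory)
assert restored == memory                 # migration must be lossless
print(f"{len(memory)} bytes reduced to {len(wire)} bytes on the wire")
```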
The update also introduces remote direct memory access (RDMA) support for the first time, so that a replica of a virtual machine can be transferred directly from the source server's memory to the target server's memory without involving either server's processor. This new process can cut transfer time by more than half.
This is Microsoft's "Year of Cloud Strategy", and the new version of Windows Server system will naturally hook up with its own azure cloud service. One of the most important new features is Hyper-V recovery services. This service manages a large number of virtual machine backup information, which automatically switches the operation to the virtual machine at the backup location after the primary site fails. "This function is to sort the recovery process in a reasonable way so that the back end is on line first and then the middle and the front end in turn." You can also adjust the online order of the backup site by setting, "Schutz told us.
Although both the primary and secondary sites can be on-premises (rather than in Microsoft Azure), the service itself still runs in Azure. To keep a natural disaster from taking out both sites at once, recovery setups typically run in two widely separated locations, and at that point a cloud service is the best choice for smooth coordination. "Everyone needs a separate, coordinating environment, and our service doesn't require you to set up a server or install software," Schutz said.
Microsoft also brings administrators some new tools in the Windows Azure Pack. This free add-on for Windows Server 2012 R2 provides a portal that replicates the entire Azure management environment. An enterprise IT department can use Windows Server and System Center to build a private cloud, offering this "cloud" service to business units, IT project managers, and other colleagues through the portal site.
"From the look and feel of it, it's like Windows Azure entering the user's own data center," Schutz said. "And the IT department no longer needs to submit applications, purchase new equipment or new applications, and the business unit can do it on its own." ”