If you want to realize the vision of the software-defined datacenter, network virtualization is the last mile of the journey. IDC estimates that the network virtualization market will grow from $360 million in 2013 to $3.7 billion in 2016. The established giants have placed big bets, and many startups are targeting this direction as well. Let's look at what this means from the following angles.
A game among giants
As cloud computing gradually matures, solving the challenges of traditional network architectures has become increasingly urgent. Small and medium-sized enterprises may be fine for now, but who is most anxious? The Internet and IaaS giants. OpenFlow, born at Stanford University, is a concrete realization of the network virtualization concept, and Google and Facebook, as leading members of the Open Networking Foundation (ONF), actively participate in its standardization and promotion in support of OpenFlow.
Traditional network vendors also have to keep up with the times, or their biggest customers may well route around them and build their own solutions. Nicira, a network virtualization startup, seized the moment: its deep integration with OpenStack has provided solutions for many giants, including eBay.
VMware, already the leader in the server virtualization market, also struck a heavy blow. In 2012, VMware paid $1.26 billion to acquire Nicira, whose revenue at the time was only about $10 million, provoking heated discussion. The importance of network virtualization is plain to see.
Network virtualization is a delicate game among the giants, and we shall wait and see how it plays out. For small and medium enterprises and IT practitioners, this may be a contest between giants, but whoever wins or loses, the outcome will eventually affect the rest of us, so it is better to understand it sooner rather than later.
The challenges of traditional network architectures
In recent years, the rapid development of server and storage virtualization has made dynamic, rapid allocation of computing and storage resources commonplace, greatly shortening the time needed to stand up a server. By contrast, the traditional network architecture falls short and has become the bottleneck in the overall resource-allocation process.
For example:

1. Provisioning network resources often requires human intervention: configuring switch ports, ACLs, routing, and so on.

2. Data centers are mostly interconnected over Layer 3 (WAN) protocols. If you migrate an application between data centers, you need to pay special attention to the network configuration the application depends on; with a large number of servers, this is definitely time-consuming and laborious.

3. Enterprise mergers demand rapid integration of network systems. Anyone who has implemented such an integration project knows how deep the water is: both the complexity and the implementation cycle are challenging.

4. Integrating an enterprise's existing systems with a cloud platform is hard. Adopting SaaS may be fast, but you don't want the SaaS system to become an island of information; it probably needs to exchange data with existing systems such as SAP. If you use an IaaS public cloud, you may also want to integrate it with your existing systems, which is the hybrid-cloud concept. This requires a network that can accommodate seamless migration of applications from local infrastructure to the public cloud.

5. Network resources cannot be fully utilized. Most companies do well to use 30%-40% of their network capacity: much of it sits idle most of the time, yet during cyclical traffic spikes it proves insufficient. Many ISPs and carriers are scratching their heads, because users expect ever more data traffic while basic subscription fees stay flat. Beyond expanding the network, it is even more important to raise the utilization of the existing one. Google claims that thanks to OpenFlow (one implementation of network virtualization technology), the utilization of its internal network approaches 100%.
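The first challenge above, manual device-by-device configuration, is precisely what OpenFlow replaces with a programmable match-action pipeline: a controller installs flow rules into switches instead of an operator typing CLI commands. Here is a minimal sketch of that match-action idea; all class and field names are illustrative, not from any real controller API:

```python
class FlowRule:
    """A flow entry: match fields, a priority, and an action."""
    def __init__(self, priority, match, action):
        self.priority = priority
        self.match = match      # e.g. {"dst_ip": "10.0.0.2"}
        self.action = action    # e.g. "forward:port2" or "drop"

class FlowTable:
    """Applies the highest-priority rule whose match fields all agree with the packet."""
    def __init__(self):
        self.rules = []

    def add(self, rule):
        # Keep rules ordered by descending priority, as an OpenFlow table does.
        self.rules.append(rule)
        self.rules.sort(key=lambda r: r.priority, reverse=True)

    def lookup(self, packet):
        for rule in self.rules:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        # Table miss: in OpenFlow, the packet is typically sent to the controller.
        return "send_to_controller"

# The "ACL" and "routing" from the text become just two rules pushed by software:
table = FlowTable()
table.add(FlowRule(priority=10, match={"dst_ip": "10.0.0.2"}, action="forward:port2"))
table.add(FlowRule(priority=100, match={"dst_ip": "10.0.0.2", "tcp_dst": 23}, action="drop"))

print(table.lookup({"dst_ip": "10.0.0.2", "tcp_dst": 80}))  # forward:port2
print(table.lookup({"dst_ip": "10.0.0.2", "tcp_dst": 23}))  # drop (blocks telnet)
print(table.lookup({"dst_ip": "10.0.0.9"}))                 # send_to_controller
```

Because the rules are data rather than per-box configuration, the same program can reconfigure an entire datacenter's forwarding behavior, which is the point the challenges above build toward.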