10 new Internet technologies affecting the next decade
The InfoWorld website recently published a list of ten new technologies that may shape the next decade. Opinions will inevitably differ, but we believe these ten technologies capture the broad consensus, and we are confident they will bring tremendous changes to our lives in the next few years.
1. Private cloud technology
IT managers can take the technologies and architectures pioneered by public cloud providers and apply them to their own data centers. Private clouds typically involve many moving parts, including virtualization management, metering and chargeback systems, automated provisioning, and self-service portals.
One project in particular has gained surprising momentum over the past year: the open source project OpenStack, which provides the core set of cloud orchestration services: virtual machine management, object storage, and an image service.
OpenStack, billed as a "cloud operating system", was initially developed by Rackspace and NASA, though the project is slated to be spun off into an independent foundation. OpenStack's best-known competitor is Eucalyptus, which is essentially a private cloud implementation of the Amazon Web Services APIs.
It is easy to dismiss the word "cloud" as hype when it is attached to every technology stack. But the benefits of large-scale virtualization and related initiatives such as network convergence are tangible: greater economies of scale and pooled resources. The emerging collection of cloud orchestration software turns these changes into a new way of working.
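As a rough illustration of the self-service provisioning a private cloud exposes, the sketch below uses the openstacksdk Python client to boot a virtual machine. This is only a sketch: the cloud name, image, flavor, and network names are assumptions and must match your own clouds.yaml and deployment.

```python
# Minimal self-service provisioning sketch using the openstacksdk client.
# The cloud name "private" and the image/flavor/network names are assumptions;
# they must match entries in your clouds.yaml and your OpenStack deployment.
import openstack

conn = openstack.connect(cloud="private")           # credentials come from clouds.yaml

image = conn.image.find_image("ubuntu-20.04")       # hypothetical image name
flavor = conn.compute.find_flavor("m1.small")       # hypothetical flavor name
network = conn.network.find_network("tenant-net")   # hypothetical network name

# Ask the compute service to boot a VM, then wait for it to become ACTIVE.
server = conn.compute.create_server(
    name="demo-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
)
server = conn.compute.wait_for_server(server)
print(server.name, server.status)
```

The same request could just as well come from a self-service portal or an automated provisioning pipeline; the point is that capacity is requested programmatically rather than by filing a ticket.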
2. Software-defined networking
Like ancient coral reefs, data center networks grow and calcify slowly over time. While servers and storage have benefited from software abstractions that support dynamic management, the network has remained bound to static hardware. Decades of resistance to change have made it a major obstacle on the road to cloud computing.
SDN overlays the hardware of switches and routers with a software layer that serves as a centralized management control plane and a platform for innovation. SDN is not network virtualization per se, although network virtualization will certainly be one of its by-products. Rather, SDN is a kind of "network programming": it lets cloud service providers and independent software developers create new network functions that the rest of us can build on.
The leading example of SDN today is OpenFlow. As the Internet has evolved, two schools of thought have emerged on how to adapt it to new business needs: the evolutionary school holds that new protocols can be layered onto the existing infrastructure, while the clean-slate school argues that everything must be redesigned from scratch. OpenFlow is a new network switching model proposed by the clean-slate school, which also established the OpenFlow switching consortium to promote it.
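To make the idea of "network programming" concrete, here is a toy sketch of the match-action flow table that an OpenFlow-style switch exposes to a controller: the controller installs rules, and packets are forwarded according to the highest-priority matching rule. This is purely conceptual and is not the real OpenFlow wire protocol or any controller's API.

```python
# Toy illustration of an OpenFlow-style flow table: a controller installs
# match/action rules and the "switch" forwards packets by rule lookup.
# Conceptual sketch only, not the actual OpenFlow protocol.
from dataclasses import dataclass, field

@dataclass
class FlowRule:
    match: dict          # e.g. {"dst_ip": "10.0.0.2"}
    action: str          # e.g. "output:port2" or "drop"
    priority: int = 0

@dataclass
class FlowTable:
    rules: list = field(default_factory=list)

    def install(self, rule: FlowRule) -> None:
        """Controller-side call: push a rule down to the switch."""
        self.rules.append(rule)
        self.rules.sort(key=lambda r: r.priority, reverse=True)

    def forward(self, packet: dict) -> str:
        """Switch-side lookup: apply the highest-priority matching rule."""
        for rule in self.rules:
            if all(packet.get(k) == v for k, v in rule.match.items()):
                return rule.action
        return "send_to_controller"   # table miss: ask the controller what to do

table = FlowTable()
table.install(FlowRule({"dst_ip": "10.0.0.2"}, "output:port2", priority=10))
table.install(FlowRule({"dst_ip": "10.0.0.9"}, "drop", priority=10))
print(table.forward({"dst_ip": "10.0.0.2"}))  # -> output:port2
print(table.forward({"dst_ip": "10.0.0.7"}))  # -> send_to_controller
```

New network functions then amount to new kinds of rules installed by software, rather than new boxes installed in racks.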
3. Advanced synchronization
Apple and Microsoft are pursuing different strategies, but they are backing the same idea: a single user environment that spans devices. In fact, both companies are launching cloud programs that distribute a user's activities across devices and applications.
No more copying and pasting information between devices. Imagine your smartphone seamlessly picking up whatever resources are at hand, such as network storage, a local keyboard, a local monitor, or a nearby network, as you move around.
Compared with that, emailing files to yourself and copying them between computers feels antiquated; manual file management gives way to automatic, user-centered synchronization of data and metadata.
It sounds like science fiction. But just as many science-fiction ideas have become reality, this model of mobile computing is starting to take shape; iCloud and Windows 8 are just early examples.
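As a very rough illustration of metadata-driven synchronization (nothing like Apple's or Microsoft's actual services, just the bare idea), the sketch below compares file modification times in two directories and copies whichever copy is newer.

```python
# Sketch of metadata-driven sync: keep two directories in step by comparing
# modification times and copying the newer file over the older one.
# Purely illustrative; real services are far more sophisticated (conflict
# resolution, partial transfers, many devices, and so on).
import shutil
from pathlib import Path

def sync(dir_a: Path, dir_b: Path) -> None:
    names = {p.name for p in dir_a.iterdir()} | {p.name for p in dir_b.iterdir()}
    for name in names:
        a, b = dir_a / name, dir_b / name
        if not a.exists():
            shutil.copy2(b, a)                       # only in B: copy to A
        elif not b.exists():
            shutil.copy2(a, b)                       # only in A: copy to B
        elif a.stat().st_mtime > b.stat().st_mtime:
            shutil.copy2(a, b)                       # A is newer
        elif b.stat().st_mtime > a.stat().st_mtime:
            shutil.copy2(b, a)                       # B is newer

# Example usage (hypothetical directory names):
# sync(Path("laptop_docs"), Path("phone_docs"))
```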
4. Apache Hadoop
Hadoop is a distributed computing framework developed under the Apache Foundation. It lets you write distributed programs without understanding the underlying distributed plumbing, harnessing the power of a cluster for high-speed computation and storage. Hadoop implements a distributed file system, HDFS (Hadoop Distributed File System).
HDFS is highly fault tolerant and designed to run on low-cost hardware. It provides high-throughput access to application data, making it well suited to applications with very large data sets. HDFS relaxes some POSIX requirements so that data in the file system can be accessed as a stream.
In short, Hadoop is a software framework for processing large amounts of data in a distributed manner, and it does so reliably, efficiently, and scalably.
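To show what "writing a distributed program without worrying about the distributed plumbing" looks like in practice, here is a hedged sketch of the classic word-count job written for Hadoop Streaming: the mapper and reducer are ordinary scripts reading stdin and writing stdout, and Hadoop handles splitting the input, shuffling, and scheduling. The jar path and HDFS paths in the comment are assumptions that vary by installation.

```python
#!/usr/bin/env python3
# wordcount.py -- classic Hadoop Streaming word count.
# Run as the mapper with "wordcount.py map" and as the reducer with
# "wordcount.py reduce"; Hadoop pipes HDFS data through stdin/stdout.
import sys

def mapper():
    # Emit "word<TAB>1" for every word in this task's input split.
    for line in sys.stdin:
        for word in line.split():
            print(f"{word}\t1")

def reducer():
    # Input arrives sorted by key, so all counts for a word are contiguous.
    current, count = None, 0
    for line in sys.stdin:
        word, n = line.rsplit("\t", 1)
        if word != current:
            if current is not None:
                print(f"{current}\t{count}")
            current, count = word, 0
        count += int(n)
    if current is not None:
        print(f"{current}\t{count}")

if __name__ == "__main__":
    mapper() if sys.argv[1] == "map" else reducer()

# Submitting the job (paths are illustrative and vary by installation):
#   hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-*.jar \
#     -files wordcount.py \
#     -mapper "python3 wordcount.py map" -reducer "python3 wordcount.py reduce" \
#     -input /data/books -output /data/wordcounts
```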
5. Distributed storage layering
Distributed storage layering refers to spreading data across multiple independent devices. Traditional network storage systems use a centralized storage server to hold all data; that server becomes a performance bottleneck as well as a single point of concern for reliability and security, and it cannot meet the needs of large-scale storage applications. A distributed network storage system instead adopts a scalable architecture: multiple storage servers share the storage load, and a location service tracks where each piece of data lives. This improves the reliability, availability, and access efficiency of the system, and it is also easy to scale out.
Unlike conventional centralized storage, distributed storage does not keep data on one or a few specific nodes. Instead, it pools the disk space of every machine in the enterprise over the network, turning those scattered resources into a single virtual storage device, with data stored in every corner of the enterprise.
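One simple way to see how data can be spread over many machines without a central bottleneck is consistent hashing, sketched below. This is a generic illustration rather than the scheme of any particular product: each object's key is hashed onto a ring of storage nodes, so any client can compute where an object lives, and adding a node moves only a small fraction of the data.

```python
# Minimal consistent-hashing sketch: keys are hashed onto a ring of storage
# nodes, so every client can locate data without asking a central server.
# Generic illustration only; real systems add virtual nodes and replication.
import bisect
import hashlib

def _hash(value: str) -> int:
    return int(hashlib.md5(value.encode()).hexdigest(), 16)

class Ring:
    def __init__(self, nodes):
        self._points = sorted((_hash(n), n) for n in nodes)
        self._keys = [h for h, _ in self._points]

    def node_for(self, key: str) -> str:
        """Return the storage node responsible for this key."""
        i = bisect.bisect(self._keys, _hash(key)) % len(self._points)
        return self._points[i][1]

ring = Ring(["storage-01", "storage-02", "storage-03"])   # hypothetical hosts
for obj in ["report.doc", "backup.tar", "photo.jpg"]:
    print(obj, "->", ring.node_for(obj))
```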
6. JavaScript alternatives
JavaScript may be the most widely executed code in the world, because it underpins the web, and its reach keeps growing. Yet precisely because of JavaScript's success, some developers want languages that compile down to it instead. Google's recently launched structured web programming language, Dart, aims to fix many of JavaScript's limitations, and Dart enthusiasts say it will eventually replace JavaScript.
7. Trusted chips
Security experts have long recognized that to guarantee the highest level of application security, every layer of the stack needs to be verified, down to the physical hardware of the computing device itself.
The Trusted Platform Module (TPM), specified by the Trusted Computing Group (TCG), was the first widely used hardware chip for ensuring a trustworthy hardware and boot sequence. It has been adopted by leading companies, including Apple and Microsoft, and it forms the backbone of Microsoft's BitLocker drive encryption technology and the coming Windows 8 UEFI Secure Boot architecture.
This year, Intel combined the TPM chip with a hardware hypervisor layer to protect the boot sequence, memory, and other components, in a form that any software vendor can take advantage of.
Hardware-based security is not perfect, but hardware protection schemes will only keep getting better.
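The core mechanism behind a TPM-measured boot can be sketched in a few lines. This is illustrative only: a real TPM performs the extend operation in silicon (with SHA-1 registers in TPM 1.2), and the stage contents below are stand-in strings. Each boot component's digest is "extended" into a Platform Configuration Register, so the final value is reproducible only if every stage of the chain was exactly what was expected.

```python
# Sketch of a measured boot: each boot component's digest is "extended" into a
# Platform Configuration Register (PCR). Any change anywhere in the chain
# yields a different final PCR value. Illustrative only; a real TPM does this
# in hardware.
import hashlib

def extend(pcr: bytes, component: bytes) -> bytes:
    measurement = hashlib.sha1(component).digest()    # measure the component
    return hashlib.sha1(pcr + measurement).digest()   # PCR := H(PCR || measurement)

def measure_chain(stages) -> bytes:
    pcr = bytes(20)                                   # PCRs start at all zeros
    for stage in stages:
        pcr = extend(pcr, stage)
    return pcr

golden = measure_chain([b"firmware image", b"bootloader", b"os kernel"])
tampered = measure_chain([b"firmware image", b"evil bootloader", b"os kernel"])
print(golden == tampered)   # False: any modified stage changes the final PCR
```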
8. Continuous build tools
Continuous integration is a software development practice in which team members integrate their work frequently. Typically each member integrates at least once a day, which means multiple integrations may occur every day. Each integration is verified by an automated build (including compilation, packaging, and automated tests) to detect integration errors as early as possible. Many teams find that this practice greatly reduces integration problems and lets them develop cohesive software more quickly.
There are currently as many as 30 continuous integration tools, each with its own characteristics. In China, software companies rarely pay for such products, so the most popular choices are Hudson (open source), CruiseControl (open source), and TeamCity (commercial, free to use if you have purchased IntelliJ IDEA). Elsewhere, two commercial products are also popular: AnthillPro and Go (formerly called Cruise).
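As a rough sketch of what a server like Hudson or CruiseControl does on every integration, the script below checks out the latest code, builds it, and runs the tests, failing loudly at the first broken step. The commands are placeholders (assumptions); a real project substitutes its own checkout, build, and test commands.

```python
# Sketch of the build-and-test step a CI server runs on every integration.
# The commands are placeholders; substitute your project's own checkout,
# build, and test commands.
import subprocess
import sys

STEPS = [
    ["git", "pull", "--ff-only"],   # fetch the latest integrated code
    ["make", "build"],              # compile / package (placeholder)
    ["make", "test"],               # run the automated test suite (placeholder)
]

for step in STEPS:
    print("running:", " ".join(step))
    result = subprocess.run(step)
    if result.returncode != 0:      # stop at the first failure and report it
        print("integration FAILED at:", " ".join(step))
        sys.exit(result.returncode)

print("integration succeeded")
```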
9. Client hypervisors
Traditional desktop virtualization has two major drawbacks: it requires a constant connection between the client and the server, and it requires the server to run all of the desktop virtual machines.
The client hypervisor solves both problems. It is installed on an ordinary desktop or laptop and uses the processing power of the client itself. A virtual machine encapsulates the operating system, applications, and personal configuration settings, and each virtual machine or desktop runs in isolation: if a user accidentally downloads malware, only that virtual machine is affected. You also gain the advantages of virtualization management, including VM snapshots, portability, and ease of recovery.
10. HTML5
HTML5 is a new web markup language that has yet to be widely adopted; it still has significant limitations, but its features are already quite powerful. HTML5 introduces new elements and attributes, such as <nav> (for website navigation blocks) and <footer>. These tags make it easier for search engines to index a page and help small-screen devices and visually impaired users. HTML5 also brings new capabilities to other parts of the browsing experience, such as the <audio> and <video> tags.