Moving an application to run on a cloud computing platform brings many real benefits beyond merely following the trend.
When you want to develop your own application on a cloud computing platform, there are two options: the first is to build at the so-called IaaS (Infrastructure as a Service) level, and the second is to build at the PaaS (Platform as a Service) level, so the choice deserves careful thought.
What is the difference between building cloud applications on top of these two?
Developing on IaaS: developers can tailor their own environment and architecture
An IaaS platform, such as EC2 (Amazon Elastic Compute Cloud) in Amazon AWS, provides computing infrastructure that runs in the cloud. This infrastructure is not very different from what you could build yourself: the servers and operating systems that execute the application, the network environment, the disk storage space, and possibly supporting software systems such as HTTP servers and database servers.
The big difference is that this infrastructure is not sitting in your own server room but in the "cloud": as long as you pay for it, you can obtain these computing resources from the cloud at any time and deploy your programs onto them.
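To make this concrete, here is a minimal sketch in Python, using the boto3 library, of renting a virtual server from EC2 on demand; the AMI ID, instance type, and key pair name are placeholders chosen for illustration, not values from this article.

```python
# Minimal sketch: renting a server from EC2 on demand with boto3.
# The AMI ID, instance type, and key pair name are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",   # placeholder machine image (OS plus base software)
    InstanceType="t3.micro",  # how much CPU and memory you are paying for
    MinCount=1,
    MaxCount=1,
    KeyName="my-deploy-key",  # placeholder SSH key pair used to deploy the program
)

print("Provisioned instance:", response["Instances"][0]["InstanceId"])
```

Once the instance is running, deploying the application onto it is no different from deploying onto a machine in your own server room.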
For IaaS-based applications, development itself is not much different: you use the same programming languages and libraries as before and build applications in the same fashion. The benefit of the "cloud" is that the computing resources live in the cloud, and the vendor operating the cloud on the far end takes care of the details of managing them for you.
The advantage of developing an application on IaaS is that developers have great flexibility: because the cloud provides only the most basic computing resources and environment, developers can fully customize the environment and architecture their application needs.
However, this cuts both ways. When developers have the flexibility to build their own system architecture on demand, it also means they must spend their own time dealing with the various issues that architecture raises.
The most important of these is that developers must handle system scalability themselves: growing the service to the required scale by adding the relevant computing resources (servers, memory, storage space), as well as dealing with load balancing and fault tolerance across those resources.
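On IaaS these concerns are the developer's to configure. As one hedged illustration using Python and the boto3 Auto Scaling API (the group and launch-template names are hypothetical), the developer decides the fleet size and the rule by which it grows:

```python
# Sketch: on IaaS, the developer wires up scaling and fault tolerance explicitly.
# Group and launch-template names are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling", region_name="us-east-1")

# Keep between 2 and 10 web servers; failed instances are replaced automatically.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-tier",
    LaunchTemplate={"LaunchTemplateName": "web-server-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=2,
    AvailabilityZones=["us-east-1a", "us-east-1b"],
)

# Add servers when average CPU across the group exceeds the target value.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-tier",
    PolicyName="scale-on-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 60.0,
    },
)
```

Every one of these decisions, which metric to scale on, how many servers to keep, how failures are handled, stays in the developer's hands.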
Scalability is one of the harder parts of building a system. So although developing on IaaS offers plenty of flexibility, it also demands more effort and more skill from developers.
Developing on PaaS: get a higher-level execution environment without having to deal with infrastructure-related details
Developing applications at the PaaS level is quite different from developing at the IaaS level. A PaaS offers a much higher-level operating environment than the raw facilities provided by IaaS; its primary goal is to provide a more sophisticated execution environment that encapsulates many infrastructure-related details. As a result, developers do not have to work out how to scale servers to the right size, nor how to handle load balancing and fault tolerance across a large number of servers.
What developers face is a highly abstract execution environment: a PaaS platform may provide several sets of APIs for different purposes, a variety of development tools, or even an integrated development environment on which to build applications, but developers cannot, and need not, touch the details of how the underlying system operates.
Providers of PaaS platforms typically have extensive and sophisticated experience in building very large system architectures. They have distilled their experience with scalability on those large architectures into PaaS-level cloud computing platforms. The most typical example, of course, is Google.
The PaaS cloud platform Google is currently promoting is called GAE (Google App Engine). Google emphasizes that when you deploy your application to run on GAE, it runs on the same infrastructure as Google's own applications.
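As a rough sketch of what this looks like in practice (assuming the Python standard runtime and the Flask framework; the handler itself is a trivial illustration), notice that the application code contains no server-management logic at all; GAE decides how many instances to run:

```python
# main.py -- a minimal handler for Google App Engine's Python standard runtime.
# Nothing here provisions servers or balances load; App Engine handles that.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def hello():
    return "Hello from App Engine"

# Deployment sketch: add an app.yaml that declares the runtime, then run
# `gcloud app deploy`; App Engine schedules instances on Google's infrastructure
# and scales them up or down with traffic.
```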
It is easy to imagine that Google originally created this so-called "Google Infrastructure" simply to meet its own applications' need for globally large-scale systems. But having noticed that cloud computing could potentially be sold the way traditional utilities sell electricity, Google built a platform suitable for application development on top of the computing facilities originally intended for its own use, and turned it into a PaaS-level cloud service.
Without PaaS, ask yourself what scaling your system actually involves. You might invest in extra computing resources, adding hosts, main memory, storage space, and network bandwidth; but if your system architecture is not designed so that adding these resources scales the system, the investment will not help, and to reach the point where adding resources does scale the system, you may first have to design and build such an architecture yourself.
Take, for example, the web application architecture we are all familiar with: at the front end, an HTTP dispatcher (such as Apache httpd) accepts HTTP connection requests from users and distributes them evenly to the middle-tier web application servers, where the application executes. Applications inevitably need to manipulate data, and in the typical architecture that data is stored on back-end database servers. For fault tolerance or load balancing, the architecture may allow more than one database server, with the servers forming a cluster.
When the system can no longer cope with the volume of connection requests from users, the usual way to scale it is to increase the number of web application servers in the middle tier. Initially the system is likely to be short of CPU or main memory, so adding middle-tier web application servers supplies the CPU and memory it lacks.
A good load balancer can handle sufficiently high request traffic, route it evenly to each of the middle-tier web application servers, and automatically detect servers that have failed so that no further traffic is directed to them.
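Conceptually, the dispatcher's two duties, even distribution and skipping failed servers, can be reduced to a short sketch like the one below (the backend addresses are hypothetical and no real networking is performed):

```python
# Conceptual sketch of a front-end dispatcher: round-robin distribution that
# skips servers marked unhealthy. Backend addresses are hypothetical.
import itertools

class RoundRobinBalancer:
    def __init__(self, backends):
        self.backends = list(backends)
        self.healthy = set(self.backends)
        self._cycle = itertools.cycle(self.backends)

    def mark_down(self, backend):
        # Called when a health check fails; traffic stops flowing to this server.
        self.healthy.discard(backend)

    def mark_up(self, backend):
        self.healthy.add(backend)

    def next_backend(self):
        # Walk the ring until a healthy server is found.
        for _ in range(len(self.backends)):
            candidate = next(self._cycle)
            if candidate in self.healthy:
                return candidate
        raise RuntimeError("no healthy backends available")

balancer = RoundRobinBalancer(["app1:8080", "app2:8080", "app3:8080"])
balancer.mark_down("app2:8080")   # a health check has failed
print(balancer.next_backend())    # app1:8080
print(balancer.next_backend())    # app3:8080
```

A production load balancer such as Apache httpd with mod_proxy_balancer does the same job in its configuration rather than in application code.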
Expanding the number of application servers solves only part of the scaling problem
This looks ideal, but some problems remain. Besides the fact that the fault-tolerance mechanism may not be perfect, such an architecture cannot keep expanding its service capacity indefinitely just by adding middle-tier application servers. Why?
With this architecture, adding application servers to the middle tier does scale the service at first, because the initial performance bottleneck is the servers' CPU power or main memory. But as the middle tier keeps growing, the performance bottleneck slowly begins to shift, and it often moves to data access. The database is usually a centralized resource in a system's architecture; even when there are multiple database servers, they remain essentially centralized. When enough data access requests are processed at the same time, the centralized database servers become the performance bottleneck, and various adjustments to the data access architecture are then needed to improve performance. This is a situation we often see when a typical web application scales up.