Although we associate it with the Internet era, the concept behind cloud computing has been around for more than 50 years. In 1957, John McCarthy proposed time sharing as a way to use the computer as a shared tool. Since then, the concept has gone by several names: "service bureaus," application service providers, Internet as a service, cloud computing, and software-defined datacenters. Each name is slightly different, but the core concept has not changed: providing Internet-based (cloud-based) IT services.
The National Institute of Standards and Technology (NIST) offers the most widely cited definition of the cloud: cloud computing is a model that gives users convenient, on-demand network access to a shared pool of configurable computing resources (for example, networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service-provider interaction.
Providers use a three-tier service model (see Figure 1): IaaS (infrastructure as a service), PaaS (platform as a service), and SaaS (software as a service). Here, we focus on IaaS. The next step is to select a deployment model for the cloud service. In a public cloud, the provider offers infrastructure to any customer. A private cloud is available to only one organization. In a hybrid cloud, companies combine public and private clouds.
To select the cloud computing model that best suits your organization, you must analyze your organization's IT infrastructure, usage, and requirements. To help with that analysis, we describe the current state of cloud computing here.

Cloud Best Practices
As with every new architectural pattern, it is important to consider the characteristics of the new technology when designing systems. To select a cloud provider or technology, you should understand your needs well enough to list the required features. Here are some best practices for cloud migrations.

Resilient architecture
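A recurring resilience technique is to decouple services by putting a queue between them, so producers and consumers can scale and fail independently. The following minimal sketch uses Python's standard library; the service names and the order-processing scenario are hypothetical.

```python
import queue
import threading

# Hypothetical order pipeline: the web tier enqueues work instead of
# calling the billing service directly, so either side can scale or
# fail independently of the other.
order_queue = queue.Queue()

def web_tier(order_id: int) -> None:
    # Producer: accept the request and return immediately.
    order_queue.put({"order_id": order_id})

def billing_worker(results: list) -> None:
    # Consumer: drain the queue at its own pace.
    while True:
        order = order_queue.get()
        if order is None:          # sentinel value: shut down cleanly
            break
        results.append(f"billed order {order['order_id']}")

results = []
worker = threading.Thread(target=billing_worker, args=(results,))
worker.start()
for oid in (1, 2, 3):
    web_tier(oid)
order_queue.put(None)              # tell the worker to stop
worker.join()
print(results)  # ['billed order 1', 'billed order 2', 'billed order 3']
```

In a real deployment the in-process queue would be replaced by a managed message broker, but the decoupling contract is the same: the producer never waits on the consumer.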
IaaS offers obvious scalability advantages: the cloud scales far more readily than traditional physical hardware. To get the most from this potential, decouple components as much as possible when designing systems and application architectures, use a service-oriented architecture, and place queues between services.

Prepare for failure at design time
High scalability has its costs. IaaS technologies and architectures can produce systems that are less robust, because replacing hardware with multiple software tiers significantly increases complexity and the number of points of failure. Redundancy and fault tolerance should therefore be primary design objectives.
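Redundancy can be as simple as keeping replicas of a resource and failing over when one is unreachable. Here is a minimal sketch; the replica names and the simulated failure model are hypothetical.

```python
class ReplicaUnavailable(Exception):
    """Raised when a single replica cannot serve the request."""

def read_with_failover(replicas, fetch):
    """Try each replica in turn; fail only if every replica is down.

    `fetch(replica)` is expected to raise ReplicaUnavailable on failure.
    """
    last_error = None
    for replica in replicas:
        try:
            return fetch(replica)
        except ReplicaUnavailable as err:
            last_error = err       # remember the failure, try the next one
    raise RuntimeError("all replicas failed") from last_error

# Simulated cluster: the primary is down, a secondary still answers.
DOWN = {"db-primary"}
def fetch(replica):
    if replica in DOWN:
        raise ReplicaUnavailable(replica)
    return f"value from {replica}"

value = read_with_failover(["db-primary", "db-secondary"], fetch)
print(value)  # value from db-secondary
```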
To ensure business continuity, go beyond establishing a backup strategy: make sure the system is ready to be restarted at any time. Automating server configuration and deployment steps is a must. Automation requires new development practices (DevOps, continuous integration, test-driven development, and so on) and new tools such as Chef, Puppet, or Ansible.

High Availability
For any enterprise, an IT resource outage can have a huge negative impact. When you migrate to the cloud, you lose control of the underlying infrastructure, and the service-level agreement (SLA) will not cover all the costs an outage incurs, so you should design with downtime and high availability in mind. Because creating a virtual instance is simple, deploying a cluster of servers or services has become popular. In this scenario, load balancing is a widely adopted clustering technique and an important feature to consider when choosing a cloud provider.
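As a small illustration of the load-balancing idea, the following sketch rotates requests across a cluster, skipping instances that a health check has marked down. The addresses are hypothetical, and a real balancer would probe health continuously rather than being told.

```python
import itertools

class RoundRobinBalancer:
    """Minimal round-robin balancer with manual health marking (sketch)."""

    def __init__(self, servers):
        self.servers = list(servers)
        self.healthy = set(self.servers)
        self._cycle = itertools.cycle(self.servers)

    def mark_down(self, server):
        # In practice a health check would call this automatically.
        self.healthy.discard(server)

    def next_server(self):
        # Skip unhealthy servers; give up after one full pass.
        for _ in range(len(self.servers)):
            server = next(self._cycle)
            if server in self.healthy:
                return server
        raise RuntimeError("no healthy servers available")

lb = RoundRobinBalancer(["10.0.0.1", "10.0.0.2", "10.0.0.3"])
lb.mark_down("10.0.0.2")
picks = [lb.next_server() for _ in range(4)]
print(picks)  # ['10.0.0.1', '10.0.0.3', '10.0.0.1', '10.0.0.3']
```

Traffic keeps flowing to the surviving instances, which is exactly the property that makes clusters attractive for availability.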
It is also important to use multiple availability zones, or at least different datacenters, to make the system as robust as possible. In April 2011, Amazon Web Services (AWS) suffered an outage that left customer systems down or degraded for four days. Dividing clusters across regions and datacenters improves resource resilience.

Performance
You must also consider the technology's technical limitations, primarily its lack of isolation and robustness. In any multitenant environment, the performance of one instance can affect neighboring instances. A rise in a neighbor's usage reduces the resources available to you, especially compute units and disk IOPS (I/O operations per second). Your architecture should be designed to handle this variation.
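One common way for an architecture to absorb this variation is to retry slow or failed operations with exponential backoff and jitter. A minimal sketch follows; the flaky dependency is simulated, and the delays are illustrative.

```python
import random
import time

def call_with_backoff(operation, attempts=4, base_delay=0.01):
    """Retry `operation` with exponential backoff and jitter, a common
    way to ride out transient slowdowns from noisy neighbors."""
    for attempt in range(attempts):
        try:
            return operation()
        except TimeoutError:
            if attempt == attempts - 1:
                raise                       # out of retries: surface the error
            # Backoff grows as 0.01s, 0.02s, 0.04s..., randomized to
            # avoid synchronized retry storms across clients.
            delay = base_delay * (2 ** attempt) * random.uniform(0.5, 1.0)
            time.sleep(delay)

# Simulated flaky dependency: fails twice, then succeeds.
calls = {"n": 0}
def flaky_disk_read():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("I/O starved by a noisy neighbor")
    return "data"

result = call_with_backoff(flaky_disk_read)
print(result)  # data
```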
Latency can also create bottlenecks, even between instances in the same datacenter. Cloud providers offer features to mitigate this (for example, AWS placement groups), but if your architecture spans different regions or datacenters, you should consider techniques such as caching.

Security
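A small illustration of the minimal-attack-surface idea: audit each environment's open ports against an explicit policy and flag anything extra. The policy values and port numbers here are hypothetical.

```python
# Hypothetical per-environment firewall policy: only the ports each
# environment actually needs are allowed to be open.
POLICY = {
    "production":  {22, 443},
    "staging":     {22, 443, 8080},
    "development": {22, 443, 8080, 5432},
}

def audit_open_ports(environment: str, open_ports: set) -> set:
    """Return the ports that are open but not allowed by policy;
    each one is an unnecessary attack vector to close."""
    return open_ports - POLICY[environment]

violations = audit_open_ports("production", {22, 443, 8080, 3306})
print(sorted(violations))  # [3306, 8080]
```

Running such a check per environment also reinforces isolation: a port acceptable on a development box is still a violation in production.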
Because of the openness of the public cloud, designing and maintaining a secure infrastructure should be a primary goal of any cloud deployment. Adopt widely recognized security practices: firewalls, minimizing the services each server runs to reduce attack vectors, keeping operating system versions up to date, key-based authentication, and so on. The challenge comes from the growing number of servers to maintain and from running multiple environments in the cloud: development, staging, and production. In this scenario, isolating and securing each environment is essential, because a vulnerability in a prototype server could expose the entire infrastructure through a shared key.

Monitoring
The ease of deploying new resources lets server counts grow rapidly, which raises new challenges, and monitoring tools are critical to managing such systems. First, they play a fundamental role in scheduled scaling and event-based auto-scaling. Second, they are part of the toolset needed to ensure an architecture's robustness, as tools like Netflix's Chaos Monkey demonstrate. Finally, they are important for detecting security vulnerabilities and investigating incidents, as several breaches have shown.
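As a small illustration of metric-driven auto-scaling, the following sketch turns CPU samples from a monitoring tool into a scaling decision. The thresholds and instance limits are illustrative, not taken from any specific provider.

```python
def scaling_decision(cpu_samples, current_count,
                     high=0.75, low=0.25, min_count=2, max_count=10):
    """Event-based auto-scaling sketch: add an instance when average CPU
    utilization crosses `high`, remove one when it drops below `low`,
    always staying within [min_count, max_count]."""
    avg = sum(cpu_samples) / len(cpu_samples)
    if avg > high and current_count < max_count:
        return current_count + 1        # scale out under load
    if avg < low and current_count > min_count:
        return current_count - 1        # scale in when idle
    return current_count                # otherwise hold steady

scale_out = scaling_decision([0.9, 0.8, 0.85], current_count=3)
scale_in = scaling_decision([0.1, 0.2, 0.15], current_count=3)
print(scale_out, scale_in)  # 4 2
```

Real auto-scalers add cooldown periods and hysteresis so the fleet does not oscillate, but the core loop is this simple comparison of a monitored metric against thresholds.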
Infrastructure as a service and cloud technology