1.1. What are Cloud Design Patterns (CDP)?
Cloud design patterns are a set of solutions and design ideas for solving common system design problems encountered when using cloud technologies.
To create CDP, we reviewed designs produced by many different cloud architects, categorized them by the problems they address, and then distilled generic design patterns from those specific solutions. Some of these problems can also be solved with traditional data center technology, but we still propose cloud-based solutions for them, mainly because the cloud-based approach costs less and is more flexible.
The cloud design pattern website referenced here collects the expertise and experience that many different architects, in particular the "Ninja of Three", have shared while building cloud solutions.
1.2. Description of CDP
Each cloud design pattern is described in terms of the following items:
- Pattern name/summary: the name, a summary, and a brief description
- Problem solved: A description of the specific problem that prompted the pattern, and the problems or challenges that could be solved by implementing the pattern.
- Solution in the cloud/pattern description: explains how the problem is solved in the cloud and why that solution or configuration qualifies as a pattern.
- Implementation: A description of how to use AWS to implement a pattern.
- Structure: Visualization of the pattern structure
- Benefits: describes the benefits of applying the pattern
- Notes: describes the trade-offs of applying the pattern and the issues to be aware of
- Other: comparisons with other patterns, use cases, and related information
1.3. Cloud Design Pattern Classification list
1. Basic patterns
Snapshot pattern (data backup): mainly addresses the basic problem of backing up disk data; EBS snapshots are stored in S3, giving automated, highly durable backups of effectively unlimited capacity (a minimal sketch follows this group).
Stamp pattern (server replication): mainly addresses the problem of creating new servers; an AMI is captured from a configured EC2 instance so that the required instances can be created automatically, without manually repeating complex configuration steps.
Scale Up pattern (dynamic change of server specifications): mainly addresses adjusting server specifications; CloudWatch monitors resource utilization, and when the instance is under- or over-provisioned, the instance type is changed through the management console.
On-demand Disk pattern (dynamic increase/decrease of disk capacity): mainly addresses insufficient disk capacity; a new, larger EBS volume is created from a snapshot of the original volume, enabling rapid expansion of disk space.
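As an illustration of the Snapshot pattern above, here is a minimal sketch using boto3. The region, volume ID, and description are hypothetical placeholders; a real backup job would typically run this on a schedule, for example from cron or a Lambda function.

```python
import boto3

# Assumed region and volume ID -- replace with your own values.
ec2 = boto3.client("ec2", region_name="ap-northeast-1")

# Create a point-in-time snapshot of the EBS volume; the snapshot data
# is stored in S3 by the EBS service, giving a durable backup.
resp = ec2.create_snapshot(
    VolumeId="vol-0123456789abcdef0",
    Description="nightly backup (Snapshot pattern)",
)
snapshot_id = resp["SnapshotId"]

# Block until the snapshot reaches the "completed" state.
ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snapshot_id])
print("snapshot ready:", snapshot_id)
```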
2. Patterns for improving availability
Multi-Server pattern (server redundancy): mainly addresses server failures; an ELB is bound to multiple EC2 instances (created with the Stamp pattern), providing quick and easy server redundancy.
Multi-Datacenter pattern (datacenter-level redundancy): mainly addresses data-center-level failures; it extends the Multi-Server idea by placing the redundant instances in different Availability Zones (AZs) for data-center-level redundancy.
Floating IP pattern (floating IP address): mainly addresses server replacement; an Elastic IP (EIP) is re-associated with a new instance, enabling a server to be swapped quickly (a minimal sketch follows this group).
Deep Health Check pattern (system health check): mainly addresses checking the running state of the whole system; the ELB health check function is used to verify the running state of the entire system, not just the front-end servers.
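A minimal sketch of the Floating IP pattern, assuming an Elastic IP allocation and a replacement instance already exist; both IDs below are hypothetical placeholders. Re-associating the EIP moves the public address from the failed server to the new one.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")

# Hypothetical IDs: the EIP allocation and the newly launched replacement instance.
EIP_ALLOCATION_ID = "eipalloc-0123456789abcdef0"
NEW_INSTANCE_ID = "i-0fedcba9876543210"

# Re-associate the Elastic IP with the new instance. AllowReassociation lets the
# address move even if it is still associated with the failed server.
ec2.associate_address(
    AllocationId=EIP_ALLOCATION_ID,
    InstanceId=NEW_INSTANCE_ID,
    AllowReassociation=True,
)
```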
3. Patterns for processing dynamic content
Scale Out pattern (dynamically increase the number of servers): mainly addresses changing traffic load; CloudWatch monitors metrics, an ELB distributes the load, and Auto Scaling automatically adjusts the number of servers (based on the Stamp pattern; a sketch of the CloudWatch and Auto Scaling wiring follows this group).
Clone Server pattern (clone a server): mainly addresses how a small initial system can respond to changes in load; new instances are created from an AMI that includes a content-synchronization mechanism and placed behind an ELB to scale out, without modifying the existing system (based on the Stamp pattern).
NFS Sharing pattern (use shared content): mainly addresses synchronizing and sharing data across multiple servers; a server for NFS holds the shared content, and the scaled-out servers reference that content over NFS so it stays consistent across servers (involves the Scale Out pattern).
NFS Replica pattern (replicate shared content): mainly addresses the performance of data sharing across multiple servers; the shared content is kept on the NFS server, but each newly launched EC2 instance copies it to its own EBS volume and reads it locally, preventing the NFS server from becoming a performance bottleneck or a single point of failure (derived from the NFS Sharing pattern).
State Sharing pattern (share state information): mainly addresses carrying over server state; state information is saved and updated in a shared data store instead of on the web/AP servers, so it survives servers being added or removed (involves the Scale Out pattern).
URL Rewriting pattern (offload static content): mainly addresses heavy access to a site's static content; part of the static content is stored in S3, and the URLs of that content in the pages are rewritten to point to S3 or CloudFront, achieving efficient delivery and reducing server load (involves the Web Storage pattern).
Rewrite Proxy pattern (proxy for URL rewriting): mainly addresses having to modify the system when offloading static content; a proxy server rewrites the access addresses, so S3 or CloudFront can serve the static content without changing the existing system (an optimization of the URL Rewriting pattern).
Cache Proxy pattern (provide a cache): mainly addresses servers that cannot keep up under high load; cache servers are added upstream of the web/AP servers to improve system performance.
Scheduled Scale Out pattern (increase or decrease the number of servers on a schedule): mainly addresses load surges that arrive too quickly for reactive scaling; scale-out is performed at preset times so capacity is already in place when the surge occurs (based on the Scale Out pattern).
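The sketch below illustrates the Scale Out pattern: a scaling policy plus a CloudWatch alarm that fires it. It assumes an Auto Scaling group named web-asg already exists behind the load balancer; the group name, region, and thresholds are hypothetical.

```python
import boto3

autoscaling = boto3.client("autoscaling", region_name="ap-northeast-1")
cloudwatch = boto3.client("cloudwatch", region_name="ap-northeast-1")

ASG_NAME = "web-asg"  # hypothetical Auto Scaling group behind the ELB

# Scaling policy: add two instances when triggered.
policy = autoscaling.put_scaling_policy(
    AutoScalingGroupName=ASG_NAME,
    PolicyName="scale-out-on-high-cpu",
    AdjustmentType="ChangeInCapacity",
    ScalingAdjustment=2,
    Cooldown=300,
)

# CloudWatch alarm: average CPU above 70% for two 5-minute periods fires the policy.
cloudwatch.put_metric_alarm(
    AlarmName=f"{ASG_NAME}-high-cpu",
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "AutoScalingGroupName", "Value": ASG_NAME}],
    Statistic="Average",
    Period=300,
    EvaluationPeriods=2,
    Threshold=70.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=[policy["PolicyARN"]],
)
```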
4. Patterns for processing static content
Web Storage pattern (use highly available networked storage): mainly addresses delivering large files; large static files are stored in S3 so that users download or share them directly from S3, reducing the load on the servers (can be combined with the URL Rewriting pattern).
Direct Hosting pattern (host directly from networked storage): mainly addresses sudden traffic surges and excessive server load; the static content is moved to S3 and S3 itself hosts and distributes the site, enabling fast delivery while minimizing server load.
Private Distribution pattern (distribute data to specific users): mainly addresses access control for content in networked storage; time-limited restricted URLs are issued for content objects, and access is controlled through those URLs (based on the Web Storage pattern; a sketch of issuing such a URL follows this group).
Cache Distribution pattern (place data closer to users): mainly addresses latency in long-distance data delivery; the CloudFront content delivery service caches content on edge servers around the world for efficient global delivery.
Rename Distribution pattern (distribute without update delay): mainly addresses synchronization between the origin and the cache servers; updated content is published under a new URL and references are rewritten to point to it, so the cache servers deliver the updated content as quickly as possible (based on the Cache Distribution pattern).
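One common way to realize the Private Distribution pattern is an S3 pre-signed URL, sketched below; the bucket name, object key, and expiry are hypothetical placeholders.

```python
import boto3

s3 = boto3.client("s3", region_name="ap-northeast-1")

# Issue a time-limited URL for a private object; only holders of the URL
# can fetch it, and it stops working after 10 minutes.
url = s3.generate_presigned_url(
    "get_object",
    Params={"Bucket": "example-private-content", "Key": "reports/member-only.pdf"},
    ExpiresIn=600,
)
print(url)  # hand this restricted URL to the specific user
```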
5. Patterns for uploading data
Write Proxy pattern (upload data to networked storage at high speed): mainly addresses poor performance when writing large files to networked storage; content is uploaded to S3 indirectly through an upload server, improving write performance.
Storage Index pattern (make networked storage searchable): mainly addresses the retrieval performance of networked storage; metadata about the objects in S3 is stored in a KVS with fast lookups, and the KVS results are used to fetch the data from S3, improving retrieval performance.
Direct Object Upload pattern (simplify the upload process): mainly addresses the heavy server load caused by large uploads; an HTML form is created that uploads the data directly to S3, reducing the load on the server side (as shown in the sketch below).
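A minimal sketch of the Direct Object Upload pattern using an S3 pre-signed POST, which supplies the URL and fields for the HTML upload form; the bucket name and key prefix are hypothetical.

```python
import boto3

s3 = boto3.client("s3", region_name="ap-northeast-1")

# Hypothetical upload bucket; ${filename} is replaced by S3 with the name
# of the file chosen in the browser form.
post = s3.generate_presigned_post(
    Bucket="example-upload-bucket",
    Key="uploads/${filename}",
    ExpiresIn=3600,
)

# post["url"] is the form's action attribute; post["fields"] become hidden
# <input> elements, so the browser uploads straight to S3 and the data never
# passes through the web server.
print(post["url"])
print(post["fields"])
```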
6. Relational database patterns
DB Replication pattern (replicate the online database): mainly addresses disaster recovery for stored data; the primary and standby databases are kept in different Availability Zones or regions, so that the business can continue in the event of a failure or disaster.
Read Replica pattern (distribute load through read replicas): mainly addresses performance bottlenecks caused by database access; several read replicas are created for the primary database and the application reads directly from the replicas, improving the database's read performance (a sketch follows this group).
In-memory DB Cache pattern (cache frequently used data): mainly addresses insufficient database read performance; a data caching service keeps frequently accessed data on cache servers, improving read performance and therefore overall system performance.
Sharding Write pattern (improve write efficiency): mainly addresses poor database write performance; writes are spread across multiple database servers with the same structure, increasing write throughput.
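The Read Replica pattern can be sketched with a single RDS API call, assuming a primary instance named appdb-primary already exists; all identifiers and the instance class below are hypothetical.

```python
import boto3

rds = boto3.client("rds", region_name="ap-northeast-1")

# Create a read replica of the primary DB instance.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="appdb-replica-1",
    SourceDBInstanceIdentifier="appdb-primary",
    DBInstanceClass="db.t3.medium",
)
# The application then sends SELECT traffic to the replica's endpoint
# and keeps all writes on the primary.
```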
7. Batch processing patterns
Queuing Chain pattern (loosely coupled systems): mainly addresses overly tight coupling between systems in a multi-job environment; work is handed from one system to the next as messages through Simple Queue Service (SQS), achieving loose coupling and high reliability (a minimal sketch follows this group).
Priority Queue pattern (change processing priority): mainly addresses priority management for large numbers of jobs; separate message queues are prepared for jobs of different priorities, so that jobs can be processed according to priority (based on the Queuing Chain pattern).
Job Observer pattern (monitor jobs and add/remove servers): mainly addresses on-demand scaling of batch servers; CloudWatch monitors the number of messages in the queue and, using that count as the metric, triggers Auto Scaling to automatically increase or decrease the number of batch servers (involves the Scale Out pattern).
Scheduled Autoscaling pattern (start and stop batch servers automatically): mainly addresses low utilization of batch servers; Auto Scaling is configured to start the required batch servers at a specified time and terminate the unneeded instances once batch processing finishes, raising utilization and lowering cost (based on the Stamp pattern; similar to the Scheduled Scale Out pattern).
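A minimal sketch of the Queuing Chain pattern with SQS: the producer enqueues a job and a batch worker polls, processes, and deletes it. The queue name, bucket, and message fields are hypothetical.

```python
import json
import boto3

sqs = boto3.client("sqs", region_name="ap-northeast-1")

# Hypothetical queue connecting an upstream producer to a downstream batch worker.
queue_url = sqs.create_queue(QueueName="image-resize-jobs")["QueueUrl"]

# Producer side: hand work to the next system as a message.
sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"bucket": "example-photos", "key": "raw/cat.jpg"}),
)

# Worker side: long-poll for a job, process it, then delete the message.
resp = sqs.receive_message(QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20)
for msg in resp.get("Messages", []):
    job = json.loads(msg["Body"])
    # ... process job["key"] here ...
    sqs.delete_message(QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"])
```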
8. Operation and maintenance patterns
Bootstrap pattern (fetch startup settings automatically): mainly addresses the cost of keeping machine images up to date; the AMI contains a bootstrap script that automatically fetches the required parameters or packages when the instance starts, so instances can be configured dynamically and the AMI does not need to be rebuilt for every patch or update (an optimization of the Stamp pattern; a user-data sketch follows this group).
Cloud DI pattern (externalize frequently changing settings): mainly addresses server initialization settings; values that change often are injected from outside, for example as tag information attached when the EC2 instance is launched, making initialization flexible (based on the Stamp and Bootstrap patterns).
Stack Deployment pattern (create templates to set up groups of servers): mainly addresses deploying large numbers of virtual machines; a cloud orchestration service is used to define a template for starting a group of servers, so the entire system can be created or deleted quickly in a single operation (based on the Stamp pattern).
Server Swapping pattern (transfer a server): mainly addresses fast recovery after a server failure; the EBS volume is detached from the failed server and attached to a newly launched instance, quickly restoring the state from before the failure (based on the Stamp pattern, often combined with the Floating IP pattern).
Monitoring Integration pattern (centralize monitoring tools): mainly addresses centralized monitoring of the system; monitoring data is obtained from the cloud monitoring service through its API and fed into your own monitoring system for centralized control.
Web Storage Archive pattern (archive large amounts of data): mainly addresses storing large numbers of logs and backup files; logs and backups are stored in S3, removing concerns about disk failures or running out of space.
Weighted Transition pattern (migrate using weighted round robin DNS): mainly addresses migrating a system to a new environment; the weighted round-robin feature of Route 53 shifts traffic gradually, allowing a smooth migration without changing the domain name or shutting the system down.
Hybrid Backup pattern (use the cloud for backup): mainly addresses the high cost of building a disaster recovery system; the on-premises system is connected to the AWS cloud and backups are stored there, creating a highly durable DR setup at low cost.
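Referring back to the Bootstrap pattern in this group, the sketch below launches an instance whose user-data script fetches its configuration at boot, so the AMI itself stays generic. The AMI ID, bucket, instance profile, and script contents are hypothetical; the profile is assumed to grant the S3 read access the script needs.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")

# Bootstrap script passed as user data: it runs on first boot and pulls the
# current configuration and packages, so the AMI does not need rebuilding.
USER_DATA = """#!/bin/bash
yum -y install httpd
aws s3 cp s3://example-bootstrap-bucket/app.conf /etc/httpd/conf.d/app.conf
systemctl enable --now httpd
"""

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # generic AMI built with the Stamp pattern
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,
    IamInstanceProfile={"Name": "bootstrap-instance-role"},  # allows the s3 cp above
)
```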
9. Network patterns
OnDemand NAT pattern (change network settings during maintenance windows): mainly addresses idle NAT server resources; a NAT instance is created in the VPC and started or terminated only when needed, improving the utilization of NAT server resources and reducing cost.
Backnet pattern (establish a management network): mainly addresses managing network access interfaces; two virtual network interfaces are used so that the public-facing interface and the management interface are separated, reducing the operational risk of network management.
Functional Firewall pattern (multi-tier access control): mainly addresses the heavy workload of configuring and maintaining many firewall rules; using security groups, servers are grouped by function and firewall rules are controlled centrally per group.
Operational Firewall pattern (control access by organization): mainly addresses access control for multiple organizations; separate security groups are created for different organizations, so firewall rules can be managed centrally per organization (a security-group sketch follows this group).
Multi Load Balancer pattern (set up multiple load balancers): mainly addresses serving different kinds of devices that access the same system; multiple virtual load balancers with different settings are attached, so access from each kind of device can be managed separately.
WAF Proxy pattern (use an expensive web application firewall efficiently): mainly addresses the difficulty of deploying a web application firewall; a WAF proxy server is placed between the ELB and the EC2 instances, so the WAF can be applied easily.
CloudHub pattern (set up a VPN hub): mainly addresses building VPN connections between multiple sites; the VPN functions provided by AWS are used to establish a VPN connection from each site and route them through the VPC, so the sites can communicate with each other easily.
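As a small illustration of the Operational Firewall pattern above, the sketch below creates a per-organization security group and grants that organization SSH access; the VPC ID, group name, and CIDR range are hypothetical.

```python
import boto3

ec2 = boto3.client("ec2", region_name="ap-northeast-1")

VPC_ID = "vpc-0123456789abcdef0"  # hypothetical VPC

# One security group per organization; here, SSH access for the operations team.
sg = ec2.create_security_group(
    GroupName="ops-team-access",
    Description="SSH access for the operations organization",
    VpcId=VPC_ID,
)

ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[{
        "IpProtocol": "tcp",
        "FromPort": 22,
        "ToPort": 22,
        "IpRanges": [{"CidrIp": "203.0.113.0/24"}],  # the organization's office range
    }],
)
# Attaching or detaching this group on instances turns the organization's
# access on or off centrally.
```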
"Cdp-Cloud Design Model", chapter 1th, concept and classification