Immutable infrastructure is a very forward-looking concept proposed by Chad Fowler in 2013. Its core idea is that any infrastructure instance becomes read-only once created; when it needs modification or upgrading, it is replaced with a new instance.
This model helps reduce the burden of configuration management and makes DevOps easier to practice. Based on Packer, Terraform, and Docker, we have been practicing and refining this concept, and now we share it with you.
The main elements of the current immutable infrastructure practice are:
- Consistency between production and development environments
- Building infrastructure with Terraform
Consistency between production and development environments
One of the most common problems in development, testing, and even deployment is bugs or failures caused by inconsistent system environments. This is most often seen with long-running servers that are upgraded slowly, where aging systems and software bring more problems and higher maintenance costs.
So we use Packer and Vagrant to unify the production and development environments: Packer builds operating system images that Vagrant runs as virtual development environments, so all developers share a unified, continuously updated development environment, which reduces problems and helps collaboration.
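As a hypothetical sketch of the developer side (the box name is illustrative, not the actual one we publish), a Vagrantfile pointing every developer at the same Packer-built box might look like this:

```ruby
# Hypothetical Vagrantfile: every developer boots the same Packer-built box,
# so the local environment matches what runs in production.
Vagrant.configure(2) do |config|
  config.vm.box = "zealic/centos-7"   # illustrative box name
  config.vm.provider "virtualbox" do |vb|
    vb.memory = 2048
  end
end
```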
Packer-built images can also be used on virtualization platforms such as KVM, Xen, and ESXi, as well as on cloud computing platforms such as AWS, which is what we use.
The base image we build is preloaded with most of the required software, such as Docker and Consul, and some commonly used Docker container images are pre-pulled as well.
This allows us to deploy on a unified infrastructure that differs little from the development environment.
I have extracted our image build scripts into the open source zealic/packer-boxes, which builds CentOS and Debian images, so you can build your own images from it.
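To illustrate the shape of such a build, here is a hypothetical, minimal Packer template in the JSON format of that era; it is not taken from the zealic/packer-boxes repository, and the ISO URL, checksum handling, and provisioning commands are all illustrative:

```json
{
  "builders": [{
    "type": "virtualbox-iso",
    "iso_url": "http://example.com/CentOS-7-x86_64-Minimal.iso",
    "iso_checksum_type": "none",
    "ssh_username": "vagrant",
    "shutdown_command": "sudo shutdown -P now"
  }],
  "provisioners": [{
    "type": "shell",
    "inline": [
      "curl -sSL https://get.docker.com/ | sh",
      "docker pull consul"
    ]
  }],
  "post-processors": ["vagrant"]
}
```

The shell provisioner is where the base software (Docker, Consul) is preinstalled and common container images are pre-pulled, so the resulting box already matches the production baseline.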
Building infrastructure with Terraform
As mentioned above, we use AWS as our infrastructure platform, and following the idea of immutable infrastructure, we want to be able to destroy and rebuild infrastructure quickly.
For this purpose, we use Terraform to fully manage the AWS infrastructure.
Before going further, I need to introduce some of our architecture.
First, we group the infrastructure, and each infrastructure group has its own VPC environment.
Each infrastructure group falls into one of two kinds according to its functional scenario: the ops-center group and the application infrastructure group. The ops-center group mainly carries infrastructure services; the Mesos masters, Docker Registry, continuous integration services, VPN access, and the management backend all run in the ops-center.
The application infrastructure group runs the primary business's microservices, reverse proxies, and Marathon nodes.
These groups are reflected in AWS as follows: each infrastructure group corresponds to a VPC, and related infrastructure groups are connected through VPC peering connections.
Based on this premise, we can split out a number of infrastructure groups. For example, if we have operations in China and Singapore, we can distinguish the following infrastructure groups:
- ops-cn
- prd-cn
- ops-sg
- prd-sg-master
- prd-sg-slave
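As a hypothetical sketch (resource names and CIDR blocks are illustrative, not our actual definitions), a pair of such groups connected by a peering connection might be defined in the Terraform syntax of the time like this:

```hcl
# Hypothetical sketch: one VPC per infrastructure group, linked by peering.
resource "aws_vpc" "ops_cn" {
  cidr_block = "10.0.0.0/16"
  tags { Name = "ops-cn" }
}

resource "aws_vpc" "prd_cn" {
  cidr_block = "10.1.0.0/16"
  tags { Name = "prd-cn" }
}

# Peering connection so prd-cn can reach shared services in ops-cn.
resource "aws_vpc_peering_connection" "ops_to_prd" {
  vpc_id      = "${aws_vpc.ops_cn.id}"
  peer_vpc_id = "${aws_vpc.prd_cn.id}"
  auto_accept = true
}
```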
Here are some details of how we manage infrastructure with Terraform.
We wrote a set of Thor scripts to manage multiple infrastructure groups. Each group is a folder containing that infrastructure group's Terraform definitions, and the definitions are version-controlled so that resources can be rolled back and rebuilt quickly.
When we need to make changes to the AWS infrastructure, we just modify the definitions and run the following command:
thor exec:terraform apply
Relationships between multiple infrastructure groups are expressed by passing variables through managed configuration files.
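A minimal, hypothetical sketch of what such a wrapper does; it is written in plain Ruby rather than the Thor gem so it stands alone, and all names, keys, and values are illustrative:

```ruby
# Hypothetical sketch of our Thor-style wrapper: build the environment and
# command line handed to Terraform for one infrastructure group, without
# executing anything.
def terraform_invocation(group_dir, config)
  # Credentials and region differ between the AWS China and international
  # partitions, so they come from the group's configuration file.
  env = {
    'AWS_ACCESS_KEY_ID'     => config['access_key'],
    'AWS_SECRET_ACCESS_KEY' => config['secret_key'],
    'AWS_DEFAULT_REGION'    => config['region'],
  }
  # Variables from the group's configuration file become -var flags.
  vars = (config['vars'] || {}).flat_map { |k, v| ['-var', "#{k}=#{v}"] }
  [env, ['terraform', 'apply', *vars], group_dir]
end

config = { 'access_key' => 'AKIA...', 'secret_key' => '...',
           'region'     => 'cn-north-1',
           'vars'       => { 'vpc_cidr' => '10.1.0.0/16' } }
env, cmd, dir = terraform_invocation('ops-cn', config)
puts cmd.join(' ')
# The real script would then run: Dir.chdir(dir) { system(env, *cmd) }
```

This keeps the per-group credentials and variables out of the Terraform definitions themselves, which is what lets the same definitions serve several groups.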
The following resources are currently managed through Terraform:
- VPC
- VPC subnets
- Route tables
- Security groups
- Route53 records
- ELB
- S3
- Internet/NAT gateways
In addition, for the various functional servers in the ops-center mentioned above, we assign tags when defining aws_instance resources. When our management program receives a server-ready event, it determines the server's role from the tag and runs the corresponding Ansible playbook against it to complete automated deployment.
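A hypothetical sketch of such a tagged instance definition (AMI ID, names, and the tag key are illustrative):

```hcl
# Hypothetical sketch: the "Role" tag tells the management program which
# Ansible playbook to run once the instance reports ready.
resource "aws_instance" "registry" {
  ami           = "ami-00000000"   # base image built with Packer
  instance_type = "m3.medium"
  subnet_id     = "${aws_subnet.ops.id}"

  tags {
    Name = "ops-cn-registry-01"
    Role = "docker-registry"
  }
}
```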
Sometimes we have existing resources that need to be brought under Terraform's management. In that case we use terraforming to incorporate existing VPC, S3, or EC2 resources into Terraform; Terraform has also promised to add an import feature for importing existing resources in the future.
The application infrastructure group deploys only Marathon, which manages the Docker container cluster running our business services and related components. Because our business services are built and published as Docker containers, only the application services' configuration needs to be managed; here we use Consul and confd for dynamic configuration management.
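To illustrate the confd side, here is a hypothetical template resource definition; the paths, Consul key, and reload command are all illustrative:

```toml
# Hypothetical confd template resource (/etc/confd/conf.d/app.toml):
# re-render the app's config file whenever the Consul key changes.
[template]
src        = "app.conf.tmpl"
dest       = "/etc/app/app.conf"
keys       = ["/config/app"]
reload_cmd = "systemctl reload app"
```

confd watches the keys in Consul and rewrites the destination file from the template, so containers pick up configuration changes without being rebuilt.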
In this way, our infrastructure is created and run in a single, immutable mode: the system is simpler and more reliable, and we can roll back quickly.
By practicing this approach, we gain the following benefits: the ability to quickly rebuild and destroy infrastructure groups, and to deploy multiple infrastructure groups for disaster tolerance, gray releases, and rapid upgrades.
Q&A
Q: What pitfalls have you run into in this practice? Please give one or two specific examples.
A: When building with Packer inside China, builds often fail for well-known network reasons; this can be worked around by other means. In addition, Terraform does not yet fully support all AWS resources, such as CloudFront and Route53 geo DNS, which still need to be managed manually, but Terraform will add this support in the future.
Q: Is each VPC group a fully independent service deployment? Can you say which parts of the business layer this technology addresses?
A: Each VPC group provides a complete implementation of the application's features, which also means we can use prd-sg-master and prd-sg-slave for disaster tolerance. The business layer mainly provides HTTP services and the internally dependent microservices.
Q: Why did you choose Consul? Many similar setups use etcd.
A: Consul is friendlier toward agent nodes. In addition, Vagrant, Consul, and Terraform are all developed by HashiCorp, whose documentation and technology stack are comprehensive and well worth applying in practice.
Q: If I use Kubernetes and have no need for big data frameworks like Hadoop or Spark, is Mesos still useful? In other words, for a pure container platform, what are the benefits of combining Mesos and Kubernetes, and what are the usage scenarios?
A: Kubernetes and Mesos are both container management and scheduling frameworks; Mesos's advantage is that it is more scalable.
Q: What should I do with PHP, that is, services that can dynamically load code?
A: For services that dynamically load code, we recommend handling the code source in a microservices manner.
Q: Is Vagrant using NAT mode or bridged mode on the host? What are the advantages of Packer compared with the vagrant package command?
A: Vagrant uses NAT mode on the host, which reduces the chance of DNS problems. Packer can start building the system from an ISO image, which is purer than vagrant package; in fact, most of the boxes used by Vagrant are packaged with Packer.
Q: I introduced a similar Packer + Terraform toolchain in a previous sharing session. When you use Terraform to manage AWS resources, you mentioned an additional management script (thor exec:terraform apply). Why not run Terraform's commands directly? What is the purpose of using Thor, and which of your repos uses Thor so I can refer to it?
A: We need to pass many variables and hard-coded parameters when running Terraform, and we use both the AWS China region and the AWS international regions, which are separate and have different AWS Access Keys and Secret Keys. The purpose of the Thor script is to pass this content to Terraform through configuration files and environment variables, so that it gets the correct parameters and targets the correct execution environment.
Q: Does your continuous integration use Jenkins plus plugins? What branch management strategy does the continuously integrated code adopt?
A: Yes, continuous integration uses Jenkins and plugins; the branching model we use is a variant of Git Flow based on the pull model.
Q: For AWS alone, Amazon's own CloudFormation should support the infrastructure more comprehensively. If you don't need cross-platform capability, what were the reasons for choosing Terraform?
A: CloudFormation's design is not friendly; its JSON-based syntax is very obscure and hard to maintain, not nearly as clear as Terraform's.
Q: How is cross-host networking solved? Is it through a combination of Kubernetes or Mesos with Docker functionality?
A: We use Weave; we are not using Kubernetes.
===========================
The above content was organized from the group sharing on the evening of February 23, 2016. Sharer:
shouting, architect at Hyku, responsible for the company's infrastructure and operations architecture, proficient in the AWS technology stack, a lover of Docker and open source, and interested in running. DockOne organizes targeted technology sharing every week; students who are interested are welcome to add:
Liyingjiesz to join the group, and you can leave us a message with topics you want to hear or share.