Foreword
2018 was not only the year microservice architecture took off, but also the year containers and
Kubernetes earned widespread praise. Under the guidance of Kubernetes,
container-centric deployment of microservices has become a de facto standard, continuing to accelerate the adoption of the microservice architecture model and to work its magic.
Enterprises, especially Internet companies, need to iterate products quickly and precisely in order to respond to front-end user demand, shorten the cycle from requirement to delivery, and seize market opportunities. The microservice model meets this requirement very well: only the changed services are released, which minimizes the risk of each iteration and enables agile development and deployment.
When the microservice model is adopted, the overall business is split vertically into multiple small units, each an independently developed, independently deployed, and independently scaled microservice. This flexibility fits the agile development model very well, but it also brings inherent complexity and difficulty to development and operations.
For developers, since the microservice application as a whole runs as a distributed system, they must choose an appropriate inter-service communication protocol, handle potential network partitions and transient failures, and build additional infrastructure such as service discovery and a configuration center.
For operations staff, the portability of containers must be leveraged to continuously integrate and deploy microservices to different cluster environments. This demands very broad skills: familiarity with containers and Kubernetes, the ability to write Linux shell operations scripts, proficiency with a continuous integration/deployment tool (for example GitLab CI or Jenkins), and so on.
In summary, how to build a mature, stable, and highly automated
DevOps pipeline suited to the characteristics of microservices becomes the next problem to solve.
Aims
With minimal learning cost, build a mature, stable, and highly automated DevOps pipeline suited to the characteristics of microservices, and continuously integrate/deploy microservices to Kubernetes on demand.
Tools-minimal learning cost
kubernetes + gitlab + shell
Solution-Vision
1. Continuous Integration-CI
Deploy gitlab-runner on the master node of the Kubernetes cluster to act as a client of the GitLab server. When code is pushed or merged to the designated branch, gitlab-runner automatically pulls the code from GitLab and uses the computing power of the master host to execute the prepared DevOps CI pipeline: compile the code, run unit and integration tests, package the microservice into a container image, and finally push it to the enterprise image registry. This is the continuous integration process; the artifact delivered at this stage is the image.
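The CI stage described above can be sketched as a shell script. This is only an illustration: the registry address, service name, tag scheme, and build command are assumptions, and the docker commands are echoed rather than executed so the flow stays readable.

```shell
#!/bin/sh
# CI sketch: compile + test, build an image, push it to the registry.
# REGISTRY, SERVICE, and the build command are placeholders -- substitute
# your own. The docker commands are echoed (dry run) for illustration.
REGISTRY="registry.example.com/devops"   # hypothetical registry
SERVICE="demo-service"                   # hypothetical microservice
TAG="$(date +%Y%m%d)"                    # e.g. tag images by date

echo "mvn -q package"                              # compile, run unit/integration tests
echo "docker build -t $REGISTRY/$SERVICE:$TAG ."   # containerize the microservice
echo "docker push $REGISTRY/$SERVICE:$TAG"         # deliver the image
```

Removing the `echo` wrappers and substituting real names turns the sketch into an executable CI job body.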
2. Continuous deployment-CD
Deploy gitlab-runner on the master node of the Kubernetes cluster to act as a client of the GitLab server. When a new image version is delivered by the continuous integration stage, gitlab-runner pulls the latest image from the enterprise image registry and uses the computing power of the master host to execute the prepared
DevOps CD pipeline: synchronize the service configuration to the configuration center (a Kubernetes ConfigMap) and perform a rolling update of the image version in the Kubernetes cluster.
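The CD stage can be sketched the same way. The namespace, ConfigMap name, deployment name, and config file below are all assumptions; the kubectl commands are echoed (dry run) for illustration.

```shell
#!/bin/sh
# CD sketch: sync config to the configuration center (a ConfigMap) and
# roll the Deployment to the new image. All names are hypothetical;
# the kubectl commands are echoed (dry run) for illustration.
NAMESPACE="prod"
SERVICE="demo-service"
REGISTRY="registry.example.com/devops"
TAG="20180101"

# Recreate the ConfigMap from the service's config file.
echo "kubectl -n $NAMESPACE create configmap $SERVICE-config \
  --from-file=application.yml --dry-run -o yaml | kubectl apply -f -"

# Rolling update: point the container at the new image version.
echo "kubectl -n $NAMESPACE set image deployment/$SERVICE $SERVICE=$REGISTRY/$SERVICE:$TAG"
```

`kubectl set image` triggers Kubernetes' built-in rolling update, so no pipeline-side orchestration is needed for the rollout itself.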
Deployment environment
Installation directory: /root/gitrunner
Working directory: /home/devops/gitrunner
mkdir -p /root/gitrunner && mkdir -p /home/devops/gitrunner;
1. Deploy gitlab-runner
Deploy gitlab-runner on the master node of the Kubernetes cluster with the following commands:
> wget -O /root/gitrunner/gitlab-runner https://gitlab-runner-downloads.s3.amazonaws.com/latest/binaries/gitlab-runner-linux-amd64;
> cd /root/gitrunner;
> chmod +x gitlab-runner;
> # Note: It is recommended to use the root user for installation to avoid unnecessary permissions issues.
> ./gitlab-runner install --user=root --working-directory=/home/devops/gitrunner;
> ./gitlab-runner start;
2. Register gitlab-runner
GitLab supports registering two types of runners:
1. Specific Runners
A dedicated worker bound to a specific project; it does not accept jobs from other projects.
2. Shared Runners
A worker attached to the GitLab server that can be shared by all projects.
Each type has its advantages. Registering a dedicated runner for every project is cumbersome and redundant, while a shared runner is easy to set up; but one worker can only run one job at a time, so if several projects dispatch jobs to the same worker simultaneously there will be contention and waiting. In practice it is still worth registering enough workers, as long as the schedule is not delayed.
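One practical mitigation for the one-job-at-a-time contention is gitlab-runner's global `concurrent` setting in its config.toml (by default /etc/gitlab-runner/config.toml when installed as root), which caps how many jobs the runner process executes in parallel. An illustrative excerpt (the runner name is hypothetical):

```toml
# /etc/gitlab-runner/config.toml (illustrative excerpt)
concurrent = 4   # allow up to 4 jobs to run in parallel on this host

[[runners]]
  name = "k8s-master-runner"   # hypothetical runner name
  executor = "shell"
```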
Register runner
Register a runner for each of the development, pre-production, and production environments, with the tags build, staging, and prod respectively.
Note: when the DevOps pipeline is set up later, workers are dispatched according to these tags.
Steps
1. Obtain the project URL and registration token from: Settings => CI / CD => Runners settings
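Registration can then be done non-interactively. The URL and token below are placeholders; copy the real values from the Runners settings page. The command is echoed first (dry run) so it can be reviewed; remove the leading echo to actually register:

```shell
#!/bin/sh
# Registration sketch: GITLAB_URL and REG_TOKEN are placeholders -- copy
# the real values from Settings => CI / CD => Runners settings. The
# command is echoed (dry run); drop the leading echo to execute it.
GITLAB_URL="https://gitlab.example.com/"
REG_TOKEN="xxxxxxxxxxxx"
TAG="build"   # one of: build | staging | prod

echo ./gitlab-runner register \
  --non-interactive \
  --url "$GITLAB_URL" \
  --registration-token "$REG_TOKEN" \
  --executor shell \
  --tag-list "$TAG" \
  --description "$TAG-runner"
```

Repeat once per environment, changing only the `--tag-list` value, so that pipeline jobs can be dispatched to the right worker by tag.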
Build a DevOps pipeline-PipeLines
The solution above only describes the vision, i.e. the desired end result; it says very little about how to land a real pipeline. This is exactly the difficulty of DevOps: everyone knows the general process involves continuous integration and continuous deployment, and can recite it fluently, but actually landing it is another matter.
Likewise, adhering to the microservice idea of divide and conquer, we split the pipeline into two parts: create and update. That is, first create a mainline version, then iterate minor versions on top of it to continuously add new features. This effective separation makes the problem far less abstract, much like the CQRS pattern in domain-driven design, which treats reads and writes differently and thereby greatly reduces impedance. It also suits iterative product development: for example, split the requirements into 3 phases, each corresponding to a major version, then iterate each phase's requirements in minor versions. Finish one phase, seal one phase; the phases are isolated from each other, complement each other, and are easy to trace.
Having sorted out the context of the entire pipeline, we now need to think through some practical issues, such as:
How to script the continuous integration/deployment of microservices, that is, how to express the infrastructure as code?
How to dynamically parse git's current change log so that only the changed microservices are released?
How to preserve the failure site and retry the pipeline at minimal cost?
How to manually trigger on-demand release, scaling, and rollback of microservices without modifying the pipeline script?
How to accommodate newly added microservices?
How to quickly debug the entire pipeline script?
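The second question, releasing only what changed, can be sketched in shell. Assuming each microservice lives in its own top-level directory (a layout assumption), the services touched by a change are simply the distinct first path components of the changed files; in a real pipeline the file list would come from `git diff --name-only` between the previous and current commits:

```shell
#!/bin/sh
# List the distinct services touched by a change, given the changed file
# paths (one per line), assuming each microservice lives in its own
# top-level directory.
services_from_paths() {
    cut -d/ -f1 | sort -u
}

# In a real pipeline the input would come from, e.g.:
#   git diff --name-only "$CI_COMMIT_BEFORE_SHA" "$CI_COMMIT_SHA"
# Here a sample file list stands in (service names are hypothetical).
printf 'svc-order/main.go\nsvc-user/Dockerfile\nsvc-order/go.mod\n' \
  | services_from_paths
# prints:
#   svc-order
#   svc-user
```

The resulting list can then drive a per-service build-and-deploy loop, so an unchanged service is never rebuilt or redeployed.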
Only by addressing the issues above can the result be considered a mature, usable,
enterprise-grade CI/CD pipeline with high automation, stability, speed, and fault tolerance. In an Internet company, several versions may be released to different environments in a single day, and continuous deployment must not be stalled by an oversight in the design. Once the pipeline is put into use, it should be closed for modification and open only for extension.