Due to the nature of containers and Docker technology, developers can easily share their software and dependencies with IT operations and production environments, eliminating the typical "it works on my machine" excuse. Containers resolve application conflicts between different environments.
Containers and Docker bring developers and IT operations closer together, allowing them to collaborate more effectively. Adopting container workflows provides the DevOps continuity that many customers have sought but previously had to implement through more complex configurations of release and build pipelines. Containers simplify the build/test/deploy pipeline in DevOps.
With Docker containers, developers own what is inside the container (the applications and services, plus their dependencies on frameworks and components) and how the containers and services behave together as an application composed of a collection of services. The interdependencies of the multiple containers are defined in a deployment manifest, typically a docker-compose.yml file. Meanwhile, the IT operations team (IT professionals and management) can focus on the management, infrastructure, scalability, and monitoring of the production environment, ultimately ensuring that the application is delivered correctly to end users, without having to know the contents of the various containers. Hence the name "container": it works like a real-world shipping container. The owner of the container's contents does not need to worry about how the container will be transported; the shipping company moves the container from its origin to its destination without knowing or caring about what is inside. In a similar way, developers can create and own the contents of a Docker container without having to concern themselves with the "transport" mechanisms.
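As a minimal sketch of what such a deployment manifest might look like, the following hypothetical docker-compose.yml declares two services and the dependency between them (the service names, image tags, ports, and connection string are invented for illustration):

```yaml
version: "3.8"
services:
  webapp:                            # front-end service (hypothetical name)
    image: myorg/webapp:latest       # image built from the app's Dockerfile
    ports:
      - "8080:80"                    # expose the container's port 80 on the host
    depends_on:
      - ordering-api                 # declare the interdependency between containers
  ordering-api:                      # back-end service (hypothetical name)
    image: myorg/ordering-api:latest
    environment:
      - ConnectionString=Server=sqldata;Database=Orders   # config injected at run time
```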
In the left-hand pillar of Figure 2-1, developers write and run code locally in Docker containers by using Docker for Windows or Mac. They define the code's operating environment with a Dockerfile that specifies the base operating system it runs on, plus the steps that build the code into a Docker image. Developers define how the one or more images interrelate by using the docker-compose.yml deployment manifest described above. After completing local development, they push the application code and the Docker configuration files to the code repository of their choice (that is, a Git repository).
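A minimal Dockerfile sketch follows, assuming an ASP.NET Core service; the base image tags, paths, and the WebApp.dll name are illustrative, not prescriptive:

```dockerfile
# Build stage: use an SDK image to compile the code into publishable output
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app/publish

# Final image: the base operating system/runtime the container runs on
FROM mcr.microsoft.com/dotnet/aspnet:8.0
WORKDIR /app
COPY --from=build /app/publish .
ENTRYPOINT ["dotnet", "WebApp.dll"]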
The DevOps pillar defines the build-continuous integration (CI) pipeline by using the Dockerfile provided in the code repository. The CI system pulls the base container images from the selected Docker registry and builds the custom Docker images for the application. The images are then validated and pushed to the Docker registry to be used for deployments to multiple environments.
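Stripped of any particular CI product, those pipeline steps might reduce to commands like the following sketch (the registry host, image name, and version tag are assumptions):

```bash
# Pull the base image from the chosen registry (docker build would also
# pull it automatically; shown explicitly here for clarity)
docker pull mcr.microsoft.com/dotnet/aspnet:8.0

# Build the custom application image from the repository's Dockerfile
docker build -t myregistry.azurecr.io/webapp:1.0.42 .

# Validate the image, e.g., with a simple smoke test inside the container
docker run --rm myregistry.azurecr.io/webapp:1.0.42 dotnet --info

# Push the validated image to the registry for deployment
docker push myregistry.azurecr.io/webapp:1.0.42
```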
In the right-hand pillar, the operations team manages the applications and infrastructure deployed in production while monitoring the environment and applications, so that they can provide feedback and insights to the development team about how the applications might be improved. Container applications typically run in production by using a container orchestrator such as Kubernetes, in which case Helm charts, rather than docker-compose files, are usually used to configure the deployment units.
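As a hedged sketch of that last point, deploying with a Helm chart instead of a compose file might look like this (the release name, chart path, namespace, and values are all hypothetical):

```bash
# Install or upgrade a release of the application's chart in production;
# image.tag pins the image version produced by CI, replicaCount scales it
helm upgrade --install webapp ./charts/webapp \
  --namespace production \
  --set image.tag=1.0.42 \
  --set replicaCount=3
```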
The two teams collaborate through a foundational platform (Docker containers) that acts as a contract separating their concerns, while greatly improving their collaboration throughout the application lifecycle. Developers own the container contents, their operating environment, and the container interdependencies, while operations takes the built images together with the manifest and runs them in its orchestration system.
Challenges in the application lifecycle when using Docker
There are several reasons for the expected increase in the number of containerized applications in the coming years; one of them is the creation of applications based on microservices.
Over the past 15 years, the use of web services has been the foundation of thousands of applications; in a few years, we will probably see the same situation with microservice-based applications running on Docker containers.
It is also worth mentioning that Docker containers can be used for monolithic applications as well, while still gaining most of Docker's benefits. Containers are not just for microservices.
Using Docker containerization and microservices introduces new challenges into an organization's development process, so a solid strategy is needed to keep the many containers and microservices running on production systems. Over time, enterprise applications will run with hundreds or thousands of containers/instances in production.
These challenges create new requirements when using DevOps tools, so you must define new processes in your DevOps activities and find answers to questions such as the following:
Which tools can be used for development, CI/CD, management, and operations?
How can the company manage errors in containers that are running in production?
How can software in production be changed with minimal downtime?
How can the production system be scaled and monitored?
How can container testing and deployment be included in the release pipeline?
How can open-source tools and platforms for containers be used in Microsoft Azure?
If you can answer all of these questions, you will be well prepared to move your applications (existing or new) to Docker containers.
Introduction to a generic end-to-end Docker application lifecycle workflow
Everything starts with the developer, who begins writing code in the inner-loop workflow. The inner-loop stage is where developers define everything that happens before pushing code to the code repository (for example, a source control system such as Git). Once committed, the repository triggers continuous integration (CI) and the rest of the workflow.
The inner loop basically consists of typical steps such as "code," "run," "test," and "debug," plus any additional steps required before running the application locally; this is the process by which developers run and test the application as Docker containers. The inner-loop workflow is described in more detail in later chapters.
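In terms of concrete commands, a developer's inner loop might look something like the following sketch (the image name and port mapping are illustrative):

```bash
# Build the application image locally from the Dockerfile
docker build -t webapp:dev .

# Run it as a local container, mapping the service port for testing and debugging
docker run --rm -p 8080:80 webapp:dev

# Or run the whole multi-container application defined in docker-compose.yml
docker compose up
```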
Looking at the end-to-end workflow, the DevOps workflow is more than just a technology or a toolset: it is a mindset that requires cultural evolution. It is the combination of people, processes, and the right tools that makes the application lifecycle faster and more predictable. Enterprises that adopt a containerized workflow typically restructure their organizations to reflect the people and processes that match that workflow.
Practicing DevOps can help teams respond faster to competitive pressures by replacing error-prone manual processes with automation, which improves traceability and yields repeatable workflows. Organizations can also manage environments more efficiently and achieve cost savings by combining on-premises and cloud resources with tightly integrated tools.
When implementing your DevOps workflow for Docker applications, you'll see that Docker technologies are present at almost every stage of the workflow: from the development box where you work on the inner loop (code, run, debug), through the build-test-CI phase, and finally when deploying those containers to the staging and production environments.
Improved quality practices help to identify defects early in the development cycle, which reduces the cost of fixing them. By including the environment and its dependencies in the image and adopting the philosophy of deploying the same image across multiple environments, you promote a discipline of extracting environment-specific configuration out of the image, making deployments more reliable.
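One way this "same image, different configuration" discipline might look in practice is sketched below; the environment-variable names, connection strings, and image tag are assumptions for illustration:

```bash
# The same immutable image is promoted across environments; only the
# configuration injected at run time changes
docker run -e ASPNETCORE_ENVIRONMENT=Staging \
           -e ConnectionString="Server=staging-sql;Database=Orders" \
           myregistry.azurecr.io/webapp:1.0.42

docker run -e ASPNETCORE_ENVIRONMENT=Production \
           -e ConnectionString="Server=prod-sql;Database=Orders" \
           myregistry.azurecr.io/webapp:1.0.42
```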
The rich data obtained through effective instrumentation (monitoring and diagnostics) provides insight into performance issues and user behavior, which in turn guides future priorities and investments.
DevOps should be viewed as a journey, not a destination. It should be implemented incrementally through appropriately scoped projects from which you can demonstrate success, learn, and evolve.
Advantages of containerized application DevOps
The following are some of the most important benefits provided by a solid DevOps workflow:
Deliver higher-quality software, faster and with better compliance.
Drive continuous improvement and adjustment earlier and more economically.
Increase transparency and collaboration among the stakeholders involved in delivering and operating software.
Control costs and use provisioned resources more effectively while minimizing security risks.
Plug and play well with many of your existing DevOps investments, including investments in open source.