8 best practices for building container applications

Containers are a major trend in application development for both public and private clouds. But what are containers, why have they become a popular deployment mechanism, and how do you modify your application to optimize it for a containerized environment?

What is a container?

The history of container technology begins with SELinux in 2000 and Solaris zones in 2005. Today, containers are built from several kernel features, including SELinux, Linux namespaces, and control groups (cgroups), which together provide isolation of user processes, network space, and file system space.

Why are they so popular?

The recent large-scale adoption of container technology is largely due to the development of standards designed to make containers easier to use, such as the Docker image format and distribution model. These standards are built around immutable images: the image is the starting point of a container's runtime environment, and immutability guarantees that the image the development team releases and tests is exactly the image deployed in production.

The lightweight isolation that containers provide offers a better abstraction for an application component. Components running in containers will not interfere with other applications that may run directly on a virtual machine: they avoid competing for system resources, and they will not block write requests to the same file unless they share a persistent volume. Containers also standardize log and metric collection practices, and they support higher user density on physical and virtual machines. All of these advantages lead to lower deployment costs.

How should we build a container-based application?

Making an application run in a container does not demand much. Base images are available for all of the major Linux distributions, and any program that can run on a virtual machine can run in one. But the broader trend in containerized applications is to follow these best practices:

1. Instances are disposable

Any single instance of your application should not need to be kept running carefully. If one of the systems running your containers crashes, you should be able to migrate to other available systems and simply create new containers there.

2. Retry rather than crash

When one service in your application depends on another, it should not crash when that other service is unavailable. For example, if your API service is starting up and cannot connect to the database, design it to keep retrying the connection instead of failing and refusing to start. While the database connection is down, the API can return a 503 status code to tell clients the service is temporarily unavailable. Applications should follow this practice anyway, but in a containerized environment with disposable instances, the need for it becomes much more obvious.
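The retry half of this practice can be sketched in a few lines of Python. This is a minimal illustration, not a prescribed implementation: `connect` stands in for whatever client-library connect call your service actually uses, and the backoff parameters are arbitrary.

```python
import time

def connect_with_retry(connect, initial_delay=1.0, max_delay=30.0):
    """Keep retrying a connection at startup instead of crashing.

    `connect` is any callable that raises ConnectionError on failure
    (standing in for a real database client's connect function) and
    returns a connection object on success.
    """
    delay = initial_delay
    while True:
        try:
            return connect()
        except ConnectionError as exc:
            print(f"database unavailable ({exc}); retrying in {delay:.0f}s")
            time.sleep(delay)
            delay = min(delay * 2, max_delay)  # exponential backoff, capped
```

While this loop is running, the service's request handlers can keep answering with 503 until the connection succeeds, so the container never has to exit just because a dependency started late.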

3. Persistent data is special

A container is started from a shared image and uses a copy-on-write (COW) file system. If the container's process writes files, the written content exists only as long as the container does: when the container is deleted, its copy-on-write layer is deleted with it. To keep data beyond the container's lifetime, provide the container with a mounted file system directory so writes persist outside it. This requires extra configuration and consumes extra physical storage, but a clear abstraction defining which storage is persistent is precisely what makes the idea of disposable instances work. This abstraction layer also lets the container orchestration engine handle the complex work of attaching and detaching persistent volumes so that containers can use them.
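Inside the application, the abstraction usually shows up as a configurable path rather than a hardcoded one. A minimal sketch, assuming the platform mounts a persistent volume and tells the app where via a `DATA_DIR` environment variable (both the variable name and the `/data` default are illustrative conventions, not a standard):

```python
import os

def save_state(name, content, data_dir=None):
    """Write state under the mounted persistent volume so it outlives
    this container. The location comes from configuration (the assumed
    DATA_DIR environment variable), never from a path baked into the image.
    """
    data_dir = data_dir or os.environ.get("DATA_DIR", "/data")
    os.makedirs(data_dir, exist_ok=True)
    path = os.path.join(data_dir, name)
    with open(path, "w") as f:
        f.write(content)
    return path
```

Because the location is injected, the same image can run with local storage in development and an orchestrator-attached volume in production.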

4. Use stdout instead of log files

You may now be wondering: if persistent data is special, what do you do with log files? The approach adopted by container runtime environments and orchestration engines is that processes should write to stdout/stderr, and the infrastructure takes care of archiving and maintaining the container logs.

5. Sensitive information (and other configuration) is also special

You should never hard-code sensitive information such as passwords, keys, and certificates into your images. These values usually differ depending on whether your application is talking to development, test, or production services. Most developers do not have access to production secrets, so if secrets are baked into an image, a new image layer must be created to override the development values. At that point you are no longer using the same image your development team created and your QA team tested, and you have lost the benefit of immutable images. Instead, these values should be stored in environment variables or files that are injected when the container starts.
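Reading injected configuration at startup can be this simple. The sketch below assumes the platform sets the variable before the process starts; the `DB_PASSWORD` name is only an example, and failing fast with a clear error is a design choice, not a requirement:

```python
import os

def get_secret(name):
    """Read a secret from the environment at container start instead of
    baking it into the image; fail fast with a clear error if missing."""
    try:
        return os.environ[name]
    except KeyError:
        raise RuntimeError(f"required secret {name!r} is not set") from None

# Example (hypothetical variable injected by the platform):
# db_password = get_secret("DB_PASSWORD")
```

Failing fast at startup, combined with the retry practice above, means a misconfigured container surfaces its problem immediately instead of limping along with a hardcoded fallback.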

6. Do not assume co-location of services

In an orchestrated container environment, you want the orchestrator to send your containers to whichever node is best suited for them. "Best suited" can mean many things: it may be based on which node currently has the most free space, the quality of service the container requires, whether the container needs a persistent volume, and so on. This can easily mean that your front-end, API, and database containers end up on different nodes. Although it is possible to force an API container onto every node (see Kubernetes DaemonSets), that approach should be reserved for containers that perform node-monitoring tasks themselves.

7. Plan for redundancy/high availability

Even if your load is not heavy enough to require a high-availability configuration, you should not write your service in a way that prevents running multiple copies of it. Supporting multiple copies lets you use rolling deployments to easily move load from one node to another, or to upgrade the service from one version to the next without taking it offline.

8. Implement readiness and liveness checks

It is normal for an application to need some startup time before it can respond to requests; for example, an API server may need to warm an in-memory data cache. Container orchestration engines need a way to check whether your container is ready to serve user requests. Providing a readiness check for new containers allows a rolling deployment to keep the old container running until it is no longer needed, which prevents downtime. Similarly, a liveness check is a way for the orchestration engine to continuously verify that the container is healthy and working. The application's creator decides what "healthy" or "live" means for the container; if a container fails its liveness check, the orchestration engine kills it and creates a new container to replace it.
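These checks are typically exposed as HTTP endpoints that the orchestrator polls. A minimal sketch using only the Python standard library; the `/healthz` and `/readyz` paths follow a common Kubernetes convention but are configurable, and a real service would flip `ready` after its actual startup work finishes:

```python
import http.server
import threading

class HealthHandler(http.server.BaseHTTPRequestHandler):
    ready = False  # set to True once startup work (e.g. cache warm-up) completes

    def do_GET(self):
        if self.path == "/healthz":      # liveness: the process is responsive
            self.send_response(200)
        elif self.path == "/readyz":     # readiness: able to serve real traffic
            self.send_response(200 if HealthHandler.ready else 503)
        else:
            self.send_response(404)
        self.end_headers()

    def log_message(self, *args):
        pass  # keep the example quiet; real logs go to stdout as in practice 4

def start_health_server(port=8080):
    """Serve health checks on a background thread alongside the main app."""
    server = http.server.HTTPServer(("127.0.0.1", port), HealthHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

With this split, a rolling deployment can wait for `/readyz` to return 200 before shifting traffic to a new container, while `/healthz` failures tell the orchestrator to replace a wedged one.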

Want to find more information?

I will be attending the Grace Hopper Celebration of Women in Computing in August, where you can catch my talk: Containerization of applications: what, why, and how. Not going to GHC this year? You can learn about containers, orchestration, and applications at the OpenShift and Kubernetes project sites.

From: https://linux.cn/article-7896-1.html

Address: http://www.linuxprobe.com/eight-apply-practice.html

