Docker Compose YML File Details

Source: Internet
Author: User
Tags: docker, swarm

Compose and Docker compatibility: the Compose file format has three major versions: 1, 2.x, and 3.x. The mainstream versions today are 3.x, which require Docker 1.13.0 or later. The options below follow the structure of the file itself.

version                 # version of the Compose file format
services                # defines all services; each first-level key under services is a service name
  build                 # path to the build context, or an object with context, dockerfile, and args sub-keys
    context             # path to the directory containing the Dockerfile (the build context)
    dockerfile          # name of the Dockerfile inside context (default: Dockerfile)
    args                # build arguments (equivalent to docker container build --build-arg)
    cache_from          # v3.2+: list of images used as cache sources (equivalent to --cache-from)
    labels              # v3.3+: set image metadata (equivalent to --label)
    shm_size            # v3.5+: size of the /dev/shm partition of the build container (equivalent to --shm-size)
  command               # override the default command run after the container starts; supports both shell form and list ([]) form
  configs               # usage not covered by the original author
  cgroup_parent         # usage not covered by the original author
  container_name        # set the container name (equivalent to docker run --name)
  credential_spec       # usage not covered by the original author
  deploy                # v3+: configuration related to deploying and running the service; used by docker stack deploy, which relies on docker swarm
    endpoint_mode       # v3.3+: how the service is exposed
      vip               # Docker assigns the service a virtual IP (VIP) that clients use to reach it
      dnsrr             # DNS round robin: Docker creates DNS entries so that a query for the service name returns a list of container IPs, and the client connects directly to one of them
    labels              # labels for the service; set on the service itself, not on its containers
    mode                # deployment mode
      global            # exactly one container per cluster node
      replicated        # a configurable number of containers across the cluster (default)
    placement           # usage not covered by the original author
    replicas            # number of container replicas when mode is replicated
    resources           # resource constraints
      limits            # upper bounds on container resources
        cpus: "0.5"     # the container may use at most 50% of one CPU
        memory: 50M     # the container may use at most 50 MB of memory
      reservations      # resources reserved for the container (guaranteed to be available)
        cpus: "0.2"     # reserve 20% of one CPU for the container
        memory: 20M     # reserve 20 MB of memory for the container
    restart_policy      # container restart policy; replaces the restart option
      condition         # when to restart (accepts three values)
        none            # never try to restart
        on-failure      # restart only when the application inside the container fails
        any             # always try to restart (default)
      delay             # interval between restart attempts (default: 0s)
      max_attempts      # number of restart attempts (default: retry indefinitely)
      window            # how long to wait before deciding whether a restart succeeded, i.e. how many seconds after startup the container is checked (default: 0s)
    update_config       # rolling-update configuration
      parallelism       # number of containers updated at a time
      delay             # interval between updating groups of containers
      failure_action    # what to do when an update fails
        continue        # keep updating
        rollback        # roll back the update
        pause           # pause the update (default)
      monitor           # how long to monitor each updated task for failure (units: ns|us|ms|s|m|h; default: 0s)
      max_failure_ratio # failure rate to tolerate during the update (default: 0)
      order             # v3.4+: order of operations during the update
        stop-first      # old tasks are stopped before new tasks start (default)
        start-first     # new tasks start first, briefly overlapping the running tasks
    rollback_config     # v3.7+: policy for rolling back a failed update; same sub-options as update_config
      parallelism       # number of containers rolled back at a time; if 0, all containers roll back at once
      delay             # interval between rolling back groups of containers (default: 0)
      failure_action    # what to do when a rollback fails
        continue        # keep rolling back
        pause           # pause the rollback
      monitor           # how long to monitor each rolled-back task for failure (units: ns|us|ms|s|m|h; default: 0)
      max_failure_ratio # failure rate to tolerate during the rollback (default: 0)
      order             # order of operations during the rollback
        stop-first      # the old task stops before the new one starts (default)
        start-first     # the new task starts first, briefly overlapping the running tasks
    # Note: the following options are supported by docker-compose up and docker-compose run,
    # but are ignored by docker stack deploy: security_opt, container_name, devices, tmpfs,
    # stop_signal, links, network_mode, external_links, restart, build, userns_mode, sysctls
  devices               # list of device mappings (equivalent to docker run --device)
  depends_on            # define the container startup order; resolves dependencies between containers (ignored when deploying with swarm in v3)

Example: docker-compose up starts services in dependency order. Below, the db and redis services start before web; running docker-compose up web also starts redis and db, because the dependency is declared in the configuration file:

  version: '3'
  services:
    web:
      build: .
      depends_on:
        - db
        - redis
    redis:
      image: redis
    db:
      image: postgres

  dns                   # set DNS addresses (equivalent to docker run --dns)
  dns_search            # set DNS search domains (equivalent to docker run --dns-search)
  tmpfs                 # v2+: mount a directory into the container as a temporary file system (equivalent to docker run --tmpfs; ignored when deploying with swarm)
  entrypoint            # override the container's default entrypoint (equivalent to docker run --entrypoint)
  env_file              # read variables from the given file(s) and set them as environment variables in the container; a single value or a list of files; if a variable name appears in several files, the later file overrides the earlier one, and values in environment override values from env_file; file format: RACK_ENV=development
  environment           # set environment variables; values here override those from env_file (equivalent to docker run --env)
  expose                # expose ports without publishing them to the host, similar to the Dockerfile EXPOSE instruction
  external_links        # link to containers not defined in this docker-compose.yml or not managed by Compose (e.g. started with docker run; ignored when deploying with swarm in v3)
  extra_hosts           # add host records to /etc/hosts inside the container (equivalent to docker run --add-host)
  healthcheck           # v2.1+: define a container health check, similar to the Dockerfile HEALTHCHECK instruction
    test                # the command that checks container health; must be a string or a list whose first item is NONE, CMD, or CMD-SHELL; a string is equivalent to prefixing it with CMD-SHELL
      NONE              # disable the health check
      CMD               # test: ["CMD", "curl", "-f", "http://localhost"]
      CMD-SHELL         # test: ["CMD-SHELL", "curl -f http://localhost || exit 1"]  or  test: curl -f http://localhost || exit 1
    interval: 1m30s     # time between checks
    timeout: 10s        # timeout for running the check command
    retries: 3          # number of retries
    start_period: 40s   # v3.4+: grace period after container startup
    disable: true       # true or false; whether to disable the health check; same as test: ["NONE"]
  image                 # specify the Docker image; can be a remote repository image or a local image
  init                  # v3.7+: true or false; run an init process inside the container that forwards signals to the processes and reaps them
  isolation             # container isolation technology; on Linux only default is supported
  labels                # add metadata to containers with Docker labels, similar to the Dockerfile LABEL instruction
  links                 # link to containers of other services; a legacy Docker option, replaced by user-defined networks, and may be removed (ignored when deploying with swarm)
  logging               # configure the container's logging service
    driver              # logging driver (default: json-file; equivalent to docker run --log-driver)
    options             # driver-specific log options (equivalent to docker run --log-opt)
      max-size          # maximum size of a single log file; when this size is reached, the log is rotated
      max-file          # number of rotated log files to keep
  network_mode          # network mode (equivalent to docker run --net; ignored when deploying with swarm)
  networks              # attach the container to the given networks (equivalent to docker network connect); networks can be both a top-level key of the Compose file and a second-level key under a service
    aliases             # containers on the same network can reach a service container by its service name or by an alias
    ipv4_address        # static IPv4 address
    ipv6_address        # static IPv6 address

Example:

  version: '3.7'
  services:
    test:
      image: nginx:1.14-alpine
      container_name: mynginx
      command: ifconfig
      networks:
        app_net:                       # use the app_net network defined under the top-level networks key
          ipv4_address: 172.16.238.10
  networks:
    app_net:
      driver: bridge
      ipam:
        driver: default
        config:
          - subnet: 172.16.238.0/24

  pid: 'host'           # share the host's process (PID) namespace
  ports                 # map ports between host and container; two syntax forms are supported
    short syntax:
      - "3000"                          # expose container port 3000; Docker maps it to a random unused host port
      - "3000-3005"                     # expose container ports 3000-3005; Docker maps them to random unused host ports
      - "8000:8000"                     # map container port 8000 to host port 8000
      - "9090-9091:8080-8081"
      - "127.0.0.1:8001:8001"
      - "127.0.0.1:5000-5010:5000-5010"
      - "6060:6060/udp"                 # specify the protocol
    long syntax (new in v3.2):
      ports:
        - target: 80                    # container port
          published: 8080               # host port
          protocol: tcp                 # protocol
          mode: host                    # host publishes the port on each node; ingress load-balances the port in swarm mode
  secrets               # usage not covered by the original author
  security_opt          # override the default labeling scheme for each container (ignored when deploying with swarm)
  stop_grace_period     # how long to wait for the container to exit after SIGTERM before killing it (default: 10s)
  stop_signal           # signal used to stop the container (default: SIGTERM, equivalent to kill PID; SIGKILL is equivalent to kill -9 PID; ignored when deploying with swarm)
  sysctls               # set kernel parameters inside the container (ignored when deploying with swarm)
  ulimits               # set the container's ulimits
  userns_mode           # disable the user namespace for this service if the Docker daemon is configured with user namespaces (ignored when deploying with swarm)
  volumes               # define volume mappings between container and host; like networks, it can be both a second-level key under a service and a top-level key of the Compose file; define the top-level key if a volume is used across services, then reference it inside services

Short syntax example:

  volumes:
    - /var/lib/mysql                  # map /var/lib/mysql in the container to a random directory on the host
    - /opt/data:/var/lib/mysql        # map /var/lib/mysql in the container to /opt/data on the host
    - ./cache:/tmp/cache              # map /tmp/cache in the container to ./cache, relative to the Compose file's location on the host
    - ~/configs:/etc/configs/:ro      # map a host directory into the container read-only
    - datavolume:/var/lib/mysql       # datavolume is a named volume defined under the top-level volumes key

Long syntax example (new in v3.2):

  version: "3.2"
  services:
    web:
      image: nginx:alpine
      ports:
        - "80:80"
      volumes:
        - type: volume                # mount type; must be bind, volume, or tmpfs
          source: mydata              # for type volume, only the volume name is given; the host path is managed by Docker
          target: /data               # container directory
          volume:                     # additional options; the key must match the type value
            nocopy: true              # volume-only option: do not copy data from the container when the volume is created
        - type: bind                  # for type bind, both the host path and the container path must be specified
          source: ./static
          target: /opt/app/static
          read_only: true             # make the mount read-only
  volumes:
    mydata:                           # defined as a top-level key so any service can use it

  restart               # container restart policy (ignored when deploying with swarm; use deploy.restart_policy instead)
    no                  # do not automatically restart the container (default)
    always              # restart the container in all cases
    on-failure          # restart the container when it exits with an error

Other options: domainname, hostname, ipc, mac_address, privileged, read_only, shm_size, stdin_open, tty, user, and working_dir each accept a single value, the same as the corresponding docker run parameters.

Accepted duration values: 2.5s, 10s, 1m30s, 2h32m, 5h34m56s (units: us, ms, s, m, h).
Accepted size values: 2b, 1024kb, 2048k, 300m, 1gb (units: b, k, m, g or kb, mb, gb).

networks                # top-level key: define networks
  driver                # network driver; in most cases bridge on a single host or overlay in a swarm
    bridge              # by default Docker connects networks on a single host with the bridge driver
    overlay             # the overlay driver creates a network named across multiple swarm nodes
    host                # share the host's network namespace (equivalent to docker run --net=host)
    none                # equivalent to docker run --net=none
  driver_opts           # v3.2+: options passed to the driver; driver-dependent
  attachable            # used with the overlay driver; if true, standalone containers (not only services) can attach to the network, and an attached standalone container can communicate with services and standalone containers connected to the network from other Docker daemons
  ipam                  # custom IPAM configuration; an object whose attributes are all optional
    driver              # IPAM driver (default)
    config              # a subnet in CIDR format that defines the network segment
  external              # if set to true, docker-compose up will not try to create the network, and raises an error if it does not exist
  name                  # v3.5+: set the name for this network

File format example:

  version: "3"
  services:
    redis:
      image: redis:alpine
      ports:
        - "6379"
      networks:
        - frontend
      deploy:
        replicas: 2
        update_config:
          parallelism: 2
          delay: 10s
        restart_policy:
          condition: on-failure
    db:
      image: postgres:9.4
      volumes:
        - db-data:/var/lib/postgresql/data
      networks:
        - backend
      deploy:
        placement:
          constraints: [node.role == manager]
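To tie the build sub-options together, here is a minimal sketch of a service that builds its own image. The service name, paths, image tags, and build-argument names (app, ./docker, Dockerfile.app, APP_VERSION, myorg/app) are illustrative assumptions, not taken from the article:

```yaml
version: "3.5"                     # labels needs v3.3+, shm_size needs v3.5+
services:
  app:                             # hypothetical service name
    build:
      context: ./docker            # directory containing the Dockerfile
      dockerfile: Dockerfile.app   # Dockerfile name inside the context
      args:                        # passed like docker build --build-arg
        APP_VERSION: "1.0"
      cache_from:
        - myorg/app:latest         # v3.2+: image used as a layer-cache source
      labels:
        com.example.team: "ops"    # v3.3+: metadata set on the built image
      shm_size: "256mb"            # v3.5+: /dev/shm size during the build
```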
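The deploy resources limits and reservations can be combined with a restart policy in one service. This is a sketch with assumed values (the worker service name and the concrete numbers are illustrative):

```yaml
version: "3"
services:
  worker:                   # hypothetical service name
    image: redis:alpine
    deploy:
      replicas: 2
      resources:
        limits:
          cpus: "0.5"       # at most 50% of one CPU
          memory: 50M       # at most 50 MB of memory
        reservations:
          cpus: "0.2"       # 20% of one CPU guaranteed
          memory: 20M       # 20 MB of memory guaranteed
      restart_policy:
        condition: on-failure
        delay: 5s           # wait 5s between restart attempts
        max_attempts: 3     # give up after 3 attempts
        window: 30s         # a restart counts as successful after 30s
```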
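A complete healthcheck block combining the options above might look like this; the nginx image and the probed URL are assumptions for illustration:

```yaml
version: "3.4"              # start_period needs v3.4+
services:
  web:
    image: nginx:alpine
    healthcheck:
      test: ["CMD-SHELL", "curl -f http://localhost/ || exit 1"]
      interval: 1m30s       # time between checks
      timeout: 10s          # give up on a single check after 10s
      retries: 3            # unhealthy after 3 consecutive failures
      start_period: 40s     # grace period after container startup
```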
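A sketch of env_file and environment interacting; the file names and values are assumptions. Because environment overrides env_file, RACK_ENV ends up as production even if ./web.env also defines it, and a variable defined in both files takes its value from the later file:

```yaml
version: "3"
services:
  web:
    image: nginx:alpine
    env_file:
      - ./common.env        # e.g. contains RACK_ENV=development
      - ./web.env           # later files override earlier ones
    environment:
      RACK_ENV: production  # environment overrides env_file
```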
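The logging options (json-file driver with rotation via max-size and max-file) can be sketched as follows; the sizes are assumed values:

```yaml
version: "3"
services:
  app:
    image: nginx:alpine
    logging:
      driver: json-file     # equivalent to docker run --log-driver
      options:              # equivalent to docker run --log-opt
        max-size: "10m"     # rotate when a log file reaches 10 MB
        max-file: "3"       # keep at most 3 rotated files
```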
