The application of Docker in PHP project development environment

This article first appeared in my column on PHP and entrepreneurship; please keep this notice if you reproduce it.

Environment deployment is a problem every team has to face. As systems grow larger, they depend on more and more services; one of our current projects, for example, uses:

    • Web server: Nginx
    • Web program: PHP + Node
    • Database: MySQL
    • Search Engine: ElasticSearch
    • Queue service: Gearman
    • Caching service: Redis + Memcache
    • Front-end build tools: npm + Bower + Gulp
    • PHP CLI Tools: Composer + PHPUnit

As a result, deploying the team's development environment exposes a number of issues:

  • The project depends on many services, so the cost of building a complete local environment keeps rising, and junior developers often cannot resolve environment deployment problems on their own
  • Differences in service versions and operating systems can cause bugs that only show up in the production environment
  • When a project introduces a new service, everyone's environment has to be reconfigured

    Issue 1 can be addressed with a virtual-machine-based tool such as Vagrant, with team members sharing a single development environment image. Issue 2 can be addressed by introducing a multi-version PHP manager such as phpbrew. But neither of them solves issue 3 well, because virtual machine images have no concept of versioning: when several people maintain the same image, configuration is easily omitted or ends up conflicting, and transferring a large image is inconvenient.

    The advent of Docker offers a better answer to the problems above. Although I am personally still cautious about large-scale use of Docker in production, for testing and development alone, Docker's containerization concept is close to a silver bullet for environment deployment.

    The following walks through the evolution of a Docker-based development environment for a PHP project. It assumes your operating system is Linux, that you have Docker installed, and that you already know what Docker is and the basics of the Docker command line; if you lack this background, it is recommended that you read up on it first.

    Hello World

    First, start with a PHP Hello World inside a Docker container. We prepare a PHP file index.php like this:

    <?php
    echo "PHP in Docker";

    Then create a text file named Dockerfile in the same directory, with the following contents:

    # Build from the official PHP image
    FROM       php
    # Copy index.php into the /var/www directory inside the container
    ADD        index.php /var/www/
    # Expose port 8080 to the outside
    EXPOSE     8080
    # Set the container's default working directory to /var/www
    WORKDIR    /var/www
    # The default command executed when the container runs
    ENTRYPOINT ["php", "-S", "0.0.0.0:8080"]

    Build the image:

    docker build -t allovince/php-helloworld .

    Run the container:

    docker run -d -p 8080:8080 allovince/php-helloworld

    View results:

    curl localhost:8080
    PHP in Docker

    We have now created a Docker container that serves a PHP program. Any machine with Docker installed can run this container and get exactly the same result, and anyone with index.php and the Dockerfile can build the same container, completely eliminating the problems caused by differing environments and versions.

    Now imagine the program getting more complex: how should we extend this? The most direct idea is to keep installing additional services inside the container and run them all together, so our Dockerfile would likely evolve into something like this:

    FROM       php
    ADD        index.php /var/www/
    # Install more services
    RUN        apt-get install -y \
               mysql-server \
               nginx \
               php5-fpm \
               php5-mysql
    # Write a startup script that starts all the services
    ENTRYPOINT ["/opt/bin/php-nginx-mysql-start.sh"]

    Although we have built a development environment with Docker, doesn't this feel a bit familiar? Indeed, this approach is essentially the same as building a virtual machine image, and it has several problems:

      • If you need to verify different versions of a service, for example testing PHP 5.3/5.4/5.5/5.6, you have to prepare four images even though there is only a small difference between them.
      • When you start a new project, the services installed inside the container keep growing, and eventually it becomes impossible to tell which service belongs to which project.

    Using single-process containers

    The pattern above of putting all services into one container has a vivid unofficial name: Fat Container. The opposite is the pattern of splitting services into separate containers. From Docker's design it is clear that an image build specifies a single command to run when the container starts, so Docker naturally suits containers that run just one service, and that is also the officially recommended approach.

    The first question that splitting services raises is: where does the base image for each of our services come from? There are two options:

    Option one: extend everything from a standard OS image. For example, the following are Nginx and MySQL images:

    FROM ubuntu:14.04
    RUN  apt-get update -y && apt-get install -y nginx

    FROM ubuntu:14.04
    RUN  apt-get update -y && apt-get install -y mysql-server

    The advantage of this approach is that all services share a unified base image and can be extended and modified in the same way; for example, with Ubuntu as the base, services are installed with the apt-get command.

    The problem is that a large number of services then have to be maintained by ourselves; in particular, when a specific version of a service is needed it often has to be compiled from source, which makes debugging and maintenance very costly.

    Option two: inherit directly from the official images on Docker Hub. Below are the same Nginx and MySQL images:

    FROM nginx:1.9.0
    FROM mysql:5.6

    Docker Hub can be seen as the GitHub of Docker: the Docker team has prepared a large number of images for popular services, and there are also many images submitted by third parties. You can even build a private Docker Hub in a short time based on the docker-registry project.
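
    As a minimal sketch of the private-registry idea (the registry image and the example tag below are my own illustration, not part of the original article), a local registry can be started as a container and images pushed to it:

    docker run -d -p 5000:5000 --name registry registry:2
    docker tag allovince/php-helloworld localhost:5000/php-helloworld
    docker push localhost:5000/php-helloworld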

    Building on a service's official image offers a very rich choice and lets you switch service versions at very little cost. The downside of this approach is that official images are built in a variety of ways, so you need to understand the original image's Dockerfile before extending it.

    To keep the services flexible, we chose the latter approach for building our images.

    In order to split the services, our directory structure now changes to the following:

    ~/dockerfiles
    ├── mysql
    │   └── Dockerfile
    ├── nginx
    │   ├── Dockerfile
    │   ├── nginx.conf
    │   └── sites-enabled
    │       ├── default.conf
    │       └── evaengine.conf
    ├── php
    │   ├── Dockerfile
    │   ├── composer.phar
    │   ├── php-fpm.conf
    │   ├── php.ini
    │   └── redis.tgz
    └── redis
        └── Dockerfile

    A separate folder is created for each service, and a Dockerfile is placed in each service's folder.

    MySQL Container

    MySQL inherits from the official MySQL 5.6 image. No additional processing is needed because the official image already covers the common requirements, so the Dockerfile has only one line:

    FROM mysql:5.6

    Run the following under the project root directory:

    docker build -t eva/mysql ./mysql

    The base image is downloaded automatically and the image is built; here we name it eva/mysql.

    Because all database data is discarded when the container stops, we mount a local directory to persist the MySQL data so that it does not have to be imported every time. The official image stores the database in /var/lib/mysql by default, and an administrator password must be set through an environment variable when the container runs, so the container can be started with the following command:

    docker run -p 3306:3306 -v ~/opt/data/mysql:/var/lib/mysql -e MYSQL_ROOT_PASSWORD=123456 -it eva/mysql

    With the command above, we bind local port 3306 to the container's port 3306, persist the container's database to the local ~/opt/data/mysql directory, and set the MySQL root password to 123456.
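
    To check that the container works, you can connect from the host with a local mysql client (assuming one is installed; this check is my own addition, not from the original article):

    mysql -h 127.0.0.1 -P 3306 -u root -p123456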

    Nginx Container

    The nginx directory contains the Nginx configuration file nginx.conf and project configuration files such as default.conf, prepared in advance. The Dockerfile content is:

    FROM nginx:1.9
    ADD  nginx.conf         /etc/nginx/nginx.conf
    ADD  sites-enabled/*    /etc/nginx/conf.d/
    RUN  mkdir /opt/htdocs && mkdir /opt/log && mkdir /opt/log/nginx
    RUN  chown -R www-data.www-data /opt/htdocs /opt/log
    VOLUME ["/opt"]

    The official Nginx 1.9 image is based on Debian Jessie. We first copy the prepared configuration files into place, replacing the configuration inside the image. Then, following personal convention, /opt/htdocs is used as the web server document root and /opt/log/nginx as the Nginx log directory.

    Again, build the image:

    docker build -t eva/nginx ./nginx

    and run the container

    docker run -p 80:80 -v ~/opt:/opt -it eva/nginx

    Note that we bind local port 80 to the container's port 80 and mount the local ~/opt directory to the container's /opt directory, so project source code placed under ~/opt can be served through the container.
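
    As a quick sanity check (assuming the site configuration in sites-enabled points its document root at /opt/htdocs; the test file below is just an illustration, not from the original article):

    mkdir -p ~/opt/htdocs
    echo 'hello from nginx' > ~/opt/htdocs/index.html
    curl localhost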

    PHP container

    The PHP container is the most complicated one, because in a real project we usually need to install some PHP extensions and use some command-line tools; here we take the Redis extension and Composer as examples. First download the extension and the other files into the php directory ahead of time, so that the build copies them from the local disk instead of downloading them over the network each time, which greatly speeds up image builds:

    wget https://getcomposer.org/composer.phar -O php/composer.phar
    wget https://pecl.php.net/get/redis-2.2.7.tgz -O php/redis.tgz

    The php directory also holds the PHP configuration files php.ini and php-fpm.conf, prepared in advance. For the base image we chose php:5.6-fpm, which is also based on Debian Jessie. The official image is quite considerate: it ships with a docker-php-ext-install command for quickly installing common extensions such as GD and PDO. All supported extension names can be listed by running docker-php-ext-install inside the container.
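
    For example, running the helper with no arguments in a throwaway container prints the list of installable extensions (the command below is my own illustration of that, not from the original article):

    docker run --rm -it php:5.6-fpm docker-php-ext-install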

    Take a look at the Dockerfile:

    FROM php:5.6-fpm
    ADD  php.ini        /usr/local/etc/php/php.ini
    ADD  php-fpm.conf   /usr/local/etc/php-fpm.conf
    COPY redis.tgz      /home/redis.tgz
    RUN  docker-php-ext-install gd \
         && docker-php-ext-install pdo_mysql \
         && pecl install /home/redis.tgz \
         && echo "extension=redis.so" > /usr/local/etc/php/conf.d/redis.ini
    ADD  composer.phar  /usr/local/bin/composer
    RUN  chmod 755 /usr/local/bin/composer
    WORKDIR /opt
    # UID assumed here: map www-data to the host user's UID (commonly 1000) so mounted files stay writable
    RUN  usermod -u 1000 www-data
    VOLUME ["/opt"]

    A few things happen during the build process:

  • Copy the PHP and PHP-FPM configuration files to the appropriate directories
  • Copy the Redis extension source package to /home
  • Install the GD and pdo_mysql extensions via docker-php-ext-install
  • Install the Redis extension via pecl
  • Copy composer.phar into the image as a global command
  • Following personal convention, again set /opt as the working directory

    One detail worth noting: when copying the tar package we use the COPY instruction rather than ADD, because ADD automatically extracts tar files.

    Now we can finally build and run:

    docker build -t eva/php ./php
    docker run -p 9000:9000 -v ~/opt:/opt -it eva/php

    In most cases Nginx and PHP-FPM read the same copy of the project source code, so here we also mount the local ~/opt directory and bind port 9000.

    Implementing the PHP CLI

    Besides running PHP-FPM, the PHP container should also serve as the project's PHP CLI, so that the PHP version, extensions, and configuration files stay consistent.

    For example, Composer can be run inside the container with the following command:

    docker run -v $(pwd -P):/opt -it eva/php composer install --dev -vvv

    Running this command in any directory dynamically mounts the current directory as the container's default working directory and runs Composer there, which is why the PHP container sets /opt as its working directory.

    Similarly, command-line tools such as PHPUnit, npm, and gulp can be run inside containers.
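
    One convenient pattern (my own sketch, not from the original article) is to wrap these invocations in shell aliases so they feel like local commands; --rm removes the throwaway container after each run, and the phpunit alias assumes PHPUnit was installed by Composer into vendor/bin:

    alias composer='docker run --rm -v $(pwd -P):/opt -it eva/php composer'
    alias phpunit='docker run --rm -v $(pwd -P):/opt -it eva/php php vendor/bin/phpunit'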

    Redis container

    For demonstration purposes, Redis is used only as a cache with no persistence requirements, so its Dockerfile has only one line:

    FROM redis:3.0
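
    It can be built and run in the same way as the other services (the commands below simply follow the pattern used above):

    docker build -t eva/redis ./redis
    docker run -p 6379:6379 -it eva/redis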

    Connecting the containers

    We have now split the services that originally ran in one container into multiple containers, each running a single service, which means the containers need to communicate with each other. There are two ways for Docker containers to communicate. One is to bind a container port to a local port and communicate through that port, as above. The other is Docker's linking feature; in a development environment, communicating through links is more flexible and also avoids some of the problems caused by port conflicts. For example, Nginx and PHP can be linked like this:

    docker run -p 9000:9000 -v ~/opt:/opt --name php -it eva/php
    docker run -p 80:80 -v ~/opt:/opt -it --link php:php eva/nginx
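
    With the link in place, the php container is reachable from inside the nginx container under the alias php (Docker adds an /etc/hosts entry for it), which is typically what the fastcgi_pass directive in the Nginx site configuration would point at. A quick check, assuming the containers above are running (this check is my own addition):

    docker run --rm --link php:php eva/nginx cat /etc/hosts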

    In a typical PHP project, Nginx needs to link to PHP, and PHP needs to link to MySQL, Redis, and so on. To make managing the links between containers easier, Docker officially recommends using docker-compose for this.

    Install it with a single command:

    pip install -U docker-compose

    A docker-compose.yml file is then prepared in the root directory of the Docker project, with the following content:

    nginx:
        build: ./nginx
        ports:
          - "80:80"
        links:
          - "php"
        volumes:
          - ~/opt:/opt
    php:
        build: ./php
        ports:
          - "9000:9000"
        links:
          - "mysql"
          - "redis"
        volumes:
          - ~/opt:/opt
    mysql:
        build: ./mysql
        ports:
          - "3306:3306"
        volumes:
          - ~/opt/data/mysql:/var/lib/mysql
        environment:
          MYSQL_ROOT_PASSWORD: 123456
    redis:
        build: ./redis
        ports:
          - "6379:6379"

    Then run docker-compose up to complete all of the port bindings, mounts, and link operations.
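
    A few everyday docker-compose commands are also worth knowing (standard docker-compose usage, not specific to this project):

    docker-compose up -d     # start all containers in the background
    docker-compose logs      # view the aggregated service logs
    docker-compose stop      # stop the containers without removing them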

    A more complex example

    The above is the typical evolution of a standard PHP project's Docker environment. Real projects generally integrate more and richer services, but the same basic steps still apply. For example, EvaEngine/Dockerfiles is a Docker-based development environment for running my open source project EvaEngine, which relies on the queue service Gearman, the cache services Memcache and Redis, the front-end build tools gulp and Bower, the back-end CLI tools Composer and PHPUnit, and more. The concrete implementation can be read in that repository.

    In our team's experience, setting up the environment used to take about a day; after switching to Docker it takes only a dozen or so commands, and the time has dropped to under 3 hours (most of which is spent waiting for downloads). Most importantly, environments built by Docker are 100% consistent, so there are no problems caused by human error. Going forward we will extend Docker to our CI and production environments as well.
