In-Depth Analysis: Application of Container Technology in GF Securities' Trading System [Repost]

Source: Internet
Author: User
Tags: grafana, influxdb, statsd

Original link: http://geek.csdn.net/news/detail/94850

This is a good example of Docker adoption in production, with much worth learning from. An excerpt follows:

Why containerize

For a traditional, vertical industry, Docker is a technology that emerged only in the last few years, and its concepts are quite advanced, so adopting container technology required a comprehensive assessment on our part. Why did we do it? First, consider the state of the industry: the securities business places high demands on quantitative trading, high-frequency trading, and real-time risk control. Second, business innovation is constant, and new products launch frequently. On the regulatory side, the China Securities Regulatory Commission (CSRC) has zero tolerance for trading accidents. Add to that the rapid growth of Internet finance in recent years, and the sheer market volume: from the end of 2014 through last year, the busiest days saw turnover of 2 trillion RMB. All of this poses a very large challenge to our entire trading system.

Everything we do revolves around the business: how to use our limited resources to support business innovation and truly let technology lead the business. We cannot match the resources of the large Internet companies, so leveraging limited resources to support business innovation is something we must think hard about.
Before adopting container technology, we ran into some fairly typical pain points:

    1. Systems were deployed per project: each project procured its own batch of machines, isolated from one another and sharing no resources, which wasted resources enormously.

    2. Upgrading a traditional trading system is very difficult. For example, an upgrade may involve a back-end data table whose data volume is so large that the upgrade can essentially only happen on a weekend and takes one to several hours.

    3. Trading systems were upgraded by patching: patch after patch was applied in place, until the version running in production became unmaintainable and nobody dared to change or redeploy it.

    4. Test environments were time-consuming to build and very difficult to deploy quickly.

    5. After a major upgrade, the system essentially could not be rolled back.

    6. Fat-finger incidents and similar black-swan events in the industry happen, fundamentally, because real-time risk control in the technical systems cannot keep up with the speed the business demands, so the more time-consuming risk checks get removed.

Container technology solves these problems well; using containers brings at least the following benefits:

① a lightweight engine with efficient virtualization
② second-level deployment, easy to migrate and scale
③ good portability, elastic and easy to manage
④ the beginnings of "cloud" capability
⑤ makes microservices practical
⑥ standardized server-side deliverables
⑦ a significant step toward DevOps

Container technology and landing the cloud
    1. Docker is definitely not a virtual machine, nor is it trying to be one; in our understanding, Docker is essentially an encapsulation of a process.

    2. Broadly speaking, the cloud is an extension of multi-processing on a single machine to multi-processing across a network. To realize a cloud, you must be able to orchestrate and schedule resources for remote processes; in our view, this is the foundation of the cloud.

Take monitoring as a simple example. A complete StatsD-based monitoring service needs three components: InfluxDB, Grafana, and StatsD. Each provides only a single function, and no one of them alone can deliver monitoring. To offer monitoring as a service, they must be combined, so we use a Compose file to wire the three services together into one complete service.

influxdb:
  image: "docker.gf.com.cn/gfcloud/influxdb:0.9"
  ports:
    - "8083:8083"
    - "8086:8086"
  expose:
    - "8090"
    - "8099"
  volumes:
    - "/var/monitor/influxdb:/data"
  environment:
    - "PRE_CREATE_DB=influxdb"
    - "ADMIN_USER=root"
    - "INFLUXDB_INIT_PWD=root"

grafana:
  image: "docker.gf.com.cn/gfcloud/grafana"
  ports:
    - "3000:3000"
  volumes:
    - "/var/monitor/grafana:/var/lib/grafana"

statsd:
  image: "docker.gf.com.cn/gfcloud/statsd"
  ports:
    - "8125:8125/udp"
  links:
    - "influxdb:influxdb"
  volumes:
    - "/var/monitor/statsd/log:/var/log"
  environment:
    - INFLUXDB_HOST=influxdb
    - INFLUXDB_PORT=8086
    - INFLUXDB=influxdb
    - INFLUXDB_USERNAME=root
    - INFLUXDB_PASSWORD=root
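As a side note, the StatsD component in this stack accepts plain-text UDP datagrams. A minimal sketch of that line protocol follows; the metric names are invented for illustration:

```shell
# Build StatsD line-protocol datagrams: <metric>:<value>|<type>
# (c = counter, ms = timer, g = gauge). Metric names are made up.
statsd_line() {
    printf '%s:%s|%s' "$1" "$2" "$3"
}

statsd_line orders.filled 1 c; echo     # orders.filled:1|c
statsd_line order.latency 230 ms; echo  # order.latency:230|ms
```

With the stack above running and a shell that supports /dev/udp (e.g. bash), a datagram could be sent with `statsd_line orders.filled 1 c > /dev/udp/127.0.0.1/8125`; StatsD aggregates it and flushes to InfluxDB for Grafana to chart.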

Container orchestration can be understood from two angles.

    1. Container orchestration is the practice of managing clusters of containers across hosts. As Docker has developed, its ecosystem has matured, and the common tools Kubernetes, Mesos + Marathon, Rancher, and so on all fall into this category.

    2. Container orchestration aims to maximize resource utilization while balancing the system's constantly changing demands against the need for fault tolerance.

Best practices for container technology and the cloud

Immutable operations

Container technology makes change management and automated deployment possible. Over time, a production system accumulates many changes: new applications, upgrades, configuration changes, scheduled tasks, and bug fixes. One thing is beyond doubt: the longer a configured server runs, the more likely it is to be in an unknown state. Immutable operations resolve this by handling every one of those changes the same way, recreating a new container instance, so the state of the server is always determined. The pain points around upgrades and patching mentioned earlier are thereby well resolved.

Immutable means, as the name implies, that once something is created it is never modified. The following are the principles we follow when using Docker containers in production:

① once a container is instantiated, it is never changed
② every service change (upgrade, downgrade, configuration change) is implemented by redeploying
③ never modify anything inside a running container
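The "change by redeploy" principle can be sketched as a dry run. The image name, container name, and tags below are hypothetical, and the docker commands are echoed rather than executed so the sequence is visible without a Docker daemon:

```shell
# Dry-run sketch of an immutable redeploy (hypothetical names/tags).
IMAGE="docker.gf.com.cn/gfcloud/myapp"
NAME="myapp"
NEW_TAG="1.5.0"

redeploy() {
    echo "docker pull ${IMAGE}:${NEW_TAG}"   # fetch the new immutable image
    echo "docker stop ${NAME}"               # never patch the running container
    echo "docker rm ${NAME}"                 # discard it entirely...
    echo "docker run -d --name ${NAME} ${IMAGE}:${NEW_TAG}"  # ...and recreate it
}

redeploy
```

An upgrade, a downgrade, and a configuration change all follow this same four-step replacement; only the tag (or the baked-in configuration) differs.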

Docker Best Practices

①. Do not map external files, except for persistent storage

In particular, do not handle log files by mapping them to external files; instead, write log data to a centralized place through interfaces such as Flume or Kafka.
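One concrete way to do this, which is an assumption on our part rather than something the article specifies, is Docker's fluentd logging driver: it ships container stdout/stderr to a collector, which can in turn forward to Kafka. The collector address and image name below are hypothetical, and the command is only printed:

```shell
# Ship container logs to a central collector instead of mapping log files.
# Hypothetical collector address and image name; command is printed, not run.
run_cmd="docker run -d --name myapp \
--log-driver=fluentd \
--log-opt fluentd-address=logcollector.gf.com.cn:24224 \
--log-opt tag=trading.myapp \
docker.gf.com.cn/gfcloud/myapp:1.5.0"

echo "$run_cmd"
```

With this, nothing under /var/log needs to leave the container through a volume mount, and the container stays disposable.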

②. Do not use the host network

Do not use "host" network mode: it pollutes the host and is a poor fit for cloud computing.

③. Tag images; abandon latest

Use tags to identify application versions in production rather than latest, because with latest, rollback becomes impossible.
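A dry-run sketch of why pinned tags make rollback trivial; the version numbers and image name are invented, and the commands are printed rather than executed:

```shell
# Hypothetical image; deploying any version is the same command with a
# different immutable tag.
IMAGE="docker.gf.com.cn/gfcloud/myapp"

release() { echo "docker run -d --name myapp ${IMAGE}:$1"; }

release 1.5.0   # ship the new version
release 1.4.2   # rollback = redeploy the previous known-good tag
# With ":latest" there is no previous tag to return to.
```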

④. Separate application compilation from image build

Use a dedicated build container to compile the application itself, for example a Maven, Go, or C++ image. After compilation, the build artifact is added to the service image via the Dockerfile.
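For a Java service, the two stages might look like the following sketch; the Maven image tag, paths, and service image name are assumptions, and the commands are only printed:

```shell
# Stage 1: compile inside a throwaway build container (here: Maven).
build_cmd='docker run --rm -v "$PWD":/app -w /app maven:3 mvn -q package'

# Stage 2: bake the resulting artifact into the service image via its Dockerfile.
image_cmd='docker build -t docker.gf.com.cn/gfcloud/myapp:1.5.0 .'

echo "$build_cmd"
echo "$image_cmd"
```

The benefit is that the service image carries only the runtime and the artifact, not compilers or build dependencies.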

⑤. Delete temporarily installed apt-get packages

For example, write the entire apt-get sequence as a single RUN command. Docker uses layered storage: uninstalling a temporary package in a separate RUN command does nothing to reduce the size of the Docker image.

⑥. Do not run a daemon inside Docker

Do not run the service under a daemon inside the container; if the application crashes, let it crash. Use external monitoring to restart the application and perform similar operations.

FROM docker.gf.com.cn/gfcloud/ubuntu

RUN echo "deb http://nginx.org/packages/ubuntu/ trusty nginx" >> /etc/apt/sources.list \
    && echo "deb-src http://nginx.org/packages/ubuntu/ trusty nginx" >> /etc/apt/sources.list

RUN apt-get update && apt-get install -y wget \
    && wget http://nginx.org/keys/nginx_signing.key && apt-key add nginx_signing.key \
    && apt-get update && apt-get install -y nginx \
    && rm -rf /var/lib/apt/lists/* \
    && echo "\ndaemon off;" >> /etc/nginx/nginx.conf \
    && apt-get purge -y --auto-remove wget

# chown -R www-data:www-data /var/lib/nginx

# Define mountable directories.
VOLUME ["/etc/nginx/sites-enabled", "/etc/nginx/certs", "/etc/nginx/conf.d", "/var/log/nginx", "/var/www/html"]

# Define working directory.
WORKDIR /etc/nginx

# Expose ports.
EXPOSE 80
EXPOSE 443

# Define default command.
CMD ["nginx", "-g", "daemon off;"]
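An external watchdog along those lines can be sketched as follows. The docker CLI is stubbed with a shell function so the control flow runs without a daemon, and the container name is hypothetical:

```shell
# Stub of the docker CLI for illustration only: it reports the container
# as stopped, then acknowledges a start request.
docker() {
    case "$1" in
        inspect) echo "false" ;;            # real check: docker inspect -f '{{.State.Running}}' <name>
        start)   echo "restarted: $2" ;;
    esac
}

# Watchdog: if the container is not running, restart it from the outside.
watchdog() {
    running=$(docker inspect -f '{{.State.Running}}' "$1")
    if [ "$running" != "true" ]; then
        docker start "$1"
    fi
}

watchdog myapp    # prints: restarted: myapp
```

In production, Docker's own restart policy (`docker run --restart=on-failure`) is a simpler alternative when external monitoring is not otherwise needed.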


This article is from the original author's blog; please keep this source: http://strongit.blog.51cto.com/10020534/1837240

