How the financial industry uses containers when stability comes above all

Source: Internet
Author: User

Complex legacy IT architecture is the status quo in traditional finance. How can such institutions respond quickly to user needs, bring new business online faster, and shorten product iteration cycles? Drawing on two years of putting containers into production on financial clouds, Shurenyun (数人云) has helped establish production standards for containerizing core financial technologies such as WebLogic, Java and Oracle middleware, and these have begun to take root at stock exchanges and joint-stock banks. Its reference implementations for service orchestration, service discovery, continuous integration, containerized big data and high-performance container environments give the industry a starting point for building genuinely dynamic, flexible financial IT. The following is adapted from a keynote delivered by Shurenyun founder and CEO Wang Pu at a container technology conference in Shanghai.

Three challenges that plague the financial industry

Let us start with three questions. These problems trouble not only the financial industry; many enterprises in other traditional industries face the same challenges.

First, the time to bring a new application online has shrunk from months to days. How do you respond that quickly to user needs? Expectations are high: the Internet industry in China has developed very rapidly, and it has had a huge impact on traditional industries. Many traditional businesses now also need to go online quickly, which places heavy demands on their existing IT architectures. Second, new technologies keep emerging. How do you deliver and run applications in a standardized way? This, too, is a typical problem for traditional enterprises: which new technologies to choose, how to adopt them, how to deliver with them. Third, how do you handle sudden, elastic growth in load, such as the high-concurrency spikes of flash sales and red-envelope campaigns? Such bursts are characteristic of Internet business; how should a traditional enterprise like a financial institution cope with them?

Behind these three problems lies a change in the business patterns of traditional industries. As everyone knows, the financial industry serves a huge number of individual users, and the combination of 2C scenarios with the Internet is an irreversible trend. It is precisely the move of 2C services online that is changing the shape of financial business. Today much of the financial industry's business already has the characteristics of Internet business and Internet scenarios, which requires the industry to solve new business problems in combination with those scenarios. At the same time, the need for secure and controllable information technology poses new challenges to the IT architecture of financial institutions.

The current state of IT in the financial industry

The first point sets finance apart from other industries: compliance is the red line, and zero accidents are required. The CBRC, CIRC and CSRC impose many requirements on the financial industry, and many of the rules are red lines that must not be crossed. The industry therefore demands extremely high stability.

Second, Internet-scenario business brings high-concurrency pressure. This is a challenge financial institutions never faced in their traditional business, whose load is characterized by stable, predictable peaks: traffic climbs to a certain peak during daytime working hours, falls off in the evening, and the next day's peak is very close to the previous day's. Internet-scenario traffic, by contrast, is sudden and unpredictable.

Third, applications are difficult to roll out quickly and slow to upgrade. This too follows from the nature of the business: because stability overrides everything, every release must undergo comprehensive testing and end-to-end integration, which slows down launches. The financial sector ensures stability by reducing the speed at which business goes online, the opposite of the Internet companies' approach.

Fourth, the multiple environments are isolated from one another, and building a test environment is extremely time-consuming. A bank, for example, has at least three environments for development, testing and production, and the three are essentially physically isolated. That physical isolation makes test environments hard to set up and production issues hard to reproduce. I worked at Google before; Google has only one set of production infrastructure, with development, testing and production mixed together in large data centers. For Internet companies like Google, the three environments are not physically isolated, which makes testing and reproducing the production environment very convenient. Because of compliance requirements, the financial industry cannot do this.

Fifth, a major version upgrade cannot be rolled back. This is related to the isolation of environments just described. Because the environments are complex, rollback is hard for financial institutions: every release modifies the existing environment, and rolling back would mean undoing those changes, so in practice rollback is rarely achievable.

Sixth, equipment is heterogeneous and hardware utilization is very low. This last point is a historical burden: financial institutions run a wide variety of heterogeneous devices. A decade ago many of them bought mainframes and minicomputers, which remain in use to this day, and the resource utilization of these devices is not high. Traditional business has no sudden bursts and is very regular, peaking during the day and dropping to a trough at night, when the idle capacity can at least run batch jobs. In addition, many businesses in the financial industry are tied to hardware: applications are statically deployed, with each business supported by specific machines. Google does not work this way. It will not pin a specific application to a particular server, because at Google's scale that strong binding would make the data centers impossible to maintain: with more than two million servers, operations staff would have to remember which application runs on each one, which is clearly infeasible. Financial institutions' data centers are not as large as Google's, so they can bind applications to hardware. But strong binding means low utilization, because no business is busy around the clock, and in idle periods the computing resources go unused.

The above is a sketch, by no means comprehensive, of the state of IT in the financial industry as Shurenyun has observed it among its financial clients, highlighting in particular where they differ from Internet companies.

New needs for IT development in the financial industry

The new requirements fall into three areas: new capacity, new speed and new efficiency. The first is new capacity, meaning business capacity. The scale of financial business has changed dramatically: red envelopes and flash sales need near-instantaneous horizontal scaling, so the financial industry needs second-level horizontal capacity to absorb such bursts of traffic. At the same time it needs to shield the heterogeneity of the underlying infrastructure so that hybrid clouds can be deployed seamlessly.

Meanwhile, the rapid business iteration of the Internet has had a big impact on traditional industries, which are steadily speeding up their own iteration. Shipping a new version every month or even every week, as Internet companies do, is very difficult for the financial industry while still guaranteeing stability. The industry therefore needs continuous integration from code all the way to the online environment without manual steps, shortening release time to the hour level. Financial institutions also need to provision fully realistic test and development environments on demand, and to reduce release risk through grayscale (canary) releases and A/B testing.

The third is new efficiency: the financial industry needs to raise the resource utilization of traditional physical machines two to three times, tolerate small-scale failures at the bottom layer automatically, and manage multiple clusters across different infrastructure effectively, so that operations are unaffected as the business scales.

The expectations of IT in the financial industry

These three points are the challenges, and also the demands, of IT development in the financial industry; they are our short summary of what the industry expects from IT. As noted above, financial business has changed greatly, and 2C business increasingly takes on Internet characteristics. Supporting these Internet-scenario 2C services should therefore be as integrated as possible: from requirements through development, testing and release, to subsequent operations and monitoring, everything should run through a single process as far as possible.

A unified process smooths the entire application life cycle, and this is exactly where Docker helps. Docker masks the heterogeneity of environments, so a program written in development also runs in testing, and a program that passes testing also runs in production; the same artifact flows smoothly through the whole life cycle.

To give one or two concrete requirements: in testing, how to use container technology to stand up a variety of test environments quickly, how to generate components on demand, and how to reclaim them quickly afterwards. These are expectations, admittedly a grand blueprint; at this stage the financial industry cannot yet achieve such a smooth process. But the whole sector is moving in this direction, and development, testing and operations teams are actively embracing Docker and container technology to upgrade their IT architecture.

Container technology can build a smooth, integrated IT system for the financial industry, and it also changes a great deal in the existing IT architecture. Let us look at how the container stack corresponds to what financial institutions already have. By analogy: the existing enterprise IT architecture of many financial customers is largely Java-based; that is the right-hand side of the framework. At the bottom is the resource layer. On the right this used to mean high-end hardware from IBM and HP, such as mainframes and minicomputers; on the left, the cloud architecture leans toward x86 PC servers, with virtualization on x86 or private and public cloud services.

Above that is the middleware layer. The financial industry has long used Java middleware such as WebLogic and WebSphere, which provides a standard Java runtime: developers package jars and the like and deploy them onto the middleware. On the cloud side, the container-based data center operating system, the PaaS platform of cloud computing, is the middleware of the cloud era: it provides a standard application runtime environment, and most of those applications are now containerized. The layer plays the same role on both sides: WebLogic and WebSphere provide the standard runtime for Java programs, and the PaaS platform provides the standard runtime for container applications.

One level up is business packaging, the way applications are built and bundled. Traditional enterprise IT uses Java EE; now we increasingly package with containers. A container is not a programming language but a packaging format: it can wrap applications of any kind, whether Java, C++ or PHP. Where Java EE packages an application as a jar, in the cloud era we use Docker to encapsulate it as a container image.

Above packaging sits the business architecture. Traditional enterprise IT mostly uses an SOA framework; in the cloud era, with container technology, we are transitioning to microservices. Microservices and SOA share the same lineage: both are service-oriented, but microservices slice the services more finely and are developed, maintained and released service by service. That differs from traditional SOA, which mainly splits business logic into services at the development level, assigns them to different teams, and then brings the whole online together; microservices are operated independently, each on its own.

The top level is the organizational structure of development and operations. Traditionally development and operations are separate; in the cloud era they converge into continuous integration and DevOps. At bottom, continuous integration, DevOps, or, to use the more popular term, agile development, all come down to integrating development with operations, and that involves substantial adjustments to the organizational structure. Adjusting people and organizations is very different from adjusting IT architecture; it is a far more complex change.

Rebuilding the next generation of enterprise IT on cloud computing is therefore not just a technical change but an organizational one: development-operations collaboration, cross-departmental integration, changes in the division of responsibilities, and so on. At Google, developers number around twenty thousand while operations staff number only one or two thousand, yet those operations engineers manage an enormous fleet: millions of servers are all run by operations. What Google's operations department does also differs from what financial-industry operations teams do. Google's operations people focus on capacity planning and on standardizing the development process, and much of what traditional operations teams handle, such as bringing a business online, is pushed to the developers; Google's operators do not get involved.

Monitoring, management, control

Agile development is definitely not a matter of form; it brings deep changes to organizational structure and responsibilities. This slide describes how to understand a cloud-based IT architecture from the perspective of traditional enterprise IT. It has three parts: monitoring, management and control, connected in the middle by a CMDB (configuration management database); drawn this way, the diagram is easy for people from traditional enterprise IT to understand. Centralized monitoring covers many aspects, including monitoring of machine-room equipment, of the topology, and so on. The automated operations platform covers task rollout, permission management and the like, plus the separate operation of machine rooms, networks and other systems. These two modules will be familiar to colleagues in financial-industry data centers; they make up their daily work. Monitoring and automation together form the control side, and management is the remaining part. Management is more about process: how to handle scheduling in data center operations, how to handle changes, releases and configuration. Management is an extension that spans the entire data center.

So how does a container cloud connect with existing data center operations? Shurenyun comes in mostly through automated operations. At the management level, financial firms sit inside the compliance red line, and management processes cannot change in the short term. Shurenyun thinks about adoption in practical terms: how can container technology help financial customers quickly? So we enter from the control side, because at that level we do not disturb the customer's existing management processes. Many operations become very convenient on a container cloud, such as rapid application deployment, quick launch, task management, and the management of permissions and resource quotas, all part of automated control. But fast rollout and flexible deployment are not enough; production also demands extensive monitoring. So we connect the container cloud to the customer's existing monitoring platform, and monitoring, logging, alerting and so on continue to flow through the customer's existing processes. Starting from this point, Shurenyun helps make data center operations and control more automated and reduces the complexity of operations and maintenance.

The most important principle is to do no damage: do not change the upper-level management processes. That is the angle from which Shurenyun cuts in. As noted above, though, if an enterprise truly wants to go fully cloud-native, with agile development and DevOps, then adjustments to its organizational structure and management are unavoidable. As a container technology vendor we look at the problem mainly from the technical side, so our main path to adoption is through automated control.

Three scenarios

Let's look briefly at some scenarios where the container cloud has landed in the financial industry. The first is elastic scaling, needed by businesses such as flash sales and red envelopes. The ability to scale applications out and back in raises the data center's resource utilization: when a business is computing heavily, its applications can expand to occupy more resources, and when load falls off, the back-end applications contract and release those resources to other applications. This elasticity is the answer to the change in business capacity.

Docker makes elastic scaling very convenient. For example, we monitor network latency or other business-related metrics that reflect the speed of the business interface. When monitoring finds that a service's latency has risen, or that the number of requests to a service has crossed a threshold, the scaling logic kicks in. Auto-scaling is easy with Docker because it really just means adding more application instances: each web instance is encapsulated in a Docker container, and when expansion is needed the scheduling platform launches additional container instances. At the resource level, if the enterprise runs a private IaaS cloud underneath, the container cloud can call the IaaS interface, scheduling OpenStack or VMware to create more virtual machines, and then allocate the new computing resources to the application. Elastic scaling is easy to understand: schedule more instances.
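As a minimal sketch, the threshold-driven decision described above might look like the following. The metric names, thresholds and capacity assumptions are all illustrative, not any specific platform's interface; a real scheduler call would replace the returned number.

```python
# Hypothetical threshold-based horizontal scaling decision.
# All names, thresholds and the per-instance capacity figure are
# illustrative, not taken from any real scheduling platform.

def decide_instances(current, latency_ms, requests_per_sec,
                     max_latency_ms=200, reqs_per_instance=500,
                     min_instances=2, max_instances=50):
    """Return the number of web instances the scheduler should run."""
    # Capacity the current request rate demands (ceiling division).
    needed_for_load = -(-requests_per_sec // reqs_per_instance)
    desired = max(current, needed_for_load)
    if latency_ms > max_latency_ms:
        desired = max(desired, current + 1)   # latency breach: add capacity
    elif latency_ms < max_latency_ms // 2 and needed_for_load < current:
        desired = needed_for_load             # quiet period: shrink back
    return max(min_instances, min(desired, max_instances))
```

In practice such logic also needs hysteresis (for example, only shrinking after several consecutive quiet samples) so that a noisy metric does not make the instance count oscillate.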

The second scenario, corresponding to the new speed, is more complex: taking a business application from code to production with continuous integration and continuous delivery. Where is the complexity? First, the different environments need to be bridged with Docker, which is where Docker excels. Development and test environments are relatively easy to connect, since they are reachable over the network, but testing and production are harder to bridge: their networks are generally unreachable from one another, so whatever is delivered must be all the more standardized. From testing to production, then, it is best to hand over a Docker application. The development process itself is unchanged; Docker does not make writing code more efficient, and how you write code or do code review has little to do with Docker. But with Docker, once code lands in the repository, a new build can be produced automatically: for a Java program, build the jar or war package, then build the image, push the image from the development environment to the registry, and pull it from the registry into the test environment, so the two environments connect easily. Some images in the registry will fail testing; those results go back to the developers, who iterate, build a new Docker image, test it fully, and save it to the registry, tagging the latest fully tested image of the business application. Deploying to production involves many more links in the chain, and the physical network in between may be closed off, so operators must also bridge the path from the test stage to production delivery of the Docker image.
Docker packages the application together with the environment it depends on. Suppose the Docker application contains WebLogic running a war package written in Java; then the container also needs WebLogic's own base environment, say an Ubuntu Linux, plus various XML configuration files. There are different ways to handle the war package and the configuration files at delivery time. The most convenient for development and testing is to pack everything into Docker, program and configuration together; the image is then completely self-contained. But that brings annoyances: change one line of the program and you must rebuild the war package and then the whole Docker image; change one value in a configuration file and again the whole image must be rebuilt. Enterprise applications have a great many dependencies, so the full packaging process does not finish in seconds. Here, the relatively stable parts are Ubuntu and WebLogic, the dependencies, so they can go into a Docker base image, while the war package, which changes with every release, is kept separate. That way the base image stays the same across releases, and a new version reuses the existing base image and merely replaces the war package, while still enjoying Docker's isolation, resource limits and other lightweight deployment features. Configuration management is similar, because so far the configuration has still been baked into the image. Configuration files are small, and they change less often than the program, but they do change. Must the whole image be rebuilt every time a configuration file is modified?
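The base-image split just described can be sketched as a pair of hypothetical Dockerfiles; the image names, versions and paths below are illustrative, not a real deployment.

```dockerfile
# Base image: rebuilt rarely (OS + middleware dependencies).
FROM ubuntu:16.04
COPY weblogic/ /opt/weblogic/
# ... JDK installation, domain setup, etc.

# --- separate Dockerfile, one per release ---
# Application image: rebuilt on every release; only the thin
# war-package layer changes, the base layers are reused from cache.
FROM mycorp/weblogic-base:1.0
COPY target/app.war /opt/weblogic/deployments/app.war
```

Because Docker images are layered, replacing only the top layer makes each release build and push far faster than repackaging the whole stack.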
Not necessarily: configuration files can be managed separately. Configuration management is not easy in the financial industry, because the environments are isolated. A configuration server holds a separate configuration for each environment, and at run time there are two modes. One is pull: when the container starts, it pulls the current configuration from the configuration center, with no code changes required. The other is push: configuration updates are pushed to specific containers in real time, which requires an SDK. In pull mode the configuration is loaded statically when the program starts, and changing the configuration does not change the program, so pull mode is relatively easy to implement.

The third scenario is the new efficiency: raising the operational efficiency of the whole data center and driving down the complexity of operations. With a container cloud, about 80% of repetitive operations can be automated. Deployment barely needs human participation; an operator only triggers it and sets the release time, and the actual rollout is rapid container-based deployment. Essentially only racking new physical servers, or assembling the virtual machine resource pool, still needs manpower; cluster construction under the container cloud is carried out automatically on container technology. CPU and memory are allocated and reclaimed automatically, and application scale-out and automatic fault recovery happen automatically as well. With all this, 80% of repetitive operations become automated, an unmistakable improvement in the data center's operational efficiency.
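The pull mode described earlier might look like the following minimal Python sketch. The config-center client is stubbed out as a plain callable, and every name and default value here is hypothetical.

```python
# Hypothetical pull-mode configuration loading: on startup the container
# asks a config center for environment-specific values and falls back to
# baked-in defaults. `fetch` is a stand-in for a real client call.

DEFAULTS = {"db_host": "localhost", "db_port": 5432, "pool_size": 10}

def load_config(fetch, env):
    """Merge defaults with whatever the config center returns for `env`.

    `fetch(env)` is any callable returning a dict (it may raise on
    network failure); defaults are kept for keys it does not supply.
    """
    config = dict(DEFAULTS)
    try:
        config.update(fetch(env))
    except Exception:
        pass  # config center unreachable: run on baked-in defaults
    return config

# A stub standing in for an HTTP call to the config center:
def fake_fetch(env):
    return {"prod": {"db_host": "db.prod.internal", "pool_size": 50}}.get(env, {})
```

Whether falling back to defaults (rather than refusing to start) is acceptable on fetch failure is a policy decision; for a production trading system, failing fast may well be the safer choice.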

Case studies from Shurenyun

Shurenyun's architecture is simple: a container-centric data center operating system. We want to build a lightweight PaaS platform for private or hybrid clouds. The idea of this PaaS platform is very simple: take the various applications, whether Internet-style applications, traditionally architected applications, or distributed open-source components such as message queues, and abstract them uniformly into containerized applications. For these standardized containerized applications the PaaS platform provides a standard container runtime environment, including application deployment, continuous integration, elastic scaling, service discovery, logging, permissions, and integration with persistence and networking. This is the standard PaaS platform. It all rests on container technology standardizing the application layer: once everything is a containerized application, there is no longer any distinction between business applications, component-level applications and big data applications. They are all container applications, and the PaaS platform only needs to manage the runtime requirements of container applications.

Downward, the PaaS platform manages all kinds of computing resources, including public or private clouds and physical machines; Shurenyun currently focuses more on private cloud scenarios. With a lightweight PaaS platform, physical and virtual machines share one management plane, and everything from rapid application release, to overall resource utilization, to large-scale deployment becomes one integrated process, all of which the Shurenyun PaaS platform supports.

Take a flash-sale example from one of our clients, who ran its flash sales at around ten o'clock at night for fear of too much daytime traffic. That was a genuine dilemma: their IT architecture could not scale elastically, so they were forced to hold the event at night. We ran load tests at the scale of a million concurrent users, one million requests per second, and after the load test we enabled elastic scaling, for which Docker containers are very convenient, with monitoring triggering automatic expansion. The second example is same-city disaster recovery. For compliance, the financial industry must implement "two sites, three centers", which is not easy to achieve. A container cloud can realize the three centers: the container management nodes span networks and are highly available, with several replicas, and beneath them sit different clusters, perhaps a production cluster, a backup cluster, a development cluster and so on, all of these cross-physical-node clusters managed through the Shurenyun management nodes. When one cluster goes down, many of the applications on it can be migrated automatically and quickly; production fails over to backup immediately, which containers make easy to implement.
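The failover decision just described can be sketched in a few lines. This is an illustrative model only, with hypothetical cluster names; a real platform would also handle alerting, data replication and split-brain protection, none of which appear here.

```python
# Hypothetical sketch of the cross-cluster failover decision in a
# two-sites-three-centers layout: when a cluster is marked unhealthy,
# its applications are rescheduled onto the designated backup cluster.

def plan_failover(placements, health, backups):
    """placements: {app: cluster}; health: {cluster: bool};
    backups: {cluster: backup_cluster}. Returns new placements."""
    new_placements = {}
    for app, cluster in placements.items():
        if health.get(cluster, False):
            new_placements[app] = cluster          # cluster healthy: stay put
        else:
            backup = backups.get(cluster)
            if backup and health.get(backup, False):
                new_placements[app] = backup       # migrate to healthy backup
            else:
                new_placements[app] = cluster      # no healthy backup: alert
    return new_placements
```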

One more small example, as just mentioned: if the big data system is containerized, there is no need to distinguish big data applications from other business applications; they are all containerized. So the big data systems run in containers, everything encapsulated and containerized, including Kafka, ZooKeeper and Redis. Once containerized, the PaaS platform does not care what the application is; it only manages containers: a container that needs CPU gets CPU, one that needs memory gets memory, one that needs network gets network, one that needs isolation gets isolation from the PaaS platform. This makes the whole big data platform easy to maintain, and application systems and data systems can be operated through a single unified PaaS platform.
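On a Mesos/Marathon-style scheduler, for instance, a containerized Redis is just another app definition declaring its image and resource needs. The snippet below is an illustrative Marathon-style definition, not taken from the talk; the id, image tag and resource figures are hypothetical.

```json
{
  "id": "/data/redis",
  "cpus": 1,
  "mem": 2048,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": { "image": "redis:3.2" }
  }
}
```

From the platform's point of view this definition is indistinguishable from a business application's: the scheduler simply finds a node with the declared CPU and memory and runs the container there.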

