Python Back-end Architecture Evolution


This post summarizes the architecture ideas behind an online platform I recently worked on.

Architecture evolution: 1. MVC, 2. service splitting, 3. microservices architecture, 4. domain-driven design.

1. MVC

This stage was mainly about shipping the product quickly; little else was considered. The design started by dividing the project into several apps, with high cohesion within each app and low coupling between apps. The DB tables were designed carefully, the view layer implemented the functional requirements, and Django was used to build features fast. The backend kept plenty of slack in its design so that changes in product logic would not force changes to the whole table structure. The architecture looked like this:


MVC architecture

Nginx does the load balancing, distributing requests by weight across several Django services (each actually sitting behind uwsgi). Static requests are served directly by Nginx; all other requests pass through uwsgi to Django, which processes them and returns the response. Time-consuming work is handled asynchronously with Celery. MySQL serves as the database and Redis as the cache, which speeds up responses and reduces the load on MySQL. Real-time message notification is handled with the Nginx Push Module.

Problems and solutions:

1. Unlike Tornado, Django does not handle concurrency well; its concurrent performance is poor. We used uwsgi + nginx + gevent to achieve high concurrency.

2. Too many Redis connections caused the service to hang. We used redis-py's built-in connection pool to reuse connections.

3. Too many MySQL connections: we used djorm-ext-pool to pool them.

4. Celery was configured with gevent to support concurrent tasks.

5. Celery with RabbitMQ as the task queue implements asynchronous scheduling and execution of tasks.

Celery is a distributed task queue; its basic job is to distribute tasks to different servers and collect the results. But how do the servers communicate with each other? Celery itself cannot solve that, so RabbitMQ is introduced as the message-queue broker, integrated with Celery, and made responsible for carrying the messages between servers.

As functional requirements grew, so did the number of apps under Django, which made releases inconvenient: every release required restarting all the Django services, and when a problem came up, the only option was overtime to fix it. Meanwhile the codebase of the single Django project kept growing and became hard to maintain.

2. Service splitting

Designing the apps with high cohesion internally and low coupling between them laid the groundwork for service splitting. First, the shared code was pulled out into a common library that everything else depends on. We estimated that as data volume grew, Redis and MySQL would need to be optimized by splitting into multiple databases and tables; after that the business itself would be split, depending on how clean the original code was and how entangled its dependencies were.
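Table splitting as mentioned above can be as simple as routing rows by a hash of the key. A hypothetical sketch, with an invented shard count and table naming scheme:

```python
SHARDS = 16  # hypothetical number of physical tables

def table_for(user_id: int) -> str:
    # Route all of one user's orders to the same physical table
    # (order_0 .. order_15), so single-user queries stay on one shard.
    return f"order_{user_id % SHARDS}"
```

The trade-off of modulo sharding is that changing the shard count later requires rehashing existing data, which is why the shard count is usually chosen with headroom up front.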


Service separation

The Nginx Push Module's maximum number of long-lived connections was not enough, so we implemented the tormq service with Tornado + ZeroMQ to support message notification.
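The article does not show tormq's internals. As an illustration of the fan-out pattern it implements (a ZeroMQ subscriber feeding many long-lived Tornado connections), here is a minimal in-process stand-in that uses only stdlib asyncio queues in place of ZeroMQ sockets and WebSocket handlers:

```python
import asyncio

class NotificationHub:
    # Stand-in for tormq: a message published on a channel is fanned out
    # to every subscriber queue (each queue models one long connection).
    def __init__(self):
        self._subscribers = {}

    def subscribe(self, channel):
        q = asyncio.Queue()
        self._subscribers.setdefault(channel, set()).add(q)
        return q

    async def publish(self, channel, message):
        for q in self._subscribers.get(channel, ()):
            await q.put(message)

async def demo():
    hub = NotificationHub()
    inbox = hub.subscribe("user:42")          # one client's long connection
    await hub.publish("user:42", "new comment")
    return await inbox.get()
```

In the real service, `publish` would be driven by messages arriving on a ZeroMQ SUB socket, and each queue would be drained into a Tornado WebSocket or long-polling handler.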

Problem:

With the business split up, continuing to maintain the routing configuration in Nginx became cumbersome, and call failures were often caused by Nginx configuration changes. In addition, each service carried a complete authentication flow, and authentication depended on the user-center database, so any change to authentication meant republishing multiple services.

The previous two stages had already been implemented. I have not yet been personally involved in the microservices and domain-driven design stages that follow (I was working on a Java-based microservices system), so what comes next is one Python development engineer's solution.

3. Microservices architecture

MicroServices

The first step was introducing the OpenResty-based Kong API gateway at the access layer, with custom plugins implementing authentication, rate limiting, and so on. The access layer thus takes over these common functions and strips them out of the application layer. When a new service is published, the release script calls the Kong Admin API to register the service address with Kong and to load the plugins that the API needs.
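A hedged sketch of what such a release script might look like. The admin address, service name, and upstream URL are assumptions; the payload shape follows Kong's `/services` Admin API:

```python
import json
import urllib.request

KONG_ADMIN = "http://127.0.0.1:8001"  # assumed Kong Admin API address

def service_payload(name, upstream_url):
    # Kong registers a backend as a "service" with a name and upstream URL.
    return {"name": name, "url": upstream_url}

def register_service(name, upstream_url):
    # Called from the release script after the new service instance starts.
    body = json.dumps(service_payload(name, upstream_url)).encode()
    req = urllib.request.Request(
        f"{KONG_ADMIN}/services", data=body, method="POST",
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)
```

A real script would also create a route for the service and attach the required plugins (e.g. auth, rate limiting) through further Admin API calls.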

To solve the problem of services calling each other, we maintained Doge, a gevent + msgpack based RPC service framework, with service governance handled by etcd; the RPC client implements rate limiting, high availability, and load balancing.
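Doge's actual wire format is not documented in the article. Purely as an illustration of msgpack-based RPC framing, a guessed minimal encoding (all field names invented) could look like this:

```python
import msgpack

def encode_call(msg_id, method, args):
    # One RPC request = one msgpack map. msgpack gives a compact binary
    # encoding that is cheap to parse compared with JSON.
    return msgpack.packb({"id": msg_id, "method": method, "args": list(args)})

def decode_call(payload):
    # raw=False decodes msgpack strings back to Python str.
    return msgpack.unpackb(payload, raw=False)
```

In the real framework, the gevent side would handle many concurrent sockets, each carrying length-prefixed frames of messages like these between client and server.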

Technology selection was hardest at this stage. Open-source API gateways are mostly implemented in Golang or OpenResty (Lua), and tailoring one to our business took real effort. I spent a month learning OpenResty and Golang, and used OpenResty to build shorturl, a short-URL service used in the business. We ultimately chose Kong because deploying Lua is convenient and Kong works out of the box with easy plugin development. Performance was not the top consideration; to support more concurrency, the cloud platform's LB service distributes traffic across a cluster of two Kong servers, with configuration synchronized automatically between them.

Ele.me maintains thriftpy, a pure-Python implementation of the Thrift protocol, and provides many supporting tools. If the team is large enough, that RPC solution is actually appropriate, but our team was understaffed and uneven in skill, so it was hard to push through a whole-package solution with a steep learning curve. In the end we developed Doge, a Dubbo-like RPC framework, mainly referencing Weibo's open-source Motan.

4. Domain-driven design

Domain-driven design (DDD)

In this architecture we tried to pull a data-service layer out of the application services. Each data service contains one or more bounded contexts, and each bounded context exposes its RPC methods through a single aggregate root only. Data services do not depend on application services, while application services may depend on multiple data services. With the data-service layer in place, applications are decoupled from one another, and higher-level services depend only on lower-level ones.
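To make the "one aggregate root per bounded context" rule concrete, here is a hypothetical order context in plain Python (all names are invented, not from the platform's code): the RPC-facing data service delegates only to the `Order` aggregate root, while `OrderLine` stays internal to the context.

```python
from dataclasses import dataclass, field

@dataclass
class OrderLine:
    # Internal entity: never exposed outside the bounded context.
    sku: str
    qty: int
    price: int  # in cents

@dataclass
class Order:
    # Aggregate root: the only entry point for mutating the context's state.
    order_id: str
    lines: list = field(default_factory=list)

    def add_line(self, sku, qty, price):
        self.lines.append(OrderLine(sku, qty, price))

    def total(self):
        return sum(l.qty * l.price for l in self.lines)

class OrderService:
    """Data-service facade: each RPC method delegates to the aggregate root."""
    def __init__(self):
        self._orders = {}

    def create_order(self, order_id):
        self._orders[order_id] = Order(order_id)

    def add_line(self, order_id, sku, qty, price):
        self._orders[order_id].add_line(sku, qty, price)

    def order_total(self, order_id):
        return self._orders[order_id].total()
```

Because callers can only go through `OrderService` and therefore through `Order`, the invariants of the context (here, how totals are computed from lines) live in exactly one place.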

Source:https://zhu327.github.io/2018/07/19/python/Backend Architecture Evolution/
