I recently received a simple demo request: build an HTTP server that supports simple QA queries. There are 10,000 QA pairs in the library, and the service needs to sustain more than 10,000 requests per second.
The requirement itself is simple; the main difficulty is the 10,000+ RPS target. I first wrote a simple demo with Python + uWSGI, but testing showed an RPS of only a few thousand, short of the performance requirement. I then deployed multiple service instances behind Nginx for load balancing, which barely met the demand.
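A minimal sketch of the kind of first-attempt service described above: an in-memory QA lookup exposed as a WSGI app that uWSGI can serve. The names here (QA_PAIRS, the "q" query parameter) are illustrative assumptions, not details from the original project.

```python
# Hypothetical minimal WSGI QA service; the real project held ~10,000 pairs.
from urllib.parse import parse_qs

# Toy QA store standing in for the 10,000-pair library.
QA_PAIRS = {
    "what is japronto": "A fast Python HTTP framework.",
    "what is wrk": "An HTTP benchmarking tool.",
}

def application(environ, start_response):
    # Answer the question passed in the "q" query parameter.
    query = parse_qs(environ.get("QUERY_STRING", ""))
    question = (query.get("q") or [""])[0].strip().lower()
    answer = QA_PAIRS.get(question, "Sorry, I don't know.")
    body = answer.encode("utf-8")
    start_response("200 OK", [
        ("Content-Type", "text/plain; charset=utf-8"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Served with something like `uwsgi --http :8000 --wsgi-file app.py --processes 4`, a plain dictionary lookup like this is cheap; the bottleneck at high RPS is the HTTP layer, which is what motivated looking for a faster framework.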
Japronto
A later Google search turned up Japronto (GitHub: https://github.com/squeaky-pl/japronto), whose performance is very strong, as the author's benchmark chart shows:
Why is its performance so high? Japronto applies many optimizations, the most important of which is HTTP pipelining, which it uses to optimize the handling of concurrent requests. Most servers treat pipelined and non-pipelined requests from clients the same way, without any targeted optimization.
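To make the pipelining idea concrete, here is a toy sketch of the wire format: the client writes several requests back-to-back on one connection before reading any response. This is not Japronto's implementation, only an illustration; the helper names are made up.

```python
# Toy illustration of HTTP pipelining (not Japronto's actual code).

def pipelined_payload(paths, host="example.com"):
    """Concatenate several GET requests into one byte buffer,
    as a pipelining client would send them on a single connection."""
    requests = [
        f"GET {p} HTTP/1.1\r\nHost: {host}\r\n\r\n".encode("ascii")
        for p in paths
    ]
    return b"".join(requests)

def split_requests(buf):
    """Split a pipelined buffer back into individual requests
    (these header-only requests each end with a blank line)."""
    parts = buf.split(b"\r\n\r\n")
    return [p + b"\r\n\r\n" for p in parts if p]

payload = pipelined_payload(["/", "/about"])
```

A server that recognizes this pattern can parse the whole queued batch in one pass and write all the responses back together, amortizing per-request syscall and parsing overhead; that batching is part of how Japronto reaches its numbers.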
For more details, see https://medium.freecodecamp.org/million-requests-per-second-with-python-95c137af319 and https://github.com/squeaky-pl/japronto
Test
Deploy with Docker, following the official example.
1. Pull the image
docker pull japronto/japronto
2. Write test code
# examples/1_hello/hello.py
from japronto import Application

# Views handle logic, take a request as a parameter and
# return a Response object back to the client.
def hello(request):
    return request.Response(text='Hello world!')

# The application instance is a fundamental concept.
# It's a parent to all of the resources, and all of the settings
# can be tweaked here.
app = Application()

# The router instance lets you register your handlers and execute
# them depending on the URL path and method.
app.router.add_route('/', hello)

# Finally start our server and handle requests until termination is
# requested. Enabling debug lets you see request logs and stack traces.
app.run(debug=True)
3. Start the Docker container
docker run -p 8080:8080 -v $(pwd)/hello.py:/hello.py japronto/japronto --script /hello.py
Load-test with wrk, using a single thread and 100 connections for 30 seconds. The results are as follows:
wrk -c 100 -t 1 -d 30s http://192.168.86.10:8077/
Running 30s test @ http://192.168.86.10:8077/
  1 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.88ms  548.76us  17.70ms   88.46%
    Req/Sec    53.43k     2.40k   54.86k    96.33%
  1593994 requests in 30.02s, 139.85MB read
Requests/sec:  53104.58
Transfer/sec:      4.66MB
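As a quick sanity check on the wrk summary, the total request count divided by the wall-clock time should roughly match the reported Requests/sec figure (small differences come from wrk timing each thread precisely):

```python
# Cross-check the wrk summary line: 1593994 requests in 30.02s.
total_requests = 1_593_994
duration_s = 30.02
rps = total_requests / duration_s  # roughly 53.1k requests per second
```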
Test results are affected by the server hardware, run mode, and so on, so the numbers will differ between environments, but the performance is clearly very strong.
Unfortunately, the current project has been suspended.
Python high-performance web framework: Japronto