I've been learning Golang recently, and I wanted to see how much its famous concurrency actually buys compared to my trusty Python. After working through the HTTP service in the official tutorial, I benchmarked it with the load-testing tool wrk, and the results were quite surprising.
For basic wrk usage, see the notes on my blog: http://blog.yuanzhaoyi.cn/2018/01/12/test.html
Test command: wrk -t10 -d1m -c200 http://127.0.0.1:8080
Meaning: 10 threads, 200 concurrent connections, for 1 minute.
Each HTTP service simply returns "Hello World", so there should be no I/O blocking.
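To make concrete what wrk is doing, here is a toy load generator in the same spirit: worker threads hammer one URL for a fixed duration while we count completed requests. Everything here (the `mini_wrk` helper, the port numbers) is invented for illustration, and it is nowhere near wrk's optimized C implementation:

```python
# Toy load generator in the spirit of wrk: n_threads workers hammer one
# URL for `duration` seconds; we count completed requests.
# All names and port numbers are invented for illustration.
import threading
import time
from http.client import HTTPConnection
from http.server import BaseHTTPRequestHandler, HTTPServer

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Hello World")

    def log_message(self, fmt, *args):
        pass  # keep benchmark output clean

def _worker(port, stop_at, counts, i):
    conn = HTTPConnection("localhost", port, timeout=5)
    while time.time() < stop_at:
        conn.request("GET", "/")
        conn.getresponse().read()
        counts[i] += 1

def mini_wrk(port=8081, n_threads=4, duration=1.0):
    """Return completed requests/sec against a local hello server."""
    server = HTTPServer(("localhost", port), HelloHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    counts = [0] * n_threads
    stop_at = time.time() + duration
    workers = [threading.Thread(target=_worker, args=(port, stop_at, counts, i))
               for i in range(n_threads)]
    for w in workers:
        w.start()
    for w in workers:
        w.join()
    server.shutdown()
    server.server_close()
    return sum(counts) / duration

if __name__ == "__main__":
    print("requests/sec:", int(mini_wrk()))
```

Real wrk additionally pipelines with epoll/kqueue and keeps per-connection latency histograms, which this sketch does not attempt.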
Python standard library BaseHTTPRequestHandler implementation:
```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class GetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        message = "Hello World"
        self.send_response(200)
        self.end_headers()
        self.wfile.write(message.encode('utf-8'))

if __name__ == '__main__':
    server = HTTPServer(('localhost', 8080), GetHandler)
    print('starting server, use <Ctrl-C> to stop')
    server.serve_forever()
```
Result: only about 282 responses per second, and the longer the test ran, the lower the rate dropped.
That figure is about what you'd expect for a single-process, single-threaded server. Although the GIL is released during blocking I/O, releasing and reacquiring it also costs some performance.
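Incidentally, since Python 3.7 the standard library also ships a threaded variant that hands each connection to its own thread; while one handler blocks on socket I/O the GIL is released and others can run. A minimal sketch (the port number is arbitrary), still limited to one core for CPU-bound work:

```python
# Same handler as before, served by the stdlib's threaded server variant.
# Threads help with blocking I/O, but the GIL still serializes CPU work.
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

class GetHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write("Hello World".encode("utf-8"))

server = ThreadingHTTPServer(("localhost", 8090), GetHandler)
# server.serve_forever()  # uncomment to run
```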
```
Running 1m test @ http://127.0.0.1:8080
  10 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.05ms    6.73ms 265.58ms   98.90%
    Req/Sec   107.11    103.19     1.05k    84.08%
  16959 requests in 1.00m, 1.65MB read
  Socket errors: connect 0, read 19024, write 0, timeout 0
Requests/sec:    282.21
Transfer/sec:     28.11KB
```
Moving on to the asynchronous frameworks, let's start with the event-loop-based Tornado:
```python
import tornado.ioloop
import tornado.web

class MainHandler(tornado.web.RequestHandler):
    def get(self):
        self.write("Hello, World")

if __name__ == "__main__":
    application = tornado.web.Application([
        (r"/", MainHandler),
    ])
    application.listen(8080)
    tornado.ioloop.IOLoop.current().start()
```
Result: more than 1,300 responses per second, a clear improvement.
```
Running 1m test @ http://127.0.0.1:8080
  10 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   147.44ms    ….17ms 467.54ms   86.25%
    Req/Sec   141.40     57.52   202.00     65.17%
  81818 requests in 1.00m, 16.15MB read
  Socket errors: connect 0, read 1, write 0, timeout 0
Requests/sec:   1361.25
Transfer/sec:    275.17KB
```
Python 3 gained native coroutine support for working with the event loop. Tornado supports it too, but for convenience let's go straight to Sanic, reputedly the fastest of the async frameworks:
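The native-coroutine style these frameworks build on can be sketched with nothing but the standard library's asyncio: a hand-rolled responder that speaks just enough HTTP to return "Hello World". This is an illustrative sketch with invented names, not a real HTTP implementation:

```python
# Minimal asyncio "HTTP" responder: one coroutine per connection,
# all multiplexed on a single-threaded event loop.
import asyncio

RESPONSE = (b"HTTP/1.1 200 OK\r\n"
            b"Content-Length: 11\r\n"
            b"Connection: close\r\n"
            b"\r\n"
            b"Hello World")

async def handle(reader, writer):
    await reader.readline()  # consume the request line; headers are ignored here
    writer.write(RESPONSE)
    await writer.drain()
    writer.close()
    await writer.wait_closed()

async def main(port=8082):
    server = await asyncio.start_server(handle, "localhost", port)
    async with server:
        await server.serve_forever()

# To run the server: asyncio.run(main())
```

Frameworks like Sanic add real HTTP parsing, routing, and (often) a faster event loop such as uvloop on top of this same pattern.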
```python
from sanic import Sanic
from sanic.response import json

app = Sanic()  # newer Sanic versions require a name, e.g. Sanic("app")

@app.route("/")
async def test(request):
    return json({"Hello": "World"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", debug=False, port=8080)
```
Result: more than 4,400 responses per second. Looking good.
```
Running 1m test @ http://127.0.0.1:8080
  10 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    45.59ms   16.91ms 255.88ms   71.70%
    Req/Sec   443.64    111.85     0.89k    68.56%
  … requests in 1.00m, 32.09MB read
  Socket errors: connect …, read …, write 0, timeout 0
Requests/sec:   4408.87
Transfer/sec:    546.80KB
```
Finally, Golang, with a basic HTTP service based on the official documentation:
```go
package main

import (
	"fmt"
	"log"
	"net/http"
)

type Hello struct{}

func (h Hello) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	fmt.Fprint(w, "Hello world!")
}

func main() {
	h := Hello{}
	err := http.ListenAndServe("localhost:8080", h)
	if err != nil {
		log.Fatal(err)
	}
}
```
The result was an eye-opener: 35,365 responses per second. The Python services are not even in the same order of magnitude.
```
Running 1m test @ http://127.0.0.1:8080
  10 threads and 200 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     7.26ms    9.46ms   ….93ms   93.36%
    Req/Sec     3.56k     1.10k    ….98k    74.96%
  2125366 requests in 1.00m, 261.47MB read
Requests/sec:  35365.98
Transfer/sec:      4.35MB
```
To sum up: memory usage did not grow noticeably for any of the services during the tests, which makes sense, since none of them does any real work beyond returning a string. Golang's CPU usage grew significantly, and Sanic's grew almost as much, while the others grew only a little; still, all of the Python services are effectively limited to a single CPU core. At similar CPU usage, Golang's throughput is clearly better. Apart from the difference in how CPU cores are used, namely Golang's true parallelism, I can't think of another explanation. Both Python's async frameworks and Golang schedule their coroutines (or goroutines) on top of an event loop, though the implementations surely differ a great deal; that is something I'll keep studying. In any case, Golang's built-in concurrency support clearly lends itself to this kind of optimization.
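For completeness, the usual way to close the core-count gap on the Python side is a pre-fork model: create the listening socket once, then let several worker processes inherit it and accept connections in parallel. The `prefork` helper below is hypothetical, it relies on a fork-capable platform such as Linux, and production servers like gunicorn implement this idea with far more care:

```python
# Pre-fork sketch: the socket is bound once in the parent, and each
# forked worker process calls serve_forever() on it, so the kernel
# spreads incoming connections across all CPU cores.
import multiprocessing
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"Hello World")

def prefork(address, n_workers):
    server = HTTPServer(address, Handler)  # bind once, before forking
    workers = [multiprocessing.Process(target=server.serve_forever)
               for _ in range(n_workers)]
    for w in workers:
        w.start()  # children inherit the listening socket (fork platforms only)
    return server, workers

# server, workers = prefork(("localhost", 8080), multiprocessing.cpu_count())
```

Even then each worker remains a plain synchronous server, whereas Go multiplexes goroutines over all cores inside a single process.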
These tests were done out of curiosity; they are simple and not especially rigorous, but I think they still illustrate some real differences. If you find any problems, please leave a comment.
Basic HTTP Service performance test (Python vs Golang)