As we all know, Go 1.7 and earlier releases optimized the GC mainly for throughput under high concurrency, which does little for programs that keep a huge number of live objects in memory. The next release is 1.8, so will 1.8 bring a pleasant surprise on this front? Read on.
The benchmark has implementations in several languages, each a Web service that keeps 250K live objects in a large hashtable.
The service must meet the following 3 requirements:
1. Each HTTP request adds a new object and, once the count exceeds 250K, removes an old one
2. Each object is a 1KB []byte, initialized on creation
3. The program listens on port 8080; a successful request returns 200 with body "OK"
Test data:
1. Warm-up phase: initialize the hashtable and send requests at 9k/s for 60 seconds
2. Formal phase: start 99 clients and send requests at 9k/s for 180 seconds
Initial test:
Programming languages (latest versions at the time, unless noted):
1. Go: go1.6.2/amd64, using fasthttp because the standard library's net/http showed GC-related latency problems
2. OCaml (Reason): the high-performance language from academia
3. Node.js
4. Haskell
Test results:
Test again:
Since we heard that Go 1.8 optimizes the GC for large numbers of live objects, we pulled the master branch of 1.8 and ran the test again.
Test results:
Conclusion:
1. go1.8 significantly reduces request latency
2. There is no longer any need for fasthttp; net/http latency is already very low
3. Go is one of the preferred languages for high-concurrency, low-latency Web services: low resource usage, high throughput, low latency, even outperforming academia's OCaml