In my previous article (a comprehensive performance comparison of Go HTTP routing frameworks), Iris was the clear winner of the comparison: its performance far exceeded that of the other Golang HTTP routing frameworks.
But, in the real world, is Iris really the fastest Golang HTTP routing framework?
2016-04-05 Update: I have submitted a bug report, and the author Makis has provided a temporary fix. Performance has been restored, so readers who plan to use Iris need not worry.
According to my latest tests, Iris now performs as follows:
- When the business logic takes 10 milliseconds, the throughput rate can reach 9281 requests/s
- When the business logic takes 1000 milliseconds, the throughput rate can reach request/s
The performance has been very good.
I will also test the other routing frameworks to see whether they have the same problem described in this article.
Benchmark Test Analysis
In that article I used Julien Schmidt's test code, which simulates static routes and the GitHub, Google+, and Parse APIs. Because these are the open APIs of well-known websites, the test looks quite reliable.
However, this test has a serious problem: the handler business logic is trivial, and the handlers of all frameworks are similarly empty. For example, the Iris handlers are implemented like this:
```go
func irisHandler(_ *iris.Context) {}

func irisHandlerWrite(c *iris.Context) {
    io.WriteString(c.ResponseWriter, c.Param("name"))
}

func irisHandlerTest(c *iris.Context) {
    io.WriteString(c.ResponseWriter, c.Request.RequestURI)
}
```
There is almost no business logic; most handlers just write a string to the response.
This is very different from a production environment!
A real product certainly does some business processing, such as parameter validation, data computation, local file reads, remote service calls, cache reads, and database reads and writes. Some operations finish quickly, in one or two milliseconds, while others take much longer, perhaps dozens of milliseconds, for example:
- Reading data from a network connection
- Write data to the hard disk
- Call other services and wait for the return of the service result
- ......
This is the usual case for us, not simply writing a string.
So the handlers used to benchmark a framework should include this kind of time-consuming work as well.
Simulating the real handler situation
Let's simulate the real situation and see how the Iris framework and Golang's built-in HTTP routing perform.
First, use Iris to implement an HTTP Server:
```go
package main

import (
    "os"
    "strconv"
    "time"

    "github.com/kataras/iris"
)

func main() {
    api := iris.New()
    api.Get("/rest/hello", func(c *iris.Context) {
        sleepTime, _ := strconv.Atoi(os.Args[1])
        if sleepTime > 0 {
            time.Sleep(time.Duration(sleepTime) * time.Millisecond)
        }
        c.Text("Hello World")
    })
    api.Listen(":8080")
}
```
We pass it a parameter, sleeptime, to simulate the time the handler spends on business processing: the handler pauses for sleeptime milliseconds. If it is 0, it does not pause at all, which is similar to the benchmark above.
Then we use Go's built-in routing to implement an HTTP server:
```go
package main

import (
    "log"
    "net/http"
    "os"
    "strconv"
    "time"
)

// There are some Golang RESTful libraries and mux libraries, but I use the simplest one to test.
func main() {
    http.HandleFunc("/rest/hello", func(w http.ResponseWriter, r *http.Request) {
        sleepTime, _ := strconv.Atoi(os.Args[1])
        if sleepTime > 0 {
            time.Sleep(time.Duration(sleepTime) * time.Millisecond)
        }
        w.Write([]byte("Hello World"))
    })

    err := http.ListenAndServe(":8080", nil)
    if err != nil {
        log.Fatal("ListenAndServe: ", err)
    }
}
```
Compile the two programs and start testing.
1. Test with the business logic taking 0 ms

Run the program as `iris 0`, then run `wrk -t16 -c100 -d30s http://127.0.0.1:8080/rest/hello`, which tests with 100 concurrent connections for 30 seconds.
Iris has a throughput rate of 46155 Requests/second.
Run the program as `gomux 0`, then run the same `wrk -t16 -c100 -d30s http://127.0.0.1:8080/rest/hello` test (100 concurrent connections for 30 seconds).
The go built-in routing program has a throughput rate of 55944 Requests/second.
The throughput difference between the two is not big; Iris is slightly lower.
2. Test with the business logic taking 10 ms

Run `iris 10`, then run `wrk -t16 -c100 -d30s http://127.0.0.1:8080/rest/hello` (100 concurrent connections for 30 seconds).
Iris's throughput rate is requests/second.
Run `gomux 10`, then run `wrk -t16 -c100 -d30s http://127.0.0.1:8080/rest/hello` (100 concurrent connections for 30 seconds).
The go built-in routing program has a throughput rate of 9294 Requests/second.
3. Test with the business logic taking 1000 ms

This simulates an extreme situation where the business processing is slow and takes 1 second per request.

Run `iris 1000`, then run `wrk -t16 -c100 -d30s http://127.0.0.1:8080/rest/hello` (100 concurrent connections for 30 seconds).
Iris has a throughput rate of 1 request/second.
Run `gomux 1000`, then run `wrk -t16 -c100 -d30s http://127.0.0.1:8080/rest/hello` (100 concurrent connections for 30 seconds).
The go built-in routing program has a throughput rate of requests/second.
As you can see, once the business logic takes real time, Go's built-in routing performs far better than Iris; in fact Iris's routing is simply unusable for such workloads, because its throughput drops sharply as the business logic takes longer.
With Go's built-in routing, as the time spent in the business logic increases, a single client waits longer, but with a large number of concurrent connections the throughput rate does not drop much.
For example, testing `gomux 10` and `gomux 1000` with a concurrency of 1000:

- `gomux 10`: throughput rate is 47664 requests/second
- `gomux 1000`: throughput rate is 979 requests/second
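These numbers line up with a rough capacity estimate. With one goroutine per connection and a handler that mostly sleeps, throughput is bounded by roughly concurrency × 1000 / handler-time-in-ms. The sketch below (my own, not part of the test programs above) just does that arithmetic:

```go
package main

import "fmt"

// estimate is a rough upper bound on throughput when the handler is purely
// bound by the simulated sleep: each of the `concurrency` connections can
// finish about 1000/latencyMs requests per second.
func estimate(concurrency, latencyMs int) float64 {
    return float64(concurrency) * 1000.0 / float64(latencyMs)
}

func main() {
    fmt.Println(estimate(100, 10))    // ≈ 10000 req/s; measured 9294 for gomux 10 at concurrency 100
    fmt.Println(estimate(100, 1000))  // ≈ 100 req/s at concurrency 100
    fmt.Println(estimate(1000, 1000)) // ≈ 1000 req/s; measured 979 for gomux 1000 at concurrency 1000
}
```

The measured results for the built-in router stay close to this bound; in real deployments, CPU, memory, and network limits will of course kick in before the arithmetic does.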
This matches the reality of an HTTP site: what matters is the amount of concurrency the site can handle, and it should support as many simultaneous users as possible, even if an individual user has to wait a few hundred milliseconds for a page.
Iris cannot sustain a high throughput rate when the business logic takes longer; even with large concurrency (such as 1000), its throughput remains low.
Exploring the implementation of the Go HTTP server
The Go HTTP server starts a goroutine for each request (goroutine per request); taking HTTP keep-alive into account, it is more accurate to say that each connection corresponds to one goroutine (goroutine per connection).
Because goroutines are very lightweight, Go is not like Java, where thread-per-request can exhaust server resources by creating too many threads; Golang can create as many goroutines as needed, so goroutine-per-request is not a problem in Golang. There is also a benefit: because a request is handled within a single goroutine, there is no need to worry about concurrent reads and writes to the same request/response.
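As a quick illustration of how cheap goroutines are (a standalone sketch of my own, not from the test programs in this article), you can park a very large number of goroutines, the way idle connections would, and watch the memory cost stay modest compared with the same number of OS threads:

```go
package main

import (
    "fmt"
    "runtime"
    "sync"
)

func main() {
    block := make(chan struct{})
    var wg sync.WaitGroup
    for i := 0; i < 100000; i++ {
        wg.Add(1)
        go func() {
            defer wg.Done()
            <-block // park, like a goroutine waiting on an idle connection
        }()
    }

    var m runtime.MemStats
    runtime.ReadMemStats(&m)
    fmt.Printf("goroutines: %d, stack memory in use: ~%d MB\n",
        runtime.NumGoroutine(), m.StackInuse/1024/1024)

    close(block) // let them all finish
    wg.Wait()
}
```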
How do we see which goroutine a handler is executed in? We need a function that returns the ID of the current goroutine:
```go
import (
    "fmt"
    "runtime"
    "strconv"
    "strings"
)

func goID() int {
    var buf [64]byte
    n := runtime.Stack(buf[:], false)
    idField := strings.Fields(strings.TrimPrefix(string(buf[:n]), "goroutine "))[0]
    id, err := strconv.Atoi(idField)
    if err != nil {
        panic(fmt.Sprintf("cannot get goroutine id: %v", err))
    }
    return id
}
```
Then print the current goroutine ID in the handler:
```go
func(c *iris.Context) {
    fmt.Println(goID())
    ...
}
```
And
```go
func(w http.ResponseWriter, r *http.Request) {
    fmt.Println(goID())
    ...
}
```
Start `gomux 0`, then run `ab -c 5 -n 10 http://localhost:8080/rest/hello`; Apache's ab command here uses 5 concurrent clients, each sending two requests to the server.
You can see the output of the server:
```
21
18
17
19
20
33
35
36
37
34
```
Because the `-k` parameter is not specified, each request opens a new connection, so each client's two requests create two connections. If you add the `-k` parameter, you will see duplicate goroutine IDs, which shows that the same persistent connection is handled by the same goroutine.
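To see the same two behaviours from code instead of ab, here is a small client sketch (my own; it assumes the `gomux 0` server above is listening on :8080). The default `http.Client` reuses a keep-alive connection, so the server should print the same goroutine ID twice; a transport with `DisableKeepAlives` opens a new connection, and therefore lands on a new server goroutine, for every request:

```go
package main

import (
    "io"
    "io/ioutil"
    "net/http"
)

func get(c *http.Client, n int) {
    for i := 0; i < n; i++ {
        resp, err := c.Get("http://localhost:8080/rest/hello")
        if err != nil {
            panic(err)
        }
        // Drain the body so the connection can go back to the pool and be reused.
        io.Copy(ioutil.Discard, resp.Body)
        resp.Body.Close()
    }
}

func main() {
    // Keep-alive (the default): both requests reuse one connection.
    get(&http.Client{}, 2)

    // Keep-alive disabled: every request opens a fresh connection.
    get(&http.Client{Transport: &http.Transport{DisableKeepAlives: true}}, 2)
}
```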
The experiments above verify the theory; the following is the code analysis.
In `net/http/server.go`, at line 2146, `go c.serve()` shows that a new goroutine is started for each HTTP connection:
```go
func (srv *Server) Serve(l net.Listener) error {
    defer l.Close()
    if fn := testHookServerServe; fn != nil {
        fn(srv, l)
    }
    var tempDelay time.Duration // how long to sleep on accept failure
    if err := srv.setupHTTP2(); err != nil {
        return err
    }
    for {
        rw, e := l.Accept()
        ...
        tempDelay = 0
        c := srv.newConn(rw)
        c.setState(c.rwc, StateNew) // before Serve can return
        go c.serve()
    }
}
```
The `c.serve` method reads requests from the connection and hands them to the handler for processing:
```go
func (c *conn) serve() {
    ...
    for {
        w, err := c.readRequest()
        ...
        req := w.req
        serverHandler{c.server}.ServeHTTP(w, w.req)
        if c.hijacked() {
            return
        }
        w.finishRequest()
        if !w.shouldReuseConnection() {
            if w.requestBodyLimitHit || w.closedRequestBodyEarly() {
                c.closeWriteAndWait()
            }
            return
        }
        c.setState(c.rwc, StateIdle)
    }
}
```
The implementation of `ServeHTTP` is as follows; if no handler or router is configured, the default `DefaultServeMux` is used.
```go
func (sh serverHandler) ServeHTTP(rw ResponseWriter, req *Request) {
    handler := sh.srv.Handler
    if handler == nil {
        handler = DefaultServeMux
    }
    if req.RequestURI == "*" && req.Method == "OPTIONS" {
        handler = globalOptionsHandler{}
    }
    handler.ServeHTTP(rw, req)
}
```
As you can see, no new goroutine is started here; the handler runs in the goroutine that corresponds to the connection. With keep-alive, subsequent requests are also handled in the goroutine corresponding to that connection.
As noted in the comment in the source:
```go
// HTTP cannot have multiple simultaneous active requests.[*]
// Until the server replies to this request, it can't read another,
// so we might as well run the handler in this goroutine.
// [*] Not strictly true: HTTP pipelining. We could let them all process
// in parallel even if their responses need to be serialized.
serverHandler{c.server}.ServeHTTP(w, w.req)
```
Therefore, the time spent in the business logic affects the execution time of a single goroutine and shows up at the client's browser as increased latency. As long as the concurrency is high enough that there are plenty of goroutines for the scheduler to run, the throughput rate is not severely affected.
Iris's analysis
If you use Iris and check which goroutine each handler runs in, you will also find that each connection is handled by a different goroutine. So where does the performance go?
Or, what causes Iris's performance to drop sharply?
The way the Iris server listens and starts a goroutine for each connection is not significantly different; the important difference lies in the router's logic for handling a request.
The reason is that Iris caches the context to improve performance: for the same request URL and method, it reuses the same context from the cache.
```go
func (r *MemoryRouter) ServeHTTP(res http.ResponseWriter, req *http.Request) {
    if ctx := r.cache.GetItem(req.Method, req.URL.Path); ctx != nil {
        ctx.Redo(res, req)
        return
    }

    ctx := r.getStation().pool.Get().(*Context)
    ctx.Reset(res, req)
    if r.processRequest(ctx) {
        // if something found and served then add it's clone to the cache
        r.cache.AddItem(req.Method, req.URL.Path, ctx.Clone())
    }

    r.getStation().pool.Put(ctx)
}
```
When the concurrency is large, multiple client requests enter the `ServeHTTP` method above, and requests for the same URL and method fall into the following logic:
```go
if ctx := r.cache.GetItem(req.Method, req.URL.Path); ctx != nil {
    ctx.Redo(res, req)
    return
}
```
`ctx.Redo(res, req)` causes the request to loop and retry until the earlier request has been processed and its context has been put back into the pool.
So for Iris, when the concurrency is large, requests for the same URL path and method effectively queue up, resulting in poor performance.
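A simplified illustration of this queueing effect (my own sketch, not Iris's actual code): if every request for a path has to take turns on one shared object, a 10 ms handler turns 100 concurrent requests into roughly one second of wall time, which is exactly the kind of throughput collapse measured above:

```go
package main

import (
    "fmt"
    "sync"
    "time"
)

// sharedCtx stands in for the single cached context of one route.
type sharedCtx struct{ mu sync.Mutex }

// handle simulates a 10ms handler that can only run while holding the
// shared object, the way queued requests wait for the cached context.
func (c *sharedCtx) handle() {
    c.mu.Lock()
    defer c.mu.Unlock()
    time.Sleep(10 * time.Millisecond)
}

func main() {
    ctx := &sharedCtx{}
    start := time.Now()

    var wg sync.WaitGroup
    for i := 0; i < 100; i++ { // 100 concurrent "requests" for the same URL
        wg.Add(1)
        go func() {
            defer wg.Done()
            ctx.handle()
        }()
    }
    wg.Wait()

    // Roughly 1s: 100 × 10ms run one after another, instead of ~10ms
    // if every request had its own context.
    fmt.Println("elapsed:", time.Since(start))
}
```

With one context per request (for example a plain `sync.Pool` and no cache), the same 100 goroutines would finish in about 10 ms.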
Resources
- https://blog.golang.org/context
- https://www.reddit.com/r/golang/comments/3xz1f3/go_http_server_and_go_routines/
- http://screamingatmyscreen.com/2013/6/http-request-and-goroutines/
- https://groups.google.com/forum/#!topic/golang-nuts/IWCZ_PQU8R4
- https://groups.google.com/forum/#!topic/golang-nuts/IC3FXWZRYHS