Using Go to Test Nginx Performance

Source: Internet
Author: User
Tags: nginx, load balancing, Intel Core i7

There are many ways to serve an HTTP application written in Go, and the best choice depends on each application's circumstances. Nginx seems to have become the standard web server for every new project, even though there are many other good web servers. Still, what is the cost of serving a Go application through Nginx? Do we need Nginx features (vhosts, load balancing, caching, etc.), or should we serve directly from Go? And if you do need Nginx, what is the fastest connection mechanism? These are the questions I try to answer here. The purpose of this benchmark is not to verify whether Go is faster or slower than Nginx; that would be silly.

We need to compare different settings:

  • Go HTTP standalone (as the control group)
  • Nginx proxy to Go HTTP
  • Nginx fastcgi to Go TCP FastCGI
  • Nginx fastcgi to Go Unix Socket FastCGI
Hardware

Since all the setups will be compared on the same hardware, a cheap machine will do; it shouldn't matter much.

  • Samsung notebook NP550P5C-AD1BR
  • Intel Core i7 3630QM @ 2.4 GHz (quad core, 8 threads)
  • CPU caches: L1 256 KiB, L2 1 MiB, L3 6 MiB
  • RAM 8GiB DDR3 1600 MHz
Software
  • Ubuntu 13.10 amd64 Saucy Salamander (updated)
  • Nginx 1.4.4 (1.4.4-1 ~ Saucy0 amd64)
  • Go 1.2 (linux/amd64)
  • wrk 3.0.4
Kernel tuning

Only a few tweaks are needed to raise the kernel's limits. If you have better suggestions for these settings, please leave a comment below:

fs.file-max 9999999
fs.nr_open 9999999
net.core.netdev_max_backlog 4096
net.core.rmem_max 16777216
net.core.somaxconn 65535
net.core.wmem_max 16777216
net.ipv4.ip_forward 0
net.ipv4.ip_local_port_range 1025 65535
net.ipv4.tcp_fin_timeout 30
net.ipv4.tcp_keepalive_time 30
net.ipv4.tcp_max_syn_backlog 20480
net.ipv4.tcp_max_tw_buckets 400000
net.ipv4.tcp_no_metrics_save 1
net.ipv4.tcp_syn_retries 2
net.ipv4.tcp_synack_retries 2
net.ipv4.tcp_tw_recycle 1
net.ipv4.tcp_tw_reuse 1
vm.min_free_kbytes 65536
vm.overcommit_memory 1
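
These keys can be applied without rebooting using the standard sysctl tool. A minimal sketch, assuming the values above are written as key = value lines in /etc/sysctl.conf:

$ sudo sysctl -w net.core.somaxconn=65535   # set a single key at runtime
$ sudo sysctl -p                            # reload all keys from /etc/sysctl.conf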

Limits

The maximum number of open files for the root and www-data users is set to 200000.
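
The article doesn't show how this was configured; one common way, assuming PAM's pam_limits module is active, is to add entries like these to /etc/security/limits.conf:

root     soft nofile 200000
root     hard nofile 200000
www-data soft nofile 200000
www-data hard nofile 200000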

Nginx

Several Nginx adjustments are required. Some people told me to disable gzip to keep the comparison fair. Below is the configuration file /etc/nginx/nginx.conf:

user www-data;
worker_processes auto;
worker_rlimit_nofile 200000;
pid /var/run/nginx.pid;

events {
    worker_connections 10000;
    use epoll;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 300;
    keepalive_requests 10000;
    types_hash_max_size 2048;

    open_file_cache max=200000 inactive=300s;
    open_file_cache_valid 300s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    server_tokens off;
    dav_methods off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log combined;
    error_log /var/log/nginx/error.log warn;

    gzip off;
    gzip_vary off;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*.conf;
}
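
After editing, the configuration can be sanity-checked and Nginx restarted with the standard commands:

$ sudo nginx -t               # validate the configuration syntax
$ sudo service nginx restart  # apply it on Ubuntu 13.10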

Nginx vhosts

upstream go_http {
    server 127.0.0.1:8080;
    keepalive 300;
}

server {
    listen 80;
    server_name go.http;
    access_log off;
    error_log /dev/null crit;

    location / {
        proxy_pass http://go_http;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

upstream go_fcgi_tcp {
    server 127.0.0.1:9001;
    keepalive 300;
}

server {
    listen 80;
    server_name go.fcgi.tcp;
    access_log off;
    error_log /dev/null crit;

    location / {
        include fastcgi_params;
        fastcgi_keep_conn on;
        fastcgi_pass go_fcgi_tcp;
    }
}

upstream go_fcgi_unix {
    server unix:/tmp/go.sock;
    keepalive 300;
}

server {
    listen 80;
    server_name go.fcgi.unix;
    access_log off;
    error_log /dev/null crit;

    location / {
        include fastcgi_params;
        fastcgi_keep_conn on;
        fastcgi_pass go_fcgi_unix;
    }
}
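
For curl and wrk to resolve the three test hostnames (the original doesn't show this step, so this is an assumption), an /etc/hosts entry like the following does the job:

127.0.0.1   go.http go.fcgi.tcp go.fcgi.unix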

Go source code

package main

import (
	"fmt"
	"log"
	"net"
	"net/http"
	"net/http/fcgi"
	"os"
	"os/signal"
	"syscall"
)

var (
	abort bool
)

const (
	SOCK = "/tmp/go.sock"
)

type Server struct {
}

func (s Server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
	body := "Hello World\n"
	// Try to keep the same amount of headers
	w.Header().Set("Server", "gophr")
	w.Header().Set("Connection", "keep-alive")
	w.Header().Set("Content-Type", "text/plain")
	w.Header().Set("Content-Length", fmt.Sprint(len(body)))
	fmt.Fprint(w, body)
}

func main() {
	sigchan := make(chan os.Signal, 1)
	signal.Notify(sigchan, os.Interrupt)
	signal.Notify(sigchan, syscall.SIGTERM)

	server := Server{}

	// Plain HTTP on port 8080
	go func() {
		http.Handle("/", server)
		if err := http.ListenAndServe(":8080", nil); err != nil {
			log.Fatal(err)
		}
	}()

	// FastCGI over TCP on port 9001
	go func() {
		tcp, err := net.Listen("tcp", ":9001")
		if err != nil {
			log.Fatal(err)
		}
		fcgi.Serve(tcp, server)
	}()

	// FastCGI over a Unix domain socket
	go func() {
		unix, err := net.Listen("unix", SOCK)
		if err != nil {
			log.Fatal(err)
		}
		fcgi.Serve(unix, server)
	}()

	<-sigchan

	// Clean up the socket file on shutdown
	if err := os.Remove(SOCK); err != nil {
		log.Fatal(err)
	}
}
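
The server can then be built and started once per GOMAXPROCS value; the source file and binary names here are my assumption:

$ go build -o gophr gophr.go
$ GOMAXPROCS=1 ./gophr    # first round of tests
$ GOMAXPROCS=8 ./gophr    # second round of tests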

Check the HTTP headers

For fairness, all responses must be the same size.

$ curl -sI http://127.0.0.1:8080/
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 12
Content-Type: text/plain
Server: gophr
Date: Sun, 15 Dec 2013 14:59:14 GMT

$ curl -sI http://127.0.0.1:8080/ | wc -c
141

 

$ curl -sI http://go.http/
HTTP/1.1 200 OK
Server: nginx
Date: Sun, 15 Dec 2013 14:59:31 GMT
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive

$ curl -sI http://go.http/ | wc -c
141

 

$ curl -sI http://go.fcgi.tcp/
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Date: Sun, 15 Dec 2013 14:59:40 GMT
Server: gophr

$ curl -sI http://go.fcgi.tcp/ | wc -c
141

 

$ curl -sI http://go.fcgi.unix/
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Date: Sun, 15 Dec 2013 15:00:15 GMT
Server: gophr

$ curl -sI http://go.fcgi.unix/ | wc -c
141

 

Start the engine
  • Use sysctl to configure the kernel
  • Configure Nginx
  • Configure Nginx vhosts
  • Start the Go service as the www-data user
  • Run the Benchmark Test
Benchmark Tests

GOMAXPROCS=1

Go standalone

# wrk -t100 -c5000 -d30s http://127.0.0.1:8080/
Running 30s test @ http://127.0.0.1:8080/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   116.96ms   17.76ms 173.96ms   85.31%
    Req/Sec   429.16     49.20   589.00     69.44%
  1281567 requests in 29.98s, 215.11MB read
Requests/sec:  42745.15
Transfer/sec:      7.17MB

Nginx + Go through HTTP

# wrk -t100 -c5000 -d30s http://go.http/
Running 30s test @ http://go.http/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   124.57ms   18.26ms 209.70ms   80.17%
    Req/Sec   406.29     56.94     0.87k    89.41%
  1198450 requests in 29.97s, 201.16MB read
Requests/sec:  39991.57
Transfer/sec:      6.71MB

Nginx + Go through FastCGI TCP

# wrk -t100 -c5000 -d30s http://go.fcgi.tcp/
Running 30s test @ http://go.fcgi.tcp/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   514.57ms  119.80ms    1.21s   71.85%
    Req/Sec    97.18     22.56   263.00     79.59%
  287416 requests in 30.00s, 48.24MB read
  Socket errors: connect 0, read 0, write 0, timeout 661
Requests/sec:   9580.75
Transfer/sec:      1.61MB

Nginx + Go through FastCGI Unix Socket

# wrk -t100 -c5000 -d30s http://go.fcgi.unix/
Running 30s test @ http://go.fcgi.unix/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   425.64ms   80.53ms 925.03ms   76.88%
    Req/Sec   117.03     22.13   255.00     81.30%
  350162 requests in 30.00s, 58.77MB read
  Socket errors: connect 0, read 0, write 0, timeout 210
Requests/sec:  11670.72
Transfer/sec:      1.96MB

GOMAXPROCS=8

Go standalone

# wrk -t100 -c5000 -d30s http://127.0.0.1:8080/
Running 30s test @ http://127.0.0.1:8080/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    39.25ms    8.49ms  86.45ms   81.39%
    Req/Sec     1.29k    129.27     1.79k   69.23%
  3837995 requests in 29.89s, 644.19MB read
Requests/sec: 128402.88
Transfer/sec:     21.55MB

Nginx + Go through HTTP

# wrk -t100 -c5000 -d30s http://go.http/
Running 30s test @ http://go.http/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   336.77ms  297.88ms 632.52ms   60.16%
    Req/Sec     2.36k     2.99k    19.11k   84.83%
  2232068 requests in 29.98s, 374.64MB read
Requests/sec:  74442.91
Transfer/sec:     12.49MB

Nginx + Go through FastCGI TCP

# wrk -t100 -c5000 -d30s http://go.fcgi.tcp/
Running 30s test @ http://go.fcgi.tcp/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   217.69ms  121.22ms    1.80s   75.14%
    Req/Sec   263.09    102.78   629.00     62.54%
  721027 requests in 30.01s, 121.02MB read
  Socket errors: connect 0, read 0, write 176, timeout 1343
Requests/sec:  24026.50
Transfer/sec:      4.03MB

Nginx + Go through FastCGI Unix Socket

# wrk -t100 -c5000 -d30s http://go.fcgi.unix/
Running 30s test @ http://go.fcgi.unix/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   694.32ms  332.27ms    1.79s   62.13%
    Req/Sec   646.86    669.65      6.11k   87.80%
  909836 requests in 30.00s, 152.71MB read
Requests/sec:  30324.77
Transfer/sec:      5.09MB

Conclusion

In the first run of this benchmark, some Nginx settings were not well optimized (gzip was enabled and keep-alive connections to the Go backend were not used). After switching to wrk and tuning Nginx as recommended, the results were significantly different.

With GOMAXPROCS=1 the Nginx overhead is not that large, but with GOMAXPROCS=8 the difference is huge. I may try other settings later. If you need Nginx features such as virtual hosts, load balancing, or caching, use the HTTP proxy instead of FastCGI. Some people say that Go's FastCGI implementation is not well optimized, which may explain the huge differences in these results.

