How to benchmark a Go web application served behind Nginx


There are many ways to serve a Go HTTP application, and the best choice depends on each application's circumstances. Nginx currently seems to be the de facto standard web server for every new project, even though there are many other good web servers. Still, how much does it cost to serve a Go application through Nginx? Do we need Nginx features (vhosts, load balancing, caching, etc.), or should we serve directly from Go? And if you do need Nginx, what is the fastest connection mechanism? Those are the questions I try to answer here. The purpose of this benchmark is not to prove that Go is faster or slower than Nginx; that would be a silly comparison.

Here are the different settings we want to compare:

    • Go HTTP standalone (as the control group)
    • Nginx proxy to Go HTTP
    • Nginx FastCGI to Go FastCGI over TCP
    • Nginx FastCGI to Go FastCGI over a Unix socket


Hardware

Since all settings are compared on the same hardware, an inexpensive machine was chosen. This should not be a big problem.

    • Samsung Notebook NP550P5C-AD1BR
    • Intel Core i7 3630QM @ 2.4 GHz (quad core, 8 threads)
    • CPU caches: L1: 256 KiB, L2: 1 MiB, L3: 6 MiB
    • RAM 8GiB DDR3 1600MHz

Software

    • Ubuntu 13.10 AMD64 Saucy Salamander (updated)
    • Nginx 1.4.4 (1.4.4-1~saucy0 amd64)
    • Go 1.2 (linux/amd64)
    • wrk 3.0.4

Setup
Kernel

With a little bit of tweaking, the kernel limits were raised. If you have better suggestions for these variables, please leave them in the comments below:

fs.file-max = 9999999
fs.nr_open = 9999999
net.core.netdev_max_backlog = 4096
net.core.rmem_max = 16777216
net.core.somaxconn = 65535
net.core.wmem_max = 16777216
net.ipv4.ip_forward = 0
net.ipv4.ip_local_port_range = 1025 65535
net.ipv4.tcp_fin_timeout = 30
net.ipv4.tcp_keepalive_time = 30
net.ipv4.tcp_max_syn_backlog = 20480
net.ipv4.tcp_max_tw_buckets = 400000
net.ipv4.tcp_no_metrics_save = 1
net.ipv4.tcp_syn_retries = 2
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
vm.min_free_kbytes = 65536
vm.overcommit_memory = 1
Limits

The maximum number of open files for root and www-data was set to 200000.
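The article does not show the limits configuration itself. A minimal sketch of how it might look in /etc/security/limits.conf (the file path and the soft/hard split are assumptions; only the 200000 value comes from the text):

```
root     soft nofile 200000
root     hard nofile 200000
www-data soft nofile 200000
www-data hard nofile 200000
```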
Nginx

A few Nginx settings need to be adjusted. As some people suggested, gzip was disabled to keep the comparison fair. Here is the configuration file /etc/nginx/nginx.conf:


user www-data;
worker_processes auto;
worker_rlimit_nofile 200000;
pid /var/run/nginx.pid;

events {
    worker_connections 10000;
    use epoll;
    multi_accept on;
}

http {
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 300;
    keepalive_requests 10000;
    types_hash_max_size 2048;

    open_file_cache max=200000 inactive=300s;
    open_file_cache_valid 300s;
    open_file_cache_min_uses 2;
    open_file_cache_errors on;

    server_tokens off;
    dav_methods off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    access_log /var/log/nginx/access.log combined;
    error_log /var/log/nginx/error.log warn;

    gzip off;
    gzip_vary off;

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*.conf;
}
Nginx vhosts

upstream go_http {
    server 127.0.0.1:8080;
    keepalive 300;
}

server {
    listen 80;
    server_name go.http;
    access_log off;
    error_log /dev/null crit;

    location / {
        proxy_pass http://go_http;
        proxy_http_version 1.1;
        proxy_set_header Connection "";
    }
}

upstream go_fcgi_tcp {
    server 127.0.0.1:9001;
    keepalive 300;
}

server {
    listen 80;
    server_name go.fcgi.tcp;
    access_log off;
    error_log /dev/null crit;

    location / {
        include fastcgi_params;
        fastcgi_keep_conn on;
        fastcgi_pass go_fcgi_tcp;
    }
}

upstream go_fcgi_unix {
    server unix:/tmp/go.sock;
    keepalive 300;
}

server {
    listen 80;
    server_name go.fcgi.unix;
    access_log off;
    error_log /dev/null crit;

    location / {
        include fastcgi_params;
        fastcgi_keep_conn on;
        fastcgi_pass go_fcgi_unix;
    }
}


Go Source


package main

import (
    "fmt"
    "log"
    "net"
    "net/http"
    "net/http/fcgi"
    "os"
    "os/signal"
    "syscall"
)

const (
    sock = "/tmp/go.sock"
)

type server struct {
}

func (s server) ServeHTTP(w http.ResponseWriter, r *http.Request) {
    body := "Hello world\n"
    // Try to send the same number of headers in every setup
    w.Header().Set("Server", "gophr")
    w.Header().Set("Connection", "keep-alive")
    w.Header().Set("Content-Type", "text/plain")
    w.Header().Set("Content-Length", fmt.Sprint(len(body)))
    fmt.Fprint(w, body)
}

func main() {
    sigchan := make(chan os.Signal, 1)
    signal.Notify(sigchan, os.Interrupt)
    signal.Notify(sigchan, syscall.SIGTERM)

    server := server{}

    // Plain HTTP on :8080
    go func() {
        http.Handle("/", server)
        if err := http.ListenAndServe(":8080", nil); err != nil {
            log.Fatal(err)
        }
    }()

    // FastCGI over TCP on :9001
    go func() {
        tcp, err := net.Listen("tcp", ":9001")
        if err != nil {
            log.Fatal(err)
        }
        fcgi.Serve(tcp, server)
    }()

    // FastCGI over a Unix socket
    go func() {
        unix, err := net.Listen("unix", sock)
        if err != nil {
            log.Fatal(err)
        }
        fcgi.Serve(unix, server)
    }()

    // Block until interrupted, then remove the socket file
    <-sigchan

    if err := os.Remove(sock); err != nil {
        log.Fatal(err)
    }
}

Check the HTTP headers

For the sake of fairness, all responses must be the same size.

$ curl -si http://127.0.0.1:8080/
HTTP/1.1 200 OK
Connection: keep-alive
Content-Length: 12
Content-Type: text/plain
Server: gophr
Date: Sun, Dec 2013 14:59:14 GMT

$ curl -si http://127.0.0.1:8080/ | wc -c
141

$ curl -si http://go.http/
HTTP/1.1 200 OK
Server: nginx
Date: Sun, Dec 2013 14:59:31 GMT
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive

$ curl -si http://go.http/ | wc -c
141

$ curl -si http://go.fcgi.tcp/
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Date: Sun, Dec 2013 14:59:40 GMT
Server: gophr

$ curl -si http://go.fcgi.tcp/ | wc -c
141

$ curl -si http://go.fcgi.unix/
HTTP/1.1 200 OK
Content-Type: text/plain
Content-Length: 12
Connection: keep-alive
Date: Sun, Dec 2013 15:00:15 GMT
Server: gophr

$ curl -si http://go.fcgi.unix/ | wc -c
141

Start the engines

    • Configure the kernel with sysctl
    • Configure Nginx
    • Configure the Nginx vhosts
    • Start the Go service as www-data
    • Run the benchmarks
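A sketch of those steps as shell commands (the binary name gophr and the exact service commands are assumptions, not from the article):

```
# apply the kernel settings
sysctl -p

# check and restart nginx with the config and vhosts above
nginx -t && service nginx restart

# run the Go server built from the source above as www-data
sudo -u www-data ./gophr &

# run a benchmark against one of the setups
wrk -t100 -c5000 -d30s http://127.0.0.1:8080/
```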

Benchmark Test

GOMAXPROCS = 1
Go standalone


# wrk -t100 -c5000 -d30s http://127.0.0.1:8080/
Running 30s test @ http://127.0.0.1:8080/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   116.96ms   17.76ms 173.96ms   85.31%
    Req/Sec   429.16     49.20   589.00     69.44%
  1281567 requests in 29.98s, 215.11MB read
Requests/sec:  42745.15
Transfer/sec:      7.17MB
Nginx + Go through HTTP


# wrk -t100 -c5000 -d30s http://go.http/
Running 30s test @ http://go.http/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   124.57ms   18.26ms 209.70ms   80.17%
    Req/Sec   406.29     56.94     0.87k    89.41%
  1198450 requests in 29.97s, 201.16MB read
Requests/sec:  39991.57
Transfer/sec:      6.71MB


Nginx + Go through FastCGI TCP


# wrk -t100 -c5000 -d30s http://go.fcgi.tcp/
Running 30s test @ http://go.fcgi.tcp/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   514.57ms  119.80ms   1.21s    71.85%
    Req/Sec    97.18     22.56   263.00     79.59%
  287416 requests in 30.00s, 48.24MB read
  Socket errors: connect 0, read 0, write 0, timeout 661
Requests/sec:   9580.75
Transfer/sec:      1.61MB
Nginx + Go through FastCGI Unix socket


# wrk -t100 -c5000 -d30s http://go.fcgi.unix/
Running 30s test @ http://go.fcgi.unix/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   425.64ms   80.53ms 925.03ms   76.88%
    Req/Sec   117.03     22.13   255.00     81.30%
  350162 requests in 30.00s, 58.77MB read
  Socket errors: connect 0, read 0, write 0, timeout 210
Requests/sec:  11670.72
Transfer/sec:      1.96MB


GOMAXPROCS = 8
Go standalone


# wrk -t100 -c5000 -d30s http://127.0.0.1:8080/
Running 30s test @ http://127.0.0.1:8080/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency    39.25ms    8.49ms  86.45ms   81.39%
    Req/Sec     1.29k    129.27     1.79k   69.23%
  3837995 requests in 29.89s, 644.19MB read
Requests/sec: 128402.88
Transfer/sec:     21.55MB


Nginx + Go through HTTP

# wrk -t100 -c5000 -d30s http://go.http/
Running 30s test @ http://go.http/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   336.77ms  297.88ms 632.52ms   60.16%
    Req/Sec     2.36k     2.99k    19.11k   84.83%
  2232068 requests in 29.98s, 374.64MB read
Requests/sec:  74442.91
Transfer/sec:     12.49MB


Nginx + Go through FastCGI TCP


# wrk -t100 -c5000 -d30s http://go.fcgi.tcp/
Running 30s test @ http://go.fcgi.tcp/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   217.69ms  121.22ms   1.80s    75.14%
    Req/Sec   263.09    102.78   629.00     62.54%
  721027 requests in 30.01s, 121.02MB read
  Socket errors: connect 0, read 0, write 176, timeout 1343
Requests/sec:  24026.50
Transfer/sec:      4.03MB


Nginx + Go through FastCGI Unix socket


# wrk -t100 -c5000 -d30s http://go.fcgi.unix/
Running 30s test @ http://go.fcgi.unix/
  100 threads and 5000 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency   694.32ms  332.27ms   1.79s    62.13%
    Req/Sec   646.86    669.65     6.11k    87.80%
  909836 requests in 30.00s, 152.71MB read
Requests/sec:  30324.77
Transfer/sec:      5.09MB

Conclusions

In the first round of benchmarks, some Nginx settings were not well tuned (gzip was enabled, and the connections to the Go backend were not using keep-alive). After switching to wrk and tuning Nginx as recommended, the results were significantly different.

With GOMAXPROCS=1 the overhead of Nginx is not that large, but with GOMAXPROCS=8 the difference is considerable. I may try other settings later. If you need Nginx features such as virtual hosts, load balancing, or caching, use the HTTP proxy, not FastCGI. Some people say that Go's FastCGI implementation is not well optimized, which may explain the large differences seen in these results.
