Building a high-performance Linux web cluster: detailed steps toward a million requests per second


This tutorial is quite detailed and hands-on, so if you intend to follow along, please read it patiently.


How do you generate millions of HTTP requests per second?

Load-generating tools

One important thing to keep in mind when conducting a test like this is how many socket connections you can open on Linux. This limit is hard-coded in the kernel, the most notable being the limit on ephemeral ports. You can extend it (to some degree) in /etc/sysctl.conf, but basically a Linux machine can only have about 64,000 sockets open at a time. So in a load test we have to make the most of each socket by issuing as many requests as possible over a single connection, and we need more than one machine to generate load; otherwise the load generators exhaust their available sockets and cannot generate enough load.
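For example, you can check the current ephemeral port range and widen it before testing. A minimal sketch (the exact values are up to you; these mirror the sysctl settings used later in this article):

sysctl net.ipv4.ip_local_port_range                  # show the current range
sysctl -w net.ipv4.ip_local_port_range="2000 65000"  # widen it at runtime
sysctl -p /etc/sysctl.conf                           # or reload a persistent setting from /etc/sysctl.conf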

I started with 'ab', ApacheBench. It is the simplest and most general-purpose HTTP benchmarking tool I know, and it ships with Apache, so it may already be on your system. Unfortunately, I could only generate about 900 requests per second with it. Although I have seen other people reach 2,000 requests per second with it, I could tell right away that 'ab' was not suited to our benchmark.
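For reference, a typical 'ab' run looks something like this (illustrative values; the target URL and counts are placeholders):

ab -k -c 100 -n 100000 http://192.168.122.10/test.txt

Here -k enables HTTP keep-alive so each connection carries multiple requests, -c sets the concurrency, and -n the total number of requests.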

httperf





Next, I tried 'httperf'. This tool is more powerful, but it is still relatively simple and limited in functionality. Figuring out how many requests per second it will produce is not as simple as passing it a single parameter. After many attempts, I got results of a few thousand requests per second. For example:





This creates 100,000 sessions at a rate of 1,000 per second. Each session issues 5 requests, 2 seconds apart.





httperf --hog --server=192.168.122.10 --wsess=100000,5,2 --rate 1000 --timeout 5





Total: connections 117557 requests 219121 replies 116697 test-duration 111.423 s

Connection rate: 1055.0 conn/s (0.9 ms/conn, <=1022 concurrent connections)
Connection time [ms]: min 0.3 avg 865.9 max 7912.5 median 459.5 stddev 993.1
Connection time [ms]: connect 31.1
Connection length [replies/conn]: 1.000

Request rate: 1966.6 req/s (0.5 ms/req)
Request size [B]: 91.0

Reply rate [replies/s]: min 59.4 avg 1060.3 max 1639.7 stddev 475.2 (samples)
Reply time [ms]: response 56.3 transfer 0.0
Reply size [B]: header 267.0 content 18.0 footer 0.0 (total 285.0)
Reply status: 1xx=0 2xx=116697 3xx=0 4xx=0 5xx=0

CPU time [s]: user 9.68 system 101.72 (user 8.7% system 91.3% total 100%)
Net I/O: 467.5 KB/s (3.8*10^6 bps)





Finally, I used these settings to reach 6,622 connections per second:





httperf --hog --server 192.168.122.10 --num-conn 100000 --rate 20000 --timeout 5





(A total of 100,000 connections are created, at a fixed rate of 20,000 connections per second.)


It has some potential advantages, and more features than 'ab', but it is not the heavyweight tool I need for this project. What I need is a tool that supports distributed load testing across multiple nodes. So my next attempt was JMeter.




Apache JMeter





This is a full-featured web application test suite that can simulate all the behavior of real users. You can use the JMeter proxy to visit your website, click around, log in, and imitate everything a user might do, and JMeter will record those actions as a test case. JMeter then replays those actions to simulate as many users as you want. Although configuring JMeter is much more complicated than 'ab' or 'httperf', it is an interesting tool!





In my tests, it produced 14,000 requests per second! That is definite progress.





I used some of the plug-ins from the Google Code project, in particular their "Stepping Threads" and "HTTP RAW" requests, and eventually generated about 30,000 requests per second! But that was its limit, so I went looking for another tool. I am sharing an earlier JMeter configuration of mine here in the hope that it helps someone; although it is far from perfect, it may sometimes meet your requirements.





Tsung: a heavy-duty, distributed, multi-protocol testing tool


It generates around 40,000 requests per second out of the box, and it is definitely the tool we want. Like JMeter, you can record actions to replay during a test, and it can test most protocols, such as SSL, HTTP, WebDAV, SOAP, PostgreSQL, MySQL, LDAP, and Jabber/XMPP. Unlike JMeter, it has no confusing GUI to set up: just one XML configuration file and SSH keys for the distributed nodes you select. Its simplicity and efficiency appeal to me as much as its robustness and scalability. I found it to be a very powerful tool that, properly configured, can generate millions of HTTP requests per second.





In addition, Tsung can generate HTML graphs and output detailed reports of your tests. The results are easy to understand, and you can even show the graphs to your boss!





I will explain this tool further in the rest of this series. For now you can continue with the setup instructions below, or skip to the next page.





Install Tsung on CentOS 6.2





First, install the EPEL repository (Erlang needs it); be sure it is installed before proceeding to the next step. After that, install the required packages on every node you will use to generate load. If you have not yet set up passwordless SSH keys between the nodes, do so now.
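A minimal sketch of those two prerequisites (the EPEL release URL below is the usual one for CentOS 6 as of this writing; adjust it to your release):

rpm -Uvh http://download.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
ssh-keygen -t rsa              # accept the defaults; leave the passphrase empty
ssh-copy-id root@loadnode1     # repeat for loadnode2, loadnode3, ...

Then install the packages on each load node: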





yum -y install erlang perl perl-RRD-Simple.noarch perl-Log-Log4perl-RRDs.noarch gnuplot perl-Template-Toolkit firefox





Download the latest Tsung from GitHub or from Tsung's official website.





wget http://tsung.erlang-projects.org/dist/tsung-1.4.2.tar.gz





Unzip and compile.





tar zxfv tsung-1.4.2.tar.gz
cd tsung-1.4.2
./configure && make && make install





Copy the sample configuration into the ~/.tsung directory; this is where Tsung's configuration and log files are stored.





cp /usr/share/doc/tsung/examples/http_simple.xml /root/.tsung/tsung.xml





You can edit this configuration file to suit your needs, or use mine. After much trial and error, my current configuration file can produce 5 million HTTP requests per second when used across 7 distributed nodes.





<?xml version="1.0"?>
<!DOCTYPE tsung SYSTEM "/usr/share/tsung/tsung-1.0.dtd">
<tsung loglevel="notice" version="1.0">

<clients>
  <client host="localhost" weight="1" cpu="10" maxusers="40000">
    <ip value="192.168.122.2"/>
  </client>
  <client host="loadnode1" weight="1" cpu="9" maxusers="40000">
    <ip value="192.168.122.2"/>
  </client>
  <client host="loadnode2" weight="1" maxusers="40000" cpu="8">
    <ip value="192.168.122.3"/>
  </client>
  <client host="loadnode3" weight="1" maxusers="40000" cpu="9">
    <ip value="192.168.122.21"/>
  </client>
  <client host="loadnode4" weight="1" maxusers="40000" cpu="9">
    <ip value="192.168.122.11"/>
  </client>
  <client host="loadnode5" weight="1" maxusers="40000" cpu="9">
    <ip value="192.168.122.12"/>
  </client>
  <client host="loadnode6" weight="1" maxusers="40000" cpu="9">
    <ip value="192.168.122.13"/>
  </client>
  <client host="loadnode7" weight="1" maxusers="40000" cpu="9">
    <ip value="192.168.122.14"/>
  </client>
</clients>

<servers>
  <server host="192.168.122.10" port="80" type="tcp"/>
</servers>

<load>
  <arrivalphase phase="1" duration="10" unit="minute">
    <users maxnumber="15000" arrivalrate="8" unit="second"/>
  </arrivalphase>

  <arrivalphase phase="2" duration="10" unit="minute">
    <users maxnumber="15000" arrivalrate="8" unit="second"/>
  </arrivalphase>

  <arrivalphase phase="3" duration="1" unit="hour">
    <users maxnumber="20000" arrivalrate="3" unit="second"/>
  </arrivalphase>
</load>

<sessions>
  <session probability="100" name="ab" type="ts_http">
    <for from="1" to="10000000" var="i">
      <request> <http url="/test.txt" method="GET" version="1.1"/> </request>
    </for>
  </session>
</sessions>
</tsung>





There are a lot of things to understand at first, but once you understand them, it becomes easy.





<client> simply specifies the hosts that run Tsung. You can specify the IP address and the maximum number of CPUs Tsung will use, and maxusers sets the maximum number of users the node will simulate. Each user performs the actions we define later.


<servers> specifies the HTTP server you want to test. We can use this option to test an IP cluster or a single server.


<load> defines when our simulated users will "arrive" at our site, and how fast they arrive.


<arrivalphase>: in the first phase, which lasts 10 minutes, users arrive at a rate of 8 per second, up to a maximum of 15,000 users.

<arrivalphase phase="1" duration="10" unit="minute">
<users maxnumber="15000" arrivalrate="8" unit="second"/>


There are two more arrivalphases, whose users arrive in the same way.


Together, these arrivalphases make up a <load>, which controls how many requests per second we generate.


<session> defines what the users will do once they arrive at your site.


probability lets you define the random things users might do: sometimes they click here, sometimes they click there. All the probabilities must add up to 100%.


In the above configuration, the user only does one thing, so its probability equals 100%.
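For illustration, if you wanted your users to split between two behaviors, the sessions might look like this (a hypothetical sketch; /index.html is a made-up second page, not part of this article's setup):

<sessions>
  <session probability="80" name="static" type="ts_http">
    <request> <http url="/test.txt" method="GET" version="1.1"/> </request>
  </session>
  <session probability="20" name="page" type="ts_http">
    <request> <http url="/index.html" method="GET" version="1.1"/> </request>
  </session>
</sessions>

Here 80% of simulated users fetch /test.txt and the other 20% fetch /index.html, and the probabilities sum to 100.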


<for from= "1" to= "10000000" var= "I" > This is what users do in 100% of the time. They loop through 10,000,000 times and <request> a Web page:/test.txt.


This loop structure lets us use a small number of user connections to achieve a very high request rate.
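As a rough, illustrative calculation (assumed numbers, not measurements from this article): if 10,000 looping users each receive a reply in about 10 ms on average, each user sustains roughly 100 requests per second, so 10,000 x 100 = 1,000,000 requests per second from only 10,000 open connections, comfortably under the roughly 64,000-socket ceiling of a single machine.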





Once you have a good grasp of all that, you can create a handy alias for quickly viewing Tsung reports.





vim ~/.bashrc
alias treport="/usr/lib/tsung/bin/tsung_stats.pl; firefox report.html"

source ~/.bashrc





Then start Tsung.





[root@loadnode1 ~]# tsung start
Starting Tsung
"Log directory is: /root/.tsung/log/20120421-1004"





After the run ends, view the report:





cd /root/.tsung/log/20120421-1004
treport





Use Tsung to plan your cluster structure


Now that we have a powerful enough load-testing tool, we can plan the rest of the cluster build:





Use Tsung to test a single HTTP server and get a baseline benchmark.

Tune the web server, testing regularly with Tsung to measure the performance improvements.

Tune the TCP stacks of these systems for the best network performance. Then test again, and keep testing.

Build the LVS cluster out of these fully tuned web servers.

Stress-test the LVS with the Tsung cluster.





Tuning Nginx for optimal performance





Typically, an optimized Nginx server on Linux can handle 500,000 to 600,000 requests per second, but my Nginx server was able to handle 904,000 requests per second steadily; I tested under this high load for more than 12 hours and the server remained stable.





Note that all of the settings listed here were validated in my test environment; you should configure them according to your own servers:





Install Nginx from the EPEL repository:





yum -y install nginx





Back up the original configuration file, then edit the configuration to suit your needs:





cp /etc/nginx/nginx.conf /etc/nginx/nginx.conf.orig
vim /etc/nginx/nginx.conf





# This number should be, at maximum, the number of CPU cores on your system,
# since Nginx doesn't benefit from more than one worker per CPU.
worker_processes 24;

# Maximum number of file descriptors available to Nginx. Set this in the OS
# with 'ulimit -n 200000', or in /etc/security/limits.conf.
worker_rlimit_nofile 200000;

# Only log critical-level errors.
error_log /var/log/nginx/error.log crit;

# Determines how many clients are served by each worker process.
# (Max clients = worker_connections * worker_processes)
# "Max clients" is also limited by the number of socket connections
# available on the system (~64k).
worker_connections 4000;

# Essential on Linux: epoll lets a single thread efficiently serve many clients.
use epoll;

# Accept as many connections as possible after Nginx is notified of a new one.
# May flood worker_connections if that option is set too low.
multi_accept on;

# Cache information about open FDs (file descriptors) for frequently accessed files.
# Changing these settings, in my environment, brought performance up
# from 560k req/sec to 904k req/sec.
# I recommend using some variant of these options, though not these exact values.
open_file_cache max=200000 inactive=20s;
open_file_cache_valid 30s;
open_file_cache_min_uses 2;
open_file_cache_errors on;

# Buffer log writes to speed up I/O, or disable logging altogether.
# access_log /var/log/nginx/access.log main buffer=16k;
access_log off;

# sendfile copies data between file descriptors from within the kernel, which
# is more efficient than read() + write(), since those require transferring
# data to and from user space.
sendfile on;

# tcp_nopush makes Nginx attempt to send its HTTP response headers in one
# packet instead of using partial frames. Useful for prepending headers
# before calling sendfile, and for throughput optimization.
tcp_nopush on;

# Don't buffer data sends (disable the Nagle algorithm). Good for sending
# frequent small bursts of data in real time.
tcp_nodelay on;

# Timeout for keep-alive connections; the server closes them after this time.
keepalive_timeout 30;

# Number of requests a client may make over one keep-alive connection.
# This is set high for testing.
keepalive_requests 100000;

# Allow the server to close a connection after a client stops responding,
# freeing the socket-associated memory.
reset_timedout_connection on;

# Send the client a "request timed out" if the body is not received within
# this time. Default is 60 seconds.
client_body_timeout 10;

# If the client stops reading data, free up the stale connection after this
# much time. Default is 60 seconds.
send_timeout 2;

# Compression: reduces the amount of data transferred over the network.
gzip on;
gzip_min_length 10240;
gzip_proxied expired no-cache no-store private auth;
gzip_types text/plain text/css text/xml text/javascript application/x-javascript;
gzip_disable "MSIE [1-6]\.";
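Before restarting Nginx, it is worth checking that the file parses and that the file-descriptor limit is actually raised; a quick sketch:

ulimit -n 200000    # or set a "nofile" entry in /etc/security/limits.conf
nginx -t            # test the configuration syntax before (re)starting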





Start Nginx and configure it to start automatically at boot:





service nginx start
chkconfig nginx on





Configure Tsung and start the test. Testing the server's peak capacity takes roughly 10 minutes; the exact time depends on your Tsung configuration.





vim ~/.tsung/tsung.xml
<server host="YourWebServer" port="80" type="tcp"/>





tsung start





Once you think you have gathered enough results, exit with CTRL+C, then use the treport alias we configured earlier to view the test report.




Web server tuning, part two: TCP protocol stack tuning





This section applies not only to Nginx but to any web server. Optimizing the kernel's TCP configuration improves the server's network bandwidth.





The following settings worked perfectly on my 10GBASE-T server, raising it from 8 Gbps under the default configuration to 9.3 Gbps.





Of course, the results on your servers may differ.





For the settings below, I recommend changing only one at a time, then re-testing the server with a network performance tool such as netperf or iperf, or with my similar test script, cluster-netbench.pl.
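If you want to run the benchmarks by hand rather than through the script, the basic pattern is as follows (a sketch; 192.168.122.10 stands in for the machine under test):

# on the machine under test
netserver               # netperf's daemon
iperf -s                # iperf in server mode

# on the client
netperf -H 192.168.122.10 -l 30    # 30-second TCP stream test
iperf -c 192.168.122.10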


cluster-netbench.pl:





#!/usr/bin/perl
#
# My hideous script!
# Stefanie Edgar
# Feb 2012
#
# netbench.pl
# A script to run network benchmarks on remote hosts and return the results.
# Useful for testing out new network settings across cluster nodes.

use strict;
use warnings;
use IO::Handle;

# remote hosts to run network benchmarks on
my @hosts = qw(192.168.12.1 192.168.12.2 192.168.12.3 192.168.12.4 192.168.12.5
               192.168.12.6 192.168.12.7 192.168.12.8 192.168.12.9);

# configuration
# choose between one or more network benchmark tests to run
my $conf = {
    debug   => 0,
    iperf   => 1,
    netperf => 1,
    netpipe => 0,
    path    => {
        iperf     => "/usr/bin/iperf",
        netpipe   => "/usr/bin/NPtcp",
        netperf   => "/usr/bin/netperf",
        netserver => "/usr/bin/netserver",
        ssh       => "/usr/bin/ssh",
    },
};

# For each host, fork child processes to start the background daemons remotely,
# then locally run the client benchmark programs against that host.
foreach my $host (@hosts)
{
    # store the PIDs of the child processes in this hash as they're spawned
    my %pids;

    print "============ network benchmarks for $host ============\n";

    # -- fork for netpipe --
    defined(my $pid = fork) or die "Can't fork, error: [$!].\n";
    if ($pid)
    {
        # record the child PID in the hash
        $pids{$pid} = 1;
    }
    else
    {
        # start up the background daemon on the remote host
        print "netpipe fork: started.\n" if $conf->{debug};
        print "netpipe fork: calling call_netpipe_on_remote($host)\n" if $conf->{debug};
        call_netpipe_on_remote($host);

        # When the function returns, this child exits. NetPIPE won't exit
        # without being killed, though, so it gets killed after the test
        # has run. Until then, it stays running.
        print "fork: exiting\n" if $conf->{debug};
        exit;
    }

    # -- fork for netperf --
    defined($pid = fork) or die "Can't fork, error: [$!].\n";
    if ($pid) { $pids{$pid} = 1; }
    else
    {
        call_netperf_on_remote($host);
        exit;
    }

    # -- fork for iperf --
    defined($pid = fork) or die "Can't fork, error: [$!].\n";
    if ($pid) { $pids{$pid} = 1; }
    else
    {
        print "iperf fork: started.\n" if $conf->{debug};
        print "iperf fork: calling call_iperf_on_remote($host)\n" if $conf->{debug};
        call_iperf_on_remote($host);
        print "iperf fork: exiting\n" if $conf->{debug};
        exit;
    }

    # wait for the daemons to get set up, then run the client-side benchmarks locally
    sleep 3;
    run_local_netpipe($host) if $conf->{netpipe};
    run_local_iperf($host)   if $conf->{iperf};
    run_local_netperf($host) if $conf->{netperf};
}

#############
# functions #
#############

sub call_netpipe_on_remote
{
    print "call_netpipe_on_remote: function started.\n" if $conf->{debug};

    # param passed to the function tells us where to run netpipe
    my $host = shift;

    print "call_netpipe_on_remote: calling kill_remote_process($host, $conf->{path}{netpipe})\n" if $conf->{debug};
    kill_remote_process($host, $conf->{path}{netpipe});
    sleep 1; # wait for the process to die before proceeding

    print "call_netpipe_on_remote: attempting to start netpipe on $host\n" if $conf->{debug};

    # create a file handle, then specify a shell command to start the daemon
    my $fh = IO::Handle->new();
    my $sc = "$conf->{path}{ssh} root\@$host \"$conf->{path}{netpipe} 2>&1\"";

    # open the file handle, running the command and catching the output
    open($fh, "$sc 2>&1 |") or die "Failed to call: [$sc], error: $!\n";
    while (<$fh>)
    {
        my $line = $_;
        print "$line\n" if $conf->{debug};
    }
    $fh->close();

    print "call_netpipe_on_remote: exiting.\n" if $conf->{debug};
}

sub call_netperf_on_remote
{
    print "call_netperf_on_remote: function started.\n" if $conf->{debug};
    my $host = shift;

    print "call_netperf_on_remote: calling kill_remote_process($host, $conf->{path}{netserver})\n" if $conf->{debug};
    kill_remote_process($host, $conf->{path}{netserver});
    sleep 1; # wait for the process to die before proceeding

    print "call_netperf_on_remote: attempting to start netserver on $host\n" if $conf->{debug};
    my $fh = IO::Handle->new();
    my $sc = "$conf->{path}{ssh} root\@$host \"$conf->{path}{netserver}\"";
    open($fh, "$sc 2>&1 |") or die "Failed to call: [$sc], error: $!\n";
    while (<$fh>)
    {
        my $line = $_;
        print "$line\n" if $conf->{debug};
    }
    $fh->close();
    print "call_netperf_on_remote: netperf's netserver runs as a daemon, so this fork doesn't need to stay open. Exiting.\n" if $conf->{debug};
}

sub call_iperf_on_remote
{
    print "call_iperf_on_remote: function started\n" if $conf->{debug};
    my $host = shift;

    print "call_iperf_on_remote: calling kill_remote_process($host, iperf)\n" if $conf->{debug};
    kill_remote_process($host, "iperf");

    print "call_iperf_on_remote: attempting to start iperf on $host\n" if $conf->{debug};
    my $fh = IO::Handle->new();
    my $sc = "$conf->{path}{ssh} root\@$host \"$conf->{path}{iperf} -s --bind $host\"";
    open($fh, "$sc 2>&1 |") or die "Failed to call: [$sc], error: $!\n";
    print "call_iperf_on_remote: iperf daemon started on $host\n" if $conf->{debug};
    while (<$fh>)
    {
        chomp;
        my $line = $_;
        print "$line\n" if $conf->{debug};
    }
    $fh->close();
    print "call_iperf_on_remote: exiting.\n" if $conf->{debug};
}

sub kill_remote_process
{
    print "kill_remote_process: function started.\n" if $conf->{debug};
    # params
    my ($host, $process) = @_;

    print "kill_remote_process: killing all $process on $host\n" if $conf->{debug};
    my $fh = IO::Handle->new();
    my $sc = "$conf->{path}{ssh} root\@$host killall $process";
    open($fh, "$sc 2>&1 |") or die "Failed to call: [$sc], error: $!\n";
    while (<$fh>)
    {
        my $line = $_;
        print "$line\n" if $conf->{debug};
    }
    $fh->close();
    print "kill_remote_process: exiting.\n" if $conf->{debug};
}

sub run_local_netpipe
{
    print "run_local_netpipe: function started.\n" if $conf->{debug};
    my $host = shift;
    my $fh   = IO::Handle->new();
    my $sc   = "$conf->{path}{netpipe} -h $host 2>&1 | tail -n 10";
    open($fh, "$sc 2>&1 |") or die "Failed to call: [$sc], error: $!\n";
    while (<$fh>)
    {
        chomp;
        my $line = $_;
        print "$line\n";
    }
    $fh->close();
    print "run_local_netpipe: post-run cleanup. Calling kill_remote_process($host, $conf->{path}{netpipe})\n" if $conf->{debug};
    kill_remote_process($host, $conf->{path}{netpipe});
}

sub run_local_iperf
{
    print "run_local_iperf: function started.\n" if $conf->{debug};
    my $host = shift;
    my $fh   = IO::Handle->new();
    my $sc   = "$conf->{path}{iperf} -c $host | tail -n 1";
    open($fh, "$sc 2>&1 |") or die "Failed to call: [$sc], error: $!\n";
    while (<$fh>)
    {
        chomp;
        my $line = $_;
        print "$line\n";
    }
    $fh->close();
    print "run_local_iperf: fh closed. Killing off the remaining iperf process.\n" if $conf->{debug};
    kill_remote_process($host, "iperf");
}

sub run_local_netperf
{
    print "run_local_netperf: function started.\n" if $conf->{debug};
    my $host = shift;
    my $fh   = IO::Handle->new();
    my $sc   = "$conf->{path}{netperf} -l 30 -H $host 2>&1 | tail -n 6";
    open($fh, "$sc 2>&1 |") or die "Failed to call: [$sc], error: $!\n";
    while (<$fh>)
    {
        my $line = $_;
        print $line; # missing \n intentional here; netperf has its own newlines
    }
    $fh->close();
    print "run_local_netperf: post-run cleanup. Calling kill_remote_process($host, $conf->{path}{netserver})\n" if $conf->{debug};
    kill_remote_process($host, $conf->{path}{netserver});
    print "run_local_netperf: exiting.\n" if $conf->{debug};
}





Install related software





yum -y install netperf iperf





vim /etc/sysctl.conf





# Increase system IP port limits to allow more connections
net.ipv4.ip_local_port_range = 2000 65000

net.ipv4.tcp_window_scaling = 1

# Number of packets to keep in the backlog before the kernel starts dropping them
net.ipv4.tcp_max_syn_backlog = 3240000

# Increase the socket listen backlog
net.core.somaxconn = 3240000
net.ipv4.tcp_max_tw_buckets = 1440000

# Increase TCP buffer sizes
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_congestion_control = cubic





Run the following command for the changes to take effect after each revision of the configuration:





sysctl -p /etc/sysctl.conf
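To confirm that a value actually took effect, you can query it back, for example:

sysctl net.ipv4.ip_local_port_range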





Don't forget to run a network benchmark after each configuration change, so you can see which revisions yield the clearest improvements. Testing methodically like this can save you a lot of time.


Using LVS to build a load-balanced cluster

With the server and network stack tuned as described above, and verified with the iperf and netperf tools, a single optimized server can now sustain about 500,000 static-page requests per second.

Now you are ready to set up the server cluster.

There are some good articles on the Red Hat website, so I suggest reading them if anything is unclear. But don't worry: I will explain every step of the cluster setup below.


LVS Router Configuration

This requires one machine to act as the router, responsible for balancing TCP traffic across every server in the LVS cluster. So pick a machine and configure it as follows. If your routed IP traffic is small, you can use your weakest server as the router.

1. Install the LVS software on the LVS router

yum groupinstall "Load Balancer"
chkconfig piranha-gui on
chkconfig pulse on

2. Set the web management password

/usr/sbin/piranha-passwd

3. Open the management port in iptables

vim /etc/sysconfig/iptables
-A INPUT -m state --state NEW -m tcp -p tcp --dport 3636 -j ACCEPT
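After editing the file, reload iptables so the new rule takes effect (on CentOS 6):

service iptables restart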

4. Start the web management interface

service piranha-gui start

Be sure to wait until the Piranha configuration is complete before starting pulse.


5. Enable packet forwarding

vim /etc/sysctl.conf
net.ipv4.ip_forward = 1

sysctl -p /etc/sysctl.conf

6. Start the web server

service nginx start

Direct Routing Mode configuration

1. Log in to the Piranha web management interface on the LVS router to do the configuration.


2. Choose the VIRTUAL SERVERS tab and create a virtual server representing your web server cluster. This configuration lets multiple machines act as one server, hence the name "virtual server".
Click ADD, then click EDIT.

3. Edit the virtual server: first choose an IP address to serve as the virtual IP (one not used by any real server), then choose a network interface (Device) to bind it to.

Click ACCEPT to finish; the web page does not refresh, but the configuration has been saved.

Click REAL SERVER to proceed to the next step, configuring the real servers.

4. Configure the real servers. The REAL SERVER page is used to configure the actual servers in the web cluster.
Use ADD to add each HTTP server, use EDIT to configure each server in detail, then click ACCEPT to save.

If you need to reconfigure the cluster later, click VIRTUAL SERVER first and then reconfigure REAL SERVER.

After all the real servers are configured on the REAL SERVER page, select each row and click (DE)ACTIVATE to activate it.

5. With all the real servers configured and activated, return to the VIRTUAL SERVERS page and click (DE)ACTIVATE to activate the virtual server.

That is the end of the router configuration. You can now close the browser, then start pulse and move on to configuring each real server.





service pulse start





Run ipvsadm and you should see that the cluster is up and running:





# ipvsadm
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
  -> RemoteAddress:Port           Forward Weight ActiveConn InActConn
TCP  192.168.122.10:http wlc
  -> 192.168.122.1:http           Route   1      0          0
  -> 192.168.122.2:http           Route   1      0          0
  -> 192.168.122.3:http           Route   1      0          0





Direct routing: configure each real server node





Perform the following steps on each server in the cluster.


1. Configure the virtual IP address for the real server.





ip addr add 192.168.122.10 dev eth0:1





Because we want this IP configuration to survive a reboot, also write it into /etc/rc.local.





vim /etc/rc.local
ip addr add 192.168.122.10 dev eth0:1





2. Configure the ARP table entries for the virtual IP on the real server.


The point is to stop the real servers from answering ARP requests for the virtual IP address: they should respond only to ARP requests for their physical IPs, so that in the whole cluster only the LVS router answers ARP requests for the virtual IP.





yum -y install arptables_jf

arptables -A IN -d <cluster-ip-address> -j DROP
arptables -A OUT -s <cluster-ip-address> -j mangle --mangle-ip-s <realserver-ip-address>
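For example, with the virtual IP and one of the real server IPs used earlier in this article (illustrative values; substitute your own):

arptables -A IN -d 192.168.122.10 -j DROP
arptables -A OUT -s 192.168.122.10 -j mangle --mangle-ip-s 192.168.122.1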





3. Once configuration is complete on the real server, save the ARP table entries:





service arptables_jf save
chkconfig --level 2345 arptables_jf on





4. Test





If arptables is configured correctly, only the LVS router will answer pings to the virtual IP. First make sure pulse is stopped, then ping the virtual IP address from any real server in the cluster. If a real server still responds, you can find it by looking at the ARP entries.





ping 192.168.122.10
arp | grep 192.168.122.10





The MAC address shown tells you which server answered; turn off the ARP response on that server.





Another simple, effective test is to request a web page from the cluster with curl, while watching the traffic on the LVS router with ipvsadm.





[root@lvsrouter ~]# watch ipvsadm
[user@outside ~]$ curl http://192.168.122.10/test.txt





Performance testing of a cluster using Tsung


By now the cluster servers are configured and working properly, and you can run the stress test to see just how powerful the setup is.





[root@loadnode1 ~]# tsung start
Starting Tsung
"Log directory is: /root/.tsung/log/20120421-1004"





I recommend testing for at least 2 hours, since it takes that long for the test to reach its peak HTTP request rate. Throughout the test you can watch the load on each CPU core of the cluster servers with htop.





This assumes you have the EPEL and rpmforge repositories installed.





yum -y install htop cluster-ssh
cssh node1 node2 node3 ...
htop





You will see the HTTP servers receiving and answering web requests at high speed, while the LVS router itself carries very little load throughout the process.

In actual use, make sure the server's total CPU load stays below the total number of cores (for example, on my 24-core systems I always keep the load under 23), so the CPUs can run at full capacity while leaving enough headroom to survive a single failure.

After Tsung finishes, you can view the detailed report of the cluster stress test:

cd /root/.tsung/log/20120421-1004
/usr/lib/tsung/bin/tsung_stats.pl
firefox report.html
