Nginx Configuration Instructions
For a typical nginx configuration file, refer to the file's own description; this section only covers how to use Nginx's load-balancing features.
1. Round-robin (default)
Each request is assigned to a different backend server in turn; if a backend server goes down, it is removed from rotation automatically.
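For example, a minimal round-robin setup (a sketch; the upstream name and addresses are illustrative):
upstream backend {
    server 192.168.159.10;
    server 192.168.159.11;
}
With no extra flags, requests rotate evenly across the two servers.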
2. weight
Specifies the polling probability: the share of requests a server receives is proportional to its weight. Use this when the backend servers have uneven performance.
For example:
upstream backend {
    server 192.168.159.10 weight=10;
    server 192.168.159.11 weight=10;
}
3. ip_hash
Each request is assigned according to a hash of the client IP, so each visitor consistently reaches the same backend server; this solves the session problem.
For example:
upstream resinserver {
    ip_hash;
    server 192.168.159.10:8080;
    server 192.168.159.11:8080;
}
4. fair (third party)
Requests are assigned according to the backend server's response time; servers with shorter response times are preferred.
upstream resinserver {
    server server1;
    server server2;
    fair;
}
5. url_hash (third party)
Requests are assigned by a hash of the requested URL, so each URL is always directed to the same backend server; this is more effective when the backend servers cache content.
Example: add a hash directive in the upstream block; the server lines must not carry weight or other parameters. hash_method selects the hash algorithm to use.
upstream resinserver {
    server squid1:3128;
    server squid2:3128;
    hash $request_uri;
    hash_method crc32;
}
Tips
upstream resinserver {
    # defines the IPs and states of the load-balanced backends
    ip_hash;
    server 127.0.0.1:8000 down;
    server 127.0.0.1:8080 weight=2;
    server 127.0.0.1:6801;
    server 127.0.0.1:6802 backup;
}
In the server block that needs load balancing, add:
proxy_pass http://resinserver/;
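A sketch of where this line lives in practice (the listen port and server_name are illustrative):
server {
    listen 80;
    server_name www.example.com;          # illustrative
    location / {
        proxy_pass http://resinserver/;   # forwards to the upstream group defined above
    }
}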
The per-server state flags are:
1. down: the server temporarily does not participate in the load.
2. weight: defaults to 1; the larger the weight, the larger the share of the load.
3. max_fails: the number of failed requests allowed, 1 by default. When the maximum is exceeded, the error defined by the proxy_next_upstream module is returned.
4. fail_timeout: how long the server is paused after max_fails failures.
5. backup: the backup machine is only requested when all the other non-backup machines are down or busy, so it carries the lightest load. A sketch combining these flags is shown below.
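A sketch combining these flags in one upstream block (all values are illustrative):
upstream resinserver {
    server 127.0.0.1:8080 weight=2 max_fails=3 fail_timeout=30s;   # 3 failures pause it for 30s
    server 127.0.0.1:6801 down;                                    # temporarily out of rotation
    server 127.0.0.1:6802 backup;                                  # used only when the others are unavailable
}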
Nginx supports multiple upstream groups at the same time, so different servers can each use their own group.
Setting client_body_in_file_only to on logs the client POST data to a file for debugging.
client_body_temp_path sets the directory for those record files; up to 3 levels of subdirectories can be configured. A sketch of both directives follows.
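A sketch of the POST-debugging setup these two directives enable (the path is illustrative):
client_body_in_file_only on;                               # write each request body to its own file
client_body_temp_path /var/tmp/nginx/client_body 1 2 3;    # illustrative path; up to 3 levels of subdirectories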
location matches the URL, and can redirect or hand the request off to a new proxy / load-balancing group.
CentOS Configuration Instructions
1> Modify the /etc/sysctl.conf configuration file and add the following parameters.
# For a new connection, how many SYN packets the kernel sends before giving up; should not exceed 255.
# The default is 5, which corresponds to roughly 180 seconds.
net.ipv4.tcp_syn_retries = 2
#net.ipv4.tcp_synack_retries = 2
# When keepalive is enabled, the interval at which TCP sends keepalive messages.
# The default is 2 hours; here it is changed to 1200 seconds (20 minutes).
net.ipv4.tcp_keepalive_time = 1200
net.ipv4.tcp_orphan_retries = 3
# If the socket is closed by the local end, this parameter determines how long it stays in FIN-WAIT-2.
net.ipv4.tcp_fin_timeout = 30
# The length of the SYN queue; the default is 1024. Increasing the queue length lets the system
# hold more connections waiting to be established.
net.ipv4.tcp_max_syn_backlog = 4096
# Enable SYN cookies: when the SYN wait queue overflows, cookies are used to protect against
# small-scale SYN attacks. The default is 0 (off).
net.ipv4.tcp_syncookies = 1
# Enable reuse: allow TIME-WAIT sockets to be reused for new TCP connections. The default is 0 (off).
net.ipv4.tcp_tw_reuse = 1
# Enable fast recycling of TIME-WAIT sockets in TCP connections. The default is 0 (off).
net.ipv4.tcp_tw_recycle = 1
# Reduce the number of keepalive probes before timing out.
net.ipv4.tcp_keepalive_probes = 5
# Enlarge the network device receive queue.
net.core.netdev_max_backlog = 3000
2> After modifying the file, execute /sbin/sysctl -p to make the parameters take effect.
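A quick way to apply and spot-check the settings (a sketch; any key from the list above works):
/sbin/sysctl -p                      # reload /etc/sysctl.conf and print the applied values
sysctl net.ipv4.tcp_syncookies       # verify a single key took effect (should print 1)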
JVM Parameter Configuration Instructions
Our Java servers primarily run Tomcat, so I'll take Tomcat as the example for setting JVM parameters.
1> Go to the Tomcat root directory and run: # vi bin/catalina.sh
2> Add the parameters:
JAVA_OPTS="-server -verbose:gc -Xloggc:/usr/local/apache-tomcat-7.0.53/logs/gc.log -Xms1024m -Xmx1024m -Xmn512m -XX:MaxDirectMemorySize=256m -XX:MaxTenuringThreshold=1 -XX:SurvivorRatio=30 -XX:TargetSurvivorRatio=50 -Xnoclassgc -Xss256k -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -XX:PermSize=256m -XX:MaxPermSize=256m -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=80 -XX:ParallelGCThreads=4 -XX:ConcGCThreads=4 -XX:+CMSParallelRemarkEnabled -XX:+CMSScavengeBeforeRemark -XX:+ExplicitGCInvokesConcurrent -XX:+UseTLAB -XX:TLABSize=64k"
3> Save, exit, and restart Tomcat.
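One way to confirm the options actually took effect after the restart (a sketch; jps ships with the JDK, and the grep pattern is illustrative):
jps -lvm | grep -i tomcat                               # list running JVMs with their startup arguments
tail -f /usr/local/apache-tomcat-7.0.53/logs/gc.log     # the GC log should begin to fill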
Detailed JVM parameter settings:
-server selects the server VM, which brings many server-oriented defaults such as a parallel collector. On servers it is generally the default anyway, so it can often be omitted; its counterpart is -client. On 64-bit machines the JVM starts with -server by default, which enables the parallel GC, but that is ParallelGC rather than ParallelOldGC; the two algorithms differ (briefly explained below). The notable exception is 32-bit Windows, where the default is -client. The difference is not only in default parameters: the JRE ships separate client and server directories, each with its own dynamic link library. Commands such as java and javac are only launchers; they locate the corresponding VM according to the options and start it, so the java.exe file you see is not the JVM itself. In conclusion, -server and -client select two entirely different VMs, one aimed at server workloads and one at desktop applications.
-verbose:gc -Xloggc:/usr/local/apache-tomcat-7.0.53/logs/gc.log makes the JVM output its GC log to Tomcat's logs/gc.log file.
-Xms1024m: the minimum (initial) size of the heap.
-Xmx1024m: the maximum size of the heap.
-Xmn512m: the size of the young generation within the heap.
-XX:MaxDirectMemorySize=256m: the direct memory that Java NIO uses to improve performance. This area is 64M by default and can be set larger in appropriate scenarios. Tomcat uses the BIO connector by default, but some setups use the NIO model.
-XX:MaxTenuringThreshold=1: under normal circumstances a newly allocated object is moved to the old generation only after surviving this many GCs in the young area (objects that do not fit in S0 or S1 are promoted directly, which should rarely happen). This parameter cannot usefully exceed 16; since the counter starts at 0, setting it to 15 is equivalent to a lifetime of 16. To see how the value behaves in practice, use -XX:+PrintTenuringDistribution.
-XX:SurvivorRatio=30: the ratio of Eden to one of the two survivor spaces. Note that the young generation is Eden + S0 + S1, and S0 and S1 are the same size, so with a ratio of 8, Eden occupies 80% of young while S0 and S1 occupy 10% each. An older version had the parameter -XX:InitialSurvivorRatio, used when nothing else is set; its default is 8, but it is the ratio of young/survivor rather than Eden/survivor, so the default 8 means each survivor space is 12.5% of young, not 10%. Incidentally, in GC logs the reported maximum young size is always smaller than the configured young size, by roughly one survivor space: the usable space at any time is Eden plus one survivor area, not the whole young generation, because one survivor area is always empty and is not counted.
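A worked example of that arithmetic under the flags used above (a sketch following from -Xmn512m and -XX:SurvivorRatio=30):
# young = Eden + S0 + S1, and Eden : survivor = 30 : 1, so young = 32 survivor units
# one survivor space   = 512m / 32 = 16m
# Eden                 = 30 * 16m  = 480m
# usable young space   = Eden + one survivor = 480m + 16m = 496m (one survivor is always empty)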
-XX:TargetSurvivorRatio=50: the parameter used to compute the desired survivor size. The default value is 50. The calculation formula is: (survivor_capacity * TargetSurvivorRatio) / 100 * sizeof(a pointer), where survivor_capacity is the size of one survivor space. If the total size of all objects of a given age in the survivor space exceeds the desired survivor size, the tenuring threshold is recalculated as the minimum of that age and MaxTenuringThreshold; otherwise MaxTenuringThreshold applies.
-Xnoclassgc: disables class garbage collection. Normally, each time the permanent generation fills up, the GC triggers a full GC before extending the allocated memory; setting -Xnoclassgc disables that GC-triggered full GC.
-Xss256k: the stack size of a single thread. Since JDK 5.0 each thread's stack is 1M by default (256K before that). With the same physical memory, reducing this value allows more threads to be created, but the operating system still limits the number of threads per process, so they cannot grow indefinitely; the empirical ceiling is around 3000~5000.
-XX:+PrintGCDetails: output detailed GC logs, including timestamps, mostly to the console and the GC log.
-XX:+PrintGCTimeStamps: output the time at which each GC pause occurred, mostly to the console and the GC log.
-XX:PermSize=256m: the initial size of the permanent generation.
-XX:MaxPermSize=256m: the maximum size of the permanent generation.
-XX:+UseParNewGC -XX:+UseConcMarkSweepGC: these two parameters are used together; the young generation uses the parallel collector (ParNew) and the old generation uses the concurrent CMS collector.
-XX:CMSInitiatingOccupancyFraction=80: triggers a CMS GC when old-generation usage reaches this percentage.
-XX:ParallelGCThreads=4: the number of threads for the parallel GC (ParNew).
-XX:ConcGCThreads=4: the number of threads for the CMS GC.
-XX:+CMSParallelRemarkEnabled: enables the parallel remark phase of the CMS GC.
-XX:+CMSScavengeBeforeRemark: to shorten the second (remark) pause, first enable parallel remark with -XX:+CMSParallelRemarkEnabled. If remark still takes too long, turn on -XX:+CMSScavengeBeforeRemark to force a minor GC before remark starts, which reduces the remark pause; the cost is that another minor GC will start again soon after remark.
-XX:+ExplicitGCInvokesConcurrent: changes how System.gc() calls inside NIO/RMI are handled. By default System.gc() triggers a full GC, causing a long pause; with this flag it triggers a CMS GC instead, reducing the chance of a DirectMemory OOM.
-XX:+UseTLAB: enables thread-local allocation buffers (these buffers are carved out of Eden, and allocation requests go there first). Using this parameter is more efficient in multi-CPU environments.
-XX:TLABSize=64k: the size of the thread-local allocation buffer.
Other Notes:
-XX:+UseParNewGC note: the parameters that conflict with it are -XX:+UseParallelOldGC and -XX:+UseSerialGC. If you want this parameter and also want the old generation collected concurrently, pair it with -XX:+UseConcMarkSweepGC. In fact, once CMS is in use this parameter is enabled by default, i.e. under CMS GC it does not need to be set explicitly, as the parameters below will mention.
-XX:+CMSIncrementalMode: enables incremental mode for the concurrent GC; it is only valid under CMS GC. (I am afraid this could easily increase memory fragmentation, so this system does not use the parameter; it needs careful experimentation.)
-XX:+CMSIncrementalPacing: enables automatic tuning of the duty cycle, that is, the time ratio during which CMS GC work may occur; the maximum amount of GC work allowed in each period can be adjusted.
After -XX:+CMSIncrementalPacing is set, the following two parameters can be configured separately (each is a percentage in the range 0-100):
-XX:CMSIncrementalDutyCycleMin=0
-XX:CMSIncrementalDutyCycle=10