sysctl is an interface that lets you change a running Linux system on the fly. It contains advanced options for the TCP/IP stack and the virtual memory subsystem, which allows experienced administrators to noticeably improve system performance. More than 500 system variables can be read and set through sysctl. Accordingly, sysctl(8) offers two functions: reading and modifying system settings.
First, let's look at a simple, well-tuned sysctl.conf configuration that may suit some readers.
sysctl.conf configuration parameters:
net.ipv4.ip_forward = 0
net.ipv4.conf.default.rp_filter = 1
net.ipv4.conf.default.accept_source_route = 0
kernel.sysrq = 0
kernel.core_uses_pid = 1
net.ipv4.tcp_syncookies = 1
kernel.msgmnb = 65536
kernel.msgmax = 65536
kernel.shmmax = 68719476736
kernel.shmall = 4294967296
net.ipv4.tcp_max_tw_buckets = 6000
net.ipv4.tcp_sack = 1
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_wmem = 8192 436600 873200
net.ipv4.tcp_rmem = 32768 436600 873200
net.core.wmem_default = 8388608
net.core.rmem_default = 8388608
net.core.rmem_max = 16777216
net.core.wmem_max = 16777216
net.core.netdev_max_backlog = 262144
net.core.somaxconn = 262144
net.ipv4.tcp_max_orphans = 3276800
net.ipv4.tcp_max_syn_backlog = 262144
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_synack_retries = 1
net.ipv4.tcp_syn_retries = 1
net.ipv4.tcp_tw_recycle = 1
net.ipv4.tcp_tw_reuse = 1
net.ipv4.tcp_mem = 786432 1048576 1572864
net.ipv4.tcp_fin_timeout = 30
#net.ipv4.tcp_keepalive_time = 30
net.ipv4.tcp_keepalive_time = 300
net.ipv4.ip_local_port_range = 1024 65000
Run sysctl -p to make the settings take effect.
Now for our main focus: a more complete sysctl.conf optimization guide.
To view all readable variables:
% sysctl -a
To read a specified variable, e.g. kern.maxproc:
% sysctl kern.maxproc
kern.maxproc: 1044
To set a specified variable, use the variable=value syntax directly:
# sysctl kern.maxfiles=5000
kern.maxfiles: 2088 -> 5000
You can modify system variables with the sysctl command, or by editing the sysctl.conf file. sysctl.conf looks much like rc.conf: it sets values in variable=value form. The specified values are applied after the system enters multiuser mode, though not every variable can be set in this way.
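To make the variable=value format concrete, here is a minimal shell sketch of what applying such a file involves: skip comments and blank lines, then apply each setting. `echo` stands in for the real `sysctl -w` (which needs root), and the /tmp path and sample keys are illustrative only.

```shell
# Write a tiny sysctl.conf-style file for the demonstration.
cat > /tmp/sysctl-demo.conf <<'EOF'
# comment lines and blank lines are ignored
net.ipv4.tcp_syncookies = 1

kernel.sysrq = 0
EOF

# Strip comments/blanks, normalize "key = value" to "key=value",
# and show the command that would be run for each setting.
grep -Ev '^[[:space:]]*(#|$)' /tmp/sysctl-demo.conf | while IFS='=' read -r key value; do
    key=$(echo "$key" | tr -d ' ')
    value=$(echo "$value" | tr -d ' ')
    echo "sysctl -w ${key}=${value}"
done
```

This prints one `sysctl -w` command per setting; the real `sysctl -p` applies them directly.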
A sysctl variable is usually a string, a number, or a Boolean (a Boolean uses 1 for "yes" and 0 for "no").
sysctl -w kernel.sysrq=0
sysctl -w kernel.core_uses_pid=1
sysctl -w net.ipv4.conf.default.accept_redirects=0
sysctl -w net.ipv4.conf.default.accept_source_route=0
sysctl -w net.ipv4.conf.default.rp_filter=1
sysctl -w net.ipv4.tcp_syncookies=1
sysctl -w net.ipv4.tcp_max_syn_backlog=2048
sysctl -w net.ipv4.tcp_fin_timeout=30
sysctl -w net.ipv4.tcp_synack_retries=2
sysctl -w net.ipv4.tcp_keepalive_time=3600
sysctl -w net.ipv4.tcp_window_scaling=1
sysctl -w net.ipv4.tcp_sack=1
Configuring sysctl
Edit this file:
vi /etc/sysctl.conf
If the file is empty, enter the following, or adjust it to suit your own situation:
# Controls source route verification
# Default should work for all interfaces
net.ipv4.conf.default.rp_filter = 1
# net.ipv4.conf.all.rp_filter = 1
# net.ipv4.conf.lo.rp_filter = 1
# net.ipv4.conf.eth0.rp_filter = 1
# Disables IP source routing
# Default should work for all interfaces
net.ipv4.conf.default.accept_source_route = 0
# net.ipv4.conf.all.accept_source_route = 0
# net.ipv4.conf.lo.accept_source_route = 0
# net.ipv4.conf.eth0.accept_source_route = 0
# Controls the System Request debugging functionality of the kernel
kernel.sysrq = 0
# Controls whether core dumps will append the PID to the core filename.
# Useful for debugging multi-threaded applications.
kernel.core_uses_pid = 1
# Increase maximum amount of memory allocated to SHM
# Only uncomment if needed!
# kernel.shmmax = 67108864
# Disable ICMP redirect acceptance
# Default should work for all interfaces
net.ipv4.conf.default.accept_redirects = 0
# net.ipv4.conf.all.accept_redirects = 0
# net.ipv4.conf.lo.accept_redirects = 0
# net.ipv4.conf.eth0.accept_redirects = 0
# Log spoofed packets, source-routed packets, redirect packets
# Default should work for all interfaces
net.ipv4.conf.default.log_martians = 1
# net.ipv4.conf.all.log_martians = 1
# net.ipv4.conf.lo.log_martians = 1
# net.ipv4.conf.eth0.log_martians = 1
# Decrease the default value of the tcp_fin_timeout timer
net.ipv4.tcp_fin_timeout = 25
# Decrease the default value of the tcp_keepalive_time timer
net.ipv4.tcp_keepalive_time = 1200
# Turn on tcp_window_scaling
net.ipv4.tcp_window_scaling = 1
# Turn on tcp_sack
net.ipv4.tcp_sack = 1
# tcp_fack should be on because of SACK
net.ipv4.tcp_fack = 1
# Turn on tcp_timestamps
net.ipv4.tcp_timestamps = 1
# Enable TCP SYN cookie protection
net.ipv4.tcp_syncookies = 1
# Ignore broadcast echo requests
net.ipv4.icmp_echo_ignore_broadcasts = 1
# Enable bad error message protection
net.ipv4.icmp_ignore_bogus_error_responses = 1
# Make more local ports available
# net.ipv4.ip_local_port_range = 1024 65000
# Set the TCP reordering value in the kernel to '5'
net.ipv4.tcp_reordering = 5
# Lower SYN retry counts
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 3
# Set the max SYN backlog to '2048'
net.ipv4.tcp_max_syn_backlog = 2048
# Various settings
net.core.netdev_max_backlog = 1024
# Increase the maximum number of skb heads to be cached
net.core.hot_list_length = 256
# Increase the TCP time-wait bucket pool size
net.ipv4.tcp_max_tw_buckets = 360000
# This will increase the amount of memory available for socket input/output queues
net.core.rmem_default = 65535
net.core.rmem_max = 8388608
net.ipv4.tcp_rmem = 4096 87380 8388608
net.core.wmem_default = 65535
net.core.wmem_max = 8388608
net.ipv4.tcp_wmem = 4096 65535 8388608
net.ipv4.tcp_mem = 8388608 8388608 8388608
net.core.optmem_max = 40960
If you want to stop others from pinging your host, add the following:
# Disable ping requests
net.ipv4.icmp_echo_ignore_all = 1
When editing is complete, run the following commands to make the changes take effect immediately:
/sbin/sysctl -p
/sbin/sysctl -w net.ipv4.route.flush=1
###################
All the RFC-related options are enabled by default, so guides on the web that tell you to enable RFC support yourself can be disregarded:
###############################
net.inet.ip.sourceroute=0
net.inet.ip.accept_sourceroute=0
#############################
Through source routing, an attacker can try to reach internal IP addresses, including RFC 1918 addresses,
so rejecting source-routed packets prevents your internal network from being probed.
#################################
net.inet.tcp.drop_synfin=1
###################################
A security parameter: when the kernel is compiled with the TCP_DROP_SYNFIN option, this can be used to prevent certain kinds of OS fingerprinting.
##################################
kern.maxvnodes=8446
################ #http://www.bsdlover.cn#########
A vnode is the internal representation of a file or directory. Increasing the number of vnodes available to the operating system can therefore reduce disk I/O.
Normally the operating system manages this itself, and it does not need modification. But at times disk I/O becomes a bottleneck
and the system runs short of vnodes; then this setting should be increased. Take the amount of inactive and free memory into account when doing so.
To view the number of vnodes currently in use:
# sysctl vfs.numvnodes
vfs.numvnodes: 91349
To view the maximum number of vnodes available:
# sysctl kern.maxvnodes
kern.maxvnodes: 100000
If the current number of vnodes is close to the maximum, increasing kern.maxvnodes by 1,000 may be a good idea.
Keep watching vfs.numvnodes; if it climbs close to the maximum again,
keep raising kern.maxvnodes. The memory usage shown in top(1) should shift noticeably,
with more memory in the active state.
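The check described above can be sketched in shell. The numbers are hard-coded from the sample output in the text; on a real FreeBSD machine you would read them with `sysctl -n vfs.numvnodes` and `sysctl -n kern.maxvnodes`.

```shell
numvnodes=91349    # from: sysctl -n vfs.numvnodes
maxvnodes=100000   # from: sysctl -n kern.maxvnodes

# Integer utilization percentage.
pct=$((numvnodes * 100 / maxvnodes))
echo "vnode utilization: ${pct}%"

# Close to the ceiling: raise kern.maxvnodes in steps of 1000, as suggested.
if [ "$pct" -ge 90 ]; then
    echo "consider: sysctl kern.maxvnodes=$((maxvnodes + 1000))"
fi
```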
####################################
kern.maxproc:964
####################################
Maximum number of processes
####################################
kern.maxprocperuid:867
####################################
Maximum processes allowed per user ID
####################################
Because my maxusers is set to 256, that gives 20 + 16*maxusers = 4116.
maxprocperuid must be at least 1 less than maxproc, because init(8) must always be able to run.
I set it to 2068.
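The sizing rule quoted above is easy to check with shell arithmetic (maxusers=256, as in the text):

```shell
maxusers=256
maxproc=$((20 + 16 * maxusers))   # the 20 + 16*maxusers rule
echo "kern.maxproc = $maxproc"    # 4116, matching the text

# kern.maxprocperuid must stay at least 1 below kern.maxproc,
# so that init(8) always has room to run.
maxprocperuid_ceiling=$((maxproc - 1))
echo "kern.maxprocperuid <= $maxprocperuid_ceiling"
```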
kern.maxfiles:1928
####################################
The maximum number of files the system can have open at the same time. If you run a database or other large, descriptor-hungry processes, set it above 20000;
a desktop environment such as KDE also opens a great many files.
The usual recommendation is 32768 or 65536.
####################################
kern.argmax:262144
####################################
Maximum number of bytes (or characters) in an argument list.
This is the most arguments a command line can carry. For example, when you bulk-delete files with
find . -name "*.old" -delete, if the number of files exceeds this limit, you will be told there are too many arguments.
You can use find . -name "*.old" -ok rm {} \; instead.
The default is sufficient, so no change is recommended.
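On Linux the corresponding limit can be queried with `getconf ARG_MAX`, and `xargs` sidesteps it by batching arguments so no single exec exceeds the limit. This sketch works in a scratch directory so nothing real is touched; the `*.old` pattern is just the example from the text.

```shell
# The per-exec argument-size limit (bytes), the POSIX analogue of kern.argmax.
getconf ARG_MAX

# Bulk delete without ever exceeding the limit: xargs splits the
# NUL-separated file list into batches that each fit within ARG_MAX.
dir=$(mktemp -d)
touch "$dir/a.old" "$dir/b.old" "$dir/keep.txt"
find "$dir" -name "*.old" -print0 | xargs -0 rm -f
ls "$dir"    # only keep.txt remains
```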
####################################
kern.securelevel:-1
####################################
-1: the system default level; provides no kernel protection at all;
0: adds little beyond the basics; the system boots at level 0, and on entering multiuser mode it automatically becomes level 1.
1: at this level there are several restrictions:
a. loadable kernel modules cannot be loaded or unloaded with kldload or kldunload;
b. applications cannot write to memory directly via /dev/mem or /dev/kmem;
c. mounted disks cannot be written to directly, that is, you cannot format a mounted disk, although writes through the standard kernel interfaces still work;
d. X Windows cannot be started, and chflags can no longer change file flags;
2: everything in level 1, plus disks that are not mounted cannot be written to, and the clock cannot be adjusted by more than one second at a time, which guards against denial of service from the console;
3: everything in level 2, plus IPFW firewall rules cannot be changed.
If you have installed a firewall, its rules are set, and you will not be changing them lightly, then level 3 is recommended; if you have no firewall but plan to install one, it is not recommended.
We recommend level 2, which blocks most attacks on the kernel.
####################################
kern.maxfilesperproc:1735
####################################
The maximum number of files a single process can open at once. A lot of advice on the web says 32768,
but unless you use asynchronous I/O or a very large number of threads, it is unusual for one process to legitimately open that many files.
I personally suggest leaving the default.
####################################
kern.ipc.maxsockbuf:262144
####################################
The maximum socket buffer size. Suggestions online range up to 2097152 (2 MB) and 8388608 (8 MB).
I personally recommend leaving the default 256 KB; oversized buffers can cause fragmentation, congestion, or packet loss.
####################################
kern.ipc.somaxconn:128
####################################
The maximum length of the queue of sockets waiting for connection completion, i.e. the pending-connection backlog.
Heavily loaded servers and systems under DoS attack may be unable to provide normal service because this queue is stuffed full.
The default is 128; 1024 to 4096 is recommended, adjusted to the machine and the actual workload. The larger the value, the more memory it consumes.
####################################
kern.ipc.nmbclusters:4800
####################################
This value adjusts how many clusters the system allocates for network mbufs after boot.
Each cluster is 2 KB, so a value of 1024 consumes 2 MB of kernel memory.
Suppose our web site has about 1000 concurrent connections, and each TCP connection uses 16 KB send and 16 KB receive buffers.
In the worst case we would need (16K + 16K) * 1024, i.e. 32 MB of space,
and the mbufs required are roughly twice that, 64 MB, so the number of clusters needed is 64 MB / 2 KB, which is 32768.
For machines with limited memory the recommended value is 1024 to 4096; with plenty of memory it can be set between 4096 and 32768.
You can check the number of mbufs currently in use with netstat -m.
This value must be set at boot, so the setting can only be added to /boot/loader.conf:
kern.ipc.nmbclusters=32768
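The worst-case estimate above can be reproduced with shell arithmetic (1024 connections, 16 KB send + 16 KB receive buffers, 2 KB clusters):

```shell
conns=1024
per_conn_kb=$((16 + 16))                 # send + receive buffer per connection, in KB
buf_mb=$((conns * per_conn_kb / 1024))   # 32 MB of socket buffers
mbuf_mb=$((buf_mb * 2))                  # mbufs need roughly twice that: 64 MB
clusters=$((mbuf_mb * 1024 / 2))         # 64 MB / 2 KB clusters
echo "kern.ipc.nmbclusters=$clusters"    # 32768, matching the text
```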
####################################
kern.ipc.shmmax:33554432
####################################
Shared memory and semaphores ("System V IPC"). If these are too small, some large software will fail to start.
Installing xine or MPlayer prompts you to set this to 67108864, i.e. 64 MB;
with more memory it can be set to 134217728, i.e. 128 MB.
####################################
kern.ipc.shmall:8192
####################################
Shared memory and semaphores ("System V IPC"). If these are too small, some large software will fail to start.
Installing xine or MPlayer prompts you to set this to 32768.
####################################
kern.ipc.shm_use_phys:0
####################################
If set to 1, all System V shared memory (a mechanism for communication between programs) is kept in physical memory
instead of being placed in swap space on the hard disk. We know physical memory is accessed much faster than disk, and when physical memory runs low,
some data is moved into virtual memory; that movement from physical to virtual memory is called swapping. If swapping happens often,
the disk stays busy with I/O and everything is slow. So if we have a large number of programs (hundreds) sharing a small shared-memory space,
or a large amount of shared memory in use, we can turn this on.
Personally I suggest not changing it, unless your memory is very large.
####################################
kern.ipc.shm_allow_removed:0
####################################
Whether shared memory that has been marked as removed may still be attached. Apparently VMware under FreeBSD needs this set to 1, or you get an error loading SVGA.
On a server, leave this alone.
####################################
kern.ipc.numopensockets:12
####################################
The number of sockets currently open. Check it at your busiest time, and you will know how large maxsockets should be set.
####################################
kern.ipc.maxsockets:1928
####################################
This sets the maximum number of sockets the system can open. If your server provides a lot of FTP service,
rapidly transferring many small files, you may find transfers frequently break off halfway. When FTP transfers files,
each file must open its own socket, and closing a socket takes a while; if transfers are fast
and the files numerous, the number of sockets open at the same time can exceed what the system originally allows, and then this value must be raised.
Besides FTP, other network programs can have the same problem.
However, this value must be set at system boot, so to modify it we have to edit /boot/loader.conf:
kern.ipc.maxsockets="16424"
####################################
kern.ipc.nsfbufs:1456
####################################
Busy servers that make frequent use of the sendfile(2) system call
may need to raise the number of sendfile(2) buffers, via the NSFBUFS kernel option or by setting the value in /boot/loader.conf (see loader(8) for details).
A common sign that this needs adjusting is processes stuck in the sfbufa state. The sysctl variable kern.ipc.nsfbufs is a read-only view of this kernel setting.
The default is derived from kern.maxusers, but it may still need to be raised.
Add to /boot/loader.conf:
kern.ipc.nsfbufs="2496"
####################################
kern.maxusers:59
####################################
The value of maxusers determines the process-table limit: 20 + 16*maxusers is the number of processes allowed.
The system needs 18 processes just to boot, and even running a simple command such as man spawns 9 processes.
So setting this value to 64 is a reasonable number.
If your system ever reports "proc table full", set it larger, for example 128.
Do not go above 256 unless your system really needs to have very many files open at once.
You can add the setting in /boot/loader.conf:
kern.maxusers=256
####################################
kern.coredump:1
####################################
If set to 0, no core file is generated when a program exits abnormally; not recommended for a server.
####################################
kern.corefile:%n.core
####################################
Can be set to kern.corefile="/data/coredump/%u-%p-%n.core",
where %u is the uid, %p the process ID, and %n the process name; /data/coredump must of course be a real directory.
####################################
vm.swap_idle_enabled:0
vm.swap_idle_threshold1:2
vm.swap_idle_threshold2:10
#########################
Useful on large multi-user systems where many users enter and leave the system and there are many idle processes.
It lets processes get into memory faster, at the cost of more swap and disk bandwidth.
The default page-scheduling algorithm is good; it is best not to change this.
########################
vfs.ufs.dirhash_maxmem:2097152
#########################
Maximum memory for dirhash; the default is 2 MB.
Increasing it helps read performance on large directories, when a single directory holds more than 100K files.
33554432 (32 MB) is recommended.
#############################
vfs.vmiodirenable:1
#################
This variable controls whether directories are cached by the system. Most directories are small, occupying a single fragment (typically 1 KB) in the filesystem and even less (typically 512 bytes) in the buffer cache.
With this disabled (0), the buffer cache caches only a fixed number of directories, even if you have a large amount of memory.
Enabled (1), directories may be cached through the VM page cache, making all available memory usable for directory caching.
The downside is that the minimum kernel memory used to cache a directory becomes one physical page (usually 4 KB) instead of 512 bytes.
We recommend keeping this option enabled if you run any program that manipulates large numbers of files.
Such services include web caches, large mail systems, and news systems.
Although some memory may be wasted, enabling it usually does not hurt performance. But you should test it.
####################
vfs.hirunningspace:1048576
############################
This value determines how much data the system may queue for writing to storage devices. Usually the default is fine,
but when we have several hard disks it can be raised to 4 MB or 5 MB.
Note that setting it very high (beyond the write cache's limit) can lead to bad performance.
Do not blindly raise it! High values can delay concurrent read operations.
#############################
vfs.write_behind:1
#########################
This option defaults to 1 (enabled). When enabled, instead of writing each buffer to disk as soon as it is filled,
the system waits until a full cluster of data has been collected and writes it in one go.
This helps sequential write speed for large files considerably. But if many processes stall waiting for writes, you may want to turn it off.
############################
net.local.stream.sendspace:8192
##################################
Send buffer size for local (Unix-domain) stream socket connections.
65536 is the recommended setting.
###################################
net.local.stream.recvspace:8192
##################################
Receive buffer size for local (Unix-domain) stream socket connections.
65536 is the recommended setting.
###################################
net.inet.ip.portrange.lowfirst:1023
net.inet.ip.portrange.lowlast:600
net.inet.ip.portrange.first:49152
net.inet.ip.portrange.last:65535
net.inet.ip.portrange.hifirst:49152
net.inet.ip.portrange.hilast:65535
###################
The six values above control the port ranges used by TCP and UDP, split into three segments: a low range, a default range, and a high range.
They define the ephemeral ports used when your server initiates connections; the default range already spans more than 10,000 ports, which is enough for ordinary use.
Even a fairly busy FTP server will rarely have more than 10,000 people connected at the same time.
Of course, if your server really does serve that many, you can lower the first value, for example starting straight from 1024.
#########################
net.inet.ip.redirect:1
#########################
Set to 0 to disable the IP redirect function.
###########################
net.inet.ip.rtexpire:3600
net.inet.ip.rtminexpire:10
########################
Apache generates many connections in the CLOSE_WAIT state, waiting for the client to close; but the client side often does not close properly, leaving a lot of these lying around.
Recommended to change to 2.
#########################
net.inet.ip.intr_queue_maxlen:50
########################
The maximum length of the IP input queue. If net.inet.ip.intr_queue_drops below keeps increasing,
it means your queue is running out of space, and you should consider raising this value.
##########################
net.inet.ip.intr_queue_drops:0
####################
The number of packets dropped from the IP input queue. If repeated sysctl reads show it increasing,
then raise the value of net.inet.ip.intr_queue_maxlen.
#######################
net.inet.ip.fastforwarding:0
#############################
If enabled, once a destination address is successfully forwarded, its data is recorded in the routing table and ARP table, saving routing computation time.
However, a large amount of kernel memory is needed to hold the routing table.
If memory is plentiful, turn it on.
#############################
net.inet.ip.random_id:0
#####################
By default, IP packet ID numbers are sequential, which an attacker can exploit, for example to learn how many hosts sit behind your NAT.
If set to 1, the ID numbers are randomized.
#####################
net.inet.icmp.maskrepl:0
############################
Prevents broadcast storms by not answering address-mask probes. The default is fine; no change needed.
###############################
net.inet.icmp.icmplim:200
##############################
Limits the rate at which the system sends ICMP packets. Setting it to 100 is fine, or simply keep the default; it will not put much pressure on the system.
###########################
net.inet.icmp.icmplim_output:1
###################################
If set to 0, you will not see messages like "Limiting ICMP unreach response from 214 to 200 packets per second".
But suppressing the output makes it easier to overlook an attack in progress. Use your own judgment here.
######################################
net.inet.icmp.drop_redirect:0
net.inet.icmp.log_redirect:0
###################################
Set to 1 to block the ICMP redirect feature (dropping and logging such packets, respectively).
###################################
net.inet.icmp.bmcastecho:0
############################
Prevents broadcast storms by not answering broadcast echo requests. The default is fine; no change needed.
###############################
net.inet.tcp.mssdflt:512
net.inet.tcp.minmss:216
###############################
The default and minimum TCP segment sizes. These two are best left alone! Or change only mssdflt to 1460 and leave minmss untouched.
For more details http://www.bsdlover.cn/security/2007/1211/article_4.html
#############################
net.inet.tcp.keepidle:7200000
######################
The default idle time before TCP keepalives start is too long; it can be changed to 600000 (10 minutes).
##########################
net.inet.tcp.sendspace:32768
####################################
The maximum TCP send buffer space. Once an application has placed data here, the write is considered successful; the system's TCP stack ensures the data is actually delivered.
####################################
net.inet.tcp.recvspace:65536
###################################
The maximum TCP receive buffer space, from which the system hands data out to individual sockets. Increasing it improves the system's ability to absorb bursts of incoming data, improving performance.
###################################
These two options control the send and receive buffer sizes used by TCP connections: the default send buffer is 32 KB and the default receive buffer is 64 KB.
If you need faster TCP transfers you can raise both values a little, but the downside is that oversized values make the kernel consume too much memory.
If the machine will serve hundreds or thousands of network connections, these two options are best left at their defaults, or kernel memory will run short.
But if we are on a gigabit network, tuning these two values up can bring a noticeable performance gain.
The send and receive buffer sizes can be adjusted separately;
for example, if the system is mainly a web server, we can shrink the receive buffer and enlarge the send buffer, avoiding excessive kernel memory use.
net.inet.udp.maxdgram:9216
#########################
The maximum UDP send buffer size. Most advice online says 65536; I personally don't think that is very necessary.
If you do want to tune it, try 24576.
##############################
net.inet.udp.recvspace:42080
##################
The maximum UDP receive buffer size. Most advice online says 65536; I personally don't think that is very necessary.
If you do want to tune it, try 49152.
#######################
The four settings above usually cause no problems. Generally speaking, network traffic is asymmetric, so adjust them to your actual workload and observe the effect.
If we set a buffer larger than 65535, both the server and the client operating system must support the TCP window scaling extension (see RFC 1323).
FreeBSD supports RFC 1323 by default (the sysctl net.inet.tcp.rfc1323 option).
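To see why window scaling matters for large buffers, note that without RFC 1323 a TCP connection's throughput is capped at window/RTT. A quick computation (the 100 ms RTT is an illustrative assumption):

```shell
window_bytes=65535   # the largest window without RFC 1323 scaling
rtt_ms=100           # assumed round-trip time

# Throughput ceiling in bits per second: window * 8 / RTT.
max_bps=$((window_bytes * 8 * 1000 / rtt_ms))
echo "$max_bps"      # 5242800, i.e. about 5.2 Mbit/s regardless of link speed
```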
###################################################
net.inet.tcp.log_in_vain:0
##################
Logs TCP connection attempts to ports with no listener; normally this should not be changed.
####################
net.inet.tcp.blackhole:0
##################################
The recommended setting is 2: all packets arriving at a closed port are dropped directly, with no reply sent; if set to 1, only TCP SYN segments are dropped this way.
#####################################
net.inet.tcp.delayed_ack:1
###########################
When a machine receives TCP data, the system replies with an ACK packet.
This option controls whether the ACK is delayed so that it can be sent together with a packet carrying data.
In fast networks under light load this improves performance slightly, but when network connectivity is poor,
if the other side does not get a timely answer it keeps retransmitting its request, making the network even more congested and performance worse.
So judge by your situation: if your link quality is no problem, leaving this on roughly halves the number of packets;
if the network is not particularly good, set it to 0 and answer every request immediately; in effect you spend ISP bandwidth instead of your processing time. :)
############################
net.inet.tcp.inflight.enable:1
net.inet.tcp.inflight.debug:0
net.inet.tcp.inflight.rttthresh:10
net.inet.tcp.inflight.min:6144
net.inet.tcp.inflight.max:1073725440
net.inet.tcp.inflight.stab:20
###########################
Limiting the TCP bandwidth-delay product is similar to NetBSD's TCP/Vegas.
It can be enabled by setting the sysctl variable net.inet.tcp.inflight.enable to 1.
The system then attempts to compute the bandwidth-delay product for each connection and limit the amount of queued data to just the level that maintains optimal throughput.
This feature is valuable when the server connects over modems, gigabit Ethernet, or even faster optical links (or any path with a large bandwidth-delay product),
especially when window scaling or a large send window is in use.
If you enable this option, also set net.inet.tcp.inflight.debug to 0 (disable debugging).
For production use, setting net.inet.tcp.inflight.min to at least 6144 is advantageous.
Note, however, that setting this minimum too large is effectively equivalent to disabling the bandwidth-delay-product limiting.
The limiting feature reduces the amount of data queued in routers and switches, and also reduces the amount queued on the local host's interface.
With fewer packets queued, interactive connections, especially over slow modems, also get lower round-trip times.
But note that this only affects data transmission (the upload/server side). It has no effect on data reception (download).
Adjusting net.inet.tcp.inflight.stab is not recommended.
Its default value is 20, meaning 2 maximal packets are added to the bandwidth-delay-product window calculation.
The extra window makes the algorithm more stable and improves responsiveness to changing conditions,
but it can also lengthen ping times over slow links (though still far less than without the inflight algorithm).
In such cases, you might reduce this parameter to 15, 10, or 5,
and you may also have to reduce net.inet.tcp.inflight.min (say, to 3500) to get the desired effect.
Reducing these parameters should be a last resort.
############################
net.inet.tcp.syncookies:1
#########################
SYN cookies are a technique for mitigating SYN flood attacks: the initial TCP sequence number is chosen cryptographically and validated in the returning packet, so no state is kept for half-open connections.
The default is fine; no change needed.
########################
net.inet.tcp.msl:30000
#######################
Many articles online recommend 7500 for this value;
it can also be lowered further (to 2000 or 2500, say) to speed up the release of abnormal connections (2 seconds for the three-way handshake, 4 seconds for FIN_WAIT).
#########################
net.inet.tcp.always_keepalive:1
###########################
Helps the system clear TCP connections that were never properly closed. This uses a little extra network bandwidth, but dead connections do eventually get identified and purged.
Dead TCP connections are a particular problem on systems serving dial-up users, who often disconnect the modem without properly closing their active connections.
#############################
net.inet.udp.checksum:1
#########################
Guards against attacks using malformed UDP packets. The default is fine; no change needed.
##############################
net.inet.udp.log_in_vain:0
#######################
Logs UDP datagrams sent to ports with no listener; normally this should not be changed.
#######################
net.inet.udp.blackhole:0
####################
The recommended setting is 1: UDP packets sent to a closed port are dropped directly, with no reply sent.
#######################
net.inet.raw.maxdgram:8192
#########################
Maximum outgoing raw IP datagram size.
Many articles recommend setting it to 65536; it does not seem very necessary.
######################################
net.inet.raw.recvspace:8192
######################
Maximum incoming raw IP datagram size.
Many articles recommend setting it to 65536; it does not seem very necessary.
#######################
net.link.ether.inet.max_age:1200
####################
Controls how long ARP entries are kept before being cleaned out. By stuffing the cache with spoofed ARP entries, a malicious user can mount resource-exhaustion and performance-degradation attacks.
This generally does not need changing; I suggest leaving it alone, or reducing it slightly (HP-UX defaults to 5 minutes).
#######################
net.inet6.ip6.redirect:1
###############################
Set to 0 to disable the IPv6 redirect function.
###########################
net.isr.direct:0
####################################
Set to 1 so that all MPSAFE network ISRs respond to packets immediately, improving NIC performance.
####################################
hw.ata.wc:1
#####################
This option enables the IDE drive's write cache. When enabled, if there is data to write, the drive pretends the write has completed and quickly accepts the data.
This speeds up disk access, but if the system shuts down uncleanly it makes data loss more likely.
Still, the speed penalty of disabling it is too large, so I suggest keeping it enabled and leaving this alone.
###################
security.bsd.see_other_uids:1
security.bsd.see_other_gids:1
#####################
To prevent users from seeing other users' processes, these should be changed to 0.
#######################