Linux Memory Management: Kernel shmall and shmmax Parameters (Performance Tuning)


The kernel's shmall and shmmax parameters

shmmax = the maximum size of a single shared memory segment. This setting should be larger than Oracle's sga_max_size.

shmmin = the minimum size of a shared memory segment.

shmmni = the maximum number of shared memory segments for the entire system.

shmseg = the maximum number of shared memory segments that can be used per process.

Parameters for configuring semaphores:

semmsl = the number of semaphores per semaphore set. This should be greater than the number of your processes; otherwise you have to split across multiple semaphore sets (the rule of thumb is roughly processes + n, where the author no longer recalls the exact n).

semmni = the total number of semaphore sets for the entire system.

semmns = the total number of semaphores for the entire system.

shmall is the total amount of shared memory allowed, and shmmax is the size allowed for a single segment. Both can be sized from about 90% of memory. For example, with 16 GB of memory: 16*1024*1024*1024*90% = 15461882265 bytes for shmmax; shmall is counted in pages, so 15461882265 / 4096 (the page size, available via getconf PAGESIZE) = 3774873.
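The arithmetic above can be reproduced with a short shell sketch. The 16 GB figure, the 90% factor, and the 4 KB page size are the article's example assumptions, not values probed from a live system:

```shell
# Sizing shmmax/shmall for a hypothetical 16 GB machine with 4 KB pages
# (on a real host, check the page size with: getconf PAGESIZE).
mem_bytes=$((16 * 1024 * 1024 * 1024))
shmmax=$((mem_bytes * 90 / 100))     # largest single segment, in bytes
page_size=4096
shmall=$((shmmax / page_size))       # shmall is counted in pages
echo "kernel.shmmax=$shmmax"         # kernel.shmmax=15461882265
echo "kernel.shmall=$shmall"         # kernel.shmall=3774873
```

The two echoed lines match the values placed in /etc/sysctl.conf below.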

Modify /etc/sysctl.conf:

kernel.shmmax=15461882265

kernel.shmall=3774873

kernel.msgmax=65535

kernel.msgmnb=65535

Then execute sudo sysctl -p.

You can use ipcs -l to see the results; ipcs -u shows the actual usage.

Linux Memory Management

I. Preface

This document targets the OOP8 production environment; the specific optimization strategies need to be adjusted according to the actual situation. The following sections explain how to tune performance on Red Hat Enterprise Linux from two angles:

1) The Linux proc file system: tuning through the proc file system to achieve performance optimization.

2) Linux performance diagnostic tools: how to use the diagnostic tools shipped with Linux for performance diagnosis.

Bold Italic represents a command that can be run directly.

Underlined text indicates file contents.

II. /proc/sys/kernel Optimization

1)/proc/sys/kernel/ctrl-alt-del

This file holds a binary value that controls how the system reacts when it receives the Ctrl+Alt+Delete key combination. The two values mean:

0: capture Ctrl+Alt+Delete and send it to the init program, allowing the system to shut down and restart safely, as if the shutdown command had been entered.

1: do not capture Ctrl+Alt+Delete; perform an abnormal shutdown, as if the power had been cut.

Default setting: 0

Recommended setting: 1, to prevent an accidental Ctrl+Alt+Delete from causing an abnormal system restart.

2) /proc/sys/kernel/msgmax

This file specifies the maximum length (in bytes) of a message sent from one process to another. Inter-process messages are held in kernel memory and are never swapped to disk, so increasing this value increases the amount of memory used by the operating system.

Default setting: 8192

3) /proc/sys/kernel/msgmnb

The file specifies the maximum length of a message queue (bytes).

Default setting: 16384

4)/proc/sys/kernel/msgmni

This file specifies the maximum number of message queue identifiers, i.e., the system-wide maximum number of message queues.

Default setting: 16

5)/proc/sys/kernel/panic

This file indicates the time (in seconds) the kernel waits before rebooting when a kernel panic occurs.

A value of 0 means automatic reboot is disabled after a kernel panic.

Default setting: 0

6) /proc/sys/kernel/shmall

This file sets the total amount of shared memory that can be used on the system at any given moment. Note that it is measured in pages, not bytes.

Default setting: 2097152
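Since shmall is counted in pages on most kernels (consistent with the page-size division in the shmall/shmmax calculation earlier in this article), the default above corresponds to 8 GiB of total shared memory. A quick sketch, assuming 4 KB pages:

```shell
# Convert the default shmall page count into bytes (4 KB pages assumed;
# a real host's page size comes from: getconf PAGESIZE).
shmall_pages=2097152
page_size=4096
shmall_bytes=$((shmall_pages * page_size))
echo "$shmall_bytes bytes"   # 8589934592 bytes = 8 GiB
```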

7)/proc/sys/kernel/shmmax

This file represents the size of the maximum shared memory segment allowed by the kernel (bytes).

Default setting: 33554432

Recommended setting: Physical Memory * 50%

The actual usable maximum shared memory segment size = shmmax * 98%; roughly 2% is consumed by shared memory bookkeeping structures.

You can verify this by setting shmmax and then running ipcs -l.
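A minimal sketch of the 98% rule, applied to the default shmmax of 32 MiB:

```shell
# Usable segment size under the ~2% bookkeeping overhead, using the
# default shmmax as the example value.
shmmax=33554432
usable=$((shmmax * 98 / 100))
echo "$usable"   # 32883343 bytes actually available to applications
```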

8)/proc/sys/kernel/shmmni

This file sets the maximum number of shared memory segments for the entire system.

Default setting: 4096

9)/proc/sys/kernel/threads-max

This file represents the maximum number of threads that the kernel can use.

Default setting: 2048

10) /proc/sys/kernel/sem

This file controls the kernel semaphores, which are used by System V IPC for interprocess communication.

Recommended setting: 250 32000 100 128

The first column is the maximum number of semaphores per semaphore set (semmsl).

The second column is the system-wide maximum total number of semaphores (semmns).

The third column is the maximum number of operations per semop call (semopm).

The fourth column is the system-wide maximum number of semaphore sets (semmni).

So, (first column) * (fourth column) = (second column).

The above settings can be verified by running ipcs -l.
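The recommended values can also be sanity-checked against the stated relationship. The field names below follow the conventional System V naming and are not spelled out in the original:

```shell
# kernel.sem fields (recommended: 250 32000 100 128), conventional names:
# semmsl (per-set), semmns (total), semopm (ops per semop), semmni (sets).
semmsl=250; semmns=32000; semopm=100; semmni=128
# the stated relationship: per-set count * number of sets = total count
[ $((semmsl * semmni)) -eq "$semmns" ] && echo "consistent"
```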

III. /proc/sys/vm Optimization

1)/proc/sys/vm/block_dump

This file indicates whether block-debug mode is enabled; when on, it records all read/write and dirty-block writeback operations.

Default setting: 0, disable block debug mode

2)/proc/sys/vm/dirty_background_ratio

This file sets the percentage of total system memory that dirty data may reach before the pdflush process is triggered to write dirty data back to disk.

Default setting: 10

3)/proc/sys/vm/dirty_expire_centisecs

This file indicates that if dirty data has resided in memory longer than this value, the pdflush process will write it back to disk on its next pass.

Default setting: 3000 (1/100 sec)

4)/proc/sys/vm/dirty_ratio

This file indicates that if the dirty data generated by a single process reaches this percentage of total system memory, the process itself writes its dirty data back to disk.

Default setting: 40

5)/proc/sys/vm/dirty_writeback_centisecs

This file indicates how often the pdflush process wakes up to write dirty data back to disk.

Default setting: 500 (1/100 sec)

6)/proc/sys/vm/vfs_cache_pressure

This file controls how aggressively the kernel reclaims the memory used by directory (dentry) and inode caches. At the default value of 100, the kernel keeps the dentry and inode caches at a reasonable ratio relative to pagecache and swapcache. Lowering the value below 100 makes the kernel prefer to retain the dentry and inode caches; raising it above 100 makes the kernel prefer to reclaim them.

Default setting: 100

7)/proc/sys/vm/min_free_kbytes

This file sets the minimum amount of free memory (in kilobytes) that the Linux VM keeps reserved.

Default setting: 724 (on a system with 512 MB of physical memory)

8)/proc/sys/vm/nr_pdflush_threads

This file shows the number of pdflush processes currently running; under high I/O load the kernel automatically starts more pdflush processes.

Default setting: 2 (Read only)

9)/proc/sys/vm/overcommit_memory

This file specifies the kernel's memory allocation policy; it can be 0, 1, or 2.

0: the kernel checks whether enough available memory exists for the request; if so, the allocation is allowed, otherwise it fails and an error is returned to the process.

1: the kernel allows all allocations regardless of the current memory state.

2: the kernel limits total allocations to swap space plus a fraction of physical memory (see overcommit_ratio).

Default setting: 0

10) /proc/sys/vm/overcommit_ratio

When overcommit_memory = 2, this file sets the percentage of physical memory that may be committed; the system's total allocatable memory is then given by the following formula:

System allocatable memory = swap space + physical memory * overcommit_ratio / 100

Default setting: 50 (%)
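A worked example of the formula, with a hypothetical host (4 GiB swap, 16 GiB RAM, the default ratio of 50); the host's sizes are assumptions for illustration:

```shell
# Commit limit under overcommit_memory=2 (all values in KB).
swap_kb=$((4 * 1024 * 1024))
phys_kb=$((16 * 1024 * 1024))
ratio=50
limit_kb=$((swap_kb + phys_kb * ratio / 100))
echo "$limit_kb KB"   # 12582912 KB = 12 GiB commit limit
```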

11) /proc/sys/vm/page-cluster

This file sets the number of pages written to swap in a single attempt, as a power of two: 0 means 1 page, 1 means 2 pages, 2 means 4 pages.

Default setting: 3 (2^3 = 8 pages)
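The power-of-two relationship in shell form:

```shell
# page-cluster is an exponent: the kernel swaps 2^page_cluster pages at once.
page_cluster=3
pages=$((1 << page_cluster))
echo "$pages"   # 8 pages per swap write at the default of 3
```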

12) /proc/sys/vm/swappiness

This file sets how aggressively the system swaps: the higher the value (0-100), the more readily pages are swapped to disk.

Default setting: 60

13) /proc/sys/vm/legacy_va_layout

This file selects the 32-bit mmap() address layout. (Linux supports several shared memory mechanisms, including mmap(), POSIX shared memory, and System V IPC.)

0: use the new 32-bit mmap() layout.

1: use the layout provided by the 2.4 kernel.

Default setting: 0

14) /proc/sys/vm/nr_hugepages

The file represents the number of hugetlb pages reserved by the system.

15) /proc/sys/vm/hugetlb_shm_group

This file sets the system group ID that is allowed to create System V IPC shared memory segments backed by hugetlb pages.

IV. /proc/sys/fs Optimization

1)/proc/sys/fs/file-max

This file specifies the maximum number of file handles that can be allocated. If users see errors saying they cannot open more files because the maximum number of open files has been reached, this value may need to be increased.

Default setting: 4096

Recommended setting: 65536

2)/proc/sys/fs/file-nr

This file is related to file-max and holds three values:

Number of allocated file handles

The number of file handles that have been used

Maximum number of file handles

The file is read-only and is used only to display information.
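A sketch of pulling the three fields apart. The sample line is made up for illustration; a real script would read /proc/sys/fs/file-nr instead:

```shell
# Split the three whitespace-separated file-nr fields
# (hypothetical sample; on a live system use /proc/sys/fs/file-nr).
sample="1952 0 65536"
set -- $sample
allocated=$1; in_use=$2; maximum=$3
echo "allocated=$allocated in_use=$in_use max=$maximum"
```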

V. /proc/sys/net/core Optimization

The configuration file under this directory is primarily used to control the interaction between the kernel and the network layer.

1)/proc/sys/net/core/message_burst

This file sets the time, in tenths of a second, required before a new warning message can be written; other warning messages received during that interval are discarded. This guards against denial-of-service attacks that try to flood the system with messages.

Default setting: 50 (5 seconds)

2)/proc/sys/net/core/message_cost

This file sets a cost value associated with writing each warning message: the larger the value, the more likely warning messages are to be skipped.

Default setting: 5

3)/proc/sys/net/core/netdev_max_backlog

This file sets the maximum number of packets allowed to queue when a network interface receives packets faster than the kernel can process them.

Default setting: 300

4)/proc/sys/net/core/optmem_max

This file sets the maximum amount of option (ancillary) buffer memory allowed per socket.

Default setting: 10240

5)/proc/sys/net/core/rmem_default

This file specifies the default receive socket buffer size, in bytes.

Default setting: 110592

6)/proc/sys/net/core/rmem_max

This file specifies the maximum receive socket buffer size, in bytes.

Default setting: 131071

7)/proc/sys/net/core/wmem_default

The file specifies the default value (in bytes) for the send socket buffer size.

Default setting: 110592

8)/proc/sys/net/core/wmem_max

The file specifies the maximum size, in bytes, of the send socket buffer.

Default setting: 131071

VI. /proc/sys/net/ipv4 Optimization

1)/proc/sys/net/ipv4/ip_forward

This file indicates whether IP forwarding is enabled.

0, forwarding disabled

1, forwarding enabled

Default setting: 0

2)/proc/sys/net/ipv4/ip_default_ttl

This file sets the default time-to-live (TTL) of outgoing datagrams, i.e., the maximum number of routers they may pass through.

Default setting: 64

Increasing this value can degrade system performance.

3)/proc/sys/net/ipv4/ip_no_pmtu_disc

This file indicates whether path MTU discovery is disabled globally.

Default setting: 0

4) /proc/sys/net/ipv4/route/min_pmtu

The file represents the size of the minimum path MTU.

Default setting: 552

5)/proc/sys/net/ipv4/route/mtu_expires

This file sets how long (in seconds) PMTU information is cached.

Default setting: 600 (seconds)

6)/proc/sys/net/ipv4/route/min_adv_mss

This file sets the minimum advertised MSS (Maximum Segment Size), which depends on the MTU of the first-hop router.

Default setting: (bytes)

6.1 IP Fragmentation

1) /proc/sys/net/ipv4/ipfrag_low_thresh and /proc/sys/net/ipv4/ipfrag_high_thresh

These two files set the minimum and maximum amounts of memory used to reassemble IP fragments; once the maximum is reached, further fragments are discarded until memory use falls back to the minimum.

Default settings: 196608 (ipfrag_low_thresh), 262144 (ipfrag_high_thresh)

2)/proc/sys/net/ipv4/ipfrag_time

The file represents how many seconds an IP fragment is retained in memory.

Default setting: 30 (seconds)

6.2 INET Peer Storage

1)/proc/sys/net/ipv4/inet_peer_threshold

This file sets the approximate size of the INET peer storage; when the threshold is exceeded, entries are discarded. The threshold also determines entries' time-to-live and the garbage-collection interval: the more entries, the shorter the time-to-live and the shorter the GC interval.

Default setting: 65664

2)/proc/sys/net/ipv4/inet_peer_minttl

The minimum time-to-live of entries, which must be long enough to cover the fragment lifetime on the reassembling side. This minimum TTL is guaranteed while the buffer pool size stays below inet_peer_threshold. The value is measured in jiffies.

Default setting: 120

3)/proc/sys/net/ipv4/inet_peer_maxttl

The maximum time-to-live of entries. Unused entries expire after this period when there is no memory pressure on the pool (for example, when the number of entries is very small). The value is measured in jiffies.

Default setting: 600

4)/proc/sys/net/ipv4/inet_peer_gc_mintime

The minimum interval between garbage-collection (GC) passes; this interval applies when memory pressure on the pool is high. The value is measured in jiffies.

Default setting: 10

5)/proc/sys/net/ipv4/inet_peer_gc_maxtime

The maximum interval between garbage-collection (GC) passes; this interval applies when memory pressure on the pool is low. The value is measured in jiffies.

Default setting: 120

6.3 TCP Variables

1)/proc/sys/net/ipv4/tcp_syn_retries

This file sets the number of times the kernel retransmits the SYN for an outgoing TCP connection attempt before timing out; it should not be higher than 255. It applies only to outgoing connections; for incoming connections the count is controlled by tcp_retries1.

Default setting: 5

2)/proc/sys/net/ipv4/tcp_keepalive_probes

This file sets the maximum number of TCP keepalive probes sent before a TCP connection is dropped. Keepalive probes are sent only if the SO_KEEPALIVE socket option is enabled.

Default setting: 9 (Times)

3)/proc/sys/net/ipv4/tcp_keepalive_time

This file sets the number of seconds a connection must be idle (no data transmitted) before TCP begins sending keepalive probes.

Default setting: 7200 (2 hours)

4) /proc/sys/net/ipv4/tcp_keepalive_intvl

This file sets the interval between TCP keepalive probes; multiplied by tcp_keepalive_probes, it gives the time after which an unresponsive connection is dropped.

Default setting: 75 (seconds)
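Putting the three keepalive knobs together: the time to declare a silent peer dead is keepalive_time + intvl * probes. With the defaults quoted above:

```shell
# Dead-peer detection time from the default keepalive settings.
keepalive_time=7200   # idle seconds before probing starts
intvl=75              # seconds between probes
probes=9              # unanswered probes before the drop
total=$((keepalive_time + intvl * probes))
echo "$total seconds"   # 7875 s, just under 2 hours 12 minutes
```

This is why servers that want faster dead-connection cleanup lower tcp_keepalive_time, as the optimization strategy at the end of this article does.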

5)/proc/sys/net/ipv4/tcp_retries1

This file sets the number of retransmissions attempted before giving up on answering a TCP connection request.

Default setting: 3

6)/proc/sys/net/ipv4/tcp_retries2

This file sets the number of times a TCP packet is retransmitted on an established connection before giving up.

Default setting: 15

7)/proc/sys/net/ipv4/tcp_orphan_retries

How many retries before the local end drops a TCP connection. The default value of 7 corresponds to roughly 50 seconds to 16 minutes, depending on the RTO. If your system is a heavily loaded web server, consider lowering this value, since such sockets can consume a lot of resources. See also tcp_max_orphans.

8)/proc/sys/net/ipv4/tcp_fin_timeout

For a socket connection closed by the local end, this is how long TCP remains in the FIN-WAIT-2 state. The peer may fail, never close its side, or even die unexpectedly. The default value is 60 seconds; it was 180 seconds in 2.2 kernels. You can lower this value, but beware that on a heavily loaded web server you risk memory filling up with a flood of dead connections: FIN-WAIT-2 sockets are less dangerous than FIN-WAIT-1, since they consume at most 1.5K of memory each, but they live longer. See also tcp_max_orphans.

Default setting: 60 (seconds)

9)/proc/sys/net/ipv4/tcp_max_tw_buckets

The maximum number of TIME-WAIT sockets held by the system simultaneously. If this number is exceeded, TIME-WAIT sockets are immediately destroyed and a warning is printed. This limit exists purely to defend against simple DoS attacks; do not lower it artificially, but if network conditions demand more than the default, you can increase it (perhaps also adding memory).

Default setting: 180000

10) /proc/sys/net/ipv4/tcp_tw_recycle

Enables fast recycling of TIME-WAIT sockets. Do not change this value unless advised or requested by a technical expert.

Default setting: 0

11) /proc/sys/net/ipv4/tcp_tw_reuse

This file indicates whether sockets in the TIME-WAIT state may be reused for new TCP connections.

Default setting: 0

12) /proc/sys/net/ipv4/tcp_max_orphans

The maximum number of TCP sockets not attached to any process that the system can handle. If this number is exceeded, orphaned connections are immediately reset and a warning is printed. This limit exists only to defend against simple DoS attacks; do not rely on it or artificially lower the limit.

Default setting: 8192

13) /proc/sys/net/ipv4/tcp_abort_on_overflow

When the listening service is too busy to accept new connections, send a reset to the peer. The default is false, meaning that if the overflow was caused by an accidental burst, the connection can recover. Enable this option only if you are certain the listening daemon genuinely cannot complete connection requests; it affects clients' use of the service.

Default setting: 0

14) /proc/sys/net/ipv4/tcp_syncookies

This file indicates whether TCP syncookies are enabled; the kernel must be compiled with the CONFIG_SYN_COOKIES option. Syncookies prevent a socket from being overwhelmed when too many connection attempts arrive.

Default setting: 0

15) /proc/sys/net/ipv4/tcp_stdurg

Use the host-requirements interpretation of the TCP URG pointer field. Most hosts use the older BSD interpretation, so if you enable this on Linux, Linux may fail to communicate with them correctly.

Default setting: 0

16) /proc/sys/net/ipv4/tcp_max_syn_backlog

The maximum number of queued connection requests that have not yet received a client acknowledgment. For systems with more than 128 MB of memory the default is 1024; with less than 128 MB it is 128. If the server is frequently overloaded, try increasing this value. Warning: if you set it greater than 1024, it is best to modify TCP_SYNQ_HSIZE in include/net/tcp.h so that TCP_SYNQ_HSIZE * 16 <= tcp_max_syn_backlog, and recompile the kernel.

Default setting: 1024

17) /proc/sys/net/ipv4/tcp_window_scaling

This file indicates whether the sliding window size of a TCP/IP session is variable. The value is boolean: 1 means variable, 0 means fixed. TCP/IP normally uses windows of up to 65535 bytes, which can be too small for high-speed networks; enabling this option allows the sliding window to grow by orders of magnitude, improving the ability to transfer data.

Default setting: 1

18) /proc/sys/net/ipv4/tcp_sack

This file indicates whether selective acknowledgment (SACK) is enabled. SACK can improve performance by acknowledging out-of-order segments selectively, which lets the sender retransmit only the missing segments. This option should be enabled (it matters especially for WAN traffic), but it increases CPU usage.

Default setting: 1

19) /proc/sys/net/ipv4/tcp_timestamps

This file indicates whether to enable TCP timestamps, a more precise way to compute RTT than retransmission timeouts (see RFC 1323); this option should be enabled for better performance.

Default setting: 1

20) /proc/sys/net/ipv4/tcp_fack

This file indicates whether FACK congestion avoidance and fast retransmission are enabled.

Default setting: 1

21) /proc/sys/net/ipv4/tcp_dsack

This file indicates whether TCP is allowed to send duplicate SACKs (D-SACK).

Default setting: 1

22) /proc/sys/net/ipv4/tcp_ecn

This file indicates whether TCP explicit congestion notification (ECN) is enabled.

Default setting: 0

23) /proc/sys/net/ipv4/tcp_reordering

This file sets the maximum number of reordered datagrams tolerated in a TCP stream.

Default setting: 3

24) /proc/sys/net/ipv4/tcp_retrans_collapse

This file indicates whether to remain compatible with certain buggy printers by sending larger packets on retransmit to work around bugs in their TCP stacks.

Default setting: 1

25) /proc/sys/net/ipv4/tcp_wmem

The file contains three integer values: min, default, max.

min: the minimum amount of memory reserved for the send buffer of each TCP socket; every TCP socket can use it.

default: the default amount of memory for a TCP socket's send buffer. This value overrides the net.core.wmem_default used by other protocols and is usually lower than net.core.wmem_default.

max: the maximum amount of memory allowed for a TCP socket's send buffer. This value does not override net.core.wmem_max; sockets that set SO_SNDBUF are not limited by this value. The default is 128K.

Default setting: 4096 16384 131072

26) /proc/sys/net/ipv4/tcp_rmem

The file contains three integer values: min, default, max.

min: the minimum amount of memory reserved for the receive buffer of each TCP socket; even under memory pressure, a TCP socket is guaranteed at least this much receive buffering.

default: the default amount of memory for a TCP socket's receive buffer. This value overrides the net.core.rmem_default used by other protocols. With the default values of tcp_adv_win_scale and tcp_app_win, it yields a TCP window size of 65535.

max: the maximum amount of memory for a TCP socket's receive buffer. This value does not override net.core.rmem_max; sockets that set SO_RCVBUF are not limited by this value.

Default setting: 4096 87380 174760

27) /proc/sys/net/ipv4/tcp_mem

The file contains three integer values: low, pressure, high.

low: while TCP is using fewer memory pages than this value, it does not consider releasing memory.

pressure: when TCP uses more memory pages than this value, it tries to stabilize its memory usage and enters pressure mode; it exits pressure mode when usage falls below low.

high: the maximum number of memory pages all TCP sockets may use for queuing buffered datagrams.

In general, these values are computed from the amount of system memory at boot.

Default setting: 24576 32768 49152

28) /proc/sys/net/ipv4/tcp_app_win

This file sets the number of window bytes reserved for application buffering: max(window/2^tcp_app_win, MSS). A value of 0 means no bytes are reserved.

Default setting: 31

29) /proc/sys/net/ipv4/tcp_adv_win_scale

This file sets how the buffering overhead is calculated: bytes/2^tcp_adv_win_scale (if tcp_adv_win_scale > 0) or bytes - bytes/2^(-tcp_adv_win_scale) (if tcp_adv_win_scale <= 0).

Default setting: 2
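With the default tcp_adv_win_scale of 2, the overhead formula explains why the tcp_rmem default middle value of 87380 bytes yields exactly the classic 65535-byte window. A sketch:

```shell
# overhead = bytes / 2^tcp_adv_win_scale, for the positive-scale case.
bytes=87380   # tcp_rmem default
scale=2       # tcp_adv_win_scale default
overhead=$((bytes / (1 << scale)))
window=$((bytes - overhead))
echo "$window"   # 65535
```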

6.4 IP Variables

1)/proc/sys/net/ipv4/ip_local_port_range

This file sets the range of local port numbers used by TCP and UDP.

Default setting: 1024 4999

Recommended settings: 32768 61000
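A quick check of how many ephemeral ports the recommended range provides:

```shell
# Count of usable local ports in the recommended ip_local_port_range.
low=32768
high=61000
ports=$((high - low + 1))
echo "$ports"   # 28233 ephemeral ports
```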

2)/proc/sys/net/ipv4/ip_nonlocal_bind

This file indicates whether processes are allowed to bind to non-local addresses.

Default setting: 0

3)/proc/sys/net/ipv4/ip_dynaddr

This file indicates whether dynamic address rewriting is allowed: a non-zero value enables it, and a value greater than 1 makes the kernel log the rewrites. The parameter is typically used with dial-up connections: it lets the system immediately rewrite an IP packet's source address to the new IP address, aborting the original TCP conversation and re-issuing a SYN request with the new address to start a new one. When IP masquerading is in use, this parameter likewise lets the masqueraded address switch to the new IP address immediately.

Default setting: 0

4) /proc/sys/net/ipv4/icmp_echo_ignore_all and /proc/sys/net/ipv4/icmp_echo_ignore_broadcasts

These files indicate whether the kernel ignores all ICMP echo requests (icmp_echo_ignore_all), or ignores those sent to broadcast and multicast addresses (icmp_echo_ignore_broadcasts).

0, respond to requests

1, ignore requests

Default setting: 0

Recommended setting: 1

5)/proc/sys/net/ipv4/icmp_ratelimit

6)/proc/sys/net/ipv4/icmp_ratemask

7) /proc/sys/net/ipv4/icmp_ignore_bogus_error_responses

Some routers violate RFC 1122 by sending forged responses to broadcast frames. Such violations are normally logged as warnings in the system log. If this option is set to true, the kernel does not log these warnings.

Default setting: 0

8)/proc/sys/net/ipv4/igmp_max_memberships

This file sets the maximum number of multicast groups that may be joined.

Default setting: 20

6.5 Other Configuration

1)/proc/sys/net/ipv4/conf/*/accept_redirects

If there are two routers on the host's network segment and you set one as the default gateway, then when that gateway receives your IP packet and finds it must be forwarded through the other router, it sends you a "redirect" ICMP packet telling you to send such packets via the other router. The value is boolean: 1 accepts such redirect ICMP messages, 0 ignores them. On a Linux host acting as a router the default is 0; on an ordinary Linux host the default is 1. It is recommended to set it to 0 to eliminate a security risk.

2)/proc/sys/net/ipv4/*/accept_source_route

Whether to accept IP packets containing source-routing information. The value is boolean: 1 accepts, 0 rejects. On a Linux host acting as a gateway the default is 1; on an ordinary Linux host the default is 0. From a security standpoint, it is recommended to disable this feature.

3)/proc/sys/net/ipv4/*/secure_redirects

So-called "secure redirection" means accepting "redirect" ICMP messages only from gateways. This parameter enables or disables that feature: 1 enables, 0 disables; it is enabled by default.

4)/proc/sys/net/ipv4/*/proxy_arp

Sets whether to relay ARP packets on the network: 1 relays, 0 ignores; the default is 0. This parameter is typically useful only on Linux hosts acting as routers.

VII. Performance Optimization Strategy

7.1 Basic Optimization

1) Turn off background daemon

When the system is installed, some daemons are started by default that are not all required; shutting down the unneeded ones saves some physical memory. Log in as root, run ntsysv, and select only the following services:

Iptables

Network

Syslog

Random

Apmd

xinetd

Vsftpd

Crond

Local

After making the changes, restart the system; it will then start only the selected daemons.

2) Reduce the number of terminal connections

The system starts 6 terminals by default, but only 3 are actually needed. Log in as root, run vi /etc/inittab, and modify it as follows:

# Run Gettys in standard runlevels

1:2345:respawn:/sbin/mingetty tty1

2:2345:respawn:/sbin/mingetty tty2

3:2345:respawn:/sbin/mingetty tty3

#4:2345:respawn:/sbin/mingetty tty4

#5:2345:respawn:/sbin/mingetty tty5

#6:2345:respawn:/sbin/mingetty tty6

Comment out the 4, 5, and 6 terminals as described above.

7.2 Network optimization

1) Optimizing the system socket buffer

net.core.rmem_max=16777216

net.core.wmem_max=16777216

2) Optimize TCP receive/send buffers

net.ipv4.tcp_rmem=4096 87380 16777216

net.ipv4.tcp_wmem=4096 65536 16777216

3) Optimize the network device receive queue

net.core.netdev_max_backlog=3000

4) Turn off routing-related features

net.ipv4.conf.lo.accept_source_route=0

net.ipv4.conf.all.accept_source_route=0

net.ipv4.conf.eth0.accept_source_route=0

net.ipv4.conf.default.accept_source_route=0

net.ipv4.conf.lo.accept_redirects=0

net.ipv4.conf.all.accept_redirects=0

net.ipv4.conf.eth0.accept_redirects=0

net.ipv4.conf.default.accept_redirects=0

net.ipv4.conf.lo.secure_redirects=0

net.ipv4.conf.all.secure_redirects=0

net.ipv4.conf.eth0.secure_redirects=0

net.ipv4.conf.default.secure_redirects=0

net.ipv4.conf.lo.send_redirects=0

net.ipv4.conf.all.send_redirects=0

net.ipv4.conf.eth0.send_redirects=0

net.ipv4.conf.default.send_redirects=0

5) Optimizing the TCP protocol stack

Enable TCP SYN cookies to help protect the server from SYN flood attacks.

net.ipv4.tcp_syncookies=1

Enable TIME-WAIT socket recycling and reuse, which is very effective for web servers handling a large number of connections.

net.ipv4.tcp_tw_recycle=1

net.ipv4.tcp_tw_reuse=1

Reduce the time spent in the FIN-WAIT-2 state, allowing the system to handle more connections.

net.ipv4.tcp_fin_timeout=30

Reduce the TCP keepalive idle time, so dead connections are detected sooner and the system can handle more connections.

net.ipv4.tcp_keepalive_time=1800

Increase the TCP SYN queue length so that the system can handle more concurrent connections.

net.ipv4.tcp_max_syn_backlog=8192

Reference:

http://www.cnblogs.com/dkblog/archive/2011/09/06/2168721.html (the content above is adapted from this article)

