UNIT 13 Essential Network Tuning
Network Performance Tuning. Objectives: 1. Apply queueing techniques to maximize network throughput. 2. Adjust the buffers of TCP and non-TCP network sockets.
13.1 Simplified Transmit Model
A. The output/writer process sends data to the socket "file" (i.e. the transmit buffer).
B. The kernel encapsulates the data into PDUs (Protocol Data Units).
C. The PDUs are passed to each device's transmit queue (the queue can be inspected as shown below).
D. The driver sends the PDU at the head of the queue to the NIC.
E. The NIC raises an interrupt once the PDU has been transmitted.
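A minimal sketch for viewing the per-device transmit queue from step C; the interface name eth0 is an assumption, substitute your own:

    ip link show eth0                    # the qlen field is the transmit queue length
    ifconfig eth0 | grep -i txqueuelen   # the same value from the older net-tools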
13.2 Simplified Receive Model
A. The input/reader process waits to receive data.
B. The NIC receives the incoming frame and uses DMA to copy it into the receive buffer.
C. The NIC raises a CPU (hard) interrupt.
D. The kernel handles the interrupt and schedules a soft interrupt (softirq).
E. The softirq hands the packet to the IP layer, where routing decides where it is addressed.
F. If it is addressed to the local host:
  1) the packet is unpacked into the socket's receive buffer;
  2) the waiting process is woken from the socket's wait queue;
  3) the process reads the data from the socket receive buffer.
(The hard- and soft-interrupt steps can be observed with the commands below.)
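A read-only sketch for watching the receive path, assuming the interface is named eth0:

    grep eth0 /proc/interrupts      # hard interrupts raised by the NIC, per CPU (step C)
    cat /proc/net/softnet_stat      # per-CPU counters for packets handled by the receive softirq (step D)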
13.3 Kernel Socket Buffers
A. Kernel buffer types:
  a. UDP: core read and write buffers
  b. TCP: core plus TCP read and write buffers
  c. Fragmentation buffer
  d. DMA buffers for NIC receive
B. The kernel automatically adjusts buffer sizes according to traffic:
  a. Buffers must be allocated from free memory pages.
  b. Buffers that are too large increase pressure on ZONE_NORMAL.
  c. How much traffic can be received depends on the buffer size.
(The current limits can be read back with sysctl, as shown below.)
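Before changing anything, the current limits can be read back with sysctl; these are the same keys tuned in 13.5 and 13.6:

    sysctl net.core.rmem_default net.core.rmem_max                 # core receive buffers, bytes
    sysctl net.core.wmem_default net.core.wmem_max                 # core send buffers, bytes
    sysctl net.ipv4.tcp_rmem net.ipv4.tcp_wmem net.ipv4.tcp_mem    # TCP min/default/max values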
13.3 Calculating Total Buffer Size
A. Bandwidth Delay Product (BDP): the amount of data that can be in flight on the link at one time.
   BDP = bandwidth * delay (RTT) = total buffer size
   Measure the RTT (delay) with the ping command; a worked example follows.
B. All connections share the pipe; when they do:
   per-socket buffer = BDP / #sockets (number of connections)
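A worked example with assumed numbers (a 100 Mbit/s path and a 10 ms RTT; substitute your own bandwidth and ping results):

    ping -c 4 192.168.1.1                  # read the average rtt (the delay) from the output
    # BDP = bandwidth * delay = 100,000,000 bit/s * 0.010 s = 1,000,000 bits = 125,000 bytes
    echo $(( 100000000 / 8 * 10 / 1000 ))  # prints 125000 (bytes)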
13.4 Calculating Per-Socket Buffer Size
Maximum connections = minimum buffer; minimum connections = maximum buffer.
The more connections share the pipe, the smaller each socket's buffer must be; the fewer connections, the larger each buffer can grow (see the arithmetic below).
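Continuing the assumed 125,000-byte BDP from 13.3, the per-socket arithmetic looks like this:

    echo $(( 125000 / 10 ))   # 10 connections -> 12500 bytes per socket buffer
    echo $(( 125000 / 2 ))    # 2 connections  -> 62500 bytes per socket buffer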
13.5 Tuning Core Buffer Size (UDP)
A. Set the buffers to BDP / #connections in the /etc/sysctl.conf file:
  a. Input/reader (data received), in bytes:
     net.core.rmem_default (default value)
     net.core.rmem_max (maximum value)
  b. Output/writer (data sent), in bytes:
     net.core.wmem_default
     net.core.wmem_max
B. Reload /etc/sysctl.conf: sysctl -p
(A sample sketch of these entries follows.)
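A minimal /etc/sysctl.conf sketch using the assumed 125,000-byte figure from 13.3 (an illustration, not a recommendation):

    # core (non-TCP) socket buffers, in bytes
    net.core.rmem_default = 125000
    net.core.rmem_max = 125000
    net.core.wmem_default = 125000
    net.core.wmem_max = 125000

Apply the change with sysctl -p.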
13.6 Tuning TCP Buffer Size
A. Tune the core buffers to BDP / #connections, as in 13.5.
B. In the /etc/sysctl.conf file, adjust the TCP buffers:
  a. Total TCP memory, in pages: net.ipv4.tcp_mem
  b. Input/reader (data received), in bytes: net.ipv4.tcp_rmem
  c. Output/writer (data sent), in bytes: net.ipv4.tcp_wmem
C. Reload /etc/sysctl.conf: sysctl -p
Note: each of the parameters above takes three values (minimum, default, maximum); the maximum is generally set to about 1.5 times the default. A sample sketch follows.
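A sketch of the TCP entries with assumed values; each key takes three numbers (minimum, default, maximum), and the maximum here is roughly 1.5 times the default as the note suggests:

    # total TCP memory, in pages (assumed values)
    net.ipv4.tcp_mem = 48060 64080 96120
    # receive and send buffers, in bytes: minimum default maximum
    net.ipv4.tcp_rmem = 4096 125000 187500
    net.ipv4.tcp_wmem = 4096 125000 187500

Apply with sysctl -p.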
13.7 Tuning DMA Buffer Size
A. If the NIC driver's DMA buffers are adjustable:
   modinfo -p e1000   (view the NIC module's parameters)
B. Update /etc/modprobe.conf:
   alias eth0 e1000
   options eth0 RxDescriptors=1024 TxDescriptors=1024
C. For TCP connections, increase the socket buffer size by 25%.
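Where the driver supports it, the ring (DMA descriptor) sizes can also be shown and changed with ethtool; a sketch assuming eth0 and the same 1024 figure as above:

    ethtool -g eth0                     # show current and maximum rx/tx ring sizes
    ethtool -G eth0 rx 1024 tx 1024     # raise the rings, if the driver allows it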
13.8 Is Packet Fragmentation a Problem?
A. View packet statistics for the various protocols: netstat -s
B. View packet reassembly failures:
   cat /proc/net/snmp | grep '^Ip:' | cut -f17 -d' '
   Note: reassembly failures indicate that the fragmentation buffers need tuning.
C. Causes of fragmentation:
  a. Denial of Service (DoS) attacks
  b. NFS
  c. Noisy networks
  d. Failing network electronics (signal problems on the underlying physical link)
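A quick way to confirm which counter field 17 is: print the same field from both the header and the data line of /proc/net/snmp (awk is used here purely for readability):

    grep '^Ip:' /proc/net/snmp | awk '{print $17}'   # prints "ReasmFails", then its value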
13.9 Tuning Fragmentation Buffers
net.ipv4.ipfrag_time: how long a fragment may stay in the buffer; default 30 seconds, after which it is discarded.
net.ipv4.ipfrag_high_thresh: default 262144 bytes (256 KiB). Once the buffer holds more than this, newly arriving fragments are discarded until usage drops below net.ipv4.ipfrag_low_thresh, after which fragments are reassembled again.
net.ipv4.ipfrag_low_thresh: default 196608 bytes (192 KiB).
Note: services such as NFS and SMB are prone to fragmentation, so these buffer values can be raised where such services are used, but values that are too large add network delay. A sample sketch follows.
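A sketch of raised thresholds for a fragmentation-heavy NFS/SMB host; the doubled values are assumptions for illustration only:

    # /etc/sysctl.conf
    net.ipv4.ipfrag_time = 30
    # stop accepting fragments above 512 KiB, resume reassembly below 384 KiB
    net.ipv4.ipfrag_high_thresh = 524288
    net.ipv4.ipfrag_low_thresh = 393216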
13.10 Network Interrupt Handling
A. The NIC raises a hard CPU interrupt for each packet, and a soft interrupt is scheduled for each processor's receive queue.
B. Interrupt processing preempts the process queues:
  a. Packets are discarded when the transmit queue is full.
  b. Packets are discarded when the receive socket buffer is full.
  c. Heavy load can cause receive-livelock.
C. View hard interrupts: cat /proc/interrupts
D. View soft interrupts: ps axo pid,comm,util | grep softirq
Note: receive-livelock: in an interrupt-driven system the receive interrupt has priority over other processing; if packets arrive too fast, the CPU spends all of its time handling receives and has no resources left to deliver the incoming packets to the applications.
13.11 Improving Interrupt Handling
A. Two basic techniques (an ethtool sketch follows):
  a. Interrupt coalescing: one interrupt handles multiple frames.
  b. Polling: the receive queue is processed on a timed interrupt.
B. The driver automatically adjusts its interrupt handling under high load.
C. Summary:
  a. Reduces CPU interrupt-service time and utilization.
  b. Requires somewhat larger receive buffers.
  c. Increases latency when the load is low.
  d. Different strategies tune throughput for different packet characteristics.
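On drivers that support it, coalescing can also be inspected and set with ethtool, as an alternative to the module parameters in 13.12; the interface name and the microsecond/frame values are assumptions:

    ethtool -c eth0                              # show current coalescing settings
    ethtool -C eth0 rx-usecs 100 rx-frames 32    # fire an rx interrupt after 100 us or 32 frames, whichever comes first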
13.12 Tuning Interrupt Handling
A. Determine the module parameters: modinfo -p e1000
B. Update /etc/modprobe.conf:
   alias eth0 e1000
   alias eth1 e1000
   options e1000 InterruptThrottleRate=1,3000
   (0 disables throttling, 1 is dynamic and adjusts automatically with the traffic, 3 is dynamic conservative.)
Device-tuning related commands: ethtool, mii-tool, ip link, /sbin/ifup-local, systool (queries kernel module information, e.g. systool -avm usbcore).
13.13 Network Sockets
A. Applications read from and write to the network stack through socket connections.
B. The socket API treats each socket (network connection) as a virtual file:
  a. sending data is equivalent to writing to the file;
  b. receiving data is equivalent to reading from the file;
  c. closing the network connection is equivalent to deleting the file.
C. Read and write buffers hold the application's data.
D. TCP sockets require additional connection handling.
(A small illustration of the file-like behaviour follows.)
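Bash's /dev/tcp pseudo-path illustrates the socket-as-a-file idea: the connection is opened on a file descriptor, then written and read like any other file (the host, port, and request are assumptions):

    exec 3<>/dev/tcp/www.example.com/80      # open a TCP connection on fd 3
    printf 'HEAD / HTTP/1.0\r\n\r\n' >&3     # "transmit" by writing to the descriptor
    cat <&3                                  # "receive" by reading from it
    exec 3>&-                                # "close the connection" by closing the file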
13.14 TCP Sockets
A. TCP uses a three-way handshake to establish a connection:
  a. Client ----SYN----> Server
  b. Server ----SYN-ACK----> Client
  c. Client ----ACK----> Server
B. Closing a socket connection:
  a. One end sends a FIN packet.
  b. The other end replies with a FIN-ACK packet.
  c. An idle connection is closed after a timeout.
  d. A connection needs keepalives to remain active.
  e. A half-closed connection (wait state) is closed if the FIN-ACK does not arrive before the default timeout expires.
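The handshake and teardown segments can be observed with tcpdump; the interface and port here are assumptions:

    tcpdump -i eth0 'tcp port 80 and (tcp[tcpflags] & (tcp-syn|tcp-fin) != 0)'   # show only SYN and FIN segments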
13.15 Viewing Network Sockets
A. Passive opens (listeners, listening state): netstat -tulpn
B. Active sockets (active connections): sar -n SOCK, lsof -i; view the connections currently in use: netstat -tu
C. All socket connection ports: netstat -taupe
D. Half-closed connections: netstat -tapn | grep TIME_WAIT
13.16 Tuning TCP Socket Creation
A. Number of TCP connection (SYN) retries: net.ipv4.tcp_syn_retries, default 5.
B. Length of the SYN backlog queue: net.ipv4.tcp_max_syn_backlog, default 1024; connection requests beyond this value are discarded.
C. Reuse of TIME_WAIT TCP connections: net.ipv4.tcp_tw_recycle, default 0 (off), 1 enables it.
(A sample sketch follows.)
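A minimal /etc/sysctl.conf sketch for a busy server that is dropping SYNs; the values are assumptions:

    net.ipv4.tcp_syn_retries = 5
    # raise the SYN backlog from the default 1024
    net.ipv4.tcp_max_syn_backlog = 2048
    # recycle TIME_WAIT sockets (0 = off, 1 = on)
    net.ipv4.tcp_tw_recycle = 1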
13.17 Tuning TCP Socket Keepalive
A. How long a connection must be idle before keepalives are sent: net.ipv4.tcp_keepalive_time (idle time), default 7200 seconds.
B. Interval between keepalive probes: net.ipv4.tcp_keepalive_intvl, default 75 seconds.
C. Number of keepalive probes sent: net.ipv4.tcp_keepalive_probes, default 9.
(A sample sketch follows.)
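A sketch that detects dead peers sooner than the defaults; the values are assumptions:

    # idle seconds before the first keepalive probe (default 7200)
    net.ipv4.tcp_keepalive_time = 1800
    # seconds between probes (default 75)
    net.ipv4.tcp_keepalive_intvl = 30
    # probes before the connection is dropped (default 9)
    net.ipv4.tcp_keepalive_probes = 5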
RHCA442 Learning Notes - Unit 13: Network Performance Tuning