Linux system optimization

System tuning parameters (Little's Law):
L = λ (arrival rate) × W (average time a request spends in the system)
The lower L and W are, the better.
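For example, if requests arrive at 10 per second and each spends 0.2 s in the system on average, then L = 10 × 0.2 = 2 requests are in the system at any moment.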

awk is simple to use:
1. awk -F: '{print $1}' /path        prints the first column, using ':' as the delimiter
2. df -P | awk '{print $1 " is mounted on " $NF}'        prints each device and its mount directory
3. awk -F: 'BEGIN{print "any_string"} {print $1} END{print "any_str"}' /path        prints any_string before the first row and any_str after the last
   The BEGIN and END blocks let you run processing before and after the text itself
4. awk -F: '/regex/{print $1}' /etc/passwd        prints the first column of lines matching the regular expression
5. sar -q | awk '/^[^a-z]+$/{print $1,$2,$3}'        prints the time, run-queue length and process list size from the data rows (lines containing no letters)
6. sar -b -s <start time> -e <end time> | awk '{print $1,$5+$6}'        displays the time and the total blocks read and written per second
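As a hedged sketch tying these together (column positions depend on the sar version), the 1/5/15-minute load averages can be dumped into a file that the gnuplot commands below can read:
sar -q | awk '/^[^a-z]+$/{print $1, $4, $5, $6}' > /data/loadavg.data        # time, ldavg-1, ldavg-5, ldavg-15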
gnuplot charting tool usage:
yum install -y gnuplot
set xdata time        declares the x-axis as time data
set timefmt "%H:%M:%S"        specifies the time format
plot "/data/path" using 1:2 with lines        data file path; "using" picks the two columns to plot; points are joined with a line
plot "/data/path" using 1:2 title "1 min" with lines, "/data/path" using 1:3 title "5 min" with lines, "/data/path" using 1:4 title "15 min" with lines
Multiple data lines on one chart
To automate chart generation, put the commands in a script:
vim xxx.gnuplot
(the same plot commands as above)
:x        save and quit
Run: gnuplot -persist ~/cpu.gnuplot        -persist keeps the chart on screen
Output the chart as an image into the Apache document root:
set xdata time
set timefmt "%H:%M:%S"
set term png size 1024,768
set output "/var/www/html/stat/`date +%F`.png"
plot "/data/path" using 1:2 with lines
Turn it into a scheduled task with crontab:
crontab -e
0 * * * * sar -b -s 17:00:00 -e 19:30:00 | awk '{print $1, ($5+$6)/2}' > xxx.data; gnuplot dinner.gnuplot
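Put together, a minimal script using the file names above might look like this (a hedged sketch; title and output path are illustrative):
# dinner.gnuplot
set xdata time
set timefmt "%H:%M:%S"
set term png size 1024,768
set output "/var/www/html/stat/io.png"
plot "xxx.data" using 1:2 title "blocks/s" with lines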

Commands to view system status:
sar
vmstat
mpstat
iostat
time COMMAND
iotop
top
uname -r        kernel version
sysctl -a | grep <argument>
lscpu        CPU information (which CPUs are online)
/sys/devices/system/cpu/cpuX/online
User login logs:
last
lastlog
lastb

I/O disk scheduling algorithm:
Main configuration file path (current scheduler for a disk):
/sys/block/sda/queue/scheduler
Scheduler tuning parameters:
/sys/block/sda/queue/iosched/*
To change the disk scheduling algorithm:
echo <scheduler_name> > /sys/block/sda/queue/scheduler
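For instance (device name is illustrative):
cat /sys/block/sda/queue/scheduler        # e.g. noop deadline [cfq] - brackets mark the active one
echo deadline > /sys/block/sda/queue/scheduler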
Using tuned:
tuned-adm list
tuned-adm active
tuned-adm profile <mode_name>
Profile directory: /etc/tune-profiles/
ls /etc/tune-profiles/
Profiles:
default        light tuning with little impact on the server (e.g. a mail server)
desktop-powersave        for desktop machines; powers down SATA disks, CPU, NIC
server-powersave        power-saving mode for servers
laptop-ac-powersave        power saving for a laptop on AC power
laptop-battery-powersave        aggressive power saving for a laptop on battery
throughput-performance        high throughput; uses the deadline disk scheduler
latency-performance        low latency, high performance
enterprise-storage        high performance for enterprise storage

Application process priority:
PR: real-time (RT), range -99 to 39
NI: -20 to 19
Real-time policies (can be selected by calling the scheduling functions while programming):
SCHED_RR        round-robin
SCHED_FIFO        queue; the running process is not preempted unless a higher-priority process appears
Non-real-time policies:
SCHED_NORMAL
SCHED_BATCH        for heavy batch data workloads, disturbed as little as possible
SCHED_IDLE        the process runs only when the system is otherwise idle
If real-time scheduling is not set in the program, chrt can change a process's scheduling policy:
chrt -f <priority> COMMAND        test: chrt -f 1 md5sum /dev/zero
chrt -r 1 COMMAND        round-robin, shares the CPU
Process scheduling algorithm:
CFS scheduler
Introduces virtual runtime (vruntime); the task with the smallest vruntime runs next
vruntime is influenced by:
waiting time
number of runnable processes
process priority

Viewing the CPU cache:
x86info -c
To view a command's cache hit ratio (simulated):
valgrind --tool=cachegrind COMMAND
I refs        instruction cache references
To view the cache hit ratio on the real machine:
perf stat -e cache-misses COMMAND

pam_limits restricts users' use of resources; limits apply per user identity (it cannot limit I/O)
PAM configuration files:
/etc/pam.d/fingerprint-auth-ac
Limits master configuration file:
/etc/security/limits.conf
Limits sub-configuration directory:
/etc/security/limits.d/
Users view their own resource limits with:
ulimit -a
Restricting user resources (entries may go in the master file or a sub-configuration file):
student    hard    cpu    1        (minutes of CPU time)
admin      soft    cpu    5
(applies per process: a process is acted on once its CPU time reaches the limit)

To limit a user's process memory, only the virtual size (VIRT, the virtual address space) can be capped:
student    hard    as    262144        (KB)
To limit the number of user processes:
*          soft    nproc    1024        (max user processes, soft limit)
student    hard    nproc    1024
Test with a fork bomb:    boom() { boom | boom & }; boom        or:    :(){ :|:& };:
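A hedged example of collecting these limits in a drop-in file (the filename is hypothetical):
# /etc/security/limits.d/90-students.conf
student   hard   cpu     1
student   hard   as      262144
student   hard   nproc   1024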
cgroups limit CPU, memory, disk and network resources
yum install -y libcgroup
Master configuration file:
/etc/cgconfig.conf
To combine two resource controllers under one directory, comment out their original mount lines and point both at the same path:
cpu = /cgroup/cpumem
memory = /cgroup/cpumem
Start the service:
/etc/init.d/cgconfig start
The /cgroup/ directory is created automatically once cgconfig is running
lssubsys -m        show where each subsystem is mounted
Defining custom groups:
vim /etc/cgconfig.conf
group cname {
    cpu {
        cpu.shares = 100;        (1024 represents 100% of the CPU)
    }
}
vim /etc/cgconfig.conf
group poormen {
    memory {
        memory.limit_in_bytes = 268435456;        (256 MB of physical memory for processes in this group)
        memory.memsw.limit_in_bytes = 268435456;        (256 MB of memory + swap; swap is used once memory is exhausted, and the usable swap is the memsw limit minus the memory limit)
    }
}
vim /etc/cgconfig.conf
group io {
    blkio {
        blkio.weight = 100;        (requires the CFQ scheduler)
        blkio.throttle.read_bps_device = "8:0 1000000";        (limit reads on that device to about 1 MB/s; find the major:minor numbers with ll /dev/sda)
    }
}
vim /etc/cgconfig.conf
group stopit {
    freezer {
    }
}
Use cgexec to bind a process to a pre-defined cgroup:
Test CPU: cgexec -g cpu:cname time dd if=/dev/zero of=/dev/null bs=1M count=200000
Test memory:
Create a memory-backed disk:
mkdir /mnt/tmpfs
mount -t tmpfs none /mnt/tmpfs        writing to this directory is effectively writing directly to memory
Command:
cgexec -g memory:poormen dd if=/dev/zero of=/mnt/tmpfs/<file> bs=1M count=<N>
Test blkio: drop the cache before an I/O test, otherwise the comparison is unfair: echo 3 > /proc/sys/vm/drop_caches
Commands:
cgexec -g blkio:low time cat /bigfile1 > /dev/null
cgexec -g blkio:high time cat /bigfile2 > /dev/null
Test freezer:
echo <pid> > /cgroup/freezer/stopit/tasks
echo FROZEN > /cgroup/freezer/stopit/freezer.state        freeze
echo THAWED > /cgroup/freezer/stopit/freezer.state        thaw
Edit the rules file to bind a user's processes to a cgroup automatically:
vim /etc/cgrules.conf
student:dd    blkio    io/
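A hedged sketch of applying that rule with the cgred daemon shipped in libcgroup:
/etc/init.d/cgred start        # the rules engine daemon; new "dd" processes started by student should then land in the blkio io/ group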

With cgroups, 2 CPUs can be split: one for the system and one for specific commands.
isolcpus shields the listed CPUs from ordinary user tasks, but it does not shield the base CPU:
isolcpus=1        (kernel boot parameter; CPU list: 0,1,2,3...)
To make a CPU immune to interrupts, edit /etc/sysconfig/irqbalance:
IRQ_AFFINITY_MASK=2        (bitmask: 1,2,4,8...; 6 = CPU 2 + CPU 3)

vim /etc/cgconfig.conf
group 2ndcpu {
    cpuset {
        cpuset.cpus = 1;        (CPU list: 0,1,2,3...)
        cpuset.mems = 0;
    }
}


Specify which physical CPU the current virtual machine uses
Query:
virsh dumpxml vmname | grep -i cpu
watch -n1 virsh vcpuinfo vmname
Assign:
echo 1 > /cgroup/cpuset/libvirt/qemu/vmname/cpuset.cpus

strace traces system calls (the kernel-space interface)
Any system call made by the executed command will be traced.
Example: functions such as open()
Demo:
strace updatedb
strace -e trace=network COMMAND
strace -e trace=file COMMAND
strace -p PID        trace a running process
strace -c COMMAND        count and summarize all system calls
strace -f COMMAND        also trace all child processes
Typical calls seen (most commands go through the glibc library):
open        load a dynamic shared library
mmap        map the program into memory
socket        create a socket connection
open("/lib64/libresolv.so.2")        DNS resolver library
sendto        send a packet
recvfrom        receive a packet
read        read data
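A hedged example of saving only the file-related calls of a command to a file for inspection (paths are illustrative):
strace -f -e trace=file -o /tmp/ls.trace ls /etc > /dev/null
grep open /tmp/ls.trace | head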

ltrace traces library calls (user space)
Example: functions such as fopen()
It mainly tracks calls into the glibc library
Demo: same usage as strace

SystemTap traces application and kernel behaviour
It uses the kprobes subsystem for instrumentation
Setting up a SystemTap development environment
Required packages:
kernel-debuginfo
kernel-debuginfo-common
kernel-devel
systemtap
gcc
Run a SystemTap script (this compiles it, translates it and loads it into the running kernel):
stap -v scriptname.stap        see the effect immediately
Build a kernel module to keep for use in production:
stap -v -p4 -m modulename scriptname.stap
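A minimal hedged example run straight from the command line (the probe point assumes an RHEL 6-era kernel where syscall.open exists):
stap -v -e 'probe syscall.open { printf("%s(%d) open %s\n", execname(), pid(), filename) }'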

Production environment:
Dependent Packages:
systemtap-runtime
Add the SystemTap module to the kernel:
staprun modulename.ko
Rights management:
An ordinary user in the stapusr group may run the modules under /lib/modules/$(uname -r)/systemtap/*.ko

To keep insecure kernel modules away from ordinary users in the stapusr group:
As root:
cp *.ko /lib/modules/$(uname -r)/systemtap/
Ordinary users can then only load the vetted kernel modules placed in that directory
If a user is added to the stapdev development group, that user can load a .ko file into the kernel from anywhere.




Memory management:
On the x86 instruction set, memory is paged, with a page size of 4 KB
Physical memory and virtual memory
A program first requests virtual memory (on x86_64 the address space is about 16 EB, effectively unlimited), but the kernel does not allocate it all immediately; it allocates physical pages only as the program actually uses them.
The program sees only virtual memory, which appears contiguous; when virtual memory is mapped to physical memory, the physical pages may not be contiguous. A parent process may share the same memory pages with multiple child processes at the same time.
Virtual-to-physical address translations are stored in the page table
The page table itself occupies memory, so some memory is sacrificed for memory management
For a process to reach the real data in physical memory, a page walk is performed; x86 has a 4-level structure, recursing level by level through the page tables until the entry points at the page frame (physical memory)
Before touching physical memory, the CPU first looks in the TLB, which stores the mapping of virtual memory to physical memory
TLB - Translation Look-aside Buffer
First check the TLB
If TLB hit, return the address
Else do a page walk and cache the resulting address mapping
The x86 platform also provides huge pages: when a large-memory process uses a large number of pages, describing them needs a huge number of page-table entries; huge pages solve this waste. 2 MB and 4 MB pages are available.

Using memory huge pages
A huge page is a single contiguous region of memory; it is not assembled from ordinary pages but is a separate memory area
Huge pages are generally allocated at boot time, when memory is still clean and has many contiguous regions
vim /etc/sysctl.conf
vm.nr_hugepages = 10
Allocate temporarily:
sysctl vm.nr_hugepages=10

View huge pages:
grep -i huge /proc/meminfo


To use huge pages:
1. Create the hugetlbfs pseudo file system:
mount -t hugetlbfs none /mountpoint
The application then uses the mmap system call on files under /mountpoint
2. Use shared memory:
The application uses the shmget and shmat system calls
3. RHEL 6.2 introduced transparent huge pages (THP)
THP assembles huge pages automatically
The khugepaged thread automatically converts normal pages into huge pages
THP works on anonymous memory (memory pages a process allocates dynamically)
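A hedged sketch of reserving, mounting and checking huge pages (mount point and count are illustrative):
sysctl vm.nr_hugepages=10
mkdir -p /hugepages
mount -t hugetlbfs none /hugepages
grep -i huge /proc/meminfo        # check HugePages_Total / HugePages_Free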

Memory allocation:
1. A process forks or execs a child process
   The child process uses the parent process's page frames until a write access is made
   This is referred to as copy-on-write (COW)
2. The new process requests memory
   The kernel commits additional virtual address space to the process
3. The new process uses the memory
   This triggers a page fault exception
   Minor: the kernel allocates a new page frame with help from the MMU
   Major: the kernel blocks the process while the page is retrieved from disk
4. The process frees memory
   The kernel reclaims the pages

Swap space:
Swap space "increases" the amount of memory
Pages not being used can be paged out to disk, leaving more free memory to use
vmstat:
Watch the si/so columns; if they stay large, swap space is constantly in use and the cost is heavy
Swap space size recommendations:
system memory        minimum swap space
up to 4 GB           2 GB
4-16 GB              4 GB
16-64 GB             8 GB
64-256 GB            16 GB


Buffer cache:
Used to cache file system metadata (dentries and inodes); the slab cache prevents memory fragmentation
Data that is independent of file contents belongs to the buffer cache


Page cache:
Used to cache file system block data (file contents)
It holds file content data; dirty pages are written back to their file rather than swapped out to disk


Dropping the caches:
echo 1 > /proc/sys/vm/drop_caches
1 - block data (page cache)
2 - metadata (dentries and inodes)
3 - both block data and metadata


Swap tunable parameters:
/proc/sys/vm/swappiness
swap_tendency = mapped_ratio/2 + distress + vm_swappiness
mapped_ratio is the percentage of physical memory in use
distress is how hard the kernel is trying to free memory
vm_swappiness is the part we can tune
Small value:
The kernel reclaims page cache as much as possible instead of swapping
Example: interactive workloads such as web or mail servers
Anonymous memory is not swapped while swap_tendency < 100
Large value:
The kernel is willing to swap anonymous memory and shared memory
Example: non-interactive workloads with a large in-memory working set, where the page cache should be disturbed as little as possible
swap_tendency >= 100 is reached sooner
When a swap partition is added and mounted, by default it gets a lower priority than partitions mounted earlier unless the priority parameter is adjusted
Performance tip: give all swap partitions the same priority so the kernel uses them round-robin
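A hedged sketch of equal-priority swap areas (device names are illustrative):
swapon -p 1 /dev/sdb1
swapon -p 1 /dev/sdc1
# or in /etc/fstab:  /dev/sdb1 swap swap pri=1 0 0    and    /dev/sdc1 swap swap pri=1 0 0
swapon -s        # verify the priorities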

Memory page states:
Free: pages that can be allocated
Inactive clean: page cache or buffer cache pages, ready to be reclaimed by the system
Inactive dirty: data modified by a program but not yet written back
Active: pages in use by a program
Inactive dirty pages are handled by writing them back (dirty page reclaim); once written back they can be reclaimed and the free RAM increases

Each disk device has its own per-BDI flush threads
Tunable parameters for dirty memory data:
vm.dirty_expire_centisecs        (how long data may stay dirty; waiting for the minimum age lets write operations be merged)
vm.dirty_writeback_centisecs     (interval at which the per-BDI flush threads wake up to process dirty pages)
vm.dirty_background_ratio        (background writeback starts when 10% of system memory is dirty)
vm.dirty_ratio                   (when 40% of system memory is dirty, all writes are suspended and dirty data is synchronously flushed to disk)
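A hedged /etc/sysctl.conf sketch using the ratios above (the centisecond values are common defaults, tune them to the workload):
vm.dirty_expire_centisecs = 3000
vm.dirty_writeback_centisecs = 500
vm.dirty_background_ratio = 10
vm.dirty_ratio = 40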

OOM: the kernel has overcommitted too much memory and cannot honour its promises to the programs
Kernel's default workaround:
Randomly killing some processes (the OOM killer), which can leave the system unstable
An alternative we can set ourselves:
sysctl -a | grep vm.panic_on_oom
Set vm.panic_on_oom to 1: when memory promises cannot be honoured, the system panics/hangs instead, with no further data interaction

x86_64 physical memory layout (64-bit):
the low 16 MB goes to ZONE_DMA
up to 4 GB goes to ZONE_DMA32
the rest goes to ZONE_NORMAL (essentially unlimited); normal memory use runs in ZONE_NORMAL
x86 physical memory layout (32-bit):
the low 16 MB goes to ZONE_DMA
the next 880 MB goes to ZONE_NORMAL
the rest goes to ZONE_HIGHMEM

Memory overcommit (vm.overcommit_memory):
0: heuristic - the kernel judges each allocation intelligently
1: always allow - no judgment is made
2: never allocate beyond the commit limit (commit limit = swap + 50% of physical memory by default)
sysctl vm.overcommit_memory=2        (safe)
To count all of physical memory towards the limit, also set:
vm.overcommit_ratio = 100
Test: bigmem -v 2000        (default unit MiB)
Maximum commit limit and current commitments:
grep -i commit /proc/meminfo        (CommitLimit / Committed_AS)
As long as a program does not exceed the commit limit, its allocations will succeed.

Inter-process communication: the SysV IPC standard
Semaphores:
max number of arrays        number of semaphore arrays allowed
max semaphores per array    size of the semaphore array a single program can have
max semaphores system wide  total number of semaphores in the system
max ops per semop call      operations allowed in a single semop system call
sysctl -a | grep kernel.sem
Message queues:
default max size of queue (bytes)    how many bytes of data one message queue may hold
max size of message (bytes)          the maximum length of a single message placed in a queue
max queues system wide               the number of message queues the system may have
sysctl -a | grep kernel.msg
Shared memory (the most used):
max number of segments         number of shared memory segments in the system
max seg size (KB)              maximum size of one shared memory segment
max total shared memory (KB)   global shared memory limit
sysctl -a | grep kernel.shm

The ipcs command shows current usage:
ipcs
Inter-process communication limits:
ipcs -l
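A hedged example of raising the shared memory limits (values are illustrative; shmmax is in bytes, shmall in pages):
sysctl -w kernel.shmmax=8589934592
sysctl -w kernel.shmall=2097152
ipcs -l        # confirm the new limits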

File system introduction
File systems used at the server level: ext3, ext4, xfs, btrfs (btrfs: efficient, strong performance, an Oracle-originated open source file system)
Desktop-level file systems: ntfs, fat16, fat32, fat64
ext3 advantages: huge user base, good compatibility
ext3 disadvantages: a routine fsck check is very slow; file systems are limited to 16 TB
ext4 advantages: very large file systems (hundreds of TB); extents greatly reduce the number of block descriptions, making fsck roughly 10x faster than ext3; delayed writes to disk; supports larger file systems
ext4 disadvantages: extents are not compatible with older versions; Red Hat RHEL 6 supports file systems up to 16 TB
xfs advantages: very large file systems, efficient storage
xfs disadvantages: inefficient with large numbers of small or fragmented files
btrfs advantages: built-in RAID and snapshot capabilities, efficient data handling, strong integrity protection
btrfs disadvantages: not yet suitable for production
Large file system efficiency, high to low: btrfs, xfs, ext4, ext3
Many-small-files efficiency, high to low: btrfs, ext4, xfs, ext3
File system repair efficiency, high to low: (ext4, xfs), btrfs, ext3
Write efficiency: (ext4, xfs, btrfs), ext3
Very large file writes: (ext4, xfs, btrfs), ext3

File system journal tuning
Purpose: speed up file system repair, avoid scanning the whole file system, and keep the file system stable
Modes:
ordered
    journals metadata only; data blocks are written to disk before the metadata is committed
writeback
    journals metadata only; does not record whether the data write completed
journal
    both the file's metadata and its content data are written to the journal area; everything is written twice, which is costly (use it when 1. the file system must be 100% reliable and data loss is not allowed, or 2. a large number of small files are written to disk)

To create a new file system with an external journal:
mkfs.ext4 -O journal_dev -b 4096 /dev/vdb1        (block size; this partition becomes a journal device for other file systems)
mkfs.ext4 -J device=/dev/vdb1 -b 4096 /dev/vda3
tune2fs -l /dev/vda1        show the file system details

For a file system that already exists, attach an external journal:
tune2fs -l /dev/vda1        confirm the block size
mkfs.ext4 -O journal_dev -b 4096 /dev/vdb2        create the new external journal partition
Take the file system to be modified offline:
umount /dev/sda1
tune2fs -O ^has_journal /dev/sda1        turn off the original internal journal
tune2fs -j -J device=/dev/vdb2 /dev/sda1        attach the new journal partition
mount /dev/vda1        remount and use
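A hedged way to confirm the journal settings afterwards:
dumpe2fs -h /dev/sda1 | grep -i journal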

Network Load Balancing
yum install -y qperf
Run qperf on both hosts to view basic host information:
Host 1: qperf        (runs as the server and waits)
Host 2: qperf <hostname> conf
View the NIC bandwidth and latency:
qperf <host> tcp_bw tcp_lat udp_bw udp_lat

View the rates for different packet sizes:
qperf -oo msg_size:1:32K:*2 [-v] <host> tcp_bw tcp_lat udp_bw udp_lat

Tunable parameters (auto-tuned by the system) for the maximum buffers:
net.ipv4.tcp_mem
net.ipv4.udp_mem
    (three values: min pressure max)
The kernel only intervenes in TCP memory consumption once TCP connections use more than the pressure value
If the server does nothing but network work, the min value of these buffers can be raised to about three quarters
Socket buffers for core networking, including UDP connections:
net.core.rmem_default
net.core.wmem_default
net.core.rmem_max
net.core.wmem_max
These values should be no smaller than:
net.ipv4.udp_rmem_min
net.ipv4.udp_wmem_min

Then tune the TCP-specific buffers:
net.ipv4.tcp_rmem
net.ipv4.tcp_wmem
    (three values: min default max)
min: minimum receive/send buffer for a TCP connection
default: default buffer size, typically half of the max
BDP: buffer size = bandwidth / 8 × delay time (bytes)
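A hedged worked example for a 1 Gbit/s link with 20 ms round-trip delay: BDP = 1,000,000,000 / 8 × 0.020 = 2,500,000 bytes, so the max buffer could be set roughly like this (min and default values are illustrative):
sysctl -w net.ipv4.tcp_rmem="4096 87380 2500000"
sysctl -w net.ipv4.tcp_wmem="4096 65536 2500000"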
Demo example:
tc qdisc show
Emulate a 2 s delay on the link:
tc qdisc add dev eth0 root netem delay 2s
tc qdisc show
Deploy a web service on another machine, then wget a file from it on this host to see the effect

NIC bonding to increase bandwidth:
balance-rr        round-robin mode: adds fault tolerance and aggregates NIC bandwidth
active-backup     hot standby: only one NIC works at a time
802.3ad           dynamic link aggregation (LACP negotiation)
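A hedged sketch of a round-robin bond on RHEL 6 (device names and addresses are illustrative):
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=192.168.0.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none
BONDING_OPTS="mode=balance-rr miimon=100"
# each slave ifcfg-ethX adds:  MASTER=bond0  SLAVE=yes  ONBOOT=yes  BOOTPROTO=none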

Raising the MTU to support jumbo frames (every network device on the path must support large frames):
Each Ethernet frame is no longer limited to 1500 bytes, which noticeably reduces overhead for large data volumes
On the host, only the NIC parameters need to change:
vim /etc/sysconfig/network-scripts/ifcfg-eth0
MTU=9000
Network devices along the path need their hardware parameters changed as well
Generally, jumbo frames are only worthwhile within the local network segment; crossing segments is pointless



