Performance testing and analysis tools


I. Linux Performance Analysis Tools


1. CPU performance analysis tools:
vmstat
ps
sar
time
strace
pstree
top

2. Memory performance analysis tools:
vmstat
strace
top
ipcs
ipcrm
cat /proc/meminfo
cat /proc/slabinfo
cat /proc/<pid>/maps

3. I/O performance analysis tools:
vmstat
iostat
repquota
quotacheck

4. Network performance analysis tools:
ifconfig
ethereal
tethereal
iptraf
iwconfig
nfsstat
mrtg
ntop
netstat
cat /proc/sys/net
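
As a quick illustration of how these analysis tools are typically combined, the following is a minimal sketch of a baseline data-collection run; the 2-second interval and the sample counts are arbitrary values chosen for the example, not figures from the original article:
# vmstat 2 10     # CPU, memory, swap and block-I/O summary: 10 samples, 2 seconds apart
# sar -u 2 10     # CPU utilization over time (sar is part of the sysstat package)
# iostat -x 2 5   # extended per-device I/O statistics
# netstat -i      # per-interface packet, error and drop counters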

II. Linux Performance Tuning Tools

After locating the performance bottleneck of an application with the tools and commands above, we can use the following tools or commands to tune performance.

1. CPU performance tuning tools:
nice / renice
sysctl

2. Memory performance tuning tools:
swapon
ulimit
sysctl

3. I/O performance tuning tools:
edquota
quotaon
sysctl
Boot line: elevator=

4. Network performance tuning tools:
ifconfig
iwconfig
sysctl
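
For example, the priority of a CPU-hungry process can be lowered with nice/renice; in the sketch below, the script name batch_job.sh, the PID 1234, and the niceness value 10 are made-up example values:
# nice -n 10 ./batch_job.sh   # start a (hypothetical) batch job at a lower CPU priority
# renice 10 -p 1234           # lower the priority of an already-running process with PID 1234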

III. Performance Adjustment

1. CPU Performance Adjustment
When a system's CPU idle time or I/O wait time falls below 5%, we can consider the system's CPU resources exhausted, and CPU performance should be adjusted.
CPU performance adjustment method:
1) Disable non-core service processes so that they do not compete for CPU time.
2) Edit the files under /proc/sys/kernel/ to modify the kernel parameters.
# cd /proc/sys/kernel/
# ls /proc/sys/kernel/
acct                   hotplug       panic                    real-root-dev
cad_pid                modprobe      panic_on_oops            sem
cap-bound              msgmax        pid_max                  shmall
core_pattern           msgmnb        powersave-nap            shmmax
core_uses_pid          msgmni        print-fatal-signals      shmmni
ctrl-alt-del           ngroups_max   printk                   suid_dumpable
domainname             osrelease     printk_ratelimit         sysrq
exec-shield            ostype        printk_ratelimit_burst   tainted
exec-shield-randomize  overflowgid   pty                      threads-max
hostname               overflowuid   random                   version
You may need to edit pid_max and threads-max, for example:
# sysctl kernel.threads-max
kernel.threads-max = 8192
# sysctl -w kernel.threads-max=10000
kernel.threads-max = 10000
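
A value changed with sysctl -w is lost after a reboot; to make it persistent it is usually also added to /etc/sysctl.conf, as in this sketch (10000 is simply the example value used above):
# echo "kernel.threads-max = 10000" >> /etc/sysctl.conf
# sysctl -p                   # reload the settings from /etc/sysctl.conf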

2. Memory Performance Adjustment

When an application system's memory resources meet the following conditions, we consider that memory performance needs to be adjusted:
Frequent page swap-in and swap-out;
A shortage of inactive pages.
For example, if the vmstat command shows that very little memory is used for cache while the swap si (swap-in) or so (swap-out) columns show relatively high values, we should be alert to a memory performance problem.
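
A simple way to watch for this condition is shown below; the 2-second interval is only an illustrative choice:
# vmstat 2        # watch the si (swap-in) and so (swap-out) columns; values that stay above zero indicate active swapping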
Memory performance adjustment method:
1) Disable non-core service processes.
For more information, see CPU performance adjustment.
2) Modify the system parameters under /proc/sys/vm/.
# ls /proc/sys/vm/
block_dump                 laptop_mode            nr_pdflush_threads
dirty_background_ratio     legacy_va_layout       overcommit_memory
dirty_expire_centisecs     lower_zone_protection  overcommit_ratio
dirty_ratio                max_map_count          page-cluster
dirty_writeback_centisecs  min_free_kbytes        swappiness
hugetlb_shm_group          nr_hugepages           vfs_cache_pressure
# sysctl vm.min_free_kbytes
vm.min_free_kbytes = 1024
# sysctl -w vm.min_free_kbytes=2508
vm.min_free_kbytes = 2508
# cat /etc/sysctl.conf
...
vm.min_free_kbytes = 2508
...
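
Another commonly tuned parameter from the list above is vm.swappiness, which controls how aggressively the kernel swaps application pages out to disk; the value 10 below is only an illustrative choice, not a recommendation from the original article:
# sysctl vm.swappiness            # show the current value (the default is typically 60)
# sysctl -w vm.swappiness=10      # prefer shrinking the cache over swapping out application pages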
3) Configure the system's swap partition to be one to two times the size of physical memory.
# free
             total       used       free     shared    buffers     cached
Mem:        987656     970240      17416          0      63324     742400
-/+ buffers/cache:      164516     823140
Swap:      1998840     150272    1848568
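
If the existing swap space is too small, an additional swap file can be created without repartitioning; in this sketch the path /swapfile and the 1 GB size are arbitrary example values:
# dd if=/dev/zero of=/swapfile bs=1M count=1024   # create a 1 GB file
# mkswap /swapfile                                # set it up as swap space
# swapon /swapfile                                # enable it
# swapon -s                                       # verify that it is now in use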

3. I/O Performance Adjustment

When a system encounters the following situations, we consider that it has an I/O performance problem:
The system spends more than 50% of its time waiting for I/O;
The average queue length of a device is greater than 5.
We can use commands such as vmstat to check the CPU wa (I/O wait) time to determine whether the system has an I/O performance problem.
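
In addition to the wa column of vmstat, the per-device queue length mentioned above can be checked with iostat from the sysstat package; the interval and count below are arbitrary example values:
# iostat -x 2 5   # the avgqu-sz column (aqu-sz in newer sysstat releases) is the average request queue length per device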
I/O performance adjustment method:
1) Modify the I/O scheduling algorithm.
Linux has four known I/O scheduling algorithms:
deadline - Deadline I/O scheduler
as - Anticipatory I/O scheduler
cfq - Completely Fair Queuing I/O scheduler
noop - Noop I/O scheduler
You can modify the elevator parameter in the boot loader configuration file (/etc/yaboot.conf in this example).
# vi /etc/yaboot.conf
image=/vmlinuz-2.6.9-11.EL
    label=linux
    read-only
    initrd=/initrd-2.6.9-11.EL.img
    root=/dev/VolGroup00/LogVol00
    append="elevator=cfq rhgb quiet"
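
On newer 2.6 kernels the scheduler can usually also be inspected and switched per device at run time through sysfs, without rebooting (support depends on the kernel build); /dev/sdb is just an example device:
# cat /sys/block/sdb/queue/scheduler    # the scheduler shown in brackets is the active one
# echo cfq > /sys/block/sdb/queue/scheduler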
2) File system adjustment.
There are several accepted criteria for file system adjustment:
Distribute the I/O load evenly across all available disks;
Select an appropriate file system; the Linux kernel supports reiserfs, ext2, ext3, JFS, XFS, and other file systems:
# mkfs -t reiserfs -j /dev/sdc1
Even after a file system has been created, it can still be tuned with commands:
tune2fs (ext2/ext3)
reiserfstune (reiserfs)
jfs_tune (JFS)
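
For example, tune2fs can display and adjust parameters of an existing ext2/ext3 file system; /dev/sdb1 and the 1% reserved-block figure below are only illustrative:
# tune2fs -l /dev/sdb1    # list the current file system parameters
# tune2fs -m 1 /dev/sdb1  # reduce the space reserved for root to 1% of the file system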
3) The noatime and nodiratime options can be added when the file system is mounted.
# vi /etc/fstab
...
/dev/sdb1    /backup    reiserfs    acl,user_xattr,noatime,nodiratime    1 1
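
An already-mounted file system can be switched to these options without unmounting it, assuming /backup is the mount point from the fstab entry above:
# mount -o remount,noatime,nodiratime /backup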
4) Adjust the read-ahead of the block device by increasing the RA value.
[root@overflowuid ~]# blockdev --report
RO    RA   SSZ   BSZ   StartSec     Size   Device
...
rw   256   512  4096          0  71096640  /dev/sdb
rw   256   512  4096         32  71094240  /dev/sdb1
[root@overflowuid ~]# blockdev --setra 2048 /dev/sdb1
[root@overflowuid ~]# blockdev --report
RO    RA   SSZ   BSZ   StartSec     Size   Device
...
rw  2048   512  4096          0  71096640  /dev/sdb
rw  2048   512  4096         32  71094240  /dev/sdb1

4. Network Performance Adjustment

When an application system encounters the following situations, we consider that it has a network performance problem:
The throughput of the network interface is lower than expected;
A large number of packets are dropped;
There are many collisions.
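
Dropped packets and collisions per interface can be checked, for example, with:
# netstat -i      # the RX-DRP and TX-DRP columns show dropped packets per interface
# ifconfig eth0   # the collisions counter appears in the interface statistics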
Network Performance adjustment method:
1) Adjust the NIC parameters.
# ethtool eth0
Settings for eth0:
    Supported ports: [ TP ]
    Supported link modes:   10baseT/Half 10baseT/Full
                            100baseT/Half 100baseT/Full
                            1000baseT/Full
    Supports auto-negotiation: Yes
    Advertised link modes:  10baseT/Half 10baseT/Full
                            100baseT/Half 100baseT/Full
                            1000baseT/Full
    Advertised auto-negotiation: Yes
    Speed: 100Mb/s
    Duplex: Half
    Port: Twisted Pair
    PHYAD: 0
    Transceiver: internal
    Auto-negotiation: on
    Supports Wake-on: d
    Wake-on: d
    Current message level: 0x00000007 (7)
    Link detected: yes
# ethtool -s eth0 duplex full
# ifconfig eth0 mtu 9000 up
2) Increase network buffers and packet queues.
# cat /proc/sys/net/ipv4/tcp_mem
196608 262144 393216
# cat /proc/sys/net/core/rmem_default
135168
# cat /proc/sys/net/core/rmem_max
131071
# cat /proc/sys/net/core/wmem_default
135168
# cat /proc/sys/net/core/wmem_max
131071
# cat /proc/sys/net/core/optmem_max
20480
# cat /proc/sys/net/core/netdev_max_backlog
300
# sysctl net.core.rmem_max
net.core.rmem_max = 131071
# sysctl -w net.core.rmem_max=135168
net.core.rmem_max = 135168
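
As with the other kernel parameters, values changed with sysctl -w take effect immediately but do not survive a reboot; to make them permanent they are normally added to /etc/sysctl.conf, as in this sketch (135168 simply repeats the example value above):
# echo "net.core.rmem_max = 135168" >> /etc/sysctl.conf
# sysctl -p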
3) Adjust TCP connection parameters for web serving.
# sysctl net.ipv4.tcp_tw_reuse
net.ipv4.tcp_tw_reuse = 0
# sysctl -w net.ipv4.tcp_tw_reuse=1
net.ipv4.tcp_tw_reuse = 1
# sysctl net.ipv4.tcp_tw_recycle
net.ipv4.tcp_tw_recycle = 0
# sysctl -w net.ipv4.tcp_tw_recycle=1
net.ipv4.tcp_tw_recycle = 1
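
These two parameters mainly help when a busy server accumulates a large number of connections in the TIME_WAIT state; a quick way to check for that condition:
# netstat -an | grep -c TIME_WAIT   # count connections currently in TIME_WAIT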
