Sysdig's chisels are built-in scripts that let users trace system calls and drill into system performance bottlenecks; they are written in Lua, a powerful and efficient scripting language.
Today I want to share how to use the fdbytes_by chisel. With it you can find the files with the highest I/O usage on the system (and not just files: network I/O as well), identify which process is reading and writing them, and see the details of the I/O activity at the kernel level. Typical scenarios are checking whether your file system is being used efficiently, or investigating a disk I/O latency problem. With dstat --top-io it is easier to get straight to the process name, but the focus today is the sysdig fdbytes_by chisel, which also covers scenarios where dstat is not available.
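For comparison, dstat's top-io plugin only tells you which process generates the most I/O, without the per-file detail; a typical invocation (my own illustration, the plugin names may vary slightly between dstat versions) looks like this:
# dstat --top-io --top-bio 5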
First, let's look at the usage details of today's protagonist, fdbytes_by:
# sysdig -i fdbytes_by

Category: I/O
-------------
fdbytes_by      I/O bytes, aggregated by an arbitrary filter field

Groups FD activity based on the given filter field, and returns the key that generated the most input+output bytes. For example, this script can be used to list the processes or TCP ports that generated most traffic.
Args:
[string] key - the filter field used for grouping
In other words, it sorts file descriptor activity by the amount of I/O generated, aggregated by whatever filter field you pass as the key.
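As a side note, if you are not sure which chisel to reach for, sysdig can list every bundled chisel together with its category; fdbytes_by appears under the I/O category:
# sysdig -cl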
First of all, let's capture a 30 MB sysdig trace to work with:
# sysdig -w fdbytes_by.scap -C 30
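If disk space is tight, the capture can also be restricted to I/O events only, or stopped after a fixed number of events; this variant is my own addition, not what was used for the trace analyzed below:
# sysdig -n 500000 -w fdbytes_by.scap "evt.is_io=true"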
Then let's analyze this capture. Which type of file descriptor accounts for the I/O activity? (Since the capture was rotated with -C, the first chunk ends up named fdbytes_by.scap0.)
# sysdig -r fdbytes_by.scap0 -c fdbytes_by fd.type
Bytes               fd.type
--------------------------------------------------------------------------------
45.16M              file
9.30M               ipv4
87.55KB             unix
316B                <NA>
60B                 pipe
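As mentioned at the beginning, fdbytes_by is not limited to files; the ipv4 traffic above can be broken down the same way. For example (an extra illustration, not part of the original analysis), grouping it by server port:
# sysdig -r fdbytes_by.scap0 -c fdbytes_by fd.sport "fd.type=ipv4"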
You can see that file descriptors of type file account for 45.16M, the largest share. Next, let's sort the I/O activity by directory:
# sysdig -r fdbytes_by.scap0 -c fdbytes_by fd.directory
Bytes               fd.directory
--------------------------------------------------------------------------------
38.42M              /etc
7.59M               /
5.04M               /var/www/html
1.38M               /var/log/nginx
304.73KB            /root
7.31KB              /lib/x86_64-linux-gnu
2.82KB              /dev
2.76KB              /dev/pts
1.62KB              /usr/lib/x86_64-linux-gnu
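Note that the breakdown above still mixes all fd types; to restrict it to file-system I/O only, the same command simply takes an extra filter (a variation added here for completeness):
# sysdig -r fdbytes_by.scap0 -c fdbytes_by fd.directory "fd.type=file"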
The most visited directory is /etc. So which file in it, specifically, is being accessed?
# sysdig -r fdbytes_by.scap0 -c fdbytes_by fd.name "fd.directory=/etc"
Bytes               fd.name
--------------------------------------------------------------------------------
38.42M              /etc/services
Bingo! We found it: /etc/services is the file being accessed the most. Since /etc/services is a system file, we can be fairly sure the 38.42M are read operations. So which process is accessing this file?
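Before answering that, a quick side check I am adding here (not in the original walk-through): grouping the same traffic by event type should confirm that these bytes are indeed reads rather than writes:
# sysdig -r fdbytes_by.scap0 -c fdbytes_by evt.type "fd.name=/etc/services"
With that out of the way, back to the question of which process is responsible: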
# sysdig -r fdbytes_by.scap0 -c fdbytes_by proc.name "fd.filename=services and fd.directory=/etc"
Bytes               proc.name
--------------------------------------------------------------------------------
38.42M              nscd
Found the culprit: it is nscd, the name service cache daemon. But why is it reading the services file so heavily? Let's keep digging:
# sysdig -r fdbytes_by.scap0 -A -s 4096 -c echo_fds proc.name=nscd
------ Read 12B from ffff880009dc6900->ffff880009dc6180 /var/run/nscd/socket (nscd)
------ Read 6B from ffff880009dc6900->ffff880009dc6180 /var/run/nscd/socket (nscd)
hosts
------ Write 14B to ffff880009dc6900->ffff880009dc6180 /var/run/nscd/socket (nscd)
hosts
------ Read 12B from ffff880009dc6900->ffff880009dc6180 /var/run/nscd/socket (nscd)
------ Read 7B from ffff880009dc6900->ffff880009dc6180 /var/run/nscd/socket (nscd)
28060/
------ Read 4.00KB from /etc/services (nscd)
# Network services, Internet style
#
# Note that it is presently the policy of I
------ Read 4.00KB from /etc/services (nscd)
# IPX
ipx             213/udp
imap3           220/tcp         # Interactive Mail Access
imap3           220/udp
------ Read 4.00KB from /etc/services (nscd)
nessus          1241/tcp        # Nessus vulnerability
nessus          1241/udp        # assessment scann
------ Read 4.00KB from /etc/services (nscd)
qmaster         6444/tcp        sge_qmaster     # Grid Engine Qmaster Service
sge-qmaster     6444/udp
------ Read 3.10KB from /etc/services (nscd)
So nscd was reading /etc/services to look up the mapping between port numbers and service names. While the capture was running I was doing an ab load test against an nginx static page; I expected nginx reads and writes to dominate, and did not expect this nscd to show up in the middle and stir up so much I/O:
ab -k -c 2000 -n 300000 http://shanker.heyoa.com/index.html
# sysdig -r fdbytes_by.scap0 -c topprocs_file
Bytes               Process             PID
--------------------------------------------------------------------------------
38.42M              nscd                1343
6.43M               nginx               4804
304.89KB            zsh                 32402
9.20KB              ab                  20774
2.79KB              screen              18338
2.37KB              sshd                12812
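As a cross-check I am adding here (not part of the original analysis), the topfiles_bytes chisel gives the per-file breakdown for nginx alone, which is handy for confirming that the traffic really is the static files being served:
# sysdig -r fdbytes_by.scap0 -c topfiles_bytes "proc.name=nginx"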
Later I compared the ab run times with the nscd cache enabled and with it disabled: enabling the nscd local services cache made the run about 10.189% faster ((38.561 - 34.632) / 38.561 ≈ 10.189%). The slower run below (38.561 s total) is without the cache, the faster one (34.632 s total) with it:
ab -k -c 2000 -n 300000 http://shanker.heyoa.com/index.html  0.94s user 2.77s system 9% CPU 38.561 total
ab -k -c 2000 -n 300000 http://shanker.heyoa.com/index.html  0.93s user 2.79s system 10% CPU 34.632 total
For more on accelerating lookups with the nscd cache, refer to this earlier article:
http://shanker.blog.51cto.com/1189689/1735058
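For completeness, enabling the cache generally comes down to a few lines in /etc/nscd.conf plus a daemon restart. The snippet below is only a minimal sketch, assuming a glibc nscd new enough to cache the services database; check the article above and your distribution's default config for the full set of options:
enable-cache            services        yes
positive-time-to-live   services        28800
negative-time-to-live   services        20
shared                  services        yes
# service nscd restart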
That wraps up the whole analysis. This article is just one example of how to use the fdbytes_by chisel; sysdig ships many other chisels for analyzing the system.
Additions are welcome!
This article is from the "Tianya Horizon" blog; please keep this source when reproducing it: http://shanker.blog.51cto.com/1189689/1771418