Linux disk I/O monitoring

Source: Internet
Author: User
Tags: disk usage

Disk I/O monitoring is an important part of Unix/Linux system administration. It tracks throughput, I/Os per second, disk utilization, service time, and related metrics, and raises an alarm to the system administrator when an anomaly appears, so that the administrator can adjust the data layout and take other management actions to optimize overall system performance.

The commands for monitoring disk I/O differ slightly between operating systems. This article introduces the disk I/O management commands available on Unix/Linux and related background, and then shows how to automate disk I/O monitoring with a script.

Unix/Linux disk I/O performance monitoring commands

Disk I/O performance monitoring metrics and tuning methods

Before introducing the disk I/O monitoring commands, we need to understand the metrics used in disk I/O performance monitoring and what aspect of disk performance each one reveals. The main metrics are:

Indicator 1: I/Os per second (IOPS or tps)

For a disk, a single continuous read or a single continuous write counts as one disk I/O; the disk's IOPS is the number of continuous reads plus continuous writes completed per second. This metric is an important reference when transferring small, non-contiguous blocks of data.

Indicator 2: Throughput

Throughput is the speed at which the disk transfers data, counting both reads and writes. It is usually expressed in KB/s or MB/s. When transferring large, contiguous blocks of data, this metric is an important reference.

Indicator 3: Average I/O data size

The average I/O data size is the throughput divided by the number of I/Os, and it is significant for revealing the disk usage pattern. In general, if the average I/O size is smaller than 32 KB, the disk is considered to be accessed mainly randomly; if the average I/O size is larger than 32 KB, the access pattern is considered mainly sequential.
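As a quick illustration of this rule of thumb, the average I/O size can be derived from the two preceding metrics. The snippet below is a sketch with made-up sample values, not measurements from any real system:

```shell
# Hypothetical sample values: throughput in KB/s and I/Os per second.
kbps=40013.8
tps=73.3

# Average I/O size = throughput / IOPS.
avg=$(awk -v k="$kbps" -v t="$tps" 'BEGIN { printf "%.1f", k / t }')
echo "average I/O size: ${avg} KB"

# Rule of thumb from the text: > 32 KB suggests mainly sequential access.
if awk -v a="$avg" 'BEGIN { exit !(a + 0 > 32) }'; then
    echo "access pattern: mostly sequential"
else
    echo "access pattern: mostly random"
fi
```

With these sample numbers the average works out to roughly 546 KB, well above the 32 KB threshold, so the access pattern would be judged sequential.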

Indicator 4: Percentage of disk active time (utilization)

Disk utilization is the percentage of time the disk is active transferring data and processing commands (such as seeks). Utilization is proportional to the level of resource contention and inversely related to performance: the higher the utilization, the more serious the contention, the worse the performance, and the longer the response time. In general, once disk utilization exceeds 70%, application processes spend a long time waiting for I/O to complete, since most processes are blocked or sleeping while they wait.

Indicator 5: Service time

Service time is the time taken to perform a disk read or write, including seek time, rotational delay, and data transfer time. It depends mainly on disk performance, but CPU and memory load also affect it, and an excessive number of requests indirectly increases it. If this value consistently exceeds 20 ms, it can generally be assumed to affect the applications above.

Indicator 6: I/O wait queue length

The wait queue length is the number of outstanding I/O requests; it grows when the I/O request pressure persistently exceeds the disk's processing capacity. If the queue length of a single disk consistently exceeds 2, the disk is generally considered to have an I/O performance problem. Note that if the disk is a virtual logical drive on a disk array, this value must be divided by the number of physical disks composing the logical drive to obtain the average wait queue length per physical disk.
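For a logical drive backed by an array, the normalization described above is simple division. A minimal sketch with made-up numbers:

```shell
# Hypothetical values: queue length reported for a RAID logical drive,
# and the number of physical disks behind it.
avgqu=12
ndisks=8

# Per-physical-disk queue length = logical queue length / number of disks.
per_disk=$(awk -v q="$avgqu" -v n="$ndisks" 'BEGIN { printf "%.2f", q / n }')
echo "per-disk queue length: ${per_disk}"

# Rule of thumb from the text: a sustained value above 2 per physical
# disk suggests an I/O performance problem.
if awk -v p="$per_disk" 'BEGIN { exit !(p + 0 > 2) }'; then
    echo "possible I/O bottleneck"
else
    echo "queue length acceptable"
fi
```

Here a logical queue length of 12 over 8 physical disks gives 1.50 per disk, below the warning level of 2.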

Indicator 7: Wait time (latency)

Wait time is the time a disk read or write operation waits before being executed, that is, its time spent queued. If I/O requests persistently exceed the disk's processing capacity, requests that cannot be handled promptly must wait longer in the queue.

By monitoring the above metrics and comparing them with historical data, empirical data, and the disk's rated values, correlating them with CPU, memory, and swap usage where necessary, it is not difficult to identify potential or already-occurring disk I/O problems. But how can these problems be avoided and resolved? That requires knowledge of disk I/O performance optimization. Limited by the topic and length of this article, only a few commonly used optimization methods are listed for reference:

- Adjust the data layout so that I/O requests are distributed more evenly across all physical disks.
- For RAID disk arrays, try to make the application's I/O size equal to the stripe size or a multiple of it, and choose an appropriate RAID level, such as RAID 10 or RAID 5.
- Increase the queue depth of the disk driver, but do not exceed the disk's processing capacity; otherwise some I/O requests will be dropped and reissued, which degrades performance.
- Use caching to reduce the number of times the application touches the disk; caching can be applied at the file system level or at the application level. Since most databases already include an optimized cache, database I/O should access raw disk partitions directly, or use Direct I/O (DIO) to bypass the file system cache and exploit memory read/write bandwidth, which far exceeds direct disk I/O.
- Place frequently accessed files or data in memory.

Introduction to disk I/O performance monitoring commands

Unix/Linux provides two very useful commands for disk I/O performance monitoring: iostat and sar.

The iostat command monitors system input/output device load chiefly by observing the active time of physical disks and their average transfer rates. From the reports iostat generates, a user can determine whether the system configuration is balanced and thereby better balance the input/output load between physical disks and adapters. The main purpose of iostat is to detect system I/O bottlenecks by monitoring disk utilization. The command format and output differ slightly between operating systems; administrators can consult the user manual for details.

The sar command reports CPU usage, I/O, and other system activity. It can collect, report, and save system activity information, and the collected data is useful for characterizing the system over time and identifying peak usage periods. Note, however, that sar itself generates a fair amount of reads and writes while running, so it is best to run sar when the system is otherwise idle to gauge how much sar itself contributes to the overall statistics.

In the AIX environment, iostat and sar live in the fileset bos.acct, which is part of the Base Operating System; a default installation requires no configuration and no additional packages.

In the Linux environment, iostat and sar are included in the sysstat package, a common tool package on Linux systems. The package name and specific commands may differ slightly across Linux distributions and hardware platforms. Listing 1 shows installing the sysstat package on RHEL 5.3.


Listing 1: Installing the sysstat package on RHEL 5.3

# rpm -ivh sysstat-7.0.2-3.el5.ppc.rpm
warning: sysstat-7.0.2-3.el5.ppc.rpm: Header V3 DSA signature: NOKEY, key ID 370171
Preparing...   #################################### [100%]
   1:sysstat   #################################### [100%]

Monitoring disk I/O status on AIX systems

Listing 2 and Listing 3 show iostat and sar running on a heavily loaded AIX node, with the interval set to 10 seconds and 3 reports in total.


Listing 2: Viewing disk I/O load on an AIX 6.1 system with iostat

# iostat -d 10 3

System configuration: lcpu=32 drives=226 paths=2 vdisks=0

Disks:     % tm_act     Kbps      tps    Kb_read   Kb_wrtn
hdisk1       51.6     1582.8     25.6      2208     13632
hdisk2       14.6     6958.5      7.1         0     69637
hdisk3       94.2    40013.8     73.3      9795    390643
hdisk1       61.2     2096.9     33.9      4176     16844
hdisk2       20.1     9424.0     10.0         0     94438
hdisk3       97.2    39928.3     73.8     25112    375144
hdisk1       63.5     2098.6     34.7      4216     16796
hdisk2       27.1    13549.5     13.6      8352    127308
hdisk3       98.4    40263.8     81.2     27665    375464

The main fields have the following meanings:

% tm_act: percentage of time the physical disk was active, i.e., disk utilization.

Kbps: amount of data transferred (read or written) per second, in KB.

tps: number of I/Os per second on the physical disk.

Kb_read: amount of data read during the interval, in KB.

Kb_wrtn: amount of data written during the interval, in KB.
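Since iostat reports Kbps and tps in separate columns, the average I/O size discussed earlier can be derived directly from them. The awk one-liner below is a sketch; the here-document holds illustrative sample rows in the AIX field order above (disk, % tm_act, Kbps, tps, Kb_read, Kb_wrtn), not real measurements:

```shell
# Derive average I/O size (Kbps / tps) per disk from iostat -d style output.
result=$(awk '/^hdisk/ { printf "%s: %.1f KB\n", $1, $3 / $4 }' <<'EOF'
hdisk1 51.6 1582.8 25.6 2208 13632
hdisk2 14.6 6958.5 7.1 0 69637
hdisk3 94.2 40013.8 73.3 9795 390643
EOF
)
echo "$result"
```

For these rows the derived sizes are about 62 KB, 980 KB, and 546 KB, immediately separating small-random from large-sequential workloads.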


Listing 3: Using sar -d to report disk I/O information on an AIX 6.1 system

# sar -d 10 3

AIX node001 1 6 00caa4cc4c00    08/19/09

System configuration: lcpu=32 drives=226 mode=Capped

04:34:43   device   %busy   avque   r+w/s   Kbs/s   avwait   avserv
04:34:53   hdisk1     52     0.0      27    1645      0.0     28.3
           hdisk2     15     0.0       8    8614      0.4     73.5
           hdisk3     92     0.1      72   38773     28.5    105.1
04:35:03   hdisk1     62     0.1      34    2133      0.0     30.7
           hdisk2     20     0.0      10    9855      0.4     84.2
           hdisk3     98     0.1      74   39975     24.4    115.7
04:35:13   hdisk1     60     0.0      32    2019      0.0     32.5
           hdisk2     27     0.0      11   11898      0.4     67.4
           hdisk3     95     0.0      80   40287     13.7     97.4
Average    hdisk1     58     0.0      31    1932      0.0     30.5
           hdisk2     21     0.0       9   10122      0.4     75.0
           hdisk3     95     0.1      75   39678     22.2    106.1

The main fields of the output have the following meanings:

%busy: percentage of time spent processing I/O requests, i.e., disk utilization.

avque: average number of requests outstanding during the interval.

r+w/s: total reads plus writes per second.

Kbs/s: data transferred per second, in KB.

avwait: average time, in milliseconds, that a request spends waiting in the queue.

avserv: average time, in milliseconds, needed to complete an I/O request.

This example shows that hdisk1 has moderate utilization and a moderate number of I/Os per second but minimal throughput; hdisk2 has the lowest utilization and the fewest I/Os per second but higher throughput than hdisk1; and hdisk3 has the highest utilization, the most I/Os per second, the most throughput, and also the longest average wait and service times. The average I/O size is 1932/31 ≈ 62 KB for hdisk1, 10122/9 ≈ 1125 KB for hdisk2, and 39678/75 ≈ 529 KB for hdisk3. This shows that small random accesses mainly drive up the number of I/Os per second, while large sequential reads mainly drive up throughput. hdisk3's utilization exceeds the 70% warning line; although its average I/O size is only about half that of hdisk2, its service time is roughly 40% higher and its wait time much longer, so management action should be taken on it.

Monitoring disk I/O status on Linux systems

Listing 4 and Listing 5 show iostat and sar running on a lightly loaded Linux node, with the interval set to 10 seconds and 3 reports in total.


Listing 4: Viewing disk I/O load on a RHEL 5.3 system with iostat

# iostat -d -x 10 3
Linux 2.6.18-128.el5 (node002.ibm.com)   08/19/2009

Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz  await  svctm  %util
sda        0.10   22.12  0.14  2.06   12.98  286.60   136.58     0.19  87.17   3.76   0.82
sda1       0.00    0.00  0.00  0.00    0.00    0.00    75.06     0.00   3.89   3.14   0.00
sda2       0.00    0.00  0.00  0.00    0.02    0.00    53.56     0.00  13.28  11.67   0.00
sda3       0.09   22.12  0.14  2.06   12.94  286.60   136.59     0.19  87.19   3.76   0.82

Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz  await  svctm  %util
sda        0.00    6.40  0.00  1.20    0.00   91.20    76.00     0.01   7.25   5.08   0.61
sda1       0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00   0.00   0.00   0.00
sda2       0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00   0.00   0.00   0.00
sda3       0.00    6.40  0.00  1.20    0.00   91.20    76.00     0.01   7.25   5.08   0.61

Device:  rrqm/s  wrqm/s   r/s   w/s  rsec/s  wsec/s avgrq-sz avgqu-sz  await  svctm  %util
sda        0.00    3.30  0.00  5.40    0.00  100.00    18.52     0.45  83.24   3.63   1.96
sda1       0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00   0.00   0.00   0.00
sda2       0.00    0.00  0.00  0.00    0.00    0.00     0.00     0.00   0.00   0.00   0.00
sda3       0.00    3.30  0.00  5.40    0.00  100.00    18.52     0.45  83.24   3.63   1.96

The main fields have the following meanings:

r/s: number of read operations per second.

w/s: number of write operations per second.

rsec/s: number of sectors read from the device per second.

wsec/s: number of sectors written to the device per second.

avgrq-sz: average size of I/O requests, in sectors.

avgqu-sz: average queue length of I/O requests.

await: average wait time of an I/O request, in milliseconds.

svctm: average service time of an I/O request, in milliseconds.

%util: percentage of time spent processing I/O requests, i.e., device utilization.
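To pick out busy devices from such a report programmatically, one can test %util (the last column of each device line) against a threshold. The sketch below uses two fabricated sample rows, not real output:

```shell
# Flag devices whose %util (last field of 'iostat -d -x' device lines)
# exceeds a threshold. Both sample rows below are made up.
threshold=70
result=$(awk -v t="$threshold" '/^(sd|hd)/ {
    if ($NF + 0 > t) print $1, "busy"; else print $1, "ok"
}' <<'EOF'
sda 0.10 22.12 0.14 2.06 12.98 286.60 136.58 0.19 87.17 3.76 0.82
sdb 0.00 3.30 0.00 5.40 0.00 100.00 18.52 0.45 83.24 3.63 85.20
EOF
)
echo "$result"
```

Using $NF rather than a fixed column index sidesteps differences in the number of columns between iostat versions.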


Listing 5: Using sar to report disk I/O information on a RHEL 5.3 system

# sar -d -p 10 3
Linux 2.6.18-128.el5 (node002.ibm.com)   08/19/2009

04:13:48 AM   DEV    tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz   await  svctm  %util
04:13:58 AM   sda   1.20      0.00     91.11     76.00      0.01    7.25   5.08   0.61
04:13:58 AM  sda1   0.00      0.00      0.00      0.00      0.00    0.00   0.00   0.00
04:13:58 AM  sda2   0.00      0.00      0.00      0.00      0.00    0.00   0.00   0.00
04:13:58 AM  sda3   1.20      0.00     91.11     76.00      0.01    7.25   5.08   0.61

04:13:58 AM   DEV    tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz   await  svctm  %util
04:14:08 AM   sda   5.41      0.00    100.10     18.52      0.45   83.24   3.63   1.96
04:14:08 AM  sda1   0.00      0.00      0.00      0.00      0.00    0.00   0.00   0.00
04:14:08 AM  sda2   0.00      0.00      0.00      0.00      0.00    0.00   0.00   0.00
04:14:08 AM  sda3   5.41      0.00    100.10     18.52      0.45   83.24   3.63   1.96

04:14:08 AM   DEV    tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz   await  svctm  %util
04:14:18 AM   sda   0.60      0.00     74.47    124.00      0.00    7.50   6.33   0.38
04:14:18 AM  sda1   0.00      0.00      0.00      0.00      0.00    0.00   0.00   0.00
04:14:18 AM  sda2   0.00      0.00      0.00      0.00      0.00    0.00   0.00   0.00
04:14:18 AM  sda3   0.60      0.00     74.47    124.00      0.00    7.50   6.33   0.38

Average:      DEV    tps  rd_sec/s  wr_sec/s  avgrq-sz  avgqu-sz   await  svctm  %util
Average:      sda   2.40      0.00     88.56     36.89      0.15   64.26   4.10   0.98
Average:     sda1   0.00      0.00      0.00      0.00      0.00    0.00   0.00   0.00
Average:     sda2   0.00      0.00      0.00      0.00      0.00    0.00   0.00   0.00
Average:     sda3   2.40      0.00     88.56     36.89      0.15   64.26   4.10   0.98

The main fields of the output have the following meanings:

DEV: the block device being monitored.

tps: total I/O transfers per second on the physical device.

rd_sec/s: number of sectors read from the device per second.

wr_sec/s: number of sectors written to the device per second.

avgrq-sz: average size of I/O requests, in sectors.

avgqu-sz: average queue length of I/O requests.

await: average wait time of an I/O request, in milliseconds.

svctm: average service time of an I/O request, in milliseconds.

%util: percentage of time spent processing I/O requests, i.e., device utilization.

This example shows that disk sda has few I/O operations, that only partition sda3 sees any activity, and that utilization stays below 1%. The average request size is 36.89 sectors, roughly 18 KB, so the I/O activity consists mainly of small writes. No abnormal output appears in this example, and no administrative action is required on the disk.


Unix/Linux disk I/O performance monitoring automation script example

The previous sections described disk I/O monitoring methods for AIX and Linux; this section walks through the design and implementation of an automated disk I/O monitoring script, with examples.

Design ideas

1. Monitoring metrics

The previous sections introduced a number of monitoring metrics, such as throughput, I/Os per second, average response time per I/O, and disk utilization; users can choose metrics according to the characteristics of their own systems and applications. The following design uses disk utilization as an example.

2. Monitoring method

For AIX, the output field "%tm_act" of the command "iostat -d" reflects disk utilization; for Linux, the output field "%util" of the command "iostat -d -x" reflects disk utilization. The monitoring frequency can be specified through the parameters of the iostat command itself.

3. Alarm mechanism

In general, a disk whose utilization stays at 75% or 80% for a long time is considered busy and usually calls for management action such as adjusting the disk layout or redistributing application load; if utilization is only occasionally high, continued monitoring is enough. The alarm mechanism therefore should neither fire too frequently nor miss long-lasting conditions. In this example it is defined as: raise an alarm when the proportion of monitoring records within an interval that exceed the configured disk utilization threshold reaches a set ratio.
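In shell, that decision rule amounts to integer arithmetic over the monitoring records. A minimal sketch with hypothetical counts (the variable names mirror those used in the script later in this article):

```shell
# Hypothetical counts: 45 of 60 samples exceeded the utilization
# threshold; alert when the share reaches rateGEUtil percent.
numAlarm=45
numRecord=60
rateGEUtil=60

# Integer percentage of over-threshold samples.
rateAlarm=$(expr $numAlarm \* 100 / $numRecord)
if [ "$rateAlarm" -ge "$rateGEUtil" ]; then
    echo "ALERT: ${rateAlarm}% of samples over threshold"
else
    echo "OK: ${rateAlarm}% of samples over threshold"
fi
```

With 45 of 60 samples over the threshold the rate is 75%, which meets the 60% trigger ratio and would raise an alarm.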

4. Logging

Keep the raw disk I/O data, analysis activity, alarm information, and other records in logs to make later problem analysis and diagnosis easier.

The disk I/O analysis and alarm script

Combining the design ideas above, I developed the disk I/O analysis and alarm script IOAnalyzer.sh; the script was tested on RHEL 5.3, SLES 11, AIX 5.3, and AIX 6.1.

The first part of IOAnalyzer.sh performs initialization: it checks and processes the input parameters and assigns default values to parameters that were not supplied.


Listing 6. IOAnalyzer.sh: script initialization section

#!/bin/sh
#================================================================
# Script Name : IOAnalyzer.sh
# Description : Analyze the output of 'iostat -d' and send an alert
#               to admin if the disk utilization counter reaches the
#               defined threshold
# Date        : May, 2009
#================================================================

#----------------------------------------------------------------
# Function definitions
#----------------------------------------------------------------
function usage
{
    echo ""
    echo "usage: IOAnalyzer.sh -i inIostatFile [ -l outLogFile ] \
[ -a outAlertFile ] [ -u diskUtil ] [ -r rateGEUtil ]"
    echo ""
    echo "For example: IOAnalyzer.sh -i /tmp/iostat.out -l /tmp/logFile \
-a /tmp/aletFile -u 80 -r 70"
    echo "For AIX, please run 'iostat -d [<interval> [<count>]]' \
to create inIostatFile"
    echo "For Linux, please run 'iostat -d -x [<interval> [<count>]]' \
to create inIostatFile"
    exit 1
}

#----------------------------------------------------------------
# Process command-line arguments
#----------------------------------------------------------------
while getopts :i:l:a:u:r: opt
do
    case "$opt" in
    i) inIostatFile="$OPTARG";;
    l) outLogFile="$OPTARG";;
    a) outAlertFile="$OPTARG";;
    u) diskUtil="$OPTARG";;
    r) rateGEUtil="$OPTARG";;
    \?) usage;;
    esac
done

#----------------------------------------------------------------
# Input validation
#----------------------------------------------------------------
if [ ! -f "$inIostatFile" ]
then
    echo "Error: invalid argument inIostatFile in option -i"
    usage
    exit 1
fi

#----------------------------------------------------------------
# Set default values for unset variables
#----------------------------------------------------------------
outLogFile=${outLogFile:-${inIostatFile}.log}
outAlertFile=${outAlertFile:-${inIostatFile}.alert}
diskUtil=${diskUtil:-'80'}
rateGEUtil=${rateGEUtil:-'60'}

Next, IOAnalyzer.sh consults the log and, by calculating the starting and ending line numbers, locates the portion of the iostat output file still to be analyzed.


Listing 7. IOAnalyzer.sh: locating the portion of the I/O output file to analyze

#----------------------------------------------------------------
# Identify the lines to be analyzed between startLine and endLine
#----------------------------------------------------------------
if [ ! -f "$outLogFile" ] || ! tail -1 "$outLogFile" | grep 'ENDLINE' > /dev/null
then
    startLineNum=1
else
    completedLine=`tail -1 "$outLogFile" | grep 'ENDLINE' | \
awk '{print $4}' | cut -d: -f2`
    startLineNum=`expr 1 + $completedLine`
fi

eval "sed -n '${startLineNum},\$p' $inIostatFile" > ${inIostatFile}.tail

lineCount=`cat ${inIostatFile}.tail | wc -l | awk '{print $1}'`
endLineNum=`expr $lineCount + $startLineNum - 1`
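The resume logic can be illustrated in isolation. The sketch below parses a hypothetical log line in the format the script writes, extracting the ENDLINE field to compute where the next analysis should start:

```shell
# A sample log line in the script's format (the values are made up).
logline="IOSTATFILE:/root/iostat.out TIME:200905200255 STARTLINE:7220 ENDLINE:7580 ALARM:YES"

# Field 4 is "ENDLINE:<n>"; the number after the colon is the last
# analyzed line, so the next run starts one line later.
completed=$(echo "$logline" | awk '{print $4}' | cut -d: -f2)
start=$(expr "$completed" + 1)
echo "next analysis starts at line $start"
```

For this sample line the next run would start at line 7581, matching the progression visible in the log excerpt in Listing 13.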

The script in Listing 8 analyzes the iostat output line by line: if the disk utilization on a line is below the previously defined threshold, the line is marked "OK" at the end; if it is greater than or equal to the threshold, the line is marked "Alarm". The script also handles the differences in output format and disk naming between AIX and Linux.


Listing 8. IOAnalyzer.sh: analyzing iostat output line by line

#----------------------------------------------------------------
# Analyze 'iostat' output, append "Alarm" or "OK" at the end of
# each line
#----------------------------------------------------------------
OS=`uname`
case "$OS" in
AIX)
    diskUtilLabel="%tm_act"
    diskUtilCol=2
    diskPrefix="hdisk";;
Linux)
    diskUtilLabel="%util"
    diskUtilCol=14
    diskPrefix="hd|sd";;
*)
    echo "Unsupported operating system: $OS!"
    exit 1;;
esac

eval "cat ${inIostatFile}.tail | egrep '${diskPrefix}' \
| awk '{ if (\$${diskUtilCol} + 0 < ${diskUtil}) \
{ \$20 = \"OK\"; print \$1\"\t\"\$${diskUtilCol}\"\t\"\$20 } \
else { \$20 = \"Alarm\"; print \$1\"\t\"\$${diskUtilCol}\"\t\"\$20 } }'" \
> ${outLogFile}.tmp

The next part of the script implements the alarm trigger: if, for any disk, the ratio of high-utilization records to the total number of analyzed records reaches or exceeds the predefined ratio, the script sends a warning message to the root user.


Listing 9. IOAnalyzer.sh: triggering the alarm

#----------------------------------------------------------------
# Send admin an alert if the disk utilization counter reaches the
# defined threshold
#----------------------------------------------------------------
Alert="NO"
for DISK in `cut -f1 ${outLogFile}.tmp | sort -u`
do
    numAlarm=`cat ${outLogFile}.tmp | grep "^${DISK}.*Alarm$" | wc -l`
    numRecord=`cat ${outLogFile}.tmp | grep "^${DISK}" | wc -l`
    rateAlarm=`expr $numAlarm \* 100 / $numRecord`
    if [ $rateAlarm -ge $rateGEUtil ]; then
        echo "DISK:${DISK} TIME:`date +%Y%m%d%H%M` \
RATE:${rateAlarm} THRESHOLD:${rateGEUtil}" >> ${outAlertFile}.tmp
        Alert="YES"
    fi
done

if [ $Alert = "YES" ]; then
    cat ${outAlertFile}.tmp >> ${outAlertFile}
    mail -s "DISK IO Alert" root < ${outAlertFile}.tmp
fi

Finally, the script logs the analysis activity, recording the ending line so that the next run knows where to resume, and deletes the temporary files generated during the analysis.


Listing 10. IOAnalyzer.sh: recording the analysis activity log and cleaning up temporary files

#----------------------------------------------------------------
# Clean up temporary files and logging
#----------------------------------------------------------------
echo "IOSTATFILE:${inIostatFile} TIME:`date +%Y%m%d%H%M` \
STARTLINE:${startLineNum} ENDLINE:${endLineNum} ALARM:${Alert}" \
>> ${outLogFile}

rm -f ${outLogFile}.tmp
rm -f ${outAlertFile}.tmp
rm -f ${inIostatFile}.tail

exit 0

Script usage example

The following shows how to use the IOAnalyzer.sh script on AIX.

1. Run iostat in the background and redirect its output to a file


Listing 11. Running iostat in the background

# nohup iostat -d 5 > /root/iostat.out &
(For Linux, run: nohup iostat -d -x 5 > /root/iostat.out &)

2. Edit the crontab to run the IOAnalyzer.sh script every 10 minutes. The options -u 70 -r 80 mean: raise an alarm if, among the monitoring records of a disk accumulated since the last IOAnalyzer.sh run, 80% or more show utilization reaching or exceeding 70%. The alarm log and analysis log can be specified with the -l and -a options of IOAnalyzer.sh; this example keeps the defaults, producing iostat.out.log and iostat.out.alert in the same directory as the iostat output file.


Listing 12. Edit Crontab

# crontab -e
0,10,20,30,40,50 * * * * /root/IOAnalyzer.sh -i /root/iostat.out -u 70 -r 80 > /tmp/iostat.out 2>&1

3. When a user receives an alarm message and needs to investigate the history further, the log files can be consulted


Listing 13. Viewing the log files

# cat /root/iostat.out.log | more
IOSTATFILE:/root/iostat.out TIME:200905200255 STARTLINE:7220 ENDLINE:7580 ALARM:YES
IOSTATFILE:/root/iostat.out TIME:200905200300 STARTLINE:7581 ENDLINE:7940 ALARM:YES
IOSTATFILE:/root/iostat.out TIME:200905200305 STARTLINE:7941 ENDLINE:8300 ALARM:YES

[aixn01]> cat /root/iostat.out.alert | more
DISK:hdisk4 TIME:200905200250 RATE:84 THRESHOLD:70
DISK:hdisk5 TIME:200905200250 RATE:84 THRESHOLD:70
DISK:hdisk6 TIME:200905200250 RATE:84 THRESHOLD:70


Summary

This article introduced the disk I/O management commands on Unix/Linux and described in detail how to implement automated disk I/O monitoring with a script. Automated monitoring helps system administrators identify disk I/O problems in time, so they can take appropriate measures to eliminate or mitigate them.

