The file contents are as follows:
# cat cesc
a,1
a,2
b,3
b,4
c,2
d,5
The requirement: for each of a, b, c, and d, get the number of occurrences, the sum of the numbers, and the average.
With Shell:
# grep -E '^a' cesc | awk -F',' '{sum+=$2} END {print "a, count: "NR", sum: "sum", average: "sum/NR}'
a, count: 2, sum: 3, average: 1.5
# grep -E '^b' cesc | awk -F',' '{sum+=$2} END {print "b, count: "NR", sum: "sum", average: "sum/NR}'
b, count: 2, sum: 7, average: 3.5
# grep -E '^c' cesc | awk -F',' '{sum+=$2} END {print "c, count: "NR", sum: "sum", average: "sum/NR}'
c, count: 1, sum: 2, average: 2
# grep -E '^d' cesc | awk -F',' '{sum+=$2} END {print "d, count: "NR", sum: "sum", average: "sum/NR}'
d, count: 1, sum: 5, average: 5
Or write a for loop, which is more general. There are two ways to reference a shell variable inside awk: embed it with alternating quotes, as in "'$var'" (the single quote ends the awk program, the shell expands $var inside double quotes, then the program resumes), or pass it in up front with awk's -v option, as in awk -v var="$var".
# for i in `cat cesc | cut -d, -f1 | sort | uniq`; do grep -E "^$i" cesc | awk -F',' '{sum+=$2} END {print "'$i'"" count: "NR", sum: "sum", average: "sum/NR}'; done
a count: 2, sum: 3, average: 1.5
b count: 2, sum: 7, average: 3.5
c count: 1, sum: 2, average: 2
d count: 1, sum: 5, average: 5
Or:
# for i in `cat cesc | cut -d, -f1 | sort | uniq`; do grep -E "^$i" cesc | awk -v i="$i" -F',' '{sum+=$2} END {print i" count: "NR", sum: "sum", average: "sum/NR}'; done
a count: 2, sum: 3, average: 1.5
b count: 2, sum: 7, average: 3.5
c count: 1, sum: 2, average: 2
d count: 1, sum: 5, average: 5
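Both loop variants rescan the file once per key. As an alternative sketch (not from the original post), a single awk pass with associative arrays produces all the lines at once; note that for (k in ...) iterates in no guaranteed order:
# awk -F',' '{cnt[$1]++; sum[$1]+=$2} END {for (k in cnt) print k", count: "cnt[k]", sum: "sum[k]", average: "sum[k]/cnt[k]}' cesc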
With Python (note: Python 2's division of two integers floor-divides by default and returns an integer; true division can be enabled with from __future__ import division):
from __future__ import division

alist = []
blist = []
clist = []
dlist = []
for i in open('cesc'):
    ss = i.split(',')
    if ss[0] == 'a':
        alist.append(int(ss[1]))
    elif ss[0] == 'b':
        blist.append(int(ss[1]))
    elif ss[0] == 'c':
        clist.append(int(ss[1]))
    elif ss[0] == 'd':
        dlist.append(int(ss[1]))
print 'a, count: ' + str(len(alist)) + ', sum: ' + str(sum(alist)) + ', average: ' + str(sum(alist)/len(alist))
print 'b, count: ' + str(len(blist)) + ', sum: ' + str(sum(blist)) + ', average: ' + str(sum(blist)/len(blist))
print 'c, count: ' + str(len(clist)) + ', sum: ' + str(sum(clist)) + ', average: ' + str(sum(clist)/len(clist))
print 'd, count: ' + str(len(dlist)) + ', sum: ' + str(sum(dlist)) + ', average: ' + str(sum(dlist)/len(dlist))
AWK sum, average, maximum
A few commands worth recording: (tar-gzip every file under the current directory, one archive per file)
ls | awk '{print "tar zcvf "$0".tar.gz "$0 | "/bin/bash"}'
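The print ... | "/bin/bash" construct pipes each generated tar command into a shell, which runs it. A plain shell loop does the same job and is easier to read (a sketch, assuming ordinary filenames):
# for f in *; do tar zcvf "$f.tar.gz" "$f"; done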
(Extracting a field between delimiters)
[root@vm-202 zhuo]# echo "abc#1233+232@jjjj?===" | awk -F'[#@]' '{print $2}'
1233+232
[root@vm-202 zhuo]# echo "abc#1233+232@jjjj?===" | awk -F'[@?]' '{print $2}'
jjjj
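The bracket expression given to -F treats each character inside it as a separate field separator, so combining all three delimiters reaches any piece in a single call (a sketch on the same input):
[root@vm-202 zhuo]# echo "abc#1233+232@jjjj?===" | awk -F'[#@?]' '{print $3}'
jjjj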
awk '!/^$/{print $0}' test.txt prints non-empty lines (inside a bracket expression $ is a literal character, so /^[^$]/ does not mean "non-empty")
awk '/^[^zhuo]/{print $0}' test.txt prints lines whose first character is none of z, h, u, o (a bracket expression matches single characters, not the string "zhuo")
Replacement (here, replacing : with #):
[root@vm-202 zhuo]# echo "zhuo:x:503:504::/home/zhuo:/bin/bash" | awk 'gsub(/:/,"#"){print $0}'
zhuo#x#503#504##/home/zhuo#/bin/bash
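gsub() returns the number of substitutions it made, which is why it can serve as a pattern above; the count can also be captured directly (a sketch on the same input):
[root@vm-202 zhuo]# echo "zhuo:x:503:504::/home/zhuo:/bin/bash" | awk '{n=gsub(/:/,"#"); print n" replacements: "$0}'
6 replacements: zhuo#x#503#504##/home/zhuo#/bin/bash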
Contents of you.txt:
1
2
3
4
Column sum: cat you.txt | awk '{a+=$1} END {print a}'
Column average: cat you.txt | awk '{a+=$1} END {print a/NR}'
Column maximum: cat you.txt | awk 'BEGIN{a=0} {if ($1>a) a=$1} END {print a}'
Start a variable at 0; whenever a larger value appears, assign it to the variable, and print it at the end.
Column minimum: cat you.txt | awk 'BEGIN{a=11111} {if ($1<a) a=$1} END {print a}' works the same way in reverse (the starting value must exceed everything in the file).
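Hard-coded starting values (0, 11111) break if every value is negative or some value exceeds the guess; initializing from the first record avoids both problems (a sketch, not from the original):
cat you.txt | awk 'NR==1{max=$1; min=$1} {if ($1>max) max=$1; if ($1<min) min=$1} END {print "max: "max", min: "min}'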
Finding the maximum value over a whole file.
Example: find the maximum value in test.txt:
12 34 56 78
24 65 87 90
76 11 67 87
100 89 78 99
for i in `cat test.txt`; do echo $i; done | sort -rn | sed -n '1p'
(the loop splits the file on whitespace so each number lands on its own line; sort -rn sorts numerically in descending order, and sed -n '1p' prints the first line, the maximum)
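The shell loop can be dropped entirely by letting awk walk every field of every line (a sketch):
awk 'NR==1{max=$1} {for (i=1; i<=NF; i++) if ($i>max) max=$i} END {print max}' test.txt
100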
Example 2: with the same test.txt.
Sum: for i in `cat test.txt`; do echo $i; done | awk '{a+=$1} END {print a}'
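The same sum without the loop, again by walking every field in awk (a sketch):
awk '{for (i=1; i<=NF; i++) s+=$i} END {print s}' test.txt
1053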
Example 3: test.txt contains:
a 88
b 78
b 89
c 44
a 98
c 433
Required output:
a:88;98
b:78;89
c:44;433
awk '{a[$1]=a[$1]" "$2} END {for (i in a) print i,a[i]}' test.txt | awk '{print $1":"$2";"$3}'
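The two-stage pipeline can be collapsed into one awk call by building the ;-joined value directly (a sketch; the output order of for (k in a) is not guaranteed):
awk '{a[$1] = (a[$1]=="") ? $2 : a[$1]";"$2} END {for (k in a) print k":"a[k]}' test.txt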