It is often useful to count how frequently each word occurs in a given file. The problem can be solved with an associative array together with tools such as awk, sed, and grep. First, we need a text file for testing. Save the following content as word.txt:

```
Word used this counting this
```

Next, we need to write a shell script, as shown below:

```shell
#!/bin/bash
# Name: word_freq.sh
# Description: Find out the frequency of words in a file

if [ $# -ne 1 ]; then
    echo "Usage: $0 filename"
    exit 1
fi

filename=$1

egrep -o "\b[[:alpha:]]+\b" "$filename" | \
awk '{ count[$0]++ }
END {
    printf("%-14s%s\n", "Word", "Count")
    for (ind in count)
        printf("%-14s%d\n", ind, count[ind])
}'
```

Working principle:

1. egrep -o "\b[[:alpha:]]+\b" "$filename" outputs only the words themselves. The -o option prints each matching character sequence on its own line, so every line of output contains exactly one word.
2. \b is the word-boundary marker, and [[:alpha:]] is a POSIX character class that matches letters.
3. In the awk command, the block { count[$0]++ } runs once for every input line, that is, once for every word, incrementing that word's counter in the associative array count; the END block then prints a header and iterates over the array to print each word with its count.
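The same counts can also be produced without awk, using sort and uniq -c, which is a handy cross-check of the pipeline above. The sketch below assumes GNU grep and coreutils; the sample file name and its contents are illustrative, not from the original recipe:

```shell
#!/bin/bash
# Create a small sample file (hypothetical content, for illustration only).
printf 'this word this count\n' > sample.txt

# Extract one word per line, then count duplicates:
#   grep -oE  prints each regex match on its own line
#   sort      groups identical words together
#   uniq -c   prefixes each distinct word with its occurrence count
#   sort -rn  orders the result by count, most frequent first
grep -oE '\b[[:alpha:]]+\b' sample.txt | sort | uniq -c | sort -rn
```

Here "this" appears twice and the other words once, so the first output line is "2 this". The trade-off versus the awk version is that sort must buffer and order all words, while awk's associative array counts in a single pass and prints in arbitrary order.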