I recently had a small task: a directory contains many files. In each file, the first line starts with BEGIN and the last line starts with END; every line in between has a varying number of columns, where the first column is called "DN" and the second is called "CV". The combination of DN and CV serves as the primary key, and I needed to check whether any DN-CV pair is duplicated across the files.
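To make the format concrete, here is a hypothetical input in that shape, with the duplicate check expressed as a set-membership test (the sample values are invented for illustration):

```python
# A made-up example of the file format: first line BEGIN, last line END,
# data rows with DN and CV as the first two columns (the remaining
# columns vary from row to row).
sample = """BEGIN
dn1 cv1 extra
dn2 cv2
dn1 cv1 other cols
END
"""

seen = set()
duplicates = []
for line in sample.splitlines():
    if line.startswith(("BEGIN", "END")):
        continue  # skip the header and footer lines
    dn, cv = line.split()[:2]
    key = (dn, cv)
    if key in seen:
        duplicates.append(key)  # (DN, CV) already encountered
    else:
        seen.add(key)

print(duplicates)  # the duplicated (DN, CV) pairs
```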
So I wrote a simple Python program.
#!/usr/bin/python
import os

cmd = "cat /home/zhangj/hosts/* | grep -v BEGIN | grep -v END"

def check_dc_line():
    has_duplicate = False
    dc_set = set()
    for dc_line in os.popen(cmd, 'r').readlines():
        dc_token = dc_line.split()
        dn = dc_token[0]
        cv = dc_token[1]
        dc = dn + "," + cv
        if dc in dc_set:
            print "duplicate dc found:", dc
            has_duplicate = True
        else:
            dc_set.add(dc)
    return has_duplicate

if not check_dc_line():
    print "no duplicate dc"
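The script above shells out to cat and grep via os.popen. On Python 3 the same check can be done natively with glob and a set, avoiding the subprocess entirely. A minimal sketch, assuming the same directory layout as the original (the function name and pattern parameter are my own):

```python
import glob

def find_duplicate_dc(pattern="/home/zhangj/hosts/*"):
    """Return the duplicated (DN, CV) keys found across all matching files."""
    seen = set()
    dups = []
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            for line in f:
                # Skip the BEGIN/END header and footer lines.
                if line.startswith("BEGIN") or line.startswith("END"):
                    continue
                parts = line.split()
                if len(parts) < 2:
                    continue  # ignore malformed rows
                key = (parts[0], parts[1])  # DN, CV
                if key in seen:
                    dups.append(key)
                else:
                    seen.add(key)
    return dups
```

Reading the files directly also means the check no longer depends on the shell pipeline's behavior, so it works the same on systems without grep.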
For 250 files totaling about 600,000 rows of data, the check takes about 1.67 seconds.
Not entirely satisfied with that speed, I wrote a shell script with the same behavior.
#!/bin/bash
cat /home/zhangj/hosts/* | grep -v BEGIN | grep -v END | awk '
BEGIN { has_duplicate = 0 }
{
    dc = $1","$2
    if (dc in dc_set) {
        print "duplicate dc found", dc
        has_duplicate = 1
    } else {
        dc_set[dc] = 1
    }
}
END {
    if (has_duplicate == 0) {
        print "no duplicate dc found"
    }
}'
For a more reliable comparison, each version was run 10 times.
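One way to make such repeated measurements reproducible is a small timing harness. A sketch using time.perf_counter; the harness itself is my own addition, and the command string shown in the usage comment is the pipeline from the scripts above:

```python
import subprocess
import time

def time_command(cmd, repeats=10):
    """Run a shell command `repeats` times and return the mean wall time in seconds."""
    total = 0.0
    for _ in range(repeats):
        start = time.perf_counter()
        subprocess.run(cmd, shell=True, stdout=subprocess.DEVNULL)
        total += time.perf_counter() - start
    return total / repeats

# Usage (path assumed, as in the original scripts):
# avg = time_command("cat /home/zhangj/hosts/* | grep -v BEGIN | grep -v END")
# print("average seconds:", avg)
```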
That covers the Python and bash versions of this small duplicate-checking tool.