Have you ever had to process a very large data file (hundreds of GB)? Or search through it, or run some other task that won't parallelize on its own? Data folks, I'm talking to you. You may have a CPU with four or more cores, but our trusty tools, such as grep, bzip2, wc, awk, sed, and so on, are single-threaded and will only ever use one CPU core.
To borrow the words of the cartoon character Cartman: "How do I make use of all these cores?"
To get Linux commands to use all of the CPU cores, we turn to the GNU Parallel command, which lets all of our cores perform magical map-reduce operations on a single machine, with the help of the rarely used --pipe parameter (also called --spreadstdin). This way, your load is spread evenly across the CPUs, really.
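As a quick sanity check (assuming GNU Parallel is installed; seq is just a stand-in data source here), you can watch --pipe split stdin into blocks, one per job:

seq 10000000 | parallel --pipe wc -l

Each line of output is the line count of one block. That is the "map" half of map-reduce; the examples below add the "reduce" step.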
BZIP2
bzip2 compresses better than gzip, but it's painfully slow! Don't worry, we have a way around that.
Previous practice:
cat bigfile.bin | bzip2 --best > compressedfile.bz2
Now this:
cat bigfile.bin | parallel --pipe --recend '' -k bzip2 --best > compressedfile.bz2
For bzip2 especially, GNU Parallel on a multicore CPU is super fast. Before you know it, it's done.
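Concatenated .bz2 streams decompress as a single file with plain bzip2, so the output above is a normal archive. To see the speedup for yourself, you can time both versions (bigfile.bin is just a placeholder name):

time bzip2 --best < bigfile.bin > /dev/null
time parallel --pipe --recend '' -k bzip2 --best < bigfile.bin > /dev/null

Note the -k flag in the parallel version: it keeps the compressed blocks in their original order.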
Grep
If you have a very large text file, you have probably been doing this:
grep 'pattern' bigfile.txt
Now you can do this:
cat bigfile.txt | parallel --pipe grep 'pattern'
or this:
cat bigfile.txt | parallel --block 10M --pipe grep 'pattern'
The second form uses the --block 10M parameter, which hands each job a chunk of roughly 10 megabytes of input; you can play with this value to adjust how much data each CPU core handles at a time.
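One caveat: with --pipe each grep only sees its own block, so anything that aggregates, such as counting matches with grep -c, needs a reduce step. A minimal sketch:

cat bigfile.txt | parallel --pipe grep -c 'pattern' | awk '{s+=$1} END {print s}'

Each job prints a per-block match count, and the final awk sums them into one total.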
Awk
Here is an example of using the awk command to sum a column of numbers in a very large data file.
General usage:
cat rands20M.txt | awk '{s+=$1} END {print s}'
Now this:
cat rands20M.txt | parallel --pipe awk \'{s+=\$1} END {print s}\' | awk '{s+=$1} END {print s}'
This one is a bit more involved: the --pipe parameter splits the cat output into blocks and dispatches one to each awk invocation, yielding a set of partial sums. Those partial sums flow through the second pipe into a single final awk, which adds them up into the overall result. The first awk carries three backslashes, which GNU Parallel requires so that the quotes and the dollar sign survive when it invokes awk.
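If the backslash dance bothers you, GNU Parallel's -q (--quote) option quotes the command for you, so the inner awk program can be written normally. A sketch of the same computation:

cat rands20M.txt | parallel -q --pipe awk '{s+=$1} END {print s}' | awk '{s+=$1} END {print s}'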
Wc
Want to count the lines of a file as fast as possible?
Traditional practice:
wc -l bigfile.txt
Now you should do this:
cat bigfile.txt | parallel --pipe wc -l | awk '{s+=$1} END {print s}'
Quite ingenious: parallel first "maps" out a large number of wc -l calls, each producing a partial count, and the final pipe hands those counts to awk, which sums them up.
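The same map-reduce shape works for wc's other counters too; for example, counting words instead of lines:

cat bigfile.txt | parallel --pipe wc -w | awk '{s+=$1} END {print s}'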
Sed
Want to use the sed command to perform a huge number of substitutions in an enormous file?
General Practice:
sed s^old^new^g bigfile.txt
Now you can:
cat bigfile.txt | parallel --pipe sed s^old^new^g
You can then pipe the output into the file of your choice.
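One word of caution: without -k the blocks may come back out of order as jobs finish, so if line order matters, keep it. For example (newfile.txt being whatever destination you like):

cat bigfile.txt | parallel -k --pipe sed s^old^new^g > newfile.txt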