Have you ever needed to process a very large data set (several hundred GB)? Or search through it, or run other operations on it that take far too long on one core? Data folks, I am talking to you. You may have a 4-core or even beefier multi-core CPU, but our trusty tools such as grep, bzip2, wc, awk, sed, and so on are all single-threaded and can only use one CPU core.
"How can I put all of those cores to use?"
To make Linux commands use all the CPU cores, we use the GNU Parallel command, which lets every core on a single machine take part in a magical map-reduce operation. It relies on the rarely used --pipe parameter (also called --spreadstdin). With it, your load is spread evenly across the CPUs, really.
BZIP2
Bzip2 compresses better than gzip, but it is slow! Don't fret, we have a way around this.
Previous practices:
cat bigfile.bin | bzip2 --best > compressedfile.bz2
Now:
cat bigfile.bin | parallel --pipe --recend '' -k bzip2 --best > compressedfile.bz2
Especially for bzip2, GNU Parallel is dramatically faster on multi-core CPUs. Before you know it, it's done.
GREP
If you have a huge text file, you may have previously done this:
grep pattern bigfile.txt
Now you can:
cat bigfile.txt | parallel --pipe grep 'pattern'
Or:
cat bigfile.txt | parallel --block 10M --pipe grep 'pattern'
The second usage adds the --block 10M parameter, which means each job processes a 10-megabyte chunk of the file — you can use this parameter to tune how much data each CPU core handles at a time.
AWK
The following is an example of using the awk command to sum the numbers in a very large data file.
General usage:
cat rands20M.txt | awk '{s+=$1} END {print s}'
Now:
cat rands20M.txt | parallel --pipe awk \'{s+=\$1} END {print s}\' | awk '{s+=$1} END {print s}'
This is a bit convoluted: the --pipe parameter of parallel splits the cat output into blocks and dispatches each one to a separate awk invocation, producing a set of partial sums. Those partial sums flow through the second pipe into the same awk program, which prints the final result. The first awk has three backslashes, which GNU Parallel requires in order to call awk.
WC
Do you want to count the number of lines in a file as fast as possible?
Traditional practices:
wc -l bigfile.txt
Now you should:
cat bigfile.txt | parallel --pipe wc -l | awk '{s+=$1} END {print s}'
Very clever. The parallel command first 'maps' the input to a large number of `wc -l` calls, each producing a partial count, and finally pipes them to awk for the 'reduce' step that sums them up.
SED
Do you want to use sed to perform a huge number of replacements in a large file?
General practice:
sed s^old^new^g bigfile.txt
Now you can:
cat bigfile.txt | parallel --pipe sed s^old^new^g
... Then, as usual, you can redirect the output through a pipe into the file of your choice.
Use multiple CPU Cores with your Linux commands