Have you ever needed to process a very large amount of data (several hundred GB)? Or search through it, or run other operations on it that won't parallelize on their own? Data experts, I am talking to you. You may have a 4-core or even beefier multi-core CPU, but our go-to tools, such as grep, bzip2, wc, awk, sed and so on, are all single-threaded and can only use one CPU core.
"How can I use these kernels "?
To let Linux commands use all the CPU cores, we will use the GNU Parallel command, which lets all of our CPU cores perform magical map-reduce operations on a single machine. This relies on the rarely used --pipe parameter (also called --spreadstdin). With it, your load will be spread evenly across all CPUs, for real.
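Before the examples, here is the shared shape of every recipe below, as a minimal sketch (nproc is from GNU coreutils; bigfile and yourcmd are placeholder names, not real ones):

nproc                                  # number of cores; parallel runs one job per core by default
cat bigfile | parallel --pipe yourcmd  # chop stdin into blocks and run one yourcmd per block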
BZIP2
bzip2 compresses better than gzip, but it is very slow! Don't worry, we have a solution for that.
The old way:
cat bigfile.bin | bzip2 --best > compressedfile.bz2
The new way:
cat bigfile.bin | parallel --pipe --recend '' -k bzip2 --best > compressedfile.bz2
Especially for bzip2, GNU Parallel is super fast on multi-core CPUs; it finishes before you know it.
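If you want to see the speedup for yourself, time both pipelines with the shell's time keyword; this is just a sketch reusing the placeholder file names above, with bzip2 -t verifying the result afterwards:

time bzip2 --best < bigfile.bin > one-core.bz2
time parallel --pipe --recend '' -k bzip2 --best < bigfile.bin > all-cores.bz2
bzip2 -t all-cores.bz2    # exits 0 if the archive is intact

This works at all because a .bz2 file may consist of several concatenated streams: -k glues the per-block archives back together in input order, and bunzip2 decompresses the result as one file.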
GREP
If you have a very large text file, you probably used to do this:
grep pattern bigfile.txt
Now you can do this:
cat bigfile.txt | parallel --pipe grep 'pattern'
or:
cat bigfile.txt | parallel --block 10M --pipe grep 'pattern'
Note the --block 10M parameter: it makes each core work on 10-megabyte chunks of lines; you can use this parameter to tune how much data each CPU core processes at a time.
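One caveat: flags that aggregate over the whole file now aggregate per block instead. grep -c, for instance, prints one count per block, so you have to sum the pieces yourself; a sketch, borrowing the awk summing trick from the AWK section below:

cat bigfile.txt | parallel --pipe --block 10M grep -c 'pattern' | awk '{s+=$1} END {print s}'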
AWK
The following is an example of using the awk command to sum up the numbers in a very large data file.
The usual way:
cat rands20M.txt | awk '{s+=$1} END {print s}'
Now do it like this:
cat rands20M.txt | parallel --pipe awk \'{s+=\$1} END {print s}\' | awk '{s+=$1} END {print s}'
This one is a bit more involved: the --pipe parameter of the parallel command splits cat's output into blocks and hands each one to its own awk call, producing many partial sums. The partial sums then travel through the second pipe into the same awk command, which prints the final total. The first awk carries three backslashes; that escaping is what GNU Parallel needs in order to invoke awk.
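If you want to try this without a real data file, you can generate one; rands20M.txt is just the article's example name, and seq produces sequential integers rather than random values, which is enough to check that both pipelines agree:

seq 1 20000000 > rands20M.txt
cat rands20M.txt | awk '{s+=$1} END {print s}'
cat rands20M.txt | parallel --pipe awk \'{s+=\$1} END {print s}\' | awk '{s+=$1} END {print s}'

Both should print the same total (200000010000000, the sum of 1 through 20,000,000).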
WC
Do you want to count the number of lines in a file at the fastest possible speed?
The traditional way:
wc -l bigfile.txt
Now you should do this:
cat bigfile.txt | parallel --pipe wc -l | awk '{s+=$1} END {print s}'
Very clever: the parallel command 'maps' the input into a large number of wc -l calls, each producing a partial count, and those counts are finally piped into awk, which adds them up.
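To convince yourself the map-reduce version counts correctly, run it side by side with plain wc -l (bigfile.txt remains a placeholder name):

wc -l < bigfile.txt
cat bigfile.txt | parallel --pipe wc -l | awk '{s+=$1} END {print s}'   # should print the same count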
SED
Do you want to perform a huge number of replacement operations in a large file with the sed command?
The usual way:
sed s^old^new^g bigfile.txt
Now you can:
cat bigfile.txt | parallel --pipe sed s^old^new^g
...and then use a pipe to store the output in a file of your choice.
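One caveat before you keep the output: blocks may finish out of order, so add -k (--keep-order) whenever line order matters. A sketch, with bigfile_new.txt as a made-up output name:

cat bigfile.txt | parallel --pipe -k sed s^old^new^g > bigfile_new.txt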
[Use multiple CPU Cores with your Linux commands]