Workaround:
1. Use the split command to cut the large file, saving 1 million lines per small file.
split parameters:
-b: split by file size; a unit such as b, k, or m can be appended;
-l: split by number of lines;
# split into files of 1000 lines each
split -l 1000 httperr8007.log httperr
produces httperraa, httperrab, httperrac ...
# split into files of 100 KB each
split -b 100k httperr8007.log http
produces httpaa, httpab, httpac ...
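A quick way to confirm the split did not drop anything is to compare the pieces with the source; a minimal sketch, assuming the two-letter suffixes (httperraa, httperrab, ...) produced by the first example above:

# the line count of the concatenated pieces should equal that of the original
wc -l httperr8007.log
cat httperr?? | wc -l
# a stricter check: identical checksums
md5sum httperr8007.log
cat httperr?? | md5sum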
2. Loop over the resulting 1-million-line files, create a new directory for each, and then cut each one into 10,000-line small files:
#!/bin/bash
bigfile="1.txt"
# first cut: split the big file into 1,000,000-line chunks named textaa, textab, ...
split -l 1000000 "$bigfile" text
currdir=1
for smallfile in $(ls | grep "^text"); do
    # number of lines in this chunk
    linenum=$(wc -l "$smallfile" | awk '{print $1}')
    n1=1
    file=1
    savedir="${smallfile}${currdir}"
    if [ ! -d "$savedir" ]; then
        mkdir "$savedir"
    fi
    # second cut: write 10,000-line pieces into the chunk's own directory
    # (-le rather than -lt so the last line is kept when the count lands exactly on n1)
    while [ "$n1" -le "$linenum" ]; do
        n2=$(expr $n1 + 9999)
        sed -n "${n1},${n2}p" "$smallfile" > "$savedir/text${file}.txt"
        n1=$(expr $n2 + 1)
        file=$(expr $file + 1)
    done
    currdir=$(expr $currdir + 1)
done
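The script can be tried on generated data before pointing it at a real log; in the sketch below the script is assumed to be saved as test_split.sh (the name is just a placeholder) and 1.txt is a synthetic 2,500,000-line file, since 1.txt is the name the script expects:

# build a 2,500,000-line test file and run the script against it
seq 1 2500000 > 1.txt
bash test_split.sh
# expected layout: directories textaa1, textab2, textac3; the first two hold
# text1.txt ... text100.txt with 10,000 lines apiece
ls textaa1 | head
wc -l textaa1/text1.txt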