On Linux, a moment of carelessness can easily produce huge log files, often hundreds of megabytes, which slow down analysis and waste time. In that situation it helps to cut the file into N smaller files, so you can take the last piece to see the most recent log entries. There are other approaches, such as using a shell script to rotate logs daily, but those are outside the scope of this article.
On CentOS, large files can be split with the split command:

split [OPTION] [INPUT [PREFIX]]
The main options are as follows:
- -a: specify the suffix length
- -b: specify how many bytes per output file
- -d: use numeric suffixes instead of letters
- -l: specify the number of lines per output file
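As a quick sketch of these options in action (using a small throwaway file and the hypothetical prefix piece_ rather than a real log):

```shell
# Create a sample file with 10 lines
seq 1 10 > sample.txt

# Split it into pieces of 3 lines each, with 2-digit numeric suffixes
split -a 2 -d -l 3 sample.txt piece_

# piece_00, piece_01, piece_02 hold 3 lines each; piece_03 holds the last line
ls piece_*
wc -l piece_03
```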
For example, say I want a suffix length of 2 (-a 2), numeric suffixes (-d), and 10 MB per file (-b 10m). The command can then be written as follows:
split -a 2 -d -b 10m /var/lib/mysql/general.log nowamagic
The split files are generated under the /root folder (the working directory), named nowamagic00, nowamagic01, and so on. Each of them is 10 MB, except the last one (it might happen to be exactly 10 MB too, but the odds are very small).
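A handy companion trick: because split preserves byte order and the 2-digit numeric suffixes sort correctly, the pieces can be concatenated back into the original file with cat. A minimal sketch using a throwaway sample file and the hypothetical prefix part_ (not the log from the example above):

```shell
# Create a 2500-byte sample file and split it into 1 KB pieces
head -c 2500 /dev/urandom > sample.bin
split -a 2 -d -b 1k sample.bin part_

# Concatenate the pieces back in order
cat part_* > restored.bin

# cmp is silent when the files are identical
cmp sample.bin restored.bin && echo "files match"
```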
Easy to understand; recorded here for anyone who needs it.