Linux disk usage at 100%, but cannot find which large files are filling the disk

Source: Internet
Author: User

The Linux root partition shows 100% usage, but inspecting the directories on the partition turns up nothing large and nothing obviously occupying the space. How should this be handled?
A reboot will certainly work. What was actually done in this case: the application was restarted, and the space was released.
1. lsof | grep deleted
2. reboot
Running df -lh on Linux shows the disk nearly full:
/dev/sda1  130G  123G  353M  100%  /
The disk is running out, but what I want is to find out which files are actually occupying the space.
1. If large files are the cause, here is how to search for files above a given size:
find / -size +100c -print
This searches from the root for files larger than 100 bytes (you can of course set the byte threshold yourself).
You can use find / -size +100c -exec ls -l {} \; to list the file attributes as well.
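A minimal sketch of the find approach above, run against a scratch directory so it is safe to try (in the real case you would search from / and use a larger threshold such as +100M):

```shell
# Create a scratch directory with one large and one small file (illustration only)
tmp=$(mktemp -d)
dd if=/dev/zero of="$tmp/big.log" bs=1M count=5 2>/dev/null
echo small > "$tmp/small.txt"

# -size +1M matches files larger than 1 MiB; -exec ls -l prints their attributes
found=$(find "$tmp" -type f -size +1M)
find "$tmp" -type f -size +1M -exec ls -l {} \;

rm -rf "$tmp"
```

Only the 5 MB file is matched; the small file is filtered out by the -size predicate.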
2. If the cause is an application that generates many log files which were never cleaned up, the most obvious sign is that the system's space usage grows gradually, with roughly the same increment each day. The quickest fix is to ask the application vendor which directory the logs are stored in and clean it up. If you cannot reach the vendor, you have to do it yourself; write a script to check:
#!/bin/ksh
# Output the disk space used by every directory, in GB
du -BG > fs_du.log
# Walk the result and record every directory using 10 GB or more
cat fs_du.log | while read fs_used dir
do
        if [ "${fs_used%G}" -ge 10 ]
        then
                echo "$fs_used $dir" >> result.log
        fi
done
View the results:
more result.log
This shows you the directories using a large amount of space, so you can go to the corresponding directories and check exactly what is taking up the disk.
(The value 10 in the size test is the 10 GB threshold; you can change it.)
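Instead of a loop, du and sort can do the same job in one pipeline. A sketch on a scratch directory; on a real system you would run something like du -x --max-depth=1 -BG / | sort -rn:

```shell
# Build a scratch tree with one heavy and one light directory (illustration only)
tmp=$(mktemp -d)
mkdir "$tmp/bigdir" "$tmp/smalldir"
dd if=/dev/zero of="$tmp/bigdir/data" bs=1M count=3 2>/dev/null
echo hi > "$tmp/smalldir/note"

# -k reports sizes in KiB so sort -rn can order them numerically;
# line 1 is the total, line 2 is the largest subdirectory
du -k --max-depth=1 "$tmp" | sort -rn
top=$(du -k --max-depth=1 "$tmp" | sort -rn | sed -n 2p)
rm -rf "$tmp"
```

On the real root filesystem, add -x so du stays on one filesystem and does not descend into /proc or network mounts.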
3. Because of human error, some processes were killed before they finished executing. However, the program's data in the cache is not released and is effectively still live, so temporary files keep consuming a large amount of disk space. The characteristic sign is explosive growth that fills the disk in a very short time. Ways to solve it:
1. If it is because the parent process was killed and its child processes are still running, the simplest fix is to kill the child processes; the space will then be released.
2. If you can use ipcs to confirm which user's process holds the resources, it is also not difficult: remove them with ipcrm (the cases vary, so check the command's usage; it is very convenient).
3. If the processes belong to critical users, e.g. root, the Oracle instance owner, or live production users, it is recommended to reboot the server after confirming the problem is caused by the shared cache.
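The ipcs/ipcrm step can be sketched as follows. To have something safe to act on, the example first creates a throwaway shared-memory segment with ipcmk (from util-linux); on a real system the orphaned segments would already exist and you would pick them out by the owner column:

```shell
# Create a throwaway System V shared-memory segment (illustration only)
id=$(ipcmk -M 4096 | awk '{print $NF}')

# List shared-memory segments; the owner column tells you whose they are
ipcs -m

# Remove the segment by its id
ipcrm -m "$id"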
4. You have deleted some large files, or running
du -h
under the root partition shows far less than the 130G reported used, while df still shows 100% usage. Then you have almost certainly hit a Linux quirk, and a direct reboot will resolve it. (It is not necessarily a bug, of course; I have come across a program that writes a log but does not release the space after the log file is deleted. This is caused by Linux's own mechanism: the space is only released once the relevant program is stopped.) It may also be that it is not your disk space that is exhausted but your inodes. Use the
df -i /dev/sdbX (where X is the partition number)
command to view the inode situation.
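When df -i shows IUse% at 100%, the filesystem cannot create new files even though df -h may show free space. A sketch of hunting down the directory hoarding the inodes by counting entries, demonstrated on a scratch directory (in the real case you would loop over the top-level directories of the affected mount):

```shell
# Scratch tree: one directory with many files, one with few (illustration only)
tmp=$(mktemp -d)
mkdir "$tmp/many" "$tmp/few"
for i in 1 2 3 4 5 6 7 8 9; do : > "$tmp/many/f$i"; done
: > "$tmp/few/lonely"

# One line per directory: number of entries, then the path;
# the directory consuming the most inodes floats to the top
worst=$( (for d in "$tmp"/*/; do
    printf '%s %s\n' "$(find "$d" | wc -l)" "$d"
done) | sort -rn | head -1)
echo "$worst"
rm -rf "$tmp"
```

Each file, directory, and symlink costs one inode, so the entry count is a direct proxy for inode consumption.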
Workaround: delete the empty files that are occupying inodes.
Command:
find / -empty -a -type f -exec rm -rf {} \;

Another case: the /opt partition was filled up by web logs, causing some services to stop working, so the logs (nearly 11 GB) were removed with rm -rf, but the service still did not recover. df -hT showed the partition at 100% usage:
[root@anjing opt]# df -hT
But the du -sh /opt command shows:
[root@anjing /]# du -sh /opt/
8.3G    /opt/
The files have been deleted, but the space is not freed. A reboot would of course resolve it, but it would interrupt every service on the server. Instead, you can view the deleted files that are still consuming space with the following command:
[root@anjing opt]# lsof | grep deleted
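The space can then be reclaimed without rebooting and without killing the process, by truncating the deleted file through /proc. A sketch in which a background shell stands in for the misbehaving daemon (the fd number 3 and the log name are illustrative):

```shell
# Simulate the situation: a process keeps a deleted log file open
tmp=$(mktemp -d)
( exec 3> "$tmp/app.log"                     # "daemon" opens its log on fd 3
  rm -f "$tmp/app.log"                       # the log is deleted while still open
  dd if=/dev/zero bs=1M count=2 >&3 2>/dev/null   # 2 MB of space stays allocated
  sleep 5 ) &
pid=$!
sleep 1

# lsof +L1 lists open files whose link count is 0, i.e. deleted-but-held files
command -v lsof >/dev/null && lsof +L1 -p "$pid"

# Truncate the file through /proc instead of killing the process
: > "/proc/$pid/fd/3"
size_after=$(stat -Lc %s "/proc/$pid/fd/3")
wait "$pid" 2>/dev/null
rm -rf "$tmp"
```

After the truncation the allocated blocks are returned to the filesystem immediately, while the process keeps running with a valid (now empty) file descriptor.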
