Linux Amazon EC2 - Disk Full
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license, cite the original URL and author information, and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/20031604/
Amazon EC2 - disk full
Asked by D_R
When I run df -h on my Amazon EC2 server, this is the output:
[ec2-user@ip-XXXX ~]$ df -h
Filesystem Size Used Avail Use% Mounted on
/dev/xvda1 25G 25G 0 100% /
tmpfs 4.0G 0 4.0G 0% /dev/shm
For some reason, something is eating up my storage space.
I'm trying to find all of the big files/folders, and this is what I get back:
[ec2-user@ip-XXXX ~]$ sudo du -a / | sort -n -r | head -n 10
993580 /
639296 /usr
237284 /usr/share
217908 /usr/lib
206884 /opt
150236 /opt/app
150232 /opt/app/current
150224 /opt/app/current/[deleted].com
113432 /usr/lib64
How can I find out what's eating my storage space?
Accepted answer by u1860929
Well, I think it's one (or more) log files that have grown too large and need to be removed or backed up. I would suggest going after the big files first, so find all files greater than 10 MB (10 MB is a reasonably large cutoff; you can similarly use +1M for 1 MB):
sudo find / -type f -size +10M -exec ls -lh {} \;
Now you can identify which ones are causing the trouble and deal with them accordingly.
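If the listing is long, a variant like the following (my own sketch, not part of the original answer; it assumes GNU sort, which provides the -h flag) ranks the matches by size so the biggest offenders appear first:
sudo find / -type f -size +10M -exec du -h {} + | sort -rh | head -n 20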
As for your original du -a / | sort -n -r | head -n 10 command, that won't work, since it sorts by size: all ancestor directories of a large file rise up the pyramid, while the individual file itself will most probably be missed.
Note: it should be pretty simple to spot other, similar log files/binaries in the location of the files you find, so as a suggestion, cd into the directory containing the original file to clean up more files of the same kind. You can also iterate, running the command for files larger than 1 MB next, and so on, as in the sketch below.
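For example, a small bash loop (hypothetical; the starting directory /var/log is only an illustration) can walk down through size thresholds:
# hypothetical cleanup pass: re-run find with progressively smaller thresholds
# (each pass also re-matches files already seen at the larger threshold)
for size in +100M +10M +1M; do
    echo "== files larger than ${size#+} =="
    sudo find /var/log -type f -size "$size" -exec ls -lh {} \;
done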
Answered by Ricardo Martins
At /, type du -hs * as root:
$ sudo su -
cd /; du -hs *
You will see the total size of every folder and can identify the bigger ones.
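A small refinement (my addition, not in the original answer): pipe the output through sort -rh so the largest directories come first, then repeat the command inside the biggest one to drill down:
cd /; du -hs * 2>/dev/null | sort -rh | head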
Answered by mti2935
ansh0l's answer is the way to go to find large files. But if you want to see how much space each directory in your file system is consuming, cd to the root directory, then run du -k --max-depth=1. This will show you how much space is being consumed by each subdirectory within the root directory. When you spot the culprit, cd into that directory, run the same command again, and repeat until you find the files that are consuming all of the space.
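As an illustration, a drill-down session might look like this (paths and sizes are invented, and the output is abridged):
$ cd / && sudo du -k --max-depth=1 | sort -n | tail -n 3
612340 ./usr
20480000 ./var
21300000 .
$ cd /var && sudo du -k --max-depth=1 | sort -n | tail -n 3
101200 ./cache
19968000 ./log
20480000 .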
Answered by user18853
If you are not able to find any gigantic file, killing some processes might solve the issue (it worked for me; read the full answer to know why).
Earlier:
/dev/xvda1 8256952 7837552 0 100% /
Now:
/dev/xvda1 8256952 1062780 6774744 14% /
Reason: if you do rm <filename> on a file which is currently open by some process, the name is removed but the space is not freed, and the process can still be writing to the file. These ghost files can't be found by the find command and can't be deleted. Use this command to find out which processes are using deleted files:
lsof +L1
Kill the processes to release the files. Sometimes it's difficult to kill all the processes using the file; try restarting the system (it doesn't feel good, but it's a quick solution that makes sure no process uses the deleted file).
Read: https://serverfault.com/questions/232525/df-in-linux-not-showing-correct-free-space-after-file-removal/232526
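If killing every process or rebooting is not an option, one workaround (my addition, not from the original answer; the PID and file-descriptor number are placeholders you read off the lsof +L1 output) is to truncate the deleted-but-open file through /proc, which frees the space immediately while the process keeps running:
# suppose lsof +L1 shows PID 1234 holding the deleted file on FD 3
sudo sh -c ': > /proc/1234/fd/3'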
Answered by Stevie
If you have any snapshots against the file system, the usage doesn't show in the OS.
So the longer you leave your snapshot, the more disk it will consume on your current volume. If you delete the snapshot and then reboot, the missing disk capacity will reappear.
Answered by Krishan Kumar Mourya
This space is consumed by mail notifications.
You can check it by typing:
sudo find / -type f -size +1000M -exec ls -lh {} \;
It will show files larger than 1000 MB.
The result will include a file such as:
/var/mail/username
You can free that space by running the following command:
> /var/mail/username
Note that the greater-than (>) symbol is not a prompt; you have to run the command with it. It truncates the file to zero length, freeing the space.
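An equivalent, arguably clearer way to empty the file (my addition; truncate is part of GNU coreutils, so it should be available on Amazon Linux):
truncate -s 0 /var/mail/username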
Now check your free space with:
df -h
Now you have enough free space. Enjoy... :)