Linux: Where are all my inodes being used?
Disclaimer: This page is a Chinese-English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use it, you must likewise follow the CC BY-SA license, cite the original URL and author information, and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/347620/
Where are all my inodes being used?
Asked by Joel
How do I find out which directories are responsible for chewing up all my inodes?
Ultimately the root directory will be responsible for the largest number of inodes, so I'm not sure exactly what sort of answer I want...
Basically, I'm running out of available inodes and need to find an unneeded directory to cull.
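(A quick sanity check, assuming GNU coreutils: df -i reports inode usage instead of block usage, so an IUse% near 100% on a filesystem confirms the diagnosis.)
df -i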
Thanks, and sorry for the vague question.
Accepted answer by Paul Tomblin
So basically you're looking for which directories have a lot of files? Here's a first stab at it:
find . -type d -print0 | xargs -0 -n1 count_files | sort -n
where "count_files" is a shell script that does (thanks Jonathan)
其中“count_files”是一个shell脚本(感谢乔纳森)
echo $(ls -a "" | wc -l)
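If you'd rather not create a separate script file (perhaps impossible when the filesystem has no inodes left), the helper can be inlined; a sketch assuming a POSIX sh:
find . -type d -print0 | xargs -0 -n1 sh -c 'echo $(ls -a "$1" | wc -l) "$1"' sh | sort -n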
Answered by Alnitak
Here's a simple Perl script that'll do it:
#!/usr/bin/perl -w
use strict;

sub count_inodes($);  # forward declaration so the recursive call is checked against the prototype

sub count_inodes($)
{
    my $dir = shift;
    if (opendir(my $dh, $dir)) {
        my $count = 0;
        while (defined(my $file = readdir($dh))) {
            next if ($file eq '.' || $file eq '..');
            $count++;
            my $path = $dir . '/' . $file;
            count_inodes($path) if (-d $path);  # recurse into subdirectories
        }
        closedir($dh);
        printf "%7d\t%s\n", $count, $dir;       # per-directory count (not cumulative)
    } else {
        warn "couldn't open $dir - $!\n";
    }
}

push(@ARGV, '.') unless (@ARGV);  # default to the current directory
while (@ARGV) {
    count_inodes(shift);
}
If you want it to work like du (where each directory's count also includes the recursive count of its subdirectories), change the recursive function to return $count and then at the recursion point say:
$count += count_inodes($path) if (-d $path);
Answered by Hannes
If you don't want to make a new file (or can't because you ran out of inodes) you can run this query:
for i in `find . -type d `; do echo `ls -a $i | wc -l` $i; done | sort -n
As insider mentioned in another answer, a solution using find will be much quicker since recursive ls is quite slow; check below for that solution! (Credit where credit is due!)
Answered by insider
The methods provided above with recursive ls are very slow. For quickly finding the parent directory consuming most of the inodes, I used:
cd /partition_that_is_out_of_inodes
for i in *; do echo -e "$(find $i | wc -l)\t$i"; done | sort -n
Answered by AndrewM at Affinity
for i in dir.[01]; do find $i -printf "%i\n" | sort -u | wc -l | xargs echo $i --; done
dir.0 -- 27913
dir.1 -- 27913
Answered by stinkoid
The Perl script is good, but beware of symlinks: recurse only when the -l filetest returns false, or you will at best over-count and at worst recurse indefinitely (which could, minor concern, invoke Satan's 1000-year reign).
The whole idea of counting inodes in a file system tree falls apart when there are multiple links to more than a small percentage of the files.
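(To gauge how much hard links distort a naive entry count, a sketch assuming GNU find: compare the number of names in the tree with the number of distinct inode numbers.)
echo "names: $(find . | wc -l)   distinct inodes: $(find . -printf '%i\n' | sort -u | wc -l)"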
Answered by Romuald Brunet
Just wanted to mention that you could also search indirectly using the directory size, for example:
find /path -type d -size +500k
Where 500k could be increased if you have a lot of large directories.
Note that this method is not recursive. It will only help you if you have a lot of files in one single directory, but not if the files are evenly distributed across its descendants.
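(A possible follow-up, assuming a POSIX sh: once find has flagged some oversized directories, count their actual entries to confirm which ones are the culprits.)
find /path -type d -size +500k -exec sh -c 'echo "$(ls -a "$1" | wc -l) $1"' sh {} \;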
Answered by Noah Spurrier
This is my take on it. It's not so different from the others, but the output is pretty, and I think it counts more of the valid inodes than the others do, since it also counts directories and symlinks. It counts the number of files in each subdirectory of the working directory, sorts and formats the output into two columns, and prints a grand total (shown as ".", the working directory). It will not follow symlinks but will count files and directories that begin with a dot. It does not count device nodes or special files like named pipes; just remove the "-type l -o -type d -o -type f" test if you want to count those too. Because this command is split into two find commands, it cannot correctly discriminate against directories mounted on other filesystems (the -mount option will not work), so it really should ignore the "/proc" and "/sys" directories. You can see that when running it in "/", including "/proc" and "/sys" grossly skews the grand total count.
for ii in $(find . -maxdepth 1 -type d); do
echo -e "${ii}\t$(find "${ii}" -type l -o -type d -o -type f | wc -l)"
done | sort -n -k 2 | column -t
Example:
# cd /
# for ii in $(find -maxdepth 1 -type d); do echo -e "${ii}\t$(find "${ii}" -type l -o -type d -o -type f | wc -l)"; done | sort -n -k 2 | column -t
./boot 1
./lost+found 1
./media 1
./mnt 1
./opt 1
./srv 1
./lib64 2
./tmp 5
./bin 107
./sbin 109
./home 146
./root 169
./dev 188
./run 226
./etc 1545
./var 3611
./sys 12421
./lib 17219
./proc 20824
./usr 56628
. 113207
Answered by sanxiago
This command works in the highly unlikely case that your directory structure is identical to mine:
find / -type f | grep -oP '^/([^/]+/){3}' | sort | uniq -c | sort -n
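(The {3} in the pattern groups files by their first three path components; a sketch with the depth pulled into a variable, DEPTH being my own addition, assuming GNU grep for -oP:)
DEPTH=3
find / -type f | grep -oP "^/([^/]+/){$DEPTH}" | sort | uniq -c | sort -n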
Answered by CO4 Computing
Just a note: when you finally find some mail spool directory and want to delete all the junk in there, rm * will not work if there are too many files (the shell's expanded argument list becomes too long). You can run the following command to quickly delete everything in that directory:
*WARNING* THIS WILL DELETE ALL FILES QUICKLY, FOR CASES WHEN rm DOESN'T WORK
find . -type f -delete
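(Before pulling that trigger, a cautious sketch: preview a sample of what would match and count it, so the -delete run holds no surprises.)
find . -type f | head -n 20   # eyeball a sample of what will be removed
find . -type f | wc -l        # how many files will go
find . -type f -delete        # then delete them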