Argument list too long error for Linux rm, cp, mv commands

Note: this page is a translation of a popular StackOverflow thread, provided under the CC BY-SA 4.0 license. If you use or share it, you must likewise follow the CC BY-SA license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/11289551/


Argument list too long error for rm, cp, mv commands

Tags: linux, unix, command-line-arguments

Asked by Vicky

I have several hundred PDFs under a directory in UNIX. The names of the PDFs are really long (approx. 60 chars).

When I try to delete all PDFs together using the following command:

rm -f *.pdf

I get the following error:

/bin/rm: cannot execute [Argument list too long]

What is the solution to this error? Does this error occur for the mv and cp commands as well? If yes, how do I solve it for those commands?

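A reproduction sketch (not from the original question; the paths are hypothetical, and the file count assumes a ~2 MB ARG_MAX):

mkdir /tmp/manypdfs && cd /tmp/manypdfs
# xargs batches the names, so creating the files succeeds
seq 1 500000 | sed 's/$/.pdf/' | xargs touch
# the glob below expands to roughly 5 MB of arguments and exceeds ARG_MAX
rm -f *.pdf
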
Accepted answer by DPlusV

This occurs because bash actually expands the asterisk to every matching file, producing a very long command line.

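A rough way to see this for yourself (a sketch; echo is a builtin, so it is not itself subject to the limit):

echo *.pdf | wc -c    # approximate byte size of the expanded argument list
getconf ARG_MAX       # maximum argument-list size (in bytes) the kernel accepts
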
Try this:

find . -name "*.pdf" -print0 | xargs -0 rm

Warning: this is a recursive search and will find (and delete) files in subdirectories as well. Tack -f onto the rm command only if you are sure you don't want confirmation.

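If you do want per-file confirmation, one option (a sketch, not from the original answer) is xargs's -p prompt mode with one name per invocation:

find . -name "*.pdf" -print0 | xargs -0 -p -n 1 rm
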
You can do the following to make the command non-recursive:

find . -maxdepth 1 -name "*.pdf" -print0 | xargs -0 rm

Another option is to use find's -delete flag:

find . -name "*.pdf" -delete

Answered by BigMike

You can try this:

for f in *.pdf
do
  rm "$f"    # quote the variable so names with spaces survive
done

EDIT: ThiefMaster's comment suggests I shouldn't disclose such dangerous practices to young shell jedis, so I'll add a "safer" version (for the sake of preserving things when someone has a "-rf . ..pdf" file):

echo "# Whooooo" > /tmp/dummy.sh
for f in '*.pdf'
do
   echo "rm -i $f" >> /tmp/dummy.sh
done

After running the above, just open the /tmp/dummy.sh file in your favorite editor and check every single line for dangerous filenames, commenting them out if found.

Then copy the dummy.sh script into your working dir and run it.

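A sketch of that final step, assuming the review found nothing dangerous:

cp /tmp/dummy.sh .
sh ./dummy.sh
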
All this for security reasons.

Answered by Jon Lin

Or you can try:

find . -name '*.pdf' -exec rm -f {} \;
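
Note that \; runs one rm process per file. A variant (not from the original answer) that batches many names per rm invocation, much like xargs does:

find . -name '*.pdf' -exec rm -f {} +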

Answered by lind

And another one:

cd /path/to/pdf
printf "%s\0" *.[Pp][Dd][Ff] | xargs -0 rm

printf is a shell builtin and, as far as I know, it always has been. Given that printf is a builtin rather than an external command, it's not subject to the "argument list too long ..." fatal error.

So we can safely use it with shell globbing patterns such as *.[Pp][Dd][Ff]; we then pipe its output to the remove (rm) command through xargs, which makes sure to fit only as many file names on each command line as will not make rm fail (rm, unlike printf, is an external command).

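With GNU xargs you can inspect the batch limits it will actually apply (GNU-specific; a sketch):

xargs --show-limits < /dev/null
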
The \0 in printf serves as a NUL separator for the file names, which are then processed by the xargs command using it (-0) as the separator, so rm does not fail when there are spaces or other special characters in the file names.

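A throwaway demo of that (hypothetical scratch directory): names containing spaces survive the NUL-delimited pipeline:

mkdir -p /tmp/pdfdemo && cd /tmp/pdfdemo
touch "a b.pdf" "c d.pdf"
printf "%s\0" *.[Pp][Dd][Ff] | xargs -0 rm
ls    # both files are gone
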
Answered by ThiefMaster

find has a -delete action:

find . -maxdepth 1 -name '*.pdf' -delete

Answered by édouard Lopez

tl;dr

It's a kernel limitation on the size of the command-line argument list. Use a for loop instead.

Origin of the problem

This is a system issue, related to execve and the ARG_MAX constant. There is plenty of documentation about it (see man execve, Debian's wiki).

Basically, the expansion produces a command (with its parameters) that exceeds the ARG_MAX limit. On kernel 2.6.23, the limit was set at 128 kB. This constant has since been increased, and you can get its value by executing:

getconf ARG_MAX
# 2097152 # on 3.5.0-40-generic

Solution: Using a for Loop

Use a for loop, as recommended on BashFAQ/095; there is no limit except RAM/memory space:

Dry run to ascertain that it will delete what you expect:

for f in *.pdf; do echo rm "$f"; done

And execute it:

for f in *.pdf; do rm "$f"; done

This is also a portable approach, as globs have strong and consistent behavior among shells (they are part of the POSIX spec).

Note: As noted in several comments, this is indeed slower but more maintainable, as it can adapt to more complex scenarios, e.g. where one wants to do more than just one action per file.

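For instance, a sketch that does two actions per file (the archive/ directory is an assumption):

mkdir -p archive
for f in *.pdf; do
    mv "$f" archive/ && echo "archived: $f"
done
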
Solution: Using find

If you insist, you can use find, but really don't use xargs, as it "is dangerous (broken, exploitable, etc.) when reading non-NUL-delimited input":

find . -maxdepth 1 -name '*.pdf' -delete

Using -maxdepth 1 ... -delete instead of -exec rm {} + allows find to simply execute the required system calls itself, without using an external process, and is hence faster (thanks to @chepner's comment).


Answered by thai_phan

I only know a way around this. The idea is to export the list of pdf files you have into a file, then split that file into several parts, and then remove the pdf files listed in each part.

ls | grep .pdf > list.txt
wc -l list.txt

wc -l counts how many lines list.txt contains. Once you have an idea of how long it is, you can decide to split it in half, into quarters, or so on, using the split -l command. For example, split it into 600 lines each:

split -l 600 list.txt

This will create a few files named xaa, xab, xac, and so on, depending on how you split it. Now to "import" each list in those files into the rm command, use this:

rm $(<xaa)
rm $(<xab)
rm $(<xac)

Sorry for my bad English.

Answered by user3405020

I was facing the same problem while copying from a source directory to a destination directory.

The source directory had ~3 lakhs (~300,000) files.

I used cp with the -r option and it worked for me:

cp -r abc/ def/

It will copy all the files from abc to def without giving the "Argument list too long" warning, because no shell glob is expanded, so the argument list stays short.

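The same expansion limit applies to mv when a glob is used; a sketch that lets find feed mv instead (GNU mv's -t option is assumed):

find abc/ -maxdepth 1 -name '*.pdf' -exec mv -t def/ {} +
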
Answered by danjperron

You could use a bash array:

files=(*.pdf)
for((I=0;I<${#files[@]};I+=1000)); do
    rm -f "${files[@]:I:1000}"
done

This way it will erase files in batches of 1000 per step.

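To sanity-check the match before deleting anything, a sketch:

files=(*.pdf)
echo "${#files[@]} files matched"    # verify the count before running the rm loop
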
Answered by Fabio Farath

There is a limit on the number of files you can pass to rm at once.

One possibility is to remove them by running the rm command multiple times, based on your file-name patterns, like:

rm -f A*.pdf
rm -f B*.pdf
rm -f C*.pdf
...
rm -f *.pdf

You can also remove them through the findcommand:

find . -name "*.pdf" -exec rm {} \;