bash: GZip every file separately

Note: this content is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same CC BY-SA terms and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/1792078/

GZip every file separately

Tags: linux, bash, gzip

Asked by Tonio

How can we GZip every file separately?

I don't want to have all of the files in a big tar.

Answered by Courtney Faulkner

You can use:

gzip *

Note:

  • This will zip each file individually and DELETE the original.
  • Use the -k (--keep) option to keep the original files (see the sketch after this list).
  • This may not work if you have a huge number of files, due to limits of the shell.
  • To run gzip in parallel, see @MarkSetchell's answer below.
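
For example, a minimal sketch assuming gzip 1.6 or newer (where -k is available); report.txt is a placeholder filename, not from the original answer:

gzip -k report.txt                                             # writes report.txt.gz and keeps report.txt
find . -maxdepth 1 -type f ! -name '*.gz' -exec gzip -k {} +   # batches arguments, sidestepping the shell limit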

Answered by Mark Setchell

Easy and very fast answer that will use all your CPU cores in parallel:

parallel gzip ::: *

GNU Parallel is a fantastic tool that should be used far more in this world where CPUs are only getting more cores rather than more speed. There are loads of examples in its documentation that we would all do well to take 10 minutes to read.

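As a hedged usage sketch (the -j 4 job limit and the *.csv pattern are example values, not part of the original answer): GNU Parallel defaults to one job per CPU core, and -j caps the number of concurrent jobs:

parallel -j 4 gzip ::: *.csv    # compress each .csv file, at most 4 at a time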

Answered by leekaiinthesky

After seven years, this highly upvoted comment still doesn't have its own full-fledged answer, so I'm promoting it now:

gzip -r .

This has two advantages over the currently accepted answer: it works recursively if there are any subdirectories, and it won't fail with "Argument list too long" if the number of files is very large.

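If you also want to keep the originals while recursing, the flags combine (a sketch, assuming gzip 1.6 or newer for -k):

gzip -rk .    # recurse into subdirectories, keeping every original file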

Answered by Buddy

If you want to gzip every file recursively, you could use find piped to xargs:

$ find . -type f -print0 | xargs -0r gzip
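
As a variation (my own addition, not in the original answer): GNU xargs can also parallelize with -P, a rough stand-in for GNU Parallel:

$ find . -type f ! -name '*.gz' -print0 | xargs -0r -P 4 gzip   # run up to 4 gzip processes at once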

Answered by Federico Giorgi

Try a loop:

$ for file in *; do gzip "$file"; done
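
A slightly more defensive version of the same loop (the directory check and the .gz check are my additions):

$ for file in *; do [ -f "$file" ] || continue; case "$file" in *.gz) continue;; esac; gzip "$file"; done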

Answered by itsmisterbrown

Or, if you have pigz (a gzip utility that parallelizes compression over multiple processors and cores):

pigz *
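
pigz also accepts a thread count and a keep flag; a small sketch (-p 4 is an example value; by default pigz uses one thread per core):

pigz -k -p 4 *    # keep the originals and use up to 4 compression threads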