bash - Download all .tar.gz files from website/directory using WGET

Disclaimer: this page is a Chinese-English parallel translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same CC BY-SA terms, cite the original URL, and attribute it to the original authors (not me): StackOverflow. Original question: http://stackoverflow.com/questions/14489889/


Download all .tar.gz files from website/directory using WGET

Tags: linux, bash, download, wget

Asked by sMyles

So I'm attempting to create an alias/script to download all files with a specific extension from a website/directory using wget, but I feel like there must be an easier way than what I've come up with.

Right now, the code I've come up with from searching Google and the man pages is:

wget -r -l1 -nH --cut-dirs=2 --no-parent -A.tar.gz --no-directories http://download.openvz.org/template/precreated/
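
Flag by flag, that command breaks down as follows (descriptions paraphrased from the wget man page):

# -r                recurse into links found on the index page
# -l1               ...but only one level deep
# -nH               don't create a download.openvz.org/ host directory locally
# --cut-dirs=2      drop the template/precreated/ components from the local paths
# --no-parent       never ascend above the starting directory
# -A.tar.gz         accept only files whose names end in .tar.gz
# --no-directories  save every file flat into the current directory
wget -r -l1 -nH --cut-dirs=2 --no-parent -A.tar.gz --no-directories http://download.openvz.org/template/precreated/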

So in the example above, I'm trying to download all the .tar.gz files from the OpenVZ precreated templates directory.

The above code works correctly, but I have to manually specify --cut-dirs=2, which cuts out the /template/precreated/ directory structure that would normally be created, and it also downloads the robots.txt file.

Now this isn't necessarily a problem, and it's easy to just remove the robots.txt file, but I was hoping I had just missed something in the man pages that would let me do the same thing without specifying the directory structure to cut out...

Thanks for any help ahead of time, it's greatly appreciated!

Answered by Anew

Use the -R option

-R robots.txt,unwanted-file.txt

as a reject list of files you don't want (comma-separated).

As for scripting this:

URL=http://download.openvz.org/template/precreated/
# Strip the leading http:// and count the remaining path components to get the --cut-dirs depth.
CUTS=`echo ${URL#http://} | awk -F '/' '{print NF - 2}'`
wget -r -l1 -nH --cut-dirs=${CUTS} --no-parent -A.tar.gz --no-directories -R robots.txt ${URL}

That should work based on the subdirectories in your URL.
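
If the goal is a reusable alias/script, the same idea can be wrapped in a small shell function. This is only a sketch: the name fetchext is made up, and it assumes the URL starts with a scheme and ends with a trailing slash, like the one above.

# Usage: fetchext http://download.openvz.org/template/precreated/ .tar.gz
fetchext() {
    local url=$1
    local ext=${2:-.tar.gz}
    local cuts
    # Count the path components after the host so --cut-dirs matches the URL's depth.
    cuts=$(echo "${url#*://}" | awk -F '/' '{print NF - 2}')
    wget -r -l1 -nH --cut-dirs="${cuts}" --no-parent \
         -A"${ext}" --no-directories -R robots.txt "${url}"
}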

Answered by Roguebantha

I would suggest, if this is really annoying and you're having to do it a lot, to just write a really short two-line script to delete it for you:

wget -r -l1 -nH --cut-dirs=2 --no-parent -A.tar.gz --no-directories http://download.openvz.org/template/precreated/
rm robots.txt
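
Saved as an executable file, that two-liner might look something like the sketch below (the filename is arbitrary, and rm -f is used so the script doesn't complain if robots.txt wasn't downloaded):

#!/bin/sh
# Grab the .tar.gz templates, then discard the robots.txt that wget leaves behind.
wget -r -l1 -nH --cut-dirs=2 --no-parent -A.tar.gz --no-directories \
     http://download.openvz.org/template/precreated/
rm -f robots.txt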