Parallel wget in Bash

Disclaimer: this page reproduces a popular StackOverflow question and its answers under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/7577615/


Parallel wget in Bash

bash, parallel-processing, wget

Asked by Jonathon Vandezande

I am getting a bunch of relatively small pages from a website and was wondering if I could somehow do it in parallel in Bash. Currently my code looks like this, but it takes a while to execute (I think what is slowing me down is the latency in the connection).


for i in {1..42}
do
    wget "https://www.example.com/page$i.html"
done

I have heard of using xargs, but I don't know anything about that and the man page is very confusing. Any ideas? Is it even possible to do this in parallel? Is there another way I could go about attacking this?


Answer by Damon

Much preferable to pushing wget into the background using & or -b, you can use xargs to the same effect, and better.


The advantage is that xargs will synchronize properly with no extra work, which means that you are safe to access the downloaded files (assuming no error occurs). All downloads will have completed (or failed) once xargs exits, and you know by the exit code whether all went well. This is much preferable to busy waiting with sleep and testing for completion manually.


Assuming that URL_LIST is a variable containing all the URLs (it can be constructed with a loop as in the OP's example, but could also be a manually generated list), running this:


echo $URL_LIST | xargs -n 1 -P 8 wget -q

will pass one argument at a time (-n 1) to wget, and execute at most 8 parallel wget processes at a time (-P 8). xargs returns after the last spawned process has finished, which is just what we wanted. No extra trickery needed.

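As a concrete sketch of the whole approach (using the URL pattern from the question), URL_LIST can be built with the OP's loop and the xargs exit code checked afterwards:

URL_LIST=""
for i in {1..42}
do
    URL_LIST="$URL_LIST https://www.example.com/page$i.html"   # collect URLs into one string
done

echo $URL_LIST | xargs -n 1 -P 8 wget -q     # at most 8 downloads in parallel

if [ $? -ne 0 ]; then                        # xargs exits non-zero if any wget failed
    echo "at least one download failed" >&2
fi

The if test relies on exactly the synchronization property described above: xargs only returns once every spawned wget has finished, and its exit code tells you whether they all succeeded.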

The "magic number" of 8 parallel downloads that I've chosen is not set in stone, but it is probably a good compromise. There are two factors in "maximising" a series of downloads:


One is filling "the cable", i.e. utilizing the available bandwidth. Assuming "normal" conditions (server has more bandwidth than client), this is already the case with one or at most two downloads. Throwing more connections at the problem will only result in packets being dropped and TCP congestion control kicking in, and N downloads with asymptotically 1/N bandwidth each, to the same net effect (minus the dropped packets, minus window size recovery). Packets being dropped is a normal thing to happen in an IP network; this is how congestion control is supposed to work (even with a single connection), and normally the impact is practically zero. However, having an unreasonably large number of connections amplifies this effect, so it can become noticeable. In any case, it doesn't make anything faster.


The second factor is connection establishment and request processing. Here, having a few extra connections in flight really helps. The problem one faces is the latency of two round-trips (typically 20-40ms within the same geographic area, 200-300ms inter-continental) plus the odd 1-2 milliseconds that the server actually needs to process the request and push a reply to the socket. This is not a lot of time per se, but multiplied by a few hundred/thousand requests, it quickly adds up.
Having anything from half a dozen to a dozen requests in-flight hides most or all of this latency (it is still there, but since it overlaps, it does not sum up!). At the same time, having only a few concurrent connections does not have adverse effects, such as causing excessive congestion, or forcing a server into forking new processes.

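As a rough illustration of how that latency accumulates: with two round-trips of about 30 ms each, a single request spends roughly 60 ms waiting on the network. A thousand sequential requests would therefore spend on the order of 1000 × 60 ms = 60 s in latency alone, while with eight requests overlapping in flight that figure drops to roughly 60 s / 8 ≈ 7.5 s.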

Answer by Ole Tange

Just running the jobs in the background is not a scalable solution: if you are fetching 10000 URLs you probably only want to fetch a few (say 100) in parallel. GNU Parallel is made for that:


seq 10000 | parallel -j100 wget https://www.example.com/page{}.html

See the man page for more examples: http://www.gnu.org/software/parallel/man.html#example__download_10_images_for_each_of_the_past_30_days

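If the URLs do not follow a numeric pattern, GNU Parallel can also take them from standard input; a sketch assuming a file named urls.txt with one URL per line:

cat urls.txt | parallel -j100 wget -q

Since the command contains no {} placeholder, parallel appends each input line as the final argument to wget, again running at most 100 jobs at a time.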

Answer by uzsolt

You can use the -b option:


wget -b "https://www.example.com/page$i.html"

If you don't want log files, add the option -o /dev/null.


-o FILE  log messages to FILE.
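Applied to the OP's loop, a sketch combining the two options (each wget detaches into the background and its log is discarded):

for i in {1..42}
do
    wget -b -o /dev/null "https://www.example.com/page$i.html"
done

Note that this starts all 42 downloads at once; unlike the xargs or GNU Parallel approaches, there is no limit on how many run in parallel.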

Answer by Hyman Edmonds

Adding an ampersand to a command makes it run in the background:


for i in {1..42}
do
    wget "https://www.example.com/page$i.html" &
done
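If the script needs the downloaded files before it continues, a wait after the loop blocks until every background job has exited; a minimal sketch:

for i in {1..42}
do
    wget "https://www.example.com/page$i.html" &
done
wait    # resumes only after all 42 background wget processes have finished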

Answer by user9869932

Version 2 of wget seems to implement multiple connections. The project lives on GitHub: https://github.com/rockdaboot/wget2

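A minimal sketch, assuming the shell's brace expansion produces the 42 URLs and relying on wget2 downloading multiple URLs over parallel connections by default:

wget2 https://www.example.com/page{1..42}.html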