Parallel or multiple requests through Bash cURL
Original URL: http://stackoverflow.com/questions/24811199/
Warning: this content is provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me): StackOverFlow
Asked by Mozung
I have a simple program that sends URLs to a server, asks it to process each one, and saves the returned results to a file, all in a loop, one URL at a time. It runs fine, but the only problem is that I have 5,000 URLs to process, and going through them one by one takes a long time. The input URLs are all different and follow no pattern. Is there any way I can send 10, 20 or 30 requests in parallel and still save their results in one file? Here is my code. Thanks.
USER_GUID=
API_KEY=
EXTRACTOR_GUID=
URL_FILE=
DATA_FILE=
while read -r URL
do
  echo -n "$URL"
  curl -XPOST -H 'Content-Type: application/json' -s -d "{\"input\":{\"webpage/url\":\"$URL\"}}" "https://api.io/store/connector/$EXTRACTOR_GUID/_query?_user=$USER_GUID&_apikey=$API_KEY" >> "$DATA_FILE"
  echo "" >> "$DATA_FILE"
  echo " ...done"
done < "$URL_FILE"
Answered by Ole Tange
Use GNU Parallel
cat $URL_FILE | parallel -j30 -q curl -XPOST -H 'Content-Type: application/json' -s -d '{"input":{"webpage/url":"{}"}}' "https://api.io/store/connector/$EXTRACTOR_GUID/_query?_user=$USER_GUID&_apikey=$API_KEY" >> $DATA_FILE
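
Here {} is GNU Parallel's placeholder for each input line (each URL), and -j30 runs up to 30 curl processes at a time. By default Parallel buffers each job's output and prints it only when the job finishes, so the responses appended to $DATA_FILE are not interleaved.

If GNU Parallel is not installed, a similar effect can be had with xargs -P. The following is only a minimal sketch under the same assumptions as the question (same variables and endpoint): -P 30 caps the number of concurrent curl processes, -I {} substitutes each URL into the request body, and -w '\n' makes curl append a newline after each response, mirroring the echo "" step in the original loop. Unlike GNU Parallel, xargs does not buffer each job's output, so concurrent responses can interleave in the output file.

# Sketch only: xargs -P as a substitute for GNU Parallel.
# Reads one URL per line from URL_FILE and runs up to 30 curls at once.
xargs -P 30 -I {} \
  curl -XPOST -H 'Content-Type: application/json' -s -w '\n' \
       -d '{"input":{"webpage/url":"{}"}}' \
       "https://api.io/store/connector/$EXTRACTOR_GUID/_query?_user=$USER_GUID&_apikey=$API_KEY" \
  < "$URL_FILE" >> "$DATA_FILE"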