Note: this page is an English rendering of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/5453192/

Is making asynchronous HTTP requests possible with PHP?

Tags: php, curl

Asked by MarathonStudios

I have a PHP script that needs to download several files from a remote server. At the moment I just have a loop downloading and processing the files with cURL, which means that it doesn't start downloading one file until the previous one is finished - this increases the script run time significantly.

Would it be possible to start several instances of cURL, for example, to asynchronously download these files at the same time without waiting for the previous one to finish? If so, how would this be accomplished?

Accepted answer by bl00dshooter

Yes.

There is the multirequest PHP library (or see: the archived Google Code project). It's a multi-threaded cURL library.

As another solution, you could write a script that does that in a language that supports threading, like Ruby or Python. Then, just call the script with PHP. Seems rather simple.

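For reference, PHP's bundled curl_multi_* functions can do the same kind of parallel transfers without any external library; here is a minimal sketch with placeholder URLs (error handling omitted for brevity):

<?php
// Minimal parallel-download sketch using PHP's built-in curl_multi API.
// The URLs below are placeholders, not from the original question.
$urls = array(
    'http://example.com/file1',
    'http://example.com/file2',
    'http://example.com/file3',
);

$mh = curl_multi_init();
$handles = array();

foreach ($urls as $url) {
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_TIMEOUT, 30);
    curl_multi_add_handle($mh, $ch);
    $handles[$url] = $ch;
}

// Drive all transfers at once and wait until every one has finished
do {
    $status = curl_multi_exec($mh, $active);
    if ($active) {
        curl_multi_select($mh); // sleep until there is activity, avoids busy-waiting
    }
} while ($active && $status === CURLM_OK);

// Collect the downloaded bodies, keyed by URL
$results = array();
foreach ($handles as $url => $ch) {
    $results[$url] = curl_multi_getcontent($ch);
    curl_multi_remove_handle($mh, $ch);
    curl_close($ch);
}
curl_multi_close($mh);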

Answer by stil

Check out curl-easy. It supports both blocking and non-blocking requests, either in parallel or one at a time. It is also unit-tested, unlike many simple or buggy libraries.

Disclosure: I am the author of this library. The library has its own test suite, so I'm pretty confident it is robust.

Also, check out the example of use below:

<?php
// We will download info about 2 YouTube videos:
// http://youtu.be/XmSdTa9kaiQ and
// http://youtu.be/6dC-sm5SWiU

// Init queue of requests
$queue = new cURL\RequestsQueue;
// Set default options for all requests in queue
$queue->getDefaultOptions()
    ->set(CURLOPT_TIMEOUT, 5)
    ->set(CURLOPT_RETURNTRANSFER, true);
// Set callback function to be executed when a request is completed
$queue->addListener('complete', function (cURL\Event $event) {
    $response = $event->response;
    $json = $response->getContent(); // Returns content of response
    $feed = json_decode($json, true);
    echo $feed['entry']['title']['$t'] . "\n";
});

$request = new cURL\Request('http://gdata.youtube.com/feeds/api/videos/XmSdTa9kaiQ?v=2&alt=json');
$queue->attach($request);

$request = new cURL\Request('http://gdata.youtube.com/feeds/api/videos/6dC-sm5SWiU?v=2&alt=json');
$queue->attach($request);

// Execute queue
$queue->send();

Answer by Rav

The library of @stil is so, so cool. Many thanks to him!

Still, I have written a nice utility function that makes it very easy to asynchronously get content from multiple URLs (APIs in my case) and to return the responses without losing track of which is which.

You simply run it by passing a key => value array as input, and it returns a key => response array as the result :-)

/** 
     * This function runs multiple GET requests in parallel.<br />
     * @param array $urlsArray needs to be in format:<br />
     * <i>array(<br />
     * [url_unique_id_1] => [url_for_request_1],<br />
     * [url_unique_id_2] => [url_for_request_2],<br />
     * [url_unique_id_3] => [url_for_request_3]<br />
     * )</i><br />
     * e.g. input like:<br />
     *  <i>array(<br />
     * &nbsp; "[email protected]" =>
     * &nbsp; "http://someapi.com/results?search=easylife",<br />
     * &nbsp; "[email protected]" =>
     * &nbsp; "http://someapi.com/results?search=safelife"<br />
     * )</i>
     * @return array An array where for every <i>url_unique_id</i> response to this request 
     * is returned e.g.<br />
     * <i>array(<br />
     * &nbsp; "[email protected]" => <br />
     * &nbsp; "Work less, enjoy more",<br />
     * &nbsp; "[email protected]" => <br />
     * &nbsp; "Study, work, pay taxes"<br />
     * )</i>
     *  */
    public function getResponsesFromUrlsAsynchronously(array $urlsArray, $timeout = 8) {
        $queue = new \cURL\RequestsQueue;

        // Set default options for all requests in queue
        $queue->getDefaultOptions()
                ->set(CURLOPT_TIMEOUT, $timeout)
                ->set(CURLOPT_RETURNTRANSFER, true);

        // =========================================================================
        // Define some extra variables to be used in callback

        global $requestUidToUserUrlIdentifiers;
        $requestUidToUserUrlIdentifiers = array();

        global $userIdentifiersToResponses;
        $userIdentifiersToResponses = array();

        // =========================================================================

        // Set function to be executed when a request is completed
        $queue->addListener('complete', function (\cURL\Event $event) {

            // Define user identifier for this url
            global $requestUidToUserUrlIdentifiers;
            $requestId = $event->request->getUID();
            $userIdentifier = $requestUidToUserUrlIdentifiers[$requestId];

            // =========================================================================

            $response = $event->response;
            $json = $response->getContent(); // Returns content of response

            $apiResponseAsArray = json_decode($json, true);
            $apiResponseAsArray = $apiResponseAsArray['jobs'];

            // =========================================================================
            // Store this response in proper structure
            global $userIdentifiersToResponses;
            $userIdentifiersToResponses[$userIdentifier] = $apiResponseAsArray;
        });

        // =========================================================================

        // Add all request to queue
        foreach ($urlsArray as $userUrlIdentifier => $url) {
            $request = new \cURL\Request($url);
            $requestUidToUserUrlIdentifiers[$request->getUID()] = $userUrlIdentifier;
            $queue->attach($request);
        }

        // =========================================================================

        // Execute queue
        $queue->send();

        // =========================================================================

        return $userIdentifiersToResponses;
    }
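
For illustration, a hypothetical call to the helper above might look like this ($apiClient stands for whatever object defines the method; the keys and URLs are placeholders based on the docblock example):

// Hypothetical usage; $apiClient, the keys and the URLs are illustrative only.
$responses = $apiClient->getResponsesFromUrlsAsynchronously(array(
    'easylife' => 'http://someapi.com/results?search=easylife',
    'safelife' => 'http://someapi.com/results?search=safelife',
), 8);

// Each key now maps to the decoded 'jobs' portion of that URL's JSON response
var_dump($responses['easylife']);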

Answer by mpyw

For PHP 5.5+, mpyw/co is the ultimate solution. It works as if it were tj/co in JavaScript.

Example

Assume that you want to download the avatars of several specified GitHub users. The following steps are required for each user.

  1. Get the content of http://github.com/mpyw (GET HTML)
  2. Find <img class="avatar" src="..."> and request it (GET IMAGE)

---: Waiting for my response
...: Waiting for other responses in parallel flows

Many famous curl_multi-based scripts already provide us with the following flows.

        /-----------GET HTML\  /--GET IMAGE.........\
       /                     \/                      \ 
[Start] GET HTML..............----------------GET IMAGE [Finish]
       \                     /\                      /
        \-----GET HTML....../  \-----GET IMAGE....../

However, this is not efficient enough. Do you want to reduce the needless waiting time?

        /-----------GET HTML--GET IMAGE\
       /                                \            
[Start] GET HTML----------------GET IMAGE [Finish]
       \                                /
        \-----GET HTML-----GET IMAGE.../

Yes, it's very easy with mpyw/co. For more details, visit the repository page.
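
As a rough illustration of that chained flow, each user's HTML-then-avatar sequence can be written as a generator and run in parallel with the others. This is a sketch only: it assumes mpyw/co's Co::wait() accepts generators that yield cURL handles and resume with the response body; verify the exact, current API against the repository before relying on it.

<?php
// Rough sketch under the assumptions stated above; not taken from the answer.
require 'vendor/autoload.php';

use mpyw\Co\Co;

function curl_init_with($url)
{
    $ch = curl_init($url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    return $ch;
}

$avatars = array();

// One generator per user: GET HTML, then immediately GET IMAGE for that user,
// while the other users' chains keep running in parallel.
$fetchAvatar = function ($user) use (&$avatars) {
    $html = (yield curl_init_with("https://github.com/$user"));
    if (preg_match('/<img class="avatar"[^>]+src="([^"]+)"/', $html, $m)) {
        $avatars[$user] = (yield curl_init_with($m[1]));
    }
};

Co::wait(array(
    $fetchAvatar('mpyw'),
    $fetchAvatar('example-user'), // placeholder user name
));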

Answer by user5175717

In PHP 7.0 with Apache 2.0, as noted in the PHP exec documentation, redirecting the output by adding " &> /dev/null &" at the end of the command lets it run in the background; just remember to wrap the command correctly.

// $curlPost and $wholeUrl are assumed to be defined earlier in the script.
// Note: the redirection and the trailing '&' are built into $command itself,
// and $wholeUrl is quoted ("wrapped").
$time = microtime(true);
$command = '/usr/bin/curl -H \'Content-Type: application/json\' -d \'' . $curlPost . '\' --url \'' . $wholeUrl . '\' >> /dev/shm/request.log 2> /dev/null &';
exec($command);
echo (microtime(true) - $time) * 1000 . ' ms';

The above works well for me and takes only 3 ms, but the following won't work and takes 1500 ms.

// Same idea, but here $wholeUrl is left unquoted and the redirection is
// appended inside the exec() call rather than built into $command; for the
// author this variant blocked for ~1500 ms.
$time = microtime(true);
$command = '/usr/bin/curl -H \'Content-Type: application/json\' -d \'' . $curlPost . '\' --url ' . $wholeUrl;
exec($command . ' >> /dev/shm/request.log 2> /dev/null &');
echo (microtime(true) - $time) * 1000 . ' ms';

In summary, adding " &> /dev/null &" at the end of your command may help; just remember to WRAP your command properly.
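
As a small sketch of the same fire-and-forget idea (not from the original answer), the command can be wrapped in a helper that uses escapeshellarg() to do the "wrapping"; the function name and log path below are made up for illustration:

// Hypothetical helper built on the technique above; name and log path are
// illustrative only. escapeshellarg() quotes ("wraps") each argument safely.
function fireAndForgetCurlPost($url, $jsonPayload, $log = '/dev/null')
{
    $command = '/usr/bin/curl -H ' . escapeshellarg('Content-Type: application/json')
        . ' -d ' . escapeshellarg($jsonPayload)
        . ' --url ' . escapeshellarg($url)
        . ' >> ' . escapeshellarg($log) . ' 2> /dev/null &';
    exec($command); // returns immediately; curl keeps running in the background
}

// Example call
fireAndForgetCurlPost('http://example.com/api', '{"hello":"world"}', '/dev/shm/request.log');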