Laravel cURL error 28: Operation timed out after 2000 milliseconds with 7276200 out of 23000995 bytes received

Note: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must keep the same license and attribute it to the original authors (not the translator). Original: http://stackoverflow.com/questions/46897627/


cURL error 28: Operation timed out after 2000 milliseconds with 7276200 out of 23000995 bytes received

php, laravel, laravel-5, guzzle

Asked by cyber8200

Description

I'm using Guzzle in my Laravel project. I had a memory crash when I made a request to an API that returns a huge payload.

I have this at the top of my CURL.php class; it has a get() method that uses Guzzle.

use GuzzleHttp\Exception\GuzzleException;
use GuzzleHttp\Client;
// Note: Guzzle request options such as 'force_ip_resolve' and 'timeout' are plain string keys;
// there are no GuzzleHttp\FORCE_IP_RESOLVE / TIMEOUT classes to import.

class CURL {

    public static function get($url) {

        $client = new Client();
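        // Note: these timeout options are in seconds, so the whole request is capped at 2 s here.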
        $options = [
            'http_errors' => true,
            'force_ip_resolve' => 'v4',
            'connect_timeout' => 2,
            'read_timeout' => 2,
            'timeout' => 2,
        ];
        $result = $client->request('GET', $url, $options);
        $result = (string) $result->getBody();
        $result = json_decode($result, true);
        return $result;

    }

    ...

}

When I call it like this in my application, it requests a large payload (30000):

$url = 'http://site/api/account/30000';
$response =  CURL::get($url)['data'];

I kept getting this error

cURL error 28: Operation timed out after 2000 milliseconds with 7276200 out of 23000995 bytes received (see http://curl.haxx.se/libcurl/c/libcurl-errors.html)

How do I avoid this?

Should I increase these settings?

'connect_timeout' => 2,
'read_timeout' => 2,
'timeout' => 2,

Answered by Alexey Shokov

Yes, you need to increase read_timeout and timeout. The error is clear: there isn't enough time to receive the full response (whether the server is slow, the network is slow, or something else, it doesn't matter).

If possible, increasing the timeouts is the easiest way.
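
For example, a minimal sketch of the same request with more generous limits (the exact numbers are placeholders to tune against how long the API really takes; all three options are in seconds):

$client = new Client();
$options = [
    'http_errors'      => true,
    'force_ip_resolve' => 'v4',
    'connect_timeout'  => 5,   // seconds allowed to establish the connection
    'read_timeout'     => 60,  // seconds allowed between reads of a streamed body
    'timeout'          => 60,  // total seconds allowed for the whole transfer
];
$result = $client->request('GET', $url, $options);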

If the server supports pagination, a better approach is to request the data page by page, as in the sketch below.
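
A hypothetical sketch, assuming the API accepted page and per_page query parameters and wrapped each page in a "data" key (none of that is shown in the question, so adjust it to the API's real contract):

$client = new Client(['timeout' => 30]);
$page = 1;
$all = [];
do {
    // Fetch one page at a time instead of the whole ~23 MB payload.
    $response = $client->request('GET', 'http://site/api/account/30000', [
        'query' => ['page' => $page, 'per_page' => 500],
    ]);
    $chunk = json_decode((string) $response->getBody(), true);
    $all = array_merge($all, $chunk['data'] ?? []);
    $page++;
} while (!empty($chunk['data']));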

You can also use asynchronous requests in Guzzle and send something to your end user while you are waiting for the API's response.
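
A minimal sketch using Guzzle's promise API (getAsync() returns a promise instead of blocking; wait() resolves it only when the data is finally needed):

$client = new Client(['timeout' => 60]);

// Queue the request and decode the body once the promise resolves.
$promise = $client->getAsync('http://site/api/account/30000')
    ->then(function ($response) {
        return json_decode((string) $response->getBody(), true);
    });

// ... do other work or send an early response to the user here ...

// Block only at the point where the decoded data is actually required.
$result = $promise->wait();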
