Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same license and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/124462/
How to make asynchronous HTTP requests in PHP
Asked by Brent
Is there a way in PHP to make asynchronous HTTP calls? I don't care about the response, I just want to do something like file_get_contents(), but not wait for the request to finish before executing the rest of my code. This would be super useful for setting off "events" of a sort in my application, or triggering long processes.
Any ideas?
Accepted answer by Brent
The answer I'd previously accepted didn't work. It still waited for responses. This does work though, taken from How do I make an asynchronous GET request in PHP?
function post_without_wait($url, $params)
{
    $post_params = array();
    foreach ($params as $key => &$val) {
        if (is_array($val)) $val = implode(',', $val);
        $post_params[] = $key . '=' . urlencode($val);
    }
    $post_string = implode('&', $post_params);

    $parts = parse_url($url);
    $fp = fsockopen($parts['host'],
        isset($parts['port']) ? $parts['port'] : 80,
        $errno, $errstr, 30);
    if (!$fp) return; // connection failed; nothing to send

    $out = "POST " . (isset($parts['path']) ? $parts['path'] : '/') . " HTTP/1.1\r\n";
    $out .= "Host: " . $parts['host'] . "\r\n";
    $out .= "Content-Type: application/x-www-form-urlencoded\r\n";
    $out .= "Content-Length: " . strlen($post_string) . "\r\n";
    $out .= "Connection: Close\r\n\r\n";
    $out .= $post_string;

    fwrite($fp, $out);
    fclose($fp);
}
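A hypothetical usage sketch (the endpoint URL and parameters here are made up for illustration):

```php
// Fire-and-forget: returns as soon as the request bytes are written,
// without reading any response from the server.
post_without_wait('http://example.com/event.php', array(
    'event' => 'user_signup',
    'tags'  => array('new', 'welcome'), // arrays are joined with commas by the function
));
echo "This line runs immediately, without waiting for event.php\n";
```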
Answered by Christian Davén
If you control the target that you want to call asynchronously (e.g. your own "longtask.php"), you can close the connection from that end, and both scripts will run in parallel. It works like this:
- quick.php opens longtask.php via cURL (no magic here)
- longtask.php closes the connection and continues (magic!)
- cURL returns to quick.php when the connection is closed
- Both tasks continue in parallel
I have tried this, and it works just fine. But quick.php won't know anything about how longtask.php is doing, unless you create some means of communication between the processes.
Try this code in longtask.php, before you do anything else. It will close the connection, but still continue to run (and suppress any output):
while(ob_get_level()) ob_end_clean();
header('Connection: close');
ignore_user_abort();
ob_start();
echo('Connection Closed');
$size = ob_get_length();
header("Content-Length: $size");
ob_end_flush();
flush();
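For completeness, the quick.php side can be sketched roughly like this (the URL is a placeholder, and this assumes the snippet above sits at the top of longtask.php):

```php
<?php
// quick.php — plain cURL call; nothing special is needed on this side.
// cURL returns as soon as longtask.php sends Content-Length and closes
// the connection, even though longtask.php keeps running on the server.
$ch = curl_init('http://example.com/longtask.php');
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$result = curl_exec($ch); // "Connection Closed", almost immediately
curl_close($ch);

echo "longtask.php keeps running in the background\n";
```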
The code is copied from the PHP manual's user contributed notes and somewhat improved.
Answered by Internet Friend
You can do trickery by using exec() to invoke something that can do HTTP requests, like wget, but you must direct all output from the program to somewhere, like a file or /dev/null, otherwise the PHP process will wait for that output.
If you want to separate the process from the apache thread entirely, try something like (I'm not sure about this, but I hope you get the idea):
exec('bash -c "wget -O /dev/null (url goes here) > /dev/null 2>&1 &"');
It's not a nice business, and you'll probably want something like a cron job invoking a heartbeat script which polls an actual database event queue to do real asynchronous events.
Answered by Simon East
As of 2018, Guzzle has become the de facto standard library for HTTP requests, used in several modern frameworks. It's written in pure PHP and does not require installing any custom extensions.
It can do asynchronous HTTP calls very nicely, and even pool them, such as when you need to make 100 HTTP calls but don't want to run more than 5 at a time.
Concurrent request example
use GuzzleHttp\Client;
use GuzzleHttp\Promise;
$client = new Client(['base_uri' => 'http://httpbin.org/']);
// Initiate each request but do not block
$promises = [
'image' => $client->getAsync('/image'),
'png' => $client->getAsync('/image/png'),
'jpeg' => $client->getAsync('/image/jpeg'),
'webp' => $client->getAsync('/image/webp')
];
// Wait on all of the requests to complete. Throws a ConnectException
// if any of the requests fail
$results = Promise\unwrap($promises);
// Wait for the requests to complete, even if some of them fail
$results = Promise\settle($promises)->wait();
// You can access each result using the key provided to the unwrap
// function.
echo $results['image']['value']->getHeader('Content-Length')[0];
echo $results['png']['value']->getHeader('Content-Length')[0];
See http://docs.guzzlephp.org/en/stable/quickstart.html#concurrent-requests
Answered by philfreo
/**
* Asynchronously execute/include a PHP file. Does not record the output of the file anywhere.
*
* @param string $filename file to execute, relative to calling script
* @param string $options (optional) arguments to pass to file via the command line
*/
function asyncInclude($filename, $options = '') {
exec("/path/to/php -f {$filename} {$options} >> /dev/null &");
}
Answered by stil
You can use this library: https://github.com/stil/curl-easy
It's pretty straightforward then:
<?php
$request = new cURL\Request('http://yahoo.com/');
$request->getOptions()->set(CURLOPT_RETURNTRANSFER, true);
// Specify function to be called when your request is complete
$request->addListener('complete', function (cURL\Event $event) {
$response = $event->response;
$httpCode = $response->getInfo(CURLINFO_HTTP_CODE);
$html = $response->getContent();
echo "\nDone.\n";
});
// Loop below will run as long as request is processed
$timeStart = microtime(true);
while ($request->socketPerform()) {
printf("Running time: %dms \r", (microtime(true) - $timeStart)*1000);
// Here you can do anything else, while your request is in progress
}
The example above displays a simple live clock in the console, showing how long the request has been running.
Answered by RafaSashi
Fake a request abort by using cURL with a low CURLOPT_TIMEOUT_MS, and set ignore_user_abort(true) in the target script so it keeps processing after the connection is closed.
With this method there is no need to implement connection handling via headers and output buffering, which is too dependent on the OS, browser and PHP version.
Master process
function async_curl($background_process = '')
{
    //-------------get curl contents----------------
    $ch = curl_init($background_process);
    curl_setopt_array($ch, array(
        CURLOPT_HEADER => 1,
        CURLOPT_RETURNTRANSFER => true,
        CURLOPT_NOSIGNAL => 1,    // required to time out immediately if the value is < 1000 ms
        CURLOPT_TIMEOUT_MS => 50, // the maximum number of milliseconds to allow cURL functions to execute
        CURLOPT_VERBOSE => 1
    ));
    $out = curl_exec($ch);

    //-------------parse curl contents----------------
    //$header_size = curl_getinfo($ch, CURLINFO_HEADER_SIZE);
    //$header = substr($out, 0, $header_size);
    //$body = substr($out, $header_size);

    curl_close($ch);
    return true;
}
async_curl('http://example.com/background_process_1.php');
Background process
ignore_user_abort(true);
//do something...
NB
If you want cURL to timeout in less than one second, you can use CURLOPT_TIMEOUT_MS, although there is a bug/"feature" on "Unix-like systems" that causes libcurl to timeout immediately if the value is < 1000 ms with the error "cURL Error (28): Timeout was reached". The explanation for this behavior is:
[...]
The solution is to disable signals using CURLOPT_NOSIGNAL
Resources
Answered by user1031143
Let me show you my way :)
Needs Node.js installed on the server.
(My server sends 1000 HTTPS GET requests in only 2 seconds.)
url.php:
<?php
$urls = array_fill(0, 100, 'http://google.com/blank.html');
function execinbackground($cmd) {
if (substr(php_uname(), 0, 7) == "Windows"){
pclose(popen("start /B ". $cmd, "r"));
}
else {
exec($cmd . " > /dev/null &");
}
}
fwrite(fopen("urls.txt", "w"), implode("\n", $urls));
execinbackground("nodejs urlscript.js urls.txt");
// { do your work while get requests being executed.. }
?>
urlscript.js:
var https = require('https');
var url = require('url');
var http = require('http');
var fs = require('fs');
var dosya = process.argv[2];
var logdosya = 'log.txt';
var count=0;
http.globalAgent.maxSockets = 300;
https.globalAgent.maxSockets = 300;
setTimeout(timeout,100000); // maximum execution time (in ms)
function trim(string) {
return string.replace(/^\s*|\s*$/g, '')
}
fs.readFile(process.argv[2], 'utf8', function (err, data) {
if (err) {
throw err;
}
parcala(data);
});
function parcala(data) {
var data = data.split("\n");
count=''+data.length+'-'+data[1];
data.forEach(function (d) {
req(trim(d));
});
/*
fs.unlink(dosya, function d() {
console.log('<%s> file deleted', dosya);
});
*/
}
function req(link) {
var linkinfo = url.parse(link);
if (linkinfo.protocol == 'https:') {
var options = {
host: linkinfo.host,
port: 443,
path: linkinfo.path,
method: 'GET'
};
https.get(options, function(res) {res.on('data', function(d) {});}).on('error', function(e) {console.error(e);});
} else {
var options = {
host: linkinfo.host,
port: 80,
path: linkinfo.path,
method: 'GET'
};
http.get(options, function(res) {res.on('data', function(d) {});}).on('error', function(e) {console.error(e);});
}
}
process.on('exit', onExit);
function onExit() {
log();
}
function timeout()
{
console.log("i am too far gone");process.exit();
}
function log()
{
var fd = fs.openSync(logdosya, 'a+');
fs.writeSync(fd, dosya + '-'+count+'\n');
fs.closeSync(fd);
}
Answered by Tony
The swoole extension (https://github.com/matyhtf/swoole): an asynchronous & concurrent networking framework for PHP.
$client = new swoole_client(SWOOLE_SOCK_TCP, SWOOLE_SOCK_ASYNC);
$client->on("connect", function($cli) {
$cli->send("hello world\n");
});
$client->on("receive", function($cli, $data){
echo "Receive: $data\n";
});
$client->on("error", function($cli){
echo "connect fail\n";
});
$client->on("close", function($cli){
echo "close\n";
});
$client->connect('127.0.0.1', 9501, 0.5);
Answered by Roman Shamritskiy
You can use non-blocking sockets together with one of the PECL event extensions for PHP.
You can use a library which gives you an abstraction layer between your code and a PECL extension: https://github.com/reactphp/event-loop
You can also use an async HTTP client based on the previous library: https://github.com/reactphp/http-client
See the other ReactPHP libraries: http://reactphp.org
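A minimal sketch of what this looks like with the ReactPHP HTTP client (this assumes react/http v1 and react/event-loop are installed via Composer; the URL is a placeholder):

```php
<?php
// Sketch only: requires `composer require react/http react/event-loop`.
require __DIR__ . '/vendor/autoload.php';

$browser = new React\Http\Browser();

// get() returns a promise immediately; the request runs on the event loop.
$browser->get('http://example.com/')->then(
    function (Psr\Http\Message\ResponseInterface $response) {
        echo 'Status: ' . $response->getStatusCode() . "\n";
    },
    function (Exception $e) {
        echo 'Error: ' . $e->getMessage() . "\n";
    }
);

echo "Request dispatched; other work can happen here\n";
// With react/event-loop v1.2+, the loop runs automatically at script end.
```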
Be careful with the asynchronous model. I recommend watching this video on YouTube: http://www.youtube.com/watch?v=MWNcItWuKpI