node.js: Nginx upstream prematurely closed connection while reading response header from upstream, for large requests
Original URL: http://stackoverflow.com/questions/36488688/
Warning: this content is provided under the CC BY-SA 4.0 license. You are free to use or share it, but you must attribute it to the original authors (not me): StackOverflow
Nginx upstream prematurely closed connection while reading response header from upstream, for large requests
Asked by Divya Konda
I am using nginx and a Node server to serve update requests. I get a gateway timeout when I request an update on large data. I saw this error in the nginx error logs:
2016/04/07 00:46:04 [error] 28599#0: *1 upstream prematurely closed connection while reading response header from upstream, client: 10.0.2.77, server: gis.oneconcern.com, request: "GET /update_mbtiles/atlas19891018000415 HTTP/1.1", upstream: "http://127.0.0.1:7777/update_mbtiles/atlas19891018000415", host: "gis.oneconcern.com"
I googled for the error and tried everything I could, but I still get the error.
My nginx conf has these proxy settings:
##
# Proxy settings
##
proxy_connect_timeout 1000;
proxy_send_timeout 1000;
proxy_read_timeout 1000;
send_timeout 1000;
This is how my server is configured
server {
    listen 80;
    server_name gis.oneconcern.com;

    access_log /home/ubuntu/Tilelive-Server/logs/nginx_access.log;
    error_log /home/ubuntu/Tilelive-Server/logs/nginx_error.log;

    large_client_header_buffers 8 32k;

    location / {
        proxy_pass http://127.0.0.1:7777;
        proxy_redirect off;

        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $http_host;
        proxy_cache_bypass $http_upgrade;
    }

    location /faults {
        proxy_pass http://127.0.0.1:8888;

        proxy_http_version 1.1;
        proxy_buffers 8 64k;
        proxy_buffer_size 128k;

        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection 'upgrade';
        proxy_set_header Host $host;
        proxy_cache_bypass $http_upgrade;
    }
}
I am using a Node.js backend to serve the requests on an AWS server. The gateway error shows up only when the update takes a long time (about 3-4 minutes). I do not get any errors for smaller updates. Any help will be highly appreciated.
Node.js code:
app.get("/update_mbtiles/:earthquake", function(req, res){
var earthquake = req.params.earthquake
var command = spawn(__dirname + '/update_mbtiles.sh', [ earthquake, pg_details ]);
//var output = [];
command.stdout.on('data', function(chunk) {
// logger.info(chunk.toString());
// output.push(chunk.toString());
});
command.stderr.on('data', function(chunk) {
// logger.error(chunk.toString());
// output.push(chunk.toString());
});
command.on('close', function(code) {
if (code === 0) {
logger.info("updating mbtiles successful for " + earthquake);
tilelive_reload_and_switch_source(earthquake);
res.send("Completed updating!");
}
else {
logger.error("Error occured while updating " + earthquake);
res.status(500);
res.send("Error occured while updating " + earthquake);
}
});
});
function tilelive_reload_and_switch_source(earthquake_unique_id) {
    tilelive.load('mbtiles:///' + __dirname + '/mbtiles/tipp_out_' + earthquake_unique_id + '.mbtiles', function(err, source) {
        if (err) {
            logger.error(err.message);
            throw err;
        }
        sources.set(earthquake_unique_id, source);
        logger.info('Updated source! New tiles!');
    });
}
Thank you.
Accepted answer by SilentMiles
I think that error from Nginx indicates that the connection was closed by your Node.js server (i.e., the "upstream"). How is Node.js configured?
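If that is the case, one thing worth checking (my own suggestion, not something stated in the question) is the Node server's own socket timeout: Node's HTTP server has historically defaulted to a 2-minute socket timeout, which would fit requests that only fail when the update takes 3-4 minutes. A minimal sketch, assuming an Express app listening on port 7777 as in the question; the 10-minute value is just an example:

var express = require('express');
var app = express();

// ... routes such as /update_mbtiles/:earthquake go here ...

// app.listen() returns the underlying http.Server, whose socket timeout
// can be raised so long-running updates are not cut off by Node itself.
var server = app.listen(7777);
server.setTimeout(10 * 60 * 1000); // 10 minutes; 0 would disable the timeout entirely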
Answered by OpSocket
I solved this by setting a higher timeout value for the proxy:
location / {
    proxy_read_timeout 300s;
    proxy_connect_timeout 75s;
    proxy_pass http://localhost:3000;
}
Documentation: https://nginx.org/en/docs/http/ngx_http_proxy_module.html
Answered by millenion
I had the same error for quite a while, and here is what fixed it for me.
I simply declared the following in the systemd service file I use:
[Unit]
Description=Your node service description
After=network.target

[Service]
Type=forking
PIDFile=/tmp/node_pid_name.pid
Restart=on-failure
KillSignal=SIGQUIT
WorkingDirectory=/path/to/node/app/root/directory
ExecStart=/path/to/node /path/to/server.js

[Install]
WantedBy=multi-user.target
What should catch your attention here is "After=network.target". I spent days and days looking for fixes on the nginx side, while the problem was just that. To be sure, stop the node service you have running, launch the ExecStart command directly, and try to reproduce the bug. If the error doesn't show up, it just means that your service has a problem. At least this is how I found my answer.
For everybody else, good luck!
Answered by tanner burton
You can increase the timeout in node like so.
app.post('/slow/request', function(req, res) {
    req.connection.setTimeout(100000); //100 seconds
    ...
});
Answered by Yukshy Klein
I don't think this is your case, but I'll post it in case it helps anyone. I had the same issue, and the problem was that Node didn't respond at all (I had a condition that, when it failed, didn't do anything, so no response was sent). So if increasing all your timeouts didn't solve it, make sure all scenarios get a response.
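A hypothetical sketch of the pattern being described (the route and the doUpdate helper are made up for illustration): the point is simply that every branch, including the failure path, must send a response, otherwise nginx keeps waiting on the upstream until it gives up:

app.get('/some/update', function(req, res) {
    doUpdate(req.params, function(err, result) {
        if (err) {
            // Without this branch the request hangs and nginx eventually
            // reports the upstream connection as prematurely closed.
            return res.status(500).send('update failed');
        }
        res.send(result);
    });
});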
Answered by ArnaudN
I met the same problem and none of the solutions detailed here worked for me. First of all I had a 413 Request Entity Too Large error, so I updated my nginx.conf as follows:
http {
    # Increase request size
    client_max_body_size 10m;

    ##
    # Basic Settings
    ##

    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    # server_tokens off;

    # server_names_hash_bucket_size 64;
    # server_name_in_redirect off;

    include /etc/nginx/mime.types;
    default_type application/octet-stream;

    ##
    # SSL Settings
    ##

    ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
    ssl_prefer_server_ciphers on;

    ##
    # Logging Settings
    ##

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    ##
    # Gzip Settings
    ##

    gzip on;

    # gzip_vary on;
    # gzip_proxied any;
    # gzip_comp_level 6;
    # gzip_buffers 16 8k;
    # gzip_http_version 1.1;
    # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

    ##
    # Virtual Host Configs
    ##

    include /etc/nginx/conf.d/*.conf;
    include /etc/nginx/sites-enabled/*;

    ##
    # Proxy settings
    ##
    proxy_connect_timeout 1000;
    proxy_send_timeout 1000;
    proxy_read_timeout 1000;
    send_timeout 1000;
}
So I only updated the http part, and now I get a 502 Bad Gateway error, and when I look at /var/log/nginx/error.log I see the famous "upstream prematurely closed connection while reading response header from upstream".
What is really mysterious to me is that the request works when I run the app with virtualenv on my server and send the request directly to IP:8000/nameOfTheRequest
Thanks for reading
Answered by The Coder
I got the same error, here is how I resolved it:
- Downloaded logs from AWS.
- Reviewed Nginx logs, no additional details as above.
- Reviewed node.js logs, AccessDenied AWS SDK permissions error.
- Checked the S3 bucket that AWS was trying to read from.
- Added additional bucket with read permission to correct server role.
Even though I was processing large files there were no other errors or settings I had to change once I corrected the missing S3 access.
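For reference, a rough sketch (the bucket name, key, and surrounding handler are placeholders, and the AWS SDK v2 callback style is assumed) of logging that kind of SDK failure explicitly, so an AccessDenied error shows up in the Node logs instead of the handler silently never responding:

var AWS = require('aws-sdk');
var s3 = new AWS.S3();

s3.getObject({ Bucket: 'example-bucket', Key: 'path/to/object' }, function(err, data) {
    if (err) {
        // e.g. "AccessDenied: Access Denied" when the server role lacks read permission
        console.error(err.code + ': ' + err.message);
        return;
    }
    // ... process data.Body ...
});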
Answered by ddss12
In my case, I tried increasing the timeout in the configuration file, but it did not work. Later it turned out things worked when filtering for less data to display on one page. In views.py, I just added "& Q(year=2019)" to only display the data for the year 2019. BTW, a permanent fix would be to use pagination.
def list_offers(request, list_type):
    context = {}
    context['list_type'] = list_type
    if list_type == 'ready':
        context['menu_page'] = 'ready'
        offer_groups = OfferGroup.objects.filter(~Q(run_status=OfferGroup.DRAFT) & Q(year=2019)).order_by('-year', '-week')
        context['grouped_offers'] = offer_groups
    return render(request, 'app_offers/list_offers.html', context)

