Python uwsgi + nginx + flask: upstream prematurely closed
Original URL: http://stackoverflow.com/questions/27396248/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share them, but you must attribute them to the original authors (not me):
StackOverFlow
uwsgi + nginx + flask: upstream prematurely closed
Asked by user299709
I created an endpoint in my Flask app which generates a spreadsheet from a database query (remote db) and then sends it as a download in the browser. Flask doesn't throw any errors, and uWSGI doesn't complain.
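For context, the endpoint presumably looks something like the following minimal sketch; the actual code is not in the question, so the route, helper, and data are assumptions:

import csv
import io

from flask import Flask, Response

app = Flask(__name__)

def fetch_report_rows():
    # Placeholder for the slow query against the remote database; the real
    # query is not shown in the question.
    return [["id", "name"], [1, "example"]]

@app.route("/download/export.csv")
def export_csv():
    # The long wait for the remote query before the first byte is sent is
    # what makes the proxy give up on the worker.
    rows = fetch_report_rows()

    # Serialize the rows to CSV in memory and return them as a file download.
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return Response(
        buf.getvalue(),
        mimetype="text/csv",
        headers={"Content-Disposition": "attachment; filename=export.csv"},
    )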
But when I check nginx's error.log I see a lot of
2014/12/10 05:06:24 [error] 14084#0: *239436 upstream prematurely closed connection while reading response header from upstream, client: 34.34.34.34, server: me.com, request: "GET /download/export.csv HTTP/1.1", upstream: "uwsgi://0.0.0.0:5002", host: "me.com", referrer: "https://me.com/download/export.csv"
I deploy uwsgi like this:
uwsgi --socket 0.0.0.0:5002 --buffer-size=32768 --module server --callable app
my nginx config:
server {
    listen 80;
    merge_slashes off;
    server_name me.com www.me.com;

    location / { try_files $uri @app; }

    location @app {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:5002;
        uwsgi_buffer_size 32k;
        uwsgi_buffers 8 32k;
        uwsgi_busy_buffers_size 32k;
    }
}
server {
    listen 443;
    merge_slashes off;
    server_name me.com www.me.com;

    location / { try_files $uri @app; }

    location @app {
        include uwsgi_params;
        uwsgi_pass 0.0.0.0:5002;
        uwsgi_buffer_size 32k;
        uwsgi_buffers 8 32k;
        uwsgi_busy_buffers_size 32k;
    }
}
Is this an nginx or uwsgi issue, or both?
Accepted answer by tourdownunder
Change nginx.conf to include
sendfile on;
client_max_body_size 20M;
keepalive_timeout 0;
See the self-answer uwsgi upstart on amazon linux for a full example.
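For placement, those directives belong in the http context of nginx.conf; a minimal sketch, where the surrounding block is boilerplate rather than part of the answer:

http {
    sendfile on;
    client_max_body_size 20M;
    keepalive_timeout 0;

    # server blocks such as the ones in the question go here
}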
Answered by jwalker
Replace uwsgi_pass 0.0.0.0:5002; with uwsgi_pass 127.0.0.1:5002; or, better, use unix sockets.
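For reference, a unix-socket variant might look like the following sketch; the socket path and permissions are illustrative, not from the answer:

uwsgi --socket /tmp/uwsgi.sock --chmod-socket=660 --buffer-size=32768 --module server --callable app

and in the nginx location block:

location @app {
    include uwsgi_params;
    uwsgi_pass unix:/tmp/uwsgi.sock;
}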
Answered by krzychu
It seems many causes can lie behind this error message. I know you are using uwsgi_pass, but for those hitting the problem on long requests when using proxy_pass, setting http-timeout on uWSGI may help (it is not the harakiri setting).
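A sketch of that kind of setup, assuming uWSGI runs its HTTP router behind proxy_pass; the port and timeout values are illustrative:

[uwsgi]
http = 0.0.0.0:5002
module = server
callable = app
http-timeout = 300

with nginx proxying plain HTTP to it:

location / {
    proxy_pass http://127.0.0.1:5002;
}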
Answered by Sathish
I fixed this issue by passing the socket-timeout = 65 option in the uwsgi.ini file, or --socket-timeout=65 on the uwsgi command line. The right value depends on the web traffic, so try different values; socket-timeout = 65 in the uwsgi.ini file worked in my case.
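For placement, a minimal uwsgi.ini carrying that option might look like this; the other entries are assumptions based on the question's deployment:

[uwsgi]
socket = 0.0.0.0:5002
module = server
callable = app
socket-timeout = 65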
Answered by saaj
I had the same sporadic errors in an Elastic Beanstalk single-container Docker WSGI app deployment. On the EC2 instance of the environment, the upstream configuration looks like:
upstream docker {
    server 172.17.0.3:8080;
    keepalive 256;
}
With this default upstream, a simple load test like:
siege -b -c 16 -t 60S -T 'application/json' 'http://host/foo POST {"foo": "bar"}'
...on the EC2 instance led to availability of ~70%. The rest were 502 errors caused by upstream prematurely closed connection while reading response header from upstream.
The solution was either to remove the keepalive setting from the upstream configuration, or, which is easier and more reasonable, to enable HTTP keep-alive on uWSGI's side as well, with --http-keepalive (available since 1.9).
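A sketch of the two variants. The first simply drops keepalive from the upstream block shown above:

upstream docker {
    server 172.17.0.3:8080;
}

The second keeps the upstream as is and starts uWSGI with keep-alive enabled; the flags other than --http-keepalive are assumptions, since the deployment command is not shown in the answer:

uwsgi --http-socket 0.0.0.0:8080 --http-keepalive --module server --callable app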
Answered by mahdix
In my case, the problem was that nginx was sending requests using the uwsgi protocol while uwsgi was listening on that port for http packets. So either I had to change the way nginx connects to uwsgi, or change uwsgi to listen using the uwsgi protocol.
Answered by Ivan Ogai
As mentioned by @mahdix, the error can be caused by Nginx sending a request with the uwsgi protocol while uwsgi is listening on that port for http packets.
When in the Nginx config you have something like:
upstream org_app {
    server 10.0.9.79:9597;
}

location / {
    include uwsgi_params;
    uwsgi_pass org_app;
}
Nginx will use the uwsgi protocol. But if in uwsgi.ini you have something like this (or its equivalent on the command line):
http-socket=:9597
uwsgi will speak http, and the error mentioned in the question appears. See native HTTP support.
A possible fix is to have instead:
socket=:9597
In which case Nginx and uwsgi will communicate with each other using the uwsgi protocol over a TCP connection.
Side note: if Nginx and uwsgi are on the same node, a Unix socket will be faster than TCP. See using Unix sockets instead of ports.
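A hedged sketch of that unix-socket variant, reusing the names from this answer; the socket path is illustrative. In uwsgi.ini:

socket = /tmp/org_app.sock
chmod-socket = 660

and in the Nginx config:

upstream org_app {
    server unix:/tmp/org_app.sock;
}

location / {
    include uwsgi_params;
    uwsgi_pass org_app;
}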