Note: this page is a Chinese-English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must comply with the same CC BY-SA license, link to the original, and attribute it to the original authors (not me): StackOverflow

Original question: http://stackoverflow.com/questions/10557927/
Server response gets cut off half way through
Asked by samvermette
I have a REST API that returns json responses. Sometimes (and what seems to be at completely random), the json response gets cut off half-way through. So the returned json string looks like:
...route_short_name":"135","route_long_name":"Secte // end of response
I'm pretty sure it's not an encoding issue because the cut off point keeps changing position, depending on the json string that's returned. I haven't found a particular response size either for which the cut off happens (I've seen 65kb not get cut off, whereas 40kbs would).
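For anyone hitting the same symptom: a truncated body never parses, so a quick client-side sanity check is just to attempt a parse. A minimal sketch in Python (the helper name is mine, not part of the API in question):

```python
import json

def is_truncated_json(body: str) -> bool:
    """Return True if the body fails to parse as complete JSON,
    as happens when a response is cut off mid-string."""
    try:
        json.loads(body)
        return False
    except json.JSONDecodeError:
        return True

# The cut-off response from the question never closes its quotes/braces:
assert is_truncated_json('{"route_short_name":"135","route_long_name":"Secte')
assert not is_truncated_json('{"route_short_name":"135"}')
```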
Looking at the response header when the cut off does happen:
{
"Cache-Control" = "must-revalidate, private, max-age=0";
Connection = "keep-alive";
"Content-Type" = "application/json; charset=utf-8";
Date = "Fri, 11 May 2012 19:58:36 GMT";
Etag = "\"f36e55529c131f9c043b01e965e5f291\"";
Server = "nginx/1.0.14";
"Transfer-Encoding" = Identity;
"X-Rack-Cache" = miss;
"X-Runtime" = "0.739158";
"X-UA-Compatible" = "IE=Edge,chrome=1";
}
Doesn't ring a bell either. Anyone?
Accepted answer by Clement Nedelcu
I had the same problem:
Nginx cut off some responses from the FastCGI backend. For example, I couldn't generate a proper SQL backup from PhpMyAdmin. I checked the logs and found this:
2012/10/15 02:28:14 [crit] 16443#0: *14534527 open() "/usr/local/nginx/fastcgi_temp/4/81/0000004814" failed (13: Permission denied) while reading upstream, client: *, server: , request: "POST /HTTP/1.1", upstream: "fastcgi://127.0.0.1:9000", host: "", referrer: "http://*/server_export.php?token=**"
All I had to do to fix it was to give proper permissions to the /usr/local/nginx/fastcgi_temp folder, as well as client_body_temp.
Fixed!
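What nginx does when this error appears can be modelled roughly: once a response outgrows the in-memory buffers, nginx spills it to a file under fastcgi_temp, and a permission failure on that open() is what truncates the body. A toy Python illustration of that failure mode (the buffer size and directories are stand-ins, not nginx's real mechanics):

```python
import os
import tempfile

def spill_response(body: bytes, temp_dir: str, buffer_size: int = 64 * 1024) -> bytes:
    """Toy model of nginx proxy buffering: bodies larger than the
    in-memory buffer are spilled to a temp file; if the temp dir is
    unusable, the tail of the response is lost (truncated)."""
    if len(body) <= buffer_size:
        return body  # fits in memory, no temp file needed
    try:
        fd, path = tempfile.mkstemp(dir=temp_dir)
        with os.fdopen(fd, "wb") as f:
            f.write(body)
        os.unlink(path)
        return body
    except OSError:
        # analogous to: open() "..." failed (13: Permission denied)
        return body[:buffer_size]

ok_dir = tempfile.mkdtemp()
small, big = b"x" * 1024, b"y" * (128 * 1024)
assert spill_response(small, "/nonexistent") == small         # small bodies never spill
assert spill_response(big, ok_dir) == big                     # usable temp dir: intact
assert len(spill_response(big, "/nonexistent")) == 64 * 1024  # broken temp dir: truncated
```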
Thanks a lot samvermette, your Question & Answer put me on the right track.
Answered by samvermette
Looked up my nginx error.log file and found the following:
13870 open() "/var/lib/nginx/tmp/proxy/9/00/0000000009" failed (13: Permission denied) while reading upstream...
Looks like nginx's proxy was trying to save the response content (passed in by thin) to a file. It only does so when the response size exceeds proxy_buffers (64kb by default on 64-bit platforms). So in the end the bug was connected to my request's response size.
I ended up fixing my issue by setting proxy_buffering to off in my nginx config file, instead of upping proxy_buffers or fixing the file permission issue.
Still not sure about the purpose of nginx's buffer. I'd appreciate it if anyone could add to that. Is disabling the buffering completely a bad idea?
Answered by Dralac
I had similar problem with cutting response from server.
It happened only when I added a JSON header before returning the response: header('Content-type: application/json');
In my case gzip caused the issue.
I solved it by specifying gzip_types in nginx.conf and adding application/json to the list before turning gzip on:
gzip_types text/plain text/html text/css application/x-javascript text/xml application/xml application/xml+rss text/javascript application/json;
gzip on;
Answered by PJ Brunet
It's possible you ran out of inodes, which prevents NginX from using the fastcgi_temp directory properly.
Try df -i and if you have 0% inodes free, that's a problem.
Try find /tmp -mtime +10 (older than 10 days) to see what might be filling up your disk.
Or maybe it's another directory with too many files. For example, go to /home/www-data/example.com and count the files:
find . -print | wc -l
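On Unix, the 0%-inodes-free condition that df -i reports can also be checked programmatically, e.g. with a small Python sketch (statvfs is POSIX-only; the function name is mine):

```python
import os

def inode_usage(path="/"):
    """Fraction of inodes in use on the filesystem holding `path`,
    roughly what `df -i` reports. Returns None if the filesystem
    doesn't report inode counts."""
    st = os.statvfs(path)
    if st.f_files == 0:
        return None
    return 1 - st.f_ffree / st.f_files

# A value near 1.0 means nginx may fail to create its temp files.
```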
Answered by Vlad
Thanks for the question and the great answers, it saved me a lot of time. In the end, the answers of Clement and Sam helped me solve my issue, so the credits go to them.
Just wanted to point out that after reading a bit about the topic, it seems it is not recommended to disable proxy_buffering since it could make your server stall if the clients (users of your system) have a bad internet connection, for example.
I found this discussion very useful for understanding more. The example of Francis Daly made it very clear for me:
Perhaps it is easier to think of the full process as a chain of processes.
web browser talks to nginx, over a 1 MB/s link. nginx talks to upstream server, over a 100 MB/s link. upstream server returns 100 MB of content to nginx. nginx returns 100 MB of content to web browser.
With proxy_buffering on, nginx can hold the whole 100 MB, so the nginx-upstream connection can be closed after 1 s, and then nginx can spend 100 s sending the content to the web browser.
With proxy_buffering off, nginx can only take the content from upstream at the same rate that nginx can send it to the web browser.
The web browser doesn't care about the difference -- it still takes 100 s for it to get the whole content.
nginx doesn't care much about the difference -- it still takes 100 s to feed the content to the browser, but it does have to hold the connection to upstream open for an extra 99 s.
Upstream does care about the difference -- what could have taken it 1 s actually takes 100 s; and for the extra 99 s, that upstream server is not serving any other requests.
Usually: the nginx-upstream link is faster than the browser-nginx link; and upstream is more "heavyweight" than nginx; so it is prudent to let upstream finish processing as quickly as possible.
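The arithmetic in that example can be written out as a quick sanity check:

```python
# Figures from the quoted example: a 100 MB response, upstream->nginx
# at 100 MB/s, nginx->browser at 1 MB/s.
content_mb = 100
upstream_rate_mb_s = 100
client_rate_mb_s = 1

# proxy_buffering on: upstream is released as soon as nginx has it all.
upstream_busy_buffered = content_mb / upstream_rate_mb_s    # 1 s
# proxy_buffering off: upstream is throttled to the client's rate.
upstream_busy_unbuffered = content_mb / client_rate_mb_s    # 100 s

assert upstream_busy_buffered == 1
assert upstream_busy_unbuffered == 100  # upstream tied up for 99 extra seconds
```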
Answered by Chris Kannon
We had a similar problem. It was caused by our REST server (DropWizard) having SO_LINGER enabled. Under load, DropWizard was disconnecting NGINX before it had a chance to flush its buffers. The JSON was >8kb and the front end would receive it truncated.
Answered by Dr1Ku
I've also had this issue – JSON parsing client-side was faulty, the response was being cut off, or worse still, the response was stale and had been read from some random memory buffer.
I went through some guides – Serving Static Content Via POST From Nginx as well as Nginx: Fix to "405 Not Allowed" when using POST serving static – while trying to configure nginx to serve a simple JSON file.
In my case, I had to use:
max_ranges 0;
so that the browser doesn't get any funny ideas when nginx adds Accept-Ranges: bytes in the response header, as well as
sendfile off;
in my server block for the proxy which serves the static files. Adding it to the location block which would finally serve the found JSON file didn't help.
Another protip for serving static JSON is not to forget the response type:
charset_types application/json;
default_type application/json;
charset utf-8;
Other searches yielded folder permission issues – nginx is cutting the end of dynamic pages and cache it – or proxy buffering issues – Getting a chunked request through nginx – but that was not my case.

