Note: this page is an English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must follow the same license and attribute it to the original authors (not me): StackOverflow
Original: http://stackoverflow.com/questions/6322823/
Jetty IOException: Too many open files
Asked by John Smith
I'm running Jetty on a website doing around 100 requests/sec, with nginx in front. I just noticed in the logs, only a few minutes after doing a deploy and starting Jetty, that for a little while it was spamming:
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method)
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:163)
at org.mortbay.jetty.nio.SelectChannelConnector.acceptChannel(SelectChannelConnector.java:75)
at org.mortbay.io.nio.SelectorManager$SelectSet.doSelect(SelectorManager.java:673)
at org.mortbay.io.nio.SelectorManager.doSelect(SelectorManager.java:192)
at org.mortbay.jetty.nio.SelectChannelConnector.accept(SelectChannelConnector.java:124)
at org.mortbay.jetty.AbstractConnector$Acceptor.run(AbstractConnector.java:708)
at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)
For a minute or two. I did an "lsof -u jetty" and saw hundreds of lines of:
java 15892 jetty 1020u IPv6 298105434 0t0 TCP 192.168.1.100:http-alt->192.168.1.100:60839 (ESTABLISHED)
java 15892 jetty 1021u IPv6 298105438 0t0 TCP 192.168.1.100:http-alt->192.168.1.100:60841 (ESTABLISHED)
java 15892 jetty 1022u IPv6 298105441 0t0 TCP 192.168.1.100:http-alt->192.168.1.100:60842 (ESTABLISHED)
java 15892 jetty 1023u IPv6 298105443 0t0 TCP 192.168.1.100:http-alt->192.168.1.100:60843 (ESTABLISHED)
Where 192.168.1.100 is the server's internal IP.
As you can see, this brought the number of open files to the default max of 1024. I could just increase this, but I'm wondering why this happens in the first place? It's in Jetty's nio socket acceptor, so is this caused by a storm of connection requests?
Answered by jpredham
While there may be a bug in Jetty, I think a far more likely explanation is that your open file ulimits are too low. Typically the 1024 default is simply not enough for web servers with moderate use.
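On most Linux systems the per-user limit can be raised persistently via /etc/security/limits.conf (the values below are illustrative, not a recommendation; size them to your expected concurrent connections):

```
# /etc/security/limits.conf — raise the open-file (nofile) limit for the jetty user
# soft = value applied at login; hard = ceiling the user may raise the soft limit to
jetty  soft  nofile  8192
jetty  hard  nofile  16384
```

After editing, log the user out and back in (or restart the Jetty service) and confirm with `ulimit -n`.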
A good way to test this is to use apache bench to simulate the inbound traffic you're seeing. Running this on a remote host will generate 1000 requests spread over 10 concurrent connections.
ab -c 10 -n 1000 [http://]hostname[:port]/path
Now count the sockets on your web server using netstat...
netstat -a | grep -c 192.168.1.100
Hopefully what you'll find is that your sockets will plateau at some value not dramatically larger than 1024 (mine is at 16384).
Another thing worth verifying is that connections are being closed properly in your business logic.
netstat -a | grep -c CLOSE_WAIT
If you see this number continue to grow over the lifecycle of your application, you may be missing a few calls to Connection.close().
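Since Java 7, the idiomatic way to guarantee close() is try-with-resources, which closes the resource even when the body throws. A minimal sketch (the class and helper names here are illustrative, not from the question; the same pattern applies to JDBC connections, sockets, and streams):

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.io.StringReader;

// Every leaked stream, socket, or connection holds a file descriptor
// until the GC happens to finalize it, so descriptors pile up under load.
public class CloseDemo {
    static String readFirstLine(String text) throws IOException {
        // reader.close() runs automatically when the block exits,
        // whether normally or via an exception
        try (BufferedReader reader = new BufferedReader(new StringReader(text))) {
            return reader.readLine();
        }
    }

    public static void main(String[] args) throws IOException {
        System.out.println(readFirstLine("hello\nworld")); // prints "hello"
    }
}
```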