java BindException/Too many open files while using HttpClient under load

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). StackOverflow original: http://stackoverflow.com/questions/2914218/

Date: 2020-10-29 23:25:45  Source: igfitidea

BindException/Too many file open while using HttpClient under load

java, apache-commons-httpclient

Asked by Langali

I have 1000 dedicated Java threads, where each thread polls a corresponding URL every second.

public class Poller {
    public static Node poll(Node node) {
        GetMethod method = null;
        try {
            HttpClient client = new HttpClient(new SimpleHttpConnectionManager(true));
            ......
        } catch (IOException ex) {
            ex.printStackTrace();
        } finally {
            if (method != null) { // guard against an NPE when the GET was never created
                method.releaseConnection();
            }
        }
    }
}

The threads are run once every second:

for (int i = 0; i < 1000; i++) {
    MyThread thread = threads.get(i); // threads is a static field
    if (thread.isAlive()) {
        // If the previous thread is still running, let it run.
    } else {
        thread.start();
    }
}
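
As an aside, a java.lang.Thread can only be started once: when a previous run has finished, isAlive() returns false, and calling start() again throws IllegalThreadStateException. A ScheduledExecutorService avoids both that pitfall and the 1000 dedicated threads. This is only a sketch, with a placeholder print standing in for Poller.poll and made-up URLs:

```java
import java.util.List;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class PollScheduler {
    public static void main(String[] args) throws InterruptedException {
        // A small shared pool replaces the hand-managed threads; size it
        // to the number of polls that should be in flight at once.
        ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(4);

        List<String> urls = List.of("http://node1/", "http://node2/", "http://node3/");
        for (String url : urls) {
            // scheduleWithFixedDelay never overlaps runs of the same task,
            // which replaces the isAlive() check above.
            scheduler.scheduleWithFixedDelay(
                    () -> System.out.println("polling " + url), // stand-in for Poller.poll(node)
                    0, 1, TimeUnit.SECONDS);
        }

        TimeUnit.MILLISECONDS.sleep(1500);
        scheduler.shutdownNow();
    }
}
```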

The problem is that if I run the job every second, I get random exceptions like these:

java.net.BindException: Address already in use 
 INFO httpclient.HttpMethodDirector: I/O exception (java.net.BindException) caught when processing request: Address already in use 
 INFO httpclient.HttpMethodDirector: Retrying request 

But if I run the job every 2 seconds or more, everything runs fine.

I even tried shutting down the SimpleHttpConnectionManager instance with shutdown(), with no effect.

If I do netstat, I see thousands of TCP connections in the TIME_WAIT state, which means they have been closed and are waiting to be cleaned up.

So, to limit the number of connections, I tried using a single instance of HttpClient, like this:

public class MyHttpClientFactory {
    private static MyHttpClientFactory instance = new MyHttpClientFactory();
    private MultiThreadedHttpConnectionManager connectionManager;
    private HttpClient client;

    private MyHttpClientFactory() {
        init();
    }

    public static MyHttpClientFactory getInstance() {
        return instance;
    }

    public void init() {
        connectionManager = new MultiThreadedHttpConnectionManager();
        HttpConnectionManagerParams managerParams = new HttpConnectionManagerParams();
        managerParams.setMaxTotalConnections(1000);
        connectionManager.setParams(managerParams);
        client = new HttpClient(connectionManager);
    }

    public HttpClient getHttpClient() {
        if (client != null) {
            return client;
        } else {
            init();
            return client;
        }
    }
}

However, after running for exactly 2 hours, it starts throwing 'too many open files' and eventually cannot do anything at all.

ERROR java.net.SocketException: Too many open files
INFO httpclient.HttpMethodDirector: I/O exception (java.net.SocketException) caught when processing request: Too many open files
INFO httpclient.HttpMethodDirector: Retrying request

I could increase the number of connections allowed and make it work, but that would just be postponing the problem. Any idea what the best practice is for using HttpClient in a situation like the above?

By the way, I am still on HttpClient 3.1.
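
A side note for readers on a current JDK: the built-in java.net.http.HttpClient (Java 11+) maintains its own connection pool, so a single shared instance gives the effect the factory above is reaching for. A minimal sketch; the URL and timeout values are placeholders:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class SharedClientPoller {
    // One client for the whole application; it reuses pooled connections
    // across requests instead of piling per-request sockets into TIME_WAIT.
    private static final HttpClient CLIENT = HttpClient.newBuilder()
            .connectTimeout(Duration.ofSeconds(5))
            .build();

    static String poll(String url) throws Exception {
        HttpRequest request = HttpRequest.newBuilder(URI.create(url))
                .timeout(Duration.ofSeconds(5))
                .GET()
                .build();
        HttpResponse<String> response =
                CLIENT.send(request, HttpResponse.BodyHandlers.ofString());
        return response.body();
    }

    public static void main(String[] args) throws Exception {
        // Placeholder URL for the node being polled.
        System.out.println(poll("http://example.com/"));
    }
}
```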

Answered by Jim Ferrans

This happened to us a few months back. First, double check to make sure you really are calling releaseConnection() every time. But even then, the OS doesn't actually reclaim the TCP connections all at once. The solution is to use the Apache HTTP Client's MultiThreadedHttpConnectionManager. This pools and reuses the connections.

See http://hc.apache.org/httpclient-3.x/performance.html for more performance tips.

Update: Whoops, I didn't read the lower code sample. If you're doing releaseConnection() and using MultiThreadedHttpConnectionManager, consider whether your OS limit on open files per process is set high enough. We had that problem too, and needed to extend the limit a bit.

Answered by ZZ Coder

There is nothing wrong with the first error. You have simply depleted the available ephemeral ports. Each TCP connection can stay in the TIME_WAIT state for 2 minutes, and you are generating 2000 of them per second. Sooner or later, a socket can't find any unused local port and you will get that error. TIME_WAIT is designed exactly for this purpose; without it, your system might hijack a previous connection.
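
The arithmetic can be sketched. The port range below is the common Linux default (net.ipv4.ip_local_port_range = 32768 to 60999) and is an assumption; the 2-minute TIME_WAIT figure is from the answer above:

```java
public class EphemeralPortMath {
    // Sustainable rate of new outbound connections before ephemeral
    // ports run out, given how long each one sits in TIME_WAIT.
    static long maxNewConnectionsPerSecond(long ephemeralPorts, long timeWaitSeconds) {
        return ephemeralPorts / timeWaitSeconds;
    }

    public static void main(String[] args) {
        long ports = 60999 - 32768 + 1;  // assumed default Linux range: 28232 ports
        long rate = maxNewConnectionsPerSecond(ports, 120);  // 2-minute TIME_WAIT
        // Only ~235 new connections per second are sustainable, so a poller
        // opening 1000+ sockets per second exhausts the range in under half a minute.
        System.out.println(rate + " connections/second sustainable");
    }
}
```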

The second error means you have too many sockets open. On some systems, there is a limit of 1024 open files. Maybe you just hit that limit due to lingering sockets and other open files. On Linux, you can change this limit using:

  ulimit -n 2048

But that's limited by a system-wide max value.
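
To see the limit the JVM actually runs under without shelling out, HotSpot's com.sun.management.UnixOperatingSystemMXBean (a JDK-specific interface, only available on Unix-like systems) reports both the cap and current usage:

```java
import java.lang.management.ManagementFactory;
import java.lang.management.OperatingSystemMXBean;

import com.sun.management.UnixOperatingSystemMXBean;

public class FdLimit {
    public static void main(String[] args) {
        OperatingSystemMXBean os = ManagementFactory.getOperatingSystemMXBean();
        if (os instanceof UnixOperatingSystemMXBean) {
            UnixOperatingSystemMXBean unix = (UnixOperatingSystemMXBean) os;
            // Matches `ulimit -n` of the shell that launched the JVM.
            System.out.println("max file descriptors:  " + unix.getMaxFileDescriptorCount());
            System.out.println("open file descriptors: " + unix.getOpenFileDescriptorCount());
        } else {
            System.out.println("Not a Unix JVM; check ulimit -n instead.");
        }
    }
}
```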

Answered by Hymantrade

As sudo or root, edit the /etc/security/limits.conf file. At the end of the file, just above "# End of File", enter the following values:

    * soft nofile 65535
    * hard nofile 65535

This raises the per-user open-file limit to 65535.