Apache with JBOSS using AJP (mod_jk) giving spikes in thread count
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/1846034/
Asked by Ashish Jain
We used Apache with JBOSS to host our application, but we found some issues related to mod_jk's thread handling.
Our website is a low-traffic site, with at most 200-300 concurrent users during peak activity. As the traffic grows (not in terms of concurrent users, but in terms of the cumulative requests reaching our server), the server stops serving requests for a long time; it doesn't crash, but it cannot serve requests for up to 20 minutes. The JBOSS server console showed that 350 threads were busy on both servers, even though there was plenty of free memory, say more than 1-1.5 GB (2 JBOSS servers were used, 64-bit, with 4 GB RAM allocated to JBOSS).
To investigate the problem we used the JBOSS and Apache web consoles, and we saw threads sitting in the S state for minutes at a time, even though our pages take only around 4-5 seconds to serve.
We took a thread dump and found that the threads were mostly in the WAITING state, which means they were waiting indefinitely. These threads did not belong to our application classes but to the AJP 8009 port.
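For reference, a thread dump like the one described can be captured with the stock JDK tools (assuming JDK 6 on Linux and a known JBoss PID; the thread-name pattern below is only an assumption about the usual JBoss 4.2 naming):

jps -l                               # find the JBoss java process id
jstack <jboss-pid> > threads.txt     # or: kill -3 <jboss-pid>  (the dump goes to the console log)
grep "ajp-" threads.txt              # AJP connector threads are typically named ajp-<address>-8009-<n>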
Could somebody help me with this? Somebody else might also have run into this issue and solved it somehow. In case any more information is required, let me know.
Also, is mod_proxy better than mod_jk, or are there other problems with mod_proxy that could be fatal for me if I switch to it?
The versions I used are as follows:
Apache 2.0.52
JBOSS: 4.2.2
MOD_JK: 1.2.20
JDK: 1.6
Operating System: RHEL 4
Thanks for the help.
Guys!!!! We finally found the workaround with the configuration mentioned above. It is the use of APR, described here: http://community.jboss.org/thread/153737. As many people correctly point out in the answers below, it is a connector issue. Earlier we made a temporary workaround by tuning Hibernate and increasing the response time; the full fix is APR.
Answered by Naganalf
We are experiencing similar issues. We are still working on solutions, but it looks like a lot of answers can be found here:
http://www.jboss.org/community/wiki/OptimalModjk12Configuration
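As a rough sketch of the kind of mod_jk worker tuning that page discusses (the worker name, host, and timeout values here are illustrative, not copied from the wiki):

# workers.properties
worker.list=node1
worker.node1.type=ajp13
worker.node1.host=app1.example.com
worker.node1.port=8009
# close idle backend connections after 10 minutes instead of keeping them forever
worker.node1.connection_pool_timeout=600
worker.node1.socket_keepalive=1

The matching change on the JBoss side (connectionTimeout on the AJP connector) is sketched under the last answer below.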
Good luck!
Answered by Dan
Deploy the Apache native APR under jboss/bin/native.
Edit your jboss run.sh to make sure it is looking for the native libs in the right folder.
This will force jboss to use native AJP connector threads rather than the default pure-java ones.
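A minimal sketch of those two steps, assuming the native binaries come as a tarball (the archive name and the exact run.sh variables are assumptions; check your own run.sh for where it builds the library path):

# 1. unpack the JBoss/Tomcat native (APR) binaries into the JBoss bin directory
mkdir -p $JBOSS_HOME/bin/native
tar xzf jboss-native-linux2-x64.tar.gz -C $JBOSS_HOME/bin/native   # archive name is illustrative

# 2. in run.sh, make sure the native directory ends up on the library path, e.g.:
JBOSS_NATIVE_DIR="$JBOSS_HOME/bin/native"
JAVA_OPTS="$JAVA_OPTS -Djava.library.path=$JBOSS_NATIVE_DIR"
export LD_LIBRARY_PATH="$JBOSS_NATIVE_DIR:$LD_LIBRARY_PATH"

On startup, JBoss Web logs whether the APR library was loaded, which is an easy way to verify that the change took effect.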
Answered by Stephen Souness
You should also take a look at the JBoss Jira issue titled "AJP Connector Threads Hung in CLOSE_WAIT Status".
Answered by Ashish Jain
What we did to sort this issue out is as follows:
<property name="hibernate.cache.use_second_level_cache">false</property>
<property name="hibernate.search.default.directory_provider">org.hibernate.search.store.FSDirectoryProvider</property>
<property name="hibernate.search.Rules.directory_provider">
org.hibernate.search.store.RAMDirectoryProvider
</property>
<property name="hibernate.search.default.indexBase">/usr/local/lucene/indexes</property>
<property name="hibernate.search.default.indexwriter.batch.max_merge_docs">1000</property>
<property name="hibernate.search.default.indexwriter.transaction.max_merge_docs">10</property>
<property name="hibernate.search.default.indexwriter.batch.merge_factor">20</property>
<property name="hibernate.search.default.indexwriter.transaction.merge_factor">10</property>
<property name ="hibernate.search.reader.strategy">not-shared</property>
<property name ="hibernate.search.worker.execution">async</property>
<property name ="hibernate.search.worker.thread_pool.size">100</property>
<property name ="hibernate.search.worker.buffer_queue.max">300</property>
<property name ="hibernate.search.default.optimizer.operation_limit.max">1000</property>
<property name ="hibernate.search.default.optimizer.transaction_limit.max">100</property>
<property name ="hibernate.search.indexing_strategy">manual</property>
The parameters above ensured that the worker threads are not blocked by Lucene and Hibernate Search. Hibernate's default optimizer made our life easy, so I consider this setting very important.
We also removed the C3P0 connection pooling and used the built-in JDBC connection pooling, so we commented out the section below.
<!--For JDBC connection pool (use the built-in)-->
<property name="connection.provider_class">org.hibernate.connection.C3P0ConnectionProvider</property>
<!-- DEPRECATED very expensive property name="c3p0.validate" -->
<!-- seconds -->
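With C3P0 commented out, Hibernate falls back to its own rudimentary built-in pool unless another provider is configured; a minimal sketch of making that explicit (the pool size here is an assumption, and in a full JBoss setup a JNDI datasource would be the more usual choice):

<!-- use Hibernate's built-in connection pool; size is illustrative -->
<property name="hibernate.connection.pool_size">20</property>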
After doing all this, we were able to considerably reduce the time an AJP thread was taking to serve a request, and threads started returning to the R state after serving a request instead of staying stuck in the S state.
Answered by Mark
There is a bug in Tomcat 6 that was filed recently. It is about the HTTP connector, but the symptoms sound the same.
Answered by David Mann
We were having this issue in a JBoss 5 environment. The cause was a web service that took longer to respond than JBoss/Tomcat allowed. This would cause the AJP thread pool to eventually exhaust its available threads, and it would then stop responding. Our solution was to adjust the web service to use a Request/Acknowledge pattern rather than a Request/Respond pattern. This allowed the web service to respond within the timeout period every time. Granted, this doesn't solve the underlying configuration issue of JBoss, but it was easier for us to do in our context than tuning JBoss.
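A minimal sketch of the Request/Acknowledge idea in plain servlet terms (the class, pool size, and parameter names are hypothetical; the actual service was not shown in the answer): the endpoint hands the slow work to a background executor and acknowledges immediately, so the AJP thread is released well within the timeout.

import java.io.IOException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import javax.servlet.ServletException;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AcknowledgeServlet extends HttpServlet {

    // background workers do the slow processing, off the AJP request thread
    private final ExecutorService workers = Executors.newFixedThreadPool(10);

    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp)
            throws ServletException, IOException {
        final String payload = req.getParameter("payload");
        workers.submit(new Runnable() {
            public void run() {
                processSlowly(payload); // the long-running downstream call happens here
            }
        });
        // acknowledge right away; the caller polls for (or is notified of) the result later
        resp.setStatus(HttpServletResponse.SC_ACCEPTED); // HTTP 202
    }

    private void processSlowly(String payload) {
        // placeholder for the slow work
    }
}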
Answered by user3277258
There is a bug related to the AJP connector executor leaking threads, and the solution is explained here: Jboss AJP thread pool not released idle threads. In summary, AJP thread-pool connections have no timeout by default and will persist permanently once established. Hope this helps.
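A sketch of the kind of connector change that answer points at, assuming a Tomcat/JBoss Web style server.xml (the maxThreads and timeout values are illustrative):

<!-- by default AJP connections never time out, so their threads are held forever;
     give the connector an explicit idle timeout (in milliseconds) -->
<Connector port="8009" protocol="AJP/1.3" redirectPort="8443"
           maxThreads="350" connectionTimeout="600000" />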

