spring or hibernate connection leak
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me), linking back to the original:
StackOverflow
Original URL: http://stackoverflow.com/questions/27925914/
Asked by abdel
I am using Spring with Hibernate in a webapp (hibernate-core-4.3.8.Final and Spring 3.2.11.RELEASE). I am using HikariCP (v2.2.5) as the connection pool implementation, which detects a connection leak and prints the stack trace below. I am using Spring's declarative transaction demarcation, so I assume the management and clean-up of resources is done by Spring/Hibernate. Therefore, I think either Spring or Hibernate is the cause of the detected connection leak.
Basically, there is a timer which, when fired, calls a Spring bean marked with the @Transactional annotation.
@Transactional
public class InvoiceCycleExporter {

    public void runExportInvoiceCycleJob() {
        // this method, when called, is **sometimes** leaking a connection ...
    }
}
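The stack trace further down shows the call arriving through a Camel Quartz endpoint and the Camel bean component, so the timer is presumably a Camel route along these lines. This is only a sketch to show where the @Transactional proxy is entered; the route id, cron expression, and bean id are hypothetical and not taken from the real application:

<camelContext xmlns="http://camel.apache.org/schema/spring">
    <route id="invoiceCycleExportRoute">
        <!-- example schedule only: fire every night at 02:00 -->
        <from uri="quartz://invoicing/exportTimer?cron=0+0+2+*+*+?"/>
        <!-- invoking the Spring bean here goes through the CGLIB transaction proxy,
             which is where JpaTransactionManager obtains the connection seen in the trace -->
        <to uri="bean:invoiceCycleExporter?method=runExportInvoiceCycleJob"/>
    </route>
</camelContext>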
Can you please help me trace the source of the connection leak?
My appcontext.xml configuration for the datasource, connection pool, and entity manager is below:
<bean id="hikariConfig" class="com.zaxxer.hikari.HikariConfig">
<property name="jdbcUrl" value="${jdbc.url}"/>
<property name="username" value="${jdbc.user}"/>
<property name="password" value="${jdbc.password}"/>
<property name="maximumPoolSize" value="${jdbc.maximumPoolSize}"/>
<property name="driverClassName" value="org.postgresql.Driver"/>
<property name="leakDetectionThreshold" value="${jdbc.leakDetectionThreshold}"/>
</bean>
<bean id="dataSource" class="com.zaxxer.hikari.HikariDataSource" destroy-method="shutdown">
<constructor-arg ref="hikariConfig"/>
</bean>
<bean id="entityManagerFactory" class="org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean">
<property name="dataSource" ref="dataSource"/>
<property name="persistenceUnitName" value="velosPU"/>
<property name="persistenceXmlLocation" value="classpath:META-INF/persistence.xml"/> //more stuff ....
</bean>
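The question mentions Spring's declarative transaction demarcation, and the stack trace shows org.springframework.orm.jpa.JpaTransactionManager, so the context presumably also contains wiring along these lines. This is a sketch only; the bean id and namespace setup are assumptions, not taken from the actual appcontext.xml:

<!-- assumed transaction wiring; not part of the config excerpt above -->
<bean id="transactionManager" class="org.springframework.orm.jpa.JpaTransactionManager">
    <property name="entityManagerFactory" ref="entityManagerFactory"/>
</bean>

<!-- enables @Transactional processing (requires the tx namespace to be declared on <beans>) -->
<tx:annotation-driven transaction-manager="transactionManager"/>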
Stack trace below:
2015-01-13 14:25:00.123 [Hikari Housekeeping Timer (pool HikariPool-0)] WARN com.zaxxer.hikari.util.LeakTask - Connection leak detection triggered, stack trace follows
java.lang.Exception: null
at org.hibernate.engine.jdbc.connections.internal.DatasourceConnectionProviderImpl.getConnection(DatasourceConnectionProviderImpl.java:139) ~[hibernate-core-4.3.8.Final.jar:4.3.8.Final]
at org.hibernate.internal.AbstractSessionImpl$NonContextualJdbcConnectionAccess.obtainConnection(AbstractSessionImpl.java:380) ~[hibernate-core-4.3.8.Final.jar:4.3.8.Final]
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.obtainConnection(LogicalConnectionImpl.java:228) ~[hibernate-core-4.3.8.Final.jar:4.3.8.Final]
at org.hibernate.engine.jdbc.internal.LogicalConnectionImpl.getConnection(LogicalConnectionImpl.java:171) ~[hibernate-core-4.3.8.Final.jar:4.3.8.Final]
at org.hibernate.engine.transaction.internal.jdbc.JdbcTransaction.doBegin(JdbcTransaction.java:67) ~[hibernate-core-4.3.8.Final.jar:4.3.8.Final]
at org.hibernate.engine.transaction.spi.AbstractTransactionImpl.begin(AbstractTransactionImpl.java:162) ~[hibernate-core-4.3.8.Final.jar:4.3.8.Final]
at org.hibernate.internal.SessionImpl.beginTransaction(SessionImpl.java:1435) ~[hibernate-core-4.3.8.Final.jar:4.3.8.Final]
at org.hibernate.jpa.internal.TransactionImpl.begin(TransactionImpl.java:61) ~[hibernate-entitymanager-4.3.8.Final.jar:4.3.8.Final]
at org.springframework.orm.jpa.DefaultJpaDialect.beginTransaction(DefaultJpaDialect.java:70) ~[spring-orm-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.orm.jpa.vendor.HibernateJpaDialect.beginTransaction(HibernateJpaDialect.java:61) ~[spring-orm-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.orm.jpa.JpaTransactionManager.doBegin(JpaTransactionManager.java:378) ~[spring-orm-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.transaction.support.AbstractPlatformTransactionManager.getTransaction(AbstractPlatformTransactionManager.java:372) ~[spring-tx-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.createTransactionIfNecessary(TransactionAspectSupport.java:417) ~[spring-tx-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.transaction.interceptor.TransactionAspectSupport.invokeWithinTransaction(TransactionAspectSupport.java:255) ~[spring-tx-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.transaction.interceptor.TransactionInterceptor.invoke(TransactionInterceptor.java:94) ~[spring-tx-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.aop.framework.ReflectiveMethodInvocation.proceed(ReflectiveMethodInvocation.java:172) ~[spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at org.springframework.aop.framework.CglibAopProxy$DynamicAdvisedInterceptor.intercept(CglibAopProxy.java:633) ~[spring-aop-3.2.11.RELEASE.jar:3.2.11.RELEASE]
at com.ukfuels.velos.services.bl.internalinterface.impl.bl.invoicing.**InvoiceCycleExporter (this is the spring bean marked with the transactional annotation)**$$EnhancerBySpringCGLIB$9c078f.runExportInvoiceCycleJob(<generated>) ~[spring-core-3.2.11.RELEASE.jar:na]
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method) ~[na:1.7.0_65]
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57) ~[na:1.7.0_65]
at org.apache.camel.component.bean.BeanProcessor.process(BeanProcessor.java:67) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.impl.ProcessorEndpoint.onExchange(ProcessorEndpoint.java:103) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.impl.ProcessorEndpoint.process(ProcessorEndpoint.java:71) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.util.AsyncProcessorConverterHelper$ProcessorToAsyncProcessorBridge.process(AsyncProcessorConverterHelper.java:61) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.SendProcessor.doInAsyncProducer(SendProcessor.java:122) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.impl.ProducerCache.doInAsyncProducer(ProducerCache.java:298) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.SendProcessor.process(SendProcessor.java:117) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:72) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.interceptor.BacklogTracerInterceptor.process(BacklogTracerInterceptor.java:84) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.interceptor.TraceInterceptor.process(TraceInterceptor.java:91) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.RedeliveryErrorHandler.processErrorHandler(RedeliveryErrorHandler.java:391) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.RedeliveryErrorHandler.process(RedeliveryErrorHandler.java:273) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.RouteContextProcessor.processNext(RouteContextProcessor.java:46) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.interceptor.DefaultChannel.process(DefaultChannel.java:335) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.RouteContextProcessor.processNext(RouteContextProcessor.java:46) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.UnitOfWorkProcessor.processAsync(UnitOfWorkProcessor.java:150) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.UnitOfWorkProcessor.process(UnitOfWorkProcessor.java:117) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.RouteInflightRepositoryProcessor.processNext(RouteInflightRepositoryProcessor.java:48) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.DelegateAsyncProcessor.processNext(DelegateAsyncProcessor.java:99) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.DelegateAsyncProcessor.process(DelegateAsyncProcessor.java:90) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.management.InstrumentationProcessor.process(InstrumentationProcessor.java:72) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:73) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.loadbalancer.QueueLoadBalancer.process(QueueLoadBalancer.java:44) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.util.AsyncProcessorHelper.process(AsyncProcessorHelper.java:99) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.processor.loadbalancer.QueueLoadBalancer.process(QueueLoadBalancer.java:71) ~[camel-core-2.11.4.jar:2.11.4]
at org.apache.camel.component.quartz.QuartzEndpoint.onJobExecute(QuartzEndpoint.java:113) ~[camel-quartz-2.11.4.jar:2.11.4]
at org.apache.camel.component.quartz.CamelJob.execute(CamelJob.java:61) ~[camel-quartz-2.11.4.jar:2.11.4]
at org.quartz.core.JobRunShell.run(JobRunShell.java:223) ~[quartz-1.8.6.jar:na]
at org.quartz.simpl.SimpleThreadPool$WorkerThread.run(SimpleThreadPool.java:549) ~[quartz-1.8.6.jar:na] **(a timer is triggered)**
Accepted answer by Cristian Sevescu
The issue is discussed at length here:
https://github.com/brettwooldridge/HikariCP/issues/34
To narrow down the problem:
- Try Spring 4 with Hibernate 4
- Try another data source to see if the problem persists (a sketch of one way to do this follows this list).
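For the second point, one way to rule the pool implementation in or out is to temporarily swap the dataSource bean for another pool while keeping the rest of the configuration unchanged. A minimal sketch using Apache Commons DBCP2 is shown below; the class and property names follow DBCP2's BasicDataSource and are an illustration, not part of the original setup (note that leak reporting itself is Hikari-specific, so with another pool you would be watching for connection exhaustion instead):

<!-- temporary replacement for the HikariDataSource bean, to check whether the symptoms persist -->
<bean id="dataSource" class="org.apache.commons.dbcp2.BasicDataSource" destroy-method="close">
    <property name="url" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.user}"/>
    <property name="password" value="${jdbc.password}"/>
    <property name="driverClassName" value="org.postgresql.Driver"/>
    <property name="maxTotal" value="${jdbc.maximumPoolSize}"/>
</bean>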
Answered by abdel
Here is some information that I think could be useful for debugging connection leaks in general:
- Rule out false alarms. For example, if you have a transaction that could legitimately run for 5 minutes (e.g., while producing a large report) and you have set the 'leakDetectionThreshold' parameter (or the equivalent for your chosen pool implementation) to 4 minutes, then any transaction taking longer than 4 minutes will be reported as a leak even though it may not be a genuine one (the transaction could still complete gracefully and release all of its connections after 5 minutes).
- If you are using a connection pool but occasionally find yourself running out of connections, think about how you have configured the pool. In my case, I had configured the pool with 'maximumPoolSize'=100. Hikari defaults 'minimumIdle' (the minimum number of idle connections that HikariCP tries to maintain in the pool) to the same value as 'maximumPoolSize', so on start-up the pool was initialized with all 100 connections. But this means that when 'maxLifetime' (the maximum lifetime of a connection in the pool) expires, all connections have to be renewed at the same time, which results in a steep temporary reduction of available connections. In my logs I was occasionally seeing the following lines:

HikariPool-0 (total=100, inUse=0, avail=100, waiting=0)
HikariPool-0 (total=4, inUse=0, avail=4, waiting=0)
HikariPool-0 (total=100, inUse=0, avail=100, waiting=0)

The second line, where the available connections dropped to only 4, is when 'maxLifetime' was reached and the connections needed renewal. So configure your pool in a way that makes connections expire at different times. In my case, I simply changed 'minimumIdle' to 40, which means that as load increases on the server, new connections are acquired incrementally (check whether your pool implementation offers the equivalent of a minToAcquire property) and hence those connections have different expiry dates (see the configuration sketch after this list).
- Your connection 'maxLifetime' needs to be less than what your database assigns/expects for connections, so that you don't end up with invalid connections in the pool. UPDATE: some databases may force-drop the connection after a period of time. For example, Postgres has 'connectionTimeout' and 'socketTimeout' options. So in your application's connection pool, don't keep connections for longer than this db-enforced connection timeout, because otherwise you will be holding on to an invalid/already-dropped connection (the sketch below keeps 'maxLifetime' below such a limit).
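Putting the points above together, a configuration sketch along these lines could be applied to the hikariConfig bean shown in the question. The concrete values are assumptions for illustration only (they depend on your longest-running transaction and on whatever timeout the database or network infrastructure enforces):

<bean id="hikariConfig" class="com.zaxxer.hikari.HikariConfig">
    <property name="jdbcUrl" value="${jdbc.url}"/>
    <property name="username" value="${jdbc.user}"/>
    <property name="password" value="${jdbc.password}"/>
    <property name="driverClassName" value="org.postgresql.Driver"/>
    <!-- keep the idle core smaller than the maximum so connections are acquired,
         and therefore expire, at different times instead of all at once -->
    <property name="minimumIdle" value="40"/>
    <property name="maximumPoolSize" value="100"/>
    <!-- hypothetical: retire connections after 25 minutes, comfortably below an assumed
         ~30-minute limit enforced by the database or a firewall (value in milliseconds) -->
    <property name="maxLifetime" value="1500000"/>
    <!-- hypothetical: report a leak only after 6 minutes, i.e. longer than the
         longest legitimate transaction (such as the 5-minute report mentioned above) -->
    <property name="leakDetectionThreshold" value="360000"/>
</bean>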