High memory usage when using Hibernate

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow.

Original question: http://stackoverflow.com/questions/24359088/
Asked by Viet
I wrote a server-side Java application that runs on a Linux server. I use Hibernate to open sessions to the database, query it with native SQL, and always close each session in a try/catch/finally block.
My server queries the database through Hibernate at a very high frequency.
I set MaxHeapSize to 3000M, but the process usually uses about 2.7GB of RAM; usage does go down, but much more slowly than it goes up. Sometimes it grows to 3.6GB, more than the MaxHeapSize I set at startup.
When memory usage was at 3.6GB, I dumped the heap with jmap and got a heap dump of only 1.3GB.
I'm using Eclipse MAT to analyse it; here is the dominator tree from MAT.

I think Hibernate is the problem: I have a great many org.apache.commons.collections.map.AbstractReferenceMap$ReferenceEntry instances like this. Perhaps they cannot be reclaimed by garbage collection, or only slowly.
How can I fix it?
Accepted answer by Vlad Mihalcea
You have 250k entries in your IN query list. Even a native query will bring the database to its knees. Oracle limits the IN clause to 1000 entries for performance reasons, so you should do the same.
Giving it more RAM is not going to solve the problem; you need to limit your selects/updates to batches of at most 1000 entries, using pagination, as in the sketch below.
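For illustration, a minimal sketch of such batching with the Hibernate 3.x API; the Entity class, its numeric id, and the already-open Session are assumptions, not code from the question:

import java.util.ArrayList;
import java.util.List;

import org.hibernate.Query;
import org.hibernate.Session;

public class BatchedLoader {

    // Oracle rejects IN lists longer than 1000, and huge lists hurt any database.
    private static final int BATCH_SIZE = 1000;

    // Splits the id list into chunks of at most 1000 and runs one query per chunk.
    @SuppressWarnings("unchecked")
    public static List<Object> loadInBatches(Session session, List<Long> ids) {
        List<Object> results = new ArrayList<Object>();
        for (int i = 0; i < ids.size(); i += BATCH_SIZE) {
            List<Long> chunk = ids.subList(i, Math.min(i + BATCH_SIZE, ids.size()));
            Query query = session.createQuery("from Entity e where e.id in (:ids)");
            query.setParameterList("ids", chunk);
            results.addAll(query.list());
        }
        return results;
    }
}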
Streaming is an option as well, but for such a large result set, keyset pagination is usually the best choice.
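A rough sketch of keyset pagination in the same style; the Entity name and the assumption that rows are paged by a monotonically increasing id are illustrative only:

import java.util.List;

import org.hibernate.Query;
import org.hibernate.Session;

public class KeysetPager {

    // Fetches the next page strictly after the last id already seen, instead of
    // using an ever-growing offset; each page is a cheap index range scan.
    @SuppressWarnings("unchecked")
    public static List<Object> nextPage(Session session, long lastSeenId, int pageSize) {
        Query query = session.createQuery(
                "from Entity e where e.id > :lastId order by e.id asc");
        query.setParameter("lastId", lastSeenId);
        query.setMaxResults(pageSize);
        return query.list();
    }
}

The caller feeds the id of the last row of one page into the next call (starting from 0), so no offset scan grows with the page number.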
If you can do all the processing in the database, you won't have to move 250k records from the DB to the application. There's a very good reason why many RDBMSs offer advanced procedural languages (e.g. PL/SQL, T-SQL).
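If the processing can live in the database, the application only needs to trigger it. A sketch assuming a hypothetical PL/SQL procedure named process_blacklist; session.connection() is the Hibernate 3.x way to reach the underlying JDBC connection:

import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

import org.hibernate.Session;

public class DbSideProcessing {

    // Runs the heavy lifting inside the database; no rows travel to the app.
    public static void processInDatabase(Session session) throws SQLException {
        Connection connection = session.connection(); // deprecated in 3.6, but available
        CallableStatement call = connection.prepareCall("{ call process_blacklist() }");
        try {
            call.execute();
        } finally {
            call.close();
        }
    }
}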
Answer by Viet
Thank you Vlad Mihalcea for the link to the Hibernate issue. This is a bug in Hibernate that was fixed in version 3.6. I updated Hibernate from 3.3.2 to 3.6.10, kept the default values of "hibernate.query.plan_cache_max_soft_references" (2048) and "hibernate.query.plan_cache_max_strong_references" (128), and my problem is gone. No more high memory usage.
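For reference, a sketch of setting those two properties explicitly while building the SessionFactory; the values shown are just the defaults quoted above, and the actual fix here was the version upgrade, not tuning:

import org.hibernate.SessionFactory;
import org.hibernate.cfg.Configuration;

public class FactoryBuilder {

    public static SessionFactory build() {
        Configuration cfg = new Configuration().configure(); // reads hibernate.cfg.xml
        // Defaults in Hibernate 3.6; raise or lower only after measuring.
        cfg.setProperty("hibernate.query.plan_cache_max_soft_references", "2048");
        cfg.setProperty("hibernate.query.plan_cache_max_strong_references", "128");
        return cfg.buildSessionFactory();
    }
}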
Answer by jalogar
Note that even though the number of objects in the queryPlanCache can be configured and limited, having that many of them is probably not normal.
In our case we were writing HQL queries similar to this:
hql = String.format("from Entity where msisdn='%s'", msisdn);
Because every distinct msisdn produced a different HQL string, this resulted in N different plans in the queryPlanCache. When we changed this query to:
hql = "from Blacklist where msisnd = :msisdn";
...
query.setParameter("msisdn", msisdn);
the size of the queryPlanCache dropped dramatically, from 100MB to almost zero. The second query is translated into a single PreparedStatement, resulting in just one entry in the cache.
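Putting the two variants side by side as a self-contained sketch (the Entity class and its msisdn property are placeholders carried over from the answer):

import org.hibernate.Query;
import org.hibernate.Session;

public class PlanCacheFriendlyQuery {

    // Anti-pattern: every distinct msisdn yields a new HQL string, hence a new
    // cached query plan.
    public static Object findBad(Session session, String msisdn) {
        Query query = session.createQuery(
                String.format("from Entity where msisdn = '%s'", msisdn));
        return query.uniqueResult();
    }

    // Fix: the HQL string is constant, so the plan cache holds a single entry
    // (and the value is no longer an SQL-injection vector).
    public static Object findGood(Session session, String msisdn) {
        Query query = session.createQuery("from Entity where msisdn = :msisdn");
        query.setParameter("msisdn", msisdn);
        return query.uniqueResult();
    }
}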