oracle java中的大型sql结果集
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/1179977/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me):
StackOverFlow
large sql resultsets in java
提问by user140736
How can I fetch a large result set in Java? I have about 140,000 rows with 3 columns.
如何在java中获取大型结果集?我有大约 140,000 行和 3 列。
回答by erickson
There's no special way to retrieve a large result set; this can be done the same as any other database query via JDBC.
没有特殊的方法来检索大的结果集;这可以像通过 JDBC 进行的任何其他数据库查询一样完成。
The key is in how the results are handled. 140,000 small records is not too many, but if holding them all in application memory at once is a problem, consider whether they can be processed "streamwise". That is, use the information needed from each record, then discard the record before retrieving the next. This way, the memory requirement doesn't depend on the number of records in the result set.
关键在于如何处理结果。140,000 条小记录并不算多,但如果将它们全部保存在应用程序内存中是个问题,请考虑是否可以“流式”处理它们。也就是说,使用每条记录所需的信息,然后在检索下一条记录之前丢弃该记录。这样,内存要求不取决于结果集中的记录数。
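A minimal sketch of this stream-wise pattern. The pure `streamWise` loop below shows the idea on any row source; the JDBC variant has the same shape (the table and column names, the fetch size of 500, and the `processRow` callback are all placeholders, not part of the original answer):

```java
import java.sql.*;
import java.util.*;
import java.util.function.Consumer;

public class StreamRows {
    // Process each row as it arrives, then discard it; memory use is O(1) in the row count.
    static int streamWise(Iterator<Object[]> rows, Consumer<Object[]> processRow) {
        int count = 0;
        while (rows.hasNext()) {
            Object[] row = rows.next(); // fetch one row
            processRow.accept(row);     // use it ...
            count++;                    // ... then let it go; nothing is retained
        }
        return count;
    }

    // The JDBC version has the same shape (not executed here; needs a live connection).
    static void streamWiseJdbc(Connection cnxn, Consumer<Object[]> processRow) throws SQLException {
        try (Statement st = cnxn.createStatement()) {
            st.setFetchSize(500); // hint to the driver: fetch in chunks, not all 140k rows at once
            try (ResultSet rs = st.executeQuery(
                    "SELECT column1, column2, column3 FROM some_table")) {
                while (rs.next()) {
                    processRow.accept(new Object[] {
                        rs.getObject(1), rs.getObject(2), rs.getObject(3) });
                }
            }
        }
    }

    public static void main(String[] args) {
        List<Object[]> sample = Arrays.asList(
            new Object[] { 1, "a", 1.0 },
            new Object[] { 2, "b", 2.0 });
        int n = streamWise(sample.iterator(), row -> {});
        System.out.println(n); // prints 2
    }
}
```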
回答by BobMcGee
(using a java.sql.Connection to your DB):
(使用 java.sql.Connection 到您的数据库):
Statement st = cnxn.createStatement();
ResultSet rs = st.executeQuery("SELECT column1, column2, column3 FROM some_table");
while (rs.next()) {
    Object ob1 = rs.getObject(1);
    Object ob2 = rs.getObject(2);
    Object ob3 = rs.getObject(3);
    doStuffWithRow(ob1, ob2, ob3);
}
Unless your DBMS is pathetically useless, results will be read from disk/memory as requested, and won't sit in your memory or anything crazy like that. Using the primitives-based ResultSet.getXXX methods is faster than getObject, but I didn't feel like specifying column types.
除非你的 DBMS 差到离谱,否则结果会按需从磁盘/内存中读取,而不会一股脑全部驻留在你的内存里之类的。使用基于原始类型的 ResultSet.getXXX 方法比 getObject 更快,但我不想逐一指定列类型。
Just be patient and let 'er chug. Oh, and stay the heck away from ORM here.
耐心一点,让它慢慢跑完就好。哦,还有,这里千万要远离 ORM。
回答by Daniel Winterstein
If you're using PostgreSQL, you'll need to set up the Connection and the Statement as follows:
如果您使用的是 PostgreSQL,则需要按如下方式设置连接和语句:
Connection cnxn = ...;
cnxn.setAutoCommit(false);
Statement stmnt = cnxn.createStatement();
stmnt.setFetchSize(1);
Otherwise your query is liable to try and load everything into memory. Thanks to http://abhirama.wordpress.com/2009/01/07/postgresql-jdbc-and-large-result-sets/ for documenting that.
否则,您的查询可能会尝试将所有内容加载到内存中。感谢 http://abhirama.wordpress.com/2009/01/07/postgresql-jdbc-and-large-result-sets/ 记录了这一点。
回答by Suresh
I recommend a batch-style fetch instead of loading everything at once. There are performance considerations on the database side when executing a query with a big result set.
我建议使用批处理样式获取而不是加载所有内容。在执行具有大结果集的查询时,数据库端存在性能考虑。
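One way to sketch such a batch-style fetch on Oracle is the classic ROWNUM pagination idiom, issuing one query per page. The table, column names, and page size below are placeholders, and the inner query needs a stable ORDER BY for paging to be deterministic:

```java
public class PagedFetch {
    // Wrap an ordered query in the classic Oracle ROWNUM pagination idiom,
    // selecting rows (offset, offset + pageSize]. ROWNUM is assigned to the
    // inner query's output, so the outer filters give a stable window.
    static String pageQuery(String orderedSql, int offset, int pageSize) {
        return "SELECT * FROM ("
             + "SELECT q.*, ROWNUM rn FROM (" + orderedSql + ") q "
             + "WHERE ROWNUM <= " + (offset + pageSize)
             + ") WHERE rn > " + offset;
    }

    public static void main(String[] args) {
        // Build the query for the second batch of 1000 rows of a hypothetical table.
        String sql = pageQuery(
            "SELECT column1, column2, column3 FROM some_table ORDER BY column1",
            1000, 1000);
        System.out.println(sql);
    }
}
```

In a loop you would execute each page's query, process its rows, and advance the offset until a page comes back empty, so only one batch of rows is in memory at a time.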
回答by Andreas Petersson
i would avoid loading such a data set with any kind of complex abstraction. this sounds like a "batch job" style application.
我会避免使用任何类型的复杂抽象加载这样的数据集。这听起来像是一个“批处理作业”风格的应用程序。
i would recommend using raw jdbc and mapping it to a very compact representation, without bloat. with only 3 columns this should be fairly easy to hold in memory, if the strings are not overly big.
我建议使用原始 JDBC,并将结果映射为非常紧凑、没有冗余的表示。只有 3 列,只要字符串不是特别大,放在内存里应该相当容易。
avoid loading 140k rows with a tool such as hibernate. it is a great tool, but you might run into memory issues if you hold that many entities in the hibernate 1st and 2nd level caches.
避免使用 Hibernate 之类的工具加载 14 万行数据。它是一个很棒的工具,但如果在 Hibernate 的一级和二级缓存中保存这么多实体,你可能会遇到内存问题。
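A sketch of such a compact representation, using three parallel arrays instead of 140k row objects: no per-row object header, and the primitive columns stay unboxed. The column types (int, String, double) are assumptions for illustration; adjust them to the real schema:

```java
public class CompactRows {
    // Three parallel columns instead of one object per row.
    final int[] col1;
    final String[] col2;
    final double[] col3;
    int size = 0;

    CompactRows(int capacity) {
        col1 = new int[capacity];
        col2 = new String[capacity];
        col3 = new double[capacity];
    }

    void add(int a, String b, double c) {
        col1[size] = a;
        col2[size] = b;
        col3[size] = c;
        size++;
    }

    public static void main(String[] args) {
        CompactRows rows = new CompactRows(140_000);
        rows.add(1, "x", 3.5); // in real use: rows.add(rs.getInt(1), rs.getString(2), rs.getDouble(3))
        System.out.println(rows.size);    // prints 1
        System.out.println(rows.col2[0]); // prints x
    }
}
```

Filling it is one `rows.add(...)` call per iteration of the `while (rs.next())` loop, using the typed `ResultSet` getters.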