Java: Using a PreparedStatement multiple times efficiently
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same CC BY-SA license, include the original URL and author information, and attribute it to the original authors (not me): StackOverflow.
Original URL: http://stackoverflow.com/questions/12100550/
Using Prepared Statement multiple times efficiently
Asked by AKIWEB
Below is the code I am using to insert multiple records (around 5000-7000) into an Oracle database using a PreparedStatement.

Is the way I am currently doing it good, or can it be improved further using some kind of batching?
pstatement = db_connection.prepareStatement(PDSLnPConstants.UPSERT_SQL);
for (Entry<Integer, LinkedHashMap<Integer, String>> entry : MAPPING.entrySet()) {
    pstatement.setInt(1, entry.getKey());
    pstatement.setString(2, entry.getValue().get(LnPConstants.CGUID_ID));
    pstatement.setString(3, entry.getValue().get(LnPConstants.PGUID_ID));
    pstatement.setString(4, entry.getValue().get(LnPConstants.SGUID_ID));
    pstatement.setString(5, entry.getValue().get(LnPConstants.UID_ID));
    pstatement.setString(6, entry.getValue().get(LnPConstants.ULOC_ID));
    pstatement.setString(7, entry.getValue().get(LnPConstants.SLOC_ID));
    pstatement.setString(8, entry.getValue().get(LnPConstants.PLOC_ID));
    pstatement.setString(9, entry.getValue().get(LnPConstants.ALOC_ID));
    pstatement.setString(10, entry.getValue().get(LnPConstants.SITE_ID));
    pstatement.executeUpdate();
    pstatement.clearParameters();
}
Updated code that I am using:
public void runNextCommand() {
    Connection db_connection = null;
    PreparedStatement pstatement = null;
    int batchLimit = 1000;
    boolean autoCommit = false;
    try {
        db_connection = getDBConnection();
        autoCommit = db_connection.getAutoCommit();
        db_connection.setAutoCommit(false); // Turn off autoCommit
        pstatement = db_connection.prepareStatement(LnPConstants.UPSERT_SQL); // create a statement
        for (Entry<Integer, LinkedHashMap<Integer, String>> entry : GUID_ID_MAPPING.entrySet()) {
            pstatement.setInt(1, entry.getKey());
            pstatement.setString(2, entry.getValue().get(LnPConstants.CGUID_ID));
            pstatement.setString(3, entry.getValue().get(LnPConstants.PGUID_ID));
            pstatement.setString(4, entry.getValue().get(LnPConstants.SGUID_ID));
            pstatement.setString(5, entry.getValue().get(LnPConstants.UID_ID));
            pstatement.setString(6, entry.getValue().get(LnPConstants.ULOC_ID));
            pstatement.setString(7, entry.getValue().get(LnPConstants.SLOC_ID));
            pstatement.setString(8, entry.getValue().get(LnPConstants.PLOC_ID));
            pstatement.setString(9, entry.getValue().get(LnPConstants.ALOC_ID));
            pstatement.setString(10, entry.getValue().get(LnPConstants.SITE_ID));
            pstatement.addBatch();
            batchLimit--;
            if (batchLimit == 0) {
                pstatement.executeBatch();
                pstatement.clearBatch();
                batchLimit = 1000;
            }
            pstatement.clearParameters();
        }
    } catch (SQLException e) {
        getLogger().log(LogLevel.ERROR, e);
    } finally {
        try {
            pstatement.executeBatch();
            db_connection.commit();
            db_connection.setAutoCommit(autoCommit);
        } catch (SQLException e1) {
            getLogger().log(LogLevel.ERROR, e1.getMessage(), e1.fillInStackTrace());
        }
        if (pstatement != null) {
            try {
                pstatement.close();
                pstatement = null;
            } catch (SQLException e) {
                getLogger().log(LogLevel.ERROR, e.getMessage(), e.fillInStackTrace());
            }
        }
        if (db_connection != null) {
            try {
                db_connection.close();
                db_connection = null;
            } catch (SQLException e) {
                getLogger().log(LogLevel.ERROR, e.getMessage(), e.fillInStackTrace());
            }
        }
    }
}
Answered by Sujay
You can think of using addBatch() and executing a batch of statements in one shot. Also, as @pst commented on your question, consider using a transaction.

The way you would do it is as follows:
boolean autoCommit = connection.getAutoCommit();
try {
    connection.setAutoCommit(false); // Turn off autoCommit
    pstatement = connection.prepareStatement(PDSLnPConstants.UPSERT_SQL);
    int batchLimit = 1000;
    try {
        for (Entry<Integer, LinkedHashMap<Integer, String>> entry : MAPPING.entrySet()) {
            pstatement.setInt(1, entry.getKey());
            pstatement.setString(2, entry.getValue().get(LnPConstants.CGUID_ID));
            pstatement.setString(3, entry.getValue().get(LnPConstants.PGUID_ID));
            pstatement.setString(4, entry.getValue().get(LnPConstants.SGUID_ID));
            pstatement.setString(5, entry.getValue().get(LnPConstants.UID_ID));
            pstatement.setString(6, entry.getValue().get(LnPConstants.ULOC_ID));
            pstatement.setString(7, entry.getValue().get(LnPConstants.SLOC_ID));
            pstatement.setString(8, entry.getValue().get(LnPConstants.PLOC_ID));
            pstatement.setString(9, entry.getValue().get(LnPConstants.ALOC_ID));
            pstatement.setString(10, entry.getValue().get(LnPConstants.SITE_ID));
            pstatement.addBatch();
            batchLimit--;
            if (batchLimit == 0) {
                pstatement.executeBatch();
                pstatement.clearBatch();
                batchLimit = 1000;
            }
            pstatement.clearParameters();
        }
    } finally {
        // for the remaining ones
        pstatement.executeBatch();
        // commit your updates
        connection.commit();
    }
} finally {
    connection.setAutoCommit(autoCommit);
}
The idea is to set a limit for batch updates and execute a database round trip only when you reach that limit. This way you are limiting database calls to one per batchLimit statements that you have defined, which makes it faster.
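For reference, here is a minimal sketch (not part of the original answer) of a hypothetical flushBatch() helper that executes the pending batch and inspects the update counts returned by executeBatch(), so you can see how many statements each round trip actually applied:

import java.sql.PreparedStatement;
import java.sql.SQLException;
import java.sql.Statement;

// Hypothetical helper: execute the queued batch and return how many statements succeeded.
static int flushBatch(PreparedStatement pstatement) throws SQLException {
    int[] counts = pstatement.executeBatch();          // one entry per queued statement
    int succeeded = 0;
    for (int count : counts) {
        if (count >= 0 || count == Statement.SUCCESS_NO_INFO) {
            succeeded++;                               // drivers may report SUCCESS_NO_INFO (-2) instead of a row count
        }
    }
    pstatement.clearBatch();
    return succeeded;
}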
Also note that for the transaction, I have only shown how and when to commit. This might not always be the correct point to commit, because that decision depends on your requirements. You might also want to perform a rollback in case of an exception, so it is up to you to decide.
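As a hedged illustration (the runBatch() name below is hypothetical and simply stands for the addBatch()/executeBatch() loop shown above), the commit/rollback decision could be wrapped like this:

import java.sql.Connection;
import java.sql.SQLException;

// Sketch: commit the whole batch on success, roll it back on failure,
// and restore the previous auto-commit setting either way.
static void runInTransaction(Connection connection) throws SQLException {
    boolean autoCommit = connection.getAutoCommit();
    connection.setAutoCommit(false);
    try {
        runBatch(connection);        // hypothetical: the batched upsert loop from above
        connection.commit();         // make all batched rows visible at once
    } catch (SQLException e) {
        connection.rollback();       // undo the partial work of this transaction
        throw e;                     // or log it, depending on your requirements
    } finally {
        connection.setAutoCommit(autoCommit);
    }
}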
Have a look at the "Using Transactions" tutorial to get a better picture of how to use transactions.
Answered by gd1
Your piece of code seems good to me.
Just for code cleanliness, I'd put entry.getValue() into a variable (call it value). And there's no need to call clearParameters().
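A sketch of that small refactoring, based on the loop from the question (only the first few parameters are shown; the rest follow the same pattern):

for (Entry<Integer, LinkedHashMap<Integer, String>> entry : MAPPING.entrySet()) {
    LinkedHashMap<Integer, String> value = entry.getValue();   // look the map up once per row
    pstatement.setInt(1, entry.getKey());
    pstatement.setString(2, value.get(LnPConstants.CGUID_ID));
    pstatement.setString(3, value.get(LnPConstants.PGUID_ID));
    // ... parameters 4-10 are set the same way ...
    pstatement.addBatch();
}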
Last, remember to correctly dispose of the prepared statement when you don't need it anymore (close()).
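One way to make that disposal automatic (a sketch assuming Java 7+ and the getDBConnection() and LnPConstants.UPSERT_SQL names from the question) is try-with-resources, which closes the statement and the connection even if an exception is thrown:

try (Connection db_connection = getDBConnection();
     PreparedStatement pstatement = db_connection.prepareStatement(LnPConstants.UPSERT_SQL)) {
    // set parameters, addBatch(), executeBatch(), commit() as shown above
}   // pstatement and db_connection are closed here automatically, in reverse order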
Answered by t0r0X
Yes, doing batch updates would significantly improve your performance. Just google for it; my preferred answer is this one from Mkyong.com. Otherwise, your code looks OK. clearParameters() is not really necessary, and it might even consume some processor cycles. Important: if auto-commit is enabled, don't forget to disable it before the updates and re-enable it afterwards; this again brings a tremendous improvement.
PS
The above recommendation is also based on my own experience. I've just noticed that this question was already asked here on Stack Overflow, and the answer is very detailed. More on PreparedStatements and batches can be found in the Oracle docs here, and about transactions (auto-commit) here.