Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same CC BY-SA license and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/1139050/
How to Ignore "Duplicate Key" error in T-SQL (SQL Server)
Asked by Kingamoon
I have a transaction that contains multiple SQL Statements (INSERT, UPDATE and/or DELETES). When executing, I want to ignore Duplicate Error statements and continue onto the next statement. What's the best way of doing that?
Accepted answer by zinglon
Although my emphatic advice to you is to structure your sql so as to not attempt duplicate inserts (Philip Kelley's snippet is probably what you need), I want to mention that an error on a statement doesn't necessarily cause a rollback.
Unless XACT_ABORT is ON, a transaction will not automatically roll back when an error is encountered, unless the error is severe enough to kill the connection. XACT_ABORT defaults to OFF.
For example, the following sql successfully inserts three values into the table:
create table x ( y int not null primary key )
begin transaction
insert into x(y) values(1)
insert into x(y) values(2)
insert into x(y) values(2)  -- duplicate: raises error 2627, but does not roll back the transaction
insert into x(y) values(3)
commit
Unless you're setting XACT_ABORT, the error is raised on the client, and it is the client that causes the rollback. If for some horrible reason you can't avoid inserting duplicates, you ought to be able to trap the error on the client and ignore it.
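The client-side trap described above can be sketched as follows. This is an illustrative Python sketch against sqlite3 rather than SQL Server, but the pattern is the same: catch the duplicate-key error per statement, skip it, and let the rest of the transaction proceed.

```python
import sqlite3

# Sketch of trapping the duplicate-key error on the client and moving on.
# sqlite3 is used for illustration; with SQL Server the equivalent would be
# catching the provider's duplicate-key exception (error 2627) per statement.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE x (y INTEGER NOT NULL PRIMARY KEY)")

for value in (1, 2, 2, 3):          # the second 2 is a duplicate
    try:
        conn.execute("INSERT INTO x (y) VALUES (?)", (value,))
    except sqlite3.IntegrityError:
        pass                        # ignore the duplicate, continue
conn.commit()

rows = [r[0] for r in conn.execute("SELECT y FROM x ORDER BY y")]
print(rows)  # [1, 2, 3]
```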
Answered by Emmanuel
I think you are looking for the IGNORE_DUP_KEY option on your index. Have a look at the IGNORE_DUP_KEY = ON option documented at http://msdn.microsoft.com/en-us/library/ms186869.aspx, which causes duplicate insertion attempts to produce a warning instead of an error.
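In T-SQL the option is set when the unique index is created, e.g. CREATE UNIQUE INDEX ix_y ON x(y) WITH (IGNORE_DUP_KEY = ON). For a runnable illustration of the same "silently discard duplicates" behavior, here is a sketch using sqlite3's INSERT OR IGNORE, which is roughly the portable analogue (not the SQL Server feature itself):

```python
import sqlite3

# IGNORE_DUP_KEY is SQL Server-specific; sqlite's INSERT OR IGNORE is used
# here only to illustrate the behavior: a duplicate insert is discarded
# without raising an error, and the existing row is left untouched.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO t VALUES (1, 'first')")

# A plain INSERT would raise an IntegrityError here; this is a no-op.
conn.execute("INSERT OR IGNORE INTO t VALUES (1, 'duplicate')")
conn.commit()

count = conn.execute("SELECT COUNT(*) FROM t").fetchone()[0]
name = conn.execute("SELECT name FROM t WHERE id = 1").fetchone()[0]
print(count, name)  # 1 first
```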
Answered by Philip Kelley
Expanding on your comment to SquareCog's reply, you could do:
INSERT INTO X (Y, Z) SELECT @Y, @Z WHERE @Y NOT IN (SELECT Y FROM X)
INSERT INTO X2 (Y2, Z2) SELECT @Y2, @Z2 WHERE @Y2 NOT IN (SELECT Y2 FROM X2)
INSERT INTO X3 (Y3, Z3) SELECT @Y3, @Z3 WHERE @Y3 NOT IN (SELECT Y3 FROM X3)
Here, I assume that column Y is present in all three tables. Note that performance will be poor if the tables are not indexed on Y.
Oh yeah, Y has a unique constraint on it, so they're indexed, and this should perform optimally.
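The insert-only-missing-keys pattern can be demonstrated end to end. The sketch below uses Python with sqlite3 (table and column names match the example above; T-SQL @-variables become ? placeholders), but the SQL shape is the same:

```python
import sqlite3

# The WHERE ... NOT IN guard demonstrated end to end (sqlite3 used for
# illustration).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE x (y INTEGER NOT NULL UNIQUE, z TEXT)")
conn.execute("INSERT INTO x VALUES (1, 'existing')")

def insert_if_missing(y, z):
    # Insert (y, z) only when key y is not already present in x.
    conn.execute(
        "INSERT INTO x (y, z) SELECT ?, ? WHERE ? NOT IN (SELECT y FROM x)",
        (y, z, y))

insert_if_missing(1, 'dupe')   # skipped: key 1 already exists
insert_if_missing(2, 'new')    # inserted
conn.commit()

rows = conn.execute("SELECT y, z FROM x ORDER BY y").fetchall()
print(rows)  # [(1, 'existing'), (2, 'new')]
```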
Answered by user139593
If by "ignore duplicate error statements" you mean aborting the current statement and continuing to the next statement without aborting the transaction, then just put BEGIN TRY .. END TRY around each statement:
BEGIN TRY
    INSERT ...
END TRY
BEGIN CATCH /* required, but you don't have to do anything */ END CATCH
...
Answered by Vincent Vancalbergh
I'd like to chime in with the following:
If 99% of your data will insert without error, doing a SELECT beforehand results in a huge performance drop (in my case, from 200 rows/sec to 20 rows/sec) compared to "dumb" inserts and catching the occasional error.
After ignoring the "Violation of PRIMARY KEY constraint" errors, things went back to being bottlenecked by other resources (headroom being defined as "what the bottlenecking resources don't have").
Which is the whole reason I landed on this discussion in the first place.
Answered by Kingamoon
OK. After trying out some error handling, I figured out how to solve the issue I was having.
Here's an example of how to make this work (let me know if there's something I'm missing):
SET XACT_ABORT OFF; -- really important: must be OFF so an error does not abort the whole transaction

BEGIN
    DECLARE @Any_error int

    BEGIN TRANSACTION

    INSERT INTO Table1(Value1) VALUES('Value1')
    SELECT @Any_error = @@ERROR
    IF @Any_error <> 0 AND @Any_error <> 2627 GOTO ErrorHandler

    INSERT INTO Table1(Value1) VALUES('Value1')  -- duplicate: raises 2627, which is ignored
    SELECT @Any_error = @@ERROR
    IF @Any_error <> 0 AND @Any_error <> 2627 GOTO ErrorHandler

    INSERT INTO Table1(Value1) VALUES('Value2')
    SELECT @Any_error = @@ERROR
    IF @Any_error <> 0 AND @Any_error <> 2627 GOTO ErrorHandler

ErrorHandler:
    IF @Any_error = 0 OR @Any_error = 2627
        COMMIT TRAN
    ELSE
        ROLLBACK TRAN
END
As a result of the above transaction, Table1 will contain the values Value1 and Value2.
2627 is the error code for a duplicate key violation, by the way.
Thank you all for the prompt reply and helpful suggestions.
Answered by Dave Scotese
INSERT INTO KeyedTable(KeyField, Otherfield)
SELECT n.* FROM
    (SELECT 'PossibleDupeLiteral' AS KeyField, 'OtherfieldValue' AS Otherfield
     UNION ALL
     SELECT 'PossibleDupeLiteral', 'OtherfieldValue2'
    ) n
LEFT JOIN KeyedTable k
    ON k.KeyField = n.KeyField
WHERE k.KeyField IS NULL
This tells the Server to look for the same data (hopefully the same speedy way it does to check for duplicate keys) and insert nothing if it finds it.
I like the IGNORE_DUP_KEY solution too, but then anyone who relies on errors to catch problems will be mystified when the server silently ignores their dupe-key errors.
The reason I chose this over Philip Kelley's solution is that you can provide several rows of data and only have the missing ones actually get in.
Answered by Allen Cales
I came here because I was trying to do the same thing; I knew I had dupes in the source data but only wanted to update the target data and not add the dupes.
I think a MERGE works great here because you can UPDATE or DELETE things that are different and INSERT things that are missing.
I ended up doing this and it worked great. I use SSIS to loop through Excel files and load them into a "RAW" SQL table, dupes and all. Then I run a MERGE to merge the "raw" table with the production table. Then I TRUNCATE the "raw" table and move on to the next Excel file.
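SQL Server's MERGE has no direct equivalent in sqlite, but the same update-or-insert flow from a staging table can be sketched with sqlite's UPSERT (INSERT ... ON CONFLICT DO UPDATE). The table names below are invented for illustration:

```python
import sqlite3

# MERGE-style flow: update what matches, insert what's missing.
# "raw" is the staging table, "target" the production table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE target (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("CREATE TABLE raw (id INTEGER, name TEXT)")
conn.execute("INSERT INTO target VALUES (1, 'stale')")
conn.executemany("INSERT INTO raw VALUES (?, ?)",
                 [(1, 'alpha'), (1, 'alpha'), (2, 'beta')])  # dupes and all

# Collapse dupes in the staging data, then update-or-insert into the target.
# (The WHERE 1 disambiguates the UPSERT's ON from a join's ON clause.)
conn.execute("""
    INSERT INTO target (id, name)
    SELECT id, MIN(name) FROM raw WHERE 1 GROUP BY id
    ON CONFLICT(id) DO UPDATE SET name = excluded.name
""")
conn.execute("DELETE FROM raw")  # "truncate" the staging table
conn.commit()

rows = conn.execute("SELECT id, name FROM target ORDER BY id").fetchall()
print(rows)  # [(1, 'alpha'), (2, 'beta')]
```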
Answered by ToXinE
For SQL Server 2000:
-- @valueid / @valuename hold the candidate row's values
INSERT INTO t1 (ID, NAME)
SELECT @valueid, @valuename
WHERE NOT EXISTS
    (SELECT 0
     FROM t1 AS t2
     WHERE t2.ID = @valueid AND t2.name = @valuename)
Answered by Peter
Well, you could solve this with a table variable:
DECLARE @RoleToAdds TABLE
([RoleID] int, [PageID] int)
INSERT INTO @RoleToAdds ([RoleID], [PageID])
VALUES
(1, 2),
(1, 3),
(1, 4),
(2, 5)
INSERT INTO [dbo].[RolePages] ([RoleID], [PageID])
SELECT rta.[RoleID], rta.[PageID] FROM @RoleToAdds rta WHERE NOT EXISTS
(SELECT * FROM [RolePages] rp WHERE rp.PageID = rta.PageID AND rp.RoleID = rta.RoleID)
This might not work for large amounts of data, but for a few rows it should work!
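The staged-rows-plus-NOT-EXISTS approach above, as a runnable sketch. sqlite3 stands in for SQL Server here, and a temp table stands in for the table variable:

```python
import sqlite3

# Stage the candidate (RoleID, PageID) pairs, then insert only those
# not already present in the target table.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE RolePages (RoleID INTEGER, PageID INTEGER)")
conn.execute("INSERT INTO RolePages VALUES (1, 2)")  # pre-existing pair
conn.execute("CREATE TEMP TABLE RoleToAdds (RoleID INTEGER, PageID INTEGER)")
conn.executemany("INSERT INTO RoleToAdds VALUES (?, ?)",
                 [(1, 2), (1, 3), (1, 4), (2, 5)])

conn.execute("""
    INSERT INTO RolePages (RoleID, PageID)
    SELECT rta.RoleID, rta.PageID
    FROM RoleToAdds rta
    WHERE NOT EXISTS (SELECT * FROM RolePages rp
                      WHERE rp.PageID = rta.PageID AND rp.RoleID = rta.RoleID)
""")
conn.commit()

rows = conn.execute(
    "SELECT RoleID, PageID FROM RolePages ORDER BY RoleID, PageID").fetchall()
print(rows)  # [(1, 2), (1, 3), (1, 4), (2, 5)]
```

Note that the pre-existing (1, 2) pair is filtered out by the NOT EXISTS guard, so no duplicate row is created.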