postgresql 抑制“重复键值违反唯一约束”错误
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/12385763/
Warning: these are provided under cc-by-sa 4.0 license. You are free to use/share it, But you must attribute it to the original authors (not me):
StackOverFlow
Suppress "duplicate key value violates unique constraint" errors
提问by Alex Hockey
I'm developing a Rails 3 app that uses Postgres as its database. I've got the table shown below:
我正在开发一个使用 Postgres 作为数据库的 Rails 3 应用程序。我有如下所示的表:
Table "public.test"
Column | Type | Modifiers
---------------+---------+-----------
id | integer | not null
some_other_id | integer |
Indexes:
"test_pkey" PRIMARY KEY, btree (id)
"some_other_id_key" UNIQUE CONSTRAINT, btree (some_other_id)
This has two columns:
这有两列:
- id, which is the primary key (automatically created by rails)
- some_other_id, which contains keys generated by another system. This id needs to be unique, so I've added a unique key constraint to the table.
- id,主键(由 Rails 自动创建)
- some_other_id,保存由另一个系统生成的键。这个 id 必须唯一,所以我给该表添加了唯一约束。
Now if I try to insert a row with a duplicate some_other_id, it fails (good) and I get the following output in my Postgres logs:
现在,如果我尝试插入一行带有重复 some_other_id 的数据,它会失败(这是好事),并且我在 Postgres 日志中看到如下输出:
ERROR: duplicate key value violates unique constraint "some_other_id_key"
The problem is that it's completely mainline for my app to try and add the same ID twice, and my logs are being spammed with this "ERROR" message, which causes various problems: files take a lot of disk space, diagnostics get lost in the noise, Postgres has to throw away diags to keep the log files within size limits, etc.
问题在于,我的应用程序尝试重复添加同一个 ID 完全是正常流程,而我的日志却被这条“ERROR”消息刷屏,这会导致各种问题:日志文件占用大量磁盘空间、有用的诊断信息淹没在噪音里、Postgres 为了把日志文件控制在大小限制内不得不丢弃诊断信息,等等。
Does anyone know how I can either:
有谁知道我可以如何:
- Suppress the log, either by suppressing all logs about this key, or perhaps by specifying something on the transaction that tries to do the INSERT.
- Use some other Postgres feature to spot the duplicate key and not try the INSERT. I've heard of rules and triggers but I can't get either to work (though I'm no Postgres expert).
- 抑制这条日志,要么抑制与这个键相关的所有日志,要么在尝试执行 INSERT 的事务上指定某个选项。
- 使用 Postgres 的其他功能来发现重复键,从而根本不去尝试 INSERT。我听说过规则(rules)和触发器(triggers),但我哪个都没能用起来(虽然我不是 Postgres 专家)。
Note that any solution needs to work with Rails, which does its inserts like this:
请注意,任何解决方案都需要与 Rails 一起使用,Rails 会像这样执行插入操作:
INSERT INTO test (some_other_id) VALUES (123) RETURNING id;
回答by Erwin Brandstetter
To avoid the duplicate key error to begin with:
要从一开始就避免重复键错误:
INSERT INTO test (some_other_id)
SELECT 123
WHERE NOT EXISTS (SELECT 1 FROM test WHERE some_other_id = 123)
RETURNING id;
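As a side note not covered in the original answers: on PostgreSQL 9.5 or later (released after this question was asked), INSERT ... ON CONFLICT DO NOTHING achieves the same result without the NOT EXISTS subquery and without the race condition discussed below:

```sql
-- PostgreSQL 9.5+ only: silently skip the insert when some_other_id exists.
-- No "duplicate key" error is raised, so nothing is written to the log.
-- RETURNING yields no row when the insert is skipped, so the caller can
-- detect that case.
INSERT INTO test (some_other_id)
VALUES (123)
ON CONFLICT (some_other_id) DO NOTHING
RETURNING id;
```

顺带一提(原回答未涉及):在 PostgreSQL 9.5 及之后的版本中,上面的 ON CONFLICT DO NOTHING 可以在不抛错、不写日志的情况下直接跳过重复行,也不存在下文讨论的竞争条件。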
I am assuming id is a serial column that gets its value automatically.
我假设 id 是一个自动获取值的 serial 列。
This is subject to a very tiny race condition (in the time slot between the SELECT and the INSERT). But the worst that can happen is that you get a duplicate key error after all, and this will hardly ever occur and shouldn't be a problem in your case.
这存在一个极小的竞争条件(在 SELECT 与 INSERT 之间的时间窗口内)。但最坏的情况也只是你最终还是收到一次重复键错误,这种情况几乎不会发生,在你的场景里应当不成问题。
You can always use raw SQL if your framework restricts your options to use proper syntax.
如果您的框架限制了您使用正确语法的能力,您始终可以改用原始 SQL。
Or you can create a UDF (user defined function) for the purpose:
或者您可以为此目的创建一个 UDF(用户定义函数):
CREATE FUNCTION f_my_insert(int)
  RETURNS int
  LANGUAGE sql AS
$func$
INSERT INTO test (some_other_id)
SELECT $1
WHERE  NOT EXISTS (SELECT 1 FROM test WHERE some_other_id = $1)
RETURNING id;
$func$;
Call:
调用:
SELECT f_my_insert(123);
Or, to default to an already existing id:
或者,当该 id 已经存在时默认返回它:
CREATE FUNCTION f_my_insert(int)
  RETURNS int
  LANGUAGE plpgsql AS
$func$
DECLARE
   _id int;
BEGIN
   SELECT id INTO _id FROM test WHERE some_other_id = $1;

   IF NOT FOUND THEN
      INSERT INTO test (some_other_id)
      VALUES ($1)
      RETURNING id INTO _id;
   END IF;

   RETURN _id;
END
$func$;
Again, that leaves a minimal chance for a race condition. You can eliminate that at the cost of slower performance, for instance by locking the table for the duration of the transaction.
同样,这仍然留有极小的竞争条件的可能。您可以以牺牲性能为代价来消除它,例如在事务期间锁定该表。
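One way to close the race window entirely (a sketch under the assumption that write throughput on this table is low enough to tolerate serialized writers; this is not spelled out in the original answer):

```sql
-- Sketch: take an explicit table lock so only one transaction at a time can
-- run the check-then-insert. SHARE ROW EXCLUSIVE blocks concurrent writers
-- (and other holders of this lock) but still allows plain reads. The lock is
-- released automatically at COMMIT.
BEGIN;

LOCK TABLE test IN SHARE ROW EXCLUSIVE MODE;

INSERT INTO test (some_other_id)
SELECT 123
WHERE NOT EXISTS (SELECT 1 FROM test WHERE some_other_id = 123)
RETURNING id;

COMMIT;
```

竞争条件的一种彻底消除方式(示意,假设该表的写入量不大,可以接受写入串行化):在事务中先用 LOCK TABLE 显式加锁,再执行上面的条件插入;锁会在 COMMIT 时自动释放。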
回答by a_horse_with_no_name
You can disable logging of error messages for the session (or globally actually) but it requires superuser privileges:
您可以禁用会话(或实际上全局)的错误消息日志记录,但它需要超级用户权限:
By running:
通过运行:
set log_min_messages=fatal;
only fatal errors are logged until the session (= connection) is ended or you issue a new set statement to reset the value.
在会话(即连接)结束或您发出新的 set 语句重置该值之前,只会记录致命错误。
But as only a superuser is allowed to change this, it is probably not a good solution as it would required your application user to have that privilege which is a major security problem.
但由于只有超级用户才允许更改此设置,这可能不是一个好的解决方案,因为它要求您的应用程序用户拥有超级用户权限,而这是一个重大的安全问题。
回答by Randall
If you just want to suppress those errors while working in psql, you can do
如果您只想在 psql 中工作时抑制这些错误,可以执行
SET client_min_messages TO fatal
which will last for the rest of your session.
该设置在本次会话的剩余时间内一直有效。