postgresql: out of shared memory?
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me): StackOverflow
Original link: http://stackoverflow.com/questions/3132533/
Asked by Claudiu
I'm running a bunch of queries using Python and psycopg2. I create one large temporary table with about 2 million rows, then I fetch 1000 rows at a time from it using cur.fetchmany(1000) and run more extensive queries involving those rows. The extensive queries are self-sufficient, though - once they are done, I don't need their results anymore when I move on to the next 1000.
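A minimal sketch of the pattern just described (the connection string, table names, and the stand-in "extensive" work are hypothetical, not from the original post):

import psycopg2

conn = psycopg2.connect("dbname=mydb")  # hypothetical DSN
cur = conn.cursor()
# one big temp table (~2 million rows in the original scenario)
cur.execute("CREATE TEMP TABLE big_tmp AS SELECT * FROM source_table")
cur.execute("SELECT * FROM big_tmp")
while True:
    rows = cur.fetchmany(1000)
    if not rows:
        break
    # run the more extensive queries for this batch on a second cursor;
    # each batch creates and then drops its own temporary tables
    work = conn.cursor()
    work.execute("CREATE TEMP TABLE batch_work AS SELECT 1 AS x")
    work.execute("DROP TABLE batch_work")
    work.close()
# note: no conn.commit() anywhere, so all of this runs in a single transaction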
However, about 1,000,000 rows in, I got an exception from psycopg2:
psycopg2.OperationalError: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
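For context on that hint: max_locks_per_transaction can only be changed at server start (e.g. in postgresql.conf), but its current value is easy to check from the same connection. A quick check, assuming an existing psycopg2 connection conn:

cur = conn.cursor()
cur.execute("SHOW max_locks_per_transaction")
print(cur.fetchone()[0])  # defaults to 64
# The shared lock table holds roughly
# max_locks_per_transaction * (max_connections + max_prepared_transactions)
# entries in total, shared by all sessions, so a single transaction can use
# more than max_locks_per_transaction locks before the table fills up.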
Funnily enough, this happened when I was executing a query to drop some temporary tables that the more extensive queries created.
Why might this happen? Is there any way to avoid it? It was annoying that this happened halfway through, meaning I have to run it all again. What might max_locks_per_transaction have to do with anything?
NOTE: I'm not doing any .commit()s, but I'm deleting all the temporary tables I create, and I'm only touching the same 5 tables anyway for each "extensive" transaction, so I don't see how running out of table locks could be the problem...
Answered by araqnid
When you create a table, you get an exclusive lock on it that lasts until the end of the transaction, even if you then go ahead and drop it.
So if I start a tx and create a temp table:
steve@steve@[local] *=# create temp table foo(foo_id int);
CREATE TABLE
steve@steve@[local] *=# select * from pg_locks where pid = pg_backend_pid();
locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted
---------------+----------+-----------+------+-------+------------+---------------+---------+-----------+----------+--------------------+-------+---------------------+---------
virtualxid | | | | | 2/105315 | | | | | 2/105315 | 19098 | ExclusiveLock | t
transactionid | | | | | | 291788 | | | | 2/105315 | 19098 | ExclusiveLock | t
relation | 17631 | 10985 | | | | | | | | 2/105315 | 19098 | AccessShareLock | t
relation | 17631 | 214780901 | | | | | | | | 2/105315 | 19098 | AccessExclusiveLock | t
object | 17631 | | | | | | 2615 | 124616403 | 0 | 2/105315 | 19098 | AccessExclusiveLock | t
object | 0 | | | | | | 1260 | 16384 | 0 | 2/105315 | 19098 | AccessShareLock | t
(6 rows)
These 'relation' locks aren't dropped when I drop the table:
steve@steve@[local] *=# drop table foo;
DROP TABLE
steve@steve@[local] *=# select * from pg_locks where pid = pg_backend_pid();
locktype | database | relation | page | tuple | virtualxid | transactionid | classid | objid | objsubid | virtualtransaction | pid | mode | granted
---------------+----------+-----------+------+-------+------------+---------------+---------+-----------+----------+--------------------+-------+---------------------+---------
virtualxid | | | | | 2/105315 | | | | | 2/105315 | 19098 | ExclusiveLock | t
object | 17631 | | | | | | 1247 | 214780902 | 0 | 2/105315 | 19098 | AccessExclusiveLock | t
transactionid | | | | | | 291788 | | | | 2/105315 | 19098 | ExclusiveLock | t
relation | 17631 | 10985 | | | | | | | | 2/105315 | 19098 | AccessShareLock | t
relation | 17631 | 214780901 | | | | | | | | 2/105315 | 19098 | AccessExclusiveLock | t
object | 17631 | | | | | | 2615 | 124616403 | 0 | 2/105315 | 19098 | AccessExclusiveLock | t
object | 17631 | | | | | | 1247 | 214780903 | 0 | 2/105315 | 19098 | AccessExclusiveLock | t
object | 0 | | | | | | 1260 | 16384 | 0 | 2/105315 | 19098 | AccessShareLock | t
(8 rows)
In fact, it added two more locks... It seems if I continually create/drop that temp table, it adds 3 locks each time.
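That growth is easy to reproduce from Python as well; a small sketch (the DSN is hypothetical) that counts this backend's entries in pg_locks after each create/drop:

import psycopg2

conn = psycopg2.connect("dbname=test")  # hypothetical DSN
cur = conn.cursor()
for i in range(5):
    cur.execute("CREATE TEMP TABLE scratch (id int)")
    cur.execute("DROP TABLE scratch")
    cur.execute("SELECT count(*) FROM pg_locks WHERE pid = pg_backend_pid()")
    print("iteration", i, "locks held:", cur.fetchone()[0])
# with no commit, everything stays in one transaction and the count keeps climbing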
So I guess one answer is that you will need enough locks to cope with all these tables being added/dropped throughout the transaction. Alternatively, you could try to reuse the temp tables between queries: simply truncate them to remove all the temp data?
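A rough sketch of that reuse-and-truncate idea, assuming an existing psycopg2 cursor cur (the table layout, the iter_batches() helper, and the per-batch query are hypothetical stand-ins):

cur.execute("CREATE TEMP TABLE batch_scratch (id int, payload text)")  # created once, up front
for rows in iter_batches():                   # yields lists of (id, payload) tuples
    cur.execute("TRUNCATE batch_scratch")     # empty it instead of DROP + CREATE
    cur.executemany(
        "INSERT INTO batch_scratch (id, payload) VALUES (%s, %s)", rows)
    # the real "extensive" queries joining batch_scratch would go here
    cur.execute("SELECT count(*) FROM batch_scratch")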
Answered by notpeter
Did you create multiple savepoints with the same name without releasing them?
I followed these instructions, repeatedly executing SAVEPOINT savepoint_name but without ever executing any corresponding RELEASE SAVEPOINT savepoint_name statements. PostgreSQL was just masking the old savepoints, never freeing them. It kept track of each one until it ran out of memory for locks. I think my PostgreSQL memory limits were much lower; it only took ~10,000 savepoints for me to hit max_locks_per_transaction.
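A sketch of that fix, assuming an existing psycopg2 cursor cur (the savepoint name and loop body are arbitrary):

# Per the answer above: reusing the same savepoint name only masks the previous
# savepoint; without a RELEASE, each iteration leaves another one behind until
# the lock table fills up.
for i in range(10000):
    cur.execute("SAVEPOINT loop_sp")
    # ... per-iteration work that may be rolled back to the savepoint on error ...
    cur.execute("RELEASE SAVEPOINT loop_sp")  # free it once this iteration's work is done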
Answered by Hyman Lloyd
Well, are you running the entire create + queries inside a single transaction? This would perhaps explain the issue. Just because it happened when you were dropping tables would not necessarily mean anything; that may just happen to be the point when it ran out of free locks.
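If the batches really don't need to be atomic as a whole, one way to act on that observation is to commit after each batch so the locks it accumulated are released. A sketch under two assumptions: the outer cursor is a default (client-side) psycopg2 cursor, whose result set has already been fetched to the client and so survives the commit, and the temp table uses the default ON COMMIT PRESERVE ROWS; run_extensive_queries is a hypothetical helper:

cur.execute("SELECT * FROM big_tmp")    # default cursor: all rows are buffered client-side
while True:
    rows = cur.fetchmany(1000)
    if not rows:
        break
    run_extensive_queries(conn, rows)   # creates and drops its own temp tables
    conn.commit()                       # ends the transaction, releasing its accumulated locks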
Using a view might be an alternative to a temporary table, and it would definitely be my first pick if you're creating this thing and then immediately removing it.