如何在 postgresql 中获取整个表的哈希值?
声明:本页面是StackOverFlow热门问题的中英对照翻译,遵循CC BY-SA 4.0协议,如果您需要使用它,必须同样遵循CC BY-SA许可,注明原文地址和作者信息,同时你必须将它归于原作者(不是我):StackOverFlow
原文地址: http://stackoverflow.com/questions/4020033/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverFlow
How can I get a hash of an entire table in postgresql?
提问 by Ben
I would like a fairly efficient way to condense an entire table to a hash value.
我想要一种相当有效的方法来将整个表压缩为一个哈希值。
I have some tools that generate entire data tables, which can then be used to generate further tables, and so on. I'm trying to implement a simplistic build system to coordinate build runs and avoid repeating work. I want to be able to record hashes of the input tables so that I can later check whether they have changed. Building a table takes minutes or hours, so spending several seconds building hashes is acceptable.
我有一些工具可以生成整个数据表,然后可以用来生成更多的表,等等。我正在尝试实施一个简单的构建系统来协调构建运行并避免重复工作。我希望能够记录输入表的哈希值,以便稍后检查它们是否已更改。构建一张表需要几分钟或几小时,因此花费几秒钟来构建哈希是可以接受的。
A hack I have used is to just pipe the output of pg_dump to md5sum, but that requires transferring the entire table dump over the network to hash it on the local box. Ideally I'd like to produce the hash on the database server.
我用过的一个技巧是把 pg_dump 的输出通过管道传给 md5sum,但这需要把整个表的转储通过网络传输,再在本地机器上计算散列。理想情况下,我希望直接在数据库服务器上生成哈希值。
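As an illustration only (this script is not from the original post), the pg_dump hack can be written in Python; `dbname` and `table` are placeholders, and `pg_dump` is assumed to be on the PATH:

```python
import hashlib
import subprocess

def md5_of_stream(chunks):
    """Hash an iterable of byte chunks incrementally, so the full dump
    never has to be held in memory at once."""
    h = hashlib.md5()
    for chunk in chunks:
        h.update(chunk)
    return h.hexdigest()

def md5_of_pg_dump(dbname, table):
    """Rough equivalent of `pg_dump -t table dbname | md5sum`. Note the whole
    dump still crosses the network -- exactly the cost the question wants to avoid."""
    proc = subprocess.Popen(["pg_dump", "--table", table, dbname],
                            stdout=subprocess.PIPE)
    digest = md5_of_stream(iter(lambda: proc.stdout.read(65536), b""))
    proc.wait()
    return digest
```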
Finding the hash value of a row in postgresql gives me a way to calculate a hash for a row at a time, which could then be combined somehow.
在 postgresql 中查找一行的散列值给了我一种方法来一次计算一行的散列,然后可以以某种方式组合。
Any tips would be greatly appreciated.
任何提示将非常感谢。
Edit to post what I ended up with: tinychen's answer didn't work for me directly, because apparently I couldn't use 'plpgsql'. When I implemented the function in SQL instead, it worked, but was very inefficient for large tables. So instead of concatenating all the row hashes and then hashing that, I switched to using a "rolling hash", where the previous hash is concatenated with the text representation of a row and then that is hashed to produce the next hash. This was much better; apparently running md5 on short strings millions of extra times is better than concatenating short strings millions of times.
编辑发布我最终得到的内容:tinychen 的回答对我没有直接作用,因为我显然无法使用“plpgsql”。当我在 SQL 中实现该函数时,它起作用了,但对于大表来说效率很低。因此,我没有连接所有行散列然后对其进行散列,而是转而使用“滚动散列”,其中前一个散列与行的文本表示连接,然后散列以生成下一个散列。这要好得多;显然,在短字符串上运行 md5 数百万次比将短字符串连接数百万次要好。
create function zz_concat(text, text) returns text as
    'select md5($1 || $2);' language 'sql';

create aggregate zz_hashagg(text) (
    sfunc = zz_concat,
    stype = text,
    initcond = '');
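The aggregate's rolling step is easy to mirror outside SQL. A sketch in Python of the same state transition (`state = md5(state || row_text)`), with `rows` standing in for the text casts of the table's rows in a fixed order; the names here are illustrative, not from the original post:

```python
import hashlib

def md5_hex(s: str) -> str:
    return hashlib.md5(s.encode()).hexdigest()

def rolling_table_hash(rows, initcond=""):
    """Mirror of zz_hashagg: fold each row into the running hash,
    state = md5(state || row_text)."""
    state = initcond
    for row_text in rows:
        state = md5_hex(state + row_text)  # the zz_concat step
    return state
```

Because each step hashes only a short string, the running state stays 32 characters long no matter how many rows are folded in, which is why this beats concatenating millions of row hashes before hashing.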
采纳答案 by tinychen
Just do it like this to create a hash aggregate function.
只是这样做来创建一个哈希表聚合函数。
create function pg_concat( text, text ) returns text as '
begin
    if $1 isnull then
        return $2;
    else
        return $1 || $2;
    end if;
end;' language 'plpgsql';

create function pg_concat_fin(text) returns text as '
begin
    return $1;
end;' language 'plpgsql';

create aggregate pg_concat (
    basetype = text,
    sfunc = pg_concat,
    stype = text,
    finalfunc = pg_concat_fin);
Then you can use the pg_concat aggregate to calculate the table's hash value.
那么你可以使用 pg_concat 函数来计算表的哈希值。
select md5(pg_concat(md5(CAST((f.*) AS text)))) from f order by id
回答 by Tomas Greif
I know this is an old question, but here is my solution:
我知道这是个老问题,但这是我的解决方案:
SELECT
md5(CAST((array_agg(f.* order by id))AS text)) /* id is a primary key of table (to avoid random sorting) */
FROM
foo f;
回答 by nick_olya
SELECT md5(array_agg(md5((t.*)::varchar))::varchar)
FROM (
SELECT *
FROM my_table
ORDER BY 1
) AS t
回答 by harmic
I had a similar requirement, to use when testing a specialized table replication solution.
我有一个类似的要求,在测试专门的表复制解决方案时使用。
@Ben's rolling MD5 solution (which he appended to the question) seems quite efficient, but there were a couple of traps which tripped me up.
@Ben 的滚动 MD5 解决方案(他将其附加到问题中)似乎非常有效,但是有几个陷阱让我绊倒了。
The first (mentioned in some of the other answers) is that you need to ensure that the aggregate is performed in a known order over the table you are checking. The syntax for that is eg.
第一个(在其他一些答案中提到)是您需要确保在您正在检查的表上以已知顺序执行聚合。其语法是例如。
select zz_hashagg(CAST((example.*)AS text) order by id) from example;
Note the order by is inside the aggregate.
注意 order by 位于聚合函数内部。
The second is that using CAST((example.*) AS text) will not give identical results for two tables with the same column contents unless the columns were created in the same order. In my case that was not guaranteed, so to get a true comparison I had to list the columns separately, for example:
第二个是,使用 CAST((example.*) AS text) 不会为列内容相同的两个表给出相同的结果,除非这些列是以相同顺序创建的。在我的情况下无法保证这一点,因此为了进行真正的比较,我必须单独列出各列,例如:
select zz_hashagg(CAST((example.id, example.a, example.c)AS text) order by id) from example;
For completeness (in case a subsequent edit should remove it) here is the definition of the zz_hashagg from @Ben's question:
为了完整起见(以防后续编辑应将其删除)这里是@Ben 问题中 zz_hashagg 的定义:
create function zz_concat(text, text) returns text as
    'select md5($1 || $2);' language 'sql';

create aggregate zz_hashagg(text) (
    sfunc = zz_concat,
    stype = text,
    initcond = '');
回答 by Thilo
As for the algorithm, you could XOR all the individual MD5 hashes, or concatenate them and hash the concatenation.
至于算法,您可以对所有单独的 MD5 散列进行异或,或者将它们连接起来并对连接进行散列。
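For illustration only (not part of the original answer), the XOR variant looks like this in Python. Unlike concatenate-then-hash, XOR makes the result independent of row order, at the cost of some known weaknesses, e.g. a row that appears an even number of times cancels itself out:

```python
import hashlib

def xor_combine(row_hashes):
    """Combine per-row MD5 hex digests by XOR-ing the raw 16-byte digests.
    The result does not depend on the order of the rows."""
    acc = bytes(16)
    for h in row_hashes:
        acc = bytes(a ^ b for a, b in zip(acc, bytes.fromhex(h)))
    return acc.hex()
```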
If you want to do this completely server-side you probably have to create your own aggregation function, which you could then call.
如果您想完全在服务器端执行此操作,您可能必须创建自己的聚合函数,然后您可以调用该函数。
select my_table_hash(md5(CAST((f.*) AS text))) from f order by id
As an intermediate step, instead of copying the whole table to the client, you could just select the MD5 results for all rows, and run those through md5sum.
作为中间步骤,您可以只选择所有行的 MD5 结果,然后通过 md5sum 运行这些结果,而不是将整个表复制到客户端。
Either way you need to establish a fixed sort order, otherwise you might end up with different checksums even for the same data.
无论哪种方式,您都需要建立固定的排序顺序,否则即使对于相同的数据,您最终也可能会得到不同的校验和。
回答 by 1737973
Great answers.
很棒的答案。
In case someone needs, for whatever reason, to avoid aggregate functions while keeping support for tables several GiB in size, the following function carries only a small performance penalty, compared to the best answers here, on the largest tables.
如果有人因故不能使用聚合函数,但又需要支持几个 GiB 大小的表,可以使用下面这个函数;即使对最大的表,它相对本页最佳答案的性能损失也很小。
CREATE OR REPLACE FUNCTION table_md5(
      table_name CHARACTER VARYING
    , VARIADIC order_key_columns CHARACTER VARYING [])
RETURNS CHARACTER VARYING AS $$
DECLARE
    order_key_columns_list CHARACTER VARYING;
    query CHARACTER VARYING;
    first BOOLEAN;
    i SMALLINT;
    working_cursor REFCURSOR;
    working_row_md5 CHARACTER VARYING;
    partial_md5_so_far CHARACTER VARYING;
BEGIN
    -- build the comma-separated ORDER BY column list
    order_key_columns_list := '';
    first := TRUE;
    FOR i IN 1..array_length(order_key_columns, 1) LOOP
        IF first THEN
            first := FALSE;
        ELSE
            order_key_columns_list := order_key_columns_list || ', ';
        END IF;
        order_key_columns_list := order_key_columns_list || order_key_columns[i];
    END LOOP;

    query := (
        'SELECT ' ||
            'md5(CAST(t.* AS TEXT)) ' ||
        'FROM (' ||
            'SELECT * FROM ' || table_name || ' ' ||
            'ORDER BY ' || order_key_columns_list ||
        ') t');

    OPEN working_cursor FOR EXECUTE (query);
    -- RAISE NOTICE 'opened cursor for query: ''%''', query;

    -- rolling hash over the per-row md5 values
    first := TRUE;
    LOOP
        FETCH working_cursor INTO working_row_md5;
        EXIT WHEN NOT FOUND;
        IF first THEN
            first := FALSE;
            SELECT working_row_md5 INTO partial_md5_so_far;
        ELSE
            SELECT md5(working_row_md5 || partial_md5_so_far)
                INTO partial_md5_so_far;
        END IF;
        -- RAISE NOTICE 'partial md5 so far: %', partial_md5_so_far;
    END LOOP;

    -- RAISE NOTICE 'final md5: %', partial_md5_so_far;
    RETURN partial_md5_so_far :: CHARACTER VARYING;
END;
$$ LANGUAGE plpgsql;
Used as:
用作:
SELECT table_md5(
'table_name', 'sorting_col_0', 'sorting_col_1', ..., 'sorting_col_n'
);