
Disclaimer: this page is a Chinese-English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must follow the same license and attribute it to the original authors (not me). Original source: http://stackoverflow.com/questions/21494665/

Date: 2020-09-11 00:36:32  Source: igfitidea

Convert date to unix timestamp in postgresql

Tags: postgresql, date, unix-timestamp

Asked by javadude

I have a table with a column abc carrying a Unix timestamp in milliseconds (e.g. 13898161481435), and I want to run a between-dates select on it.


It would not be efficient to do a


where TO_CHAR(TO_TIMESTAMP(abc / 1000), 'DD/MM/YYYY') > '14/01/2014 00:00:00' and ..;

which would convert every record.


Rather, do something like
where abc > ('14/01/2014 00:00:00' tobigint()) and abc < ...


But I can't find any reference for this, only for the reverse case.

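What the question is after can be sketched in application code as well: convert the date-string bound once into an epoch-milliseconds bigint and compare the raw column against it. A minimal Python sketch, assuming the column stores UTC milliseconds (the helper name `to_epoch_millis` is made up for illustration):

```python
from datetime import datetime, timezone

def to_epoch_millis(s: str, fmt: str = "%d/%m/%Y %H:%M:%S") -> int:
    """Parse a date string (assumed to be UTC) into epoch milliseconds,
    the same unit as a bigint column holding Unix time in ms."""
    dt = datetime.strptime(s, fmt).replace(tzinfo=timezone.utc)
    return int(dt.timestamp() * 1000)

lower = to_epoch_millis("14/01/2014 00:00:00")
# The bound can then be passed as a plain bigint parameter:
#   SELECT ... WHERE abc > %(lower)s
```

Converting the bound once on the client side means the query compares the bigint column directly, with no per-row function call.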

Answered by Vignesh Kumar A

Try this


WHERE abc > extract(epoch from timestamp '2014-01-28 00:00:00')

PostgreSQL Docs


Answered by harmic

You do not need to convert it to char to compare it.


WHERE to_timestamp(abc/1000) > timestamp '2014-01-28 00:00:00'

I don't think that conversion would be very inefficient, because timestamps are stored internally in a format similar to epoch seconds (admittedly with a different origin and resolution).


If you really want to go the other way:


WHERE abc > extract(epoch from timestamp '2014-01-28 00:00:00')
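To make the direction of the conversion concrete, here is a small Python analogue (not the answer's SQL, just an illustration with made-up sample values): the bound is converted to epoch seconds once, mirroring `extract(epoch from ...)` on the constant side, and every stored value is then compared as a plain number.

```python
from datetime import datetime, timezone

# Convert the bound once -- the counterpart of
# extract(epoch from timestamp '2014-01-28 00:00:00')
bound = int(datetime(2014, 1, 28, tzinfo=timezone.utc).timestamp())

# Hypothetical abc values, stored as epoch seconds
rows = [1389657600, 1390867200, 1391040000]  # 14 Jan, 28 Jan, 30 Jan 2014

# Strict '>' as in the WHERE clause: the row equal to the bound is excluded
after_bound = [r for r in rows if r > bound]
```

The comparison stays on the raw numbers; only the literal bound is ever converted.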

Answered by javadude

An interesting observation, though: while


select count(*) from cb.logs where to_timestamp(timestmp/1000) > timestamp '2014-01-15 00:00:00' and to_timestamp(timestmp/1000) < timestamp '2014-01-15 23:59:59';

takes almost 10 seconds (my database holds about 1.5 million records), the query below takes only about 1.5 seconds


select count(*) from cb.logs where (timestmp > (select extract(epoch from timestamp '2014-01-15 00:00:00') * 1000) and timestmp < (select extract(epoch from timestamp '2014-01-15 23:59:59') * 1000));

and the one below about 1 second


select count(*) from cb.logs where (timestmp > extract(epoch from timestamp '2014-01-15 00:00:00') * 1000) and (timestmp < extract(epoch from timestamp '2014-01-15 23:59:59') * 1000);

to count ~40,000 records.


Most likely because of the division, I would say.

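The gap is consistent with where the conversion happens: the slow form applies `to_timestamp()` to every row, while the fast forms convert the two bounds once and compare the bigint directly (which also leaves the column free for index use). A rough Python illustration of the same effect, using synthetic epoch-millisecond values:

```python
from datetime import datetime, timezone

# Synthetic epoch-millisecond values, standing in for the timestmp column
ts = range(1_389_700_000_000, 1_389_800_000_000, 500_000)

bound_dt = datetime(2014, 1, 15, tzinfo=timezone.utc)

# Slow form: convert every row before comparing,
# like to_timestamp(timestmp/1000) > timestamp '...'
n_slow = sum(1 for v in ts
             if datetime.fromtimestamp(v / 1000, tz=timezone.utc) > bound_dt)

# Fast form: convert the bound once, then compare raw integers,
# like timestmp > extract(epoch from timestamp '...') * 1000
bound_ms = int(bound_dt.timestamp() * 1000)
n_fast = sum(1 for v in ts if v > bound_ms)

assert n_slow == n_fast  # same result, far fewer conversions
```

Both predicates are equivalent because the millisecond-to-timestamp mapping is monotonic; only the amount of per-row work differs.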

Answered by javadude

Query 1:

select count(*) from cb.logs where to_timestamp(timestmp/1000) > timestamp '2014-01-15 00:00:00' and to_timestamp(timestmp/1000) < timestamp '2014-01-15 23:59:59';  
8600ms

"Aggregate  (cost=225390.52..225390.53 rows=1 width=0)"
"  ->  Seq Scan on logs  (cost=0.00..225370.34 rows=8073 width=0)"
"        Filter: ((to_timestamp(((timestmp / 1000))::double precision) > '2014-01-15 00:00:00'::timestamp without time zone) AND (to_timestamp(((timestmp / 1000))::double precision) < '2014-01-15 23:59:59'::timestamp without time zone))"

Query 2:

select count(*) from cb.logs where (timestmp > (select extract(epoch from timestamp '2014-01-15 00:00:00') * 1000) and timestmp < (select extract(epoch from timestamp '2014-01-15 23:59:59') * 1000));
1199ms
"Aggregate  (cost=209245.94..209245.95 rows=1 width=0)"
"  InitPlan 1 (returns $0)"
"    ->  Result  (cost=0.00..0.01 rows=1 width=0)"
"  InitPlan 2 (returns $1)"
"    ->  Result  (cost=0.00..0.01 rows=1 width=0)"
"  ->  Seq Scan on logs  (cost=0.00..209225.74 rows=8073 width=0)"
"        Filter: (((timestmp)::double precision > $0) AND ((timestmp)::double precision < $1))"