Note: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must follow the same license and attribute it to the original authors (not the translator). Original: http://stackoverflow.com/questions/407006/

Need to load the whole PostgreSQL database into RAM

Tags: performance, postgresql, ram

Asked by Bharath

How do I put my whole PostgreSQL database into RAM for faster access? I have 8 GB of memory and I want to dedicate 2 GB to the DB. I have read about the shared_buffers setting, but it only caches the most frequently accessed fragments of the database. I need a solution where the whole DB is kept in RAM, every read is served from the RAM copy, and every write goes to the RAM copy first and then to the DB on the hard drive (something like the default fsync = on combined with shared_buffers in the PostgreSQL configuration settings).

Accepted answer by Nicholas Leonard

I have asked myself the same question for a while. One of the disadvantages of PostgreSQL is that it does not seem to support an IN MEMORY storage engine as MySQL does...

Anyway, I ran into an article a couple of weeks ago describing how this could be done, although it only seems to work on Linux. I really can't vouch for it, for I have not tried it myself, but it does seem to make sense, since a PostgreSQL tablespace is indeed assigned to a mounted repository.

However, even with this approach, I am not sure you could put your index(es) into RAM as well; I do not think MySQL forces HASH index use with its IN MEMORY tables for nothing...

I also wanted to do something similar to improve performance, since I am also working with huge data sets. I am using Python; it has a dictionary data type, which is basically a hash table of {key: value} pairs. Using these is very efficient and effective. Basically, to get my PostgreSQL table into RAM, I load it into such a Python dictionary, work with it, and persist it back to the DB once in a while; it's worth it if it is used well.

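A minimal sketch of that pattern, assuming the psycopg2 driver and a made-up kv(id, value) table; in practice you would track which keys are dirty rather than rewriting everything, but the shape of the approach is the same:

```python
import psycopg2

# Placeholder connection string; adjust for your setup.
conn = psycopg2.connect("dbname=mydb user=myuser")

# Load the whole table into an in-memory dict: {id: value}.
cache = {}
with conn.cursor() as cur:
    cur.execute("SELECT id, value FROM kv")
    for row_id, value in cur:
        cache[row_id] = value

# Reads and writes now touch only the hash table in RAM.
cache[42] = "updated"

# Once in a while, persist the dict back to the table.
# (ON CONFLICT needs PostgreSQL 9.5+.)
with conn.cursor() as cur:
    for row_id, value in cache.items():
        cur.execute(
            "INSERT INTO kv (id, value) VALUES (%s, %s) "
            "ON CONFLICT (id) DO UPDATE SET value = EXCLUDED.value",
            (row_id, value),
        )
conn.commit()
```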

If you are not using Python, I am pretty sure there is a similar dictionary-mapping data structure in your language.

Hope this helps!

Answered by l_39217_l

If you are pulling data by id, use memcached (http://www.danga.com/memcached/) + PostgreSQL.

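A rough sketch of that cache-aside pattern, assuming the pymemcache and psycopg2 packages and a hypothetical users table; on a cache miss we fall through to PostgreSQL and populate memcached:

```python
import psycopg2
from pymemcache.client.base import Client

mc = Client(("localhost", 11211))                    # memcached on its default port
conn = psycopg2.connect("dbname=mydb user=myuser")   # placeholder connection string

def get_user_name(user_id):
    key = f"user:{user_id}"
    cached = mc.get(key)                 # 1. try the RAM cache first
    if cached is not None:
        return cached.decode()
    with conn.cursor() as cur:           # 2. miss: fall through to PostgreSQL
        cur.execute("SELECT name FROM users WHERE id = %s", (user_id,))
        row = cur.fetchone()
    if row is None:
        return None
    mc.set(key, row[0], expire=300)      # 3. populate the cache for next time
    return row[0]
```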

Answered by Marsh Ray

Set up an old-fashioned RAMdisk and tell pg to store its data there.

Be sure you back it up well though.

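One way to wire that up, sketched under the assumption that an admin has already mounted a tmpfs RAM disk at /mnt/ramdisk, empty and owned by the postgres OS user (the path and all names here are illustrative):

```python
import psycopg2

# Creating a tablespace requires superuser rights; placeholder credentials.
conn = psycopg2.connect("dbname=mydb user=postgres")
conn.autocommit = True  # CREATE TABLESPACE cannot run inside a transaction block

with conn.cursor() as cur:
    # Point a tablespace at the RAM disk mount...
    cur.execute("CREATE TABLESPACE ramspace LOCATION '/mnt/ramdisk'")
    # ...and put the hot table in it.
    cur.execute(
        "CREATE TABLE hot_data (id int PRIMARY KEY, payload text) "
        "TABLESPACE ramspace"
    )
```

Everything on the RAM disk disappears on reboot, which is why the backup warning above matters.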

Answered by dkretz

With only an 8 GB database, if you've already optimized all the SQL activity and you're ready to solve query problems with hardware, I suggest you're in trouble. This is just not a scalable solution in the long term. Are you sure there is nothing you can do to make a substantial difference on the software and database design side?

Answered by duffymo

Perhaps something like a Tangosol Coherence cache, if you're using Java.
