How do I model a PostgreSQL failover cluster with Docker/Kubernetes?

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original StackOverflow question: http://stackoverflow.com/questions/29451702/

Date: 2020-10-21 01:50:31  Source: igfitidea

How do I model a PostgreSQL failover cluster with Docker/Kubernetes?

Tags: postgresql, docker, kubernetes

Asked by Nikolai Prokoschenko

I'm still wrapping my head around Kubernetes and how that's supposed to work. Currently, I'm struggling to understand how to model something like a PostgreSQL cluster with streaming replication, scaling out and automatic failover/failback (pgpool-II, repmgr, pick your poison).


My main problem with the approach is the dual nature of a PostgreSQL instance, configuration-wise -- it's either a master or a cold/warm/hot standby. If I increase the number of replicas, I'd expect them all to come up as standbys, so I'd imagine creating a postgresql-standby replication controller separately from a postgresql-master pod. However, I'd also expect one of those standbys to become a master in case the current master is down, so it's a common postgresql replication controller after all.

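
To make the dilemma concrete, a hypothetical sketch of what the separate standby controller could look like. The `PG_ROLE` and `PG_MASTER_SERVICE` environment variables are assumptions -- they are not features of the stock postgres image, and would have to be interpreted by a custom entrypoint script that bootstraps the instance as master or standby:

```yaml
# Hypothetical sketch only: a replication controller for standbys,
# separate from a single postgresql-master pod. Env var names are
# invented conventions for a custom entrypoint, not real image features.
apiVersion: v1
kind: ReplicationController
metadata:
  name: postgresql-standby
spec:
  replicas: 3
  selector:
    app: postgresql
    role: standby
  template:
    metadata:
      labels:
        app: postgresql
        role: standby
    spec:
      containers:
      - name: postgresql
        image: postgres:9.4          # any streaming-replication-capable version
        env:
        - name: PG_ROLE              # assumed convention (see note above)
          value: standby
        - name: PG_MASTER_SERVICE    # assumed: service name of the master pod
          value: postgresql-master
```

The problem described above is visible here: promoting one of these pods to master means its labels and configuration no longer match the controller that created it.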

The only idea I've had so far is to put the replication configuration on an external volume and manage the state and state changes outside the containers.


(in case of PostgreSQL the configuration would probably already be on a volume inside its data directory, which itself is obviously something I'd want on a volume, but that's beside the point)

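
For context, in PostgreSQL up to version 11 the standby role is indeed driven by a file inside the data directory, so "managing state changes from outside" largely means creating or removing that file on the shared volume. A minimal sketch (hostname and user are placeholders):

```
# recovery.conf in the data directory (PostgreSQL <= 11; from version 12
# onward this file is replaced by a standby.signal file plus
# primary_conninfo in postgresql.conf)
standby_mode = 'on'
primary_conninfo = 'host=postgresql-master port=5432 user=replicator'

# Promotion to master (e.g. via repmgr or "pg_ctl promote") ends recovery,
# and the file is renamed to recovery.done.
```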

Is that the correct approach, or is there any other cleaner way?


Answered by Clayton

There's an example in OpenShift: https://github.com/openshift/postgresql/tree/master/examples/replica. The principle is the same in pure Kube (it's not using anything truly OpenShift-specific, and you can use the images with plain Docker).


Answered by Yuci

You can give PostDock a try, either with docker-compose or Kubernetes. Currently I have tried it in our project with docker-compose, with the schema as shown below:


pgmaster (primary node1)  --|
|- pgslave1 (node2)       --|
|  |- pgslave2 (node3)    --|----pgpool (master_slave_mode stream)----client
|- pgslave3 (node4)       --|
   |- pgslave4 (node5)    --|
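
A rough docker-compose sketch of that topology follows. The image names and wiring are assumptions based on how the PostDock project is organized; check the project's own compose files for the authoritative service definitions and environment variables:

```yaml
# Hypothetical sketch of the topology above; not PostDock's actual
# compose file. Service/image names are assumptions.
version: "2"
services:
  pgmaster:
    image: postdock/postgres        # assumption: primary node image
  pgslave1:
    image: postdock/postgres
    depends_on: [pgmaster]
  pgslave2:
    image: postdock/postgres
    depends_on: [pgslave1]
  pgslave3:
    image: postdock/postgres
    depends_on: [pgmaster]
  pgpool:
    image: postdock/pgpool          # assumption: pgpool frontend image
    depends_on: [pgmaster, pgslave1, pgslave2, pgslave3]
    ports:
      - "5432:5432"                 # clients connect only here
```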

I have tested the following scenarios, and they all work very well:


  • Replication: changes made at the primary (i.e., master) node are replicated to all standby (i.e., slave) nodes.
  • Failover: stop the primary node, and a standby node (e.g., node4) automatically takes over the primary role.
  • Prevention of two primary nodes: if the previous primary node (node1) is resurrected, node4 continues as the primary node, while node1 comes back in sync but as a standby node.

As for the client application, these changes are all transparent. The client just points to the pgpool node, and keeps working fine in all the aforementioned scenarios.


Note: in case you have problems getting PostDock up and running, you could try my forked version of PostDock.


Pgpool-II with Watchdog


A problem with the aforementioned architecture is that pgpool is the single point of failure. So I have also tried enabling Watchdog for pgpool-II with a delegated virtual IP, so as to avoid the single point of failure.


master (primary node1)  --\
|- slave1 (node2)       ---\     / pgpool1 (active)  \
|  |- slave2 (node3)    ----|---|                     |----client
|- slave3 (node4)       ---/     \ pgpool2 (standby) /
   |- slave4 (node5)    --/
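
The watchdog side of this setup lives in pgpool.conf. A minimal fragment for pgpool1 might look like the following (parameter names as in pgpool-II 3.x; the IP and hostnames are placeholders, so verify against your version's documentation):

```
# pgpool.conf watchdog fragment for pgpool1 (sketch; hostnames/IP are
# placeholders, parameter names should be checked for your pgpool version)
use_watchdog = on
wd_hostname = 'pgpool1'
wd_port = 9000
delegate_IP = '10.0.0.100'           # the virtual IP clients point at

# peer pgpool (pgpool2) that can take over the virtual IP on failure
other_pgpool_hostname0 = 'pgpool2'
other_pgpool_port0 = 5432
other_wd_port0 = 9000
heartbeat_destination0 = 'pgpool2'   # watchdog liveness heartbeat target
```

pgpool2 gets the mirror-image configuration pointing back at pgpool1; whichever node holds the virtual IP is the active one.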

I have tested the following scenarios, and they all work very well:


  • Normal scenario: both pgpools start up, with the virtual IP automatically applied to one of them, in my case pgpool1.
  • Failover: shut down pgpool1. The virtual IP is automatically applied to pgpool2, which hence becomes active.
  • Restart the failed pgpool: start pgpool1 again. The virtual IP stays with pgpool2, and pgpool1 now works as the standby.

As for the client application, these changes are all transparent. The client just points to the virtual IP, and keeps working fine in all the aforementioned scenarios.


You can find this project at my GitHub repository on the watchdog branch.


Answered by P Ekambaram

You can look at one of the following PostgreSQL open-source tools:


  1. Crunchy Data PostgreSQL
  2. Patroni

Answered by CloudStax

Kubernetes's StatefulSet is a good base for setting up a stateful service. You will still need some work to configure the correct membership among the PostgreSQL replicas.

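
As a rough illustration of that base, a minimal StatefulSet sketch with a headless service follows. This only provides the scaffolding (stable pod names `postgresql-0`, `postgresql-1`, ... and per-pod volumes); deciding who is primary still needs something like Patroni or a custom sidecar:

```yaml
# Minimal sketch: headless service + StatefulSet. Gives each pod a
# stable DNS identity and its own persistent volume; role management
# (primary election, failover) is NOT handled here.
apiVersion: v1
kind: Service
metadata:
  name: postgresql
spec:
  clusterIP: None            # headless: stable per-pod DNS names
  selector:
    app: postgresql
  ports:
  - port: 5432
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: postgresql
spec:
  serviceName: postgresql
  replicas: 3
  selector:
    matchLabels:
      app: postgresql
  template:
    metadata:
      labels:
        app: postgresql
    spec:
      containers:
      - name: postgresql
        image: postgres:10
        volumeMounts:
        - name: data
          mountPath: /var/lib/postgresql/data
  volumeClaimTemplates:      # one PersistentVolumeClaim per replica
  - metadata:
      name: data
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
```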

Kubernetes has an example of this: http://blog.kubernetes.io/2017/02/postgresql-clusters-kubernetes-statefulsets.html
