Libvirt Fencing on a Physical KVM Host
The Red Hat High Availability Add-On ships with a number of fencing agents for different hypervisors.
For cluster nodes running as VMs on a KVM/libvirt host, the software fencing device fence-virtd needs to be configured.
Our goal is to configure the STONITH agent fence_xvm in a RHEL cluster.
To do that, we first need to set up libvirt fencing on the physical KVM host.
Installation
Our KVM server runs CentOS 7.
On the hypervisor, install the following packages:
[kvm]# yum install fence-virtd fence-virtd-libvirt fence-virtd-multicast
Create a shared secret key:
[kvm]# mkdir /etc/cluster
[kvm]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=4k count=1
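Since the key authenticates fencing requests, it is worth considering tighter permissions than the default. This is an optional hardening step not required by the walkthrough, which otherwise uses the default 0644 mode:

[kvm]# chmod 0600 /etc/cluster/fence_xvm.key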
Configuration
Configure the fence_virtd daemon.
It is important to select the libvirt backend and the multicast listener.
Also, make sure to select the correct interface used for communication between the cluster nodes (br0 in this case).
[kvm]# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:

Available backends:
    libvirt 0.3
Available listeners:
    multicast 1.2

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface.  Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this *must* be set (typically to virbr0).
Set to 'none' for no interface.

Interface [br0]:

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]:

Configuration complete.

=== Begin Configuration ===
fence_virtd {
	listener = "multicast";
	backend = "libvirt";
	module_path = "/usr/lib64/fence-virt";
}

listeners {
	multicast {
		key_file = "/etc/cluster/fence_xvm.key";
		address = "225.0.0.12";
		interface = "br0";
		family = "ipv4";
		port = "1229";
	}
}

backends {
	libvirt {
		uri = "qemu:///system";
	}
}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y
Enable and start the service on the hypervisor:
[kvm]# systemctl enable fence_virtd
[kvm]# systemctl start fence_virtd
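To confirm the daemon came up and the multicast listener bound to UDP port 1229, a quick sanity check (ss is part of the iproute package on CentOS 7, and the second command should show fence_virtd holding the socket):

[kvm]# systemctl status fence_virtd
[kvm]# ss -ulnp | grep 1229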
This is important: do not forget to open firewall port UDP 1229 on the hypervisor:
[kvm]# firewall-cmd --permanent --add-port=1229/udp
[kvm]# firewall-cmd --reload
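To double-check, list the open ports of the active zone; the output should now include 1229/udp:

[kvm]# firewall-cmd --list-ports
1229/udp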
Copy the fence secret key /etc/cluster/fence_xvm.key to all cluster nodes.
Make sure the file name and path are the same as on the hypervisor.
[kvm]# for i in $(seq 1 3); do \
    ssh node$i mkdir /etc/cluster; \
    scp /etc/cluster/fence_xvm.key node$i:/etc/cluster/; \
done
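A quick way to confirm the key really is identical everywhere is to compare checksums across the hypervisor and the nodes (a sketch reusing the node names from the loop above):

[kvm]# md5sum /etc/cluster/fence_xvm.key
[kvm]# for i in $(seq 1 3); do ssh node$i md5sum /etc/cluster/fence_xvm.key; done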
Configuring the Fence Agent fence_xvm
This configuration applies to the cluster nodes, not to the hypervisor.
Our cluster nodes already have the RHEL High Availability Add-On configured.
For more information, see here.
Install the fence-virt package on each cluster node:
[nodex]# yum install fence-virt
Verify that the key copied from the hypervisor is in place, including its SELinux context:

[nodex]# ls -Z /etc/cluster/fence_xvm.key
-rw-r--r--. root root unconfined_u:object_r:cluster_conf_t:s0 /etc/cluster/fence_xvm.key
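If the SELinux label on a node does not match the above, restorecon should reset it to the policy default (a suggested recovery step, not part of the original walkthrough):

[nodex]# restorecon -v /etc/cluster/fence_xvm.key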
Open firewall port TCP 1229 on all cluster nodes:
[nodex]# firewall-cmd --permanent --add-port=1229/tcp
[nodex]# firewall-cmd --reload
Check the list of guests that can be fenced:
[nodex]# fence_xvm -o list
nfs                  66bc6e9e-73dd-41af-85f0-e50b34e1fc07 on
node1                c6220f3a-f937-4470-bfae-d3a3f49e2500 on
node2                2711db33-da71-4119-85da-ae7b294d9d4a on
node3                632121f5-6e40-4910-b863-f4f16d7abcaf on
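fence_xvm can also query a single guest; for example, checking the power state of one VM with the standard status action (the exit code indicates whether the domain is on):

[nodex]# fence_xvm -o status -H node2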
Try fencing one of the cluster nodes:
[node1]# fence_xvm -o off -H node2
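The off action leaves the guest powered down. With the libvirt backend it can be brought back either via fence_xvm from a surviving node or with virsh on the hypervisor; both variants are sketched here:

[node1]# fence_xvm -o on -H node2
[kvm]# virsh start node2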
Add a stonith resource to the Pacemaker cluster:
[node1]# pcs stonith create fence_node1 fence_xvm \
    key_file="/etc/cluster/fence_xvm.key" \
    action="reboot" \
    port="node1" \
    pcmk_host_list="node1.hl.local"
The port is the name of the VM as seen by libvirt (virsh list), while pcmk_host_list contains the name of the cluster node.
Repeated for every node, this approach creates one stonith resource per cluster node, as sketched below.
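Assuming the same naming scheme as above, the matching resources for the remaining two nodes would look like this:

[node1]# pcs stonith create fence_node2 fence_xvm \
    key_file="/etc/cluster/fence_xvm.key" \
    action="reboot" \
    port="node2" \
    pcmk_host_list="node2.hl.local"
[node1]# pcs stonith create fence_node3 fence_xvm \
    key_file="/etc/cluster/fence_xvm.key" \
    action="reboot" \
    port="node3" \
    pcmk_host_list="node3.hl.local"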
Alternatively, a host map can be used:
[node1]# pcs stonith create fence_all fence_xvm \
    key_file="/etc/cluster/fence_xvm.key" \
    action="reboot" \
    pcmk_host_map="node1.hl.local:node1,node2.hl.local:node2,node3.hl.local:node3" \
    pcmk_host_list="node1,node2,node3" \
    pcmk_host_check=static-list
The command above creates a single stonith resource capable of fencing all cluster nodes.
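To verify the setup, list the configured stonith resources and, optionally, test fencing through Pacemaker itself; unlike the earlier fence_xvm call, pcs stonith fence asks the cluster to carry out the fencing:

[node1]# pcs stonith show
[node1]# pcs stonith fence node2.hl.local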