Linux: Oracle's Unbreakable Enterprise Kernel (UEK) and OCFS2 on CentOS/RHEL 6

Date: 2020-02-23 14:39:54  Source: igfitidea

Oracle Cluster File System 2 (OCFS2) is a solid choice when you need a filesystem that several server nodes can access and write to at the same time.
Unfortunately, since Red Hat switched to GFS, their stock kernels no longer support OCFS2.

I am not aware of any unofficial kmod-ocfs2 module for RHEL 6.
Oracle seems to have made it harder to get OCFS2 into the mainline kernel, pushing people toward its own UEK (Unbreakable Enterprise Kernel) instead.
Compiling from source might work, but since my current use case is a database server, a database-optimized Oracle kernel suits me just fine!

Here is the procedure to install the Oracle UEK and OCFS2 on CentOS/Red Hat Enterprise Linux 6.

1. First, add the Oracle yum repository for RHEL 6. Create the file:

/etc/yum.repos.d/public-yum-ol6.repo

and append the following:

[ol6_ga_base]
name=Oracle Linux 6 GA - $basearch - base
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/0/base/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=1
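If you script your provisioning, the stanza above can be written in one step with a heredoc. A minimal sketch: it writes to a local file named `public-yum-ol6.repo` for a dry run (use the `/etc/yum.repos.d` path from above on the real server), and the quoted heredoc delimiter keeps `$basearch` literal so yum can expand it itself:

```shell
# Demo path; on the server use /etc/yum.repos.d/public-yum-ol6.repo
REPO_FILE=public-yum-ol6.repo

# 'EOF' is quoted so $basearch is written literally for yum to expand.
cat > "$REPO_FILE" <<'EOF'
[ol6_ga_base]
name=Oracle Linux 6 GA - $basearch - base
baseurl=http://public-yum.oracle.com/repo/OracleLinux/OL6/0/base/$basearch/
gpgkey=http://public-yum.oracle.com/RPM-GPG-KEY-oracle-ol6
gpgcheck=1
enabled=1
EOF

# Sanity check: the section header must be present.
grep -q '^\[ol6_ga_base\]' "$REPO_FILE" && echo "repo stanza written"
```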

2. Install the UEK kernel and the OCFS2 tools:

yum install kernel-uek ocfs2-tools

3. Reboot the system and boot into the new kernel.
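After the reboot it is worth confirming that you actually landed on the UEK rather than the stock kernel. A small sketch; the version string used below is illustrative only, and on the server you would feed the helper the output of `uname -r`:

```shell
# Return success if a kernel release string looks like a UEK build;
# EL6 UEK releases carry "uek" in the release suffix.
is_uek() {
    case "$1" in
        *uek*) return 0 ;;
        *)     return 1 ;;
    esac
}

# On the server: is_uek "$(uname -r)" || echo "still on the stock kernel"
is_uek "2.6.32-300.3.1.el6uek.x86_64" && echo "running UEK"
```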

4. Create the directory /etc/ocfs2 and the file cluster.conf:

mkdir /etc/ocfs2
vi /etc/ocfs2/cluster.conf

Then append the following:

cluster:
       node_count = <maximum number of nodes that will access the filesystem>
       name = <label of your ocfs2 volume>
node:
        ip_port = 7777
        ip_address = <server1 ip address>
        number = <node numerical id>
        name = <node1 fqdn>
        cluster = <label of your ocfs2 volume>

node:
        ip_port = 7777
        ip_address = <server2 ip address>
        number = <node numerical id>
        name = <node2 fqdn>
        cluster = <label of your ocfs2 volume>

Example:

cluster:
       node_count = 64
       name = san01vd02v001

node:
        ip_port = 7777
        ip_address = 10.10.30.1
        number = 1
        name = node1.theitroad.local
        cluster = san01vd02v001

node:
        ip_port = 7777
        ip_address = 10.10.30.2
        number = 2
        name = node2.theitroad.local
        cluster = san01vd02v001

[...add as many nodes as you need]
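With more than a couple of nodes, writing this file by hand gets error-prone. Here is a sketch that generates the same layout from a list of "ip hostname" pairs; the cluster name, addresses, and hostnames are the example values above, and it writes to a local cluster.conf so you can review it before copying it to /etc/ocfs2/cluster.conf:

```shell
CLUSTER=san01vd02v001
OUT=cluster.conf   # review locally, then copy to /etc/ocfs2/cluster.conf

{
    printf 'cluster:\n'
    printf '       node_count = 64\n'
    printf '       name = %s\n\n' "$CLUSTER"

    n=1
    # One "ip hostname" pair per node; extend the list as needed.
    for entry in '10.10.30.1 node1.theitroad.local' \
                 '10.10.30.2 node2.theitroad.local'; do
        set -- $entry
        printf 'node:\n'
        printf '        ip_port = 7777\n'
        printf '        ip_address = %s\n' "$1"
        printf '        number = %s\n' "$n"
        printf '        name = %s\n' "$2"
        printf '        cluster = %s\n\n' "$CLUSTER"
        n=$((n + 1))
    done
} > "$OUT"

grep -c '^node:' "$OUT"   # prints 2: one count per node stanza
```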

5. Configure the Oracle cluster stack for OCFS2 (known as o2cb):

service o2cb configure

Then answer the questions to suit your setup and to match your cluster.conf settings (values follow the example above):

Load O2CB driver on boot (y/n) [y]: y
Cluster stack backing O2CB [o2cb]: 
Cluster to start on boot (Enter "none" to clear) [san01vd02v001]: 
Specify heartbeat dead threshold (>=7) [31]: 
Specify network idle timeout in ms (>=5000) [30000]: 
Specify network keepalive delay in ms (>=1000) [2000]: 
Specify network reconnect delay in ms (>=2000) [2000]:

This effectively writes a file at the following path:

/etc/sysconfig/o2cb

and it should look like this:

# O2CB_ENABLED: 'true' means to load the driver on boot.
O2CB_ENABLED=true

# O2CB_STACK: The name of the cluster stack backing O2CB.
O2CB_STACK=o2cb

# O2CB_BOOTCLUSTER: If not empty, the name of a cluster to start.
O2CB_BOOTCLUSTER=san01vd02v001

# O2CB_HEARTBEAT_THRESHOLD: Iterations before a node is considered dead.
O2CB_HEARTBEAT_THRESHOLD=

# O2CB_IDLE_TIMEOUT_MS: Time in ms before a network connection is considered dead.
O2CB_IDLE_TIMEOUT_MS=

# O2CB_KEEPALIVE_DELAY_MS: Max time in ms before a keepalive packet is sent
O2CB_KEEPALIVE_DELAY_MS=

# O2CB_RECONNECT_DELAY_MS: Min time in ms between connection attempts
O2CB_RECONNECT_DELAY_MS=

Note: both of the configuration files above (cluster.conf and o2cb) must be replicated on every node.
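One way to push both files out is a small loop over the node list. This is only a sketch under the assumption that root SSH access works between your nodes; the hostnames are the example ones from above, and with DRY_RUN set it only prints the commands it would run:

```shell
# Hypothetical node list from the example above; adjust to your hosts.
NODES='node1.theitroad.local node2.theitroad.local'
FILES='/etc/ocfs2/cluster.conf /etc/sysconfig/o2cb'
DRY_RUN=1   # unset to actually copy

# Print one scp command per file per node.
plan_copies() {
    for host in $NODES; do
        for f in $FILES; do
            echo "scp $f root@$host:$f"
        done
    done
}

if [ -n "$DRY_RUN" ]; then
    plan_copies
else
    plan_copies | sh
fi
```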

6. Format the OCFS2 filesystem:

mkfs.ocfs2 /dev/<device_name> -N <number_of_nodes> -b <block_size: 512 | 1K | 2K | 4K> -C <cluster_size: 4K | 8K | 16K | 32K | 64K | 128K | 256K | 512K | 1M> -T <filesystem_type: mail | datafiles> --fs-features=<optional_features> -L "<label_name>"

(See the mkfs.ocfs2 man page for more details and options; you may want to go through the whole list to tune things to your needs.)

Here is the example I used:

mkfs.ocfs2 /dev/mapper/mpatha -N 64 -b 4K -C 256K -T mail --fs-features=extended-slotmap --fs-feature-level=max-features -L "san01vd02v001"

Then simply mount your volume:

mkdir /mnt/san01_vd02_v001
mount -t ocfs2 /dev/mapper/mpatha /mnt/san01_vd02_v001
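To make the mount survive a reboot you can add an fstab entry. A sketch using the device and mount point from above: `_netdev` delays the mount until the network is up, which OCFS2's cluster stack needs (on EL6 also make sure the netfs service is enabled, e.g. `chkconfig netfs on`). It appends to a scratch file here; on the server, append to /etc/fstab instead:

```shell
# Demo path; on the server append to /etc/fstab instead.
FSTAB=fstab.demo

# _netdev: wait for the network (and thus o2cb) before mounting.
echo '/dev/mapper/mpatha /mnt/san01_vd02_v001 ocfs2 _netdev,defaults 0 0' >> "$FSTAB"
```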