Install a Ceph 15 Storage Cluster on Ubuntu 20.04


Ceph is a software-defined storage solution designed for building distributed storage clusters on commodity hardware.
The requirements for building a Ceph storage cluster on Ubuntu 20.04 depend largely on the intended use case.

This setup is not meant for running mission-critical, write-intensive applications.
Consult the official project documentation for the network and storage hardware requirements of such workloads.
The following are the standard Ceph components that will be configured in this installation tutorial:

Ceph MON - Monitor server
Ceph MDS - Metadata server
Ceph MGR - Ceph Manager daemon
Ceph OSD - Object Storage Daemon

Install Ceph Storage Cluster on Ubuntu 20.04

Before starting the Ceph storage cluster deployment on Ubuntu 20.04, prepare the servers that will be used.

As shown in the table below, my lab has the following server hostnames, IP addresses, Ceph components, and specifications.

| Server Hostname | Server IP Address | Ceph Components | Server Specs |
| --- | --- | --- | --- |
| ceph-mon-01 | 172.16.20.10 | Ceph MON, MGR, MDS | 8GB RAM, 4 vCPUs |
| ceph-mon-02 | 172.16.20.11 | Ceph MON, MGR, MDS | 8GB RAM, 4 vCPUs |
| ceph-mon-03 | 172.16.20.12 | Ceph MON, MGR, MDS | 8GB RAM, 4 vCPUs |
| ceph-osd-01 | 172.16.20.13 | Ceph OSD | 16GB RAM, 8 vCPUs |
| ceph-osd-02 | 172.16.20.14 | Ceph OSD | 16GB RAM, 8 vCPUs |
| ceph-osd-03 | 172.16.20.15 | Ceph OSD | 16GB RAM, 8 vCPUs |

Step 1: Prepare the First Monitor Node

The tool used for the deployment is cephadm.
Cephadm deploys and manages a Ceph cluster by connecting to hosts from the manager daemon over SSH to add, remove, or update Ceph daemon containers.

Log in to the first monitor node:

$ssh root@ceph-mon-01 
Warning: Permanently added 'ceph-mon-01,172.16.20.10' (ECDSA) to the list of known hosts.
Enter passphrase for key '/var/home/jkmutai/.ssh/id_rsa': 
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-33-generic x86_64)
 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage
Last login: Tue Jun  2 20:36:36 2020 from 172.16.20.10
root@ceph-mon-01:~#

Update the /etc/hosts file with entries for all the server IP addresses and hostnames.

# vim /etc/hosts
127.0.0.1 localhost
# Ceph nodes
172.16.20.10  ceph-mon-01
172.16.20.11  ceph-mon-02
172.16.20.12  ceph-mon-03
172.16.20.13  ceph-osd-01
172.16.20.14  ceph-osd-02
172.16.20.15  ceph-osd-03
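
To sanity-check the entries, you can optionally confirm that each cluster hostname now resolves locally; getent reads the same /etc/hosts entries added above:

# Optional: confirm every cluster hostname resolves
for host in ceph-mon-0{1..3} ceph-osd-0{1..3}; do
  getent hosts "$host" || echo "$host does not resolve"
done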

Update and upgrade the OS:

sudo apt update && sudo apt -y upgrade
sudo systemctl reboot

Install Ansible and other basic utilities:

sudo apt update
sudo apt -y install software-properties-common git curl vim bash-completion ansible

Confirm that Ansible is installed:

$ansible --version
ansible 2.9.6
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3/dist-packages/ansible
  executable location = /usr/bin/ansible
  python version = 3.8.2 (default, Apr 27 2020, 15:53:34) [GCC 9.3.0]

Ensure the /usr/local/bin path is added to your PATH:

echo "PATH=$PATH:/usr/local/bin" >>~/.bashrc
source ~/.bashrc

Check your current PATH:

$echo $PATH
/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/usr/local/bin

Generate SSH keys:

$ssh-keygen -t rsa -b 4096 -N '' -f ~/.ssh/id_rsa
Generating public/private rsa key pair.
Your identification has been saved in /root/.ssh/id_rsa
Your public key has been saved in /root/.ssh/id_rsa.pub
The key fingerprint is:
SHA256:3gGoZCVsA6jbnBuMIpnJilCiblaM9qc5Xk38V7lfJ6U Hyman@theitroad
The key's randomart image is:
+---[RSA 4096]----+
| ..o. . |
|. +o . |
|. .o.. . |
|o .o .. . . |
|o%o.. oS . o .|
|@+*o o… .. .o |
|O oo . ….. .E o|
|o+.oo. . ..o|
|o .++ . |
+----[SHA256]-----+

Install cephadm:

curl --silent --remote-name --location https://github.com/ceph/ceph/raw/octopus/src/cephadm/cephadm
chmod +x cephadm
sudo mv cephadm  /usr/local/bin/

Confirm cephadm is available for local use:

$cephadm --help

Step 2: Update All Ceph Nodes and Push the SSH Public Key

With the first mon node configured, create an Ansible playbook that updates all nodes, pushes the SSH public key, and updates the /etc/hosts file on every node.

cd ~/
vim prepare-ceph-nodes.yml

Modify the content below to set your correct timezone, then add it to the file. The playbook also creates a ceph admin user (cephadmin) with passwordless sudo, which is used for SSH tests later on:

---
- name: Prepare ceph nodes
  hosts: ceph_nodes
  become: yes
  become_method: sudo
  vars:
    ceph_admin_user: cephadmin
  tasks:
    - name: Set timezone
      timezone:
        name: Africa/Nairobi
    - name: Update system
      apt:
        name: "*"
        state: latest
        update_cache: yes
    - name: Install common packages
      apt:
        name: [vim,git,bash-completion,wget,curl,chrony]
        state: present
        update_cache: yes
    - name: Add ceph admin user
      user:
        name: "{{ ceph_admin_user }}"
        state: present
        shell: /bin/bash
    - name: Create sudo file
      file:
        path: /etc/sudoers.d/{{ ceph_admin_user }}
        state: touch
    - name: Give ceph admin user passwordless sudo
      lineinfile:
        path: /etc/sudoers.d/{{ ceph_admin_user }}
        line: "{{ ceph_admin_user }} ALL=(ALL) NOPASSWD:ALL"
    - name: Set authorized key taken from file to ceph admin
      authorized_key:
        user: "{{ ceph_admin_user }}"
        state: present
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"
    - name: Set authorized key taken from file to root user
      authorized_key:
        user: root
        state: present
        key: "{{ lookup('file', '~/.ssh/id_rsa.pub') }}"

    - name: Install Docker
      shell: |
        curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
        echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable" > /etc/apt/sources.list.d/docker-ce.list
        apt update
        apt install -qq -y docker-ce docker-ce-cli containerd.io
    - name: Reboot server after update and configs
      reboot:

Create the inventory file:

$vim hosts
[ceph_nodes]
ceph-mon-01
ceph-mon-02
ceph-mon-03
ceph-osd-01
ceph-osd-02
ceph-osd-03
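
With the playbook and inventory in place, you can optionally confirm the playbook parses cleanly before running it; --syntax-check is a standard ansible-playbook flag that only parses the play and executes nothing:

# Optional: validate playbook syntax without making changes
ansible-playbook -i hosts prepare-ceph-nodes.yml --syntax-check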

If your SSH key has a passphrase, add the key to the ssh-agent:

$eval `ssh-agent -s` && ssh-add ~/.ssh/id_rsa_jmutai 
Agent pid 3275
Enter passphrase for /root/.ssh/id_rsa_jmutai: 
Identity added: /root/.ssh/id_rsa_jkmutai (/root/.ssh/id_rsa_jmutai)

Configure SSH:

tee -a ~/.ssh/config<<EOF
Host *
    UserKnownHostsFile /dev/null
    StrictHostKeyChecking no
    IdentitiesOnly yes
    ConnectTimeout 0
    ServerAliveInterval 300
EOF
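
Optionally, verify that Ansible can reach all the nodes before the real run. The ping module is Ansible's built-in connectivity test; pass whichever --user/--ask-pass/--private-key options from the list below apply to your setup:

# Optional: test SSH connectivity to every node in the inventory
ansible -i hosts ceph_nodes -m ping --user root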

Run the playbook:

# As root user with  default ssh key:
$ansible-playbook -i hosts prepare-ceph-nodes.yml --user root
# As root user with password:
$ansible-playbook -i hosts prepare-ceph-nodes.yml --user root --ask-pass
# As sudo user with password - replace ubuntu with correct username
$ansible-playbook -i hosts prepare-ceph-nodes.yml --user ubuntu --ask-pass --ask-become-pass
# As sudo user with ssh key and sudo password - replace ubuntu with correct username
$ansible-playbook -i hosts prepare-ceph-nodes.yml --user ubuntu --ask-become-pass
# As sudo user with ssh key and passwordless sudo - replace ubuntu with correct username
$ansible-playbook -i hosts prepare-ceph-nodes.yml --user ubuntu
# As sudo or root user with custom key
$ansible-playbook -i hosts prepare-ceph-nodes.yml --private-key /path/to/private/key <options>

In my case, I'll run:

$ansible-playbook -i hosts prepare-ceph-nodes.yml --private-key ~/.ssh/id_rsa_jkmutai

Execution output:

ansible-playbook -i hosts prepare-ceph-nodes.yml --private-key ~/.ssh/id_rsa_jmutai 
PLAY [Prepare ceph nodes] *******************************************************************
TASK [Gathering Facts] **********************************************************************
ok: [ceph-mon-03]
ok: [ceph-mon-02]
ok: [ceph-mon-01]
ok: [ceph-osd-01]
ok: [ceph-osd-02]
ok: [ceph-osd-03]
TASK [Update system] ************************************************************************
changed: [ceph-mon-01]
changed: [ceph-mon-02]
changed: [ceph-mon-03]
changed: [ceph-osd-02]
changed: [ceph-osd-01]
changed: [ceph-osd-03]
TASK [Install common packages] **************************************************************
changed: [ceph-mon-02]
changed: [ceph-mon-01]
changed: [ceph-osd-02]
changed: [ceph-osd-01]
changed: [ceph-mon-03]
changed: [ceph-osd-03]
TASK [Add ceph admin user] ******************************************************************
changed: [ceph-osd-02]
changed: [ceph-mon-02]
changed: [ceph-mon-01]
changed: [ceph-mon-03]
changed: [ceph-osd-01]
changed: [ceph-osd-03]
TASK [Create sudo file] *********************************************************************
changed: [ceph-mon-02]
changed: [ceph-osd-02]
changed: [ceph-mon-01]
changed: [ceph-osd-01]
changed: [ceph-mon-03]
changed: [ceph-osd-03]
TASK [Give ceph admin user passwordless sudo] ***********************************************
changed: [ceph-mon-02]
changed: [ceph-mon-01]
changed: [ceph-osd-02]
changed: [ceph-osd-01]
changed: [ceph-mon-03]
changed: [ceph-osd-03]
TASK [Set authorized key taken from file to ceph admin] *************************************
changed: [ceph-mon-01]
changed: [ceph-osd-01]
changed: [ceph-mon-03]
changed: [ceph-osd-02]
changed: [ceph-mon-02]
changed: [ceph-osd-03]
TASK [Set authorized key taken from file to root user] **************************************
changed: [ceph-mon-01]
changed: [ceph-mon-02]
changed: [ceph-mon-03]
changed: [ceph-osd-01]
changed: [ceph-osd-02]
changed: [ceph-osd-03]
TASK [Install Docker] ***********************************************************************
changed: [ceph-mon-01]
changed: [ceph-mon-02]
changed: [ceph-osd-02]
changed: [ceph-osd-01]
changed: [ceph-mon-03]
changed: [ceph-osd-03]
TASK [Reboot server after update and configs] ***********************************************
changed: [ceph-osd-01]
changed: [ceph-mon-02]
changed: [ceph-osd-02]
changed: [ceph-mon-01]
changed: [ceph-mon-03]
changed: [ceph-osd-03]
PLAY RECAP **********************************************************************************
ceph-mon-01                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-mon-02                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-mon-03                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-01                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-02                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-03                : ok=9    changed=8    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Test SSH to the nodes as the ceph admin user created by the playbook:

$ssh cephadmin@ceph-mon-02
Warning: Permanently added 'ceph-mon-02,172.16.20.11' (ECDSA) to the list of known hosts.
Welcome to Ubuntu 20.04 LTS (GNU/Linux 5.4.0-28-generic x86_64)
 * Documentation:  https://help.ubuntu.com
 * Management:     https://landscape.canonical.com
 * Support:        https://ubuntu.com/advantage

The programs included with the Ubuntu system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.
Ubuntu comes with ABSOLUTELY NO WARRANTY, to the extent permitted by
applicable law.
To run a command as administrator (user "root"), use "sudo <command>".
See "man sudo_root" for details.
cephadmin@ceph-mon-02:~$ sudo su -
root@ceph-mon-02:~# logout
cephadmin@ceph-mon-02:~$ exit
logout
Connection to ceph-mon-02 closed.

Configure /etc/hosts

If you do not have a DNS server resolving the cluster hostnames, update the /etc/hosts file on all nodes.

Below is the playbook to use:

$vim update-hosts.yml
---
- name: Prepare ceph nodes
  hosts: ceph_nodes
  become: yes
  become_method: sudo
  tasks:
    - name: Clean /etc/hosts file
      copy:
        content: ""
        dest: /etc/hosts
    - name: Update /etc/hosts file
      blockinfile:
        path: /etc/hosts
        block: |
           127.0.0.1     localhost
           172.16.20.10  ceph-mon-01
           172.16.20.11  ceph-mon-02
           172.16.20.12  ceph-mon-03
           172.16.20.13  ceph-osd-01
           172.16.20.14  ceph-osd-02
           172.16.20.15  ceph-osd-03

Run the playbook:

$ansible-playbook -i hosts update-hosts.yml --private-key ~/.ssh/id_rsa_jmutai 
PLAY [Prepare ceph nodes] *******************************************************************
TASK [Gathering Facts] **********************************************************************
ok: [ceph-mon-01]
ok: [ceph-osd-02]
ok: [ceph-mon-03]
ok: [ceph-mon-02]
ok: [ceph-osd-01]
ok: [ceph-osd-03]
TASK [Clean /etc/hosts file] ****************************************************************
changed: [ceph-mon-02]
changed: [ceph-mon-01]
changed: [ceph-osd-01]
changed: [ceph-osd-02]
changed: [ceph-mon-03]
changed: [ceph-osd-03]
TASK [Update /etc/hosts file] ***************************************************************
changed: [ceph-mon-02]
changed: [ceph-mon-01]
changed: [ceph-osd-01]
changed: [ceph-osd-02]
changed: [ceph-mon-03]
changed: [ceph-osd-03]
PLAY RECAP **********************************************************************************
ceph-mon-01                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-mon-02                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-mon-03                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-01                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-02                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0   
ceph-osd-03                : ok=3    changed=2    unreachable=0    failed=0    skipped=0    rescued=0    ignored=0

Confirm the changes:

$ssh cephadmin@ceph-osd-01
$cat /etc/hosts
# BEGIN ANSIBLE MANAGED BLOCK
127.0.0.1      localhost
172.16.20.10   ceph-mon-01
172.16.20.11   ceph-mon-02
172.16.20.12   ceph-mon-03
172.16.20.13   ceph-osd-01
172.16.20.14   ceph-osd-02
172.16.20.15   ceph-osd-03
# END ANSIBLE MANAGED BLOCK
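
Instead of logging in to each node, the same check can be run across every node at once with an ad-hoc Ansible command:

# Optional: show /etc/hosts on all nodes in one shot
ansible -i hosts ceph_nodes -m command -a "cat /etc/hosts" --user root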

Step 3: Deploy Ceph 15 (Octopus) Storage Cluster on Ubuntu 20.04

To bootstrap a new Ceph cluster on Ubuntu 20.04, you need the address of the first monitor node - either its IP or its hostname.

sudo mkdir -p /etc/ceph
cephadm bootstrap \
  --mon-ip ceph-mon-01 \
  --initial-dashboard-user admin \
  --initial-dashboard-password Hyman@theitroad

Execution output:

INFO:cephadm:Verifying podman|docker is present...
INFO:cephadm:Verifying lvm2 is present...
INFO:cephadm:Verifying time synchronization is in place...
INFO:cephadm:Unit chrony.service is enabled and running
INFO:cephadm:Repeating the final host check...
INFO:cephadm:podman|docker (/usr/bin/docker) is present
INFO:cephadm:systemctl is present
INFO:cephadm:lvcreate is present
INFO:cephadm:Unit chrony.service is enabled and running
INFO:cephadm:Host looks OK
INFO:root:Cluster fsid: 8dbf2eda-a513-11ea-a3c1-a534e03850ee
INFO:cephadm:Verifying IP 172.16.20.10 port 3300 ...
INFO:cephadm:Verifying IP 172.16.20.10 port 6789 ...
INFO:cephadm:Mon IP 172.16.20.10 is in CIDR network 172.31.1.1
INFO:cephadm:Pulling latest docker.io/ceph/ceph:v15 container...
INFO:cephadm:Extracting ceph user uid/gid from container image...
INFO:cephadm:Creating initial keys...
INFO:cephadm:Creating initial monmap...
INFO:cephadm:Creating mon...
INFO:cephadm:Waiting for mon to start...
INFO:cephadm:Waiting for mon...
INFO:cephadm:mon is available
INFO:cephadm:Assimilating anything we can from ceph.conf...
INFO:cephadm:Generating new minimal ceph.conf...
INFO:cephadm:Restarting the monitor...
INFO:cephadm:Setting mon public_network...
INFO:cephadm:Creating mgr...
INFO:cephadm:Wrote keyring to /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Wrote config to /etc/ceph/ceph.conf
INFO:cephadm:Waiting for mgr to start...
INFO:cephadm:Waiting for mgr...
INFO:cephadm:mgr not available, waiting (1/10)...
INFO:cephadm:mgr not available, waiting (2/10)...
INFO:cephadm:mgr not available, waiting (3/10)...
INFO:cephadm:mgr not available, waiting (4/10)...
INFO:cephadm:mgr is available
INFO:cephadm:Enabling cephadm module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 5...
INFO:cephadm:Mgr epoch 5 is available
INFO:cephadm:Setting orchestrator backend to cephadm...
INFO:cephadm:Generating ssh key...
INFO:cephadm:Wrote public SSH key to to /etc/ceph/ceph.pub
INFO:cephadm:Adding key to root@localhost's authorized_keys...
INFO:cephadm:Adding host ceph-mon-01...
INFO:cephadm:Deploying mon service with default placement...
INFO:cephadm:Deploying mgr service with default placement...
INFO:cephadm:Deploying crash service with default placement...
INFO:cephadm:Enabling mgr prometheus module...
INFO:cephadm:Deploying prometheus service with default placement...
INFO:cephadm:Deploying grafana service with default placement...
INFO:cephadm:Deploying node-exporter service with default placement...
INFO:cephadm:Deploying alertmanager service with default placement...
INFO:cephadm:Enabling the dashboard module...
INFO:cephadm:Waiting for the mgr to restart...
INFO:cephadm:Waiting for Mgr epoch 13...
INFO:cephadm:Mgr epoch 13 is available
INFO:cephadm:Generating a dashboard self-signed certificate...
INFO:cephadm:Creating initial admin user...
INFO:cephadm:Fetching dashboard port number...
INFO:cephadm:Ceph Dashboard is now available at:
	     URL: https://ceph-mon-01:8443/
	    User: admin
	Password: Hyman@theitroad
INFO:cephadm:You can access the Ceph CLI with:
	sudo /usr/local/bin/cephadm shell --fsid 8dbf2eda-a513-11ea-a3c1-a534e03850ee -c /etc/ceph/ceph.conf -k /etc/ceph/ceph.client.admin.keyring
INFO:cephadm:Please consider enabling telemetry to help improve Ceph:
	ceph telemetry on
For more information see:
	https://docs.ceph.com/docs/master/mgr/telemetry/
INFO:cephadm:Bootstrap complete.

Install the Ceph tools:

cephadm add-repo --release octopus
cephadm install ceph-common
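
With ceph-common installed, the ceph CLI should now be able to talk to the new cluster using the config and admin keyring that the bootstrap wrote under /etc/ceph. A quick sanity check:

# Confirm the installed CLI version and cluster reachability
ceph -v
sudo ceph -s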

Add additional monitors, if you have them:

--- Copy the Ceph SSH key ---
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-mon-02
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-mon-03
--- Add the new nodes to the cluster ---
ceph orch host add ceph-mon-02
ceph orch host add ceph-mon-03
--- Label the nodes with mon ---
ceph orch host label add ceph-mon-01 mon
ceph orch host label add ceph-mon-02 mon
ceph orch host label add ceph-mon-03 mon
--- Apply the mon placement to the labeled nodes ---
ceph orch apply mon label:mon
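
The new monitor containers can take a minute or two to start. You can verify they joined the cluster with standard status commands:

# Check monitor quorum and orchestrator-managed services
ceph mon stat
ceph orch ls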

View the list of hosts and labels:

# ceph orch host ls
HOST         ADDR         LABELS  STATUS  
ceph-mon-01  ceph-mon-01  mon             
ceph-mon-02  ceph-mon-02  mon             
ceph-mon-03  ceph-mon-03  mon

Check the running containers:

# docker ps
CONTAINER ID        IMAGE                      COMMAND                  CREATED             STATUS              PORTS               NAMES
7d666ae63232        prom/alertmanager          "/bin/alertmanager -…"   3 minutes ago       Up 3 minutes                            ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-alertmanager.ceph-mon-01
4e7ccde697c7        prom/prometheus:latest     "/bin/prometheus --c…"   3 minutes ago       Up 3 minutes                            ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-prometheus.ceph-mon-01
9fe169a3f2dc        ceph/ceph-grafana:latest   "/bin/sh -c 'grafana…"   8 minutes ago       Up 8 minutes                            ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-grafana.ceph-mon-01
c8e99deb55a4        prom/node-exporter         "/bin/node_exporter …"   8 minutes ago       Up 8 minutes                            ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-node-exporter.ceph-mon-01
277f0ef7dd9d        ceph/ceph:v15              "/usr/bin/ceph-crash…"   9 minutes ago       Up 9 minutes                            ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-crash.ceph-mon-01
9de7a86857aa        ceph/ceph:v15              "/usr/bin/ceph-mgr -…"   10 minutes ago      Up 10 minutes                           ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-mgr.ceph-mon-01.qhokxo
d116bc14109c        ceph/ceph:v15              "/usr/bin/ceph-mon -…"   10 minutes ago      Up 10 minutes                           ceph-8dbf2eda-a513-11ea-a3c1-a534e03850ee-mon.ceph-mon-01

Step 4: Deploy Ceph OSDs

Install the cluster's public SSH key in the root user's authorized_keys on each new OSD node:

ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd-01
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd-02
ssh-copy-id -f -i /etc/ceph/ceph.pub root@ceph-osd-03

Tell Ceph that the new nodes are part of the cluster:

--- Add hosts to the cluster ---
ceph orch host add ceph-osd-01
ceph orch host add ceph-osd-02
ceph orch host add ceph-osd-03
--- Give the new nodes labels ---
ceph orch host label add ceph-osd-01 osd
ceph orch host label add ceph-osd-02 osd
ceph orch host label add ceph-osd-03 osd

View all the devices on the storage nodes:

# ceph orch device ls
HOST         PATH      TYPE   SIZE  DEVICE                           AVAIL  REJECT REASONS  
ceph-mon-01  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked          
ceph-mon-02  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked          
ceph-mon-03  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked          
ceph-osd-01  /dev/sdb  hdd   50.0G  HC_Volume_5680482                True                   
ceph-osd-01  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked          
ceph-osd-02  /dev/sdb  hdd   50.0G  HC_Volume_5680484                True                   
ceph-osd-02  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked          
ceph-osd-03  /dev/sdb  hdd   50.0G  HC_Volume_5680483                True                   
ceph-osd-03  /dev/sda  hdd   76.2G  QEMU_HARDDISK_drive-scsi0-0-0-0  False  locked

A storage device is considered available if all of the following conditions are met:

- The device must have no partitions.
- The device must not have any LVM state.
- The device must not be mounted.
- The device must not contain a file system.
- The device must not contain a Ceph BlueStore OSD.
- The device must be larger than 5 GB.
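
If a device is rejected only because of leftover partitions, LVM state, or an old file system, it can be wiped so that it passes the checks above. This is destructive and erases everything on the disk; the host and device below are examples from this lab:

# DESTRUCTIVE: zap /dev/sdb on ceph-osd-01 so Ceph can consume it
ceph orch device zap ceph-osd-01 /dev/sdb --force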

Tell Ceph which of the available and unused storage devices to consume:

# ceph orch daemon add osd ceph-osd-01:/dev/sdb
Created osd(s) 0 on host 'ceph-osd-01'
# ceph orch daemon add osd ceph-osd-02:/dev/sdb
Created osd(s) 1 on host 'ceph-osd-02'
# ceph orch daemon add osd ceph-osd-03:/dev/sdb
Created osd(s) 2 on host 'ceph-osd-03'
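
Instead of adding each device by hand, Ceph can also be told to consume any device that is available and unused, now and as new disks appear:

# Alternative: automatically create OSDs on all available devices
ceph orch apply osd --all-available-devices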

Check the ceph status:

# ceph -s
  cluster:
    id:     8dbf2eda-a513-11ea-a3c1-a534e03850ee
    health: HEALTH_OK
 
  services:
    mon: 1 daemons, quorum ceph-mon-01 (age 23m)
    mgr: ceph-mon-01.qhokxo(active, since 22m), standbys: ceph-mon-03.rhhvzc
    osd: 3 osds: 3 up (since 36s), 3 in (since 36s)
 
  data:
    pools:   1 pools, 1 pgs
    objects: 1 objects, 0 B
    usage:   3.0 GiB used, 147 GiB/150 GiB avail
    pgs:     1 active+clean
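
To see how the new OSDs map onto hosts and to check raw capacity, two more standard commands are useful:

# Show the CRUSH tree of hosts and OSDs
ceph osd tree
# Show cluster-wide capacity and usage
ceph df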

Step 5: Access the Ceph Dashboard

The Ceph dashboard is now available at the address of the active MGR server, which you can find in the output of:

# ceph -s

For this setup, it will be:

URL: https://ceph-mon-01:8443/
User: admin
Password: Hyman@theitroad

Log in with these credentials to access the Ceph management dashboard.
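
If you are not sure which MGR is active, the dashboard URL can also be read directly from the manager; ceph mgr services prints the endpoints of the enabled manager modules:

# Print endpoints of enabled mgr modules, including the dashboard
ceph mgr services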