How to Configure the TripleO Undercloud to Deploy the Overcloud in OpenStack
In this article we will configure the undercloud director node to deploy an overcloud in OpenStack. We will cover the following steps:
Get and upload the images for overcloud introspection and deployment
Create virtual machines for the overcloud nodes (compute and controller)
Configure the Virtual Bare Metal Controller
Import and register the overcloud nodes
Introspect the overcloud nodes
Tag the overcloud nodes to profiles
Finally, start the overcloud node deployment
Deploying the Overcloud in OpenStack
The director requires several disk images to provision the overcloud nodes. These include:
An introspection kernel and ramdisk, used for bare metal system introspection over PXE boot.
A deployment kernel and ramdisk, used for system provisioning and deployment.
An overcloud kernel, ramdisk, and full image, which form the base overcloud system that is written to the node's hard disk.
Get the images for the overcloud
[stack@director ~]$ sudo yum install rhosp-director-images rhosp-director-images-ipa -y
[stack@director ~]$ cp /usr/share/rhosp-director-images/overcloud-full-latest-10.0.tar /usr/share/rhosp-director-images/ironic-python-agent-latest-10.0.tar ~/images/
[stack@director ~]$ cd images/
Extract the archives to the images directory in the stack user's home directory (/home/stack/images):
[stack@director images]$ tar -xf overcloud-full-latest-10.0.tar
[stack@director images]$ tar -xf ironic-python-agent-latest-10.0.tar
[stack@director images]$ ls -l
total 3848560
-rw-r--r--. 1 stack stack  425703356 Aug 22 02:15 ironic-python-agent.initramfs
-rwxr-xr-x. 1 stack stack    6398256 Aug 22 02:15 ironic-python-agent.kernel
-rw-r--r--. 1 stack stack  432107520 Oct  8 10:14 ironic-python-agent-latest-10.0.tar
-rw-r--r--. 1 stack stack   61388282 Aug 22 02:29 overcloud-full.initrd
-rw-r--r--. 1 stack stack 1537239040 Oct  8 10:13 overcloud-full-latest-10.0.tar
-rw-r--r--. 1 stack stack 1471676416 Oct  8 10:18 overcloud-full.qcow2
-rwxr-xr-x. 1 stack stack    6398256 Aug 22 02:29 overcloud-full.vmlinuz
Change the root password of the overcloud nodes
We need virt-customize to change the root password.
[stack@director images]$ sudo yum install -y libguestfs-tools
Run the command below, replacing the highlighted "password" with the password we want to assign to "root"
[stack@director images]$ virt-customize -a overcloud-full.qcow2 --root-password password:password
[   0.0] Examining the guest ...
[  40.9] Setting a random seed
[  40.9] Setting the machine ID in /etc/machine-id
[  40.9] Setting passwords
[  63.0] Finishing off
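While the image is open for customization, virt-customize can also inject an SSH public key for root, which is handy for troubleshooting failed deployments later. A minimal sketch, assuming the stack user's public key lives at /home/stack/.ssh/id_rsa.pub (adjust the path to your own key):

[stack@director images]$ virt-customize -a overcloud-full.qcow2 --ssh-inject root:file:/home/stack/.ssh/id_rsa.pub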
Import these images into the director:
[stack@director images]$ openstack overcloud image upload --image-path ~/images/
Image "overcloud-full-vmlinuz" was uploaded.
+--------------------------------------+------------------------+-------------+---------+--------+
| ID                                   | Name                   | Disk Format | Size    | Status |
+--------------------------------------+------------------------+-------------+---------+--------+
| db69fe5c-2b06-4d56-914b-9fb6b32130fe | overcloud-full-vmlinuz | aki         | 6398256 | active |
+--------------------------------------+------------------------+-------------+---------+--------+
Image "overcloud-full-initrd" was uploaded.
+--------------------------------------+-----------------------+-------------+----------+--------+
| ID                                   | Name                  | Disk Format | Size     | Status |
+--------------------------------------+-----------------------+-------------+----------+--------+
| 56e387a9-e570-4bff-be91-16fbc9bb7bcc | overcloud-full-initrd | ari         | 61388282 | active |
+--------------------------------------+-----------------------+-------------+----------+--------+
Image "overcloud-full" was uploaded.
+--------------------------------------+----------------+-------------+------------+--------+
| ID                                   | Name           | Disk Format | Size       | Status |
+--------------------------------------+----------------+-------------+------------+--------+
| 234179da-b9ff-424d-ac94-83042b5f073e | overcloud-full | qcow2       | 1471676416 | active |
+--------------------------------------+----------------+-------------+------------+--------+
Image "bm-deploy-kernel" was uploaded.
+--------------------------------------+------------------+-------------+---------+--------+
| ID                                   | Name             | Disk Format | Size    | Status |
+--------------------------------------+------------------+-------------+---------+--------+
| 3b73c55b-6184-41df-a6e5-9a56cfb73238 | bm-deploy-kernel | aki         | 6398256 | active |
+--------------------------------------+------------------+-------------+---------+--------+
Image "bm-deploy-ramdisk" was uploaded.
+--------------------------------------+-------------------+-------------+-----------+--------+
| ID                                   | Name              | Disk Format | Size      | Status |
+--------------------------------------+-------------------+-------------+-----------+--------+
| 9624b338-cb5f-45e0-b0f4-3fe78f0f3f45 | bm-deploy-ramdisk | ari         | 425703356 | active |
+--------------------------------------+-------------------+-------------+-----------+--------+
View the list of images in the CLI:
[stack@director images]$ openstack image list
+--------------------------------------+------------------------+--------+
| ID                                   | Name                   | Status |
+--------------------------------------+------------------------+--------+
| 9624b338-cb5f-45e0-b0f4-3fe78f0f3f45 | bm-deploy-ramdisk      | active |
| 3b73c55b-6184-41df-a6e5-9a56cfb73238 | bm-deploy-kernel       | active |
| 234179da-b9ff-424d-ac94-83042b5f073e | overcloud-full         | active |
| 56e387a9-e570-4bff-be91-16fbc9bb7bcc | overcloud-full-initrd  | active |
| db69fe5c-2b06-4d56-914b-9fb6b32130fe | overcloud-full-vmlinuz | active |
+--------------------------------------+------------------------+--------+
This list does not show the introspection PXE images. The director copies those files to /httpboot.
[stack@director images]$ ls -l /httpboot/
total 421988
-rwxr-xr-x. 1 root             root               6398256 Oct  8 10:19 agent.kernel
-rw-r--r--. 1 root             root             425703356 Oct  8 10:19 agent.ramdisk
-rw-r--r--. 1 ironic           ironic                 759 Oct  8 10:41 boot.ipxe
-rw-r--r--. 1 ironic-inspector ironic-inspector       473 Oct  8 09:43 inspector.ipxe
drwxr-xr-x. 2 ironic           ironic                   6 Oct  8 10:51 pxelinux.cfg
Set a nameserver on the undercloud's neutron subnet
Overcloud nodes require a nameserver so that they can resolve hostnames through DNS. For a standard overcloud without network isolation, the nameserver is defined on the undercloud's neutron subnet.
[stack@director images]$ neutron subnet-list
+--------------------------------------+------+------------------+--------------------------------------------------------+
| id                                   | name | cidr             | allocation_pools                                       |
+--------------------------------------+------+------------------+--------------------------------------------------------+
| 7b7f251d-edfc-46ea-8d56-f9f2397e01d1 |      | 192.168.126.0/24 | {"start": "192.168.126.100", "end": "192.168.126.150"} |
+--------------------------------------+------+------------------+--------------------------------------------------------+
将"名称服务器"更新为"子网"
[stack@director images]$ neutron subnet-update 7b7f251d-edfc-46ea-8d56-f9f2397e01d1 --dns-nameserver 192.168.122.1
Updated subnet: 7b7f251d-edfc-46ea-8d56-f9f2397e01d1
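As an optional aside: if the overcloud nodes should have more than one resolver, the legacy neutron client can set a list in one call. A sketch, assuming a second resolver at 8.8.8.8 (substitute your own):

[stack@director images]$ neutron subnet-update 7b7f251d-edfc-46ea-8d56-f9f2397e01d1 --dns-nameservers list=true 192.168.122.1 8.8.8.8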
Verify the change
[stack@director images]$ neutron subnet-show 7b7f251d-edfc-46ea-8d56-f9f2397e01d1
+-------------------+-------------------------------------------------------------------+
| Field             | Value                                                             |
+-------------------+-------------------------------------------------------------------+
| allocation_pools  | {"start": "192.168.126.100", "end": "192.168.126.150"}            |
| cidr              | 192.168.126.0/24                                                  |
| created_at        | 2016-10-08T04:20:48Z                                              |
| description       |                                                                   |
| dns_nameservers   | 192.168.122.1                                                     |
| enable_dhcp       | True                                                              |
| gateway_ip        | 192.168.126.1                                                     |
| host_routes       | {"destination": "169.254.169.254/32", "nexthop": "192.168.126.1"} |
| id                | 7b7f251d-edfc-46ea-8d56-f9f2397e01d1                              |
| ip_version        | 4                                                                 |
| ipv6_address_mode |                                                                   |
| ipv6_ra_mode      |                                                                   |
| name              |                                                                   |
| network_id        | 7047a1c6-86ac-4237-8fe5-b0bb26538752                              |
| project_id        | 681d63dc1f1d4c5892941c68e6d07c54                                  |
| revision_number   | 3                                                                 |
| service_types     |                                                                   |
| subnetpool_id     |                                                                   |
| tenant_id         | 681d63dc1f1d4c5892941c68e6d07c54                                  |
| updated_at        | 2016-10-08T04:50:09Z                                              |
+-------------------+-------------------------------------------------------------------+
Create virtual machines for the overcloud
My controller node configuration:
Operating System             | CentOS 7.4
VM Name                      | controller0
vCPUs                        | 2
Memory                       | 8192 MB
Disk                         | 60 GB
NIC 1 (provisioning network) | MAC: 52:54:00:36:65:a6
NIC 2 (external network)     | MAC: 52:54:00:c4:34:ca
My compute node configuration:
Operating System             | CentOS 7.4
VM Name                      | compute1
vCPUs                        | 2
Memory                       | 8192 MB
Disk                         | 60 GB
NIC 1 (provisioning network) | MAC: 52:54:00:13:b8:aa
NIC 2 (external network)     | MAC: 52:54:00:d1:93:28
For the overcloud we need one controller and one compute node. On the physical host, create two qcow2 disks, one for the controller node and one for the compute node.
Important Note:
We can also use virt-manager to create the virtual machines.
[root@openstack images]# qemu-img create -f qcow2 -o preallocation=metadata controller0.qcow2 60G
Formatting 'controller0.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
[root@openstack images]# qemu-img create -f qcow2 -o preallocation=metadata compute1.qcow2 60G
Formatting 'compute1.qcow2', fmt=qcow2 size=64424509440 encryption=off cluster_size=65536 preallocation='metadata' lazy_refcounts=off
[root@openstack images]# ls -lh
total 47G
-rw-r--r--. 1 root root 61G Oct  8 10:35 compute1.qcow2
-rw-r--r--. 1 root root 61G Oct  8 10:34 controller0.qcow2
-rw-------. 1 qemu qemu 81G Oct  8 10:35 director-new.qcow2
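The listing shows the full 61G because preallocation=metadata reserves the metadata up front while the data itself stays sparse. If you want to confirm the files are still thin, qemu-img reports both the virtual size and the actual on-disk usage; a quick check:

[root@openstack images]# qemu-img info controller0.qcow2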
Change the ownership of the qcow2 disks to "qemu:qemu"
[root@openstack images]# chown qemu:qemu *
[root@openstack images]# ls -lh
total 47G
-rw-r--r--. 1 qemu qemu 61G Oct  8 10:35 compute1.qcow2
-rw-r--r--. 1 qemu qemu 61G Oct  8 10:34 controller0.qcow2
-rw-------. 1 qemu qemu 81G Oct  8 10:35 director-new.qcow2
Next, install virt-install so that we can create the virtual machines from the CLI.
[root@openstack images]# yum -y install virt-install
Here we create the XML definition files for the two virtual machines ("controller0" and "compute1").
[root@openstack images]# virt-install --ram 8192 --vcpus 2 --os-variant rhel7 --disk path=/var/lib/libvirt/images/controller0.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --name controller0 --cpu IvyBridge,+vmx --dry-run --print-xml > /tmp/controller0.xml
[root@openstack images]# virt-install --ram 8192 --vcpus 2 --os-variant rhel7 --disk path=/var/lib/libvirt/images/compute1.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network:provisioning --network network:external --name compute1 --cpu IvyBridge,+vmx --dry-run --print-xml > /tmp/compute1.xml
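Note that virt-install assigns random MAC addresses unless told otherwise, and the node registration template later in this article must reference the provisioning MACs exactly. It can therefore be convenient to pin the MACs at creation time instead of reading them back afterwards; a sketch for controller0, assuming the MACs from the table above:

[root@openstack images]# virt-install --ram 8192 --vcpus 2 --os-variant rhel7 --disk path=/var/lib/libvirt/images/controller0.qcow2,device=disk,bus=virtio,format=qcow2 --noautoconsole --vnc --network network=provisioning,mac=52:54:00:36:65:a6 --network network=external,mac=52:54:00:c4:34:ca --name controller0 --cpu IvyBridge,+vmx --dry-run --print-xml > /tmp/controller0.xml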
Verify the files we created above
[root@openstack images]# ls -l /tmp/*.xml
-rw-r--r--. 1 root root 1850 Oct  8 10:45 /tmp/compute1.xml
-rw-r--r--. 1 root root 1856 Oct  8 10:45 /tmp/controller0.xml
-rw-r--r--. 1 root root  207 Oct  7 15:52 /tmp/external.xml
-rw-r--r--. 1 root root  117 Oct  6 19:45 /tmp/provisioning.xml
Now it is time to define these virtual machines
[root@openstack images]# virsh define --file /tmp/controller0.xml
Domain controller0 defined from /tmp/controller0.xml
[root@openstack images]# virsh define --file /tmp/compute1.xml
Domain compute1 defined from /tmp/compute1.xml
Verify the virtual machines now defined on the host. Our undercloud director is running on "director-new"
[root@openstack images]# virsh list --all
 Id    Name                           State
----------------------------------------------------
 6     director-new                   running
 -     compute1                       shut off
 -     controller0                    shut off
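Before registering the nodes, note down the MAC address of each VM's provisioning NIC, since the node definition template below must match them exactly. A quick way to read them back, assuming the first listed interface is the one on the provisioning network:

[root@openstack images]# virsh domiflist controller0
[root@openstack images]# virsh domiflist compute1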
Configure the Virtual Bare Metal Controller (VBMC)
The director can use virtual machines on a KVM host as nodes, controlling their power management through an emulated IPMI device. Since our lab is set up on KVM, we will use VBMC to help register the nodes.
Because our virtual machines do not have iLO or a similar utility for power management, we will use VBMC. We can get the package from the OpenStack git repository.
[root@openstack ~]# wget https://git.openstack.org/openstack/virtualbmc
Next, install the VBMC package
[root@openstack ~]# yum install -y python-virtualbmc
Start adding the virtual machines to the vbmc domain list
Note:
Use a different port for each virtual machine. Port numbers below 1025 require root privileges on the system.
[root@openstack images]# vbmc add controller0 --port 6320 --username admin --password redhat
[root@openstack images]# vbmc add compute1 --port 6321 --username admin --password redhat
List the available domains
[root@openstack images]# vbmc list
+-------------+--------+---------+------+
| Domain name | Status | Address | Port |
+-------------+--------+---------+------+
| compute1    | down   | ::      | 6321 |
| controller0 | down   | ::      | 6320 |
+-------------+--------+---------+------+
Next, start all the virtual BMCs:
[root@openstack images]# vbmc start compute1
[root@openstack images]# vbmc start controller0
Check the status again
[root@openstack images]# vbmc list
+-------------+---------+---------+------+
| Domain name | Status  | Address | Port |
+-------------+---------+---------+------+
| compute1    | running | ::      | 6321 |
| controller0 | running | ::      | 6320 |
+-------------+---------+---------+------+
Now all our domains are in the running state.
Note:
With VBMC we will use pxe_ipmitool as the driver to execute all IPMI commands, so make sure it is loaded and available on the undercloud.
To test the IPMI power emulation, use the ipmitool command-line utility with the following syntax
[root@director ~]# ipmitool -I lanplus -H 192.168.122.1 -L ADMINISTRATOR -p 6320 -U admin -R 3 -N 5 -P redhat power status
Chassis Power is off
[root@director ~]# ipmitool -I lanplus -H 192.168.122.1 -L ADMINISTRATOR -p 6321 -U admin -R 3 -N 5 -P redhat power status
Chassis Power is off
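The same syntax drives the other chassis commands, which is exactly what ironic will do behind the scenes. A sketch that powers controller0 on and back off to prove the emulation works end to end:

[root@director ~]# ipmitool -I lanplus -H 192.168.122.1 -p 6320 -U admin -P redhat power on
[root@director ~]# ipmitool -I lanplus -H 192.168.122.1 -p 6320 -U admin -P redhat power off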
Register the nodes for the overcloud
The director requires a node definition template that we create manually. This file (instack-twonodes.json) is in JSON format and contains the hardware and power management details of our nodes.
[stack@director ~]$ cat instack-twonodes.json
{
    "nodes":[
        {
            "mac":[ "52:54:00:36:65:a6" ],
            "name":"controller0",
            "cpu":"2",
            "memory":"8192",
            "disk":"60",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_addr": "192.168.122.1",
            "pm_password": "redhat",
            "pm_port": "6320"
        },
        {
            "mac":[ "52:54:00:13:b8:aa" ],
            "name":"compute1",
            "cpu":"2",
            "memory":"8192",
            "disk":"60",
            "arch":"x86_64",
            "pm_type":"pxe_ipmitool",
            "pm_user":"admin",
            "pm_addr": "192.168.122.1",
            "pm_password": "redhat",
            "pm_port": "6321"
        }
    ]
}
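A malformed template fails the import with an unhelpful error, so it is worth validating the JSON first. A quick sketch using Python's built-in validator, which exits non-zero on a syntax error:

[stack@director ~]$ python -m json.tool instack-twonodes.json > /dev/null && echo OK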
To deploy the overcloud in OpenStack, the next step is to register the overcloud's nodes, which in our case are a single controller node and a single compute node. The workflow (Mistral) service manages this task set, and includes the ability to plan and monitor multiple tasks and actions.
[stack@director ~]$ openstack baremetal import --json instack-twonodes.json
Started Mistral Workflow. Execution ID: 6ad7c642-275e-4293-988a-b84c28fd99c1
Successfully registered node UUID 633f53f7-7b3c-454a-8d39-bd9c4371d248
Successfully registered node UUID f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5
Started Mistral Workflow. Execution ID: 5989359f-3cad-43cb-9ea3-e86ebee87964
Successfully set all nodes to available.
After the import, check the list of available ironic nodes
[stack@director ~]$ openstack baremetal node list
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | None          | power off   | available          | False       |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | None          | power off   | available          | False       |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
This assigns the bm-deploy-kernel and bm-deploy-ramdisk images to each node
[stack@director ~]$ openstack baremetal configure boot
Set the provisioning state to "manageable" with this command
[stack@director ~]$ for node in $(openstack baremetal node list -c UUID -f value) ; do openstack baremetal node manage $node ; done
The nodes are now registered and configured in the director. View the list of these nodes in the CLI:
[stack@director ~]$ openstack baremetal node list
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | None          | power off   | manageable         | False       |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | None          | power off   | manageable         | False       |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
In the following output, confirm that deploy_kernel and deploy_ramdisk were assigned to the new nodes.
[stack@director ~]$ for i in controller0 compute1 ; do ironic node-show $i | grep -1 deploy; done
| driver       | pxe_ipmitool                                                           |
| driver_info  | {u'ipmi_port': u'6320', u'ipmi_username': u'admin', u'deploy_kernel':  |
|              | u'3b73c55b-6184-41df-a6e5-9a56cfb73238', u'ipmi_address':              |
|              | u'192.168.122.1', u'deploy_ramdisk': u'9624b338-cb5f-                  |
|              | 45e0-b0f4-3fe78f0f3f45', u'ipmi_password': u'******'}                  |
| driver       | pxe_ipmitool                                                           |
| driver_info  | {u'ipmi_port': u'6321', u'ipmi_username': u'admin', u'deploy_kernel':  |
|              | u'3b73c55b-6184-41df-a6e5-9a56cfb73238', u'ipmi_address':              |
|              | u'192.168.122.1', u'deploy_ramdisk': u'9624b338-cb5f-                  |
|              | 45e0-b0f4-3fe78f0f3f45', u'ipmi_password': u'******'}                  |
Inspect the hardware of the nodes
The director can run an introspection process on each node. This process causes each node to boot an introspection agent over PXE. The agent collects hardware data from the node and sends it back to the director, which then stores the introspection data in the OpenStack Object Storage (swift) service running on the director. The director uses this hardware information for various purposes, such as profile tagging, benchmarking, and manual root disk assignment.
Important Note:
Since we are using VirtualBMC, we cannot use the "openstack overcloud node introspect --all-manageable --provide" command: the virtual machines are powered on through ports rather than distinct IP addresses, so bulk introspection is not possible on the virtual machines and we introspect them one at a time instead.
[stack@director ~]$ for node in $(openstack baremetal node list -c UUID -f value) ; do openstack overcloud node introspect $node --provide; done
Started Mistral Workflow. Execution ID: 123c4290-82ba-4766-8fdc-65878eac03ac
Waiting for introspection to finish...
Successfully introspected all nodes.
Introspection completed.
Started Mistral Workflow. Execution ID: 5b6009a1-855a-492b-9196-9c0291913d2f
Successfully set all nodes to available.
Started Mistral Workflow. Execution ID: 7f9a5d65-c94a-496d-afe2-e649a85d5912
Waiting for introspection to finish...
Successfully introspected all nodes.
Introspection completed.
Started Mistral Workflow. Execution ID: ffb4a0c5-3090-4d88-b407-2a8e06035485
Successfully set all nodes to available.
Monitor the progress of the introspection with the following command in a separate terminal window:
[stack@director ~]$ sudo journalctl -l -u openstack-ironic-inspector -u openstack-ironic-inspector-dnsmasq -u openstack-ironic-conductor -f
Check the introspection status
[stack@director ~]$ for node in $(openstack baremetal node list -c UUID -f value) ; do echo -e "\n"$node; openstack baremetal introspection status $node; done

633f53f7-7b3c-454a-8d39-bd9c4371d248
+----------+-------+
| Field    | Value |
+----------+-------+
| error    | None  |
| finished | True  |
+----------+-------+

f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5
+----------+-------+
| Field    | Value |
+----------+-------+
| error    | None  |
| finished | True  |
+----------+-------+
Collect the introspection data for the controller
We can examine the introspection data collected for individual nodes. This example covers the steps to get this information for the controller node
[stack@director ~]$ openstack baremetal node show controller0
+------------------------+----------------------------------------------------------------------------------------------------+
| Field                  | Value                                                                                              |
+------------------------+----------------------------------------------------------------------------------------------------+
| clean_step             | {}                                                                                                 |
| console_enabled        | False                                                                                              |
| created_at             | 2016-10-08T04:55:22+00:00                                                                          |
| driver                 | pxe_ipmitool                                                                                       |
| driver_info            | {u'ipmi_port': u'6320', u'ipmi_username': u'admin', u'deploy_kernel':                              |
|                        | u'3b73c55b-6184-41df-a6e5-9a56cfb73238', u'ipmi_address': u'192.168.122.1', u'deploy_ramdisk':     |
|                        | u'9624b338-cb5f-45e0-b0f4-3fe78f0f3f45', u'ipmi_password': u'******'}                              |
| driver_internal_info   | {}                                                                                                 |
| extra                  | {u'hardware_swift_object': u'extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248'}                 |
| inspection_finished_at | None                                                                                               |
| inspection_started_at  | None                                                                                               |
| instance_info          | {}                                                                                                 |
| instance_uuid          | None                                                                                               |
| last_error             | None                                                                                               |
| maintenance            | False                                                                                              |
| maintenance_reason     | None                                                                                               |
| name                   | controller0                                                                                        |
| ports                  | [{u'href': u'http://192.168.126.2:13385/v1/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/ports',      |
|                        | u'rel': u'self'}, {u'href':                                                                        |
|                        | u'http://192.168.126.2:13385/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/ports', u'rel':            |
|                        | u'bookmark'}]                                                                                      |
| power_state            | power off                                                                                          |
| properties             | {u'memory_mb': u'8192', u'cpu_arch': u'x86_64', u'local_gb': u'59', u'cpus': u'2',                 |
|                        | u'capabilities': u'cpu_vt:true,cpu_aes:true,cpu_hugepages:true,boot_option:local'}                 |
| provision_state        | available                                                                                          |
| provision_updated_at   | 2016-10-08T05:00:44+00:00                                                                          |
| raid_config            | {}                                                                                                 |
| reservation            | None                                                                                               |
| states                 | [{u'href': u'http://192.168.126.2:13385/v1/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/states',     |
|                        | u'rel': u'self'}, {u'href':                                                                        |
|                        | u'http://192.168.126.2:13385/nodes/633f53f7-7b3c-454a-8d39-bd9c4371d248/states', u'rel':           |
|                        | u'bookmark'}]                                                                                      |
| target_power_state     | None                                                                                               |
| target_provision_state | None                                                                                               |
| target_raid_config     | {}                                                                                                 |
| updated_at             | 2016-10-08T05:00:51+00:00                                                                          |
| uuid                   | 633f53f7-7b3c-454a-8d39-bd9c4371d248                                                               |
+------------------------+----------------------------------------------------------------------------------------------------+
Get the password of the ironic user from the undercloud-passwords.conf file
[stack@director ~]$ grep ironic undercloud-passwords.conf
undercloud_ironic_password=f670269d38916530ac00e5f1af6bf8e39619a9f5
Here we use the ironic password shown above as OS_PASSWORD, and the hardware_swift_object value ("extra_hardware-...") from the node's extra field as the object name.
[stack@director ~]$ OS_TENANT_NAME=service OS_USERNAME=ironic OS_PASSWORD=f670269d38916530ac00e5f1af6bf8e39619a9f5 openstack object save ironic-inspector extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248
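Depending on the client versions installed, the same data can often be pulled without handling the swift credentials directly; a sketch, assuming python-ironic-inspector-client is recent enough to provide the "data save" subcommand:

[stack@director ~]$ openstack baremetal introspection data save 633f53f7-7b3c-454a-8d39-bd9c4371d248 | jq .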
Check that the object was saved locally
[stack@director ~]$ ls -l
total 36
-rw-rw-r--. 1 stack stack  9013 Oct  8 10:34 extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248
drwxrwxr-x. 2 stack stack   245 Oct  8 10:14 images
-rw-rw-r--. 1 stack stack   836 Oct  8 10:25 instack-twonodes.json
-rw-------. 1 stack stack   725 Oct  8 09:51 stackrc
-rw-r--r--. 1 stack stack 11150 Oct  8 09:05 undercloud.conf
-rw-rw-r--. 1 stack stack  1650 Oct  8 09:33 undercloud-passwords.conf
Now we can read the data with the following command
[stack@director ~]$ jq . < extra_hardware-633f53f7-7b3c-454a-8d39-bd9c4371d248
[
  [ "disk", "logical", "count", "1" ],
  [ "disk", "vda", "size", "64" ],
  [ "disk", "vda", "vendor", "0x1af4" ],
*** output trimmed ***
  [ "system", "kernel", "version", "3.10.0-862.11.6.el7.x86_64" ],
  [ "system", "kernel", "arch", "x86_64" ],
  [ "system", "kernel", "cmdline", "ipa-inspection-callback-url=http://192.168.126.1:5050/v1/continue ipa-inspection-collectors=default,extra-hardware,numa-topology,logs systemd.journald.forward_to_console=yes BOOTIF=52:54:00:36:65:a6 ipa-debug=1 ipa-inspection-dhcp-all-interfaces=1 ipa-collect-lldp=1 initrd=agent.ramdisk" ]
]
Tag nodes into profiles
After registering and inspecting the hardware of each node, we tag the nodes into specific profiles. These profile tags match the nodes to flavors, and the flavors are in turn assigned to deployment roles. The following example shows the relationship between roles, flavors, profiles, and nodes for our controller and compute nodes:
[stack@director ~]$ openstack flavor list
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
| ID                                   | Name          |  RAM | Disk | Ephemeral | VCPUs | Is Public |
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
| 06ab97b9-6d7e-4d4d-8d6e-c2ba1e781657 | baremetal     | 4096 |   40 |         0 |     1 | True      |
| 17eec9b0-811d-4ff0-a028-29e7ff748654 | block-storage | 4096 |   40 |         0 |     1 | True      |
| 38cbb6df-4852-49d0-bbed-0bddee5173c8 | compute       | 4096 |   40 |         0 |     1 | True      |
| 88345a7e-f617-4514-9aac-0d794a32ee80 | ceph-storage  | 4096 |   40 |         0 |     1 | True      |
| dce1c321-32bb-4abf-bfd5-08f952529550 | swift-storage | 4096 |   40 |         0 |     1 | True      |
| febf52e2-5707-43b3-8f3a-069a957828fb | control       | 4096 |   40 |         0 |     1 | True      |
+--------------------------------------+---------------+------+------+-----------+-------+-----------+
[stack@director ~]$ openstack baremetal node list
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| UUID                                 | Name        | Instance UUID | Power State | Provisioning State | Maintenance |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | None          | power off   | available          | False       |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | None          | power off   | available          | False       |
+--------------------------------------+-------------+---------------+-------------+--------------------+-------------+
Adding the profile:control and profile:compute options tags the two nodes into their respective profiles. These commands also set the boot_option:local parameter, which defines the boot mode for each node.
[stack@director ~]$ openstack baremetal node set --property capabilities='profile:control,boot_option:local' 633f53f7-7b3c-454a-8d39-bd9c4371d248
[stack@director ~]$ openstack baremetal node set --property capabilities='profile:compute,boot_option:local' f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5
Once node tagging is complete, check the assigned and possible profiles:
[stack@director ~]$ openstack overcloud profiles list
+--------------------------------------+-------------+-----------------+-----------------+-------------------+
| Node UUID                            | Node Name   | Provision State | Current Profile | Possible Profiles |
+--------------------------------------+-------------+-----------------+-----------------+-------------------+
| 633f53f7-7b3c-454a-8d39-bd9c4371d248 | controller0 | available       | control         |                   |
| f44f0b75-cb0c-46fe-ae44-c9d71ae1f3a5 | compute1    | available       | compute         |                   |
+--------------------------------------+-------------+-----------------+-----------------+-------------------+
We can also check the flavors, to confirm that the capabilities set on each flavor match the profiles we assigned to the ironic nodes
[stack@director ~]$ openstack flavor show control -c properties
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| properties | capabilities:boot_option='local', capabilities:profile='control' |
+------------+------------------------------------------------------------------+
[stack@director ~]$ openstack flavor show compute -c properties
+------------+------------------------------------------------------------------+
| Field      | Value                                                            |
+------------+------------------------------------------------------------------+
| properties | capabilities:boot_option='local', capabilities:profile='compute' |
+------------+------------------------------------------------------------------+
Deploy the Overcloud
The final stage of deploying the overcloud in our OpenStack environment is to run the openstack overcloud deploy command.
[stack@director ~]$ openstack overcloud deploy --templates --control-scale 1 --compute-scale 1 --neutron-tunnel-types vxlan --neutron-network-type vxlan
Removing the current plan files
Uploading new plan files
Started Mistral Workflow. Execution ID: 5dd005ed-67c8-4cef-8d16-c196fc852051
Plan updated
Deploying templates in the directory /tmp/tripleoclient-LDQ2md/tripleo-heat-templates
Started Mistral Workflow. Execution ID: 23e8f1b0-6e4c-444b-9890-d48fef1a96a6
2016-10-08 17:11:42Z [overcloud]: CREATE_IN_PROGRESS Stack CREATE started
2016-10-08 17:11:42Z [overcloud.ServiceNetMap]: CREATE_IN_PROGRESS state changed
2016-10-08 17:11:43Z [overcloud.HorizonSecret]: CREATE_IN_PROGRESS state changed
2016-10-08 17:11:43Z [overcloud.ServiceNetMap]: CREATE_IN_PROGRESS Stack CREATE started
2016-10-08 17:11:43Z [overcloud.ServiceNetMap.ServiceNetMapValue]: CREATE_IN_PROGRESS state changed
2016-10-08 17:11:43Z [overcloud.Networks]: CREATE_IN_PROGRESS state changed
*** Output Trimmed ***
2016-10-08 17:53:25Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_IN_PROGRESS state changed
2016-10-08 17:54:22Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet.ControllerPostPuppetRestart]: CREATE_COMPLETE state changed
2016-10-08 17:54:22Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE Stack CREATE completed successfully
2016-10-08 17:54:23Z [overcloud.AllNodesDeploySteps.ControllerPostPuppet]: CREATE_COMPLETE state changed
2016-10-08 17:54:23Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE Stack CREATE completed successfully
2016-10-08 17:54:24Z [overcloud.AllNodesDeploySteps]: CREATE_COMPLETE state changed
2016-10-08 17:54:24Z [overcloud]: CREATE_COMPLETE Stack CREATE completed successfully
Stack overcloud CREATE_COMPLETE
Overcloud Endpoint: http://192.168.126.107:5000/v2.0
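The deploy command blocks until heat finishes, which can take an hour or more on virtual hardware. While it runs, the underlying heat stack can be watched from a second terminal (with stackrc sourced); a sketch that lists any nested resources not yet complete, assuming the heat OSC plugin's -n nesting-depth flag is available:

[stack@director ~]$ openstack stack resource list overcloud -n 5 | grep -v CREATE_COMPLETE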
At this point our overcloud deployment is complete. Check the stack status
[stack@director ~]$ openstack stack list
+--------------------------------------+------------+-----------------+----------------------+--------------+
| ID                                   | Stack Name | Stack Status    | Creation Time        | Updated Time |
+--------------------------------------+------------+-----------------+----------------------+--------------+
| 952eeb74-0c29-4cdc-913c-5d834c8ad6c5 | overcloud  | CREATE_COMPLETE | 2016-10-08T17:11:41Z | None         |
+--------------------------------------+------------+-----------------+----------------------+--------------+
Get the list of overcloud nodes
[stack@director ~]$ nova list
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
| ID                                   | Name                   | Status | Task State | Power State | Networks                 |
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
| 9a8307e3-7e53-44f8-a77b-7e0115ac75aa | overcloud-compute-0    | ACTIVE | -          | Running     | ctlplane=192.168.126.112 |
| 3667b67f-802f-4c13-ba86-150576cd2b16 | overcloud-controller-0 | ACTIVE | -          | Running     | ctlplane=192.168.126.113 |
+--------------------------------------+------------------------+--------+------------+-------------+--------------------------+
We can get the Horizon dashboard credentials from the overcloudrc file in the stack user's home directory (~/stack).
[stack@director ~]$ cat overcloudrc
# Clear any old environment that may conflict.
for key in $( set | awk '{FS="="}  /^OS_/ {print $1}' ); do unset $key ; done
export OS_USERNAME=admin
export OS_TENANT_NAME=admin
export NOVA_VERSION=1.1
export OS_PROJECT_NAME=admin
export OS_PASSWORD=tZQDQsbGG96t4KcXYfAM22BzN
export OS_NO_CACHE=True
export COMPUTE_API_VERSION=1.1
export no_proxy=,192.168.126.107,192.168.126.107
export OS_CLOUDNAME=overcloud
export OS_AUTH_URL=http://192.168.126.107:5000/v2.0
export PYTHONWARNINGS="ignore:Certificate has no, ignore:A true SSLContext object is not available"
So we can log in to the Horizon dashboard at 192.168.126.107 using the OS_USERNAME and OS_PASSWORD from the overcloudrc file.
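Before reaching for the dashboard, the same credentials work from the CLI, which is a quick way to confirm the overcloud is answering; a sketch:

[stack@director ~]$ source ~/overcloudrc
[stack@director ~]$ openstack service list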