Boot order (see the sketch after this list):
1. classroom
2. workstation
3. power
4. utility
5. director and the overcloud node machines
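A minimal start-up sketch from foundation0, assuming classroom, workstation, power, and utility are valid rht-vmctl targets (rht-vmctl reset is used the same way later in these notes):
[root@foundation0 ~]# for vm in classroom workstation power utility; do rht-vmctl start $vm; done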
Start the cluster
(undercloud) [stack@director ~]$ rht-overcloud.sh start
Stop the cluster
(undercloud) [stack@director ~]$ rht-overcloud.sh stop
Check node status
(undercloud) [stack@director ~]$ openstack baremetal node list
(undercloud) [stack@director ~]$ openstack baremetal node list -c Name -c 'Power State'
Check server status and start servers
(undercloud) [stack@director ~]$ openstack server list -c Name -c Status
(undercloud) [stack@director ~]$ openstack server start compute1
(undercloud) [stack@director ~]$ openstack server list -c Name -c Status
(undercloud) [stack@director ~]$ openstack server start controller0
(undercloud) [stack@director ~]$ openstack server set --state active controller0
(undercloud) [stack@director ~]$ openstack server list -c Name -c Status
If a server has trouble starting and does not respond to the start or set-state commands above, a forced reboot is often effective at clearing the problem, as in this example.
(undercloud) [stack@director ~]$ openstack server reboot --hard controller0
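To watch a rebooted server come back, the status listing can be wrapped in watch; a convenience sketch reusing the openstack server list command shown above:
(undercloud) [stack@director ~]$ watch -n 10 'openstack server list -c Name -c Status'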
Key points for resetting the overcloud
Never Need To Be Reset
• classroom
• workstation
• power
• utility
Only Reset Together As A Group
• controller0
• compute0
• compute1
• computehci0
• ceph0
• director
Resetting the overcloud without also resetting director loads a fresh overcloud while director keeps stale information about the previous overcloud that was just discarded.
[root@foundation0 ~]# rht-vmctl reset undercloud
[root@foundation0 ~]# rht-vmctl reset overcloud
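After a reset and restart, the overcloud state can be sanity-checked with the same listing commands used later in these notes:
(undercloud) [stack@director ~]$ openstack baremetal node list -c Name -c 'Power State' -c 'Provisioning State'
(undercloud) [stack@director ~]$ openstack stack list -c 'Stack Name' -c 'Stack Status'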
##########################################################################################################################
(undercloud) [stack@director ~]$ openstack service list
(undercloud) [stack@director ~]$ openstack endpoint list -c 'Service Type' -c Interface -c URL
(undercloud) [stack@director ~]$ cat undercloud.conf | egrep -v '^(#.*|$)'
[DEFAULT]
local_ip = 172.25.249.200/24
undercloud_public_vip = 172.25.249.201
undercloud_admin_vip = 172.25.249.202
undercloud_ntp_servers = 172.25.254.254
dhcp_start = 172.25.249.51
dhcp_end = 172.25.249.59
inspection_iprange = 172.25.249.150,172.25.249.180
masquerade_network = 172.25.249.0/24
undercloud_service_certificate = /etc/pki/tls/certs/undercloud.pem
generate_service_certificate = false
local_interface = eth0
network_cidr = 172.25.249.0/24
network_gateway = 172.25.249.200
hieradata_override = /home/stack/hieradata.txt
undercloud_debug = false
enable_telemetry = false
enable_ui = true
[auth]
undercloud_admin_password = redhat
[ctlplane-subnet]
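If undercloud.conf is edited, the new settings only take effect after the installer is re-run as the stack user; a sketch:
(undercloud) [stack@director ~]$ openstack undercloud install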
View the undercloud network interfaces
The br-ctlplane bridge carries the 172.25.249.0 provisioning network; the eth1 interface is on the 172.25.250.0 public (external) network.
(undercloud) [stack@director ~]$ ip addr | grep eth1
(undercloud) [stack@director ~]$ ip addr | grep br-ctlplane
(undercloud) [stack@director ~]$ openstack subnet show ctlplane-subnet
(undercloud) [stack@director ~]$ openstack image list
(undercloud) [stack@director ~]$ openstack baremetal node list -c Name -c 'Power State' -c 'Provisioning State'
(undercloud) [stack@director ~]$ openstack stack list
(undercloud) [stack@director ~]$ openstack stack list -c 'Stack Name' -c 'Stack Status'
(undercloud) [stack@director ~]$ openstack compute service list
(undercloud) [stack@director ~]$ systemctl status docker-distribution.service
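docker-distribution is the undercloud's local container image registry; its contents can be listed over the standard Registry v2 API. A sketch, assuming the registry listens on the undercloud local_ip on port 8787 (the usual TripleO default):
(undercloud) [stack@director ~]$ curl -s http://172.25.249.200:8787/v2/_catalog | jq .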
[heat-admin@controller0 ~]$ sudo docker ps | grep nova
[heat-admin@controller0 ~]$ sudo docker ps -a --format="{{.Names}} {{.Status}}" | grep nova
[heat-admin@controller0 ~]$ sudo docker ps --filter status=running --format="{{.ID}} {{.Names}} {{.Status}}"
[heat-admin@controller0 ~]$ sudo docker ps --filter status=running --format="{{.ID}} {{.Names}} {{.Status}}" | grep horizon
[root@controller0 ~]# docker exec -it keystone /bin/bash
[root@controller0 ~]# docker exec -t keystone /openstack/healthcheck
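The same healthcheck can be looped over several containers; a sketch with example container names, assuming each of them ships the /openstack/healthcheck script:
[root@controller0 ~]# for c in keystone horizon nova_api; do docker exec -t $c /openstack/healthcheck && echo "$c healthy" || echo "$c FAILED"; done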
(undercloud) [stack@director ~]$ openstack server list -c Name -c Status -c Networks
(overcloud) [stack@director ~]$ openstack endpoint list -c 'Service Type' -c Interface -c URL
[heat-admin@controller0 ~]$ ip addr | egrep 'eth0|br-ex'
[heat-admin@controller0 ~]$ sudo ovs-vsctl show
[heat-admin@controller0 ~]$ ip addr show vlan30
[heat-admin@controller0 ~]$ sudo docker inspect nova_scheduler --format "{{json .HostConfig.Binds}}" | jq .
[root@controller0 ~]# docker inspect nova_api -f "{{.Config.Labels.managed_by}}"
[root@controller0 ~]# paunch debug --file /var/lib/tripleo-config/hashed-docker-container-startup-config-step_4.json --container nova_scheduler --action print-cmd
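print-cmd only prints the docker run command that paunch generates from the hashed startup-config JSON; swapping in the run action should recreate the container from that same definition (a sketch, assuming this paunch version supports --action run):
[root@controller0 ~]# paunch debug --file /var/lib/tripleo-config/hashed-docker-container-startup-config-step_4.json --container nova_scheduler --action run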