Problem on the compute node
Over the past few days I have tested OpenStack Train on CentOS 7 and OpenStack Victoria on CentOS 8, and both behave the same way: when libvirtd.service and openstack-nova-compute.service are started on the compute node, the log reports "kernel doesn't support AMD SEV".
I am not sure whether the problem is my host's CPU or something else; hoping someone more experienced can point me in the right direction.
Host machine: Intel i7-6700 / 24 GB RAM / 1 TB HDD
Virtual machines: VMware Workstation 16 Pro
OpenStack Train installation
172.16.186.111 / 192.168.1.111 / 4 GB / 2 vCPU / controller / CentOS 7 / minimal English-only install
172.16.186.112 / 192.168.1.112 / 4 GB / 2 vCPU / compute / CentOS 7 / minimal English-only install
Installation
Generate a random value for each of the passwords below:
openssl rand -hex 10
Password name       Description
Database password (no variable used)    Root password for the database
ADMIN_PASS          Password of user admin
CINDER_DBPASS       Database password for the Block Storage service
CINDER_PASS         Password of Block Storage service user cinder
DASH_DBPASS         Database password for the Dashboard
DEMO_PASS           Password of user demo
GLANCE_DBPASS       Database password for the Image service
GLANCE_PASS         Password of Image service user glance
KEYSTONE_DBPASS     Database password for the Identity service
METADATA_SECRET     Secret for the metadata proxy
NEUTRON_DBPASS      Database password for the Networking service
NEUTRON_PASS        Password of Networking service user neutron
NOVA_DBPASS         Database password for the Compute service
NOVA_PASS           Password of Compute service user nova
PLACEMENT_PASS      Password of the Placement service user placement
RABBIT_PASS         Password of RabbitMQ user openstack
KVM is configured as the default hypervisor for Compute.
To explicitly enable KVM, add the following configuration options to the /etc/nova/nova.conf file:
compute_driver = libvirt.LibvirtDriver
[libvirt]
virt_type = kvm
For x86-based systems, determine whether the svm or vmx CPU extensions are present:
grep -E 'svm|vmx' /proc/cpuinfo
This command produces output if the CPU is capable of hardware virtualization. Even if it does, you may still need to enable virtualization in the system BIOS for full support.
If there is no output, make sure your CPU and motherboard support hardware virtualization, and verify that any related hardware virtualization options are enabled in the system BIOS.
Every manufacturer's BIOS is different. If you must enable virtualization in the BIOS, look for an option containing the words virtualization, VT, VMX, or SVM.
To list the loaded kernel modules and verify that the kvm modules are loaded:
lsmod | grep kvm
If the output includes kvm_intel or kvm_amd, the hardware virtualization modules are loaded and your kernel meets the module requirements for OpenStack Compute.
If the output does not show that the kvm module is loaded:
modprobe -a kvm
For Intel CPUs, run:
modprobe -a kvm-intel
For AMD CPUs, run:
modprobe -a kvm-amd
Nested guest support
To enable nested KVM guests, your compute node must load the kvm_intel or kvm_amd module with nested=1.
You can enable the nested parameter permanently by creating a file named /etc/modprobe.d/kvm.conf and populating it with the following content:
vim /etc/modprobe.d/kvm.conf
options kvm_intel nested=1
or
options kvm_amd nested=1
A reboot may be required for the change to take effect.
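As a quick check (not part of the original steps), after the reboot you can read the module parameter directly; on an Intel host with kvm_intel loaded it should print Y or 1:
cat /sys/module/kvm_intel/parameters/nested
For AMD CPUs the equivalent file is /sys/module/kvm_amd/parameters/nested.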
The steps above are the first stage and must be done, otherwise errors will occur later in the installation; additional settings on the compute node follow later.
Set the hostnames
[root@bree-1 ~]# hostnamectl set-hostname controller
[root@bree-1 ~]# reboot
[root@bree-2 ~]# hostnamectl set-hostname compute
[root@bree-2 ~]# reboot
Configure the network
[root@controller ~]# cd /etc/sysconfig/network-scripts/
[root@controller network-scripts]# cp ifcfg-ens33 ifcfg-ens36
[root@controller network-scripts]# vim ifcfg-ens33
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="ens33"
UUID="5a98aa1f-a709-4a85-a566-80e3b6409b15"
DEVICE="ens33"
ONBOOT="yes"
IPADDR="172.16.186.111"
PREFIX="24"
GATEWAY="172.16.186.2"
DNS1="172.16.186.2"
IPV6_PRIVACY="no"
[root@controller network-scripts]# vim ifcfg-ens36
TYPE="Ethernet"
PROXY_METHOD="none"
BROWSER_ONLY="no"
BOOTPROTO="none"
DEFROUTE="yes"
NAME="ens36"
DEVICE="ens36"
ONBOOT="yes"
IPADDR="192.168.1.111"
PREFIX="24"
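The edited interface files only take effect once the network service is restarted. Assuming the classic network service is what this minimal CentOS 7 install uses (it is by default), restart it and check the addresses; do the same on the compute node with its own addresses:
[root@controller network-scripts]# systemctl restart network
[root@controller network-scripts]# ip addr show ens36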
Edit the hosts file
[root@controller ~]# cat >> /etc/hosts << EOF
192.168.1.111 controller
192.168.1.112 compute
EOF
[root@controller ~]# scp /etc/hosts root@192.168.1.112:/etc
Configure the time service
[root@controller ~]# yum -y install chrony
[root@controller ~]# vim /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server ntp.aliyun.com iburst
[root@controller ~]# systemctl restart chronyd
[root@compute ~]# yum -y install chrony
[root@compute ~]# vim /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server 192.168.1.111 iburst
[root@compute ~]# systemctl restart chronyd
[root@compute ~]# chronyc sources
Install the OpenStack packages
Download and install the RDO repository RPM to enable the OpenStack repository
[root@controller ~]# yum -y install https://rdoproject.org/repos/rdo-release.rpm
[root@compute ~]# yum -y install https://rdoproject.org/repos/rdo-release.rpm
Upgrade the packages on all nodes
[root@controller ~]# yum -y upgrade
[root@controller ~]# reboot
[root@compute ~]# yum -y upgrade
[root@compute ~]# reboot
Install the OpenStack client and openstack-selinux
[root@controller ~]# yum -y install python-openstackclient openstack-selinux
[root@compute ~]# yum -y install python-openstackclient openstack-selinux
Install the database
[root@controller ~]# yum -y install mariadb mariadb-server python2-PyMySQL
[root@controller ~]# cat > /etc/my.cnf.d/openstack.cnf << EOF
[mysqld]
bind-address = 192.168.1.111
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
EOF
[root@controller ~]# systemctl enable mariadb && systemctl start mariadb
Secure the database installation
[root@controller ~]# mysql_secure_installation
Install the message queue
[root@controller ~]# yum -y install rabbitmq-server
[root@controller ~]# systemctl enable rabbitmq-server && systemctl start rabbitmq-server
Create the openstack user
[root@controller ~]# rabbitmqctl add_user openstack RABBIT_PASS
Grant the openstack user permissions
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
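To double-check that the user and its permissions exist (an optional verification step, not in the original guide), list them:
[root@controller ~]# rabbitmqctl list_users
[root@controller ~]# rabbitmqctl list_permissions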
Install Memcached
[root@controller ~]# yum -y install memcached python-memcached
[root@controller ~]# sed -i '/OPTIONS/cOPTIONS="-l 127.0.0.1,::1,controller"' /etc/sysconfig/memcached
[root@controller ~]# systemctl enable memcached && systemctl start memcached
[root@controller ~]# netstat -anpt | grep 11211
Install etcd
[root@controller ~]# yum -y install etcd
[root@controller ~]# cp -a /etc/etcd/etcd.conf{,.bak}
[root@controller ~]# cat > /etc/etcd/etcd.conf << EOF
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://192.168.1.111:2380"
ETCD_LISTEN_CLIENT_URLS="http://192.168.1.111:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.1.111:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://192.168.1.111:2379"
ETCD_INITIAL_CLUSTER="controller=http://192.168.1.111:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
[root@controller ~]# systemctl enable etcd && systemctl start etcd
[root@controller ~]# netstat -anpt |grep 2379
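As an optional check (assuming curl is available), the etcd version endpoint should answer on the client port:
[root@controller ~]# curl http://192.168.1.111:2379/version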
Install the Identity service (code name Keystone)
[root@controller ~]# mysql -u root -p
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY 'KEYSTONE_DBPASS';
Install and configure the components
[root@controller ~]# yum -y install openstack-keystone httpd mod_wsgi
[root@controller ~]# cp -a /etc/keystone/keystone.conf{,.bak}
[root@controller ~]# vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
# Configure the Fernet token provider
[token]
provider = fernet
Populate the Identity service database
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
Verify
[root@controller ~]# mysql -u keystone -p keystone -e "show tables;"
The password is KEYSTONE_DBPASS
Initialize the Fernet key repositories
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
Bootstrap the Identity service
[root@controller ~]# keystone-manage bootstrap \
  --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
Configure the Apache HTTP server
[root@controller ~]# sed -i "s/#ServerName www.example.com:80/ServerName controller/" /etc/httpd/conf/httpd.conf
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
[root@controller ~]# systemctl enable httpd && systemctl start httpd
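As an optional check, the Identity API should now answer on port 5000 with a JSON version document:
[root@controller ~]# curl http://controller:5000/v3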
Create the OpenStack client environment scripts
[root@controller ~]# cat > admin-openrc << EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
[root@controller ~]# source admin-openrc
# The demo-openrc script is only used as needed
[root@controller ~]# cat > demo-openrc << EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=myproject
export OS_USERNAME=myuser
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
[root@controller ~]# source demo-openrc
Request an authentication token using the script
[root@controller ~]# openstack token issue
Create a domain, projects, users, and roles
# OpenStack ships with a default domain, which is used here
List the domains
[root@controller ~]# openstack domain list
Create the service project
[root@controller ~]# openstack project create --domain default --description "Service Project" service
[root@controller ~]# openstack project list
Install the Image service (code name Glance)
[root@controller ~]# mysql -u root -p
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY 'GLANCE_DBPASS';
Create the service credentials
[root@controller ~]# source admin-openrc
Create the glance user
[root@controller ~]# openstack user create --domain default --password GLANCE_PASS glance
Add the admin role to the glance user and the service project
[root@controller ~]# openstack role add --project service --user glance admin
Create the glance service entity
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image
Create the Image service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
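To confirm the three endpoints were registered (an optional check), list them filtered by the glance service:
[root@controller ~]# openstack endpoint list --service glance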
Install and configure the components
[root@controller ~]# yum -y install openstack-glance
[root@controller ~]# cp -a /etc/glance/glance-api.conf{,.bak}
[root@controller ~]# vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
# The following 5 options are added manually (not present in the stock file)
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[paste_deploy]
flavor = keystone
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
Populate the Image service database
[root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance
Ignore any deprecation messages in this output
Start the Image service
[root@controller ~]# systemctl enable openstack-glance-api && systemctl start openstack-glance-api
Verify operation of the Glance service
Source the admin credentials to gain access to admin-only CLI commands
[root@controller ~]# source admin-openrc
Download the source image
[root@controller ~]# wget http://download.cirros-cloud.net/0.5.2/cirros-0.5.2-x86_64-disk.img
[root@controller ~]# glance image-create --name "cirros" \
  --file cirros-0.5.2-x86_64-disk.img \
  --disk-format qcow2 --container-format bare \
  --visibility public
This uploads the image to the Image service with the QCOW2 disk format, bare container format, and public visibility so that all projects can access it
View the image
[root@controller ~]# openstack image list
[root@controller ~]#
Install the Placement service
[root@controller ~]# mysql -u root -p
CREATE DATABASE placement;
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY 'PLACEMENT_DBPASS';
GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY 'PLACEMENT_DBPASS';
Configure the user and endpoints
Source the admin credentials to gain access to admin-only CLI commands
Create a placement service user
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack user create --domain default --password PLACEMENT_PASS placement
Add the placement user to the service project with the admin role
[root@controller ~]# openstack role add --project service --user placement admin
This command provides no output
Create the Placement API entry in the service catalog
[root@controller ~]# openstack service create --name placement --description "Placement API" placement
Create the Placement API service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778
Install and configure the components
[root@controller ~]# yum -y install openstack-placement-api
[root@controller ~]# cp -a /etc/placement/placement.conf{,.bak}
[root@controller ~]# vim /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:PLACEMENT_DBPASS@controller/placement
[api]
auth_strategy = keystone
[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
# The following 5 options are added manually (not present in the stock file)
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = PLACEMENT_PASS
Populate the placement database
[root@controller ~]# su -s /bin/sh -c "placement-manage db sync" placement
Ignore any deprecation messages in this output
Modify the httpd configuration (important)
Edit the Apache configuration file for Placement (one of the pitfalls of the official documentation: this step is not mentioned there, and if it is skipped, the Compute service check later will fail)
[root@controller ~]# vim /etc/httpd/conf.d/00-placement-api.conf
Below the #SSLCertificateKeyFile ... line, add the following content:
#SSLCertificateKeyFile ...
<Directory /usr/bin>
<IfVersion >= 2.4>
Require all granted
</IfVersion>
<IfVersion < 2.4>
Order allow,deny
Allow from all
</IfVersion>
</Directory>
</VirtualHost>
Restart the httpd service
[root@controller ~]# systemctl restart httpd
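As an optional check before moving on, the Placement status tool should now report success for its checks (assuming the openstack-placement-api package installed the placement-status command, which it normally does):
[root@controller ~]# placement-status upgrade check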
Install Compute on the controller node (code name Nova)
[root@controller ~]# mysql -u root -p
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
Source the admin credentials to gain access to admin-only CLI commands
[root@controller ~]# source admin-openrc
Create the Compute service credentials
Create the nova user
[root@controller ~]# openstack user create --domain default --password NOVA_PASS nova
Add the admin role to the nova user
[root@controller ~]# openstack role add --project service --user nova admin
> This command provides no output
Create the nova service entity:
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute
Create the Compute API service endpoints
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1
Install and configure the components
[root@controller ~]# yum -y install openstack-nova-api \
  openstack-nova-conductor \
  openstack-nova-novncproxy \
  openstack-nova-scheduler
[root@controller ~]# cp -a /etc/nova/nova.conf{,.bak}
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller:5672/
my_ip=192.168.1.111
use_neutron=true
firewall_driver=nova.virt.firewall.NoopFirewallDriver
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
# The following 5 options are added manually (not present in the stock file)
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
auth_type=password
auth_url=http://controller:5000/v3
project_name=service
project_domain_name=Default
username=placement
user_domain_name=Default
password=PLACEMENT_PASS
region_name=RegionOne
Populate the nova-api database
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
Verify
[root@controller ~]# mysql -u nova -p nova_api -e "show tables;"
> The password is NOVA_DBPASS
Register the cell0 database
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
Create the cell1 cell
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
> The command echoes the new cell's UUID: 7f3f4158-8b6c-434f-87cc-7c80be46fc62
Populate the nova database
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
> Ignore any deprecation messages in this output
Verify
[root@controller ~]# mysql -u nova -p nova -e "show tables;"
> The password is NOVA_DBPASS
Verify that nova cell0 and cell1 are registered correctly
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 list_cells" nova
> The Name column should list both cell0 and cell1
Start the Compute services
[root@controller ~]# systemctl enable \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
[root@controller ~]# systemctl start \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
[root@controller ~]# systemctl status \
  openstack-nova-api.service \
  openstack-nova-scheduler.service \
  openstack-nova-conductor.service \
  openstack-nova-novncproxy.service | grep Active
At this point /var/log/nova/nova-scheduler.log may already contain errors; ignore them for now.
Check
[root@controller ~]# nova-status upgrade check
+------------------------------------------------------------------+
| Upgrade Check Results |
+------------------------------------------------------------------+
| Check: Cells v2 |
| Result: Failure |
| Details: No host mappings found but there are compute nodes. Run |
| command 'nova-manage cell_v2 simple_cell_setup' and then |
| retry. |
+------------------------------------------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+------------------------------------------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success |
| Details: None |
+------------------------------------------------------------------+
| Check: Cinder API |
| Result: Success |
| Details: None |
+------------------------------------------------------------------+
Install the Compute service on the compute node (code name Nova)
[root@compute ~]# yum -y install openstack-nova-compute
[root@compute ~]# cp -a /etc/nova/nova.conf{,.bak}
[root@compute ~]# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip=192.168.1.112
use_neutron = true
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
# The following 5 options are added manually (not present in the stock file)
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[placement]
auth_type=password
auth_url = http://controller:5000/v3
project_name=service
project_domain_name=Default
username = placement
user_domain_name = Default
password = PLACEMENT_PASS
region_name = RegionOne
Determine whether the compute node supports hardware acceleration for virtual machines
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo
> If this returns a value other than 0, the compute node supports hardware acceleration, which usually requires no additional configuration.
If it returns 0, the compute node does not support hardware acceleration and you must configure libvirt to use QEMU instead of KVM.
Edit the [libvirt] section of the /etc/nova/nova.conf file:
[libvirt]
virt_type = qemu
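If the crudini utility happens to be installed (it is not required; editing the file with vim as above works just as well), the same change can be made and verified from the command line:
[root@compute ~]# crudini --set /etc/nova/nova.conf libvirt virt_type qemu
[root@compute ~]# crudini --get /etc/nova/nova.conf libvirt virt_type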
Nested guest support
To enable nested KVM guests, the compute node must load the kvm_intel or kvm_amd module with nested=1
[root@compute ~]# cat > /etc/modprobe.d/kvm.conf << EOF
options kvm_intel nested=1
options kvm_amd nested=1
EOF
[root@compute ~]# reboot
Start the Compute service
[root@compute ~]# systemctl enable libvirtd.service openstack-nova-compute.service
[root@compute ~]# systemctl start libvirtd.service openstack-nova-compute.service
tail -f /var/log/nova/nova-compute.log shows an error
Problem 1: kernel doesn't support AMD SEV
Resolution: unresolved for now... (as the log excerpt further below shows, this particular message is emitted at INFO level)
nova-compute.log contains a few other errors as well; ignore them all for now
Add the compute node to the cell database
Run the following commands on the controller node
[root@controller ~]# source admin-openrc
Confirm that the compute host is present in the database
[root@controller ~]# openstack compute service list --service nova-compute
+----+--------------+---------+------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+--------------+---------+------+---------+-------+----------------------------+
| 6 | nova-compute | compute | nova | enabled | down | 2021-06-03T10:54:40.000000 |
+----+--------------+---------+------+---------+-------+----------------------------+
Discover the compute hosts
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
This command must be run on the controller node whenever new compute nodes are added, so that they get registered.
Alternatively, you can set an appropriate discovery interval in /etc/nova/nova.conf (a [scheduler] option, so it belongs on the controller node):
[root@controller ~]# vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_cells_interval = 300
Deploy Networking on the controller node (code name Neutron)
[root@controller ~]# mysql -u root -p
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY 'NEUTRON_DBPASS';
Source the admin credentials to gain access to admin-only CLI commands
[root@controller ~]# source admin-openrc
Create the service credentials
Create the neutron user
[root@controller ~]# openstack user create --domain default --password NEUTRON_PASS neutron
Add the admin role to the neutron user:
[root@controller ~]# openstack role add --project service --user neutron admin
This command provides no output
Create the neutron service entity
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network
Create the Networking service API endpoints
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696
Configure the networking options
Self-service networks
[root@controller ~]# yum -y install openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables
[root@controller ~]# cp -a /etc/neutron/neutron.conf{,.bak}
[root@controller ~]# vim /etc/neutron/neutron.conf
Options that are not present in the stock file must be added manually
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[nova]
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build Layer 2 (bridging and switching) virtual networking infrastructure for instances
In the ml2_conf.ini file below, every section other than [DEFAULT] has to be added manually
[root@controller ~]# cp -a /etc/neutron/plugins/ml2/ml2_conf.ini{,.bak}
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
Enable flat, VLAN, and VXLAN networks
[ml2]
type_drivers = flat,vlan,vxlan
After you configure the ML2 plug-in, removing values from the type_drivers option can lead to database inconsistency
Enable VXLAN self-service networks
tenant_network_types = vxlan
Enable the Linux bridge and layer-2 population mechanisms
mechanism_drivers = linuxbridge,l2population
Enable the port security extension driver
extension_drivers = port_security
Configure the provider virtual network as a flat network
[ml2_type_flat]
flat_networks = provider
Configure the VXLAN network identifier range for self-service networks
[ml2_type_vxlan]
vni_ranges = 1:1000
Enable ipset to increase the efficiency of security group rules
[securitygroup]
enable_ipset = true
Configure the Linux bridge agent
The Linux bridge agent builds Layer 2 (bridging and switching) virtual networking infrastructure for instances and handles security groups
In the linuxbridge_agent.ini file, every section other than [DEFAULT] has to be added manually
[root@controller ~]# cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
Map the provider virtual network (the name used in flat_networks above) to the provider physical network interface
[linux_bridge]
physical_interface_mappings = provider:ens33
Enable VXLAN overlay networks, configure the IP address of the physical network interface that handles the overlay traffic, and enable layer-2 population
[vxlan]
enable_vxlan = true
local_ip = 192.168.1.111
l2_population = true
Enable security groups and configure the Linux bridge iptables firewall driver
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Enable kernel bridge filtering on the controller node
[root@controller ~]# cat > /etc/sysctl.d/openstack_bridge.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@controller ~]# sysctl --system
Load the br_netfilter module on the controller node
[root@controller ~]# echo "modprobe br_netfilter">>/etc/profile
[root@controller ~]# source /etc/profile
[root@controller ~]# lsmod |grep br_netfilter
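Loading the module from /etc/profile only happens when someone logs in. A more robust alternative (not part of the original steps) is to let systemd load it at boot via modules-load.d:
[root@controller ~]# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
[root@controller ~]# modprobe br_netfilter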
Configure the layer-3 agent
The Layer 3 (L3) agent provides routing and NAT services for self-service virtual networks
[root@controller ~]# cp -a /etc/neutron/l3_agent.ini{,.bak}
[root@controller ~]# vim /etc/neutron/l3_agent.ini
Configure the Linux bridge interface driver
[DEFAULT]
interface_driver = linuxbridge
Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks
[root@controller ~]# cp -a /etc/neutron/dhcp_agent.ini{,.bak}
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
Configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata so that instances on provider networks can reach metadata over the network
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
Configure the metadata host and the shared secret
[root@controller ~]# openssl rand -hex 10
9be47ea5a25ff3b14d98
Configure the metadata agent
[root@controller ~]# cp -a /etc/neutron/metadata_agent.ini{,.bak}
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = 9be47ea5a25ff3b14d98
Configure the Compute service to use the Networking service
[root@controller ~]# vim /etc/nova/nova.conf
Configure the access parameters, enable the metadata proxy, and configure the secret
[neutron]
service_metadata_proxy=true
metadata_proxy_shared_secret = 9be47ea5a25ff3b14d98
auth_type=password
auth_url = http://controller:5000
project_name = service
project_domain_name = default
username = neutron
user_domain_name = default
password=NEUTRON_PASS
region_name = RegionOne
The Networking service initialization scripts expect a symbolic link /etc/neutron/plugin.ini pointing to the ML2 plug-in configuration file /etc/neutron/plugins/ml2/ml2_conf.ini. If this symlink does not exist, create it with the following command
[root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
Populate the database
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
Restart the Compute API service
[root@controller ~]# systemctl restart openstack-nova-api.service
Enable and start the Networking services
[root@controller ~]# systemctl enable neutron-server.service \
  neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service \
  neutron-metadata-agent.service
[root@controller ~]# systemctl start neutron-server.service \
  neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service \
  neutron-metadata-agent.service
Start the layer-3 service
[root@controller ~]# systemctl enable neutron-l3-agent.service
[root@controller ~]# systemctl start neutron-l3-agent.service
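As an optional check, the agents started so far should show up as alive after a few seconds (the compute node's Linux bridge agent only appears once the compute-node steps below are done):
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack network agent list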
Install the Networking service on the compute node
The compute node handles connectivity and security groups for instances
Install the components
[root@compute ~]# yum -y install openstack-neutron-linuxbridge ebtables ipset
Configure the common components
The Networking common component configuration includes the authentication mechanism, message queue, and plug-in
[root@compute ~]# cp -a /etc/neutron/neutron.conf{,.bak}
[root@compute ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = NEUTRON_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Configure the networking options (self-service networks)
Configure the networking components on the compute node
Configure the Linux bridge agent
The Linux bridge agent builds Layer 2 (bridging and switching) virtual networking infrastructure for instances and handles security groups
[root@compute ~]# cp -a /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.bak}
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
Map the provider virtual network to the provider physical network interface
[linux_bridge]
physical_interface_mappings = provider:ens33
Enable VXLAN overlay networks, configure the IP address of the physical network interface that handles the overlay traffic, and enable layer-2 population
[vxlan]
enable_vxlan = true
local_ip = 192.168.1.112
l2_population = true
Enable security groups and configure the Linux bridge iptables firewall driver
[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Enable kernel bridge filtering on the compute node
[root@compute ~]# cat > /etc/sysctl.d/openstack_bridge.conf << EOF
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
[root@compute ~]# sysctl --system
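The two net.bridge.* keys only exist once the br_netfilter kernel module is loaded, so if sysctl --system complains that they are unknown, load the module first and make the loading persistent (same caveat as on the controller):
[root@compute ~]# modprobe br_netfilter
[root@compute ~]# echo "br_netfilter" > /etc/modules-load.d/br_netfilter.conf
[root@compute ~]# sysctl --system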
Configure the Compute service to use the Networking service
[root@compute ~]# vim /etc/nova/nova.conf
[neutron]
auth_type=password
auth_url = http://controller:5000
project_name = service
project_domain_name = default
username = neutron
user_domain_name = default
password = NEUTRON_PASS
region_name = RegionOne
Restart the Compute service
[root@compute ~]# systemctl restart libvirtd.service openstack-nova-compute.service
Start the Linux bridge agent
[root@compute ~]# systemctl enable neutron-linuxbridge-agent.service
[root@compute ~]# systemctl start neutron-linuxbridge-agent.service
Check the startup log:
[root@compute ~]# tail -f /var/log/nova/nova-compute.log
2021-06-04 15:41:38.031 3462 INFO nova.virt.libvirt.driver [req-6139deb3-da74-422e-b0e7-009cc1a9772b - - - - -] Connection event '0' reason '到libvirt的连接丢失:1'
2021-06-04 15:41:52.426 3794 INFO os_vif [-] Loaded VIF plugins: ovs, linux_bridge, noop
2021-06-04 15:41:53.102 3794 WARNING oslo_config.cfg [req-07219e5f-1da1-4d4a-b912-8d3c7ad32746 - - - - -] Deprecated: Option "use_neutron" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
). Its value may be silently ignored in the future.
2021-06-04 15:41:53.127 3794 INFO nova.virt.driver [req-07219e5f-1da1-4d4a-b912-8d3c7ad32746 - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
2021-06-04 15:41:53.307 3794 WARNING oslo_config.cfg [req-07219e5f-1da1-4d4a-b912-8d3c7ad32746 - - - - -] Deprecated: Option "firewall_driver" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
). Its value may be silently ignored in the future.
2021-06-04 15:41:53.313 3794 WARNING os_brick.initiator.connectors.remotefs [req-07219e5f-1da1-4d4a-b912-8d3c7ad32746 - - - - -] Connection details not present. RemoteFsClient may not initialize properly.
2021-06-04 15:41:53.324 3794 WARNING oslo_config.cfg [req-07219e5f-1da1-4d4a-b912-8d3c7ad32746 - - - - -] Deprecated: Option "dhcpbridge" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
). Its value may be silently ignored in the future.
2021-06-04 15:41:53.326 3794 WARNING oslo_config.cfg [req-07219e5f-1da1-4d4a-b912-8d3c7ad32746 - - - - -] Deprecated: Option "dhcpbridge_flagfile" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
). Its value may be silently ignored in the future.
2021-06-04 15:41:53.328 3794 WARNING oslo_config.cfg [req-07219e5f-1da1-4d4a-b912-8d3c7ad32746 - - - - -] Deprecated: Option "force_dhcp_release" from group "DEFAULT" is deprecated for removal (
nova-network is deprecated, as are any related configuration options.
). Its value may be silently ignored in the future.
2021-06-04 15:41:53.368 3794 INFO nova.service [-] Starting compute node (version 20.6.0-1.el7)
2021-06-04 15:41:53.394 3794 INFO nova.virt.libvirt.driver [-] Connection event '1' reason 'None'
2021-06-04 15:41:53.422 3794 INFO nova.virt.libvirt.host [-] Libvirt host capabilities <capabilities>
<host>
<uuid>5c874d56-99ea-b4e2-7d98-c55106f1e2f9</uuid>
<cpu>
<arch>x86_64</arch>
<model>Broadwell-noTSX-IBRS</model>
<vendor>Intel</vendor>
<microcode version='226'/>
<counter name='tsc' frequency='2591996000' scaling='no'/>
<topology sockets='2' cores='1' threads='1'/>
<feature name='vme'/>
<feature name='ss'/>
<feature name='vmx'/>
<feature name='osxsave'/>
<feature name='f16c'/>
<feature name='rdrand'/>
<feature name='hypervisor'/>
<feature name='arat'/>
<feature name='tsc_adjust'/>
<feature name='clflushopt'/>
<feature name='md-clear'/>
<feature name='stibp'/>
<feature name='arch-facilities'/>
<feature name='ssbd'/>
<feature name='xsaveopt'/>
<feature name='xsavec'/>
<feature name='xgetbv1'/>
<feature name='xsaves'/>
<feature name='pdpe1gb'/>
<feature name='abm'/>
<feature name='invtsc'/>
<pages unit='KiB' size='4'/>
<pages unit='KiB' size='2048'/>
<pages unit='KiB' size='1048576'/>
</cpu>
<power_management>
<suspend_mem/>
<suspend_disk/>
<suspend_hybrid/>
</power_management>
<iommu support='no'/>
<migration_features>
<live/>
<uri_transports>
<uri_transport>tcp</uri_transport>
<uri_transport>rdma</uri_transport>
</uri_transports>
</migration_features>
<topology>
<cells num='1'>
<cell id='0'>
<memory unit='KiB'>3861288</memory>
<pages unit='KiB' size='4'>965322</pages>
<pages unit='KiB' size='2048'>0</pages>
<pages unit='KiB' size='1048576'>0</pages>
<distances>
<sibling id='0' value='10'/>
</distances>
<cpus num='2'>
<cpu id='0' socket_id='0' core_id='0' siblings='0'/>
<cpu id='1' socket_id='2' core_id='0' siblings='1'/>
</cpus>
</cell>
</cells>
</topology>
<cache>
<bank id='0' level='3' type='both' size='6' unit='MiB' cpus='0'/>
<bank id='2' level='3' type='both' size='6' unit='MiB' cpus='1'/>
</cache>
<secmodel>
<model>none</model>
<doi>0</doi>
</secmodel>
<secmodel>
<model>dac</model>
<doi>0</doi>
<baselabel type='kvm'>+107:+107</baselabel>
<baselabel type='qemu'>+107:+107</baselabel>
</secmodel>
</host>
<guest>
<os_type>hvm</os_type>
<arch name='i686'>
<wordsize>32</wordsize>
<emulator>/usr/libexec/qemu-kvm</emulator>
<machine maxCpus='240'>pc-i440fx-rhel7.6.0</machine>
<machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240'>pc</machine>
<machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine>
<machine maxCpus='384'>pc-q35-rhel7.6.0</machine>
<machine canonical='pc-q35-rhel7.6.0' maxCpus='384'>q35</machine>
<machine maxCpus='240'>rhel6.3.0</machine>
<machine maxCpus='240'>rhel6.4.0</machine>
<machine maxCpus='240'>rhel6.0.0</machine>
<machine maxCpus='240'>pc-i440fx-rhel7.5.0</machine>
<machine maxCpus='240'>pc-i440fx-rhel7.1.0</machine>
<machine maxCpus='240'>pc-i440fx-rhel7.2.0</machine>
<machine maxCpus='255'>pc-q35-rhel7.3.0</machine>
<machine maxCpus='240'>rhel6.5.0</machine>
<machine maxCpus='384'>pc-q35-rhel7.4.0</machine>
<machine maxCpus='240'>rhel6.6.0</machine>
<machine maxCpus='240'>rhel6.1.0</machine>
<machine maxCpus='240'>rhel6.2.0</machine>
<machine maxCpus='240'>pc-i440fx-rhel7.3.0</machine>
<machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine>
<machine maxCpus='384'>pc-q35-rhel7.5.0</machine>
<domain type='qemu'/>
<domain type='kvm'>
<emulator>/usr/libexec/qemu-kvm</emulator>
</domain>
</arch>
<features>
<cpuselection/>
<deviceboot/>
<disksnapshot default='on' toggle='no'/>
<acpi default='on' toggle='yes'/>
<apic default='on' toggle='no'/>
<pae/>
<nonpae/>
</features>
</guest>
<guest>
<os_type>hvm</os_type>
<arch name='x86_64'>
<wordsize>64</wordsize>
<emulator>/usr/libexec/qemu-kvm</emulator>
<machine maxCpus='240'>pc-i440fx-rhel7.6.0</machine>
<machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240'>pc</machine>
<machine maxCpus='240'>pc-i440fx-rhel7.0.0</machine>
<machine maxCpus='384'>pc-q35-rhel7.6.0</machine>
<machine canonical='pc-q35-rhel7.6.0' maxCpus='384'>q35</machine>
<machine maxCpus='240'>rhel6.3.0</machine>
<machine maxCpus='240'>rhel6.4.0</machine>
<machine maxCpus='240'>rhel6.0.0</machine>
<machine maxCpus='240'>pc-i440fx-rhel7.5.0</machine>
<machine maxCpus='240'>pc-i440fx-rhel7.1.0</machine>
<machine maxCpus='240'>pc-i440fx-rhel7.2.0</machine>
<machine maxCpus='255'>pc-q35-rhel7.3.0</machine>
<machine maxCpus='240'>rhel6.5.0</machine>
<machine maxCpus='384'>pc-q35-rhel7.4.0</machine>
<machine maxCpus='240'>rhel6.6.0</machine>
<machine maxCpus='240'>rhel6.1.0</machine>
<machine maxCpus='240'>rhel6.2.0</machine>
<machine maxCpus='240'>pc-i440fx-rhel7.3.0</machine>
<machine maxCpus='240'>pc-i440fx-rhel7.4.0</machine>
<machine maxCpus='384'>pc-q35-rhel7.5.0</machine>
<domain type='qemu'/>
<domain type='kvm'>
<emulator>/usr/libexec/qemu-kvm</emulator>
</domain>
</arch>
<features>
<cpuselection/>
<deviceboot/>
<disksnapshot default='on' toggle='no'/>
<acpi default='on' toggle='yes'/>
<apic default='on' toggle='no'/>
</features>
</guest>
</capabilities>
2021-06-04 15:41:53.539 3794 INFO nova.compute.manager [req-5745e480-586a-4dd5-ad87-dcb12cbf05f1 - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
2021-06-04 15:41:54.296 3794 INFO nova.virt.libvirt.host [req-5745e480-586a-4dd5-ad87-dcb12cbf05f1 - - - - -] kernel doesn't support AMD SEV
This has since been dealt with; see the Bilibili video for details
Collected errors
[root@controller ~]# nova-status upgrade check
+-------------------------------------------------------------------------------------------------+
| Upgrade Check Results |
+------------------------------------------------------------------------------------------------+
| Check: Cells v2 |
| Result: Failure |
| Details: No host mappings found but there are compute nodes. Run | # error here
| command 'nova-manage cell_v2 simple_cell_setup' and then |
| retry. |
+------------------------------------------------------------------------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+------------------------------------------------------------------------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success |
| Details: None |
+-----------------------------------------------------------------------------------------------+
| Check: Cinder API |
| Result: Success |
| Details: None |
+------------------------------------------------------------------------------------------------+
Resolution
Re-run compute host discovery
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
Found 3 cell mappings.
Skipping cell0 since it does not contain hosts.
Getting computes from cell 'cell1': 7f3f4158-8b6c-434f-87cc-7c80be46fc62
Checking host mapping for compute host 'compute': 21768f9e-ff6a-49c0-b4aa-1ff777393854
Found 0 unmapped computes in cell: 7f3f4158-8b6c-434f-87cc-7c80be46fc62
Getting computes from cell: 64294aa5-1227-46cf-bba6-4e75f0eadcca
Checking host mapping for compute host 'compute': 21768f9e-ff6a-49c0-b4aa-1ff777393854
Found 0 unmapped computes in cell: 64294aa5-1227-46cf-bba6-4e75f0eadcca
[root@controller ~]# openstack compute service list
+----+----------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+----------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-conductor | controller | internal | enabled | up | 2021-06-04T07:17:11.000000 |
| 2 | nova-scheduler | controller | internal | enabled | up | 2021-06-04T07:17:11.000000 |
| 6 | nova-compute | compute | nova | enabled | up | 2021-06-04T07:17:11.000000 |
| 1 | nova-conductor | controller | internal | enabled | up | 2021-06-04T07:17:11.000000 |
| 2 | nova-scheduler | controller | internal | enabled | up | 2021-06-04T07:17:11.000000 |
| 6 | nova-compute | compute | nova | enabled | up | 2021-06-04T07:17:11.000000 |
+----+----------------+------------+----------+---------+-------+----------------------------+
[root@controller ~]# nova-status upgrade check
+--------------------------------+
| Upgrade Check Results |
+--------------------------------+
| Check: Cells v2 |
| Result: Success | # resolved
| Details: None |
+---------------------------------+
| Check: Placement API |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Ironic Flavor Migration |
| Result: Success |
| Details: None |
+--------------------------------+
| Check: Cinder API |
| Result: Success |
| Details: None |
+--------------------------------+
This has since been dealt with; see the Bilibili video for details