1. OpenStack Compute Service: Nova
1.1 Nova Overview
Nova was one of the first two OpenStack modules; the other was the object storage service, Swift. An OpenStack deployment distinguishes between compute nodes and control nodes, and this distinction is largely about Nova: the node where nova-compute is installed is the compute node, and everything other than nova-compute runs on the control node. nova-compute does nothing but create and run virtual machines; all control logic lives on the other node.
1.2 Nova Components
- API: implements the RESTful API and is the only way to access Nova from the outside. It receives external requests and forwards them to the other service components via the message queue. It is also compatible with the EC2 API, so EC2 management tools can be used for day-to-day Nova administration.
- Scheduler: decides which host (compute node) a virtual machine is created on. Scheduling an instance onto a physical node takes two steps:
  1. Filter: start from the full, unfiltered host list and, based on the filter properties, keep only the compute nodes that satisfy the request.
  2. Weight: compute a weight for each surviving host and pick one according to the weighing policy.
- Cert: handles identity and certificate management
- Conductor: middleware through which compute nodes access the database
- Consoleauth: authorizes console access
- Novncproxy: the VNC proxy
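The two-step scheduling decision above can be sketched in a few lines of Python. This is an illustrative toy, not Nova's actual scheduler code; the host attributes and the RAM-based weigher are invented for the example:

```python
# Toy filter-then-weigh scheduler (illustrative only, not Nova code).
hosts = [
    {"name": "node1", "free_ram_mb": 4096, "free_vcpus": 2},
    {"name": "node2", "free_ram_mb": 8192, "free_vcpus": 4},
    {"name": "node3", "free_ram_mb": 1024, "free_vcpus": 1},
]
request = {"ram_mb": 2048, "vcpus": 2}

# Step 1: filter -- keep only hosts with enough free RAM and vCPUs.
candidates = [h for h in hosts
              if h["free_ram_mb"] >= request["ram_mb"]
              and h["free_vcpus"] >= request["vcpus"]]

# Step 2: weigh -- here the policy simply prefers the most free RAM.
best = max(candidates, key=lambda h: h["free_ram_mb"])
print(best["name"])  # → node2
```

Nova's real scheduler chains many filters (RAM, CPU, availability zone, ...) and combines several weighers, but the shape of the decision is the same.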
1.3 Nova Environment Preparation
1.3.1 Install the Nova packages

[root@linux-node1 ~]# yum install -y \
openstack-nova-api \
openstack-nova-conductor \
openstack-nova-console \
openstack-nova-novncproxy \
openstack-nova-scheduler

Package notes (top to bottom):
- openstack-nova-api: the Nova API service
- openstack-nova-conductor: database-access middleware for compute nodes
- openstack-nova-console: console authorization service
- openstack-nova-novncproxy: VNC proxy
- openstack-nova-scheduler: instance scheduler
1.3.2 Create the Nova databases and user

# Log in to the database
[root@linux-node1 ~]# mysql -uroot -p
# Create the nova database
MariaDB [(none)]> create database nova;
Query OK, 1 row affected (0.00 sec)
# Create the nova user and grant it privileges
MariaDB [(none)]> grant all privileges on nova.* to nova@'%' identified by 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on nova.* to nova@'localhost' identified by 'nova';
Query OK, 0 rows affected (0.00 sec)
# Create the nova_api database
MariaDB [(none)]> create database nova_api;
Query OK, 1 row affected (0.00 sec)
# Grant the nova user privileges on the nova_api database
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'localhost' identified by 'nova';
Query OK, 0 rows affected (0.00 sec)
MariaDB [(none)]> grant all privileges on nova_api.* to 'nova'@'%' identified by 'nova';
Query OK, 0 rows affected (0.00 sec)
1.3.3 Create the OpenStack nova user

# Create the nova user
[root@linux-node1 ~]# openstack user create --domain default --password-prompt nova
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 71f572aaf1ec431695f2ed0b27b8c908 |
| name | nova |
| password_expires_at | None |
+---------------------+----------------------------------+

# Add the nova user to the service project and give it the admin role
[root@linux-node1 ~]# openstack role add --project service --user nova admin
1.4 Install and Configure Nova on the Control Node
1.4.1 Edit the configuration file

# Edit the Nova configuration file
[root@linux-node1 ~]# vim /etc/nova/nova.conf

# In the [DEFAULT] section:
[DEFAULT]
# Enable only the compute and metadata APIs (uncomment this line)
enabled_apis=osapi_compute,metadata
# Message queue configuration
transport_url=rabbit://openstack:openstack@192.168.56.11
# Use Keystone for authentication (uncomment this line)
auth_strategy=keystone
# Enable Neutron networking support
use_neutron=true
# Disable Nova's built-in firewall (security groups are handled elsewhere)
firewall_driver = nova.virt.firewall.NoopFirewallDriver

# In the [api_database] section: the nova_api database connection
[api_database]
connection=mysql+pymysql://nova:nova@192.168.56.11/nova_api

# In the [database] section: the nova database connection
[database]
connection=mysql+pymysql://nova:nova@192.168.56.11/nova

# In the [keystone_authtoken] section: how Nova connects to Keystone
[keystone_authtoken]
auth_uri = http://192.168.56.11:5000
auth_url = http://192.168.56.11:35357
memcached_servers = 192.168.56.11:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = nova
password = nova

# In the [vnc] section: the VNC proxy listens on all interfaces; the
# proxy client address is the control node's management IP
[vnc]
vncserver_listen=0.0.0.0
vncserver_proxyclient_address=192.168.56.11

# In the [glance] section: location of the Image service API
[glance]
api_servers=http://192.168.56.11:9292

# In the [oslo_concurrency] section: the lock path (uncomment this line)
[oslo_concurrency]
lock_path=/var/lib/nova/tmp
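After editing, a quick sanity check can catch typos: nova.conf is INI-style, so Python's configparser can parse it. The snippet below is a hypothetical check run against an inline sample of the settings above; to test the real file, point `cfg.read()` at /etc/nova/nova.conf instead:

```python
# Hypothetical sanity check for the edited nova.conf (not part of Nova).
import configparser

sample = """
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:openstack@192.168.56.11
auth_strategy = keystone
use_neutron = true

[api_database]
connection = mysql+pymysql://nova:nova@192.168.56.11/nova_api

[database]
connection = mysql+pymysql://nova:nova@192.168.56.11/nova
"""

cfg = configparser.ConfigParser()
cfg.read_string(sample)  # for the real file: cfg.read('/etc/nova/nova.conf')

# Confirm the keys this guide sets are present and well-formed.
assert cfg["DEFAULT"]["auth_strategy"] == "keystone"
assert cfg["database"]["connection"].startswith("mysql+pymysql://nova:")
print("nova.conf sections look sane")
```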
1.4.2 Populate the databases

# Populate the nova_api database
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage api_db sync" nova
# Populate the nova database
[root@linux-node1 ~]# su -s /bin/sh -c "nova-manage db sync" nova
1.4.3 Check the database

[root@linux-node1 ~]# mysql -h192.168.56.11 -unova -pnova -e "use nova;show tables;"
+--------------------------------------------+
| Tables_in_nova |
+--------------------------------------------+
| agent_builds |
| aggregate_hosts |
| aggregate_metadata |
| aggregates |
| allocations |
| block_device_mapping |
| bw_usage_cache |
| cells |
| certificates |
| compute_nodes |
| console_auth_tokens |
| console_pools |
| consoles |
| dns_domains |
| fixed_ips |
| floating_ips |
| instance_actions |
| instance_actions_events |
| instance_extra |
| instance_faults |
| instance_group_member |
| instance_group_policy |
| instance_groups |
| instance_id_mappings |
| instance_info_caches |
| instance_metadata |
| instance_system_metadata |
| instance_type_extra_specs |
| instance_type_projects |
| instance_types |
| instances |
| inventories |
| key_pairs |
| migrate_version |
| migrations |
| networks |
| pci_devices |
| project_user_quotas |
| provider_fw_rules |
| quota_classes |
| quota_usages |
| quotas |
| reservations |
| resource_provider_aggregates |
| resource_providers |
| s3_images |
| security_group_default_rules |
| security_group_instance_association |
| security_group_rules |
| security_groups |
| services |
| shadow_agent_builds |
| shadow_aggregate_hosts |
| shadow_aggregate_metadata |
| shadow_aggregates |
| shadow_block_device_mapping |
| shadow_bw_usage_cache |
| shadow_cells |
| shadow_certificates |
| shadow_compute_nodes |
| shadow_console_pools |
| shadow_consoles |
| shadow_dns_domains |
| shadow_fixed_ips |
| shadow_floating_ips |
| shadow_instance_actions |
| shadow_instance_actions_events |
| shadow_instance_extra |
| shadow_instance_faults |
| shadow_instance_group_member |
| shadow_instance_group_policy |
| shadow_instance_groups |
| shadow_instance_id_mappings |
| shadow_instance_info_caches |
| shadow_instance_metadata |
| shadow_instance_system_metadata |
| shadow_instance_type_extra_specs |
| shadow_instance_type_projects |
| shadow_instance_types |
| shadow_instances |
| shadow_key_pairs |
| shadow_migrate_version |
| shadow_migrations |
| shadow_networks |
| shadow_pci_devices |
| shadow_project_user_quotas |
| shadow_provider_fw_rules |
| shadow_quota_classes |
| shadow_quota_usages |
| shadow_quotas |
| shadow_reservations |
| shadow_s3_images |
| shadow_security_group_default_rules |
| shadow_security_group_instance_association |
| shadow_security_group_rules |
| shadow_security_groups |
| shadow_services |
| shadow_snapshot_id_mappings |
| shadow_snapshots |
| shadow_task_log |
| shadow_virtual_interfaces |
| shadow_volume_id_mappings |
| shadow_volume_usage_cache |
| snapshot_id_mappings |
| snapshots |
| tags |
| task_log |
| virtual_interfaces |
| volume_id_mappings |
| volume_usage_cache |
+--------------------------------------------+
1.4.4 Service and endpoint configuration

# Create the nova service entity
[root@linux-node1 ~]# openstack service create --name nova \
  --description "OpenStack Compute" compute
# Create the compute endpoints
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
  compute public http://192.168.56.11:8774/v2.1/%\(tenant_id\)s
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
  compute internal http://192.168.56.11:8774/v2.1/%\(tenant_id\)s
[root@linux-node1 ~]# openstack endpoint create --region RegionOne \
  compute admin http://192.168.56.11:8774/v2.1/%\(tenant_id\)s
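The `%\(tenant_id\)s` at the end of the URL is not a shell variable but a substitution template: the backslashes only stop the shell from interpreting the parentheses, and the stored `%(tenant_id)s` is filled in per request with the project's ID. Python's %-formatting uses the same syntax and illustrates the substitution (the "demo" ID here is made up):

```python
# The endpoint URL is a %-style template; the service substitutes the
# requesting project's ID. "demo" is a placeholder for illustration.
url_template = "http://192.168.56.11:8774/v2.1/%(tenant_id)s"
url = url_template % {"tenant_id": "demo"}
print(url)  # → http://192.168.56.11:8774/v2.1/demo
```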
1.4.5 Check the endpoint list

[root@linux-node1 ~]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------+
| ID | Region | Service Name | Service Type | Enabled | Interface | URL |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------+
| 1cf120812e2142c1ac9c239a71146ed8 | RegionOne | nova | compute | True | admin | http://192.168.56.11:8774/v2.1/%(tenant_id)s |
| 30ead02b5d1b4198bc5bf5c030182113 | RegionOne | nova | compute | True | public | http://192.168.56.11:8774/v2.1/%(tenant_id)s |
| 46bb270ff4f04b0da6a69a554322bc27 | RegionOne | keystone | identity | True | public | http://192.168.56.11:5000/v3/ |
| 5da8b564f1244915a8d0bdf1d1f65a18 | RegionOne | glance | image | True | internal | http://192.168.56.11:9292 |
| 77bca853dafb413da29dcbac4bed9305 | RegionOne | keystone | identity | True | admin | http://192.168.56.11:35357/v3/ |
| 7cc4f83fc4f34cf9b1ec5033739aefc1 | RegionOne | keystone | identity | True | internal | http://192.168.56.11:35357/v3/ |
| 9f35261f1894470d81abfb8dce6876a4 | RegionOne | glance | image | True | admin | http://192.168.56.11:9292 |
| aa50739225fc4aecb9b2e9fa589d2706 | RegionOne | nova | compute | True | internal | http://192.168.56.11:8774/v2.1/%(tenant_id)s |
| fc8978523b064b518eab75f40a7db017 | RegionOne | glance | image | True | public | http://192.168.56.11:9292 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+----------------------------------------------+
1.4.6 Verify Nova

Once the control-plane services from 1.4.7 are running, the three control services should be registered:

[root@linux-node1 ~]# openstack host list
+-------------------------+-------------+----------+
| Host Name | Service | Zone |
+-------------------------+-------------+----------+
| linux-node1.example.com | consoleauth | internal |
| linux-node1.example.com | conductor | internal |
| linux-node1.example.com | scheduler | internal |
+-------------------------+-------------+----------+
1.4.7 Enable and start the Nova services

# Enable the services at boot
[root@linux-node1 ~]# systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
# Start the services
[root@linux-node1 ~]# systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
1.5 Install and Configure Nova on the Compute Node
1.5.1 Environment preparation

# Install Nova on the compute node
[root@linux-node2 ~]# yum install -y openstack-nova-compute
1.5.2 Edit the configuration file

# Copy the control node's Nova configuration file to the compute node
[root@linux-node1 ~]# scp /etc/nova/nova.conf 192.168.56.12:/etc/nova/
# Edit the configuration file
[root@linux-node2 ~]# vim /etc/nova/nova.conf

# Delete the following two lines: the compute node reaches the database
# through the nova-conductor middleware, so it needs no direct database access
connection=mysql+pymysql://nova:nova@192.168.56.11/nova_api
connection=mysql+pymysql://nova:nova@192.168.56.11/nova

# In the [vnc] section:
# Enable VNC (uncomment this line)
enabled=true
# Keyboard mapping (uncomment this line)
keymap=en-us
# Change the VNC proxy client address to the compute node's IP
vncserver_proxyclient_address=192.168.56.12
# Add the VNC proxy base URL (points at the control node's novncproxy)
novncproxy_base_url=http://192.168.56.11:6080/vnc_auto.html

# In the [libvirt] section: set the virtualization type (uncomment this line)
virt_type=kvm

Note: first confirm that the compute node's CPU supports hardware-accelerated virtualization by running egrep -c '(vmx|svm)' /proc/cpuinfo. If it returns 1 or more, no change is needed; if it returns 0, you must configure libvirt to use QEMU instead of KVM.

# In the [DEFAULT] section: the message queue configuration
[DEFAULT]
transport_url=rabbit://openstack:openstack@192.168.56.11
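The KVM-versus-QEMU decision in the note above can be wrapped in a small helper. This is a hypothetical convenience, not part of Nova; it mirrors the egrep check on the CPU flags:

```python
# Hypothetical helper: pick virt_type from CPU flags, mirroring
# `egrep -c '(vmx|svm)' /proc/cpuinfo` from the text.
def pick_virt_type(cpuinfo_text):
    # vmx = Intel VT-x, svm = AMD-V; any occurrence means hardware
    # virtualization is available, so KVM can be used.
    count = cpuinfo_text.count("vmx") + cpuinfo_text.count("svm")
    return "kvm" if count > 0 else "qemu"

# On a real host you would read the actual file:
# with open("/proc/cpuinfo") as f:
#     print(pick_virt_type(f.read()))
print(pick_virt_type("flags : fpu vme de pse vmx"))  # → kvm
print(pick_virt_type("flags : fpu de pse"))          # → qemu
```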
1.5.3 Enable and start nova-compute and libvirt on the compute node

# Enable the services at boot
[root@linux-node2 ~]# systemctl enable libvirtd.service openstack-nova-compute.service
# Start the services
[root@linux-node2 ~]# systemctl start libvirtd.service openstack-nova-compute.service
[Open source is a spirit; sharing is a virtue]
— By GoodCook
— Author's QQ: 253097001
— Questions and discussion are always welcome
— Original work. Reprinting is permitted, provided you clearly indicate, via hyperlink, the original source, the author information, and this notice; otherwise legal liability will be pursued.