• openstack-nova(queens)


    Introduction

    The OpenStack Nova component is responsible for creating and managing virtual machines, and provides scheduling and compute functionality. Nova is normally deployed in two parts: hosts running nova-compute are called compute nodes, and the host running nova-scheduler is called the controller node.

    Component overview

    nova-api

    Implements the RESTful API and is the only way to access nova from the outside. It receives external requests and forwards them to the other nova components through the message queue (the other nova components only receive messages via the queue). It is also compatible with the Amazon EC2 API,
    so EC2 management tools can be used for day-to-day management of nova.
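    For example, every instance operation done with the openstack CLI ultimately hits this REST API. The same listing can be done by hand with a token; this is only an illustrative sketch and assumes the controller:8774 endpoint registered later in this post, with the admin credentials already sourced:

    TOKEN=$(openstack token issue -f value -c id)
    curl -s -H "X-Auth-Token: $TOKEN" http://controller:8774/v2.1/servers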

    nova-scheduler

    Decides which host (compute node) a virtual machine is created on. This happens in two steps: filtering first (hosts that do not meet the requirements, e.g. insufficient CPU or other resources, are filtered out), then weighing (for example, if host A runs 2 VMs and host B runs 4, the VM is created on A; there are many weighing algorithms and custom weighers can be defined).
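    The filter and weigher behaviour is driven by nova.conf. A minimal illustrative snippet; the option names are the standard Queens [filter_scheduler] options, but the values here are only an example, not this post's configuration:

    [filter_scheduler]
    enabled_filters = RetryFilter,AvailabilityZoneFilter,RamFilter,DiskFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter
    # hosts that pass all filters are then weighed, e.g. prefer hosts with more free RAM
    ram_weight_multiplier = 1.0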

    nova-compute

    A node running this service is called a compute node; nodes running the other nova services are controller nodes. The main job of nova-compute is to create virtual machines, but it has no virtualization capability of its own: it drives the Xen API, the VMware API, or libvirt to create VMs with Xen, VMware, or KVM respectively.

    cert

    Handles EC2-style authentication; not used here.

    conductor

    Middleware through which compute nodes access the database (for security, the other nova services access the database only through this service).

    consoleauth

    Authorization and authentication for user consoles.

    novncproxy

    VNC proxy.

    Note: compared with Newton, the Queens release adds the nova-placement service to the nova components and adds steps such as creating the cell0 database.

    Database setup for nova      # Note: Queens adds the cell0 database, which Newton does not have

    MariaDB [(none)]> CREATE DATABASE nova_api;
    MariaDB [(none)]> CREATE DATABASE nova;

    MariaDB [(none)]> grant all on nova_api.* to 'nova'@'localhost' identified by 'nova';

    MariaDB [(none)]> grant all on nova_api.* to 'nova'@'%' identified by 'nova';

    MariaDB [(none)]> grant all on nova.* to 'nova'@'localhost' identified by 'nova';

    MariaDB [(none)]> grant all on nova.* to 'nova'@'%' identified by 'nova';

    MariaDB [(none)]> CREATE DATABASE nova_cell0;

    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY 'NOVA_DBPASS';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY 'NOVA_DBPASS';
    (NOVA_DBPASS is the placeholder from the official docs; use the same password as the grants above, 'nova' in this walkthrough.)
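    A quick sanity check that the grants work before moving on; this assumes the password 'nova' from the GRANT statements above and that the database host is controller, as in the connection strings used later:

    mysql -u nova -pnova -h controller -e "SHOW DATABASES;"
    # should list nova, nova_api and nova_cell0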

    Keystone configuration for the nova service

    openstack user create --domain default --password-prompt nova
    openstack role add --project service --user nova admin

    Create the nova service
    openstack service create --name nova --description "OpenStack Compute" compute
    Register the endpoints
    openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
    openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
    openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

    placement service and user setup (new in Queens; Newton has neither this service nor this user)
    openstack user create --domain default --password-prompt placement
    openstack role add --project service --user placement admin
    openstack service create --name placement --description "Placement API" placement
    openstack endpoint create --region RegionOne placement public http://controller:8778
    openstack endpoint create --region RegionOne placement internal http://controller:8778
    openstack endpoint create --region RegionOne placement admin http://controller:8778
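    Optionally confirm that the services and endpoints were registered correctly; these are standard openstack CLI commands and assume the same admin credentials used above:

    openstack service list
    openstack endpoint list --service placement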

    Controller node installation and configuration

    Installation

    yum install openstack-nova-api openstack-nova-conductor  openstack-nova-console openstack-nova-novncproxy  openstack-nova-scheduler openstack-nova-placement-api

    Nova configuration file settings (/etc/nova/nova.conf)

    Database connection settings (the password must match the GRANT statements above; 'nova' in this walkthrough)

    [api_database]
    # ...
    connection = mysql+pymysql://nova:nova@controller/nova_api

    [database]
    # ...
    connection = mysql+pymysql://nova:nova@controller/nova

    Keystone authentication settings for nova

    [api]
    # ...
    auth_strategy = keystone    # enable keystone authentication

    [keystone_authtoken]
    auth_url = http://controller:5000/v3    # add this whole section
    memcached_servers = controller:11211
    auth_type = password
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = nova
    password = nova

    Keystone authentication settings for placement
    [placement]
    os_region_name = RegionOne
    project_domain_name = Default
    project_name = service
    auth_type = password
    user_domain_name = Default
    auth_url = http://controller:5000/v3
    username = placement
    password = placement

    Message queue connection settings

    [DEFAULT]
    # ...
    transport_url = rabbit://openstack:openstack@controller
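    A quick way to confirm that the RabbitMQ account used in this URL exists and has permissions; this assumes RabbitMQ runs on the controller so rabbitmqctl is available there (run as root):

    rabbitmqctl list_users
    rabbitmqctl list_permissions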

    Other settings

    [DEFAULT]

    Note: the official docs also set my_ip; it does not need to be set in this install, and setting it can easily cause trouble.
    use_neutron=true
    firewall_driver = nova.virt.firewall.NoopFirewallDriver    # disable nova's firewall (neutron's firewall will be used later)

    # ...
    enabled_apis = osapi_compute,metadata        # enable the APIs


    [vnc]
    enabled=true
    #...
    server_proxyclient_address=controller
    server_listen=0.0.0.0
    [glance]
    # ...
    api_servers = http://controller:9292
    [oslo_concurrency]
    # ...
    lock_path = /var/lib/nova/tmp               # lock file path

    Apache configuration

    Reason: Due to a packaging bug, you must enable access to the Placement API by adding the following configuration to /etc/httpd/conf.d/00-nova-placement-api.conf:

    # Add the following to the file mentioned above
    <Directory /usr/bin>
       <IfVersion >= 2.4>
          Require all granted
       </IfVersion>
       <IfVersion < 2.4>
          Order allow,deny
          Allow from all
       </IfVersion>
    </Directory>

    systemctl restart httpd
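    After restarting httpd it is worth checking that the Placement API is reachable; the root URL of the placement endpoint registered above normally returns a small JSON version document:

    curl http://controller:8778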

    Database initialization

    # Populate the nova-api database (create the tables)
    su -s /bin/sh -c "nova-manage api_db sync" nova
    # Register the cell0 database:
    su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
    su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
    # Populate the nova database:
    su -s /bin/sh -c "nova-manage db sync" nova

    # Verify that nova cell0 and cell1 are registered correctly:
    nova-manage cell_v2 list_cells
    +-------+--------------------------------------+
    | Name | UUID |
    +-------+--------------------------------------+
    | cell1 | 109e1d4b-536a-40d0-83c6-5f121b82b650 |
    | cell0 | 00000000-0000-0000-0000-000000000000 |
    +-------+--------------------------------------+

    Enable the services at boot and start them

    # systemctl enable openstack-nova-api.service \
      openstack-nova-consoleauth.service openstack-nova-scheduler.service \
      openstack-nova-conductor.service openstack-nova-novncproxy.service
    # systemctl start openstack-nova-api.service \
      openstack-nova-consoleauth.service openstack-nova-scheduler.service \
      openstack-nova-conductor.service openstack-nova-novncproxy.service
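    Before moving on to the compute node, it can help to confirm the controller-side services came up and registered; standard commands, using the same admin credentials file sourced later in this post:

    source admin-openstack
    openstack compute service list
    # nova-consoleauth, nova-scheduler and nova-conductor should show State "up"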

    Compute node installation and deployment

    Note: a compute node's hostname must be unique and must not change afterwards, otherwise it will be treated as a new node.

    Installation

    yum -y install openstack-nova-compute    (this also pulls in libvirt, qemu and related packages)

    Configuration file settings

    Copy the configuration file over from the controller node and adjust it:
    1. Remove the database connection settings (the compute node goes through nova-conductor instead of talking to the database directly):
    connection=mysql+pymysql://nova:nova@controller/nova_api

    connection=mysql+pymysql://nova:nova@controller/nova
    2. Change the VNC proxy client address to the local host:
    server_proxyclient_address=computer      # computer is this compute node's hostname
    3. Enable noVNC (console access); the URL must be reachable from the user's browser and point to the host running nova-novncproxy:
    novncproxy_base_url=http://computer:6080/vnc_auto.html

    4. Check whether the CPU supports hardware virtualization:
    egrep -c '(vmx|svm)' /proc/cpuinfo
    If the result is 0, hardware virtualization is not available; add the following so libvirt uses plain QEMU (see the sketch after this list):
    [libvirt]
    # ...
    virt_type = qemu
    5. Make sure the message queue connection is configured under [DEFAULT] (otherwise nova-compute will not start); RABBIT_PASS is the RabbitMQ password, openstack earlier in this walkthrough:
    [DEFAULT]
    # ...
    transport_url = rabbit://openstack:RABBIT_PASS@controller
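    A small sketch of step 4 as a shell check; crudini comes from the openstack-utils package and is an assumption here, editing nova.conf by hand works just as well:

    if [ "$(egrep -c '(vmx|svm)' /proc/cpuinfo)" -eq 0 ]; then
        # no VT-x/AMD-V available: tell libvirt to use plain QEMU emulation
        crudini --set /etc/nova/nova.conf libvirt virt_type qemu
    fi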

    Enable at boot and start the services (repeat on each compute node, however many there are; once started, each compute node registers itself automatically and appears in the compute service list)

    systemctl enable libvirtd.service openstack-nova-compute.service
    systemctl start libvirtd.service openstack-nova-compute.service

    Error on startup:

    Job for openstack-nova-compute.service failed because the control process exited with error code. See "systemctl status openstack-nova-compute.service" and "journalctl -xe" for details.
    Cause: ll /etc/nova/
    -rw-r----- 1 root nova 2923 Dec 20 04:57 api-paste.ini
    -rw-r----- 1 root root 369824 Mar 6 22:30 nova.conf
    -rw-r----- 1 root nova 369384 Dec 24 15:28 nova.conf.bak
    -rw-r----- 1 root nova 4 Dec 24 15:26 policy.json
    -rw-r--r-- 1 root root 64 Dec 24 15:28 release
    -rw-r----- 1 root nova 966 Dec 20 04:57 rootwrap.conf
    The files under /etc/nova must be owned root:nova. Because nova.conf was copied over from the controller node it ended up root:root, so change the ownership and the service will start.
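    The fix implied above is a single ownership change followed by a restart:

    chown root:nova /etc/nova/nova.conf
    systemctl restart openstack-nova-compute.service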

    Verification (run on the controller node)

    source admin-openstack
    nova service-list (or openstack compute service list): check whether the compute node has registered. Even after nova-compute starts cleanly, confirm that it appears here; only then does the controller consider it up.
    openstack image list                  check that images are available

    Controller node operations

    Compared with Newton, Queens adds the following step (mapping the compute node information into the cell database):

    Add the compute node to the cell database
    Important
    Run the following commands on the controller node.

    Source the admin credentials to enable admin-only CLI commands, then confirm there are compute hosts in the database:

    $ . admin-openrc

    $ openstack compute service list --service nova-compute
    +----+-------+--------------+------+-------+---------+----------------------------+
    | ID | Host | Binary | Zone | State | Status | Updated At |
    +----+-------+--------------+------+-------+---------+----------------------------+
    | 1 | node1 | nova-compute | nova | up | enabled | 2017-04-14T15:30:44.000000 |
    +----+-------+--------------+------+-------+---------+----------------------------+
    Discover compute hosts:

    # su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

    Found 2 cell mappings.
    Skipping cell0 since it does not contain hosts.
    Getting compute nodes from cell 'cell1': ad5a5985-a719-4567-98d8-8d148aaae4bc
    Found 1 computes in cell: ad5a5985-a719-4567-98d8-8d148aaae4bc
    Checking host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3
    Creating host mapping for compute host 'compute': fe58ddc1-1d65-4f87-9456-bc040dc106b3

    Note: whenever new compute nodes are added, you must run nova-manage cell_v2 discover_hosts (i.e. the command above; it is easiest to use it exactly as shown) on the controller node to register them. Alternatively, the service can be configured to discover hosts periodically:
    When you add new compute nodes, you must run nova-manage cell_v2 discover_hosts on the controller node to register those new compute nodes. Alternatively, you can set an appropriate interval in /etc/nova/nova.conf:
    # Optional: if not configured, run the discovery command above manually whenever a host is added
    [scheduler]
    discover_hosts_in_cells_interval = 300
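    If you do set this interval, restart the scheduler so the new option is picked up (nova services only read nova.conf at startup):

    systemctl restart openstack-nova-scheduler.service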

      
