• OpenStack Notes


    Official documentation: https://docs.openstack.org/mitaka/zh_CN/install-guide-ubuntu/nova-compute-install.html

    The OpenStack project mainly provides compute, storage, image, and networking services, all of which depend on the Keystone identity service. Each project can be deployed separately, a single project can be deployed across multiple physical machines, and every service exposes an application programming interface (API) for convenient third-party integration and resource access.

    Reference link: https://app.yinxiang.com/fx/26611639-2495-4bf0-9233-74c1ad98f64a

    (Cross-reference your own notes: pay attention to file permissions when modifying configuration files.)

    Environment Preparation

    1. Hardware requirements for the OpenStack environment

           CentOS 7.4

    • CPU: a 64-bit x86 processor with the Intel 64 or AMD64 extensions and hardware-assisted virtualization (Intel VT-x or AMD-V) enabled

    • Memory: >= 2 GB

    • Disk space: >= 50 GB

    2. Virtual machine allocation

    Hostname     Operating system     IP address       Role
    controller   CentOS-7.4-x86_64    192.168.66.197   control node
    compute      CentOS-7.4-x86_64    192.168.66.147   compute node
    cinder       CentOS-7.4-x86_64    192.168.66.128   block storage node

    3. Disable the firewall and SELinux on all three hosts

    Run the following on every host:

    [root@controller ~]# systemctl disable firewalld.service

    [root@controller ~]# systemctl stop firewalld.service

    [root@controller ~]# sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config
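The substitution above can be rehearsed on a throwaway copy before touching the real file. A minimal sketch; /tmp/selinux.config.demo is an illustrative stand-in for /etc/selinux/config:

```shell
# Rehearse the SELINUX edit on a disposable copy of the config file.
CONF=/tmp/selinux.config.demo
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CONF"
# Same substitution as the command above, applied to the copy.
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' "$CONF"
grep '^SELINUX=' "$CONF"    # now reads SELINUX=disabled
```

Note that disabling SELinux in the config file only takes effect after a reboot; `setenforce 0` switches to permissive mode immediately.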

    4. Configure name resolution

    On every host: set the hostname (run the matching command on each host)

    [root@controller ~]# hostnamectl set-hostname controller

    [root@compute ~]# hostnamectl set-hostname compute

    [root@cinder ~]# hostnamectl set-hostname cinder
    After changing the hostname, reboot the host.

    On every host: edit the hosts file

    [root@controller ~]# cat /etc/hosts
    127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
    ::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
    192.168.66.197 controller
    192.168.66.147 compute
    192.168.66.128 cinder
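The entries above can be appended idempotently, making the snippet safe to re-run on every node. A sketch; HOSTS_FILE defaults to a scratch file rather than /etc/hosts, so point it at the real file when running as root:

```shell
# Append the three cluster entries to a hosts file, skipping any that exist.
HOSTS_FILE="${HOSTS_FILE:-/tmp/hosts.demo}"
touch "$HOSTS_FILE"
while read -r ip name; do
    # only append when the hostname is not present yet
    grep -qw "$name" "$HOSTS_FILE" || printf '%s %s\n' "$ip" "$name" >> "$HOSTS_FILE"
done <<'EOF'
192.168.66.197 controller
192.168.66.147 compute
192.168.66.128 cinder
EOF
```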

    Test connectivity
    On every host, ping the other two hosts.

    Building OpenStack

    1. Configure the Aliyun yum repository

    On every host: back up the default yum repository before configuring the Aliyun one

    [root@controller ~]# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup

    On every host: download the latest repository file

    [root@controller ~]# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-7.repo

    2. Install and configure the NTP service

    Install and configure the chrony service on the controller node

    [root@controller ~]# yum -y install chrony

    Start the service and enable it at boot

    [root@controller ~]# systemctl enable chronyd

    [root@controller ~]# systemctl restart chronyd

    Install and configure the chrony service on the compute node

    [root@compute ~]# yum -y install chrony

    Start the service and enable it at boot

    [root@compute ~]# systemctl enable chronyd

    [root@compute ~]# systemctl restart chronyd

    Install and configure the chrony service on the cinder node

    [root@cinder ~]# yum -y install chrony

    Start the service and enable it at boot

    [root@cinder ~]# systemctl enable chronyd

    [root@cinder ~]# systemctl restart chronyd

    Verify the time synchronization service

    [root@controller ~]# chronyc sources    (this command can be run on every host)

    3. Enable the OpenStack repository

    On the management (controller) node

    [root@controller ~]# yum install centos-release-openstack-queens -y

    [root@controller ~]# yum upgrade -y //upgrade the packages on the host

    [root@controller ~]# yum install python-openstackclient -y //install the OpenStack client

    [root@controller ~]# yum install openstack-selinux -y //install openstack-selinux so that OpenStack security policies are managed automatically

    4. Deploy the MySQL database

    On the management node

    [root@controller ~]# yum -y install mariadb mariadb-server python2-PyMySQL

    Edit the configuration file
    vim /etc/my.cnf.d/mariadb-server.cnf

    [mysqld]
    datadir=/var/lib/mysql
    socket=/var/lib/mysql/mysql.sock
    log-error=/var/log/mariadb/mariadb.log
    pid-file=/var/run/mariadb/mariadb.pid
    bind-address = 192.168.66.197 //set to the controller node IP so that other nodes can access the database over the management network
    default-storage-engine = innodb
    innodb_file_per_table = on
    max_connections = 4096
    collation-server = utf8_general_ci
    character-set-server = utf8
    Start the service and enable it at boot

    [root@controller ~]#systemctl enable mariadb.service

    [root@controller ~]#systemctl start mariadb.service

    Harden the database

    [root@controller ~]# mysql_secure_installation

    5. Install and configure the messaging server (RabbitMQ)

    OpenStack uses a message queue to coordinate operations and status information among services. The message queue service usually runs on the controller node.
    Install RabbitMQ on the management node

    [root@controller ~]# yum -y install rabbitmq-server
    Start the service and enable it at boot

    [root@controller ~]# systemctl enable rabbitmq-server.service

    [root@controller ~]# systemctl start rabbitmq-server.service
    Check the service

    [root@controller ~]# netstat -ntap | grep 5672

    6. Add the OpenStack user

    On the management node

    [root@controller ~]# rabbitmqctl add_user openstack 123456
    //create the user openstack with password 123456

    [root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*" //grant the new user full configure, write, and read permissions

    7. Deploy the memcached service

    On the management node
    Install the packages

    [root@controller ~]# yum -y install memcached python-memcached
    Edit the configuration file

    Start the service and enable it at boot

    [root@controller ~]# systemctl enable memcached.service

    [root@controller ~]# systemctl start memcached.service

    8. Deploy the etcd service

    etcd is a distributed, consistent key-value store used for shared configuration and service discovery. It is secure (automatic TLS with optional client-certificate authentication), fast (benchmarked at 10,000 writes per second), and reliable (correctly distributed using Raft).
    On the management node
    Install the package

    [root@controller ~]# yum -y install etcd
    Edit the configuration file, setting the following keys to the management node's IP:

    ETCD_INITIAL_CLUSTER
    ETCD_INITIAL_ADVERTISE_PEER_URLS
    ETCD_ADVERTISE_CLIENT_URLS
    ETCD_LISTEN_CLIENT_URLS
    #[Member]
    ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
    ETCD_LISTEN_PEER_URLS="http://192.168.66.197:2380"
    ETCD_LISTEN_CLIENT_URLS="http://192.168.66.197:2379"
    ETCD_NAME="controller"
    #[Clustering]
    ETCD_INITIAL_ADVERTISE_PEER_URLS="http://192.168.66.197:2380"
    ETCD_ADVERTISE_CLIENT_URLS="http://192.168.66.197:2379"
    ETCD_INITIAL_CLUSTER="controller=http://192.168.66.197:2380"
    ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
    ETCD_INITIAL_CLUSTER_STATE="new"
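All five URLs in the file above repeat the same address, so it can help to template the file from a single variable. A sketch; NODE_IP and the /tmp path are illustrative, and on the controller the target would be /etc/etcd/etcd.conf:

```shell
# Derive every etcd URL from one NODE_IP variable so they cannot drift apart.
NODE_IP=192.168.66.197
ETCD_CONF=/tmp/etcd.conf.demo    # stand-in for /etc/etcd/etcd.conf
cat > "$ETCD_CONF" <<EOF
#[Member]
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="http://${NODE_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="http://${NODE_IP}:2379"
ETCD_NAME="controller"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="http://${NODE_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="http://${NODE_IP}:2379"
ETCD_INITIAL_CLUSTER="controller=http://${NODE_IP}:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster-01"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF
```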

    Start the service and enable it at boot

    [root@controller ~]# systemctl enable etcd.service

    [root@controller ~]# systemctl start etcd.service

    9. Deploy the Keystone identity service

    The Identity service provides authentication and authorization for the other OpenStack services and supplies a catalog of endpoints for all services; the other OpenStack services consume identity as a common, unified API. In addition, services that supply user information but are not OpenStack projects (such as an LDAP service) can be integrated into pre-existing infrastructure.
    To benefit from the Identity service, the other OpenStack services must cooperate with it. When an OpenStack service receives a request from a user, it asks the Identity service whether the user is authorized to make the request. The Identity service consists of the following components: a server, a centralized server that provides authentication and authorization through a RESTful interface; drivers, service back ends integrated into the centralized server, used to access identity information in repositories external to OpenStack (such as a SQL database) that may already exist in the infrastructure where OpenStack is deployed; and middleware modules, which run in the address space of the OpenStack components that use the Identity service, intercept service requests, extract user credentials, and send them to the central server for authorization. The integration between the middleware modules and the OpenStack components uses the Python Web Server Gateway Interface (WSGI).
    When installing an OpenStack service, you must register it with the Identity service of your deployment, so that the Identity service can track which OpenStack services are installed and locate them on the network.

    Install and configure the Keystone service
    On the management node
    Configure the MySQL database and grant privileges

    [root@controller ~]# mysql -uroot -pcdp12345

    Welcome to the MariaDB monitor. Commands end with ; or \g.
    Your MariaDB connection id is 12
    Server version: 10.1.20-MariaDB MariaDB Server

    Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

    Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

    MariaDB [(none)]> create database keystone;
    Query OK, 1 row affected (0.00 sec)

    MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'localhost' identified by '123456';
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> grant all privileges on keystone.* to 'keystone'@'%' identified by '123456';
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> flush privileges;
    Query OK, 0 rows affected (0.00 sec)

    MariaDB [(none)]> exit

    Install the packages

    [root@controller ~]# yum install openstack-keystone httpd mod_wsgi -y
    Edit the configuration file (keystone.conf)

    vim /etc/keystone/keystone.conf

    [database]
    connection = mysql+pymysql://keystone:123456@controller/keystone

    [token]
    provider = fernet //line 2922; the secure token message format

    Sync the database

    [root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
    At this point the keystone database contains many tables.

    (Note: if no tables appear, change the host name in the connection string to the IP address and check the log for errors: [root@controller ~]# tail -f -n 20 /var/log/keystone/keystone.log)

    Initialize the Fernet key repositories and bootstrap the Identity service

    [root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
    [root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
    [root@controller ~]# keystone-manage bootstrap --bootstrap-password 123456 \
      --bootstrap-admin-url http://controller:35357/v3/ \
      --bootstrap-internal-url http://controller:5000/v3/ \
      --bootstrap-public-url http://controller:5000/v3/ \
      --bootstrap-region-id RegionOne

    The admin user has been created.

    10. Configure the Apache service

    Set the server name (ServerName controller)

    [root@controller ~]# vim /etc/httpd/conf/httpd.conf

    [root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/ //create a symbolic link
    Start the service and enable it at boot

    [root@controller ~]# systemctl enable httpd.service

    [root@controller ~]# systemctl start httpd.service

    Set the environment variables (temporary admin credentials)
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=123456
    export OS_AUTH_URL=http://controller:5000/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2

    11. Create domains, projects, users, and roles

    Create a domain

    [root@controller ~]# openstack domain create --description "Domain" example

    Create the service project

    [root@controller ~]# openstack project create --domain default --description "Service Project" service

    Create the demo project

    [root@controller ~]# openstack project create --domain default --description "Demo Project" demo

    Create the demo user

    [root@controller ~]# openstack user create --domain default --password-prompt demo

    Create the user role

    [root@controller ~]# openstack role create user

    Add the user role to the demo project and user

    [root@controller ~]# openstack role add --project demo --user demo user //this command produces no output

    Verify Keystone
    Unset the environment variables

    [root@controller ~]# unset OS_AUTH_URL OS_PASSWORD

    Request an authentication token as the admin user

    [root@controller ~]# openstack --os-auth-url http://controller:35357/v3 \
      --os-project-domain-name Default --os-user-domain-name Default \
      --os-project-name admin --os-username admin token issue

    Request an authentication token as the demo user

    [root@controller ~]# openstack --os-auth-url http://controller:5000/v3 \
      --os-project-domain-name Default --os-user-domain-name Default \
      --os-project-name demo --os-username demo token issue

    12. Create the OpenStack client environment scripts

    Create the admin-openrc script
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_NAME=admin
    export OS_USERNAME=admin
    export OS_PASSWORD=123456
    export OS_AUTH_URL=http://controller:5000/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2

    Create the demo-openrc script
    export OS_PROJECT_DOMAIN_NAME=Default
    export OS_USER_DOMAIN_NAME=Default
    export OS_PROJECT_NAME=demo
    export OS_USERNAME=demo
    export OS_PASSWORD=123456
    export OS_AUTH_URL=http://controller:5000/v3
    export OS_IDENTITY_API_VERSION=3
    export OS_IMAGE_API_VERSION=2
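Since the two scripts above differ only in the project, user, and password, they can be generated from one helper. A sketch; write_openrc is a hypothetical function, not an OpenStack tool, and the /tmp paths stand in for ~/admin-openrc and ~/demo-openrc:

```shell
# Generate an openrc credentials script for a given user/project.
write_openrc() {
    path=$1; user=$2; pass=$3
    cat > "$path" <<EOF
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=${user}
export OS_USERNAME=${user}
export OS_PASSWORD=${pass}
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
EOF
}
write_openrc /tmp/admin-openrc.demo admin 123456
write_openrc /tmp/demo-openrc.demo  demo  123456
```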

    Verify the return values with the script and view the admin user's token information

    [root@controller ~]# source ~/admin-openrc

    [root@controller ~]# openstack token issue

    13. The image service (Glance)

    On the management node
    Configure the MySQL database and grant privileges

    MariaDB [(none)]> create database glance;
    MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'localhost' identified by '123456';
    MariaDB [(none)]> grant all privileges on glance.* to 'glance'@'%' identified by '123456';
    MariaDB [(none)]> flush privileges;

    Load the admin user's environment variables

    [root@controller ~]# source admin-openrc

    [root@controller ~]# export | grep OS_

    Create the glance user

    [root@controller ~]# openstack user create --domain default --password-prompt glance

    Add the admin role to the glance user and the service project

    [root@controller ~]# openstack role add --project service --user glance admin //no output

    Create the glance service

    [root@controller ~]# openstack service create --name glance --description "OpenStack Image" image

    Create the image service API endpoints
    OpenStack uses three API endpoint variants for each service: admin, internal, and public.

    [root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292

    [root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292

    [root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292
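The three endpoint-create commands above differ only in the interface name, so a loop keeps them consistent. A dry-run sketch that only prints the commands; drop the `echo` (and the tee to a scratch file) to execute them for real with admin credentials loaded:

```shell
# Print the three endpoint-create commands instead of executing them.
for iface in public internal admin; do
    echo openstack endpoint create --region RegionOne image "$iface" http://controller:9292
done | tee /tmp/endpoints.demo
```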

    Install the glance package

    [root@controller ~]# yum -y install openstack-glance

    Create the images directory and change its ownership

    [root@controller ~]# mkdir /var/lib/glance/images

    [root@controller ~]# cd /var/lib/

    [root@controller lib]# chown -hR glance:glance glance

    Edit the glance-api.conf configuration file

    Edit the glance-registry.conf configuration file

    Sync the image database

    [root@controller ~]# su -s /bin/sh -c "glance-manage db_sync" glance

    Start the services and enable them at boot

    [root@controller ~]# systemctl enable openstack-glance-api.service

    [root@controller ~]# systemctl start openstack-glance-api.service

    [root@controller ~]# systemctl enable openstack-glance-registry.service

    [root@controller ~]# systemctl start openstack-glance-registry.service

    Verify by uploading an image
    Load the admin user's environment variables and download an image

    [root@controller ~]# source ~/admin-openrc

    [root@controller ~]# wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img //download a small Linux image for testing

    Upload the image
    Upload the image to the Image service using the QCOW2 disk format, the bare container format, and public visibility, so that all projects can access it

    [root@controller ~]# openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img --disk-format qcow2 --container-format bare --public

    View the uploaded image

    [root@controller ~]# openstack image list

    14. Deploy the Compute service (Nova)

    On the controller node
    Installation and configuration
    Configure the MySQL database and grant privileges
    CREATE DATABASE nova_api;
    CREATE DATABASE nova;
    CREATE DATABASE nova_cell0;
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '123456';
    GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '123456';
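The nine statements above follow the same pattern per database, so they can be emitted by a small helper and piped into `mysql -uroot -p` on the controller. A sketch; db_grants is a hypothetical shell function (it adds IF NOT EXISTS so the output is safe to re-apply):

```shell
# Emit the CREATE/GRANT statements for one service database.
db_grants() {
    db=$1; user=$2; pass=$3
    cat <<EOF
CREATE DATABASE IF NOT EXISTS ${db};
GRANT ALL PRIVILEGES ON ${db}.* TO '${user}'@'localhost' IDENTIFIED BY '${pass}';
GRANT ALL PRIVILEGES ON ${db}.* TO '${user}'@'%' IDENTIFIED BY '${pass}';
EOF
}
# Collect the SQL for all three nova databases into one file.
for db in nova_api nova nova_cell0; do
    db_grants "$db" nova 123456
done > /tmp/nova-grants.sql
```

Apply it with `mysql -uroot -p < /tmp/nova-grants.sql`.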

    Create the nova user

    [root@controller ~]# source ~/admin-openrc

    [root@controller ~]# openstack user create --domain default --password-prompt nova

    Add the admin role to the nova user

    [root@controller ~]# openstack role add --project service --user nova admin

    Create the nova service entity

    [root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute

    Create the Compute API service endpoints
    [root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1
    [root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1
    [root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

    Create a placement service user

    [root@controller ~]# openstack user create --domain default --password-prompt placement

    Add the placement user to the service project with the admin role

    [root@controller ~]# openstack role add --project service --user placement admin

    Create the Placement API service in the service catalog

    [root@controller ~]# openstack service create --name placement --description "Placement API" placement

    Create the Placement API service endpoints

    [root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778

    [root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778

    [root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778

    Install the packages

    [root@controller ~]# yum install openstack-nova-api openstack-nova-conductor openstack-nova-console openstack-nova-novncproxy openstack-nova-scheduler openstack-nova-placement-api -y

    Edit the nova.conf configuration file

    [root@controller ~]#vim /etc/nova/nova.conf

    [DEFAULT]
    enabled_apis=osapi_compute,metadata //line 2766
    transport_url=rabbit://openstack:123456@controller //line 3156
    my_ip=192.168.66.197 //line 1291; the controller node's management IP
    use_neutron=true //line 1755
    firewall_driver=nova.virt.firewall.NoopFirewallDriver //line 2417

    [api_database]
    connection=mysql+pymysql://nova:123456@controller/nova_api //line 3513

    [database]
    connection=mysql+pymysql://nova:123456@controller/nova //line 4588

    [api]
    auth_strategy=keystone //line 3221

    [keystone_authtoken]
    auth_uri=http://controller:5000
    auth_url=http://controller:35357 //line 6073
    memcached_servers=controller:11211 //line 6124
    auth_type=password //line 6231
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = nova
    password = 123456

    [vnc]
    enabled=true //line 10213
    server_listen=$my_ip //line 10237
    server_proxyclient_address=$my_ip //line 10250

    [glance]
    api_servers=http://controller:9292 //line 5266

    [oslo_concurrency]
    lock_path=/var/lib/nova/tmp //line 7841

    [placement]
    os_region_name=RegionOne //line 8740
    auth_type=password //line 8780
    auth_url=http://controller:35357/v3 //line 8786
    project_name=service //line 8801
    project_domain_name=Default //line 8807
    username=placement //line 8827
    user_domain_name=Default //line 8833
    password=123456 //line 8836

    Enable access to the Placement API
    Due to a packaging bug, access to the Placement API must be enabled; append the following to the end of the Placement API's Apache configuration (by default /etc/httpd/conf.d/00-nova-placement-api.conf):
    <Directory /usr/bin>
    <IfVersion >= 2.4>
    Require all granted
    </IfVersion>
    <IfVersion < 2.4>
    Order allow,deny
    Allow from all
    </IfVersion>
    </Directory>

    Restart the httpd service

    [root@controller ~]# systemctl restart httpd.service

    Sync the nova-api database

    [root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

    Register the cell0 database

    [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

    Create the cell1 cell

    [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

    Sync the nova database

    [root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova

    Verify that the cells are registered correctly

    [root@controller ~]# nova-manage cell_v2 list_cells

    Start the services and enable them at boot
    systemctl enable openstack-nova-api.service
    systemctl enable openstack-nova-consoleauth.service
    systemctl enable openstack-nova-scheduler.service
    systemctl enable openstack-nova-conductor.service
    systemctl enable openstack-nova-novncproxy.service
    systemctl start openstack-nova-api.service
    systemctl start openstack-nova-consoleauth.service
    systemctl start openstack-nova-scheduler.service
    systemctl start openstack-nova-conductor.service
    systemctl start openstack-nova-novncproxy.service
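The ten systemctl invocations above collapse into one loop. Printed as a dry run into a scratch file here; drop the `echo` (and the redirection) to run it for real on the controller:

```shell
# Enable and start each nova service; echo makes this a dry run.
for svc in openstack-nova-api openstack-nova-consoleauth openstack-nova-scheduler \
           openstack-nova-conductor openstack-nova-novncproxy; do
    echo systemctl enable "$svc".service
    echo systemctl start "$svc".service
done > /tmp/nova-services.demo
```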

    15. Install and configure the compute node

    Install the packages
    yum -y install openstack-nova-compute

    Edit the nova.conf configuration file

    vim /etc/nova/nova.conf

    [DEFAULT]
    my_ip = 192.168.66.147 //line 1291; the compute node's IP
    use_neutron=true //line 1755
    firewall_driver=nova.virt.firewall.NoopFirewallDriver //line 2417
    enabled_apis = osapi_compute,metadata //line 2756
    transport_url = rabbit://openstack:123456@controller //line 3156

    [api]
    auth_strategy=keystone //line 3221

    [keystone_authtoken]
    auth_uri = http://192.168.66.197:5000 //line 6073; the controller node's IP
    auth_url = http://controller:35357
    memcached_servers=controller:11211 //line 6124
    auth_type=password //line 6231
    project_domain_name=default
    user_domain_name=default
    project_name=service
    username=nova
    password=123456

    [vnc]
    enabled=true //line 10213
    server_listen=0.0.0.0 //line 10237
    server_proxyclient_address=$my_ip //line 10250
    novncproxy_base_url=http://controller:6080/vnc_auto.html //line 10268

    [glance]
    api_servers=http://controller:9292 //line 5266

    [oslo_concurrency]
    lock_path=/var/lib/nova/tmp //line 7841

    [placement]
    os_region_name=RegionOne //line 8740
    auth_type = password //line 8780
    auth_url=http://controller:35357/v3 //line 8786
    project_name = service //line 8801
    project_domain_name = Default //line 8807
    user_domain_name = Default //line 8833
    username = placement //line 8827
    password = 123456 //line 8836

    Start the services and enable them at boot
    systemctl enable libvirtd.service
    systemctl restart libvirtd
    systemctl enable openstack-nova-compute.service
    systemctl start openstack-nova-compute.service

    16. Add the compute node to the cell database

    On the controller node
    Verify that the compute node is present in the database

    [root@controller ~]# source ~/admin-openrc //reload the environment variables after a host reboot

    [root@controller ~]# openstack compute service list --service nova-compute

    Discover the compute node

    [root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

    Verify the Compute service operation on the controller node

    [root@controller ~]# openstack compute service list

    List the API endpoints in the Identity service to verify connectivity with the Identity service

    [root@controller ~]# openstack catalog list

    Check that the cells and the Placement API are working properly

    [root@controller ~]# nova-status upgrade check

    17. The Networking service (Neutron)

    Install and configure the Neutron network on the controller node
    Create the neutron database and grant privileges
    MariaDB [(none)]> CREATE DATABASE neutron;
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '123456';
    MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '123456';

    Create the user

    [root@controller ~]# source ~/admin-openrc

    [root@controller ~]# openstack user create --domain default --password-prompt neutron

    Create the network service entity

    [root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network

    Create the network service endpoints

    [root@controller ~]#openstack endpoint create --region RegionOne network public http://controller:9696

    [root@controller ~]#openstack endpoint create --region RegionOne network internal http://controller:9696

    [root@controller ~]#openstack endpoint create --region RegionOne network admin http://controller:9696

    Install the packages

    [root@controller ~]# yum install -y openstack-neutron openstack-neutron-ml2 openstack-neutron-linuxbridge ebtables

    Edit the configuration file

    vim /etc/neutron/neutron.conf

    [database]
    connection = mysql+pymysql://neutron:123456@controller/neutron //line 729

    [DEFAULT]
    auth_strategy = keystone //line 27
    core_plugin = ml2 //line 30
    service_plugins = //line 33; leaving this empty disables additional plug-ins
    transport_url = rabbit://openstack:123456@controller //line 570
    notify_nova_on_port_status_changes = true //line 98
    notify_nova_on_port_data_changes = true //line 102

    [keystone_authtoken]
    auth_uri = http://controller:5000 //847
    auth_url = http://controller:35357
    memcached_servers = controller:11211 //898
    auth_type = password //1005
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = neutron
    password = 123456

    [nova]
    auth_url = http://controller:35357 //1085
    auth_type = password //1089
    project_domain_name = default //1127
    user_domain_name = default //1156
    region_name = RegionOne //1069
    project_name = service //1135
    username = nova //1163
    password = 123456 //1121

    [oslo_concurrency]
    lock_path = /var/lib/neutron/tmp //1179

    Configure the ML2 layer-2 plug-in

    vim /etc/neutron/plugins/ml2/ml2_conf.ini

    [ml2]
    type_drivers = flat,vlan //136
    tenant_network_types = //line 141; leaving this empty disables self-service networks
    mechanism_drivers = linuxbridge //145
    extension_drivers = port_security //150

    [ml2_type_flat]
    flat_networks = provider //186

    [securitygroup]
    enable_ipset = true //263

    Configure the Linux bridge agent

    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

    [linux_bridge]
    physical_interface_mappings = provider:ens33 //157

    [vxlan]
    enable_vxlan = false //208

    [securitygroup]
    enable_security_group = true //193
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver //188

    Configure the DHCP agent

    vim /etc/neutron/dhcp_agent.ini

    interface_driver = linuxbridge //16
    dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq //28
    enable_isolated_metadata = true //37

    Configure the metadata agent

    vim /etc/neutron/metadata_agent.ini

    [DEFAULT]
    nova_metadata_host = controller //22
    metadata_proxy_shared_secret = 123456 //34

    Configure the Compute service to use the Networking service

    vim /etc/nova/nova.conf

    [neutron]
    url = http://controller:9696 //7534
    auth_url = http://controller:35357 //7610
    auth_type = password //7604
    project_domain_name = default //7631
    user_domain_name = default //7657
    region_name = RegionOne //7678
    project_name = service //7625
    username = neutron //7651
    password = 123456 //7660
    service_metadata_proxy = true //7573
    metadata_proxy_shared_secret = 123456 //7584

    Create the plug-in symbolic link

    [root@controller ~]# ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini

    Sync the database

    [root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron

    Restart the Compute API service

    [root@controller ~]# systemctl restart openstack-nova-api.service

    Start the Neutron services and enable them at boot

    systemctl enable neutron-server.service
    systemctl enable neutron-linuxbridge-agent.service
    systemctl enable neutron-dhcp-agent.service
    systemctl enable neutron-metadata-agent.service
    systemctl start neutron-server.service
    systemctl start neutron-linuxbridge-agent.service
    systemctl start neutron-dhcp-agent.service
    systemctl start neutron-metadata-agent.service

    18. Configure the network service on the compute node

    Install the packages

    [root@compute ~]# yum install -y openstack-neutron-linuxbridge ebtables ipset

    Configure the common components

    vim /etc/neutron/neutron.conf

    [DEFAULT]
    auth_strategy = keystone //27
    transport_url = rabbit://openstack:123456@controller //570

    [keystone_authtoken]
    auth_uri = http://controller:5000 //847
    auth_url = http://controller:35357
    memcached_servers = controller:11211 //898
    auth_type = password //1005
    project_domain_name = default
    user_domain_name = default
    project_name = service
    username = neutron
    password = 123456

    [oslo_concurrency]
    lock_path = /var/lib/neutron/tmp //1180

    Configure the Linux bridge agent

    vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini

    [linux_bridge]
    physical_interface_mappings = provider:ens33 //157

    [vxlan]
    enable_vxlan = false //208

    [securitygroup]
    enable_security_group = true //193
    firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver //188

    Configure the compute node to use the Networking service

    vim /etc/nova/nova.conf

    [neutron]
    url = http://controller:9696 //7534
    auth_url = http://controller:35357 //7610
    auth_type = password //7640
    project_domain_name = default //7631
    user_domain_name = default //7657
    region_name = RegionOne //7678
    project_name = service //7625
    username = neutron //7651
    password = 123456 //7660

    Start the services
    systemctl restart openstack-nova-compute.service
    systemctl enable neutron-linuxbridge-agent.service
    systemctl start neutron-linuxbridge-agent.service

    19. Deploy the Horizon dashboard service

    Install the Horizon service on the controller node
    Install the package

    [root@controller ~]# yum install openstack-dashboard -y

    Edit the configuration file

    vim /etc/openstack-dashboard/local_settings

    OPENSTACK_HOST = "controller" //189
    OPENSTACK_KEYSTONE_DEFAULT_ROLE = "admin" //191
    ALLOWED_HOSTS = ['*'] //38
    SESSION_ENGINE = 'django.contrib.sessions.backends.file' //51

    Configure memcached session storage

    SESSION_ENGINE = 'django.contrib.sessions.backends.cache' //line 50, add this line
    CACHES = { //comment out lines 166-170 and uncomment lines 159-164
    'default': {
    'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
    'LOCATION': 'controller:11211',
    }
    }
    OPENSTACK_KEYSTONE_URL = "http://%s:5000/v3" % OPENSTACK_HOST //line 190; enable Identity API version v3
    OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True //line 76; enable multi-domain support

    OPENSTACK_API_VERSIONS = { //line 65; configure the API versions
    "identity": 3,
    "image": 2,
    "volume": 2,
    }
    OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default" //98

    OPENSTACK_NEUTRON_NETWORK = { //324

    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_***': False,
    'enable_fip_topology_check': False,
    }

    Fix for the dashboard page failing to open

    vim /etc/httpd/conf.d/openstack-dashboard.conf

    WSGISocketPrefix run/wsgi
    WSGIApplicationGroup %{GLOBAL} //add this line

    Restart the web service and the session storage service
    systemctl restart httpd.service
    systemctl restart memcached.service

    20. Log in and test

    http://192.168.66.197/dashboard

    Domain: default
    Username: admin
    Password: 123456

     
  • Original post: https://www.cnblogs.com/Su-per-man/p/12395188.html