Main reference: the official documentation: https://docs.openstack.org/liberty/zh_CN/install-guide-ubuntu/environment-nosql-database.html
Check whether the following commands are available:
root@hett-virtual-machine:/# command
root@hett-virtual-machine:/# prompt
The program 'prompt' is currently not installed. You can install it by typing:
apt install libmodglue1v5
root@hett-virtual-machine:/# sudo apt-get install libmodglue1v5
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following NEW packages will be installed:
libmodglue1v5
0 upgraded, 1 newly installed, 0 to remove and 265 not upgraded.
Need to get 68.8 kB of archives.
After this operation, 338 kB of additional disk space will be used.
Get:1 http://cn.archive.ubuntu.com/ubuntu xenial/universe amd64 libmodglue1v5 amd64 1.19-0ubuntu3 [68.8 kB]
Fetched 68.8 kB in 0s (134 kB/s)
Selecting previously unselected package libmodglue1v5.
(Reading database ... 182546 files and directories currently installed.)
Preparing to unpack .../libmodglue1v5_1.19-0ubuntu3_amd64.deb ...
Unpacking libmodglue1v5 (1.19-0ubuntu3) ...
Processing triggers for man-db (2.7.5-1) ...
Setting up libmodglue1v5 (1.19-0ubuntu3) ...
Processing triggers for libc-bin (2.23-0ubuntu9) ...
root@hett-virtual-machine:/# prompt
prompt (built on Tue Aug 4 16:53:30 UTC 2015)
Copyright (C) 2001-2006 Kasper Peeters <kasper.peeters@aei.mpg.de>
Usage: prompt [program] [args]
If they are missing, install them as shown above.
System requirements: 8 GB of RAM and a 20 GB disk.
sudo apt-get update
root@hett-virtual-machine:/# sudo apt-get dist-upgrade
I. Set up the base environment
192.168.30.145 controller [2 vCPUs, 4 GB RAM, 40 GB storage, two NICs]
192.168.30.146 compute [2 vCPUs, 4 GB RAM, 40 GB storage, two NICs]
1. Install SSH and set the root password
$ sudo apt install ssh
$ sudo passwd root
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
2. Generate a temporary authentication token
# openssl rand -hex 10
bdb5cad50653d4e85b7d
3. Add the Aliyun mirror
# cp /etc/apt/sources.list /etc/apt/sources.list.bak
# vim /etc/apt/sources.list
deb-src http://archive.ubuntu.com/ubuntu xenial main restricted
deb http://mirrors.aliyun.com/ubuntu/ xenial main restricted
deb-src http://mirrors.aliyun.com/ubuntu/ xenial main restricted multiverse universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-updates main restricted multiverse universe
deb http://mirrors.aliyun.com/ubuntu/ xenial universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates universe
deb http://mirrors.aliyun.com/ubuntu/ xenial multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-updates multiverse
deb http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-backports main restricted universe multiverse
deb http://archive.canonical.com/ubuntu xenial partner
deb-src http://archive.canonical.com/ubuntu xenial partner
deb http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted
deb-src http://mirrors.aliyun.com/ubuntu/ xenial-security main restricted multiverse universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-security universe
deb http://mirrors.aliyun.com/ubuntu/ xenial-security multiverse
4. Configure the network interface IP addresses (a check sketch follows the interface file below)
# ip addr
# vim /etc/network/interfaces
auto ens33
iface ens33 inet static
address 192.168.30.145
netmask 255.255.255.0
gateway 192.168.30.2
dns-nameserver 114.114.114.114
# The provider network interface (the second interface is used as the provider interface)
auto ens34
iface ens34 inet manual
up ip link set dev $IFACE up
down ip link set dev $IFACE down
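To apply and sanity-check the interface settings, something like the following can be run from the console (a minimal sketch; restarting networking over SSH may drop the session, and ens33/ens34 are the interface names used on this host):
# systemctl restart networking   ##or simply reboot the node
# ip addr show ens33             ##should show 192.168.30.145/24
# ip route show default          ##should show the default route via 192.168.30.2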
5. Configure /etc/hosts
# vim /etc/hosts
192.168.30.145 controller
192.168.30.146 compute
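After /etc/hosts is updated on both nodes, a quick connectivity check (a sketch; assumes both nodes are already up):
# ping -c 2 controller
# ping -c 2 compute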
6. Configure NTP time synchronization
# dpkg-reconfigure tzdata ##change the time zone
Current default time zone: 'Asia/Chongqing'
Local time is now: Tue Mar 28 20:54:33 CST 2017.
Universal Time is now: Tue Mar 28 12:54:33 UTC 2017.
# apt -y install chrony ##install the chrony time synchronization service
Controller Node
# vim /etc/chrony/chrony.conf
allow 192.168.30.0/24 ##allow hosts on this subnet to synchronize time with this node
# service chrony restart
Compute Node
# vim /etc/chrony/chrony.conf
# pool 2.debian.pool.ntp.org offline iburst
server 192.168.30.145 iburst ##use the controller as the NTP server
# service chrony restart
# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* controller 3 6 377 33 -375us[ -422us] +/- 66ms
7. Enable the OpenStack repository and install the OpenStack client on all nodes
# apt -y install software-properties-common
# add-apt-repository cloud-archive:ocata
# apt -y update && apt -y dist-upgrade
# apt -y install python-openstackclient
8. Install and configure the database service (Controller Node)
# apt -y install mariadb-server python-pymysql
# vim /etc/mysql/mariadb.conf.d/99-openstack.cnf
[mysqld]
bind-address = 192.168.30.145
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
# service mysql restart
# mysql_secure_installation
##run this script to secure the database and set a proper password for the root account
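A quick sanity check that the options in 99-openstack.cnf took effect (a sketch; the bare mysql invocation matches how the root account is used elsewhere in these notes):
# ss -lnt | grep 3306    ##MariaDB should be listening on 192.168.30.145:3306
# mysql -e "SHOW VARIABLES LIKE 'character_set_server'; SHOW VARIABLES LIKE 'max_connections';"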
9. Install and configure the RabbitMQ message queue service (Controller Node)
# apt -y install rabbitmq-server
# rabbitmqctl add_user openstack openstack ##add the openstack user and set its password
Creating user "openstack" ...
##grant the openstack user configure, write, and read permissions
# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Setting permissions for user "openstack" in vhost "/" ...
# rabbitmqctl list_users ##list users
Listing users ...
guest   [administrator]
openstack   []
# rabbitmqctl list_user_permissions openstack ##list this user's permissions
Listing permissions for user "openstack" ...
/   .*   .*   .*
# rabbitmqctl status ##show RabbitMQ status information
# rabbitmq-plugins list ##list RabbitMQ plugins
Configured: E = explicitly enabled; e = implicitly enabled
| Status: * = running on rabbit@openstack1
|/
......
# rabbitmq-plugins enable rabbitmq_management ##enable the management plugin
The following plugins have been enabled:
mochiweb
webmachine
rabbitmq_web_dispatch
amqp_client
rabbitmq_management_agent
rabbitmq_management
Applying plugin configuration to rabbit@openstack1... started 6 plugins.
Browse to http://localhost:15672; the default username and password are both guest.
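To confirm that the openstack account can authenticate against the broker, the management plugin's HTTP API can be queried (a sketch; /api/whoami is part of the management plugin enabled above, and the exact JSON fields vary between RabbitMQ versions):
# curl -s -u openstack:openstack http://192.168.30.145:15672/api/whoami   ##expect a small JSON reply naming the openstack user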
10. Install and configure the Memcached caching service [used to cache Identity service tokens] (Controller Node)
# apt -y install memcached python-memcache
# vim /etc/memcached.conf
#-l 127.0.0.1
-l 192.168.30.145
# service memcached restart
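A quick check that memcached is now bound to the management address (a sketch; the second command assumes netcat (nc) is installed):
# ss -lnt | grep 11211                             ##should show 192.168.30.145:11211
# printf 'stats\nquit\n' | nc 192.168.30.145 11211 | head -n 5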
II. Configure the Keystone identity service (Controller Node)
1. Create the keystone database
# mysql
MariaDB [(none)]> CREATE DATABASE keystone; ##create the keystone database
##grant privileges on the keystone database [username@controller ... BY password]
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'192.168.30.145'
IDENTIFIED BY 'keystone';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%'
IDENTIFIED BY 'keystone';
MariaDB [(none)]> flush privileges;
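To confirm the grants work, log in as the keystone account over the network (a sketch; 'keystone' is the password chosen above):
# mysql -h 192.168.30.145 -u keystone -pkeystone -e "SHOW DATABASES;"   ##the keystone database should appear in the list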
2. Install and configure Keystone
# apt -y install keystone
# vim /etc/keystone/keystone.conf
[database]---configure database access [username:password@controller]
connection = mysql+pymysql://keystone:keystone@192.168.30.145/keystone
[token]---configure the Fernet token provider
provider = fernet
# grep ^[a-z] /etc/keystone/keystone.conf
connection = mysql+pymysql://keystone:keystone@192.168.30.145/keystone
provider = fernet
3. Populate the Identity service database
# su -s /bin/sh -c "keystone-manage db_sync" keystone
4. Initialize the Fernet key repositories
# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
5. Bootstrap the Identity service
# keystone-manage bootstrap --bootstrap-password qaz123 \
--bootstrap-admin-url http://192.168.30.145:35357/v3/ \
--bootstrap-internal-url http://192.168.30.145:5000/v3/ \
--bootstrap-public-url http://192.168.30.145:5000/v3/ \
--bootstrap-region-id RegionOne
6. Configure the HTTP server
# vim /etc/apache2/apache2.conf
ServerName controller
# service apache2 restart ##restart the Apache service
# service apache2 status
# rm -f /var/lib/keystone/keystone.db ##remove the default SQLite database
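At this point the Identity API should answer on both ports served by Apache; a quick check (a sketch; python -m json.tool is only used for pretty-printing and can be omitted):
# curl -s http://192.168.30.145:5000/v3 | python -m json.tool
# curl -s http://192.168.30.145:35357/v3 | python -m json.tool   ##both should return the v3 version document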
7. Configure the administrative account
# export OS_USERNAME=admin
# export OS_PASSWORD=qaz123
# export OS_PROJECT_NAME=admin
# export OS_USER_DOMAIN_NAME=Default
# export OS_PROJECT_DOMAIN_NAME=Default
# export OS_AUTH_URL=http://192.168.30.145:35357/v3
# export OS_IDENTITY_API_VERSION=3
8. Create the service project
# openstack project create --domain default \
--description "Service Project" service
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Service Project |
| domain_id | default |
| enabled | True |
| id | 945e37831e74484f8911fb742c925926 |
| is_domain | False |
| name | service |
| parent_id | default |
+-------------+----------------------------------+
9. Create a project and user for regular (non-admin) tasks
a. Create the demo project
# openstack project create --domain default \
--description "Demo Project" demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Demo Project |
| domain_id | default |
| enabled | True |
| id | 2ef20ce389eb499696f2d7497c6009b0 |
| is_domain | False |
| name | demo |
| parent_id | default |
+-------------+----------------------------------+
b. Create the demo user
# openstack user create --domain default \
--password-prompt demo
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 7cfc508fd5d44b468aac218bd4029bae |
| name | demo |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
c. Create the user role
# openstack role create user
+-----------+----------------------------------+
| Field | Value |
+-----------+----------------------------------+
| domain_id | None |
| id | 83b6ab2af4414ad387b2fc9daf575b3a |
| name | user |
+-----------+----------------------------------+
d. Add the user role to the demo project and user
# openstack role add --project demo --user demo user
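An optional check that the assignment was recorded (a sketch; the --names flag prints names instead of IDs):
# openstack role assignment list --user demo --project demo --names   ##should list the user role for demo on the demo project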
10. Disable the temporary admin_token authentication mechanism
# vim /etc/keystone/keystone-paste.ini
##remove admin_token_auth from the "pipeline =" value in each of the following sections
##(edit the existing pipeline line; do not comment out or delete the pipeline itself)
[pipeline:public_api]
[pipeline:admin_api]
[pipeline:api_v3]
11. Unset the OS_AUTH_URL and OS_PASSWORD environment variables
# unset OS_AUTH_URL OS_PASSWORD
12. As the admin user, request an authentication token (enter the admin user's password)
# openstack --os-auth-url http://192.168.30.145:35357/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name admin --os-username admin token issue
Password:
+------------+-----------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------+
| expires | 2017-03-28T15:11:50+0000 |
| id | gAAAAABY2m8mE9pMATPuFW9YpgoBMTg9mCI6GcmFeQAudwbhGiVblXZP |
| | kmSmHc5aFwTZSIdjLzPJaMd1k16UZghj59v45Gvzdh5CLhSFGWPsT8rL |
| | fRJD4eE1D_eRz2Jjjk5rDmwAHm5mmffuszJLSe4B2KJyBXkdmmznXL-A |
| project_id | 2461396f6a344c21a2360a612d4f6abe |
| user_id | 63ca263543fb4b02bb34410e3dc8a801 |
+------------+-----------------------------------------------------------+
13. As the demo user, request an authentication token (enter the demo user's password)
# openstack --os-auth-url http://192.168.30.145:5000/v3 \
--os-project-domain-name default --os-user-domain-name default \
--os-project-name demo --os-username demo token issue
Password:
+------------+-----------------------------------------------------------+
| Field | Value |
+------------+-----------------------------------------------------------+
| expires | 2017-03-28T15:13:50+0000 |
| id | gAAAAABY2m-eSIWmQg1SyZFaiGcP2kjHf742ktr8YcVH3Q4aHKTflDJ |
| | RLAfgmeoDW2z1sbdHQmKQNSb--F-1Pn_hTFHYqgyMlIxYpEQxGhJ-rg |
| | b0EuxUT9opwl0m5onaA5Cv_MBX6awxeity8Gh1dc50NUeYela5Yl4uSG |
| project_id | 2ef20ce389eb499696f2d7497c6009b0 |
| user_id | 7cfc508fd5d44b468aac218bd4029bae |
+------------+-----------------------------------------------------------+
14. Create client environment scripts
a. Create and edit the admin-openrc file with the following content:
# vim admin-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=qaz123
export OS_AUTH_URL=http://192.168.30.145:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
b. Create and edit the demo-openrc file with the following content:
# vim demo-openrc
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=demo
export OS_AUTH_URL=http://192.168.30.145:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
15. Use the scripts
a. Load the script
# . admin-openrc
b. Request an authentication token
# openstack token issue
+------------+----------------------------------------------------------+
| Field | Value |
+------------+----------------------------------------------------------+
| expires | 2017-03-28T15:22:55+0000 |
| id | gAAAAABY2nG_diuPBMl66vJye3mV3S7CWZKesIiSnbicq5XddujfHhc3x|
| | PHni3iHWPcTQAjHoIEMTvSH6yKOQ6Z74QL6hVbshqP1dJrRJ6xEa9WvIk|
| | F7H5j7lPmM7ncfVvr9k96gLJ6Uhz38R5qRnHBWkxrlNsgw1jdnAjxf5e |
| project_id | 2461396f6a344c21a2360a612d4f6abe |
| user_id | 63ca263543fb4b02bb34410e3dc8a801 |
+------------+----------------------------------------------------------+
III. Configure the Glance image service (Controller Node)
1. Create the glance database
# mysql
MariaDB [(none)]> CREATE DATABASE glance; ##create the glance database
##grant privileges on the glance database [username@controller ... BY password]
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'192.168.30.145'
IDENTIFIED BY 'glance';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%'
IDENTIFIED BY 'glance';
MariaDB [(none)]> flush privileges;
2. Source the admin credentials
# . admin-openrc
3. Create the service credentials
a. Create the glance user:
# openstack user create --domain default --password-prompt glance
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 3edeaaae87e14811ac2c6767ab657d6b |
| name | glance |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
b. Add the admin role to the glance user in the service project:
# openstack role add --project service --user glance admin
c. Create the glance service entity:
# openstack service create --name glance \
--description "OpenStack Image" image
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Image |
| enabled | True |
| id | 22a0875ba92c4512989666f116ae1585 |
| name | glance |
| type | image |
+-------------+----------------------------------+
d. Create the Image service API endpoints:
# openstack endpoint create --region RegionOne \
image public http://192.168.30.145:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | ff6d9ed365cf4e7f8cc53d47e57cd46b |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 22a0875ba92c4512989666f116ae1585 |
| service_name | glance |
| service_type | image |
| url | http://192.168.30.145:9292 |
+--------------+----------------------------------+
# openstack endpoint create --region RegionOne \
image internal http://192.168.30.145:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 7408dd72bc1745758cdf23e136ef7392 |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 22a0875ba92c4512989666f116ae1585 |
| service_name | glance |
| service_type | image |
| url | http://192.168.30.145:9292 |
+--------------+----------------------------------+
# openstack endpoint create --region RegionOne \
image admin http://192.168.30.145:9292
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 8ed4e7e1a5834177b4ce1896c21e6cb9 |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 22a0875ba92c4512989666f116ae1585 |
| service_name | glance |
| service_type | image |
| url | http://192.168.30.145:9292 |
+--------------+----------------------------------+
4. Install and configure the Glance components
a. Configure the image API service (glance-api)
# apt -y install glance
# vim /etc/glance/glance-api.conf
[database]---configure database access [username:password@controller]
connection = mysql+pymysql://glance:glance@192.168.30.145/glance
[keystone_authtoken]---configure Identity service access
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
[glance_store]---configure the local file system store and the location of image files
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
# grep ^[a-z] /etc/glance/glance-api.conf
sqlite_db = /var/lib/glance/glance.sqlite
backend = sqlalchemy
connection = mysql+pymysql://glance:glance@192.168.30.145/glance
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images
disk_formats = ami,ari,aki,vhd,vhdx,vmdk,raw,qcow2,vdi,iso,ploop.root-tar
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
flavor = keystone
b. Configure the image registry service (glance-registry)
# vim /etc/glance/glance-registry.conf
[database]---configure database access [username:password@controller]
connection = mysql+pymysql://glance:glance@192.168.30.145/glance
[keystone_authtoken]---configure Identity service access
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
[paste_deploy]
flavor = keystone
# grep ^[a-z] /etc/glance/glance-registry.conf
sqlite_db = /var/lib/glance/glance.sqlite
backend = sqlalchemy
connection = mysql+pymysql://glance:glance@192.168.30.145/glance
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = glance
password = glance
flavor = keystone
5. Populate the Image service database
# su -s /bin/sh -c "glance-manage db_sync" glance
6. Restart the services
# service glance-registry restart
# service glance-api restart
# service glance-registry status
# service glance-api status
7. Verify operation
Verify the Image service using CirrOS.
CirrOS is a small Linux image that can be used to test an OpenStack deployment.
a. Source the admin credentials
# . admin-openrc
b. Download the source image
# wget http://download.cirros-cloud.net/0.3.5/cirros-0.3.5-x86_64-disk.img
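Before uploading, the image can be inspected locally (a sketch; assumes the qemu-utils package, which provides qemu-img, is installed):
# qemu-img info cirros-0.3.5-x86_64-disk.img   ##file format should be qcow2, disk size roughly 13 MB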
c. Upload the image to the Image service using the QCOW2 disk format and the bare container format, and make it publicly visible
# openstack image create "cirros" \
--file cirros-0.3.5-x86_64-disk.img \
--disk-format qcow2 --container-format bare \
--public
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | f8ab98ff5e73ebab884d80c9dc9c7290 |
| container_format | bare |
| created_at | 2017-03-29T05:57:56Z |
| disk_format | qcow2 |
| file | /v2/images/4b6ebd57-80ab-4b79-8ecc-53a026f3e898/file |
| id | 4b6ebd57-80ab-4b79-8ecc-53a026f3e898 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 2461396f6a344c21a2360a612d4f6abe |
| protected | False |
| schema | /v2/schemas/image |
| size | 13267968 |
| status | active |
| tags | |
| updated_at | 2017-03-29T05:57:56Z |
| virtual_size | None |
| visibility | public |
+------------------+------------------------------------------------------+
d. Confirm the upload and verify the image attributes
# openstack image list
+--------------------------------------+--------+--------+
| ID | Name | Status |
+--------------------------------------+--------+--------+
| 4b6ebd57-80ab-4b79-8ecc-53a026f3e898 | cirros | active |
+--------------------------------------+--------+--------+
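Because glance-api was configured with the file store, the uploaded image should also be visible on disk (a sketch):
# ls -lh /var/lib/glance/images/   ##one file named after the image ID, about 13 MB
# openstack image show cirros      ##full attribute listing for the image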
V. Configure the Neutron networking service [required on all nodes]
1. Create the neutron database
# mysql
MariaDB [(none)]> CREATE DATABASE neutron; ##create the neutron database
##grant privileges on the neutron database [username@controller ... BY password]
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'192.168.30.145' \
IDENTIFIED BY 'neutron';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' \
IDENTIFIED BY 'neutron';
MariaDB [(none)]> flush privileges;
2. Source the admin credentials
# . admin-openrc
3. Create the service credentials
a. Create the neutron user
# openstack user create --domain default --password-prompt neutron
User Password:
Repeat User Password:
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | default |
| enabled | True |
| id | 54cd9e72295c411090ea9f641cb02135 |
| name | neutron |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+
b. Add the admin role to the neutron user
# openstack role add --project service --user neutron admin
c. Create the neutron service entity
# openstack service create --name neutron \
--description "OpenStack Networking" network
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | OpenStack Networking |
| enabled | True |
| id | 720687745d354718862255a56d7aea46 |
| name | neutron |
| type | network |
+-------------+----------------------------------+
d. Create the Networking service API endpoints
# openstack endpoint create --region RegionOne \
network public http://192.168.30.145:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | a9b1b5b8fbb842a8b14a9cecca7a58a8 |
| interface | public |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 720687745d354718862255a56d7aea46 |
| service_name | neutron |
| service_type | network |
| url | http://192.168.30.145:9696 |
+--------------+----------------------------------+
# openstack endpoint create --region RegionOne \
network internal http://192.168.30.145:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 61e2c14b0c8f4003a7099012e9a6331f |
| interface | internal |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 720687745d354718862255a56d7aea46 |
| service_name | neutron |
| service_type | network |
| url | http://192.168.30.145:9696 |
+--------------+----------------------------------+
# openstack endpoint create --region RegionOne \
network admin http://192.168.30.145:9696
+--------------+----------------------------------+
| Field | Value |
+--------------+----------------------------------+
| enabled | True |
| id | 6719539759c34487bd519c0dffb5509d |
| interface | admin |
| region | RegionOne |
| region_id | RegionOne |
| service_id | 720687745d354718862255a56d7aea46 |
| service_name | neutron |
| service_type | network |
| url | http://192.168.30.145:9696 |
+--------------+----------------------------------+
4. Configure networking option 2: self-service networks
a. Install the components
# apt -y install neutron-server neutron-plugin-ml2 \
neutron-linuxbridge-agent neutron-l3-agent neutron-dhcp-agent \
neutron-metadata-agent
b. Configure the Neutron server component
# vim /etc/neutron/neutron.conf
[database]----configure database access [username:password@controller]
#connection = sqlite:////var/lib/neutron/neutron.sqlite
connection = mysql+pymysql://neutron:neutron@192.168.30.145/neutron
[DEFAULT]----enable the ML2 plug-in, the router service, and overlapping IP addresses
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
[DEFAULT]----configure RabbitMQ message queue access [username:password@controller]
transport_url = rabbit://openstack:openstack@192.168.30.145
[DEFAULT]----configure Identity service access
auth_strategy = keystone
[keystone_authtoken]----configure Identity service access
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
[DEFAULT]----configure Networking to notify Compute of network topology changes
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
[nova]----configure Networking to notify Compute of network topology changes
auth_url = http://192.168.30.145:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = nova
# grep ^[a-z] /etc/neutron/neutron.conf
auth_strategy = keystone
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true
transport_url = rabbit://openstack:openstack@192.168.30.145
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
connection = mysql+pymysql://neutron:neutron@192.168.30.145/neutron
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
region_name = RegionOne
auth_url = http://192.168.30.145:35357
auth_type = password
password = nova
project_domain_name = default
project_name = service
user_domain_name = default
username = nova
c. Configure the Modular Layer 2 (ML2) plug-in
The ML2 plug-in uses the Linux bridge mechanism to build layer-2 virtual networking infrastructure for instances.
# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]----enable flat, VLAN, and VXLAN networks
type_drivers = flat,vlan,vxlan
[ml2]----enable VXLAN self-service networks
tenant_network_types = vxlan
[ml2]----enable the Linux bridge and layer-2 population mechanisms
mechanism_drivers = linuxbridge,l2population
[ml2]----enable the port security extension driver
extension_drivers = port_security
[ml2_type_flat]----configure the provider virtual network as a flat network
flat_networks = provider
[ml2_type_vxlan]----configure the VXLAN network identifier range for self-service networks
vni_ranges = 1:1000
[securitygroup]----enable ipset to improve the efficiency of security group rules
enable_ipset = true
# grep ^[a-z] /etc/neutron/plugins/ml2/ml2_conf.ini
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
flat_networks = provider
vni_ranges = 1:1000
enable_ipset = true
Note: the Linux bridge agent only supports VXLAN overlay networks.
d. Configure the Linux bridge agent
The Linux bridge agent builds layer-2 virtual networking for instances and handles security group rules.
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]----map the provider virtual network to the provider physical network interface
physical_interface_mappings = provider:ens33
[vxlan]----enable VXLAN overlay networks, set the IP address of the physical interface that handles overlay traffic, and enable layer-2 population
enable_vxlan = true
local_ip = 192.168.30.145
l2_population = true
[securitygroup]----enable security groups and configure the firewall driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# grep ^[a-z] /etc/neutron/plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = provider:ens33
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
enable_vxlan = true
local_ip = 192.168.30.145
l2_population = true
e. Configure the layer-3 agent
The layer-3 agent provides routing and NAT services for self-service virtual networks.
# vim /etc/neutron/l3_agent.ini
[DEFAULT]----configure the Linux bridge interface driver and the external network bridge
interface_driver = linuxbridge
# grep ^[a-z] /etc/neutron/l3_agent.ini
interface_driver = linuxbridge
f. Configure the DHCP agent
The DHCP agent provides DHCP services for virtual networks.
# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]----configure the Linux bridge interface driver and the Dnsmasq DHCP driver, and enable isolated metadata
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
# grep ^[a-z] /etc/neutron/dhcp_agent.ini
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
g. Configure the metadata agent----it provides configuration information to instances
# vim /etc/neutron/metadata_agent.ini
[DEFAULT]----configure the metadata host and the shared secret
nova_metadata_ip = 192.168.30.145
metadata_proxy_shared_secret = qaz123
# grep ^[a-z] /etc/neutron/metadata_agent.ini
nova_metadata_ip = 192.168.30.145
metadata_proxy_shared_secret = qaz123
5. On the controller node, configure the Compute service to use Networking
# vim /etc/nova/nova.conf
[neutron]----configure access parameters, enable the metadata proxy, and set the shared secret
url = http://192.168.30.145:9696
auth_url = http://192.168.30.145:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
service_metadata_proxy = true
metadata_proxy_shared_secret = qaz123
# grep ^[a-z] /etc/nova/nova.conf
6. Finalize the installation
a. Populate the database
# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
--config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
......
OK
Note: database population occurs after Networking is configured because the script requires the completed server and plug-in configuration files. A quick way to spot-check the result follows.
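A spot-check that the migration created the Neutron schema (a sketch; 'neutron' is the database password chosen above):
# mysql -h 192.168.30.145 -u neutron -pneutron neutron -e "SHOW TABLES;" | head   ##should list tables such as agents, networks, ports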
b. Restart the Compute API service
# service nova-api restart
c. Restart the Networking services
For both networking options:
# service neutron-server restart
# service neutron-linuxbridge-agent restart
# service neutron-dhcp-agent restart
# service neutron-metadata-agent restart
For networking option 2, also restart the layer-3 service:
# service neutron-l3-agent restart
d. Confirm that the services are running
# service nova-api status
# service neutron-server status
# service neutron-linuxbridge-agent status
# service neutron-dhcp-agent status
# service neutron-metadata-agent status
# service neutron-l3-agent status
7. Configure the Neutron networking service on the Compute Node
# apt -y install neutron-linuxbridge-agent
# vim /etc/neutron/neutron.conf
[database]----the compute node does not access the database directly (leave connection commented out)
#connection = sqlite:////var/lib/neutron/neutron.sqlite
[DEFAULT]----configure RabbitMQ message queue access [username:password@controller]
transport_url = rabbit://openstack:openstack@192.168.30.145
[DEFAULT]----configure Identity service access
auth_strategy = keystone
[keystone_authtoken]----configure Identity service access
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
# grep ^[a-z] /etc/neutron/neutron.conf
auth_strategy = keystone
core_plugin = ml2
transport_url = rabbit://openstack:openstack@192.168.30.145
root_helper = sudo /usr/bin/neutron-rootwrap /etc/neutron/rootwrap.conf
auth_uri = http://192.168.30.145:5000
auth_url = http://192.168.30.145:35357
memcached_servers = 192.168.30.145:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = neutron
8. Configure the Compute service on the compute node to use Networking
# vim /etc/nova/nova.conf
[neutron]----configure access parameters
url = http://192.168.30.145:9696
auth_url = http://192.168.30.145:35357
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = neutron
password = neutron
# grep ^[a-z] /etc/nova/nova.conf
9. Finalize the installation
a. Restart the Compute service:
# service nova-compute restart
# service nova-compute status
b. Restart the Linux bridge agent:
# service neutron-linuxbridge-agent restart
# service neutron-linuxbridge-agent status
10. Configure networking option 2 on the compute node
Configure the Linux bridge agent----it builds layer-2 virtual networking for instances and handles security group rules. A bridge-filter check sketch follows the configuration below.
# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]----map the provider virtual network to the provider physical network interface
physical_interface_mappings = provider:ens33
[vxlan]----enable VXLAN overlay networks, set the IP address of the physical interface that handles overlay traffic, and enable layer-2 population
enable_vxlan = true
local_ip = 192.168.30.146
l2_population = true
[securitygroup]----enable security groups and configure the firewall_driver
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
# grep ^[a-z] /etc/neutron/plugins/ml2/linuxbridge_agent.ini
physical_interface_mappings = provider:ens33
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
enable_security_group = true
enable_vxlan = true
local_ip = 192.168.30.146
l2_population = true
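The Linux bridge agent relies on the bridge netfilter hooks; a check sketch (worth running on both nodes; br_netfilter is the module name on Ubuntu 16.04 kernels):
# modprobe br_netfilter
# sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables   ##both values should be 1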
11. Verify operation on the controller node
a. Source the admin credentials
# . admin-openrc
b. List the loaded extensions to verify that the neutron-server process started successfully
# openstack extension list --network
+----------------------+----------------------+--------------------------+
| Name | Alias | Description |
+----------------------+----------------------+--------------------------+
| Default Subnetpools | default-subnetpools | Provides ability to mark |
| | | and use a subnetpool as |
| | | the default |
| Network IP | network-ip- | Provides IP availability |
| Availability | availability | data for each network |
| | | and subnet. |
| Network Availability |network_availability_z| Availability zone |
| Zone | one | support for network. |
| Auto Allocated | auto-allocated- | Auto Allocated Topology |
| Topology Services | topology | Services. |
| Neutron L3 | ext-gw-mode | Extension of the router |
| Configurable external| | abstraction for |
| gateway mode | | specifying whether SNAT |
| | | should occur on the |
| | | external gateway |
| Port Binding | binding | Expose port bindings of |
| | | a virtual port to |
| | | external application |
| agent | agent | The agent management |
| | | extension. |
| Subnet Allocation | subnet_allocation | Enables allocation of |
| | | subnets from a subnet |
| | | pool |
| L3 Agent Scheduler | l3_agent_scheduler | Schedule routers among |
| | | l3 agents |
| Tag support | tag | Enables to set tag on |
| | | resources. |
| Neutron external | external-net | Adds external network |
| network | | attribute to network |
| | | resource. |
| Neutron Service | flavors | Flavor specification for |
| Flavors | | Neutron advanced |
| | | services |
| Network MTU | net-mtu | Provides MTU attribute |
| | | for a network resource. |
| Availability Zone | availability_zone | The availability zone |
| | | extension. |
| Quota management | quotas | Expose functions for |
| support | | quotas management per |
| | | tenant |
| HA Router extension | l3-ha | Add HA capability to |
| | | routers. |
| Provider Network | provider | Expose mapping of |
| | | virtual networks to |
| | | physical networks |
|Multi Provider Network| multi-provider | Expose mapping of |
| | | virtual networks to |
| | | multiple physical |
| | | networks |
| Address scope | address-scope | Address scopes |
| | | extension. |
| Neutron Extra Route | extraroute | Extra routes |
| | | configuration for L3 |
| | | router |
| Subnet service types | subnet-service-types | Provides ability to set |
| | | the subnet service_types |
| | | field |
| Resource timestamps | standard-attr- | Adds created_at and |
| | timestamp | updated_at fields to all |
| | | Neutron resources that |
| | | have Neutron standard |
| | | attributes. |
| Neutron Service Type | service-type | API for retrieving |
| Management | | service providers for |
| | | Neutron advanced |
| | | services |
| Router Flavor | l3-flavors | Flavor support for |
| Extension | | routers. |
| Port Security | port-security | Provides port security |
| Neutron Extra DHCP | extra_dhcp_opt | Extra options |
| opts | | configuration for DHCP. |
| | | For example PXE boot |
| | | options to DHCP clients |
| | | can be specified (e.g. |
| | | tftp-server, server-ip- |
| | | address, bootfile-name) |
| Resource revision | standard-attr- | This extension will |
| numbers | revisions | display the revision |
| | | number of neutron |
| | | resources. |
| Pagination support | pagination | Extension that indicates |
| | | that pagination is |
| | | enabled. |
| Sorting support | sorting | Extension that indicates |
| | | that sorting is enabled. |
| security-group | security-group | The security groups |
| | | extension. |
| DHCP Agent Scheduler | dhcp_agent_scheduler | Schedule networks among |
| | | dhcp agents |
| Router Availability |router_availability_zo| Availability zone |
| Zone | ne | support for router. |
| RBAC Policies | rbac-policies | Allows creation and |
| | | modification of policies |
| | | that control tenant |
| | | access to resources. |
| Tag support for | tag-ext | Extends tag support to |
| resources: subnet, | | more L2 and L3 |
| subnetpool, port, | | resources. |
| router | | |
| standard-attr- | standard-attr- | Extension to add |
| description | description | descriptions to standard |
| | | attributes |
| Neutron L3 Router | router | Router abstraction for |
| | | basic L3 forwarding |
| | | between L2 Neutron |
| | | networks and access to |
| | | external networks via a |
| | | NAT gateway. |
| Allowed Address Pairs| allowed-address-pairs| Provides allowed address |
| | | pairs |
| project_id field | project-id | Extension that indicates |
| enabled | | that project_id field is |
| | | enabled. |
| Distributed Virtual | dvr | Enables configuration of |
| Router | | Distributed Virtual |
| | | Routers. |
+----------------------+----------------------+--------------------------+
c. List the agents to verify that the neutron agents launched successfully
# neutron agent-list
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| id                                   | agent_type         | host       | availability_zone | alive | admin_state_up | binary                    |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
| 23601054-312a-497c-b728-4b791ce76e64 | L3 agent           | controller | nova              | :-)   | True           | neutron-l3-agent          |
| 9a7546d9-73ec-47e0-ab23-ca2a5366660f | Linux bridge agent | controller |                   | :-)   | True           | neutron-linuxbridge-agent |
| acd42d89-1af4-413f-be77-3172d38a805d | Metadata agent     | controller |                   | :-)   | True           | neutron-metadata-agent    |
| b438ae93-aaf3-41f0-a7b7-d1502a1986c9 | DHCP agent         | controller | nova              | :-)   | True           | neutron-dhcp-agent        |
| e1d32b6b-07c6-468b-965d-ce9dfd09b338 | Linux bridge agent | compute    |                   | :-)   | True           | neutron-linuxbridge-agent |
+--------------------------------------+--------------------+------------+-------------------+-------+----------------+---------------------------+
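The same information is available through the unified client, which replaces the deprecated neutron CLI (a sketch):
# openstack network agent list   ##all five agents should be listed as alive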