OpenStack Ocata on CentOS 7
Environment:
CentOS 7.3
OpenStack Ocata
I. Basic environment configuration
1. Hosts file (/etc/hosts)
192.168.130.101 controller
192.168.130.111 block1
192.168.130.201 compute1
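Apply the table on every node, e.g.:
cat >>/etc/hosts <<HERE
192.168.130.101 controller
192.168.130.111 block1
192.168.130.201 compute1
HERE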
Tip: production environments usually bond NICs; a sketch follows this tip.
The text UI nmtui (from the NetworkManager-tui package; requires the NetworkManager service) makes it easy to generate NIC configuration files. You can use it to produce a standard network template, then simply adapt that template when adding more compute nodes.
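A minimal active-backup bond sketch for CentOS 7 (the interface name ens33 and the address are illustrative assumptions):
# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
TYPE=Bond
BONDING_MASTER=yes
BONDING_OPTS="mode=active-backup miimon=100"
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.168.130.101
PREFIX=24
# /etc/sysconfig/network-scripts/ifcfg-ens33 (one slave; repeat per member NIC)
DEVICE=ens33
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
ONBOOT=yes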
2.NTP
controller
yum -y install chrony
sed -i '/server 0.centos.pool.ntp.org iburst/i server time.nist.gov iburst' /etc/chrony.conf
sed -i '/.centos.pool.ntp.org iburst/d' /etc/chrony.conf
sed -i '/#allow 192.168/c allow 192.168.130.0/24' /etc/chrony.conf
systemctl enable chronyd.service
systemctl restart chronyd.service
chronyc sources
other nodes (compute1, block1)
yum -y install chrony
sed -i '/server 0.centos.pool.ntp.org iburst/i server controller iburst' /etc/chrony.conf
sed -i '/.centos.pool.ntp.org iburst/d' /etc/chrony.conf
systemctl enable chronyd.service
systemctl restart chronyd.service
chronyc sources
3. OpenStack client packages (all nodes)
cat >/etc/yum.repos.d/extras.repo <<'HERE'
[extras]
name=CentOS-$releasever - extras
baseurl=http://mirror.centos.org/centos/$releasever/extras/$basearch/
gpgcheck=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-7
enabled=1
HERE
yum -y install centos-release-openstack-ocata
yum -y install python-openstackclient openstack-selinux
Tip: you can also build a local yum repository; Ocata is used as the example. See the createrepo sketch after the package list below.
mkdir openstack-ocata
yum -y install centos-release-openstack-ocata
yum -y install yum-utils
yumdownloader --destdir openstack-ocata chrony python-openstackclient \
  openstack-selinux mariadb mariadb-server python2-PyMySQL \
  rabbitmq-server memcached python-memcached openstack-keystone httpd \
  mod_wsgi openstack-glance openstack-nova-api \
  openstack-nova-conductor openstack-nova-console \
  openstack-nova-novncproxy openstack-nova-scheduler \
  openstack-nova-placement-api openstack-nova-compute \
  openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables ipset openstack-dashboard \
  openstack-cinder lvm2 targetcli python-keystone
or:
yum -y --downloadonly --downloaddir=openstack-ocata install \
...
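One way to turn the downloaded RPMs into a usable local repository (the createrepo package and the /opt/openstack-ocata path are assumptions; adjust to where you keep the directory):
yum -y install createrepo
createrepo /opt/openstack-ocata
cat >/etc/yum.repos.d/openstack-local.repo <<'HERE'
[openstack-local]
name=Local OpenStack Ocata
baseurl=file:///opt/openstack-ocata
gpgcheck=0
enabled=1
HERE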
4. SQL database (can run on a dedicated node)
Tip: lab resources are limited here, so it is installed directly on the controller.
yum -y install mariadb mariadb-server
yum -y install python2-PyMySQL
cat >/etc/my.cnf.d/openstack.cnf <<HERE
[mysqld]
bind-address = 192.168.130.101
default-storage-engine = innodb
innodb_file_per_table
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8
HERE
systemctl enable mariadb.service
systemctl start mariadb.service
mysql_secure_installation
5. Message queue (can run on a dedicated node)
Tip: lab resources are limited here, so it is installed directly on the controller.
yum -y install rabbitmq-server
systemctl enable rabbitmq-server.service
systemctl start rabbitmq-server.service
Add the openstack user and grant it permissions:
rabbitmqctl add_user openstack RABBIT_PASS
rabbitmqctl set_permissions openstack ".*" ".*" ".*"
Enable the management plugin for the web UI at http://controller:15672 (default credentials guest/guest):
rabbitmq-plugins enable rabbitmq_management
systemctl restart rabbitmq-server.service
Tip: raising the process/file limits can increase throughput.
/etc/security/limits.conf
*    soft    nproc     65536
*    hard    nproc     65536
*    soft    nofile    65536
*    hard    nofile    65536
/usr/lib/systemd/system/rabbitmq-server.service
[Service]
LimitNOFILE=655360
On Ubuntu, edit instead:
/etc/default/rabbitmq-server
ulimit -S -n 655360
6. Memcached (can run on a dedicated node)
Tip: lab resources are limited here, so it is installed directly on the controller.
yum -y install memcached python-memcached
sed -i '/OPTIONS=/c OPTIONS="-l 192.168.130.101"' /etc/sysconfig/memcached
systemctl enable memcached.service
systemctl start memcached.service
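A quick sanity check (memcached-tool ships with the memcached package):
memcached-tool 192.168.130.101:11211 stats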
II. Identity service (controller node)
1. Create the database
mysql -u root -proot
CREATE DATABASE keystone;
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost'
IDENTIFIED BY 'KEYSTONE_DBPASS';
GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%'
IDENTIFIED BY 'KEYSTONE_DBPASS';
FLUSH PRIVILEGES;
2. Install the Identity components
yum -y install openstack-keystone httpd mod_wsgi
3. Configure Keystone
mv /etc/keystone/keystone.conf{,.default}
cat >/etc/keystone/keystone.conf <<HERE
[DEFAULT]
[assignment]
[auth]
[cache]
[catalog]
[cors]
[cors.subdomain]
[credential]
[database]
connection = mysql+pymysql://keystone:KEYSTONE_DBPASS@controller/keystone
[domain_config]
[endpoint_filter]
[endpoint_policy]
[eventlet_server]
[federation]
[fernet_tokens]
[healthcheck]
[identity]
[identity_mapping]
[kvs]
[ldap]
[matchmaker_redis]
[memcache]
[oauth1]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
[policy]
[profiler]
[resource]
[revoke]
[role]
[saml]
[security_compliance]
[shadow_users]
[signing]
[token]
provider = fernet
[tokenless_auth]
[trust]
HERE
4. Initialize Keystone
su -s /bin/sh -c "keystone-manage db_sync" keystone
keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
keystone-manage bootstrap --bootstrap-password ADMIN_PASS \
  --bootstrap-admin-url http://controller:35357/v3/ \
  --bootstrap-internal-url http://controller:35357/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
5. Configure Apache
sed -i '/^#ServerName/c ServerName controller' /etc/httpd/conf/httpd.conf
ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/
systemctl enable httpd.service
systemctl start httpd.service
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
6. Create domains, projects, users, and roles
openstack project create --domain default --description "Service Project" service
openstack project create --domain default --description "Demo Project" demo
openstack user create --domain default --password DEMO_PASS demo
openstack role create user
openstack role add --project demo --user demo user
7. Verify the Identity service
Request a token as admin:
openstack --os-auth-url http://controller:35357/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name admin --os-username admin token issue
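You can verify the demo user the same way against port 5000 (enter DEMO_PASS when prompted):
openstack --os-auth-url http://controller:5000/v3 \
  --os-project-domain-name Default --os-user-domain-name Default \
  --os-project-name demo --os-username demo token issue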
8. OpenStack client rc environment files
cat >admin-openrc <<HERE
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=admin
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_AUTH_URL=http://controller:35357/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
HERE
cat >demo-openrc <<HERE
export OS_PROJECT_DOMAIN_NAME=Default
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_NAME=demo
export OS_USERNAME=demo
export OS_PASSWORD=DEMO_PASS
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2
HERE
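To use them, source a file and then run any client command, e.g.:
source admin-openrc
openstack token issue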
III. Glance
1. Create the database
mysql -u root -proot
CREATE DATABASE glance;
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost'
IDENTIFIED BY 'GLANCE_DBPASS';
GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY
'GLANCE_DBPASS';
FLUSH PRIVILEGES;
2. Create credentials
source admin-openrc
openstack user create --domain default --password GLANCE_PASS glance
openstack role add --project service --user glance admin
3. Create the service entity
openstack service create --name glance --description "OpenStack Image" image
4. Create the API endpoints
openstack endpoint create --region RegionOne image public http://controller:9292
openstack endpoint create --region RegionOne image internal http://controller:9292
openstack endpoint create --region RegionOne image admin http://controller:9292
5. Install Glance
https://docs.openstack.org/ocata/install-guide-rdo/glance-install.html
yum -y install openstack-glance
6. Configure Glance
mv /etc/glance/glance-api.conf{,.default}
cat >/etc/glance/glance-api.conf <<HERE
[DEFAULT]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
[image_format]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[matchmaker_redis]
[oslo_concurrency]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
[store_type_location_strategy]
[task]
[taskflow_executor]
HERE
mv /etc/glance/glance-registry.conf{,.default}
cat >/etc/glance/glance-registry.conf <<HERE
[DEFAULT]
[database]
connection = mysql+pymysql://glance:GLANCE_DBPASS@controller/glance
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = GLANCE_PASS
[matchmaker_redis]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_policy]
[paste_deploy]
flavor = keystone
[profiler]
HERE
7. Sync the Glance database
su -s /bin/sh -c "glance-manage db_sync" glance
8. Start the services
systemctl enable openstack-glance-api.service openstack-glance-registry.service
systemctl start openstack-glance-api.service openstack-glance-registry.service
9. Verify the Glance service
source admin-openrc
openstack image create "cirros" --file cirros-0.3.5-x86_64-disk.img \
  --disk-format qcow2 --container-format bare --public --protected
openstack image list
Image building and customization
CentOS 7
virt-install --virt-type kvm --name ct7-cloud --ram 1024 \
  --disk /var/lib/libvirt/images/ct7-cloud.img,format=qcow2 \
  --network network=default --graphics vnc,listen=0.0.0.0 \
  --noautoconsole --os-type=linux --os-variant=centos7.0 \
  --location="http://192.168.8.254/ftp/centos7" \
  --extra-args="ks=http://192.168.8.254/ks/centos7-minimal.cfg noipv6"
Or import an existing VM image directly:
virt-install --name ct7-cloud --vcpus 2 --memory 2048 --disk ct7-cloud.img --import
virsh console ct7-cloud
yum -y install acpid cloud-init cloud-utils-growpart
systemctl enable acpid
echo "NOZEROCONF=yes" > /etc/sysconfig/network
grubby --update-kernel=ALL --remove-args="rhgb quiet"
grubby --update-kernel=ALL --args="console=tty0
console=ttyS0,115200n8"
grub2-mkconfig -o /boot/grub2/grub.cfg
poweroff
On the host, run virt-sysprep to depersonalize the image, then virt-sparsify to shrink it:
yum -y install libguestfs-tools
echo root >/tmp/rootpw
virt-sysprep -a /var/lib/libvirt/images/ct7-cloud.img --root-password file:/tmp/rootpw
virt-sparsify --compress /var/lib/libvirt/images/ct7-cloud.img ct7-cloud.qcow2
root@router:images# virt-sysprep -a /var/lib/libvirt/images/centos.qcow2 --root-password file:/tmp/rootpw
[   0.0] Examining the guest ...
[   4.1] Performing "abrt-data" ...
[   4.1] Performing "bash-history" ...
[   4.1] Performing "blkid-tab" ...
[   4.1] Performing "crash-data" ...
[   4.1] Performing "cron-spool" ...
[   4.1] Performing "dhcp-client-state" ...
[   4.1] Performing "dhcp-server-state" ...
[   4.1] Performing "dovecot-data" ...
[   4.1] Performing "logfiles" ...
[   4.2] Performing "machine-id" ...
[   4.2] Performing "mail-spool" ...
[   4.2] Performing "net-hostname" ...
[   4.2] Performing "net-hwaddr" ...
[   4.2] Performing "pacct-log" ...
[   4.2] Performing "package-manager-cache" ...
[   4.2] Performing "pam-data" ...
[   4.2] Performing "puppet-data-log" ...
[   4.2] Performing "rh-subscription-manager" ...
[   4.2] Performing "rhn-systemid" ...
[   4.2] Performing "rpm-db" ...
[   4.2] Performing "samba-db-log" ...
[   4.2] Performing "script" ...
[   4.2] Performing "smolt-uuid" ...
[   4.2] Performing "ssh-hostkeys" ...
[   4.2] Performing "ssh-userdir" ...
[   4.2] Performing "sssd-db-log" ...
[   4.2] Performing "tmp-files" ...
[   4.2] Performing "udev-persistent-net" ...
[   4.2] Performing "utmp" ...
[   4.2] Performing "yum-uuid" ...
[   4.2] Performing "customize" ...
[   4.2] Setting a random seed
[   4.3] Performing "lvm-uuids" ...
root@router:images# virt-sparsify --compress /var/lib/libvirt/images/centos.qcow2 ct7-cloud.qcow2
[   0.2] Create overlay file in /tmp to protect source disk
[   0.3] Examine source disk
[   2.4] Fill free space in /dev/cl/root with zero
 100% [===========================================] 00:00
[  82.9] Clearing Linux swap on /dev/cl/swap
[  84.6] Fill free space in /dev/sda1 with zero
 100% [===========================================] 00:00
[  91.4] Copy to destination and make sparse
[ 243.9] Sparsify operation completed with no errors.
virt-sparsify: Before deleting the old disk, carefully check that the
target disk boots and works correctly.
root@router:images# ls -lh ct7-cloud.*
-rw-r--r-- 1 qemu qemu 1.4G May 20 22:16 ct7-cloud.img
-rw-r--r-- 1 root root 474M May 20 22:21 ct7-cloud.qcow2
After compression the CentOS 7 image is roughly one third of its original size.
Addendum: building images with oz
https://github.com/rcbops/oz-image-build
yum -y install oz
sed -i '/image_type = raw/s/raw/qcow2/' /etc/oz/oz.cfg
oz-install -p -u -d3 centos7.3.tdl
Modifying an image with guestfs:
guestmount -a ct7-cloud.qcow2 -i --rw /mnt/cloud
Then make any further changes under the mount point (unmount with guestunmount /mnt/cloud when done).
Ubuntu 16.04
apt-get install cloud-init
dpkg-reconfigure cloud-init
virt-sysprep -d ubuntu16.04
IV. Nova
A. Prepare the environment
1. Create the databases
mysql -u root -proot
CREATE DATABASE nova_api;
CREATE DATABASE nova;
CREATE DATABASE nova_cell0;
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost'
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY
'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost'
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY
'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost'
IDENTIFIED BY 'NOVA_DBPASS';
GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED
BY 'NOVA_DBPASS';
FLUSH PRIVILEGES;
Perform the following with admin credentials:
source admin-openrc
2. Create the nova user
openstack user create --domain default --password NOVA_PASS nova
openstack role add --project service --user nova admin
3. Create the compute service entity
openstack service create --name nova --description "OpenStack Compute" compute
4. Create the compute API endpoints
openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1/%(tenant_id)s
openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1/%(tenant_id)s
openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1/%(tenant_id)s
5. Create the placement user
openstack user create --domain default --password PLACEMENT_PASS placement
openstack role add --project service --user placement admin
6. Create the placement service entity
openstack service create --name placement --description "Placement API" placement
7. Create the placement API endpoints
openstack endpoint create --region RegionOne placement public http://controller:8778
openstack endpoint create --region RegionOne placement internal http://controller:8778
openstack endpoint create --region RegionOne placement admin http://controller:8778
Note: following the official document and serving the Placement API under /placement on port 80 causes it to fail to start (API calls log 404 errors for /placement); the workaround is to use port 8778 instead.
Use openstack endpoint list to find any stale endpoints and openstack endpoint delete <ID> to remove them.
B. Install and configure Nova
controller
1. Install
yum -y install openstack-nova-api openstack-nova-conductor \
  openstack-nova-console openstack-nova-novncproxy \
  openstack-nova-scheduler openstack-nova-placement-api
2. Configure
mv /etc/nova/nova.conf{,.default}
cat >/etc/nova/nova.conf <<'HERE'
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.130.101
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
[api_database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova_api
[barbican]
[cache]
[cells]
[cinder]
[cloudpipe]
[conductor]
[console]
[consoleauth]
[cors]
[cors.subdomain]
[crypto]
[database]
connection = mysql+pymysql://nova:NOVA_DBPASS@controller/nova
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[guestfs]
[healthcheck]
[hyperv]
[image_file_url]
[ironic]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
[placement_database]
[quota]
[rdp]
[remote_debug]
[scheduler]
discover_hosts_in_cells_interval = 300
[serial_console]
[service_user]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
vncserver_listen = $my_ip
vncserver_proxyclient_address = $my_ip
[workarounds]
[wsgi]
[xenserver]
[xvp]
HERE
In /etc/httpd/conf.d/00-nova-placement-api.conf, add the following block inside the VirtualHost section:
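The original notes omit the block itself; per the Ocata placement install notes, the usual addition grants httpd access to the API scripts (a sketch; adjust if your packaging differs):
<Directory /usr/bin>
   <IfVersion >= 2.4>
      Require all granted
   </IfVersion>
   <IfVersion < 2.4>
      Order allow,deny
      Allow from all
   </IfVersion>
</Directory>
Restart httpd afterwards: systemctl restart httpd.service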
3. Sync the databases
https://docs.openstack.org/developer/nova/cells.html#step-by-step-for-common-use-cases
su -s /bin/sh -c "nova-manage api_db sync" nova
su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova
su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova
su -s /bin/sh -c "nova-manage db sync" nova
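To confirm that cell0 and cell1 are registered (command from the official cells v2 guide):
nova-manage cell_v2 list_cells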
4. Start the Nova services
systemctl enable openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
systemctl start openstack-nova-api.service \
  openstack-nova-consoleauth.service openstack-nova-scheduler.service \
  openstack-nova-conductor.service openstack-nova-novncproxy.service
compute
1. Install the Nova compute package
yum -y install openstack-nova-compute
2. Configure nova-compute
mv /etc/nova/nova.conf{,.default}
cat >/etc/nova/nova.conf <<'HERE'
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:RABBIT_PASS@controller
my_ip = 192.168.130.201
use_neutron = True
firewall_driver = nova.virt.firewall.NoopFirewallDriver
[api]
auth_strategy = keystone
[api_database]
[barbican]
[cache]
[cells]
[cinder]
[cloudpipe]
[conductor]
[console]
[consoleauth]
[cors]
[cors.subdomain]
[crypto]
[database]
[ephemeral_storage_encryption]
[filter_scheduler]
[glance]
api_servers = http://controller:9292
[oslo_concurrency]
lock_path = /var/lib/nova/tmp
[guestfs]
[healthcheck]
[hyperv]
[image_file_url]
[ironic]
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = NOVA_PASS
[libvirt]
[matchmaker_redis]
[metrics]
[mks]
[neutron]
[notifications]
[osapi_v21]
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[pci]
[placement]
os_region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:35357/v3
username = placement
password = PLACEMENT_PASS
[placement_database]
[quota]
[rdp]
[remote_debug]
[scheduler]
[serial_console]
[service_user]
[spice]
[ssl]
[trusted_computing]
[upgrade_levels]
[vendordata_dynamic_auth]
[vmware]
[vnc]
enabled = True
vncserver_listen = 0.0.0.0
vncserver_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html
[workarounds]
[wsgi]
[xenserver]
[xvp]
HERE
systemctl enable libvirtd.service openstack-nova-compute.service
systemctl restart libvirtd.service openstack-nova-compute.service
C. Verify the Nova services
source admin-openrc
openstack compute service list
openstack catalog list
D. Add compute nodes to the cell database
source admin-openrc
openstack hypervisor list
su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova
V. Neutron
A. Prepare the environment
1. Create the database
mysql -u root -proot
CREATE DATABASE neutron;
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost'
IDENTIFIED BY 'NEUTRON_DBPASS';
GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED
BY 'NEUTRON_DBPASS';
FLUSH PRIVILEGES;
Perform the following with admin credentials:
source admin-openrc
2. Create the user
openstack user create --domain default --password NEUTRON_PASS neutron
openstack role add --project service --user neutron admin
3. Create the service entity
openstack service create --name neutron --description "OpenStack Networking" network
4. Create the API endpoints
openstack endpoint create --region RegionOne network public http://controller:9696
openstack endpoint create --region RegionOne network internal http://controller:9696
openstack endpoint create --region RegionOne network admin http://controller:9696
B. Install and configure Neutron
controller
1. Install the Neutron controller packages
yum -y install openstack-neutron openstack-neutron-ml2 \
  openstack-neutron-linuxbridge ebtables
2. Choose a networking option (pick one)
Option 1: Provider networks
/etc/neutron/neutron.conf
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
Modular Layer 2 (ML2) plug-in (/etc/neutron/plugins/ml2/ml2_conf.ini)
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[securitygroup]
enable_ipset = True
Linux bridge agent (/etc/neutron/plugins/ml2/linuxbridge_agent.ini)
[linux_bridge]
physical_interface_mappings = provider:PROVIDER_INTERFACE_NAME
[vxlan]
enable_vxlan = False
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
DHCP agent (/etc/neutron/dhcp_agent.ini)
[DEFAULT]
interface_driver = neutron.agent.linux.interface.BridgeInterfaceDriver
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
Option 2: Self-service networks
neutron
cat >/etc/neutron/neutron.conf <<HERE
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = True
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = True
notify_nova_on_port_data_changes = True
[agent]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://neutron:NEUTRON_DBPASS@controller/neutron
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS
[matchmaker_redis]
[nova]
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = nova
password = NOVA_PASS
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[qos]
[quotas]
[ssl]
HERE
Layer 2 (ML2) plug-in
cat >/etc/neutron/plugins/ml2/ml2_conf.ini <<HERE
[DEFAULT]
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security
[ml2_type_flat]
flat_networks = provider
[ml2_type_geneve]
[ml2_type_gre]
[ml2_type_vlan]
[ml2_type_vxlan]
vni_ranges = 1:1000
[securitygroup]
enable_ipset = True
HERE
Linux bridge agent
cat >/etc/neutron/plugins/ml2/linuxbridge_agent.ini <<HERE
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:ens3
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = True
local_ip = 192.168.130.101
l2_population = True
HERE
Layer-3 agent
cat >/etc/neutron/l3_agent.ini <<HERE
[DEFAULT]
interface_driver = linuxbridge
[agent]
[ovs]
HERE
DHCP agent
cat >/etc/neutron/dhcp_agent.ini <<HERE
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = True
[agent]
[ovs]
HERE
Metadata agent
cat >/etc/neutron/metadata_agent.ini <<HERE
[DEFAULT]
nova_metadata_ip = controller
metadata_proxy_shared_secret = METADATA_SECRET
[agent]
[cache]
HERE
4. Configure the Compute service to use Neutron
/etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
service_metadata_proxy = True
metadata_proxy_shared_secret = METADATA_SECRET
5. Populate the database
ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini
su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
6. Start the services
systemctl restart openstack-nova-api.service
systemctl enable neutron-server.service neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service
systemctl start neutron-server.service neutron-linuxbridge-agent.service \
  neutron-dhcp-agent.service neutron-metadata-agent.service
Note: self-service networks additionally require the layer-3 agent:
systemctl enable neutron-l3-agent.service
systemctl start neutron-l3-agent.service
compute
1. Install the Neutron compute packages
yum -y install openstack-neutron-linuxbridge ebtables ipset
2. Common configuration
mv /etc/neutron/neutron.conf{,.default}
cat >/etc/neutron/neutron.conf <<HERE
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
[agent]
[cors]
[cors.subdomain]
[database]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = neutron
password = NEUTRON_PASS
[matchmaker_redis]
[nova]
[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
[oslo_messaging_amqp]
[oslo_messaging_kafka]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[qos]
[quotas]
[ssl]
HERE
3. Networking configuration (pick one)
Option 1: Provider networks
Linux bridge agent (/etc/neutron/plugins/ml2/linuxbridge_agent.ini)
[linux_bridge]
physical_interface_mappings = provider:ens33
[vxlan]
enable_vxlan = False
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
Option 2: Self-service networks
Linux bridge agent
mv /etc/neutron/plugins/ml2/linuxbridge_agent.ini{,.default}
cat >/etc/neutron/plugins/ml2/linuxbridge_agent.ini <<HERE
[DEFAULT]
[agent]
[linux_bridge]
physical_interface_mappings = provider:ens33
[securitygroup]
enable_security_group = True
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
[vxlan]
enable_vxlan = True
local_ip = 192.168.130.201
l2_population = True
HERE
4. Configure the Compute service to use Neutron
/etc/nova/nova.conf
[neutron]
url = http://controller:9696
auth_url = http://controller:35357
auth_type = password
project_domain_name = Default
user_domain_name = Default
region_name = RegionOne
project_name = service
username = neutron
password = NEUTRON_PASS
5. Start the services
systemctl restart openstack-nova-compute.service
systemctl enable neutron-linuxbridge-agent.service
systemctl restart neutron-linuxbridge-agent.service
C. Verify the Neutron services
source admin-openrc
neutron ext-list
openstack network agent list
VI. Dashboard (controller node)
yum -y install openstack-dashboard
See the official documentation for the dashboard configuration.
systemctl restart httpd.service memcached.service
VII. Cinder (Block Storage)
A. Prepare the environment
1. Create the database
mysql -u root -proot
CREATE DATABASE cinder;
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost'
IDENTIFIED BY 'CINDER_DBPASS';
GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY
'CINDER_DBPASS';
FLUSH PRIVILEGES;
Perform the following with admin credentials:
source admin-openrc
2. Create the user
openstack user create --domain default --password CINDER_PASS cinder
openstack role add --project service --user cinder admin
3. Create the service entities
openstack service create --name cinder --description "OpenStack Block Storage" volume
openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
4. Create the API endpoints
openstack endpoint create --region RegionOne volume public http://controller:8776/v1/%(tenant_id)s
openstack endpoint create --region RegionOne volume internal http://controller:8776/v1/%(tenant_id)s
openstack endpoint create --region RegionOne volume admin http://controller:8776/v1/%(tenant_id)s
openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%(tenant_id)s
openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%(tenant_id)s
openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%(tenant_id)s
B. Install and configure Cinder
controller
1. Install the Cinder packages
yum -y install openstack-cinder
2. Configure Cinder
mv /etc/cinder/cinder.conf{,.default}
cat >/etc/cinder/cinder.conf <<HERE
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.130.101
[BACKEND]
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[COORDINATION]
[FC-ZONE-MANAGER]
[KEY_MANAGER]
[barbican]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]
HERE
3. Configure Compute to use Cinder
/etc/nova/nova.conf
[cinder]
os_region_name = RegionOne
4. Sync the database
su -s /bin/sh -c "cinder-manage db sync" cinder
5. Start the Cinder services
systemctl restart openstack-nova-api.service
systemctl enable openstack-cinder-api.service openstack-cinder-scheduler.service
systemctl restart openstack-cinder-api.service openstack-cinder-scheduler.service
block1
1. Create the storage device
yum -y install lvm2
systemctl enable lvm2-lvmetad.service
systemctl start lvm2-lvmetad.service
pvcreate /dev/sdb
vgcreate cinder-volumes /dev/sdb
/etc/lvm/lvm.conf
devices {
    filter = [ "a/sdb/", "r/.*/" ]
}
2. Install Cinder
yum -y install openstack-cinder targetcli python-keystone
mv /etc/cinder/cinder.conf{,.default}
cat >/etc/cinder/cinder.conf <<HERE
[DEFAULT]
transport_url = rabbit://openstack:RABBIT_PASS@controller
auth_strategy = keystone
my_ip = 192.168.130.111
enabled_backends = lvm
glance_api_servers = http://controller:9292
[BACKEND]
[BRCD_FABRIC_EXAMPLE]
[CISCO_FABRIC_EXAMPLE]
[COORDINATION]
[FC-ZONE-MANAGER]
[KEY_MANAGER]
[barbican]
[cors]
[cors.subdomain]
[database]
connection = mysql+pymysql://cinder:CINDER_DBPASS@controller/cinder
[key_manager]
[keystone_authtoken]
auth_uri = http://controller:5000
auth_url = http://controller:35357
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = cinder
password = CINDER_PASS
[matchmaker_redis]
[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
[oslo_messaging_amqp]
[oslo_messaging_notifications]
[oslo_messaging_rabbit]
[oslo_messaging_zmq]
[oslo_middleware]
[oslo_policy]
[oslo_reports]
[oslo_versionedobjects]
[ssl]
[lvm]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes
iscsi_protocol = iscsi
iscsi_helper = lioadm
HERE
systemctl enable openstack-cinder-volume.service target.service
systemctl restart openstack-cinder-volume.service target.service
C. Verify the Cinder services
source admin-openrc
openstack volume service list
Tip: if the block storage service state shows as down even though the configuration files are correct, check whether the clocks on the block node and the controller are synchronized; clock skew also causes the down state.
VIII. Launch an instance
1. Network
Self-service network
Create the public network (floating IP pool) as admin:
source admin-openrc
openstack network create public --external \
  --provider-network-type flat --provider-physical-network provider
openstack subnet create --network public \
  --dns-nameserver 192.168.130.2 --gateway 192.168.130.2 \
  --subnet-range 192.168.130.0/24 \
  --allocation-pool start=192.168.130.31,end=192.168.130.99 sub-public
Floating IP pools usually do not need DHCP (disable it with --no-dhcp); it is enabled here only to make testing easier. A sketch of the usual next step follows.
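The notes stop at the public network; a hedged sketch of the usual next step, creating the tenant (self-service) network and a router (the names selfservice, sub-selfservice, router01 and the 172.16.1.0/24 subnet are illustrative assumptions):
source demo-openrc
openstack network create selfservice
openstack subnet create --network selfservice \
  --dns-nameserver 192.168.130.2 --gateway 172.16.1.1 \
  --subnet-range 172.16.1.0/24 sub-selfservice
openstack router create router01
openstack router add subnet router01 sub-selfservice
openstack router set router01 --external-gateway public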