OpenStack Cloud Platform Deployment - SkyRainmom
source link: https://www.cnblogs.com/skyrainmom/p/17401818.html
Preface: This deployment uses CentOS 8-Stream with the OpenStack Victoria repository. Apart from the base configuration, the time synchronization service (among the five core services) and the nova, neutron, and cinder components (among the seven components) must be configured on both nodes; all other services are configured on the controller node only. For neutron, choose either the provider (public) network or the self-service (private) network setup; the provider network is the more common choice. All passwords in this deployment are 111111; change them to suit your needs.
Environment:
- Virtualization software: VMware Workstation 16 Pro
- Operating system: CentOS 8-Stream
- Controller node: 4 GB RAM, 4 CPU cores, 100 GB disk, virtualization engine enabled
- Compute node: 4 GB RAM, 4 CPU cores, 100 GB disk, virtualization engine enabled
Base configuration (both nodes)
Yum repository configuration
Aliyun mirror base URL: https://mirrors.aliyun.com; other repositories can be configured from it as needed, but they are not used here.
(1) Configure the CentOS 8 sources by editing the .repo files in the yum repository directory as follows

#Edit CentOS-Stream-AppStream.repo, pointing baseurl at https://mirrors.aliyun.com
[root@localhost ~]# cd /etc/yum.repos.d/
[root@localhost yum.repos.d]# vi CentOS-Stream-AppStream.repo
[appstream]
name=CentOS Stream $releasever - AppStream
#mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=AppStream&infra=$infra
baseurl=https://mirrors.aliyun.com/$contentdir/$stream/AppStream/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

#Edit CentOS-Stream-BaseOS.repo the same way
[root@localhost yum.repos.d]# vi CentOS-Stream-BaseOS.repo
[baseos]
name=CentOS Stream $releasever - BaseOS
#mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=BaseOS&infra=$infra
baseurl=https://mirrors.aliyun.com/$contentdir/$stream/BaseOS/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial

#Edit CentOS-Stream-Extras.repo the same way
[root@localhost yum.repos.d]# vi CentOS-Stream-Extras.repo
[extras]
name=CentOS Stream $releasever - Extras
#mirrorlist=http://mirrorlist.centos.org/?release=$stream&arch=$basearch&repo=extras&infra=$infra
baseurl=https://mirrors.aliyun.com/$contentdir/$stream/extras/$basearch/os/
gpgcheck=1
enabled=1
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-centosofficial
(2) Configure the OpenStack source

#Create openstack-victoria.repo in the yum repository directory
[root@localhost ~]# vi /etc/yum.repos.d/openstack-victoria.repo
#Add the following
[victoria]
name=victoria
baseurl=https://mirrors.aliyun.com/centos/8-stream/cloud/x86_64/openstack-victoria/
gpgcheck=0
enabled=1
(3) Clean and rebuild the package cache
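The post does not show the commands for this step; the usual dnf sequence on both nodes is:

```shell
#Drop stale metadata, then download fresh metadata for all enabled repositories
[root@localhost ~]# dnf clean all
[root@localhost ~]# dnf makecache
```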
Network configuration
- Controller node, dual NICs -------> host-only IP: 10.10.10.10, NAT (external) IP: 10.10.20.10
- Compute node, dual NICs -------> host-only IP: 10.10.10.20, NAT (external) IP: 10.10.20.20
(1) Install the network service

#CentOS 8 ships with NetworkManager, which conflicts with the neutron service, so install network-scripts, then stop and disable NetworkManager
[root@localhost ~]# dnf -y install network-scripts
[root@localhost ~]# systemctl disable --now NetworkManager

#Start the network service and enable it at boot
[root@localhost ~]# systemctl enable --now network
(2) Configure static IPs

#ens33, controller node shown
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens33
BOOTPROTO=static       #modify
ONBOOT=yes             #modify
IPADDR=10.10.10.10     #add
NETMASK=255.255.255.0  #add

#ens34, controller node shown
[root@localhost ~]# vi /etc/sysconfig/network-scripts/ifcfg-ens34
BOOTPROTO=static       #modify
ONBOOT=yes             #modify
IPADDR=10.10.20.10     #add
NETMASK=255.255.255.0  #add
GATEWAY=10.10.20.2     #add
DNS1=8.8.8.8           #add
DNS2=114.114.114.114   #add
(3) Restart the network and verify external connectivity
Host configuration
(1) Set hostnames

#Controller node
[root@localhost ~]# hostnamectl set-hostname controller
[root@localhost ~]# bash
[root@controller ~]#

#Compute node
[root@localhost ~]# hostnamectl set-hostname compute
[root@localhost ~]# bash
[root@compute ~]#
(2) Disable the firewall
(3) Disable the SELinux security subsystem

#Set SELinux to disabled so it stays off after reboot
[root@controller ~]# vi /etc/selinux/config
SELINUX=disabled

#Check SELinux status with getenforce
[root@controller ~]# getenforce
Disabled
(4) Configure host mappings

#Controller node
[root@controller ~]# cat >>/etc/hosts<<EOF
> 10.10.10.10 controller
> 10.10.10.20 compute
> EOF

#Compute node
[root@compute ~]# cat >>/etc/hosts<<EOF
> 10.10.10.10 controller
> 10.10.10.20 compute
> EOF
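The append-with-heredoc pattern can be rehearsed safely against a scratch file first; the temporary file here stands in for /etc/hosts, which is only touched on the real nodes:

```shell
# Append the two name mappings to a scratch copy instead of /etc/hosts
HOSTS_FILE=$(mktemp)
cat >>"$HOSTS_FILE" <<EOF
10.10.10.10 controller
10.10.10.20 compute
EOF
cat "$HOSTS_FILE"
```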
OpenStack repository

#Install the OpenStack Victoria repository
[root@controller ~]# dnf -y install centos-release-openstack-victoria

#Upgrade all packages on the node
[root@controller ~]# dnf -y upgrade

#Install the OpenStack client and openstack-selinux
[root@controller ~]# dnf -y install python3-openstackclient openstack-selinux
The five core services
Chrony time synchronization (both nodes)
(1) Check whether chrony is installed

[root@controller ~]# rpm -qa | grep chrony

#Install it if missing
[root@controller ~]# dnf -y install chrony

(2) Edit the chrony configuration file

#Controller node
[root@controller ~]# vim /etc/chrony.conf
server ntp6.aliyun.com iburst   #add: sync with Aliyun NTP
allow 10.10.10.0/24             #add

#Compute node
[root@compute ~]# vim /etc/chrony.conf
server controller iburst        #add: sync with the controller node

(3) Restart the time synchronization service and enable it at boot
MariaDB database
(1) Install MariaDB

[root@controller ~]# dnf -y install mariadb mariadb-server python3-PyMySQL

#Start MariaDB
[root@controller ~]# systemctl start mariadb

(2) Create and edit the openstack.cnf file
(3) Initialize the database
(4) Restart the database service and enable it at boot
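Steps (2) and (3) are not shown in the post; a typical /etc/my.cnf.d/openstack.cnf, following the official install guide (bind-address set to this deployment's controller IP), plus the interactive initialization, would be roughly:

```shell
[root@controller ~]# vi /etc/my.cnf.d/openstack.cnf
[mysqld]
bind-address = 10.10.10.10
default-storage-engine = innodb
innodb_file_per_table = on
max_connections = 4096
collation-server = utf8_general_ci
character-set-server = utf8

#Initialize the database interactively (set the root password, e.g. 111111)
[root@controller ~]# mysql_secure_installation
```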
RabbitMQ message queue
Note: installing rabbitmq-server may fail because the configured sources are missing the SDL2 library it depends on; download and install the package below, then install rabbitmq-server.
Download: wget http://rpmfind.net/linux/centos/8-stream/PowerTools/x86_64/os/Packages/SDL2-2.0.10-2.el8.x86_64.rpm
Install: dnf -y install SDL2-2.0.10-2.el8.x86_64.rpm
(1) Install the rabbitmq packages
(2) Start the message queue service and enable it at boot
(3) Add the openstack user and set its password
(4) Grant the openstack user permissions
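The commands for steps (3) and (4) are not shown; with this deployment's password, the official install guide's commands are:

```shell
#Add the openstack user with password 111111
[root@controller ~]# rabbitmqctl add_user openstack 111111
#Allow configure, write, and read access for the openstack user
[root@controller ~]# rabbitmqctl set_permissions openstack ".*" ".*" ".*"
```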
(5) Enable the message queue web management plugin

[root@controller ~]# rabbitmq-plugins enable rabbitmq_management

#After this, `ss -antlu` shows port 15672 listening; you can log in to the RabbitMQ web UI at http://10.10.10.10:15672 with the default user and password, both guest
Memcached cache
(1) Install the memcached packages
(2) Edit the memcached configuration file
(3) Restart the cache service and enable it at boot
Etcd cluster
(1) Install the etcd packages
(2) Edit the etcd configuration file
(3) Start the etcd service and enable it at boot
The seven components
Keystone identity service
(1) Create and authorize the database

#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the keystone database
MariaDB [(none)]> CREATE DATABASE keystone;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON keystone.* TO 'keystone'@'%' IDENTIFIED BY '111111';
(2) Install the keystone packages
(3) Edit the configuration file

#Back up the config file and strip comments
[root@controller ~]# cp /etc/keystone/keystone.conf /etc/keystone/keystone.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/keystone/keystone.conf.bak > /etc/keystone/keystone.conf
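The `grep -Ev '^$|#'` filter used here keeps every line that is neither empty nor contains a `#` (so inline comments are dropped as well). A quick check of the pattern on a scratch file:

```shell
# Build a small config with a comment and a blank line, then strip it
CONF=$(mktemp)
printf '# a comment\n\n[database]\nconnection = sqlite://\n' > "$CONF"
grep -Ev '^$|#' "$CONF" > "$CONF.stripped"
cat "$CONF.stripped"   # only the [database] section survives
```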
#Edit
[root@controller ~]# vim /etc/keystone/keystone.conf
[database]
connection = mysql+pymysql://keystone:111111@controller/keystone

[token]
provider = fernet
(4) Initialize the database
(5) View the keystone database tables

[root@controller ~]# mysql -uroot -p111111
MariaDB [(none)]> use keystone;
MariaDB [keystone]> show tables;
MariaDB [keystone]> quit

(6) Initialize Fernet keys
(7) Bootstrap the identity service
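The commands for steps (4), (6), and (7) are not shown; per the official Victoria install guide, adapted to this deployment's password and hostname, they are:

```shell
#(4) Populate the keystone database
[root@controller ~]# su -s /bin/sh -c "keystone-manage db_sync" keystone
#(6) Initialize the Fernet key repositories
[root@controller ~]# keystone-manage fernet_setup --keystone-user keystone --keystone-group keystone
[root@controller ~]# keystone-manage credential_setup --keystone-user keystone --keystone-group keystone
#(7) Bootstrap the identity service
[root@controller ~]# keystone-manage bootstrap --bootstrap-password 111111 \
  --bootstrap-admin-url http://controller:5000/v3/ \
  --bootstrap-internal-url http://controller:5000/v3/ \
  --bootstrap-public-url http://controller:5000/v3/ \
  --bootstrap-region-id RegionOne
```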
(8) Configure the Apache HTTP service

#Edit httpd.conf
[root@controller ~]# vim /etc/httpd/conf/httpd.conf
ServerName controller   #add this line

<Directory />
    AllowOverride none
    Require all granted   #change this line
</Directory>

#Create a link to wsgi-keystone.conf
[root@controller ~]# ln -s /usr/share/keystone/wsgi-keystone.conf /etc/httpd/conf.d/

(9) Restart httpd and enable it at boot
(10) Create the admin environment-variable script

[root@controller ~]# vim /admin-openrc.sh
export OS_USERNAME=admin
export OS_PASSWORD=111111
export OS_PROJECT_NAME=admin
export OS_USER_DOMAIN_NAME=Default
export OS_PROJECT_DOMAIN_NAME=Default
export OS_AUTH_URL=http://controller:5000/v3
export OS_IDENTITY_API_VERSION=3
export OS_IMAGE_API_VERSION=2

#Load the variables with `source /admin-openrc.sh` (or `. /admin-openrc.sh`); to avoid importing them by hand each time, add the line to .bashrc so it runs at login
[root@controller ~]# vim .bashrc
source /admin-openrc.sh   #add this line
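Note that the script must be sourced (`source` or `.`), not executed, or the exported variables die with the subshell. A scratch-file rehearsal of the pattern:

```shell
# Write a minimal openrc stand-in and source it into the current shell
RC=$(mktemp)
cat > "$RC" <<'EOF'
export OS_USERNAME=admin
export OS_AUTH_URL=http://controller:5000/v3
EOF
. "$RC"
echo "$OS_USERNAME"   # prints: admin
```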
(11) Create domains, projects, users, and roles

#Create a domain; a default domain already exists, so this one is just an example
[root@controller ~]# openstack domain create --description "An Example Domain" example

#Create the service project
[root@controller ~]# openstack project create --domain default --description "Service Project" service

#Create a demo project
[root@controller ~]# openstack project create --domain default --description "Demo Project" myproject

#Create a user; the command prompts for a password twice
[root@controller ~]# openstack user create --domain default --password-prompt myuser

#Create a role
[root@controller ~]# openstack role create myrole

#Bind the role to the project and user
[root@controller ~]# openstack role add --project myproject --user myuser myrole

(12) Verify a token
Glance image service
(1) Create and authorize the database

#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the glance database
MariaDB [(none)]> CREATE DATABASE glance;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON glance.* TO 'glance'@'%' IDENTIFIED BY '111111';

(2) Install the glance packages
Note: if installation fails, set enabled=1 in the CentOS-Stream-PowerTools.repo source and retry.
(3) Edit the configuration file

#Back up the config file and strip comments
[root@controller ~]# cp /etc/glance/glance-api.conf /etc/glance/glance-api.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/glance/glance-api.conf.bak > /etc/glance/glance-api.conf

#Edit
[root@controller ~]# vim /etc/glance/glance-api.conf
[database]
connection = mysql+pymysql://glance:111111@controller/glance

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = glance
password = 111111

[paste_deploy]
flavor = keystone

[glance_store]
stores = file,http
default_store = file
filesystem_store_datadir = /var/lib/glance/images/
(4) Initialize the database
(5) View the glance database tables

[root@controller ~]# mysql -uroot -p111111
MariaDB [(none)]> use glance;
MariaDB [glance]> show tables;
MariaDB [glance]> quit
(6) Create the glance user and service, attach the admin role

#Create the glance user
[root@controller ~]# openstack user create --domain default --password 111111 glance

#Attach the admin role
[root@controller ~]# openstack role add --project service --user glance admin

#Create the glance service
[root@controller ~]# openstack service create --name glance --description "OpenStack Image" image

(7) Register the API endpoints

#public
[root@controller ~]# openstack endpoint create --region RegionOne image public http://controller:9292

#internal
[root@controller ~]# openstack endpoint create --region RegionOne image internal http://controller:9292

#admin
[root@controller ~]# openstack endpoint create --region RegionOne image admin http://controller:9292

(8) View the service endpoints
(9) Start the glance service and enable it at boot
(10) Test the image service

#This deployment uses the cirros-0.5.1-x86_64-disk.img image; create it as follows
[root@controller ~]# openstack image create "cirros" --file cirros-0.5.1-x86_64-disk.img --disk-format qcow2 --container-format bare --public

#After creation, list it with the openstack client
[root@controller ~]# openstack image list

#Inspect the glance database; images are recorded in the images table
[root@controller ~]# mysql -uroot -p111111
MariaDB [(none)]> use glance;
MariaDB [glance]> select * from images\G

#The image file itself is under /var/lib/glance/images/; to delete an image, remove the database record first, then the image file
[root@controller ~]# ls /var/lib/glance/images/
Placement service
(1) Create and authorize the database

#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the placement database
MariaDB [(none)]> CREATE DATABASE placement;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON placement.* TO 'placement'@'%' IDENTIFIED BY '111111';

(2) Install the placement packages
(3) Edit the configuration file

#Back up the config file and strip comments
[root@controller ~]# cp /etc/placement/placement.conf /etc/placement/placement.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/placement/placement.conf.bak > /etc/placement/placement.conf

#Edit
[root@controller ~]# vim /etc/placement/placement.conf
[placement_database]
connection = mysql+pymysql://placement:111111@controller/placement

[api]
auth_strategy = keystone

[keystone_authtoken]
auth_url = http://controller:5000/v3
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = placement
password = 111111
(4) Initialize the database
(5) View the placement database tables

[root@controller ~]# mysql -uroot -p111111
MariaDB [(none)]> use placement;
MariaDB [placement]> show tables;
MariaDB [placement]> quit
(6) Create the placement user and service, attach the admin role

#Create the placement user
[root@controller ~]# openstack user create --domain default --password 111111 placement

#Attach the admin role
[root@controller ~]# openstack role add --project service --user placement admin

#Create the placement service
[root@controller ~]# openstack service create --name placement --description "Placement API" placement

(7) Register the API endpoints

#public
[root@controller ~]# openstack endpoint create --region RegionOne placement public http://controller:8778

#internal
[root@controller ~]# openstack endpoint create --region RegionOne placement internal http://controller:8778

#admin
[root@controller ~]# openstack endpoint create --region RegionOne placement admin http://controller:8778

(8) View the service endpoints
(9) Restart the httpd service
Check the placement service status
Nova compute service
1. Controller node (part 1)
(1) Create and authorize the databases

#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the nova_api, nova, and nova_cell0 databases
MariaDB [(none)]> CREATE DATABASE nova_api;
MariaDB [(none)]> CREATE DATABASE nova;
MariaDB [(none)]> CREATE DATABASE nova_cell0;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_api.* TO 'nova'@'%' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova.* TO 'nova'@'%' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON nova_cell0.* TO 'nova'@'%' IDENTIFIED BY '111111';
(2) Install the nova packages
(3) Edit the configuration file

#Back up the config file and strip comments
[root@controller ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf

#Edit
[root@controller ~]# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:111111@controller:5672/
my_ip = 10.10.10.10   #this node's IP; if the IP ever changes, this must be updated

[api_database]
connection = mysql+pymysql://nova:111111@controller/nova_api

[database]
connection = mysql+pymysql://nova:111111@controller/nova

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 111111

[vnc]
enabled = true
server_listen = $my_ip
server_proxyclient_address = $my_ip

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 111111
(4) Initialize the databases

#Populate the nova_api database
[root@controller ~]# su -s /bin/sh -c "nova-manage api_db sync" nova

#Register the cell0 database
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 map_cell0" nova

#Create the cell1 cell
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 create_cell --name=cell1 --verbose" nova

#Populate the nova database
[root@controller ~]# su -s /bin/sh -c "nova-manage db sync" nova
(5) Create the nova user and service, attach the admin role

#Create the nova user
[root@controller ~]# openstack user create --domain default --password 111111 nova

#Attach the admin role
[root@controller ~]# openstack role add --project service --user nova admin

#Create the nova service
[root@controller ~]# openstack service create --name nova --description "OpenStack Compute" compute

(6) Register the API endpoints

#public
[root@controller ~]# openstack endpoint create --region RegionOne compute public http://controller:8774/v2.1

#internal
[root@controller ~]# openstack endpoint create --region RegionOne compute internal http://controller:8774/v2.1

#admin
[root@controller ~]# openstack endpoint create --region RegionOne compute admin http://controller:8774/v2.1

(7) View the service endpoints
(8) Verify that nova_cell0 and cell1 were added
(9) Start all nova services and enable them at boot
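The service list for step (9) is not shown in the post; per the official install guide it is:

```shell
[root@controller ~]# systemctl enable --now openstack-nova-api.service \
  openstack-nova-scheduler.service openstack-nova-conductor.service \
  openstack-nova-novncproxy.service
```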
(10) Check whether the nova services are running

[root@controller ~]# nova service-list

#Usually only two services appear: nova-scheduler and nova-conductor. This command is received by nova-api, which fronts those two services; if nova-api is down, they show as down too. Check nova-novncproxy by its port instead, for example:
[root@controller ~]# netstat -lntup | grep 6080
tcp 0 0 0.0.0.0:6080 0.0.0.0:* LISTEN 1456/python3
[root@controller ~]# ps -ef | grep 1456
nova 1456 1 0 18:29 ? 00:00:05 /usr/bin/python3 /usr/bin/nova-novncproxy --web /usr/share/novnc/
root 27724 26054 0 20:51 pts/0 00:00:00 grep --color=auto 1456

(11) Viewing through the web interface

#Without name resolution, use the IP directly: http://10.10.10.10:6080
#To use names, add the following to the hosts file under C:\Windows\System32\drivers\etc on your workstation
10.10.10.10 controller
10.10.10.20 compute
#then visit http://controller:6080
2. Compute node
(1) Install the nova packages
(2) Edit the configuration file

#Back up the config file and strip comments
[root@compute ~]# cp /etc/nova/nova.conf /etc/nova/nova.conf.bak
[root@compute ~]# grep -Ev '^$|#' /etc/nova/nova.conf.bak > /etc/nova/nova.conf

#Edit
[root@compute ~]# vim /etc/nova/nova.conf
[DEFAULT]
enabled_apis = osapi_compute,metadata
transport_url = rabbit://openstack:111111@controller
my_ip = 10.10.10.20   #this node's IP; if the IP ever changes, this must be updated

[api]
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000/
auth_url = http://controller:5000/
memcached_servers = controller:11211
auth_type = password
project_domain_name = Default
user_domain_name = Default
project_name = service
username = nova
password = 111111

[vnc]
enabled = true
server_listen = 0.0.0.0
server_proxyclient_address = $my_ip
novncproxy_base_url = http://controller:6080/vnc_auto.html

[glance]
api_servers = http://controller:9292

[oslo_concurrency]
lock_path = /var/lib/nova/tmp

[placement]
region_name = RegionOne
project_domain_name = Default
project_name = service
auth_type = password
user_domain_name = Default
auth_url = http://controller:5000/v3
username = placement
password = 111111
(3) Check whether the compute node supports hardware acceleration

#If this command returns a number greater than zero, the node supports hardware acceleration; if it returns 0, it does not, and you must configure [libvirt]
[root@compute ~]# egrep -c '(vmx|svm)' /proc/cpuinfo

#Configure [libvirt]
[root@compute ~]# vim /etc/nova/nova.conf
[libvirt]
virt_type = qemu

(4) Start the compute node nova service and enable it at boot
Controller node (part 2)
(5) Add the compute node to the cell database

#Confirm the compute host is present in the database
[root@controller ~]# openstack compute service list --service nova-compute

#Discover the compute node from the controller
[root@controller ~]# su -s /bin/sh -c "nova-manage cell_v2 discover_hosts --verbose" nova

(6) Set the discovery interval
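Step (6) is a nova.conf setting on the controller; the official guide suggests a periodic discovery interval such as:

```shell
[root@controller ~]# vim /etc/nova/nova.conf
[scheduler]
discover_hosts_in_interval = 300   #discover new compute hosts every 300 seconds
```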
Neutron networking service
(1) Create and authorize the database

#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the neutron database
MariaDB [(none)]> CREATE DATABASE neutron;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON neutron.* TO 'neutron'@'%' IDENTIFIED BY '111111';

(2) Create the neutron user and service, attach the admin role

#Create the neutron user
[root@controller ~]# openstack user create --domain default --password 111111 neutron

#Attach the admin role
[root@controller ~]# openstack role add --project service --user neutron admin

#Create the neutron service
[root@controller ~]# openstack service create --name neutron --description "OpenStack Networking" network

(3) Register the API endpoints

#public
[root@controller ~]# openstack endpoint create --region RegionOne network public http://controller:9696

#internal
[root@controller ~]# openstack endpoint create --region RegionOne network internal http://controller:9696

#admin
[root@controller ~]# openstack endpoint create --region RegionOne network admin http://controller:9696

(4) View the service endpoints
Controller node, provider (public) network
(1) Install the neutron packages
(2) Edit the neutron configuration file

#Back up the config file and strip comments
[root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

#Edit
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins =
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:111111@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 111111

[nova]   #add this section if it is not present
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 111111

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
(3) Edit the ML2 plugin

#Back up the config file and strip comments
[root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini

#Edit
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan
tenant_network_types =
mechanism_drivers = linuxbridge
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[securitygroup]
enable_ipset = true
(4) Configure the Linux bridge agent

#Back up the config file and strip comments
[root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

#Edit
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34   #the NAT NIC that serves instances

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(5) Configure the DHCP agent

#Back up the config file and strip comments
[root@controller ~]# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini

#Edit
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
(6) Set the bridge filters

#Append to the kernel parameter file
[root@controller ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf

#Load the br_netfilter module
[root@controller ~]# modprobe br_netfilter

#Verify
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1    #this output means it worked
net.bridge.bridge-nf-call-ip6tables = 1   #this output means it worked
(7) Configure the metadata agent

#Back up the config file and strip comments
[root@controller ~]# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini

#Edit
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET

#METADATA_SECRET is a shared secret you may choose yourself, but it must match the metadata setting in the nova configuration later
(8) Configure the compute service to use the network service
(9) Create the network service initialization script link
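The link in step (9) is, per the official guide, `ln -s /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugin.ini`, which points the generic plugin path at the ML2 config. The same pattern rehearsed in a scratch directory:

```shell
# Create an ML2 config stand-in and link it to the generic plugin path
NEUTRON_DIR=$(mktemp -d)
touch "$NEUTRON_DIR/ml2_conf.ini"
ln -s "$NEUTRON_DIR/ml2_conf.ini" "$NEUTRON_DIR/plugin.ini"
readlink "$NEUTRON_DIR/plugin.ini"   # prints the ml2_conf.ini path
```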
(10) Initialize the database
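The command for step (10) is not shown; per the official install guide it is:

```shell
[root@controller ~]# su -s /bin/sh -c "neutron-db-manage --config-file /etc/neutron/neutron.conf \
  --config-file /etc/neutron/plugins/ml2/ml2_conf.ini upgrade head" neutron
```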
(11) Restart the nova API service
(12) Start the neutron services and enable them at boot
Compute node, provider (public) network
(1) Install the neutron packages
(2) Edit the neutron configuration file

#Back up the config file and strip comments
[root@compute ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

#Edit
[root@compute ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 111111

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
(3) Configure the Linux bridge agent

#Back up the config file and strip comments
[root@compute ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

#Edit
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34   #the NAT NIC that serves instances

[vxlan]
enable_vxlan = false

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(4) Set the bridge filters

#Append to the kernel parameter file
[root@compute ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf

#Load the br_netfilter module
[root@compute ~]# modprobe br_netfilter

#Verify
[root@compute ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1    #this output means it worked
net.bridge.bridge-nf-call-ip6tables = 1   #this output means it worked
(5) Configure the compute service to use the network service
(6) Restart the nova API service
(7) Start the Linux bridge service and enable it at boot
Verify that the provider network services are running
(8) View the network agent list from the controller

#View the network agent list on the controller node
[root@controller ~]# openstack network agent list

#On success there are normally four agents: a Metadata agent, a DHCP agent, and two Linux bridge agents, one on controller and one on compute
Controller node, self-service (private) network
(1) Install the neutron packages
(2) Edit the neutron configuration file

#Back up the config file and strip comments
[root@controller ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

#Edit
[root@controller ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
core_plugin = ml2
service_plugins = router
allow_overlapping_ips = true
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone
notify_nova_on_port_status_changes = true
notify_nova_on_port_data_changes = true

[database]
connection = mysql+pymysql://neutron:111111@controller/neutron

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 111111

[nova]   #add this section if it is not present
auth_url = http://controller:5000
auth_type = password
project_domain_name = default
user_domain_name = default
region_name = RegionOne
project_name = service
username = nova
password = 111111

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
(3) Edit the ML2 plugin

#Back up the config file and strip comments
[root@controller ~]# cp /etc/neutron/plugins/ml2/ml2_conf.ini /etc/neutron/plugins/ml2/ml2_conf.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/ml2_conf.ini.bak > /etc/neutron/plugins/ml2/ml2_conf.ini

#Edit
[root@controller ~]# vim /etc/neutron/plugins/ml2/ml2_conf.ini
[ml2]
type_drivers = flat,vlan,vxlan
tenant_network_types = vxlan
mechanism_drivers = linuxbridge,l2population
extension_drivers = port_security

[ml2_type_flat]
flat_networks = provider

[ml2_type_vxlan]
vni_ranges = 1:1000

[securitygroup]
enable_ipset = true
(4) Configure the Linux bridge agent

#Back up the config file and strip comments
[root@controller ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

#Edit
[root@controller ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34   #the NAT NIC that serves instances

[vxlan]
enable_vxlan = true
local_ip = 10.10.10.10
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(5) Set the bridge filters

#Append to the kernel parameter file
[root@controller ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf

#Load the br_netfilter module
[root@controller ~]# modprobe br_netfilter

#Verify
[root@controller ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1    #this output means it worked
net.bridge.bridge-nf-call-ip6tables = 1   #this output means it worked
(6) Configure the DHCP agent

#Back up the config file and strip comments
[root@controller ~]# cp /etc/neutron/dhcp_agent.ini /etc/neutron/dhcp_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/dhcp_agent.ini.bak > /etc/neutron/dhcp_agent.ini

#Edit
[root@controller ~]# vim /etc/neutron/dhcp_agent.ini
[DEFAULT]
interface_driver = linuxbridge
dhcp_driver = neutron.agent.linux.dhcp.Dnsmasq
enable_isolated_metadata = true
(7) Configure the layer-3 agent

#Back up the config file and strip comments
[root@controller ~]# cp /etc/neutron/l3_agent.ini /etc/neutron/l3_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/l3_agent.ini.bak > /etc/neutron/l3_agent.ini

#Edit
[root@controller ~]# vim /etc/neutron/l3_agent.ini
[DEFAULT]
interface_driver = linuxbridge
(8) Configure the metadata agent

#Back up the config file and strip comments
[root@controller ~]# cp /etc/neutron/metadata_agent.ini /etc/neutron/metadata_agent.ini.bak
[root@controller ~]# grep -Ev '^$|#' /etc/neutron/metadata_agent.ini.bak > /etc/neutron/metadata_agent.ini

#Edit
[root@controller ~]# vim /etc/neutron/metadata_agent.ini
[DEFAULT]
nova_metadata_host = controller
metadata_proxy_shared_secret = METADATA_SECRET

#METADATA_SECRET is a shared secret you may choose yourself, but it must match the metadata setting in the nova configuration later
(9) Configure the compute service to use the network service
(10) Create the network service initialization script link
(11) Initialize the database
(12) Restart the nova API service
(13) Start the neutron services and enable them at boot
Compute node, self-service (private) network
(1) Install the neutron packages
(2) Edit the neutron configuration file

#Back up the config file and strip comments
[root@compute ~]# cp /etc/neutron/neutron.conf /etc/neutron/neutron.conf.bak
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/neutron.conf.bak > /etc/neutron/neutron.conf

#Edit
[root@compute ~]# vim /etc/neutron/neutron.conf
[DEFAULT]
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = neutron
password = 111111

[oslo_concurrency]
lock_path = /var/lib/neutron/tmp
(3) Configure the Linux bridge agent

#Back up the config file and strip comments
[root@compute ~]# cp /etc/neutron/plugins/ml2/linuxbridge_agent.ini /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak
[root@compute ~]# grep -Ev '^$|#' /etc/neutron/plugins/ml2/linuxbridge_agent.ini.bak > /etc/neutron/plugins/ml2/linuxbridge_agent.ini

#Edit
[root@compute ~]# vim /etc/neutron/plugins/ml2/linuxbridge_agent.ini
[linux_bridge]
physical_interface_mappings = provider:ens34   #the NAT NIC that serves instances

[vxlan]
enable_vxlan = true
local_ip = 10.10.10.20
l2_population = true

[securitygroup]
enable_security_group = true
firewall_driver = neutron.agent.linux.iptables_firewall.IptablesFirewallDriver
(4) Set the bridge filters

#Append to the kernel parameter file
[root@compute ~]# echo 'net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1' >> /etc/sysctl.conf

#Load the br_netfilter module
[root@compute ~]# modprobe br_netfilter

#Verify
[root@compute ~]# sysctl -p
net.bridge.bridge-nf-call-iptables = 1    #this output means it worked
net.bridge.bridge-nf-call-ip6tables = 1   #this output means it worked
(5) Configure the compute service to use the network service
(6) Restart the nova API service
(7) Start the Linux bridge service and enable it at boot
Verify that the self-service network services are running
(8) View the network agent list from the controller

#View the network agent list on the controller node
[root@controller ~]# openstack network agent list

#On success there are normally five agents: a Metadata agent, a DHCP agent, an L3 agent, and two Linux bridge agents, one on controller and one on compute
Dashboard
(1) Install the dashboard packages
(2) Edit the dashboard configuration file

#For every option below, search the file; modify the setting if it exists and add it if it does not
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
OPENSTACK_HOST = "controller"
#Without name resolution, include the IPs
ALLOWED_HOSTS = ['controller','compute','10.10.10.10','10.10.10.20']
SESSION_ENGINE = 'django.contrib.sessions.backends.cache'
CACHES = {
    'default': {
        'BACKEND': 'django.core.cache.backends.memcached.MemcachedCache',
        'LOCATION': 'controller:11211',
    },
}
OPENSTACK_KEYSTONE_URL = "http://%s/identity/v3" % OPENSTACK_HOST
OPENSTACK_KEYSTONE_MULTIDOMAIN_SUPPORT = True
OPENSTACK_API_VERSIONS = {
    "identity": 3,
    "image": 2,
    "volume": 3,
}
OPENSTACK_KEYSTONE_DEFAULT_DOMAIN = "Default"
OPENSTACK_KEYSTONE_DEFAULT_ROLE = "user"
OPENSTACK_NEUTRON_NETWORK = {
    'enable_router': False,
    'enable_quotas': False,
    'enable_distributed_router': False,
    'enable_ha_router': False,
    'enable_lb': False,
    'enable_firewall': False,
    'enable_vpn': False,
    'enable_fip_topology_check': False,
}
TIME_ZONE = "Asia/Shanghai"
(3) Configure the http service

[root@controller ~]# vi /etc/httpd/conf.d/openstack-dashboard.conf
WSGIApplicationGroup %{GLOBAL}   #add this line

#Edit the dashboard configuration file
[root@controller ~]# vim /etc/openstack-dashboard/local_settings
WEBROOT = '/dashboard/'   #add this line

(4) Restart the http and cache services
(5) Log in to the web interface

#Without name resolution, use the IP directly: http://10.10.10.10/dashboard
#To use names, add the following to the hosts file under C:\Windows\System32\drivers\etc on your workstation
10.10.10.10 controller
10.10.10.20 compute
#then visit http://controller/dashboard
Cinder block storage
1. Controller node
(1) Create and authorize the database

#Enter the database
[root@controller ~]# mysql -u root -p111111

#Create the cinder database
MariaDB [(none)]> CREATE DATABASE cinder;

#Grant privileges
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'localhost' IDENTIFIED BY '111111';
MariaDB [(none)]> GRANT ALL PRIVILEGES ON cinder.* TO 'cinder'@'%' IDENTIFIED BY '111111';

(2) Edit the configuration file

#Back up the config file and strip comments
[root@controller ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
[root@controller ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf

#Edit
[root@controller ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone
my_ip = 10.10.10.10

[database]
connection = mysql+pymysql://cinder:111111@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 111111

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
(3) Initialize the database
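The command for step (3) is not shown; per the official install guide it is:

```shell
[root@controller ~]# su -s /bin/sh -c "cinder-manage db sync" cinder
```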
(4) View the cinder database tables

[root@controller ~]# mysql -uroot -p111111
MariaDB [(none)]> use cinder;
MariaDB [cinder]> show tables;
MariaDB [cinder]> quit
(5) Create the cinder user and services, attach the admin role

#Create the cinder user
[root@controller ~]# openstack user create --domain default --password 111111 cinder

#Attach the admin role
[root@controller ~]# openstack role add --project service --user cinder admin

#Create the cinderv2 and cinderv3 services
[root@controller ~]# openstack service create --name cinderv2 --description "OpenStack Block Storage" volumev2
[root@controller ~]# openstack service create --name cinderv3 --description "OpenStack Block Storage" volumev3
(6) Register the API endpoints

cinderv2 endpoints

#public
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 public http://controller:8776/v2/%\(project_id\)s

#internal
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 internal http://controller:8776/v2/%\(project_id\)s

#admin
[root@controller ~]# openstack endpoint create --region RegionOne volumev2 admin http://controller:8776/v2/%\(project_id\)s

cinderv3 endpoints

#public
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 public http://controller:8776/v3/%\(project_id\)s

#internal
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 internal http://controller:8776/v3/%\(project_id\)s

#admin
[root@controller ~]# openstack endpoint create --region RegionOne volumev3 admin http://controller:8776/v3/%\(project_id\)s

(7) View the service endpoints
(8) Configure the compute service to use block storage

#Edit the nova configuration file
[root@controller ~]# vi /etc/nova/nova.conf
[cinder]
os_region_name = RegionOne

#Restart nova
[root@controller ~]# systemctl restart openstack-nova-api.service

(9) Start the cinder services and enable them at boot
2. Compute node (shut the VM down and add a 50 GB disk)
(1) View the disks
(2) Install the LVM packages
(3) Create the LVM physical volume /dev/sdb
(4) Create the LVM volume group cinder-volumes
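The commands for steps (3) and (4) are not shown; the standard LVM commands for the added disk are:

```shell
#Create the physical volume on the new 50 GB disk
[root@compute ~]# pvcreate /dev/sdb
#Create the volume group the cinder [lvm] backend expects
[root@compute ~]# vgcreate cinder-volumes /dev/sdb
```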
(5) Modify the LVM configuration

#Back up the config file and strip comments
[root@compute ~]# cp /etc/lvm/lvm.conf /etc/lvm/lvm.conf.bak
[root@compute ~]# grep -Ev '^$|#' /etc/lvm/lvm.conf.bak > /etc/lvm/lvm.conf

#Edit: accept sda and sdb, reject everything else
[root@compute ~]# vi /etc/lvm/lvm.conf
devices {
    filter = [ "a/sda/", "a/sdb/", "r/.*/" ]
}
(6) Install the cinder packages
(7) Edit the cinder configuration file

#Back up the config file and strip comments
[root@compute ~]# cp /etc/cinder/cinder.conf /etc/cinder/cinder.conf.bak
[root@compute ~]# grep -Ev '^$|#' /etc/cinder/cinder.conf.bak > /etc/cinder/cinder.conf

#Edit
[root@compute ~]# vim /etc/cinder/cinder.conf
[DEFAULT]
transport_url = rabbit://openstack:111111@controller
auth_strategy = keystone
my_ip = 10.10.10.20
enabled_backends = lvm
glance_api_servers = http://controller:9292

[database]
connection = mysql+pymysql://cinder:111111@controller/cinder

[keystone_authtoken]
www_authenticate_uri = http://controller:5000
auth_url = http://controller:5000
memcached_servers = controller:11211
auth_type = password
project_domain_name = default
user_domain_name = default
project_name = service
username = cinder
password = 111111

[lvm]   #add this section if it is not present
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes   #must match the volume group created above
target_protocol = iscsi
target_helper = lioadm

[oslo_concurrency]
lock_path = /var/lib/cinder/tmp
(8) Start the cinder services and enable them at boot
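The service list for step (8) is not shown; per the official guide the compute node runs the volume service and the iSCSI target service:

```shell
[root@compute ~]# systemctl enable --now openstack-cinder-volume.service target.service
```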
(9) Return to the controller node and view the service list
This completes the deployment of the OpenStack Victoria cloud platform.