OpenStack Icehouse + Ceph Deployment

Posted by Mathew on 2017-06-14

OpenStack is one of today's hottest cloud management platforms. This post walks through deploying the Icehouse release of OpenStack with Ceph as the storage backend.

Deployment Method and Node Planning

OpenStack can be deployed in many ways; production environments mostly rely on automated deployment. This post uses the RDO approach, which is driven by Puppet underneath. Ansible also offers an automated OpenStack installation solution, which I may try when time permits.

The node roles are planned as follows.

Every node runs a minimal installation of CentOS 7.2.

Roles: controller, network, compute-1, compute-2, storage-1, storage-2
Hostnames: Controller, Network, Compute-1, Compute-2, Ceph-1, Ceph-2
Management network: 10.112.1.2, 10.112.1.3, 10.112.1.4, 10.112.1.5, 10.112.1.6, 10.112.1.7 (one per node, in the order above)
Public network: 192.192.112.2, 192.192.112.4, 192.192.112.5, 192.192.112.6, 192.192.112.7
Cluster network (Ceph nodes only): 172.172.112.6, 172.172.112.7
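The management-network plan maps one-to-one onto /etc/hosts entries under the cs112-0X hostnames this post uses later. A minimal sketch that generates such a snippet (the /tmp output path is just for illustration):

```shell
#!/bin/bash
# Node plan: hostname -> management-network IP
# (cs112-0X are the hostnames this post later puts in /etc/hosts).
declare -A NODES=(
  [cs112-02]=10.112.1.2   # controller
  [cs112-03]=10.112.1.3   # network
  [cs112-04]=10.112.1.4   # compute-1
  [cs112-05]=10.112.1.5   # compute-2
  [cs112-06]=10.112.1.6   # ceph-1
  [cs112-07]=10.112.1.7   # ceph-2
)

OUT=/tmp/hosts.snippet
: > "$OUT"
# Associative-array iteration order is undefined, so sort by IP for a stable file.
for h in "${!NODES[@]}"; do
  printf '%s %s\n' "${NODES[$h]}" "$h"
done | sort >> "$OUT"
cat "$OUT"
```

Append the generated lines to /etc/hosts on every node (or distribute one copy with scp).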

Base Environment Setup and Tuning

Run the following script on the controller, network, and compute nodes.

The script handles hostname setup, SSH tuning, timezone configuration, yum mirror setup, kernel upgrade, NTP time sync, disabling SELinux, kernel parameter tuning, and base package installation.

#!/bin/bash
# Version: 2.0
# Author: MaricleMathew
# Description: base system environment setup
# Date: 2017-02-08 15:04:28
## make sure the locale is English
touch /var/log/scripts_basic_env.log
localectl status |grep en_US
if [ $? -gt 0 ]; then
localectl set-locale LANG=en_US.UTF-8
fi
## make sure the timezone is Asia/Shanghai
timedatectl status|grep -i shanghai
if [ $? -gt 0 ]; then
timedatectl set-timezone Asia/Shanghai
fi
# local intranet mirror (Squid cache)
SQUIDIP='10.63.229.85'

sed -i 's/\(SELINUX=\)enforcing/\1disabled/g' /etc/selinux/config
setenforce 0

sed -i 's/GSSAPIAuthentication yes/GSSAPIAuthentication no/g' /etc/ssh/sshd_config
sed -i 's/#UseDNS yes/UseDNS no/g' /etc/ssh/sshd_config
systemctl restart sshd

echo "ulimit -n 131072" >> /etc/profile
echo "* soft nofile 131072
* hard nofile 131072
* soft nproc 131072
* hard nproc 131072" >> /etc/security/limits.conf
echo 'vm.swappiness = 1
net.ipv6.conf.all.disable_ipv6 = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-arptables = 1
net.netfilter.nf_conntrack_max = 2097152
net.ipv4.neigh.default.gc_thresh1 = 1024
net.ipv4.neigh.default.gc_thresh2 = 4096
net.ipv4.neigh.default.gc_thresh3 = 8192' >> /etc/sysctl.conf
echo 'NETWORKING_IPV6=no' >> /etc/sysconfig/network
systemctl disable ip6tables
sed -i 's/IPV6INIT=yes/IPV6INIT=no/g' /etc/sysconfig/network-scripts/ifcfg-e*

#export http_proxy=http://$SQUIDIP:3128
sed -e "s/^mirrorlist/#mirrorlist/g" -e "s/#baseurl/baseurl/g" -e "s/mirror\.centos\.org\/centos\/\$releasever/$SQUIDIP\/centos_7.2.1511/g" -i /etc/yum.repos.d/CentOS-Base.repo
yum install epel-release -y
sed -e "s/^mirrorlist/#mirrorlist/g" -e "s/#baseurl/baseurl/g" -e "s/download\.fedoraproject\.org\/pub/$SQUIDIP/g" -i /etc/yum.repos.d/epel.repo
sed -e "s/^mirrorlist/#mirrorlist/g" -e "s/#baseurl/baseurl/g" -e "s/download\.fedoraproject\.org\/pub/$SQUIDIP/g" -i /etc/yum.repos.d/epel-testing.repo
yum install -y http://$SQUIDIP/openstack/openstack-icehouse/epel-7/rdo-release-icehouse-4.noarch.rpm
sed -i "s/repos\.fedorapeople\.org\/repos/$SQUIDIP/g" /etc/yum.repos.d/rdo-release.repo

## set yum repo priorities so that package conflicts between repos resolve cleanly
yum install -y yum-plugin-priorities
sed -i '/\[base\]/apriority=1' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/\[updates\]/apriority=1' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/\[extras\]/apriority=1' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/\[centosplus\]/apriority=11' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/\[contrib\]/apriority=11' /etc/yum.repos.d/CentOS-Base.repo
sed -i '/\[epel\]/apriority=2' /etc/yum.repos.d/epel.repo
sed -i '/\[epel-debuginfo\]/apriority=12' /etc/yum.repos.d/epel.repo
sed -i '/\[epel-source\]/apriority=12' /etc/yum.repos.d/epel.repo
sed -i '/\[epel-extras\]/apriority=12' /etc/yum.repos.d/epel.repo
sed -i '/\[epel-testing\]/apriority=12' /etc/yum.repos.d/epel.repo
sed -i '/\[epel-testing-debuginfo\]/apriority=12' /etc/yum.repos.d/epel.repo
sed -i '/\[epel-testing-source\]/apriority=12' /etc/yum.repos.d/epel.repo

yum clean all
yum makecache
yum update --exclude=kernel* centos-release* -y

yum install make bison flex automake autoconf boost-devel fuse-devel gcc-c++ \
libtool libuuid-devel libblkid-devel keyutils-libs-devel cryptopp-devel fcgi-devel \
libcurl-devel expat-devel gperftools-devel libedit-devel libatomic_ops-devel snappy-devel \
leveldb-devel libaio-devel xfsprogs-devel git libudev-devel gperftools redhat-lsb bzip2 ntp \
iptables-services wget expect vim -y --skip-broken
echo "alias vi='vim'" >> /etc/profile

systemctl stop firewalld.service
systemctl disable firewalld.service
systemctl enable iptables.service

systemctl stop NetworkManager
systemctl disable NetworkManager

sed -i "/server 0.centos.pool.ntp.org iburst/iserver $SQUIDIP prefer" /etc/ntp.conf
ntpdate $SQUIDIP
date && hwclock -w
systemctl stop ntpdate
systemctl disable ntpdate
systemctl enable ntpd
systemctl restart ntpd

## downgrade eventlet, otherwise nova-compute fails to start
export https_proxy=$SQUIDIP:3128
yum install -y python-pip
echo y | pip uninstall eventlet
pip install -v eventlet==0.14.0


wget http://$SQUIDIP/auto_deploy/linux-3.18.16.tar.xz -O /opt/linux-3.18.16.tar.xz
wget http://$SQUIDIP/auto_deploy/conf/kernelconfig -O /opt/kernelconfig
tar -Jxf /opt/linux-3.18.16.tar.xz -C /usr/src/
cp /opt/kernelconfig /usr/src/linux-3.18.16/.config
cd /usr/src/linux-3.18.16/
make oldconfig
make -j72 && make -j72 modules_install && make install
grub2-set-default 'CentOS Linux (3.18.16) 7 (Core)'

reboot
###################

Set up passwordless SSH from the controller node to every other node

[root@cs112-02 ~]# ssh-keygen -q -b 1024 -t rsa -N "" -f /root/.ssh/id_rsa
[root@cs112-02 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
10.112.1.2 cs112-02
10.112.1.3 cs112-03
10.112.1.4 cs112-04
10.112.1.5 cs112-05
10.112.1.6 cs112-06
10.112.1.7 cs112-07
[root@cs112-02 ~]# ssh-copy-id cs112-02
The authenticity of host 'cs112-02 (10.112.1.2)' can't be established.
ECDSA key fingerprint is c6:5b:05:3e:28:e7:84:d0:d3:a6:40:33:7c:e4:ce:a5.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.

[root@cs112-02 ~]# ssh-copy-id cs112-03
The authenticity of host 'cs112-03 (10.112.1.3)' can't be established.
ECDSA key fingerprint is b1:d3:47:88:0b:d1:f3:d8:63:ec:28:6a:68:64:be:28.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.

[root@cs112-02 ~]# ssh-copy-id cs112-04
The authenticity of host 'cs112-04 (10.112.1.4)' can't be established.
ECDSA key fingerprint is 6f:1f:a4:1b:30:a5:9f:cc:1a:65:aa:c4:46:93:6c:3f.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.

[root@cs112-02 ~]# ssh-copy-id cs112-05
The authenticity of host 'cs112-05 (10.112.1.5)' can't be established.
ECDSA key fingerprint is 59:a1:2a:5d:8f:3b:f8:78:55:83:64:ab:dd:81:31:81.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.

[root@cs112-02 ~]# ssh-copy-id cs112-06
The authenticity of host 'cs112-06 (10.112.1.6)' can't be established.
ECDSA key fingerprint is 82:c3:a3:f3:80:9c:fe:06:c6:b9:09:7a:bd:81:b0:11.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed

/usr/bin/ssh-copy-id: WARNING: All keys were skipped because they already exist on the remote system.

[root@cs112-02 ~]# ssh-copy-id cs112-07
The authenticity of host 'cs112-07 (10.112.1.7)' can't be established.
ECDSA key fingerprint is f0:b4:87:14:e1:10:c7:b6:60:21:8f:a4:d1:1e:e4:80.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@cs112-07's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'cs112-07'"
and check to make sure that only the key(s) you wanted were added.
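The per-host ssh-copy-id calls above can be driven by a loop. The sketch below is a dry run that only prints the commands (drop the leading `echo` to execute them); the host list matches the /etc/hosts entries shown earlier, and /tmp/sshcopy.cmds is just an illustrative output path:

```shell
#!/bin/bash
# Dry run: print one ssh-copy-id command per node in the plan.
# Remove the leading "echo" to actually distribute the controller's key.
HOSTS="cs112-02 cs112-03 cs112-04 cs112-05 cs112-06 cs112-07"
for h in $HOSTS; do
  echo ssh-copy-id -i /root/.ssh/id_rsa.pub "$h"
done | tee /tmp/sshcopy.cmds
```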

Run on the controller node

Install Packstack and generate the answer file:

[root@cs112-02 ~]# yum install openstack-packstack
[root@cs112-02 ~]# packstack --gen-answer-file=answer.txt

Changes to make in the answer file:

[root@cs112-02 ~]# grep -E -v '^#|^$' answer.txt
[general]
CONFIG_SSH_KEY=/root/.ssh/id_rsa.pub
CONFIG_MARIADB_INSTALL=y
CONFIG_GLANCE_INSTALL=y
CONFIG_CINDER_INSTALL=y
CONFIG_NOVA_INSTALL=y
CONFIG_NEUTRON_INSTALL=y
CONFIG_HORIZON_INSTALL=y
CONFIG_SWIFT_INSTALL=n
CONFIG_CEILOMETER_INSTALL=y
CONFIG_HEAT_INSTALL=n
CONFIG_CLIENT_INSTALL=y
CONFIG_NTP_SERVERS=
CONFIG_NAGIOS_INSTALL=y
EXCLUDE_SERVERS=
CONFIG_DEBUG_MODE=n
CONFIG_CONTROLLER_HOST=10.112.1.2 # IP of the machine that runs the controller-role services
CONFIG_COMPUTE_HOSTS=10.112.1.4,10.112.1.5 # IPs of the machines that run the compute-role services
CONFIG_NETWORK_HOSTS=10.112.1.3 # IP of the machine that runs the network-role services
CONFIG_VMWARE_BACKEND=n
CONFIG_UNSUPPORTED=n
CONFIG_VCENTER_HOST=
CONFIG_VCENTER_USER=
CONFIG_VCENTER_PASSWORD=
CONFIG_VCENTER_CLUSTER_NAME=
CONFIG_STORAGE_HOST=10.112.1.2
CONFIG_USE_EPEL=y
CONFIG_REPO=
CONFIG_RH_USER=
CONFIG_SATELLITE_URL=
CONFIG_RH_PW=
CONFIG_RH_OPTIONAL=y
CONFIG_RH_PROXY=
CONFIG_RH_PROXY_PORT=
CONFIG_RH_PROXY_USER=
CONFIG_RH_PROXY_PW=
CONFIG_SATELLITE_USER=
CONFIG_SATELLITE_PW=
CONFIG_SATELLITE_AKEY=
CONFIG_SATELLITE_CACERT=
CONFIG_SATELLITE_PROFILE=
CONFIG_SATELLITE_FLAGS=
CONFIG_SATELLITE_PROXY=
CONFIG_SATELLITE_PROXY_USER=
CONFIG_SATELLITE_PROXY_PW=
CONFIG_AMQP_BACKEND=rabbitmq
CONFIG_AMQP_HOST=10.112.1.2 # AMQP broker host
CONFIG_AMQP_ENABLE_SSL=n
CONFIG_AMQP_ENABLE_AUTH=n
CONFIG_AMQP_NSS_CERTDB_PW=12c4910655bf43dba447b2f410c1689c
CONFIG_AMQP_SSL_PORT=5671
CONFIG_AMQP_SSL_CACERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_CERT_FILE=/etc/pki/tls/certs/amqp_selfcert.pem
CONFIG_AMQP_SSL_KEY_FILE=/etc/pki/tls/private/amqp_selfkey.pem
CONFIG_AMQP_SSL_SELF_SIGNED=y
CONFIG_AMQP_AUTH_USER=amqp_user
CONFIG_AMQP_AUTH_PASSWORD=7cdf8434ecbe4289
CONFIG_MARIADB_HOST=10.112.1.2
CONFIG_MARIADB_USER=root
CONFIG_MARIADB_PW=2e8a5d77a74a4a1b
CONFIG_KEYSTONE_DB_PW=5c6fd1568abc4e40
CONFIG_KEYSTONE_ADMIN_TOKEN=c019cea98aaa4dae98d611adf40d8a70
CONFIG_KEYSTONE_ADMIN_PW=4637dab13c124ccc
CONFIG_KEYSTONE_DEMO_PW=87c4f433b5fc4186
CONFIG_KEYSTONE_TOKEN_FORMAT=PKI
CONFIG_GLANCE_DB_PW=b25e9163e9924043
CONFIG_GLANCE_KS_PW=d0e3a5ce7cb34dc5
CONFIG_CINDER_DB_PW=0ec8b2143ce641d6
CONFIG_CINDER_KS_PW=b3ab3f5a662a437f
CONFIG_CINDER_BACKEND=lvm
CONFIG_CINDER_VOLUMES_CREATE=y
CONFIG_CINDER_VOLUMES_SIZE=20G
CONFIG_CINDER_GLUSTER_MOUNTS=
CONFIG_CINDER_NFS_MOUNTS=
CONFIG_NOVA_DB_PW=2d84143208514050
CONFIG_NOVA_KS_PW=7061ff65da004039
CONFIG_NOVA_SCHED_CPU_ALLOC_RATIO=16.0
CONFIG_NOVA_SCHED_RAM_ALLOC_RATIO=1.5
CONFIG_NOVA_COMPUTE_MIGRATE_PROTOCOL=tcp
CONFIG_NOVA_COMPUTE_PRIVIF=eno1
CONFIG_NOVA_NETWORK_MANAGER=nova.network.manager.FlatDHCPManager
CONFIG_NOVA_NETWORK_PUBIF=eno1
CONFIG_NOVA_NETWORK_PRIVIF=eth1
CONFIG_NOVA_NETWORK_FIXEDRANGE=192.168.32.0/22
CONFIG_NOVA_NETWORK_FLOATRANGE=10.3.4.0/22
CONFIG_NOVA_NETWORK_DEFAULTFLOATINGPOOL=nova
CONFIG_NOVA_NETWORK_AUTOASSIGNFLOATINGIP=n
CONFIG_NOVA_NETWORK_VLAN_START=100
CONFIG_NOVA_NETWORK_NUMBER=1
CONFIG_NOVA_NETWORK_SIZE=255
CONFIG_NEUTRON_KS_PW=fcbc661653044cf9
CONFIG_NEUTRON_DB_PW=f6292e6e67e44a4a
CONFIG_NEUTRON_L3_EXT_BRIDGE=br-ex
CONFIG_NEUTRON_L2_PLUGIN=ml2
CONFIG_NEUTRON_METADATA_PW=0638bc45810344e5
CONFIG_LBAAS_INSTALL=n
CONFIG_NEUTRON_METERING_AGENT_INSTALL=n
CONFIG_NEUTRON_FWAAS=n
CONFIG_NEUTRON_ML2_TYPE_DRIVERS=vxlan
CONFIG_NEUTRON_ML2_TENANT_NETWORK_TYPES=vxlan
CONFIG_NEUTRON_ML2_MECHANISM_DRIVERS=openvswitch
CONFIG_NEUTRON_ML2_FLAT_NETWORKS=*
CONFIG_NEUTRON_ML2_VLAN_RANGES=
CONFIG_NEUTRON_ML2_TUNNEL_ID_RANGES=
CONFIG_NEUTRON_ML2_VXLAN_GROUP=
CONFIG_NEUTRON_ML2_VNI_RANGES=10:1000
CONFIG_NEUTRON_L2_AGENT=openvswitch
CONFIG_NEUTRON_LB_TENANT_NETWORK_TYPE=local
CONFIG_NEUTRON_LB_VLAN_RANGES=
CONFIG_NEUTRON_LB_INTERFACE_MAPPINGS=
CONFIG_NEUTRON_OVS_TENANT_NETWORK_TYPE=vxlan
CONFIG_NEUTRON_OVS_VLAN_RANGES=
CONFIG_NEUTRON_OVS_BRIDGE_MAPPINGS=
CONFIG_NEUTRON_OVS_BRIDGE_IFACES=
CONFIG_NEUTRON_OVS_TUNNEL_RANGES=
CONFIG_NEUTRON_OVS_TUNNEL_IF=
CONFIG_NEUTRON_OVS_VXLAN_UDP_PORT=4789
CONFIG_HORIZON_SSL=n
CONFIG_SSL_CERT=
CONFIG_SSL_KEY=
CONFIG_SSL_CACHAIN=
CONFIG_SWIFT_KS_PW=85355e05c7dc43c4
CONFIG_SWIFT_STORAGES=
CONFIG_SWIFT_STORAGE_ZONES=1
CONFIG_SWIFT_STORAGE_REPLICAS=1
CONFIG_SWIFT_STORAGE_FSTYPE=ext4
CONFIG_SWIFT_HASH=af98a1befef24066
CONFIG_SWIFT_STORAGE_SIZE=2G
CONFIG_PROVISION_DEMO=n # the demo provision downloads a cirros image from the Internet; we install from the intranet, so skip it
CONFIG_PROVISION_TEMPEST=n
CONFIG_PROVISION_TEMPEST_USER=
CONFIG_PROVISION_TEMPEST_USER_PW=7ac7fbc2044c4bbe
CONFIG_PROVISION_DEMO_FLOATRANGE=172.24.4.224/28
CONFIG_PROVISION_TEMPEST_REPO_URI=https://github.com/openstack/tempest.git
CONFIG_PROVISION_TEMPEST_REPO_REVISION=master
CONFIG_PROVISION_ALL_IN_ONE_OVS_BRIDGE=n
CONFIG_HEAT_DB_PW=f97d16d70e1d422d
CONFIG_HEAT_AUTH_ENC_KEY=b428b2896f3942b6
CONFIG_HEAT_KS_PW=c908cfa65c534123
CONFIG_HEAT_CLOUDWATCH_INSTALL=n
CONFIG_HEAT_USING_TRUSTS=y
CONFIG_HEAT_CFN_INSTALL=n
CONFIG_HEAT_DOMAIN=heat
CONFIG_HEAT_DOMAIN_ADMIN=heat_admin
CONFIG_HEAT_DOMAIN_PASSWORD=0f09daf44b004d3e
CONFIG_CEILOMETER_SECRET=29b218e6640f4c05
CONFIG_CEILOMETER_KS_PW=4227b5cc6e844698
CONFIG_MONGODB_HOST=10.112.1.2
CONFIG_NAGIOS_PW=4d3a9a96f689427d
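Rather than editing answer.txt by hand, the key=value lines can be rewritten with sed. A minimal sketch, run here against a tiny stand-in file (/tmp/answer.demo) so it is safe to try; the same key-anchored patterns work on the real answer.txt:

```shell
#!/bin/bash
# Stand-in for answer.txt with a few of the keys this post changes.
ANS=/tmp/answer.demo
cat > "$ANS" << 'EOF'
CONFIG_SWIFT_INSTALL=y
CONFIG_HEAT_INSTALL=y
CONFIG_COMPUTE_HOSTS=10.0.0.1
EOF

# Anchor each pattern on the key name so only whole keys are rewritten.
sed -i \
  -e 's/^CONFIG_SWIFT_INSTALL=.*/CONFIG_SWIFT_INSTALL=n/' \
  -e 's/^CONFIG_HEAT_INSTALL=.*/CONFIG_HEAT_INSTALL=n/' \
  -e 's/^CONFIG_COMPUTE_HOSTS=.*/CONFIG_COMPUTE_HOSTS=10.112.1.4,10.112.1.5/' \
  "$ANS"
cat "$ANS"
```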

Deploy with Packstack

Before running the deployment, patch a few files so the run does not error out:


[root@cs112-02 ~]# sed -i 's/python-iso8601/python2-iso8601/' /usr/lib/python2.7/site-packages/packstack/puppet/templates/openstack_client.pp
[root@cs112-02 ~]# sed -i 's/mongodb.conf/mongod.conf/g' /usr/share/openstack-puppet/modules/mongodb/manifests/params.pp
[root@cs112-02 ~]# packstack --answer-file=answer.txt
Welcome to Installer setup utility
Packstack changed given value to required value /root/.ssh/id_rsa.pub

Installing:
Clean Up [ DONE ]
Setting up ssh keys [ DONE ]
Discovering hosts' details [ DONE ]
Adding pre install manifest entries [ DONE ]
Installing time synchronization via NTP [ DONE ]
Preparing servers [ DONE ]
Adding AMQP manifest entries [ DONE ]
Adding MariaDB manifest entries [ DONE ]
Adding Keystone manifest entries [ DONE ]
Adding Glance Keystone manifest entries [ DONE ]
Adding Glance manifest entries [ DONE ]
Adding Cinder Keystone manifest entries [ DONE ]
Adding Cinder manifest entries [ DONE ]
Checking if the Cinder server has a cinder-volumes vg[ DONE ]
Adding Nova API manifest entries [ DONE ]
Adding Nova Keystone manifest entries [ DONE ]
Adding Nova Cert manifest entries [ DONE ]
Adding Nova Conductor manifest entries [ DONE ]
Creating ssh keys for Nova migration [ DONE ]
Gathering ssh host keys for Nova migration [ DONE ]
Adding Nova Compute manifest entries [ DONE ]
Adding Nova Scheduler manifest entries [ DONE ]
Adding Nova VNC Proxy manifest entries [ DONE ]
Adding Openstack Network-related Nova manifest entries[ DONE ]
Adding Nova Common manifest entries [ DONE ]
Adding Neutron API manifest entries [ DONE ]
Adding Neutron Keystone manifest entries [ DONE ]
Adding Neutron L3 manifest entries [ DONE ]
Adding Neutron L2 Agent manifest entries [ DONE ]
Adding Neutron DHCP Agent manifest entries [ DONE ]
Adding Neutron LBaaS Agent manifest entries [ DONE ]
Adding Neutron Metering Agent manifest entries [ DONE ]
Adding Neutron Metadata Agent manifest entries [ DONE ]
Checking if NetworkManager is enabled and running [ DONE ]
Adding OpenStack Client manifest entries [ DONE ]
Adding Horizon manifest entries [ DONE ]
Adding MongoDB manifest entries [ DONE ]
Adding Ceilometer manifest entries [ DONE ]
Adding Ceilometer Keystone manifest entries [ DONE ]
Adding Nagios server manifest entries [ DONE ]
Adding Nagios host manifest entries [ DONE ]
Adding post install manifest entries [ DONE ]
Installing Dependencies [ DONE ]
Copying Puppet modules and manifests [ DONE ]
Applying 10.112.1.2_prescript.pp
Applying 10.112.1.3_prescript.pp
Applying 10.112.1.4_prescript.pp
10.112.1.2_prescript.pp: [ DONE ]
10.112.1.4_prescript.pp: [ DONE ]
10.112.1.3_prescript.pp: [ DONE ]
Applying 10.112.1.2_ntpd.pp
Applying 10.112.1.3_ntpd.pp
Applying 10.112.1.4_ntpd.pp
10.112.1.2_ntpd.pp: [ DONE ]
10.112.1.3_ntpd.pp: [ DONE ]
10.112.1.4_ntpd.pp: [ DONE ]
Applying 10.112.1.2_amqp.pp
Applying 10.112.1.2_mariadb.pp
10.112.1.2_amqp.pp: [ DONE ]
10.112.1.2_mariadb.pp: [ DONE ]
Applying 10.112.1.2_keystone.pp
Applying 10.112.1.2_glance.pp
Applying 10.112.1.2_cinder.pp
10.112.1.2_keystone.pp: [ DONE ]
10.112.1.2_cinder.pp: [ DONE ]
10.112.1.2_glance.pp: [ DONE ]
Applying 10.112.1.2_api_nova.pp
10.112.1.2_api_nova.pp: [ DONE ]
Applying 10.112.1.2_nova.pp
Applying 10.112.1.4_nova.pp
10.112.1.2_nova.pp: [ DONE ]
10.112.1.4_nova.pp: [ DONE ]
Applying 10.112.1.2_neutron.pp
Applying 10.112.1.3_neutron.pp
Applying 10.112.1.4_neutron.pp
10.112.1.2_neutron.pp: [ DONE ]
10.112.1.4_neutron.pp: [ DONE ]
10.112.1.3_neutron.pp: [ DONE ]
Applying 10.112.1.3_neutron_fwaas.pp
Applying 10.112.1.2_osclient.pp
Applying 10.112.1.2_horizon.pp
10.112.1.2_osclient.pp: [ DONE ]
10.112.1.2_horizon.pp: [ DONE ]
10.112.1.3_neutron_fwaas.pp: [ DONE ]
Applying 10.112.1.2_mongodb.pp
10.112.1.2_mongodb.pp: [ DONE ]
Applying 10.112.1.2_ceilometer.pp
Applying 10.112.1.2_nagios.pp
Applying 10.112.1.2_nagios_nrpe.pp
Applying 10.112.1.3_nagios_nrpe.pp
Applying 10.112.1.4_nagios_nrpe.pp
10.112.1.2_ceilometer.pp: [ DONE ]
10.112.1.2_nagios.pp: [ DONE ]
10.112.1.2_nagios_nrpe.pp: [ DONE ]
10.112.1.4_nagios_nrpe.pp: [ DONE ]
10.112.1.3_nagios_nrpe.pp: [ DONE ]
Applying 10.112.1.2_postscript.pp
Applying 10.112.1.3_postscript.pp
Applying 10.112.1.4_postscript.pp
10.112.1.2_postscript.pp: [ DONE ]
10.112.1.3_postscript.pp: [ DONE ]
10.112.1.4_postscript.pp: [ DONE ]
Applying Puppet manifests [ DONE ]
Finalizing [ DONE ]

**** Installation completed successfully ******


Additional information:
* Did not create a cinder volume group, one already existed
* File /root/keystonerc_admin has been created on OpenStack client host 10.112.1.2. To use the command line tools you need to source the file.
* To access the OpenStack Dashboard browse to http://10.112.1.2/dashboard .
Please, find your login credentials stored in the keystonerc_admin in your home directory.
* To use Nagios, browse to http://10.112.1.2/nagios username: nagiosadmin, password: b5cb85bba8dc445b
* Because of the kernel update the host 10.112.1.2 requires reboot.
* Because of the kernel update the host 10.112.1.3 requires reboot.
* Because of the kernel update the host 10.112.1.4 requires reboot.
* The installation log file is available at: /var/tmp/packstack/20170828-183721-gLrqtG/openstack-setup.log
* The generated manifests are available at: /var/tmp/packstack/20170828-183721-gLrqtG/manifests

Installing Ceph (the build-and-install steps must run on both the controller and the compute nodes)

Run the following script on the controller and compute nodes to build and install Ceph from source:

[root@cs112-02 ~(keystone_admin)]# cat ceph.sh
#!/bin/bash

SQUIDIP='10.63.229.85'
## download the Ceph source
wget http://$SQUIDIP/auto_deploy/ceph-0.94.6.tar.gz -O /opt/ceph-0.94.6.tar.gz
tar zxf /opt/ceph-0.94.6.tar.gz -C /opt/
## build and install Ceph
cd /opt/ceph-0.94.6
CXXFLAGS="-g -O2" ./configure --prefix=/usr --sbindir=/sbin --localstatedir=/var --sysconfdir=/etc
make -j72 && make install # pick a job count that matches your host's CPU core count
ldconfig
mkdir /var/run/ceph

Pull the configuration files from a Ceph node to the controller:

[root@cs112-02 ~(keystone_admin)]# scp -r cs112-06:/etc/ceph/ /etc/
[root@cs112-02 ceph(keystone_admin)]# ll
total 36
-rw------- 1 root root 137 Aug 30 09:47 ceph.client.admin.keyring
-rw-r--r-- 1 root root 4076 Aug 30 09:47 ceph.conf
-rw------- 1 root root 214 Aug 30 09:47 ceph.mon.keyring
-rw-r--r-- 1 root root 193 Aug 30 09:47 monmap
[root@cs112-02 ceph(keystone_admin)]# ceph auth get-key client.cinder | tee client.cinder.key

[root@cs112-02 ceph(keystone_admin)]# cat << EOF > /etc/ceph/secret.xml
<secret ephemeral='no' private='no'>
<uuid>eb46c257-7cb1-4920-9ad0-e6521aec255b</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF
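The UUID inside secret.xml must match rbd_secret_uuid in cinder.conf and the --secret passed to virsh later on, so it is safer to extract it from the file than to retype it. A small sketch, run on a copy of the file under /tmp so it is self-contained:

```shell
#!/bin/bash
# Copy of the secret.xml generated above, so this sketch runs standalone.
XML=/tmp/secret.demo.xml
cat > "$XML" << 'EOF'
<secret ephemeral='no' private='no'>
<uuid>eb46c257-7cb1-4920-9ad0-e6521aec255b</uuid>
<usage type='ceph'>
<name>client.cinder secret</name>
</usage>
</secret>
EOF

# Pull the UUID out of the <uuid> element; reuse $SECRET_UUID wherever the
# same value is needed (cinder.conf, virsh secret-set-value, ...).
SECRET_UUID=$(sed -n 's|.*<uuid>\(.*\)</uuid>.*|\1|p' "$XML")
echo "$SECRET_UUID" | tee /tmp/secret.uuid
```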

Ceph configuration on the controller

Create separate Ceph users for Nova/Cinder and for Glance:

[root@cs112-02 (keystone_admin)]# ceph auth get-or-create client.cinder mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=volumes, allow rwx pool=vms, allow rx pool=images'
[root@cs112-02 (keystone_admin)]# ceph auth get-or-create client.glance mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=images'
[root@cs112-02 (keystone_admin)]# ceph auth get-or-create client.cinder-backup mon 'allow r' osd 'allow class-read object_prefix rbd_children, allow rwx pool=backups'

Copy the client.cinder, client.glance, and client.cinder-backup keyrings to the appropriate nodes and change their ownership:

[root@cs112-02 (keystone_admin)]# ceph auth get-or-create client.glance | tee /etc/ceph/ceph.client.glance.keyring
[root@cs112-02 (keystone_admin)]# chown glance:glance /etc/ceph/ceph.client.glance.keyring
[root@cs112-02 (keystone_admin)]# ceph auth get-or-create client.cinder | tee /etc/ceph/ceph.client.cinder.keyring
[root@cs112-02 (keystone_admin)]# chown cinder:cinder /etc/ceph/ceph.client.cinder.keyring
[root@cs112-02 (keystone_admin)]# ceph auth get-or-create client.cinder-backup | tee /etc/ceph/ceph.client.cinder-backup.keyring
[root@cs112-02 (keystone_admin)]# chown cinder:cinder /etc/ceph/ceph.client.cinder-backup.keyring

Distribute the controller's /etc/ceph directory to each compute node:

[root@cs112-02 (keystone_admin)]# scp -r /etc/ceph/ cs112-04:/etc/
[root@cs112-02 (keystone_admin)]# scp -r /etc/ceph/ cs112-05:/etc/

Configure Cinder to use Ceph as its backend storage

[root@cs112-02 ~(keystone_admin)]# grep -E -v '^#|^$' /etc/cinder/cinder.conf
[DEFAULT]
amqp_durable_queues=False
rabbit_host=10.112.1.2
rabbit_port=5672
rabbit_hosts=10.112.1.2:5672
rabbit_use_ssl=False
rabbit_userid=guest
rabbit_password=guest
rabbit_virtual_host=/
rabbit_ha_queues=False
notification_driver=cinder.openstack.common.notifier.rpc_notifier
rpc_backend=cinder.openstack.common.rpc.impl_kombu
control_exchange=openstack
osapi_volume_listen=0.0.0.0
osapi_max_limit=2000
backup_ceph_conf=/etc/ceph/ceph.conf
backup_ceph_user=cinder-backup
backup_ceph_chunk_size=134217728
backup_ceph_pool=backups
backup_ceph_stripe_unit=0
backup_ceph_stripe_count=0
restore_discard_excess_bytes=true
api_paste_config=/etc/cinder/api-paste.ini
glance_host=10.112.1.2
glance_port=9292
glance_api_version=2
auth_strategy=keystone
debug=False
verbose=True
log_dir=/var/log/cinder
use_syslog=False
iscsi_ip_address=10.112.1.2
volume_backend_name=ssd
iscsi_helper=lioadm
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf= /etc/ceph/ceph.conf
rbd_flatten_volume_from_snapshot=false
rbd_secret_uuid = eb46c257-7cb1-4920-9ad0-e6521aec255b
rbd_max_clone_depth=5
rados_connect_timeout=-1
volume_driver = cinder.volume.drivers.rbd.RBDDriver
[BRCD_FABRIC_EXAMPLE]
[database]
connection=mysql://cinder:0ec8b2143ce641d6@10.112.1.2/cinder
idle_timeout=3600
[fc-zone-manager]
[keymgr]
[keystone_authtoken]
[matchmaker_redis]
[matchmaker_ring]
[oslo_messaging_amqp]
[profiler]
[ssl]
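Before restarting services, it is worth confirming that every RBD key is present in cinder.conf. The sketch below runs against a trimmed stand-in of the [DEFAULT] section above (point CONF at /etc/cinder/cinder.conf on a real node):

```shell
#!/bin/bash
# Trimmed stand-in for the [DEFAULT] section shown above.
CONF=/tmp/cinder.demo.conf
cat > "$CONF" << 'EOF'
[DEFAULT]
volume_driver = cinder.volume.drivers.rbd.RBDDriver
rbd_pool = volumes
rbd_user = cinder
rbd_ceph_conf = /etc/ceph/ceph.conf
rbd_secret_uuid = eb46c257-7cb1-4920-9ad0-e6521aec255b
EOF

MISSING=0
for key in volume_driver rbd_pool rbd_user rbd_ceph_conf rbd_secret_uuid; do
  # Match "key = value" or "key=value" at the start of a line.
  grep -Eq "^${key} *=" "$CONF" || { echo "missing: $key"; MISSING=1; }
done
echo "missing=$MISSING" | tee /tmp/cinder.check
```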

Run on the compute nodes (libvirt secret registration)

# run the following on both cs112-04 and cs112-05
[root@cs112-04 ~]# virsh secret-define --file /etc/ceph/secret.xml
[root@cs112-04 ~]# virsh secret-set-value --secret eb46c257-7cb1-4920-9ad0-e6521aec255b --base64 $(cat /etc/ceph/client.cinder.key) && rm /etc/ceph/client.cinder.key /etc/ceph/secret.xml

Create the Neutron networks

[root@cs112-02 (keystone_admin)]# neutron net-create private-admin --provider:network_type=vxlan --provider:segmentation_id=2
[root@cs112-02 (keystone_admin)]# neutron subnet-create private-admin 10.168.252.0/22 --name subnet-10.168.252 --dns-nameserver 8.8.8.8
[root@cs112-02 (keystone_admin)]# neutron router-create ext-router
[root@cs112-02 (keystone_admin)]# neutron net-create public --router:external=True --shared --provider:network_type=vxlan --provider:segmentation_id=100
# simulated public address range
[root@cs112-02 (keystone_admin)]# neutron subnet-create public 122.122.122.0/24 --name subnet-122.122.122 --enable-dhcp=true --allocation-pool start=122.122.122.2,end=122.122.122.254 --gateway=122.122.122.1
[root@cs112-02 (keystone_admin)]# neutron router-gateway-set ext-router public --disable-snat
[root@cs112-02 (keystone_admin)]# neutron router-interface-add ext-router subnet-10.168.252

Restart the services on each node

systemctl list-units | awk '{print $1}' | grep '^openstack' | while read -r unit; do systemctl restart "$unit"; done
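The restart one-liner pulls unit names out of `systemctl list-units`; the same extraction is shown here on canned sample output so the pipeline is easy to inspect before pointing it at live services (the sample lines are illustrative, not real output):

```shell
#!/bin/bash
# Canned sample of `systemctl list-units` style output (illustrative only).
cat > /tmp/units.sample << 'EOF'
openstack-nova-api.service        loaded active running OpenStack Nova API Server
openstack-glance-api.service      loaded active running OpenStack Image Service
openvswitch.service               loaded active exited  Open vSwitch
EOF

# Keep the OpenStack units only, first column = unit name; this is the
# list the restart loop above feeds into `systemctl restart`.
grep openstack /tmp/units.sample | awk '{print $1}' | tee /tmp/units.restart
```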

Check that the services on each node are healthy

# controller node
[root@cs112-02 ~(keystone_admin)]# openstack-status
== Nova services ==
openstack-nova-api: active
openstack-nova-cert: active
openstack-nova-compute: inactive (disabled on boot)
openstack-nova-network: inactive (disabled on boot)
openstack-nova-scheduler: active
openstack-nova-volume: inactive (disabled on boot)
openstack-nova-conductor: active
== Glance services ==
openstack-glance-api: active
openstack-glance-registry: active
== Keystone service ==
openstack-keystone: active
== Horizon service ==
openstack-dashboard: active
== neutron services ==
neutron-server: active
neutron-dhcp-agent: inactive (disabled on boot)
neutron-l3-agent: inactive (disabled on boot)
neutron-metadata-agent: inactive (disabled on boot)
neutron-lbaas-agent: inactive (disabled on boot)
neutron-openvswitch-agent: inactive (disabled on boot)
neutron-linuxbridge-agent: inactive (disabled on boot)
neutron-ryu-agent: inactive (disabled on boot)
neutron-nec-agent: inactive (disabled on boot)
neutron-mlnx-agent: inactive (disabled on boot)
== Cinder services ==
openstack-cinder-api: active
openstack-cinder-scheduler: active
openstack-cinder-volume: active
openstack-cinder-backup: inactive (disabled on boot)
== Ceilometer services ==
openstack-ceilometer-api: active
openstack-ceilometer-central: active
openstack-ceilometer-compute: inactive (disabled on boot)
openstack-ceilometer-collector: active
openstack-ceilometer-alarm-notifier: active
openstack-ceilometer-alarm-evaluator: active
== Support services ==
openvswitch: inactive (disabled on boot)
dbus: active
tgtd: inactive (disabled on boot)
rabbitmq-server: active
memcached: active
== Keystone users ==
+----------------------------------+------------+---------+----------------------+
| id | name | enabled | email |
+----------------------------------+------------+---------+----------------------+
| e73399b3a46a4ca9a32b7b29be7aa82b | admin | True | root@localhost |
| d2192256bf9c4cfd81b40349ff742ed5 | ceilometer | True | ceilometer@localhost |
| 308c1e0561994396944331dc1804d919 | cinder | True | cinder@localhost |
| b69c0a4b43a44f37b85f1f9c14ea18fe | glance | True | glance@localhost |
| 6529854ab12a40618bb92a2c9a8c2723 | neutron | True | neutron@localhost |
| 238e89dc80bc44688c88f42f5464db29 | nova | True | nova@localhost |
+----------------------------------+------------+---------+----------------------+
== Glance images ==
+--------------------------------------+----------------------------+-------------+------------------+-------------+--------+
| ID | Name | Disk Format | Container Format | Size | Status |
+--------------------------------------+----------------------------+-------------+------------------+-------------+--------+
| 3c7063bb-4744-4129-bb9f-10a60d09af86 | CentOS-7.2-x64-custom-v1 | raw | bare | 42949672960 | active |
| 132cd63d-2133-48b0-9c8a-57196c9e3a8f | CentOS-7.2-x86_64-20170830 | raw | bare | 21474836480 | active |
| a74c1615-7f3e-4e79-9278-9fa47a86df8d | Cirros-Test | qcow2 | bare | 13147648 | active |
| cb7f712e-c6d0-4f21-bd0d-ec33bb6bc26e | win2k8-R2-D-x64-20170830 | raw | bare | 42949672960 | active |
+--------------------------------------+----------------------------+-------------+------------------+-------------+--------+
== Nova managed services ==
+------------------+----------+----------+---------+-------+----------------------------+-----------------+
| Binary | Host | Zone | Status | State | Updated_at | Disabled Reason |
+------------------+----------+----------+---------+-------+----------------------------+-----------------+
| nova-consoleauth | cs112-02 | internal | enabled | up | 2017-08-31T01:37:11.000000 | - |
| nova-scheduler | cs112-02 | internal | enabled | up | 2017-08-31T01:37:11.000000 | - |
| nova-conductor | cs112-02 | internal | enabled | up | 2017-08-31T01:37:16.000000 | - |
| nova-cert | cs112-02 | internal | enabled | up | 2017-08-31T01:37:09.000000 | - |
| nova-compute | cs112-05 | nova | enabled | up | 2017-08-31T01:37:07.000000 | None |
| nova-compute | cs112-04 | nova | enabled | up | 2017-08-31T01:37:08.000000 | None |
+------------------+----------+----------+---------+-------+----------------------------+-----------------+
== Nova networks ==
+--------------------------------------+---------------+------+
| ID | Label | Cidr |
+--------------------------------------+---------------+------+
| 446b19db-ad13-41f9-ac8b-476664524667 | private-admin | - |
| 67337588-9dc2-40c9-9f16-9865be825de9 | public | - |
+--------------------------------------+---------------+------+
== Nova instance flavors ==
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
| 1 | m1.tiny | 512 | 1 | 0 | | 1 | 1.0 | True |
| 2 | m1.small | 2048 | 20 | 0 | | 1 | 1.0 | True |
| 3 | m1.medium | 4096 | 40 | 0 | | 2 | 1.0 | True |
| 4 | m1.large | 8192 | 80 | 0 | | 4 | 1.0 | True |
| 5 | m1.xlarge | 16384 | 160 | 0 | | 8 | 1.0 | True |
+----+-----------+-----------+------+-----------+------+-------+-------------+-----------+
== Nova instances ==
+--------------------------------------+---------------+--------+------------+-------------+-----------------------------+
| ID | Name | Status | Task State | Power State | Networks |
+--------------------------------------+---------------+--------+------------+-------------+-----------------------------+
| 25f63505-74bb-44c2-b9f2-fceb2cd11145 | C7-12 | ACTIVE | - | Running | private-admin=10.168.252.9 |
| d55f5157-8f2c-46e9-aecf-efefa4a40530 | CentOS-7-2 | ACTIVE | - | Running | private-admin=10.168.252.11 |
| d0d9ec6f-068b-4cfd-8f62-3bf51a6cb80a | Cirrors-First | ACTIVE | - | Running | private-admin=10.168.252.2 |
| 4582bf40-0a49-4ee5-818b-06a31a238532 | Cirros-Second | ACTIVE | - | Running | private-admin=10.168.252.4 |
+--------------------------------------+---------------+--------+------------+-------------+-----------------------------+
# compute node
[root@cs112-04 ~]# openstack-status | grep -w active
openstack-nova-compute: active
neutron-openvswitch-agent: active
openstack-ceilometer-compute: active
libvirtd: active
openvswitch: active
dbus: active
# network node
[root@cs112-03 ~]# openstack-status | grep -w active
neutron-dhcp-agent: active
neutron-l3-agent: active
neutron-metadata-agent: active
neutron-openvswitch-agent: active
openvswitch: active
dbus: active

That completes the installation of OpenStack Icehouse. The I release is fairly dated by now; I will try a newer release when I get the chance.
The official Ceph documentation covers the integration of OpenStack with Ceph in much more detail.
For deploying the Ceph nodes themselves, please also refer to the official documentation.

References:
Ceph Block Devices and OpenStack
Ceph Cluster Installation