Exam Set 1

Hostname     Interfaces      IP Addresses
controller   ens33, ens34    192.168.100.10, 192.168.200.10
compute      ens33, ens34    192.168.100.20, 192.168.200.20

Task 1: Basic Operations and Maintenance (5 points)

1. Following the IP address plan in Table 1, configure the IP address of each server node and verify that the network communicates normally. Set the hostname of cloud server 1 to Controller and the hostname of cloud server 2 to Compute, add the IP-to-hostname mappings to the hosts file, stop the firewall and disable it at boot, and set SELinux to Permissive mode.

controller

# Set the hostname
hostnamectl set-hostname controller
# Add the host mappings (resulting /etc/hosts)
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10 controller
192.168.100.20 compute
# Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# Set SELinux
setenforce 0
vi /etc/selinux/config
SELINUX=permissive   # change this line to permissive
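The IP configuration itself is not shown above; a minimal sketch using ifcfg files on CentOS 7 (the interface names ens33/ens34 and the /24 netmask are assumptions based on Table 1):

# Controller, first interface; repeat analogously for ens34 and for the compute node
cat > /etc/sysconfig/network-scripts/ifcfg-ens33 <<EOF
DEVICE=ens33
BOOTPROTO=static
ONBOOT=yes
IPADDR=192.168.100.10
NETMASK=255.255.255.0
EOF
systemctl restart network
ip addr show ens33   # verify the address took effect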

compute

# Set the hostname
hostnamectl set-hostname compute
# Add the host mappings (resulting /etc/hosts)
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10 controller
192.168.100.20 compute
# Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# Set SELinux
setenforce 0
vi /etc/selinux/config
SELINUX=permissive   # change this line to permissive

2. Upload the provided CentOS-7-x86_64-DVD-1804.iso and OpenStackQueens.iso disc images to the /root directory of the Controller node. Then create a centos directory and an openstack directory under /opt, mount the CentOS-7-x86_64-DVD-1804.iso image on the centos directory, and mount the OpenStackQueens.iso image on the openstack directory.

(If OpenStackQueens.iso is unavailable, the national-competition image may be used instead.)

# Create the mount points
mkdir /opt/centos
mkdir /opt/openstack
# Mount the ISOs (loop devices are attached automatically)
mount CentOS-7-x86_64-DVD-1804.iso /opt/centos/
mount chinaskills_cloud_iaas.iso /opt/openstack/
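A quick, optional check that both images are mounted:

df -h | grep /opt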

3. On the Controller node, install the vsftp server from the packages in the centos directory and enable it at boot, then serve a yum repository over ftp. Create the yum source file ftp.repo on both the controller node and the compute node, using the ftp server's IP address.

controller

mv /etc/yum.repos.d/* /etc/yum
# Write the local yum source (resulting file)
cat /etc/yum.repos.d/local.repo
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[openstack]
name=openstack
baseurl=file:///opt/openstack/iaas-repo
gpgcheck=0
enabled=1
# Refresh the repo metadata
yum repolist
# Install and configure vsftpd
yum install -y vsftpd
vi /etc/vsftpd/vsftpd.conf
anon_root=/opt/

# Start the service and enable it at boot
systemctl start vsftpd
systemctl enable vsftpd

compute

mv /etc/yum.repos.d/* /etc/yum
cat /etc/yum.repos.d/ftp.repo
[centos]
name=centos
baseurl=ftp://192.168.100.10/centos
gpgcheck=0
enabled=1
[openstack]
name=openstack
baseurl=ftp://192.168.100.10/openstack/iaas-repo
gpgcheck=0
enabled=1
# Refresh the repo metadata
yum repolist

4. Deploy a chrony server on the Controller node that allows other nodes to synchronize time; start the service and enable it at boot. On the compute node, set the controller node as the upstream NTP server; restart the service and enable it at boot.

controller

# Install the chrony server
yum install -y chrony
vi /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server controller iburst
# Append at the end of the file
allow 192.168.100.0/24
local stratum 10
# Restart and enable at boot
systemctl restart chronyd
systemctl enable chronyd
# Verify
chronyc sources

compute

vi /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server controller iburst
# Restart and enable at boot
systemctl restart chronyd
systemctl enable chronyd
# Verify
chronyc sources

5. On the compute node, create two 100 GB partitions from the blank disk space.

compute

fdisk /dev/sdb
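fdisk is interactive; the sketch below scripts the same dialogue through a heredoc (assuming /dev/sdb is the blank disk and has room for two 100 GB primary partitions):

# n = new partition, p = primary, blank lines accept the default start sector, w = write
fdisk /dev/sdb <<EOF
n
p
1

+100G
n
p
2

+100G
w
EOF
lsblk /dev/sdb   # verify the two 100 GB partitions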

Task 2: OpenStack Deployment Tasks (10 points)

1. Install the quickinstall package on both the controller node and the compute node, and set the basic variables in the configuration script (/etc/cloudconfig/openrc.sh) according to Table 2.

(Table 2: basic variables for /etc/cloudconfig/openrc.sh; the original screenshot is not reproduced.)

2. On the controller node, run the /usr/local/bin/openstack-install-mysql.sh script to install the Mariadb, Memcached, and etcd services.

3. On the controller node, run the /usr/local/bin/openstack-install-keystone.sh script to install the Keystone service.

4. On the controller node, run the /usr/local/bin/openstack-install-glance.sh script to install the Glance service.

5. On the controller node and the compute node, run the /usr/local/bin/openstack-install-nova-controller.sh and /usr/local/bin/openstack-install-nova-compute.sh scripts respectively to install the Nova service.

6. On the controller node and the compute node, edit and run the /usr/local/bin/openstack-install-neutron-controller.sh and /usr/local/bin/openstack-install-neutron-compute.sh scripts respectively to install the Neutron service, using vlan networking.

7. On the controller node, run the /usr/local/bin/openstack-install-heat.sh script to install the dashboard service.

8. On the controller node and the compute node, edit and run the /usr/local/bin/openstack-install-cinder-controller.sh and /usr/local/bin/openstack-install-cinder-compute.sh scripts respectively to install the Cinder service.

Task 3: OpenStack Cloud Platform Operations (10 points)

1. On the openstack private cloud platform, use the command line to create an image named cirros from the cirros.qcow2 image.

openstack image create cirros --disk-format qcow2 --container-format bare --file /root/cirros-0.3.4-x86_64-disk.img
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2022-10-11T02:22:06Z |
| disk_format | qcow2 |
| file | /v2/images/4650b6d8-97dc-44e2-89f0-3c674f22f422/file |
| id | 4650b6d8-97dc-44e2-89f0-3c674f22f422 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | d58a1b0d053d4fd7ac1ddd98131973b3 |
| protected | False |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2022-10-11T02:22:06Z |
| virtual_size | None |
| visibility | shared |
+------------------+------------------------------------------------------+

2. On the openstack private cloud platform, use the command line to create a flavor named Fmin with ID 1, 1024 MB of memory, 10 GB of disk, and 1 vcpu.

nova flavor-create Fmin 1 1024 10 1
+----+------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | Description |
+----+------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 1 | Fmin | 1024 | 10 | 0 | | 1 | 1.0 | True | - |
+----+------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
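The same flavor can also be created with the unified openstack client:

openstack flavor create --id 1 --ram 1024 --disk 10 --vcpus 1 Fmin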

3. On the openstack private cloud platform, use the command line to create the external network extnet and its subnet extsubnet; the floating IP range is 192.168.x.0/24 (where x is the seat number), the gateway is 192.168.x.1, and the network uses vlan mode.

# Create the external network
openstack network create extnet --provider-network-type vlan --external --provider-physical-network provider
# Create the subnet on it
openstack subnet create extsubnet --network extnet --dhcp --gateway 192.168.200.1 --allocation-pool start=192.168.200.100,end=192.168.200.200 --subnet-range 192.168.200.0/24
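If a specific VLAN ID is expected rather than one allocated from the configured range, it can be pinned at creation time; a sketch (the segment ID 200 is an assumed value):

openstack network create extnet --external --provider-network-type vlan --provider-physical-network provider --provider-segment 200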

4. On the openstack private cloud platform, use the command line to create the internal network intnet and its subnet intsubnet; the instance subnet range is 10.10.x.0/24 (where x is the seat number) and the gateway is 10.10.x.1.

# Create the internal network
openstack network create intnet --internal
# Create the subnet on it
openstack subnet create intsubnet --network intnet --dhcp --gateway 10.10.200.1 --allocation-pool start=10.10.200.100,end=10.10.200.200 --subnet-range 10.10.200.0/24

5. Add a router named ext-router, configure the router's interface addresses, and connect the internal subnet intsubnet to the external network extnet.

# Create the router
openstack router create ext-router --enable
openstack router set --enable --enable-snat --external-gateway extnet ext-router
# Attach the internal subnet
openstack router add subnet ext-router intsubnet

6. On the openstack private cloud platform, use the command line to create an instance named VM1 from the "cirros" image with a 1vCPU/1G/10G flavor on the intsubnet network, bind a floating IP, start VM1, and make it reachable for remote login from a PC.

# If VM1 is unreachable, the external network type may be wrong; adjust the network mode if needed.
# Create the instance
nova boot --image cirros --flavor Fmin --nic net-name=intnet VM1
# Create a floating IP
openstack floating ip create extnet
# Look up the floating IP's ID
openstack floating ip list
# Bind it to the instance
openstack server add floating ip VM1 7bf5fa40-ec57-4abf-a666-b65082102a22
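To verify the login from the PC (for cirros 0.3.x images the default credentials are user cirros, password "cubswin:)"; substitute the floating IP allocated above):

openstack server list        # VM1 should be ACTIVE with the floating IP attached
ssh cirros@<floating-ip>     # remote login from the PC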

7. On the openstack private cloud platform, create a volume type named "lvm", create one 40 GB volume of the lvm type, and attach it to instance VM1.

# Create the volume type
openstack volume type create lvm
# Create a volume of type lvm
openstack volume create --type lvm --size 40 disk1
# Attach it to VM1
openstack server add volume VM1 disk1
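A quick check that the attachment succeeded:

openstack volume list   # disk1 should show as "in-use", attached to VM1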

8. On instance VM1, partition the attached volume into four 10 GB partitions and build a raid 5 array, using one partition as a hot spare.

# Partition the disk (note: each partition should be 10 GB)
fdisk /dev/vdb
Command (m for help): p

Disk /dev/vdb: 21.5 GB, 21474836480 bytes, 41943040 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk label type: dos
Disk identifier: 0x60fa7e70

Device Boot Start End Blocks Id System
/dev/vdb1 2048 8390655 4194304 83 Linux
/dev/vdb2 8390656 16779263 4194304 83 Linux
/dev/vdb3 16779264 25167871 4194304 83 Linux
/dev/vdb4 25167872 33556479 4194304 83 Linux

# Add the yum source
mv /etc/yum.repos.d/* /etc/yum
cat /etc/yum.repos.d/ftp.repo
[centos]
name=centos
baseurl=ftp://192.168.100.10/centos
gpgcheck=0
enabled=1
[openstack]
name=openstack
baseurl=ftp://192.168.100.10/openstack/iaas-repo
gpgcheck=0
enabled=1

# Install mdadm
yum install -y mdadm
mdadm -C /dev/md5 -a yes -l 5 -n 3 -x 1 /dev/vdb{1,2,3,4}
# Option reference
# -C           create an array
# /dev/md5     name of the array to create; use whatever the task asks for
# -a yes|no    whether to create the array's device file automatically
# -l           RAID level
# -n           number of active disks in the array
# -x           number of spare disks (the hot spares); when an active disk fails,
#              a spare switches from spare to spare-rebuilding and takes over

# Show the array details
mdadm -D /dev/md5   # -D prints detailed information about the array

# Mount it
mkdir /backup            # create the mount point
mkfs.ext4 /dev/md5       # format the array as ext4
mount /dev/md5 /backup   # mount the array on /backup
df -h                    # check the mount
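To keep the array and the mount across reboots (a common follow-up, though not explicitly required here):

mdadm -Ds > /etc/mdadm.conf                               # persist the array definition
echo '/dev/md5 /backup ext4 defaults 0 0' >> /etc/fstab   # persist the mount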

9. On the Controller node, write the shell script /root/openstack/deletevm.sh to release instance VM1, and run the script to complete the release.

cat deletevm.sh
#!/bin/bash
source /etc/keystone/admin-openrc.sh
openstack server remove volume VM1 disk1   # detach the attached volume disk1
openstack server remove floating ip VM1 192.168.200.118   # detach the floating IP
openstack server delete VM1
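Running the script and verifying the release:

chmod +x /root/openstack/deletevm.sh
bash /root/openstack/deletevm.sh
openstack server list   # VM1 should no longer be listed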

Exam Set 2

Task 1: Basic Operations and Maintenance

1. Following the IP address plan in Table 1, configure the IP address of each server node and verify that the network communicates normally. Set the hostname of cloud server 1 to Controller and the hostname of cloud server 2 to Compute, add the IP-to-hostname mappings to the hosts file, stop the firewall and disable it at boot, and set SELinux to Permissive mode.

controller

# Set the hostname
hostnamectl set-hostname controller
# Add the host mappings (resulting /etc/hosts)
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10 controller
192.168.100.20 compute
# Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# Set SELinux
setenforce 0
vi /etc/selinux/config
SELINUX=permissive   # change this line to permissive

compute

# Set the hostname
hostnamectl set-hostname compute
# Add the host mappings (resulting /etc/hosts)
cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10 controller
192.168.100.20 compute
# Stop and disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# Set SELinux
setenforce 0
vi /etc/selinux/config
SELINUX=permissive   # change this line to permissive

2. Upload the provided CentOS-7-x86_64-DVD-1804.iso and OpenStackQueens.iso disc images to the /root directory of the Compute node. Then create a centos directory and an openstack directory under /opt, mount the CentOS-7-x86_64-DVD-1804.iso image on the centos directory, and mount the OpenStackQueens.iso image on the openstack directory.

(If the image is unavailable, the national-competition image may be used instead.)

# Create the mount points
mkdir /opt/centos
mkdir /opt/openstack
# Mount the ISOs (loop devices are attached automatically)
mount CentOS-7-x86_64-DVD-1804.iso /opt/centos/
mount chinaskills_cloud_iaas.iso /opt/openstack/

3. On the Compute node, install the httpd server from the packages in the centos directory and enable it at boot, then serve a yum repository over http. Create the yum source file http.repo on both the controller node and the compute node, using the node's IP address.

compute

mv /etc/yum.repos.d/* /etc/yum
# Write the local yum source (resulting file)
cat /etc/yum.repos.d/local.repo
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[openstack]
name=openstack
baseurl=file:///opt/openstack/iaas-repo
gpgcheck=0
enabled=1
# Refresh the repo metadata
yum repolist
# Install httpd and serve /opt over HTTP (the task asks for httpd here)
yum install -y httpd
vi /etc/httpd/conf/httpd.conf
DocumentRoot "/opt"
<Directory "/opt">
    Require all granted
</Directory>

# Start the service and enable it at boot
systemctl start httpd
systemctl enable httpd

controller

mv /etc/yum.repos.d/* /etc/yum
cat /etc/yum.repos.d/http.repo
[centos]
name=centos
baseurl=http://192.168.100.20/centos
gpgcheck=0
enabled=1
[openstack]
name=openstack
baseurl=http://192.168.100.20/openstack/iaas-repo
gpgcheck=0
enabled=1
# Refresh the repo metadata
yum repolist

4. Deploy a chrony server on the Controller node that allows other nodes to synchronize time; start the service and enable it at boot. On the compute node, set the controller node as the upstream NTP server; restart the service and enable it at boot.

controller

# Install the chrony server
yum install -y chrony
vi /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server controller iburst
# Append at the end of the file
allow 192.168.100.0/24
local stratum 10
# Restart and enable at boot
systemctl restart chronyd
systemctl enable chronyd
# Verify
chronyc sources

compute

vi /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server controller iburst
# Restart and enable at boot
systemctl restart chronyd
systemctl enable chronyd
# Verify
chronyc sources

5. On the compute node, check the partition layout, then create two 100 GB partitions from the blank disk space.

# Check the current partition layout, then partition the blank disk
# (see the scripted fdisk sketch under Set 1, task 5)
lsblk
fdisk /dev/sdb

Task 2: OpenStack Deployment Tasks

1. Install the quickinstall package on both the controller node and the compute node, and set the basic variables in the configuration script (/etc/cloudconfig/openrc.sh) according to Table 2.

2. On the controller node, run the /usr/local/bin/openstack-install-mysql.sh script to install the Mariadb, Memcached, and etcd services.

3. On the controller node, run the /usr/local/bin/openstack-install-keystone.sh script to install the Keystone service.

4. On the controller node, run the /usr/local/bin/openstack-install-glance.sh script to install the Glance service.

5. On the controller node and the compute node, run the /usr/local/bin/openstack-install-nova-controller.sh and /usr/local/bin/openstack-install-nova-compute.sh scripts respectively to install the Nova service.

6. On the controller node and the compute node, edit and run the /usr/local/bin/openstack-install-neutron-controller.sh and /usr/local/bin/openstack-install-neutron-compute.sh scripts respectively to install the Neutron service, using vlan networking.

7. On the controller node, run the /usr/local/bin/openstack-install-heat.sh script to install the dashboard service.

8. On the controller node and the compute node, edit and run the /usr/local/bin/openstack-install-swift-controller.sh and /usr/local/bin/openstack-install-swift-compute.sh scripts respectively to install the Swift service.

Task 3: OpenStack Cloud Platform Operations (10 points)

1. On the openstack private cloud platform, use the command line to create an image named cirros from the cirros.qcow2 image.

openstack image create cirros --disk-format qcow2 --container-format bare --file /root/cirros-0.3.4-x86_64-disk.img
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2022-10-11T02:22:06Z |
| disk_format | qcow2 |
| file | /v2/images/4650b6d8-97dc-44e2-89f0-3c674f22f422/file |
| id | 4650b6d8-97dc-44e2-89f0-3c674f22f422 |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | d58a1b0d053d4fd7ac1ddd98131973b3 |
| protected | False |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2022-10-11T02:22:06Z |
| virtual_size | None |
| visibility | shared |
+------------------+------------------------------------------------------+

2. On the openstack private cloud platform, write a template server.yml to create a flavor named "m1.flavor" with ID 1234, 1024 MB of memory, a 20 GB disk, and 2 vcpus.

# Install the heat service on the controller node
[root@controller opt]# iaas-install-heat.sh

# List the available resource types
[root@controller opt]# heat resource-type-list
WARNING (shell) "heat resource-type-list" is deprecated, please use "openstack orchestration resource type list" instead # deprecation warning only; it suggests the newer command and does not affect the steps below
+------------------------------------------+
| resource_type |
+------------------------------------------+
| AWS::AutoScaling::AutoScalingGroup |
|... |
| OS::Cinder::EncryptedVolumeType |
| OS::Cinder::QoSAssociation |
| OS::Cinder::QoSSpecs |
| OS::Cinder::Quota |
| OS::Cinder::Volume |
| OS::Cinder::VolumeAttachment |
| OS::Cinder::VolumeType |
| OS::Glance::Image |
| OS::Heat::AccessPolicy |
| OS::Heat::AutoScalingGroup |
| OS::Heat::CloudConfig |
| OS::Heat::DeployedServer |
| OS::Heat::InstanceGroup |
| OS::Heat::MultipartMime |
| OS::Heat::None |
| OS::Heat::RandomString |
| OS::Heat::ResourceChain |
| OS::Heat::ResourceGroup |
| OS::Heat::ScalingPolicy |
| OS::Heat::SoftwareComponent |
| OS::Heat::SoftwareConfig |
| OS::Heat::SoftwareDeployment |
| OS::Heat::SoftwareDeploymentGroup |
| OS::Heat::Stack |
| OS::Heat::StructuredConfig |
| OS::Heat::StructuredDeployment |
| OS::Heat::StructuredDeploymentGroup |
| OS::Heat::TestResource |
| OS::Heat::UpdateWaitConditionHandle |
| OS::Heat::Value |
| OS::Heat::WaitCondition |
| OS::Heat::WaitConditionHandle |
| OS::Keystone::Domain |
| OS::Keystone::Endpoint |
| OS::Keystone::Group |
| OS::Keystone::GroupRoleAssignment |
| OS::Keystone::Project |
| OS::Keystone::Region |
| OS::Keystone::Role |
| OS::Keystone::Service |
| OS::Keystone::User |
| OS::Keystone::UserRoleAssignment |
| OS::Neutron::AddressScope |
| OS::Neutron::ExtraRoute |
| OS::Neutron::FloatingIP |
| OS::Neutron::FloatingIPAssociation |
| OS::Neutron::FlowClassifier |
| OS::Neutron::MeteringLabel |
| OS::Neutron::MeteringRule |
| OS::Neutron::Net |
| OS::Neutron::NetworkGateway |
| OS::Neutron::Port |
| OS::Neutron::PortPair |
| OS::Neutron::ProviderNet |
| OS::Neutron::Quota |
| OS::Neutron::RBACPolicy |
| OS::Neutron::Router |
| OS::Neutron::RouterInterface |
| OS::Neutron::SecurityGroup |
| OS::Neutron::SecurityGroupRule |
| OS::Neutron::Subnet |
| OS::Neutron::SubnetPool |
| OS::Nova::Flavor |
| OS::Nova::FloatingIP |
| OS::Nova::FloatingIPAssociation |
| OS::Nova::HostAggregate |
| OS::Nova::KeyPair |
| OS::Nova::Quota |
| OS::Nova::Server |
| OS::Nova::ServerGroup |
| OS::Senlin::Cluster |
| OS::Senlin::Node |
| OS::Senlin::Policy |
| OS::Senlin::Profile |
| OS::Senlin::Receiver |
+------------------------------------------+

# List the template versions available for orchestration
[root@controller opt]# openstack orchestration template version list
+--------------------------------------+------+------------------------------+
| Version | Type | Aliases |
+--------------------------------------+------+------------------------------+
| AWSTemplateFormatVersion.2010-09-09 | cfn | |
| HeatTemplateFormatVersion.2012-12-12 | cfn | |
| heat_template_version.2013-05-23 | hot | |
| heat_template_version.2014-10-16 | hot | |
| heat_template_version.2015-04-30 | hot | |
| heat_template_version.2015-10-15 | hot | |
| heat_template_version.2016-04-08 | hot | |
| heat_template_version.2016-10-14 | hot | heat_template_version.newton |
| heat_template_version.2017-02-24 | hot | heat_template_version.ocata |
| heat_template_version.2017-09-01 | hot | heat_template_version.pike |
| heat_template_version.2018-03-02 | hot | heat_template_version.queens |
+--------------------------------------+------+------------------------------+

# Write the server.yaml file under /root/
[root@controller ~]# vim server.yaml

# Contents of server.yaml
[root@controller ~]# cat server.yaml
heat_template_version: 2015-04-30   # heat template version in use
description: Create Flavor          # description
resources:                          # resource definitions
  flavor:                           # resource ID; must be unique within the template's resources section
    type: OS::Nova::Flavor          # resource type: a Nova flavor
    properties:                     # resource-specific properties
      name: "m1.flavor"             # name of the flavor
      flavorid: "1234"              # flavor ID; a UUID is generated if omitted
      disk: 20                      # disk size, in GB by default
      ram: 1024                     # memory size, in MB
      vcpus: 2                      # number of vcpus (the task asks for 2)
outputs:                            # output definitions
  flavor_info:                      # name of the output
    description: Get the information of virtual machine type   # output description
    value: { get_attr: [ flavor, show ] }   # get_attr resolves the resource's attributes at runtime

# Create the stack
[root@controller ~]# heat stack-create m1_flavor_stack -f server.yaml
WARNING (shell) "heat stack-create" is deprecated, please use "openstack stack create" instead
WARNING (shell) "heat stack-list" is deprecated, please use "openstack stack list" instead
+--------------------------------------+-----------------+--------------------+----------------------+--------------+----------------------------------+
| id | stack_name | stack_status | creation_time | updated_time | project |
+--------------------------------------+-----------------+--------------------+----------------------+--------------+----------------------------------+
| cb5d6ca6-9106-46f3-aa7d-8f85f0a86461 | m1_flavor_stack | CREATE_IN_PROGRESS | 2022-04-10T09:50:46Z | None | d27d72c12d3b46b89572df53a71e5d04 |
+--------------------------------------+-----------------+--------------------+----------------------+--------------+----------------------------------+

# List the stacks
[root@controller ~]# openstack stack list
+--------------------------------------+-----------------+----------------------------------+-----------------+----------------------+--------------+
| ID | Stack Name | Project | Stack Status | Creation Time | Updated Time |
+--------------------------------------+-----------------+----------------------------------+-----------------+----------------------+--------------+
| 5a4b1816-aaf6-4739-83e5-4001c80d89d1 | m1_flavor_stack | d27d72c12d3b46b89572df53a71e5d04 | CREATE_COMPLETE | 2022-04-10T10:31:32Z | None |
+--------------------------------------+-----------------+----------------------------------+-----------------+----------------------+--------------+
## Stack Status CREATE_COMPLETE means the stack was created successfully

# Show the details of the given stack
[root@controller ~]# openstack stack show m1_flavor_stack
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------+
| id | 5a4b1816-aaf6-4739-83e5-4001c80d89d1 |
| stack_name | m1_flavor_stack |
| description | Create Flavor |
| creation_time | 2022-04-10T10:31:32Z |
| updated_time | None |
| stack_status | CREATE_COMPLETE |
| stack_status_reason | Stack CREATE completed successfully |
| parameters | OS::project_id: d27d72c12d3b46b89572df53a71e5d04 |
| | OS::stack_id: 5a4b1816-aaf6-4739-83e5-4001c80d89d1 |
| | OS::stack_name: m1_flavor_stack |
| | |
| outputs | - description: Get the information of virtual machine type |
| | output_key: flavor_info |
| | output_value: |
| | OS-FLV-DISABLED:disabled: false |
| | OS-FLV-EXT-DATA:ephemeral: 0 |
| | disk: 20 |
| | id: '1234' |
| | links: |
| | - href: http://controller:8774/v2.1/flavors/1234 |
| | rel: self |
| | - href: http://controller:8774/flavors/1234 |
| | rel: bookmark |
| | name: m1.flavor |
| | os-flavor-access:is_public: true |
| | ram: 1024 |
| | rxtx_factor: 1.0 |
| | swap: '' |
| | vcpus: 2 |
| | |
| links | - href: http://controller:8004/v1/d27d72c12d3b46b89572df53a71e5d04/stacks/m1_flavor_stack/5a4b1816-aaf6-4739-83e5-4001c80d89d1 |
| | rel: self |
| | |
| parent | None |
| disable_rollback | True |
| deletion_time | None |
| stack_user_project_id | 374ce98267964767adacf91527ac0412 |
| capabilities | [] |
| notification_topics | [] |
| stack_owner | None |
| timeout_mins | None |
| tags | None |
+-----------------------+--------------------------------------------------------------------------------------------------------------------------------+

3. On the openstack private cloud platform, use the command line to create the external network extnet and its subnet extsubnet, with a floating IP range of 192.168.x.0/24 (where x is the seat number), gateway 192.168.x.1, and vlan mode; create the internal network intnet and its subnet intsubnet, with an instance subnet range of 10.0.x.0/24 (where x is the seat number) and gateway 10.0.x.1; and connect the internal subnet intsubnet to the external network extnet.

# Create the external network
openstack network create extnet --provider-network-type vlan --external --provider-physical-network provider
# Create the subnet on it
openstack subnet create extsubnet --network extnet --dhcp --gateway 192.168.200.1 --allocation-pool start=192.168.200.100,end=192.168.200.200 --subnet-range 192.168.200.0/24
# Create the internal network
openstack network create intnet --internal
# Create the subnet on it
openstack subnet create intsubnet --network intnet --dhcp --gateway 10.10.200.1 --allocation-pool start=10.10.200.100,end=10.10.200.200 --subnet-range 10.10.200.0/24
# Create the router
openstack router create ext-router --enable
openstack router set --enable --enable-snat --external-gateway extnet ext-router
# Attach the internal subnet
openstack router add subnet ext-router intsubnet

4. On the openstack private cloud platform, use the command line to create an instance named VM1 from the "cirros" image with the m1.flavor flavor on the intsubnet network, bind a floating IP, start VM1, and make it reachable for remote login from a PC.

# If VM1 is unreachable, the external network type may be wrong; adjust the network mode if needed.
# Create the instance
nova boot --image cirros --flavor m1.flavor --nic net-name=intnet VM1
# Create a floating IP
openstack floating ip create extnet
# Look up the floating IP's ID
openstack floating ip list
# Bind it to the instance
openstack server add floating ip VM1 7bf5fa40-ec57-4abf-a666-b65082102a22

5. On the Controller node, write a shell script named modvm.sh that checks the memory size of instance VM1 and resizes VM1 to 2G of memory if it is below 2G.

All nodes

# Allow the resize to land on the same host (needed on a small two-node cluster)
vim /etc/nova/nova.conf
[DEFAULT]
allow_resize_to_same_host=True
scheduler_default_filters=RetryFilter,AvailabilityZoneFilter,RamFilter,ComputeFilter,ComputeCapabilitiesFilter,ImagePropertiesFilter,ServerGroupAntiAffinityFilter,ServerGroupAffinityFilter

#controller
systemctl restart openstack-nova-api.service openstack-nova-scheduler.service openstack-nova-conductor.service openstack-nova-novncproxy.service
#compute
systemctl restart libvirtd.service openstack-nova-compute.service

Write the script

#!/bin/bash
source /etc/keystone/admin-openrc.sh
nova show VM1 | grep flavor:ram > a
b=$(awk '{print $4}' a)
echo $b
if [ "$b" -lt "2048" ]; then
    # resize to a flavor that has 2 GB of RAM (here a flavor named "centos")
    openstack server resize VM1 --flavor centos
fi
echo 'Done!'
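Once the instance reaches VERIFY_RESIZE, the resize still has to be confirmed, e.g. with the nova client:

nova resize-confirm VM1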
6. On the openstack private cloud platform, save instance VM1 as a qcow2-format snapshot in the /root/cloudsave directory on the controller node.

    Method 1

    # Stop the instance
    openstack server stop VM1
    # Find which node hosts the instance
    openstack server show VM1
    # Go to the instance's backing directory
    cd /var/lib/nova/instances/0aa53421-7363-415a-a08d-228023a6b857/
    # Convert the disk into a compressed qcow2 snapshot
    qemu-img convert -c -O qcow2 disk VM1.qcow2
    # Move it to /root/cloudsave
    mkdir /root/cloudsave
    mv VM1.qcow2 /root/cloudsave
    # Show the details
    qemu-img info VM1.qcow2
    image: VM1.qcow2
    file format: qcow2
    virtual size: 20G (21474836480 bytes)
    disk size: 13M
    cluster_size: 65536
    Format specific information:
    compat: 1.1
    lazy refcounts: false
    refcount bits: 16
    corrupt: false

    Method 2

# Stop the instance
openstack server stop VM1
# Create the snapshot image
openstack server image create VM1 --name demo
# Save the snapshot to /root/cloudsave
mkdir /root/cloudsave/
openstack image save demo --file /root/cloudsave/demo.qcow2
# Show the details
qemu-img info demo.qcow2
image: demo.qcow2
file format: qcow2
virtual size: 20G (21474836480 bytes)
disk size: 21M
cluster_size: 65536
Format specific information:
compat: 1.1
lazy refcounts: false
refcount bits: 16
corrupt: false

7. On the controller node, create a container named Chinaskill and obtain the container's storage path; upload the centos7_5.qcow2 image to the chinaskill container in segments, with each segment 10M in size.

# Create the Chinaskill container
swift post Chinaskill
# Upload the image in segments (-S gives the segment size in bytes; 10485760 = 10 MB)
swift upload Chinaskill -S 10485760 CentOS_7.5_x86_64_XD.qcow2
# Show the object's information
swift stat Chinaskill CentOS_7.5_x86_64_XD.qcow2
Account: AUTH_d58a1b0d053d4fd7ac1ddd98131973b3
Container: Chinaskill
Object: opt/openstack/images/CentOS_7.5_x86_64_XD.qcow2
Content Type: application/octet-stream
Content Length: 510459904
Last Modified: Wed, 12 Oct 2022 01:41:17 GMT
ETag: "7d5003b2fb7a024c3abd9510bf6198fa"
Manifest: Chinaskill_segments/opt/openstack/images/CentOS_7.5_x86_64_XD.qcow2/1603918661.000000/510459904/10485760/
Meta Mtime: 1603918661.000000
Accept-Ranges: bytes
X-Timestamp: 1665538876.17294
X-Trans-Id: tx97c10bc263f64a93a2ea6-0063461dda
X-Openstack-Request-Id: tx97c10bc263f64a93a2ea6-0063461dda
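For the container's storage path asked for in the task, one approach is to read the account storage URL from the client's auth output (an assumption: the swift auth subcommand of python-swiftclient, which prints OS_STORAGE_URL; the container path is that URL plus /Chinaskill):

swift auth   # OS_STORAGE_URL=<account URL>; the container lives at <account URL>/Chinaskill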

8. Log in to 172.17.x.10/dashboard and create three instances from the centos7 image to build a rabbitmq cluster. Use normal cluster mode, with one instance as the disk node and the other two as RAM nodes, and start the rabbitmq service once configuration is complete.

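The original leaves this answer empty; below is a minimal sketch under assumed hostnames mq1/mq2/mq3 (mq1 as the disk node), with hosts-file entries and a yum source already in place on all three instances:

# All three nodes: install and start rabbitmq
yum install -y rabbitmq-server
systemctl start rabbitmq-server
# Share mq1's Erlang cookie with the other nodes (all cluster members must use the same cookie)
scp /var/lib/rabbitmq/.erlang.cookie mq2:/var/lib/rabbitmq/
scp /var/lib/rabbitmq/.erlang.cookie mq3:/var/lib/rabbitmq/
# On mq2 and mq3: fix the cookie's ownership and permissions, then restart
chown rabbitmq:rabbitmq /var/lib/rabbitmq/.erlang.cookie
chmod 400 /var/lib/rabbitmq/.erlang.cookie
systemctl restart rabbitmq-server
# On mq2 and mq3: join the cluster as RAM nodes (mq1 stays the disk node)
rabbitmqctl stop_app
rabbitmqctl join_cluster --ram rabbit@mq1
rabbitmqctl start_app
# On any node: verify the cluster
rabbitmqctl cluster_status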

9. Using the centos7 image, create two instances, master and slave, and bind a floating IP to each. Install the MySQL database system on both and configure master-slave replication (master as the primary node, slave as the replica). On the master instance, create the ChinaSkilldb database, create the table testtable (id int not null primary key, Teamname varchar(50), remarks varchar(255)) in it, and insert the record (1, "Cloud", "ChinaSkill").

# Configure the yum source
mv /etc/yum.repos.d/* /etc/yum
cat /etc/yum.repos.d/http.repo
[centos]
name=centos
baseurl=ftp://192.168.100.10/centos
gpgcheck=0
enabled=1
[openstack]
name=openstack
baseurl=ftp://192.168.100.10/openstack/iaas-repo
gpgcheck=0
enabled=1
# Refresh the repo metadata
yum repolist

# Add the host mappings (adjust the hostnames to match)
10.10.200.101 master
10.10.200.109 slave

# Install mariadb on all nodes
yum install -y mariadb mariadb-server   # install the database and server
systemctl start mariadb                 # start the database
systemctl enable mariadb                # enable it at boot

# Initialize the database (all nodes)
mysql_secure_installation
# Answers: Enter, y, 123456, 123456, y, n, y, y

# Master node configuration
cat /etc/my.cnf
[mysqld]
log_bin = mysql-bin        # record the binary log
binlog_ignore_db = mysql   # do not replicate the mysql system database
server_id = 18             # every node in the cluster needs a distinct id
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
# On the slave, add a server_id as well (it must differ from the master's)

# Restart the service
systemctl restart mariadb

# Master node
mysql -uroot -p123456
# Allow root to log in to the database from any client host
grant all privileges on *.* to root@'%' identified by "123456";
# On the master, create a user account for the slave to replicate with
grant replication slave on *.* to 'user'@'slave' identified by '123456';
# Slave node
mysql -uroot -p123456
change master to master_host='master',master_user='user',master_password='123456';
start slave;
show slave status\G   # both Slave_IO_Running and Slave_SQL_Running should be YES

# Create the database on the master
create database ChinaSkilldb;
use ChinaSkilldb;
create table testtable(id int not null primary key,Teamname varchar(50), remarks varchar(255));
insert into testtable values (1,"Cloud","ChinaSkill");
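A quick replication check on the slave:

mysql -uroot -p123456 -e 'select * from ChinaSkilldb.testtable;'   # the row (1, Cloud, ChinaSkill) should appear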