A 模块题目:OpenStack 平台部署与运维

任务 1 私有云平台环境初始化(6 分)

IP 主机名
192.168.157.30 controller
192.168.157.31 compute

1.配置主机名

把 controller 节点主机名设置为 controller, compute 节点主机名设置为 compute。

分别在 controller 节点和 compute 节点将 hostname 命令的返回结果提交到答题框。【0.5 分】

[root@controller ~]# hostname
controller

[root@compute ~]# hostname
compute

解法:

#controller 节点
hostnamectl set-hostname controller
#compute 节点
hostnamectl set-hostname compute

2.配置 hosts 文件

分别在 controller 节点和 compute 节点修改 hosts 文件将 IP 地址映射为主机名。

请在 controller 节点将 cat /etc/hosts 命令的返回结果提交到答题框。 【0.5 分】

[root@controller ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.157.30 controller
192.168.157.31 compute


[root@compute ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.157.30 controller
192.168.157.31 compute

3.挂载光盘镜像

将提供的 CentOS-7-x86_64-DVD-1804.iso 和 bricsskills_cloud_iaas.iso 光盘镜像移动到 controller 节点 /root 目录下,然后在 /opt 目录下使用命令创建 centos 目录和 iaas 目录,并将镜像文件 CentOS-7-x86_64-DVD-1804.iso 挂载到 /opt/centos 目录下,将镜像文件 bricsskills_cloud_iaas.iso 挂载到 /opt/iaas 目录下。

请在 controller 节点将 ls /opt/iaas/命令的返回结果提交到答题框。【0.5 分】

[root@controller ~]# ls /opt/iaas/
iaas-repo  images

解法:

#将指定的镜像上传至/root目录下
#创建挂载目录并写入fstab完成挂载
[root@controller ~]# mkdir /opt/centos /opt/iaas
[root@controller ~]# cat >> /etc/fstab << EOF
/root/CentOS-7-x86_64-DVD-1804.iso /opt/centos iso9660 defaults 0 0
/root/bricsskills_cloud_iaas.iso /opt/iaas iso9660 defaults 0 0
EOF
[root@controller ~]# mount -a

4.配置 controller 节点 yum 源

将 controller 节点原有的 yum 源移动到/home 目录,

为 controller 节点创建本地 yum 源,

yum 源文件名为 local.repo。

请将 yum list | grep vsftpd 的返回结果提交到答题框。【0.5 分】

[root@controller ~]# yum list | grep vsftpd
vsftpd.x86_64 3.0.2-22.el7 centos

解法:

[root@controller ~]# mkdir /home/yum
[root@controller ~]# mv /etc/yum.repos.d/* /home/yum
[root@controller ~]# cat /etc/yum.repos.d/local.repo
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[iaas]
name=iaas
baseurl=file:///opt/iaas/iaas-repo
gpgcheck=0
enabled=1

[root@controller ~]# yum repolist

5.搭建 ftp 服务器

在 controller 节点上安装 vsftpd 服务,将 /opt 目录设为共享,并设置为开机自启动,然后重启服务生效。

请将 cat /etc/vsftpd/vsftpd.conf |grep /opt 命令的返回结果提交到答题框。【1 分】

[root@controller ~]#  cat /etc/vsftpd/vsftpd.conf |grep /opt
anon_root=/opt

解法:

[root@controller ~]# yum install -y vsftpd
[root@controller ~]# cat /etc/vsftpd/vsftpd.conf
#添加
anon_root=/opt
[root@controller ~]# systemctl enable vsftpd --now
Created symlink from /etc/systemd/system/multi-user.target.wants/vsftpd.service to /usr/lib/systemd/system/vsftpd.service.

#关闭防火墙及安全策略
[root@controller ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@controller ~]# setenforce 0
[root@controller ~]# cat /etc/selinux/config
SELINUX=permissive

[root@compute ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@compute ~]# setenforce 0
[root@compute ~]# cat /etc/selinux/config
SELINUX=permissive


6.配置 compute 节点 yum 源

将 compute 节点原有的 yum 源移动到/home 目录,为 compute 节点创建 ftp 源,yum 源文件名为 ftp.repo,其中 ftp 服务器为 controller 节点,配置 ftp 源时不要写 IP 地址。

请将 yum list | grep xiandian 命令的返回结果提交到答题框【1 分】

[root@compute ~]# yum list | grep xiandian
iaas-xiandian.x86_64 2.4-2 iaas-0

解法:

[root@compute ~]# mkdir /home/yum
[root@compute ~]# mv /etc/yum.repos.d/* /home/yum
[root@compute ~]# cat /etc/yum.repos.d/ftp.repo
[centos]
name=centos
baseurl=ftp://controller/centos
gpgcheck=0
enabled=1
[iaas]
name=iaas
baseurl=ftp://controller/iaas/iaas-repo
gpgcheck=0
enabled=1
[root@compute ~]# yum repolist

7.分区

在 compute 节点将 vdb 分为两个区分别为 vdb1 和 vdb2,大小自定义。要求分区格式为 gpt,

使用 mkfs.xfs 命令对文件系统格式化。

请将 lsblk -f 命令的返回结果提交到答题框【1 分】

虚拟机没有vdb区,使用sdb分区进行代替

[root@compute ~]# lsblk -f
NAME FSTYPE LABEL UUID MOUNTPOINT
sda
├─sda1 xfs c2b0cad1-cdef-48c6-adb7-5e4eafaf7458 /boot
└─sda2 LVM2_member oiioJ7-K5mu-sPhw-R4Rb-S3yH-fWo9-23G9aY
├─centos-root xfs e78104be-5c62-4102-b730-f03cde9fa24a /
└─centos-swap swap 74991746-7fc7-4936-a835-4f603f2468c8 [SWAP]
sdb
├─sdb1 xfs 25a1594a-769d-48ac-966c-d59607cd0bb4
└─sdb2 xfs c49808f0-0ac0-4848-957d-e0525f1117b3
sdc
sr0 iso9660 CentOS 7 x86_64 2018-05-03-20-55-23-00

解法:

[root@compute ~]# yum install -y gdisk
[root@compute ~]# gdisk /dev/sdb
GPT fdisk (gdisk) version 0.8.10

Partition table scan:
MBR: not present
BSD: not present
APM: not present
GPT: not present

Creating new GPT entries.

Command (? for help): n
Partition number (1-128, default 1):
First sector (34-41943006, default = 2048) or {+-}size{KMGTP}:
Last sector (2048-41943006, default = 41943006) or {+-}size{KMGTP}: +10G
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'

Command (? for help): n
Partition number (2-128, default 2):
First sector (34-41943006, default = 20973568) or {+-}size{KMGTP}:
Last sector (20973568-41943006, default = 41943006) or {+-}size{KMGTP}: +10G
Last sector (20973568-41943006, default = 41943006) or {+-}size{KMGTP}:
Current type is 'Linux filesystem'
Hex code or GUID (L to show codes, Enter = 8300):
Changed type of partition to 'Linux filesystem'

Command (? for help): w

Final checks complete. About to write GPT data. THIS WILL OVERWRITE EXISTING
PARTITIONS!!

Do you want to proceed? (Y/N): y
OK; writing new GUID partition table (GPT) to /dev/sdb.
The operation has completed successfully.


[root@compute ~]# mkfs.xfs /dev/sdb1
[root@compute ~]# mkfs.xfs /dev/sdb2


8.系统调优-脏数据回写

Linux 系统内存中会存在脏数据,一般系统默认脏数据占用内存 30%时会回写磁盘,修改系统配置文件,要求将回写磁盘的大小调整为 60%。

在 controller 节点请将 sysctl -p 命令的返回结果提交到答题框。【1 分】

[root@controller ~]# sysctl -p
vm.dirty_ratio = 60

解法:

[root@controller ~]# cat /etc/sysctl.conf
vm.dirty_ratio = 60
[root@controller ~]# sysctl -p

任务 2 OpenStack 搭建任务(8 分)

1.修改脚本文件

在 controller 节点和 compute 节点分别安装 iaas-xiandian 软件包,修改脚本文件基本变量(脚本文件为/etc/xiandian/openrc.sh),修改完成后使用命令生效该脚本文件。

在 controller 节点请将 echo $INTERFACE_NAME 命令的返回结果提交到答题框。【0.5 分】

[root@controller ~]# echo $INTERFACE_NAME
eth36

解法:

#controller节点与compute节点做法相同
#只修改interface_ip和INTERFACE_NAME 即可

[root@controller ~]# yum install -y iaas-xiandian
[root@controller ~]# cat /etc/xiandian/openrc.sh
#--------------------system Config--------------------##
#Controller Server Manager IP. example:x.x.x.x
HOST_IP=192.168.157.30

#Controller HOST Password. example:000000
HOST_PASS=000000

#Controller Server hostname. example:controller
HOST_NAME=controller

#Compute Node Manager IP. example:x.x.x.x
HOST_IP_NODE=192.168.157.31

#Compute HOST Password. example:000000
HOST_PASS_NODE=000000

#Compute Node hostname. example:compute
HOST_NAME_NODE=compute

#--------------------Chrony Config-------------------##
#Controller network segment IP. example:x.x.0.0/16(x.x.x.0/24)
network_segment_IP=192.168.157.0/24

#--------------------Rabbit Config ------------------##
#user for rabbit. example:openstack
RABBIT_USER=openstack

#Password for rabbit user .example:000000
RABBIT_PASS=000000

#--------------------MySQL Config---------------------##
#Password for MySQL root user . exmaple:000000
DB_PASS=000000

#--------------------Keystone Config------------------##
#Password for Keystore admin user. exmaple:000000
DOMAIN_NAME=demo
ADMIN_PASS=000000
DEMO_PASS=000000

#Password for Mysql keystore user. exmaple:000000
KEYSTONE_DBPASS=000000

#--------------------Glance Config--------------------##
#Password for Mysql glance user. exmaple:000000
GLANCE_DBPASS=000000

#Password for Keystore glance user. exmaple:000000
GLANCE_PASS=000000

#--------------------Nova Config----------------------##
#Password for Mysql nova user. exmaple:000000
NOVA_DBPASS=000000

#Password for Keystore nova user. exmaple:000000
NOVA_PASS=000000

#--------------------Neturon Config-------------------##
#Password for Mysql neutron user. exmaple:000000
NEUTRON_DBPASS=000000

#Password for Keystore neutron user. exmaple:000000
NEUTRON_PASS=000000

#metadata secret for neutron. exmaple:000000
METADATA_SECRET=000000

#Tunnel Network Interface. example:x.x.x.x
INTERFACE_IP=192.168.157.30

#External Network Interface. example:eth1
INTERFACE_NAME=eth36

#External Network The Physical Adapter. example:provider
Physical_NAME=provider

#First Vlan ID in VLAN RANGE for VLAN Network. exmaple:101
minvlan=101

#Last Vlan ID in VLAN RANGE for VLAN Network. example:200
maxvlan=200

#--------------------Cinder Config--------------------##
#Password for Mysql cinder user. exmaple:000000
CINDER_DBPASS=000000

#Password for Keystore cinder user. exmaple:000000
CINDER_PASS=000000

#Cinder Block Disk. example:md126p3
BLOCK_DISK=sdb1

#--------------------Swift Config---------------------##
#Password for Keystore swift user. exmaple:000000
SWIFT_PASS=000000

#The NODE Object Disk for Swift. example:md126p4.
OBJECT_DISK=sdb2

#The NODE IP for Swift Storage Network. example:x.x.x.x.
STORAGE_LOCAL_NET_IP=192.168.157.31

#--------------------Heat Config----------------------##
#Password for Mysql heat user. exmaple:000000
HEAT_DBPASS=000000

#Password for Keystore heat user. exmaple:000000
HEAT_PASS=000000

#--------------------Zun Config-----------------------##
#Password for Mysql Zun user. exmaple:000000
ZUN_DBPASS=000000

#Password for Keystore Zun user. exmaple:000000
ZUN_PASS=000000

#Password for Mysql Kuryr user. exmaple:000000
KURYR_DBPASS=000000

#Password for Keystore Kuryr user. exmaple:000000
KURYR_PASS=000000

#--------------------Ceilometer Config----------------##
#Password for Gnocchi ceilometer user. exmaple:000000
CEILOMETER_DBPASS=000000

#Password for Keystore ceilometer user. exmaple:000000
CEILOMETER_PASS=000000

#--------------------AODH Config----------------##
#Password for Mysql AODH user. exmaple:000000
AODH_DBPASS=000000

#Password for Keystore AODH user. exmaple:000000
AODH_PASS=000000

#--------------------Barbican Config----------------##
#Password for Mysql Barbican user. exmaple:000000
BARBICAN_DBPASS=000000

#Password for Keystore Barbican user. exmaple:000000
BARBICAN_PASS=000000

[root@controller ~]# source /etc/xiandian/openrc.sh

2.修改脚本文件

在 compute 节点配置/etc/xiandian/openrc.sh 文件,根据环境情况修改参数,块存储服务的后端使用第二块硬盘的第一个分区,生效该参数文件。

请将 echo $INTERFACE_IP&& echo $BLOCK_DISK 命令的返回结果提交到答题框。【0.5 分】

[root@compute ~]#  echo $INTERFACE_IP&& echo $BLOCK_DISK
192.168.157.31
sdb1

解法:

[root@compute ~]# cat /etc/xiandian/openrc.sh
#--------------------system Config--------------------##
#Controller Server Manager IP. example:x.x.x.x
HOST_IP=192.168.157.30

#Controller HOST Password. example:000000
HOST_PASS=000000

#Controller Server hostname. example:controller
HOST_NAME=controller

#Compute Node Manager IP. example:x.x.x.x
HOST_IP_NODE=192.168.157.31

#Compute HOST Password. example:000000
HOST_PASS_NODE=000000

#Compute Node hostname. example:compute
HOST_NAME_NODE=compute

#--------------------Chrony Config-------------------##
#Controller network segment IP. example:x.x.0.0/16(x.x.x.0/24)
network_segment_IP=192.168.157.0/24

#--------------------Rabbit Config ------------------##
#user for rabbit. example:openstack
RABBIT_USER=openstack

#Password for rabbit user .example:000000
RABBIT_PASS=000000

#--------------------MySQL Config---------------------##
#Password for MySQL root user . exmaple:000000
DB_PASS=000000

#--------------------Keystone Config------------------##
#Password for Keystore admin user. exmaple:000000
DOMAIN_NAME=demo
ADMIN_PASS=000000
DEMO_PASS=000000

#Password for Mysql keystore user. exmaple:000000
KEYSTONE_DBPASS=000000

#--------------------Glance Config--------------------##
#Password for Mysql glance user. exmaple:000000
GLANCE_DBPASS=000000

#Password for Keystore glance user. exmaple:000000
GLANCE_PASS=000000

#--------------------Nova Config----------------------##
#Password for Mysql nova user. exmaple:000000
NOVA_DBPASS=000000

#Password for Keystore nova user. exmaple:000000
NOVA_PASS=000000

#--------------------Neturon Config-------------------##
#Password for Mysql neutron user. exmaple:000000
NEUTRON_DBPASS=000000

#Password for Keystore neutron user. exmaple:000000
NEUTRON_PASS=000000

#metadata secret for neutron. exmaple:000000
METADATA_SECRET=000000

#Tunnel Network Interface. example:x.x.x.x
INTERFACE_IP=192.168.157.31

#External Network Interface. example:eth1
INTERFACE_NAME=eth37

#External Network The Physical Adapter. example:provider
Physical_NAME=provider

#First Vlan ID in VLAN RANGE for VLAN Network. exmaple:101
minvlan=101

#Last Vlan ID in VLAN RANGE for VLAN Network. example:200
maxvlan=200

#--------------------Cinder Config--------------------##
#Password for Mysql cinder user. exmaple:000000
CINDER_DBPASS=000000

#Password for Keystore cinder user. exmaple:000000
CINDER_PASS=000000

#Cinder Block Disk. example:md126p3
BLOCK_DISK=sdb1

#--------------------Swift Config---------------------##
#Password for Keystore swift user. exmaple:000000
SWIFT_PASS=000000

#The NODE Object Disk for Swift. example:md126p4.
OBJECT_DISK=sdb2

#The NODE IP for Swift Storage Network. example:x.x.x.x.
STORAGE_LOCAL_NET_IP=192.168.157.31

#--------------------Heat Config----------------------##
#Password for Mysql heat user. exmaple:000000
HEAT_DBPASS=000000

#Password for Keystore heat user. exmaple:000000
HEAT_PASS=000000

#--------------------Zun Config-----------------------##
#Password for Mysql Zun user. exmaple:000000
ZUN_DBPASS=000000

#Password for Keystore Zun user. exmaple:000000
ZUN_PASS=000000

#Password for Mysql Kuryr user. exmaple:000000
KURYR_DBPASS=000000

#Password for Keystore Kuryr user. exmaple:000000
KURYR_PASS=000000

#--------------------Ceilometer Config----------------##
#Password for Gnocchi ceilometer user. exmaple:000000
CEILOMETER_DBPASS=000000

#Password for Keystore ceilometer user. exmaple:000000
CEILOMETER_PASS=000000

#--------------------AODH Config----------------##
#Password for Mysql AODH user. exmaple:000000
AODH_DBPASS=000000

#Password for Keystore AODH user. exmaple:000000
AODH_PASS=000000

#--------------------Barbican Config----------------##
#Password for Mysql Barbican user. exmaple:000000
BARBICAN_DBPASS=000000

#Password for Keystore Barbican user. exmaple:000000
BARBICAN_PASS=000000

[root@compute ~]# source /etc/xiandian/openrc.sh

3.安装 openstack 包

分别在 controller 节点和 compute 节点执行 iaas-pre-host.sh 文件(不需要重启云主机)。

在 controller 节点请将 openstack --version 命令的返回结果提交到答题框。【1 分】

[root@controller ~]# openstack --version
openstack 3.14.3

解法:

[root@controller ~]# iaas-pre-host.sh
[root@compute ~]# iaas-pre-host.sh

4. 搭建数据库组件

在 controller 节点执行 iaas-install-mysql.sh 脚本,会自行安装 mariadb、memcached、

rabbitmq 等服务和完成相关配置。执行完成后修改配置文件将缓存 CACHESIZE 修改为 128,

并重启相应服务。

请将 ps aux|grep memcached 命令的返回结果提交到答题框。【1 分】

[root@controller ~]# ps aux|grep memcached
memcach+ 15901 0.1 0.0 443040 2164 ? Ssl 09:15 0:00 /usr/bin/memcached -p 11211 -u memcached -m 128 -c 1024 -l 127.0.0.1,::1,controller
root 15919 0.0 0.0 112704 960 pts/0 S+ 09:15 0:00 grep --color=auto memcached

解法:

[root@controller ~]# iaas-install-mysql.sh
[root@controller ~]# rpm -qc memcached
/etc/sysconfig/memcached
[root@controller ~]# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="128"
OPTIONS="-l 127.0.0.1,::1,controller"
[root@controller ~]# systemctl restart memcached

5.搭建认证服务组件

在 controller 节点执行 iaas-install-keystone.sh 脚本,会自行安装 keystone 服务和完成相关配置。使用 openstack 命令,创建一个名为 tom 的账户,密码为 tompassword123,邮箱为 tom@example.com。

请将 openstack user list 命令的返回结果提交到答题框。【1 分】

[root@controller ~]# openstack user list
+----------------------------------+-------+
| ID | Name |
+----------------------------------+-------+
| 0a22a2d4f3964cfbbbd6474dc92cca01 | admin |
| 196d426492be403b8fbaa4b0c0f8e2a9 | tom |
| 314971684b4d4345b5aa43b2dd55339f | demo |
+----------------------------------+-------+

解法:

[root@controller ~]# iaas-install-keystone.sh
[root@controller ~]# source /etc/keystone/admin-openrc.sh
[root@controller ~]# openstack user create tom --password tompassword123 --email tom@example.com --domain demo
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | ed6f7dc2006d4010bd9194ebc576d9e9 |
| email | tom@example.com |
| enabled | True |
| id | 196d426492be403b8fbaa4b0c0f8e2a9 |
| name | tom |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

6.搭建镜像服务组件

在 controller 节点执行 iaas-install-glance.sh 脚本,会自行安装 glance 服务和完成相关配置。完成后使用 openstack 命令,创建一个名为 cirros 的镜像,镜像文件使用 cirros-0.3.4-x86_64-disk.img。

请将 openstack image show cirros 命令的返回结果提交到答题框。【1 分】

[root@controller ~]#  openstack image show cirros
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | 443b7623e27ecf03dc9e01ee93f67afe |
| container_format | bare |
| created_at | 2023-03-13T13:29:50Z |
| disk_format | qcow2 |
| file | /v2/images/4219d1cb-5238-4720-a7be-167f9b158a9b/file |
| id | 4219d1cb-5238-4720-a7be-167f9b158a9b |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 1a99aaa6a1024d84a00a779c4d186b44 |
| protected | False |
| schema | /v2/schemas/image |
| size | 12716032 |
| status | active |
| tags | |
| updated_at | 2023-03-13T13:29:50Z |
| virtual_size | None |
| visibility | shared |
+------------------+------------------------------------------------------+

解法:

[root@controller ~]# iaas-install-glance.sh
[root@controller ~]# openstack image create --disk-format qcow2 --container-format bare --file /root/cirros-0.4.0-x86_64-disk.img cirros
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | 443b7623e27ecf03dc9e01ee93f67afe |
| container_format | bare |
| created_at | 2023-03-13T13:29:50Z |
| disk_format | qcow2 |
| file | /v2/images/4219d1cb-5238-4720-a7be-167f9b158a9b/file |
| id | 4219d1cb-5238-4720-a7be-167f9b158a9b |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 1a99aaa6a1024d84a00a779c4d186b44 |
| protected | False |
| schema | /v2/schemas/image |
| size | 12716032 |
| status | active |
| tags | |
| updated_at | 2023-03-13T13:29:50Z |
| virtual_size | None |
| visibility | shared |
+------------------+------------------------------------------------------+
[root@controller ~]# openstack image show cirros
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | 443b7623e27ecf03dc9e01ee93f67afe |
| container_format | bare |
| created_at | 2023-03-13T13:29:50Z |
| disk_format | qcow2 |
| file | /v2/images/4219d1cb-5238-4720-a7be-167f9b158a9b/file |
| id | 4219d1cb-5238-4720-a7be-167f9b158a9b |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 1a99aaa6a1024d84a00a779c4d186b44 |
| protected | False |
| schema | /v2/schemas/image |
| size | 12716032 |
| status | active |
| tags | |
| updated_at | 2023-03-13T13:29:50Z |
| virtual_size | None |
| visibility | shared |
+------------------+------------------------------------------------------+

7.搭建计算服务组件

在 controller 节点执行 iaas-install-nova-controller.sh,compute 节点执行 iaas-install-nova-compute.sh,会自行安装 nova 服务和完成相关配置。使用 nova 命令创建一个名为 t,ID 为 5,内存为 2048MB,磁盘容量为 10GB,vCPU 数量为 2 的云主机类型。

在 controller 节点请将 nova flavor-show t 命令的返回结果提交到答题框。【1 分】


[root@controller ~]# nova flavor-show t
+----------------------------+-------+
| Property | Value |
+----------------------------+-------+
| OS-FLV-DISABLED:disabled | False |
| OS-FLV-EXT-DATA:ephemeral | 0 |
| description | - |
| disk | 10 |
| extra_specs | {} |
| id | 5 |
| name | t |
| os-flavor-access:is_public | True |
| ram | 2048 |
| rxtx_factor | 1.0 |
| swap | |
| vcpus | 2 |
+----------------------------+-------+

解法:

[root@controller ~]# iaas-install-nova-controller.sh
[root@compute ~]# iaas-install-nova-compute.sh
[root@controller ~]# nova flavor-create t 5 2048 10 2
+----+------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | Description |
+----+------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 5 | t | 2048 | 10 | 0 | | 2 | 1.0 | True | - |
+----+------+-----------+------+-----------+------+-------+-------------+-----------+-------------+

8.搭建网络组件并初始化网络

在 controller 节点执行 iaas-install-neutron-controller.sh,compute 节点执行 iaas-install-neutron-compute.sh,会自行安装 neutron 服务并完成配置。创建云主机外部网络 ext-net,子网为 ext-subnet,云主机浮动 IP 可用网段为 192.168.10.100~192.168.10.200,网关为 192.168.100.1。

在 controller 节点请将 openstack subnet show ext-subnet 命令的返回结果提交到答题框。【1 分】

注意 本宿主机环境为192.168.157.0/24网段 网关应为192.168.157.2

[root@controller ~]# openstack subnet show ext-subnet
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 192.168.10.100-192.168.10.200 |
| cidr | 192.168.10.0/24 |
| created_at | 2023-03-13T13:58:58Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 192.168.100.1 |
| host_routes | |
| id | 3b4ffa33-6d24-46d5-aa23-e44e8ce86b26 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | ext-subnet |
| network_id | d1a6df4b-3af0-4ed6-b402-3b9eae05af8e |
| project_id | 1a99aaa6a1024d84a00a779c4d186b44 |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2023-03-13T13:58:58Z |
+-------------------+--------------------------------------+

解法:

[root@controller ~]# iaas-install-neutron-controller.sh
[root@compute ~]# iaas-install-neutron-compute.sh
[root@controller ~]# openstack network create --external --provider-physical-network provider --provider-network-type flat ext-net
+---------------------------+--------------------------------------+
| Field | Value |
+---------------------------+--------------------------------------+
| admin_state_up | UP |
| availability_zone_hints | |
| availability_zones | |
| created_at | 2023-03-13T13:55:36Z |
| description | |
| dns_domain | None |
| id | d1a6df4b-3af0-4ed6-b402-3b9eae05af8e |
| ipv4_address_scope | None |
| ipv6_address_scope | None |
| is_default | False |
| is_vlan_transparent | None |
| mtu | 1500 |
| name | ext-net |
| port_security_enabled | True |
| project_id | 1a99aaa6a1024d84a00a779c4d186b44 |
| provider:network_type | flat |
| provider:physical_network | provider |
| provider:segmentation_id | None |
| qos_policy_id | None |
| revision_number | 5 |
| router:external | External |
| segments | None |
| shared | False |
| status | ACTIVE |
| subnets | |
| tags | |
| updated_at | 2023-03-13T13:55:36Z |
+---------------------------+--------------------------------------+
[root@controller ~]# openstack subnet create --network ext-net --subnet-range 192.168.10.0/24 --gateway 192.168.100.1 --allocation-pool start=192.168.10.100,end=192.168.10.200 --dhcp ext-subnet
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| allocation_pools | 192.168.10.100-192.168.10.200 |
| cidr | 192.168.10.0/24 |
| created_at | 2023-03-13T13:58:58Z |
| description | |
| dns_nameservers | |
| enable_dhcp | True |
| gateway_ip | 192.168.100.1 |
| host_routes | |
| id | 3b4ffa33-6d24-46d5-aa23-e44e8ce86b26 |
| ip_version | 4 |
| ipv6_address_mode | None |
| ipv6_ra_mode | None |
| name | ext-subnet |
| network_id | d1a6df4b-3af0-4ed6-b402-3b9eae05af8e |
| project_id | 1a99aaa6a1024d84a00a779c4d186b44 |
| revision_number | 0 |
| segment_id | None |
| service_types | |
| subnetpool_id | None |
| tags | |
| updated_at | 2023-03-13T13:58:58Z |
+-------------------+--------------------------------------+

9.搭建图形化界面

在 controller 节点执行 iaas-install-dashboard.sh 脚本,会自行安装 dashboard 服务并完成配置。请修改 nova 配置文件,使之能通过公网 IP 访问 dashboard 首页。

在 controller 节点请将 curl http://EIP/dashboard -L 命令的返回结果提交到答题框。【1 分】


解法:

[root@controller ~]# iaas-install-dashboard.sh 
[root@controller ~]# vi /etc/nova/nova.conf
#公共IP的网络主机
--routing_source_ip=192.168.1.50
#高效网络
--multi_host=true
#公网网卡
--public_interface=eth0

任务 3 OpenStack 运维任务(13 分)

某公司构建了一套内部私有云系统,这套私有云系统将为公司内部提供计算服务。你将作为该私有云的维护人员,请完成以下运维工作。

1.安全组管理

使用命令创建名称为 group_web 的安全组,该安全组的描述为 "Custom security group",用 openstack 命令为安全组添加 icmp 规则和 ssh 规则,允许任意 IP 地址访问 web,完成后查看该安全组的详细信息。

将 openstack security group show group_web 命令的返回结果提交到答题框。【1 分】

[root@controller ~]# openstack security group show group_web
+-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2023-03-13T15:07:06Z |
| description | Custom security group |
| id | 1ba95444-9ba2-4036-84a7-fc67f09f323f |
| name | group_web |
| project_id | 1a99aaa6a1024d84a00a779c4d186b44 |
| revision_number | 5 |
| rules | created_at='2023-03-13T16:03:49Z', direction='ingress', ethertype='IPv4', id='b92db258-8484-4082-94f4-51ec184f30c0', port_range_max='80', port_range_min='80', protocol='tcp', remote_ip_prefix='0.0.0.0/0', updated_at='2023-03-13T16:03:49Z' |
| | created_at='2023-03-13T15:07:06Z', direction='egress', ethertype='IPv6', id='aea10bda-7c20-45e6-a802-940dd0a6761b', updated_at='2023-03-13T15:07:06Z' |
| | created_at='2023-03-13T16:19:18Z', direction='ingress', ethertype='IPv4', id='35d48cca-f4b6-499e-9b06-50e763a695f0', port_range_max='443', port_range_min='443', protocol='tcp', remote_ip_prefix='0.0.0.0/0', updated_at='2023-03-13T16:19:18Z' |
| | created_at='2023-03-13T15:07:06Z', direction='egress', ethertype='IPv4', id='81eef3e3-5804-4323-b05f-2b0357e25ae3', updated_at='2023-03-13T15:07:06Z' |
| | created_at='2023-03-13T16:03:14Z', direction='ingress', ethertype='IPv4', id='7f53e52f-5fbe-452c-b704-4232cfafd7d8', protocol='icmp', remote_ip_prefix='0.0.0.0/0', updated_at='2023-03-13T16:03:14Z' |
| updated_at | 2023-03-13T16:19:18Z |
+-----------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

解法:

[root@controller ~]# openstack security group create group_web --description "Custom security group"
+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at | 2023-03-13T15:07:06Z |
| description | Custom security group |
| id | 1ba95444-9ba2-4036-84a7-fc67f09f323f |
| name | group_web |
| project_id | 1a99aaa6a1024d84a00a779c4d186b44 |
| revision_number | 2 |
| rules | created_at='2023-03-13T15:07:06Z', direction='egress', ethertype='IPv4', id='81eef3e3-5804-4323-b05f-2b0357e25ae3', updated_at='2023-03-13T15:07:06Z' |
| | created_at='2023-03-13T15:07:06Z', direction='egress', ethertype='IPv6', id='aea10bda-7c20-45e6-a802-940dd0a6761b', updated_at='2023-03-13T15:07:06Z' |
| updated_at | 2023-03-13T15:07:06Z |
+-----------------+-------------------------------------------------------------------------------------------------------------------------------------------------------+
[root@controller ~]# openstack security group rule create group_web --protocol icmp --ingress
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2023-03-13T16:03:14Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 7f53e52f-5fbe-452c-b704-4232cfafd7d8 |
| name | None |
| port_range_max | None |
| port_range_min | None |
| project_id | 1a99aaa6a1024d84a00a779c4d186b44 |
| protocol | icmp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 1ba95444-9ba2-4036-84a7-fc67f09f323f |
| updated_at | 2023-03-13T16:03:14Z |
+-------------------+--------------------------------------+
[root@controller ~]# openstack security group rule create group_web --protocol tcp --ingress --dst-port 80:80
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2023-03-13T16:03:49Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | b92db258-8484-4082-94f4-51ec184f30c0 |
| name | None |
| port_range_max | 80 |
| port_range_min | 80 |
| project_id | 1a99aaa6a1024d84a00a779c4d186b44 |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 1ba95444-9ba2-4036-84a7-fc67f09f323f |
| updated_at | 2023-03-13T16:03:49Z |
+-------------------+--------------------------------------+
[root@controller ~]# openstack security group rule create group_web --protocol tcp --ingress --dst-port 443:443
+-------------------+--------------------------------------+
| Field | Value |
+-------------------+--------------------------------------+
| created_at | 2023-03-13T16:19:18Z |
| description | |
| direction | ingress |
| ether_type | IPv4 |
| id | 35d48cca-f4b6-499e-9b06-50e763a695f0 |
| name | None |
| port_range_max | 443 |
| port_range_min | 443 |
| project_id | 1a99aaa6a1024d84a00a779c4d186b44 |
| protocol | tcp |
| remote_group_id | None |
| remote_ip_prefix | 0.0.0.0/0 |
| revision_number | 0 |
| security_group_id | 1ba95444-9ba2-4036-84a7-fc67f09f323f |
| updated_at | 2023-03-13T16:19:18Z |
+-------------------+--------------------------------------+
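
题目中还要求为安全组添加 ssh 规则;上面提交的返回结果中未包含该条规则,如需补齐,可按同样方式放行 22 端口(示例,按 ssh 默认端口 22 的常规假设给出):

[root@controller ~]# openstack security group rule create group_web --protocol tcp --ingress --dst-port 22:22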


2.项目管理

在 keystone 中创建 shop 项目添加描述为”Hello shop”,完成后使用 openstack 命令禁用

该项目,然后使用 openstack 命令查看该项目的详细信息。

请将 openstack project show shop 命令的返回结果提交到答题框。【1 分】

[root@controller ~]# openstack project show shop
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Hello shop |
| domain_id | ed6f7dc2006d4010bd9194ebc576d9e9 |
| enabled | False |
| id | 6896ca6270aa429aa22908123b5cfb65 |
| is_domain | False |
| name | shop |
| parent_id | ed6f7dc2006d4010bd9194ebc576d9e9 |
| tags | [] |
+-------------+----------------------------------+

解法:

[root@controller ~]# openstack project create shop --description "Hello shop" --domain demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Hello shop |
| domain_id | ed6f7dc2006d4010bd9194ebc576d9e9 |
| enabled | True |
| id | 6896ca6270aa429aa22908123b5cfb65 |
| is_domain | False |
| name | shop |
| parent_id | ed6f7dc2006d4010bd9194ebc576d9e9 |
| tags | [] |
+-------------+----------------------------------+

[root@controller ~]# openstack project set shop --disable

3.用户管理

使用 nova 命令查看 admin 租户的当前配额值,将 admin 租户的实例配额提升到 13。

请将 nova quota-class-show admin 命令的返回结果提交到答题框。【1 分】

[root@controller ~]# nova quota-class-show admin
+----------------------+-------+
| Quota | Limit |
+----------------------+-------+
| instances | 10 |
| cores | 20 |
| ram | 51200 |
| metadata_items | 128 |
| key_pairs | 100 |
| server_groups | 13 |
| server_group_members | 10 |
+----------------------+-------+

解法:

[root@controller ~]# nova quota-class-show admin
+----------------------+-------+
| Quota | Limit |
+----------------------+-------+
| instances | 10 |
| cores | 20 |
| ram | 51200 |
| metadata_items | 128 |
| key_pairs | 100 |
| server_groups | 10 |
| server_group_members | 10 |
+----------------------+-------+
[root@controller ~]# nova quota-class-update --server-groups 13 admin

4.镜像管理

登录 controller 节点,使用 glance 相关命令上传镜像,源使用 CentOS_7.5_x86_64_XD.qcow2,名字为 centos7.5,修改这个镜像为共享状态,并设置最小磁盘为 5G。

请将 glance image-list 命令的返回结果提交到答题框。【1 分】

[root@controller ~]# glance image-list
+--------------------------------------+-----------+
| ID | Name |
+--------------------------------------+-----------+
| 29383c02-103a-4d28-ad42-24419970ed79 | centos7.5 |
| 4219d1cb-5238-4720-a7be-167f9b158a9b | cirros |
+--------------------------------------+-----------+

解法:

[root@controller ~]# glance image-create --name centos7.5  --min-disk 5 --disk-format qcow2 --file /opt/iaas/images/CentOS_7.5_x86_64_XD.qcow2  --container-format bare
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 3d3e9c954351a4b6953fd156f0c29f5c |
| container_format | bare |
| created_at | 2023-03-13T16:47:13Z |
| disk_format | qcow2 |
| id | 29383c02-103a-4d28-ad42-24419970ed79 |
| min_disk | 5 |
| min_ram | 0 |
| name | centos7.5 |
| owner | 1a99aaa6a1024d84a00a779c4d186b44 |
| protected | False |
| size | 510459904 |
| status | active |
| tags | [] |
| updated_at | 2023-03-13T16:47:17Z |
| virtual_size | None |
| visibility | shared |
+------------------+--------------------------------------+

5.后端配置文件管理

请修改 glance 后端配置文件,将项目的映像存储限制为 10GB,完成后重启 glance 服务。

请将 cat /etc/glance/glance-api.conf |grep user_storage 命令的返回结果提交到答题框。【1分】

[root@controller ~]# cat /etc/glance/glance-api.conf |grep user_storage
user_storage_quota = 10GB

解法:

[root@controller ~]# cat /etc/glance/glance-api.conf |grep user_storage
user_storage_quota = 10GB

[root@controller ~]# systemctl restart openstack-glance-api

6.存储服务管理

在 controller 节点执行 iaas-install-cinder-controller.sh,compute 节点执行 iaas-install-cinder-compute.sh,在 controller 和 compute 节点上会自行安装 cinder 服务并完成配置。创建一个名为 lvm 的卷类型,创建该类型规格键值对,要求 lvm 卷类型对应 cinder 后端驱动 lvm 所管理的存储资源;创建名字为 lvm_test、大小为 1G 的云硬盘并查询该云硬盘的详细信息。

[root@controller ~]# cinder show lvm_test
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attached_servers | [] |
| attachment_ids | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2023-03-13T19:49:41.000000 |
| description | None |
| encrypted | False |
| id | 255ef0a6-1d74-4b82-b930-6ae575aca172 |
| metadata | |
| migration_status | None |
| multiattach | False |
| name | lvm_test |
| os-vol-host-attr:host | compute@lvm#LVM |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 1a99aaa6a1024d84a00a779c4d186b44 |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| updated_at | 2023-03-13T19:49:42.000000 |
| user_id | 0a22a2d4f3964cfbbbd6474dc92cca01 |
| volume_type | lvm |
+--------------------------------+--------------------------------------+

解法:

[root@controller ~]# openstack volume type create  lvm
+-------------+--------------------------------------+
| Field | Value |
+-------------+--------------------------------------+
| description | None |
| id | b3872aa9-91df-4b82-a575-9c2e22458ea9 |
| is_public | True |
| name | lvm |
+-------------+--------------------------------------+
[root@controller ~]# openstack volume type set --property volume_backend_name=LVM lvm
[root@controller ~]# openstack volume create --type lvm --size 1 lvm_test
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2023-03-13T19:49:41.000000 |
| description | None |
| encrypted | False |
| id | 255ef0a6-1d74-4b82-b930-6ae575aca172 |
| migration_status | None |
| multiattach | False |
| name | lvm_test |
| properties | |
| replication_status | None |
| size | 1 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | lvm |
| updated_at | None |
| user_id | 0a22a2d4f3964cfbbbd6474dc92cca01 |
+---------------------+--------------------------------------+


7.数据库管理

请使用数据库命令将所有数据库进行备份,备份文件名为 openstack.sql,完成后使用命令查看文件属性,其中文件大小以 MB 显示。

请将 du -h openstack.sql 命令的返回结果提交到答题框。【1 分】

[root@controller ~]# du openstack.sql -h
1.6M openstack.sql

解法:

[root@controller ~]# mysqldump -uroot -p000000 --all-databases > openstack.sql


8.数据库管理

进入数据库,创建本地用户 examuser,密码为 000000,然后查询 mysql 数据库中的 user 表的 user,host,password 字段。然后赋予这个用户所有数据库的"查询""删除""更新""创建"的权限。

请将 select user,host,password from user ;命令的返回结果提交到答题框【1 分】

MariaDB [(none)]> use mysql;
Reading table information for completion of table and column names
You can turn off this feature to get a quicker startup with -A

Database changed
MariaDB [mysql]> select user,host,password from user;
+----------+------------+-------------------------------------------+
| user | host | password |
+----------+------------+-------------------------------------------+
| root | localhost | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| root | controller | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| root | 127.0.0.1 | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| root | ::1 | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| keystone | localhost | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| keystone | % | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| glance | localhost | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| glance | % | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| nova | localhost | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| nova | % | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| neutron | localhost | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| neutron | % | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| cinder | localhost | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| cinder | % | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| examuser | localhost | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
+----------+------------+-------------------------------------------+
15 rows in set (0.00 sec)

解法:

[root@controller ~]# mysql -uroot -p
Enter password:
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 400
Server version: 10.1.20-MariaDB MariaDB Server

Copyright (c) 2000, 2016, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]> create user examuser@localhost identified by '000000';
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> grant select,delete,update,create on *.* to examuser@localhost ;
Query OK, 0 rows affected (0.00 sec)

MariaDB [(none)]> flush privileges;

9.存储管理

请使用 openstack 命令创建一个名为 test 的 cinder 卷,卷大小为 5G。完成后使用 cinder

命令列出卷列表并查看 test 卷的详细信息。

请将 cinder list 命令的返回结果提交到答题框。【1 分】

[root@controller ~]# cinder list
+--------------------------------------+-----------+----------+------+-------------+----------+-------------+
| ID | Status | Name | Size | Volume Type | Bootable | Attached to |
+--------------------------------------+-----------+----------+------+-------------+----------+-------------+
| 255ef0a6-1d74-4b82-b930-6ae575aca172 | available | lvm_test | 1 | lvm | false | |
| 893a1d99-297d-4cf0-8a01-7e637d6ab086 | available | test | 5 | - | false | |
+--------------------------------------+-----------+----------+------+-------------+----------+-------------+

解法:

[root@controller ~]# openstack volume create --size 5 test
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2023-03-13T20:08:38.000000 |
| description | None |
| encrypted | False |
| id | 893a1d99-297d-4cf0-8a01-7e637d6ab086 |
| migration_status | None |
| multiattach | False |
| name | test |
| properties | |
| replication_status | None |
| size | 5 |
| snapshot_id | None |
| source_volid | None |
| status | creating |
| type | None |
| updated_at | None |
| user_id | 0a22a2d4f3964cfbbbd6474dc92cca01 |
+---------------------+--------------------------------------+
[root@controller ~]# cinder show test
+--------------------------------+--------------------------------------+
| Property | Value |
+--------------------------------+--------------------------------------+
| attached_servers | [] |
| attachment_ids | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2023-03-13T20:08:38.000000 |
| description | None |
| encrypted | False |
| id | 893a1d99-297d-4cf0-8a01-7e637d6ab086 |
| metadata | |
| migration_status | None |
| multiattach | False |
| name | test |
| os-vol-host-attr:host | compute@lvm#LVM |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 1a99aaa6a1024d84a00a779c4d186b44 |
| replication_status | None |
| size | 5 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| updated_at | 2023-03-13T20:08:39.000000 |
| user_id | 0a22a2d4f3964cfbbbd6474dc92cca01 |
| volume_type | None |
+--------------------------------+--------------------------------------+

10.存储管理

为了避免卷数据复制占用过多带宽、导致实例的数据访问速度变慢,OpenStack Block Storage 支持对卷数据复制带宽进行速率限制。请修改 cinder 后端配置文件,将卷复制带宽限制为最高 100 MiB/s。

请将 cat /etc/cinder/cinder.conf |grep volume_copy 命令的返回结果提交到答题框。【1 分】

[root@controller ~]#  cat /etc/cinder/cinder.conf |grep volume_copy
#volume_copy_blkio_cgroup_name = cinder-volume-copy
volume_copy_bps_limit = 100MiB/s
#volume_copy_blkio_cgroup_name = cinder-volume-copy
#volume_copy_bps_limit = 0

解法:

[root@controller ~]#  cat /etc/cinder/cinder.conf |grep volume_copy
#volume_copy_blkio_cgroup_name = cinder-volume-copy
volume_copy_bps_limit = 100MiB/s
#volume_copy_blkio_cgroup_name = cinder-volume-copy
#volume_copy_bps_limit = 0

11.存储管理

在 controller 节点执行 iaas-install-swift-controller.sh,compute 节点执行 iaas-install-swift-compute.sh,在 controller 和 compute 节点上会自行安装 swift 服务并完成配置。创建一个名为 file 的容器。

请将 swift stat file 命令的返回结果提交到答题框【1 分】

[root@controller ~]# swift stat file
Account: AUTH_1a99aaa6a1024d84a00a779c4d186b44
Container: file
Objects: 0
Bytes: 0
Read ACL:
Write ACL:
Sync To:
Sync Key:
Accept-Ranges: bytes
X-Storage-Policy: Policy-0
Last-Modified: Mon, 13 Mar 2023 20:21:50 GMT
X-Timestamp: 1678738909.17978
X-Trans-Id: tx6951e04bf6c546429752b-00640f85e4
Content-Type: application/json; charset=utf-8
X-Openstack-Request-Id: tx6951e04bf6c546429752b-00640f85e4

解法:

[root@controller ~]# iaas-install-swift-controller.sh
[root@compute ~]# iaas-install-swift-compute.sh
[root@controller ~]# swift post file

12.存储管理

用 swift 命令,把 cirros-0.3.4-x86_64-disk.img 上传到 file 容器中。

请将 swift list file 命令的返回结果提交到答题框【1 分】

[root@controller ~]# swift list file
cirros-0.4.0-x86_64-disk.img

解法:

[root@controller ~]# swift upload file cirros-0.4.0-x86_64-disk.img
cirros-0.4.0-x86_64-disk.img

13.添加控制节点资源到云平台

修改openrc.sh中的内容,然后在controller节点执行iaas-install-nova-compute.sh,把controller

节点的资源添加到云平台。

请将 openstack compute service list 命令的返回结果提交到答题框【1 分】

[root@controller ~]# openstack compute service list
+----+------------------+------------+----------+---------+-------+----------------------------+
| ID | Binary | Host | Zone | Status | State | Updated At |
+----+------------------+------------+----------+---------+-------+----------------------------+
| 1 | nova-scheduler | controller | internal | enabled | up | 2023-03-13T20:29:11.000000 |
| 2 | nova-conductor | controller | internal | enabled | up | 2023-03-13T20:29:03.000000 |
| 4 | nova-consoleauth | controller | internal | enabled | up | 2023-03-13T20:29:07.000000 |
| 7 | nova-compute | compute | nova | enabled | up | 2023-03-13T20:29:11.000000 |
| 8 | nova-compute | controller | nova | enabled | up | 2023-03-13T20:29:08.000000 |
+----+------------------+------------+----------+---------+-------+----------------------------+

解法:

#修改配置文件
HOST_IP_NODE=192.168.157.30
HOST_NAME_NODE=controller

[root@controller ~]# iaas-install-nova-compute.sh

###可能弹出认证
yes
000000

任务 4 OpenStack 架构任务(3 分)

公司内部拥有一套私有云系统,为了调试该私有云,需要编写一些测试用脚本进行功能性测试,作为公司私有云维护人员请你完成以下工作。

1.请使用 openstack 命令创建一个浮动 IP 地址,完成后使用 openstack 命令查看该浮动 IP 的 id。请编写一个名为 floating_show.sh 的脚本,该脚本$1 变量为浮动 ip 的 id,对接 neutron 服务端点获取该浮动 IP 的详细信息。脚本使用 curl 向 api 端点传递参数,为了兼容性考虑不得出现 openstack 命令。

请将 floating_show.sh 中*部分替换为正常内容并提交到答题框【1.5 分】
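
下面给出一个示例写法(其中 controller 端点地址、keystone 端口 5000、neutron 端口 9696、admin 密码 000000、域名 demo 均为按本环境 openrc.sh 推测的假设值,请按实际环境调整):

#!/bin/bash
#floating_show.sh:$1 为浮动 IP 的 id,先向 keystone 申请 token,再用 curl 查询 neutron 端点
floating_id=$1

#获取 token(认证参数为假设值,以实际环境为准)
token=$(curl -s -i -X POST http://controller:5000/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth":{"identity":{"methods":["password"],"password":{"user":{"name":"admin","domain":{"name":"demo"},"password":"000000"}}},"scope":{"project":{"name":"admin","domain":{"name":"demo"}}}}}' \
  | awk '/X-Subject-Token/ {print $2}' | tr -d '\r')

#查询该浮动 IP 的详细信息
curl -s -X GET http://controller:9696/v2.0/floatingips/${floating_id} -H "X-Auth-Token: ${token}"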

2.请编写脚本 floating_delete.sh,完成浮动 IP 的删除。设置一个$1 变量,当用户向$1 传递一个浮动 IP 的 id,即可完成该浮动 IP 的删除。脚本使用 curl 向 api 端点传递参数,为了兼容性考虑不得出现 openstack 命令。

请将 floating_delete.sh 中*部分替换为正常内容并提交到答题框【1.5 分】
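
同样给出一个示例写法(认证与端点参数沿用上面的假设,请按实际环境调整):

#!/bin/bash
#floating_delete.sh:$1 为浮动 IP 的 id,取得 token 后调用 neutron API 删除
floating_id=$1

token=$(curl -s -i -X POST http://controller:5000/v3/auth/tokens \
  -H "Content-Type: application/json" \
  -d '{"auth":{"identity":{"methods":["password"],"password":{"user":{"name":"admin","domain":{"name":"demo"},"password":"000000"}}},"scope":{"project":{"name":"admin","domain":{"name":"demo"}}}}}' \
  | awk '/X-Subject-Token/ {print $2}' | tr -d '\r')

#删除该浮动 IP
curl -s -X DELETE http://controller:9696/v2.0/floatingips/${floating_id} -H "X-Auth-Token: ${token}"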

B 模块题目:容器的编排与运维

ip hostname
192.168.157.40 master
192.168.157.41 node1
192.168.157.42 node2
192.168.157.43 harbor

任务 1 容器云平台环境初始化(10 分)

1.容器云平台的初始化

master 节点主机名设置为 master、node1 节点主机名设置为 node1、node2 节点主机名设置为 node2、harbor 节点主机名设置为 harbor,所有节点关闭 swap,并配置 hosts 映射。

请在 master 节点将 free -m 命令的返回结果提交到答题框。【1 分】

[root@master ~]# free -m
total used free shared buff/cache available
Mem: 7805 202 127 11 7475 7238
Swap: 0 0 0

解法:

#修改主机名
## master
[root@localhost ~]# hostnamectl set-hostname master
##node1
[root@localhost ~]# hostnamectl set-hostname node1
##node2
[root@localhost ~]# hostnamectl set-hostname node2
## harbor
[root@localhost ~]# hostnamectl set-hostname harbor

#所有节点
[root@master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.157.40 master
192.168.157.41 node1
192.168.157.42 node2
192.168.157.43 harbor

[root@localhost ~]# swapoff -a
[root@localhost ~]# cat /etc/fstab
#/dev/mapper/centos-swap swap swap defaults 0 0

2.Yum 源数据的持久化挂载

将提供的 CentOS-7-x86_64-DVD-1804.iso 和 bricsskills_cloud_paas.iso 光盘镜像文件移动到 master 节点 /root 目录下,然后在 /opt 目录下使用命令创建 centos 目录和 paas 目录,并将镜像文件 CentOS-7-x86_64-DVD-1804.iso 永久挂载到 /opt/centos 目录下,将镜像文件 bricsskills_cloud_paas.iso 永久挂载到 /opt/paas 目录下。

请在 master 节点将 df -h 命令的返回结果提交到答题框。【1 分】

[root@master ~]# df -h
文件系统 容量 已用 可用 已用% 挂载点
/dev/mapper/centos-root 44G 14G 31G 32% /
devtmpfs 3.8G 0 3.8G 0% /dev
tmpfs 3.9G 0 3.9G 0% /dev/shm
tmpfs 3.9G 12M 3.8G 1% /run
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
/dev/sda1 1014M 142M 873M 14% /boot
tmpfs 781M 0 781M 0% /run/user/0
/dev/loop0 4.2G 4.2G 0 100% /opt/centos
/dev/loop1 8.7G 8.7G 0 100% /opt/paas

解法:

##将指定的文件上传到root

[root@master ~]# mkdir /opt/centos
[root@master ~]# mkdir /opt/paas
[root@master ~]# cat /etc/fstab
/root/CentOS-7-x86_64-DVD-1804.iso /opt/centos iso9660 defaults 0 0
/root/bricsskills_cloud_paas.iso /opt/paas iso9660 defaults 0 0
[root@master ~]# mount -a
mount: /dev/loop0 写保护,将以只读方式挂载
mount: /dev/loop1 写保护,将以只读方式挂载

3.Yum 源的编写

在 master 节点首先将系统自带的 yum 源移动到/home 目录,然后为 master 节点配置本地 yum 源,yum 源文件名为 local.repo。

请将 yum list | grep docker 命令的返回结果提交到答题框。【1 分】

[root@master ~]# yum list | grep docker
docker-ce.x86_64 3:19.03.13-3.el7 paas
docker-ce-cli.x86_64 1:19.03.13-3.el7 paas

解法:

[root@master ~]# mv /etc/yum.repos.d/* /home/
[root@master ~]# cat /etc/yum.repos.d/local.repo
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[paas]
name=paas
baseurl=file:///opt/paas/kubernetes-repo
gpgcheck=0
enabled=1

4.搭建 ftp 服务器

在 master 节点安装 ftp 服务,将 ftp 共享目录设置为 /opt/。

请将 curl -l ftp://云主机 IP 命令的返回结果提交到答题框。【1 分】

[root@master ~]# curl -l ftp://192.168.157.40
centos
paas

解法:

#关闭防火墙及安全策略 全部节点
[root@master ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.
[root@master ~]# setenforce 0
[root@master ~]# cat /etc/selinux/config
SELINUX=permissive

[root@master ~]# yum install -y vsftpd
[root@master ~]# cat /etc/vsftpd/vsftpd.conf
anon_root=/opt

[root@master ~]# systemctl enable vsftpd --now

5.Yum 源的编写

为 node1 节点和 node2 节点分别配置 ftp 源,yum 源文件名称为 ftp.repo,其中 ftp 服务器地址为 master 节点,配置 ftp 源时不要写 IP 地址,配置之后,两台机器都安装 kubectl 包作为安装测试。

在 node1 节点请将 yum list | grep kubectl 命令的返回结果提交到答题框。【2 分】

[root@node1 ~]# yum list | grep kubectl
kubectl.x86_64 1.18.1-0 paas

解法:

[root@localhost ~]# mv /etc/yum.repos.d/* /home
[root@localhost ~]# cat /etc/yum.repos.d/ftp.repo
[centos]
name=centos
baseurl=ftp://master/centos
gpgcheck=0
enabled=1
[paas]
name=paas
baseurl=ftp://master/paas/kubernetes-repo
gpgcheck=0
enabled=1

6.设置时间同步服务器

在 master 节点上部署 chrony 服务器,允许其它节点同步时间,启动服务并设置为开机自启动;在其他节点上指定 master 节点为上游 NTP 服务器,重启服务并设为开机自启动。

在 node1 节点将 chronyc sources 命令的返回结果提交到答题框。【2 分】

[root@node1 ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address Stratum Poll Reach LastRx Last sample
===============================================================================
^* master 11 6 17 36 -1673ns[ -15us] +/- 453us

解法:

[root@master ~]# cat /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server master iburst
.....
local stratum 10
allow 192.168.157.0/24


#其他节点
[root@node1 ~]# cat /etc/chrony.conf
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server master iburst

#重启服务(全部节点)
[root@node1 ~]# systemctl restart chronyd
[root@node1 ~]# chronyc sources

7.设置免密登录

为四台服务器设置免密登录,保证服务器之间能够互相免密登录。

在 master 节点将 ssh node1 命令的返回结果提交到答题框。【2 分】

[root@master ~]# ssh node1
Last login: Mon Mar 13 23:49:45 2023 from 192.168.157.1

解法:

[root@master ~]# ssh-keygen
-> 直接回车即可
[root@master ~]# ssh-copy-id root@node1
[root@master ~]# ssh-copy-id root@node2
[root@master ~]# ssh-copy-id root@harbor

任务 2 k8s 搭建任务(15 分)

1.安装 docker 应用

在所有节点上安装 docker-ce,并设置为开机自启动。

在 master 节点请将 docker version 命令的返回结果提交到答题框。【1 分】

[root@master ~]# docker version
Client: Docker Engine - Community
Version: 19.03.13
API version: 1.40
Go version: go1.13.15
Git commit: 4484c46d9d
Built: Wed Sep 16 17:03:45 2020
OS/Arch: linux/amd64
Experimental: false

Server: Docker Engine - Community
Engine:
Version: 19.03.13
API version: 1.40 (minimum version 1.12)
Go version: go1.13.15
Git commit: 4484c46d9d
Built: Wed Sep 16 17:02:21 2020
OS/Arch: linux/amd64
Experimental: false
containerd:
Version: 1.3.7
GitCommit: 8fba4e9a7d01810a393d5d25a3621dc101981175
runc:
Version: 1.0.0-rc10
GitCommit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
docker-init:
Version: 0.18.0
GitCommit: fec3683

解法:

##所有节点
[root@master ~]# yum install -y docker-ce
[root@master ~]# systemctl enable docker --now

2.安装 docker 应用

所有节点配置阿里云镜像加速地址(https://5twf62k1.mirror.aliyuncs.com),并把 cgroup 驱动设置为 systemd,配置成功后重启 docker 服务。

请将 json 文件中的内容提交到答题框。【1 分】

[root@node1 ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://5twf62k1.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}

解法:

[root@master ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://5twf62k1.mirror.aliyuncs.com"],
"exec-opts": ["native.vgroupdriver=systemd"]
}

[root@node1 ~]# systemctl restart docker

3.安装 docker-compose

在 Harbor 节点创建目录 /opt/paas,并把 bricsskills_cloud_paas.iso 挂载到 /opt/paas 目录下,使用 /opt/paas/docker-compose/v1.25.5-docker-compose-Linux-x86_64 文件安装 docker-compose。安装完成后执行 docker-compose version 命令。

请将 docker-compose version 命令返回结果提交到答题框。【1 分】


[root@localhost paas]# docker-compose version
docker-compose version 1.25.5, build 8a1c60f6
docker-py version: 4.1.0
CPython version: 3.7.5
OpenSSL version: OpenSSL 1.1.0l 10 Sep 2019

解法:

#在 Harbor 节点创建 /opt/paas 目录,并把 bricsskills_cloud_paas.iso 挂载到 /opt/paas 目录下
[root@harbor ~]# mount -o loop /root/bricsskills_cloud_paas.iso /opt/paas
[root@harbor paas]# cp -p /opt/paas/docker-compose/v1.25.5-docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
[root@harbor paas]# chmod +x /usr/local/bin/docker-compose

4.搭建 harbor 仓库

在 Harbor 节点使用 /opt/paas/harbor/harbor-offline-installer-v2.1.0.tgz 离线安装包,安装 harbor 仓库,并修改各节点默认 docker 仓库为 harbor 仓库地址。

在 master 节点请将 docker login <harbor private ip> 命令的返回结果提交到答题框。【1 分】

[root@master ~]# docker login 192.168.157.43
Username: admin
Password:
Error response from daemon: Get http://harbor/v2/: dial tcp 192.168.157.43:80: connect: connection refused
[root@master ~]# docker login harbor
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

Login Succeeded

解法:

#所有节点添加harbor地址
[root@master ~]# cat /etc/docker/daemon.json
{
"registry-mirrors": ["https://5twf62k1.mirror.aliyuncs.com"],
"insecure-registries": ["192.168.157.43"],
"exec-opts": ["native.cgroupdriver=systemd"]
}
#解压压缩包
[root@harbor paas]# tar -zxvf harbor-offline-installer-v2.1.0.tgz
[root@harbor harbor]# cp -p harbor.yml.tmpl harbor.yml
hostname: 192.168.157.43
#https:
# https port for harbor, default is 443
# port: 443
# The path of cert and key files for nginx
# certificate: /your/certificate/path
# private_key: /your/private/key/path

[root@harbor harbor]# ./prepare
[root@harbor harbor]# ./install.sh
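After install.sh finishes, docker on every node still has to pick up the insecure-registries entry before the login works. A short supplement; Harbor12345 is harbor.yml's default admin password and is an assumption if it was changed:

# on every node that edited /etc/docker/daemon.json
[root@master ~]# systemctl restart docker
# then log in to the private registry
[root@master ~]# docker login 192.168.157.43 -u admin -p Harbor12345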

5.上传 docker 镜像

在 master 节点使用命令将/opt/paas/images 目录下所有镜像导入本地。然后使用

/opt/paas/k8s_image_push.sh 将所有镜像上传至 docker 仓库。

在master 节点请将 docker images | grep wordpress命令的返回结果提交到答题框。【1分】

[root@master paas]# docker images | grep wordpress
wordpress latest 1b83fad37165 2 years ago 546MB

解法:

[root@master images]# for i in $(ls /opt/paas/images|grep tar); do   docker load -i /opt/paas/images/$i; done
[root@master images]# ../k8s_image_push.sh
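The content of k8s_image_push.sh is not shown in the task material. As a rough idea of what such a push script usually does (a hypothetical outline, not the shipped script; the registry path 192.168.157.43/library is an assumption):

#!/bin/bash
# tag every local image for the private registry and push it
HARBOR=192.168.157.43/library
for img in $(docker images --format '{{.Repository}}:{{.Tag}}' | grep -v "$HARBOR"); do
    docker tag "$img" "$HARBOR/${img##*/}"
    docker push "$HARBOR/${img##*/}"
done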

6.安装 kubeadm 工具

在 master 节点、node1 节点、node2 节点分别安装 Kubeadm 工具并设置为开机自启动。

在 master 节点请将 kubeadm version 命令的返回结果提交到答题框。【1 分】

[root@master images]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.1", GitCommit:"7879fc12a63337efff607952a323df90cdc7a335", GitTreeState:"clean", BuildDate:"2020-04-08T17:36:32Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}

解法:

[root@master images]# yum install -y kubeadm kubelet kubectl
[root@master images]# systemctl enable kubelet --now
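kubeadm normally also expects swap to be off and the bridge netfilter sysctls to be set on every node. These steps are not part of the original answer, so treat the following as a commonly needed supplement rather than the graded solution:

# run on master, node1 and node2
swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab
modprobe br_netfilter
cat > /etc/sysctl.d/k8s.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.ipv4.ip_forward = 1
EOF
sysctl --system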

7.初始化 master 节点

使用 kubeadm 命令生成 yaml 文件,并修改 yaml 文件,设置 kubernetes 虚拟内部网段地

址为 10.244.0.0/16,通过该 yaml 文件初始化 master 节点,然后使用 kube-flannel.yaml 完成

控制节点初始化设置。

在 master 节点的 kube-flannel.yaml 执行前将 kubectl get nodes 命令的返回结果提交

到答题框。【1 分】

[root@master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master NotReady master 5m57s v1.18.1

解法:

#生成yaml文件
[root@master ~]# kubeadm config print init-defaults > kubeadm-config.yaml
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.200.3 # 本机IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: master1 # 本主机名
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controlPlaneEndpoint: "192.168.200.16:16443" # 虚拟IP和haproxy端口
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: k8s.gcr.io # 镜像仓库源要根据自己实际情况修改
kind: ClusterConfiguration
kubernetesVersion: v1.18.2 # k8s版本
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
scheduler: {}

---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
featureGates:
  SupportIPVSProxyMode: true
mode: ipvs

[root@master ~]# kubeadm init --config kubeadm-config.yaml

[root@master ~]# mkdir -p $HOME/.kube
[root@master ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@master ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

8.删除污点

使用命令删除 master 节点的污点,使得 Pod 也可以调度到 master 节点上。

在 master 节点请将 kubectl get nodes -o yaml master | grep -A10 spec 命令的返回结果提

交到答题框。【1 分】

[root@master paas]#  kubectl get nodes -o yaml master | grep -A10 spec
f:spec:
f:taints: {}
manager: kube-controller-manager
operation: Update
time: "2023-03-14T05:56:43Z"
name: master
resourceVersion: "2383"
selfLink: /api/v1/nodes/master
uid: abca28aa-3941-436f-ad5f-db5e12cbcaab
spec:
taints:
- effect: NoSchedule
key: node.kubernetes.io/not-ready
- effect: NoExecute
key: node.kubernetes.io/not-ready
timeAdded: "2023-03-14T05:56:43Z"
status:
addresses:
- address: 192.168.157.40
type: InternalIP

解法:

[root@master paas]# kubectl taint nodes master node-role.kubernetes.io/master-

9.安装 kubernetes 网络插件

使用 kube-flannel.yaml 安装 kubernetes 网络插件,安装完成后使用命令查看节点状态。

在 master 节点请将 kubectl get nodes 命令的返回结果提交到答题框。【1 分】

[root@master yaml]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 15m v1.18.1
解法:

[root@master paas]# cd yaml/
[root@master yaml]# ls
dashboard flannel
[root@master yaml]# kubectl apply -f flannel/kube-flannel.yaml

10.给 kubernetes 创建证书。

在 master 节点请将 kubectl get csr 命令的返回结果提交到答题框。【2 分】

[root@master paas]# kubectl get csr
NAME AGE SIGNERNAME REQUESTOR CONDITION
csr-w6k9j 16m kubernetes.io/kube-apiserver-client-kubelet system:node:master Approved,Issued
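The sample output already shows the CSR as Approved,Issued. If a CSR were still Pending, it could be approved manually; csr-w6k9j is simply the name taken from the output above:

[root@master ~]# kubectl get csr
[root@master ~]# kubectl certificate approve csr-w6k9j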

11.kubernetes 图形化界面的安装

使用 recommended.yaml 和 dashboard-adminuser.yaml 安装 kubernetes dashboard 界面,

完成后查看首页。

请将 kubectl get pod,svc -n kubernetes-dashboard 命令的返回结果提交到答题框。【2 分】

解法:

mkdir dashboard-certs
cd dashboard-certs/

kubectl create namespace kubernetes-dashboard
openssl genrsa -out dashboard.key 2048
openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard

##安装dashboard
kubectl apply -f recommended.yaml
#查看状态
kubectl get pod,svc -n kubernetes-dashboard
#
kubectl apply -f dashboard-adminuser.yaml

#获取token
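A common way to retrieve the login token, assuming dashboard-adminuser.yaml created a ServiceAccount named admin-user (the account name is an assumption):

kubectl -n kubernetes-dashboard describe secret \
    $(kubectl -n kubernetes-dashboard get secret | grep admin-user | awk '{print $1}')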

12.扩展计算节点

在 node1 节点和 node2 节点分别使用 kubeadm config 命令生成 yaml 文件,并通过 yaml

文件将 node 节点加入 kubernetes 集群。完成后在 master 节点上查看所有节点状态。在 master 节点请将 kubectl get nodes 命令的返回结果提交到答题框。【2 分】

[root@master nfs]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 5d9h v1.19.0
node1 Ready <none> 5d9h v1.19.0
node2 Ready <none> 5d9h v1.19.0

解法:

kubeadm config print join-defaults > kubeadm-config.yaml

##然后把文件中的 apiServerEndpoint、token 等字段改为 master 节点的实际值,再在 node1、node2 上执行 kubeadm join --config kubeadm-config.yaml 即可
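An equivalent and often simpler route (a sketch, assuming the init token is still valid): print the full join command on master and run it on node1 and node2, or feed the edited yaml to kubeadm join.

[root@master ~]# kubeadm token create --print-join-command
# run the printed command on node1 and node2, or:
[root@node1 ~]# kubeadm join --config kubeadm-config.yaml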

任务 3 EFK 日志平台构建(15 分)

1.导入镜像

将提供的 efk-img.tar.gz 压缩包中的镜像导入到 master 节点,并使用命令将镜像上传至

harbor 镜像仓库中。

在master节点将docker images | grep elasticsearch命令的返回结果提交到答题框。【1分】
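The answer output is missing here. A minimal sketch of the import and upload, assuming the archive unpacks into *.tar image files in the current directory and that the same harbor project convention is used as before (both assumptions):

[root@master ~]# tar -zxvf efk-img.tar.gz
[root@master ~]# for i in $(ls *.tar); do docker load -i $i; done
[root@master ~]# docker tag elasticsearch:7.2.0 192.168.157.43/library/elasticsearch:7.2.0
[root@master ~]# docker push 192.168.157.43/library/elasticsearch:7.2.0
[root@master ~]# docker images | grep elasticsearch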


2.NFS 配置网段访问

在 master 节点、node1 节点、node2 节点分别安装 nfs 服务,

master 节点作为服务端,把/data/volume1 目录作为共享目录,只允许 192.168.10 网段访问。

在 master 节点,将 showmount -e 命令的返回结果提交到答题框。【1 分】

[root@master ~]# showmount -e
Export list for master:
/data/volume1 192.168.157.0/24

解法:

#所有节点
[root@node1 ~]# yum install -y nfs-utils
[root@master ~]# cat /etc/exports
/data/volume1 192.168.157.0/24(rw)
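The exports entry alone is not enough; the shared directory has to exist and the NFS services have to run. A short supplement on the master node:

[root@master ~]# mkdir -p /data/volume1
[root@master ~]# systemctl enable rpcbind nfs-server --now
[root@master ~]# exportfs -r
[root@master ~]# showmount -e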

3.RBAC 配置

在 master 节点,编写 sa.yaml,创建名称为 nfs-provisioner 的 SA 账号。

将 kubectl get serviceaccounts -n kube-logging 命令的返回结果提交到答题框。【1 分】

[root@master rbac]# kubectl get serviceaccounts -n kube-logging
NAME SECRETS AGE
default 1 23s
nfs-provisioner 1 14s

解法:

[root@master ~]# mkdir rbac
[root@master ~]# cd rbac/
[root@master rbac]# vi sa.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  namespace: kube-logging
[root@master rbac]# kubectl create ns kube-logging
[root@master rbac]# kubectl apply -f sa.yaml

4.RBAC 配置

编写 rbac.yaml ,对创建的 sa 账号进行 RBAC 授权,基于 yaml 文件创建完成后使用命令分别查看 sa 账号和 rbac 授权信息。

将 kubectl get roles.rbac.authorization.k8s.io 命令的返回结果提交到答题框。【1 分】

[root@master rbac]# kubectl get roles.rbac.authorization.k8s.io
NAME CREATED AT
leader-nfs-provisioner 2023-03-14T09:04:04Z

解法:

[root@master rbac]# cat rbac.yaml
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["nodes"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-logging
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: kube-logging
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  namespace: kube-logging
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    namespace: kube-logging
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
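Apply the file and confirm the Role exists (it lives in the kube-logging namespace created earlier):

[root@master rbac]# kubectl apply -f rbac.yaml
[root@master rbac]# kubectl get roles.rbac.authorization.k8s.io -n kube-logging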

5.StorageClass 动态绑定

编写 nfs-deploy.yaml 文件,基于 nfs-client-provisioner 镜像创建 nfs-provisioner 的

deployment 对象,绑定 nfs 服务端的共享目录。

将 kubectl get pods 命令的返回结果提交到答题框。【1 分】

[root@master rbac]# kubectl get pod
nfs-deploy-754d9b668c-chhqx 2/2 Running 1 9s

解法:

[root@master nfs]# cat nfs-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  namespace: kube-logging
spec:
  replicas: 1
  strategy:
    type: Recreate
  selector:
    matchLabels:
      app: nfs-client-provisioner
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      nodeName: master #设置在master节点运行
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: quay.io/external_storage/nfs-client-provisioner:latest
          imagePullPolicy: IfNotPresent
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: k8s/nfs-subdir-external-provisioner
            - name: NFS_SERVER
              value: 192.168.157.10
            - name: NFS_PATH
              value: /data/volume1
      volumes:
        - name: nfs-client-root
          nfs:
            server: 192.168.157.10 # NFS SERVER_IP
            path: /data/volume1

6.StorageClass 动态绑定

编写 storageclass.yaml 文件,创建 storageclass 动态绑定 nfs-provisioner,完成后查看

nfs-provisioner 的 pod 及 storageclasses 对象。

将 kubectl get storageclasses.storage.k8s.io 命令的返回结果提交到答题框。【2 分】

[root@master rbac]#  kubectl get storageclasses.storage.k8s.io
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-deploy (default) k8s/nfs-subdir-external-provisioner Delete Immediate true 22m

解法:

[root@master nfs]# cat storageclass.yml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: k8s/nfs-subdir-external-provisioner # must match the deployment's env PROVISIONER_NAME
allowVolumeExpansion: true
parameters:
  archiveOnDelete: "false" # 设置为"false"时删除PVC不会保留数据,"true"则保留数据
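Apply the StorageClass; if a "(default)" marker like the one in the sample output is wanted, the class can additionally be annotated as the cluster default (this annotation step is an assumption about the expected state):

[root@master nfs]# kubectl apply -f storageclass.yml
[root@master nfs]# kubectl patch storageclass managed-nfs-storage \
    -p '{"metadata":{"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
[root@master nfs]# kubectl get storageclasses.storage.k8s.io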

7.通过 statefulset 创建 elasticsearch 集群

编写 es-statefulset.yaml,通过 yaml 文件构建 elasticsearch 的 statefulset 集群,集群中有

3 个副本名字分别为 es-cluster-0、es-cluster-1、es-cluster-2,并且使用上述 storageclass 提供

的存储,使用 elasticsearch:7.2.0 镜像,并且声明 9200 端口为 api 端口,9300 端口为内部访

问端口,并且添加 busybox 的初始化容器对 elasticsearch 的数据目录

/usr/share/elasticsearch/data 进行授权操作。

将 kubectl get pods 命令的返回结果提交到答题框。【2 分】


[root@master nfs]# kubectl get pod
NAME READY STATUS RESTARTS AGE
es-cluster-0 1/1 Running 0 2m29s
es-cluster-1 1/1 Running 0 112s
es-cluster-2 1/1 Running 0 105s

解法:

[root@master nfs]# cat es-statefulset.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: es-cluster
spec:
  serviceName: elasticsearch
  replicas: 3
  selector:
    matchLabels:
      app: elasticsearch
  template:
    metadata:
      labels:
        app: elasticsearch
    spec:
      initContainers:
        - name: increase-vm-max-map
          image: busybox
          command: ["sysctl", "-w", "vm.max_map_count=262144"]
          securityContext:
            privileged: true
        - name: fix-permissions   # 容器名不能包含空格;对数据目录授权
          image: busybox
          command: ["sh", "-c", "chown 1000:1000 /usr/share/elasticsearch/data"]
          volumeMounts:
            - name: es-pvc
              mountPath: /usr/share/elasticsearch/data
        - name: increase-fd-ulimit
          image: busybox
          command: ["sh", "-c", "ulimit -n 65536"]
          securityContext:
            privileged: true
      containers:
        - name: elasticsearch
          image: elasticsearch:7.2.0
          ports:
            - name: db
              containerPort: 9200
            - name: int
              containerPort: 9300
          resources:
            limits:
              cpu: 1000m
            requests:
              cpu: 1000m
          volumeMounts:
            - name: es-pvc
              mountPath: /usr/share/elasticsearch/data
          env:
            - name: cluster.name
              value: k8s-logs
            - name: node.name
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: cluster.initial_master_nodes
              value: "es-cluster-0,es-cluster-1,es-cluster-2"
            - name: discovery.zen.minimum_master_nodes
              value: "2"
            - name: discovery.seed_hosts
              value: "elasticsearch"
            - name: ES_JAVA_OPTS
              value: "-Xms512m -Xmx512m"
            - name: network.host
              value: "0.0.0.0"
  volumeClaimTemplates:
    - metadata:
        name: es-pvc
        labels:
          app: elasticsearch
      spec:
        accessModes: [ "ReadWriteMany" ]
        storageClassName: "managed-nfs-storage"
        resources:
          requests:
            storage: 10Gi

8.创建 headless service

编写 es-svc.yaml 文件,为 elasticsearch 的 pod 创建一个 headless service,并在 service

中声明 9200 和 9300 端口。

将 kubectl get svc 命令的返回结果提交到答题框。【2 分】

es-svc.yaml

[root@master nfs]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch ClusterIP None <none> 9200/TCP,9300/TCP 17s

解法:

[root@master nfs]# cat es-svc.yaml
kind: Service
apiVersion: v1
metadata:
  name: elasticsearch
  labels:
    app: elasticsearch
spec:
  selector:
    app: elasticsearch
  clusterIP: None
  ports:
    - port: 9200
      name: api
    - port: 9300
      name: int


9.Kibana 可视化 UI 界面部署

编写 kibana.yaml,通过该文件创建 deployment 和 service,其中 deployment 基于

kibana:7.2.0 镜像创建并通过环境变量 ELASTICSEARCH_URL 指定 elasticsearch 服务地址;

service 代理 kibana 的 pod 服务,并且使用 NodePort 类型。创建成功后在浏览器访问 Kibana

的 UI 界面。

将 kubectl get svc 命令的返回结果提交到答题框。【2 分】

[root@master nfs]# kubectl get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
elasticsearch ClusterIP None <none> 9200/TCP,9300/TCP 12m
kibana NodePort 10.104.107.171 <none> 5601:30601/TCP 82s
kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 5d7h

解法:

[root@master nfs]# cat kibana.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kibana
  labels:
    k8s-app: kibana
spec:
  replicas: 1
  selector:
    matchLabels:
      k8s-app: kibana
  template:
    metadata:
      labels:
        k8s-app: kibana
    spec:
      containers:
        - name: kibana
          image: kibana:7.2.0
          resources:
            limits:
              cpu: 2
              memory: 2Gi
            requests:
              cpu: 0.5
              memory: 500Mi
          env:
            - name: ELASTICSEARCH_HOSTS
              value: http://elasticsearch.default:9200
            - name: ELASTICSEARCH_URL
              value: http://elasticsearch.default:9200
            - name: I18N_LOCALE
              value: zh-CN
          ports:
            - containerPort: 5601
              name: ui
              protocol: TCP
---
apiVersion: v1
kind: Service
metadata:
  name: kibana
spec:
  type: NodePort
  ports:
    - port: 5601
      protocol: TCP
      targetPort: ui
      nodePort: 30601
  selector:
    k8s-app: kibana

10.Fluentd 组件部署

编写 fluentd.yaml,通过 yaml 文件创建 DaemonSet 控制器部署 fluentd 服务,并在该文

件中同时编写相关的 sa 账号和 rbac 内容,创建成功后保证可以正确采集容器内的日志。

将 kubectl get pods 命令的返回结果提交到答题框。【2 分】
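No solution is given for this step in the original text. Below is a hedged sketch of a typical fluentd.yaml (ServiceAccount, RBAC and DaemonSet in the default namespace, matching where elasticsearch was deployed above); the image name and the FLUENT_ELASTICSEARCH_* variables come from the public fluent/fluentd-kubernetes-daemonset image and are assumptions about what the provided images actually contain.

[root@master yaml]# cat fluentd.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: fluentd
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: fluentd
rules:
  - apiGroups: [""]
    resources: ["pods", "namespaces"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: fluentd
roleRef:
  kind: ClusterRole
  name: fluentd
  apiGroup: rbac.authorization.k8s.io
subjects:
  - kind: ServiceAccount
    name: fluentd
    namespace: default
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: default
  labels:
    app: fluentd
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      serviceAccountName: fluentd
      tolerations:
        - key: node-role.kubernetes.io/master
          effect: NoSchedule
      containers:
        - name: fluentd
          image: fluent/fluentd-kubernetes-daemonset:v1.4.2-debian-elasticsearch-1.1
          env:
            - name: FLUENT_ELASTICSEARCH_HOST
              value: "elasticsearch.default.svc.cluster.local"
            - name: FLUENT_ELASTICSEARCH_PORT
              value: "9200"
          volumeMounts:
            - name: varlog
              mountPath: /var/log
            - name: dockercontainers
              mountPath: /var/lib/docker/containers
              readOnly: true
      volumes:
        - name: varlog
          hostPath:
            path: /var/log
        - name: dockercontainers
          hostPath:
            path: /var/lib/docker/containers

[root@master yaml]# kubectl apply -f fluentd.yaml
[root@master yaml]# kubectl get pods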

C 模块题目:企业级应用的自动化部署和运维

###环境
192.168.157.20 ansible-control ansible
192.168.157.21 ansible-compute1 host1
192.168.157.22 ansible-compute2 host2
192.168.157.23 ansible-compute3

任务 1 企业级应用的自动化部署(10 分)

1.Ansible 自动化运维工具部署主从数据库

(1)修改主机名 ansible 节点主机名为 ansible,host1 节点主机名为 host1,host2 节点主机名为

host2,请使用提供的软件包在 ansible 节点安装 ansible。

将 ansible --version 命令的返回结果提交到答题框。【1 分】

[root@ansible ~]#  ansible --version
ansible 2.9.27
config file = /etc/ansible/ansible.cfg
configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
ansible python module location = /usr/lib/python2.7/site-packages/ansible
executable location = /usr/bin/ansible
python version = 2.7.5 (default, Apr 11 2018, 07:36:10) [GCC 4.8.5 20150623 (Red Hat 4.8.5-28)]

解法:

#设置主机名(其他节点同理)
hostnamectl set-hostname ansible
#安装依赖
[root@ansible ~]# yum install -y jinja2 PyYAML cryptography

[root@ansible ~]# rpm -ivh ansible-2.4.6.0-1.el7.ans.noarch.rpm

[root@ansible ~]# ansible --version

(2)配置主机清单文件,创建 mysql 主机组,mysql 主机组内添加 host1 和 host2 主机;创

建 mysql1 主机组,mysql1 组内添加 host1 主机;创建 mysql2 主机组,mysql2 组内添加 host2

主机,并配置免密登录。

将 ansible all -m ping 命令的返回结果提交到答题框。【1 分】

[root@ansible ~]# ansible all -m ping
host2 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}
host1 | SUCCESS => {
"ansible_facts": {
"discovered_interpreter_python": "/usr/bin/python"
},
"changed": false,
"ping": "pong"
}

解法:

[root@ansible ~]# cat /etc/ansible/hosts
[mysql]
host1
host2
[mysql1]
host1
[mysql2]
host2

##设置免密登录
[root@ansible ~]# ssh-keygen

[root@ansible ~]# ssh-copy-id root@192.168.157.20
[root@ansible ~]# ssh-copy-id root@192.168.157.21
[root@ansible ~]# ssh-copy-id root@192.168.157.22

(3)mysql 主机组内所有主机安装 mariadb 数据库,启动数据库并设置为开机自启动。

在 host1 节点将 systemctl status mariadb 命令的返回结果提交到答题框。【1 分】


[root@host1 ~]# systemctl status mariadb
● mariadb.service - MariaDB database server
Loaded: loaded (/usr/lib/systemd/system/mariadb.service; enabled; vendor preset: disabled)
Active: active (running) since Thu 2023-03-16 01:04:32 EDT; 56min ago
Process: 992 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=0/SUCCESS)
Process: 897 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
Main PID: 990 (mysqld_safe)
CGroup: /system.slice/mariadb.service
├─ 990 /bin/sh /usr/bin/mysqld_safe --basedir=/usr
└─1259 /usr/libexec/mysqld --basedir=/usr --datadir=/var/lib/mysql --plugin-dir=/usr/lib64/mysql/plugin --log-error=/var/log/mariadb/mariadb.log...

Mar 16 01:04:30 ansible-compute1 systemd[1]: Starting MariaDB database server...
Mar 16 01:04:30 ansible-compute1 mariadb-prepare-db-dir[897]: Database MariaDB is probably initialized in /var/lib/mysql already, nothing is done.
Mar 16 01:04:30 ansible-compute1 mariadb-prepare-db-dir[897]: If this is not the case, make sure the /var/lib/mysql is empty before running mariadb-...db-dir.
Mar 16 01:04:30 ansible-compute1 mysqld_safe[990]: 230316 01:04:30 mysqld_safe Logging to '/var/log/mariadb/mariadb.log'.
Mar 16 01:04:30 ansible-compute1 mysqld_safe[990]: 230316 01:04:30 mysqld_safe Starting mysqld daemon with databases from /var/lib/mysql
Mar 16 01:04:32 ansible-compute1 systemd[1]: Started MariaDB database server.
Hint: Some lines were ellipsized, use -l to show in full.

解法:

[root@ansible ~]# cat mysql.yaml
---
- name: install mariadb
  hosts: mysql
  tasks:
    - name: install mariadb
      yum: name=mariadb-server state=present
    - name: start mariadb
      service: name=mariadb state=started enabled=yes

[root@ansible ~]# ansible-playbook mysql.yaml

(4)编写一个名称为 mariadb.sh 的 shell 脚本,该脚本具有完成 mariadb 数据库的初始化功能

(要求数据库用户名为 root,密码为 123456),通过 ansible 对应模块执行 mariadb.sh 完成对

mysql 主机组下的所有节点进行数据库初始化。

在 host1 节点,将 mysql -uroot -p123456 命令的返回结果提交到答题框。【1 分】

[root@host1 ~]#  mysql -uroot -p123456
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 3
Server version: 5.5.68-MariaDB MariaDB Server

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

解法:

[root@ansible ~]# cat mariadb.sh
#!/bin/bash
mysqladmin -u root password "123456"
[root@ansible ~]# cat mysql-sh.yaml
---
- name: init mariadb
  hosts: mysql
  tasks:
    - name: init mariadb
      script: /root/mariadb.sh

[root@ansible ~]# ansible-playbook mysql-sh.yaml

(5)创建主机变量,所有主机组中 host1 节点创建变量 id=20,host2 节点创建变量 id=30。

将 cat /etc/ansible/hosts | grep id 命令的返回结果提交到答题框。【1 分】

[root@ansible ~]#  cat /etc/ansible/hosts | grep id
id=20
id=30

解法:

[root@ansible ~]# cat /etc/ansible/hosts
[mysql]
host1
host2
[mysql1]
host1
[mysql2]
host2
[mysql1:vars]
id=20
[mysql2:vars]
id=30

(6)根据 mysql 配置文件创建 mysql 的 Jinja2 模板文件命名为 my.cnf.j2,编写 mariadb.yaml

文件实现主从数据库的配置和权限设置。

在 ansible 节点通过 cat mariadb.yaml 命令查看文件内容返回结果提交到答题框,在 host2

节点进入数据库将 show slave status \G 命令的返回结果提交到答题框。【1 分】

MariaDB [(none)]> show slave status \G;
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 192.168.157.21
Master_User: user
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000001
Read_Master_Log_Pos: 524
Relay_Log_File: ansible-compute2-relay-bin.000002
Relay_Log_Pos: 808
Relay_Master_Log_File: mysql-bin.000001
Slave_IO_Running: Yes
Slave_SQL_Running: Yes
Replicate_Do_DB:
Replicate_Ignore_DB:
Replicate_Do_Table:
Replicate_Ignore_Table:
Replicate_Wild_Do_Table:
Replicate_Wild_Ignore_Table:
Last_Errno: 0
Last_Error:
Skip_Counter: 0
Exec_Master_Log_Pos: 524
Relay_Log_Space: 1113
Until_Condition: None
Until_Log_File:
Until_Log_Pos: 0
Master_SSL_Allowed: No
Master_SSL_CA_File:
Master_SSL_CA_Path:
Master_SSL_Cert:
Master_SSL_Cipher:
Master_SSL_Key:
Seconds_Behind_Master: 0
Master_SSL_Verify_Server_Cert: No
Last_IO_Errno: 0
Last_IO_Error:
Last_SQL_Errno: 0
Last_SQL_Error:
Replicate_Ignore_Server_Ids:
Master_Server_Id: 20
1 row in set (0.00 sec)

解法:

[root@ansible ~]# cat my.cnf.j2
[mysqld]
log_bin=mysql-bin
server_id={{ id }}

[root@ansible ~]# cat mariadb.yaml
---
- name: config mariadb
  hosts: mysql1,mysql2
  tasks:
    - name: config my.cnf
      template: src=my.cnf.j2 dest=/etc/my.cnf
    - name: restart mariadb
      service: name=mariadb state=restarted enabled=yes
    - name: grant user
      shell: mysql -uroot -p123456 -e "grant all privileges on *.* to root@'%' identified by '123456';"
      when: inventory_hostname in groups.mysql1
    - name: master create user
      shell: mysql -uroot -p123456 -e "grant replication slave on *.* to 'user'@'%' identified by '123456';"
      when: inventory_hostname in groups.mysql1
    - name: node
      shell: mysql -uroot -p123456 -e "change master to master_host='192.168.157.21',master_user='user',master_password='123456';"
      when: inventory_hostname in groups.mysql2
    - name: start slave
      shell: mysql -uroot -p123456 -e "start slave;"
      when: inventory_hostname in groups.mysql2

[root@ansible ~]# ansible-playbook mariadb.yaml

2.Ansible 自动化运维工具部署 zookeeper 集群

zookeeper 是一个分布式服务框架,是 Apache Hadoop 的一个子项目,主要是用来解决

分布式应用中经常遇到的一些数据管理问题,如:统一命名服务、状态同步服务、集群管理、

分布式应用配置项的管理等。gpmall 商城系统中用到了 kafka 消息队列,kafka 集群的搭建

依赖 zookeeper 集群来进行元数据的管理。

(1)编写主机清单文件,创建 zookeeper 主机组,zookeeper 主机组内添加 ansible、host1

和 host2 主机,分别创建主机变量 zk_id=私有 IP 最后一个数字。

将 ansible all -a “id”命令的返回结果提交到答题框。【2 分】

root@ansible ~]# ansible all -a "id"
ansible | CHANGED | rc=0 >>
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
host1 | CHANGED | rc=0 >>
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
host2 | CHANGED | rc=0 >>
uid=0(root) gid=0(root) groups=0(root) context=unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023

解法:

[root@ansible ~]# cat /etc/ansible/hosts
[zookeeper]
ansible zk_id=1
host1 zk_id=2
host2 zk_id=3

(2)在 ansible 节点,使用提供的 zookeeper-3.4.14.tar.gz 软件包,编写 zookeeper.yaml 文件,

实现 zookeeper 集群搭建,创建任务清单实现 zookeeper 安装包批量解压、通过 Jinja2 模板

文件配置 zookeeper、创建 zookeeper 的 myid 文件和批量启动 zookeeper 功能。在三个节点

相应的目录使用./zkServer.sh status 命令查看三个 Zookeeper 节点的状态。

在 ansible 主机上将 cat zookeeper.yaml 命令结果提交到答题框,将 jps 命令的返回结果

提交到答题框。【2 分】

[root@ansible zookeeper]# jps
7620 Jps
7574 QuorumPeerMain

[root@ansible zookeeper]# cat zookeeper.yaml
---
- hosts: zookeeper
  tasks:
    - name: install inneed
      yum: name=java-1.8.0-openjdk* state=present
    - name: tar zookeeper
      copy: src=/root/zookeeper-3.4.14.tar.gz dest=/opt/
    - name: tar zookeeper
      shell: tar zxvf /opt/zookeeper-3.4.14.tar.gz -C /opt
    - name: copy
      copy: src=zoo.cfg dest=/opt/zookeeper-3.4.14/conf/
    - name: create file
      file: path=/tmp/zookeeper state=directory
    - name: copy j2id
      template: src=myid.j2 dest=/tmp/zookeeper/myid
    - name: start zk
      shell: "/opt/zookeeper-3.4.14/bin/zkServer.sh start"

解法:

[root@ansible zookeeper]# cat zoo.cfg
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper
clientPort=2181
server.1=ansible:2888:3888
server.2=host1:2888:3888
server.3=host2:2888:3888

[root@ansible zookeeper]# cat myid.j2
{{ zk_id }}

任务 2 应用商城系统部署【10 分】

1.在 ansible 节点,使用提供的 gpmall-cluster 软件包,完成集群应用系统部署。部署完成后,

进行登录,最后使用 curl 命令去获取商城首页的返回信息,

先将 netstat -ntpl 命令的返回结果提交到答题框,然后将 curl -l http://EIP:80 命令的返回

结果提交到答题框。【10 分】


#在 ansible 节点进行操作
将软件包上传,目前已完成 mariadb、zookeeper 部署

[root@ansible ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.157.20 ansible-control ansible
192.168.157.21 ansible-compute1 host1
192.168.157.22 ansible-compute2 host2
192.168.157.23 ansible-compute3
127.0.0.1 mysql.mall
127.0.0.1 redis.mall
127.0.0.1 zk1.mall
127.0.0.1 kafka1.mall

##配置kafka
[root@ansible ~]# cd kafka_2.11-1.1.1
[root@ansible kafka_2.11-1.1.1]# ls
bin config libs LICENSE NOTICE site-docs
[root@ansible kafka_2.11-1.1.1]# cd bin/
[root@ansible bin]# ./kafka-server-start.sh -daemon ../config/server.properties

#配置redis
# yum install -y redis
# sed -i 's/bind 127.0.0.1/#bind 127.0.0.1/g' /etc/redis.conf
# sed -i 's/protected-mode yes/protected-mode no/g' /etc/redis.conf
# redis-server /etc/redis.conf

#配置mariadb
mysql -u root -p123456 -e "grant all on *.* to 'root'@'%' identified by '123456';"
mysql -u root -p123456 -e 'create database gpmall default character set=utf8;'
mysql -u root -p123456 -e "use gpmall; source /root/gpmall.sql"
##修改端口
[root@ansible bin]# cat /etc/my.cnf

[mysqld]
port=8066


#配置nginx
# vi /etc/nginx/conf.d/default.conf
server {
    listen       80;
    server_name  localhost;

    #charset koi8-r;
    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/nginx/html;
        index  index.html index.htm;
    }
    location /user {
        proxy_pass http://127.0.0.1:8082;
    }

    location /shopping {
        proxy_pass http://127.0.0.1:8081;
    }

    location /cashier {
        proxy_pass http://127.0.0.1:8083;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    #location ~ \.php$ {
    #    root           html;
    #    fastcgi_pass   127.0.0.1:9000;
    #    fastcgi_index  index.php;
    #    fastcgi_param  SCRIPT_FILENAME  /scripts$fastcgi_script_name;
    #    include        fastcgi_params;
    #}

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

systemctl enable nginx

# rm -rf /usr/share/nginx/html/*
# cp -rvf dist/* /usr/share/nginx/html/



#启动jar包
nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &
nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
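Once the four jars are up, the listening ports can be checked and the storefront fetched. The commands below match what the task asks to submit; 192.168.157.20 is the ansible node's IP from the environment table above:

[root@ansible ~]# netstat -ntpl
[root@ansible ~]# curl -l http://192.168.157.20:80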