A模块:OpenStack平台部署与运维(样题)

业务场景:

某企业拟使用OpenStack搭建一个企业云平台,用于部署各类企业应用并对内对外提供服务。云平台可实现IT资源池化、弹性分配、集中管理、性能优化以及统一安全认证等。系统结构如下图所示。

企业云平台的搭建使用竞赛平台提供的两台云服务器,配置如下表:

说明:

①选手自行检查工位PC机硬件及网络是否正常;

②竞赛使用集群模式进行,给每个参赛队提供华为云账号和密码及考试系统的账号和密码。选手通过用户名与密码分别登录华为云和考试系统;

③考试用到的软件包都在云主机/opt下。

④表1中的公网IP和私网IP以自己云主机显示为准,每个人的公网IP和私网IP不同。使用第三方软件远程连接云主机,使用公网IP连接。

任务1私有云平台环境初始化

①根据表1中的IP地址规划,设置各服务器节点的IP地址,确保网络正常通信,设置云服务器1主机名为Controller,云服务器2主机名为Compute,并修改hosts文件将IP地址映射为主机名,关闭防火墙并设置为开机不启动,设置SELinux为Permissive 模式。

[root@localhost ~]# hostnamectl set-hostname controller

###ip地址映射主机,注意该ip地址为自身环境地址
[root@localhost ~]# vi /etc/hosts
[root@localhost ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.157.20 controller
192.168.157.21 compute

###关闭防火墙
[root@localhost ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service

##关闭安全策略
[root@localhost ~]# setenforce 0
[root@localhost ~]# cat /etc/selinux/config | grep -v ^$ | grep -v ^#
SELINUX=permissive
SELINUXTYPE=targeted

##compute节点修改主机名即可,其他配置相同
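##（补充示例）SELinux 的持久化配置也可以用 sed 一步完成;compute 节点除主机名设置为 compute 外,其余操作与 controller 相同:
[root@localhost ~]# sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
[root@localhost ~]# hostnamectl set-hostname compute    ##该条在云服务器2上执行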

②将提供的CentOS-7-x86_64-DVD-1804.iso和qdkills_cloud_iaas.iso光盘镜像上传到Compute节点的/root目录下,然后在/opt目录下分别创建centos目录和openstack目录,并将镜像文件CentOS-7-x86_64-DVD-1804.iso挂载到centos目录下,将镜像文件qdkills_cloud_iaas.iso挂载到openstack目录下。

[root@localhost ~]# mkdir /opt/centos
[root@localhost ~]# mkdir /opt/openstack

[root@localhost ~]# mount CentOS-7-x86_64-DVD-1804.iso /opt/centos/
mount: /dev/loop0 is write-protected, mounting read-only
[root@localhost ~]# mount chinaskills_cloud_iaas.iso /opt/openstack/
mount: /dev/loop1 is write-protected, mounting read-only
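
##（补充示例）挂载完成后可按如下方式验证,目录内容以实际镜像为准:
[root@localhost ~]# df -h | grep /opt
[root@localhost ~]# ls /opt/openstack/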

③在Compute节点上利用centos目录中的软件包安装vsftpd服务器并设置开机自启动,提供yum仓库服务,并分别设置controller节点和compute节点的yum源文件ftp.repo,其中节点的地址使用IP形式。

在compute节点

[root@localhost ~]# mv /etc/yum.repos.d/CentOS-* /home/
[root@localhost ~]# vi /etc/yum.repos.d/ftp.repo
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[openstack]
name=centos
baseurl=file:///opt/openstack/iaas-repo
gpgcheck=0
enabled=1


[root@localhost ~]# yum repolist

##安装并配置vsftp
[root@localhost ~]# yum install -y vsftpd
[root@localhost ~]# echo "anon_root=/opt" >> /etc/vsftpd/vsftpd.conf
[root@localhost ~]# systemctl enable --now vsftpd
Created symlink from /etc/systemd/system/multi-user.target.wants/vsftpd.service to /usr/lib/systemd/system/vsftpd.service.

在controller节点

[root@localhost ~]# cat /etc/yum.repos.d/ftp.repo
[centos]
name=centos
baseurl=ftp://192.168.157.21/centos
enabled=1
gpgcheck=0
[openstack]
name=centos
baseurl=ftp://192.168.157.21/openstack/iaas-repo
enabled=1
gpgcheck=0

[root@localhost ~]# yum repolist

④在Controller节点上部署chrony服务器,允许其他节点同步时间,启动服务并设置为开机启动;并在compute节点上指定controller节点为上游NTP服务器,重启服务并设为开机启动。

controller节点

vi /etc/chrony.conf

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server controller iburst
# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking
allow 192.168.157.0/24
local stratum 10

###使配置生效
systemctl restart chronyd
systemctl enable chronyd
chronyc sources

compute节点

vi /etc/chrony.conf

# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server controller iburst
# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking

###使配置生效
systemctl restart chronyd
systemctl enable chronyd
chronyc sources

⑤在compute节点上查看分区情况,并利用空白分区划分2个20G分区。

compute节点

##查看空白分区
[root@localhost ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 50G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 49G 0 part
├─centos-root 253:0 0 44G 0 lvm /
└─centos-swap 253:1 0 5G 0 lvm [SWAP]
sdb 8:16 0 40G 0 disk
sr0 11:0 1 1024M 0 rom


[root@localhost ~]# fdisk /dev/sdb
Welcome to fdisk (util-linux 2.23.2).

Changes will remain in memory only, until you decide to write them.
Be careful before using the write command.

Device does not contain a recognized partition table
Building a new DOS disklabel with disk identifier 0x4cacbd86.

Command (m for help): n
Partition type:
p primary (0 primary, 0 extended, 4 free)
e extended
Select (default p):
Using default response p
Partition number (1-4, default 1):
First sector (2048-83886079, default 2048):
Using default value 2048
Last sector, +sectors or +size{K,M,G} (2048-83886079, default 83886079): +20G
Partition 1 of type Linux and of size 20 GiB is set

Command (m for help): n
Partition type:
p primary (1 primary, 0 extended, 3 free)
e extended
Select (default p):
Using default response p
Partition number (2-4, default 2):
First sector (41945088-83886079, default 41945088):
Using default value 41945088
Last sector, +sectors or +size{K,M,G} (41945088-83886079, default 83886079): +20G
Using default value 83886079
Partition 2 of type Linux and of size 20 GiB is set

Command (m for help): w
The partition table has been altered!

Calling ioctl() to re-read partition table.
Syncing disks.
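
##（补充示例）分区完成后可重新读取分区表并确认结果:
[root@localhost ~]# partprobe /dev/sdb
[root@localhost ~]# lsblk /dev/sdb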

任务2 OpenStack平台搭建

①在 controller 节点和 compute 节点分别安装 iaas-xiandian 软件包,修改脚本文件基本变量(脚本文件为/etc/xiandian/openrc.sh),修改完成后使用命令生效该脚本文件。

②在 compute 节点配置/etc/xiandian/openrc.sh 文件,根据环境情况修改参数,块存储服务的后端使用第二块硬盘的第一个分区,生效该参数文件。

controller节点

[root@localhost ~]# yum install -y iaas-xiandian
[root@localhost ~]# vi /etc/xiandian/openrc.sh
[root@localhost ~]# cat /etc/xiandian/openrc.sh | grep -v ^$ | grep -v ^#
HOST_IP=192.168.157.20
HOST_PASS=000000
HOST_NAME=controller
HOST_IP_NODE=192.168.157.21
HOST_PASS_NODE=000000
HOST_NAME_NODE=compute
network_segment_IP=192.168.157.0/24
RABBIT_USER=openstack
RABBIT_PASS=000000
DB_PASS=000000
DOMAIN_NAME=demo
ADMIN_PASS=000000
DEMO_PASS=000000
KEYSTONE_DBPASS=000000
GLANCE_DBPASS=000000
GLANCE_PASS=000000
NOVA_DBPASS=000000
NOVA_PASS=000000
NEUTRON_DBPASS=000000
NEUTRON_PASS=000000
METADATA_SECRET=000000
INTERFACE_IP=192.168.157.20
INTERFACE_NAME=ens34
Physical_NAME=provider
minvlan=101
maxvlan=200
CINDER_DBPASS=000000
CINDER_PASS=000000
BLOCK_DISK=sdb1
SWIFT_PASS=000000
OBJECT_DISK=sdb2
STORAGE_LOCAL_NET_IP=192.168.157.21
HEAT_DBPASS=000000
HEAT_PASS=000000
ZUN_DBPASS=000000
ZUN_PASS=000000
KURYR_DBPASS=000000
KURYR_PASS=000000
CEILOMETER_DBPASS=000000
CEILOMETER_PASS=000000
AODH_DBPASS=000000
AODH_PASS=000000
BARBICAN_DBPASS=000000
BARBICAN_PASS=000000

##将配置文件复制到compute
[root@localhost ~]# scp /etc/xiandian/openrc.sh root@compute:/etc/xiandian/openrc.sh
The authenticity of host 'compute (192.168.157.21)' can't be established.
ECDSA key fingerprint is SHA256:571qhtjNb3asAlUU69GoE8W2Eel7T4VD8/VbitmzBxQ.
ECDSA key fingerprint is MD5:9d:69:e5:7f:58:f8:84:87:9c:d2:1a:39:7b:9f:53:03.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'compute,192.168.157.21' (ECDSA) to the list of known hosts.
root@compute's password:
openrc.sh 100% 3819 2.5MB/s 00:00

compute节点:

[root@localhost ~]# cat /etc/xiandian/openrc.sh
#--------------------system Config--------------------##
#Controller Server Manager IP. example:x.x.x.x
HOST_IP=192.168.157.20

#Controller HOST Password. example:000000
HOST_PASS=000000

#Controller Server hostname. example:controller
HOST_NAME=controller

#Compute Node Manager IP. example:x.x.x.x
HOST_IP_NODE=192.168.157.21

#Compute HOST Password. example:000000
HOST_PASS_NODE=000000

#Compute Node hostname. example:compute
HOST_NAME_NODE=compute

#--------------------Chrony Config-------------------##
#Controller network segment IP. example:x.x.0.0/16(x.x.x.0/24)
network_segment_IP=192.168.157.0/24

#--------------------Rabbit Config ------------------##
#user for rabbit. example:openstack
RABBIT_USER=openstack

#Password for rabbit user .example:000000
RABBIT_PASS=000000

#--------------------MySQL Config---------------------##
#Password for MySQL root user . exmaple:000000
DB_PASS=000000

#--------------------Keystone Config------------------##
#Password for Keystore admin user. exmaple:000000
DOMAIN_NAME=demo
ADMIN_PASS=000000
DEMO_PASS=000000

#Password for Mysql keystore user. exmaple:000000
KEYSTONE_DBPASS=000000

#--------------------Glance Config--------------------##
#Password for Mysql glance user. exmaple:000000
GLANCE_DBPASS=000000

#Password for Keystore glance user. exmaple:000000
GLANCE_PASS=000000

#--------------------Nova Config----------------------##
#Password for Mysql nova user. exmaple:000000
NOVA_DBPASS=000000

#Password for Keystore nova user. exmaple:000000
NOVA_PASS=000000

#--------------------Neturon Config-------------------##
#Password for Mysql neutron user. exmaple:000000
NEUTRON_DBPASS=000000

#Password for Keystore neutron user. exmaple:000000
NEUTRON_PASS=000000

#metadata secret for neutron. exmaple:000000
METADATA_SECRET=000000

#Tunnel Network Interface. example:x.x.x.x
INTERFACE_IP=192.168.157.21

#External Network Interface. example:eth1
INTERFACE_NAME=ens34

#External Network The Physical Adapter. example:provider
Physical_NAME=provider

#First Vlan ID in VLAN RANGE for VLAN Network. exmaple:101
minvlan=101

#Last Vlan ID in VLAN RANGE for VLAN Network. example:200
maxvlan=200

#--------------------Cinder Config--------------------##
#Password for Mysql cinder user. exmaple:000000
CINDER_DBPASS=000000

#Password for Keystore cinder user. exmaple:000000
CINDER_PASS=000000

#Cinder Block Disk. example:md126p3
BLOCK_DISK=sdb1

#--------------------Swift Config---------------------##
#Password for Keystore swift user. exmaple:000000
SWIFT_PASS=000000

#The NODE Object Disk for Swift. example:md126p4.
OBJECT_DISK=sdb2

#The NODE IP for Swift Storage Network. example:x.x.x.x.
STORAGE_LOCAL_NET_IP=192.168.157.21

#--------------------Heat Config----------------------##
#Password for Mysql heat user. exmaple:000000
HEAT_DBPASS=000000

#Password for Keystore heat user. exmaple:000000
HEAT_PASS=000000

#--------------------Zun Config-----------------------##
#Password for Mysql Zun user. exmaple:000000
ZUN_DBPASS=000000

#Password for Keystore Zun user. exmaple:000000
ZUN_PASS=000000

#Password for Mysql Kuryr user. exmaple:000000
KURYR_DBPASS=000000

#Password for Keystore Kuryr user. exmaple:000000
KURYR_PASS=000000

#--------------------Ceilometer Config----------------##
#Password for Gnocchi ceilometer user. exmaple:000000
CEILOMETER_DBPASS=000000

#Password for Keystore ceilometer user. exmaple:000000
CEILOMETER_PASS=000000

#--------------------AODH Config----------------##
#Password for Mysql AODH user. exmaple:000000
AODH_DBPASS=000000

#Password for Keystore AODH user. exmaple:000000
AODH_PASS=000000

#--------------------Barbican Config----------------##
#Password for Mysql Barbican user. exmaple:000000
BARBICAN_DBPASS=000000

#Password for Keystore Barbican user. exmaple:000000
BARBICAN_PASS=000000
[root@localhost ~]# cat /etc/xiandian/openrc.sh | grep -v ^# | grep -v ^$
HOST_IP=192.168.157.20
HOST_PASS=000000
HOST_NAME=controller
HOST_IP_NODE=192.168.157.21
HOST_PASS_NODE=000000
HOST_NAME_NODE=compute
network_segment_IP=192.168.157.0/24
RABBIT_USER=openstack
RABBIT_PASS=000000
DB_PASS=000000
DOMAIN_NAME=demo
ADMIN_PASS=000000
DEMO_PASS=000000
KEYSTONE_DBPASS=000000
GLANCE_DBPASS=000000
GLANCE_PASS=000000
NOVA_DBPASS=000000
NOVA_PASS=000000
NEUTRON_DBPASS=000000
NEUTRON_PASS=000000
METADATA_SECRET=000000
INTERFACE_IP=192.168.157.21
INTERFACE_NAME=ens34
Physical_NAME=provider
minvlan=101
maxvlan=200
CINDER_DBPASS=000000
CINDER_PASS=000000
BLOCK_DISK=sdb1
SWIFT_PASS=000000
OBJECT_DISK=sdb2
STORAGE_LOCAL_NET_IP=192.168.157.21
HEAT_DBPASS=000000
HEAT_PASS=000000
ZUN_DBPASS=000000
ZUN_PASS=000000
KURYR_DBPASS=000000
KURYR_PASS=000000
CEILOMETER_DBPASS=000000
CEILOMETER_PASS=000000
AODH_DBPASS=000000
AODH_PASS=000000
BARBICAN_DBPASS=000000
BARBICAN_PASS=000000
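
##（补充示例）按题目要求还需生效该参数文件,可在 controller 和 compute 节点分别执行:
[root@localhost ~]# source /etc/xiandian/openrc.sh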

③分别在 controller 节点和 compute 节点执行 iaas-pre-host.sh 文件(不需要重启云主机)。

[root@controller ~]# iaas-pre-host.sh
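##按题目要求 compute 节点同样执行该脚本:
[root@compute ~]# iaas-pre-host.sh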

④在 controller 节点执行 iaas-install-mysql.sh 脚本,会自行安装 mariadb、memcached、rabbitmq 等服务和完成相关配置。执行完成后修改配置文件将缓存 CACHESIZE 修改为 128,并重启相应服务。

[root@controller ~]# iaas-install-mysql.sh
[root@controller ~]# cat /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="128"
OPTIONS="-l 127.0.0.1,::1,controller"

#重启生效
[root@controller ~]# systemctl restart memcached
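
##（补充示例）CACHESIZE 的修改也可以用 sed 完成,与直接编辑配置文件等效:
[root@controller ~]# sed -i 's/^CACHESIZE=.*/CACHESIZE="128"/' /etc/sysconfig/memcached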

⑤在 controller 节点执行 iaas-install-keystone.sh 脚本,会自行安装 keystone 服务和完成相关配置。使用 openstack 命令,创建一个名为 tom 的账户,密码为 tompassword123,邮箱为tom@example.com

[root@controller ~]# iaas-install-keystone.sh
[root@controller ~]# source /etc/keystone/admin-openrc.sh
[root@controller ~]# openstack user create tom --password tompassword123 --email tom@example.com --domain demo
+---------------------+----------------------------------+
| Field | Value |
+---------------------+----------------------------------+
| domain_id | c7a3303c6f7748f2b22f6421149226b5 |
| email | tom@example.com |
| enabled | True |
| id | 131e1e035c174fd0a10862fe47844cf1 |
| name | tom |
| options | {} |
| password_expires_at | None |
+---------------------+----------------------------------+

⑥在 controller 节点执行 iaas-install-glance.sh 脚本,会自行安装 glance 服务和完成相关配置。完成后使用 openstack 命令,创建一个名为 cirros 的镜像,镜像文件使用 cirros-0.3.4-x86_64-disk.img。

###注意请将镜像文件上传
[root@controller ~]# iaas-install-glance.sh

[root@controller ~]# openstack image create --disk-format qcow2 --container-format bare --file cirros-0.3.4-x86_64-disk.img cirros
+------------------+------------------------------------------------------+
| Field | Value |
+------------------+------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2023-04-17T03:13:42Z |
| disk_format | qcow2 |
| file | /v2/images/c51eee70-1885-4482-9252-d808c2832cdb/file |
| id | c51eee70-1885-4482-9252-d808c2832cdb |
| min_disk | 0 |
| min_ram | 0 |
| name | cirros |
| owner | 3b0c23a093dd4f11bbd8d7316634b784 |
| protected | False |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2023-04-17T03:13:42Z |
| virtual_size | None |
| visibility | shared |
+------------------+------------------------------------------------------+

⑦在 controller 节点执行 iaas-install-nova-controller.sh,compute 节点执行iaas-install-nova-compute.sh,会自行安装 nova 服务和完成相关配置。使用 nova 命令创建一个名为 t,ID 为 5,内存为 2048MB,磁盘容量为 10GB,vCPU 数量为 2 的云主机类型。

[root@controller ~]#  iaas-install-nova-controller.sh

[root@compute ~]# iaas-install-nova-compute.sh

##注意在compute节点跑完之后运行
[root@controller ~]# nova flavor-create t 5 2048 10 2
+----+------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| ID | Name | Memory_MB | Disk | Ephemeral | Swap | VCPUs | RXTX_Factor | Is_Public | Description |
+----+------+-----------+------+-----------+------+-------+-------------+-----------+-------------+
| 5 | t | 2048 | 10 | 0 | | 2 | 1.0 | True | - |
+----+------+-----------+------+-----------+------+-------+-------------+-----------+-------------+

⑧在 controller 节点执行 iaas-install-neutron-controller.sh,compute 节点执行 iaas-install-neutron-compute.sh,会自行安装 neutron 服务并完成配置。创建云主机外部网络 ext-net,子网为 ext-subnet,云主机浮动 IP 可用网段为 192.168.10.100 ~ 192.168.10.200,网关为 192.168.100.1。

[root@controller ~]# iaas-install-neutron-controller.sh
[root@compute ~]# iaas-install-neutron-compute.sh
# 创建网络
[root@controller ~]# source /etc/keystone/admin-openrc.sh
[root@controller ~]# openstack network create --external ext-net
# 创建子网
[root@controller ~]# openstack subnet create --gateway 192.168.100.1 --allocation-pool start=192.168.10.100,end=192.168.10.200 --network ext-net --subnet-range 192.168.10.0/24 ext-subnet

[root@controller ~]# openstack subnet show ext-subnet
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field             | Value                                                                                                                                                   |
+-------------------+---------------------------------------------------------------------------------------------------------------------------------------------------------+
| allocation_pools  | 192.168.200.100-192.168.200.200                                                                                                                         |
| cidr              | 192.168.200.0/24                                                                                                                                        |
| created_at        | 2023-02-22T16:29:20Z                                                                                                                                    |
| description       |                                                                                                                                                         |
| dns_nameservers   |                                                                                                                                                         |
| enable_dhcp       | True                                                                                                                                                    |
| gateway_ip        | 192.168.200.1                                                                                                                                           |
| host_routes       |                                                                                                                                                         |
| id                | 6ab2ab75-3a82-44d5-9bc8-c2c0a65872d6                                                                                                                    |
| ip_version        | 4                                                                                                                                                       |
| ipv6_address_mode | None                                                                                                                                                    |
| ipv6_ra_mode      | None                                                                                                                                                    |
| location          | cloud='', project.domain_id=, project.domain_name='Default', project.id='ce21284fd468495995218ea6e1aeea2a', project.name='admin', region_name='', zone= |
| name              | ext-subnet                                                                                                                                              |
| network_id        | bc39443b-9ef8-4a4d-91b3-fd2637ada43f                                                                                                                    |
| prefix_length     | None                                                                                                                                                    |
| project_id        | ce21284fd468495995218ea6e1aeea2a                                                                                                                        |
| revision_number   | 0                                                                                                                                                       |
| segment_id        | None                                                                                                                                                    |
| service_types     |                                                                                                                                                         |
| subnetpool_id     | None                                                                                                                                                    |
| tags              |                                                                                                                                                         |
| updated_at        | 2023-02-22T16:29:20Z 

⑨在 controller 节点执行 iaas-install-dashboard.sh 脚本,会自行安装 dashboard 服务并完成配置。请修改 nova 配置文件,使之能通过公网 IP 访问 dashboard 首页。

[root@controller ~]# iaas-install-dashboard.sh
[root@controller ~]# vim /etc/nova/nova.conf
修改内容如下
novncproxy_base_url = http://公网IP:6080/vnc_auto.html

[root@controller ~]# cat /etc/nova/nova.conf | grep 公网IP
novncproxy_base_url = http://公网IP:6080/vnc_auto.html
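
##（补充示例）修改 nova.conf 后一般需重启 nova 相关服务使配置生效,服务名以实际环境为准:
[root@controller ~]# systemctl restart openstack-nova-*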

任务 3 OpenStack 运维任务

①使用命令创建名称为 group_web 的安全组,该安全组的描述为"Custom security group";用 openstack 命令为该安全组添加 icmp 规则和 ssh 规则,允许任意 IP 地址访问 web,完成后查看该安全组的详细信息。

# 创建描述为Custom security group的安全组
[root@controller ~]# openstack security group create --description "Custom security group" group_web
# 添加访问80
[root@controller ~]# openstack security group rule create --ingress --ethertype IPv4 --protocol tcp --dst-port 80:80 group_web
# 添加访问ssh(22)
[root@controller ~]# openstack security group rule create --ingress --ethertype IPv4 --protocol tcp --dst-port 22:22 group_web
# 添加访问icmp
[root@controller ~]# openstack security group rule create --ingress --protocol icmp group_web
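# 按题目要求查看该安全组的详细信息
[root@controller ~]# openstack security group show group_web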

②在 keystone 中创建 shop 项目,添加描述为"Hello shop",完成后使用 openstack 命令禁用该项目,然后使用 openstack 命令查看该项目的详细信息。

[root@controller ~]# openstack project create shop --description "Hello shop" --domain demo
+-------------+----------------------------------+
| Field | Value |
+-------------+----------------------------------+
| description | Hello shop |
| domain_id | c7a3303c6f7748f2b22f6421149226b5 |
| enabled | True |
| id | d610f67035114665b15c367ab4e4d879 |
| is_domain | False |
| name | shop |
| parent_id | c7a3303c6f7748f2b22f6421149226b5 |
| tags | [] |
+-------------+----------------------------------+
[root@controller ~]# openstack project set shop --disable
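# 按题目要求查看该项目的详细信息
[root@controller ~]# openstack project show shop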

③使用 nova 命令查看 admin 租户的当前配额值,将 admin 租户的实例配额提升到 13。登录 controller 节点,使用 glance 相关命令上传镜像,源使用 CentOS_7.5_x86_64_XD.qcow2,名字为 centos7.5,修改这个镜像为共享状态,并设置最小磁盘为 5G。

[root@controller ~]# openstack quota set admin --instances 13
[root@controller ~]# openstack quota show admin
+----------------------+----------------------------------+
| Field | Value |
+----------------------+----------------------------------+
| cores | 20 |
| fixed-ips | -1 |
| floating-ips | 50 |
| health_monitors | None |
| injected-file-size | 10240 |
| injected-files | 5 |
| injected-path-size | 255 |
| instances | 13 |
| key-pairs | 100 |
| l7_policies | None |
| listeners | None |
| load_balancers | None |
| location | None |
| name | None |
| networks | 100 |
| pools | None |
| ports | 500 |
| project | 3b0c23a093dd4f11bbd8d7316634b784 |
| project_name | admin |
| properties | 128 |
| ram | 51200 |
| rbac_policies | 10 |
| routers | 10 |
| secgroup-rules | 100 |
| secgroups | 10 |
| server-group-members | 10 |
| server-groups | 10 |
| subnet_pools | -1 |
| subnets | 100 |
+----------------------+----------------------------------+

###注意:按题目要求镜像源应为 CentOS_7.5_x86_64_XD.qcow2,此处以竞赛环境实际提供的镜像文件为准
[root@controller ~]# glance image-create --disk-format qcow2 --container-format bare --file CentOS-7-x86_64-DVD-1804.iso --min-disk 5 --name centos7.5
+------------------+--------------------------------------+
| Property | Value |
+------------------+--------------------------------------+
| checksum | 660aab9894136872770ecb6e1e370c08 |
| container_format | bare |
| created_at | 2023-04-17T03:54:50Z |
| disk_format | qcow2 |
| id | 84cd8c51-ee79-4591-8a9e-b3d689d34c04 |
| min_disk | 5 |
| min_ram | 0 |
| name | centos7.5 |
| owner | 3b0c23a093dd4f11bbd8d7316634b784 |
| protected | False |
| size | 4470079488 |
| status | active |
| tags | [] |
| updated_at | 2023-04-17T03:55:15Z |
| virtual_size | None |
| visibility | shared |
+------------------+--------------------------------------+
[root@controller ~]# openstack image set centos7.5 --share

④请修改 glance 后端配置文件,将项目的镜像存储限制为 10GB,完成后重启 glance 服务。

[root@controller ~]# vim /etc/glance/glance-api.conf
user_storage_quota = 10737418240
# 重启
[root@controller ~]# systemctl restart openstack-glance-*
# 查询
[root@controller ~]# cat /etc/glance/glance-api.conf |grep _quota
# ``image_property_quota`` configuration option.
#     * image_property_quota
#image_member_quota = 128
#image_property_quota = 128
#image_tag_quota = 128
#image_location_quota = 10
user_storage_quota = 10737418240

⑤在 controller 节点执行 iaas-install-cinder-controller.sh,compute 节点执行 iaas-install-cinder-compute.sh,在 controller 和 compute 节点上会自行安装 cinder 服务并完成配置。创建一个名为 lvm 的卷类型,并创建该类型的规格键值对,要求 lvm 卷类型对应 cinder 后端驱动 lvm 所管理的存储资源;使用该卷类型创建名字为 lvm_test、大小为 1G 的云硬盘,并查询该云硬盘的详细信息。

[root@controller ~]# iaas-install-cinder-controller.sh
[root@compute ~]# iaas-install-cinder-compute.sh
# 创建卷类型lvm
[root@controller ~]# source /etc/keystone/admin-openrc.sh
[root@controller ~]# openstack volume type create lvm
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| description | None                                 |
| id          | 5a1ac113-b226-4646-9a7c-46eee3f6346f |
| is_public   | True                                 |
| name        | lvm                                  |
+-------------+--------------------------------------+
[root@controller ~]# cinder type-key lvm set volume_backend_name=LVM
# 创建云硬盘
[root@controller ~]# cinder create --volume-type lvm --name lvm_test 1
略                                               
# 查看详细信息
[root@controller ~]# cinder show lvm_test
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attached_servers               | []                                   |
| attachment_ids                 | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2022-10-25T12:28:55.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 39f131c3-6ee2-432a-8096-e13173307339 |
| metadata                       |                                      |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | lvm_test                             |
| os-vol-host-attr:host          | compute@lvm#LVM                      |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 4885b78813a5466d9d6d483026f2067c     |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| updated_at                     | 2022-10-25T12:28:56.000000           |
| user_id                        | b4a6c1eb18c247edba11b57be18ec752     |
| volume_type                    | lvm                                  |

⑥请使用数据库命令将所有数据库进行备份,备份文件名为 openstack.sql,完成后使用命令查看文件属性,其中文件大小以 MB 显示。

[root@controller ~]# mysqldump -uroot -p000000 --all-databases > /root/openstack.sql
[root@controller ~]# du -h /root/openstack.sql
1.6M    /root/openstack.sql
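
##（补充示例）也可用 ls -lh 以人类可读单位查看文件属性与大小:
[root@controller ~]# ls -lh /root/openstack.sql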

⑦进入数据库,创建本地用户 examuser,密码为 000000,然后查询 mysql 数据库中的user 表的 user,host,password 字段。然后赋予这个用户所有数据库的“查询”“删除”“更新”“创建”的权限。

[root@controller ~]# mysql -uroot -p
MariaDB [(none)]> create user examuser@'localhost' identified by '000000';
Query OK, 0 rows affected (0.005 sec)
MariaDB [(none)]> use mysql
Database changed
MariaDB [mysql]> select user,host,password from user;
+-----------+------------+-------------------------------------------+
| user      | host       | password                                  |
+-----------+------------+-------------------------------------------+
| root      | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| root      | controller | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| root      | 127.0.0.1  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| root      | ::1        | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| keystone  | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| keystone  | %          | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| glance    | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| glance    | %          | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| nova      | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| nova      | %          | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| placement | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| placement | %          | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| neutron   | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| neutron   | %          | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| cinder    | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| cinder    | %          | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| examuser  | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
+-----------+------------+-------------------------------------------+
17 rows in set (0.000 sec)
MariaDB [mysql]> grant select,delete,update,create on *.* to examuser@'localhost'; 
Query OK, 0 rows affected (0.000 sec)
MariaDB [mysql]> select User, Select_priv,Update_priv,Delete_priv,Create_priv from user;
+-----------+-------------+-------------+-------------+-------------+
| User      | Select_priv | Update_priv | Delete_priv | Create_priv |
+-----------+-------------+-------------+-------------+-------------+
| root      | Y           | Y           | Y           | Y           |
| root      | Y           | Y           | Y           | Y           |
| root      | Y           | Y           | Y           | Y           |
| root      | Y           | Y           | Y           | Y           |
| keystone  | N           | N           | N           | N           |
| keystone  | N           | N           | N           | N           |
| glance    | N           | N           | N           | N           |
| glance    | N           | N           | N           | N           |
| nova      | N           | N           | N           | N           |
| nova      | N           | N           | N           | N           |
| placement | N           | N           | N           | N           |
| placement | N           | N           | N           | N           |
| neutron   | N           | N           | N           | N           |
| neutron   | N           | N           | N           | N           |
| examuser  | Y           | Y           | Y           | Y           |
+-----------+-------------+-------------+-------------+-------------+
15 rows in set (0.000 sec)

⑧请使用 openstack 命令创建一个名为 test 的 cinder 卷,卷大小为 5G。完成后使用 cinder命令列出卷列表并查看 test 卷的详细信息。

[root@controller ~]# openstack volume create --size 5 test
[root@controller ~]# openstack volume show test
+--------------------------------+--------------------------------------+
| Field | Value |
+--------------------------------+--------------------------------------+
| attachments | [] |
| availability_zone | nova |
| bootable | false |
| consistencygroup_id | None |
| created_at | 2023-04-17T04:05:21.000000 |
| description | None |
| encrypted | False |
| id | 67df96a9-cc9f-4a59-8602-e0e50bdf4f26 |
| migration_status | None |
| multiattach | False |
| name | test |
| os-vol-host-attr:host | compute@lvm#LVM |
| os-vol-mig-status-attr:migstat | None |
| os-vol-mig-status-attr:name_id | None |
| os-vol-tenant-attr:tenant_id | 3b0c23a093dd4f11bbd8d7316634b784 |
| properties | |
| replication_status | None |
| size | 5 |
| snapshot_id | None |
| source_volid | None |
| status | available |
| type | None |
| updated_at | 2023-04-17T04:05:22.000000 |
| user_id | 0a6447639e3b44acb584c6b87f194c9e |
+--------------------------------+--------------------------------------+
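
# （补充示例）按题目要求用 cinder 命令列出卷列表并查看 test 卷的详细信息
[root@controller ~]# cinder list
[root@controller ~]# cinder show test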

⑨为了缓解卷数据复制占用带宽导致实例数据访问速度变慢的问题,OpenStack Block Storage 支持对卷数据复制带宽进行速率限制。请修改 cinder 后端配置文件,将卷复制带宽限制为最高 100 MiB/s。

[root@controller ~]# vim /etc/cinder/cinder.conf
[lvmdriver-1]
volume_group=cinder-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name=LVM
volume_copy_bps_limit=104857600
[root@controller ~]# systemctl restart openstack-cinder-*
[root@controller ~]# cat /etc/cinder/cinder.conf | grep 104857600
volume_copy_bps_limit=104857600

⑩在controller节点执行 iaas-install-swift-controller.sh, compute节点执行iaas-install-swift-compute.sh,在controller和compute节点上会自行安装 swift 服务并完成配置。创建一个名为 file 的容器。

在提供的OpenStack平台上,使用Swift对象存储服务,修改相应的配置文件,使对象存储Swift作为glance镜像服务的后端存储。

[root@controller ~]# iaas-install-swift-controller.sh
[root@compute ~]# iaas-install-swift-compute.sh
[root@controller ~]# swift post file

###修改配置文件
[root@controller ~]# vi /etc/glance/glance-api.conf
[glance_store]
stores=glance.store.filesystem.Store,glance.store.swift.Store,glance.store.http.Store
default_store=swift
swift_store_region=RegionOne
swift_store_endpoint_type=internalURL
swift_store_container=glance
swift_store_large_object_size=5120
swift_store_large_object_chunk_size=200
swift_store_create_container_on_put=True
swift_store_multi_tenant=True
swift_store_admin_tenants=service
swift_store_auth_address=http://controller:5000/v3
swift_store_user=glance
swift_store_key=000000

##重启 glance 所有组件
systemctl restart openstack-glance-*

⑪用 swift 命令,把 cirros-0.3.4-x86_64-disk.img 上传到 file 容器中。

[root@controller ~]# swift upload file /root/cirros-0.3.4-x86_64-disk.img
root/cirros-0.3.4-x86_64-disk.img
[root@controller ~]# swift stat file
               Account: AUTH_d23ad8b534f44b02ad30c9f7847267df
             Container: file
               Objects: 1
                 Bytes: 13287936
              Read ACL:
             Write ACL:
               Sync To:
              Sync Key:
         Accept-Ranges: bytes
      X-Storage-Policy: Policy-0
         Last-Modified: Fri, 10 Mar 2023 02:43:07 GMT
           X-Timestamp: 1678416180.44884
            X-Trans-Id: txfdc2fb777c4641d3a9292-00640a9941
          Content-Type: application/json; charset=utf-8
X-Openstack-Request-Id: txfdc2fb777c4641d3a9292-00640a9941

⑫使用提供的云安全框架组件,将提供的OpenStack云平台的安全策略从http优化至https。

controller节点

##安装工具包
yum install -y mod_wsgi httpd mod_ssl
###修改/etc/openstack-dashboard/local_settings文件
vi /etc/openstack-dashboard/local_settings
##在DEBUG = False下增加4行
USE_SSL = True
CSRF_COOKIE_SECURE = True ##原文中有,去掉注释即可
SESSION_COOKIE_SECURE = True ##原文中有,去掉注释即可
SESSION_COOKIE_HTTPONLY = True

##修改/etc/httpd/conf.d/ssl.conf配置文件
vi /etc/httpd/conf.d/ssl.conf
##将SSLProtocol all -SSLv2 -SSLv3改成:
SSLProtocol all -SSLv2

##重启服务
systemctl restart httpd
systemctl restart memcached

⑬在提供的OpenStack平台上,通过修改相关参数对OpenStack平台进行调优操作,相应的调优操作有:

设置内存超售比例为1.5倍;

设置nova服务心跳检查时间为120秒。

vi /etc/nova/nova.conf
ram_allocation_ratio = 1.5
service_down_time = 120
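
##修改后需重启 nova 相关服务使参数生效,服务名以实际环境为准(controller 节点重启 nova 各服务,compute 节点重启 nova-compute):
systemctl restart openstack-nova-*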

任务4 OpenStack架构任务

①在controller节点安装python3环境。安装完之后查看python3版本,使用提供的whl文件安装依赖。

[root@controller python-depend]# yum install -y python3
[root@controller python-depend]# pip3 install certifi-2019.11.28-py2.py3-none-any.whl
[root@controller python-depend]# pip3 install urllib3-1.25.11-py3-none-any.whl
[root@controller python-depend]# pip3 install idna-2.8-py2.py3-none-any.whl
[root@controller python-depend]# pip3 install chardet-3.0.4-py2.py3-none-any.whl
[root@controller python-depend]# pip3 install requests-2.24.0-py2.py3-none-any.whl
[root@controller ~]# python3 --version
Python 3.6.8
[root@controller ~]# pip3 list
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
certifi (2019.11.28)
chardet (3.0.4)
idna (2.8)
pip (9.0.3)
requests (2.24.0)
setuptools (39.2.0)
urllib3 (1.25.11)

②编写python代码对接OpenStack API,完成镜像的上传。在controller节点的/root目录下创建create_image.py文件,在该文件中编写python代码对接openstack api(需在py文件中获取token),要求在openstack私有云平台中上传镜像cirros-0.3.4-x86_64-disk.img,名字为cirros001,disk_format为qcow2,container_format为bare。执行完代码要求输出“创建镜像成功,id为:xxxxxx”。

[root@controller python3]# python3 create_image.py
请输入访问openstack平台控制节点IP地址:(xx.xx.xx.xx)
192.168.100.x
创建镜像成功,id为:0591f693-a7c7-4e7f-ac6c-957b7bccffc9
镜像文件上传成功
[root@controller ~]# cat create_image.py
import requests,json,time

# *******************全局变量IP*****************************
#执行代码前,请修改controller_ip的IP地址,与指定router,IP可以input,也可以写成静态
controller_ip = input("请输入访问openstack平台控制节点IP地址:(xx.xx.xx.xx)\n")

image_name = "cirros001"
file_path = "/root/cirros-0.3.4-x86_64-disk.img"

try:
    url  = f"http://{controller_ip}:5000/v3/auth/tokens"
    body = {
       "auth": {
           "identity": {
              "methods":["password"],
              "password": {
                  "user": {
                     "domain":{
                         "name": "Default"
                     },
                     "name": "admin",
                     "password": "000000"
                  }
              }
           },
           "scope": {
              "project": {
                  "domain": {
                     "name": "Default"
                  },
                  "name": "admin"
              }
           }
       }
    }
    headers = {"Content-Type": "application/json"}
    Token = requests.post(url, data=json.dumps(body), headers=headers).headers['X-Subject-Token']
    headers = {"X-Auth-Token": Token}
except Exception as e:
    print(f"获取Token值失败,请检查访问云主机控制节点IP是否正确?输出错误信息如下:{str(e)}")
    exit(0)

class glance_api:
    def __init__(self, headers: dict, resUrl: str):
       self.headers = headers
       self.resUrl = resUrl
    #创建glance镜像
    def create_glance(self, container_format="bare", disk_format="qcow2"):
       body = {
           "container_format": container_format,
           "disk_format": disk_format,
           "name": image_name,
        }
       status_code = requests.post(self.resUrl, data=json.dumps(body), headers=self.headers).status_code
       if(status_code == 201):
           return f"创建镜像成功,id为:{glance_api.get_glance_id()}"
       else:
           return "创建镜像失败"
    #获取glance镜像id
    def get_glance_id(self):
       result = json.loads(requests.get(self.resUrl,headers=self.headers).text)
       for item in result['images']:
           if(item['name'] == image_name):
              return item['id']
    #上传glance镜像
    def update_glance(self):
       self.resUrl=self.resUrl+"/"+self.get_glance_id()+"/file"
       self.headers['Content-Type'] = "application/octet-stream"
       status_code = requests.put(self.resUrl,data=open(file_path,'rb').read(),headers=self.headers).status_code
       if(status_code == 204):
           return "镜像文件上传成功"
       else:
           return "镜像文件上传失败"
glance_api = glance_api(headers,f"http://{controller_ip}:9292/v2/images")
print(glance_api.create_glance())  #调用glance-api中创建镜像方法
print(glance_api.update_glance())

③编写python代码对接OpenStack API,完成用户的创建。在controller节点的/root目录下创建create_user.py文件,在该文件中编写python代码对接openstack api(需在py文件中获取token),要求在openstack私有云平台中创建用户guojibeisheng。

[root@controller python3]# python3 create_user.py
请输入访问openstack平台控制节点IP地址:(xx.xx.xx.xx)
192.168.100.x
用户 guojibeisheng 创建成功,ID为dcb0fc7bacf54038b624463921123aed
该平台的用户为:
guojibeisheng
admin
myuser
tom
glance
nova
placement
neutron
heat
heat_domain_admin
cinder
swift
用户 guojibeisheng 已删除!
[root@controller python3]# cat create_user.py
import requests,json,time

# *******************全局变量IP*****************************
#执行代码前,请修改controller_ip的IP地址,与指定router,IP可以input,也可以写成静态
controller_ip = input("请输入访问openstack平台控制节点IP地址:(xx.xx.xx.xx)\n")

try:
    url  = f"http://{controller_ip}:5000/v3/auth/tokens"
    body = {
       "auth": {
           "identity": {
              "methods":["password"],
              "password": {
                  "user": {
                     "domain":{
                         "name": "Default"
                     },
                      "name": "admin",
                     "password": "000000"
                  }
              }
           },
           "scope": {
              "project": {
                  "domain": {
                     "name": "Default"
                  },
                  "name": "admin"
              }
           }
       }
    }
    headers = {"Content-Type": "application/json"}
    Token = requests.post(url, data=json.dumps(body), headers=headers).headers['X-Subject-Token']
    headers = {"X-Auth-Token": Token}
except Exception as e:
    print(f"获取Token值失败,请检查访问云主机控制节点IP是否正确?输出错误信息如下:{str(e)}")
    exit(0)

class openstack_user_api:
    def __init__(self, handers: dict, resUrl: str):
        self.headers = handers
        self.resUrl = resUrl
    def create_users(self, user_name):
        body = {
            "user": {
                "description": "API create user!",
                "domain_id": "default",
                "name": user_name
            }
        }
        status_code = requests.post(self.resUrl, data=json.dumps(body), headers=self.headers).text
        result = json.loads(requests.get(self.resUrl, headers=self.headers).text)
        user_name = user_name
        for i in result['users']:
            if i['name'] == user_name:
                return f"用户 {user_name} 创建成功,ID为{i['id']}"
    def list_users(self):
        result = json.loads(requests.get(self.resUrl, headers=self.headers).text)
        roles = []
        for i in result['users']:
            if i['name'] not in roles:
                roles.append(i['name'])
        return "该平台的用户为:\n"+'\n'.join(roles)

    def get_user_id(self, user_name):
        result = json.loads(requests.get(self.resUrl, headers=self.headers).text)
        user_name = user_name
        for i in result['users']:
            if i['name'] == user_name:
                return (f"用户 {user_name} 的ID为{i['id']}")

    def delete_user(self, user_name):
        result = json.loads(requests.get(self.resUrl, headers=self.headers).text)
        for i in result['users']:
            if i['name'] == user_name:
                i = i['id']
                status_code = requests.delete(f'http://{controller_ip}:5000/v3/users/{i}', headers=self.headers)
                return f"用户 {user_name} 已删除!"

openstack_user_api = openstack_user_api(headers, f"http://{controller_ip}:5000/v3/users")

print(openstack_user_api.create_users("guojibeisheng"))
print(openstack_user_api.list_users())
print(openstack_user_api.delete_user("guojibeisheng"))

B模块:容器的编排与运维(样题)

说明:本任务提供有4台服务器master、node1、node2和harbor,均安装了CentOS 7.5操作系统;/opt/centos目录下有CentOS-7-x86_64-DVD-1804系统光盘的所有文件,/opt/containerk8s目录下有本次容器云运维所需的所有文件。

任务 1 容器云平台环境初始化

①master 节点主机名设置为 master、node1 节点主机名设置为 node1、node2 节点主机名设置为 node2、harbor 节点主机名设置为 harbor,所有节点关闭 swap,并配置 hosts 映射。

##设置主机名
[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# hostnamectl set-hostname node1
[root@localhost ~]# hostnamectl set-hostname node2
[root@localhost ~]# hostnamectl set-hostname harbor

##主机映射
[root@master ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.157.50 master
192.168.157.51 node1
192.168.157.52 node2
192.168.157.53 harbor

##关闭swap
[root@localhost ~]# swapoff -a
[root@localhost ~]# vi + /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Apr 17 08:27:55 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=e1892f7d-c16f-47b3-888b-77d0af3521f6 /boot xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0

#关闭防火墙
[root@localhost ~]# systemctl stop firewalld && systemctl disable firewalld
Removed symlink /etc/systemd/system/multi-user.target.wants/firewalld.service.
Removed symlink /etc/systemd/system/dbus-org.fedoraproject.FirewallD1.service.

#关闭安全策略
[root@localhost ~]# setenforce 0
[root@localhost ~]# vi /etc/selinux/config
# This file controls the state of SELinux on the system.
# SELINUX= can take one of these three values:
# enforcing - SELinux security policy is enforced.
# permissive - SELinux prints warnings instead of enforcing.
# disabled - No SELinux policy is loaded.
SELINUX=permissive
# SELINUXTYPE= can take one of three two values:
# targeted - Targeted processes are protected,
# minimum - Modification of targeted policy. Only selected processes are protected.
# mls - Multi Level Security protection.
SELINUXTYPE=targeted

②将提供的 CentOS-7-x86_64-DVD-1804.iso 和 qdkills_cloud_paas.iso 光盘镜像文件移动到 master 节点 /root 目录下,然后在 /opt 目录下使用命令创建 centos 目录和 paas 目录,并将镜像文件 CentOS-7-x86_64-DVD-1804.iso 永久挂载到 centos 目录下,将镜像文件 qdkills_cloud_paas.iso 永久挂载到 /opt/paas 目录下。

[root@localhost ~]# mkdir /opt/centos
[root@localhost ~]# mkdir /opt/paas
[root@localhost ~]# vi + /etc/fstab
#
# /etc/fstab
# Created by anaconda on Mon Apr 17 08:27:55 2023
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root / xfs defaults 0 0
UUID=e1892f7d-c16f-47b3-888b-77d0af3521f6 /boot xfs defaults 0 0
#/dev/mapper/centos-swap swap swap defaults 0 0
/root/chinaskills_cloud_paas.iso /opt/paas iso9660 defaults 0 0
/root/CentOS-7-x86_64-DVD-1804.iso /opt/centos iso9660 defaults 0 0
[root@localhost ~]# mount -a
mount: /dev/loop0 is write-protected, mounting read-only
mount: /dev/loop1 is write-protected, mounting read-only

③在 master 节点首先将系统自带的 yum 源移动到/home 目录,然后为 master 节点配置本地 yum 源,yum 源文件名为 local.repo。

[root@master ~]# mv /etc/yum.repos.d/CentOS-* /home/
[root@master ~]# vi /etc/yum.repos.d/local.repo
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[paas]
name=centos
baseurl=file:///opt/paas/kubernetes-repo
gpgcheck=0
enabled=1

[root@master ~]# yum repolist


④在 master 节点安装 ftp 服务,将 ftp 共享目录设置为 /opt/。

[root@master ~]# yum install -y vsftpd
[root@master ~]# echo "anon_root=/opt" >> /etc/vsftpd/vsftpd.conf
[root@master ~]# systemctl enable --now vsftpd
Created symlink from /etc/systemd/system/multi-user.target.wants/vsftpd.service to /usr/lib/systemd/system/vsftpd.service.

⑤为 node1 节点和 node2 节点分别配置 ftp 源,yum 源文件名称为 ftp.repo,其中 ftp 服务器地址为 master 节点,配置 ftp 源时不要写 IP 地址,配置之后,两台机器都安装 kubectl 包作为安装测试。

[root@localhost ~]# mv /etc/yum.repos.d/CentOS-* /home/
[root@localhost ~]# cat /etc/yum.repos.d/ftp.repo
[centos]
name=centos
baseurl=ftp://master/centos
gpgcheck=0
enabled=1
[paas]
name=paas
baseurl=ftp://master/paas/kubernetes-repo
gpgcheck=0
enabled=1

[root@localhost ~]# yum install -y kubectl

⑥在 master 节点上部署 chrony 服务器,允许其它节点同步时间,启动服务并设置为开机自启动;在其他节点上指定 master 节点为上游 NTP 服务器,重启服务并设为开机自启动。

master节点


[root@master ~]# vi /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server master iburst
# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16
allow 192.168.157.0/24
# Serve time even if not synchronized to a time source.
#local stratum 10
local stratum 10
# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking

##重启服务
[root@localhost ~]# systemctl restart chronyd
[root@localhost ~]# systemctl enable chronyd
[root@localhost ~]# chronyc sources

node节点

[root@localhost ~]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
#server 0.centos.pool.ntp.org iburst
#server 1.centos.pool.ntp.org iburst
#server 2.centos.pool.ntp.org iburst
#server 3.centos.pool.ntp.org iburst
server master iburst
# Record the rate at which the system clock gains/losses time.
driftfile /var/lib/chrony/drift

# Allow the system clock to be stepped in the first three updates
# if its offset is larger than 1 second.
makestep 1.0 3

# Enable kernel synchronization of the real-time clock (RTC).
rtcsync

# Enable hardware timestamping on all interfaces that support it.
#hwtimestamp *

# Increase the minimum number of selectable sources required to adjust
# the system clock.
#minsources 2

# Allow NTP client access from local network.
#allow 192.168.0.0/16

# Serve time even if not synchronized to a time source.
#local stratum 10

# Specify file containing keys for NTP authentication.
#keyfile /etc/chrony.keys

# Specify directory for log files.
logdir /var/log/chrony

# Select which information is logged.
#log measurements statistics tracking


##重启服务
[root@localhost ~]# systemctl restart chronyd
[root@localhost ~]# systemctl enable chronyd
[root@localhost ~]# chronyc sources

⑦为四台服务器设置免密登录,保证服务器之间能够互相免密登录。

[root@localhost ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:qveHAExYRiPQVv5rlsI3owC/pO72K9ZK+pOJFPnPfFQ root@node1
The key's randomart image is:
+---[RSA 2048]----+
|.o.=* |
| ++.. |
| ..o. |
| o o. E |
|. o .. S |
| + o .= |
|.o=o* @. . |
|+*++ @.+. . |
|B=*++..... |
+----[SHA256]-----+

[root@master ~]# ssh-copy-id root@node1
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node1's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@node1'"
and check to make sure that only the key(s) you wanted were added.

[root@master ~]# ssh-copy-id root@node2
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@node2's password:
Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@node2'"
and check to make sure that only the key(s) you wanted were added.

[root@master ~]# ssh-copy-id root@harbor
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@harbor's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@harbor'"
and check to make sure that only the key(s) you wanted were added.
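
The transcript above only distributes the master node's key. To make the login mutual, as the task requires, the same two steps have to be repeated on every host. A minimal sketch, assuming the hostnames master, node1, node2 and harbor all resolve via /etc/hosts:

## run on each of the four hosts (master, node1, node2, harbor)
# ssh-keygen -t rsa -N '' -f /root/.ssh/id_rsa
# for h in master node1 node2 harbor; do ssh-copy-id root@$h; done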

Task 2: Kubernetes cluster setup

① Install docker-ce on all nodes and enable it to start at boot.

##run on all nodes

[root@master ~]# yum install -y docker-ce
[root@master ~]# systemctl enable --now docker

② On all nodes, configure the Alibaba Cloud registry mirror (https://7hw6x2is.mirror.aliyuncs.com) and set the cgroup driver to systemd; after the change, reload the configuration and restart the docker service.

[root@master ~]# cat /etc/docker/daemon.json
{
"insecure-registries": ["192.168.157.53"],
"registry-mirrors": ["https://7hw6x2is.mirror.aliyuncs.com"],
"exec-opts": ["native.cgroupdriver=systemd"]
}

[root@master ~]# systemctl daemon-reload && systemctl restart docker
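
A quick check that the mirror and the cgroup driver were actually picked up (assuming docker is running):

# docker info | grep -i -A1 "registry mirrors"
# docker info | grep -i "cgroup driver"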

③ On the master node, load the images from the tar archives in the /opt/images directory.

[root@master ~]# for i in $(ls /opt/paas/images | grep tar) ; do docker load -i /opt/paas/images/$i ;done
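
To confirm the archives were loaded, list the local images (the count depends on the image set provided):

[root@master ~]# docker images | wc -l
[root@master ~]# docker images | head -n 5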

④ On the master node, install docker-compose from the file /opt/docker-compose/v2.10.2-docker-compose-linux-x86_64. After installation, run the docker-compose version command.

[root@localhost opt]# cp -p /opt/paas/docker-compose/v1.25.5-docker-compose-Linux-x86_64 /usr/local/bin/docker-compose
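
If the copied binary does not already carry the executable bit, it still needs one, and the task also asks for a version check. A minimal follow-up:

[root@localhost opt]# chmod +x /usr/local/bin/docker-compose
[root@localhost opt]# docker-compose version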

⑤ On the master node, extract the offline installer /opt/harbor/harbor-offline-installer-v2.5.3.tgz and install the Harbor registry; modify the corresponding yml file so that every node uses the Harbor registry address as its default docker registry.


[root@localhost opt]# tar -zxvf harbor-offline-installer-v2.1.0.tgz
harbor/harbor.v2.1.0.tar.gz
harbor/prepare
harbor/LICENSE
harbor/install.sh
harbor/common.sh
harbor/harbor.yml.tmpl
[root@localhost opt]# cd harbor
[root@localhost harbor]# ls
common.sh harbor.v2.1.0.tar.gz harbor.yml.tmpl install.sh LICENSE prepare
[root@localhost harbor]# mv harbor.yml.tmpl harbor.yml
[root@localhost harbor]# vi harbor.yml
[root@harbor harbor]# cat harbor.yml
# Configuration file of Harbor

# The IP address or hostname to access admin UI and registry service.
# DO NOT use localhost or 127.0.0.1, because Harbor needs to be accessed by external clients.
hostname: 192.168.157.50    ## change to this host's IP address

# http related config
http:
# port for http, default is 80. If https enabled, this port will redirect to https port
port: 80
### comment out the https section below
# https related config
#https:
# https port for harbor, default is 443
# port: 443
# The path of cert and key files for nginx
#certificate: /your/certificate/path
#private_key: /your/private/key/path
........
[root@localhost harbor]# ./prepare
[root@harbor harbor]# ./install.sh
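
To make every node treat this Harbor instance as its default (HTTP-only) registry, each node's docker daemon.json has to trust the Harbor address and the node has to log in once. A sketch, assuming the Harbor host is the 192.168.157.50 set in harbor.yml above and the default admin password Harbor12345 (the registry IP differs between the transcripts in this document):

## on every node
# cat /etc/docker/daemon.json
{
  "insecure-registries": ["192.168.157.50"],
  "registry-mirrors": ["https://7hw6x2is.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# systemctl restart docker
# docker login 192.168.157.50 -u admin -p Harbor12345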

⑥ On the master node, run /opt/k8s_image_push.sh to push all images to the docker registry.

[root@master paas]# ./k8s_image_push.sh
输入镜像仓库地址(不加http/https): 192.168.157.53
输入镜像仓库用户名: admin
输入镜像仓库用户密码: Harbor12345
您设置的仓库地址为: 192.168.157.53,用户名: admin,密码: xxx
是否确认(Y/N): y
WARNING! Using --password via the CLI is insecure. Use --password-stdin.
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store

镜像仓库 Login Succeeded

⑦ Run /opt/k8s_con_ner_bui_install.sh to deploy kubeadm, containerd, nerdctl and buildkit.

## here it is enough to install kubectl, kubeadm and kubelet

[root@master ~]# yum install -y kubelet kubeadm kubectl
##enable at boot
[root@localhost ~]# systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.

⑧ On the master node, initialize the cluster with the kubeadm command, using the local Harbor registry.

[root@master ~]# kubeadm init --apiserver-advertise-address 192.168.157.50 --pod-network-cidr 10.244.0.0/16 --kubernetes-version 1.18.1 --image-repository 192.168.157.50/library/
[root@localhost ~]# mkdir -p $HOME/.kube
[root@localhost ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@localhost ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config

⑨ Modify the provided /opt/yaml/flannel/kube-flannel.yaml so that its images come from the local Harbor registry, then install the Kubernetes network plugin. When done, check the node status with a command.

# sed -i 's@docker.io/flannel@192.168.157.50/library@g' /opt/paas/yaml/flannel/kube-flannel.yaml
[root@master ~]# kubectl apply -f /opt/paas/yaml/flannel/kube-flannel.yaml
[root@master ~]# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   Ready   control-plane   9m42s   v1.18.1

⑩ Create certificates for the Kubernetes dashboard in the kubernetes-dashboard namespace; name all files involved dashboard, for example dashboard.crt.

[root@master ~]# mkdir dashboard-certs
[root@master ~]# cd dashboard-certs/
[root@master ~]# kubectl create namespace kubernetes-dashboard
[root@master ~]# openssl genrsa -out dashboard.key 2048
[root@master ~]# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
[root@master ~]# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
[root@master ~]# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
[root@master ~]# sed -i "s/kubernetesui/192.168.157.50\/library/g" /opt/yaml/dashboard/recommended.yaml
[root@master ~]# kubectl apply -f /opt/paas/yaml/dashboard/recommended.yaml
[root@master ~]# kubectl apply -f /opt/paas/yaml/dashboard/dashboard-adminuser.yaml
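
The certificate secret created above can be checked before deploying the dashboard (assuming the names used in this transcript):

[root@master ~]# kubectl get secret kubernetes-dashboard-certs -n kubernetes-dashboard
[root@master ~]# kubectl describe secret kubernetes-dashboard-certs -n kubernetes-dashboard | grep -E "dashboard.(crt|key)"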

11 Modify /opt/yaml/dashboard/recommended.yaml so that its images come from the local Harbor registry, then install the Kubernetes dashboard using /opt/yaml/dashboard/recommended.yaml and /opt/yaml/dashboard/dashboard-adminuser.yaml. When finished, open the dashboard home page.

[root@master ~]# sed -i "s/kubernetesui/192.168.157.50\/library/g" /opt/yaml/dashboard/recommended.yaml
[root@master ~]# kubectl apply -f /opt/yaml/dashboard/recommended.yaml
[root@master ~]# kubectl apply -f /opt/yaml/dashboard/dashboard-adminuser.yaml
[root@master ~]# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.105.211.63    <none>        8000/TCP        23m
kubernetes-dashboard        NodePort    10.104.143.162   <none>        443:30001/TCP   23m

12 Remove the taint so that pods can be scheduled onto the master node, then open the dashboard in a browser (https://IP:30001).

# kubectl describe nodes master | grep Taints
# kubectl taint nodes master node-role.kubernetes.io/control-plane-
# kubectl describe nodes master | grep Taints
Taints:             <none>
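
On clusters initialized with kubeadm v1.18 (as in the init command above) the control-plane taint key is node-role.kubernetes.io/master rather than node-role.kubernetes.io/control-plane, so the equivalent command on such a cluster would be:

# kubectl taint nodes master node-role.kubernetes.io/master-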

13 On the node, run k8s_node_install.sh to join it to the Kubernetes cluster. Afterwards, check the status of all nodes on the master node.

## this join command is printed at the end of the kubeadm init output
kubeadm join 192.168.157.51:6443 --token s6fq1p.kmluoke4b9qp7hdi \
--discovery-token-ca-cert-hash sha256:1d2867c654a33891b6077357bfb6d1e4babfb2e04f834944fcbad83f05d1bdc3
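
After the join completes, the new node should show up on the master (node names and versions depend on the environment):

[root@master ~]# kubectl get nodes
## the newly joined node appears here and reaches the Ready state once its flannel pod is running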

Task 3: Deploy the ownCloud cloud-drive service

① Write a yaml file (any file name) to create a PV and a PVC that provide persistent storage for ownCloud's files and data. Requirements: PV (read-write access, mountable by a single node only; 5Gi of storage; type hostPath with a path of your choice); PVC (read-write access, mountable by a single node only; requests 5Gi of storage).

# cat owncloud-pvc.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: owncloud-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/owncloud
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: owncloud-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
# kubectl apply -f /opt/owncloud-pvc.yaml
# kubectl get pv,pvc
NAME                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
persistentvolume/owncloud-pv   5Gi        RWO            Retain           Bound    default/owncloud-pvc                           2m41s
NAME                                 STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/owncloud-pvc   Bound    owncloud-pv   5Gi        RWO                           2m41s

② Write a yaml file (any file name) to create a ConfigMap that sets ownCloud's environment variables: OWNCLOUD_ADMIN_USERNAME for the admin account and OWNCLOUD_ADMIN_PASSWORD for its password (values of your choice).

# cat owncloud-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: owncloud-config
data:
  OWNCLOUD_ADMIN_USERNAME: "admin"
  OWNCLOUD_ADMIN_PASSWORD: "123456"
# kubectl apply -f  owncloud-configmap.yaml
# kubectl get ConfigMap
NAME               DATA   AGE
kube-root-ca.crt   1      20h
owncloud-config    2      2m11s

③ Write a yaml file (any file name) to create a Secret that stores the ownCloud database password, with the original password encoded in base64.

# echo -n 123456 | base64
MTIzNDU2
# cat owncloud-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: owncloud-db-password
type: Opaque
data:
  password: MTIzNDU2
# kubectl apply -f /opt/owncloud-secret.yaml
# kubectl get Secret
NAME                   TYPE     DATA   AGE
owncloud-db-password   Opaque   1      46s
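
A quick way to confirm the stored value decodes back to the original password (jsonpath is assumed to be available in this kubectl version):

# kubectl get secret owncloud-db-password -o jsonpath='{.data.password}' | base64 -d
123456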

④ Write a yaml file (any file name) to create a Deployment that defines the ownCloud container and its environment variables. (Name the Deployment resource owncloud-deployment, use the owncloud:latest image from the Harbor registry, mount the storage at /var/www/html, and configure everything else as the situation requires.)

# cat owncloud-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: owncloud-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: owncloud
  template:
    metadata:
      labels:
        app: owncloud
    spec:
      containers:
      - name: owncloud
        image: 192.168.100.91/library/owncloud:latest
        imagePullPolicy: IfNotPresent
        envFrom:
        - configMapRef:
            name: owncloud-config
        env:
        - name: OWNCLOUD_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: owncloud-db-password
              key: password
        ports:
        - containerPort: 80
        volumeMounts:
        - name: owncloud-pv
          mountPath: /var/www/html
      volumes:
      - name: owncloud-pv
        persistentVolumeClaim:
          claimName: owncloud-pvc
# kubectl apply -f /opt/owncloud-deploy.yaml
# kubectl describe pod
Name:             owncloud-deployment-845c85cfcb-6ptqr
Namespace:        default
Priority:         0
Service Account:  default
Node:             node/192.168.100.23
Start Time:       Fri, 17 Mar 2023 02:56:31 +0000
Labels:           app=owncloud
                  pod-template-hash=845c85cfcb
Annotations:      <none>
Status:           Running
IP:               10.244.1.3
IPs:
  IP:           10.244.1.3
Controlled By:  ReplicaSet/owncloud-deployment-845c85cfcb
Containers:
  owncloud:
    Container ID:   containerd://d60dc4426c06cef6525e4e37f0ee37dcef762c2806c19efcd666f951d66a5c84
    Image:          192.168.100.91/library/owncloud:latest
    Image ID:       192.168.100.91/library/owncloud@sha256:5c77bfdf8cfaf99ec94309be2687032629f4f985d6bd388354dfd85475aa5f21
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 17 Mar 2023 02:56:39 +0000
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      owncloud-config  ConfigMap  Optional: false
    Environment:
      OWNCLOUD_DB_PASSWORD:  <set to the key 'password' in secret 'owncloud-db-password'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vtpd9 (ro)
      /var/www/html from owncloud-pv (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  owncloud-pv:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  owncloud-pvc
    ReadOnly:   false
  kube-api-access-vtpd9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  14m   default-scheduler  Successfully assigned default/owncloud-deployment-845c85cfcb-6ptqr to node
  Normal  Pulling    14m   kubelet            Pulling image "192.168.100.91/library/owncloud:latest"
  Normal  Pulled     14m   kubelet            Successfully pulled image "192.168.100.91/library/owncloud:latest" in 7.266482912s
  Normal  Created    14m   kubelet            Created container owncloud
  Normal  Started    14m   kubelet            Started container owncloud

⑤ Write a yaml file (any file name) to create a Service that exposes ownCloud outside the cluster; ownCloud can then be viewed at http://IP:port.

# cat owncloud-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: owncloud-service
spec:
  selector:
    app: owncloud
  ports:
    - name: http
      port: 80
  type: NodePort
# kubectl apply -f /opt/owncloud-svc.yaml
#kubectl get svc -A
NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.96.0.1        <none>        443/TCP                  24h
default                owncloud-service            NodePort    10.98.228.242    <none>        80:31024/TCP             17m
kube-system            kube-dns                    ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   24h
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.105.211.63    <none>        8000/TCP                 22h
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.104.143.162   <none>        443:30001/TCP            22h
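
With the NodePort above, ownCloud should answer on any node IP at port 31024 (the port is assigned by Kubernetes and will differ between runs; the node IP here is taken from the describe output earlier):

# curl -I http://192.168.100.23:31024/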

Module C: Automated Deployment and Operation of Enterprise Applications (sample)

Zabbix is an enterprise-grade, web-based open-source solution for distributed system and network monitoring. It can monitor a wide range of network parameters to keep servers running securely, and it provides a flexible notification mechanism so that administrators can quickly locate and resolve problems. Zabbix consists of two parts: the zabbix server and the optional zabbix agent.

IP               Hostname        Role
192.168.157.40   zabbix-server   Server node
192.168.157.41   zabbix-agent    Agent node

Deployment approach: the monitoring host (the zabbix_server node) is deployed manually; the monitored host (zabbix_agent) is deployed with a Playbook.

Note: the software archive must be extracted and configured as a yum repository before it can be used.

① Set the hostname of the zabbix_server node to zabbix_server and the hostname of the zabbix_agent node to zabbix_agent, then use the provided package /root/autoDeployment.tar to install ansible on the zabbix_server node.

[root@localhost ~]# hostnamectl set-hostname zabbix-server  
[root@localhost ~]# hostnamectl set-hostname zabbix-agent
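
The transcript only sets the hostnames; installing ansible from the provided archive is not shown. A minimal sketch, assuming autoDeployment.tar extracts into a directory that can be used directly as a local yum repository (the extraction path and repo id are illustrative):

[root@zabbix-server ~]# tar -xf /root/autoDeployment.tar -C /opt
[root@zabbix-server ~]# cat /etc/yum.repos.d/local.repo
[local]
name=local
baseurl=file:///opt/autoDeployment
gpgcheck=0
enabled=1
[root@zabbix-server ~]# yum -y install ansible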

② On the zabbix_server node configure the hosts file, send it to the zabbix_agent node, and set up passwordless SSH login.

[root@localhost ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:r7ZDne6h3xaK6zVevWSj77paApHMV6RAXHzH2s78cMc root@zabbix-agent
The key's randomart image is:
+---[RSA 2048]----+
| ooo..o. |
| o.o.o. o |
| = o. + |
| o . . |
| S. . + . |
| .oo ..= E|
| . o*.o.=+.|
| +=oB.+ o.|
| o**=o==+ |
+----[SHA256]-----+
[root@localhost ~]# ssh-copy-id root@zabbix-agent
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.157.40 (192.168.157.40)' can't be established.
ECDSA key fingerprint is SHA256:EWGohbn7cIhP7AAYHbnuMx/IoLAEybzPJENWQazAFG4.
ECDSA key fingerprint is MD5:81:d6:a5:02:87:4b:13:1b:eb:69:76:1c:5c:aa:80:bf.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.157.40's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'root@192.168.157.40'"
and check to make sure that only the key(s) you wanted were added.
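
The hosts-file part of this step is not shown in the transcript; a sketch, with the IPs taken from the table above:

[root@zabbix-server ~]# cat >> /etc/hosts <<EOF
192.168.157.40 zabbix-server
192.168.157.41 zabbix-agent
EOF
[root@zabbix-server ~]# scp /etc/hosts root@zabbix-agent:/etc/hosts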

③ On the zabbix_server node configure the ansible inventory and create an agent host group in it.

[root@zabbix-server ~]# cat /etc/ansible/hosts
[agent]
zabbix-agent
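
The inventory can be checked with an ad-hoc ping (assuming the passwordless SSH from the previous step is in place):

[root@zabbix-server ~]# ansible agent -m ping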

④ Set up the base environment: install nginx and php74 (plus whichever php74 extension packages are actually needed) and start the related services.

[root@zabbix_server opt]# yum -y install nginx
[root@zabbix_server ~]# systemctl start nginx
[root@zabbix_server ~]# yum -y install php74-php-fpm php74-php-common php74-php-cli php74-php-gd php74-php-ldap php74-php-mbstring php74-php-mysqlnd php74-php-xml php74-php-bcmath php74-php
[root@zabbix_server ~]#systemctl start php74-php-fpm
[root@zabbix_server ~]#  nginx -v && php74 -v
nginx version: nginx/1.22.1
PHP 7.4.33 (cli) (built: Feb 14 2023 08:49:52) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies

⑤ On the zabbix_server node install the Zabbix server, agent and web front end (check the rpm package names before installing), then start zabbix-server and zabbix-agent.

# yum -y install zabbix-server  zabbix-web-mysql zabbix-agent
# systemctl start zabbix-server&& systemctl start zabbix-agent
# systemctl status zabbix-server&& systemctl status zabbix-agent
● zabbix-server-mysql.service - Zabbix Server with MySQL DB
   Loaded: loaded (/usr/lib/systemd/system/zabbix-server-mysql.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2023-03-18 04:36:50 UTC; 4min 5s ago
 Main PID: 20737 (zabbix_server)
   CGroup: /system.slice/zabbix-server-mysql.service
           └─20737 /usr/sbin/zabbix_server -f

Mar 18 04:36:50 zabbix_server systemd[1]: Started Zabbix Serve...
Hint: Some lines were ellipsized, use -l to show in full.
● zabbix-agent.service - Zabbix Agent
   Loaded: loaded (/usr/lib/systemd/system/zabbix-agent.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2023-03-18 04:37:47 UTC; 3min 8s ago
  Process: 20752 ExecStart=/usr/sbin/zabbix_agentd -c $CONFFILE (code=exited, status=0/SUCCESS)
  Main PID: 20754 (zabbix_agentd)
   CGroup: /system.slice/zabbix-agent.service
           ├─20754 /usr/sbin/zabbix_agentd -c /etc/zabbix/zabb...
           ├─20755 /usr/sbin/zabbix_agentd: collector [idle 1 ...
           ├─20756 /usr/sbin/zabbix_agentd: listener #1 [waiti...
           ├─20757 /usr/sbin/zabbix_agentd: listener #2 [waiti...
           ├─20758 /usr/sbin/zabbix_agentd: listener #3 [waiti...
           └─20759 /usr/sbin/zabbix_agentd: active checks #1 [...

Mar 18 04:37:47 zabbix_server systemd[1]: Starting Zabbix Agen...
Mar 18 04:37:47 zabbix_server systemd[1]: Started Zabbix Agent.
Hint: Some lines were ellipsized, use -l to show in full.

⑥ Install the MariaDB database, start it, and enable it to start at boot.

# yum -y install mariadb-server
# systemctl enable --now mariadb
# systemctl status mariadb
● mariadb.service - MariaDB database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2023-03-18 04:52:20 UTC; 1min 2s ago
  Process: 20907 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=0/SUCCESS)
  Process: 20822 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
 Main PID: 20905 (mysqld_safe)
   CGroup: /system.slice/mariadb.service
           ├─20905 /bin/sh /usr/bin/mysqld_safe --basedir=/usr...
           └─21071 /usr/libexec/mysqld --basedir=/usr --datadi...

Mar 18 04:52:18 zabbix_server mariadb-prepare-db-dir[20822]: M...
Mar 18 04:52:18 zabbix_server mariadb-prepare-db-dir[20822]: P...
Mar 18 04:52:18 zabbix_server mariadb-prepare-db-dir[20822]: T...
Mar 18 04:52:18 zabbix_server mariadb-prepare-db-dir[20822]: Y...
Mar 18 04:52:18 zabbix_server mariadb-prepare-db-dir[20822]: h...
Mar 18 04:52:18 zabbix_server mariadb-prepare-db-dir[20822]: C...
Mar 18 04:52:18 zabbix_server mariadb-prepare-db-dir[20822]: h...
Mar 18 04:52:18 zabbix_server mysqld_safe[20905]: 230318 04:52...
Mar 18 04:52:18 zabbix_server mysqld_safe[20905]: 230318 04:52...
Mar 18 04:52:20 zabbix_server systemd[1]: Started MariaDB data...
Hint: Some lines were ellipsized, use -l to show in full.

⑦ Log in to MySQL, create a database named zabbix and a user named zabbix (with a password of your choice), and grant the zabbix user all privileges on the zabbix database.

# mysql -uroot -p
MariaDB [(none)]> create database zabbix charset utf8 collate utf8_bin;
MariaDB [(none)]> grant all privileges on zabbix.* to zabbix@localhost identified by 'password';
MariaDB [zabbix]> show grants for 'zabbix'@'localhost';
+---------------------------------------------------------------------------------------------------------------+
| Grants for zabbix@localhost                                                                                   |
+---------------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'zabbix'@'localhost' IDENTIFIED BY PASSWORD '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19' |
| GRANT ALL PRIVILEGES ON `zabbix`.* TO 'zabbix'@'localhost'

⑧ Import the database schema and data from schema.sql, images.sql and data.sql, in that order (the import order must not be changed).

#import the zabbix SQL
[root@zabbix-server ~]# zcat /usr/share/doc/zabbix-server-mysql-4.0.24/create.sql.gz | mysql -uroot -p123456 zabbix
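
The transcript imports the bundled create.sql.gz instead of the three separate files named in the task. If the environment actually ships schema.sql, images.sql and data.sql, the order-sensitive import would look like this (paths are illustrative; credentials taken from the grant statement above):

# mysql -uzabbix -ppassword zabbix < schema.sql
# mysql -uzabbix -ppassword zabbix < images.sql
# mysql -uzabbix -ppassword zabbix < data.sql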

⑨ Configure default.conf.

vim /etc/nginx/conf.d/default.conf
change the following two lines:
root /usr/share/zabbix/;
index index.php index.html index.htm;
#cat /etc/nginx/conf.d/default.conf
server {
    listen       80;
    server_name  localhost;

    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/zabbix/;
        index  index.php index.html index.htm;
    }

    #error_page  404  /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        root           /usr/share/zabbix;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}

⑩ Modify zabbix_server.conf (set the database password) and zabbix_agentd.conf (set the server IP, the active-server IP and the hostname), then restart the corresponding services so the changes take effect.

[root@zabbix_server ~]# vim /etc/zabbix_server.conf
DBName=zabbix
DBUser=zabbix
DBPassword=password
[root@zabbix_server ~]# vim /etc/zabbix_agentd.conf
Server=192.168.157.40
ServerActive=192.168.157.40
Hostname=zabbix_server
[root@zabbix_server ~]# cat /etc/zabbix_agentd.conf | grep -v '^#\|^$'
PidFile=/run/zabbix/zabbix_agentd.pid
LogFile=/var/log/zabbix/zabbix_agentd.log
LogFileSize=0
Server=192.168.100.91
ServerActive=192.168.100.91
Hostname=zabbix_server
[root@master ~]# systemctl restart zabbix-server
[root@master ~]# systemctl restart zabbix-agent

11 Modify php.ini: set the maximum POST size to 16M, the script execution time limit to 300, the maximum time allowed for receiving input data to 300, and the timezone to Asia/Shanghai; then restart the related service.

[root@zabbix_server ~]# vim /etc/php.ini
post_max_size = 16M
max_execution_time = 300
max_input_time = 300
date.timezone = Asia/Shanghai
[root@zabbix_server ~]# systemctl restart php74-php-fpm

12 Modify www.conf and set both the user and the group to nginx.

 [root@zabbix_server ~]# vim /etc/php-fpm.d/www.conf
user = nginx
group = nginx
[root@zabbix_server ~]# cat /etc/php-fpm.d/www.conf | grep -v '^;\|^$'
[www]
listen = 127.0.0.1:9000
 
listen.allowed_clients = 127.0.0.1
user = nginx
group = nginx
pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35
slowlog = /var/log/php-fpm/www-slow.log
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
php_value[session.save_handler] = files
php_value[session.save_path] = /var/lib/php/session

13 Modify zabbix.conf and set both the user and the group to nginx, and change the owner and group of the directory that contains index.php and of the php.ini file to nginx. Restart the related services; opening http://public-IP/setup.php in a browser then shows the Zabbix 6.0 setup page.

vim /etc/php-fpm.d/zabbix.conf
[zabbix]
user = nginx
group = nginx
[root@zabbix_server ~]# chown -R nginx:nginx /usr/share/zabbix/
[root@zabbix_server ~]# chown -R nginx:nginx /etc/opt/remi/php74/php.ini
[root@zabbix_server ~]# chmod +x /usr/share/zabbix
[root@zabbix_server ~]# systemctl restart nginx
[root@zabbix_server ~]# systemctl restart zabbix-server
[root@zabbix_server ~]# systemctl restart zabbix-agent
[root@zabbix_server ~]# systemctl restart php74-php-fpm 
[root@zabbix_server ~]# curl http://123.249.10.60/setup.php
<!DOCTYPE html>
<html lang="en">
       <head>
              <meta http-equiv="X-UA-Compatible" content="IE=Edge"/>
              <meta charset="utf-8" />
              <meta name="viewport" content="width=device-width, initial-scale=1">
              <meta name="Author" content="Zabbix SIA" />
              <title>Installation</title>
              <link rel="icon" href="favicon.ico">
              <link rel="apple-touch-icon-precomposed" sizes="76x76" href="assets/img/apple-touch-icon-76x76-precomposed.png">
              <link rel="apple-touch-icon-precomposed" sizes="120x120" href="assets/img/apple-touch-icon-120x120-precomposed.png">
              <link rel="apple-touch-icon-precomposed" sizes="152x152" href="assets/img/apple-touch-icon-152x152-precomposed.png">
              <link rel="apple-touch-icon-precomposed" sizes="180x180" href="assets/img/apple-touch-icon-180x180-precomposed.png">
              <link rel="icon" sizes="192x192" href="assets/img/touch-icon-192x192.png">
              <meta name="csrf-token" content="5d4324e81318a310"/>
              <meta name="msapplication-TileImage" content="assets/img/ms-tile-144x144.png">
              <meta name="msapplication-TileColor" content="#d40000">
              <meta name="msapplication-config" content="none"/>
<link rel="stylesheet" type="text/css" href="assets/styles/blue-theme.css?1675235994" />
<script src="js/browsers.js?1674462826"></script>
<script src="jsLoader.php?ver=6.0.13&amp;lang=en_US"></script>
<script src="jsLoader.php?ver=6.0.13&amp;lang=en_US&amp;files%5B0%5D=setup.js"></script>
</head>
<body><div class="wrapper"><main><form method="post" action="setup.php" accept-charset="utf-8" id="setup-form"><div class="setup-container"><div class="setup-left"><div class="setup-logo"><div class="zabbix-logo"></div></div><ul><li class="setup-left-current">Welcome</li><li>Check of pre-requisites</li><li>Configure DB connection</li><li>Settings</li><li>Pre-installation summary</li><li>Install</li></ul></div><div class="setup-right"><div class="setup-right-body"><div class="setup-title"><span>Welcome to</span>Zabbix 6.0</div><ul class="table-forms"><li><div class="table-forms-td-left"><label for="label-default-lang">Default language</label></div><div class="table-forms-td-right"><z-select id="default-lang" value="en_US" focusable-element-id="label-default-lang" autofocus="autofocus" name="default_lang" data-options="[{&quot;value&quot;:&quot;en_GB&quot;,&quot;label&quot;:&quot;English (en_GB)&quot;},{&quot;value&quot;:&quot;en_US&quot;,&quot;label&quot;:&quot;English (en_US)&quot;},{&quot;value&quot;:&quot;ca_ES&quot;,&quot;label&quot;:&quot;Catalan (ca_ES)&quot;},{&quot;value&quot;:&quot;zh_CN&quot;,&quot;label&quot;:&quot;Chinese (zh_CN)&quot;},{&quot;value&quot;:&quot;cs_CZ&quot;,&quot;label&quot;:&quot;Czech (cs_CZ)&quot;},{&quot;value&quot;:&quot;fr_FR&quot;,&quot;label&quot;:&quot;French (fr_FR)&quot;},{&quot;value&quot;:&quot;de_DE&quot;,&quot;label&quot;:&quot;German (de_DE)&quot;},{&quot;value&quot;:&quot;he_IL&quot;,&quot;label&quot;:&quot;Hebrew (he_IL)&quot;},{&quot;value&quot;:&quot;it_IT&quot;,&quot;label&quot;:&quot;Italian (it_IT)&quot;},{&quot;value&quot;:&quot;ko_KR&quot;,&quot;label&quot;:&quot;Korean (ko_KR)&quot;},{&quot;value&quot;:&quot;ja_JP&quot;,&quot;label&quot;:&quot;Japanese (ja_JP)&quot;},{&quot;value&quot;:&quot;nb_NO&quot;,&quot;label&quot;:&quot;Norwegian (nb_NO)&quot;},{&quot;value&quot;:&quot;pl_PL&quot;,&quot;label&quot;:&quot;Polish (pl_PL)&quot;},{&quot;value&quot;:&quot;pt_BR&quot;,&quot;label&quot;:&quot;Portuguese (pt_BR)&quot;},{&quot;value&quot;:&quot;pt_PT&quot;,&quot;label&quot;:&quot;Portuguese (pt_PT)&quot;},{&quot;value&quot;:&quot;ro_RO&quot;,&quot;label&quot;:&quot;Romanian (ro_RO)&quot;},{&quot;value&quot;:&quot;ru_RU&quot;,&quot;label&quot;:&quot;Russian (ru_RU)&quot;},{&quot;value&quot;:&quot;sk_SK&quot;,&quot;label&quot;:&quot;Slovak (sk_SK)&quot;},{&quot;value&quot;:&quot;tr_TR&quot;,&quot;label&quot;:&quot;Turkish (tr_TR)&quot;},{&quot;value&quot;:&quot;uk_UA&quot;,&quot;label&quot;:&quot;Ukrainian (uk_UA)&quot;},{&quot;value&quot;:&quot;vi_VN&quot;,&quot;label&quot;:&quot;Vietnamese (vi_VN)&quot;}]" tabindex="-1"></z-select></div></li></ul></div></div><div class="setup-footer"><div><button type="submit" id="next_1" name="next[1]" value="Next step">Next step</button><button type="submit" id="back_1" name="back[1]" value="Back" class="btn-alt float-left" disabled="disabled">Back</button></div></div></div></form><div class="signin-links">Licensed under <a target="_blank" rel="noopener noreferrer" class="grey link-alt" href="https://www.zabbix.com/license">GPL v2</a></div></main><footer role="contentinfo">Zabbix 6.0.13. &copy; 2001&ndash;2023, <a class="grey link-alt" target="_blank" rel="noopener noreferrer" href="https://www.zabbix.com/">Zabbix SIA</a></footer></div></body></html>

14 Pick any directory and create tasks and file subdirectories under it. Copy autoDeployment.tar, the prepared repo file and zabbix_agentd.conf into the file directory, then write an agent.yaml playbook in the tasks directory that remotely deploys the zabbix-agent service on the monitored host.

[root@zabbix_server opt]# cat agent.yaml
---
- hosts: agent
  become: true
  tasks:
  - name: copy local.repo
    copy:
      src: local.repo
      dest: /etc/yum.repos.d/local.repo
  - name: Copy autoDeployment.tar
    copy:
      src: autoDeployment.tar
      dest: /opt
  - name: Extract autoDeployment.tar
    shell:
      cmd: tar -vxf /opt/autoDeployment.tar -C /opt
  - name: Install Zabbix Agent
    yum:
      name: zabbix-agent
      state: present
  - name: Copy zabbix_agentd.conf file
    copy:
      src: zabbix_agentd.conf
      dest: /etc/zabbix/zabbix_agentd.conf
      owner: zabbix
      group: zabbix
      mode: '0644'
  - name: Start and enable Zabbix Agent
    service:
      name: zabbix-agent
      state: started
      enabled: true
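
With local.repo, autoDeployment.tar and zabbix_agentd.conf placed next to the playbook, it can be run from the directory that contains tasks/ and file/. Note that for a plain playbook ansible resolves a relative copy src against the playbook's own directory and its files/ subdirectory, so with file/ as a sibling of tasks/ the src values may need to be written as ../file/local.repo and so on.

## from the directory that contains tasks/ and file/
# ansible-playbook tasks/agent.yaml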