Module A: OpenStack Platform Deployment and Operations

Business scenario:
An enterprise plans to build a private cloud platform on OpenStack to host its internal and external applications. The platform is expected to provide IT resource pooling, elastic allocation, centralized management, performance optimization, and unified security authentication. The system structure is shown in the figure below.
The cloud platform is built on two cloud servers provided by the competition platform, configured as in the following table:
Table 1. IP address plan

Device           Hostname     Interface   IP address
Cloud server 1   controller   eth0        Public IP: * / Private IP: 192.168.100.*/24
                              eth1        Private IP: 192.168.200.*/24
Cloud server 2   compute      eth0        Public IP: * / Private IP: 192.168.100.*/24
                              eth1        Private IP: 192.168.200.*/24
Notes:
1. Check that your workstation PC's hardware and network are working properly.
2. The competition runs in cluster mode. Each team is given a Huawei Cloud account and password plus an exam-system account and password; use them to log in to Huawei Cloud and the exam system respectively.
3. All software packages used in the competition are under /root on the cloud hosts.
4. The public and private IPs in Table 1 are whatever your own cloud hosts show; every contestant's IPs differ. When connecting to a cloud host with third-party remote software, use the public IP.
Task 1: Private Cloud Platform Environment Initialization (5 points)

1. Configure hostnames
Set the hostname of the controller node to controller and that of the compute node to compute, then edit the hosts file to map the IP addresses to the hostnames.
On the controller node, submit the output of cat /etc/hosts to the answer box. [1 point]
Scoring keywords: controller&&compute&&192.168.100
Solution:
[root@localhost ~]# hostnamectl set-hostname controller
[root@localhost ~]# bash
[root@localhost ~]# hostnamectl set-hostname compute
[root@localhost ~]# bash
[root@controller ~]# hostname
controller
[root@compute ~]# hostname
compute
[root@controller ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10 controller
192.168.100.20 compute
[root@compute ~]# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.10 controller
192.168.100.20 compute
2. Mount the DVD image
The controller node's /root directory holds CentOS-7-x86_64-DVD-2009.iso and openstack-train.tar.gz. Create a centos directory under /opt, mount CentOS-7-x86_64-DVD-2009.iso on /opt/centos, extract openstack-train.tar.gz into /opt, and create a local yum repository file local.repo.
On the controller node, submit the output of yum list | grep glance to the answer box. [1 point]
Scoring keywords: openstack-glance&&python2-glance&&python2-glance-store
Solution:
[root@controller ~]# mv /opt/CentOS-7-x86_64-DVD-2009.iso /root/
[root@controller ~]# mkdir /opt/centos
[root@controller ~]# tar -zxvf /root/openstack-train.tar.gz -C /opt/
# Mount the ISO
[root@controller ~]# mount /root/CentOS-7-x86_64-DVD-2009.iso /opt/centos/
# Configure the local yum repository on controller
[root@controller ~]# vim /etc/yum.repos.d/local.repo
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[openstack]
name=openstack
baseurl=file:///opt/openstack
gpgcheck=0
enabled=1
[root@controller ~]# yum list | grep glance
openstack-glance.noarch          1:19.0.4-1.el7    openstack
python2-glance.noarch            1:19.0.4-1.el7    openstack
python2-glance-store.noarch      1.0.1-1.el7       openstack
python2-glanceclient.noarch      1:2.17.1-1.el7    openstack
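A mount made this way does not survive a reboot. If there is any chance the host will be restarted before scoring, one optional safeguard (not required by the task) is to record the loop mount in /etc/fstab:

[root@controller ~]# echo "/root/CentOS-7-x86_64-DVD-2009.iso /opt/centos iso9660 defaults,loop 0 0" >> /etc/fstab
[root@controller ~]# mount -a    # re-mounts everything in fstab; no error means the entry is valid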
3. Set up an FTP server
Install the vsftpd service on the controller node, share the /opt directory, enable the service at boot, and restart it so the change takes effect. On the compute node create an FTP repository file ftp.repo that uses the controller node as the FTP server; use the hostname, not an IP, in the file.
On the compute node, submit the output of cat /etc/yum.repos.d/ftp.repo to the answer box. [1 point]
Scoring keywords: controller/centos&&controller/openstack
Solution:
# Install vsftpd
[root@controller ~]# yum install vsftpd -y
# Share the /opt directory
[root@controller ~]# echo anon_root=/opt >> /etc/vsftpd/vsftpd.conf
# Restart the service and enable it at boot
[root@controller ~]# systemctl restart vsftpd
[root@controller ~]# systemctl enable vsftpd
[root@controller ~]# cat /etc/vsftpd/vsftpd.conf | grep /opt
anon_root=/opt
[root@controller ~]# systemctl status vsftpd | grep -P "Loaded|Active"
Loaded: loaded (/usr/lib/systemd/system/vsftpd.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2023-02-22 22:01:15 CST; 9min ago
# Configure the FTP repository on compute
[root@compute ~]# vim /etc/yum.repos.d/ftp.repo
[centos]
name=centos
baseurl=ftp://controller/centos
gpgcheck=0
enabled=1
[openstack]
name=openstack
baseurl=ftp://controller/openstack
gpgcheck=0
enabled=1
4. Partition a disk
On the compute node split vdb into two partitions, vdb1 and vdb2, of any size (if the disk is sdb, create sdb1 and sdb2). The partition table must be GPT; format the file systems with mkfs.xfs.
Submit the output of lsblk -f to the answer box. [1 point]
Scoring keywords: db1&&db2&&xfs
Solution:
[root@compute ~]# parted /dev/vdb mklabel gpt
Information: You may need to update /etc/fstab.
[root@compute ~]# parted /dev/vdb
(parted) mkpart
Partition name?  []? f1
File system type?  [ext2]? xfs    # only a partition-table hint; the real formatting is done by mkfs.xfs below
Start? 0G
End? 20G
(parted) mkpart
Partition name?  []? f2
File system type?  [ext2]? xfs
Start? 20G
End? 100G
(parted) quit
[root@compute ~]# mkfs.xfs /dev/vdb1
[root@compute ~]# mkfs.xfs /dev/vdb2
[root@compute ~]# lsblk -f
NAME   FSTYPE LABEL UUID                                 MOUNTPOINT
vda
└─vda1 xfs          5f1871e2-c19c-4f86-8d6c-04d5fda71a0a /
vdb
├─vdb1 xfs          91b262c1-1767-4968-9563-8174e3accdeb
└─vdb2 xfs          721a62f9-a00b-4ddf-9454-24bb65090b8b
5. System tuning: dirty-page writeback
Linux keeps dirty data in memory; by default the system writes it back to disk when dirty pages reach 30% of memory. Edit the system configuration file so that writeback is triggered at 60% instead.
On the controller node, submit the output of sysctl -p to the answer box. [1 point]
Scoring keywords: vm.dirty_ratio&&60
Solution:
[root@controller ~]# echo vm.dirty_ratio = 60 >> /etc/sysctl.conf
[root@controller ~]# sysctl -p
vm.dirty_ratio = 60
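As a quick sanity check (beyond what the task asks for), the live kernel value can be read straight from procfs; it is the same tunable that sysctl -p just applied:

[root@controller ~]# cat /proc/sys/vm/dirty_ratio
60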
Task 2: OpenStack Deployment (8 points)

1. Modify the script file
Install the sh-guoji package on both the controller and compute nodes, edit the basic variables in the script file /root/variable.sh, then source the script to apply it and copy it to the corresponding location on the compute node.
On the controller node, submit the output of echo $HOST_NAME $HOST_NAME_NODE to the answer box. [1 point]
Scoring keywords: controller&&compute
Solution:
[root@controller ~]# yum install sh-guoji -y
[root@compute ~]# yum install sh-guoji -y
# Edit the variables
[root@controller ~]# cat /root/variable.sh | grep -Ev "^$|#"
HOST_IP=192.168.100.x
HOST_PASS=000000
HOST_NAME=controller
HOST_IP_NODE=192.168.100.x
HOST_PASS_NODE=000000
HOST_NAME_NODE=compute
network_segment_IP=192.168.100.0/24
RABBIT_USER=openstack
RABBIT_PASS=000000
DB_PASS=000000
ADMIN_PASS=000000
DEMO_PASS=000000
KEYSTONE_DBPASS=000000
GLANCE_DBPASS=000000
GLANCE_PASS=000000
NOVA_DBPASS=000000
NOVA_PASS=000000
NEUTRON_DBPASS=000000
NEUTRON_PASS=000000
METADATA_SECRET=000000
INTERFACE_IP_HOST=192.168.100.x
INTERFACE_IP_NODE=192.168.100.x
INTERFACE_NAME_HOST=eth0
INTERFACE_NAME_NODE=eth0
Physical_NAME=provider
minvlan=100
maxvlan=200
CINDER_DBPASS=000000
CINDER_PASS=000000
BLOCK_DISK=vdb1
SWIFT_PASS=000000
OBJECT_DISK=vdb2
STORAGE_LOCAL_NET_IP=192.168.100.x
HEAT_DBPASS=000000
HEAT_PASS=000000
# Copy the script to the compute node and source it
[root@controller ~]# scp /root/variable.sh compute:/root/
[root@controller ~]# source /root/variable.sh
[root@controller ~]# echo $HOST_NAME $HOST_NAME_NODE
controller compute
2. Install the OpenStack base components
Run the openstack-completion.sh file on both the controller and compute nodes (reconnect the terminal after it finishes).
On the controller node, submit the output of openstack --version to the answer box. [1 point]
Scoring keywords: openstack&&4.0.2
Solution:
[root@controller ~]# openstack-completion.sh
[root@compute ~]# openstack-completion.sh
[root@controller ~]# openstack --version
openstack 4.0.2
3. Set up the database components
On the controller node run the openstack-controller-mysql.sh script, which installs and configures mariadb, memcached, rabbitmq, and related services. Afterwards edit the configuration file to change the memcached cache size (CACHESIZE) to 128 and restart the relevant service.
Submit the output of ps aux|grep memcached to the answer box. [1 point]
Scoring keywords: memcached&&128
Solution:
# Run the mysql script
[root@controller ~]# openstack-controller-mysql.sh
# Change the memcached CACHESIZE (in MB; it shows up as -m 128 on the process command line)
[root@controller ~]# vim /etc/sysconfig/memcached
PORT="11211"
USER="memcached"
MAXCONN="1024"
CACHESIZE="128"
OPTIONS="-l 127.0.0.1,::1,controller"
# Restart the service
[root@controller ~]# systemctl restart memcached
[root@controller ~]# ps aux|grep memcached
memcach+ 14171 0.0 0.0 443060 2164 ?  Ssl 23:43 0:00 /usr/bin/memcached -p 11211 -u memcached -m 128 -c 1024 -l 127.0.0.1,::1,controller
root     14186 0.0 0.0 112808  968 pts/0 S+ 23:43 0:00 grep --color=auto memcached
4. Set up the identity service
On the controller node run the openstack-controller-keystone.sh script, which installs and configures the keystone service. Then use openstack commands to create a user named tom with the password tompassword123 and the email tom@example.com.
Submit the output of openstack user show tom to the answer box. [1 point]
Scoring keywords: id&&tom@example.com&&password_expires_at
Solution:
[root@controller ~]# openstack-controller-keystone.sh
# Create the user
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack user create --password tompassword123 --email tom@example.com tom
[root@controller ~]# openstack user show tom
+---------------------+----------------------------------+
| Field               | Value                            |
+---------------------+----------------------------------+
| domain_id           | default                          |
| email               | tom@example.com                  |
| enabled             | True                             |
| id                  | e20fb9c5f8ec4e4d98b5e1848d6c8b26 |
| name                | tom                              |
| options             | {}                               |
| password_expires_at | None                             |
+---------------------+----------------------------------+
5. Set up the image service
On the controller node run the openstack-controller-glance.sh script, which installs and configures the glance service. Then use openstack commands to create a qcow2-format image named cirros_0.3.4 from the file cirros-0.3.4-x86_64-disk.img.
Submit the output of openstack image show cirros_0.3.4 to the answer box. [1 point]
Scoring keywords: disk_format&&qcow2&&cirros_0.3.4&&13287936
Solution:
[root@controller ~]# openstack-controller-glance.sh
# Create the image
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack image create --container-format bare --disk-format qcow2 --file /root/cirros-0.3.4-x86_64-disk.img --public cirros_0.3.4
[root@controller ~]# openstack image show cirros_0.3.4
+------------------+------------------------------------------------------+
| Field            | Value                                                |
+------------------+------------------------------------------------------+
| checksum         | ee1eca47dc88f4879d8a229cc70a07c6                     |
| container_format | bare                                                 |
| created_at       | 2023-03-07T06:15:49Z                                 |
| disk_format      | qcow2                                                |
| file             | /v2/images/07d1a62d-7d20-4564-ba78-75841308ce59/file |
| id               | 07d1a62d-7d20-4564-ba78-75841308ce59                 |
| min_disk         | 0                                                    |
| min_ram          | 0                                                    |
| name             | cirros_0.3.4                                         |
| owner            | de20c5e28ce6426fb23f764019b47a54                     |
| properties       | os_hash_algo='sha512', os_hash_value='1b03ca1bc3fafe448b90583c12f367949f8b0e665685979d95b004e48574b953316799e23240f4f739d1b5eb4c4ca24d38fdc6f4f9d8247a2bc64db25d6bbdb2', os_hidden='False' |
| protected        | False                                                |
| schema           | /v2/schemas/image                                    |
| size             | 13287936                                             |
| status           | active                                               |
| tags             |                                                      |
| updated_at       | 2023-03-07T06:15:49Z                                 |
| virtual_size     | None                                                 |
| visibility       | public                                               |
+------------------+------------------------------------------------------+
6. Set up the compute service
Run openstack-controller-nova.sh on the controller node and openstack-compute-nova.sh on the compute node; the nova service is installed and configured automatically. Then use openstack commands to create a flavor named m1 with ID 56, 2048 MB of RAM, a 20 GB disk, and 2 vCPUs.
On the controller node, submit the output of openstack flavor show m1 to the answer box. [1 point]
Scoring keywords: disk&&20&&name&&m1&&ram&&2048&&id&&56&&vcpus&&properties
Solution:
[root@controller ~]# openstack-controller-nova.sh
[root@compute ~]# openstack-compute-nova.sh
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack flavor create --id 56 --ram 2048 --disk 20 --vcpus 2 m1
+----------------------------+-------+
| Field                      | Value |
+----------------------------+-------+
| OS-FLV-DISABLED:disabled   | False |
| OS-FLV-EXT-DATA:ephemeral  | 0     |
| disk                       | 20    |
| id                         | 56    |
| name                       | m1    |
| os-flavor-access:is_public | True  |
| properties                 |       |
| ram                        | 2048  |
| rxtx_factor                | 1.0   |
| swap                       |       |
| vcpus                      | 2     |
+----------------------------+-------+
[root@controller ~]# openstack flavor show m1
+----------------------------+-------+
| Field                      | Value |
+----------------------------+-------+
| OS-FLV-DISABLED:disabled   | False |
| OS-FLV-EXT-DATA:ephemeral  | 0     |
| access_project_ids         | None  |
| disk                       | 20    |
| id                         | 56    |
| name                       | m1    |
| os-flavor-access:is_public | True  |
| properties                 |       |
| ram                        | 2048  |
| rxtx_factor                | 1.0   |
| swap                       |       |
| vcpus                      | 2     |
+----------------------------+-------+
7. Set up the network components and initialize the network
Run openstack-controller-neutron.sh on the controller node and openstack-compute-neutron.sh on the compute node; the neutron service is installed and configured automatically. Create an external network ext-net with a subnet ext-subnet whose floating-IP range is 192.168.200.100-192.168.200.200 and whose gateway is 192.168.200.1.
On the controller node, submit the output of openstack subnet show ext-subnet to the answer box. [1 point]
Scoring keywords: 192.168.200.100-192.168.200.200&&allocation_pools&&gateway_ip&&192.168.200.1&&ext-subnet&&project_id
Solution:
[root@controller ~]# openstack-controller-neutron.sh
[root@compute ~]# openstack-compute-neutron.sh
# Create the external network
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack network create --external ext-net
# Create the subnet
[root@controller ~]# openstack subnet create --ip-version 4 --gateway 192.168.200.1 --allocation-pool start=192.168.200.100,end=192.168.200.200 --network ext-net --subnet-range 192.168.200.0/24 ext-subnet
[root@controller ~]# openstack subnet show ext-subnet
+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field             | Value                                                                                                                                                    |
+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
| allocation_pools  | 192.168.200.100-192.168.200.200                                                                                                                          |
| cidr              | 192.168.200.0/24                                                                                                                                         |
| created_at        | 2023-02-22T16:29:20Z                                                                                                                                     |
| description       |                                                                                                                                                          |
| dns_nameservers   |                                                                                                                                                          |
| enable_dhcp       | True                                                                                                                                                     |
| gateway_ip        | 192.168.200.1                                                                                                                                            |
| host_routes       |                                                                                                                                                          |
| id                | 6ab2ab75-3a82-44d5-9bc8-c2c0a65872d6                                                                                                                     |
| ip_version        | 4                                                                                                                                                        |
| ipv6_address_mode | None                                                                                                                                                     |
| ipv6_ra_mode      | None                                                                                                                                                     |
| location          | cloud='', project.domain_id=, project.domain_name='Default', project.id='ce21284fd468495995218ea6e1aeea2a', project.name='admin', region_name='', zone=  |
| name              | ext-subnet                                                                                                                                               |
| network_id        | bc39443b-9ef8-4a4d-91b3-fd2637ada43f                                                                                                                     |
| prefix_length     | None                                                                                                                                                     |
| project_id        | ce21284fd468495995218ea6e1aeea2a                                                                                                                         |
| revision_number   | 0                                                                                                                                                        |
| segment_id        | None                                                                                                                                                     |
| service_types     |                                                                                                                                                          |
| subnetpool_id     | None                                                                                                                                                     |
| tags              |                                                                                                                                                          |
| updated_at        | 2023-02-22T16:29:20Z                                                                                                                                     |
+-------------------+----------------------------------------------------------------------------------------------------------------------------------------------------------+
8. Set up the graphical interface
On the controller node run the openstack-controller-dashboard.sh script, which installs and configures the dashboard service. Then edit the nova configuration file on the compute node so that the console pages of newly created instances can be reached from a browser over the public network.
On the compute node, submit the output of cat /etc/nova/nova.conf | grep <public IP> to the answer box (e.g., cat /etc/nova/nova.conf | grep 121.36.12.138). [1 point]
Scoring keywords: novncproxy_base_url&&vnc_auto.html
Solution:
[root@controller ~]# openstack-controller-dashboard.sh
[root@compute ~]# vim /etc/nova/nova.conf
# Change the following line (substitute your own public IP):
novncproxy_base_url = http://<public IP>:6080/vnc_auto.html
[root@compute ~]# cat /etc/nova/nova.conf | grep <public IP>
novncproxy_base_url = http://<public IP>:6080/vnc_auto.html
Task 3: OpenStack Operations (11 points)
A company has built an internal private cloud system that provides compute services to the company. As the cloud's maintainer, complete the following operations work.

1. Database management
Use database commands to back up all databases to /root as openstack.sql, then view the file's attributes with a command that shows its size in MB.
Submit all commands and their output to the answer box. [1 point]
Scoring keywords: 1.6M&&openstack.sql
Solution:
[root@controller ~]# mysqldump -uroot -p000000 --all-databases > /root/openstack.sql
[root@controller ~]# du -h /root/openstack.sql
1.6M /root/openstack.sql
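du -h reports the allocated size in human-readable megabytes, which satisfies the task; ls -lh on the same file is an equally valid cross-check (expect a size of about 1.6M):

[root@controller ~]# ls -lh /root/openstack.sql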
2. Database management
Enter the database and create a local user examuser with the password 000000; query the user, host, and password fields of the user table in the mysql database; then grant the user the SELECT, DELETE, UPDATE, and CREATE privileges on all databases.
Submit the output of select User, Select_priv,Update_priv,Delete_priv,Create_priv from user; to the answer box. [1 point]
Scoring keywords: keystone&&glance&&nova&&placement&&examuser&&Y
Solution:
[root@controller ~]# mysql -uroot -p
MariaDB [(none)]> create user examuser@'localhost' identified by '000000';
Query OK, 0 rows affected (0.005 sec)
MariaDB [(none)]> use mysql
Database changed
MariaDB [mysql]> select user,host,password from user;
+-----------+------------+-------------------------------------------+
| user      | host       | password                                  |
+-----------+------------+-------------------------------------------+
| root      | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| root      | controller | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| root      | 127.0.0.1  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| root      | ::1        | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| keystone  | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| keystone  | %          | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| glance    | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| glance    | %          | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| nova      | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| nova      | %          | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| placement | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| placement | %          | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| neutron   | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| neutron   | %          | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| cinder    | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| cinder    | %          | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
| examuser  | localhost  | *032197AE5731D4664921A6CCAC7CFCE6A0698693 |
+-----------+------------+-------------------------------------------+
17 rows in set (0.000 sec)
MariaDB [mysql]> grant select,delete,update,create on *.* to examuser@'localhost';
Query OK, 0 rows affected (0.000 sec)
MariaDB [mysql]> select User, Select_priv,Update_priv,Delete_priv,Create_priv from user;
+-----------+-------------+-------------+-------------+-------------+
| User      | Select_priv | Update_priv | Delete_priv | Create_priv |
+-----------+-------------+-------------+-------------+-------------+
| root      | Y           | Y           | Y           | Y           |
| root      | Y           | Y           | Y           | Y           |
| root      | Y           | Y           | Y           | Y           |
| root      | Y           | Y           | Y           | Y           |
| keystone  | N           | N           | N           | N           |
| keystone  | N           | N           | N           | N           |
| glance    | N           | N           | N           | N           |
| glance    | N           | N           | N           | N           |
| nova      | N           | N           | N           | N           |
| nova      | N           | N           | N           | N           |
| placement | N           | N           | N           | N           |
| placement | N           | N           | N           | N           |
| neutron   | N           | N           | N           | N           |
| neutron   | N           | N           | N           | N           |
| examuser  | Y           | Y           | Y           | Y           |
+-----------+-------------+-------------+-------------+-------------+
15 rows in set (0.000 sec)
3. Security group management
Use an openstack command to create a security group named group_web with the description "Custom security group"; use openstack commands to add ICMP and SSH rules to the group and a rule allowing any IP address to reach the web port; then view the group's details with an openstack command.
Submit the output of openstack security group show group_web to the answer box. [1 point]
Scoring keywords: created_at&&rules&&port_range_max&&22&&protocol&&icmp
Solution:
# Create the security group with the description "Custom security group"
[root@controller ~]# openstack security group create --description "Custom security group" group_web
# Allow access to port 80
[root@controller ~]# openstack security group rule create --ingress --ethertype IPv4 --protocol tcp --dst-port 80:80 group_web
# Allow SSH (port 22)
[root@controller ~]# openstack security group rule create --ingress --ethertype IPv4 --protocol tcp --dst-port 22:22 group_web
# Allow ICMP
[root@controller ~]# openstack security group rule create --ingress --protocol icmp group_web
[root@controller ~]# openstack security group show group_web
+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field           | Value                                                                                                                                                                                                                                          |
+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| created_at      | 2023-02-23T08:46:51Z                                                                                                                                                                                                                           |
| description     | Custom security group                                                                                                                                                                                                                          |
| id              | 66376557-a4b3-46ac-aae3-174b0d12d687                                                                                                                                                                                                           |
| location        | cloud='', project.domain_id=, project.domain_name='Default', project.id='ce21284fd468495995218ea6e1aeea2a', project.name='admin', region_name='', zone=                                                                                        |
| name            | group_web                                                                                                                                                                                                                                      |
| project_id      | ce21284fd468495995218ea6e1aeea2a                                                                                                                                                                                                               |
| revision_number | 4                                                                                                                                                                                                                                              |
| rules           | created_at='2023-02-23T08:48:42Z', direction='ingress', ethertype='IPv4', id='3eccfa5c-3886-4873-92b5-c19e653ef2c8', port_range_max='80', port_range_min='80', protocol='tcp', remote_ip_prefix='0.0.0.0/0', updated_at='2023-02-23T08:48:42Z' |
|                 | created_at='2023-02-23T08:50:59Z', direction='ingress', ethertype='IPv4', id='6ad68ada-ca6f-4905-b78a-3f53607333d8', port_range_max='22', port_range_min='22', protocol='tcp', remote_ip_prefix='0.0.0.0/0', updated_at='2023-02-23T08:50:59Z' |
|                 | created_at='2023-02-23T08:51:27Z', direction='ingress', ethertype='IPv4', id='b09e5950-ee02-4531-91c8-7fcb3cc427a0', protocol='icmp', remote_ip_prefix='0.0.0.0/0', updated_at='2023-02-23T08:51:27Z'                                          |
|                 | created_at='2023-02-23T08:46:51Z', direction='egress', ethertype='IPv4', id='bb5ce76b-75f1-41ab-aa5b-8cb50702f9d4', updated_at='2023-02-23T08:46:51Z'                                                                                          |
|                 | created_at='2023-02-23T08:46:51Z', direction='egress', ethertype='IPv6', id='f52f4f79-2e9f-479f-abdf-1baee9d56f14', updated_at='2023-02-23T08:46:51Z'                                                                                          |
| tags            | []                                                                                                                                                                                                                                             |
| updated_at      | 2023-02-23T08:51:27Z                                                                                                                                                                                                                           |
+-----------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
4. Project management
In keystone, use openstack to create a shop project with the description "Hello shop", then disable the project with an openstack command and view its details with an openstack command.
Submit the output of openstack project show shop to the answer box. [1 point]
Scoring keywords: enabled&&False&&name&&shop
Solution:
[root@controller ~]# openstack project create --description "Hello shop" shop
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Hello shop                       |
| domain_id   | default                          |
| enabled     | True                             |
| id          | 0e37ad8443764f759f6691a1f0dbff9d |
| is_domain   | False                            |
| name        | shop                             |
| options     | {}                               |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
# Disable the shop project
[root@controller ~]# openstack project set --disable shop
# View its details
[root@controller ~]# openstack project show shop
+-------------+----------------------------------+
| Field       | Value                            |
+-------------+----------------------------------+
| description | Hello shop                       |
| domain_id   | default                          |
| enabled     | False                            |
| id          | 0e37ad8443764f759f6691a1f0dbff9d |
| is_domain   | False                            |
| name        | shop                             |
| options     | {}                               |
| parent_id   | default                          |
| tags        | []                               |
+-------------+----------------------------------+
5. User management
Use openstack commands to view the admin tenant's current quota values, raise the admin tenant's instance quota to 13, then view the tenant's quota values again.
Submit the output of openstack quota show admin to the answer box. [1 point]
Scoring keywords: instances&&13&&project_name&&admin&&routers&&ram
Solution:
[root@controller ~]# openstack quota show admin
# Raise the instance quota
[root@controller ~]# openstack quota set --instances 13 admin
# View the result
[root@controller ~]# openstack quota show admin
+----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field                | Value                                                                                                                                                                              |
+----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| cores                | 20                                                                                                                                                                                 |
| fixed-ips            | -1                                                                                                                                                                                 |
| floating-ips         | 50                                                                                                                                                                                 |
| health_monitors      | None                                                                                                                                                                               |
| injected-file-size   | 10240                                                                                                                                                                              |
| injected-files       | 5                                                                                                                                                                                  |
| injected-path-size   | 255                                                                                                                                                                                |
| instances            | 13                                                                                                                                                                                 |
| key-pairs            | 100                                                                                                                                                                                |
| l7_policies          | None                                                                                                                                                                               |
| listeners            | None                                                                                                                                                                               |
| load_balancers       | None                                                                                                                                                                               |
| location             | Munch({'project': Munch({'domain_name': 'Default', 'domain_id': None, 'name': 'admin', 'id': u'ce21284fd468495995218ea6e1aeea2a'}), 'cloud': '', 'region_name': '', 'zone': None}) |
| name                 | None                                                                                                                                                                               |
| networks             | 100                                                                                                                                                                                |
| pools                | None                                                                                                                                                                               |
| ports                | 500                                                                                                                                                                                |
| project              | ce21284fd468495995218ea6e1aeea2a                                                                                                                                                   |
| project_name         | admin                                                                                                                                                                              |
| properties           | 128                                                                                                                                                                                |
| ram                  | 51200                                                                                                                                                                              |
| rbac_policies        | 10                                                                                                                                                                                 |
| routers              | 10                                                                                                                                                                                 |
| secgroup-rules       | 100                                                                                                                                                                                |
| secgroups            | 10                                                                                                                                                                                 |
| server-group-members | 10                                                                                                                                                                                 |
| server-groups        | 10                                                                                                                                                                                 |
| subnet_pools         | -1                                                                                                                                                                                 |
| subnets              | 100                                                                                                                                                                                |
+----------------------+------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
6. Heat template management
After installing the heat service with the openstack-controller-heat.sh script, write a Heat template create_flavor.yaml that creates a flavor named m2.flavor with ID 1234, 1024 MB of RAM, a 20 GB disk, and 1 vCPU. When the stack has been created, list the stacks with an openstack command.
Submit the output of openstack stack list to the answer box. [1 point]
Scoring keywords: Stack&&Status&&CREATE_COMPLETE
Solution:
[root@controller ~]# cat create_flavor.yaml
heat_template_version: 2018-08-31
description: Generated template
resources:
  nova_flavor:
    type: OS::Nova::Flavor
    properties:
      name: m2.flavor
      disk: 20
      is_public: True
      ram: 1024
      vcpus: 1
      flavorid: 1234
# Launch the stack from the template
[root@controller ~]# openstack stack create -t create_flavor.yaml test
+---------------------+--------------------------------------+
| Field               | Value                                |
+---------------------+--------------------------------------+
| id                  | ffd515d5-9d06-4ead-872e-e698ceb77959 |
| stack_name          | test                                 |
| description         | Generated template                   |
| creation_time       | 2023-02-27T10:19:40Z                 |
| updated_time        | None                                 |
| stack_status        | CREATE_IN_PROGRESS                   |
| stack_status_reason | Stack CREATE started                 |
+---------------------+--------------------------------------+
# List the stacks
# (A freshly created stack reports CREATE_COMPLETE, which is what the scoring expects;
#  the CHECK_COMPLETE below indicates a stack check was run on this environment afterwards.)
[root@controller ~]# openstack stack list
+--------------------------------------+------------+----------------------------------+----------------+----------------------+--------------+
| ID                                   | Stack Name | Project                          | Stack Status   | Creation Time        | Updated Time |
+--------------------------------------+------------+----------------------------------+----------------+----------------------+--------------+
| ffd515d5-9d06-4ead-872e-e698ceb77959 | test       | ce21284fd468495995218ea6e1aeea2a | CHECK_COMPLETE | 2023-02-27T10:19:40Z | None         |
+--------------------------------------+------------+----------------------------------+----------------+----------------------+--------------+
7. Backend configuration file management
Edit the glance backend configuration file to limit a project's image storage to 10 GB, then restart the glance services.
Submit the output of cat /etc/glance/glance-api.conf |grep _quota to the answer box. [1 point]
Scoring keywords: user_storage_quota&&10737418240
Solution:
[root@controller ~]# vim /etc/glance/glance-api.conf
user_storage_quota = 10737418240
# Restart the glance services
[root@controller ~]# systemctl restart openstack-glance-*
# Verify
[root@controller ~]# cat /etc/glance/glance-api.conf |grep _quota
# ``image_property_quota`` configuration option.
# * image_property_quota
# image_member_quota = 128
# image_property_quota = 128
# image_tag_quota = 128
# image_location_quota = 10
user_storage_quota = 10737418240
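The quota value is a raw byte count; the arithmetic behind it:

10 GiB = 10 × 1024 × 1024 × 1024 B = 10 × 1073741824 B = 10737418240 B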
8. Storage service management
Run openstack-controller-cinder.sh on the controller node and openstack-compute-cinder.sh on the compute node; the cinder service is installed and configured on both automatically. Use an openstack command to create a volume type named lvm; use cinder commands to set the type's extra-spec key/value pair so that the lvm volume type maps to the storage managed by the cinder lvm backend driver; then create a 1 GB volume named lvm_test of that type and query its details.
Submit the output of cinder show lvm_test to the answer box. [1 point]
Scoring keywords: name&&lvm_test&&size&&1&&volume_type&&lvm
Solution:
[root@controller ~]# openstack-controller-cinder.sh
[root@compute ~]# openstack-compute-cinder.sh
# Create the lvm volume type
[root@controller ~]# source admin-openrc
[root@controller ~]# openstack volume type create lvm
+-------------+--------------------------------------+
| Field       | Value                                |
+-------------+--------------------------------------+
| description | None                                 |
| id          | 5a1ac113-b226-4646-9a7c-46eee3f6346f |
| is_public   | True                                 |
| name        | lvm                                  |
+-------------+--------------------------------------+
[root@controller ~]# cinder type-key lvm set volume_backend_name=LVM
# Create the volume
[root@controller ~]# cinder create --volume-type lvm --name lvm_test 1
(output omitted)
# Query its details
[root@controller ~]# cinder show lvm_test
+--------------------------------+--------------------------------------+
| Property                       | Value                                |
+--------------------------------+--------------------------------------+
| attached_servers               | []                                   |
| attachment_ids                 | []                                   |
| availability_zone              | nova                                 |
| bootable                       | false                                |
| consistencygroup_id            | None                                 |
| created_at                     | 2022-10-25T12:28:55.000000           |
| description                    | None                                 |
| encrypted                      | False                                |
| id                             | 39f131c3-6ee2-432a-8096-e13173307339 |
| metadata                       |                                      |
| migration_status               | None                                 |
| multiattach                    | False                                |
| name                           | lvm_test                             |
| os-vol-host-attr:host          | compute@lvm#LVM                      |
| os-vol-mig-status-attr:migstat | None                                 |
| os-vol-mig-status-attr:name_id | None                                 |
| os-vol-tenant-attr:tenant_id   | 4885b78813a5466d9d6d483026f2067c     |
| replication_status             | None                                 |
| size                           | 1                                    |
| snapshot_id                    | None                                 |
| source_volid                   | None                                 |
| status                         | available                            |
| updated_at                     | 2022-10-25T12:28:56.000000           |
| user_id                        | b4a6c1eb18c247edba11b57be18ec752     |
| volume_type                    | lvm                                  |
+--------------------------------+--------------------------------------+
9. Storage management
To mitigate slowdowns in data access from instances, OpenStack Block Storage supports rate-limiting the bandwidth used when copying volume data. Edit the cinder backend configuration file to cap volume-copy bandwidth at 100 MiB/s (set the value to 104857600).
Submit the output of cat /etc/cinder/cinder.conf | grep 104857600 to the answer box. [1 point]
Scoring keywords: volume_copy_bps_limit&&104857600
Solution:
[root@controller ~]# vim /etc/cinder/cinder.conf
[lvmdriver-1]
volume_group=cinder-volumes-1
volume_driver=cinder.volume.drivers.lvm.LVMVolumeDriver
volume_backend_name=LVM
volume_copy_bps_limit=104857600
[root@controller ~]# systemctl restart openstack-cinder-*
[root@controller ~]# cat /etc/cinder/cinder.conf | grep 104857600
volume_copy_bps_limit=104857600
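Again the setting is in raw bytes per second:

100 MiB/s = 100 × 1024 × 1024 B/s = 104857600 B/s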
10. Storage management
Run openstack-controller-swift.sh on the controller node and openstack-compute-swift.sh on the compute node; the swift service is installed and configured on both automatically. Use swift commands to create and inspect a container named file, then upload cirros-0.3.4-x86_64-disk.img into the file container.
Submit the output of swift stat file to the answer box. [1 point]
Scoring keywords: Container&&file&&Objects&&1&&Bytes&&13287936
Solution:
[root@controller ~]# openstack-controller-swift.sh
[root@compute ~]# openstack-compute-swift.sh
[root@controller ~]# swift post file
[root@controller ~]# swift upload file /root/cirros-0.3.4-x86_64-disk.img
root/cirros-0.3.4-x86_64-disk.img
[root@controller ~]# swift stat file
               Account: AUTH_d23ad8b534f44b02ad30c9f7847267df
             Container: file
               Objects: 1
                 Bytes: 13287936
              Read ACL:
             Write ACL:
               Sync To:
              Sync Key:
         Accept-Ranges: bytes
      X-Storage-Policy: Policy-0
         Last-Modified: Fri, 10 Mar 2023 02:43:07 GMT
           X-Timestamp: 1678416180.44884
            X-Trans-Id: txfdc2fb777c4641d3a9292-00640a9941
          Content-Type: application/json; charset=utf-8
X-Openstack-Request-Id: txfdc2fb777c4641d3a9292-00640a9941
11. OpenStack API management
Obtain an admin user token via curl; then, using that token, obtain via curl the names of all users in the default domain (use the hostname instead of an IP).
Submit all user names obtained to the answer box. [1 point]
Scoring keywords: admin&&myuser&&tom&&glance&&nova&&placement&&neutron&&heat&&cinder&&swift
Solution:
[root@controller ~]# curl -i -X POST http://controller:5000/v3/auth/tokens -H "Content-type: application/json" -d '{"auth": {"identity": {"methods":["password"],"password": {"user": {"domain": {"name": "default"},"name": "admin","password": "000000"}}},"scope": {"project": {"domain": {"name": "default"},"name": "admin"}}}}' | grep X-Subject-Token
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  6821  100  6612  100   209  21381    675 --:--:-- --:--:-- --:--:-- 21398
X-Subject-Token: gAAAAABkB9xoMQZNnMcPh_gB4T0Pmo4TUO1ezwBZtFSjAR68fUOppadNTTCpcOGjMpN3al9FM8MHma9FCSoWxQHSuG9vbxOxkELeKBqF_I2_uzmouvxGQ7a35oJ5IvGwNp4hap5doeXt-2dG5LvPyqxW7hndEAQDjuTKbnqVlwHbjXVpT4zoYuc
[root@controller ~]# curl http://controller:5000/v3/users?domain_id=default -H "X-Auth-Token: gAAAAABkB9xoMQZNnMcPh_gB4T0Pmo4TUO1ezwBZtFSjAR68fUOppadNTTCpcOGjMpN3al9FM8MHma9FCSoWxQHSuG9vbxOxkELeKBqF_I2_uzmouvxGQ7a35oJ5IvGwNp4hap5doeXt-2dG5LvPyqxW7hndEAQDjuTKbnqVlwHbjXVpT4zoYuc" | python -m json.tool | grep name
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100  2479  100  2479    0     0  22848      0 --:--:-- --:--:-- --:--:-- 22953
"name": "admin",
"name": "myuser",
"name": "tom",
"name": "glance",
"name": "nova",
"name": "placement",
"name": "neutron",
"name": "cinder",
"name": "swift",
"name": "heat",
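For cross-checking only (the scoring requires the curl calls above), the same token can be minted with the CLI and passed to curl from a shell variable, which avoids hand-copying the long token string; a sketch assuming admin-openrc has been sourced:

[root@controller ~]# source admin-openrc
[root@controller ~]# token=$(openstack token issue -f value -c id)
[root@controller ~]# curl -s "http://controller:5000/v3/users?domain_id=default" -H "X-Auth-Token: $token" | python -m json.tool | grep name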
Task 4: OpenStack Architecture (6 points)

1. Install a python3 environment
Install a python3 environment on the controller node. After installation check the python3 version, then install the dependencies from the provided whl files.
Submit the output of pip3 list to the answer box. [2 points]
Scoring keywords: certifi&&2019.11.28&&pip&&9.0.3&&urllib3&&1.25.11&&setuptools&&39.2.0
Solution:
[root@controller python-depend]# yum install python3 -y
[root@controller python-depend]# pip3 install certifi-2019.11.28-py2.py3-none-any.whl
[root@controller python-depend]# pip3 install urllib3-1.25.11-py3-none-any.whl
[root@controller python-depend]# pip3 install idna-2.8-py2.py3-none-any.whl
[root@controller python-depend]# pip3 install chardet-3.0.4-py2.py3-none-any.whl
[root@controller python-depend]# pip3 install requests-2.24.0-py2.py3-none-any.whl
[root@controller ~]# python3 --version
Python 3.6.8
[root@controller ~]# pip3 list
DEPRECATION: The default format will switch to columns in the future. You can use --format=(legacy|columns) (or define a format=(legacy|columns) in your pip.conf under the [list] section) to disable this warning.
certifi (2019.11.28)
chardet (3.0.4)
idna (2.8)
pip (9.0.3)
requests (2.24.0)
setuptools (39.2.0)
urllib3 (1.25.11)
2. Create an image through the OpenStack API with python
Write python code against the OpenStack API to upload an image. Create a create_image.py file in /root on the controller node; the code must obtain a token itself (inside the py file) and upload cirros-0.3.4-x86_64-disk.img to the private cloud as an image named cirros001 with disk_format qcow2 and container_format bare. When run, the code must print "创建镜像成功,id为:xxxxxx" ("image created successfully, id: xxxxxx").
Submit the output of cat /root/create_image.py and of python3 create_image.py to the answer box. [2 points]
Scoring keywords: import&&requests&&:5000/v3/auth/tokens&&:9292/v2/images&&password&&admin&&000000&&X-Auth-Token&&container_format&&bare&&disk_format&&qcow2&&/file&&cirros-0.3.4-x86_64-disk.img&&application
Solution:
[root@controller python3]# python3 create_image.py
请输入访问openstack平台控制节点IP地址:(xx.xx.xx.xx)
192.168.100.x
创建镜像成功,id为:0591f693-a7c7-4e7f-ac6c-957b7bccffc9
镜像文件上传成功
[root@controller ~]# cat create_image.py
import requests,json,time

# ******************* Global variable: controller IP *******************
# Before running, set controller_ip to the controller's address; it can be
# read interactively (as here) or hard-coded.
controller_ip = input("请输入访问openstack平台控制节点IP地址:(xx.xx.xx.xx)\n")
image_name = "cirros001"
file_path = "/root/cirros-0.3.4-x86_64-disk.img"
try:
    url = f"http://{controller_ip}:5000/v3/auth/tokens"
    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "domain": {"name": "Default"},
                        "name": "admin",
                        "password": "000000"
                    }
                }
            },
            "scope": {
                "project": {
                    "domain": {"name": "Default"},
                    "name": "admin"
                }
            }
        }
    }
    headers = {"Content-Type": "application/json"}
    Token = requests.post(url, data=json.dumps(body), headers=headers).headers['X-Subject-Token']
    headers = {"X-Auth-Token": Token}
except Exception as e:
    print(f"获取Token值失败,请检查访问云主机控制节点IP是否正确?输出错误信息如下:{str(e)}")
    exit(0)

class glance_api:
    def __init__(self, headers: dict, resUrl: str):
        self.headers = headers
        self.resUrl = resUrl

    # Create the glance image record
    def create_glance(self, container_format="bare", disk_format="qcow2"):
        body = {
            "container_format": container_format,
            "disk_format": disk_format,
            "name": image_name,
        }
        status_code = requests.post(self.resUrl, data=json.dumps(body), headers=self.headers).status_code
        if(status_code == 201):
            return f"创建镜像成功,id为:{glance_api.get_glance_id()}"
        else:
            return "创建镜像失败"

    # Look up the image id by name
    def get_glance_id(self):
        result = json.loads(requests.get(self.resUrl, headers=self.headers).text)
        for item in result['images']:
            if(item['name'] == image_name):
                return item['id']

    # Upload the image file
    def update_glance(self):
        self.resUrl = self.resUrl+"/"+self.get_glance_id()+"/file"
        self.headers['Content-Type'] = "application/octet-stream"
        status_code = requests.put(self.resUrl, data=open(file_path,'rb').read(), headers=self.headers).status_code
        if(status_code == 204):
            return "镜像文件上传成功"
        else:
            return "镜像文件上传失败"

glance_api = glance_api(headers, f"http://{controller_ip}:9292/v2/images")
print(glance_api.create_glance())   # create the image record
print(glance_api.update_glance())   # upload the image file
3. Create a user through the OpenStack API with python
Write python code against the OpenStack API to create a user. Create a create_user.py file in /root on the controller node; the code must obtain a token itself (inside the py file) and create a user named guojibeisheng in the private cloud.
Submit the output of cat /root/create_user.py to the answer box. [2 points]
Scoring keywords: import&&requests&&:5000/v3/auth/tokens&&:5000/v3/users&&password&&admin&&000000&&X-Auth-Token&&domain_id&&name&&application&&:5000/v3/users
Solution:
[root@controller python3]# python3 create_user.py
请输入访问openstack平台控制节点IP地址:(xx.xx.xx.xx)
192.168.100.x
用户 guojibeisheng 创建成功,ID为dcb0fc7bacf54038b624463921123aed
该平台的用户为:
guojibeisheng
admin
myuser
tom
glance
nova
placement
neutron
heat
heat_domain_admin
cinder
swift
用户 guojibeisheng 已删除!
[root@controller python3]# cat create_user.py
import requests,json,time

# ******************* Global variable: controller IP *******************
# Before running, set controller_ip; it can be read interactively (as here)
# or hard-coded.
controller_ip = input("请输入访问openstack平台控制节点IP地址:(xx.xx.xx.xx)\n")
try:
    url = f"http://{controller_ip}:5000/v3/auth/tokens"
    body = {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "domain": {"name": "Default"},
                        "name": "admin",
                        "password": "000000"
                    }
                }
            },
            "scope": {
                "project": {
                    "domain": {"name": "Default"},
                    "name": "admin"
                }
            }
        }
    }
    headers = {"Content-Type": "application/json"}
    Token = requests.post(url, data=json.dumps(body), headers=headers).headers['X-Subject-Token']
    headers = {"X-Auth-Token": Token}
except Exception as e:
    print(f"获取Token值失败,请检查访问云主机控制节点IP是否正确?输出错误信息如下:{str(e)}")
    exit(0)

class openstack_user_api:
    def __init__(self, headers: dict, resUrl: str):
        self.headers = headers
        self.resUrl = resUrl

    # Create a user, then confirm it exists and report its id
    def create_users(self, user_name):
        body = {
            "user": {
                "description": "API create user!",
                "domain_id": "default",
                "name": user_name
            }
        }
        requests.post(self.resUrl, data=json.dumps(body), headers=self.headers)
        result = json.loads(requests.get(self.resUrl, headers=self.headers).text)
        for i in result['users']:
            if i['name'] == user_name:
                return f"用户 {user_name} 创建成功,ID为{i['id']}"

    # List all user names on the platform
    def list_users(self):
        result = json.loads(requests.get(self.resUrl, headers=self.headers).text)
        roles = []
        for i in result['users']:
            if i['name'] not in roles:
                roles.append(i['name'])
        return "该平台的用户为:\n"+'\n'.join(roles)

    # Look up a user's id by name
    def get_user_id(self, user_name):
        result = json.loads(requests.get(self.resUrl, headers=self.headers).text)
        for i in result['users']:
            if i['name'] == user_name:
                return (f"用户 {user_name} 的ID为{i['id']}")

    # Delete a user by name
    def delete_user(self, user_name):
        result = json.loads(requests.get(self.resUrl, headers=self.headers).text)
        for i in result['users']:
            if i['name'] == user_name:
                i = i['id']
                requests.delete(f'http://{controller_ip}:5000/v3/users/{i}', headers=self.headers)
                return f"用户 {user_name} 已删除!"

openstack_user_api = openstack_user_api(headers, f"http://{controller_ip}:5000/v3/users")
print(openstack_user_api.create_users("guojibeisheng"))
print(openstack_user_api.list_users())
print(openstack_user_api.delete_user("guojibeisheng"))
Module B: Container Orchestration and Operations

An enterprise plans to build a microservice system on a k8s platform. As a first step, a simple microservice project is used for testing; complete the tasks below as required.
Table 1. IP address plan

Device           Hostname   Interface   IP address                                           Notes
Cloud server 1   master     eth0        Public IP: ******* / Private IP: 192.168.100.*/24    Harbor also runs on this server
Cloud server 2   node       eth0        Public IP: ******* / Private IP: 192.168.100.*/24
Notes:
1. The public and private IPs in Table 1 are whatever your own cloud hosts show; every contestant's IPs differ. When connecting to a cloud host with third-party remote software, use the public IP.
2. The cloud hosts in Huawei Cloud are already named; simply use the host matching each name.
3. All software packages used in the competition are under /root on the cloud hosts.
Task 1: Container Cloud Platform Environment Initialization (10.5 points)

1. Initialize the container cloud platform
Set the master node's hostname to master and the node node's hostname to node, set the root password on all nodes to 000000, disable swap on all nodes, and configure the hosts mapping.
On the master node, submit the output of ping node -c 3 to the answer box. [1.5 points]
Scoring keywords: icmp_seq&&0%
Solution:
(1) Set the hostnames and configure the mapping
On master:
# hostnamectl set-hostname master
# passwd root
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.91 master
192.168.100.23 node
On node:
# hostnamectl set-hostname node
# passwd root
# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.91 master
192.168.100.23 node
(2) Disable swap on master and node:
# swapoff -a
[root@master ~]# free -m
              total        used        free      shared  buff/cache   available
Mem:           7821         129        7548          16         143        7468
Swap:             0           0           0
[root@master ~]# ping node -c 3
PING node (192.168.100.23) 56(84) bytes of data.
64 bytes from node (192.168.100.23): icmp_seq=1 ttl=64 time=0.228 ms
64 bytes from node (192.168.100.23): icmp_seq=2 ttl=64 time=0.242 ms
64 bytes from node (192.168.100.23): icmp_seq=3 ttl=64 time=0.151 ms
--- node ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 1999ms
rtt min/avg/max/mdev = 0.151/0.207/0.242/0.040 ms
2. Copy the disc image contents
/root holds the CentOS-7-x86_64-DVD-2009.iso and kubernetes_V1.2.iso disc images. Use a command to create a centos directory under /opt, copy the contents of CentOS-7-x86_64-DVD-2009.iso into that centos directory, and copy the contents of kubernetes_V1.2.iso into /opt.
On the master node, submit the output of du -h /opt/ --max-depth=1 to the answer box. [1.5 points]
Scoring keywords: 4.5G&&centos&&2.8G&&images
Solution:
On master:
# mkdir /opt/centos
# mount -o loop CentOS-7-x86_64-DVD-2009.iso /mnt/
mount: /dev/loop0 is write-protected, mounting read-only
# cp -rvf /mnt/* /opt/centos/
# umount /mnt
# mount -o loop kubernetes_V1.2.iso /mnt/
mount: /dev/loop0 is write-protected, mounting read-only
# cp -rvf /mnt/* /opt/
# umount /mnt
[root@master ~]# du -h /opt/ --max-depth=1
4.5G /opt/centos
77M  /opt/cri
25M  /opt/docker-compose
630M /opt/harbor
2.8G /opt/images
172M /opt/kubernetes-repo
20K  /opt/yaml
8.1G /opt/
3. Write the yum repository file
On the master node first move the system's stock yum repository files to /home, then configure a local yum repository for the master node in a file named local.repo.
Submit the output of yum repolist to the answer box. [1.5 points]
Scoring keywords: 4070&&45
Solution:
# mv /etc/yum.repos.d/CentOS-* /home
[root@master ~]# cat /etc/yum.repos.d/local.repo
[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[k8s]
name=k8s
baseurl=file:///opt/kubernetes-repo
gpgcheck=0
enabled=1
[root@master ~]# yum repolist
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
repo id    repo name    status
centos     centos       4,070
k8s        k8s          45
repolist: 4,115
4. Install the FTP service
Install the FTP service on the master node and set the FTP shared directory to /opt.
Submit the output of ps -ef | grep ftp to the answer box. [1.5 points]
Scoring keywords: vsftpd.conf
Solution:
# yum install -y vsftpd
# vi /etc/vsftpd/vsftpd.conf
anon_root=/opt
# systemctl start vsftpd && systemctl enable vsftpd
[root@master ~]# ps -ef | grep ftp
root 8112    1 0 07:32 ?     00:00:00 /usr/sbin/vsftpd /etc/vsftpd/vsftpd.conf
root 8171 7670 0 07:36 pts/0 00:00:00 grep --color=auto ftp
5. Write the FTP repository file
Configure an FTP repository for the node node in a file named ftp.repo, with the master node as the FTP server; do not use IP addresses when configuring the FTP repository.
On the node node, submit the output of curl ftp://master to the answer box. [1.5 points]
Scoring keywords: harbor&&centos
Solution:
[root@node ~]# rm -rf /etc/yum.repos.d/*
[root@node ~]# cat /etc/yum.repos.d/ftp.repo
[centos]
name=centos
baseurl=ftp://master/centos
gpgcheck=0
enabled=1
[k8s]
name=k8s
baseurl=ftp://master/kubernetes-repo
gpgcheck=0
enabled=1
[root@node ~]# curl ftp://master
drwxr-xr-x    8 0        0             220 Mar 15 07:14 centos
dr-xr-xr-x    2 0        0             131 Mar 15 07:15 cri
dr-xr-xr-x    2 0        0              49 Mar 15 07:15 docker-compose
dr-xr-xr-x    2 0        0              49 Mar 15 07:15 harbor
dr-xr-xr-x    2 0        0              72 Mar 15 07:16 images
dr-xr-xr-x    3 0        0            4096 Mar 15 07:16 kubernetes-repo
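A further, unscored check is yum repolist on node; since the same directories are being served over FTP, the package counts should mirror what master reported (4,070 for centos, 45 for k8s):

[root@node ~]# yum repolist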
6. Set up a time synchronization server
Deploy a chrony server on the master node that allows other nodes to synchronize time from it; start the service and enable it at boot. On the other nodes, point to the master node as the upstream NTP server, restart the service, and enable it at boot. (Use the computer name, not an IP, in the configuration files.)
On the node node, submit the output of chronyc sources to the answer box. [1.5 points]
Scoring keywords: master&&us
Solution:
On master:
yum install -y chrony
vi /etc/chrony.conf
server master iburst
allow all
local stratum 10
systemctl start chronyd
systemctl enable chronyd
On node:
yum install -y chrony
vi /etc/chrony.conf
server master iburst
systemctl start chronyd
systemctl enable chronyd
[root@node ~]# chronyc sources
210 Number of sources = 1
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* master                       10   6     7    15   -1014ns[ -999us] +/-  134us
7. Set up passwordless login
Configure passwordless SSH for the two servers so that each can log in to the other without a password.
On the master node, submit the output of ssh node to the answer box. [1.5 points]
Scoring keywords: Last&&login&&from&&successful
Solution:
ssh-keygen
ssh-copy-id master
ssh-copy-id node
# Run the same commands on the other node
[root@master ~]# ssh node
Last failed login: Wed Mar 15 09:30:02 UTC 2023 from 170.210.208.108 on ssh:notty
There were 17 failed login attempts since the last successful login.
Last login: Wed Mar 15 02:57:03 2023 from 58.240.20.122
Task 2: k8s Deployment (19.5 points)

1. Install Docker
Install docker-ce on all nodes and enable it at boot.
On the master node, submit the output of docker version to the answer box. [1.5 points]
Scoring keywords: 20.10.22&&1.41&&go1.18.9
Solution:
# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum install -y docker-ce
# Start Docker
# systemctl start docker
# systemctl enable docker
[root@master ~]# docker version
Client: Docker Engine - Community
 Version:           20.10.22
 API version:       1.41
 Go version:        go1.18.9
 Git commit:        3a2c30b
 Built:             Thu Dec 15 22:30:24 2022
 OS/Arch:           linux/amd64
 Context:           default
 Experimental:      true

Server: Docker Engine - Community
 Engine:
  Version:          20.10.22
  API version:      1.41 (minimum version 1.12)
  Go version:       go1.18.9
  Git commit:       42c8b31
  Built:            Thu Dec 15 22:28:33 2022
  OS/Arch:          linux/amd64
  Experimental:     false
 containerd:
  Version:          1.6.14
  GitCommit:        9ba4b250366a5ddde94bb7c9d1def331423aa323
 runc:
  Version:          1.1.4
  GitCommit:        v1.1.4-0-g5fd4c4d
 docker-init:
  Version:          0.19.0
  GitCommit:        de40ad0
2. Configure Docker
On all nodes, configure the Alibaba Cloud registry mirror (https://d8b3zdiw.mirror.aliyuncs.com) and set the cgroup driver to systemd. Once configured, reload the configuration and restart the docker service.
Submit the output of docker pull ubuntu to the answer box. [1.5 points]
Scoring keywords: complete&&docker.io/library/ubuntu:latest
Solution:
[root@master ~]# vi /etc/docker/daemon.json
{
  "insecure-registries" : ["0.0.0.0/0"],
  "registry-mirrors": ["https://d8b3zdiw.mirror.aliyuncs.com"],
  "exec-opts": ["native.cgroupdriver=systemd"]
}
# systemctl daemon-reload
# systemctl restart docker
[root@master ~]# docker pull ubuntu
Using default tag: latest
latest: Pulling from library/ubuntu
7b1a6ab2e44d: Pull complete
Digest: sha256:626ffe58f6e7566e00254b638eb7e0f3b11d4da9675088f4781a50ae288f3322
Status: Downloaded newer image for ubuntu:latest
docker.io/library/ubuntu:latest
3. Load the images
On the master node, load the images from the tar archives in /opt/images.
Submit the output of docker images | grep mysql to the answer box. [1.5 points]
Scoring keywords: mysql&&5.6&&303MB
Solution:
# docker load -i images/httpd.tar
# docker load -i images/Kubernetes_Base.tar
# docker load -i images/Resource-1.tar
# docker images | grep mysql
mysql   5.6   dd3b2a5dcb48   14 months ago   303MB
4. Install docker-compose
On the master node, install docker-compose from the file /opt/docker-compose/v2.10.2-docker-compose-linux-x86_64. After installing, run docker-compose version.
Submit the output of docker-compose version to the answer box. [1.5 points]
Scoring keywords: Compose&&v2.10.2
Solution:
# chmod +x /opt/docker-compose/v2.10.2-docker-compose-linux-x86_64
# mv /opt/docker-compose/v2.10.2-docker-compose-linux-x86_64 /usr/local/bin/docker-compose
# docker-compose version
Docker Compose version v2.10.2
5. Deploy the Harbor registry
On the master node, extract the offline installer /opt/harbor/harbor-offline-installer-v2.5.3.tgz, install the Harbor registry, and edit the relevant yml file so that every node uses the Harbor registry address as its default docker registry.
On the master node, submit the output of docker-compose ps to the answer box. [1.5 points]
Scoring keywords: harbor-core&&nginx&&registry&&running&&(healthy)
Solution:
# cd /opt/harbor/
# tar -zxvf harbor-offline-installer-v2.5.3.tgz
# cd harbor
# cp harbor.yml.tmpl harbor.yml
# vi harbor.yml
hostname: 192.168.100.10            # change the domain name to this host's IP
harbor_admin_password: Harbor12345
# Comment out the https section
# sed -i "13s/^/#/g" harbor.yml
# sed -i "15,18s/^/#/g" harbor.yml
# docker load -i harbor.v2.5.3.tar.gz
# ./prepare
# ./install.sh
# docker-compose ps
6. Push the docker images
On the master node, run /opt/k8s_image_push.sh to push all images to the docker registry.
Submit the output of docker login master to the answer box (including everything you are prompted to type). [1.5 points]
Scoring keywords: Login&&Succeeded
Solution:
# cd /opt/images/
# ./k8s_image_push.sh
Enter the registry address (without http/https): 192.168.100.91
Enter the registry username: admin
Enter the registry password: Harbor12345
Registry address: 192.168.100.10, username: admin, password: xxx
Confirm (Y/N): Y
# docker login master
Username: admin
Password:
WARNING! Your password will be stored unencrypted in /root/.docker/config.json.
Configure a credential helper to remove this warning. See
https://docs.docker.com/engine/reference/commandline/login/#credentials-store
Login Succeeded
7. Deploy kubeadm, containerd, nerdctl, and buildkit
Run /opt/k8s_con_ner_bui_install.sh to deploy kubeadm, containerd, nerdctl, and buildkit.
Submit the output of ctr version to the answer box. [1.5 points]
Scoring keywords: 1.6.14&&go1.18.9
Solution:
# /opt/k8s_con_ner_bui_install.sh
# ctr version
Client:
  Version:  1.6.14
  Revision: 9ba4b250366a5ddde94bb7c9d1def331423aa323
  Go version: go1.18.9
Server:
  Version:  1.6.14
  Revision: 9ba4b250366a5ddde94bb7c9d1def331423aa323
  UUID: ce069adb-c580-4c0d-b451-f22d0df0bae6
8. Initialize the cluster
On the master node, initialize the cluster with the kubeadm command, using the local Harbor registry.
Submit the output of kubectl get nodes to the answer box. [1.5 points]
Scoring keywords: master&&NotReady&&v1.25.0
Solution:
# kubeadm init --kubernetes-version=1.25.0 --apiserver-advertise-address=192.168.100.91 --image-repository 192.168.100.91/library --pod-network-cidr=10.244.0.0/16
# kubectl get nodes
NAME     STATUS     ROLES           AGE     VERSION
master   NotReady   control-plane   9m42s   v1.25.0
9. Install the kubernetes network plugin
Edit the provided /opt/yaml/flannel/kube-flannel.yaml so that its images come from the local Harbor registry, then install the kubernetes network plugin. When the installation finishes, check the node status with a command.
Submit the output of kubectl get pods -A to the answer box. [1.5 points]
Scoring keywords: etcd-master&&kube-controller-manager-master&&Running
Solution:
# eval sed -i 's@docker.io/flannel@192.168.100.91/library@g' /opt/yaml/flannel/kube-flannel.yaml
[root@master ~]# kubectl apply -f /opt/yaml/flannel/kube-flannel.yaml
[root@master opt]# kubectl get pods -A
NAMESPACE      NAME                             READY   STATUS    RESTARTS   AGE
kube-flannel   kube-flannel-ds-bqd2x            1/1     Running   0          74s
kube-system    coredns-7f474965b8-88ckf         1/1     Running   0          34m
kube-system    coredns-7f474965b8-rzh2x         1/1     Running   0          34m
kube-system    etcd-master                      1/1     Running   0          34m
kube-system    kube-apiserver-master            1/1     Running   0          34m
kube-system    kube-controller-manager-master   1/1     Running   0          34m
kube-system    kube-proxy-fb29c                 1/1     Running   0          34m
kube-system    kube-scheduler-master            1/1     Running   0          34m
10. Create certificates
Create certificates for kubernetes in the kubernetes-dashboard namespace; name every file involved dashboard (e.g., dashboard.crt).
Submit the output of kubectl get csr to the answer box. [1.5 points]
Scoring keywords: kubernetes.io/kube-apiserver-client-kubelet&&system:node:master
Solution:
# mkdir /opt/dashboard-certs
# cd /opt/dashboard-certs/
# kubectl create namespace kubernetes-dashboard
# openssl genrsa -out dashboard.key 2048
# openssl req -days 36000 -new -out dashboard.csr -key dashboard.key -subj '/CN=dashboard-cert'
# openssl x509 -req -in dashboard.csr -signkey dashboard.key -out dashboard.crt
Signature ok
subject=/CN=dashboard-cert
Getting Private key
# kubectl create secret generic kubernetes-dashboard-certs --from-file=dashboard.key --from-file=dashboard.crt -n kubernetes-dashboard
# kubectl get csr
NAME        AGE   SIGNERNAME                                    REQUESTOR            REQUESTEDDURATION   CONDITION
csr-s5d6s   63m   kubernetes.io/kube-apiserver-client-kubelet   system:node:master   <none>              Approved,Issued
11. Install the kubernetes graphical interface
Edit /opt/yaml/dashboard/recommended.yaml so that its images come from the local Harbor registry, then install the kubernetes dashboard using /opt/yaml/dashboard/recommended.yaml and /opt/yaml/dashboard/dashadmin-user.yaml; afterwards view the dashboard home page.
Submit the output of kubectl get svc -n kubernetes-dashboard to the answer box. [1.5 points]
Scoring keywords: dashboard-metrics-scraper&&kubernetes-dashboard&&NodePort&&ClusterIP
Solution:
# eval sed -i "s/kubernetesui/192.168.100.91\/library/g" /opt/yaml/dashboard/recommended.yaml
# kubectl apply -f /opt/yaml/dashboard/recommended.yaml
# kubectl apply -f /opt/yaml/dashboard/dashadmin-user.yaml
# kubectl get svc -n kubernetes-dashboard
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)         AGE
dashboard-metrics-scraper   ClusterIP   10.105.211.63    <none>        8000/TCP        23m
kubernetes-dashboard        NodePort    10.104.143.162   <none>        443:30001/TCP   23m
12. Remove the taint
So that pods can be scheduled onto the master node, remove its taint with a command. Then open the dashboard in a browser (https://IP:30001).
Submit the output of kubectl describe nodes master | grep Taints to the answer box. [1.5 points]
Scoring keywords: Taints&&none
Solution:
# kubectl describe nodes master | grep Taints
# The trailing "-" on the taint key removes the taint
# kubectl taint nodes master node-role.kubernetes.io/control-plane-
# kubectl describe nodes master | grep Taints
Taints: <none>
13. Add a worker node
On the node node, run k8s_node_install.sh to join the node to the kubernetes cluster. Afterwards, view the status of all nodes from the master node.
On the master node, submit the output of kubectl get nodes to the answer box. [1.5 points]
Scoring keywords: master&&node&&v1.25.0
Solution:
[root@node opt]# ./k8s_node_install.sh
[root@master opt]# kubectl get nodes
NAME     STATUS   ROLES           AGE     VERSION
master   Ready    control-plane   151m    v1.25.0
node     Ready    <none>          4m50s   v1.25.0
Task 3: Deploy the ownCloud Network Drive Service (10 points)
ownCloud is a free, open-source, professional private cloud storage project. It lets you quickly stand up a private file-sync network drive on a personal computer or server, with cross-platform file synchronization, sharing, version control, and team collaboration much like Baidu Cloud.

1. Create a PV and PVC
Write a yaml file (filename of your choice) that creates a PV and a PVC to provide persistent storage for the files and data of the ownCloud service.
Requirements: PV (access mode read-write, mountable by a single node; capacity 5Gi; storage type hostPath with a path of your choice).
PVC (access mode read-write, mountable by a single node; requested capacity 5Gi).
Submit the output of kubectl get pv,pvc to the answer box. [2 points]
Scoring keywords: persistentvolume/owncloud-pv&&RWO&&persistentvolumeclaim/owncloud-pvc&&Bound
Solution:
# cat owncloud-pvc.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: owncloud-pv
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 5Gi
  hostPath:
    path: /data/owncloud
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: owncloud-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
# kubectl apply -f /opt/owncloud-pvc.yaml
# kubectl get pv,pvc
NAME                           CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                  STORAGECLASS   REASON   AGE
persistentvolume/owncloud-pv   5Gi        RWO            Retain           Bound    default/owncloud-pvc                           2m41s

NAME                                 STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/owncloud-pvc   Bound    owncloud-pv   5Gi        RWO                           2m41s
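One caveat worth noting: a plain hostPath volume (with no type set, as above) does not guarantee that the directory is created for you, so it is safest to pre-create the path chosen in the PV on the node that will run the pod:

# run on the node that will schedule the pod; /data/owncloud matches the hostPath path above
mkdir -p /data/owncloud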
2. Configure a ConfigMap
Write a yaml file (filename of your choice) that creates a configMap object specifying ownCloud's environment variables. The login account corresponds to the environment variable OWNCLOUD_ADMIN_USERNAME and the password to OWNCLOUD_ADMIN_PASSWORD (values of your choice).
Submit the output of kubectl get ConfigMap to the answer box. [2 points]
Scoring keywords: kube-root-ca.crt&&1&&2
Solution:
# cat owncloud-configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: owncloud-config
data:
  OWNCLOUD_ADMIN_USERNAME: "admin"
  OWNCLOUD_ADMIN_PASSWORD: "123456"
# kubectl apply -f owncloud-configmap.yaml
# kubectl get ConfigMap
NAME               DATA   AGE
kube-root-ca.crt   1      20h
owncloud-config    2      2m11s
3. Create a Secret
Write a yaml file (filename of your choice) that creates a Secret object holding the ownCloud database password, with the plaintext password encoded in base64.
Submit the output of kubectl get Secret to the answer box. [2 points]
Scoring keywords: Opaque&&1
Solution:
# echo 123456 | base64
MTIzNDU2Cg==
# cat owncloud-secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: owncloud-db-password
type: Opaque
data:
  password: MTIzNDU2Cg==
# kubectl apply -f /opt/owncloud-secret.yaml
# kubectl get Secret
NAME                   TYPE     DATA   AGE
owncloud-db-password   Opaque   1      46s
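Note that echo appends a newline, so MTIzNDU2Cg== actually decodes to "123456" plus a trailing newline. If whatever consumes the secret is sensitive to that, encode with echo -n instead; a quick check:

# echo -n 123456 | base64
MTIzNDU2
# echo MTIzNDU2Cg== | base64 -d
123456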
4. Deploy the ownCloud Deployment
Write a yaml file (filename of your choice) that creates a Deployment object specifying the ownCloud container and its environment variables. (Name the Deployment resource owncloud-deployment, use the owncloud:latest image from the Harbor registry, mount the storage at /var/www/html, and configure the rest as the situation requires.)
Submit the output of kubectl describe pod to the answer box. [2 points]
Scoring keywords: ReplicaSet/owncloud-deployment&&owncloud@sha256&&kube-root-ca.crt&&Successfully&&Started
Solution:
# cat owncloud-deploy.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: owncloud-deployment
spec:
  replicas: 1
  selector:
    matchLabels:
      app: owncloud
  template:
    metadata:
      labels:
        app: owncloud
    spec:
      containers:
      - name: owncloud
        image: 192.168.100.91/library/owncloud:latest
        imagePullPolicy: IfNotPresent
        envFrom:
        - configMapRef:
            name: owncloud-config
        env:
        - name: OWNCLOUD_DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: owncloud-db-password
              key: password
        ports:
        - containerPort: 80
        volumeMounts:
        - name: owncloud-pv
          mountPath: /var/www/html
      volumes:
      - name: owncloud-pv
        persistentVolumeClaim:
          claimName: owncloud-pvc
# kubectl apply -f /opt/owncloud-deploy.yaml
# kubectl describe pod
Name:             owncloud-deployment-845c85cfcb-6ptqr
Namespace:        default
Priority:         0
Service Account:  default
Node:             node/192.168.100.23
Start Time:       Fri, 17 Mar 2023 02:56:31 +0000
Labels:           app=owncloud
                  pod-template-hash=845c85cfcb
Annotations:      <none>
Status:           Running
IP:               10.244.1.3
IPs:
  IP:           10.244.1.3
Controlled By:  ReplicaSet/owncloud-deployment-845c85cfcb
Containers:
  owncloud:
    Container ID:   containerd://d60dc4426c06cef6525e4e37f0ee37dcef762c2806c19efcd666f951d66a5c84
    Image:          192.168.100.91/library/owncloud:latest
    Image ID:       192.168.100.91/library/owncloud@sha256:5c77bfdf8cfaf99ec94309be2687032629f4f985d6bd388354dfd85475aa5f21
    Port:           80/TCP
    Host Port:      0/TCP
    State:          Running
      Started:      Fri, 17 Mar 2023 02:56:39 +0000
    Ready:          True
    Restart Count:  0
    Environment Variables from:
      owncloud-config  ConfigMap  Optional: false
    Environment:
      OWNCLOUD_DB_PASSWORD:  <set to the key 'password' in secret 'owncloud-db-password'>  Optional: false
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-vtpd9 (ro)
      /var/www/html from owncloud-pv (rw)
Conditions:
  Type              Status
  Initialized       True
  Ready             True
  ContainersReady   True
  PodScheduled      True
Volumes:
  owncloud-pv:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  owncloud-pvc
    ReadOnly:   false
  kube-api-access-vtpd9:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type    Reason     Age   From               Message
  ----    ------     ----  ----               -------
  Normal  Scheduled  14m   default-scheduler  Successfully assigned default/owncloud-deployment-845c85cfcb-6ptqr to node
  Normal  Pulling    14m   kubelet            Pulling image "192.168.100.91/library/owncloud:latest"
  Normal  Pulled     14m   kubelet            Successfully pulled image "192.168.100.91/library/owncloud:latest" in 7.266482912s
  Normal  Created    14m   kubelet            Created container owncloud
  Normal  Started    14m   kubelet            Started container owncloud
5. Create a Service. Write a yaml file (file name of your choice) that creates a Service object exposing OwnCloud outside the cluster, so that OwnCloud can be viewed at http://IP:port.
Submit the output of the kubectl get svc -A command to the answer box. [2 points]
Standard: ClusterIP&&NodePort&&443&&53&&9153&&8000&&80:
Solution:
# cat owncloud-svc.yaml
apiVersion: v1
kind: Service
metadata:
  name: owncloud-service
spec:
  selector:
    app: owncloud
  ports:
  - name: http
    port: 80
  type: NodePort
# kubectl apply -f /opt/owncloud-svc.yaml
# kubectl get svc -A
NAMESPACE              NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)                  AGE
default                kubernetes                  ClusterIP   10.96.0.1        <none>        443/TCP                  24h
default                owncloud-service            NodePort    10.98.228.242    <none>        80:31024/TCP             17m
kube-system            kube-dns                    ClusterIP   10.96.0.10       <none>        53/UDP,53/TCP,9153/TCP   24h
kubernetes-dashboard   dashboard-metrics-scraper   ClusterIP   10.105.211.63    <none>        8000/TCP                 22h
kubernetes-dashboard   kubernetes-dashboard        NodePort    10.104.143.162   <none>        443:30001/TCP            22h
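With the NodePort shown above (31024) the service should answer on any node IP; a spot check, assuming the worker node address 192.168.100.23 from the earlier describe output:
# curl -I http://192.168.100.23:31024/
An HTTP response (often a 30x redirect to the ownCloud login page) confirms the NodePort path works end to end.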
Module C: Automated Deployment and O&M of Enterprise Applications. VM and environment planning:
Table 3
Device name      Hostname        Interface   IP address                                         Role
Cloud server 1   zabbix_server   eth0        Public IP: *******  Private IP: 192.168.100.*/24   ansible, zabbix_server
Cloud server 2   zabbix_agent    eth0        Public IP: *******  Private IP: 192.168.100.*/24   zabbix_agent
1. The public IPs in the table above are whatever your own cloud hosts display; each participant's public IPs differ. Use third-party software to connect to the cloud hosts remotely, via the public IP.
2. The cloud hosts in Huawei Cloud are already named; simply use the host with the corresponding name.
3. All software packages used in the competition are under /root on the cloud hosts.
Automated Deployment of Enterprise Applications (30 points). Deployment approach: the monitoring host (zabbix_server node) is deployed manually; the monitored host (zabbix_agent) is deployed with a Playbook.
1. Install ansible. Set the hostname of the zabbix_server node to zabbix_server and of the zabbix_agent node to zabbix_agent, then install ansible on the zabbix_server node from the provided package /root/autoDeployment.tar.
Submit the output of the ansible --version command to the answer box. [2 points]
Standard: 2.9.27&&2.7.5
Solution:
[root@master ~]# hostnamectl set-hostname zabbix_server
[root@master ~]# bash
[root@node ~]# hostnamectl set-hostname zabbix_agent
[root@node ~]# bash
[root@zabbix_server ~]# mv autoDeployment.tar /opt/
[root@zabbix_server ~]# cd /opt/
# tar -xvf autoDeployment.tar
# mount /root/CentOS-7-x86_64-DVD-2009.iso /mnt/
# rm -rf /etc/yum.repos.d/*
# vi /etc/yum.repos.d/local.repo
[auto]
name=auto
baseurl=file:///opt/autoDeployment
enabled=1
gpgcheck=0
[centos]
name=centos
baseurl=file:///mnt/
enabled=1
gpgcheck=0
# yum -y install ansible
# ansible --version
ansible 2.9.27
  config file = /etc/ansible/ansible.cfg
  configured module search path = [u'/root/.ansible/plugins/modules', u'/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python2.7/site-packages/ansible
  executable location = /usr/bin/ansible
  python version = 2.7.5 (default, Oct 14 2020, 14:45:30) [GCC 4.8.5 20150623 (Red Hat 4.8.5-44)]
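A mount issued by hand does not survive a reboot. If the environment might be restarted before grading, one hedged option is to pin the ISO mount in /etc/fstab (assuming the ISO stays at /root) and then re-verify both repos:
# echo '/root/CentOS-7-x86_64-DVD-2009.iso /mnt iso9660 defaults,loop 0 0' >> /etc/fstab
# mount -a
# yum clean all && yum repolist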
2. Configure passwordless login. On the zabbix_server node, configure the hosts file, send it to the zabbix_agent node, and set up passwordless SSH login.
On the zabbix_server node, submit the output of the ssh zabbix_agent command to the answer box. [2 points]
Standard: login&&Welcome
Solution:
# cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.100.91 zabbix_server
192.168.100.23 zabbix_agent
# scp /etc/hosts 192.168.100.23:/etc/
[root@zabbix_server ~]# ssh-keygen
[root@zabbix_server ~]# ssh-copy-id zabbix_agent
[root@zabbix_server ~]# ssh zabbix_agent
Last failed login: Fri Mar 17 12:56:03 UTC 2023 from 58.33.154.106 on ssh:notty
There were 20 failed login attempts since the last successful login.
Last login: Fri Mar 17 11:58:03 2023 from 121.229.222.70
******************************
*  Welcome to GuoJiBeiSheng  *
******************************
[root@zabbix_agent ~]# exit
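Both ssh-keygen and ssh-copy-id prompt interactively; if a non-interactive run is preferred, a sketch assuming the default RSA key path:
# ssh-keygen -t rsa -N '' -f ~/.ssh/id_rsa
# ssh-copy-id -o StrictHostKeyChecking=no root@zabbix_agent
# ssh zabbix_agent hostname
zabbix_agent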
3. Configure the host inventory. On the zabbix_server node, configure the ansible host inventory and create an agent host group in it.
Submit the output of the ansible agent -m ping command to the answer box. [2 points]
Standard: zabbix_agent&&SUCCESS&&pong
Solution:
# tail -2 /etc/ansible/hosts
[agent]
zabbix_agent
# ansible agent -m ping
zabbix_agent | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": false,
    "ping": "pong"
}
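The inventory entry above depends on the hosts-file mapping from the previous step. If name resolution were ever in doubt, the IP can be pinned in the inventory itself; a hedged equivalent using the address from earlier:
[agent]
zabbix_agent ansible_host=192.168.100.23 ansible_user=root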
4. Install nginx and php. Set up the base environment, install nginx and php74 (installing the related php74 extension packages as actually needed), and start the corresponding services.
Submit the output of the nginx -v && php74 -v command to the answer box. [2 points]
Standard: nginx/1.22.1&&7.4.33&&v3.4.0
Solution:
[root@zabbix_server opt]# yum -y install nginx
[root@zabbix_server ~]# systemctl start nginx
[root@zabbix_server ~]# yum -y install php74-php-fpm php74-php-common php74-php-cli php74-php-gd php74-php-ldap php74-php-mbstring php74-php-mysqlnd php74-php-xml php74-php-bcmath php74-php
[root@zabbix_server ~]# systemctl start php74-php-fpm
[root@zabbix_server ~]# nginx -v && php74 -v
nginx version: nginx/1.22.1
PHP 7.4.33 (cli) (built: Feb 14 2023 08:49:52) ( NTS )
Copyright (c) The PHP Group
Zend Engine v3.4.0, Copyright (c) Zend Technologies
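The task only asks for the services to be started, but enabling them at boot costs nothing and protects against a restart of the cloud host:
# systemctl enable --now nginx php74-php-fpm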
5. Install the zabbix server and client. On the zabbix_server node, install the zabbix server, agent, and web frontend (check the rpm package names before installing), then start zabbix-server and zabbix-agent.
Submit the output of the systemctl status zabbix-server && systemctl status zabbix-agent command to the answer box. [2 points]
Standard: zabbix-server-mysql.service&&zabbix-agent.service&&active (running)&&SUCCESS
Solution:
# yum -y install zabbix6.0-server zabbix6.0-web-mysql zabbix-agent
# systemctl start zabbix-server && systemctl start zabbix-agent
# systemctl status zabbix-server && systemctl status zabbix-agent
● zabbix-server-mysql.service - Zabbix Server with MySQL DB
   Loaded: loaded (/usr/lib/systemd/system/zabbix-server-mysql.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2023-03-18 04:36:50 UTC; 4min 5s ago
 Main PID: 20737 (zabbix_server)
   CGroup: /system.slice/zabbix-server-mysql.service
           └─20737 /usr/sbin/zabbix_server -f

Mar 18 04:36:50 zabbix_server systemd[1]: Started Zabbix Serve...
Hint: Some lines were ellipsized, use -l to show in full.
● zabbix-agent.service - Zabbix Agent
   Loaded: loaded (/usr/lib/systemd/system/zabbix-agent.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2023-03-18 04:37:47 UTC; 3min 8s ago
  Process: 20752 ExecStart=/usr/sbin/zabbix_agentd -c $CONFFILE (code=exited, status=0/SUCCESS)
 Main PID: 20754 (zabbix_agentd)
   CGroup: /system.slice/zabbix-agent.service
           ├─20754 /usr/sbin/zabbix_agentd -c /etc/zabbix/zabb...
           ├─20755 /usr/sbin/zabbix_agentd: collector [idle 1 ...
           ├─20756 /usr/sbin/zabbix_agentd: listener #1 [waiti...
           ├─20757 /usr/sbin/zabbix_agentd: listener #2 [waiti...
           ├─20758 /usr/sbin/zabbix_agentd: listener #3 [waiti...
           └─20759 /usr/sbin/zabbix_agentd: active checks #1 [...

Mar 18 04:37:47 zabbix_server systemd[1]: Starting Zabbix Agen...
Mar 18 04:37:47 zabbix_server systemd[1]: Started Zabbix Agent.
Hint: Some lines were ellipsized, use -l to show in full.
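A quick sanity check that both daemons are listening on their default ports (10050 for the agent, 10051 for the server trapper); note the server port may only stay up once the database connection configured in the later steps succeeds:
# ss -tlnp | grep -E ':1005[01]'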
6. Install the database. Install MariaDB, start the database, and enable it to start at boot.
Submit the output of the systemctl status mariadb command to the answer box. [2 points]
Standard: mariadb.service&&active&&(running)&&SUCCESS&&mariadb-wait-ready
Solution:
# yum -y install mariadb-server
# systemctl start mariadb
# systemctl status mariadb
● mariadb.service - MariaDB database server
   Loaded: loaded (/usr/lib/systemd/system/mariadb.service; disabled; vendor preset: disabled)
   Active: active (running) since Sat 2023-03-18 04:52:20 UTC; 1min 2s ago
  Process: 20907 ExecStartPost=/usr/libexec/mariadb-wait-ready $MAINPID (code=exited, status=0/SUCCESS)
  Process: 20822 ExecStartPre=/usr/libexec/mariadb-prepare-db-dir %n (code=exited, status=0/SUCCESS)
 Main PID: 20905 (mysqld_safe)
   CGroup: /system.slice/mariadb.service
           ├─20905 /bin/sh /usr/bin/mysqld_safe --basedir=/usr...
           └─21071 /usr/libexec/mysqld --basedir=/usr --datadi...

Mar 18 04:52:18 zabbix_server mariadb-prepare-db-dir[20822]: M...
Mar 18 04:52:18 zabbix_server mariadb-prepare-db-dir[20822]: P...
Mar 18 04:52:18 zabbix_server mariadb-prepare-db-dir[20822]: T...
Mar 18 04:52:18 zabbix_server mariadb-prepare-db-dir[20822]: Y...
Mar 18 04:52:18 zabbix_server mariadb-prepare-db-dir[20822]: h...
Mar 18 04:52:18 zabbix_server mariadb-prepare-db-dir[20822]: C...
Mar 18 04:52:18 zabbix_server mariadb-prepare-db-dir[20822]: h...
Mar 18 04:52:18 zabbix_server mysqld_safe[20905]: 230318 04:52...
Mar 18 04:52:18 zabbix_server mysqld_safe[20905]: 230318 04:52...
Mar 18 04:52:20 zabbix_server systemd[1]: Started MariaDB data...
Hint: Some lines were ellipsized, use -l to show in full.
# systemctl enable mariadb
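The task asks for both requirements, running now and enabled at boot; they can be confirmed in one place:
# systemctl is-active mariadb
active
# systemctl is-enabled mariadb
enabled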
7. Configure the database. Log in to mysql, create a database named zabbix and a user named zabbix with a password of your choice, and grant the zabbix user all privileges on the zabbix database.
Submit the output of the show grants for 'zabbix'@'localhost'; command to the answer box. [2 points]
Standard: ALL&&PRIVILEGES
Solution:
# mysql -uroot -p
MariaDB [(none)]> create database zabbix character set utf8mb4 collate utf8mb4_general_ci;
MariaDB [(none)]> grant all privileges on zabbix.* to zabbix@localhost identified by 'password';
MariaDB [zabbix]> show grants for 'zabbix'@'localhost';
+---------------------------------------------------------------------------------------------------------------+
| Grants for zabbix@localhost                                                                                    |
+---------------------------------------------------------------------------------------------------------------+
| GRANT USAGE ON *.* TO 'zabbix'@'localhost' IDENTIFIED BY PASSWORD '*2470C0C06DEE42FD1618BB99005ADCA2EC9D1E19'  |
| GRANT ALL PRIVILEGES ON `zabbix`.* TO 'zabbix'@'localhost'
8. Import the database schema. Import the database schema and data from schema.sql, images.sql, and data.sql, in that order (the file order must not be changed).
Log in to the database and submit the output of the select username from users; command to the answer box (using the zabbix database). [2 points]
Standard: Admin&&guest
Solution:
[root@zabbix_server ~]# mysql -uroot -ppassword zabbix < /usr/share/zabbix-mysql/schema.sql
[root@zabbix_server ~]# mysql -uroot -ppassword zabbix < /usr/share/zabbix-mysql/images.sql
[root@zabbix_server ~]# mysql -uroot -ppassword zabbix < /usr/share/zabbix-mysql/data.sql
[root@zabbix_server ~]# mysql -uzabbix -p
Enter password:
MariaDB [(none)]> use zabbix;
MariaDB [zabbix]> select username from users;
+----------+
| username |
+----------+
| Admin    |
| guest    |
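A hedged way to confirm the import completed is to count the tables; a Zabbix 6.0 schema creates well over a hundred of them, though the exact number varies by minor version (the password is the one chosen in step 7):
# mysql -uzabbix -ppassword zabbix -e 'show tables;' | wc -l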
9. Configuration file. Configure default.conf.
Submit the output of the cat /etc/nginx/conf.d/default.conf command to the answer box. [2 points]
Standard: index.php
Solution:
vim /etc/nginx/conf.d/default.conf
Change the following lines:
    root   /usr/share/zabbix/;
    index  index.php index.html index.htm;
# cat /etc/nginx/conf.d/default.conf
server {
    listen       80;
    server_name  localhost;

    #access_log  /var/log/nginx/host.access.log  main;

    location / {
        root   /usr/share/zabbix/;
        index  index.php index.html index.htm;
    }

    #error_page  404              /404.html;

    # redirect server error pages to the static page /50x.html
    #
    error_page   500 502 503 504  /50x.html;
    location = /50x.html {
        root   /usr/share/nginx/html;
    }

    # proxy the PHP scripts to Apache listening on 127.0.0.1:80
    #
    #location ~ \.php$ {
    #    proxy_pass   http://127.0.0.1;
    #}

    # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
    #
    location ~ \.php$ {
        root           /usr/share/zabbix;
        fastcgi_pass   127.0.0.1:9000;
        fastcgi_index  index.php;
        fastcgi_param  SCRIPT_FILENAME  $document_root$fastcgi_script_name;
        include        fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    #
    #location ~ /\.ht {
    #    deny  all;
    #}
}
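Before relying on the new server block, validate the syntax and reload nginx:
# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
# systemctl reload nginx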
10. Configuration files. Modify zabbix_server.conf (set the database password) and zabbix_agentd.conf (set the server IP, active server IP, and hostname), then restart the corresponding services so the changes take effect.
Submit the output of the cat /etc/zabbix_agentd.conf | grep -v '^#\|^$' command to the answer box. [2 points]
Standard: Server=192.168.100&&ServerActive=192.168.100
Solution:
[root@zabbix_server ~]# vim /etc/zabbix_server.conf
DBName=zabbix
DBUser=zabbix
DBPassword=password
[root@zabbix_server ~]# vim /etc/zabbix_agentd.conf
Server=192.168.100.91
ServerActive=192.168.100.91
Hostname=zabbix_server
[root@zabbix_server ~]# cat /etc/zabbix_agentd.conf | grep -v '^#\|^$'
PidFile=/run/zabbix/zabbix_agentd.pid
LogFile=/var/log/zabbix/zabbix_agentd.log
LogFileSize=0
Server=192.168.100.91
ServerActive=192.168.100.91
Hostname=zabbix_server
[root@zabbix_server ~]# systemctl restart zabbix-server
[root@zabbix_server ~]# systemctl restart zabbix-agent
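Agent connectivity can be spot-checked without the web UI. The item test below uses the agent binary's built-in -t mode against the config above; the server log path is the usual default and is an assumption if this package's layout differs:
# /usr/sbin/zabbix_agentd -c /etc/zabbix_agentd.conf -t agent.ping
agent.ping                                    [s|1]
# tail -n 5 /var/log/zabbix/zabbix_server.log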
11. Configuration file. Modify php.ini: set the maximum POST data size to 16M, the script execution time limit to 300, the maximum time PHP may spend receiving input to 300, and the timezone to Asia/Shanghai; then restart the relevant service.
Submit the output of the cat /etc/php.ini | grep -v '^#\|^$' command to the answer box. [2 points]
Standard: 16M&&300&&Asia/Shanghai
Solution:
[root@zabbix_server ~]# vim /etc/php.ini
post_max_size = 16M
max_execution_time = 300
max_input_time = 300
date.timezone = Asia/Shanghai
[root@zabbix_server ~]# systemctl restart php74-php-fpm
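Because the remi php74 build keeps its own ini (step 13 later chowns /etc/opt/remi/php74/php.ini), it is worth confirming which file the running interpreter loads and that the four values took effect:
# php74 --ini | head -2
# php74 -i | grep -E 'post_max_size|max_execution_time|max_input_time|date.timezone'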
12. Configuration file. Modify www.conf, setting both the user and the group to nginx.
Submit the output of the cat /etc/php-fpm.d/www.conf | grep -v '^;\|^$' command to the answer box. [2 points]
Standard: user&&nginx&&group
Solution:
[root@zabbix_server ~]# vim /etc/php-fpm.d/www.conf
user = nginx
group = nginx
[root@zabbix_server ~]# cat /etc/php-fpm.d/www.conf | grep -v '^;\|^$'
[www]
listen = 127.0.0.1:9000
listen.allowed_clients = 127.0.0.1
user = nginx
group = nginx
pm = dynamic
pm.max_children = 50
pm.start_servers = 5
pm.min_spare_servers = 5
pm.max_spare_servers = 35
slowlog = /var/log/php-fpm/www-slow.log
php_admin_value[error_log] = /var/log/php-fpm/www-error.log
php_admin_flag[log_errors] = on
php_value[session.save_handler] = files
php_value[session.save_path] = /var/lib/php/session
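After restarting php-fpm, check that the pool is listening on 9000 and that the workers run as nginx (php-fpm is the usual process name for this build; adjust if it differs):
# systemctl restart php74-php-fpm
# ss -tlnp | grep 9000
# ps -o user,comm -C php-fpm | head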
13. Configuration files. Modify zabbix.conf, setting both the user and the group to nginx, and change the owner and group of the directory containing index.php and of the php.ini file to nginx. Restart the relevant services; entering http://<public IP>/setup.php in a browser should then show the Zabbix 6.0 interface.
Submit the output of the curl http://<public IP>/setup.php command to the answer box. [2 points]
Standard: SIA&&favicon.ico&&msapplication-config
Solution:
vim /etc/php-fpm.d/zabbix.conf
[zabbix]
user = nginx
group = nginx
[root@zabbix_server ~]# chown -R nginx:nginx /usr/share/zabbix/
[root@zabbix_server ~]# chown -R nginx:nginx /etc/opt/remi/php74/php.ini
[root@zabbix_server ~]# chmod +x /usr/share/zabbix
[root@zabbix_server ~]# systemctl restart nginx
[root@zabbix_server ~]# systemctl restart zabbix-server
[root@zabbix_server ~]# systemctl restart zabbix-agent
[root@zabbix_server ~]# systemctl restart php74-php-fpm
[root@zabbix_server ~]# curl http://123.249.10.60/setup.php
<!DOCTYPE html>
<html lang="en">
<head>
	<meta http-equiv="X-UA-Compatible" content="IE=Edge"/>
	<meta charset="utf-8" />
	<meta name="viewport" content="width=device-width, initial-scale=1">
	<meta name="Author" content="Zabbix SIA" />
	<title>Installation</title>
	<link rel="icon" href="favicon.ico">
	<link rel="apple-touch-icon-precomposed" sizes="76x76" href="assets/img/apple-touch-icon-76x76-precomposed.png">
	<link rel="apple-touch-icon-precomposed" sizes="120x120" href="assets/img/apple-touch-icon-120x120-precomposed.png">
	<link rel="apple-touch-icon-precomposed" sizes="152x152" href="assets/img/apple-touch-icon-152x152-precomposed.png">
	<link rel="apple-touch-icon-precomposed" sizes="180x180" href="assets/img/apple-touch-icon-180x180-precomposed.png">
	<link rel="icon" sizes="192x192" href="assets/img/touch-icon-192x192.png">
	<meta name="csrf-token" content="5d4324e81318a310"/>
	<meta name="msapplication-TileImage" content="assets/img/ms-tile-144x144.png">
	<meta name="msapplication-TileColor" content="#d40000">
	<meta name="msapplication-config" content="none"/>
	<link rel="stylesheet" type="text/css" href="assets/styles/blue-theme.css?1675235994" />
	<script src="js/browsers.js?1674462826"></script>
	<script src="jsLoader.php?ver=6.0.13&lang=en_US"></script>
	<script src="jsLoader.php?ver=6.0.13&lang=en_US&files%5B0%5D=setup.js"></script>
</head>
<body><div class="wrapper"><main><form method="post" action="setup.php" accept-charset="utf-8" id="setup-form"><div class="setup-container"><div class="setup-left"><div class="setup-logo"><div class="zabbix-logo"></div></div><ul><li class="setup-left-current">Welcome</li><li>Check of pre-requisites</li><li>Configure DB connection</li><li>Settings</li><li>Pre-installation summary</li><li>Install</li></ul></div><div class="setup-right"><div class="setup-right-body"><div class="setup-title"><span>Welcome to</span>Zabbix 6.0</div><ul class="table-forms"><li><div class="table-forms-td-left"><label for="label-default-lang">Default language</label></div><div class="table-forms-td-right"><z-select id="default-lang" value="en_US" focusable-element-id="label-default-lang" autofocus="autofocus" name="default_lang" data-options="[{"value":"en_GB","label":"English (en_GB)"},{"value":"en_US","label":"English (en_US)"},{"value":"ca_ES","label":"Catalan (ca_ES)"},{"value":"zh_CN","label":"Chinese (zh_CN)"},{"value":"cs_CZ","label":"Czech (cs_CZ)"},{"value":"fr_FR","label":"French (fr_FR)"},{"value":"de_DE","label":"German (de_DE)"},{"value":"he_IL","label":"Hebrew (he_IL)"},{"value":"it_IT","label":"Italian (it_IT)"},{"value":"ko_KR","label":"Korean (ko_KR)"},{"value":"ja_JP","label":"Japanese (ja_JP)"},{"value":"nb_NO","label":"Norwegian (nb_NO)"},{"value":"pl_PL","label":"Polish (pl_PL)"},{"value":"pt_BR","label":"Portuguese (pt_BR)"},{"value":"pt_PT","label":"Portuguese (pt_PT)"},{"value":"ro_RO","label":"Romanian (ro_RO)"},{"value":"ru_RU","label":"Russian (ru_RU)"},{"value":"sk_SK","label":"Slovak (sk_SK)"},{"value":"tr_TR","label":"Turkish (tr_TR)"},{"value":"uk_UA","label":"Ukrainian (uk_UA)"},{"value":"vi_VN","label":"Vietnamese (vi_VN)"}]" tabindex="-1"></z-select></div></li></ul></div></div><div class="setup-footer"><div><button type="submit" id="next_1" name="next[1]" value="Next step">Next step</button><button type="submit" id="back_1" name="back[1]" value="Back" class="btn-alt float-left" disabled="disabled">Back</button></div></div></div></form><div class="signin-links">Licensed under <a target="_blank" rel="noopener noreferrer" class="grey link-alt" href="https://www.zabbix.com/license">GPL v2</a></div></main><footer role="contentinfo">Zabbix 6.0.13. © 2001–2023, <a class="grey link-alt" target="_blank" rel="noopener noreferrer" href="https://www.zabbix.com/">Zabbix SIA</a></footer></div></body></html>
14. Write the playbook. Pick any directory and create tasks and file subdirectories under it; place autoDeployment.tar, the prepared repo file, and zabbix_agentd.conf in the file directory; then write agent.yaml in the tasks directory so that the zabbix-agent service can be deployed remotely on the monitored host.
Submit the output of the cat agent.yaml command to the answer box. [4 points]
Standard: copy&&src&&dest&&yum&&name&&zabbix-agent&&state
Solution:
[root@zabbix_server opt]# cat agent.yaml
---
- hosts: agent
  become: true
  tasks:
    - name: Copy local.repo
      copy:
        src: local.repo
        dest: /etc/yum.repos.d/local.repo
    - name: Copy autoDeployment.tar
      copy:
        src: autoDeployment.tar
        dest: /opt
    - name: Unpack autoDeployment.tar       # full path: the shell module starts in the remote user's home
      shell:
        cmd: tar -xvf /opt/autoDeployment.tar -C /opt
    - name: Install Zabbix Agent            # install before copying its config so the zabbix user and /etc/zabbix exist
      yum:
        name: zabbix-agent
        state: present
    - name: Copy zabbix_agentd.conf file
      copy:
        src: zabbix_agentd.conf
        dest: /etc/zabbix/zabbix_agentd.conf
        owner: zabbix
        group: zabbix
        mode: '0644'
    - name: Start and enable Zabbix Agent
      service:
        name: zabbix-agent
        state: started
        enabled: true
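Ansible resolves a relative copy src against the playbook's own directory (and a files/ subdirectory next to it), not the shell's current directory, so with the tasks/ and file/ layout described in the task the three src values may need explicit paths such as src: ../file/local.repo. A hedged run-and-verify sequence, assuming a hypothetical parent directory /opt/deploy holding both subdirectories:
# cd /opt/deploy
# ansible-playbook tasks/agent.yaml
# ansible agent -m shell -a 'systemctl is-active zabbix-agent'
zabbix_agent | CHANGED | rc=0 >>
active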