Set 1

Private cloud operations tasks:

Task 3. OpenStack Cloud Platform Operations (10 points)

1. Using the provided cloud security framework components, harden the provided OpenStack platform's security policy by switching the dashboard from HTTP to HTTPS.

Note: mod_ssl may fail to install because of a version mismatch; if so, upload and install mod_ssl-2.4.6-97.el7.centos.x86_64.rpm.

① Install the required packages

yum install -y mod_wsgi httpd mod_ssl

② Edit the /etc/openstack-dashboard/local_settings file

vi /etc/openstack-dashboard/local_settings
## Add the following 4 lines below DEBUG = False
USE_SSL = True
CSRF_COOKIE_SECURE = True ## already in the file; just uncomment it
SESSION_COOKIE_SECURE = True ## already in the file; just uncomment it
SESSION_COOKIE_HTTPONLY = True
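The four edits above can also be applied non-interactively with sed. A minimal sketch against a sample file (the sample content here is illustrative, not the full local_settings):

```shell
# Build a small stand-in for local_settings (illustrative only)
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
DEBUG = False
#CSRF_COOKIE_SECURE = True
#SESSION_COOKIE_SECURE = True
EOF
# Uncomment the two settings that already exist in the file
sed -i 's/^#CSRF_COOKIE_SECURE = True/CSRF_COOKIE_SECURE = True/; s/^#SESSION_COOKIE_SECURE = True/SESSION_COOKIE_SECURE = True/' "$cfg"
# Append the two missing settings below DEBUG = False
sed -i -e '/^DEBUG = False/a USE_SSL = True' -e '/^DEBUG = False/a SESSION_COOKIE_HTTPONLY = True' "$cfg"
cat "$cfg"
```

Against the real file, replace `$cfg` with /etc/openstack-dashboard/local_settings.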

③ Edit the /etc/httpd/conf.d/ssl.conf configuration file

vi /etc/httpd/conf.d/ssl.conf
## Change "SSLProtocol all -SSLv2 -SSLv3" to:
SSLProtocol all -SSLv2

④ Restart the services

systemctl restart httpd
systemctl restart memcached
⑤ Access the Web UI and verify that the dashboard is served over HTTPS

2. On the provided OpenStack platform, tune the platform by modifying the relevant parameters. The required tuning operations are:

(1) Set the memory overcommit ratio to 1.5;

(2) Set the nova service heartbeat check time to 120 seconds.

vi /etc/nova/nova.conf
## Both options go in the [DEFAULT] section
ram_allocation_ratio = 1.5
service_down_time = 120

3. On the provided OpenStack platform, using the Swift object storage service, modify the relevant configuration files so that Swift serves as the backend store for the glance image service.

① Edit the configuration file

vi /etc/glance/glance-api.conf

[glance_store]
stores=glance.store.filesystem.Store,glance.store.swift.Store,glance.store.http.Store
default_store=swift
swift_store_region=RegionOne
swift_store_endpoint_type=internalURL
swift_store_container=glance
swift_store_large_object_size=5120
swift_store_large_object_chunk_size=200
swift_store_create_container_on_put=True
swift_store_multi_tenant=True
swift_store_admin_tenants=service
swift_store_auth_address=http://controller:5000/v3
swift_store_user=glance
swift_store_key=000000

Restart all glance components

systemctl restart openstack-glance-*

4. On the provided OpenStack platform, write a heat template file createvm.yml that creates a cloud instance according to the requirements.

cat createvm.yml
heat_template_version: 2018-08-31
resources:
  server1:
    type: OS::Nova::Server
    properties:
      name: mytest1
      image: "centos7.5"
      flavor: "small"
      networks:
        - network: "intnet"

outputs:
  server_names:
    value: { get_attr: [ server1, name ] }


openstack stack create -t createvm.yml vm

5. On the provided OpenStack platform, expand the cinder storage space by 10G.

[root@compute ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 200G 0 disk
├─sda1 8:1 0 1G 0 part /boot
└─sda2 8:2 0 19G 0 part
├─centos-root 253:0 0 17G 0 lvm /
└─centos-swap 253:1 0 2G 0 lvm [SWAP]
sdb 8:16 0 50G 0 disk
├─sdb1 8:17 0 10G 0 part
│ ├─cinder--volumes-cinder--volumes--pool_tmeta 253:2 0 12M 0 lvm
│ │ └─cinder--volumes-cinder--volumes--pool 253:4 0 9.5G 0 lvm
│ └─cinder--volumes-cinder--volumes--pool_tdata 253:3 0 9.5G 0 lvm
│ └─cinder--volumes-cinder--volumes--pool 253:4 0 9.5G 0 lvm
├─sdb2 8:18 0 10G 0 part /swift/node/sdb2
├─sdb3 8:19 0 10G 0 part
└─sdb4 8:20 0 5G 0 part
sr0 11:0 1 4.4G 0 rom
[root@compute ~]#
[root@compute ~]# vgdisplay
--- Volume group ---
VG Name cinder-volumes
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 4
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 1
Act PV 1
VG Size <10.00 GiB
PE Size 4.00 MiB
Total PE 2559
Alloc PE / Size 2438 / 9.52 GiB
Free PE / Size 121 / 484.00 MiB
VG UUID 3k0yKg-iQB2-b2CM-a0z2-2ddJ-cdG3-8WpyrG

--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <19.00 GiB
PE Size 4.00 MiB
Total PE 4863
Alloc PE / Size 4863 / <19.00 GiB
Free PE / Size 0 / 0
VG UUID acAXNK-eqKm-qs9b-ly3T-R3Sh-8qyv-nELNWv
[root@compute ~]# vgextend cinder-volumes /dev/sdb4
Volume group "cinder-volumes" successfully extended
[root@compute ~]# vgdisplay
--- Volume group ---
VG Name cinder-volumes
System ID
Format lvm2
Metadata Areas 2
Metadata Sequence No 5
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 1
Open LV 0
Max PV 0
Cur PV 2
Act PV 2
VG Size 14.99 GiB
PE Size 4.00 MiB
Total PE 3838
Alloc PE / Size 2438 / 9.52 GiB
Free PE / Size 1400 / <5.47 GiB
VG UUID 3k0yKg-iQB2-b2CM-a0z2-2ddJ-cdG3-8WpyrG

--- Volume group ---
VG Name centos
System ID
Format lvm2
Metadata Areas 1
Metadata Sequence No 3
VG Access read/write
VG Status resizable
MAX LV 0
Cur LV 2
Open LV 2
Max PV 0
Cur PV 1
Act PV 1
VG Size <19.00 GiB
PE Size 4.00 MiB
Total PE 4863
Alloc PE / Size 4863 / <19.00 GiB
Free PE / Size 0 / 0
VG UUID acAXNK-eqKm-qs9b-ly3T-R3Sh-8qyv-nELNWv

Note: /dev/sdb4 is only 5G, so the VG here only grows by about 5 GiB; to meet the 10G requirement, extend with the free 10G partition instead (vgextend cinder-volumes /dev/sdb3).

6. On the OpenStack private cloud platform, create a cloud instance and, using the provided software packages, write a one-click deployment script that deploys the gpmall e-commerce application.

Manual deployment

Ⅰ. Environment configuration

① Set the hostname

hostnamectl set-hostname mall

② Configure host mapping in /etc/hosts

vi /etc/hosts
192.168.200.103 mall

③ Disable the firewall and SELinux

# Disable the firewall
systemctl stop firewalld && systemctl disable firewalld
# Disable SELinux
setenforce 0
sed -i 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/selinux/config

④ Configure local repositories

# Upload the matching ISO image and gpmall-repo
mkdir /opt/centos
mount CentOS-7-x86_64-DVD-1804.iso /opt/centos/

mv /etc/yum.repos.d/* /etc/yum
vi /etc/yum.repos.d/local.repo

[centos]
name=centos
baseurl=file:///opt/centos
gpgcheck=0
enabled=1
[gpmall-mall]
name=gpmall-mall
baseurl=file:///root/gpmall-repo
gpgcheck=0
enabled=1

Ⅱ. Install the application's base services

① Install the Java environment

yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
[root@mall ~]# java -version
openjdk version "1.8.0_222"
OpenJDK Runtime Environment (build 1.8.0_222-b10)
OpenJDK 64-Bit Server VM (build 25.222-b10, mixed mode)

② Install redis, nginx, and mariadb

yum install redis nginx mariadb mariadb-server -y

③ Install ZooKeeper

# Upload the archive and extract it
tar -zxvf zookeeper-3.4.14.tar.gz -C /opt/
# Create the config from the sample, then start ZooKeeper from the bin directory
cd /opt/zookeeper-3.4.14/conf
mv zoo_sample.cfg zoo.cfg
cd ../bin
./zkServer.sh start
# Check status
./zkServer.sh status

④ Install Kafka

# Upload the provided kafka_2.11-1.1.1.tgz to /opt on the server and extract it
tar -zxvf kafka_2.11-1.1.1.tgz -C /opt/
# Enter kafka_2.11-1.1.1/bin and start the Kafka service
cd /opt/kafka_2.11-1.1.1/bin/
./kafka-server-start.sh -daemon ../config/server.properties
# Verify it started
[root@mall ~]# jps
11371 QuorumPeerMain
11692 Kafka
13183 Jps

Ⅲ. Modify service configuration

① Configure the mariadb service

# Edit the /etc/my.cnf configuration file
vi /etc/my.cnf
[mysqld]
port=8066 # only if using the Project 4 package set
init_connect='SET collation_connection = utf8_unicode_ci'
init_connect='SET NAMES utf8'
character-set-server=utf8
collation-server=utf8_unicode_ci
skip-character-set-client-handshake
# Start the database service
systemctl start mariadb
# Set the root password
mysqladmin -uroot password 123456
# Grant privileges
mysql -uroot -p123456
-> grant all privileges on *.* to root@localhost identified by '123456' with grant option;
-> grant all privileges on *.* to root@"%" identified by '123456' with grant option;
# Upload gpmall.sql to /root
# Create database gpmall and import the gpmall.sql file
-> create database gpmall;
-> use gpmall;
-> source /root/gpmall.sql
-> exit

② Configure the redis service

# Edit the Redis configuration file /etc/redis.conf
vi /etc/redis.conf
# Comment out the line "bind 127.0.0.1" (around line 61) by prefixing it with #
# Change "protected-mode yes" (around line 80) to "protected-mode no"
# Start and enable the service
systemctl start redis
systemctl enable redis

③ Configure nginx

# Start and enable the service
systemctl start nginx
systemctl enable nginx

Ⅳ. Modify global variables and deploy

# Adjust /etc/hosts according to the errors reported by the jar services
vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.200.103 mall
192.168.200.103 kafka1.mall
127.0.0.1 mysql.mall
192.168.200.103 redis.mall
192.168.200.103 zk1.mall
# Deploy the frontend
rm -rf /usr/share/nginx/html/*
cp -rvf /root/dist/* /usr/share/nginx/html/
vi /etc/nginx/conf.d/default.conf
# Mind the alignment of the location blocks
location /user {
proxy_pass http://127.0.0.1:8082;
}
location /shopping {
proxy_pass http://127.0.0.1:8081;
}
location /cashier {
proxy_pass http://127.0.0.1:8083;
}
# Restart the Nginx service
systemctl restart nginx
# Deploy the backend jars
nohup java -jar shopping-provider-0.0.1-SNAPSHOT.jar &
nohup java -jar user-provider-0.0.1-SNAPSHOT.jar &
nohup java -jar gpmall-shopping-0.0.1-SNAPSHOT.jar &
nohup java -jar gpmall-user-0.0.1-SNAPSHOT.jar &
# Verify job status
jobs
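The task asks for a one-click script, while the steps above are manual. A skeleton that strings the manual steps together is sketched below; it is only syntax-checked here, since actually running it needs the gpmall packages and a CentOS instance (the paths and the 192.168.200.103 address are assumptions carried over from the steps above):

```shell
# Write the one-click deployment skeleton (abridged: repo setup and config
# edits such as my.cnf/redis.conf/nginx follow the manual steps above)
cat > gpmall_deploy.sh <<'EOF'
#!/bin/bash
set -e
hostnamectl set-hostname mall
grep -q 'mall' /etc/hosts || echo '192.168.200.103 mall' >> /etc/hosts
systemctl stop firewalld && systemctl disable firewalld
setenforce 0 || true
yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel redis nginx mariadb mariadb-server
tar -zxf zookeeper-3.4.14.tar.gz -C /opt/
mv /opt/zookeeper-3.4.14/conf/zoo_sample.cfg /opt/zookeeper-3.4.14/conf/zoo.cfg
/opt/zookeeper-3.4.14/bin/zkServer.sh start
tar -zxf kafka_2.11-1.1.1.tgz -C /opt/
/opt/kafka_2.11-1.1.1/bin/kafka-server-start.sh -daemon /opt/kafka_2.11-1.1.1/config/server.properties
systemctl start mariadb redis nginx
mysqladmin -uroot password 123456
mysql -uroot -p123456 -e "create database gpmall; use gpmall; source /root/gpmall.sql;"
rm -rf /usr/share/nginx/html/*
cp -rf /root/dist/* /usr/share/nginx/html/
systemctl restart nginx
for jar in shopping-provider user-provider gpmall-shopping gpmall-user; do
  nohup java -jar /root/${jar}-0.0.1-SNAPSHOT.jar &
done
EOF
bash -n gpmall_deploy.sh && echo "syntax OK"
```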

7. Use the manila shared file system service so that manila provides unified management of shared file systems in a multi-tenant cloud environment.

Create the default_share_type share type with the manila command

[root@controller ~]# source /etc/keystone/admin-openrc.sh 
[root@controller ~]# manila type-create default_share_type False
+----------------------+--------------------------------------+
| Property | Value |
+----------------------+--------------------------------------+
| required_extra_specs | driver_handles_share_servers : False |
| Name | default_share_type |
| Visibility | public |
| is_default | YES |
| ID | 0fec7bca-f1a1-4e92-8d4b-aaf02147571a |
| optional_extra_specs | |
| Description | None |
+----------------------+--------------------------------------+

# List the share types
[root@controller ~]# manila type-list
+--------------------------------------+--------------------+------------+------------+--------------------------------------+----------------------+-------------+
| ID | Name | visibility | is_default | required_extra_specs | optional_extra_specs | Description |
+--------------------------------------+--------------------+------------+------------+--------------------------------------+----------------------+-------------+
| 0fec7bca-f1a1-4e92-8d4b-aaf02147571a | default_share_type | public | YES | driver_handles_share_servers : False | | None |
+--------------------------------------+--------------------+------------+------------+--------------------------------------+----------------------+-------------+

Create the shared directory

# Create a 2 GB shared directory named share-test
[root@controller ~]# manila create NFS 2 --name share-test
+---------------------------------------+--------------------------------------+
| Property | Value |
+---------------------------------------+--------------------------------------+
| status | creating |
| share_type_name | default_share_type |
| description | None |
| availability_zone | None |
| share_network_id | None |
| share_server_id | None |
| share_group_id | None |
| host | |
| revert_to_snapshot_support | False |
| access_rules_status | active |
| snapshot_id | None |
| create_share_from_snapshot_support | False |
| is_public | False |
| task_state | None |
| snapshot_support | False |
| id | a4b2a4f1-421f-4de3-8fca-d2ee8a5f4bb9 |
| size | 2 |
| source_share_group_snapshot_member_id | None |
| user_id | 89f8027475294689ae6c0183fa35bf5a |
| name | share-test |
| share_type | 0fec7bca-f1a1-4e92-8d4b-aaf02147571a |
| has_replicas | False |
| replication_type | None |
| created_at | 2022-05-06T11:24:02.000000 |
| share_proto | NFS |
| mount_snapshot_support | False |
| project_id | 0b6f2d0be1d342e09edc31dc841db7a5 |
| metadata | {} |
+---------------------------------------+--------------------------------------+

# List the created shares
[root@controller ~]# manila list
+--------------------------------------+------------+------+-------------+-----------+-----------+--------------------+--------------------------------+-------------------+
| ID | Name | Size | Share Proto | Status | Is Public | Share Type Name | Host | Availability Zone |
+--------------------------------------+------------+------+-------------+-----------+-----------+--------------------+--------------------------------+-------------------+
| a4b2a4f1-421f-4de3-8fca-d2ee8a5f4bb9 | share-test | 2 | NFS | available | False | default_share_type | controller@lvm#lvm-single-pool | nova |
+--------------------------------------+------------+------+-------------+-----------+-----------+--------------------+--------------------------------+-------------------+
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
# Use manila to grant the OpenStack management subnet (10.24.195.0/24 here) rw access to share-test
[root@controller ~]# manila access-allow share-test ip 10.24.195.0/24 --access-level rw
+--------------+--------------------------------------+
| Property | Value |
+--------------+--------------------------------------+
| access_key | None |
| share_id | a4b2a4f1-421f-4de3-8fca-d2ee8a5f4bb9 |
| created_at | 2022-05-06T11:27:19.000000 |
| updated_at | None |
| access_type | ip |
| access_to | 10.24.195.0/24 |
| access_level | rw |
| state | queued_to_apply |
| id | 9813f7f2-d15f-46cf-ad2d-062ce6ce3264 |
| metadata | {} |
+--------------+--------------------------------------+
# View share-test access rules and the allowed subnet
[root@controller ~]# manila access-list share-test
+--------------------------------------+-------------+----------------+--------------+--------+------------+----------------------------+------------+
| id | access_type | access_to | access_level | state | access_key | created_at | updated_at |
+--------------------------------------+-------------+----------------+--------------+--------+------------+----------------------------+------------+
| 9813f7f2-d15f-46cf-ad2d-062ce6ce3264 | ip | 10.24.195.0/24 | rw | active | None | 2022-05-06T11:27:19.000000 | None |
+--------------------------------------+-------------+----------------+--------------+--------+------------+----------------------------+------------+
# Get the access path of the share-test share
[root@controller ~]# manila show share-test | grep path | cut -d'|' -f3
path = 127.0.0.1:/var/lib/manila/mnt/share-55f94a46-9ac0-4b7e-8981-d83ac6fce8d7
# On the OpenStack controller node, mount the share-test share under /mnt (use the export path returned above)
[root@controller ~]# mount -t nfs 172.30.17.5:/var/lib/manila/mnt/share-c3f5a9fc-a8e7-40a6-a43b-56cfd1738724 /mnt/

# Check the mount
[root@controller ~]# df -th
Filesystem Type Size Used Avail Use% Mounted on
devtmpfs devtmpfs 5.8G 0 5.8G 0% /dev
tmpfs tmpfs 5.8G 68K 5.8G 1% /dev/shm
tmpfs tmpfs 5.8G 592M 5.3G 10% /run
tmpfs tmpfs 5.8G 0 5.8G 0% /sys/fs/cgroup
/dev/vda1 xfs 50G 8.1G 42G 17% /
tmpfs tmpfs 1.2G 0 1.2G 0% /run/user/0
172.30.17.5:/var/lib/manila/mnt/share-c3f5a9fc-a8e7-40a6-a43b-56cfd1738724 nfs4 2.0G 6.0M 1.8G 1% /mnt

Set 2

Private cloud operations tasks:

1. On the self-built OpenStack platform, log in to the database, create a database test, create a table company in it (with the structure (id int not null primary key, name varchar(50), addr varchar(255))), and insert one row (1, "alibaba", "china").

# Log in to the database
[root@controller ~]# mysql -uroot -p
Enter password:
MariaDB [(none)]> create database test;
Query OK, 1 row affected (0.000 sec)

MariaDB [(none)]> use test;
Database changed

MariaDB [test]> create table company(id int not null primary key,name varchar(50),addr varchar(255));
Query OK, 0 rows affected (0.003 sec)

MariaDB [test]> insert into company values (1,"alibaba","china");
Query OK, 1 row affected (0.001 sec)

MariaDB [test]> select * from company;
+----+---------+-------+
| id | name | addr |
+----+---------+-------+
| 1 | alibaba | china |
+----+---------+-------+
1 row in set (0.000 sec)


2. OpenStack services communicate internally over RPC, and every agent connects to RabbitMQ; as the number of agents grows, so does the number of MQ connections, which can eventually hit the limit and become a bottleneck. On the provided OpenStack private cloud platform, modify the limits.conf configuration file to raise the RabbitMQ service's maximum number of connections to 10240.

[root@controller ~]# vi /etc/security/limits.conf
openstack soft nofile 10240
openstack hard nofile 10240
# Append the two lines above at the end of the configuration file, then save and exit
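Each limits.conf entry has the format `<domain> <type> <item> <value>`. A quick format check of the two lines over a sample file (built here for illustration, not the real /etc/security/limits.conf):

```shell
lim=$(mktemp)
printf '%s\n' 'openstack soft nofile 10240' 'openstack hard nofile 10240' > "$lim"
# Print domain, type, and value for every nofile entry
awk 'NF==4 && $3=="nofile" {print $1, $2, $4}' "$lim"
```

Both lines should be echoed back; `ulimit -n` reflects the new limit after the affected user logs in again.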

3. On the provided OpenStack private cloud platform, write a Heat template create_user.yaml under /root that creates a user named heat-user belonging to the admin project, grants heat-user the admin role, and sets the user's password to 123456.

[root@controller ~]# cat create_user.yaml
heat_template_version: 2018-08-31
resources:
  user:
    type: OS::Keystone::User
    properties:
      name: heat-user
      password: "123456"
      domain: demo
      default_project: admin
      roles: [{"role": "admin", "project": "admin"}]
[root@controller ~]# openstack stack create -t create_user.yaml heat-user
+---------------------+--------------------------------------+
| Field | Value |
+---------------------+--------------------------------------+
| id | f5bbca42-7962-49ce-b8e7-2b772544a920 |
| stack_name | heat-user |
| description | No description |
| creation_time | 2022-10-25T07:53:28Z |
| updated_time | None |
| stack_status | CREATE_IN_PROGRESS |
| stack_status_reason | Stack CREATE started |
+---------------------+--------------------------------------+

4. On the provided OpenStack private cloud platform, use the cirros-0.3.4-x86_64-disk.img image to create an image named Gmirror1, requiring a minimum disk of 30G and a minimum memory of 2048M to boot it.

[root@controller ~]# openstack image create --disk-format qcow2 --container-format bare --min-disk 30 --min-ram 2048 --file ./cirros-0.3.4-x86_64-disk.img Gmirror1
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| Field | Value |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+
| checksum | ee1eca47dc88f4879d8a229cc70a07c6 |
| container_format | bare |
| created_at | 2022-10-25T07:57:28Z |
| disk_format | qcow2 |
| file | /v2/images/1acb0e45-eefa-4e64-aaed-b3f4e3d85c02/file |
| id | 1acb0e45-eefa-4e64-aaed-b3f4e3d85c02 |
| min_disk | 30 |
| min_ram | 2048 |
| name | Gmirror1 |
| owner | ef3705db528144cc9a33f8ace06d6d3b |
| properties | os_hash_algo='sha512', os_hash_value='1b03ca1bc3fafe448b90583c12f367949f8b0e665685979d95b004e48574b953316799e23240f4f739d1b5eb4c4ca24d38fdc6f4f9d8247a2bc64db25d6bbdb2', os_hidden='False' |
| protected | False |
| schema | /v2/schemas/image |
| size | 13287936 |
| status | active |
| tags | |
| updated_at | 2022-10-25T07:57:29Z |
| virtual_size | None |
| visibility | shared |
+------------------+--------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------+

5. On the provided OpenStack private cloud platform, install the Swift service yourself, create a container named chinaskill, upload the cirros-0.3.4-x86_64-disk.img image to the chinaskill container, and store it in segments of 10M each.

[root@controller ~]# swift post chinaskill
[root@controller ~]# swift upload chinaskill -S 10000000 cirros-0.3.4-x86_64-disk.img
cirros-0.3.4-x86_64-disk.img segment 1
cirros-0.3.4-x86_64-disk.img segment 0
cirros-0.3.4-x86_64-disk.img
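`-S` takes the segment size in bytes. The value used above, 10000000, is 10 MB in decimal; if the requirement means 10 MiB, the exact byte count would be:

```shell
# 10 MiB expressed in bytes
echo $((10 * 1024 * 1024))  # 10485760
```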

6. Using the OpenStack private cloud platform, create two cloud instances vm1 and vm2, install database services on both, and configure them as a master-slave pair, with vm1 as the master and vm2 as the slave (set the database password to 000000).

vm1 192.168.200.101 (master)
vm2 192.168.200.102 (slave)

① Add host mappings (and disable the firewall)

vi /etc/hosts
192.168.200.101 vm1
192.168.200.102 vm2

② Install and configure mariadb

yum -y install mariadb mariadb-server

# Start mariadb
systemctl start mariadb
# Initial configuration
mysql_secure_installation
# Prompt answers: y 000000 000000 y n y y

③ Configure the master node

vi /etc/my.cnf

log-bin=mysql-bin
binlog_ignore_db=mysql
server-id=101 # every server needs a unique server-id; a common convention is the last octet of its IP address

# Restart the service
systemctl restart mariadb
# Log in to the database
mysql -uroot -p000000
# Grant replication privileges
-> grant replication slave on *.* to 'root'@'%' identified by '000000';
-> grant replication slave on *.* to 'user'@'%' identified by '000000';

④ Configure the slave node

vi /etc/my.cnf
server-id=102
log-bin=mysql-bin

# Restart the service
systemctl restart mariadb
# Log in and configure replication
mysql -uroot -p000000
-> change master to master_host='vm1',master_user='user',master_password='000000';
-> start slave;
-> show slave status\G
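Replication is healthy when `show slave status` reports both Slave_IO_Running and Slave_SQL_Running as Yes. A sketch of extracting those two fields (the status text is hardcoded sample output here, not taken from a live server):

```shell
# Sample SHOW SLAVE STATUS output (illustrative)
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes'
# Extract the value of each *_Running field
echo "$status" | awk -F': ' '/_Running/ {print $2}'
```

On a real slave, pipe `mysql -uroot -p000000 -e 'show slave status\G'` into the awk instead.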

9. Use the cloudkitty billing service to process billing data for instances (compute), volumes (volume), images (image), network traffic (network.bw.in, network.bw.out), and floating IPs (network.floating), and create rating rules for cost accounting.

Instance type pricing

# Create the instance service instance_test with the hashmap rating module
[root@controller ~]# openstack rating hashmap service create instance_test
+---------------+--------------------------------------+
| Name | Service ID |
+---------------+--------------------------------------+
| instance_test | cf8029bf-dc35-4e40-b8fd-5af4a4d25a30 |
+---------------+--------------------------------------+
# Create a field named flavor_name
[root@controller ~]# openstack rating hashmap field create cf8029bf-dc35-4e40-b8fd-5af4a4d25a30 flavor_name
+-------------+--------------------------------------+--------------------------------------+
| Name | Field ID | Service ID |
+-------------+--------------------------------------+--------------------------------------+
| flavor_name | b2f0d485-df20-4f2e-bd44-d3696971cb8f | cf8029bf-dc35-4e40-b8fd-5af4a4d25a30 |
+-------------+--------------------------------------+--------------------------------------+
# Set the price of an m1.small instance to 1 yuan
[root@controller ~]# openstack rating hashmap mapping create --field-id b2f0d485-df20-4f2e-bd44-d3696971cb8f -t flat --value m1.small 1
+--------------------------------------+----------+------------+------+--------------------------------------+------------+----------+------------+
| Mapping ID                           | Value    | Cost       | Type | Field ID                             | Service ID | Group ID | Project ID |
+--------------------------------------+----------+------------+------+--------------------------------------+------------+----------+------------+
| c1b7d4db-c1d2-4488-ac46-1a8eb70d76e4 | m1.small | 1.00000000 | flat | b2f0d485-df20-4f2e-bd44-d3696971cb8f | None       | None     | None       |
+--------------------------------------+----------+------------+------+--------------------------------------+------------+----------+------------+

Volume service pricing

# Create the volume pricing service volume_size
[root@controller ~]# openstack rating hashmap service create volume_size
+-------------+--------------------------------------+
| Name | Service ID |
+-------------+--------------------------------------+
| volume_size | 6bd25052-eb27-49b1-ad68-aab723059a95 |
+-------------+--------------------------------------+
# Set the price to 1.2 yuan
[root@controller ~]# openstack rating hashmap mapping create -s 6bd25052-eb27-49b1-ad68-aab723059a95 -t flat 1.2
+--------------------------------------+-------+------------+------+----------+--------------------------------------+----------+------------+
| Mapping ID | Value | Cost | Type | Field ID | Service ID | Group ID | Project ID |
+--------------------------------------+-------+------------+------+----------+--------------------------------------+----------+------------+
| bd57621f-523b-43f2-89fb-2ea07fd04fac | None | 1.20000000 | flat | None | 6bd25052-eb27-49b1-ad68-aab723059a95 | None | None |
+--------------------------------------+-------+------------+------+----------+--------------------------------------+----------+------------+

Image service pricing

# Create the image pricing service image_size_test
[root@controller ~]# openstack rating hashmap service create image_size_test
+-----------------+--------------------------------------+
| Name | Service ID |
+-----------------+--------------------------------------+
| image_size_test | 80a098cf-d793-47cf-b63e-df6cbd56e88d |
+-----------------+--------------------------------------+
# Set the unit price of this service to 0.8 yuan
[root@controller ~]# openstack rating hashmap mapping create -s 80a098cf-d793-47cf-b63e-df6cbd56e88d -t flat 0.8
+--------------------------------------+-------+------------+------+----------+--------------------------------------+----------+------------+
| Mapping ID                           | Value | Cost       | Type | Field ID | Service ID                           | Group ID | Project ID |
+--------------------------------------+-------+------------+------+----------+--------------------------------------+----------+------------+
| 64952e70-6e37-4c8a-9d3a-b4c70de1fb87 | None  | 0.80000000 | flat | None     | 80a098cf-d793-47cf-b63e-df6cbd56e88d | None     | None       |
+--------------------------------------+-------+------------+------+----------+--------------------------------------+----------+------------+

Set 3

OpenStack Cloud Platform Operations

2. On the provided OpenStack platform, tune the platform by modifying the relevant parameters. The required tuning operations are:

(1) Reserve the first 2 physical CPUs and give all remaining CPUs to virtual machines (assume 16 vCPUs);

(2) Set the CPU overcommit ratio to 4.

vi /etc/nova/nova.conf
## Which option applies depends on the Nova release
## (newer releases deprecate vcpu_pin_set in favor of cpu_shared_set / cpu_dedicated_set)
vcpu_pin_set = 2-15
# equivalent on newer releases:
# cpu_shared_set = 2-15

cpu_allocation_ratio = 4.0

3. On the provided OpenStack platform, adjust the memcached service to increase its cache from 64MB to 256MB.

vi /etc/sysconfig/memcached
CACHESIZE="256"

4. On the provided OpenStack platform, write a heat template file createnet.yml that creates a network and a subnet according to the requirements.

[root@controller ~]# cat createnet.yml
heat_template_version: 2018-08-31
resources:
  net1:
    type: OS::Neutron::Net
    properties:
      name: net1
  net-subnet:
    type: OS::Neutron::Subnet
    properties:
      cidr: 10.1.0.0/24
      name: net1-subnet
      enable_dhcp: true
      gateway_ip: 10.1.0.2
      allocation_pools:
        - start: 10.1.0.100
          end: 10.1.0.200
      network: {get_resource: net1}


# Create the network
openstack stack create -t createnet.yml net

5. On the provided OpenStack private cloud platform, modify regular users' permissions so that regular users cannot create or delete images.

vi /etc/glance/policy.json
"add_image": "role:admin",
"delete_image": "role:admin",
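policy.json must remain valid JSON after the edit (note that a trailing comma after the last rule breaks it). A minimal sketch of the shape, validated with python3 (this fragment is illustrative; the real policy.json contains many more rules):

```shell
# Minimal illustrative policy fragment
cat > /tmp/policy_demo.json <<'EOF'
{
    "add_image": "role:admin",
    "delete_image": "role:admin"
}
EOF
# A parse error here would mean the edit broke the file
python3 -m json.tool /tmp/policy_demo.json > /dev/null && echo "valid JSON"
```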

9. On the OpenStack private cloud platform, create a cloud instance and write a script that performs periodic database backups, storing the backup files under the /opt directory.

#!/bin/bash
# Backup destination (the task requires backups under /opt)
BACKUP=/opt/mysql-backup
# Timestamp for this backup run
DATETIME=$(date +%Y_%m_%d_%H%M%S)
#echo "$DATETIME"
echo "====start backup to $BACKUP/$DATETIME/$DATETIME.tar.gz====="
# Database host
HOST=rabbitmq
DB_USER=xy
DB_PWD=000000
# Database to back up
DATABASE=xy
# Create the backup path if it does not exist
[ ! -d "$BACKUP/$DATETIME" ] && mkdir -p "$BACKUP/$DATETIME"
# Dump the database
mysqldump -u${DB_USER} -p${DB_PWD} --host=$HOST $DATABASE | gzip > $BACKUP/$DATETIME/$DATETIME.sql.gz
# Archive the backup
cd $BACKUP
tar -zcvf $DATETIME.tar.gz $DATETIME
# Remove the temporary directory
rm -rf $BACKUP/$DATETIME
# Delete backups older than 1 day:
# find *.tar.gz files under $BACKUP with mtime older than 1 day; -exec runs the command on each match
find $BACKUP -mtime +1 -name "*.tar.gz" -exec rm -rf {} \;
echo "============backup success============"
# Add a cron job so the script runs daily at 08:30
crontab -e
30 8 * * * /root/mysql_backup.sh
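The retention line (`find ... -mtime +1 ... -exec rm`) can be exercised safely in a scratch directory; a sketch that fakes an old backup with `touch -d` (GNU coreutils assumed):

```shell
tmp=$(mktemp -d)
touch "$tmp/new.tar.gz"
# Backdate one file so it looks older than the 1-day retention window
touch -d '2 days ago' "$tmp/old.tar.gz"
# Same retention rule as the script: remove *.tar.gz older than 1 day
find "$tmp" -mtime +1 -name '*.tar.gz' -exec rm -f {} \;
ls "$tmp"  # only new.tar.gz remains
```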