Ceph Block Storage and Object Storage

1 Case 1: Block Storage Application
1.1 Problem

Building on the Day04 exercises, demonstrate how block storage is used with KVM virtualization. The goals are:
Create block storage images in Ceph
Install the Ceph client software on the client
Deploy virtual machines on the client
Create a libvirt secret on the client
Edit the virtual machine's configuration file so it uses Ceph storage
1.2 Approach

Create images in Ceph storage.
Have KVM virtual machines use the Ceph images as their disks.
1.3 Steps

Follow the steps below to implement this case.
1) Create the disk images.
[root@node1 ~]# rbd create vm1-image --image-feature layering --size 10G
[root@node1 ~]# rbd create vm2-image --image-feature layering --size 10G
[root@node1 ~]# rbd list
[root@node1 ~]# rbd info vm1-image
[root@node1 ~]# qemu-img info rbd:rbd/vm1-image
image: rbd:rbd/vm1-image
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: unavailable
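For reference, the 10737418240 bytes reported by qemu-img above is simply 10G in binary units:

```python
# qemu-img reports sizes in binary units: 1G = 2**30 bytes,
# so a 10G image is 10 * 2**30 = 10737418240 bytes.
GIB = 1024 ** 3
virtual_size = 10 * GIB
print(virtual_size)  # 10737418240
```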
2) Ceph authentication accounts.
Ceph enables user authentication by default, so clients need an account to access the cluster.
The default account is named client.admin, and "key" is that account's secret key.
New accounts can be added with "ceph auth" (this case uses the default account).
[root@node1 ~]# cat /etc/ceph/ceph.conf        //configuration file
[global]
mon_initial_members = node1, node2, node3
mon_host = 192.168.4.11,192.168.4.12,192.168.4.13
auth_cluster_required = cephx        //authentication enabled
auth_service_required = cephx        //authentication enabled
auth_client_required = cephx        //authentication enabled
[root@node1 ~]# cat /etc/ceph/ceph.client.admin.keyring        //account file
[client.admin]
key = AQBTsdRapUxBKRAANXtteNUyoEmQHveb75bISg==
Output as follows.
Note: the configuration file does not need to be changed.
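The keyring is plain INI text, so its key can be extracted programmatically, e.g. when scripting the libvirt secret setup later. A minimal Python sketch (the key string is the sample value shown above):

```python
import configparser

# A keyring file is INI-formatted: one [client.NAME] section
# containing a "key" entry with the base64-encoded secret.
keyring_text = """
[client.admin]
key = AQBTsdRapUxBKRAANXtteNUyoEmQHveb75bISg==
"""

parser = configparser.ConfigParser()
parser.read_string(keyring_text)
key = parser["client.admin"]["key"]
print(key)
```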
[root@node1 ceph-cluster]# rbd list
demo-image
image
image-clone
vm1-image
vm2-image
[root@node1 ceph-cluster]#
[root@node1 ceph-cluster]# rbd info vm1-image
rbd image 'vm1-image':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.105b238e1f29
format: 2
features: layering
flags:
[root@node1 ceph-cluster]#
[root@node1 ceph-cluster]#
[root@node1 ceph-cluster]# qemu-img info rbd:rbd/vm1-image
image: rbd:rbd/vm1-image
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: unavailable
[root@node1 ceph-cluster]#
[root@node1 ceph-cluster]#
[root@node1 ceph-cluster]# cat /etc/ceph/ceph.conf
[global]
fsid = 29908a48-7574-4aac-ac14-80a44b7cffbf
mon_initial_members = node1, node2, node3
mon_host = 192.168.4.11,192.168.4.12,192.168.4.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

[root@node1 ceph-cluster]#
[root@node1 ceph-cluster]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQCXwL5bTwkVGxAA2ls+tuA8SMl2vmzQOPCdCg==
[root@node1 ceph-cluster]#

3) Set up the client environment.
Note: the physical host is used as the client here!
The client needs the ceph-common package, a copy of the configuration file
(otherwise it cannot find the cluster), and a copy of the keyring
(otherwise it has no permission to connect).
[root@room9pc01 ~]# yum -y install ceph-common
[root@room9pc01 ~]# scp 192.168.4.11:/etc/ceph/ceph.conf /etc/ceph/
[root@room9pc01 ~]# scp 192.168.4.11:/etc/ceph/ceph.client.admin.keyring \
/etc/ceph/

Output as follows.
Note: the physical host also needs the Ceph repo added first.
[root@room9pc52 yum.repos.d]# vim ceph.repo
[root@room9pc52 yum.repos.d]# cat ceph.repo
[mon]
name=mon
baseurl=ftp://192.168.4.254/ceph/rhceph-2.0-rhel-7-x86_64/MON
gpgcheck=0
[osd]
name=osd
baseurl=ftp://192.168.4.254/ceph/rhceph-2.0-rhel-7-x86_64/OSD
gpgcheck=0
[tools]
name=tools
baseurl=ftp://192.168.4.254/ceph/rhceph-2.0-rhel-7-x86_64/Tools
gpgcheck=0
[root@room9pc52 yum.repos.d]# yum repolist
已加载插件:fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.163.com
* epel: mirrors.ustc.edu.cn
* extras: mirrors.163.com
* updates: mirrors.163.com
file:///cos7/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /cos7/repodata/repomd.xml"
正在尝试其它镜像。
mon | 4.1 kB 00:00
osd | 4.1 kB 00:00
tools | 3.8 kB 00:00
(1/6): mon/primary_db | 40 kB 00:00
(2/6): mon/group_gz | 489 B 00:00
(3/6): tools/primary_db | 31 kB 00:00
(4/6): osd/primary_db | 31 kB 00:00
(5/6): osd/group_gz | 447 B 00:00
(6/6): tools/group_gz | 459 B 00:00
源标识 源名称 状态
base/7/x86_64 CentOS-7 – Base 9,911
!cos7 cos7 9,591
epel/x86_64 Extra Packages for Enterprise Linux 7 – x86_6 12,726
extras/7/x86_64 CentOS-7 – Extras 432
librehat-shadowsocks/x86_64 Copr repo for shadowsocks owned by librehat 54
mon mon 41
osd osd 28
teamviewer/x86_64 TeamViewer – x86_64 18
tools tools 33
updates/7/x86_64 CentOS-7 – Updates 1,543
repolist: 34,377

Install the package and copy the configuration files.
[root@room9pc52 yum.repos.d]# yum install -y ceph-common
已加载插件:fastestmirror, langpacks
Loading mirror speeds from cached hostfile
* base: mirrors.163.com
* epel: mirrors.ustc.edu.cn
* extras: mirrors.163.com
* updates: mirrors.163.com
file:///cos7/repodata/repomd.xml: [Errno 14] curl#37 - "Couldn't open file /cos7/repodata/repomd.xml"
正在尝试其它镜像。
正在解决依赖关系
–> 正在检查事务
—> 软件包 ceph-common.x86_64.1.10.2.2-38.el7cp 将被 安装
–> 正在处理依赖关系 libcephfs1 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 librados2 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 librbd1 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 python-cephfs = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 python-rados = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 python-rbd = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 libbabeltrace-ctf.so.1()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 libbabeltrace.so.1()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 libboost_iostreams-mt.so.1.53.0()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 libboost_program_options-mt.so.1.53.0()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 libboost_regex-mt.so.1.53.0()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 librgw.so.2()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在检查事务
—> 软件包 boost-iostreams.x86_64.0.1.53.0-27.el7 将被 安装
—> 软件包 boost-program-options.x86_64.0.1.53.0-27.el7 将被 安装
—> 软件包 boost-regex.x86_64.0.1.53.0-27.el7 将被 安装
—> 软件包 libbabeltrace.x86_64.0.1.2.4-3.el7cp 将被 安装
—> 软件包 libcephfs1.x86_64.1.10.2.2-38.el7cp 将被 安装
–> 正在处理依赖关系 libboost_random-mt.so.1.53.0()(64bit),它被软件包 1:libcephfs1-10.2.2-38.el7cp.x86_64 需要
—> 软件包 librados2.x86_64.1.0.94.5-2.el7 将被 升级
—> 软件包 librados2.x86_64.1.10.2.2-38.el7cp 将被 更新
–> 正在处理依赖关系 liblttng-ust.so.0()(64bit),它被软件包 1:librados2-10.2.2-38.el7cp.x86_64 需要
—> 软件包 librbd1.x86_64.1.0.94.5-2.el7 将被 升级
—> 软件包 librbd1.x86_64.1.10.2.2-38.el7cp 将被 更新
—> 软件包 librgw2.x86_64.1.10.2.2-38.el7cp 将被 安装
–> 正在处理依赖关系 libfcgi.so.0()(64bit),它被软件包 1:librgw2-10.2.2-38.el7cp.x86_64 需要
—> 软件包 python-cephfs.x86_64.1.10.2.2-38.el7cp 将被 安装
—> 软件包 python-rados.x86_64.1.10.2.2-38.el7cp 将被 安装
—> 软件包 python-rbd.x86_64.1.10.2.2-38.el7cp 将被 安装
–> 正在检查事务
—> 软件包 boost-random.x86_64.0.1.53.0-27.el7 将被 安装
—> 软件包 fcgi.x86_64.0.2.4.0-25.el7cp 将被 安装
—> 软件包 lttng-ust.x86_64.0.2.4.1-4.el7 将被 安装
–> 正在处理依赖关系 liburcu-cds.so.1()(64bit),它被软件包 lttng-ust-2.4.1-4.el7.x86_64 需要
–> 正在处理依赖关系 liburcu-bp.so.1()(64bit),它被软件包 lttng-ust-2.4.1-4.el7.x86_64 需要
–> 正在检查事务
—> 软件包 userspace-rcu.x86_64.0.0.7.16-1.el7 将被 安装
–> 解决依赖关系完成

依赖关系解决

================================================================================
Package 架构 版本 源 大小
================================================================================
正在安装:
ceph-common x86_64 1:10.2.2-38.el7cp mon 16 M
为依赖而安装:
boost-iostreams x86_64 1.53.0-27.el7 base 61 k
boost-program-options x86_64 1.53.0-27.el7 base 156 k
boost-random x86_64 1.53.0-27.el7 base 39 k
boost-regex x86_64 1.53.0-27.el7 base 300 k
fcgi x86_64 2.4.0-25.el7cp mon 47 k
libbabeltrace x86_64 1.2.4-3.el7cp mon 147 k
libcephfs1 x86_64 1:10.2.2-38.el7cp mon 1.9 M
librgw2 x86_64 1:10.2.2-38.el7cp mon 2.9 M
lttng-ust x86_64 2.4.1-4.el7 epel 176 k
python-cephfs x86_64 1:10.2.2-38.el7cp mon 86 k
python-rados x86_64 1:10.2.2-38.el7cp mon 164 k
python-rbd x86_64 1:10.2.2-38.el7cp mon 93 k
userspace-rcu x86_64 0.7.16-1.el7 epel 73 k
为依赖而更新:
librados2 x86_64 1:10.2.2-38.el7cp mon 1.9 M
librbd1 x86_64 1:10.2.2-38.el7cp mon 2.5 M

事务概要
================================================================================
安装 1 软件包 (+13 依赖软件包)
升级 ( 2 依赖软件包)

总下载量:26 M
Downloading packages:
No Presto metadata available for mon
(1/16): boost-iostreams-1.53.0-27.el7.x86_64.rpm | 61 kB 00:00
(2/16): boost-program-options-1.53.0-27.el7.x86_64.rpm | 156 kB 00:00
(3/16): boost-random-1.53.0-27.el7.x86_64.rpm | 39 kB 00:00
(4/16): fcgi-2.4.0-25.el7cp.x86_64.rpm | 47 kB 00:00
(5/16): libbabeltrace-1.2.4-3.el7cp.x86_64.rpm | 147 kB 00:00
(6/16): boost-regex-1.53.0-27.el7.x86_64.rpm | 300 kB 00:00
(7/16): libcephfs1-10.2.2-38.el7cp.x86_64.rpm | 1.9 MB 00:00
(8/16): librados2-10.2.2-38.el7cp.x86_64.rpm | 1.9 MB 00:00
(9/16): librbd1-10.2.2-38.el7cp.x86_64.rpm | 2.5 MB 00:00
(10/16): librgw2-10.2.2-38.el7cp.x86_64.rpm | 2.9 MB 00:00
(11/16): python-cephfs-10.2.2-38.el7cp.x86_64.rpm | 86 kB 00:00
(12/16): python-rados-10.2.2-38.el7cp.x86_64.rpm | 164 kB 00:00
(13/16): ceph-common-10.2.2-38.el7cp.x86_64.rpm | 16 MB 00:00
(14/16): python-rbd-10.2.2-38.el7cp.x86_64.rpm | 93 kB 00:00
(15/16): lttng-ust-2.4.1-4.el7.x86_64.rpm | 176 kB 00:00
(16/16): userspace-rcu-0.7.16-1.el7.x86_64.rpm | 73 kB 00:00
——————————————————————————–
总计 14 MB/s | 26 MB 00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
正在安装 : boost-iostreams-1.53.0-27.el7.x86_64 1/18
正在安装 : boost-random-1.53.0-27.el7.x86_64 2/18
正在安装 : 1:libcephfs1-10.2.2-38.el7cp.x86_64 3/18
正在安装 : boost-program-options-1.53.0-27.el7.x86_64 4/18
正在安装 : fcgi-2.4.0-25.el7cp.x86_64 5/18
正在安装 : userspace-rcu-0.7.16-1.el7.x86_64 6/18
正在安装 : lttng-ust-2.4.1-4.el7.x86_64 7/18
正在更新 : 1:librados2-10.2.2-38.el7cp.x86_64 8/18
正在安装 : 1:python-rados-10.2.2-38.el7cp.x86_64 9/18
正在更新 : 1:librbd1-10.2.2-38.el7cp.x86_64 10/18
正在安装 : 1:python-rbd-10.2.2-38.el7cp.x86_64 11/18
正在安装 : 1:python-cephfs-10.2.2-38.el7cp.x86_64 12/18
正在安装 : 1:librgw2-10.2.2-38.el7cp.x86_64 13/18
正在安装 : libbabeltrace-1.2.4-3.el7cp.x86_64 14/18
正在安装 : boost-regex-1.53.0-27.el7.x86_64 15/18
正在安装 : 1:ceph-common-10.2.2-38.el7cp.x86_64 16/18
清理 : 1:librbd1-0.94.5-2.el7.x86_64 17/18
清理 : 1:librados2-0.94.5-2.el7.x86_64 18/18
验证中 : 1:ceph-common-10.2.2-38.el7cp.x86_64 1/18
验证中 : 1:python-cephfs-10.2.2-38.el7cp.x86_64 2/18
验证中 : 1:librbd1-10.2.2-38.el7cp.x86_64 3/18
验证中 : boost-regex-1.53.0-27.el7.x86_64 4/18
验证中 : libbabeltrace-1.2.4-3.el7cp.x86_64 5/18
验证中 : userspace-rcu-0.7.16-1.el7.x86_64 6/18
验证中 : lttng-ust-2.4.1-4.el7.x86_64 7/18
验证中 : boost-iostreams-1.53.0-27.el7.x86_64 8/18
验证中 : 1:librados2-10.2.2-38.el7cp.x86_64 9/18
验证中 : fcgi-2.4.0-25.el7cp.x86_64 10/18
验证中 : boost-random-1.53.0-27.el7.x86_64 11/18
验证中 : boost-program-options-1.53.0-27.el7.x86_64 12/18
验证中 : 1:python-rbd-10.2.2-38.el7cp.x86_64 13/18
验证中 : 1:python-rados-10.2.2-38.el7cp.x86_64 14/18
验证中 : 1:libcephfs1-10.2.2-38.el7cp.x86_64 15/18
验证中 : 1:librgw2-10.2.2-38.el7cp.x86_64 16/18
验证中 : 1:librbd1-0.94.5-2.el7.x86_64 17/18
验证中 : 1:librados2-0.94.5-2.el7.x86_64 18/18

已安装:
ceph-common.x86_64 1:10.2.2-38.el7cp

作为依赖被安装:
boost-iostreams.x86_64 0:1.53.0-27.el7
boost-program-options.x86_64 0:1.53.0-27.el7
boost-random.x86_64 0:1.53.0-27.el7
boost-regex.x86_64 0:1.53.0-27.el7
fcgi.x86_64 0:2.4.0-25.el7cp
libbabeltrace.x86_64 0:1.2.4-3.el7cp
libcephfs1.x86_64 1:10.2.2-38.el7cp
librgw2.x86_64 1:10.2.2-38.el7cp
lttng-ust.x86_64 0:2.4.1-4.el7
python-cephfs.x86_64 1:10.2.2-38.el7cp
python-rados.x86_64 1:10.2.2-38.el7cp
python-rbd.x86_64 1:10.2.2-38.el7cp
userspace-rcu.x86_64 0:0.7.16-1.el7

作为依赖被升级:
librados2.x86_64 1:10.2.2-38.el7cp librbd1.x86_64 1:10.2.2-38.el7cp

完毕!

[root@room9pc52 ~]# scp 192.168.4.11:/etc/ceph/ceph.conf /etc/ceph/
[email protected]’s password:
ceph.conf 100% 235 670.5KB/s 00:00
[root@room9pc52 ~]# scp 192.168.4.11:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
[email protected]’s password:
ceph.client.admin.keyring 100% 63 141.1KB/s 00:00
[root@room9pc52 ~]# ll /etc/ceph
总用量 12
-rw——- 1 root root 63 10月 12 09:31 ceph.client.admin.keyring
-rw-r–r– 1 root root 235 10月 12 09:30 ceph.conf
-rwxr-xr-x 1 root root 92 8月 10 2016 rbdmap

4) Create the KVM virtual machines.
Use virt-manager to create two ordinary KVM virtual machines.
(Two new virtual machines were created here, one on .14 and one on .15.)

5) Configure a libvirt secret.
Write the account information file (on the physical host):
[root@room9pc01 ~]# vim secret.xml        //create a temporary file with the following content
<secret ephemeral='no' private='no'>
<usage type='ceph'>
<name>client.admin secret</name>
</usage>
</secret>
#Create the secret from the XML file
Output as follows.
[root@room9pc52 ~]# vim secret.xml
[root@room9pc52 ~]#
[root@room9pc52 ~]# cat secret.xml
<secret ephemeral='no' private='no'>
<usage type='ceph'>
<name>client.admin secret</name>
</usage>
</secret>

[root@room9pc01 ~]# virsh secret-define --file secret.xml
733f0fd1-e3d6-4c25-a69f-6681fc19802b
//a random UUID; the account information is tied to this UUID
Output as follows. Note: this UUID is needed later.
[root@room9pc52 ~]# virsh secret-define --file secret.xml
生成 secret 2962a576-d80c-48cc-bcdf-a807a8339c64

View the account key (on the physical host):
[root@room9pc01 ~]# ceph auth get-key client.admin
//get the key of client.admin, or read the keyring file directly
[root@room9pc01 ~]# cat /etc/ceph/ceph.client.admin.keyring
Output as follows.
[root@room9pc52 ~]# ceph auth get-key client.admin
AQCXwL5bTwkVGxAA2ls+tuA8SMl2vmzQOPCdCg==[root@room9pc52 ~]#
[root@room9pc52 ~]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQCXwL5bTwkVGxAA2ls+tuA8SMl2vmzQOPCdCg==

Set the secret value, i.e. attach the account's key:
[root@room9pc01] virsh secret-set-value \
--secret 733f0fd1-e3d6-4c25-a69f-6681fc19802b \
--base64 AQBTsdRapUxBKRAANXtteNUyoEmQHveb75bISg==
//--secret takes the UUID of the secret created earlier
//--base64 takes the key of the client.admin account
//the secret now carries both the account and its key
Output as follows.
[root@room9pc52 ~]# virsh secret-set-value --secret 2962a576-d80c-48cc-bcdf-a807a8339c64 --base64 AQCXwL5bTwkVGxAA2ls+tuA8SMl2vmzQOPCdCg==
secret 值设定
(Note: in this run, --secret is the UUID generated above and --base64 is the key obtained just now.)
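Before feeding a key to --base64, it can be sanity-checked as well-formed base64. A decoded CephX key is 28 bytes (reportedly a 2-byte type, an 8-byte creation time, a 2-byte length, and the 16-byte AES secret); this sketch uses the key from this run:

```python
import base64

# A CephX key is base64 text; decoded it should be 28 bytes.
# (Layout is commonly described as: 2-byte type + 8-byte created
# timestamp + 2-byte length + 16-byte AES secret.)
key = "AQCXwL5bTwkVGxAA2ls+tuA8SMl2vmzQOPCdCg=="
raw = base64.b64decode(key)
print(len(raw))  # 28
```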

6) The virtual machine's XML configuration file.
Every virtual machine has an XML configuration file describing
its name, memory, CPU, disks, network cards, and so on.
[root@room9pc01 ~]# vim /etc/libvirt/qemu/vm1.xml
//content before the change:
<disk type='file' device='disk'>
<driver name='qemu' type='qcow2'/>
<source file='/var/lib/libvirt/images/vm1.qcow2'/>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
Editing the file directly with vim is not recommended; use virsh edit instead, as follows:
[root@room9pc01] virsh edit vm1        //vm1 is the virtual machine's name
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<auth username='admin'>
<secret type='ceph' uuid='733f0fd1-e3d6-4c25-a69f-6681fc19802b'/>
</auth>
<source protocol='rbd' name='rbd/vm1'>
<host name='192.168.4.11' port='6789'/>
</source>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>

Output as follows.
(Note: the UUID here must match the one set with virsh secret-set-value.)
[root@room9pc52 ~]# virsh edit a14
没有更改域 a14 XML 配置。

[root@room9pc52 ~]#
(Modify the following section:)
<disk type='network' device='disk'>
<driver name='qemu' type='raw'/>
<auth username='admin'>
<secret type='ceph' uuid='2962a576-d80c-48cc-bcdf-a807a8339c64'/>
</auth>
<source protocol='rbd' name='rbd/vm1-image'>
<host name='192.168.4.11' port='6789'/>
</source>
<target dev='vda' bus='virtio'/>
<address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
</disk>
Pitfall: in name='rbd/vm1-image', write exactly the image name that was created in Ceph; do not blindly copy and paste.
After that, start the virtual machine and install the operating system as usual.
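Because a single mistyped attribute (such as the image-name pitfall above) silently breaks the domain, it can help to lint the edited snippet before saving. A small sketch using Python's xml.etree (the XML literal mirrors the disk element above):

```python
import xml.etree.ElementTree as ET

# The libvirt <disk> element from this step; parsing it catches
# malformed XML, and the checks below catch a wrong image name
# before libvirt ever sees the configuration.
disk_xml = """
<disk type='network' device='disk'>
  <driver name='qemu' type='raw'/>
  <auth username='admin'>
    <secret type='ceph' uuid='2962a576-d80c-48cc-bcdf-a807a8339c64'/>
  </auth>
  <source protocol='rbd' name='rbd/vm1-image'>
    <host name='192.168.4.11' port='6789'/>
  </source>
  <target dev='vda' bus='virtio'/>
</disk>
"""

disk = ET.fromstring(disk_xml)
source = disk.find("source")
print(source.get("protocol"), source.get("name"))
```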

Extension
The same commands also work on the physical host:
[root@room9pc52 ~]# rbd ls
demo-image
image
image-clone
vm1-image
vm2-image
[root@room9pc52 ~]# rbd info vm1-image
rbd image 'vm1-image':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.105b238e1f29
format: 2
features: layering
flags:
[root@room9pc52 ~]# qemu-img info rbd:rbd/vm1-image
image: rbd:rbd/vm1-image
file format: raw
virtual size: 10G (10737418240 bytes)
disk size: unavailable
View the secret:
[root@room9pc52 ~]# virsh secret-list
UUID 用量
--------------------------------------------------------------------------------
2962a576-d80c-48cc-bcdf-a807a8339c64 ceph client.admin secret

2 Case 2: Ceph File System
2.1 Problem
Building on the previous exercises, implement the Ceph file system. The goals are:
Deploy an MDS node
Create a Ceph file system
Mount the file system on the client
2.2 Approach
Add one more virtual machine and deploy the MDS node on it.
The hostname and IP address are shown in Table-1.
node4 192.168.4.14
Set up passwordless SSH first:
[root@node1 ceph-cluster]# ssh-copy-id 192.168.4.14
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.4.14 (192.168.4.14)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '192.168.4.14'"
and check to make sure that only the key(s) you wanted were added.

2.3 Steps

Follow the steps below to implement this case.
1) Add a new virtual machine with the following requirements:
IP address: 192.168.4.14
Hostname: node4
Configure the yum repositories (both RHEL and Ceph)
Synchronize time with the Client host
node1 can SSH to node4 without a password
2) Deploy the metadata server.
Log in to node4 and install the ceph-mds package:
[root@node4 ~]# yum -y install ceph-mds
Then operate from the deployment node, node1:
[root@node1 ~]# cd /root/ceph-cluster
//this directory was created when the Ceph cluster was first deployed
[root@node1 ceph-cluster]# ceph-deploy mds create node4
//copies the configuration file to node4 and starts the mds service

Output as follows.
Set up node4 first. Note that ceph-mds is installed directly on node4, not on the admin host; configure NTP time synchronization as well:
[root@node4 ~]# vim /etc/chrony.conf
[root@node4 ~]# systemctl restart chronyd
[root@node4 ~]# yum -y install ceph-mds
已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
mon | 4.1 kB 00:00
osd | 4.1 kB 00:00
tools | 3.8 kB 00:00
(1/6): mon/group_gz | 489 B 00:00
(2/6): mon/primary_db | 40 kB 00:00
(3/6): osd/group_gz | 447 B 00:00
(4/6): tools/primary_db | 31 kB 00:00
(5/6): osd/primary_db | 31 kB 00:00
(6/6): tools/group_gz | 459 B 00:00
正在解决依赖关系
–> 正在检查事务
—> 软件包 ceph-mds.x86_64.1.10.2.2-38.el7cp 将被 安装
–> 正在处理依赖关系 ceph-base = 1:10.2.2-38.el7cp,它被软件包 1:ceph-mds-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 libboost_iostreams-mt.so.1.53.0()(64bit),它被软件包 1:ceph-mds-10.2.2-38.el7cp.x86_64 需要
–> 正在检查事务
—> 软件包 boost-iostreams.x86_64.0.1.53.0-27.el7 将被 安装
—> 软件包 ceph-base.x86_64.1.10.2.2-38.el7cp 将被 安装
–> 正在处理依赖关系 ceph-common = 1:10.2.2-38.el7cp,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 ceph-selinux = 1:10.2.2-38.el7cp,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 libcephfs1 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 librados2 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 librbd1 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 librgw2 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 hdparm,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 redhat-lsb-core,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 libboost_random-mt.so.1.53.0()(64bit),它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 libcephfs.so.1()(64bit),它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 liblttng-ust.so.0()(64bit),它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
–> 正在检查事务
—> 软件包 boost-random.x86_64.0.1.53.0-27.el7 将被 安装
—> 软件包 ceph-common.x86_64.1.10.2.2-38.el7cp 将被 安装
–> 正在处理依赖关系 python-cephfs = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 python-rados = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 python-rbd = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 libbabeltrace-ctf.so.1()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 libbabeltrace.so.1()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 libboost_program_options-mt.so.1.53.0()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
–> 正在处理依赖关系 libboost_regex-mt.so.1.53.0()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
—> 软件包 ceph-selinux.x86_64.1.10.2.2-38.el7cp 将被 安装
—> 软件包 hdparm.x86_64.0.9.43-5.el7 将被 安装
—> 软件包 libcephfs1.x86_64.1.10.2.2-38.el7cp 将被 安装
—> 软件包 librados2.x86_64.1.0.94.5-2.el7 将被 升级
—> 软件包 librados2.x86_64.1.10.2.2-38.el7cp 将被 更新
—> 软件包 librbd1.x86_64.1.0.94.5-2.el7 将被 升级
—> 软件包 librbd1.x86_64.1.10.2.2-38.el7cp 将被 更新
—> 软件包 librgw2.x86_64.1.10.2.2-38.el7cp 将被 安装
–> 正在处理依赖关系 libfcgi.so.0()(64bit),它被软件包 1:librgw2-10.2.2-38.el7cp.x86_64 需要
—> 软件包 lttng-ust.x86_64.0.2.4.1-1.el7cp 将被 安装
–> 正在处理依赖关系 liburcu-bp.so.1()(64bit),它被软件包 lttng-ust-2.4.1-1.el7cp.x86_64 需要
–> 正在处理依赖关系 liburcu-cds.so.1()(64bit),它被软件包 lttng-ust-2.4.1-1.el7cp.x86_64 需要
—> 软件包 redhat-lsb-core.x86_64.0.4.1-27.el7 将被 安装
–> 正在处理依赖关系 redhat-lsb-submod-security(x86-64) = 4.1-27.el7,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
–> 正在处理依赖关系 /usr/bin/m4,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
–> 正在处理依赖关系 /usr/bin/patch,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
–> 正在处理依赖关系 spax,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
–> 正在检查事务
—> 软件包 boost-program-options.x86_64.0.1.53.0-27.el7 将被 安装
—> 软件包 boost-regex.x86_64.0.1.53.0-27.el7 将被 安装
—> 软件包 fcgi.x86_64.0.2.4.0-25.el7cp 将被 安装
—> 软件包 libbabeltrace.x86_64.0.1.2.4-3.el7cp 将被 安装
—> 软件包 m4.x86_64.0.1.4.16-10.el7 将被 安装
—> 软件包 patch.x86_64.0.2.7.1-8.el7 将被 安装
—> 软件包 python-cephfs.x86_64.1.10.2.2-38.el7cp 将被 安装
—> 软件包 python-rados.x86_64.1.10.2.2-38.el7cp 将被 安装
—> 软件包 python-rbd.x86_64.1.10.2.2-38.el7cp 将被 安装
—> 软件包 redhat-lsb-submod-security.x86_64.0.4.1-27.el7 将被 安装
—> 软件包 spax.x86_64.0.1.5.2-13.el7 将被 安装
—> 软件包 userspace-rcu.x86_64.0.0.7.9-2.el7rhgs 将被 安装
–> 解决依赖关系完成
192.168.4.254_rhel7/group_gz | 137 kB 00:00

依赖关系解决

================================================================================
Package 架构 版本 源 大小
================================================================================
正在安装:
ceph-mds x86_64 1:10.2.2-38.el7cp tools 2.8 M
为依赖而安装:
boost-iostreams x86_64 1.53.0-27.el7 192.168.4.254_rhel7 61 k
boost-program-options x86_64 1.53.0-27.el7 192.168.4.254_rhel7 156 k
boost-random x86_64 1.53.0-27.el7 192.168.4.254_rhel7 39 k
boost-regex x86_64 1.53.0-27.el7 192.168.4.254_rhel7 300 k
ceph-base x86_64 1:10.2.2-38.el7cp mon 4.2 M
ceph-common x86_64 1:10.2.2-38.el7cp mon 16 M
ceph-selinux x86_64 1:10.2.2-38.el7cp mon 38 k
fcgi x86_64 2.4.0-25.el7cp mon 47 k
hdparm x86_64 9.43-5.el7 192.168.4.254_rhel7 83 k
libbabeltrace x86_64 1.2.4-3.el7cp mon 147 k
libcephfs1 x86_64 1:10.2.2-38.el7cp mon 1.9 M
librgw2 x86_64 1:10.2.2-38.el7cp mon 2.9 M
lttng-ust x86_64 2.4.1-1.el7cp mon 176 k
m4 x86_64 1.4.16-10.el7 192.168.4.254_rhel7 256 k
patch x86_64 2.7.1-8.el7 192.168.4.254_rhel7 110 k
python-cephfs x86_64 1:10.2.2-38.el7cp mon 86 k
python-rados x86_64 1:10.2.2-38.el7cp mon 164 k
python-rbd x86_64 1:10.2.2-38.el7cp mon 93 k
redhat-lsb-core x86_64 4.1-27.el7 192.168.4.254_rhel7 37 k
redhat-lsb-submod-security x86_64 4.1-27.el7 192.168.4.254_rhel7 15 k
spax x86_64 1.5.2-13.el7 192.168.4.254_rhel7 260 k
userspace-rcu x86_64 0.7.9-2.el7rhgs mon 70 k
为依赖而更新:
librados2 x86_64 1:10.2.2-38.el7cp mon 1.9 M
librbd1 x86_64 1:10.2.2-38.el7cp mon 2.5 M

事务概要
================================================================================
安装 1 软件包 (+22 依赖软件包)
升级 ( 2 依赖软件包)

总下载量:34 M
Downloading packages:
No Presto metadata available for mon
(1/25): boost-program-options-1.53.0-27.el7.x86_64.rpm | 156 kB 00:00
(2/25): boost-iostreams-1.53.0-27.el7.x86_64.rpm | 61 kB 00:00
(3/25): boost-random-1.53.0-27.el7.x86_64.rpm | 39 kB 00:00
(4/25): boost-regex-1.53.0-27.el7.x86_64.rpm | 300 kB 00:00
(5/25): ceph-mds-10.2.2-38.el7cp.x86_64.rpm | 2.8 MB 00:00
(6/25): ceph-base-10.2.2-38.el7cp.x86_64.rpm | 4.2 MB 00:00
(7/25): ceph-selinux-10.2.2-38.el7cp.x86_64.rpm | 38 kB 00:00
(8/25): fcgi-2.4.0-25.el7cp.x86_64.rpm | 47 kB 00:00
(9/25): libbabeltrace-1.2.4-3.el7cp.x86_64.rpm | 147 kB 00:00
(10/25): hdparm-9.43-5.el7.x86_64.rpm | 83 kB 00:00
(11/25): libcephfs1-10.2.2-38.el7cp.x86_64.rpm | 1.9 MB 00:00
(12/25): ceph-common-10.2.2-38.el7cp.x86_64.rpm | 16 MB 00:00
(13/25): librados2-10.2.2-38.el7cp.x86_64.rpm | 1.9 MB 00:00
(14/25): librbd1-10.2.2-38.el7cp.x86_64.rpm | 2.5 MB 00:00
(15/25): lttng-ust-2.4.1-1.el7cp.x86_64.rpm | 176 kB 00:00
(16/25): librgw2-10.2.2-38.el7cp.x86_64.rpm | 2.9 MB 00:00
(17/25): python-cephfs-10.2.2-38.el7cp.x86_64.rpm | 86 kB 00:00
(18/25): python-rados-10.2.2-38.el7cp.x86_64.rpm | 164 kB 00:00
(19/25): python-rbd-10.2.2-38.el7cp.x86_64.rpm | 93 kB 00:00
(20/25): m4-1.4.16-10.el7.x86_64.rpm | 256 kB 00:00
(21/25): patch-2.7.1-8.el7.x86_64.rpm | 110 kB 00:00
(22/25): redhat-lsb-core-4.1-27.el7.x86_64.rpm | 37 kB 00:00
(23/25): redhat-lsb-submod-security-4.1-27.el7.x86_64.rpm | 15 kB 00:00
(24/25): spax-1.5.2-13.el7.x86_64.rpm | 260 kB 00:00
(25/25): userspace-rcu-0.7.9-2.el7rhgs.x86_64.rpm | 70 kB 00:00
——————————————————————————–
总计 23 MB/s | 34 MB 00:01
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
正在安装 : boost-iostreams-1.53.0-27.el7.x86_64 1/27
正在安装 : boost-random-1.53.0-27.el7.x86_64 2/27
正在安装 : 1:libcephfs1-10.2.2-38.el7cp.x86_64 3/27
正在安装 : fcgi-2.4.0-25.el7cp.x86_64 4/27
正在安装 : spax-1.5.2-13.el7.x86_64 5/27
正在安装 : redhat-lsb-submod-security-4.1-27.el7.x86_64 6/27
正在安装 : patch-2.7.1-8.el7.x86_64 7/27
正在安装 : boost-program-options-1.53.0-27.el7.x86_64 8/27
正在安装 : hdparm-9.43-5.el7.x86_64 9/27
正在安装 : m4-1.4.16-10.el7.x86_64 10/27
正在安装 : redhat-lsb-core-4.1-27.el7.x86_64 11/27
正在安装 : libbabeltrace-1.2.4-3.el7cp.x86_64 12/27
正在安装 : boost-regex-1.53.0-27.el7.x86_64 13/27
正在安装 : userspace-rcu-0.7.9-2.el7rhgs.x86_64 14/27
正在安装 : lttng-ust-2.4.1-1.el7cp.x86_64 15/27
正在更新 : 1:librados2-10.2.2-38.el7cp.x86_64 16/27
正在更新 : 1:librbd1-10.2.2-38.el7cp.x86_64 17/27
正在安装 : 1:python-rados-10.2.2-38.el7cp.x86_64 18/27
正在安装 : 1:librgw2-10.2.2-38.el7cp.x86_64 19/27
正在安装 : 1:python-cephfs-10.2.2-38.el7cp.x86_64 20/27
正在安装 : 1:python-rbd-10.2.2-38.el7cp.x86_64 21/27
正在安装 : 1:ceph-common-10.2.2-38.el7cp.x86_64 22/27
正在安装 : 1:ceph-selinux-10.2.2-38.el7cp.x86_64 23/27
正在安装 : 1:ceph-base-10.2.2-38.el7cp.x86_64 24/27
正在安装 : 1:ceph-mds-10.2.2-38.el7cp.x86_64 25/27
清理 : 1:librbd1-0.94.5-2.el7.x86_64 26/27
清理 : 1:librados2-0.94.5-2.el7.x86_64 27/27
192.168.4.254_rhel7/productid | 1.6 kB 00:00
mon/productid | 1.6 kB 00:00
osd/productid | 1.6 kB 00:00
验证中 : userspace-rcu-0.7.9-2.el7rhgs.x86_64 1/27
验证中 : boost-regex-1.53.0-27.el7.x86_64 2/27
验证中 : libbabeltrace-1.2.4-3.el7cp.x86_64 3/27
验证中 : 1:librados2-10.2.2-38.el7cp.x86_64 4/27
验证中 : m4-1.4.16-10.el7.x86_64 5/27
验证中 : hdparm-9.43-5.el7.x86_64 6/27
验证中 : 1:libcephfs1-10.2.2-38.el7cp.x86_64 7/27
验证中 : 1:ceph-common-10.2.2-38.el7cp.x86_64 8/27
验证中 : 1:librbd1-10.2.2-38.el7cp.x86_64 9/27
验证中 : boost-iostreams-1.53.0-27.el7.x86_64 10/27
验证中 : boost-random-1.53.0-27.el7.x86_64 11/27
验证中 : boost-program-options-1.53.0-27.el7.x86_64 12/27
验证中 : patch-2.7.1-8.el7.x86_64 13/27
验证中 : 1:ceph-selinux-10.2.2-38.el7cp.x86_64 14/27
验证中 : 1:librgw2-10.2.2-38.el7cp.x86_64 15/27
验证中 : 1:python-cephfs-10.2.2-38.el7cp.x86_64 16/27
验证中 : redhat-lsb-submod-security-4.1-27.el7.x86_64 17/27
验证中 : 1:ceph-base-10.2.2-38.el7cp.x86_64 18/27
验证中 : redhat-lsb-core-4.1-27.el7.x86_64 19/27
验证中 : 1:python-rados-10.2.2-38.el7cp.x86_64 20/27
验证中 : lttng-ust-2.4.1-1.el7cp.x86_64 21/27
验证中 : spax-1.5.2-13.el7.x86_64 22/27
验证中 : 1:ceph-mds-10.2.2-38.el7cp.x86_64 23/27
验证中 : fcgi-2.4.0-25.el7cp.x86_64 24/27
验证中 : 1:python-rbd-10.2.2-38.el7cp.x86_64 25/27
验证中 : 1:librbd1-0.94.5-2.el7.x86_64 26/27
验证中 : 1:librados2-0.94.5-2.el7.x86_64 27/27

已安装:
ceph-mds.x86_64 1:10.2.2-38.el7cp

作为依赖被安装:
boost-iostreams.x86_64 0:1.53.0-27.el7
boost-program-options.x86_64 0:1.53.0-27.el7
boost-random.x86_64 0:1.53.0-27.el7
boost-regex.x86_64 0:1.53.0-27.el7
ceph-base.x86_64 1:10.2.2-38.el7cp
ceph-common.x86_64 1:10.2.2-38.el7cp
ceph-selinux.x86_64 1:10.2.2-38.el7cp
fcgi.x86_64 0:2.4.0-25.el7cp
hdparm.x86_64 0:9.43-5.el7
libbabeltrace.x86_64 0:1.2.4-3.el7cp
libcephfs1.x86_64 1:10.2.2-38.el7cp
librgw2.x86_64 1:10.2.2-38.el7cp
lttng-ust.x86_64 0:2.4.1-1.el7cp
m4.x86_64 0:1.4.16-10.el7
patch.x86_64 0:2.7.1-8.el7
python-cephfs.x86_64 1:10.2.2-38.el7cp
python-rados.x86_64 1:10.2.2-38.el7cp
python-rbd.x86_64 1:10.2.2-38.el7cp
redhat-lsb-core.x86_64 0:4.1-27.el7
redhat-lsb-submod-security.x86_64 0:4.1-27.el7
spax.x86_64 0:1.5.2-13.el7
userspace-rcu.x86_64 0:0.7.9-2.el7rhgs

作为依赖被升级:
librados2.x86_64 1:10.2.2-38.el7cp librbd1.x86_64 1:10.2.2-38.el7cp

完毕!
##################
node4 is now ready; return to the admin node.
##############

[root@node1 ceph-cluster]# pwd
/root/ceph-cluster
[root@node1 ceph-cluster]# vim /etc/hosts
(add node4 to the hosts file)
Next, distribute the hosts file:
[root@node1 ceph-cluster]# for i in node{2..4}
> do
> scp /etc/hosts $i:/etc/
> done
hosts 100% 269 419.6KB/s 00:00
hosts 100% 269 401.4KB/s 00:00
The authenticity of host 'node4 (192.168.4.14)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node4' (ECDSA) to the list of known hosts.
hosts 100% 269 406.9KB/s 00:00
[root@node1 ceph-cluster]# ceph-deploy mds create node4
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy mds create node4
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fb47e755b48>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mds at 0x7fb47e72c7d0>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] mds : [('node4', 'node4')]
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mds][DEBUG ] Deploying mds, cluster ceph hosts node4:node4
[node4][DEBUG ] connected to host: node4
[node4][DEBUG ] detect platform information from remote host
[node4][DEBUG ] detect machine type
[ceph_deploy.mds][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[ceph_deploy.mds][DEBUG ] remote host will use systemd
[ceph_deploy.mds][DEBUG ] deploying mds bootstrap to node4
[node4][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node4][WARNIN] mds keyring does not exist yet, creating one
[node4][DEBUG ] create a keyring file
[node4][DEBUG ] create path if it doesn't exist
[node4][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-mds --keyring /var/lib/ceph/bootstrap-mds/ceph.keyring auth get-or-create mds.node4 osd allow rwx mds allow mon allow profile mds -o /var/lib/ceph/mds/ceph-node4/keyring
[node4][INFO ] Running command: systemctl enable ceph-mds@node4
[node4][WARNIN] Created symlink from /etc/systemd/system/ceph-mds.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[node4][INFO ] Running command: systemctl start ceph-mds@node4
[node4][INFO ] Running command: systemctl enable ceph.target

Synchronize the configuration file and keys:
[root@node1 ceph-cluster]# ceph-deploy admin node4
Output as follows.
[root@node1 ceph-cluster]# ceph-deploy admin node4
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy admin node4
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fcdc9d38440>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : ['node4']
[ceph_deploy.cli][INFO ] func : <function admin at 0x7fcdca993140>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node4
[node4][DEBUG ] connected to host: node4
[node4][DEBUG ] detect platform information from remote host
[node4][DEBUG ] detect machine type
[node4][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

3) Create the storage pools.
[root@node4 ~]# ceph osd pool create cephfs_data 128
//create the data pool with 128 PGs
[root@node4 ~]# ceph osd pool create cephfs_metadata 128
//create the metadata pool with 128 PGs

[root@node4 ~]# ceph osd pool create cephfs_data 128
pool 'cephfs_data' created
[root@node4 ~]# ceph osd pool create cephfs_metadata 128
pool 'cephfs_metadata' created
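The 128 PGs used above follow the common rule of thumb (OSDs × 100) / replica count, rounded up to the next power of two; a small lab cluster of about 3 OSDs with 3 replicas lands on 128. A rough sketch (the OSD counts below are illustrative assumptions, not read from this cluster):

```python
# Rule-of-thumb PG count: (osds * target_pgs_per_osd) / replicas,
# rounded up to the next power of two.
def pg_count(osds, replicas=3, target_pgs_per_osd=100):
    raw = osds * target_pgs_per_osd / replicas
    power = 1
    while power < raw:
        power *= 2
    return power

print(pg_count(3))  # 128
```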

[root@node4 ~]# ceph mds stat
e2:, 1 up:standby

5) Create the Ceph filesystem.
[root@node4 ~]# ceph mds stat //check MDS status
e2:, 1 up:standby
[root@node4 ~]# ceph fs new myfs1 cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1
//Note: specify the metadata pool first, then the data pool
//By default only one filesystem may exist; creating more will fail
[root@node4 ~]# ceph fs ls
name: myfs1, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@node4 ~]# ceph mds stat
e4: 1/1/1 up {0=node4=up:creating}

Output:
[root@node4 ~]# ceph mds stat
e2:, 1 up:standby
[root@node4 ~]#
[root@node4 ~]# ceph osd pool ls
rbd
cephfs_data
cephfs_metadata
[root@node4 ~]#
[root@node4 ~]#
[root@node4 ~]# ceph fs new myfs1 cephfs_metadata cephfs_data
new fs with metadata pool 2 and data pool 1
[root@node4 ~]#
[root@node4 ~]# ceph fs ls
name: myfs1, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
[root@node4 ~]#
[root@node4 ~]# ceph mds stat
e5: 1/1/1 up {0=node4=up:active}

Mount on the client:
[root@client ~]# mount -t ceph 192.168.4.11:6789:/ /mnt/cephfs/ \
-o name=admin,secret=AQBTsdRapUxBKRAANXtteNUyoEmQHveb75bISg==
//Note: the filesystem type is ceph
//192.168.4.11 is the IP of a MON node (not the MDS node)
//admin is the user name, secret is the key
//the key can be found in /etc/ceph/ceph.client.admin.keyring
[root@client ~]# cat /etc/ceph/ceph.client.admin.keyring
[client.admin]
key = AQCXwL5bTwkVGxAA2ls+tuA8SMl2vmzQOPCdCg==
[root@client ~]#
[root@client ~]# mkdir /mnt/cephfs
[root@client ~]# mount -t ceph 192.168.4.11:6789:/ /mnt/cephfs -o name=admin,secret=AQCXwL5bTwkVGxAA2ls+tuA8SMl2vmzQOPCdCg==
[root@client ~]# ll /mnt/cephfs/
total 0
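To survive a reboot, the mount can also go into /etc/fstab using a secret file instead of a key on the command line. A sketch (the /etc/ceph/admin.secret path is an assumed location; create that file containing only the key from ceph.client.admin.keyring and chmod it to 600):

```
192.168.4.11:6789:/  /mnt/cephfs  ceph  name=admin,secretfile=/etc/ceph/admin.secret,_netdev  0  0
```

After adding the line, `mount -a` mounts it immediately and doubles as a syntax check. The _netdev option delays the mount until the network is up.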

3 Case 3: Deploy an Object Storage Server
3.1 Problem
Building on the previous labs, implement Ceph object storage. The tasks are:
Install and deploy the RADOS Gateway
Start the RGW service
Configure the RGW front end and port
Test from the client
3.2 Steps
Step 1: Deploy the object storage server
1) Prepare the lab environment with the following requirements:
IP address: 192.168.4.15
Hostname: node5
Configure yum repositories (both the RHEL and Ceph repos)
Synchronize time with the Client host
Allow node1 to SSH to node5 without a password
Update /etc/hosts on node1 and synchronize it to all node hosts
Output:
Set up yum, change the hostname, edit /etc/hosts, and enable passwordless login:
[root@node5 ~]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.4.10 client
192.168.4.11 node1
192.168.4.12 node2
192.168.4.13 node3
192.168.4.14 node4
192.168.4.15 node5
Configure NTP time synchronization:
[root@client ~]# vim /etc/chrony.conf
[root@client ~]# systemctl restart chronyd
[root@client ~]# cat /etc/chrony.conf
# Use public servers from the pool.ntp.org project.
# Please consider joining the pool (http://www.pool.ntp.org/join.html).
server 192.168.4.10 iburst

2) Install the RGW package.
[root@node1 ~]# ceph-deploy install --rgw node5
Output:
[root@node1 ceph-cluster]# ceph-deploy install --rgw node5
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy install --rgw node5
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f05af807f80>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : True
[ceph_deploy.cli][INFO ] func : <function install at 0x7f05b04777d0>
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : [‘node5’]
[ceph_deploy.cli][INFO ] install_rgw : True
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts node5
[ceph_deploy.install][DEBUG ] Detecting platform for host node5 …
The authenticity of host 'node5 (192.168.4.15)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node5' (ECDSA) to the list of known hosts.
root@node5's password:
root@node5's password:
[node5][DEBUG ] connected to host: node5
[node5][DEBUG ] detect platform information from remote host
[node5][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node5][INFO ] installing Ceph on node5
[node5][INFO ] Running command: yum clean all
[node5][DEBUG ] 已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
[node5][DEBUG ] This system is not registered with an entitlement server. You can use subscription-manager to register.
[node5][DEBUG ] 正在清理软件源: 192.168.4.254_rhel7 mon osd tools
[node5][DEBUG ] Cleaning up everything
[node5][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[node5][INFO ] Running command: yum -y install ceph-radosgw
[node5][DEBUG ] 已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
[node5][DEBUG ] This system is not registered with an entitlement server. You can use subscription-manager to register.
[node5][DEBUG ] 正在解决依赖关系
[node5][DEBUG ] –> 正在检查事务
[node5][DEBUG ] —> 软件包 ceph-radosgw.x86_64.1.10.2.2-38.el7cp 将被 安装
[node5][DEBUG ] –> 正在处理依赖关系 ceph-common = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 ceph-selinux = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 librados2 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 librgw2 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 mailcap,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 libfcgi.so.0()(64bit),它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 librgw.so.2()(64bit),它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在检查事务
[node5][DEBUG ] —> 软件包 ceph-common.x86_64.1.10.2.2-38.el7cp 将被 安装
[node5][DEBUG ] –> 正在处理依赖关系 libcephfs1 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 librbd1 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 python-cephfs = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 python-rados = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 python-rbd = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 libbabeltrace-ctf.so.1()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 libbabeltrace.so.1()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 libboost_iostreams-mt.so.1.53.0()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 libboost_program_options-mt.so.1.53.0()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 libboost_regex-mt.so.1.53.0()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] —> 软件包 ceph-selinux.x86_64.1.10.2.2-38.el7cp 将被 安装
[node5][DEBUG ] –> 正在处理依赖关系 ceph-base = 1:10.2.2-38.el7cp,它被软件包 1:ceph-selinux-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] —> 软件包 fcgi.x86_64.0.2.4.0-25.el7cp 将被 安装
[node5][DEBUG ] —> 软件包 librados2.x86_64.1.0.94.5-2.el7 将被 升级
[node5][DEBUG ] —> 软件包 librados2.x86_64.1.10.2.2-38.el7cp 将被 更新
[node5][DEBUG ] –> 正在处理依赖关系 libboost_random-mt.so.1.53.0()(64bit),它被软件包 1:librados2-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 liblttng-ust.so.0()(64bit),它被软件包 1:librados2-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] —> 软件包 librgw2.x86_64.1.10.2.2-38.el7cp 将被 安装
[node5][DEBUG ] —> 软件包 mailcap.noarch.0.2.1.41-2.el7 将被 安装
[node5][DEBUG ] –> 正在检查事务
[node5][DEBUG ] —> 软件包 boost-iostreams.x86_64.0.1.53.0-27.el7 将被 安装
[node5][DEBUG ] —> 软件包 boost-program-options.x86_64.0.1.53.0-27.el7 将被 安装
[node5][DEBUG ] —> 软件包 boost-random.x86_64.0.1.53.0-27.el7 将被 安装
[node5][DEBUG ] —> 软件包 boost-regex.x86_64.0.1.53.0-27.el7 将被 安装
[node5][DEBUG ] —> 软件包 ceph-base.x86_64.1.10.2.2-38.el7cp 将被 安装
[node5][DEBUG ] –> 正在处理依赖关系 hdparm,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 redhat-lsb-core,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node5][DEBUG ] —> 软件包 libbabeltrace.x86_64.0.1.2.4-3.el7cp 将被 安装
[node5][DEBUG ] —> 软件包 libcephfs1.x86_64.1.10.2.2-38.el7cp 将被 安装
[node5][DEBUG ] —> 软件包 librbd1.x86_64.1.0.94.5-2.el7 将被 升级
[node5][DEBUG ] —> 软件包 librbd1.x86_64.1.10.2.2-38.el7cp 将被 更新
[node5][DEBUG ] —> 软件包 lttng-ust.x86_64.0.2.4.1-1.el7cp 将被 安装
[node5][DEBUG ] –> 正在处理依赖关系 liburcu-bp.so.1()(64bit),它被软件包 lttng-ust-2.4.1-1.el7cp.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 liburcu-cds.so.1()(64bit),它被软件包 lttng-ust-2.4.1-1.el7cp.x86_64 需要
[node5][DEBUG ] —> 软件包 python-cephfs.x86_64.1.10.2.2-38.el7cp 将被 安装
[node5][DEBUG ] —> 软件包 python-rados.x86_64.1.10.2.2-38.el7cp 将被 安装
[node5][DEBUG ] —> 软件包 python-rbd.x86_64.1.10.2.2-38.el7cp 将被 安装
[node5][DEBUG ] –> 正在检查事务
[node5][DEBUG ] —> 软件包 hdparm.x86_64.0.9.43-5.el7 将被 安装
[node5][DEBUG ] —> 软件包 redhat-lsb-core.x86_64.0.4.1-27.el7 将被 安装
[node5][DEBUG ] –> 正在处理依赖关系 redhat-lsb-submod-security(x86-64) = 4.1-27.el7,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 /usr/bin/m4,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 /usr/bin/patch,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node5][DEBUG ] –> 正在处理依赖关系 spax,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node5][DEBUG ] —> 软件包 userspace-rcu.x86_64.0.0.7.9-2.el7rhgs 将被 安装
[node5][DEBUG ] –> 正在检查事务
[node5][DEBUG ] —> 软件包 m4.x86_64.0.1.4.16-10.el7 将被 安装
[node5][DEBUG ] —> 软件包 patch.x86_64.0.2.7.1-8.el7 将被 安装
[node5][DEBUG ] —> 软件包 redhat-lsb-submod-security.x86_64.0.4.1-27.el7 将被 安装
[node5][DEBUG ] —> 软件包 spax.x86_64.0.1.5.2-13.el7 将被 安装
[node5][DEBUG ] –> 解决依赖关系完成
[node5][DEBUG ]
[node5][DEBUG ] 依赖关系解决
[node5][DEBUG ]
[node5][DEBUG ] ================================================================================
[node5][DEBUG ] Package 架构 版本 源 大小
[node5][DEBUG ] ================================================================================
[node5][DEBUG ] 正在安装:
[node5][DEBUG ] ceph-radosgw x86_64 1:10.2.2-38.el7cp tools 265 k
[node5][DEBUG ] 为依赖而安装:
[node5][DEBUG ] boost-iostreams x86_64 1.53.0-27.el7 192.168.4.254_rhel7 61 k
[node5][DEBUG ] boost-program-options x86_64 1.53.0-27.el7 192.168.4.254_rhel7 156 k
[node5][DEBUG ] boost-random x86_64 1.53.0-27.el7 192.168.4.254_rhel7 39 k
[node5][DEBUG ] boost-regex x86_64 1.53.0-27.el7 192.168.4.254_rhel7 300 k
[node5][DEBUG ] ceph-base x86_64 1:10.2.2-38.el7cp mon 4.2 M
[node5][DEBUG ] ceph-common x86_64 1:10.2.2-38.el7cp mon 16 M
[node5][DEBUG ] ceph-selinux x86_64 1:10.2.2-38.el7cp mon 38 k
[node5][DEBUG ] fcgi x86_64 2.4.0-25.el7cp mon 47 k
[node5][DEBUG ] hdparm x86_64 9.43-5.el7 192.168.4.254_rhel7 83 k
[node5][DEBUG ] libbabeltrace x86_64 1.2.4-3.el7cp mon 147 k
[node5][DEBUG ] libcephfs1 x86_64 1:10.2.2-38.el7cp mon 1.9 M
[node5][DEBUG ] librgw2 x86_64 1:10.2.2-38.el7cp mon 2.9 M
[node5][DEBUG ] lttng-ust x86_64 2.4.1-1.el7cp mon 176 k
[node5][DEBUG ] m4 x86_64 1.4.16-10.el7 192.168.4.254_rhel7 256 k
[node5][DEBUG ] mailcap noarch 2.1.41-2.el7 192.168.4.254_rhel7 31 k
[node5][DEBUG ] patch x86_64 2.7.1-8.el7 192.168.4.254_rhel7 110 k
[node5][DEBUG ] python-cephfs x86_64 1:10.2.2-38.el7cp mon 86 k
[node5][DEBUG ] python-rados x86_64 1:10.2.2-38.el7cp mon 164 k
[node5][DEBUG ] python-rbd x86_64 1:10.2.2-38.el7cp mon 93 k
[node5][DEBUG ] redhat-lsb-core x86_64 4.1-27.el7 192.168.4.254_rhel7 37 k
[node5][DEBUG ] redhat-lsb-submod-security x86_64 4.1-27.el7 192.168.4.254_rhel7 15 k
[node5][DEBUG ] spax x86_64 1.5.2-13.el7 192.168.4.254_rhel7 260 k
[node5][DEBUG ] userspace-rcu x86_64 0.7.9-2.el7rhgs mon 70 k
[node5][DEBUG ] 为依赖而更新:
[node5][DEBUG ] librados2 x86_64 1:10.2.2-38.el7cp mon 1.9 M
[node5][DEBUG ] librbd1 x86_64 1:10.2.2-38.el7cp mon 2.5 M
[node5][DEBUG ]
[node5][DEBUG ] 事务概要
[node5][DEBUG ] ================================================================================
[node5][DEBUG ] 安装 1 软件包 (+23 依赖软件包)
[node5][DEBUG ] 升级 ( 2 依赖软件包)
[node5][DEBUG ]
[node5][DEBUG ] 总下载量:31 M
[node5][DEBUG ] Downloading packages:
[node5][DEBUG ] No Presto metadata available for mon
[node5][DEBUG ] ——————————————————————————–
[node5][DEBUG ] 总计 29 MB/s | 31 MB 00:01
[node5][DEBUG ] Running transaction check
[node5][DEBUG ] Running transaction test
[node5][DEBUG ] Transaction test succeeded
[node5][DEBUG ] Running transaction
[node5][DEBUG ] 正在安装 : boost-iostreams-1.53.0-27.el7.x86_64 1/28
[node5][DEBUG ] 正在安装 : boost-random-1.53.0-27.el7.x86_64 2/28
[node5][DEBUG ] 正在安装 : 1:libcephfs1-10.2.2-38.el7cp.x86_64 3/28
[node5][DEBUG ] 正在安装 : fcgi-2.4.0-25.el7cp.x86_64 4/28
[node5][DEBUG ] 正在安装 : spax-1.5.2-13.el7.x86_64 5/28
[node5][DEBUG ] 正在安装 : redhat-lsb-submod-security-4.1-27.el7.x86_64 6/28
[node5][DEBUG ] 正在安装 : patch-2.7.1-8.el7.x86_64 7/28
[node5][DEBUG ] 正在安装 : boost-program-options-1.53.0-27.el7.x86_64 8/28
[node5][DEBUG ] 正在安装 : hdparm-9.43-5.el7.x86_64 9/28
[node5][DEBUG ] 正在安装 : m4-1.4.16-10.el7.x86_64 10/28
[node5][DEBUG ] 正在安装 : redhat-lsb-core-4.1-27.el7.x86_64 11/28
[node5][DEBUG ] 正在安装 : libbabeltrace-1.2.4-3.el7cp.x86_64 12/28
[node5][DEBUG ] 正在安装 : boost-regex-1.53.0-27.el7.x86_64 13/28
[node5][DEBUG ] 正在安装 : mailcap-2.1.41-2.el7.noarch 14/28
[node5][DEBUG ] 正在安装 : userspace-rcu-0.7.9-2.el7rhgs.x86_64 15/28
[node5][DEBUG ] 正在安装 : lttng-ust-2.4.1-1.el7cp.x86_64 16/28
[node5][DEBUG ] 正在更新 : 1:librados2-10.2.2-38.el7cp.x86_64 17/28
[node5][DEBUG ] 正在更新 : 1:librbd1-10.2.2-38.el7cp.x86_64 18/28
[node5][DEBUG ] 正在安装 : 1:librgw2-10.2.2-38.el7cp.x86_64 19/28
[node5][DEBUG ] 正在安装 : 1:python-rados-10.2.2-38.el7cp.x86_64 20/28
[node5][DEBUG ] 正在安装 : 1:python-cephfs-10.2.2-38.el7cp.x86_64 21/28
[node5][DEBUG ] 正在安装 : 1:python-rbd-10.2.2-38.el7cp.x86_64 22/28
[node5][DEBUG ] 正在安装 : 1:ceph-common-10.2.2-38.el7cp.x86_64 23/28
[node5][DEBUG ] 正在安装 : 1:ceph-selinux-10.2.2-38.el7cp.x86_64 24/28
[node5][DEBUG ] 正在安装 : 1:ceph-base-10.2.2-38.el7cp.x86_64 25/28
[node5][DEBUG ] 正在安装 : 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 26/28
[node5][DEBUG ] 清理 : 1:librbd1-0.94.5-2.el7.x86_64 27/28
[node5][DEBUG ] 清理 : 1:librados2-0.94.5-2.el7.x86_64 28/28
[node5][DEBUG ] 验证中 : userspace-rcu-0.7.9-2.el7rhgs.x86_64 1/28
[node5][DEBUG ] 验证中 : mailcap-2.1.41-2.el7.noarch 2/28
[node5][DEBUG ] 验证中 : boost-regex-1.53.0-27.el7.x86_64 3/28
[node5][DEBUG ] 验证中 : libbabeltrace-1.2.4-3.el7cp.x86_64 4/28
[node5][DEBUG ] 验证中 : 1:librados2-10.2.2-38.el7cp.x86_64 5/28
[node5][DEBUG ] 验证中 : m4-1.4.16-10.el7.x86_64 6/28
[node5][DEBUG ] 验证中 : hdparm-9.43-5.el7.x86_64 7/28
[node5][DEBUG ] 验证中 : 1:libcephfs1-10.2.2-38.el7cp.x86_64 8/28
[node5][DEBUG ] 验证中 : 1:ceph-common-10.2.2-38.el7cp.x86_64 9/28
[node5][DEBUG ] 验证中 : 1:librbd1-10.2.2-38.el7cp.x86_64 10/28
[node5][DEBUG ] 验证中 : boost-iostreams-1.53.0-27.el7.x86_64 11/28
[node5][DEBUG ] 验证中 : boost-random-1.53.0-27.el7.x86_64 12/28
[node5][DEBUG ] 验证中 : boost-program-options-1.53.0-27.el7.x86_64 13/28
[node5][DEBUG ] 验证中 : patch-2.7.1-8.el7.x86_64 14/28
[node5][DEBUG ] 验证中 : 1:ceph-selinux-10.2.2-38.el7cp.x86_64 15/28
[node5][DEBUG ] 验证中 : 1:librgw2-10.2.2-38.el7cp.x86_64 16/28
[node5][DEBUG ] 验证中 : 1:python-cephfs-10.2.2-38.el7cp.x86_64 17/28
[node5][DEBUG ] 验证中 : redhat-lsb-submod-security-4.1-27.el7.x86_64 18/28
[node5][DEBUG ] 验证中 : 1:ceph-base-10.2.2-38.el7cp.x86_64 19/28
[node5][DEBUG ] 验证中 : redhat-lsb-core-4.1-27.el7.x86_64 20/28
[node5][DEBUG ] 验证中 : 1:python-rados-10.2.2-38.el7cp.x86_64 21/28
[node5][DEBUG ] 验证中 : lttng-ust-2.4.1-1.el7cp.x86_64 22/28
[node5][DEBUG ] 验证中 : spax-1.5.2-13.el7.x86_64 23/28
[node5][DEBUG ] 验证中 : 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 24/28
[node5][DEBUG ] 验证中 : fcgi-2.4.0-25.el7cp.x86_64 25/28
[node5][DEBUG ] 验证中 : 1:python-rbd-10.2.2-38.el7cp.x86_64 26/28
[node5][DEBUG ] 验证中 : 1:librbd1-0.94.5-2.el7.x86_64 27/28
[node5][DEBUG ] 验证中 : 1:librados2-0.94.5-2.el7.x86_64 28/28
[node5][DEBUG ]
[node5][DEBUG ] 已安装:
[node5][DEBUG ] ceph-radosgw.x86_64 1:10.2.2-38.el7cp
[node5][DEBUG ]
[node5][DEBUG ] 作为依赖被安装:
[node5][DEBUG ] boost-iostreams.x86_64 0:1.53.0-27.el7
[node5][DEBUG ] boost-program-options.x86_64 0:1.53.0-27.el7
[node5][DEBUG ] boost-random.x86_64 0:1.53.0-27.el7
[node5][DEBUG ] boost-regex.x86_64 0:1.53.0-27.el7
[node5][DEBUG ] ceph-base.x86_64 1:10.2.2-38.el7cp
[node5][DEBUG ] ceph-common.x86_64 1:10.2.2-38.el7cp
[node5][DEBUG ] ceph-selinux.x86_64 1:10.2.2-38.el7cp
[node5][DEBUG ] fcgi.x86_64 0:2.4.0-25.el7cp
[node5][DEBUG ] hdparm.x86_64 0:9.43-5.el7
[node5][DEBUG ] libbabeltrace.x86_64 0:1.2.4-3.el7cp
[node5][DEBUG ] libcephfs1.x86_64 1:10.2.2-38.el7cp
[node5][DEBUG ] librgw2.x86_64 1:10.2.2-38.el7cp
[node5][DEBUG ] lttng-ust.x86_64 0:2.4.1-1.el7cp
[node5][DEBUG ] m4.x86_64 0:1.4.16-10.el7
[node5][DEBUG ] mailcap.noarch 0:2.1.41-2.el7
[node5][DEBUG ] patch.x86_64 0:2.7.1-8.el7
[node5][DEBUG ] python-cephfs.x86_64 1:10.2.2-38.el7cp
[node5][DEBUG ] python-rados.x86_64 1:10.2.2-38.el7cp
[node5][DEBUG ] python-rbd.x86_64 1:10.2.2-38.el7cp
[node5][DEBUG ] redhat-lsb-core.x86_64 0:4.1-27.el7
[node5][DEBUG ] redhat-lsb-submod-security.x86_64 0:4.1-27.el7
[node5][DEBUG ] spax.x86_64 0:1.5.2-13.el7
[node5][DEBUG ] userspace-rcu.x86_64 0:0.7.9-2.el7rhgs
[node5][DEBUG ]
[node5][DEBUG ] 作为依赖被升级:
[node5][DEBUG ] librados2.x86_64 1:10.2.2-38.el7cp librbd1.x86_64 1:10.2.2-38.el7cp
[node5][DEBUG ]
[node5][DEBUG ] 完毕!
[node5][INFO ] Running command: ceph –version
[node5][DEBUG ] ceph version 10.2.2-38.el7cp (119a68752a5671253f9daae3f894a90313a6b8e4)
[root@node1 ceph-cluster]#
[root@node1 ceph-cluster]#

Set up passwordless login:
[root@node1 ceph-cluster]# ssh-copy-id 192.168.4.15

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
[email protected]'s password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '192.168.4.15'"
and check to make sure that only the key(s) you wanted were added.

Synchronize the configuration file and keys to node5:
[root@node1 ~]# cd /root/ceph-cluster
[root@node1 ~]# ceph-deploy admin node5
[root@node1 ceph-cluster]# ceph-deploy admin node5
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy admin node5
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f08adf35440>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] client : [‘node5’]
[ceph_deploy.cli][INFO ] func : <function admin at 0x7f08aeb90140>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.admin][DEBUG ] Pushing admin keys and conf to node5
[node5][DEBUG ] connected to host: node5
[node5][DEBUG ] detect platform information from remote host
[node5][DEBUG ] detect machine type
[node5][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf

3) Create a gateway instance.
Start an RGW service:
[root@node1 ~]# ceph-deploy rgw create node5

[root@node1 ceph-cluster]# ceph-deploy rgw create node5
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy rgw create node5
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] rgw : [(‘node5’, ‘rgw.node5’)]
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7fc0a32e7fc8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function rgw at 0x7fc0a3f50050>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.rgw][DEBUG ] Deploying rgw, cluster ceph hosts node5:rgw.node5
[node5][DEBUG ] connected to host: node5
[node5][DEBUG ] detect platform information from remote host
[node5][DEBUG ] detect machine type
[ceph_deploy.rgw][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[ceph_deploy.rgw][DEBUG ] remote host will use systemd
[ceph_deploy.rgw][DEBUG ] deploying rgw bootstrap to node5
[node5][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node5][WARNIN] rgw keyring does not exist yet, creating one
[node5][DEBUG ] create a keyring file
[node5][DEBUG ] create path recursively if it doesn’t exist
[node5][INFO ] Running command: ceph --cluster ceph --name client.bootstrap-rgw --keyring /var/lib/ceph/bootstrap-rgw/ceph.keyring auth get-or-create client.rgw.node5 osd allow rwx mon allow rw -o /var/lib/ceph/radosgw/ceph-rgw.node5/keyring
[node5][INFO ] Running command: systemctl enable [email protected]
[node5][WARNIN] Created symlink from /etc/systemd/system/ceph-radosgw.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[node5][INFO ] Running command: systemctl start [email protected]
[node5][INFO ] Running command: systemctl enable ceph.target
[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host node5 and default port 7480
[root@node1 ceph-cluster]#

Verify on node5:
[root@node5 ~]# ps aux | grep radosgw
ceph 3395 0.5 2.0 2291248 21120 ? Ssl 15:59 0:00 /usr/bin/radosgw -f --cluster ceph --name client.rgw.node5 --setuser ceph --setgroup ceph
root 3618 0.0 0.0 112676 984 pts/1 S+ 16:00 0:00 grep --color=auto radosgw
[root@node5 ~]# netstat -antup | grep 7480
tcp 0 0 0.0.0.0:7480 0.0.0.0:* LISTEN 3395/radosgw
[root@node5 ~]# netstat -antup | grep radosgw
tcp 0 0 0.0.0.0:7480 0.0.0.0:* LISTEN 3395/radosgw
tcp 0 0 192.168.4.15:40238 192.168.4.11:6800 ESTABLISHED 3395/radosgw
tcp 0 9 192.168.4.15:41050 192.168.4.13:6804 ESTABLISHED 3395/radosgw
tcp 0 0 192.168.4.15:45620 192.168.4.13:6800 ESTABLISHED 3395/radosgw
tcp 0 9 192.168.4.15:54810 192.168.4.11:6804 ESTABLISHED 3395/radosgw
tcp 0 0 192.168.4.15:51324 192.168.4.12:6789 ESTABLISHED 3395/radosgw
tcp 0 9 192.168.4.15:54852 192.168.4.12:6804 ESTABLISHED 3395/radosgw
tcp 0 0 192.168.4.15:49496 192.168.4.12:6800 ESTABLISHED 3395/radosgw

4) Change the service port.
Log in to node5. RGW listens on port 7480 by default; changing it to 8000 (or 80) makes it easier for clients to remember and use.
[root@node5 ~]# vim /etc/ceph/ceph.conf
[client.rgw.node5]
host = node5
rgw_frontends = "civetweb port=8000"
//node5 is the hostname
//civetweb is the web server embedded in RGW

[root@node5 ~]# cat /etc/ceph/ceph.conf
[global]
fsid = 29908a48-7574-4aac-ac14-80a44b7cffbf
mon_initial_members = node1, node2, node3
mon_host = 192.168.4.11,192.168.4.12,192.168.4.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
[client.rgw.node5]
host = node5
rgw_frontends = "civetweb port=8000"

[root@node5 ~]# vim /etc/ceph/ceph.conf
[root@node5 ~]# systemctl restart ceph\*
[root@node5 ~]# netstat -antup | grep radosgw
tcp 0 0 0.0.0.0:8000 0.0.0.0:* LISTEN 3711/radosgw
tcp 0 0 192.168.4.15:54838 192.168.4.11:6804 ESTABLISHED 3711/radosgw
tcp 0 0 192.168.4.15:45638 192.168.4.13:6800 ESTABLISHED 3711/radosgw
tcp 0 0 192.168.4.15:49524 192.168.4.12:6800 ESTABLISHED 3711/radosgw
tcp 0 0 192.168.4.15:41082 192.168.4.13:6804 ESTABLISHED 3711/radosgw
tcp 0 0 192.168.4.15:54882 192.168.4.12:6804 ESTABLISHED 3711/radosgw
tcp 0 0 192.168.4.15:47214 192.168.4.13:6789 ESTABLISHED 3711/radosgw
tcp 0 0 192.168.4.15:40270 192.168.4.11:6800 ESTABLISHED 3711/radosgw

Step 2: Client testing
1) Test with curl.
[root@client ~]# curl 192.168.4.15:8000
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>

Output:
[root@client ~]# curl 192.168.4.15:8000
<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>[root@client ~]#
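This anonymous response can be sanity-checked from the shell without any S3 client. A sketch that extracts the owner <ID> element from the XML (the response string below is copied from the curl output above):

```shell
# Pull the <ID> element out of RGW's anonymous ListAllMyBucketsResult
# response; an anonymous request should report the owner as "anonymous".
resp='<?xml version="1.0" encoding="UTF-8"?><ListAllMyBucketsResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><Owner><ID>anonymous</ID><DisplayName></DisplayName></Owner><Buckets></Buckets></ListAllMyBucketsResult>'
owner=$(printf '%s' "$resp" | sed -n 's/.*<ID>\(.*\)<\/ID>.*/\1/p')
echo "$owner"    # anonymous
```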

2) Access with third-party software.
Log in to node5 (the RGW host) and create an account:
[root@node5 ~]# radosgw-admin user create \
--uid="testuser" --display-name="First User"
... ...
"keys": [
{
"user": "testuser",
"access_key": "5E42OEGB1M95Y49IBG7B",
"secret_key": "i8YtM8cs7QDCK3rTRopb0TTPBFJVXdEryRbeLGK6"
}
],
... ...
#
[root@node5 ~]# radosgw-admin user info --uid=testuser
//testuser is the user name; the keys are the account's access credentials

[root@node5 ~]# radosgw-admin user create --uid="testuser" --display-name="First User"
{
"user_id": "testuser",
"display_name": "First User",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "testuser",
"access_key": "9ZQIAYP67UVVNG7LFMKC",
"secret_key": "ilIzMZ0GVGGWHnmd6Q3KppfRNJkDQoaFttHb3SLO"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"temp_url_keys": []
}

[root@node5 ~]# radosgw-admin user info --uid=testuser
{
"user_id": "testuser",
"display_name": "First User",
"email": "",
"suspended": 0,
"max_buckets": 1000,
"auid": 0,
"subusers": [],
"keys": [
{
"user": "testuser",
"access_key": "9ZQIAYP67UVVNG7LFMKC",
"secret_key": "ilIzMZ0GVGGWHnmd6Q3KppfRNJkDQoaFttHb3SLO"
}
],
"swift_keys": [],
"caps": [],
"op_mask": "read, write, delete",
"default_placement": "",
"placement_tags": [],
"bucket_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"user_quota": {
"enabled": false,
"max_size_kb": -1,
"max_objects": -1
},
"temp_url_keys": []
}

3) Install software on the client.
[root@client ~]# yum install s3cmd-2.0.1-1.el7.noarch.rpm

[root@client ~]# yum install -y s3cmd-2.0.1-1.el7.noarch.rpm
已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
正在检查 s3cmd-2.0.1-1.el7.noarch.rpm: s3cmd-2.0.1-1.el7.noarch
s3cmd-2.0.1-1.el7.noarch.rpm 将被安装
正在解决依赖关系
–> 正在检查事务
—> 软件包 s3cmd.noarch.0.2.0.1-1.el7 将被 安装
–> 解决依赖关系完成

依赖关系解决

======================================================================================
Package 架构 版本 源 大小
======================================================================================
正在安装:
s3cmd noarch 2.0.1-1.el7 /s3cmd-2.0.1-1.el7.noarch 734 k

事务概要
======================================================================================
安装 1 软件包

总计:734 k
安装大小:734 k
Downloading packages:
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
正在安装 : s3cmd-2.0.1-1.el7.noarch 1/1
验证中 : s3cmd-2.0.1-1.el7.noarch 1/1

已安装:
s3cmd.noarch 0:2.0.1-1.el7

Modify the software configuration:
[root@client ~]# s3cmd --configure
Access Key: 5E42OEGB1M95Y49IBG7B
Secret Key: i8YtM8cs7QDCK3rTRopb0TTPBFJVXdEryRbeLGK6
S3 Endpoint [s3.amazonaws.com]: 192.168.4.15:8000
[%(bucket)s.s3.amazonaws.com]: %(bucket)s.192.168.4.15:8000
Use HTTPS protocol [Yes]: No
Test access with supplied credentials? [Y/n] Y
Save settings? [y/N] y
//Note: accept the defaults (press Enter) for all other prompts
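s3cmd saves these answers to /root/.s3cfg. The lines that differ from the defaults look roughly like this (a sketch; the keys are the ones from the session above):

```
[default]
access_key = 5E42OEGB1M95Y49IBG7B
secret_key = i8YtM8cs7QDCK3rTRopb0TTPBFJVXdEryRbeLGK6
host_base = 192.168.4.15:8000
host_bucket = %(bucket)s.192.168.4.15:8000
use_https = False
```

Editing this file directly is an alternative to re-running `s3cmd --configure` when only one value needs to change.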

Output:
[root@client ~]# s3cmd --configure
(Note: a mistake was made partway through, so the configuration was redone; prompts not shown were accepted with Enter.)
Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key: 9ZQIAYP67UVVNG7LFMKC
Secret Key: ilIzMZ0GVGGWHnmd6Q3KppfRNJkDQoaFttHb3SLO
Default Region [US]:

Use “s3.amazonaws.com” for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [s3.amazonaws.com]: 192.168.4.15:8000

Use “%(bucket)s.s3.amazonaws.com” to the target Amazon S3. “%(bucket)s” and “%(location)s” vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [%(bucket)s.s3.amazonaws.com]: %(bucket)s.192.168.4.15:8000

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [Yes]: No

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can’t connect to S3 directly
HTTP Proxy server name:

New settings:
Access Key: 9ZQIAYP67UVVNG7LFMKC
Secret Key: ilIzMZ0GVGGWHnmd6Q3KppfRNJkDQoaFttHb3SLO
Default Region: US
S3 Endpoint: 192.168.4.15:8000
DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.192.168.4.15:8000
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n] y
Please wait, attempting to list all buckets…
ERROR: Test failed: [Errno -2] Name or service not known

Retry configuration? [Y/n] y

Enter new values or accept defaults in brackets with Enter.
Refer to user manual for detailed description of all options.

Access key and Secret key are your identifiers for Amazon S3. Leave them empty for using the env variables.
Access Key [9ZQIAYP67UVVNG7LFMKC]:
Secret Key [ilIzMZ0GVGGWHnmd6Q3KppfRNJkDQoaFttHb3SLO]:
Default Region [US]:

Use “s3.amazonaws.com” for S3 Endpoint and not modify it to the target Amazon S3.
S3 Endpoint [ 192.168.4.15:8000]: 192.168.4.15:8000

Use “%(bucket)s.s3.amazonaws.com” to the target Amazon S3. “%(bucket)s” and “%(location)s” vars can be used
if the target S3 system supports dns based buckets.
DNS-style bucket+hostname:port template for accessing a bucket [ %(bucket)s.192.168.4.15:8000]:

Encryption password is used to protect your files from reading
by unauthorized persons while in transfer to S3
Encryption password:
Path to GPG program [/usr/bin/gpg]:

When using secure HTTPS protocol all communication with Amazon S3
servers is protected from 3rd party eavesdropping. This method is
slower than plain HTTP, and can only be proxied with Python 2.7 or newer
Use HTTPS protocol [No]:

On some networks all internet access must go through a HTTP proxy.
Try setting it here if you can’t connect to S3 directly
HTTP Proxy server name:

New settings:
Access Key: 9ZQIAYP67UVVNG7LFMKC
Secret Key: ilIzMZ0GVGGWHnmd6Q3KppfRNJkDQoaFttHb3SLO
Default Region: US
S3 Endpoint: 192.168.4.15:8000
DNS-style bucket+hostname:port template for accessing a bucket: %(bucket)s.192.168.4.15:8000
Encryption password:
Path to GPG program: /usr/bin/gpg
Use HTTPS protocol: False
HTTP Proxy server name:
HTTP Proxy server port: 0

Test access with supplied credentials? [Y/n]
Please wait, attempting to list all buckets…
Success. Your access key and secret key worked fine 🙂

Now verifying that encryption works…
Not configured. Never mind.

Save settings? [y/N] y
Configuration saved to ‘/root/.s3cfg’

4) Create a bucket for storing data (similar to a directory for data).
[root@client ~]# s3cmd ls
[root@client ~]# s3cmd mb s3://my_bucket
Bucket 's3://my_bucket/' created
[root@client ~]# s3cmd ls
2018-05-09 08:14 s3://my_bucket
[root@client ~]# s3cmd put /var/log/messages s3://my_bucket/log/
[root@client ~]# s3cmd ls
2018-05-09 08:14 s3://my_bucket
[root@client ~]# s3cmd ls s3://my_bucket
DIR s3://my_bucket/log/
[root@client ~]# s3cmd ls s3://my_bucket/log/
2018-05-09 08:19 309034 s3://my_bucket/log/messages

The full session output:
[root@client ~]# s3cmd ls
(nothing is listed before any bucket is created)
[root@client ~]# s3cmd mb s3://my_bucket
Bucket 's3://my_bucket/' created
[root@client ~]# s3cmd ls
2018-10-12 08:24 s3://my_bucket
(upload a file as a test)
[root@client ~]# s3cmd put /var/log/messages s3://my_bucket/log/
upload: '/var/log/messages' -> 's3://my_bucket/log/messages' [1 of 1]
586080 of 586080 100% in 3s 151.45 kB/s done
[root@client ~]# s3cmd ls
2018-10-12 08:24 s3://my_bucket
[root@client ~]# s3cmd ls s3://my_bucket
DIR s3://my_bucket/log/
[root@client ~]# s3cmd ls s3://my_bucket/log
DIR s3://my_bucket/log/
[root@client ~]# s3cmd ls s3://my_bucket/log/
2018-10-12 08:32 586080 s3://my_bucket/log/messages

Test the download function
[root@client ~]# s3cmd get s3://my_bucket/log/messages /tmp/

[root@client ~]# s3cmd get s3://my_bucket/log/messages /tmp/
download: 's3://my_bucket/log/messages' -> '/tmp/messages' [1 of 1]
586080 of 586080 100% in 0s 28.11 MB/s done
[root@client ~]# ll /tmp/
总用量 576
-rw-r--r--. 1 root root 586080 10月 12 08:32 messages

Test the delete function
[root@client ~]# s3cmd del s3://my_bucket/log/messages

[root@client ~]# s3cmd del s3://my_bucket/log/messages
delete: 's3://my_bucket/log/messages'
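The put/get/del tests above can be folded into one round-trip integrity check. The sketch below only *prints* the planned s3cmd commands so they can be reviewed before running; the bucket name `my_bucket` comes from this walkthrough, while the `check/` prefix and the `plan_roundtrip` helper are made up for the example:

```shell
# Print an upload/download/checksum plan for a given file.
# Nothing is executed here; review the plan, then pipe it to sh.
plan_roundtrip() {
    src=$1
    name=$(basename "$src")
    echo "s3cmd put $src s3://my_bucket/check/"
    echo "s3cmd get --force s3://my_bucket/check/$name /tmp/$name.check"
    echo "md5sum $src /tmp/$name.check  # the two sums must match"
}

plan_roundtrip /var/log/messages
# To actually run the check: plan_roundtrip /var/log/messages | sh
```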


Common problems when creating a Ceph cluster

Problem 1:
Clocks not synchronized
[root@node1 ~]# ceph -s
cluster 29908a48-7574-4aac-ac14-80a44b7cffbf
health HEALTH_WARN
clock skew detected on mon.node2, mon.node3
Monitor clock skew detected
monmap e1: 3 mons at {node1=192.168.4.11:6789/0,node2=192.168.4.12:6789/0,node3=192.168.4.13:6789/0}
election epoch 10, quorum 0,1,2 node1,node2,node3
osdmap e36: 6 osds: 6 up, 6 in
flags sortbitwise
pgmap v4381: 64 pgs, 1 pools, 115 MB data, 3915 objects
578 MB used, 60795 MB / 61373 MB avail
64 active+clean

After the clocks are synchronized, the status is normal:
[root@node1 ~]# ceph -s
cluster 29908a48-7574-4aac-ac14-80a44b7cffbf
health HEALTH_OK
monmap e1: 3 mons at {node1=192.168.4.11:6789/0,node2=192.168.4.12:6789/0,node3=192.168.4.13:6789/0}
election epoch 10, quorum 0,1,2 node1,node2,node3
osdmap e36: 6 osds: 6 up, 6 in
flags sortbitwise
pgmap v4381: 64 pgs, 1 pools, 115 MB data, 3915 objects
578 MB used, 60795 MB / 61373 MB avail
64 active+clean
[root@node1 ~]#

If errors remain, data is still being synchronized; the cluster recovers once synchronization completes.
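To clear the skew on all three monitors in one pass, restart chronyd and force an immediate time step on each node. The helper below just prints the per-node commands for review (node names follow this cluster; `chronyc makestep` forces the clock jump):

```shell
# Print the commands that clear clock skew on each monitor node.
skew_fix_plan() {
    for host in node1 node2 node3; do
        echo "ssh $host 'systemctl restart chronyd && chronyc makestep'"
    done
}

skew_fix_plan
# Review the output, then run: skew_fix_plan | sh
```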

Problem 2:
After rebooting the nodes, checking the status reports errors:
[root@node1 ceph-cluster]# ceph -s
cluster 29908a48-7574-4aac-ac14-80a44b7cffbf
health HEALTH_ERR
56 pgs are stuck inactive for more than 300 seconds
56 pgs stale
56 pgs stuck stale
monmap e1: 3 mons at {node1=192.168.4.11:6789/0,node2=192.168.4.12:6789/0,node3=192.168.4.13:6789/0}
election epoch 16, quorum 0,1,2 node1,node2,node3
osdmap e39: 6 osds: 1 up, 1 in; 28 remapped pgs
flags sortbitwise
pgmap v4384: 64 pgs, 1 pools, 115 MB data, 3915 objects
98352 kB used, 10132 MB / 10228 MB avail
56 stale+active+clean
8 active+clean
Solution:
Re-grant ownership of the journal partitions
[root@node1 ceph-cluster]# chown ceph.ceph /dev/vdb1
[root@node1 ceph-cluster]# chown ceph.ceph /dev/vdb2
Restart the services
[root@node1 ceph-cluster]# systemctl restart ceph\*
Check the status again and it is back to normal:
[root@node1 ceph-cluster]# ceph -s
cluster 29908a48-7574-4aac-ac14-80a44b7cffbf
health HEALTH_OK
monmap e1: 3 mons at {node1=192.168.4.11:6789/0,node2=192.168.4.12:6789/0,node3=192.168.4.13:6789/0}
election epoch 22, quorum 0,1,2 node1,node2,node3
osdmap e46: 6 osds: 6 up, 6 in
flags sortbitwise
pgmap v4398: 64 pgs, 1 pools, 115 MB data, 3915 objects
565 MB used, 60808 MB / 61373 MB avail
64 active+clean
Write the chown commands into /etc/rc.local so they do not have to be run by hand after the next reboot:
[root@node1 ceph-cluster]# cat /etc/rc.local
#!/bin/bash
# THIS FILE IS ADDED FOR COMPATIBILITY PURPOSES
#
# It is highly advisable to create own systemd services or udev rules
# to run scripts during boot instead of using this file.
#
# In contrast to previous versions due to parallel execution during boot
# this script will NOT be run after all other services.
#
# Please note that you must run 'chmod +x /etc/rc.d/rc.local' to ensure
# that this script will be executed during boot.

touch /var/lock/subsys/local
chown ceph.ceph /dev/vdb1
chown ceph.ceph /dev/vdb2
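As the stock rc.local header itself suggests, a udev rule is the more robust way to persist this ownership change. A minimal sketch, assuming the same /dev/vdb1 and /dev/vdb2 journal partitions as above (the file name 70-ceph-journal.rules is hypothetical):

```
# /etc/udev/rules.d/70-ceph-journal.rules  (hypothetical file name)
# Re-assign the journal partitions to the ceph user on every boot,
# replacing the chown lines in /etc/rc.local.
KERNEL=="vdb1", OWNER="ceph", GROUP="ceph"
KERNEL=="vdb2", OWNER="ceph", GROUP="ceph"
```

Unlike rc.local, udev re-applies the ownership whenever the device node is (re)created, not only once at boot.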


Creating a Ceph cluster

Use 4 virtual machines: 1 client and 3 storage cluster servers. The IP plan is as follows:
client 192.168.4.10
node1 192.168.4.11
node2 192.168.4.12
node3 192.168.4.13
Step 1: Pre-installation preparation
1) On the physical machine, configure the yum repository for all nodes; note that every virtual host must have the installation media mounted.
[root@root9pc01 ~]# yum -y install vsftpd
[root@root9pc01 ~]# mkdir /var/ftp/ceph
##################################
Copy the ISO from the cluster materials and mount it to the FTP directory
[root@room9pc52 ~]# cd cluster/
[root@room9pc52 cluster]# ll
总用量 968676
drwxr-xr-x 2 root root 4096 6月 12 16:55 clusterPPT
drwxr-xr-x 2 root root 4096 6月 12 16:46 cluster
-rw-r--r-- 1 root root 10919964 3月 23 2018 Discuz_X3.3_SC_UTF8.zip
-rw-r--r-- 1 root root 980799488 5月 16 19:42 rhcs2.0-rhosp9-20161113-x86_64.iso
-rw-r--r-- 1 root root 190956 5月 16 19:44 s3cmd-2.0.1-1.el7.noarch.rpm
[root@room9pc52 cluster]# cp rhcs2.0-rhosp9-20161113-x86_64.iso /iso/
[root@room9pc52 cluster]# ll
总用量 968680
drwxr-xr-x 2 root root 4096 6月 12 16:55 clusterPPT
drwxr-xr-x 2 root root 4096 6月 12 16:46 cluster
-rw-r--r-- 1 root root 10919964 3月 23 2018 Discuz_X3.3_SC_UTF8.zip
-rw-r--r-- 1 root root 980799488 5月 16 19:42 rhcs2.0-rhosp9-20161113-x86_64.iso
-rw-r--r-- 1 root root 190956 5月 16 19:44 s3cmd-2.0.1-1.el7.noarch.rpm
[root@room9pc52 cluster]# cd /iso/
[root@room9pc52 iso]# ll
总用量 23948404
-rwxr-xr-x 1 qemu qemu 8694792192 4月 9 2018 CentOS-7-x86_64-Everything-1708.iso
-rwxrwxrwx 1 root root 3419052032 12月 1 2014 cn_windows_7_ultimate_with_sp1_x64_dvd_618537.iso
drwx------ 2 root root 4096 1月 18 2018 lost+found
-rw-r--r-- 1 root root 980799488 10月 11 10:11 rhcs2.0-rhosp9-20161113-x86_64.iso
-rw-r--r-- 1 root root 3841982464 11月 18 2017 rhel-server-6.7-x86_64-dvd.iso
-rw-r--r-- 1 qemu qemu 4059037696 1月 10 2018 rhel-server-7.4-x86_64-dvd.iso
-rw-r--r-- 1 qemu qemu 3527475200 1月 12 2018 Win10_Pro_X64_zh_CN.iso
[root@room9pc52 iso]# mount -o loop rhcs2.0-rhosp9-20161113-x86_64.iso /var/ftp/ceph
mount: /dev/loop1 写保护,将以只读方式挂载
###################################################
[root@root9pc01 ~]# mount -o loop \
rhcs2.0-rhosp9-20161113-x86_64.iso /var/ftp/ceph
[root@root9pc01 ~]# systemctl restart vsftpd

2) Modify the yum configuration on all nodes (node1 as the example)
[root@node1 ~]# cat /etc/yum.repos.d/ceph.repo
[mon]
name=mon
baseurl=ftp://192.168.4.254/ceph/rhceph-2.0-rhel-7-x86_64/MON
gpgcheck=0
[osd]
name=osd
baseurl=ftp://192.168.4.254/ceph/rhceph-2.0-rhel-7-x86_64/OSD
gpgcheck=0
[tools]
name=tools
baseurl=ftp://192.168.4.254/ceph/rhceph-2.0-rhel-7-x86_64/Tools
gpgcheck=0
After finishing, verify the repositories:
[root@11 yum.repos.d]# yum repolist
已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
192.168.4.254_rhel7 | 4.1 kB 00:00:00
(1/2): 192.168.4.254_rhel7/group_gz | 137 kB 00:00:00
(2/2): 192.168.4.254_rhel7/primary_db | 4.0 MB 00:00:00
源标识 源名称 状态
192.168.4.254_rhel7 added from: ftp://192.168.4.254/rhel7 4,986
mon mon 41
osd osd 28
tools tools 33
repolist: 5,088
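Once the repo file is verified on node1, it can be pushed to the remaining machines rather than edited on each one by hand. The helper below prints the scp commands for review (the IP list follows the plan above; node1 at .11 already has the file, and the `repo_push_plan` name is made up for this sketch):

```shell
# Print commands that copy the verified ceph.repo to the other hosts.
repo_push_plan() {
    for ip in 10 12 13; do
        echo "scp /etc/yum.repos.d/ceph.repo 192.168.4.$ip:/etc/yum.repos.d/"
    done
}

repo_push_plan
# Review, then run: repo_push_plan | sh
```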

3) Modify /etc/hosts and synchronize it to all hosts.
Warning: the names resolved in /etc/hosts must match each host's own hostname!
[root@node1 ~]# cat /etc/hosts
… …
192.168.4.10 client
192.168.4.11 node1
192.168.4.12 node2
192.168.4.13 node3
[root@node1 ~]# for i in 10 11 12 13
> do
> scp /etc/hosts 192.168.4.$i:/etc/
> done
[root@11 yum.repos.d]# for i in 10 11 12 13; do scp /etc/hosts 192.168.4.$i:/etc/; done
The authenticity of host '192.168.4.10 (192.168.4.10)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.4.10' (ECDSA) to the list of known hosts.
root@192.168.4.10's password:
hosts 100% 247 219.3KB/s 00:00
The authenticity of host '192.168.4.11 (192.168.4.11)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.4.11' (ECDSA) to the list of known hosts.
root@192.168.4.11's password:
hosts 100% 247 574.1KB/s 00:00
The authenticity of host '192.168.4.12 (192.168.4.12)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.4.12' (ECDSA) to the list of known hosts.
root@192.168.4.12's password:
hosts 100% 247 255.2KB/s 00:00
The authenticity of host '192.168.4.13 (192.168.4.13)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.4.13' (ECDSA) to the list of known hosts.
root@192.168.4.13's password:
hosts 100% 247 284.7KB/s 00:00
[root@11 yum.repos.d]#
[root@11 yum.repos.d]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.4.10 client
192.168.4.11 node1
192.168.4.12 node2
192.168.4.13 node3

Note: the hostname must match the name in /etc/hosts. If it does not, change it now. Taking node1 as the example, its original hostname was 11:
[root@11 yum.repos.d]# hostnamectl set-hostname node1
[root@11 yum.repos.d]# exit
登出
Connection to 192.168.4.11 closed.
[root@room9pc52 ~]# ssh 192.168.4.11
root@192.168.4.11's password:
Last login: Thu Oct 11 10:09:18 2018 from 192.168.4.254
[root@node1 ~]#

4) Configure passwordless SSH connections.
[root@node1 ~]# ssh-keygen -f /root/.ssh/id_rsa -N ''
[root@node1 ~]# for i in 10 11 12 13
> do
> ssh-copy-id 192.168.4.$i
> done
Note: every host must be reachable from every other host without a password
[root@client ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:bW6lHr46lUqKY+fHysXRzWoNGdVUuKZ9PNPhcv+aEhY root@client
The key's randomart image is:
+---[RSA 2048]----+
| .o.o.|
| . o |
| . . |
| o =E o. |
| S *.+=..o|
| ..+o*+..=+|
| . +ooB...o.+|
| +.o.== .. ..|
| . ++o.o+. .o.o|
+----[SHA256]-----+
[root@client ~]#
[root@client ~]# for i in 10 11 12 13
> do
> ssh-copy-id 192.168.4.$i
> done
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.4.10 (192.168.4.10)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.4.10's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '192.168.4.10'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.4.11 (192.168.4.11)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.4.11's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '192.168.4.11'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.4.12 (192.168.4.12)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.4.12's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '192.168.4.12'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.4.13 (192.168.4.13)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.4.13's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '192.168.4.13'"
and check to make sure that only the key(s) you wanted were added.

Step 2: Configure NTP time synchronization

1) Create the NTP server.
[root@client ~]# yum -y install chrony
已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
软件包 chrony-3.1-2.el7.x86_64 已安装并且是最新版本
无须任何处理
[root@client ~]# vim /etc/chrony.conf
[root@client ~]# cat /etc/chrony.conf | grep -v "^#" | grep -v "^$"
server 0.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.4.0/24
local stratum 10
logdir /var/log/chrony
[root@client ~]# systemctl restart chronyd

2) All other nodes synchronize time with the NTP server (node1 as the example).
[root@node1 ~]# cat /etc/chrony.conf
server 192.168.4.10 iburst
[root@node1 ~]# systemctl restart chronyd

Step 3: Prepare the storage disks
1) On the physical machine, prepare 3 disks for each virtual machine (either from the command line or by adding them in the GUI).
[root@room9pc52 iso]# cd /var/lib/libvirt/images/
[root@room9pc52 images]# ll
总用量 295860
-rw-r--r-- 1 qemu qemu 74252288 10月 11 10:54 a10.img
-rw-r--r-- 1 qemu qemu 74907648 10月 11 10:53 a11.img
-rw-r--r-- 1 qemu qemu 75235328 10月 11 10:57 a12.img
-rw-r--r-- 1 qemu qemu 75104256 10月 11 10:56 a13.img
-rw-r--r-- 1 root root 197120 8月 14 22:53 a50.img
drwxr-xr-x 2 root root 4096 1月 19 2018 bin
drwxr-xr-x 2 root root 4096 1月 23 2018 conf.d
drwxr-xr-x 5 root root 4096 1月 12 2018 content
drwxr-xr-x 7 root root 4096 1月 19 2018 db
drwxr-xr-x 4 root root 4096 1月 10 2018 exam
drwxr-xr-x 4 root root 4096 10月 11 10:11 iso
drwx------. 2 root root 16384 1月 18 2018 lost+found
drwx------ 3 root root 4096 1月 16 2018 qemu
-rw-r--r-- 1 root root 1860 1月 19 2018 Student.sh
-rw-r--r-- 1 root root 2794667 1月 13 2018 tedu-wallpaper-01.png
-rw-r--r-- 1 root root 427125 1月 19 2018 tedu-wallpaper-weekend.png
-rw------- 1 root root 4644 8月 13 09:14 vsftpd.conf
-rw-r--r-- 1 root root 1859 1月 19 2018 Weekend.sh
-rw-r--r-- 1 root root 197632 8月 12 13:13 win.img
[root@room9pc52 images]# qemu-img create -f qcow2 node1-vdb.vol 10g
Formatting 'node1-vdb.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node1-vdc.vol 10g
Formatting 'node1-vdc.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node1-vdd.vol 10g
Formatting 'node1-vdd.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node2-vdb.vol 10g
Formatting 'node2-vdb.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node2-vdc.vol 10g
Formatting 'node2-vdc.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node2-vdd.vol 10g
Formatting 'node2-vdd.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node3-vdb.vol 10g
Formatting 'node3-vdb.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node3-vdc.vol 10g
Formatting 'node3-vdc.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node3-vdd.vol 10g
Formatting 'node3-vdd.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# ll -h node*
-rw-r--r-- 1 root root 193K 10月 11 11:00 node1-vdb.vol
-rw-r--r-- 1 root root 193K 10月 11 11:00 node1-vdc.vol
-rw-r--r-- 1 root root 193K 10月 11 11:00 node1-vdd.vol
-rw-r--r-- 1 root root 193K 10月 11 11:00 node2-vdb.vol
-rw-r--r-- 1 root root 193K 10月 11 11:00 node2-vdc.vol
-rw-r--r-- 1 root root 193K 10月 11 11:00 node2-vdd.vol
-rw-r--r-- 1 root root 193K 10月 11 11:01 node3-vdb.vol
-rw-r--r-- 1 root root 193K 10月 11 11:01 node3-vdc.vol
-rw-r--r-- 1 root root 193K 10月 11 11:01 node3-vdd.vol
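The nine qemu-img calls above follow one pattern, so they can be generated with a nested loop. The function below (a sketch; the `disk_create_plan` name is made up) echoes the commands with the same names and 10G size used above, so the list can be reviewed before running it:

```shell
# Print the qemu-img commands for all nine node disks.
disk_create_plan() {
    for node in node1 node2 node3; do
        for disk in vdb vdc vdd; do
            echo "qemu-img create -f qcow2 ${node}-${disk}.vol 10G"
        done
    done
}

disk_create_plan
# In /var/lib/libvirt/images, run: disk_create_plan | sh
```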
2) Use virt-manager to add the disks to the virtual machines.
(In the GUI, attach the three disks just created to each virtual machine.)
[root@node1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
vdb 252:16 0 10G 0 disk
vdc 252:32 0 10G 0 disk
vdd 252:48 0 10G 0 disk
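If you prefer the command line over virt-manager for step 2, `virsh attach-disk` can attach the volumes. The helper below prints the commands for one VM; the libvirt domain name is passed in as an argument, since the real domain names on this host are not shown above and must be checked with `virsh list --all`:

```shell
# Print virsh attach-disk commands for one domain's three volumes.
# $1 = libvirt domain name, $2 = volume name prefix (e.g. node1).
attach_plan() {
    dom=$1; prefix=$2
    for disk in vdb vdc vdd; do
        echo "virsh attach-disk $dom /var/lib/libvirt/images/${prefix}-${disk}.vol $disk --subdriver qcow2 --persistent"
    done
}

attach_plan node1 node1
# Review, then run on the physical host: attach_plan node1 node1 | sh
```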

Case 2: Deploy a Ceph cluster
2.1 Problem

Following on from exercise 1, deploy the Ceph cluster servers to achieve the following goals:
Install the deployment tool ceph-deploy
Create the Ceph cluster
Prepare journal disk partitions
Create OSD storage space
Check the Ceph status and verify the cluster
2.2 Steps

Follow the steps below to implement this case.
Step 1: Deploy the software

1) Install the deployment tool on node1 and learn its syntax.
[root@node1 ~]# yum -y install ceph-deploy
[root@node1 ~]# ceph-deploy --help

The full output:
[root@node1 ~]# yum install -y ceph-deploy
已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
正在解决依赖关系
--> 正在检查事务
---> 软件包 ceph-deploy.noarch.0.1.5.33-1.el7cp 将被 安装
--> 解决依赖关系完成

依赖关系解决

=============================================================================================
Package 架构 版本 源 大小
=============================================================================================
正在安装:
ceph-deploy noarch 1.5.33-1.el7cp tools 272 k

事务概要
=============================================================================================
安装 1 软件包

总下载量:272 k
安装大小:1.1 M
Downloading packages:
ceph-deploy-1.5.33-1.el7cp.noarch.rpm | 272 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
正在安装 : ceph-deploy-1.5.33-1.el7cp.noarch 1/1
192.168.4.254_rhel7/productid | 1.6 kB 00:00:00
mon/productid | 1.6 kB 00:00:00
osd/productid | 1.6 kB 00:00:00
验证中 : ceph-deploy-1.5.33-1.el7cp.noarch 1/1

已安装:
ceph-deploy.noarch 0:1.5.33-1.el7cp

完毕!
[root@node1 ~]# ceph-deploy --help
usage: ceph-deploy [-h] [-v | -q] [--version] [--username USERNAME]
[--overwrite-conf] [--cluster NAME] [--ceph-conf CEPH_CONF]
COMMAND ...

Easy Ceph deployment

    -^-
   /   \
   |O o|  ceph-deploy v1.5.33
   ).-.(
  '/|||\`
  | '|` |
    '|`

Full documentation can be found at: http://ceph.com/ceph-deploy/docs

optional arguments:
-h, --help show this help message and exit
-v, --verbose be more verbose
-q, --quiet be less verbose
--version the current installed version of ceph-deploy
--username USERNAME the username to connect to the remote host
--overwrite-conf overwrite an existing conf file on remote host (if
present)
--cluster NAME name of the cluster
--ceph-conf CEPH_CONF
use (or reuse) a given ceph.conf file

commands:
COMMAND description
new Start deploying a new cluster, and write a
CLUSTER.conf and keyring for it.
install Install Ceph packages on remote hosts.
rgw Ceph RGW daemon management
mds Ceph MDS daemon management
mon Ceph MON Daemon management
gatherkeys Gather authentication keys for provisioning new nodes.
disk Manage disks on a remote host.
osd Prepare a data disk on remote host.
admin Push configuration and client.admin key to a remote
host.
repo Repo definition management
config Copy ceph.conf to/from remote host(s)
uninstall Remove Ceph packages from remote hosts.
purge Remove Ceph packages from remote hosts and purge all
data.
purgedata Purge (delete, destroy, discard, shred) any Ceph data
from /var/lib/ceph
forgetkeys Remove authentication keys from the local directory.
pkg Manage packages on remote hosts.
calamari Install and configure Calamari nodes. Assumes that a
repository with Calamari packages is already
configured. Refer to the docs for examples
(http://ceph.com/ceph-deploy/docs/conf.html)

2) Create a working directory
[root@node1 ~]# mkdir ceph-cluster
[root@node1 ~]# cd ceph-cluster/

Step 2: Deploy the Ceph cluster

1) Create the Ceph cluster configuration.
[root@node1 ceph-cluster]# ceph-deploy new node1 node2 node3

The full output:
[root@node1 ceph-cluster]# ceph-deploy new node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy new node1 node2 node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7f1a4519fc80>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f1a445055f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['node1', 'node2', 'node3']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /usr/sbin/ip link show
[node1][INFO ] Running command: /usr/sbin/ip addr show
[node1][DEBUG ] IP addresses found: ['192.168.4.11', '192.168.122.1']
[ceph_deploy.new][DEBUG ] Resolving host node1
[ceph_deploy.new][DEBUG ] Monitor node1 at 192.168.4.11
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node2][DEBUG ] connected to host: node1
[node2][INFO ] Running command: ssh -CT -o BatchMode=yes node2
[ceph_deploy.new][WARNIN] could not connect via SSH
[ceph_deploy.new][INFO ] will connect again with password prompt
The authenticity of host 'node2 (192.168.4.12)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2' (ECDSA) to the list of known hosts.
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[ceph_deploy.new][INFO ] adding public keys to authorized_keys
[node2][DEBUG ] append contents to file
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /usr/sbin/ip link show
[node2][INFO ] Running command: /usr/sbin/ip addr show
[node2][DEBUG ] IP addresses found: ['192.168.4.12', '192.168.122.1']
[ceph_deploy.new][DEBUG ] Resolving host node2
[ceph_deploy.new][DEBUG ] Monitor node2 at 192.168.4.12
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node3][DEBUG ] connected to host: node1
[node3][INFO ] Running command: ssh -CT -o BatchMode=yes node3
[ceph_deploy.new][WARNIN] could not connect via SSH
[ceph_deploy.new][INFO ] will connect again with password prompt
The authenticity of host 'node3 (192.168.4.13)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node3' (ECDSA) to the list of known hosts.
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[ceph_deploy.new][INFO ] adding public keys to authorized_keys
[node3][DEBUG ] append contents to file
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/ip link show
[node3][INFO ] Running command: /usr/sbin/ip addr show
[node3][DEBUG ] IP addresses found: ['192.168.4.13', '192.168.122.1']
[ceph_deploy.new][DEBUG ] Resolving host node3
[ceph_deploy.new][DEBUG ] Monitor node3 at 192.168.4.13
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1', 'node2', 'node3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.4.11', '192.168.4.12', '192.168.4.13']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...

2) Install the software packages on all nodes.
[root@node1 ceph-cluster]# ceph-deploy install node1 node2 node3
The full output:
[root@node1 ceph-cluster]# ceph-deploy install node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy install node1 node2 node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f4993bfdb48>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : True
[ceph_deploy.cli][INFO ] func : <function install at 0x7f499486b7d0>
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['node1', 'node2', 'node3']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts node1 node2 node3
[ceph_deploy.install][DEBUG ] Detecting platform for host node1 ...
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node1][INFO ] installing Ceph on node1
[node1][INFO ] Running command: yum clean all
[node1][DEBUG ] 已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
[node1][DEBUG ] This system is not registered with an entitlement server. You can use subscription-manager to register.
[node1][DEBUG ] 正在清理软件源: 192.168.4.254_rhel7 mon osd tools
[node1][DEBUG ] Cleaning up everything
[node1][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[node1][INFO ] Running command: yum -y install ceph-osd ceph-mds ceph-mon ceph-radosgw
[node1][DEBUG ] 已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
[node1][DEBUG ] This system is not registered with an entitlement server. You can use subscription-manager to register.
[node1][DEBUG ] 正在解决依赖关系
[node1][DEBUG ] --> 正在检查事务
[node1][DEBUG ] ---> 软件包 ceph-mds.x86_64.1.10.2.2-38.el7cp 将被 安装
[node1][DEBUG ] --> 正在处理依赖关系 ceph-base = 1:10.2.2-38.el7cp,它被软件包 1:ceph-mds-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 libboost_iostreams-mt.so.1.53.0()(64bit),它被软件包 1:ceph-mds-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] ---> 软件包 ceph-mon.x86_64.1.10.2.2-38.el7cp 将被 安装
[node1][DEBUG ] --> 正在处理依赖关系 python-flask,它被软件包 1:ceph-mon-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 libboost_random-mt.so.1.53.0()(64bit),它被软件包 1:ceph-mon-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 libleveldb.so.1()(64bit),它被软件包 1:ceph-mon-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] ---> 软件包 ceph-osd.x86_64.1.10.2.2-38.el7cp 将被 安装
[node1][DEBUG ] --> 正在处理依赖关系 libboost_program_options-mt.so.1.53.0()(64bit),它被软件包 1:ceph-osd-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] ---> 软件包 ceph-radosgw.x86_64.1.10.2.2-38.el7cp 将被 安装
[node1][DEBUG ] --> 正在处理依赖关系 ceph-common = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 ceph-selinux = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 librados2 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 librgw2 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 mailcap,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 libfcgi.so.0()(64bit),它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 librgw.so.2()(64bit),它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在检查事务
[node1][DEBUG ] ---> 软件包 boost-iostreams.x86_64.0.1.53.0-27.el7 将被 安装
[node1][DEBUG ] ---> 软件包 boost-program-options.x86_64.0.1.53.0-27.el7 将被 安装
[node1][DEBUG ] ---> 软件包 boost-random.x86_64.0.1.53.0-27.el7 将被 安装
[node1][DEBUG ] ---> 软件包 ceph-base.x86_64.1.10.2.2-38.el7cp 将被 安装
[node1][DEBUG ] --> 正在处理依赖关系 libcephfs1 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 librbd1 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 hdparm,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 redhat-lsb-core,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 libcephfs.so.1()(64bit),它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 liblttng-ust.so.0()(64bit),它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] ---> 软件包 ceph-common.x86_64.1.10.2.2-38.el7cp 将被 安装
[node1][DEBUG ] --> 正在处理依赖关系 python-cephfs = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 python-rados = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 python-rbd = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 libbabeltrace-ctf.so.1()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 libbabeltrace.so.1()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 libboost_regex-mt.so.1.53.0()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node1][DEBUG ] ---> 软件包 ceph-selinux.x86_64.1.10.2.2-38.el7cp 将被 安装
[node1][DEBUG ] ---> 软件包 fcgi.x86_64.0.2.4.0-25.el7cp 将被 安装
[node1][DEBUG ] ---> 软件包 leveldb.x86_64.0.1.12.0-5.el7cp 将被 安装
[node1][DEBUG ] ---> 软件包 librados2.x86_64.1.0.94.5-2.el7 将被 升级
[node1][DEBUG ] ---> 软件包 librados2.x86_64.1.10.2.2-38.el7cp 将被 更新
[node1][DEBUG ] ---> 软件包 librgw2.x86_64.1.10.2.2-38.el7cp 将被 安装
[node1][DEBUG ] ---> 软件包 mailcap.noarch.0.2.1.41-2.el7 将被 安装
[node1][DEBUG ] ---> 软件包 python-flask.noarch.1.0.10.1-5.el7 将被 安装
[node1][DEBUG ] --> 正在处理依赖关系 python-itsdangerous,它被软件包 1:python-flask-0.10.1-5.el7.noarch 需要
[node1][DEBUG ] --> 正在处理依赖关系 python-jinja2,它被软件包 1:python-flask-0.10.1-5.el7.noarch 需要
[node1][DEBUG ] --> 正在处理依赖关系 python-werkzeug,它被软件包 1:python-flask-0.10.1-5.el7.noarch 需要
[node1][DEBUG ] --> 正在检查事务
[node1][DEBUG ] ---> 软件包 boost-regex.x86_64.0.1.53.0-27.el7 将被 安装
[node1][DEBUG ] ---> 软件包 hdparm.x86_64.0.9.43-5.el7 将被 安装
[node1][DEBUG ] ---> 软件包 libbabeltrace.x86_64.0.1.2.4-3.el7cp 将被 安装
[node1][DEBUG ] ---> 软件包 libcephfs1.x86_64.1.10.2.2-38.el7cp 将被 安装
[node1][DEBUG ] ---> 软件包 librbd1.x86_64.1.0.94.5-2.el7 将被 升级
[node1][DEBUG ] ---> 软件包 librbd1.x86_64.1.10.2.2-38.el7cp 将被 更新
[node1][DEBUG ] ---> 软件包 lttng-ust.x86_64.0.2.4.1-1.el7cp 将被 安装
[node1][DEBUG ] --> 正在处理依赖关系 liburcu-bp.so.1()(64bit),它被软件包 lttng-ust-2.4.1-1.el7cp.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 liburcu-cds.so.1()(64bit),它被软件包 lttng-ust-2.4.1-1.el7cp.x86_64 需要
[node1][DEBUG ] ---> 软件包 python-cephfs.x86_64.1.10.2.2-38.el7cp 将被 安装
[node1][DEBUG ] ---> 软件包 python-itsdangerous.noarch.0.0.23-1.el7 将被 安装
[node1][DEBUG ] ---> 软件包 python-jinja2.noarch.0.2.7.2-2.el7cp 将被 安装
[node1][DEBUG ] --> 正在处理依赖关系 python-babel >= 0.8,它被软件包 python-jinja2-2.7.2-2.el7cp.noarch 需要
[node1][DEBUG ] --> 正在处理依赖关系 python-markupsafe,它被软件包 python-jinja2-2.7.2-2.el7cp.noarch 需要
[node1][DEBUG ] ---> 软件包 python-rados.x86_64.1.10.2.2-38.el7cp 将被 安装
[node1][DEBUG ] ---> 软件包 python-rbd.x86_64.1.10.2.2-38.el7cp 将被 安装
[node1][DEBUG ] ---> 软件包 python-werkzeug.noarch.0.0.9.1-1.el7 将被 安装
[node1][DEBUG ] ---> 软件包 redhat-lsb-core.x86_64.0.4.1-27.el7 将被 安装
[node1][DEBUG ] --> 正在处理依赖关系 redhat-lsb-submod-security(x86-64) = 4.1-27.el7,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node1][DEBUG ] --> 正在处理依赖关系 /usr/bin/m4,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node1][DEBUG ] –> 正在处理依赖关系 /usr/bin/patch,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node1][DEBUG ] –> 正在处理依赖关系 spax,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node1][DEBUG ] –> 正在检查事务
[node1][DEBUG ] —> 软件包 m4.x86_64.0.1.4.16-10.el7 将被 安装
[node1][DEBUG ] —> 软件包 patch.x86_64.0.2.7.1-8.el7 将被 安装
[node1][DEBUG ] —> 软件包 python-babel.noarch.0.0.9.6-8.el7 将被 安装
[node1][DEBUG ] —> 软件包 python-markupsafe.x86_64.0.0.11-10.el7 将被 安装
[node1][DEBUG ] —> 软件包 redhat-lsb-submod-security.x86_64.0.4.1-27.el7 将被 安装
[node1][DEBUG ] —> 软件包 spax.x86_64.0.1.5.2-13.el7 将被 安装
[node1][DEBUG ] —> 软件包 userspace-rcu.x86_64.0.0.7.9-2.el7rhgs 将被 安装
[node1][DEBUG ] –> 解决依赖关系完成
[node1][DEBUG ]
[node1][DEBUG ] 依赖关系解决
[node1][DEBUG ]
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Package 架构 版本 源 大小
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] 正在安装:
[node1][DEBUG ] ceph-mds x86_64 1:10.2.2-38.el7cp tools 2.8 M
[node1][DEBUG ] ceph-mon x86_64 1:10.2.2-38.el7cp mon 2.8 M
[node1][DEBUG ] ceph-osd x86_64 1:10.2.2-38.el7cp osd 9.0 M
[node1][DEBUG ] ceph-radosgw x86_64 1:10.2.2-38.el7cp tools 265 k
[node1][DEBUG ] 为依赖而安装:
[node1][DEBUG ] boost-iostreams x86_64 1.53.0-27.el7 192.168.4.254_rhel7 61 k
[node1][DEBUG ] boost-program-options x86_64 1.53.0-27.el7 192.168.4.254_rhel7 156 k
[node1][DEBUG ] boost-random x86_64 1.53.0-27.el7 192.168.4.254_rhel7 39 k
[node1][DEBUG ] boost-regex x86_64 1.53.0-27.el7 192.168.4.254_rhel7 300 k
[node1][DEBUG ] ceph-base x86_64 1:10.2.2-38.el7cp mon 4.2 M
[node1][DEBUG ] ceph-common x86_64 1:10.2.2-38.el7cp mon 16 M
[node1][DEBUG ] ceph-selinux x86_64 1:10.2.2-38.el7cp mon 38 k
[node1][DEBUG ] fcgi x86_64 2.4.0-25.el7cp mon 47 k
[node1][DEBUG ] hdparm x86_64 9.43-5.el7 192.168.4.254_rhel7 83 k
[node1][DEBUG ] leveldb x86_64 1.12.0-5.el7cp mon 161 k
[node1][DEBUG ] libbabeltrace x86_64 1.2.4-3.el7cp mon 147 k
[node1][DEBUG ] libcephfs1 x86_64 1:10.2.2-38.el7cp mon 1.9 M
[node1][DEBUG ] librgw2 x86_64 1:10.2.2-38.el7cp mon 2.9 M
[node1][DEBUG ] lttng-ust x86_64 2.4.1-1.el7cp mon 176 k
[node1][DEBUG ] m4 x86_64 1.4.16-10.el7 192.168.4.254_rhel7 256 k
[node1][DEBUG ] mailcap noarch 2.1.41-2.el7 192.168.4.254_rhel7 31 k
[node1][DEBUG ] patch x86_64 2.7.1-8.el7 192.168.4.254_rhel7 110 k
[node1][DEBUG ] python-babel noarch 0.9.6-8.el7 192.168.4.254_rhel7 1.4 M
[node1][DEBUG ] python-cephfs x86_64 1:10.2.2-38.el7cp mon 86 k
[node1][DEBUG ] python-flask noarch 1:0.10.1-5.el7 mon 204 k
[node1][DEBUG ] python-itsdangerous noarch 0.23-1.el7 mon 24 k
[node1][DEBUG ] python-jinja2 noarch 2.7.2-2.el7cp mon 516 k
[node1][DEBUG ] python-markupsafe x86_64 0.11-10.el7 192.168.4.254_rhel7 25 k
[node1][DEBUG ] python-rados x86_64 1:10.2.2-38.el7cp mon 164 k
[node1][DEBUG ] python-rbd x86_64 1:10.2.2-38.el7cp mon 93 k
[node1][DEBUG ] python-werkzeug noarch 0.9.1-1.el7 mon 562 k
[node1][DEBUG ] redhat-lsb-core x86_64 4.1-27.el7 192.168.4.254_rhel7 37 k
[node1][DEBUG ] redhat-lsb-submod-security x86_64 4.1-27.el7 192.168.4.254_rhel7 15 k
[node1][DEBUG ] spax x86_64 1.5.2-13.el7 192.168.4.254_rhel7 260 k
[node1][DEBUG ] userspace-rcu x86_64 0.7.9-2.el7rhgs mon 70 k
[node1][DEBUG ] 为依赖而更新:
[node1][DEBUG ] librados2 x86_64 1:10.2.2-38.el7cp mon 1.9 M
[node1][DEBUG ] librbd1 x86_64 1:10.2.2-38.el7cp mon 2.5 M
[node1][DEBUG ]
[node1][DEBUG ] 事务概要
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] 安装 4 软件包 (+30 依赖软件包)
[node1][DEBUG ] 升级 ( 2 依赖软件包)
[node1][DEBUG ]
[node1][DEBUG ] 总下载量:49 M
[node1][DEBUG ] Downloading packages:
[node1][DEBUG ] No Presto metadata available for mon
[node1][DEBUG ] --------------------------------------------------------------------------------
[node1][DEBUG ] 总计 30 MB/s | 49 MB 00:01
[node1][DEBUG ] Running transaction check
[node1][DEBUG ] Running transaction test
[node1][DEBUG ] Transaction test succeeded
[node1][DEBUG ] Running transaction
[node1][DEBUG ] 正在安装 : boost-iostreams-1.53.0-27.el7.x86_64 1/38
[node1][DEBUG ] 正在安装 : boost-random-1.53.0-27.el7.x86_64 2/38
[node1][DEBUG ] 正在安装 : 1:libcephfs1-10.2.2-38.el7cp.x86_64 3/38
[node1][DEBUG ] 正在安装 : fcgi-2.4.0-25.el7cp.x86_64 4/38
[node1][DEBUG ] 正在安装 : boost-program-options-1.53.0-27.el7.x86_64 5/38
[node1][DEBUG ] 正在安装 : leveldb-1.12.0-5.el7cp.x86_64 6/38
[node1][DEBUG ] 正在安装 : python-werkzeug-0.9.1-1.el7.noarch 7/38
[node1][DEBUG ] 正在安装 : spax-1.5.2-13.el7.x86_64 8/38
[node1][DEBUG ] 正在安装 : redhat-lsb-submod-security-4.1-27.el7.x86_64 9/38
[node1][DEBUG ] 正在安装 : python-markupsafe-0.11-10.el7.x86_64 10/38
[node1][DEBUG ] 正在安装 : patch-2.7.1-8.el7.x86_64 11/38
[node1][DEBUG ] 正在安装 : python-babel-0.9.6-8.el7.noarch 12/38
[node1][DEBUG ] 正在安装 : python-jinja2-2.7.2-2.el7cp.noarch 13/38
[node1][DEBUG ] 正在安装 : hdparm-9.43-5.el7.x86_64 14/38
[node1][DEBUG ] 正在安装 : m4-1.4.16-10.el7.x86_64 15/38
[node1][DEBUG ] 正在安装 : redhat-lsb-core-4.1-27.el7.x86_64 16/38
[node1][DEBUG ] 正在安装 : libbabeltrace-1.2.4-3.el7cp.x86_64 17/38
[node1][DEBUG ] 正在安装 : boost-regex-1.53.0-27.el7.x86_64 18/38
[node1][DEBUG ] 正在安装 : mailcap-2.1.41-2.el7.noarch 19/38
[node1][DEBUG ] 正在安装 : python-itsdangerous-0.23-1.el7.noarch 20/38
[node1][DEBUG ] 正在安装 : 1:python-flask-0.10.1-5.el7.noarch 21/38
[node1][DEBUG ] 正在安装 : userspace-rcu-0.7.9-2.el7rhgs.x86_64 22/38
[node1][DEBUG ] 正在安装 : lttng-ust-2.4.1-1.el7cp.x86_64 23/38
[node1][DEBUG ] 正在更新 : 1:librados2-10.2.2-38.el7cp.x86_64 24/38
[node1][DEBUG ] 正在更新 : 1:librbd1-10.2.2-38.el7cp.x86_64 25/38
[node1][DEBUG ] 正在安装 : 1:librgw2-10.2.2-38.el7cp.x86_64 26/38
[node1][DEBUG ] 正在安装 : 1:python-rados-10.2.2-38.el7cp.x86_64 27/38
[node1][DEBUG ] 正在安装 : 1:python-cephfs-10.2.2-38.el7cp.x86_64 28/38
[node1][DEBUG ] 正在安装 : 1:python-rbd-10.2.2-38.el7cp.x86_64 29/38
[node1][DEBUG ] 正在安装 : 1:ceph-common-10.2.2-38.el7cp.x86_64 30/38
[node1][DEBUG ] 正在安装 : 1:ceph-selinux-10.2.2-38.el7cp.x86_64 31/38
[node1][DEBUG ] 正在安装 : 1:ceph-base-10.2.2-38.el7cp.x86_64 32/38
[node1][DEBUG ] 正在安装 : 1:ceph-mds-10.2.2-38.el7cp.x86_64 33/38
[node1][DEBUG ] 正在安装 : 1:ceph-mon-10.2.2-38.el7cp.x86_64 34/38
[node1][DEBUG ] 正在安装 : 1:ceph-osd-10.2.2-38.el7cp.x86_64 35/38
[node1][DEBUG ] 正在安装 : 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 36/38
[node1][DEBUG ] 清理 : 1:librbd1-0.94.5-2.el7.x86_64 37/38
[node1][DEBUG ] 清理 : 1:librados2-0.94.5-2.el7.x86_64 38/38
[node1][DEBUG ] 验证中 : userspace-rcu-0.7.9-2.el7rhgs.x86_64 1/38
[node1][DEBUG ] 验证中 : python-itsdangerous-0.23-1.el7.noarch 2/38
[node1][DEBUG ] 验证中 : mailcap-2.1.41-2.el7.noarch 3/38
[node1][DEBUG ] 验证中 : boost-regex-1.53.0-27.el7.x86_64 4/38
[node1][DEBUG ] 验证中 : libbabeltrace-1.2.4-3.el7cp.x86_64 5/38
[node1][DEBUG ] 验证中 : 1:librados2-10.2.2-38.el7cp.x86_64 6/38
[node1][DEBUG ] 验证中 : m4-1.4.16-10.el7.x86_64 7/38
[node1][DEBUG ] 验证中 : hdparm-9.43-5.el7.x86_64 8/38
[node1][DEBUG ] 验证中 : 1:libcephfs1-10.2.2-38.el7cp.x86_64 9/38
[node1][DEBUG ] 验证中 : 1:ceph-common-10.2.2-38.el7cp.x86_64 10/38
[node1][DEBUG ] 验证中 : 1:python-cephfs-10.2.2-38.el7cp.x86_64 11/38
[node1][DEBUG ] 验证中 : 1:librbd1-10.2.2-38.el7cp.x86_64 12/38
[node1][DEBUG ] 验证中 : boost-iostreams-1.53.0-27.el7.x86_64 13/38
[node1][DEBUG ] 验证中 : python-babel-0.9.6-8.el7.noarch 14/38
[node1][DEBUG ] 验证中 : boost-random-1.53.0-27.el7.x86_64 15/38
[node1][DEBUG ] 验证中 : 1:ceph-mds-10.2.2-38.el7cp.x86_64 16/38
[node1][DEBUG ] 验证中 : patch-2.7.1-8.el7.x86_64 17/38
[node1][DEBUG ] 验证中 : 1:ceph-selinux-10.2.2-38.el7cp.x86_64 18/38
[node1][DEBUG ] 验证中 : python-markupsafe-0.11-10.el7.x86_64 19/38
[node1][DEBUG ] 验证中 : 1:librgw2-10.2.2-38.el7cp.x86_64 20/38
[node1][DEBUG ] 验证中 : 1:python-flask-0.10.1-5.el7.noarch 21/38
[node1][DEBUG ] 验证中 : leveldb-1.12.0-5.el7cp.x86_64 22/38
[node1][DEBUG ] 验证中 : redhat-lsb-submod-security-4.1-27.el7.x86_64 23/38
[node1][DEBUG ] 验证中 : 1:ceph-mon-10.2.2-38.el7cp.x86_64 24/38
[node1][DEBUG ] 验证中 : 1:ceph-base-10.2.2-38.el7cp.x86_64 25/38
[node1][DEBUG ] 验证中 : python-jinja2-2.7.2-2.el7cp.noarch 26/38
[node1][DEBUG ] 验证中 : boost-program-options-1.53.0-27.el7.x86_64 27/38
[node1][DEBUG ] 验证中 : 1:python-rados-10.2.2-38.el7cp.x86_64 28/38
[node1][DEBUG ] 验证中 : lttng-ust-2.4.1-1.el7cp.x86_64 29/38
[node1][DEBUG ] 验证中 : redhat-lsb-core-4.1-27.el7.x86_64 30/38
[node1][DEBUG ] 验证中 : 1:ceph-osd-10.2.2-38.el7cp.x86_64 31/38
[node1][DEBUG ] 验证中 : spax-1.5.2-13.el7.x86_64 32/38
[node1][DEBUG ] 验证中 : 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 33/38
[node1][DEBUG ] 验证中 : python-werkzeug-0.9.1-1.el7.noarch 34/38
[node1][DEBUG ] 验证中 : fcgi-2.4.0-25.el7cp.x86_64 35/38
[node1][DEBUG ] 验证中 : 1:python-rbd-10.2.2-38.el7cp.x86_64 36/38
[node1][DEBUG ] 验证中 : 1:librbd1-0.94.5-2.el7.x86_64 37/38
[node1][DEBUG ] 验证中 : 1:librados2-0.94.5-2.el7.x86_64 38/38
[node1][DEBUG ]
[node1][DEBUG ] 已安装:
[node1][DEBUG ] ceph-mds.x86_64 1:10.2.2-38.el7cp ceph-mon.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] ceph-osd.x86_64 1:10.2.2-38.el7cp ceph-radosgw.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ]
[node1][DEBUG ] 作为依赖被安装:
[node1][DEBUG ] boost-iostreams.x86_64 0:1.53.0-27.el7
[node1][DEBUG ] boost-program-options.x86_64 0:1.53.0-27.el7
[node1][DEBUG ] boost-random.x86_64 0:1.53.0-27.el7
[node1][DEBUG ] boost-regex.x86_64 0:1.53.0-27.el7
[node1][DEBUG ] ceph-base.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] ceph-common.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] ceph-selinux.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] fcgi.x86_64 0:2.4.0-25.el7cp
[node1][DEBUG ] hdparm.x86_64 0:9.43-5.el7
[node1][DEBUG ] leveldb.x86_64 0:1.12.0-5.el7cp
[node1][DEBUG ] libbabeltrace.x86_64 0:1.2.4-3.el7cp
[node1][DEBUG ] libcephfs1.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] librgw2.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] lttng-ust.x86_64 0:2.4.1-1.el7cp
[node1][DEBUG ] m4.x86_64 0:1.4.16-10.el7
[node1][DEBUG ] mailcap.noarch 0:2.1.41-2.el7
[node1][DEBUG ] patch.x86_64 0:2.7.1-8.el7
[node1][DEBUG ] python-babel.noarch 0:0.9.6-8.el7
[node1][DEBUG ] python-cephfs.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] python-flask.noarch 1:0.10.1-5.el7
[node1][DEBUG ] python-itsdangerous.noarch 0:0.23-1.el7
[node1][DEBUG ] python-jinja2.noarch 0:2.7.2-2.el7cp
[node1][DEBUG ] python-markupsafe.x86_64 0:0.11-10.el7
[node1][DEBUG ] python-rados.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] python-rbd.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] python-werkzeug.noarch 0:0.9.1-1.el7
[node1][DEBUG ] redhat-lsb-core.x86_64 0:4.1-27.el7
[node1][DEBUG ] redhat-lsb-submod-security.x86_64 0:4.1-27.el7
[node1][DEBUG ] spax.x86_64 0:1.5.2-13.el7
[node1][DEBUG ] userspace-rcu.x86_64 0:0.7.9-2.el7rhgs
[node1][DEBUG ]
[node1][DEBUG ] 作为依赖被升级:
[node1][DEBUG ] librados2.x86_64 1:10.2.2-38.el7cp librbd1.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ]
[node1][DEBUG ] 完毕!
[node1][INFO ] Running command: ceph --version
[node1][DEBUG ] ceph version 10.2.2-38.el7cp (119a68752a5671253f9daae3f894a90313a6b8e4)
[ceph_deploy.install][DEBUG ] Detecting platform for host node2 …
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node2][INFO ] installing Ceph on node2
[node2][INFO ] Running command: yum clean all
[node2][INFO ] Running command: yum -y install ceph-osd ceph-mds ceph-mon ceph-radosgw
……(node2 的依赖解析与安装过程输出和 node1 完全相同,此处省略)……
[node2][DEBUG ] 完毕!
[node2][INFO ] Running command: ceph --version
[node2][DEBUG ] ceph version 10.2.2-38.el7cp (119a68752a5671253f9daae3f894a90313a6b8e4)
[ceph_deploy.install][DEBUG ] Detecting platform for host node3 …
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node3][INFO ] installing Ceph on node3
[node3][INFO ] Running command: yum clean all
[node3][DEBUG ] 已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
[node3][DEBUG ] This system is not registered with an entitlement server. You can use subscription-manager to register.
[node3][DEBUG ] 正在清理软件源: 192.168.4.254_rhel7 mon osd tools
[node3][DEBUG ] Cleaning up everything
[node3][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[node3][INFO ] Running command: yum -y install ceph-osd ceph-mds ceph-mon ceph-radosgw
[node3][DEBUG ] 已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
[node3][DEBUG ] This system is not registered with an entitlement server. You can use subscription-manager to register.
[node3][DEBUG ] 正在解决依赖关系
[node3][DEBUG ] --> 正在检查事务
[node3][DEBUG ] ---> 软件包 ceph-mds.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] --> 正在处理依赖关系 ceph-base = 1:10.2.2-38.el7cp,它被软件包 1:ceph-mds-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 libboost_iostreams-mt.so.1.53.0()(64bit),它被软件包 1:ceph-mds-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] ---> 软件包 ceph-mon.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] --> 正在处理依赖关系 python-flask,它被软件包 1:ceph-mon-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 libboost_random-mt.so.1.53.0()(64bit),它被软件包 1:ceph-mon-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 libleveldb.so.1()(64bit),它被软件包 1:ceph-mon-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] ---> 软件包 ceph-osd.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] --> 正在处理依赖关系 libboost_program_options-mt.so.1.53.0()(64bit),它被软件包 1:ceph-osd-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] ---> 软件包 ceph-radosgw.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] --> 正在处理依赖关系 ceph-common = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 ceph-selinux = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 librados2 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 librgw2 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 mailcap,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 libfcgi.so.0()(64bit),它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 librgw.so.2()(64bit),它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在检查事务
[node3][DEBUG ] ---> 软件包 boost-iostreams.x86_64.0.1.53.0-27.el7 将被 安装
[node3][DEBUG ] ---> 软件包 boost-program-options.x86_64.0.1.53.0-27.el7 将被 安装
[node3][DEBUG ] ---> 软件包 boost-random.x86_64.0.1.53.0-27.el7 将被 安装
[node3][DEBUG ] ---> 软件包 ceph-base.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] --> 正在处理依赖关系 libcephfs1 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 librbd1 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 hdparm,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 redhat-lsb-core,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 libcephfs.so.1()(64bit),它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 liblttng-ust.so.0()(64bit),它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] ---> 软件包 ceph-common.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] --> 正在处理依赖关系 python-cephfs = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 python-rados = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 python-rbd = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 libbabeltrace-ctf.so.1()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 libbabeltrace.so.1()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 libboost_regex-mt.so.1.53.0()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] ---> 软件包 ceph-selinux.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] ---> 软件包 fcgi.x86_64.0.2.4.0-25.el7cp 将被 安装
[node3][DEBUG ] ---> 软件包 leveldb.x86_64.0.1.12.0-5.el7cp 将被 安装
[node3][DEBUG ] ---> 软件包 librados2.x86_64.1.0.94.5-2.el7 将被 升级
[node3][DEBUG ] ---> 软件包 librados2.x86_64.1.10.2.2-38.el7cp 将被 更新
[node3][DEBUG ] ---> 软件包 librgw2.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] ---> 软件包 mailcap.noarch.0.2.1.41-2.el7 将被 安装
[node3][DEBUG ] ---> 软件包 python-flask.noarch.1.0.10.1-5.el7 将被 安装
[node3][DEBUG ] --> 正在处理依赖关系 python-itsdangerous,它被软件包 1:python-flask-0.10.1-5.el7.noarch 需要
[node3][DEBUG ] --> 正在处理依赖关系 python-jinja2,它被软件包 1:python-flask-0.10.1-5.el7.noarch 需要
[node3][DEBUG ] --> 正在处理依赖关系 python-werkzeug,它被软件包 1:python-flask-0.10.1-5.el7.noarch 需要
[node3][DEBUG ] --> 正在检查事务
[node3][DEBUG ] ---> 软件包 boost-regex.x86_64.0.1.53.0-27.el7 将被 安装
[node3][DEBUG ] ---> 软件包 hdparm.x86_64.0.9.43-5.el7 将被 安装
[node3][DEBUG ] ---> 软件包 libbabeltrace.x86_64.0.1.2.4-3.el7cp 将被 安装
[node3][DEBUG ] ---> 软件包 libcephfs1.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] ---> 软件包 librbd1.x86_64.1.0.94.5-2.el7 将被 升级
[node3][DEBUG ] ---> 软件包 librbd1.x86_64.1.10.2.2-38.el7cp 将被 更新
[node3][DEBUG ] ---> 软件包 lttng-ust.x86_64.0.2.4.1-1.el7cp 将被 安装
[node3][DEBUG ] --> 正在处理依赖关系 liburcu-bp.so.1()(64bit),它被软件包 lttng-ust-2.4.1-1.el7cp.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 liburcu-cds.so.1()(64bit),它被软件包 lttng-ust-2.4.1-1.el7cp.x86_64 需要
[node3][DEBUG ] ---> 软件包 python-cephfs.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] ---> 软件包 python-itsdangerous.noarch.0.0.23-1.el7 将被 安装
[node3][DEBUG ] ---> 软件包 python-jinja2.noarch.0.2.7.2-2.el7cp 将被 安装
[node3][DEBUG ] --> 正在处理依赖关系 python-babel >= 0.8,它被软件包 python-jinja2-2.7.2-2.el7cp.noarch 需要
[node3][DEBUG ] --> 正在处理依赖关系 python-markupsafe,它被软件包 python-jinja2-2.7.2-2.el7cp.noarch 需要
[node3][DEBUG ] ---> 软件包 python-rados.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] ---> 软件包 python-rbd.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] ---> 软件包 python-werkzeug.noarch.0.0.9.1-1.el7 将被 安装
[node3][DEBUG ] ---> 软件包 redhat-lsb-core.x86_64.0.4.1-27.el7 将被 安装
[node3][DEBUG ] --> 正在处理依赖关系 redhat-lsb-submod-security(x86-64) = 4.1-27.el7,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 /usr/bin/m4,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 /usr/bin/patch,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node3][DEBUG ] --> 正在处理依赖关系 spax,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node3][DEBUG ] --> 正在检查事务
[node3][DEBUG ] ---> 软件包 m4.x86_64.0.1.4.16-10.el7 将被 安装
[node3][DEBUG ] ---> 软件包 patch.x86_64.0.2.7.1-8.el7 将被 安装
[node3][DEBUG ] ---> 软件包 python-babel.noarch.0.0.9.6-8.el7 将被 安装
[node3][DEBUG ] ---> 软件包 python-markupsafe.x86_64.0.0.11-10.el7 将被 安装
[node3][DEBUG ] ---> 软件包 redhat-lsb-submod-security.x86_64.0.4.1-27.el7 将被 安装
[node3][DEBUG ] ---> 软件包 spax.x86_64.0.1.5.2-13.el7 将被 安装
[node3][DEBUG ] ---> 软件包 userspace-rcu.x86_64.0.0.7.9-2.el7rhgs 将被 安装
[node3][DEBUG ] --> 解决依赖关系完成
[node3][DEBUG ]
[node3][DEBUG ] 依赖关系解决
[node3][DEBUG ]
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] Package 架构 版本 源 大小
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] 正在安装:
[node3][DEBUG ] ceph-mds x86_64 1:10.2.2-38.el7cp tools 2.8 M
[node3][DEBUG ] ceph-mon x86_64 1:10.2.2-38.el7cp mon 2.8 M
[node3][DEBUG ] ceph-osd x86_64 1:10.2.2-38.el7cp osd 9.0 M
[node3][DEBUG ] ceph-radosgw x86_64 1:10.2.2-38.el7cp tools 265 k
[node3][DEBUG ] 为依赖而安装:
[node3][DEBUG ] boost-iostreams x86_64 1.53.0-27.el7 192.168.4.254_rhel7 61 k
[node3][DEBUG ] boost-program-options x86_64 1.53.0-27.el7 192.168.4.254_rhel7 156 k
[node3][DEBUG ] boost-random x86_64 1.53.0-27.el7 192.168.4.254_rhel7 39 k
[node3][DEBUG ] boost-regex x86_64 1.53.0-27.el7 192.168.4.254_rhel7 300 k
[node3][DEBUG ] ceph-base x86_64 1:10.2.2-38.el7cp mon 4.2 M
[node3][DEBUG ] ceph-common x86_64 1:10.2.2-38.el7cp mon 16 M
[node3][DEBUG ] ceph-selinux x86_64 1:10.2.2-38.el7cp mon 38 k
[node3][DEBUG ] fcgi x86_64 2.4.0-25.el7cp mon 47 k
[node3][DEBUG ] hdparm x86_64 9.43-5.el7 192.168.4.254_rhel7 83 k
[node3][DEBUG ] leveldb x86_64 1.12.0-5.el7cp mon 161 k
[node3][DEBUG ] libbabeltrace x86_64 1.2.4-3.el7cp mon 147 k
[node3][DEBUG ] libcephfs1 x86_64 1:10.2.2-38.el7cp mon 1.9 M
[node3][DEBUG ] librgw2 x86_64 1:10.2.2-38.el7cp mon 2.9 M
[node3][DEBUG ] lttng-ust x86_64 2.4.1-1.el7cp mon 176 k
[node3][DEBUG ] m4 x86_64 1.4.16-10.el7 192.168.4.254_rhel7 256 k
[node3][DEBUG ] mailcap noarch 2.1.41-2.el7 192.168.4.254_rhel7 31 k
[node3][DEBUG ] patch x86_64 2.7.1-8.el7 192.168.4.254_rhel7 110 k
[node3][DEBUG ] python-babel noarch 0.9.6-8.el7 192.168.4.254_rhel7 1.4 M
[node3][DEBUG ] python-cephfs x86_64 1:10.2.2-38.el7cp mon 86 k
[node3][DEBUG ] python-flask noarch 1:0.10.1-5.el7 mon 204 k
[node3][DEBUG ] python-itsdangerous noarch 0.23-1.el7 mon 24 k
[node3][DEBUG ] python-jinja2 noarch 2.7.2-2.el7cp mon 516 k
[node3][DEBUG ] python-markupsafe x86_64 0.11-10.el7 192.168.4.254_rhel7 25 k
[node3][DEBUG ] python-rados x86_64 1:10.2.2-38.el7cp mon 164 k
[node3][DEBUG ] python-rbd x86_64 1:10.2.2-38.el7cp mon 93 k
[node3][DEBUG ] python-werkzeug noarch 0.9.1-1.el7 mon 562 k
[node3][DEBUG ] redhat-lsb-core x86_64 4.1-27.el7 192.168.4.254_rhel7 37 k
[node3][DEBUG ] redhat-lsb-submod-security x86_64 4.1-27.el7 192.168.4.254_rhel7 15 k
[node3][DEBUG ] spax x86_64 1.5.2-13.el7 192.168.4.254_rhel7 260 k
[node3][DEBUG ] userspace-rcu x86_64 0.7.9-2.el7rhgs mon 70 k
[node3][DEBUG ] 为依赖而更新:
[node3][DEBUG ] librados2 x86_64 1:10.2.2-38.el7cp mon 1.9 M
[node3][DEBUG ] librbd1 x86_64 1:10.2.2-38.el7cp mon 2.5 M
[node3][DEBUG ]
[node3][DEBUG ] 事务概要
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] 安装 4 软件包 (+30 依赖软件包)
[node3][DEBUG ] 升级 ( 2 依赖软件包)
[node3][DEBUG ]
[node3][DEBUG ] 总下载量:49 M
[node3][DEBUG ] Downloading packages:
[node3][DEBUG ] No Presto metadata available for mon
[node3][DEBUG ] --------------------------------------------------------------------------------
[node3][DEBUG ] 总计 34 MB/s | 49 MB 00:01
[node3][DEBUG ] Running transaction check
[node3][DEBUG ] Running transaction test
[node3][DEBUG ] Transaction test succeeded
[node3][DEBUG ] Running transaction
[node3][DEBUG ] 正在安装 : boost-iostreams-1.53.0-27.el7.x86_64 1/38
[node3][DEBUG ] 正在安装 : boost-random-1.53.0-27.el7.x86_64 2/38
[node3][DEBUG ] 正在安装 : 1:libcephfs1-10.2.2-38.el7cp.x86_64 3/38
[node3][DEBUG ] 正在安装 : fcgi-2.4.0-25.el7cp.x86_64 4/38
[node3][DEBUG ] 正在安装 : boost-program-options-1.53.0-27.el7.x86_64 5/38
[node3][DEBUG ] 正在安装 : leveldb-1.12.0-5.el7cp.x86_64 6/38
[node3][DEBUG ] 正在安装 : python-werkzeug-0.9.1-1.el7.noarch 7/38
[node3][DEBUG ] 正在安装 : spax-1.5.2-13.el7.x86_64 8/38
[node3][DEBUG ] 正在安装 : redhat-lsb-submod-security-4.1-27.el7.x86_64 9/38
[node3][DEBUG ] 正在安装 : python-markupsafe-0.11-10.el7.x86_64 10/38
[node3][DEBUG ] 正在安装 : patch-2.7.1-8.el7.x86_64 11/38
[node3][DEBUG ] 正在安装 : python-babel-0.9.6-8.el7.noarch 12/38
[node3][DEBUG ] 正在安装 : python-jinja2-2.7.2-2.el7cp.noarch 13/38
[node3][DEBUG ] 正在安装 : hdparm-9.43-5.el7.x86_64 14/38
[node3][DEBUG ] 正在安装 : m4-1.4.16-10.el7.x86_64 15/38
[node3][DEBUG ] 正在安装 : redhat-lsb-core-4.1-27.el7.x86_64 16/38
[node3][DEBUG ] 正在安装 : libbabeltrace-1.2.4-3.el7cp.x86_64 17/38
[node3][DEBUG ] 正在安装 : boost-regex-1.53.0-27.el7.x86_64 18/38
[node3][DEBUG ] 正在安装 : mailcap-2.1.41-2.el7.noarch 19/38
[node3][DEBUG ] 正在安装 : python-itsdangerous-0.23-1.el7.noarch 20/38
[node3][DEBUG ] 正在安装 : 1:python-flask-0.10.1-5.el7.noarch 21/38
[node3][DEBUG ] 正在安装 : userspace-rcu-0.7.9-2.el7rhgs.x86_64 22/38
[node3][DEBUG ] 正在安装 : lttng-ust-2.4.1-1.el7cp.x86_64 23/38
[node3][DEBUG ] 正在更新 : 1:librados2-10.2.2-38.el7cp.x86_64 24/38
[node3][DEBUG ] 正在更新 : 1:librbd1-10.2.2-38.el7cp.x86_64 25/38
[node3][DEBUG ] 正在安装 : 1:librgw2-10.2.2-38.el7cp.x86_64 26/38
[node3][DEBUG ] 正在安装 : 1:python-rados-10.2.2-38.el7cp.x86_64 27/38
[node3][DEBUG ] 正在安装 : 1:python-cephfs-10.2.2-38.el7cp.x86_64 28/38
[node3][DEBUG ] 正在安装 : 1:python-rbd-10.2.2-38.el7cp.x86_64 29/38
[node3][DEBUG ] 正在安装 : 1:ceph-common-10.2.2-38.el7cp.x86_64 30/38
[node3][DEBUG ] 正在安装 : 1:ceph-selinux-10.2.2-38.el7cp.x86_64 31/38
[node3][DEBUG ] 正在安装 : 1:ceph-base-10.2.2-38.el7cp.x86_64 32/38
[node3][DEBUG ] 正在安装 : 1:ceph-mds-10.2.2-38.el7cp.x86_64 33/38
[node3][DEBUG ] 正在安装 : 1:ceph-mon-10.2.2-38.el7cp.x86_64 34/38
[node3][DEBUG ] 正在安装 : 1:ceph-osd-10.2.2-38.el7cp.x86_64 35/38
[node3][DEBUG ] 正在安装 : 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 36/38
[node3][DEBUG ] 清理 : 1:librbd1-0.94.5-2.el7.x86_64 37/38
[node3][DEBUG ] 清理 : 1:librados2-0.94.5-2.el7.x86_64 38/38
[node3][DEBUG ] 验证中 : userspace-rcu-0.7.9-2.el7rhgs.x86_64 1/38
[node3][DEBUG ] 验证中 : python-itsdangerous-0.23-1.el7.noarch 2/38
[node3][DEBUG ] 验证中 : mailcap-2.1.41-2.el7.noarch 3/38
[node3][DEBUG ] 验证中 : boost-regex-1.53.0-27.el7.x86_64 4/38
[node3][DEBUG ] 验证中 : libbabeltrace-1.2.4-3.el7cp.x86_64 5/38
[node3][DEBUG ] 验证中 : 1:librados2-10.2.2-38.el7cp.x86_64 6/38
[node3][DEBUG ] 验证中 : m4-1.4.16-10.el7.x86_64 7/38
[node3][DEBUG ] 验证中 : hdparm-9.43-5.el7.x86_64 8/38
[node3][DEBUG ] 验证中 : 1:libcephfs1-10.2.2-38.el7cp.x86_64 9/38
[node3][DEBUG ] 验证中 : 1:ceph-common-10.2.2-38.el7cp.x86_64 10/38
[node3][DEBUG ] 验证中 : 1:python-cephfs-10.2.2-38.el7cp.x86_64 11/38
[node3][DEBUG ] 验证中 : 1:librbd1-10.2.2-38.el7cp.x86_64 12/38
[node3][DEBUG ] 验证中 : boost-iostreams-1.53.0-27.el7.x86_64 13/38
[node3][DEBUG ] 验证中 : python-babel-0.9.6-8.el7.noarch 14/38
[node3][DEBUG ] 验证中 : boost-random-1.53.0-27.el7.x86_64 15/38
[node3][DEBUG ] 验证中 : 1:ceph-mds-10.2.2-38.el7cp.x86_64 16/38
[node3][DEBUG ] 验证中 : patch-2.7.1-8.el7.x86_64 17/38
[node3][DEBUG ] 验证中 : 1:ceph-selinux-10.2.2-38.el7cp.x86_64 18/38
[node3][DEBUG ] 验证中 : python-markupsafe-0.11-10.el7.x86_64 19/38
[node3][DEBUG ] 验证中 : 1:librgw2-10.2.2-38.el7cp.x86_64 20/38
[node3][DEBUG ] 验证中 : 1:python-flask-0.10.1-5.el7.noarch 21/38
[node3][DEBUG ] 验证中 : leveldb-1.12.0-5.el7cp.x86_64 22/38
[node3][DEBUG ] 验证中 : redhat-lsb-submod-security-4.1-27.el7.x86_64 23/38
[node3][DEBUG ] 验证中 : 1:ceph-mon-10.2.2-38.el7cp.x86_64 24/38
[node3][DEBUG ] 验证中 : 1:ceph-base-10.2.2-38.el7cp.x86_64 25/38
[node3][DEBUG ] 验证中 : python-jinja2-2.7.2-2.el7cp.noarch 26/38
[node3][DEBUG ] 验证中 : boost-program-options-1.53.0-27.el7.x86_64 27/38
[node3][DEBUG ] 验证中 : 1:python-rados-10.2.2-38.el7cp.x86_64 28/38
[node3][DEBUG ] 验证中 : lttng-ust-2.4.1-1.el7cp.x86_64 29/38
[node3][DEBUG ] 验证中 : redhat-lsb-core-4.1-27.el7.x86_64 30/38
[node3][DEBUG ] 验证中 : 1:ceph-osd-10.2.2-38.el7cp.x86_64 31/38
[node3][DEBUG ] 验证中 : spax-1.5.2-13.el7.x86_64 32/38
[node3][DEBUG ] 验证中 : 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 33/38
[node3][DEBUG ] 验证中 : python-werkzeug-0.9.1-1.el7.noarch 34/38
[node3][DEBUG ] 验证中 : fcgi-2.4.0-25.el7cp.x86_64 35/38
[node3][DEBUG ] 验证中 : 1:python-rbd-10.2.2-38.el7cp.x86_64 36/38
[node3][DEBUG ] 验证中 : 1:librbd1-0.94.5-2.el7.x86_64 37/38
[node3][DEBUG ] 验证中 : 1:librados2-0.94.5-2.el7.x86_64 38/38
[node3][DEBUG ]
[node3][DEBUG ] 已安装:
[node3][DEBUG ] ceph-mds.x86_64 1:10.2.2-38.el7cp ceph-mon.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] ceph-osd.x86_64 1:10.2.2-38.el7cp ceph-radosgw.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ]
[node3][DEBUG ] 作为依赖被安装:
[node3][DEBUG ] boost-iostreams.x86_64 0:1.53.0-27.el7
[node3][DEBUG ] boost-program-options.x86_64 0:1.53.0-27.el7
[node3][DEBUG ] boost-random.x86_64 0:1.53.0-27.el7
[node3][DEBUG ] boost-regex.x86_64 0:1.53.0-27.el7
[node3][DEBUG ] ceph-base.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] ceph-common.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] ceph-selinux.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] fcgi.x86_64 0:2.4.0-25.el7cp
[node3][DEBUG ] hdparm.x86_64 0:9.43-5.el7
[node3][DEBUG ] leveldb.x86_64 0:1.12.0-5.el7cp
[node3][DEBUG ] libbabeltrace.x86_64 0:1.2.4-3.el7cp
[node3][DEBUG ] libcephfs1.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] librgw2.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] lttng-ust.x86_64 0:2.4.1-1.el7cp
[node3][DEBUG ] m4.x86_64 0:1.4.16-10.el7
[node3][DEBUG ] mailcap.noarch 0:2.1.41-2.el7
[node3][DEBUG ] patch.x86_64 0:2.7.1-8.el7
[node3][DEBUG ] python-babel.noarch 0:0.9.6-8.el7
[node3][DEBUG ] python-cephfs.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] python-flask.noarch 1:0.10.1-5.el7
[node3][DEBUG ] python-itsdangerous.noarch 0:0.23-1.el7
[node3][DEBUG ] python-jinja2.noarch 0:2.7.2-2.el7cp
[node3][DEBUG ] python-markupsafe.x86_64 0:0.11-10.el7
[node3][DEBUG ] python-rados.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] python-rbd.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] python-werkzeug.noarch 0:0.9.1-1.el7
[node3][DEBUG ] redhat-lsb-core.x86_64 0:4.1-27.el7
[node3][DEBUG ] redhat-lsb-submod-security.x86_64 0:4.1-27.el7
[node3][DEBUG ] spax.x86_64 0:1.5.2-13.el7
[node3][DEBUG ] userspace-rcu.x86_64 0:0.7.9-2.el7rhgs
[node3][DEBUG ]
[node3][DEBUG ] 作为依赖被升级:
[node3][DEBUG ] librados2.x86_64 1:10.2.2-38.el7cp librbd1.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ]
[node3][DEBUG ] 完毕!
[node3][INFO ] Running command: ceph --version
[node3][DEBUG ] ceph version 10.2.2-38.el7cp (119a68752a5671253f9daae3f894a90313a6b8e4)
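The install log for each node ends with a `ceph --version` line; a version mismatch between nodes is a common source of trouble later. Below is a minimal sketch for pulling the bare version number out of that line so the three nodes can be compared (the `ceph_version` helper name is our own, not part of ceph-deploy):

```shell
# Extract the bare version from a "ceph --version" line, e.g. the one
# printed at the end of the ceph-deploy install log above.
ceph_version() {
    printf '%s\n' "$1" | awk '{print $3}'
}

# In the lab you would compare real output from every node, e.g.:
#   for h in node1 node2 node3; do ceph_version "$(ssh "$h" ceph --version)"; done
ceph_version "ceph version 10.2.2-38.el7cp (119a68752a5671253f9daae3f894a90313a6b8e4)"
```

All three nodes should print the same string (here `10.2.2-38.el7cp`); if they differ, re-run the install step on the odd node out before continuing.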

3) Initialize the mon service on all nodes (hostname resolution must be correct).
[root@node1 ceph-cluster]# ceph-deploy mon create-initial
The output is as follows:

[root@node1 ceph-cluster]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5c3dcb46c8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7f5c3dcaa938>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1 node2 node3
[ceph_deploy.mon][DEBUG ] detecting platform for host node1 …
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node1][DEBUG ] determining if provided host has same hostname in remote
[node1][DEBUG ] get remote short hostname
[node1][DEBUG ] deploying mon to node1
[node1][DEBUG ] get remote short hostname
[node1][DEBUG ] remote hostname: node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][DEBUG ] create the mon path if it does not exist
[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done
[node1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node1/done
[node1][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][DEBUG ] create the monitor keyring file
[node1][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i node1 --keyring /var/lib/ceph/tmp/ceph-node1.mon.keyring --setuser 167 --setgroup 167
[node1][DEBUG ] ceph-mon: mon.noname-a 192.168.4.11:6789/0 is local, renaming to mon.node1
[node1][DEBUG ] ceph-mon: set fsid to 29908a48-7574-4aac-ac14-80a44b7cffbf
[node1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1
[node1][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][DEBUG ] create a done file to avoid re-doing the mon deployment
[node1][DEBUG ] create the init path if it does not exist
[node1][INFO ] Running command: systemctl enable ceph.target
[node1][INFO ] Running command: systemctl enable ceph-mon@node1
[node1][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[node1][INFO ] Running command: systemctl start ceph-mon@node1
[node1][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[node1][DEBUG ] ********************************************************************************
[node1][DEBUG ] status for monitor: mon.node1
[node1][DEBUG ] {
[node1][DEBUG ] "election_epoch": 0,
[node1][DEBUG ] "extra_probe_peers": [
[node1][DEBUG ] "192.168.4.12:6789/0",
[node1][DEBUG ] "192.168.4.13:6789/0"
[node1][DEBUG ] ],
[node1][DEBUG ] "monmap": {
[node1][DEBUG ] "created": "2018-10-11 11:16:27.048381",
[node1][DEBUG ] "epoch": 0,
[node1][DEBUG ] "fsid": "29908a48-7574-4aac-ac14-80a44b7cffbf",
[node1][DEBUG ] "modified": "2018-10-11 11:16:27.048381",
[node1][DEBUG ] "mons": [
[node1][DEBUG ] {
[node1][DEBUG ] "addr": "192.168.4.11:6789/0",
[node1][DEBUG ] "name": "node1",
[node1][DEBUG ] "rank": 0
[node1][DEBUG ] },
[node1][DEBUG ] {
[node1][DEBUG ] "addr": "0.0.0.0:0/1",
[node1][DEBUG ] "name": "node2",
[node1][DEBUG ] "rank": 1
[node1][DEBUG ] },
[node1][DEBUG ] {
[node1][DEBUG ] "addr": "0.0.0.0:0/2",
[node1][DEBUG ] "name": "node3",
[node1][DEBUG ] "rank": 2
[node1][DEBUG ] }
[node1][DEBUG ] ]
[node1][DEBUG ] },
[node1][DEBUG ] "name": "node1",
[node1][DEBUG ] "outside_quorum": [
[node1][DEBUG ] "node1"
[node1][DEBUG ] ],
[node1][DEBUG ] "quorum": [],
[node1][DEBUG ] "rank": 0,
[node1][DEBUG ] "state": "probing",
[node1][DEBUG ] "sync_provider": []
[node1][DEBUG ] }
[node1][DEBUG ] ********************************************************************************
[node1][INFO ] monitor: mon.node1 is running
[node1][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host node2 …
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node2][DEBUG ] determining if provided host has same hostname in remote
[node2][DEBUG ] get remote short hostname
[node2][DEBUG ] deploying mon to node2
[node2][DEBUG ] get remote short hostname
[node2][DEBUG ] remote hostname: node2
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node2][DEBUG ] create the mon path if it does not exist
[node2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node2/done
[node2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node2/done
[node2][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][DEBUG ] create the monitor keyring file
[node2][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i node2 --keyring /var/lib/ceph/tmp/ceph-node2.mon.keyring --setuser 167 --setgroup 167
[node2][DEBUG ] ceph-mon: mon.noname-b 192.168.4.12:6789/0 is local, renaming to mon.node2
[node2][DEBUG ] ceph-mon: set fsid to 29908a48-7574-4aac-ac14-80a44b7cffbf
[node2][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node2 for mon.node2
[node2][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][DEBUG ] create a done file to avoid re-doing the mon deployment
[node2][DEBUG ] create the init path if it does not exist
[node2][INFO ] Running command: systemctl enable ceph.target
[node2][INFO ] Running command: systemctl enable ceph-mon@node2
[node2][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[node2][INFO ] Running command: systemctl start ceph-mon@node2
[node2][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status
[node2][DEBUG ] ********************************************************************************
[node2][DEBUG ] status for monitor: mon.node2
[node2][DEBUG ] {
[node2][DEBUG ] "election_epoch": 1,
[node2][DEBUG ] "extra_probe_peers": [
[node2][DEBUG ] "192.168.4.11:6789/0",
[node2][DEBUG ] "192.168.4.13:6789/0"
[node2][DEBUG ] ],
[node2][DEBUG ] "monmap": {
[node2][DEBUG ] "created": "2018-10-11 11:16:31.198150",
[node2][DEBUG ] "epoch": 0,
[node2][DEBUG ] "fsid": "29908a48-7574-4aac-ac14-80a44b7cffbf",
[node2][DEBUG ] "modified": "2018-10-11 11:16:31.198150",
[node2][DEBUG ] "mons": [
[node2][DEBUG ] {
[node2][DEBUG ] "addr": "192.168.4.11:6789/0",
[node2][DEBUG ] "name": "node1",
[node2][DEBUG ] "rank": 0
[node2][DEBUG ] },
[node2][DEBUG ] {
[node2][DEBUG ] "addr": "192.168.4.12:6789/0",
[node2][DEBUG ] "name": "node2",
[node2][DEBUG ] "rank": 1
[node2][DEBUG ] },
[node2][DEBUG ] {
[node2][DEBUG ] "addr": "0.0.0.0:0/2",
[node2][DEBUG ] "name": "node3",
[node2][DEBUG ] "rank": 2
[node2][DEBUG ] }
[node2][DEBUG ] ]
[node2][DEBUG ] },
[node2][DEBUG ] "name": "node2",
[node2][DEBUG ] "outside_quorum": [],
[node2][DEBUG ] "quorum": [],
[node2][DEBUG ] "rank": 1,
[node2][DEBUG ] "state": "electing",
[node2][DEBUG ] "sync_provider": []
[node2][DEBUG ] }
[node2][DEBUG ] ********************************************************************************
[node2][INFO ] monitor: mon.node2 is running
[node2][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host node3 …
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node3][DEBUG ] determining if provided host has same hostname in remote
[node3][DEBUG ] get remote short hostname
[node3][DEBUG ] deploying mon to node3
[node3][DEBUG ] get remote short hostname
[node3][DEBUG ] remote hostname: node3
[node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node3][DEBUG ] create the mon path if it does not exist
[node3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node3/done
[node3][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node3/done
[node3][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node3.mon.keyring
[node3][DEBUG ] create the monitor keyring file
[node3][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i node3 --keyring /var/lib/ceph/tmp/ceph-node3.mon.keyring --setuser 167 --setgroup 167
[node3][DEBUG ] ceph-mon: mon.noname-c 192.168.4.13:6789/0 is local, renaming to mon.node3
[node3][DEBUG ] ceph-mon: set fsid to 29908a48-7574-4aac-ac14-80a44b7cffbf
[node3][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node3 for mon.node3
[node3][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node3.mon.keyring
[node3][DEBUG ] create a done file to avoid re-doing the mon deployment
[node3][DEBUG ] create the init path if it does not exist
[node3][INFO ] Running command: systemctl enable ceph.target
[node3][INFO ] Running command: systemctl enable ceph-mon@node3
[node3][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[node3][INFO ] Running command: systemctl start ceph-mon@node3
[node3][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status
[node3][DEBUG ] ********************************************************************************
[node3][DEBUG ] status for monitor: mon.node3
[node3][DEBUG ] {
[node3][DEBUG ] "election_epoch": 4,
[node3][DEBUG ] "extra_probe_peers": [
[node3][DEBUG ] "192.168.4.11:6789/0",
[node3][DEBUG ] "192.168.4.12:6789/0"
[node3][DEBUG ] ],
[node3][DEBUG ] "monmap": {
[node3][DEBUG ] "created": "2018-10-11 11:16:27.048381",
[node3][DEBUG ] "epoch": 1,
[node3][DEBUG ] "fsid": "29908a48-7574-4aac-ac14-80a44b7cffbf",
[node3][DEBUG ] "modified": "2018-10-11 11:16:27.048381",
[node3][DEBUG ] "mons": [
[node3][DEBUG ] {
[node3][DEBUG ] "addr": "192.168.4.11:6789/0",
[node3][DEBUG ] "name": "node1",
[node3][DEBUG ] "rank": 0
[node3][DEBUG ] },
[node3][DEBUG ] {
[node3][DEBUG ] "addr": "192.168.4.12:6789/0",
[node3][DEBUG ] "name": "node2",
[node3][DEBUG ] "rank": 1
[node3][DEBUG ] },
[node3][DEBUG ] {
[node3][DEBUG ] "addr": "192.168.4.13:6789/0",
[node3][DEBUG ] "name": "node3",
[node3][DEBUG ] "rank": 2
[node3][DEBUG ] }
[node3][DEBUG ] ]
[node3][DEBUG ] },
[node3][DEBUG ] "name": "node3",
[node3][DEBUG ] "outside_quorum": [],
[node3][DEBUG ] "quorum": [
[node3][DEBUG ] 0,
[node3][DEBUG ] 1,
[node3][DEBUG ] 2
[node3][DEBUG ] ],
[node3][DEBUG ] "rank": 2,
[node3][DEBUG ] "state": "peon",
[node3][DEBUG ] "sync_provider": []
[node3][DEBUG ] }
[node3][DEBUG ] ********************************************************************************
[node3][INFO ] monitor: mon.node3 is running
[node3][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.node1
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[ceph_deploy.mon][INFO ] mon.node1 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.node2
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status
[ceph_deploy.mon][INFO ] mon.node2 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.node3
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status
[ceph_deploy.mon][INFO ] mon.node3 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys…
[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /etc/ceph/ceph.client.admin.keyring
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from node1.
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /var/lib/ceph/bootstrap-osd/ceph.keyring
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from node1.
[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /var/lib/ceph/bootstrap-mds/ceph.keyring
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][WARNIN] Unable to find /var/lib/ceph/bootstrap-mds/ceph.keyring on node1
[ceph_deploy.gatherkeys][DEBUG ] Checking node2 for /var/lib/ceph/bootstrap-mds/ceph.keyring
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from node2.
[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /var/lib/ceph/bootstrap-rgw/ceph.keyring
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-rgw.keyring key from node1.
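In the mon_status dumps above, the `state` field is the quickest health indicator: `probing` or `electing` means the monitor has not yet joined quorum, while `leader`/`peon` means it has. A small sketch for pulling that field out of a mon_status dump (the `mon_state` helper name is our own):

```shell
# Extract the "state" value from a mon_status JSON dump, as printed by:
#   ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
mon_state() {
    printf '%s\n' "$1" | sed -n 's/.*"state": *"\([a-z]*\)".*/\1/p'
}

mon_state '"state": "peon",'      # a healthy quorum member
mon_state '"state": "probing",'   # still searching for its peers
```

Note how node1's dump above shows `probing` while it waits for the others, and node3's shows `peon` once all three monitors have formed quorum.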

Tip: common errors during this initialization step and how to fix them (optional; refer to this only if you hit the error).
If you see the following error message:
[node1][ERROR ] admin_socket: exception getting command descriptions: [Error 2] No such file or directory
fix it as follows (on node1):
First check whether your command was run from the ceph-cluster directory! If you are sure create-initial was run from that directory and the error persists, repair it as follows.
[root@node1 ceph-cluster]# vim ceph.conf    # append the following line at the end of the file
public_network = 192.168.4.0/24
After the change, push the configuration file out again:
[root@node1 ceph-cluster]# ceph-deploy --overwrite-conf config push node1 node2 node3
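The fix above can be made idempotent so that re-running it never appends the line twice. A sketch, assuming it is run from the ceph-cluster directory (the `touch` is only for illustration; on node1 the file already exists, and the commented ceph-deploy calls are the same ones shown in the steps above):

```shell
# Append public_network to ceph.conf only if it is not already there,
# so re-running this fix never duplicates the line.
conf=ceph.conf                     # run from the ceph-cluster directory
touch "$conf"                      # for illustration; the real file already exists
grep -q '^public_network' "$conf" || \
    echo 'public_network = 192.168.4.0/24' >> "$conf"

# Then push the config and retry, as in the steps above:
#   ceph-deploy --overwrite-conf config push node1 node2 node3
#   ceph-deploy mon create-initial
```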

Step 3: Create OSDs
1) Prepare the disk partitions.
[root@node1 ~]# parted /dev/vdb mklabel gpt
[root@node1 ~]# parted /dev/vdb mkpart primary 1M 50%
[root@node1 ~]# parted /dev/vdb mkpart primary 50% 100%
[root@node1 ~]# chown ceph.ceph /dev/vdb1
[root@node1 ~]# chown ceph.ceph /dev/vdb2
//These two partitions will be used as the journal disks for the storage servers
Note: this must be done on every node.

The output is as follows.
node1
[root@node1 ceph-cluster]# parted /dev/vdb mklabel gpt
信息: You may need to update /etc/fstab.

[root@node1 ceph-cluster]# parted /dev/vdb mkpart primary 1M 50%
信息: You may need to update /etc/fstab.

[root@node1 ceph-cluster]# parted /dev/vdb mkpart primary 50% 100%
信息: You may need to update /etc/fstab.

[root@node1 ceph-cluster]# chown ceph.ceph /dev/vdb1
[root@node1 ceph-cluster]# chown ceph.ceph /dev/vdb2

node2
[root@node2 ~]# parted /dev/vdb mklabel gpt
信息: You may need to update /etc/fstab.

[root@node2 ~]# parted /dev/vdb mkpart primary 1M 50%
信息: You may need to update /etc/fstab.

[root@node2 ~]# parted /dev/vdb mkpart primary 50% 100%
信息: You may need to update /etc/fstab.

[root@node2 ~]# chown ceph.ceph /dev/vdb1
[root@node2 ~]# chown ceph.ceph /dev/vdb2

node3
[root@node3 ~]# parted /dev/vdb mklabel gpt
信息: You may need to update /etc/fstab.

[root@node3 ~]# parted /dev/vdb mkpart primary 1M 50%
信息: You may need to update /etc/fstab.

[root@node3 ~]# parted /dev/vdb mkpart primary 50% 100%
Information: You may need to update /etc/fstab.

[root@node3 ~]# chown ceph.ceph /dev/vdb1
[root@node3 ~]# chown ceph.ceph /dev/vdb2

2) Initialize and wipe the disk data (run from node1 only)
[root@node1 ~]# ceph-deploy disk zap node1:vdc node1:vdd
[root@node1 ~]# ceph-deploy disk zap node2:vdc node2:vdd
[root@node1 ~]# ceph-deploy disk zap node3:vdc node3:vdd
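The three zap commands above differ only in the host name. A dry-run sketch that generates them (it only prints; pipe to sh to execute from node1):

```shell
# Generate the per-host disk-zap invocations (dry run).
plan_zap() {
    for host in node1 node2 node3; do
        echo "ceph-deploy disk zap $host:vdc $host:vdd"
    done
}
plan_zap
```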

Sample output. Note that everything runs from node1 alone; there is no need to log in to each node.
NODE1 initialization:
[root@node1 ceph-cluster]# ceph-deploy disk zap node1:vdc node1:vdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy disk zap node1:vdc node1:vdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f18d69c5b90>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function disk at 0x7f18d69bb2a8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('node1', '/dev/vdc', None), ('node1', '/dev/vdd', None)]
[ceph_deploy.osd][DEBUG ] zapping /dev/vdc on node1
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node1][DEBUG ] zeroing last few blocks of device
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/vdc
[node1][DEBUG ] Creating new GPT entries.
[node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[node1][DEBUG ] other utilities.
[node1][DEBUG ] Creating new GPT entries.
[node1][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][INFO ] calling partx on zapped device /dev/vdc
[ceph_deploy.osd][INFO ] re-reading known partitions will display errors
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /usr/sbin/partx -a /dev/vdc
[ceph_deploy.osd][DEBUG ] zapping /dev/vdd on node1
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node1][DEBUG ] zeroing last few blocks of device
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/vdd
[node1][DEBUG ] Creating new GPT entries.
[node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[node1][DEBUG ] other utilities.
[node1][DEBUG ] Creating new GPT entries.
[node1][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][INFO ] calling partx on zapped device /dev/vdd
[ceph_deploy.osd][INFO ] re-reading known partitions will display errors
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /usr/sbin/partx -a /dev/vdd

NODE2 initialization
[root@node1 ceph-cluster]# ceph-deploy disk zap node2:vdc node2:vdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy disk zap node2:vdc node2:vdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7faca8e50b90>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function disk at 0x7faca8e462a8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('node2', '/dev/vdc', None), ('node2', '/dev/vdd', None)]
[ceph_deploy.osd][DEBUG ] zapping /dev/vdc on node2
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node2][DEBUG ] zeroing last few blocks of device
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/vdc
[node2][DEBUG ] Creating new GPT entries.
[node2][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[node2][DEBUG ] other utilities.
[node2][DEBUG ] Creating new GPT entries.
[node2][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][INFO ] calling partx on zapped device /dev/vdc
[ceph_deploy.osd][INFO ] re-reading known partitions will display errors
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /usr/sbin/partx -a /dev/vdc
[ceph_deploy.osd][DEBUG ] zapping /dev/vdd on node2
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node2][DEBUG ] zeroing last few blocks of device
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/vdd
[node2][DEBUG ] Creating new GPT entries.
[node2][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[node2][DEBUG ] other utilities.
[node2][DEBUG ] Creating new GPT entries.
[node2][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][INFO ] calling partx on zapped device /dev/vdd
[ceph_deploy.osd][INFO ] re-reading known partitions will display errors
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /usr/sbin/partx -a /dev/vdd

NODE3 initialization
[root@node1 ceph-cluster]# ceph-deploy disk zap node3:vdc node3:vdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy disk zap node3:vdc node3:vdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f119e29eb90>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function disk at 0x7f119e2942a8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('node3', '/dev/vdc', None), ('node3', '/dev/vdd', None)]
[ceph_deploy.osd][DEBUG ] zapping /dev/vdc on node3
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node3][DEBUG ] zeroing last few blocks of device
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/vdc
[node3][DEBUG ] Creating new GPT entries.
[node3][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[node3][DEBUG ] other utilities.
[node3][DEBUG ] Creating new GPT entries.
[node3][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][INFO ] calling partx on zapped device /dev/vdc
[ceph_deploy.osd][INFO ] re-reading known partitions will display errors
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/partx -a /dev/vdc
[ceph_deploy.osd][DEBUG ] zapping /dev/vdd on node3
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node3][DEBUG ] zeroing last few blocks of device
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/vdd
[node3][DEBUG ] Creating new GPT entries.
[node3][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[node3][DEBUG ] other utilities.
[node3][DEBUG ] Creating new GPT entries.
[node3][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][INFO ] calling partx on zapped device /dev/vdd
[ceph_deploy.osd][INFO ] re-reading known partitions will display errors
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/partx -a /dev/vdd

3) Create the OSD storage space (run from node1 only)
[root@node1 ~]# ceph-deploy osd create node1:vdc:/dev/vdb1 node1:vdd:/dev/vdb2
//Create the OSD storage devices: vdc provides storage space to the cluster and vdb1 holds its journal;
//one storage device pairs with one journal device; journals should be on SSD and need not be large
[root@node1 ~]# ceph-deploy osd create node2:vdc:/dev/vdb1 node2:vdd:/dev/vdb2
[root@node1 ~]# ceph-deploy osd create node3:vdc:/dev/vdb1 node3:vdd:/dev/vdb2
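The three create commands above likewise differ only in the host name. A dry-run sketch (it only prints the commands; pipe to sh to execute from node1):

```shell
# Generate the per-host osd create invocations (dry run):
# data disk vdc journals to vdb1, data disk vdd journals to vdb2.
plan_osd_create() {
    for host in node1 node2 node3; do
        echo "ceph-deploy osd create $host:vdc:/dev/vdb1 $host:vdd:/dev/vdb2"
    done
}
plan_osd_create
```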

Sample output. As you can see, vdb serves as the journal disk for both vdc and vdd, which is why it was split into two partitions.

NODE1: create the storage space
[root@node1 ceph-cluster]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
vdb 252:16 0 10G 0 disk
├─vdb1 252:17 0 5G 0 part
└─vdb2 252:18 0 5G 0 part
vdc 252:32 0 10G 0 disk
vdd 252:48 0 10G 0 disk
Create node1's OSD space
[root@node1 ceph-cluster]# ceph-deploy osd create node1:vdc:/dev/vdb1 node1:vdd:/dev/vdb2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy osd create node1:vdc:/dev/vdb1 node1:vdd:/dev/vdb2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('node1', '/dev/vdc', '/dev/vdb1'), ('node1', '/dev/vdd', '/dev/vdb2')]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0xf51638>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func : <function osd at 0xf44230>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/dev/vdc:/dev/vdb1 node1:/dev/vdd:/dev/vdb2
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[ceph_deploy.osd][DEBUG ] Deploying osd to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /dev/vdc journal /dev/vdb1 activate True
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/vdc /dev/vdb1
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid
[node1][WARNIN] prepare_device: Journal /dev/vdb1 is a partition
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid
[node1][WARNIN] prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
[node1][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/vdb1
[node1][WARNIN] prepare_device: Journal /dev/vdb1 was not prepared with ceph-disk. Symlinking directly.
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] set_data_partition: Creating osd partition on /dev/vdc
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] ptype_tobe_for_name: name = data
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/vdc
[node1][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:a462a571-af0a-4717-be67-5539845b34f2 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc
[node1][DEBUG ] The operation has completed successfully.
[node1][WARNIN] update_partition: Calling partprobe on created device /dev/vdc
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdc
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid
[node1][WARNIN] populate_data_path_device: Creating xfs fs on /dev/vdc1
[node1][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdc1
[node1][DEBUG ] meta-data=/dev/vdc1 isize=2048 agcount=4, agsize=655295 blks
[node1][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[node1][DEBUG ] = crc=1 finobt=0, sparse=0
[node1][DEBUG ] data = bsize=4096 blocks=2621179, imaxpct=25
[node1][DEBUG ] = sunit=0 swidth=0 blks
[node1][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[node1][DEBUG ] log =internal log bsize=4096 blocks=2560, version=2
[node1][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[node1][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[node1][WARNIN] mount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.TNgP2M with options noatime,inode64
[node1][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/vdc1 /var/lib/ceph/tmp/mnt.TNgP2M
[node1][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.TNgP2M
[node1][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.TNgP2M
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.TNgP2M/ceph_fsid.5402.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.TNgP2M/ceph_fsid.5402.tmp
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.TNgP2M/fsid.5402.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.TNgP2M/fsid.5402.tmp
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.TNgP2M/magic.5402.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.TNgP2M/magic.5402.tmp
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.TNgP2M/journal_uuid.5402.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.TNgP2M/journal_uuid.5402.tmp
[node1][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.TNgP2M/journal -> /dev/vdb1
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.TNgP2M
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.TNgP2M
[node1][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.TNgP2M
[node1][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.TNgP2M
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc
[node1][DEBUG ] Warning: The kernel is still using the old partition table.
[node1][DEBUG ] The new table will be used at the next reboot.
[node1][DEBUG ] The operation has completed successfully.
[node1][WARNIN] update_partition: Calling partprobe on prepared device /dev/vdc
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdc
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1
[node1][INFO ] Running command: systemctl enable ceph.target
[node1][INFO ] checking OSD status…
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node1 is now ready for osd use.
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /dev/vdd journal /dev/vdb2 activate True
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/vdd /dev/vdb2
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid
[node1][WARNIN] prepare_device: Journal /dev/vdb2 is a partition
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid
[node1][WARNIN] prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
[node1][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/vdb2
[node1][WARNIN] prepare_device: Journal /dev/vdb2 was not prepared with ceph-disk. Symlinking directly.
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] set_data_partition: Creating osd partition on /dev/vdd
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] ptype_tobe_for_name: name = data
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/vdd
[node1][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:e92979e9-2ce0-4be7-a3e0-9d667d16643a --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd
[node1][DEBUG ] The operation has completed successfully.
[node1][WARNIN] update_partition: Calling partprobe on created device /dev/vdd
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdd
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid
[node1][WARNIN] populate_data_path_device: Creating xfs fs on /dev/vdd1
[node1][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdd1
[node1][DEBUG ] meta-data=/dev/vdd1 isize=2048 agcount=4, agsize=655295 blks
[node1][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[node1][DEBUG ] = crc=1 finobt=0, sparse=0
[node1][DEBUG ] data = bsize=4096 blocks=2621179, imaxpct=25
[node1][DEBUG ] = sunit=0 swidth=0 blks
[node1][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[node1][DEBUG ] log =internal log bsize=4096 blocks=2560, version=2
[node1][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[node1][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[node1][WARNIN] mount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.QZ47GJ with options noatime,inode64
[node1][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/vdd1 /var/lib/ceph/tmp/mnt.QZ47GJ
[node1][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.QZ47GJ
[node1][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.QZ47GJ
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.QZ47GJ/ceph_fsid.5881.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.QZ47GJ/ceph_fsid.5881.tmp
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.QZ47GJ/fsid.5881.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.QZ47GJ/fsid.5881.tmp
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.QZ47GJ/magic.5881.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.QZ47GJ/magic.5881.tmp
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.QZ47GJ/journal_uuid.5881.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.QZ47GJ/journal_uuid.5881.tmp
[node1][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.QZ47GJ/journal -> /dev/vdb2
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.QZ47GJ
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.QZ47GJ
[node1][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.QZ47GJ
[node1][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.QZ47GJ
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd
[node1][DEBUG ] Warning: The kernel is still using the old partition table.
[node1][DEBUG ] The new table will be used at the next reboot.
[node1][DEBUG ] The operation has completed successfully.
[node1][WARNIN] update_partition: Calling partprobe on prepared device /dev/vdd
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdd
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1
[node1][INFO ] Running command: systemctl enable ceph.target
[node1][INFO ] checking OSD status…
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node1 is now ready for osd use.
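Once the same create command has also been run for node2 and node3, the cluster should report six OSDs in total. A small sanity check of the expected count, with the live commands left as comments to run by hand:

```shell
# Expected OSD count for this lab: 3 nodes x 2 data disks each.
expected_osds() {
    nodes=3
    disks_per_node=2
    echo $((nodes * disks_per_node))
}
expected_osds    # prints 6
# Compare against the running cluster (run on node1):
#   ceph osd stat    # should show something like "6 osds: 6 up, 6 in"
#   ceph osd tree    # should list osd.0 through osd.5 as up
```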

NODE2: create the storage space
[root@node2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
vdb 252:16 0 10G 0 disk
├─vdb1 252:17 0 5G 0 part
└─vdb2 252:18 0 5G 0 part
vdc 252:32 0 10G 0 disk
vdd 252:48 0 10G 0 disk

[root@node1 ceph-cluster]# ceph-deploy osd create node2:vdc:/dev/vdb1 node2:vdd:/dev/vdb2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy osd create node2:vdc:/dev/vdb1 node2:vdd:/dev/vdb2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('node2', '/dev/vdc', '/dev/vdb1'), ('node2', '/dev/vdd', '/dev/vdb2')]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1590638>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func : <function osd at 0x1583230>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node2:/dev/vdc:/dev/vdb1 node2:/dev/vdd:/dev/vdb2
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[ceph_deploy.osd][DEBUG ] Deploying osd to node2
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host node2 disk /dev/vdc journal /dev/vdb1 activate True
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/vdc /dev/vdb1
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid
[node2][WARNIN] prepare_device: Journal /dev/vdb1 is a partition
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid
[node2][WARNIN] prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
[node2][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/vdb1
[node2][WARNIN] prepare_device: Journal /dev/vdb1 was not prepared with ceph-disk. Symlinking directly.
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] set_data_partition: Creating osd partition on /dev/vdc
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] ptype_tobe_for_name: name = data
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/vdc
[node2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:4bf10cc7-68bc-463d-9d29-f6ca9081d0bc --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc
[node2][DEBUG ] The operation has completed successfully.
[node2][WARNIN] update_partition: Calling partprobe on created device /dev/vdc
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node2][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdc
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid
[node2][WARNIN] populate_data_path_device: Creating xfs fs on /dev/vdc1
[node2][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdc1
[node2][DEBUG ] meta-data=/dev/vdc1 isize=2048 agcount=4, agsize=655295 blks
[node2][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[node2][DEBUG ] = crc=1 finobt=0, sparse=0
[node2][DEBUG ] data = bsize=4096 blocks=2621179, imaxpct=25
[node2][DEBUG ] = sunit=0 swidth=0 blks
[node2][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[node2][DEBUG ] log =internal log bsize=4096 blocks=2560, version=2
[node2][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[node2][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[node2][WARNIN] mount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.zv4xfo with options noatime,inode64
[node2][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 — /dev/vdc1 /var/lib/ceph/tmp/mnt.zv4xfo
[node2][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.zv4xfo
[node2][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.zv4xfo
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.zv4xfo/ceph_fsid.5364.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.zv4xfo/ceph_fsid.5364.tmp
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.zv4xfo/fsid.5364.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.zv4xfo/fsid.5364.tmp
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.zv4xfo/magic.5364.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.zv4xfo/magic.5364.tmp
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.zv4xfo/journal_uuid.5364.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.zv4xfo/journal_uuid.5364.tmp
[node2][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.zv4xfo/journal -> /dev/vdb1
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.zv4xfo
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.zv4xfo
[node2][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.zv4xfo
[node2][WARNIN] command_check_call: Running command: /bin/umount — /var/lib/ceph/tmp/mnt.zv4xfo
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk –typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d — /dev/vdc
[node2][DEBUG ] Warning: The kernel is still using the old partition table.
[node2][DEBUG ] The new table will be used at the next reboot.
[node2][DEBUG ] The operation has completed successfully.
[node2][WARNIN] update_partition: Calling partprobe on prepared device /dev/vdc
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node2][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdc
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger –action=add –sysname-match vdc1
[node2][INFO ] Running command: systemctl enable ceph.target
[node2][INFO ] checking OSD status…
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /bin/ceph –cluster=ceph osd stat –format=json
[ceph_deploy.osd][DEBUG ] Host node2 is now ready for osd use.
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[ceph_deploy.osd][DEBUG ] Preparing host node2 disk /dev/vdd journal /dev/vdb2 activate True
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/vdd /dev/vdb2
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd –cluster=ceph –show-config-value=fsid
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd –check-allows-journal -i 0 –cluster ceph
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd –check-wants-journal -i 0 –cluster ceph
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd –check-needs-journal -i 0 –cluster ceph
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd –cluster=ceph –show-config-value=osd_journal_size
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_mkfs_options_xfs
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_fs_mkfs_options_xfs
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_mount_options_xfs
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_fs_mount_options_xfs
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid
[node2][WARNIN] prepare_device: Journal /dev/vdb2 is a partition
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid
[node2][WARNIN] prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
[node2][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/vdb2
[node2][WARNIN] prepare_device: Journal /dev/vdb2 was not prepared with ceph-disk. Symlinking directly.
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] set_data_partition: Creating osd partition on /dev/vdd
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] ptype_tobe_for_name: name = data
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/vdd
[node2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk –largest-new=1 –change-name=1:ceph data –partition-guid=1:eda48f95-efaf-435e-8700-9511747dcec3 –typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be –mbrtogpt — /dev/vdd
[node2][DEBUG ] The operation has completed successfully.
[node2][WARNIN] update_partition: Calling partprobe on created device /dev/vdd
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node2][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdd
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid
[node2][WARNIN] populate_data_path_device: Creating xfs fs on /dev/vdd1
[node2][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 — /dev/vdd1
[node2][DEBUG ] meta-data=/dev/vdd1 isize=2048 agcount=4, agsize=655295 blks
[node2][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[node2][DEBUG ] = crc=1 finobt=0, sparse=0
[node2][DEBUG ] data = bsize=4096 blocks=2621179, imaxpct=25
[node2][DEBUG ] = sunit=0 swidth=0 blks
[node2][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[node2][DEBUG ] log =internal log bsize=4096 blocks=2560, version=2
[node2][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[node2][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[node2][WARNIN] mount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.VkorNk with options noatime,inode64
[node2][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 — /dev/vdd1 /var/lib/ceph/tmp/mnt.VkorNk
[node2][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.VkorNk
[node2][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.VkorNk
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.VkorNk/ceph_fsid.5874.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.VkorNk/ceph_fsid.5874.tmp
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.VkorNk/fsid.5874.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.VkorNk/fsid.5874.tmp
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.VkorNk/magic.5874.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.VkorNk/magic.5874.tmp
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.VkorNk/journal_uuid.5874.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.VkorNk/journal_uuid.5874.tmp
[node2][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.VkorNk/journal -> /dev/vdb2
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.VkorNk
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.VkorNk
[node2][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.VkorNk
[node2][WARNIN] command_check_call: Running command: /bin/umount — /var/lib/ceph/tmp/mnt.VkorNk
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk –typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d — /dev/vdd
[node2][DEBUG ] Warning: The kernel is still using the old partition table.
[node2][DEBUG ] The new table will be used at the next reboot.
[node2][DEBUG ] The operation has completed successfully.
[node2][WARNIN] update_partition: Calling partprobe on prepared device /dev/vdd
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node2][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdd
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger –action=add –sysname-match vdd1
[node2][INFO ] Running command: systemctl enable ceph.target
[node2][INFO ] checking OSD status…
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /bin/ceph –cluster=ceph osd stat –format=json
[ceph_deploy.osd][DEBUG ] Host node2 is now ready for osd use.

Create OSD storage space on node3
[root@node3 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
vdb 252:16 0 10G 0 disk
├─vdb1 252:17 0 5G 0 part
└─vdb2 252:18 0 5G 0 part
vdc 252:32 0 10G 0 disk
vdd 252:48 0 10G 0 disk

[root@node1 ceph-cluster]# ceph-deploy osd create node3:vdc:/dev/vdb1 node3:vdd:/dev/vdb2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy osd create node3:vdc:/dev/vdb1 node3:vdd:/dev/vdb2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('node3', '/dev/vdc', '/dev/vdb1'), ('node3', '/dev/vdd', '/dev/vdb2')]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1cd4638>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func : <function osd at 0x1cc7230>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node3:/dev/vdc:/dev/vdb1 node3:/dev/vdd:/dev/vdb2
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[ceph_deploy.osd][DEBUG ] Deploying osd to node3
[node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host node3 disk /dev/vdc journal /dev/vdb1 activate True
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/vdc /dev/vdb1
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd –cluster=ceph –show-config-value=fsid
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd –check-allows-journal -i 0 –cluster ceph
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd –check-wants-journal -i 0 –cluster ceph
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd –check-needs-journal -i 0 –cluster ceph
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd –cluster=ceph –show-config-value=osd_journal_size
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_mkfs_options_xfs
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_fs_mkfs_options_xfs
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_mount_options_xfs
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_fs_mount_options_xfs
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid
[node3][WARNIN] prepare_device: Journal /dev/vdb1 is a partition
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid
[node3][WARNIN] prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
[node3][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/vdb1
[node3][WARNIN] prepare_device: Journal /dev/vdb1 was not prepared with ceph-disk. Symlinking directly.
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] set_data_partition: Creating osd partition on /dev/vdc
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] ptype_tobe_for_name: name = data
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/vdc
[node3][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk –largest-new=1 –change-name=1:ceph data –partition-guid=1:e84a0ea4-f5c2-4615-803e-a6d57f11bc18 –typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be –mbrtogpt — /dev/vdc
[node3][DEBUG ] The operation has completed successfully.
[node3][WARNIN] update_partition: Calling partprobe on created device /dev/vdc
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node3][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdc
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid
[node3][WARNIN] populate_data_path_device: Creating xfs fs on /dev/vdc1
[node3][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 — /dev/vdc1
[node3][DEBUG ] meta-data=/dev/vdc1 isize=2048 agcount=4, agsize=655295 blks
[node3][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[node3][DEBUG ] = crc=1 finobt=0, sparse=0
[node3][DEBUG ] data = bsize=4096 blocks=2621179, imaxpct=25
[node3][DEBUG ] = sunit=0 swidth=0 blks
[node3][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[node3][DEBUG ] log =internal log bsize=4096 blocks=2560, version=2
[node3][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[node3][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[node3][WARNIN] mount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.sFNa72 with options noatime,inode64
[node3][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 — /dev/vdc1 /var/lib/ceph/tmp/mnt.sFNa72
[node3][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.sFNa72
[node3][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.sFNa72
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.sFNa72/ceph_fsid.5354.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.sFNa72/ceph_fsid.5354.tmp
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.sFNa72/fsid.5354.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.sFNa72/fsid.5354.tmp
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.sFNa72/magic.5354.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.sFNa72/magic.5354.tmp
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.sFNa72/journal_uuid.5354.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.sFNa72/journal_uuid.5354.tmp
[node3][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.sFNa72/journal -> /dev/vdb1
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.sFNa72
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.sFNa72
[node3][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.sFNa72
[node3][WARNIN] command_check_call: Running command: /bin/umount — /var/lib/ceph/tmp/mnt.sFNa72
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk –typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d — /dev/vdc
[node3][DEBUG ] Warning: The kernel is still using the old partition table.
[node3][DEBUG ] The new table will be used at the next reboot.
[node3][DEBUG ] The operation has completed successfully.
[node3][WARNIN] update_partition: Calling partprobe on prepared device /dev/vdc
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node3][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdc
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger –action=add –sysname-match vdc1
[node3][INFO ] Running command: systemctl enable ceph.target
[node3][INFO ] checking OSD status…
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /bin/ceph –cluster=ceph osd stat –format=json
[ceph_deploy.osd][DEBUG ] Host node3 is now ready for osd use.
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[ceph_deploy.osd][DEBUG ] Preparing host node3 disk /dev/vdd journal /dev/vdb2 activate True
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/vdd /dev/vdb2
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd –cluster=ceph –show-config-value=fsid
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd –check-allows-journal -i 0 –cluster ceph
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd –check-wants-journal -i 0 –cluster ceph
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd –check-needs-journal -i 0 –cluster ceph
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd –cluster=ceph –show-config-value=osd_journal_size
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_mkfs_options_xfs
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_fs_mkfs_options_xfs
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_mount_options_xfs
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_fs_mount_options_xfs
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid
[node3][WARNIN] prepare_device: Journal /dev/vdb2 is a partition
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid
[node3][WARNIN] prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
[node3][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/vdb2
[node3][WARNIN] prepare_device: Journal /dev/vdb2 was not prepared with ceph-disk. Symlinking directly.
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] set_data_partition: Creating osd partition on /dev/vdd
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] ptype_tobe_for_name: name = data
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/vdd
[node3][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk –largest-new=1 –change-name=1:ceph data –partition-guid=1:c2ddeef5-1f3b-4ebd-93ff-3e5733ad3c3f –typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be –mbrtogpt — /dev/vdd
[node3][DEBUG ] The operation has completed successfully.
[node3][WARNIN] update_partition: Calling partprobe on created device /dev/vdd
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node3][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdd
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid
[node3][WARNIN] populate_data_path_device: Creating xfs fs on /dev/vdd1
[node3][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 — /dev/vdd1
[node3][DEBUG ] meta-data=/dev/vdd1 isize=2048 agcount=4, agsize=655295 blks
[node3][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[node3][DEBUG ] = crc=1 finobt=0, sparse=0
[node3][DEBUG ] data = bsize=4096 blocks=2621179, imaxpct=25
[node3][DEBUG ] = sunit=0 swidth=0 blks
[node3][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[node3][DEBUG ] log =internal log bsize=4096 blocks=2560, version=2
[node3][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[node3][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[node3][WARNIN] mount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.rMP6kE with options noatime,inode64
[node3][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 — /dev/vdd1 /var/lib/ceph/tmp/mnt.rMP6kE
[node3][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.rMP6kE
[node3][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.rMP6kE
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.rMP6kE/ceph_fsid.5887.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.rMP6kE/ceph_fsid.5887.tmp
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.rMP6kE/fsid.5887.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.rMP6kE/fsid.5887.tmp
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.rMP6kE/magic.5887.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.rMP6kE/magic.5887.tmp
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.rMP6kE/journal_uuid.5887.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.rMP6kE/journal_uuid.5887.tmp
[node3][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.rMP6kE/journal -> /dev/vdb2
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.rMP6kE
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.rMP6kE
[node3][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.rMP6kE
[node3][WARNIN] command_check_call: Running command: /bin/umount — /var/lib/ceph/tmp/mnt.rMP6kE
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk –typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d — /dev/vdd
[node3][DEBUG ] Warning: The kernel is still using the old partition table.
[node3][DEBUG ] The new table will be used at the next reboot.
[node3][DEBUG ] The operation has completed successfully.
[node3][WARNIN] update_partition: Calling partprobe on prepared device /dev/vdd
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node3][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdd
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger –action=add –sysname-match vdd1
[node3][INFO ] Running command: systemctl enable ceph.target
[node3][INFO ] checking OSD status…
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /bin/ceph –cluster=ceph osd stat –format=json
[ceph_deploy.osd][DEBUG ] Host node3 is now ready for osd use.

4) Common errors (optional)
If `osd create` fails with a prompt to run 'gatherkeys', repair it with the following command:
[root@node1 ~]# ceph-deploy gatherkeys node1 node2 node3

Step 4: Verification

1) Check the cluster status
[root@node1 ~]# ceph -s

[root@node1 ceph-cluster]# ceph -s
cluster 29908a48-7574-4aac-ac14-80a44b7cffbf
health HEALTH_OK
monmap e1: 3 mons at {node1=192.168.4.11:6789/0,node2=192.168.4.12:6789/0,node3=192.168.4.13:6789/0}
election epoch 4, quorum 0,1,2 node1,node2,node3
osdmap e33: 6 osds: 6 up, 6 in
flags sortbitwise
pgmap v79: 64 pgs, 1 pools, 0 bytes data, 0 objects
203 MB used, 61170 MB / 61373 MB avail
64 active+clean
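The osdmap line above is the quickest health signal: all six OSDs prepared across the nodes should report both up and in. The sketch below parses that summary line; it runs on the sample string copied from the transcript, so no cluster is needed.

```shell
# Parse the osdmap summary line of `ceph -s` and confirm every OSD
# is both up and in. The sample line is copied from the transcript.
osdmap='osdmap e33: 6 osds: 6 up, 6 in'
total=$(echo "$osdmap" | awk '{print $3}')
up=$(echo "$osdmap" | awk '{print $5}')
inn=$(echo "$osdmap" | awk '{print $7}')
if [ "$total" = "$up" ] && [ "$total" = "$inn" ]; then
    echo "all $total OSDs up and in"
else
    echo "degraded: $up/$total up, $inn/$total in"
fi
```

In a live check you would feed it `ceph -s | grep osdmap` instead of the hard-coded sample.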

2) Common errors (optional)
If the status output contains the following:
health: HEALTH_WARN
clock skew detected on node2, node3…
clock skew means the clocks are out of sync. Fix: synchronize the time on all hosts with NTP first!
If the status is still unhealthy afterwards, try restarting the Ceph services:
[root@node1 ~]# systemctl restart ceph\*.service ceph\*.target
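One common way to fix the skew, assuming chronyd is available on every node: point all hosts at the same time source. The server address below is hypothetical; substitute your own NTP server.

```
# /etc/chrony.conf — minimal fragment on node1, node2 and node3
# 192.168.4.254 is a hypothetical classroom time source; replace it
server 192.168.4.254 iburst
```

After editing the file on each node, restart the daemon with `systemctl restart chronyd` and re-check `ceph -s`.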

3 Case 3: Creating Ceph Block Storage
3.1 Problem

Continuing from Exercise 1, use the Ceph cluster's block storage to accomplish the following:
Create a block storage image
Map the image on a client
Create an image snapshot
Restore data from a snapshot
Clone an image from a snapshot
Delete snapshots and images
3.2 Steps

Follow the steps below to implement this case.
Step 1: Create images

1) List the storage pools.
[root@node1 ~]# ceph osd lspools
0 rbd,
The output is as follows:
[root@node1 ceph-cluster]# ceph osd lspools
0 rbd,

2) Create and inspect images
[root@node1 ~]# rbd create demo-image --image-feature layering --size 10G
[root@node1 ~]# rbd create rbd/image --image-feature layering --size 10G
[root@node1 ~]# rbd list
[root@node1 ~]# rbd info demo-image
rbd image 'demo-image':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.d3aa2ae8944a
format: 2
features: layering
The output is as follows:
[root@node1 ceph-cluster]# rbd create demo-image --image-feature layering --size 10G
[root@node1 ceph-cluster]# rbd create rbd/image --image-feature layering --size 10G
[root@node1 ceph-cluster]# rbd list
demo-image
image
[root@node1 ceph-cluster]# rbd info demo-image
rbd image 'demo-image':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.101b238e1f29
format: 2
features: layering
flags:
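The numbers in `rbd info` are consistent with each other: `order 22` means each backing RADOS object is 2^22 bytes (4 MiB), so a 10 GiB image is striped over 10 GiB / 4 MiB objects. A quick sanity check in plain shell arithmetic, no cluster required:

```shell
# Object size is 2^order bytes; order 22 => 4 MiB objects.
order=22
size_gib=10
object_bytes=$((1 << order))                   # 4194304 bytes per object
size_bytes=$((size_gib * 1024 * 1024 * 1024))  # image size in bytes
objects=$((size_bytes / object_bytes))
echo "$objects"                                # matches the count in `rbd info`
```

This is why a freshly created image consumes almost no space: objects are only allocated as data is written.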

Step 2: Resize an image

1) Shrink the image
[root@node1 ~]# rbd resize --size 7G image --allow-shrink
[root@node1 ~]# rbd info image
2) Grow the image
[root@node1 ~]# rbd resize --size 15G image
[root@node1 ~]# rbd info image

The output is as follows:
Shrink:
[root@node1 ceph-cluster]# rbd info image
rbd image 'image':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.103a238e1f29
format: 2
features: layering
flags:
[root@node1 ceph-cluster]#
[root@node1 ceph-cluster]# rbd resize --size 7G image --allow-shrink
Resizing image: 100% complete...done.
[root@node1 ceph-cluster]# rbd info image
rbd image 'image':
size 7168 MB in 1792 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.103a238e1f29
format: 2
features: layering
flags:

Grow:
[root@node1 ceph-cluster]# rbd resize --size 15G image
Resizing image: 100% complete...done.
[root@node1 ceph-cluster]# rbd info image
rbd image 'image':
size 15360 MB in 3840 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.103a238e1f29
format: 2
features: layering
flags:

Step 3: Access via KRBD

1) Map the image as a local disk inside the cluster
[root@node1 ~]# rbd map demo-image
/dev/rbd0
[root@node1 ~]# lsblk
… …
rbd0 251:0 0 10G 0 disk
[root@node1 ~]# mkfs.xfs /dev/rbd0
[root@node1 ~]# mount /dev/rbd0 /mnt

The output is as follows:
[root@node1 ceph-cluster]# rbd map demo-image
/dev/rbd0
[root@node1 ceph-cluster]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
vdb 252:16 0 10G 0 disk
├─vdb1 252:17 0 5G 0 part
└─vdb2 252:18 0 5G 0 part
vdc 252:32 0 10G 0 disk
└─vdc1 252:33 0 10G 0 part /var/lib/ceph/osd/ceph-0
vdd 252:48 0 10G 0 disk
└─vdd1 252:49 0 10G 0 part /var/lib/ceph/osd/ceph-1
rbd0 251:0 0 10G 0 disk
Format and mount:
[root@node1 ceph-cluster]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=17, agsize=162816 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=2621440, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@node1 ceph-cluster]# mount /dev/rbd0 /mnt/
[root@node1 ceph-cluster]# ll -d /mnt/
drwxr-xr-x. 2 root root 6 Oct 11 13:59 /mnt/
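When the disk is no longer needed, the steps are reversed: release the filesystem first, then detach the kernel RBD device. `krbd_cleanup` below is a hypothetical helper, not a Ceph command; the `/mnt` and `/dev/rbd0` paths match the transcript above.

```shell
# Hedged cleanup sketch for a KRBD-mapped image.
krbd_cleanup() {
    mountpoint=$1
    device=$2
    umount "$mountpoint"    # flush dirty pages and free the mount first
    rbd unmap "$device"     # then remove the /dev/rbdX mapping
}
# Usage on node1 after the test above: krbd_cleanup /mnt /dev/rbd0
```

Unmapping while the filesystem is still mounted would fail with "device is busy", which is why the order matters.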

2) Client access via KRBD
#The client must install the ceph-common package,
#copy the cluster configuration file (otherwise it cannot locate the cluster),
#and copy the connection keyring (otherwise it has no access permission).
[root@client ~]# yum -y install ceph-common
[root@client ~]# scp 192.168.4.11:/etc/ceph/ceph.conf /etc/ceph/
[root@client ~]# scp 192.168.4.11:/etc/ceph/ceph.client.admin.keyring \
/etc/ceph/
[root@client ~]# rbd map image
[root@client ~]# lsblk
[root@client ~]# rbd showmapped
id pool image snap device
0 rbd image - /dev/rbd0
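When several images are mapped, `rbd showmapped` tells you which `/dev/rbdX` belongs to which image. The sketch below is a hypothetical parsing helper (not part of the rbd CLI); the sample output is the transcript's.

```shell
# Hypothetical helper: recover the /dev/rbdX device for a given image by
# parsing `rbd showmapped` output. Sample copied from the transcript above.
sample='id pool image snap device
0 rbd image - /dev/rbd0'
device_for_image() {
    # $1 = image name; skip the header row, print column 5 of the match
    awk -v img="$1" 'NR > 1 && $3 == img { print $5 }'
}
echo "$sample" | device_for_image image
```

Against a live cluster you would pipe `rbd showmapped` into the helper instead of the sample string.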

Example session:
[root@client ~]# yum install -y ceph-common
已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
正在解决依赖关系
–> 正在检查事务
—> 软件包 ceph-common.x86_64.1.0.94.5-2.el7 将被 安装
–> 正在处理依赖关系 python-rados = 1:0.94.5-2.el7,它被软件包 1:ceph-common-0.94.5-2.el7.x86_64 需要
–> 正在处理依赖关系 python-rbd = 1:0.94.5-2.el7,它被软件包 1:ceph-common-0.94.5-2.el7.x86_64 需要
–> 正在处理依赖关系 hdparm,它被软件包 1:ceph-common-0.94.5-2.el7.x86_64 需要
–> 正在处理依赖关系 redhat-lsb-core,它被软件包 1:ceph-common-0.94.5-2.el7.x86_64 需要
–> 正在检查事务
—> 软件包 hdparm.x86_64.0.9.43-5.el7 将被 安装
—> 软件包 python-rados.x86_64.1.0.94.5-2.el7 将被 安装
—> 软件包 python-rbd.x86_64.1.0.94.5-2.el7 将被 安装
—> 软件包 redhat-lsb-core.x86_64.0.4.1-27.el7 将被 安装
–> 正在处理依赖关系 redhat-lsb-submod-security(x86-64) = 4.1-27.el7,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
–> 正在处理依赖关系 /usr/bin/m4,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
–> 正在处理依赖关系 /usr/bin/patch,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
–> 正在处理依赖关系 spax,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
–> 正在检查事务
—> 软件包 m4.x86_64.0.1.4.16-10.el7 将被 安装
—> 软件包 patch.x86_64.0.2.7.1-8.el7 将被 安装
—> 软件包 redhat-lsb-submod-security.x86_64.0.4.1-27.el7 将被 安装
—> 软件包 spax.x86_64.0.1.5.2-13.el7 将被 安装
–> 解决依赖关系完成

依赖关系解决

==============================================================================================
Package 架构 版本 源 大小
==============================================================================================
正在安装:
ceph-common x86_64 1:0.94.5-2.el7 192.168.4.254_rhel7 6.2 M
为依赖而安装:
hdparm x86_64 9.43-5.el7 192.168.4.254_rhel7 83 k
m4 x86_64 1.4.16-10.el7 192.168.4.254_rhel7 256 k
patch x86_64 2.7.1-8.el7 192.168.4.254_rhel7 110 k
python-rados x86_64 1:0.94.5-2.el7 192.168.4.254_rhel7 39 k
python-rbd x86_64 1:0.94.5-2.el7 192.168.4.254_rhel7 29 k
redhat-lsb-core x86_64 4.1-27.el7 192.168.4.254_rhel7 37 k
redhat-lsb-submod-security x86_64 4.1-27.el7 192.168.4.254_rhel7 15 k
spax x86_64 1.5.2-13.el7 192.168.4.254_rhel7 260 k

事务概要
==============================================================================================
安装 1 软件包 (+8 依赖软件包)

总下载量:7.0 M
安装大小:26 M
Downloading packages:
(1/9): hdparm-9.43-5.el7.x86_64.rpm | 83 kB 00:00:00
(2/9): m4-1.4.16-10.el7.x86_64.rpm | 256 kB 00:00:00
(3/9): patch-2.7.1-8.el7.x86_64.rpm | 110 kB 00:00:00
(4/9): python-rados-0.94.5-2.el7.x86_64.rpm | 39 kB 00:00:00
(5/9): python-rbd-0.94.5-2.el7.x86_64.rpm | 29 kB 00:00:00
(6/9): redhat-lsb-core-4.1-27.el7.x86_64.rpm | 37 kB 00:00:00
(7/9): redhat-lsb-submod-security-4.1-27.el7.x86_64.rpm | 15 kB 00:00:00
(8/9): spax-1.5.2-13.el7.x86_64.rpm | 260 kB 00:00:00
(9/9): ceph-common-0.94.5-2.el7.x86_64.rpm | 6.2 MB 00:00:00
———————————————————————————————-
总计 25 MB/s | 7.0 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
正在安装 : 1:python-rados-0.94.5-2.el7.x86_64 1/9
正在安装 : 1:python-rbd-0.94.5-2.el7.x86_64 2/9
正在安装 : patch-2.7.1-8.el7.x86_64 3/9
正在安装 : hdparm-9.43-5.el7.x86_64 4/9
正在安装 : m4-1.4.16-10.el7.x86_64 5/9
正在安装 : spax-1.5.2-13.el7.x86_64 6/9
正在安装 : redhat-lsb-submod-security-4.1-27.el7.x86_64 7/9
正在安装 : redhat-lsb-core-4.1-27.el7.x86_64 8/9
正在安装 : 1:ceph-common-0.94.5-2.el7.x86_64 9/9
192.168.4.254_rhel7/productid | 1.6 kB 00:00:00
验证中 : 1:python-rados-0.94.5-2.el7.x86_64 1/9
验证中 : redhat-lsb-submod-security-4.1-27.el7.x86_64 2/9
验证中 : spax-1.5.2-13.el7.x86_64 3/9
验证中 : 1:python-rbd-0.94.5-2.el7.x86_64 4/9
验证中 : m4-1.4.16-10.el7.x86_64 5/9
验证中 : redhat-lsb-core-4.1-27.el7.x86_64 6/9
验证中 : 1:ceph-common-0.94.5-2.el7.x86_64 7/9
验证中 : hdparm-9.43-5.el7.x86_64 8/9
验证中 : patch-2.7.1-8.el7.x86_64 9/9

已安装:
ceph-common.x86_64 1:0.94.5-2.el7

作为依赖被安装:
hdparm.x86_64 0:9.43-5.el7 m4.x86_64 0:1.4.16-10.el7
patch.x86_64 0:2.7.1-8.el7 python-rados.x86_64 1:0.94.5-2.el7
python-rbd.x86_64 1:0.94.5-2.el7 redhat-lsb-core.x86_64 0:4.1-27.el7
redhat-lsb-submod-security.x86_64 0:4.1-27.el7 spax.x86_64 0:1.5.2-13.el7

完毕!
[root@client ~]#
[root@client ~]#
[root@client ~]# scp 192.168.4.11:/etc/ceph/ceph.conf /etc/ceph
ceph.conf 100% 235 338.4KB/s 00:00
[root@client ~]# scp 192.168.4.11:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
ceph.client.admin.keyring 100% 63 81.4KB/s 00:00
[root@client ~]# ll /etc/ceph/
总用量 12
-rw-------. 1 root root 63 10月 11 14:25 ceph.client.admin.keyring
-rw-r--r--. 1 root root 235 10月 11 14:24 ceph.conf
-rwxr-xr-x. 1 root root 92 6月 28 2017 rbdmap
[root@client ~]#
[root@client ~]#
[root@client ~]# rbd map image
/dev/rbd0
[root@client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
rbd0 251:0 0 15G 0 disk
[root@client ~]#
[root@client ~]# rbd showmapped
id pool image snap device
0 rbd image - /dev/rbd0
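For scripting, the tabular output of `rbd showmapped` can be turned into a lookup table. A minimal sketch in Python (`parse_showmapped` is a hypothetical helper; it only parses captured text, so no Ceph cluster is needed):

```python
def parse_showmapped(text):
    """Parse `rbd showmapped` output into {(pool, image): device}."""
    mapping = {}
    lines = [l for l in text.splitlines() if l.strip()]
    for line in lines[1:]:                      # skip the header row
        _id, pool, image, _snap, device = line.split()
        mapping[(pool, image)] = device
    return mapping

sample = """id pool image snap device
0 rbd image - /dev/rbd0"""
print(parse_showmapped(sample))                 # {('rbd', 'image'): '/dev/rbd0'}
```

This is handy when a script needs to know which `/dev/rbdX` a given image got on this host.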

3) Format and mount the device on the client
[root@client ~]# mkfs.xfs /dev/rbd0
[root@client ~]# mount /dev/rbd0 /mnt/
[root@client ~]# echo "test" > /mnt/test.txt
Example session:
[root@client ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=17, agsize=244736 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=3932160, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@client ~]# mount /dev/rbd0 /mnt/
[root@client ~]# echo "test" > /mnt/test.txt
[root@client ~]# cat /mnt/test.txt
test
[root@client ~]#

Step 4: Create an image snapshot

1) List the image's snapshots
[root@node1 ~]# rbd snap ls image
2) Create a snapshot
[root@node1 ~]# rbd snap create image --snap image-snap1
[root@node1 ~]# rbd snap ls image
SNAPID NAME SIZE
4 image-snap1 15360 MB
3) Delete the test file written from the client
[root@client ~]# rm -rf /mnt/test.txt
4) Roll back to the snapshot (the filesystem should ideally be unmounted on the client first)
[root@node1 ~]# rbd snap rollback image --snap image-snap1
#Remount the filesystem on the client
[root@client ~]# umount /mnt
[root@client ~]# mount /dev/rbd0 /mnt/
[root@client ~]# ls /mnt

Example session:
First check the current snapshot list:
[root@node1 ceph-cluster]# rbd snap ls image
Create the snapshot:
[root@node1 ceph-cluster]# rbd snap create image --snap image-snap1
List again:
[root@node1 ceph-cluster]# rbd snap ls image
SNAPID NAME SIZE
4 image-snap1 15360 MB
On the client, delete the test.txt created earlier:
[root@client ~]# rm -rf /mnt/test.txt
[root@client ~]# ll /mnt/
总用量 0
Then roll back the snapshot on node1:
[root@node1 ceph-cluster]# rbd snap rollback image --snap image-snap1
Rolling back to snapshot: 100% complete...done.
Then unmount /mnt on the client and remount to verify:
[root@client ~]# umount /mnt/
[root@client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
rbd0 251:0 0 15G 0 disk
[root@client ~]# mount /dev/rbd0 /mnt
[root@client ~]# ll /mnt/
总用量 4
-rw-r--r--. 1 root root 5 10月 11 14:27 test.txt

Step 5: Clone a snapshot

1) Clone the snapshot
[root@node1 ~]# rbd snap protect image --snap image-snap1
[root@node1 ~]# rbd snap rm image --snap image-snap1 //fails: the snapshot is protected
[root@node1 ~]# rbd clone \
image --snap image-snap1 image-clone --image-feature layering
//clone a new image, image-clone, from image's snapshot image-snap1
2) Examine the relationship between the clone and its parent snapshot
[root@node1 ~]# rbd info image-clone
rbd image 'image-clone':
size 15360 MB in 3840 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.d3f53d1b58ba
format: 2
features: layering
flags:
parent: rbd/image@image-snap1
#Much of the clone's data still comes from the parent snapshot chain.
#To make the clone fully independent, all data must be copied out of the parent snapshot, which is time-consuming!
[root@node1 ~]# rbd flatten image-clone
[root@node1 ~]# rbd info image-clone
rbd image 'image-clone':
size 15360 MB in 3840 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.d3f53d1b58ba
format: 2
features: layering
flags:
#Note: the parent snapshot information is gone!

Example session:
First protect the snapshot image-snap1:
[root@node1 ceph-cluster]# rbd snap protect image --snap image-snap1
[root@node1 ceph-cluster]# rbd snap rm image --snap image-snap1
rbd: snapshot 'image-snap1' is protected from removal.
2018-10-11 14:40:14.728450 7f9f5fca9d80 -1 librbd::Operations: snapshot is protected

Then clone the snapshot:
[root@node1 ceph-cluster]# rbd clone image --snap image-snap1 image-clone --image-feature layering

Then inspect the clone; its parent is image-snap1:
[root@node1 ceph-cluster]# rbd info image-clone
rbd image 'image-clone':
size 15360 MB in 3840 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.1033238e1f29
format: 2
features: layering
flags:
parent: rbd/image@image-snap1
overlap: 15360 MB

To make the clone independent, all data must be copied from the parent snapshot, which takes a long time:
[root@node1 ceph-cluster]# rbd flatten image-clone
Image flatten: 100% complete...done.

Inspect the image again; the parent snapshot information is gone:
[root@node1 ceph-cluster]# rbd info image-clone
rbd image 'image-clone':
size 15360 MB in 3840 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.1033238e1f29
format: 2
features: layering
flags:
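Whether a clone still depends on its parent can be decided by looking for the `parent:` line in `rbd info` output, as the flatten example above shows. A small sketch (`clone_parent` is a hypothetical helper working on captured text):

```python
def clone_parent(info_text):
    """Return the parent spec from `rbd info` output, or None once flattened."""
    for line in info_text.splitlines():
        line = line.strip()
        if line.startswith("parent:"):
            return line.split(":", 1)[1].strip()
    return None

before = "features: layering\nparent: rbd/image@image-snap1\noverlap: 15360 MB"
after = "features: layering\nflags:"
print(clone_parent(before))   # rbd/image@image-snap1
print(clone_parent(after))    # None
```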

Step 6: Other operations

1) Unmap the disk on the client
[root@client ~]# umount /mnt
[root@client ~]# rbd showmapped
id pool image snap device
0 rbd image - /dev/rbd0
//Syntax:
[root@client ~]# rbd unmap /dev/rbd/{poolname}/{imagename}
[root@client ~]# rbd unmap /dev/rbd/rbd/image
2) Delete the snapshot and the image
#(a protected snapshot must be unprotected first: rbd snap unprotect image --snap image-snap1)
[root@node1 ~]# rbd snap rm image --snap image-snap1
[root@node1 ~]# rbd list
[root@node1 ~]# rbd rm image


Building an LVS Cluster

Case: practicing with ipvsadm
First configure the yum repository:
[root@60 ~]# yum-config-manager --add ftp://192.168.4.254/rhel7
已加载插件:langpacks, product-id
adding repo from: ftp://192.168.4.254/rhel7

[192.168.4.254_rhel7]
name=added from: ftp://192.168.4.254/rhel7
baseurl=ftp://192.168.4.254/rhel7
enabled=1
[root@60 ~]# echo "gpgcheck=0" >> /etc/yum.repos.d/192.168.4.254_rhel7.repo
Install the ipvsadm package:
[root@60 ~]# yum install -y ipvsadm
已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
正在解决依赖关系
–> 正在检查事务
—> 软件包 ipvsadm.x86_64.0.1.27-7.el7 将被 安装
–> 解决依赖关系完成

依赖关系解决

==========================================================================================
Package 架构 版本 源 大小
==========================================================================================
正在安装:
ipvsadm x86_64 1.27-7.el7 192.168.4.254_rhel7 45 k

事务概要
==========================================================================================
安装 1 软件包

总下载量:45 k
安装大小:75 k
Downloading packages:
ipvsadm-1.27-7.el7.x86_64.rpm | 45 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
正在安装 : ipvsadm-1.27-7.el7.x86_64 1/1
192.168.4.254_rhel7/productid | 1.6 kB 00:00:00
验证中 : ipvsadm-1.27-7.el7.x86_64 1/1

已安装:
ipvsadm.x86_64 0:1.27-7.el7

完毕!
Basic options and usage:
-A -E -D   add / edit / delete a virtual service
-a -e -d   add / edit / delete a real server
-C         clear all rules
-L         list rules
-s (rr|wrr|lc|wlc)   specify the scheduling algorithm
-g (DR mode)  -i (tunnel mode)  -m (NAT mode)
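To see what the `-w` weights buy you, the 1:2 split between two real servers can be simulated. This naive sketch only illustrates the proportion of requests each server receives; it is not the kernel's exact wrr interleaving:

```python
def wrr_schedule(servers, n):
    """Naive weighted round robin: each pass hands one request per weight
    unit to every server, until n requests have been scheduled."""
    out = []
    while len(out) < n:
        for name, weight in servers:
            out.extend([name] * weight)
    return out[:n]

seq = wrr_schedule([("192.168.4.61", 1), ("192.168.4.62", 2)], 9)
print(seq.count("192.168.4.61"), seq.count("192.168.4.62"))   # 3 6
```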

1. Create an LVS virtual service
[root@60 ~]# ipvsadm -A -t 192.168.4.60:80 -s wrr
Check:
[root@60 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.4.60:80 wrr

2. Add two real servers (61 and 62) to the service
[root@60 ~]# ipvsadm -a -t 192.168.4.60:80 -r 192.168.4.61 -m -w 1
[root@60 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.4.60:80 wrr
-> 192.168.4.61:80 Masq 1 0 0
[root@60 ~]# ipvsadm -a -t 192.168.4.60:80 -r 192.168.4.62 -m -w 2
Check:
[root@60 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.4.60:80 wrr
-> 192.168.4.61:80 Masq 1 0 0
-> 192.168.4.62:80 Masq 2 0 0

3. Change the scheduling algorithm from weighted round robin (wrr) to plain round robin (rr).
(A deliberate mistake follows: with the wrong virtual IP, ipvsadm reports "Memory allocation problem".)
[root@60 ~]# ipvsadm -E -t 192.168.4.5:80 -s rr
Memory allocation problem
[root@60 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.4.60:80 wrr
-> 192.168.4.61:80 Masq 1 0 0
-> 192.168.4.62:80 Masq 2 0 0
(The error above was caused by the wrong virtual IP; use the correct one:)
[root@60 ~]# ipvsadm -E -t 192.168.4.60:80 -s rr
[root@60 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.4.60:80 rr
-> 192.168.4.61:80 Masq 1 0 0
-> 192.168.4.62:80 Masq 2 0 0

4. Modify a real server (switch its forwarding mode to DR)
[root@60 ~]# ipvsadm -e -t 192.168.4.60:80 -r 192.168.4.62 -g
[root@60 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.4.60:80 rr
-> 192.168.4.61:80 Masq 1 0 0
-> 192.168.4.62:80 Route 1 0 0

5. Create another virtual service
[root@60 ~]# ipvsadm -A -t 192.168.4.5:3306 -s lc
[root@60 ~]# ipvsadm -a -t 192.168.4.5:3306 -r 192.168.2.100 -m
[root@60 ~]# ipvsadm -a -t 192.168.4.5:3306 -r 192.168.2.200 -m
[root@60 ~]#
[root@60 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.4.5:3306 lc
-> 192.168.2.100:3306 Masq 1 0 0
-> 192.168.2.200:3306 Masq 1 0 0
TCP 192.168.4.60:80 rr
-> 192.168.4.61:80 Masq 1 0 0
-> 192.168.4.62:80 Route 1 0 0

6. Save the current rules (related tools: ipvsadm-save, ipvsadm-restore)
[root@60 ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm
[root@60 ~]# cat /etc/sysconfig/ipvsadm
-A -t 192.168.4.5:3306 -s lc
-a -t 192.168.4.5:3306 -r 192.168.2.100:3306 -m -w 1
-a -t 192.168.4.5:3306 -r 192.168.2.200:3306 -m -w 1
-A -t 192.168.4.60:80 -s rr
-a -t 192.168.4.60:80 -r 192.168.4.61:80 -m -w 1
-a -t 192.168.4.60:80 -r 192.168.4.62:80 -g -w 1

7. Clear the rules
[root@60 ~]# ipvsadm -C
[root@60 ~]# ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
(Note: clearing the in-kernel rules does not remove the saved config file; to avoid confusion, also delete /etc/sysconfig/ipvsadm when you clear the rules.)
[root@60 ~]# cat /etc/sysconfig/ipvsadm
-A -t 192.168.4.5:3306 -s lc
-a -t 192.168.4.5:3306 -r 192.168.2.100:3306 -m -w 1
-a -t 192.168.4.5:3306 -r 192.168.2.200:3306 -m -w 1
-A -t 192.168.4.60:80 -s rr
-a -t 192.168.4.60:80 -r 192.168.4.61:80 -m -w 1
-a -t 192.168.4.60:80 -r 192.168.4.62:80 -g -w 1
[root@60 ~]#
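The saved rule file has a regular enough format to parse in scripts. A sketch (`parse_rules` is a hypothetical helper; it assumes the `-t ADDR` and `-r ADDR` positions that `ipvsadm-save -n` actually emits):

```python
def parse_rules(text):
    """Group `ipvsadm-save -n` output: virtual service -> list of real servers."""
    services = {}
    for line in text.splitlines():
        parts = line.split()
        if not parts:
            continue
        if parts[0] == "-A":                # -A -t VIP:PORT -s SCHEDULER
            services[parts[2]] = []
        elif parts[0] == "-a":              # -a -t VIP:PORT -r RIP:PORT ...
            services[parts[2]].append(parts[4])
    return services

rules = """-A -t 192.168.4.60:80 -s rr
-a -t 192.168.4.60:80 -r 192.168.4.61:80 -m -w 1
-a -t 192.168.4.60:80 -r 192.168.4.62:80 -g -w 1"""
print(parse_rules(rules))
```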

Case: deploying an LVS-DR cluster
Use 4 virtual machines: 1 client, 1 Director (scheduler), and 2 real servers. Topology:
Client        eth0 192.168.4.100/24
LVS director  eth0 (DIP) 192.168.4.15/24, VIP (eth0:0) 192.168.4.5/24
Web server 1  eth0: 192.168.4.10/24, VIP (lo:0): 192.168.4.5/32
Web server 2  eth0: 192.168.4.20/24, VIP (lo:0): 192.168.4.5/32
Note: the VIP is the address that serves clients, the RIPs are the real servers' own addresses, and the DIP is the address the director uses to talk to the real servers (the VIP must be configured on a virtual interface).

Step 1: Configure the lab network
1) Set the VIP and DIP on the proxy (director)
Note: to avoid conflicts, the VIP must be configured on a virtual sub-interface of the NIC!
The director uses the DIP to communicate with the RIPs; otherwise 192.168.4.5 would end up talking to 192.168.4.5.
[root@proxy ~]# cd /etc/sysconfig/network-scripts/
[root@proxy ~]# cp ifcfg-eth0{,:0}
[root@proxy ~]# vim ifcfg-eth0
TYPE=Ethernet
BOOTPROTO=none
NAME=eth0
DEVICE=eth0
ONBOOT=yes
IPADDR=192.168.4.15
PREFIX=24
[root@proxy ~]# vim ifcfg-eth0:0
TYPE=Ethernet
BOOTPROTO=none
DEFROUTE=yes
NAME=eth0:0
DEVICE=eth0:0
ONBOOT=yes
IPADDR=192.168.4.5
PREFIX=24
[root@proxy ~]# systemctl restart network

2) Configure networking on Web1
[root@web1 ~]# nmcli connection modify eth0 ipv4.method manual ipv4.addresses 192.168.4.10/24 connection.autoconnect yes
[root@web1 ~]# nmcli connection up eth0
Next, configure the VIP on web1.
Note: the netmask must be /32 (all 255s), and both the network address and the broadcast address equal the IP itself.
[root@web1 ~]# cd /etc/sysconfig/network-scripts/
[root@web1 ~]# cp ifcfg-lo{,:0}
[root@web1 ~]# vim ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.4.5
NETMASK=255.255.255.255
NETWORK=192.168.4.5
BROADCAST=192.168.4.5
ONBOOT=yes
NAME=lo:0
Note: because web1 carries the same VIP as the director, an address conflict would normally occur.
The four lines below ensure that only the director answers for 192.168.4.5; all other hosts stay silent.
[root@web1 ~]# vim /etc/sysctl.conf
#Append the following 4 lines
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
#Ignore ARP broadcasts asking who owns 192.168.4.5
#Do not advertise the loopback address 192.168.4.5 via ARP
Restart networking, then relax the firewall and SELinux:
[root@web1 ~]# systemctl restart network
[root@web1 ~]# ifdown eth1
[root@web1 ~]# ifconfig
[root@web1 ~]# systemctl stop firewalld
[root@web1 ~]# setenforce 0

3) Configure networking on Web2
[root@web2 ~]# nmcli connection modify eth0 ipv4.method manual \
ipv4.addresses 192.168.4.20/24 connection.autoconnect yes
[root@web2 ~]# nmcli connection up eth0
Next, configure the VIP on web2.
Note: the netmask must be /32 (all 255s), and both the network address and the broadcast address equal the IP itself.
[root@web2 ~]# cd /etc/sysconfig/network-scripts/
[root@web2 ~]# cp ifcfg-lo{,:0}
[root@web2 ~]# vim ifcfg-lo:0
DEVICE=lo:0
IPADDR=192.168.4.5
NETMASK=255.255.255.255
NETWORK=192.168.4.5
BROADCAST=192.168.4.5
ONBOOT=yes
NAME=lo:0
Note: because web2 carries the same VIP as the director, an address conflict would normally occur.
The four lines below ensure that only the director answers for 192.168.4.5; all other hosts stay silent.
[root@web2 ~]# vim /etc/sysctl.conf
#Append the following 4 lines
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
#Ignore ARP broadcasts asking who owns 192.168.4.5
#Do not advertise the loopback address 192.168.4.5 via ARP
Restart networking, then relax the firewall and SELinux:
[root@web2 ~]# systemctl restart network
[root@web2 ~]# ifdown eth1
[root@web2 ~]# ifconfig
[root@web2 ~]# systemctl stop firewalld
[root@web2 ~]# setenforce 0

Step 2: Deploy the back-end web services

1) Create custom web pages
[root@web1 ~]# yum -y install httpd
[root@web1 ~]# echo “192.168.4.10” > /var/www/html/index.html
[root@web2 ~]# yum -y install httpd
[root@web2 ~]# echo “192.168.4.20” > /var/www/html/index.html
2) Start the web servers
[root@web1 ~]# systemctl start httpd; systemctl enable httpd
[root@web2 ~]# systemctl start httpd; systemctl enable httpd

Step 3: Install software on the proxy and configure the LVS-DR director
1) Install the software (skip if already installed)
[root@proxy Packages]# yum -y install ipvsadm
2) Clear the rules from earlier exercises and create the new virtual service
[root@proxy ~]# ipvsadm -C    #clear all rules
[root@proxy ~]# ipvsadm -A -t 192.168.4.5:80 -s wrr
3) Add the real servers (-g selects DR mode)
[root@proxy ~]# ipvsadm -a -t 192.168.4.5:80 -r 192.168.4.10 -g -w 1
[root@proxy ~]# ipvsadm -a -t 192.168.4.5:80 -r 192.168.4.20 -g -w 1
4) List the rules and save them
[root@proxy ~]# ipvsadm -Ln
[root@proxy ~]# ipvsadm-save -n > /etc/sysconfig/ipvsadm

Step 4: Test from the client
Repeatedly curl http://192.168.4.5 from the client and verify that requests alternate between the two real servers.
[root@client ~]# curl 192.168.4.5
192.168.4.20
[root@client ~]# curl 192.168.4.5
192.168.4.10
[root@client ~]# curl 192.168.4.5
192.168.4.20
[root@client ~]# curl 192.168.4.5
192.168.4.10
[root@client ~]#

Extension: LVS has no built-in health checking; you have to write your own monitoring script to add or remove servers dynamically. A reference script (for reference only):
[root@proxy ~]# vim check.sh
#!/bin/bash
VIP=192.168.4.5:80
RIP1=192.168.4.10
RIP2=192.168.4.20
while :
do
    for IP in $RIP1 $RIP2
    do
        curl -s http://$IP &> /dev/null
        web_stat=$?
        ipvsadm -Ln | grep -q $IP
        web_in_lvs=$?
        if [ $web_stat -ne 0 -a $web_in_lvs -eq 0 ]; then
            ipvsadm -d -t $VIP -r $IP
        elif [ $web_stat -eq 0 -a $web_in_lvs -ne 0 ]; then
            ipvsadm -a -t $VIP -r $IP
        fi
    done
    sleep 1
done
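The decision logic of check.sh (drop real servers that are configured but unhealthy, add healthy ones that are missing) is just set arithmetic, which makes it easy to test separately from curl and ipvsadm. A sketch with a hypothetical `reconcile` helper:

```python
def reconcile(healthy, in_lvs):
    """Mirror check.sh's rules: return (servers to add, servers to remove)."""
    to_add = sorted(healthy - in_lvs)      # healthy but not yet in LVS
    to_remove = sorted(in_lvs - healthy)   # in LVS but failing the check
    return to_add, to_remove

add, remove = reconcile({"192.168.4.10"}, {"192.168.4.10", "192.168.4.20"})
print(add, remove)   # [] ['192.168.4.20']
```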


MySQL Extended Query Practice

-- Create test data
-- (Note: this DDL is in SQL Server style, e.g. S#/nvarchar; the queries below
-- assume an equivalent MySQL schema: student(s_id, s_name, s_birth, s_sex),
-- course(c_id, c_name, t_id), teacher(t_id, t_name), score(s_id, c_id, s_score).)
create table Student(S# varchar(10),Sname nvarchar(10),Sage datetime,Ssex nvarchar(10))
insert into Student values('01' , N'赵雷' , '1990-01-01' , N'男')
insert into Student values('02' , N'钱电' , '1990-12-21' , N'男')
insert into Student values('03' , N'孙风' , '1990-05-20' , N'男')
insert into Student values('04' , N'李云' , '1990-08-06' , N'男')
insert into Student values('05' , N'周梅' , '1991-12-01' , N'女')
insert into Student values('06' , N'吴兰' , '1992-03-01' , N'女')
insert into Student values('07' , N'郑竹' , '1989-07-01' , N'女')
insert into Student values('08' , N'王菊' , '1990-01-20' , N'女')
create table Course(C# varchar(10),Cname nvarchar(10),T# varchar(10))
insert into Course values('01' , N'语文' , '02')
insert into Course values('02' , N'数学' , '01')
insert into Course values('03' , N'英语' , '03')
create table Teacher(T# varchar(10),Tname nvarchar(10))
insert into Teacher values('01' , N'张三')
insert into Teacher values('02' , N'李四')
insert into Teacher values('03' , N'王五')
create table SC(S# varchar(10),C# varchar(10),score decimal(18,1))
insert into SC values('01' , '01' , 80)
insert into SC values('01' , '02' , 90)
insert into SC values('01' , '03' , 99)
insert into SC values('02' , '01' , 70)
insert into SC values('02' , '02' , 60)
insert into SC values('02' , '03' , 80)
insert into SC values('03' , '01' , 80)
insert into SC values('03' , '02' , 80)
insert into SC values('03' , '03' , 80)
insert into SC values('04' , '01' , 50)
insert into SC values('04' , '02' , 30)
insert into SC values('04' , '03' , 20)
insert into SC values('05' , '01' , 76)
insert into SC values('05' , '02' , 87)
insert into SC values('06' , '01' , 31)
insert into SC values('06' , '03' , 34)
insert into SC values('07' , '02' , 89)
insert into SC values('07' , '03' , 98)

-- 1. Students (and their scores) whose course "01" score is higher than their course "02" score

select a.* ,b.s_score as 01_score,c.s_score as 02_score from
student a
join score b on a.s_id=b.s_id and b.c_id='01'
left join score c on a.s_id=c.s_id and c.c_id='02' or c.c_id = NULL where b.s_score>c.s_score

-- 2. Students (and their scores) whose course "01" score is lower than their course "02" score

select a.* ,b.s_score as 01_score,c.s_score as 02_score from
student a left join score b on a.s_id=b.s_id and b.c_id='01' or b.c_id=NULL
join score c on a.s_id=c.s_id and c.c_id='02' where b.s_score<c.s_score

-- 3. Id, name and average score of students averaging >= 60
select b.s_id,b.s_name,ROUND(AVG(a.s_score),2) as avg_score from
student b
join score a on b.s_id = a.s_id
GROUP BY b.s_id,b.s_name HAVING ROUND(AVG(a.s_score),2)>=60;
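Query 3 can be reproduced end to end against a tiny in-memory database. The sketch below uses SQLite rather than MySQL and only a two-student subset of the data, so it merely illustrates the join / GROUP BY / HAVING shape:

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.executescript("""
CREATE TABLE student(s_id TEXT, s_name TEXT);
CREATE TABLE score(s_id TEXT, c_id TEXT, s_score INTEGER);
INSERT INTO student VALUES ('01','赵雷'),('04','李云');
INSERT INTO score VALUES ('01','01',80),('01','02',90),
                         ('04','01',50),('04','02',30);
""")
rows = db.execute("""
SELECT b.s_id, b.s_name, ROUND(AVG(a.s_score), 2) AS avg_score
FROM student b JOIN score a ON b.s_id = a.s_id
GROUP BY b.s_id, b.s_name HAVING avg_score >= 60
""").fetchall()
print(rows)   # [('01', '赵雷', 85.0)]
```

Student '04' averages 40 and is filtered out by the HAVING clause.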

-- 4. Id, name and average score of students averaging < 60
-- (including students with no scores)

select b.s_id,b.s_name,ROUND(AVG(a.s_score),2) as avg_score from
student b
left join score a on b.s_id = a.s_id
GROUP BY b.s_id,b.s_name HAVING ROUND(AVG(a.s_score),2)<60
union
select a.s_id,a.s_name,0 as avg_score from
student a
where a.s_id not in (
select distinct s_id from score);

-- 5. Every student's id, name, number of courses taken, and total score
select a.s_id,a.s_name,count(b.c_id) as sum_course,sum(b.s_score) as sum_score from
student a
left join score b on a.s_id=b.s_id
GROUP BY a.s_id,a.s_name;

-- 6. Number of teachers surnamed "李"
select count(t_id) from teacher where t_name like '李%';

-- 7. Students who have taken a course taught by teacher "张三"
select a.* from
student a
join score b on a.s_id=b.s_id where b.c_id in(
select c_id from course where t_id =(
select t_id from teacher where t_name = '张三'));

-- 8. Students who have never taken a course taught by "张三"
select * from
student c
where c.s_id not in(
select a.s_id from student a join score b on a.s_id=b.s_id where b.c_id in(
select c_id from course where t_id =(
select t_id from teacher where t_name = '张三')));
-- 9. Students who have taken both course "01" and course "02"

select a.* from
student a,score b,score c
where a.s_id = b.s_id and a.s_id = c.s_id and b.c_id='01' and c.c_id='02';

-- 10. Students who have taken course "01" but not course "02"

select a.* from
student a
where a.s_id in (select s_id from score where c_id='01' ) and a.s_id not in(select s_id from score where c_id='02')
-- 11. Students who have not taken every course
select s.* from
student s where s.s_id in(
select s_id from score where s_id not in(
select a.s_id from score a
join score b on a.s_id = b.s_id and b.c_id='02'
join score c on a.s_id = c.s_id and c.c_id='03'
where a.c_id='01'))
-- 12. Students who share at least one course with student "01"
select * from student where s_id in(
select distinct a.s_id from score a where a.c_id in(select a.c_id from score a where a.s_id='01')
);

-- 13. Other students whose set of courses is exactly the same as student "01"'s

select a.* from student a where a.s_id in(
select distinct s_id from score where s_id!='01' and c_id in(select c_id from score where s_id='01')
group by s_id
having count(1)=(select count(1) from score where s_id='01'));
-- 14. Names of students who have not taken any course taught by "张三"
select a.s_name from student a where a.s_id not in (
select s_id from score where c_id =
(select c_id from course where t_id =(
select t_id from teacher where t_name = '张三'))
group by s_id);

-- 15. Id, name and average score of students failing two or more courses
select a.s_id,a.s_name,ROUND(AVG(b.s_score)) from
student a
left join score b on a.s_id = b.s_id
where a.s_id in(
select s_id from score where s_score<60 GROUP BY s_id having count(1)>=2)
GROUP BY a.s_id,a.s_name

-- 16. Students scoring below 60 in course "01", sorted by score descending
select a.*,b.c_id,b.s_score from
student a,score b
where a.s_id = b.s_id and b.c_id='01' and b.s_score<60 ORDER BY b.s_score DESC;

-- 17. Every student's per-course scores and average, ordered by average descending
select a.s_id,(select s_score from score where s_id=a.s_id and c_id='01') as 语文,
(select s_score from score where s_id=a.s_id and c_id='02') as 数学,
(select s_score from score where s_id=a.s_id and c_id='03') as 英语,
round(avg(s_score),2) as 平均分 from score a GROUP BY a.s_id ORDER BY 平均分 DESC;

-- 18. Per course: id, name, max, min, average, pass / medium / good / excellent rates
-- pass >= 60, medium 70-80, good 80-90, excellent >= 90
select a.c_id,b.c_name,MAX(s_score),MIN(s_score),ROUND(AVG(s_score),2),
ROUND(100*(SUM(case when a.s_score>=60 then 1 else 0 end)/SUM(case when a.s_score then 1 else 0 end)),2) as 及格率,
ROUND(100*(SUM(case when a.s_score>=70 and a.s_score<=80 then 1 else 0 end)/SUM(case when a.s_score then 1 else 0 end)),2) as 中等率,
ROUND(100*(SUM(case when a.s_score>=80 and a.s_score<=90 then 1 else 0 end)/SUM(case when a.s_score then 1 else 0 end)),2) as 优良率,
ROUND(100*(SUM(case when a.s_score>=90 then 1 else 0 end)/SUM(case when a.s_score then 1 else 0 end)),2) as 优秀率
from score a left join course b on a.c_id = b.c_id GROUP BY a.c_id,b.c_name
-- 19. Rank students within each course (incomplete implementation)
-- (this MySQL version has no RANK() function, so user variables are used)
select a.s_id,a.c_id,
@i:=@i +1 as i保留排名,
@k:=(case when @score=a.s_score then @k else @i end) as rank不保留排名,
@score:=a.s_score as score
from (
select s_id,c_id,s_score from score WHERE c_id='01' GROUP BY s_id,c_id,s_score ORDER BY s_score DESC
)a,(select @k:=0,@i:=0,@score:=0)s
union
select a.s_id,a.c_id,
@i:=@i +1 as i,
@k:=(case when @score=a.s_score then @k else @i end) as rank,
@score:=a.s_score as score
from (
select s_id,c_id,s_score from score WHERE c_id='02' GROUP BY s_id,c_id,s_score ORDER BY s_score DESC
)a,(select @k:=0,@i:=0,@score:=0)s
union
select a.s_id,a.c_id,
@i:=@i +1 as i,
@k:=(case when @score=a.s_score then @k else @i end) as rank,
@score:=a.s_score as score
from (
select s_id,c_id,s_score from score WHERE c_id='03' GROUP BY s_id,c_id,s_score ORDER BY s_score DESC
)a,(select @k:=0,@i:=0,@score:=0)s
-- 20. Students' total scores, ranked
select a.s_id,
@i:=@i+1 as i,
@k:=(case when @score=a.sum_score then @k else @i end) as rank,
@score:=a.sum_score as score
from (select s_id,SUM(s_score) as sum_score from score GROUP BY s_id ORDER BY sum_score DESC)a,
(select @k:=0,@i:=0,@score:=0)s
-- 21. Average score per teacher per course, highest first
select a.t_id,c.t_name,a.c_id,ROUND(avg(s_score),2) as avg_score from course a
left join score b on a.c_id=b.c_id
left join teacher c on a.t_id=c.t_id
GROUP BY a.c_id,a.t_id,c.t_name ORDER BY avg_score DESC;
-- 22. For every course, the students ranked 2nd-3rd and their scores

select d.*,c.排名,c.s_score,c.c_id from (
select a.s_id,a.s_score,a.c_id,@i:=@i+1 as 排名 from score a,(select @i:=0)s where a.c_id='01'
)c
left join student d on c.s_id=d.s_id
where 排名 BETWEEN 2 AND 3
UNION
select d.*,c.排名,c.s_score,c.c_id from (
select a.s_id,a.s_score,a.c_id,@j:=@j+1 as 排名 from score a,(select @j:=0)s where a.c_id='02'
)c
left join student d on c.s_id=d.s_id
where 排名 BETWEEN 2 AND 3
UNION
select d.*,c.排名,c.s_score,c.c_id from (
select a.s_id,a.s_score,a.c_id,@k:=@k+1 as 排名 from score a,(select @k:=0)s where a.c_id='03'
)c
left join student d on c.s_id=d.s_id
where 排名 BETWEEN 2 AND 3;

-- 23. Per course: number of students in the score bands [100-85], [85-70], [70-60], [0-60], and each band's percentage

select distinct f.c_name,a.c_id,b.`85-100`,b.百分比,c.`70-85`,c.百分比,d.`60-70`,d.百分比,e.`0-60`,e.百分比 from score a
left join (select c_id,SUM(case when s_score >85 and s_score <=100 then 1 else 0 end) as `85-100`,
ROUND(100*(SUM(case when s_score >85 and s_score <=100 then 1 else 0 end)/count(*)),2) as 百分比
from score GROUP BY c_id)b on a.c_id=b.c_id
left join (select c_id,SUM(case when s_score >70 and s_score <=85 then 1 else 0 end) as `70-85`,
ROUND(100*(SUM(case when s_score >70 and s_score <=85 then 1 else 0 end)/count(*)),2) as 百分比
from score GROUP BY c_id)c on a.c_id=c.c_id
left join (select c_id,SUM(case when s_score >60 and s_score <=70 then 1 else 0 end) as `60-70`,
ROUND(100*(SUM(case when s_score >60 and s_score <=70 then 1 else 0 end)/count(*)),2) as 百分比
from score GROUP BY c_id)d on a.c_id=d.c_id
left join (select c_id,SUM(case when s_score >=0 and s_score <=60 then 1 else 0 end) as `0-60`,
ROUND(100*(SUM(case when s_score >=0 and s_score <=60 then 1 else 0 end)/count(*)),2) as 百分比
from score GROUP BY c_id)e on a.c_id=e.c_id
left join course f on a.c_id = f.c_id
-- 24. Students' average scores and their rank
select a.s_id,
@i:=@i+1 as '不保留空缺排名',
@k:=(case when @avg_score=a.avg_s then @k else @i end) as '保留空缺排名',
@avg_score:=avg_s as '平均分'
from (select s_id,ROUND(AVG(s_score),2) as avg_s from score GROUP BY s_id)a,(select @avg_score:=0,@i:=0,@k:=0)b;
-- 25. Top three records per course
-- 1. join each row with the same-course rows in b that have a higher score
-- 2. keep rows that have fewer than three higher-scoring rows
select a.s_id,a.c_id,a.s_score from score a
left join score b on a.c_id = b.c_id and a.s_score<b.s_score
group by a.s_id,a.c_id,a.s_score HAVING COUNT(b.s_id)<3
ORDER BY a.c_id,a.s_score DESC

-- 26. Number of students enrolled in each course

select c_id,count(s_id) from score a GROUP BY c_id

-- 27. Id and name of students taking exactly two courses
select s_id,s_name from student where s_id in(
select s_id from score GROUP BY s_id HAVING COUNT(c_id)=2);

-- 28. Count of male and female students
select s_sex,COUNT(s_sex) as 人数 from student GROUP BY s_sex

-- 29. Students whose name contains "风"

select * from student where s_name like '%风%';

-- 30. Students sharing the same name and sex, with counts

select a.s_name,a.s_sex,count(*) from student a JOIN
student b on a.s_id !=b.s_id and a.s_name = b.s_name and a.s_sex = b.s_sex
GROUP BY a.s_name,a.s_sex

-- 31. Students born in 1990

select s_name from student where s_birth like '1990%'

-- 32. Average score per course, ordered by average descending, then by course id ascending for ties

select c_id,ROUND(AVG(s_score),2) as avg_score from score GROUP BY c_id ORDER BY avg_score DESC,c_id ASC

-- 33. Id, name and average of students averaging >= 85

select a.s_id,b.s_name,ROUND(avg(a.s_score),2) as avg_score from score a
left join student b on a.s_id=b.s_id GROUP BY s_id HAVING avg_score>=85

-- 34. Name and score of students scoring below 60 in "数学"

select a.s_name,b.s_score from score b LEFT JOIN student a on a.s_id=b.s_id where b.c_id=(
select c_id from course where c_name ='数学') and b.s_score<60

-- 35. Every student's courses and scores

select a.s_id,a.s_name,
SUM(case c.c_name when '语文' then b.s_score else 0 end) as '语文',
SUM(case c.c_name when '数学' then b.s_score else 0 end) as '数学',
SUM(case c.c_name when '英语' then b.s_score else 0 end) as '英语',
SUM(b.s_score) as '总分'
from student a left join score b on a.s_id = b.s_id
left join course c on b.c_id = c.c_id
GROUP BY a.s_id,a.s_name

-- 36. Name, course name and score for any course scored 70 or above
select a.s_name,b.c_name,c.s_score from course b left join score c on b.c_id = c.c_id
left join student a on a.s_id=c.s_id where c.s_score>=70

-- 37. Failing course records
select a.s_id,a.c_id,b.c_name,a.s_score from score a left join course b on a.c_id = b.c_id
where a.s_score<60

-- 38. Id and name of students scoring above 80 in course "01"
select a.s_id,b.s_name from score a LEFT JOIN student b on a.s_id = b.s_id
where a.c_id = '01' and a.s_score>80

-- 39. Number of students per course
select count(*) from score GROUP BY c_id;

-- 40. Among students taking "张三"'s course, the top scorer(s) and their score

-- find the course id
select c_id from course c,teacher d where c.t_id=d.t_id and d.t_name='张三'
-- find the highest score (ties are possible)
select MAX(s_score) from score where c_id='02'
-- full query
select a.*,b.s_score,b.c_id,c.c_name from student a
LEFT JOIN score b on a.s_id = b.s_id
LEFT JOIN course c on b.c_id=c.c_id
where b.c_id =(select c_id from course c,teacher d where c.t_id=d.t_id and d.t_name='张三')
and b.s_score in (select MAX(s_score) from score where c_id='02')
-- 41. Student id, course id and score where the same score appears in different courses
select DISTINCT b.s_id,b.c_id,b.s_score from score a,score b where a.c_id != b.c_id and a.s_score = b.s_score
-- 42. Top two scores per course
-- a clever correlated-subquery approach
select a.s_id,a.c_id,a.s_score from score a
where (select COUNT(1) from score b where b.c_id=a.c_id and b.s_score>=a.s_score)<=2 ORDER BY a.c_id
-- 43. Enrollment per course (only courses with more than 5 students); output course id and count, ordered by count descending, then course id ascending
select c_id,count(*) as total from score GROUP BY c_id HAVING total>5 ORDER BY total,c_id ASC
-- 44. Ids of students taking at least two courses
select s_id,count(*) as sel from score GROUP BY s_id HAVING sel>=2
-- 45. Students enrolled in every course
select * from student where s_id in(
select s_id from score GROUP BY s_id HAVING count(*)=(select count(*) from course))
-- 46. Each student's age
-- based on the birth date: if today's month-day is before the birth month-day, subtract one year
select s_birth,(DATE_FORMAT(NOW(),'%Y')-DATE_FORMAT(s_birth,'%Y') -
(case when DATE_FORMAT(NOW(),'%m%d')>DATE_FORMAT(s_birth,'%m%d') then 0 else 1 end)) as age
from student;
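The month-day comparison in query 46 is the standard full-years age rule (note that the SQL's `>` treats a birthday falling exactly on today as not yet reached; `>=` would avoid that off-by-one). The same rule in Python:

```python
from datetime import date

def age(birth, today):
    """Full years between birth and today, subtracting one if the
    birthday has not yet occurred this year."""
    years = today.year - birth.year
    if (today.month, today.day) < (birth.month, birth.day):
        years -= 1
    return years

print(age(date(1990, 1, 1), date(2018, 10, 11)))    # 28
print(age(date(1990, 12, 21), date(2018, 10, 11)))  # 27
```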

-- 47. Find students whose birthday falls this week
-- note: WEEK(s_birth)/YEARWEEK(s_birth) are computed against the birth year's calendar, so these are approximations
select * from student where WEEK(DATE_FORMAT(NOW(),'%Y%m%d'))=WEEK(s_birth)
select * from student where YEARWEEK(s_birth)=YEARWEEK(DATE_FORMAT(NOW(),'%Y%m%d'))

-- scratch: the current week number
select WEEK(DATE_FORMAT(NOW(),'%Y%m%d'))

-- 48. Find students whose birthday falls next week
select * from student where WEEK(DATE_FORMAT(NOW(),'%Y%m%d'))+1 =WEEK(s_birth)

-- 49. Find students whose birthday falls this month

select * from student where MONTH(DATE_FORMAT(NOW(),'%Y%m%d')) =MONTH(s_birth)

-- 50. Find students whose birthday falls next month
-- MONTH(NOW())+1 would be 13 in December, so wrap with modulo
select * from student where MONTH(DATE_FORMAT(NOW(),'%Y%m%d')) % 12 + 1 =MONTH(s_birth)
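A plain `MONTH(NOW()) + 1` comparison never matches a January birthday when the query runs in December (12 + 1 = 13), so the "next month" value has to wrap modulo 12:

```python
def next_month(m: int) -> int:
    # 1..11 map to the following month; 12 (December) wraps to 1 (January).
    return m % 12 + 1

assert next_month(11) == 12
assert next_month(12) == 1  # December wraps to January
```

The same `m % 12 + 1` expression works directly inside the SQL WHERE clause.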


Case study: basic MySQL query practice

+++++++++++++++++++++++++++++++++++++++++++++++++++

Basic table-operation exercises

1 Copy all records of the user table into the teacher table of the teadb database

Copy only the structure of the user table into a new tea2 table in the teadb database

mysql> create table teacher select * from teadb;

Query OK, 41 rows affected (0.32 sec)

Records: 41  Duplicates: 0  Warnings: 0

 

mysql> select * from teacher;

+—-+———————+——+——+——–+———-+——-+——-+—————————————————————–+—————————+—————-+———-+

| id | name                | sex  | age  | s_year | password | uid   | gid   | comment                                                         | homedir                   | shell          | pay      |

+—-+———————+——+——+——–+———-+——-+——-+—————————————————————–+—————————+—————-+———-+

|  1 | root                | boy  |   21 |   1990 | x        |     0 |     0 | root                                                            | /root                     | /sbin/nologin  | 30000.00 |

|  2 | bin                 | boy  |   21 |   1990 | x        |     1 |     1 | bin                                                             | /bin                      | /sbin/nologin  |  5000.00 |

|  4 | adm                 | boy  |   21 |   1990 | x        |     3 |     4 | adm                                                             | /var/adm                  | /sbin/nologin  |  5000.00 |

|  5 | lp                  | boy  |   21 |   1990 | x        |     4 |     7 | lp                                                              | /var/spool/lpd            | /sbin/nologin  |  5000.00 |

|  6 | sync                | boy  |   21 |   1990 | x        |     5 |     0 | sync                                                            | /sbin                     | /sbin/nologin  |  5000.00 |

|  7 | shutdown            | boy  |   21 |   1990 | x        |     6 |     0 | shutdown                                                        | /sbin                     | /sbin/shutdown |  5000.00 |

|  8 | halt                | boy  |   21 |   1990 | x        |     7 |     0 | halt                                                            | /sbin                     | /sbin/halt     |  5000.00 |

|  9 | mail                | boy  |   21 |   1990 | x        |     8 |    12 | mail                                                            | /var/spool/mail           | /sbin/nologin  |  5000.00 |

| 10 | operator            | girl |   21 |   1990 | x        |    11 |     0 | operator                                                        | /root                     | /sbin/nologin  | 10000.00 |

| 11 | games               | girl |   21 |   1990 | x        |    12 |   100 | games                                                           | /root                     | /sbin/nologin  | 10000.00 |

| 12 | ftp                 | girl |   21 |   1990 | x        |    14 |    50 | FTP User                                                        | /var/ftp                  | /sbin/nologin  | 10000.00 |

| 13 | nobody              | girl |   21 |   1990 | x        |    99 |    99 | Nobody                                                          | /                         | /sbin/nologin  | 10000.00 |

| 14 | systemd-network     | girl |   21 |   1990 | x        |   192 |   192 | systemd Network Management                                      | /root                     | /sbin/nologin  | 10000.00 |

| 16 | polkitd             | girl |   21 |   1990 | x        |   999 |   998 | User for polkitd                                                | /                         | /sbin/nologin  | 10000.00 |

| 17 | libstoragemgmt      | girl |   21 |   1990 | x        |   998 |   996 | daemon account for libstoragemgmt                               | /var/run/lsm              | /sbin/nologin  | 10000.00 |

| 18 | rpc                 | girl |   21 |   1990 | x        |    32 |    32 | Rpcbind Daemon                                                  | /var/lib/rpcbind          | /sbin/nologin  | 10000.00 |

| 19 | colord              | girl |   21 |   1990 | x        |   997 |   995 | User for colord                                                 | /var/lib/colord           | /sbin/nologin  | 10000.00 |

| 20 | saslauth            | girl |   21 |   1990 | x        |   996 |    76 | Saslauthd user                                                  | /run/saslauthd            | /sbin/nologin  | 10000.00 |

| 21 | abrt                | girl |   21 |   1990 | x        |   173 |   173 |                                                                 | /root                     | /sbin/nologin  | 10000.00 |

| 22 | rtkit               | girl |   21 |   1990 | x        |   172 |   172 | RealtimeKit                                                     | /root                     | /sbin/nologin  | 10000.00 |

| 23 | radvd               | girl |   21 |   1990 | x        |    75 |    75 | radvd user                                                      | /                         | /sbin/nologin  | 10000.00 |

| 24 | chrony              | girl |   21 |   1990 | x        |   995 |   993 |                                                                 | /var/lib/chrony           | /sbin/nologin  | 10000.00 |

| 25 | tss                 | girl |   21 |   1990 | x        |    59 |    59 | Account used by the trousers package to sandbox the tcsd daemon | /dev/null                 | /sbin/nologin  | 10000.00 |

| 26 | usbmuxd             | girl |   21 |   1990 | x        |   113 |   113 | usbmuxd user                                                    | /root                     | /sbin/nologin  | 10000.00 |

| 27 | geoclue             | girl |   21 |   1990 | x        |   994 |   991 | User for geoclue                                                | /var/lib/geoclue          | /sbin/nologin  | 10000.00 |

| 28 | qemu                | girl |   21 |   1990 | x        |   107 |   107 | qemu user                                                       | /root                     | /sbin/nologin  | 10000.00 |

| 29 | rpcuser             | girl |   21 |   1990 | x        |    29 |    29 | RPC Service User                                                | /var/lib/nfs              | /sbin/nologin  | 10000.00 |

| 30 | nfsnobody           | girl |   21 |   1990 | x        | 65534 | 65534 | Anonymous NFS User                                              | /var/lib/nfs              | /sbin/nologin  | 10000.00 |

| 31 | setroubleshoot      | girl |   21 |   1990 | x        |   993 |   990 |                                                                 | /var/lib/setroubleshoot   | /sbin/nologin  | 10000.00 |

| 32 | pulse               | girl |   21 |   1990 | x        |   171 |   171 | PulseAudio System Daemon                                        | /root                     | /sbin/nologin  | 10000.00 |

| 33 | gdm                 | girl |   21 |   1990 | x        |    42 |    42 |                                                                 | /var/lib/gdm              | /sbin/nologin  | 10000.00 |

| 34 | gnome-initial-setup | girl |   21 |   1990 | x        |   992 |   987 |                                                                 | /run/gnome-initial-setup/ | /sbin/nologin  | 10000.00 |

| 35 | sshd                | girl |   21 |   1990 | x        |    74 |    74 | Privilege-separated SSH                                         | /var/empty/sshd           | /sbin/nologin  | 10000.00 |

| 36 | avahi               | girl |   21 |   1990 | x        |    70 |    70 | Avahi mDNS/DNS-SD Stack                                         | /var/run/avahi-daemon     | /sbin/nologin  | 10000.00 |

| 37 | postfix             | girl |   21 |   1990 | x        |    89 |    89 |                                                                 | /var/spool/postfix        | /sbin/nologin  | 10000.00 |

| 38 | ntp                 | girl |   21 |   1990 | x        |    38 |    38 |                                                                 | /etc/ntp                  | /sbin/nologin  | 10000.00 |

| 39 | tcpdump             | girl |   21 |   1990 | x        |    72 |    72 |                                                                 | /                         | /sbin/nologin  | 10000.00 |

| 40 | lisi                | girl |   21 |   1990 | x        |  1000 |  1000 | lisi                                                            | /home/lisi                | /bin/bash      | 10000.00 |

| 41 | mysql               | girl |   21 |   1990 | x        |    27 |    27 | MySQL Server                                                    | /var/lib/mysql            | /bin/false     | 10000.00 |

| 42 | rtestd              | boy  |   21 |   1990 | NULL     |  1000 |  NULL | NULL                                                            | NULL                      | NULL           |  5000.00 |

| 43 | rtest2d             | boy  |   21 |   1990 | NULL     |  2000 |  NULL | NULL                                                            | NULL                      | NULL           |  5000.00 |

+—-+———————+——+——+——–+———-+——-+——-+—————————————————————–+—————————+—————-+———-+

41 rows in set (0.00 sec)

 

 

mysql> create table tea2 select * from teadb where 1=2;

Query OK, 0 rows affected (0.29 sec)

Records: 0  Duplicates: 0  Warnings: 0

 

mysql> desc tea2;

+———-+——————–+——+—–+———+——-+

| Field    | Type               | Null | Key | Default | Extra |

+———-+——————–+——+—–+———+——-+

| id       | int(3)             | NO   |     | 0       |       |

| name     | char(50)           | YES  |     | NULL    |       |

| sex      | enum(‘boy’,’girl’) | YES  |     | boy     |       |

| age      | int(2) unsigned    | YES  |     | 21      |       |

| s_year   | int(4)             | YES  |     | 1990    |       |

| password | char(30)           | YES  |     | NULL    |       |

| uid      | int(3)             | YES  |     | NULL    |       |

| gid      | int(3)             | YES  |     | NULL    |       |

| comment  | char(80)           | YES  |     | NULL    |       |

| homedir  | char(50)           | YES  |     | NULL    |       |

| shell    | char(50)           | YES  |     | NULL    |       |

| pay      | float(7,2)         | YES  |     | 5000.00 |       |

+———-+——————–+——+—–+———+——-+

12 rows in set (0.00 sec)

 

2 View the structure of the tea2 table in the teadb database, then drop the id column


 

mysql> desc tea2;

+———-+——————–+——+—–+———+——-+

| Field    | Type               | Null | Key | Default | Extra |

+———-+——————–+——+—–+———+——-+

| id       | int(3)             | NO   |     | 0       |       |

| name     | char(50)           | YES  |     | NULL    |       |

| sex      | enum(‘boy’,’girl’) | YES  |     | boy     |       |

| age      | int(2) unsigned    | YES  |     | 21      |       |

| s_year   | int(4)             | YES  |     | 1990    |       |

| password | char(30)           | YES  |     | NULL    |       |

| uid      | int(3)             | YES  |     | NULL    |       |

| gid      | int(3)             | YES  |     | NULL    |       |

| comment  | char(80)           | YES  |     | NULL    |       |

| homedir  | char(50)           | YES  |     | NULL    |       |

| shell    | char(50)           | YES  |     | NULL    |       |

| pay      | float(7,2)         | YES  |     | 5000.00 |       |

+———-+——————–+——+—–+———+——-+

12 rows in set (0.00 sec)

 

mysql> alter table tea2 drop id;

Query OK, 0 rows affected (0.76 sec)

Records: 0  Duplicates: 0  Warnings: 0

 

mysql> desc tea2;

+———-+——————–+——+—–+———+——-+

| Field    | Type               | Null | Key | Default | Extra |

+———-+——————–+——+—–+———+——-+

| name     | char(50)           | YES  |     | NULL    |       |

| sex      | enum(‘boy’,’girl’) | YES  |     | boy     |       |

| age      | int(2) unsigned    | YES  |     | 21      |       |

| s_year   | int(4)             | YES  |     | 1990    |       |

| password | char(30)           | YES  |     | NULL    |       |

| uid      | int(3)             | YES  |     | NULL    |       |

| gid      | int(3)             | YES  |     | NULL    |       |

| comment  | char(80)           | YES  |     | NULL    |       |

| homedir  | char(50)           | YES  |     | NULL    |       |

| shell    | char(50)           | YES  |     | NULL    |       |

| pay      | float(7,2)         | YES  |     | 5000.00 |       |

+———-+——————–+——+—–+———+——-+

11 rows in set (0.00 sec)

 

3 Load the contents of the /etc/passwd file into the tea2 table of the teadb database

mysql> create table teadb(

-> name char(50),

-> password char(30),

-> uid int(3),

-> gid int(3),

-> comment char(80),

-> homedir char(50),

-> shell char(50)

-> );

mysql> desc teadb;

+———-+———-+——+—–+———+——-+

| Field    | Type     | Null | Key | Default | Extra |

+———-+———-+——+—–+———+——-+

| name     | char(50) | YES  |     | NULL    |       |

| password | char(30) | YES  |     | NULL    |       |

| uid      | int(3)   | YES  |     | NULL    |       |

| gid      | int(3)   | YES  |     | NULL    |       |

| comment  | char(80) | YES  |     | NULL    |       |

| homedir  | char(50) | YES  |     | NULL    |       |

| shell    | char(50) | YES  |     | NULL    |       |

+———-+———-+——+—–+———+——-+

 

mysql> load data infile "/mysqldir/passwd" into table teadb fields terminated by ":" lines terminated by "\n";

Query OK, 41 rows affected (0.07 sec)

Records: 41  Deleted: 0  Skipped: 0  Warnings: 0

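LOAD DATA INFILE above splits each copied /etc/passwd line on ":" and maps the seven fields onto the table's columns; the same parse in Python (the sample line is invented for illustration):

```python
# One /etc/passwd-style record: seven colon-separated fields.
line = "lisi:x:1000:1000:lisi:/home/lisi:/bin/bash"
name, password, uid, gid, comment, homedir, shell = line.split(":")
print(name, uid, shell)
```

Everything arrives as a string; MySQL coerces uid/gid into the table's int columns, which a Python loader would do explicitly with `int(uid)`.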

 

Perform the following operations on the tea2 table of the teadb database:

 

4 Make the name column an index column

mysql> alter table teadb add index(name);

Query OK, 0 rows affected (0.24 sec)

Records: 0  Duplicates: 0  Warnings: 0

 

5 Add a record-number column id above all other columns; its value auto-increments.

mysql> alter table teadb add id int(3) primary key auto_increment first;

Query OK, 0 rows affected (0.61 sec)

Records: 0  Duplicates: 0  Warnings: 0

 

mysql> desc teadb;

+———-+———-+——+—–+———+—————-+

| Field    | Type     | Null | Key | Default | Extra          |

+———-+———-+——+—–+———+—————-+

| id       | int(3)   | NO   | PRI | NULL    | auto_increment |

| name     | char(50) | YES  | MUL | NULL    |                |

| password | char(30) | YES  |     | NULL    |                |

| uid      | int(3)   | YES  |     | NULL    |                |

| gid      | int(3)   | YES  |     | NULL    |                |

| comment  | char(80) | YES  |     | NULL    |                |

| homedir  | char(50) | YES  |     | NULL    |                |

| shell    | char(50) | YES  |     | NULL    |                |

+———-+———-+——+—–+———+—————-+

8 rows in set (0.00 sec)
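The auto-increment behaviour added above can be sketched with SQLite, whose `INTEGER PRIMARY KEY` hands out ids much like MySQL's AUTO_INCREMENT (the table and rows below are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
# INTEGER PRIMARY KEY in SQLite auto-assigns ids on insert.
conn.execute("CREATE TABLE t (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO t (name) VALUES ('root')")
conn.execute("INSERT INTO t (name) VALUES ('bin')")
ids = conn.execute("SELECT id, name FROM t ORDER BY id").fetchall()
print(ids)
```

Inserts that omit the id column get the next number automatically, which is why LOAD DATA rows imported earlier pick up sequential ids.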

 

6 Add an s_year column below name to hold the birth year; default 1990

mysql> alter table teadb add s_year int(4) default 1990 after name;

Query OK, 0 rows affected (0.57 sec)

Records: 0  Duplicates: 0  Warnings: 0

 

mysql> desc teadb;

+———-+———-+——+—–+———+—————-+

| Field    | Type     | Null | Key | Default | Extra          |

+———-+———-+——+—–+———+—————-+

| id       | int(3)   | NO   | PRI | NULL    | auto_increment |

| name     | char(50) | YES  | MUL | NULL    |                |

| s_year   | int(4)   | YES  |     | 1990    |                |

| password | char(30) | YES  |     | NULL    |                |

| uid      | int(3)   | YES  |     | NULL    |                |

| gid      | int(3)   | YES  |     | NULL    |                |

| comment  | char(80) | YES  |     | NULL    |                |

| homedir  | char(50) | YES  |     | NULL    |                |

| shell    | char(50) | YES  |     | NULL    |                |

+———-+———-+——+—–+———+—————-+

9 rows in set (0.00 sec)

 

 

7 Add a sex column below name whose value can only be girl or boy; default boy

mysql> alter table teadb add sex enum("boy","girl") default "boy";

mysql> alter table teadb change sex sex enum("boy","girl") default "boy" after name;

mysql> desc teadb;

+———-+——————–+——+—–+———+—————-+

| Field    | Type               | Null | Key | Default | Extra          |

+———-+——————–+——+—–+———+—————-+

| id       | int(3)             | NO   | PRI | NULL    | auto_increment |

| name     | char(50)           | YES  | MUL | NULL    |                |

| sex      | enum(‘boy’,’girl’) | YES  |     | boy     |                |

| s_year   | int(4)             | YES  |     | 1990    |                |

| password | char(30)           | YES  |     | NULL    |                |

| uid      | int(3)             | YES  |     | NULL    |                |

| gid      | int(3)             | YES  |     | NULL    |                |

| comment  | char(80)           | YES  |     | NULL    |                |

| homedir  | char(50)           | YES  |     | NULL    |                |

| shell    | char(50)           | YES  |     | NULL    |                |

+———-+——————–+——+—–+———+—————-+

10 rows in set (0.00 sec)

 

8 Add an age column below sex to hold the age; negative input is not allowed; default 21

mysql> alter table teadb add age int(2) unsigned default 21 after sex;

Query OK, 0 rows affected (0.82 sec)

Records: 0  Duplicates: 0  Warnings: 0

 

mysql> desc teadb;

+———-+——————–+——+—–+———+—————-+

| Field    | Type               | Null | Key | Default | Extra          |

+———-+——————–+——+—–+———+—————-+

| id       | int(3)             | NO   | PRI | NULL    | auto_increment |

| name     | char(50)           | YES  | MUL | NULL    |                |

| sex      | enum(‘boy’,’girl’) | YES  |     | boy     |                |

| age      | int(2) unsigned    | YES  |     | 21      |                |

| s_year   | int(4)             | YES  |     | 1990    |                |

| password | char(30)           | YES  |     | NULL    |                |

| uid      | int(3)             | YES  |     | NULL    |                |

| gid      | int(3)             | YES  |     | NULL    |                |

| comment  | char(80)           | YES  |     | NULL    |                |

| homedir  | char(50)           | YES  |     | NULL    |                |

| shell    | char(50)           | YES  |     | NULL    |                |

+———-+——————–+——+—–+———+—————-+

11 rows in set (0.00 sec)

 

9 Change the sex of users whose id is between 10 and 50 to girl

mysql> select id,name,sex,age,s_year from teadb where id between 10 and 50;

+—-+———————+——+——+——–+

| id | name                | sex  | age  | s_year |

+—-+———————+——+——+——–+

| 10 | operator            | boy  |   21 |   1990 |

| 11 | games               | boy  |   21 |   1990 |

| 12 | ftp                 | boy  |   21 |   1990 |

| 13 | nobody              | boy  |   21 |   1990 |

| 14 | systemd-network     | boy  |   21 |   1990 |

| 15 | dbus                | boy  |   21 |   1990 |

| 16 | polkitd             | boy  |   21 |   1990 |

| 17 | libstoragemgmt      | boy  |   21 |   1990 |

| 18 | rpc                 | boy  |   21 |   1990 |

| 19 | colord              | boy  |   21 |   1990 |

| 20 | saslauth            | boy  |   21 |   1990 |

| 21 | abrt                | boy  |   21 |   1990 |

| 22 | rtkit               | boy  |   21 |   1990 |

| 23 | radvd               | boy  |   21 |   1990 |

| 24 | chrony              | boy  |   21 |   1990 |

| 25 | tss                 | boy  |   21 |   1990 |

| 26 | usbmuxd             | boy  |   21 |   1990 |

| 27 | geoclue             | boy  |   21 |   1990 |

| 28 | qemu                | boy  |   21 |   1990 |

| 29 | rpcuser             | boy  |   21 |   1990 |

| 30 | nfsnobody           | boy  |   21 |   1990 |

| 31 | setroubleshoot      | boy  |   21 |   1990 |

| 32 | pulse               | boy  |   21 |   1990 |

| 33 | gdm                 | boy  |   21 |   1990 |

| 34 | gnome-initial-setup | boy  |   21 |   1990 |

| 35 | sshd                | boy  |   21 |   1990 |

| 36 | avahi               | boy  |   21 |   1990 |

| 37 | postfix             | boy  |   21 |   1990 |

| 38 | ntp                 | boy  |   21 |   1990 |

| 39 | tcpdump             | boy  |   21 |   1990 |

| 40 | lisi                | boy  |   21 |   1990 |

| 41 | mysql               | boy  |   21 |   1990 |

+—-+———————+——+——+——–+

32 rows in set (0.00 sec)

 

mysql> update teadb set sex="girl" where id between 10 and 50;

Query OK, 32 rows affected (0.11 sec)

Rows matched: 32  Changed: 32  Warnings: 0

 

mysql> select id,name,sex,age,s_year from teadb where id between 10 and 50;

+—-+———————+——+——+——–+

| id | name                | sex  | age  | s_year |

+—-+———————+——+——+——–+

| 10 | operator            | girl |   21 |   1990 |

| 11 | games               | girl |   21 |   1990 |

| 12 | ftp                 | girl |   21 |   1990 |

| 13 | nobody              | girl |   21 |   1990 |

| 14 | systemd-network     | girl |   21 |   1990 |

| 15 | dbus                | girl |   21 |   1990 |

| 16 | polkitd             | girl |   21 |   1990 |

| 17 | libstoragemgmt      | girl |   21 |   1990 |

| 18 | rpc                 | girl |   21 |   1990 |

| 19 | colord              | girl |   21 |   1990 |

| 20 | saslauth            | girl |   21 |   1990 |

| 21 | abrt                | girl |   21 |   1990 |

| 22 | rtkit               | girl |   21 |   1990 |

| 23 | radvd               | girl |   21 |   1990 |

| 24 | chrony              | girl |   21 |   1990 |

| 25 | tss                 | girl |   21 |   1990 |

| 26 | usbmuxd             | girl |   21 |   1990 |

| 27 | geoclue             | girl |   21 |   1990 |

| 28 | qemu                | girl |   21 |   1990 |

| 29 | rpcuser             | girl |   21 |   1990 |

| 30 | nfsnobody           | girl |   21 |   1990 |

| 31 | setroubleshoot      | girl |   21 |   1990 |

| 32 | pulse               | girl |   21 |   1990 |

| 33 | gdm                 | girl |   21 |   1990 |

| 34 | gnome-initial-setup | girl |   21 |   1990 |

| 35 | sshd                | girl |   21 |   1990 |

| 36 | avahi               | girl |   21 |   1990 |

| 37 | postfix             | girl |   21 |   1990 |

| 38 | ntp                 | girl |   21 |   1990 |

| 39 | tcpdump             | girl |   21 |   1990 |

| 40 | lisi                | girl |   21 |   1990 |

| 41 | mysql               | girl |   21 |   1990 |

+—-+———————+——+——+——–+

32 rows in set (0.00 sec)

 

10 Count how many users have sex girl.

mysql> select count(*) from teadb where sex="girl";

+———-+

| count(*) |

+———-+

|       32 |

+———-+

1 row in set (0.00 sec)

 

11 Find the username of the girl user with the largest uid.

mysql> select name,uid from teadb where sex="girl" order by uid desc limit 1;

+-----------+-------+

| name      | uid   |

+-----------+-------+

| nfsnobody | 65534 |

+-----------+-------+

1 row in set (0.00 sec)

 

 

12 Add a new record giving only the name and uid columns: rtestd, 1000

 

Add another new record giving only the name and uid columns: rtest2d, 2000

mysql> insert into teadb(name,uid) values("rtestd",1000),("rtest2d",2000);

Query OK, 2 rows affected (0.34 sec)

Records: 2  Duplicates: 0  Warnings: 0

 

mysql> select * from teadb where uid=1000 or uid=2000;

+—-+———+——+——+——–+———-+——+——+———+————+———–+

| id | name    | sex  | age  | s_year | password | uid  | gid  | comment | homedir    | shell     |

+—-+———+——+——+——–+———-+——+——+———+————+———–+

| 40 | lisi    | girl |   21 |   1990 | x        | 1000 | 1000 | lisi    | /home/lisi | /bin/bash |

| 42 | rtestd  | boy  |   21 |   1990 | NULL     | 1000 | NULL | NULL    | NULL       | NULL      |

| 43 | rtest2d | boy  |   21 |   1990 | NULL     | 2000 | NULL | NULL    | NULL       | NULL      |

+—-+———+——+——+——–+———-+——+——+———+————+———–+

3 rows in set (0.00 sec)

 

13 Show the username and uid value of users whose uid has four digits.

mysql> select name,uid from teadb where uid like "____";

+———+——+

| name    | uid  |

+———+——+

| lisi    | 1000 |

| rtestd  | 1000 |

| rtest2d | 2000 |

+———+——+

3 rows in set (0.00 sec)
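The `LIKE "____"` pattern matches exactly four characters, so on the numeric uid column it selects four-digit uids — the value is compared as a string. SQLite shows the same coercion (table and rows below are invented):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, uid INTEGER)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("lisi", 1000), ("rtest2d", 2000), ("bin", 1)])
# LIKE casts the integer uid to text, so '____' means "four digits" here.
four = conn.execute(
    "SELECT name FROM t WHERE uid LIKE '____' ORDER BY uid"
).fetchall()
print(four)
```

A range predicate such as `uid BETWEEN 1000 AND 9999` expresses the same intent without relying on string coercion.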

 

14 Show the username and uid of users whose name starts with the letter r and ends with the letter d.

mysql> select name,uid from teadb where name like 'r%d';

+———+——+

| name    | uid  |

+———+——+

| radvd   |   75 |

| rtest2d | 2000 |

| rtestd  | 1000 |

+———+——+

3 rows in set (0.00 sec)

 

 

15 Check whether there is any user whose name starts with the letter a and ends with the letter c.

mysql> select name,uid from teadb where name like 'a%c';

Empty set (0.00 sec)

 

 

16 Change the home directory of users whose gid is between 100 and 500 to /root

mysql> select gid,homedir from teadb where gid between 100 and 500;

+——+—————-+

| gid  | homedir        |

+——+—————-+

|  100 | /usr/games     |

|  192 | /              |

|  173 | /etc/abrt      |

|  172 | /proc          |

|  113 | /              |

|  107 | /              |

|  171 | /var/run/pulse |

+——+—————-+

7 rows in set (0.00 sec)

 

mysql> update teadb set homedir="/root" where gid between 100 and 500;

Query OK, 7 rows affected (0.04 sec)

Rows matched: 7  Changed: 7  Warnings: 0

 

mysql> select gid,homedir from teadb where gid between 100 and 500;

+——+———+

| gid  | homedir |

+——+———+

|  100 | /root   |

|  192 | /root   |

|  173 | /root   |

|  172 | /root   |

|  113 | /root   |

|  107 | /root   |

|  171 | /root   |

+——+———+

7 rows in set (0.00 sec)

 

17 Change the shell of the root, bin and sync users to /sbin/nologin

mysql> select name,shell from teadb where name="root" or name="bin" or name="sync";

+——+—————+

| name | shell         |

+——+—————+

| bin  | /sbin/nologin |

| root | /bin/bash     |

| sync | /bin/sync     |

+——+—————+

3 rows in set (0.00 sec)

 

mysql> update teadb set shell="/sbin/nologin" where name="root" or name="bin" or name="sync";

Query OK, 2 rows affected (0.30 sec)

Rows matched: 3  Changed: 2  Warnings: 0

 

mysql> select name,shell from teadb where name="root" or name="bin" or name="sync";

+——+—————+

| name | shell         |

+——+—————+

| bin  | /sbin/nologin |

| root | /sbin/nologin |

| sync | /sbin/nologin |

+——+—————+

3 rows in set (0.00 sec)

 

18 See which shells are used by users with gid less than 10

mysql> select name,shell from teadb where gid<10;

+———-+—————-+

| name     | shell          |

+———-+—————-+

| root     | /sbin/nologin  |

| bin      | /sbin/nologin  |

| daemon   | /sbin/nologin  |

| adm      | /sbin/nologin  |

| lp       | /sbin/nologin  |

| sync     | /sbin/nologin  |

| shutdown | /sbin/shutdown |

| halt     | /sbin/halt     |

| operator | /sbin/nologin  |

+———-+—————-+

9 rows in set (0.00 sec)

 

19 Delete the users whose name starts with the letter d.

mysql> select id,name from teadb where name like "d%";

+—-+——–+

| id | name   |

+—-+——–+

|  3 | daemon |

| 15 | dbus   |

+—-+——–+

2 rows in set (0.00 sec)

 

mysql> delete from teadb where name like "d%";

Query OK, 2 rows affected (0.04 sec)

 

mysql> select id,name from teadb where name like "d%";

Empty set (0.00 sec)

 

20 Find the shells used by the five users with the largest gid

asc/desc

 

mysql> select name,shell,gid from teadb order by gid desc limit 5;

+—————-+—————+——-+

| name           | shell         | gid   |

+—————-+—————+——-+

| nfsnobody      | /sbin/nologin | 65534 |

| lisi           | /bin/bash     |  1000 |

| polkitd        | /sbin/nologin |   998 |

| libstoragemgmt | /sbin/nologin |   996 |

| colord         | /sbin/nologin |   995 |

+—————-+—————+——-+

5 rows in set (0.00 sec)

 

 

21 See which users have no home directory

mysql> select name,homedir from teadb where homedir="" or homedir is NULL;

+———+———+

| name    | homedir |

+———+———+

| rtestd  | NULL    |

| rtest2d | NULL    |

+———+———+

2 rows in set (0.00 sec)
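The query above checks both `homedir = ''` and `homedir IS NULL` because an empty string and NULL are distinct values and `=` never matches NULL. A SQLite sketch with invented rows:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE t (name TEXT, homedir TEXT)")
conn.executemany("INSERT INTO t VALUES (?, ?)",
                 [("root", "/root"), ("rtestd", None), ("ghost", "")])
# Catch both "no value" (NULL) and "blank value" ('') home directories.
missing = conn.execute(
    "SELECT name FROM t WHERE homedir = '' OR homedir IS NULL ORDER BY name"
).fetchall()
print(missing)
```

Dropping the `IS NULL` arm would silently skip the rows imported with no homedir field at all.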

 

22 Save the five users with the smallest gid into the file /mybak/min5.txt.

mysql> select gid,name from teadb order by gid asc limit 5;

+——+———-+

| gid  | name     |

+——+———-+

| NULL | rtestd   |

| NULL | rtest2d  |

|    0 | root     |

|    0 | operator |

|    0 | sync     |

+——+———-+

5 rows in set (0.00 sec)

mysql> select gid,name from teadb order by gid asc limit 5 into outfile "/mysqldir/min5.txt";

Query OK, 5 rows affected (0.00 sec)
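SELECT ... INTO OUTFILE writes rows server-side as tab-separated text with NULL rendered as `\N`. A client-side Python equivalent of that formatting (the rows below are invented, mirroring the result above):

```python
import csv
import io

rows = [(None, "rtestd"), (0, "root")]
buf = io.StringIO()
writer = csv.writer(buf, delimiter="\t", lineterminator="\n")
for gid, name in rows:
    # MySQL's OUTFILE dumps represent NULL as \N.
    writer.writerow(["\\N" if gid is None else gid, name])
print(buf.getvalue())
```

Note that OUTFILE paths are created by the server process, so the target directory must be writable by mysqld and allowed by `secure_file_priv`.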

 

Use the useradd command to add a system login user named lucy

 

23 Add the lucy user's info to the teacher table

mysql> load data infile '/mysqldir/passwd1' into table teadb3 fields terminated by ":" lines terminated by "\n" (name,password,uid,gid,comment,homedir,shell);

Query OK, 1 row affected (0.06 sec)

Records: 1  Deleted: 0  Skipped: 0  Warnings: 0

 

mysql> select * from teadb3;

+—-+———-+——+—–+——–+———-+——+——+———+———————–+—————-+

| id | name     | sex  | age | s_year | password | uid  | gid  | comment | homedir               | shell          |

+—-+———-+——+—–+——–+———-+——+——+———+———————–+—————-+

|  1 | root     | boy  |  21 |   1990 | x        |    0 |    0 | XXX     | /root                 | /sbin/nologin  |

|  2 | bin      | boy  |  21 |   1990 | x        |    1 |    1 | XXX     | /bin                  | /sbin/nologin  |

|  4 | adm      | boy  |  21 |   1990 | x        |    3 |    4 | XXX     | /var/adm              | /sbin/nologin  |

|  5 | lp       | boy  |  21 |   1990 | x        |    4 |    7 | XXX     | /var/spool/lpd        | /sbin/nologin  |

|  6 | sync     | boy  |  21 |   1990 | x        |    5 |    0 | XXX     | /sbin                 | /sbin/nologin  |

|  7 | shutdown | boy  |  21 |   1990 | x        |    6 |    0 | XXX     | /sbin                 | /sbin/shutdown |

|  8 | halt     | boy  |  21 |   1990 | x        |    7 |    0 | XXX     | /sbin                 | /sbin/halt     |

|  9 | mail     | boy  |  21 |   1990 | x        |    8 |   12 | XXX     | /var/spool/mail       | /sbin/nologin  |

| 10 | operator | girl |  21 |   1990 | x        |   11 |    0 | XXX     | /root                 | /sbin/nologin  |

| 11 | games    | girl |  21 |   1990 | x        |   12 |  100 | XXX     | /root                 | /sbin/nologin  |

| 12 | ftp      | girl |  21 |   1990 | x        |   14 |   50 | XXX     | /var/ftp              | /sbin/nologin  |

| 13 | nobody   | girl |  21 |   1990 | x        |   99 |   99 | XXX     | /                     | /sbin/nologin  |

| 18 | rpc      | girl |  21 |   1990 | x        |   32 |   32 | XXX     | /var/lib/rpcbind      | /sbin/nologin  |

| 23 | radvd    | girl |  21 |   1990 | x        |   75 |   75 | XXX     | /                     | /sbin/nologin  |

| 25 | tss      | girl |  21 |   1990 | x        |   59 |   59 | XXX     | /dev/null             | /sbin/nologin  |

| 29 | rpcuser  | girl |  21 |   1990 | x        |   29 |   29 | XXX     | /var/lib/nfs          | /sbin/nologin  |

| 33 | gdm      | girl |  21 |   1990 | x        |   42 |   42 | XXX     | /var/lib/gdm          | /sbin/nologin  |

| 35 | sshd     | girl |  21 |   1990 | x        |   74 |   74 | XXX     | /var/empty/sshd       | /sbin/nologin  |

| 36 | avahi    | girl |  21 |   1990 | x        |   70 |   70 | XXX     | /var/run/avahi-daemon | /sbin/nologin  |

| 37 | postfix  | girl |  21 |   1990 | x        |   89 |   89 | XXX     | /var/spool/postfix    | /sbin/nologin  |

| 38 | ntp      | girl |  21 |   1990 | x        |   38 |   38 | XXX     | /etc/ntp              | /sbin/nologin  |

| 39 | tcpdump  | girl |  21 |   1990 | x        |   72 |   72 | XXX     | /                     | /sbin/nologin  |

| 41 | mysql    | girl |  21 |   1990 | x        |   27 |   27 | XXX     | /var/lib/mysql        | /bin/false     |

|  0 | lucy     | boy  |   0 |      0 | x        | 1001 | 1001 |         | /home/lucy            | /bin/bash      |

+—-+———-+——+—–+——–+———-+——+——+———+———————–+—————-+

24 rows in set (0.00 sec)

 

 

24 Drop the comment column from the table

mysql> alter table teadb2 drop comment;

Query OK, 0 rows affected (0.48 sec)

Records: 0  Duplicates: 0  Warnings: 0

 

25 Disallow empty (NULL) values for every column in the table

mysql> alter table teadb2 change age age int(2) unsigned not null;

Query OK, 0 rows affected (0.47 sec)

Records: 0  Duplicates: 0  Warnings: 0

 

mysql> alter table teadb2 change s_year s_year int(4) not null;

Query OK, 0 rows affected (0.51 sec)

Records: 0  Duplicates: 0  Warnings: 0

mysql> desc teadb2;

+----------+--------------------+------+-----+---------+-------+

| Field    | Type               | Null | Key | Default | Extra |

+----------+--------------------+------+-----+---------+-------+

| id       | int(3)             | NO   |     | 0       |       |

| name     | char(50)           | NO   |     | NULL    |       |

| sex      | enum('boy','girl') | NO   |     | NULL    |       |

| age      | int(2) unsigned    | NO   |     | NULL    |       |

| s_year   | int(4)             | NO   |     | NULL    |       |

| password | char(30)           | YES  |     | NULL    |       |

| uid      | int(3)             | YES  |     | NULL    |       |

| gid      | int(3)             | YES  |     | NULL    |       |

| homedir  | char(50)           | YES  |     | NULL    |       |

| shell    | char(50)           | YES  |     | NULL    |       |

+----------+--------------------+------+-----+---------+-------+

10 rows in set (0.00 sec)
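not null 约束的效果可以用下面的小脚本验证(这里用 Python 自带的 SQLite 做示意,表结构是简化的假设,MySQL 行为类似):

```python
import sqlite3

# 示意:字段加上 not null 后,插入 NULL 会被数据库拒绝
conn = sqlite3.connect(":memory:")
conn.execute("create table teadb2 (name text not null, age int not null)")
conn.execute("insert into teadb2 values ('tom', 21)")   # 正常插入
try:
    conn.execute("insert into teadb2 values ('jim', null)")
    violated = False
except sqlite3.IntegrityError:                          # NULL 违反约束
    violated = True
```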

 

19  删除root 用户家目录字段的值

先查看原记录:

mysql> select * from teadb where name='root';

+----+------+------+------+--------+----------+------+------+---------+---------+---------------+

| id | name | sex  | age  | s_year | password | uid  | gid  | comment | homedir | shell         |

+----+------+------+------+--------+----------+------+------+---------+---------+---------------+

|  1 | root | boy  |   21 |   1990 | x        |    0 |    0 | root    | /root   | /sbin/nologin |

+----+------+------+------+--------+----------+------+------+---------+---------+---------------+

1 row in set (0.00 sec)

再清空 homedir 字段的值:

mysql> update teadb set homedir='' where name='root';

 

20  显示 gid 大于500的用户的用户名 家目录和使用的shell

mysql> select name,homedir,shell from teadb where gid>500;

+---------------------+---------------------------+---------------+

| name                | homedir                   | shell         |

+---------------------+---------------------------+---------------+

| polkitd             | /                         | /sbin/nologin |

| libstoragemgmt      | /var/run/lsm              | /sbin/nologin |

| colord              | /var/lib/colord           | /sbin/nologin |

| chrony              | /var/lib/chrony           | /sbin/nologin |

| geoclue             | /var/lib/geoclue          | /sbin/nologin |

| nfsnobody           | /var/lib/nfs              | /sbin/nologin |

| setroubleshoot      | /var/lib/setroubleshoot   | /sbin/nologin |

| gnome-initial-setup | /run/gnome-initial-setup/ | /sbin/nologin |

| lisi                | /home/lisi                | /bin/bash     |

+---------------------+---------------------------+---------------+

9 rows in set (0.00 sec)

 

21  删除uid大于100的用户记录

mysql> delete from teadb2 where uid>100;

Query OK, 18 rows affected (0.06 sec)

 

mysql> select name,uid from teadb2;

+----------+------+

| name     | uid  |

+----------+------+

| root     |    0 |

| bin      |    1 |

| adm      |    3 |

| lp       |    4 |

| sync     |    5 |

| shutdown |    6 |

| halt     |    7 |

| mail     |    8 |

| operator |   11 |

| games    |   12 |

| ftp      |   14 |

| nobody   |   99 |

| rpc      |   32 |

| radvd    |   75 |

| tss      |   59 |

| rpcuser  |   29 |

| gdm      |   42 |

| sshd     |   74 |

| avahi    |   70 |

| postfix  |   89 |

| ntp      |   38 |

| tcpdump  |   72 |

| mysql    |   27 |

+----------+------+

23 rows in set (0.01 sec)
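delete ... where 的效果(受影响的行数、剩余的记录)可以用 SQLite 做个小示意(数据为节选的假设):

```python
import sqlite3

# 示意:带 where 条件的 delete,只删除匹配行
conn = sqlite3.connect(":memory:")
conn.execute("create table teadb2 (name text, uid int)")
conn.executemany("insert into teadb2 values (?, ?)",
                 [("root", 0), ("bin", 1), ("nobody", 99),
                  ("polkitd", 999), ("chrony", 995)])
cur = conn.execute("delete from teadb2 where uid > 100")
deleted = cur.rowcount          # 被删除的行数,对应 "Query OK, N rows affected"
remaining = [r[0] for r in conn.execute("select name from teadb2 order by uid")]
```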

 

22  显示uid号在10到30区间的用户有多少个。

mysql> select count(*) from teadb where uid between 10 and 30;

+----------+

| count(*) |

+----------+

|        5 |

+----------+

1 row in set (0.00 sec)
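between 10 and 30 是闭区间,等价于 uid>=10 and uid<=30,可以用下面的示意验证(数据为节选的假设):

```python
import sqlite3

# 示意:between 的两个边界都包含在内
conn = sqlite3.connect(":memory:")
conn.execute("create table teadb (name text, uid int)")
conn.executemany("insert into teadb values (?, ?)",
                 [("operator", 11), ("games", 12), ("ftp", 14),
                  ("rpcuser", 29), ("rpc", 32), ("root", 0), ("mysql", 27)])
count = conn.execute(
    "select count(*) from teadb where uid between 10 and 30").fetchone()[0]
```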

 

23  显示uid号是100以内的用户使用shell的类型。

mysql> select name,shell from teadb where uid<100;

+----------+----------------+

| name     | shell          |

+----------+----------------+

| root     | /sbin/nologin  |

| bin      | /sbin/nologin  |

| adm      | /sbin/nologin  |

| lp       | /sbin/nologin  |

| sync     | /sbin/nologin  |

| shutdown | /sbin/shutdown |

| halt     | /sbin/halt     |

| mail     | /sbin/nologin  |

| operator | /sbin/nologin  |

| games    | /sbin/nologin  |

| ftp      | /sbin/nologin  |

| nobody   | /sbin/nologin  |

| rpc      | /sbin/nologin  |

| radvd    | /sbin/nologin  |

| tss      | /sbin/nologin  |

| rpcuser  | /sbin/nologin  |

| gdm      | /sbin/nologin  |

| sshd     | /sbin/nologin  |

| avahi    | /sbin/nologin  |

| postfix  | /sbin/nologin  |

| ntp      | /sbin/nologin  |

| tcpdump  | /sbin/nologin  |

| mysql    | /bin/false     |

+----------+----------------+

23 rows in set (0.00 sec)

 

 

24  显示uid号最小的前10个用户的信息。

mysql> select name,uid,shell from teadb order by uid asc limit 10;

+----------+------+----------------+

| name     | uid  | shell          |

+----------+------+----------------+

| root     |    0 | /sbin/nologin  |

| bin      |    1 | /sbin/nologin  |

| adm      |    3 | /sbin/nologin  |

| lp       |    4 | /sbin/nologin  |

| sync     |    5 | /sbin/nologin  |

| shutdown |    6 | /sbin/shutdown |

| halt     |    7 | /sbin/halt     |

| mail     |    8 | /sbin/nologin  |

| operator |   11 | /sbin/nologin  |

| games    |   12 | /sbin/nologin  |

+----------+------+----------------+

10 rows in set (0.00 sec)

 

 

25  显示表中第10条到第15条记录

注意:limit 9,6 才是"第10到15条";下面的 limit 9,15 实际返回从第10条起的15条记录。

mysql> select id,name,uid,shell from teadb limit 9,15;

+----+-----------------+------+---------------+

| id | name            | uid  | shell         |

+----+-----------------+------+---------------+

| 11 | games           |   12 | /sbin/nologin |

| 12 | ftp             |   14 | /sbin/nologin |

| 13 | nobody          |   99 | /sbin/nologin |

| 14 | systemd-network |  192 | /sbin/nologin |

| 16 | polkitd         |  999 | /sbin/nologin |

| 17 | libstoragemgmt  |  998 | /sbin/nologin |

| 18 | rpc             |   32 | /sbin/nologin |

| 19 | colord          |  997 | /sbin/nologin |

| 20 | saslauth        |  996 | /sbin/nologin |

| 21 | abrt            |  173 | /sbin/nologin |

| 22 | rtkit           |  172 | /sbin/nologin |

| 23 | radvd           |   75 | /sbin/nologin |

| 24 | chrony          |  995 | /sbin/nologin |

| 25 | tss             |   59 | /sbin/nologin |

| 26 | usbmuxd         |  113 | /sbin/nologin |

+----+-----------------+------+---------------+

15 rows in set (0.00 sec)
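limit 的两个参数分别是偏移量和条数,可以用 SQLite 直接验证(SQLite 同样支持 MySQL 的 limit 偏移量,条数 写法):

```python
import sqlite3

# 示意:limit 9,6 取第10~15条;limit 9,15 从第10条起取15条
conn = sqlite3.connect(":memory:")
conn.execute("create table teadb (id integer primary key)")
conn.executemany("insert into teadb values (?)", [(i,) for i in range(1, 31)])
rows_10_to_15 = [r[0] for r in conn.execute("select id from teadb limit 9, 6")]
rows_15_from_10 = [r[0] for r in conn.execute("select id from teadb limit 9, 15")]
```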

 

26  显示uid号小于50且名字里有字母a  用户的详细信息

mysql> select * from teadb where uid<50 and name regexp 'a';

+----+----------+------+------+--------+----------+------+------+----------+-----------------+---------------+

| id | name     | sex  | age  | s_year | password | uid  | gid  | comment  | homedir         | shell         |

+----+----------+------+------+--------+----------+------+------+----------+-----------------+---------------+

|  4 | adm      | boy  |   21 |   1990 | x        |    3 |    4 | adm      | /var/adm        | /sbin/nologin |

|  8 | halt     | boy  |   21 |   1990 | x        |    7 |    0 | halt     | /sbin           | /sbin/halt    |

|  9 | mail     | boy  |   21 |   1990 | x        |    8 |   12 | mail     | /var/spool/mail | /sbin/nologin |

| 10 | operator | girl |   21 |   1990 | x        |   11 |    0 | operator | /root           | /sbin/nologin |

| 11 | games    | girl |   21 |   1990 | x        |   12 |  100 | games    | /root           | /sbin/nologin |

+----+----------+------+------+--------+----------+------+------+----------+-----------------+---------------+

5 rows in set (0.00 sec)

 

27  只显示用户 root   bin   daemon  3个用户的详细信息。

mysql> select * from teadb where name='root' or name='bin' or name='daemon';

+----+------+------+------+--------+----------+------+------+---------+---------+---------------+

| id | name | sex  | age  | s_year | password | uid  | gid  | comment | homedir | shell         |

+----+------+------+------+--------+----------+------+------+---------+---------+---------------+

|  2 | bin  | boy  |   21 |   1990 | x        |    1 |    1 | bin     | /bin    | /sbin/nologin |

|  1 | root | boy  |   21 |   1990 | x        |    0 |    0 | root    | /root   | /sbin/nologin |

+----+------+------+------+--------+----------+------+------+---------+---------+---------------+

2 rows in set (0.00 sec)

 

 

28  显示除root用户之外所有用户的详细信息(为控制篇幅,这里同时限定 id<10)

mysql> select * from teadb where name!='root' and id<10;

+----+----------+------+------+--------+----------+------+------+----------+-----------------+----------------+

| id | name     | sex  | age  | s_year | password | uid  | gid  | comment  | homedir         | shell          |

+----+----------+------+------+--------+----------+------+------+----------+-----------------+----------------+

|  2 | bin      | boy  |   21 |   1990 | x        |    1 |    1 | bin      | /bin            | /sbin/nologin  |

|  4 | adm      | boy  |   21 |   1990 | x        |    3 |    4 | adm      | /var/adm        | /sbin/nologin  |

|  5 | lp       | boy  |   21 |   1990 | x        |    4 |    7 | lp       | /var/spool/lpd  | /sbin/nologin  |

|  6 | sync     | boy  |   21 |   1990 | x        |    5 |    0 | sync     | /sbin           | /sbin/nologin  |

|  7 | shutdown | boy  |   21 |   1990 | x        |    6 |    0 | shutdown | /sbin           | /sbin/shutdown |

|  8 | halt     | boy  |   21 |   1990 | x        |    7 |    0 | halt     | /sbin           | /sbin/halt     |

|  9 | mail     | boy  |   21 |   1990 | x        |    8 |   12 | mail     | /var/spool/mail | /sbin/nologin  |

+----+----------+------+------+--------+----------+------+------+----------+-----------------+----------------+

7 rows in set (0.00 sec)

 

 

29  统计表中共有多少条记录

 

mysql> select count(*) from teadb;

+----------+

| count(*) |

+----------+

|       41 |

+----------+

1 row in set (0.00 sec)

 

30  显示名字里含字母c  用户的详细信息

mysql> select * from teadb where name regexp 'c';

+----+---------+------+------+--------+----------+------+------+------------------+------------------+---------------+

| id | name    | sex  | age  | s_year | password | uid  | gid  | comment          | homedir          | shell         |

+----+---------+------+------+--------+----------+------+------+------------------+------------------+---------------+

|  6 | sync    | boy  |   21 |   1990 | x        |    5 |    0 | sync             | /sbin            | /sbin/nologin |

| 18 | rpc     | girl |   21 |   1990 | x        |   32 |   32 | Rpcbind Daemon   | /var/lib/rpcbind | /sbin/nologin |

| 19 | colord  | girl |   21 |   1990 | x        |  997 |  995 | User for colord  | /var/lib/colord  | /sbin/nologin |

| 24 | chrony  | girl |   21 |   1990 | x        |  995 |  993 |                  | /var/lib/chrony  | /sbin/nologin |

| 27 | geoclue | girl |   21 |   1990 | x        |  994 |  991 | User for geoclue | /var/lib/geoclue | /sbin/nologin |

| 29 | rpcuser | girl |   21 |   1990 | x        |   29 |   29 | RPC Service User | /var/lib/nfs     | /sbin/nologin |

| 39 | tcpdump | girl |   21 |   1990 | x        |   72 |   72 |                  | /                | /sbin/nologin |

+----+---------+------+------+--------+----------+------+------+------------------+------------------+---------------+

7 rows in set (0.00 sec)
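MySQL 自带 regexp 运算符;用 SQLite 做示意时需要先用 create_function 注册同名函数(数据为节选的假设):

```python
import re
import sqlite3

# 示意:SQLite 默认没有 regexp,注册后 "X regexp Y" 会调用 regexp(模式, 值)
conn = sqlite3.connect(":memory:")
conn.create_function("regexp", 2,
                     lambda pattern, s: re.search(pattern, s) is not None)
conn.execute("create table teadb (name text)")
conn.executemany("insert into teadb values (?)",
                 [("sync",), ("rpc",), ("halt",), ("colord",), ("gdm",)])
names = [r[0] for r in conn.execute("select name from teadb where name regexp 'c'")]
```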

 

31  在sex字段下方添加名为pay的字段,用来存储工资,默认值    是5000.00

mysql> alter table teadb add pay float(7,2) default 5000.00 after sex;

Query OK, 0 rows affected (0.67 sec)

Records: 0  Duplicates: 0  Warnings: 0

mysql> select id,name,pay from teadb where id<5;

+----+------+---------+

| id | name | pay     |

+----+------+---------+

|  1 | root | 5000.00 |

|  2 | bin  | 5000.00 |

|  4 | adm  | 5000.00 |

+----+------+---------+

3 rows in set (0.00 sec)
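新增带默认值的列后,表里已有的行会自动取默认值,下面的 SQLite 小示意可以验证(SQLite 不支持 MySQL 的 after sex 定位语法,新列只能加在末尾):

```python
import sqlite3

# 示意:add column ... default,旧行自动填入默认值
conn = sqlite3.connect(":memory:")
conn.execute("create table teadb (name text)")
conn.executemany("insert into teadb values (?)", [("root",), ("bin",)])
conn.execute("alter table teadb add column pay real default 5000.00")
pays = [r[0] for r in conn.execute("select pay from teadb")]
```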

 

32  把所有女孩的工资修改为10000

mysql> select id,name,sex,pay from teadb where sex='girl';

+----+---------------------+------+---------+

| id | name                | sex  | pay     |

+----+---------------------+------+---------+

| 10 | operator            | girl | 5000.00 |

| 11 | games               | girl | 5000.00 |

| 12 | ftp                 | girl | 5000.00 |

| 13 | nobody              | girl | 5000.00 |

| 14 | systemd-network     | girl | 5000.00 |

| 16 | polkitd             | girl | 5000.00 |

| 17 | libstoragemgmt      | girl | 5000.00 |

| 18 | rpc                 | girl | 5000.00 |

| 19 | colord              | girl | 5000.00 |

| 20 | saslauth            | girl | 5000.00 |

| 21 | abrt                | girl | 5000.00 |

| 22 | rtkit               | girl | 5000.00 |

| 23 | radvd               | girl | 5000.00 |

| 24 | chrony              | girl | 5000.00 |

| 25 | tss                 | girl | 5000.00 |

| 26 | usbmuxd             | girl | 5000.00 |

| 27 | geoclue             | girl | 5000.00 |

| 28 | qemu                | girl | 5000.00 |

| 29 | rpcuser             | girl | 5000.00 |

| 30 | nfsnobody           | girl | 5000.00 |

| 31 | setroubleshoot      | girl | 5000.00 |

| 32 | pulse               | girl | 5000.00 |

| 33 | gdm                 | girl | 5000.00 |

| 34 | gnome-initial-setup | girl | 5000.00 |

| 35 | sshd                | girl | 5000.00 |

| 36 | avahi               | girl | 5000.00 |

| 37 | postfix             | girl | 5000.00 |

| 38 | ntp                 | girl | 5000.00 |

| 39 | tcpdump             | girl | 5000.00 |

| 40 | lisi                | girl | 5000.00 |

| 41 | mysql               | girl | 5000.00 |

+----+---------------------+------+---------+

31 rows in set (0.00 sec)

mysql> update teadb set pay=10000 where sex='girl';

Query OK, 31 rows affected (0.03 sec)

Rows matched: 31  Changed: 31  Warnings: 0

 

mysql> select id,name,sex,pay from teadb where sex='girl';

+----+---------------------+------+----------+

| id | name                | sex  | pay      |

+----+---------------------+------+----------+

| 10 | operator            | girl | 10000.00 |

| 11 | games               | girl | 10000.00 |

| 12 | ftp                 | girl | 10000.00 |

| 13 | nobody              | girl | 10000.00 |

| 14 | systemd-network     | girl | 10000.00 |

| 16 | polkitd             | girl | 10000.00 |

| 17 | libstoragemgmt      | girl | 10000.00 |

| 18 | rpc                 | girl | 10000.00 |

| 19 | colord              | girl | 10000.00 |

| 20 | saslauth            | girl | 10000.00 |

| 21 | abrt                | girl | 10000.00 |

| 22 | rtkit               | girl | 10000.00 |

| 23 | radvd               | girl | 10000.00 |

| 24 | chrony              | girl | 10000.00 |

| 25 | tss                 | girl | 10000.00 |

| 26 | usbmuxd             | girl | 10000.00 |

| 27 | geoclue             | girl | 10000.00 |

| 28 | qemu                | girl | 10000.00 |

| 29 | rpcuser             | girl | 10000.00 |

| 30 | nfsnobody           | girl | 10000.00 |

| 31 | setroubleshoot      | girl | 10000.00 |

| 32 | pulse               | girl | 10000.00 |

| 33 | gdm                 | girl | 10000.00 |

| 34 | gnome-initial-setup | girl | 10000.00 |

| 35 | sshd                | girl | 10000.00 |

| 36 | avahi               | girl | 10000.00 |

| 37 | postfix             | girl | 10000.00 |

| 38 | ntp                 | girl | 10000.00 |

| 39 | tcpdump             | girl | 10000.00 |

| 40 | lisi                | girl | 10000.00 |

| 41 | mysql               | girl | 10000.00 |

+----+---------------------+------+----------+

31 rows in set (0.00 sec)

 

33  把root用户的工资修改为30000

mysql> update teadb set pay=30000 where name='root';

Query OK, 1 row affected (0.07 sec)

Rows matched: 1  Changed: 1  Warnings: 0

 

mysql> select id,name,sex,pay from teadb where name='root';

+----+------+------+----------+

| id | name | sex  | pay      |

+----+------+------+----------+

|  1 | root | boy  | 30000.00 |

+----+------+------+----------+

1 row in set (0.00 sec)
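update ... where 只会改动匹配到的行,其余行不受影响,可用下面的示意验证(数据为节选的假设):

```python
import sqlite3

# 示意:只有 sex='girl' 的行被改,rowcount 近似对应 "Rows matched/Changed"
conn = sqlite3.connect(":memory:")
conn.execute("create table teadb (name text, sex text, pay real)")
conn.executemany("insert into teadb values (?, ?, ?)",
                 [("root", "boy", 5000.0), ("gdm", "girl", 5000.0),
                  ("ntp", "girl", 5000.0)])
cur = conn.execute("update teadb set pay = 10000 where sex = 'girl'")
changed = cur.rowcount
root_pay = conn.execute("select pay from teadb where name = 'root'").fetchone()[0]
```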

 

34  查看所有用户的名字和工资

mysql> select name,pay from teadb;

+---------------------+----------+

| name                | pay      |

+---------------------+----------+

| root                | 30000.00 |

| bin                 |  5000.00 |

| adm                 |  5000.00 |

| lp                  |  5000.00 |

| sync                |  5000.00 |

| shutdown            |  5000.00 |

| halt                |  5000.00 |

| mail                |  5000.00 |

| operator            | 10000.00 |

| games               | 10000.00 |

| ftp                 | 10000.00 |

| nobody              | 10000.00 |

| systemd-network     | 10000.00 |

| polkitd             | 10000.00 |

| libstoragemgmt      | 10000.00 |

| rpc                 | 10000.00 |

| colord              | 10000.00 |

| saslauth            | 10000.00 |

| abrt                | 10000.00 |

| rtkit               | 10000.00 |

| radvd               | 10000.00 |

| chrony              | 10000.00 |

| tss                 | 10000.00 |

| usbmuxd             | 10000.00 |

| geoclue             | 10000.00 |

| qemu                | 10000.00 |

| rpcuser             | 10000.00 |

| nfsnobody           | 10000.00 |

| setroubleshoot      | 10000.00 |

| pulse               | 10000.00 |

| gdm                 | 10000.00 |

| gnome-initial-setup | 10000.00 |

| sshd                | 10000.00 |

| avahi               | 10000.00 |

| postfix             | 10000.00 |

| ntp                 | 10000.00 |

| tcpdump             | 10000.00 |

| lisi                | 10000.00 |

| mysql               | 10000.00 |

| rtestd              |  5000.00 |

| rtest2d             |  5000.00 |

+---------------------+----------+

41 rows in set (0.00 sec)

 

 

35  查看工资字段的平均值

mysql> select avg(pay) avg from teadb;

+-------------+

| avg         |

+-------------+

| 9390.243902 |

+-------------+

1 row in set (0.00 sec)

 

 

36  查看工资字段值小于平均工资的用户是谁

mysql> select id,name,sex,pay from teadb where pay<(select avg(pay) from teadb);

+----+----------+------+---------+

| id | name     | sex  | pay     |

+----+----------+------+---------+

|  2 | bin      | boy  | 5000.00 |

|  4 | adm      | boy  | 5000.00 |

|  5 | lp       | boy  | 5000.00 |

|  6 | sync     | boy  | 5000.00 |

|  7 | shutdown | boy  | 5000.00 |

|  8 | halt     | boy  | 5000.00 |

|  9 | mail     | boy  | 5000.00 |

| 42 | rtestd   | boy  | 5000.00 |

| 43 | rtest2d  | boy  | 5000.00 |

+----+----------+------+---------+

9 rows in set (0.00 sec)
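这类查询先在子查询里算出平均值,外层再拿它当过滤条件,下面的 SQLite 示意演示同样的写法(数据为简化的假设):

```python
import sqlite3

# 示意:子查询求 avg(pay),外层筛选低于平均值的行
conn = sqlite3.connect(":memory:")
conn.execute("create table teadb (name text, pay real)")
conn.executemany("insert into teadb values (?, ?)",
                 [("root", 30000.0), ("bin", 5000.0),
                  ("gdm", 10000.0), ("ntp", 10000.0)])
below = [r[0] for r in conn.execute(
    "select name from teadb where pay < (select avg(pay) from teadb)")]
```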

37  查看女生里谁的uid号最大

mysql> select name,uid from teadb where sex='girl' order by uid desc limit 1;

+-----------+-------+

| name      | uid   |

+-----------+-------+

| nfsnobody | 65534 |

+-----------+-------+

1 row in set (0.00 sec)

 

38  查看bin用户的uid gid 字段的值 及 这2个字段相加的和

mysql> select uid,gid,(uid+gid) sum from teadb where name='bin';

+------+------+------+

| uid  | gid  | sum  |

+------+------+------+

|    1 |    1 |    2 |

+------+------+------+

1 row in set (0.00 sec)

 

 

39  把teadb表中前7条记录中如下字段的值保存到当前库下 userone表里

id 、 name 、 sex

mysql> create table userone select id,name,sex from teadb limit 7;

Query OK, 7 rows affected (0.31 sec)

Records: 7  Duplicates: 0  Warnings: 0

 

mysql> select * from userone;

+----+----------+------+

| id | name     | sex  |

+----+----------+------+

|  1 | root     | boy  |

|  2 | bin      | boy  |

|  4 | adm      | boy  |

|  5 | lp       | boy  |

|  6 | sync     | boy  |

|  7 | shutdown | boy  |

|  8 | halt     | boy  |

+----+----------+------+

7 rows in set (0.00 sec)

 

 

40  把teadb表中前5条记录中如下字段的值保存到当前库下 usertwo表里

id 、 name 、 sex 、shell

mysql> create table usertwo select id,name,sex,shell from teadb limit 5;

Query OK, 5 rows affected (0.43 sec)

Records: 5  Duplicates: 0  Warnings: 0

 

mysql> select * from usertwo;

+----+------+------+---------------+

| id | name | sex  | shell         |

+----+------+------+---------------+

|  1 | root | boy  | /sbin/nologin |

|  2 | bin  | boy  | /sbin/nologin |

|  4 | adm  | boy  | /sbin/nologin |

|  5 | lp   | boy  | /sbin/nologin |

|  6 | sync | boy  | /sbin/nologin |

+----+------+------+---------------+

5 rows in set (0.00 sec)
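create table ... select 一步完成建表并拷入查询结果;在 SQLite 里要写成 create table ... as select(MySQL 可省略 as),示意如下:

```python
import sqlite3

# 示意:用查询结果直接生成新表
conn = sqlite3.connect(":memory:")
conn.execute("create table teadb (id int, name text, sex text, shell text)")
conn.executemany("insert into teadb values (?, ?, ?, ?)",
                 [(1, "root", "boy", "/bin/bash"),
                  (2, "bin", "boy", "/sbin/nologin"),
                  (4, "adm", "boy", "/sbin/nologin")])
conn.execute("create table usertwo as select id, name, sex, shell from teadb limit 2")
copied = conn.execute("select count(*) from usertwo").fetchone()[0]
```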


Day05.Mysql备份与恢复

物理备份与恢复
先停止服务,再清空(模拟丢失)数据目录:
systemctl stop mysqld
rm -rf /var/lib/mysql/*
把刚才的备份文件拷贝回数据目录,并恢复属主:
scp -r 192.168.4.50:/root/mysql.bak/* /var/lib/mysql/
chown -R mysql:mysql /var/lib/mysql
systemctl start mysqld
重启后访问正常
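冷备份的流程(先停库、拷贝文件、再启动)可以用下面的 Python 小脚本模拟,这里用一个 SQLite 文件代替 MySQL 数据目录,路径都是临时目录,仅作示意:

```python
import os
import shutil
import sqlite3
import tempfile

# 示意:关闭数据库后直接拷贝数据文件,就是一次"物理(冷)备份"
workdir = tempfile.mkdtemp()
db = os.path.join(workdir, "data.db")
bak = os.path.join(workdir, "data.db.bak")

conn = sqlite3.connect(db)
conn.execute("create table t (id int)")
conn.execute("insert into t values (1)")
conn.commit()
conn.close()                 # 相当于 systemctl stop mysqld:拷贝前先停库

shutil.copy(db, bak)         # 备份
os.remove(db)                # 模拟数据丢失
shutil.copy(bak, db)         # 把备份拷回数据目录,之后即可重启服务

restored = sqlite3.connect(db).execute("select id from t").fetchone()[0]
```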

重新初始化数据库
1、停数据库服务
systemctl stop mysqld
2、删除库
rm -rf /var/lib/mysql/*
3、重新启动,mysql数据库会自己检测
[root@51 mysql]# grep password /var/log/mysqld.log | tail
2018-09-13T02:33:34.691299Z 0 [Note] Shutting down plugin 'sha256_password'
2018-09-13T02:33:34.691305Z 0 [Note] Shutting down plugin 'mysql_native_password'
2018-09-13T02:33:36.106363Z 0 [ERROR] unknown variable 'validate_password_policy=0'
2018-09-13T02:33:37.992951Z 0 [Note] Shutting down plugin 'sha256_password'
2018-09-13T02:33:37.992957Z 0 [Note] Shutting down plugin 'mysql_native_password'
2018-09-13T02:33:41.240994Z 0 [Note] Shutting down plugin 'sha256_password'
2018-09-13T02:33:41.240996Z 0 [Note] Shutting down plugin 'mysql_native_password'
2018-09-13T02:48:15.931350Z 0 [Note] Shutting down plugin 'sha256_password'
2018-09-13T02:48:15.931352Z 0 [Note] Shutting down plugin 'mysql_native_password'
2018-09-13T02:48:27.688311Z 1 [Note] A temporary password is generated for root@localhost: p>su6#SVqixP

使用 innobackupex完全备份与恢复
• 应用示例
– 将所有库完全备份到 /backup
[root@dbsvr1 ~]# innobackupex --user root --password 123456 /backup --no-timestamp    // 完全备份
[root@dbsvr1 ~]# innobackupex --user root --password 123456 --apply-log /backup       // 同步日志
[root@dbsvr1 ~]# rm -rf /var/lib/mysql       // 恢复时要求空的库目录
[root@dbsvr1 ~]# mkdir /var/lib/mysql
[root@dbsvr1 ~]# innobackupex --user root --password 123456 --copy-back /backup       // 恢复数据
[root@dbsvr1 ~]# chown -R mysql:mysql /var/lib/mysql
[root@dbsvr1 ~]# systemctl stop mysqld
[root@dbsvr1 ~]# systemctl start mysqld
[root@dbsvr1 ~]# mysql -uroot -p123456
mysql> show databases;
mysql> show databases;

代码如下
第一步,准备素材,创建一个数据库
mysql> create database test;
Query OK, 1 row affected (10.08 sec)
mysql> exit

第二步,安装软件
[root@51 ~]# yum install -y libev-4.15-1.el6.rf.x86_64.rpm
已安装:
libev.x86_64 0:4.15-1.el6.rf

[root@51 ~]# yum install -y percona-xtrabackup-24-2.4.7-1.el7.x86_64.rpm
已安装:
percona-xtrabackup-24.x86_64 0:2.4.7-1.el7
作为依赖被安装:
perl-Digest.noarch 0:1.17-245.el7 perl-Digest-MD5.x86_64 0:2.52-3.el7
完毕!

第三步,执行完全备份
[root@51 ~]# innobackupex --user root --password 123456 /backup --no-timestamp
180914 09:51:12 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex
prints “completed OK!”.
第四步,同步日志
[root@51 ~]# innobackupex --user root --password 123456 --apply-log /backup
180914 09:51:46 innobackupex: Starting the apply-log operation

IMPORTANT: Please check that the apply-log run completes successfully.
At the end of a successful apply-log run innobackupex
prints “completed OK!”.

删除库
[root@51 ~]# systemctl stop mysqld
[root@51 ~]# rm -rf /var/lib/mysql
[root@51 ~]# mkdir /var/lib/mysql

拷贝日志
[root@51 ~]# innobackupex --user root --password 123456 --copy-back /backup
180914 09:52:46 innobackupex: Starting the copy-back operation

给数据库文件夹授权
[root@51 ~]# chown -R mysql.mysql /var/lib/mysql

登陆查看确认
[root@51 ~]# mysql -uroot -p123456
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
+--------------------+
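完全备份的思想是把整库的数据复制一份。下面借 Python 3.7+ 标准库 sqlite3 的 backup API 做一个类比性的小示意(只是类比,并不代表 innobackupex 的实现):

```python
import sqlite3

# 示意:在线把整个库复制到另一个库,相当于一次"完全备份"
src = sqlite3.connect(":memory:")
src.execute("create table test (id int)")
src.execute("insert into test values (1)")
src.commit()

dst = sqlite3.connect(":memory:")
src.backup(dst)              # 不停库地复制全部数据页

restored = dst.execute("select id from test").fetchone()[0]
```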

增量备份

增量备份与恢复
– 必须先有一次完全备份 , 备份到 /allbak
– 第 1 次增量备份到 /new1
– 第 2 次增量备份到 /new2
#innobackupex --user root --password 123456 \
--databases="库名列表" /fullbak --no-timestamp                  // 完全备份
#innobackupex --user root --password 123456 \
--databases="库名列表" --incremental /new1 \
--incremental-basedir="/fullbak" --no-timestamp                // 第 1 次增量备份
#innobackupex --user root --password 123456 \
--databases="库名列表" --incremental /new2 \
--incremental-basedir="/new1" --no-timestamp                   // 第 2 次增量备份
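增量备份依赖检查点(由 --incremental-basedir 指向的上一次备份决定),只拷贝检查点之后的变化,对应输出日志里的 "The latest check point (for incremental)"。下面用一个极简的 Python 玩具模型示意(日志和序号都是虚构的演示结构):

```python
# 示意:只备份"上次检查点之后"的变化
log = []                               # 模拟事务日志,元素为 (序号, 操作)

def write(op):
    log.append((len(log) + 1, op))

def backup_since(checkpoint):
    """拷贝序号大于 checkpoint 的记录,返回 (增量数据, 新检查点)"""
    delta = [entry for entry in log if entry[0] > checkpoint]
    return delta, len(log)

write("create database test")
full, ckpt1 = backup_since(0)          # 完全备份:从头拷
write("create database zengliang1")
inc1, ckpt2 = backup_since(ckpt1)      # 第1次增量:只含 zengliang1
write("create database zengliang2")
inc2, ckpt3 = backup_since(ckpt2)      # 第2次增量:只含 zengliang2
```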

代码如下
第一步,先完全备份
[root@51 ~]# innobackupex --user root --password 123456 /fullbak --no-timestamp
180914 10:22:39 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex
prints “completed OK!”.
#######################################
注意观察文件结尾的号码
180914 10:22:50 Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS…
xtrabackup: The latest check point (for incremental): ‘2537019’
xtrabackup: Stopping log copying thread.
.180914 10:22:50 >> log scanned up to (2537028)

180914 10:22:50 Executing UNLOCK TABLES
180914 10:22:50 All tables unlocked
180914 10:22:50 [00] Copying ib_buffer_pool to /fullbak/ib_buffer_pool
180914 10:22:50 [00] …done
180914 10:22:51 Backup created in directory ‘/fullbak/’
MySQL binlog position: filename ‘master51.000001’, position ‘316’
180914 10:22:51 [00] Writing backup-my.cnf
180914 10:22:51 [00] …done
180914 10:22:51 [00] Writing xtrabackup_info
180914 10:22:51 [00] …done
xtrabackup: Transaction log of lsn (2537019) to (2537028) was copied.
180914 10:22:51 completed OK!
######################################

第二步,第一次增量备份
备份前,先整点数据进去
[root@51 ~]# mysql -uroot -p123456
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
| test1              |
+--------------------+
6 rows in set (0.00 sec)

mysql> create database zengliang1;
Query OK, 1 row affected (0.08 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
| test1              |
| zengliang1         |
+--------------------+
7 rows in set (0.00 sec)

执行增量备份
[root@51 ~]# innobackupex --user root --password 123456 --incremental /new1 --incremental-basedir="/fullbak" --no-timestamp
180914 10:26:44 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex
prints “completed OK!”.
################################################
注意观察数字
180914 10:26:56 Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS…
xtrabackup: The latest check point (for incremental): ‘2537019’
xtrabackup: Stopping log copying thread.
.180914 10:26:56 >> log scanned up to (2537028)

180914 10:26:56 Executing UNLOCK TABLES
180914 10:26:56 All tables unlocked
180914 10:26:56 [00] Copying ib_buffer_pool to /new1/ib_buffer_pool
180914 10:26:56 [00] …done
180914 10:26:57 Backup created in directory ‘/new1/’
MySQL binlog position: filename ‘master51.000001’, position ‘493’
180914 10:26:57 [00] Writing backup-my.cnf
180914 10:26:57 [00] …done
180914 10:26:57 [00] Writing xtrabackup_info
180914 10:26:57 [00] …done
xtrabackup: Transaction log of lsn (2537019) to (2537028) was copied.
180914 10:26:57 completed OK!
###############################################

第三步,第2次增量备份
老规矩,备份之前先搞点数据进去
[root@51 ~]# mysql -uroot -p123456
mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
| test1              |
| zengliang1         |
+--------------------+
7 rows in set (0.00 sec)

mysql> create database zengliang2;
Query OK, 1 row affected (0.05 sec)

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
| test1              |
| zengliang1         |
| zengliang2         |
+--------------------+
8 rows in set (0.00 sec)

执行第2次增量备份
[root@51 ~]# innobackupex --user root --password 123456 --incremental /new2 --incremental-basedir="/new1" --no-timestamp
180914 10:40:30 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex
prints “completed OK!”.
#######################################################################
观察下数字
180914 10:40:42 >> log scanned up to (2537028)
180914 10:40:42 Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS…
xtrabackup: The latest check point (for incremental): ‘2537019’
xtrabackup: Stopping log copying thread.
.180914 10:40:42 >> log scanned up to (2537028)

180914 10:40:43 Executing UNLOCK TABLES
180914 10:40:43 All tables unlocked
180914 10:40:43 [00] Copying ib_buffer_pool to /new2/ib_buffer_pool
180914 10:40:43 [00] …done
180914 10:40:43 Backup created in directory ‘/new2/’
MySQL binlog position: filename ‘master51.000001’, position ‘670’
180914 10:40:43 [00] Writing backup-my.cnf
180914 10:40:43 [00] …done
180914 10:40:43 [00] Writing xtrabackup_info
180914 10:40:43 [00] …done
xtrabackup: Transaction log of lsn (2537019) to (2537028) was copied.
180914 10:40:43 completed OK!
######################################################################

增量备份与恢复(续 1 )
#rm -rf /var/lib/mysql ; mkdir /var/lib/mysql/
#innobackupex --user root --password 123456 --databases="库名列表" --apply-log --redo-only /fullbak    // 恢复完全备份(准备时也要带 --redo-only,否则后续合并增量会报 needs target prepared with --apply-log-only)
#innobackupex --user root --password 123456 --databases="库名列表" --apply-log --redo-only /fullbak --incremental-dir="/new1"    // 恢复增量
#innobackupex --user root --password 123456 --databases="库名列表" --apply-log --redo-only /fullbak --incremental-dir="/new2"    // 恢复增量
#innobackupex --user root --password 123456 --databases="库名列表" --copy-back /fullbak    // 拷贝文件
#chown -R mysql:mysql /var/lib/mysql/
#systemctl stop mysqld ; systemctl start mysqld

在完全备份文件中恢复单个表

– 完全备份数据库到 /allbak 目录
– 导出表信息

[root@dbsvr1 ~]#innobackupex --user root --password 123456 --databases="gamedb" /allbak --no-timestamp
mysql> drop table gamedb.a;
[root@dbsvr1 ~]#innobackupex --user root --password 123456 --databases="gamedb" --apply-log --export /allbak    // 导出表信息
mysql> create table gamedb.a(id int);              // 创建表
mysql> alter table gamedb.a discard tablespace;    // 删除表空间
mysql> system cp /allbak/gamedb/a.{ibd,cfg,exp} /var/lib/mysql/gamedb/    // 拷贝表信息文件

mysql> system chown mysql:mysql /var/lib/mysql/gamedb/a.*
// 修改所有者
mysql> alter table gamedb.a import tablespace;     // 导入表空间
mysql> select * from gamedb.a;
mysql> select * from gamedb.a;
+——+
| id |
+——+
| 1001 |
| 1002 |
+——+

代码如下
本次测试,一次一次的恢复,来展示效果
恢复完全备份,先回滚日志
[root@51 ~]# innobackupex --user=root --password=123456 --apply-log /fullbak
180914 10:47:26 innobackupex: Starting the apply-log operation

IMPORTANT: Please check that the apply-log run completes successfully.
At the end of a successful apply-log run innobackupex
prints “completed OK!”.
##############################################################
注意观察下面的数字
#########################################################
InnoDB: Starting crash recovery.
InnoDB: Removed temporary tablespace data file: “ibtmp1”
InnoDB: Creating shared tablespace for temporary tables
InnoDB: Setting file ‘./ibtmp1’ size to 12 MB. Physically writing the file full; Please wait …
InnoDB: File ‘./ibtmp1’ size is now 12 MB.
InnoDB: 96 redo rollback segment(s) found. 1 redo rollback segment(s) are active.
InnoDB: 32 non-redo rollback segment(s) are active.
InnoDB: Waiting for purge to start
InnoDB: 5.7.13 started; log sequence number 2537493
xtrabackup: starting shutdown with innodb_fast_shutdown = 1
InnoDB: FTS optimize thread exiting.
InnoDB: Starting shutdown…
InnoDB: Shutdown completed; log sequence number 2537512
180914 10:47:32 completed OK!

回滚完成后,拷贝数据
[root@51 ~]# innobackupex --user root --password 123456 --copy-back /fullbak
180914 10:48:52 innobackupex: Starting the copy-back operation

IMPORTANT: Please check that the copy-back run completes successfully.
At the end of a successful copy-back run innobackupex
prints “completed OK!”.
############################################
180914 10:49:04 completed OK!

授权,并重启服务
[root@51 ~]# chown -R mysql.mysql /var/lib/mysql
[root@51 ~]# systemctl start mysqld

[root@51 ~]# mysql -uroot -p123456

mysql> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| mysql              |
| performance_schema |
| sys                |
| test               |
| test1              |
+--------------------+
6 rows in set (0.00 sec)

第一份增量备份恢复
老规矩,先回滚日志
[root@51 ~]# innobackupex --user root --password 123456 --apply-log-only --redo-only /fullbak --incremental-dir="/new1"
180914 11:26:06 innobackupex: Starting the backup operation

(注意:从输出 Starting the backup operation 可以看出,这条命令实际被当成了一次新的备份来执行;用 innobackupex 合并增量应写 --apply-log --redo-only /fullbak --incremental-dir=/new1)

IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex
prints “completed OK!”.

180914 11:26:06 version_check Connecting to MySQL server with DSN ‘dbi:mysql:;mysql_read_default_group=xtrabackup’ as ‘root’ (using password: YES).
180914 11:26:06 version_check Connected to MySQL server
180914 11:26:06 version_check Executing a version check against the server…
180914 11:26:06 version_check Done.
180914 11:26:06 Connecting to MySQL server host: localhost, user: root, password: set, port: not set, socket: not set
Using server version 5.7.17-log
innobackupex version 2.4.7 based on MySQL server 5.7.13 Linux (x86_64) (revision id: 6f7a799)
xtrabackup: uses posix_fadvise().
xtrabackup: cd to /var/lib/mysql
xtrabackup: open files limit requested 0, set to 1024
xtrabackup: using the following InnoDB configuration:
xtrabackup: innodb_data_home_dir = .
xtrabackup: innodb_data_file_path = ibdata1:12M:autoextend
xtrabackup: innodb_log_group_home_dir = ./
xtrabackup: innodb_log_files_in_group = 2
xtrabackup: innodb_log_file_size = 50331648
InnoDB: Number of pools: 1
180914 11:26:06 >> log scanned up to (2537540)
#########################################################################3
注意观察数字
####################################################################
180914 11:26:17 Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS…
xtrabackup: The latest check point (for incremental): ‘2537531’
xtrabackup: Stopping log copying thread.
.180914 11:26:17 >> log scanned up to (2537540)

180914 11:26:18 Executing UNLOCK TABLES
180914 11:26:18 All tables unlocked
180914 11:26:18 [00] Copying ib_buffer_pool to /fullbak/2018-09-14_11-26-06/ib_buffer_pool
180914 11:26:18 [00] …done
180914 11:26:18 Backup created in directory ‘/fullbak/2018-09-14_11-26-06/’
MySQL binlog position: filename ‘master51.000001’, position ‘154’
180914 11:26:18 [00] Writing backup-my.cnf
180914 11:26:18 [00] …done
180914 11:26:18 [00] Writing xtrabackup_info
180914 11:26:18 [00] …done
xtrabackup: Transaction log of lsn (2537531) to (2537540) was copied.
180914 11:26:18 completed OK!

再停止服务,删除旧的数据目录
[root@51 ~]# systemctl stop mysqld
[root@51 ~]# rm -rf /var/lib/mysql

拷贝数据
[root@51 ~]# innobackupex --user root --password 123456 --copy-back /fullbak
180914 11:30:21 innobackupex: Starting the copy-back operation

IMPORTANT: Please check that the copy-back run completes successfully.
At the end of a successful copy-back run innobackupex
prints “completed OK!”.
180914 11:32:41 completed OK!

登陆查看一下
(注意:刚才那条命令实际又执行了一次备份,且没加 --no-timestamp,所以恢复后库列表里多出一个带时间戳名字的目录)
[root@51 ~]# chown -R mysql.mysql /var/lib/mysql
[root@51 ~]# systemctl start mysqld
[root@51 ~]# mysql -uroot -p123456
mysql> show databases;
+------------------------------+
| Database                     |
+------------------------------+
| information_schema           |
| #mysql50#2018-09-14_11-26-06 |
| mysql                        |
| performance_schema           |
| sys                          |
| test                         |
| test1                        |
+------------------------------+
7 rows in set (0.00 sec)

第二份增量备份恢复
跟上面一样,先回滚日志,再拷贝数据,授权,重启,登陆测试
代码如下
[root@51 ~]# innobackupex --user root --password 123456 --apply-log --redo-only /fullbak --incremental-dir=/new2
180914 11:39:47 innobackupex: Starting the apply-log operation

IMPORTANT: Please check that the apply-log run completes successfully.
At the end of a successful apply-log run innobackupex
prints “completed OK!”.

innobackupex version 2.4.7 based on MySQL server 5.7.13 Linux (x86_64) (revision id: 6f7a799)
incremental backup from 2537019 is enabled.
xtrabackup: cd to /fullbak/
xtrabackup: This target seems to be already prepared.
xtrabackup: error: applying incremental backup needs target prepared with –apply-log-only.
[root@51 ~]#
[root@51 ~]# innobackupex --user root --password 123456 --apply-log-only --redo-only /fullbak --incremental-dir=/new2
180914 11:39:53 innobackupex: Starting the backup operation

IMPORTANT: Please check that the backup run completes successfully.
At the end of a successful backup run innobackupex
prints “completed OK!”.
##############################################################################
Note the LSN numbers below
180914 11:40:05 Executing FLUSH NO_WRITE_TO_BINLOG ENGINE LOGS…
xtrabackup: The latest check point (for incremental): ‘2537531’
xtrabackup: Stopping log copying thread.
.180914 11:40:05 >> log scanned up to (2537540)

180914 11:40:05 Executing UNLOCK TABLES
180914 11:40:05 All tables unlocked
180914 11:40:05 [00] Copying ib_buffer_pool to /fullbak/2018-09-14_11-39-53/ib_buffer_pool
180914 11:40:05 [00] …done
180914 11:40:05 Backup created in directory ‘/fullbak/2018-09-14_11-39-53/’
MySQL binlog position: filename ‘master51.000001’, position ‘154’
180914 11:40:05 [00] Writing backup-my.cnf
180914 11:40:05 [00] …done
180914 11:40:05 [00] Writing xtrabackup_info
180914 11:40:05 [00] …done
xtrabackup: Transaction log of lsn (2537531) to (2537540) was copied.
180914 11:40:06 completed OK!

[root@51 ~]# rm -rf /var/lib/mysql

Copy the data back:
[root@51 ~]# innobackupex --user root --password 123456 --copy-back /fullbak
180914 11:40:37 innobackupex: Starting the copy-back operation

IMPORTANT: Please check that the copy-back run completes successfully.
At the end of a successful copy-back run innobackupex
prints “completed OK!”.

Restart the service and log in to test:
[root@51 ~]# chown -R mysql.mysql /var/lib/mysql
[root@51 ~]#
[root@51 ~]# systemctl start mysqld
[root@51 ~]# systemctl restart mysqld

[root@51 ~]# mysql -uroot -p123456

mysql> show databases;
+——————————+
| Database |
+——————————+
| information_schema |
| #mysql50#2018-09-14_11-26-06 |
| #mysql50#2018-09-14_11-39-53 |
| mysql |
| performance_schema |
| sys |
| test |
| test1 |
+——————————+
8 rows in set (0.00 sec)


Day04. MySQL Queries

Copying tables
1. Copy an entire table, including its data
mysql> create table tea8 select * from tea7;
Query OK, 8 rows affected (0.32 sec)
Records: 8 Duplicates: 0 Warnings: 0

mysql> select * from tea8;
+—-+—–+——–+———+————-+
| id | age | name | sex | hobby |
+—-+—–+——–+———+————-+
| 1 | 18 | lucy | girl | game |
| 3 | 18 | bob | secrect | eat,it,girl |
| 4 | 18 | lily | girl | eat,it |
| 5 | 22 | tarena | girl | eat,sleep |
| 6 | 25 | kitty | girl | sleep |
| 7 | 21 | jimmy | boy | eat,it,girl |
| 8 | 16 | jimmy | boy | sleep,girl |
| 9 | 35 | kitty | girl | eat,sleep |
+—-+—–+——–+———+————-+
8 rows in set (0.00 sec)

2. Copy only the table structure
mysql> create table tea71 select * from tea7 where false;
Query OK, 0 rows affected (0.28 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> select * from tea71;
Empty set (0.00 sec)

mysql> desc tea71;
+——-+—————————————+——+—–+———+——-+
| Field | Type | Null | Key | Default | Extra |
+——-+—————————————+——+—–+———+——-+
| id | int(5) | NO | | 0 | |
| age | int(3) | NO | | 18 | |
| name | char(10) | NO | | NULL | |
| sex | enum(‘boy’,’girl’,’secrect’) | YES | | girl | |
| hobby | set(‘eat’,’sleep’,’game’,’it’,’girl’) | YES | | NULL | |
+——-+—————————————+——+—–+———+——-+
5 rows in set (0.00 sec)
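The two copy techniques above can be sketched with Python's sqlite3 module (a stand-in for the MySQL shell; table and column names mirror the example, and the data is a small subset). A false WHERE clause selects zero rows, so the new table gets the columns but no data. Note that in both engines CREATE TABLE ... SELECT copies columns only, not keys or extras.

```python
import sqlite3

# In-memory database; names mirror the tea7/tea8/tea71 example above.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tea7 (id INTEGER, age INTEGER, name TEXT)")
con.executemany("INSERT INTO tea7 VALUES (?, ?, ?)",
                [(1, 18, "lucy"), (3, 18, "bob"), (4, 18, "lily")])

# 1) Copy structure and data.
con.execute("CREATE TABLE tea8 AS SELECT * FROM tea7")
# 2) Copy structure only: a false WHERE clause selects zero rows.
con.execute("CREATE TABLE tea71 AS SELECT * FROM tea7 WHERE 0")

print(con.execute("SELECT COUNT(*) FROM tea8").fetchone()[0])   # 3
print(con.execute("SELECT COUNT(*) FROM tea71").fetchone()[0])  # 0
```

(SQLite requires the AS keyword in CREATE TABLE ... AS SELECT; MySQL allows omitting it.)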

Multi-table query (no join condition, so the result is the Cartesian product)
mysql> select * from tea7,tea8;
+—-+—–+——–+———+————-+—-+—–+——–+———+————-+
| id | age | name | sex | hobby | id | age | name | sex | hobby |
+—-+—–+——–+———+————-+—-+—–+——–+———+————-+
| 1 | 18 | lucy | girl | game | 1 | 18 | lucy | girl | game |
| 3 | 18 | bob | secrect | eat,it,girl | 1 | 18 | lucy | girl | game |
| 4 | 18 | lily | girl | eat,it | 1 | 18 | lucy | girl | game |
| 5 | 22 | tarena | girl | eat,sleep | 1 | 18 | lucy | girl | game |
| 6 | 25 | kitty | girl | sleep | 1 | 18 | lucy | girl | game |
| 7 | 21 | jimmy | boy | eat,it,girl | 1 | 18 | lucy | girl | game |
| 8 | 16 | jimmy | boy | sleep,girl | 1 | 18 | lucy | girl | game |
| 9 | 35 | kitty | girl | eat,sleep | 1 | 18 | lucy | girl | game |
| 1 | 18 | lucy | girl | game | 3 | 18 | bob | secrect | eat,it,girl |
| 3 | 18 | bob | secrect | eat,it,girl | 3 | 18 | bob | secrect | eat,it,girl |
| 4 | 18 | lily | girl | eat,it | 3 | 18 | bob | secrect | eat,it,girl |
| 5 | 22 | tarena | girl | eat,sleep | 3 | 18 | bob | secrect | eat,it,girl |
| 6 | 25 | kitty | girl | sleep | 3 | 18 | bob | secrect | eat,it,girl |
| 7 | 21 | jimmy | boy | eat,it,girl | 3 | 18 | bob | secrect | eat,it,girl |
| 8 | 16 | jimmy | boy | sleep,girl | 3 | 18 | bob | secrect | eat,it,girl |
| 9 | 35 | kitty | girl | eat,sleep | 3 | 18 | bob | secrect | eat,it,girl |
| 1 | 18 | lucy | girl | game | 4 | 18 | lily | girl | eat,it |
| 3 | 18 | bob | secrect | eat,it,girl | 4 | 18 | lily | girl | eat,it |
| 4 | 18 | lily | girl | eat,it | 4 | 18 | lily | girl | eat,it |
| 5 | 22 | tarena | girl | eat,sleep | 4 | 18 | lily | girl | eat,it |
| 6 | 25 | kitty | girl | sleep | 4 | 18 | lily | girl | eat,it |
| 7 | 21 | jimmy | boy | eat,it,girl | 4 | 18 | lily | girl | eat,it |
| 8 | 16 | jimmy | boy | sleep,girl | 4 | 18 | lily | girl | eat,it |
| 9 | 35 | kitty | girl | eat,sleep | 4 | 18 | lily | girl | eat,it |
| 1 | 18 | lucy | girl | game | 5 | 22 | tarena | girl | eat,sleep |
| 3 | 18 | bob | secrect | eat,it,girl | 5 | 22 | tarena | girl | eat,sleep |
| 4 | 18 | lily | girl | eat,it | 5 | 22 | tarena | girl | eat,sleep |
| 5 | 22 | tarena | girl | eat,sleep | 5 | 22 | tarena | girl | eat,sleep |
| 6 | 25 | kitty | girl | sleep | 5 | 22 | tarena | girl | eat,sleep |
| 7 | 21 | jimmy | boy | eat,it,girl | 5 | 22 | tarena | girl | eat,sleep |
| 8 | 16 | jimmy | boy | sleep,girl | 5 | 22 | tarena | girl | eat,sleep |
| 9 | 35 | kitty | girl | eat,sleep | 5 | 22 | tarena | girl | eat,sleep |
| 1 | 18 | lucy | girl | game | 6 | 25 | kitty | girl | sleep |
| 3 | 18 | bob | secrect | eat,it,girl | 6 | 25 | kitty | girl | sleep |
| 4 | 18 | lily | girl | eat,it | 6 | 25 | kitty | girl | sleep |
| 5 | 22 | tarena | girl | eat,sleep | 6 | 25 | kitty | girl | sleep |
| 6 | 25 | kitty | girl | sleep | 6 | 25 | kitty | girl | sleep |
| 7 | 21 | jimmy | boy | eat,it,girl | 6 | 25 | kitty | girl | sleep |
| 8 | 16 | jimmy | boy | sleep,girl | 6 | 25 | kitty | girl | sleep |
| 9 | 35 | kitty | girl | eat,sleep | 6 | 25 | kitty | girl | sleep |
| 1 | 18 | lucy | girl | game | 7 | 21 | jimmy | boy | eat,it,girl |
| 3 | 18 | bob | secrect | eat,it,girl | 7 | 21 | jimmy | boy | eat,it,girl |
| 4 | 18 | lily | girl | eat,it | 7 | 21 | jimmy | boy | eat,it,girl |
| 5 | 22 | tarena | girl | eat,sleep | 7 | 21 | jimmy | boy | eat,it,girl |
| 6 | 25 | kitty | girl | sleep | 7 | 21 | jimmy | boy | eat,it,girl |
| 7 | 21 | jimmy | boy | eat,it,girl | 7 | 21 | jimmy | boy | eat,it,girl |
| 8 | 16 | jimmy | boy | sleep,girl | 7 | 21 | jimmy | boy | eat,it,girl |
| 9 | 35 | kitty | girl | eat,sleep | 7 | 21 | jimmy | boy | eat,it,girl |
| 1 | 18 | lucy | girl | game | 8 | 16 | jimmy | boy | sleep,girl |
| 3 | 18 | bob | secrect | eat,it,girl | 8 | 16 | jimmy | boy | sleep,girl |
| 4 | 18 | lily | girl | eat,it | 8 | 16 | jimmy | boy | sleep,girl |
| 5 | 22 | tarena | girl | eat,sleep | 8 | 16 | jimmy | boy | sleep,girl |
| 6 | 25 | kitty | girl | sleep | 8 | 16 | jimmy | boy | sleep,girl |
| 7 | 21 | jimmy | boy | eat,it,girl | 8 | 16 | jimmy | boy | sleep,girl |
| 8 | 16 | jimmy | boy | sleep,girl | 8 | 16 | jimmy | boy | sleep,girl |
| 9 | 35 | kitty | girl | eat,sleep | 8 | 16 | jimmy | boy | sleep,girl |
| 1 | 18 | lucy | girl | game | 9 | 35 | kitty | girl | eat,sleep |
| 3 | 18 | bob | secrect | eat,it,girl | 9 | 35 | kitty | girl | eat,sleep |
| 4 | 18 | lily | girl | eat,it | 9 | 35 | kitty | girl | eat,sleep |
| 5 | 22 | tarena | girl | eat,sleep | 9 | 35 | kitty | girl | eat,sleep |
| 6 | 25 | kitty | girl | sleep | 9 | 35 | kitty | girl | eat,sleep |
| 7 | 21 | jimmy | boy | eat,it,girl | 9 | 35 | kitty | girl | eat,sleep |
| 8 | 16 | jimmy | boy | sleep,girl | 9 | 35 | kitty | girl | eat,sleep |
| 9 | 35 | kitty | girl | eat,sleep | 9 | 35 | kitty | girl | eat,sleep |
+—-+—–+——–+———+————-+—-+—–+——–+———+————-+
64 rows in set (0.00 sec)
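The 64-row result above is no accident: with no join condition, each of the 8 rows of tea7 is paired with each of the 8 rows of tea8. A minimal sqlite3 sketch (stand-in for the MySQL shell, simplified columns) confirms the row count:

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tea7 (id INTEGER, name TEXT)")
con.execute("CREATE TABLE tea8 (id INTEGER, name TEXT)")
rows = [(1, "lucy"), (3, "bob"), (4, "lily"), (5, "tarena"),
        (6, "kitty"), (7, "jimmy"), (8, "jimmy"), (9, "kitty")]
con.executemany("INSERT INTO tea7 VALUES (?, ?)", rows)
con.executemany("INSERT INTO tea8 VALUES (?, ?)", rows)

# "SELECT * FROM tea7, tea8" with no join condition is a Cartesian
# product: every row of tea7 paired with every row of tea8.
n = len(con.execute("SELECT * FROM tea7, tea8").fetchall())
print(n)  # 8 * 8 = 64
```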

WHERE subqueries
The inner query's result is used as the outer query's filter condition.
Example: list the name and age of students whose age is below the average.
mysql> select avg(age) from tea7;
+———-+
| avg(age) |
+———-+
| 21.6250 |
+———-+
1 row in set (0.00 sec)

mysql> select name,age from tea7 where age < (select avg(age) from tea7);
+——-+—–+
| name | age |
+——-+—–+
| lucy | 18 |
| bob | 18 |
| lily | 18 |
| jimmy | 21 |
| jimmy | 16 |
+——-+—–+
5 rows in set (0.00 sec)
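The same subquery runs unchanged under sqlite3 (used here as a stand-in for the MySQL shell, with the same tea7 data): the inner SELECT AVG(age) yields 21.625 and the outer query keeps the rows below it.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tea7 (name TEXT, age INTEGER)")
con.executemany("INSERT INTO tea7 VALUES (?, ?)",
                [("lucy", 18), ("bob", 18), ("lily", 18), ("tarena", 22),
                 ("kitty", 25), ("jimmy", 21), ("jimmy", 16), ("kitty", 35)])

# The inner query computes the average age (173 / 8 = 21.625); the
# outer query uses that value as its filter condition.
rows = con.execute(
    "SELECT name, age FROM tea7 WHERE age < (SELECT AVG(age) FROM tea7)"
).fetchall()
print(rows)  # 5 rows: lucy, bob, lily, jimmy(21), jimmy(16)
```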

Inner, outer, left and right joins
First prepare two tables as material.
mysql> create table a_t (a_id int(11) default null,a_name varchar(10) default null,a_part varchar(10) default null);
Query OK, 0 rows affected (0.23 sec)

mysql> create table b_t (b_id int(11) default null,b_name varchar(10) default null,b_part varchar(10) default null);
Query OK, 0 rows affected (0.34 sec)

mysql> desc a_t;
+——–+————-+——+—–+———+——-+
| Field | Type | Null | Key | Default | Extra |
+——–+————-+——+—–+———+——-+
| a_id | int(11) | YES | | NULL | |
| a_name | varchar(10) | YES | | NULL | |
| a_part | varchar(10) | YES | | NULL | |
+——–+————-+——+—–+———+——-+
3 rows in set (0.00 sec)

mysql> desc b_t;
+——–+————-+——+—–+———+——-+
| Field | Type | Null | Key | Default | Extra |
+——–+————-+——+—–+———+——-+
| b_id | int(11) | YES | | NULL | |
| b_name | varchar(10) | YES | | NULL | |
| b_part | varchar(10) | YES | | NULL | |
+——–+————-+——+—–+———+——-+
3 rows in set (0.00 sec)

mysql> insert into a_t values(1,"pan","zongcai"),(2,"wang","mishu"),(3,"zhang","sheji"),(4,"li","yunying");
Query OK, 4 rows affected (0.14 sec)
Records: 4 Duplicates: 0 Warnings: 0

mysql> insert into b_t values(2,"wang","mishu"),(3,"zhang","sheji"),(5,"liu","renshi"),(6,"huang","shengchan");
Query OK, 4 rows affected (0.08 sec)
Records: 4 Duplicates: 0 Warnings: 0

mysql> select * from a_t;
+——+——–+———+
| a_id | a_name | a_part |
+——+——–+———+
| 1 | pan | zongcai |
| 2 | wang | mishu |
| 3 | zhang | sheji |
| 4 | li | yunying |
+——+——–+———+
4 rows in set (0.00 sec)

mysql> select * from b_t;
+——+——–+———–+
| b_id | b_name | b_part |
+——+——–+———–+
| 2 | wang | mishu |
| 3 | zhang | sheji |
| 5 | liu | renshi |
| 6 | huang | shengchan |
+——+——–+———–+
4 rows in set (0.00 sec)

Inner join test
mysql> select * from a_t inner join b_t on a_id=b_id;
+——+——–+——–+——+——–+——–+
| a_id | a_name | a_part | b_id | b_name | b_part |
+——+——–+——–+——+——–+——–+
| 2 | wang | mishu | 2 | wang | mishu |
| 3 | zhang | sheji | 3 | zhang | sheji |
+——+——–+——–+——+——–+——–+
2 rows in set (0.00 sec)

Question: how is this different from a plain SELECT ... FROM with a WHERE clause?
mysql> select * from a_t,b_t where a_id=b_id;
+——+——–+——–+——+——–+——–+
| a_id | a_name | a_part | b_id | b_name | b_part |
+——+——–+——–+——+——–+——–+
| 2 | wang | mishu | 2 | wang | mishu |
| 3 | zhang | sheji | 3 | zhang | sheji |
+——+——–+——–+——+——–+——–+
2 rows in set (0.00 sec)
Continuing the question above: the execution plans are practically identical too.
mysql> explain select * from a_t inner join b_t on a_id=b_id;
+—-+————-+——-+————+——+—————+——+———+——+——+———-+—————————————————-+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+—-+————-+——-+————+——+—————+——+———+——+——+———-+—————————————————-+
| 1 | SIMPLE | a_t | NULL | ALL | NULL | NULL | NULL | NULL | 4 | 100.00 | NULL |
| 1 | SIMPLE | b_t | NULL | ALL | NULL | NULL | NULL | NULL | 4 | 25.00 | Using where; Using join buffer (Block Nested Loop) |
+—-+————-+——-+————+——+—————+——+———+——+——+———-+—————————————————-+
2 rows in set, 1 warning (0.00 sec)

mysql> explain select * from a_t,b_t where a_id=b_id;
+—-+————-+——-+————+——+—————+——+———+——+——+———-+—————————————————-+
| id | select_type | table | partitions | type | possible_keys | key | key_len | ref | rows | filtered | Extra |
+—-+————-+——-+————+——+—————+——+———+——+——+———-+—————————————————-+
| 1 | SIMPLE | a_t | NULL | ALL | NULL | NULL | NULL | NULL | 4 | 100.00 | NULL |
| 1 | SIMPLE | b_t | NULL | ALL | NULL | NULL | NULL | NULL | 4 | 25.00 | Using where; Using join buffer (Block Nested Loop) |
+—-+————-+——-+————+——+—————+——+———+——+——+———-+—————————————————-+
2 rows in set, 1 warning (0.00 sec)

Keyword: inner join ... on
Statement: select * from a_table a inner join b_table b on a.a_id = b.b_id;

Explanation: combines the records of the two tables and returns only the rows whose join fields match, i.e. the intersection of the two tables.
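This equivalence can be checked directly with sqlite3 (a stand-in for the MySQL shell, using the same a_t/b_t data in simplified form): the explicit INNER JOIN and the comma join with a WHERE clause return the same two matching rows.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a_t (a_id INTEGER, a_name TEXT)")
con.execute("CREATE TABLE b_t (b_id INTEGER, b_name TEXT)")
con.executemany("INSERT INTO a_t VALUES (?, ?)",
                [(1, "pan"), (2, "wang"), (3, "zhang"), (4, "li")])
con.executemany("INSERT INTO b_t VALUES (?, ?)",
                [(2, "wang"), (3, "zhang"), (5, "liu"), (6, "huang")])

# Explicit INNER JOIN ... ON and the comma join with a WHERE clause
# return the same row set; the EXPLAIN output in the notes shows MySQL
# plans them identically as well.
j1 = con.execute("SELECT * FROM a_t INNER JOIN b_t ON a_id = b_id").fetchall()
j2 = con.execute("SELECT * FROM a_t, b_t WHERE a_id = b_id").fetchall()
print(sorted(j1) == sorted(j2), len(j1))  # True 2
```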

Left join
mysql> select * from a_t;
+——+——–+———+
| a_id | a_name | a_part |
+——+——–+———+
| 1 | pan | zongcai |
| 2 | wang | mishu |
| 3 | zhang | sheji |
| 4 | li | yunying |
+——+——–+———+
4 rows in set (0.00 sec)

mysql> select * from b_t;
+——+——–+———–+
| b_id | b_name | b_part |
+——+——–+———–+
| 2 | wang | mishu |
| 3 | zhang | sheji |
| 5 | liu | renshi |
| 6 | huang | shengchan |
+——+——–+———–+
4 rows in set (0.00 sec)

mysql> select * from a_t a left join b_t b on a_id=b_id;
+——+——–+———+——+——–+——–+
| a_id | a_name | a_part | b_id | b_name | b_part |
+——+——–+———+——+——–+——–+
| 2 | wang | mishu | 2 | wang | mishu |
| 3 | zhang | sheji | 3 | zhang | sheji |
| 1 | pan | zongcai | NULL | NULL | NULL |
| 4 | li | yunying | NULL | NULL | NULL |
+——+——–+———+——+——–+——–+
4 rows in set (0.00 sec)

Keyword: left join ... on / left outer join ... on
Statement: select * from a_table a left join b_table b on a.a_id = b.b_id;
Explanation:
left join is shorthand for left outer join, one kind of outer join.
In a left (outer) join, every record of the left table (a_table) appears in the result, while the right table (b_table) contributes only the records matching the join condition; where the right table has no match, its columns are filled with NULL.
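The NULL-filling behavior shows up clearly in a sqlite3 sketch (stand-in for the MySQL shell, same a_t/b_t data in simplified form): a_id 1 and 4 have no partner in b_t, so their right-table columns come back as NULL (None in Python).

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE a_t (a_id INTEGER, a_name TEXT)")
con.execute("CREATE TABLE b_t (b_id INTEGER, b_name TEXT)")
con.executemany("INSERT INTO a_t VALUES (?, ?)",
                [(1, "pan"), (2, "wang"), (3, "zhang"), (4, "li")])
con.executemany("INSERT INTO b_t VALUES (?, ?)",
                [(2, "wang"), (3, "zhang"), (5, "liu"), (6, "huang")])

# LEFT JOIN keeps every row of the left table; unmatched right-table
# columns are returned as NULL.
rows = con.execute(
    "SELECT a_id, b_name FROM a_t LEFT JOIN b_t ON a_id = b_id "
    "ORDER BY a_id").fetchall()
print(rows)  # [(1, None), (2, 'wang'), (3, 'zhang'), (4, None)]
```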


Day03. MySQL Field Value Operations

mysql> show engines;
+——————–+———+—————————————————————-+————–+——+————+
| Engine | Support | Comment | Transactions | XA | Savepoints |
+——————–+———+—————————————————————-+————–+——+————+
| InnoDB | DEFAULT | Supports transactions, row-level locking, and foreign keys | YES | YES | YES |
| MRG_MYISAM | YES | Collection of identical MyISAM tables | NO | NO | NO |
| MEMORY | YES | Hash based, stored in memory, useful for temporary tables | NO | NO | NO |
| BLACKHOLE | YES | /dev/null storage engine (anything you write to it disappears) | NO | NO | NO |
| MyISAM | YES | MyISAM storage engine | NO | NO | NO |
| CSV | YES | CSV storage engine | NO | NO | NO |
| ARCHIVE | YES | Archive storage engine | NO | NO | NO |
| PERFORMANCE_SCHEMA | YES | Performance Schema | NO | NO | NO |
| FEDERATED | NO | Federated MySQL storage engine | NULL | NULL | NULL |
+——————–+———+—————————————————————-+————–+——+————+
9 rows in set (0.01 sec)
You can specify the storage engine when creating a table
mysql> create table intab (id int(4)) engine=myisam;
Query OK, 0 rows affected (0.07 sec)

mysql> show create table intab;
+——-+——————————————————————————————+
| Table | Create Table |
+——-+——————————————————————————————+
| intab | CREATE TABLE `intab` (
`id` int(4) DEFAULT NULL
) ENGINE=MyISAM DEFAULT CHARSET=latin1 |
+——-+——————————————————————————————+
1 row in set (0.00 sec)

Check the current table-lock status
mysql> show status like 'table_lock%';
+———————–+——-+
| Variable_name | Value |
+———————–+——-+
| Table_locks_immediate | 100 |
| Table_locks_waited | 0 |
+———————–+——-+
2 rows in set (0.00 sec)

Check MySQL's default import/export directory
mysql> show variables like 'secure_file_priv';
+——————+———————–+
| Variable_name | Value |
+——————+———————–+
| secure_file_priv | /var/lib/mysql-files/ |
+——————+———————–+
1 row in set (0.00 sec)
Go and inspect the directory yourself
[root@50 ~]# ls -ld /var/lib/mysql-files/
drwxr-x—. 2 mysql mysql 6 11月 29 2016 /var/lib/mysql-files/

Change the default directory and verify
[root@50 ~]# mkdir /myload
[root@50 ~]# chown mysql /myload/
[root@50 ~]# grep -v "^$" /etc/my.cnf | grep -v "^#"
[mysqld]
datadir=/var/lib/mysql
socket=/var/lib/mysql/mysql.sock
symbolic-links=0
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
secure_file_priv="/myload"
validate_password_policy=0
validate_password_length=6

[root@50 ~]# systemctl restart mysqld

mysql> show variables like "secure_file_priv";
+——————+———-+
| Variable_name | Value |
+——————+———-+
| secure_file_priv | /myload/ |
+——————+———-+
1 row in set (0.01 sec)

Import data from a file
mysql> create table user (username char(20) not null,pass char(5),uid int(5),gid int(5),comment varchar(100),homedir varchar(200),shell varchar(50));
Query OK, 0 rows affected (0.25 sec)

mysql> desc user;
+———-+————–+——+—–+———+——-+
| Field | Type | Null | Key | Default | Extra |
+———-+————–+——+—–+———+——-+
| username | char(20) | NO | | NULL | |
| pass | char(5) | YES | | NULL | |
| uid | int(5) | YES | | NULL | |
| gid | int(5) | YES | | NULL | |
| comment | varchar(100) | YES | | NULL | |
| homedir | varchar(200) | YES | | NULL | |
| shell | varchar(50) | YES | | NULL | |
+———-+————–+——+—–+———+——-+
7 rows in set (0.00 sec)

mysql> load data infile "/myload/passwd" into table user fields terminated by ":" lines terminated by "\n";
Query OK, 41 rows affected (0.08 sec)
Records: 41 Deleted: 0 Skipped: 0 Warnings: 0

mysql> select * from user;
+———————+——+——-+——-+—————————————————————–+—————————+—————-+
| username | pass | uid | gid | comment | homedir | shell |
+———————+——+——-+——-+—————————————————————–+—————————+—————-+
| root | x | 0 | 0 | root | /root | /bin/bash |
| bin | x | 1 | 1 | bin | /bin | /sbin/nologin |
| daemon | x | 2 | 2 | daemon | /sbin | /sbin/nologin |
| adm | x | 3 | 4 | adm | /var/adm | /sbin/nologin |
| lp | x | 4 | 7 | lp | /var/spool/lpd | /sbin/nologin |
| sync | x | 5 | 0 | sync | /sbin | /bin/sync |
| shutdown | x | 6 | 0 | shutdown | /sbin | /sbin/shutdown |
| halt | x | 7 | 0 | halt | /sbin | /sbin/halt |
| mail | x | 8 | 12 | mail | /var/spool/mail | /sbin/nologin |
| operator | x | 11 | 0 | operator | /root | /sbin/nologin |
| games | x | 12 | 100 | games | /usr/games | /sbin/nologin |
| ftp | x | 14 | 50 | FTP User | /var/ftp | /sbin/nologin |
| nobody | x | 99 | 99 | Nobody | / | /sbin/nologin |
| systemd-network | x | 192 | 192 | systemd Network Management | / | /sbin/nologin |
| dbus | x | 81 | 81 | System message bus | / | /sbin/nologin |
| polkitd | x | 999 | 998 | User for polkitd | / | /sbin/nologin |
| libstoragemgmt | x | 998 | 996 | daemon account for libstoragemgmt | /var/run/lsm | /sbin/nologin |
| rpc | x | 32 | 32 | Rpcbind Daemon | /var/lib/rpcbind | /sbin/nologin |
| colord | x | 997 | 995 | User for colord | /var/lib/colord | /sbin/nologin |
| saslauth | x | 996 | 76 | Saslauthd user | /run/saslauthd | /sbin/nologin |
| abrt | x | 173 | 173 | | /etc/abrt | /sbin/nologin |
| rtkit | x | 172 | 172 | RealtimeKit | /proc | /sbin/nologin |
| radvd | x | 75 | 75 | radvd user | / | /sbin/nologin |
| chrony | x | 995 | 993 | | /var/lib/chrony | /sbin/nologin |
| tss | x | 59 | 59 | Account used by the trousers package to sandbox the tcsd daemon | /dev/null | /sbin/nologin |
| usbmuxd | x | 113 | 113 | usbmuxd user | / | /sbin/nologin |
| geoclue | x | 994 | 991 | User for geoclue | /var/lib/geoclue | /sbin/nologin |
| qemu | x | 107 | 107 | qemu user | / | /sbin/nologin |
| rpcuser | x | 29 | 29 | RPC Service User | /var/lib/nfs | /sbin/nologin |
| nfsnobody | x | 65534 | 65534 | Anonymous NFS User | /var/lib/nfs | /sbin/nologin |
| setroubleshoot | x | 993 | 990 | | /var/lib/setroubleshoot | /sbin/nologin |
| pulse | x | 171 | 171 | PulseAudio System Daemon | /var/run/pulse | /sbin/nologin |
| gdm | x | 42 | 42 | | /var/lib/gdm | /sbin/nologin |
| gnome-initial-setup | x | 992 | 987 | | /run/gnome-initial-setup/ | /sbin/nologin |
| sshd | x | 74 | 74 | Privilege-separated SSH | /var/empty/sshd | /sbin/nologin |
| avahi | x | 70 | 70 | Avahi mDNS/DNS-SD Stack | /var/run/avahi-daemon | /sbin/nologin |
| postfix | x | 89 | 89 | | /var/spool/postfix | /sbin/nologin |
| ntp | x | 38 | 38 | | /etc/ntp | /sbin/nologin |
| tcpdump | x | 72 | 72 | | / | /sbin/nologin |
| lisi | x | 1000 | 1000 | lisi | /home/lisi | /bin/bash |
| mysql | x | 27 | 27 | MySQL Server | /var/lib/mysql | /bin/false |
+———————+——+——-+——-+—————————————————————–+—————————+—————-+
41 rows in set (0.00 sec)

Add an id column to the table above and make it the primary key
Step 1: add the id field as NOT NULL, in first position
mysql> alter table user add id int(5) not null first;
Query OK, 0 rows affected (0.63 sec)
Records: 0 Duplicates: 0 Warnings: 0
Step 2: create an index on it (MySQL requires an auto_increment column to be indexed, so the index must exist before the next step)
mysql> create index id on user(id);
Query OK, 0 rows affected (0.34 sec)
Records: 0 Duplicates: 0 Warnings: 0
Step 3: change it to auto-increment
mysql> alter table user modify id int(5) auto_increment not null first;
Query OK, 41 rows affected (0.79 sec)
Records: 41 Duplicates: 0 Warnings: 0
Step 4: make it the primary key
mysql> alter table user add primary key(id);
Query OK, 0 rows affected (0.67 sec)
Records: 0 Duplicates: 0 Warnings: 0

Step 5: check and confirm
mysql> desc user;
+———-+————–+——+—–+———+—————-+
| Field | Type | Null | Key | Default | Extra |
+———-+————–+——+—–+———+—————-+
| id | int(5) | NO | PRI | NULL | auto_increment |
| username | char(20) | NO | | NULL | |
| pass | char(5) | YES | | NULL | |
| uid | int(5) | YES | | NULL | |
| gid | int(5) | YES | | NULL | |
| comment | varchar(100) | YES | | NULL | |
| homedir | varchar(200) | YES | | NULL | |
| shell | varchar(50) | YES | | NULL | |
+———-+————–+——+—–+———+—————-+
8 rows in set (0.00 sec)

mysql> show create table user;
+——-+———————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————-+
| Table | Create Table |
+——-+———————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————-+
| user | CREATE TABLE `user` (
`id` int(5) NOT NULL AUTO_INCREMENT,
`username` char(20) NOT NULL,
`pass` char(5) DEFAULT NULL,
`uid` int(5) DEFAULT NULL,
`gid` int(5) DEFAULT NULL,
`comment` varchar(100) DEFAULT NULL,
`homedir` varchar(200) DEFAULT NULL,
`shell` varchar(50) DEFAULT NULL,
PRIMARY KEY (`id`),
KEY `id` (`id`)
) ENGINE=InnoDB AUTO_INCREMENT=42 DEFAULT CHARSET=latin1 |
+——-+———————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————————-+
1 row in set (0.00 sec)

mysql> select * from user;
+—-+———————+——+——-+——-+—————————————————————–+—————————+—————-+
| id | username | pass | uid | gid | comment | homedir | shell |
+—-+———————+——+——-+——-+—————————————————————–+—————————+—————-+
| 1 | root | x | 0 | 0 | root | /root | /bin/bash |
| 2 | bin | x | 1 | 1 | bin | /bin | /sbin/nologin |
| 3 | daemon | x | 2 | 2 | daemon | /sbin | /sbin/nologin |
| 4 | adm | x | 3 | 4 | adm | /var/adm | /sbin/nologin |
| 5 | lp | x | 4 | 7 | lp | /var/spool/lpd | /sbin/nologin |
| 6 | sync | x | 5 | 0 | sync | /sbin | /bin/sync |
| 7 | shutdown | x | 6 | 0 | shutdown | /sbin | /sbin/shutdown |
| 8 | halt | x | 7 | 0 | halt | /sbin | /sbin/halt |
| 9 | mail | x | 8 | 12 | mail | /var/spool/mail | /sbin/nologin |
| 10 | operator | x | 11 | 0 | operator | /root | /sbin/nologin |
| 11 | games | x | 12 | 100 | games | /usr/games | /sbin/nologin |
| 12 | ftp | x | 14 | 50 | FTP User | /var/ftp | /sbin/nologin |
| 13 | nobody | x | 99 | 99 | Nobody | / | /sbin/nologin |
| 14 | systemd-network | x | 192 | 192 | systemd Network Management | / | /sbin/nologin |
| 15 | dbus | x | 81 | 81 | System message bus | / | /sbin/nologin |
| 16 | polkitd | x | 999 | 998 | User for polkitd | / | /sbin/nologin |
| 17 | libstoragemgmt | x | 998 | 996 | daemon account for libstoragemgmt | /var/run/lsm | /sbin/nologin |
| 18 | rpc | x | 32 | 32 | Rpcbind Daemon | /var/lib/rpcbind | /sbin/nologin |
| 19 | colord | x | 997 | 995 | User for colord | /var/lib/colord | /sbin/nologin |
| 20 | saslauth | x | 996 | 76 | Saslauthd user | /run/saslauthd | /sbin/nologin |
| 21 | abrt | x | 173 | 173 | | /etc/abrt | /sbin/nologin |
| 22 | rtkit | x | 172 | 172 | RealtimeKit | /proc | /sbin/nologin |
| 23 | radvd | x | 75 | 75 | radvd user | / | /sbin/nologin |
| 24 | chrony | x | 995 | 993 | | /var/lib/chrony | /sbin/nologin |
| 25 | tss | x | 59 | 59 | Account used by the trousers package to sandbox the tcsd daemon | /dev/null | /sbin/nologin |
| 26 | usbmuxd | x | 113 | 113 | usbmuxd user | / | /sbin/nologin |
| 27 | geoclue | x | 994 | 991 | User for geoclue | /var/lib/geoclue | /sbin/nologin |
| 28 | qemu | x | 107 | 107 | qemu user | / | /sbin/nologin |
| 29 | rpcuser | x | 29 | 29 | RPC Service User | /var/lib/nfs | /sbin/nologin |
| 30 | nfsnobody | x | 65534 | 65534 | Anonymous NFS User | /var/lib/nfs | /sbin/nologin |
| 31 | setroubleshoot | x | 993 | 990 | | /var/lib/setroubleshoot | /sbin/nologin |
| 32 | pulse | x | 171 | 171 | PulseAudio System Daemon | /var/run/pulse | /sbin/nologin |
| 33 | gdm | x | 42 | 42 | | /var/lib/gdm | /sbin/nologin |
| 34 | gnome-initial-setup | x | 992 | 987 | | /run/gnome-initial-setup/ | /sbin/nologin |
| 35 | sshd | x | 74 | 74 | Privilege-separated SSH | /var/empty/sshd | /sbin/nologin |
| 36 | avahi | x | 70 | 70 | Avahi mDNS/DNS-SD Stack | /var/run/avahi-daemon | /sbin/nologin |
| 37 | postfix | x | 89 | 89 | | /var/spool/postfix | /sbin/nologin |
| 38 | ntp | x | 38 | 38 | | /etc/ntp | /sbin/nologin |
| 39 | tcpdump | x | 72 | 72 | | / | /sbin/nologin |
| 40 | lisi | x | 1000 | 1000 | lisi | /home/lisi | /bin/bash |
| 41 | mysql | x | 27 | 27 | MySQL Server | /var/lib/mysql | /bin/false |
+—-+———————+——+——-+——-+—————————————————————–+—————————+—————-+

Case: export the first 10 records of the userdb.user table whose uid is below 100, saving them to the file /myload/select.txt
mysql> select * from user where uid<100 limit 10;
+—-+———-+——+——+——+———-+—————–+—————-+
| id | username | pass | uid | gid | comment | homedir | shell |
+—-+———-+——+——+——+———-+—————–+—————-+
| 1 | root | x | 0 | 0 | root | /root | /bin/bash |
| 2 | bin | x | 1 | 1 | bin | /bin | /sbin/nologin |
| 3 | daemon | x | 2 | 2 | daemon | /sbin | /sbin/nologin |
| 4 | adm | x | 3 | 4 | adm | /var/adm | /sbin/nologin |
| 5 | lp | x | 4 | 7 | lp | /var/spool/lpd | /sbin/nologin |
| 6 | sync | x | 5 | 0 | sync | /sbin | /bin/sync |
| 7 | shutdown | x | 6 | 0 | shutdown | /sbin | /sbin/shutdown |
| 8 | halt | x | 7 | 0 | halt | /sbin | /sbin/halt |
| 9 | mail | x | 8 | 12 | mail | /var/spool/mail | /sbin/nologin |
| 10 | operator | x | 11 | 0 | operator | /root | /sbin/nologin |
+—-+———-+——+——+——+———-+—————–+—————-+
10 rows in set (0.00 sec)

mysql> select * from user where uid<100 limit 10 into outfile "/myload/select.txt" fields terminated by ":" lines terminated by "\n";
Query OK, 10 rows affected (0.00 sec)

Check the file to confirm
[root@50 ~]# cat /myload/select.txt
1:root:x:0:0:root:/root:/bin/bash
2:bin:x:1:1:bin:/bin:/sbin/nologin
3:daemon:x:2:2:daemon:/sbin:/sbin/nologin
4:adm:x:3:4:adm:/var/adm:/sbin/nologin
5:lp:x:4:7:lp:/var/spool/lpd:/sbin/nologin
6:sync:x:5:0:sync:/sbin:/bin/sync
7:shutdown:x:6:0:shutdown:/sbin:/sbin/shutdown
8:halt:x:7:0:halt:/sbin:/sbin/halt
9:mail:x:8:12:mail:/var/spool/mail:/sbin/nologin
10:operator:x:11:0:operator:/root:/sbin/nologin
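The export above can be mimicked outside MySQL with sqlite3 plus the csv module (a stand-in sketch; SQLite has no SELECT ... INTO OUTFILE, and the table here is a three-column subset of the user table). Writing the result set with a ':' delimiter reproduces the same file format:

```python
import csv
import sqlite3
import tempfile

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE user (id INTEGER, username TEXT, uid INTEGER)")
con.executemany("INSERT INTO user VALUES (?, ?, ?)",
                [(1, "root", 0), (2, "bin", 1), (3, "daemon", 2)])

# Same filter as the MySQL case above: uid < 100, first 10 rows.
rows = con.execute("SELECT * FROM user WHERE uid < 100 LIMIT 10").fetchall()

# Write the rows colon-separated, one record per line.
out = tempfile.NamedTemporaryFile(mode="w", suffix=".txt",
                                  delete=False, newline="")
csv.writer(out, delimiter=":").writerows(rows)
out.close()

print(open(out.name).read())  # 1:root:0 / 2:bin:1 / 3:daemon:2
```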

Load data into specified columns only
mysql> load data infile '/myload/123' into table tea8 fields terminated by ':' lines terminated by '\n' (id,age,name);
Query OK, 3 rows affected (0.10 sec)
Records: 3 Deleted: 0 Skipped: 0 Warnings: 0
Confirm it
mysql> select * from tea8;
+—-+—–+——–+———+————-+
| id | age | name | sex | hobby |
+—-+—–+——–+———+————-+
| 1 | 18 | lucy | girl | game |
| 3 | 18 | bob | secrect | eat,it,girl |
| 4 | 18 | lily | girl | eat,it |
| 5 | 22 | tarena | girl | eat,sleep |
| 6 | 25 | kitty | girl | sleep |
| 7 | 21 | jimmy | boy | eat,it,girl |
| 8 | 16 | jimmy | boy | sleep,girl |
| 9 | 35 | kitty | girl | eat,sleep |
| 10 | 26 | po | girl | NULL |
| 11 | 19 | lipo | girl | NULL |
| 12 | 32 | pily | girl | NULL |
+—-+—–+——–+———+————-+
11 rows in set (0.00 sec)
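The column-list behavior of LOAD DATA can be sketched with sqlite3 (a stand-in for the MySQL shell; the in-memory string below stands in for the /myload/123 file, whose exact contents are inferred from the three new rows above). Columns omitted from the list fall back to their defaults, NULL here:

```python
import io
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tea8 (id INTEGER, age INTEGER, name TEXT, "
            "sex TEXT, hobby TEXT)")

# Stand-in for /myload/123: colon-separated lines covering only the
# (id, age, name) columns named in the LOAD DATA column list.
data = io.StringIO("10:26:po\n11:19:lipo\n12:32:pily\n")
for line in data:
    id_, age, name = line.rstrip("\n").split(":")
    con.execute("INSERT INTO tea8 (id, age, name) VALUES (?, ?, ?)",
                (int(id_), int(age), name))

# Columns not in the list keep their defaults (NULL here).
print(con.execute("SELECT id, name, hobby FROM tea8").fetchall())
# [(10, 'po', None), (11, 'lipo', None), (12, 'pily', None)]
```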

Managing table records: insert, delete, update, query
First create a table and make a few adjustments
mysql> create table tea7(id int(5) not null auto_increment,name char(10) not null,index(id),primary key(id));
Query OK, 0 rows affected (0.31 sec)

mysql> show create table tea7;
+——-+————————————————————————————————————————————————————————-+
| Table | Create Table |
+——-+————————————————————————————————————————————————————————-+
| tea7 | CREATE TABLE `tea7` (
`id` int(5) NOT NULL AUTO_INCREMENT,
`name` char(10) NOT NULL,
PRIMARY KEY (`id`),
KEY `id` (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+——-+————————————————————————————————————————————————————————-+
1 row in set (0.00 sec)

mysql> desc tea7;
+——-+———-+——+—–+———+—————-+
| Field | Type | Null | Key | Default | Extra |
+——-+———-+——+—–+———+—————-+
| id | int(5) | NO | PRI | NULL | auto_increment |
| name | char(10) | NO | | NULL | |
+——-+———-+——+—–+———+—————-+
2 rows in set (0.00 sec)

mysql> alter table tea7 add(sex enum("boy","girl","secrect") default "girl",hobby set("eat","sleep","game","it","girl"));
Query OK, 0 rows affected (0.52 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> desc tea7;
+——-+—————————————+——+—–+———+—————-+
| Field | Type | Null | Key | Default | Extra |
+——-+—————————————+——+—–+———+—————-+
| id | int(5) | NO | PRI | NULL | auto_increment |
| name | char(10) | NO | | NULL | |
| sex | enum(‘boy’,’girl’,’secrect’) | YES | | girl | |
| hobby | set(‘eat’,’sleep’,’game’,’it’,’girl’) | YES | | NULL | |
+——-+—————————————+——+—–+———+—————-+
4 rows in set (0.00 sec)

Inserting values
Format: insert into table_name(column_list) values(value_list)
If no column list is given, values are inserted into all columns.

Each value must match its column's type.
String values must be enclosed in single or double quotes.
When assigning all columns in order, the column list may be omitted.
When assigning only some columns, the corresponding column names must be listed.
mysql> desc tea7;
+——-+—————————————+——+—–+———+—————-+
| Field | Type | Null | Key | Default | Extra |
+——-+—————————————+——+—–+———+—————-+
| id | int(5) | NO | PRI | NULL | auto_increment |
| name | char(10) | NO | | NULL | |
| sex | enum(‘boy’,’girl’,’secrect’) | YES | | girl | |
| hobby | set(‘eat’,’sleep’,’game’,’it’,’girl’) | YES | | NULL | |
+——-+—————————————+——+—–+———+—————-+
4 rows in set (0.01 sec)

mysql> insert into tea7(name,sex,hobby) values("lucy","girl","game");
Query OK, 1 row affected (0.08 sec)

mysql> select * from tea7;
+—-+——+——+——-+
| id | name | sex | hobby |
+—-+——+——+——-+
| 1 | lucy | girl | game |
+—-+——+——+——-+
1 row in set (0.00 sec)

mysql> insert into tea7(name,hobby) values("lily","sleep,eat");
Query OK, 1 row affected (0.34 sec)

mysql> select * from tea7;
+—-+——+——+———–+
| id | name | sex | hobby |
+—-+——+——+———–+
| 1 | lucy | girl | game |
| 2 | lily | girl | eat,sleep |
+—-+——+——+———–+
2 rows in set (0.00 sec)

mysql> insert into tea7(name,hobby) values("lily","sleep,eat,drunk");
ERROR 1265 (01000): Data truncated for column ‘hobby’ at row 1

Updating values
Method 1: update the column in every row
mysql> update tea7 set sex="secrect";
Query OK, 3 rows affected (0.15 sec)
Rows matched: 3 Changed: 3 Warnings: 0

mysql> select * from tea7;
+—-+——+———+————-+
| id | name | sex | hobby |
+—-+——+———+————-+
| 1 | lucy | secrect | game |
| 2 | lily | secrect | eat,sleep |
| 3 | bob | secrect | eat,it,girl |
+—-+——+———+————-+
3 rows in set (0.00 sec)

Method 2: update only the matching rows
mysql> update tea7 set sex="girl" where name="lucy";
Query OK, 1 row affected (0.10 sec)
Rows matched: 1 Changed: 1 Warnings: 0

mysql> select * from tea7;
+—-+——+———+————-+
| id | name | sex | hobby |
+—-+——+———+————-+
| 1 | lucy | girl | game |
| 2 | lily | secrect | eat,sleep |
| 3 | bob | secrect | eat,it,girl |
+—-+——+———+————-+
3 rows in set (0.00 sec)
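Both update forms run unchanged under sqlite3 (a stand-in for the MySQL shell, with a cut-down tea7): without WHERE every row changes, with WHERE only the matching row does.

```python
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE tea7 (name TEXT, sex TEXT)")
con.executemany("INSERT INTO tea7 VALUES (?, ?)",
                [("lucy", "girl"), ("lily", "girl"), ("bob", "boy")])

# Without WHERE, UPDATE touches every row.
con.execute("UPDATE tea7 SET sex = 'secrect'")
# With WHERE, only the matching rows change.
con.execute("UPDATE tea7 SET sex = 'girl' WHERE name = 'lucy'")

print(con.execute("SELECT name, sex FROM tea7").fetchall())
# [('lucy', 'girl'), ('lily', 'secrect'), ('bob', 'secrect')]
```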

Deleting records
Method 1: delete only the rows matching a condition
mysql> delete from tea7 where name="lily";
Query OK, 1 row affected (0.07 sec)

mysql> select * from tea7;
+—-+——+———+————-+
| id | name | sex | hobby |
+—-+——+———+————-+
| 1 | lucy | girl | game |
| 3 | bob | secrect | eat,it,girl |
+—-+——+———+————-+
2 rows in set (0.00 sec)

Method 2: delete all rows
delete from table_name;

Queries
Match conditions
Numeric columns:
equal =; greater / greater-or-equal > >=; less / less-or-equal < <=; not equal !=

String columns:
equal =; not equal !=; match NULL with IS NULL; match non-NULL with IS NOT NULL

Logical operators:
or; and; not (!); parentheses () to raise precedence

Range matching / de-duplicated display:
in (value_list) matches within a set; not in (value_list) matches outside it; between n1 and n2 matches an interval; distinct column_name suppresses duplicate values

Add one more column and insert some rows
mysql> alter table tea7 add age int(3) not null default 18 after id;
Query OK, 0 rows affected (0.54 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> desc tea7;
+——-+—————————————+——+—–+———+—————-+
| Field | Type | Null | Key | Default | Extra |
+——-+—————————————+——+—–+———+—————-+
| id | int(5) | NO | PRI | NULL | auto_increment |
| age | int(3) | NO | | 18 | |
| name | char(10) | NO | | NULL | |
| sex | enum(‘boy’,’girl’,’secrect’) | YES | | girl | |
| hobby | set(‘eat’,’sleep’,’game’,’it’,’girl’) | YES | | NULL | |
+——-+—————————————+——+—–+———+—————-+
5 rows in set (0.00 sec)

mysql> select * from tea7;
+—-+—–+——+———+————-+
| id | age | name | sex | hobby |
+—-+—–+——+———+————-+
| 1 | 18 | lucy | girl | game |
| 3 | 18 | bob | secrect | eat,it,girl |
| 4 | 18 | lily | girl | eat,it |
+—-+—–+——+———+————-+
3 rows in set (0.00 sec)

mysql> insert into tea7(age,name,sex,hobby) values(22,"tarena","girl","eat,sleep"),(25,"kitty","girl","sleep"),(21,"jimmy","boy","eat,it,girl");
Query OK, 3 rows affected (0.03 sec)
Records: 3 Duplicates: 0 Warnings: 0

mysql> select * from tea7;
+—-+—–+——–+———+————-+
| id | age | name | sex | hobby |
+—-+—–+——–+———+————-+
| 1 | 18 | lucy | girl | game |
| 3 | 18 | bob | secrect | eat,it,girl |
| 4 | 18 | lily | girl | eat,it |
| 5 | 22 | tarena | girl | eat,sleep |
| 6 | 25 | kitty | girl | sleep |
| 7 | 21 | jimmy | boy | eat,it,girl |
+—-+—–+——–+———+————-+
6 rows in set (0.00 sec)

Testing set and range matching
mysql> select * from tea7;
+—-+—–+——–+———+————-+
| id | age | name | sex | hobby |
+—-+—–+——–+———+————-+
| 1 | 18 | lucy | girl | game |
| 3 | 18 | bob | secrect | eat,it,girl |
| 4 | 18 | lily | girl | eat,it |
| 5 | 22 | tarena | girl | eat,sleep |
| 6 | 25 | kitty | girl | sleep |
| 7 | 21 | jimmy | boy | eat,it,girl |
+—-+—–+——–+———+————-+
6 rows in set (0.00 sec)

mysql> select * from tea7 where age in (18,22);
+—-+—–+——–+———+————-+
| id | age | name | sex | hobby |
+—-+—–+——–+———+————-+
| 1 | 18 | lucy | girl | game |
| 3 | 18 | bob | secrect | eat,it,girl |
| 4 | 18 | lily | girl | eat,it |
| 5 | 22 | tarena | girl | eat,sleep |
+—-+—–+——–+———+————-+
4 rows in set (0.00 sec)

mysql> select * from tea7 where age not in (18,22);
+—-+—–+——-+——+————-+
| id | age | name | sex | hobby |
+—-+—–+——-+——+————-+
| 6 | 25 | kitty | girl | sleep |
| 7 | 21 | jimmy | boy | eat,it,girl |
+—-+—–+——-+——+————-+
2 rows in set (0.00 sec)

mysql> select * from tea7 where age between 22 and 25;
+—-+—–+——–+——+———–+
| id | age | name | sex | hobby |
+—-+—–+——–+——+———–+
| 5 | 22 | tarena | girl | eat,sleep |
| 6 | 25 | kitty | girl | sleep |
+—-+—–+——–+——+———–+
2 rows in set (0.00 sec)

Removing duplicates gets more involved; here is a simple use of it.
distinct with aggregate functions:

In MySQL 5.0.3 and later, aggregate functions can be combined with distinct:

1. To compute over all rows, pass the all keyword or nothing (all is the default behaviour, so it rarely needs to be written);
2. To include each distinct value only once, pass the distinct keyword;
3. distinct must be given a column name, so it cannot be combined with count(*) and cannot be applied to a calculation or expression.

SELECT avg(distinct age) FROM tea7;
Because distinct is given, the average is taken over the unique age values only, not over every row.

First count how many rows tea7 has in total
mysql> select count(*) as count from tea7 ;
+——-+
| count |
+——-+
| 6 |
+——-+
1 row in set (0.00 sec)

Then count the distinct ages to see how many unique values there are
mysql> select count(distinct age) as count_distinct from tea7 ;
+—————-+
| count_distinct |
+—————-+
| 4 |
+—————-+
1 row in set (0.00 sec)

Testing the comparison operators
mysql> select * from tea7 where age!=18;
+—-+—–+——–+——+————-+
| id | age | name | sex | hobby |
+—-+—–+——–+——+————-+
| 5 | 22 | tarena | girl | eat,sleep |
| 6 | 25 | kitty | girl | sleep |
| 7 | 21 | jimmy | boy | eat,it,girl |
+—-+—–+——–+——+————-+
3 rows in set (0.00 sec)

mysql> select * from tea7 where age=18;
+—-+—–+——+———+————-+
| id | age | name | sex | hobby |
+—-+—–+——+———+————-+
| 1 | 18 | lucy | girl | game |
| 3 | 18 | bob | secrect | eat,it,girl |
| 4 | 18 | lily | girl | eat,it |
+—-+—–+——+———+————-+
3 rows in set (0.00 sec)

Advanced match conditions
where <field> like 'pattern'   _ matches exactly one character   % matches any run of characters (including none)
mysql> select * from tea7 where name like ‘lu_’;
Empty set (0.00 sec)

mysql> select * from tea7 where name like ‘luc_’;
+—-+—–+——+——+——-+
| id | age | name | sex | hobby |
+—-+—–+——+——+——-+
| 1 | 18 | lucy | girl | game |
+—-+—–+——+——+——-+
1 row in set (0.00 sec)

mysql> select * from tea7 where name like ‘l%’;
+—-+—–+——+——+——–+
| id | age | name | sex | hobby |
+—-+—–+——+——+——–+
| 1 | 18 | lucy | girl | game |
| 4 | 18 | lily | girl | eat,it |
+—-+—–+——+——+——–+
2 rows in set (0.00 sec)

regexp matches against a regular expression
Names starting with j, or containing an l (note that ^ binds only to the j alternative; to anchor both, write '^(j|l)'):
mysql> select * from tea7 where name regexp ‘^j|l’;
+—-+—–+——-+——+————-+
| id | age | name | sex | hobby |
+—-+—–+——-+——+————-+
| 1 | 18 | lucy | girl | game |
| 4 | 18 | lily | girl | eat,it |
| 7 | 21 | jimmy | boy | eat,it,girl |
+—-+—–+——-+——+————-+
3 rows in set (0.00 sec)

Names starting with j or ending with b
mysql> select * from tea7 where name regexp ‘^j|b$’;
+—-+—–+——-+———+————-+
| id | age | name | sex | hobby |
+—-+—–+——-+———+————-+
| 3 | 18 | bob | secrect | eat,it,girl |
| 7 | 21 | jimmy | boy | eat,it,girl |
+—-+—–+——-+———+————-+
2 rows in set (0.00 sec)

Arithmetic
addition +   subtraction -   multiplication *   division /   modulo %
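A quick sketch of arithmetic in a select list (the column aliases are hypothetical):

```sql
-- compute derived values per row
select name,
       age + 1  as age_next_year,
       age * 12 as age_in_months,
       age % 2  as age_is_odd     -- 1 for odd ages, 0 for even
from tea7;
```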

Operating on query results
Aggregate functions
average avg(<field>)   sum sum()   minimum min()   maximum max()   row count count()
Continuing the example above, count the records whose name starts with j or ends with b
mysql> select * from tea7 where name regexp ‘^j|b$’;
+—-+—–+——-+———+————-+
| id | age | name | sex | hobby |
+—-+—–+——-+———+————-+
| 3 | 18 | bob | secrect | eat,it,girl |
| 7 | 21 | jimmy | boy | eat,it,girl |
+—-+—–+——-+———+————-+
2 rows in set (0.00 sec)

mysql> select count(*) from tea7 where name regexp ‘^j|b$’;
+———-+
| count(*) |
+———-+
| 2 |
+———-+
1 row in set (0.00 sec)

Compute the average age of everyone, of the girls, and of the boys
mysql> select avg(age) from tea7;
+———-+
| avg(age) |
+———-+
| 20.3333 |
+———-+
1 row in set (0.00 sec)

mysql> select avg(age) from tea7 where sex=”girl”;
+———-+
| avg(age) |
+———-+
| 20.7500 |
+———-+
1 row in set (0.00 sec)

mysql> select avg(age) from tea7 where sex=”boy”;
+———-+
| avg(age) |
+———-+
| 21.0000 |
+———-+
1 row in set (0.00 sec)
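The remaining aggregates from the list above, sum, min and max, follow the same pattern and can share one select (a sketch; the aliases are hypothetical):

```sql
-- youngest age, oldest age and combined age in a single query
select min(age) as youngest,
       max(age) as oldest,
       sum(age) as total_age
from tea7;
```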

Query with sorting; the default order is ascending
mysql> select * from tea7 order by age;
+—-+—–+——–+———+————-+
| id | age | name | sex | hobby |
+—-+—–+——–+———+————-+
| 1 | 18 | lucy | girl | game |
| 3 | 18 | bob | secrect | eat,it,girl |
| 4 | 18 | lily | girl | eat,it |
| 7 | 21 | jimmy | boy | eat,it,girl |
| 5 | 22 | tarena | girl | eat,sleep |
| 6 | 25 | kitty | girl | sleep |
+—-+—–+——–+———+————-+
6 rows in set (0.00 sec)

Ascending order
mysql> select * from tea7 order by age asc;
+—-+—–+——–+———+————-+
| id | age | name | sex | hobby |
+—-+—–+——–+———+————-+
| 1 | 18 | lucy | girl | game |
| 3 | 18 | bob | secrect | eat,it,girl |
| 4 | 18 | lily | girl | eat,it |
| 7 | 21 | jimmy | boy | eat,it,girl |
| 5 | 22 | tarena | girl | eat,sleep |
| 6 | 25 | kitty | girl | sleep |
+—-+—–+——–+———+————-+
6 rows in set (0.00 sec)

Descending order
mysql> select * from tea7 order by age desc;
+—-+—–+——–+———+————-+
| id | age | name | sex | hobby |
+—-+—–+——–+———+————-+
| 6 | 25 | kitty | girl | sleep |
| 5 | 22 | tarena | girl | eat,sleep |
| 7 | 21 | jimmy | boy | eat,it,girl |
| 1 | 18 | lucy | girl | game |
| 3 | 18 | bob | secrect | eat,it,girl |
| 4 | 18 | lily | girl | eat,it |
+—-+—–+——–+———+————-+
6 rows in set (0.00 sec)
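order by also accepts several keys; ties on the first key are broken by the second (a sketch):

```sql
-- oldest first; people of the same age are then sorted alphabetically by name
select * from tea7 order by age desc, name asc;
```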

Grouping query results

Count how many people the table has in each age group
mysql> select age,count(age) as num from tea7 group by age;
+—–+—–+
| age | num |
+—–+—–+
| 18 | 3 |
| 21 | 1 |
| 22 | 1 |
| 25 | 1 |
+—–+—–+
4 rows in set (0.01 sec)

having filters on the grouped results, which where cannot do (where runs before grouping)
First add some rows with duplicate names to the table
mysql> desc tea7;
+——-+—————————————+——+—–+———+—————-+
| Field | Type | Null | Key | Default | Extra |
+——-+—————————————+——+—–+———+—————-+
| id | int(5) | NO | PRI | NULL | auto_increment |
| age | int(3) | NO | | 18 | |
| name | char(10) | NO | | NULL | |
| sex | enum(‘boy’,’girl’,’secrect’) | YES | | girl | |
| hobby | set(‘eat’,’sleep’,’game’,’it’,’girl’) | YES | | NULL | |
+——-+—————————————+——+—–+———+—————-+
5 rows in set (0.01 sec)

mysql> insert into tea7(age,name,sex,hobby) values(16,"jimmy","boy","sleep,girl");
Query OK, 1 row affected (0.08 sec)

mysql> insert into tea7(age,name,sex,hobby) values(35,"kitty","girl","sleep,eat");
Query OK, 1 row affected (0.12 sec)

mysql> select * from tea7;
+—-+—–+——–+———+————-+
| id | age | name | sex | hobby |
+—-+—–+——–+———+————-+
| 1 | 18 | lucy | girl | game |
| 3 | 18 | bob | secrect | eat,it,girl |
| 4 | 18 | lily | girl | eat,it |
| 5 | 22 | tarena | girl | eat,sleep |
| 6 | 25 | kitty | girl | sleep |
| 7 | 21 | jimmy | boy | eat,it,girl |
| 8 | 16 | jimmy | boy | sleep,girl |
| 9 | 35 | kitty | girl | eat,sleep |
+—-+—–+——–+———+————-+
8 rows in set (0.00 sec)

Count the people who share a name, and list them
mysql> select name,count(*) as num from tea7 group by name having count(*) >= 2;
+——-+—–+
| name | num |
+——-+—–+
| jimmy | 2 |
| kitty | 2 |
+——-+—–+
2 rows in set (0.00 sec)
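where and having can be combined in one statement: where prunes rows before grouping, having prunes the groups afterwards (a sketch):

```sql
-- among the girls only, list the ages shared by at least two of them
select age, count(*) as num
from tea7
where sex = 'girl'
group by age
having count(*) >= 2;
```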


Day02. MySQL basic operations

1. Create two tables to practice column constraints
mysql> create database test;
Query OK, 1 row affected (0.00 sec)

mysql> use test;
Database changed
mysql> create table tea(name varchar(4) not null,gender enum('boy','girl') default "boy",interest set('book','film','music','football'));
Query OK, 0 rows affected (0.36 sec)

mysql> desc tea;
+———-+—————————————+——+—–+———+——-+
| Field | Type | Null | Key | Default | Extra |
+———-+—————————————+——+—–+———+——-+
| name | varchar(4) | NO | | NULL | |
| gender | enum(‘boy’,’girl’) | YES | | boy | |
| interest | set(‘book’,’film’,’music’,’football’) | YES | | NULL | |
+———-+—————————————+——+—–+———+——-+
3 rows in set (0.02 sec)

mysql> create table tea2(name varchar(4) not null,gender enum('boy','girl') default "boy",age int(3) not null default 21,interest set('book','film','music','football','girl'));
Query OK, 0 rows affected (0.32 sec)

mysql> desc tea2;
+———-+———————————————-+——+—–+———+——-+
| Field | Type | Null | Key | Default | Extra |
+———-+———————————————-+——+—–+———+——-+
| name | varchar(4) | NO | | NULL | |
| gender | enum(‘boy’,’girl’) | YES | | boy | |
| age | int(3) | NO | | 21 | |
| interest | set(‘book’,’film’,’music’,’football’,’girl’) | YES | | NULL | |
+———-+———————————————-+——+—–+———+——-+
4 rows in set (0.00 sec)

2. Modifying columns
1) Add a column:
alter table table1 add transactor varchar(10) not null;
alter table table1 add id int unsigned not null auto_increment primary key;
2) Change a column's type (and whether it allows NULL):
alter table <table> change <name> <name> <type> [null constraint];
alter table <table> modify <name> <type> [null constraint];
3) Rename a column (and optionally change its type and NULL constraint):
alter table <table> change <old_name> <new_name> <type> [null constraint];
4) Drop a column:
alter table mytable drop <column_name>;

mysql> desc tea2;
+———-+———————————————-+——+—–+———+——-+
| Field | Type | Null | Key | Default | Extra |
+———-+———————————————-+——+—–+———+——-+
| name | varchar(4) | NO | | NULL | |
| gender | enum(‘boy’,’girl’) | YES | | boy | |
| age | int(3) | NO | | 21 | |
| interest | set(‘book’,’film’,’music’,’football’,’girl’) | YES | | NULL | |
+———-+———————————————-+——+—–+———+——-+
4 rows in set (0.00 sec)

add adds a column   modify changes a column's definition   change renames a column   drop removes a column   rename renames the table

Adding
mysql> alter table tea2 add (address varchar(50) not null default "chengdu");
Query OK, 0 rows affected (0.45 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> desc tea2;
+———-+———————————————-+——+—–+———+——-+
| Field | Type | Null | Key | Default | Extra |
+———-+———————————————-+——+—–+———+——-+
| name | varchar(4) | NO | | NULL | |
| gender | enum(‘boy’,’girl’) | YES | | boy | |
| age | int(3) | NO | | 21 | |
| interest | set(‘book’,’film’,’music’,’football’,’girl’) | YES | | NULL | |
| address | varchar(50) | NO | | chengdu | |
+———-+———————————————-+——+—–+———+——-+
5 rows in set (0.00 sec)

Modifying
Variant 1: change, which must be followed by the new column name plus its full definition:
mysql> alter table tea2 change address road varchar(80) not null default "beijing";
Query OK, 0 rows affected (0.11 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> desc tea2;
+———-+———————————————-+——+—–+———+——-+
| Field | Type | Null | Key | Default | Extra |
+———-+———————————————-+——+—–+———+——-+
| name | varchar(4) | NO | | NULL | |
| gender | enum(‘boy’,’girl’) | YES | | boy | |
| age | int(3) | NO | | 21 | |
| interest | set(‘book’,’film’,’music’,’football’,’girl’) | YES | | NULL | |
| road | varchar(80) | NO | | beijing | |
+———-+———————————————-+——+—–+———+——-+
5 rows in set (0.00 sec)

Variant 2: modify, which keeps the column name and takes just the new definition:
mysql> alter table tea2 modify road varchar(60) not null default "Sichuan";
Query OK, 0 rows affected (0.87 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> desc tea2;
+———-+———————————————-+——+—–+———+——-+
| Field | Type | Null | Key | Default | Extra |
+———-+———————————————-+——+—–+———+——-+
| name | varchar(4) | NO | | NULL | |
| gender | enum(‘boy’,’girl’) | YES | | boy | |
| age | int(3) | NO | | 21 | |
| interest | set(‘book’,’film’,’music’,’football’,’girl’) | YES | | NULL | |
| road | varchar(60) | NO | | Sichuan | |
+———-+———————————————-+——+—–+———+——-+
5 rows in set (0.00 sec)

Dropping a column is a plain drop; first add two more columns to work with:
mysql> alter table tea2 add (address char(80) not null default "Chengdu",post int not null default 650000);
Query OK, 0 rows affected (0.49 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> desc tea2;
+———-+———————————————-+——+—–+———+——-+
| Field | Type | Null | Key | Default | Extra |
+———-+———————————————-+——+—–+———+——-+
| name | varchar(4) | NO | | NULL | |
| gender | enum(‘boy’,’girl’) | YES | | boy | |
| age | int(3) | NO | | 21 | |
| interest | set(‘book’,’film’,’music’,’football’,’girl’) | YES | | NULL | |
| road | varchar(60) | NO | | Sichuan | |
| address | char(80) | NO | | Chengdu | |
| post | int(11) | NO | | 650000 | |
+———-+———————————————-+——+—–+———+——-+
7 rows in set (0.00 sec)

mysql>
mysql> alter table tea2 drop road;
Query OK, 0 rows affected (0.46 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> desc tea2;
+———-+———————————————-+——+—–+———+——-+
| Field | Type | Null | Key | Default | Extra |
+———-+———————————————-+——+—–+———+——-+
| name | varchar(4) | NO | | NULL | |
| gender | enum(‘boy’,’girl’) | YES | | boy | |
| age | int(3) | NO | | 21 | |
| interest | set(‘book’,’film’,’music’,’football’,’girl’) | YES | | NULL | |
| address | char(80) | NO | | Chengdu | |
| post | int(11) | NO | | 650000 | |
+———-+———————————————-+——+—–+———+——-+

Renaming a table: rename
mysql> alter table tea2 rename tea3;
Query OK, 0 rows affected (0.10 sec)

mysql> show tables;
+—————-+
| Tables_in_test |
+—————-+
| tea |
| tea3 |
+—————-+
2 rows in set (0.00 sec)

mysql> alter table tea3 rename tea2;
Query OK, 0 rows affected (0.14 sec)

mysql> show tables;
+—————-+
| Tables_in_test |
+—————-+
| tea |
| tea2 |
+—————-+
2 rows in set (0.00 sec)


INDEX (regular index)
Create a new table tea4 with indexes on id and name:
mysql> create table tea4(id char(6) not null,name varchar(4) not null,age int(3) not null,gender enum('boy','girl') default 'boy',index(id),index(name));
Query OK, 0 rows affected (0.30 sec)

mysql> desc tea4;
+——–+——————–+——+—–+———+——-+
| Field | Type | Null | Key | Default | Extra |
+——–+——————–+——+—–+———+——-+
| id | char(6) | NO | MUL | NULL | |
| name | varchar(4) | NO | MUL | NULL | |
| age | int(3) | NO | | NULL | |
| gender | enum(‘boy’,’girl’) | YES | | boy | |
+——–+——————–+——+—–+———+——-+
4 rows in set (0.00 sec)

mysql> show create table tea4;
+——-+————————————————————————————————————————————————————————————————————————————+
| Table | Create Table |
+——-+————————————————————————————————————————————————————————————————————————————+
| tea4 | CREATE TABLE `tea4` (
`id` char(6) NOT NULL,
`name` varchar(4) NOT NULL,
`age` int(3) NOT NULL,
`gender` enum(‘boy’,’girl’) DEFAULT ‘boy’,
KEY `id` (`id`),
KEY `name` (`name`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+——-+————————————————————————————————————————————————————————————————————————————+
1 row in set (0.00 sec)

Inspect the indexes
mysql> show index from tea4;
+——-+————+———-+————–+————-+———–+————-+———-+——–+——+————+———+—————+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+——-+————+———-+————–+————-+———–+————-+———-+——–+——+————+———+—————+
| tea4 | 1 | id | 1 | id | A | 0 | NULL | NULL | | BTREE | | |
| tea4 | 1 | name | 1 | name | A | 0 | NULL | NULL | | BTREE | | |
+——-+————+———-+————–+————-+———–+————-+———-+——–+——+————+———+—————+
2 rows in set (0.00 sec)

Drop an existing index
mysql> drop index name on tea4;
Query OK, 0 rows affected (0.11 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> show index from tea4;
+——-+————+———-+————–+————-+———–+————-+———-+——–+——+————+———+—————+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+——-+————+———-+————–+————-+———–+————-+———-+——–+——+————+———+—————+
| tea4 | 1 | id | 1 | id | A | 0 | NULL | NULL | | BTREE | | |
+——-+————+———-+————–+————-+———–+————-+———-+——–+——+————+———+—————+
1 row in set (0.00 sec)

Add an index to an existing table
mysql> create index name on tea4(name);
Query OK, 0 rows affected (0.15 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> show index from tea4;
+——-+————+———-+————–+————-+———–+————-+———-+——–+——+————+———+—————+
| Table | Non_unique | Key_name | Seq_in_index | Column_name | Collation | Cardinality | Sub_part | Packed | Null | Index_type | Comment | Index_comment |
+——-+————+———-+————–+————-+———–+————-+———-+——–+——+————+———+—————+
| tea4 | 1 | id | 1 | id | A | 0 | NULL | NULL | | BTREE | | |
| tea4 | 1 | name | 1 | name | A | 0 | NULL | NULL | | BTREE | | |
+——-+————+———-+————–+————-+———–+————-+———-+——–+——+————+———+—————+
2 rows in set (0.00 sec)
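Besides plain indexes, MySQL also supports unique indexes, which speed up lookups and additionally reject duplicate values (a sketch, not part of the transcript above; the index name uniq_id is hypothetical):

```sql
-- unlike index(id) above, a unique index forbids two rows with the same id
create unique index uniq_id on tea4(id);

insert into tea4(id,name,age) values('000001','lucy',18);
-- inserting the same id again now fails with a duplicate-key error
insert into tea4(id,name,age) values('000001','lily',18);
```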

primary key
A table can have at most one primary key; to span several columns, they must all be listed in one primary key(...) clause. The column shows PRI in desc output, is often combined with AUTO_INCREMENT, and is called the primary key column.
mysql> create table tea6(id int(4) auto_increment,name varchar(4) not null,age int(2) not null,primary key(id));
Query OK, 0 rows affected (0.42 sec)

mysql> show create table tea6;
+——-+———————————————————————————————————————————————————————————+
| Table | Create Table |
+——-+———————————————————————————————————————————————————————————+
| tea6 | CREATE TABLE `tea6` (
`id` int(4) NOT NULL AUTO_INCREMENT,
`name` varchar(4) NOT NULL,
`age` int(2) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+——-+———————————————————————————————————————————————————————————+
1 row in set (0.00 sec)

mysql> desc tea6;
+——-+————+——+—–+———+—————-+
| Field | Type | Null | Key | Default | Extra |
+——-+————+——+—–+———+—————-+
| id | int(4) | NO | PRI | NULL | auto_increment |
| name | varchar(4) | NO | | NULL | |
| age | int(2) | NO | | NULL | |
+——-+————+——+—–+———+—————-+
3 rows in set (0.00 sec)
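As noted above, a primary key spanning several columns must be declared in a single primary key(...) clause (a sketch; the table and column names are hypothetical):

```sql
-- composite primary key: the PAIR (class_id, student_no) must be unique,
-- while each column on its own may contain repeated values
create table enrollment(
    class_id   int not null,
    student_no int not null,
    grade      char(2),
    primary key(class_id, student_no)
);
```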

Dropping the primary key (remove auto_increment first)
mysql> alter table tea6 modify id int(4) not null;
Query OK, 0 rows affected (0.61 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> alter table tea6 drop primary key;
Query OK, 0 rows affected (0.48 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> desc tea6;
+——-+————+——+—–+———+——-+
| Field | Type | Null | Key | Default | Extra |
+——-+————+——+—–+———+——-+
| id | int(4) | NO | | NULL | |
| name | varchar(4) | NO | | NULL | |
| age | int(2) | NO | | NULL | |
+——-+————+——+—–+———+——-+
3 rows in set (0.00 sec)

mysql> show create table tea6;
+——-+——————————————————————————————————————————————–+
| Table | Create Table |
+——-+——————————————————————————————————————————————–+
| tea6 | CREATE TABLE `tea6` (
`id` int(4) NOT NULL,
`name` varchar(4) NOT NULL,
`age` int(2) NOT NULL
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+——-+——————————————————————————————————————————————–+
1 row in set (0.00 sec)

Adding the primary key back
mysql> alter table tea6 add primary key(id);
Query OK, 0 rows affected (0.44 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> alter table tea6 modify id int(4) not null auto_increment;
Query OK, 0 rows affected (0.62 sec)
Records: 0 Duplicates: 0 Warnings: 0

mysql> desc tea6;
+——-+————+——+—–+———+—————-+
| Field | Type | Null | Key | Default | Extra |
+——-+————+——+—–+———+—————-+
| id | int(4) | NO | PRI | NULL | auto_increment |
| name | varchar(4) | NO | | NULL | |
| age | int(2) | NO | | NULL | |
+——-+————+——+—–+———+—————-+
3 rows in set (0.00 sec)

mysql> show create table tea6;
+——-+———————————————————————————————————————————————————————————+
| Table | Create Table |
+——-+———————————————————————————————————————————————————————————+
| tea6 | CREATE TABLE `tea6` (
`id` int(4) NOT NULL AUTO_INCREMENT,
`name` varchar(4) NOT NULL,
`age` int(2) NOT NULL,
PRIMARY KEY (`id`)
) ENGINE=InnoDB DEFAULT CHARSET=latin1 |
+——-+———————————————————————————————————————————————————————————+
1 row in set (0.00 sec)
