Creating a Ceph Cluster

This exercise uses four virtual machines: one client and three storage-cluster servers. The IP plan is as follows:
client 192.168.4.10
node1 192.168.4.11
node2 192.168.4.12
node3 192.168.4.13
Step 1: Pre-installation preparation
1) On the physical host, configure a yum repository for all nodes. Note that every virtual machine needs the installation media mounted.
[root@root9pc01 ~]# yum -y install vsftpd
[root@root9pc01 ~]# mkdir /var/ftp/ceph
##################################
Copy the cluster ISO into place and mount it under the FTP directory:
[root@room9pc52 ~]# cd cluster/
[root@room9pc52 cluster]# ll
total 968676
drwxr-xr-x 2 root root       4096 Jun 12 16:55 clusterPPT
drwxr-xr-x 2 root root       4096 Jun 12 16:46 cluster
-rw-r--r-- 1 root root   10919964 Mar 23  2018 Discuz_X3.3_SC_UTF8.zip
-rw-r--r-- 1 root root  980799488 May 16 19:42 rhcs2.0-rhosp9-20161113-x86_64.iso
-rw-r--r-- 1 root root     190956 May 16 19:44 s3cmd-2.0.1-1.el7.noarch.rpm
[root@room9pc52 cluster]# cp rhcs2.0-rhosp9-20161113-x86_64.iso /iso/
[root@room9pc52 cluster]# cd /iso/
[root@room9pc52 iso]# ll
total 23948404
-rwxr-xr-x 1 qemu qemu 8694792192 Apr  9  2018 CentOS-7-x86_64-Everything-1708.iso
-rwxrwxrwx 1 root root 3419052032 Dec  1  2014 cn_windows_7_ultimate_with_sp1_x64_dvd_618537.iso
drwx------ 2 root root       4096 Jan 18  2018 lost+found
-rw-r--r-- 1 root root  980799488 Oct 11 10:11 rhcs2.0-rhosp9-20161113-x86_64.iso
-rw-r--r-- 1 root root 3841982464 Nov 18  2017 rhel-server-6.7-x86_64-dvd.iso
-rw-r--r-- 1 qemu qemu 4059037696 Jan 10  2018 rhel-server-7.4-x86_64-dvd.iso
-rw-r--r-- 1 qemu qemu 3527475200 Jan 12  2018 Win10_Pro_X64_zh_CN.iso
[root@room9pc52 iso]# mount -o loop rhcs2.0-rhosp9-20161113-x86_64.iso /var/ftp/ceph
mount: /dev/loop1 is write-protected, mounting read-only
###################################################
[root@root9pc01 ~]# mount -o loop \
rhcs2.0-rhosp9-20161113-x86_64.iso /var/ftp/ceph
[root@root9pc01 ~]# systemctl restart vsftpd
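A loop mount set up this way does not survive a reboot of the physical host. A minimal way to make it persistent (a sketch, assuming the ISO is kept at /iso/ as in the listing above):
[root@root9pc01 ~]# echo "/iso/rhcs2.0-rhosp9-20161113-x86_64.iso /var/ftp/ceph iso9660 loop,ro 0 0" >> /etc/fstab
[root@root9pc01 ~]# mount -a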

2) Modify the yum configuration on all nodes (node1 shown as an example).
[root@node1 ~]# cat /etc/yum.repos.d/ceph.repo
[mon]
name=mon
baseurl=ftp://192.168.4.254/ceph/rhceph-2.0-rhel-7-x86_64/MON
gpgcheck=0
[osd]
name=osd
baseurl=ftp://192.168.4.254/ceph/rhceph-2.0-rhel-7-x86_64/OSD
gpgcheck=0
[tools]
name=tools
baseurl=ftp://192.168.4.254/ceph/rhceph-2.0-rhel-7-x86_64/Tools
gpgcheck=0
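Rather than editing the file by hand on every node, you can write it once and push it to the others (a sketch; it assumes the same repository file works on all four hosts):
[root@node1 ~]# for i in 10 12 13
> do
> scp /etc/yum.repos.d/ceph.repo 192.168.4.$i:/etc/yum.repos.d/
> done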
After that, verify the repositories:
[root@11 yum.repos.d]# yum repolist
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
192.168.4.254_rhel7 | 4.1 kB 00:00:00
(1/2): 192.168.4.254_rhel7/group_gz | 137 kB 00:00:00
(2/2): 192.168.4.254_rhel7/primary_db | 4.0 MB 00:00:00
repo id                 repo name                                  status
192.168.4.254_rhel7     added from: ftp://192.168.4.254/rhel7      4,986
mon                     mon                                           41
osd                     osd                                           28
tools                   tools                                         33
repolist: 5,088

3) Modify /etc/hosts and copy it to all hosts.
Warning: the names resolved by /etc/hosts must match each machine's actual hostname!
[root@node1 ~]# cat /etc/hosts
… …
192.168.4.10 client
192.168.4.11 node1
192.168.4.12 node2
192.168.4.13 node3
[root@node1 ~]# for i in 10 11 12 13
> do
> scp /etc/hosts 192.168.4.$i:/etc/
> done
[root@11 yum.repos.d]# for i in 10 11 12 13; do scp /etc/hosts 192.168.4.$i:/etc/; done
The authenticity of host '192.168.4.10 (192.168.4.10)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.4.10' (ECDSA) to the list of known hosts.
root@192.168.4.10's password:
hosts 100% 247 219.3KB/s 00:00
The authenticity of host '192.168.4.11 (192.168.4.11)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.4.11' (ECDSA) to the list of known hosts.
root@192.168.4.11's password:
hosts 100% 247 574.1KB/s 00:00
The authenticity of host '192.168.4.12 (192.168.4.12)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.4.12' (ECDSA) to the list of known hosts.
root@192.168.4.12's password:
hosts 100% 247 255.2KB/s 00:00
The authenticity of host '192.168.4.13 (192.168.4.13)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '192.168.4.13' (ECDSA) to the list of known hosts.
root@192.168.4.13's password:
hosts 100% 247 284.7KB/s 00:00
[root@11 yum.repos.d]#
[root@11 yum.repos.d]# cat /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.4.10 client
192.168.4.11 node1
192.168.4.12 node2
192.168.4.13 node3

Note: each machine's hostname must match its name in /etc/hosts; if it does not, change it now. Taking node1 as an example, the machine was originally named 11:
[root@11 yum.repos.d]# hostnamectl set-hostname node1
[root@11 yum.repos.d]# exit
logout
Connection to 192.168.4.11 closed.
[root@room9pc52 ~]# ssh 192.168.4.11
root@192.168.4.11's password:
Last login: Thu Oct 11 10:09:18 2018 from 192.168.4.254
[root@node1 ~]#

4) Configure passwordless SSH.
[root@node1 ~]# ssh-keygen -f /root/.ssh/id_rsa -N ''
[root@node1 ~]# for i in 10 11 12 13
> do
> ssh-copy-id 192.168.4.$i
> done
Note: every host must be able to connect to every other host without a password.
[root@client ~]# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Created directory '/root/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:bW6lHr46lUqKY+fHysXRzWoNGdVUuKZ9PNPhcv+aEhY root@client
The key's randomart image is:
+---[RSA 2048]----+
|            .o.o.|
|            . o  |
|           . .   |
|     o =E o.     |
|    S *.+=..o    |
|       ..+o*+..=+|
|     . +ooB...o.+|
|     +.o.== .. ..|
|   . ++o.o+. .o.o|
+----[SHA256]-----+
[root@client ~]#
[root@client ~]# for i in 10 11 12 13
> do
> ssh-copy-id 192.168.4.$i
> done
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.4.10 (192.168.4.10)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.4.10's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '192.168.4.10'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.4.11 (192.168.4.11)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.4.11's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '192.168.4.11'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.4.12 (192.168.4.12)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.4.12's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '192.168.4.12'"
and check to make sure that only the key(s) you wanted were added.

/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
The authenticity of host '192.168.4.13 (192.168.4.13)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@192.168.4.13's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh '192.168.4.13'"
and check to make sure that only the key(s) you wanted were added.
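
Finally, confirm that every login now works without a password (a quick check; each command should print the remote hostname with no password prompt):
[root@client ~]# for i in 10 11 12 13
> do
> ssh 192.168.4.$i hostname
> done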

Step 2: Configure NTP time synchronization

1) Create the NTP server.
[root@client ~]# yum -y install chrony
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Package chrony-3.1-2.el7.x86_64 already installed and latest version
Nothing to do
[root@client ~]# vim /etc/chrony.conf
[root@client ~]# cat /etc/chrony.conf | grep -v "^#" | grep -v "^$"
server 0.centos.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
allow 192.168.4.0/24
local stratum 10
logdir /var/log/chrony
[root@client ~]# systemctl restart chronyd

2) All other nodes synchronize time from the NTP server (node1 shown as an example).
[root@node1 ~]# cat /etc/chrony.conf
server 192.168.4.10 iburst
[root@node1 ~]# systemctl restart chronyd
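You can confirm that node1 is really syncing from 192.168.4.10 with chronyc (the selected time source is marked with ^*):
[root@node1 ~]# chronyc sources -v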

Step 3: Prepare the storage disks
1) On the physical host, prepare three disks for each virtual machine. (You can use commands, or add them directly through the GUI.)
[root@room9pc52 iso]# cd /var/lib/libvirt/images/
[root@room9pc52 images]# ll
total 295860
-rw-r--r-- 1 qemu qemu 74252288 Oct 11 10:54 a10.img
-rw-r--r-- 1 qemu qemu 74907648 Oct 11 10:53 a11.img
-rw-r--r-- 1 qemu qemu 75235328 Oct 11 10:57 a12.img
-rw-r--r-- 1 qemu qemu 75104256 Oct 11 10:56 a13.img
-rw-r--r-- 1 root root   197120 Aug 14 22:53 a50.img
drwxr-xr-x 2 root root     4096 Jan 19  2018 bin
drwxr-xr-x 2 root root     4096 Jan 23  2018 conf.d
drwxr-xr-x 5 root root     4096 Jan 12  2018 content
drwxr-xr-x 7 root root     4096 Jan 19  2018 db
drwxr-xr-x 4 root root     4096 Jan 10  2018 exam
drwxr-xr-x 4 root root     4096 Oct 11 10:11 iso
drwx------. 2 root root   16384 Jan 18  2018 lost+found
drwx------ 3 root root     4096 Jan 16  2018 qemu
-rw-r--r-- 1 root root     1860 Jan 19  2018 Student.sh
-rw-r--r-- 1 root root  2794667 Jan 13  2018 tedu-wallpaper-01.png
-rw-r--r-- 1 root root   427125 Jan 19  2018 tedu-wallpaper-weekend.png
-rw------- 1 root root     4644 Aug 13 09:14 vsftpd.conf
-rw-r--r-- 1 root root     1859 Jan 19  2018 Weekend.sh
-rw-r--r-- 1 root root   197632 Aug 12 13:13 win.img
[root@room9pc52 images]# qemu-img create -f qcow2 node1-vdb.vol 10G
Formatting 'node1-vdb.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node1-vdc.vol 10G
Formatting 'node1-vdc.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node1-vdd.vol 10G
Formatting 'node1-vdd.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node2-vdb.vol 10G
Formatting 'node2-vdb.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node2-vdc.vol 10G
Formatting 'node2-vdc.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node2-vdd.vol 10G
Formatting 'node2-vdd.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node3-vdb.vol 10G
Formatting 'node3-vdb.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node3-vdc.vol 10G
Formatting 'node3-vdc.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# qemu-img create -f qcow2 node3-vdd.vol 10G
Formatting 'node3-vdd.vol', fmt=qcow2 size=10737418240 encryption=off cluster_size=65536 lazy_refcounts=off
[root@room9pc52 images]# ll -h node*
-rw-r--r-- 1 root root 193K Oct 11 11:00 node1-vdb.vol
-rw-r--r-- 1 root root 193K Oct 11 11:00 node1-vdc.vol
-rw-r--r-- 1 root root 193K Oct 11 11:00 node1-vdd.vol
-rw-r--r-- 1 root root 193K Oct 11 11:00 node2-vdb.vol
-rw-r--r-- 1 root root 193K Oct 11 11:00 node2-vdc.vol
-rw-r--r-- 1 root root 193K Oct 11 11:00 node2-vdd.vol
-rw-r--r-- 1 root root 193K Oct 11 11:01 node3-vdb.vol
-rw-r--r-- 1 root root 193K Oct 11 11:01 node3-vdc.vol
-rw-r--r-- 1 root root 193K Oct 11 11:01 node3-vdd.vol
2) Use virt-manager to attach the disks to the virtual machines.
(In the GUI, attach the disks just created to each virtual machine, three per machine; a command-line alternative is sketched below.)
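If you prefer the command line to the GUI, virsh can attach the volumes instead (a sketch for node1's first disk; it assumes the libvirt domain is also named node1, and must be repeated for each volume and node):
[root@room9pc52 images]# virsh attach-disk node1 \
/var/lib/libvirt/images/node1-vdb.vol vdb --driver qemu --subdriver qcow2 --persistent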
[root@node1 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
vdb 252:16 0 10G 0 disk
vdc 252:32 0 10G 0 disk
vdd 252:48 0 10G 0 disk
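The same check can be run on node2 and node3 from node1, using the passwordless SSH set up earlier (a quick sketch):
[root@node1 ~]# for i in node2 node3
> do
> ssh $i lsblk
> done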

Case 2: Deploy the Ceph cluster
2.1 Problem

Continuing from Exercise 1, deploy the Ceph cluster servers to achieve the following goals:
Install the deployment tool ceph-deploy
Create the Ceph cluster
Prepare the journal disk partitions
Create the OSD storage space
Check the Ceph status and verify it
2.2 Steps

Follow the steps below to implement this case.
Step 1: Deploy the software

1) Install the deployment tool on node1 and learn its syntax.
[root@node1 ~]# yum -y install ceph-deploy
[root@node1 ~]# ceph-deploy --help

The full session is as follows:
[root@node1 ~]# yum install -y ceph-deploy
Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Resolving Dependencies
--> Running transaction check
---> Package ceph-deploy.noarch.0.1.5.33-1.el7cp will be installed
--> Finished Dependency Resolution

Dependencies Resolved

=============================================================================================
Package             Arch        Version                Repository      Size
=============================================================================================
Installing:
ceph-deploy         noarch      1.5.33-1.el7cp         tools           272 k

Transaction Summary
=============================================================================================
Install  1 Package

Total download size: 272 k
Installed size: 1.1 M
Downloading packages:
ceph-deploy-1.5.33-1.el7cp.noarch.rpm | 272 kB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
Installing : ceph-deploy-1.5.33-1.el7cp.noarch 1/1
192.168.4.254_rhel7/productid | 1.6 kB 00:00:00
mon/productid | 1.6 kB 00:00:00
osd/productid | 1.6 kB 00:00:00
Verifying : ceph-deploy-1.5.33-1.el7cp.noarch 1/1

Installed:
ceph-deploy.noarch 0:1.5.33-1.el7cp

Complete!
[root@node1 ~]# ceph-deploy --help
usage: ceph-deploy [-h] [-v | -q] [--version] [--username USERNAME]
                   [--overwrite-conf] [--cluster NAME] [--ceph-conf CEPH_CONF]
                   COMMAND ...

Easy Ceph deployment

    -^-
   /   \
   |O o|  ceph-deploy v1.5.33
   ).-.(
  '/|||\`
 | '|` |
   '|`

Full documentation can be found at: http://ceph.com/ceph-deploy/docs

optional arguments:
  -h, --help            show this help message and exit
  -v, --verbose         be more verbose
  -q, --quiet           be less verbose
  --version             the current installed version of ceph-deploy
  --username USERNAME   the username to connect to the remote host
  --overwrite-conf      overwrite an existing conf file on remote host (if
                        present)
  --cluster NAME        name of the cluster
  --ceph-conf CEPH_CONF
                        use (or reuse) a given ceph.conf file

commands:
  COMMAND               description
  new                   Start deploying a new cluster, and write a
                        CLUSTER.conf and keyring for it.
  install               Install Ceph packages on remote hosts.
  rgw                   Ceph RGW daemon management
  mds                   Ceph MDS daemon management
  mon                   Ceph MON Daemon management
  gatherkeys            Gather authentication keys for provisioning new nodes.
  disk                  Manage disks on a remote host.
  osd                   Prepare a data disk on remote host.
  admin                 Push configuration and client.admin key to a remote
                        host.
  repo                  Repo definition management
  config                Copy ceph.conf to/from remote host(s)
  uninstall             Remove Ceph packages from remote hosts.
  purge                 Remove Ceph packages from remote hosts and purge all
                        data.
  purgedata             Purge (delete, destroy, discard, shred) any Ceph data
                        from /var/lib/ceph
  forgetkeys            Remove authentication keys from the local directory.
  pkg                   Manage packages on remote hosts.
  calamari              Install and configure Calamari nodes. Assumes that a
                        repository with Calamari packages is already
                        configured. Refer to the docs for examples
                        (http://ceph.com/ceph-deploy/docs/conf.html)

2) Create a working directory. ceph-deploy reads and writes its configuration and key files in the current directory, so run the remaining commands from here.
[root@node1 ~]# mkdir ceph-cluster
[root@node1 ~]# cd ceph-cluster/

Step 2: Deploy the Ceph cluster

1) Create the Ceph cluster configuration.
[root@node1 ceph-cluster]# ceph-deploy new node1 node2 node3

The full output is as follows:
[root@node1 ceph-cluster]# ceph-deploy new node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy new node1 node2 node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] func : <function new at 0x7f1a4519fc80>
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f1a445055f0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] ssh_copykey : True
[ceph_deploy.cli][INFO ] mon : ['node1', 'node2', 'node3']
[ceph_deploy.cli][INFO ] public_network : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] cluster_network : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] fsid : None
[ceph_deploy.new][DEBUG ] Creating new cluster named ceph
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /usr/sbin/ip link show
[node1][INFO ] Running command: /usr/sbin/ip addr show
[node1][DEBUG ] IP addresses found: ['192.168.4.11', '192.168.122.1']
[ceph_deploy.new][DEBUG ] Resolving host node1
[ceph_deploy.new][DEBUG ] Monitor node1 at 192.168.4.11
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node2][DEBUG ] connected to host: node1
[node2][INFO ] Running command: ssh -CT -o BatchMode=yes node2
[ceph_deploy.new][WARNIN] could not connect via SSH
[ceph_deploy.new][INFO ] will connect again with password prompt
The authenticity of host 'node2 (192.168.4.12)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node2' (ECDSA) to the list of known hosts.
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[ceph_deploy.new][INFO ] adding public keys to authorized_keys
[node2][DEBUG ] append contents to file
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /usr/sbin/ip link show
[node2][INFO ] Running command: /usr/sbin/ip addr show
[node2][DEBUG ] IP addresses found: ['192.168.4.12', '192.168.122.1']
[ceph_deploy.new][DEBUG ] Resolving host node2
[ceph_deploy.new][DEBUG ] Monitor node2 at 192.168.4.12
[ceph_deploy.new][INFO ] making sure passwordless SSH succeeds
[node3][DEBUG ] connected to host: node1
[node3][INFO ] Running command: ssh -CT -o BatchMode=yes node3
[ceph_deploy.new][WARNIN] could not connect via SSH
[ceph_deploy.new][INFO ] will connect again with password prompt
The authenticity of host 'node3 (192.168.4.13)' can't be established.
ECDSA key fingerprint is SHA256:59Byua15gv4TAVEm7YutMtmgXfxGTYbzSfO84kICu0E.
ECDSA key fingerprint is MD5:7e:6e:95:ed:84:fe:5b:8e:26:94:73:9b:c3:4b:90:cb.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'node3' (ECDSA) to the list of known hosts.
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[ceph_deploy.new][INFO ] adding public keys to authorized_keys
[node3][DEBUG ] append contents to file
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/ip link show
[node3][INFO ] Running command: /usr/sbin/ip addr show
[node3][DEBUG ] IP addresses found: ['192.168.4.13', '192.168.122.1']
[ceph_deploy.new][DEBUG ] Resolving host node3
[ceph_deploy.new][DEBUG ] Monitor node3 at 192.168.4.13
[ceph_deploy.new][DEBUG ] Monitor initial members are ['node1', 'node2', 'node3']
[ceph_deploy.new][DEBUG ] Monitor addrs are ['192.168.4.11', '192.168.4.12', '192.168.4.13']
[ceph_deploy.new][DEBUG ] Creating a random mon key...
[ceph_deploy.new][DEBUG ] Writing monitor keyring to ceph.mon.keyring...
[ceph_deploy.new][DEBUG ] Writing initial config to ceph.conf...
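
At this point the working directory contains ceph.conf, ceph.mon.keyring, and the deployment log. The generated ceph.conf looks roughly like this (a sketch; the fsid is a placeholder, since ceph-deploy generates a random UUID):
[root@node1 ceph-cluster]# cat ceph.conf
[global]
fsid = <random UUID>
mon_initial_members = node1, node2, node3
mon_host = 192.168.4.11,192.168.4.12,192.168.4.13
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx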

2) Install the Ceph software packages on all nodes.
[root@node1 ceph-cluster]# ceph-deploy install node1 node2 node3
The output for node1 is shown below; node2 and node3 produce the same output.
[root@node1 ceph-cluster]# ceph-deploy install node1 node2 node3
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy install node1 node2 node3
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] testing : None
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f4993bfdb48>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] dev_commit : None
[ceph_deploy.cli][INFO ] install_mds : False
[ceph_deploy.cli][INFO ] stable : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] adjust_repos : True
[ceph_deploy.cli][INFO ] func : <function install at 0x7f499486b7d0>
[ceph_deploy.cli][INFO ] install_all : False
[ceph_deploy.cli][INFO ] repo : False
[ceph_deploy.cli][INFO ] host : ['node1', 'node2', 'node3']
[ceph_deploy.cli][INFO ] install_rgw : False
[ceph_deploy.cli][INFO ] install_tests : False
[ceph_deploy.cli][INFO ] repo_url : None
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] install_osd : False
[ceph_deploy.cli][INFO ] version_kind : stable
[ceph_deploy.cli][INFO ] install_common : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] dev : master
[ceph_deploy.cli][INFO ] local_mirror : None
[ceph_deploy.cli][INFO ] release : None
[ceph_deploy.cli][INFO ] install_mon : False
[ceph_deploy.cli][INFO ] gpg_url : None
[ceph_deploy.install][DEBUG ] Installing stable version jewel on cluster ceph hosts node1 node2 node3
[ceph_deploy.install][DEBUG ] Detecting platform for host node1 …
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node1][INFO ] installing Ceph on node1
[node1][INFO ] Running command: yum clean all
[node1][DEBUG ] Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
[node1][DEBUG ] This system is not registered with an entitlement server. You can use subscription-manager to register.
[node1][DEBUG ] Cleaning repos: 192.168.4.254_rhel7 mon osd tools
[node1][DEBUG ] Cleaning up everything
[node1][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[node1][INFO ] Running command: yum -y install ceph-osd ceph-mds ceph-mon ceph-radosgw
[node1][DEBUG ] Loaded plugins: langpacks, product-id, search-disabled-repos, subscription-manager
[node1][DEBUG ] This system is not registered with an entitlement server. You can use subscription-manager to register.
[node1][DEBUG ] Resolving Dependencies
[node1][DEBUG ] --> Running transaction check
[node1][DEBUG ] ---> Package ceph-mds.x86_64.1.10.2.2-38.el7cp will be installed
[node1][DEBUG ] ---> Package ceph-mon.x86_64.1.10.2.2-38.el7cp will be installed
[node1][DEBUG ] ---> Package ceph-osd.x86_64.1.10.2.2-38.el7cp will be installed
[node1][DEBUG ] ---> Package ceph-radosgw.x86_64.1.10.2.2-38.el7cp will be installed
[node1][DEBUG ] … …(dependency-processing lines omitted; the resolved packages appear in the table below)
[node1][DEBUG ] --> Finished Dependency Resolution
[node1][DEBUG ]
[node1][DEBUG ] Dependencies Resolved
[node1][DEBUG ]
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Package                      Arch     Version              Repository            Size
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Installing:
[node1][DEBUG ] ceph-mds x86_64 1:10.2.2-38.el7cp tools 2.8 M
[node1][DEBUG ] ceph-mon x86_64 1:10.2.2-38.el7cp mon 2.8 M
[node1][DEBUG ] ceph-osd x86_64 1:10.2.2-38.el7cp osd 9.0 M
[node1][DEBUG ] ceph-radosgw x86_64 1:10.2.2-38.el7cp tools 265 k
[node1][DEBUG ] Installing for dependencies:
[node1][DEBUG ] boost-iostreams x86_64 1.53.0-27.el7 192.168.4.254_rhel7 61 k
[node1][DEBUG ] boost-program-options x86_64 1.53.0-27.el7 192.168.4.254_rhel7 156 k
[node1][DEBUG ] boost-random x86_64 1.53.0-27.el7 192.168.4.254_rhel7 39 k
[node1][DEBUG ] boost-regex x86_64 1.53.0-27.el7 192.168.4.254_rhel7 300 k
[node1][DEBUG ] ceph-base x86_64 1:10.2.2-38.el7cp mon 4.2 M
[node1][DEBUG ] ceph-common x86_64 1:10.2.2-38.el7cp mon 16 M
[node1][DEBUG ] ceph-selinux x86_64 1:10.2.2-38.el7cp mon 38 k
[node1][DEBUG ] fcgi x86_64 2.4.0-25.el7cp mon 47 k
[node1][DEBUG ] hdparm x86_64 9.43-5.el7 192.168.4.254_rhel7 83 k
[node1][DEBUG ] leveldb x86_64 1.12.0-5.el7cp mon 161 k
[node1][DEBUG ] libbabeltrace x86_64 1.2.4-3.el7cp mon 147 k
[node1][DEBUG ] libcephfs1 x86_64 1:10.2.2-38.el7cp mon 1.9 M
[node1][DEBUG ] librgw2 x86_64 1:10.2.2-38.el7cp mon 2.9 M
[node1][DEBUG ] lttng-ust x86_64 2.4.1-1.el7cp mon 176 k
[node1][DEBUG ] m4 x86_64 1.4.16-10.el7 192.168.4.254_rhel7 256 k
[node1][DEBUG ] mailcap noarch 2.1.41-2.el7 192.168.4.254_rhel7 31 k
[node1][DEBUG ] patch x86_64 2.7.1-8.el7 192.168.4.254_rhel7 110 k
[node1][DEBUG ] python-babel noarch 0.9.6-8.el7 192.168.4.254_rhel7 1.4 M
[node1][DEBUG ] python-cephfs x86_64 1:10.2.2-38.el7cp mon 86 k
[node1][DEBUG ] python-flask noarch 1:0.10.1-5.el7 mon 204 k
[node1][DEBUG ] python-itsdangerous noarch 0.23-1.el7 mon 24 k
[node1][DEBUG ] python-jinja2 noarch 2.7.2-2.el7cp mon 516 k
[node1][DEBUG ] python-markupsafe x86_64 0.11-10.el7 192.168.4.254_rhel7 25 k
[node1][DEBUG ] python-rados x86_64 1:10.2.2-38.el7cp mon 164 k
[node1][DEBUG ] python-rbd x86_64 1:10.2.2-38.el7cp mon 93 k
[node1][DEBUG ] python-werkzeug noarch 0.9.1-1.el7 mon 562 k
[node1][DEBUG ] redhat-lsb-core x86_64 4.1-27.el7 192.168.4.254_rhel7 37 k
[node1][DEBUG ] redhat-lsb-submod-security x86_64 4.1-27.el7 192.168.4.254_rhel7 15 k
[node1][DEBUG ] spax x86_64 1.5.2-13.el7 192.168.4.254_rhel7 260 k
[node1][DEBUG ] userspace-rcu x86_64 0.7.9-2.el7rhgs mon 70 k
[node1][DEBUG ] Updating for dependencies:
[node1][DEBUG ] librados2 x86_64 1:10.2.2-38.el7cp mon 1.9 M
[node1][DEBUG ] librbd1 x86_64 1:10.2.2-38.el7cp mon 2.5 M
[node1][DEBUG ]
[node1][DEBUG ] Transaction Summary
[node1][DEBUG ] ================================================================================
[node1][DEBUG ] Install  4 Packages (+30 Dependent packages)
[node1][DEBUG ] Upgrade  ( 2 Dependent packages)
[node1][DEBUG ]
[node1][DEBUG ] Total download size: 49 M
[node1][DEBUG ] Downloading packages:
[node1][DEBUG ] No Presto metadata available for mon
[node1][DEBUG ] --------------------------------------------------------------------------------
[node1][DEBUG ] Total                                               30 MB/s |  49 MB  00:01
[node1][DEBUG ] Running transaction check
[node1][DEBUG ] Running transaction test
[node1][DEBUG ] Transaction test succeeded
[node1][DEBUG ] Running transaction
[node1][DEBUG ]   Installing : boost-iostreams-1.53.0-27.el7.x86_64                      1/38
[node1][DEBUG ]   Installing : boost-random-1.53.0-27.el7.x86_64                         2/38
[node1][DEBUG ] … …(install progress for the remaining packages omitted)
[node1][DEBUG ]   Installing : 1:ceph-radosgw-10.2.2-38.el7cp.x86_64                    36/38
[node1][DEBUG ]   Cleanup    : 1:librbd1-0.94.5-2.el7.x86_64                            37/38
[node1][DEBUG ]   Cleanup    : 1:librados2-0.94.5-2.el7.x86_64                          38/38
[node1][DEBUG ]   Verifying  : userspace-rcu-0.7.9-2.el7rhgs.x86_64                      1/38
[node1][DEBUG ] … …(verification progress omitted)
[node1][DEBUG ]   Verifying  : 1:librados2-0.94.5-2.el7.x86_64                          38/38
[node1][DEBUG ]
[node1][DEBUG ] Installed:
[node1][DEBUG ] ceph-mds.x86_64 1:10.2.2-38.el7cp ceph-mon.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] ceph-osd.x86_64 1:10.2.2-38.el7cp ceph-radosgw.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ]
[node1][DEBUG ] Dependency Installed:
[node1][DEBUG ] boost-iostreams.x86_64 0:1.53.0-27.el7
[node1][DEBUG ] boost-program-options.x86_64 0:1.53.0-27.el7
[node1][DEBUG ] boost-random.x86_64 0:1.53.0-27.el7
[node1][DEBUG ] boost-regex.x86_64 0:1.53.0-27.el7
[node1][DEBUG ] ceph-base.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] ceph-common.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] ceph-selinux.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] fcgi.x86_64 0:2.4.0-25.el7cp
[node1][DEBUG ] hdparm.x86_64 0:9.43-5.el7
[node1][DEBUG ] leveldb.x86_64 0:1.12.0-5.el7cp
[node1][DEBUG ] libbabeltrace.x86_64 0:1.2.4-3.el7cp
[node1][DEBUG ] libcephfs1.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] librgw2.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] lttng-ust.x86_64 0:2.4.1-1.el7cp
[node1][DEBUG ] m4.x86_64 0:1.4.16-10.el7
[node1][DEBUG ] mailcap.noarch 0:2.1.41-2.el7
[node1][DEBUG ] patch.x86_64 0:2.7.1-8.el7
[node1][DEBUG ] python-babel.noarch 0:0.9.6-8.el7
[node1][DEBUG ] python-cephfs.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] python-flask.noarch 1:0.10.1-5.el7
[node1][DEBUG ] python-itsdangerous.noarch 0:0.23-1.el7
[node1][DEBUG ] python-jinja2.noarch 0:2.7.2-2.el7cp
[node1][DEBUG ] python-markupsafe.x86_64 0:0.11-10.el7
[node1][DEBUG ] python-rados.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] python-rbd.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ] python-werkzeug.noarch 0:0.9.1-1.el7
[node1][DEBUG ] redhat-lsb-core.x86_64 0:4.1-27.el7
[node1][DEBUG ] redhat-lsb-submod-security.x86_64 0:4.1-27.el7
[node1][DEBUG ] spax.x86_64 0:1.5.2-13.el7
[node1][DEBUG ] userspace-rcu.x86_64 0:0.7.9-2.el7rhgs
[node1][DEBUG ]
[node1][DEBUG ] Dependency Updated:
[node1][DEBUG ] librados2.x86_64 1:10.2.2-38.el7cp librbd1.x86_64 1:10.2.2-38.el7cp
[node1][DEBUG ]
[node1][DEBUG ] Complete!
[node1][INFO ] Running command: ceph --version
[node1][DEBUG ] ceph version 10.2.2-38.el7cp (119a68752a5671253f9daae3f894a90313a6b8e4)
[ceph_deploy.install][DEBUG ] Detecting platform for host node2 …
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node2][INFO ] installing Ceph on node2
[node2][INFO ] Running command: yum clean all
[node2][INFO ] Running command: yum -y install ceph-osd ceph-mds ceph-mon ceph-radosgw
[node2][DEBUG ] … …(output identical to the node1 installation above)
[node2][DEBUG ] Complete!
[node2][INFO ] Running command: ceph –version
[node2][DEBUG ] ceph version 10.2.2-38.el7cp (119a68752a5671253f9daae3f894a90313a6b8e4)
[ceph_deploy.install][DEBUG ] Detecting platform for host node3 …
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[ceph_deploy.install][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node3][INFO ] installing Ceph on node3
[node3][INFO ] Running command: yum clean all
[node3][DEBUG ] 已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
[node3][DEBUG ] This system is not registered with an entitlement server. You can use subscription-manager to register.
[node3][DEBUG ] 正在清理软件源: 192.168.4.254_rhel7 mon osd tools
[node3][DEBUG ] Cleaning up everything
[node3][DEBUG ] Maybe you want: rm -rf /var/cache/yum, to also free up space taken by orphaned data from disabled or removed repos
[node3][INFO ] Running command: yum -y install ceph-osd ceph-mds ceph-mon ceph-radosgw
[node3][DEBUG ] 已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
[node3][DEBUG ] This system is not registered with an entitlement server. You can use subscription-manager to register.
[node3][DEBUG ] 正在解决依赖关系
[node3][DEBUG ] –> 正在检查事务
[node3][DEBUG ] —> 软件包 ceph-mds.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] –> 正在处理依赖关系 ceph-base = 1:10.2.2-38.el7cp,它被软件包 1:ceph-mds-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 libboost_iostreams-mt.so.1.53.0()(64bit),它被软件包 1:ceph-mds-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] —> 软件包 ceph-mon.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] –> 正在处理依赖关系 python-flask,它被软件包 1:ceph-mon-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 libboost_random-mt.so.1.53.0()(64bit),它被软件包 1:ceph-mon-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 libleveldb.so.1()(64bit),它被软件包 1:ceph-mon-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] —> 软件包 ceph-osd.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] –> 正在处理依赖关系 libboost_program_options-mt.so.1.53.0()(64bit),它被软件包 1:ceph-osd-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] —> 软件包 ceph-radosgw.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] –> 正在处理依赖关系 ceph-common = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 ceph-selinux = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 librados2 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 librgw2 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 mailcap,它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 libfcgi.so.0()(64bit),它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 librgw.so.2()(64bit),它被软件包 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在检查事务
[node3][DEBUG ] —> 软件包 boost-iostreams.x86_64.0.1.53.0-27.el7 将被 安装
[node3][DEBUG ] —> 软件包 boost-program-options.x86_64.0.1.53.0-27.el7 将被 安装
[node3][DEBUG ] —> 软件包 boost-random.x86_64.0.1.53.0-27.el7 将被 安装
[node3][DEBUG ] —> 软件包 ceph-base.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] –> 正在处理依赖关系 libcephfs1 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 librbd1 = 1:10.2.2-38.el7cp,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 hdparm,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 redhat-lsb-core,它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 libcephfs.so.1()(64bit),它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 liblttng-ust.so.0()(64bit),它被软件包 1:ceph-base-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] —> 软件包 ceph-common.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] –> 正在处理依赖关系 python-cephfs = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 python-rados = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 python-rbd = 1:10.2.2-38.el7cp,它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 libbabeltrace-ctf.so.1()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 libbabeltrace.so.1()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 libboost_regex-mt.so.1.53.0()(64bit),它被软件包 1:ceph-common-10.2.2-38.el7cp.x86_64 需要
[node3][DEBUG ] —> 软件包 ceph-selinux.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] —> 软件包 fcgi.x86_64.0.2.4.0-25.el7cp 将被 安装
[node3][DEBUG ] —> 软件包 leveldb.x86_64.0.1.12.0-5.el7cp 将被 安装
[node3][DEBUG ] —> 软件包 librados2.x86_64.1.0.94.5-2.el7 将被 升级
[node3][DEBUG ] —> 软件包 librados2.x86_64.1.10.2.2-38.el7cp 将被 更新
[node3][DEBUG ] —> 软件包 librgw2.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] —> 软件包 mailcap.noarch.0.2.1.41-2.el7 将被 安装
[node3][DEBUG ] —> 软件包 python-flask.noarch.1.0.10.1-5.el7 将被 安装
[node3][DEBUG ] –> 正在处理依赖关系 python-itsdangerous,它被软件包 1:python-flask-0.10.1-5.el7.noarch 需要
[node3][DEBUG ] –> 正在处理依赖关系 python-jinja2,它被软件包 1:python-flask-0.10.1-5.el7.noarch 需要
[node3][DEBUG ] –> 正在处理依赖关系 python-werkzeug,它被软件包 1:python-flask-0.10.1-5.el7.noarch 需要
[node3][DEBUG ] –> 正在检查事务
[node3][DEBUG ] —> 软件包 boost-regex.x86_64.0.1.53.0-27.el7 将被 安装
[node3][DEBUG ] —> 软件包 hdparm.x86_64.0.9.43-5.el7 将被 安装
[node3][DEBUG ] —> 软件包 libbabeltrace.x86_64.0.1.2.4-3.el7cp 将被 安装
[node3][DEBUG ] —> 软件包 libcephfs1.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] —> 软件包 librbd1.x86_64.1.0.94.5-2.el7 将被 升级
[node3][DEBUG ] —> 软件包 librbd1.x86_64.1.10.2.2-38.el7cp 将被 更新
[node3][DEBUG ] —> 软件包 lttng-ust.x86_64.0.2.4.1-1.el7cp 将被 安装
[node3][DEBUG ] –> 正在处理依赖关系 liburcu-bp.so.1()(64bit),它被软件包 lttng-ust-2.4.1-1.el7cp.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 liburcu-cds.so.1()(64bit),它被软件包 lttng-ust-2.4.1-1.el7cp.x86_64 需要
[node3][DEBUG ] —> 软件包 python-cephfs.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] —> 软件包 python-itsdangerous.noarch.0.0.23-1.el7 将被 安装
[node3][DEBUG ] —> 软件包 python-jinja2.noarch.0.2.7.2-2.el7cp 将被 安装
[node3][DEBUG ] –> 正在处理依赖关系 python-babel >= 0.8,它被软件包 python-jinja2-2.7.2-2.el7cp.noarch 需要
[node3][DEBUG ] –> 正在处理依赖关系 python-markupsafe,它被软件包 python-jinja2-2.7.2-2.el7cp.noarch 需要
[node3][DEBUG ] —> 软件包 python-rados.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] —> 软件包 python-rbd.x86_64.1.10.2.2-38.el7cp 将被 安装
[node3][DEBUG ] —> 软件包 python-werkzeug.noarch.0.0.9.1-1.el7 将被 安装
[node3][DEBUG ] —> 软件包 redhat-lsb-core.x86_64.0.4.1-27.el7 将被 安装
[node3][DEBUG ] –> 正在处理依赖关系 redhat-lsb-submod-security(x86-64) = 4.1-27.el7,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 /usr/bin/m4,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 /usr/bin/patch,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node3][DEBUG ] –> 正在处理依赖关系 spax,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
[node3][DEBUG ] –> 正在检查事务
[node3][DEBUG ] —> 软件包 m4.x86_64.0.1.4.16-10.el7 将被 安装
[node3][DEBUG ] —> 软件包 patch.x86_64.0.2.7.1-8.el7 将被 安装
[node3][DEBUG ] —> 软件包 python-babel.noarch.0.0.9.6-8.el7 将被 安装
[node3][DEBUG ] —> 软件包 python-markupsafe.x86_64.0.0.11-10.el7 将被 安装
[node3][DEBUG ] —> 软件包 redhat-lsb-submod-security.x86_64.0.4.1-27.el7 将被 安装
[node3][DEBUG ] —> 软件包 spax.x86_64.0.1.5.2-13.el7 将被 安装
[node3][DEBUG ] —> 软件包 userspace-rcu.x86_64.0.0.7.9-2.el7rhgs 将被 安装
[node3][DEBUG ] –> 解决依赖关系完成
[node3][DEBUG ]
[node3][DEBUG ] 依赖关系解决
[node3][DEBUG ]
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] Package 架构 版本 源 大小
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] 正在安装:
[node3][DEBUG ] ceph-mds x86_64 1:10.2.2-38.el7cp tools 2.8 M
[node3][DEBUG ] ceph-mon x86_64 1:10.2.2-38.el7cp mon 2.8 M
[node3][DEBUG ] ceph-osd x86_64 1:10.2.2-38.el7cp osd 9.0 M
[node3][DEBUG ] ceph-radosgw x86_64 1:10.2.2-38.el7cp tools 265 k
[node3][DEBUG ] 为依赖而安装:
[node3][DEBUG ] boost-iostreams x86_64 1.53.0-27.el7 192.168.4.254_rhel7 61 k
[node3][DEBUG ] boost-program-options x86_64 1.53.0-27.el7 192.168.4.254_rhel7 156 k
[node3][DEBUG ] boost-random x86_64 1.53.0-27.el7 192.168.4.254_rhel7 39 k
[node3][DEBUG ] boost-regex x86_64 1.53.0-27.el7 192.168.4.254_rhel7 300 k
[node3][DEBUG ] ceph-base x86_64 1:10.2.2-38.el7cp mon 4.2 M
[node3][DEBUG ] ceph-common x86_64 1:10.2.2-38.el7cp mon 16 M
[node3][DEBUG ] ceph-selinux x86_64 1:10.2.2-38.el7cp mon 38 k
[node3][DEBUG ] fcgi x86_64 2.4.0-25.el7cp mon 47 k
[node3][DEBUG ] hdparm x86_64 9.43-5.el7 192.168.4.254_rhel7 83 k
[node3][DEBUG ] leveldb x86_64 1.12.0-5.el7cp mon 161 k
[node3][DEBUG ] libbabeltrace x86_64 1.2.4-3.el7cp mon 147 k
[node3][DEBUG ] libcephfs1 x86_64 1:10.2.2-38.el7cp mon 1.9 M
[node3][DEBUG ] librgw2 x86_64 1:10.2.2-38.el7cp mon 2.9 M
[node3][DEBUG ] lttng-ust x86_64 2.4.1-1.el7cp mon 176 k
[node3][DEBUG ] m4 x86_64 1.4.16-10.el7 192.168.4.254_rhel7 256 k
[node3][DEBUG ] mailcap noarch 2.1.41-2.el7 192.168.4.254_rhel7 31 k
[node3][DEBUG ] patch x86_64 2.7.1-8.el7 192.168.4.254_rhel7 110 k
[node3][DEBUG ] python-babel noarch 0.9.6-8.el7 192.168.4.254_rhel7 1.4 M
[node3][DEBUG ] python-cephfs x86_64 1:10.2.2-38.el7cp mon 86 k
[node3][DEBUG ] python-flask noarch 1:0.10.1-5.el7 mon 204 k
[node3][DEBUG ] python-itsdangerous noarch 0.23-1.el7 mon 24 k
[node3][DEBUG ] python-jinja2 noarch 2.7.2-2.el7cp mon 516 k
[node3][DEBUG ] python-markupsafe x86_64 0.11-10.el7 192.168.4.254_rhel7 25 k
[node3][DEBUG ] python-rados x86_64 1:10.2.2-38.el7cp mon 164 k
[node3][DEBUG ] python-rbd x86_64 1:10.2.2-38.el7cp mon 93 k
[node3][DEBUG ] python-werkzeug noarch 0.9.1-1.el7 mon 562 k
[node3][DEBUG ] redhat-lsb-core x86_64 4.1-27.el7 192.168.4.254_rhel7 37 k
[node3][DEBUG ] redhat-lsb-submod-security x86_64 4.1-27.el7 192.168.4.254_rhel7 15 k
[node3][DEBUG ] spax x86_64 1.5.2-13.el7 192.168.4.254_rhel7 260 k
[node3][DEBUG ] userspace-rcu x86_64 0.7.9-2.el7rhgs mon 70 k
[node3][DEBUG ] 为依赖而更新:
[node3][DEBUG ] librados2 x86_64 1:10.2.2-38.el7cp mon 1.9 M
[node3][DEBUG ] librbd1 x86_64 1:10.2.2-38.el7cp mon 2.5 M
[node3][DEBUG ]
[node3][DEBUG ] 事务概要
[node3][DEBUG ] ================================================================================
[node3][DEBUG ] 安装 4 软件包 (+30 依赖软件包)
[node3][DEBUG ] 升级 ( 2 依赖软件包)
[node3][DEBUG ]
[node3][DEBUG ] 总下载量:49 M
[node3][DEBUG ] Downloading packages:
[node3][DEBUG ] No Presto metadata available for mon
[node3][DEBUG ] ——————————————————————————–
[node3][DEBUG ] 总计 34 MB/s | 49 MB 00:01
[node3][DEBUG ] Running transaction check
[node3][DEBUG ] Running transaction test
[node3][DEBUG ] Transaction test succeeded
[node3][DEBUG ] Running transaction
[node3][DEBUG ] 正在安装 : boost-iostreams-1.53.0-27.el7.x86_64 1/38
[node3][DEBUG ] 正在安装 : boost-random-1.53.0-27.el7.x86_64 2/38
[node3][DEBUG ] 正在安装 : 1:libcephfs1-10.2.2-38.el7cp.x86_64 3/38
[node3][DEBUG ] 正在安装 : fcgi-2.4.0-25.el7cp.x86_64 4/38
[node3][DEBUG ] 正在安装 : boost-program-options-1.53.0-27.el7.x86_64 5/38
[node3][DEBUG ] 正在安装 : leveldb-1.12.0-5.el7cp.x86_64 6/38
[node3][DEBUG ] 正在安装 : python-werkzeug-0.9.1-1.el7.noarch 7/38
[node3][DEBUG ] 正在安装 : spax-1.5.2-13.el7.x86_64 8/38
[node3][DEBUG ] 正在安装 : redhat-lsb-submod-security-4.1-27.el7.x86_64 9/38
[node3][DEBUG ] 正在安装 : python-markupsafe-0.11-10.el7.x86_64 10/38
[node3][DEBUG ] 正在安装 : patch-2.7.1-8.el7.x86_64 11/38
[node3][DEBUG ] 正在安装 : python-babel-0.9.6-8.el7.noarch 12/38
[node3][DEBUG ] 正在安装 : python-jinja2-2.7.2-2.el7cp.noarch 13/38
[node3][DEBUG ] 正在安装 : hdparm-9.43-5.el7.x86_64 14/38
[node3][DEBUG ] 正在安装 : m4-1.4.16-10.el7.x86_64 15/38
[node3][DEBUG ] 正在安装 : redhat-lsb-core-4.1-27.el7.x86_64 16/38
[node3][DEBUG ] 正在安装 : libbabeltrace-1.2.4-3.el7cp.x86_64 17/38
[node3][DEBUG ] 正在安装 : boost-regex-1.53.0-27.el7.x86_64 18/38
[node3][DEBUG ] 正在安装 : mailcap-2.1.41-2.el7.noarch 19/38
[node3][DEBUG ] 正在安装 : python-itsdangerous-0.23-1.el7.noarch 20/38
[node3][DEBUG ] 正在安装 : 1:python-flask-0.10.1-5.el7.noarch 21/38
[node3][DEBUG ] 正在安装 : userspace-rcu-0.7.9-2.el7rhgs.x86_64 22/38
[node3][DEBUG ] 正在安装 : lttng-ust-2.4.1-1.el7cp.x86_64 23/38
[node3][DEBUG ] 正在更新 : 1:librados2-10.2.2-38.el7cp.x86_64 24/38
[node3][DEBUG ] 正在更新 : 1:librbd1-10.2.2-38.el7cp.x86_64 25/38
[node3][DEBUG ] 正在安装 : 1:librgw2-10.2.2-38.el7cp.x86_64 26/38
[node3][DEBUG ] 正在安装 : 1:python-rados-10.2.2-38.el7cp.x86_64 27/38
[node3][DEBUG ] 正在安装 : 1:python-cephfs-10.2.2-38.el7cp.x86_64 28/38
[node3][DEBUG ] 正在安装 : 1:python-rbd-10.2.2-38.el7cp.x86_64 29/38
[node3][DEBUG ] 正在安装 : 1:ceph-common-10.2.2-38.el7cp.x86_64 30/38
[node3][DEBUG ] 正在安装 : 1:ceph-selinux-10.2.2-38.el7cp.x86_64 31/38
[node3][DEBUG ] 正在安装 : 1:ceph-base-10.2.2-38.el7cp.x86_64 32/38
[node3][DEBUG ] 正在安装 : 1:ceph-mds-10.2.2-38.el7cp.x86_64 33/38
[node3][DEBUG ] 正在安装 : 1:ceph-mon-10.2.2-38.el7cp.x86_64 34/38
[node3][DEBUG ] 正在安装 : 1:ceph-osd-10.2.2-38.el7cp.x86_64 35/38
[node3][DEBUG ] 正在安装 : 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 36/38
[node3][DEBUG ] 清理 : 1:librbd1-0.94.5-2.el7.x86_64 37/38
[node3][DEBUG ] 清理 : 1:librados2-0.94.5-2.el7.x86_64 38/38
[node3][DEBUG ] 验证中 : userspace-rcu-0.7.9-2.el7rhgs.x86_64 1/38
[node3][DEBUG ] 验证中 : python-itsdangerous-0.23-1.el7.noarch 2/38
[node3][DEBUG ] 验证中 : mailcap-2.1.41-2.el7.noarch 3/38
[node3][DEBUG ] 验证中 : boost-regex-1.53.0-27.el7.x86_64 4/38
[node3][DEBUG ] 验证中 : libbabeltrace-1.2.4-3.el7cp.x86_64 5/38
[node3][DEBUG ] 验证中 : 1:librados2-10.2.2-38.el7cp.x86_64 6/38
[node3][DEBUG ] 验证中 : m4-1.4.16-10.el7.x86_64 7/38
[node3][DEBUG ] 验证中 : hdparm-9.43-5.el7.x86_64 8/38
[node3][DEBUG ] 验证中 : 1:libcephfs1-10.2.2-38.el7cp.x86_64 9/38
[node3][DEBUG ] 验证中 : 1:ceph-common-10.2.2-38.el7cp.x86_64 10/38
[node3][DEBUG ] 验证中 : 1:python-cephfs-10.2.2-38.el7cp.x86_64 11/38
[node3][DEBUG ] 验证中 : 1:librbd1-10.2.2-38.el7cp.x86_64 12/38
[node3][DEBUG ] 验证中 : boost-iostreams-1.53.0-27.el7.x86_64 13/38
[node3][DEBUG ] 验证中 : python-babel-0.9.6-8.el7.noarch 14/38
[node3][DEBUG ] 验证中 : boost-random-1.53.0-27.el7.x86_64 15/38
[node3][DEBUG ] 验证中 : 1:ceph-mds-10.2.2-38.el7cp.x86_64 16/38
[node3][DEBUG ] 验证中 : patch-2.7.1-8.el7.x86_64 17/38
[node3][DEBUG ] 验证中 : 1:ceph-selinux-10.2.2-38.el7cp.x86_64 18/38
[node3][DEBUG ] 验证中 : python-markupsafe-0.11-10.el7.x86_64 19/38
[node3][DEBUG ] 验证中 : 1:librgw2-10.2.2-38.el7cp.x86_64 20/38
[node3][DEBUG ] 验证中 : 1:python-flask-0.10.1-5.el7.noarch 21/38
[node3][DEBUG ] 验证中 : leveldb-1.12.0-5.el7cp.x86_64 22/38
[node3][DEBUG ] 验证中 : redhat-lsb-submod-security-4.1-27.el7.x86_64 23/38
[node3][DEBUG ] 验证中 : 1:ceph-mon-10.2.2-38.el7cp.x86_64 24/38
[node3][DEBUG ] 验证中 : 1:ceph-base-10.2.2-38.el7cp.x86_64 25/38
[node3][DEBUG ] 验证中 : python-jinja2-2.7.2-2.el7cp.noarch 26/38
[node3][DEBUG ] 验证中 : boost-program-options-1.53.0-27.el7.x86_64 27/38
[node3][DEBUG ] 验证中 : 1:python-rados-10.2.2-38.el7cp.x86_64 28/38
[node3][DEBUG ] 验证中 : lttng-ust-2.4.1-1.el7cp.x86_64 29/38
[node3][DEBUG ] 验证中 : redhat-lsb-core-4.1-27.el7.x86_64 30/38
[node3][DEBUG ] 验证中 : 1:ceph-osd-10.2.2-38.el7cp.x86_64 31/38
[node3][DEBUG ] 验证中 : spax-1.5.2-13.el7.x86_64 32/38
[node3][DEBUG ] 验证中 : 1:ceph-radosgw-10.2.2-38.el7cp.x86_64 33/38
[node3][DEBUG ] 验证中 : python-werkzeug-0.9.1-1.el7.noarch 34/38
[node3][DEBUG ] 验证中 : fcgi-2.4.0-25.el7cp.x86_64 35/38
[node3][DEBUG ] 验证中 : 1:python-rbd-10.2.2-38.el7cp.x86_64 36/38
[node3][DEBUG ] 验证中 : 1:librbd1-0.94.5-2.el7.x86_64 37/38
[node3][DEBUG ] 验证中 : 1:librados2-0.94.5-2.el7.x86_64 38/38
[node3][DEBUG ]
[node3][DEBUG ] 已安装:
[node3][DEBUG ] ceph-mds.x86_64 1:10.2.2-38.el7cp ceph-mon.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] ceph-osd.x86_64 1:10.2.2-38.el7cp ceph-radosgw.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ]
[node3][DEBUG ] 作为依赖被安装:
[node3][DEBUG ] boost-iostreams.x86_64 0:1.53.0-27.el7
[node3][DEBUG ] boost-program-options.x86_64 0:1.53.0-27.el7
[node3][DEBUG ] boost-random.x86_64 0:1.53.0-27.el7
[node3][DEBUG ] boost-regex.x86_64 0:1.53.0-27.el7
[node3][DEBUG ] ceph-base.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] ceph-common.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] ceph-selinux.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] fcgi.x86_64 0:2.4.0-25.el7cp
[node3][DEBUG ] hdparm.x86_64 0:9.43-5.el7
[node3][DEBUG ] leveldb.x86_64 0:1.12.0-5.el7cp
[node3][DEBUG ] libbabeltrace.x86_64 0:1.2.4-3.el7cp
[node3][DEBUG ] libcephfs1.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] librgw2.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] lttng-ust.x86_64 0:2.4.1-1.el7cp
[node3][DEBUG ] m4.x86_64 0:1.4.16-10.el7
[node3][DEBUG ] mailcap.noarch 0:2.1.41-2.el7
[node3][DEBUG ] patch.x86_64 0:2.7.1-8.el7
[node3][DEBUG ] python-babel.noarch 0:0.9.6-8.el7
[node3][DEBUG ] python-cephfs.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] python-flask.noarch 1:0.10.1-5.el7
[node3][DEBUG ] python-itsdangerous.noarch 0:0.23-1.el7
[node3][DEBUG ] python-jinja2.noarch 0:2.7.2-2.el7cp
[node3][DEBUG ] python-markupsafe.x86_64 0:0.11-10.el7
[node3][DEBUG ] python-rados.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] python-rbd.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ] python-werkzeug.noarch 0:0.9.1-1.el7
[node3][DEBUG ] redhat-lsb-core.x86_64 0:4.1-27.el7
[node3][DEBUG ] redhat-lsb-submod-security.x86_64 0:4.1-27.el7
[node3][DEBUG ] spax.x86_64 0:1.5.2-13.el7
[node3][DEBUG ] userspace-rcu.x86_64 0:0.7.9-2.el7rhgs
[node3][DEBUG ]
[node3][DEBUG ] 作为依赖被升级:
[node3][DEBUG ] librados2.x86_64 1:10.2.2-38.el7cp librbd1.x86_64 1:10.2.2-38.el7cp
[node3][DEBUG ]
[node3][DEBUG ] 完毕!
[node3][INFO ] Running command: ceph –version
[node3][DEBUG ] ceph version 10.2.2-38.el7cp (119a68752a5671253f9daae3f894a90313a6b8e4)
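The install step ends the same way on every node: a ceph --version check. Before moving on, it can be worth confirming from node1 that all three nodes report the identical version; a minimal sketch, reusing the SSH loop style from the hosts-file step:
[root@node1 ceph-cluster]# for i in node1 node2 node3
> do
>     ssh $i ceph --version
> done
Each node should print ceph version 10.2.2-38.el7cp.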

3) Initialize the mon service on all nodes (hostname resolution must be correct)
[root@node1 ceph-cluster]# ceph-deploy mon create-initial
The full output is as follows:

[root@node1 ceph-cluster]# ceph-deploy mon create-initial
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy mon create-initial
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create-initial
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f5c3dcb46c8>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function mon at 0x7f5c3dcaa938>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts node1 node2 node3
[ceph_deploy.mon][DEBUG ] detecting platform for host node1 …
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node1][DEBUG ] determining if provided host has same hostname in remote
[node1][DEBUG ] get remote short hostname
[node1][DEBUG ] deploying mon to node1
[node1][DEBUG ] get remote short hostname
[node1][DEBUG ] remote hostname: node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node1][DEBUG ] create the mon path if it does not exist
[node1][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node1/done
[node1][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node1/done
[node1][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][DEBUG ] create the monitor keyring file
[node1][INFO ] Running command: ceph-mon –cluster ceph –mkfs -i node1 –keyring /var/lib/ceph/tmp/ceph-node1.mon.keyring –setuser 167 –setgroup 167
[node1][DEBUG ] ceph-mon: mon.noname-a 192.168.4.11:6789/0 is local, renaming to mon.node1
[node1][DEBUG ] ceph-mon: set fsid to 29908a48-7574-4aac-ac14-80a44b7cffbf
[node1][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node1 for mon.node1
[node1][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node1.mon.keyring
[node1][DEBUG ] create a done file to avoid re-doing the mon deployment
[node1][DEBUG ] create the init path if it does not exist
[node1][INFO ] Running command: systemctl enable ceph.target
[node1][INFO ] Running command: systemctl enable ceph-mon@node1
[node1][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[node1][INFO ] Running command: systemctl start ceph-mon@node1
[node1][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[node1][DEBUG ] ********************************************************************************
[node1][DEBUG ] status for monitor: mon.node1
[node1][DEBUG ] {
[node1][DEBUG ] "election_epoch": 0,
[node1][DEBUG ] "extra_probe_peers": [
[node1][DEBUG ] "192.168.4.12:6789/0",
[node1][DEBUG ] "192.168.4.13:6789/0"
[node1][DEBUG ] ],
[node1][DEBUG ] "monmap": {
[node1][DEBUG ] "created": "2018-10-11 11:16:27.048381",
[node1][DEBUG ] "epoch": 0,
[node1][DEBUG ] "fsid": "29908a48-7574-4aac-ac14-80a44b7cffbf",
[node1][DEBUG ] "modified": "2018-10-11 11:16:27.048381",
[node1][DEBUG ] "mons": [
[node1][DEBUG ] {
[node1][DEBUG ] "addr": "192.168.4.11:6789/0",
[node1][DEBUG ] "name": "node1",
[node1][DEBUG ] "rank": 0
[node1][DEBUG ] },
[node1][DEBUG ] {
[node1][DEBUG ] "addr": "0.0.0.0:0/1",
[node1][DEBUG ] "name": "node2",
[node1][DEBUG ] "rank": 1
[node1][DEBUG ] },
[node1][DEBUG ] {
[node1][DEBUG ] "addr": "0.0.0.0:0/2",
[node1][DEBUG ] "name": "node3",
[node1][DEBUG ] "rank": 2
[node1][DEBUG ] }
[node1][DEBUG ] ]
[node1][DEBUG ] },
[node1][DEBUG ] "name": "node1",
[node1][DEBUG ] "outside_quorum": [
[node1][DEBUG ] "node1"
[node1][DEBUG ] ],
[node1][DEBUG ] "quorum": [],
[node1][DEBUG ] "rank": 0,
[node1][DEBUG ] "state": "probing",
[node1][DEBUG ] "sync_provider": []
[node1][DEBUG ] }
[node1][DEBUG ] ********************************************************************************
[node1][INFO ] monitor: mon.node1 is running
[node1][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host node2 …
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node2][DEBUG ] determining if provided host has same hostname in remote
[node2][DEBUG ] get remote short hostname
[node2][DEBUG ] deploying mon to node2
[node2][DEBUG ] get remote short hostname
[node2][DEBUG ] remote hostname: node2
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node2][DEBUG ] create the mon path if it does not exist
[node2][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node2/done
[node2][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node2/done
[node2][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][DEBUG ] create the monitor keyring file
[node2][INFO ] Running command: ceph-mon –cluster ceph –mkfs -i node2 –keyring /var/lib/ceph/tmp/ceph-node2.mon.keyring –setuser 167 –setgroup 167
[node2][DEBUG ] ceph-mon: mon.noname-b 192.168.4.12:6789/0 is local, renaming to mon.node2
[node2][DEBUG ] ceph-mon: set fsid to 29908a48-7574-4aac-ac14-80a44b7cffbf
[node2][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node2 for mon.node2
[node2][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node2.mon.keyring
[node2][DEBUG ] create a done file to avoid re-doing the mon deployment
[node2][DEBUG ] create the init path if it does not exist
[node2][INFO ] Running command: systemctl enable ceph.target
[node2][INFO ] Running command: systemctl enable ceph-mon@node2
[node2][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[node2][INFO ] Running command: systemctl start ceph-mon@node2
[node2][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status
[node2][DEBUG ] ********************************************************************************
[node2][DEBUG ] status for monitor: mon.node2
[node2][DEBUG ] {
[node2][DEBUG ] "election_epoch": 1,
[node2][DEBUG ] "extra_probe_peers": [
[node2][DEBUG ] "192.168.4.11:6789/0",
[node2][DEBUG ] "192.168.4.13:6789/0"
[node2][DEBUG ] ],
[node2][DEBUG ] "monmap": {
[node2][DEBUG ] "created": "2018-10-11 11:16:31.198150",
[node2][DEBUG ] "epoch": 0,
[node2][DEBUG ] "fsid": "29908a48-7574-4aac-ac14-80a44b7cffbf",
[node2][DEBUG ] "modified": "2018-10-11 11:16:31.198150",
[node2][DEBUG ] "mons": [
[node2][DEBUG ] {
[node2][DEBUG ] "addr": "192.168.4.11:6789/0",
[node2][DEBUG ] "name": "node1",
[node2][DEBUG ] "rank": 0
[node2][DEBUG ] },
[node2][DEBUG ] {
[node2][DEBUG ] "addr": "192.168.4.12:6789/0",
[node2][DEBUG ] "name": "node2",
[node2][DEBUG ] "rank": 1
[node2][DEBUG ] },
[node2][DEBUG ] {
[node2][DEBUG ] "addr": "0.0.0.0:0/2",
[node2][DEBUG ] "name": "node3",
[node2][DEBUG ] "rank": 2
[node2][DEBUG ] }
[node2][DEBUG ] ]
[node2][DEBUG ] },
[node2][DEBUG ] "name": "node2",
[node2][DEBUG ] "outside_quorum": [],
[node2][DEBUG ] "quorum": [],
[node2][DEBUG ] "rank": 1,
[node2][DEBUG ] "state": "electing",
[node2][DEBUG ] "sync_provider": []
[node2][DEBUG ] }
[node2][DEBUG ] ********************************************************************************
[node2][INFO ] monitor: mon.node2 is running
[node2][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status
[ceph_deploy.mon][DEBUG ] detecting platform for host node3 …
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node3][DEBUG ] determining if provided host has same hostname in remote
[node3][DEBUG ] get remote short hostname
[node3][DEBUG ] deploying mon to node3
[node3][DEBUG ] get remote short hostname
[node3][DEBUG ] remote hostname: node3
[node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[node3][DEBUG ] create the mon path if it does not exist
[node3][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-node3/done
[node3][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-node3/done
[node3][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-node3.mon.keyring
[node3][DEBUG ] create the monitor keyring file
[node3][INFO ] Running command: ceph-mon –cluster ceph –mkfs -i node3 –keyring /var/lib/ceph/tmp/ceph-node3.mon.keyring –setuser 167 –setgroup 167
[node3][DEBUG ] ceph-mon: mon.noname-c 192.168.4.13:6789/0 is local, renaming to mon.node3
[node3][DEBUG ] ceph-mon: set fsid to 29908a48-7574-4aac-ac14-80a44b7cffbf
[node3][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-node3 for mon.node3
[node3][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-node3.mon.keyring
[node3][DEBUG ] create a done file to avoid re-doing the mon deployment
[node3][DEBUG ] create the init path if it does not exist
[node3][INFO ] Running command: systemctl enable ceph.target
[node3][INFO ] Running command: systemctl enable ceph-mon@node3
[node3][WARNIN] Created symlink from /etc/systemd/system/ceph-mon.target.wants/[email protected] to /usr/lib/systemd/system/[email protected].
[node3][INFO ] Running command: systemctl start ceph-mon@node3
[node3][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status
[node3][DEBUG ] ********************************************************************************
[node3][DEBUG ] status for monitor: mon.node3
[node3][DEBUG ] {
[node3][DEBUG ] "election_epoch": 4,
[node3][DEBUG ] "extra_probe_peers": [
[node3][DEBUG ] "192.168.4.11:6789/0",
[node3][DEBUG ] "192.168.4.12:6789/0"
[node3][DEBUG ] ],
[node3][DEBUG ] "monmap": {
[node3][DEBUG ] "created": "2018-10-11 11:16:27.048381",
[node3][DEBUG ] "epoch": 1,
[node3][DEBUG ] "fsid": "29908a48-7574-4aac-ac14-80a44b7cffbf",
[node3][DEBUG ] "modified": "2018-10-11 11:16:27.048381",
[node3][DEBUG ] "mons": [
[node3][DEBUG ] {
[node3][DEBUG ] "addr": "192.168.4.11:6789/0",
[node3][DEBUG ] "name": "node1",
[node3][DEBUG ] "rank": 0
[node3][DEBUG ] },
[node3][DEBUG ] {
[node3][DEBUG ] "addr": "192.168.4.12:6789/0",
[node3][DEBUG ] "name": "node2",
[node3][DEBUG ] "rank": 1
[node3][DEBUG ] },
[node3][DEBUG ] {
[node3][DEBUG ] "addr": "192.168.4.13:6789/0",
[node3][DEBUG ] "name": "node3",
[node3][DEBUG ] "rank": 2
[node3][DEBUG ] }
[node3][DEBUG ] ]
[node3][DEBUG ] },
[node3][DEBUG ] "name": "node3",
[node3][DEBUG ] "outside_quorum": [],
[node3][DEBUG ] "quorum": [
[node3][DEBUG ] 0,
[node3][DEBUG ] 1,
[node3][DEBUG ] 2
[node3][DEBUG ] ],
[node3][DEBUG ] "rank": 2,
[node3][DEBUG ] "state": "peon",
[node3][DEBUG ] "sync_provider": []
[node3][DEBUG ] }
[node3][DEBUG ] ********************************************************************************
[node3][INFO ] monitor: mon.node3 is running
[node3][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status
[ceph_deploy.mon][INFO ] processing monitor mon.node1
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.node1.asok mon_status
[ceph_deploy.mon][INFO ] mon.node1 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.node2
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.node2.asok mon_status
[ceph_deploy.mon][INFO ] mon.node2 monitor has reached quorum!
[ceph_deploy.mon][INFO ] processing monitor mon.node3
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: ceph –cluster=ceph –admin-daemon /var/run/ceph/ceph-mon.node3.asok mon_status
[ceph_deploy.mon][INFO ] mon.node3 monitor has reached quorum!
[ceph_deploy.mon][INFO ] all initial monitors are running and have formed quorum
[ceph_deploy.mon][INFO ] Running gatherkeys…
[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /etc/ceph/ceph.client.admin.keyring
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.client.admin.keyring key from node1.
[ceph_deploy.gatherkeys][DEBUG ] Have ceph.mon.keyring
[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /var/lib/ceph/bootstrap-osd/ceph.keyring
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-osd.keyring key from node1.
[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /var/lib/ceph/bootstrap-mds/ceph.keyring
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][WARNIN] Unable to find /var/lib/ceph/bootstrap-mds/ceph.keyring on node1
[ceph_deploy.gatherkeys][DEBUG ] Checking node2 for /var/lib/ceph/bootstrap-mds/ceph.keyring
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-mds.keyring key from node2.
[ceph_deploy.gatherkeys][DEBUG ] Checking node1 for /var/lib/ceph/bootstrap-rgw/ceph.keyring
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] fetch remote file
[ceph_deploy.gatherkeys][DEBUG ] Got ceph.bootstrap-rgw.keyring key from node1.
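The three mon_status dumps above show the quorum forming step by step: node1 starts out in state probing with an empty quorum, node2's arrival triggers an election (electing), and by the time node3 is deployed it already reports state peon with quorum [0, 1, 2]. Once gatherkeys has collected the keyrings, the cluster state can be double-checked from node1; health is expected to stay in an error state until OSDs are added. A minimal sketch:
[root@node1 ceph-cluster]# ceph mon stat
[root@node1 ceph-cluster]# ceph -s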

Tip: (solutions to common errors during initialization; this step is not required, refer to it only if you hit an error)
If the following error message appears:
[node1][ERROR ] admin_socket: exception getting command descriptions: [Error 2] No such file or directory
the solution is as follows (run on node1):
First check whether the command was actually executed inside the ceph-cluster directory! If create-initial was indeed run from that directory and the error persists, it can be repaired as follows.
[root@node1 ceph-cluster]# vim ceph.conf    #append the following line at the end of the file
public_network = 192.168.4.0/24
After the change, push the configuration file out again:
[root@node1 ceph-cluster]# ceph-deploy --overwrite-conf config push node1 node2 node3
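If the public_network fix was applied, the running monitors still hold the old configuration. A hedged recovery sketch (assuming the mon daemons were already created by the earlier attempt): restart each mon so it re-reads ceph.conf, then re-run create-initial, which should skip the already-deployed monitors and just gather the keys:
[root@node1 ceph-cluster]# for i in node1 node2 node3
> do
>     ssh $i systemctl restart ceph-mon@$i
> done
[root@node1 ceph-cluster]# ceph-deploy mon create-initial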

Step 3: Create the OSDs
1) Prepare the disk partitions
[root@node1 ~]# parted /dev/vdb mklabel gpt
[root@node1 ~]# parted /dev/vdb mkpart primary 1M 50%
[root@node1 ~]# parted /dev/vdb mkpart primary 50% 100%
[root@node1 ~]# chown ceph.ceph /dev/vdb1
[root@node1 ~]# chown ceph.ceph /dev/vdb2
//these two partitions serve as the journal disks for the storage servers
Note: this must be done on every node (a scripted alternative is sketched after the per-node transcripts below).

The output on each node is as follows:
node1
[root@node1 ceph-cluster]# parted /dev/vdb mklabel gpt
信息: You may need to update /etc/fstab.

[root@node1 ceph-cluster]# parted /dev/vdb mkpart primary 1M 50%
信息: You may need to update /etc/fstab.

[root@node1 ceph-cluster]# parted /dev/vdb mkpart primary 50% 100%
信息: You may need to update /etc/fstab.

[root@node1 ceph-cluster]# chown ceph.ceph /dev/vdb1
[root@node1 ceph-cluster]# chown ceph.ceph /dev/vdb2

node2
[root@node2 ~]# parted /dev/vdb mklabel gpt
信息: You may need to update /etc/fstab.

[root@node2 ~]# parted /dev/vdb mkpart primary 1M 50%
信息: You may need to update /etc/fstab.

[root@node2 ~]# parted /dev/vdb mkpart primary 50% 100%
信息: You may need to update /etc/fstab.

[root@node2 ~]# chown ceph.ceph /dev/vdb1
[root@node2 ~]# chown ceph.ceph /dev/vdb2

node3
[root@node3 ~]# parted /dev/vdb mklabel gpt
信息: You may need to update /etc/fstab.

[root@node3 ~]# parted /dev/vdb mkpart primary 1M 50%
信息: You may need to update /etc/fstab.

[root@node3 ~]# parted /dev/vdb mkpart primary 50% 100%
信息: You may need to update /etc/fstab.

[root@node3 ~]# chown ceph.ceph /dev/vdb1
[root@node3 ~]# chown ceph.ceph /dev/vdb2
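Rather than repeating the commands on each node by hand as above, the partitioning can be scripted from node1 in the same spirit as the earlier scp loop; a minimal sketch, assuming root SSH access to all three nodes:
[root@node1 ceph-cluster]# for i in node1 node2 node3
> do
>     ssh $i "parted /dev/vdb mklabel gpt"
>     ssh $i "parted /dev/vdb mkpart primary 1M 50%"
>     ssh $i "parted /dev/vdb mkpart primary 50% 100%"
>     ssh $i "chown ceph.ceph /dev/vdb1 /dev/vdb2"
> done
Note that chown on a raw partition does not survive a reboot; if the journal ownership must persist, a udev rule would be needed.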

2) Initialize and wipe the disk data (run on node1 only)
[root@node1 ~]# ceph-deploy disk zap node1:vdc node1:vdd
[root@node1 ~]# ceph-deploy disk zap node2:vdc node2:vdd
[root@node1 ~]# ceph-deploy disk zap node3:vdc node3:vdd

The output is as follows. Note that this only needs to be run from node1; there is no need to log in to each node separately.
Zapping node1's disks:
[root@node1 ceph-cluster]# ceph-deploy disk zap node1:vdc node1:vdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy disk zap node1:vdc node1:vdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f18d69c5b90>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function disk at 0x7f18d69bb2a8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('node1', '/dev/vdc', None), ('node1', '/dev/vdd', None)]
[ceph_deploy.osd][DEBUG ] zapping /dev/vdc on node1
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node1][DEBUG ] zeroing last few blocks of device
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/vdc
[node1][DEBUG ] Creating new GPT entries.
[node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[node1][DEBUG ] other utilities.
[node1][DEBUG ] Creating new GPT entries.
[node1][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][INFO ] calling partx on zapped device /dev/vdc
[ceph_deploy.osd][INFO ] re-reading known partitions will display errors
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /usr/sbin/partx -a /dev/vdc
[ceph_deploy.osd][DEBUG ] zapping /dev/vdd on node1
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node1][DEBUG ] zeroing last few blocks of device
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/vdd
[node1][DEBUG ] Creating new GPT entries.
[node1][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[node1][DEBUG ] other utilities.
[node1][DEBUG ] Creating new GPT entries.
[node1][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][INFO ] calling partx on zapped device /dev/vdd
[ceph_deploy.osd][INFO ] re-reading known partitions will display errors
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /usr/sbin/partx -a /dev/vdd

Zapping node2's disks:
[root@node1 ceph-cluster]# ceph-deploy disk zap node2:vdc node2:vdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy disk zap node2:vdc node2:vdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7faca8e50b90>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function disk at 0x7faca8e462a8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('node2', '/dev/vdc', None), ('node2', '/dev/vdd', None)]
[ceph_deploy.osd][DEBUG ] zapping /dev/vdc on node2
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node2][DEBUG ] zeroing last few blocks of device
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/vdc
[node2][DEBUG ] Creating new GPT entries.
[node2][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[node2][DEBUG ] other utilities.
[node2][DEBUG ] Creating new GPT entries.
[node2][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][INFO ] calling partx on zapped device /dev/vdc
[ceph_deploy.osd][INFO ] re-reading known partitions will display errors
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /usr/sbin/partx -a /dev/vdc
[ceph_deploy.osd][DEBUG ] zapping /dev/vdd on node2
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node2][DEBUG ] zeroing last few blocks of device
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/vdd
[node2][DEBUG ] Creating new GPT entries.
[node2][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[node2][DEBUG ] other utilities.
[node2][DEBUG ] Creating new GPT entries.
[node2][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][INFO ] calling partx on zapped device /dev/vdd
[ceph_deploy.osd][INFO ] re-reading known partitions will display errors
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /usr/sbin/partx -a /dev/vdd

Zapping node3's disks:
[root@node1 ceph-cluster]# ceph-deploy disk zap node3:vdc node3:vdd
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy disk zap node3:vdc node3:vdd
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : zap
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x7f119e29eb90>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] func : <function disk at 0x7f119e2942a8>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] disk : [('node3', '/dev/vdc', None), ('node3', '/dev/vdd', None)]
[ceph_deploy.osd][DEBUG ] zapping /dev/vdc on node3
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node3][DEBUG ] zeroing last few blocks of device
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/vdc
[node3][DEBUG ] Creating new GPT entries.
[node3][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[node3][DEBUG ] other utilities.
[node3][DEBUG ] Creating new GPT entries.
[node3][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][INFO ] calling partx on zapped device /dev/vdc
[ceph_deploy.osd][INFO ] re-reading known partitions will display errors
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/partx -a /dev/vdc
[ceph_deploy.osd][DEBUG ] zapping /dev/vdd on node3
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[node3][DEBUG ] zeroing last few blocks of device
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/ceph-disk zap /dev/vdd
[node3][DEBUG ] Creating new GPT entries.
[node3][DEBUG ] GPT data structures destroyed! You may now partition the disk using fdisk or
[node3][DEBUG ] other utilities.
[node3][DEBUG ] Creating new GPT entries.
[node3][DEBUG ] The operation has completed successfully.
[ceph_deploy.osd][INFO ] calling partx on zapped device /dev/vdd
[ceph_deploy.osd][INFO ] re-reading known partitions will display errors
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/partx -a /dev/vdd
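Before creating the OSDs, it is easy to confirm from node1 that the zapped disks now carry fresh GPT labels with no partitions; a quick sketch using lsblk over SSH:
[root@node1 ceph-cluster]# for i in node1 node2 node3
> do
>     ssh $i lsblk /dev/vdc /dev/vdd
> done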

3) Create the OSD storage (run on node1 only)
[root@node1 ~]# ceph-deploy osd create node1:vdc:/dev/vdb1 node1:vdd:/dev/vdb2
//create the OSD storage devices: vdc provides storage space to the cluster and vdb1 holds its journal;
//one storage device pairs with one journal device; the journal should sit on an SSD and does not need to be large
[root@node1 ~]# ceph-deploy osd create node2:vdc:/dev/vdb1 node2:vdd:/dev/vdb2
[root@node1 ~]# ceph-deploy osd create node3:vdc:/dev/vdb1 node3:vdd:/dev/vdb2

The output is as follows. As you can see, vdb acts as the journal disk for both vdc and vdd, which is why two partitions were created on it.
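As an aside, the 5 GB journal partitions match Ceph's default journal size (the osd_journal_size option, 5120 MB in Jewel). If the partitions were sized differently, the value could be adjusted in ceph.conf before creating the OSDs; a sketch, where the value shown is an assumption equal to the default rather than something this run changed:
[root@node1 ceph-cluster]# vim ceph.conf    #append under [global]
osd_journal_size = 5120
[root@node1 ceph-cluster]# ceph-deploy --overwrite-conf config push node1 node2 node3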

Creating the OSD storage on node1
[root@node1 ceph-cluster]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
vdb 252:16 0 10G 0 disk
├─vdb1 252:17 0 5G 0 part
└─vdb2 252:18 0 5G 0 part
vdc 252:32 0 10G 0 disk
vdd 252:48 0 10G 0 disk
Create node1's OSD space:
[root@node1 ceph-cluster]# ceph-deploy osd create node1:vdc:/dev/vdb1 node1:vdd:/dev/vdb2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy osd create node1:vdc:/dev/vdb1 node1:vdd:/dev/vdb2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('node1', '/dev/vdc', '/dev/vdb1'), ('node1', '/dev/vdd', '/dev/vdb2')]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0xf51638>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func : <function osd at 0xf44230>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node1:/dev/vdc:/dev/vdb1 node1:/dev/vdd:/dev/vdb2
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[ceph_deploy.osd][DEBUG ] Deploying osd to node1
[node1][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /dev/vdc journal /dev/vdb1 activate True
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /usr/sbin/ceph-disk -v prepare –cluster ceph –fs-type xfs — /dev/vdc /dev/vdb1
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd –cluster=ceph –show-config-value=fsid
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd –check-allows-journal -i 0 –cluster ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd –check-wants-journal -i 0 –cluster ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd –check-needs-journal -i 0 –cluster ceph
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd –cluster=ceph –show-config-value=osd_journal_size
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_mkfs_options_xfs
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_fs_mkfs_options_xfs
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_mount_options_xfs
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf –cluster=ceph –name=osd. –lookup osd_fs_mount_options_xfs
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid
[node1][WARNIN] prepare_device: Journal /dev/vdb1 is a partition
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid
[node1][WARNIN] prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
[node1][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/vdb1
[node1][WARNIN] prepare_device: Journal /dev/vdb1 was not prepared with ceph-disk. Symlinking directly.
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] set_data_partition: Creating osd partition on /dev/vdc
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] ptype_tobe_for_name: name = data
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/vdc
[node1][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk –largest-new=1 –change-name=1:ceph data –partition-guid=1:a462a571-af0a-4717-be67-5539845b34f2 –typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be –mbrtogpt — /dev/vdc
[node1][DEBUG ] The operation has completed successfully.
[node1][WARNIN] update_partition: Calling partprobe on created device /dev/vdc
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node1][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdc
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid
[node1][WARNIN] populate_data_path_device: Creating xfs fs on /dev/vdc1
[node1][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 — /dev/vdc1
[node1][DEBUG ] meta-data=/dev/vdc1 isize=2048 agcount=4, agsize=655295 blks
[node1][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[node1][DEBUG ] = crc=1 finobt=0, sparse=0
[node1][DEBUG ] data = bsize=4096 blocks=2621179, imaxpct=25
[node1][DEBUG ] = sunit=0 swidth=0 blks
[node1][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[node1][DEBUG ] log =internal log bsize=4096 blocks=2560, version=2
[node1][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[node1][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[node1][WARNIN] mount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.TNgP2M with options noatime,inode64
[node1][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 — /dev/vdc1 /var/lib/ceph/tmp/mnt.TNgP2M
[node1][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.TNgP2M
[node1][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.TNgP2M
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.TNgP2M/ceph_fsid.5402.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.TNgP2M/ceph_fsid.5402.tmp
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.TNgP2M/fsid.5402.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.TNgP2M/fsid.5402.tmp
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.TNgP2M/magic.5402.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.TNgP2M/magic.5402.tmp
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.TNgP2M/journal_uuid.5402.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.TNgP2M/journal_uuid.5402.tmp
[node1][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.TNgP2M/journal -> /dev/vdb1
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.TNgP2M
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.TNgP2M
[node1][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.TNgP2M
[node1][WARNIN] command_check_call: Running command: /bin/umount — /var/lib/ceph/tmp/mnt.TNgP2M
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node1][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk –typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d — /dev/vdc
[node1][DEBUG ] Warning: The kernel is still using the old partition table.
[node1][DEBUG ] The new table will be used at the next reboot.
[node1][DEBUG ] The operation has completed successfully.
[node1][WARNIN] update_partition: Calling partprobe on prepared device /dev/vdc
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle –timeout=600
[node1][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdc
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1
[node1][INFO ] Running command: systemctl enable ceph.target
[node1][INFO ] checking OSD status...
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node1 is now ready for osd use.
[node1][DEBUG ] connected to host: node1
[node1][DEBUG ] detect platform information from remote host
[node1][DEBUG ] detect machine type
[node1][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[ceph_deploy.osd][DEBUG ] Preparing host node1 disk /dev/vdd journal /dev/vdb2 activate True
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/vdd /dev/vdb2
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[node1][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid
[node1][WARNIN] prepare_device: Journal /dev/vdb2 is a partition
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid
[node1][WARNIN] prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
[node1][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/vdb2
[node1][WARNIN] prepare_device: Journal /dev/vdb2 was not prepared with ceph-disk. Symlinking directly.
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] set_data_partition: Creating osd partition on /dev/vdd
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] ptype_tobe_for_name: name = data
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/vdd
[node1][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:e92979e9-2ce0-4be7-a3e0-9d667d16643a --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd
[node1][DEBUG ] The operation has completed successfully.
[node1][WARNIN] update_partition: Calling partprobe on created device /dev/vdd
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdd
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid
[node1][WARNIN] populate_data_path_device: Creating xfs fs on /dev/vdd1
[node1][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdd1
[node1][DEBUG ] meta-data=/dev/vdd1 isize=2048 agcount=4, agsize=655295 blks
[node1][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[node1][DEBUG ] = crc=1 finobt=0, sparse=0
[node1][DEBUG ] data = bsize=4096 blocks=2621179, imaxpct=25
[node1][DEBUG ] = sunit=0 swidth=0 blks
[node1][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[node1][DEBUG ] log =internal log bsize=4096 blocks=2560, version=2
[node1][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[node1][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[node1][WARNIN] mount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.QZ47GJ with options noatime,inode64
[node1][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/vdd1 /var/lib/ceph/tmp/mnt.QZ47GJ
[node1][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.QZ47GJ
[node1][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.QZ47GJ
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.QZ47GJ/ceph_fsid.5881.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.QZ47GJ/ceph_fsid.5881.tmp
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.QZ47GJ/fsid.5881.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.QZ47GJ/fsid.5881.tmp
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.QZ47GJ/magic.5881.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.QZ47GJ/magic.5881.tmp
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.QZ47GJ/journal_uuid.5881.tmp
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.QZ47GJ/journal_uuid.5881.tmp
[node1][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.QZ47GJ/journal -> /dev/vdb2
[node1][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.QZ47GJ
[node1][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.QZ47GJ
[node1][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.QZ47GJ
[node1][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.QZ47GJ
[node1][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node1][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd
[node1][DEBUG ] Warning: The kernel is still using the old partition table.
[node1][DEBUG ] The new table will be used at the next reboot.
[node1][DEBUG ] The operation has completed successfully.
[node1][WARNIN] update_partition: Calling partprobe on prepared device /dev/vdd
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdd
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node1][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1
[node1][INFO ] Running command: systemctl enable ceph.target
[node1][INFO ] checking OSD status...
[node1][DEBUG ] find the location of an executable
[node1][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node1 is now ready for osd use.

Create the storage space on node2
[root@node2 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
vdb 252:16 0 10G 0 disk
├─vdb1 252:17 0 5G 0 part
└─vdb2 252:18 0 5G 0 part
vdc 252:32 0 10G 0 disk
vdd 252:48 0 10G 0 disk

[root@node1 ceph-cluster]# ceph-deploy osd create node2:vdc:/dev/vdb1 node2:vdd:/dev/vdb2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy osd create node2:vdc:/dev/vdb1 node2:vdd:/dev/vdb2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('node2', '/dev/vdc', '/dev/vdb1'), ('node2', '/dev/vdd', '/dev/vdb2')]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1590638>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func : <function osd at 0x1583230>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node2:/dev/vdc:/dev/vdb1 node2:/dev/vdd:/dev/vdb2
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[ceph_deploy.osd][DEBUG ] Deploying osd to node2
[node2][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host node2 disk /dev/vdc journal /dev/vdb1 activate True
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/vdc /dev/vdb1
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid
[node2][WARNIN] prepare_device: Journal /dev/vdb1 is a partition
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid
[node2][WARNIN] prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
[node2][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/vdb1
[node2][WARNIN] prepare_device: Journal /dev/vdb1 was not prepared with ceph-disk. Symlinking directly.
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] set_data_partition: Creating osd partition on /dev/vdc
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] ptype_tobe_for_name: name = data
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/vdc
[node2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:4bf10cc7-68bc-463d-9d29-f6ca9081d0bc --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc
[node2][DEBUG ] The operation has completed successfully.
[node2][WARNIN] update_partition: Calling partprobe on created device /dev/vdc
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node2][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdc
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid
[node2][WARNIN] populate_data_path_device: Creating xfs fs on /dev/vdc1
[node2][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdc1
[node2][DEBUG ] meta-data=/dev/vdc1 isize=2048 agcount=4, agsize=655295 blks
[node2][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[node2][DEBUG ] = crc=1 finobt=0, sparse=0
[node2][DEBUG ] data = bsize=4096 blocks=2621179, imaxpct=25
[node2][DEBUG ] = sunit=0 swidth=0 blks
[node2][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[node2][DEBUG ] log =internal log bsize=4096 blocks=2560, version=2
[node2][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[node2][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[node2][WARNIN] mount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.zv4xfo with options noatime,inode64
[node2][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/vdc1 /var/lib/ceph/tmp/mnt.zv4xfo
[node2][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.zv4xfo
[node2][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.zv4xfo
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.zv4xfo/ceph_fsid.5364.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.zv4xfo/ceph_fsid.5364.tmp
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.zv4xfo/fsid.5364.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.zv4xfo/fsid.5364.tmp
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.zv4xfo/magic.5364.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.zv4xfo/magic.5364.tmp
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.zv4xfo/journal_uuid.5364.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.zv4xfo/journal_uuid.5364.tmp
[node2][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.zv4xfo/journal -> /dev/vdb1
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.zv4xfo
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.zv4xfo
[node2][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.zv4xfo
[node2][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.zv4xfo
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc
[node2][DEBUG ] Warning: The kernel is still using the old partition table.
[node2][DEBUG ] The new table will be used at the next reboot.
[node2][DEBUG ] The operation has completed successfully.
[node2][WARNIN] update_partition: Calling partprobe on prepared device /dev/vdc
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node2][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdc
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1
[node2][INFO ] Running command: systemctl enable ceph.target
[node2][INFO ] checking OSD status...
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node2 is now ready for osd use.
[node2][DEBUG ] connected to host: node2
[node2][DEBUG ] detect platform information from remote host
[node2][DEBUG ] detect machine type
[node2][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[ceph_deploy.osd][DEBUG ] Preparing host node2 disk /dev/vdd journal /dev/vdb2 activate True
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/vdd /dev/vdb2
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[node2][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid
[node2][WARNIN] prepare_device: Journal /dev/vdb2 is a partition
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid
[node2][WARNIN] prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
[node2][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/vdb2
[node2][WARNIN] prepare_device: Journal /dev/vdb2 was not prepared with ceph-disk. Symlinking directly.
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] set_data_partition: Creating osd partition on /dev/vdd
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] ptype_tobe_for_name: name = data
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/vdd
[node2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:eda48f95-efaf-435e-8700-9511747dcec3 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd
[node2][DEBUG ] The operation has completed successfully.
[node2][WARNIN] update_partition: Calling partprobe on created device /dev/vdd
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node2][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdd
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid
[node2][WARNIN] populate_data_path_device: Creating xfs fs on /dev/vdd1
[node2][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdd1
[node2][DEBUG ] meta-data=/dev/vdd1 isize=2048 agcount=4, agsize=655295 blks
[node2][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[node2][DEBUG ] = crc=1 finobt=0, sparse=0
[node2][DEBUG ] data = bsize=4096 blocks=2621179, imaxpct=25
[node2][DEBUG ] = sunit=0 swidth=0 blks
[node2][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[node2][DEBUG ] log =internal log bsize=4096 blocks=2560, version=2
[node2][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[node2][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[node2][WARNIN] mount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.VkorNk with options noatime,inode64
[node2][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/vdd1 /var/lib/ceph/tmp/mnt.VkorNk
[node2][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.VkorNk
[node2][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.VkorNk
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.VkorNk/ceph_fsid.5874.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.VkorNk/ceph_fsid.5874.tmp
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.VkorNk/fsid.5874.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.VkorNk/fsid.5874.tmp
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.VkorNk/magic.5874.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.VkorNk/magic.5874.tmp
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.VkorNk/journal_uuid.5874.tmp
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.VkorNk/journal_uuid.5874.tmp
[node2][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.VkorNk/journal -> /dev/vdb2
[node2][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.VkorNk
[node2][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.VkorNk
[node2][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.VkorNk
[node2][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.VkorNk
[node2][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node2][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd
[node2][DEBUG ] Warning: The kernel is still using the old partition table.
[node2][DEBUG ] The new table will be used at the next reboot.
[node2][DEBUG ] The operation has completed successfully.
[node2][WARNIN] update_partition: Calling partprobe on prepared device /dev/vdd
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node2][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdd
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node2][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1
[node2][INFO ] Running command: systemctl enable ceph.target
[node2][INFO ] checking OSD status...
[node2][DEBUG ] find the location of an executable
[node2][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node2 is now ready for osd use.

Create the storage space on node3
[root@node3 ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
vdb 252:16 0 10G 0 disk
├─vdb1 252:17 0 5G 0 part
└─vdb2 252:18 0 5G 0 part
vdc 252:32 0 10G 0 disk
vdd 252:48 0 10G 0 disk

[root@node1 ceph-cluster]# ceph-deploy osd create node3:vdc:/dev/vdb1 node3:vdd:/dev/vdb2
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.33): /usr/bin/ceph-deploy osd create node3:vdc:/dev/vdb1 node3:vdd:/dev/vdb2
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] disk : [('node3', '/dev/vdc', '/dev/vdb1'), ('node3', '/dev/vdd', '/dev/vdb2')]
[ceph_deploy.cli][INFO ] dmcrypt : False
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] bluestore : None
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] dmcrypt_key_dir : /etc/ceph/dmcrypt-keys
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x1cd4638>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] fs_type : xfs
[ceph_deploy.cli][INFO ] func : <function osd at 0x1cc7230>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] zap_disk : False
[ceph_deploy.osd][DEBUG ] Preparing cluster ceph disks node3:/dev/vdc:/dev/vdb1 node3:/dev/vdd:/dev/vdb2
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[ceph_deploy.osd][DEBUG ] Deploying osd to node3
[node3][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[ceph_deploy.osd][DEBUG ] Preparing host node3 disk /dev/vdc journal /dev/vdb1 activate True
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/vdc /dev/vdb1
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid
[node3][WARNIN] prepare_device: Journal /dev/vdb1 is a partition
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb1 uuid path is /sys/dev/block/252:17/dm/uuid
[node3][WARNIN] prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
[node3][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/vdb1
[node3][WARNIN] prepare_device: Journal /dev/vdb1 was not prepared with ceph-disk. Symlinking directly.
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] set_data_partition: Creating osd partition on /dev/vdc
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] ptype_tobe_for_name: name = data
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/vdc
[node3][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:e84a0ea4-f5c2-4615-803e-a6d57f11bc18 --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdc
[node3][DEBUG ] The operation has completed successfully.
[node3][WARNIN] update_partition: Calling partprobe on created device /dev/vdc
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node3][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdc
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc1 uuid path is /sys/dev/block/252:33/dm/uuid
[node3][WARNIN] populate_data_path_device: Creating xfs fs on /dev/vdc1
[node3][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdc1
[node3][DEBUG ] meta-data=/dev/vdc1 isize=2048 agcount=4, agsize=655295 blks
[node3][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[node3][DEBUG ] = crc=1 finobt=0, sparse=0
[node3][DEBUG ] data = bsize=4096 blocks=2621179, imaxpct=25
[node3][DEBUG ] = sunit=0 swidth=0 blks
[node3][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[node3][DEBUG ] log =internal log bsize=4096 blocks=2560, version=2
[node3][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[node3][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[node3][WARNIN] mount: Mounting /dev/vdc1 on /var/lib/ceph/tmp/mnt.sFNa72 with options noatime,inode64
[node3][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/vdc1 /var/lib/ceph/tmp/mnt.sFNa72
[node3][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.sFNa72
[node3][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.sFNa72
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.sFNa72/ceph_fsid.5354.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.sFNa72/ceph_fsid.5354.tmp
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.sFNa72/fsid.5354.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.sFNa72/fsid.5354.tmp
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.sFNa72/magic.5354.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.sFNa72/magic.5354.tmp
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.sFNa72/journal_uuid.5354.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.sFNa72/journal_uuid.5354.tmp
[node3][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.sFNa72/journal -> /dev/vdb1
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.sFNa72
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.sFNa72
[node3][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.sFNa72
[node3][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.sFNa72
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdc uuid path is /sys/dev/block/252:32/dm/uuid
[node3][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdc
[node3][DEBUG ] Warning: The kernel is still using the old partition table.
[node3][DEBUG ] The new table will be used at the next reboot.
[node3][DEBUG ] The operation has completed successfully.
[node3][WARNIN] update_partition: Calling partprobe on prepared device /dev/vdc
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node3][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdc
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdc1
[node3][INFO ] Running command: systemctl enable ceph.target
[node3][INFO ] checking OSD status...
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node3 is now ready for osd use.
[node3][DEBUG ] connected to host: node3
[node3][DEBUG ] detect platform information from remote host
[node3][DEBUG ] detect machine type
[node3][DEBUG ] find the location of an executable
[ceph_deploy.osd][INFO ] Distro info: Red Hat Enterprise Linux Server 7.4 Maipo
[ceph_deploy.osd][DEBUG ] Preparing host node3 disk /dev/vdd journal /dev/vdb2 activate True
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /usr/sbin/ceph-disk -v prepare --cluster ceph --fs-type xfs -- /dev/vdd /dev/vdb2
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=fsid
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-allows-journal -i 0 --cluster ceph
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-wants-journal -i 0 --cluster ceph
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd --check-needs-journal -i 0 --cluster ceph
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] command: Running command: /usr/bin/ceph-osd --cluster=ceph --show-config-value=osd_journal_size
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mkfs_options_xfs
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mkfs_options_xfs
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_mount_options_xfs
[node3][WARNIN] command: Running command: /usr/bin/ceph-conf --cluster=ceph --name=osd. --lookup osd_fs_mount_options_xfs
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid
[node3][WARNIN] prepare_device: Journal /dev/vdb2 is a partition
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdb2 uuid path is /sys/dev/block/252:18/dm/uuid
[node3][WARNIN] prepare_device: OSD will not be hot-swappable if journal is not the same device as the osd data
[node3][WARNIN] command: Running command: /usr/sbin/blkid -o udev -p /dev/vdb2
[node3][WARNIN] prepare_device: Journal /dev/vdb2 was not prepared with ceph-disk. Symlinking directly.
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] set_data_partition: Creating osd partition on /dev/vdd
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] ptype_tobe_for_name: name = data
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] create_partition: Creating data partition num 1 size 0 on /dev/vdd
[node3][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --largest-new=1 --change-name=1:ceph data --partition-guid=1:c2ddeef5-1f3b-4ebd-93ff-3e5733ad3c3f --typecode=1:89c57f98-2fe5-4dc0-89c1-f3ad0ceff2be --mbrtogpt -- /dev/vdd
[node3][DEBUG ] The operation has completed successfully.
[node3][WARNIN] update_partition: Calling partprobe on created device /dev/vdd
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node3][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdd
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd1 uuid path is /sys/dev/block/252:49/dm/uuid
[node3][WARNIN] populate_data_path_device: Creating xfs fs on /dev/vdd1
[node3][WARNIN] command_check_call: Running command: /usr/sbin/mkfs -t xfs -f -i size=2048 -- /dev/vdd1
[node3][DEBUG ] meta-data=/dev/vdd1 isize=2048 agcount=4, agsize=655295 blks
[node3][DEBUG ] = sectsz=512 attr=2, projid32bit=1
[node3][DEBUG ] = crc=1 finobt=0, sparse=0
[node3][DEBUG ] data = bsize=4096 blocks=2621179, imaxpct=25
[node3][DEBUG ] = sunit=0 swidth=0 blks
[node3][DEBUG ] naming =version 2 bsize=4096 ascii-ci=0 ftype=1
[node3][DEBUG ] log =internal log bsize=4096 blocks=2560, version=2
[node3][DEBUG ] = sectsz=512 sunit=0 blks, lazy-count=1
[node3][DEBUG ] realtime =none extsz=4096 blocks=0, rtextents=0
[node3][WARNIN] mount: Mounting /dev/vdd1 on /var/lib/ceph/tmp/mnt.rMP6kE with options noatime,inode64
[node3][WARNIN] command_check_call: Running command: /usr/bin/mount -t xfs -o noatime,inode64 -- /dev/vdd1 /var/lib/ceph/tmp/mnt.rMP6kE
[node3][WARNIN] command: Running command: /usr/sbin/restorecon /var/lib/ceph/tmp/mnt.rMP6kE
[node3][WARNIN] populate_data_path: Preparing osd data dir /var/lib/ceph/tmp/mnt.rMP6kE
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.rMP6kE/ceph_fsid.5887.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.rMP6kE/ceph_fsid.5887.tmp
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.rMP6kE/fsid.5887.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.rMP6kE/fsid.5887.tmp
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.rMP6kE/magic.5887.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.rMP6kE/magic.5887.tmp
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.rMP6kE/journal_uuid.5887.tmp
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.rMP6kE/journal_uuid.5887.tmp
[node3][WARNIN] adjust_symlink: Creating symlink /var/lib/ceph/tmp/mnt.rMP6kE/journal -> /dev/vdb2
[node3][WARNIN] command: Running command: /usr/sbin/restorecon -R /var/lib/ceph/tmp/mnt.rMP6kE
[node3][WARNIN] command: Running command: /usr/bin/chown -R ceph:ceph /var/lib/ceph/tmp/mnt.rMP6kE
[node3][WARNIN] unmount: Unmounting /var/lib/ceph/tmp/mnt.rMP6kE
[node3][WARNIN] command_check_call: Running command: /bin/umount -- /var/lib/ceph/tmp/mnt.rMP6kE
[node3][WARNIN] get_dm_uuid: get_dm_uuid /dev/vdd uuid path is /sys/dev/block/252:48/dm/uuid
[node3][WARNIN] command_check_call: Running command: /usr/sbin/sgdisk --typecode=1:4fbd7e29-9d25-41b8-afd0-062c0ceff05d -- /dev/vdd
[node3][DEBUG ] Warning: The kernel is still using the old partition table.
[node3][DEBUG ] The new table will be used at the next reboot.
[node3][DEBUG ] The operation has completed successfully.
[node3][WARNIN] update_partition: Calling partprobe on prepared device /dev/vdd
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node3][WARNIN] command: Running command: /usr/sbin/partprobe /dev/vdd
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm settle --timeout=600
[node3][WARNIN] command_check_call: Running command: /usr/bin/udevadm trigger --action=add --sysname-match vdd1
[node3][INFO ] Running command: systemctl enable ceph.target
[node3][INFO ] checking OSD status...
[node3][DEBUG ] find the location of an executable
[node3][INFO ] Running command: /bin/ceph --cluster=ceph osd stat --format=json
[ceph_deploy.osd][DEBUG ] Host node3 is now ready for osd use.
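With two OSDs prepared on each of the three nodes, it is worth a quick structural check from node1 before the formal verification below; a sketch (output omitted):
[root@node1 ceph-cluster]# ceph osd tree    # should list 6 OSDs, two under each host, all up/in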

4) Common errors (optional)
When creating OSD storage with osd create, if it fails with a message telling you to run 'gatherkeys', the following command repairs it:
[root@node1 ~]# ceph-deploy gatherkeys node1 node2 node3

Step 4: Verification and testing

1) Check the cluster status
[root@node1 ~]# ceph -s

[root@node1 ceph-cluster]# ceph -s
cluster 29908a48-7574-4aac-ac14-80a44b7cffbf
health HEALTH_OK
monmap e1: 3 mons at {node1=192.168.4.11:6789/0,node2=192.168.4.12:6789/0,node3=192.168.4.13:6789/0}
election epoch 4, quorum 0,1,2 node1,node2,node3
osdmap e33: 6 osds: 6 up, 6 in
flags sortbitwise
pgmap v79: 64 pgs, 1 pools, 0 bytes data, 0 objects
203 MB used, 61170 MB / 61373 MB avail
64 active+clean

2) Common errors (optional)
If the status output contains the following:
health: HEALTH_WARN
clock skew detected on node2, node3…
clock skew means the hosts' clocks are out of sync. Fix: synchronize the time on all hosts with NTP first!
If the status is still unhealthy afterwards, try restarting the Ceph services with the following command:
[root@node1 ~]# systemctl restart ceph\*.service ceph\*.target
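As a sketch of the NTP fix, assuming the physical host 192.168.4.254 can serve NTP for the lab (substitute any NTP server reachable from the nodes), configure chrony on node1, node2 and node3:
[root@node1 ~]# yum -y install chrony
[root@node1 ~]# vim /etc/chrony.conf
server 192.168.4.254 iburst
[root@node1 ~]# systemctl enable chronyd
[root@node1 ~]# systemctl restart chronyd
[root@node1 ~]# chronyc sources -v    # the measured offset should converge toward zero
Once all three nodes agree on the time, re-check ceph -s; the clock skew warning should clear.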

3 Case 3: Create Ceph block storage
3.1 Problem

Carrying on from the previous exercise, use the Ceph cluster's block storage feature to achieve the following goals:
Create a block storage image
Map the image on a client
Create an image snapshot
Restore data from a snapshot
Clone an image from a snapshot
Delete snapshots and images
3.2 Steps

Follow the steps below to implement this case.
Step 1: Create an image

1) List the storage pools.
[root@node1 ~]# ceph osd lspools
0 rbd,
Transcript:
[root@node1 ceph-cluster]# ceph osd lspools
0 rbd,
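This case uses the default rbd pool (id 0) throughout. If a dedicated pool were wanted instead, one could be created first; a sketch, where the pool name testpool and the placement-group count 64 are arbitrary example values sized for a small 6-OSD cluster:
[root@node1 ceph-cluster]# ceph osd pool create testpool 64
[root@node1 ceph-cluster]# ceph osd lspools    # the new pool now appears next to rbd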

2) Create and inspect images
[root@node1 ~]# rbd create demo-image --image-feature layering --size 10G
[root@node1 ~]# rbd create rbd/image --image-feature layering --size 10G
[root@node1 ~]# rbd list
[root@node1 ~]# rbd info demo-image
rbd image 'demo-image':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.d3aa2ae8944a
format: 2
features: layering
Transcript:
[root@node1 ceph-cluster]# rbd create demo-image --image-feature layering --size 10G
[root@node1 ceph-cluster]# rbd create rbd/image --image-feature layering --size 10G
[root@node1 ceph-cluster]# rbd list
demo-image
image
[root@node1 ceph-cluster]# rbd info demo-image
rbd image 'demo-image':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.101b238e1f29
format: 2
features: layering
flags:
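The --image-feature layering flag is deliberate: kernel RBD (KRBD) clients, particularly on older kernels such as RHEL 7's, generally support only the layering feature, and an image carrying newer features (exclusive-lock, object-map, and so on) may refuse to map with an "image uses unsupported features" error. A quick check before mapping (sketch):
[root@node1 ceph-cluster]# rbd info rbd/image | grep features    # should show only: layering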

Step 2: Resize the image dynamically

1) Shrink the image
[root@node1 ~]# rbd resize --size 7G image --allow-shrink
[root@node1 ~]# rbd info image
2) Grow the image
[root@node1 ~]# rbd resize --size 15G image
[root@node1 ~]# rbd info image

Transcript:
Shrink:
[root@node1 ceph-cluster]# rbd info image
rbd image 'image':
size 10240 MB in 2560 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.103a238e1f29
format: 2
features: layering
flags:
[root@node1 ceph-cluster]#
[root@node1 ceph-cluster]# rbd resize --size 7G image --allow-shrink
Resizing image: 100% complete...done.
[root@node1 ceph-cluster]# rbd info image
rbd image 'image':
size 7168 MB in 1792 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.103a238e1f29
format: 2
features: layering
flags:

Grow:
[root@node1 ceph-cluster]# rbd resize --size 15G image
Resizing image: 100% complete...done.
[root@node1 ceph-cluster]# rbd info image
rbd image 'image':
size 15360 MB in 3840 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.103a238e1f29
format: 2
features: layering
flags:
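Note that rbd resize only changes the size of the block device; a filesystem already created on the image does not grow by itself. A sketch, assuming the image is mapped and mounted at /mnt on the client (as set up in the next step), with 20G as an arbitrary example size:
[root@client ~]# rbd resize --size 20G image
[root@client ~]# xfs_growfs /mnt    # XFS grows online; an ext4 filesystem would use resize2fs instead
Shrinking is the dangerous direction: resizing below the size of the filesystem on the image destroys data, which is why --allow-shrink must be given explicitly.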

Step 3: Access via KRBD

1) Map the image as a local disk inside the cluster
[root@node1 ~]# rbd map demo-image
/dev/rbd0
[root@node1 ~]# lsblk
… …
rbd0 251:0 0 10G 0 disk
[root@node1 ~]# mkfs.xfs /dev/rbd0
[root@node1 ~]# mount /dev/rbd0 /mnt

Transcript:
[root@node1 ceph-cluster]# rbd map demo-image
/dev/rbd0
[root@node1 ceph-cluster]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
vdb 252:16 0 10G 0 disk
├─vdb1 252:17 0 5G 0 part
└─vdb2 252:18 0 5G 0 part
vdc 252:32 0 10G 0 disk
└─vdc1 252:33 0 10G 0 part /var/lib/ceph/osd/ceph-0
vdd 252:48 0 10G 0 disk
└─vdd1 252:49 0 10G 0 part /var/lib/ceph/osd/ceph-1
rbd0 251:0 0 10G 0 disk
Format and mount:
[root@node1 ceph-cluster]# mkfs.xfs /dev/rbd
rbd/ rbd0
[root@node1 ceph-cluster]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=17, agsize=162816 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=2621440, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@node1 ceph-cluster]# mount /dev/rbd0 /mnt/
[root@node1 ceph-cluster]# ll -d /mnt/
drwxr-xr-x. 2 root root 6 10月 11 13:59 /mnt/
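This manual mapping does not survive a reboot. A sketch of making it persistent with the rbdmap service, assuming your ceph-common build ships the rbdmap helper (the /etc/ceph/rbdmap file is visible on the client later in this case; very old builds provide only an init script):
[root@node1 ceph-cluster]# vim /etc/ceph/rbdmap
rbd/demo-image id=admin,keyring=/etc/ceph/ceph.client.admin.keyring
[root@node1 ceph-cluster]# systemctl enable rbdmap    # maps every image listed in /etc/ceph/rbdmap at boot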

2) Access via KRBD from the client
# The client needs the ceph-common package installed
# Copy the configuration file (otherwise the client does not know where the cluster is)
# Copy the connection keyring (otherwise the client has no permission to connect)
[root@client ~]# yum -y install ceph-common
[root@client ~]# scp 192.168.4.11:/etc/ceph/ceph.conf /etc/ceph/
[root@client ~]# scp 192.168.4.11:/etc/ceph/ceph.client.admin.keyring \
/etc/ceph/
[root@client ~]# rbd map image
[root@client ~]# lsblk
[root@client ~]# rbd showmapped
id pool image snap device
0 rbd image - /dev/rbd0

Transcript:
[root@client ~]# yum install -y ceph-common
已加载插件:langpacks, product-id, search-disabled-repos, subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
正在解决依赖关系
--> 正在检查事务
---> 软件包 ceph-common.x86_64.1.0.94.5-2.el7 将被 安装
--> 正在处理依赖关系 python-rados = 1:0.94.5-2.el7,它被软件包 1:ceph-common-0.94.5-2.el7.x86_64 需要
--> 正在处理依赖关系 python-rbd = 1:0.94.5-2.el7,它被软件包 1:ceph-common-0.94.5-2.el7.x86_64 需要
--> 正在处理依赖关系 hdparm,它被软件包 1:ceph-common-0.94.5-2.el7.x86_64 需要
--> 正在处理依赖关系 redhat-lsb-core,它被软件包 1:ceph-common-0.94.5-2.el7.x86_64 需要
--> 正在检查事务
---> 软件包 hdparm.x86_64.0.9.43-5.el7 将被 安装
---> 软件包 python-rados.x86_64.1.0.94.5-2.el7 将被 安装
---> 软件包 python-rbd.x86_64.1.0.94.5-2.el7 将被 安装
---> 软件包 redhat-lsb-core.x86_64.0.4.1-27.el7 将被 安装
--> 正在处理依赖关系 redhat-lsb-submod-security(x86-64) = 4.1-27.el7,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
--> 正在处理依赖关系 /usr/bin/m4,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
--> 正在处理依赖关系 /usr/bin/patch,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
--> 正在处理依赖关系 spax,它被软件包 redhat-lsb-core-4.1-27.el7.x86_64 需要
--> 正在检查事务
---> 软件包 m4.x86_64.0.1.4.16-10.el7 将被 安装
---> 软件包 patch.x86_64.0.2.7.1-8.el7 将被 安装
---> 软件包 redhat-lsb-submod-security.x86_64.0.4.1-27.el7 将被 安装
---> 软件包 spax.x86_64.0.1.5.2-13.el7 将被 安装
--> 解决依赖关系完成

依赖关系解决

==============================================================================================
Package 架构 版本 源 大小
==============================================================================================
正在安装:
ceph-common x86_64 1:0.94.5-2.el7 192.168.4.254_rhel7 6.2 M
为依赖而安装:
hdparm x86_64 9.43-5.el7 192.168.4.254_rhel7 83 k
m4 x86_64 1.4.16-10.el7 192.168.4.254_rhel7 256 k
patch x86_64 2.7.1-8.el7 192.168.4.254_rhel7 110 k
python-rados x86_64 1:0.94.5-2.el7 192.168.4.254_rhel7 39 k
python-rbd x86_64 1:0.94.5-2.el7 192.168.4.254_rhel7 29 k
redhat-lsb-core x86_64 4.1-27.el7 192.168.4.254_rhel7 37 k
redhat-lsb-submod-security x86_64 4.1-27.el7 192.168.4.254_rhel7 15 k
spax x86_64 1.5.2-13.el7 192.168.4.254_rhel7 260 k

事务概要
==============================================================================================
安装 1 软件包 (+8 依赖软件包)

总下载量:7.0 M
安装大小:26 M
Downloading packages:
(1/9): hdparm-9.43-5.el7.x86_64.rpm | 83 kB 00:00:00
(2/9): m4-1.4.16-10.el7.x86_64.rpm | 256 kB 00:00:00
(3/9): patch-2.7.1-8.el7.x86_64.rpm | 110 kB 00:00:00
(4/9): python-rados-0.94.5-2.el7.x86_64.rpm | 39 kB 00:00:00
(5/9): python-rbd-0.94.5-2.el7.x86_64.rpm | 29 kB 00:00:00
(6/9): redhat-lsb-core-4.1-27.el7.x86_64.rpm | 37 kB 00:00:00
(7/9): redhat-lsb-submod-security-4.1-27.el7.x86_64.rpm | 15 kB 00:00:00
(8/9): spax-1.5.2-13.el7.x86_64.rpm | 260 kB 00:00:00
(9/9): ceph-common-0.94.5-2.el7.x86_64.rpm | 6.2 MB 00:00:00
----------------------------------------------------------------------------------------------
总计 25 MB/s | 7.0 MB 00:00:00
Running transaction check
Running transaction test
Transaction test succeeded
Running transaction
正在安装 : 1:python-rados-0.94.5-2.el7.x86_64 1/9
正在安装 : 1:python-rbd-0.94.5-2.el7.x86_64 2/9
正在安装 : patch-2.7.1-8.el7.x86_64 3/9
正在安装 : hdparm-9.43-5.el7.x86_64 4/9
正在安装 : m4-1.4.16-10.el7.x86_64 5/9
正在安装 : spax-1.5.2-13.el7.x86_64 6/9
正在安装 : redhat-lsb-submod-security-4.1-27.el7.x86_64 7/9
正在安装 : redhat-lsb-core-4.1-27.el7.x86_64 8/9
正在安装 : 1:ceph-common-0.94.5-2.el7.x86_64 9/9
192.168.4.254_rhel7/productid | 1.6 kB 00:00:00
验证中 : 1:python-rados-0.94.5-2.el7.x86_64 1/9
验证中 : redhat-lsb-submod-security-4.1-27.el7.x86_64 2/9
验证中 : spax-1.5.2-13.el7.x86_64 3/9
验证中 : 1:python-rbd-0.94.5-2.el7.x86_64 4/9
验证中 : m4-1.4.16-10.el7.x86_64 5/9
验证中 : redhat-lsb-core-4.1-27.el7.x86_64 6/9
验证中 : 1:ceph-common-0.94.5-2.el7.x86_64 7/9
验证中 : hdparm-9.43-5.el7.x86_64 8/9
验证中 : patch-2.7.1-8.el7.x86_64 9/9

已安装:
ceph-common.x86_64 1:0.94.5-2.el7

作为依赖被安装:
hdparm.x86_64 0:9.43-5.el7 m4.x86_64 0:1.4.16-10.el7
patch.x86_64 0:2.7.1-8.el7 python-rados.x86_64 1:0.94.5-2.el7
python-rbd.x86_64 1:0.94.5-2.el7 redhat-lsb-core.x86_64 0:4.1-27.el7
redhat-lsb-submod-security.x86_64 0:4.1-27.el7 spax.x86_64 0:1.5.2-13.el7

完毕!
[root@client ~]#
[root@client ~]#
[root@client ~]# scp 192.168.4.11:/etc/ceph/ceph.conf /etc/ceph
ceph.conf 100% 235 338.4KB/s 00:00
[root@client ~]# scp 192.168.4.11:/etc/ceph/ceph.client.admin.keyring /etc/ceph/
ceph.client.admin.keyring 100% 63 81.4KB/s 00:00
[root@client ~]# ll /etc/ceph/
总用量 12
-rw-------. 1 root root 63 10月 11 14:25 ceph.client.admin.keyring
-rw-r--r--. 1 root root 235 10月 11 14:24 ceph.conf
-rwxr-xr-x. 1 root root 92 6月 28 2017 rbdmap
[root@client ~]#
[root@client ~]#
[root@client ~]# rbd map image
/dev/rbd0
[root@client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
rbd0 251:0 0 15G 0 disk
[root@client ~]#
[root@client ~]# rbd showmapped
id pool image snap device
0 rbd image - /dev/rbd0
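Incidentally, the copied ceph.conf plus admin keyring let the client run any cluster command, which makes a quick sanity check possible whenever rbd map fails; a sketch:
[root@client ~]# ceph -s    # should print the same cluster fsid and HEALTH_OK seen on the nodes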

3) Format and mount the partition on the client
[root@client ~]# mkfs.xfs /dev/rbd0
[root@client ~]# mount /dev/rbd0 /mnt/
[root@client ~]# echo "test" > /mnt/test.txt
Transcript:
[root@client ~]# mkfs.xfs /dev/rbd0
meta-data=/dev/rbd0 isize=512 agcount=17, agsize=244736 blks
= sectsz=512 attr=2, projid32bit=1
= crc=1 finobt=0, sparse=0
data = bsize=4096 blocks=3932160, imaxpct=25
= sunit=1024 swidth=1024 blks
naming =version 2 bsize=4096 ascii-ci=0 ftype=1
log =internal log bsize=4096 blocks=2560, version=2
= sectsz=512 sunit=8 blks, lazy-count=1
realtime =none extsz=4096 blocks=0, rtextents=0
[root@client ~]# mount /dev/rbd0 /mnt/
[root@client ~]# echo "test" > /mnt/test.txt
[root@client ~]# cat /mnt/test.txt
test
[root@client ~]#

Step 4: Create image snapshots

1) List the image's snapshots
[root@node1 ~]# rbd snap ls image
2) Create an image snapshot
[root@node1 ~]# rbd snap create image --snap image-snap1
[root@node1 ~]# rbd snap ls image
SNAPID NAME SIZE
4 image-snap1 15360 MB
3) Delete the test file written by the client
[root@client ~]# rm -rf /mnt/test.txt
4) Roll back to the snapshot
[root@node1 ~]# rbd snap rollback image --snap image-snap1
# Remount the partition on the client
[root@client ~]# umount /mnt
[root@client ~]# mount /dev/rbd0 /mnt/
[root@client ~]# ls /mnt

Transcript:
First confirm the current snapshot list:
[root@node1 ceph-cluster]# rbd snap ls image
Create the snapshot:
[root@node1 ceph-cluster]# rbd snap create image --snap image-snap1
Check again:
[root@node1 ceph-cluster]# rbd snap ls image
SNAPID NAME SIZE
4 image-snap1 15360 MB
On the client, first delete the test.txt created a moment ago:
[root@client ~]# rm -rf /mnt/test.txt
[root@client ~]# ll /mnt/
总用量 0
Then roll the snapshot back on node1:
[root@node1 ceph-cluster]# rbd snap rollback image --snap image-snap1
Rolling back to snapshot: 100% complete...done.
Then unmount /mnt on the client and remount to verify:
[root@client ~]# umount /mnt/
[root@client ~]# lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
vda 252:0 0 20G 0 disk
├─vda1 252:1 0 1G 0 part /boot
└─vda2 252:2 0 19G 0 part
├─rhel-root 253:0 0 17G 0 lvm /
└─rhel-swap 253:1 0 2G 0 lvm [SWAP]
rbd0 251:0 0 15G 0 disk
[root@client ~]# mount /dev/rbd0 /mnt
[root@client ~]# ll /mnt/
总用量 4
-rw-r--r--. 1 root root 5 10月 11 14:27 test.txt
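The order used above (roll back first, then remount) happens to work here, but the safer habit is to stop all I/O before rolling back, so the client kernel holds no stale cached data; a sketch of the recommended sequence:
[root@client ~]# umount /mnt    # quiesce the image first
[root@node1 ceph-cluster]# rbd snap rollback image --snap image-snap1
[root@client ~]# mount /dev/rbd0 /mnt    # remount; the restored test.txt is visible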

Step 5: Clone from a snapshot

1) Clone the snapshot
[root@node1 ~]# rbd snap protect image --snap image-snap1
[root@node1 ~]# rbd snap rm image --snap image-snap1 // this will fail
[root@node1 ~]# rbd clone \
image --snap image-snap1 image-clone --image-feature layering
// clone a new image named image-clone from image's snapshot image-snap1
2) Inspect the relationship between the clone and the parent snapshot
[root@node1 ~]# rbd info image-clone
rbd image 'image-clone':
size 15360 MB in 3840 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.d3f53d1b58ba
format: 2
features: layering
flags:
parent: rbd/image@image-snap1
# Most of the clone's data is still served from the parent snapshot chain
# For the clone to work independently, all data must first be copied out of the parent snapshot, which is time-consuming!
[root@node1 ~]# rbd flatten image-clone
[root@node1 ~]# rbd info image-clone
rbd image 'image-clone':
size 15360 MB in 3840 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.d3f53d1b58ba
format: 2
features: layering
flags:
# Note: the parent snapshot information is gone!

Transcript:
First protect the snapshot image-snap1:
[root@node1 ceph-cluster]# rbd snap protect image --snap image-snap1
[root@node1 ceph-cluster]# rbd snap rm image --snap image-snap1
rbd: snapshot 'image-snap1' is protected from removal.
2018-10-11 14:40:14.728450 7f9f5fca9d80 -1 librbd::Operations: snapshot is protected

Then clone the snapshot:
[root@node1 ceph-cluster]# rbd clone image --snap image-snap1 image-clone --image-feature layering

Then inspect the clone; its parent is the snapshot image-snap1:
[root@node1 ceph-cluster]# rbd info image-clone
rbd image 'image-clone':
size 15360 MB in 3840 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.1033238e1f29
format: 2
features: layering
flags:
parent: rbd/image@image-snap1
overlap: 15360 MB

To make the clone work independently, the parent snapshot must be copied in full, which is very time-consuming:
[root@node1 ceph-cluster]# rbd flatten image-clone
Image flatten: 100% complete...done.

Check the image info again; the parent snapshot information is gone:
[root@node1 ceph-cluster]# rbd info image-clone
rbd image 'image-clone':
size 15360 MB in 3840 objects
order 22 (4096 kB objects)
block_name_prefix: rbd_data.1033238e1f29
format: 2
features: layering
flags:
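A protected parent snapshot can only be removed after every clone based on it has been flattened or deleted, and the snapshot has been unprotected again. A sketch of checking for remaining children and unprotecting:
[root@node1 ceph-cluster]# rbd children rbd/image@image-snap1    # lists clones still backed by this snapshot
[root@node1 ceph-cluster]# rbd snap unprotect image --snap image-snap1    # allowed once no children remain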

Step 6: Other operations

1) Unmap the disk on the client
[root@client ~]# umount /mnt
[root@client ~]# rbd showmapped
id pool image snap device
0 rbd image - /dev/rbd0
// syntax:
[root@client ~]# rbd unmap /dev/rbd/{poolname}/{imagename}
[root@client ~]# rbd unmap /dev/rbd/rbd/image
2) Delete snapshots and images
[root@node1 ~]# rbd snap rm image --snap image-snap1 // unprotect the snapshot first if it is still protected
[root@node1 ~]# rbd list
[root@node1 ~]# rbd rm image
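If an image has accumulated several snapshots, they can be removed one at a time as above, or all in one go; a sketch (purge fails on snapshots that are still protected, so unprotect those first):
[root@node1 ~]# rbd snap purge image    # removes every snapshot of the image
[root@node1 ~]# rbd rm image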
