This article explains how to add and remove Ceph monitors. The steps are laid out simply and are easy to follow; work through them to see both the ceph-deploy and the fully manual procedures.
1. Environment preparation
1.1. Existing environment
We start from an existing three-node Ceph cluster with 3 mons and will add a fourth mon.
# ceph -s
cluster 520d715f-adb5-4a6a-afb2-dcf586308166
health HEALTH_OK
monmap e3: 3 mons at {hadoop001=10.10.1.32:6789/0,hadoop002=10.10.1.33:6789/0,hadoop003=10.10.1.34:6789/0}
election epoch 1850, quorum 0,1,2 hadoop001,hadoop002,hadoop003
osdmap e127: 4 osds: 4 up, 4 in
flags sortbitwise
pgmap v22405: 64 pgs, 1 pools, 0 bytes data, 0 objects
145 MB used, 334 GB / 334 GB avail
64 active+clean
1.2. System environment
The new mon node's system environment must be configured to match the existing nodes. Only a short checklist is given here, without going into detail; a rough sketch of these steps follows the list:
hostname, /etc/hosts, passwordless ssh between nodes, firewall, time synchronization, SELinux, max processes, open file handles, max threads, the ceph yum repository
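A rough sketch of those steps on the new node, assuming CentOS 7 (as in the deployment logs below) and that the new node is hadoop004 at 10.10.1.36; adapt hostnames, addresses, time-sync daemon, and repo source to your own environment:
# hostnamectl set-hostname hadoop004
# echo "10.10.1.36 hadoop004" >> /etc/hosts   # add the full set of cluster entries on every node
# ssh-copy-id hadoop004   # run from the deploy node to enable passwordless ssh
# systemctl stop firewalld && systemctl disable firewalld
# setenforce 0   # and set SELINUX=disabled in /etc/selinux/config
# systemctl enable ntpd && systemctl start ntpd   # or chronyd, whichever the existing nodes use
# ulimit -u; ulimit -n   # verify the nproc/nofile limits match the other nodes
# scp hadoop001:/etc/yum.repos.d/ceph.repo /etc/yum.repos.d/   # reuse the existing ceph yum repo definition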
2. Using ceph-deploy
2.1. Adding a mon with ceph-deploy
Once the system environment is configured, install the ceph packages on the new mon node:
# yum install ceph
On an existing mon node, use ceph-deploy to create the new mon directly.
Note: public_network must be set in the configuration file, otherwise the add may fail; a hedged example is shown below.
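For reference, the relevant part of ceph.conf might look like the following (the fsid is this cluster's; the /24 mask is an assumption, so use your actual subnet):
[global]
fsid = 520d715f-adb5-4a6a-afb2-dcf586308166
public_network = 10.10.1.0/24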
# ceph-deploy mon create hadoop004
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy mon create hadoop004
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : create
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x2872cb0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] mon : ['hadoop004']
[ceph_deploy.cli][INFO ] func : <function mon at 0x27ff758>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.cli][INFO ] keyrings : None
[ceph_deploy.mon][DEBUG ] Deploying mon, cluster ceph hosts hadoop004
[ceph_deploy.mon][DEBUG ] detecting platform for host hadoop004 ...
[hadoop004][DEBUG ] connected to host: hadoop004
[hadoop004][DEBUG ] detect platform information from remote host
[hadoop004][DEBUG ] detect machine type
[hadoop004][DEBUG ] find the location of an executable
[ceph_deploy.mon][INFO ] distro info: CentOS Linux 7.2.1511 Core
[hadoop004][DEBUG ] determining if provided host has same hostname in remote
[hadoop004][DEBUG ] get remote short hostname
[hadoop004][DEBUG ] deploying mon to hadoop004
[hadoop004][DEBUG ] get remote short hostname
[hadoop004][DEBUG ] remote hostname: hadoop004
[hadoop004][DEBUG ] write cluster configuration to /etc/ceph/{cluster}.conf
[hadoop004][DEBUG ] create the mon path if it does not exist
[hadoop004][DEBUG ] checking for done path: /var/lib/ceph/mon/ceph-hadoop004/done
[hadoop004][DEBUG ] done path does not exist: /var/lib/ceph/mon/ceph-hadoop004/done
[hadoop004][INFO ] creating keyring file: /var/lib/ceph/tmp/ceph-hadoop004.mon.keyring
[hadoop004][DEBUG ] create the monitor keyring file
[hadoop004][INFO ] Running command: ceph-mon --cluster ceph --mkfs -i hadoop004 --keyring /var/lib/ceph/tmp/ceph-hadoop004.mon.keyring --setuser 167 --setgroup 167
[hadoop004][DEBUG ] ceph-mon: renaming mon.noname-d 10.10.1.36:6789/0 to mon.hadoop004
[hadoop004][DEBUG ] ceph-mon: set fsid to 520d715f-adb5-4a6a-afb2-dcf586308166
[hadoop004][DEBUG ] ceph-mon: created monfs at /var/lib/ceph/mon/ceph-hadoop004 for mon.hadoop004
[hadoop004][INFO ] unlinking keyring file /var/lib/ceph/tmp/ceph-hadoop004.mon.keyring
[hadoop004][DEBUG ] create a done file to avoid re-doing the mon deployment
[hadoop004][DEBUG ] create the init path if it does not exist
[hadoop004][INFO ] Running command: systemctl enable ceph.target
[hadoop004][INFO ] Running command: systemctl enable ceph-mon@hadoop004
[hadoop004][INFO ] Running command: systemctl start ceph-mon@hadoop004
[hadoop004][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.hadoop004.asok mon_status
[hadoop004][DEBUG ] ********************************************************************************
[hadoop004][DEBUG ] status for monitor: mon.hadoop004
[hadoop004][DEBUG ] {
[hadoop004][DEBUG ] "election_epoch": 0,
[hadoop004][DEBUG ] "extra_probe_peers": [
[hadoop004][DEBUG ] "10.10.1.32:6789/0",
[hadoop004][DEBUG ] "10.10.1.33:6789/0",
[hadoop004][DEBUG ] "10.10.1.34:6789/0"
[hadoop004][DEBUG ] ],
[hadoop004][DEBUG ] "monmap": {
[hadoop004][DEBUG ] "created": "2016-12-19 09:59:12.970500",
[hadoop004][DEBUG ] "epoch": 3,
[hadoop004][DEBUG ] "fsid": "520d715f-adb5-4a6a-afb2-dcf586308166",
[hadoop004][DEBUG ] "modified": "2017-08-02 17:22:40.247484",
[hadoop004][DEBUG ] "mons": [
[hadoop004][DEBUG ] {
[hadoop004][DEBUG ] "addr": "10.10.1.32:6789/0",
[hadoop004][DEBUG ] "name": "hadoop001",
[hadoop004][DEBUG ] "rank": 0
[hadoop004][DEBUG ] },
[hadoop004][DEBUG ] {
[hadoop004][DEBUG ] "addr": "10.10.1.33:6789/0",
[hadoop004][DEBUG ] "name": "hadoop002",
[hadoop004][DEBUG ] "rank": 1
[hadoop004][DEBUG ] },
[hadoop004][DEBUG ] {
[hadoop004][DEBUG ] "addr": "10.10.1.34:6789/0",
[hadoop004][DEBUG ] "name": "hadoop003",
[hadoop004][DEBUG ] "rank": 2
[hadoop004][DEBUG ] }
[hadoop004][DEBUG ] ]
[hadoop004][DEBUG ] },
[hadoop004][DEBUG ] "name": "hadoop004",
[hadoop004][DEBUG ] "outside_quorum": [],
[hadoop004][DEBUG ] "quorum": [],
[hadoop004][DEBUG ] "rank": -1,
[hadoop004][DEBUG ] "state": "probing",
[hadoop004][DEBUG ] "sync_provider": []
[hadoop004][DEBUG ] }
[hadoop004][DEBUG ] ********************************************************************************
[hadoop004][INFO ] monitor: mon.hadoop004 is currently at the state of probing
[hadoop004][INFO ] Running command: ceph --cluster=ceph --admin-daemon /var/run/ceph/ceph-mon.hadoop004.asok mon_status
[hadoop004][WARNIN] monitor hadoop004 does not exist in monmap
Although the ceph-deploy output ends with the new mon still probing and a warning that it is not yet in the monmap, it joins the quorum shortly afterwards. Check the cluster status to confirm the add succeeded:
# ceph -s
cluster 520d715f-adb5-4a6a-afb2-dcf586308166
health HEALTH_OK
monmap e4: 4 mons at {hadoop001=10.10.1.32:6789/0,hadoop002=10.10.1.33:6789/0,hadoop003=10.10.1.34:6789/0,hadoop004=10.10.1.36:6789/0}
election epoch 1850, quorum 0,1,2,3 hadoop001,hadoop002,hadoop003,hadoop004
osdmap e127: 4 osds: 4 up, 4 in
flags sortbitwise
pgmap v22405: 64 pgs, 1 pools, 0 bytes data, 0 objects
145 MB used, 334 GB / 334 GB avail
64 active+clean
After the add completes, append the new mon node's hostname and IP address to the mon_initial_members and mon_host parameters in ceph.conf; a hedged example follows.
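For example, with the hostnames and addresses used in this article (check the separators against the existing values in your own ceph.conf):
mon_initial_members = hadoop001, hadoop002, hadoop003, hadoop004
mon_host = 10.10.1.32,10.10.1.33,10.10.1.34,10.10.1.36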
2.2. Removing a mon with ceph-deploy
# ceph-deploy mon destroy hadoop004
[ceph_deploy.conf][DEBUG ] found configuration file at: /root/.cephdeploy.conf
[ceph_deploy.cli][INFO ] Invoked (1.5.34): /usr/bin/ceph-deploy mon destroy hadoop004
[ceph_deploy.cli][INFO ] ceph-deploy options:
[ceph_deploy.cli][INFO ] username : None
[ceph_deploy.cli][INFO ] verbose : False
[ceph_deploy.cli][INFO ] overwrite_conf : False
[ceph_deploy.cli][INFO ] subcommand : destroy
[ceph_deploy.cli][INFO ] quiet : False
[ceph_deploy.cli][INFO ] cd_conf : <ceph_deploy.conf.cephdeploy.Conf instance at 0x19f1cb0>
[ceph_deploy.cli][INFO ] cluster : ceph
[ceph_deploy.cli][INFO ] mon : ['hadoop004']
[ceph_deploy.cli][INFO ] func : <function mon at 0x197e758>
[ceph_deploy.cli][INFO ] ceph_conf : None
[ceph_deploy.cli][INFO ] default_release : False
[ceph_deploy.mon][DEBUG ] Removing mon from hadoop004
[hadoop004][DEBUG ] connected to host: hadoop004
[hadoop004][DEBUG ] detect platform information from remote host
[hadoop004][DEBUG ] detect machine type
[hadoop004][DEBUG ] find the location of an executable
[hadoop004][DEBUG ] get remote short hostname
[hadoop004][INFO ] Running command: ceph --cluster=ceph -n mon. -k /var/lib/ceph/mon/ceph-hadoop004/keyring mon remove hadoop004
[hadoop004][WARNIN] Error EINVAL: removing mon.hadoop004 at 10.10.1.36:6789/0, there will be 3 monitors
[hadoop004][INFO ] polling the daemon to verify it stopped
[hadoop004][INFO ] Running command: systemctl stop ceph-mon@hadoop004.service
[hadoop004][INFO ] Running command: mkdir -p /var/lib/ceph/mon-removed
[hadoop004][DEBUG ] move old monitor data
For a thorough cleanup, proceed with caution:
Note: this deletes all Ceph data, configuration files, and rpm packages on the mon node hadoop004.
# ceph-deploy purge hadoop004
If anything is left behind, you can also remove the leftover directories on hadoop004:
# rm -rf /var/lib/ceph
# rm -rf /var/run/ceph/*
3. Manual operation
The mon on hadoop004 was completely cleaned up in the previous section.
3.1. Adding a mon manually
Install the ceph packages on hadoop004 (the mon data directory itself is created further below):
[root@hadoop004 ~]# yum install ceph
On hadoop001, copy ceph.conf and the client admin keyring to /etc/ceph on hadoop004:
# scp ceph.conf ceph.client.admin.keyring hadoop004:/etc/ceph/
On hadoop004, fetch the mon keyring:
# mkdir dlw
# cd dlw/
# ceph auth get mon. -o keying
exported keyring for mon.
Fetch the monitor map:
# ceph mon getmap -o monmap
got monmap epoch 5
Create the monitor data directory:
# ceph-mon -i hadoop004 --mkfs --monmap monmap --keyring keying
ceph-mon: set fsid to 520d715f-adb5-4a6a-afb2-dcf586308166
ceph-mon: created monfs at /var/lib/ceph/mon/ceph-hadoop004 for mon.hadoop004
Start the new monitor:
# ceph-mon -i hadoop004 --public-addr 10.10.1.36:6789
Check the status:
# ceph -s
cluster 520d715f-adb5-4a6a-afb2-dcf586308166
health HEALTH_OK
monmap e6: 4 mons at {hadoop001=10.10.1.32:6789/0,hadoop002=10.10.1.33:6789/0,hadoop003=10.10.1.34:6789/0,hadoop004=10.10.1.36:6789/0}
election epoch 1854, quorum 0,1,2,3 hadoop001,hadoop002,hadoop003,hadoop004
osdmap e127: 4 osds: 4 up, 4 in
flags sortbitwise
pgmap v22405: 64 pgs, 1 pools, 0 bytes data, 0 objects
145 MB used, 334 GB / 334 GB avail
64 active+clean
The cluster now has four mons, but we are not done yet. Ceph's strength is its self-healing; we should not have to run ceph-mon by hand every time the new mon needs to start.
Set up the ceph-mon@hadoop004 service.
First find the mon process we just started and kill it:
# ps -ef |grep ceph
root 30514 1 0 18:25 pts/1 00:00:00 ceph-mon -i hadoop004 --public-addr 10.10.1.36:6789
root 30899 9739 0 18:30 pts/1 00:00:00 grep --color=auto ceph
# kill 30514
Add the new mon node's hostname and IP address to the mon_initial_members and mon_host parameters in ceph.conf, as in section 2.1, and distribute the updated file to all nodes; a sketch follows.
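A minimal sketch of pushing the updated ceph.conf out from hadoop001, assuming that is where it was edited; either plain scp or ceph-deploy's config push works:
# for h in hadoop002 hadoop003 hadoop004; do scp /etc/ceph/ceph.conf $h:/etc/ceph/; done
or, using ceph-deploy:
# ceph-deploy --overwrite-conf config push hadoop001 hadoop002 hadoop003 hadoop004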
Before starting the service, change the ownership of the mon data directory to the ceph user:
# cd /var/lib/ceph/mon
# chown -R ceph:ceph ceph-hadoop004/
Start the mon service:
# systemctl reset-failed ceph-mon@hadoop004.service
# systemctl restart ceph-mon@`hostname`
# systemctl enable ceph-mon@`hostname`
# systemctl restart ceph-mon.target
# systemctl status ceph-mon@`hostname`
● ceph-mon@hadoop004.service - Ceph cluster monitor daemon
Loaded: loaded (/usr/lib/systemd/system/ceph-mon@.service; enabled; vendor preset: disabled)
Active: active (running) since Wed 2017-08-02 18:37:36 CST; 3s ago
Main PID: 31115 (ceph-mon)
CGroup: /system.slice/system-ceph\x2dmon.slice/ceph-mon@hadoop004.service
└─31115 /usr/bin/ceph-mon -f --cluster ceph --id hadoop004 --setuser ceph --setgroup ceph
Aug 02 18:37:36 hadoop004 systemd[1]: Started Ceph cluster monitor daemon.
Aug 02 18:37:36 hadoop004 systemd[1]: Starting Ceph cluster monitor daemon...
Aug 02 18:37:36 hadoop004 ceph-mon[31115]: starting mon.hadoop004 rank 3 at 10.10.1.36:6789/0 mon_data /var/lib/ceph/mon/ceph-hadoop004 fsid 520d715f-adb5-4a6a-afb2-dcf586308166
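At this point the monitor should rejoin the quorum on its own. A quick check from any node (this check is not part of the original transcript, so your output will differ):
# ceph mon stat
# ceph quorum_status -f json-pretty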
3.2. Removing a mon manually
# ceph mon remove hadoop004
Error EINVAL: removing mon.hadoop004 at 10.10.1.36:6789/0, there will be 3 monitors
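Removing the entry from the monmap is only part of the job; the daemon and configuration set up in section 3.1 also need to go. A sketch of the remaining steps, mirroring what was created above:
# systemctl stop ceph-mon@hadoop004
# systemctl disable ceph-mon@hadoop004
Then remove hadoop004 and 10.10.1.36 from the mon_initial_members and mon_host parameters in ceph.conf on all nodes.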
Clean up the data directories and uninstall the packages:
# rm -rf /var/lib/ceph
# rm -rf /var/run/ceph/*
# yum remove ceph
4. Useful commands
Check the cluster's mon quorum and election status:
# ceph quorum_status -f json-pretty
Extract the monmap from a mon's data store (stop that mon first):
# ceph-mon -i `hostname` --extract-monmap /opt/monmap
View the monmap:
# monmaptool --print /opt/monmap
Add a mon to the monmap:
# monmaptool /opt/monmap --add hadoop004 10.10.1.36:6789
Remove a mon from the monmap:
# monmaptool /opt/monmap --rm hadoop004
Inject the monmap (all mons must be stopped before injecting):
# systemctl stop ceph-mon@`hostname`
# ceph-mon -i `hostname` --inject-monmap /opt/monmap
Thanks for reading. That covers how to add and remove Ceph monitors; hopefully the walkthrough gives you a solid grasp of the procedure, but verify the steps in your own environment before relying on them.