ceph-deploy source code analysis: the new module
The new.py module of ceph-deploy starts the deployment of a new cluster and creates the ceph.conf and ceph.mon.keyring files.
The new subcommand has the following usage:
ceph-deploy new [-h] [--no-ssh-copykey] [--fsid FSID]
                [--cluster-network CLUSTER_NETWORK]
                [--public-network PUBLIC_NETWORK]
                MON [MON ...]
Deploying the cluster
The make function is registered with a priority of 10, and the default handler function for the subcommand is set to new.
@priority(10)
def make(parser):
    """
    Start deploying a new cluster, and write a CLUSTER.conf and keyring for it.
    """
    parser.add_argument(
        'mon',
        metavar='MON',
        nargs='+',
        help='initial monitor hostname, fqdn, or hostname:fqdn pair',
        type=arg_validators.Hostname(),
    )
    parser.add_argument(
        '--no-ssh-copykey',
        dest='ssh_copykey',
        action='store_false',
        default=True,
        help='do not attempt to copy SSH keys',
    )
    parser.add_argument(
        '--fsid',
        dest='fsid',
        help='provide an alternate FSID for ceph.conf generation',
    )
    parser.add_argument(
        '--cluster-network',
        help='specify the (internal) cluster network',
        type=arg_validators.Subnet(),
    )
    parser.add_argument(
        '--public-network',
        help='specify the public network for a cluster',
        type=arg_validators.Subnet(),
    )
    parser.set_defaults(
        func=new,
    )
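set_defaults(func=new) is the standard argparse dispatch pattern: after the arguments are parsed, the CLI simply calls args.func(args), which here lands in the new function. A minimal, self-contained sketch of that pattern (the parser and handler below are illustrative, not ceph-deploy's actual setup):

import argparse

def new(args):
    # handler invoked for the "new" subcommand
    print('deploying monitors:', args.mon)

parser = argparse.ArgumentParser(prog='ceph-deploy')
subparsers = parser.add_subparsers()
new_parser = subparsers.add_parser('new')
new_parser.add_argument('mon', nargs='+')
new_parser.set_defaults(func=new)

args = parser.parse_args(['new', 'ceph-231'])
args.func(args)  # dispatches to new(args)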
Deploying a new cluster
The new function starts the deployment of a new cluster:
It creates the ceph.conf file and writes fsid, mon_initial_members, mon_host, auth_cluster_required, auth_service_required and auth_client_required to the [global] section; if public_network or cluster_network was given on the command line, it is written to the configuration file as well.
It then calls the new_mon_keyring function to create the ceph.mon.keyring file.
def new(args):
    if args.ceph_conf:
        raise RuntimeError('will not create a Ceph conf file if attempting to re-use with `--ceph-conf` flag')
    LOG.debug('Creating new cluster named %s', args.cluster)
    # generate the configuration
    cfg = conf.ceph.CephConf()
    cfg.add_section('global')
    # use the fsid passed on the command line, or generate one automatically
    fsid = args.fsid or uuid.uuid4()
    cfg.set('global', 'fsid', str(fsid))

    # if networks were passed in, lets set them in the
    # global section
    if args.public_network:
        cfg.set('global', 'public network', str(args.public_network))
    if args.cluster_network:
        cfg.set('global', 'cluster network', str(args.cluster_network))

    # monitor names
    mon_initial_members = []
    # monitor addresses
    mon_host = []

    # loop over the monitor hosts
    for (name, host) in mon_hosts(args.mon):
        # Try to ensure we can ssh in properly before anything else
        # copy the ssh key to the host
        if args.ssh_copykey:
            ssh_copy_keys(host, args.username)

        # Now get the non-local IPs from the remote node
        # connect to the remote host
        distro = hosts.get(host, username=args.username)
        # get the host's IP addresses
        remote_ips = net.ip_addresses(distro.conn)

        # custom cluster names on sysvinit hosts won't work
        if distro.init == 'sysvinit' and args.cluster != 'ceph':
            LOG.error('custom cluster names are not supported on sysvinit hosts')
            raise exc.ClusterNameError(
                'host %s does not support custom cluster names' % host
            )

        distro.conn.exit()

        # Validate subnets if we received any
        if args.public_network or args.cluster_network:
            # validate the host IPs against the given subnets
            validate_host_ip(remote_ips, [args.public_network, args.cluster_network])

        # Pick the IP that matches the public cluster (if we were told to do
        # so) otherwise pick the first, non-local IP
        LOG.debug('Resolving host %s', host)
        if args.public_network:
            ip = get_public_network_ip(remote_ips, args.public_network)
        else:
            ip = net.get_nonlocal_ip(host)
        LOG.debug('Monitor %s at %s', name, ip)
        mon_initial_members.append(name)
        try:
            socket.inet_pton(socket.AF_INET6, ip)
            mon_host.append("[" + ip + "]")
            LOG.info('Monitors are IPv6, binding Messenger traffic on IPv6')
            cfg.set('global', 'ms bind ipv6', 'true')
        except socket.error:
            mon_host.append(ip)

    LOG.debug('Monitor initial members are %s', mon_initial_members)
    LOG.debug('Monitor addrs are %s', mon_host)

    # multiple mon_initial_members entries are joined with a comma and a space
    cfg.set('global', 'mon initial members', ', '.join(mon_initial_members))
    # no spaces here, see http://tracker.newdream.net/issues/3145
    # multiple mon_host entries are joined with commas only, no spaces
    cfg.set('global', 'mon host', ','.join(mon_host))

    # override undesirable defaults, needed until bobtail
    # http://tracker.ceph.com/issues/6788
    cfg.set('global', 'auth cluster required', 'cephx')
    cfg.set('global', 'auth service required', 'cephx')
    cfg.set('global', 'auth client required', 'cephx')

    path = '{name}.conf'.format(
        name=args.cluster,
    )

    # create the mon keyring
    new_mon_keyring(args)

    LOG.debug('Writing initial config to %s...', path)
    tmp = '%s.tmp' % path
    with open(tmp, 'w') as f:
        # write out the ceph configuration file
        cfg.write(f)
    try:
        os.rename(tmp, path)
    except OSError as e:
        if e.errno == errno.EEXIST:
            raise exc.ClusterExistsError(path)
        else:
            raise
Note:
If mon_initial_members has more than one entry, the entries are joined with a comma followed by a space.
If mon_host has more than one entry, the entries are joined with commas only, with no spaces.
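A quick illustration of the two join calls (the host names and addresses below are made-up example values):

mon_initial_members = ['ceph-231', 'ceph-232', 'ceph-233']
mon_host = ['192.168.217.231', '192.168.217.232', '192.168.217.233']

# joined with a comma and a space
print(', '.join(mon_initial_members))  # ceph-231, ceph-232, ceph-233
# joined with commas only, no spaces (see http://tracker.newdream.net/issues/3145)
print(','.join(mon_host))              # 192.168.217.231,192.168.217.232,192.168.217.233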
Creating the ceph.mon.keyring file
The new_mon_keyring function creates the ceph.mon.keyring file.
def new_mon_keyring(args):
    LOG.debug('Creating a random mon key...')
    mon_keyring = '[mon.]\nkey = %s\ncaps mon = allow *\n' % generate_auth_key()

    keypath = '{name}.mon.keyring'.format(
        name=args.cluster,
    )
    oldmask = os.umask(0o77)
    LOG.debug('Writing monitor keyring to %s...', keypath)
    try:
        tmp = '%s.tmp' % keypath
        with open(tmp, 'w', 0o600) as f:
            f.write(mon_keyring)
        try:
            os.rename(tmp, keypath)
        except OSError as e:
            if e.errno == errno.EEXIST:
                raise exc.ClusterExistsError(keypath)
            else:
                raise
    finally:
        os.umask(oldmask)
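The key string written into the keyring comes from generate_auth_key(). A minimal sketch of how such a cephx secret can be produced, assuming the usual format of 16 random bytes prefixed by a small binary header and base64-encoded (the exact implementation lives in ceph-deploy's own helper):

import base64
import os
import struct
import time

def generate_auth_key():
    # 16 random bytes form the secret itself
    key = os.urandom(16)
    # small header: le16 type, le32 created seconds, le32 created nanoseconds, le16 key length
    header = struct.pack(
        '<hiih',
        1,                  # key type
        int(time.time()),   # created: seconds
        0,                  # created: nanoseconds
        len(key),           # key length
    )
    return base64.b64encode(header + key).decode('utf-8')

print(generate_auth_key())  # a string of the same shape as the key shown below, e.g. AQ...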
Manually deploying a cluster
Taking ceph-deploy new ceph-231 as an example of deploying a cluster with ceph-deploy, the corresponding manual steps are shown below.
Obtaining the IP address
Run the following commands; ceph-deploy extracts the IP address 192.168.217.231 from the output with a regular expression.
[root@ceph-231 ceph-cluster]# /usr/sbin/ip link show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP mode DEFAULT qlen 1000
link/ether 02:03:e7:fc:dc:36 brd ff:ff:ff:ff:ff:ff
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN mode DEFAULT
link/ether 86:f4:14:e3:1b:b2 brd ff:ff:ff:ff:ff:ff
4: xenbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN mode DEFAULT
link/ether 02:03:e7:fc:dc:36 brd ff:ff:ff:ff:ff:ff
[root@ceph-231 ceph-cluster]# /usr/sbin/ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast master ovs-system state UP qlen 1000
link/ether 02:03:e7:fc:dc:36 brd ff:ff:ff:ff:ff:ff
3: ovs-system: <BROADCAST,MULTICAST> mtu 1500 qdisc noop state DOWN
link/ether 86:f4:14:e3:1b:b2 brd ff:ff:ff:ff:ff:ff
4: xenbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UNKNOWN
link/ether 02:03:e7:fc:dc:36 brd ff:ff:ff:ff:ff:ff
inet 192.168.217.231/24 brd 192.168.217.255 scope global xenbr0
valid_lft forever preferred_lft forever
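A minimal sketch of the kind of parsing the net module applies to this output, pulling non-loopback IPv4 addresses out with a regular expression (the function name and regex here are illustrative, not ceph-deploy's exact code):

import re

def parse_ip_addresses(ip_addr_output):
    # grab the address part of lines such as "inet 192.168.217.231/24 ..."
    ips = re.findall(r'inet (\d+\.\d+\.\d+\.\d+)/\d+', ip_addr_output)
    # drop loopback addresses, keeping only usable non-local IPs
    return [ip for ip in ips if not ip.startswith('127.')]

sample = '''
    inet 127.0.0.1/8 scope host lo
    inet 192.168.217.231/24 brd 192.168.217.255 scope global xenbr0
'''
print(parse_ip_addresses(sample))  # ['192.168.217.231']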
Creating ceph.conf
[root@ceph-231 ceph-cluster]# vi ceph.conf
[global]
fsid = a3b9b0aa-01ab-4e1b-bba3-6f5317b0795b
mon_initial_members = ceph-231
mon_host = 192.168.217.231
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx
public_network = 192.168.217.0/24
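The fsid must be a UUID. When writing ceph.conf by hand it can be generated the same way new() does, with Python's uuid module:

import uuid

# random cluster fsid, equivalent to the uuid.uuid4() call in new()
print(uuid.uuid4())  # e.g. a3b9b0aa-01ab-4e1b-bba3-6f5317b0795b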
Creating ceph.mon.keyring
The keyring can be generated with the ceph-authtool command:
[root@ceph-231 ceph-cluster]# ceph-authtool --create-keyring /tmp/ceph.mon.keyring --gen-key -n mon. --cap mon 'allow *'
creating /tmp/ceph.mon.keyring
[root@ceph-231 ~]# cat /tmp/ceph.mon.keyring
[mon.]
key = AQCzxEhZC7tICxAAuHK5GipD96enMuhv82CCLg==
caps mon = "allow *"
Copy the contents of /tmp/ceph.mon.keyring into ceph.mon.keyring:
[root@ceph-231 ceph-cluster]# vi ceph.mon.keyring
[mon.]
key = AQCzxEhZC7tICxAAuHK5GipD96enMuhv82CCLg==
caps mon = "allow *"