These notes walk through the main loop of the CTDB recovery daemon: the checks it makes on every pass, and how it falls into force_election, do_recovery, and the IP takeover logic.
main_loop
kill -0: check the main ctdbd daemon is still running
ping the local daemon
if an election is still in progress (election timeout pending), return
get debug_level
get relevant tunables
get runstate
get recovery lock file from the server
get nodemap
check our node flags
if we have banned ourselves, return
if we are stopped or banned, make sure we are frozen, then return
Retrieve capabilities from all connected nodes
validate_recovery_master --> force_election
verify our public IPs {
ips.pnn == self but we do not hold the IP
ips.pnn != self but we do hold the IP
} on a mismatch, ask the recmaster for a takeover_run
everything below runs only on the recmaster
verify the node flags are consistent across the cluster
verify all active nodes agree that we are recmaster, else force_election; then get the vnnmap
need recovery --> do_recovery
verify no active node is still in recovery mode, else --> do_recovery
verify we hold the recovery lock, else --> do_recovery
get nodemaps from all remote nodes; on failure or mismatch --> do_recovery
count the nodes with the lmaster capability (num_lmasters)
vnnmap->size != num_lmasters --> do_recovery
an active node in the nodemap is missing from the vnnmap --> do_recovery
verify all nodes have the same vnnmap, else --> do_recovery
if need_takeover_run --> do_takeover_run
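The overall shape of this loop is easier to see in code. Below is a minimal, self-contained C sketch; every helper (we_are_recmaster, nodes_agree_we_are_recmaster, and so on) is an illustrative stub, not the real ctdbd API.

```c
#include <signal.h>
#include <stdbool.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

static pid_t ctdbd_pid;                  /* pid of the main ctdbd daemon */
static bool election_in_progress = false;

/* illustrative stubs standing in for the real cluster queries */
static bool we_are_recmaster(void)              { return true; }
static bool nodes_agree_we_are_recmaster(void)  { return true; }
static bool cluster_needs_recovery(void)        { return false; }

static void force_election(void) { puts("force_election"); }
static void do_recovery(void)    { puts("do_recovery"); }

static void main_loop_once(void)
{
        /* "kill -0" sends no signal; it only tests that the pid exists */
        if (kill(ctdbd_pid, 0) != 0) {
                fprintf(stderr, "ctdbd has died, exiting\n");
                exit(1);
        }

        if (election_in_progress) {
                return;         /* wait for the election to settle */
        }

        /* ... fetch tunables, runstate, nodemap, capabilities ... */

        if (!we_are_recmaster()) {
                return;         /* everything below is recmaster-only */
        }

        if (!nodes_agree_we_are_recmaster()) {
                force_election();
                return;
        }

        /* every consistency-check failure funnels into do_recovery() */
        if (cluster_needs_recovery()) {
                do_recovery();
        }
}

int main(void)
{
        ctdbd_pid = getpid();   /* stand-in so the sketch actually runs */
        main_loop_once();
        return 0;
}
```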
<span id="force_election"></span>
force_election
election_handler
rec = self, ctdb = rec->ctdb
if the message's pnn == our own pnn, return
ctdb_election_win
compare election states:
the longest-running daemon wins
ties are broken by the biggest pnn
release the recovery lock file
set recmaster to the winner
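A compact sketch of that winning rule, with an assumed record type (the real election message layout differs):

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* assumed fields, for illustration only */
struct election_data {
        double start_time;      /* earlier start = longer running */
        uint32_t pnn;
};

/* longest-running node wins; a tie is broken by the biggest pnn */
static bool election_win(const struct election_data *mine,
                         const struct election_data *theirs)
{
        if (mine->start_time != theirs->start_time) {
                return mine->start_time < theirs->start_time;
        }
        return mine->pnn > theirs->pnn;
}

int main(void)
{
        struct election_data a = { 100.0, 0 }, b = { 100.0, 2 };
        printf("node %u wins\n", election_win(&a, &b) ? a.pnn : b.pnn);
        return 0;
}
```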
<h2 id="do_recovery"></h2>
do_recovery
we are the recmaster
need_recovery = true
begin
if we have banned ourselves, abort
lock recover_lock_file with F_SETLK, lock type F_WRLCK
get the list of all databases (dbmap)
create any missing local databases
create any missing remote databases
update all nodes to use the same lock files
[db_recovery_parallel](#db_recovery_parallel)
[do_takeover_run](#do_takeover_run)
send the "reconfigured" message
need_recovery = false
end
wait rerecovery_timeout
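The F_SETLK / F_WRLCK step above maps onto plain fcntl(2). A minimal sketch (the lock file path is an example, not a default):

```c
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

/* Take an exclusive, non-blocking write lock on the lock file. */
int take_recovery_lock(const char *path)
{
        struct flock lock = {
                .l_type   = F_WRLCK,    /* exclusive write lock */
                .l_whence = SEEK_SET,
                .l_start  = 0,
                .l_len    = 0,          /* 0 = lock the whole file */
        };

        int fd = open(path, O_RDWR | O_CREAT, 0600);
        if (fd == -1) {
                perror("open");
                return -1;
        }
        if (fcntl(fd, F_SETLK, &lock) == -1) {
                perror("fcntl(F_SETLK)");  /* another node holds the lock */
                close(fd);
                return -1;
        }
        return fd;      /* keep the fd open to hold the lock */
}

int main(void)
{
        int fd = take_recovery_lock("/tmp/recovery.lock");
        if (fd != -1) {
                puts("lock taken");
                close(fd);
        }
        return 0;
}
```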
<span id='db_recovery_parallel'></span>
db_recovery_parallel
env var CTDB_RECOVERY_HELPER can override the helper location
dir CTDB_HELPER_BINDIR == /usr/libexec/ctdb/
file ctdb_recovery_helper
create a pipe for the helper to report back over
args[0] = fd[1] (the pipe's write end)
args[1] = daemon.name = CTDB_SOCKET = /var/run/ctdb/ctdbd.socket
args[2] = generation (a random value, != 1)
exec /usr/libexec/ctdb/ctdb_recovery_helper
<log-fd> <output-fd> <ctdb-socket-path> <generation>
1 1 /var/run/ctdb/ctdbd.socket 2
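Putting the pieces above together, spawning such a helper might look roughly like this in C; the argv layout and paths follow the notes above and are illustrative, not the exact ctdbd code:

```c
#include <stdio.h>
#include <sys/types.h>
#include <unistd.h>

int main(void)
{
        int fd[2];
        pid_t pid;

        if (pipe(fd) == -1) {
                perror("pipe");
                return 1;
        }

        pid = fork();
        if (pid == 0) {                  /* child: becomes the helper */
                char fd_arg[16];

                close(fd[0]);            /* the child only writes */
                snprintf(fd_arg, sizeof(fd_arg), "%d", fd[1]);
                execl("/usr/libexec/ctdb/ctdb_recovery_helper",
                      "ctdb_recovery_helper",
                      fd_arg,                        /* output fd */
                      "/var/run/ctdb/ctdbd.socket",  /* daemon socket */
                      "2",                           /* generation */
                      (char *)NULL);
                perror("execl");         /* reached only on failure */
                _exit(127);
        }

        close(fd[1]);                    /* parent reads helper output */
        /* ... read results from fd[0], then waitpid() for the child ... */
        return 0;
}
```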
<span id='do_takeover_run'></span>
do_takeover_run
if a takeover run is already in progress, return (done)
begin
srvid = 0, pnn = -1
get the list of connected nodes
disable takeover runs on all connected nodes for 60s
ctdb_takeover_run
re-enable takeover runs
ok
end
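The pattern here is a disable/run/re-enable bracket, where the 60-second timeout is a safety net: if the recovery daemon dies mid-run, takeover runs come back on their own. A toy sketch with stub functions:

```c
#include <stdbool.h>
#include <stdio.h>

/* illustrative stubs: the real calls broadcast a control to all
 * connected nodes */
static bool set_takeover_runs_disabled(unsigned timeout_secs)
{
        if (timeout_secs > 0) {
                printf("takeover runs disabled for %us\n", timeout_secs);
        } else {
                puts("takeover runs re-enabled");
        }
        return true;
}

static bool run_ip_allocation(void)
{
        puts("running ctdb_takeover_run");
        return true;
}

int main(void)
{
        bool ok;

        if (!set_takeover_runs_disabled(60)) {
                return 1;
        }
        ok = run_ip_allocation();
        set_takeover_runs_disabled(0);   /* re-enable */
        return ok ? 0 : 1;
}
```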
<span id='ctdb_takeover_run'></span>
ctdb_takeover_run
allocate memory for ipalloc_state, including the per-node arrays
fill in the IP allocation algorithm in ipalloc_state
fill in the NoIPFailback parameter from the local node -- this is really a cluster-wide setting, and only the master's value is used
fetch NoIPTakeover and NoIPHostOnAllDisabled from all connected nodes -- this is done as a separate step so it can be faked out in unit tests
fill in NoIPTakeover in ipalloc_state
fill in NoIPHost in ipalloc_state, derived from the node flags and NoIPHostOnAllDisabled
retrieve and fill in the known and available IP lists in ipalloc_state
if there are no available IP addresses, exit early
build the list of (known IPs, currently assigned node)
fill in the list of nodes to force a rebalance for -- internal state with no way to set it externally yet; only the LCP2 algorithm uses it, for nodes that have just gained new IP addresses
run the IP allocation algorithm
send RELEASE_IP to all nodes for the IPs they should not hold
send TAKE_IP to all nodes for the IPs they should hold
send IPREALLOCATED to all nodes (a backward-compatibility hack)
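The RELEASE_IP / TAKE_IP fan-out at the end can be pictured with a toy table of IPs, where each IP records the node that currently holds it and the node the algorithm picked (types and output here are illustrative):

```c
#include <stdint.h>
#include <stdio.h>

struct public_ip {
        const char *addr;
        int32_t current_pnn;    /* node holding the IP now (-1 = unheld) */
        int32_t desired_pnn;    /* node chosen by the allocation algorithm */
};

int main(void)
{
        struct public_ip ips[] = {
                { "10.0.0.10",  0, 1 },   /* must move: release, then take */
                { "10.0.0.11",  1, 1 },   /* already correct: nothing to do */
                { "10.0.0.12", -1, 0 },   /* unheld: only a take is needed */
        };
        unsigned n = sizeof(ips) / sizeof(ips[0]);
        unsigned i;

        /* release pass runs first so an IP is never live on two nodes */
        for (i = 0; i < n; i++) {
                if (ips[i].current_pnn != -1 &&
                    ips[i].current_pnn != ips[i].desired_pnn) {
                        printf("RELEASE_IP %s from node %d\n",
                               ips[i].addr, ips[i].current_pnn);
                }
        }
        for (i = 0; i < n; i++) {
                if (ips[i].current_pnn != ips[i].desired_pnn) {
                        printf("TAKE_IP %s on node %d\n",
                               ips[i].addr, ips[i].desired_pnn);
                }
        }
        return 0;
}
```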
ipalloc_state_init
three algorithms: ipalloc_lcp2 ==> the default
ipalloc_deterministic ==> pnn = i % numnodes
ipalloc_nondeterministic ==> round-robin with pnn = 0 as the baseline min; a node already holding fewer IPs than min can take the next IP
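The deterministic policy is the simplest of the three and fits in a few lines (the real code additionally skips nodes that are not allowed to host an IP):

```c
#include <stdio.h>

int main(void)
{
        const char *ips[] = { "10.0.0.10", "10.0.0.11", "10.0.0.12",
                              "10.0.0.13", "10.0.0.14" };
        unsigned numnodes = 3;
        unsigned i;

        for (i = 0; i < sizeof(ips) / sizeof(ips[0]); i++) {
                /* the deterministic rule: pnn = i % numnodes */
                printf("%s -> node %u\n", ips[i], i % numnodes);
        }
        return 0;
}
```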
<span id='ipalloc_lcp2'></span>
ipalloc_lcp2
unassign_unsuitable_ips: set pnn = -1 for any IP whose assigned node can no longer host it
lcp2_init
lcp2_allocate_unassigned
^ (XOR) the two addresses; the position of the highest differing bit gives the distance; for IPv4 keys this is 32 + 32 + dis + 32, so the range is 0 ~ 128
sum = the sum of squared distances from an IP to every other IP on the same node
pick the minnode / mindstdsum among the rebalance_candidates
lcp2_failback
balance out all the lcp2_imbalances
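A simplified sketch of the XOR-distance and squared-sum imbalance described above, using bare 32-bit IPv4 words for brevity (the real code works on 128-bit keys, which is where the 0 ~ 128 range comes from, and accumulates per-IP sums rather than one pairwise total):

```c
#include <stdint.h>
#include <stdio.h>

static uint32_t ip_distance(uint32_t a, uint32_t b)
{
        uint32_t x = a ^ b;
        uint32_t d = 0;

        while (x != 0) {        /* position of the highest differing bit */
                d++;
                x >>= 1;
        }
        return d;
}

/* imbalance of one node = sum over its IP pairs of distance squared */
static uint64_t node_imbalance(const uint32_t *ips, unsigned n)
{
        uint64_t sum = 0;
        unsigned i, j;

        for (i = 0; i < n; i++) {
                for (j = i + 1; j < n; j++) {
                        uint64_t d = ip_distance(ips[i], ips[j]);
                        sum += d * d;
                }
        }
        return sum;
}

int main(void)
{
        uint32_t ips[] = { 0x0a000001, 0x0a000002, 0x0a0000ff };

        printf("imbalance = %llu\n",
               (unsigned long long)node_imbalance(ips, 3));
        return 0;
}
```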