Manager components
masterha_manager        # start MHA
masterha_check_ssh      # check the SSH configuration used by MHA
masterha_check_repl     # check the MySQL replication status and configuration
masterha_master_monitor # detect whether the master is down
masterha_check_status   # check the current MHA running status
masterha_master_switch  # control failover (automatic or manual)
masterha_conf_host      # add or remove configured server entries

Node components
save_binary_logs        # save and copy the master's binary logs
apply_diff_relay_logs   # identify differential relay log events and apply them to the other slaves
purge_relay_logs        # purge relay logs (does not block the SQL thread)
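As a quick orientation, the manager-side checks are normally run against the MHA configuration file. A minimal sketch, assuming the configuration lives at /etc/masterha/app1.cnf (the path is an assumption; adjust to your deployment):

# path to the MHA manager configuration (assumed location)
CONF=/etc/masterha/app1.cnf
# verify that the manager can SSH between all nodes defined in the config
masterha_check_ssh --conf=$CONF
# verify the replication topology and settings
masterha_check_repl --conf=$CONF
# query the status of a running manager
masterha_check_status --conf=$CONF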
During an MHA failover, slave recovery depends on the contents of the relay logs, so automatic relay log purging must be turned OFF and the relay logs cleaned up manually. By default a slave deletes its relay logs automatically once the SQL thread has executed them, but in an MHA environment those relay logs may be needed to recover other slaves, so the automatic deletion has to be disabled. Periodic manual purging must also account for replication delay: on an ext3 filesystem, deleting a large file takes a noticeable amount of time and can cause serious replication lag. To avoid this, a hard link is first created for each relay log file, because on Linux removing a large file through a hard link is fast. (The same hard-link trick is commonly used when dropping large tables in MySQL.)

The MHA Node package ships the purge_relay_logs tool. It creates hard links for the relay logs, executes SET GLOBAL relay_log_purge=1, waits a few seconds for the SQL thread to switch to a new relay log, and then executes SET GLOBAL relay_log_purge=0.
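On each slave, automatic relay log purging can be turned off as follows (a minimal sketch; credentials are placeholders):

# disable automatic relay log purging at runtime
mysql -uroot -p -e "SET GLOBAL relay_log_purge = 0;"
# persist the setting in my.cnf so it survives a restart:
#   [mysqld]
#   relay_log_purge = 0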
The purge_relay_logs script takes the following parameters:
--user                      MySQL user name
--password                  MySQL password
--port                      MySQL port number
--workdir                   where to create the hard links of the relay logs; defaults to /var/tmp. Creating a hard link across different filesystems fails, so the location should be specified explicitly; after the script finishes successfully, the hard-linked relay log files are deleted.
--disable_relay_log_purge   by default, if relay_log_purge=1 the script purges nothing and simply exits; with this parameter set, the script still purges the relay logs when relay_log_purge=1 and leaves relay_log_purge set to 0 afterwards.
[root@192.168.0.60 ~]# cat purge_relay_log.sh
#!/bin/bash
user=root
passwd=123456
port=3306
log_dir='/data/masterha/log'
work_dir='/data'
purge='/usr/local/bin/purge_relay_logs'

if [ ! -d $log_dir ]
then
    mkdir $log_dir -p
fi

$purge --user=$user --password=$passwd --disable_relay_log_purge --port=$port --workdir=$work_dir >> $log_dir/purge_relay_logs.log 2>&1
[root@192.168.0.60 ~]#
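The purge script is then usually scheduled through cron on each slave, staggering the start times so that the slaves do not purge (and hard-link large files) at the same moment. A sketch, assuming the script is saved as /root/purge_relay_log.sh:

# crontab -e on each slave (stagger the minute/hour per host)
0 4 * * * /bin/bash /root/purge_relay_log.sh

The next script continuously backs up the master's binary logs with mysqlbinlog --raw --stop-never, which can serve as a simple binlog server.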
#!/bin/sh
BACKUP_BIN=/usr/local/mysql/bin/mysqlbinlog
LOCAL_BACKUP_DIR=/data/backup/binlog_bk
BACKUP_LOG=/data/backup/bakbinlog.log
REMOTE_HOST=192.168.56.100
#REMOTE_PORT=3306
SERVER_ID=20003306
REMOTE_USER=wanbin
REMOTE_PASS=mysql
#time to wait before reconnecting after failure
SLEEP_SECONDS=10
##create local_backup_dir if necessary
##mkdir -p ${LOCAL_BACKUP_DIR}
cd ${LOCAL_BACKUP_DIR}
## run a while loop: after the connection drops, wait the specified time and reconnect
while :
do
    FIRST_BINLOG=$(mysql --host=${REMOTE_HOST} --user=${REMOTE_USER} --password=${REMOTE_PASS} -e 'show binary logs' | grep -v "Log_name" | awk '{print $1}' | head -n 1)
    if [ `ls -A "${LOCAL_BACKUP_DIR}" | wc -l` -eq 0 ]; then
        ## if the backup directory is empty, start from the master's first binlog: LAST_FILE=FIRST_BINLOG
        LAST_FILE=${FIRST_BINLOG}
    else
        ## otherwise resume from the binlog file with the highest sequence number already backed up
        LAST_FILE=`ls -l ${LOCAL_BACKUP_DIR} | tail -n 1 | awk '{print $9}'`
    fi
    ${BACKUP_BIN} -R --raw --host=${REMOTE_HOST} --user=${REMOTE_USER} --password=${REMOTE_PASS} ${LAST_FILE} --stop-never --stop-never-slave-server-id=${SERVER_ID}
    echo "`date +"%Y/%m/%d %H:%M:%S"` mysqlbinlog stopped, exit code: $?" | tee -a ${BACKUP_LOG}
    echo "reconnecting and resuming the backup in ${SLEEP_SECONDS} seconds" | tee -a ${BACKUP_LOG}
    sleep ${SLEEP_SECONDS}
done
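Since the files are pulled with --raw, they are ordinary binlog files and can be inspected locally with mysqlbinlog. A quick sanity check might look like this (the file name is illustrative):

# decode a backed-up binlog to confirm it is readable and up to date
mysqlbinlog --base64-output=decode-rows -vv /data/backup/binlog_bk/mysql-bin.000010 | tail -n 20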
The script below implements a simple binlog server as a start/stop service: it pulls binary logs from a remote master with mysqlbinlog -R --raw --stop-never, resuming from the newest binlog already present in the local directory.

#!/bin/bash
[ -e /etc/profile ] && source /etc/profile || exit 0

# local binlog directory
local_binlog_dir=/data/3306/247binlog
[ ! -d "$local_binlog_dir" ] && mkdir -p "$local_binlog_dir"
cd "$local_binlog_dir"

# remote server ssh port
ssh_port=22
# remote server ip
remote_host=192.168.0.68
# newest binlog file already downloaded locally
local_logfile=`ls -al "$local_binlog_dir" | grep 'mysql-bin\.[0-9]\+' | tail -n 1 | awk '{print $NF}'`
# remote binlog directory
remote_binlog_dir=/data/mysql3306/
# first and last binlog file names on the remote server
first_remote_lofile=`ssh -p ${ssh_port} -o StrictHostKeyChecking=no ${remote_host} " cat \${remote_binlog_dir}/mysql-bin.index | head -n 1 | awk -F'/' '{print \\$NF}'"`
last_remote_logfile=`ssh -p ${ssh_port} -o StrictHostKeyChecking=no ${remote_host} " cat \${remote_binlog_dir}/mysql-bin.index | tail -n 1 | awk -F'/' '{print \\$NF}'"`
# remote mysql user
remote_user=root
# remote mysql user password
remote_password=xx

function start() {
    # refuse to start a second copy if a raw mysqlbinlog process is already running
    running=`ps uax | grep 'mysqlbinlog -R --raw' | grep -v grep | grep raw | awk '{print $2}'`
    if [ "$running" != "" ]; then
        echo "mysqlbinlog server is running"
        exit
    fi
    if [ "$local_logfile" == "" ]; then
        # first start: nothing local yet, begin from the first binlog on the remote server
        #echo "the binlogserver is first start "
        mysqlbinlog -R --raw --host=$remote_host --user="$remote_user" --password="$remote_password" --stop-never $first_remote_lofile &
    else
        if ! ssh -p ${ssh_port} -o StrictHostKeyChecking=no ${remote_host} "ls -lh ${remote_binlog_dir}/${local_logfile}" &> /dev/null; then
            # the newest local binlog no longer exists remotely (it was purged):
            # walk forward through the sequence numbers until a file that still exists remotely is found
            local_logfile_num=`ls -l "$local_binlog_dir" | tail -1 | awk '{print $NF}' | grep -o '[1-9][0-9]*'`
            binlogs=(`ssh -p ${ssh_port} -o StrictHostKeyChecking=no ${remote_host} "ls -lh ${remote_binlog_dir}/mysql-bin.* | grep -v index | awk -F'/' '{print \\$NF}' | wc -l"`)
            for binlog in `seq 1 $binlogs`
            do
                local_logfile_num=`expr $local_logfile_num + 1`
                if [ "$local_logfile_num" -lt 10 ]; then
                    local_logfile=mysql-bin.00000${local_logfile_num}
                elif [ "$local_logfile_num" -lt 100 ]; then
                    local_logfile=mysql-bin.0000${local_logfile_num}
                elif [ "$local_logfile_num" -lt 1000 ]; then
                    local_logfile=mysql-bin.000${local_logfile_num}
                elif [ "$local_logfile_num" -lt 10000 ]; then
                    local_logfile=mysql-bin.00${local_logfile_num}
                elif [ "$local_logfile_num" -lt 100000 ]; then
                    local_logfile=mysql-bin.0${local_logfile_num}
                else
                    local_logfile=mysql-bin.${local_logfile_num}
                fi
                if ssh -p ${ssh_port} -o StrictHostKeyChecking=no ${remote_host} "ls -lh ${remote_binlog_dir}/${local_logfile}" &> /dev/null; then
                    break
                fi
            done
            mysqlbinlog -R --raw --host=$remote_host --user="$remote_user" --password="$remote_password" --stop-never $local_logfile &
        else
            # the newest local binlog still exists remotely: resume from it
            mysqlbinlog -R --raw --host=$remote_host --user="$remote_user" --password="$remote_password" --stop-never $local_logfile &
        fi
    fi
}

function stop() {
    ps uax | grep mysqlbinlog | grep raw | awk '{print $2}' | xargs kill
}

case $1 in
start)
    start
    ;;
stop)
    stop
    ;;
*)
    # usage
    basename=`basename "$0"`
    echo "Usage: $basename {start|stop} [ MySQL BinlogServer options ]"
    exit 1
    ;;
esac
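A usage sketch for the script above (binlog_server.sh is just an assumed file name for wherever you save it):

# start streaming binlogs from the remote master into /data/3306/247binlog
bash binlog_server.sh start
# stop the streaming mysqlbinlog process
bash binlog_server.sh stop

If the manager should also take such a host into account during failover (the init_binlog_server and save_from_binlog_server steps below), MHA 0.56 and later let you declare it in the manager configuration with a [binlogN] section; check the exact syntax against your MHA version.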
MHA failover flow (non-GTID mode)

Phase 1: Configuration Check Phase
init_config(): initialize the configuration
MHA::ServerManager::init_binlog_server: initialize the binlog servers
check_settings()
 a. check_node_version(): check the MHA version
 b. connect_all_and_read_server_status(): confirm that MySQL on every node is reachable
 c. get_dead_servers(), get_alive_servers(), get_alive_slaves(): re-check the state of every node
 d. print_dead_servers(): check whether the dead server is the current master
 e. MHA::DBHelper::check_connection_fast_util: quickly double-check whether the dead server is really down; if ping_type=insert, no double check is performed
 f. MHA::NodeUtil::drop_file_if($_failover_error_file|$_failover_complete_file): check the files left by the previous failover
 g. if the previous failover happened within the last 8 hours, this failover is skipped unless an extra parameter (--ignore_last_failover) is set
 h. start_sql_threads_if(): check that Slave_SQL_Running is Yes on every slave, and start the SQL thread where it is not
is_gtid_auto_pos_enabled(): determine whether GTID mode is in use
Phase 2: Dead Master Shutdown Phase..
force_shutdown($dead_master):
 a. stop_io_thread(): stop the IO thread on every slave
 b. force_shutdown_internal($dead_master):
 b_1. master_ip_failover_script: if this script is configured, execute it (for example, to move a VIP)
 b_2. shutdown_script: if this script is configured, execute it (for example, to power off the server)
Phase 3: Master Recovery Phase..
Phase 3.1: Getting Latest Slaves Phase..
* check_set_latest_slaves()
 a. read_slave_status(): collect show slave status output from every slave
 b. identify_latest_slaves(): find the most up-to-date slave
 c. identify_oldest_slaves(): find the most out-of-date slave
Phase 3.2: Saving Dead Master's Binlog Phase..
* save_master_binlog($dead_master);
 -> if the dead master is reachable over SSH:
 b_1_1. save_master_binlog_internal: use the node-side save_binary_logs script to copy the relevant binlogs to the manager
 diff_binary_log: generate the differential binlog
 b_1_2. file_copy: copy the differential binlog into the manager_workdir directory on the manager node
 -> if the dead master is not reachable over SSH:
 b_1_3. the differential binlog is lost
Phase 3.3: Determining New Master Phase..
 b. if GTID auto_pos is not enabled, call find_latest_base_slave()
 b_1. find_latest_base_slave_internal: look for the most up-to-date slave that has all the relay logs; if there is none, the failover fails
 b_1_1. find_slave_with_all_relay_logs:
 b_1_1_1. apply_diff_relay_logs: check whether the latest slave has the relay logs that the other slaves are missing
 c. select_new_master: elect the new master
 c_1. MHA::ServerManager::select_new_master:
#If preferred node is specified, one of active preferred nodes will be new master.
#If the latest server behinds too much (i.e. stopping sql thread for online backups), we should not use it as a new master, but we should fetch relay log there
#Even though preferred master is configured, it does not become a master if it's far behind
get_candidate_masters(): get the candidate nodes from the configuration
get_bad_candidate_masters(): a server matching any of the following cannot become the candidate master:
 # dead server
 # no_master >= 1
 # log_bin=0
 # oldest_major_version=0
 # check_slave_delay: check whether the slave lags badly (this check can be skipped via no_check_delay, which is set when check_repl_delay=0)
 {Exec_Master_Log_Pos} + 100000000: a slave counts as delayed only if it is more than 100000000 bytes of binlog behind the latest slave
Election order: candidate_master servers first, then the latest slave, then an arbitrary pick (candidate_master and no_master are per-server flags in the manager configuration; see the illustrative excerpt below).
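A minimal, illustrative excerpt of the manager configuration showing these per-server flags (section names, host addresses, and the /etc/masterha/app1.cnf path are placeholders):

# /etc/masterha/app1.cnf (illustrative excerpt)
[server2]
hostname=192.168.0.61
# prefer this slave when electing the new master
candidate_master=1
# do not disqualify it because of replication delay
check_repl_delay=0

[server3]
hostname=192.168.0.62
# never promote this slave
no_master=1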
Phase 3.3(3.4): New Master Diff Log Generation Phase..
* recover_master_internal
 recover_relay_logs:
 check whether the new master is the latest slave; if it is not, generate the differential relay logs and send them to the new master
 recover_master_internal:
 send the dead master's binlog saved earlier to the new master
Phase 3.4: Master Log Apply Phase..
* apply_diff:
 a. wait_until_relay_log_applied: wait until the new master has applied all of its relay logs
 b. check whether Exec_Master_Log_Pos == Read_Master_Log_Pos; if they differ, generate the differential log:
 save_binary_logs --command=save
 c. apply_diff_relay_logs --command=apply: recover the new master by applying
 c_1. exec_diff: the differential log between Exec_Master_Log_Pos and Read_Master_Log_Pos
 c_2. read_diff: the differential relay log between the new master and the latest slave
 c_3. binlog_diff: the differential binlog between the latest slave and the dead master
* if master_ip_failover_script is configured, it is executed here (usually to float a VIP over to the new master; a hedged sketch of such a script follows this walkthrough)
* disable_read_only(): make the new master writable
Phase 4: Slaves Recovery Phase..
recover_slaves_internal
Phase 4.1: Starting Parallel Slave Diff Log Generation Phase..
 recover_all_slaves_relay_logs: generate the differential logs between each slave and the new master, and copy them to each slave's working directory
Phase 4.2: Starting Parallel Slave Log Apply Phase..
* recover_slave:
 recover every slave, in the same way as apply_diff in Phase 3.4: Master Log Apply Phase above
* change_master_and_start_slave:
 re-point each slave to the new master and start slave
Phase 5: New master cleanup phase..
reset_slave_on_new_master
 execute reset slave all on the new master
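For the master_ip_failover_script hook mentioned in Phases 2 and 3.4, MHA ships a Perl sample script; the following is only a rough bash sketch of the same idea. The argument names follow the bundled sample but should be verified against your MHA version, and the VIP, network interface, and SSH user below are assumptions:

#!/bin/bash
# Rough sketch of a master_ip_failover_script that floats a VIP.
# Assumptions: VIP/interface/ssh user; argument names as in MHA's sample script.
VIP=192.168.0.100/24
DEV=eth0
SSH_USER=root

# parse the arguments passed in by the MHA manager
for arg in "$@"; do
    case $arg in
        --command=*)          COMMAND=${arg#*=} ;;
        --orig_master_host=*) ORIG_MASTER=${arg#*=} ;;
        --new_master_host=*)  NEW_MASTER=${arg#*=} ;;
    esac
done

case $COMMAND in
    stop|stopssh)
        # called while shutting down the dead/old master: release the VIP (best effort)
        ssh ${SSH_USER}@${ORIG_MASTER} "ip addr del ${VIP} dev ${DEV}" || true
        ;;
    start)
        # called once the new master is writable: bring the VIP up and refresh ARP caches
        ssh ${SSH_USER}@${NEW_MASTER} "ip addr add ${VIP} dev ${DEV} && arping -c 3 -A -I ${DEV} ${VIP%/*}"
        ;;
    status)
        # periodic health check: confirm the current master still holds the VIP
        ssh ${SSH_USER}@${ORIG_MASTER} "ip addr show dev ${DEV} | grep -q ${VIP%/*}"
        ;;
    *)
        echo "Usage: $0 --command=start|stop|stopssh|status ..."
        exit 1
        ;;
esac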
MHA failover flow (GTID mode)

Phase 1: Configuration Check Phase
 same as in the non-GTID flow above (init_config, init_binlog_server, check_settings, is_gtid_auto_pos_enabled)
Phase 2: Dead Master Shutdown Phase..
 same as in the non-GTID flow above (force_shutdown: master_ip_failover_script, shutdown_script)
Phase 3: Master Recovery Phase..
Phase 3.1: Getting Latest Slaves Phase..
 same as in the non-GTID flow above (check_set_latest_slaves: read_slave_status, identify_latest_slaves, identify_oldest_slaves)
Phase 3.2: Saving Dead Master's Binlog Phase.. (this phase does not exist in GTID mode)
Phase 3.3: Determining New Master Phase..
 get_most_advanced_latest_slave(): get the most up-to-date slave
 c. select_new_master: elect the new master, with the same rules and bad-candidate exclusions as in the non-GTID flow (candidate_master servers first, then the latest slave, then an arbitrary pick)
Phase 3.3: New Master Recovery Phase..
* recover_master_gtid_internal:
 wait_until_relay_log_applied: the candidate master waits until all of its relay logs have been applied
 if the candidate master is not the latest slave:
 $latest_slave->wait_until_relay_log_applied($log): the latest slave applies all of its relay logs
 change_master_and_start_slave: point the candidate master at the latest slave so that it catches up with it
 record the candidate master's current log position for the later switch
 if the candidate master is the latest slave:
 record the candidate master's current log position for the later switch
 save_from_binlog_server:
 if a binlog server is configured and reachable, copy its binlogs to the manager and generate the differential binlog diff_binlog (save_binary_logs --command=save)
 apply_binlog_to_master:
 Applying differential binlog: apply the differential binlog to the new master
Phase 4: Slaves Recovery Phase..
Phase 4.1: Starting Slaves in parallel..
* recover_slaves_gtid_internal:
 change_master_and_start_slave: since the master has already been recovered, each slave can simply change master to the new master with auto_pos=1 (GTID auto-positioning; see the sketch after this walkthrough)
 gtid_wait: wait here until every slave has fully caught up
Phase 5: New master cleanup phase..
reset_slave_on_new_master
 execute reset slave all on the new master
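The change_master_and_start_slave / auto_pos=1 step in Phase 4.1 corresponds to statements like the following on each surviving slave (host, credentials, and the GTID set are placeholders; MHA issues the equivalent internally):

# re-point a slave at the new master using GTID auto-positioning
mysql -uroot -p -e "
  STOP SLAVE;
  CHANGE MASTER TO
    MASTER_HOST='192.168.0.61',
    MASTER_USER='repl',
    MASTER_PASSWORD='repl_password',
    MASTER_AUTO_POSITION=1;
  START SLAVE;"

# the gtid_wait step then blocks until the slave has applied the GTIDs already
# executed on the new master (WAIT_FOR_EXECUTED_GTID_SET requires MySQL 5.7.5+)
mysql -uroot -p -e "SELECT WAIT_FOR_EXECUTED_GTID_SET('<gtid_set_from_new_master>', 60);"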