
9. MHA High Availability


Chapter 1: Evolution of Replication Architectures

1.1 Basic Topologies

(1) One master, one slave
(2) One master, multiple slaves
(3) Cascading (multi-tier) replication
(4) Dual master
(5) Circular replication

1.2 Evolution of Advanced Application Architectures

1.2.1 High-Performance Architectures

Read/write splitting (improves read performance)
Implemented in application code, or through middleware:
MySQL proxies: Atlas, MySQL Router, ProxySQL, MaxScale,
Amoeba (Taobao),
xx-dbproxy, etc.
Distributed architectures (improve both read and write performance):
Sharding: Cobar ---> TDDL, DRDS
Mycat ---> DBLE, in-house solutions, etc.
NewSQL ---> TiDB

1.2.2 High-Availability Architectures

(1) Single-active: MMM (mysql-mmm, from Google)
(2) Single-active: MHA (mysql-master-ha, from DeNA in Japan), T-MHA
(3) Multi-active: MGR, MySQL Group Replication (new in 5.7.17) ---> InnoDB Cluster
(4) Multi-active: MariaDB Galera Cluster, Percona XtraDB Cluster (PXC),
MySQL Cluster (comparable to Oracle RAC)

Chapter 2: MHA High Availability

2.1 How It Works

Master failover process
1. Monitoring (node information is read from the configuration file)
   System, network, and SSH connectivity
   Replication status, with the focus on the master
2. Master election (see the GTID comparison sketch after this list)
   (1) If the slaves' data differs (by position or GTID), the slave closest to the master becomes the candidate master.
   (2) If the slaves' data is identical, the candidate is chosen in configuration-file order.
   (3) If a weight is set (candidate_master=1), that node is forced to be the candidate master.
       1. By default, if a slave is more than 100 MB of relay logs behind the master, the weight is ignored.
       2. If check_repl_delay=0 is set, the node is forced to be the candidate even if it lags far behind.
3. Data compensation
   (1) If SSH to the failed master is reachable, each slave compares its position/GTID with the master, and the missing binary-log events are saved to and applied on each slave (save_binary_logs).
   (2) If SSH is not reachable, the relay-log differences between the slaves are compared and applied (apply_diff_relay_logs).
4. Failover
   Promote the candidate master so it serves traffic.
   Repoint the remaining slaves to the new master.
5. Application transparency (VIP)
6. Failover notification (send_report)
7. Additional data compensation (binlog server)
8. Self-healing / autonomy (not yet implemented...)
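A quick way to see which slave is "closest to the master" in step 2 is to compare replication progress by GTID. A minimal sketch (not part of MHA itself), assuming the mha account and the 10.0.1.x hosts used later in this guide:

for host in 10.0.1.52 10.0.1.53; do
    echo "== $host =="
    # Retrieved/Executed GTID sets show how far each slave has pulled and applied
    mysql -umha -pmha -h "$host" -e "show slave status\G" 2>/dev/null \
      | grep -E 'Retrieved_Gtid_Set|Executed_Gtid_Set|Seconds_Behind_Master'
done
# The slave whose GTID sets are closest to the master's Executed_Gtid_Set
# is the natural candidate master.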

2.2 Architecture Overview

One master, two slaves (master: db01; slaves: db02 and db03)
MHA software components:
Manager: installed on one of the slave nodes
Node: installed on every node

2.3 MHA Software Components

The Manager package includes the following tools:
masterha_manager          start MHA
masterha_check_ssh        check MHA's SSH configuration
masterha_check_repl       check MySQL replication status
masterha_master_monitor   detect whether the master is down
masterha_check_status     check the current MHA running state
masterha_master_switch    control failover (automatic or manual)
masterha_conf_host        add or remove configured server entries

The Node package includes the following tools
(normally triggered by the Manager's scripts; no manual intervention is needed):
save_binary_logs          save and copy the master's binary logs
apply_diff_relay_logs     identify differential relay-log events and apply them to the other slaves
purge_relay_logs          purge relay logs without blocking the SQL thread (commonly scheduled from cron; see the sketch below)
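Because MHA expects relay logs to be kept around on the slaves (relay_log_purge disabled), purge_relay_logs is usually run periodically from cron instead. A minimal sketch, assuming the mha account from this guide; verify the option names against your mha4mysql-node version:

# run on each slave, e.g. from /etc/cron.d/purge_relay_logs
0 4 * * * root /usr/bin/purge_relay_logs --user=mha --password=mha --disable_relay_log_purge >> /var/log/mha/purge_relay_logs.log 2>&1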

2.4 Building the MHA Environment

2.4.1 Planning

Master: 51   node
Slaves: 52   node
        53   node, manager

2.4.2 Prepare the Environment (1 Master, 2 Slaves, GTID)

Clean up the environment

pkill mysqld
rm -rf /data/mysql/data/*
rm -rf /data/binlog/*

Prepare the configuration files

Master (db01):
cat > /etc/my.cnf <<EOF
[mysqld]
basedir=/usr/local/mysql/
datadir=/data/mysql/data
socket=/tmp/mysql.sock
server_id=51
port=3306
secure-file-priv=/tmp
autocommit=0
log_bin=/data/binlog/mysql-bin
binlog_format=row
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
[mysql]
prompt=db01 [\\d]>
EOF

Slave 1 (db02):
cat > /etc/my.cnf <<EOF
[mysqld]
basedir=/data/mysql
datadir=/data/mysql/data
socket=/tmp/mysql.sock
server_id=52
port=3306
secure-file-priv=/tmp
autocommit=0
log_bin=/data/binlog/mysql-bin
binlog_format=row
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
[mysql]
prompt=db02 [\\d]>
EOF

Slave 2 (db03):
cat > /etc/my.cnf <<EOF
[mysqld]
basedir=/data/mysql
datadir=/data/mysql/data
socket=/tmp/mysql.sock
server_id=53
port=3306
secure-file-priv=/tmp
autocommit=0
log_bin=/data/binlog/mysql-bin
binlog_format=row
gtid-mode=on
enforce-gtid-consistency=true
log-slave-updates=1
[mysql]
prompt=db03 [\\d]>
EOF

Initialize the data directories

mysqld --initialize-insecure --user=mysql --basedir=/usr/local/mysql  --datadir=/data/mysql/data

Start the databases

/etc/init.d/mysqld start      # if the service uses a SysV init script
systemctl start mysqld        # or via systemd, depending on how the service was installed
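After starting, it is worth confirming on each node that the settings from my.cnf actually took effect; a small check run locally via the socket (sketch):

mysql -e "select @@server_id, @@log_bin, @@gtid_mode, @@enforce_gtid_consistency;"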

Set up replication

master: 10.0.1.51 (db01)    slaves: 10.0.1.52, 10.0.1.53

# on db01 (51)
grant replication slave on *.* to repl@'10.0.1.%' identified by '123';

# on db02 and db03 (52, 53)
change master to
master_host='10.0.1.51',
master_user='repl',
master_password='123',
MASTER_AUTO_POSITION=1;

start slave;
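Before moving on, a quick check on db02 and db03 that both replication threads are running (sketch, run locally on each slave):

mysql -e "show slave status\G" | grep -E 'Slave_IO_Running|Slave_SQL_Running|Last_IO_Error|Last_SQL_Error'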

2.4.3 Create Symlinks for Key Binaries

# point these at the bin directory under your actual basedir
# (the configs above use /usr/local/mysql on db01 and /data/mysql on db02/db03)
ln -s /data/mysql/bin/mysqlbinlog /usr/bin/mysqlbinlog
ln -s /data/mysql/bin/mysql       /usr/bin/mysql

2.4.4 Set Up SSH Trust Between All Nodes

db01:
rm -rf /root/.ssh
ssh-keygen
cd /root/.ssh
mv id_rsa.pub authorized_keys
scp -r /root/.ssh 10.0.1.52:/root/
scp -r /root/.ssh 10.0.1.53:/root/
Verify from each node:
db01:
ssh 10.0.1.51 date
ssh 10.0.1.52 date
ssh 10.0.1.53 date
db02:
ssh 10.0.1.51 date
ssh 10.0.1.52 date
ssh 10.0.1.53 date
db03:
ssh 10.0.1.51 date
ssh 10.0.1.52 date
ssh 10.0.1.53 date

2.4.5 Install the Software

Download MHA

MHA project page: https://code.google.com/archive/p/mysql-master-ha/
GitHub downloads: https://github.com/yoshinorim/mha4mysql-manager/wiki/Downloads

Install the Node package and its dependencies on all nodes

yum -y install perl-DBD-MySQL
rpm -ivh mha4mysql-node-0.56-0.el6.noarch.rpm

Create the user MHA needs on the master (db01)

grant all privileges on *.* to mha@'10.0.1.%' identified by 'mha';

Install the Manager package (db03)

# install epel-release first so the Perl dependencies below can be resolved from EPEL
yum install -y epel-release
yum install -y perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Time-HiRes
rpm -ivh mha4mysql-manager-0.56-0.el6.noarch.rpm

2.4.6 Prepare the Configuration File (db03)

# create the configuration directory
mkdir -p /etc/mha
# create the log directory
mkdir -p /var/log/mha/app1
# edit the MHA configuration file
vim /etc/mha/app1.cnf
[server default]
manager_log=/var/log/mha/app1/manager
manager_workdir=/var/log/mha/app1
master_binlog_dir=/data/binlog
user=mha
password=mha
ping_interval=2
repl_password=123
repl_user=repl
ssh_user=root
[server1]
hostname=10.0.1.51
port=3306
[server2]
hostname=10.0.1.52
port=3306
[server3]
hostname=10.0.1.53
port=3306

2.4.7 Status Checks

# SSH connectivity check
[root@db03 ~]# masterha_check_ssh --conf=/etc/mha/app1.cnf
Fri Oct 25 04:29:50 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Oct 25 04:29:50 2019 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Fri Oct 25 04:29:50 2019 - [info] Reading server configuration from /etc/mha/app1.cnf..
Fri Oct 25 04:29:50 2019 - [info] Starting SSH connection tests..
Fri Oct 25 04:29:50 2019 - [debug]
Fri Oct 25 04:29:50 2019 - [debug] Connecting via SSH from root@10.0.1.51(10.0.1.51:22) to root@10.0.1.52(10.0.1.52:22)..
Fri Oct 25 04:29:50 2019 - [debug] ok.
Fri Oct 25 04:29:50 2019 - [debug] Connecting via SSH from root@10.0.1.51(10.0.1.51:22) to root@10.0.1.53(10.0.1.53:22)..
Fri Oct 25 04:29:50 2019 - [debug] ok.
Fri Oct 25 04:29:51 2019 - [debug]
Fri Oct 25 04:29:50 2019 - [debug] Connecting via SSH from root@10.0.1.52(10.0.1.52:22) to root@10.0.1.51(10.0.1.51:22)..
Fri Oct 25 04:29:50 2019 - [debug] ok.
Fri Oct 25 04:29:50 2019 - [debug] Connecting via SSH from root@10.0.1.52(10.0.1.52:22) to root@10.0.1.53(10.0.1.53:22)..
Fri Oct 25 04:29:50 2019 - [debug] ok.
Fri Oct 25 04:29:51 2019 - [debug]
Fri Oct 25 04:29:51 2019 - [debug] Connecting via SSH from root@10.0.1.53(10.0.1.53:22) to root@10.0.1.51(10.0.1.51:22)..
Fri Oct 25 04:29:51 2019 - [debug] ok.
Fri Oct 25 04:29:51 2019 - [debug] Connecting via SSH from root@10.0.1.53(10.0.1.53:22) to root@10.0.1.52(10.0.1.52:22)..
Fri Oct 25 04:29:51 2019 - [debug] ok.
Fri Oct 25 04:29:51 2019 - [info] All SSH connection tests passed successfully.

# replication status check
[root@db03 ~]# masterha_check_repl --conf=/etc/mha/app1.cnf
Fri Oct 25 04:51:22 2019 - [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Fri Oct 25 04:51:22 2019 - [info] Reading application default configuration from /etc/mha/app1.cnf..
Fri Oct 25 04:51:22 2019 - [info] Reading server configuration from /etc/mha/app1.cnf..
Fri Oct 25 04:51:22 2019 - [info] MHA::MasterMonitor version 0.56.
Fri Oct 25 04:51:23 2019 - [info] GTID failover mode = 1
Fri Oct 25 04:51:23 2019 - [info] Dead Servers:
Fri Oct 25 04:51:23 2019 - [info] Alive Servers:
Fri Oct 25 04:51:23 2019 - [info] 10.0.1.51(10.0.1.51:3306)
Fri Oct 25 04:51:23 2019 - [info] 10.0.1.52(10.0.1.52:3306)
Fri Oct 25 04:51:23 2019 - [info] 10.0.1.53(10.0.1.53:3306)
Fri Oct 25 04:51:23 2019 - [info] Alive Slaves:
Fri Oct 25 04:51:23 2019 - [info] 10.0.1.52(10.0.1.52:3306) Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Fri Oct 25 04:51:23 2019 - [info] GTID ON
Fri Oct 25 04:51:23 2019 - [info] Replicating from 10.0.1.51(10.0.1.51:3306)
Fri Oct 25 04:51:23 2019 - [info] 10.0.1.53(10.0.1.53:3306) Version=5.7.26-log (oldest major version between slaves) log-bin:enabled
Fri Oct 25 04:51:23 2019 - [info] GTID ON
Fri Oct 25 04:51:23 2019 - [info] Replicating from 10.0.1.51(10.0.1.51:3306)
Fri Oct 25 04:51:23 2019 - [info] Current Alive Master: 10.0.1.51(10.0.1.51:3306)
Fri Oct 25 04:51:23 2019 - [info] Checking slave configurations..
Fri Oct 25 04:51:23 2019 - [info] read_only=1 is not set on slave 10.0.1.52(10.0.1.52:3306).
Fri Oct 25 04:51:23 2019 - [info] read_only=1 is not set on slave 10.0.1.53(10.0.1.53:3306).
Fri Oct 25 04:51:23 2019 - [info] Checking replication filtering settings..
Fri Oct 25 04:51:23 2019 - [info] binlog_do_db= , binlog_ignore_db=
Fri Oct 25 04:51:23 2019 - [info] Replication filtering check ok.
Fri Oct 25 04:51:23 2019 - [info] GTID (with auto-pos) is supported. Skipping all SSH and Node package checking.
Fri Oct 25 04:51:23 2019 - [info] Checking SSH publickey authentication settings on the current master..
Fri Oct 25 04:51:23 2019 - [info] HealthCheck: SSH to 10.0.1.51 is reachable.
Fri Oct 25 04:51:23 2019 - [info]
10.0.1.51(10.0.1.51:3306) (current master)
+--10.0.1.52(10.0.1.52:3306)
+--10.0.1.53(10.0.1.53:3306)

Fri Oct 25 04:51:23 2019 - [info] Checking replication health on 10.0.1.52..
Fri Oct 25 04:51:23 2019 - [info] ok.
Fri Oct 25 04:51:23 2019 - [info] Checking replication health on 10.0.1.53..
Fri Oct 25 04:51:23 2019 - [info] ok.
Fri Oct 25 04:51:23 2019 - [warning] master_ip_failover_script is not defined.
Fri Oct 25 04:51:23 2019 - [warning] shutdown_script is not defined.
Fri Oct 25 04:51:23 2019 - [info] Got exit code 0 (Not master dead).

MySQL Replication Health is OK.

2.4.8 Start MHA (db03)

# --remove_dead_master_conf removes the failed master's [serverN] section from app1.cnf after a failover;
# --ignore_last_failover skips the check against the app1.failover.complete marker left by a recent failover
[root@db03 ~]# nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
[1] 1755

2.4.9 Check MHA Status

[root@db03 ~]# masterha_check_status --conf=/etc/mha/app1.cnf
app1 (pid:1755) is running(0:PING_OK), master:10.0.1.51

[root@db03 ~]# mysql -umha -pmha -h 10.0.1.51 -e "show variables like 'server_id'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id | 51 |
+---------------+-------+
[root@db03 ~]# mysql -umha -pmha -h 10.0.1.52 -e "show variables like 'server_id'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id | 52 |
+---------------+-------+
[root@db03 ~]# mysql -umha -pmha -h 10.0.1.53 -e "show variables like 'server_id'"
mysql: [Warning] Using a password on the command line interface can be insecure.
+---------------+-------+
| Variable_name | Value |
+---------------+-------+
| server_id | 53 |
+---------------+-------+

2.4.10 Simulating and Handling a Failure

### Stop the master, db01:
[root@db01 ~]# systemctl stop mysqld

# Watch the manager log (tail -f /var/log/mha/app1/manager); the switchover only counts as successful if the log ends with "completed successfully".
[root@db03 ~]# tail -f /var/log/mha/app1/manager
Master 10.0.1.51(10.0.1.51:3306) is down!

Check MHA Manager logs at db03:/var/log/mha/app1/manager for details.

Started automated(non-interactive) failover.
Selected 10.0.1.52(10.0.1.52:3306) as a new master.
10.0.1.52(10.0.1.52:3306): OK: Applying all logs succeeded.
10.0.1.53(10.0.1.53:3306): OK: Slave started, replicating from 10.0.1.52(10.0.1.52:3306)
10.0.1.52(10.0.1.52:3306): Resetting slave info succeeded.
Master failover to 10.0.1.52(10.0.1.52:3306) completed successfully.

Confirm that db02 has become the master

db02 [(none)]>show master status;
+------------------+----------+--------------+------------------+------------------------------------------+
| File | Position | Binlog_Do_DB | Binlog_Ignore_DB | Executed_Gtid_Set |
+------------------+----------+--------------+------------------+------------------------------------------+
| mysql-bin.000003 | 194 | | | 39b29c3b-f69b-11e9-a428-000c29cfb981:1-4 |
+------------------+----------+--------------+------------------+------------------------------------------+
1 row in set (0.00 sec)

db03 [(none)]>show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.0.1.52
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000003
Read_Master_Log_Pos: 194
Relay_Log_File: db03-relay-bin.000002
Relay_Log_Pos: 367
Relay_Master_Log_File: mysql-bin.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

Start db01 again

[root@db01 ~]# systemctl start mysqld

Rebuild replication with db02 as the new master (rejoin db01 as a slave)

db01 [(none)]>CHANGE MASTER TO 
-> MASTER_HOST='10.0.1.52',
-> MASTER_PORT=3306,
-> MASTER_AUTO_POSITION=1,
-> MASTER_USER='repl',
    -> MASTER_PASSWORD='123';

db01 [(none)]>start slave;

db01 [(none)]>show slave status \G
*************************** 1. row ***************************
Slave_IO_State: Waiting for master to send event
Master_Host: 10.0.1.52
Master_User: repl
Master_Port: 3306
Connect_Retry: 60
Master_Log_File: mysql-bin.000003
Read_Master_Log_Pos: 194
Relay_Log_File: db01-relay-bin.000002
Relay_Log_Pos: 367
Relay_Master_Log_File: mysql-bin.000003
Slave_IO_Running: Yes
Slave_SQL_Running: Yes

Edit the MHA configuration on db03 (add back the [server1] section that was removed during failover)

[root@db03 ~]# vim /etc/mha/app1.cnf
[server1]
hostname=10.0.1.51
port=3306

Start MHA again

[root@db03 ~]# nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover  < /dev/null> /var/log/mha/app1/manager.log 2>&1 &
[1] 1959
[root@db03 ~]# masterha_check_status --conf=/etc/mha/app1.cnf
app1 (pid:1959) is running(0:PING_OK), master:10.0.1.52

2.4.11 Additional Manager Parameters

Notes:
Who takes over when the master goes down?
1. If all slave nodes have identical logs, a new master is by default chosen in configuration-file order.
2. If the slaves' logs differ, the slave closest to the master is chosen automatically.
3. If a node has a weight set (candidate_master=1), it is preferred. However, if it is more than 100 MB of relay logs behind the master it still will not be chosen; combine it with check_repl_delay=0 to disable the lag check and force the candidate to be selected.

(1) ping_interval=1
# interval, in seconds, between ping packets sent to monitor the master; failover starts after three missed responses
(2) candidate_master=1
# marks the node as the candidate master: after a switchover this slave is promoted even if it is not the most up-to-date slave in the cluster
(3) check_repl_delay=0
# By default, MHA will not choose a slave that is more than 100 MB of relay logs behind the master as the new master, because recovering it would take a long time.
# Setting check_repl_delay=0 makes MHA ignore replication delay when choosing the new master. This is very useful for hosts with candidate_master=1, because it guarantees the candidate becomes the new master during the switchover.
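A sketch of how these parameters are typically placed in /etc/mha/app1.cnf: ping_interval goes under [server default], while candidate_master and check_repl_delay are set on the specific server you want promoted (using db02 here is an assumption for illustration; the other [server default] entries from section 2.4.6 stay as they are):

[server default]
ping_interval=2

[server2]
hostname=10.0.1.52
port=3306
candidate_master=1
check_repl_delay=0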

2.4.12 MHA VIP Failover

Parameter

vim /etc/mha/app1.cnf
master_ip_failover_script=/usr/local/bin/master_ip_failover

Copy the script into place

[root@db03 ~]# cp master_ip_failover.txt /usr/local/bin/master_ip_failover

Edit the script:

[root@db03 ~]# vim  /usr/local/bin/master_ip_failover
Change the following variables to match your environment:
my $vip = '10.0.1.55/24';
my $key = '1';
my $ssh_start_vip = "/sbin/ifconfig eth0:$key $vip";
my $ssh_stop_vip = "/sbin/ifconfig eth0:$key down";

[root@db03 ~]# yum -y install dos2unix
[root@db03 ~]# dos2unix /usr/local/bin/master_ip_failover
dos2unix: converting file /usr/local/bin/master_ip_failover to Unix format ...
[root@db03 ~]# chmod +x /usr/local/bin/master_ip_failover

On the master, bind the initial VIP manually

# Bind the VIP on the master by hand. The interface must match the ethN used in the script; here it is eth0:1 (1 is the value of $key).
ifconfig eth0:1 10.0.1.55/24
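To confirm the VIP is actually bound (and, after a failover, that it has moved to the new master), a quick check on the relevant host (sketch, assuming the eth0 / 10.0.1.55 values above):

ip addr show eth0 | grep 10.0.1.55
# or, matching the ifconfig style used by the script:
ifconfig eth0:1 | grep 10.0.1.55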

Restart MHA

[root@db03 ~]# masterha_stop --conf=/etc/mha/app1.cnf
[root@db03 ~]# nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
[root@db03 ~]# masterha_check_status --conf=/etc/mha/app1.cnf

2.4.13 Email Alerts

1. Parameter:
report_script=/usr/local/bin/send
2. Prepare the mail script (send_report); a minimal placeholder sketch is shown after this list.
(1) Upload the mail scripts (from the email_2019-最新.zip package) to /usr/local/bin/
(2) Reference the script in the MHA configuration file so it gets called
[root@db03 /usr/local/bin]# chmod +x send
[root@db03 /usr/local/bin]# chmod +x sendEmail
[root@db03 /usr/local/bin]# chmod +x testpl

3. Add the script to the manager configuration file:
vi /etc/mha/app1.cnf
report_script=/usr/local/bin/send

4. Stop MHA:
masterha_stop --conf=/etc/mha/app1.cnf
5. Start MHA:
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &

6. Stop the master and check for the alert email.

Recovering from the failure:
1. Repair the failed node
(1) If only the instance crashed:
/etc/init.d/mysqld start
(2) If the host was damaged, the data may be damaged as well:
back up from a healthy node and restore the failed node.
2. Rebuild replication
Take the CHANGE MASTER statement from the manager log:
CHANGE MASTER TO MASTER_HOST='10.0.1.52', MASTER_PORT=3306, MASTER_AUTO_POSITION=1, MASTER_USER='repl', MASTER_PASSWORD='123';
start slave;
3. Restore the manager
3.1 Add the repaired node's settings back into the configuration file:
[server1]
hostname=10.0.1.51
port=3306
3.2 Start the manager:
nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
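If you just want to verify that the report_script hook fires before wiring up the full mail script from the package, a minimal placeholder sketch for /usr/local/bin/send (the recipient address is an assumption, and a working local mail command such as mailx is assumed; MHA passes the failover details as command-line arguments):

#!/bin/bash
# Log and mail whatever MHA passes in; the real send/sendEmail scripts do proper formatting.
echo "$(date) MHA report: $*" >> /var/log/mha/app1/report.log
echo "MHA failover report: $*" | mail -s "MHA failover" dba@example.com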

2.4.14 Binlog Server (db03)

Parameters

Binlog server configuration:
Use an additional machine running MySQL 5.6 or later with GTID supported and enabled; here we simply reuse the second slave (db03).
vim /etc/mha/app1.cnf
[binlog1]
no_master=1
hostname=10.0.1.53
master_binlog_dir=/data/mysql/binlog

Create the required directory

mkdir -p /data/mysql/binlog
chown -R mysql.mysql /data/*
After this, pull the master's binlogs over (start from 000001; subsequent binlogs follow automatically in order)

Pull the master's binlogs

cd /data/mysql/binlog     # must be run from the directory created above
mysqlbinlog -R --host=10.0.1.52 --user=mha --password=mha --raw --stop-never mysql-bin.000001 &
Note:
Choose the starting binlog file according to the binary log position the slave has already received.
[root@db03 /data/mysql/binlog]# ll
total 12
-rw-r----- 1 root root 177 Oct 25 05:38 mysql-bin.000001
-rw-r----- 1 root root 1199 Oct 25 05:38 mysql-bin.000002
-rw-r----- 1 root root 194 Oct 25 05:38 mysql-bin.000003

Restart MHA

[root@db03 ~]# masterha_stop --conf=/etc/mha/app1.cnf
[root@db03 ~]# nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &
[root@db03 ~]# masterha_check_status --conf=/etc/mha/app1.cnf

Failure Handling

When the master goes down, the binlog server stops automatically, and the manager stops as well.
Recovery approach (a sketch follows below):
1. Pull the new master's binlogs into the binlog server again.
2. Update the binlog server settings in the configuration file.
3. Finally, start MHA again.
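A minimal sketch of these recovery steps, assuming the new master is 10.0.1.52 and the directory layout from this guide; the starting binlog file must match what the new master actually has (check SHOW BINARY LOGS on it):

pkill -f 'mysqlbinlog.*--stop-never'    # stop the old binlog stream if it is still running
cd /data/mysql/binlog
mysqlbinlog -R --host=10.0.1.52 --user=mha --password=mha --raw --stop-never mysql-bin.000001 &
# then verify that [binlog1] in /etc/mha/app1.cnf still points at this host/directory and restart the manager:
# masterha_stop --conf=/etc/mha/app1.cnf
# nohup masterha_manager --conf=/etc/mha/app1.cnf --remove_dead_master_conf --ignore_last_failover < /dev/null > /var/log/mha/app1/manager.log 2>&1 &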

Chapter 3: The DBA's Responsibilities in Maintaining the HA Architecture

1. Building: MHA + VIP + SendReport + BinlogServer
2. Monitoring and failure handling
3. Tuning the high-availability architecture
The key is to keep replication lag as low as possible, so MHA spends as little time as possible on data compensation.
On 5.7, enable GTID mode and parallel SQL replication on the slaves (see the sketch below).
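A sketch of the slave-side my.cnf additions for 5.7 parallel (logical-clock) replication; the worker count is an assumption to tune for your workload:

[mysqld]
# 5.7 multi-threaded replication on the slaves
slave_parallel_type=LOGICAL_CLOCK
slave_parallel_workers=4          # worker count is an assumption; tune per workload
slave_preserve_commit_order=1     # requires log_bin + log_slave_updates (already set in 2.4.2)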
Author: Wu Fei
Link: http://linuxwf.com/2020/04/15/9-MHA%E9%AB%98%E5%8F%AF%E7%94%A8/
Copyright: Unless otherwise stated, all articles on this blog are licensed under CC BY-NC-SA 4.0. Please credit WF's Blog when reposting.