
MHA High Availability Deployment and Testing


MHA High Availability Deployment and Testing: a MySQL high availability solution.

Contents

I. MHA Features
II. How MHA Works
III. Replication Topologies MHA Supports
IV. Building an MHA High Availability Environment
4.1 Lab Environment
4.2 Outline of the Steps
4.3 The MHA Node Scripts
4.4 MHA Deployment
4.5 Configuring the VIP
V. Common MHA Commands
VI. Notes and Caveats
VII. Problems Encountered During Deployment

I. MHA Features

1. Automated master monitoring and failover

  MHA monitors the master of a replication topology and performs failover automatically as soon as it detects a master failure. Even if some slaves have not yet received the latest relay log events, MHA identifies the differential relay log events on the most up-to-date slave and applies them to the remaining slaves, so every slave ends up consistent. MHA usually completes failover within seconds: detecting the master failure takes 9-12 seconds, shutting down the failed master to avoid split brain takes another 7-10 seconds, and applying the differential relay logs to the new master takes a few more seconds, so the whole process typically finishes in 10-30 seconds. You can also assign priorities to designate a particular slave as the candidate master. Because MHA repairs consistency among the slaves, any of them can be promoted to the new master without the consistency problems that would otherwise break replication.

2. Interactive master failover

  You can use MHA for failover only, without master monitoring: when the master fails, an operator invokes MHA manually to carry out the failover.

3. Non-interactive master failover

  MHA does not monitor the master but still performs failover automatically. This mode suits environments that already use other software, such as heartbeat, to detect master failure and take over the virtual IP address; MHA then handles the failover itself and promotes a slave to master.

4. Online master switchover

  In many situations an existing master must be migrated to another machine, for example when the master's hardware is failing, the RAID controller needs to be rebuilt, or the master is being moved to faster hardware. Maintaining the master degrades performance and causes downtime during which no data can be written, and blocking or killing running sessions can leave the old and new masters with inconsistent data. MHA provides fast switchover with graceful write blocking: the switch takes only 0.5-2 seconds, during which writes are blocked. In most cases a 0.5-2 second write block is acceptable, so switching masters no longer requires a planned maintenance window (no more all-nighters switching the master in the dead of night).

 
II. How MHA Works

An analysis of MHA's automatic failover process:

http://www.mysqlsystems.com/2012/03/figure-out-process-of-autofailover-on-mha.html

https://code.google.com/p/mysql-master-ha/wiki/Sequences_of_MHA
 
III. Replication Topologies MHA Supports

https://code.google.com/p/mysql-master-ha/wiki/UseCases
 
IV. Building an MHA High Availability Environment

 
4.1 Lab Environment

•Node1: 192.168.10.216 (master)
•Node2: 192.168.10.217 (slave; candidate master for failover)
•Node3: 192.168.10.218 (slave; also hosts the MHA manager)
•VIP: 192.168.10.219
•MySQL: Percona-Server-5.6.16-rel64.2-569
•All nodes run CentOS 6.5 x64
 
4.2 Outline of the Steps

A. Configure the EPEL yum repository on all three nodes and install the required packages

B. Set up master-slave replication

C. Use ssh-keygen to establish passwordless SSH login among the three machines

D. Install mha4mysql-node-0.56 on all three nodes, and mha4mysql-manager-0.56 on node3

E. Manage the MHA configuration files on node3

F. Verify passwordless SSH with masterha_check_ssh and replication with masterha_check_repl

G. Start the MHA manager and watch its log file

H. Kill MySQL on the master (Node1) and verify that failover happens automatically

I. Configure a VIP so that after a switchover the new master takes over the service transparently to clients

 
4.3 The MHA Node Scripts

The MHA node package ships three scripts, which depend on Perl modules:

save_binary_logs: saves and copies the failed master's binary logs

apply_diff_relay_logs: identifies differential relay log events and applies them to all slave nodes
purge_relay_logs: removes relay log files

 
4.4 MHA Deployment

A. Configure the EPEL yum repository on all three nodes and install the required packages

 
rpm -Uvh http://dl.fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm --import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
yum -y install perl-DBD-MySQL ncftp
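Before running `perl Makefile.PL` later on, it can save time to confirm that the required Perl modules are actually importable. The helper below is not part of MHA; it is just a convenience sketch:

```shell
# Hypothetical helper (not part of MHA): report whether a Perl module
# can be loaded, so missing dependencies surface before the build step.
check_perl_module() {
  if perl -M"$1" -e 1 2>/dev/null; then
    echo "$1: ok"
  else
    echo "$1: MISSING"
  fi
}

check_perl_module strict        # core module, expected to print "strict: ok"
check_perl_module DBD::mysql    # required by mha4mysql-node
```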
 

B. Set up master-slave replication

On node1:

 
mysql> grant replication slave on *.* to 'rep'@'192.168.10.%' identified by 'geekwolf';
mysql> grant all on *.* to 'root'@'192.168.10.%' identified by 'geekwolf';
mysql> show master status;
 

Copy node1's data directory to node2 and node3.
On node2 and node3:

 
mysql> change master to master_host='192.168.10.216', master_user='rep', master_password='geekwolf', master_port=3306, master_log_file='mysql-bin.000006', master_log_pos=120, master_connect_retry=1;
mysql> start slave;
 

Create symlinks for the MySQL binaries on every node:

ln -s /usr/local/mysql/bin/* /usr/local/bin/

C. Use ssh-keygen to establish passwordless SSH among the three machines
Run on node1 (and likewise on the other two nodes):

 
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2
ssh-copy-id -i /root/.ssh/id_rsa.pub root@node3
 

D. Install mha4mysql-node-0.56 on all three nodes, and mha4mysql-manager-0.56 on node3

Install mha4mysql-node on node1, node2, and node3:

wget https://googledrive.com/host/0B1lu97m8-haWeHdGWXp0YVVUSlk/mha4mysql-node-0.56.tar.gz

tar xf mha4mysql-node-0.56.tar.gz

cd mha4mysql-node-0.56

perl Makefile.PL

make && make install

Install mha4mysql-manager on node3:

wget https://googledrive.com/host/0B1lu97m8-haWeHdGWXp0YVVUSlk/mha4mysql-manager-0.56.tar.gz

tar xf mha4mysql-manager-0.56.tar.gz

cd mha4mysql-manager-0.56

yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Config-IniFiles perl-Time-HiRes

perl Makefile.PL

make && make install

E. Manage the MHA configuration files on node3

mkdir -p /etc/mha/{app1,scripts}

cp mha4mysql-manager-0.56/samples/conf/* /etc/mha/

cp mha4mysql-manager-0.56/samples/scripts/* /etc/mha/scripts/

mv /etc/mha/app1.cnf /etc/mha/app1/

mv /etc/mha/masterha_default.cnf /etc/masterha_default.cnf

Global configuration:

vim /etc/mha/masterha_default.cnf

 
[server default]
user=root
password=geekwolf
ssh_user=root
repl_user=rep
repl_password=geekwolf
ping_interval=1
#shutdown_script=""
secondary_check_script = masterha_secondary_check -s node1 -s node2 -s node3 --user=root --master_host=node1 --master_ip=192.168.10.216 --master_port=3306
#master_ip_failover_script="/etc/mha/scripts/master_ip_failover"
#master_ip_online_change_script="/etc/mha/scripts/master_ip_online_change"
#shutdown_script=/script/masterha/power_manager
#report_script=""

vim /etc/mha/app1/app1.cnf

 
[server default]
manager_workdir=/var/log/mha/app1
manager_log=/var/log/mha/app1/manager.log
[server1]
hostname=node1
master_binlog_dir="/usr/local/mysql/logs"
candidate_master=1
[server2]
hostname=node2
master_binlog_dir="/usr/local/mysql/logs"
candidate_master=1
[server3]
hostname=node3
master_binlog_dir="/usr/local/mysql/logs"
no_master=1

Notes:

  candidate_master=1 marks the host as preferred for promotion to new master; when several [serverX] blocks set this parameter, priority follows the order in which the [serverX] blocks appear.

  secondary_check_script: MHA strongly recommends checking the MySQL master's availability over two or more network routes. By default the MHA Manager checks over a single route (Manager to Master), which is not ideal; by pointing this parameter at an external script, MHA can check over two or more routes.

  master_ip_failover_script: invoked when a slave is promoted to the new master, so VIP handling can be placed in this script.

  master_ip_online_change_script: invoked when the master is switched manually with masterha_master_switch; its arguments are similar to master_ip_failover_script, and the two scripts can share code.
  shutdown_script: the sample script uses the server's remote management interface (iDRAC and the like) with ipmitool to force a power-off, preventing a fencing device from rebooting the failed master and causing split brain.

  report_script: sends an email report after the switchover completes; sendEmail is one option: http://caspian.dotconf.net/menu/Software/SendEmail/sendEmail-v1.56.tar.gz

  All of the scripts above can be copied from mha4mysql-manager-0.56/samples/scripts/* and adapted.

  Other manager parameters are documented at https://code.google.com/p/mysql-master-ha/wiki/Parameters

F. Verify passwordless SSH with masterha_check_ssh and replication with masterha_check_repl

Verify SSH trust: masterha_check_ssh --conf=/etc/mha/app1/app1.cnf

 
[root@localhost ~]# masterha_check_ssh --conf=/etc/mha/app1/app1.cnf
Tue May 13 07:53:15 2014 – [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue May 13 07:53:15 2014 – [info] Reading application default configuration from /etc/mha/app1/app1.cnf..
Tue May 13 07:53:15 2014 – [info] Reading server configuration from /etc/mha/app1/app1.cnf..
Tue May 13 07:53:15 2014 – [info] Starting SSH connection tests..
Tue May 13 07:53:16 2014 – [debug]
Tue May 13 07:53:15 2014 – [debug] Connecting via SSH from root@node1(192.168.10.216:22) to root@node2(192.168.10.217:22)..
Tue May 13 07:53:15 2014 – [debug] ok.
Tue May 13 07:53:15 2014 – [debug] Connecting via SSH from root@node1(192.168.10.216:22) to root@node3(192.168.10.218:22)..
Tue May 13 07:53:16 2014 – [debug] ok.
Tue May 13 07:53:16 2014 – [debug]
Tue May 13 07:53:16 2014 – [debug] Connecting via SSH from root@node2(192.168.10.217:22) to root@node1(192.168.10.216:22)..
Tue May 13 07:53:16 2014 – [debug] ok.
Tue May 13 07:53:16 2014 – [debug] Connecting via SSH from root@node2(192.168.10.217:22) to root@node3(192.168.10.218:22)..
Tue May 13 07:53:16 2014 – [debug] ok.
Tue May 13 07:53:17 2014 – [debug]
Tue May 13 07:53:16 2014 – [debug] Connecting via SSH from root@node3(192.168.10.218:22) to root@node1(192.168.10.216:22)..
Tue May 13 07:53:16 2014 – [debug] ok.
Tue May 13 07:53:16 2014 – [debug] Connecting via SSH from root@node3(192.168.10.218:22) to root@node2(192.168.10.217:22)..
Tue May 13 07:53:17 2014 – [debug] ok.
Tue May 13 07:53:17 2014 – [info] All SSH connection tests passed successfully.
 

Verify replication: masterha_check_repl --conf=/etc/mha/app1/app1.cnf

 
[root@localhost mha]# masterha_check_repl --conf=/etc/mha/app1/app1.cnf
Tue May 13 08:10:54 2014 – [info] Reading default configuration from /etc/masterha_default.cnf..
Tue May 13 08:10:54 2014 – [info] Reading application default configuration from /etc/mha/app1/app1.cnf..
Tue May 13 08:10:54 2014 – [info] Reading server configuration from /etc/mha/app1/app1.cnf..
Tue May 13 08:10:54 2014 – [info] MHA::MasterMonitor version 0.56.
Tue May 13 08:10:54 2014 – [info] GTID failover mode = 0
Tue May 13 08:10:54 2014 – [info] Dead Servers:
Tue May 13 08:10:54 2014 – [info] Alive Servers:
Tue May 13 08:10:54 2014 – [info] node1(192.168.10.216:3306)
Tue May 13 08:10:54 2014 – [info] node2(192.168.10.217:3306)
Tue May 13 08:10:54 2014 – [info] node3(192.168.10.218:3306)
Tue May 13 08:10:54 2014 – [info] Alive Slaves:
Tue May 13 08:10:54 2014 – [info] node2(192.168.10.217:3306) Version=5.6.16-64.2-rel64.2-log (oldest major version between slaves) log-bin:enabled
Tue May 13 08:10:54 2014 – [info] Replicating from 192.168.10.216(192.168.10.216:3306)
Tue May 13 08:10:54 2014 – [info] Primary candidate for the new Master (candidate_master is set)
Tue May 13 08:10:54 2014 – [info] node3(192.168.10.218:3306) Version=5.6.16-64.2-rel64.2-log (oldest major version between slaves) log-bin:enabled
Tue May 13 08:10:54 2014 – [info] Replicating from 192.168.10.216(192.168.10.216:3306)
Tue May 13 08:10:54 2014 – [info] Not candidate for the new Master (no_master is set)
Tue May 13 08:10:54 2014 – [info] Current Alive Master: node1(192.168.10.216:3306)
Tue May 13 08:10:54 2014 – [info] Checking slave configurations..
Tue May 13 08:10:54 2014 – [info] read_only=1 is not set on slave node2(192.168.10.217:3306).
Tue May 13 08:10:54 2014 – [warning] relay_log_purge=0 is not set on slave node2(192.168.10.217:3306).
Tue May 13 08:10:54 2014 – [info] read_only=1 is not set on slave node3(192.168.10.218:3306).
Tue May 13 08:10:54 2014 – [warning] relay_log_purge=0 is not set on slave node3(192.168.10.218:3306).
Tue May 13 08:10:54 2014 – [info] Checking replication filtering settings..
Tue May 13 08:10:54 2014 – [info] binlog_do_db= , binlog_ignore_db=
Tue May 13 08:10:54 2014 – [info] Replication filtering check ok.
Tue May 13 08:10:54 2014 – [info] GTID (with auto-pos) is not supported
Tue May 13 08:10:54 2014 – [info] Starting SSH connection tests..
Tue May 13 08:10:55 2014 – [info] All SSH connection tests passed successfully.
Tue May 13 08:10:55 2014 – [info] Checking MHA Node version..
Tue May 13 08:10:55 2014 – [info] Version check ok.
Tue May 13 08:10:55 2014 – [info] Checking SSH publickey authentication settings on the current master..
Tue May 13 08:10:56 2014 – [info] HealthCheck: SSH to node1 is reachable.
Tue May 13 08:10:56 2014 – [info] Master MHA Node version is 0.56.
Tue May 13 08:10:56 2014 – [info] Checking recovery script configurations on node1(192.168.10.216:3306)..
Tue May 13 08:10:56 2014 – [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/usr/local/mysql/logs --output_file=/var/tmp/save_binary_logs_test --manager_version=0.56 --start_file=mysql-bin.000009
Tue May 13 08:10:56 2014 – [info] Connecting to root@192.168.10.216(node1:22)..
  Creating /var/tmp if not exists.. ok.
  Checking output directory is accessible or not..
  ok.
  Binlog found at /usr/local/mysql/logs, up to mysql-bin.000009
Tue May 13 08:10:56 2014 – [info] Binlog setting check done.
Tue May 13 08:10:56 2014 – [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Tue May 13 08:10:56 2014 – [info] Executing command : apply_diff_relay_logs --command=test --slave_user='root' --slave_host=node2 --slave_ip=192.168.10.217 --slave_port=3306 --workdir=/var/tmp --target_version=5.6.16-64.2-rel64.2-log --manager_version=0.56 --relay_log_info=/usr/local/mysql/data/relay-log.info --relay_dir=/usr/local/mysql/data/ --slave_pass=xxx
Tue May 13 08:10:56 2014 – [info] Connecting to root@192.168.10.217(node2:22)..
  Checking slave recovery environment settings..
    Opening /usr/local/mysql/data/relay-log.info … ok.
    Relay log found at /usr/local/mysql/logs, up to relay-bin.000006
    Temporary relay log file is /usr/local/mysql/logs/relay-bin.000006
    Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Tue May 13 08:10:57 2014 – [info] Executing command : apply_diff_relay_logs --command=test --slave_user='root' --slave_host=node3 --slave_ip=192.168.10.218 --slave_port=3306 --workdir=/var/tmp --target_version=5.6.16-64.2-rel64.2-log --manager_version=0.56 --relay_log_info=/usr/local/mysql/data/relay-log.info --relay_dir=/usr/local/mysql/data/ --slave_pass=xxx
Tue May 13 08:10:57 2014 – [info] Connecting to root@192.168.10.218(node3:22)..
  Checking slave recovery environment settings..
    Opening /usr/local/mysql/data/relay-log.info … ok.
    Relay log found at /usr/local/mysql/logs, up to relay-bin.000006
    Temporary relay log file is /usr/local/mysql/logs/relay-bin.000006
    Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Tue May 13 08:10:57 2014 – [info] Slaves settings check done.
Tue May 13 08:10:57 2014 – [info]
node1(192.168.10.216:3306) (current master)
 +–node2(192.168.10.217:3306)
 +–node3(192.168.10.218:3306)
Tue May 13 08:10:57 2014 – [info] Checking replication health on node2..
Tue May 13 08:10:57 2014 – [info] ok.
Tue May 13 08:10:57 2014 – [info] Checking replication health on node3..
Tue May 13 08:10:57 2014 – [info] ok.
Tue May 13 08:10:57 2014 – [warning] master_ip_failover_script is not defined.
Tue May 13 08:10:57 2014 – [warning] shutdown_script is not defined.
Tue May 13 08:10:57 2014 – [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
 

G. Start the MHA manager and watch its log file

Run killall mysqld on node1 while starting the manager service on node3:

[root@localhost mha]# masterha_manager --conf=/etc/mha/app1/app1.cnf
Tue May 13 08:19:01 2014 – [info] Reading default configuration from /etc/masterha_default.cnf..
Tue May 13 08:19:01 2014 – [info] Reading application default configuration from /etc/mha/app1/app1.cnf..
Tue May 13 08:19:01 2014 – [info] Reading server configuration from /etc/mha/app1/app1.cnf..
  Creating /var/tmp if not exists.. ok.
  Checking output directory is accessible or not..
  ok.
  Binlog found at /usr/local/mysql/logs, up to mysql-bin.000009
Tue May 13 08:19:18 2014 – [info] Reading default configuration from /etc/masterha_default.cnf..
Tue May 13 08:19:18 2014 – [info] Reading application default configuration from /etc/mha/app1/app1.cnf..
Tue May 13 08:19:18 2014 – [info] Reading server configuration from /etc/mha/app1/app1.cnf..
 

  Watching /var/log/mha/app1/manager.log on node3 you will then see node1 reported dead, the master automatically switched to node2, and node3's replication re-pointed at node2. After a failover completes, the file /var/log/mha/app1/app1.failover.complete is created.
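The app1.failover.complete file acts as a latch: a second failover is refused while it exists. The rule can be illustrated with a small self-contained sketch (an illustration only, not MHA's actual code):

```shell
# Illustration only: MHA refuses a new failover while the completion
# marker from the previous one is still present in the workdir.
can_failover() {
  if [ -e "$1/app1.failover.complete" ]; then
    echo "refused: previous failover not acknowledged"
  else
    echo "failover allowed"
  fi
}

workdir=$(mktemp -d)
touch "$workdir/app1.failover.complete"
can_failover "$workdir"                    # refused
rm -f "$workdir/app1.failover.complete"    # operator acknowledges the failover
can_failover "$workdir"                    # allowed
rm -rf "$workdir"
```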


To recover node1 manually:

  rm -rf /var/log/mha/app1/app1.failover.complete

  Start MySQL on node1 and re-point replication on node2 and node3 at node1 (change master to).

To run the MHA Manager in the background:

nohup masterha_manager --conf=/etc/mha/app1/app1.cnf < /dev/null > /var/log/mha/app1/app1.log 2>&1 &
For running it as a daemon, see:
https://code.google.com/p/mysql-master-ha/wiki/Runnning_Background

ftp://ftp.pbone.net/mirror/ftp5.gwdg.de/pub/openSUSE/repositories/home:/weberho:/qmailtoaster/openSUSE_Tumbleweed/x86_64/daemontools-0.76-5.3.x86_64.rpm
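If daemontools feels heavyweight, a plain restart loop achieves a similar effect. The sketch below uses `true` as a stand-in for `masterha_manager --conf=...` so it can run anywhere; in practice you would also add a delay between restarts:

```shell
# Minimal supervise-style loop: rerun a command each time it exits,
# up to a restart budget. `true` is a stub for masterha_manager here.
run_supervised() {
  cmd=$1
  max=$2
  n=0
  while [ "$n" -lt "$max" ]; do
    $cmd            # in real use: masterha_manager --conf=... plus a sleep
    n=$((n + 1))
  done
  echo "restarted $n times"
}

run_supervised true 3   # prints: restarted 3 times
```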
 
4.5 Configuring the VIP

A. Via the MHA global configuration file

vim /etc/mha/masterha_default.cnf

[server default]
  user=root
  password=geekwolf
  ssh_user=root
  repl_user=rep
  repl_password=geekwolf
  ping_interval=1
  secondary_check_script = masterha_secondary_check -s node1 -s node2 -s node3 --user=root --master_host=node1 --master_ip=192.168.10.216 --master_port=3306
  master_ip_failover_script="/etc/mha/scripts/master_ip_failover"
  master_ip_online_change_script="/etc/mha/scripts/master_ip_online_change"
  #shutdown_script=/script/masterha/power_manager
  #report_script=""
 

The modified master_ip_failover / master_ip_online_change script:

 
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
    $command, $ssh_user, $orig_master_host, $orig_master_ip,
    $orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '192.168.10.219';  # Virtual IP
my $gateway = '192.168.10.1';  # Gateway IP
my $interface = 'eth0';
my $key = "1";
my $ssh_start_vip = "/sbin/ifconfig $interface:$key $vip;/sbin/arping -I $interface -c 3 -s $vip $gateway >/dev/null 2>&1";
my $ssh_stop_vip = "/sbin/ifconfig $interface:$key down";
GetOptions(
    'command=s' => \$command,
    'ssh_user=s' => \$ssh_user,
    'orig_master_host=s' => \$orig_master_host,
    'orig_master_ip=s' => \$orig_master_ip,
    'orig_master_port=i' => \$orig_master_port,
    'new_master_host=s' => \$new_master_host,
    'new_master_ip=s' => \$new_master_ip,
    'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
    print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
    if ($command eq "stop" || $command eq "stopssh") {
        # $orig_master_host, $orig_master_ip, $orig_master_port are passed.
        # If you manage the master ip address at a global catalog database,
        # invalidate orig_master_ip here.
        my $exit_code = 1;
        eval {
            print "Disabling the VIP on old master: $orig_master_host \n";
            &stop_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn "Got Error: $@\n";
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ($command eq "start") {
        # all arguments are passed.
        # If you manage the master ip address at a global catalog database,
        # activate new_master_ip here.
        # You can also grant write access (create user, set read_only=0, etc) here.
        my $exit_code = 10;
        eval {
            print "Enabling the VIP - $vip on the new master - $new_master_host \n";
            &start_vip();
            $exit_code = 0;
        };
        if ($@) {
            warn $@;
            exit $exit_code;
        }
        exit $exit_code;
    }
    elsif ($command eq "status") {
        print "Checking the Status of the script.. OK \n";
        exit 0;
    }
    else {
        &usage();
        exit 1;
    }
}
# A simple system call that enables the VIP on the new master
sub start_vip() {
    `ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disables the VIP on the old master
sub stop_vip() {
    `ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
    print
    "Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
 

B. Via third-party HA software (keepalived, heartbeat); keepalived as the example

Configure keepalived with node1 and node2 as master and backup for each other.

Download and install keepalived on node1 and node2:

wget http://www.keepalived.org/software/keepalived-1.2.13.tar.gz

yum -y install popt-*

tar xf keepalived-1.2.13.tar.gz

cd keepalived-1.2.13

./configure --prefix=/usr/local/keepalived --enable-snmp

make && make install

cp /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/rc.d/init.d/

cp /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/

chmod +x /etc/rc.d/init.d/keepalived

chkconfig keepalived on

mkdir /etc/keepalived

ln -s /usr/local/keepalived/sbin/keepalived /usr/sbin

Edit the node1 (192.168.10.216) configuration:

vim /etc/keepalived/keepalived.conf

 
! Configuration File for keepalived
global_defs {
 router_id MHA
 notification_email {
 root@localhost  # recipients; one address per line, several allowed
}
 # notify by email when the master/backup state changes
 notification_email_from m@localhost
 # SMTP server used to send the mail
 smtp_server 127.0.0.1
 # SMTP connect timeout
 smtp_connect_timeout 30
 }
 
vrrp_script check_mysql {
    script "/etc/keepalived/check_mysql.sh"
}
vrrp_sync_group VG1 {
    group {
          VI_1
    }
notify_master "/etc/keepalived/master.sh"
}
 
vrrp_instance VI_1 {
    state MASTER
    interface eth0
    virtual_router_id 110
    priority 100
    advert_int 1
    nopreempt  # do not preempt: when this node comes back up it will not take the master role back
 
    authentication {
    # authentication type, PASS or AH
    auth_type PASS
    # authentication password
    auth_pass geekwolf
    }
track_script {
    check_mysql
}
virtual_ipaddress {
    192.168.10.219
    }
 
}
 

Edit the node2 (192.168.10.217) configuration:

vim /etc/keepalived/keepalived.conf

 
! Configuration File for keepalived
global_defs {
 router_id MHA
 notification_email {
 root@localhost  # recipients; one address per line, several allowed
}
 # notify by email when the master/backup state changes
 notification_email_from m@localhost
 # SMTP server used to send the mail
 smtp_server 127.0.0.1
 # SMTP connect timeout
 smtp_connect_timeout 30
 }
 
vrrp_script check_mysql {
    script "/etc/keepalived/check_mysql.sh"
}
vrrp_sync_group VG1 {
    group {
          VI_1
    }
notify_master "/etc/keepalived/master.sh"
}
vrrp_instance VI_1 {
    state BACKUP
    interface eth0
    virtual_router_id 110
    priority 99
    advert_int 1
 
    authentication {
    # authentication type, PASS or AH
    auth_type PASS
    # authentication password
    auth_pass geekwolf
    }
track_script {
    check_mysql
}
virtual_ipaddress {
    192.168.10.219
    }
 
}
 

check_mysql.sh

 

#!/bin/bash
MYSQL=/usr/local/mysql/bin/mysql
MYSQL_HOST=127.0.0.1
MYSQL_USER=root
MYSQL_PASSWORD=geekwolf
CHECK_TIME=3
# MYSQL_OK is 1 while mysql is working, 0 when mysql is down
MYSQL_OK=1
function check_mysql_health (){
$MYSQL -h $MYSQL_HOST -u $MYSQL_USER -p"$MYSQL_PASSWORD" -e "show status;" >/dev/null 2>&1
if [ $? -eq 0 ] ;then
    MYSQL_OK=1
else
    MYSQL_OK=0
fi
    return $MYSQL_OK
}
while [ $CHECK_TIME -ne 0 ]
do
    let "CHECK_TIME -= 1"
    check_mysql_health
if [ $MYSQL_OK -eq 1 ] ; then
    CHECK_TIME=0
    exit 0
fi
if [ $MYSQL_OK -eq 0 ] && [ $CHECK_TIME -eq 0 ]
then
    pkill keepalived
exit 1
fi
sleep 1
done
 

master.sh

 
#!/bin/bash
VIP=192.168.10.219
GATEWAY=192.168.10.1
/sbin/arping -I eth0 -c 5 -s $VIP $GATEWAY &>/dev/null
 

chmod +x /etc/keepalived/check_mysql.sh

chmod +x /etc/keepalived/master.sh
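The retry logic in check_mysql.sh boils down to "probe N times, give up only if every probe fails". A distilled, testable form, with a generic probe command standing in for the MySQL connectivity check:

```shell
# Distilled form of the check_mysql.sh loop: run a probe up to N times
# and report "alive" on the first success, "dead" if all attempts fail.
retry_check() {
  tries=$1
  shift
  while [ "$tries" -gt 0 ]; do
    if "$@" >/dev/null 2>&1; then
      echo alive
      return
    fi
    tries=$((tries - 1))
  done
  echo dead
}

retry_check 3 true    # prints: alive
retry_check 3 false   # prints: dead
```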
 
V. Common MHA Commands

Check the manager status:

masterha_check_status --conf=/etc/mha/app1/app1.cnf

Check that passwordless SSH works:

masterha_check_ssh --conf=/etc/mha/app1/app1.cnf

Check that replication is healthy:

masterha_check_repl --conf=/etc/mha/app1/app1.cnf

Add a new node as block server4 to the configuration file:

masterha_conf_host --command=add --conf=/etc/mha/app1/app1.cnf --hostname=geekwolf --block=server4 --params="no_master=1;ignore_fail=1"
Remove the server4 node:

masterha_conf_host --command=delete --conf=/etc/mha/app1/app1.cnf --block=server4

Notes:

block: the section name for the node; the default is [server_$hostname]. If you set block=100, the section becomes [server100].
params: extra parameters, separated by semicolons (see https://code.google.com/p/mysql-master-ha/wiki/Parameters)
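The block naming rule above can be made concrete with a tiny helper (illustrative only; masterha_conf_host implements this mapping internally):

```shell
# Illustrative mapping from a --block argument to the [serverX] section
# name it refers to, per the note above.
block_section() {
  case $1 in
    server*) echo "[$1]" ;;        # already a full section name, e.g. server4
    *)       echo "[server$1]" ;;  # shorthand: block=100 -> [server100]
  esac
}

block_section 100       # prints: [server100]
block_section server4   # prints: [server4]
```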

Stop the manager service:

masterha_stop --conf=/etc/mha/app1/app1.cnf

Manual master switchover (the masterha_manager service must not be running):

Switching while the master node1 is alive:

Interactive:

masterha_master_switch --master_state=alive --conf=/etc/mha/app1/app1.cnf --new_master_host=node2

Non-interactive:

masterha_master_switch --master_state=alive --conf=/etc/mha/app1/app1.cnf --new_master_host=node2 --interactive=0

Switching after the master node1 has died:

masterha_master_switch --master_state=dead --conf=/etc/mha/app1/app1.cnf --dead_master_host=node1 --dead_master_ip=192.168.10.216 --dead_master_port=3306 --new_master_host=192.168.10.217
See https://code.google.com/p/mysql-master-ha/wiki/TableOfContents?tm=6 for full details.
 
VI. Notes and Caveats

A. Of the two VIP approaches above, the first is recommended

B. After a failover the manager service stops itself and leaves
app1.failover.complete under /var/log/mha/app1; delete app1.failover.complete before another failover can take place

C. Testing revealed that with one master and two slaves (both slaves set candidate_master=1), after the old master failed over to the candidate, app1.failover.complete was deleted and the manager restarted, then stopping the new master did not trigger a switchover. (Workaround: remove the old master node1's block from /etc/mha/app1/app1.cnf, after which switchover works again.)

D. Stale ARP caches can make the VIP unreachable after a switchover; keepalived sends gratuitous ARP itself, so master.sh is not strictly needed with it

E. Semi-synchronous replication (Semi-Sync) offers the strongest data-safety guarantee

F. The purge_relay_logs script removes relay logs without blocking the SQL thread; schedule it on every slave with a cron job:

  0 5 * * * root /usr/bin/purge_relay_logs --user=root --password=geekwolf --disable_relay_log_purge >> /var/log/mha/purge_relay_logs.log 2>&1
 
VII. Problems Encountered During Deployment

Problem 1:
[root@node1 mha4mysql-node-0.56]# perl Makefile.PL

Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/Install/Makefile.pm line 4.

BEGIN failed--compilation aborted at inc/Module/Install/Makefile.pm line 4.
Compilation failed in require at inc/Module/Install.pm line 283.

Can't locate ExtUtils/MakeMaker.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/Install/Can.pm line 6.

BEGIN failed--compilation aborted at inc/Module/Install/Can.pm line 6.

Compilation failed in require at inc/Module/Install.pm line 283.

Can't locate ExtUtils/MM_Unix.pm in @INC (@INC contains: inc /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at inc/Module/Install/Metadata.pm line 349.

Solution:

yum -y install perl-CPAN perl-devel perl-DBD-MySQL

Problem 2:

Can't locate Time/HiRes.pm in @INC (@INC contains: /usr/local/lib64/perl5 /usr/local/share/perl5 /usr/lib64/perl5/vendor_perl /usr/share/perl5/vendor_perl /usr/lib64/perl5 /usr/share/perl5 .) at /usr/local/share/perl5/MHA/SSHCheck.pm line 28.

BEGIN failed--compilation aborted at /usr/local/share/perl5/MHA/SSHCheck.pm line 28.

Compilation failed in require at /usr/local/bin/masterha_check_ssh line 25.
BEGIN failed--compilation aborted at /usr/local/bin/masterha_check_ssh line 25.

Solution:

yum -y install perl-Time-HiRes


MHA 高可用部署及测试,MySQL 高可用解决方案

目录

一.MHA 特性
二.MHA 工作机制
三.MHA 适用的主从架构
四.MHA 高可用环境的构建
4.1 实验环境
4.2 实验大概步骤
4.3 脚本相关说明
4.4 MHA 部署过程
4.5 配置 VIP 的方式
五.MHA 常用命令
六. 注意事项
七. 部署过程遇到的问题

一.MHA 特性

1. 主服务器的自动监控和故障转移

  MHA 监控复制架构的主服务器,一旦检测到主服务器故障,就会自动进行故障转移。即使有些从服务器没有收到最新的 relay log,MHA 自动从最新的从服务器上识别差异的 relay log 并把这些日志应用到其他从服务器上,因此所有的从服务器保持一致性了。MHA 通常在几秒内完成故障转移,9-12 秒可以检测出主服务器故障,7-10 秒内关闭故障的主服务器以避免脑裂,几秒中内应用差异的 relay log 到新的主服务器上,整个过程可以在 10-30s 内完成。还可以设置优先级指定其中的一台 slave 作为 master 的候选人。由于 MHA 在 slaves 之间修复一致性,因此可以将任何 slave 变成新的 master,而不会发生一致性的问题,从而导致复制失败。

2. 交互式主服务器故障转移

  可以只使用 MHA 的故障转移,而不用于监控主服务器,当主服务器故障时,人工调用 MHA 来进行故障故障。

3. 非交互式的主故障转移

  不监控主服务器,但自动实现故障转移。这种特征适用于已经使用其他软件来监控主服务器状态,比如 heartbeat 来检测主服务器故障和虚拟 IP 地址接管,可以使用 MHA 来实现故障转移和 slave 服务器晋级为 master 服务器。

4. 在线切换主服务器

  在许多情况下,需要将现有的主服务器迁移到另外一台服务器上。比如主服务器硬件故障,RAID 控制卡需要重建,将主服务器移到性能更好的服务器上等等。维护主服务器引起性能下降,导致停机时间至少无法写入数据。另外,阻塞或杀掉当前运行的会话会导致主主之间数据不一致的问题发生。MHA 提供快速切换和优雅的阻塞写入,这个切换过程只需要 0.5-2s 的时间,这段时间内数据是无法写入的。在很多情况下,0.5-2s 的阻塞写入是可以接受的。因此切换主服务器不需要计划分配维护时间窗口(呵呵,不需要你在夜黑风高时通宵达旦完成切换主服务器的任务)。

 
二.MHA 工作机制

MHA 自动 Failover 过程解析

http://www.mysqlsystems.com/2012/03/figure-out-process-of-autofailover-on-mha.html

https://code.google.com/p/mysql-master-ha/wiki/Sequences_of_MHA
 
三.MHA 适用的主从架构

https://code.google.com/p/mysql-master-ha/wiki/UseCases
 
四.MHA 高可用环境的构建

 
4.1 实验环境

•Node1:192.168.10.216 (主)
•Node2:192.168.10.217 (从, 主故障切换的备主)
•Node3:192.168.10.218 (从, 兼 MHA 管理节点)
•VIP : 192.168.10.219
•Mysql:Percona-Server-5.6.16-rel64.2-569
•以上节点系统均为 CentOS6.5 x64
 
4.2 实验大概步骤

A. 三节点配置 epel 的 yum 源,安装相关依赖包

B. 建立主从复制关系

C. ssh-keygen 实现三台机器之间相互免密钥登录

D. 三节点安装 mha4mysql-node-0.56,node3 上安装 mha4mysql-manager-0.56

E. 在 node3 上管理 MHA 配置文件

F. masterha_check_ssh 验证 ssh 信任登录是否成功,masterha_check_repl 验证 mysql 复制是否成功

G. 启动 MHA manager,并监控日志文件

H. 测试 master(Node1)的 mysql 宕掉后,是否会自动切换正常

I . 配置 VIP,切换后从自动接管主服务,并对客户端透明

 
4.3 脚本相关说明

MHA node 有三个脚本,依赖 perl 模块

save_binary_logs:保存和拷贝宕掉的主服务器二进制日志

apply_diff_relay_logs: 识别差异的 relay log 事件,并应用到所有从服务器节点
purge_relay_logs: 清除 relay log 日志文件

 
4.4 MHA 部署过程

A. 三节点配置 epel 的 yum 源,安装相关依赖包

 
rpm -Uvh http://dl.Fedoraproject.org/pub/epel/6/x86_64/epel-release-6-8.noarch.rpm
rpm –import /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-6
yum  -y install perl-DBD-MySQL  ncftp
 

B. 建立主从复制关系

在 node1 上:

 
mysql>grant replication slave  on *.* to ‘rep’@’192.168.10.%’ identified by ‘geekwolf’;
mysql>grant all on *.* to ‘root’@’192.168.10.%’ identified by ‘geekwolf’;
mysql>show master status;
 

拷贝 node1 的 data 目录同步到 node2,node3
在 node2 node3 上:

 
mysql>change master  to  master_host=’192.168.10.216′, master_user=’rep’, master_password=’geekwolf’,master_port=3306, master_log_file=’mysql-in.000006′,master_log_pos=120,master_connect_retry=1;
mysql>start slave;
 

每个节点都做好 mysql 命令的软链

ln -s /usr/local/mysql/bin/* /usr/local/bin/

C. ssh-keygen 实现三台机器之间相互免密钥登录
在 node1(在其他两个节点一同)执行

 
ssh-keygen -t rsa
ssh-copy-id -i /root/.ssh/id_rsa.pub root@node1
ssh-copy-id -i /root/.ssh/id_rsa.pub root@node2
ssh-copy-id -i /root/.ssh/id_rsa.pub root@node3
 

D. 三节点安装 mha4mysql-node-0.56,node3 上安装 mha4mysql-manager-0.56

在 node1 node2 node3 安装 mha4mysql-node

wget https://googledrive.com/host/0B1lu97m8-haWeHdGWXp0YVVUSlk/mha4mysql-node-0.56.tar.gz

tar xf mha4mysql-node-0.56.tar.gz

cd mha4mysql-node

perl Makefile.PL

make && make install

在 node3 上安装 mha4mysql-manager

wget https://googledrive.com/host/0B1lu97m8-haWeHdGWXp0YVVUSlk/mha4mysql-manager-0.56.tar.gz

tar xf mha4mysql-manager-0.56.tar.gz

cd mha4mysql-manager-0.56

yum install perl-DBD-MySQL perl-Config-Tiny perl-Log-Dispatch perl-Parallel-ForkManager perl-Config-IniFiles perl-Time-HiRes

E. 在 node3 上管理 MHA 配置文件

mkdir -p /etc/mha/{app1,scripts}

cp mha4mysql-manager-0.56/samples/conf/ /etc/mha/

cp mha4mysql-manager-0.56/samples/scripts/ /etc/mha/scripts/

mv /etc/mha/app1.cnf /etc/mha/app1/

mv /etc/mha/masterha_default.cnf /etc/masterha_default.cnf

设置全局配置:

vim /etc/mha/masterha_default.cnf

 
[server default]
user=root
password=geekwolf
ssh_user=root
repl_user=rep
repl_password=geekwolf
ping_interval=1
#shutdown_script=””
secondary_check_script = masterha_secondary_check -s node1 -s node2 -s node3 –user=root –master_host=node1 –master_ip=192.168.10.216 –master_port=3306
#master_ip_failover_script=”/etc/mha/scripts/master_ip_failover”
#master_ip_online_change_script=”/etc/mha/scripts/master_ip_online_change”
# shutdown_script= /script/masterha/power_manager
#report_script=””
 

vim /etc/mha/app1/app1.cnf

 
[server default]
manager_workdir=/var/log/mha/app1
manager_log=/var/log/mha/app1/manager.log
[server1]
hostname=node1
master_binlog_dir=”/usr/local/mysql/logs”
candidate_master=1
[server2]
hostname=node2
master_binlog_dir=”/usr/local/mysql/logs”
candidate_master=1
[server3]
hostname=node3
master_binlog_dir=”/usr/local/mysql/logs”
no_master=1
 

注释:

  candidate_master=1 表示该主机优先可被选为 new master,当多个 [serverX] 等设置此参数时,优先级由 [serverX] 配置的顺序决定

  secondary_check_script mha 强烈建议有两个或多个网络线路检查 MySQL 主服务器的可用性。默认情况下, 只有单一的路线 MHA Manager 检查: 从 Manager to Master, 但这是不可取的。MHA 实际上可以有两个或两个以上的检查路线通过调用外部脚本定义二次检查脚本参数

  master_ip_failover_script 在 MySQL 从服务器提升为新的主服务器时,调用此脚本,因此可以将 vip 信息写到此配置文件

  master_ip_online_change_script 使用 masterha_master_switch 命令手动切换 MySQL 主服务器时后会调用此脚本,参数和 master_ip_failover_script 类似,脚本可以互用
  shutdown_script 此脚本 (默认 samples 内的脚本) 利用服务器的远程控制 IDRAC 等,使用 ipmitool 强制去关机,以避免 fence 设备重启主服务器,造成脑列现象

  report_script 当新主服务器切换完成以后通过此脚本发送邮件报告,可参考使用 http://caspian.dotconf.net/menu/Software/SendEmail/sendEmail-v1.56.tar.gz

  以上涉及到的脚本可以从 mha4mysql-manager-0.56/samples/scripts/* 拷贝进行修改使用

  其他 manager 详细配置参数 https://code.google.com/p/mysql-master-ha/wiki/Parameters

F. masterha_check_ssh 验证 ssh 信任登录是否成功,masterha_check_repl 验证 mysql 复制是否成功

验证 ssh 信任:masterha_check_ssh –conf=/etc/mha/app1/app1.cnf

 
[root@localhost ~]# masterha_check_ssh –conf=/etc/mha/app1/app1.cnf
Tue May 13 07:53:15 2014 – [warning] Global configuration file /etc/masterha_default.cnf not found. Skipping.
Tue May 13 07:53:15 2014 – [info] Reading application default configuration from /etc/mha/app1/app1.cnf..
Tue May 13 07:53:15 2014 – [info] Reading server configuration from /etc/mha/app1/app1.cnf..
Tue May 13 07:53:15 2014 – [info] Starting SSH connection tests..
Tue May 13 07:53:16 2014 – [debug]
Tue May 13 07:53:15 2014 – [debug] Connecting via SSH from root@node1(192.168.10.216:22) to root@node2(192.168.10.217:22)..
Tue May 13 07:53:15 2014 – [debug] ok.
Tue May 13 07:53:15 2014 – [debug] Connecting via SSH from root@node1(192.168.10.216:22) to root@node3(192.168.10.218:22)..
Tue May 13 07:53:16 2014 – [debug] ok.
Tue May 13 07:53:16 2014 – [debug]
Tue May 13 07:53:16 2014 – [debug] Connecting via SSH from root@node2(192.168.10.217:22) to root@node1(192.168.10.216:22)..
Tue May 13 07:53:16 2014 – [debug] ok.
Tue May 13 07:53:16 2014 – [debug] Connecting via SSH from root@node2(192.168.10.217:22) to root@node3(192.168.10.218:22)..
Tue May 13 07:53:16 2014 – [debug] ok.
Tue May 13 07:53:17 2014 – [debug]
Tue May 13 07:53:16 2014 – [debug] Connecting via SSH from root@node3(192.168.10.218:22) to root@node1(192.168.10.216:22)..
Tue May 13 07:53:16 2014 – [debug] ok.
Tue May 13 07:53:16 2014 – [debug] Connecting via SSH from root@node3(192.168.10.218:22) to root@node2(192.168.10.217:22)..
Tue May 13 07:53:17 2014 – [debug] ok.
Tue May 13 07:53:17 2014 – [info] All SSH connection tests passed successfully.
 

Verify replication: masterha_check_repl --conf=/etc/mha/app1/app1.cnf

 
[root@localhost mha]# masterha_check_repl --conf=/etc/mha/app1/app1.cnf
Tue May 13 08:10:54 2014 – [info] Reading default configuration from /etc/masterha_default.cnf..
Tue May 13 08:10:54 2014 – [info] Reading application default configuration from /etc/mha/app1/app1.cnf..
Tue May 13 08:10:54 2014 – [info] Reading server configuration from /etc/mha/app1/app1.cnf..
Tue May 13 08:10:54 2014 – [info] MHA::MasterMonitor version 0.56.
Tue May 13 08:10:54 2014 – [info] GTID failover mode = 0
Tue May 13 08:10:54 2014 – [info] Dead Servers:
Tue May 13 08:10:54 2014 – [info] Alive Servers:
Tue May 13 08:10:54 2014 – [info] node1(192.168.10.216:3306)
Tue May 13 08:10:54 2014 – [info] node2(192.168.10.217:3306)
Tue May 13 08:10:54 2014 – [info] node3(192.168.10.218:3306)
Tue May 13 08:10:54 2014 – [info] Alive Slaves:
Tue May 13 08:10:54 2014 – [info] node2(192.168.10.217:3306) Version=5.6.16-64.2-rel64.2-log (oldest major version between slaves) log-bin:enabled
Tue May 13 08:10:54 2014 – [info] Replicating from 192.168.10.216(192.168.10.216:3306)
Tue May 13 08:10:54 2014 – [info] Primary candidate for the new Master (candidate_master is set)
Tue May 13 08:10:54 2014 – [info] node3(192.168.10.218:3306) Version=5.6.16-64.2-rel64.2-log (oldest major version between slaves) log-bin:enabled
Tue May 13 08:10:54 2014 – [info] Replicating from 192.168.10.216(192.168.10.216:3306)
Tue May 13 08:10:54 2014 – [info] Not candidate for the new Master (no_master is set)
Tue May 13 08:10:54 2014 – [info] Current Alive Master: node1(192.168.10.216:3306)
Tue May 13 08:10:54 2014 – [info] Checking slave configurations..
Tue May 13 08:10:54 2014 – [info] read_only=1 is not set on slave node2(192.168.10.217:3306).
Tue May 13 08:10:54 2014 – [warning] relay_log_purge=0 is not set on slave node2(192.168.10.217:3306).
Tue May 13 08:10:54 2014 – [info] read_only=1 is not set on slave node3(192.168.10.218:3306).
Tue May 13 08:10:54 2014 – [warning] relay_log_purge=0 is not set on slave node3(192.168.10.218:3306).
Tue May 13 08:10:54 2014 – [info] Checking replication filtering settings..
Tue May 13 08:10:54 2014 – [info] binlog_do_db= , binlog_ignore_db=
Tue May 13 08:10:54 2014 – [info] Replication filtering check ok.
Tue May 13 08:10:54 2014 – [info] GTID (with auto-pos) is not supported
Tue May 13 08:10:54 2014 – [info] Starting SSH connection tests..
Tue May 13 08:10:55 2014 – [info] All SSH connection tests passed successfully.
Tue May 13 08:10:55 2014 – [info] Checking MHA Node version..
Tue May 13 08:10:55 2014 – [info] Version check ok.
Tue May 13 08:10:55 2014 – [info] Checking SSH publickey authentication settings on the current master..
Tue May 13 08:10:56 2014 – [info] HealthCheck: SSH to node1 is reachable.
Tue May 13 08:10:56 2014 – [info] Master MHA Node version is 0.56.
Tue May 13 08:10:56 2014 – [info] Checking recovery script configurations on node1(192.168.10.216:3306)..
Tue May 13 08:10:56 2014 – [info] Executing command: save_binary_logs --command=test --start_pos=4 --binlog_dir=/usr/local/mysql/logs --output_file=/var/tmp/save_binary_logs_test --manager_version=0.56 --start_file=mysql-bin.000009
Tue May 13 08:10:56 2014 – [info] Connecting to root@192.168.10.216(node1:22)..
  Creating /var/tmp if not exists.. ok.
  Checking output directory is accessible or not..
  ok.
  Binlog found at /usr/local/mysql/logs, up to mysql-bin.000009
Tue May 13 08:10:56 2014 – [info] Binlog setting check done.
Tue May 13 08:10:56 2014 – [info] Checking SSH publickey authentication and checking recovery script configurations on all alive slave servers..
Tue May 13 08:10:56 2014 – [info] Executing command : apply_diff_relay_logs --command=test --slave_user='root' --slave_host=node2 --slave_ip=192.168.10.217 --slave_port=3306 --workdir=/var/tmp --target_version=5.6.16-64.2-rel64.2-log --manager_version=0.56 --relay_log_info=/usr/local/mysql/data/relay-log.info --relay_dir=/usr/local/mysql/data/ --slave_pass=xxx
Tue May 13 08:10:56 2014 – [info] Connecting to root@192.168.10.217(node2:22)..
  Checking slave recovery environment settings..
    Opening /usr/local/mysql/data/relay-log.info … ok.
    Relay log found at /usr/local/mysql/logs, up to relay-bin.000006
    Temporary relay log file is /usr/local/mysql/logs/relay-bin.000006
    Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Tue May 13 08:10:57 2014 – [info] Executing command : apply_diff_relay_logs --command=test --slave_user='root' --slave_host=node3 --slave_ip=192.168.10.218 --slave_port=3306 --workdir=/var/tmp --target_version=5.6.16-64.2-rel64.2-log --manager_version=0.56 --relay_log_info=/usr/local/mysql/data/relay-log.info --relay_dir=/usr/local/mysql/data/ --slave_pass=xxx
Tue May 13 08:10:57 2014 – [info] Connecting to root@192.168.10.218(node3:22)..
  Checking slave recovery environment settings..
    Opening /usr/local/mysql/data/relay-log.info … ok.
    Relay log found at /usr/local/mysql/logs, up to relay-bin.000006
    Temporary relay log file is /usr/local/mysql/logs/relay-bin.000006
    Testing mysql connection and privileges..Warning: Using a password on the command line interface can be insecure.
 done.
    Testing mysqlbinlog output.. done.
    Cleaning up test file(s).. done.
Tue May 13 08:10:57 2014 – [info] Slaves settings check done.
Tue May 13 08:10:57 2014 – [info]
node1(192.168.10.216:3306) (current master)
 +--node2(192.168.10.217:3306)
 +--node3(192.168.10.218:3306)
Tue May 13 08:10:57 2014 – [info] Checking replication health on node2..
Tue May 13 08:10:57 2014 – [info] ok.
Tue May 13 08:10:57 2014 – [info] Checking replication health on node3..
Tue May 13 08:10:57 2014 – [info] ok.
Tue May 13 08:10:57 2014 – [warning] master_ip_failover_script is not defined.
Tue May 13 08:10:57 2014 – [warning] shutdown_script is not defined.
Tue May 13 08:10:57 2014 – [info] Got exit code 0 (Not master dead).
MySQL Replication Health is OK.
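The read_only and relay_log_purge warnings above can be addressed on the two slaves. A minimal my.cnf sketch (MHA wants relay_log_purge=0 so relay logs remain on disk for its diff-based recovery; purge them periodically with the bundled purge_relay_logs tool rather than letting the SQL thread delete them):

```ini
[mysqld]
# advisory read-only flag on slaves (does not restrict users with SUPER)
read_only = 1
# keep relay logs available for MHA differential recovery
relay_log_purge = 0
```

Both settings can also be changed at runtime with SET GLOBAL read_only=1; and SET GLOBAL relay_log_purge=0;.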
 

G. Start the MHA manager and monitor its log file

Run killall mysqld on node1 while starting the manager service on node3:

[root@localhost mha]# masterha_manager --conf=/etc/mha/app1/app1.cnf
Tue May 13 08:19:01 2014 – [info] Reading default configuration from /etc/masterha_default.cnf..
Tue May 13 08:19:01 2014 – [info] Reading application default configuration from /etc/mha/app1/app1.cnf..
Tue May 13 08:19:01 2014 – [info] Reading server configuration from /etc/mha/app1/app1.cnf..
  Creating /var/tmp if not exists.. ok.
  Checking output directory is accessible or not..
  ok.
  Binlog found at /usr/local/mysql/logs, up to mysql-bin.000009
Tue May 13 08:19:18 2014 – [info] Reading default configuration from /etc/masterha_default.cnf..
Tue May 13 08:19:18 2014 – [info] Reading application default configuration from /etc/mha/app1/app1.cnf..
Tue May 13 08:19:18 2014 – [info] Reading server configuration from /etc/mha/app1/app1.cnf..
 

  Afterwards, watching the log /var/log/mha/app1/manager.log on node3 shows node1 in a dead state: the master is automatically switched to node2, replication on node3 is repointed to node2, and once a failover completes the file /var/log/mha/app1/app1.failover.complete is created;
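For longer-running setups the manager is usually started detached rather than in the foreground as above. A sketch of common usage (the flags shown are standard masterha_manager options; note that the manager exits after completing a failover, and will refuse to start another failover soon after a previous one unless app1.failover.complete is removed or --ignore_last_failover is given):

```shell
# start the manager in the background, appending to the configured log
nohup masterha_manager --conf=/etc/mha/app1/app1.cnf \
  --remove_dead_master_conf --ignore_last_failover \
  < /dev/null >> /var/log/mha/app1/manager.log 2>&1 &

# verify the manager is running and which master it is monitoring
masterha_check_status --conf=/etc/mha/app1/app1.cnf

# stop the manager cleanly
masterha_stop --conf=/etc/mha/app1/app1.cnf
```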


Copyright: original article by 星锅, published 2022-01-20 (35,029 characters). Unless otherwise noted, articles on this site are released under CC-4.0; please credit the source when reprinting.