PXC three-node installation:
node1:10.157.26.132
node2:10.157.26.133
node3:10.157.26.134
Configure passwordless SSH login between the servers
Use ssh-keygen to set up mutual key-based (passwordless) login among the three hosts, and make sure they can ping each other.
1) Run on every host:
# ssh-keygen -t rsa
2) Append the public key (id_rsa.pub) of every host to /root/.ssh/authorized_keys on one host, then copy that authorized_keys file to all hosts:
cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
ssh 10.157.26.133 cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
ssh 10.157.26.134 cat /root/.ssh/id_rsa.pub >> /root/.ssh/authorized_keys
scp /root/.ssh/authorized_keys 10.157.26.133:/root/.ssh/authorized_keys
scp /root/.ssh/authorized_keys 10.157.26.134:/root/.ssh/authorized_keys
Test: ssh 10.157.26.133 / ssh 10.157.26.134
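A quick way to confirm that key-based login works from each host to the other two (a simple check looping over the three IPs above; run it on every host):
for ip in 10.157.26.132 10.157.26.133 10.157.26.134; do
  ssh -o BatchMode=yes $ip hostname   # BatchMode makes ssh fail instead of prompting if key login is not set up
done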
Install the dependency packages:
yum install -y git scons gcc gcc-c++ openssl check cmake bison boost-devel asio-devel libaio-devel ncurses-devel readline-devel pam-devel socat
If socat cannot be installed with yum, build it from source:
wget http://www.dest-unreach.org/socat/download/socat-1.7.3.2.tar.gz
tar zxvf socat-1.7.3.2.tar.gz
cd socat-1.7.3.2
./configure
make && make install
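After the build, a quick sanity check that socat landed on the PATH:
socat -V | head -n 2   # shows the installed socat version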
1. Extract the binary tarball, create the mysql account, and create a symlink [run on all three nodes]:
wget https://www.percona.com/downloads/Percona-XtraDB-Cluster-LATEST/Percona-XtraDB-Cluster-5.7.17-29.20/binary/tarball/Percona-XtraDB-Cluster-5.7.17-rel13-29.20.3.Linux.x86_64.ssl101.tar.gz
mkdir /opt/mysql
cd /opt/mysql
tar zxvf /data/src/Percona-XtraDB-Cluster-5.7.17-rel13-29.20.3.Linux.x86_64.ssl101.tar.gz
cd /usr/local
ln -s /opt/mysql/Percona-XtraDB-Cluster-5.7.17-rel13-29.20.3.Linux.x86_64.ssl101/ mysql
groupadd mysql
useradd -M -g mysql -s /sbin/nologin -d /usr/local/mysql mysql
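Optionally (not part of the original steps), add the MySQL binaries to PATH on each node so the full /usr/local/mysql/bin/ prefix does not have to be typed every time:
echo 'export PATH=$PATH:/usr/local/mysql/bin' >> /etc/profile
source /etc/profile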
2. Create the directories and set ownership [run on all three nodes]:
mkdir -p /data/mysql/mysql_3306/{data,logs,tmp}
mkdir -p /data/mysql/mysql_3306/logs/binlog
chown -R mysql:mysql /data/mysql/
chown -R mysql:mysql /usr/local/mysql
3. Configuration file my.cnf
Config file on 132:
default_storage_engine=Innodb
#pxc
wsrep_provider = /usr/local/mysql/lib/libgalera_smm.so # path to the Galera library
wsrep_cluster_address = gcomm://10.157.26.132,10.157.26.133,10.157.26.134 # IPs of all nodes in the cluster
wsrep_node_name = node132 # name of this node
wsrep_node_address = 10.157.26.132 # IP of this node
wsrep_cluster_name = pxc_sampson # cluster name
wsrep_sst_auth = sst:sampson # user name and password used for SST
wsrep_sst_method = xtrabackup-v2 # how data is copied to joining nodes; mysqldump and rsync are also supported
wsrep_slave_threads = 2 # number of applier threads; CPU cores * 2 is recommended to keep apply_cb from falling behind
pxc_strict_mode = ENFORCING # PXC strict mode; DISABLED, PERMISSIVE and MASTER are also available
innodb_autoinc_lock_mode = 2 # auto-increment lock optimization
wsrep_provider_options = "debug=1;gcache.size=1G" # enable provider debug output and set the gcache size
Config file on 133:
default_storage_engine=Innodb
#pxc
wsrep_provider = /usr/local/mysql/lib/libgalera_smm.so
wsrep_cluster_address = gcomm://10.157.26.132,10.157.26.133,10.157.26.134
wsrep_node_name = node133
wsrep_node_address = 10.157.26.133
wsrep_cluster_name = pxc_sampson
wsrep_sst_auth = sst:sampson
wsrep_sst_method = rsync
wsrep_slave_threads = 2
pxc_strict_mode = ENFORCING
innodb_autoinc_lock_mode = 2
wsrep_provider_options = "debug=1;gcache.size=1G"
Config file on 134:
default_storage_engine=Innodb
#pxc
wsrep_provider = /usr/local/mysql/lib/libgalera_smm.so
wsrep_cluster_address = gcomm://10.157.26.132,10.157.26.133,10.157.26.134
wsrep_node_name = node134
wsrep_node_address = 10.157.26.134
wsrep_cluster_name = pxc_sampson
wsrep_sst_auth = sst:sampson
wsrep_sst_method = rsync
wsrep_slave_threads = 2
pxc_strict_mode = ENFORCING
innodb_autoinc_lock_mode = 2
wsrep_provider_options = "debug=1;gcache.size=1G"
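Only the PXC-related settings are listed above; they are assumed to sit in the [mysqld] section of the instance config file (/data/mysql/mysql_3306/my_3306.cnf, the path used by the start commands below) together with the usual basics. A minimal sketch based on the directories created earlier (server_id, which must differ per node, and the binlog file name are assumptions; binlog_format = ROW is required by Galera):
[mysqld]
user = mysql
basedir = /usr/local/mysql
datadir = /data/mysql/mysql_3306/data
tmpdir = /data/mysql/mysql_3306/tmp
socket = /tmp/mysql_3306.sock
log_error = /data/mysql/mysql_3306/logs/error.log
log_bin = /data/mysql/mysql_3306/logs/binlog/mysql-bin
binlog_format = ROW
server_id = 132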
[Note: wsrep_sst_method was originally set to xtrabackup-v2 on all nodes, but adding the second node failed with: WSREP_SST: [ERROR] Error while getting data from donor node: exit codes: 137 0. After switching to rsync the problem went away; whether this was caused by the xtrabackup version remains to be verified. I was using 2.4.7.]
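Also note that with a binary tarball install, the data directory of node1 has to be initialized before its very first start; this initialization run is what generates the temporary root password that appears in error.log below. A minimal sketch, assuming the config file path used in step 4 (wsrep is disabled on the command line so the initialization stays purely local):
/usr/local/mysql/bin/mysqld --defaults-file=/data/mysql/mysql_3306/my_3306.cnf --initialize --wsrep-provider=none --user=mysql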
4. Start node 1 [run on 132]:
/usr/local/mysql/bin/mysqld --defaults-file=/data/mysql/mysql_3306/my_3306.cnf --wsrep-new-cluster
[Note: when node1 starts it first tries to join an existing cluster, but at this point no cluster exists yet, so PXC has to be bootstrapped from scratch. That is why node1 must be started with --wsrep-new-cluster, which creates a new cluster. Once node1 is up, the other nodes can be started the normal way and will connect to the primary node automatically.]
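One follow-up worth noting: --wsrep-new-cluster is only for bootstrapping a brand-new cluster. For later restarts of node1, once the cluster already exists, start it the same way as the other nodes, for example:
/usr/local/mysql/bin/mysqld --defaults-file=/data/mysql/mysql_3306/my_3306.cnf &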
If you see the following in error.log:
[Note] WSREP: Shifting JOINED -> SYNCED (TO: 7)
[Note] WSREP: Waiting for SST/IST to complete.
[Note] WSREP: New cluster view: global state: f71affa6-2b55-11e7-b8db-6afbe908670d:7, view# 1: Primary, number of nodes: 1, my index: 0, protocol version 3
then the node started successfully. Log in with: mysql -S /tmp/mysql_3306.sock -p
The temporary password for node 1 is in error.log:
[root@dpstcmsweb00 ~]# cat /data/mysql/mysql_3306/logs/error.log |grep password
2017-05-09T02:46:25.724852Z 1 [Note] A temporary password is generated for root@localhost: worQi;aYF9eQ
After logging in, change the root password: mysql> set password=password('mysql');
Create accounts on the primary node:
grant usage on *.* to 'pxc-monitor'@'%' identified by 'pxc-monitor';
grant all privileges on *.* to 'sst'@'%' identified by 'sampson';
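To double-check that this account matches wsrep_sst_auth = sst:sampson in the config, the grants can be verified in the same session:
show grants for 'sst'@'%';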
5. Start the remaining two nodes [run on 133/134]:
/usr/local/mysql/bin/mysqld --defaults-file=/data/mysql/mysql_3306/my_3306.cnf &
Check the corresponding error.log; if you can see:
[Note] WSREP: Shifting JOINER -> JOINED (TO: 7)
[Note] WSREP: Member 1.0 (node3307) synced with group.
[Note] WSREP: Shifting JOINED -> SYNCED (TO: 7)
[Note] WSREP: Synchronized with group, ready for connections
then the node has started and joined the cluster successfully.
Once the extra nodes are up, simply log in with the account and password set on node 1, in this case: mysql -uroot -pmysql
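As a quick replication smoke test (pxc_test is just an example name), anything created on one node should show up on the others almost immediately:
# on 132
mysql -uroot -pmysql -S /tmp/mysql_3306.sock -e "create database pxc_test;"
# on 133 or 134
mysql -uroot -pmysql -S /tmp/mysql_3306.sock -e "show databases like 'pxc_test';"
# clean up
mysql -uroot -pmysql -S /tmp/mysql_3306.sock -e "drop database pxc_test;"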
6. Check the number of nodes in the cluster:
root@localhost:mysql_3306.sock [(none)]>show global status like 'wsrep_cluster_size';
+--------------------+-------+
| Variable_name      | Value |
+--------------------+-------+
| wsrep_cluster_size | 3     |
+--------------------+-------+
1 row in set (0.00 sec)
7. Check the cluster status:
root@localhost:mysql_3306.sock [(none)]>show global status like 'wsrep%';
+------------------------------+-----------------------------------------------------------+
| Variable_name                | Value                                                     |
+------------------------------+-----------------------------------------------------------+
| wsrep_local_state_uuid | bed19806-3465-11e7-85af-731d83552ec6 |
| wsrep_protocol_version | 7 |
| wsrep_last_committed | 4 |
| wsrep_replicated | 3 |
| wsrep_replicated_bytes | 732 |
| wsrep_repl_keys | 3 |
| wsrep_repl_keys_bytes | 93 |
| wsrep_repl_data_bytes | 447 |
| wsrep_repl_other_bytes | 0 |
| wsrep_received | 36 |
| wsrep_received_bytes | 3494 |
| wsrep_local_commits | 0 |
| wsrep_local_cert_failures | 0 |
| wsrep_local_replays | 0 |
| wsrep_local_send_queue | 0 |
| wsrep_local_send_queue_max | 1 |
| wsrep_local_send_queue_min | 0 |
| wsrep_local_send_queue_avg | 0.000000 |
| wsrep_local_recv_queue | 0 |
| wsrep_local_recv_queue_max | 2 |
| wsrep_local_recv_queue_min | 0 |
| wsrep_local_recv_queue_avg | 0.027778 |
| wsrep_local_cached_downto | 2 |
| wsrep_flow_control_paused_ns | 0 |
| wsrep_flow_control_paused | 0.000000 |
| wsrep_flow_control_sent | 0 |
| wsrep_flow_control_recv | 0 |
| wsrep_flow_control_interval | [173, 173] |
| wsrep_flow_control_status | OFF |
| wsrep_cert_deps_distance | 1.000000 |
| wsrep_apply_oooe | 0.000000 |
| wsrep_apply_oool | 0.000000 |
| wsrep_apply_window | 1.000000 |
| wsrep_commit_oooe | 0.000000 |
| wsrep_commit_oool | 0.000000 |
| wsrep_commit_window | 1.000000 |
| wsrep_local_state | 4 |
| wsrep_local_state_comment | Synced |
| wsrep_cert_index_size | 1 |
| wsrep_cert_bucket_count | 22 |
| wsrep_gcache_pool_size | 2932 |
| wsrep_causal_reads | 0 |
| wsrep_cert_interval | 0.000000 |
| wsrep_ist_receive_status | |
| wsrep_incoming_addresses | 10.157.26.134:3306,10.157.26.132:3306,10.157.26.133:3306 |
| wsrep_desync_count | 0 |
| wsrep_evs_delayed | |
| wsrep_evs_evict_list | |
| wsrep_evs_repl_latency | 0/0/0/0/0 |
| wsrep_evs_state | OPERATIONAL |
| wsrep_gcomm_uuid | 5665b42e-3467-11e7-94ff-9e77d0294b5e |
| wsrep_cluster_conf_id | 17 |
| wsrep_cluster_size | 3 |
| wsrep_cluster_state_uuid | bed19806-3465-11e7-85af-731d83552ec6 |
| wsrep_cluster_status | Primary |
| wsrep_connected | ON |
| wsrep_local_bf_aborts | 0 |
| wsrep_local_index | 1 |
| wsrep_provider_name | Galera |
| wsrep_provider_vendor | Codership Oy <info@codership.com> |
| wsrep_provider_version | 3.20(r) |
| wsrep_ready | ON |
+------------------------------+-----------------------------------------------------------+
62 rows in set (0.00 sec)
Notes:
Galera node states (these show up in wsrep_local_state_comment; wsrep_cluster_status itself only takes the values Primary / non-Primary / Disconnected; see the query example after the list):
1. OPEN: the node has started and is trying to connect to the cluster; if that fails it either exits or creates a new cluster, depending on the configuration
2. PRIMARY: the node is part of the primary component and is trying to select a donor for data synchronization
3. JOINER: the node is waiting for / receiving the data transfer, and loads the data locally once the transfer completes
4. JOINED: the node has finished the data transfer and is catching up with the rest of the cluster
5. SYNCED: the node is serving normally: reads and writes, cluster replication, and SST requests from newly joining nodes
6. DONOR: the node is preparing or transferring a full copy of the cluster data to a new node and is not available to clients.
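To see which state a node is currently in, the node state and the cluster status can be queried together on any node (again using the socket path from above):
mysql -uroot -pmysql -S /tmp/mysql_3306.sock -e "show global status where Variable_name in ('wsrep_local_state_comment','wsrep_cluster_status');"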
Permanent link to this article: http://www.linuxidc.com/Linux/2017-05/143930.htm