Building a Redis cluster of three master-slave pairs on three hosts: environment preparation
host1: 192.168.1.9:6379
       192.168.1.9:6380
host2: 192.168.1.106:6379
       192.168.1.106:6380
host3: 192.168.1.110:6379
       192.168.1.110:6380
Notes:
(1) When building the Redis cluster, every Redis instance must first be emptied of all key-value data; the nodes must hold no data at all (a flush sketch follows below).
(2) Every Redis node should use the same hardware configuration and the same password.
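To satisfy note (1), each of the six instances can be emptied before clustering; a minimal sketch, assuming the password 123456 used later in this walkthrough:
[root@localhost ~]# redis-cli -h 192.168.1.9 -p 6379 -a 123456 FLUSHALL # delete every key on the instance
[root@localhost ~]# redis-cli -h 192.168.1.9 -p 6379 -a 123456 CLUSTER RESET # clear any stale cluster state (only succeeds on an empty node)
Repeat for the other five addresses listed above.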
1. Open ports 6379 and 6380 on each of the three hosts; each instance needs its own configuration file. host1 is used as the example.
1) Configure Redis to listen on port 6379
[root@localhost ~]# vim /app/redis/etc/redis.conf
……
bind 192.168.1.9 # bind to the host IP
……
port 6379 # listen on port 6379
……
cluster-enabled yes # enable Redis cluster mode
……
cluster-config-file nodes-6379.conf # file the node generates automatically to record cluster state
……
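The elided lines (……) are the rest of the stock configuration. Beyond the four directives shown, this walkthrough implicitly relies on a few more settings; the following values are assumptions for illustration, not the author's exact file:
daemonize yes # run in the background, so two instances can be started from one shell
requirepass 123456 # client password, identical on every node (see note (2))
masterauth 123456 # password for replicating from a master; setting it here would avoid the manual config set in step 12 below
cluster-node-timeout 15000 # milliseconds before a node is flagged as failed (the default)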
2) Configure Redis to listen on port 6380
[root@localhost ~]# cp /app/redis/etc/redis.conf /app/redis/etc/redis.6380.conf
[root@localhost ~]# vim /app/redis/etc/redis.6380.conf
:%s/6379/6380/g # global substitution in vim command mode: replace every 6379 with 6380
After the substitution the key directives read:
……
bind 192.168.1.9 # bind to the host IP
……
port 6380 # listen on port 6380
……
cluster-enabled yes # enable Redis cluster mode
……
cluster-config-file nodes-6380.conf # cluster state file for the 6380 instance
……
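The copy-and-substitute step can also be done non-interactively; an equivalent one-liner with sed, using the same paths:
[root@localhost ~]# sed 's/6379/6380/g' /app/redis/etc/redis.conf > /app/redis/etc/redis.6380.conf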
2. Start the Redis services only after the configuration files on all three hosts are in place
1) Start both Redis instances at once (chaining with && assumes daemonize yes; otherwise the first redis-server blocks the shell and the second never starts)
[root@localhost ~]# redis-server /app/redis/etc/redis.6380.conf && redis-server /app/redis/etc/redis.conf
2) Check that ports 6379 and 6380 are in the LISTEN state
[root@localhost ~]# ss -tnlp
State Recv-Q Send-Q Local Address:Port Peer Address:Port
LISTEN 0 511 192.168.1.9:6379 *:*
users:(("redis-server",pid=7492,fd=6))
LISTEN 0 511 192.168.1.9:6380 *:*
users:(("redis-server",pid=7487,fd=6))
LISTEN 0 511 192.168.1.9:16379 *:*
users:(("redis-server",pid=7492,fd=8))
LISTEN 0 511 192.168.1.9:16380 *:*
users:(("redis-server",pid=7487,fd=8))
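Ports 16379 and 16380 are the cluster bus ports: a cluster node always listens on its client port plus 10000 for node-to-node traffic, so both port ranges must be reachable between the hosts. That cluster mode is active can also be confirmed per instance:
[root@localhost ~]# redis-cli -h 192.168.1.9 -p 6379 -a 123456 cluster info # expect cluster_enabled:1 (cluster_state stays fail until the cluster is created)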
3. Copy the cluster-creation tool redis-trib.rb to /usr/bin (it ships in the src/ directory of the Redis source tree). The command is not usable yet: Ruby must first be compiled and installed, together with the redis gem.
[root@localhost ruby-2.5.5]# cp redis-trib.rb /usr/bin
4. 安装编译 ruby 工具包时编译环境
yum ×××tall -y vim lrzsz tree screen psmisc lsof tcpdump wget ntpdate gcc gcc-c++ glibc glibc-devel
pcre pcre-devel openssl openssl-devel systemd-devel net-tools iotop bc zip unzip zlib-devel bash-completion
nfs-utils automake libxml2 libxml2-devel libxslt libxslt-devel perl perl-ExtUtils-Embed
5. Compile and install Ruby (the Ruby in the yum repositories is too old for the redis gem's minimum Ruby version)
[root@localhost ~]# cd /data/ruby/
[root@localhost ruby]# tar xf ruby-2.5.5.tar.gz
[root@localhost ruby]# cd ruby-2.5.5/
[root@localhost ruby-2.5.5]# ./configure && make -j 4 && make install
6. Install the rubygems package
[root@localhost ruby-2.5.5]# yum install -y rubygems
7. Create symlinks so the ruby and gem commands are on the PATH
[root@localhost ruby-2.5.5]# ln -sv /data/ruby/ruby-2.5.5/bin/gem /usr/bin/
[root@localhost ruby-2.5.5]# ln -sv /data/ruby/ruby-2.5.5/ruby /usr/bin/
8. Install the redis module with gem
[root@localhost ruby-2.5.5]# gem install redis -y
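A quick sanity check of the toolchain before continuing (exact version strings depend on the build):
[root@localhost ruby-2.5.5]# ruby -v # should report ruby 2.5.5
[root@localhost ruby-2.5.5]# gem list redis # the redis gem should be listed, e.g. redis (4.1.2)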
9. Set the password the redis gem uses to log in to the cluster (redis-trib.rb has no password option, so the gem's default is edited instead)
[root@localhost ~]# vim /usr/local/lib/ruby/gems/2.5.0/gems/redis-4.1.2/lib/redis/client.rb
# frozen_string_literal: true

require_relative "errors"
require "socket"
require "cgi"

class Redis
  class Client
    DEFAULTS = {
      :url => lambda { ENV["REDIS_URL"] },
      :scheme => "redis",
      :host => "127.0.0.1",
      :port => 6379,
      :path => nil,
      :timeout => 5.0,
      :password => "123456", # change the default password to 123456
      :db => 0,
      :driver => nil,
      ……
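For reference: on Redis 5 and later, redis-trib.rb is deprecated, its functionality is built into redis-cli, and the password can be passed directly, which makes this gem edit unnecessary. The equivalent create command would be:
[root@localhost ~]# redis-cli -a 123456 --cluster create 192.168.1.9:6379 192.168.1.9:6380 192.168.1.106:6379 \
192.168.1.106:6380 192.168.1.110:6379 192.168.1.110:6380 --cluster-replicas 1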
10. Create the cluster with redis-trib.rb; master and slave roles are assigned automatically
[root@localhost ~]# redis-trib.rb create --replicas 1 192.168.1.9:6379 192.168.1.9:6380 192.168.1.106:6379 \
192.168.1.106:6380 192.168.1.110:6379 192.168.1.110:6380
>>> Creating cluster
>>> Performing hash slots allocation on 6 nodes…
Using 3 masters:
192.168.1.9:6379
192.168.1.106:6379
192.168.1.110:6379
Adding replica 192.168.1.106:6380 to 192.168.1.9:6379
Adding replica 192.168.1.110:6380 to 192.168.1.106:6379
Adding replica 192.168.1.9:6380 to 192.168.1.110:6379
M: eed2e22136cbdca6770a46bbb2e137ab693dd16b 192.168.1.9:6379
slots:0-5460 (5461 slots) master
# role: master (M)
# master id: eed2e22136cbdca6770a46bbb2e137ab693dd16b
# master ip and port: 192.168.1.9:6379
# assigned slot range: 0-5460
# total slots assigned: 5461
S: 8efbcf7fdd4675c77199a2a1206f0209ac2255f3 192.168.1.9:6380
replicates e387a0ba7c95d0c27d7e28a9b57d23117711eadc
# role: slave (S)
# slave id: 8efbcf7fdd4675c77199a2a1206f0209ac2255f3
# slave ip and port: 192.168.1.9:6380
# id of the master this slave replicates: e387a0ba7c95d0c27d7e28a9b57d23117711eadc
M: c922b4bf56f0086609fd4fb23d987df0a77bec22 192.168.1.106:6379
slots:5461-10922 (5462 slots) master
S: a4fd89d79cdd27698bc394134b2df25b63ddb4c5 192.168.1.106:6380
replicates eed2e22136cbdca6770a46bbb2e137ab693dd16b
M: e387a0ba7c95d0c27d7e28a9b57d23117711eadc 192.168.1.110:6379
slots:10923-16383 (5461 slots) master
S: 34549b777963b16e65125def8d9a8e50e27ed2a4 192.168.1.110:6380
replicates c922b4bf56f0086609fd4fb23d987df0a77bec22
Can I set the above configuration? (type 'yes' to accept): yes
>>> Nodes configuration updated
>>> Assign a different config epoch to each node
>>> Sending CLUSTER MEET messages to join the cluster
Waiting for the cluster to join..
>>> Performing Cluster Check (using node 192.168.1.9:6379)
M: eed2e22136cbdca6770a46bbb2e137ab693dd16b 192.168.1.9:6379
slots:0-5460 (5461 slots) master
1 additional replica(s)
S: a4fd89d79cdd27698bc394134b2df25b63ddb4c5 192.168.1.106:6380
slots: (0 slots) slave
replicates eed2e22136cbdca6770a46bbb2e137ab693dd16b
S: 8efbcf7fdd4675c77199a2a1206f0209ac2255f3 192.168.1.9:6380
slots: (0 slots) slave
replicates e387a0ba7c95d0c27d7e28a9b57d23117711eadc
M: c922b4bf56f0086609fd4fb23d987df0a77bec22 192.168.1.106:6379
slots:5461-10922 (5462 slots) master
1 additional replica(s)
M: e387a0ba7c95d0c27d7e28a9b57d23117711eadc 192.168.1.110:6379
slots:10923-16383 (5461 slots) master
1 additional replica(s)
S: 34549b777963b16e65125def8d9a8e50e27ed2a4 192.168.1.110:6380
slots: (0 slots) slave
replicates c922b4bf56f0086609fd4fb23d987df0a77bec22
[OK] All nodes agree about slots configuration.
>>> Check for open slots…
>>> Check slots coverage…
[OK] All 16384 slots covered.
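Once the check passes, the cluster layout can be inspected from any node with the same credentials:
[root@localhost ~]# redis-cli -h 192.168.1.9 -p 6379 -a 123456 cluster nodes # lists all six nodes with their roles and slot ranges
[root@localhost ~]# redis-cli -h 192.168.1.9 -p 6379 -a 123456 cluster info # expect cluster_state:ok and cluster_known_nodes:6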
11. Check a slave's replication status; the master-slave link has not been established yet
[root@localhost ~]# redis-cli -h 192.168.1.106 -p 6380
192.168.1.106:6380> auth 123456
OK
192.168.1.106:6380> info replication
# Replication
role:slave
master_host:192.168.1.9
master_port:6379
master_link_status:down # not yet connected to the master
master_last_io_seconds_ago:-1
master_sync_in_progress:0
slave_repl_offset:1
master_link_down_since_seconds:1560643385
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:84865c623e15576c50c80a46ee16845b80b872d8
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
12. On each of the three hosts, manually supply the slaves with the master-link authentication password
1) Set the authentication password
192.168.1.106:6380> config set masterauth 123456
OK
2) Check the slave's replication status again; master_link_status is now up, meaning the master-slave link is established
192.168.1.106:6380> info replication
# Replication
role:slave
master_host:192.168.1.9
master_port:6379
master_link_status:up # master-slave link established
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:0
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:df9da60e308938e7a817ca08b20b58c248ad409d
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:0
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:0
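Note that config set only changes the running process; the password would be lost on restart. To persist it, either add masterauth 123456 to each instance's configuration file, or have Redis write the running configuration back to disk:
192.168.1.106:6380> config rewrite # persists the running config, including masterauth, to the instance's config file
OK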
13. Summary
1) Three hosts now run a cluster of three master-slave pairs, giving both redundancy and higher concurrency.
2) The cluster tolerates the loss of one host: any master that goes down is replaced by its slave, which is promoted to the new master, so the other two hosts keep working normally and no data is lost.
3) Packing two instances per host keeps the machine count down and therefore saves cost.
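As a final functional test, connect in cluster mode (-c) so redis-cli follows MOVED redirects, then write and read a key (the key name here is arbitrary):
[root@localhost ~]# redis-cli -c -h 192.168.1.9 -p 6379 -a 123456
192.168.1.9:6379> set testkey hello # a "-> Redirected to slot ..." line may appear if the key hashes to another node
OK
192.168.1.9:6379> get testkey
"hello"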