
Installing and Configuring Heartbeat on Linux


Heartbeat is a widely used open-source high-availability cluster system for Linux. It consists of two main HA components: the heartbeat (messaging) service and resource takeover. This article briefly describes installing heartbeat 2.1.4 on Linux and how to configure its three key configuration files.

For background on the heartbeat cluster components, see the overview of Heartbeat cluster components: http://www.linuxidc.com/Linux/2015-11/125210.htm

I. Installing heartbeat
### Prepare the installation files
### Heartbeat V2 is no longer maintained; the final V2 release is 2.1.4.
### For installation on Linux 6, the packages can be downloaded from the following link:
### For the Linux 5 series they can be downloaded here: https://dl.Fedoraproject.org/pub/epel/5/x86_64/repoview/letter_h.group.html
# rpm -Uvh PyXML-0.8.4-19.el6.x86_64.rpm
# rpm -Uvh perl-MailTools-2.04-4.el6.noarch.rpm
# rpm -Uvh perl-TimeDate-1.16-11.1.el6.noarch.rpm
# rpm -Uvh libnet-1.1.6-7.el6.x86_64.rpm
# rpm -Uvh ipvsadm-1.26-2.el6.x86_64.rpm
# rpm -Uvh lm_sensors-libs-3.1.1-17.el6.x86_64.rpm
# rpm -Uvh net-snmp-libs.x86_64.rpm

# rpm -Uvh heartbeat-pils-2.1.4-12.el6.x86_64.rpm
# rpm -Uvh heartbeat-stonith-2.1.4-12.el6.x86_64.rpm
# rpm -Uvh heartbeat-2.1.4-12.el6.x86_64.rpm

### Install the following two RPMs as needed: one is the Heartbeat development package, the other (ldirectord) is for LVS
# rpm -Uvh heartbeat-devel-2.1.4-12.el6.x86_64.rpm     
# rpm -Uvh heartbeat-ldirectord-2.1.4-12.el6.x86_64.rpm 

### Verify the installed packages
# rpm -qa |grep -i heartbeat
heartbeat-2.1.4-12.el6.x86_64
heartbeat-pils-2.1.4-12.el6.x86_64
heartbeat-stonith-2.1.4-12.el6.x86_64
heartbeat-ldirectord-2.1.4-12.el6.x86_64
heartbeat-devel-2.1.4-12.el6.x86_64

# Copy the sample configuration files to /etc/ha.d and modify them as needed
# cp /usr/share/doc/heartbeat-2.1.4/ha.cf /etc/ha.d/
# cp /usr/share/doc/heartbeat-2.1.4/haresources /etc/ha.d/
# cp /usr/share/doc/heartbeat-2.1.4/authkeys /etc/ha.d/
#
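
### After the three configuration files described below are in place on both nodes, heartbeat can be
### started and enabled at boot (a quick sketch, assuming the init script installed by the RPMs is
### named heartbeat; the log file is the one set in ha.cf):
# service heartbeat start
# chkconfig heartbeat on
# tail -f /var/log/ha-log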

II. Configuring heartbeat
Heartbeat is configured mainly through three files: ha.cf, authkeys and haresources. Each is described below.

1. ha.cf
This is heartbeat's main configuration file. It covers roughly the following:
    log output level and location;
    heartbeat interval, warning time, dead time, initial dead time, etc.;
    heartbeat communication method: IP, port number, serial device, baud rate, etc.;
    node names, fencing (STONITH) method, etc.
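
Before walking through the fully annotated sample, a minimal two-node ha.cf might look roughly as follows (node names, interface and ping address are placeholders; each directive is explained in the annotated file below):

logfile /var/log/ha-log
keepalive 2
deadtime 30
warntime 10
initdead 120
udpport 694
bcast eth0
auto_failback on
node node1
node node2
ping 192.168.1.254
respawn hacluster /usr/lib/heartbeat/ipfail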

Annotated sample file
[root@orasrv1 ha.d]# more ha.cf
#
#      There are lots of options in this file.  All you have to have is a set
#      of nodes listed {“node …} one of {serial, bcast, mcast, or ucast},
#      and a value for “auto_failback”.
#
#      ATTENTION: As the configuration file is read line by line,
#                  THE ORDER OF DIRECTIVE MATTERS!
#
#      In particular, make sure that the udpport, serial baud rate
#      etc. are set before the heartbeat media are defined!
#      debug and log file directives go into effect when they
#      are encountered.
#
#
#      All will be fine if you keep them ordered as in this example.
#
#      Note on logging:
#      If all of debugfile, logfile and logfacility are not defined,
#      logging is the same as use_logd yes. In other case, they are
#      respectively effective. if detering the logging to syslog,
#      logfacility must be “none”.
#
#      File to write debug messages to 
#debugfile /var/log/ha-debug
#
#
#      File to write other messages to 
#     
logfile        /var/log/ha-log
#
#
#      Facility to use for syslog()/logger
# (note: not usually recommended to enable this together with logfile)
#logfacility    local0
#
#
#      A note on specifying “how long” times below…
#
#      The default time unit is seconds
#              10 means ten seconds
#
#      You can also specify them in milliseconds
#              1500ms means 1.5 seconds
#
#
#      keepalive: how long between heartbeats?
#keepalive 2
#
#      deadtime: how long-to-declare-host-dead?
#
#              If you set this too low you will get the problematic
#              split-brain (or cluster partition) problem.
#              See the FAQ for how to use warntime to tune deadtime.
#deadtime 30
#
#      warntime: how long before issuing “late heartbeat” warning?
#      See the FAQ for how to use warntime to tune deadtime.
#     
#     
#warntime 10
#
#
#      Very first dead time (initdead)
#
#      On some machines/OSes, etc. the network takes a while to come up
#      and start working right after you’ve been rebooted.  As a result
#      we have a separate dead time for when things first come up.
#      It should be at least twice the normal dead time.
#
#initdead 120
#
#
#      What UDP port to use for bcast/ucast communication?
#
#udpport        694
#
#      Baud rate for serial ports…                     
#
#baud  19200
#
#      serial  serialportname …     
#serial /dev/ttyS0      # Linux
#serial /dev/cuaa0      # FreeBSD
#serial /dev/cuad0      # FreeBSD 6.x
#serial /dev/cua/a      # Solaris
#
#
#      What interfaces to broadcast heartbeats over?           
#
#bcast  eth0            # Linux
#bcast  eth1 eth2      # Linux
#bcast  le0            # Solaris
#bcast  le1 le2        # Solaris
#
#      Set up a multicast heartbeat medium               
#      mcast [dev] [mcast group] [port] [ttl] [loop]
#
#      [dev]          device to send/rcv heartbeats on
#      [mcast group]  multicast group to join (class D multicast address
#                      224.0.0.0 – 239.255.255.255)
#      [port]          udp port to sendto/rcvfrom (set this value to the
#                      same value as “udpport” above)
#      [ttl]          the ttl value for outbound heartbeats.  this effects
#                      how far the multicast packet will propagate.  (0-255)
#                      Must be greater than zero.
#      [loop]          toggles loopback for outbound multicast heartbeats.
#                      if enabled, an outbound packet will be looped back and
#                      received by the interface it was sent on. (0 or 1)
#                      Set this value to zero.
#
#mcast eth0 225.0.0.1 694 1 0
#
#      Set up a unicast / udp heartbeat medium           
#      ucast [dev] [peer-ip-addr]
#
#      [dev]          device to send/rcv heartbeats on
#      [peer-ip-addr]  IP address of peer to send packets to
#
#
#ucast eth0 192.168.1.2 
#
# Broadcast, unicast and multicast each have their pros and cons.
# Unicast is typically used for 2-node setups, but then the two nodes cannot share an identical ha.cf, because the peer IP addresses differ.
#
#
#      About boolean values…
#     
#      Any of the following case-insensitive values will work for true:
#              true, on, yes, y, 1
#      Any of the following case-insensitive values will work for false:
#              false, off, no, n, 0
#     
#
#
#
#      auto_failback:  determines whether a resource will
#      automatically fail back to its “primary” node, or remain
#      on whatever node is serving it until that node fails, or
#      an administrator intervenes.
#
#      The possible values for auto_failback are:
#              on      – enable automatic failbacks
#              off    – disable automatic failbacks
#              legacy  – enable automatic failbacks in systems
#                      where all nodes do not yet support
#                      the auto_failback option.
#
#      auto_failback “on” and “off” are backwards compatible with the old
#              “nice_failback on” setting.
#
#      See the FAQ for information on how to convert
#              from “legacy” to “on” without a flash cut.
#              (i.e., using a “rolling upgrade” process)
#
#      The default value for auto_failback is “legacy”, which
#      will issue a warning at startup.  So, make sure you put
#      an auto_failback directive in your ha.cf file.
#      (note: auto_failback can be any boolean or “legacy”)
#
auto_failback on
#
#
#      Basic STONITH support
#      Using this directive assumes that there is one stonith
#      device in the cluster.  Parameters to this device are
#      read from a configuration file. The format of this line is:
#
#        stonith <stonith_type> <configfile>
#
#      NOTE: it is up to you to maintain this file on each node in the
#      cluster!
#
#stonith baytech /etc/ha.d/conf/stonith.baytech
#
#      STONITH support
#      You can configure multiple stonith devices using this directive.
#      The format of the line is:
#        stonith_host <hostfrom> <stonith_type> <params…>
#        <hostfrom> is the machine the stonith device is attached
#              to or * to mean it is accessible from any host.
#        <stonith_type> is the type of stonith device (a list of
#              supported drives is in /usr/lib/stonith.)
#        <params…> are driver specific parameters.  To see the
#              format for a particular device, run:
#          stonith -l -t <stonith_type>
#
#
#      Note that if you put your stonith device access information in
#      here, and you make this file publically readable, you’re asking
#      for a denial of service attack ;-)
#
#      To get a list of supported stonith devices, run
#              stonith -L
#      For detailed information on which stonith devices are supported
#      and their detailed configuration options, run this command:
#              stonith -h
#
#stonith_host *    baytech 10.0.0.3 mylogin mysecretpassword
#stonith_host ken3  rps10 /dev/ttyS1 kathy 0
#stonith_host kathy rps10 /dev/ttyS1 ken3 0
#
#      Watchdog is the watchdog timer.  If our own heart doesn’t beat for
#      a minute, then our machine will reboot.
#      NOTE: If you are using the software watchdog, you very likely
#      wish to load the module with the parameter “nowayout=0” or
#      compile it without CONFIG_WATCHDOG_NOWAYOUT set. Otherwise even
#      an orderly shutdown of heartbeat will trigger a reboot, which is
#      very likely NOT what you want.
#
#watchdog /dev/watchdog
#     
#      Tell what machines are in the cluster
#      node    nodename …    — must match uname -n
#
#node  ken3
#node  kathy
#
#      Less common options…
#
#      Treats 10.10.10.254 as a pseudo-cluster-member
#      Used together with ipfail below…
#      note: don’t use a cluster node as ping node
#      (the ping node is typically the default gateway; it acts as a quorum
#      tie-breaker when deciding which partition keeps the resources)
#
#ping 10.10.10.254
#
#      Treats 10.10.10.254 and 10.10.10.253 as a pseudo-cluster-member
#      called group1. If either 10.10.10.254 or 10.10.10.253 are up
#      then group1 is up
#      Used together with ipfail below…
#
#ping_group group1 10.10.10.254 10.10.10.253
#
#      HBA ping directive for Fiber Channel
#      Treats fc-card-name as pseudo-cluster-member
#      used with ipfail below …
#
#      You can obtain HBAAPI from http://hbaapi.sourceforge.net.  You need
#      to get the library specific to your HBA directly from the vender
#      To install HBAAPI stuff, all You need to do is to compile the common
#      part you obtained from the sourceforge. This will produce libHBAAPI.so
#      which you need to copy to /usr/lib. You need also copy hbaapi.h to
#      /usr/include.
#
#      The fc-card-name is the name obtained from the hbaapitest program
#      that is part of the hbaapi package. Running hbaapitest will produce
#      a verbose output. One of the first lines is similar to:
#              Adapter number 0 is named: qlogic-qla2200-0
#      Here fc-card-name is qlogic-qla2200-0.
#
#hbaping fc-card-name
#
#
#      Processes started and stopped with heartbeat.  Restarted unless
#              they exit with rc=100
#      (the listed process is started together with heartbeat, monitored, and restarted if it dies;
#      ipfail detects and handles network failures, using the ping nodes
#      defined by the ping directives above)
#
#respawn userid /path/name/to/run
#respawn hacluster /usr/lib/heartbeat/ipfail
#
#      Access control for client api
#              default is no access
#
#apiauth client-name gid=gidlist uid=uidlist
#apiauth ipfail gid=haclient uid=hacluster

######################################
#
#      Unusual options.
#
######################################
#
#      hopfudge maximum hop count minus number of nodes in config
#hopfudge 1
#
#      deadping – dead time for ping nodes
#deadping 30
#
#      hbgenmethod – Heartbeat generation number creation method
#              Normally these are stored on disk and incremented as needed.
#hbgenmethod time
#
#      realtime – enable/disable realtime execution (high priority, etc.)
#              defaults to on
#realtime off
#
#      debug – set debug level
#              defaults to zero
#debug 1
#
#      API Authentication – replaces the fifo-permissions-based system of the past
#
#      You can put a uid list and/or a gid list.
#      If you put both, then a process is authorized if it qualifies under either
#      the uid list, or under the gid list.
#
#      The groupname “default” has special meaning.  If it is specified, then
#      this will be used for authorizing groupless clients, and any client groups
#      not otherwise specified.
#
#      There is a subtle exception to this.  “default” will never be used in the
#      following cases (actual default auth directives noted in brackets)
#                ipfail        (uid=HA_CCMUSER)
#                ccm           (uid=HA_CCMUSER)
#                ping          (gid=HA_APIGROUP)
#                cl_status    (gid=HA_APIGROUP)
#
#      This is done to avoid creating a gaping security hole and matches the most
#      likely desired configuration.
#
#apiauth ipfail uid=hacluster
#apiauth ccm uid=hacluster
#apiauth cms uid=hacluster
#apiauth ping gid=haclient uid=alanr,root
#apiauth default gid=haclient

#      message format in the wire, it can be classic or netstring,
#      default: classic
#msgfmt  classic/netstring

#      Do we use logging daemon?
#      If logging daemon is used, logfile/debugfile/logfacility in this file
#      are not meaningful any longer. You should check the config file for logging
#      daemon (the default is /etc/logd.cf)
#      more information can be found at http://www.linux-ha.org/ha_2ecf_2fUseLogdDirective
#      Setting use_logd to “yes” is recommended
#
# use_logd yes/no
#
#      the interval we  reconnect to logging daemon if the previous connection failed
#      default: 60 seconds
#conn_logd_time 60
#
#
#      Configure compression module
#      It could be zlib or bz2, depending on whether you have the corresponding
#      library in the system.
#compression    bz2
#
#      Configure compression threshold
#      This value determines the threshold to compress a message,
#      e.g. if the threshold is 1, then any message with size greater than 1 KB
#      will be compressed, the default is 2 (KB)
#compression_threshold 2

2. authkeys (authentication configuration)
This file configures heartbeat's authentication. Three methods are available: crc, md5 and sha1.
Security increases in that order, but so does the CPU cost.
crc offers the least security and is suitable only for physically secure networks; sha1 provides the strongest authentication and uses the most resources.
The authkeys file must have its permissions set to 600 (i.e. -rw-------): chmod 600 authkeys

The configuration format is:
auth <number>
<number> <method> [key]
For example:
    auth 1
    1 sha1 key-for-sha1
    The key key-for-sha1 can be any string; the number on both lines must match.

    auth 2
    2 crc
    The crc method does not take a key.
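
One way to generate a hard-to-guess key for the sha1 method (just a sketch; any random string works, and openssl is assumed to be available) is to generate it, paste it after "1 sha1", and then lock down the file permissions:

# openssl rand -hex 16
# vi /etc/ha.d/authkeys
# chmod 600 /etc/ha.d/authkeys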

Annotated sample file
[root@orasrv1 ha.d]# more authkeys
#
#      Authentication file.  Must be mode 600
#
#      Must have exactly one auth directive at the front.
#      auth    send authentication using this method-id
#
#      Then, list the method and key that go with that method-id
#
#      Available methods: crc, sha1, md5.  Crc doesn't need/want a key.
#
#      You normally only have one authentication method-id listed in this file
#
#      Put more than one to make a smooth transition when changing auth
#      methods and/or keys.
#
#
#      sha1 is believed to be the “best”, md5 next best.
#
#      crc adds no security, except from packet corruption.
#              Use only on physically secure networks.
#
#auth 1
#1 crc
#2 sha1 HI!
#3 md5 Hello!

3. haresources (resource configuration)
The haresources file specifies the cluster node, the cluster (service) IP, netmask, broadcast address, and the services to start.
Each line has the format:
    node-name  network-config  resource-group
node-name: the node name; it must match one of the hostnames set with the node directive in ha.cf.
network-config: the network settings, i.e. the cluster IP, netmask, broadcast address, etc.
resource-group: the cluster services managed by heartbeat, i.e. the services heartbeat will start and stop.
Any service taken over by heartbeat must be wrapped in a script that can be started and stopped via start/stop and placed under /etc/init.d/
or /etc/ha.d/resource.d/; heartbeat finds the script in those directories by name and runs it to start or stop the service. A minimal skeleton is sketched below.
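
A minimal sketch of such a script (the service name myservice and its paths are made up for illustration), saved for example as /etc/ha.d/resource.d/myservice on both nodes:

#!/bin/sh
# /etc/ha.d/resource.d/myservice - hypothetical example of a heartbeat-managed service script
case "$1" in
  start)
    /usr/local/myservice/bin/myserviced &
    ;;
  stop)
    pkill -f myserviced
    ;;
  status)
    pgrep -f myserviced >/dev/null && echo running || echo stopped
    ;;
  *)
    echo "Usage: $0 {start|stop|status}"
    exit 1
    ;;
esac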

Example:
node1 IPaddr::192.168.21.10/24/eth0  Filesystem::/dev/sdb2::/webdata::ext3  httpd tomcat

node1:
    The node name.

IPaddr::192.168.21.10/24/eth0
    IPaddr is a script shipped with heartbeat, located in /etc/ha.d/resource.d.
    It performs the equivalent of /etc/ha.d/resource.d/IPaddr 192.168.21.10/24 start,
    bringing up the virtual address 192.168.21.10 with netmask 255.255.255.0.
    This IP is the address through which heartbeat serves clients; eth0 is the interface it is bound to.

Filesystem::/dev/sdb2::/webdata::ext3
    Filesystem is a script shipped with heartbeat, located in /etc/ha.d/resource.d.
    It mounts the shared disk partition, equivalent to mount -t ext3 /dev/sdb2 /webdata.

httpd tomcat
    Starts the httpd and tomcat services, in that order.
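
In effect, when node1 takes over, heartbeat runs roughly the following sequence (and the reverse order, right to left, when the resources are released):

/etc/ha.d/resource.d/IPaddr 192.168.21.10/24/eth0 start
/etc/ha.d/resource.d/Filesystem /dev/sdb2 /webdata ext3 start
/etc/init.d/httpd start
/etc/init.d/tomcat start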

Note: with multiple network interfaces on different subnets, the VIP is normally bound as an alias to the interface that is on the same subnet as the VIP.
E.g. eth0: 172.16.100.6, eth1: 192.168.0.6, VIP: 172.16.100.5
Here the VIP is bound to eth0, because the two addresses are on the same subnet; the selection is done by /usr/lib64/heartbeat/findif.
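
Once heartbeat has brought the address up, the choice can be checked on the active node with standard tools (interface and VIP taken from the example above; with heartbeat v1 the VIP usually appears as an interface alias such as eth0:0):

# ip addr show eth0 | grep 172.16.100.5
# ifconfig | grep -B1 172.16.100.5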

Annotated sample file
[root@orasrv1 ha.d]# more haresources
#
#      This is a list of resources that move from machine to machine as
#      nodes go down and come up in the cluster.  Do not include
#      “administrative” or fixed IP addresses in this file.
# <VERY IMPORTANT NOTE>
#      The haresources files MUST BE IDENTICAL on all nodes of the cluster.
#
#      The node names listed in front of the resource group information
#      is the name of the preferred node to run the service.  It is
#      not necessarily the name of the current machine.  If you are running
#      auto_failback ON (or legacy), then these services will be started
#      up on the preferred nodes – any time they’re up.
#
#
#      If you are running with auto_failback OFF, then the node information
#      will be used in the case of a simultaneous start-up, or when using
#      the hb_standby {foreign,local} command.
#
#      BUT FOR ALL OF THESE CASES, the haresources files MUST BE IDENTICAL.
#      If your files are different then almost certainly something
#      won’t work right.
# </VERY IMPORTANT NOTE>
#
#
#      We refer to this file when we’re coming up, and when a machine is being
#      taken over after going down.
#
#      You need to make this right for your installation, then install it in
#      /etc/ha.d
#
#      Each logical line in the file constitutes a “resource group”.
#      A resource group is a list of resources which move together from
#      one node to another – in the order listed.  It is assumed that there
#      is no relationship between different resource groups.  These
#      resource in a resource group are started left-to-right, and stopped
#      right-to-left.  Long lists of resources can be continued from line
#      to line by ending the lines with backslashes (“\”).
#
#
#      These resources in this file are either IP addresses, or the name
#      of scripts to run to “start” or “stop” the given resource.
#
#      The format is like this:
#
#node-name resource1 resource2 … resourceN
#
#
#      If the resource name contains an :: in the middle of it, the
#      part after the :: is passed to the resource script as an argument.
#      Multiple arguments are separated by the :: delimiter
#
#      In the case of IP addresses, the resource script name IPaddr is
#      implied.
#
#      For example, the IP address 135.9.8.7 could also be represented
#      as IPaddr::135.9.8.7
#
#      THIS IS IMPORTANT!!    vvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvvv
#
#      The given IP address is directed to an interface which has a route
#      to the given address.  This means you have to have a net route
#      set up outside of the High-Availability structure.  We don’t set it
#      up here — we key off of it.
#
#      The broadcast address for the IP alias that is created to support
#      an IP address defaults to the highest address on the subnet.
#     
#      The netmask for the IP alias that is created defaults to the same
#      netmask as the route that it selected in the step above.
#
#      The base interface for the IPalias that is created defaults to the
#      same interface as the route that it selected in the step above.
#
#      If you want to specify that this IP address is to be brought up
#      on a subnet with a netmask of 255.255.255.0, you would specify
#      this as IPaddr::135.9.8.7/24 . 
#
#      If you wished to tell it that the broadcast address for this subnet
#      was 135.9.8.210, then you would specify that this way:
#              IPaddr::135.9.8.7/24/135.9.8.210
#
#      If you wished to tell it that the interface to add the address to
#      is eth0, then you would need to specify it this way:
#              IPaddr::135.9.8.7/24/eth0
#     
#      And this way to specify both the broadcast address and the
#      interface:
#              IPaddr::135.9.8.7/24/eth0/135.9.8.210
#
#      The IP addresses you list in this file are called “service” addresses,
#      since they're the publicly advertised addresses that clients
#      use to get at highly available services.
#
#      For a hot/standby (non load-sharing) 2-node system with only
#      a single service address,
#      you will probably only put one system name and one IP address in here.
#      The name you give the address to is the name of the default “hot”
#      system.
#
#      Where the nodename is the name of the node which “normally” owns the
#      resource.  If this machine is up, it will always have the resource
#      it is shown as owning.
#
#      The string you put in for nodename must match the uname -n name
#      of your machine.  Depending on how you have it administered, it could
#      be a short name or a FQDN.
#-------------------------------------------------------------------
#
#      Simple case: One service address, default subnet and netmask
#              No servers that go up and down with the IP address
#
#just.linux-ha.org      135.9.216.110
#
#-------------------------------------------------------------------
#
#      Assuming the administrative addresses are on the same subnet…
#      A little more complex case: One service address, default subnet
#      and netmask, and you want to start and stop http when you get
#      the IP address…
#
#just.linux-ha.org      135.9.216.110 http
#-------------------------------------------------------------------
#
#      A little more complex case: Three service addresses, default subnet
#      and netmask, and you want to start and stop http when you get
#      the IP address…
#
#just.linux-ha.org      135.9.216.110 135.9.215.111 135.9.216.112 httpd
#-------------------------------------------------------------------
#
#      One service address, with the subnet, interface and bcast addr
#      explicitly defined.
#
#just.linux-ha.org      135.9.216.3/28/eth0/135.9.216.12 httpd
#
#-------------------------------------------------------------------
#
#      An example where a shared filesystem is to be used.
#      Note that multiple arguments are passed to this script using
#      the delimiter ‘::’ to separate each argument.
#
#node1  10.0.0.170 Filesystem::/dev/sda1::/data1::ext2
#
#      Regarding the node-names in this file:
#
#      They must match the names of the nodes listed in ha.cf, which in turn
#      must match the `uname -n` of some node in the cluster.  So they aren’t
#      virtual in any sense of the word.
#

III. Other configuration related to using the cluster (details omitted)
a. Configure host name resolution between the nodes (/etc/hosts)
b. Configure passwordless (SSH trust) authentication between the nodes
c. Configure the services to be made highly available (e.g. httpd, mysqld) and disable their automatic startup at boot (see the sketch below)
d. If shared storage is needed, configure the storage system as well
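
For items a and c, the usual steps on both nodes look roughly like this (hostnames and addresses are placeholders):

# cat >> /etc/hosts <<EOF
192.168.1.1  node1
192.168.1.2  node2
EOF
# chkconfig httpd off
# service httpd stop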

