ResourceManager High Availability
- Introduction
- Architecture
  - RM Failover
  - Recovering previous active-RM's state
- Deployment
  - Configurations
  - Admin commands
  - ResourceManager Web UI services
  - Web Services
Introduction
This guide provides an overview of High Availability of YARN's ResourceManager, and details how to configure and use this feature. The ResourceManager (RM) is responsible for tracking the resources in a cluster and scheduling applications (e.g., MapReduce jobs). Prior to Hadoop 2.4, the ResourceManager was the single point of failure in a YARN cluster. The High Availability feature adds redundancy in the form of an Active/Standby ResourceManager pair to remove this otherwise single point of failure.
Architecture
RM Failover
ResourceManager HA is realized through an Active/Standby architecture: at any point in time, one of the RMs is Active, and one or more RMs are in Standby mode waiting to take over should anything happen to the Active. The trigger to transition to Active comes either from the admin (through the CLI) or through the integrated failover controller when automatic failover is enabled.
Manual transitions and failover
When automatic failover is not enabled, admins have to manually transition one of the RMs to Active. To fail over from one RM to the other, they are expected to first transition the Active RM to Standby and then transition a Standby RM to Active. All of this can be done using the "yarn rmadmin" CLI.
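As a sketch of that manual sequence, assuming two RMs with the logical IDs rm1 (currently Active) and rm2 (Standby), as in the sample configuration later in this document:

$ yarn rmadmin -transitionToStandby rm1
$ yarn rmadmin -transitionToActive rm2
$ yarn rmadmin -getServiceState rm2
active

The Admin commands section below shows more of these options.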
Automatic failover
The RMs have an option to embed the ZooKeeper-based ActiveStandbyElector to decide which RM should be the Active. When the Active goes down or becomes unresponsive, another RM is automatically elected to be the Active, which then takes over. Note that there is no need to run a separate ZKFC daemon as is the case for HDFS, because the ActiveStandbyElector embedded in the RMs acts as a failure detector and a leader elector instead of a separate ZKFC daemon.
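As a hedged illustration, the automatic-failover behaviour can be made explicit in yarn-site.xml together with the ZooKeeper quorum address (all three properties appear in the configuration table below; per that table, the two failover flags already default to enabled once HA is enabled):

<property>
  <name>yarn.resourcemanager.ha.automatic-failover.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.automatic-failover.embedded</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>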
Client, ApplicationMaster and NodeManager on RM failover
When there are multiple RMs, the configuration (yarn-site.xml) used by clients and nodes is expected to list all the RMs. Clients, ApplicationMasters (AMs) and NodeManagers (NMs) try connecting to the RMs in a round-robin fashion until they hit the Active RM. If the Active goes down, they resume the round-robin polling until they hit the "new" Active. This default retry logic is implemented as org.apache.hadoop.yarn.client.ConfiguredRMFailoverProxyProvider. You can override the logic by implementing org.apache.hadoop.yarn.client.RMFailoverProxyProvider and setting the value of yarn.client.failover-proxy-provider to the class name.
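For example, a hedged yarn-site.xml fragment selecting a custom provider; the class name com.example.MyRMFailoverProxyProvider is purely hypothetical and stands in for your own implementation of org.apache.hadoop.yarn.client.RMFailoverProxyProvider:

<property>
  <name>yarn.client.failover-proxy-provider</name>
  <value>com.example.MyRMFailoverProxyProvider</value>
</property>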
Recovering previous active-RM's state
With ResourceManager Restart enabled, the RM being promoted to the active state loads the RM internal state and continues to operate from where the previous Active left off as much as possible, depending on the RM restart feature. A new attempt is spawned for each managed application that was previously submitted to the RM. Applications can checkpoint periodically to avoid losing any work. The state-store must be visible from both the Active and Standby RMs. Currently, there are two RMStateStore implementations for persistence: FileSystemRMStateStore and ZKRMStateStore. The ZKRMStateStore implicitly allows write access to a single RM at any point in time, and hence is the recommended store to use in an HA cluster. When using the ZKRMStateStore, there is no need for a separate fencing mechanism to address a potential split-brain situation where multiple RMs could assume the Active role. When using the ZKRMStateStore, it is advisable to NOT set the "zookeeper.DigestAuthenticationProvider.superDigest" property on the ZooKeeper cluster, to ensure that the ZooKeeper admin does not have access to YARN application/user credential information.
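As a hedged sketch of the state-store setup described in the ResourceManager Restart document, the following yarn-site.xml fragment enables recovery and selects the ZKRMStateStore; adjust to your release and see that document for the full set of options:

<property>
  <name>yarn.resourcemanager.recovery.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.store.class</name>
  <value>org.apache.hadoop.yarn.server.resourcemanager.recovery.ZKRMStateStore</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>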
Deployment
Configurations
Most of the failover functionality is tunable using various configuration properties. Following is a list of the required/important ones. yarn-default.xml carries the full list of knobs; see yarn-default.xml for more information, including default values. Also see the ResourceManager Restart document for instructions on setting up the state-store.
| Configuration Property | Description |
| --- | --- |
| yarn.resourcemanager.zk-address | Address of the ZK-quorum. Used both for the state-store and embedded leader-election. |
| yarn.resourcemanager.ha.enabled | Enable RM HA. |
| yarn.resourcemanager.ha.rm-ids | List of logical IDs for the RMs, e.g., "rm1,rm2". |
| yarn.resourcemanager.hostname.rm-id | For each rm-id, specify the hostname the RM corresponds to. Alternately, one could set each of the RM's service addresses. |
| yarn.resourcemanager.address.rm-id | For each rm-id, specify host:port for clients to submit jobs. If set, overrides the hostname set in yarn.resourcemanager.hostname.rm-id. |
| yarn.resourcemanager.scheduler.address.rm-id | For each rm-id, specify scheduler host:port for ApplicationMasters to obtain resources. If set, overrides the hostname set in yarn.resourcemanager.hostname.rm-id. |
| yarn.resourcemanager.resource-tracker.address.rm-id | For each rm-id, specify host:port for NodeManagers to connect. If set, overrides the hostname set in yarn.resourcemanager.hostname.rm-id. |
| yarn.resourcemanager.admin.address.rm-id | For each rm-id, specify host:port for administrative commands. If set, overrides the hostname set in yarn.resourcemanager.hostname.rm-id. |
| yarn.resourcemanager.webapp.address.rm-id | For each rm-id, specify host:port of the RM web application. Not needed if yarn.http.policy is set to HTTPS_ONLY. If set, overrides the hostname set in yarn.resourcemanager.hostname.rm-id. |
| yarn.resourcemanager.webapp.https.address.rm-id | For each rm-id, specify host:port of the RM HTTPS web application. Not needed if yarn.http.policy is set to HTTP_ONLY. If set, overrides the hostname set in yarn.resourcemanager.hostname.rm-id. |
| yarn.resourcemanager.ha.id | Identifies the RM in the ensemble. This is optional; however, if set, admins have to ensure that all the RMs have their own IDs in the config. |
| yarn.resourcemanager.ha.automatic-failover.enabled | Enable automatic failover. By default, it is enabled only when HA is enabled. |
| yarn.resourcemanager.ha.automatic-failover.embedded | Use the embedded leader-elector to pick the Active RM when automatic failover is enabled. By default, it is enabled only when HA is enabled. |
| yarn.resourcemanager.cluster-id | Identifies the cluster. Used by the elector to ensure an RM doesn't take over as Active for another cluster. |
| yarn.client.failover-proxy-provider | The class to be used by Clients, AMs and NMs to fail over to the Active RM. |
| yarn.client.failover-max-attempts | The maximum number of times FailoverProxyProvider should attempt failover. |
| yarn.client.failover-sleep-base-ms | The sleep base (in milliseconds) used for calculating the exponential delay between failovers. |
| yarn.client.failover-sleep-max-ms | The maximum sleep time (in milliseconds) between failovers. |
| yarn.client.failover-retries | The number of retries per attempt to connect to a ResourceManager. |
| yarn.client.failover-retries-on-socket-timeouts | The number of retries per attempt to connect to a ResourceManager on socket timeouts. |
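For illustration, a hedged yarn-site.xml fragment tuning the client-side failover retry behaviour listed above; the values shown are placeholders rather than recommendations:

<property>
  <name>yarn.client.failover-max-attempts</name>
  <value>15</value>
</property>
<property>
  <name>yarn.client.failover-sleep-base-ms</name>
  <value>500</value>
</property>
<property>
  <name>yarn.client.failover-sleep-max-ms</name>
  <value>15000</value>
</property>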
Sample configurations
Here is a sample of the minimal setup for RM failover.
<property>
  <name>yarn.resourcemanager.ha.enabled</name>
  <value>true</value>
</property>
<property>
  <name>yarn.resourcemanager.cluster-id</name>
  <value>cluster1</value>
</property>
<property>
  <name>yarn.resourcemanager.ha.rm-ids</name>
  <value>rm1,rm2</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm1</name>
  <value>master1</value>
</property>
<property>
  <name>yarn.resourcemanager.hostname.rm2</name>
  <value>master2</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm1</name>
  <value>master1:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.webapp.address.rm2</name>
  <value>master2:8088</value>
</property>
<property>
  <name>yarn.resourcemanager.zk-address</name>
  <value>zk1:2181,zk2:2181,zk3:2181</value>
</property>
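On each RM host, the optional yarn.resourcemanager.ha.id property from the table above can additionally be set to pin the local RM to one of the configured IDs; a hedged fragment for master1 (the value would be rm2 on master2):

<property>
  <name>yarn.resourcemanager.ha.id</name>
  <value>rm1</value>
</property>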
Admin commands
yarn rmadmin has a few HA-specific command options to check the health/state of an RM and to transition to Active/Standby. Commands for HA take the service id of the RM, as set by yarn.resourcemanager.ha.rm-ids, as an argument.
$ yarn rmadmin -getServiceState rm1
active
$ yarn rmadmin -getServiceState rm2
standby
If automatic failover is enabled, you cannot use the manual transition commands. Although you can override this with the --forcemanual flag, use caution.
$ yarn rmadmin -transitionToStandby rm1
Automatic failover is enabled for org.apache.hadoop.yarn.client.RMHAServiceTarget@1d8299fd
Refusing to manually manage HA state, since it may cause
a split-brain scenario or other incorrect state.
If you are very sure you know what you are doing, please
specify the forcemanual flag.
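A hedged example of overriding that check; as the warning above notes, forcing a manual transition while automatic failover is enabled risks a split-brain, so only do this if you are certain of the cluster state:

$ yarn rmadmin -transitionToStandby --forcemanual rm1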
See YarnCommands for more details.
ResourceManager Web UI services
Assuming a standby RM is up and running, the Standby automatically redirects all web requests to the Active, except for the "About" page.
Web Services
Assuming a standby RM is up and running, the RM web services described in ResourceManager REST APIs are automatically redirected to the Active RM when invoked on a standby RM.
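For example, a hedged check using the sample hostnames above, with master2 assumed to be the current standby; the -L flag makes curl follow the redirect to the Active RM's cluster-info endpoint:

$ curl -L "http://master2:8088/ws/v1/cluster/info"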