
Deploying a Hadoop HA (High Availability) Cluster


Introduction to Hadoop HA

1. Overview

This guide gives an overview of the HDFS High Availability (HA) feature and explains how to configure and manage an HDFS HA cluster. It assumes the reader already has a general understanding of the components and node types of an HDFS cluster. For details, see the Apache HDFS Architecture Guide:
http://hadoop.apache.org/common/docs/current/hdfs_design.html

2. Background

Before CDH4, the NameNode was a single point of failure (SPOF) in an HDFS cluster. In a cluster with only one NameNode, if the NameNode machine fails, the whole cluster is unavailable until the NameNode is restarted.
The NameNode affects HDFS availability in two main ways:
(1) If the NameNode machine fails unexpectedly (e.g. crashes), the cluster is unusable until an administrator restarts the NameNode.
(2) If the NameNode machine needs planned maintenance such as software or hardware upgrades, the cluster is likewise unavailable.
HDFS HA addresses both problems by running two NameNodes in an Active/Standby pair, giving the cluster a hot standby for the NameNode. On a failure such as a machine crash, or during planned maintenance, the NameNode role can be switched quickly to the other machine.

3. Architecture

In a typical HDFS HA cluster, two separate machines are configured as NameNodes. At any point in time exactly one of them is in the Active state and the other is in the Standby state. The active NameNode handles all client operations in the cluster, while the standby NameNode acts purely as a hot backup, ready to take over quickly if the active one fails.
To keep the standby's state in sync with the active node, both nodes currently access a shared storage device (such as NFS); later versions may improve on this.
Whenever the active node changes its namespace, it also writes the edit log in the shared directory. The standby watches for log changes and applies them to its own namespace. During a failover, before moving from Standby to Active, the standby first applies all unread edits, ensuring its namespace matches the primary's.
Fast failover also requires that the standby has up-to-date block location information. To that end, the DataNodes are configured with the locations of both NameNodes and send block reports and heartbeats to both.
It is essential for correct operation that only one NameNode is active at a time; otherwise the two namespaces would diverge, risking data loss or other problems. To prevent this split-brain scenario, the administrator must configure at least one fencing method for the shared storage. During a failover, if it cannot be verified that the previous active node has relinquished its state, the fencing mechanism cuts off that node's access to the shared storage, preventing it from making further namespace edits and allowing the new active node to take over safely.

System Environment

Hadoop 2.5.0  
ZooKeeper 3.4.5  
JDK 1.7  

Cluster Nodes

m1 192.168.2.100 namenode resourcemanager zookeeper
s1 192.168.2.101 standbyNameNode datanode nodeManager zookeeper
s2 192.168.2.102 datanode nodeManager zookeeper
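The hostnames above must resolve to those addresses on every node. A minimal sketch of matching /etc/hosts entries follows; it writes to a temp file so it is harmless to run, but on a real cluster these lines would be appended to /etc/hosts on each machine (an assumption — the article does not show this step).

```shell
# Hypothetical /etc/hosts entries matching the node table above;
# written to /tmp here so the sketch can be run anywhere.
cat > /tmp/hosts-demo <<'EOF'
192.168.2.100 m1
192.168.2.101 s1
192.168.2.102 s2
EOF
cat /tmp/hosts-demo
```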

Configure ZooKeeper
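The original article leaves this section empty. As a hedged sketch (paths and ports are assumptions based on ZooKeeper 3.4 defaults, not taken from the article), a three-node ensemble needs a zoo.cfg listing all servers plus a per-node myid file:

```shell
# Sketch only: writes a sample zoo.cfg under /tmp; on a real node this
# file belongs in /home/hadoop/zookeeper-3.4.5/conf/zoo.cfg (path assumed).
mkdir -p /tmp/zk-demo/conf /tmp/zk-demo/data
cat > /tmp/zk-demo/conf/zoo.cfg <<'EOF'
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/hadoop/zookeeper-3.4.5/data
clientPort=2181
server.1=m1:2888:3888
server.2=s1:2888:3888
server.3=s2:2888:3888
EOF
# Each node needs a myid file in dataDir whose number matches its
# server.N line above; on m1 that would be:
echo 1 > /tmp/zk-demo/data/myid
```

On s1 and s2 the myid file would contain 2 and 3 respectively.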

Configure Hadoop

Configuring Hadoop HA mainly involves editing two configuration files: core-site.xml and hdfs-site.xml.

hdfs-site.xml

<configuration>
<!-- nameservice ID (the logical cluster name): mycluster -->
	<property>
		<name>dfs.nameservices</name>
		<value>mycluster</value>
	</property>
	
<!-- NameNode IDs nn1,nn2, configured under nameservice mycluster -->	
	<property>
		<name>dfs.ha.namenodes.mycluster</name>
		<value>nn1,nn2</value>
	</property>
	<!-- RPC addresses of nn1 and nn2 -->
	<property>
		<name>dfs.namenode.rpc-address.mycluster.nn1</name>
		<value>nameNode:8020</value>
	</property>
	<property>
		<name>dfs.namenode.rpc-address.mycluster.nn2</name>
		<value>dataNode1:8020</value>
	</property>
	<!-- web UI (HTTP) addresses of nn1 and nn2 -->
	<property>
		<name>dfs.namenode.http-address.mycluster.nn1</name>
		<value>nameNode:50070</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.mycluster.nn2</name>
		<value>dataNode1:50070</value>
	</property>

	<!-- JournalNode quorum that stores the shared edit log -->
	<property>
		<name>dfs.namenode.shared.edits.dir</name>
		<value>qjournal://nameNode:8485;dataNode1:8485;dataNode2:8485/mycluster</value>
	</property>
	<property>
		<name>dfs.journalnode.edits.dir</name>
		<value>/home/hadoop/hadoop/src/hadoop-2.5.0/tmp/dfs/journalnode</value>
	</property>

	<!-- class HDFS clients use to locate the active NameNode -->
	<property>
		<name>dfs.client.failover.proxy.provider.mycluster</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>

	<!-- fence the previous active NameNode over SSH during failover -->
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>sshfence</value>
	</property>
	<property>
		<name>dfs.ha.fencing.ssh.private-key-files</name>
		<value>/home/hadoop/.ssh/id_dsa</value>
	</property>
	
	<!-- enable automatic failover -->
	<property>
		<name>dfs.ha.automatic-failover.enabled</name>
		<value>true</value>
	</property>	
	<property>
		<name>dfs.replication.max</name>
		<value>32767</value>
	</property>

</configuration>
 
 
 

core-site.xml

<configuration>
 <property>
 <!-- With HA there are two NameNodes, so clients must address the logical nameservice rather than a single host -->
  <name>fs.defaultFS</name>
  <value>hdfs://mycluster</value>
 </property>
<property>
  <name>dfs.nameservices</name>
  <value>mycluster</value>
 </property>
<property>
	<!-- ZooKeeper quorum used for automatic failover -->
  <name>ha.zookeeper.quorum</name>
  <value>nameNode:2181,dataNode1:2181,dataNode2:2181</value>
 </property>
 <property>
	<!-- Hadoop temporary/working directory -->
  <name>hadoop.tmp.dir</name>
  <value>/home/hadoop/hadoop/src/hadoop-2.5.0/tmp</value>
 </property>
</configuration>
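Both files must be identical on all three nodes. A small self-contained sketch of a sanity check that can catch a mangled copy before anything is restarted (the sample file below contains only the fs.defaultFS property from above):

```shell
# Write a fragment of core-site.xml to a temp file and confirm that the
# nameservice URI survived the copy intact.
cat > /tmp/core-site-check.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mycluster</value>
  </property>
</configuration>
EOF
grep -o '<value>hdfs://[^<]*</value>' /tmp/core-site-check.xml
```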

Usage

Start ZooKeeper

Run the following on all of m1, s1, and s2; the sample output below was captured on s1.

root@s1:/home/hadoop# /home/hadoop/zookeeper-3.4.5/bin/zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.5/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
root@s1:/home/hadoop# /home/hadoop/zookeeper-3.4.5/bin/zkServer.sh status
JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.5/bin/../conf/zoo.cfg
Mode: leader
root@s1:/home/hadoop#
 

To verify that ZooKeeper is up, connect with zkCli.sh: the `SyncConnected` event and the `[zookeeper]` listing at the end of the output below indicate success.

root@m1:/home/hadoop# /home/hadoop/zookeeper-3.4.5/bin/zkCli.sh
Connecting to localhost:2181
2014-07-27 00:27:16,621 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
2014-07-27 00:27:16,628 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=m1
2014-07-27 00:27:16,628 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.7.0_65
2014-07-27 00:27:16,629 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2014-07-27 00:27:16,629 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-7-oracle/jre
2014-07-27 00:27:16,630 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/home/hadoop/zookeeper-3.4.5/bin/../build/classes:/home/hadoop/zookeeper-3.4.5/bin/../build/lib/*.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/slf4j-api-1.6.1.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/netty-3.2.2.Final.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/log4j-1.2.15.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/jline-0.9.94.jar:/home/hadoop/zookeeper-3.4.5/bin/../zookeeper-3.4.5.jar:/home/hadoop/zookeeper-3.4.5/bin/../src/java/lib/*.jar:/home/hadoop/zookeeper-3.4.5/bin/../conf:/usr/lib/jvm/java-7-oracle/lib
2014-07-27 00:27:16,630 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=:/usr/local/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2014-07-27 00:27:16,631 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2014-07-27 00:27:16,631 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2014-07-27 00:27:16,632 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2014-07-27 00:27:16,632 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2014-07-27 00:27:16,632 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=3.11.0-15-generic
2014-07-27 00:27:16,633 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
2014-07-27 00:27:16,633 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
2014-07-27 00:27:16,634 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/home/hadoop
2014-07-27 00:27:16,636 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@19b1ebe5
Welcome to ZooKeeper!
2014-07-27 00:27:16,672 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@966] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2014-07-27 00:27:16,685 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@849] - Socket connection established to localhost/127.0.0.1:2181, initiating session
JLine support is enabled
2014-07-27 00:27:16,719 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1207] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x147737cd5d30000, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1]


Format the HA state in ZooKeeper on m1; the log message on line 33 below (`Successfully created /hadoop-ha/mycluster in ZK`) indicates success.

  1. root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs zkfc -formatZK
  2. 14/07/27 00:31:59 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at m1/192.168.1.50:9000
  3. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
  4. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:host.name=m1
  5. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_65
  6. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
  7. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-7-oracle/jre
  8. 14/07/2700:32:00 INFO zookeeper.ZooKeeper:Client environment:java.class.path=/home/hadoop/hadoop-2.2.0/etc/hadoop:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/guava-11.0.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-net-3.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/paranamer-2.3.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-math-2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-lang-2.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-logging-1.1.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-digester-1.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jsp-api-2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jettison-1.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/xmlenc-0.52.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/h
ome/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-io-2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jsch-0.1.42.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/junit-4.8.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/asm-3.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jersey-json-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/stax-api-1.0.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jets3t-0.6.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/avro-1.7.4.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-el-1.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/activation-1.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/xz-1.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/hadoop-nfs-2.2.0.jar:/home/hadoop/hadoop-2.2.0/shar
e/hadoop/common/hadoop-common-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-io-2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/asm-3.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/ham
crest-core-1.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/paranamer-2.3.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/guice-3.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/junit-4.10.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/commons-io-2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/javax.inject-1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/asm-3.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/avro-1.7.4.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/xz-1.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn
-server-resourcemanager-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/hadoop/hadoop-2.2.0/share
/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/home/hadoop/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
  9. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/hadoop-2.2.0/lib/native
  10. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
  11. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
  12. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
  13. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
  14. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:os.version=3.11.0-15-generic
  15. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:user.name=root
  16. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
  17. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop
  18. 14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=m1:2181,m2:2181,s1:2181,s2:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@5990054a
  19. 14/07/27 00:32:00 INFO zookeeper.ClientCnxn: Opening socket connection to server m1/192.168.1.50:2181. Will not attempt to authenticate using SASL (unknown error)
  20. 14/07/27 00:32:00 INFO zookeeper.ClientCnxn: Socket connection established to m1/192.168.1.50:2181, initiating session
  21. 14/07/27 00:32:00 INFO zookeeper.ClientCnxn: Session establishment complete on server m1/192.168.1.50:2181, sessionid = 0x147737cd5d30001, negotiated timeout = 5000
  22. ===============================================
  23. The configured parent znode /hadoop-ha/mycluster already exists.
  24. Are you sure you want to clear all failover information from
  25. ZooKeeper?
  26. WARNING: Before proceeding, ensure that all HDFS services and
  27. failover controllers are stopped!
  28. ===============================================
  29. Proceed formatting /hadoop-ha/mycluster? (Y or N) 14/07/27 00:32:00 INFO ha.ActiveStandbyElector: Session connected.
  30. y
  31. 14/07/27 00:32:13 INFO ha.ActiveStandbyElector: Recursively deleting /hadoop-ha/mycluster from ZK...
  32. 14/07/27 00:32:13 INFO ha.ActiveStandbyElector: Successfully deleted /hadoop-ha/mycluster from ZK.
  33. 14/07/27 00:32:13 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
  34. 14/07/27 00:32:13 INFO zookeeper.ClientCnxn: EventThread shut down
  35. 14/07/27 00:32:13 INFO zookeeper.ZooKeeper: Session: 0x147737cd5d30001 closed
  36. root@m1:/home/hadoop#

Verify that the zkfc format succeeded: if a new `hadoop-ha` znode now exists, it worked.

root@m1:/home/hadoop# /home/hadoop/zookeeper-3.4.5/bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /
[hadoop-ha, zookeeper]
[zk: localhost:2181(CONNECTED) 1]

Start the JournalNode Cluster

1) Run the following on m1, s1, and s2 in turn:

root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-root-journalnode-m1.out
root@m1:/home/hadoop# jps
2884 JournalNode
2553 QuorumPeerMain
2922 Jps
root@m1:/home/hadoop#

2) Format one NameNode of the cluster (m1). There are two ways; I used the first.
Method 1:

root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs namenode -format

Method 2:

root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs namenode -format -clusterId m1

3) Start the freshly formatted NameNode on m1:

root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start namenode

After running the command, browse to http://m1:50070/dfshealth.jsp to see m1's status.
 
 
4) On s1, copy m1's metadata over by running the following on s1:

root@s1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs namenode -bootstrapStandby

5) Start the NameNode on s1:

root@s1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start namenode

Browse to http://s1:50070/dfshealth.jsp to see s1's status. At this point the web UIs show both m1 and s1 in the standby state.
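Both NameNodes reporting standby is expected at this stage, because the failover controllers are not running yet. The HA state can also be checked from the command line with `hdfs haadmin` and the NameNode IDs from hdfs-site.xml; a hedged sketch (requires the running cluster above, so it cannot be executed standalone):

```shell
# Query each NameNode's HA state by its configured ID (nn1 = m1, nn2 = s1):
/home/hadoop/hadoop-2.2.0/bin/hdfs haadmin -getServiceState nn1
/home/hadoop/hadoop-2.2.0/bin/hdfs haadmin -getServiceState nn2
# Each command prints "active" or "standby". With automatic failover enabled,
# manual state transitions are refused unless forced, so the normal path is
# simply to start zkfc and let leader election pick the active node.
```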

Start all the DataNodes; run on m1:

root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemons.sh start datanode
s2: starting datanode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-root-datanode-s2.out
s1: starting datanode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-root-datanode-s1.out
root@m1:/home/hadoop#

Start YARN by running the following on m1:

root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-root-resourcemanager-m1.out
s1: starting nodemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-root-nodemanager-s1.out
s2: starting nodemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-root-nodemanager-s2.out
root@m1:/home/hadoop#

Then browse to http://m1:8088/cluster to see the result.

Start the ZooKeeperFailoverController (zkfc) on m1 and s1 in turn. Afterwards, refresh the 50070 pages: m1 has now become active, while s1 is still standby.

root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start zkfc
starting zkfc, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-root-zkfc-m1.out
root@m1:/home/hadoop#

Verify that HDFS works

 
root@m1:/home/hadoop/hadoop-2.2.0/bin# /home/hadoop/hadoop-2.2.0/bin/hdfs dfs -ls /
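The wordcount test below assumes a /input directory already populated in HDFS, a step the article never shows. A hedged sketch of staging the sample file (using bin/hadoop.cmd, a text file shipped in the Hadoop tarball, as the input; any text file works):

```shell
# Create the input directory in HDFS and upload a sample text file.
/home/hadoop/hadoop-2.2.0/bin/hdfs dfs -mkdir /input
/home/hadoop/hadoop-2.2.0/bin/hdfs dfs -put /home/hadoop/hadoop-2.2.0/bin/hadoop.cmd /input
/home/hadoop/hadoop-2.2.0/bin/hdfs dfs -ls /input
```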

Verify that YARN works with the classic example: count the word frequencies of the hadoop.cmd file placed under /input.

root@m1:/home/hadoop/hadoop-2.2.0/bin# /home/hadoop/hadoop-2.2.0/bin/hadoop jar /home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /input /output
14/07/27 01:22:41 INFO client.RMProxy: Connecting to ResourceManager at m1/192.168.1.50:8032
14/07/27 01:22:43 INFO input.FileInputFormat: Total input paths to process : 1
14/07/27 01:22:44 INFO mapreduce.JobSubmitter: number of splits:1
14/07/27 01:22:44 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/07/27 01:22:44 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
14/07/27 01:22:44 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/07/27 01:22:44 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/07/27 01:22:45 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1406394452186_0001
14/07/27 01:22:46 INFO impl.YarnClientImpl: Submitted application application_1406394452186_0001 to ResourceManager at m1/192.168.1.50:8032
14/07/27 01:22:46 INFO mapreduce.Job: The url to track the job: http://m1:8088/proxy/application_1406394452186_0001/
14/07/27 01:22:46 INFO mapreduce.Job: Running job: job_1406394452186_0001
14/07/27 01:23:10 INFO mapreduce.Job: Job job_1406394452186_0001 running in uber mode : false
14/07/27 01:23:10 INFO mapreduce.Job:  map 0% reduce 0%
14/07/27 01:23:31 INFO mapreduce.Job:  map 100% reduce 0%
14/07/27 01:23:48 INFO mapreduce.Job:  map 100% reduce 100%
14/07/27 01:23:48 INFO mapreduce.Job: Job job_1406394452186_0001 completed successfully
14/07/27 01:23:49 INFO mapreduce.Job: Counters: 43
	File System Counters
		FILE: Number of bytes read=6574
		FILE: Number of bytes written=175057
		FILE: Number of read operations=0
		FILE: Number of large read operations=0
		FILE: Number of write operations=0
		HDFS: Number of bytes read=7628
		HDFS: Number of bytes written=5088
		HDFS: Number of read operations=6
		HDFS: Number of large read operations=0
		HDFS: Number of write operations=2
	Job Counters
		Launched map tasks=1
		Launched reduce tasks=1
		Data-local map tasks=1
		Total time spent by all maps in occupied slots (ms)=18062
		Total time spent by all reduces in occupied slots (ms)=14807
	Map-Reduce Framework
		Map input records=240
		Map output records=827
		Map output bytes=9965
		Map output materialized bytes=6574
		Input split bytes=98
		Combine input records=827
		Combine output records=373
		Reduce input groups=373
		Reduce shuffle bytes=6574
		Reduce input records=373
		Reduce output records=373
		Spilled Records=746
		Shuffled Maps=1
		Failed Shuffles=0
		Merged Map outputs=1
		GC time elapsed (ms)=335
		CPU time spent (ms)=2960
		Physical memory (bytes) snapshot=270057472
		Virtual memory (bytes) snapshot=1990762496
		Total committed heap usage (bytes)=136450048
	Shuffle Errors
		BAD_ID=0
		CONNECTION=0
		IO_ERROR=0
		WRONG_LENGTH=0
		WRONG_MAP=0
		WRONG_REDUCE=0
	File Input Format Counters
		Bytes Read=7530
	File Output Format Counters
		Bytes Written=5088
root@m1:/home/hadoop/hadoop-2.2.0/bin#

Verify HA failover. With the 50070 pages of m1 and s1 open in a browser, m1 shows active and s1 shows standby.

a) Kill the NameNode process on the active node, m1:
root@m1:/home/hadoop/hadoop-2.2.0/bin# jps
5492 Jps
2884 JournalNode
4375 DFSZKFailoverController
2553 QuorumPeerMain
3898 NameNode
4075 ResourceManager
root@m1:/home/hadoop/hadoop-2.2.0/bin# kill -9 3898
root@m1:/home/hadoop/hadoop-2.2.0/bin# jps
2884 JournalNode
4375 DFSZKFailoverController
2553 QuorumPeerMain
4075 ResourceManager
5627 Jps
root@m1:/home/hadoop/hadoop-2.2.0/bin#

Refresh the two 50070 pages again: m1 no longer responds, while s1 is now in the active state.

HDFS and MapReduce continue to run normally on s1: even though the NameNode process on m1 has been killed, the cluster remains usable. That is the benefit of automatic failover.
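To complete the exercise, the failed NameNode can be brought back; with zkfc running it rejoins as standby rather than reclaiming the active role. A hedged sketch (run on m1, against the running cluster above):

```shell
# Restart the killed NameNode; after a few seconds its web UI at
# http://m1:50070 should report "standby", while s1 stays active.
/home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start namenode
```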


Permanent link to this article: http://www.linuxidc.com/Linux/2016-08/134187.htm

	<property>
		<name>dfs.ha.namenodes.mycluster</name>
		<value>nn1,nn2</value>
	</property>
	<!-- RPC addresses for nn1 and nn2 -->
	<property>
		<name>dfs.namenode.rpc-address.mycluster.nn1</name>
		<value>nameNode:8020</value>
	</property>
	<property>
		<name>dfs.namenode.rpc-address.mycluster.nn2</name>
		<value>dataNode1:8020</value>
	</property>
	<!-- Web UI (HTTP) addresses for nn1 and nn2 -->
	<property>
		<name>dfs.namenode.http-address.mycluster.nn1</name>
		<value>nameNode:50070</value>
	</property>
	<property>
		<name>dfs.namenode.http-address.mycluster.nn2</name>
		<value>dataNode1:50070</value>
	</property>

	<!-- JournalNode quorum through which the active NameNode publishes edits and the standby reads them -->
	<property>
		<name>dfs.namenode.shared.edits.dir</name>
		<value>qjournal://nameNode:8485;dataNode1:8485;dataNode2:8485/mycluster</value>
	</property>
	<!-- Local directory in which each JournalNode stores its edit-log segments -->
	<property>
		<name>dfs.journalnode.edits.dir</name>
		<value>/home/hadoop/hadoop/src/hadoop-2.5.0/tmp/dfs/journalnode</value>
	</property>

	<!-- Proxy provider HDFS clients use to locate the currently active NameNode -->
	<property>
		<name>dfs.client.failover.proxy.provider.mycluster</name>
		<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
	</property>

	<!-- Fence the old active NameNode over SSH during failover; requires passwordless SSH using the key below -->
	<property>
		<name>dfs.ha.fencing.methods</name>
		<value>sshfence</value>
	</property>
	<property>
		<name>dfs.ha.fencing.ssh.private-key-files</name>
		<value>/home/hadoop/.ssh/id_dsa</value>
	</property>
	
	<!-- Enable automatic failover (handled by the ZKFailoverControllers) -->
	<property>
		<name>dfs.ha.automatic-failover.enabled</name>
		<value>true</value>
	</property>	
	<!-- Upper bound on a file's replication factor -->
	<property>
		<name>dfs.replication.max</name>
		<value>32767</value>
	</property>

</configuration>
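The HA properties in hdfs-site.xml are tied together purely by naming convention (dfs.nameservices → dfs.ha.namenodes.&lt;nameservice&gt; → dfs.namenode.rpc-address.&lt;nameservice&gt;.&lt;nnid&gt;), which is easy to get wrong. The wiring can be cross-checked offline with a small script; `check_ha_wiring` is a hypothetical helper written for illustration, not part of the Hadoop tooling, and the embedded XML is an abbreviated copy of the config above:

```python
# Sanity-check the HA wiring in hdfs-site.xml: every NameNode ID listed in
# dfs.ha.namenodes.<nameservice> must have a matching rpc-address property.
import xml.etree.ElementTree as ET

HDFS_SITE = """
<configuration>
  <property><name>dfs.nameservices</name><value>mycluster</value></property>
  <property><name>dfs.ha.namenodes.mycluster</name><value>nn1,nn2</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn1</name><value>nameNode:8020</value></property>
  <property><name>dfs.namenode.rpc-address.mycluster.nn2</name><value>dataNode1:8020</value></property>
</configuration>
"""

def check_ha_wiring(xml_text):
    # Flatten <property> elements into a name -> value dictionary.
    props = {p.findtext("name"): p.findtext("value")
             for p in ET.fromstring(xml_text).iter("property")}
    ns = props["dfs.nameservices"]
    nn_ids = props["dfs.ha.namenodes.%s" % ns].split(",")
    # Each logical NameNode ID needs a concrete RPC address.
    return {nn: props["dfs.namenode.rpc-address.%s.%s" % (ns, nn)]
            for nn in nn_ids}

print(check_ha_wiring(HDFS_SITE))
```

Running it prints the resolved RPC address for each logical NameNode ID; a KeyError would point directly at the missing or misspelled property.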
core-site.xml

<configuration>
	<property>
		<!-- With HA there are two NameNodes, so clients address the logical nameservice rather than a single host -->
		<name>fs.defaultFS</name>
		<value>hdfs://mycluster</value>
	</property>
	<property>
		<!-- Logical nameservice ID (mirrors the value in hdfs-site.xml) -->
		<name>dfs.nameservices</name>
		<value>mycluster</value>
	</property>
	<property>
		<!-- ZooKeeper ensemble used by the ZKFailoverControllers -->
		<name>ha.zookeeper.quorum</name>
		<value>nameNode:2181,dataNode1:2181,dataNode2:2181</value>
	</property>
	<property>
		<!-- Base directory for Hadoop's temporary and working files -->
		<name>hadoop.tmp.dir</name>
		<value>/home/hadoop/hadoop/src/hadoop-2.5.0/tmp</value>
	</property>
</configuration>
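`ha.zookeeper.quorum` is a comma-separated list of host:port pairs, and a ZooKeeper ensemble stays writable only while a majority of its servers are up, which is why an odd number (three here) is deployed. A minimal sketch of how such a quorum string breaks down; `parse_quorum` is a hypothetical helper, assuming 2181 as the default port:

```python
# Split ha.zookeeper.quorum into (host, port) pairs, the way a client would.
# An ensemble of n servers tolerates (n - 1) // 2 simultaneous failures.
def parse_quorum(quorum):
    servers = []
    for entry in quorum.split(","):
        host, _, port = entry.strip().partition(":")
        servers.append((host, int(port) if port else 2181))
    return servers

quorum = "nameNode:2181,dataNode1:2181,dataNode2:2181"
servers = parse_quorum(quorum)
print(servers)
print("tolerated failures:", (len(servers) - 1) // 2)
```

With three servers the ensemble tolerates one failure; a two-node ensemble would tolerate none, which is why even-sized ensembles are avoided.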

Usage

Starting ZooKeeper

Run the following on every node (m1, s1 and s2); the session below was captured on s1.

root@s1:/home/hadoop# /home/hadoop/zookeeper-3.4.5/bin/zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.5/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
root@s1:/home/hadoop# /home/hadoop/zookeeper-3.4.5/bin/zkServer.sh status
JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.5/bin/../conf/zoo.cfg
Mode: leader
root@s1:/home/hadoop#
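In a healthy three-node ensemble, `zkServer.sh status` reports `Mode: leader` on exactly one server and `Mode: follower` on the other two. A hypothetical parser for that status output, useful when scripting a cluster-wide health check:

```python
# Extract the role from `zkServer.sh status` output: "leader", "follower",
# or None when no Mode line is present (e.g. "Error contacting service").
def parse_mode(status_output):
    for line in status_output.splitlines():
        if line.startswith("Mode:"):
            return line.split(":", 1)[1].strip()
    return None

sample = """JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.5/bin/../conf/zoo.cfg
Mode: leader"""
print(parse_mode(sample))
```

Running this over the status output of all three nodes should yield one "leader" and two "follower" results; anything else indicates the election has not completed.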
 

To verify that ZooKeeper started successfully, connect with zkCli.sh: the [zookeeper] znode returned by the `ls /` command at the end of the session below indicates success.

root@m1:/home/hadoop# /home/hadoop/zookeeper-3.4.5/bin/zkCli.sh
Connecting to localhost:2181
2014-07-27 00:27:16,621 [myid:] - INFO  [main:Environment@100] - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
2014-07-27 00:27:16,628 [myid:] - INFO  [main:Environment@100] - Client environment:host.name=m1
2014-07-27 00:27:16,628 [myid:] - INFO  [main:Environment@100] - Client environment:java.version=1.7.0_65
2014-07-27 00:27:16,629 [myid:] - INFO  [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2014-07-27 00:27:16,629 [myid:] - INFO  [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-7-oracle/jre
2014-07-27 00:27:16,630 [myid:] - INFO  [main:Environment@100] - Client environment:java.class.path=/home/hadoop/zookeeper-3.4.5/bin/../build/classes:/home/hadoop/zookeeper-3.4.5/bin/../build/lib/*.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/slf4j-api-1.6.1.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/netty-3.2.2.Final.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/log4j-1.2.15.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/jline-0.9.94.jar:/home/hadoop/zookeeper-3.4.5/bin/../zookeeper-3.4.5.jar:/home/hadoop/zookeeper-3.4.5/bin/../src/java/lib/*.jar:/home/hadoop/zookeeper-3.4.5/bin/../conf:/usr/lib/jvm/java-7-oracle/lib
2014-07-27 00:27:16,630 [myid:] - INFO  [main:Environment@100] - Client environment:java.library.path=:/usr/local/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2014-07-27 00:27:16,631 [myid:] - INFO  [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2014-07-27 00:27:16,631 [myid:] - INFO  [main:Environment@100] - Client environment:java.compiler=<NA>
2014-07-27 00:27:16,632 [myid:] - INFO  [main:Environment@100] - Client environment:os.name=Linux
2014-07-27 00:27:16,632 [myid:] - INFO  [main:Environment@100] - Client environment:os.arch=amd64
2014-07-27 00:27:16,632 [myid:] - INFO  [main:Environment@100] - Client environment:os.version=3.11.0-15-generic
2014-07-27 00:27:16,633 [myid:] - INFO  [main:Environment@100] - Client environment:user.name=root
2014-07-27 00:27:16,633 [myid:] - INFO  [main:Environment@100] - Client environment:user.home=/root
2014-07-27 00:27:16,634 [myid:] - INFO  [main:Environment@100] - Client environment:user.dir=/home/hadoop
2014-07-27 00:27:16,636 [myid:] - INFO  [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@19b1ebe5
Welcome to ZooKeeper!
2014-07-27 00:27:16,672 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@966] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2014-07-27 00:27:16,685 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@849] - Socket connection established to localhost/127.0.0.1:2181, initiating session
JLine support is enabled
2014-07-27 00:27:16,719 [myid:] - INFO  [main-SendThread(localhost:2181):ClientCnxn$SendThread@1207] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x147737cd5d30000, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1]
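A root listing containing only `[zookeeper]` is the expected state of a fresh ensemble. Later in an HA setup, running `hdfs zkfc -formatZK` creates a `hadoop-ha` znode under the root; a trivial check of an already-fetched listing (`ha_initialized` is a hypothetical helper, shown only for illustration):

```python
# Interpret an `ls /` listing from zkCli: a fresh ensemble holds only the
# built-in "zookeeper" znode; "hadoop-ha" appears once `hdfs zkfc -formatZK`
# has initialized the automatic-failover state.
def ha_initialized(root_children):
    return "hadoop-ha" in root_children

print(ha_initialized(["zookeeper"]))               # fresh ensemble
print(ha_initialized(["zookeeper", "hadoop-ha"]))  # after formatZK
```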


Copyright: original article by 星锅, published 2022-01-21, 34845 characters in total.
Reprinting: unless otherwise stated, articles on this site are released under the CC-4.0 license; please credit the source when reprinting.