Hadoop HA Introduction
1. Overview
This guide gives an overview of the HDFS High Availability (HA) feature and explains how to configure and manage an HDFS HA cluster. It assumes the reader has a basic understanding of the components and node types of an HDFS cluster. For details, see the Apache HDFS Architecture Guide:
http://hadoop.apache.org/common/docs/current/hdfs_design.html
2. Background
Before CDH4, the NameNode was a single point of failure (SPOF) in an HDFS cluster. With only one NameNode, if the NameNode machine failed, the whole cluster was unusable until the NameNode was restarted.
The NameNode affects HDFS cluster availability in two main ways:
(1). If the NameNode machine fails unexpectedly (e.g. crashes), the cluster is unusable until an administrator restarts the NameNode.
(2). If the NameNode machine needs planned maintenance, such as software or hardware upgrades, the cluster is likewise unavailable for the duration.
HDFS HA solves these problems by running two NameNodes in an Active/Standby pair, giving the cluster a hot standby for the NameNode. On a failure such as a machine crash, or when a machine needs upgrade maintenance, the NameNode role can be switched quickly to the other machine.
3. Architecture
In a typical HDFS HA cluster, two separate machines are configured as NameNodes. At any point in time exactly one of them is in the Active state and the other is in the Standby state. The Active NameNode handles all client operations in the cluster, while the Standby NameNode acts purely as a hot backup, ready to take over quickly if the Active NameNode fails.
To keep the Standby node's state synchronized with the Active node, both nodes currently access a shared storage device (such as NFS); this mechanism may be improved in future releases.
When the Active node's namespace changes, the edit log stored in the shared directory is updated as well. The Standby node watches the log for changes and applies them to its own namespace. During a failover, before the Standby node transitions to Active it applies all unread edits to its own namespace, guaranteeing that its namespace is consistent with the former Active node's.
Fast failover also requires that the Standby node has up-to-date block location information for the cluster. To that end, DataNodes are configured with the locations of both NameNodes and send block reports and heartbeats to both.
It is vital for the correct operation of an HA cluster that only one NameNode is Active at any time; otherwise the two namespaces would diverge, risking data loss or other incorrect results. To prevent this, the administrator must configure at least one fencing method for the shared storage. During a failover, if it cannot be verified that the previous Active node has given up its Active state, the fencing mechanism cuts off that node's access to the shared storage. This stops the previous Active node from making further namespace edits, so the new Active node can take over safely.
System environment
Hadoop 2.5.0
ZooKeeper 3.4.5
JDK 1.7
(Note: the configuration files below use paths under hadoop-2.5.0, while the shell sessions were captured on a hadoop-2.2.0 installation; the paths differ but the steps are the same.)
Cluster nodes
m1 | 192.168.2.100 | NameNode, ResourceManager, ZooKeeper
s1 | 192.168.2.101 | Standby NameNode, DataNode, NodeManager, ZooKeeper
s2 | 192.168.2.102 | DataNode, NodeManager, ZooKeeper
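Note that the configuration files below address the machines as nameNode, dataNode1 and dataNode2, while the shell sessions use m1, s1 and s2. For both naming schemes to resolve, every node needs consistent /etc/hosts entries; a minimal sketch, assuming the aliases map onto the three hosts above (adjust to your network):
# /etc/hosts, identical on every node (assumed mapping)
192.168.2.100 m1 nameNode
192.168.2.101 s1 dataNode1
192.168.2.102 s2 dataNode2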
Configure ZooKeeper
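The article leaves this section empty, so here is a minimal sketch of a three-node ensemble configuration; the dataDir and ports are assumptions, not the author's values. On each node, conf/zoo.cfg would contain something like:
# conf/zoo.cfg (assumed values; adjust dataDir to your layout)
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/home/hadoop/zookeeper-3.4.5/data
clientPort=2181
server.1=m1:2888:3888
server.2=s1:2888:3888
server.3=s2:2888:3888
Each node also needs a myid file under dataDir holding its own server number, e.g. on m1:
root@m1:/home/hadoop# echo 1 > /home/hadoop/zookeeper-3.4.5/data/myid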
Configure Hadoop
hdfs-site.xml
<configuration>
<!-- Nameservice ID for this cluster: mycluster -->
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<!-- NameNode IDs nn1,nn2 under nameservice mycluster -->
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<!-- RPC addresses for nn1 and nn2 -->
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>nameNode:8020</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>dataNode1:8020</value>
</property>
<!-- Web UI (HTTP) addresses for nn1 and nn2 -->
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>nameNode:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>dataNode1:50070</value>
</property>
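<!-- Shared edits storage: the JournalNode quorum the Active NameNode writes its edit log to -->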
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://nameNode:8485;dataNode1:8485;dataNode2:8485/mycluster</value>
</property>
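<!-- Local directory where each JournalNode stores its copy of the edits -->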
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/home/hadoop/hadoop/src/hadoop-2.5.0/tmp/dfs/journalnode</value>
</property>
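<!-- Class HDFS clients use to locate the currently Active NameNode -->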
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
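<!-- Fencing: on failover, SSH to the previous Active NameNode and kill the process, using the private key below -->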
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/home/hadoop/.ssh/id_dsa</value>
</property>
<!-- Enable automatic failover -->
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
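<!-- Upper limit on the replication factor a client may request -->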
<property>
<name>dfs.replication.max</name>
<value>32767</value>
</property>
</configuration>
core-site.xml
<configuration>
<property>
<!-- Default filesystem URI. With HA there are two NameNodes, so this must point at the logical nameservice rather than a single host -->
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<!-- ZooKeeper quorum used for automatic failover -->
<name>ha.zookeeper.quorum</name>
<value>nameNode:2181,dataNode1:2181,dataNode2:2181</value>
</property>
<property>
<!-- Base directory for Hadoop temporary files -->
<name>hadoop.tmp.dir</name>
<value>/home/hadoop/hadoop/src/hadoop-2.5.0/tmp</value>
</property>
</configuration>
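The article omits a few files the later commands rely on. For hadoop-daemons.sh and start-yarn.sh to reach the worker nodes, the slaves file must list them; and for the wordcount job below to run on YARN (its log connects to a ResourceManager on port 8032), mapred-site.xml and yarn-site.xml need at least the following. Treat this as a sketch under those assumptions, not the author's exact configuration.
etc/hadoop/slaves:
s1
s2
mapred-site.xml:
<!-- run MapReduce jobs on YARN instead of the local runner -->
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
yarn-site.xml:
<!-- ResourceManager host, and the shuffle service the NodeManagers must provide -->
<property>
<name>yarn.resourcemanager.hostname</name>
<value>m1</value>
</property>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>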
Usage
Start ZooKeeper
Run this on all of m1, s1 and s2; the session below was captured on s1 as an example.
root@s1:/home/hadoop# /home/hadoop/zookeeper-3.4.5/bin/zkServer.sh start
JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.5/bin/../conf/zoo.cfg
Starting zookeeper ... STARTED
root@s1:/home/hadoop# /home/hadoop/zookeeper-3.4.5/bin/zkServer.sh status
JMX enabled by default
Using config: /home/hadoop/zookeeper-3.4.5/bin/../conf/zoo.cfg
Mode: leader
root@s1:/home/hadoop#
To check that ZooKeeper started successfully, connect with zkCli.sh; if the ls / at the end of the session returns [zookeeper], the ensemble is up.
root@m1:/home/hadoop# /home/hadoop/zookeeper-3.4.5/bin/zkCli.sh
Connecting to localhost:2181
2014-07-27 00:27:16,621 [myid:] - INFO [main:Environment@100] - Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
2014-07-27 00:27:16,628 [myid:] - INFO [main:Environment@100] - Client environment:host.name=m1
2014-07-27 00:27:16,628 [myid:] - INFO [main:Environment@100] - Client environment:java.version=1.7.0_65
2014-07-27 00:27:16,629 [myid:] - INFO [main:Environment@100] - Client environment:java.vendor=Oracle Corporation
2014-07-27 00:27:16,629 [myid:] - INFO [main:Environment@100] - Client environment:java.home=/usr/lib/jvm/java-7-oracle/jre
2014-07-27 00:27:16,630 [myid:] - INFO [main:Environment@100] - Client environment:java.class.path=/home/hadoop/zookeeper-3.4.5/bin/../build/classes:/home/hadoop/zookeeper-3.4.5/bin/../build/lib/*.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/slf4j-log4j12-1.6.1.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/slf4j-api-1.6.1.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/netty-3.2.2.Final.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/log4j-1.2.15.jar:/home/hadoop/zookeeper-3.4.5/bin/../lib/jline-0.9.94.jar:/home/hadoop/zookeeper-3.4.5/bin/../zookeeper-3.4.5.jar:/home/hadoop/zookeeper-3.4.5/bin/../src/java/lib/*.jar:/home/hadoop/zookeeper-3.4.5/bin/../conf:/usr/lib/jvm/java-7-oracle/lib
2014-07-27 00:27:16,630 [myid:] - INFO [main:Environment@100] - Client environment:java.library.path=:/usr/local/lib:/usr/java/packages/lib/amd64:/usr/lib64:/lib64:/lib:/usr/lib
2014-07-27 00:27:16,631 [myid:] - INFO [main:Environment@100] - Client environment:java.io.tmpdir=/tmp
2014-07-27 00:27:16,631 [myid:] - INFO [main:Environment@100] - Client environment:java.compiler=<NA>
2014-07-27 00:27:16,632 [myid:] - INFO [main:Environment@100] - Client environment:os.name=Linux
2014-07-27 00:27:16,632 [myid:] - INFO [main:Environment@100] - Client environment:os.arch=amd64
2014-07-27 00:27:16,632 [myid:] - INFO [main:Environment@100] - Client environment:os.version=3.11.0-15-generic
2014-07-27 00:27:16,633 [myid:] - INFO [main:Environment@100] - Client environment:user.name=root
2014-07-27 00:27:16,633 [myid:] - INFO [main:Environment@100] - Client environment:user.home=/root
2014-07-27 00:27:16,634 [myid:] - INFO [main:Environment@100] - Client environment:user.dir=/home/hadoop
2014-07-27 00:27:16,636 [myid:] - INFO [main:ZooKeeper@438] - Initiating client connection, connectString=localhost:2181 sessionTimeout=30000 watcher=org.apache.zookeeper.ZooKeeperMain$MyWatcher@19b1ebe5
Welcome to ZooKeeper!
2014-07-27 00:27:16,672 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@966] - Opening socket connection to server localhost/127.0.0.1:2181. Will not attempt to authenticate using SASL (unknown error)
2014-07-27 00:27:16,685 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@849] - Socket connection established to localhost/127.0.0.1:2181, initiating session
JLine support is enabled
2014-07-27 00:27:16,719 [myid:] - INFO [main-SendThread(localhost:2181):ClientCnxn$SendThread@1207] - Session establishment complete on server localhost/127.0.0.1:2181, sessionid = 0x147737cd5d30000, negotiated timeout = 30000
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
[zk: localhost:2181(CONNECTED) 0] ls /
[zookeeper]
[zk: localhost:2181(CONNECTED) 1]
Format the HA state in ZooKeeper on m1. The "Successfully created /hadoop-ha/mycluster in ZK." line near the end of the log indicates success.
root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs zkfc -formatZK
14/07/27 00:31:59 INFO tools.DFSZKFailoverController: Failover controller configured for NameNode NameNode at m1/192.168.1.50:9000
14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:zookeeper.version=3.4.5-1392090, built on 09/30/2012 17:52 GMT
14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:host.name=m1
14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.version=1.7.0_65
14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.vendor=Oracle Corporation
14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.home=/usr/lib/jvm/java-7-oracle/jre
14/07/2700:32:00 INFO zookeeper.ZooKeeper:Client environment:java.class.path=/home/hadoop/hadoop-2.2.0/etc/hadoop:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/guava-11.0.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/hadoop-annotations-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-api-1.7.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-net-3.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/paranamer-2.3.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/snappy-java-1.0.4.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-core-1.8.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jasper-compiler-5.5.23.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-math-2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-lang-2.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-logging-1.1.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/mockito-all-1.8.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/hadoop-auth-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-digester-1.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jsp-api-2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jettison-1.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/xmlenc-0.52.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-httpclient-3.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-io-2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jsch-0.1.42.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-beanutils-1.7.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jackson-jaxrs-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/junit-4.8.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-collections-3.2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jsr305-1.3.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jackson-xc-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/asm-3.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jersey-json-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/stax-api-1.0.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jets3t-0.6.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/avro-1.7.4.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-el-1.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/commons-configuration-1.6.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/activation-1.1.jar:/ho
me/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jaxb-api-2.2.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/zookeeper-3.4.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/jaxb-impl-2.2.3-1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/lib/xz-1.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/hadoop-nfs-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/common/hadoop-common-2.2.0-tests.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/guava-11.0.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-codec-1.4.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-lang-2.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/servlet-api-2.5.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-logging-1.1.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jasper-runtime-5.5.23.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jsp-api-2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/xmlenc-0.52.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-io-2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-6.1.26.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jsr305-1.3.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/asm-3.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-el-1.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-cli-1.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/commons-daemon-1.0.13.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/lib/jetty-util-6.1.26.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-nfs-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/hdfs/hadoop-hdfs-2.2.0-tests.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/hamcrest-core-1.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/hadoop-annotations-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-guice-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/paranamer-2.3.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/guice-servlet-3.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/snappy-java-1.0.4.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/guice-3.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/junit-4.10.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/commons-io-2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/
javax.inject-1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/aopalliance-1.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/asm-3.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/avro-1.7.4.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/lib/xz-1.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-nodemanager-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-client-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-unmanaged-am-launcher-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-tests-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-common-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-common-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-resourcemanager-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-api-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-server-web-proxy-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-site-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/yarn/hadoop-yarn-applications-distributedshell-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/hamcrest-core-1.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/hadoop-annotations-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-guice-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/paranamer-2.3.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-servlet-3.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/snappy-java-1.0.4.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/log4j-1.2.17.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/protobuf-java-2.5.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-core-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/guice-3.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-core-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/netty-3.6.2.Final.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/junit-4.10.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-io-2.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/commons-compress-1.4.1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/jackson-mapper-asl-1.8.8.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/javax.inject-1.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/aopalliance-1.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/asm-3.2.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/avro-1.7.4.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/jersey-server-1.9.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/lib/xz-1.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0-tests.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-plugins-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-jobclient-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapre
duce-examples-2.2.0.jar:/home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-client-core-2.2.0.jar:/home/hadoop/hadoop-2.2.0/contrib/capacity-scheduler/*.jar
14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/home/hadoop/hadoop-2.2.0/lib/native
14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:os.arch=amd64
14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:os.version=3.11.0-15-generic
14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:user.name=root
14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Client environment:user.dir=/home/hadoop
14/07/27 00:32:00 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=m1:2181,m2:2181,s1:2181,s2:2181 sessionTimeout=5000 watcher=org.apache.hadoop.ha.ActiveStandbyElector$WatcherWithClientRef@5990054a
14/07/27 00:32:00 INFO zookeeper.ClientCnxn: Opening socket connection to server m1/192.168.1.50:2181. Will not attempt to authenticate using SASL (unknown error)
14/07/27 00:32:00 INFO zookeeper.ClientCnxn: Socket connection established to m1/192.168.1.50:2181, initiating session
14/07/27 00:32:00 INFO zookeeper.ClientCnxn: Session establishment complete on server m1/192.168.1.50:2181, sessionid = 0x147737cd5d30001, negotiated timeout = 5000
===============================================
The configured parent znode /hadoop-ha/mycluster already exists.
Are you sure you want to clear all failover information from
ZooKeeper?
WARNING: Before proceeding, ensure that all HDFS services and
failover controllers are stopped!
===============================================
Proceed formatting /hadoop-ha/mycluster? (Y or N) 14/07/27 00:32:00 INFO ha.ActiveStandbyElector: Session connected.
y
14/07/27 00:32:13 INFO ha.ActiveStandbyElector: Recursively deleting /hadoop-ha/mycluster from ZK...
14/07/27 00:32:13 INFO ha.ActiveStandbyElector: Successfully deleted /hadoop-ha/mycluster from ZK.
14/07/27 00:32:13 INFO ha.ActiveStandbyElector: Successfully created /hadoop-ha/mycluster in ZK.
14/07/27 00:32:13 INFO zookeeper.ClientCnxn: EventThread shut down
14/07/27 00:32:13 INFO zookeeper.ZooKeeper: Session: 0x147737cd5d30001 closed
root@m1:/home/hadoop#
Verify that the zkfc format succeeded: if a hadoop-ha znode has appeared at the root, it worked.
root@m1:/home/hadoop# /home/hadoop/zookeeper-3.4.5/bin/zkCli.sh
[zk: localhost:2181(CONNECTED) 0] ls /
[hadoop-ha, zookeeper]
[zk: localhost:2181(CONNECTED) 1]
Start the JournalNode cluster
1) Run the following on each of m1, s1 and s2 in turn:
root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start journalnode
starting journalnode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-root-journalnode-m1.out
root@m1:/home/hadoop# jps
2884 JournalNode
2553 QuorumPeerMain
2922 Jps
root@m1:/home/hadoop#
2) Format one of the cluster's NameNodes (m1). There are two ways; the first is used here.
Method 1:
root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs namenode -format
Method 2:
root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs namenode -format -clusterId m1
3) Start the freshly formatted NameNode on m1:
root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start namenode
After the command completes, browse to http://m1:50070/dfshealth.jsp to see m1's status.
4) On s1, pull m1's metadata over by running:
root@s1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs namenode -bootstrapStandby
5) Start the NameNode on s1:
root@s1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start namenode
Browse to http://s1:50070/dfshealth.jsp to see s1's status. At this point both web UIs report m1 and s1 as standby.
Start all DataNodes; on m1 run:
root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemons.sh start datanode
s2: starting datanode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-root-datanode-s2.out
s1: starting datanode, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-root-datanode-s1.out
root@m1:/home/hadoop#
Start YARN; on m1 run the following:
root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-root-resourcemanager-m1.out
s1: starting nodemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-root-nodemanager-s1.out
s2: starting nodemanager, logging to /home/hadoop/hadoop-2.2.0/logs/yarn-root-nodemanager-s2.out
root@m1:/home/hadoop#
Then browse to http://m1:8088/cluster to see the result.
Start the failover controllers (DFSZKFailoverController) on m1 and s1 in turn with the command below. Revisiting port 50070 afterwards, m1 is now in the active state while s1 remains standby.
root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start zkfc
starting zkfc, logging to /home/hadoop/hadoop-2.2.0/logs/hadoop-root-zkfc-m1.out
root@m1:/home/hadoop#
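Besides the web UI, the HA state can be queried on the command line with hdfs haadmin, using the NameNode IDs nn1/nn2 defined in hdfs-site.xml (the output shown is what to expect at this point):
root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs haadmin -getServiceState nn1
active
root@m1:/home/hadoop# /home/hadoop/hadoop-2.2.0/bin/hdfs haadmin -getServiceState nn2
standby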
Test that HDFS works:
root@m1:/home/hadoop/hadoop-2.2.0/bin# /home/hadoop/hadoop-2.2.0/bin/hdfs dfs -ls /
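The wordcount test below reads hadoop.cmd from /input, but the article never shows the upload; presumably something like the following was run first (hadoop.cmd ships in the distribution's bin directory):
root@m1:/home/hadoop/hadoop-2.2.0/bin# /home/hadoop/hadoop-2.2.0/bin/hdfs dfs -mkdir /input
root@m1:/home/hadoop/hadoop-2.2.0/bin# /home/hadoop/hadoop-2.2.0/bin/hdfs dfs -put /home/hadoop/hadoop-2.2.0/bin/hadoop.cmd /input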
To test YARN, run the classic example: count the word frequencies of the hadoop.cmd file just uploaded to /input.
root@m1:/home/hadoop/hadoop-2.2.0/bin# /home/hadoop/hadoop-2.2.0/bin/hadoop jar /home/hadoop/hadoop-2.2.0/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.2.0.jar wordcount /input /output
14/07/27 01:22:41 INFO client.RMProxy: Connecting to ResourceManager at m1/192.168.1.50:8032
14/07/27 01:22:43 INFO input.FileInputFormat: Total input paths to process : 1
14/07/27 01:22:44 INFO mapreduce.JobSubmitter: number of splits:1
14/07/27 01:22:44 INFO Configuration.deprecation: user.name is deprecated. Instead, use mapreduce.job.user.name
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.jar is deprecated. Instead, use mapreduce.job.jar
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.output.value.class is deprecated. Instead, use mapreduce.job.output.value.class
14/07/27 01:22:44 INFO Configuration.deprecation: mapreduce.combine.class is deprecated. Instead, use mapreduce.job.combine.class
14/07/27 01:22:44 INFO Configuration.deprecation: mapreduce.map.class is deprecated. Instead, use mapreduce.job.map.class
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.job.name is deprecated. Instead, use mapreduce.job.name
14/07/27 01:22:44 INFO Configuration.deprecation: mapreduce.reduce.class is deprecated. Instead, use mapreduce.job.reduce.class
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.input.dir is deprecated. Instead, use mapreduce.input.fileinputformat.inputdir
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.output.dir is deprecated. Instead, use mapreduce.output.fileoutputformat.outputdir
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.map.tasks is deprecated. Instead, use mapreduce.job.maps
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.output.key.class is deprecated. Instead, use mapreduce.job.output.key.class
14/07/27 01:22:44 INFO Configuration.deprecation: mapred.working.dir is deprecated. Instead, use mapreduce.job.working.dir
14/07/27 01:22:45 INFO mapreduce.JobSubmitter: Submitting tokens for job: job_1406394452186_0001
14/07/27 01:22:46 INFO impl.YarnClientImpl: Submitted application application_1406394452186_0001 to ResourceManager at m1/192.168.1.50:8032
14/07/27 01:22:46 INFO mapreduce.Job: The url to track the job: http://m1:8088/proxy/application_1406394452186_0001/
14/07/27 01:22:46 INFO mapreduce.Job: Running job: job_1406394452186_0001
14/07/27 01:23:10 INFO mapreduce.Job: Job job_1406394452186_0001 running in uber mode : false
14/07/27 01:23:10 INFO mapreduce.Job:  map 0% reduce 0%
14/07/27 01:23:31 INFO mapreduce.Job:  map 100% reduce 0%
14/07/27 01:23:48 INFO mapreduce.Job:  map 100% reduce 100%
14/07/27 01:23:48 INFO mapreduce.Job: Job job_1406394452186_0001 completed successfully
14/07/27 01:23:49 INFO mapreduce.Job: Counters: 43
File System Counters
    FILE: Number of bytes read=6574
    FILE: Number of bytes written=175057
    FILE: Number of read operations=0
    FILE: Number of large read operations=0
    FILE: Number of write operations=0
    HDFS: Number of bytes read=7628
    HDFS: Number of bytes written=5088
    HDFS: Number of read operations=6
    HDFS: Number of large read operations=0
    HDFS: Number of write operations=2
Job Counters
    Launched map tasks=1
    Launched reduce tasks=1
    Data-local map tasks=1
    Total time spent by all maps in occupied slots (ms)=18062
    Total time spent by all reduces in occupied slots (ms)=14807
Map-Reduce Framework
    Map input records=240
    Map output records=827
    Map output bytes=9965
    Map output materialized bytes=6574
    Input split bytes=98
    Combine input records=827
    Combine output records=373
    Reduce input groups=373
    Reduce shuffle bytes=6574
    Reduce input records=373
    Reduce output records=373
    Spilled Records=746
    Shuffled Maps=1
    Failed Shuffles=0
    Merged Map outputs=1
    GC time elapsed (ms)=335
    CPU time spent (ms)=2960
    Physical memory (bytes) snapshot=270057472
    Virtual memory (bytes) snapshot=1990762496
    Total committed heap usage (bytes)=136450048
Shuffle Errors
    BAD_ID=0
    CONNECTION=0
    IO_ERROR=0
    WRONG_LENGTH=0
    WRONG_MAP=0
    WRONG_REDUCE=0
File Input Format Counters
    Bytes Read=7530
File Output Format Counters
    Bytes Written=5088
root@m1:/home/hadoop/hadoop-2.2.0/bin#
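To inspect the result, read the reducer output file; part-r-00000 is the standard name for a single-reducer job:
root@m1:/home/hadoop/hadoop-2.2.0/bin# /home/hadoop/hadoop-2.2.0/bin/hdfs dfs -cat /output/part-r-00000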
Verify HA failover. Earlier we opened port 50070 on m1 and s1 in a browser and saw m1 in the active state and s1 in standby. Now find and kill the NameNode process on m1:
root@m1:/home/hadoop/hadoop-2.2.0/bin# jps
5492 Jps
2884 JournalNode
4375 DFSZKFailoverController
2553 QuorumPeerMain
3898 NameNode
4075 ResourceManager
root@m1:/home/hadoop/hadoop-2.2.0/bin# kill -9 3898
root@m1:/home/hadoop/hadoop-2.2.0/bin# jps
2884 JournalNode
4375 DFSZKFailoverController
2553 QuorumPeerMain
4075 ResourceManager
5627 Jps
root@m1:/home/hadoop/hadoop-2.2.0/bin#
Browse to port 50070 on m1 and s1 again: m1 no longer responds, while s1 has become active.
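To restore redundancy after the test, restart the NameNode on m1; it should rejoin as standby, since the failover controller will not demote the new active node:
root@m1:/home/hadoop/hadoop-2.2.0/bin# /home/hadoop/hadoop-2.2.0/sbin/hadoop-daemon.sh start namenode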