
Installing and Deploying Hadoop 2.7.6 on CentOS 7.4


1. Host Planning

Hostname | Public IP | Private IP  | OS         | Notes        | Installed software
---------|-----------|-------------|------------|--------------|--------------------------------------
mini01   | 10.0.0.11 | 172.16.1.11 | CentOS 7.4 | ssh port: 22 | Hadoop (NameNode, SecondaryNameNode)
mini02   | 10.0.0.12 | 172.16.1.12 | CentOS 7.4 | ssh port: 22 | Hadoop (ResourceManager)
mini03   | 10.0.0.13 | 172.16.1.13 | CentOS 7.4 | ssh port: 22 | Hadoop (DataNode, NodeManager)
mini04   | 10.0.0.14 | 172.16.1.14 | CentOS 7.4 | ssh port: 22 | Hadoop (DataNode, NodeManager)
mini05   | 10.0.0.15 | 172.16.1.15 | CentOS 7.4 | ssh port: 22 | Hadoop (DataNode, NodeManager)


Add hosts entries so that every machine can reach all the others by name:

[root@mini01 ~]# cat /etc/hosts 
127.0.0.1  localhost localhost.localdomain localhost4 localhost4.localdomain4
::1        localhost localhost.localdomain localhost6 localhost6.localdomain6
 
10.0.0.11    mini01
10.0.0.12    mini02
10.0.0.13    mini03
10.0.0.14    mini04
10.0.0.15    mini05
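
To confirm the entries work, a quick reachability loop such as the following can be run on every node (a minimal sketch; the hostnames are those from the plan):

# Optional sanity check: ping each host once and report the result
for h in mini01 mini02 mini03 mini04 mini05; do
    ping -c 1 -W 1 "$h" > /dev/null && echo "$h OK" || echo "$h FAILED"
done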

2. Create the User Account

# Use a dedicated user instead of operating directly as root
# Create the user, set its home directory, and set its password
useradd -d /app yun && echo '123456' | /usr/bin/passwd --stdin yun
# Grant sudo privileges
echo "yun  ALL=(ALL)      NOPASSWD: ALL" >> /etc/sudoers
# Allow other regular users to enter the directory and read its contents
chmod 755 /app/
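
To verify the account and the sudo rule, something like the following can be run (a minimal sketch):

# Switch to the new user and confirm passwordless sudo
su - yun
sudo whoami   # should print "root" without asking for a password
echo $HOME    # should print /app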

3. Passwordless SSH Login for the yun User

Requirement: per the plan, mini01 must be able to log in to mini01, mini02, mini03, mini04, and mini05 without a password,
             and mini02 must be able to log in to mini01, mini02, mini03, mini04, and mini05 without a password.
# Either IPs or hostnames could be distributed; since the cluster is planned to communicate by hostname, hostnames are used here
# Keys distributed by hostname still permit remote login by IP as well as by hostname

3.1. Generate the Key Pair

# Enable passwordless login from mini01 to mini02, mini03, mini04, and mini05
[yun@mini01 ~]$ ssh-keygen -t rsa  # just press Enter at every prompt
Generating public/private rsa key pair.
Enter file in which to save the key (/app/.ssh/id_rsa):
Created directory '/app/.ssh'.
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /app/.ssh/id_rsa.
Your public key has been saved in /app/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:rAFSIyG6Ft6qgGdVl/7v79DJmD7kIDSTcbiLtdKyTQk yun@mini01
The key's randomart image is:
+---[RSA 2048]----+
|. o.o    .      |
|.. o .  o..      |
|… . . o=      |
|..o. oE+B        |
|.o .. .*S*      |
|o ..  +oB.. .= . |
|o.o  .* ..++ +  |
|oo    . .  oo.  |
|.          .++o  |
+----[SHA256]-----+
 
# This creates a ".ssh" directory under the user's home directory
[yun@mini01 ~]$ ll -d .ssh/
drwx------ 2 yun yun 38 Jun  9 19:17 .ssh/
[yun@mini01 ~]$ ll .ssh/
total 8
-rw------- 1 yun yun 1679 Jun  9 19:17 id_rsa
-rw-r--r-- 1 yun yun  392 Jun  9 19:17 id_rsa.pub

3.2. Distribute the Public Key

# Either an IP or a hostname works; since the cluster communicates by hostname, the hostname form is used
[yun@mini01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub 172.16.1.11  # IP form (not used here)
# Distribute
[yun@mini01 ~]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini03  # hostname form (repeat for every host, mini01 through mini05)
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/app/.ssh/id_rsa.pub"
The authenticity of host '[mini03]:22 ([10.0.0.13]:22)' can't be established.
ECDSA key fingerprint is SHA256:pN2NUkgCTt+b9P5TfQZcTh4PF4h7iUxAs6+V7Slp1YI.
ECDSA key fingerprint is MD5:8c:f0:c7:d6:7c:b1:a8:59:1c:c1:5e:d7:52:cb:5f:51.
Are you sure you want to continue connecting (yes/no)? yes
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
yun@mini03's password:
 
Number of key(s) added: 1
 
Now try logging into the machine, with:  "ssh -p '22' 'mini03'"
and check to make sure that only the key(s) you wanted were added.

Distribute keys from mini01

[yun@mini01 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini01
[yun@mini01 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini02
[yun@mini01 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini03
[yun@mini01 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini04
[yun@mini01 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini05

Distribute keys from mini02

[yun@mini02 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini01
[yun@mini02 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini02
[yun@mini02 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini03
[yun@mini02 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini04
[yun@mini02 .ssh]$ ssh-copy-id -i ~/.ssh/id_rsa.pub mini05
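
Since the same command is repeated for every host, the distribution can also be done in a loop (a sketch; each iteration still prompts once for yun's password on the target host):

for h in mini01 mini02 mini03 mini04 mini05; do
    ssh-copy-id -i ~/.ssh/id_rsa.pub "$h"
done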

Test the remote logins (ideally, test every pair)

 [yun@mini02 ~]$ ssh mini05
Last login: Sat Jun  9 19:47:43 2018 from 10.0.0.11
 
Welcome You Login
 
[yun@mini05 ~]$            # the changed prompt shows the remote login succeeded

3.3. How Passwordless Login Works

[Figure: flow of passwordless SSH login between the hosts]
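
In essence, ssh-copy-id appends the local public key to the remote user's ~/.ssh/authorized_keys; a manual equivalent looks roughly like this (a sketch, handy where ssh-copy-id is unavailable):

# Append the local public key to the remote authorized_keys, creating ~/.ssh if needed
cat ~/.ssh/id_rsa.pub | ssh yun@mini03 \
    'mkdir -p ~/.ssh && chmod 700 ~/.ssh && cat >> ~/.ssh/authorized_keys && chmod 600 ~/.ssh/authorized_keys'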

 

3.4. Files in the .ssh Directory

 [yun@mini01 .ssh]$ pwd
/app/.ssh
[yun@mini01 .ssh]$ ll
total 16
-rw------- 1 yun yun  784 Jun  9 19:43 authorized_keys
-rw------- 1 yun yun 1679 Jun  9 19:17 id_rsa
-rw-r--r-- 1 yun yun  392 Jun  9 19:17 id_rsa.pub
-rw-r--r-- 1 yun yun 1332 Jun  9 19:41 known_hosts
########################################################################################
authorized_keys: public keys permitted for passwordless login; this file collects the public keys of multiple machines
id_rsa: the generated private key
id_rsa.pub: the generated public key
known_hosts: the list of known host public keys
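
sshd refuses keys when these files are too permissive, so if passwordless login ever fails, resetting the permissions is the first thing to try (a sketch):

chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys ~/.ssh/id_rsa
chmod 644 ~/.ssh/id_rsa.pub ~/.ssh/known_hosts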

4. JDK (Java 8)

4.1. Install the Software

[yun@mini01 software]$ pwd
/app/software
[yun@mini01 software]$ tar xf jdk1.8.0_112.tar.gz
[yun@mini01 software]$ ll
total 201392
drwxr-xr-x 8  10  143      4096 Dec 20 13:27 jdk1.8.0_112
-rw-r--r-- 1 root root 189815615 Mar 12 16:47 jdk1.8.0_112.tar.gz
[yun@mini01 software]$ mv jdk1.8.0_112/ /app/
[yun@mini01 software]$ cd /app/
[yun@mini01 app]$ ll
total 8
drwxr-xr-x  8  10  143 4096 Dec 20 13:27 jdk1.8.0_112
[yun@mini01 app]$ ln -s jdk1.8.0_112/ jdk
[yun@mini01 app]$ ll
total 8
lrwxrwxrwx  1 root root    13 May 16 23:19 jdk -> jdk1.8.0_112/
drwxr-xr-x  8  10  143 4096 Dec 20 13:27 jdk1.8.0_112

4.2. Environment Variables

[root@mini01 ~]# pwd
/app
[root@mini01 ~]# ll -d jdk*  # choose the JDK version as appropriate; JDK 1.8 is backward compatible with 1.7
lrwxrwxrwx 1 yun yun  11 Mar 15 14:58 jdk -> jdk1.8.0_112
drwxr-xr-x 8 yun yun 4096 Dec 20 13:27 jdk1.8.0_112
[root@mini01 profile.d]# pwd
/etc/profile.d
[root@mini01 profile.d]# cat jdk.sh  # Java environment variables
export JAVA_HOME=/app/jdk
export JRE_HOME=/app/jdk/jre
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib:$CLASSPATH
export PATH=$JAVA_HOME/bin:$PATH
 
[root@mini01 profile.d]# source /etc/profile
[root@mini01 profile.d]# java -version
java version "1.8.0_112"
Java(TM) SE Runtime Environment (build 1.8.0_112-b15)
Java HotSpot(TM) 64-Bit Server VM (build 25.112-b15, mixed mode)

5. Modify the Hadoop Configuration and Start the Cluster (configuration files are identical on all machines)

[yun@mini01 software]$ pwd
/app/software
[yun@mini01 software]$ ll
total 194152
-rw-r--r-- 1 yun yun 198811365 Jun  8 16:36 CentOS-7.4_hadoop-2.7.6.tar.gz
[yun@mini01 software]$ tar xf CentOS-7.4_hadoop-2.7.6.tar.gz
[yun@mini01 software]$ mv hadoop-2.7.6/ /app/
[yun@mini01 software]$ cd
[yun@mini01 ~]$ ln -s hadoop-2.7.6/ hadoop
[yun@mini01 ~]$ ll
total 4
lrwxrwxrwx  1 yun yun  13 Jun  9 16:21 hadoop -> hadoop-2.7.6/
drwxr-xr-x  9 yun yun  149 Jun  8 16:36 hadoop-2.7.6
lrwxrwxrwx  1 yun yun  12 May 26 11:18 jdk -> jdk1.8.0_112
drwxr-xr-x  8 yun yun  255 Sep 23  2016 jdk1.8.0_112

5.1. Environment Variables

[root@mini01 profile.d]# pwd
/etc/profile.d
[root@mini01 profile.d]# vim hadoop.sh 
export HADOOP_HOME="/app/hadoop"
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
 
[root@mini01 profile.d]# source /etc/profile  # apply the change
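
A quick way to confirm the variables took effect (a minimal sketch):

# Confirm the hadoop command is on PATH and reports the expected release
which hadoop     # should print /app/hadoop/bin/hadoop
hadoop version   # should report Hadoop 2.7.6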

5.2. core-site.xml

[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim core-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
……………………
<!-- Put site-specific property overrides in this file. -->

<configuration>
  <!-- The URI of the default file system, i.e. the address of the HDFS master (NameNode) -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://mini01:9000</value>  <!-- mini01 is the hostname -->
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/app/hadoop/tmp</value>
  </property>

  <!-- Enable the trash feature; the retention interval is in minutes -->
  <property>
    <name>fs.trash.interval</name>
    <value>1440</value>
  </property>

</configuration>
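
After saving the file, individual keys can be read back with hdfs getconf, which quickly catches typos in property names (a sketch):

hdfs getconf -confKey fs.defaultFS        # expect hdfs://mini01:9000
hdfs getconf -confKey fs.trash.interval   # expect 1440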

5.3. hdfs-site.xml

[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim hdfs-site.xml
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
………………
<!-- Put site-specific property overrides in this file. -->
 
<configuration>
  <!-- The HDFS replication factor -->
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
 
  <property>
    <!-- Either name tag works. The SecondaryNameNode periodically merges the fsimage and edits log and keeps the edits log within bounds. It is best run on a different machine from the NameNode, since it needs as much memory as the NameNode does -->
    <!-- <name>dfs.secondary.http.address</name> -->
    <name>dfs.namenode.secondary.http-address</name>
    <value>mini01:50090</value>
  </property>
 
  <!-- The NameNode metadata directories; several can be listed, each mounted on a different disk. Each directory holds identical files, so they serve as backups of one another -->
  <!-- Uncomment if needed
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file://${hadoop.tmp.dir}/dfs/name,file://${hadoop.tmp.dir}/dfs/name1,file://${hadoop.tmp.dir}/dfs/name2</value>
  </property>
  -->
 
  <!-- dfs.datanode.data.dir can likewise be set to multiple directories, which effectively adds capacity -->
 
</configuration>

5.4. mapred-site.xml

[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ mv mapred-site.xml.template mapred-site.xml 
[yun@mini01 hadoop]$ vim mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
………………
<!-- Put site-specific property overrides in this file. -->
 
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
 
</configuration>

5.5. yarn-site.xml

[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ vim yarn-site.xml
<?xml version="1.0"?>
……………………
<configuration>
 
<!-- Site specific YARN configuration properties -->
  <!-- The address of the YARN master (ResourceManager) -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>mini02</value>  <!-- per the plan, mini02 is the ResourceManager -->
  </property>
 
  <!-- How reducers fetch intermediate data -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
 
</configuration>

5.6. slaves

# This file does not affect the Hadoop services themselves; it is only used by the batch start/stop scripts
[yun@mini01 hadoop]$ pwd
/app/hadoop/etc/hadoop
[yun@mini01 hadoop]$ cat slaves
mini03
mini04
mini05
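
As the chapter heading notes, every machine uses identical configuration files; after editing them on mini01 they can be pushed to the rest of the cluster in one loop (a sketch using scp; rsync works just as well):

for h in mini02 mini03 mini04 mini05; do
    scp /app/hadoop/etc/hadoop/{core-site.xml,hdfs-site.xml,mapred-site.xml,yarn-site.xml,slaves} ${h}:/app/hadoop/etc/hadoop/
done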

5.7. Format the NameNode
(this initializes the NameNode)

[yun@mini01 hadoop]$ hdfs namenode -format 
18/06/09 17:44:56 INFO namenode.NameNode: STARTUP_MSG:
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:  host = mini01/10.0.0.11
STARTUP_MSG:  args = [-format]
STARTUP_MSG:  version = 2.7.6
………………
STARTUP_MSG:  java = 1.8.0_112
************************************************************/
18/06/09 17:44:56 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
18/06/09 17:44:56 INFO namenode.NameNode: createNameNode [-format]
Formatting using clusterid: CID-72e356f5-7723-4960-885a-72e522e19be1
18/06/09 17:44:57 INFO namenode.FSNamesystem: No KeyProvider found.
18/06/09 17:44:57 INFO namenode.FSNamesystem: fsLock is fair: true
18/06/09 17:44:57 INFO namenode.FSNamesystem: Detailed lock hold time metrics enabled: false
18/06/09 17:44:57 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
18/06/09 17:44:57 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
18/06/09 17:44:57 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
18/06/09 17:44:57 INFO blockmanagement.BlockManager: The block deletion will start around 2018 Jun 09 17:44:57
18/06/09 17:44:57 INFO util.GSet: Computing capacity for map BlocksMap
18/06/09 17:44:57 INFO util.GSet: VM type      = 64-bit
18/06/09 17:44:57 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
18/06/09 17:44:57 INFO util.GSet: capacity      = 2^21 = 2097152 entries
18/06/09 17:44:57 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
18/06/09 17:44:57 INFO blockmanagement.BlockManager: defaultReplication        = 3
18/06/09 17:44:57 INFO blockmanagement.BlockManager: maxReplication            = 512
18/06/09 17:44:57 INFO blockmanagement.BlockManager: minReplication            = 1
18/06/09 17:44:57 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
18/06/09 17:44:57 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
18/06/09 17:44:57 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
18/06/09 17:44:57 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
18/06/09 17:44:57 INFO namenode.FSNamesystem: fsOwner            = yun (auth:SIMPLE)
18/06/09 17:44:57 INFO namenode.FSNamesystem: supergroup          = supergroup
18/06/09 17:44:57 INFO namenode.FSNamesystem: isPermissionEnabled = true
18/06/09 17:44:57 INFO namenode.FSNamesystem: HA Enabled: false
18/06/09 17:44:57 INFO namenode.FSNamesystem: Append Enabled: true
18/06/09 17:44:58 INFO util.GSet: Computing capacity for map INodeMap
18/06/09 17:44:58 INFO util.GSet: VM type      = 64-bit
18/06/09 17:44:58 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
18/06/09 17:44:58 INFO util.GSet: capacity      = 2^20 = 1048576 entries
18/06/09 17:44:58 INFO namenode.FSDirectory: ACLs enabled? false
18/06/09 17:44:58 INFO namenode.FSDirectory: XAttrs enabled? true
18/06/09 17:44:58 INFO namenode.FSDirectory: Maximum size of an xattr: 16384
18/06/09 17:44:58 INFO namenode.NameNode: Caching file names occuring more than 10 times
18/06/09 17:44:58 INFO util.GSet: Computing capacity for map cachedBlocks
18/06/09 17:44:58 INFO util.GSet: VM type      = 64-bit
18/06/09 17:44:58 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
18/06/09 17:44:58 INFO util.GSet: capacity      = 2^18 = 262144 entries
18/06/09 17:44:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
18/06/09 17:44:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
18/06/09 17:44:58 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension    = 30000
18/06/09 17:44:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.window.num.buckets = 10
18/06/09 17:44:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.num.users = 10
18/06/09 17:44:58 INFO metrics.TopMetrics: NNTop conf: dfs.namenode.top.windows.minutes = 1,5,25
18/06/09 17:44:58 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
18/06/09 17:44:58 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
18/06/09 17:44:58 INFO util.GSet: Computing capacity for map NameNodeRetryCache
18/06/09 17:44:58 INFO util.GSet: VM type      = 64-bit
18/06/09 17:44:58 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
18/06/09 17:44:58 INFO util.GSet: capacity      = 2^15 = 32768 entries
18/06/09 17:44:58 INFO namenode.FSImage: Allocated new BlockPoolId: BP-925531343-10.0.0.11-1528537498201
18/06/09 17:44:58 INFO common.Storage: Storage directory /app/hadoop/tmp/dfs/name has been successfully formatted.
18/06/09 17:44:58 INFO namenode.FSImageFormatProtobuf: Saving image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 using no compression
18/06/09 17:44:58 INFO namenode.FSImageFormatProtobuf: Image file /app/hadoop/tmp/dfs/name/current/fsimage.ckpt_0000000000000000000 of size 319 bytes saved in 0 seconds.
18/06/09 17:44:58 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
18/06/09 17:44:58 INFO util.ExitUtil: Exiting with status 0
18/06/09 17:44:58 INFO namenode.NameNode: SHUTDOWN_MSG:
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at mini01/10.0.0.11
************************************************************/
[yun@mini01 hadoop]$ pwd
/app/hadoop
[yun@mini01 hadoop]$ ll
total 112
drwxr-xr-x 2 yun yun  194 Jun  8 16:36 bin
drwxr-xr-x 3 yun yun    20 Jun  8 16:36 etc
drwxr-xr-x 2 yun yun  106 Jun  8 16:36 include
drwxr-xr-x 3 yun yun    20 Jun  8 16:36 lib
drwxr-xr-x 2 yun yun  239 Jun  8 16:36 libexec
-rw-r--r-- 1 yun yun 86424 Jun  8 16:36 LICENSE.txt
-rw-r--r-- 1 yun yun 14978 Jun  8 16:36 NOTICE.txt
-rw-r--r-- 1 yun yun  1366 Jun  8 16:36 README.txt
drwxr-xr-x 2 yun yun  4096 Jun  8 16:36 sbin
drwxr-xr-x 4 yun yun    31 Jun  8 16:36 share
drwxrwxr-x 3 yun yun    17 Jun  9 17:44 tmp  # this directory did not exist before formatting
[yun@mini01 hadoop]$ ll tmp/
total 0
drwxrwxr-x 3 yun yun 18 Jun  9 17:44 dfs
[yun@mini01 hadoop]$ ll tmp/dfs/
total 0
drwxrwxr-x 3 yun yun 21 Jun  9 17:44 name
[yun@mini01 hadoop]$ ll tmp/dfs/name/
total 0
drwxrwxr-x 2 yun yun 112 Jun  9 17:44 current
[yun@mini01 hadoop]$ ll tmp/dfs/name/current/
total 16
-rw-rw-r-- 1 yun yun 319 Jun  9 17:44 fsimage_0000000000000000000
-rw-rw-r-- 1 yun yun  62 Jun  9 17:44 fsimage_0000000000000000000.md5
-rw-rw-r-- 1 yun yun   2 Jun  9 17:44 seen_txid
-rw-rw-r-- 1 yun yun 199 Jun  9 17:44 VERSION

5.8. Start the NameNode

# Start on mini01
[yun@mini01 sbin]$ pwd
/app/hadoop/sbin
[yun@mini01 sbin]$ ./hadoop-daemon.sh start namenode  # to stop: hadoop-daemon.sh stop namenode
starting namenode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-namenode-mini01.out
[yun@mini01 sbin]$ jps
6066 Jps
5983 NameNode
[yun@mini01 sbin]$ ps -ef | grep 'hadoop'
yun        5983      1  6 17:55 pts/0    00:00:07 /app/jdk/bin/java -Dproc_namenode -Xmx1000m -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/app/hadoop-2.7.6/logs -Dhadoop.log.file=hadoop.log -Dhadoop.home.dir=/app/hadoop-2.7.6 -Dhadoop.id.str=yun -Dhadoop.root.logger=INFO,console -Djava.library.path=/app/hadoop-2.7.6/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Djava.net.preferIPv4Stack=true -Dhadoop.log.dir=/app/hadoop-2.7.6/logs -Dhadoop.log.file=hadoop-yun-namenode-mini01.log -Dhadoop.home.dir=/app/hadoop-2.7.6 -Dhadoop.id.str=yun -Dhadoop.root.logger=INFO,RFA -Djava.library.path=/app/hadoop-2.7.6/lib/native -Dhadoop.policy.file=hadoop-policy.xml -Djava.net.preferIPv4Stack=true -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS -Dhdfs.audit.logger=INFO,NullAppender -Dhadoop.security.logger=INFO,RFAS org.apache.hadoop.hdfs.server.namenode.NameNode
yun        6160  2337  0 17:57 pts/0    00:00:00 grep --color=auto hadoop
[yun@mini01 sbin]$ netstat -lntup | grep '5983'
(Not all processes could be identified, non-owned process info
 will not be shown, you would have to be root to see it all.)
tcp        0      0 0.0.0.0:50070          0.0.0.0:*              LISTEN      5983/java         
tcp        0      0 10.0.0.11:9000          0.0.0.0:*              LISTEN      5983/java 

5.8.1. Browser Access

http://10.0.0.11:50070

[Screenshots: the HDFS NameNode web UI]

5.9. Start the DataNodes

# Start the datanode on mini03, mini04, and mini05
# Because the environment variables are in place, the script can be run from any directory
[yun@mini02 ~]$ hadoop-daemon.sh start datanode  # to stop: hadoop-daemon.sh stop datanode
starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini02.out
[yun@mini02 ~]$ jps
5349 Jps
5263 DataNode

5.9.1. Refresh the Browser

[Screenshot: the newly started DataNodes listed in the NameNode web UI]

5.10. Start HDFS with the Batch Script

# per the plan, run on mini01
[yun@mini01 hadoop]$ start-dfs.sh
Starting namenodes on [mini01]
mini01: starting namenode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-namenode-mini01.out
mini04: starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini04.out
mini03: starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini03.out
mini05: starting datanode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-datanode-mini05.out
Starting secondary namenodes [mini01]
mini01: starting secondarynamenode, logging to /app/hadoop-2.7.6/logs/hadoop-yun-secondarynamenode-mini01.out

URL (HDFS management UI):

http://10.0.0.11:50070
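
With HDFS fully up, a short write/read round trip confirms that the DataNodes are accepting blocks (a sketch; the file name is arbitrary):

echo "hello hadoop" > /tmp/test.txt
hdfs dfs -mkdir -p /test
hdfs dfs -put /tmp/test.txt /test/
hdfs dfs -cat /test/test.txt                  # should print: hello hadoop
hdfs dfsadmin -report | grep 'Live datanodes' # expect 3 live datanodes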

5.11. Start YARN with the Batch Script

# per the plan, run on mini02
# start YARN
[yun@mini02 hadoop]$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-resourcemanager-mini02.out
mini05: starting nodemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-nodemanager-mini05.out
mini04: starting nodemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-nodemanager-mini04.out
mini03: starting nodemanager, logging to /app/hadoop-2.7.6/logs/yarn-yun-nodemanager-mini03.out

URL (YARN / MapReduce management UI):
http://10.0.0.12:8088   
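
To exercise YARN end to end, the examples jar bundled with the distribution can be submitted (a sketch assuming the stock Hadoop 2.7.6 layout; the pi job is tiny and finishes quickly):

# Submit a small sample job; it should appear at http://10.0.0.12:8088 while running
hadoop jar /app/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-2.7.6.jar pi 2 10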

5.12. Final Results

##### mini01
[yun@mini01 hadoop]$ jps
16336 NameNode
16548 SecondaryNameNode
16686 Jps
 
##### mini02
[yun@mini02 hadoop]$ jps
10936 ResourceManager
11213 Jps
 
##### mini03
[yun@mini03 ~]$ jps
9212 Jps
8957 DataNode
9039 NodeManager
 
##### mini04
[yun@mini04 ~]$ jps
4130 NodeManager
4296 Jps
4047 DataNode
 
##### mini05
[yun@mini05 ~]$ jps
7011 DataNode
7091 NodeManager
7308 Jps
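
To stop the cluster cleanly, reverse the startup order (a sketch; run each script on the node where the corresponding services were started):

[yun@mini02 ~]$ stop-yarn.sh   # stop YARN on mini02 first
[yun@mini01 ~]$ stop-dfs.sh    # then stop HDFS on mini01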

