Installing Hadoop 2.7.3 on CentOS 6.7

Creating the Virtual Machine in VMware

Create a virtual machine named master; for the details you can refer to http://www.linuxidc.com/Linux/2016-05/131701.htm.

Configuring the Java Environment

There are likewise plenty of tutorials online for this.
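
For reference, a minimal sketch of what that setup usually looks like (the JDK path below is an assumption; substitute your actual install location):

# Append to /etc/profile on every machine, then run: source /etc/profile
export JAVA_HOME=/usr/java/jdk1.8.0_101   # assumed JDK path
export PATH=$JAVA_HOME/bin:$PATH

Verify with java -version on each of the three machines.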

Cloning the Virtual Machines

First, edit the hosts file (/etc/hosts) on master:

192.168.197.132  master-01
192.168.197.133  slave-01
192.168.197.134  slave-02

Then clone master twice and name the clones slave1 and slave2.
We now have three virtual machines:

IP               VM name   User
192.168.197.132  master    yang
192.168.197.133  slave1    yang
192.168.197.134  slave2    yang

Passwordless SSH Login

(1) CentOS does not enable public-key SSH login by default. Uncomment the following two lines in /etc/ssh/sshd_config; this must be done on every server:
RSAAuthentication yes
PubkeyAuthentication yes
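After editing, restart sshd on each machine so the change takes effect:

service sshd restart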
Set up the SSH keys:
On master-01, switch to user yang and enter that user's .ssh directory.
Generate the key pair with ssh-keygen -t rsa (press Enter at every prompt; do not set a passphrase).
Copy the public key into the .ssh directory under yang's home directory on each machine you want to log in to:
scp ~/.ssh/id_rsa.pub yang@master-01:/home/yang/.ssh/authorized_keys
scp ~/.ssh/id_rsa.pub yang@slave-01:/home/yang/.ssh/authorized_keys
scp ~/.ssh/id_rsa.pub yang@slave-02:/home/yang/.ssh/authorized_keys
Check that you can now log in without a password:
ssh localhost
ssh yang@master-01
ssh yang@slave-01
ssh yang@slave-02
Here only master-01 is a master. If you have multiple NameNodes or ResourceManagers, you need passwordless login from every master to all the remaining nodes (append master-01's authorized_keys to the authorized_keys on 02 and 03).
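
A sketch of that append step, using slave-01 as the example target (it assumes you can already log in to the target, by password if nothing else):

# On master-01: append our public key to the remote authorized_keys
cat ~/.ssh/id_rsa.pub | ssh yang@slave-01 'cat >> ~/.ssh/authorized_keys'
# sshd silently ignores keys if permissions are loose; tighten them on every node
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys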

Configuring and Installing Hadoop 2.7.3

Download Hadoop 2.7.3

Download Hadoop 2.7.3 and extract it to /usr/software. Then, inside the hadoop-2.7.3 directory, create the hdfs, hdfs/data, hdfs/name, and hdfs/temp directories.
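
As commands, that amounts to something like the following (the download URL points at the Apache archive; any Hadoop mirror carrying 2.7.3 works):

cd /usr/software
wget https://archive.apache.org/dist/hadoop/common/hadoop-2.7.3/hadoop-2.7.3.tar.gz
tar -zxvf hadoop-2.7.3.tar.gz
mkdir -p hadoop-2.7.3/hdfs/data hadoop-2.7.3/hdfs/name hadoop-2.7.3/hdfs/temp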

Configure core-site.xml

<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master-01:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>file:/usr/software/hadoop-2.7.3/tmp</value>
    </property>
    <property>
        <name>io.file.buffer.size</name>
        <value>131702</value>
    </property>
</configuration>
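
This guide does not show an hdfs-site.xml, but the hdfs/name and hdfs/data directories created above are conventionally referenced there. A minimal sketch, assuming a replication factor of 2 to match the two DataNodes:

<configuration>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/usr/software/hadoop-2.7.3/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/usr/software/hadoop-2.7.3/hdfs/data</value>
    </property>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
</configuration>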

Configure mapred-site.xml
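
Note that the 2.7.3 tarball ships only a template for this file, so create it first from inside hadoop-2.7.3:

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml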

<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.address</name>
        <value>master-01:10020</value>
    </property>
    <property>
        <name>mapreduce.jobhistory.webapp.address</name>
        <value>master-01:19888</value>
    </property>
</configuration>
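
The two jobhistory addresses above only matter if the JobHistory server is actually running; it is not started by start-dfs.sh or start-yarn.sh. Start it separately when you need it:

sbin/mr-jobhistory-daemon.sh start historyserver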

Configure yarn-site.xml

<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services.mapreduce_shuffle.class</name>
        <value>org.apache.hadoop.mapred.ShuffleHandler</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master-01:8032</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master-01:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master-01:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master-01:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master-01:8088</value>
    </property>
   <!--
    <property>
        <name>yarn.nodemanager.resource.memory-mb</name>
        <value>768</value>
    </property>-->
</configuration>

Setting up slaves

Edit the slaves file under hadoop-2.7.3/etc/hadoop and add the two slaves we set up earlier:

slave-01
slave-02

Many guides online say you also need to set the Java environment in hadoop-env.sh and yarn-env.sh. Looking at the contents of these two files, the Java settings were already in place for me, so I left them alone.
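
If the daemons later complain that JAVA_HOME is not set (which can happen when JAVA_HOME is not exported in the daemon's environment), set it explicitly in hadoop-env.sh; the path below is an assumption:

# In hadoop-2.7.3/etc/hadoop/hadoop-env.sh
export JAVA_HOME=/usr/java/jdk1.8.0_101   # replace with your actual JDK path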

Configuration Complete

Now copy the fully configured hadoop-2.7.3 directory on master to the /usr/software directory on yang@slave-01 and yang@slave-02.
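
A sketch of that copy, run from master (it assumes yang can write to /usr/software on the slaves; otherwise copy to the home directory and move it as root):

scp -r /usr/software/hadoop-2.7.3 yang@slave-01:/usr/software/
scp -r /usr/software/hadoop-2.7.3 yang@slave-02:/usr/software/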

Startup

Start Hadoop on the master server and the slave nodes will be started automatically. Enter the /usr/software/hadoop-2.7.3 directory.
(1) Initialize HDFS by running bin/hdfs namenode -format
(2) Start HDFS with sbin/start-dfs.sh; output like the following means it succeeded:

Starting namenodes on [master-01]
master-01: starting namenode, logging to /usr/software/hadoop-2.7.3/logs/hadoop-yang-namenode-master-01.out
slave-01: starting datanode, logging to /usr/software/hadoop-2.7.3/logs/hadoop-yang-datanode-slave-01.out
slave-02: starting datanode, logging to /usr/software/hadoop-2.7.3/logs/hadoop-yang-datanode-slave-02.out
Starting secondary namenodes [master-01]
master-01: starting secondarynamenode, logging to /usr/software/hadoop-2.7.3/logs/hadoop-yang-secondarynamenode-master-01.out

(3) Run sbin/start-yarn.sh; output like the following means success:

[yang@master-01 hadoop-2.7.3]$ ./sbin/start-yarn.sh 
starting yarn daemons
starting resourcemanager, logging to /usr/software/hadoop-2.7.3/logs/yarn-yang-resourcemanager-master-01.out
slave-02: starting nodemanager, logging to /usr/software/hadoop-2.7.3/logs/yarn-yang-nodemanager-slave-02.out
slave-01: starting nodemanager, logging to /usr/software/hadoop-2.7.3/logs/yarn-yang-nodemanager-slave-01.out

(4) To stop everything, run sbin/stop-dfs.sh and sbin/stop-yarn.sh
(5) Run jps to see the running daemons:

[yang@master-01 hadoop-2.7.3]$ jps
6932 SecondaryNameNode
7384 Jps
6729 NameNode
7118 ResourceManager
[yang@master-01 hadoop-2.7.3]$ ./bin/hdfs dfsadmin -report
Configured Capacity: 75404550144 (70.23 GB)
Present Capacity: 54191501312 (50.47 GB)
DFS Remaining: 54191452160 (50.47 GB)
DFS Used: 49152 (48 KB)
DFS Used%: 0.00%
Under replicated blocks: 0
Blocks with corrupt replicas: 0
Missing blocks: 0
Missing blocks (with replication factor 1): 0

-------------------------------------------------
Live datanodes (2):

Name: 192.168.197.133:50010 (slave-01)
Hostname: slave-01
Decommission Status : Normal
Configured Capacity: 37702275072 (35.11 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 10606755840 (9.88 GB)
DFS Remaining: 27095494656 (25.23 GB)
DFS Used%: 0.00%
DFS Remaining%: 71.87%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Sep 27 17:18:44 CST 2016


Name: 192.168.197.134:50010 (slave-02)
Hostname: slave-02
Decommission Status : Normal
Configured Capacity: 37702275072 (35.11 GB)
DFS Used: 24576 (24 KB)
Non DFS Used: 10606292992 (9.88 GB)
DFS Remaining: 27095957504 (25.24 GB)
DFS Used%: 0.00%
DFS Remaining%: 71.87%
Configured Cache Capacity: 0 (0 B)
Cache Used: 0 (0 B)
Cache Remaining: 0 (0 B)
Cache Used%: 100.00%
Cache Remaining%: 0.00%
Xceivers: 1
Last contact: Tue Sep 27 17:18:44 CST 2016

Web Access (just turn off the firewall)
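
On CentOS 6 that means stopping iptables (and optionally disabling it at boot):

service iptables stop
chkconfig iptables off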

(1) Open http://192.168.197.132:8088/ in a browser (the YARN web UI)
(2) Open http://192.168.197.132:50070/ in a browser (the HDFS NameNode UI); it shows information like:
Configured Capacity: 35.11 GB
DFS Used: 28 KB (0%)
Non DFS Used: 9.88 GB
DFS Remaining: 25.23 GB (71.87%)
Block Pool Used: 28 KB (0%)
DataNodes usages% (Min/Median/Max/stdDev): 0.00% / 0.00% / 0.00% / 0.00%
Live Nodes 1 (Decommissioned: 0)
Dead Nodes 1 (Decommissioned: 0)
Decommissioning Nodes 0
Total Datanode Volume Failures 0 (0 B)
Number of Under-Replicated Blocks 0
Number of Blocks Pending Deletion 0
Block Deletion Start Time 9/27/2016, 5:15:33 PM

Summary

When starting up I kept getting permission errors. It turned out that my earlier steps had been done as root, so hadoop-2.7.3 was owned by root and the yang user had no permissions at all. Once the problem was found, the fix was simple: chown the directory to yang, and the issue was solved. Of course, various other problems came up during configuration; all of them were resolved with help from the web and from the logs, so I won't list them one by one. If you follow this guide and still run into problems, please consult Baidu or Google.
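
That ownership fix, as a single command run as root on each node that has the directory:

chown -R yang:yang /usr/software/hadoop-2.7.3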

Permanent link to this article: http://www.linuxidc.com/Linux/2017-01/139089.htm
