Setting Up a Hadoop Client: Accessing Hadoop from a Host Outside the Cluster
1. Add a host mapping (identical to the mapping used on the namenode):
Append the last line shown below:
[root@localhost ~]# su - root
[root@localhost ~]# vi /etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.48.129 hadoop-master
[root@localhost ~]#
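The mapping can be sanity-checked with a quick grep. A minimal sketch, run here against a temporary copy so it is self-contained; on the client, point HOSTS_FILE at /etc/hosts instead:

```shell
# Sketch: verify the hadoop-master mapping is present in a hosts file.
# HOSTS_FILE is a temporary copy for illustration only.
HOSTS_FILE=$(mktemp)
cat > "$HOSTS_FILE" <<'EOF'
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
192.168.48.129 hadoop-master
EOF
if grep -qE '^192\.168\.48\.129[[:space:]]+hadoop-master' "$HOSTS_FILE"; then
  STATUS=ok
else
  STATUS=missing
fi
echo "hadoop-master mapping: $STATUS"
rm -f "$HOSTS_FILE"
```

If the mapping is missing, `scp hadoop@hadoop-master:...` in the later steps will fail with a name-resolution error, so it is worth checking up front.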
2. Create a hadoop user
Create the hadoop group.
Create the user with useradd -d /usr/hadoop -g hadoop -m hadoop (this creates user hadoop with home directory /usr/hadoop, in the hadoop group).
Set the hadoop user's password with passwd hadoop (the password used here is hadoop).
[root@localhost ~]# groupadd hadoop
[root@localhost ~]# useradd -d /usr/hadoop -g hadoop -m hadoop
[root@localhost ~]# passwd hadoop
3. Configure the JDK
This walkthrough installs hadoop-2.7.5, which requires JDK 7 or later. Skip this step if a suitable JDK is already installed.
For JDK installation, see http://www.linuxidc.com/Linux/2017-01/139874.htm or "Installing JDK1.7 on CentOS7.2" at http://www.linuxidc.com/Linux/2016-11/137398.htm
Alternatively, copy the JDK directory straight from the master, which also helps keep the versions consistent:
[root@localhost java]# su - root
[root@localhost java]# mkdir -p /usr/java
[root@localhost java]# scp -r hadoop@hadoop-master:/usr/java/jdk1.7.0_79 /usr/java
[root@localhost java]# ll
total 12
drwxr-xr-x. 8 root root 4096 Feb 13 01:34 default
drwxr-xr-x. 8 root root 4096 Feb 13 01:34 jdk1.7.0_79
drwxr-xr-x. 8 root root 4096 Feb 13 01:34 latest
Set the Java and Hadoop environment variables.
Make sure /usr/java/jdk1.7.0_79 exists, then edit /etc/profile:
su - root
vi /etc/profile
Append the following after the existing lines at the end of the file:
unset i
unset -f pathmunge
JAVA_HOME=/usr/java/jdk1.7.0_79
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
PATH=/usr/hadoop/hadoop-2.7.5/bin:$JAVA_HOME/bin:$PATH
Apply the changes (important):
[root@localhost ~]# source /etc/profile
[root@localhost ~]#
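After sourcing the profile, it is worth confirming that the Hadoop bin directory actually landed on PATH. A self-contained sketch (the two assignments below stand in for what /etc/profile exports, using the paths from the fragment above):

```shell
# Sketch: check that the hadoop-2.7.5 bin directory is on PATH.
# JAVA_HOME/PATH are set inline here so the demo is self-contained;
# on a real client they come from /etc/profile.
JAVA_HOME=/usr/java/jdk1.7.0_79
PATH=/usr/hadoop/hadoop-2.7.5/bin:$JAVA_HOME/bin:$PATH
case ":$PATH:" in
  *:/usr/hadoop/hadoop-2.7.5/bin:*) RESULT="hadoop bin on PATH" ;;
  *)                                RESULT="hadoop bin missing" ;;
esac
echo "$RESULT"
```

The `:$PATH:` wrapping makes the pattern match whole PATH entries rather than substrings.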
Verify the JDK after installation:
[hadoop@localhost ~]$ java -version
java version "1.7.0_79"
Java(TM) SE Runtime Environment (build 1.7.0_79-b15)
Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)
[hadoop@localhost ~]$
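As a sketch, the major.minor version can be pulled out of that output and checked against the JDK 7 requirement. The version line is hard-coded from the session above so the snippet is self-contained; on a real client, capture `java -version 2>&1 | head -1` instead:

```shell
# Extract major.minor from a "java -version"-style line and check that
# it is at least 1.7. VLINE is hard-coded for illustration.
VLINE='java version "1.7.0_79"'
VER=$(echo "$VLINE" | sed -E 's/.*"([0-9]+\.[0-9]+).*/\1/')
echo "detected JDK $VER"
case "$VER" in
  1.[0-6]) echo "need JDK 7 or later" ;;
  *)       echo "JDK requirement met" ;;
esac
```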
4. Set up the Hadoop environment
Copy the already-configured hadoop directory from the namenode to this host:
[root@localhost ~]# su - hadoop
Last login: Sat Feb 24 14:04:55 CST 2018 on pts/1
[hadoop@localhost ~]$ pwd
/usr/hadoop
[hadoop@localhost ~]$ scp -r hadoop@hadoop-master:/usr/hadoop/hadoop-2.7.5 .
The authenticity of host 'hadoop-master (192.168.48.129)' can't be established.
ECDSA key fingerprint is 1e:cd:d1:3d:b0:5b:62:45:a3:63:df:c7:7a:0f:b8:7c.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added 'hadoop-master,192.168.48.129' (ECDSA) to the list of known hosts.
hadoop@hadoop-master's password:
[hadoop@localhost ~]$ ll
total 0
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Desktop
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Documents
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Downloads
drwxr-xr-x 10 hadoop hadoop 150 Feb 24 14:30 hadoop-2.7.5
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Music
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Pictures
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Public
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Templates
drwxr-xr-x 2 hadoop hadoop 6 Feb 24 11:32 Videos
[hadoop@localhost ~]$
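Since the tree came over the network, comparing a whole-tree digest on both hosts is a cheap integrity check. The `tree_digest` helper below is invented for illustration and is demonstrated on a throwaway directory; to use it, run the same function over /usr/hadoop/hadoop-2.7.5 on both client and master and compare the printed digests:

```shell
# Hypothetical helper: one digest per directory tree, built from a sorted
# list of per-file md5sums. Matching digests on client and master suggest
# the scp copy is intact.
tree_digest() {
  ( cd "$1" && find . -type f -print0 | sort -z | xargs -0 md5sum | md5sum | awk '{print $1}' )
}
DEMO_DIR=$(mktemp -d)                 # throwaway tree for the demo
echo 'dfs.replication=3' > "$DEMO_DIR/hdfs-site.txt"
D1=$(tree_digest "$DEMO_DIR")
D2=$(tree_digest "$DEMO_DIR")         # same tree -> same digest
echo "$D1"
rm -rf "$DEMO_DIR"
```

Sorting the file list before hashing makes the digest independent of `find`'s traversal order.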
At this point the Hadoop client installation is complete and ready to use.
Running the hadoop command now produces:
[hadoop@localhost ~]$ hadoop
Usage: hadoop [--config confdir] [COMMAND | CLASSNAME]
  CLASSNAME            run the class named CLASSNAME
 or
  where COMMAND is one of:
  fs                   run a generic filesystem user client
  version              print the version
  jar <jar>            run a jar file
                       note: please use "yarn jar" to launch
                             YARN applications, not this command.
  checknative [-a|-h]  check native hadoop and compression libraries availability
  distcp <srcurl> <desturl> copy file or directories recursively
  archive -archiveName NAME -p <parent path> <src>* <dest> create a hadoop archive
  classpath            prints the class path needed to get the
                       Hadoop jar and the required libraries
  credential           interact with credential providers
  daemonlog            get/set the log level for each daemon
  trace                view and modify Hadoop tracing settings

Most commands print help when invoked w/o parameters.
[hadoop@localhost ~]$
5. Using Hadoop
List the home directory on HDFS, then create a local file:
[hadoop@localhost ~]$ hdfs dfs -ls
Found 1 items
drwxr-xr-x - hadoop supergroup 0 2018-02-22 23:41 output
[hadoop@localhost ~]$ vi my-local.txt
hello boy!
yehyeh
Upload the local file to the cluster:
[hadoop@localhost ~]$ hdfs dfs -mkdir upload
[hadoop@localhost ~]$ hdfs dfs -ls upload
[hadoop@localhost ~]$ hdfs dfs -ls
Found 2 items
drwxr-xr-x - hadoop supergroup 0 2018-02-22 23:41 output
drwxr-xr-x - hadoop supergroup 0 2018-02-23 22:38 upload
[hadoop@localhost ~]$ hdfs dfs -ls upload
[hadoop@localhost ~]$ hdfs dfs -put my-local.txt upload
[hadoop@localhost ~]$ hdfs dfs -ls upload
Found 1 items
-rw-r--r-- 3 hadoop supergroup 18 2018-02-23 22:45 upload/my-local.txt
[hadoop@localhost ~]$ hdfs dfs -cat upload/my-local.txt
hello boy!
yehyeh
[hadoop@localhost ~]$
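The 18-byte size reported in the listing above is just the byte count of the two lines. A minimal local sketch (no cluster needed) that rebuilds the same file and checks the size before an `hdfs dfs -put`:

```shell
# Recreate my-local.txt and confirm its size matches the 18 bytes shown
# in the hdfs listing ("hello boy!\n" is 11 bytes, "yehyeh\n" is 7).
printf 'hello boy!\nyehyeh\n' > my-local.txt
SIZE=$(wc -c < my-local.txt | tr -d '[:space:]')
echo "my-local.txt is $SIZE bytes"
rm -f my-local.txt
```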
PS: Whether the local Java version must match the JAVA_HOME configured in hadoop-env.sh (under etc/hadoop in the directory copied from the master) has not been verified; in this article the two were kept identical.