I. Advantages of the HA distributed configuration:
1. Prevents the whole cluster from failing because a single NameNode goes down
2. Meets the needs of production environments
II. HA installation steps:
1. Install the virtual machines
1. Software: VMware_workstation_full_12.5.0.11529.exe; Linux image: CentOS-7-x86_64-DVD-1611.iso
Notes:
1. Bridged networking was chosen (it keeps the route from changing); on a desktop or server it is best to give the host machine a static IP address
2. The minimal "Infrastructure Server" installation profile was chosen (it keeps memory usage low while still providing a basic environment)
3. Username root, password root
4. The network was configured manually with a fixed IPv4 address (static IP)
2. Basic Linux environment configuration (all operations are performed as root)
1. Verify the network: ping <host IP> from the VM, ping <VM IP> from the host, and ping www.baidu.com (all verified OK)
Back up the interface configuration: cp /etc/sysconfig/network-scripts/ifcfg-ens33 /etc/sysconfig/network-scripts/ifcfg-ens33.bak
2. Firewall: stop and disable the firewall
Stop the firewall: systemctl stop firewalld.service (CentOS 7 uses firewalld instead of the iptables service of earlier releases)
Disable the firewall: systemctl disable firewalld.service
Check the firewall status: firewall-cmd --state
3. Set hosts, hostname and network
vim /etc/hostname
ha1
vim /etc/hosts
192.168.1.116 ha1
192.168.1.117 ha2
192.168.1.118 ha3
192.168.1.119 ha4
vim /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=ha1
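On CentOS 7 the hostname can also be set and checked with hostnamectl instead of editing the files by hand; a minimal sketch:
# optional alternative on CentOS 7
hostnamectl set-hostname ha1
hostnamectl status    # should report "Static hostname: ha1"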
4. Install some required packages (the list may not be exhaustive):
yum install -y chkconfig
yum install -y python
yum install -y bind-utils
yum install -y psmisc
yum install -y libxslt
yum install -y zlib
yum install -y sqlite
yum install -y cyrus-sasl-plain
yum install -y cyrus-sasl-gssapi
yum install -y fuse
yum install -y portmap
yum install -y fuse-libs
yum install -y redhat-lsb
5. Install Java and Scala
Java version: jdk-8u111-linux-x64.tar.gz
Scala version: scala-2.11.6.tgz
Check whether Java is already installed:
rpm -qa | grep java    # nothing installed
tar -zxf jdk-8u111-linux-x64.tar.gz
tar -zxf scala-2.11.6.tgz
mv jdk1.8.0_111 /usr/java
mv scala-2.11.6 /usr/scala
Configure environment variables:
vim /etc/profile
export JAVA_HOME=/usr/java
export SCALA_HOME=/usr/scala
export PATH=$JAVA_HOME/bin:$SCALA_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
6. Reboot and verify that everything above is set up correctly. After the reboot, take a VM snapshot named: initialization ok (java, scala, hostname, firewall, ip)
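Before taking the snapshot, the environment can be checked quickly from a new shell (assuming the paths above):
source /etc/profile
java -version     # expect java version "1.8.0_111"
scala -version    # expect Scala 2.11.6
echo $JAVA_HOME $SCALA_HOME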
3. Hadoop + ZooKeeper cluster configuration
1. Prepare the cluster machines
Linked clones: clone ha2, ha3 and ha4 from ha1
On ha2, ha3 and ha4, change the IP address, network settings and firewall:
vim /etc/sysconfig/network-scripts/ifcfg-ens33
# change the last octet of IPADDR from 116 to 117 / 118 / 119 respectively
service network restart
vim /etc/hostname
vim /etc/sysconfig/network
systemctl disable firewalld.service
Reboot ha2, ha3 and ha4 and verify IP, network and firewall; take a snapshot of each of the three machines, named: initialization ok (java, scala, hostname, firewall, ip)
2. Cluster layout (1 = the role runs on that host; RM = ResourceManager, NM = NodeManager):
Host | NameNode | DataNode | ZooKeeper | ZKFC | JournalNode | RM | NM
ha1  |    1     |          |     1     |  1   |             |  1 |
ha2  |    1     |    1     |     1     |  1   |      1      |    |  1
ha3  |          |    1     |     1     |      |      1      |    |  1
ha4  |          |    1     |           |      |      1      |    |  1
3. SSH passwordless communication (take a snapshot named "ssh ok" afterwards)
On all four machines:
ssh-keygen -t rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
On ha1:
scp ~/.ssh/* root@ha2:~/.ssh/
scp ~/.ssh/* root@ha3:~/.ssh/
scp ~/.ssh/* root@ha4:~/.ssh/
Verify (from ha1):
ssh ha2 / ssh ha3 / ssh ha4
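A quick way to confirm passwordless login from ha1 to the other three nodes (a one-line sketch, run on ha1):
for h in ha2 ha3 ha4; do ssh root@$h hostname; done    # should print ha2, ha3, ha4 with no password prompts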
4. ZooKeeper cluster configuration:
1. Configure environment variables
Install ZooKeeper:
tar -zxf zookeeper-3.4.8.tar.gz
mv zookeeper-3.4.8 /usr/zookeeper-3.4.8
Add the following to /etc/profile, then distribute and source it:
export ZK_HOME=/usr/zookeeper-3.4.8
scp /etc/profile root@ha2:/etc/
scp /etc/profile root@ha3:/etc/
source /etc/profile
2. zoo.cfg configuration (the entries changed from the sample file are dataDir, dataLogDir and the server.X lines)
cd /usr/zookeeper-3.4.8/conf
cp zoo_sample.cfg zoo.cfg
Contents:
# The number of milliseconds of each tick
tickTime=2000
# The number of ticks that the initial
# synchronization phase can take
initLimit=10
# The number of ticks that can pass between
# sending a request and getting an acknowledgement
syncLimit=5
# the directory where the snapshot is stored.
# do not use /tmp for storage, /tmp here is just
# example sakes.
dataDir=/opt/zookeeper/datas
dataLogDir=/opt/zookeeper/logs
# the port at which the clients will connect
clientPort=2181
# the maximum number of client connections.
# increase this if you need to handle more clients
#maxClientCnxns=60
#
# Be sure to read the maintenance section of the
# administrator guide before turning on autopurge.
#
# http://zookeeper.apache.org/doc/current/zookeeperAdmin.html#sc_maintenance
#
# The number of snapshots to retain in dataDir
#autopurge.snapRetainCount=3
# Purge task interval in hours
# Set to "0" to disable auto purge feature
#autopurge.purgeInterval=1
server.1=ha1:2888:3888
server.2=ha2:2888:3888
server.3=ha3:2888:3888
3. Start the ZooKeeper cluster:
# On the three machines ha1, ha2 and ha3
Create the data directories:
mkdir -p /opt/zookeeper/datas
mkdir -p /opt/zookeeper/logs
cd /opt/zookeeper/datas
vim myid    # write 1 / 2 / 3 respectively, matching server.1/2/3 in zoo.cfg
# Distribute to ha2 and ha3 (note: ha4 is not needed)
cd /usr
scp -r zookeeper-3.4.8 root@ha2:/usr
scp -r zookeeper-3.4.8 root@ha3:/usr
# Start (on all three machines)
cd $ZK_HOME/bin
zkServer.sh start
zkServer.sh status    # one leader and two followers
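Besides zkServer.sh status, the ensemble can be probed with ZooKeeper's four-letter-word commands; a sketch (assumes nc/netcat is installed):
for h in ha1 ha2 ha3; do echo -n "$h: "; echo stat | nc $h 2181 | grep Mode; done
# expected: one "Mode: leader" and two "Mode: follower"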
5. Hadoop cluster configuration
1. Configure environment variables:
Version: hadoop-2.7.3.tar.gz
tar -zxf hadoop-2.7.3.tar.gz
mv hadoop-2.7.3 /usr/hadoop-2.7.3
export JAVA_HOME=/usr/java
export SCALA_HOME=/usr/scala
export HADOOP_HOME=/usr/hadoop-2.7.3
export PATH=$JAVA_HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
source /etc/profile
2. hadoop-env.sh configuration:
export JAVA_HOME=/usr/java
source hadoop-env.sh
hadoop version    # verified OK
3. hdfs-site.xml configuration (if it is modified later, redistribute it, e.g. scp hdfs-site.xml root@ha4:/usr/hadoop-2.7.3/etc/hadoop/)
vim hdfs-site.xml
<configuration>
<property>
<name>dfs.nameservices</name>
<value>mycluster</value>
</property>
<property>
<name>dfs.ha.namenodes.mycluster</name>
<value>nn1,nn2</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn1</name>
<value>ha1:9000</value>
</property>
<property>
<name>dfs.namenode.rpc-address.mycluster.nn2</name>
<value>ha2:9000</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn1</name>
<value>ha1:50070</value>
</property>
<property>
<name>dfs.namenode.http-address.mycluster.nn2</name>
<value>ha2:50070</value>
</property>
<property>
<name>dfs.namenode.shared.edits.dir</name>
<value>qjournal://ha2:8485;ha3:8485;ha4:8485/mycluster</value>
</property>
<property>
<name>dfs.client.failover.proxy.provider.mycluster</name>
<value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
</property>
<property>
<name>dfs.ha.fencing.methods</name>
<value>sshfence</value>
</property>
<property>
<name>dfs.ha.fencing.ssh.private-key-files</name>
<value>/root/.ssh/id_rsa</value>
</property>
<property>
<name>dfs.journalnode.edits.dir</name>
<value>/opt/jn/data</value>
</property>
<property>
<name>dfs.ha.automatic-failover.enabled</name>
<value>true</value>
</property>
</configuration>
4. core-site.xml configuration
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://mycluster</value>
</property>
<property>
<name>ha.zookeeper.quorum</name>
<value>ha1:2181,ha2:2181,ha3:2181</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/opt/hadoop2</value>
<description>A base for other temporary directories.</description>
</property>
</configuration>
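Once these files are in place, the HA settings can be sanity-checked with hdfs getconf; a small sketch (the values shown are what the configuration above should produce):
hdfs getconf -confKey fs.defaultFS                  # hdfs://mycluster
hdfs getconf -confKey dfs.ha.namenodes.mycluster    # nn1,nn2
hdfs getconf -namenodes                             # ha1 ha2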
5. yarn-site.xml configuration
vim yarn-site.xml
<configuration>
<!-- Site specific YARN configuration properties -->
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.hostname</name>
<value>ha1</value>
</property>
</configuration>
6. mapred-site.xml configuration
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
</configuration>
7. slaves configuration:
vim slaves
ha2
ha3
ha4
8. Distribute and start:
# Distribute
scp -r hadoop-2.7.3 root@ha2:/usr/
scp -r hadoop-2.7.3 root@ha3:/usr/
scp -r hadoop-2.7.3 root@ha4:/usr/
# Start the JournalNodes (on ha2, ha3 and ha4)
cd /usr/hadoop-2.7.3/sbin
./hadoop-daemon.sh start journalnode
[root@ha2 sbin]# jps
JournalNode
Jps
QuorumPeerMain    # the ZooKeeper process
# On ha1: format the NameNode
cd /usr/hadoop-2.7.3/bin
./hdfs namenode -format
# Format the HA state in ZooKeeper
./hdfs zkfc -formatZK
# Inspect /opt/hadoop2 to check that the metadata was formatted correctly
# On ha2: bootstrap the standby NameNode
1. Start the NameNode on ha1 first (hadoop-daemon.sh is in the sbin directory):
./hadoop-daemon.sh start namenode
2. Then on ha2 (from the bin directory):
./hdfs namenode -bootstrapStandby
9. Verification: http://192.168.1.116:50070/ shows the NameNode UI (OK). Take a snapshot: hadoop + zookeeper installed in HA mode
# HDFS cluster verification
[root@ha1 sbin]# ./stop-dfs.sh
Stopping namenodes on [ha1 ha2]
ha2: no namenode to stop
ha1: stopping namenode
ha2: no datanode to stop
ha3: no datanode to stop
ha4: no datanode to stop
Stopping journal nodes [ha2 ha3 ha4]
ha3: stopping journalnode
ha4: stopping journalnode
ha2: stopping journalnode
Stopping ZK Failover Controllers on NN hosts [ha1 ha2]
ha2: no zkfc to stop
ha1: no zkfc to stop
[root@ha1 sbin]# ./start-dfs.sh
On ha1:
[root@ha1 sbin]# jps
Jps
NameNode
QuorumPeerMain
DFSZKFailoverController
[root@ha2 dfs]# jps
NameNode
DFSZKFailoverController
Jps
DataNode
JournalNode
QuorumPeerMain
[root@ha3 sbin]# jps
QuorumPeerMain
DataNode
JournalNode
Jps
[root@ha4 sbin]# jps
Jps
DataNode
JournalNode
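To see which NameNode is currently active, and to exercise an automatic failover, hdfs haadmin can be used with the nn1/nn2 IDs from hdfs-site.xml; a sketch:
cd /usr/hadoop-2.7.3/bin
./hdfs haadmin -getServiceState nn1    # e.g. active
./hdfs haadmin -getServiceState nn2    # e.g. standby
# killing the active NameNode process should make the other one switch to active within a few seconds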
After yarn and mapred are configured and YARN is started, jps on each node shows:
[root@ha1 sbin]# jps
NameNode
DFSZKFailoverController
Jps
QuorumPeerMain
ResourceManager
[root@ha2 hadoop]# jps
DataNode
NameNode
DFSZKFailoverController
JournalNode
NodeManager
Jps
QuorumPeerMain
[root@ha3 ~]# jps
QuorumPeerMain
DataNode
NodeManager
Jps
JournalNode
[root@ha4 ~]# jps
JournalNode
NodeManager
DataNode
Jps
I. Concepts
Hive is a data warehouse tool: the data itself is stored on HDFS, HiveQL takes the place of hand-written MapReduce jobs for simple queries, and Hive records its metadata (the table-to-data mappings) in MySQL.
II. Installation
1. Install MySQL:
1. Check whether mariadb is installed:
rpm -qa | grep mariadb
If it is present, remove it: rpm -e mariadb-libs-5.5.52-1.el7.x86_64 --nodeps
2. Pre-installation preparation:
# On ha1; MySQL version: mysql-5.7.18-linux-glibc2.5-x86_64
tar -zxvf mysql-5.7.18-linux-glibc2.5-x86_64.tar.gz
cp -r mysql-5.7.18-linux-glibc2.5-x86_64 /usr/local/mysql
# Create the mysql group and user
groupadd mysql
useradd -r -g mysql mysql
cd /usr/local/mysql
mkdir data
chown -R mysql:mysql /usr/local/mysql
# Verify the ownership:
ls -trhla
# Create the configuration file:
vim /etc/my.cnf    # does not exist initially
[mysqld]
basedir=/usr/local/mysql/
datadir=/usr/local/mysql/data
socket=/tmp/mysql.sock
user=mysql
symbolic-links=0
[mysqld_safe]
log-error=/var/log/mysqld.log
pid-file=/var/run/mysqld/mysqld.pid
# Create the pid directory
cd /var/run
mkdir mysqld
cd mysqld
vim mysqld.pid    # leave it empty, just save and exit
chown -R mysql:mysql /var/run/mysqld
3. Configure MySQL:
# Initialize:
cd /usr/local/mysql/bin
./mysqld --initialize
An initial root password is generated, for example: gctlsOja8<%0
# Add the startup script as a service
chown -R mysql:mysql /usr/local/mysql
cp /usr/local/mysql/support-files/mysql.server /etc/init.d/mysql
service mysql start
Check the process: ps -ef | grep mysql
# Log in
./mysql -u root -p    # enter the initial generated password
# Change the password
set password=password("123456");
# Allow remote clients to connect to the MySQL server:
grant all privileges on *.* to root@'%' identified by '1234567';
flush privileges;    (remember the semicolon at the end of every SQL statement)
# Grant root access to all databases from the cluster hosts as well:
grant all privileges on *.* to 'root'@'ha1' identified by '123456' with grant option;    (fixes connection-privilege problems)
flush privileges;
# Create the hive database
create database hive default charset utf8 collate utf8_general_ci;
# Make MySQL start on boot
cd /etc/init.d
chmod +x mysql
chkconfig --add mysql
chkconfig --list    (if runlevels 3 through 5 show "on", it was added successfully)
# Configure environment variables:
vim /etc/profile
export MYSQL_HOME=/usr/local/mysql
export PATH=$JAVA_HOME/bin:$MYSQL_HOME/bin:$PATH
source /etc/profile
# Verify: OK
reboot
netstat -na |grep 3306
mysql -u root -p123456
2. Hive installation (hive-2.1.1)
1. Pre-installation configuration
1. Start ZooKeeper before starting Hadoop:
cd $ZK_HOME/bin
zkServer.sh start
Then run start-all.sh on the namenode
2. Hive can be installed on any machine in the hadoop cluster
3. Unpack: tar -zxf apache-hive-2.1.1-bin.tar.gz
4. mv apache-hive-2.1.1-bin /usr/hive-2.1.1
mv mysql-connector-java-5.1.42-bin.jar /usr/hive-2.1.1/lib
5. Set environment variables
vim /etc/profile
export HIVE_HOME=/usr/hive-2.1.1
export PATH=$PATH:$HIVE_HOME/bin
export CLASSPATH=$CLASSPATH:$HIVE_HOME/bin
source /etc/profile
2. hive-env.sh configuration
cd /usr/hive-2.1.1/conf
cp hive-env.sh.template hive-env.sh
vim hive-env.sh
export HADOOP_HOME=/usr/hadoop-2.7.3
export HIVE_CONF_DIR=/usr/hive-2.1.1/conf
3. hive-site.xml configuration
touch hive-site.xml
vim hive-site.xml
<configuration>
<property>
<name>javax.jdo.option.ConnectionURL</name>
<value>jdbc:mysql://ha1:3306/hive?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8</value>
</property>
<property>
<name>javax.jdo.option.ConnectionDriverName</name>
<value>com.mysql.jdbc.Driver</value>
</property>
<property>
<name>javax.jdo.option.ConnectionUserName</name>
<value>root</value>
</property>
<property>
<name>javax.jdo.option.ConnectionPassword</name>
<value>123456</value>
</property>
<property>
<name>datanucleus.readOnlyDatastore</name>
<value>false</value>
</property>
<property>
<name>datanucleus.fixedDatastore</name>
<value>false</value>
</property>
<property>
<name>datanucleus.autoCreateSchema</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateTables</name>
<value>true</value>
</property>
<property>
<name>datanucleus.autoCreateColumns</name>
<value>true</value>
</property>
<property>
<name>hive.metastore.schema.verification</name>
<value>false</value>
</property>
</configuration>
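Optionally, instead of relying on the datanucleus.autoCreate* settings above, the metastore schema can be created explicitly in the hive database with the bundled schematool; a minimal sketch:
cd $HIVE_HOME/bin
./schematool -dbType mysql -initSchema    # creates the metastore tables in MySQL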
4. Start the services (snapshot: hive + mysql ok)
cd $HIVE_HOME/bin
hive --service metastore &
hive --service hiveserver2&
./hive
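A quick smoke test once the services are up (the table name below is made up for illustration):
hive -e "show databases;"
hive -e "create table if not exists t_demo(id int, name string);"
hdfs dfs -ls /user/hive/warehouse    # the table directory should appear under the default warehouse path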
(1) Configuring Spark in HA mode
1. Spark version: spark-2.1.0-bin-hadoop2.7
2. Unpack and configure environment variables
tar -zxvf spark-2.1.0-bin-hadoop2.7.tgz
mv spark-2.1.0-bin-hadoop2.7 /usr/spark-2.1.0
vim /etc/profile
export JAVA_HOME=/usr/java
export SCALA_HOME=/usr/scala
export HADOOP_HOME=/usr/hadoop-2.7.3
export ZK_HOME=/usr/zookeeper-3.4.8
export MYSQL_HOME=/usr/local/mysql
export HIVE_HOME=/usr/hive-2.1.1
export SPARK_HOME=/usr/spark-2.1.0
export PATH=$SPARK_HOME/bin:$HIVE_HOME/bin:$MYSQL_HOME/bin:$ZK_HOME/bin:$JAVA_HOME/bin:$SCALA_HOME/bin:$HADOOP_HOME/bin:$PATH
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
3. Edit the spark-env.sh file
cd $SPARK_HOME/conf
vim spark-env.sh
# Add
export JAVA_HOME=/usr/java
export SCALA_HOME=/usr/scala
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=ha1:2181,ha2:2181,ha3:2181 -Dspark.deploy.zookeeper.dir=/spark"
export HADOOP_CONF_DIR=/usr/hadoop-2.7.3/etc/hadoop
export SPARK_MASTER_PORT=7077
export SPARK_EXECUTOR_INSTANCES=1
export SPARK_WORKER_INSTANCES=1
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=1024M
export SPARK_MASTER_WEBUI_PORT=8080
export SPARK_CONF_DIR=/usr/spark-2.1.0/conf
4. Edit the slaves file
vim slaves
# Add
ha2
ha3
ha4
5. Distribute and start
cd /usr
scp -r spark-2.1.0 root@ha4:/usr
scp -r spark-2.1.0 root@ha3:/usr
scp -r spark-2.1.0 root@ha2:/usr
scp -r spark-2.1.0 root@ha1:/usr
# On ha1
$SPARK_HOME/sbin/start-all.sh
# On ha2 and ha3 (standby masters)
$SPARK_HOME/sbin/start-master.sh
jps output on each node:
[root@ha1 spark-2.1.0]# jps
2464 NameNode
2880 ResourceManager
2771 DFSZKFailoverController
3699 Jps
2309 QuorumPeerMain
3622 Master
[root@ha2 zookeeper-3.4.8]# jps
2706 NodeManager
3236 Jps
2485 JournalNode
3189 Worker
2375 DataNode
2586 DFSZKFailoverController
2236 QuorumPeerMain
2303 NameNode
3622 Master
[root@ha3 zookeeper-3.4.8]# jps
2258 DataNode
2466 NodeManager
2197 QuorumPeerMain
2920 Jps
2873 Worker
2331 JournalNode
3622 Master
[root@ha4 ~]# jps
2896 Jps
2849 Worker
2307 JournalNode
2443 NodeManager
2237 DataNode
6. Shut down and take a snapshot: spark ok
# Cluster startup order
# On ha1, ha2, ha3
cd $ZK_HOME
./bin/zkServer.sh start
# On ha1
cd $HADOOP_HOME
./sbin/start-all.sh
cd $SPARK_HOME
./sbin/start-all.sh
# On ha2 and ha3
./sbin/start-master.sh
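To verify the whole Spark HA cluster, a job can be submitted against the list of masters; a sketch using the SparkPi example shipped with Spark 2.1.0 (the jar path may differ slightly):
$SPARK_HOME/bin/spark-submit \
  --class org.apache.spark.examples.SparkPi \
  --master spark://ha1:7077,ha2:7077,ha3:7077 \
  $SPARK_HOME/examples/jars/spark-examples_2.11-2.1.0.jar 100
# the Pi estimate is printed near the end; the running job is visible on the active Master web UI (port 8080)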