Following the previous article, "Hadoop pseudo-distributed platform setup on CentOS 6.3" (http://www.linuxidc.com/Linux/2016-11/136789.htm), it is time to set up the HBase pseudo-distributed platform. With the Hadoop environment already in place, installing HBase is straightforward.
I. HBase Installation
1. Download the latest HBase release, 1.2.3, from the official site. To skip the compile-from-source step, I downloaded the binary package hbase-1.2.3-bin.tar.gz directly; it only needs to be extracted. (If the download is slow, switch to one of the other mirrors listed on the official site.)
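If you prefer to fetch the tarball from the command line, something along these lines should work (the archive.apache.org URL is an assumption; substitute whichever mirror you actually use):
# download into the same directory where the tar commands below are run
wget https://archive.apache.org/dist/hbase/1.2.3/hbase-1.2.3-bin.tar.gz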
[hadoop@master tar]$ tar -xzf hbase-1.2.3-bin.tar.gz
[hadoop@master tar]$ mv hbase-1.2.3 /usr/local/hadoop/hbase
[hadoop@master tar]$ cd /usr/local/hadoop/hbase/
[hadoop@master hbase]$ ./bin/hbase version
HBase 1.2.3
Source code repository git://kalashnikov.att.net/Users/stack/checkouts/hbase.git revision=bd63744624a26dc3350137b564fe746df7a721a4
Compiled by stack on Mon Aug 29 15:13:42 PDT 2016
From source with checksum 0ca49367ef6c3a680888bbc4f1485d18
If the command above prints its normal output, the installation succeeded. Next, configure the environment variables.
2. Configure environment variables
Edit ~/.bashrc and append the following to PATH:
:$HADOOP_HOME/hbase/bin
so that ~/.bashrc reads:
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin:$HADOOP_HOME/hbase/bin
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
[hadoop@master hadoop]$ source ~/.bashrc
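To confirm that the new PATH entry is effective (a quick sanity check, not part of the original session), the hbase launcher should now resolve from any directory:
# expected to point into /usr/local/hadoop/hbase/bin
which hbase
# should print the same version banner as before
hbase version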
II. HBase Standalone Mode
1. Edit the configuration file hbase/conf/hbase-env.sh
# export JAVA_HOME=/usr/java/jdk1.6.0/  -> change to:
export JAVA_HOME=/usr/local/java/
# export HBASE_MANAGES_ZK=true  -> uncomment as:
export HBASE_MANAGES_ZK=true
# add the following line (sshd on this host listens on port 322 rather than the default 22)
export HBASE_SSH_OPTS="-p 322"
2. Edit the configuration file hbase/conf/hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>file:/usr/local/hadoop/tmp/hbase/hbase-tmp</value>
  </property>
</configuration>
3. Start HBase
[hadoop@master hbase]$ start-hbase.sh
starting master, logging to /usr/local/hadoop/hbase/bin/../logs/hbase-hadoop-master-master.out
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option PermSize=128m; support was removed in 8.0
Java HotSpot(TM) 64-Bit Server VM warning: ignoring option MaxPermSize=128m; support was removed in 8.0
jps now shows an additional HMaster process:
[hadoop@master hbase]$ jps
12178 ResourceManager
11540 NameNode
4277 Jps
11943 SecondaryNameNode
12312 NodeManager
11707 DataNode
3933 HMaster
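With the master running, the local rootdir configured in hbase-site.xml above should now have been initialized (a quick check, under the assumption that the paths from the config were used unchanged):
# lists the directory tree HBase created for standalone storage
ls /usr/local/hadoop/tmp/hbase/hbase-tmp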
4. Use the HBase shell
[hadoop@master hbase]$ hbase shell
2016-11-07 10:11:02,187 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016
hbase(main):001:0> status
1 active master, 0 backup masters, 1 servers, 0 dead, 2.0000 average load
hbase(main):002:0> exit
Running the HBase shell without starting HBase first will produce errors.
5. Stop HBase
[hadoop@master hbase]$ stop-hbase.sh
stopping hbase......................
III. HBase Pseudo-Distributed Mode
The pseudo-distributed setup differs from standalone mode mainly in the configuration files.
1. Edit the configuration file hbase/conf/hbase-env.sh
# export JAVA_HOME=/usr/java/jdk1.6.0/  -> change to:
export JAVA_HOME=/usr/local/java/
# export HBASE_MANAGES_ZK=true  -> uncomment as:
export HBASE_MANAGES_ZK=true
# export HBASE_CLASSPATH=  -> change to:
export HBASE_CLASSPATH=/usr/local/hadoop/etc/hadoop/
# add the following line (sshd on this host listens on port 322 rather than the default 22)
export HBASE_SSH_OPTS="-p 322"
The ZooKeeper instance bundled with HBase is sufficient here; a standalone ZooKeeper only becomes necessary in a fully distributed cluster.
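For reference only (not needed for this setup): with an external ZooKeeper ensemble you would instead disable the managed instance and point HBase at it, roughly like this (the host names are placeholders):
# in hbase-env.sh
export HBASE_MANAGES_ZK=false
# in hbase-site.xml, set:
#   hbase.zookeeper.quorum = zk1,zk2,zk3
#   hbase.zookeeper.property.clientPort = 2181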
2. Edit the configuration file hbase/conf/hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://10.1.2.108:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
</configuration>
Note that pointing hbase.rootdir at an HDFS path like this assumes the Hadoop platform is pseudo-distributed with a single NameNode, and that hdfs://10.1.2.108:9000 matches the fs.defaultFS configured in Hadoop's core-site.xml.
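A quick way to double-check that the authority matches (assuming Hadoop's configuration lives under the HADOOP_CONF_DIR set earlier):
# the <value> printed after the match should be hdfs://10.1.2.108:9000
grep -A1 fs.defaultFS /usr/local/hadoop/etc/hadoop/core-site.xml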
3. Start HBase
[hadoop@master hbase]$ start-hbase.sh
localhost: starting zookeeper, logging to /usr/local/hadoop/hbase/bin/../logs/hbase-hadoop-zookeeper-master.out
master running as process 3933. Stop it first.
starting regionserver, logging to /usr/local/hadoop/hbase/bin/../logs/hbase-hadoop-1-regionserver-master.out
jps now shows both HMaster and HRegionServer processes:
[hadoop@master hbase]$ jps
7312 Jps
12178 ResourceManager
11540 NameNode
11943 SecondaryNameNode
12312 NodeManager
11707 DataNode
3933 HMaster
7151 HRegionServer
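Since hbase.rootdir now points at HDFS, you can also confirm that HBase created its directory tree there (a quick check, assuming the cluster came up cleanly):
# should list entries such as data/ and WALs/ under /hbase
hdfs dfs -ls /hbase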
4. Use the HBase shell
[hadoop@master hbase]$ hbase shell
2016-11-07 10:35:05,262 WARN [main] util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/usr/local/hadoop/hbase/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.10.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
HBase Shell; enter 'help<RETURN>' for list of supported commands.
Type "exit<RETURN>" to leave the HBase Shell
Version 1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016
1) Check cluster status and version
hbase(main):001:0> status
1 active master, 0 backup masters, 1 servers, 0 dead, 1.0000 average load
hbase(main):002:0> version
1.2.3, rbd63744624a26dc3350137b564fe746df7a721a4, Mon Aug 29 15:13:42 PDT 2016
2) Create a user table with three column families
hbase(main):003:0> create 'user','user_id','address','info'
0 row(s) in 2.3570 seconds
=> Hbase::Table - user
3) List all tables
hbase(main):005:0> create 'tmp', 't1', 't2'
0 row(s) in 1.2320 seconds
=> Hbase::Table - tmp
hbase(main):006:0> list
TABLE
tmp
user
2 row(s) in 0.0100 seconds
=> ["tmp", "user"]
hbase(main):007:0>
4) View the table structure
hbase(main):008:0> describe 'user'
Table user is ENABLED
user
COLUMN FAMILIES DESCRIPTION
{NAME => 'address', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
{NAME => 'info', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
{NAME => 'user_id', BLOOMFILTER => 'ROW', VERSIONS => '1', IN_MEMORY => 'false', KEEP_DELETED_CELLS => 'FALSE', DATA_BLOCK_ENCODING => 'NONE', TTL => 'FOREVER', COMPRESSION => 'NONE', MIN_VERSIONS => '0', BLOCKCACHE => 'true', BLOCKSIZE => '65536', REPLICATION_SCOPE => '0'}
3 row(s) in 0.2060 seconds
hbase(main):009:0>
5) Drop a table
hbase(main):010:0> disable 'tmp'
0 row(s) in 2.2580 seconds
hbase(main):011:0> drop 'tmp'
0 row(s) in 1.2560 seconds
hbase(main):012:0>
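6) Insert and read data (supplementary example)
The user table created above is still empty. As a minimal sketch (not part of the original session; the row key and values below are made up purely for illustration), a row could be inserted and read back like this:
# insert one cell into the info column family of row 'row1'
put 'user', 'row1', 'info:name', 'test'
# read that row back
get 'user', 'row1'
# scan the whole table
scan 'user'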
5. Stop HBase
[hadoop@master hbase]$ stop-hbase.sh
stopping hbase......................
localhost: no zookeeper to stop because no pid file /tmp/hbase-hadoop-zookeeper.pid
The shutdown order for the whole stack is: stop HBase first, then YARN, then HDFS.
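With the Hadoop sbin and HBase bin directories already on PATH (per the .bashrc above), that order corresponds to the following commands:
stop-hbase.sh
stop-yarn.sh
stop-dfs.sh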
6. Web UI
You can reach HBase information from the HDFS web page at http://10.1.2.108:50070 (for example, by browsing the /hbase storage directory in its file browser),
or open the HBase Master web UI directly at http://10.1.2.108:60010/master.jsp (note that HBase 1.0 and later default the master info port to 16010, so try http://10.1.2.108:16010 if port 60010 does not respond).