This article covers the installation of HBase 1.2.0. A few steps differ from the 0.98 release, so please watch for the notes below.
For background, you may also refer to the HBase 0.98 installation guide.
JDK / HBase version compatibility
| HBase Version | JDK 6 | JDK 7 | JDK 8 |
| --- | --- | --- | --- |
| 2 | Not Supported | Not Supported | yes |
| 1.3 | Not Supported | yes | yes |
| 1.2 | Not Supported | yes | yes |
| 1.1 | Not Supported | yes | Running with JDK 8 will work but is not well tested. |
| 1 | Not Supported | yes | Running with JDK 8 will work but is not well tested. |
| 0.98 | yes | yes | Running with JDK 8 works but is not well tested. Building with JDK 8 would require removal of the deprecated remove() method of the PoolMap class and is under consideration. See HBASE-7608 for more information about JDK 8 support. |
| 0.94 | yes | yes | N/A |
Hadoop / HBase version compatibility
| | HBase-0.94.x | HBase-0.98.x (Support for Hadoop 1.1+ is deprecated.) | HBase-1.0.x (Hadoop 1.x is NOT supported) | HBase-1.1.x | HBase-1.2.x | HBase-1.3.x |
| --- | --- | --- | --- | --- | --- | --- |
| Hadoop-1.0.x | X | X | X | X | X | X |
| Hadoop-1.1.x | S | NT | X | X | X | X |
| Hadoop-0.23.x | S | X | X | X | X | X |
| Hadoop-2.0.x-alpha | NT | X | X | X | X | X |
| Hadoop-2.1.0-beta | NT | X | X | X | X | X |
| Hadoop-2.2.0 | NT | S | NT | NT | X | X |
| Hadoop-2.3.x | NT | S | NT | NT | X | X |
| Hadoop-2.4.x | NT | S | S | S | S | S |
| Hadoop-2.5.x | NT | S | S | S | S | S |
| Hadoop-2.6.0 | X | X | X | X | X | X |
| Hadoop-2.6.1+ | NT | NT | NT | NT | S | S |
| Hadoop-2.7.0 | X | X | X | X | X | X |
| Hadoop-2.7.1+ | NT | NT | NT | NT | S | S |
Hadoop version support matrix
"S" = supported
"X" = not supported
"NT" = Not tested
The software versions used here:
Hadoop 2.7.1
HBase 1.2.0
JDK 1.7
1. Upload the HBase installation package
Official download: http://archive.apache.org/dist/hbase/
2. Extract the tarball
hbase-1.2.0-bin.tar.gz
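Extraction can be sketched as follows, assuming the tarball was uploaded to the current directory and the install prefix /home/hadoop/siz/local used throughout this guide:

```shell
# Extract the HBase tarball into the local install directory
tar -zxvf hbase-1.2.0-bin.tar.gz -C /home/hadoop/siz/local/
```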
Configure the environment variables:
vi ~/.bashrc
HBASE_HOME=/home/hadoop/siz/local/hbase-1.2.0
and add $HBASE_HOME/bin to the PATH variable.
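The lines appended to ~/.bashrc might look like this (the install path is the one used above; a sketch, not the only possible layout):

```shell
# Append to ~/.bashrc so every shell picks up the HBase binaries
export HBASE_HOME=/home/hadoop/siz/local/hbase-1.2.0
export PATH=$PATH:$HBASE_HOME/bin

# Verify after re-sourcing the file (source ~/.bashrc)
echo "$HBASE_HOME"
```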
3. Configure the HBase cluster; three files need to be modified (the ZooKeeper cluster must already be installed and running)
Note: since HBase ultimately stores its data on HDFS, Hadoop's hdfs-site.xml and core-site.xml must be copied into hbase/conf.
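Copying the two Hadoop config files can be sketched like this (assuming the standard Hadoop 2.x layout under the install prefix used in this guide):

```shell
# Make HBase see the same HDFS settings as Hadoop itself
HADOOP_CONF=/home/hadoop/siz/local/hadoop-2.7.1/etc/hadoop
HBASE_CONF=/home/hadoop/siz/local/hbase-1.2.0/conf
cp "$HADOOP_CONF/hdfs-site.xml" "$HADOOP_CONF/core-site.xml" "$HBASE_CONF/"
```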
3.1 On the master, edit hbase-env.sh
export JAVA_HOME=/home/hadoop/siz/local/jdk1.7.0_79
# Tell HBase to use an external ZooKeeper
export HBASE_MANAGES_ZK=false
# The maximum amount of heap to use. Default is left to JVM default.
export HBASE_HEAPSIZE=8G
3.2 vim hbase-site.xml
<configuration>
  <property>
    <name>hbase.rootdir</name>
    <value>hdfs://10.8.1.8:9000/hbase</value>
  </property>
  <property>
    <name>hbase.cluster.distributed</name>
    <value>true</value>
  </property>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>10.8.1.120:2181,10.8.1.130:2181,10.8.1.140:2181</value>
  </property>
  <property>
    <name>hbase.master.port</name>
    <value>16000</value>
  </property>
  <property>
    <name>hbase.master.info.port</name>
    <value>16010</value>
  </property>
</configuration>
Note: here we can set hbase.master.info.port and hbase.master.port. This differs from the 0.98 version, so take special care.
3.3 Specify which machines are regionservers; the master is not configured separately. Whichever machine you start HBase on becomes the master, and the regionservers file lists the hosts that will run HRegionServer.
vim regionservers
10.8.1.8
10.8.1.9
10.8.1.10
3.4 Under $HBASE_HOME/lib, replace the Hadoop and ZooKeeper jars with the versions matching your cluster
(1) Remove the bundled Hadoop jars and copy in the ones from your Hadoop 2.7.1 install:
rm -rf $HBASE_HOME/lib/hadoop*.jar
find /home/hadoop/siz/local/hadoop-2.7.1/share/ -name "hadoop*jar" | xargs -I {} cp {} $HBASE_HOME/lib
(Optionally, the copied hadoop test/sources jars can be deleted here.)
(2) Replace the ZooKeeper jar; version 3.4.6 is used here.
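The swap can be sketched as follows; the source path of the 3.4.6 jar is an assumption, so use wherever your ZooKeeper 3.4.6 install keeps it:

```shell
# Remove the bundled ZooKeeper jar and drop in the 3.4.6 one
rm -f $HBASE_HOME/lib/zookeeper-*.jar
cp /home/hadoop/siz/local/zookeeper-3.4.6/zookeeper-3.4.6.jar $HBASE_HOME/lib/
```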
(3) HBase 1.2.0 (as installed here) depends on two jars from the amazonaws package, so upload the following two files into $HBASE_HOME/lib, otherwise the errors below occur.
The two required files:
aws-java-sdk-core-1.10.77.jar
aws-java-sdk-s3-1.11.34.jar
Without them, startup fails with ClassNotFoundException:
Caused by: java.lang.ClassNotFoundException: com.amazonaws.auth.AWSCredentialsProvider
Caused by: java.lang.ClassNotFoundException: com.amazonaws.services.s3.AmazonS3
Note: this differs from the 0.98 version, so take special care.
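One way to fetch the two jars is from Maven Central. The URLs below simply follow the standard Maven repository layout for those coordinates; verify the exact paths before relying on them:

```shell
# Download the two AWS SDK jars straight into HBase's lib directory
cd $HBASE_HOME/lib
wget https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-core/1.10.77/aws-java-sdk-core-1.10.77.jar
wget https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-s3/1.11.34/aws-java-sdk-s3-1.11.34.jar
```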
4. Copy the configured HBase to every node and synchronize the system clocks.
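Distribution and clock sync can be sketched as follows; the node list and the NTP server are assumptions based on the IPs used above:

```shell
# Copy HBase to each regionserver and sync its clock (run from the master)
for node in 10.8.1.9 10.8.1.10; do
  scp -r /home/hadoop/siz/local/hbase-1.2.0 hadoop@$node:/home/hadoop/siz/local/
  ssh hadoop@$node "sudo ntpdate pool.ntp.org"   # or your internal NTP server
done
```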
5. Start the whole stack
Start ZooKeeper on each ZK node:
./zkServer.sh start
Start the HDFS cluster:
start-dfs.sh
Start HBase on the master node:
start-hbase.sh
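After start-hbase.sh, jps shows which daemons came up: the master node should list HMaster, and each slave should list HRegionServer. A sketch for checking every node (node list is an assumption from the IPs above):

```shell
# Verify the Java daemons on every node
for node in 10.8.1.8 10.8.1.9 10.8.1.10; do
  echo "== $node =="
  ssh hadoop@$node jps
done
```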
6. Access the HBase 1.2.0 web UI in a browser:
http://10.8.1.8:16010 (differs from the 0.98 version, which used port 60010)
7. To improve cluster reliability, start additional HMasters (optional). On each backup master node run:
hbase-daemon.sh start master