Problem: an HBase job fails on the cluster with NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
Requirement: create an HBase table with Java, package the program as a jar, and submit it to the cluster to run.
Add the HBase dependency with Maven in IDEA
pom.xml dependency:
<dependency>
    <groupId>org.apache.hbase</groupId>
    <artifactId>hbase-it</artifactId>
    <version>1.1.3</version>
</dependency>
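With the dependency in place, the project can be packaged into a jar from the project root (a minimal sketch; the jar name and output path depend on your pom.xml settings, e.g. target/SparkStudy.jar here):
mvn clean package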
Java code:
package com.bynear.HBase;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.hbase.HBaseConfiguration;
import org.apache.hadoop.hbase.HColumnDescriptor;
import org.apache.hadoop.hbase.HTableDescriptor;
import org.apache.hadoop.hbase.TableName;
import org.apache.hadoop.hbase.client.HBaseAdmin;
import java.io.IOException;
/**
 * 2018/6/20 15:35
 * Creates the table "nvshen" with two column families on the HBase cluster.
 */
public class HBaseTest {
    public static void main(String[] args) throws Exception {
        // Point the client at the ZooKeeper quorum of the HBase cluster
        Configuration conf = HBaseConfiguration.create();
        conf.set("hbase.zookeeper.quorum", "Spark1:2181,Spark2:2181");
        HBaseAdmin admin = new HBaseAdmin(conf);

        // Table "nvshen" with two column families, keeping up to 5 versions per cell
        TableName name = TableName.valueOf("nvshen");
        HTableDescriptor desc = new HTableDescriptor(name);
        HColumnDescriptor base_info = new HColumnDescriptor("base_info");
        HColumnDescriptor extra_info = new HColumnDescriptor("extra_info");
        base_info.setMaxVersions(5);
        extra_info.setMaxVersions(5);
        desc.addFamily(base_info);
        desc.addFamily(extra_info);

        admin.createTable(desc);
        admin.close();
    }
}
Package the jar and submit it to the cluster:
hadoop jar /root/SparkStudy.jar com/bynear/HBase/HBaseTest
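An optional sanity check that the main class itself made it into the jar (using the jar path from the command above):
jar tf /root/SparkStudy.jar | grep HBaseTest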
Submitting the job produces the following error:
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/hadoop/hbase/HBaseConfiguration
        at com.bynear.HBase.HBaseTest.main(HBaseTest.java:19)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at org.apache.hadoop.util.RunJar.main(RunJar.java:212)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hbase.HBaseConfiguration
        at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
        at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
        at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
        ... 6 more
Solution:
Add the HBase jars to the Hadoop classpath. hadoop jar only puts Hadoop's own libraries and the submitted jar on the classpath, so the HBase client classes cannot be found at runtime.
In the Hadoop installation directory, open hadoop-env.sh and add:
export HADOOP_CLASSPATH=/home/hadoop/apps/hbase/lib/*
No restart is needed; re-running the command (hadoop jar mapreducehbase.jar hbase.TxHBase) now succeeds.
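To confirm that the HBase jars are now visible to Hadoop (an optional check; the lib path above is assumed to match your installation), inspect the effective classpath:
hadoop classpath | tr ':' '\n' | grep -i hbase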
Installing HBase
tar -zxvf hbase.tar -C <target directory>
vim hbase-env.sh
Add JAVA_HOME and set the bundled ZooKeeper management (HBASE_MANAGES_ZK) to false so HBase uses the external ZooKeeper.
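For example, the two relevant lines in hbase-env.sh (the JAVA_HOME path is only a placeholder; use your own JDK location):
export JAVA_HOME=/path/to/your/jdk
export HBASE_MANAGES_ZK=false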
vim hbase-site.xml
<configuration>
    <property>
        <!-- HDFS path for HBase data (created automatically) -->
        <name>hbase.rootdir</name>
        <value>hdfs://Spark1:9000/hbase</value>
    </property>
    <property>
        <!-- run in distributed mode -->
        <name>hbase.cluster.distributed</name>
        <value>true</value>
    </property>
    <property>
        <!-- ZooKeeper quorum -->
        <name>hbase.zookeeper.quorum</name>
        <value>Spark1:2181,Spark2:2181</value>
    </property>
</configuration>
Note:
Copy Hadoop's hdfs-site.xml and core-site.xml into HBase's conf directory. The HBase cluster depends on the Hadoop cluster: the HFiles produced by HBase regions are stored under the HDFS directory set by hbase.rootdir in hbase-site.xml.
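A sketch of that step plus a quick verification (HADOOP_HOME and HBASE_HOME are placeholders for your installation directories; /hbase matches the hbase.rootdir configured above):
# copy the Hadoop client configs into HBase's conf directory
cp $HADOOP_HOME/etc/hadoop/core-site.xml $HBASE_HOME/conf/
cp $HADOOP_HOME/etc/hadoop/hdfs-site.xml $HBASE_HOME/conf/
# after starting HBase, the configured rootdir should appear on HDFS
$HBASE_HOME/bin/start-hbase.sh
hdfs dfs -ls /hbase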
Execution output:
18/06/21 01:01:36 INFO zookeeper.ZooKeeper: Client environment:java.library.path=/root/app/hadoop-2.5.0-cdh5.3.6/lib/native
18/06/21 01:01:36 INFO zookeeper.ZooKeeper: Client environment:java.io.tmpdir=/tmp
18/06/21 01:01:36 INFO zookeeper.ZooKeeper: Client environment:java.compiler=<NA>
18/06/21 01:01:36 INFO zookeeper.ZooKeeper: Client environment:os.name=Linux
18/06/21 01:01:36 INFO zookeeper.ZooKeeper: Client environment:os.arch=i386
18/06/21 01:01:36 INFO zookeeper.ZooKeeper: Client environment:os.version=2.6.32-431.el6.x86_64
18/06/21 01:01:36 INFO zookeeper.ZooKeeper: Client environment:user.name=root
18/06/21 01:01:36 INFO zookeeper.ZooKeeper: Client environment:user.home=/root
18/06/21 01:01:36 INFO zookeeper.ZooKeeper: Client environment:user.dir=/root
18/06/21 01:01:36 INFO zookeeper.ZooKeeper: Initiating client connection, connectString=Spark1:2181,Spark2:2181 sessionTimeout=90000 watcher=hconnection-0xcb18130x0, quorum=Spark1:2181,Spark2:2181, baseZNode=/hbase
18/06/21 01:01:36 INFO zookeeper.ClientCnxn: Opening socket connection to server Spark1/192.168.2.15:2181. Will not attempt to authenticate using SASL (unknown error)
18/06/21 01:01:36 INFO zookeeper.ClientCnxn: Socket connection established, initiating session, client: /192.168.2.15:47408, server: Spark1/192.168.2.15:2181
18/06/21 01:01:36 INFO zookeeper.ClientCnxn: Session establishment complete on server Spark1/192.168.2.15:2181, sessionid = 0x641d2794760039, negotiated timeout = 40000
18/06/21 01:01:40 INFO client.HBaseAdmin: Created nvshen
Result (in the HBase shell):
hbase(main):001:0> list
TABLE
nvshen
1 row(s) in 0.4150 seconds
=> ["nvshen"]
hbase(main):002:0>