1. Error when starting Hive
[root@master sbin]# hive
Hive Session ID = 991ccabe-96b4-4fae-8b1c-ac2856ab182e
Logging initialized using configuration in jar:file:/root/soft/hive/apache-hive-3.1.3-bin/lib/hive-common-3.1.3.jar!/hive-log4j2.properties Async: true
Exception in thread "main" java.lang.RuntimeException: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /tmp/hive/root/991ccabe-96b4-4fae-8b1c-ac2856ab182e. Name node is in safe mode.
The reported blocks 822 has reached the threshold 0.9990 of total blocks 822. The minimum number of live datanodes is not required. In safe mode extension. Safe mode will be turned off automatically in 6 seconds. NamenodeHostName:master
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1570)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1557)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3406)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1161)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:739)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:532)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1020)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:948)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2952)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:651)
    at org.apache.hadoop.hive.ql.session.SessionState.beginStart(SessionState.java:591)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:747)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:683)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:323)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:236)
Caused by: org.apache.hadoop.hdfs.server.namenode.SafeModeException: Cannot create directory /tmp/hive/root/991ccabe-96b4-4fae-8b1c-ac2856ab182e. Name node is in safe mode.
The reported blocks 822 has reached the threshold 0.9990 of total blocks 822. The minimum number of live datanodes is not required. In safe mode extension. Safe mode will be turned off automatically in 6 seconds. NamenodeHostName:master
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1570)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1557)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3406)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1161)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:739)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:532)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1020)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:948)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2952)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:423)
    at org.apache.hadoop.ipc.RemoteException.instantiateException(RemoteException.java:121)
    at org.apache.hadoop.ipc.RemoteException.unwrapRemoteException(RemoteException.java:88)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2492)
    at org.apache.hadoop.hdfs.DFSClient.mkdirs(DFSClient.java:2466)
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1467)
    at org.apache.hadoop.hdfs.DistributedFileSystem$27.doCall(DistributedFileSystem.java:1464)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirsInternal(DistributedFileSystem.java:1481)
    at org.apache.hadoop.hdfs.DistributedFileSystem.mkdirs(DistributedFileSystem.java:1456)
    at org.apache.hadoop.hive.ql.session.SessionState.createPath(SessionState.java:786)
    at org.apache.hadoop.hive.ql.session.SessionState.createSessionDirs(SessionState.java:721)
    at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:627)
    ... 9 more
Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.hdfs.server.namenode.SafeModeException): Cannot create directory /tmp/hive/root/991ccabe-96b4-4fae-8b1c-ac2856ab182e. Name node is in safe mode.
The reported blocks 822 has reached the threshold 0.9990 of total blocks 822. The minimum number of live datanodes is not required. In safe mode extension. Safe mode will be turned off automatically in 6 seconds. NamenodeHostName:master
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.newSafemodeException(FSNamesystem.java:1570)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkNameNodeSafeMode(FSNamesystem.java:1557)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.mkdirs(FSNamesystem.java:3406)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.mkdirs(NameNodeRpcServer.java:1161)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.mkdirs(ClientNamenodeProtocolServerSideTranslatorPB.java:739)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine2.java:532)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:1070)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:1020)
    at org.apache.hadoop.ipc.Server$RpcCall.run(Server.java:948)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1845)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2952)
    at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1562)
    at org.apache.hadoop.ipc.Client.call(Client.java:1508)
    at org.apache.hadoop.ipc.Client.call(Client.java:1405)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:234)
    at org.apache.hadoop.ipc.ProtobufRpcEngine2$Invoker.invoke(ProtobufRpcEngine2.java:119)
    at com.sun.proxy.$Proxy28.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.mkdirs(ClientNamenodeProtocolTranslatorPB.java:673)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)
    at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)
    at com.sun.proxy.$Proxy29.mkdirs(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.primitiveMkdir(DFSClient.java:2490)
    ... 18 more
Cause:
This is a common HDFS error, raised when Hive (or any other Hadoop application) tries to create a directory while the NameNode, the component that stores and manages HDFS metadata, is in safe mode. Safe mode is a protective state that HDFS enters during startup, or in certain other situations, to preserve data integrity.
In safe mode the NameNode accepts no new write requests, including requests to create directories or files, in order to prevent data corruption. The NameNode may enter safe mode when:
DataNodes report serious errors that could lead to data inconsistency.
The NameNode's directory structure has changed and data blocks need to be rebalanced.
The NameNode is performing maintenance work such as data verification, backup, or recovery.
To resolve the problem, try the following steps:
Wait: if the NameNode is working through a transient task, it may exit safe mode on its own shortly; the log above even says safe mode will be turned off automatically in 6 seconds. Check the cluster logs to confirm whether such a task is in progress.
Restart the NameNode: if safe mode has persisted for a while and you know no operation is in progress, you can try restarting the NameNode. In a production environment this should be done cautiously, with other services shut down or proper backups in place.
Leave safe mode manually: with the right permissions you can force the NameNode out of safe mode from the command line (hdfs dfsadmin -safemode leave), but make sure all necessary checks have completed first.
Check cluster health: verify that all DataNodes are running normally, none have exited unexpectedly, no data is missing, and data blocks are evenly distributed.
Before taking any of these actions, consult the cluster's official documentation or your system administrator to make sure the operation is correct and safe.
Solution:
hdfs dfsadmin -safemode leave
(The older spelling hadoop dfsadmin -safemode leave still works, but hadoop dfsadmin is deprecated in favor of hdfs dfsadmin.)
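Before forcing the NameNode out of safe mode, it is worth checking its current state. A minimal command sequence, assuming the HDFS client is configured on the node where you run it:

```shell
# Show whether the NameNode is currently in safe mode
hdfs dfsadmin -safemode get

# Option 1: block until the NameNode leaves safe mode on its own
hdfs dfsadmin -safemode wait

# Option 2: force it out immediately
# (only when you are sure the cluster is healthy)
hdfs dfsadmin -safemode leave
```

In a case like the one above, where the log says safe mode will end in a few seconds, `wait` is the safer choice; `leave` is for when safe mode is stuck.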
2. Error when connecting to remote Hive from Java code
Exception in thread "main" java.lang.NoClassDefFoundError: org/apache/thrift/protocol/TProtocol
    at org.apache.hive.jdbc.HiveDriver.connect(HiveDriver.java:94)
    at java.sql.DriverManager.getConnection(DriverManager.java:664)
    at java.sql.DriverManager.getConnection(DriverManager.java:208)
    at com.example.demo.controller.TestController.main(TestController.java:39)
Caused by: java.lang.ClassNotFoundException: org.apache.thrift.protocol.TProtocol
    at java.net.URLClassLoader.findClass(URLClassLoader.java:387)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:419)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:352)
    ... 4 more
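This NoClassDefFoundError means the Thrift client library (libthrift), which the Hive JDBC driver depends on, is missing from the runtime classpath. hive-jdbc normally pulls libthrift in transitively, so the error usually points to an excluded or unresolved dependency. A sketch of the Maven dependencies, assuming Hive 3.1.3 as in section 1 (libthrift 0.9.3 is what Hive 3.1.x builds against; match the versions to your own Hive build):

```xml
<!-- The Hive JDBC driver; normally brings libthrift in transitively. -->
<dependency>
    <groupId>org.apache.hive</groupId>
    <artifactId>hive-jdbc</artifactId>
    <version>3.1.3</version>
</dependency>
<!-- Declare libthrift explicitly if it was excluded somewhere upstream;
     it provides org.apache.thrift.protocol.TProtocol. -->
<dependency>
    <groupId>org.apache.thrift</groupId>
    <artifactId>libthrift</artifactId>
    <version>0.9.3</version>
</dependency>
```

Alternatively, the self-contained hive-jdbc standalone jar shipped in the Hive binary distribution bundles Thrift and avoids the problem entirely.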
3. Error when initializing the Hive schema
Command executed: ./schematool -initSchema -dbType mysql
[root@master bin]# ./schematool -initSchema -dbType mysql
Metastore connection URL: jdbc:mysql://127.0.0.1:3306/hive_metadata?createDatabaseIfNotExist=true&characterEncoding=UTF-8&useSSL=false
Metastore Connection Driver : com.mysql.cj.jdbc.Driver
Metastore connection User: root
org.apache.hadoop.hive.metastore.HiveMetaException: Failed to get schema version.
Underlying cause: java.sql.SQLNonTransientConnectionException : Public Key Retrieval is not allowed
SQL Error code: 0
Use --verbose for detailed stacktrace.
*** schemaTool failed ***
The cause here: I had changed the MySQL password, but had not yet restarted MySQL and logged in with the new password.
After restarting MySQL and logging in again, re-running the command succeeded.
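More generally, "Public Key Retrieval is not allowed" is raised by MySQL Connector/J 8.x when the server authenticates with the caching_sha2_password plugin and the client connects without SSL. If the error persists after restarting MySQL, a common workaround is to allow public key retrieval in the metastore connection URL in hive-site.xml. A sketch reusing the connection values shown in the schematool output above (note the &amp;amp; escaping required inside XML; use this option only on a trusted network, since without SSL it weakens protection against man-in-the-middle attacks):

```xml
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <!-- Same URL as above, plus allowPublicKeyRetrieval=true -->
    <value>jdbc:mysql://127.0.0.1:3306/hive_metadata?createDatabaseIfNotExist=true&amp;characterEncoding=UTF-8&amp;useSSL=false&amp;allowPublicKeyRetrieval=true</value>
</property>
```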