When using Hadoop, you may run into a warning like the following:
WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
A common explanation online is that this warning is caused by a mismatch between the architecture of the operating system and the architecture of the Hadoop build you downloaded (32-bit vs. 64-bit). So the first thing to check is which architecture your own Hadoop native library was built for:
1. Go to the Hadoop installation directory;
2. Change into the native library directory, lib/native;
3. Run the file command on libhadoop.so, as sketched below.
That tells you for certain whether your Hadoop native library is 32-bit or 64-bit.
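A minimal sketch of the check, assuming the hadoop-2.5.2 install path used throughout this article (libhadoop.so is often a symlink, hence -L to follow it):
cd /home/hadoop/app/hadoop-2.5.2/lib/native
file -L libhadoop.so
A 64-bit build reports something like "ELF 64-bit LSB shared object, x86-64, ...", while a 32-bit build reports "ELF 32-bit LSB shared object, Intel 80386, ...".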
If the architectures really do differ, the only option is to download the Hadoop source and build it yourself. But that was not my problem: my operating system is 64-bit and so is my Hadoop build. So where does the warning come from?
Let's edit /etc/profile so that Hadoop prints debug logging to the console, and take a look.
1. Add the following line to /etc/profile:
export HADOOP_ROOT_LOGGER=DEBUG,console
Then source /etc/profile so the change takes effect:
source /etc/profile
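An optional sanity check that the variable is really set in the current shell:
echo $HADOOP_ROOT_LOGGER
It should print DEBUG,console.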
2. Run any Hadoop command and look at the output around the warning; the relevant part is below:
[hadoop@hadoop2 ~]$ hdfs dfs -ls /
17/01/13 14:04:39 DEBUG util.Shell: setsid exited with exit code 0
17/01/13 14:04:39 DEBUG conf.Configuration: parsing URL jar:file:/home/hadoop/app/hadoop-2.5.2/share/hadoop/common/hadoop-common-2.5.2.jar!/core-default.xml
17/01/13 14:04:39 DEBUG conf.Configuration: parsing input stream sun.net.www.protocol.jar.JarURLConnection$JarURLInputStream@6e90891
17/01/13 14:04:39 DEBUG conf.Configuration: parsing URL file:/home/hadoop/app/hadoop-2.5.2/etc/hadoop/core-site.xml
17/01/13 14:04:39 DEBUG conf.Configuration: parsing input stream java.io.BufferedInputStream@3021eb3f
17/01/13 14:04:40 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginSuccess with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[Rate of successful kerberos logins and latency (milliseconds)], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
17/01/13 14:04:40 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.loginFailure with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[Rate of failed kerberos logins and latency (milliseconds)], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
17/01/13 14:04:40 DEBUG lib.MutableMetricsFactory: field org.apache.hadoop.metrics2.lib.MutableRate org.apache.hadoop.security.UserGroupInformation$UgiMetrics.getGroups with annotation @org.apache.hadoop.metrics2.annotation.Metric(value=[GetGroups], about=, valueName=Time, type=DEFAULT, always=false, sampleName=Ops)
17/01/13 14:04:40 DEBUG impl.MetricsSystemImpl: UgiMetrics, User and group related metrics
17/01/13 14:04:41 DEBUG security.Groups: Creating new Groups object
17/01/13 14:04:41 DEBUG util.NativeCodeLoader: Trying to load the custom-built native-hadoop library...
17/01/13 14:04:41 DEBUG util.NativeCodeLoader: Failed to load native-hadoop with error: java.lang.UnsatisfiedLinkError: /home/hadoop/app/hadoop-2.5.2/lib/native/libhadoop.so: /lib64/libc.so.6: version `GLIBC_2.14' not found (required by /home/hadoop/app/hadoop-2.5.2/lib/native/libhadoop.so)
17/01/13 14:04:41 DEBUG util.NativeCodeLoader: java.library.path=/home/hadoop/app/hadoop-2.5.2/lib/native
17/01/13 14:04:41 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/01/13 14:04:41 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Falling back to shell based
17/01/13 14:04:41 DEBUG security.JniBasedUnixGroupsMappingWithFallback: Group mapping impl=org.apache.hadoop.security.ShellBasedUnixGroupsMapping
17/01/13 14:04:41 DEBUG security.Groups: Group mapping impl=org.apache.hadoop.security.JniBasedUnixGroupsMappingWithFallback; cacheTimeout=300000; warningDeltaMs=5000
17/01/13 14:04:41 DEBUG security.UserGroupInformation: hadoop login
17/01/13 14:04:41 DEBUG security.UserGroupInformation: hadoop login commit
17/01/13 14:04:41 DEBUG security.UserGroupInformation: using local user:UnixPrincipal: hadoop
17/01/13 14:04:41 DEBUG security.UserGroupInformation: UGI loginUser:hadoop (auth:SIMPLE)
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
17/01/13 14:04:42 DEBUG hdfs.HAUtil: No HA service delegation token found for logical URI hdfs://ns1/
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.client.use.legacy.blockreader.local = false
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.client.read.shortcircuit = false
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.client.domain.socket.data.traffic = false
17/01/13 14:04:42 DEBUG hdfs.BlockReaderLocal: dfs.domain.socket.path =
17/01/13 14:04:42 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
17/01/13 14:04:42 DEBUG ipc.Server: rpcKind=RPC_PROTOCOL_BUFFER, rpcRequestWrapperClass=class org.apache.hadoop.ipc.ProtobufRpcEngine$RpcRequestWrapper, rpcInvoker=org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker@440b2a8c
17/01/13 14:04:42 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@2301799f
17/01/13 14:04:43 DEBUG shortcircuit.DomainSocketFactory: Both short-circuit local reads and UNIX domain socket are disabled.
17/01/13 14:04:43 DEBUG ipc.Client: The ping interval is 60000 ms.
17/01/13 14:04:43 DEBUG ipc.Client: Connecting to hadoop1/192.168.1.232:9000
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop1/192.168.1.232:9000 from hadoop: starting, having connections 1
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop1/192.168.1.232:9000 from hadoop sending #0
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop1/192.168.1.232:9000 from hadoop got value #0
17/01/13 14:04:43 INFO retry.RetryInvocationHandler: Exception while invoking getFileInfo of class ClientNamenodeProtocolTranslatorPB over hadoop1/192.168.1.232:9000. Trying to fail over immediately.
org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.StandbyException): Operation category READ is not supported in state standby
    at org.apache.hadoop.hdfs.server.namenode.ha.StandbyState.checkOperation(StandbyState.java:87)
    at org.apache.hadoop.hdfs.server.namenode.NameNode$NameNodeHAContext.checkOperation(NameNode.java:1688)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.checkOperation(FSNamesystem.java:1258)
    at org.apache.hadoop.hdfs.server.namenode.FSNamesystem.getFileInfo(FSNamesystem.java:3684)
    at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.getFileInfo(NameNodeRpcServer.java:803)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolServerSideTranslatorPB.getFileInfo(ClientNamenodeProtocolServerSideTranslatorPB.java:779)
    at org.apache.hadoop.hdfs.protocol.proto.ClientNamenodeProtocolProtos$ClientNamenodeProtocol$2.callBlockingMethod(ClientNamenodeProtocolProtos.java)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Server$ProtoBufRpcInvoker.call(ProtobufRpcEngine.java:585)
    at org.apache.hadoop.ipc.RPC$Server.call(RPC.java:928)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2013)
    at org.apache.hadoop.ipc.Server$Handler$1.run(Server.java:2009)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1614)
    at org.apache.hadoop.ipc.Server$Handler.run(Server.java:2007)
    at org.apache.hadoop.ipc.Client.call(Client.java:1411)
    at org.apache.hadoop.ipc.Client.call(Client.java:1364)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:206)
    at com.sun.proxy.$Proxy9.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getFileInfo(ClientNamenodeProtocolTranslatorPB.java:707)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy10.getFileInfo(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.getFileInfo(DFSClient.java:1785)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1068)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1064)
    at org.apache.hadoop.fs.Globber.getFileStatus(Globber.java:57)
    at org.apache.hadoop.fs.Globber.glob(Globber.java:265)
    at org.apache.hadoop.fs.FileSystem.globStatus(FileSystem.java:1623)
    at org.apache.hadoop.fs.shell.PathData.expandAsGlob(PathData.java:326)
    at org.apache.hadoop.fs.shell.Command.expandArgument(Command.java:224)
    at org.apache.hadoop.fs.shell.Command.expandArguments(Command.java:207)
    at org.apache.hadoop.fs.shell.Command.processRawArguments(Command.java:190)
    at org.apache.hadoop.fs.shell.Command.run(Command.java:154)
    at org.apache.hadoop.fs.FsShell.run(FsShell.java:287)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:84)
    at org.apache.hadoop.fs.FsShell.main(FsShell.java:340)
17/01/13 14:04:43 DEBUG retry.RetryUtils: multipleLinearRandomRetry = null
17/01/13 14:04:43 DEBUG ipc.Client: getting client out of cache: org.apache.hadoop.ipc.Client@2301799f
17/01/13 14:04:43 DEBUG ipc.Client: The ping interval is 60000 ms.
17/01/13 14:04:43 DEBUG ipc.Client: Connecting to hadoop2/192.168.1.233:9000
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop2/192.168.1.233:9000 from hadoop sending #0
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop2/192.168.1.233:9000 from hadoop: starting, having connections 2
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop2/192.168.1.233:9000 from hadoop got value #0
17/01/13 14:04:43 DEBUG ipc.ProtobufRpcEngine: Call: getFileInfo took 8ms
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop2/192.168.1.233:9000 from hadoop sending #1
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop2/192.168.1.233:9000 from hadoop got value #1
17/01/13 14:04:43 DEBUG ipc.ProtobufRpcEngine: Call: getListing took 6ms
Found 3 items
-rw-r--r-- 3 hadoop supergroup 179161400 2017-01-09 13:35 /apache-storm-1.0.2.tar.gz
-rw-r--r-- 3 hadoop supergroup 147197492 2017-01-07 12:28 /hadoop-2.5.2.tar.gz
drwxr-xr-x - hadoop supergroup 0 2017-01-12 08:54 /hbase
17/01/13 14:04:43 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@2301799f
17/01/13 14:04:43 DEBUG ipc.Client: stopping client from cache: org.apache.hadoop.ipc.Client@2301799f
17/01/13 14:04:43 DEBUG ipc.Client: removing client from cache: org.apache.hadoop.ipc.Client@2301799f
17/01/13 14:04:43 DEBUG ipc.Client: stopping actual client because no more references remain: org.apache.hadoop.ipc.Client@2301799f
17/01/13 14:04:43 DEBUG ipc.Client: Stopping client
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop2/192.168.1.233:9000 from hadoop: closed
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop2/192.168.1.233:9000 from hadoop: stopped, remaining connections 1
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop1/192.168.1.232:9000 from hadoop: closed
17/01/13 14:04:43 DEBUG ipc.Client: IPC Client (382257985) connection to hadoop1/192.168.1.232:9000 from hadoop: stopped, remaining connections 0
The StandbyException and the "Trying to fail over immediately" retry can be ignored; that part only shows up because the client first tried the standby NameNode of this HA cluster. The part that matters is the UnsatisfiedLinkError in the NativeCodeLoader output above, and it boils down to a single problem:
GLIBC_2.14 was not found.
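If you want to confirm which glibc version libhadoop.so was actually linked against, one option (assuming objdump from binutils is installed) is to inspect its dynamic symbol table:
objdump -T /home/hadoop/app/hadoop-2.5.2/lib/native/libhadoop.so | grep GLIBC_2.14
Any matching symbols mean the library requires glibc 2.14 or newer.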
So let's check the GLIBC version on the current system.
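One common way to print it (ldd ships with glibc, so its version is the glibc version):
ldd --version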
The result is clear: the system's glibc is 2.12. That is too old, so it needs to be upgraded. Let's do that now.
The upgrade procedure is as follows:
It is best to do the upgrade as root; otherwise permission problems can make the build or install fail.
1. If wget is not installed, install it first.
2. Download the following two tarballs:
wget http://ftp.gnu.org/gnu/glibc/glibc-2.15.tar.gz
wget http://ftp.gnu.org/gnu/glibc/glibc-ports-2.15.tar.gz
3. Extract the archives:
tar -xvf glibc-2.15.tar.gz
tar -xvf glibc-ports-2.15.tar.gz
4. Move glibc-ports-2.15 into glibc-2.15 as its ports subdirectory:
mv glibc-ports-2.15 glibc-2.15/ports
5. Create a separate directory for the build:
mkdir glibc-build-2.15
6. Enter the build directory:
cd glibc-build-2.15
7. Configure the build:
../glibc-2.15/configure --prefix=/usr --disable-profile --enable-add-ons --with-headers=/usr/include --with-binutils=/usr/bin
8. Compile (this step is slow, so be patient):
make
9. Install:
make install
At this point the installation is complete. Check the glibc version again:
Sure enough, it has been upgraded. Now run a Hadoop test command once more:
The earlier warning is gone. Finally, edit /etc/profile and remove the line we added at the beginning. Done!
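Note that deleting the export HADOOP_ROOT_LOGGER=DEBUG,console line from /etc/profile does not clear the variable from a shell that already has it exported, so one way to finish the cleanup and re-test in the same session is:
unset HADOOP_ROOT_LOGGER
hdfs dfs -ls /
The listing should come back without the debug output and without the native-library warning.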