Java UnsatisfiedLinkError (NativeIO$Windows.access0) when submitting a mapreduce job from Windows to Hadoop 2.2 on Ubuntu
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same CC BY-SA license and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/20584157/
UnsatisfiedLinkError (NativeIO$Windows.access0) when submitting mapreduce job to hadoop 2.2 from windows to ubuntu
Asked by padmalcom
I submit my mapreduce jobs from a Java application running on Windows to the Hadoop 2.2 cluster running on Ubuntu. In Hadoop 1.x this worked as expected, but on Hadoop 2.2 I get a strange error:
java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.access0(Ljava/lang/String;I)Z
I compiled the necessary Windows libraries (hadoop.dll and winutils.exe) and can access HDFS via code and read cluster information using the Hadoop API. Only the job submission does not work.
Any help is appreciated.
Solution: I found it out myself; the path where the Windows Hadoop binaries live has to be added to the PATH variable on Windows.
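A quick way to confirm this fix is in place is to scan the PATH the JVM actually sees. The sketch below is not from the original answer; the class and helper names are made up for illustration, and it only checks file visibility (it cannot verify that the binaries match your Hadoop version).

```java
import java.io.File;

public class WinutilsCheck {
    // Scan a PATH-style string and return the first directory that contains
    // fileName, or null if no entry holds it.
    static String findOnPath(String fileName, String pathValue, String separator) {
        if (pathValue == null) {
            return null;
        }
        for (String dir : pathValue.split(separator)) {
            if (new File(dir, fileName).isFile()) {
                return dir;
            }
        }
        return null;
    }

    public static void main(String[] args) {
        String path = System.getenv("PATH");
        // File.pathSeparator is ";" on Windows and ":" elsewhere.
        for (String bin : new String[] { "winutils.exe", "hadoop.dll" }) {
            String dir = findOnPath(bin, path, File.pathSeparator);
            System.out.println(bin + " -> " + (dir != null ? dir : "NOT FOUND on PATH"));
        }
    }
}
```

If either binary prints "NOT FOUND on PATH", the submitting JVM will not be able to link NativeIO$Windows.access0.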
Answered by Vijay
This error generally occurs due to a mismatch between the binary files in your %HADOOP_HOME%\bin folder and your Hadoop version. So what you need to do is get hadoop.dll and winutils.exe built specifically for your Hadoop version and copy them to your %HADOOP_HOME%\bin folder.
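If you cannot (or prefer not to) edit the system-wide environment, the hadoop.home.dir system property can point the Hadoop client at that folder from code. This is a sketch, not part of the answer above; the C:\hadoop location is an assumption, and the property must be set before any org.apache.hadoop class is loaded.

```java
public class HadoopHomeSetup {
    public static void main(String[] args) {
        // Hypothetical install location; point it at the folder whose "bin"
        // subdirectory holds the version-matched hadoop.dll and winutils.exe.
        String hadoopHome = "C:\\hadoop";
        // Must run before the first org.apache.hadoop class is loaded,
        // otherwise the Hadoop client has already resolved its home dir.
        System.setProperty("hadoop.home.dir", hadoopHome);
        System.out.println("hadoop.home.dir = " + System.getProperty("hadoop.home.dir"));
    }
}
```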
Answered by rustyx
- Get hadoop.dll (or libhadoop.so on *x). Make sure to match bitness (32- vs. 64-bit) with your JVM. Make sure it is available via PATH or java.library.path.
  Note that setting java.library.path overrides PATH. If you set java.library.path, make sure it is correct and contains the hadoop library.
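To see which of the two the JVM will actually consult, you can print both. The snippet below is a diagnostic sketch, not the answer's code; effectiveSearchPath is an illustrative helper mirroring the rule stated above, not a real JVM API.

```java
public class NativePathProbe {
    // Mirrors the rule above: when java.library.path is set (non-empty), the
    // JVM resolves System.loadLibrary against it and the default derived from
    // PATH no longer applies; otherwise PATH (on Windows) is what matters.
    static String effectiveSearchPath(String javaLibraryPath, String envPath) {
        if (javaLibraryPath != null && !javaLibraryPath.isEmpty()) {
            return javaLibraryPath;
        }
        return envPath;
    }

    public static void main(String[] args) {
        // hadoop.dll must sit in one of the directories printed below, or
        // NativeIO$Windows.access0 stays unlinked and throws UnsatisfiedLinkError.
        System.out.println("java.library.path = " + System.getProperty("java.library.path"));
        System.out.println("PATH              = " + System.getenv("PATH"));
    }
}
```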
Answered by K. SOT
I had been having issues with my Windows 10 Hadoop installation since this morning: the NameNode and DataNode were not starting due to a mismatch in the binary files. The issues were resolved after I replaced the bin folder with one that corresponds to my Hadoop version. Possibly the bin folder that came with the installation was for a different version; I don't know how it happened. If all your configurations are intact, you might want to replace the bin folder with a version that corresponds to your Hadoop installation.