Java: what is the meaning of EOF exceptions in hadoop namenode connections from hbase/filesystem?
Disclaimer: this page is a translation of a popular StackOverFlow question, provided under the CC BY-SA 4.0 license. If you want to use it, you must likewise follow the CC BY-SA license, cite the original URL and author information, and attribute it to the original author (not me): StackOverFlow
Original URL: http://stackoverflow.com/questions/7949058/
Warning: these are provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must attribute it to the original authors (not me): StackOverFlow
What is the meaning of EOF exceptions in hadoop namenode connections from hbase/filesystem?
Asked by jayunit100
This is both a general question about java EOF exceptions, as well as Hadoop's EOF exception which is related to jar interoperability. Comments and answers on either topic are acceptable.
Background
I'm noting some threads which discuss a cryptic exception that is ultimately caused by a "readInt" method. This exception seems to have some generic implications which are independent of hadoop, but ultimately it is caused by interoperability of Hadoop jars.
In my case, I'm getting it when I try to create a new FileSystem object in hadoop, in java.
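For context, this is a minimal sketch of the kind of client code in question. The class name, namenode host/port, and path are hypothetical; the calls are the standard org.apache.hadoop.fs.FileSystem API of that era:

    import java.io.InputStream;
    import java.net.URI;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class HadoopRemoteSketch {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            // Point the client at the namenode's RPC address (hypothetical host/port).
            conf.set("fs.default.name", "hdfs://10.0.1.37:8020");
            // FileSystem.get() opens the IPC connection to the namenode; this is
            // where the EOFException in the stack trace below is thrown.
            FileSystem fs = FileSystem.get(URI.create("hdfs://10.0.1.37:8020"), conf);
            InputStream in = fs.open(new Path("/some/remote/file"));
            // ... read from the stream ...
            in.close();
            fs.close();
        }
    }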
Question
My question is: what is happening, and why does the reading of an integer throw an EOF exception? What "File" is this EOF exception referring to, and why would such an exception be thrown if two jars are not capable of interoperating?
Secondarily, I also would like to know how to fix this error so I can connect to and read/write Hadoop's filesystem remotely, using the hdfs protocol with the Java API.
java.io.IOException: Call to /10.0.1.37:50070 failed on local exception: java.io.EOFException
    at org.apache.hadoop.ipc.Client.wrapException(Client.java:1139)
    at org.apache.hadoop.ipc.Client.call(Client.java:1107)
    at org.apache.hadoop.ipc.RPC$Invoker.invoke(RPC.java:226)
    at $Proxy0.getProtocolVersion(Unknown Source)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:398)
    at org.apache.hadoop.ipc.RPC.getProxy(RPC.java:384)
    at org.apache.hadoop.hdfs.DFSClient.createRPCNamenode(DFSClient.java:111)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:213)
    at org.apache.hadoop.hdfs.DFSClient.<init>(DFSClient.java:180)
    at org.apache.hadoop.hdfs.DistributedFileSystem.initialize(DistributedFileSystem.java:89)
    at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:1514)
    at org.apache.hadoop.fs.FileSystem.access$0(FileSystem.java:67)
    at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:1548)
    at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:1530)
    at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:228)
    at sb.HadoopRemote.main(HadoopRemote.java:35)
Caused by: java.io.EOFException
    at java.io.DataInputStream.readInt(DataInputStream.java:375)
    at org.apache.hadoop.ipc.Client$Connection.receiveResponse(Client.java:819)
    at org.apache.hadoop.ipc.Client$Connection.run(Client.java:720)
Accepted answer by jayunit100
Regarding hadoop: I fixed the error! You need to make sure that core-site.xml is serving on 0.0.0.0 instead of 127.0.0.1 (localhost).
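As a rough sketch of what that looks like in core-site.xml — the port here is hypothetical, and the point is only that fs.default.name should resolve to an address remote clients can reach, not the loopback interface:

    <configuration>
      <property>
        <name>fs.default.name</name>
        <!-- Bind the namenode IPC service to all interfaces (or to an externally
             reachable hostname), not 127.0.0.1, so remote clients can connect. -->
        <value>hdfs://0.0.0.0:8020</value>
      </property>
    </configuration>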
If you get the EOF exception, it means that the port is not accessible externally on that IP, so there is no data to read over the hadoop client/server IPC connection.
Answered by user207421
EOFException on a socket means there's no more data and the peer has closed the connection.
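A tiny standalone illustration of that behaviour, unrelated to Hadoop: readInt() needs four bytes, and if the stream ends before they arrive (for example because the peer closed the socket), DataInputStream throws EOFException rather than returning a sentinel value. This sketch uses an in-memory stream to stand in for the closed socket:

    import java.io.ByteArrayInputStream;
    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.IOException;

    public class ReadIntEof {
        public static void main(String[] args) throws IOException {
            // Only two bytes are available, but readInt() needs four.
            DataInputStream in = new DataInputStream(
                    new ByteArrayInputStream(new byte[] {0x00, 0x01}));
            try {
                in.readInt();
            } catch (EOFException e) {
                // The same exception the Hadoop IPC client hits when the server
                // closes the connection before sending a response.
                System.out.println("EOFException: stream ended before an int could be read");
            }
        }
    }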