Initialization failed for Block pool <registering> (Datanode Uuid unassigned)
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use it, you must likewise follow the CC BY-SA license, cite the original URL and author information, and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/33987253/
Asked by Mona Jalal
What is the source of this error and how could it be fixed?
2015-11-29 19:40:04,670 FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for Block pool <registering> (Datanode Uuid unassigned) service to anmol-vm1-new/10.0.1.190:8020. Exiting.
java.io.IOException: All specified directories are not accessible or do not exist.
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:217)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:254)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:974)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:945)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:278)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:220)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:816)
at java.lang.Thread.run(Thread.java:745)
2015-11-29 19:40:04,670 WARN org.apache.hadoop.hdfs.server.datanode.DataNode: Ending block pool service for: Block pool <registering> (Datanode Uuid unassigned) service to anmol-vm1-new/10.0.1.190:8020
2015-11-29 19:40:04,771 INFO org.apache.hadoop.hdfs.server.datanode.DataNode: Removed Block pool <registering> (Datanode Uuid unassigned)
Answered by Muhammad Soliman
There are two possible solutions:
First:
Your namenode and datanode cluster IDs do not match; make sure to make them the same.
On the namenode, change your cluster ID in the file located at:
$ nano HADOOP_FILE_SYSTEM/namenode/current/VERSION
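A VERSION file looks roughly like the fragment below (all field values here are illustrative, not taken from a real cluster); clusterID is the line that must match on both nodes:

```
#Sun Nov 29 19:40:04 UTC 2015
namespaceID=123456789
clusterID=CID-8bf63244-0510-4db6-a949-8f74b50f2be9
cTime=0
storageType=NAME_NODE
blockpoolID=BP-1234567890-10.0.1.190-1448824800000
layoutVersion=-60
```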
On the datanode, your cluster ID is stored in the file:
$ nano HADOOP_FILE_SYSTEM/datanode/current/VERSION
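Instead of editing the file by hand in nano, the copy can be scripted. The sketch below first sets up two fake VERSION files with mismatched clusterIDs under a throwaway /tmp/demo-hdfs tree (the layout and ID values are made up for the demo), then copies the namenode's clusterID into the datanode's file; on a real cluster you would point HADOOP_FILE_SYSTEM at your actual storage directories and skip the demo setup:

```shell
HADOOP_FILE_SYSTEM=/tmp/demo-hdfs   # hypothetical path; use your real storage root

# Demo setup only: fake VERSION files with mismatched clusterIDs.
mkdir -p "$HADOOP_FILE_SYSTEM/namenode/current" "$HADOOP_FILE_SYSTEM/datanode/current"
echo 'clusterID=CID-namenode-1111' > "$HADOOP_FILE_SYSTEM/namenode/current/VERSION"
echo 'clusterID=CID-datanode-2222' > "$HADOOP_FILE_SYSTEM/datanode/current/VERSION"

# Read the namenode's clusterID and write it into the datanode's VERSION file.
NN_CID=$(grep '^clusterID=' "$HADOOP_FILE_SYSTEM/namenode/current/VERSION" | cut -d= -f2)
sed -i "s/^clusterID=.*/clusterID=$NN_CID/" "$HADOOP_FILE_SYSTEM/datanode/current/VERSION"

# Both files now carry the namenode's clusterID.
grep '^clusterID=' "$HADOOP_FILE_SYSTEM/datanode/current/VERSION"
```

Only the clusterID line should be edited; leave the other fields in a real VERSION file untouched.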
Second:
Format the namenode from scratch:
Hadoop 1.x: $ hadoop namenode -format
Hadoop 2.x: $ hdfs namenode -format
Answered by rhtsjz
I ran into the same problem and solved it with the following steps:
Step 1: remove the hdfs directory (for me it was the default directory "/tmp/hadoop-root/"):
rm -rf /tmp/hadoop-root/*
Step 2: run
bin/hdfs namenode -format
to format the directory.
Answered by Savy Pan
The root cause is that the datanode and namenode clusterIDs differ. Unify them by copying the namenode's clusterID to the datanode, then restart Hadoop; that should resolve it.
Answered by Shwetabh Dixit
The issue arises because of a mismatch of the cluster IDs of the datanode and namenode.
Follow these steps:
- Go to Hadoop_home/data/namenode/CURRENT and copy the cluster ID from "VERSION".
- Go to Hadoop_home/data/datanode/CURRENT and paste this cluster ID into "VERSION", replacing the one present there.
- Then format the namenode.
- Start the datanode and namenode again.
Answered by Lyndà Céline
The issue arises because of a mismatch of the cluster IDs of the datanode and namenode.
Follow these steps:
1. Go to Hadoop_home/ and delete the data folder.
2. Create a folder with another name, e.g. data123.
3. Inside it, create two folders: namenode and datanode.
4. Go to hdfs-site.xml and paste your paths:
<property>
  <name>dfs.namenode.name.dir</name>
  <value>........../data123/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <value>............../data123/datanode</value>
</property>
Answered by Eric
This problem may also occur when there are storage I/O errors. In that scenario, the VERSION file is unavailable, hence the error above. You may need to exclude the storage locations on those bad drives in hdfs-site.xml.
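As a sketch of that exclusion (the paths here are hypothetical, assuming /data2 is the failing drive being dropped from the datanode's storage list):

```xml
<!-- hdfs-site.xml: hypothetical example; the datanode previously listed
     /data1/dn,/data2/dn,/data3/dn, and /data2 is the bad drive -->
<property>
  <name>dfs.datanode.data.dir</name>
  <value>/data1/dn,/data3/dn</value>
</property>
```

After removing the bad location, restart the datanode so it re-reads the configuration.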