Invalid URI for NameNode address
Disclaimer: this page is an English rendering of a popular StackOverFlow question and its answers, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me): StackOverFlow
Original question: http://stackoverflow.com/questions/23646308/
Asked by cybertextron
I'm trying to set up a Cloudera Hadoop cluster, with a master node containing the namenode, secondarynamenode and jobtracker, and two other nodes containing the datanode and tasktracker. The Cloudera version is 4.6 and the OS is Ubuntu Precise x64. The cluster is being created from AWS instances. Passwordless SSH has been set up as well, and the Java installation is Oracle 7.
Whenever I execute sudo service hadoop-hdfs-namenode start I get:
2014-05-14 05:08:38,023 FATAL org.apache.hadoop.hdfs.server.namenode.NameNode: Exception in namenode join
java.lang.IllegalArgumentException: Invalid URI for NameNode address (check fs.defaultFS): file:/// has no authority.
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:329)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:317)
at org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:370)
at org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:422)
at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:442)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:621)
at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:606)
at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1177)
at org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1241)
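For context, the "file:/// has no authority" message generally means the NameNode resolved fs.defaultFS to its built-in default of file:///, i.e. it never picked up the value from core-site.xml. A quick way to see which value is actually being resolved (assuming the hdfs client script is on the PATH) is:
hdfs getconf -confKey fs.defaultFS
If that prints file:///, the daemon is not reading the core-site.xml that was edited.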
My core-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://<master-ip>:8020</value>
  </property>
</configuration>
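For illustration only, the value with the placeholder filled in would look like this (10.0.0.5 is a made-up address, not one from the question):
<property>
  <name>fs.defaultFS</name>
  <!-- 10.0.0.5 is a hypothetical master address -->
  <value>hdfs://10.0.0.5:8020</value>
</property>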
mapred-site.xml:
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>hdfs://<master-ip>:8021</value>
  </property>
</configuration>
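A side note worth verifying rather than a confirmed fix: for MRv1 the mapred.job.tracker value is conventionally a plain host:port pair without the hdfs:// scheme, e.g.:
<property>
  <name>mapred.job.tracker</name>
  <!-- <master-ip> is the same placeholder as above -->
  <value><master-ip>:8021</value>
</property>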
hdfs-site.xml:
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
I tried using the public IP, private IP, public DNS and FQDN, but the result is the same.
The directory /etc/hadoop/conf.empty looks like:
-rw-r--r-- 1 root root 2998 Feb 26 10:21 capacity-scheduler.xml
-rw-r--r-- 1 root hadoop 1335 Feb 26 10:21 configuration.xsl
-rw-r--r-- 1 root root 233 Feb 26 10:21 container-executor.cfg
-rwxr-xr-x 1 root root 287 May 14 05:09 core-site.xml
-rwxr-xr-x 1 root root 2445 May 14 05:09 hadoop-env.sh
-rw-r--r-- 1 root hadoop 1774 Feb 26 10:21 hadoop-metrics2.properties
-rw-r--r-- 1 root hadoop 2490 Feb 26 10:21 hadoop-metrics.properties
-rw-r--r-- 1 root hadoop 9196 Feb 26 10:21 hadoop-policy.xml
-rwxr-xr-x 1 root root 332 May 14 05:09 hdfs-site.xml
-rw-r--r-- 1 root hadoop 8735 Feb 26 10:21 log4j.properties
-rw-r--r-- 1 root root 4113 Feb 26 10:21 mapred-queues.xml.template
-rwxr-xr-x 1 root root 290 May 14 05:09 mapred-site.xml
-rw-r--r-- 1 root root 178 Feb 26 10:21 mapred-site.xml.template
-rwxr-xr-x 1 root root 12 May 14 05:09 masters
-rwxr-xr-x 1 root root 29 May 14 05:09 slaves
-rw-r--r-- 1 root hadoop 2316 Feb 26 10:21 ssl-client.xml.example
-rw-r--r-- 1 root hadoop 2251 Feb 26 10:21 ssl-server.xml.example
-rw-r--r-- 1 root root 2513 Feb 26 10:21 yarn-env.sh
-rw-r--r-- 1 root root 2262 Feb 26 10:21 yarn-site.xml
and slaves lists the IP addresses of the two slave machines:
<slave1-ip>
<slave2-ip>
Executing
update-alternatives --get-selections | grep hadoop
returns:
hadoop-conf auto /etc/hadoop/conf.empty
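If the files being edited are not in the directory the hadoop-conf alternative actually points to, the daemons will never see the changes. A sketch of the usual CDH convention of cloning the config directory and switching the alternative to it (conf.my_cluster is a hypothetical directory name):
# copy the empty template to a dedicated config directory (hypothetical name)
sudo cp -r /etc/hadoop/conf.empty /etc/hadoop/conf.my_cluster
# register it and make it the active /etc/hadoop/conf
sudo update-alternatives --install /etc/hadoop/conf hadoop-conf /etc/hadoop/conf.my_cluster 50
sudo update-alternatives --set hadoop-conf /etc/hadoop/conf.my_cluster
After switching, the edited core-site.xml must live in the active directory before the namenode service is restarted.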
I've done a lot of searching, but haven't found anything that helps me fix the problem. Could someone offer any clue as to what's going on?
Answered by Vishwanath S
Make sure you have set the HADOOP_PREFIX variable correctly as indicated in the link: http://hadoop.apache.org/docs/current/hadoop-project-dist/hadoop-common/SingleCluster.html
I faced the same issue as yours, and it was rectified by setting this variable.
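For completeness, a minimal sketch of setting it, assuming the CDH packages placed Hadoop under /usr/lib/hadoop (adjust to the actual installation path):
# /usr/lib/hadoop is an assumed location, not taken from the question
export HADOOP_PREFIX=/usr/lib/hadoop
Putting the export in hadoop-env.sh or the shell profile makes it persistent across sessions.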
Answered by pavanpenjandra
You might have given the wrong syntax for dfs.datanode.data.dir or dfs.namenode.name.dir in hdfs-site.xml. If you miss a / in the value you will get this error. Check the syntax against file:///home/hadoop/hdfs/
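For illustration, the two properties this answer refers to take file:// URIs; the paths below are hypothetical examples, not values from the original question:
<property>
  <name>dfs.namenode.name.dir</name>
  <!-- hypothetical local path for NameNode metadata -->
  <value>file:///home/hadoop/hdfs/namenode</value>
</property>
<property>
  <name>dfs.datanode.data.dir</name>
  <!-- hypothetical local path for DataNode block storage -->
  <value>file:///home/hadoop/hdfs/datanode</value>
</property>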
Answered by Nik Bates-Haus
I ran into this same thing. I found I had to add an fs.defaultFS property to hdfs-site.xml to match the fs.defaultFS property in core-site.xml:
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://<master-ip>:8020</value>
</property>
Once I added this, the secondary namenode started OK.
Answered by Sonu
I was facing the same issue and fixed it by formatting the namenode. Below is the command:
hdfs namenode -format
The core-site.xml entry is:
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
</configuration>
That will definitely solve the problem.
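As a quick sanity check after reformatting (keep in mind that hdfs namenode -format erases existing NameNode metadata, so it is only safe on a fresh cluster) and restarting the daemons, the following can confirm the NameNode is up and the DataNodes have registered:
jps
hdfs dfsadmin -report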

