Original question: http://stackoverflow.com/questions/34117969/
Warning: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must attribute it to the original authors (not me): StackOverflow.
Hadoop 2.6 Connecting to ResourceManager at /0.0.0.0:8032
Asked by Jose Antonio
I'm trying to run the following Spark example under Hadoop 2.6, but I get the following error:
INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8032, and the client enters a loop trying to connect. I'm running a cluster of two machines, one master and one slave.
./bin/spark-submit --class org.apache.spark.examples.SparkPi \
--master yarn-cluster \
--num-executors 3 \
--driver-memory 2g \
--executor-memory 2g \
--executor-cores 1 \
--queue thequeue \
lib/spark-examples*.jar \
10
This is the error I get:
15/12/06 13:38:28 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/12/06 13:38:29 INFO RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
15/12/06 13:38:30 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 0 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/12/06 13:38:31 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 1 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/12/06 13:38:32 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 2 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/12/06 13:38:33 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 3 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
15/12/06 13:38:34 INFO Client: Retrying connect to server: 0.0.0.0/0.0.0.0:8032. Already tried 4 time(s); retry policy is RetryUpToMaximumCountWithFixedSleep(maxRetries=10, sleepTime=1000 MILLISECONDS)
hduser@master:/usr/local/spark$ jps
4930 ResourceManager
4781 SecondaryNameNode
5776 Jps
4608 DataNode
5058 NodeManager
4245 Worker
4045 Master
My /etc/hosts:
192.168.0.1 master
192.168.0.2 slave
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
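As a quick sanity check (not part of the original question), you can verify from every node that both names resolve to the addresses above:

getent hosts master slave
ping -c 1 master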
Answered by Naruto
This error mainly occurs when the hostname is not configured correctly. Please check that the hostname is configured correctly and matches the one you specified for the ResourceManager.
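For example, on the asker's cluster this would mean pointing YARN at the master host instead of the 0.0.0.0 default. A minimal sketch, assuming the hostname master from the /etc/hosts above (illustrative, not part of the original answer):

<!-- etc/hadoop/yarn-site.xml, on every node in the cluster -->
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
</property>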
Answered by sunanda
I faced the same problem and solved it with the following steps:
- Start YARN with the command: start-yarn.sh
- Check that the ResourceManager is running with the command: jps
- Add the following property to the configuration:
<property>
  <name>yarn.resourcemanager.address</name>
  <value>127.0.0.1:8032</value>
</property>
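After adding the property (this usually goes in etc/hadoop/yarn-site.xml; the answer only says "the configuration"), restart YARN so the change takes effect, and check that the ResourceManager is listening on the configured address:

stop-yarn.sh
start-yarn.sh
netstat -tlnp | grep 8032   # the ResourceManager should now be bound here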
Answered by Ajit K'sagar
I also encountered this issue, where I was not able to submit the Spark job with spark-submit.
The issue was caused by the HADOOP_CONF_DIR path being missing when launching the Spark job. Whenever you submit a job, set HADOOP_CONF_DIR to the appropriate Hadoop configuration directory, e.g. export HADOOP_CONF_DIR=/etc/hadoop/conf
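A minimal sketch of how that looks when submitting (the /etc/hadoop/conf path is the answer's own example; adjust it to wherever your yarn-site.xml and core-site.xml actually live):

# Point Spark at the directory containing yarn-site.xml, core-site.xml, etc.
export HADOOP_CONF_DIR=/etc/hadoop/conf
./bin/spark-submit --class org.apache.spark.examples.SparkPi \
    --master yarn-cluster \
    lib/spark-examples*.jar \
    10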
Answered by DroneAB
You need to make sure that yarn-site.xml is on the classpath, and that the relevant properties are marked with the <final>true</final> element.
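Assuming this refers to Hadoop's final-parameter mechanism (an assumption on my part; the original wording is terse), a property marked this way cannot be overridden by job-level configuration. A sketch, reusing the master hostname from the question:

<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>master</value>
  <final>true</final>  <!-- prevents clients from overriding this value -->
</property>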
Answered by Matiji66
Similarly, export HADOOP_CONF_DIR=/etc/hadoop/conf also worked in my case for Flink on YARN, when running ./bin/yarn-session.sh -n 2 -tm 2000.
Answered by mahyard
As you can see here, yarn.resourcemanager.address is computed from yarn.resourcemanager.hostname, whose default value is 0.0.0.0. So you should configure it correctly.
From the base of the Hadoop installation, edit the etc/hadoop/yarn-site.xml file and add this property:
<property>
  <name>yarn.resourcemanager.hostname</name>
  <value>localhost</value>
</property>
Executing start-yarn.sh again will put your new settings into effect.
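To confirm the change took effect, you can also load the ResourceManager web UI, which by default listens on port 8088 of the same host (a standard Hadoop default, not something the answer mentions):

curl http://localhost:8088/cluster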
Answered by Jing He
I had the same problem. In my case the clocks were not in sync between machines, since my ResourceManager is not on the master machine. Even a one-second difference can cause YARN connection problems, and a difference of a few more seconds can prevent your NameNode and DataNode from starting. Use ntpd to configure time synchronization and make sure the clocks are exactly the same.
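A sketch of one way to set this up on Debian/Ubuntu nodes (package and service names vary by distribution; the slave hostname is taken from the /etc/hosts above):

# On every node: install and start ntpd
sudo apt-get install ntp
sudo service ntp restart
# Verify synchronization against the configured peers
ntpq -p
# Quick cross-node comparison of clocks
date && ssh slave date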

