Hadoop: java.net.ConnectException: Connection refused
Disclaimer: this page is a translation of a popular StackOverFlow question, provided under the CC BY-SA 4.0 license. If you reuse it, you must do so under the same CC BY-SA terms, link to the original, and attribute it to the original authors (not me): StackOverFlow
Original URL: http://stackoverflow.com/questions/37637941/
Hadoop: java.net.ConnectException: Connection refused
Asked by Troy Zuroske
Hello, I have been trying to follow this tutorial for a very long time now: http://www.tutorialspoint.com/apache_flume/fetching_twitter_data.htm, and I am absolutely stuck at Step 3: Create a Directory in HDFS. I have run start-dfs.sh and start-yarn.sh, and both seem to have worked correctly since I get the same output as the tutorial, but when I try to run:
hdfs dfs -mkdir hdfs://localhost:9000/user/Hadoop/twitter_data
I keep receiving the same error:
mkdir: Call From trz-VirtualBox/10.0.2.15 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused
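In general, "Connection refused" on localhost:9000 means nothing is listening on that port, i.e. the NameNode is not actually up. A quick way to confirm, assuming jps and netstat are available inside the VM, is:

jps                                    # should list NameNode and DataNode after start-dfs.sh
netstat -tln 2>/dev/null | grep 9000   # shows whether anything is bound to port 9000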
I cannot figure out why; I have searched everywhere and tried a number of solutions, but I can't seem to make progress. I am going to list all of the files that I think could be causing this, though I could be wrong. My core-site.xml is:
<configuration>
<property>
<name>hadoop.tmp.dir</name>
<value>/Public/hadoop-2.7.1/tmp</value>
<description>A base for other temporary directories.</description>
</property>
<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>
</configuration>
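fs.default.name is the older name for fs.defaultFS, but Hadoop 2.7 still honours it. Assuming this is the configuration directory actually in use, the URI the client resolves can be double-checked with:

hdfs getconf -confKey fs.defaultFS   # should print hdfs://localhost:9000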
My mapred-site.xml is:
<configuration>
<property>
<name>mapred.job.tracker</name>
<value>hdfs://localhost:9001</value>
</property>
</configuration>
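mapred.job.tracker is a Hadoop 1.x setting; with start-yarn.sh in use on 2.7, MapReduce normally runs through YARN instead (mapreduce.framework.name=yarn). Assuming the cluster has been started, the YARN side can be checked with:

jps | grep -E 'ResourceManager|NodeManager'   # daemons started by start-yarn.sh
yarn node -list                               # lists live NodeManagers once YARN is up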
My hdfs-site.xml is:
<configuration>
<property>
<name>dfs.replication</name>
<value>1</value>
</property>
<property>
<name>dfs.permission</name>
<value>false</value>
</property>
</configuration>
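Note that the property name dfs.permission matches neither the old dfs.permissions nor the Hadoop 2.x dfs.permissions.enabled, so it is most likely ignored. Assuming this configuration directory is the one in use, what the client actually sees can be checked with:

hdfs getconf -confKey dfs.replication           # should print 1
hdfs getconf -confKey dfs.permissions.enabled   # defaults to true unless overridden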
I am running Ubuntu 14.04.4 LTS in VirtualBox. My ~/.bashrc looks like this:
export JAVA_HOME=/usr/lib/jvm/java-1.7.0-openjdk-amd64
export HADOOP_HOME=/usr/local/hadoop/bin
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export PATH=$PATH:$HADOOP_HOME/sbin:$HADOOP_HOME/bin
#flume
export FLUME_HOME=/usr/local/Flume
export PATH=$PATH:/FLUME_HOME/apache-flume-1.6.0-bin/bin
export CLASSPATH=$CLASSPATH:/FLUME_HOME/apache-flume-1.6.0-bin/lib/*
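One thing that stands out here is HADOOP_HOME pointing at /usr/local/hadoop/bin rather than the install root, which makes derived variables such as HADOOP_COMMON_LIB_NATIVE_DIR expand to /usr/local/hadoop/bin/lib/native; setup guides (including the one in the accepted answer below) usually point it at the install root instead. Assuming the shell has sourced ~/.bashrc, what actually resolves can be checked with:

echo $HADOOP_HOME   # typically the install root, e.g. /usr/local/hadoop
which hdfs          # confirms which hdfs binary is on the PATH
hdfs version        # confirms the client itself runs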
And finally, my /etc/hosts file is set up like this:
127.0.0.1 localhost
10.0.2.15 trz-VirtualBox
10.0.2.15 hadoopmaster
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
I am not currently using the added hadoopmaster entry; that was one of my attempts to fix this by not using localhost (it didn't work). trz-VirtualBox was originally 127.0.1.1, but I read that you should use your real IP address instead? Neither worked, so I am not sure. I posted all of these files because I do not know where the error is. I do not think it is a path issue (I hit plenty of those before reaching this step and was able to resolve them myself), so I am out of ideas. I have been at this for a number of hours now, so any help is appreciated. Thank you.
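Since the error message names both trz-VirtualBox/10.0.2.15 and localhost:9000, it is also worth confirming how those names actually resolve inside the VM (getent and ping are assumed to be available):

getent hosts localhost trz-VirtualBox   # shows what each name resolves to
ping -c 1 localhost                     # should answer from 127.0.0.1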
Accepted answer by Troy Zuroske
Found my answer by following this tutorial: http://codesfusion.blogspot.in/2013/10/setup-hadoop-2x-220-on-ubuntu.html
And then with these edits: https://stackoverflow.com/a/32041603/3878508
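Those two links walk through redoing the basic single-node setup (Java and Hadoop environment variables, the *-site.xml files, then formatting and starting HDFS). Roughly sketched, the final part of that process looks like this; the exact paths come from the linked tutorial, not from this question:

hdfs namenode -format                          # one-time format of a fresh NameNode (wipes existing HDFS data)
start-dfs.sh                                   # start NameNode, DataNode, SecondaryNameNode
jps                                            # verify the daemons are actually running
hdfs dfs -mkdir -p /user/Hadoop/twitter_data   # the original Step 3, now against a live NameNode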
Answered by Aaron
You have to set the correct ownership on the Hadoop directory:
您必须设置对 hadoop 目录的权限
sudo chown -R <user>:<group> /hadoop_path/hadoop
Then start the cluster and run the jps command to see the DataNode and NameNode processes.
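For example, on a typical single-node Ubuntu setup where Hadoop lives under /usr/local/hadoop and is run by a dedicated hduser in the hadoop group (both names are just a common convention, not taken from the question), that would look like:

sudo chown -R hduser:hadoop /usr/local/hadoop   # give the Hadoop user ownership of the install
start-dfs.sh                                    # start the HDFS daemons
jps                                             # NameNode, DataNode and SecondaryNameNode should appear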
Answered by Manish Mehra
I was getting a similar error. Upon checking, I found that my namenode service was in a stopped state.

sudo status hadoop-hdfs-namenode   - check the status of the namenode

If it is not in the started/running state:

sudo start hadoop-hdfs-namenode   - start the namenode service
Do keep in mind that it takes time for the namenode service to become fully functional after a restart, since it reads all the HDFS edits into memory. You can check the progress of this in /var/log/hadoop-hdfs/ using the command tail -f /var/log/hadoop-hdfs/{Latest log file}
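While the NameNode is replaying edits it stays in safe mode, so once the service reports as started, its progress can also be checked directly from the client side (assuming the hdfs tools are on the PATH):

hdfs dfsadmin -safemode get   # reports whether the NameNode is still in safe mode
hdfs dfsadmin -report         # basic cluster report once the NameNode responds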