Hadoop cluster setup - java.net.ConnectException: Connection refused

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. If you reuse or share it, you must do so under the same license and attribute it to the original authors (not me). Original source: http://stackoverflow.com/questions/28661285/

Hadoop cluster setup - java.net.ConnectException: Connection refused

Tags: java, hadoop, configuration, connectexception

Asked by Marta Karas

I want to set up a hadoop cluster in pseudo-distributed mode. I managed to perform all the setup steps, including starting up a NameNode, DataNode, JobTracker and TaskTracker on my machine.

Then I tried to run some example programs and hit the java.net.ConnectException: Connection refused error. I stepped back to the very first steps of running some operations in standalone mode and faced the same problem.

I even triple-checked all the installation steps and have no idea how to fix it. (I am new to Hadoop and a beginner Ubuntu user, so please take that into account when providing any guide or tip.)

This is the error output I keep receiving:

hduser@marta-komputer:/usr/local/hadoop$ bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-2.6.0.jar grep input output 'dfs[a-z.]+'
15/02/22 18:23:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
15/02/22 18:23:04 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
java.net.ConnectException: Call From marta-komputer/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
    at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
    at sun.reflect.NativeConstructorAccessorImpl.newInstance(NativeConstructorAccessorImpl.java:62)
    at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(DelegatingConstructorAccessorImpl.java:45)
    at java.lang.reflect.Constructor.newInstance(Constructor.java:408)
    at org.apache.hadoop.net.NetUtils.wrapWithMessage(NetUtils.java:791)
    at org.apache.hadoop.net.NetUtils.wrapException(NetUtils.java:731)
    at org.apache.hadoop.ipc.Client.call(Client.java:1472)
    at org.apache.hadoop.ipc.Client.call(Client.java:1399)
    at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:232)
    at com.sun.proxy.$Proxy9.delete(Unknown Source)
    at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.delete(ClientNamenodeProtocolTranslatorPB.java:521)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:187)
    at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:102)
    at com.sun.proxy.$Proxy10.delete(Unknown Source)
    at org.apache.hadoop.hdfs.DFSClient.delete(DFSClient.java:1929)
    at org.apache.hadoop.hdfs.DistributedFileSystem.doCall(DistributedFileSystem.java:638)
    at org.apache.hadoop.hdfs.DistributedFileSystem.doCall(DistributedFileSystem.java:634)
    at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
    at org.apache.hadoop.hdfs.DistributedFileSystem.delete(DistributedFileSystem.java:634)
    at org.apache.hadoop.examples.Grep.run(Grep.java:95)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.hadoop.examples.Grep.main(Grep.java:101)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.hadoop.util.ProgramDriver$ProgramDescription.invoke(ProgramDriver.java:71)
    at org.apache.hadoop.util.ProgramDriver.run(ProgramDriver.java:144)
    at org.apache.hadoop.examples.ExampleDriver.main(ExampleDriver.java:74)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:483)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: java.net.ConnectException: Connection refused
    at sun.nio.ch.SocketChannelImpl.checkConnect(Native Method)
    at sun.nio.ch.SocketChannelImpl.finishConnect(SocketChannelImpl.java:716)
    at org.apache.hadoop.net.SocketIOWithTimeout.connect(SocketIOWithTimeout.java:206)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:530)
    at org.apache.hadoop.net.NetUtils.connect(NetUtils.java:494)
    at org.apache.hadoop.ipc.Client$Connection.setupConnection(Client.java:607)
    at org.apache.hadoop.ipc.Client$Connection.setupIOstreams(Client.java:705)
    at org.apache.hadoop.ipc.Client$Connection.access00(Client.java:368)
    at org.apache.hadoop.ipc.Client.getConnection(Client.java:1521)
    at org.apache.hadoop.ipc.Client.call(Client.java:1438)
    ... 32 more


etc/hadoop/hadoop-env.sh file:

# The java implementation to use.
export JAVA_HOME=/usr/lib/jvm/java-8-oracle

# The jsvc implementation to use. Jsvc is required to run secure datanodes
# that bind to privileged ports to provide authentication of data transfer
# protocol.  Jsvc is not required if SASL is configured for authentication of
# data transfer protocol using non-privileged ports.
#export JSVC_HOME=${JSVC_HOME}

export HADOOP_CONF_DIR=${HADOOP_CONF_DIR:-"/etc/hadoop"}

# Extra Java CLASSPATH elements.  Automatically insert capacity-scheduler.
for f in $HADOOP_HOME/contrib/capacity-scheduler/*.jar; do
  if [ "$HADOOP_CLASSPATH" ]; then
    export HADOOP_CLASSPATH=$HADOOP_CLASSPATH:$f
  else
    export HADOOP_CLASSPATH=$f
  fi
done

# The maximum amount of heap to use, in MB. Default is 1000.
#export HADOOP_HEAPSIZE=
#export HADOOP_NAMENODE_INIT_HEAPSIZE=""

# Extra Java runtime options.  Empty by default.
export HADOOP_OPTS="$HADOOP_OPTS -Djava.net.preferIPv4Stack=true"

# Command specific options appended to HADOOP_OPTS when specified
export HADOOP_NAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_NAMENODE_OPTS"
export HADOOP_DATANODE_OPTS="-Dhadoop.security.logger=ERROR,RFAS $HADOOP_DATANODE_OPTS"

export HADOOP_SECONDARYNAMENODE_OPTS="-Dhadoop.security.logger=${HADOOP_SECURITY_LOGGER:-INFO,RFAS} -Dhdfs.audit.logger=${HDFS_AUDIT_LOGGER:-INFO,NullAppender} $HADOOP_SECONDARYNAMENODE_OPTS"

export HADOOP_NFS3_OPTS="$HADOOP_NFS3_OPTS"
export HADOOP_PORTMAP_OPTS="-Xmx512m $HADOOP_PORTMAP_OPTS"

# The following applies to multiple commands (fs, dfs, fsck, distcp etc)
export HADOOP_CLIENT_OPTS="-Xmx512m $HADOOP_CLIENT_OPTS"
#HADOOP_JAVA_PLATFORM_OPTS="-XX:-UsePerfData $HADOOP_JAVA_PLATFORM_OPTS"

# On secure datanodes, user to run the datanode as after dropping privileges.
# This **MUST** be uncommented to enable secure HDFS if using privileged ports
# to provide authentication of data transfer protocol.  This **MUST NOT** be
# defined if SASL is configured for authentication of data transfer protocol
# using non-privileged ports.
export HADOOP_SECURE_DN_USER=${HADOOP_SECURE_DN_USER}

# Where log files are stored.  $HADOOP_HOME/logs by default.
#export HADOOP_LOG_DIR=${HADOOP_LOG_DIR}/$USER

# Where log files are stored in the secure data environment.
export HADOOP_SECURE_DN_LOG_DIR=${HADOOP_LOG_DIR}/${HADOOP_HDFS_USER}

# HDFS Mover specific parameters
###
# Specify the JVM options to be used when starting the HDFS Mover.
# These options will be appended to the options specified as HADOOP_OPTS
# and therefore may override any similar flags set in HADOOP_OPTS
#
# export HADOOP_MOVER_OPTS=""

###
# Advanced Users Only!
###

# The directory where pid files are stored. /tmp by default.
# NOTE: this should be set to a directory that can only be written to by 
#       the user that will run the hadoop daemons.  Otherwise there is the
#       potential for a symlink attack.
export HADOOP_PID_DIR=${HADOOP_PID_DIR}
export HADOOP_SECURE_DN_PID_DIR=${HADOOP_PID_DIR}

# A string representing this instance of hadoop. $USER by default.
export HADOOP_IDENT_STRING=$USER

.bashrc file, Hadoop-related fragment:

# -- HADOOP ENVIRONMENT VARIABLES START -- #
export JAVA_HOME=/usr/lib/jvm/java-8-oracle
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
# -- HADOOP ENVIRONMENT VARIABLES END -- #

/usr/local/hadoop/etc/hadoop/core-site.xml file:

<configuration>

<property>
  <name>hadoop.tmp.dir</name>
  <value>/usr/local/hadoop_tmp</value>
  <description>A base for other temporary directories.</description>
</property>

<property>
<name>fs.default.name</name>
<value>hdfs://localhost:9000</value>
</property>

</configuration>

/usr/local/hadoop/etc/hadoop/hdfs-site.xml file:

<configuration>
<property>
      <name>dfs.replication</name>
      <value>1</value>
 </property>
 <property>
      <name>dfs.namenode.name.dir</name>
      <value>file:/usr/local/hadoop_tmp/hdfs/namenode</value>
 </property>
 <property>
      <name>dfs.datanode.data.dir</name>
      <value>file:/usr/local/hadoop_tmp/hdfs/datanode</value>
 </property>
</configuration>

/usr/local/hadoop/etc/hadoop/yarn-site.xml file:

<configuration> 
<property>
      <name>yarn.nodemanager.aux-services</name>
      <value>mapreduce_shuffle</value>
</property>
<property>
      <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
      <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
</configuration>

/usr/local/hadoop/etc/hadoop/mapred-site.xml file:

<configuration>
<property>
      <name>mapreduce.framework.name</name>
      <value>yarn</value>
</property>
</configuration>

Running hduser@marta-komputer:/usr/local/hadoop$ bin/hdfs namenode -format results in the following output (I substitute some parts of it with (...)):

hduser@marta-komputer:/usr/local/hadoop$ bin/hdfs namenode -format
15/02/22 18:50:47 INFO namenode.NameNode: STARTUP_MSG: 
/************************************************************
STARTUP_MSG: Starting NameNode
STARTUP_MSG:   host = marta-komputer/127.0.1.1
STARTUP_MSG:   args = [-format]
STARTUP_MSG:   version = 2.6.0
STARTUP_MSG:   classpath = /usr/local/hadoop/etc/hadoop:/usr/local/hadoop/share/hadoop/common/lib/htrace-core-3.0.4.jar:/usr/local/hadoop/share/hadoop/common/lib/commons-cli (...)2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-common-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-hs-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-app-2.6.0.jar:/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-client-shuffle-2.6.0.jar:/usr/local/hadoop/contrib/capacity-scheduler/*.jar
STARTUP_MSG:   build = https://git-wip-us.apache.org/repos/asf/hadoop.git -r e3496499ecb8d220fba99dc5ed4c99c8f9e33bb1; compiled by 'jenkins' on 2014-11-13T21:10Z
STARTUP_MSG:   java = 1.8.0_31
************************************************************/
15/02/22 18:50:47 INFO namenode.NameNode: registered UNIX signal handlers for [TERM, HUP, INT]
15/02/22 18:50:47 INFO namenode.NameNode: createNameNode [-format]
15/02/22 18:50:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Formatting using clusterid: CID-0b65621a-eab3-47a4-bfd0-62b5596a940c
15/02/22 18:50:48 INFO namenode.FSNamesystem: No KeyProvider found.
15/02/22 18:50:48 INFO namenode.FSNamesystem: fsLock is fair:true
15/02/22 18:50:48 INFO blockmanagement.DatanodeManager: dfs.block.invalidate.limit=1000
15/02/22 18:50:48 INFO blockmanagement.DatanodeManager: dfs.namenode.datanode.registration.ip-hostname-check=true
15/02/22 18:50:48 INFO blockmanagement.BlockManager: dfs.namenode.startup.delay.block.deletion.sec is set to 000:00:00:00.000
15/02/22 18:50:48 INFO blockmanagement.BlockManager: The block deletion will start around 2015 Feb 22 18:50:48
15/02/22 18:50:48 INFO util.GSet: Computing capacity for map BlocksMap
15/02/22 18:50:48 INFO util.GSet: VM type       = 64-bit
15/02/22 18:50:48 INFO util.GSet: 2.0% max memory 889 MB = 17.8 MB
15/02/22 18:50:48 INFO util.GSet: capacity      = 2^21 = 2097152 entries
15/02/22 18:50:48 INFO blockmanagement.BlockManager: dfs.block.access.token.enable=false
15/02/22 18:50:48 INFO blockmanagement.BlockManager: defaultReplication         = 1
15/02/22 18:50:48 INFO blockmanagement.BlockManager: maxReplication             = 512
15/02/22 18:50:48 INFO blockmanagement.BlockManager: minReplication             = 1
15/02/22 18:50:48 INFO blockmanagement.BlockManager: maxReplicationStreams      = 2
15/02/22 18:50:48 INFO blockmanagement.BlockManager: shouldCheckForEnoughRacks  = false
15/02/22 18:50:48 INFO blockmanagement.BlockManager: replicationRecheckInterval = 3000
15/02/22 18:50:48 INFO blockmanagement.BlockManager: encryptDataTransfer        = false
15/02/22 18:50:48 INFO blockmanagement.BlockManager: maxNumBlocksToLog          = 1000
15/02/22 18:50:48 INFO namenode.FSNamesystem: fsOwner             = hduser (auth:SIMPLE)
15/02/22 18:50:48 INFO namenode.FSNamesystem: supergroup          = supergroup
15/02/22 18:50:48 INFO namenode.FSNamesystem: isPermissionEnabled = true
15/02/22 18:50:48 INFO namenode.FSNamesystem: HA Enabled: false
15/02/22 18:50:48 INFO namenode.FSNamesystem: Append Enabled: true
15/02/22 18:50:48 INFO util.GSet: Computing capacity for map INodeMap
15/02/22 18:50:48 INFO util.GSet: VM type       = 64-bit
15/02/22 18:50:48 INFO util.GSet: 1.0% max memory 889 MB = 8.9 MB
15/02/22 18:50:48 INFO util.GSet: capacity      = 2^20 = 1048576 entries
15/02/22 18:50:48 INFO namenode.NameNode: Caching file names occuring more than 10 times
15/02/22 18:50:48 INFO util.GSet: Computing capacity for map cachedBlocks
15/02/22 18:50:48 INFO util.GSet: VM type       = 64-bit
15/02/22 18:50:48 INFO util.GSet: 0.25% max memory 889 MB = 2.2 MB
15/02/22 18:50:48 INFO util.GSet: capacity      = 2^18 = 262144 entries
15/02/22 18:50:48 INFO namenode.FSNamesystem: dfs.namenode.safemode.threshold-pct = 0.9990000128746033
15/02/22 18:50:48 INFO namenode.FSNamesystem: dfs.namenode.safemode.min.datanodes = 0
15/02/22 18:50:48 INFO namenode.FSNamesystem: dfs.namenode.safemode.extension     = 30000
15/02/22 18:50:48 INFO namenode.FSNamesystem: Retry cache on namenode is enabled
15/02/22 18:50:48 INFO namenode.FSNamesystem: Retry cache will use 0.03 of total heap and retry cache entry expiry time is 600000 millis
15/02/22 18:50:48 INFO util.GSet: Computing capacity for map NameNodeRetryCache
15/02/22 18:50:48 INFO util.GSet: VM type       = 64-bit
15/02/22 18:50:48 INFO util.GSet: 0.029999999329447746% max memory 889 MB = 273.1 KB
15/02/22 18:50:48 INFO util.GSet: capacity      = 2^15 = 32768 entries
15/02/22 18:50:48 INFO namenode.NNConf: ACLs enabled? false
15/02/22 18:50:48 INFO namenode.NNConf: XAttrs enabled? true
15/02/22 18:50:48 INFO namenode.NNConf: Maximum size of an xattr: 16384
Re-format filesystem in Storage Directory /usr/local/hadoop_tmp/hdfs/namenode ? (Y or N) Y
15/02/22 18:50:50 INFO namenode.FSImage: Allocated new BlockPoolId: BP-948369552-127.0.1.1-1424627450316
15/02/22 18:50:50 INFO common.Storage: Storage directory /usr/local/hadoop_tmp/hdfs/namenode has been successfully formatted.
15/02/22 18:50:50 INFO namenode.NNStorageRetentionManager: Going to retain 1 images with txid >= 0
15/02/22 18:50:50 INFO util.ExitUtil: Exiting with status 0
15/02/22 18:50:50 INFO namenode.NameNode: SHUTDOWN_MSG: 
/************************************************************
SHUTDOWN_MSG: Shutting down NameNode at marta-komputer/127.0.1.1
************************************************************/

Starting dfs and yarn results in the following output:

hduser@marta-komputer:/usr/local/hadoop$ start-dfs.sh
15/02/22 18:53:05 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Starting namenodes on [localhost]
localhost: starting namenode, logging to /usr/local/hadoop/logs/hadoop-hduser-namenode-marta-komputer.out
localhost: starting datanode, logging to /usr/local/hadoop/logs/hadoop-hduser-datanode-marta-komputer.out
Starting secondary namenodes [0.0.0.0]
0.0.0.0: starting secondarynamenode, logging to /usr/local/hadoop/logs/hadoop-hduser-secondarynamenode-marta-komputer.out
15/02/22 18:53:20 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
hduser@marta-komputer:/usr/local/hadoop$ start-yarn.sh
starting yarn daemons
starting resourcemanager, logging to /usr/local/hadoop/logs/yarn-hduser-resourcemanager-marta-komputer.out
localhost: starting nodemanager, logging to /usr/local/hadoop/logs/yarn-hduser-nodemanager-marta-komputer.out

Calling jps shortly after that gives:

hduser@marta-komputer:/usr/local/hadoop$ jps
11696 ResourceManager
11842 NodeManager
11171 NameNode
11523 SecondaryNameNode
12167 Jps

netstat output:

hduser@marta-komputer:/usr/local/hadoop$ sudo netstat -lpten | grep java
tcp        0      0 0.0.0.0:8088            0.0.0.0:*               LISTEN      1001       690283      11696/java      
tcp        0      0 0.0.0.0:42745           0.0.0.0:*               LISTEN      1001       684574      11842/java      
tcp        0      0 0.0.0.0:13562           0.0.0.0:*               LISTEN      1001       680955      11842/java      
tcp        0      0 0.0.0.0:8030            0.0.0.0:*               LISTEN      1001       684531      11696/java      
tcp        0      0 0.0.0.0:8031            0.0.0.0:*               LISTEN      1001       684524      11696/java      
tcp        0      0 0.0.0.0:8032            0.0.0.0:*               LISTEN      1001       680879      11696/java      
tcp        0      0 0.0.0.0:8033            0.0.0.0:*               LISTEN      1001       687392      11696/java      
tcp        0      0 0.0.0.0:8040            0.0.0.0:*               LISTEN      1001       680951      11842/java      
tcp        0      0 127.0.0.1:9000          0.0.0.0:*               LISTEN      1001       687242      11171/java      
tcp        0      0 0.0.0.0:8042            0.0.0.0:*               LISTEN      1001       680956      11842/java      
tcp        0      0 0.0.0.0:50090           0.0.0.0:*               LISTEN      1001       690252      11523/java      
tcp        0      0 0.0.0.0:50070           0.0.0.0:*               LISTEN      1001       687239      11171/java  

/etc/hosts file:

127.0.0.1       localhost
127.0.1.1       marta-komputer

# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

====================================================

UPDATE 1.

I updated core-site.xml and now I have:

<property>
<name>fs.default.name</name>
<value>hdfs://marta-komputer:9000</value>
</property>

but I keep receiving the error, which now starts as:

15/03/01 00:59:34 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032
java.net.ConnectException: Call From marta-komputer.home/192.168.1.8 to marta-komputer:9000 failed on connection exception:     java.net.ConnectException: Connection refused; For more details see:    http://wiki.apache.org/hadoop/ConnectionRefused

I also notice that telnet localhost 9000 is not working:

hduser@marta-komputer:~$ telnet localhost 9000
Trying 127.0.0.1...
telnet: Unable to connect to remote host: Connection refused
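
For reference, one way to cross-check where the NameNode is actually listening and how the relevant hostnames resolve is:

sudo netstat -lpten | grep 9000         # which address (if any) the NameNode RPC port is bound to
getent hosts localhost marta-komputer   # how these names resolve according to /etc/hosts
jps                                     # whether the NameNode process is running at all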

Answer by SubOptimal

From the netstat output you can see the process is listening on address 127.0.0.1:

tcp        0      0 127.0.0.1:9000          0.0.0.0:*  ...

From the exception message you can see that it tries to connect to address 127.0.1.1:

java.net.ConnectException: Call From marta-komputer/127.0.1.1 to localhost:9000 failed ...

Further down in the exception it is mentioned:

For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused

On this page you find:

Check that there isn't an entry for your hostname mapped to 127.0.0.1 or 127.0.1.1 in /etc/hosts (Ubuntu is notorious for this)

So the conclusion is to remove this line from your /etc/hosts:

127.0.1.1       marta-komputer
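
As a sketch of what the adjusted file might look like (assuming the machine's LAN address is 192.168.1.8, as it appears in the question's update; substitute your own), /etc/hosts could either drop the 127.0.1.1 line entirely or map the hostname to the real address, followed by an HDFS restart:

# /etc/hosts (sketch)
127.0.0.1       localhost
192.168.1.8     marta-komputer

# then restart HDFS and re-test:
stop-dfs.sh && start-dfs.sh
telnet localhost 9000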

Answer by Tinku

Check your firewall settings and set:

  <property>
  <name>fs.default.name</name>
  <value>hdfs://MachineName:9000</value>
  </property>

replacing localhost with your machine name.
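
As a side note, fs.default.name is the deprecated name for fs.defaultFS in Hadoop 2.x; either works, but after editing it you can check which value the client actually picks up, for example with:

hdfs getconf -confKey fs.defaultFS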

Answer by Paul H.

Your issue is a very interesting one. Hadoop setup can be frustrating at times due to the complexity of the system and the many moving parts involved. I think the issue you faced is definitely a firewall one. My hadoop cluster has a similar setup. With a firewall rule added with the command:

 sudo iptables -A INPUT -p tcp --dport 9000 -j REJECT

I'm able to see the exact issue:

15/03/02 23:46:10 INFO client.RMProxy: Connecting to ResourceManager at  /0.0.0.0:8032
java.net.ConnectException: Call From mybox/127.0.1.1 to localhost:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see:  http://wiki.apache.org/hadoop/ConnectionRefused
     at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)

You can verify your firewall settings with the command:

/usr/local/hadoop/etc$ sudo iptables -L
Chain INPUT (policy ACCEPT)
target     prot opt source               destination         
REJECT     tcp  --  anywhere             anywhere             tcp dpt:9000 reject-with icmp-port-unreachable

Chain FORWARD (policy ACCEPT)
target     prot opt source               destination         

Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination   

Once the suspicious rule is identified, it could be deleted with a command like:

 sudo iptables -D INPUT -p tcp --dport 9000 -j REJECT 

Now, the connection should go through.
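
On Ubuntu the firewall is often managed through ufw rather than raw iptables, so an equivalent check and fix might look like:

sudo ufw status verbose    # is the firewall active, and what does it block?
sudo ufw allow 9000/tcp    # explicitly allow the NameNode RPC port if needed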

Answer by Prachil Tambe

In my experience,

15/02/22 18:23:04 WARN util.NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable

You may have a 64-bit OS with a 32-bit Hadoop installation. Refer to this.

java.net.ConnectException: Call From marta-komputer/127.0.1.1 to
localhost:9000 failed on connection exception: java.net.ConnectException: 
connection refused; For more details see:   
http://wiki.apache.org/hadoop/ConnectionRefused

This problem relates to your SSH public key authorization. Please provide details about your SSH setup.

Please refer to this link to check the complete steps.

Also provide info on whether

cat $HOME/.ssh/authorized_keys

returns any result or not.
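
If passwordless SSH to localhost is not set up yet, a typical way to do it for the hadoop user is roughly:

ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa        # key pair with an empty passphrase
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
ssh localhost                                   # should now log in without a password prompt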

Answer by Rajesh N

hduser@marta-komputer:/usr/local/hadoop$ jps
11696 ResourceManager
11842 NodeManager
11171 NameNode
11523 SecondaryNameNode
12167 Jps

Where is your DataNode? The Connection refused problem might also be due to there being no active DataNode. Check the DataNode logs for issues.
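
For example, the DataNode log written by start-dfs.sh above could be inspected with something like:

tail -n 100 /usr/local/hadoop/logs/hadoop-hduser-datanode-*.log
# a frequent cause of a DataNode failing to start is a clusterID mismatch after re-formatting the NameNode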

UPDATED:

For this error:

15/03/01 00:59:34 INFO client.RMProxy: Connecting to ResourceManager at /0.0.0.0:8032 java.net.ConnectException: Call From marta-komputer.home/192.168.1.8 to marta-komputer:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

Add these lines to yarn-site.xml:

<property>
<name>yarn.resourcemanager.address</name>
<value>192.168.1.8:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>192.168.1.8:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>192.168.1.8:8031</value>
</property>

Restart the hadoop processes.
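
For instance, the restart can be done with the scripts already used earlier:

stop-yarn.sh && stop-dfs.sh
start-dfs.sh && start-yarn.sh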

Answer by Desly Peter

Hi, edit your conf/core-site.xml and change localhost to 0.0.0.0. Use the conf below. That should work.

<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://0.0.0.0:9000</value>
  </property>
</configuration>
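
After this change the NameNode should bind to all interfaces, which can be confirmed with the netstat command used earlier (expect 0.0.0.0:9000 rather than 127.0.0.1:9000):

sudo netstat -lpten | grep 9000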

Answer by leoflower

I had a similar problem to the OP. As the terminal output suggested, I went to http://wiki.apache.org/hadoop/ConnectionRefused

I tried to change my /etc/hosts file as suggested there, i.e. removing the 127.0.1.1 entry, but as the OP noted, that creates another error.

So in the end, I left it as is. The following is my /etc/hosts:

127.0.0.1       localhost.localdomain   localhost
127.0.1.1       linux
# The following lines are desirable for IPv6 capable hosts
::1     ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

In the end, I found that my namenode did not start correctly, i.e. when you type sudo netstat -lpten | grep java in the terminal, there will not be any JVM process running (listening) on port 9000.

So I made two directories, for the namenode and the datanode respectively (if you have not done so already). You don't have to put them where I put mine; adjust the paths based on your hadoop directory, i.e.:

mkdir -p /home/hadoopuser/hadoop-2.6.2/hdfs/namenode
mkdir -p /home/hadoopuser/hadoop-2.6.2/hdfs/datanode

I reconfigured my hdfs-site.xml.

<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
   <property>
        <name>dfs.namenode.name.dir</name>
        <value>file:/home/hadoopuser/hadoop-2.6.2/hdfs/namenode</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>file:/home/hadoopuser/hadoop-2.6.2/hdfs/datanode</value>
    </property>
</configuration>

In a terminal, stop your hdfs and yarn with the scripts stop-dfs.sh and stop-yarn.sh. They are located in your hadoop directory's sbin. In my case, it's /home/hadoopuser/hadoop-2.6.2/sbin/.

Then start your hdfs and yarn with the scripts start-dfs.sh and start-yarn.sh. After they are started, type jps in your terminal to see if your JVM processes are running correctly. It should show the following.

15678 NodeManager
14982 NameNode
15347 SecondaryNameNode
23814 Jps
15119 DataNode
15548 ResourceManager

Then try netstat again to see if your namenode is listening on port 9000:

sudo netstat -lpten | grep java

If you successfully set up the namenode, you should see the following in your terminal output.

tcp 0 0 127.0.0.1:9000 0.0.0.0:* LISTEN 1001 175157 14982/java

Then try the command hdfs dfs -mkdir /user/hadoopuser. If this command executes successfully, you can now list your HDFS user directory with hdfs dfs -ls /user.
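
For example, run from a shell (the /user/hadoopuser path is just this answer's example; use your own user name):

hdfs dfs -mkdir -p /user/hadoopuser   # -p also creates the parent /user directory if it is missing
hdfs dfs -ls /user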

Answer by nikk

Make sure HDFS is online. Start it with $HADOOP_HOME/sbin/start-dfs.sh. Once you do that, your test with telnet localhost 9001 should work.
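
Note that the port to test should match the one configured in fs.default.name; in this question that is 9000, so the corresponding check would presumably be:

telnet localhost 9000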

Answer by Abhash Kumar

For me, these steps worked:

  1. stop-all.sh
  2. hadoop namenode -format
  3. start-all.sh
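
Keep in mind that hadoop namenode -format wipes the existing HDFS metadata, so this is only appropriate on a fresh or disposable cluster. As a sketch, assuming the stock scripts are on the PATH:

stop-all.sh
hadoop namenode -format    # WARNING: erases HDFS metadata; existing HDFS data becomes inaccessible
start-all.sh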

Answer by nataku981

For me, it was that I could not cluster my ZooKeeper properly:

hdfs haadmin -getServiceState 1
active

hdfs haadmin -getServiceState 2
active

My hadoop-hdfs-zkfc-[hostname].log showed:

2017-04-14 11:46:55,351 WARN org.apache.hadoop.ha.HealthMonitor: Transport-level exception trying to monitor health of NameNode at HOST/192.168.1.55:9000: java.net.ConnectException: Connection refused Call From HOST/192.168.1.55 to HOST:9000 failed on connection exception: java.net.ConnectException: Connection refused; For more details see: http://wiki.apache.org/hadoop/ConnectionRefused

Solution:

hdfs-site.xml:
  <property>
    <name>dfs.namenode.rpc-bind-host</name>
    <value>0.0.0.0</value>
  </property>

Before:

netstat -plunt

tcp        0      0 192.168.1.55:9000        0.0.0.0:*               LISTEN      13133/java

nmap localhost -p 9000

Starting Nmap 6.40 ( http://nmap.org ) at 2017-04-14 12:15 EDT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000047s latency).
Other addresses for localhost (not scanned): 127.0.0.1
PORT     STATE  SERVICE
9000/tcp closed cslistener

After:

netstat -plunt
tcp        0      0 0.0.0.0:9000            0.0.0.0:*               LISTEN      14372/java

nmap localhost -p 9000

Starting Nmap 6.40 ( http://nmap.org ) at 2017-04-14 12:28 EDT
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000039s latency).
Other addresses for localhost (not scanned): 127.0.0.1
PORT     STATE SERVICE
9000/tcp open  cslistener