Scala: Apache Spark error on startup

Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/30085779/


Apache Spark error on startup

java, scala, hadoop

Asked by Mateusz

I want to set up a single-node cluster in Apache Spark. I installed Java and Scala, downloaded the Spark build for Apache Hadoop 2.6, and unpacked it. When I try to start the spark-shell it throws an error, and I also have no access to sc in the shell. I compiled from source as well, but got the same error. What am I doing wrong?


Welcome to
      ____              __
     / __/__  ___ _____/ /__
    _\ \/ _ \/ _ `/ __/  '_/
   /___/ .__/\_,_/_/ /_/\_\   version 1.3.1
      /_/

Using Scala version 2.10.4 (Java HotSpot(TM) 64-Bit Server VM, Java 1.7.0_79)
Type in expressions to have them evaluated.
Type :help for more information.
java.net.BindException: Failed to bind to: ADMINISTRATOR.home/192.168.1.5:0: Service 'sparkDriver' failed after 16 retries!
 at org.jboss.netty.bootstrap.ServerBootstrap.bind(ServerBootstrap.java:272)
 at akka.remote.transport.netty.NettyTransport$$anonfun$listen.apply(NettyTransport.scala:393)
 at akka.remote.transport.netty.NettyTransport$$anonfun$listen.apply(NettyTransport.scala:389)
 at scala.util.Success$$anonfun$map.apply(Try.scala:206)
 at scala.util.Try$.apply(Try.scala:161)
 at scala.util.Success.map(Try.scala:206)
 at scala.concurrent.Future$$anonfun$map.apply(Future.scala:235)
 at scala.concurrent.Future$$anonfun$map.apply(Future.scala:235)
 at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:32)
 at akka.dispatch.BatchingExecutor$Batch$$anonfun$run.processBatch(BatchingExecutor.scala:67)
 at akka.dispatch.BatchingExecutor$Batch$$anonfun$run.apply$mcV$sp(BatchingExecutor.scala:82)
 at akka.dispatch.BatchingExecutor$Batch$$anonfun$run.apply(BatchingExecutor.scala:59)
 at akka.dispatch.BatchingExecutor$Batch$$anonfun$run.apply(BatchingExecutor.scala:59)
 at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
 at akka.dispatch.BatchingExecutor$Batch.run(BatchingExecutor.scala:58)
 at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:41)
 at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:393)
 at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
 at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
 at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
 at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

java.lang.NullPointerException
 at org.apache.spark.sql.SQLContext.<init>(SQLContext.scala:145)
 at org.apache.spark.sql.hive.HiveContext.<init>(HiveContext.scala:49)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance0(Native Method)
 at sun.reflect.NativeConstructorAccessorImpl.newInstance(Unknown Source)
 at sun.reflect.DelegatingConstructorAccessorImpl.newInstance(Unknown Source)
 at java.lang.reflect.Constructor.newInstance(Unknown Source)
 at org.apache.spark.repl.SparkILoop.createSQLContext(SparkILoop.scala:1027)
 at $iwC$$iwC.<init>(<console>:9)
 at $iwC.<init>(<console>:18)
 at <init>(<console>:20)
 at .<init>(<console>:24)
 at .<clinit>(<console>)
 at .<init>(<console>:7)
 at .<clinit>(<console>)
 at $print(<console>)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
 at java.lang.reflect.Method.invoke(Unknown Source)
 at org.apache.spark.repl.SparkIMain$ReadEvalPrint.call(SparkIMain.scala:1065)
 at org.apache.spark.repl.SparkIMain$Request.loadAndRun(SparkIMain.scala:1338)
 at org.apache.spark.repl.SparkIMain.loadAndRunReq(SparkIMain.scala:840)
 at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:871)
 at org.apache.spark.repl.SparkIMain.interpret(SparkIMain.scala:819)
 at org.apache.spark.repl.SparkILoop.reallyInterpret(SparkILoop.scala:856)
 at org.apache.spark.repl.SparkILoop.interpretStartingWith(SparkILoop.scala:901)
 at org.apache.spark.repl.SparkILoop.command(SparkILoop.scala:813)
 at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark.apply(SparkILoopInit.scala:130)
 at org.apache.spark.repl.SparkILoopInit$$anonfun$initializeSpark.apply(SparkILoopInit.scala:122)
 at org.apache.spark.repl.SparkIMain.beQuietDuring(SparkIMain.scala:324)
 at org.apache.spark.repl.SparkILoopInit$class.initializeSpark(SparkILoopInit.scala:122)
 at org.apache.spark.repl.SparkILoop.initializeSpark(SparkILoop.scala:64)
 at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process$$anonfun$apply$mcZ$sp.apply$mcV$sp(SparkILoop.scala:973)
 at org.apache.spark.repl.SparkILoopInit$class.runThunks(SparkILoopInit.scala:157)
 at org.apache.spark.repl.SparkILoop.runThunks(SparkILoop.scala:64)
 at org.apache.spark.repl.SparkILoopInit$class.postInitialization(SparkILoopInit.scala:106)
 at org.apache.spark.repl.SparkILoop.postInitialization(SparkILoop.scala:64)
 at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process.apply$mcZ$sp(SparkILoop.scala:990)
 at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process.apply(SparkILoop.scala:944)
 at org.apache.spark.repl.SparkILoop$$anonfun$org$apache$spark$repl$SparkILoop$$process.apply(SparkILoop.scala:944)
 at scala.tools.nsc.util.ScalaClassLoader$.savingContextLoader(ScalaClassLoader.scala:135)
 at org.apache.spark.repl.SparkILoop.org$apache$spark$repl$SparkILoop$$process(SparkILoop.scala:944)
 at org.apache.spark.repl.SparkILoop.process(SparkILoop.scala:1058)
 at org.apache.spark.repl.Main$.main(Main.scala:31)
 at org.apache.spark.repl.Main.main(Main.scala)
 at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
 at sun.reflect.NativeMethodAccessorImpl.invoke(Unknown Source)
 at sun.reflect.DelegatingMethodAccessorImpl.invoke(Unknown Source)
 at java.lang.reflect.Method.invoke(Unknown Source)
 at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:569)
 at org.apache.spark.deploy.SparkSubmit$.doRunMain(SparkSubmit.scala:166)
 at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:189)
 at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:110)
 at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)

<console>:10: error: not found: value sqlContext
       import sqlContext.implicits._
              ^
<console>:10: error: not found: value sqlContext
       import sqlContext.sql
              ^

scala> 

Answered by mike

I've just begun to learn Spark, and I wanted to run it in local mode. I ran into the same problem as yours. The problem:


java.net.BindException: Failed to bind to: /124.232.132.94:0: Service 'sparkDriver' failed after 16 retries!


Since I only wanted to run Spark in local mode, I found a solution to this problem: edit the file spark-env.sh (you can find it in $SPARK_HOME/conf/) and add the following lines to it:


export SPARK_MASTER_IP=127.0.0.1
export SPARK_LOCAL_IP=127.0.0.1

After that, Spark runs fine for me in local mode. I hope this helps! :)

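For readers who hit the same BindException from their own driver program rather than from spark-shell, the equivalent setting can also be passed programmatically. The sketch below is not from the original answer and assumes a Spark 1.x-style standalone app; the app name and the spark.driver.host value are illustrative.

import org.apache.spark.{SparkConf, SparkContext}

object LocalSparkTest {
  def main(args: Array[String]): Unit = {
    // Pin the driver to the loopback address and run in local mode,
    // mirroring the SPARK_MASTER_IP / SPARK_LOCAL_IP exports above.
    val conf = new SparkConf()
      .setAppName("local-test")               // illustrative app name
      .setMaster("local[*]")
      .set("spark.driver.host", "127.0.0.1")
    val sc = new SparkContext(conf)
    println(sc.parallelize(1 to 10).sum())    // quick sanity check that sc works
    sc.stop()
  }
}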

Answered by kjosh

The above solution did not work for me. I followed the steps in: How to start Spark applications on Windows (aka Why Spark fails with NullPointerException)?


and set the HADOOP_HOME environment variable in the system environment variables. That worked for me.

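If setting the environment variable is not convenient, the same hint can be given from code by pointing hadoop.home.dir at the directory whose bin\ folder contains winutils.exe before Spark initializes. This is only a sketch, not part of the original answer; C:\hadoop is an assumed location.

import org.apache.spark.{SparkConf, SparkContext}

object WindowsLocalSpark {
  def main(args: Array[String]): Unit = {
    // Equivalent of setting HADOOP_HOME on Windows: hadoop.home.dir must point
    // at the folder whose bin\ subdirectory holds winutils.exe (path is hypothetical).
    System.setProperty("hadoop.home.dir", "C:\\hadoop")

    val sc = new SparkContext(new SparkConf().setAppName("winutils-test").setMaster("local[*]"))
    println(sc.parallelize(Seq(1, 2, 3)).count())   // sanity check
    sc.stop()
  }
}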

Answered by deepdive

It might be an ownership issue as well:


hadoop fs -chown -R deepdive:root /user/deepdive/

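You can confirm the owner and group of the directory with hadoop fs -ls /user/deepdive/ before and after running the chown.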