java - Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same CC BY-SA terms and attribute it to the original authors (not me), citing the original address: http://stackoverflow.com/questions/38118572/

Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources

java, hadoop, apache-spark

Asked by Eddy

I'm trying to run the Spark examples from Eclipse and getting this generic error: Initial job has not accepted any resources; check your cluster UI to ensure that workers are registered and have sufficient resources.

The version I have is spark-1.6.2-bin-hadoop2.6. I started Spark using the ./sbin/start-master.sh command from a shell, and set my SparkConf like this:

SparkConf conf = new SparkConf().setAppName("Simple Application");
conf.setMaster("spark://My-Mac-mini.local:7077");

I'm not bringing any other code here because this error pops up with any of the examples I'm running. The machine runs Mac OS X and I'm pretty sure it has enough resources to run the simplest examples.

What am I missing?

Accepted answer by Knight71

The error indicates that your cluster has insufficient resources for the current job. Since you have not started the slaves (i.e. the workers), the cluster has no resources to allocate to your job. Starting the slaves will fix it.

`start-slave.sh <spark://master-ip:7077>`
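
For a single-machine test, the whole sequence looks something like this (a sketch assuming the standalone master runs on the default port 7077; substitute the master URL that ./sbin/start-master.sh logs, or the one shown in the master web UI, which listens on port 8080 by default):

# on the master machine
./sbin/start-master.sh

# on each worker machine (or the same machine, for a local test)
./sbin/start-slave.sh spark://My-Mac-mini.local:7077

Once the worker has registered, it shows up under "Workers" in the master UI and the stuck job gets scheduled.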

Answered by Praveen Kumar K S

Solution to your problem

Reason

  1. The Spark master doesn't have any resources, such as a worker or slave node, allocated to execute the job.

Fix

  1. You have to start the slave node by connecting it to the master node, like this: /SPARK_HOME/sbin> ./start-slave.sh spark://localhost:7077 (if your master runs on your local node).

Conclusion

  1. Start your master node and your slave node(s) before running spark-submit, so that enough resources are allocated to execute the job.

Alternative way

  1. You can instead make the necessary changes in the spark-env.sh file (see the sketch below), though this is not recommended.
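
For reference, the kind of spark-env.sh changes that item alludes to look roughly like this (a sketch only; the variable names come from Spark's conf/spark-env.sh.template, and SPARK_MASTER_IP is the Spark 1.x-era name for what later became SPARK_MASTER_HOST):

# conf/spark-env.sh -- example only; the answer above advises against this route
SPARK_MASTER_IP=My-Mac-mini.local   # address the master binds to
SPARK_WORKER_CORES=2                # cores each worker offers to jobs
SPARK_WORKER_MEMORY=2g              # memory each worker offers to jobs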

Answered by Maxime Maillot

I had the same problem, and it was because the workers could not communicate with the driver.

You need to set spark.driver.port (and open that port on your driver machine), spark.driver.host, and spark.driver.bindAddress in your spark-submit from the driver.
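
On the command line this could look like the following (a sketch; the port number, placeholder host names, class, and jar are illustrative, and note that spark.driver.bindAddress only exists in Spark 2.1 and later, so it will not work on the asker's 1.6.2):

spark-submit \
  --master spark://master-host:7077 \
  --conf spark.driver.host=driver-ip \
  --conf spark.driver.bindAddress=driver-ip \
  --conf spark.driver.port=10027 \
  --class your.main.Class your-app.jar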

Answered by dancelikefish

If you try to run your application from an IDE, and you have free resources on your workers, you need to do this:

1) First of all, configure the master and worker Spark nodes.

2) Specify the driver (PC) configuration so that the computed values can be returned from the workers.

SparkConf conf = new SparkConf()
            .setAppName("Test spark")
            .setMaster("spark://ip of your master node:port of your master node")
            // Make all communication ports static. This is not necessary if you
            // disabled firewalls or if your nodes are on a local network; otherwise
            // you must open these ports in your firewall settings.
            .set("spark.blockManager.port", "10025")
            .set("spark.driver.blockManager.port", "10026")
            .set("spark.driver.port", "10027")
            .set("spark.cores.max", "12")
            .set("spark.executor.memory", "2g")
            .set("spark.driver.host", "ip of your driver (PC)"); // necessary

Answered by river

Try using "spark://127.0.0.1:7077" as the master address instead of the *.local name. Sometimes Java is not able to resolve .local addresses, for reasons I don't understand.
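
In the asker's code that is a one-line change (illustrative):

conf.setMaster("spark://127.0.0.1:7077"); // numeric loopback address instead of My-Mac-mini.local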