scala - Spark application throws javax.servlet.FilterRegistration

Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/28086520/

Spark application throws javax.servlet.FilterRegistration

scala, intellij-idea, sbt, apache-spark

Asked by Marco

I'm using Scala to create and run a Spark application locally.

My build.sbt:

name := "SparkDemo"
version := "1.0"
scalaVersion := "2.10.4"
libraryDependencies += "org.apache.spark" %% "spark-core" % "1.2.0" exclude("org.apache.hadoop", "hadoop-client")
libraryDependencies += "org.apache.spark" % "spark-sql_2.10" % "1.2.0"
libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.6.0" excludeAll ExclusionRule(organization = "org.eclipse.jetty")
libraryDependencies += "org.apache.hadoop" % "hadoop-mapreduce-client-core" % "2.6.0"
libraryDependencies += "org.apache.hbase" % "hbase-client" % "0.98.4-hadoop2"
libraryDependencies += "org.apache.hbase" % "hbase-server" % "0.98.4-hadoop2"
libraryDependencies += "org.apache.hbase" % "hbase-common" % "0.98.4-hadoop2"
mainClass in Compile := Some("demo.TruckEvents")

During runtime I get the exception:

Exception in thread "main" java.lang.ExceptionInInitializerError during calling of... Caused by: java.lang.SecurityException: class "javax.servlet.FilterRegistration"'s signer information does not match signer information of other classes in the same package

The exception is triggered here:

val sc = new SparkContext("local", "HBaseTest")

I am using the IntelliJ Scala/SBT plugin.

I've seen that other people have also had this problem and a solution was suggested. But that is a Maven build... Is my sbt wrong here? Or do you have any other suggestion for how I can solve this problem?

Answered by Mansoor Siddiqui

See my answer to a similar question here. The class conflict comes about because HBase depends on org.mortbay.jetty, and Spark depends on org.eclipse.jetty. I was able to resolve the issue by excluding the org.mortbay.jetty dependencies from HBase.

If you're pulling in hadoop-common, then you may also need to exclude javax.servlet from hadoop-common. I have a working HBase/Spark setup with my sbt dependencies set up as follows:

val clouderaVersion = "cdh5.2.0"
val hadoopVersion = s"2.5.0-$clouderaVersion"
val hbaseVersion = s"0.98.6-$clouderaVersion"
val sparkVersion = s"1.1.0-$clouderaVersion"

val hadoopCommon = "org.apache.hadoop" % "hadoop-common" % hadoopVersion % "provided" excludeAll ExclusionRule(organization = "javax.servlet")
val hbaseCommon = "org.apache.hbase" % "hbase-common" % hbaseVersion % "provided"
val hbaseClient = "org.apache.hbase" % "hbase-client" % hbaseVersion % "provided"
val hbaseProtocol = "org.apache.hbase" % "hbase-protocol" % hbaseVersion % "provided"
val hbaseHadoop2Compat = "org.apache.hbase" % "hbase-hadoop2-compat" % hbaseVersion % "provided"
val hbaseServer = "org.apache.hbase" % "hbase-server" % hbaseVersion % "provided" excludeAll ExclusionRule(organization = "org.mortbay.jetty")
val sparkCore = "org.apache.spark" %% "spark-core" % sparkVersion % "provided"
val sparkStreaming = "org.apache.spark" %% "spark-streaming" % sparkVersion % "provided"
val sparkStreamingKafka = "org.apache.spark" %% "spark-streaming-kafka" % sparkVersion exclude("org.apache.spark", "spark-streaming_2.10")
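For completeness, a sketch of how these vals might be wired into the build (the answer does not show this part, so the exact setting below is an assumption):

libraryDependencies ++= Seq(
  hadoopCommon, hbaseCommon, hbaseClient, hbaseProtocol,
  hbaseHadoop2Compat, hbaseServer, sparkCore, sparkStreaming, sparkStreamingKafka
)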

Answered by Jing He

If you are using IntelliJ IDEA, try this:

如果你使用 IntelliJ IDEA,试试这个:

  1. Right-click the project root folder and choose Open Module Settings
  2. In the new window, choose Modules in the left navigation column
  3. In the rightmost column, select the Dependencies tab and find Maven: javax.servlet:servlet-api:2.5
  4. Finally, just move this item to the bottom by pressing ALT+Down.

It should solve this problem.

This method came from http://wpcertification.blogspot.ru/2016/01/spark-error-class-javaxservletfilterreg.html
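If you want to confirm which jar actually wins on the runtime classpath, a small diagnostic (an assumption: you add these lines to your own driver code before creating the SparkContext) is to print where the conflicting class was loaded from:

// prints the jar that javax.servlet.FilterRegistration was actually loaded from
val source = classOf[javax.servlet.FilterRegistration].getProtectionDomain.getCodeSource
println(if (source == null) "bootstrap classpath" else source.getLocation)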

Answered by M.Rez

If it is happening in IntelliJ IDEA, you should go to the project settings, find the jar in the modules, and remove it. Then run your code with sbt through the shell. It will fetch the jar files itself; after that, go back to IntelliJ and re-run the code through IntelliJ. It somehow works for me and fixes the error. I am not sure what the problem was, since it doesn't show up anymore.

Oh, I also removed the jar file and added "javax.servlet:javax.servlet-api:3.1.0" through Maven by hand, and now the error is gone.
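For reference, a rough sbt equivalent of adding that artifact by hand (the coordinates are the standard ones for the servlet 3.1 API; whether your build needs it as a direct dependency is an assumption) would be:

libraryDependencies += "javax.servlet" % "javax.servlet-api" % "3.1.0"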

Answered by Ravi Macha

When you use SBT, the FilterRegistration class is present in servlet-api 3.0, but if you use Jetty or Java 8, the 2.5 JAR is automatically added as a transitive dependency.

Fix: the servlet-api 2.5 JAR was the troublemaker here. I resolved this issue by adding the servlet-api 3.0 jar to the dependencies.
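A minimal sketch of that fix in sbt, assuming the old 2.5 jar arrives transitively through hadoop-common as in the question's build.sbt:

// drop the servlet-api 2.5 that is assumed to come in transitively
libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.6.0" exclude("javax.servlet", "servlet-api")
// and depend on the servlet 3.x API explicitly
libraryDependencies += "javax.servlet" % "javax.servlet-api" % "3.0.1"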

Answered by Igorock

For me, the following works:

libraryDependencies ++= Seq(
    "org.apache.spark" %% "spark-core" % sparkVersion.value % "provided",
    "org.apache.spark" %% "spark-sql"  % sparkVersion.value % "provided",
    ....................................................................
).map(_.excludeAll(ExclusionRule(organization = "javax.servlet")))

Answered by Juan Alonso

If you are running inside IntelliJ, please check in the project settings whether you have two active modules (one for the project and another for sbt).

This is probably a problem that occurred while importing an existing project.

Answered by Pankaj Narang

Try running a simple program with only the Hadoop and HBase dependencies below:

libraryDependencies += "org.apache.hadoop" % "hadoop-common" % "2.6.0"     excludeAll(ExclusionRule(organization = "org.eclipse.jetty"))

libraryDependencies += "org.apache.hadoop" % "hadoop-mapreduce-client-core" % "2.6.0"


libraryDependencies += "org.apache.hbase" % "hbase-client" % "0.98.4-hadoop2"

libraryDependencies += "org.apache.hbase" % "hbase-server" % "0.98.4-hadoop2"

libraryDependencies += "org.apache.hbase" % "hbase-common" % "0.98.4-hadoop2"

There is probably a mismatch in the dependencies. Also make sure you use the same version of the jars when you compile and when you run.
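One way to inspect what actually ends up on the classpath (an assumption: you are willing to add the sbt-dependency-graph plugin to project/plugins.sbt) is:

// project/plugins.sbt -- pick the plugin version that matches your sbt version
// (0.8.x for sbt 0.13, 0.9.x for sbt 1.x)
addSbtPlugin("net.virtual-void" % "sbt-dependency-graph" % "0.9.2")
// then run "dependencyTree" from the sbt shell to see which servlet-api versions are pulled in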

Also, is it possible to run the code in the spark-shell to reproduce the issue? Then I will be able to help better.
