
Disclaimer: this page is a Chinese-English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. If you use or share it, you must likewise follow the CC BY-SA license and attribute it to the original authors (not me). Original StackOverflow link: http://stackoverflow.com/questions/40957224/

Date: 2020-10-22 08:53:57  Source: igfitidea

value read is not a member of org.apache.spark.SparkContext

Tags: scala, apache-spark

Asked by Jennie.WU

The version of scala is 2.11.8 ; jdk is 1.8 ; spark is 2.0.2


I tried to run the LDA model example from the official Apache Spark site, and I got an error message from the following lines:


val dataset = spark.read.format("libsvm")
  .load("data/libsvm_data.txt")

The error message is:


Error:(49, 25) value read is not a member of org.apache.spark.SparkContext val dataset = spark.read.format("libsvm") ^



I don't know how to solve it.


Answered by Tzach Zohar

Looks like you're trying to call read on a SparkContext, instead of a SQLContext or a SparkSession:


import org.apache.spark.SparkContext
import org.apache.spark.sql.{SQLContext, SparkSession}

// New 2.0.+ API: create SparkSession and use it for all purposes:
val session = SparkSession.builder().appName("test").master("local").getOrCreate()
session.read.load("/file") // OK

// Old <= 1.6.* API: create SparkContext, then create a SQLContext for DataFrame API usage:
val sc = new SparkContext("local", "test") // used for RDD operations only
val sqlContext = new SQLContext(sc) // used for DataFrame / DataSet APIs

sqlContext.read.load("/file") // OK
sc.read.load("/file") // NOT OK: SparkContext has no `read` member
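Applied to the snippet from the question, a minimal sketch of the fix might look like the following (the app name and master setting are assumptions; the libsvm path is the one from the question):

```scala
import org.apache.spark.sql.SparkSession

// Build a SparkSession (Spark 2.0+). Unlike SparkContext, SparkSession
// exposes a `read` member, so spark.read compiles.
val spark = SparkSession.builder()
  .appName("LDAExample")  // illustrative app name
  .master("local[*]")
  .getOrCreate()

// `read` returns a DataFrameReader configured for the libsvm source;
// calling .load("data/libsvm_data.txt") on it would then read the file.
val reader = spark.read.format("libsvm")
```

If the `spark` variable in the original code was a `SparkContext`, renaming is not enough; it must actually be constructed via `SparkSession.builder()` as above.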

Answered by user3503711

Add these lines:


import org.apache.spark.sql.SparkSession

val session = SparkSession.builder().appName("app_name").master("local").getOrCreate()

val training = session.read.format("format_name").load("path_to_file")

Answered by user3521180

The full syntax for reading via the sqlContext is as below:


val df = sqlContext.read                       // `read` is a val, not a method call
  .format("com.databricks.spark.csv")
  .option("inferSchema", "true")               // was misspelled "inferScheme"
  .option("header", "true")
  .load("path to/data.csv")

in case you are reading/writing a CSV file.

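On Spark 2.0+ (the version in the question) the same CSV read can go through the SparkSession directly; a sketch, assuming the built-in csv source, which makes the external com.databricks.spark.csv package unnecessary:

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession.builder().appName("csv-read").master("local[*]").getOrCreate()

// "csv" is a built-in data source in Spark 2.0+, so no extra package is required.
// Calling .load("path to/data.csv") on this reader would produce the DataFrame.
val reader = spark.read
  .format("csv")
  .option("inferSchema", "true")
  .option("header", "true")
```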