scala - where clause not working in spark sql dataframe
Disclaimer: this page is a Chinese-English side-by-side translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same CC BY-SA license and attribute it to the original authors (not me): StackOverflow
Original URL: http://stackoverflow.com/questions/42409756/
where clause not working in spark sql dataframe
Asked by Ishan
I've created a dataframe which contains 3 columns : zip, lat, lng
I want to select the lat and lng values where zip = 00650
So, I tried using:
sqlContext.sql("select lat,lng from census where zip=00650").show()
But it returns an ArrayOutOfBound exception because the result has no values in it. If I remove the where clause it runs fine.
Can someone please explain what I am doing wrong?
Update:
dataframe schema:
root 
|-- zip: string (nullable = true) 
|-- lat: string (nullable = true) 
|-- lng: string (nullable = true)
The first rows are:
+-----+---------+-----------+
|  zip|      lat|        lng|
+-----+---------+-----------+
|00601|18.180555| -66.749961|
|00602|18.361945| -67.175597|
|00603|18.455183| -67.119887|
|00606|18.158345| -66.932911|
|00610|18.295366| -67.125135|
|00612|18.402253| -66.711397|
|00616|18.420412| -66.671979|
|00617|18.445147| -66.559696|
|00622|17.991245| -67.153993|
|00623|18.083361| -67.153897|
|00624|18.064919| -66.716683|
|00627|18.412600| -66.863926|
|00631|18.190607| -66.832041|
|00637|18.076713| -66.947389|
|00638|18.295913| -66.515588|
|00641|18.263085| -66.712985|
|00646|18.433150| -66.285875| 
|00647|17.963613| -66.947127|
|00650|18.349416| -66.578079|
+-----+---------+-----------+
Accepted answer by Prasad Khode
As you can see in your schema, zip is of type string, so your query should be something like this:
sqlContext.sql("select lat, lng from census where zip = '00650'").show()
Update:
If you are using Spark 2, then you can do this:
import sparkSession.sqlContext.implicits._  // brings toDF into scope for Seq
// Build a sample DataFrame of string columns: lat, lng, zip
val dataFrame = Seq(("10.023", "75.0125", "00650"), ("12.0246", "76.4586", "00650"), ("10.023", "75.0125", "00651")).toDF("lat", "lng", "zip")
dataFrame.printSchema()
// DataFrame API: compare the zip column against a string literal
dataFrame.select("*").where(dataFrame("zip") === "00650").show()
// Register the DataFrame as a temporary table so it can be queried with SQL
dataFrame.registerTempTable("census")
sparkSession.sqlContext.sql("SELECT lat, lng FROM census WHERE zip = '00650'").show()
output:
root
 |-- lat: string (nullable = true)
 |-- lng: string (nullable = true)
 |-- zip: string (nullable = true)
+-------+-------+-----+
|    lat|    lng|  zip|
+-------+-------+-----+
| 10.023|75.0125|00650|
|12.0246|76.4586|00650|
+-------+-------+-----+
+-------+-------+
|    lat|    lng|
+-------+-------+
| 10.023|75.0125|
|12.0246|76.4586|
+-------+-------+
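Note: registerTempTable is deprecated in Spark 2.x in favor of createOrReplaceTempView. An equivalent sketch using the non-deprecated API:

// createOrReplaceTempView is the Spark 2.x replacement for registerTempTable
dataFrame.createOrReplaceTempView("census")
sparkSession.sql("SELECT lat, lng FROM census WHERE zip = '00650'").show()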
Answered by Ishan
I resolved my issue using an RDD rather than a DataFrame. It gave me the desired results:
// Read the CSV file as an RDD and split each line into its comma-separated fields
val data = sc.textFile("/home/ishan/Desktop/c").map(_.split(","))
// Keep rows where some field equals "00650", and take the first match
val arr = data.filter(_.contains("00650")).take(1)
// Print each field of the matching row
arr.foreach { a => a foreach println }
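Note that _.contains("00650") succeeds when any field equals "00650", not just zip. A more precise sketch (assuming zip is the first comma-separated field on every line) would test that field directly:

// Match only when the first field (zip) equals "00650"
val byZip = data.filter(fields => fields(0) == "00650").take(1)
byZip.foreach { a => a foreach println }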

