
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use/share it, but you must likewise follow the CC BY-SA license and attribute it to the original authors (not me). Original StackOverflow question: http://stackoverflow.com/questions/38110038/

Date: 2020-10-22 08:26:06  Source: igfitidea

Spark Scala: How to convert DataFrame[Vector] to DataFrame[f1: Double, ..., fn: Double]

Tags: scala, apache-spark, apache-spark-sql, apache-spark-ml

Asked by mt88

I just used StandardScaler to normalize my features for an ML application. After selecting the scaled features, I want to convert this back to a DataFrame of Doubles, though the length of my vectors is arbitrary. I know how to do it for a specific set of 3 features by using

myDF.map{case Row(v: Vector) => (v(0), v(1), v(2))}.toDF("f1", "f2", "f3")

but not for an arbitrary number of features. Is there an easy way to do this?

Example:

// Vectors here is org.apache.spark.mllib.linalg.Vectors (or org.apache.spark.ml.linalg.Vectors in Spark 2.x)
val testDF = sc.parallelize(List(Vectors.dense(5D, 6D, 7D), Vectors.dense(8D, 9D, 10D), Vectors.dense(11D, 12D, 13D))).map(Tuple1(_)).toDF("scaledFeatures")
val myColumnNames = List("f1", "f2", "f3")
// val finalDF = DataFrame[f1: Double, f2: Double, f3: Double] 

EDIT

I found out how to unpack to the column names when creating the DataFrame, but I am still having trouble converting a vector to the sequence needed to create the DataFrame:

finalDF = testDF.map{case Row(v: Vector) => v.toArray.toSeq /* <= this errors */}.toDF(List("f1", "f2", "f3"): _*)

Answered by zero323

Spark >= 3.0.0

Since Spark 3.0 you can use vector_to_array:

import org.apache.spark.ml.functions.vector_to_array

// exprs is the list of column expressions built in the Spark < 3.0 section below
testDF.select(vector_to_array($"scaledFeatures").alias("_tmp")).select(exprs: _*)

Spark < 3.0.0

One possible approach is something similar to this:

import org.apache.spark.sql.functions.udf

// In Spark 1.x you'll will have to replace ML Vector with MLLib one
// import org.apache.spark.mllib.linalg.Vector
// In 2.x the below is usually the right choice
import org.apache.spark.ml.linalg.Vector

// Get size of the vector
val n = testDF.first.getAs[Vector](0).size

// Simple helper to convert vector to array<double> 
// asNondeterministic is available in Spark 2.3 or later
// It can be removed, but at the cost of decreased performance
val vecToSeq = udf((v: Vector) => v.toArray).asNondeterministic

// Prepare a list of columns to create
val exprs = (0 until n).map(i => $"_tmp".getItem(i).alias(s"f$i"))

testDF.select(vecToSeq($"scaledFeatures").alias("_tmp")).select(exprs:_*)

If you know a list of columns upfront you can simplify this a little:

val cols: Seq[String] = ???
val exprs = cols.zipWithIndex.map{ case (c, i) => $"_tmp".getItem(i).alias(c) }
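Putting the pieces above together, here is a minimal end-to-end sketch, assuming the testDF from the question, a known column-name list, and an active SparkSession named spark:

```scala
import org.apache.spark.ml.linalg.{Vector, Vectors}
import org.apache.spark.sql.functions.udf
import spark.implicits._

// Sample frame as in the question: a single vector column
val testDF = Seq(
  Vectors.dense(5d, 6d, 7d),
  Vectors.dense(8d, 9d, 10d),
  Vectors.dense(11d, 12d, 13d)
).map(Tuple1(_)).toDF("scaledFeatures")

// Turn the vector into array<double>
val vecToSeq = udf((v: Vector) => v.toArray)

// Known column names, zipped with their vector positions
val cols = Seq("f1", "f2", "f3")
val exprs = cols.zipWithIndex.map { case (c, i) => $"_tmp".getItem(i).alias(c) }

val finalDF = testDF
  .select(vecToSeq($"scaledFeatures").alias("_tmp"))
  .select(exprs: _*)
// finalDF has columns f1: Double, f2: Double, f3: Double
```

This requires a running Spark application, so it is a sketch rather than a standalone script.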

For Python equivalent see How to split Vector into columns - using PySpark.


Answered by Boern

Alternate solution that evolved a couple of days ago: import the VectorDisassembler into your project (as long as it's not merged into Spark), then:

import org.apache.spark.ml.feature.VectorAssembler
import org.apache.spark.ml.linalg.Vectors

val dataset = spark.createDataFrame(
  Seq((0, 1.2, 1.3), (1, 2.2, 2.3), (2, 3.2, 3.3))
).toDF("id", "val1", "val2")


val assembler = new VectorAssembler()
  .setInputCols(Array("val1", "val2"))
  .setOutputCol("vectorCol")

val output = assembler.transform(dataset)
output.show()
/*
+---+----+----+---------+
| id|val1|val2|vectorCol|
+---+----+----+---------+
|  0| 1.2| 1.3|[1.2,1.3]|
|  1| 2.2| 2.3|[2.2,2.3]|
|  2| 3.2| 3.3|[3.2,3.3]|
+---+----+----+---------+*/

val disassembler = new org.apache.spark.ml.feature.VectorDisassembler()
  .setInputCol("vectorCol")
disassembler.transform(output).show()
/*
+---+----+----+---------+----+----+
| id|val1|val2|vectorCol|val1|val2|
+---+----+----+---------+----+----+
|  0| 1.2| 1.3|[1.2,1.3]| 1.2| 1.3|
|  1| 2.2| 2.3|[2.2,2.3]| 2.2| 2.3|
|  2| 3.2| 3.3|[3.2,3.3]| 3.2| 3.3|
+---+----+----+---------+----+----+*/

Answered by Tong

Please try VectorSlicer:

import org.apache.spark.ml.feature.{VectorAssembler, VectorSlicer}
import org.apache.spark.ml.linalg.Vectors

val dataset = spark.createDataFrame(
  Seq((1, 0.2, 0.8), (2, 0.1, 0.9), (3, 0.3, 0.7))
).toDF("id", "negative_logit", "positive_logit")


val assembler = new VectorAssembler()
  .setInputCols(Array("negative_logit", "positive_logit"))
  .setOutputCol("prediction")

val output = assembler.transform(dataset)
output.show()
/*
+---+--------------+--------------+----------+
| id|negative_logit|positive_logit|prediction|
+---+--------------+--------------+----------+
|  1|           0.2|           0.8| [0.2,0.8]|
|  2|           0.1|           0.9| [0.1,0.9]|
|  3|           0.3|           0.7| [0.3,0.7]|
+---+--------------+--------------+----------+
*/

val slicer = new VectorSlicer()
  .setInputCol("prediction")
  .setIndices(Array(1))
  .setOutputCol("positive_prediction")

val posi_output = slicer.transform(output)
posi_output.show()

/*
+---+--------------+--------------+----------+-------------------+
| id|negative_logit|positive_logit|prediction|positive_prediction|
+---+--------------+--------------+----------+-------------------+
|  1|           0.2|           0.8| [0.2,0.8]|              [0.8]|
|  2|           0.1|           0.9| [0.1,0.9]|              [0.9]|
|  3|           0.3|           0.7| [0.3,0.7]|              [0.7]|
+---+--------------+--------------+----------+-------------------+
*/
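Note that VectorSlicer's output column is still a Vector (of length 1 here, hence the [0.8] formatting above), not a Double. To finish the conversion you still need to unwrap it, for example with a small UDF; a sketch, assuming the posi_output frame above:

```scala
import org.apache.spark.ml.linalg.Vector
import org.apache.spark.sql.functions.udf

// Unwrap the single-element vector produced by VectorSlicer into a plain Double
val firstElem = udf((v: Vector) => v(0))

posi_output
  .withColumn("positive_prediction", firstElem($"positive_prediction"))
  .show()
```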

Answered by Yuehan Lyu

I use Spark 2.3.2, and built an xgboost4j binary-classification model; the result looks like this:

results_train.select("classIndex","probability","prediction").show(3,0)
+----------+----------------------------------------+----------+
|classIndex|probability                             |prediction|
+----------+----------------------------------------+----------+
|1         |[0.5998525619506836,0.400147408246994]  |0.0       |
|1         |[0.5487841367721558,0.45121586322784424]|0.0       |
|0         |[0.5555324554443359,0.44446757435798645]|0.0       |

I define the following UDF to get the elements out of the vector column probability:

import org.apache.spark.sql.functions._

def getProb = udf((probV: org.apache.spark.ml.linalg.Vector, clsInx: Int) => probV.apply(clsInx) )

results_train.select("classIndex", "probability", "prediction")
  .withColumn("p_0", getProb($"probability", lit(0)))
  .withColumn("p_1", getProb($"probability", lit(1)))
  .show(3, 0)

+----------+----------------------------------------+----------+------------------+-------------------+
|classIndex|probability                             |prediction|p_0               |p_1                |
+----------+----------------------------------------+----------+------------------+-------------------+
|1         |[0.5998525619506836,0.400147408246994]  |0.0       |0.5998525619506836|0.400147408246994  |
|1         |[0.5487841367721558,0.45121586322784424]|0.0       |0.5487841367721558|0.45121586322784424|
|0         |[0.5555324554443359,0.44446757435798645]|0.0       |0.5555324554443359|0.44446757435798645|
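On Spark >= 3.0 the same unpacking works without a user-defined function, via the built-in vector_to_array from the accepted answer; a sketch, assuming the results_train frame above:

```scala
import org.apache.spark.ml.functions.vector_to_array
import org.apache.spark.sql.functions.col

// Convert the vector column to array<double>, then index into it by position
results_train
  .withColumn("prob", vector_to_array(col("probability")))
  .withColumn("p_0", col("prob").getItem(0))
  .withColumn("p_1", col("prob").getItem(1))
  .drop("prob")
  .show(3, 0)
```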

Hope this helps those who work with Vector-type input.