Adding a new column in Data Frame derived from other columns (Spark)
Disclaimer: this page is a translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same license and attribute it to the original authors (not me): StackOverflow
Original question: http://stackoverflow.com/questions/31333437/
Asked by oikonomiyaki
I'm using Spark 1.3.0 and Python. I have a dataframe and I wish to add an additional column which is derived from other columns. Like this,
>>> old_df.columns
[col_1, col_2, ..., col_m]
>>> new_df.columns
[col_1, col_2, ..., col_m, col_n]
where
col_n = col_3 - col_4
How do I do this in PySpark?
Accepted answer by zero323
One way to achieve that is to use the withColumn method:
# Build a sample DataFrame, then derive col_n from the existing columns.
old_df = sqlContext.createDataFrame(sc.parallelize(
    [(0, 1), (1, 3), (2, 5)]), ('col_1', 'col_2'))
new_df = old_df.withColumn('col_n', old_df.col_1 - old_df.col_2)
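As a side note (not in the original answer): the same expression can be built with pyspark.sql.functions.col instead of attribute access, which avoids referencing old_df by name and is convenient when chaining transformations; on Spark 1.5+ there is also expr for SQL-style strings:

from pyspark.sql.functions import col, expr

# col() builds a column reference by name (available since Spark 1.3);
# expr() parses a SQL expression string (available since Spark 1.5).
new_df = old_df.withColumn('col_n', col('col_1') - col('col_2'))
new_df = old_df.withColumn('col_n', expr('col_1 - col_2'))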
Alternatively you can use SQL on a registered table:
old_df.registerTempTable('old_df')
new_df = sqlContext.sql('SELECT *, col_1 - col_2 AS col_n FROM old_df')
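Either way, col_n holds the element-wise difference of col_1 and col_2. With the sample data above, the expected output looks like this (row order may vary):

new_df.show()
# +-----+-----+-----+
# |col_1|col_2|col_n|
# +-----+-----+-----+
# |    0|    1|   -1|
# |    1|    3|   -2|
# |    2|    5|   -3|
# +-----+-----+-----+

Note that registerTempTable is the Spark 1.x API; on Spark 2.x it was deprecated in favour of createOrReplaceTempView.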
Answered by arker296
Additionally, we can use a udf:
from pyspark import SparkContext
from pyspark.sql import SQLContext
from pyspark.sql.functions import udf, col
from pyspark.sql.types import IntegerType

sc = SparkContext()
sqlContext = SQLContext(sc)

old_df = sqlContext.createDataFrame(sc.parallelize(
    [(0, 1), (1, 3), (2, 5)]), ('col_1', 'col_2'))

# Wrap a plain Python lambda as a UDF, declaring its return type.
subtract = udf(lambda col1, col2: col1 - col2, IntegerType())
new_df = old_df.withColumn('col_n', subtract(col('col_1'), col('col_2')))
new_df.show()
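A caveat worth adding (my note, not part of the answer): a Python udf ships each row out to a Python worker process, so it is usually much slower than the built-in column arithmetic in the accepted answer; prefer native expressions when one exists. On Spark 2.1+ the same UDF can also be written in decorator form, sketched here:

from pyspark.sql.functions import udf, col
from pyspark.sql.types import IntegerType

# Decorator form of udf (Spark 2.1+); the explicit check guards against
# null inputs, which arrive in Python as None.
@udf(returnType=IntegerType())
def subtract_cols(col1, col2):
    if col1 is None or col2 is None:
        return None
    return col1 - col2

new_df = old_df.withColumn('col_n', subtract_cols(col('col_1'), col('col_2')))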
Answered by That tech guy
This worked for me in Databricks using spark.sql:
df_converted = spark.sql("""
    SELECT total_bill, tip, sex,
           CASE WHEN sex = 'Female' THEN '0' ELSE '1' END AS sex_encoded
    FROM tips
""")
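For completeness (my addition, not from the answer): the same CASE WHEN encoding can be written with the DataFrame API using when/otherwise, assuming the registered tips table is loaded into a DataFrame (the name tips_df here is illustrative):

from pyspark.sql.functions import when, col

# Load the registered table as a DataFrame.
tips_df = spark.table('tips')

df_converted = tips_df.select(
    'total_bill', 'tip', 'sex',
    when(col('sex') == 'Female', '0').otherwise('1').alias('sex_encoded'))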