Python: Getting a specific field from a chosen Row in a PySpark DataFrame
Disclaimer: this page is a Chinese-English translation of a popular StackOverflow question, provided under the CC BY-SA 4.0 license. You are free to use and share it, but you must do so under the same CC BY-SA license, link to the original, and attribute it to the original authors (not me): StackOverflow.
Original URL: http://stackoverflow.com/questions/35720330/
Getting specific field from chosen Row in Pyspark DataFrame
Asked by mar tin
I have a Spark DataFrame built through pyspark from a JSON file as follows:
from pyspark import SparkContext
from pyspark.sql import SQLContext

sc = SparkContext()
sqlc = SQLContext(sc)
users_df = sqlc.read.json('users.json')
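On Spark 2.x and newer the same DataFrame is usually built through a SparkSession rather than a SQLContext; a minimal sketch, assuming the same local users.json file:

from pyspark.sql import SparkSession

# SparkSession bundles the SparkContext/SQLContext pair used above
spark = SparkSession.builder.getOrCreate()
users_df = spark.read.json('users.json')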
Now, I want to access the data of a chosen_user, where chosen_user is its _id field. I can do
users_df[users_df._id == chosen_user].show()  # .show() already prints, so no print statement is needed
and this gives me the full Row for that user. But suppose I just want one specific field in the Row, say the user's gender, how would I obtain it?
Answered by zero323
Just filter and select:
result = users_df.where(users_df._id == chosen_user).select("gender")
or with col:
from pyspark.sql.functions import col
result = users_df.where(col("_id") == chosen_user).select(col("gender"))
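Either way, result is still a lazily evaluated, single-column DataFrame rather than a plain value; to see what it holds you can call an action on it, for example (assuming chosen_user is already bound to an _id value):

result.show()   # prints the gender column for the matching row(s)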
Finally, a PySpark Row is just a tuple with some extensions, so you can, for example, flatMap:
result.rdd.flatMap(list).first()
or map with something like this:
result.rdd.map(lambda x: x.gender).first()
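Going through the RDD is not strictly required here: first() on the filtered DataFrame returns a single Row (or None when nothing matches), so the field can also be read off it directly. A short sketch, again assuming chosen_user holds the target _id:

row = users_df.where(col("_id") == chosen_user).select("gender").first()
gender = row["gender"] if row is not None else None   # Row fields allow dict-style access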