Original source: http://stackoverflow.com/questions/46813283/
Warning: these answers are provided under the CC BY-SA 4.0 license. You are free to use/share them, but you must attribute them to the original authors (not me): StackOverflow
Select columns in Pyspark Dataframe
Asked by Nivi
I am looking for a way to select columns of my dataframe in pyspark. For the first row, I know I can use df.first()
but not sure about columns given that they do not have column names.
I have 5 columns and want to loop through each one of them.
+--+---+---+---+---+---+---+
|_1| _2| _3| _4| _5| _6| _7|
+--+---+---+---+---+---+---+
|1 |0.0|0.0|0.0|1.0|0.0|0.0|
|2 |1.0|0.0|0.0|0.0|0.0|0.0|
|3 |0.0|0.0|1.0|0.0|0.0|0.0|
+--+---+---+---+---+---+---+
Answered by MaxU
Try something like this:
df.select([c for c in df.columns if c in ['_2','_4','_5']]).show()
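Since df.columns is just a Python list of strings, the list comprehension can be checked without Spark; the names below mirror the question's auto-generated columns:

```python
# Stand-in for df.columns (the question's auto-generated names)
columns = ['_1', '_2', '_3', '_4', '_5', '_6', '_7']
wanted = ['_2', '_4', '_5']

# Same comprehension as in the answer; it also preserves the
# dataframe's original column order, not the order of `wanted`
selected = [c for c in columns if c in wanted]
# selected == ['_2', '_4', '_5']
```

A side benefit of this form is that names in `wanted` that don't exist in the dataframe are silently skipped instead of raising an error.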
Answered by Michael West
First two columns and 5 rows
df.select(df.columns[:2]).take(5)
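df.columns[:2] is ordinary list slicing, and select accepts a list directly; this sketch (column names assumed from the question's data) shows what gets passed:

```python
# Stand-in for df.columns
columns = ['_1', '_2', '_3', '_4', '_5', '_6', '_7']

# The slice that df.select(df.columns[:2]) receives;
# .take(5) then limits the result to the first 5 rows
first_two = columns[:2]
# first_two == ['_1', '_2']
```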
Answered by Shadowtrooper
You can use an array and unpack it inside the select:
cols = ['_2','_4','_5']
df.select(*cols).show()
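The * operator unpacks the list into separate positional arguments, which is one of the calling styles select supports. A tiny stand-in function makes the equivalence visible (select here is illustrative, not the Spark API):

```python
def select(*cols):
    # Accepts varargs, like DataFrame.select
    return list(cols)

cols = ['_2', '_4', '_5']

# Unpacking the list is the same as passing the names one by one
assert select(*cols) == select('_2', '_4', '_5')
```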
Answered by desertnaut
Use df.schema.names:
spark.version
# u'2.2.0'
df = spark.createDataFrame([("foo", 1), ("bar", 2)])
df.show()
# +---+---+
# | _1| _2|
# +---+---+
# |foo| 1|
# |bar| 2|
# +---+---+
df.schema.names
# ['_1', '_2']
for i in df.schema.names:
    # df_new = df.withColumn(i, [do-something])
    print(i)
# _1
# _2
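The commented-out withColumn loop applies a transformation to every column in turn; the same shape can be mimicked on a plain dict of columns (a toy stand-in for a DataFrame, not the Spark API):

```python
# Toy stand-in for a DataFrame: column name -> list of values
data = {'_1': ['foo', 'bar'], '_2': [1, 2]}

# Analogous to: for i in df.schema.names: df = df.withColumn(i, [do-something])
for name in list(data):
    data[name] = [str(v).upper() for v in data[name]]
# every column has now been replaced by a transformed copy
```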
Answered by Igor Ostaptchenko
The dataset in ss.csv contains some columns I am interested in:
ss_ = spark.read.csv("ss.csv", header=True, inferSchema=True)
ss_.columns
['Reporting Area', 'MMWR Year', 'MMWR Week', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)?, Current week', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)?, Current week, flag', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)?, Previous 52 weeks Med', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)?, Previous 52 weeks Med, flag', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)?, Previous 52 weeks Max', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)?, Previous 52 weeks Max, flag', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)?, Cum 2018', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)?, Cum 2018, flag', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)?, Cum 2017', 'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)?, Cum 2017, flag', 'Shiga toxin-producing Escherichia coli, Current week', 'Shiga toxin-producing Escherichia coli, Current week, flag', 'Shiga toxin-producing Escherichia coli, Previous 52 weeks Med', 'Shiga toxin-producing Escherichia coli, Previous 52 weeks Med, flag', 'Shiga toxin-producing Escherichia coli, Previous 52 weeks Max', 'Shiga toxin-producing Escherichia coli, Previous 52 weeks Max, flag', 'Shiga toxin-producing Escherichia coli, Cum 2018', 'Shiga toxin-producing Escherichia coli, Cum 2018, flag', 'Shiga toxin-producing Escherichia coli, Cum 2017', 'Shiga toxin-producing Escherichia coli, Cum 2017, flag', 'Shigellosis, Current week', 'Shigellosis, Current week, flag', 'Shigellosis, Previous 52 weeks Med', 'Shigellosis, Previous 52 weeks Med, flag', 'Shigellosis, Previous 52 weeks Max', 'Shigellosis, Previous 52 weeks Max, flag', 'Shigellosis, Cum 2018', 'Shigellosis, Cum 2018, flag', 'Shigellosis, Cum 2017', 'Shigellosis, Cum 2017, flag']
but I only need a few:
columns_lambda = lambda k: k.endswith(', Current week') or k == 'Reporting Area' or k == 'MMWR Year' or k == 'MMWR Week'
filter returns an iterator over the desired column names; evaluating it gives a list:
sss = filter(columns_lambda, ss_.columns)
to_keep = list(sss)
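Because the lambda only inspects column-name strings, the filtering step can be tried without Spark on a small sample of the headers (the sample is truncated from the output above):

```python
columns_lambda = lambda k: k.endswith(', Current week') or k == 'Reporting Area' or k == 'MMWR Year' or k == 'MMWR Week'

# A truncated stand-in for ss_.columns
sample = ['Reporting Area', 'MMWR Year', 'MMWR Week',
          'Shigellosis, Current week', 'Shigellosis, Cum 2018']

# Keeps the three key columns plus every ', Current week' column
to_keep = list(filter(columns_lambda, sample))
# to_keep == ['Reporting Area', 'MMWR Year', 'MMWR Week', 'Shigellosis, Current week']
```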
The list of desired columns is unpacked as arguments to the DataFrame's select function, which returns a dataset containing only the columns in the list:
dfss = ss_.select(*to_keep)
dfss.columns
The result:
['Reporting Area',
'MMWR Year',
'MMWR Week',
'Salmonellosis (excluding Paratyphoid fever andTyphoid fever)?, Current week',
'Shiga toxin-producing Escherichia coli, Current week',
'Shigellosis, Current week']
df.select() has a complementary method for dropping a list of columns: http://spark.apache.org/docs/2.4.1/api/python/pyspark.sql.html#pyspark.sql.DataFrame.drop
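The complement relation between select and drop can be sketched on plain column-name lists (toy names, no Spark needed): dropping everything not in the keep list describes the same result as selecting the keep list.

```python
# Truncated stand-in for the dataframe's columns
all_cols = ['Reporting Area', 'MMWR Year',
            'Shigellosis, Current week', 'Shigellosis, Cum 2018']
keep = ['Reporting Area', 'MMWR Year', 'Shigellosis, Current week']

# Columns you would pass to df.drop(*to_drop) to mirror df.select(*keep)
to_drop = [c for c in all_cols if c not in keep]
# to_drop == ['Shigellosis, Cum 2018']
```

Preferring select or drop is mostly a matter of which list is shorter to write down.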