Python: iterate over pyspark dataframe columns

Disclaimer: this page is a translation of a popular StackOverflow question and its answers, provided under the CC BY-SA 4.0 license. You are free to use or share it, but you must attribute it to the original authors (not me). Original question: http://stackoverflow.com/questions/42307976/

Date: 2020-08-19 21:31:44  Source: igfitidea

iterate over pyspark dataframe columns

Tags: python, iterator, pyspark, pyspark-sql

Asked by too_many_questions

I have the following pyspark.dataframe:

age  state  name    income
21    DC    john    30-50K
NaN   VA    gerry   20-30K

I'm trying to achieve the equivalent of df.isnull().sum() (from pandas), which produces:

age      1
state    0
name     0
income   0

At first I tried something along the lines of:

null_counter = [df[c].isNotNull().count() for c in df.columns]

but this produces the following error:

TypeError: Column is not iterable
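Side note: df[c] here is a Column expression describing a computation, not a container of values, which is why it cannot be counted or iterated directly. A minimal sketch, assuming the example dataframe above is bound to df, of expressing the same per-column non-null count as a single aggregation:

import pyspark.sql.functions as fn

# fn.count() skips nulls, so this gives the non-null count for every column
non_null_counts = df.select(
    [fn.count(df[c]).alias(c) for c in df.columns]
).first().asDict()
# e.g. {'age': 1, 'state': 2, 'name': 2, 'income': 2}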

Similarly, this is how I'm currently iterating over columns to get the minimum value:

import pyspark.sql.functions as fn


class BaseAnalyzer:
    def __init__(self, report, struct):
        self.report = report
        self._struct = struct
        self.name = struct.name
        self.data_type = struct.dataType
        self.min = None
        self.max = None

    def __repr__(self):
        return '<Column: %s>' % self.name


class BaseReport:
    def __init__(self, df):
        self.df = df
        self.columns_list = df.columns
        self.columns = {f.name: BaseAnalyzer(self, f) for f in df.schema.fields}

    def calculate_stats(self):
        find_min = self.df.select([fn.min(self.df[c]).alias(c) for c in self.df.columns]).collect()
        min_row = find_min[0]
        for column, min_value in min_row.asDict().items():
            self[column].min = min_value

    def __getitem__(self, name):
        return self.columns[name]

    def __repr__(self):
        return '<Report>'

report = BaseReport(df)
calc = report.calculate_stats()

for column in report.columns.values():
    if hasattr(column, 'min'):
        print("{}:{}".format(column, column.min))

which allows me to 'iterate over the columns'

<Column: age>:1
<Column: name>: Alan
<Column: state>:ALASKA
<Column: income>:0-1k

I think this method has become way too complicated. How can I properly iterate over ALL columns to provide various summary statistics (min, max, isnull, notnull, etc.)? The distinction between pyspark.sql.Row and pyspark.sql.Column seems strange coming from pandas.
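A quick illustration of that distinction, using the example dataframe above (assumed to be bound to df): a Row holds concrete values for a single record, while a Column is an unevaluated expression over the whole dataframe.

# Row: concrete values for one record, retrieved by an action
row = df.first()
print(row['age'], row.state)

# Column: a lazily evaluated expression, usable only inside select/filter/etc.
col_expr = df['age']
print(type(col_expr))  # <class 'pyspark.sql.column.Column'>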

Answer by Grr

Have you tried something like this:

names = df.schema.names
for name in names:
    print(name + ': ' + str(df.where(df[name].isNull()).count()))

You can see how this could be modified to put the information into a dictionary or some other more useful format.

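For example, a sketch of that dictionary variant (note that each column still triggers its own filter-and-count job, which can be slow on wide dataframes):

# Build a {column name: null count} dictionary, one Spark job per column
null_counts = {
    name: df.where(df[name].isNull()).count()
    for name in df.schema.names
}
print(null_counts)  # e.g. {'age': 1, 'state': 0, 'name': 0, 'income': 0}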

Answer by Unicorn Squad

You can try this one:

from pyspark.sql.functions import col, count, when

nullDf = df.select([count(when(col(c).isNull(), c)).alias(c) for c in df.columns])
nullDf.show()

It will give you the number of null values in each column.

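Building on the same idea, here is a sketch (not part of the original answer) that collects min, max, and null count for every column in a single pass, closer to the full summary the question asked for:

from pyspark.sql.functions import col, count, when
from pyspark.sql.functions import min as fn_min, max as fn_max

# One aggregation pass; the alias prefixes make the resulting Row easy to unpack
stats_row = df.select(
    [fn_min(col(c)).alias('min_' + c) for c in df.columns]
    + [fn_max(col(c)).alias('max_' + c) for c in df.columns]
    + [count(when(col(c).isNull(), c)).alias('nulls_' + c) for c in df.columns]
).first()

summary = {
    c: {'min': stats_row['min_' + c],
        'max': stats_row['max_' + c],
        'nulls': stats_row['nulls_' + c]}
    for c in df.columns
}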