SQL Pyspark: Filter dataframe based on multiple conditions

Disclaimer: this page is a translation of a popular StackOverflow question and is provided under the CC BY-SA 4.0 license. If you use or share it, you must do so under the same license and attribute it to the original authors (not me): StackOverflow. Original question: http://stackoverflow.com/questions/49301373/

Pyspark: Filter dataframe based on multiple conditions

Tags: sql, filter, pyspark, apache-spark-sql, pyspark-sql

Asked by Sidhom

I want to filter the dataframe on two conditions: first, d < 5; and second, that whenever the value in col1 equals its counterpart in col3, the value in col2 must not equal its counterpart in col4.

If the original dataframe DF is as follows:

+----+----+----+----+---+
|col1|col2|col3|col4|  d|
+----+----+----+----+---+
|   A|  xx|   D|  vv|  4|
|   C| xxx|   D|  vv| 10|
|   A|   x|   A|  xx|  3|
|   E| xxx|   B|  vv|  3|
|   E| xxx|   F| vvv|  6|
|   F|xxxx|   F| vvv|  4|
|   G| xxx|   G| xxx|  4|
|   G| xxx|   G|  xx|  4|
|   G| xxx|   G| xxx| 12|
|   B|xxxx|   B|  xx| 13|
+----+----+----+----+---+

The desired dataframe is:

+----+----+----+----+---+
|col1|col2|col3|col4|  d|
+----+----+----+----+---+
|   A|  xx|   D|  vv|  4|
|   A|   x|   A|  xx|  3|
|   E| xxx|   B|  vv|  3|
|   F|xxxx|   F| vvv|  4|
|   G| xxx|   G|  xx|  4|
+----+----+----+----+---+

The code I have tried, which did not work as expected:

cols = [('A', 'xx', 'D', 'vv', 4), ('C', 'xxx', 'D', 'vv', 10),
        ('A', 'x', 'A', 'xx', 3), ('E', 'xxx', 'B', 'vv', 3),
        ('E', 'xxx', 'F', 'vvv', 6), ('F', 'xxxx', 'F', 'vvv', 4),
        ('G', 'xxx', 'G', 'xxx', 4), ('G', 'xxx', 'G', 'xx', 4),
        ('G', 'xxx', 'G', 'xxx', 12), ('B', 'xxxx', 'B', 'xx', 13)]
df = spark.createDataFrame(cols, ['col1', 'col2', 'col3', 'col4', 'd'])

df.filter((df.d < 5) & (df.col2 != df.col4) & (df.col1 == df.col3)).show()

+----+----+----+----+---+
|col1|col2|col3|col4|  d|
+----+----+----+----+---+
|   A|   x|   A|  xx|  3|
|   F|xxxx|   F| vvv|  4|
|   G| xxx|   G|  xx|  4|
+----+----+----+----+---+

What should I do to achieve the desired result?

Answered by pault

Your logic condition is wrong: by AND-ing in (col1 == col3), your filter keeps only rows where col1 equals col3, so rows such as (A, xx, D, vv, 4) that should be kept are dropped. IIUC, what you want is:

import pyspark.sql.functions as f

df.filter(f.col('d') < 5)\
    .filter(
        # & binds tighter than |, so this groups as: A | (B & C)
        (f.col('col1') != f.col('col3')) |
        ((f.col('col2') != f.col('col4')) & (f.col('col1') == f.col('col3')))
    )\
    .show()

I broke the filter() step into two calls for readability, but you could equivalently do it in one line.
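
For reference, a single-call equivalent (the same logic, with the grouping made explicit) could look like this:

df.filter(
    (f.col('d') < 5) &
    ((f.col('col1') != f.col('col3')) |
     ((f.col('col2') != f.col('col4')) & (f.col('col1') == f.col('col3'))))
).show()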

Output:

+----+----+----+----+---+
|col1|col2|col3|col4|  d|
+----+----+----+----+---+
|   A|  xx|   D|  vv|  4|
|   A|   x|   A|  xx|  3|
|   E| xxx|   B|  vv|  3|
|   F|xxxx|   F| vvv|  4|
|   G| xxx|   G|  xx|  4|
+----+----+----+----+---+

Answered by ohke

You can also write it as a SQL expression string, like below (without pyspark.sql.functions):

df.filter('d<5 and (col1 <> col3 or (col1 = col3 and col2 <> col4))').show()

Result:

+----+----+----+----+---+
|col1|col2|col3|col4|  d|
+----+----+----+----+---+
|   A|  xx|   D|  vv|  4|
|   A|   x|   A|  xx|  3|
|   E| xxx|   B|  vv|  3|
|   F|xxxx|   F| vvv|  4|
|   G| xxx|   G|  xx|  4|
+----+----+----+----+---+
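
The same predicate also works as a plain Spark SQL query, if you would rather register a temporary view (a minimal sketch, assuming the spark session from the question):

df.createOrReplaceTempView('df')
spark.sql(
    'SELECT * FROM df '
    'WHERE d < 5 AND (col1 <> col3 OR (col1 = col3 AND col2 <> col4))'
).show()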

Answered by hamze_z3

A faster way (without pyspark.sql.functions):

df.filter((df.d < 5) &
          ((df.col1 != df.col3) |
           ((df.col2 != df.col4) & (df.col1 == df.col3)))).show()
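
Incidentally, the compound condition used in these answers can be simplified: since A | (B & ~A) reduces to A | B, the predicate (col1 != col3) | ((col2 != col4) & (col1 == col3)) is logically equivalent to (col1 != col3) | (col2 != col4). A sketch of the more compact filter (a simplification of mine, not from any of the answers):

# A | (B & ~A) == A | B, so the (col1 == col3) clause is redundant
df.filter((df.d < 5) & ((df.col1 != df.col3) | (df.col2 != df.col4))).show()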