Recursive operation on the same column in PySpark


I have a dataframe like so:

Dataframe:

|SEQ_ID |TIME_STAMP             |_MS               |
+-------+-----------------------+------------------+
|3879826|2021-07-29 11:24:20.525|NaN               |
|3879826|2021-07-29 11:25:56.934|21.262409581399556|
|3879826|2021-07-29 11:27:43.264|27.247600203353613|
|3879826|2021-07-29 11:29:27.613|18.13528511851038 |
|3879826|2021-07-29 11:31:10.512|2.520896614376871 |
|3879826|2021-07-29 11:32:54.252|2.7081931585605541|
|3879826|2021-07-29 11:34:36.995|2.9832290627235505|
|3879826|2021-07-29 11:36:19.128|13.011968111650264|
|3879826|2021-07-29 11:38:10.919|17.762006254598797|
|3879826|2021-07-29 11:40:01.929|1.9661930950977457|

When _MS is >= 3 and the previous _MS is less than the current _MS, I want to increment a new column drift_MS by 100. But if _MS is < 3 and the previous _MS is less than the current _MS, I want to increment drift_MS by 1. If neither condition is satisfied, I want to reset the value to 0.

Expected output:

|SEQ_ID |TIME_STAMP             |_MS               |drift_MS|
+-------+-----------------------+------------------+--------+
|3879826|2021-07-29 11:24:20.525|NaN               |0       |
|3879826|2021-07-29 11:25:56.934|21.262409581399556|0       |
|3879826|2021-07-29 11:27:43.264|27.247600203353613|100     |
|3879826|2021-07-29 11:29:27.613|18.13528511851038 |0       |
|3879826|2021-07-29 11:31:10.512|2.520896614376871 |0       |
|3879826|2021-07-29 11:32:54.252|2.7081931585605541|1       |
|3879826|2021-07-29 11:34:36.995|2.9832290627235505|2       |
|3879826|2021-07-29 11:36:19.128|13.011968111650264|102     |
|3879826|2021-07-29 11:38:10.919|17.762006254598797|202     |
|3879826|2021-07-29 11:40:01.929|1.9661930950977457|0       |
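
To double-check the rule, here is a minimal plain-Python sketch (hard-coded with the _MS values above, purely to illustrate the increment/reset logic; the real data lives in a Spark dataframe) that reproduces the drift_MS column row by row:

import math

ms = [float('nan'), 21.262409581399556, 27.247600203353613, 18.13528511851038,
      2.520896614376871, 2.7081931585605541, 2.9832290627235505,
      13.011968111650264, 17.762006254598797, 1.9661930950977457]

drift = []
prev = None
running = 0
for cur in ms:
    # increment only when the previous _MS exists and is strictly less than the current _MS
    if prev is not None and not math.isnan(prev) and not math.isnan(cur) and prev < cur:
        running += 100 if cur >= 3 else 1
    else:
        running = 0  # neither condition satisfied -> reset to 0
    drift.append(running)
    prev = cur

print(drift)  # [0, 0, 100, 0, 0, 1, 2, 102, 202, 0]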

I had a different version of this question where I just wanted to keep the previous value the same, and a very helpful contributor suggested I use the sum function like so:

import pyspark.sql.functions as f
from pyspark.sql.functions import col, when
from pyspark.sql.window import Window

w1 = Window.partitionBy('SEQ_ID').orderBy(col('TIME_STAMP').asc())

prev_MS = f.lag(col('_MS'), 1).over(w1)
df.withColumn('drift_MS',
    f.sum(
        when((col('_MS') < 3) & (prev_MS < col('_MS')), 1)
        .when((col('_MS') >= 3) & (prev_MS < col('_MS')), 100)
        .otherwise(0)
    ).over(w1))

This works perfectly when I want the previous drift_MS value to stay the same if none of the conditions are satisfied. However, I now need to reset it to zero when the conditions are not satisfied. I tried to figure it out, but I keep hitting a wall: I would need to loop back to the previous row iteratively, which is not typically done in PySpark (or big data in general), since it is most efficient with column-wise operations.

The following code does not work for me:

import pyspark.sql.functions as f
from pyspark.sql.functions import col, when
from pyspark.sql.window import Window

w1 = Window.partitionBy('SEQ_ID').orderBy(col('TIME_STAMP').asc())
prev_drift_MS_temp = f.lag(col('drift_MS_temp'), 1).over(w1)
prev_drift_MS = f.lag(col('drift_MS'), 1).over(w1)

prev_MS = f.lag(col('_MS'), 1).over(w1)
df.withColumn('drift_MS_temp',
    f.sum(
        when((col('_MS') < 3) & (prev_MS < col('_MS')), 1)
        .when((col('_MS') >= 3) & (prev_MS < col('_MS')), 100)
        .otherwise(0)
    ).over(w1))\
  .withColumn('drift_MS', when(prev_drift_MS_temp == col('drift_MS_temp'), 0)
  .otherwise(col('drift_MS_temp') - prev_drift_MS_temp + prev_drift_MS))

Any thoughts on how I can go about this?

UPDATE: After racking my brain on this, the best logic I have come up with so far is to derive a difference column from drift_MS and then take a conditional cumulative sum when that difference column is not 0. So something like this:

|SEQ_ID |TIME_STAMP             |_MS               |drift_MS|_diff   |drift   |
+-------+-----------------------+------------------+--------+--------+--------+
|3879826|2021-07-29 11:24:20.525|NaN               |0       |0       |0       |
|3879826|2021-07-29 11:25:56.934|21.262409581399556|0       |0       |0       |
|3879826|2021-07-29 11:27:43.264|27.247600203353613|100     |100     |100     |
|3879826|2021-07-29 11:29:27.613|18.13528511851038 |100     |0       |0       |
|3879826|2021-07-29 11:31:10.512|2.520896614376871 |100     |0       |0       |
|3879826|2021-07-29 11:32:54.252|2.7081931585605541|101     |1       |1       |
|3879826|2021-07-29 11:34:36.995|2.9832290627235505|102     |1       |1       |
|3879826|2021-07-29 11:36:19.128|13.011968111650264|202     |100     |102     |
|3879826|2021-07-29 11:38:10.919|17.762006254598797|302     |100     |202     |
|3879826|2021-07-29 11:40:01.929|1.9661930950977457|302     |0       |0       |

The pseudocode I would envision would look something like this:

import pyspark.sql.functions as f
from pyspark.sql.functions import col, when
from pyspark.sql.window import Window

w1 = Window.partitionBy('SEQ_ID').orderBy(col('TIME_STAMP').asc())
prev_drift_MS = f.lag(col('drift_MS'), 1).over(w1)
prev_diff = f.lag(col('_diff'), 1).over(w1)

prev_MS = f.lag(col('_MS'), 1).over(w1)
df.withColumn('drift_MS',
    f.sum(
        when((col('_MS') < 3) & (prev_MS < col('_MS')), 1)
        .when((col('_MS') >= 3) & (prev_MS < col('_MS')), 100)
        .otherwise(0)
    ).over(w1))\
 .withColumn('_diff', prev_drift_MS - col('drift_MS'))\
 .withColumn('drift', when(prev_diff == 0, 0).otherwise(f.sum(col('drift')).over(w1)))

What is the correct syntax to get it this way?

CodePudding user response:

One option is to create a few helper columns before getting to the final drift_MS column. Let's go through it step by step.

  1. Create column x by applying the increment conditions you defined.
  2. Create column y as a flag on the last non-zero row of each run in x (i.e., just before the values reset to zero).
  3. Create column z to group rows between flags. We can use a cumulative sum of y over the rows between the current row and unbounded following.
  4. Finally, create column drift_MS as the cumulative sum of x, partitioned by SEQ_ID and helper column z and ordered by TIME_STAMP.

Those steps put into code look like this (easier to read as SQL expressions):

import pyspark.sql.functions as F

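# Step 1: raw per-row increment (100, 1 or 0) from the two conditions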
expr_x = F.expr("""
    case 
    when _MS >= 3 AND lag(_MS) over (partition by SEQ_ID  order by TIME_STAMP) < _MS then 100
    when _MS < 3 AND lag(_MS) over (partition by SEQ_ID order by TIME_STAMP) < _MS then 1
    else 0 end  """)

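# Step 2: flag the last non-zero row of each run in x (the row just before x returns to 0)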
expr_y = F.expr("""
    case 
    when x <> 0 and lead(x) over (partition by SEQ_ID order by TIME_STAMP) = 0 then 1
    else null end """)

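# Step 3: group id via a reverse cumulative sum of the flags (current row to unbounded following)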
expr_z = F.expr("""
    sum(y) over(partition by SEQ_ID 
                order by TIME_STAMP 
                rows between 0 preceding and unbounded following) """)

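# Step 4: running sum of x within each (SEQ_ID, z) group, ordered by TIME_STAMP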
expr_drift = F.expr("""
    sum(x) over (partition by SEQ_ID, z 
                 order by TIME_STAMP 
                 rows between unbounded preceding and 0 following) """)

df = (df
      .withColumn('x', expr_x)
      .withColumn('y', expr_y)
      .withColumn('z', expr_z)
      .withColumn("drift_MS", expr_drift))
df.show()

# +-------+--------------------+------------------+---+----+----+--------+
# | SEQ_ID|          TIME_STAMP|               _MS|  x|   y|   z|drift_MS|
# +-------+--------------------+------------------+---+----+----+--------+
# |3879826|2021-07-29 11:24:...|               NaN|  0|null|   2|       0|
# |3879826|2021-07-29 11:25:...|21.262409581399556|  0|null|   2|       0|
# |3879826|2021-07-29 11:27:...|27.247600203353613|100|   1|   2|     100|
# |3879826|2021-07-29 11:29:...| 18.13528511851038|  0|null|   1|       0|
# |3879826|2021-07-29 11:31:...| 2.520896614376871|  0|null|   1|       0|
# |3879826|2021-07-29 11:32:...| 2.708193158560554|  1|null|   1|       1|
# |3879826|2021-07-29 11:34:...|2.9832290627235505|  1|null|   1|       2|
# |3879826|2021-07-29 11:36:...|13.011968111650264|100|null|   1|     102|
# |3879826|2021-07-29 11:38:...|  17.7620062545988|100|   1|   1|     202|
# |3879826|2021-07-29 11:40:...|1.9661930950977458|  0|null|null|       0|
# +-------+--------------------+------------------+---+----+----+--------+
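
If you don't want to keep the helper columns in the final result, a small optional follow-up (not strictly part of the answer) is to drop them once drift_MS is computed:

# drop the intermediate helper columns x, y and z
df = df.drop('x', 'y', 'z')
df.show(truncate=False)

The flag in y marks the last non-zero row of each run, so the reverse running sum in z starts a new group right after every run ends; that is what keeps the cumulative sum of x from carrying one run's total into the next.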