I have a dataframe that contains groups and percentages
| Group | A % | B % | Target % |
| ----- | --- | --- | -------- |
| A | .05 | .85 | 1.0 |
| A | .07 | .75 | 1.0 |
| A | .08 | .95 | 1.0 |
| B | .03 | .80 | 1.0 |
| B | .05 | .83 | 1.0 |
| B | .04 | .85 | 1.0 |
For each row, within its Group, I want to find the array of values from column B % that, when summed with that row's A % value, stay less than or equal to Target %. The desired result:
| Group | A % | B % | Target % | SumArray |
| ----- | --- | --- | -------- | ------------ |
| A | .05 | .85 | 1.0 | [.85,.75,.95]|
| A | .07 | .75 | 1.0 | [.85,.75] |
| A | .08 | .95 | 1.0 | [.85,.75] |
| B | .03 | .80 | 1.0 | [.80,.83,.85]|
| B | .05 | .83 | 1.0 | [.80,.83,.85]|
| B | .04 | .85 | 1.0 | [.80,.83,.85]|
I'd like to be able to use PySpark for this problem. Any ideas how to approach this?
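For reference, the sample data can be built like this (a minimal sketch; the percentage column names are simplified to A, B and Target for brevity):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Sample data from the tables above, with column names
# simplified from "A %", "B %", "Target %" to A, B, Target.
df = spark.createDataFrame(
    [
        ("A", 0.05, 0.85, 1.0),
        ("A", 0.07, 0.75, 1.0),
        ("A", 0.08, 0.95, 1.0),
        ("B", 0.03, 0.80, 1.0),
        ("B", 0.05, 0.83, 1.0),
        ("B", 0.04, 0.85, 1.0),
    ],
    ["Group", "A", "B", "Target"],
)
```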
Answer:
You can use the collect_list function over a window partitioned by Group to get an array of the B % values for each group, then filter the resulting array using your condition A + B <= Target (column names simplified to A, B and Target, as above):
```python
from pyspark.sql import Window
import pyspark.sql.functions as F

df2 = df.withColumn(
    # collect every B value of the row's Group into an array
    "SumArray",
    F.collect_list(F.col("B")).over(Window.partitionBy("Group"))
).withColumn(
    # keep only the array elements x that satisfy x + A <= Target for this row
    "SumArray",
    F.expr("filter(SumArray, x -> x + A <= Target)")
)
df2.show()
# +-----+----+----+------+------------------+
# |Group|   A|   B|Target|          SumArray|
# +-----+----+----+------+------------------+
# |    B|0.03| 0.8|   1.0| [0.8, 0.83, 0.85]|
# |    B|0.05|0.83|   1.0| [0.8, 0.83, 0.85]|
# |    B|0.04|0.85|   1.0| [0.8, 0.83, 0.85]|
# |    A|0.05|0.85|   1.0|[0.85, 0.75, 0.95]|
# |    A|0.07|0.75|   1.0|      [0.85, 0.75]|
# |    A|0.08|0.95|   1.0|      [0.85, 0.75]|
# +-----+----+----+------+------------------+
```
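If you are on Spark 3.1 or later, the same array filter can also be expressed with the DataFrame API (pyspark.sql.functions.filter) instead of a SQL string. A sketch under the same column-name assumptions:

```python
import pyspark.sql.functions as F
from pyspark.sql import Window

df2 = (
    df.withColumn(
        # array of all B values for the row's Group
        "SumArray",
        F.collect_list("B").over(Window.partitionBy("Group"))
    )
    .withColumn(
        # keep elements x with x + A <= Target, written as a Python lambda
        "SumArray",
        F.filter("SumArray", lambda x: x + F.col("A") <= F.col("Target"))
    )
)
```

Both variants produce the same SumArray column; the choice is mainly whether you prefer the SQL lambda syntax or Python lambdas.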