Finding common words across rows using PySpark/pandas


I have a pipe-delimited text file as below:

person_id | category | notes
1|A|He bought cat
1|A|He bought dog
1|B|He has hen
2|A|Switzerland Australia
2|A|Australia

I want to group by person_id and category and keep only the words that are repeated in all the rows of each group.

Expected output:

1|A|He bought
1|B|He has hen
2|A|Australia

I have computed the word counts per person_id and category using group by, but I am stuck on producing the output above.

I got the word counts below using an RDD word count and Spark SQL with group by:

person_id | category | notes
1|A|He(2) bought(2) cat(1) dog(1)
1|B|He(1) has(1) hen(1)
2|A|Switzerland(1) Australia(2)
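
For illustration, a minimal sketch of such a per-group RDD word count, assuming the file has already been parsed into a DataFrame df with the three columns above:

counts = (df.rdd
          .flatMap(lambda r: [((r.person_id, r.category, w), 1)
                              for w in r.notes.split(" ")])
          .reduceByKey(lambda a, b: a + b))
# each element is ((person_id, category, word), count), e.g. ((1, 'A', 'He'), 2)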

CodePudding user response:

You can achieve that using Spark array functions:

  1. split the column notes to get array of words
  2. group by person_id and category to collect the list of words
  3. filter the resulting array of distinct words, keeping only those that appear in every collected sub-array, using the higher-order function filter
import pyspark.sql.functions as F

df1 = df.withColumn("notes", F.split("notes", " ")) \
    .groupBy("person_id", "category") \
    .agg(F.collect_list(F.col("notes")).alias("notes")) \
    .withColumn("w", F.array_distinct(F.flatten("notes"))) \
    .withColumn("notes", F.array_join(F.expr("filter(w, x -> size(filter(notes, y -> array_contains(y, x))) = size(notes))"), " ")) \
    .drop("w")

df1.show(truncate=False)
#+---------+--------+----------+
#|person_id|category|notes     |
#+---------+--------+----------+
#|1        |A       |He bought |
#|1        |B       |He has hen|
#|2        |A       |Australia |
#+---------+--------+----------+
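
An alternative sketch, assuming Spark 2.4+, that folds the collected arrays with aggregate and array_intersect instead of the nested filter; word order follows the first note, since array_intersect keeps the order of its first argument:

df2 = df.withColumn("notes", F.split("notes", " ")) \
    .groupBy("person_id", "category") \
    .agg(F.collect_list("notes").alias("notes")) \
    .withColumn("notes", F.array_join(F.expr(
        "aggregate(slice(notes, 2, size(notes) - 1), notes[0], (acc, x) -> array_intersect(acc, x))"), " "))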

CodePudding user response:

You can split the notes string into words and then explode the words. Then count the number of notes per person_id and category, and the occurrences of each word per person_id and category. If the two counts are equal, the word appears in every row of the group, and the matching words can be reassembled with collect_list.


from pyspark.sql import functions as F
from pyspark.sql import Window

data = [(1, "A", "He bought cat"),
(1, "A", "He bought dog"),
(1, "B", "He has hen"),
(2, "A", "Switzerland Australia"),
(2, "A", "Australia"),]

df = spark.createDataFrame(data, ("person_id", "category", "notes", ))

window_spec_category = Window.partitionBy("person_id", "category")

df_word = df.withColumn("category_count", F.count("*").over(window_spec_category))\
            .select("person_id", "category", "category_count",  # keep the count for the filter below
                    F.posexplode(F.split(F.col("notes"), " ")).alias("pos", "word"))

window_spec_word = Window.partitionBy("person_id", "category", "word")

matching_words = df_word.withColumn("word_count", F.count("*").over(window_spec_word))\
        .withColumn("rn", F.row_number().over(window_spec_word.orderBy(F.lit(None))))\
        .filter(F.col("word_count") == F.col("category_count"))\
        .filter(F.col("rn") == F.lit(1))\
        .drop("rn")

window_spec_collect = window_spec_category.orderBy("pos").rowsBetween(Window.unboundedPreceding, Window.unboundedFollowing)

matching_words.withColumn("result", F.concat_ws(" ", F.collect_list("word").over(window_spec_collect)))\
              .withColumn("rn", F.row_number().over(window_spec_category.orderBy(F.lit(None))))\
              .filter(F.col("rn") == F.lit(1))\
              .select("person_id", "category", "result")\
              .show()

Output

+---------+--------+----------+
|person_id|category|    result|
+---------+--------+----------+
|        1|       A| He bought|
|        1|       B|He has hen|
|        2|       A| Australia|
+---------+--------+----------+
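
As an aside, the last two windowed steps can be replaced by an ordinary groupBy: sorting the collected (pos, word) structs keeps the words in their original order. A sketch reusing matching_words from above:

result = matching_words.groupBy("person_id", "category") \
    .agg(F.array_sort(F.collect_list(F.struct("pos", "word"))).alias("ws")) \
    .select("person_id", "category",
            F.concat_ws(" ", F.col("ws.word")).alias("result"))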

CodePudding user response:

Another option is to define a UDF that finds the intersection of the word lists:

from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()
data = [
    {"person_id": 1, "category": "A", "notes": "He bought cat"},
    {"person_id": 1, "category": "A", "notes": "He bought dog"},
    {"person_id": 1, "category": "B", "notes": "He has hen"},
    {"person_id": 2, "category": "A", "notes": "Switzerland Australia"},
    {"person_id": 2, "category": "A", "notes": "Australia"},
]


def common(x):
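    # split each note into words, intersect the word sets, and order
    # the surviving words by their position in the first note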
    l = [i.split() for i in x]
    return " ".join(sorted(set.intersection(*map(set, l)), key=l[0].index))

df = spark.createDataFrame(data)
df = df.groupBy(["person_id", "category"]).agg(F.collect_list("notes").alias("b"))
df = df.withColumn("result", F.udf(common)(F.col("b")))
df.select("person_id", "category", "result").show(truncate=False)

Result:

+---------+--------+----------+
|person_id|category|result    |
+---------+--------+----------+
|1        |A       |He bought |
|1        |B       |He has hen|
|2        |A       |Australia |
+---------+--------+----------+
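
Since common is plain Python, it can be sanity-checked without Spark:

print(common(["He bought cat", "He bought dog"]))  # -> He bought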