Get last / delimited value from Dataframe column in PySpark


I am trying to get the last string after '/'.

The column can look like this: "lala/mae.da/rg1/zzzzz" (not necessary only 3 /), and I'd like to return: zzzzz

In SQL and Python it's very easy, but I would like to know if there is a way to do it in PySpark.

Solving it in Python:

original_string = "lala/mae.da/rg1/zzzzz"
last_char_index = original_string.rfind("/")
new_string = original_string[last_char_index + 1:]

or directly:

new_string = original_string.rsplit('/', 1)[1]

And in SQL:

RIGHT(MyColumn, CHARINDEX('/', REVERSE(MyColumn))-1)
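As a quick sanity check, both Python one-liners above agree on the example string (plain Python, no Spark needed). One edge case worth noting: `rsplit('/', 1)[-1]` also works for strings that contain no `/` at all, whereas `[1]` raises an `IndexError` there:

```python
original_string = "lala/mae.da/rg1/zzzzz"

# rfind-based version: index of the last '/', then slice past it
last_char_index = original_string.rfind("/")
print(original_string[last_char_index + 1:])  # zzzzz

# rsplit-based version; [-1] also survives strings without any '/'
print(original_string.rsplit('/', 1)[-1])     # zzzzz
print("fefe".rsplit('/', 1)[-1])              # fefe ([1] would raise IndexError here)
```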

For PySpark I was thinking something like this:

df = df.select(col("MyColumn").rsplit('/', 1)[1])

but I get the following error: TypeError: 'Column' object is not callable, and I'm not even sure Spark allows me to do rsplit at all.

Do you have any suggestion on how can I solve this?

CodePudding user response:

import pandas as pd
from pyspark.sql import SparkSession, functions as F

# In a notebook or pyspark shell `spark` already exists; otherwise create it
spark = SparkSession.builder.getOrCreate()

df = pd.DataFrame({"MyColumn": ["lala/mae.da/rg1/zzzzz", "fefe", "fe/fe/frs/fs/fe32/4"]})

df = spark.createDataFrame(df)

df.show(truncate=False)

# output
+---------------------+
|MyColumn             |
+---------------------+
|lala/mae.da/rg1/zzzzz|
|fefe                 |
|fe/fe/frs/fs/fe32/4  |
+---------------------+
(
    df
    .withColumn("NewCol", F.split("MyColumn", "/"))
    .withColumn("NewCol", F.col("NewCol")[F.size("NewCol") - 1])
    .show()
)

# output
+--------------------+------+
|            MyColumn|NewCol|
+--------------------+------+
|lala/mae.da/rg1/z...| zzzzz|
|                fefe|  fefe|
| fe/fe/frs/fs/fe32/4|     4|
+--------------------+------+

CodePudding user response:

Since Spark 2.4, you can use the split built-in function to split your string, then the element_at built-in function to get the last element of the resulting array, as follows:

from pyspark.sql import functions as F

df = df.select(F.element_at(F.split(F.col("MyColumn"), '/'), -1))
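If you want to see what this split/element_at combination does without spinning up a Spark session, here is a plain-Python sketch of the same semantics (an analogy, not Spark code): splitting a non-null string always yields at least one element, so index -1 always exists, and rows with no `/` come back unchanged.

```python
def last_after_delim(s: str, delim: str = "/") -> str:
    """Mimic F.element_at(F.split(col, delim), -1):
    split the string, then take the last element (index -1)."""
    return s.split(delim)[-1]

# Same three rows as the answer's example DataFrame
for s in ["lala/mae.da/rg1/zzzzz", "fefe", "fe/fe/frs/fs/fe32/4"]:
    print(last_after_delim(s))
# zzzzz
# fefe
# 4
```

Note that Spark's `split` treats the delimiter as a regular expression; `/` is not a regex metacharacter, so it behaves the same as a literal split here.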

CodePudding user response:

Adding another solution even though @Pav3k's answer is great: element_at gets an item at a specific position in an array, with negative indices counting from the end:

from pyspark.sql import functions as F
df = (
    df
    .withColumn('my_col_split', F.split(df['MyColumn'], '/'))
    .select('MyColumn', F.element_at(F.col('my_col_split'), -1).alias('rsplit'))
)

>>> df.show(truncate=False)

+---------------------+------+
|MyColumn             |rsplit|
+---------------------+------+
|lala/mae.da/rg1/zzzzz|zzzzz |
|fefe                 |fefe  |
|fe/fe/frs/fs/fe32/4  |4     |
+---------------------+------+

Pav3k's DF used.
