I have a string column in a DataFrame with accented values, like
'México', 'Albânia', 'Japão'
How can I replace the accented letters to get this:
'Mexico', 'Albania', 'Japao'
I tried many of the solutions available on Stack Overflow, like this one:
import unicodedata

def strip_accents(s):
    return ''.join(c for c in unicodedata.normalize('NFD', s)
                   if unicodedata.category(c) != 'Mn')
But it disappointingly returns:
>>> strip_accents('México')
'M?xico'
CodePudding user response:
In PySpark, you can create a pandas_udf, which is vectorized, so it's preferred over a regular udf.
Normalizing with str.normalize and then round-tripping through encode/decode seems to be the best way to do it in pandas, so we can wrap that logic in a pandas_udf for a PySpark application.
from pyspark.sql import functions as F
import pandas as pd

@F.pandas_udf('string')
def strip_accents(s: pd.Series) -> pd.Series:
    # Decompose each character (NFKD), drop the non-ASCII combining marks, then decode back to str
    return s.str.normalize('NFKD').str.encode('ascii', errors='ignore').str.decode('utf-8')
Test:
df = df.withColumn('country2', strip_accents('country'))
df.show()
# +-------+--------+
# |country|country2|
# +-------+--------+
# | México|  Mexico|
# |Albânia| Albania|
# |  Japão|   Japao|
# +-------+--------+
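For comparison, here is the same logic as a regular, row-at-a-time udf (a minimal sketch; strip_accents_udf is a hypothetical name, not from the original answer). It produces the same result but Python is invoked once per value rather than once per pd.Series batch, which is why the vectorized pandas_udf above is preferred:

from pyspark.sql import functions as F
import unicodedata

# Non-vectorized alternative: called once per row, so it is slower on large data
@F.udf('string')
def strip_accents_udf(s):
    if s is None:
        return None
    return ''.join(c for c in unicodedata.normalize('NFKD', s)
                   if unicodedata.category(c) != 'Mn')

df = df.withColumn('country2', strip_accents_udf('country'))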
CodePudding user response:
You can use translate:
df = spark.createDataFrame(
    [
        ('1', 'Japão'),
        ('2', 'Irã'),
        ('3', 'São Paulo'),
        ('5', 'Canadá'),
        ('6', 'Tókio'),
        ('7', 'México'),
        ('8', 'Albânia')
    ],
    ["id", "Local"]
)

df.show(truncate=False)
+---+---------+
|id |Local    |
+---+---------+
|1  |Japão    |
|2  |Irã      |
|3  |São Paulo|
|5  |Canadá   |
|6  |Tókio    |
|7  |México   |
|8  |Albânia  |
+---+---------+
from pyspark.sql import functions as F

df\
    .withColumn('Loc_norm', F.translate('Local',
                                        'ãäöüẞáäčďéěíĺľňóôŕšťúůýžÄÖÜẞÁÄČĎÉĚÍĹĽŇÓÔŔŠŤÚŮÝŽ',
                                        'aaousaacdeeillnoorstuuyzAOUSAACDEEILLNOORSTUUYZ'))\
    .show(truncate=False)
+---+---------+---------+
|id |Local    |Loc_norm |
+---+---------+---------+
|1  |Japão    |Japao    |
|2  |Irã      |Ira      |
|3  |São Paulo|Sao Paulo|
|5  |Canadá   |Canada   |
|6  |Tókio    |Tokio    |
|7  |México   |Mexico   |
|8  |Albânia  |Albânia  |
+---+---------+---------+
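Note that 'â' (and 'Â') are missing from the matching string above, which is why Albânia comes through unchanged in the last row. A minimal sketch of a fix, simply appending the missing pair to both arguments (the extended strings below are my own, not from the original answer):

from pyspark.sql import functions as F

# 'â'/'Â' were absent from the original mapping, so Albânia was left as-is;
# adding them to both the matching and replacement strings fixes that row
df\
    .withColumn('Loc_norm', F.translate('Local',
                                        'ãâäöüẞáäčďéěíĺľňóôŕšťúůýžÂÄÖÜẞÁÄČĎÉĚÍĹĽŇÓÔŔŠŤÚŮÝŽ',
                                        'aaaousaacdeeillnoorstuuyzAAOUSAACDEEILLNOORSTUUYZ'))\
    .show(truncate=False)

For full coverage of arbitrary accented characters, the pandas_udf approach in the first answer is safer than enumerating characters by hand.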