Replace values of Pandas DataFrame columns based upon substrings of symbols in a list


I am trying to remove some erroneous data from two columns in a DataFrame. The columns are subject to corruption, where symbols occur within the columns' values. I want to check all values in the two columns and replace any value that contains a symbol with ''.

For example:

import pandas as pd

bad_chars = [')', ',', '@', '/', '!', '&', '*', '.', '_', ' ']


d = {'p1' : [1,2,3,4,5,6],
    'p2' : ['abc*', 'abc@', 'zxya', '&sdf', 'p xx', 'abcd'],
    'p3' : ['abc', 'abc.', 'zxya', '&sdf', 'p xx', 'abcd']}

df = pd.DataFrame(d) 

    p1  p2      p3
0   1   abc*    abc
1   2   abc@    abc.
2   3   zxya    zxya
3   4   &sdf    &sdf
4   5   p xx    p xx
5   6   abcd    abcd

I have been trying, unsuccessfully, to use list comprehensions to iterate over the bad_chars variable and replace the values in columns p2 and p3 with an empty string '', resulting in something like this:

    p1  p2      p3
0   1           abc
1   2           
2   3   zxya    zxya
3   4       
4   5       
5   6   abcd    abcd

Once I have achieved this, I would like to remove any row containing an empty cell in column p2, column p3, or both.

    p1  p2      p3
0   3   zxya    zxya
1   6   abcd    abcd

CodePudding user response:

Here you go:

import pandas as pd

# regex-escaped versions of the question's bad characters (raw strings avoid
# invalid-escape warnings in newer Python versions)
bad_chars = [r'\)', r'\,', r'\@', r'\/', r'\!', r'\&', r'\*', r'\.', r'\_', r'\ ']


d = {'p1' : [1,2,3,4,5,6],
    'p2' : ['abc*', 'abc@', 'zx_ya', '&sdf', 'p xx', 'abcd'],
    'p3' : ['abc', 'abc.', 'zxya', '&sdf', 'p xx', 'abcd']}

df = pd.DataFrame(d)
# blank out (set to None) any p2/p3 value that contains one of the bad characters
df.loc[df['p2'].str.contains('|'.join(bad_chars)), 'p2'] = None
df.loc[df['p3'].str.contains('|'.join(bad_chars)), 'p3'] = None
# then drop every row where p2 or p3 is now missing
df = df.dropna(subset=['p2', 'p3'])
df

Note that I have changed bad_chars (escaped each character with a backslash, using raw strings), because str.contains treats the joined pattern as a regular expression, so characters like * and . would otherwise be interpreted as regex metacharacters.
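If you would rather keep the question's unescaped bad_chars list, the escaping can also be delegated to re.escape; this is just a sketch of the same str.contains idea, not part of the original answer:

import re
import pandas as pd

bad_chars = [')', ',', '@', '/', '!', '&', '*', '.', '_', ' ']

d = {'p1' : [1,2,3,4,5,6],
    'p2' : ['abc*', 'abc@', 'zxya', '&sdf', 'p xx', 'abcd'],
    'p3' : ['abc', 'abc.', 'zxya', '&sdf', 'p xx', 'abcd']}
df = pd.DataFrame(d)

# re.escape adds the backslashes for regex metacharacters such as * and .
pattern = '|'.join(re.escape(c) for c in bad_chars)

# drop every row where p2 or p3 contains a bad character, then reindex
mask = df['p2'].str.contains(pattern) | df['p3'].str.contains(pattern)
df = df[~mask].reset_index(drop=True)
print(df)

With the question's original data this keeps only the zxya and abcd rows, which matches the desired output.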

CodePudding user response:

Another option for you to try.

import pandas as pd

bad_chars = [')', ',', '@', '/', '!', '&', '*', '.', '_', ' ']

d = {'p1' : [1,2,3,4,5,6],
    'p2' : ['abc*', 'abc@', 'zxya', '&sdf', 'p xx', 'abcd'],
    'p3' : ['abc', 'abc.', 'zxya', '&sdf', 'p xx', 'abcd']}
df = pd.DataFrame(d)


for i in df.index:
    # build a True/False list checking each character of the cell's
    # contents against bad_chars, using a list comprehension
    p2_chks = [char in bad_chars for char in df.at[i,"p2"]]
    p3_chks = [char in bad_chars for char in df.at[i,"p3"]]

    # if True appears in either of the check lists,
    # then delete the row
    if (True in p2_chks) or (True in p3_chks):
        print("{}: p2 or p3 contains a bad character".format(i))
        df = df.drop(i)

# Reindex the df rows. Use drop=True so the old index
# is not added back as a new column
df = df.reset_index(drop=True)
print(df)
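The row-by-row drop above can also be collapsed into a single boolean keep list built with a list comprehension (which is what the question was reaching for). This is a sketch, not part of the original answer, and it assumes df is the freshly built DataFrame from pd.DataFrame(d) above, before any rows have been dropped:

bad_set = set(bad_chars)

# a row is kept only when none of the characters in p2 or p3 is a bad character
keep = [not (bad_set & set(p2)) and not (bad_set & set(p3))
        for p2, p3 in zip(df['p2'], df['p3'])]
df = df[keep].reset_index(drop=True)
print(df)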

CodePudding user response:

Please try this:

import pandas as pd
import numpy as np
bad_chars = [')', ',', '@', '/', '!', '&', '*', '.', '_', ' ']


d = {'p1' : [1,2,3,4,5,6],
    'p2' : ['abc*', 'abc@', 'zxya', '&sdf', 'p xx', 'abcd'],
    'p3' : ['abc', 'abc.', 'zxya', '&sdf', 'p xx', 'abcd']}

df = pd.DataFrame(d)
def check_char(text):
    # return NaN as soon as any bad character is found,
    # otherwise return the text unchanged
    for char in bad_chars:
        if char in text:
            return np.nan
    return text

check_cols = ['p2','p3']
for col in check_cols:
    df[col] = df[col].apply(check_char)
df = df.dropna(subset=check_cols)
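
If you also want the fresh 0-based index shown in the question's desired output, one more step can be appended (assuming the df produced by the snippet above):

df = df.reset_index(drop=True)
print(df)

which should print something like:

   p1    p2    p3
0   3  zxya  zxya
1   6  abcd  abcd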