Pandas drop row based on groupby AND partial string match

Time:03-14

I have a large pandas DataFrame with numerous columns. I want to group by serial number and, where there are duplicates, keep the row whose product ID ends in -RF. The first part I can achieve with drop_duplicates(subset='Serial Number'), however I'm at a loss as to how to combine this with keeping/dropping rows based on a regex ('-RF$'). How can I achieve this?

Input:

Serial Number Product ID
ABC1745AABC ABC-SUP2E-RF
ABC1745AABC ABC-SUP2E
ABC1745AAFF ABC-SUP2E
ABC1745AAFE ABC-SUP2E

Ultimately, I want to be left with something like this (output):

Serial Number Product ID
ABC1745AABC ABC-SUP2E-RF
ABC1745AAFF ABC-SUP2E
ABC1745AAFE ABC-SUP2E

Data:

{'Serial Number': ['ABC1745AABC', 'ABC1745AABC', 'ABC1745AAFF', 'ABC1745AAFE'],
 'Product ID': ['ABC-SUP2E-RF', 'ABC-SUP2E', 'ABC-SUP2E', 'ABC-SUP2E']}
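For reproducibility, the dictionary above can be loaded straight into a DataFrame (a minimal setup sketch; the variable name df is an assumption):

```python
import pandas as pd

# Build the example DataFrame from the question's data
df = pd.DataFrame({
    'Serial Number': ['ABC1745AABC', 'ABC1745AABC', 'ABC1745AAFF', 'ABC1745AAFE'],
    'Product ID': ['ABC-SUP2E-RF', 'ABC-SUP2E', 'ABC-SUP2E', 'ABC-SUP2E'],
})
print(df)
```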

CodePudding user response:

Create a boolean mask that is True for rows whose serial number is unique, or whose product ID ends with "-RF", and False otherwise:

mask = (
    df.groupby('Serial Number')['Product ID'].transform('count').eq(1)
    | df['Product ID'].str.endswith('-RF')
)
out = df[mask]

Output:

  Serial Number    Product ID
0   ABC1745AABC  ABC-SUP2E-RF
2   ABC1745AAFF     ABC-SUP2E
3   ABC1745AAFE     ABC-SUP2E
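End to end, the mask approach can be sketched as follows (assuming the sample data from the question; the intermediate names are illustrative):

```python
import pandas as pd

df = pd.DataFrame({
    'Serial Number': ['ABC1745AABC', 'ABC1745AABC', 'ABC1745AAFF', 'ABC1745AAFE'],
    'Product ID': ['ABC-SUP2E-RF', 'ABC-SUP2E', 'ABC-SUP2E', 'ABC-SUP2E'],
})

# True where the serial number occurs only once in the frame
is_unique = df.groupby('Serial Number')['Product ID'].transform('count').eq(1)
# True where the product ID ends in -RF
is_rf = df['Product ID'].str.endswith('-RF')

# Keep rows that are either unique or the -RF variant of a duplicate
out = df[is_unique | is_rf]
print(out)
```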

CodePudding user response:

You could add a column marking rows that end with "-RF", then sort the values so those rows sit at the top of each group, and finally group and take the first row of each group:

df["RF"] = df["Product ID"].str.endswith("-RF")
df = df.sort_values(["Serial Number", "RF"], ascending=False)
output = df.groupby("Serial Number", as_index=False).first()[["Serial Number", "Product ID"]]

Note that as_index=False is needed here: without it, "Serial Number" becomes the index after first() and selecting it as a column raises a KeyError.

Output:

  Serial Number    Product ID
0   ABC1745AABC  ABC-SUP2E-RF
1   ABC1745AAFE     ABC-SUP2E
2   ABC1745AAFF     ABC-SUP2E
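An equivalent way to express "sort, then take the first row per group" is sort_values followed by drop_duplicates, which also preserves the original row index (an alternative sketch, not the answer's original code):

```python
import pandas as pd

df = pd.DataFrame({
    'Serial Number': ['ABC1745AABC', 'ABC1745AABC', 'ABC1745AAFF', 'ABC1745AAFE'],
    'Product ID': ['ABC-SUP2E-RF', 'ABC-SUP2E', 'ABC-SUP2E', 'ABC-SUP2E'],
})

# Sort so that, within each serial number, the -RF row comes first,
# then keep only the first occurrence of each serial number
df['RF'] = df['Product ID'].str.endswith('-RF')
out = (df.sort_values(['Serial Number', 'RF'], ascending=False)
         .drop_duplicates('Serial Number')[['Serial Number', 'Product ID']])
print(out)
```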