Find column's negative value in another column - dataframe


I have this code:

test = {"number": ['1555','1666','1777', '1888', '1999'],
        "order_amount": ['100.00','200.00','-200.00', '300.00', '-150.00'],
        "number_of_refund": ['','','1666', '', '1888']
    }

df = pd.DataFrame(test)

Which returns the following dataframe:

  number order_amount number_of_refund
0   1555       100.00                 
1   1666       200.00                 
2   1777      -200.00             1666
3   1888       300.00                 
4   1999      -150.00             1888    

I want to remove both the order and its refund entry if:

  • "number_of_refund" matches a number from the "number" column (the original order may not be in the dataframe at all, e.g. if the order was placed last month and the refund happened during the current month), and
  • the matched refund's amount is the exact negative of that order's amount (here order 1666 has 200.00 and refund 1666 has -200.00, so both rows should be removed)

So the result in this case should be:

  number order_amount number_of_refund
0   1555       100.00                 
3   1888       300.00                 
4   1999      -150.00             1888

How do I check whether a value in one column appears in another column, but with the opposite (negative) amount?

CodePudding user response:

IIUC, you can use a boolean indexing approach:

# ensure numeric values
df['order_amount'] = pd.to_numeric(df['order_amount'], errors='coerce')

# is the row a refund?
m1 = df['number_of_refund'].ne('')
# get mapping of refunds
s = df[m1].set_index('number_of_refund')['order_amount']

# get reimbursements and find which ones will equal the original value
reimb = df['number'].map(s)
m2 = reimb.eq(-df['order_amount'])
m3 = df['number_of_refund'].isin(df.loc[m2, 'number'])

# keep rows that do not match any m2 or m3 mask
df = df[~(m2|m3)]

output:

  number  order_amount number_of_refund
0   1555         100.0                 
3   1888         300.0                 
4   1999        -150.0             1888
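
For reference, here is the same idea wrapped in a small self-contained helper (the name drop_cancelled_orders is just illustrative, and it assumes each refund number appears only once in the frame):

import pandas as pd

def drop_cancelled_orders(df):
    out = df.copy()
    # make sure amounts are numeric so -200.00 cancels 200.00 exactly
    out['order_amount'] = pd.to_numeric(out['order_amount'], errors='coerce')

    # map each refunded order number to its (negative) refund amount
    refunds = out[out['number_of_refund'].ne('')].set_index('number_of_refund')['order_amount']

    # an order is cancelled when a refund with the exact opposite amount exists
    cancelled_order = out['number'].map(refunds).eq(-out['order_amount'])
    # the refund row itself goes if it points at one of those cancelled orders
    cancelled_refund = out['number_of_refund'].isin(out.loc[cancelled_order, 'number'])

    return out[~(cancelled_order | cancelled_refund)]

df = drop_cancelled_orders(pd.DataFrame(test))

A refund whose amount does not exactly cancel its order (like 1888 here) never sets the mask, so both rows are kept, and a refund whose original order is not in the frame at all (the last-month case from the question) never matches either and is kept as well.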

CodePudding user response:

Here's another way to do it. I create a unique string by concatenating the absolute order_amount (i.e., without the negative sign) with the number_of_refund (or with the number when there is no refund), then drop both rows of any duplicate found.

df['unique'] = df.apply(lambda x: x['order_amount'].replace('-','') + x['number'] if x['number_of_refund']=='' else x['order_amount'].replace('-','') + x['number_of_refund'], axis=1)
df['unique2'] = df['number_of_refund'].mask(df['number_of_refund'].eq(''), df['number'])
print(df)
  number order_amount number_of_refund      unique unique2
0   1555       100.00                   100.001555    1555
1   1666       200.00                   200.001666    1666
2   1777      -200.00             1666  200.001666    1666
3   1888       300.00                   300.001888    1888
4   1999      -150.00             1888  150.001888    1888

The duplicate rows are easily identified and ready to be dropped (along with the helper columns unique and unique2):

df = df.drop_duplicates(['unique', 'unique2'], keep=False).drop(columns=['unique', 'unique2'])
print(df)
  number order_amount number_of_refund
0   1555       100.00                 
3   1888       300.00                 
4   1999      -150.00             1888
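
The same keys can also be built without apply, using vectorized string operations (a sketch assuming order_amount is still a string column, as in the original data):

base = df['number_of_refund'].mask(df['number_of_refund'].eq(''), df['number'])
df['unique'] = df['order_amount'].str.replace('-', '', regex=False) + base
df['unique2'] = base

This produces the same unique/unique2 columns, so the drop_duplicates call above works unchanged.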