Concatenating dataframes horizontally


I have defined a dictionary whose values are actually dataframes.

import pandas as pd

# Creating a dictionary
data = {'Value':[0,0,0]}
kernel_df = pd.DataFrame(data, index=['M1','M2','M3'])
dict = {'dummy':kernel_df}
# dummy  ->          Value
#               M1      0
#               M2      0
#               M3      0

Then, with the following code, I am trying to process a large dataframe in batches and concatenate some of the columns.

df = pd.read_csv('test.batch.csv')
for i in range(0, len(df), 3):
    print("\n------BATCH BEGIN")
    batch_df = df.iloc[i:i+3]
    name = batch_df.loc[i].at["Name"]
    values = batch_df.loc[:,["Value"]]
    print(name)
    print(values)
    print("------BATCH END")
    if name in dict:
        # Append values to the existing key
        dict[name] = pd.concat( [dict[name],values], axis=1 )   # <-- Is not correct
    else:
        # Create a new pair in dictionary
        dict[name] = values

According to the output, everything runs fine, but as you can see, the concatenation doesn't look right. In fact, I want to append the "Value" column to the existing one horizontally.

   ID Name Metric  Value
0   0   K1     M1     10
1   0   K1     M2      5
2   0   K1     M3     10
3   1   K2     M1     20
4   1   K2     M2     10
5   1   K2     M3     15
6   2   K1     M1      2
7   2   K1     M2      2
8   2   K1     M3      2

------BATCH BEGIN
K1
   Value
0     10
1      5
2     10
------BATCH END

------BATCH BEGIN
K2
   Value
3     20
4     10
5     15
------BATCH END

------BATCH BEGIN
K1
   Value
6      2
7      2
8      2
------BATCH END
{'dummy':     Value
M1      0
M2      0
M3      0, 'K1':    Value  Value
0   10.0    NaN
1    5.0    NaN
2   10.0    NaN
6    NaN    2.0
7    NaN    2.0
8    NaN    2.0, 'K2':    Value
3     20
4     10
5     15}

How can I fix that?

UPDATE: I expect to see the following dictionary:

{'dummy':     Value
M1      0
M2      0
M3      0, 'K1':    Value  Value
0   10.0    2.0
1    5.0    2.0
2   10.0    2.0, 'K2':    Value
3     20
4     10
5     15}


CodePudding user response:

There are several approaches. Perhaps the safest is to group by Name and then group each subgroup by ID:

for name, df_group in df.groupby('Name'):
    d[name] = pd.concat(
        [g.reset_index(drop=True) for _, g in df_group.groupby('ID')['Value']],
        axis=1
    )

Notice we reset_index on each subgroup so that the indexes align correctly.

This approach is safest since we're grouping by ID rather than grabbing every 3 rows without checks.
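
To make the index-alignment point concrete, here is a minimal sketch (not from the original answer; the two frames simply mirror the question's K1 batches): pd.concat with axis=1 lines rows up by index label, so chunks that still carry their original labels 0-2 and 6-8 end up staggered with NaN rather than side by side, which is exactly what the question observed.

import pandas as pd

# Two chunks that keep their original row labels, like the question's K1 batches.
first = pd.DataFrame({'Value': [10, 5, 10]}, index=[0, 1, 2])
second = pd.DataFrame({'Value': [2, 2, 2]}, index=[6, 7, 8])

# Labels 0-2 and 6-8 do not overlap, so the rows are staggered and padded with NaN.
print(pd.concat([first, second], axis=1))

# After reset_index(drop=True) both chunks share labels 0-2 and line up row by row.
print(pd.concat(
    [first.reset_index(drop=True), second.reset_index(drop=True)],
    axis=1
))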


We can also create a RangeIndex and floor divide by 3 so that we group every three rows together regardless of the ID column:

for name, df_group in df.groupby('Name'):
    df_group.index = pd.RangeIndex(len(df_group)) // 3
    d[name] = pd.concat([
        g.reset_index(drop=True)
        for _, g in df_group.groupby(level=0)['Value']
    ], axis=1)

The same reset_index of each subgroup happens here to allow row alignment. This is marginally less safe, but it will not fail on a df_group whose length does not divide evenly into 3-row chunks.
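
To show what that index trick does, here is a small sketch (reusing the six K1 values from the sample data): the floor-divided RangeIndex labels every three consecutive rows with the same value, so the level-0 groupby then splits the frame into 3-row chunks.

import pandas as pd

# The six K1 rows from the sample data, in their original order.
df_group = pd.DataFrame({'Value': [10, 5, 10, 2, 2, 2]})

# A fresh RangeIndex floor-divided by 3 yields the labels 0, 0, 0, 1, 1, 1.
df_group.index = pd.RangeIndex(len(df_group)) // 3
print(df_group)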


The least robust approach, but likely the fastest if the requirements hold, is to use to_numpy and reshape:

for name, df_group in df.groupby('Name'):
    # Stack each 3-row batch as a column: reshape to (n_batches, 3), then transpose.
    a = df_group['Value'].to_numpy().reshape((-1, 3)).T
    d[name] = pd.DataFrame(a, columns=['Value'] * a.shape[1])

This will fail if the group's values cannot be reshaped into 3-row chunks (i.e. the group length is not a multiple of 3); however, it is the fastest as it does not require any regrouping or reindexing. columns=['Value'] * a.shape[1] is optional, but the expected output shows multiple Value columns, so this repeats the Value label once per column of a.
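
As a quick sketch of the reshape itself (again reusing the K1 values from the sample data), reshaping to (-1, 3) and transposing turns the flat array of six values into two columns of three, matching the result of the first two approaches:

import numpy as np

# K1's Value column in row order.
values = np.array([10, 5, 10, 2, 2, 2])

# reshape((-1, 3)) stacks each 3-value batch as a row; .T turns those batches into columns.
a = values.reshape((-1, 3)).T
print(a)
# [[10  2]
#  [ 5  2]
#  [10  2]]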

All options produce d:

{'dummy':     Value
M1      0
M2      0
M3      0, 'K1':    Value  Value
0     10      2
1      5      2
2     10      2
0     20
1     10
2     15}

Setup and imports:

import pandas as pd

df = pd.DataFrame({
    'ID': [0, 0, 0, 1, 1, 1, 2, 2, 2],
    'Name': ['K1', 'K1', 'K1', 'K2', 'K2', 'K2', 'K1', 'K1', 'K1'],
    'Metric': ['M1', 'M2', 'M3', 'M1', 'M2', 'M3', 'M1', 'M2', 'M3'],
    'Value': [10, 5, 10, 20, 10, 15, 2, 2, 2]
})

data = {'Value': [0, 0, 0]}
kernel_df = pd.DataFrame(data, index=['M1', 'M2', 'M3'])
d = {'dummy': kernel_df}

General Python note: dict is a built-in, so it should be avoided as a variable name. For this reason, it has been renamed to d in the code above.
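
For example (a throwaway sketch, not taken from the question's code), once the name is shadowed, the built-in constructor is no longer reachable in that scope:

# Shadowing the built-in name...
dict = {'dummy': 0}
# ...makes later constructor calls fail:
dict([('a', 1)])   # TypeError: 'dict' object is not callable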
