In the dataset below,
# DataFrame using arrays.
import pandas as pd
import numpy as np
# create dataset
data = {'Gender':['287F', '287F', '287F', '287F','287F', '287F', '189M', '189M','189M', '189M',
'189M', '189F','287M', '189F', '287M', '287M','287M','189F', '189F', '287M'],
'code_num':[1001,1001,1002,1002,1003,1003,1004,1004,1005,1005,
1006,1006,1007,1007,1008,1008,1009,1009,1010,1010],
'Date':['10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923','10-22-1923'],
'Location':['PHX','PHX','PHX','PHX','PHX','PHX','PHX','PHX','PHX','PHX',
'MIA','MIA','MIA','MIA','MIA','MIA','MIA','MIA','MIA','MIA'],
'Age':['18yr','18yr','18yr','18yr','18yr','18yr','18yr','18yr','18yr','18yr','18yr','18yr','18yr','18yr','18yr','18yr','18yr','18yr','18yr','18yr'],
'Group':['F1', 'D1', 'F2', 'D2','F1', 'D1', 'F2', 'D2','F1', 'D1', 'F3', 'D3','F2', 'D2', 'F4', 'D4','F3','D3', 'F4', 'D4'],
'Dog_10_UID': ['T-X', 'T-X', 'G-A', 'G-A','T-X', 'T-X', 'G-A', 'G-A','T-X', 'T-X', 'C-A', 'C-A','G-A', 'G-A', 'F-L', 'F-L','C-A','C-A', 'F-L', 'F-L'],
'Dog_10_name': ['Tex', 'Tex', 'Gina', 'Gina','Tex', 'Tex', 'Gina', 'Gina','Tex', 'Tex', 'Carla', 'Carla','Gina', 'Gina', 'Flora', 'Flora','Carla','Carla', 'Flora', 'Flora'],
'Dog_10_txt':['>11','51','61','>11','>91','61','51','>11','>91','>11','61','>11','>71','51','>11','61','>11','>71','>91','51'],
'Dog_10_index':[11,51,61,11,91,61,51,11,91,11,61,11,71,51,11,61,11,71,91,51],
'Dog_20_UID': ['T-X', 'T-X', 'G-A', 'G-A','T-X', 'T-X', 'G-A', 'G-A','T-X', 'T-X', 'C-A', 'C-A','G-A', 'G-A', 'F-L', 'F-L','C-A','C-A', 'F-L', 'F-L'],
'Dog_20_name': ['Tex', 'Tex', 'Gina', 'Gina','Tex', 'Tex', 'Gina', 'Gina','Tex', 'Tex', 'Carla', 'Carla','Gina', 'Gina', 'Flora', 'Flora','Carla','Carla', 'Flora', 'Flora'],
'Dog_20_txt':['>12','52','62','>12','>92','62','52','12','>92','>12','62','>12','>72','52','>12','62','>12','>72','>92','52'],
'Dog_20_index':[12,52,62,12,92,62,52,12,92,12,62,12,72,52,12,62,12,72,92,52]
}
data = pd.DataFrame(data)
data
I want to collapse (or maybe pivot) the following pairs of corresponding columns:
- Dog_10_UID & Dog_20_UID into a single column Dog_UID
- Dog_10_name & Dog_20_name into a single column Dog_name
- Dog_10_txt & Dog_20_txt into a single column Dog_txt
- Dog_10_index & Dog_20_index into a single column Dog_index
After collapsing/pivoting, the final dataframe should have the following columns:
Gender, code_num, Date, Location, Age, Group, Dog_UID, Dog_name, Dog_txt, Dog_index
My attempt:
# 'Gender','code_num', 'Date', 'Location', 'Age', 'Group' should remain constant while collapsing/pivoting Columns starting with 'Dog_'
keys = [x for x in data if x.startswith('Dog_')]
df = data.melt(id_vars=['Gender','code_num', 'Date', 'Location', 'Age', 'Group'], var_name=['Dog_UID','Dog_name', 'Dog_txt', 'Dog_index'],
value_name='keys')
I am open to other methods; kindly share your full code. Thanks!
CodePudding user response:
The first step is DataFrame.set_index: create a MultiIndex from all the columns that should not be reshaped, split the remaining column names, and reshape with DataFrame.stack:
# move the identifier columns into the index
df = data.set_index(['Gender','code_num', 'Date', 'Location', 'Age', 'Group'])
# split 'Dog_10_UID' -> ('Dog', '10', 'UID') so the columns become a MultiIndex
df.columns = df.columns.str.split('_', expand=True)
# stack the middle level (the 10/20 part) down into the rows
df = df.stack(1)
# flatten ('Dog', 'UID') back to 'Dog_UID'
df.columns = df.columns.map(lambda x: f'{x[0]}_{x[1]}')
cols = ['Dog_UID', 'Dog_name', 'Dog_txt', 'Dog_index']
# drop the stacked 10/20 level and restore the identifier columns
df = df.reset_index(level=-1, drop=True)[cols].reset_index()
print (df.head())
Gender code_num Date Location Age Group Dog_UID Dog_name Dog_txt \
0 287F 1001 10-22-1923 PHX 18yr F1 T-X Tex >11
1 287F 1001 10-22-1923 PHX 18yr F1 T-X Tex >12
2 287F 1001 10-22-1923 PHX 18yr D1 T-X Tex 51
3 287F 1001 10-22-1923 PHX 18yr D1 T-X Tex 52
4 287F 1002 10-22-1923 PHX 18yr F2 G-A Gina 61
Dog_index
0 11
1 12
2 51
3 52
4 61
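A quick sanity check (my addition, not part of the original answer): the reshaped frame should have 40 rows (20 original rows x 2 Dog_ blocks) and the 10 requested columns:
print(df.shape)             # expected: (40, 10)
print(df.columns.tolist())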
CodePudding user response:
One option is pd.wide_to_long; first, the column names need to be reworked so that the stub (e.g. Dog_UID) comes first and the 10/20 marker becomes the suffix:
temp = data.copy()
cols = ['Gender', 'code_num', 'Date', 'Location', 'Age', 'Group']
stubnames = ['Dog_UID', 'Dog_name', 'Dog_txt', 'Dog_index']
# rename 'Dog_10_UID' -> 'Dog_UID-10' so the stub comes first and the number is the suffix
pattern = r"(?P<first>.+)_(?P<num>\d+)_(?P<last>.+)"
repl = lambda m: f"{m.group('first')}_{m.group('last')}-{m.group('num')}"
temp.columns = temp.columns.str.replace(pattern, repl, regex=True)
out = (pd.wide_to_long(temp,
                       stubnames=stubnames,
                       i=cols,
                       j='num',
                       sep='-',
                       suffix='.+')
         .reset_index()
       )
out.head()
Gender code_num Date Location Age Group num Dog_UID Dog_name Dog_txt Dog_index
0 287F 1001 10-22-1923 PHX 18yr F1 10 T-X Tex >11 11
1 287F 1001 10-22-1923 PHX 18yr F1 20 T-X Tex >12 12
2 287F 1001 10-22-1923 PHX 18yr D1 10 T-X Tex 51 51
3 287F 1001 10-22-1923 PHX 18yr D1 20 T-X Tex 52 52
4 287F 1002 10-22-1923 PHX 18yr F2 10 G-A Gina 61 61
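If the 10/20 marker isn't needed, you can drop the extra num column to end up with exactly the requested columns (my addition, assuming the marker can be discarded):
out = out.drop(columns='num')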
Another option is pivot_longer from pyjanitor: your columns follow a pattern (they end in UID, name, txt or index), and we can use that pattern to reshape the data. When names_pattern is a list of regexes, each regex is paired positionally with the corresponding entry in names_to:
# pip install pyjanitor
import janitor
import pandas as pd
# `cols` and `stubnames` are reused from the wide_to_long example above
outcome = (data.pivot_longer(index=cols,
                             names_to=stubnames,
                             names_pattern=['UID$', 'name$', 'txt$', 'index$'])
           )
outcome.head()
Gender code_num Date Location Age Group Dog_UID Dog_name Dog_txt Dog_index
0 287F 1001 10-22-1923 PHX 18yr F1 T-X Tex >11 11
1 287F 1001 10-22-1923 PHX 18yr D1 T-X Tex 51 51
2 287F 1002 10-22-1923 PHX 18yr F2 G-A Gina 61 61
3 287F 1002 10-22-1923 PHX 18yr D2 G-A Gina >11 11
4 287F 1003 10-22-1923 PHX 18yr F1 T-X Tex >91 91
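Since you said you're open to other methods, here is a dependency-free sketch of my own (not part of the answers above): slice out each Dog_<n>_* block, normalize the column names, and stack the blocks with pd.concat. Row order will differ from the solutions above, but the content is the same:
# pure-pandas sketch: build one block per Dog_<n>_* group, then concatenate row-wise
id_cols = ['Gender', 'code_num', 'Date', 'Location', 'Age', 'Group']
blocks = []
for n in ('10', '20'):
    block = data[id_cols + [f'Dog_{n}_{s}' for s in ('UID', 'name', 'txt', 'index')]]
    # 'Dog_10_UID' -> 'Dog_UID'
    block = block.rename(columns=lambda c: c.replace(f'_{n}_', '_'))
    blocks.append(block)
alt = pd.concat(blocks, ignore_index=True)
alt.head()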