I have a table that looks like this:
code | year | month | Value A | Value B |
---|---|---|---|---|
1 | 2020 | 1 | 120 | 100 |
1 | 2020 | 2 | 130 | 90 |
1 | 2020 | 3 | 90 | 89 |
1 | 2020 | 4 | 67 | 65 |
... | ... | ... | ... | ... |
100 | 2020 | 10 | 90 | 90 |
100 | 2020 | 11 | 115 | 100 |
100 | 2020 | 12 | 150 | 135 |
I would like to know if there's a way to rearrange the data to find the correlation between A and B for every distinct code.
What I'm thinking is getting a pair of arrays for every code, for example:
[(A1,A2,A3...,A12),(B1,B2,B3...,B12)]
where the A and B entries are the values for the respective months, and then I could compute the correlation between these two columns. Is there a way to make this dynamic?
CodePudding user response:
IIUC, you don't need to rearrange the data to get the correlation for each "code". Instead, group by "code" with groupby and compute the correlation within each group:
>>> df.groupby("code").apply(lambda x: x["Value A"].corr(x["Value B"]))
code
1 0.830163
100 0.977093
dtype: float64
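As a self-contained sketch (the frame below uses made-up values for two hypothetical codes, just to show the shape of the result):

```python
import pandas as pd

# Hypothetical frame mimicking the layout in the question (values are made up).
df = pd.DataFrame({
    "code":    [1, 1, 1, 1, 2, 2, 2, 2],
    "year":    [2020] * 8,
    "month":   [1, 2, 3, 4, 1, 2, 3, 4],
    "Value A": [120, 130, 90, 67, 90, 115, 150, 140],
    "Value B": [100, 90, 89, 65, 90, 100, 135, 130],
})

# Pearson correlation of "Value A" vs. "Value B" within each code,
# returned as a Series indexed by code.
per_code = df.groupby("code").apply(
    lambda g: g["Value A"].corr(g["Value B"])
)
print(per_code)
```

If you ever need the full per-group correlation matrix instead of a single number, `df.groupby("code")[["Value A", "Value B"]].corr()` also works and avoids writing a lambda.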