Creating a discretized version of an array


I have a list that describes a profile, such as the following one:

dat=[(0, 5),(1, 1),(3,1)]

I need to create a discretized version of that profile given a time step 'dt = 0.2'. For instance, the first column of the discretized profile would be:

import numpy as np

dt = 0.2
time = np.linspace(dat[0][0], dat[-1][0], int(dat[-1][0]/dt) + 1)

Then, for each time value, I need to assign the corresponding value from the second column of 'dat', so the new profile would be something like this:

0 5
0.2 5
0.4 5
0.6 5
0.8 5
1 5
1.2 1
1.4 1
1.6 1
1.8 1
2 1
2.2 1
2.4 1
2.6 1
2.8 1
3 1

How can I do this?

CodePudding user response:

There is probably a better/cleaner/faster way, but this is what I have come up with:

import numpy as np

dat = [(0, 5), (1, 1), (3, 1)]
dt = 0.2

t = [x[0] for x in dat]
col_1 = []
col_2 = []
# Discretize each segment between consecutive breakpoints.
for idx, (i, j) in enumerate(zip(t[:-1], t[1:])):
    N = int((j - i) / dt)
    col_1 += np.linspace(i, j, N, endpoint=False).tolist()
    col_2 += [dat[idx][1]] * N

# Append the final breakpoint, which endpoint=False leaves out.
res = [(i, j) for i, j in zip(col_1, col_2)] + [dat[-1]]
print(res)

result:

[(0.0, 5), (0.2, 5), (0.4, 5), (0.6000000000000001, 5), (0.8, 5), (1.0, 1), (1.2, 1), (1.4, 1), (1.6, 1), (1.8, 1), (2.0, 1), 
(2.2, 1), (2.4000000000000004, 1), (2.6, 1), (2.8, 1), (3, 1)]
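
A note on the linspace calls above, in case it helps: endpoint=False keeps each segment's right-hand breakpoint out of that segment, so it is never emitted twice; the very last breakpoint is then appended explicitly as dat[-1]. A minimal illustration of that behaviour:

import numpy as np

# First segment of the example profile: five points from 0 (inclusive)
# towards 1 (exclusive), i.e. 0, 0.2, 0.4, 0.6, 0.8 up to floating-point noise.
print(np.linspace(0, 1, 5, endpoint=False).tolist())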

CodePudding user response:

I made an attempt while waiting for a response. Again, I guess there should be a better/cleaner/faster way to do it.

import numpy as np

dat = [(0, 5), (1, 1), (3, 1)]
dt = 0.2
col_1 = np.arange(dat[0][0], dat[-1][0] + dt, dt)
col_2 = np.zeros(len(col_1))
j = 0
for i in range(len(dat)):
    # Fill col_2 while the current time lies inside the i-th segment.
    # (The chained comparison short-circuits, so dat[i + 1] is never
    # evaluated once the left-hand check fails.)
    while dat[i][0] <= col_1[j] <= dat[i + 1][0]:
        col_2[j] = dat[i][1]
        j += 1
        if j == len(col_1):
            j = 0

The result of this is:

col_1 = array([0. , 0.2, 0.4, 0.6, 0.8, 1. , 1.2, 1.4, 1.6, 1.8, 2. , 2.2, 2.4,
       2.6, 2.8, 3. ])
col_2 = array([5., 5., 5., 5., 5., 5., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.])
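
For what it is worth, the "which segment does this time fall into" lookup that the while loop performs can also be done in one vectorised step with np.searchsorted. This is only a sketch of that idea on the same data, not part of the answer above:

import numpy as np

dat = np.array([(0, 5), (1, 1), (3, 1)], dtype=float)
dt = 0.2

col_1 = np.arange(dat[0, 0], dat[-1, 0] + dt, dt)
# Index of the segment each time falls into; side='left' makes a breakpoint
# such as t=1.0 keep the value of the segment that ends there.
seg = np.searchsorted(dat[:, 0], col_1, side='left')
col_2 = dat[np.maximum(seg - 1, 0), 1]

With the example data this reproduces the col_1 and col_2 arrays shown above, subject to the usual floating-point caveats of np.arange.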

CodePudding user response:

You might like to try np.repeat:

import numpy as np

dt = 0.2
dat = np.array([(0, 5), (1, 1), (3, 1)])
# Number of dt steps in each segment; the extra +1 accounts for the very
# first time point, t = dat[0, 0].
counts = (np.diff(dat[:, 0], axis=0) / dt).astype(int)
counts[0] += 1
sum_counts = ((dat[-1, 0] - dat[0, 0]) / dt).astype(int) + 1
col_1 = np.linspace(dat[0, 0], dat[-1, 0], sum_counts)
col_2 = np.repeat(dat[:-1, 1], counts)
np.transpose([col_1, col_2])
array([[0. , 5. ],
       [0.2, 5. ],
       [0.4, 5. ],
       [0.6, 5. ],
       [0.8, 5. ],
       [1. , 5. ],
       [1.2, 1. ],
       [1.4, 1. ],
       [1.6, 1. ],
       [1.8, 1. ],
       [2. , 1. ],
       [2.2, 1. ],
       [2.4, 1. ],
       [2.6, 1. ],
       [2.8, 1. ],
       [3. , 1. ]])
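
In case the counts logic is not obvious: np.repeat accepts one repetition count per element, so here 5 is repeated six times (the grid points 0.0 through 1.0) and 1 ten times (1.2 through 3.0), which together match the 16-point time grid. A tiny illustration of that call:

import numpy as np

print(np.repeat([5, 1], [6, 10]))
# -> [5 5 5 5 5 5 1 1 1 1 1 1 1 1 1 1]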

CodePudding user response:

Here is one approach: you can first create a Series whose index is the desired time grid from np.arange, then update it with the given values, and fill the remaining values with ffill and bfill:

import numpy as np
import pandas as pd

# dat and dt as defined in the question
dat_np = np.array(dat, dtype=float)
s = pd.Series(index=np.arange(dat_np[:, 0].min(), dat_np[:, 0].max() + dt, dt), dtype=float)
s.update(pd.Series(dat_np[:, 1], index=dat_np[:, 0]))
result = s.ffill()
# this almost works, but we have result[1.0] == 1 instead of result[1.0] == 5,
# so blank out the breakpoints and refill them from their left-hand neighbours
result.loc[dat_np[:, 0]] = np.nan
result = result.ffill().bfill().astype(int)
print(result)
# 0.0    5
# 0.2    5
# 0.4    5
# 0.6    5
# 0.8    5
# 1.0    5
# 1.2    1
# 1.4    1
# 1.6    1
# 1.8    1
# 2.0    1
# 2.2    1
# 2.4    1
# 2.6    1
# 2.8    1
# 3.0    1
# dtype: int64

This assumes that all breakpoint times in dat are exact multiples of dt and land exactly on the np.arange grid, so that they can be matched by label in update and loc.
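
If that cannot be guaranteed (for example with a dt whose multiples pick up floating-point noise), one possible guard, shown here only as a sketch and not as part of the answer above, is to round the grid before building the Series so the labels collapse onto clean multiples of dt:

import numpy as np
import pandas as pd

dt = 0.2
# Hypothetical safeguard: round the time grid so that a value such as
# 2.4000000000000004 becomes 2.4 and matches the breakpoint labels in dat.
grid = np.round(np.arange(0.0, 3.0 + dt, dt), decimals=10)
s = pd.Series(index=grid, dtype=float)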
