I have a piece of C code that can only handle an array of size 20. The array that my instrument outputs is much smaller than what the function requires. Is there a numpy or math function that can "scale up" an array to any specific size while maintaining its structural integrity? For example:
I have an 8-element array that is basically a two-ramp "sawtooth", meaning its values are: [1, 2, 3, 4, 1, 2, 3, 4]
What I need for the C code is a 20-element array. So I can scale it up by padding the original array with 0s at regular intervals, like:
[1,0,0,2,0,0,3,0,0,4,0,0,1,0,0,2,0,0,3,0]
so it adds up to 20 elements. I would think this process is the opposite of "decimation". (I apologize, I'm simplifying this process so it will be a bit more understandable.)
CodePudding user response:
Based on your example, I guess the following approach could be tweaked to do what you want:
- upsample with 0s: upsampled_l = [[i, 0, 0] for i in l], with l being your initial list
- Flatten the array: flat_l = flatten(upsampled_l), using a method from "How to make a flat list out of a list of lists?" for instance
- Get the expected length: final_l = flat_l[:20]
For instance, the following code gives the output you gave in your example:
l = [1, 2, 3, 4, 1, 2, 3, 4]
upsampled_l = [[i, 0, 0] for i in l]
flat_l = [item for sublist in upsampled_l for item in sublist]
final_l = flat_l[:20]
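With the example input, final_l is [1, 0, 0, 2, 0, 0, 3, 0, 0, 4, 0, 0, 1, 0, 0, 2, 0, 0, 3, 0], i.e. exactly the 20-element list from the question.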
However, the final element of the initial list (the second 4) is missing from the final list. Perhaps it's worth upsampling with only one 0 in between ([i, 0] instead of [i, 0, 0]) and then doing final_l.extend([0 for _ in range(20 - len(final_l))]) to pad back up to 20.
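A minimal sketch of that variant, reusing the names above and keeping the target length of 20 from the question:

l = [1, 2, 3, 4, 1, 2, 3, 4]
# upsample with a single 0 after each value, then flatten
upsampled_l = [[i, 0] for i in l]
flat_l = [item for sublist in upsampled_l for item in sublist]
# truncate if too long, then pad with trailing zeros up to 20 elements
final_l = flat_l[:20]
final_l.extend([0 for _ in range(20 - len(final_l))])
# final_l -> [1, 0, 2, 0, 3, 0, 4, 0, 1, 0, 2, 0, 3, 0, 4, 0, 0, 0, 0, 0]

This keeps every original sample (including the second 4) at the cost of a slightly different spacing than the example in the question.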
Hope this helps!
CodePudding user response:
You can manage it in a one-liner with NumPy by adding the zeros as extra columns, flattening, and trimming to 20 elements:

import numpy as np

sm = np.array([1, 2, 3, 4, 1, 2, 3, 4])
np.concatenate([np.reshape(sm, (8, 1)), np.zeros((8, 2))], axis=1).flatten()[:20]
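Another option along the same lines (just a sketch; it assumes the target length of 20 and the one-value-every-three-slots spacing from the example): preallocate a zero array and fill every third slot.

import numpy as np

sm = np.array([1, 2, 3, 4, 1, 2, 3, 4])
out = np.zeros(20, dtype=sm.dtype)   # target length taken from the question
out[::3] = sm[:len(out[::3])]        # place one sample every third slot, dropping what doesn't fit
# out -> [1, 0, 0, 2, 0, 0, 3, 0, 0, 4, 0, 0, 1, 0, 0, 2, 0, 0, 3, 0]

This reproduces the 20-element layout from the question exactly and also keeps the result in the input's dtype.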