I just switched from MATLAB to Python and would like some help translating a piece of MATLAB code to Python.
I have a 1000 x 4 matrix in which the values in the m'th column represent a signal. For simplicity I am changing the dimensions to 5 x 4. Let the values in the m'th column be [1,2,3,4,5]. I want to delay this signal by x samples; that is, if x=2, the delayed version will be [0,0,1,2,3]. It's essentially adding x zeros to the front and removing x values from the back, so the output is a 5x1 column vector.
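Written out concretely on the toy column (just to illustrate the intended behaviour, using plain lists):

```python
# intended behaviour for the toy column with x = 2:
# prepend x zeros, drop the last x samples
signal_col = [1, 2, 3, 4, 5]
x = 2
delayed = [0] * x + signal_col[:-x]
print(delayed)  # [0, 0, 1, 2, 3]
```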
The MATLAB code is:
[zeros(2,1); signal(1:(end-2),m)];
The python code I wrote is:
[np.zeros((2,1)),signal[(length_of_signal-1)-2:,m]]
This doesn't seem to work, and I am not able to figure out what's wrong with it. Please help.
CodePudding user response:
IIUC, try with np.vstack:
np.random.seed(100)
x = 2
m = 3
a = np.random.randint(1, 5, (5, 4))
>>> np.vstack((np.zeros((x, 1)), a[:, m][:len(a) - x][np.newaxis].T))
array([[0.],
[0.],
[4.],
[3.],
[2.]])
Note that you need the np.newaxis because the transpose of a 1D array is still a 1D array (unlike in MATLAB, where the default array is 2D).
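A quick way to see this is to check the shapes directly (a minimal sketch with an illustrative array `v`):

```python
import numpy as np

v = np.array([1, 2, 3])        # 1D array, shape (3,)
print(v.T.shape)               # (3,)  .T is a no-op on 1D arrays
print(v[np.newaxis].shape)     # (1, 3)  now 2D, a row vector
print(v[np.newaxis].T.shape)   # (3, 1)  transposing gives a column vector
```

This is why the answer above adds `[np.newaxis].T` before stacking: np.vstack needs both pieces to be 2D columns of matching width.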