"reshape" numpy array of (N, 2) shape into (N, 2, 2) where each column (size 2) become a d


Is there an efficient way to do this? For example, I have:

[[1, 2, 3],
 [4, 5, 6]]

I would like to get:

[[[1, 0],
  [0, 4]],

 [[2, 0],
  [0, 5]],

 [[3, 0],
  [0, 6]]]

CodePudding user response:

For large arrays I recommend np.einsum as follows:

>>> import numpy as np
>>> data = np.array([[1, 2, 3],
...                  [4, 5, 6]])
>>> data
array([[1, 2, 3],
       [4, 5, 6]])
>>> out = np.zeros((*reversed(data.shape),2),data.dtype)
>>> np.einsum("...ii->...i",out)[...] = data.T
>>> out
array([[[1, 0],
        [0, 4]],

       [[2, 0],
        [0, 5]],

       [[3, 0],
        [0, 6]]])

einsum creates a writable strided view of the memory locations holding the diagonal elements, so the assignment fills every diagonal in one vectorized write, with no temporary copies. This is about as efficient as it gets in numpy.
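
If you need this more than once, the same trick wraps naturally into a small helper. A minimal sketch, assuming a (k, N) input array; the name columns_to_diagonals is mine, not from the answer above or from numpy:

import numpy as np

def columns_to_diagonals(data):
    # Turn each column of a (k, N) array into the diagonal of a
    # k x k matrix, giving an (N, k, k) result.
    k, n = data.shape
    out = np.zeros((n, k, k), dtype=data.dtype)
    # "...ii->...i" yields a writable view of each block's diagonal
    np.einsum("...ii->...i", out)[...] = data.T
    return out

data = np.array([[1, 2, 3],
                 [4, 5, 6]])
print(columns_to_diagonals(data))   # the (3, 2, 2) array shown above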

CodePudding user response:

Not a strided view, but perhaps easier to understand is this 'diagonal' fill of a (3,2,2) array:

In [28]: arr = np.arange(1,7).reshape(2,3)
In [29]: res = np.zeros((3,2,2),int)
In [30]: res[:,np.arange(2),np.arange(2)].shape
Out[30]: (3, 2)
In [31]: res[:,np.arange(2),np.arange(2)]=arr.T
In [32]: res
Out[32]: 
array([[[1, 0],
        [0, 4]],

       [[2, 0],
        [0, 5]],

       [[3, 0],
        [0, 6]]])
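
The index trick isn't limited to 2x2 blocks. A sketch of the same fill for a general (k, N) input, under my assumption that the block size should follow the number of rows:

import numpy as np

arr = np.arange(1, 7).reshape(2, 3)   # (k, N) with k=2, N=3
k, n = arr.shape
res = np.zeros((n, k, k), arr.dtype)
idx = np.arange(k)
# res[:, idx, idx] addresses the diagonal of every k x k block at once
res[:, idx, idx] = arr.T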

For this small case the times aren't too different. I don't know how they'll scale:

In [39]: timeit np.einsum("...ii->...i",out)[...] = arr.T
5.21 µs ± 5.99 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
In [40]: timeit res[:,np.arange(2),np.arange(2)]=arr.T
6.4 µs ± 21.3 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
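
One way to answer the scaling question empirically is a quick timeit sweep over larger N. A sketch only; the sizes are arbitrary and no results are attached, so run it to see how the two fills compare:

import timeit
import numpy as np

for n in (10, 1_000, 100_000):
    data = np.random.randint(0, 10, size=(2, n))
    out = np.zeros((n, 2, 2), data.dtype)
    idx = np.arange(2)

    def fill_einsum():
        np.einsum("...ii->...i", out)[...] = data.T

    def fill_index():
        out[:, idx, idx] = data.T

    t_einsum = timeit.timeit(fill_einsum, number=1_000)
    t_index = timeit.timeit(fill_index, number=1_000)
    print(f"N={n}: einsum {t_einsum:.4f}s, fancy index {t_index:.4f}s")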