"Vectorized" Matrix-Vector multiplication in numpy


I have an $I$-indexed array $V = (V_i)_{i \in I}$ of (column) vectors $V_i$, which I want to multiply pointwise (along $i \in I$) by a matrix $M$. So I'm looking for a "vectorized" operation, wherein the individual operation is a multiplication of a matrix with a vector; that is

$W = (M V_i)_{i \in I}$

Is there a numpy way to do this?

numpy.dot unfortunately assumes that $V$ is a matrix, instead of an $I$-indexed family of vectors, which obviously fails.


So basically I want to "vectorize" the operation

W = [np.dot(M, V[i]) for i in range(N)]

Here the 2D array V is treated as a list of column vectors: the first index selects the vector, the second indexes its components.

If

shape(M) == (2, 2)
shape(V) == (N, 2)

Then

shape(W) == (N, 2)
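For concreteness, a minimal runnable version of this setup with placeholder values (the numbers are only illustrative):

import numpy as np

M = np.arange(4).reshape(2, 2)        # (2, 2) matrix
V = np.arange(10).reshape(5, 2)       # N = 5 vectors stored as rows

# np.dot(M, V) raises a ValueError here: it tries to matrix-multiply
# (2, 2) by (5, 2), and the inner dimensions (2 vs. 5) do not match.

# The loop I want to vectorize:
W = np.array([np.dot(M, V[i]) for i in range(V.shape[0])])
print(W.shape)   # (5, 2)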

CodePudding user response:

EDIT:

Based on your iterative example, it seems this can be done with a dot product plus a couple of transposes to match the shapes. That is (M @ V.T).T, i.e. the transpose of M @ V.T.

# Step by step
    ((2,2) @ (5,2).T).T
->  ((2,2) @ (2,5)).T
->  (2,5).T
->  (5,2)

Code to verify this is below. Stacking your iterative output gives a matrix W that is exactly equal to the proposed solution.

M = np.random.random((2,2))
V = np.random.random((5,2))

# YOUR ITERATIVE SOLUTION (STACKED AS MATRIX)
W = np.stack([np.dot(M, V[i]) for i in range(5)])
print(W)

#array([[0.71663319, 0.84053871],
#       [0.28626354, 0.36282745],
#       [0.26865497, 0.55552295],
#       [0.40165606, 0.10177711],
#       [0.33950909, 0.54215385]])


# PROPOSED DOT PRODUCT
solution = (M @ V.T).T           #<---------------
print(solution)

#array([[0.71663319, 0.84053871],
#       [0.28626354, 0.36282745],
#       [0.26865497, 0.55552295],
#       [0.40165606, 0.10177711],
#       [0.33950909, 0.54215385]])

np.allclose(W, solution) #compare the 2 matrices
True

IIUC, you are looking for a pointwise multiplication of the matrix M with each vector in V (using broadcasting).

The matrix here is (3,3), while V is an array with 4 column vectors, each of which you want to independently multiply with the matrix while obeying broadcasting rules.

# Broadcasting Rules

 M ->     3, 3
 V ->  4, 1, 3   #V.T[:,None,:] 
----------------
 R ->  4, 3, 3
----------------

Code for this -

M = np.array([[1,1,1],
              [0,0,0],
              [1,1,1]])    #3,3 matrix M

V = np.array([[1,2,3,4],
              [1,2,3,4],   # (3,4) array that stores
              [1,2,3,4]])  # 4 column vectors
                           


R = M * V.T[:,None,:]          #<--------------
R
array([[[1, 1, 1],
        [0, 0, 0],
        [1, 1, 1]],

       [[2, 2, 2],
        [0, 0, 0],
        [2, 2, 2]],

       [[3, 3, 3],
        [0, 0, 0],
        [3, 3, 3]],

       [[4, 4, 4],
        [0, 0, 0],
        [4, 4, 4]]])

After this, if you need any aggregation, you can reduce R along the required axis; see the sketch after the examples below.

For example, M * the column vector [1,1,1] results in R[0]:

array([[1, 1, 1],
       [0, 0, 0],
       [1, 1, 1]])

while M * the column vector [4,4,4] results in R[3]:

array([[4, 4, 4],
       [0, 0, 0],
       [4, 4, 4]])
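As one possible reduction (a hedged sketch, reusing M, V and R from above): summing R over its last axis turns the pointwise products back into matrix-vector products, so row n of the result equals M @ (the n-th column vector of V).

# Sketch: reduce R by summing over the last axis.
# Row n of the result is M @ V[:, n], the n-th matrix-vector product.
reduced = R.sum(axis=-1)                 # shape (4, 3)
print(reduced)
#array([[ 3,  0,  3],
#       [ 6,  0,  6],
#       [ 9,  0,  9],
#       [12,  0, 12]])

print(np.allclose(reduced, (M @ V).T))   # True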

CodePudding user response:

With

shape(M) == (2, 2)
shape(V) == (N, 2)

and

W = [np.dot(M, V[i]) for i in range(N)]   

V[i] is (2,), so np.dot(M, V[i]) is (2,2) with (2,) => (2,), i.e. a sum-of-products over the last dimension of M. np.array(W) then has shape (N,2).

For 2d A,B, np.dot(A,B) does sum-of-products with the last dimension of A and 2nd to the last of B. You want the last dim of M with the last of V.

One way is:

np.dot(M, V.T).T   # (2,2) with (2,N) => (2,N) => (N,2)
(M @ V.T).T        # with the matmul operator

Sometimes einsum makes the relation between axes clearer:

np.einsum('ij,nj->ni',M,V)
np.einsum('ij,jn->in',M,V.T).T      # with j in last/2nd last positions

Or switching the order of V and M:

V @ M.T           # 'nj,ji->ni'

Or treating the N dimension as a batch, we could make V[:,:,None] (N,2,1). This could be thought of as N (2,1) "column vectors".

M @ V[:,:,None]       # (N,2,1)
np.einsum('ij,njk->nik', M, V[:,:,None])   # again j is in the last/2nd last slots

Numerically:

In [27]: M = np.array([[1,2],[3,4]]); V = np.array([[1,2],[2,3],[3,4]])

In [28]: [M@V[i] for i in range(3)]
Out[28]: [array([ 5, 11]), array([ 8, 18]), array([11, 25])]

In [30]: (M @ V.T).T
Out[30]: 
array([[ 5, 11],
       [ 8, 18],
       [11, 25]])

In [31]: V @ M.T
Out[31]: 
array([[ 5, 11],
       [ 8, 18],
       [11, 25]])

Or the batched:

In [32]: M@V[:,:,None]
Out[32]: 
array([[[ 5],
        [11]],

       [[ 8],
        [18]],

       [[11],
        [25]]])
In [33]: np.squeeze(M@V[:,:,None])
Out[33]: 
array([[ 5, 11],
       [ 8, 18],
       [11, 25]])
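
As a quick sanity check (a sketch reusing M and V as defined in In [27]), the einsum spellings above match the explicit loop as well:

# Verify the einsum forms against the loop (M, V as in In [27]).
W_loop    = np.array([M @ V[i] for i in range(3)])
W_einsum  = np.einsum('ij,nj->ni', M, V)
W_batched = np.einsum('ij,njk->nik', M, V[:, :, None])[..., 0]

print(np.array_equal(W_loop, W_einsum))    # True
print(np.array_equal(W_loop, W_batched))   # True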