PyTorch vectorized sum different from looped sum


I am using torch 1.7.1 and I noticed that vectorized sums differ from sums computed in a loop when the indices are repeated. For example:

import torch

indices = torch.LongTensor([0,1,2,1])
values = torch.FloatTensor([1,1,2,2])
result = torch.FloatTensor([0,0,0])

looped_result = torch.zeros_like(result)

for i in range(indices.shape[0]):
    looped_result[indices[i]] += values[i]

result[indices] += values

print('result:',result)
print('looped result:', looped_result)

results in:

result: tensor([1., 2., 2.])
looped result: tensor([1., 3., 2.])

As you can see, the looped variable has the correct sums while the vectorized one doesn't. Is it possible to avoid the loop and still get the correct result?

CodePudding user response:

The issue here is that you're indexing result multiple times at the same index, which is bound to fail for this in-place operation: with repeated indices, only the last write per index survives instead of accumulating. Instead, what you need is index_add or index_add_, e.g. (as a continuation of your snippet):

>>> result_ia = torch.zeros_like(result)
>>> result_ia.index_add_(0, indices, values)
tensor([1., 3., 2.])
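As an aside, scatter_add_ performs the same accumulation and is worth knowing as an alternative; it requires the index tensor to have the same shape as the source along the scatter dimension, which is already the case here. A minimal sketch using the same indices and values as above:

```python
import torch

indices = torch.LongTensor([0, 1, 2, 1])
values = torch.FloatTensor([1, 1, 2, 2])

# scatter_add_(dim, index, src) accumulates src into self at the
# positions given by index, so repeated indices are summed correctly.
result = torch.zeros(3)
result.scatter_add_(0, indices, values)
print(result)  # tensor([1., 3., 2.])
```

Both index_add_ and scatter_add_ accumulate duplicates; the main practical difference is that index_add_ takes a 1-D index tensor applied along one dimension, while scatter_add_ expects an index tensor shaped like the source.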