I have an (A, B) tensor, and I'm looking for a performant way to map each value of that tensor to an array, creating a new tensor of size (A, B, N). Here's a functioning piece of code showing what I'm trying to do:
import torch

A, B, N = 3, 4, 5
my_old_tensor = torch.ones((A, B), dtype=torch.float32)
my_new_tensor = torch.zeros((A, B, N), dtype=torch.float32)
for val in range(N):
    my_new_tensor[:, :, val] = (val - my_old_tensor) / 2
My code is currently quite slow, and I think the for-loop is the problem. Is there a more performant, PyTorch-idiomatic way of doing this that eliminates the for-loop? I've tried something like this:
x = torch.arange(0, N, 1, dtype=torch.float32)
my_new_tensor = (x - my_old_tensor)/2
but that gives: "RuntimeError: The size of tensor a (5) must match the size of tensor b (4) at non-singleton dimension 1"
Any help would be appreciated!
CodePudding user response:
Use unsqueeze to broadcast my_old_tensor:
my_new_tensor = (torch.arange(N, dtype=torch.float32) - my_old_tensor.unsqueeze(-1)) / 2
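To see why this works: unsqueeze(-1) turns my_old_tensor from shape (A, B) into (A, B, 1), and broadcasting aligns trailing dimensions, so the size-1 axis expands against the (N,) arange to produce an (A, B, N) result in one vectorized operation. A minimal self-contained check against the original loop version:

```python
import torch

A, B, N = 3, 4, 5
my_old_tensor = torch.ones((A, B), dtype=torch.float32)

# unsqueeze(-1) gives shape (A, B, 1); broadcasting against the (N,)
# arange yields (A, B, N) without any Python-level loop.
broadcast = (torch.arange(N, dtype=torch.float32) - my_old_tensor.unsqueeze(-1)) / 2

# Loop version from the question, for comparison.
looped = torch.zeros((A, B, N), dtype=torch.float32)
for val in range(N):
    looped[:, :, val] = (val - my_old_tensor) / 2

print(broadcast.shape)                 # torch.Size([3, 4, 5])
print(torch.equal(broadcast, looped))  # True
```

The direct attempt in the question failed because (A, B) against (N,) aligns N with B at the last dimension; the unsqueeze inserts the size-1 axis that makes the shapes broadcast-compatible.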