I have three tensors:

- A - (1, 3, 256, 256)
- B - (1, 3, 256, 256) - this is a white image tensor
- C - (256, 256) - this is the segment tensor
For instance C would look like:
tensor([[ 337, 337, 337, ..., 340, 340, 340],
[ 337, 337, 337, ..., 340, 340, 340],
[ 337, 337, 337, ..., 340, 340, 340],
...,
[1022, 1022, 1022, ..., 1010, 1010, 1010],
[1022, 1022, 1022, ..., 1010, 1010, 1010],
[1022, 1022, 1022, ..., 1010, 1010, 1010]], device='cuda:0')
where 337 could indicate a building, etc.
Tensor C gives the location of the segment shape. What I want is to copy the same segment based on the location from tensor A onto tensor B. This would be photoshopping the segment onto a white image tensor.
This is similar to masking, and I looked into masked_select (https://pytorch.org/docs/stable/generated/torch.masked_select.html), but that only returns a 1D tensor.
CodePudding user response:
You do not need to select the pixels in C, only to mask them:
select = 337  # which segment to select
select_mask = (C == select)[None, None, ...]  # create binary mask and add singleton batch/channel dimensions

# keep B where the mask is False, take the pixels from A where it is True
B = B * (~select_mask) + A * select_mask
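A minimal runnable sketch of this approach on small dummy tensors (the 4x4 shapes and the segment ids are made up for illustration; the real tensors are 256x256). The singleton dimensions on the mask let it broadcast across the batch and channel axes:

```python
import torch

# toy stand-ins for the real (1, 3, 256, 256) image tensors
A = torch.arange(1 * 3 * 4 * 4, dtype=torch.float32).reshape(1, 3, 4, 4)
B = torch.ones(1, 3, 4, 4)  # white image
C = torch.tensor([[337, 337, 340, 340],
                  [337, 337, 340, 340],
                  [1022, 1022, 1010, 1010],
                  [1022, 1022, 1010, 1010]])  # segment map, shape (4, 4)

select = 337                                  # segment id to copy over
select_mask = (C == select)[None, None, ...]  # bool mask, shape (1, 1, 4, 4)

# where the mask is True take A's pixels, elsewhere keep B (the white image)
out = B * (~select_mask) + A * select_mask

# pixel (0, 0) belongs to segment 337, so it comes from A;
# pixel (0, 2) belongs to segment 340, so it stays white
print(torch.equal(out[0, :, 0, 0], A[0, :, 0, 0]))  # True
print(torch.equal(out[0, :, 0, 2], B[0, :, 0, 2]))  # True
```

Note that `~select_mask` is used instead of `1 - select_mask`: subtracting a bool tensor from an int raises an error in recent PyTorch versions, while logical negation with `~` is supported. `torch.where(select_mask, A, B)` would be an equivalent one-liner.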