Torch: tensor to numpy - cycling problem .cpu() to .detach().numpy()


I have a problem converting a torch tensor to a numpy array.

    import torch as th

    avg_rewards = th.mean(rewards, dim=[0, 1, 3])
    avg_targets = th.mean(th.mean(targets.reshape(rewards.shape), dim=[0, 1, 3]))
    avg_score = th.max(avg_rewards, avg_targets)
    avg_q = th.mean(q, dim=[0, 1, 3])

    # indices of the entries where the score exceeds the Q-value
    griefers = avg_score > avg_q
    griefers = [i for i, x in enumerate(griefers) if x]

    grieve_factor = th.tanh(th.clamp(avg_score / avg_q, min=0)).detach().numpy()

The last line raises an error. If I call .detach().numpy(), I get a message telling me to use .cpu(); if I call .cpu() instead, I get the error telling me to use .detach().numpy().

I printed grieve_factor before the conversion:

    tensor([0., 0.], device='cuda:0', grad_fn=<TanhBackward0>)

I don't need any gradients on it; I just want it as a plain [0., 0.] array.

CodePudding user response:

By default, Tensor.numpy() only performs the conversion if the tensor is on the CPU and does not require grad. Since your tensor is on a GPU, you should either move it to the CPU (and detach it) before the conversion, as hinted in a comment, or set force=True:

    grieve_factor = th.tanh(th.clamp(avg_score / avg_q, min=0)).numpy(force=True)
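
Note that the force argument was only added to Tensor.numpy() in PyTorch 1.13. On older versions you can chain the conversions explicitly instead. A minimal sketch, with placeholder tensors standing in for your avg_score and avg_q (on your setup they would live on cuda:0):

    import torch as th

    # placeholder tensors standing in for avg_score / avg_q
    avg_score = th.tensor([0.0, 0.0], requires_grad=True)
    avg_q = th.tensor([1.0, 1.0])

    t = th.tanh(th.clamp(avg_score / avg_q, min=0))

    # detach() drops the grad_fn, cpu() moves the data off the GPU
    # (a no-op if the tensor is already on the CPU), and numpy()
    # then converts without complaint
    grieve_factor = t.detach().cpu().numpy()
    print(grieve_factor)  # [0. 0.]

This explicit chain does exactly what force=True does internally for the common case, so both produce the same [0., 0.] array.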