Replicate PyTorch's softmax


I am trying to implement the softmax function in PyTorch, but I can't get the output of my implementation to match the output of PyTorch's implementation.

I am trying to do this because I would like to go on to implement a masked softmax that would not include certain indices in the sum for the denominator, and that would set the output at those masked indices to zero.
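
For reference, a minimal sketch of the kind of masked softmax I have in mind (the name masked_softmax and the zero-output convention are my own illustrative choices, not an existing PyTorch API):

import torch

def masked_softmax(x, mask):
    # mask: bool tensor with the same shape as x; True marks entries that
    # take part in the softmax.
    # Filling masked-out logits with -inf makes exp() send them to 0, so they
    # drop out of the denominator and their output probability is exactly 0.
    x = x.masked_fill(~mask, float('-inf'))
    return x.softmax(1)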

I would like to calculate this for a matrix where each row of the output sums to 1. My current implementation is:

def my_softmax(x):
    # exponentiate, then normalize each row so it sums to 1
    exp = x.exp()
    return exp / exp.sum(1, keepdim=True)

However, the output isn't the same for my implementation and PyTorch's:

>>> t = torch.randn(3, 2)
>>> t
tensor([[-1.1881, -0.1085],
        [ 0.5825,  1.0719],
        [-0.5309, -1.3774]])
>>> my_softmax(t)
tensor([[0.2536, 0.7464],
        [0.3800, 0.6200],
        [0.6998, 0.3002]])
>>> t.softmax(1)
tensor([[0.2536, 0.7464],
        [0.3800, 0.6200],
        [0.6998, 0.3002]])
>>> my_softmax(t) == t.softmax(1)
tensor([[False,  True],
        [False, False],
        [ True,  True]])

Why do these different implementations produce different results?

CodePudding user response:

This works. PyTorch's softmax subtracts each row's maximum before exponentiating, which is a standard numerical-stability trick. Since softmax is invariant to adding a constant to every element of a row, the result is mathematically identical to your version, but the intermediate floating-point values round differently, which is why elementwise == comes back False in places. Compare floating-point tensors with torch.allclose instead:

import torch

def my_softmax(x):
    # Subtract the per-row max before exponentiating, the same numerical
    # stability trick PyTorch applies internally. torch.max along a dim
    # returns (values, indices), hence the [0].
    maxes = torch.max(x, 1, keepdim=True)[0]
    x_exp = torch.exp(x - maxes)
    x_exp_sum = torch.sum(x_exp, 1, keepdim=True)
    return x_exp / x_exp_sum

t = torch.randn(3, 2) 

s1 = my_softmax(t)
s2 = t.softmax(1)
print(torch.allclose(s1, s2))  # prints: True
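
As a side note, the max subtraction is not only about matching PyTorch's result closely; the naive exp-then-normalize version overflows for large logits. A quick illustrative check (the value 1000 is arbitrary, just large enough to overflow float32 exp):

big = torch.tensor([[1000.0, 1000.0]])
print(big.exp())                                   # tensor([[inf, inf]])
print(big.exp() / big.exp().sum(1, keepdim=True))  # tensor([[nan, nan]])
print(big.softmax(1))                              # tensor([[0.5000, 0.5000]])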

P.S. Adapted from this discussion: https://discuss.pytorch.org/t/how-to-implement-the-exactly-same-softmax-as-f-softmax-by-pytorch/44263
