NumPy's tanh seems much slower than its PyTorch equivalent:
import torch
import numpy as np
data = np.random.randn(128, 64, 32).astype(np.float32)

%timeit torch.tanh(torch.tensor(data))
820 µs ± 24.6 µs per loop (mean ± std. dev. of 7 runs, 1,000 loops each)

%timeit np.tanh(data)
3.89 ms ± 95.4 µs per loop (mean ± std. dev. of 7 runs, 100 loops each)
Is there a way to speed up tanh in NumPy? Thanks!
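Note that torch.tensor(data) copies the array, so the PyTorch timing above also includes that host copy. A variant that should leave the copy out (assuming, as documented, that torch.from_numpy returns a tensor sharing memory with the NumPy array; I have not re-timed it here):

%timeit torch.tanh(torch.from_numpy(data))  # from_numpy shares memory, so only tanh itself is timed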
CodePudding user response:
You could try numexpr, as follows:
pip install numexpr
Then:
import numexpr as ne
import numpy as np
data = np.random.randn(128, 64, 32).astype(np.float32)
resne = ne.evaluate("tanh(data)")
resnp = np.tanh(data)
Then check that the results match:
In [16]: np.allclose(resne,resnp)
Out[16]: True
And compare timings:
In [14]: %timeit res = ne.evaluate("tanh(data)")
311 µs ± 1.26 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
In [15]: %timeit np.tanh(data)
1.85 ms ± 7.43 µs per loop (mean ± std. dev. of 7 runs, 1000 loops each)
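Two knobs that may help further, as a sketch rather than anything measured here: numexpr evaluates expressions across multiple threads, and evaluate accepts an out argument, so you can reuse a preallocated buffer instead of allocating a new result array on every call (the thread count below is just an example value):

import numexpr as ne
import numpy as np

data = np.random.randn(128, 64, 32).astype(np.float32)
out = np.empty_like(data)           # preallocated result buffer, reused across calls

ne.set_num_threads(4)               # example value; tune for your machine
ne.evaluate("tanh(data)", out=out)  # writes the result into `out` in place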