Why is a combination of numpy functions faster than np.mean?

I am wondering what the fastest way to compute a mean in NumPy is. I used the following code to experiment with it:

import time
import numpy as np

n = 10000
p = np.array([1] * 1000000)

t1 = time.time()
for x in range(n):
    np.divide(np.sum(p), p.size)
t2 = time.time()

print(t2-t1)

3.9222593307495117

t3 = time.time()
for x in range(n):
    np.mean(p)
t4 = time.time()

print(t4-t3)

5.271147012710571

I would have assumed that np.mean would be faster, or at least equivalent in speed; however, the combination of NumPy functions turns out to be quicker. Why is that?

CodePudding user response:

For integer input, numpy.mean computes the sum in float64 by default. This guards against integer overflow, but it requires converting every element to float64.

Your code with numpy.sum accumulates in the array's integer dtype and converts only once, after the sum has been computed.
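
You can check both halves of this explanation directly: compare the result dtypes, and construct an input where the integer sum actually overflows. A minimal sketch (the exact integer dtype may differ on some platforms):

import numpy as np

p = np.array([1] * 1000000)

# np.sum keeps the array's integer dtype; np.mean accumulates in float64
print(np.sum(p).dtype)   # int64 on most platforms
print(np.mean(p).dtype)  # float64

# The float64 accumulator is what guards np.mean against integer overflow:
big = np.array([2**62, 2**62])
print(np.sum(big) / big.size)  # int64 sum wraps around: a large negative value
print(np.mean(big))            # 4.611686018427388e+18, the correct mean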

CodePudding user response:

np.mean is a thin Python-level wrapper that dispatches to the same C reduction machinery np.sum uses, so the actual arithmetic runs in C in both cases. The wrapper does add a small amount of per-call dispatch overhead, but with a million-element array that overhead is negligible.

Conceptually, np.mean sums the array and then divides by its size, so np.divide(np.sum(p), p.size) performs the same arithmetic. The timing gap here comes mostly from the accumulator dtype described in the other answer: np.mean converts every element to float64, while np.sum stays in the integer dtype.
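
One way to confirm that the accumulator dtype, not the Python wrapper, dominates is to pass dtype= to np.mean so it accumulates in the array's own integer type; its timing should then land close to the np.sum / np.divide version. A rough sketch (note the result then becomes an integer mean, and absolute timings vary by machine):

import time
import numpy as np

n = 10000
p = np.array([1] * 1000000)

t0 = time.time()
for _ in range(n):
    np.mean(p, dtype=p.dtype)  # accumulate in int64, skipping the float64 conversion
t1 = time.time()

print(t1 - t0)  # expected to be close to the np.sum / np.divide timing above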

In general, vectorized NumPy operations run in C and are far faster than equivalent pure-Python loops, especially for large arrays.
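
The contrast with genuinely Python-level iteration is easy to demonstrate: the built-in sum walks the array one NumPy scalar at a time and is dramatically slower than either NumPy reduction. A minimal sketch (expect a gap of roughly two orders of magnitude, though exact figures vary):

import time
import numpy as np

p = np.array([1] * 1000000)

t0 = time.time()
np.mean(p)         # a single vectorized C reduction
print(time.time() - t0)

t0 = time.time()
sum(p) / p.size    # pure-Python loop over one million NumPy scalars
print(time.time() - t0)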
