Numba parallel time per thread usage in python

Time:06-26

When I run this program in parallel using njit from numba, I noticed that using many threads does not make much of a difference. In fact, from 1 to 5 threads the runtime decreases (which is expected), but beyond that it gets slower. Why is this happening?

from numba import njit,prange,set_num_threads,get_num_threads
import numpy as np
@njit(parallel=True)
def test(x,y):
    z=np.empty((x.shape[0],x.shape[0]),dtype=np.float64)
    for i in prange(x.shape[0]):
        for j in range(x.shape[0]):
            z[i,j]=x[i,j]*y[i,j]
    return z
x=np.random.rand(10000,10000)
y=np.random.rand(10000,10000)
for i in range(16):   
    set_num_threads(i + 1)
    print("Number of threads :",get_num_threads())
    %timeit -r 1 -n 10 test(x,y)
Number of threads : 1
234 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)
Number of threads : 2
178 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)
Number of threads : 3
168 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)
Number of threads : 4
161 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)
Number of threads : 5
148 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)
Number of threads : 6
152 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)
Number of threads : 7
152 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)
Number of threads : 8
153 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)
Number of threads : 9
154 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)
Number of threads : 10
156 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)
Number of threads : 11
158 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)
Number of threads : 12
157 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)
Number of threads : 13
158 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)
Number of threads : 14
160 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)
Number of threads : 15
160 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)
Number of threads : 16
161 ms ± 0 ns per loop (mean ± std. dev. of 1 run, 10 loops each)

I tested this in a Jupyter Notebook (Anaconda) on a CPU with 8 cores and 16 threads.

CodePudding user response:

The code is memory-bound, so the RAM bandwidth is saturated with only a few cores.

Indeed, z[i,j]=x[i,j]*y[i,j] causes two memory loads of 8 bytes each, one 8-byte store, and an additional 8-byte load due to the write-allocate cache policy of x86-64 processors (a cache line must be read before it can be written in this case). That means 32 bytes loaded/stored per loop iteration while only one multiplication needs to be done. Modern mainstream x86-64 processors can do 2x4 double-precision FP multiplications per cycle and operate at 3-5 GHz (Intel server processors can even do 2x8 DP FP multiplications per cycle). Meanwhile, a good mainstream PC can only reach 40-60 GiB/s of memory bandwidth, and a high-performance server 200-350 GiB/s.
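A quick back-of-the-envelope check of the numbers above (assuming the best measured time of 148 ms at 5 threads, and the 32 bytes of traffic per element just described) shows the effective memory bandwidth the loop is already achieving:

```python
# Rough bandwidth estimate for the 10000x10000 element-wise multiply above.
n = 10_000 * 10_000           # elements per array
bytes_per_elem = 4 * 8        # load x, load y, write-allocate read of z, store z
traffic = n * bytes_per_elem  # total bytes moved through RAM per call
best_time = 0.148             # best measured time in seconds (5 threads)

bandwidth = traffic / best_time / 1e9
print(f"traffic: {traffic / 1e9:.1f} GB, effective bandwidth: {bandwidth:.1f} GB/s")
# → traffic: 3.2 GB, effective bandwidth: 21.6 GB/s
```

A handful of threads is enough to push tens of GB/s through RAM, so adding more threads past that point only adds scheduling overhead and contention.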

There is no way to speed up memory-bound code like this in Numba. C/C++ code can improve it a bit by avoiding write-allocates (up to 1.33 times faster, since only 24 of the 32 bytes per iteration would then be moved). The best solution is to operate on smaller blocks if possible, and to merge computing steps so as to apply more FP operations per byte transferred.
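As an illustration of the "merge computing steps" advice, here is a dependency-free NumPy sketch (the function names two_pass and fused_blocked, and the follow-up reduction step, are mine, not from the question): when the product is consumed by a later step, fusing the multiply with that step per block avoids ever writing the full product array to RAM.

```python
import numpy as np

def two_pass(x, y):
    # Naive: materializes the full product array in RAM,
    # then reads the whole thing back to reduce it.
    z = x * y            # writes n*8 bytes (plus write-allocate reads)
    return z.sum()       # reads those n*8 bytes back again

def fused_blocked(x, y, block=256):
    # Fused + blocked: multiply and reduce one row-block at a time,
    # so the intermediate product is small enough to stay in cache.
    total = 0.0
    for i in range(0, x.shape[0], block):
        total += (x[i:i + block] * y[i:i + block]).sum()
    return total

rng = np.random.default_rng(0)
x = rng.random((1000, 1000))
y = rng.random((1000, 1000))
assert np.isclose(two_pass(x, y), fused_blocked(x, y))
```

The same blocking pattern works inside a Numba prange kernel; the point is that each byte loaded from RAM now participates in more arithmetic before being discarded.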

Actually, the speed of RAM is known to increase slowly compared to the computing power of processors. This problem was identified a few decades ago, and the gap between the two keeps getting bigger over time. It is known as the "memory wall", and it is very unlikely to get better in the future.
