Why does a python function work in parallel even if it should not?


I am running this code using the healpy package. I am not using multiprocessing, and I need it to run on a single core. It worked fine for a while, but when I run it now, the function healpy.projector.GnomonicProj.projmap takes all the available cores.

This is the offending code block:

def Stacking () :

    f = lambda x,y,z: pixelfunc.vec2pix(xsize,x,y,z,nest=False)
    map_array = pixelfunc.ma_to_array(data)
    im = np.zeros((xsize, xsize))
    plt.figure()

    for i in range (nvoids) :
        sys.stdout.write("\r" + str(i+1) + "/" + str(nvoids))
        sys.stdout.flush()
        proj = hp.projector.GnomonicProj(rot=[rav[i],decv[i]], xsize=xsize, reso=2*nRad*rad_deg[i]*60/(xsize))
        im += proj.projmap(map_array, f)

    im/=nvoids
    plt.imshow(im)
    plt.colorbar()
    plt.title(title + " (Map)")
    plt.savefig("../Plots/stackedMap_" + name + ".png")

    return im

Does anyone know why this function is running in parallel? And, more importantly, does anyone know a way to make it run on a single core?

Thank you!

CodePudding user response:

In this thread they recommend setting the environment variable OMP_NUM_THREADS accordingly.

The following worked:

import os
os.environ['OMP_NUM_THREADS'] = '1'
import healpy as hp
import numpy as np

Setting os.environ['OMP_NUM_THREADS'] = '1' has to be done before importing the numpy and healpy libraries, because the compiled backends read the variable once at load time.
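Other compiled backends (OpenBLAS, MKL, numexpr) read similar variables, so a more defensive sketch caps all of the common ones before the scientific imports. Which of these variables healpy itself honors beyond OMP_NUM_THREADS is an assumption; the names are the standard ones for those libraries:

```python
import os

# These must be set before numpy/healpy are imported, because the
# compiled backends read them once at load time.
for var in ("OMP_NUM_THREADS",        # OpenMP (used by healpy)
            "OPENBLAS_NUM_THREADS",   # OpenBLAS
            "MKL_NUM_THREADS",        # Intel MKL
            "NUMEXPR_NUM_THREADS"):   # numexpr
    os.environ[var] = "1"

import numpy as np  # picks up the single-thread limit at import time
# import healpy as hp  # healpy would pick up the limit here too
```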

CodePudding user response:

Python has something called the Global Interpreter Lock (GIL), which means that bytecode running in a single Python interpreter (as is the case with threading) cannot run in parallel; it can still run out of order, but not concurrently. The parallelism you are seeing comes from healpy's compiled code, which runs outside the GIL, which is why the OMP_NUM_THREADS setting is what controls it.
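A small self-contained illustration of the GIL's effect (timings will vary by machine; the point is that two CPU-bound Python threads take roughly as long as doing the work serially, not half as long):

```python
import threading
import time

def busy(n):
    # Pure-Python CPU-bound loop; the thread holds the GIL while
    # executing this bytecode.
    total = 0
    for i in range(n):
        total += i
    return total

N = 2_000_000

# Run the work twice serially.
start = time.perf_counter()
busy(N)
busy(N)
serial = time.perf_counter() - start

# Run the same work in two threads.
threads = [threading.Thread(target=busy, args=(N,)) for _ in range(2)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
threaded = time.perf_counter() - start

# Because of the GIL, 'threaded' is roughly equal to 'serial',
# not half of it.
```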
