How to speed up a for loop over a big array in Python?


After some research on Stack Overflow, I didn't find a simple answer to my problem, so I'm sharing my code in the hope of getting some help.

import numpy as np

S = np.random.random((495,930,495,3,3))
#The shape of S is (495,930,495,3,3)

#I want to calculate for each small array (z,y,x,3,3) some features
for z in range(S.shape[0]):
    for y in range(S.shape[1]):
        for x in range(S.shape[2]):
            res[z,y,x,0]=np.array(np.linalg.det(S[z,y,x])/np.trace(S[z,y,x]))
            res[z,y,x,1]=np.array(S[z,y,x].mean())
            res[z,y,x,2:]=np.array(np.linalg.eigvals(S[z,y,x]))

Here is my problem: the S array is huge, so I was wondering whether it is possible to make this for loop faster.

CodePudding user response:

I had to reduce the shape to (49,93,49,3,3) so that it runs on my hardware in an acceptable time. I was able to shave off 5-10% by avoiding unnecessary work (not by optimizing your algorithm). Unnecessary work includes, but is not limited to:

  • Performing (global) lookups
  • Calculating the same value several times

You might also want to try a different python runtime, such as PyPy instead of CPython.

Here is my updated version of your script:

#!/usr/bin/python

import numpy as np

def main():
    # avoid lookups
    array = np.array
    trace = np.trace
    eigvals = np.linalg.eigvals
    det = np.linalg.det

    #The shape of S is (495,930,495,3,3)
    shape = (49,93,49,3,3) # so my computer can run it
    S=np.random.random(shape)

    # res was missing from the question; I assume five features per (3,3)
    # block: det/trace, mean, and three (possibly complex) eigenvalues
    res = np.empty(shape[:3] + (5,), dtype=complex)

    #I want to calculate for each small array (z,y,x,3,3) some features
    # get shape only once, instead of z times for shape1 and z*y times for shape2
    shape1 = S.shape[1]
    shape2 = S.shape[2]
    for z in range(S.shape[0]):
        for y in range(shape1):
            for x in range(shape2):
                # get value once instead of 4 times
                s = S[z,y,x]
                res[z,y,x,0]=array(det(s)/trace(s))
                res[z,y,x,1]=array(s.mean())
                res[z,y,x,2:]=array(eigvals(s))

# function to have local (vs. global) lookups
main()

Runtime was reduced from 25 to 23 seconds (measured with hyperfine).
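That said, the biggest win here is usually not micro-optimizing the loop but removing it entirely: `np.linalg.det`, `np.linalg.eigvals` and `np.trace` all accept stacked arrays and operate on the trailing (3, 3) matrices in one call. Here is a vectorized sketch (not part of your original code); it uses a small stand-in shape so it runs quickly, but the same calls accept the full (495,930,495,3,3) array if it fits in memory:

```python
import numpy as np

# Small stand-in for the (495, 930, 495, 3, 3) array from the question
S = np.random.random((8, 12, 8, 3, 3))

# Five features per (3, 3) block; complex dtype because eigenvalues of a
# general real matrix may be complex
res = np.empty(S.shape[:3] + (5,), dtype=complex)

# det, trace and eigvals broadcast over the leading "stack" dimensions,
# so no Python-level loop is needed
res[..., 0] = np.linalg.det(S) / np.trace(S, axis1=-2, axis2=-1)
res[..., 1] = S.mean(axis=(-2, -1))
res[..., 2:] = np.linalg.eigvals(S)
```

This pushes all the per-matrix work into compiled LAPACK loops instead of three nested Python loops, which is typically orders of magnitude faster than the original.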


CodePudding user response:

I don't know your requirements or resources, but maybe you can achieve your goal using thread programming or some form of parallel computing: anything that divides the big task into smaller tasks running at the same time and combines the results at the end. This is just the general idea.
