Is there any way to optimize a triple loop in Python by using numpy or other resources?


I'm having trouble finding a way to optimize a triple loop in Python. I will give the code directly, as that is the simplest way to show what I have to compute:

Given two 2-D arrays named samples (M x N) and D (N x N), along with the output array results (N x N):

for sigma in range(M):
    for i in range(N):
        for j in range(N):
            results[i, j] += (1/N) * (samples[sigma, i]*samples[sigma, j]
                                      - samples[sigma, i]*D[j, i]
                                      - samples[sigma, j]*D[i, j])
return results

It does the job, but it is not efficient at all in Python. I tried to vectorize away the for i / for j loops, but I cannot compute it correctly with sigma in the way.

Does someone have an idea on how to optimize those few lines? Any suggestions are welcome, such as numpy, numexpr, etc.

CodePudding user response:

One way I found to improve your code (i.e. reduce the number of loops) is by using np.meshgrid.

Here is the improvement I found. It took some fiddling, but it gives the same output as your triple-loop code. I kept the same code structure so you can see which part corresponds to what. I hope this is of use to you!

for sigma in range(M):
    xx, yy = np.meshgrid(samples[sigma], samples[sigma])

    results += (1/N) * (xx * yy
                        - yy * D.T
                        - xx * D)

print(results) # or return results
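
As a sketch of an alternative (reusing samples, D, M and N from the question, and assuming results starts out as zeros), the same per-sigma update can also be written with np.outer and plain broadcasting, without building the two meshgrid arrays:

import numpy as np

results = np.zeros((N, N))
for sigma in range(M):
    s = samples[sigma]                 # row sigma, shape (N,)
    results += (np.outer(s, s)         # s[i]*s[j]
                - s[:, None] * D.T     # s[i]*D[j, i]
                - s[None, :] * D) / N  # s[j]*D[i, j]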


Edit: Here's a small script to verify that the results are as expected:

import numpy as np
M, N = 3, 4
rng = np.random.default_rng(seed=42)


samples = rng.random((M, N))
D       = rng.random((N, N))
results = rng.random((N, N))

results_old = results.copy()
results_new = results.copy()

for sigma in range(M):
    for i in range(N):
        for j in range(N):
            results_old[i, j] += (1/N) * (samples[sigma, i]*samples[sigma, j]
                                          - samples[sigma, i]*D[j, i]
                                          - samples[sigma, j]*D[i, j])

print('\n\nresults_old', results_old, sep='\n')

for sigma in range(M):
    xx, yy = np.meshgrid(samples[sigma], samples[sigma])

    results_new += (1/N) * (xx * yy
                            - yy * D.T
                            - xx * D)

print('\n\nresults_new', results_new, sep='\n')
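
To check the two results programmatically rather than by eye, one could also append a comparison; since these are floating-point arrays, np.allclose is safer than exact equality:

assert np.allclose(results_old, results_new)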

Edit 2: Getting rid of the loops entirely: it is a bit convoluted, but it essentially does the same thing.

M, N = samples.shape
xxx, yyy = np.meshgrid(samples, samples)
split_x = np.array(np.hsplit(np.vsplit(xxx, M)[0], M))
split_y = np.array(np.vsplit(np.hsplit(yyy, M)[0], M))

results += np.sum(
    (1/N) * (split_x*split_y
             - split_y*D.T
             - split_x*D), axis=0)

print(results) # or return results

CodePudding user response:

I found it easier to break the problem into smaller steps and work on it until we have a single expression.

Going from your original formulation:

for sigma in range(M):
    for i in range(N):
        for j in range(N):
            results[i, j] += (1/N) * (samples[sigma, i]*samples[sigma, j]
                                      - samples[sigma, i]*D[j, i]
                                      - samples[sigma, j]*D[i, j])

The first thing is to eliminate the j index in the innermost loop. For this we start working with vectors instead of single elements:

for sigma in range(M):
    for i in range(N):
        results[i, :] += (1/N) * (samples[sigma, i]*samples[sigma, :]
                                  - samples[sigma, i]*D[:, i]
                                  - samples[sigma, :]*D[i, :])

Then we eliminate the second loop, the one with the i index. In this step we start to think in terms of matrices; each iteration of the remaining loop adds a whole "sigma matrix" to the result.

for sigma in range(M):
    results += (1/N) * (samples[sigma, :, np.newaxis] * samples[sigma]
                        - samples[sigma, :, np.newaxis] * D.T
                        - samples[sigma, :] * D)

I strongly recommend using this step as the solution, since vectorizing even further would require too much memory for a big value of M. But, just for knowledge...

think of the matrices as 3-dimensional objects: we do the calculations and sum at the end along axis zero, as:

results = (1/N) * (samples[:, :, np.newaxis] * samples[:,np.newaxis] - samples[:, :, np.newaxis] * D.T - samples[:, np.newaxis, :] * D).sum(axis=0)
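
Just as a sketch for the memory-conscious case (reusing samples, D and N from the question, and assuming results should start from zeros): since D does not depend on sigma, the sum over sigma can also be factored out analytically, so no (M, N, N) intermediate is ever built:

import numpy as np

# sum over sigma of samples[sigma, i]*samples[sigma, j] is (samples.T @ samples)[i, j];
# the two D terms only need the column sums of samples.
col_sum = samples.sum(axis=0)            # shape (N,)
results = (samples.T @ samples
           - col_sum[:, None] * D.T      # col_sum[i] * D[j, i]
           - col_sum[None, :] * D) / N   # col_sum[j] * D[i, j]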

CodePudding user response:

In order to vectorize for loops, we can make use of broadcasting and then reducing along any axes that are not reflected by the output array. To do so, we can "assign" one axis to each of the for loop indices (as a convention). For your example this means that all input arrays can be reshaped to have dimension 3 (i.e. len(a.shape) == 3); the axes correspond then to sigma, i, j respectively. Then we can perform all operations with the broadcasted arrays and finally reduce (sum) the result along the sigma axis (since only i, j are reflected in the result):

# Ordering of axes: (sigma, i, j)
samples_i = samples[:, :, np.newaxis]
samples_j = samples[:, np.newaxis, :]
D_ij = D[np.newaxis, :, :]
D_ji = D.T[np.newaxis, :, :]
return (samples_i*samples_j - samples_i*D_ji - samples_j*D_ij).sum(axis=0) / N

The following is a complete example that compares the reference code (using for loops) with the above version; note that I've removed the 1/N part in order to keep computations in the domain of integers and thus make the array equality test exact.

import time
import numpy as np


def timeit(func):
    def wrapper(*args):
        t_start = time.process_time()
        res = func(*args)
        t_total = time.process_time() - t_start
        print(f'{func.__name__}: {t_total:.3f} seconds')
        return res
    return wrapper


rng = np.random.default_rng()

M, N = 100, 200
samples = rng.integers(0, 100, size=(M, N))
D = rng.integers(0, 100, size=(N, N))


@timeit
def reference(samples, D):
    results = np.zeros(shape=(N, N))
    for sigma in range(M):
        for i in range(N):
            for j in range(N):
                results[i, j] += (samples[sigma, i]*samples[sigma, j]
                                  - samples[sigma, i]*D[j, i]
                                  - samples[sigma, j]*D[i, j])
    return results


@timeit
def new(samples, D):
    # Ordering of axes: (sigma, i, j)
    samples_i = samples[:, :, np.newaxis]
    samples_j = samples[:, np.newaxis, :]
    D_ij = D[np.newaxis, :, :]
    D_ji = D.T[np.newaxis, :, :]
    return (samples_i*samples_j - samples_i*D_ji - samples_j*D_ij).sum(axis=0)


assert np.array_equal(reference(samples, D), new(samples, D))

This gives me the following benchmark results:

reference: 6.465 seconds
new: 0.133 seconds