Ways of speeding up a loop that iterates over a large array


I'm looking to speed up the code below, which loops through each voxel in brain, a numpy.ndarray that has been split into regions (numbered 0 to 50), and replaces each region number with a corresponding value from the array region_vals (which contains 51 numbers). As the dimensions of the brain array are 182x218x182, this loop takes around 12 seconds to finish.

import numpy as np

# Sample data (much smaller than the real 182x218x182 volume)
brain = np.random.randint(10, size=(6,5,5))
region_vals = np.random.randint(250, size=11)

# Iterate through each voxel in the brain
for x in range(brain.shape[0]):
    for y in range(brain.shape[1]):
        for z in range(brain.shape[2]):

            region = brain[x, y, z]  # Get region number

            # Reassign voxel value
            brain[x, y, z] = region_vals[region]

Multi-threading is not an option here as I am already running this code in parallel.

Is there a way of speeding up the loop or removing the loop entirely?

CodePudding user response:

I believe you can remove the loop entirely by using the initial brain array as an index into region_vals, provided region_vals is also a numpy array; NumPy's fancy indexing performs the lookup for every voxel at once: brain = region_vals[brain]
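As a quick check, here is a minimal sketch reusing the sample arrays from the question (the names looped and vectorized are only for illustration); the vectorized lookup gives the same result as the original triple loop:

import numpy as np

# Sample data, same shapes as in the question
brain = np.random.randint(10, size=(6, 5, 5))
region_vals = np.random.randint(250, size=11)

# Reference result computed with the original triple loop
looped = brain.copy()
for x in range(looped.shape[0]):
    for y in range(looped.shape[1]):
        for z in range(looped.shape[2]):
            looped[x, y, z] = region_vals[looped[x, y, z]]

# Vectorized version: fancy indexing looks up every voxel's
# region value in a single NumPy operation
vectorized = region_vals[brain]

assert np.array_equal(looped, vectorized)

Since fancy indexing allocates a new array rather than modifying brain in place, assigning the result back with brain = region_vals[brain] is all that's needed, and it should be dramatically faster than the explicit Python loop on the full 182x218x182 volume.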
