How to make this loop parallel and faster?

Time:11-08

I have a set of images from which I want to extract sub-images of size 128×128 with a given stride; each original image must be larger than this size in both dimensions. I have written the following function:

def sliding_window(image, stride, imgSize):
    height, width, _ = image.shape
    img = []
    a1 = list(range(0, height - imgSize + stride, stride))
    a2 = list(range(0, width - imgSize + stride, stride))
    # make sure the last window in each axis is flush with the image border
    if a1[-1] + imgSize != height:
        a1[-1] = height - imgSize
    if a2[-1] + imgSize != width:
        a2[-1] = width - imgSize
    for y in a1:
        for x in a2:
            im1 = image[y:y + imgSize, x:x + imgSize, :]
            img.append(np.array(im1))
    return img
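As a quick sanity check of the windowing logic (with the `+` operators restored), the following self-contained sketch runs the function on a dummy 300×300 image; with stride 100 and window 128 it should produce 3 window positions per axis (0, 100, and 172, the last one snapped to the border), i.e. 9 patches in row-major order:

```python
import numpy as np

def sliding_window(image, stride, imgSize):
    height, width, _ = image.shape
    a1 = list(range(0, height - imgSize + stride, stride))
    a2 = list(range(0, width - imgSize + stride, stride))
    # snap the last window in each axis to the image border
    if a1[-1] + imgSize != height:
        a1[-1] = height - imgSize
    if a2[-1] + imgSize != width:
        a2[-1] = width - imgSize
    return [image[y:y + imgSize, x:x + imgSize, :] for y in a1 for x in a2]

dummy = np.zeros((300, 300, 3), dtype=np.uint8)
patches = sliding_window(dummy, 100, 128)  # 3 x 3 = 9 patches of 128x128x3
```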

and the main snippet from which I call this function:

im_counter = 0

image_data = []
image_label = []
for cl in file_images:
    for img_file in data[cl]:
        path = img_path + cl + "/" + img_file
        im = image.load_img(path)
        im = image.img_to_array(im)
        im_counter += 1
        if im_counter % 500 == 0:
            print("{} images processed...".format(im_counter))
        if im.shape[0] >= SIZE and im.shape[1] >= SIZE:
            img = sliding_window(im, STRIDE, SIZE)
            for i in range(len(img)):
                if img[i].shape[2] >= 3:
                    temp_img = preprocess_input(img[i])
                    image_data.append(temp_img)
                    del temp_img
                    gc.collect()
                    image_label.append(class_dictionary[cl])

Now, the above snippet takes forever to run on only 3000 images (at least 25 hours, using just 1 CPU core). I have server access and the CPU has many cores, so can you please suggest a parallelized version that runs faster?

NOTE: The order in which the sub-images are returned from each original image matters very much; no arbitrary ordering is allowed.

CodePudding user response:

Here is a rough outline of something you can try.

def main():
    image_data = []
    image_label = []
    # Create a list of tuples consisting of the file path and the class
    # dictionary info for each of the cl arguments
    args = []
    for cl in file_images:
        for img_file in data[cl]:
            path = img_path + cl + "/" + img_file
            args.append((path, class_dictionary[cl]))

    with multiprocessing.Pool(processes=30) as pool:   # or however many processes
        image_counter = 0
        # Use multiprocessing to call handle_one_image(path, info)
        # and return the results in submission order
        for images, info in pool.starmap(handle_one_image, args):
            # images is the list of returned patches; info is the
            # class_dictionary entry that we passed through
            for im in images:
                image_counter += 1
                image_data.append(im)
                image_label.append(info)

def handle_one_image(path, info):
    image_data = []
    im = image.load_img(path)
    im = image.img_to_array(im)
    if im.shape[0] >= SIZE and im.shape[1] >= SIZE:
        img = sliding_window(im, STRIDE, SIZE)
        for i in range(len(img)):
            if img[i].shape[2] >= 3:
                temp_img = preprocess_input(img[i])
                image_data.append(temp_img)
        return image_data, info
    else:
        # indicate that no images are available
        return [], info