Finding identity of touching labels/objects/masks in images using python

Time:06-01

Unfortunately, I couldn't find anything towards this topic so here goes:

I have an image as a numpy array containing masks for different nuclei of cells as integer numbers that looks like this:

https://i.stack.imgur.com/nn8hG.png

The individual masks have different values and the background is 0. Now, for every mask in that image, I would like to get the identity of any other masks touching it (if there are any). What I have so far is code that gets the pixel positions of every mask's value (via the argwhere function) and checks whether any of the 8 surrounding pixels is neither 0 nor its own value.

for i in range(1, np.max(mask_image) + 1):
    coordinates = np.argwhere(mask_image == i)
    touching_masks = []
    for pixel in coordinates:

        if mask_image[pixel[0] + 1, pixel[1]] != 0 and mask_image[pixel[0] + 1, pixel[1]] != i:
            touching_masks.append(mask_image[pixel[0] + 1, pixel[1]])  # bottom neighbour

        elif mask_image[pixel[0] - 1, pixel[1]] != 0 and mask_image[pixel[0] - 1, pixel[1]] != i:
            touching_masks.append(mask_image[pixel[0] - 1, pixel[1]])  # top neighbour

        elif mask_image[pixel[0], pixel[1] - 1] != 0 and mask_image[pixel[0], pixel[1] - 1] != i:
            touching_masks.append(mask_image[pixel[0], pixel[1] - 1])  # left neighbour

        elif mask_image[pixel[0], pixel[1] + 1] != 0 and mask_image[pixel[0], pixel[1] + 1] != i:
            touching_masks.append(mask_image[pixel[0], pixel[1] + 1])  # right neighbour

        elif mask_image[pixel[0] + 1, pixel[1] + 1] != 0 and mask_image[pixel[0] + 1, pixel[1] + 1] != i:
            touching_masks.append(mask_image[pixel[0] + 1, pixel[1] + 1])  # bottom-right neighbour

        elif mask_image[pixel[0] - 1, pixel[1] - 1] != 0 and mask_image[pixel[0] - 1, pixel[1] - 1] != i:
            touching_masks.append(mask_image[pixel[0] - 1, pixel[1] - 1])  # top-left neighbour

        elif mask_image[pixel[0] + 1, pixel[1] - 1] != 0 and mask_image[pixel[0] + 1, pixel[1] - 1] != i:
            touching_masks.append(mask_image[pixel[0] + 1, pixel[1] - 1])  # bottom-left neighbour

        elif mask_image[pixel[0] - 1, pixel[1] + 1] != 0 and mask_image[pixel[0] - 1, pixel[1] + 1] != i:
            touching_masks.append(mask_image[pixel[0] - 1, pixel[1] + 1])  # top-right neighbour

Since I have about 500 masks per image and a time series of about 200 images this is very slow and I would like to improve it. I tried a bit with regionprops, and skimage.segmentation and scipy but didn't find a proper function for that.

I would like to know whether

  1. there already is a pre-existing function that could do that (and which I blindly overlooked)
  2. one can retain only the positions of the argwhere function that are border-pixels of the mask and thereby reduce the number of input pixels for the checks of the surrounding 8 pixels. The condition being that these border-pixels always retain their original value as a form of identifier.
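Regarding idea 2, here is a minimal sketch (assuming scipy is available; the toy array stands in for the real mask image) of keeping only the border pixels of a mask: one step of binary erosion removes exactly the outline pixels, so the difference between the mask and its erosion is the border.

```python
import numpy as np
from scipy.ndimage import binary_erosion

# Toy label image with one 4x3 mask (value 1) on a zero background.
mask_image = np.zeros((6, 6), dtype=int)
mask_image[1:5, 1:4] = 1

i = 1
mask_i = mask_image == i
# Border pixels of mask i are exactly those removed by one erosion step.
border = mask_i & ~binary_erosion(mask_i)
border_coords = np.argwhere(border)  # only these need the 8-neighbour checks

# Of the 12 mask pixels, only the 10 on the outline remain to be checked.
```

The border pixels keep their image coordinates, so they still index into the original array and retain their label value as an identifier.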

Any kind of advice is much appreciated!

A bit more background information about why I am trying to do this:

I am currently acquiring timelapses of multiple cells over the course of several hours. Sometimes, after cell division, the two daughter nuclei stick to one another and can be missegmented as one nucleus or properly as two nuclei. This happens rarely, but I would like to filter out time-tracks of such cells that alternate between one and two masks. I also calculate the area of such cells, but filtering for unreasonable changes in mask area runs into two problems:

  1. Cells that wander into (or out of) the image can also display such size changes and
  2. misfocusing of the microscope can also result in smaller masks (and larger ones when proper focus is achieved again). Unfortunately, this also happens with our microscope from time to time during the timelapse.

My idea was to get the identity of touching masks throughout the timelapse to have one more criterion to take into account while filtering out such cells.

CodePudding user response:

You can find the shared boundary of the masks by using, e.g., the find_boundaries() function in skimage.segmentation. Sadly, this will also include the borders against the background, but we can filter those out by taking the xor with a mask of all foreground pixels.

from skimage.segmentation import find_boundaries

a = find_boundaries(mask_image)
b = find_boundaries(mask_image != 0)
touching_masks = np.logical_xor(a, b)

On my computer this takes about 0.05 seconds for a 1000x1000 image with 500 masks.

If you then want the values of the masks, you can just take

mask_values = mask_image.copy()
mask_values[~touching_masks] = 0

and find the neighboring values by using your code.
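To avoid falling back to the per-pixel loop for that last step, one option (a NumPy-only sketch, not part of the original answer) is to compare the label image against shifted copies of itself and collect the differing nonzero pairs:

```python
import numpy as np

# Toy label image: masks 1 and 2 touch along a column.
mask_image = np.zeros((5, 6), dtype=int)
mask_image[1:4, 1:3] = 1
mask_image[1:4, 3:5] = 2

pairs = set()
# Compare each pixel with its right and bottom neighbours (4-connectivity;
# adding the two diagonal shifts would extend this to 8-connectivity).
for shifted, original in (
    (mask_image[:, 1:], mask_image[:, :-1]),  # horizontal neighbours
    (mask_image[1:, :], mask_image[:-1, :]),  # vertical neighbours
):
    differ = (shifted != original) & (shifted != 0) & (original != 0)
    pairs.update(zip(original[differ].tolist(), shifted[differ].tolist()))
```

Each entry in `pairs` is a (label, neighbouring label) tuple; for the toy image above it contains only the contact between masks 1 and 2.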

CodePudding user response:

The skimage.graph.pixel_graph function will tell you which pixels in an image connect to other pixels. You can use this graph to answer your question — I think very quickly.

(Note that the image you shared is not a segmentation mask but an RGBA grayscale image with values in [0, 255], so I couldn't use it in the analysis below.)

Step 1: we build the pixel graph. For this, we only want to keep edges where the two labels are different, so we pass in an edge function that returns 1.0 when the values are different, and 0.0 otherwise.

import numpy as np
from skimage.graph import pixel_graph


def different_labels(center_value, neighbor_value, *args):
    return (center_value != neighbor_value).astype(float)


label_mask = ... # load your image here
g, nodes = pixel_graph(
        label_mask,
        mask=label_mask.astype(bool),
        edge_function=different_labels,
        connectivity=2,  # count diagonals in 2D
        )

Now, you need to know that g is in scipy.sparse.csr_matrix format, and that the row indices correspond to all the nonzero pixels in the image. To get back to the actual image position, you need the nodes array, which contains the map from matrix indices to pixel indices.

The matrix also contains all of the zero entries for pixels we don't care about, so get rid of them using the scipy.sparse.csr_matrix.eliminate_zeros() method.

To get our pairs of differently-labeled pixels, we convert the matrix to COO format, then grab the corresponding image coordinates and look up the label values:

g.eliminate_zeros()

coo = g.tocoo()
center_coords = nodes[coo.row]
neighbor_coords = nodes[coo.col]

center_values = label_mask.ravel()[center_coords]
neighbor_values = label_mask.ravel()[neighbor_coords]

Now we have a list of (i, j) pairs of nuclei that touch. (They are somewhat-arbitrarily arranged into center/neighbor. Also, the pairs appear both as (i, j) and (j, i).) You can do what you will with these arrays, e.g. save them to a text file:

touching_masks = np.column_stack((center_values, neighbor_values))
np.savetxt('touching_masks.txt', touching_masks, delimiter=',')

or make a dictionary mapping each nucleus to a list of neighboring nuclei:

from collections import defaultdict
pairs = defaultdict(list)

for i, j in zip(center_values, neighbor_values):
    pairs[i].append(j)
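Since each contact shows up as both (i, j) and (j, i), a small deduplication sketch (using stand-in arrays in place of the center_values/neighbor_values computed above) is to sort within each pair before taking unique rows:

```python
import numpy as np

# Stand-ins for the center_values / neighbor_values arrays computed above.
center_values = np.array([1, 2, 2, 3])
neighbor_values = np.array([2, 1, 3, 2])

# Sort within each row so (2, 1) becomes (1, 2), then keep unique rows.
pairs = np.sort(np.column_stack((center_values, neighbor_values)), axis=1)
unique_pairs = np.unique(pairs, axis=0)
```

Here the four directed entries collapse to the two undirected contacts (1, 2) and (2, 3).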

In general, it's good to avoid iterating over pixels and to use NumPy vectorized operations instead. The source code of the pixel_graph function might serve as further inspiration for how to think about this type of problem!
