The Goal
I have an M-length array of 3-dimensional coordinate points as floats. I would like to create a 3-dimensional numpy array of a predefined shape filled with ellipsoids of a given float radius centered on these points. Because this is intended for image manipulation, I call each value in the array a "pixel". If these ellipsoids overlap, I would like to assign each overlapping pixel to the closer center by euclidean distance. The final output will be a numpy array with a background of 0 and pixels within ellipsoids numbered 1, 2, ... M, corresponding to the initial list of coordinates, similar to the output of scipy's ndimage.label(...).
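For clarity, this illustrative snippet (with a made-up tiny binary array, not my real data) shows the output format I am after, using ndimage.label as the reference: background stays 0 and each blob gets an integer label.

```python
import numpy as np
from scipy import ndimage as ndi

# Tiny illustrative binary array, not real data.
binary = np.array([[1, 1, 0, 0],
                   [0, 0, 0, 1],
                   [0, 0, 0, 1]])

# ndi.label assigns 1, 2, ... to each connected region, 0 to background.
labeled, num = ndi.label(binary)
print(labeled)
# [[1 1 0 0]
#  [0 0 0 2]
#  [0 0 0 2]]
print(num)  # 2
```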
Below, I have a naive approach which considers every position in the output array and compares it to every defined center, creating a binary array with a value of 1 for any pixel inside any ellipsoid. It then uses scikit-image to watershed this binary array. While this code works, it is prohibitively slow for my use, both because it considers every pixel and center pair and because it performs the watershedding separately. How can I speed up this code?
Naive Example
import numpy as np

def define_centromeres(template_image, centers_of_mass, xradius=4.5, yradius=4.5, zradius=3.5):
    """Creates a binary N-dimensional numpy array of ellipsoids.

    :param template_image: An N-dimensional numpy array of the same shape as the output array.
    :param centers_of_mass: A list of lists of N floats defining the centers of spots.
    :param xradius: A float defining the radius in pixels in the x direction of the ellipsoids.
    :param yradius: A float defining the radius in pixels in the y direction of the ellipsoids.
    :param zradius: A float defining the radius in pixels in the z direction of the ellipsoids.
    :return: A binary N-dimensional numpy array.
    """
    out = np.full_like(template_image, 0, dtype=int)
    for idx, val in np.ndenumerate(template_image):
        z, x, y = idx
        for point in centers_of_mass:
            pz, px, py = point[0], point[1], point[2]
            if ((z - pz) / zradius) ** 2 + ((x - px) / xradius) ** 2 + ((y - py) / yradius) ** 2 <= 1:
                out[z, x, y] = 1
                break
    return out
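For reference, the ellipsoid membership test in the inner loop can be evaluated for all pixels at once with numpy broadcasting. This is only a sketch for a single hypothetical center (the radii and shape below are the example values from this question), not the full labeling:

```python
import numpy as np

# Sketch: vectorized version of the per-pixel ellipsoid test, for one
# example center at (7, 50, 50) in a (15, 100, 100) volume.
zr, xr, yr = 3.5, 4.5, 4.5
pz, px, py = 7.0, 50.0, 50.0

# Open grids broadcast against each other to the full (15, 100, 100) shape.
z, x, y = np.ogrid[:15, :100, :100]
inside = (((z - pz) / zr) ** 2
          + ((x - px) / xr) ** 2
          + ((y - py) / yr) ** 2) <= 1.0

print(inside.shape)  # boolean mask of the ellipsoid, shape (15, 100, 100)
```

This removes the Python-level loop over pixels, though it still scales with the number of centers unless each test is restricted to a bounding box around its center.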
The watershed step uses scikit-image; I don't expect a speed-up can be found by changing the code inside this method:
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def watershed_image(binary_input_image):
    bg_distance = ndi.distance_transform_edt(binary_input_image,
                                             return_distances=True,
                                             return_indices=False)
    local_maxima = peak_local_max(bg_distance, min_distance=1, labels=binary_input_image)
    bg_mask = np.zeros(bg_distance.shape, dtype=bool)
    bg_mask[tuple(local_maxima.T)] = True
    marks, _ = ndi.label(bg_mask)
    output_watershed = watershed(-bg_distance, marks, mask=binary_input_image)
    return output_watershed
Small-scale example data:
zdim, xdim, ydim = 15, 100, 100
example_shape = np.zeros((zdim, xdim, ydim))
example_points = np.random.random_sample(size=(10, 3)) * np.array([zdim, xdim, ydim])

center_spots_image = define_centromeres(example_shape, example_points)
watershed_spots = watershed_image(center_spots_image)
Output:
(Image of the labeled output omitted.)