I have a piece of code as below:
import cv2
import numpy as np
import operator
from functools import reduce
image = cv2.imread("<some image path>")
bgr = np.int16(image)
h, w, _ = image.shape
mask = np.zeros((h, w), np.uint8)
# Get all channels
blue = bgr[:,:,0]
green = bgr[:,:,1]
red = bgr[:,:,2]
rules = np.where(reduce(operator.and_, [(red > 100), (red > green), (red > blue)]))
# Create mask using above rules
mask[rules] = 255
### Then use cv2.findContours ...
This piece of code doesn't run as fast as I expected. I think I can make it faster by applying the conditions one by one, i.e.:
rule_1 = np.where(red > 100)
rule_2 = np.where(red[rule_1] > green)
rule_3 = np.where(red[rule_2] > blue)
mask[rule_3] = 255
Can the above method speed up my code? And if so, how do I do it? Many thanks!
CodePudding user response:
A great way is (adapted from Cris Luengo's comment):
mask = (255 * ((red > 100) & (red > green) & (red > blue))).astype(np.uint8)
The .astype(np.uint8) cast matters because multiplying a boolean array by a Python int generally yields a wider integer dtype, while cv2.findContours expects an 8-bit single-channel mask.
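For completeness, here is how that slots into the original pipeline (a minimal sketch; the retrieval mode and the two-value return of findContours, which is OpenCV 4.x behavior, are assumptions, since the question truncates that part):
import cv2
import numpy as np

image = cv2.imread("<some image path>")
bgr = np.int16(image)  # int16 copy, as in the question
blue, green, red = bgr[:, :, 0], bgr[:, :, 1], bgr[:, :, 2]

# One vectorized pass over the whole image, cast for findContours
mask = (255 * ((red > 100) & (red > green) & (red > blue))).astype(np.uint8)

contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)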
but if you need it faster, you can use Numba:
from numba import jit, prange

@jit(nopython=True, parallel=True)
def red_dominates(bgr, mask):
    M, N, _ = bgr.shape
    for i in prange(M):
        for j in prange(N):
            # OpenCV loads images in BGR order, so red is channel 2
            b = bgr[i, j, 0]
            g = bgr[i, j, 1]
            r = bgr[i, j, 2]
            mask[i, j] = 255 * ((r > 100) & (r > g) & (r > b))
    return mask
Notice that prange is used instead of range. This tells Numba that the loops can be parallelized (with nested pranges like this, Numba parallelizes only the outermost loop; the inner one runs as an ordinary sequential loop inside each thread).
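As a quick usage sketch (assuming bgr is the int16 image array from the question), allocate the uint8 mask once and let the compiled function fill it in place:
mask = np.zeros(bgr.shape[:2], np.uint8)
mask = red_dominates(bgr, mask)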
On my computer the Numba version is about 3x faster.
>>> bgr = np.int16(255 * np.random.random((100, 100, 3)))
>>> w = np.ones(bgr.shape[:2], np.uint8)
>>> %timeit red_dominates(bgr, w)
13.7 µs ± 26.5 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
>>> %timeit 255 * ((bgr[:,:,2] > 100) & (bgr[:,:,2] > bgr[:,:,1]) & (bgr[:,:,2] > bgr[:,:,0]))
46.3 µs ± 208 ns per loop (mean ± std. dev. of 7 runs, 10000 loops each)
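One caveat: the first call to a @jit function triggers compilation, which dwarfs the cost of the call itself. %timeit amortizes that over many runs, but for a fair one-off comparison do a warm-up call first:
>>> _ = red_dominates(bgr, w)  # first call compiles; subsequent calls are fast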
Best of luck!