The problem is to take a black-and-white image, detect all the places where white borders on black, keep that white, and turn all other white pixels black. I know how to do this using normal for-loops and lists, but I want to do it with numpy, which I am not that familiar with. Here is what I have so far:
>>> from PIL import Image
>>> import numpy as np
>>> a = Image.open('a.png')
>>> a = a.convert('L')
>>> a_np = np.array(a)
>>> a_np
array([[0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       ...,
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0]], dtype=uint8)
>>> mask = np.pad(a_np[1:-1,1:-1],1,mode='wrap') != 0
>>> mask
array([[False, False, False, ..., False, False, False],
       [False, False, False, ..., False, False, False],
       [False, False, False, ..., False, False, False],
       ...,
       [False, False, False, ..., False, False, False],
       [False, False, False, ..., False, False, False],
       [False, False, False, ..., False, False, False]])
>>> np.where(mask == True)
(array([ 98, 98, 98, ..., 981, 981, 981]), array([393, 394, 395, ..., 684, 685, 686]))
>>> a_np[mask] = 0
>>> a_np
array([[0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       ...,
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0],
       [0, 0, 0, ..., 0, 0, 0]], dtype=uint8)
>>> np.where(a_np == 1)
(array([], dtype=int64), array([], dtype=int64))
Basically, I am trying to create a mask that looks at the neighbors of every element in the array and, for the white pixels that do not have a black neighbor, turns them black - but no matter what I try I end up with either all black elements or the same array I started with. Numpy or OpenCV solutions are welcome.
CodePudding user response:
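A side note on the attempt: np.pad(a_np[1:-1,1:-1], 1, mode='wrap') only copies the opposite edges of the interior around the outside, so away from the border the mask is just a_np != 0 again, and a_np[mask] = 0 blanks every white pixel - which matches the all-black result you are seeing.

One way to get the effect you describe is to build a boolean mask of the white pixels, pad it with black, and compare each pixel against its four shifted neighbours: a white pixel with at least one black neighbour is a boundary pixel and stays white, every other white pixel turns black. Below is a minimal sketch of that idea (not tested against your image; white_boundary is just a name I made up, and pixels outside the image are assumed to count as black):

import numpy as np
from PIL import Image

def white_boundary(img):
    # img: 2-D uint8 array, 0 = black, anything else = white.
    white = img != 0
    # Pad with black so pixels outside the image count as black neighbours.
    p = np.pad(white, 1, mode='constant', constant_values=False)
    # A white pixel is "interior" only if all four direct neighbours are white.
    interior = (white
                & p[:-2, 1:-1]   # neighbour above
                & p[2:, 1:-1]    # neighbour below
                & p[1:-1, :-2]   # neighbour to the left
                & p[1:-1, 2:])   # neighbour to the right
    out = np.zeros_like(img)
    out[white & ~interior] = 255  # keep only the white pixels that touch black
    return out

a = Image.open('a.png').convert('L')
Image.fromarray(white_boundary(np.array(a))).save('edges.png')

This checks 4-connectivity; add the four diagonal slices as well if you want 8-connectivity. If scipy is an option, the interior mask is what scipy.ndimage.binary_erosion(white) gives you with its default settings, and in OpenCV the same idea is an erosion of the white mask (cv2.erode) followed by a subtraction from the original mask.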