I was experimenting with the OpenCV function cv2.warpPerspective and decided to code it from scratch to better understand its pipeline. Though I have (hopefully) followed every theoretical step, I seem to still be missing something, and I am struggling to understand what. Could you please help me?
[Image: SRC image (left) and true DST image (right)]
[Image: output of cv2.warpPerspective overlapped on the true DST]
import cv2
import numpy as np

# Invert the homography SRC->DST to DST->SRC
hinv = np.linalg.inv(h)
src = gray1
dst = np.zeros(gray2.shape)
h, w = src.shape
# Remap back and check the domain
for ox in range(h):
    for oy in range(w):
        # Backproject from DST to SRC
        xw, yw, w = hinv.dot(np.array([ox, oy, 1]).T)
        # cv2.INTER_NEAREST
        x, y = int(xw/w), int(yw/w)
        # Check if it falls in the src domain
        c1 = x >= 0 and y < h
        c2 = y >= 0 and y < w
        if c1 and c2:
            dst[x, y] = src[ox, oy]
cv2.imshow("dst", dst + gray2 // 2)  # overlay estimated DST on the true DST
PS: The output images are the estimated DST overlapped on the true DST, to better highlight the differences.
CodePudding user response:
Your issue amounts to a typo: you mixed up the naming of your coordinates. The homography assumes (x, y, 1) order, which corresponds to (j, i, 1) in row/column index terms. Just use (x, y, 1) in the calculation, and (xw, yw, w) for the result of that (then x, y = xw/w, yw/w). The w factor mirrors the math when it's formulated properly.
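Concretely, a minimal sketch of that projection step, assuming ox is the output column (j) and oy the output row (i):

xw, yw, w = hinv.dot(np.array([ox, oy, 1.0]))  # (x, y, 1) = (column, row, 1)
x, y = xw / w, yw / w                          # perspective divide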
Avoid indexing into .shape directly; the bare indices don't "speak". Just do (height, width) = src.shape[:2] and use those.
I'd recommend fixing the naming scheme, or defining it up top in a comment. I'd recommend sticking with x, y instead of i, j, u, v, and then extending those with prefixes/suffixes for the space they're in ("src/dst/in/out"). Perhaps something like ox, oy for iterating, just xw, yw, w for the homography result, which turns into x, y via division, and ix, iy (integerized) for sampling in the input? Then you can use dst[oy, ox] = src[iy, ix].
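Put together, a sketch of the whole backward-warping loop under that naming scheme could look like the following (gray1, gray2 and the homography h are assumed to come from your earlier code; nearest-neighbour rounding stands in for cv2.INTER_NEAREST):

import numpy as np

hinv = np.linalg.inv(h)                       # DST -> SRC
src = gray1
dst = np.zeros(gray2.shape, dtype=src.dtype)
(src_height, src_width) = src.shape[:2]
(dst_height, dst_width) = dst.shape[:2]

for oy in range(dst_height):                  # output row
    for ox in range(dst_width):               # output column
        # backproject: the homography takes (x, y, 1) = (column, row, 1)
        xw, yw, w = hinv.dot(np.array([ox, oy, 1.0]))
        x, y = xw / w, yw / w                 # perspective divide
        ix, iy = int(round(x)), int(round(y)) # nearest neighbour
        # sample only if it falls inside the source image
        if 0 <= ix < src_width and 0 <= iy < src_height:
            dst[oy, ox] = src[iy, ix]

Compared to your loop, the only real changes are the (x, y, 1) order going into hinv, per-axis bounds checks against the source size, and the [row, column] order when indexing the images.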