Low quality image with cv2.warpPerspective

Time:12-06

I am working with perspective transformation using OpenCV in Python. I have found the homography matrix between two sets of points using the SIFT detector. Now I would like to transform an image using the cv2.warpPerspective function, but it turns out the quality of the warped image is very low.

I did some Google searching and found this: [image]
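
For reference, the pipeline I am describing looks roughly like the sketch below (the file names and variable names are simplified placeholders, not my exact code):

import cv2
import numpy as np

# Load the two images (grayscale for feature detection).
img1 = cv2.imread("reference.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("target.jpg", cv2.IMREAD_GRAYSCALE)

# Detect SIFT keypoints and descriptors in both images.
sift = cv2.SIFT_create()
kp1, des1 = sift.detectAndCompute(img1, None)
kp2, des2 = sift.detectAndCompute(img2, None)

# Match descriptors and keep the good matches (Lowe's ratio test).
matcher = cv2.BFMatcher()
matches = matcher.knnMatch(des1, des2, k=2)
good = [m for m, n in matches if m.distance < 0.75 * n.distance]

# Estimate the homography from the matched point pairs.
src_pts = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst_pts = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
homography_matrix, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)

# Warp the first image into the second image's frame.
h, w = img2.shape[:2]
warpedImg = cv2.warpPerspective(img1, homography_matrix, (w, h))
cv2.imwrite("warped.jpg", warpedImg)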

CodePudding user response:

Did you try the various interpolation methods supported by OpenCV? I reckon the results of bilinear or bicubic interpolation (i.e., INTER_LINEAR or INTER_CUBIC) will surprise you. You can change your code as below (at the moment you are using INTER_NEAREST, which can distort image content):

warpedImg = cv2.warpPerspective(picture, homography_matrix, (w, h), flags=cv2.WARP_FILL_OUTLIERS | cv2.INTER_LINEAR)

From the OpenCV source:

enum InterpolationFlags{
    /** nearest neighbor interpolation */
    INTER_NEAREST        = 0,
    /** bilinear interpolation */
    INTER_LINEAR         = 1,
    /** bicubic interpolation */
    INTER_CUBIC          = 2,
    /** resampling using pixel area relation. It may be a preferred method for image decimation, as
    it gives moire'-free results. But when the image is zoomed, it is similar to the INTER_NEAREST
    method. */
    INTER_AREA           = 3,
    /** Lanczos interpolation over 8x8 neighborhood */
    INTER_LANCZOS4       = 4,
    /** Bit exact bilinear interpolation */
    INTER_LINEAR_EXACT = 5,
    /** Bit exact nearest neighbor interpolation. This will produce same results as
    the nearest neighbor method in PIL, scikit-image or Matlab. */
    INTER_NEAREST_EXACT  = 6,
    /** mask for interpolation codes */
    INTER_MAX            = 7,
    /** flag, fills all of the destination image pixels. If some of them correspond to outliers in the
    source image, they are set to zero */
    WARP_FILL_OUTLIERS   = 8,
    /** flag, inverse transformation

    For example, #linearPolar or #logPolar transforms:
    - flag is __not__ set: \f$dst( \rho , \phi ) = src(x,y)\f$
    - flag is set: \f$dst(x,y) = src( \rho , \phi )\f$
    */
    WARP_INVERSE_MAP     = 16
};
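
If you want to compare the methods on your own data, a rough sketch like the one below writes one output per interpolation flag so you can inspect the quality side by side (homography_matrix stands in for the matrix you already computed, and the file names are placeholders):

import cv2
import numpy as np

picture = cv2.imread("input.jpg")
h, w = picture.shape[:2]
homography_matrix = np.eye(3)  # replace with the homography you computed

# Warp the same image once per interpolation method and save each result.
for name, flag in [("nearest", cv2.INTER_NEAREST),
                   ("linear", cv2.INTER_LINEAR),
                   ("cubic", cv2.INTER_CUBIC),
                   ("lanczos", cv2.INTER_LANCZOS4)]:
    warped = cv2.warpPerspective(picture, homography_matrix, (w, h),
                                 flags=cv2.WARP_FILL_OUTLIERS | flag)
    cv2.imwrite(f"warped_{name}.jpg", warped)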

CodePudding user response:

Yes, as you mentioned, a higher input resolution would improve the result, provided speed is not a concern. One point about your answer seems suspicious, though: as far as I know, only one of the interpolation flags is applied in OpenCV, and combining them has no effect. You can check the OpenCV source code to see which interpolation method is actually applied for your input.
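
One quick way to check which interpolation code is actually selected is to look at how the flag value decomposes: the interpolation method sits in the low bits (masked by INTER_MAX in the enum above), while WARP_FILL_OUTLIERS and WARP_INVERSE_MAP occupy separate bits. A small sketch:

import cv2

# The low bits of the flags select the interpolation method; INTER_MAX (= 7)
# is the mask for those bits, as in the enum above.
flags = cv2.WARP_FILL_OUTLIERS | cv2.INTER_LINEAR
print(flags & cv2.INTER_MAX)  # 1 -> INTER_LINEAR is the method actually used

# OR-ing two interpolation codes does not apply both; it just yields another code.
print((cv2.INTER_LINEAR | cv2.INTER_CUBIC) & cv2.INTER_MAX)  # 3 -> INTER_AREA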
