Getting different image arrays after doing the same operation in Image Sharpening

Time:01-27

I am trying to sharpen an image using unsharp masking: you subtract the Gaussian-blurred image from the original and then add the difference back to the original. Here is the code I ran:

 import cv2
 from google.colab.patches import cv2_imshow  # Colab replacement for cv2.imshow

 img = cv2.imread('redhat.jpg')
 gauss = cv2.GaussianBlur(img, (7, 7), 0)
 diff = img - gauss
 sharp = img + diff
 cv2_imshow(img)
 cv2_imshow(sharp)

original image: [image]

sharp: [image]

Instead, if I run the following:

 import cv2
 from google.colab.patches import cv2_imshow

 img = cv2.imread('redhat.jpg')
 gauss = cv2.GaussianBlur(img, (7, 7), 0)
 sharp = cv2.addWeighted(img, 2, gauss, -1, 0)
 cv2_imshow(img)
 cv2_imshow(sharp)

I get the correctly sharpened image: [image]

Can someone explain why I got weird results the first time? As far as I understand, both snippets perform the same mathematical operation.

CodePudding user response:

In diff = img - gauss, the subtraction produces negative values, but the two inputs are of type uint8, so the result of the operation is coerced to that same type, which cannot hold negative values.
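A tiny standalone NumPy snippet (with made-up values, not from the post) shows the wrap-around:

```python
import numpy as np

a = np.array([10], dtype=np.uint8)
b = np.array([50], dtype=np.uint8)

# uint8 arithmetic wraps modulo 256: 10 - 50 = -40, which becomes 216
print(a - b)  # [216]
```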

You’d have to convert one of the images to a signed type for this to work. For example:

import numpy as np

gauss = cv2.GaussianBlur(img, (7, 7), 0)
diff = img.astype(np.int_) - gauss
sharp = np.clip(img + diff, 0, 255).astype(np.uint8)

Using cv2.addWeighted() is more efficient.

CodePudding user response:

I believe the difference is caused by over/underflow in

diff = img - gauss

If both source images have 8-bit unsigned integer depth, the diff will have that same depth, so the subtraction can underflow.

In contrast, addWeighted() performs the operation in double precision and then applies a saturating cast to the destination type (see the documentation). That eliminates over/underflow during the computation, and the cast clamps the result to the supported range of the destination scalar type.

If you still want the first approach, either convert the images to a floating-point depth or to a large enough signed integer type. After the operation, you may need a saturating cast back to the destination depth.
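To illustrate with made-up pixel values (not taken from the question's image), here is the raw uint8 arithmetic next to the signed-intermediate approach with a saturating cast:

```python
import numpy as np

img = np.array([240, 30], dtype=np.uint8)
gauss = np.array([200, 60], dtype=np.uint8)

# raw uint8 arithmetic wraps: 2*240 - 200 = 280 -> 24 (mod 256)
wrapped = img + (img - gauss)  # [24, 0]

# signed intermediate, then clamp back to [0, 255] and cast to uint8
diff = img.astype(np.int16) - gauss.astype(np.int16)  # [40, -30]
sharp = np.clip(img.astype(np.int16) + diff, 0, 255).astype(np.uint8)  # [255, 0]
```

The first pixel shows the difference: the wrapped result is a dark 24 where the saturated result is a fully bright 255.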
