Does anyone have an idea why the two outputs below are different?
In the first code block the image is loaded and resized with PIL's resize; in the second block the same resize is requested through the target_size parameter of Keras' load_img. For what should be the same steps, the two approaches give different outputs.
from keras.preprocessing.image import load_img
import numpy as np
path = 'C:/Users/user/Downloads/random_colour_image.JPG' # actual snippet of image:https://wallpaperaccess.com/full/1523270.jpg
target_size = (3,3)
#code block 1
image = load_img(path)
image = image.resize(target_size)
image = np.asarray(image)
print(image)
Output 1:
[[[132 99 79]
[146 80 68]
[165 15 81]]
[[116 102 94]
[133 101 69]
[198 28 53]]
[[ 82 129 108]
[119 89 112]
[166 87 51]]]
Code block 2:
image = load_img(path, target_size=target_size)
image = np.asarray(image)
print(image)
Output 2:
[[[ 48 190 88]
[ 57 159 49]
[145 0 77]]
[[116 90 101]
[ 14 133 67]
[146 19 2]]
[[ 5 119 50]
[129 69 97]
[179 63 2]]]
CodePudding user response:
To get the same output from the two methods, you need to consider the following:
- In keras.preprocessing.image.load_img(target_size=(W, H)), do not pass the resize target; pass your original image size.
- To resize the image after reading it with keras.preprocessing.image.load_img, convert the PIL image to a numpy.array, then to a tensor, and then use tf.image.resize.
- In tf.image.resize, set method='bicubic' and antialias=True so that the two methods give the same result.
Example:
from keras.preprocessing.image import load_img
import tensorflow as tf
import numpy as np
path = 'test.png'
original_size = (32,32) # <- you want this
resize_size = (3,3) # <- you want this
#code block 1
image = load_img(path)
image = image.resize(resize_size)
image = np.asarray(image)
print(image)
#code block 2
image = load_img(path, target_size=original_size) # <- you want this
image = tf.image.resize( # <- you want this
tf.convert_to_tensor(np.asarray(image)) , # <- you want this
resize_size, # <- you want this
method='bicubic', # <- you want this
antialias = True # <- you want this
)
image = np.asarray(image).astype('int')
print(image)
Output:
# Output 1
[[[132 129 123]
[126 124 135]
[121 130 118]]
[[120 132 125]
[128 132 117]
[129 126 120]]
[[125 120 133]
[134 123 122]
[137 117 125]]]
#Output 2
[[[132 128 122]
[125 124 135]
[121 130 117]]
[[119 131 124]
[127 131 117]
[128 126 120]]
[[125 119 132]
[133 122 122]
[137 117 124]]]
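If you want to compare the two results programmatically rather than by eye, here is a minimal sketch (it assumes the test.png generated by the "Creating a random image" block below; variable names a and b are just illustrative). The remaining off-by-one differences come mainly from truncating the float output of tf.image.resize to int, plus minor differences between the bicubic implementations:
from keras.preprocessing.image import load_img
import tensorflow as tf
import numpy as np

path = 'test.png'        # created by the "Creating a random image" block below
resize_size = (3, 3)

# Block 1: PIL resize (default filter is bicubic for RGB images)
a = np.asarray(load_img(path).resize(resize_size)).astype(int)

# Block 2: tf.image.resize with bicubic + antialias, then cast to int
b = tf.image.resize(
    tf.convert_to_tensor(np.asarray(load_img(path))),
    resize_size,
    method='bicubic',
    antialias=True)
b = np.asarray(b).astype(int)

print(np.abs(a - b).max())        # typically 0 or 1
print(np.allclose(a, b, atol=1))  # True within a +/-1 tolerance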
Creating a random image:
import numpy as np
from PIL import Image
imarray = np.random.rand(32,32,3) * 255
im = Image.fromarray(imarray.astype('uint8')).convert('RGB')
im.save('test.png')
CodePudding user response:
Thanks, I'mahdi, but target_size is also intended to load a resized image.
I found the cause:
Keras' load_img defaults to interpolation='nearest' (PIL filter 0), whereas PIL's Image.resize defaults to BICUBIC (filter 3) for RGB images. That explains the discrepancy.
Either change code block 1 to:
image = image.resize(target_size, resample=0)  # 0 == PIL.Image.NEAREST
or change code block 2 to:
image = load_img(path, target_size=target_size, interpolation='bicubic')
and both blocks produce the same result.
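To confirm, a quick sketch (assuming a test image such as the test.png created in the answer above; only the square (3, 3) target is checked here, since load_img's target_size is (height, width) while PIL's resize takes (width, height)):
from keras.preprocessing.image import load_img
import numpy as np

path = 'test.png'     # assumed test image, e.g. the one created in the answer above
target_size = (3, 3)

# Code block 1: PIL resize with its default filter (bicubic for RGB images)
a = np.asarray(load_img(path).resize(target_size))

# Code block 2: load_img with interpolation switched from 'nearest' to 'bicubic'
b = np.asarray(load_img(path, target_size=target_size, interpolation='bicubic'))

print(np.array_equal(a, b))  # should print True for this square target size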
Thanks.