I noticed that the antialiasing in imresize is implemented as follows (contributions.m, lines 12-13):
h = @(x) scale * kernel(scale * x);
kernel_width = kernel_width / scale;
This scales both the input and the output of the kernel function and also expands the kernel width. It has some intuitive appeal, such as widening the kernel in proportion to the scale, but how this formula is explicitly derived confuses me. Can anyone explain the principle behind this code in detail?
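If I understand correctly, for a concrete case like halving (scale = 0.5) with the default cubic kernel (support [-2, 2], kernel_width = 4), the two lines amount to:

scale = 0.5;                          % halving the image
h = @(x) scale * kernel(scale * x);   % support stretches from [-2, 2] to [-4, 4]
kernel_width = kernel_width / scale;  % 4 / 0.5 = 8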
Answer:
They explain it a bit in https://blogs.mathworks.com/steve/2017/01/16/aliasing-and-image-resizing-part-3/ .
If you have a look, they explain it with a signal-resampling example, but they also show image examples and plots of the interpolation kernel. The kernel is a cubic interpolant used to resample the image to the new desired size. When shrinking the image, however, they modify the cubic kernel using the ratio of new to previous size (the scale variable).
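For reference, the cubic kernel in question is (as far as I know) the Keys (1981) cubic convolution kernel with a = -0.5; a MATLAB sketch of it:

function f = cubic(x)
% Keys cubic convolution kernel with a = -0.5, the kernel that
% imresize's 'bicubic' method is based on. Support is [-2, 2].
absx = abs(x);
absx2 = absx.^2;
absx3 = absx.^3;
f = (1.5*absx3 - 2.5*absx2 + 1) .* (absx <= 1) + ...
    (-0.5*absx3 + 2.5*absx2 - 4*absx + 2) .* ((1 < absx) & (absx <= 2));
end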
h = @(x) scale * kernel(scale * x);
This modification makes the peak of the cubic kernel smaller and its support wider, which is why they also store the adjusted width for later use. Since scale is a factor < 1 (an if condition ensures it is never > 1 here), the kernel spreads over more input samples.
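One way to derive this form (not from the MATLAB docs, just standard resampling theory): substituting u = scale * x gives integral of scale * kernel(scale * x) dx = integral of kernel(u) du, so the leading factor scale keeps the kernel's total weight at 1 while stretching its support by 1/scale. In frequency terms, stretching the kernel by 1/scale compresses its passband by scale, so the interpolator doubles as a lowpass filter matched to the lower output sampling rate, which is the classical antialiasing prescription. A quick numeric check, using the cubic sketch above:

scale = 0.5;
kernel = @cubic;
h = @(x) scale * kernel(scale * x);

kernel(0)   % 1.0     -> original peak
h(0)        % 0.5     -> peak lowered by the factor scale
kernel(3)   % 0       -> outside the original support [-2, 2]
h(3)        % -0.0313 -> 0.5 * cubic(1.5): support is now [-4, 4]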
kernel_width = kernel_width / scale;
As the width is larger, they use more input pixels to compute each output pixel (which makes sense when making the image smaller: each output pixel must average over a larger input neighborhood to avoid aliasing).
% What is the maximum number of pixels that can be involved in the
% computation? Note: it's OK to use an extra pixel here; if the
% corresponding weights are all zero, it will be eliminated at the end
% of this function.
P = ceil(kernel_width) + 2;
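To make the numbers concrete (a hypothetical example: halving with the cubic kernel):

scale = 0.5;
kernel_width = 4 / scale;        % widened from 4 to 8
P = ceil(kernel_width) + 2;      % 10: up to 10 input pixels can receive
                                 % a (possibly zero) weight per output pixel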