I am trying to threshold my images, but some of them fail when casting the raster's data buffer to DataBufferByte. I am posting two example pictures: the first (test1.jpg) works and can be cast to DataBufferByte; the second (test2.jpg) throws an exception.
Images:
Here is my code:
import java.awt.image.BufferedImage;
import java.awt.image.DataBufferByte;
import java.io.*;
import javax.imageio.ImageIO;
import org.opencv.core.*;
import org.opencv.imgcodecs.Imgcodecs;
import org.opencv.imgproc.Imgproc;

public class Main {
    public static void main(String[] args) throws IOException {
        System.loadLibrary(Core.NATIVE_LIBRARY_NAME);

        // Read the image, re-encode it as JPEG in memory, and read it back
        BufferedImage bufferedImage = ImageIO.read(new File("/test/test2.jpg"));
        ByteArrayOutputStream byteArrayOutputStream = new ByteArrayOutputStream();
        ImageIO.write(bufferedImage, "jpg", byteArrayOutputStream);
        byte[] input = byteArrayOutputStream.toByteArray();
        InputStream is = new ByteArrayInputStream(input);
        bufferedImage = ImageIO.read(is);

        // The ClassCastException for test2.jpg is thrown on this cast
        byte[] data = ((DataBufferByte) bufferedImage.getRaster().getDataBuffer()).getData();

        // Copy the pixel bytes into an OpenCV Mat and convert it to grayscale
        Mat mat = new Mat(bufferedImage.getHeight(), bufferedImage.getWidth(), CvType.CV_8UC3);
        mat.put(0, 0, data);
        Mat mat1 = new Mat(bufferedImage.getHeight(), bufferedImage.getWidth(), CvType.CV_8UC1);
        Imgproc.cvtColor(mat, mat1, Imgproc.COLOR_RGB2GRAY);
        byte[] data1 = new byte[mat1.rows() * mat1.cols() * (int) (mat1.elemSize())];
        mat1.get(0, 0, data1);

        // Adaptive threshold, then encode the result and write it to disk
        Mat dst = new Mat();
        Imgproc.adaptiveThreshold(mat1, dst, 255, Imgproc.ADAPTIVE_THRESH_MEAN_C, Imgproc.THRESH_BINARY, 11, 15);
        MatOfByte matOfByte = new MatOfByte();
        Imgcodecs.imencode(".jpg", dst, matOfByte);
        ByteArrayInputStream inputStream = new ByteArrayInputStream(matOfByte.toArray());
        bufferedImage = ImageIO.read(inputStream);
        ImageIO.write(bufferedImage, "jpg", new File("/test_output/test2.jpg"));
    }
}
The output for the test1.jpg image is correct:
But the second image throws an exception:
Exception in thread "main" java.lang.ClassCastException: java.awt.image.DataBufferInt cannot be cast to java.awt.image.DataBufferByte
at the line
byte[] data = ((DataBufferByte) bufferedImage.getRaster().getDataBuffer()).getData();
What is the problem? Why is the one .jpg image converted without problems while the second throws an exception?
CodePudding user response:
The two images are not in the same format. The first uses three 8-bit components per pixel, so ImageIO gives you a BufferedImage backed by a DataBufferByte. The second has a fourth channel (transparency or padding), and its pixels end up packed into 32-bit ints, so the raster's data buffer is a DataBufferInt and the cast to DataBufferByte fails.
Inspecting the two files gives:
image1 : JPEG image data, JFIF standard 1.01, aspect ratio, density 1x1, segment length 16, baseline, precision 8, 449x361, components 3
image2 : PNG image data, 617 x 353, 8-bit/color RGBA, non-interlaced
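So the second file, despite its .jpg extension, is really an RGBA PNG. If you want the byte extraction to work for both kinds of input, one common approach (not part of the original answer, just a sketch) is to redraw whatever ImageIO returns into a TYPE_3BYTE_BGR BufferedImage, which is always backed by a DataBufferByte and has the 8-bit, 3-channel layout that the CV_8UC3 Mat expects. The helper name below is made up for illustration:

import java.awt.Graphics2D;
import java.awt.image.BufferedImage;

// Normalize any BufferedImage to a byte-backed, 3-channel image before
// handing its pixels to OpenCV.
static BufferedImage toByteBackedBgr(BufferedImage src) {
    if (src.getType() == BufferedImage.TYPE_3BYTE_BGR) {
        return src; // already byte-backed, nothing to do
    }
    BufferedImage converted = new BufferedImage(
            src.getWidth(), src.getHeight(), BufferedImage.TYPE_3BYTE_BGR);
    Graphics2D g = converted.createGraphics();
    g.drawImage(src, 0, 0, null); // flattens any alpha and repacks the pixels as bytes
    g.dispose();
    return converted;
}

With that in place the cast in your code becomes safe for both test images:

bufferedImage = toByteBackedBgr(bufferedImage);
byte[] data = ((DataBufferByte) bufferedImage.getRaster().getDataBuffer()).getData();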
To detect this up front, you can call the getDataType() method on the raster's DataBuffer to find out how the pixel data is stored (DataBuffer.TYPE_BYTE, DataBuffer.TYPE_INT, and so on) before attempting the cast.
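A minimal sketch of such a check (the helper name is again just illustrative):

import java.awt.image.BufferedImage;
import java.awt.image.DataBuffer;

// Returns true when the image's raster is backed by a DataBufferByte,
// i.e. when the cast in the question is safe; INT_RGB / INT_ARGB images
// report DataBuffer.TYPE_INT instead and need converting first.
static boolean hasByteBackedRaster(BufferedImage img) {
    return img.getRaster().getDataBuffer().getDataType() == DataBuffer.TYPE_BYTE;
}

When this returns false you could fall back to the TYPE_3BYTE_BGR redraw shown above instead of casting blindly.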