Python bytestream to image


I'm trying to achieve the same thing as this question, but with a color image: How do I transfer an image (OpenCV matrix / NumPy array) from a C++ publisher to a Python subscriber via ZeroMQ?

Here is my input image

This is what my code displays

C++ side:

cv::Mat frame = cv::imread("/home/victor/Images/Zoom.png");

int height = frame.rows; //480
int width = frame.cols; // 640
zmq_send(static_cast<void *>(pubSocket), frame.data, (height*width*3*sizeof(uint8_t)), ZMQ_NOBLOCK);

Python side:

    try:
        image_bytes = self._subsocketVideo.recv(flags=zmq.NOBLOCK)
        width = 480
        height = 640
        try:
            temp = numpy.frombuffer(image_bytes, dtype=numpy.uint8)
            self.currentFrame = temp.reshape(height, width, 3)
        except Exception as e :
            print("Failed to create frame :")
            print(e)
    except zmq.Again as e:
        raise e
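
For reference, the raw-byte path can work without re-encoding, provided the dimensions match the sender's frame (480 rows by 640 columns here) and OpenCV's BGR channel order is converted to RGB before the buffer reaches Qt. A minimal sketch with synthetic data standing in for the bytes received over ZeroMQ:

```python
import numpy as np

# Stand-in for the ZeroMQ payload: a 480x640 BGR frame,
# flattened exactly as frame.data is on the C++ side.
height, width = 480, 640  # frame.rows, frame.cols from the sender
bgr = np.random.randint(0, 256, (height, width, 3), dtype=np.uint8)
image_bytes = bgr.tobytes()

# Receiver side: rebuild the array with rows first, then columns
temp = np.frombuffer(image_bytes, dtype=np.uint8)
frame = temp.reshape(height, width, 3)

# OpenCV stores pixels as BGR; QImage.Format_RGB888 expects RGB
rgb = frame[..., ::-1]
```

With this layout the QImage call would take `width=640`, `height=480`, and a bytes-per-line of `3 * 640`.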

Python code displaying the image: this part works, I tried it with static images instead of what I got from the network

def videoCB(self):
    try:
        self._socket.subVideoReceive()
        print("Creating QImg")
        qimg = QImage(self._socket.currentFrame.data, 480, 640, 3*480, QImage.Format_RGB888)
        print("Creating pixmap")
        pixmap = QtGui.QPixmap.fromImage(qimg)
        print("Setting pixmap")
        self.imageHolder.setPixmap(pixmap)

        self.imageHolder.show()

    except Exception as e:
        print(e)

I feel like I have 2 or 3 issues:

  • Why is my output image wider than it is high? I tried swapping height and width in reshape, with no result
  • There seems to be an RGB mixup somewhere
  • Overall, I feel like the data is there but I'm not putting it together correctly.

The reshape function looks like it does nothing; I get the same output without it.
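
Worth noting: reshape cannot fail silently, but swapping height and width still "succeeds" because the total element count is unchanged. The bytes are merely regrouped into rows of the wrong length, which is exactly what a stretched or sheared output looks like. A quick demonstration:

```python
import numpy as np

# Values do not matter here, only the grouping of bytes into rows
data = np.arange(480 * 640 * 3, dtype=np.uint8)  # wraps mod 256

a = data.reshape(480, 640, 3)  # matches the sender: 480 rows of 640 pixels
b = data.reshape(640, 480, 3)  # also "works": same element count, wrong row length

# Same underlying bytes, different row grouping - each row of b
# mixes pixels from adjacent rows of a, so the image looks skewed
print(a.shape, b.shape)
```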

Thoughts?

CodePudding user response:

Use OpenCV's cv2.imdecode:

import cv2

try:
    image_bytes = self._subsocketVideo.recv(flags=zmq.NOBLOCK)
    width = 480
    height = 640
    try:
        temp = numpy.frombuffer(image_bytes, dtype=numpy.uint8)
        self.currentFrame = cv2.imdecode(temp, flags=cv2.IMREAD_COLOR)
    except Exception as e :
        print("Failed to create frame :")
        print(e)
except zmq.Again as e:
    raise e

or Pillow (this likewise assumes the bytes are an encoded image, since Image.open parses a file format, not raw pixels):

import io
from PIL import Image

...

img = Image.open(io.BytesIO(image_bytes))  # Image.open needs a file-like object
self.currentFrame = np.asarray(img)[..., :-1]  # drop the alpha channel

...

CodePudding user response:

Managed to make it work: changed the Python side into

from PIL import Image
from PIL.ImageQt import ImageQt

image2 = Image.frombytes('RGB', (height, width), image_bytes)
self.currentFrame = ImageQt(image2)

and displaying with

qimg = QImage(self._socket.currentFrame)
pixmap = QtGui.QPixmap.fromImage(qimg)
self.imageHolder.setPixmap(pixmap)
self.imageHolder.show()
