how to convert depth map to 3D points for cv::rgbd::RgbdNormals

Time:06-04

I am trying to compute surface normals using OpenCV. Below is the code snippet:

// load a 16-bit unsigned depth image (values are in millimeters);
// note: imread's second argument is a flag, not a pixel type
cv::Mat img = cv::imread("depth.png", cv::IMREAD_ANYDEPTH);

// define our camera matrix
cv::Mat k = (cv::Mat1d(3, 3) << 900, 0, 640, 0, 900, 360, 0, 0, 1);

// compute surface normals
cv::Mat normals;
cv::rgbd::RgbdNormals normal_computer(img.rows, img.cols, CV_32F, k, 3);
normal_computer(img, normals);

std::cout << "normals" << std::endl
          << normals << std::endl
          << normals.size() << std::endl
          << normals.channels() << std::endl
          << std::endl;

Unfortunately, it shows the following error:

terminate called after throwing an instance of 'cv::Exception'
  what():  OpenCV(4.2.0) ../contrib/modules/rgbd/src/normal.cpp:776: error: (-215:Assertion failed) points3d_ori.channels() == 3 in function 'operator()'

Aborted (core dumped)

As per the documentation, the input points should be a rows x cols x 3 matrix of CV_32F/CV_64F, or a rows x cols x 1 matrix of CV_16U.

Therefore, my question is: how do I convert a depth image to one of the above formats (i.e., a rows x cols x 3 matrix of CV_32F/CV_64F, or a rows x cols x 1 matrix of CV_16U)?

PS: I am using OpenCV v4.2.0 on Ubuntu 20.04 LTS.

CodePudding user response:

It turns out that OpenCV provides a function, cv::rgbd::depthTo3d, that performs exactly this conversion. See the following code snippet:

cv::Mat points3d;
cv::rgbd::depthTo3d(img, k, points3d);

After the conversion, pass the resulting points to the normals estimator:

normal_computer(points3d, normals);
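For intuition, depthTo3d is essentially a pinhole back-projection of each pixel using the camera matrix. A minimal, OpenCV-free sketch of the per-pixel math is below, using the intrinsics from the question's matrix k (fx = fy = 900, cx = 640, cy = 360); the millimeter-to-meter rescale mirrors what depthTo3d does for CV_16U input, which is an assumption you should verify against your own data:

```cpp
#include <cassert>
#include <cmath>

// A single 3D point in camera coordinates.
struct Point3f { float x, y, z; };

// Back-project pixel (u, v) with raw depth d_mm (millimeters, as stored in a
// CV_16U depth map) into camera space using the pinhole model:
//   Z = d, X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy
// The result is in meters.
inline Point3f backproject(int u, int v, unsigned short d_mm,
                           float fx, float fy, float cx, float cy) {
    float z = d_mm / 1000.0f;        // millimeters -> meters
    float x = (u - cx) * z / fx;
    float y = (v - cy) * z / fy;
    return {x, y, z};
}
```

A pixel at the principal point (640, 360) with 1000 mm of depth back-projects to (0, 0, 1): it lies on the optical axis, one meter in front of the camera.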