Difference between stereo camera calibration vs two single camera calibrations using OpenCV


I have a vehicle with two cameras, left and right. Is there a difference between calibrating each camera separately and performing "stereo calibration"? I am asking because I noticed that the OpenCV documentation has a stereoCalibrate function, and MATLAB has a stereo calibration tool. If I do separate camera calibration on each and then perform a depth calculation using the undistorted images of each camera, will the results be the same?

I am not sure what the difference is between the two methods. I performed normal camera calibration for each camera separately.

CodePudding user response:

For intrinsics, it doesn't matter. The added information ("pair of cameras") might make the calibration a little better though.

Stereo calibration gives you the extrinsics, i.e. the transformation (rotation and translation) between the cameras. That's what stereo vision needs: without stereo calibration you lack the extrinsics, and without the extrinsics you can't do any depth estimation at all, because depth requires knowing the relative pose of the two cameras.

CodePudding user response:

TL;DR You need stereo calibration if you want 3D points.

Long answer: There is a huge difference between single and stereo camera calibration.

The output of single camera calibration is the intrinsic parameters only (i.e. the 3x3 camera matrix and a number of distortion coefficients, depending on the model used). In OpenCV this is accomplished by cv2.calibrateCamera. You may check my custom library that helps reduce the boilerplate.

When you do stereo calibration, the output consists of the intrinsics of both cameras plus the extrinsic parameters. In OpenCV this is done with cv2.stereoCalibrate. OpenCV fixes the world origin in the first camera, and you get a rotation matrix R and translation vector t that go from the first camera (the origin) to the second one.

So, why do we need the extrinsics? If you are using a stereo system for 3D scanning, then you need them (together with the intrinsics) to do triangulation and obtain 3D points in space: if you know the projections of a general point p onto both cameras, you can calculate its position.

To add something to what @Christoph correctly answered before: the intrinsics should be almost the same. However, cv2.stereoCalibrate may improve the calculation of the intrinsics if the CALIB_FIX_INTRINSIC flag is not set, because the system composed of the two cameras and the calibration board is then solved as a whole by numerical optimization.
