I've spent some time trying to calibrate two similar cameras (ExCam IPQ1715 and ExCam IPQ1765) with varying degrees of success, with the eventual goal of using them for short-range photogrammetry. I've been using a charuco board along with the OpenCV charuco calibration library, and have noticed that the quality of my calibration is closely tied to how much of the image is taken up by the board. (I measure calibration quality by the RMS reprojection error reported by OpenCV, and by checking whether lines on the board look straighter in the undistorted images than in the originals.)
I'm still pretty inexperienced, and other factors have been interfering with my calibration (leaving autofocus on, OpenCV's charuco detection occasionally producing false positives on some images without me noticing), so my question is less about my results and more about best practice for camera calibration in general:
How crucial is it that the board (charuco, chessboard) take up most of the image space? Is there generally a minimum amount that it should cover? Is this even an issue at all, or am I likely mistaking it for another cause of bad calibration?
I've seen lots of calibration tutorials online where the board seems to take up only a small portion of the image, but I've also found other people running into similar issues. In short, I'm horribly lost.
Any guidance would be awesome - thanks!
CodePudding user response:
Consider that camera calibration is a model-fitting problem, i.e. optimizing the model parameters against the measurements.
So... you should pay attention to the following:
- If the board appears so small in the image that the lens distortion is barely visible across it, can the distortion parameters really be optimized from such an image?
- If the pattern only appears near the center of the image, can valid parameter values be estimated for regions far from the center? That would be extrapolation (the toy fit after this list shows how badly it can go wrong).
- If the pattern distribution is not uniform, the data density can skew the results: e.g. with least-squares optimization, errors in regions with little data contribute almost nothing to the cost and are effectively ignored.
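To make the extrapolation point concrete, here's a toy sketch in Python (the numbers are made up; true_curve just stands in for some radial distortion profile, with nothing camera-specific about it):

```python
import numpy as np

def true_curve(r):
    # Made-up stand-in for a radial distortion curve; r = 1.0 is the corner.
    return 1.0 + 0.2 * r**2 - 0.05 * r**4

rng = np.random.default_rng(0)
r_center = rng.uniform(0.0, 0.3, 50)  # samples near the optical axis only
samples = true_curve(r_center) + rng.normal(0.0, 1e-3, 50)

# Fit a quartic to the central samples, then evaluate it further out.
# Inside [0, 0.3] the fit tracks the data; at r = 1.0 it is essentially
# unconstrained and typically drifts far from the true curve.
coeffs = np.polyfit(r_center, samples, deg=4)
for r in (0.2, 0.5, 1.0):
    print(f"r={r:.1f}  true={true_curve(r):.4f}  fitted={np.polyval(coeffs, r):.4f}")
```

The same mechanism applies to the distortion coefficients: if the board never reaches the image borders, the solver has nothing constraining them there.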
Therefore, my suggestions are:
- Images in which the pattern is extremely small are useless.
- The data should cover the camera's entire field of view, and the distribution should be as uniform as possible (the coverage heatmap in the sketch below is one way to check this).
- Use enough data; too few images can cause overfitting.
- Check the pattern recognition results of every image (sample code often omits this); the sketch below steps through each frame for exactly that reason.
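Here is a minimal sketch of those last two points in Python, using the classic cv2.aruco API (OpenCV 4.6 and earlier; 4.7+ moved the same steps into cv2.aruco.CharucoDetector). The board geometry, image path, and corner threshold below are placeholders; substitute your own:

```python
import glob
import cv2
import numpy as np

# Placeholder board definition: use your actual square/marker sizes (in
# meters) and the dictionary your board was generated with.
dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_5X5_100)
board = cv2.aruco.CharucoBoard_create(7, 5, 0.04, 0.03, dictionary)

MIN_CORNERS = 12  # arbitrary threshold: skip frames with too few corners
all_corners, all_ids = [], []
image_size = None

for path in sorted(glob.glob("calib/*.png")):  # placeholder path
    img = cv2.imread(path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    image_size = gray.shape[::-1]

    # Detect the aruco markers, then interpolate the chessboard corners.
    m_corners, m_ids, _ = cv2.aruco.detectMarkers(gray, dictionary)
    if m_ids is None:
        print(f"{path}: no markers found, skipping")
        continue
    n, c_corners, c_ids = cv2.aruco.interpolateCornersCharuco(
        m_corners, m_ids, gray, board)
    if n < MIN_CORNERS:
        print(f"{path}: only {n} corners, skipping")
        continue

    # Inspect every detection by eye; this is where false positives show up.
    vis = cv2.aruco.drawDetectedCornersCharuco(img.copy(), c_corners, c_ids)
    cv2.imshow("detections", vis)
    cv2.waitKey(0)  # press any key to step to the next image

    all_corners.append(c_corners)
    all_ids.append(c_ids)

# Coverage check: splat every accepted corner onto a blank image and look
# for empty regions, especially near the borders where distortion is
# strongest.
heat = np.zeros(image_size[::-1], np.float32)
for frame in all_corners:
    for x, y in frame.reshape(-1, 2):
        cv2.circle(heat, (int(x), int(y)), 25, 1.0, -1)
cv2.imshow("coverage", heat)
cv2.waitKey(0)

# Calibrate; the returned rms is the reprojection error you've been watching.
rms, K, dist, _, _ = cv2.aruco.calibrateCameraCharuco(
    all_corners, all_ids, board, image_size, None, None)
print("RMS reprojection error:", rms)
```

Stepping through frame by frame is tedious, but it catches exactly the silent false positives you mentioned, and the heatmap makes it obvious whether your dataset actually constrains the distortion out to the image corners.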