
This tutorial focuses on the estimation of the homogeneous transformation between the robot end-effector and the camera frame. As a use case, we will consider the case of a Panda robot in its research version from Franka Emika equipped with an Intel Realsense SR300 camera mounted on its end-effector. The principle of the extrinsic calibration is easy to apply to any other robot equipped with any other camera attached to the robot end-effector.

Three homogeneous transformations are involved:
- the homogeneous transformation between the robot base frame (also called fixed frame) and the robot end-effector;
- the homogeneous transformation between the camera frame and a calibration grid frame (also called object frame), typically the OpenCV chessboard;
- the homogeneous transformation between the end-effector and the camera frame. This is the extrinsic eye-in-hand transformation that we have to estimate.

The calibration process described in this tutorial consists of three stages:
1. acquiring couples of robot poses and images of the chessboard;
2. computing the corresponding pose of the chessboard from each image;
3. estimating the end-effector to camera transformation from the set of corresponding pose couples (a minimal sketch of this step is given at the end of this section).

Note that all the material (source code) described in this tutorial is part of the ViSP source code (in the tutorial/calibration folder) and could be downloaded using the following command:

$ svn export https://github.com/lagadic/visp.git/trunk/tutorial/calibration

To get good calibration results follow these Recommendations.

In order to compute the pose of the chessboard from the images, the camera intrinsic parameters need to be known. Depending on the device, these parameters are part of the device SDK or firmware. This is for example the case for our SR300 camera considered in this tutorial, whose intrinsic parameters could be retrieved using vpRealSense2::getCameraParameters(). If you have another camera, or if you want a better estimation than the factory parameters, you may follow Tutorial: Camera intrinsic calibration.
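As an illustration, here is a minimal sketch showing how the factory intrinsics of the color stream could be read with vpRealSense2::getCameraParameters(). The stream resolution, frame rate and projection model chosen below are assumptions for the example, not values prescribed by this tutorial:

#include <visp3/core/vpCameraParameters.h>
#include <visp3/sensor/vpRealSense2.h>

int main()
{
#if defined(VISP_HAVE_REALSENSE2)
  vpRealSense2 rs;
  rs2::config config;
  // Assumed color stream settings; adapt them to your device
  config.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_RGBA8, 30);
  rs.open(config);

  // Factory intrinsics of the color stream (here without the distortion model)
  vpCameraParameters cam = rs.getCameraParameters(RS2_STREAM_COLOR, vpCameraParameters::perspectiveProjWithoutDistortion);
  cam.printParameters();
#endif
  return 0;
}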

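For the final estimation stage, a minimal sketch is given below. It assumes that the chessboard poses (cMo) and the end-effector poses (rMe) have already been collected during the first two stages, that vpHandEyeCalibration::calibrate() is available as in recent ViSP versions, and that the call returns 0 on success; the variable names are illustrative only:

#include <iostream>
#include <vector>
#include <visp3/core/vpHomogeneousMatrix.h>
#include <visp3/vision/vpHandEyeCalibration.h>

int main()
{
  // Corresponding pose couples collected during stages 1 and 2 (left empty in this sketch)
  std::vector<vpHomogeneousMatrix> cMo; // chessboard pose in the camera frame, one per image
  std::vector<vpHomogeneousMatrix> rMe; // end-effector pose in the robot base frame, at the same instants

  // ... fill cMo and rMe with the acquired pose couples ...

  vpHomogeneousMatrix eMc; // output: end-effector to camera transformation
  if (vpHandEyeCalibration::calibrate(cMo, rMe, eMc) == 0) {
    std::cout << "Estimated eMc:" << std::endl << eMc << std::endl;
  }
  else {
    std::cout << "Hand-eye calibration failed" << std::endl;
  }
  return 0;
}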