1. Camera Calibration Resources
This video is recommended for viewers who already have some background; it walks through the whole calibration process in detail.
https://www.bilibili.com/video/BV1R7411m7ZQ/?spm_id_from=333.337.search-card.all.click&vd_source=c205d4d10f730a57820343328741984a
These articles are more introductory and can be read first:
https://blog.csdn.net/sunshine_zoe/article/details/73457686
https://zhuanlan.zhihu.com/p/642155792?utm_id=0
2. Case Study
Here we analyze the SDK of the SICK Visionary cameras.
2.0 Basic Parameter Definitions
cx, cy are the coordinates of the image-coordinate-system origin (the principal point) expressed in the pixel coordinate system; fx and fy express how many pixels 1 mm spans along the column and row directions respectively, i.e. 1 mm = fx px.
k1, k2 are the radial distortion coefficients.
f2rc is the calibrated starting position of z; it is discussed further below.
class CameraParameters:
    """ This class gathers the main camera parameters. """
    def __init__(self, width=176, height=144,
                 cam2worldMatrix=None,
                 fx=146.5, fy=146.5, cx=84.4, cy=71.2,
                 k1=0.326442, k2=0.219623,
                 f2rc=0.0):
        self.cam2worldMatrix = [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1]
        self.width = width
        self.height = height
        if cam2worldMatrix:
            self.cam2worldMatrix = cam2worldMatrix
        self.fx = fx
        self.fy = fy
        self.cx = cx
        self.cy = cy
        self.k1 = k1
        self.k2 = k2
        self.f2rc = f2rc
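The later snippets index a 4x4 matrix `m_c2w`, while the class stores `cam2worldMatrix` as a flat list of 16 values. A minimal sketch (my own assumption of a row-major layout, not SDK code) of turning one into the other:

```python
import numpy as np

# Flat 16-element list as stored by CameraParameters; here the identity,
# meaning the camera frame and the world frame coincide.
cam2worldMatrix = [1, 0, 0, 0,
                   0, 1, 0, 0,
                   0, 0, 1, 0,
                   0, 0, 0, 1]

# Assumed row-major layout -> the 4x4 m_c2w used by the snippets below.
m_c2w = np.asarray(cam2worldMatrix, dtype=float).reshape(4, 4)

# With the identity matrix, a camera-frame point is unchanged in the world frame:
xc, yc, zc = 10.0, 20.0, 300.0
xw = m_c2w[0, 3] + zc * m_c2w[0, 2] + yc * m_c2w[0, 1] + xc * m_c2w[0, 0]
print(xw)  # 10.0
```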
2.1 Visionary-S (stereo)
# transform into camera coordinates (zc, xc, yc)
# pixel coordinates -> image coordinates
xp = (myCamParams.cx - col) / myCamParams.fx
yp = (myCamParams.cy - row) / myCamParams.fy
# coordinate system local to the imager
# image coordinates -> camera coordinates; with f = 1, xc = (xp * zc) / f
# and yc = (yp * zc) / f
zc = distData[row][col]
xc = xp * zc
yc = yp * zc
# convert to world coordinate system
# camera -> world; since m_c2w is an identity matrix here, x, y, z keep their values
xw = (m_c2w[0, 3] + zc * m_c2w[0, 2] + yc * m_c2w[0, 1] + xc * m_c2w[0, 0])
yw = (m_c2w[1, 3] + zc * m_c2w[1, 2] + yc * m_c2w[1, 1] + xc * m_c2w[1, 0])
zw = (m_c2w[2, 3] + zc * m_c2w[2, 2] + yc * m_c2w[2, 1] + xc * m_c2w[2, 0])
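The per-pixel formulas above can be applied to a whole distance image at once. This is my own vectorized sketch (the function name and the use of numpy are assumptions, not SDK code), using the same intrinsics as the class defaults:

```python
import numpy as np

def depth_to_points(distData, fx, fy, cx, cy):
    """Back-project an HxW depth map (z per pixel) to camera-frame XYZ points."""
    h, w = distData.shape
    rows, cols = np.mgrid[0:h, 0:w]
    xp = (cx - cols) / fx          # pixel -> normalized image coordinates
    yp = (cy - rows) / fy
    zc = distData                  # the stereo sensor reports z directly
    return np.stack([xp * zc, yp * zc, zc], axis=-1)  # HxWx3 point array

# Example: a synthetic flat scene at z = 500 mm, 176x144 like the sensor
pts = depth_to_points(np.full((144, 176), 500.0), 146.5, 146.5, 84.4, 71.2)
```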
2.2 Visionary-T Mini (ToF)
# calculate radial distortion
# pixel coordinates -> image coordinates
xp = (myCamParams.cx - col) / myCamParams.fx
yp = (myCamParams.cy - row) / myCamParams.fy
# distortion correction
r2 = (xp * xp + yp * yp)
r4 = r2 * r2
k = 1 + myCamParams.k1 * r2 + myCamParams.k2 * r4
xd = xp * k  # distortion-corrected x
yd = yp * k
d = distData[row][col]  # the measured value is the radial distance np.sqrt(x*x + y*y + z*z), not z
s0 = np.sqrt(xd*xd + yd*yd + 1)  # length of the ray through this pixel at z = 1
# d / s0 is therefore the actual z value
# image coordinates -> camera coordinates; again f = 1, so multiply directly by z
xc = xd * d / s0
yc = yd * d / s0
zc = d / s0 - myCamParams.f2rc
# convert to world coordinate system
xw = (m_c2w[0, 3] + zc * m_c2w[0, 2] + yc * m_c2w[0, 1] + xc * m_c2w[0, 0])
yw = (m_c2w[1, 3] + zc * m_c2w[1, 2] + yc * m_c2w[1, 1] + xc * m_c2w[1, 0])
zw = (m_c2w[2, 3] + zc * m_c2w[2, 2] + yc * m_c2w[2, 1] + xc * m_c2w[2, 0])
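Wrapping the ToF back-projection in a small function makes the radial-distance-to-z step easier to check. A sketch under the same parameter names as the class above (the function itself is mine, not from the SDK):

```python
import numpy as np

def tof_pixel_to_camera(row, col, d, fx, fy, cx, cy, k1, k2, f2rc):
    """Convert one ToF pixel + radial distance d into camera-frame (x, y, z)."""
    xp = (cx - col) / fx
    yp = (cy - row) / fy
    r2 = xp * xp + yp * yp
    k = 1 + k1 * r2 + k2 * r2 * r2       # radial distortion factor
    xd, yd = xp * k, yp * k
    s0 = np.sqrt(xd * xd + yd * yd + 1)  # length of the ray at z = 1
    return xd * d / s0, yd * d / s0, d / s0 - f2rc

# Sanity check: exactly at the principal point the ray has s0 = 1,
# so the radial distance d equals z + f2rc.
x, y, z = tof_pixel_to_camera(71.2, 84.4, 1000.0,
                              146.5, 146.5, 84.4, 71.2,
                              0.326442, 0.219623, 0.0)
```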
3. Hand-Eye Calibration of a 2D Camera
Looking at the case above, we can see that the camera SDK only performs two matrix computations, i.e. converting pixel coordinates into actual camera coordinates. To actually use the data, we are still missing the transforms from the camera to the robot, the flange, or the work-object coordinate system. These are all instances of the camera-to-world transform described in the articles above; in other words, we need to solve for R and T, a rotation plus a translation. In theory, therefore, three calibration points are enough.
https://www.xjx100.cn/news/433771.html?action=onClick
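One common way to solve for the R and T just mentioned from point correspondences is the SVD-based Kabsch method; this is my own sketch of that approach (the linked article may use a different formulation), using four points for a well-conditioned example even though three non-collinear points suffice in theory:

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Find R, t such that dst ~= R @ src + t; src and dst are Nx3 arrays."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)   # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:              # guard against a reflection solution
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = c_dst - R @ c_src
    return R, t

# Example: recover a known 90-degree rotation about z plus a translation
Rz = np.array([[0., -1., 0.], [1., 0., 0.], [0., 0., 1.]])
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
dst = src @ Rz.T + np.array([10., 20., 30.])
R, t = fit_rigid_transform(src, dst)
```

In practice the source points would be camera-frame coordinates from the SDK snippets above and the destination points the same features measured in the robot or work-object frame.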
3.1 Euler Angles
https://www.guyuehome.com/43450#2__148
https://blog.csdn.net/weixin_43134049/article/details/122826538?spm=1001.2014.3001.5501
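Since the R obtained above often has to be reported to a robot controller as Euler angles, here is a minimal round-trip sketch in one common convention (ZYX, i.e. yaw-pitch-roll; this choice is my assumption, and the linked posts cover other conventions; gimbal lock near pitch = ±90° is ignored):

```python
import numpy as np

def euler_zyx_to_matrix(yaw, pitch, roll):
    """Build R = Rz(yaw) @ Ry(pitch) @ Rx(roll), angles in radians."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    return Rz @ Ry @ Rx

def matrix_to_euler_zyx(R):
    """Recover (yaw, pitch, roll) from a ZYX rotation matrix (no gimbal-lock handling)."""
    pitch = np.arcsin(-R[2, 0])
    yaw = np.arctan2(R[1, 0], R[0, 0])
    roll = np.arctan2(R[2, 1], R[2, 2])
    return yaw, pitch, roll

# Round trip: build R from known angles, then recover them
R = euler_zyx_to_matrix(0.3, 0.2, 0.1)
angles = matrix_to_euler_zyx(R)
```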