Table of Contents
- 0 Introduction
- 1 The Frame Class
- 1.1 Member Functions
- 1.2 Member Variables
- 2 What the Frame Class Is For
0 Introduction

ORB-SLAM2 study notes #8 covered image feature extraction and descriptor computation in detail. Building on that, this post looks at the image frame in ORB-SLAM2, namely the Frame class. Its main responsibilities include setting the camera parameters, computing depth from the stereo pair, and back-projecting keypoints into 3D map points.
1 The Frame Class

The main constructors of the Frame class are shown below.

Stereo camera Frame:
```cpp
// Stereo Frame constructor
Frame::Frame(const cv::Mat &imLeft, const cv::Mat &imRight, const double &timeStamp,
             ORBextractor *extractorLeft, ORBextractor *extractorRight,
             ORBVocabulary *voc, cv::Mat &K, cv::Mat &distCoef,
             const float &bf, const float &thDepth)
    : mpORBvocabulary(voc), mpORBextractorLeft(extractorLeft),
      mpORBextractorRight(extractorRight), mTimeStamp(timeStamp),
      mK(K.clone()), mDistCoef(distCoef.clone()), mbf(bf), mThDepth(thDepth),
      mpReferenceKF(static_cast<KeyFrame *>(NULL))
{
    // step 0. Assign an auto-incrementing frame ID
    mnId = nNextId++;

    // step 1. Read the scale-pyramid parameters from the extractor
    mnScaleLevels = mpORBextractorLeft->GetLevels();
    mfScaleFactor = mpORBextractorLeft->GetScaleFactor();
    mfLogScaleFactor = log(mfScaleFactor);
    mvScaleFactors = mpORBextractorLeft->GetScaleFactors();
    mvInvScaleFactors = mpORBextractorLeft->GetInverseScaleFactors();
    mvLevelSigma2 = mpORBextractorLeft->GetScaleSigmaSquares();
    mvInvLevelSigma2 = mpORBextractorLeft->GetInverseScaleSigmaSquares();

    // step 2. Extract ORB features from both images, one thread per image
    thread threadLeft(&Frame::ExtractORB, this, 0, imLeft);
    thread threadRight(&Frame::ExtractORB, this, 1, imRight);
    threadLeft.join();
    threadRight.join();

    N = mvKeys.size();
    if (mvKeys.empty())
        return;

    // step 3. Undistort keypoints; for stereo input UndistortKeyPoints() is
    // effectively a no-op, since the images are assumed to be rectified already
    UndistortKeyPoints();

    // step 4. Match features between the left and right images
    ComputeStereoMatches();

    // step 5. Compute the static members on the first call only
    if (mbInitialComputations)
    {
        ComputeImageBounds(imLeft);

        mfGridElementWidthInv = static_cast<float>(FRAME_GRID_COLS) / static_cast<float>(mnMaxX - mnMinX);
        mfGridElementHeightInv = static_cast<float>(FRAME_GRID_ROWS) / static_cast<float>(mnMaxY - mnMinY);

        fx = K.at<float>(0, 0);
        fy = K.at<float>(1, 1);
        cx = K.at<float>(0, 2);
        cy = K.at<float>(1, 2);
        invfx = 1.0f / fx;
        invfy = 1.0f / fy;

        // done; clear the flag
        mbInitialComputations = false;
    }

    mvpMapPoints = vector<MapPoint *>(N, static_cast<MapPoint *>(NULL)); // map points observed in this frame
    mvbOutlier = vector<bool>(N, false);                                 // mark all map points as inliers initially
    mb = mbf / fx;                                                       // stereo baseline in meters

    // step 6. Assign keypoints to the image grid
    AssignFeaturesToGrid();
}
```
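Step 6 distributes the keypoints into a coarse grid of FRAME_GRID_COLS × FRAME_GRID_ROWS cells, so that later matching can search only nearby cells instead of all N keypoints. A minimal stand-alone sketch of the cell lookup, mirroring the logic of Frame::PosInGrid (the free function and hard-coded grid size here are illustrative, not the original API):

```cpp
#include <cassert>
#include <cmath>

// Grid dimensions as defined in ORB-SLAM2's Frame.h
constexpr int FRAME_GRID_COLS = 64;
constexpr int FRAME_GRID_ROWS = 48;

// Map a keypoint's pixel coordinates to a grid cell. mnMinX/mnMinY are the
// undistorted image bounds; the *Inv factors are cells-per-pixel, precomputed
// once in the constructor. Returns false when the point falls outside the grid
// (undistortion can push a keypoint slightly outside the image bounds).
bool PosInGrid(float x, float y,
               float mnMinX, float mnMinY,
               float mfGridElementWidthInv, float mfGridElementHeightInv,
               int &posX, int &posY) {
    posX = static_cast<int>(std::round((x - mnMinX) * mfGridElementWidthInv));
    posY = static_cast<int>(std::round((y - mnMinY) * mfGridElementHeightInv));
    if (posX < 0 || posX >= FRAME_GRID_COLS || posY < 0 || posY >= FRAME_GRID_ROWS)
        return false;
    return true;
}
```

For a 640×480 image the inverse cell sizes are 64/640 = 48/480 = 0.1, so a keypoint at (320, 240) lands in cell (32, 24).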
RGB-D camera Frame:
```cpp
// RGB-D Frame constructor
Frame::Frame(const cv::Mat &imGray, const cv::Mat &imDepth, const double &timeStamp,
             ORBextractor *extractor, ORBVocabulary *voc, cv::Mat &K, cv::Mat &distCoef,
             const float &bf, const float &thDepth)
    : mpORBvocabulary(voc), mpORBextractorLeft(extractor),
      mpORBextractorRight(static_cast<ORBextractor *>(NULL)), mTimeStamp(timeStamp),
      mK(K.clone()), mDistCoef(distCoef.clone()), mbf(bf), mThDepth(thDepth)
{
    // step 0. Assign an auto-incrementing frame ID
    mnId = nNextId++;

    // step 1. Read the scale-pyramid parameters from the extractor
    mnScaleLevels = mpORBextractorLeft->GetLevels();
    mfScaleFactor = mpORBextractorLeft->GetScaleFactor();
    mfLogScaleFactor = log(mfScaleFactor);
    mvScaleFactors = mpORBextractorLeft->GetScaleFactors();
    mvInvScaleFactors = mpORBextractorLeft->GetInverseScaleFactors();
    mvLevelSigma2 = mpORBextractorLeft->GetScaleSigmaSquares();
    mvInvLevelSigma2 = mpORBextractorLeft->GetInverseScaleSigmaSquares();

    // step 2. Extract ORB features from the gray image
    ExtractORB(0, imGray);

    N = mvKeys.size();
    if (mvKeys.empty())
        return;

    // step 3. Undistort keypoints
    UndistortKeyPoints();

    // step 4. Construct a virtual right image from the depth map
    ComputeStereoFromRGBD(imDepth);

    mvpMapPoints = vector<MapPoint *>(N, static_cast<MapPoint *>(NULL)); // map points observed in this frame
    mvbOutlier = vector<bool>(N, false);                                 // mark all map points as inliers initially

    // step 5. Compute the static members on the first call only
    if (mbInitialComputations)
    {
        ComputeImageBounds(imGray); // note: imGray here, not imLeft as in the stereo constructor

        mfGridElementWidthInv = static_cast<float>(FRAME_GRID_COLS) / static_cast<float>(mnMaxX - mnMinX);
        mfGridElementHeightInv = static_cast<float>(FRAME_GRID_ROWS) / static_cast<float>(mnMaxY - mnMinY);

        fx = K.at<float>(0, 0);
        fy = K.at<float>(1, 1);
        cx = K.at<float>(0, 2);
        cy = K.at<float>(1, 2);
        invfx = 1.0f / fx;
        invfy = 1.0f / fy;

        // done; clear the flag
        mbInitialComputations = false;
    }

    mb = mbf / fx; // stereo baseline in meters

    // step 6. Assign keypoints to the image grid
    AssignFeaturesToGrid();
}
```
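Step 4 is what makes an RGB-D frame look like a stereo frame: for a keypoint at column u with a valid depth reading d, ComputeStereoFromRGBD stores the column the point would have in a virtual right camera, u - mbf/d, since disparity = mbf / depth. A minimal sketch of that relation (the free function name is illustrative):

```cpp
#include <cassert>
#include <cmath>

// Core of ComputeStereoFromRGBD: a keypoint at column u with depth d would
// appear at column u - mbf/d in a virtual right image, because
// disparity = mbf / depth, where mbf is baseline times focal length.
// An invalid depth is marked with -1, leaving the point monocular.
float VirtualRightCoord(float u, float depth, float mbf) {
    if (depth <= 0.0f)
        return -1.0f;        // no depth reading: the point stays monocular
    return u - mbf / depth;  // virtual right x-coordinate
}
```

For example, with mbf = 40 (a 0.08 m baseline and fx = 500), a keypoint at column 100 with depth 2 m maps to column 80 in the virtual right image.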
1.1 Member Functions

Member | Access | Description
---|---|---
ORBextractor* mpORBextractorLeft, ORBextractor* mpORBextractorRight | public | ORB feature extractors for the left and right images
void ExtractORB(int flag, const cv::Mat &im) | public | Runs ORB feature extraction (flag 0: left image, flag 1: right image)
cv::Mat mDescriptors, cv::Mat mDescriptorsRight | public | ORB descriptors of the left/right keypoints
std::vector<cv::KeyPoint> mvKeys, std::vector<cv::KeyPoint> mvKeysRight | public | Left/right keypoints before undistortion
std::vector<cv::KeyPoint> mvKeysUn | public | Left keypoints after undistortion
std::vector<float> mvuRight | public | For each left keypoint, the x-coordinate of its match in the right image (matched left/right keypoints share the same y-coordinate)
std::vector<float> mvDepth | public | Depth of each keypoint
float mThDepth | public | Threshold separating close and far keypoints: keypoints with depth below this value are treated as stereo (close) points, those above as monocular (far) points
1.2 Member Variables

Member variable | Access | Description
---|---|---
mbInitialComputations | public | static flag indicating whether the Frame class's camera parameters still need to be set; initialized to true and set to false after the first constructor call fills them in
float fx, float fy, float cx, float cy, float invfx, float invfy | public | static variables, camera intrinsics
cv::Mat mK | public | Camera intrinsic matrix
float mb | public | Stereo baseline: the distance between the two cameras
float mbf | public | Stereo baseline multiplied by the focal length
Most of the Frame class's members are camera-related parameters, and all Frame objects in the system share the same set of camera parameters (hence the static members).
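With the intrinsics in hand, back-projecting a keypoint to a 3D map point (the Frame::UnprojectStereo mentioned in the introduction) is just the inverse pinhole model. A simplified sketch in camera coordinates (the real method additionally applies the camera pose to move the point into the world frame, which is omitted here):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Lift an undistorted pixel (u, v) with depth z into camera coordinates
// using the pinhole model:
//   x = (u - cx) * z / fx,  y = (v - cy) * z / fy
// The Frame class caches invfx = 1/fx and invfy = 1/fy to avoid the divides.
std::array<float, 3> Unproject(float u, float v, float z,
                               float fx, float fy, float cx, float cy) {
    const float x = (u - cx) * z / fx;
    const float y = (v - cy) * z / fy;
    return {x, y, z};
}
```

A pixel at the principal point (cx, cy) unprojects straight down the optical axis to (0, 0, z), which is a quick sanity check for the signs.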
2 What the Frame Class Is For

Apart from the few frames promoted to KeyFrame, most Frame objects exist only so the Tracking thread can estimate the current frame's pose. They have no effect on the LocalMapping and LoopClosing threads, and are destroyed as soon as mLastFrame and mCurrentFrame are updated.
References:
- https://github.com/raulmur/ORB_SLAM2
- https://github.com/electech6/ORB_SLAM2_detailed_comments/tree/master