3D Object Recognition Based on Correspondence Grouping

This tutorial explains how to perform 3D object recognition with the pcl_recognition module. Specifically, it shows how to use correspondence grouping algorithms to cluster the point-to-point correspondences (obtained by matching 3D descriptors of the current scene against those of a model) into instances of the model present in the scene. For each cluster, which represents a possible model instance in the scene, the correspondence grouping algorithms also output the transformation matrix identifying the 6DOF pose estimate of that instance.

The Code

#include <pcl/io/pcd_io.h>
#include <pcl/point_cloud.h>
#include <pcl/correspondence.h>
#include <pcl/features/normal_3d_omp.h>
#include <pcl/features/shot_omp.h>
#include <pcl/features/board.h>
#include <pcl/filters/uniform_sampling.h>
#include <pcl/recognition/cg/hough_3d.h>
#include <pcl/recognition/cg/geometric_consistency.h>
#include <pcl/visualization/pcl_visualizer.h>
#include <pcl/kdtree/kdtree_flann.h>
#include <pcl/kdtree/impl/kdtree_flann.hpp>
#include <pcl/common/transforms.h>
#include <pcl/console/parse.h>
typedef pcl::PointXYZRGBA PointType;
typedef pcl::Normal NormalType;
typedef pcl::ReferenceFrame RFType;
typedef pcl::SHOT352 DescriptorType;
std::string model_filename_;
std::string scene_filename_;
//Algorithm params
bool show_keypoints_ (false);
bool show_correspondences_ (false);
bool use_cloud_resolution_ (false);
bool use_hough_ (true);
float model_ss_ (0.01f);
float scene_ss_ (0.03f);
float rf_rad_ (0.015f);
float descr_rad_ (0.02f);
float cg_size_ (0.01f);
float cg_thresh_ (5.0f);
void
showHelp (char *filename)
{
std::cout << std::endl;
std::cout << "***************************************************************************" << std::endl;
std::cout << "*                                                                         *" << std::endl;
std::cout << "*             Correspondence Grouping Tutorial - Usage Guide              *" << std::endl;
std::cout << "*                                                                         *" << std::endl;
std::cout << "***************************************************************************" << std::endl << std::endl;
std::cout << "Usage: " << filename << " model_filename.pcd scene_filename.pcd [Options]" << std::endl << std::endl;
std::cout << "Options:" << std::endl;
std::cout << "     -h:                     Show this help." << std::endl;
std::cout << "     -k:                     Show used keypoints." << std::endl;
std::cout << "     -c:                     Show used correspondences." << std::endl;
std::cout << "     -r:                     Compute the model cloud resolution and multiply" << std::endl;
std::cout << "                             each radius given by that value." << std::endl;
std::cout << "     --algorithm (Hough|GC): Clustering algorithm used (default Hough)." << std::endl;
std::cout << "     --model_ss val:         Model uniform sampling radius (default 0.01)" << std::endl;
std::cout << "     --scene_ss val:         Scene uniform sampling radius (default 0.03)" << std::endl;
std::cout << "     --rf_rad val:           Reference frame radius (default 0.015)" << std::endl;
std::cout << "     --descr_rad val:        Descriptor radius (default 0.02)" << std::endl;
std::cout << "     --cg_size val:          Cluster size (default 0.01)" << std::endl;
std::cout << "     --cg_thresh val:        Clustering threshold (default 5)" << std::endl << std::endl;
}
void
parseCommandLine (int argc, char *argv[])
{
//Show help
if (pcl::console::find_switch (argc, argv, "-h"))
{
showHelp (argv[0]);
exit (0);
}
//Model & scene filenames
std::vector<int> filenames;
filenames = pcl::console::parse_file_extension_argument (argc, argv, ".pcd");
if (filenames.size () != 2)
{
std::cout << "Filenames missing.\n";
showHelp (argv[0]);
exit (-1);
}
model_filename_ = argv[filenames[0]];
scene_filename_ = argv[filenames[1]];
//Program behavior
if (pcl::console::find_switch (argc, argv, "-k"))
{
show_keypoints_ = true;
}
if (pcl::console::find_switch (argc, argv, "-c"))
{
show_correspondences_ = true;
}
if (pcl::console::find_switch (argc, argv, "-r"))
{
use_cloud_resolution_ = true;
}
std::string used_algorithm;
if (pcl::console::parse_argument (argc, argv, "--algorithm", used_algorithm) != -1)
{
if (used_algorithm.compare ("Hough") == 0)
{
use_hough_ = true;
}else if (used_algorithm.compare ("GC") == 0)
{
use_hough_ = false;
}
else
{
std::cout << "Wrong algorithm name.\n";
showHelp (argv[0]);
exit (-1);
}
}
//General parameters
pcl::console::parse_argument (argc, argv, "--model_ss", model_ss_);
pcl::console::parse_argument (argc, argv, "--scene_ss", scene_ss_);
pcl::console::parse_argument (argc, argv, "--rf_rad", rf_rad_);
pcl::console::parse_argument (argc, argv, "--descr_rad", descr_rad_);
pcl::console::parse_argument (argc, argv, "--cg_size", cg_size_);
pcl::console::parse_argument (argc, argv, "--cg_thresh", cg_thresh_);
}
double
computeCloudResolution (const pcl::PointCloud<PointType>::ConstPtr &cloud)
{
double res = 0.0;
int n_points = 0;
int nres;
std::vector<int> indices (2);
std::vector<float> sqr_distances (2);
pcl::search::KdTree<PointType> tree;
tree.setInputCloud (cloud);
for (size_t i = 0; i < cloud->size (); ++i)
{
if (! pcl_isfinite ((*cloud)[i].x))
{
continue;
}
//Considering the second neighbor since the first is the point itself.
nres = tree.nearestKSearch (i, 2, indices, sqr_distances);
if (nres == 2)
{
res += sqrt (sqr_distances[1]);
++n_points;
}
}
if (n_points != 0)
{
res /= n_points;
}
return res;
}
int
main (int argc, char *argv[])
{
parseCommandLine (argc, argv);
pcl::PointCloud<PointType>::Ptr model (new pcl::PointCloud<PointType> ());
pcl::PointCloud<PointType>::Ptr model_keypoints (new pcl::PointCloud<PointType> ());
pcl::PointCloud<PointType>::Ptr scene (new pcl::PointCloud<PointType> ());
pcl::PointCloud<PointType>::Ptr scene_keypoints (new pcl::PointCloud<PointType> ());
pcl::PointCloud<NormalType>::Ptr model_normals (new pcl::PointCloud<NormalType> ());
pcl::PointCloud<NormalType>::Ptr scene_normals (new pcl::PointCloud<NormalType> ());
pcl::PointCloud<DescriptorType>::Ptr model_descriptors (new pcl::PointCloud<DescriptorType> ());
pcl::PointCloud<DescriptorType>::Ptr scene_descriptors (new pcl::PointCloud<DescriptorType> ());
//
//  Load clouds
//
if (pcl::io::loadPCDFile (model_filename_, *model) < 0)
{
std::cout << "Error loading model cloud." << std::endl;
showHelp (argv[0]);
return (-1);
}
if (pcl::io::loadPCDFile (scene_filename_, *scene) < 0)
{
std::cout << "Error loading scene cloud." << std::endl;
showHelp (argv[0]);
return (-1);
}
//
//  Set up resolution invariance
//
if (use_cloud_resolution_)
{
float resolution = static_cast<float> (computeCloudResolution (model));
if (resolution != 0.0f)
{
model_ss_   *= resolution;
scene_ss_   *= resolution;
rf_rad_     *= resolution;
descr_rad_  *= resolution;
cg_size_    *= resolution;
}
std::cout << "Model resolution:       " << resolution << std::endl;
std::cout << "Model sampling size:    " << model_ss_ << std::endl;
std::cout << "Scene sampling size:    " << scene_ss_ << std::endl;
std::cout << "LRF support radius:     " << rf_rad_ << std::endl;
std::cout << "SHOT descriptor radius: " << descr_rad_ << std::endl;
std::cout << "Clustering bin size:    " << cg_size_ << std::endl << std::endl;
}
//
//  Compute Normals
//
pcl::NormalEstimationOMP<PointType, NormalType> norm_est;
norm_est.setKSearch (10);
norm_est.setInputCloud (model);
norm_est.compute (*model_normals);
norm_est.setInputCloud (scene);
norm_est.compute (*scene_normals);
//
//  Downsample Clouds to Extract keypoints
//
pcl::UniformSampling<PointType> uniform_sampling;
uniform_sampling.setInputCloud (model);
uniform_sampling.setRadiusSearch (model_ss_);
uniform_sampling.filter (*model_keypoints);
std::cout << "Model total points: " << model->size () << "; Selected Keypoints: " << model_keypoints->size () << std::endl;
uniform_sampling.setInputCloud (scene);
uniform_sampling.setRadiusSearch (scene_ss_);
uniform_sampling.filter (*scene_keypoints);
std::cout << "Scene total points: " << scene->size () << "; Selected Keypoints: " << scene_keypoints->size () << std::endl;
//
//  Compute Descriptor for keypoints
//
pcl::SHOTEstimationOMP<PointType, NormalType, DescriptorType> descr_est;
descr_est.setRadiusSearch (descr_rad_);
descr_est.setInputCloud (model_keypoints);
descr_est.setInputNormals (model_normals);
descr_est.setSearchSurface (model);
descr_est.compute (*model_descriptors);
descr_est.setInputCloud (scene_keypoints);
descr_est.setInputNormals (scene_normals);
descr_est.setSearchSurface (scene);
descr_est.compute (*scene_descriptors);
//
//  Find Model-Scene Correspondences with KdTree
//
pcl::CorrespondencesPtr model_scene_corrs (new pcl::Correspondences ());
pcl::KdTreeFLANN<DescriptorType> match_search;
match_search.setInputCloud (model_descriptors);
//  For each scene keypoint descriptor, find nearest neighbor into the model keypoints descriptor cloud and add it to the correspondences vector.
for (size_t i = 0; i < scene_descriptors->size (); ++i)
{
std::vector<int> neigh_indices (1);
std::vector<float> neigh_sqr_dists (1);
if (!pcl_isfinite (scene_descriptors->at (i).descriptor[0])) //skipping NaNs
{
continue;
}
int found_neighs = match_search.nearestKSearch (scene_descriptors->at (i), 1, neigh_indices, neigh_sqr_dists);
if(found_neighs == 1 && neigh_sqr_dists[0] < 0.25f) //  add match only if the squared descriptor distance is less than 0.25 (SHOT descriptor distances are between 0 and 1 by design)
{
pcl::Correspondence corr (neigh_indices[0], static_cast<int> (i), neigh_sqr_dists[0]);
model_scene_corrs->push_back (corr);
}
}
std::cout << "Correspondences found: " << model_scene_corrs->size () << std::endl;
//
//  Actual Clustering
//
std::vector<Eigen::Matrix4f, Eigen::aligned_allocator<Eigen::Matrix4f> > rototranslations;
std::vector<pcl::Correspondences> clustered_corrs;
//  Using Hough3D
if (use_hough_)
{
//
//  Compute (Keypoints) Reference Frames only for Hough
//
pcl::PointCloud<RFType>::Ptr model_rf (new pcl::PointCloud<RFType> ());
pcl::PointCloud<RFType>::Ptr scene_rf (new pcl::PointCloud<RFType> ());
pcl::BOARDLocalReferenceFrameEstimation<PointType, NormalType, RFType> rf_est;
rf_est.setFindHoles (true);
rf_est.setRadiusSearch (rf_rad_);
rf_est.setInputCloud (model_keypoints);
rf_est.setInputNormals (model_normals);
rf_est.setSearchSurface (model);
rf_est.compute (*model_rf);
rf_est.setInputCloud (scene_keypoints);
rf_est.setInputNormals (scene_normals);
rf_est.setSearchSurface (scene);
rf_est.compute (*scene_rf);
//  Clustering
pcl::Hough3DGrouping<PointType, PointType, RFType, RFType> clusterer;
clusterer.setHoughBinSize (cg_size_);
clusterer.setHoughThreshold (cg_thresh_);
clusterer.setUseInterpolation (true);
clusterer.setUseDistanceWeight (false);
clusterer.setInputCloud (model_keypoints);
clusterer.setInputRf (model_rf);
clusterer.setSceneCloud (scene_keypoints);
clusterer.setSceneRf (scene_rf);
clusterer.setModelSceneCorrespondences (model_scene_corrs);
//clusterer.cluster (clustered_corrs);
clusterer.recognize (rototranslations, clustered_corrs);
}
else // Using GeometricConsistency
{
pcl::GeometricConsistencyGrouping<PointType, PointType> gc_clusterer;
gc_clusterer.setGCSize (cg_size_);
gc_clusterer.setGCThreshold (cg_thresh_);
gc_clusterer.setInputCloud (model_keypoints);
gc_clusterer.setSceneCloud (scene_keypoints);
gc_clusterer.setModelSceneCorrespondences (model_scene_corrs);
//gc_clusterer.cluster (clustered_corrs);
gc_clusterer.recognize (rototranslations, clustered_corrs);
}
//
//  Output results
//
std::cout << "Model instances found: " << rototranslations.size () << std::endl;
for (size_t i = 0; i < rototranslations.size (); ++i)
{
std::cout << "\n    Instance " << i + 1 << ":" << std::endl;
std::cout << "        Correspondences belonging to this instance: " << clustered_corrs[i].size () << std::endl;
// Print the rotation matrix and translation vector
Eigen::Matrix3f rotation = rototranslations[i].block<3,3>(0, 0);
Eigen::Vector3f translation = rototranslations[i].block<3,1>(0, 3);
printf ("\n");
printf ("            | %6.3f %6.3f %6.3f | \n", rotation (0,0), rotation (0,1), rotation (0,2));
printf ("        R = | %6.3f %6.3f %6.3f | \n", rotation (1,0), rotation (1,1), rotation (1,2));
printf ("            | %6.3f %6.3f %6.3f | \n", rotation (2,0), rotation (2,1), rotation (2,2));
printf ("\n");
printf ("        t = < %0.3f, %0.3f, %0.3f >\n", translation (0), translation (1), translation (2));
}
//
//  Visualization
//
pcl::visualization::PCLVisualizer viewer ("Correspondence Grouping");
viewer.addPointCloud (scene, "scene_cloud");
pcl::PointCloud<PointType>::Ptr off_scene_model (new pcl::PointCloud<PointType> ());
pcl::PointCloud<PointType>::Ptr off_scene_model_keypoints (new pcl::PointCloud<PointType> ());
if (show_correspondences_ || show_keypoints_)
{
//  We are translating the model so that it doesn't end in the middle of the scene representation
pcl::transformPointCloud (*model, *off_scene_model, Eigen::Vector3f (-1,0,0), Eigen::Quaternionf (1, 0, 0, 0));
pcl::transformPointCloud (*model_keypoints, *off_scene_model_keypoints, Eigen::Vector3f (-1,0,0), Eigen::Quaternionf (1, 0, 0, 0));
pcl::visualization::PointCloudColorHandlerCustom<PointType> off_scene_model_color_handler (off_scene_model, 255, 255, 128);
viewer.addPointCloud (off_scene_model, off_scene_model_color_handler, "off_scene_model");
}
if (show_keypoints_)
{
pcl::visualization::PointCloudColorHandlerCustom<PointType> scene_keypoints_color_handler (scene_keypoints, 0, 0, 255);
viewer.addPointCloud (scene_keypoints, scene_keypoints_color_handler, "scene_keypoints");
viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 5, "scene_keypoints");
pcl::visualization::PointCloudColorHandlerCustom<PointType> off_scene_model_keypoints_color_handler (off_scene_model_keypoints, 0, 0, 255);
viewer.addPointCloud (off_scene_model_keypoints, off_scene_model_keypoints_color_handler, "off_scene_model_keypoints");
viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 5, "off_scene_model_keypoints");
}
for (size_t i = 0; i < rototranslations.size (); ++i)
{
pcl::PointCloud<PointType>::Ptr rotated_model (new pcl::PointCloud<PointType> ());
pcl::transformPointCloud (*model, *rotated_model, rototranslations[i]);
std::stringstream ss_cloud;
ss_cloud << "instance" << i;
pcl::visualization::PointCloudColorHandlerCustom<PointType> rotated_model_color_handler (rotated_model, 255, 0, 0);
viewer.addPointCloud (rotated_model, rotated_model_color_handler, ss_cloud.str ());
if (show_correspondences_)
{
for (size_t j = 0; j < clustered_corrs[i].size (); ++j)
{
std::stringstream ss_line;
ss_line << "correspondence_line" << i << "_" << j;
PointType& model_point = off_scene_model_keypoints->at (clustered_corrs[i][j].index_query);
PointType& scene_point = scene_keypoints->at (clustered_corrs[i][j].index_match);
//  We are drawing a line for each pair of clustered correspondences found between the model and the scene
viewer.addLine<PointType, PointType> (model_point, scene_point, 0, 255, 0, ss_line.str ());
}
}
}
while (!viewer.wasStopped ())
{
viewer.spinOnce ();
}
return (0);
}

Helper Functions

This function prints a short explanation of the command-line options.

void
showHelp (char *filename)
{
std::cout << std::endl;
std::cout << "***************************************************************************" << std::endl;
std::cout << "*                                                                         *" << std::endl;
std::cout << "*             Correspondence Grouping Tutorial - Usage Guide              *" << std::endl;
std::cout << "*                                                                         *" << std::endl;
std::cout << "***************************************************************************" << std::endl << std::endl;
std::cout << "Usage: " << filename << " model_filename.pcd scene_filename.pcd [Options]" << std::endl << std::endl;
std::cout << "Options:" << std::endl;
std::cout << "     -h:                     Show this help." << std::endl;
std::cout << "     -k:                     Show used keypoints." << std::endl;
std::cout << "     -c:                     Show used correspondences." << std::endl;
std::cout << "     -r:                     Compute the model cloud resolution and multiply" << std::endl;
std::cout << "                             each radius given by that value." << std::endl;
std::cout << "     --algorithm (Hough|GC): Clustering algorithm used (default Hough)." << std::endl;
std::cout << "     --model_ss val:         Model uniform sampling radius (default 0.01)" << std::endl;
std::cout << "     --scene_ss val:         Scene uniform sampling radius (default 0.03)" << std::endl;
std::cout << "     --rf_rad val:           Reference frame radius (default 0.015)" << std::endl;
std::cout << "     --descr_rad val:        Descriptor radius (default 0.02)" << std::endl;
std::cout << "     --cg_size val:          Cluster size (default 0.01)" << std::endl;
std::cout << "     --cg_thresh val:        Clustering threshold (default 5)" << std::endl << std::endl;
}

The second function parses the command-line arguments and sets the corresponding parameters for the execution.

void
parseCommandLine (int argc, char *argv[])
{
//Show help
if (pcl::console::find_switch (argc, argv, "-h"))
{
showHelp (argv[0]);
exit (0);
}
//Model & scene filenames
std::vector<int> filenames;
filenames = pcl::console::parse_file_extension_argument (argc, argv, ".pcd");
if (filenames.size () != 2)
{
std::cout << "Filenames missing.\n";
showHelp (argv[0]);
exit (-1);
}
model_filename_ = argv[filenames[0]];
scene_filename_ = argv[filenames[1]];
//Program behavior
if (pcl::console::find_switch (argc, argv, "-k"))
{
show_keypoints_ = true;
}
if (pcl::console::find_switch (argc, argv, "-c"))
{
show_correspondences_ = true;
}
if (pcl::console::find_switch (argc, argv, "-r"))
{
use_cloud_resolution_ = true;
}
std::string used_algorithm;
if (pcl::console::parse_argument (argc, argv, "--algorithm", used_algorithm) != -1)
{
if (used_algorithm.compare ("Hough") == 0)
{
use_hough_ = true;
}else if (used_algorithm.compare ("GC") == 0)
{
use_hough_ = false;
}
else
{
std::cout << "Wrong algorithm name.\n";
showHelp (argv[0]);
exit (-1);
}
}
//General parameters
pcl::console::parse_argument (argc, argv, "--model_ss", model_ss_);
pcl::console::parse_argument (argc, argv, "--scene_ss", scene_ss_);
pcl::console::parse_argument (argc, argv, "--rf_rad", rf_rad_);
pcl::console::parse_argument (argc, argv, "--descr_rad", descr_rad_);
pcl::console::parse_argument (argc, argv, "--cg_size", cg_size_);
pcl::console::parse_argument (argc, argv, "--cg_thresh", cg_thresh_);
}

You can switch between the two clustering algorithms with the --algorithm (Hough|GC) command-line option:

Hough (default)

An algorithm based on a 3D Hough voting scheme.

GC

A geometric consistency (GC) clustering algorithm that enforces simple geometric constraints between pairs of correspondences.

Some other interesting options are -k, -c and -r:

-k shows the keypoints used to compute the correspondences, overlaid in blue.

-c draws a line connecting each pair of corresponding points between the model and the scene.

-r estimates the spatial resolution of the model point cloud and then treats each radius as a multiple of that resolution. This provides some resolution invariance, so the same command line can be reused on point clouds with different resolutions.

The following function computes the spatial resolution of a given point cloud, averaging the distance between each point in the cloud and its nearest neighbor.

double
computeCloudResolution (const pcl::PointCloud<PointType>::ConstPtr &cloud)
{
double res = 0.0;
int n_points = 0;
int nres;
std::vector<int> indices (2);
std::vector<float> sqr_distances (2);
pcl::search::KdTree<PointType> tree;
tree.setInputCloud (cloud);
for (size_t i = 0; i < cloud->size (); ++i)
{
if (! pcl_isfinite ((*cloud)[i].x))
{
continue;
}
//Considering the second neighbor since the first is the point itself.
nres = tree.nearestKSearch (i, 2, indices, sqr_distances);
if (nres == 2)
{
res += sqrt (sqr_distances[1]);
++n_points;
}
}
if (n_points != 0)
{
res /= n_points;
}
return res;
}

Clustering Pipeline

This is the main function, which performs the actual clustering. First, it parses the command line and loads the model and scene point clouds:

parseCommandLine (argc, argv);
if (pcl::io::loadPCDFile (model_filename_, *model) < 0)
{
std::cout << "Error loading model cloud." << std::endl;
showHelp (argv[0]);
return (-1);
}
if (pcl::io::loadPCDFile (scene_filename_, *scene) < 0)
{
std::cout << "Error loading scene cloud." << std::endl;
showHelp (argv[0]);
return (-1);
}

Second, and only if the resolution-invariance flag has been enabled, the program adjusts the radii that will be used in the next sections by multiplying them by the estimated model resolution:

if (use_cloud_resolution_)
{
float resolution = static_cast<float> (computeCloudResolution (model));
if (resolution != 0.0f)
{
model_ss_   *= resolution;
scene_ss_   *= resolution;
rf_rad_     *= resolution;
descr_rad_  *= resolution;
cg_size_    *= resolution;
}
std::cout << "Model resolution:       " << resolution << std::endl;
std::cout << "Model sampling size:    " << model_ss_ << std::endl;
std::cout << "Scene sampling size:    " << scene_ss_ << std::endl;
std::cout << "LRF support radius:     " << rf_rad_ << std::endl;
std::cout << "SHOT descriptor radius: " << descr_rad_ << std::endl;
std::cout << "Clustering bin size:    " << cg_size_ << std::endl << std::endl;
}

Next, it computes the normals for each point of the model and scene clouds with the NormalEstimationOMP estimator, using the 10 nearest neighbors of each point:

  pcl::NormalEstimationOMP<PointType, NormalType> norm_est;
norm_est.setKSearch (10);
norm_est.setInputCloud (model);
norm_est.compute (*model_normals);
norm_est.setInputCloud (scene);
norm_est.compute (*scene_normals);

It then downsamples each cloud to extract a small number of keypoints. The radius used by UniformSampling is either set on the command line or left at its default value:

  pcl::UniformSampling<PointType> uniform_sampling;
uniform_sampling.setInputCloud (model);
uniform_sampling.setRadiusSearch (model_ss_);
uniform_sampling.filter (*model_keypoints);
std::cout << "Model total points: " << model->size () << "; Selected Keypoints: " << model_keypoints->size () << std::endl;
uniform_sampling.setInputCloud (scene);
uniform_sampling.setRadiusSearch (scene_ss_);
uniform_sampling.filter (*scene_keypoints);
std::cout << "Scene total points: " << scene->size () << "; Selected Keypoints: " << scene_keypoints->size () << std::endl;

The next stage associates a 3D descriptor with each model and scene keypoint. Here we compute SHOT descriptors with the SHOTEstimationOMP estimator:

pcl::SHOTEstimationOMP<PointType, NormalType, DescriptorType> descr_est;
descr_est.setRadiusSearch (descr_rad_);
descr_est.setInputCloud (model_keypoints);
descr_est.setInputNormals (model_normals);
descr_est.setSearchSurface (model);
descr_est.compute (*model_descriptors);
descr_est.setInputCloud (scene_keypoints);
descr_est.setInputNormals (scene_normals);
descr_est.setSearchSurface (scene);
descr_est.compute (*scene_descriptors);

Now we need to determine point-to-point correspondences between the model descriptors and the scene descriptors. To do this, the program uses a KdTreeFLANN whose input cloud has been set to the cloud of model descriptors. For each descriptor associated with a scene keypoint, it finds the most similar model descriptor based on Euclidean distance, and it pushes the pair of matched keypoints into the Correspondences vector only if the two descriptors are similar enough:

pcl::CorrespondencesPtr model_scene_corrs (new pcl::Correspondences ());
pcl::KdTreeFLANN<DescriptorType> match_search;
match_search.setInputCloud (model_descriptors);
//  For each scene keypoint descriptor, find nearest neighbor into the model keypoints descriptor cloud and add it to the correspondences vector.
for (size_t i = 0; i < scene_descriptors->size (); ++i)
{
std::vector<int> neigh_indices (1);
std::vector<float> neigh_sqr_dists (1);
if (!pcl_isfinite (scene_descriptors->at (i).descriptor[0])) //skipping NaNs
{
continue;
}
int found_neighs = match_search.nearestKSearch (scene_descriptors->at (i), 1, neigh_indices, neigh_sqr_dists);
if(found_neighs == 1 && neigh_sqr_dists[0] < 0.25f) //  add match only if the squared descriptor distance is less than 0.25 (SHOT descriptor distances are between 0 and 1 by design)
{
pcl::Correspondence corr (neigh_indices[0], static_cast<int> (i), neigh_sqr_dists[0]);
model_scene_corrs->push_back (corr);
}
}
std::cout << "Correspondences found: " << model_scene_corrs->size () << std::endl;

The final stage of the pipeline is the actual clustering of the previously found correspondences.

The default algorithm is Hough3DGrouping, which is based on Hough voting. Note that this algorithm needs a Local Reference Frame (LRF) to be associated with each keypoint cloud passed as an argument. In this example, we compute the LRFs explicitly with the BOARDLocalReferenceFrameEstimation estimator before invoking the clustering algorithm:

if (use_hough_)
{
//
//  Compute (Keypoints) Reference Frames only for Hough
//
pcl::PointCloud<RFType>::Ptr model_rf (new pcl::PointCloud<RFType> ());
pcl::PointCloud<RFType>::Ptr scene_rf (new pcl::PointCloud<RFType> ());
pcl::BOARDLocalReferenceFrameEstimation<PointType, NormalType, RFType> rf_est;
rf_est.setFindHoles (true);
rf_est.setRadiusSearch (rf_rad_);
rf_est.setInputCloud (model_keypoints);
rf_est.setInputNormals (model_normals);
rf_est.setSearchSurface (model);
rf_est.compute (*model_rf);
rf_est.setInputCloud (scene_keypoints);
rf_est.setInputNormals (scene_normals);
rf_est.setSearchSurface (scene);
rf_est.compute (*scene_rf);
//  Clustering
pcl::Hough3DGrouping<PointType, PointType, RFType, RFType> clusterer;
clusterer.setHoughBinSize (cg_size_);
clusterer.setHoughThreshold (cg_thresh_);
clusterer.setUseInterpolation (true);
clusterer.setUseDistanceWeight (false);
clusterer.setInputCloud (model_keypoints);
clusterer.setInputRf (model_rf);
clusterer.setSceneCloud (scene_keypoints);
clusterer.setSceneRf (scene_rf);
clusterer.setModelSceneCorrespondences (model_scene_corrs);
//clusterer.cluster (clustered_corrs);
clusterer.recognize (rototranslations, clustered_corrs);
}
else // Using GeometricConsistency
{
pcl::GeometricConsistencyGrouping<PointType, PointType> gc_clusterer;

Note that it is not strictly necessary to compute the LRFs explicitly before calling the clustering algorithm: if the clouds fed to the clustering algorithm do not have an associated set of LRFs, Hough3DGrouping computes them internally before performing the clustering.

As an alternative to Hough3DGrouping, we can use the GeometricConsistencyGrouping algorithm. In this case the LRF computation is not needed, so we simply create an instance of the algorithm class, pass the parameters, and invoke its recognize method:

    gc_clusterer.setGCSize (cg_size_);
gc_clusterer.setGCThreshold (cg_thresh_);
gc_clusterer.setInputCloud (model_keypoints);
gc_clusterer.setSceneCloud (scene_keypoints);
gc_clusterer.setModelSceneCorrespondences (model_scene_corrs);
//gc_clusterer.cluster (clustered_corrs);
gc_clusterer.recognize (rototranslations, clustered_corrs);
}

Output and visualization. For each recognized model instance, the program prints the number of correspondences supporting it, along with the rotation matrix and translation vector of its estimated 6DOF pose:

std::cout << "Model instances found: " << rototranslations.size () << std::endl;
for (size_t i = 0; i < rototranslations.size (); ++i)
{
std::cout << "\n    Instance " << i + 1 << ":" << std::endl;
std::cout << "        Correspondences belonging to this instance: " << clustered_corrs[i].size () << std::endl;
// Print the rotation matrix and translation vector
Eigen::Matrix3f rotation = rototranslations[i].block<3,3>(0, 0);
Eigen::Vector3f translation = rototranslations[i].block<3,1>(0, 3);
printf ("\n");
printf ("            | %6.3f %6.3f %6.3f | \n", rotation (0,0), rotation (0,1), rotation (0,2));
printf ("        R = | %6.3f %6.3f %6.3f | \n", rotation (1,0), rotation (1,1), rotation (1,2));
printf ("            | %6.3f %6.3f %6.3f | \n", rotation (2,0), rotation (2,1), rotation (2,2));
printf ("\n");
printf ("        t = < %0.3f, %0.3f, %0.3f >\n", translation (0), translation (1), translation (2));
}

The program then shows a PCLVisualizer window displaying the scene. If the -k or -c command-line options were used, it also shows a "stand-alone" rendering of the model cloud, translated to one side so that it does not overlap the scene:
  pcl::PointCloud<PointType>::Ptr off_scene_model (new pcl::PointCloud<PointType> ());
pcl::PointCloud<PointType>::Ptr off_scene_model_keypoints (new pcl::PointCloud<PointType> ());
if (show_correspondences_ || show_keypoints_)
{
//  We are translating the model so that it doesn't end in the middle of the scene representation
pcl::transformPointCloud (*model, *off_scene_model, Eigen::Vector3f (-1,0,0), Eigen::Quaternionf (1, 0, 0, 0));
pcl::transformPointCloud (*model_keypoints, *off_scene_model_keypoints, Eigen::Vector3f (-1,0,0), Eigen::Quaternionf (1, 0, 0, 0));
pcl::visualization::PointCloudColorHandlerCustom<PointType> off_scene_model_color_handler (off_scene_model, 255, 255, 128);
viewer.addPointCloud (off_scene_model, off_scene_model_color_handler, "off_scene_model");
}
if (show_keypoints_)
{
pcl::visualization::PointCloudColorHandlerCustom<PointType> scene_keypoints_color_handler (scene_keypoints, 0, 0, 255);
viewer.addPointCloud (scene_keypoints, scene_keypoints_color_handler, "scene_keypoints");
viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 5, "scene_keypoints");
pcl::visualization::PointCloudColorHandlerCustom<PointType> off_scene_model_keypoints_color_handler (off_scene_model_keypoints, 0, 0, 255);
viewer.addPointCloud (off_scene_model_keypoints, off_scene_model_keypoints_color_handler, "off_scene_model_keypoints");
viewer.setPointCloudRenderingProperties (pcl::visualization::PCL_VISUALIZER_POINT_SIZE, 5, "off_scene_model_keypoints");
}
for (size_t i = 0; i < rototranslations.size (); ++i)
{
pcl::PointCloud<PointType>::Ptr rotated_model (new pcl::PointCloud<PointType> ());
pcl::transformPointCloud (*model, *rotated_model, rototranslations[i]);
std::stringstream ss_cloud;
ss_cloud << "instance" << i;
pcl::visualization::PointCloudColorHandlerCustom<PointType> rotated_model_color_handler (rotated_model, 255, 0, 0);
viewer.addPointCloud (rotated_model, rotated_model_color_handler, ss_cloud.str ());
if (show_correspondences_)
{
for (size_t j = 0; j < clustered_corrs[i].size (); ++j)
{
std::stringstream ss_line;
ss_line << "correspondence_line" << i << "_" << j;
PointType& model_point = off_scene_model_keypoints->at (clustered_corrs[i][j].index_query);
PointType& scene_point = scene_keypoints->at (clustered_corrs[i][j].index_match);
//  We are drawing a line for each pair of clustered correspondences found between the model and the scene
viewer.addLine<PointType, PointType> (model_point, scene_point, 0, 255, 0, ss_line.str ());
}
}
}
while (!viewer.wasStopped ())
{
viewer.spinOnce ();
}
return (0);
}

Running the Program

./correspondence_grouping milk.pcd milk_cartoon_all_small_clorox.pcd

Optionally, you can specify the radii in units of the cloud resolution:

./correspondence_grouping milk.pcd milk_cartoon_all_small_clorox.pcd -r --model_ss 7.5 --scene_ss 20 --rf_rad 10 --descr_rad 15 --cg_size 10

Tip: if you are using different point clouds and you do not know how to set the various parameters, use the -r flag and try setting the LRF and descriptor radii to 5, 10, 15 or 20 times the cloud resolution. After that, you will probably need to tweak the values by hand to obtain the best results.

After a few seconds, output similar to the following is produced:

Model total points: 13704; Selected Keypoints: 732
Scene total points: 307200; Selected Keypoints: 3747
Correspondences found: 1768
Model instances found: 1

    Instance 1:
        Correspondences belonging to this instance: 24

            |  0.969 -0.120  0.217 |
        R = |  0.117  0.993  0.026 |
            | -0.218 -0.000  0.976 |

        t = < -0.159, 0.212, -0.042 >

A final note: if your pcl/filters directory does not contain uniform_sampling.h, you can download uniform_sampling.h (and the impl/uniform_sampling.hpp) from https://github.com/PointCloudLibrary/pcl/tree/master/filters/include/pcl/filters and place the files in the corresponding locations under /usr/include/pcl/pcl1.7.