Saving TUM-format color and depth images from a Kinect v2 under ROS

Preparation: 1. install iai_kinect2 on Ubuntu 16.04; 2. run roslaunch kinect2_bridge kinect2_bridge.launch; 3. run rosrun save_rgbd_from_kinect2 save_rgbd_from_kinect2 to start saving images.

The source of this Kinect v2 recorder is listed below; the complete project can be downloaded from my GitHub: https://github.com/Serena2018/save_color_depth_from_kinect2_with_ros/tree/master/save_rgbd_from_kinect2

The problem: I captured rgb and depth images with the first version of this tool, then used the associate.py script provided with the TUM dataset (reproduced at the bottom of this article) to pair each color image with a depth image. (It was only during this work that I realized that the depth and color images a depth camera produces do not arrive in exact one-to-one real-time correspondence; if you doubt this, go look at the TUM dataset.)
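For reference, TUM sequences ship rgb.txt and depth.txt index files in which each line is "timestamp filename" and the timestamp always carries six fractional digits, roughly like this (illustrative lines, not copied verbatim from any particular sequence):

# timestamp filename
1305031102.175304 rgb/1305031102.175304.png
1305031102.211214 rgb/1305031102.211214.png

The depth stamps fall between the color stamps rather than coinciding with them, which is exactly why associate.py exists.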

After all of this I assumed my dataset was seamless, until today, when I ran it through ORB-SLAM2's RGB-D interface and discovered a serious problem: tracking was not continuous and kept jumping backwards. For example, a person in the scene would walk by, and a moment later the same person would snap back to where they had been.

Faced with this, I started to isolate the problem, beginning with the raw data: playing back the recorded images showed smooth, continuous motion with no back-and-forth jumps, in both the color and the depth streams.

Next I checked whether associate.py itself was at fault. I ran the script on a TUM sequence, generated the corresponding association.txt, and tested that data in ORB-SLAM2's RGB-D interface: no jumping occurred, so the script was not the problem either.

The remaining difference between my dataset and TUM's was the timestamps: TUM guarantees exactly 6 digits after the decimal point, while mine did not. tv_usec was streamed as a plain integer, so its leading zeros were dropped and the fractional part could come out shorter than 6 digits. Since every other variable I could think of was the same, I reasoned that if I too could guarantee 6 fractional digits, the problem might go away. So I changed the original code

os_rgb << time_val.tv_sec << "." << time_val.tv_usec;
os_dep << time_val.tv_sec << "." << time_val.tv_usec;

to

os_rgb << time_val.tv_sec << "." << setiosflags(ios::fixed) << setprecision(6) << std::setfill('0') << setw(6) << time_val.tv_usec;
os_dep << time_val.tv_sec << "." << setiosflags(ios::fixed) << setprecision(6) << std::setfill('0') << setw(6) << time_val.tv_usec;

With this change, every timestamp has exactly 6 fractional digits. (Strictly speaking, it is the setfill('0') << setw(6) pair that does the work here; setiosflags(ios::fixed) and setprecision(6) only affect floating-point output, and tv_usec is an integer.)

I regenerated association.txt and ran ORB-SLAM2's RGB-D interface again: the jumping was gone. It looked like magic, but the explanation is simple: without zero-padding, tv_usec = 1234 (i.e., 0.001234 s) is written as ".1234" and later parsed as 0.1234 s. Such inflated fractions scramble the temporal order of the frames within each second, so associate.py matches frames out of order and ORB-SLAM2 is fed timestamps that run backwards — hence the scene appearing to "rewind".
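A minimal standalone sketch (not part of the recorder, just an illustration with a made-up timestamp) that shows the two formattings side by side:

#include <iomanip>
#include <iostream>
#include <sstream>

int main() {
    long sec = 1500000000, usec = 1234; // hypothetical stamp: 1234 us past the second
    std::ostringstream bad, good;
    bad << sec << "." << usec; // "1500000000.1234" -> parsed as 0.1234 s, wrong by 100x
    good << sec << "." << std::setfill('0') << std::setw(6) << usec; // "1500000000.001234"
    std::cout << bad.str() << " vs " << good.str() << std::endl;
    return 0;
}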

/*
 * Function: capture the color and depth images published by iai_kinect2
 *           and store them as files.
 * Separator: comma ','
 * Timestamp unit: seconds (s), accurate to 6 decimal places (us)
 * maker: crp
 * 2017-5-13
 */
#include <iostream>
#include <sstream>
#include <fstream>
#include <iomanip>    // setfill / setw / setprecision (missing in the original)
#include <stdio.h>
#include <stdlib.h>
#include <string>
#include <vector>
#include <sys/time.h> // gettimeofday (missing in the original)

#include <ros/ros.h>
#include <ros/spinner.h>
#include <sensor_msgs/CameraInfo.h>
#include <sensor_msgs/Image.h>
#include <std_msgs/String.h>
#include <cv_bridge/cv_bridge.h>         // converts ROS sensor_msgs/Image to cv::Mat
#include <sensor_msgs/image_encodings.h> // encoding helpers for the ROS image type
#include <image_transport/image_transport.h>
#include <image_transport/subscriber_filter.h>

#include <opencv2/core/core.hpp>
#include <opencv2/highgui/highgui.hpp>
#include <opencv2/imgproc/imgproc.hpp>
#include <opencv2/opencv.hpp>

using namespace std;
using namespace cv;

Mat rgb, depth;
char successed_flag1 = 0, successed_flag2 = 0;

string topic1_name = "/kinect2/qhd/image_color"; // topic names
string topic2_name = "/kinect2/qhd/image_depth_rect";

string filename_rgbdata = "/home/yunlei/recordData/RGBD/rgbdata.txt";
string filename_depthdata = "/home/yunlei/recordData/RGBD/depthdata.txt";
string save_imagedata = "/home/yunlei/recordData/RGBD/";

void dispDepth(const cv::Mat &in, cv::Mat &out, const float maxValue);
void callback_function_color(const sensor_msgs::Image::ConstPtr image_data);
void callback_function_depth(const sensor_msgs::Image::ConstPtr image_data);

int main(int argc, char **argv) {
  string out_result;
  // namedWindow("image color", CV_WINDOW_AUTOSIZE);
  // namedWindow("image depth", CV_WINDOW_AUTOSIZE);
  ros::init(argc, argv, "kinect2_listen");
  if (!ros::ok())
    return 0;
  ros::NodeHandle n;
  ros::Subscriber sub1 = n.subscribe(topic1_name, 30, callback_function_color);
  ros::Subscriber sub2 = n.subscribe(topic2_name, 30, callback_function_depth);
  ros::AsyncSpinner spinner(3); // use 3 threads
  spinner.start();

  string rgb_str, dep_str;
  struct timeval time_val;
  struct timezone tz;
  double time_stamp;

  ofstream fout_rgb(filename_rgbdata.c_str());
  if (!fout_rgb) {
    cerr << filename_rgbdata << " file not exist" << endl;
  }
  ofstream fout_depth(filename_depthdata.c_str());
  if (!fout_depth) {
    cerr << filename_depthdata << " file not exist" << endl;
  }

  while (ros::ok()) {
    if (successed_flag1) {
      gettimeofday(&time_val, &tz); // us
      // time_stamp = time_val.tv_sec + time_val.tv_usec / 1000000.0;
      ostringstream os_rgb;
      os_rgb << time_val.tv_sec << "." << setiosflags(ios::fixed)
             << setprecision(6) << std::setfill('0') << setw(6)
             << time_val.tv_usec;
      rgb_str = save_imagedata + "rgb/" + os_rgb.str() + ".png";
      imwrite(rgb_str, rgb);
      fout_rgb << os_rgb.str() << ",rgb/" << os_rgb.str() << ".png\n";
      successed_flag1 = 0;
      // imshow("image color", rgb);
      cout << "rgb -- time:  " << os_rgb.str() << endl;
      // waitKey(1);
    }
    if (successed_flag2) {
      gettimeofday(&time_val, &tz); // us
      ostringstream os_dep;
      os_dep << time_val.tv_sec << "." << setiosflags(ios::fixed)
             << setprecision(6) << std::setfill('0') << setw(6)
             << time_val.tv_usec;
      dep_str = save_imagedata + "depth/" + os_dep.str() + ".png"; // output image path
      imwrite(dep_str, depth);
      fout_depth << os_dep.str() << ",depth/" << os_dep.str() << ".png\n";
      successed_flag2 = 0;
      // imshow("image depth", depth);
      cout << "depth -- time:" << os_dep.str() << endl;
    }
  }
  ros::waitForShutdown();
  ros::shutdown();
  return 0;
}

void callback_function_color(const sensor_msgs::Image::ConstPtr image_data) {
  cv_bridge::CvImageConstPtr pCvImage; // a CvImage pointer
  pCvImage = cv_bridge::toCvShare(image_data, image_data->encoding); // extract the image from the ROS message into a cv::Mat
  pCvImage->image.copyTo(rgb);
  successed_flag1 = 1;
}

void callback_function_depth(const sensor_msgs::Image::ConstPtr image_data) {
  Mat temp;
  cv_bridge::CvImageConstPtr pCvImage;
  pCvImage = cv_bridge::toCvShare(image_data, image_data->encoding); // extract the image from the ROS message into a cv::Mat
  pCvImage->image.copyTo(depth);
  // dispDepth(temp, depth, 12000.0f);
  successed_flag2 = 1;
  // imshow("Mat depth", depth / 256);
  // cv::waitKey(1);
}

void dispDepth(const cv::Mat &in, cv::Mat &out, const float maxValue) {
  cv::Mat tmp = cv::Mat(in.rows, in.cols, CV_8U);
  const uint32_t maxInt = 255;
#pragma omp parallel for
  for (int r = 0; r < in.rows; ++r) {
    const uint16_t *itI = in.ptr<uint16_t>(r);
    uint8_t *itO = tmp.ptr<uint8_t>(r);
    for (int c = 0; c < in.cols; ++c, ++itI, ++itO) {
      *itO = (uint8_t)std::min((*itI * maxInt / maxValue), 255.0f);
    }
  }
  cv::applyColorMap(tmp, out, COLORMAP_JET);
}

 

The associate.py script:

#!/usr/bin/python
# Software License Agreement (BSD License)
#
# Copyright (c) 2013, Juergen Sturm, TUM
# All rights reserved.
#
# Redistribution and use in source and binary forms, with or without
# modification, are permitted provided that the following conditions
# are met:
#
#  * Redistributions of source code must retain the above copyright
#    notice, this list of conditions and the following disclaimer.
#  * Redistributions in binary form must reproduce the above
#    copyright notice, this list of conditions and the following
#    disclaimer in the documentation and/or other materials provided
#    with the distribution.
#  * Neither the name of TUM nor the names of its
#    contributors may be used to endorse or promote products derived
#    from this software without specific prior written permission.
#
# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
# "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
# LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS
# FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE
# COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT,
# INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING,
# BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
# LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER
# CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT
# LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN
# ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
# POSSIBILITY OF SUCH DAMAGE.
#
# Requirements: 
# sudo apt-get install python-argparse

"""
The Kinect provides the color and depth images in an un-synchronized way. This means that the set of time stamps from the color images do not intersect with those of the depth images. Therefore, we need some way of associating color images to depth images.

For this purpose, you can use the ''associate.py'' script. It reads the time stamps from the rgb.txt file and the depth.txt file, and joins them by finding the best matches.
"""

import argparse
import sys
import os
import numpy


def read_file_list(filename):
    """
    Reads a trajectory from a text file.

    File format:
    The file format is "stamp d1 d2 d3 ...", where stamp denotes the time stamp (to be matched)
    and "d1 d2 d3.." is arbitrary data (e.g., a 3D position and 3D orientation) associated to this timestamp.

    Input:
    filename -- File name

    Output:
    dict -- dictionary of (stamp,data) tuples
    """
    file = open(filename)
    data = file.read()
    lines = data.replace(",", " ").replace("\t", " ").split("\n")
    list = [[v.strip() for v in line.split(" ") if v.strip() != ""]
            for line in lines if len(line) > 0 and line[0] != "#"]
    list = [(float(l[0]), l[1:]) for l in list if len(l) > 1]
    return dict(list)


def associate(first_list, second_list, offset, max_difference):
    """
    Associate two dictionaries of (stamp,data). As the time stamps never match exactly, we aim
    to find the closest match for every input tuple.

    Input:
    first_list -- first dictionary of (stamp,data) tuples
    second_list -- second dictionary of (stamp,data) tuples
    offset -- time offset between both dictionaries (e.g., to model the delay between the sensors)
    max_difference -- search radius for candidate generation

    Output:
    matches -- list of matched tuples ((stamp1,data1),(stamp2,data2))
    """
    first_keys = first_list.keys()
    second_keys = second_list.keys()
    potential_matches = [(abs(a - (b + offset)), a, b)
                         for a in first_keys
                         for b in second_keys
                         if abs(a - (b + offset)) < max_difference]
    potential_matches.sort()
    matches = []
    for diff, a, b in potential_matches:
        if a in first_keys and b in second_keys:
            first_keys.remove(a)
            second_keys.remove(b)
            matches.append((a, b))
    matches.sort()
    return matches


if __name__ == '__main__':
    # parse command line
    parser = argparse.ArgumentParser(description='''
    This script takes two data files with timestamps and associates them
    ''')
    parser.add_argument('first_file', help='first text file (format: timestamp data)')
    parser.add_argument('second_file', help='second text file (format: timestamp data)')
    parser.add_argument('--first_only', help='only output associated lines from first file', action='store_true')
    parser.add_argument('--offset', help='time offset added to the timestamps of the second file (default: 0.0)', default=0.0)
    parser.add_argument('--max_difference', help='maximally allowed time difference for matching entries (default: 0.02)', default=0.015)
    args = parser.parse_args()

    first_list = read_file_list(args.first_file)
    second_list = read_file_list(args.second_file)

    matches = associate(first_list, second_list, float(args.offset), float(args.max_difference))

    if args.first_only:
        for a, b in matches:
            print("%f %s" % (a, " ".join(first_list[a])))
    else:
        for a, b in matches:
            print("%f %s %f %s" % (a, " ".join(first_list[a]), b - float(args.offset), " ".join(second_list[b])))
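For reference, with the two index files produced by the recorder above, generating the associations looks like this (file names as in the code; paths are an example):

python associate.py rgbdata.txt depthdata.txt > association.txt

Each output line is then "rgb_stamp rgb/xxx.png depth_stamp depth/xxx.png", which is the format that ORB-SLAM2's RGB-D example expects as its associations file.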

 
