Radar Odometry for Autonomous Ground Vehicles: A Survey of Methods and Datasets

Sensor performance varies across complex environments. Each sensor has its own strengths and weaknesses, so fusing multimodal data offers a way to overcome the limitations of individual sensors [90]–[92]. Moreover, many of the sensors discussed here are already widely deployed in autonomous driving, so it makes sense to develop algorithms that exploit all of the hardware and sensing modalities on board. Radar sensors suffer from problems such as ghost objects, low resolution, multipath reflections, motion distortion, and saturation. Cameras have their own issues, such as sensitivity to illumination and weather conditions. Lidar is also affected, to some extent, by adverse weather and motion distortion. A typical IMU is noisy and prone to drift, and, finally, wheel encoders are susceptible to wheel slip. Beyond the weaknesses of the individual sensors, single-modality odometry algorithms often have inherent problems of their own. For example, monocular visual odometry cannot recover the metric scale of the motion, while stereo visual odometry is very sensitive to the quality of calibration, rectification, disparity, and triangulation, and it degenerates to the monocular case when depth is much larger than the baseline of the stereo pair. Radar odometry based on scanning radar typically runs at a low frequency. Lidar-based odometry is usually computationally demanding, and its scan-matching methods require good initialization. Despite the expected benefits of sensor fusion for odometry, we highlight that the share of published radar odometry work that actually applies sensor fusion techniques is somewhat lower than expected (Fig. 2d). We speculate that the reasons include the lack of standardization in sensor fusion hardware and methods, and that the maturity and robustness of single-modality approaches make it increasingly difficult to justify multimodal ones. Next, we give a brief overview of the sensor fusion approaches found in the radar odometry literature. We restrict the discussion to methods in which data from other sensors (e.g., camera, IMU) must be available at runtime; methods that use other sensors' data only for training, testing, calibration, or as a source of ground truth are not considered fusion methods.

The most common sensor configuration for radar odometry is the combination of radar and IMU; the high sampling rate of a typical IMU complements that of the radar, while the radar periodically corrects the IMU's severe drift. This combination has been very successful in UAV applications, where a Kalman filter or one of its variants is used to fuse the data from the two sensors [93]–[98]. Similarly, the works of Almalioglu et al. [63] and Araujo et al. [69] are based on fusing radar and IMU data with variants of the Kalman filter. One advantage of Kalman-filter fusion is that it is relatively easy to extend to additional sensors; for example, Ort et al. [85] fuse data from a ground-penetrating radar, wheel encoders, and an IMU. Holder et al. [66] fuse preprocessed radar data, a gyroscope, wheel encoders, and the steering-wheel angle. Finally, Liang et al. [68] fuse data from radar, lidar, camera, GNSS, and IMU in a scalable framework.
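To make this pattern concrete, the sketch below shows a minimal radar-inertial Kalman filter loop in Python: high-rate IMU accelerations drive the prediction step, and low-rate radar ego-velocity measurements (e.g., derived from Doppler returns) correct the accumulated drift. It is an illustrative one-axis example, not the implementation of any cited work; the `RadarImuKF` class, its noise values, and the sensor rates are all assumptions.

```python
# Minimal sketch of radar-IMU Kalman fusion (not any cited paper's code):
# IMU accelerations drive prediction, radar ego-velocity corrects drift.
# State: [position, velocity] along one axis; all noise values are assumed.
import numpy as np

class RadarImuKF:
    def __init__(self, accel_noise=0.5, radar_vel_noise=0.1):
        self.x = np.zeros(2)            # state: [p, v]
        self.P = np.eye(2)              # state covariance
        self.q = accel_noise ** 2       # IMU acceleration noise variance
        self.r = radar_vel_noise ** 2   # radar velocity measurement variance

    def predict(self, accel, dt):
        """High-rate step driven by an IMU acceleration sample."""
        F = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([0.5 * dt**2, dt])
        self.x = F @ self.x + B * accel
        self.P = F @ self.P @ F.T + np.outer(B, B) * self.q

    def update_radar_velocity(self, v_meas):
        """Low-rate correction from a radar ego-velocity estimate."""
        H = np.array([[0.0, 1.0]])      # radar observes velocity only
        S = H @ self.P @ H.T + self.r
        K = self.P @ H.T / S
        self.x = self.x + (K * (v_meas - H @ self.x)).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P

# Usage: 100 Hz IMU prediction, 4 Hz radar correction (rates assumed).
kf = RadarImuKF()
for k in range(100):
    kf.predict(accel=0.2, dt=0.01)
    if k % 25 == 0:
        kf.update_radar_velocity(v_meas=0.2 * k * 0.01)
```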

Lu et al. [84], on the other hand, employ deep learning and propose a mixed self- and cross-attention mechanism to fuse radar and IMU data; they claim that their approach outperforms naive feature concatenation inside the model and can easily be extended to include more sensors. Fritsche and Wagner [14] merge the detections of a radar and a lidar using hand-crafted heuristics.
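The sketch below illustrates the cross-attention fusion idea in PyTorch: each modality's feature tokens attend to the other's before the fused representation is regressed to a pose increment. It is a hypothetical stand-in rather than the architecture of [84]; the `CrossAttentionFusion` module, its dimensions, and the mean-pooling scheme are assumptions.

```python
# Illustrative cross-attention fusion of radar and IMU feature tokens,
# sketching the idea behind attention-based fusion [84] rather than the
# published architecture; all dimensions and layer choices are assumed.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim=128, heads=4):
        super().__init__()
        # Radar features attend to IMU features and vice versa.
        self.radar_to_imu = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.imu_to_radar = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.head = nn.Linear(2 * dim, 6)  # regress a 6-DoF pose increment

    def forward(self, radar_feat, imu_feat):
        # radar_feat: (B, N_r, dim) tokens from a radar feature extractor
        # imu_feat:   (B, N_i, dim) tokens from an IMU sequence encoder
        r, _ = self.radar_to_imu(radar_feat, imu_feat, imu_feat)
        i, _ = self.imu_to_radar(imu_feat, radar_feat, radar_feat)
        # Pool each attended stream and concatenate (contrast with naive
        # concatenation, which would skip the attention step entirely).
        fused = torch.cat([r.mean(dim=1), i.mean(dim=1)], dim=-1)
        return self.head(fused)

# Usage with dummy features:
model = CrossAttentionFusion()
pose = model(torch.randn(2, 32, 128), torch.randn(2, 10, 128))
print(pose.shape)  # torch.Size([2, 6])
```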

8. Radar Odometry and Machine Learning

Machine learning techniques have been applied extensively to a wide range of robotic perception tasks, with many success stories in autonomous vehicles and robotics [99], [100]. Learning techniques leverage the computational power of modern hardware to process large amounts of data, enabling model-free approaches. Nevertheless, we find that fewer publications than expected use any learning technique (Fig. 2c). This may be related to the difficulty of generalization, a problem common across machine learning but particularly hard to overcome in odometry and SLAM. Below is a brief overview of the learning approaches found in the radar odometry literature.

The works of Barnes et al. [82] and Weston et al. [83] are based on training a CNN to predict masks that can be used to filter noise out of radar scans. They use visual odometry as the source of ground-truth poses during training. Aldera et al. [50] also use a U-Net-style CNN to generate masks that filter out radar noise and artifacts, likewise with visual odometry as ground truth. Barnes and Posner [42] use a U-Net-style CNN to predict keypoints, scores, and descriptors. Burnett et al. [45] also train a U-Net CNN to predict keypoints, scores, and descriptors, but they train the model with unsupervised learning. Aldera et al. [51] use an SVM to classify the eigenvectors of the compatibility matrix of associated landmarks in order to distinguish good estimates from bad ones. Araujo et al. [69] use a CNN only as a preprocessing step to denoise the radar data. Zhu et al. [54] develop a neural network model that processes radar point clouds and produces per-point weights and offsets, which later stages of the algorithm can use for motion estimation. Almalioglu et al. [63] use an RNN as a motion model, to benefit from information in previous poses and better capture the dynamics of the motion. In Lu et al. [84], a variety of learning techniques are combined: a CNN processes the radar data, an RNN processes the IMU data, a mixed attention mechanism fuses the two streams, and an LSTM then draws on previous poses before the output is passed to a fully connected network for pose regression. Finally, Alhashimi et al. [58] propose a radar filter based on learnable parameters.
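As a concrete illustration of the learned-masking idea, the sketch below shows a small encoder-decoder CNN that takes a polar radar scan as a 2D intensity image and predicts a per-cell mask suppressing noise and artifacts before the scan is passed to downstream odometry. It is a hypothetical stand-in for the published U-Net-style architectures; the `RadarMaskNet` layers and scan dimensions are assumptions.

```python
# Illustrative stand-in for the learned-masking approach of [50], [82], [83]:
# an encoder-decoder CNN predicts a per-cell mask over a polar radar scan.
# Not any published architecture; all layer sizes are assumptions.
import torch
import torch.nn as nn

class RadarMaskNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(16, 1, 2, stride=2),
            nn.Sigmoid(),  # mask values in [0, 1]
        )

    def forward(self, scan):
        # scan: (B, 1, azimuth_bins, range_bins) radar intensity image
        mask = self.decoder(self.encoder(scan))
        return scan * mask  # filtered scan fed to downstream odometry

# Usage: a 400-azimuth x 512-range-bin scan (dimensions assumed).
net = RadarMaskNet()
filtered = net(torch.rand(1, 1, 400, 512))
print(filtered.shape)  # torch.Size([1, 1, 400, 512])
```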

It appears, then, that scanning radar is more common than automotive radar in learning-based work. This may be because scanning radar produces richer data, which is appealing for deep learning models. We also note that CNNs are the most popular learning technique in radar odometry; this is because scanning-radar scans resemble visual images, for which CNNs are the standard tool in tasks such as semantic segmentation and object detection.

9. Discussion, Challenges, and Future Recommendations

Realistically, radar sensors are unlikely to fully replace cameras and lidar in perception. Indeed, currently available radars cannot compete with cameras and lidar in object recognition, sampling rate, or signal-to-noise ratio. We expect radar to continue to play an important supporting role in the sensor suites of autonomous platforms. Automotive radar is already widely deployed in the autonomous vehicle market; its cost remains low relative to lidar, and it is more reliable in adverse weather. Below are some of the challenges hindering progress in radar odometry, along with recommendations for the future.

  1. Radar is almost always proposed as the solution to adverse weather and to the degradation of lidar and cameras under such conditions; yet the two most challenging conditions, fog and dust, are the least represented in currently available datasets. Much progress has been made on adding synthetic fog (e.g., [101]), but collecting real driving data in fog and dust remains preferable. We recognize, however, how difficult it is to anticipate and record data under these conditions, especially fog.
  2. Navtech's popular scanning radars are considered to have a low sampling rate of only 4 Hz. Given that most publicly available datasets were recorded at relatively low driving speeds (as shown in Fig. 7), our radar algorithms have not been tested at medium and high speeds. This is a problem because it may mean that current evaluations of radar odometry algorithms are optimistic at best.
  3. Radar perception research, and radar odometry in particular, lacks a common reference point akin to the KITTI dataset and its leaderboard: a shared benchmark on which researchers can test and compare their work. The Oxford Radar RobotCar dataset fills this role to some extent, but it is very limited in seasonal diversity and driving speed, and it lacks a maintained leaderboard, a gap the Boreas dataset attempts to close.
  4. The most popular radar datasets were collected with scanning radar; no public automotive-radar dataset has attracted enough attention to be regarded as "the radar odometry benchmark." Instead, the common practice in automotive-radar research is for researchers to collect and test on their own unpublished data, which makes it hard to compare and evaluate the different odometry methods developed for automotive radar.
  5. The most popular evaluation metrics, the average translational error and average rotational error (see Subsection IV-C), were tailored to the KITTI dataset; the segment lengths of (100, 200, 300, 400, 500, 600, 700, and 800) m are not necessarily suitable for other datasets. The metric could be generalized to accommodate trajectories of longer or shorter lengths, as sketched below.
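A minimal sketch of such a generalization is shown below: it computes the KITTI-style average translational and rotational errors from sequences of 4x4 homogeneous poses, but takes the segment lengths as a parameter so they can be adapted to shorter or longer trajectories. It is illustrative, not the official KITTI devkit code; the frame step size is an assumption.

```python
# KITTI-style relative pose errors with configurable segment lengths.
# gt and est are lists of 4x4 homogeneous pose matrices, one per frame.
# Illustrative implementation, not the official KITTI devkit.
import numpy as np

def trajectory_distances(poses):
    """Cumulative path length at each frame."""
    d = [0.0]
    for i in range(1, len(poses)):
        d.append(d[-1] + np.linalg.norm(poses[i][:3, 3] - poses[i - 1][:3, 3]))
    return d

def relative_errors(gt, est, lengths=(100, 200, 300, 400, 500, 600, 700, 800)):
    """Average translational (%) and rotational (deg/m) error over segments."""
    dist = trajectory_distances(gt)
    t_errs, r_errs = [], []
    for first in range(0, len(gt), 10):           # evaluation step assumed
        for length in lengths:
            # find the frame where a segment of this length ends
            last = next((j for j in range(first, len(gt))
                         if dist[j] > dist[first] + length), None)
            if last is None:
                continue
            # pose error over the segment: inv(gt_rel) @ est_rel
            gt_rel = np.linalg.inv(gt[first]) @ gt[last]
            est_rel = np.linalg.inv(est[first]) @ est[last]
            err = np.linalg.inv(gt_rel) @ est_rel
            t_errs.append(np.linalg.norm(err[:3, 3]) / length)
            angle = np.arccos(np.clip((np.trace(err[:3, :3]) - 1) / 2, -1, 1))
            r_errs.append(np.degrees(angle) / length)
    return 100 * np.mean(t_errs), np.mean(r_errs)
```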

10. Conclusion

Radar-based odometry is one of the best solutions for estimating changes in a robot's position and orientation in adverse environments. Radar is already widely used in robotics and has many advantages that make it an important component of a sensor suite; moreover, radar technology keeps improving while becoming cheaper and smaller. This article surveyed work in the field of radar odometry, focusing on algorithms intended for autonomous ground vehicles and robots. It examined current trends in radar odometry research, including methods, sensor types, and the application of sensor fusion and machine learning techniques. It also outlined the operating principles of radar sensors, the standard evaluation metrics for radar odometry, and the public datasets available for radar odometry research, and it systematically categorized the radar odometry methods found in the literature. Although radar-based state estimation is not new, recent advances in radar sensor technology and rising expectations for autonomy, safety, and all-weather capability leave ample room for development, and much work remains to be done in this field.

REFERENCES

[1] Y. Zhang, A. Carballo, H. Yang, and K. Takeda, “Perception and sensing for autonomous vehicles under adverse weather conditions: A survey,” ISPRS Journal of Photogrammetry and Remote Sensing, vol. 196, pp. 146–177, 2023.

[2] A. S. Mohammed, A. Amamou, F. K. Ayevide, S. Kelouwani, K. Agbossou, and N. Zioui, “The perception system of intelligent ground vehicles in all weather conditions: A systematic literature review,” Sensors, vol. 20, no. 22, p. 6532, 2020.

[3] M. Bijelic, T. Gruber, and W. Ritter, “A benchmark for lidar sensors in fog: Is detection breaking down?” in 2018 IEEE Intelligent Vehicles Symposium (IV). IEEE, 2018, pp. 760–767.

[4] J. Dickmann, J. Klappstein, M. Hahn, N. Appenrodt, H.-L. Bloecher, K. Werber, and A. Sailer, “Automotive radar the key technology for autonomous driving: From detection and ranging to environmental understanding,” in 2016 IEEE Radar Conference (RadarConf), 2016, pp. 1–6.

[5] A. Venon, Y. Dupuis, P. Vasseur, and P. Merriaux, “Millimeter wave fmcw radars for perception, recognition and localization in automotive applications: A survey,” IEEE Transactions on Intelligent Vehicles, vol. 7, no. 3, pp. 533–555, 2022.

[6] S. A. S. Mohamed, M.-H. Haghbayan, T. Westerlund, J. Heikkonen, H. Tenhunen, and J. Plosila, “A survey on odometry for autonomous navigation systems,” IEEE Access, vol. 7, pp. 97 466–97 486, 2019.

[7] M. Yang, X. Sun, F. Jia, A. Rushworth, X. Dong, S. Zhang, Z. Fang, G. Yang, and B. Liu, “Sensors and sensor fusion methodologies for indoor odometry: A review,” Polymers, vol. 14, no. 10, 2022.

[8] F. Corradi and F. Fioranelli, “Radar perception for autonomous unmanned aerial vehicles: A survey,” in System Engineering for Constrained Embedded Systems, ser. DroneSE and RAPIDO. New York, NY, USA: Association for Computing Machinery, 2022, p. 14–20.

[9] T. Zhou, M. Yang, K. Jiang, H. Wong, and D. Yang, “Mmw radar-based technologies in autonomous driving: A review,” Sensors, vol. 20, no. 24, 2020.

[10] D. Louback da Silva Lubanco, T. Schlechter, M. Pichler-Scheder, and C. Kastl, "Survey on radar odometry," in Computer Aided Systems Theory – EUROCAST 2022, R. Moreno-Díaz, F. Pichler, and A. Quesada-Arencibia, Eds. Cham: Springer Nature Switzerland, 2022, pp. 619–625.

[11] C. Iovescu and S. Rao, “The Fundamentals of Millimeter Wave Sensors,” Texas Instruments, 2020.

[12] M. Adams and M. D. Adams, Robotic navigation and mapping with radar. Artech House, 2012.

[13] J. Gamba, Radar signal processing for autonomous driving. Springer, 2019.

[14] P. Fritsche and B. Wagner, “Modeling structure and aerosol concentration with fused radar and lidar data in environments with changing visibility,” in 2017 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2017, pp. 2685–2690.

[15] J. W. Marck, A. Mohamoud, E. vd Houwen, and R. van Heijster, “Indoor radar slam a radar application for vision and gps denied environments,” in 2013 European Radar Conference, 2013, pp. 471–474.

[16] "Safety is everything. — navtechradar.com," https://navtechradar.com/, [Accessed 28-May-2023].

[17] TI, "Analog, Embedded Processing, Semiconductor Company, Texas Instruments," 2013.

[18] O. Ait-Aider, N. Andreff, J. M. Lavest, and P. Martinet, "Simultaneous object pose and velocity computation using a single view from a rolling shutter camera," in Computer Vision–ECCV 2006: 9th European Conference on Computer Vision, Graz, Austria, May 7-13, 2006. Proceedings, Part II 9. Springer, 2006, pp. 56–68.

[19] I. Vizzo, T. Guadagnino, B. Mersch, L. Wiesmann, J. Behley, and C. Stachniss, "Kiss-icp: In defense of point-to-point icp – simple, accurate, and robust registration if done the right way," IEEE Robotics and Automation Letters, vol. 8, no. 2, pp. 1029–1036, 2023.

[20] K. Burnett, A. P. Schoellig, and T. D. Barfoot, "Do we need to compensate for motion distortion and doppler effects in spinning radar navigation?" IEEE Robotics and Automation Letters, vol. 6, no. 2, pp. 771–778, 2021.

[21] M. Sheeny, E. De Pellegrin, S. Mukherjee, A. Ahrabian, S. Wang, and A. Wallace, "Radiate: A radar dataset for automotive perception," arXiv preprint arXiv:2010.09076, 2020.

[22] R. Kümmerle, B. Steder, C. Dornhege, M. Ruhnke, G. Grisetti, C. Stachniss, and A. Kleiner, "On measuring the accuracy of slam algorithms," Autonomous Robots, vol. 27, pp. 387–407, 2009.

[23] A. Geiger, P. Lenz, and R. Urtasun, "Are we ready for autonomous driving? the kitti vision benchmark suite," in 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012, pp. 3354–3361.

[24] M. Grupp, "evo: Python package for the evaluation of odometry and slam." https://github.com/MichaelGrupp/evo, 2017.

[25] Z. Zhang and D. Scaramuzza, "A tutorial on quantitative trajectory evaluation for visual(-inertial) odometry," in 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018, pp. 7244–7251.

[26] H. Zhan, C. S. Weerasekera, J.-W. Bian, and I. Reid, "Visual odometry revisited: What should be learnt?" in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 4203–4210.

[27] A. Ouaknine, A. Newson, J. Rebut, F. Tupin, and P. Pérez, "Carrada dataset: Camera and automotive radar with range-angle-doppler annotations," in 2020 25th International Conference on Pattern Recognition (ICPR), 2021, pp. 5068–5075.

[28] F. E. Nowruzi, D. Kolhatkar, P. Kapoor, F. Al Hassanat, E. J. Heravi, R. Laganiere, J. Rebut, and W. Malik, "Deep open space segmentation using automotive radar," in 2020 IEEE MTT-S International Conference on Microwaves for Intelligent Mobility (ICMIM), 2020, pp. 1–4.

[29] D. Barnes, M. Gadd, P. Murcutt, P. Newman, and I. Posner, "The oxford radar robotcar dataset: A radar extension to the oxford robotcar dataset," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Paris, 2020.

[30] W. Maddern, G. Pascoe, C. Linegar, and P. Newman, "1 Year, 1000km: The Oxford RobotCar Dataset," The International Journal of Robotics Research (IJRR), vol. 36, no. 1, pp. 3–15, 2017.

[31] K. Burnett, D. J. Yoon, Y. Wu, A. Z. Li, H. Zhang, S. Lu, J. Qian, W.-K. Tseng, A. Lambert, K. Y. Leung, A. P. Schoellig, and T. D. Barfoot, "Boreas: A multi-season autonomous driving dataset," The International Journal of Robotics Research, vol. 42, no. 1-2, pp. 33–42, 2023.

[32] G. Kim, Y. S. Park, Y. Cho, J. Jeong, and A. Kim, "Mulran: Multimodal range dataset for urban place recognition," in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 6246–6253.

[33] J. Jeong, Y. Cho, Y.-S. Shin, H. Roh, and A. Kim, "Complex urban dataset with multi-level sensors from highly diverse urban environments," The International Journal of Robotics Research, vol. 38, no. 6, pp. 642–657, 2019.

[34] H. Caesar, V. Bankiti, A. H. Lang, S. Vora, V. E. Liong, Q. Xu, A. Krishnan, Y. Pan, G. Baldan, and O. Beijbom, "nuscenes: A multimodal dataset for autonomous driving," arXiv preprint arXiv:1903.11027, 2019.

[35] J.-L. Déziel, P. Merriaux, F. Tremblay, D. Lessard, D. Plourde, J. Stanguennec, P. Goulet, and P. Olivier, "Pixset: An opportunity for 3d computer vision to go beyond point clouds with a full-waveform lidar dataset," 2021.

[36] T. Peynot, S. Scheding, and S. Terho, "The Marulan Data Sets: Multi-Sensor Perception in Natural Environment with Challenging Conditions," International Journal of Robotics Research, vol. 29, no. 13, pp. 1602–1607, November 2010.

[37] O. Schumann, M. Hahn, N. Scheiner, F. Weishaupt, J. Tilly, J. Dickmann, and C. Wöhler, "RadarScenes: A Real-World Radar Point Cloud Data Set for Automotive Applications," Mar. 2021.

[38] J. Callmer, D. Törnqvist, F. Gustafsson, H. Svensson, and P. Carlbom, "Radar slam using visual features," EURASIP Journal on Advances in Signal Processing, vol. 2011, no. 1, pp. 1–11, 2011.

[39] Z. Hong, Y. Petillot, and S. Wang, "Radarslam: Radar based large-scale slam in all weathers," in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 5164–5170.

[40] J. H. Challis, "A procedure for determining rigid body transformation parameters," Journal of Biomechanics, vol. 28, no. 6, pp. 733–737, 1995.

[41] Z. Hong, Y. Petillot, A. Wallace, and S. Wang, "Radarslam: A robust simultaneous localization and mapping system for all weather conditions," The International Journal of Robotics Research, vol. 41, no. 5, pp. 519–542, 2022. [Online]. Available: https://doi.org/10.1177/02783649221080483

[42] D. Barnes and I. Posner, "Under the radar: Learning to predict robust keypoints for odometry estimation and metric localisation in radar," in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 9484–9490.

[43] O. Ronneberger, P. Fischer, and T. Brox, "U-net: Convolutional networks for biomedical image segmentation," in Medical Image Computing and Computer-Assisted Intervention–MICCAI 2015: 18th International Conference, Munich, Germany, October 5-9, 2015, Proceedings, Part III 18. Springer, 2015, pp. 234–241.

[44] E. Rublee, V. Rabaud, K. Konolige, and G. Bradski, "Orb: An efficient alternative to sift or surf," in 2011 International Conference on Computer Vision, 2011, pp. 2564–2571.

[45] K. Burnett, D. J. Yoon, A. P. Schoellig, and T. D. Barfoot, "Radar odometry combining probabilistic estimation and unsupervised feature learning," arXiv preprint arXiv:2105.14152, 2021.

[46] T. D. Barfoot, J. R. Forbes, and D. J. Yoon, "Exactly sparse gaussian variational inference with application to derivative-free batch nonlinear state estimation," The International Journal of Robotics Research, vol. 39, no. 13, pp. 1473–1502, 2020. [Online]. Available: https://doi.org/10.1177/0278364920937608

[47] H. Lim, K. Han, G. Shin, G. Kim, S. Hong, and H. Myung, "Orora: Outlier-robust radar odometry," arXiv preprint arXiv:2303.01876, 2023.

[48] S. H. Cen and P. Newman, "Precise ego-motion estimation with millimeter-wave radar under diverse and challenging conditions," in 2018 IEEE International Conference on Robotics and Automation (ICRA), 2018, pp. 6045–6052.

[49] ——, "Radar-only ego-motion estimation in difficult settings via graph matching," in 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 298–304.

[50] R. Aldera, D. D. Martini, M. Gadd, and P. Newman, "Fast radar motion estimation with a learnt focus of attention using weak supervision," in 2019 International Conference on Robotics and Automation (ICRA), 2019, pp. 1190–1196.

[51] ——, "What could go wrong? introspective radar odometry in challenging environments," in 2019 IEEE Intelligent Transportation Systems Conference (ITSC), 2019, pp. 2835–2842.

[52] R. Aldera, M. Gadd, D. De Martini, and P. Newman, "What goes around: Leveraging a constant-curvature motion constraint in radar odometry," IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 7865–7872, 2022.

[53] D. Scaramuzza, F. Fraundorfer, and R. Siegwart, "Real-time monocular visual odometry for on-road vehicles with 1-point ransac," in 2009 IEEE International Conference on Robotics and Automation, 2009, pp. 4293–4299.

[54] S. Zhu, A. Yarovoy, and F. Fioranelli, "Deepego: Deep instantaneous ego-motion estimation using automotive radar," IEEE Transactions on Radar Systems, vol. 1, pp. 166–180, 2023.

[55] P. Besl and N. D. McKay, "A method for registration of 3-d shapes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 2, pp. 239–256, 1992.

[56] D. Adolfsson, M. Magnusson, A. Alhashimi, A. J. Lilienthal, and H. Andreasson, "Cfear radarodometry - conservative filtering for efficient and accurate radar odometry," in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021, pp. 5462–5469.

[57] ——, "Lidar-level localization with radar? the cfear approach to accurate, fast, and robust large-scale radar odometry in diverse environments," IEEE Transactions on Robotics, pp. 1–20, 2022.

[58] A. Alhashimi, D. Adolfsson, M. Magnusson, H. Andreasson, and A. J. Lilienthal, "Bfar-bounded false alarm rate detector for improved radar odometry estimation," arXiv preprint arXiv:2109.09669, 2021.

[59] P. Biber and W. Strasser, "The normal distributions transform: a new approach to laser scan matching," in Proceedings 2003 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS 2003) (Cat. No.03CH37453), vol. 3, 2003, pp. 2743–2748 vol.3.

[60] M. Rapp, M. Barjenbruch, K. Dietmayer, M. Hahn, and J. Dickmann, "A fast probabilistic ego-motion estimation framework for radar," in 2015 European Conference on Mobile Robots (ECMR), 2015, pp. 1–6.

[61] M. Rapp, M. Barjenbruch, M. Hahn, J. Dickmann, and K. Dietmayer, "Probabilistic ego-motion estimation using multiple automotive radar sensors," Robotics and Autonomous Systems, vol. 89, pp. 136–146, 2017. [Online]. Available: https://www.sciencedirect.com/science/article/pii/S0921889016300525

[62] P.-C. Kung, C.-C. Wang, and W.-C. Lin, “A normal distribution transform-based radar odometry designed for scanning and automotive radars,” in 2021 IEEE International Conference on Robotics and Automation (ICRA), 2021, pp. 14 417–14 423.

[63] Y. Almalioglu, M. Turan, C. X. Lu, N. Trigoni, and A. Markham, “Milli-rio: Ego-motion estimation with low-cost millimetre-wave radar,” IEEE Sensors Journal, vol. 21, no. 3, pp. 3314–3323, 2021.

[64] R. Zhang, Y. Zhang, D. Fu, and K. Liu, “Scan denoising and normal distribution transform for accurate radar odometry and positioning,” IEEE Robotics and Automation Letters, vol. 8, no. 3, pp. 1199–1206, 2023.

[65] K. Haggag, S. Lange, T. Pfeifer, and P. Protzel, “A credible and robust approach to ego-motion estimation using an automotive radar,” IEEE Robotics and Automation Letters, vol. 7, no. 3, pp. 6020–6027, 2022.

[66] M. Holder, S. Hellwig, and H. Winner, “Real-time pose graph slam based on radar,” in 2019 IEEE Intelligent Vehicles Symposium (IV), 2019, pp. 1145–1151.

[67] D. Kellner, M. Barjenbruch, J. Klappstein, J. Dickmann, and K. Dietmayer, “Instantaneous ego-motion estimation using doppler radar,” in 16th International IEEE Conference on Intelligent Transportation Systems (ITSC 2013), 2013, pp. 869–874.

[68] Y. Liang, S. Müller, D. Schwendner, D. Rolle, D. Ganesch, and I. Schaffer, “A scalable framework for robust vehicle state estimation with a fusion of a low-cost imu, the gnss, radar, a camera and lidar,” in 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2020, pp. 1661–1668.

[69] P. R. M. d. Araujo, M. Elhabiby, S. Givigi, and A. Noureldin, “A novel method for land vehicle positioning: Invariant kalman filters and deep-learning-based radar speed estimation,” IEEE Transactions on Intelligent Vehicles, pp. 1–12, 2023.

[70] Y. Z. Ng, B. Choi, R. Tan, and L. Heng, "Continuous-time radar-inertial odometry for automotive radars," in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021, pp. 323–330.

[71] R. Ghabcheloo and S. Siddiqui, “Complete odometry estimation of a vehicle using single automotive radar and a gyroscope,” in 2018 26th Mediterranean Conference on Control and Automation (MED), 2018, pp. 855–860.

[72] A. F. Scannapieco, "A novel outlier removal method for two-dimensional radar odometry," IET Radar, Sonar & Navigation, vol. 13, no. 10, pp. 1705–1712, 2019. [Online]. Available: https://ietresearch.onlinelibrary.wiley.com/doi/abs/10.1049/iet-rsn.2018.5661

[73] D. Kellner, M. Barjenbruch, J. Klappstein, J. Dickmann, and K. Dietmayer, “Instantaneous ego-motion estimation using multiple doppler radars,” in 2014 IEEE International Conference on Robotics and Automation (ICRA), 2014, pp. 1592–1597.

[74] M. Barjenbruch, D. Kellner, J. Klappstein, J. Dickmann, and K. Dietmayer, “Joint spatial- and doppler-based ego-motion estimation for automotive radars,” in 2015 IEEE Intelligent Vehicles Symposium (IV), 2015, pp. 839–844.

[75] Z. Zeng, X. Liang, X. Dang, and Y. Li, “Joint velocity ambiguity resolution and ego-motion estimation method for mmwave radar,” IEEE Robotics and Automation Letters, vol. 8, no. 8, pp. 4753–4760, 2023.

[76] D. Vivet, P. Checchin, and R. Chapuis, “Radar-only localization and mapping for ground vehicle at high speed and for riverside boat,” in 2012 IEEE International Conference on Robotics and Automation, 2012, pp. 2618–2624.

[77] ——, “Localization and mapping using only a rotating fmcw radar sensor,” Sensors, vol. 13, no. 4, pp. 4527–4552, 2013. [Online]. Available: https://www.mdpi.com/1424-8220/13/4/4527

[78] K. Retan, F. Loshaj, and M. Heizmann, “Radar odometry on se(3) with constant velocity motion prior,” IEEE Robotics and Automation Letters, vol. 6, no. 4, pp. 6386–6393, 2021.

[79] ——, “Radar odometry on se (3) with constant acceleration motion prior and polar measurement model,” arXiv preprint arXiv:2209.05956, 2022.

[80] P. Checchin, F. Gérossier, C. Blanc, R. Chapuis, and L. Trassoudaine, "Radar scan matching slam using the fourier-mellin transform," in Field and Service Robotics: Results of the 7th International Conference. Springer, 2010, pp. 151–161.

[81] Y. S. Park, Y.-S. Shin, and A. Kim, “Pharao: Direct radar odometry using phase correlation,” in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 2617–2623.

[82] D. Barnes, R. Weston, and I. Posner, “Masking by moving: Learning distraction-free radar odometry from pose information,” 2020.

[83] R. Weston, M. Gadd, D. De Martini, P. Newman, and I. Posner, “Fastmbym: Leveraging translational invariance of the fourier transform for efficient and accurate radar odometry,” in 2022 International Conference on Robotics and Automation (ICRA), 2022, pp. 2186–2192.

[84] C. X. Lu, M. R. U. Saputra, P. Zhao, Y. Almalioglu, P. P. B. de Gusmao, C. Chen, K. Sun, N. Trigoni, and A. Markham, “Milliego: Single-chip mmwave radar aided egomotion estimation via deep sensor fusion,” in Proceedings of the 18th Conference on Embedded Networked Sensor Systems, ser. SenSys ’20. New York, NY, USA: Association for Computing Machinery, 2020, p. 109–122. [Online]. Available: https://doi.org/10.1145/3384419.3430776

[85] T. Ort, I. Gilitschenski, and D. Rus, “Autonomous navigation in inclement weather based on a localizing ground penetrating radar,” IEEE Robotics and Automation Letters, vol. 5, no. 2, pp. 3267–3274, 2020.

[86] C. D. Monaco and S. N. Brennan, “Radarodo: Ego-motion estimation from doppler and spatial data in radar images,” IEEE Transactions on Intelligent Vehicles, vol. 5, no. 3, pp. 475–484, 2020.

[87] W. S. Churchill, "Experience based navigation: Theory, practice and implementation," Ph.D. dissertation, University of Oxford, 2012.

[88] R. Mur-Artal and J. D. Tardos, “Orb-slam2: An open-source slam system for monocular, stereo, and rgb-d cameras,” IEEE Transactions on Robotics, vol. 33, no. 5, pp. 1255–1262, 2017.

[89] J. Behley and C. Stachniss, “Efficient surfel-based slam using 3d laser range data in urban environments.” in Robotics: Science and Systems, vol. 2018, 2018, p. 59.

[90] Z. Wang, Y. Wu, and Q. Niu, “Multi-sensor fusion in automated driving: A survey,” IEEE Access, vol. 8, pp. 2847–2868, 2020.

[91] B. Shahian Jahromi, T. Tulabandhula, and S. Cetin, “Real-time hybrid multi-sensor fusion framework for perception in autonomous vehicles,” Sensors, vol. 19, no. 20, 2019. [Online]. Available: https://www.mdpi.com/1424-8220/19/20/4357

[92] N. A. Rawashdeh, J. P. Bos, and N. J. Abu-Alrub, "Camera–Lidar sensor fusion for drivable area detection in winter weather using convolutional neural networks," Optical Engineering, vol. 62, no. 3, p. 031202, 2022. [Online]. Available: https://doi.org/10.1117/1.OE.62.3.031202

[93] E. B. Quist, P. C. Niedfeldt, and R. W. Beard, “Radar odometry with recursive-ransac,” IEEE Transactions on Aerospace and Electronic Systems, vol. 52, no. 4, pp. 1618–1630, 2016.

[94] M. Mostafa, S. Zahran, A. Moussa, N. El-Sheimy, and A. Sesay, “Radar and visual odometry integrated system aided navigation for uavs in gnss denied environment,” Sensors, vol. 18, no. 9, 2018. [Online]. Available: https://www.mdpi.com/1424-8220/18/9/2776

[95] A. Kramer, C. Stahoviak, A. Santamaria-Navarro, A.-a. Aghamohammadi, and C. Heckman, “Radar-inertial ego-velocity estimation for visually degraded environments,” in 2020 IEEE International Conference on Robotics and Automation (ICRA), 2020, pp. 5739–5746.

[96] C. Doer and G. F. Trommer, "An ekf based approach to radar inertial odometry," in 2020 IEEE International Conference on Multisensor Fusion and Integration for Intelligent Systems (MFI), 2020, pp. 152–159.

[97] ——, “Radar visual inertial odometry and radar thermal inertial odometry: Robust navigation even in challenging visual conditions,” in 2021 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2021, pp. 331–338.

[98] C. Doer, J. Atman, and G. F. Trommer, "Gnss aided radar inertial odometry for uas flights in challenging conditions," in 2022 IEEE Aerospace Conference (AERO), 2022, pp. 1–10.

[99] S. Grigorescu, B. Trasnea, T. Cocias, and G. Macesanu, “A survey of deep learning techniques for autonomous driving,” Journal of Field Robotics, vol. 37, no. 3, pp. 362–386, 2020.

[100] N. J. Abu-Alrub, A. D. Abu-Shaqra, and N. A. Rawashdeh, “Compact cnn-based road weather condition detection by grayscale image band for adas,” in Autonomous Systems: Sensors, Processing and Security for Ground, Air, Sea and Space Vehicles and Infrastructure 2022, vol. 12115. SPIE, 2022, pp. 183–191.

[101] M. Hahner, C. Sakaridis, D. Dai, and L. Van Gool, “Fog Simulation on Real LiDAR Point Clouds for 3D Object Detection in Adverse Weather,” in IEEE International Conference on Computer Vision (ICCV), 2021.
