RGB-D SLAM Algorithm Based on Delayed Semantic Information in Dynamic Environment
WANG Hao¹, ZHOU Shenchao¹, FANG Baofu¹
1. School of Computer Science and Information Engineering, Hefei University of Technology, Hefei 230601
Abstract Traditional visual simultaneous localization and mapping (SLAM) cannot be directly applied to dynamic environments. The mainstream solution is to combine a segmentation network with SLAM, but the limited processing speed of the segmentation network prevents the SLAM system from running in real time. Therefore, an RGB-D SLAM algorithm based on delayed semantic information in dynamic environments is proposed. Firstly, the tracking and segmentation threads run in parallel. To obtain the latest delayed semantic information, a cross-frame segmentation strategy is employed for image processing, and the tracking thread generates real-time semantic information for the current frame from the delayed semantic information. Then, the dynamic point set of the current frame is selected, and the real motion state of prior dynamic objects in the environment is determined by combining the successful tracking count with epipolar constraints. When an object is judged to be moving, its region is further subdivided into rectangular grids, and dynamic feature points are removed with the grid as the minimum unit. Finally, the camera pose is tracked with the remaining static feature points and an environment map is constructed. Experiments on the TUM RGB-D dynamic scene dataset and in real scenes verify the effectiveness of the proposed algorithm.
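To make the geometric steps in the abstract concrete, the Python sketch below (a minimal illustration, not the authors' implementation) shows one way the epipolar-constraint check and the grid-level removal of dynamic feature points could be realized. The fundamental matrix F, the prior-dynamic object mask, and the grid size and error threshold are all assumed placeholders.

import numpy as np

def epipolar_distances(F, pts_prev, pts_curr):
    """Distance of each current point to the epipolar line of its match.
    F: 3x3 fundamental matrix; pts_prev, pts_curr: Nx2 matched pixel arrays."""
    ones = np.ones((len(pts_prev), 1))
    p1 = np.hstack([pts_prev, ones])           # homogeneous points in frame k-1
    p2 = np.hstack([pts_curr, ones])           # homogeneous points in frame k
    lines = (F @ p1.T).T                       # epipolar lines l = F * p1
    num = np.abs(np.sum(lines * p2, axis=1))   # |p2^T F p1|
    den = np.hypot(lines[:, 0], lines[:, 1])   # line normal magnitude
    return num / den

def remove_dynamic_points(pts_curr, dists, mask, grid=20, thresh=1.0):
    """Subdivide the masked object region into rectangular grid cells and
    discard every point in a cell whose mean epipolar error exceeds thresh."""
    h, w = mask.shape
    keep = np.ones(len(pts_curr), dtype=bool)
    cell_err, cell_idx = {}, {}
    for i, (x, y) in enumerate(pts_curr):
        xi, yi = int(x), int(y)
        if 0 <= yi < h and 0 <= xi < w and mask[yi, xi]:
            key = (yi // grid, xi // grid)      # grid cell containing the point
            cell_err.setdefault(key, []).append(dists[i])
            cell_idx.setdefault(key, []).append(i)
    for key, errs in cell_err.items():
        if np.mean(errs) > thresh:              # cell judged as moving
            keep[cell_idx[key]] = False         # remove the whole grid cell
    return pts_curr[keep], keep

Only points surviving this filter would be passed to pose tracking; a production system would estimate F robustly (e.g., with RANSAC over the tracked matches) and tune the grid size and threshold per sequence.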
Received: 08 September 2023
Fund: National Natural Science Foundation of China (No. 61872327), Natural Science Foundation of Anhui Province (No. 2308085MF203), University Synergy Innovation Program of Anhui Province (No. GXXT-2022-055), Major Project of Key Laboratory of Flight Techniques and Flight Safety of CAAC (No. FZ2022ZZ02), Open Fund of Key Laboratory of Flight Techniques and Flight Safety of CAAC (No. FZ2022KF09)
Corresponding Author:
FANG Baofu, Ph.D., associate professor. His research interests include intelligent robot systems.
About the authors: WANG Hao, Ph.D., professor. His research interests include distributed intelligent systems and robots. ZHOU Shenchao, master student. His research interests include visual SLAM.