A Review of SLAM Research on Mobile Robot Vision

Authors

  • Longfei Wei
  • Hongchang Sun
  • Guoqing Qiao
  • Xiangyan Wu
  • Hang Xu

DOI:

https://doi.org/10.54691/gcwwng31

Keywords:

Visual SLAM, Feature extraction, Multi-sensor fusion.

Abstract

Simultaneous Localization and Mapping (SLAM) is the process by which an intelligent agent builds a map of an unknown environment while simultaneously estimating its own position within it. In recent years, visual sensors have delivered strong performance, accuracy, and efficiency in SLAM systems. This paper reviews the progress of visual SLAM, presents representative classic visual SLAM methods and multi-sensor fusion approaches, summarizes open problems in the visual SLAM systems discussed, and explores promising research directions and future development prospects for visual SLAM.
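To make the problem concrete, the sketch below shows the feature-based front end that classic visual SLAM systems build on: ORB features are extracted and matched between two camera frames, and the relative camera motion is recovered from the essential matrix. This is a minimal illustration using OpenCV, not code from the paper; the image file names and the intrinsic matrix K are placeholder assumptions.

    # Minimal monocular visual-odometry front end (illustrative sketch).
    # Assumptions: frame0.png / frame1.png exist and K holds example
    # pinhole intrinsics; neither comes from the paper under review.
    import cv2
    import numpy as np

    K = np.array([[718.856, 0.0, 607.193],
                  [0.0, 718.856, 185.216],
                  [0.0, 0.0, 1.0]])

    orb = cv2.ORB_create(2000)  # ORB keypoints and binary descriptors
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

    img1 = cv2.imread("frame0.png", cv2.IMREAD_GRAYSCALE)
    img2 = cv2.imread("frame1.png", cv2.IMREAD_GRAYSCALE)

    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

    pts1 = np.float32([kp1[m.queryIdx].pt for m in matches])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in matches])

    # Estimate relative camera motion from 2D-2D correspondences,
    # rejecting outlier matches with RANSAC.
    E, mask = cv2.findEssentialMat(pts1, pts2, K, method=cv2.RANSAC,
                                   prob=0.999, threshold=1.0)
    _, R, t, mask = cv2.recoverPose(E, pts1, pts2, K, mask=mask)

    print("relative rotation:\n", R)
    print("translation direction (scale-free):\n", t.ravel())

Note that recoverPose returns the translation only up to scale, which is one reason monocular systems are often fused with an IMU or other sensors, as in the multi-sensor fusion methods the paper surveys.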

Published

2024-11-24

How to Cite

Wei, L., Sun, H., Qiao, G., Wu, X., & Xu, H. (2024). A Review of SLAM Research on Mobile Robot Vision. Frontiers in Science and Engineering, 4(11), 17-23. https://doi.org/10.54691/gcwwng31