Application Status and Development Trends of Visual SLAM Technology based on Computer Vision

Authors

  • Zehua Hua
  • Lu Dong
  • Xiaoyue Peng
  • Tingyu Zhu

DOI:

https://doi.org/10.54691/vyk0v094

Keywords:

Visual SLAM; Computer Image Processing Technology; State Estimation; Loop Closure Detection; Mapping.

Abstract

Artificial intelligence technologies such as autonomous driving, unmanned mobile carrier platforms, and service robots are poised to become the next research hotspots, and simultaneous localization and mapping (SLAM) is one of their key enabling technologies. Visual SLAM has emerged as a research focus due to its low economic cost, rich information content, and strong mapping performance. This paper first introduces the application background, research platforms, and underlying mathematical theories of visual SLAM, including computer image processing, multi-view geometry, Lie groups and Lie algebras, and state estimation in probability theory. It then elaborates on how visual SLAM processes video and image data, covering its four main components: front-end visual odometry, back-end optimization, loop closure detection, and map construction. Finally, the current application status and technical obstacles of visual SLAM are summarized, and prospects for its future optimization and development directions are presented.
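The four-stage pipeline the abstract names can be illustrated with a toy sketch. Everything below is synthetic and illustrative: the SE(2) poses, the `compose` helper, and the drifting-square odometry are assumptions for demonstration, and the "back-end" step linearly distributes the loop-closure residual rather than solving a real nonlinear pose-graph optimization as production SLAM systems do.

```python
import math

# Toy illustration of the four visual SLAM stages described in the abstract,
# using SE(2) poses (x, y, theta). All data and names are synthetic stand-ins.

def compose(pose, delta):
    """Compose an SE(2) pose with a relative motion expressed in its own frame."""
    x, y, th = pose
    dx, dy, dth = delta
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# 1) Front-end visual odometry: assume feature matching between frame pairs
#    yields one relative motion each (here: a square path with heading drift).
odometry = [(1.0, 0.0, math.pi / 2 + 0.05)] * 4  # 0.05 rad drift per turn

# 2) Dead-reckoned trajectory from the front end alone.
poses = [(0.0, 0.0, 0.0)]
for delta in odometry:
    poses.append(compose(poses[-1], delta))

# 3) Loop closure detection: the camera revisits its start, so the gap
#    between the first and last pose is the accumulated drift.
ex = poses[-1][0] - poses[0][0]
ey = poses[-1][1] - poses[0][1]
closure_error = math.hypot(ex, ey)

# 4) Back-end optimization (crude stand-in for pose-graph optimization):
#    spread the closure residual linearly along the trajectory; the corrected
#    poses then serve as the keyframe trajectory for map construction.
n = len(poses) - 1
corrected = [(x - ex * i / n, y - ey * i / n, th)
             for i, (x, y, th) in enumerate(poses)]
```

After the correction, the trajectory's endpoints coincide again, which is the qualitative effect back-end optimization achieves once a loop closure constrains the drifted front-end estimate.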


References

[1] OBAIGBENA A, LOTTU O A, UGWUANYI E D, et al. AI and human-robot interaction: a review of recent advances and challenges[J]. GSC Advanced Research and Reviews, 2024, 18(2): 321-330.

[2] LI J, GAO W, WU Y, et al. High-quality indoor scene 3D reconstruction with RGB-D cameras: a brief review[J]. Computational Visual Media, 2022, 8(3): 369-393.

[3] QIN L, WU C, KONG X, et al. BVT-SLAM: a binocular visible-thermal sensors SLAM system in low-light environments[J]. IEEE Sensors Journal, 2024, 24(7): 11599-11609.

[4] ZHANG G, LUO J, XU H, et al. An improved UKF algorithm for extracting weak signals based on RBF neural network[J]. IEEE Transactions on Instrumentation and Measurement, 2022, 71: 1-14.

[5] NAM J, HYEON S, JOO Y, et al. Spectral trade-off for measurement sparsification of pose-graph SLAM[J]. IEEE Robotics and Automation Letters, 2023, 9(1): 723-730.

[6] LI C H, GUO G H, YI P, et al. Distributed pose-graph optimization with multi-level partitioning for multirobot SLAM[J]. IEEE Robotics and Automation Letters, 2024, 9(6): 4926-4933.

[7] SU P, LUO S, HUANG X. Real-time dynamic SLAM algorithm based on deep learning[J]. IEEE Access, 2022, 10: 87754-87766.

[8] YANG C, CHEN Q, YANG Y, et al. SDF-SLAM: a deep learning based highly accurate SLAM using monocular camera aiming at indoor map reconstruction with semantic and depth fusion[J]. IEEE Access, 2022, 10: 10259-10272.

[9] WU H, ZHAO J, XU K, et al. Semantic SLAM based on deep learning in endocavity environment[J]. Symmetry, 2022, 14(3): 614.

[10] ABASPUR KAZEROUNI I, FITZGERALD L, DOOLY G, et al. A survey of state-of-the-art on visual SLAM[J]. Expert Systems with Applications, 2022, 205: 117734.

[11] KEETHA N, KARHADE J, JATAVALLABHULA K M, et al. SplaTAM: splat, track & map 3D Gaussians for dense RGB-D SLAM[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2024: 21357-21366.

[12] TEED Z, DENG J. DROID-SLAM: deep visual SLAM for monocular, stereo, and RGB-D cameras[J]. Advances in Neural Information Processing Systems, 2021, 34: 16558-16569.

Published

2026-01-28

Issue

Section

Articles

How to Cite

Hua, Zehua, Lu Dong, Xiaoyue Peng, and Tingyu Zhu. 2026. “Application Status and Development Trends of Visual SLAM Technology Based on Computer Vision”. Scientific Journal of Intelligent Systems Research 8 (1): 8-17. https://doi.org/10.54691/vyk0v094.