Deep Learning for Point Cloud Registration: A Review of Core Architectures and Feature Extraction Strategies

Authors

  • Zhiqiu Zhang
  • Ye Chen

DOI:

https://doi.org/10.54691/nr3yhg10

Keywords:

Point Cloud Registration; Deep Learning; Feature Extraction; Correspondence Learning; Transformer; Coarse-to-Fine Matching.

Abstract

Point cloud registration aims to estimate the spatial transformation that aligns a source point cloud with a target point cloud, and constitutes a fundamental problem in three-dimensional reconstruction, robotic perception, autonomous driving, industrial metrology, and medical digitization. In recent years, deep learning has substantially reshaped the research paradigm of point cloud registration: the field has progressively shifted from traditional pipelines built on handcrafted geometric descriptors and local optimization toward data-driven frameworks that jointly learn feature representations, correspondence patterns, and rigid transformations. Focusing on the question of which core architectures and feature extraction strategies deep learning methods for point cloud registration adopt, this review systematically analyzes the representative methodological routes. Specifically, we examine global feature regression architectures, correspondence learning architectures, coarse-to-fine hierarchical frameworks, and Transformer-based geometric attention models, and further discuss the evolution of feature learning from point-wise encoding to local geometric description, global context aggregation, multi-scale fusion, and cross-cloud interaction. On this basis, the review summarizes the intrinsic coupling among feature extraction, correspondence construction, and transformation estimation, compares representative models under a unified benchmark protocol, and outlines the main challenges and future directions concerning generalization, efficiency, and geometric interpretability. This review is expected to provide a structured reference for subsequent studies on deep-learning-based point cloud registration.
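The abstract notes the coupling of feature extraction, correspondence construction, and transformation estimation. As background for the last stage, the sketch below (not taken from the paper; the function name and NumPy implementation are illustrative assumptions) shows the standard closed-form rigid transformation estimate via SVD (the Kabsch/Procrustes solution) that correspondence-based pipelines typically apply once point matches are fixed:

```python
import numpy as np

def estimate_rigid_transform(src, tgt):
    """Closed-form least-squares rigid alignment (Kabsch/Procrustes).

    src, tgt: (N, 3) arrays of corresponding points.
    Returns rotation R (3x3) and translation t (3,) such that
    R @ src[i] + t approximates tgt[i].
    """
    # Center both point sets on their centroids.
    src_c = src.mean(axis=0)
    tgt_c = tgt.mean(axis=0)
    A = src - src_c
    B = tgt - tgt_c

    # Cross-covariance and its SVD.
    H = A.T @ B
    U, _, Vt = np.linalg.svd(H)

    # Reflection guard: force det(R) = +1 so R is a proper rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])

    R = Vt.T @ D @ U.T
    t = tgt_c - R @ src_c
    return R, t
```

In the learning-based methods surveyed, this solver is usually the differentiable final step: soft or hard correspondences produced by the network feed a (possibly weighted) variant of the same SVD computation.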

References

[1] Z. Zhang, Y. Dai and J. Sun: Deep learning based point cloud registration: an overview, Virtual Reality & Intelligent Hardware, Vol. 2 (2020) No. 3, p. 222-246.

[2] M. Lyu, J. Yang, Z. Qi, R. Xu, J. Liu and Y. Liu: Rigid pairwise 3D point cloud registration: A survey, Pattern Recognition, Vol. 151 (2024), 110408.

[3] Y.-X. Zhang, J. Gui, B. Yu, X. Cong, X. Gong, W. Tao and D. Tao: Deep Learning-Based Point Cloud Registration: A Comprehensive Survey and Taxonomy, arXiv:2404.13830 (2024).

[4] P. J. Besl and N. D. McKay: A method for registration of 3-D shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 14 (1992) No. 2, p. 239-256.

[5] S. Rusinkiewicz and M. Levoy: Efficient variants of the ICP algorithm, Proc. Third International Conference on 3-D Digital Imaging and Modeling (Quebec City, Canada, May 28-June 1, 2001), p. 145-152.

[6] A. Zeng, S. Song, M. Nießner, M. Fisher, J. Xiao and T. Funkhouser: 3DMatch: Learning Local Geometric Descriptors from RGB-D Reconstructions, Proc. IEEE Conference on Computer Vision and Pattern Recognition (Honolulu, USA, July 21-26, 2017), p. 199-208.

[7] C. Choy, J. Park and V. Koltun: Fully Convolutional Geometric Features, Proc. IEEE/CVF International Conference on Computer Vision (Seoul, Korea, October 27-November 2, 2019), p. 8958-8966.

[8] Y. Aoki, H. Goforth, R. A. Srivatsan and S. Lucey: PointNetLK: Robust & Efficient Point Cloud Registration Using PointNet, Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (Long Beach, USA, June 16-20, 2019), p. 7163-7172.

[9] Y. Wang and J. M. Solomon: Deep Closest Point: Learning Representations for Point Cloud Registration, Proc. IEEE/CVF International Conference on Computer Vision (Seoul, Korea, October 27-November 2, 2019), p. 3523-3532.

[10] Z. J. Yew and G. H. Lee: RPM-Net: Robust Point Matching Using Learned Features, Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (Seattle, USA, June 13-19, 2020), p. 11824-11833.

[11] S. Huang, Z. Gojcic, M. Usvyatsov, A. Wieser and K. Schindler: Predator: Registration of 3D Point Clouds with Low Overlap, Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (virtual, June 19-25, 2021), p. 4267-4276.

[12] H. Yu, F. Li, M. Saleh, B. Busam and S. Ilic: CoFiNet: Reliable Coarse-to-Fine Correspondences for Robust Point Cloud Registration, Advances in Neural Information Processing Systems, Vol. 34 (2021), p. 23872-23884.

[13] Z. J. Yew and G. H. Lee: REGTR: End-to-End Point Cloud Correspondences with Transformers, Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (New Orleans, USA, June 18-24, 2022), p. 6677-6686.

[14] Z. Qin, H. Yu, C. Wang, Y. Guo, Y. Peng and S. Ilic: Geometric Transformer for Fast and Robust Point Cloud Registration, Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (New Orleans, USA, June 18-24, 2022), p. 11143-11152.

[15] A. Geiger, P. Lenz and R. Urtasun: Are we ready for autonomous driving? The KITTI vision benchmark suite, Proc. IEEE Conference on Computer Vision and Pattern Recognition (Providence, USA, June 16-21, 2012), p. 3354-3361.

[16] J. Qian and D. Tao: RRGA-Net: Robust Point Cloud Registration Based on Graph Convolutional Attention, Sensors, Vol. 23 (2023) No. 24, 9710.

[17] C. Ren, Y. Feng, W. Zhang, X.-P. Zhang and Y. Gao: Multi-scale Consistency for Robust 3D Registration via Hierarchical Sinkhorn Tree, Advances in Neural Information Processing Systems, Vol. 37 (2024), p. 63991-64020.

Published

2026-03-30

Issue

Vol. 8 No. 2 (2026)

Section

Articles

How to Cite

Zhang, Zhiqiu, and Ye Chen. 2026. “Deep Learning for Point Cloud Registration: A Review of Core Architectures and Feature Extraction Strategies”. Scientific Journal of Intelligent Systems Research 8 (2): 42-50. https://doi.org/10.54691/nr3yhg10.