Based on LAMP and Channel Pruning and Knowledge Distillation Model Algorithm

Authors

  • Jianhao Liang

DOI:

https://doi.org/10.6911/WSRJ.202504_11(4).0018

Keywords:

Model pruning, Knowledge distillation, Lightweight model.

Abstract

To address the limited computing power of the mobile devices used by tea-picking robots, this study proposes a hybrid compression algorithm that combines LAMP pruning with channel pruning and applies knowledge distillation to optimize the performance of the resulting lightweight model. The layer-adaptive normalization strategy of LAMP pruning balances the pruning ratio across layers and avoids the over-pruning problem of traditional magnitude pruning. Channel pruning is further introduced to remove redundant convolution kernels, reducing computational cost by 37.3% and the parameter count by 20.3%. A Mimic loss and a linear-decay distillation strategy are proposed; by weighting multi-layer feature differences and dynamically adjusting the loss weight, inference speed is increased by 19.7% at a pruning rate of 1.7. Experiments show that the proposed method reduces the average latency to 8.26 ms while maintaining model accuracy, significantly outperforming traditional pruning methods such as L1 and group_taylor, and provides an efficient solution for real-time tea recognition on edge devices.
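For readers who want to experiment, the minimal sketch below illustrates the two ideas the abstract summarizes: layer-adaptive (LAMP) scoring for global weight pruning, and a weighted multi-layer feature-difference (Mimic) distillation loss whose weight decays linearly during fine-tuning. It assumes a PyTorch model and matching student/teacher feature shapes (in practice a 1x1 adaptation layer is often inserted); the function names, sparsity level, and weight schedule are illustrative assumptions, not the paper's implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

def lamp_scores(weight: torch.Tensor) -> torch.Tensor:
    # LAMP score of a weight: its squared magnitude divided by the sum of
    # squared magnitudes of all weights in the same layer that are at least
    # as large; this normalizes scores across layers of different scales.
    w2 = weight.detach().flatten() ** 2
    sorted_w2, order = torch.sort(w2)  # ascending magnitude
    suffix = torch.flip(torch.cumsum(torch.flip(sorted_w2, dims=[0]), dim=0), dims=[0])
    scores = torch.empty_like(w2)
    scores[order] = sorted_w2 / suffix  # scatter back to original order
    return scores.view_as(weight)

def global_lamp_prune(model: nn.Module, sparsity: float = 0.5) -> None:
    # Rank all conv weights by LAMP score and zero out the lowest-scoring
    # fraction globally; per-layer pruning ratios then adapt automatically.
    convs = [m for m in model.modules() if isinstance(m, nn.Conv2d)]
    all_scores = torch.cat([lamp_scores(m.weight).flatten() for m in convs])
    k = int(sparsity * all_scores.numel())
    threshold = torch.kthvalue(all_scores, max(k, 1)).values
    for m in convs:
        mask = (lamp_scores(m.weight) > threshold).to(m.weight.dtype)
        m.weight.data.mul_(mask)

def mimic_loss(student_feats, teacher_feats, layer_weights=None) -> torch.Tensor:
    # Weighted multi-layer feature-difference ("Mimic") loss between the
    # pruned student and the unpruned teacher.
    if layer_weights is None:
        layer_weights = [1.0] * len(student_feats)
    return sum(w * F.mse_loss(s, t.detach())
               for w, s, t in zip(layer_weights, student_feats, teacher_feats))

def distill_weight(epoch: int, total_epochs: int, w0: float = 1.0) -> float:
    # Linear decay of the distillation-loss weight, so the student relies
    # less on the teacher as post-pruning fine-tuning progresses.
    return w0 * max(0.0, 1.0 - epoch / total_epochs)

# Per-epoch total loss during fine-tuning (task_loss comes from the detector):
#   loss = task_loss + distill_weight(epoch, total_epochs) * mimic_loss(s_feats, t_feats)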

Published

2025-03-20

Issue

Vol. 11 No. 4 (2025)
Section

Articles

How to Cite

Liang, Jianhao. 2025. “Based on LAMP and Channel Pruning and Knowledge Distillation Model Algorithm”. World Scientific Research Journal 11 (4): 167-78. https://doi.org/10.6911/WSRJ.202504_11(4).0018.