Momentum Method and Top-k Optimisation Algorithm in Efficient and Secure Federated Learning Modelling

Authors

  • Shuhao Fan
  • Juntao Zhang
  • Yuelin Liu

DOI:

https://doi.org/10.6911/WSRJ.202411_10(11).0001

Keywords:

Federated Learning; Top-k Gradient Sparsification; Batch Processing; Homomorphic Encryption; Momentum Method.

Abstract

Federated learning allows multiple clients to collaboratively train a model without sharing their data, thereby protecting user privacy. Homomorphic encryption is widely used in federated learning to prevent leakage of model parameters, but the large number of ciphertexts it generates imposes a heavy communication overhead that degrades training efficiency. Although various methods reduce communication costs through algorithmic optimisation or decentralised training, the communication problem caused by homomorphic encryption remains far from solved. Yu et al. proposed an efficient and secure federated aggregation framework (ESFL) based on homomorphic encryption with quantisation of pruned gradients. We first simplify the ESFL framework, retaining its batch processing of the gradients at the candidate indexes and its Top-k gradient sparsification for screening model gradients, which reduces the number of gradients to be transmitted; we then apply momentum to refine the client model updates and accelerate convergence. The results show that, with the improved gradient selection algorithm, our proposed method achieves a substantial efficiency gain while ensuring data security.
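
To make the gradient selection step concrete, below is a minimal sketch of Top-k sparsification combined with a local momentum buffer, written in PyTorch-style Python. The function name, the momentum coefficient beta, and the residual-accumulation detail are illustrative assumptions for exposition, not the authors' exact ESFL implementation; only the selected (index, value) pairs would then be batch-packed and homomorphically encrypted before upload.

    import torch

    def topk_sparsify_with_momentum(grad, momentum_buf, k, beta=0.9):
        # grad:         flattened gradient tensor for the current round
        # momentum_buf: locally kept momentum/residual tensor, same shape as grad
        # k:            number of coordinates kept and transmitted
        # beta:         momentum coefficient (illustrative value)

        # Fold the new gradient into the momentum accumulator.
        momentum_buf.mul_(beta).add_(grad)

        # Top-k screening: keep the k largest-magnitude coordinates
        # as the candidate indexes.
        _, idx = torch.topk(momentum_buf.abs(), k)
        values = momentum_buf[idx].clone()

        # Retain the unsent mass locally so it contributes to later rounds.
        momentum_buf[idx] = 0.0

        # Only (idx, values), not the full gradient, would be batch-packed
        # and homomorphically encrypted before upload to the server.
        return idx, values

    # Example: keep 1% of a 100 000-dimensional gradient.
    g = torch.randn(100_000)
    buf = torch.zeros_like(g)
    idx, vals = topk_sparsify_with_momentum(g, buf, k=1_000)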

References

[1] McMahan, B., Moore, E., Ramage, D., Hampson, S., & Arcas, B. A. (2017). Communication-efficient learning of deep networks from decentralized data. AISTATS, pp 1273-1282.

[2] Bonawitz, K., Ivanov, V., Kreuter, B., et al. (2017). Practical Secure Aggregation for Privacy-Preserving Machine Learning. ACM CCS, pp 1175-1191.

[3] Gentry, C. (2009). Fully homomorphic encryption using ideal lattices. STOC, pp 169-178.

[4] Fang, H., & Qian, Q. (2021). Privacy Preserving Machine Learning with Homomorphic Encryption and Federated Learning. Future Internet, 13(4), 94. http://doi.org/10.3390/fi13040094.

[5] Zhang, C., Li, S., Xia, J., Wang, W., Yan, F., & Liu, Y. (2020). BatchCrypt: Efficient Homomorphic Encryption for Cross-Silo Federated Learning. USENIX ATC.

[6] Han, P., Wang, S., & Leung, K. K. (2020). Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach. ICDCS, pp 300-310.

[7] Phong, L. T., Aono, Y., Hayashi, T., Wang, L., & Moriai, S. (2018). Privacy-Preserving Deep Learning via Additively Homomorphic Encryption. IEEE Transactions on Information Forensics and Security, 13, 1333-1345.

[8] Cheng, K., Fan, T., Jin, Y., Liu, Y., Chen, T., & Yang, Q. (2019). SecureBoost: A Lossless Federated Learning Framework. IEEE Intelligent Systems, 36, 87-98.

[9] Ma, J., Naas, S., Sigg, S., & Lyu, X. (2021). Privacy-preserving federated learning based on multi-key homomorphic encryption. International Journal of Intelligent Systems, 37, 5880-5901.

[10] Strom, N. (2015). Scalable distributed DNN training using commodity GPU cloud computing. arXiv preprint arXiv:1506.03478.

[11] Aji, A. F., & Heafield, K. (2017). Sparse Communication for Distributed Gradient Descent. EMNLP, pp 440-445.

[12] Dryden, N., Moon, T., Jacobs, S. A., et al. (2016). Communication Quantization for Data-Parallel Training of Deep Neural Networks. MLHPC, pp 1-8.

[13] Alistarh, D., Li, J., Tomioka, R., et al. (2016). QSGD: Randomized Quantization for Communication-Optimal Stochastic Gradient Descent. CoRR, abs/1610.02132.

[14] Han, P., Wang, S., & Leung, K. K. (2020). Adaptive Gradient Sparsification for Efficient Federated Learning: An Online Learning Approach. ICDCS, pp 300-310.

[15] Yu, S. X., & Chen, Z. (2023). An Efficient and Secure Federated Learning Aggregation Framework Based on Homomorphic Encryption. Journal of Communications, 44(1), 14-28. http://doi.org/10.11959/j.issn.1000-436x.2023015.

[16] El-Mhamdi, E. M., Guerraoui, R., & Rouault, S. (2020). Distributed Momentum for Byzantine-resilient Learning. arXiv preprint arXiv:2003.00010.

[17] Shi, S., Wang, Q., Zhao, K., Tang, Z., Wang, Y., Huang, X., & Chu, X. (2019). A Distributed Synchronous SGD Algorithm with Global Top-k Sparsification for Low Bandwidth Networks. ICDCS. arXiv:1901.04359v2.

Published

2024-10-22

How to Cite

Fan, Shuhao, Juntao Zhang, and Yuelin Liu. 2024. “Momentum Method and Top-K Optimisation Algorithm in Efficient and Secure Federated Learning Modelling”. World Scientific Research Journal 10 (11): 1-11. https://doi.org/10.6911/WSRJ.202411_10(11).0001.