Enhanced Privacy-Preserving Federated Learning with Exit Tolerance and Verifiable Aggregation

Authors

  • Yudong Lin

DOI:

https://doi.org/10.54691/p3f71f02

Keywords:

Federated Learning; Privacy Protection; Exit Tolerance; Masking; Bilinear Pairing.

Abstract

With the escalating importance of data privacy, Federated Learning (FL) has drawn significant attention as a machine learning framework that enables multiple users to train a model collaboratively without sharing their raw data. This paper presents an enhanced Federated Learning scheme designed to strengthen user privacy protection while preserving the efficiency and performance of model training. Our approach combines random seeds with One-Time Password (OTP) technology to reinforce data encryption, and employs an improved masking mechanism together with bilinear-pairing-based verification to secure the aggregation process. In addition, the design tolerates user exits during training without compromising the final training outcome. Experimental analysis demonstrates the effectiveness of the scheme and shows that its computational and communication overheads are acceptable. This work offers a novel solution to the privacy-preservation challenges in Federated Learning and lays a foundation for the advancement of related techniques.
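To illustrate the general idea of seed-based masking that the abstract refers to, the sketch below shows pairwise masks derived from shared random seeds cancelling at the server, so that only the aggregate of the clients' updates is revealed. It is a minimal illustration under assumed names and parameters (mask_from_seed, masked_update, MODULUS, a toy vector length), not the paper's actual construction; the OTP layer and the bilinear-pairing verification step are omitted.

```python
# Minimal sketch of seed-based pairwise masking for secure aggregation.
# All names and parameters here are illustrative assumptions, not the
# paper's construction; pairing-based verification is not shown.
import hashlib
import numpy as np

MODULUS = 2**32   # arithmetic is done modulo a fixed word size
VECTOR_LEN = 4    # toy model-update length

def mask_from_seed(seed: bytes, length: int) -> np.ndarray:
    """Expand a shared seed into a pseudorandom mask (SHA-256 in counter mode)."""
    words, counter = [], 0
    while len(words) < length:
        digest = hashlib.sha256(seed + counter.to_bytes(4, "big")).digest()
        words.extend(int.from_bytes(digest[i:i + 4], "big") for i in range(0, 32, 4))
        counter += 1
    return np.array(words[:length], dtype=np.uint64) % MODULUS

def masked_update(update, my_id, peer_seeds):
    """Add +mask for peers with a larger id and -mask for peers with a smaller id."""
    x = np.array(update, dtype=np.uint64) % MODULUS
    for peer_id, seed in peer_seeds.items():
        m = mask_from_seed(seed, len(x))
        x = (x + m) % MODULUS if peer_id > my_id else (x - m) % MODULUS
    return x

# Usage: two clients share one seed; their masks cancel at the server.
shared_seed = b"seed-AB"
u_a = masked_update([1, 2, 3, 4], "A", {"B": shared_seed})
u_b = masked_update([5, 6, 7, 8], "B", {"A": shared_seed})
aggregate = (u_a + u_b) % MODULUS
print(aggregate)  # -> [ 6  8 10 12]: masks cancel, only the sum is revealed
```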

Published

2024-09-29

Issue

Vol. 6 No. 9 (2024)

Section

Articles

How to Cite

Lin, Yudong. 2024. “Enhanced Privacy-Preserving Federated Learning With Exit Tolerance and Verifiable Aggregation”. Scientific Journal of Intelligent Systems Research 6 (9): 6-12. https://doi.org/10.54691/p3f71f02.