CAI Yu, CHEN Ding, WANG Juan, et al. LOF-based Defense Against Federated Learning Backdoor Attacks[J]. Journal of Chengdu University of Information Technology, 2025, 40(04): 454-458. [doi:10.16836/j.cnki.jcuit.2025.04.007]

LOF-based Defense Against Federated Learning Backdoor Attacks

References:

[1] Lin W W, Shi F, Zeng L, et al. A survey of open-source federated learning frameworks[J]. Journal of Computer Research and Development, 2023, 60(7): 1551-1580. (in Chinese)
[2] Yu S X, Chen Z K, Chen Z, et al. DAGUARD: distributed backdoor attack defense scheme under federated learning[J]. Journal on Communications, 2023, 44(5): 110-122. (in Chinese)
[3] Gao J, Zhang B, Guo X, et al. Secure partial aggregation: Making federated learning more robust for Industry 4.0 applications[J]. IEEE Transactions on Industrial Informatics, 2022, 18(9): 6340-6348.
[4] Fu S, Xie C, Li B, et al. Attack-resistant federated learning with residual-based reweighting[J]. arXiv preprint arXiv:1912.11464, 2019.
[5] Liu G, Ma X, Yang Y, et al. FedEraser: Enabling efficient client-level data removal from federated learning models[C]. 2021 IEEE/ACM 29th International Symposium on Quality of Service (IWQoS). IEEE, 2021: 1-10.
[6] Chen M X, Zhang J B, Li T R. Survey on attacks and defenses in federated learning[J]. Computer Science, 2022, 49(7): 310-323. (in Chinese)
[7] Gao Y, Chen X F, Zhang Y Y, et al. Survey of attack and defense techniques for federated learning systems[J]. Chinese Journal of Computers, 2023, 46(9): 1781-1805. (in Chinese)
[8] Bagdasaryan E, Veit A, Hua Y, et al. How to backdoor federated learning[C]. International Conference on Artificial Intelligence and Statistics. PMLR, 2020: 2938-2948.
[9] Hinton G E, Srivastava N, Krizhevsky A, et al. Improving neural networks by preventing co-adaptation of feature detectors[J]. arXiv preprint arXiv:1207.0580, 2012.
[10] Zhao Y, Xu K, Wang H, et al. Stability-based analysis and defense against backdoor attacks on edge computing services[J]. IEEE Network, 2021, 35(1): 163-169.
[11] Breunig M M, Kriegel H P, Ng R T, et al. LOF: identifying density-based local outliers[C]. Proceedings of the 2000 ACM SIGMOD International Conference on Management of Data. 2000: 93-104.
[12] He K, Zhang X, Ren S, et al. Deep residual learning for image recognition[C]. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 770-778.
[13] Xiao H, Rasul K, Vollgraf R. Fashion-MNIST: a novel image dataset for benchmarking machine learning algorithms[J]. arXiv preprint arXiv:1708.07747, 2017.
[14] Cohen G, Afshar S, Tapson J, et al. EMNIST: Extending MNIST to handwritten letters[C]. 2017 International Joint Conference on Neural Networks (IJCNN). IEEE, 2017: 2921-2926.
[15] Wang B, Yao Y, Shan S, et al. Neural Cleanse: Identifying and mitigating backdoor attacks in neural networks[C]. 2019 IEEE Symposium on Security and Privacy (SP). IEEE, 2019: 707-723.
[16] Li Q, Diao Y, Chen Q, et al. Federated Learning on Non-IID Data Silos: An Experimental Study[J]. 2021.
[17] Sun Z, Kairouz P, Suresh A T, et al. Can you really backdoor federated learning?[J]. arXiv preprint arXiv:1911.07963, 2019.
[18] Nguyen T D, Rieger P, De Viti R, et al. FLAME: Taming backdoors in federated learning[C]. 31st USENIX Security Symposium (USENIX Security 22). 2022: 1415-1432.

Memo

Received: 2024-01-06

Last Update: 2025-08-31