ADVERSARIAL ATTACKS ON FEDERATED LEARNING MODELS IN HEALTHCARE DATA ECOSYSTEMS
Subjects/Theme:
Federated Learning, Healthcare AI, Adversarial Attacks, Data Poisoning, Model Poisoning, Model Inversion, Privacy Leakage, Secure Aggregation, Differential Privacy, Medical Data Security
Security and Privacy in AI Systems
Edited By: Dr. Sunita Chaudhary, Dr. Joydeb Patra
ISBN: 978-81-685212-9-2
Federated Learning (FL) has emerged as a transformative paradigm for privacy-preserving machine learning, particularly in healthcare ecosystems where sensitive patient data cannot be centrally aggregated. By enabling decentralized model training across hospitals, diagnostic centers, and wearable devices, FL preserves data privacy while helping satisfy regulatory constraints. Despite these privacy advantages, however, FL remains highly vulnerable to adversarial attacks because of its distributed, trust-based architecture. This paper provides a comprehensive analysis of adversarial threats in healthcare FL systems, focusing on data poisoning, model poisoning, and model inversion attacks. We examine how malicious participants manipulate local training to degrade global model performance or extract sensitive medical information. We further explore attack vectors specific to healthcare, including medical imaging, electronic health records (EHRs), and IoT-based patient monitoring systems, and present a detailed comparison of attack mechanisms, impact severity, and defense strategies. We also discuss state-of-the-art mitigation approaches such as robust aggregation, differential privacy, blockchain-based verification, and trust-aware learning frameworks. Finally, the study highlights critical research gaps and proposes future directions for strengthening the robustness of federated healthcare systems against adversarial threats.
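To make the model-poisoning threat and the robust-aggregation defense mentioned above concrete, the following is a minimal, hypothetical sketch (not the paper's actual method): ten simulated clients each send a two-dimensional model update, one of them malicious, and plain federated averaging (FedAvg) is compared against a coordinate-wise median, a standard robust aggregation rule. All names and numbers here are illustrative assumptions.

```python
import numpy as np

def fedavg(updates):
    """Plain federated averaging: element-wise mean of client updates."""
    return np.mean(updates, axis=0)

def median_aggregate(updates):
    """Robust aggregation via the coordinate-wise median, which tolerates
    a minority of arbitrarily corrupted (poisoned) client updates."""
    return np.median(updates, axis=0)

# Hypothetical FL round: 9 honest clients send updates near the "true"
# direction [1.0, -2.0]; one malicious client sends a large scaled
# update to poison the global model (a simple model-poisoning attack).
rng = np.random.default_rng(0)
honest = rng.normal(loc=[1.0, -2.0], scale=0.1, size=(9, 2))
malicious = np.array([[100.0, 100.0]])
updates = np.vstack([honest, malicious])

print("FedAvg :", fedavg(updates))            # dragged toward the attacker
print("Median :", median_aggregate(updates))  # stays near [1.0, -2.0]
```

Because the median of each coordinate ignores extreme values, the single attacker barely shifts the robust aggregate, whereas the mean is pulled far off; with more colluding attackers (approaching half the clients), even median-based rules break down, which is why the defenses surveyed here are often combined.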