Taxonomy and Survey of Federated Learning Approaches for Privacy-Preserving Applications in IoT, Healthcare, and Smart Environments


Narendra V S, Savita K Shetty

Abstract

Federated learning (FL) is a rapidly developing machine learning paradigm that enables collaborative model training while preserving data privacy. Digital systems in healthcare, the Internet of Things (IoT), and smart environments generate massive amounts of sensitive data, and FL has become significant because its decentralized methodology avoids sharing raw data. This review examines the applications, challenges, and developments in FL, with an emphasis on privacy-preserving techniques across a variety of domains. Its main goal is to classify and evaluate FL frameworks according to their architecture, learning models, aggregation methods, and privacy-preserving strategies. The selected studies cover cybersecurity, smart city infrastructure, healthcare diagnostics, and customized IoT solutions; the inclusion criteria ensured relevance to FL privacy preservation and practical applicability. The results show that although FL effectively improves data security, issues with data heterogeneity, model convergence, communication overhead, and susceptibility to adversarial attacks persist. To strengthen privacy guarantees, methods such as clustered FL, secure multi-party computation, homomorphic encryption, and differential privacy are frequently used. The paper surveys these developments and suggests future research directions to improve the scalability, resilience, and security of FL systems. By offering a systematic taxonomy and analysis of current federated learning approaches suited to privacy-sensitive settings, this study contributes to the developing field of privacy-preserving AI.
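As an illustration of the kind of privacy-preserving aggregation the abstract refers to, the sketch below combines FedAvg-style weighted averaging with clipping and Gaussian noise in the spirit of differential privacy. It is a minimal example, not drawn from any of the surveyed papers; the function name, clipping norm, and noise scale are illustrative assumptions.

```python
import numpy as np

def fedavg_with_dp(client_updates, client_sizes, clip_norm=1.0, noise_std=0.01):
    """Aggregate client model updates by weighted averaging (FedAvg)
    and add Gaussian noise to the result, sketching a DP-style guarantee.
    All names and default values here are illustrative assumptions."""
    clipped = []
    for update in client_updates:
        norm = np.linalg.norm(update)
        # Clip each client's update so no single client dominates the aggregate.
        clipped.append(update * min(1.0, clip_norm / (norm + 1e-12)))
    total = float(sum(client_sizes))
    # Weighted average of clipped updates, proportional to local dataset sizes.
    aggregate = sum(u * (n / total) for u, n in zip(clipped, client_sizes))
    # Gaussian noise calibrated to the clipping norm obscures individual contributions.
    return aggregate + np.random.normal(0.0, noise_std * clip_norm, size=aggregate.shape)

# Example: three simulated clients with different local dataset sizes.
updates = [np.random.randn(10) * 0.1 for _ in range(3)]
sizes = [120, 300, 80]
global_update = fedavg_with_dp(updates, sizes)
print(global_update)
```

In practice the clipping norm and noise scale would be chosen to meet a target privacy budget, and the noisy aggregate would be applied to the global model before the next communication round.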
