This repository contains the implementation code of Guard-FL.
Federated learning (FL) in Internet of Things (IoT) applications facilitates the collaborative training of a global model across distributed devices with a server.
Despite its potential, the distributed nature and vulnerability of IoT devices render FL susceptible to Byzantine attacks. Existing approaches to counter these attacks are often impractical in real-world IoT scenarios, mainly due to the challenges posed by non-independent and identically distributed (non-IID) data and the high-dimensional models common on IoT devices.

To address these challenges, we introduce Guard-FL, an efficient UMAP-assisted robust aggregation mechanism for FL. Guard-FL is designed to enhance the performance of the global model in non-IID data environments without compromising defense capabilities. Specifically, it utilizes uniform manifold approximation and projection (UMAP) to capture non-linear features among high-dimensional local models. Based on these features, robust regression and unsupervised clustering techniques are applied to detect and remove attackers from the set of local model updates. Subsequently, the server uses the information encoded in the model weights to evaluate and aggregate the remaining divergent model updates, significantly improving the global model's performance.

To validate the efficacy of Guard-FL, we provide a theoretical analysis of its robust convergence properties. Our experiments demonstrate that Guard-FL surpasses existing state-of-the-art solutions, achieving up to
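To illustrate the core filter-then-aggregate idea, here is a minimal sketch in Python. It embeds flattened client updates into a low-dimensional space, uses density-based clustering to discard clients that fall outside the largest cluster, and weight-averages the survivors. Note the assumptions: Guard-FL uses UMAP (via the `umap-learn` package) together with robust regression, whereas this sketch substitutes PCA as a widely available stand-in for the embedding step and plain DBSCAN for outlier detection; the function name, `eps` value, and the toy attack below are illustrative, not taken from the paper.

```python
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.decomposition import PCA  # stand-in for UMAP in this sketch

def filter_and_aggregate(updates, weights, eps=1.0):
    """Embed flattened client updates, drop clients outside the largest
    density cluster, and weight-average the survivors.

    updates: (n_clients, d) array of flattened local model updates
    weights: (n_clients,) per-client weights (e.g. local data sizes)
    """
    embedded = PCA(n_components=2).fit_transform(updates)
    labels = DBSCAN(eps=eps, min_samples=3).fit_predict(embedded)
    # Label -1 marks DBSCAN noise; keep only the largest cluster.
    valid = labels[labels != -1]
    if valid.size == 0:
        raise ValueError("no dense cluster of benign clients found")
    majority = np.bincount(valid).argmax()
    mask = labels == majority
    aggregated = np.average(updates[mask], axis=0, weights=weights[mask])
    return aggregated, mask

# Toy example: 8 benign clients near a common optimum, 2 Byzantine
# clients submitting updates shifted by a large constant offset.
rng = np.random.default_rng(0)
base = rng.normal(size=50)
benign = base + 0.01 * rng.normal(size=(8, 50))
attackers = base + 10.0 + 0.01 * rng.normal(size=(2, 50))
updates = np.vstack([benign, attackers])
weights = np.ones(10)

aggregated, kept = filter_and_aggregate(updates, weights)
print(kept)  # the two shifted clients are excluded from aggregation
```

With `min_samples=3`, the two attackers cannot form a cluster of their own and are labeled as noise, so the aggregate is computed over benign clients only.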