C3-SL: Circular Convolution-Based Batch-Wise Compression for Communication-Efficient Split Learning (IEEE MLSP 2022)
Updated Aug 6, 2022 - Python
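The C3-SL repository implements the paper's scheme; as a hedged sketch (not the paper's exact encoder), the core idea of packing a batch of cut-layer features into a single vector via circular convolution can be demonstrated with FFTs. The batch size, dimension, and unit-spectrum key construction below are illustrative assumptions, not taken from the paper's code:

```python
import numpy as np

rng = np.random.default_rng(0)
B, d = 4, 4096  # batch size and feature dimension (illustrative values)

def random_unitary_key(rng, d):
    # Key whose DFT has unit magnitude everywhere: circular correlation
    # with the same key then exactly inverts circular convolution with it.
    phase = rng.uniform(-np.pi, np.pi, size=d // 2 + 1)
    phase[0] = 0.0    # DC bin must be real
    phase[-1] = 0.0   # Nyquist bin must be real (d even)
    return np.fft.irfft(np.exp(1j * phase), n=d)

X = rng.standard_normal((B, d))                      # batch of cut-layer features
K = np.stack([random_unitary_key(rng, d) for _ in range(B)])

# Compress: bind each feature to its key (circular convolution via FFT)
# and superpose -> one d-dim vector, i.e. a B-fold batch compression.
S = np.fft.irfft(np.sum(np.fft.rfft(X) * np.fft.rfft(K), axis=0), n=d)

# Decompress: circular correlation with key i retrieves x_i plus
# cross-term noise contributed by the other B - 1 bound features.
X_hat = np.fft.irfft(np.fft.rfft(S)[None, :] * np.conj(np.fft.rfft(K)), n=d)

# Each reconstruction should correlate best with its own original.
corr = np.corrcoef(np.vstack([X, X_hat]))[:B, B:]
assert all(np.argmax(corr[i]) == i for i in range(B))
```

The reconstructions are noisy (the cross terms act as additive noise), which is why schemes of this kind rely on training the downstream network to tolerate the decoding error.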
CVPR 2024 accepted paper, An Upload-Efficient Scheme for Transferring Knowledge From a Server-Side Pre-trained Generator to Clients in Heterogeneous Federated Learning
AAAI 2024 accepted paper, FedTGP: Trainable Global Prototypes with Adaptive-Margin-Enhanced Contrastive Learning for Data and Model Heterogeneity in Federated Learning
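Prototype-based methods like FedTGP are upload-efficient because clients send small per-class summaries instead of full model weights; FedTGP's contribution is training global prototypes on the server with an adaptive-margin contrastive loss, which is not reproduced here. The client-side starting point is just per-class mean embeddings, sketched below (function and variable names are illustrative):

```python
import numpy as np

def class_prototypes(features, labels, num_classes):
    # Mean embedding per class: a tiny per-class summary that is far
    # cheaper to upload than full model weights.
    d = features.shape[1]
    protos = np.zeros((num_classes, d))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():  # leave absent classes at zero
            protos[c] = features[mask].mean(axis=0)
    return protos
```

Uploading `num_classes * d` floats per round is typically orders of magnitude smaller than a full model update.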
Atomo: Communication-efficient Learning via Atomic Sparsification
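Atomo's key property is unbiasedness: a gradient is written as a combination of "atoms", each atom is kept with some probability and rescaled by its inverse, so the sparsified gradient equals the true gradient in expectation. The paper uses SVD atoms for matrices and variance-optimal probabilities; the sketch below shows only the core unbiased atom-sampling with entrywise atoms and a heuristic probability choice (both are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

def atomic_sparsify(g, probs, rng):
    # Keep atom i with probability p_i and rescale by 1 / p_i, so the
    # sparsified vector is an unbiased estimator of g.
    mask = rng.random(g.shape) < probs
    return np.where(mask, g / probs, 0.0)

g = rng.standard_normal(10_000)
# Heuristic importance-based keep probabilities (clipped away from 0).
p = np.clip(np.abs(g) / np.abs(g).max(), 0.05, 1.0)

# Averaging many independent sparsifications recovers g (unbiasedness).
est = np.mean([atomic_sparsify(g, p, rng) for _ in range(500)], axis=0)
```

Each transmitted vector is sparse (most entries zeroed), while the estimator stays unbiased, which is what lets SGD converge on the compressed gradients.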
[ICML2022] ProgFed: Effective, Communication, and Computation Efficient Federated Learning by Progressive Training
The implementation of "Two-Stream Federated Learning: Reduce the Communication Costs" (VCIP 2018)
A project that investigated, designed, and evaluated different methods to reduce overall uplink (client -> server) communication during federated learning.
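One widely evaluated uplink-reduction method in such studies is top-k gradient sparsification combined with error feedback; a minimal sketch under that assumption (class and function names are illustrative, not taken from the project):

```python
import numpy as np

def topk_sparsify(g, k):
    # Send only the k largest-magnitude entries; zero out the rest.
    idx = np.argpartition(np.abs(g), -k)[-k:]
    out = np.zeros_like(g)
    out[idx] = g[idx]
    return out

class ErrorFeedback:
    # Accumulate the dropped mass and add it back next round, so the
    # sparsification error does not bias training over time.
    def __init__(self, dim):
        self.residual = np.zeros(dim)

    def compress(self, g, k):
        corrected = g + self.residual
        sent = topk_sparsify(corrected, k)
        self.residual = corrected - sent
        return sent
```

Sending k indices and values instead of the full gradient cuts uplink traffic roughly by a factor of `dim / k` per round.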
FedAnil++ is a privacy-preserving and communication-efficient federated deep-learning model that addresses non-IID data, privacy concerns, and communication overhead. This repo hosts a Python simulation of FedAnil++.
Code for the paper "A Quadratic Synchronization Rule for Distributed Deep Learning"
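The quadratic synchronization rule grows the number of local steps between synchronizations as the learning rate decays, roughly H proportional to 1 / eta^2, so communication becomes sparser late in training. A hedged sketch of such a schedule (the constants and the `sync_interval` name are illustrative, not the paper's):

```python
def sync_interval(lr, base_interval=2, growth_coeff=0.01):
    # Quadratic rule: local steps between synchronizations grow as the
    # learning rate decays, H = max(H_base, round(c / lr**2)).
    # base_interval and growth_coeff here are illustrative constants.
    return max(base_interval, round(growth_coeff / lr ** 2))
```

With these constants, halving the learning rate roughly quadruples the interval between synchronizations, cutting communication rounds accordingly.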