
RoFL: Robustness of Federated Learning

Table of Contents
  1. About The Project
  2. Getting Started
  3. Usage
  4. Logging
  5. License
  6. Contact

About the Project

This framework is an end-to-end implementation of the protocol proposed in RoFL: Attestable Robustness for Secure Federated Learning. The protocol combines secure aggregation with commitments and zero-knowledge proofs to prove constraints on client updates. One constraint that we show to be effective against some types of backdoor attacks is the bounding of update norms. We evaluate this constraint using the federated learning analysis framework, which can be used to run experiments analysing the effectiveness of various federated learning backdoor attacks and defenses.
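
The norm-bound constraint can be illustrated with a minimal sketch. In RoFL this check is enforced cryptographically via zero-knowledge proofs over commitments, so the server never sees the update in the clear; the plain-text version below (function names are hypothetical) only shows the constraint itself.

```python
import math

def l2_norm(update):
    """Euclidean norm of a flat list of model-update parameters."""
    return math.sqrt(sum(x * x for x in update))

def within_bound(update, bound):
    """Accept an update only if its L2 norm is at most `bound`.

    Illustrative only: RoFL proves this relation in zero knowledge
    rather than checking it on plaintext updates.
    """
    return l2_norm(update) <= bound

update = [0.3, -0.4, 0.0]
print(within_bound(update, 1.0))  # norm is 0.5, so True
```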

The current implementation of RoFL is an academic proof-of-concept prototype, designed to evaluate the overheads that zero-knowledge proofs on client updates add on top of secure aggregation. It is not meant to be used directly in production.

RoFL components

This repository is structured as follows.

  • RoFL_Service: This directory contains the code for the federated learning server and client, written in Rust.
  • RoFL_Crypto: This directory contains the cryptographic library to generate and verify the zero-knowledge proof constraints used in RoFL.
  • RoFL_Train_Client: This directory contains a python service to handle the training and inference operations for the machine learning model. The RoFL_Service interfaces with the RoFL_Train_Client using gRPC. This component acts as a wrapper around the FL Analysis framework that is used for machine learning training.

Utilities

  • ansible: The Ansible setup used for the evaluation of the framework. For more information on how to use this, see ansible/README.md.

  • plots: This directory contains code used to generate plots for the paper.

End-to-end implementation

The end-to-end setup consists of two components. The secure federated learning implementation with constraints handles the communication between the server and the clients; on both the client and the server side, it offloads the machine learning operations to the second component, a Python training and evaluation service.

Getting Started

Follow these steps to run the implementation on your local machine.

Requirements

  • Python 3.7
  • Rust, minimum version 1.64.0-nightly (2022-07-24)

Installation

Both the secure FL component and the training service are installed separately.

Secure FL with constraints

  1. Clone this repository
git clone git@github.com:pps-lab/rofl-project-code.git
  2. Install Cargo/Rust
curl https://sh.rustup.rs -sSf | sh -s -- -y
  3. Switch to nightly. Note: as of now, only a specific nightly version is supported due to a deprecated feature that a dependency is using.
rustup override set nightly-2022-07-24
  4. Build the project
cargo build

Python training service

  1. Install the requirements for the trainservice wrapper (in rofl_project_code)
cd rofl_train_client
pip install -r requirements.txt
  2. Download the analysis framework
cd ../../ # go up to workspace directory
git clone git@github.com:pps-lab/fl-analysis.git
  3. Install the requirements for the analysis framework
cd fl-analysis
pipenv install

Usage

The framework can be used in two ways.

Using Ansible

We provide a setup in Ansible to easily deploy and evaluate the framework on multiple servers on AWS. See ansible/README.md for instructions on how to use this Ansible setup.

Manually

To run the setup manually, several components must be run separately, and they must be started in the order given below. The examples shown are for a basic local configuration with four clients and L∞-norm (infinity) range proof verification. The implementation of RoFL uses the analysis framework for model training and evaluation. In the following, we assume this directory structure:

Top-level directory (e.g., workspace):

  • rofl-project-code (this repository)
  • fl-analysis (the analysis framework)

Each component must be run in a separate terminal window.

Server

In rofl-project-code, run the server:

./target/debug/flserver

Client Trainer

First, navigate to the analysis framework directory and enter the pipenv:

cd ../fl-analysis
pipenv shell

Then, navigate back to the python directory in the implementation directory:

cd ../rofl-project-code/rofl_train_client

From the rofl_train_client directory, run the python service.

PYTHONPATH=$(pwd) python trainservice/service.py

Client

In the rofl-project-code directory, run the client executable.

cd ../
./target/debug/flclients -n 4 -r 50016

Observer (optional)

Once the clients are running, training starts. In addition, the observer component can be used to evaluate the model accuracy on the server side. To do so, first navigate to the analysis framework directory and enter the pipenv:

cd ../fl-analysis
pipenv shell

Then, navigate back to the python directory in the implementation directory:

cd ../rofl-project-code/rofl_train_client

Set the PYTHONPATH to include the current directory and run

PYTHONPATH=$(pwd) python trainservice/observer.py

The observer will connect to the FL server and receive the global model for each round.

Logging

The implementation outputs time and bandwidth measurements in several files.

Benchmark Log Format

The benchmark files for both the server and the clients can be found in the benchlog folder.

Format of the server log

t1--t2--t3--t4
t1: round starts
t2: round aggregation done
t3: round param extraction done
t4: verification completes

Format of a benchmark log line:
<Round ID>, <t2 - t1>, <t3 - t2>, <t4 - t3>, <total duration>
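
A minimal sketch of reading such a line, assuming the comma-separated layout above (field names are hypothetical; the duration unit depends on the implementation and is not assumed here):

```python
def parse_server_log_line(line):
    """Parse one server benchlog line of the form
    <Round ID>, <t2 - t1>, <t3 - t2>, <t4 - t3>, <total duration>."""
    round_id, agg, extract, verify, total = line.strip().split(",")
    return {
        "round": int(round_id),
        "aggregation": float(agg),          # t2 - t1
        "param_extraction": float(extract), # t3 - t2
        "verification": float(verify),      # t4 - t3
        "total": float(total),
    }

print(parse_server_log_line("3, 1.2, 0.4, 2.5, 4.1"))
```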

Format of the client log

t1--t2--t3--t4--t5
t1: model meta received
t2: model completely received
t3: local model training done
t4: model update encryption + proofs completed
t5: model sent to server

Format of a benchmark log line:
<Round ID>, <t2 - t1>, <t3 - t2>, <t4 - t3>, <t5 - t4>, <total duration>, <bytes received>, <bytes sent>
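
The client lines can be read the same way; this sketch assumes the comma-separated layout above (field names are hypothetical):

```python
def parse_client_log_line(line):
    """Parse one client benchlog line of the form
    <Round ID>, <t2 - t1>, <t3 - t2>, <t4 - t3>, <t5 - t4>,
    <total duration>, <bytes received>, <bytes sent>."""
    fields = [f.strip() for f in line.strip().split(",")]
    round_id, recv, train, prove, send, total, rx, tx = fields
    return {
        "round": int(round_id),
        "model_receive": float(recv),      # t2 - t1
        "training": float(train),          # t3 - t2
        "encrypt_and_prove": float(prove), # t4 - t3
        "send": float(send),               # t5 - t4
        "total": float(total),
        "bytes_received": int(rx),
        "bytes_sent": int(tx),
    }

print(parse_client_log_line("7, 0.1, 3.2, 5.0, 0.2, 8.5, 1048576, 2097152"))
```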

Microbenchmarks

We provide cargo bench benchmarks for the following individual components:

Well-formedness:

  • randproof_bench: Unoptimized per-parameter randomness proof
  • squarerandproof_bench: Unoptimized per-parameter randomness proof + proof of square relation (in a single Sigma protocol)
  • compressedrandproof_bench: Compressed single randomness proof
  • squareproof_bench: Per-parameter proof of square relation (to be used with the compressed randomness proof in the L2 norm)

Range proofs:

  • rangeproof_bench: Per-parameter range proof (partitioned in 4 chunks)
  • rangeproof_part36_bench: Per-parameter range proof (partitioned in 32 chunks to measure optimized verification speed on the server)
  • l2rangeproof_bench: Single range proof for the sum of squared commitments

Server-side operations:

  • dlog_bench: Measures the time to decrypt using discrete log
  • addelgamal_bench: Measures the time to combine the vector ElGamal commitments of the clients

Benchmarks that are prefixed with create_ only perform the creation of the proofs to be measured on a resource-constrained client.
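
The two server-side operations can be sketched with a toy "lifted" ElGamal-style encoding g^m: combining client encodings multiplies them (so exponents add), and decryption must then solve a small discrete log. This is illustrative only: the real implementation works with vector ElGamal commitments over an elliptic curve, and the group, parameters, and function names below are assumptions.

```python
P = 1000003  # toy prime modulus (not the real group)
G = 5        # toy generator

def encode(m):
    """Encode a plaintext additively: m -> g^m (mod p)."""
    return pow(G, m, P)

def combine(encodings):
    """Combine client encodings homomorphically: the product of the
    g^{m_i} equals g^{sum(m_i)}, mirroring what addelgamal_bench
    measures (on vectors of curve points, not a single residue)."""
    acc = 1
    for e in encodings:
        acc = (acc * e) % P
    return acc

def small_dlog(target, max_exp):
    """Recover m with g^m == target by linear scan over a small
    message space, the step dlog_bench measures (real implementations
    use faster methods such as baby-step giant-step)."""
    acc = 1
    for m in range(max_exp + 1):
        if acc == target:
            return m
        acc = (acc * G) % P
    return None

clients = [3, 7, 11]
combined = combine(encode(m) for m in clients)
print(small_dlog(combined, 100))  # 21 == 3 + 7 + 11
```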

License

This project's code is distributed under the MIT License. See LICENSE for more information.

Contact

Project Links: https://github.com/pps-lab/rofl-project-code and https://pps-lab.com/research/ml-sec/