This code was used for the experiments and results of Batch Model Consolidation. The repository combines methods from FACIL and Mammoth, adapted to work with the AutoDS dataset, on which we evaluate the methods over a long sequence of tasks and in a distributed fashion.
- Install the AutoDS dataset.
- Clone and install this repository:

```bash
git clone https://github.com/fostiropoulos/stream_benchmark.git
cd stream_benchmark
pip install .
```
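As a quick sanity check after installation (a minimal sketch; it only assumes the `stream_benchmark` module used by the entry points below), you can verify the import from Python:

```python
# Verify the installation by importing the package behind the
# `python -m stream_benchmark` entry point used below.
import stream_benchmark

print("stream_benchmark imported successfully")
```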
- Download the AutoDS feature vectors. We use 71 datasets with features extracted from pre-trained models, all supported by the AutoDS dataset; see the detailed table.
Hyper-parameters are stored in `hparams/defaults.json`, with the values reported in the corresponding papers. Modify the file to set the `n_epochs` you want to train for and the `batch_size` you want to use.
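For illustration, here is a minimal sketch of overriding these two fields programmatically; it assumes `n_epochs` and `batch_size` are top-level keys in `defaults.json` (the actual file may organize them differently), and the values shown are arbitrary:

```python
import json
from pathlib import Path

path = Path("hparams/defaults.json")
hparams = json.loads(path.read_text())

# Illustrative values; assumes these keys sit at the top level of the file.
hparams["n_epochs"] = 10
hparams["batch_size"] = 128

path.write_text(json.dumps(hparams, indent=2))
```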
To run the benchmark:

```bash
python -m stream_benchmark --save_path {save_path} --dataset_path {dataset_path} --model_name {model_name} --hparams hparams/defaults.json
```

In this code, we run the baselines on the stream of tasks with CLIP embeddings. For the supported `model_name` values, see the table below.
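As a usage example, several baselines can be launched sequentially from Python. This is only a sketch: the paths are illustrative placeholders, and the model names are taken from the table below.

```python
import subprocess

# Illustrative sketch: run a few baselines one after another.
# `./results` and `/data/autods` are placeholder paths; replace them with your own.
for model_name in ["sgd", "er", "derpp", "bmc"]:
    subprocess.run(
        [
            "python", "-m", "stream_benchmark",
            "--save_path", f"./results/{model_name}",
            "--dataset_path", "/data/autods",
            "--model_name", model_name,
            "--hparams", "hparams/defaults.json",
        ],
        check=True,
    )
```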
To run the benchmark in a distributed fashion we use Ray. Read more on Ray.

- `ray stop`
- `ray start --head`
- `python -m stream_benchmark.distributed --dataset_path {dataset_path} --num_gpus {num_gpus}`
NOTE: `{num_gpus}` is the fractional number of GPUs to use. Set this so that {GPU usage per experiment} * {num_gpus} < 1.
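As background on how fractional GPUs behave in Ray (a general Ray sketch under assumed values, not the repository's own scheduling code): a task that requests a fraction of a GPU can be co-scheduled with other tasks on the same device, so each experiment must fit within its share of GPU memory.

```python
import ray

ray.init(address="auto")  # connect to the cluster started with `ray start --head`

# Requesting 0.25 GPUs lets Ray pack up to four such tasks onto one GPU.
@ray.remote(num_gpus=0.25)
def run_experiment(model_name):
    # Placeholder body standing in for a single benchmark run.
    return model_name

futures = [run_experiment.remote(m) for m in ["er", "derpp"]]
print(ray.get(futures))
```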
The code in `test_benchmark.py` is a good starting point, as a simple example (ignoring the `mock.patch`-ing), for understanding how the benchmark can be extended.
@inproceedings{fostiropoulos2023batch,
title={Batch Model Consolidation: A Multi-Task Model Consolidation Framework},
author={Fostiropoulos, Iordanis and Zhu, Jiaye and Itti, Laurent},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={3664--3676},
year={2023}
}
Supported methods (`model_name` values):

| Description | model_name | File |
|---|---|---|
| Batch Model Consolidation. | bmc | bmc.py |
| Continual learning via Gradient Episodic Memory. | gem | gem.py |
| Continual learning via online EWC. | ewc_on | ewc_on.py |
| Continual learning via MAS. | mas | mas.py |
| Continual learning via Experience Replay. | er | er.py |
| Continual learning via Deep Model Consolidation. | dmc | dmc.py |
| Continual learning via A-GEM, leveraging a reservoir buffer. | agem_r | agem_r.py |
| Continual learning through Synaptic Intelligence. | si | si.py |
| Continual learning via Function Distance Regularization. | fdr | fdr.py |
| Gradient-based sample selection for online continual learning. | gss | gss.py |
| Continual learning via Dark Experience Replay++. | derpp | derpp.py |
| Continual learning via A-GEM. | agem | agem.py |
| Stochastic gradient descent baseline without continual learning. | sgd | sgd.py |
| Continual learning via Learning without Forgetting. | lwf | lwf.py |
| Continual learning via iCaRL. | icarl | icarl.py |
| Continual learning via Dark Experience Replay. | der | der.py |
| Continual learning via GDumb. | gdumb | gdumb.py |
| Continual learning via Experience Replay with asymmetric cross-entropy (ER-ACE). | er_ace | er_ace.py |
| Continual learning via Hindsight Anchor Learning. | hal | hal.py |
| Joint training: a strong, simple baseline. | joint_gcl | joint_gcl.py |