This codebase has developed into a new, well-maintained project that includes more SOTA methods. Please refer to PyCIL: A Python Toolbox for Class-Incremental Learning for more information.

Implementation of continual learning methods

This repository implements several continual / incremental / lifelong learning methods in PyTorch, with a particular focus on methods based on memory replay (a sketch of the exemplar-selection idea behind memory replay follows the method list).

  • iCaRL: Incremental Classifier and Representation Learning. [paper]
  • End2End: End-to-End Incremental Learning. [paper]
  • DR: Lifelong Learning via Progressive Distillation and Retrospection. [paper]
  • UCIR: Learning a Unified Classifier Incrementally via Rebalancing. [paper]
  • BiC: Large Scale Incremental Learning. [paper]
  • LwM: Learning without Memorizing. [paper]
  • PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning. [paper]

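For intuition, here is a minimal sketch of iCaRL-style herding, the exemplar-selection step at the heart of memory-replay methods. It illustrates the idea only; the function name and interface are hypothetical, not this repository's exact implementation:

```python
import numpy as np

def herding_selection(features, m):
    # Greedily pick m exemplars whose running mean best approximates the
    # class mean (iCaRL-style herding). `features` is an (n, d) array of
    # L2-normalized feature vectors for a single class.
    class_mean = features.mean(axis=0)
    selected, selected_sum = [], np.zeros_like(class_mean)
    for k in range(1, m + 1):
        # Mean of the exemplar set if each remaining sample were added next.
        candidate_means = (selected_sum + features) / k
        dists = np.linalg.norm(class_mean - candidate_means, axis=1)
        dists[selected] = np.inf  # avoid selecting the same sample twice
        idx = int(np.argmin(dists))
        selected.append(idx)
        selected_sum += features[idx]
    return selected
```
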
Dependencies

  1. torch 1.7.1
  2. torchvision 0.8.2
  3. tqdm
  4. numpy
  5. scipy

Usage

Run experiment

  1. Edit the config.json file for global settings (a hypothetical example is sketched after this list).
  2. Edit the hyperparameters in the corresponding .py file (e.g., models/icarl.py).
  3. Run:

```bash
python main.py
```
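
For orientation, a config.json might look like the following. These keys are illustrative assumptions, not the repository's actual schema, so check them against the file that ships with the code:

```json
{
  "dataset": "cifar100",
  "model": "icarl",
  "init_cls": 10,
  "increment": 10,
  "memory_size": 2000,
  "seed": 1993
}
```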

Add datasets

  1. Add corresponding classes to utils/data.py.
  2. Modify the _get_idata function in utils/data_manager.py so it returns the new class (a hypothetical sketch follows).
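
As an illustration only, a new dataset wrapper might look like the sketch below. The attribute names (train_trsf, download_data, etc.) are assumptions modeled on common class-incremental codebases, not a guaranteed match for this repository's interface:

```python
import numpy as np
from torchvision import datasets, transforms

class iMNIST:
    # Hypothetical wrapper following the style of the classes in
    # utils/data.py; attribute names are illustrative assumptions.
    use_path = False
    train_trsf = [transforms.RandomCrop(28, padding=4)]
    test_trsf = []
    common_trsf = [transforms.ToTensor()]
    class_order = list(range(10))

    def download_data(self):
        train = datasets.MNIST("./data", train=True, download=True)
        test = datasets.MNIST("./data", train=False, download=True)
        self.train_data = train.data.numpy()
        self.train_targets = np.array(train.targets)
        self.test_data = test.data.numpy()
        self.test_targets = np.array(test.targets)
```

_get_idata in utils/data_manager.py would then map a dataset name such as "mnist" to iMNIST().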

Results

iCaRL

CIFAR100

Average accuracies on CIFAR-100 (iCaRL):

| Increments | Paper reported | Reproduced |
| ---------- | -------------- | ---------- |
| 10 classes | 64.1           | 63.10      |
| 20 classes | 67.2           | 65.25      |
| 50 classes | 68.6           | 67.69      |

UCIR

CIFAR100

ImageNet-Subset

BiC

ImageNet-1000

| Classes              | 100  | 200  | 300  | 400  | 500  | 600  | 700  | 800  | 900  | 1000 |
| -------------------- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- | ---- |
| Paper reported (BiC) | 94.1 | 92.5 | 89.6 | 89.1 | 85.7 | 83.2 | 80.2 | 77.5 | 75.0 | 73.2 |
| Reproduced           | 94.3 | 91.6 | 89.6 | 87.5 | 85.6 | 84.3 | 82.2 | 79.4 | 76.7 | 74.1 |

PODNet

CIFAR100

NME results are shown below. The reproduced results do not match the reported ones; perhaps something was missed in this reproduction.

| Classifier     | Steps | Reported (%) | Reproduced (%) |
| -------------- | ----- | ------------ | -------------- |
| Cosine (k=1)   | 50    | 56.69        | 55.49          |
| LSC-CE (k=10)  | 50    | 59.86        | 55.69          |
| LSC-NCA (k=10) | 50    | 61.40        | 56.50          |
| LSC-CE (k=10)  | 25    | -----        | 59.16          |
| LSC-NCA (k=10) | 25    | 62.71        | 59.79          |
| LSC-CE (k=10)  | 10    | -----        | 62.59          |
| LSC-NCA (k=10) | 10    | 64.03        | 62.81          |
| LSC-CE (k=10)  | 5     | -----        | 64.16          |
| LSC-NCA (k=10) | 5     | 64.48        | 64.37          |

Change log

  • (2020.6.8) Store the data in a Python list instead of an np.array to avoid bugs when image sizes differ.
  • (2020.7.15) Avoid duplicate selection when constructing exemplars.
  • (2020.10.3) Fix a bug causing excessive memory usage.
  • (2020.10.8) Store the data in an np.array instead of a Python list for faster I/O.

Some problems

Q: Why can't I reproduce the results of the papers with this repository?

A: In my opinion, the results of these methods can be affected by the incremental class order. You can either generate more orders and average their results (a sketch is given below), or increase the number of training iterations by adjusting the hyperparameters.
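
A minimal sketch of the averaging idea, assuming a user-supplied run_experiment function (hypothetical) that trains on a given class order and returns the average incremental accuracy:

```python
import numpy as np

def average_over_orders(run_experiment, num_classes=100, seeds=(1993, 1994, 1995)):
    # Run the same incremental experiment under several random class orders
    # and report the mean and standard deviation of the resulting accuracies.
    accs = []
    for seed in seeds:
        order = np.random.RandomState(seed).permutation(num_classes).tolist()
        accs.append(run_experiment(order))
    return float(np.mean(accs)), float(np.std(accs))
```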

References

https://github.com/arthurdouillard/incremental_learning.pytorch
