[![Python](https://img.shields.io/badge/python-3.6-blue.svg?style=flat-square&logo=python&color=3776AB)](https://www.python.org/)
[![PyTorch](https://img.shields.io/badge/pytorch-1.2.0-%237732a8?style=flat-square&logo=PyTorch&color=EE4C2C)](https://pytorch.org/)
\[[PDF](https://openreview.net/pdf?id=BfPzZSype5M)\] \[[Project Page](https://class-il.mpi-inf.mpg.de/rmm/)\]
### The code has been uploaded. Detailed instructions will be added soon.
#### Summary
* [Introduction](#introduction)
* [Getting Started](#getting-started)
* [Running Experiments](#running-experiments)
* [Citation](#citation)
* [Acknowledgements](#acknowledgements)
### Introduction
Class-Incremental Learning (CIL) trains classifiers under a strict memory budget: in each incremental phase, learning is done on the new data, most of which is then abandoned to free space for the next phase. The preserved data are exemplars used for replaying. However, existing methods use a static and ad hoc strategy for memory allocation, which is often sub-optimal. In this work, we propose a dynamic memory management strategy that is optimized for the incremental phases and different object classes. We call our method reinforced memory management (RMM), as it leverages reinforcement learning. RMM training is not naturally compatible with CIL, as the past and future data are strictly non-accessible during the incremental phases. We solve this by training the policy function of RMM on pseudo CIL tasks, e.g., tasks built on the data of the zeroth phase, and then applying it to target tasks. RMM propagates two levels of actions: Level-1 determines how to split the memory between old and new classes, and Level-2 allocates memory for each specific class. In essence, it is an optimizable and general method for memory management that can be used in any replay-based CIL method. For evaluation, we plug RMM into two top-performing baselines (LUCIR+AANets and POD+AANets) and conduct experiments on three benchmarks (CIFAR-100, ImageNet-Subset, and ImageNet-Full). Our results show clear improvements, e.g., boosting POD+AANets by 3.6%, 4.4%, and 1.9% in the 25-phase settings of the above benchmarks, respectively.
<p align="center">
<img src="https://images.yyliu.net/rmm.png" width="800"/>
</p>
> Figure: (a) Existing CIL methods allocate memory between old and new classes in an arbitrary and frozen way, causing data imbalance between old and new classes and exacerbating the catastrophic forgetting of old knowledge in the learned model. (b) Our proposed method -- Reinforced Memory Management (RMM) -- learns optimal, class-specific memory sizes in different incremental phases. Please note that we use orange, blue, and green dots to denote the samples observed in the (i-1)-th, i-th, and (i+1)-th phases, respectively.
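To make the two action levels concrete, here is a minimal, hypothetical sketch (not the code in this repository) of how a Level-1 split ratio and Level-2 per-class weights could be turned into per-class exemplar budgets. The function name, arguments, and the proportional weighting scheme are all our own illustration:

```python
def allocate_memory(total_budget, num_old_classes, num_new_classes,
                    level1_action, level2_actions):
    """Hypothetical two-level memory allocation, in the spirit of RMM.

    level1_action:  fraction of the exemplar budget reserved for old classes.
    level2_actions: one weight per class (old classes first), used to
                    distribute each side's budget among its classes.
    """
    # Level-1: split the total budget between old and new classes.
    old_budget = int(total_budget * level1_action)
    new_budget = total_budget - old_budget

    old_weights = level2_actions[:num_old_classes]
    new_weights = level2_actions[num_old_classes:num_old_classes + num_new_classes]

    # Level-2: distribute each side's budget proportionally to the weights.
    def split(budget, weights):
        s = sum(weights) or 1.0
        return [int(budget * w / s) for w in weights]

    return split(old_budget, old_weights), split(new_budget, new_weights)


# Example: 2000 exemplar slots, 60% reserved for 3 old classes, uniform weights.
old, new = allocate_memory(2000, 3, 2, 0.6, [1, 1, 1, 1, 1])
print(old, new)  # [400, 400, 400] [400, 400]
```

In the actual method, both the split ratio and the per-class weights are produced by the learned policy rather than fixed by hand.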
### Getting Started
To run this repository, we advise you to install Python 3.6 and PyTorch 1.2.0 with Anaconda.
You may download Anaconda and read the installation instructions on the official website:
<https://www.anaconda.com/download/>
Create a new environment and install PyTorch and torchvision in it:
```bash
conda create --name RMM-PyTorch python=3.6
conda activate RMM-PyTorch
conda install pytorch=1.2.0 -c pytorch
conda install torchvision -c pytorch
```
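Optionally, you can verify the installation with a quick check (a standard PyTorch snippet, not part of this repository):

```python
import torch
import torchvision

print(torch.__version__)          # expect 1.2.0
print(torchvision.__version__)
print(torch.cuda.is_available())  # True if a CUDA-capable GPU is set up
```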
Install other requirements:
```bash
pip install tqdm scipy scikit-learn tensorboardX Pillow==6.2.1
```
### Running Experiments
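All experiments are launched via the main script: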
```bash
python run_exp.py
```
### Citation
Please cite our paper if it is helpful to your work:
```
@inproceedings{liu2021rmm,
    author    = {Liu, Yaoyao and Schiele, Bernt and Sun, Qianru},
    title     = {RMM: Reinforced Memory Management for Class-Incremental Learning},
    booktitle = {Advances in Neural Information Processing Systems},
    year      = {2021}
}
```
### Acknowledgements
Our implementation uses the source code from the following repositories:
* [Learning a Unified Classifier Incrementally via Rebalancing](https://github.com/hshustc/CVPR19_Incremental_Learning)
* [iCaRL: Incremental Classifier and Representation Learning](https://github.com/srebuffi/iCaRL)
* [PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning](https://github.com/arthurdouillard/incremental_learning.pytorch)