Commit a0badf10 authored by Yaoyao Liu
\[[PDF]()\] \[[Project Page]()\]
### The code has been uploaded. We will add detailed instructions for running it soon.
#### Summary
* [Introduction](#introduction)
* [Getting Started](#getting-started)
* [Running Experiments](#running-experiments)
* [Citation](#citation)
* [Acknowledgements](#acknowledgements)
### Introduction
Class-Incremental Learning (CIL) trains classifiers under a strict memory budget: in each incremental phase, learning is done on the new data, most of which is then abandoned to free space for the next phase. The preserved data are exemplars used for replay. However, existing methods use a static and ad hoc strategy for memory allocation, which is often sub-optimal. In this work, we propose a dynamic memory management strategy that is optimized for the incremental phases and different object classes. We call our method Reinforced Memory Management (RMM), as it leverages reinforcement learning. RMM training is not naturally compatible with CIL, as past and future data are strictly non-accessible during the incremental phases. We solve this by training the policy function of RMM on pseudo CIL tasks, e.g., tasks built on the data of the zeroth phase, and then applying it to target tasks. RMM propagates two levels of actions: Level-1 determines how to split the memory between old and new classes, and Level-2 allocates memory for each specific class. In essence, it is an optimizable and general method for memory management that can be used in any replay-based CIL method. For evaluation, we plug RMM into two top-performing baselines (LUCIR+AANets and POD+AANets) and conduct experiments on three benchmarks (CIFAR-100, ImageNet-Subset, and ImageNet-Full). Our results show clear improvements, e.g., boosting POD+AANets by 3.6%, 4.4%, and 1.9% in the 25-phase settings of the above benchmarks, respectively.
<p align="center">
<img src="" width="800"/>
</p>

> Figure: (a) Existing CIL methods allocate memory between old and new classes in an arbitrary and fixed way, causing data imbalance between old and new classes and exacerbating the catastrophic forgetting of old knowledge in the learned model. (b) Our proposed method, Reinforced Memory Management (RMM), learns optimal, class-specific memory sizes in different incremental phases. Note that we use orange, blue, and green dots to denote the samples observed in the (i-1)-th, i-th, and (i+1)-th phases, respectively.
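To make the two action levels concrete, here is a minimal plain-Python sketch of the allocation step, not the actual RMM policy. The function name `allocate_memory`, the hard-coded actions, and the toy class names are all illustrative assumptions; in RMM the Level-1 split and Level-2 per-class weights would come from the learned policy network.

```python
# Minimal sketch of two-level memory allocation (illustrative only).
# In RMM the actions come from a learned policy; here they are hard-coded.

def allocate_memory(total_budget, old_classes, new_classes,
                    level1_action, level2_actions):
    """Split the exemplar budget between old and new classes (Level-1),
    then distribute each share across individual classes (Level-2).

    level1_action: fraction of the budget reserved for old classes.
    level2_actions: per-class weights, one list per group, normalized
                    into class-specific exemplar quotas.
    """
    old_budget = int(total_budget * level1_action)   # Level-1 split
    new_budget = total_budget - old_budget

    def split(budget, classes, weights):
        # Level-2: turn weights into integer per-class quotas.
        total_w = sum(weights)
        quotas = {c: int(budget * w / total_w)
                  for c, w in zip(classes, weights)}
        # Assign any rounding leftover to the first class,
        # so the quotas always sum to the group's budget.
        if classes:
            quotas[classes[0]] += budget - sum(quotas.values())
        return quotas

    quotas = split(old_budget, old_classes, level2_actions["old"])
    quotas.update(split(new_budget, new_classes, level2_actions["new"]))
    return quotas

# Toy usage: 2000 exemplars, 60% kept for the 3 old classes.
quotas = allocate_memory(
    total_budget=2000,
    old_classes=["cat", "dog", "car"],
    new_classes=["ship", "plane"],
    level1_action=0.6,
    level2_actions={"old": [1, 1, 2], "new": [1, 1]},
)
print(quotas)                 # per-class exemplar counts
print(sum(quotas.values()))   # always equals the total budget: 2000
```

The point of the sketch is only that both decisions are expressible as a small action vector, which is what makes them optimizable by reinforcement learning.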
### Getting Started
To run this repository, we recommend installing Python 3.6 and PyTorch 1.2.0 with Anaconda.
You can download Anaconda and find the installation instructions on its official website.
Create a new environment and install PyTorch and torchvision in it:

```bash
conda create --name RMM-PyTorch python=3.6
conda activate RMM-PyTorch
conda install pytorch=1.2.0
conda install torchvision -c pytorch
```
Install the other requirements:

```bash
pip install tqdm scipy sklearn tensorboardX Pillow==6.2.1
```
### Running Experiments
### Citation
Please cite our paper if it is helpful to your work:

```bibtex
@inproceedings{liu2021rmm,
  author    = {Liu, Yaoyao and Schiele, Bernt and Sun, Qianru},
  title     = {RMM: Reinforced Memory Management for Class-Incremental Learning},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2021}
}
```
### Acknowledgements
Our implementation uses the source code from the following repositories:
* [Learning a Unified Classifier Incrementally via Rebalancing]()
* [iCaRL: Incremental Classifier and Representation Learning]()
* [PODNet: Pooled Outputs Distillation for Small-Tasks Incremental Learning]()