Slides:
Speaker: Polina Kirichenko, New York University

Learning new tasks continually, without forgetting, on a constantly changing data distribution is essential for real-world problems but remains challenging for modern deep learning. Deep learning models suffer from catastrophic forgetting: when presented with a sequence of tasks, deep neural networks can successfully learn the new tasks, but their performance on the old tasks degrades. In this talk, I will present an overview of continual learning algorithms, from well-established methods to recent state-of-the-art approaches. We will cover several continual learning scenarios (task-, class-, and domain-incremental learning), review the most common approaches to alleviating forgetting, and discuss other challenges in the field beyond catastrophic forgetting (including forward & backward transfer, learning on continuously drifting data, and continua
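To make the forgetting phenomenon mentioned in the abstract concrete, below is a minimal, self-contained sketch (my own illustration, not material from the talk): a small MLP is trained on two synthetic tasks in sequence, its accuracy on the first task collapses, and a simple experience-replay buffer, one of the common mitigation approaches the talk surveys, largely restores it. The synthetic Gaussian tasks, network sizes, and hyperparameters are all assumptions chosen only for illustration.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

def make_task(mean):
    """Binary classification: two Gaussian blobs centred at +mean and -mean."""
    x0 = torch.randn(500, 2) + torch.tensor(mean)
    x1 = torch.randn(500, 2) - torch.tensor(mean)
    x = torch.cat([x0, x1])
    y = torch.cat([torch.zeros(500, dtype=torch.long),
                   torch.ones(500, dtype=torch.long)])
    return x, y

def accuracy(model, x, y):
    with torch.no_grad():
        return (model(x).argmax(dim=1) == y).float().mean().item()

def train(model, x, y, epochs=200, replay=None):
    opt = torch.optim.Adam(model.parameters(), lr=1e-2)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        if replay is not None:              # mix in a small buffer of old-task data
            rx, ry = replay
            loss = loss + loss_fn(model(rx), ry)
        loss.backward()
        opt.step()

def new_model():
    return nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 2))

# Task A and Task B place the class clusters along different axes.
xa, ya = make_task([3.0, 0.0])
xb, yb = make_task([0.0, 3.0])

# Sequential training without any access to Task A data: catastrophic forgetting.
model = new_model()
train(model, xa, ya)
print("Task A acc after learning A:", accuracy(model, xa, ya))
train(model, xb, yb)
print("Task A acc after learning B (forgetting):", accuracy(model, xa, ya))

# Experience replay: keep a small buffer of Task A examples while learning Task B.
model = new_model()
train(model, xa, ya)
buffer = (xa[:50], ya[:50])
train(model, xb, yb, replay=buffer)
print("Task A acc with replay:", accuracy(model, xa, ya))
```

Replay is only one of the families of methods the talk reviews; regularization-based approaches (e.g. penalizing changes to parameters important for old tasks) and architecture-based approaches follow a similar sequential-training setup.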