KU Leuven and Huawei Research Center
Over the last few years, deep convolutional neural networks (CNNs) have achieved impressive results on visual recognition tasks such as image classification, segmentation, and scene understanding. While these techniques deliver outstanding performance when full supervision is available, two major obstacles hinder their deployment in practical applications. First, they typically require very large amounts of annotated data, which are rarely available in a breadth of applications and extremely costly to obtain. Second, deep models generalize poorly to new datasets and tasks: when trained incrementally on new data, they are susceptible to a virtually complete loss of the knowledge acquired on previous tasks, a phenomenon known as catastrophic forgetting. Avoiding catastrophic forgetting is particularly important in applications where learning must be done from non-centralized datasets and where sharing and pooling data is extremely complicated due to practical, ethical, or legal concerns. As a result, the requirement for full annotations can become an impediment when scaling deep networks to new object categories, tasks, or target domains. In this tutorial, we will focus on key concepts and methods for few-shot and continual learning algorithms.
Reza Azad is currently a Ph.D. student under the supervision of Prof. Tinne Tuytelaars at Katholieke Universiteit Leuven (KU Leuven), Belgium, and a Research Fellow at the Huawei Research Center in Belgium. His doctoral research focuses on deep continual learning, and he continually seeks research ideas for modeling human cognition in machine learning algorithms. Prior to these positions, he was a visiting research fellow at ETS Montreal, Canada. He received his M.Sc. degree from Sharif University of Technology, Iran.