University of Toronto
Data-driven methods in robotics circumvent hand-tuned feature engineering, but they lack guarantees and often incur massive computational expense. My research aims to bridge this gap and enable generalizable imitation for robot autonomy. We need to build systems that capture semantic task structure, promote sample efficiency, and generalize to new task instances across visual, dynamical, or semantic variations. This involves designing algorithms that unify learning with perception, control, and planning. In this talk, I will discuss how inductive biases and priors help with generalizable autonomy. First, I will discuss the choice of action representations in reinforcement learning and imitation from ensembles of suboptimal supervisors. Then I will talk about latent variable models in self-supervised learning. Finally, I will discuss meta-learning for multi-task learning and data gathering in robotics.
He is an Assistant Professor of Computer Science at the University of Toronto and a Faculty Member at the Vector Institute, where he directs the UofT People, AI and Robotics (PAIR) group. He is also a Senior Research Scientist at Nvidia. He earned his M.S. in Computer Science and Ph.D. in Operations Research from UC Berkeley, where he worked with Ken Goldberg at Berkeley AI Research (BAIR). He was later a postdoc at the Stanford AI Lab with Fei-Fei Li and Silvio Savarese. His current research focuses on machine learning algorithms for perception and control in robotics.