Tehran Institute for Advanced Studies (TeIAS) and University of Cambridge
Embeddings have undoubtedly been one of the most influential research areas in Natural Language Processing (NLP). Encoding information into a low-dimensional vector representation, which is easily integrable into modern machine learning algorithms, has played a central role in NLP progress. In this talk I will provide a high-level synthesis of the main embedding techniques in NLP, in the broad sense, including conventional word vector space models, word embeddings (e.g., Word2vec and GloVe), and graph embeddings (e.g., Graph Convolutional Networks). I will also give an overview of recent developments in contextualized embeddings (e.g., ELMo and BERT).
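To make the core idea concrete: an embedding maps each word to a dense vector, and semantic relatedness is then typically measured geometrically, most often via cosine similarity. The sketch below uses tiny made-up four-dimensional vectors purely for illustration; real Word2vec or GloVe embeddings are trained from corpora and have hundreds of dimensions.

```python
import math

def cosine_similarity(u, v):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy "embeddings" (hypothetical values; real models learn these from text).
embeddings = {
    "king":  [0.8, 0.6, 0.1, 0.2],
    "queen": [0.7, 0.7, 0.1, 0.3],
    "apple": [0.1, 0.0, 0.9, 0.8],
}

# Semantically related words should end up with a higher cosine similarity.
print(cosine_similarity(embeddings["king"], embeddings["queen"]))
print(cosine_similarity(embeddings["king"], embeddings["apple"]))
```

The same similarity computation applies unchanged to contextualized embeddings such as those produced by ELMo or BERT; what changes is that the vector for a word then depends on the sentence it appears in.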
Mohammad Taher Pilehvar is an Assistant Professor at the Tehran Institute for Advanced Studies (TeIAS) and an Affiliated Lecturer at the University of Cambridge. Taher's research is primarily in the field of Natural Language Processing (NLP), with a special focus on Lexical Semantics. Taher has co-instructed four tutorials on representation learning at major NLP venues (EMNLP 2015, ACL 2016, EACL 2017, and NAACL 2018) and co-organised three SemEval tasks and an EACL workshop on semantic representations. He has contributed to the field of Lexical Semantics with several papers, including two ACL best paper nominations (2013 and 2017) and a recent textbook on Embeddings in Natural Language Processing.