Representation Learning for Words and Entities

Pushpendre Rastogi, Johns Hopkins University

This thesis presents new methods for unsupervised learning of distributed representations of words and entities from text and knowledge bases. The first algorithm presented in the thesis is a multi-view algorithm for learning word representations called Multiview LSA (MVLSA). Through experiments on close to 50 different views, I show that MVLSA outperforms other state-of-the-art word embedding models. I then turn to learning entity representations for search and recommendation and present the second algorithm of this thesis, Neural Variational Set Expansion (NVSE). NVSE is also an unsupervised learning method, but it is based on the Variational Autoencoder framework. Evaluations with human annotators show that NVSE facilitates better search and recommendation over information gathered from noisy, automatic annotation of unstructured natural language corpora. Finally, I move from unstructured data to structured knowledge graphs and present novel approaches for learning embeddings of vertices and edges in a knowledge graph that obey logical constraints.

Speaker Biography

Pushpendre Rastogi graduated in 2011 from IIT Delhi with a bachelor's degree in Electrical Engineering and a master's degree in Information and Communication Technology. His master's thesis was on the stationarity condition for fractional sampling filters. During 2011-12, he worked at Goldman Sachs as an Operations Strategist (Developer), where he implemented a fat-finger alert system to reduce operational risk to GS due to human error. From 2012 to 2013 he worked at Aspiring Minds Pvt. Ltd. as an applied researcher on the problem of automated English essay grading. In 2013 he entered the Ph.D. program at JHU. He interned at Samsung in 2017 and at Amazon in 2018. In 2017 he won the George M.L. Sommerman Engineering Graduate Teaching Assistant Award.