Deep Probabilistic Graphical Modeling

Adji Bousso Dieng, Columbia University
Host: Ilya Shpitser

Deep learning (DL) is a powerful approach to modeling complex, large-scale data. However, DL models lack interpretable quantities and calibrated uncertainty. In contrast, probabilistic graphical modeling (PGM) provides a framework for formulating an interpretable generative process of data and a way to express uncertainty about what we do not know. How can we develop machine learning methods that bring together the expressivity of DL and the interpretability and calibration of PGM, building flexible models endowed with an interpretable latent structure that can be fit efficiently? I call this line of research deep probabilistic graphical modeling (DPGM). In this talk, I will discuss my work on developing DPGM on both the modeling and algorithmic fronts. In the first part of the talk, I will show how DPGM enables learning document representations that are highly predictive of sentiment without requiring supervision. In the second part, I will describe entropy-regularized adversarial learning, a scalable and generic algorithm for fitting DPGMs.
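
The sketch below is not the algorithm presented in the talk; it is only a minimal illustration, under simplifying assumptions, of what an entropy-regularized adversarial objective can look like: a small GAN whose generator defines a conditional Gaussian with a learned noise scale, so its conditional entropy is analytic and can be added as a bonus term to the usual generator loss. The toy data, network sizes, and the weight lam are illustrative choices, not values from the speaker's work (PyTorch).

import math

import torch
import torch.nn as nn

torch.manual_seed(0)


def real_data(n):
    # Toy target: a 2-D Gaussian mixture with two well-separated modes.
    centers = torch.tensor([[2.0, 2.0], [-2.0, -2.0]])
    idx = torch.randint(0, 2, (n,))
    return centers[idx] + 0.3 * torch.randn(n, 2)


class Generator(nn.Module):
    """Generator whose output is a conditional Gaussian N(mu(z), diag(sigma^2))."""

    def __init__(self, z_dim=4, x_dim=2):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(z_dim, 64), nn.ReLU(), nn.Linear(64, x_dim))
        self.log_sigma = nn.Parameter(torch.zeros(x_dim))  # learned per-dimension noise scale

    def forward(self, z):
        mu = self.net(z)
        # Reparameterized sample from N(mu, diag(sigma^2)).
        return mu + torch.exp(self.log_sigma) * torch.randn_like(mu)

    def cond_entropy(self):
        # Entropy of a diagonal Gaussian does not depend on its mean:
        # H = sum_d [ log(sigma_d) + 0.5 * log(2 * pi * e) ]
        d = self.log_sigma.numel()
        return self.log_sigma.sum() + 0.5 * d * math.log(2 * math.pi * math.e)


gen = Generator()
disc = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()
lam = 0.1  # entropy-regularization weight (illustrative value)

for step in range(2000):
    x_real = real_data(128)
    x_fake = gen(torch.randn(128, 4))

    # Discriminator update: standard real-vs-fake binary classification loss.
    d_loss = bce(disc(x_real), torch.ones(128, 1)) + bce(disc(x_fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # Generator update: non-saturating adversarial loss minus an entropy bonus,
    # which keeps the generator's noise scale from collapsing to zero.
    g_loss = bce(disc(gen(torch.randn(128, 4))), torch.ones(128, 1)) - lam * gen.cond_entropy()
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()

In this toy setup the entropy term rewards the generator for keeping some output noise, a simple way to see how an entropy regularizer interacts with the adversarial objective; the actual method discussed in the talk may differ substantially.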

Speaker Biography

Adji Bousso Dieng is a PhD candidate at Columbia University, where she is jointly advised by David Blei and John Paisley. Her research lies at the intersection of artificial intelligence and statistics, bridging probabilistic graphical models and deep learning. Dieng is supported by a Dean Fellowship from Columbia University. She has won a Microsoft Azure Research Award and a Google PhD Fellowship in Machine Learning, and was recognized as a rising star in machine learning by the University of Maryland. Prior to Columbia, Dieng worked as a Junior Professional Associate at the World Bank. She completed her undergraduate studies in France, attending Lycée Henri IV and Télécom ParisTech, part of France's Grandes Écoles system. She spent the third year of Télécom ParisTech's curriculum at Cornell University, where she earned a Master's degree in Statistics.