

Title: Scaling Up Probabilistic Inference
Abstract:
Graphical models and their logic-based extensions such as Markov logic have become the central paradigm for representation and reasoning in machine learning, artificial intelligence and computer science. They have led to many successful applications in domains such as bioinformatics, data mining, computer vision, social networks, entity resolution, natural language processing, and hardware and software verification. For them to be useful in these and other future applications, we need access to scalable probabilistic inference systems that can accurately model and analyze large amounts of data and make predictions about the future. Unfortunately, this is hard: exact inference is #P-hard and therefore computationally intractable in general. In practice, one has to resort to approximate algorithms and hope that they are both accurate and scalable.
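
To make the intractability concrete, here is a minimal sketch (mine, not from the talk; the toy model and names are illustrative) of exact inference by brute-force enumeration in a binary Markov network. Computing the partition function Z touches all 2^n joint assignments, so the cost doubles with every variable added:

    import itertools

    def partition_function(n_vars, factors):
        # Z = sum over all 2^n assignments of the product of factor values.
        # `factors` maps a tuple of variable indices (the factor's scope)
        # to a table: {joint assignment of the scope -> factor value}.
        z = 0.0
        for assignment in itertools.product((0, 1), repeat=n_vars):
            weight = 1.0
            for scope, table in factors.items():
                weight *= table[tuple(assignment[v] for v in scope)]
            z += weight
        return z

    # Toy 3-variable chain X0 - X1 - X2 with "agreement" pairwise factors.
    agree = {(0, 0): 2.0, (1, 1): 2.0, (0, 1): 1.0, (1, 0): 1.0}
    print(partition_function(3, {(0, 1): agree, (1, 2): agree}))
    # Only 8 terms here, but 2^100 for a 100-variable model.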
In this talk, I'll describe my research on scaling up approximate probabilistic inference algorithms to unprecedented levels. These algorithms are based on exploiting structural features present in the problem. I'll show that approximate inference schemes developed to date have either completely ignored or under-utilized structural features such as context-specific independence, determinism and logical dependencies that are prevalent in many real-world problems (a toy sketch of the idea appears below). I'll show how to change this by developing theoretically well-founded, structure-aware algorithms that are simple yet very effective in practice. In particular, I'll present results from the recently held 2010 UAI approximate inference challenge, in which my schemes won several categories, outperforming the competition by an order of magnitude on the hardest problems. I'll conclude by describing exciting opportunities that lie ahead in the area of graphical models and statistical relational learning.
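
As a toy illustration of the structure-aware theme (this sketch is mine, not the speaker's method; the model and helper names are hypothetical), the snippet below extends the brute-force enumerator with pruning on determinism: as soon as a partial assignment drives some fully-assigned factor to zero, the entire subtree of extensions is skipped. On models with many zero entries this can prune an exponential amount of work; with no zeros it degrades to plain enumeration.

    def partition_function_pruned(n_vars, factors):
        # Same quantity as brute-force Z, but exploits determinism:
        # zero factor entries cut off whole branches of the search tree.
        # Variables are assigned in index order X0, X1, ..., X(n-1).
        def recurse(assignment):
            depth = len(assignment)
            if depth == n_vars:
                weight = 1.0
                for scope, table in factors.items():
                    weight *= table[tuple(assignment[v] for v in scope)]
                return weight
            total = 0.0
            for value in (0, 1):
                assignment.append(value)
                # A factor becomes fully assigned once its highest-indexed
                # variable is set; if any such factor is zero, prune.
                dead = any(
                    table[tuple(assignment[v] for v in scope)] == 0.0
                    for scope, table in factors.items()
                    if max(scope) == depth
                )
                if not dead:
                    total += recurse(assignment)
                assignment.pop()
            return total
        return recurse([])

    # A hard logical constraint "X0 = X1" is just a factor with zeros.
    equal = {(0, 0): 1.0, (1, 1): 1.0, (0, 1): 0.0, (1, 0): 0.0}
    agree = {(0, 0): 2.0, (1, 1): 2.0, (0, 1): 1.0, (1, 0): 1.0}
    print(partition_function_pruned(3, {(0, 1): equal, (1, 2): agree}))
    # Assignments with X0 != X1 are never extended to X2.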