Date & Time: Tuesday May 7, 2013 - 10:45 a.m.
Location: Hackerman B-17
Title: Finding Nemo: The Power of the Probabilistic Method
How, as computer scientists, can we help Marlin find Nemo in a very short time? The answer may lie in the Probabilistic Method. Pioneered by Paul Erdős more than seven decades ago, the Probabilistic Method has supplied a number of widely used tools in combinatorics. Many of these tools, however, are non-constructive: while they show the existence of certain combinatorial structures, they do not tell us how to find them. One such powerful technique is László Lovász's famous Local Lemma (LLL). The LLL has a diverse range of applications, including breakthroughs in packet routing, a variety of theorems in graph coloring, and a host of other surprising settings where probability appears to play no role. While the original LLL was non-constructive, a series of works has sought to devise algorithmic versions of it. In the first half of my talk, I will describe our work in this direction, which covers many compelling applications that were out of reach of previous methods. Using our technique, we resolve a fascinating open question in the area of resource allocation and scheduling, while giving the first Monte Carlo approximation algorithms for a large variety of combinatorial problems.
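For readers unfamiliar with what an "algorithmic LLL" looks like, here is a minimal sketch of the well-known Moser-Tardos resampling scheme applied to k-SAT. This is an illustration of the general idea, not the specific algorithms of the talk, and the function name and literal encoding are my own:

```python
import random

def moser_tardos_sat(clauses, n_vars, max_rounds=100000, seed=0):
    """Moser-Tardos resampling for SAT: start from a uniformly random
    assignment and, while some clause is violated, resample all variables
    of one violated clause. Under the LLL condition (each clause shares
    variables with few others), this terminates quickly in expectation.

    A clause is a list of signed literals: +i means variable i must be
    True, -i means variable i must be False (variables are 1-indexed)."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars)]

    def violated(clause):
        # A clause is violated iff every one of its literals is false.
        return all(assign[abs(l) - 1] != (l > 0) for l in clause)

    for _ in range(max_rounds):
        bad = [c for c in clauses if violated(c)]
        if not bad:
            return assign  # all clauses satisfied
        # Resample every variable of one violated clause.
        for l in rng.choice(bad):
            assign[abs(l) - 1] = rng.random() < 0.5
    return None  # gave up (should not happen under the LLL condition)
```

The non-constructive LLL only proves a satisfying assignment exists when clauses overlap sparsely; the resampling loop above is the constructive counterpart that actually finds one.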
Probability enters many of the above problems as a surprising element; in other applications, however, probability is inherent in the data. Applications such as information retrieval, data integration and cleaning, text analytics, and social network analysis deal with voluminous amounts of uncertain data, and probability theory can play a critical role in designing scalable algorithms under such uncertainty. In the second part of my presentation, I will discuss the basic problem of ranking, and show how a probabilistic generating-function technique yields a unified approach to finding top-k results over probabilistic databases.
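To give the flavor of the generating-function idea in the simplest setting, consider tuples sorted by score, each present independently with some probability. The chance that a tuple ranks in the top-k depends on how many higher-scored tuples materialize, and a polynomial (generating function) tracks that count exactly. This is a minimal sketch under those independence assumptions, with a hypothetical function name; the talk's approach handles much richer models:

```python
def topk_probability(probs, k):
    """probs[i] = existence probability of the i-th tuple in score order
    (best first). Returns, for each tuple, P(it appears in the top-k).

    Maintains poly, where poly[l] = P(exactly l higher-ranked tuples are
    present) -- the coefficients of prod_j ((1 - p_j) + p_j * x) over the
    tuples ranked above the current one."""
    result = []
    poly = [1.0]  # before the first tuple, zero higher-ranked tuples exist
    for p in probs:
        # Tuple i is in the top-k iff it is present AND fewer than k
        # higher-ranked tuples are present.
        result.append(p * sum(poly[:k]))
        # Fold this tuple into the generating function for later tuples.
        new = [0.0] * (len(poly) + 1)
        for l, c in enumerate(poly):
            new[l] += c * (1 - p)      # tuple absent: count unchanged
            new[l + 1] += c * p        # tuple present: count grows by one
        poly = new
    return result
```

Each tuple is processed with one polynomial update, so all n top-k probabilities come out of a single O(n·k)-style pass rather than an exponential enumeration of possible worlds.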
Barna Saha is a Senior Member of Research at the AT&T Shannon Laboratories. She obtained her Ph.D. in Computer Science from the University of Maryland, College Park in 2011. Her research interests span theoretical computer science and database management systems, including the design and analysis of algorithms, probabilistic methods, graph theory, and big-data analysis. She is the co-winner of the best paper award at VLDB 2009, one of the premier conferences in databases, for her work on probabilistic ranking algorithms. She is also the recipient of the Dean's Fellowship Award, University of Maryland, 2010, for her dissertation research.
Host: Rao Kosaraju
Date & Time: Wednesday May 8, 2013 - 12:00 p.m.
Location: Hackerman B-17
Title: Probabilistic Modeling for Large-scale Data Exploration
We live in the era of "Big Data," where we are surrounded by a dauntingly vast amount of information. How can we help people quickly navigate the data and acquire useful knowledge from it? Probabilistic models provide a general framework for analyzing, predicting, and understanding the underlying patterns in large-scale, complex data.
Using a new recommender system as an example, I will show how we can develop principled approaches that advance two important directions in probabilistic modeling: exploratory analysis and scalable inference. First, I will describe a new model for document recommendation. This model not only gives better recommendation performance but also provides new exploratory tools that help users navigate the data: for example, a user can adjust her preferences and the system adaptively changes its recommendations. Second, building a recommender system like this requires learning the probabilistic model from large-scale empirical data. I will describe a scalable approach for learning a wide class of probabilistic models, a class that includes our recommendation model, from massive data.
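The core idea behind most scalable learning schemes is to replace a pass over the full dataset with cheap, noisy gradient estimates from small subsamples. As a generic toy illustration of that pattern (a minibatch stochastic-gradient sketch for a simple logistic model, not the inference algorithm of the talk; names and parameters are my own):

```python
import math
import random

def sgd_logistic(data, dim, epochs=20, batch=32, lr=0.5, seed=0):
    """Minibatch stochastic gradient ascent on the log-likelihood of a
    logistic model p(y=1|x) = sigmoid(w . x). Each update touches only
    `batch` examples -- the subsampling trick that lets probabilistic
    models be fit to data far too large for full-batch passes.

    data: list of (x, y) pairs, x a length-`dim` feature tuple, y in {0,1}."""
    rng = random.Random(seed)
    data = list(data)          # local copy so the caller's list is untouched
    w = [0.0] * dim
    for _ in range(epochs):
        rng.shuffle(data)
        for start in range(0, len(data), batch):
            mb = data[start:start + batch]
            grad = [0.0] * dim
            for x, y in mb:
                p = 1.0 / (1.0 + math.exp(-sum(wi * xi for wi, xi in zip(w, x))))
                for j in range(dim):
                    grad[j] += (y - p) * x[j]   # gradient of log-likelihood
            for j in range(dim):
                w[j] += lr * grad[j] / len(mb)  # average over the minibatch
    return w
```

The same template, with the gradient of a different objective, underlies stochastic approaches to fitting much richer probabilistic models.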
Chong Wang is a project scientist in the Machine Learning Department at Carnegie Mellon University. He received his Ph.D. from Princeton University in 2012, advised by David Blei. His research lies in probabilistic graphical models and their applications to real-world problems. He has won several awards, including a best student paper award at KDD 2011, a notable paper award at AISTATS 2011, and a best student paper honorable mention at NIPS 2009. He received the Google PhD Fellowship in machine learning and the Siebel Scholar Fellowship. His thesis was nominated by Princeton University for the ACM Doctoral Dissertation Award in 2012.
Host: Mark Dredze
Date & Time: Thursday May 9, 2013 - 10:45 a.m.
Location: Maryland 110
Title: Automatic Synthesis of Out-of-Core Algorithms
In this talk, I present a system for the automatic synthesis of efficient algorithms specialized for a particular memory hierarchy and a set of storage devices. The developer provides two independent inputs: 1) an algorithm that ignores memory-hierarchy and external-storage aspects; and 2) a description of the target memory hierarchy, including its topology and parameters. From those specifications, our system automatically synthesizes memory-hierarchy- and storage-device-aware algorithms for tasks such as joins and sorting. The framework is extensible and allows developers to quickly synthesize custom out-of-core algorithms as new storage technologies become available. This research draws on work from a number of areas, including systems (data-locality principles, cost-based optimization, secondary-storage algorithms), programming languages, and compilers (domain-specific languages, program synthesis).
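For a concrete sense of the kind of algorithm such a synthesizer would emit for sorting, here is a hand-written two-phase external merge sort, the classic out-of-core pattern specialized to a memory budget. This sketch is mine, not the system's output, and `memory_limit` is an illustrative stand-in for the memory-hierarchy parameters the system would read from its specification:

```python
import heapq
import os
import tempfile

def external_sort(items, memory_limit=4):
    """Two-phase external merge sort over integers.

    Phase 1: accumulate at most `memory_limit` items in memory, sort them,
    and spill each batch to disk as one sorted run file.
    Phase 2: stream a k-way merge over the run files, reading each run
    line by line so no run is ever fully resident in memory."""
    runs, buf = [], []

    def flush():
        if buf:
            f = tempfile.NamedTemporaryFile("w", delete=False, suffix=".run")
            f.write("\n".join(str(x) for x in sorted(buf)) + "\n")
            f.close()
            runs.append(f.name)
            buf.clear()

    for x in items:
        buf.append(x)
        if len(buf) >= memory_limit:
            flush()
    flush()  # spill any partial final run

    # Phase 2: heapq.merge consumes the run files lazily, one line at a time.
    files = [open(p) for p in runs]
    merged = [int(line) for line in heapq.merge(*files, key=int)]
    for fobj, p in zip(files, runs):
        fobj.close()
        os.remove(p)
    return merged
```

A synthesizer as described in the abstract would derive the run size, fan-in, and device placement of exactly this kind of algorithm from the declared memory-hierarchy topology, rather than having a developer hard-code them.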
This is joint work with Yannis Klonatos, Andres Noetzli, Andrej Spielmann, and Viktor Kuncak.
Christoph Koch is a professor of Computer Science at EPFL, specializing in data management. Until 2010, he was an Associate Professor in the Department of Computer Science at Cornell University. Before that, from 2005 to 2007, he was an Associate Professor of Computer Science at Saarland University. Earlier, he obtained his PhD in Artificial Intelligence from TU Vienna and CERN (2001), was a postdoctoral researcher at TU Vienna and the University of Edinburgh (2001-2003), and was an assistant professor at TU Vienna (2003-2005). He obtained his Habilitation degree in 2004. He has won Best Paper Awards at PODS 2002, ICALP 2005, and SIGMOD 2011, an Outrageous Ideas and Vision Paper Award at CIDR 2013, a Google Research Award (2009), and an ERC Grant (2011). He is a PI of the billion-euro EU FET Flagship Human Brain Project. He (co-)chaired the program committees of DBPL 2005, WebDB 2008, and ICDE 2011, and was PC vice-chair of ICDE 2008 and ICDE 2009. He has served on the editorial board of ACM Transactions on Internet Technology as well as on numerous program committees. He currently serves as PC co-chair of VLDB 2013 and Editor-in-Chief of PVLDB.
Host: Yanif Ahmad