Seminar

We typically have seminars on Wednesdays at noon in Malone 228.  All seminar announcements will be sent to the theory mailing list.

Sep 3 (Wed) @ 12:00 pm – 1:00 pm: Welcome and Introductions
Sep 10 (Wed) @ 12:00 pm – 1:00 pm: Rico Zenklusen

Speaker: Rico Zenklusen
Affiliation: ETH Zurich and Johns Hopkins University

Title: Multi-Budgeted Matchings via the Ham Sandwich Theorem

Abstract:
In many applications, one has to deal with multiple, partially conflicting constraints. In this talk, we consider a multi-objective variant of the maximum weight matching problem, a classical combinatorial optimization problem with numerous applications. A natural way to deal with several objectives is to turn all of the objectives but one into budget constraints. This leads to the multi-budgeted matching problem, which asks for a maximum weight matching subject to k linear constraints with nonnegative coefficients. Whereas this problem can easily be shown to be NP-hard even for k=1, I will present a polynomial-time approximation scheme that works for any constant k. Our algorithm is based on rounding an optimal solution x* of an LP relaxation. Starting with a convex decomposition of x* into few matchings, we reduce the problem of rounding x* to the arguably simpler problem of successively merging two matchings in the convex decomposition of x*.
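
For concreteness, one natural LP relaxation for the multi-budgeted matching problem can be written as follows (a standard formulation in my own notation; the relaxation used in the talk may additionally include the odd-set constraints describing the matching polytope):

\[
\begin{aligned}
\max\quad & w^{\top}x \\
\text{s.t.}\quad & \sum_{e \in \delta(v)} x_e \le 1 && \forall v \in V \quad \text{(fractional matching)}, \\
& b_i^{\top}x \le B_i && i = 1, \dots, k \quad \text{(budget constraints)}, \\
& 0 \le x_e \le 1 && \forall e \in E.
\end{aligned}
\]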

To prove that our algorithm is correct, we leverage two beautiful non-constructive mathematical theorems. More precisely, the Jordan Curve Theorem gives a concise and intuitive proof of why our algorithm works for k=1, and a result of Stromquist and Woodall that follows from the Ham Sandwich Theorem allows us to show correctness for any constant k.

Part of this work is joint with Fabrizio Grandoni, R. Ravi and Mohit Singh.

Sep 17 (Wed) @ 12:00 pm – 1:00 pm: Rong Ge

Speaker: Rong Ge
Affiliation: Microsoft Research New England

Title: New Algorithms for Learning Incoherent and Overcomplete Dictionaries

Abstract:

In sparse recovery we are given a matrix A (“the dictionary”) and a vector of the form AX where X is sparse, and the goal is to recover X. This is a central notion in signal processing, statistics, and machine learning. But in applications such as sparse coding, the dictionary A is unknown and has to be learned from random examples of the form Y = AX, where X is drawn from an appropriate distribution; this is the dictionary learning problem. In most settings A is overcomplete: it has more columns than rows. This talk presents a polynomial-time algorithm for learning overcomplete dictionaries. Our algorithm applies to incoherent dictionaries, which have been a central object of study since they were introduced in seminal work of Donoho and Huo.
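
To make the setup concrete, here is a small Python sketch (my own illustration of the problem instance, not the authors' algorithm) that draws samples Y = AX from a random overcomplete dictionary with sparse coefficient vectors and measures the dictionary's coherence:

    import numpy as np

    rng = np.random.default_rng(0)
    n, m, k, samples = 64, 128, 5, 1000   # m > n: the dictionary is overcomplete

    # Random dictionary with unit-norm columns.
    A = rng.standard_normal((n, m))
    A /= np.linalg.norm(A, axis=0)

    # Coherence: largest |<A_i, A_j>| over distinct columns; "incoherent"
    # dictionaries have coherence on the order of 1/sqrt(n).
    G = np.abs(A.T @ A)
    np.fill_diagonal(G, 0.0)
    mu = G.max()

    # Sparse coefficients: k random-sign nonzeros per sample.
    X = np.zeros((m, samples))
    for j in range(samples):
        support = rng.choice(m, size=k, replace=False)
        X[support, j] = rng.choice([-1.0, 1.0], size=k)

    Y = A @ X   # dictionary learning: recover A (up to column permutation
                # and sign) given only Y
    print(f"coherence mu = {mu:.3f} vs 1/sqrt(n) = {1/np.sqrt(n):.3f}")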

Based on joint work with Sanjeev Arora, Tengyu Ma and Ankur Moitra.

Bio:

Rong Ge is currently a post-doc at Microsoft Research New England. He received his Ph.D. from Princeton University, advised by Prof. Sanjeev Arora. His main research interest is in applying algorithm design techniques from theoretical computer science to machine learning problems, with the goal of obtaining provable algorithms and a better understanding of machine learning models.

Sep 24 (Wed) @ 12:00 pm – 1:00 pm: Aaron Roth

Speaker: Aaron Roth
Affiliation: UPenn

Title: Correctness Protection via Differential Privacy

Abstract:
False discovery is a growing problem in scientific research. Despite sophisticated statistical techniques for controlling the false discovery rate and related statistics designed to protect against spurious discoveries, there is significant evidence that many published scientific papers contain incorrect conclusions.

In this talk we consider the role that adaptivity plays in this problem. A fundamental disconnect between the theorems that control the false discovery rate and the practice of science is that the theorems assume a fixed collection of hypotheses to be tested, selected non-adaptively before the data is gathered, whereas science is by definition an adaptive process, in which data is shared and re-used, and hypotheses are generated after seeing the results of previous tests.

We note that false discovery cannot be prevented when a substantial number of adaptive queries are made to the data and the data is used naively, i.e., when queries are answered exactly with their empirical estimates on a given finite data set. However, we show that, remarkably, there is a different way to evaluate statistical queries on a data set that allows even an adaptive analyst to make exponentially many queries to the data set, while guaranteeing that with high probability, all of the conclusions he draws generalize to the underlying distribution. This technique counter-intuitively involves actively perturbing the answers given to the data analyst, using techniques developed for privacy preservation; but in our application, the perturbations are added entirely to increase the utility of the data.
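
To give a flavor of the perturbation involved, the Python sketch below answers a statistical query with Laplace noise, the basic noise-addition mechanism from differential privacy; this is a minimal illustration of the idea, not the full mechanism analyzed in the paper:

    import numpy as np

    rng = np.random.default_rng(0)

    def noisy_statistical_query(data, predicate, epsilon):
        """Estimate E[predicate(x)] on a dataset of size n, perturbed with
        Laplace noise calibrated to the query's sensitivity 1/n."""
        n = len(data)
        empirical = np.mean([predicate(x) for x in data])
        return empirical + rng.laplace(loc=0.0, scale=1.0 / (epsilon * n))

    # Example: the fraction of samples exceeding 0.5, answered noisily.
    data = rng.random(10_000)
    print(noisy_statistical_query(data, lambda x: x > 0.5, epsilon=0.1))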

Joint work with Cynthia Dwork, Vitaly Feldman, Moritz Hardt, Toniann Pitassi, and Omer Reingold.

Oct 8 (Wed) @ 12:00 pm – 1:00 pm: Guy Kortsarz

Speaker: Guy Kortsarz
Affiliation: Rutgers University-Camden

Title: What have we learned about cut expansion and density problems

Abstract: I will survey several problems related to the above subjects: directed and undirected multicut (for directed multicut I will present the approximation algorithm of Gupta); conductance and sparsest cut; overlapping and non-overlapping clustering; the small set expansion conjecture and its equivalence to breaking the ratio of 2 for the minimum partial vertex cover problem; the densest subgraph problem; and the dense k-subgraph problem.
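
For reference, the standard definitions of the two cut measures mentioned above, in my notation: for a cut $(S, \bar S)$ in a graph $G=(V,E)$, let $\delta(S)$ be the set of crossing edges and $\mathrm{vol}(S)$ the total degree of the vertices in $S$. Then

\[
\phi(S) = \frac{|\delta(S)|}{\min\{\mathrm{vol}(S), \mathrm{vol}(\bar S)\}} \quad \text{(conductance)}, \qquad
\sigma(S) = \frac{|\delta(S)|}{\min\{|S|, |\bar S|\}} \quad \text{(sparsity)},
\]

and the sparsest cut problem asks to minimize $\sigma(S)$ over all nonempty $S \subsetneq V$.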

Oct 22 (Wed) @ 12:00 pm – 1:00 pm: Robert Krauthgamer

Speaker: Robert Krauthgamer
Affiliation: Weizmann Institute of Science

Title: The Sketching Complexity of Graph Cuts

Abstract:
We study the problem of sketching an input graph $G$ on $n$ vertices, so that given the sketch, one can estimate the weight (capacity) of any cut in the graph within a small approximation factor $1+\epsilon$. The classic cut-sparsifier construction of Benczur and Karger (STOC 1996) implies a sketch of size $\tilde O(n/\epsilon^2)$ [this notation hides logarithmic factors].

We show that the dependence on $\epsilon$ can be brought down to only linear, at the price of achieving a slightly relaxed guarantee. Specifically, we design a randomized scheme that produces from $G$ a sketch of size $\tilde O(n/\epsilon)$ bits, from which the weight of any cut $(S,\bar S)$ can be reported, with high probability, within factor $1+\epsilon$. We also demonstrate some applications where this “for each” guarantee is indeed useful.

We further prove that our relaxation is necessary. Specifically, a sketch that can $(1+\epsilon)$-approximate the weight of all cuts in the graph simultaneously (i.e., a “for all” guarantee) must be of size $\Omega(n/\epsilon^2)$ bits.
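
In symbols, the gap between the two guarantees can be stated as follows (writing $\mathrm{cut}_G(S)$ for the weight of the cut $(S,\bar S)$ and $f(S)$ for the estimate recovered from the sketch; the notation is mine):

\[
\text{“for each”:} \quad \forall S \subseteq V: \ \Pr\big[f(S) \in (1 \pm \epsilon)\,\mathrm{cut}_G(S)\big] \ge 1 - \delta,
\]
\[
\text{“for all”:} \quad \Pr\big[\forall S \subseteq V: \ f(S) \in (1 \pm \epsilon)\,\mathrm{cut}_G(S)\big] \ge 1 - \delta.
\]

The $\tilde O(n/\epsilon)$-bit sketch achieves the first guarantee, while the $\Omega(n/\epsilon^2)$ lower bound shows that the second cannot be achieved with fewer bits.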

Joint work with Alexandr Andoni and David Woodruff.