Seminar

We typically hold seminars on Wednesdays at noon in Malone 228. All seminar announcements are sent to the theory mailing list.

[Theory Seminar] Sami Davies
Jan 16 @ 12:00 pm – 1:00 pm

Speaker: Sami Davies
Affiliation: University of Washington

Title: A Tale of Santa Claus, Hypergraphs, and Matroids
Abstract:
A well-known problem in scheduling and approximation algorithms is the Santa Claus problem. Suppose that Santa Claus has a set of gifts and wants to distribute them among a set of children so that the least happy child is made as happy as possible. Here, the value that a child i has for a present j is of the restricted form p_{ij} \in \{0, p_j\}. A polynomial-time algorithm by Annamalai et al. achieves a 12.33-approximation and is based on a modification of Haxell’s hypergraph matching argument.

In this paper, we introduce a matroid version of the Santa Claus problem. Our algorithm is also based on Haxell’s augmentation tree, but with the introduction of the matroid structure we solve a more general problem with cleaner methods. Our result can then be used as a black box to obtain a (4 + \varepsilon)-approximation for Santa Claus. This factor also compares against a natural, compact LP for Santa Claus.
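The restricted max-min objective described above can be made concrete with a tiny brute-force sketch (hypothetical helper names, exponential running time; purely illustrative and unrelated to the approximation algorithms of the talk):

```python
from itertools import product

def santa_claus_opt(gift_values, eligible, n_children):
    """Brute-force optimum of the restricted Santa Claus objective.

    gift_values[j] = p_j, and eligible[j] is the set of children i with
    p_ij = p_j (every other child values gift j at 0). Tries every
    assignment of gifts to eligible children and returns the best
    achievable happiness of the least happy child.
    """
    best = 0
    for assignment in product(*(sorted(e) for e in eligible)):
        totals = [0] * n_children
        for j, child in enumerate(assignment):
            totals[child] += gift_values[j]
        best = max(best, min(totals))
    return best

# Two children, three gifts of value 4, 3, 3, all universally liked:
# the best split gives one child the 4 and the other both 3s.
print(santa_claus_opt([4, 3, 3], [{0, 1}, {0, 1}, {0, 1}], 2))  # -> 4
```

Any polynomial-time algorithm, such as the ones in the talk, can only approximate this optimum; the sketch just pins down what is being approximated.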

[Theory Seminar] Jalaj Upadhyay
Feb 6 @ 12:00 pm – 1:00 pm

Speaker: Jalaj Upadhyay
Affiliation: Johns Hopkins University

Title: Towards Robust and Scalable Private Data Analysis

Abstract:
In the current age of big data, we constantly create new data that is analyzed by various platforms to improve services and the user experience. Given the sensitive and confidential nature of these data, there are obvious security and privacy concerns in storing and analyzing them. In this talk, I will discuss the fundamental challenges in providing robust security and privacy guarantees when storing and analyzing large datasets. I will also give a brief overview of my contributions and future plans toward addressing these challenges.

To give a glimpse of these challenges in providing a robust privacy guarantee known as differential privacy, I will use spectral sparsification of graphs as an example. Given the ubiquitous nature of graphs, differentially private analysis on graphs has gained a lot of interest. However, existing algorithms for these analyses are tailor-made for the task at hand, making them infeasible in practice. In this talk, I will present a novel differentially private algorithm that outputs a spectral sparsification of the input graph. At the core of this algorithm is a method to privately estimate the importance of an edge in the graph. Prior to this work, there was no known privacy-preserving method that provides such an estimate or a spectral sparsification of graphs.

Since many graph properties are defined by the spectrum of the graph, this work has many analytical as well as learning-theoretic applications. To demonstrate some of these applications, I will show more efficient and accurate analyses of various combinatorial problems on graphs and the first technique for privacy-preserving manifold learning on graphs.

[Theory Seminar] Martin Farach-Colton
Feb 13 @ 12:00 pm – 1:00 pm

Speaker: Martin Farach-Colton
Affiliation: Rutgers University

Title: TBA

Abstract: TBA

[Theory Seminar] Xue Chen
Feb 27 @ 12:00 pm – 1:00 pm

Speaker: Xue Chen
Affiliation: Northwestern University

Title: TBA

Abstract: TBA

[Theory Seminar] Grigory Yaroslavtsev
Mar 6 @ 12:00 pm – 1:00 pm

Speaker: Grigory Yaroslavtsev
Affiliation: Indiana University, Bloomington

Title: Advances in Hierarchical Clustering for Vector Data
Abstract:
Compared to the highly successful flat clustering methods (e.g., k-means), hierarchical clustering, despite its important role and applications in data analysis, long lacked rigorous algorithmic study due to the absence of formal objectives. Since 2016, a sequence of works has given novel algorithms for this problem in the general metric setting. This was enabled by a breakthrough of Dasgupta, who introduced a formal objective into the study of hierarchical clustering.

In this talk I will give an overview of our recent progress on models and scalable algorithms for hierarchical clustering applicable specifically to high-dimensional vector data. I will first discuss various linkage-based algorithms (single-linkage, average-linkage) and their formal properties with respect to various objectives. I will then introduce a new projection-based approximation algorithm for vector data. The talk will be self-contained and doesn’t assume prior knowledge of clustering methods.
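As a toy illustration of the linkage-based algorithms mentioned above, here is a minimal single-linkage sketch on one-dimensional points (hypothetical helper name, quadratic-per-merge brute force; not the projection-based algorithm from the talk):

```python
def single_linkage(points, k):
    """Agglomerative single-linkage clustering down to k clusters.

    Starts from singleton clusters and repeatedly merges the two
    clusters whose closest pair of points is nearest; under
    single linkage, inter-cluster distance is this minimum pairwise
    distance (average-linkage would use the mean instead).
    """
    clusters = [[p] for p in points]
    while len(clusters) > k:
        best = (float("inf"), 0, 1)
        for a in range(len(clusters)):
            for b in range(a + 1, len(clusters)):
                d = min(abs(x - y) for x in clusters[a] for y in clusters[b])
                if d < best[0]:
                    best = (d, a, b)
        _, a, b = best
        clusters[a].extend(clusters.pop(b))
    return [sorted(c) for c in clusters]

# Two well-separated groups on the line are recovered exactly.
print(single_linkage([0, 1, 10, 11, 12], 2))  # -> [[0, 1], [10, 11, 12]]
```

Running the full merge sequence down to a single cluster yields the hierarchy (dendrogram) that the objectives discussed in the talk evaluate.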

Based on joint works with Vadapalli (ICML’18) and with Charikar, Chatziafratis, and Niazadeh (AISTATS’19).