Seminar

We typically have seminars on Wednesdays at noon in Malone 228.  All seminar announcements will be sent to the theory mailing list.

[Theory Seminar] Enayat Ullah
Wed, Feb 17 @ 12:00 pm – 1:00 pm

Speaker: Enayat Ullah
Affiliation: Johns Hopkins University

Title: Machine unlearning via algorithmic stability

Abstract: We study the problem of machine unlearning and identify a notion of algorithmic stability, Total Variation (TV) stability, which, we argue, is suitable for the goal of exact, efficient unlearning. For convex risk minimization problems, we design TV-stable algorithms based on noisy Stochastic Gradient Descent (SGD). Our key contribution is the design of corresponding efficient unlearning algorithms, which are based on constructing a (maximal) coupling of Markov chains for the noisy SGD procedure. To understand the trade-offs between accuracy and unlearning efficiency, we give upper and lower bounds on the excess empirical and population risk of TV-stable algorithms for convex risk minimization. Our techniques generalize to arbitrary non-convex functions, and our algorithms are differentially private as well.
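
For readers unfamiliar with the primitive involved, the following is a minimal sketch of noisy projected SGD for convex empirical risk minimization, the building block the abstract refers to; the unlearning machinery itself (maximal couplings of the resulting Markov chains) is not reproduced here, and all names and parameters are illustrative.

```python
import numpy as np

def noisy_projected_sgd(data, grad, dim, steps, eta=0.1, sigma=1.0, radius=1.0, seed=0):
    """Noisy projected SGD for convex empirical risk minimization (a sketch).

    grad(w, x) should return the gradient of the per-example loss at w.
    Each step takes a noisy gradient step on one random example and
    projects back onto the Euclidean ball of the given radius.
    """
    rng = np.random.default_rng(seed)
    w = np.zeros(dim)
    for _ in range(steps):
        x = data[rng.integers(len(data))]         # sample one example
        noise = sigma * rng.standard_normal(dim)  # Gaussian perturbation
        w = w - eta * (grad(w, x) + noise)        # noisy gradient step
        norm = np.linalg.norm(w)
        if norm > radius:                         # project onto the ball
            w *= radius / norm
    return w

# Toy usage: least-squares loss on random data.
rng = np.random.default_rng(1)
examples = [(rng.standard_normal(5), rng.standard_normal()) for _ in range(100)]
lsq_grad = lambda w, ex: (w @ ex[0] - ex[1]) * ex[0]
w_hat = noisy_projected_sgd(examples, lsq_grad, dim=5, steps=500)
```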

[Theory Seminar] Thomas Lavastida
Wed, Feb 24 @ 12:00 pm – 1:00 pm

Speaker: Thomas Lavastida
Affiliation: Carnegie Mellon University

Title: Combinatorial Optimization Augmented with Machine Learning

Abstract:

Combinatorial optimization often focuses on optimizing for the worst case. However, the best algorithm to use depends on the “relevant inputs”, a notion that is application-specific and often lacks a formal definition.

The talk gives a new theoretical model for designing algorithms that are tailored to the inputs arising in the application at hand. In the model, learning is performed on past problem instances to make predictions on future instances, and these predictions are incorporated into the design and analysis of the algorithm. The predictions can be used to achieve “instance-optimal” algorithm design when they are accurate, and the algorithm’s performance degrades gracefully when there is error in the prediction.

The talk will apply this framework to online algorithm design and give algorithms with theoretical performance that goes beyond worst-case analysis. The majority of the talk will focus on load balancing with restricted assignments.
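
As background for that part of the talk, here is a minimal sketch of the classical greedy rule for online load balancing with restricted assignments, the worst-case baseline that learned predictions aim to improve on; the function name and input format are illustrative.

```python
def greedy_restricted_assignment(jobs, num_machines):
    """Greedy online load balancing under restricted assignments.

    jobs is a sequence of (size, allowed_machines) pairs arriving online;
    each job is placed on the currently least-loaded allowed machine.
    Returns the final loads (the makespan is their maximum).
    """
    loads = [0.0] * num_machines
    for size, allowed in jobs:
        m = min(allowed, key=lambda i: loads[i])  # least-loaded allowed machine
        loads[m] += size
    return loads

# Example: three machines, each job restricted to a subset of them.
loads = greedy_restricted_assignment(
    [(2.0, {0, 1}), (1.0, {1, 2}), (3.0, {0}), (2.0, {1, 2})], 3)
print(max(loads))  # makespan of the greedy schedule
```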

[Theory Seminar] Hung Le
Wed, Mar 3 @ 12:00 pm – 1:00 pm

Speaker: Hung Le
Affiliation: University of Massachusetts, Amherst

Title: Reliable Spanners: Locality-Sensitive Orderings Strike Back

Abstract:
A highly desirable property of networks is robustness to failures.
Consider a metric space $(X,d_X)$. A graph $H$ over $X$ is a $\vartheta$-reliable $t$-spanner if, for every set of failed vertices $B\subset X$, there is a superset $B^+\supseteq B$ such that the induced subgraph $H[X\setminus B]$ preserves all the distances between points in $X\setminus B^+$ up to a stretch factor $t$, while the expected size of $B^+$ is at most $(1+\vartheta)|B|$. Such a spanner can withstand a catastrophe: failure of even $90\%$ of the network.

Buchin, Har-Peled, and Olah showed how to construct a sparse reliable spanner for Euclidean space from Euclidean locality-sensitive orderings, an object introduced by Chan, Har-Peled, and Jones. In this talk, we extend their approach to non-Euclidean metric spaces by generalizing the ordering of Chan, Har-Peled, and Jones to doubling metrics and introducing new types of locality-sensitive orderings for other metric spaces. We also show how to construct reliable spanners from the newly introduced locality-sensitive orderings via reliable 2-hop spanners for paths. The highlight of our results is that the number of edges in our spanner has no dependency on the spread.
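
To make the definition above concrete, here is a small illustrative checker (a hypothetical helper, not from the paper) for the stretch condition: after deleting the failed set $B$, every pair of points outside $B^+$ must remain within a factor $t$ of its metric distance in the surviving subgraph.

```python
import math
from itertools import combinations

def check_reliable_stretch(points, d, edges, failed, failed_plus, t):
    """Check the stretch half of the reliable-spanner definition: after
    removing the failed set B, every pair outside B+ must stay within a
    factor t of its metric distance in the surviving induced subgraph.
    (The probabilistic size bound on B+ is a separate requirement.)
    """
    alive = [p for p in points if p not in failed]
    idx = {p: i for i, p in enumerate(alive)}
    n = len(alive)
    dist = [[math.inf] * n for _ in range(n)]
    for i in range(n):
        dist[i][i] = 0.0
    for u, v in edges:                       # keep only surviving edges
        if u in idx and v in idx:
            i, j = idx[u], idx[v]
            w = d(u, v)                      # spanner edges carry metric weight
            if w < dist[i][j]:
                dist[i][j] = dist[j][i] = w
    for k in range(n):                       # Floyd-Warshall shortest paths
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    good = [p for p in alive if p not in failed_plus]
    return all(dist[idx[u]][idx[v]] <= t * d(u, v)
               for u, v in combinations(good, 2))

# Example: collinear points, H = path graph, no failures; stretch is exactly 1.
pts = [0, 1, 2, 3]
print(check_reliable_stretch(pts, lambda u, v: abs(u - v),
                             [(0, 1), (1, 2), (2, 3)], set(), set(), t=1.0))
```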

[Theory Seminar] Teodor Marinov
Wed, Mar 10 @ 12:00 pm – 1:00 pm

Speaker: Teodor Marinov
Affiliation: Johns Hopkins University

Title: Beyond Value-Function Gaps: Improved Instance-Dependent Regret Bounds for Episodic Reinforcement Learning

Abstract:
Reinforcement Learning (RL) is a general scenario in which agents interact with an environment to achieve some goal. The environment and an agent’s interactions are typically modeled as a Markov decision process (MDP), which can represent a rich variety of tasks. But for which MDPs can an agent or an RL algorithm succeed? This requires a theoretical analysis of the complexity of an MDP. In this talk I will present information-theoretic lower bounds for a large class of MDPs. The lower bounds are based on studying a certain semi-infinite LP. Further, I will show that existing algorithms enjoy tighter gap-dependent regret bounds (similar to the stochastic multi-armed bandit problem); however, they are unable to achieve the information-theoretic lower bounds, even in MDPs with deterministic transitions, unless there is a unique optimal policy.
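
For context, the gap-dependent guarantees alluded to take, in the stochastic multi-armed bandit setting, the classical form

\[
\mathbb{E}[\mathrm{Regret}(T)] \;=\; O\Big(\sum_{a:\,\Delta_a>0} \frac{\log T}{\Delta_a}\Big),
\]

where $\Delta_a$ is the suboptimality gap of arm $a$; the analogous quantities in episodic MDPs are the value-function gaps of the title, which the lower bounds show do not by themselves capture instance difficulty.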

[Theory Seminar] Dominik Kempa
Wed, Mar 17 @ 12:00 pm – 1:00 pm

Speaker: Dominik Kempa
Affiliation: Johns Hopkins University

Title: How to store massive sequence collections in a searchable form

Abstract:
Compressed indexing is concerned with the design and construction of data structures that store massive sequences in space close to the size of the compressed data, while simultaneously providing search functionality (such as pattern matching) on the original uncompressed data. These indexes have had a huge impact on the analysis of large-scale DNA databases (containing hundreds of thousands of genomes), but until recently the size of many indexes lacked theoretical guarantees, and their construction remains a computational bottleneck.

In this talk I will describe my work proving theoretical guarantees on index size as a function of compressed data size, resolving a key open question in this field. Related to that, I will also describe new algorithms for converting between two conceptually distinct compressed representations, Lempel-Ziv and the Burrows-Wheeler Transform. I will show how these new findings enable advanced computation directly on compressed data. I will conclude by describing avenues for future work, including new algorithms for the construction of compressed indexes that have the potential to cut indexing time by further orders of magnitude, unlocking the ability to search truly enormous text or DNA datasets.
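
As a point of reference for one of the two representations mentioned, here is a naive, illustration-only construction of the Burrows-Wheeler Transform; practical index construction avoids materializing all rotations and runs in (near-)linear time via suffix arrays.

```python
def bwt(text, sentinel="$"):
    """Naive Burrows-Wheeler Transform: sort all rotations of text + sentinel
    and read off the last column.  O(n^2 log n) time -- for illustration only.
    """
    s = text + sentinel
    rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(rot[-1] for rot in rotations)

print(bwt("banana"))  # 'annb$aa'
```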

[Theory Seminar] Audra McMillan
Wed, Mar 24 @ 12:00 pm – 1:00 pm

Speaker: Audra McMillan
Affiliation: Apple

Title: Hiding among the clones: a simple and nearly optimal analysis of privacy amplification by shuffling

Abstract:
Differential privacy (DP) is a model of privacy-preserving machine learning that has garnered significant interest in recent years due to its rigorous privacy guarantees. An algorithm is differentially private if its output is stable under small changes in the input database. While DP has been adopted in a variety of applications, most applications of DP in industry actually satisfy a stronger notion called local differential privacy, in which data subjects perturb their data before it reaches the data analyst. While this requires less trust, it comes at a substantial cost to accuracy. Recent work of Erlingsson, Feldman, Mironov, Raghunathan, Talwar, and Thakurta [EFMRTT19] demonstrated that random shuffling amplifies the differential privacy guarantees of locally randomized data. Such amplification implies substantially stronger privacy guarantees for systems in which data is contributed anonymously [BEMMRLRKTS17] and has led to significant interest in the shuffle model of privacy [CSUZZ19, EFMRTT19]. In this talk, we will discuss a new result on privacy amplification by shuffling, which achieves the asymptotically optimal dependence on the local privacy parameter. Our result is based on a new proof strategy which is simpler than previous approaches, and extends to a slightly weaker notion known as approximate differential privacy with nearly the same guarantees. Based on joint work with Vitaly Feldman and Kunal Talwar (https://arxiv.org/abs/2012.12803).
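
A minimal sketch of the shuffle-model pipeline under discussion, using binary randomized response as the local randomizer (the choice of randomizer and all parameters are illustrative, not the paper's construction):

```python
import math, random

def randomized_response(bit, eps):
    """eps-LDP binary randomized response: report the true bit with
    probability e^eps / (e^eps + 1), otherwise flip it."""
    keep = math.exp(eps) / (math.exp(eps) + 1)
    return bit if random.random() < keep else 1 - bit

def shuffle_reports(bits, eps):
    """Each user randomizes locally, then a trusted shuffler permutes the
    reports, severing the link between users and messages; amplification
    results show the shuffled output satisfies a much stronger central DP."""
    reports = [randomized_response(b, eps) for b in bits]
    random.shuffle(reports)
    return reports

def debias_mean(reports, eps):
    """Unbiased estimate of the true mean from randomized-response reports."""
    p = math.exp(eps) / (math.exp(eps) + 1)
    return (sum(reports) / len(reports) - (1 - p)) / (2 * p - 1)

true_bits = [1] * 600 + [0] * 400
print(debias_mean(shuffle_reports(true_bits, eps=1.0), eps=1.0))  # ~0.6
```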

[Theory Seminar] Maryam Negahbani
Wed, Mar 31 @ 12:00 pm – 1:00 pm

Speaker: Maryam Negahbani
Affiliation: Dartmouth College

Title: Revisiting Priority k-Center: Fairness and Outliers

Abstract:
Clustering is a fundamental unsupervised learning and facility location problem extensively studied in the literature. I will talk about a clustering problem called “priority k-center”, introduced by Plesnik (Disc. Appl. Math., 1987). Given a metric space on n points X with distance function d, an integer k, and a radius r_v for each point v in X, the goal is to choose k points S as “centers” to minimize the maximum, over points v, of the distance from v to S divided by r_v. For uniform r_v’s this is precisely the “k-center” problem, where the objective is to minimize the maximum distance of any point to S. In the priority version, points with smaller r_v are prioritized to be closer to S. Recently, a special case of this problem was studied in the context of “individually fair clustering” by Jung et al., FORC 2020. This notion of fairness forces S to open a center in every “densely populated area” by setting r_v to be the distance from v to its (n/k)-th closest neighbor.

In this talk, I show how to approximate priority k-center with outliers: for a given integer z, you are allowed to throw away z points as outliers and minimize the objective over the rest of the points. We show there is a 9-approximation, which is morally a 5 if you have constantly many types of radii or if your radii are powers of 2. This is via an LP-aware reduction to min-cost max-flow, and it is general enough to handle matroid constraints on facilities (where instead of being asked to pick at most k facilities, you are asked to pick facilities that are independent in a given matroid). Things become quite interesting for priority knapsack-center with outliers, where opening each center costs something and you have a limited budget to spend on your solution S. In this case, we do not know how to solve the corresponding flow problem, so we alter our reduction to target a simpler problem we do know how to solve, taking a hit of +5 in the approximation ratio. There are still many open problems in this work: in addition to solving the flow problem in the knapsack case, the best LP integrality gap we know for priority k-center with outliers is 3.
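
For concreteness, a tiny sketch (hypothetical helper, not from the paper) of the priority k-center objective; with uniform radii it reduces to the plain k-center objective up to scaling.

```python
def priority_kcenter_cost(points, d, centers, r):
    """Priority k-center objective: max over points v of d(v, S) / r_v,
    where d(v, S) is the distance from v to the nearest chosen center."""
    return max(min(d(v, c) for c in centers) / r[v] for v in points)

# Example on the real line with d(x, y) = |x - y|.
pts = [0.0, 1.0, 2.0, 10.0]
radii = {0.0: 1.0, 1.0: 1.0, 2.0: 1.0, 10.0: 4.0}  # point 10 tolerates distance 4
cost = priority_kcenter_cost(pts, lambda x, y: abs(x - y), [1.0], radii)
print(cost)  # max(1/1, 0/1, 1/1, 9/4) = 2.25
```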

[Theory Seminar] Leonidas Tsepenekas
Wed, Apr 7 @ 12:00 pm – 1:00 pm

Speaker: Leonidas Tsepenekas
Affiliation: University of Maryland

Title: Approximating Two-Stage Stochastic Supplier Problems

Abstract:
The main focus of this talk will be radius-based (supplier) clustering in the two-stage stochastic setting with recourse, where the inherent stochasticity of the model comes in the form of a budget constraint. Our eventual goal is to provide results in the most general distributional setting, where there is only black-box access to the underlying distribution. To that end, we follow a two-step approach. First, we develop algorithms for a restricted version of the problem, in which all possible scenarios are explicitly provided; second, we employ a novel scenario-discarding variant of the standard Sample Average Approximation (SAA) method, in which we also crucially exploit structural properties of the algorithms developed for the first step of the framework. In this way, we manage to generalize the results of the latter to the black-box model. Finally, we note that the scenario-discarding modification to the SAA method is necessary in order to optimize over the radius.

Paper: https://arxiv.org/abs/2008.03325
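
For orientation, the plain SAA loop that the talk's scenario-discarding variant modifies looks roughly as follows; sample_scenario and solve_empirical are hypothetical callbacks, and the discarding step and the first-stage clustering algorithms are the paper's contribution and are not reproduced here.

```python
def sample_average_approximation(sample_scenario, solve_empirical, n_samples):
    """Plain SAA: draw scenarios from the black-box distribution and
    optimize the empirical average of the two-stage objective over them.

    sample_scenario()           -- draws one scenario from the black box
    solve_empirical(scenarios)  -- returns a first-stage solution that is
                                   (approximately) optimal for the empirical
                                   distribution over the sampled scenarios
    """
    scenarios = [sample_scenario() for _ in range(n_samples)]
    return solve_empirical(scenarios)
```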

[Theory Seminar] Samson Zhou
Wed, Apr 14 @ 12:00 pm – 1:00 pm

Speaker: Samson Zhou
Affiliation: Carnegie Mellon University

Title: Tight Bounds for Adversarially Robust Streams and Sliding Windows via Difference Estimators

Abstract:
We introduce difference estimators for data stream computation, which provide approximations to F(v)-F(u) for frequency vectors v,u and a given function F. We show how to use such estimators to carefully trade error for memory in an iterative manner. The function F is generally non-linear, and we give the first difference estimators for the frequency moments F_p for p between 0 and 2, as well as for integers p>2. Using these, we resolve a number of central open questions in the adversarially robust streaming and sliding window models.

For both models, we obtain algorithms for norm estimation whose dependence on epsilon is 1/epsilon^2, which shows, up to logarithmic factors, that there is no overhead over the standard insertion-only data stream model for these problems.
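
For background on the quantities involved, here is the classical (non-robust, insertion-only) AMS estimator for the second frequency moment F_2; difference estimators and adversarial robustness are what the paper builds on top of such primitives.

```python
import random
from statistics import median

def ams_f2(stream, universe, repetitions=30, seed=0):
    """Classical AMS sketch for F_2 = sum_i f_i^2 of a stream of items.

    Each repetition keeps Z = sum_i s(i) * f_i for a random sign function s;
    E[Z^2] = F_2.  A median over repetitions sharpens the estimate
    (the textbook analysis uses a median of means; this is simplified).
    """
    rng = random.Random(seed)
    estimates = []
    for _ in range(repetitions):
        signs = {x: rng.choice((-1, 1)) for x in universe}  # random +/-1 per item
        z = sum(signs[x] for x in stream)                   # one pass over stream
        estimates.append(z * z)                             # unbiased for F_2
    return median(estimates)

stream = ["a", "b", "a", "c", "a", "b"]
print(ams_f2(stream, universe={"a", "b", "c"}))  # true F_2 = 9 + 4 + 1 = 14
```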

[Theory Seminar] Welcome Back / Introductions
Wed, Sep 6 @ 12:00 pm – 1:00 pm

[Theory Seminar] Zeyu Guo
Wed, Oct 25 @ 12:00 pm – 1:00 pm

Speaker: Zeyu Guo
Affiliation: Ohio State University

Title: TBD

Abstract: TBD

[Theory Seminar] Yuzhou Gu
Wed, Sep 25 @ 12:00 pm – 1:00 pm

Speaker: Yuzhou Gu

Affiliation: NYU Center for Data Science & Courant Institute

Title: Community detection in the hypergraph stochastic block model

Abstract:

Community detection is a fundamental problem in network science, and its theoretical study has received significant attention over the last decade. In this talk I will present some recent advances on the community detection problem in sparse hypergraphs. In particular, we determine the weak recovery threshold for the hypergraph stochastic block model for a wide range of parameters. This resolves conjectures made by physicists in the corresponding regimes and has implications for phase transitions of random constraint satisfaction problems. A key component in this study is to analyze the behavior of information channels under repeated applications of the belief propagation operator. We introduce a framework for performing this analysis based on information-theoretic methods for channel comparison. Along the way, we formulate a rigorous version of the population dynamics algorithm, an approach commonly used in practice but lacking theoretical guarantees.
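
To illustrate the last point, a generic sketch of the population dynamics heuristic; bp_update, init_message, and degree_dist are hypothetical callbacks standing in for the model-specific belief propagation operator, and the rigorous version developed in the work is not captured by this sketch.

```python
import random

def population_dynamics(bp_update, init_message, pop_size, degree_dist, iters, seed=0):
    """Population dynamics: represent the distribution of BP messages by a
    finite population, and repeatedly replace a random member with the BP
    update applied to d messages resampled from the population.

    bp_update(msgs) -- BP operator mapping incoming messages to an outgoing one
    degree_dist()   -- samples the number d of incoming messages
    """
    rng = random.Random(seed)
    pop = [init_message() for _ in range(pop_size)]
    for _ in range(iters):
        d = degree_dist()
        incoming = [rng.choice(pop) for _ in range(d)]      # resample messages
        pop[rng.randrange(pop_size)] = bp_update(incoming)  # replace one member
    return pop  # empirical approximation of the BP fixed-point distribution
```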