Seminar

We typically have seminars on Wednesdays at noon in Malone 228. All seminar announcements will be sent to the theory mailing list.

[Theory Seminar] Justin Hsu
Wed, Oct 12 @ 12:00 pm – 1:00 pm

Speaker: Justin Hsu
Affiliation: University of Pennsylvania

Title: Approximate Probabilistic Coupling and Differential Privacy
Abstract: Approximate lifting is a formal verification concept for proving
differential privacy. Recently, we have explored an interesting connection:
approximate liftings are an approximate version of probabilistic coupling. As a
consequence, we can give new, “coupling” proofs of differential privacy,
simplifying and generalizing existing proofs.  In this talk we will present
approximate couplings and describe how they can be used to prove differential
privacy for the Sparse Vector mechanism, an algorithm whose existing privacy
proof is notoriously subtle.
Joint work with Gilles Barthe, Marco Gaboardi, Benjamin Grégoire, and Pierre-Yves Strub.

[Theory Seminar] David Harris
Wed, Oct 19 @ 12:00 pm – 1:00 pm

Speaker: David Harris

Affiliation: University of Maryland, College Park

Title: Improved parallel algorithms for hypergraph maximal independent set

Abstract:

Finding a maximal independent set in hypergraphs has been a long-standing algorithmic challenge. The best parallel algorithm for hypergraphs of rank $r$ was developed by Beame and Luby (1990) and Kelsen (1992), running in time roughly $(\log n)^{r!}$. This is in RNC for fixed $r$, but is still quite expensive. We improve on the analysis of Kelsen to show that a slight variant of this algorithm runs in time $(\log n)^{2^r}$. We derandomize this algorithm to achieve a deterministic algorithm running in time $(\log n)^{2^{r+3}}$ using $m^{O(1)}$ processors.

Our analysis also applies when $r$ is slowly growing; using this in conjunction with a strategy of Bercea et al. (2015) gives a deterministic algorithm running in time $\exp(O(\log m/\log \log m))$. This is faster than the algorithm of Bercea et al., and in addition it is deterministic. In particular, this is sub-polynomial time for graphs with $m \leq n^{o(\log \log n)}$ edges.

[Theory Seminar] Adam Smith
Wed, Oct 26 @ 12:00 pm – 1:00 pm

Speaker: Adam Smith

Affiliation: Penn State University

Title: Privacy, Information and Generalization

Abstract:

Consider an agency holding a large database of sensitive personal
information — medical records, census survey answers, web search
records, or genetic data, for example. The agency would like to
discover and publicly release global characteristics of the data (say,
to inform policy or business decisions) while protecting the privacy
of individuals’ records. I will begin by discussing what makes this
problem difficult, and exhibit some of the nontrivial issues that
plague simple attempts at anonymization and aggregation. Motivated by
this, I will present differential privacy, a rigorous definition of
privacy in statistical databases that has received significant
attention.
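
For readers who want the formal definition referenced above (standard background, not part of the original announcement): a randomized algorithm $M$ is $(\varepsilon, \delta)$-differentially private if, for every pair of databases $D, D'$ that differ in a single individual's record and every set $S$ of possible outputs, $\Pr[M(D) \in S] \le e^{\varepsilon} \cdot \Pr[M(D') \in S] + \delta$.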

In the second part of the talk, I will explain how differential
privacy is connected to a seemingly different problem: “adaptive data
analysis”, the practice by which insights gathered from data are used
to inform further analysis of the same data sets. This is increasingly
common in scientific research, in which data sets are shared and
re-used across multiple studies. Classical statistical theory assumes
that the analysis to be run is selected independently of the data.
This assumption breaks down when data are re-used; the resulting
dependencies can significantly bias the analyses’ outcomes. I’ll show
how limiting the information revealed about a data set during
analysis allows one to control such bias, and why differentially
private analyses provide a particularly attractive tool for limiting
information.

Based on several papers, including recent joint works with R. Bassily,
K. Nissim, U. Stemmer, T. Steinke and J. Ullman (STOC 2016) and R.
Rogers, A. Roth and O. Thakkar (FOCS 2016).

Bio:
Adam Smith is a professor of Computer Science and Engineering at Penn
State. His research interests lie in data privacy and cryptography,
and their connections to machine learning, statistics, information
theory, and quantum computing. He received his Ph.D. from MIT in 2004
and has held visiting positions at the Weizmann Institute of Science,
UCLA, Boston University and Harvard. In 2009, he received a
Presidential Early Career Award for Scientists and Engineers (PECASE).
In 2016, he received the Theory of Cryptography Test of Time award,
jointly with C. Dwork, F. McSherry and K. Nissim.

[Theory Seminar] Justin Thaler
Wed, Nov 16 @ 12:00 pm – 1:00 pm

Speaker: Justin Thaler

Affiliation: Georgetown University

Title: Approximate Degree, Sign-Rank, and the Method of Dual Polynomials

Abstract:

The eps-approximate degree of a Boolean function f is the minimum degree of a real polynomial that pointwise approximates f to error eps. Approximate degree has wide-ranging applications in theoretical computer science, yet our understanding of approximate degree remains limited, with few general results known.

The focus of this talk will be on a relatively new method for proving lower bounds on approximate degree: specifying dual polynomials, which are dual solutions to a certain linear program capturing the approximate degree of any function. I will describe how the method of dual polynomials has recently enabled progress on a variety of open problems, especially in communication complexity and oracle separations. 
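
As standard background on the duality mentioned above (a sketch of the usual formulation, not text from the talk): for $f : \{-1,1\}^n \to \{-1,1\}$, a dual polynomial witnessing that the eps-approximate degree of $f$ is at least $d$ is a function $\psi : \{-1,1\}^n \to \mathbb{R}$ such that (i) $\sum_x |\psi(x)| = 1$, (ii) $\sum_x \psi(x) f(x) > \varepsilon$, and (iii) $\sum_x \psi(x) q(x) = 0$ for every polynomial $q$ of degree less than $d$. Linear programming duality guarantees that such a $\psi$ exists whenever the eps-approximate degree of $f$ is at least $d$, so exhibiting a dual polynomial is a complete method for proving approximate degree lower bounds.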

Joint work with Mark Bun, Adam Bouland, Lijie Chen, Dhiraj Holden, and Prashant Nalini Vasudevan.

[Theory Seminar] Jalaj Upadhyay
Wed, Nov 30 @ 12:00 pm – 1:00 pm

Speaker: Jalaj Upadhyay

Affiliation: Penn State University

Title: Fast and Space-Optimal Differentially-Private Low-Rank Factorization in the General Turnstile Update Model

Abstract:

The problem of {\em low-rank factorization} of an m x n matrix A requires outputting a singular value decomposition (an m x k matrix U, an n x k matrix V, and a k x k diagonal matrix D) such that U D V^T approximates the matrix A in the Frobenius norm. In this paper, we study releasing a differentially-private low-rank factorization of a matrix in the general turnstile update model. We give two differentially-private algorithms instantiated with respect to two levels of privacy. Both of our privacy levels are stronger than the privacy levels for this and related problems studied in previous works, namely those of Blocki {\it et al.} (FOCS 2012), Dwork {\it et al.} (STOC 2014), Hardt and Roth (STOC 2012, STOC 2013), and Hardt and Price (NIPS 2014). Our main contributions are as follows.

1. In our first level of privacy, we consider two matrices A and A’ as neighboring if A – A’ can be represented as an outer product of two unit vectors. Our private algorithm with respect to this privacy level incurs optimal additive error. We also prove a lower bound showing that the space required by this algorithm is optimal up to a logarithmic factor.

2. In our second level of privacy, we consider two matrices as neighboring if their difference has Frobenius norm at most 1. Our private algorithm with respect to this privacy level is computationally more efficient than our first algorithm and incurs optimal additive error.
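
As illustrative background, the short Python sketch below computes the non-private rank-k factorization defined in the first paragraph via a truncated SVD (the function name and parameters are ours); it shows only the baseline objective, not the differentially private turnstile-streaming algorithms of the talk.

import numpy as np

# Non-private rank-k factorization via truncated SVD (illustration only).
def low_rank_factorization(A, k):
    # Thin SVD, then keep the top-k singular triples.
    U_full, s, Vt_full = np.linalg.svd(A, full_matrices=False)
    U = U_full[:, :k]      # m x k: top-k left singular vectors
    D = np.diag(s[:k])     # k x k: top-k singular values on the diagonal
    V = Vt_full[:k, :].T   # n x k: top-k right singular vectors
    return U, D, V

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 30))
U, D, V = low_rank_factorization(A, k=5)
print("Frobenius error of rank-5 approximation:",
      np.linalg.norm(A - U @ D @ V.T, "fro"))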

[Theory Seminar] Mohammad Mahmoody
Tue, Mar 7 @ 3:00 pm – 4:00 pm

Speaker: Mohammad Mahmoody, Assistant Professor, University of Virginia

Title: Lower Bounds on Indistinguishability Obfuscation from Zero-One Encryption

Abstract: Indistinguishability Obfuscation (IO) has recently emerged as a central primitive in cryptography, enabling many heretofore out-of-reach applications. However, currently all known constructions of IO are based on multilinear maps, which are poorly understood. With the hope of basing IO on more standard assumptions, in this work we ask whether IO could be based on any of the powerful (and recently realized) encryption primitives such as attribute-based/predicate encryption, fully homomorphic encryption, and witness encryption. What connects these primitives is that they are zero-one: either the message is revealed fully by the “right key” or it remains completely hidden.

Our main result is a negative one: we prove there is no black-box construction of IO from any of the above list of “zero-one” encryptions. We note that many IO constructions are in fact non-black-box; e.g., the results of Ananth-Jain’15 and Bitansky-Vaikuntanathan’15 basing IO on functional encryption are non-black-box. In fact, we prove our separations in an extension of the black-box framework of Impagliazzo-Rudich’89 and Reingold-Trevisan-Vadhan’04 which allows such non-black-box techniques as part of the model by default. Thus, we believe our extended model is of independent interest as a candidate for the new “standard” for cryptographic separations.

[Theory Seminar] Avishay Tal
Wed, Mar 8 @ 12:00 pm – 1:00 pm

Speaker: Avishay Tal

Affiliation: IAS

Title: Time-Space Hardness of Learning Sparse Parities

Abstract:

How can one learn a parity function, i.e., a function of the form $f(x) = a_1 x_1 + a_2 x_2 + \ldots + a_n x_n \pmod{2}$ where $a_1, \ldots, a_n \in \{0,1\}$, from random labeled examples? One approach is to gather O(n) random labeled examples and perform Gaussian elimination. This requires a memory of size O(n^2) and poly(n) time. Another approach is to go over all possible 2^n parity functions and to verify them by checking O(n) random examples per possibility. This requires a memory of size O(n), but O(2^n * n) time. In a recent work, Raz [FOCS, 2016] showed that if an algorithm has memory of size much smaller than n^2, then it has to spend exponential time in order to learn a parity function. In other words, fast learning requires a good memory.

In this work, we show that even if the parity function is known to be extremely sparse, where only log(n) of the a_i’s are nonzero, the learning task is still time-space hard. That is, we show that any algorithm with linear-size memory and polynomial time fails to learn log(n)-sparse parities. Consequently, the classical tasks of learning linear-size DNF formulae, linear-size decision trees, and logarithmic-size juntas are all time-space hard.

Based on joint work with Gillat Kol and Ran Raz.
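
To make the first (memory-heavy) approach above concrete, here is a small illustrative Python sketch that recovers a parity from random labeled examples by Gaussian elimination over GF(2); the names and sizes are ours, and it is only a toy version of the baseline algorithm, not anything from the lower bound itself.

import numpy as np

def learn_parity(X, y):
    # Solve X a = y (mod 2) for the hidden coefficient vector a by
    # Gaussian elimination over GF(2), storing the whole O(n^2)-size system.
    A = np.array(X, dtype=np.uint8) % 2
    b = np.array(y, dtype=np.uint8) % 2
    m, n = A.shape
    row = 0
    for col in range(n):
        pivot = next((r for r in range(row, m) if A[r, col]), None)
        if pivot is None:
            continue                       # no pivot in this column
        A[[row, pivot]] = A[[pivot, row]]  # swap pivot row into place
        b[[row, pivot]] = b[[pivot, row]]
        for r in range(m):                 # eliminate this column everywhere else
            if r != row and A[r, col]:
                A[r] ^= A[row]
                b[r] ^= b[row]
        row += 1
    a = np.zeros(n, dtype=np.uint8)        # read off a solution (free vars = 0)
    for r in range(row):
        cols = np.flatnonzero(A[r])
        if len(cols):
            a[cols[0]] = b[r]
    return a

rng = np.random.default_rng(1)
n, m = 8, 40
secret = rng.integers(0, 2, size=n, dtype=np.uint8)
X = rng.integers(0, 2, size=(m, n), dtype=np.uint8)
y = (X @ secret) % 2
print("recovered:", learn_parity(X, y), "secret:", secret)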

[Theory Seminar] Yevgeniy Dodis @ Malone G33/35 (ground floor)
Wed, Mar 22 @ 12:00 pm – 1:00 pm

SPEAKER: Yevgeniy Dodis, New York University

TITLE: Fixing Cracks in the Concrete: Random Oracles with Auxiliary Input, Revisited

ABSTRACT: We revisit security proofs for various cryptographic primitives in the random oracle model with auxiliary input (ROM-AI): a (computationally unbounded) attacker A can compute arbitrary S bits of leakage z=z(O) about the random oracle O before attacking the system, and then use additional T oracle queries to O during the attack. This model was explicitly studied by Unruh (CRYPTO 2007), but dates back to the seminal paper of Hellman in 1980 about time-space tradeoffs for inverting random functions, and has natural applications in settings where traditional random oracle proofs are not useful: (a) security against non-uniform attackers; (b) space-time tradeoffs; (c) security against preprocessing; (d) resilience to backdoors in hash functions.

We obtain a number of new results about ROM-AI, but our main message is that ROM-AI is the “new cool kid in town”: it nicely connects theory and practice, has a lot of exciting open questions, leads to beautiful math, short definitions, elegant proofs, surprising algorithms, and is still in its infancy. In short, you should work on it!

Joint work with Siyao Guo and Jonathan Katz.

[Theory Seminar] Ori Rottenstreich
Wed, Mar 29 @ 12:00 pm – 1:00 pm

Speaker: Ori Rottenstreich, Princeton

Title: Novel Approaches to Challenges in Emerging Network Paradigms

Abstract:

SDN (Software defined networking) and NFV (Network Function Virtualization) are two emerging network paradigms that enable simplification, flexibility and cost-reduction in network management. We believe that the new paradigms will lead to many interesting research questions. We study how to rely on them for dealing with two common network challenges.

We consider switches that implement network policies in SDN through rule-matching tables of limited size. We study the applicability of rule caching and lossy compression to create packet classifiers requiring much less memory than the theoretical size limits of semantically-equivalent representations. We would like to find limited-size classifiers that can correctly classify a large portion of the traffic. We address different goals with unique settings and explain how to deal with the traffic that cannot be classified correctly.

Network functions such as load balancing and deep packet inspection are often implemented in dedicated hardware called middleboxes. These can suffer from temporary unavailability due to misconfiguration or software and hardware malfunction. We suggest relying on virtualization to plan and deploy backup schemes for network functions. The schemes guarantee high levels of survivability with a significant reduction in resource consumption. We discuss different goals that network designers should take into account, and describe a graph-theoretic model for finding properties of efficient solutions and developing algorithms that build them.

Bio: Ori Rottenstreich is a postdoctoral research associate in the Department of Computer Science at Princeton University. He received his Ph.D. from the Department of Electrical Engineering at the Technion. His research interests lie at the intersection of computer networking and algorithms.

[Theory Seminar] Sepehr Assadi
Wed, Apr 19 @ 12:00 pm – 1:00 pm

Speaker: Sepehr Assadi, UPenn

Title: Matching Size and Matrix Rank Estimation in Data Streams

Abstract:

How well can a sub-linear space algorithm estimate the size of a largest matching in a graph, or the rank of a given matrix, if the input is revealed in a streaming fashion? In this talk, we consider this question from both the upper bound and lower bound sides and establish new results on the tradeoff between the space requirement and the desired accuracy of streaming algorithms for these tasks.

We show that while the problem of matching size estimation is provably easier than the problem of finding an approximate matching (i.e., finding the actual edges of the matching), the space complexity of the two problems starts to converge as the accuracy desired in the computation approaches near-optimality. A well-known connection between matching size estimation and computing the rank of Tutte matrices allows us to carry our lower bound results over to the matrix rank estimation problem, and we show that almost quadratic space is necessary to obtain a near-optimal approximation of matrix rank in data streams.
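
For background on the connection invoked above (standard material, not part of the original abstract): the Tutte matrix of a graph $G=(V,E)$ on $n$ vertices is the $n \times n$ skew-symmetric matrix $T$ with $T_{ij} = x_{ij}$ if $\{i,j\} \in E$ and $i < j$, $T_{ij} = -x_{ji}$ if $\{i,j\} \in E$ and $i > j$, and $T_{ij} = 0$ otherwise, where the $x_{ij}$ are indeterminates. Its rank over the field of rational functions in the $x_{ij}$ equals twice the maximum matching size of $G$, which is what lets matching-size lower bounds transfer to matrix rank estimation.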

Based on joint work with Sanjeev Khanna and Yang Li (SODA’17, invited to HALG’17).