We typically have seminars on Wednesday at noon in Malone 228. All seminar announcements will be sent to the theory mailing list.

Speaker: Jalaj Upadhyay

Affiliation: Penn State University

Title: Fast and Space-Optimal Differentially-Private Low-Rank Factorization in the General Turnstile Update Model

Abstract:

The problem of low-rank factorization of an m x n matrix A requires outputting a singular value decomposition: an m x k matrix U, an n x k matrix V, and a k x k diagonal matrix D such that U D V^T approximates the matrix A in the Frobenius norm. In this paper, we study releasing a differentially-private low-rank factorization of a matrix in the general turnstile update model. We give two differentially-private algorithms instantiated with respect to two levels of privacy. Both of our privacy levels are stronger than the privacy levels for this and related problems studied in previous works, namely those of Blocki et al. (FOCS 2012), Dwork et al. (STOC 2014), Hardt and Roth (STOC 2012, STOC 2013), and Hardt and Price (NIPS 2014). Our main contributions are as follows.

1. In our first level of privacy, we consider two matrices A and A’ as neighboring if A – A’ can be represented as an outer product of two unit vectors. Our private algorithm with respect to this privacy level incurs optimal additive error. We also prove a lower bound showing that the space required by this algorithm is optimal up to a logarithmic factor.

2. In our second level of privacy, we consider two matrices as neighboring if their difference has Frobenius norm at most 1. Our private algorithm with respect to this privacy level is computationally more efficient than our first algorithm and incurs optimal additive error.
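As background, the non-private version of the objective above is solved exactly by a truncated SVD (Eckart-Young). The sketch below, using NumPy, illustrates only this baseline objective; it is not the speaker's private or turnstile algorithm, and the function name is illustrative.

```python
import numpy as np

def low_rank_factorization(A, k):
    """Return U (m x k), D (k x k diagonal), V (n x k) minimizing ||A - U D V^T||_F."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return U[:, :k], np.diag(s[:k]), Vt[:k, :].T

rng = np.random.default_rng(0)
m, n, k = 20, 15, 3
A = rng.standard_normal((m, k)) @ rng.standard_normal((k, n))  # an exactly rank-k matrix
U, D, V = low_rank_factorization(A, k)
err = np.linalg.norm(A - U @ D @ V.T, "fro")
print(f"Frobenius error: {err:.2e}")  # essentially zero, since rank(A) = k
```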

Speaker: Mohammad Mahmoody, Assistant Professor, University of Virginia

Abstract: Indistinguishability Obfuscation (IO) has recently emerged as a central primitive in cryptography, enabling many heretofore out-of-reach applications. However, all currently known constructions of IO are based on multilinear maps, which are poorly understood. With the hope of basing IO on more standard assumptions, in this work we ask whether IO could be based on any of the powerful (and recently realized) encryption primitives such as attribute-based/predicate encryption, fully homomorphic encryption, and witness encryption. What connects these primitives is that they are zero-one: either the message is revealed fully by the “right key” or it remains completely hidden.

Our main result is a negative one: we prove that there is no black-box construction of IO from any of the above list of “zero-one” encryptions. We note that many IO constructions are in fact non-black-box; e.g., the results of Ananth-Jain’15 and Bitansky-Vaikuntanathan’15 basing IO on functional encryption are non-black-box. In fact, we prove our separations in an extension of the black-box framework of Impagliazzo-Rudich’89 and Reingold-Trevisan-Vadhan’04 which allows such non-black-box techniques as part of the model by default. Thus, we believe our extended model is of independent interest as a candidate for a new “standard” for cryptographic separations.

Speaker: Avishay Tal

Affiliation: IAS

Title: Time-Space Hardness of Learning Sparse Parities

Abstract:

How can one learn a parity function, i.e., a function of the form $f(x) = a_1 x_1 + a_2 x_2 + … + a_n x_n (mod 2)$ where a_1, …, a_n are in {0,1}, from random labeled examples? One approach is to gather O(n) random labeled examples and perform Gaussian elimination. This requires a memory of size O(n^2) and poly(n) time. Another approach is to go over all 2^n possible parity functions and to verify each by checking O(n) random examples. This requires a memory of size O(n), but O(2^n * n) time. In a recent work, Raz [FOCS, 2016] showed that if an algorithm has memory of size much smaller than n^2, then it has to spend exponential time in order to learn a parity function. In other words, fast learning requires a good memory. In this work, we show that even if the parity function is known to be extremely sparse, with only log(n) of the a_i’s nonzero, the learning task is still time-space hard. That is, we show that any algorithm with linear-size memory and polynomial time fails to learn log(n)-sparse parities. Consequently, the classical tasks of learning linear-size DNF formulae, linear-size decision trees, and logarithmic-size juntas are all time-space hard. Based on joint work with Gillat Kol and Ran Raz.
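The first approach from the abstract (Gaussian elimination over GF(2), storing roughly n^2 bits of examples) can be sketched as follows. The names and sampling setup are illustrative, not from the talk:

```python
import random

def learn_parity(examples):
    """Recover a from pairs (x, y) with y = <a, x> mod 2 via GF(2) elimination."""
    n = len(examples[0][0])
    rows = [list(x) + [y] for x, y in examples]  # augmented matrix over GF(2)
    r = 0
    for col in range(n):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if pivot is None:
            continue  # no pivot for this column; more examples would be needed
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                rows[i] = [u ^ v for u, v in zip(rows[i], rows[r])]
        r += 1
    a = [0] * n
    for row in rows[:r]:          # each pivot row of the RREF pins down one a_i
        lead = row.index(1)
        if lead < n:
            a[lead] = row[n]
    return a

random.seed(1)
n = 16
secret = [random.randint(0, 1) for _ in range(n)]

def sample():
    x = [random.randint(0, 1) for _ in range(n)]
    return x, sum(s * xi for s, xi in zip(secret, x)) % 2

examples = [sample() for _ in range(5 * n)]  # O(n) examples -> ~n^2 bits of memory
print(learn_parity(examples) == secret)
```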

SPEAKER: Yevgeniy Dodis, New York University

TITLE: Fixing Cracks in the Concrete: Random Oracles with Auxiliary Input, Revisited

ABSTRACT: We revisit security proofs for various cryptographic primitives in the random oracle model with auxiliary input (ROM-AI): a (computationally unbounded) attacker A can compute arbitrary S bits of leakage z=z(O) about the random oracle O before attacking the system, and then use T additional oracle queries to O during the attack. This model was explicitly studied by Unruh (CRYPTO 2007), but dates back to the seminal 1980 paper of Hellman on time-space tradeoffs for inverting random functions, and has natural applications in settings where traditional random oracle proofs are not useful: (a) security against non-uniform attackers; (b) space-time tradeoffs; (c) security against preprocessing; (d) resilience to backdoors in hash functions. We obtain a number of new results about ROM-AI, but our main message is that ROM-AI is the “new cool kid in town”: it nicely connects theory and practice, has a lot of exciting open questions, leads to beautiful math, short definitions, elegant proofs, and surprising algorithms, and is still in its infancy. In short, you should work on it! Joint work with Siyao Guo and Jonathan Katz.
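A toy illustration of the auxiliary-input setting (not Hellman's chain-based tradeoff, and not any construction from the talk): the preprocessing phase stores S entries of a partial inverse table as the leakage z(O), and the online phase inverts using that table plus T fresh oracle queries. All names here are hypothetical.

```python
import random

# A fixed random function O: [N] -> [N], fully known during preprocessing.
random.seed(0)
N = 1 << 12
oracle = [random.randrange(N) for _ in range(N)]

S = 256
# Leakage z = z(O): a partial inverse table over S random preimages.
leakage = {oracle[x]: x for x in random.sample(range(N), S)}

def invert(y, T):
    """Find x with oracle[x] == y using the advice and at most T online queries."""
    if y in leakage:                      # free hit from preprocessing
        return leakage[y]
    for x in random.sample(range(N), T):  # T online oracle queries
        if oracle[x] == y:
            return x
    return None

hits = sum(invert(oracle[x], T=64) is not None for x in range(256))
print(f"inverted {hits}/256 challenges with S={S} advice entries, T=64 queries")
```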

Speaker: Ori Rottenstreich, Princeton

Title: Novel Approaches to Challenges in Emerging Network Paradigms

Abstract: SDN (Software Defined Networking) and NFV (Network Function Virtualization) are two emerging network paradigms that enable simplification, flexibility, and cost reduction in network management. We believe that these new paradigms will lead to many interesting research questions. We study how to rely on them to deal with two common network challenges.

We consider switches that implement network policies in SDN through rule-matching tables of limited size. We study the applicability of rule caching and lossy compression to create packet classifiers requiring much less memory than the theoretical size limits of semantically-equivalent representations. We would like to find limited-size classifiers that can correctly classify a high portion of the traffic. We address different goals with unique settings and explain how to deal with the traffic that cannot be classified correctly.

Network functions such as load balancing and deep packet inspection are often implemented in dedicated hardware called middleboxes. These can suffer from temporary unavailability due to misconfiguration or software and hardware malfunction. We suggest relying on virtualization for planning and deploying backup schemes for network functions. The schemes guarantee high levels of survivability with a significant reduction in resource consumption. We discuss different goals that network designers should take into account. We describe a graph-theoretical model for finding properties of efficient solutions and developing algorithms that can build them.

Bio: Ori Rottenstreich is a postdoctoral research associate at the Department of Computer Science, Princeton University. He received his Ph.D. from the Electrical Engineering department of the Technion. His research interests include the intersection of computer networking and algorithms.

Speaker: Sepehr Assadi, UPenn

Title:

Abstract:

Speaker: Dana Dachman Soled, UMD

Title: Tight Upper and Lower Bounds for Leakage-Resilient, Locally Decodable and Updatable Non-Malleable Codes

Abstract: In a recent result, Dachman-Soled et al. (TCC ’15) proposed a new notion called locally decodable and updatable non-malleable codes, which, informally, provides the security guarantees of a non-malleable code while also allowing for efficient random access. They also considered locally decodable and updatable non-malleable codes that are leakage-resilient, allowing for adversaries who continually leak information in addition to tampering. Unfortunately, the locality of their construction in the continual setting was Omega(log n), meaning that if the original message size was n, then Omega(log n) positions of the codeword had to be accessed upon each decode and update instruction.

In this work, we ask whether super-constant locality is inherent in this setting. We answer the question affirmatively by showing tight upper and lower bounds. Specifically, in any threat model which allows for a rewind attack (wherein the attacker leaks a small amount of data, waits for the data to be overwritten, and then writes the original data back), we show that a locally decodable and updatable non-malleable code with block size Chi in poly(lambda) number of bits requires locality delta(n) in omega(1), where n = poly(lambda) is the message length and lambda is the security parameter. On the other hand, we revisit the threat model of Dachman-Soled et al. (TCC ’15), which indeed allows the adversary to launch a rewind attack, and present a construction of a locally decodable and updatable non-malleable code with block size Chi in Omega(lambda^{1/mu}) number of bits (for constant 0 < mu < 1) with locality delta(n), for any delta(n) in omega(1), and n = poly(lambda).

Speaker: Mohammad Hajiesmaili

Affiliation: Johns Hopkins University

Title: Online storage management in the electricity market

Abstract:

With unprecedented benefits in terms of efficiency, economy, reliability, and environmental awareness, recent years have seen a rapid proliferation of renewable energy sources such as solar and wind in electric power systems. Despite these benefits, the inherent uncertainty of renewables places severe challenges on the management of the entire energy system, including the electricity market. Leveraging energy storage systems is a promising approach to mitigating the uncertainty of renewables, by charging and discharging during mismatched periods. Energy storage systems, however, offer a new design space for additional optimization: a storage system can capture energy during periods when market prices are low and surrender stored energy when prices are high.

In this talk, we consider different scenarios of storage management on both the supply and demand sides of the electricity market. The uncertainty in both renewable output and electricity market prices emphasizes the need for online solution design. The underlying theoretical problems can be described as extensions of conversion problems in financial markets, i.e., the search for the best prices at which to buy and/or sell assets. The difference from conversion problems is that, in addition to the uncertainty in the price, our problems suffer from another source of uncertainty originating from renewable output. We follow online algorithm design and use the competitive ratio as the performance measure of our algorithms. We present our recent results in designing competitive online algorithms that achieve constant competitive ratios. In addition, we briefly discuss the case of utilizing the aggregate potential of distributed small-scale storage systems, such as EVs or residential storage, to participate in the electricity market through an aggregator. This setting is more challenging than the previous one, since the distributed sources also arrive in an online manner with heterogeneous profiles.
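The conversion problems referenced above have classical online solutions; one standard piece of background (not the speaker's algorithm) is the reservation-price policy for one-way trading, which is sqrt(M/m)-competitive when prices are guaranteed to lie in [m, M]. A minimal sketch with illustrative names:

```python
import math

def reservation_sell(prices, m, M):
    """Sell all stored energy the first time the online price reaches sqrt(m*M).

    With prices in [m, M], this is sqrt(M/m)-competitive against the offline
    optimum that sells at the maximum price in the sequence.
    """
    threshold = math.sqrt(m * M)
    for t, p in enumerate(prices):
        if p >= threshold:
            return t, p
    return len(prices) - 1, prices[-1]  # deadline reached: forced sale

m, M = 1.0, 100.0                       # threshold = sqrt(1 * 100) = 10.0
prices = [3.0, 7.0, 12.0, 40.0, 5.0]
t, p = reservation_sell(prices, m, M)
print(f"sold at step {t} for {p}")      # sells at step 2 for 12.0
```

Here the offline optimum sells at 40.0, so the realized ratio 40/12 is well within the sqrt(M/m) = 10 guarantee; the abstract's setting adds a second source of uncertainty (renewable output) on top of the price sequence.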

Overall, we believe that the changing landscape of the electric power system, from a centralized predictable system to a distributed uncertain one, opens a new research direction for leveraging online framework designs in this relatively under-explored area.

Speaker: Kuan Cheng

Affiliation: Johns Hopkins University

Title: Near-Optimal Secret Sharing and Error Correcting Codes in $\AC^0$

Abstract:

We study the question of minimizing the computational complexity of (robust) secret sharing schemes and error correcting codes. In standard instances of these objects, both encoding and decoding involve linear algebra, and thus cannot be implemented in the class $\AC^0$. The feasibility of non-trivial secret sharing schemes in $\AC^0$ was recently shown by Bogdanov et al. (Crypto 2016), and that of (locally) decoding errors in $\AC^0$ by Goldwasser et al. (STOC 2007).

In this paper, we show that by allowing some slight relaxation such as a small error probability, we can construct much better secret sharing schemes and error correcting codes in the class $\AC^0$. In some cases, our parameters are close to optimal and would be impossible to achieve without the relaxation. Our results significantly improve previous constructions in various parameters.

Our constructions combine several ingredients in pseudorandomness and combinatorics in an innovative way. Specifically, we develop a general technique to simultaneously amplify the security threshold and reduce the alphabet size, using a two-level concatenation of protocols together with a random permutation. We demonstrate the broader usefulness of this technique by applying it in the context of a variant of secure broadcast.

Based on a joint work with Yuval Ishai and Xin Li.

Speaker: Ilan Komargodski

Affiliation: Cornell Tech

Title: White-Box vs. Black-Box Complexity of Search Problems: Ramsey and Graph Property Testing

Abstract: Ramsey theory assures us that in any graph there is a clique or independent set of a certain size, roughly logarithmic in the graph size. But how difficult is it to find the clique or independent set? This problem is in TFNP, the class of search problems with guaranteed solutions. If the graph is given explicitly, then it is possible to do so while examining a linear number of edges. If the graph is given by a black box, where the box must be queried to determine whether a given edge exists, then a large number of queries must be issued.
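The explicit-graph case can be made concrete with the classical halving argument: repeatedly pick a vertex and keep whichever of its neighbor / non-neighbor sets is larger, examining only edges incident to picked vertices. A sketch under that standard argument (not code from the talk):

```python
import random

def clique_or_independent_set(adj):
    """adj: dict vertex -> set of neighbors (undirected graph).

    Vertices that 'kept neighbors' are pairwise adjacent; the rest are pairwise
    non-adjacent. Since each step at most halves the candidate set, one of the
    two groups has size at least about log2(n)/2.
    """
    alive = set(adj)
    picked = []                        # (vertex, kept_neighbors?)
    while alive:
        v = alive.pop()
        nbrs = alive & adj[v]
        non = alive - adj[v]
        if len(nbrs) >= len(non):
            picked.append((v, True))
            alive = nbrs
        else:
            picked.append((v, False))
            alive = non
    clique = [v for v, side in picked if side]
    indep = [v for v, side in picked if not side]
    return ("clique", clique) if len(clique) >= len(indep) else ("indep", indep)

random.seed(0)
n = 64
adj = {v: set() for v in range(n)}
for u in range(n):
    for w in range(u + 1, n):
        if random.random() < 0.5:      # a G(n, 1/2) random graph
            adj[u].add(w)
            adj[w].add(u)

kind, S = clique_or_independent_set(adj)
ok = all((w in adj[u]) == (kind == "clique") for i, u in enumerate(S) for w in S[i + 1:])
print(kind, len(S), ok)
```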

1) What if one is given a program or circuit (“white-box”) for computing the existence of an edge? Does the search problem remain hard?

2) Can we generically translate all TFNP black-box hardness into white-box hardness?

3) Does the problem remain hard if the black-box instance is small?

We will answer all of these questions and discuss related questions in the setting of property testing.

Joint work with Moni Naor and Eylon Yogev.