Tentative Program:

Thursday (May 26th)
Hodson Hall 210
9:00 Prof. Yair Amir, Chair, Department of Computer Science Welcome Remarks
9:05 Prof. Marius Zimand On Efficient Compression at Almost Minimum Description Length
9:25 Prof. Xin Li Almost Optimal Non-malleable Extractors and Privacy Amplification Protocols
9:45 Prof. Mohammad Hajiaghayi From Duels to Battlefields: Computing Equilibria of Blotto and Other Games
10:05 Prof. Jeremy Fineman Efficient Algorithms with Asymmetric Read and Write Costs
10:25 Coffee break
11:00 Distinguished Invited Speaker: Prof. Yevgeniy Dodis Non-Malleable Codes
1:30 Distinguished Invited Speaker: Prof. Boaz Barak Do Algorithms Believe in Unicorns?
2:30 Coffee break
3:00 Prof. Abhishek Jain Cryptography with Updates
3:20 Prof. Dana Dachman-Soled Non-Malleable Codes for Bounded Depth, Bounded Fan-in Circuits
3:40 Prof. Calvin Newport Noisy Distributed Computation: Some Preliminary Thoughts
4:00 Reception: The Johns Hopkins Club
4:00Reception: The Johns Hopkins Club


On Efficient Compression at Almost Minimum Description Length
Marius Zimand, Towson University

Abstract: A minimum description length (MDL) of a string x is a program p of length C(x) such that U(p) = x, where C(x) is the Kolmogorov complexity of x, and U is the universal machine underlying the definition of Kolmogorov complexity. Given x and C(x) it is possible to find an MDL description of x, but the process takes time larger than any computable function. Using randomization, from input (x, k = C(x)) it is possible to find a description of x of length close to MDL in polynomial time. Thus, computing a short description of x from x and C(x) is an interesting example of a task that can be done probabilistically in polynomial time, but deterministically requires time larger than any computable function. The requirement that k = C(x) is given is quite strong, but in fact it is enough that k is an upper bound on C(x) in order to construct a description of length approximately k, and this requirement is likely to hold in many applications. Similar results hold for distributed compression of multiple correlated sources. (Some of the results are joint work with Bruno Bauwens.)
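
As a rough, computable stand-in for the uncomputable C(x), one can use an off-the-shelf compressor: the compressed length is always an upper bound on the description length, which is exactly the kind of upper bound k the abstract says suffices. A minimal Python sketch (zlib is an illustrative proxy, not Kolmogorov complexity):

```python
import os
import zlib

def description_length(x: bytes) -> int:
    """Length of a zlib-compressed description of x.
    zlib gives only a computable upper-bound proxy for C(x):
    the true Kolmogorov complexity is uncomputable."""
    return len(zlib.compress(x, level=9))

structured = b"ab" * 5000          # highly regular: short description
random_ish = os.urandom(10000)     # incompressible with high probability

# A regular string compresses far below its length; a random one does not.
assert description_length(structured) < 200
assert description_length(random_ish) > 9000
```

The compressed length plays the role of the upper bound k: it certifies that a description of roughly that length exists, even though the true minimum is unknowable.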

Cryptography with Updates
Abhishek Jain, Johns Hopkins University

Abstract: The notion of randomized encoding allows one to encode a ``complex'' computation C(x) into a ``simpler'' randomized function Encode(C,x;r) whose output allows anyone to recover C(x) without learning anything else about C and x. Over the years, randomized encodings have found tremendous applications in cryptography. We initiate the study of updatable randomized encodings, where a randomized encoding of a circuit C and input x can be updated to an encoding of a related circuit C' and input x' by using an efficiently computable update string. For example, one may change the second AND gate in C to an OR gate. We also study an analogous notion of updatable garbled circuits (a decomposable form of randomized encodings). We identify updatable randomized encodings as a central object in the study of updatable cryptography. We demonstrate some applications to this effect, namely, to updatable non-interactive zero knowledge proofs and updatable multiparty computation. We provide a construction of updatable randomized encodings with support for arbitrary updates based on functional encryption. We also present a construction of updatable garbled circuits for bit-wise updates based on hardness of GapSVP and SIVP with sub-exponential approximation factors.

From Duels to Battlefields: Computing Equilibria of Blotto and Other Games
MohammadTaghi HajiAghayi, University of Maryland at College Park

Abstract: We study the problem of computing Nash equilibria of zero-sum games. Many natural zero-sum games have exponentially many strategies, but highly structured payoffs. For example, in the well-studied Colonel Blotto game (introduced by Borel in 1921), players must divide a pool of troops among a set of battlefields with the goal of winning (i.e., having more troops in) a majority. The Colonel Blotto game is commonly used for analyzing a wide range of applications from the U.S. presidential election, to innovative technology competitions, to advertisement, to sports. However, because of the size of the strategy space, standard methods for computing equilibria of zero-sum games fail to be computationally feasible. Indeed, despite its importance, only a few solutions for special variants of the problem are known. In this paper we show how to compute equilibria of Colonel Blotto games. Moreover, our approach takes the form of a general reduction: to find a Nash equilibrium of a zero-sum game, it suffices to design a separation oracle for the strategy polytope of any bilinear game that is payoff-equivalent. We then apply this technique to obtain the first polytime algorithms for a variety of games. In addition to Colonel Blotto, we also show how to compute equilibria in an infinite-strategy variant called the General Lotto game; this involves showing how to prune the strategy space to a finite subset before applying our reduction. We also consider the class of dueling games, first introduced by Immorlica et al. (2011). We show that our approach provably extends the class of dueling games for which equilibria can be computed: we introduce a new dueling game, the matching duel, on which prior methods fail to be computationally feasible but upon which our reduction can be applied.
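
For intuition on what computing an equilibrium of a zero-sum game means at toy scale: a 2x2 zero-sum game has a closed-form mixed equilibrium computable directly from the payoff matrix. The abstract's point is that Blotto-style games have exponentially many strategies, so this direct approach does not scale; the sketch below (illustrative baseline only, not the paper's method) shows the tiny case:

```python
def solve_2x2_zero_sum(A):
    """Closed-form mixed Nash equilibrium of a 2x2 zero-sum game.
    A[i][j] is the row player's payoff. Assumes the game has no
    pure-strategy saddle point (so the denominator is nonzero)."""
    (a, b), (c, d) = A
    denom = a - b - c + d
    p = (d - c) / denom          # row player's probability of row 0
    q = (d - b) / denom          # column player's probability of col 0
    value = (a * d - b * c) / denom
    return p, q, value

# Matching pennies: the unique equilibrium mixes 50/50 with value 0.
p, q, v = solve_2x2_zero_sum([[1, -1], [-1, 1]])
```

The formulas come from the indifference conditions: each player mixes so that the opponent's two pure strategies yield equal payoff. For exponentially large strategy spaces like Blotto's, one cannot even write down the matrix, which is why the reduction to separation oracles matters.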

Efficient Algorithms with Asymmetric Read and Write Costs
Jeremy Fineman, Georgetown University

Abstract: In several emerging technologies for computer memory (main memory), the cost of reading is significantly cheaper than the cost of writing. Such asymmetry in memory cost poses a fundamentally different model from the RAM for algorithm design. In this work we study lower and upper bounds for various problems under such asymmetric read and write costs. We consider both the case in which all but O(1) memory has asymmetric cost, and the case of a small cache of symmetric memory. We model both cases using the (M,w)-ARAM, in which there is a small (symmetric) memory of size M and a large unbounded (asymmetric) memory, both random access; the cost of reading from the large memory is unit, but the cost of writing is w >= 1. The general question is whether recomputation and extra reads can improve the overall cost by reducing the number of writes. This talk focuses primarily on longest common subsequence (LCS) and related dynamic-programming algorithms. If viewed as a pebbling game, the underlying n x n diamond DAG of the LCS computation requires \Omega(w n^2/M) cost, which indicates that no asymptotic improvement is possible in the ARAM model. Surprisingly, we can beat this lower bound for the LCS problem by partially computing some nodes before all their inputs are ready. We exhibit an algorithm that performs better by a factor of min(w^{1/3}, M^{1/2}).
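
For reference, the classic LCS dynamic program writes every one of its n x m table cells exactly once, and it is this write count that asymmetric-cost algorithms aim to reduce. A small instrumented sketch (illustrative only; not the talk's improved algorithm):

```python
def lcs_length(x, y):
    """Classic O(n*m) LCS dynamic program, instrumented to count
    reads and writes to the DP table. The naive algorithm writes
    every cell once; under asymmetric costs, each write costs w."""
    n, m = len(x), len(y)
    stats = {"reads": 0, "writes": 0}
    dp = [[0] * (m + 1) for _ in range(n + 1)]

    def read(i, j):
        stats["reads"] += 1
        return dp[i][j]

    def write(i, j, v):
        stats["writes"] += 1
        dp[i][j] = v

    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if x[i - 1] == y[j - 1]:
                write(i, j, read(i - 1, j - 1) + 1)
            else:
                write(i, j, max(read(i - 1, j), read(i, j - 1)))
    return dp[n][m], stats

# Textbook example: LCS("ABCBDAB", "BDCABA") has length 4; writes = 7 * 6.
length, stats = lcs_length("ABCBDAB", "BDCABA")
```

In the (M,w)-ARAM each of these n*m writes costs w, so trading writes for extra (cheap) reads and recomputation is exactly the lever the abstract describes.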

Non-Malleable Codes
Yevgeniy Dodis, New York University

Abstract: Non-malleable codes provide a useful and meaningful security guarantee in situations where traditional error-correction (and even error-detection) is impossible; for example, when the attacker can completely overwrite the encoded message. Informally, a code is non-malleable if the message contained in a modified codeword is either the original message, or a completely unrelated value. Although such codes do not exist if the family of tampering functions F is completely unrestricted, they are known to exist for many broad tampering families F. One such natural family is the family of tampering functions in the so called split-state model. Here the message m is encoded into two shares, and the attacker is allowed to arbitrarily tamper with each share individually. As our main result, we give several constructions of efficient, multi-bit, information-theoretically-secure non-malleable codes in the split-state model, culminating in the first explicit construction achieving a constant encoding rate. The key to obtaining our results is a simple notion of *non-malleable reductions*, which is of independent interest. (Joint works with Divesh Aggarwal, Tomasz Kazana, Shachar Lovett and Maciej Obremski).
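
To see why non-malleability is nontrivial in the split-state model, consider the naive encoding that XOR-secret-shares the message into two shares: it perfectly hides the message, yet flipping one bit of one share flips a predictable bit of the decoded message, yielding a related (not unrelated) value. A toy sketch of this attack:

```python
import secrets

def xor_encode(msg: bytes):
    """Naive split-state encoding: two shares whose XOR is the message.
    Each share alone reveals nothing, but the code is NOT non-malleable."""
    left = secrets.token_bytes(len(msg))
    right = bytes(a ^ b for a, b in zip(msg, left))
    return left, right

def xor_decode(left: bytes, right: bytes) -> bytes:
    return bytes(a ^ b for a, b in zip(left, right))

msg = b"attack at dawn"
left, right = xor_encode(msg)

# Split-state tampering: the attacker flips one bit of the left share only.
tampered_left = bytes([left[0] ^ 0x01]) + left[1:]
decoded = xor_decode(tampered_left, right)
# decoded differs from msg in exactly one predictable bit: a *related*
# value, which is precisely what a non-malleable code must rule out.
```

The constructions in the talk are designed so that any such independent tampering of the two shares decodes to either the original message or something statistically unrelated to it.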

Do Algorithms Believe in Unicorns?
Boaz Barak, Harvard John A. Paulson School of Engineering and Applied Sciences

Abstract: Bayesian analysis uses probabilities to model uncertainty. Traditionally this uncertainty is due to a lack of *information*, but in many cases the uncertainty of observers is due to a lack of *computational resources*. That is, in an information theoretic sense the observations completely determine the unseen parameters, but recovering the parameters from the observed data is a hard computational problem. In this talk I will discuss one way to assign internally consistent Bayesian probabilities to model computational uncertainty, based on the Sum of Squares (SoS) semidefinite programming hierarchy (Shor '87, Parrilo '00, Lasserre '01). I will illustrate this notion via the example of the *planted clique problem* (Karp '76, Jerrum '92, Kucera '95). This is a central problem in average case complexity that has been connected to questions in a variety of areas including cryptography, computational economics, computational biology, sparse recovery, and machine learning. We show that for every constant epsilon, the SoS algorithm requires n^{Omega(log n)} time to detect a planted clique of size n^{1/2-epsilon} in a G(n,1/2) Erdos-Renyi random graph. We do so by giving a general approach to find consistent probabilities to model the uncertainty of the SoS algorithm on the location of the planted clique. Interestingly, a computationally restricted agent must make probabilistic inferences about objects that do not exist (such as a large clique in a random graph), hence a theory of computational Bayesian probabilities needs to make sense of quantities such as "the probability that a unicorn has blue eyes". I will not assume background in semidefinite programming and the talk will aim to be accessible (and hopefully enjoyable) to anyone interested in uncertainty, data analysis, and algorithms. (Based on joint work with Hopkins, Kelner, Kothari, Moitra and Potechin.)
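
For concreteness, the planted clique distribution is easy to sample: draw G(n, 1/2) and then fully connect a random set of k vertices. The information is all there in the adjacency matrix, yet recovering the clique for k well below sqrt(n) is believed computationally hard, which is the regime the SoS lower bound addresses. A small sketch (illustrative only):

```python
import random

def planted_clique(n, k, seed=0):
    """Sample G(n, 1/2) with a planted k-clique: each pair of vertices
    is an edge independently with probability 1/2, then a random set
    of k vertices is made fully connected."""
    rng = random.Random(seed)
    adj = [[False] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            e = rng.random() < 0.5
            adj[i][j] = adj[j][i] = e
    clique = rng.sample(range(n), k)
    for i in clique:
        for j in clique:
            if i != j:
                adj[i][j] = adj[j][i] = True
    return adj, sorted(clique)

# n = 200, k = 12: well below sqrt(n) * polylog, i.e. the hard regime.
adj, clique = planted_clique(200, 12)
```

Information-theoretically the graph determines the planted set (for k above roughly 2 log n, no other clique of that size is likely to exist), yet the talk shows SoS needs quasi-polynomial time to detect it: exactly the gap between informational and computational uncertainty.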

Almost Optimal Non-malleable Extractors and Privacy Amplification Protocols
Xin Li, Johns Hopkins University

Abstract: Privacy amplification is a basic problem in cryptography, where two parties communicate over an adversarially controlled channel to convert their shared weak random secret into nearly uniform secret random strings. The primary goal is to use as few rounds of interaction as possible while achieving entropy loss as small as possible. A key ingredient in designing privacy amplification protocols is an object called a non-malleable extractor. Previously, explicit constructions of such extractors required entropy at least log^2 (n/\epsilon) for input length n and error \epsilon, which translates into a security parameter of at most \sqrt{k} when the shared secret has entropy k. In this talk I'll describe the first explicit construction of a non-malleable extractor that only requires entropy log^{1+o(1)} (n/\epsilon), which in turn gives optimal privacy amplification protocols for nearly all possible security parameters. (Joint work with Eshan Chattopadhyay.)
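
Against a merely passive eavesdropper, privacy amplification can already be done with a seeded 2-universal hash via the leftover hash lemma; the non-malleable extractors of the talk are what is needed when the adversary can actively tamper with the channel. A toy sketch of the passive case (the specific hash family and parameters here are illustrative choices, not the talk's construction):

```python
import secrets

P = (1 << 61) - 1  # a Mersenne prime defining the hash family's field

def universal_hash(seed, x, m):
    """2-universal hash h_{a,b}(x) = ((a*x + b) mod P) mod 2^m.
    By the leftover hash lemma, applied to a weak secret x with enough
    min-entropy, the output is statistically close to uniform given the
    seed. A public random seed (a, b) suffices against a *passive*
    adversary; active tampering requires non-malleable extractors."""
    a, b = seed
    return ((a * x + b) % P) % (1 << m)

def fresh_seed():
    return (1 + secrets.randbelow(P - 1), secrets.randbelow(P))

weak_secret = 0x1234_5678_9ABC  # stand-in for the shared weak secret
seed = fresh_seed()             # sent in the clear over the channel
key = universal_hash(seed, weak_secret, 32)
# Both parties compute the same 32-bit near-uniform key from the seed.
```

The active setting is harder precisely because the adversary may replace the transmitted seed with a related one; a non-malleable extractor guarantees the output under any tampered seed is independent of the honest output.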

Non-Malleable Codes for Bounded Depth, Bounded Fan-in Circuits
Dana Dachman-Soled, University of Maryland at College Park

Abstract: We show how to construct efficient, unconditionally secure non-malleable codes for bounded output locality. In particular, our scheme is resilient against functions such that any output bit depends on at most $n^{\delta}$ bits, where $n$ is the total number of bits in a codeword and $0 \leq \delta < 1$ is a constant. Notably, this tampering class includes NC^0. (Joint work with: Marshall Ball, Mukul Kulkarni and Tal Malkin)

Noisy Distributed Computation: Some Preliminary Thoughts
Calvin Newport, Georgetown University

Abstract: In the 1950s, inspired by issues that arose from his work on early digital computers, John von Neumann investigated the following simple question: when is it possible to design a circuit that reliably solves a given problem using unreliable gates (i.e., gates that generate the wrong output bit with some fixed probability)? This question spawned a new research field that came to be called noisy computation. This field had significant practical impacts on the engineering of early digital computers and led to decades of deep theory. In this talk, I will present some initial efforts to adapt the concept of noisy computation to the world of distributed computing. In more detail, the canonical approach to formally modeling a distributed system is to represent the distributed processes as state machines that interact through shared objects or network channels. Whereas von Neumann captured noise as a circuit gate probabilistically generating the wrong output bit, we propose to capture noise as a state machine transition function that probabilistically transitions to the wrong next state. Note that noisy computation in this context captures something fundamentally different than the standard distributed fault models such as crash and Byzantine failures. These standard fault models describe processes that stop working altogether or behave arbitrarily, while our notion of noisy computation describes systems in which individual processes more or less follow their specified logic, but cannot guarantee to execute their steps perfectly. The core question inspired by this topic follows naturally: for what models, problems, and definitions of noise is it possible for a collection of unreliably computing processes to work together to reliably solve a global problem?
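
Von Neumann's original remedy for unreliable gates was redundancy plus majority voting, and the same intuition is a natural starting point for noisy state machines. A small Monte Carlo sketch of the effect (illustrative only, not from the talk):

```python
import random

def noisy_gate(correct_bit, p, rng):
    """A component that emits the wrong bit with probability p,
    as in von Neumann's model of unreliable gates."""
    return correct_bit ^ (rng.random() < p)

def majority_of(copies, correct_bit, p, rng):
    """Run the noisy component `copies` times and take a majority
    vote: the classic trick for building reliable circuits from
    unreliable parts."""
    ones = sum(noisy_gate(correct_bit, p, rng) for _ in range(copies))
    return int(ones > copies // 2)

rng = random.Random(42)
trials = 2000
single_errors = sum(noisy_gate(1, 0.1, rng) == 0 for _ in range(trials))
voted_errors = sum(majority_of(15, 1, 0.1, rng) == 0 for _ in range(trials))
# Majority voting over 15 noisy copies drives a 10% per-gate error
# rate down by orders of magnitude.
```

In the distributed setting the analogous question is far less clear: a process that probabilistically takes the wrong transition may diverge from its peers in ways that simple replication and voting do not obviously repair, which is what makes the abstract's core question interesting.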