We typically have seminars on Wednesdays at noon in Malone 228. All seminar announcements will be sent to the theory mailing list.

Title: Fault Resilient Graph Structures

Speaker: Merav Parter (MIT)

Abstract:

A fault-tolerant (FT) structure for a network is required to continue functioning following the failure of some of the network’s edges or vertices. Fault resilience can be introduced into the network in several different ways. This talk will focus on a notion of fault tolerance whereby the structure at hand is augmented (by adding various components to it) so that, subsequent to the failure of some of the network’s vertices or edges, the surviving part of the structure is still operational. As this augmentation carries certain costs, it is desirable to minimize the number of added components.

We will revisit several constructions of sparse fault-tolerant structures, such as FT spanners and FT shortest-path trees. I will also present a new model for fault-resilient network structures that mixes two orthogonal protection mechanisms: (a) backup, namely, augmenting the structure with many (redundant) low-cost but fault-prone components, and (b) reinforcement, namely, acquiring high-cost but fault-resistant components. A trade-off between these two mechanisms will be presented in the concrete setting of shortest-path trees.
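To make the setting concrete, here is a minimal sketch of the brute-force baseline that sparse FT structures improve upon (this is not a construction from the talk): for every possible single edge failure, precompute replacement shortest-path distances by rerunning Dijkstra on the surviving graph. The graph and helper names are illustrative.

```python
import heapq

def dijkstra(adj, src, banned=frozenset()):
    """Shortest-path distances from src, skipping edges listed in `banned`."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in adj[u]:
            if (u, v) in banned or (v, u) in banned:
                continue
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def naive_ft_distances(adj, src, edges):
    """For each single edge failure, precompute all replacement distances.

    One full Dijkstra per failure, storing O(|E| * |V|) values; FT
    shortest-path-tree constructions answer the same queries with a
    much sparser augmented structure."""
    return {e: dijkstra(adj, src, banned={e}) for e in edges}

# Tiny example: a square with a diagonal shortcut.
adj = {
    "s": [("a", 1), ("b", 4)],
    "a": [("s", 1), ("b", 1), ("t", 5)],
    "b": [("s", 4), ("a", 1), ("t", 1)],
    "t": [("a", 5), ("b", 1)],
}
edges = [("s", "a"), ("a", "b"), ("b", "t")]
table = naive_ft_distances(adj, "s", edges)
# Without failures, dist(s, t) = 3 via s-a-b-t; if ("a", "b") fails,
# the best surviving route to t costs 5.
```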

**Talk Title:**

Dependent Random Graphs and Multiparty Pointer Jumping

**Abstract:**

We initiate a study of a relaxed version of the standard Erdős–Rényi random graph model, where each edge may depend on a few other edges. We call such graphs *dependent random graphs* and give tight bounds on the clique and chromatic numbers of such graphs. Surprisingly, some of the bounds in the standard random graph model continue to hold in this relaxed setting. For example, the size of the largest clique in a dependent random graph remains roughly log(n)/log(1/p).
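For intuition about the baseline the dependent model is compared against, one can sample a classical Erdős–Rényi graph and compare its clique number to log(n)/log(1/p). This sketch illustrates only the standard G(n, p) model, not the dependent graphs from the talk; the parameters are arbitrary.

```python
import itertools
import math
import random

def sample_gnp(n, p, rng):
    """Adjacency sets of a standard Erdos-Renyi G(n, p) sample."""
    adj = {v: set() for v in range(n)}
    for u, v in itertools.combinations(range(n), 2):
        if rng.random() < p:
            adj[u].add(v)
            adj[v].add(u)
    return adj

def clique_number(adj):
    """Brute-force largest clique size (only viable for small n)."""
    n = len(adj)
    for k in range(n, 0, -1):
        for subset in itertools.combinations(range(n), k):
            if all(v in adj[u] for u, v in itertools.combinations(subset, 2)):
                return k
    return 0

rng = random.Random(0)
n, p = 14, 0.5
omega = clique_number(sample_gnp(n, p, rng))
predicted = math.log(n) / math.log(1 / p)  # about 3.81 for n = 14
# omega typically lands within a small additive constant of the prediction.
```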

As an application, we give a new upper bound on communication complexity of the Multiparty Pointer Jumping (MPJ) problem in the number-on-the-forehead (NOF) model. NOF communication lies at the current frontier of our understanding of communication complexity, and MPJ is one of the canonical problems in this setting. Furthermore, sufficiently strong bounds for MPJ would have important consequences for circuit complexity.

No prior knowledge is assumed aside from basic discrete probability. I hope to motivate both random graphs and the application and demonstrate why NOF communication is an important active research area.

This talk is based on research that is joint work with Mario Sanchez.

**Bio:**

Joshua Brody received a bachelor’s degree in Mathematics/Computer Science from Carnegie Mellon University, a Master’s Degree in Computer Science from New York University, and a PhD in Computer Science from Dartmouth College. Prior to graduate school, he served in the Peace Corps teaching high school mathematics in Burkina Faso, West Africa. Dr. Brody is currently an Assistant Professor in the Computer Science department at Swarthmore College. His primary research area is communication complexity. Additional research interests include several areas of theoretical computer science, including streaming algorithms, property testing, and data structures.

In a t-out-of-n robust secret sharing scheme, a secret message is shared among n parties, who can reconstruct the message by combining their shares. An adversary can adaptively corrupt up to t of the parties, obtain their shares, and modify them arbitrarily. The scheme should satisfy privacy, meaning that the adversary cannot learn anything about the shared message, and robustness, meaning that the adversary cannot cause the reconstruction procedure to output an incorrect message. Such schemes are only possible in the case of an honest majority, and here we focus on unconditional security in the maximal corruption setting, where n = 2t+1. In this scenario, to share an m-bit message with a reconstruction failure probability of at most 2^(-k), a known lower bound shows that the share size must be at least m + k bits. On the other hand, all prior constructions have share size that scales linearly with the number of parties n; the prior state-of-the-art scheme, due to Cevallos et al. (EUROCRYPT ’12), achieves share size m + Õ(k + n).

In this work, we construct the first robust secret sharing scheme in the maximal corruption setting with n=2t+1, that avoids the linear dependence between share size and the number of parties n. In particular, we get a share size of only m+O˜(k) bits. Our scheme is computationally efficient and relies on approximation algorithms for the minimum graph bisection problem.
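The starting point for all such schemes is plain (non-robust) Shamir t-out-of-n secret sharing, sketched below over a small prime field; robust schemes layer authentication on top so that reconstruction survives up to t modified shares. The field, parameters, and helper names here are illustrative, not taken from the paper.

```python
import random

P = 2**61 - 1  # a Mersenne prime; shares live in the field GF(P)

def share(secret, t, n, rng):
    """Shamir t-out-of-n sharing: any t+1 shares determine the secret;
    any t shares reveal nothing about it."""
    coeffs = [secret] + [rng.randrange(P) for _ in range(t)]
    def f(x):
        y = 0
        for c in reversed(coeffs):  # Horner evaluation mod P
            y = (y * x + c) % P
        return y
    return [(i, f(i)) for i in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at x = 0 over GF(P)."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

rng = random.Random(42)
t, n = 2, 5  # n = 2t + 1, the maximal corruption setting
shares = share(123456789, t, n, rng)
# Plain Shamir is NOT robust: if an adversary alters one of the shares
# used below, reconstruction silently yields a wrong value.
```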

This talk is based on a EUROCRYPT 2016 paper, joint with Allison Bishop, Valerio Pastro, Rajmohan Rajaraman, and Daniel Wichs.

Details TBA

Speaker: Samir Khuller

Affiliation: University of Maryland College Park

Title: To do or not to do: scheduling to minimize energy

Abstract:

Traditional scheduling algorithms, especially those involving job scheduling on parallel machines, make the assumption that the machines are always available and try to schedule jobs to minimize specific job-related metrics. Since modern data centers consume massive amounts of energy, we consider job scheduling problems that take energy consumption into account, turning machines off, especially during periods of low demand. The ensuing problems relate very closely to classical covering problems such as capacitated set cover, and we discuss several recent results in this regard.

(This talk covers two papers, and is joint work with Jessica Chang, Hal Gabow and Koyel Mukherjee.)
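The covering connection can be illustrated with the textbook greedy algorithm for (uncapacitated) set cover, which repeatedly picks the set covering the most still-uncovered elements; think of each set as a machine/time slot that is switched on to "cover" the jobs it can run. This is the classical ln(n)-approximation sketch, not the capacitated variants from the talk, and the example data is invented.

```python
def greedy_set_cover(universe, sets):
    """Classical greedy set cover: an H_n ~ ln(n) approximation.
    `sets` maps a set id (think: a machine/time slot to power on)
    to the jobs it covers."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Pick the set covering the most still-uncovered jobs.
        best = max(sets, key=lambda s: len(sets[s] & uncovered))
        if not sets[best] & uncovered:
            raise ValueError("some jobs cannot be covered")
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

jobs = {1, 2, 3, 4, 5}
machines = {
    "m1": {1, 2, 3},
    "m2": {3, 4},
    "m3": {4, 5},
    "m4": {1, 5},
}
picked = greedy_set_cover(jobs, machines)
# Greedy first powers on m1 (covers 3 jobs), then m3 (covers {4, 5}).
```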

Speaker: Justin Hsu

Affiliation: University of Pennsylvania

Speaker: David Harris

Affiliation: University of Maryland College Park

Title: Improved parallel algorithms for hypergraph maximal independent set

Abstract:

Finding a maximal independent set in hypergraphs has been a long-standing algorithmic challenge. The best parallel algorithm for hypergraphs of rank $r$ was developed by Beame and Luby (1990) and Kelsen (1992), running in time roughly $(\log n)^{r!}$. This is in RNC for fixed $r$, but is still quite expensive. We improve on the analysis of Kelsen to show that a slight variant of this algorithm runs in time $(\log n)^{2^r}$. We derandomize this algorithm to achieve a deterministic algorithm running in time $(\log n)^{2^{r+3}}$ using $m^{O(1)}$ processors.

Our analysis also applies when $r$ is slowly growing; using this in conjunction with a strategy of Bercea et al. (2015) gives a deterministic algorithm running in time $\exp(O(\log m/\log \log m))$. This is faster than the algorithm of Bercea et al., and in addition it is deterministic. In particular, this is sub-polynomial time for graphs with $m \leq n^{o(\log \log n)}$ edges.
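For readers new to the problem, here is the trivial sequential definition the parallel algorithms must match: a maximal independent set in a hypergraph is a vertex set containing no hyperedge entirely, to which no further vertex can be added. This greedy sketch runs in n sequential steps; the algorithms in the talk achieve the same output quality in polylogarithmic parallel time.

```python
def greedy_hypergraph_mis(n, edges):
    """Sequential greedy maximal independent set in a hypergraph on
    vertices 0..n-1: add each vertex unless doing so would complete
    (fully contain) some hyperedge."""
    independent = set()
    edge_sets = [frozenset(e) for e in edges]
    for v in range(n):
        candidate = independent | {v}
        if all(not e <= candidate for e in edge_sets):
            independent.add(v)
    return independent

# A rank-3 hypergraph on 6 vertices.
edges = [{0, 1, 2}, {1, 3, 4}, {2, 4, 5}]
mis = greedy_hypergraph_mis(6, edges)
# No hyperedge lies fully inside mis, and no vertex can be added:
# adding 2 would complete {0,1,2}; adding 4 would complete {1,3,4}.
```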

Speaker: Adam Smith

Affiliation: Penn State University

Title: Privacy, Information and Generalization

Abstract:

Consider an agency holding a large database of sensitive personal information: medical records, census survey answers, web search records, or genetic data, for example. The agency would like to discover and publicly release global characteristics of the data (say, to inform policy or business decisions) while protecting the privacy of individuals’ records. I will begin by discussing what makes this problem difficult, and exhibit some of the nontrivial issues that plague simple attempts at anonymization and aggregation. Motivated by this, I will present differential privacy, a rigorous definition of privacy in statistical databases that has received significant attention.
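The canonical mechanism achieving differential privacy for a counting query is the Laplace mechanism: add noise scaled to the query's sensitivity. The sketch below is a standard illustration, not from the talk; the record layout and parameters are invented.

```python
import random

def laplace_noise(scale, rng):
    """Sample Laplace(0, scale) as the difference of two iid
    exponentials (a standard identity), using only the stdlib."""
    return scale * (rng.expovariate(1.0) - rng.expovariate(1.0))

def private_count(records, predicate, epsilon, rng):
    """Release a count with epsilon-differential privacy.
    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace(1/epsilon) noise
    suffices for epsilon-DP."""
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

rng = random.Random(7)
records = [{"age": a} for a in [23, 37, 41, 52, 29, 63, 45]]
noisy = private_count(records, lambda r: r["age"] >= 40, epsilon=0.5, rng=rng)
# True count is 4; the released value is 4 plus Laplace(2) noise,
# so it is unbiased but masks any single individual's contribution.
```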

In the second part of the talk, I will explain how differential privacy is connected to a seemingly different problem: “adaptive data analysis”, the practice by which insights gathered from data are used to inform further analysis of the same data sets. This is increasingly common in scientific research, in which data sets are shared and re-used across multiple studies. Classical statistical theory assumes that the analysis to be run is selected independently of the data. This assumption breaks down when data are re-used; the resulting dependencies can significantly bias the analyses’ outcomes. I’ll show how limiting the information revealed about a data set during analysis allows one to control such bias, and why differentially private analyses provide a particularly attractive tool for limiting information.

Based on several papers, including recent joint works with R. Bassily, K. Nissim, U. Stemmer, T. Steinke and J. Ullman (STOC 2016) and R. Rogers, A. Roth and O. Thakkar (FOCS 2016).

Bio:

Adam Smith is a professor of Computer Science and Engineering at Penn State. His research interests lie in data privacy and cryptography, and their connections to machine learning, statistics, information theory, and quantum computing. He received his Ph.D. from MIT in 2004 and has held visiting positions at the Weizmann Institute of Science, UCLA, Boston University and Harvard. In 2009, he received a Presidential Early Career Award for Scientists and Engineers (PECASE). In 2016, he received the Theory of Cryptography Test of Time award, jointly with C. Dwork, F. McSherry and K. Nissim.