Publications



Peer-Reviewed Publications:

  1. Approximation Algorithms for Minimizing Congestion in Demand-Aware Networks
    INFOCOM '24
    | arxiv
    (with Wenkai Dai, Klaus-Tycho Foerster, Long Luo, and Stefan Schmid)
    Emerging reconfigurable optical communication technologies enable demand-aware networks: networks whose static topology can be enhanced with demand-aware links optimized towards the traffic patterns the network serves. This paper studies the algorithmic problem of how to jointly optimize the topology and the routing in such demand-aware networks, to minimize congestion.

    We investigate this problem along two dimensions: (1) whether flows are splittable or unsplittable, and (2) whether routing on the hybrid topology is segregated or not, i.e., whether or not flows either have to use exclusively either the static network or the demand-aware connections. For splittable and segregated routing, we show that the problem is $2$-approximable in general, but APX-hard even for uniform demands induced by a bipartite demand graph.

    For unsplittable and segregated routing, we show an upper bound of $O\left(\log m/ \log\log m \right)$ and a lower bound of $\Omega\left(\log m/ \log\log m \right)$ for polynomial-time approximation algorithms, where $m$ is the number of static links. We further show that under splittable (resp., unsplittable) and non-segregated routing, even for demands of a single source (resp., destination), the problem cannot be approximated within a ratio better than $\Omega\left(\frac{c_{\max}}{c_{\min}} \right)$ unless P=NP, where $c_{\max}$ (resp., $c_{\min}$) denotes the maximum (resp., minimum) capacity. It is still NP-hard for uniform capacities, but can be solved efficiently for a single commodity and uniform capacities.

    We complement our theoretical results with trace-driven simulations, and find that our algorithms can significantly reduce the network congestion in comparison to the state of the art.
  2. Controlling Tail Risk in Online Ski-Rental
    SODA '24
    | arxiv
    (with Sungjin Im, Thomas Lavastida, Benjamin Moseley, and Sergei Vassilvitskii)
    The classical ski-rental problem admits a textbook 2-competitive deterministic algorithm, and a simple randomized algorithm that is e/(e−1)-competitive in expectation. The randomized algorithm, while optimal in expectation, has a large variance in its performance: it has more than a 37% chance of its competitive ratio exceeding 2, and a Θ(1/n) chance of its competitive ratio exceeding n!

    We ask what happens to the optimal solution if we insist that the tail risk, i.e., the chance of the competitive ratio exceeding a specific value, is bounded by some constant δ. We find that this additional modification significantly changes the structure of the optimal solution. The probability of purchasing skis on a given day becomes non-monotone, discontinuous, and arbitrarily large (for sufficiently small tail risk δ and large purchase cost n).
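    A minimal sketch of the two classical baselines discussed above (assuming rental cost 1 per day and purchase cost n; function names are illustrative, not from the paper):

    ```python
    import random

    def deterministic_ski_rental(n, num_ski_days):
        """Break-even rule: rent through day n-1, buy on day n (2-competitive)."""
        if num_ski_days < n:
            return num_ski_days          # rented every day, never bought
        return (n - 1) + n               # rented n-1 days, then bought

    def randomized_ski_rental(n, num_ski_days):
        """Textbook e/(e-1)-competitive strategy: buy on day i with
        probability proportional to (1 - 1/n)^(n - i), for i = 1..n."""
        weights = [(1 - 1 / n) ** (n - i) for i in range(1, n + 1)]
        buy_day = random.choices(range(1, n + 1), weights=weights)[0]
        if num_ski_days < buy_day:
            return num_ski_days          # season ended before we bought
        return (buy_day - 1) + n         # rented buy_day-1 days, then bought
    ```

    Simulating randomized_ski_rental makes the tail risk described above visible: the competitive ratio exceeds 2 exactly when the sampled buy day is small and the season turns out to be long.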
  3. Improved Approximations for Relative Survivable Network Design
    WAOA '23
    | arxiv
    (with Ama Koranteng, Guy Kortsarz, and Zeev Nutov)
    Invited to special issue of Theory of Computing Systems
    One of the most important and well-studied settings for network design is edge-connectivity requirements. This encompasses uniform demands such as the Minimum k-Edge-Connected Spanning Subgraph problem as well as nonuniform demands such as the Survivable Network Design problem (SND). In a recent paper, [Dinitz, Koranteng, Kortsarz APPROX '22] observed that a weakness of these formulations is that they do not enable one to consider fault-tolerance in graphs that have just one small cut. To remedy this, they introduced new variants of these problems under a notion of "relative" fault-tolerance. Informally, this requires not that two nodes be connected if there are a bounded number of faults (as in the classical setting), but that two nodes be connected if there are a bounded number of faults and the two nodes are connected in the underlying graph post-faults. The problem is already highly non-trivial even for the case of a single demand.

    Due to difficulties introduced by this new notion of fault-tolerance, the results in [Dinitz, Koranteng, Kortsarz APPROX '22] are quite limited. For the Relative Survivable Network Design problem (RSND), when the demands are not uniform they give a nontrivial result only when there is a single demand with a connectivity requirement of 3: a non-optimal 27/4-approximation. We strengthen this result in two significant ways: We give a 2-approximation for RSND where all requirements are at most 3, and a 2^{O(k^2)}-approximation for RSND with a single demand of arbitrary value k. To achieve these results, we first use the "cactus representation" of minimum cuts to give a lossless reduction to normal SND. Second, we extend the techniques of [Dinitz, Koranteng, Kortsarz APPROX '22] to prove a generalized and more complex version of their structure theorem, which we then use to design a recursive approximation algorithm.
  4. Epic Fail: Emulators can tolerate polynomially many edge faults for free
    ITCS '23
    (with Greg Bodwin and Yasamin Nazari)
    A t-emulator of a graph G is a graph H that approximates its pairwise shortest path distances up to multiplicative t error. We study fault tolerant t-emulators, under the model recently introduced by Bodwin, Dinitz, and Nazari [ITCS 2022] for vertex failures. In this paper we consider the version for edge failures, and show that they exhibit surprisingly different behavior.

    In particular, our main result is that, for (2k−1)-emulators with k odd, we can tolerate a polynomial number of edge faults for free. For example: for any n-node input graph, we construct a 5-emulator (k=3) on O(n^{4/3}) edges that is robust to f = O(n^{2/9}) edge faults. It is well known that Ω(n^{4/3}) edges are necessary even if the 5-emulator does not need to tolerate any faults. Thus we pay no extra cost in the size to gain this fault tolerance. We leave open the precise range of free fault tolerance for odd k, and whether a similar phenomenon can be proved for even k.
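    For concreteness, here is a small checker for the basic (non-faulty) emulator property, assuming unit edge weights in both graphs; real emulators may use arbitrary weighted edges, and the fault-tolerant version would additionally quantify over fault sets:

    ```python
    from collections import deque

    def bfs_distances(adj, source):
        """Single-source shortest paths in an unweighted graph."""
        dist = {source: 0}
        queue = deque([source])
        while queue:
            u = queue.popleft()
            for v in adj.get(u, ()):
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    def is_t_emulator(adj_G, adj_H, t):
        """Check d_G(u,v) <= d_H(u,v) <= t * d_G(u,v) for all connected pairs."""
        for u in adj_G:
            d_G = bfs_distances(adj_G, u)
            d_H = bfs_distances(adj_H, u)
            for v, d in d_G.items():
                if v != u and not (d <= d_H.get(v, float('inf')) <= t * d):
                    return False
        return True
    ```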
  5. Algorithms with Prediction Portfolios
    NeurIPS '22
    (with Sungjin Im, Thomas Lavastida, Benjamin Moseley, and Sergei Vassilvitskii)
    The research area of algorithms with predictions has seen recent success showing how to incorporate machine learning into algorithm design to improve performance when the predictions are correct, while retaining worst-case guarantees when they are not. Most previous work has assumed that the algorithm has access to a single predictor. However, in practice, there are many machine learning methods available, often with incomparable generalization guarantees, making it hard to pick a best method a priori. In this work we consider scenarios where multiple predictors are available to the algorithm and the question is how to best utilize them.

    Ideally, we would like the algorithm's performance to depend on the quality of the best predictor. However, utilizing more predictions comes with a cost, since we now have to identify which prediction is best. We study the use of multiple predictors for a number of fundamental problems, including matching, load balancing, and non-clairvoyant scheduling, which have been well-studied in the single predictor setting. For each of these problems we introduce new algorithms that take advantage of multiple predictors, and prove bounds on the resulting performance.
  6. Smoothed Analysis of Information Spreading in Dynamic Networks
    DISC '22
    (with Jeremy Fineman, Seth Gilbert, and Calvin Newport)
    Best Paper Award, Invited to Journal of the ACM
    The best known solutions for k-message broadcast in dynamic networks of size n require Ω(nk) rounds. In this paper, we see if these bounds can be improved by smoothed analysis. To do so, we study perhaps the most natural randomized algorithm for disseminating tokens in this setting: at every time step, choose a token to broadcast randomly from the set of tokens you know. We show that with even a small amount of smoothing (i.e., one random edge added per round), this natural strategy solves k-message broadcast in Õ(n + k^3) rounds, with high probability, beating the best known bounds for k = o(√n) and matching the Ω(n + k) lower bound for static networks for k = O(n^{1/3}) (ignoring logarithmic factors). In fact, the main result we show is even stronger and more general: given ℓ-smoothing (i.e., ℓ random edges added per round), this simple strategy terminates in O(k n^{2/3} log^{1/3}(n) ℓ^{-1/3}) rounds. We then prove this analysis close to tight with an almost-matching lower bound.

    To better understand the impact of smoothing on information spreading, we next turn our attention to static networks, proving a tight bound of Õ(k√n) rounds to solve k-message broadcast, which is better than what our strategy can achieve in the dynamic setting. This confirms the intuition that although smoothed analysis reduces the difficulties induced by changing graph structures, it does not eliminate them altogether.

    Finally, we apply tools developed to support our smoothed analysis to prove an optimal result for k-message broadcast in so-called well-mixed networks in the absence of smoothing. By comparing this result to an existing lower bound for well-mixed networks, we establish a formal separation between oblivious and strongly adaptive adversaries with respect to well-mixed token spreading, partially resolving an open question on the impact of adversary strength on the k-message broadcast problem.
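    The "broadcast a uniformly random known token" strategy is simple enough to simulate directly. A sketch, where nodes is a list, edges_for_round is an assumed caller-supplied model of the dynamic graph, and smoothing implements ℓ-smoothing:

    ```python
    import random

    def smoothed_token_spread(nodes, edges_for_round, initial_tokens,
                              num_tokens, smoothing=1, max_rounds=100000):
        """Each round, every node broadcasts a uniformly random token it
        knows over that round's edges (plus `smoothing` random edges).
        Returns the round at which all nodes know all tokens, or None."""
        knows = {v: set(initial_tokens.get(v, ())) for v in nodes}
        for r in range(1, max_rounds + 1):
            edges = set(edges_for_round(r))
            for _ in range(smoothing):            # l-smoothing
                edges.add(tuple(random.sample(nodes, 2)))
            sent = {v: random.choice(list(knows[v])) for v in nodes if knows[v]}
            for u, v in edges:                    # symmetric delivery
                if u in sent:
                    knows[v].add(sent[u])
                if v in sent:
                    knows[u].add(sent[v])
            if all(len(knows[v]) == num_tokens for v in nodes):
                return r
        return None
    ```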
  7. Relative Survivable Network Design
    APPROX '22
    (with Ama Koranteng and Guy Kortsarz)
    One of the most important and well-studied settings for network design is edge-connectivity requirements. This encompasses uniform demands such as the Minimum $k$-Edge-Connected Spanning Subgraph problem ($k$-ECSS), as well as nonuniform demands such as the Survivable Network Design problem. A weakness of these formulations, though, is that we are not able to ask for fault-tolerance larger than the connectivity. Taking inspiration from recent definitions and progress in graph spanners, we introduce and study new variants of these problems under a notion of relative fault-tolerance. Informally, we require not that two nodes are connected if there are a bounded number of faults (as in the classical setting), but that two nodes are connected if there are a bounded number of faults and the two nodes are connected in the underlying graph post-faults. That is, the subgraph we build must "behave" identically to the underlying graph with respect to connectivity after bounded faults.

    We define and introduce these problems, and provide the first approximation algorithms: a $(1+4/k)$-approximation for the unweighted relative version of $k$-ECSS, a $2$-approximation for the weighted relative version of $k$-ECSS, and a $27/4$-approximation for the special case of Relative Survivable Network Design with only a single demand with a connectivity requirement of $3$. To obtain these results, we introduce a number of technical ideas that may be of independent interest. First, we give a generalization of Jain's iterative rounding analysis that works even when the cut-requirement function is not weakly supermodular, but instead satisfies a weaker definition we introduce and term local weak supermodularity. Second, we prove a structure theorem and design an approximation algorithm utilizing a new decomposition based on important separators, which are structures commonly used in fixed-parameter algorithms but which have not commonly been used in approximation algorithms.
  8. Controlling Epidemic Spread using Probabilistic Diffusion Models on Networks
    AISTATS '22
    (with Amy Babay, Aravind Srinivasan, Leonidas Tsepenekas, and Anil Vullikanti)
    The spread of an epidemic is often modeled by an SIR random process on a social network graph. The MinInfEdge problem for optimal social distancing involves minimizing the expected number of infections, when we are allowed to break at most B edges; similarly the MinInfNode problem involves removing at most B vertices. These are fundamental problems in epidemiology and network science. While a number of heuristics have been considered, the complexity of this problem remains generally open.

    In this paper, we present two bicriteria approximation algorithms for the MinInfEdge problem, which give the first non-trivial approximations for this problem. The first is based on the cut sparsification technique of Karger, and works for any graph when the transmission probabilities are not too small. The second is a Sample Average Approximation (SAA) based algorithm, which we analyze for the Chung-Lu random graph model. We also extend some of our results to the MinInfNode problem.
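    The objective being approximated can be estimated directly by Monte Carlo simulation of the percolation view of the SIR process. A sketch (the edge-removal intervention is the MinInfEdge decision variable; a uniform transmission probability p is an assumption for simplicity):

    ```python
    import random

    def expected_infections(adj, sources, removed_edges, p, trials=1000):
        """Estimate E[# infected] when each surviving edge transmits
        independently with probability p, starting from `sources`."""
        removed = {frozenset(e) for e in removed_edges}
        total = 0
        for _ in range(trials):
            infected = set(sources)
            frontier = list(sources)
            while frontier:
                u = frontier.pop()
                for v in adj[u]:
                    if v not in infected and frozenset((u, v)) not in removed \
                            and random.random() < p:
                        infected.add(v)
                        frontier.append(v)
            total += len(infected)
        return total / trials
    ```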
  9. Fair Disaster Containment via Graph-Cut Problems
    AISTATS '22
    (with Aravind Srinivasan, Leonidas Tsepenekas, and Anil Vullikanti)
    Graph cut problems are fundamental in Combinatorial Optimization, and are a central object of study in both theory and practice. Further, the study of fairness in Algorithmic Design and Machine Learning has recently received significant attention, with many different notions proposed and analyzed for a variety of contexts. In this paper we initiate the study of fairness for graph cut problems by giving the first fair definitions for them, and subsequently we demonstrate appropriate algorithmic techniques that yield a rigorous theoretical analysis. Specifically, we incorporate two different notions of fairness, namely demographic and probabilistic individual fairness, in a particular cut problem that models disaster containment scenarios. Our results include a variety of approximation algorithms with provable theoretical guarantees.
  10. Vertex Fault-Tolerant Emulators
    ITCS '22
    (with Greg Bodwin and Yasamin Nazari)
    A k-spanner of a graph G is a sparse subgraph that preserves its shortest path distances up to a multiplicative stretch factor of k, and a k-emulator is similar but not required to be a subgraph of G. A classic theorem by Thorup and Zwick [JACM '05] shows that, despite the extra flexibility available to emulators, the size/stretch tradeoffs for spanners and emulators are equivalent. Our main result is that this equivalence in tradeoffs no longer holds in the commonly-studied setting of graphs with vertex failures. That is: we introduce a natural definition of vertex fault-tolerant emulators, and then we show a three-way tradeoff between size, stretch, and fault-tolerance for these emulators that polynomially surpasses the tradeoff known to be optimal for spanners.

    We complement our emulator upper bound with a lower bound construction that is essentially tight (within log n factors of the upper bound) when the stretch is 2k−1 and k is either a fixed odd integer or 2. We also show constructions of fault-tolerant emulators with additive error, demonstrating that these also enjoy significantly improved tradeoffs over those available for fault-tolerant additive spanners.

  11. Partially Optimal Edge Fault-Tolerant Spanners
    SODA '22
    (with Greg Bodwin and Caleb Robelle)
    Recent work has established that, for every positive integer k, every n-node graph has a (2k−1)-spanner on O(f^{1−1/k} n^{1+1/k}) edges that is resilient to f edge or vertex faults. For vertex faults, this bound is tight. However, the case of edge faults is not as well understood: the best known lower bound for general k is Ω(f^{1/2−1/(2k)} n^{1+1/k} + fn). Our main result is to nearly close this gap with an improved upper bound, thus separating the cases of edge and vertex faults. For odd k, our new upper bound is O_k(f^{1/2−1/(2k)} n^{1+1/k} + fn), which is tight up to hidden poly(k) factors. For even k, our new upper bound is O_k(f^{1/2} n^{1+1/k} + fn), which leaves a gap of poly(k) f^{1/(2k)}. Our proof is an analysis of the fault-tolerant greedy algorithm, which requires exponential time, but we also show that there is a polynomial-time algorithm which creates edge fault tolerant spanners that are larger only by factors of k.
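    The fault-tolerant greedy algorithm mentioned above is easy to state; its exponential running time comes from enumerating fault sets. A minimal sketch of the edge-fault version:

    ```python
    import heapq
    from itertools import combinations

    def dist_avoiding(adj, s, t, banned):
        """Dijkstra distance from s to t, skipping edges in `banned`."""
        dist = {s: 0}
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == t:
                return d
            if d > dist.get(u, float('inf')):
                continue
            for v, w in adj[u]:
                if frozenset((u, v)) in banned:
                    continue
                if d + w < dist.get(v, float('inf')):
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        return float('inf')

    def ft_greedy_spanner(vertices, weighted_edges, k, f):
        """Scan edges by weight; keep (u,v,w) iff some fault set F of at
        most f already-kept edges leaves u,v farther apart than (2k-1)*w."""
        adj = {v: [] for v in vertices}
        kept = []
        for u, v, w in sorted(weighted_edges, key=lambda e: e[2]):
            edge_ids = [frozenset((a, b)) for a, b, _ in kept]
            needed = False
            for r in range(f + 1):
                for F in combinations(edge_ids, r):
                    if dist_avoiding(adj, u, v, set(F)) > (2 * k - 1) * w:
                        needed = True
                        break
                if needed:
                    break
            if needed:
                kept.append((u, v, w))
                adj[u].append((v, w))
                adj[v].append((u, w))
        return kept
    ```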
  12. Faster Matchings via Learned Duals
    NeurIPS '21
    (with Sungjin Im, Thomas Lavastida, Benjamin Moseley, and Sergei Vassilvitskii)
    Selected for Oral Presentation
    A recent line of research investigates how algorithms can be augmented with machine-learned predictions to overcome worst case lower bounds. This area has revealed interesting algorithmic insights into problems, with particular success in the design of competitive online algorithms. However, the question of improving algorithm running times with predictions has largely been unexplored.

    We take a first step in this direction by combining the idea of machine-learned predictions with the idea of "warm-starting" primal-dual algorithms. We consider one of the most important primitives in combinatorial optimization: weighted bipartite matching and its generalization to b-matching. We identify three key challenges when using learned dual variables in a primal-dual algorithm. First, predicted duals may be infeasible, so we give an algorithm that efficiently maps predicted infeasible duals to nearby feasible solutions. Second, once the duals are feasible, they may not be optimal, so we show that they can be used to quickly find an optimal solution. Finally, such predictions are useful only if they can be learned, so we show that the problem of learning duals for matching has low sample complexity. We validate our theoretical findings through experiments on both real and synthetic data. As a result we give a rigorous, practical, and empirically effective method to compute bipartite matchings.
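    As an illustration of the first step (mapping infeasible predicted duals to nearby feasible ones), here is one simple repair for the min-cost bipartite matching LP, where feasibility means y_u + y_v <= c(u,v) on every edge. This is a minimal sketch, not necessarily the paper's exact procedure:

    ```python
    def repair_duals(costs, y_left, y_right):
        """costs[u][v]: edge cost; y_left, y_right: predicted duals.
        Keep the left duals and lower each right dual just enough that
        y_u + y_v <= costs[u][v] holds on every edge incident to v."""
        fixed_right = {}
        for v in y_right:
            slacks = [costs[u][v] - y_left[u] for u in y_left if v in costs[u]]
            fixed_right[v] = min([y_right[v]] + slacks)
        return dict(y_left), fixed_right
    ```

    Any feasible dual can then warm-start a primal-dual search for the optimal matching, which is the source of the speedup when the prediction is close to optimal.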
  13. Optimal Vertex Fault-Tolerant Spanners in Polynomial Time
    SODA '21
    (with Greg Bodwin and Caleb Robelle)
    Recent work has pinned down the existentially optimal size bounds for vertex fault-tolerant spanners: for any positive integer k, every n-node graph has a (2k−1)-spanner on O(f^{1−1/k} n^{1+1/k}) edges resilient to f vertex faults, and there are examples of input graphs on which this bound cannot be improved. However, these proofs work by analyzing the output spanner of a certain exponential-time greedy algorithm. In this work, we give the first algorithm that produces vertex fault tolerant spanners of optimal size and which runs in polynomial time. Specifically, we give a randomized algorithm which takes Õ(f^{1−1/k} n^{2+1/k} + mf^2) time. We also derandomize our algorithm to give a deterministic algorithm with similar bounds. This reflects an exponential improvement in runtime over [Bodwin-Patel PODC '19], the only previously known algorithm for constructing optimal vertex fault-tolerant spanners.
  14. Efficient and Simple Algorithms for Fault Tolerant Spanners
    PODC '20
    (with Caleb Robelle)
    It was recently shown that a version of the greedy algorithm gives a construction of fault-tolerant spanners that is size-optimal, at least for vertex faults. However, the algorithm to construct this spanner is not polynomial-time, and the best-known polynomial time algorithm is significantly suboptimal. Designing a polynomial-time algorithm to construct (near-)optimal fault-tolerant spanners was given as an explicit open problem in the two most recent papers on fault-tolerant spanners ([Bodwin, Dinitz, Parter, Vassilevska Williams SODA '18] and [Bodwin, Patel PODC '19]). We give a surprisingly simple algorithm which runs in polynomial time and constructs fault-tolerant spanners that are extremely close to optimal (off by only a linear factor in the stretch) by modifying the greedy algorithm to run in polynomial time. To complement this result, we also give simple distributed constructions in both the LOCAL and CONGEST models.
  15. Scheduling for Weighted Flow and Completion Times in Reconfigurable Networks
    INFOCOM '20
    (with Benjamin Moseley)
    New optical technologies offer the ability to reconfigure network topologies dynamically, rather than setting them once and for all. This is true in both optical wide area networks (optical WANs) and in datacenters, despite the many differences between these two settings. Because of these new technologies, there has been a surge of both practical and theoretical research on algorithms to take advantage of them. In particular, Jia et al. [INFOCOM '17] designed online scheduling algorithms for dynamically reconfigurable topologies for both the makespan and sum of completion times objectives. In this paper, we work in the same setting but study an objective that is more meaningful in an online setting: the sum of flow times. The flow time of a job is the total amount of time that it spends in the system, which may be considerably smaller than its completion time if it is released late. We provide competitive algorithms for the online setting with speed augmentation, and also give a lower bound proving that speed augmentation is in fact necessary. As a side effect of our techniques, we also improve and generalize the results of Jia et al. on completion times by giving an O(1)-competitive algorithm for the arbitrary sizes and release times setting even when nodes have different degree bounds, and moreover allow for the weighted sum of completion times (or flow times).
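    The distinction the abstract draws between the two objectives is purely definitional, and easy to state in code (a toy illustration, not from the paper):

    ```python
    def flow_and_completion(release, completion):
        """release[j], completion[j]: per-job times. Flow time is the time
        a job spends in the system; e.g. a job released at time 100 and
        finished at 103 has completion time 103 but flow time only 3."""
        flow = {j: completion[j] - release[j] for j in release}
        return flow, dict(completion)
    ```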
  16. Lasserre Integrality Gaps for Graph Spanners and Related Problems
    WAOA '20
    (with Yasamin Nazari and Zeyu Zhang)
    There has been significant recent progress on algorithms for approximating graph spanners, i.e., algorithms which approximate the best spanner for a given input graph. Essentially all of these algorithms use the same basic LP relaxation, so a variety of papers have studied the limitations of this approach and proved integrality gaps for this LP in a variety of settings. We extend these results by showing that even the strongest lift-and-project methods cannot help significantly, by proving polynomial integrality gaps even for n^{Ω(ϵ)} levels of the Lasserre hierarchy, for both the directed and undirected spanner problems. We also extend these integrality gaps to related problems, notably Directed Steiner Network and Shallow-Light Steiner Network.
  17. Massively Parallel Approximate Distance Sketches
    OPODIS '19
    (with Yasamin Nazari)
    Best Student Paper Award
    Data structures that allow efficient distance estimation (distance oracles or distance sketches) have been extensively studied, and are particularly well studied in centralized models and classical distributed models such as the CONGEST model. We initiate their study in newer (and arguably more realistic) models of distributed computation such as the Congested Clique model and the Massively Parallel Computation (MPC) model, as well as in related big data models such as streaming and semi-streaming. We provide algorithms for constructing the well-known Thorup-Zwick distance oracle (or distance sketches) in multiple different computational models and discuss the tradeoffs between different complexity measures such as space, time, approximation ratio, and communication complexity.
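    At the core of all of these constructions is the Thorup-Zwick sampling hierarchy. A partial, centralized sketch for unweighted graphs (bunches and the query procedure are omitted for brevity):

    ```python
    import random
    from collections import deque

    def multi_source_bfs(adj, sources):
        """Distance from every vertex to the nearest source."""
        dist = {s: 0 for s in sources}
        queue = deque(sources)
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        return dist

    def tz_hierarchy(adj, k):
        """Sample A_0 ⊇ A_1 ⊇ ... ⊇ A_{k-1} (each level keeps a vertex
        with probability n^{-1/k}) and compute d(v, A_i) for every v."""
        n = len(adj)
        levels = [set(adj)]
        for _ in range(1, k):
            keep_prob = n ** (-1.0 / k)
            levels.append({v for v in levels[-1] if random.random() < keep_prob})
        level_dists = [multi_source_bfs(adj, list(A)) if A else {} for A in levels]
        return levels, level_dists
    ```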
  18. Reception Capacity: Definitions, Game Theory, and Hardness
    ALGOSENSORS '19
    (with Naomi Ephraim)
    The capacity of wireless networks is a classic and important topic of study. Informally, the capacity of a network is simply the total amount of information which it can transfer. In the context of models of wireless radio networks, this has usually meant the total number of point-to-point messages which can be sent or received in one time step. This definition has seen intensive study in recent years, particularly with respect to more accurate models of radio networks such as the SINR model. This paper is motivated by an obvious fact: radio antennae are (at least traditionally) omnidirectional, and hence point-to-point connections are not necessarily the best definition of the true capacity of a wireless network. To fix this, we introduce a new definition of reception capacity as the maximum number of messages which can be received in one round, and show that this is related to a new optimization problem we call the Maximum Perfect Dominated Set (MaxPDS) problem. Using this relationship we give a tight lower bound for approximating this capacity which essentially matches a known upper bound. As our main result, we analyze this notion of capacity under game-theoretic constraints, giving tight bounds on the average quality achieved at any coarse correlated equilibrium (and thus at any Nash). This immediately gives bounds on the average behavior of the natural distributed algorithm in which every transmitter uses online learning algorithms to learn whether to transmit.
  19. The Capacity of Smartphone Peer-to-Peer Networks
    DISC '19
    (with Magnús Halldórsson, Calvin Newport, and Alex Weaver)
    We study three capacity problems in the mobile telephone model, a network abstraction that models the peer-to-peer communication capabilities implemented in most commodity smartphone operating systems. The capacity of a network expresses how much sustained throughput can be maintained for a set of communication demands, and is therefore a fundamental bound on the usefulness of a network. Because of this importance, wireless network capacity has been an active area of research for the last two decades.

    The three capacity problems that we study differ in the structure of the communication demands. The first problem we consider is pairwise capacity, where the demands are (source, destination) pairs. Pairwise capacity is one of the most classical definitions, as it was analyzed in the seminal paper of Gupta and Kumar on wireless network capacity. The second problem we study is broadcast capacity, in which the only demand is a single source which must deliver packets to all other nodes in the network. Finally, we turn our attention to all-to-all capacity, in which all nodes must deliver packets to all other nodes. In all three of these problems we characterize the optimal achievable throughput for any given network, and design algorithms which asymptotically match this performance. We also study these problems in networks generated randomly by a process introduced by Gupta and Kumar, and fully characterize their achievable throughput.

    Interestingly, the techniques that we develop for all-to-all capacity also allow us to design a one-shot gossip algorithm that solves the problem with a time complexity that is within a polylogarithmic factor of optimal in every graph, which largely resolves an open question from previous work on the one-shot gossip problem in this model.
  20. Approximating the Norms of Graph Spanners
    APPROX '19
    (with Eden Chlamtáč and Thomas Robinson)
    The $\ell_p$-norm of the degree vector was recently introduced by [Chlamtáč, Dinitz, Robinson ICALP '19] as a new cost metric for graph spanners, as it interpolates between two traditional notions of cost (the sparsity $\ell_1$ and the max degree $\ell_{\infty}$) and is well-motivated from applications. We study this from an approximation algorithms point of view, analyzing old algorithms and designing new algorithms for this new context, as well as providing hardness results. Our main results are for the $\ell_2$-norm and stretch $3$, where we give a tight analysis of the greedy algorithm and a new algorithm specifically tailored to this setting which gives an improved approximation ratio.
  21. Distributed Algorithms for Minimum Degree Spanning Trees
    PODC '19
    (with Magnús Halldórsson, Taisuke Izumi, and Calvin Newport)
    The minimum degree spanning tree (MDST) problem requires the construction of a spanning tree T for graph G=(V,E) with n vertices, such that the maximum degree d of T is the smallest among all spanning trees of G. In this paper, we present two new distributed approximation algorithms for the MDST problem. Our first result is a randomized distributed algorithm that constructs a spanning tree of maximum degree d̂ = O(d log n). It requires O((D + √n) log^2 n) rounds (w.h.p.), where D is the graph diameter, which matches (within log factors) the optimal round complexity for the related minimum spanning tree problem. Our second result refines this approximation factor by constructing a tree with maximum degree d̂ = O(d + log n), though at the cost of additional polylogarithmic factors in the round complexity. Although efficient approximation algorithms for the MDST problem have been known in the sequential setting since the 1990s, our results are the first efficient distributed solutions for this problem.
  22. The Norms of Graph Spanners
    ICALP '19
    (with Eden Chlamtáč and Thomas Robinson)
    A t-spanner of a graph G is a subgraph H in which all distances are preserved up to a multiplicative t factor. A classical result of Althöfer et al. is that for every integer k and every graph G, there is a (2k−1)-spanner of G with at most O(n^{1+1/k}) edges. But for some settings the more interesting notion is not the number of edges, but the degrees of the nodes. This spurred interest in and study of spanners with small maximum degree. However, this is not necessarily a robust enough objective: we would like spanners that not only have small maximum degree, but also have "few" nodes of "large" degree. To interpolate between these two extremes, in this paper we initiate the study of graph spanners with respect to the ℓ_p-norm of their degree vector, thus simultaneously modeling the number of edges (the ℓ_1-norm) and the maximum degree (the ℓ_∞-norm). We give precise upper bounds for all ranges of p and stretch t: we prove that the greedy (2k−1)-spanner has ℓ_p-norm of at most max(O(n), O(n^{(k+p)/(kp)})), and that this bound is tight (assuming the Erdős girth conjecture). We also study universal lower bounds, allowing us to give "generic" guarantees on the approximation ratio of the greedy algorithm which generalize and interpolate between the known approximations for the ℓ_1 and ℓ_∞ norms. Finally, we show that at least in some situations, the ℓ_p norm behaves fundamentally differently from ℓ_1 or ℓ_∞: there are regimes (p=2 and stretch 3 in particular) where the greedy spanner has a provably superior approximation to the generic guarantee.
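    The cost metric itself is one line; on an n-vertex star, for example, the ℓ_1-norm is Θ(n) while the ℓ_∞-norm is n−1, and ℓ_p interpolates between them:

    ```python
    def lp_norm_of_degrees(adj, p):
        """ℓ_p-norm of the degree vector of a graph given as an adjacency
        dict; p=1 recovers twice the edge count, p=inf the max degree."""
        degrees = [len(nbrs) for nbrs in adj.values()]
        if p == float('inf'):
            return max(degrees)
        return sum(d ** p for d in degrees) ** (1.0 / p)

    star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
    print(lp_norm_of_degrees(star, 1))             # 6.0  (= 2m)
    print(lp_norm_of_degrees(star, float('inf')))  # 3    (= max degree)
    ```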
  23. Policy Regret in Repeated Games
    NeurIPS '18
    (with Raman Arora, Teodor V. Marinov, and Mehryar Mohri)
    The notion of "policy regret" in online learning is supposed to capture the reactions of the adversary to the actions taken by the learner, which more traditional notions such as external regret do not take into account. We revisit this notion of policy regret, and first show that there are online learning settings in which policy regret and external regret are incompatible: any sequence of play which does well with respect to one must do poorly with respect to the other. We then focus on the game theoretic setting, when the adversary is a self-interested agent. In this setting we show that the external regret and policy regret are not in conflict, and in fact that a wide class of algorithms can ensure both as long as the adversary is also using such an algorithm. We also define a new notion of equilibrium which we call a "policy equilibrium", and show that no-policy regret algorithms will have play which converges to such an equilibrium. Relating this back to external regret, we show that coarse correlated equilibria (which no-external regret players will converge to) are a strict subset of policy equilibria. So in game-theoretic settings every sequence of play with no external regret also has no policy regret, but the converse is not true.
  24. Large Low-Diameter Graphs are Good Expanders
    ESA '18, Journal of Combinatorial Theory (B)
    (with Michael Schapira and Gal Shahaf)
    We revisit the classical question of the relationship between the diameter of a graph and its expansion properties. One direction is well understood: expander graphs exhibit essentially the lowest possible diameter. We focus on the reverse direction. We show that "sufficiently large" graphs of fixed diameter and degree must be "good" expanders. We prove this statement for various definitions of "sufficiently large" (multiplicative/additive factor from the largest possible size), for different forms of expansion (edge, vertex, and spectral expansion), and for both directed and undirected graphs. A recurring theme is that the lower the diameter of the graph and (more importantly) the larger its size, the better the expansion guarantees. We discuss the implications of our results for open questions in graph theory and for recent advances in computer networking, and leave the reader with many interesting open questions.
  25. Optimal Vertex Fault Tolerant Spanners (for fixed stretch)
    SODA '18
    (with Greg Bodwin, Merav Parter, and Virginia Vassilevska Williams)
    A k-spanner of a graph G is a sparse subgraph H whose shortest path distances match those of G up to a multiplicative error k. In this paper we study spanners that are resistant to faults. A subgraph H ⊆ G is an f-vertex fault tolerant (VFT) k-spanner if H \ F is a k-spanner of G \ F for any small set F of f vertices that might "fail." One of the main questions in the area is: what is the minimum size of an f fault tolerant k-spanner that holds for all n node graphs (as a function of f, k, and n)? This question was first studied in the context of geometric graphs [Levcopoulos et al. STOC '98, Czumaj and Zhao SoCG '03] and has more recently been considered in general undirected graphs [Chechik et al. STOC '09, Dinitz and Krauthgamer PODC '11].

    In this paper, we settle the question of the optimal size of a VFT spanner, in the setting where the stretch factor k is fixed. Specifically, we prove that every (undirected, possibly weighted) n-node graph G has a (2k−1)-spanner resilient to f vertex faults with O_k(f^{1−1/k} n^{1+1/k}) edges, and this is fully optimal (unless the famous Erdős Girth Conjecture is false). Our lower bound even generalizes to imply that no data structure capable of approximating dist_{G\F}(s,t) within multiplicative (2k−1) error (for any given s, t, |F| ≤ f) can beat the space usage of our spanner in the worst case. We note that all previous upper bounds carried an almost quadratic (or worse) dependence on f, whereas the dependence in our tight bound is sublinear. To the best of our knowledge, this is the first instance in fault tolerant network design in which introducing fault tolerance to the structure increases the size of the (non-FT) structure by a sublinear factor in f. Another advantage of this result is that our spanners are constructed by a very natural and simple greedy algorithm, which is the obvious extension of the standard greedy algorithm used to build spanners in the non-faulty setting (see the sketch at the end of this entry).

    We also consider the edge fault tolerant (EFT) model, defined analogously with edge failures rather than vertex failures. We show that the same spanner upper bound applies in this setting. Our data structure lower bound extends to the case k=2 (and hence we close the EFT problem for 3-approximations), but it falls to Ω(f^{1/2−1/(2k)} · n^{1+1/k}) for k ≥ 3. We leave it as an open problem to close this gap.
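    For reference, a minimal sketch of the standard (non-faulty) greedy algorithm that our construction extends; the fault-tolerant version (sketched under entry 11 above) replaces the single distance check with a check over all fault sets of size at most f:

    ```python
    import heapq

    def spanner_distance(adj, s, t):
        """Dijkstra distance from s to t in the current spanner."""
        dist = {s: 0}
        pq = [(0, s)]
        while pq:
            d, u = heapq.heappop(pq)
            if u == t:
                return d
            if d > dist.get(u, float('inf')):
                continue
            for v, w in adj[u]:
                if d + w < dist.get(v, float('inf')):
                    dist[v] = d + w
                    heapq.heappush(pq, (d + w, v))
        return float('inf')

    def greedy_spanner(vertices, weighted_edges, k):
        """Althöfer-style greedy (2k-1)-spanner: scan edges in nondecreasing
        weight order; keep an edge iff its endpoints are not already within
        stretch (2k-1) in the spanner built so far."""
        adj = {v: [] for v in vertices}
        kept = []
        for u, v, w in sorted(weighted_edges, key=lambda e: e[2]):
            if spanner_distance(adj, u, v) > (2 * k - 1) * w:
                kept.append((u, v, w))
                adj[u].append((v, w))
                adj[v].append((u, w))
        return kept
    ```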
  26. Characterizing Demand Graphs for (Fixed-Parameter) Shallow-Light Steiner Network
    FSTTCS '18
    (with Amy Babay and Zeyu Zhang)
    We consider the Shallow-Light Steiner Network problem from a fixed-parameter perspective. Given a graph G, a distance bound L, and p pairs of vertices (s_1,t_1), …, (s_p,t_p), the objective is to find a minimum-cost subgraph G′ such that s_i and t_i have distance at most L in G′ (for every i ∈ [p]). Our main result is on the fixed-parameter tractability of this problem with parameter p. We exactly characterize the demand structures that make the problem "easy", and give FPT algorithms for those cases. In all other cases, we show that the problem is W[1]-hard. We also extend our results to handle general edge lengths and costs, precisely characterizing which demands allow for good FPT approximation algorithms and which demands remain W[1]-hard even to approximate.
  27. Distributed Distance-Bounded Network Design Through Distributed Convex Programming
    OPODIS '17
    (with Yasamin Nazari)
    Solving linear programs is often a challenging task in distributed settings. While there are good algorithms for solving packing and covering linear programs in a distributed manner (Kuhn et al. 2006), this is essentially the only class of linear programs for which such an algorithm is known. In this work we provide a distributed algorithm for solving a different class of convex programs which we call "distance-bounded network design convex programs". These can be thought of as relaxations of network design problems in which the connectivity requirement includes a distance constraint (most notably, graph spanners). Our algorithm runs in O((D/ϵ) log n) rounds in the LOCAL model and finds a (1+ϵ)-approximation to the optimal LP solution for any 0 < ϵ ≤ 1, where D is the largest distance constraint. While solving linear programs in a distributed setting is interesting in its own right, this class of convex programs is particularly important because solving them is often a crucial step when designing approximation algorithms. Hence we almost immediately obtain new and improved distributed approximation algorithms for a variety of network design problems, including Basic 3- and 4-Spanner, Directed k-Spanner, Lowest Degree k-Spanner, and Shallow-Light Steiner Network Design with a spanning demand graph. Our algorithms do not require any "heavy" computation and essentially match the best-known centralized approximation algorithms, while previous approaches which do not use heavy computation give approximations which are worse than the best-known centralized bounds.
  28. Timely, Reliable, and Cost-Effective Internet Transport Service using Dissemination Graphs
    ICDCS '17
    (with Amy Babay, Emily Wagner, and Yair Amir)
    Best Paper Award
    Emerging applications such as remote manipulation and remote robotic surgery require communication that is both timely and reliable, but the Internet natively supports only communication that is either completely reliable with no timeliness guarantees (e.g. TCP) or timely with best-effort reliability (e.g. UDP). We present an overlay transport service that can provide highly reliable communication while meeting stringent timeliness guarantees (e.g. 130ms round-trip latency across the US) over the Internet. To enable routing schemes that can support the necessary timeliness and reliability, we introduce dissemination graphs, providing a unified framework for specifying routing schemes ranging from a single path, to multiple disjoint paths, to arbitrary graphs. We conduct an extensive analysis of real-world network data, finding that a routing approach using two disjoint paths performs well in most cases, and that cases where two disjoint paths do not perform well typically involve problems around a source or destination. Based on this analysis, we develop a timely dissemination-graph-based routing method that can add targeted redundancy in problematic areas of the network. This approach can cover over 99% of the performance gap between a traditional single-path approach and an optimal (but prohibitively expensive) scheme, while two dynamic disjoint paths cover about 70% of this gap, and two static disjoint paths cover about 45%. This performance improvement is obtained at a cost increase of about 2% over two disjoint paths.
  29. Load Balancing with Bounded Convergence in Dynamic Networks
    INFOCOM '17
    (with Jeremy Fineman, Seth Gilbert, and Calvin Newport)
    Load balancing is an important activity in distributed systems. At a high-level of abstraction, this problem assumes processes in the system begin with a potentially uneven assignment of "work" which they must then attempt to spread evenly. There are many different approaches to solving this problem. In this paper, we focus on local load balancing---an approach in which work is balanced in an iterative and distributed manner by having processes exchange work in each round with neighbors in an underlying communication network. The goal with local load balancing is to prove that these local exchanges will lead over time to a global balance. We describe and analyze a new local load balancing strategy called max neighbor, and prove a bound on the number of rounds required for it to obtain a parameterized level of balance, with high probability, in a general dynamic network topology. We then prove this analysis tight (within logarithmic factors) by describing a network and initial work distribution for which max neighbor matches its upper bound, and then build on this to prove that no load balancing algorithm in which every node exchanges work with at most one partner per round can converge asymptotically faster than max neighbor.
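    A toy simulation of one round of this style of local balancing. The exact pairing rule of max neighbor is more careful than this sketch; treating an "exchange" as averaging, and pairing by mutual proposals, are assumptions for illustration:

    ```python
    def local_balance_round(adj, load):
        """Every node proposes to the neighbor whose load differs most from
        its own; mutually-proposing pairs average their loads, so each node
        exchanges work with at most one partner per round."""
        proposal = {u: max(adj[u], key=lambda v: abs(load[v] - load[u]))
                    for u in adj if adj[u]}
        for u, v in proposal.items():
            if proposal.get(v) == u and u < v:     # mutual pair, count once
                load[u] = load[v] = (load[u] + load[v]) / 2
        return load
    ```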
  30. Approximating Approximate Distance Oracles
    ITCS '17
    (with Zeyu Zhang)
    Given a finite metric space (V,d), an approximate distance oracle is a data structure which, when queried on two points u, v ∈ V, returns an approximation to the actual distance between u and v that is within some bounded stretch factor of the true distance. There has been significant work on the tradeoff between the important parameters of approximate distance oracles (in particular between the size, stretch, and query time), but in this paper we take a different point of view: that of per-instance optimization. If we are given a particular input metric space and stretch bound, can we find the smallest possible approximate distance oracle for that particular input? Since this question is not even well-defined, we restrict our attention to well-known classes of approximate distance oracles, and study whether we can optimize over those classes.

    In particular, we give an O(log n)-approximation to the problem of finding the smallest stretch 3 Thorup-Zwick distance oracle, as well as the problem of finding the smallest Patrascu-Roditty distance oracle. We also prove a matching Ω(log n) lower bound for both problems, and an Ω(n^{1/k − 1/2^{k−1}}) integrality gap for the more general stretch (2k−1) Thorup-Zwick distance oracle. We also consider the problem of approximating the best TZ or PR approximate distance oracle with outliers, and show that more advanced techniques (SDP relaxations in particular) allow us to optimize even in the presence of outliers.
  31. Minimizing the Union: Tight Approximations for Small Set Bipartite Vertex Expansion
    SODA '17
    (with Eden Chlamtáč and Yury Makarychev)
    In the Minimum k-Union problem (MkU) we are given a set system with n sets and are asked to select k sets in order to minimize the size of their union. Despite being a very natural problem, it has received surprisingly little attention: the only known approximation algorithm is an O(√n)-approximation due to [Chlamtáč et al. APPROX '16]. This problem can also be viewed as the bipartite version of the Small Set Vertex Expansion problem (SSVE), which we call the Small Set Bipartite Vertex Expansion problem (SSBVE). SSVE, in which we are asked to find a set of k nodes to minimize their vertex expansion, has not been as well studied as its edge-based counterpart Small Set Expansion (SSE), but has recently received significant attention, e.g. [Louis-Makarychev APPROX '15]. However, due to the connection to Unique Games and hardness of approximation, the focus has mostly been on sets of size k = Ω(n), while we focus on the case of general k, for which no polylogarithmic approximation is known.

    We improve the upper bound for this problem by giving an n^{1/4+ϵ}-approximation for SSBVE for any constant ϵ > 0. Our algorithm follows in the footsteps of Densest k-Subgraph (DkS) and related problems, by designing a tight algorithm for random models, and then extending it to give the same guarantee for arbitrary instances. Moreover, we show that this is tight under plausible complexity conjectures: it cannot be approximated better than O(n^{1/4}) assuming an extension of the so-called "Dense versus Random" conjecture for DkS to hypergraphs.

    In addition to conjectured hardness via our reduction, we show that the same lower bound is also matched by an integrality gap for a super-constant number of rounds of the Sherali-Adams LP hierarchy, and an even worse integrality gap for the natural SDP relaxation. Finally, we note that there exists a simple bicriteria Õ(√n) approximation for the more general SSVE problem (where no non-trivial approximations were known for general k).
  32. Approximating Spanners and Directed Steiner Forest: Upper and Lower Bounds
    SODA '17, ACM Transactions on Algorithms
    (with Eden Chlamtáč, Guy Kortsarz, and Bundit Laekhanukit)
    It was recently found that there are very close connections between the existence of additive spanners (subgraphs where all distances are preserved up to an additive stretch), distance preservers (subgraphs in which demand pairs have their distance preserved exactly), and pairwise spanners (subgraphs in which demand pairs have their distance preserved up to a multiplicative or additive stretch) [Abboud-Bodwin SODA '16, Bodwin-Williams SODA '16]. We study these problems from an optimization point of view, where rather than studying the existence of extremal instances we are given an instance and are asked to find the sparsest possible spanner/preserver. We give an O(n^{3/5+ϵ})-approximation for distance preservers and pairwise spanners (for arbitrary constant ϵ > 0). This is the first nontrivial upper bound for either problem, both of which are known to be as hard to approximate as Label Cover. We also prove Label Cover hardness for approximating additive spanners, even for the cases of additive 1 stretch (where one might expect a polylogarithmic approximation, since the related multiplicative 2-spanner problem admits an O(log n)-approximation) and additive polylogarithmic stretch (where the related multiplicative spanner problem has an O(1)-approximation).

    Interestingly, the techniques we use in our approximation algorithm extend beyond distance-based problems to pure connectivity network design problems. In particular, our techniques allow us to give an O(n^{3/5+ϵ})-approximation for the Directed Steiner Forest problem (for arbitrary constant ϵ > 0) when all edges have uniform costs, improving the previous best O(n^{2/3+ϵ})-approximation due to Berman et al. [ICALP '11] (which holds for general edge costs).
  33. Xpander: Towards Optimal-Performance Datacenters
    CoNext '16
    (with Asaf Valadarsky, Gal Shahaf, and Michael Schapira)
    Despite extensive efforts to meet ever-growing demands, today's datacenters often exhibit far-from-optimal performance in terms of network utilization, resiliency to failures, cost efficiency, incremental expandability, and more. Consequently, many novel architectures for high-performance datacenters have been proposed. We show that the benefits of state-of-the-art proposals are, in fact, derived from the fact that they are (implicitly) utilizing "expander graphs" (aka expanders) as their network topologies, thus unveiling a unifying theme of these proposals. We observe, however, that these proposals are not optimal with respect to performance, do not scale, or suffer from seemingly insurmountable deployment challenges. We leverage these insights to present Xpander, a novel datacenter architecture that achieves near-optimal performance and provides a tangible alternative to existing datacenter designs. Xpander's design turns ideas from the rich graph-theoretic literature on constructing optimal expanders into an operational reality. We evaluate Xpander via theoretical analyses, extensive simulations, experiments with a network emulator, and an implementation on an SDN-capable network testbed. Our results demonstrate that Xpander significantly outperforms both traditional and proposed datacenter designs. We discuss challenges to real-world deployment and explain how these can be resolved.
  34. The Densest k-Subhypergraph Problem
    APPROX '16, SIAM Journal on Discrete Mathematics
    (with Eden Chlamtáč, Christian Konrad, Guy Kortsarz, and George Rabanca)
    The Densest k-Subgraph (DkS) problem, and its corresponding minimization problem Smallest p-Edge Subgraph (SpES), have come to play a central role in approximation algorithms. This is due both to their practical importance, and their usefulness as a tool for solving and establishing approximation bounds for other problems. These two problems are not well understood, and it is widely believed that they do not admit a subpolynomial approximation ratio (although the best known hardness results do not rule this out).

    In this paper we generalize both DkS and SpES from graphs to hypergraphs. We consider the Densest k-Subhypergraph problem (given a hypergraph (V,E), find a subset W ⊆ V of k vertices so as to maximize the number of hyperedges contained in W) and define the Minimum p-Union problem (given a hypergraph, choose p of the hyperedges so as to minimize the number of vertices in their union). We focus in particular on the case where all hyperedges have size 3, as this is the simplest non-graph setting. For this case we provide an O(n^{4(4−√3)/13+ϵ}) ≤ O(n^{0.697831+ϵ})-approximation (for arbitrary constant ϵ > 0) for Densest k-Subhypergraph and an Õ(n^{2/5})-approximation for Minimum p-Union. We also give an O(√m)-approximation for Minimum p-Union in general hypergraphs. Finally, we examine the interesting special case of interval hypergraphs (instances where the vertices are a subset of the natural numbers and the hyperedges are intervals of the line) and prove that both problems admit an exact polynomial time solution on these instances.
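    The two objectives are easy to state exactly; a brute-force evaluator for Minimum p-Union (exponential, for tiny instances only) makes the definition concrete:

    ```python
    from itertools import combinations

    def min_p_union(hyperedges, p):
        """Return the smallest |union| over all choices of p hyperedges,
        together with one optimal choice."""
        best_size, best_choice = None, None
        for choice in combinations(hyperedges, p):
            size = len(set().union(*choice))
            if best_size is None or size < best_size:
                best_size, best_choice = size, choice
        return best_size, best_choice

    # Example with 3-uniform hyperedges: the first two overlap in {2, 3}.
    print(min_p_union([{1, 2, 3}, {2, 3, 4}, {5, 6, 7}], 2))  # (4, ...)
    ```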
  35. Computing Approximate PSD Factorizations
    APPROX '16
    (with Amitabh Basu and Xin Li)
    We give an algorithm for computing approximate PSD factorizations of nonnegative matrices. The running time of the algorithm is polynomial in the dimensions of the input matrix, but exponential in the PSD rank and the approximation error. The main ingredient is an exact factorization algorithm when the rows and columns of the factors are constrained to lie in a general polyhedron. This strictly generalizes nonnegative matrix factorization, which can be captured by letting this polyhedron be the nonnegative orthant.
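    A PSD factorization of a nonnegative matrix M expresses M[i,j] = tr(A_i B_j) with every A_i and B_j positive semidefinite (the PSD rank is the smallest factor dimension). A small checker for an approximate factorization, as a sketch:

    ```python
    import numpy as np

    def is_approx_psd_factorization(M, A, B, tol=1e-6):
        """M: nonnegative matrix; A, B: lists of square matrices.
        Verify M[i,j] ≈ tr(A_i @ B_j) and that all factors are PSD."""
        def psd(X):
            return np.all(np.linalg.eigvalsh((X + X.T) / 2) >= -tol)
        if not all(psd(X) for X in A) or not all(psd(X) for X in B):
            return False
        err = max(abs(M[i, j] - np.trace(A[i] @ B[j]))
                  for i in range(M.shape[0]) for j in range(M.shape[1]))
        return err <= tol
    ```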
  36. Approximating Low-Stretch Spanners
    SODA '16
    (with Zeyu Zhang)
    Despite significant recent progress on approximating graph spanners (subgraphs which approximately preserve distances), there are still several large gaps in our understanding. We give new results for two of them: approximating basic k-spanner (particularly for small k), and the dependence on f when approximating f-fault tolerant spanners.

    We first design an Õ(n^{1/3})-approximation for 4-spanner (both basic and directed). This was the last value of k for which only an O(√n)-approximation was known for basic k-spanner, and thus implies that for any k the approximation ratio is at most Õ(n^{1/3}). For basic k-spanner, we also show an integrality gap for the natural flow-based LP (the main tool in almost all nontrivial spanner approximations) which nearly matches the trivial approximation of n^{1/⌊(k+1)/2⌋}.

    For f-fault tolerant spanners, we show that in the small-stretch setting (k ∈ {3,4}) it is possible to entirely remove the dependence on f from the approximation ratio, at the cost of moving to bicriteria guarantees. The previous best dependence on f was either almost-linear (in the undirected setting) or exponential (in the directed setting for stretch 4).
  37. Smoothed Analysis of Dynamic Networks
    DISC '15, Distributed Computing
    (with Jeremy Fineman, Seth Gilbert, and Calvin Newport)
    Invited to special issue of Distributed Computing
    We generalize the technique of smoothed analysis to distributed algorithms in dynamic network models. Whereas standard smoothed analysis studies the impact of small random perturbations of input values on algorithm performance metrics, dynamic graph smoothed analysis studies the impact of random perturbations of the underlying changing network graph topologies. Similar to the original application of smoothed analysis, our goal is to study whether known strong lower bounds in dynamic network models are robust or fragile: do they withstand small (random) perturbations, or do such deviations push the graphs far enough from a precise pathological instance to enable much better performance? Fragile lower bounds are likely not relevant for real-world deployment, while robust lower bounds represent a true difficulty caused by dynamic behavior. We apply this technique to three standard dynamic network problems with known strong worst-case lower bounds: random walks, flooding, and aggregation. We prove that these bounds provide a spectrum of robustness when subjected to smoothing---some are extremely fragile (random walks), some are moderately fragile / robust (flooding), and some are extremely robust (aggregation).
  38. Explicit Expanding Expanders
    ESA '15, Algorithmica
    (with Michael Schapira and Asaf Valadarsky)
    Invited to special issue of Algorithmica
    Deterministic constructions of expander graphs have been an important topic of research in computer science and mathematics, with many well-studied constructions of infinite families of expanders. In some applications, though, an infinite family is not enough: we need expanders which are "close" to each other. We study the following question: construct an infinite sequence of expanders G_0, G_1, …, such that for every two consecutive graphs G_i and G_{i+1}, G_{i+1} can be obtained from G_i by adding a single vertex and inserting/removing a small number of edges, which we call the expansion cost of transitioning from G_i to G_{i+1}. This question is very natural, e.g., in the context of datacenter networks, where the vertices represent racks of servers, and the expansion cost captures the amount of rewiring needed when adding another rack to the network. We present an explicit construction of d-regular expanders with expansion cost at most 5d/2, for any d ≥ 6. Our construction leverages the notion of a "2-lift" of a graph. This operation was first analyzed by Bilu and Linial, who repeatedly applied 2-lifts to construct an infinite family of expanders which double in size from one expander to the next. Our construction can be viewed as a way to "interpolate" between Bilu-Linial expanders with low expansion cost while preserving good edge expansion throughout. While our main motivation is centralized (datacenter networks), we also get the best-known distributed expander construction in the "self-healing" model.
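    The 2-lift operation at the heart of the construction is a few lines: given a ±1 signing of the edges, each vertex splits into two copies and each edge becomes a parallel or crossing pair. A minimal sketch (choosing a good signing, which is the hard part in Bilu-Linial, is not shown):

    ```python
    def two_lift(edges, signing):
        """edges: iterable of (u, v); signing[(u, v)] in {+1, -1}.
        Returns the edge list of the lifted graph on vertices (v, 0), (v, 1)."""
        lifted = []
        for u, v in edges:
            if signing[(u, v)] == 1:
                lifted += [((u, 0), (v, 0)), ((u, 1), (v, 1))]   # parallel
            else:
                lifted += [((u, 0), (v, 1)), ((u, 1), (v, 0))]   # crossing
        return lifted
    ```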
  39. Towards Resistance Sparsifiers
    RANDOM '15
    (with Robert Krauthgamer and Tal Wagner)
    We study resistance sparsification of graphs, in which the goal is to find a sparse subgraph (with reweighted edges) that approximately preserves the effective resistances between every pair of nodes. We show that every dense regular expander admits a (1+ε)-resistance sparsifier of size Õ(n/ε), and conjecture this bound holds for all graphs on n nodes. In comparison, spectral sparsification is a strictly stronger notion and requires Ω(n/ε^2) edges even on the complete graph.

    Our approach leads to the following structural question on graphs: Does every dense regular expander contain a sparse regular expander as a subgraph? Our main technical contribution, which may be of independent interest, is a positive answer to this question in a certain setting of parameters. Combining this with a recent result of von Luxburg, Radl, and Hein (JMLR, 2014) leads to the aforementioned resistance sparsifiers.
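    Effective resistances, the quantity being preserved here, can be computed directly from the Laplacian pseudoinverse via R(u,v) = L⁺[u,u] + L⁺[v,v] − 2L⁺[u,v]. A small sketch for unweighted graphs:

    ```python
    import numpy as np

    def effective_resistances(n, edges):
        """All-pairs effective resistances of an unweighted graph on
        vertices 0..n-1, via the Laplacian pseudoinverse."""
        L = np.zeros((n, n))
        for u, v in edges:
            L[u, u] += 1; L[v, v] += 1
            L[u, v] -= 1; L[v, u] -= 1
        Lp = np.linalg.pinv(L)
        d = np.diag(Lp)
        return d[:, None] + d[None, :] - 2 * Lp   # R[u, v]
    ```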
  40. Lowest Degree k-Spanner: Approximation and Hardness
    APPROX '14, Theory of Computing
    | ToC | LIPIcs
    (with Eden Chlamtáč)
    Invited to special issue of Theory of Computing
    A k-spanner is a subgraph in which distances are approximately preserved, up to some given stretch factor k. We focus on the following problem: Given a graph and a value k, can we find a k-spanner that minimizes the maximum degree? While reasonably strong bounds are known for some spanner problems, they almost all involve minimizing the total number of edges. Switching the objective to the degree introduces significant new challenges, and currently the only known approximation bound is an Õ(Δ^{3−2√2})-approximation for the special case when k = 2 [Chlamtáč, Dinitz, Krauthgamer FOCS 2012] (where Δ is the maximum degree in the input graph). In this paper we give the first non-trivial algorithm and polynomial-factor hardness of approximation for the case of general k. Specifically, we give an LP-based Õ(Δ^{(1−1/k)^2})-approximation and prove that it is hard to approximate the optimum to within Δ^{Ω(1/k)} when the graph is undirected, and to within Δ^{Ω(1)} when it is directed.
  41. Improved Approximation Algorithm for Steiner k-Forest with Nearly Uniform Weights
    APPROX '14, ACM Transactions on Algorithms
    (with Guy Kortsarz and Zeev Nutov)
    In the Steiner k-Forest problem we are given an edge-weighted graph, a collection $D$ of node pairs, and an integer $k \leq |D|$. The goal is to find a minimum-cost subgraph that connects at least $k$ pairs. The best known ratio for this problem is $\min\{O(\sqrt{n}),O(\sqrt{k})\}$ [GHNR]. In [GHNR] it is also shown that ratio $\rho$ for Steiner k-Forest implies ratio $O(\rho\cdot \log^2 n)$ for the Dial-a-Ride problem: given an edge-weighted graph and a set of items, each with a source and a destination, find a minimum-length tour that moves each item from its source to its destination while carrying at most $k$ items at a time. The only other algorithm known for Dial-a-Ride, besides the one resulting from [GHNR], has ratio $O(\sqrt{n})$ [CR98].

    We obtain ratio $n^{0.448}$ for Steiner k-Forest and Dial-a-Ride with unit weights, breaking the $O(\sqrt{n})$ ratio barrier for this natural special case. We also show that if the maximum weight of an edge is $O(n^{\epsilon})$, then one can achieve ratio $O(n^{(1+\epsilon)\cdot 0.448})$, which is less than $\sqrt{n}$ if $\epsilon$ is small enough.

    To prove our main result we consider the following generalization of the Minimum $k$-Edge Subgraph (Mk-ES) problem, which we call Min-Cost $\ell$-Edge-Profit Subgraph (MCl-EPS): Given a graph $G=(V,E)$ with edge-profits $p=\{p_e : e \in E\}$ and node-costs $c=\{c_v : v \in V\}$, and a lower profit bound $\ell$, find a minimum node-cost subgraph of $G$ with edge-profit at least $\ell$. The Mk-ES problem is the special case of MCl-EPS with unit node-costs and unit edge-profits. The currently best known ratio for Mk-ES is $n^{3-2\sqrt{2} + \epsilon}$ (note that $3-2\sqrt{2} < 0.1716$) [CDK]. We extend this ratio to MCl-EPS with arbitrary node-costs and edge-profits that are polynomial in $n$. This extension is of independent interest.
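
    A brute-force sketch makes the MCl-EPS definition concrete (exponential time, for intuition only; since profits are nonnegative, it suffices to consider induced subgraphs):

    ```python
    from itertools import combinations

    def mcl_eps_brute(nodes, edges, cost, profit, ell):
        """Min-Cost l-Edge-Profit Subgraph by exhaustive search: find the
        cheapest node set S whose induced edges have total profit >= ell.
        cost: node -> node-cost, profit: edge -> edge-profit."""
        best = (float('inf'), None)
        for r in range(len(nodes) + 1):
            for S in combinations(nodes, r):
                S = set(S)
                p = sum(profit[e] for e in edges if e[0] in S and e[1] in S)
                if p >= ell:
                    c = sum(cost[v] for v in S)
                    if c < best[0]:
                        best = (c, tuple(sorted(S)))
        return best  # (optimal node cost, witness node set)
    ```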
  42. Braess's Paradox in Wireless Networks: The Danger of Improved Technology
    DISC '13
    | LNCS | arxiv | pdf
    (with Merav Parter)
    When comparing new wireless technologies, it is common to consider the effect that they have on the capacity of the network (defined as the maximum number of simultaneously satisfiable links). For example, it has been shown that giving receivers the ability to do interference cancellation, or allowing transmitters to use power control, never decreases the capacity and can in certain cases increase it by $\Omega(\log(\Delta \cdot P_{\max}))$, where $\Delta$ is the ratio of the longest link length to the smallest transmitter-receiver distance and $P_{\max}$ is the maximum transmission power. But there is no reason to expect the optimal capacity to be realized in practice, particularly since maximizing the capacity is known to be NP-hard. In reality, we would expect links to behave as self-interested agents, and thus it makes more sense to compare the values reached at game-theoretic notions of equilibria than the optimum values.

    In this paper we initiate this line of work by comparing various notions of equilibria (particularly Nash equilibria and no-regret behavior) when using a supposedly "better" technology. We show a version of Braess's Paradox for all of them: in certain networks, upgrading technology can actually make the equilibria worse, despite an increase in the capacity. We construct instances where this decrease is a constant factor for power control, interference cancellation, and improvements in the SINR threshold ($\beta$), and is $\Omega(\log \Delta)$ when power control is combined with interference cancellation. However, we show that these examples are basically tight: the decrease is at most $O(1)$ for power control, interference cancellation, and improved $\beta$, and is at most $O(\log \Delta)$ when power control is combined with interference cancellation.
  43. Packing Interdiction and Partial Covering Problems
    IPCO '13
    | LNCS | pdf
    (with Anupam Gupta)
    In the Packing Interdiction problem we are given a packing LP together with a separate interdiction cost for each LP variable and a global interdiction budget. Our goal is to harm the LP: which variables should we forbid the LP from using (subject to forbidding variables of total interdiction cost at most the budget) in order to minimize the value of the resulting LP? There is a long line of work on interdiction problems in graphs (interdicting the maximum flow, the shortest path, the minimum spanning tree, etc.), but we are the first to consider the problem of interdicting linear programs. The motivation for packing interdiction is matching interdiction, in which we are given a graph with edge weights and edge costs, and try to destroy edges of total cost at most some value B in order to minimize the maximum weight matching of the resulting graph. Zenklusen showed that this problem is NP-hard and gave a 4-approximation when all edge weights are unit. We obtain an O(1)-approximation to the general matching interdiction problem (without the unit weight assumption) as a corollary of our main result, an O(log q · min{q, log k})-approximation to Packing Interdiction, where q is the row-sparsity of the packing LP and k is the column-sparsity.
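
    A toy brute-force version clarifies the setup (the paper gives an approximation algorithm, not this exhaustive search). It assumes a packing LP in the standard form max{w·x : Ax ≤ b, x ≥ 0} with nonnegative data:

    ```python
    from itertools import combinations
    import numpy as np
    from scipy.optimize import linprog

    def packing_interdiction_brute(w, A, b, interdiction_cost, budget):
        """Over all variable sets of total interdiction cost <= budget,
        forbid those variables (fix x_j = 0) and return the choice that
        minimizes the value of the packing LP max{w.x : Ax <= b, x >= 0}."""
        q = A.shape[1]
        best_val, best_set = float('inf'), None
        for r in range(q + 1):
            for S in combinations(range(q), r):
                if sum(interdiction_cost[j] for j in S) > budget:
                    continue
                bounds = [(0, 0) if j in S else (0, None) for j in range(q)]
                res = linprog(-np.asarray(w), A_ub=A, b_ub=b, bounds=bounds)
                if res.success and -res.fun < best_val:
                    best_val, best_set = -res.fun, S
        return best_val, best_set
    ```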
  44. Matroid Secretary for Regular and Decomposable Matroids
    SODA '13, SIAM Journal on Computing
    (with Guy Kortsarz)
    In the matroid secretary problem we are given a stream of elements and asked to choose a set of elements that maximizes the total value of the set, subject to being an independent set of a matroid given in advance. The difficulty comes from the assumption that decisions are irrevocable: if we choose to accept an element when it is presented by the stream then we can never get rid of it, and if we choose not to accept it then we cannot later add it. Babaioff, Immorlica, and Kleinberg [SODA 2007] introduced this problem, gave O(1)-competitive algorithms for certain classes of matroids, and conjectured that every matroid admits an O(1)-competitive algorithm. However, most matroids that are known to admit an O(1)-competitive algorithm can be easily represented using graphs (e.g. graphic and transversal matroids). In particular, there is very little known about F-representable matroids (the class of matroids that can be represented as elements of a vector space over a field F), which are one of the foundational matroid classes. Moreover, most of the known techniques are as dependent on graph theory as they are on matroid theory.

    We go beyond graphs by giving an O(1)-competitive algorithm for regular matroids (the class of matroids that are representable over every field), and use techniques that are matroid-theoretic rather than graph-theoretic. We use the regular matroid decomposition theorem of Seymour to decompose any regular matroid into matroids which are either graphic, cographic, or isomorphic to R_{10}, and then show how to combine algorithms for these basic classes into an algorithm for regular matroids. This allows us to generalize beyond regular matroids to any class of matroids that admits such a decomposition into classes for which we already have good algorithms. In particular, we give an O(1)-competitive algorithm for the class of max-flow min-cut matroids.
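
    For contrast with the matroid setting, the classical single-secretary 1/e rule is only a few lines (this is the textbook algorithm, not the regular-matroid algorithm from the paper):

    ```python
    import math, random

    def classical_secretary(values):
        """Observe values in random order; reject the first n/e outright,
        then accept the first value beating everything seen so far.
        Decisions are irrevocable, as in the matroid version."""
        n = len(values)
        order = random.sample(values, n)          # random arrival order
        cutoff = int(n / math.e)
        threshold = max(order[:cutoff], default=float('-inf'))
        for v in order[cutoff:]:
            if v > threshold:
                return v
        return order[-1]                          # forced to take the last

    # The 1/e rule selects the maximum with probability about 1/e:
    vals = list(range(1000))
    trials = 10000
    wins = sum(classical_secretary(vals) == max(vals) for _ in range(trials))
    print(wins / trials)                          # ~ 0.37
    ```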
  45. Everywhere-Sparse Spanners via Dense Subgraphs
    FOCS '12
    (with Eden Chlamtáč and Robert Krauthgamer)
    Recent significant progress in constructing graph spanners that are sparse (small number of edges) or light (low total weight) has skipped everywhere-sparse spanners (small maximum degree). Similar to other network design problems, this maximum degree objective has turned out to be challenging despite its mathematical simplicity and usefulness in applications. In the Lowest Degree 2-Spanner (LD2S) problem, the goal is to compute a 2-spanner of an input graph so as to minimize the maximum degree. We design a polynomial-time algorithm achieving approximation factor Õ(Δ^(3-2√2)) ≈ Õ(Δ^(0.172)), where Δ is the maximum degree of the input graph. The previous Õ(Δ^(1/4))-approximation was proved nearly two decades ago by Kortsarz and Peleg [SODA 1994, SICOMP 1998].

    Our main conceptual contribution is establishing a formal connection between LD2S and a variant of the Densest k-Subgraph (DkS) problem. Specifically, we design for both problems strong relaxations based on Linear Programming (LP) hierarchies, and show that "faithful" randomized rounding of the DkS-variant can be used to round LD2S solutions. Our notion of faithfulness intuitively means that all vertices and edges are chosen with probability proportional to their LP value, but the precise formulation is more subtle.

    Unfortunately, the best algorithms known for DkS use the Sherali-Adams LP hierarchy in a non-faithful way [Bhaskara, Charikar, Chlamtáč, Feige, and Vijayaraghavan, STOC 2010]. Our main technical contribution is to overcome this shortcoming, while still matching the gap that arises in random graphs by planting a subgraph with the same log-density.

  46. IBGP and Constrained Connectivity
    APPROX '12
    | LNCS | arxiv
    (with Gordon Wilfong)
    We initiate the theoretical study of the problem of minimizing the size of an IBGP overlay in an Autonomous System (AS) in the Internet subject to a natural notion of correctness derived from the standard "hot-potato" routing rules. For both natural versions of the problem (where we measure the size of an overlay by either the number of edges or the maximum degree) we prove that it is NP-hard to approximate to a factor better than Ω(log n) and provide approximation algorithms with ratio Õ(√(n)). In addition, we give a slightly worse Õ(n^(2/3))-approximation based on primal-dual techniques that has the virtue of being both fast and good in practice, which we show via simulations on the actual topologies of five large Autonomous Systems.

    The main technique we use is a reduction to a new connectivity-based network design problem that we call Constrained Connectivity. In this problem we are given a graph G=(V,E), and for every pair of vertices u,v ∈ V we are given a set S(u,v) ⊆ V called the safe set of the pair. The goal is to find the smallest subgraph H of G in which every pair of vertices u,v is connected by a path contained in S(u,v). We show that the IBGP problem can be reduced to the special case of Constrained Connectivity where G = K_n and safe sets are defined geometrically based on the IGP distances in the AS. We also believe that Constrained Connectivity is an interesting problem in its own right, so we provide stronger hardness results (2^(log^(1-ε) n)-hardness of approximation) and integrality gaps (n^(1/3-ε)) for the general case. On the positive side, we show that Constrained Connectivity turns out to be much simpler for some interesting special cases other than iBGP: when safe sets are symmetric and hierarchical, we give a polynomial time algorithm that computes an optimal solution.
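
    A brute-force feasibility checker makes the definition concrete (names are illustrative; each pair's endpoints are assumed to lie in their own safe set):

    ```python
    from collections import deque

    def respects_safe_sets(H_adj, S):
        """Check a Constrained Connectivity solution: for every pair (u,v)
        with safe set S[(u,v)], u must reach v in H along a path all of
        whose vertices lie in S[(u,v)]. H_adj: vertex -> set of neighbors."""
        def reachable(u, v, allowed):
            seen, q = {u}, deque([u])
            while q:
                x = q.popleft()
                if x == v:
                    return True
                for y in H_adj[x]:
                    if y in allowed and y not in seen:
                        seen.add(y)
                        q.append(y)
            return False
        return all(reachable(u, v, S[(u, v)]) for (u, v) in S)
    ```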

  47. Label Cover Instances with Large Girth and the Hardness of Approximating Basic k-Spanner
    ICALP '12, ACM Transactions on Algorithms
    (with Guy Kortsarz and Ran Raz)
    We study the well-known Label Cover problem under the additional requirement that problem instances have large girth. We show that if the girth is some k, the problem is roughly $2^{\log^{1-\epsilon} n/k}$-hard to approximate for all constant $\epsilon > 0$. A similar theorem was claimed by Elkin and Peleg [ICALP 2000], but their proof was later found to have a fundamental error. Thus for the problem of Label Cover with large girth we give the first non-trivial lower bound.

    We use the new proof to show inapproximability for the basic k-spanner problem, which is both the simplest problem in graph spanners and one of the few for which super-logarithmic hardness was not known. Assuming $NP \not\subseteq BPTIME(2^{\mathrm{polylog}(n)})$, we show (roughly) that for every $k \geq 3$ and every constant $\epsilon > 0$ it is hard to approximate the basic k-spanner problem within a factor better than $2^{(\log^{1-\epsilon} n)/k}$. A similar hardness for basic k-spanner was claimed by Elkin and Peleg [ICALP 2000], but the error in their analysis of Label Cover made that proof fail as well. For the basic k-spanner problem we improve the previous best lower bound of $\Omega(\log n)/k$ by Kortsarz [1998].

    Our main technique is subsampling the edges of 2-query PCPs, which allows us to reduce the degree of a PCP to be essentially equal to the desired soundness. This turns out to be enough to essentially guarantee large girth.
  48. Efficient Computation of Distance Sketches in Distributed Networks
    SPAA '12, Distributed Computing
    (with Atish Das Sarma and Gopal Pandurangan)
    Distance computation (e.g., computing shortest paths) is one of the most fundamental primitives used in communication networks. The cost of effectively and accurately computing pairwise network distances can become prohibitive in large-scale networks such as the Internet and Peer-to-Peer (P2P) networks. To address the rising need for very efficient distance computation at scales never imagined before, approximation techniques for numerous variants of this question have recently received significant attention in the literature. Several different areas of theoretical research have emerged centered around this problem, such as metric embeddings, distance labelings, spanners, and distance oracles. The goal is to preprocess the graph and store a small amount of information such that whenever a query for any pairwise distance is issued, the distance can be well approximated (i.e., with small stretch) very quickly in an online fashion. Specifically, the pre-processing (usually) involves storing a small sketch with each node, such that at query time only the sketches of the concerned nodes need to be looked up to compute the approximate distance.

    Techniques derived from metric embeddings have been considered extensively by the networking community, usually under the name of network coordinate systems. On the other hand, while the computation of distance oracles has received considerable attention in the context of web graphs and social networks, there has been little work towards similar algorithms within the networking community. In this paper, we present the first theoretical study of distance sketches derived from distance oracles in a distributed network. We first present a fast distributed algorithm for computing approximate distance sketches, based on a distributed implementation of the distance oracle scheme of [Thorup-Zwick, JACM 2005]. We also show how to modify this basic construction to achieve different tradeoffs between the number of pairs for which the distance estimate is accurate, the size of the sketches, and the time and message complexity necessary to compute them. These tradeoffs can then be combined to give an efficient construction of small sketches with provable average-case as well as worst-case performance. Our algorithms use only small-sized messages and hence are suitable for bandwidth-constrained networks, and can be used in various networking applications such as topology discovery and construction, token management, load balancing, monitoring overlays, and several other problems in distributed algorithms.
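
    The scheme being distributed is the Thorup-Zwick construction; as a point of reference, here is a minimal centralized sketch of its k=2 version on a connected unweighted graph (stretch 3, sketches of expected size Õ(√n)). All names are illustrative.

    ```python
    import random
    from collections import deque

    def bfs(adj, source):
        """Single-source BFS distances in an unweighted graph."""
        dist = {source: 0}
        q = deque([source])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        return dist

    def build_sketches(adj):
        """k=2 Thorup-Zwick: sample landmarks A with probability ~n^(-1/2);
        each node u stores its nearest landmark p(u), exact distances to its
        bunch B(u) = {w : d(u,w) < d(u,A)}, and distances to all landmarks."""
        n = len(adj)
        A = [v for v in adj if random.random() < n ** -0.5] or list(adj)[:1]
        from_landmark = {a: bfs(adj, a) for a in A}
        sketches = {}
        for u in adj:
            du = bfs(adj, u)
            dA, p = min((from_landmark[a][u], a) for a in A)
            bunch = {w: d for w, d in du.items() if d < dA}
            to_landmarks = {a: from_landmark[a][u] for a in A}
            sketches[u] = (p, bunch, to_landmarks)
        return sketches

    def query(sketches, u, v):
        """Stretch-3 estimate from the endpoints' sketches only: exact if v
        is in u's bunch, else d(u,p(u)) + d(p(u),v) <= 3 d(u,v)."""
        p, bunch, toA_u = sketches[u]
        if v in bunch:
            return bunch[v]
        _, _, toA_v = sketches[v]
        return toA_u[p] + toA_v[p]
    ```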

  49. Fault-Tolerant Spanners: Better and Simpler
    PODC '11
    (with Robert Krauthgamer)
    A natural requirement of many distributed structures is fault-tolerance: after some failures, whatever remains from the structure should still be effective for whatever remains from the network. In this paper we examine spanners of general graphs that are tolerant to vertex failures, and significantly improve their dependence on the number of faults r, for all stretch bounds.

    For stretch k ≥ 3 we design a simple transformation that converts every k-spanner construction with at most f(n) edges into an r-fault-tolerant k-spanner construction with at most O(r^3 log n) ⋅ f(2n/r) edges. Applying this to standard greedy spanner constructions gives r-fault-tolerant k-spanners with Õ(r^2 n^(1+(2/(k+1)))) edges. The previous construction by Chechik, Langberg, Peleg, and Roditty [STOC 2009] depends similarly on n but exponentially on r (approximately like k^r).
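
    The transformation is simple enough to sketch. In the sketch below, `greedy_spanner` is the standard greedy stretch-k spanner, and the sampling probability and repetition count are illustrative placeholders; the paper's analysis pins down the exact values and the failure probability.

    ```python
    import random
    from collections import deque

    def bounded_dist(adj, u, v, cap):
        """BFS distance from u to v, truncated: returns cap+1 if there is
        no path of length <= cap."""
        if u == v:
            return 0
        dd, q = {u: 0}, deque([u])
        while q:
            x = q.popleft()
            if dd[x] >= cap:
                break
            for y in adj[x]:
                if y not in dd:
                    dd[y] = dd[x] + 1
                    if y == v:
                        return dd[y]
                    q.append(y)
        return cap + 1

    def greedy_spanner(vertices, edges, k):
        """Classical greedy k-spanner: keep an edge iff its endpoints are
        currently at distance > k in the spanner built so far."""
        adj = {v: set() for v in vertices}
        kept = []
        for (u, v) in edges:
            if bounded_dist(adj, u, v, k) > k:
                adj[u].add(v); adj[v].add(u)
                kept.append((u, v))
        return kept

    def fault_tolerant_spanner(vertices, edges, k, r, reps):
        """Union of k-spanners of random induced subgraphs: each round keeps
        each vertex with probability ~1/r (illustrative) and spans the
        induced subgraph; with reps on the order of r^3 log n the union is
        r-fault-tolerant with high probability."""
        out = set()
        for _ in range(reps):
            S = {v for v in vertices if random.random() < 1.0 / r}
            sub = [(u, v) for (u, v) in edges if u in S and v in S]
            out |= set(greedy_spanner(S, sub, k))
        return sorted(out)
    ```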

    For the case k=2 and unit-length edges, an O(r log n)-approximation algorithm is known from recent work of Dinitz and Krauthgamer [STOC 2011], where several spanner results are obtained using a common approach of rounding a natural flow-based linear programming relaxation. Here we use a different (stronger) LP relaxation and improve the approximation ratio to O(log n), which is, notably, independent of the number of faults r. We further strengthen this bound in terms of the maximum degree by using the Lovász Local Lemma.

    Finally, we show that most of our constructions are inherently local by designing equivalent distributed algorithms in the LOCAL model of distributed computation.

  50. Directed Spanners via Flow-Based Linear Programs
    STOC '11
    (with Robert Krauthgamer)
    We examine directed spanners through flow-based linear programming relaxations. We design an Õ(n^(2/3))-approximation algorithm for the directed k-spanner problem that works for all k ≥ 1, which is the first sublinear approximation for arbitrary edge-lengths. Even in the more restricted setting of unit edge-lengths, our algorithm improves over the previous Õ(n^(1-1/k)) approximation of Bhattacharyya et al. when k ≥ 4. For the special case of k=3 we design a different algorithm achieving an Õ(n^(1/2))-approximation, improving the previous Õ(n^(2/3)). Both of our algorithms easily extend to the fault-tolerant setting, which has recently attracted attention but not from an approximation viewpoint. We also prove a nearly matching integrality gap of Ω(n^(1/3-ε)) for any constant ε > 0.

    A virtue of all our algorithms is that they are relatively simple. Technically, we introduce a new yet natural flow-based relaxation, and show how to approximately solve it even when its size is not polynomial. The main challenge is to design a rounding scheme that "coordinates" the choices of flow-paths between the many demand pairs while using few edges overall. We achieve this, roughly speaking, by randomization at the level of vertices.
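
    As a rough illustration (not the exact program from the paper), a path-based flow relaxation for unit-length directed k-spanner has the following shape, where $\mathcal{P}_{u,v}$ denotes the set of directed paths from u to v of length at most k and each edge of G is a demand pair:

    $$\min \sum_{e \in E} x_e \quad \text{s.t.} \quad \sum_{P \in \mathcal{P}_{u,v}} f_P \ge 1 \;\; \forall (u,v) \in E; \qquad \sum_{P \in \mathcal{P}_{u,v} :\, e \in P} f_P \le x_e \;\; \forall (u,v) \in E,\ \forall e \in E; \qquad x, f \ge 0.$$

    The relaxation has exponentially many path variables, which is one reason the paper must show how to approximately solve it.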

  51. Distributed Algorithms for Approximating Wireless Network Capacity
    INFOCOM '10
    | doi | pdf
    In this paper we consider the problem of maximizing wireless network capacity (a.k.a. one-shot scheduling) in both the protocol and physical models. We give the first distributed algorithms with provable guarantees in the physical model, and show how they can be generalized to more complicated metrics and settings in which the physical assumptions are slightly violated. We also give the first algorithms in the protocol model that do not assume transmitters can coordinate with their neighbors in the interference graph, so every transmitter chooses whether to broadcast based purely on local events. Our techniques draw heavily from algorithmic game theory and machine learning theory, even though our goal is a distributed algorithm. Indeed, our main results allow every transmitter to run any algorithm it wants, so long as its algorithm has a learning-theoretic property known as no-regret in a game-theoretic setting.
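
    The learning-theoretic ingredient is generic: any no-regret strategy over the two actions {transmit, idle} suffices. Here is a minimal sketch of one standard choice, multiplicative weights (Hedge); the payoff callback is an illustrative placeholder for whatever feedback the transmitter observes.

    ```python
    import math, random

    def hedge_transmitter(payoff, rounds, eta=0.1):
        """Multiplicative-weights (Hedge) over {'transmit', 'idle'}.
        payoff(action, t) is the observed payoff of `action` in round t,
        assumed to lie in [-1, 1]; the regret to the best fixed action
        grows only as O(sqrt(rounds))."""
        w = {'transmit': 1.0, 'idle': 1.0}
        history = []
        for t in range(rounds):
            total = w['transmit'] + w['idle']
            a = 'transmit' if random.random() < w['transmit'] / total else 'idle'
            history.append(a)
            # full-information update: reweight both actions by their payoff
            for action in w:
                w[action] *= math.exp(eta * payoff(action, t))
        return history
    ```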
  52. Graphical Representations of Clutters
    Ars Combinatoria (2010)
    (with Jonah Gold, Thomas Sharkey, and Lorenzo Traldi)
    We discuss the use of K-terminal networks to represent arbitrary clutters. A given clutter has many different representations, and there does not seem to be any set of simple transformations that can be used to transform one representation of a clutter into any other. We observe that for t ≥ 2 the class of clutters that can be represented using no more than t terminals is closed under minors, and has infinitely many forbidden minors.
  53. Maximizing Capacity in Arbitrary Wireless Networks in the SINR Model: Complexity and Game Theory
    INFOCOM '09
    | doi | pdf
    (with Matthew Andrews)
    In this paper we consider the problem of maximizing the number of supported connections in arbitrary wireless networks where a transmission is supported if and only if the signal-to-interference-plus-noise ratio at the receiver is greater than some threshold. The aim is to choose transmission powers for each connection so as to maximize the number of connections for which this threshold is met.

    We believe that analyzing this problem is important both in its own right and also because it arises as a subproblem in many other areas of wireless networking. We study both the complexity of the problem and also present some game theoretic results regarding capacity that is achieved by completely distributed algorithms. We also feel that this problem is intriguing since it involves both continuous aspects (i.e. choosing the transmission powers) as well as discrete aspects (i.e. which connections should be supported). Our results are:

    —We show that maximizing the number of supported connections is NP-hard, even when there is no background noise. This is in contrast to the problem of determining whether or not a given set of connections is feasible, since that problem can be solved via linear programming (see the sketch after this list).

    —We present a number of approximation algorithms for the problem. All of these approximation algorithms run in polynomial time and have an approximation ratio that is independent of the number of connections.

    —We examine a completely distributed algorithm and analyze it as a game in which a connection receives a positive payoff if it is successful and a negative payoff if it is unsuccessful while transmitting with nonzero power. We show that in this game there is not necessarily a pure Nash equilibrium but if such an equilibrium does exist the corresponding price of anarchy is independent of the number of connections. We also show that a mixed Nash equilibrium corresponds to a probabilistic transmission strategy and in this case such an equilibrium always exists and has a price of anarchy that is independent of the number of connections.
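
    As noted in the first bullet, feasibility of a given set of connections reduces to an LP, since for a fixed set the SINR constraints are linear in the power vector. A minimal scipy sketch (the gain matrix G, threshold beta, noise, and power cap are illustrative parameters):

    ```python
    import numpy as np
    from scipy.optimize import linprog

    def sinr_feasible(G, beta, noise, p_max):
        """Is there a power vector P in [0, p_max]^n supporting all n
        connections? Connection i is supported iff
            P_i*G[i][i] >= beta*(noise + sum_{j != i} P_j*G[i][j]),
        where G[i][j] is the gain from transmitter j at receiver i."""
        G = np.asarray(G, dtype=float)
        n = len(G)
        A, b = [], []
        for i in range(n):
            row = beta * G[i].copy()
            row[i] = -G[i][i]   # move the signal term to the left-hand side
            A.append(row)       # beta*sum_{j!=i} G[i][j]*P_j - G[i][i]*P_i <= -beta*noise
            b.append(-beta * noise)
        res = linprog(c=np.zeros(n), A_ub=np.array(A), b_ub=np.array(b),
                      bounds=[(0, p_max)] * n)
        return res.success
    ```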

  54. Secretary Problems: Weights and Discounts
    SODA '09
    (with Moshe Babaioff, Anupam Gupta, Nicole Immorlica, and Kunal Talwar)
    The classical secretary problem studies the problem of selecting online an element (a "secretary") with maximum value in a randomly ordered sequence. The difficulty lies in the fact that an element must be either selected or discarded upon its arrival, and this decision is irrevocable. Constant-competitive algorithms are known for the classical secretary problem (see, e.g., the survey of Freeman) and several variants. We study the following two extensions of the secretary problem:

    —In the discounted secretary problem, there is a time-dependent "discount" factor d(t), and the benefit derived from selecting an element/secretary e at time t is d(t)⋅ v(e). For this problem with arbitrary (not necessarily decreasing) functions d(t), we show a constant-competitive algorithm when the expected optimum is known in advance. With no prior knowledge, we exhibit a lower bound of Ω(log n/log log n), and give a nearly-matching O(log n)-competitive algorithm.

    —In the weighted secretary problem, up to K secretaries can be selected; when a secretary is selected (s)he must be irrevocably assigned to one of K positions, with position k having weight w(k), and assigning object/secretary e to position k has benefit w(k) ⋅ v(e). The goal is to select secretaries and assign them to positions to maximize the total benefit. We give constant-competitive algorithms for this problem.

    Most of these results can also be extended to the matroid secretary case (Babaioff et al.) for a large family of matroids with a constant-factor loss, and an O(log rank) loss for general matroids. These results are based on a reduction from various matroids to partition matroids which presents a unified approach to many of the upper bounds of Babaioff et al. These problems have connections to online mechanism design (see, e.g., Hajiaghayi et al.). All our algorithms are monotone, and hence lead to truthful mechanisms for the corresponding online auction problems.

  55. Online, Dynamic, and Distributed Embeddings of Approximate Ultrametrics
    DISC '08
    | doi | pdf
    The theoretical computer science community has traditionally used embeddings of finite metrics as a tool in designing approximation algorithms. Recently, however, there has been considerable interest in using metric embeddings in the context of networks to allow network nodes to have more knowledge of the pairwise distances between other nodes in the network. There has also been evidence that natural network metrics like latency and bandwidth have some nice structure, and in particular come close to satisfying an ε-three point condition or an ε-four point condition. This empirical observation has motivated the study of these special metrics, including strong results about embeddings into trees and ultrametrics. Unfortunately all of the current embeddings require complete knowledge about the network up front, and so are less useful in real networks which change frequently. We give the first metric embeddings which have both low distortion and require only small changes in the structure of the embedding when the network changes. In particular, we give an embedding of semimetrics satisfying an ε-three point condition into ultrametrics with distortion (1+ε)^(log n + 4) and the property that any new node requires only O(n^(1/3)) amortized edge swaps, where we use the number of edge swaps as a measure of "structural change". This notion of structural change naturally leads to small update messages in a distributed implementation in which every node has a copy of the embedding. The natural offline embedding has only (1+ε)^(log n) distortion but can require Ω(n) amortized edge swaps per node addition. This online embedding also leads to a natural dynamic algorithm that can handle node removals as well as insertions.
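
    One standard formulation of the ε-three point condition (the exact placement of the constant here is my assumption) is a relaxed ultrametric inequality, easy to check directly; ε = 0 recovers ultrametrics.

    ```python
    from itertools import permutations

    def eps_three_point(d, points, eps):
        """Check d(x,z) <= (1+eps) * max(d(x,y), d(y,z)) for all triples;
        d is a symmetric distance table indexed as d[x][y]."""
        return all(d[x][z] <= (1 + eps) * max(d[x][y], d[y][z])
                   for x, y, z in permutations(points, 3))
    ```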
  56. Compact Routing with Slack
    PODC '07
    Given a weighted graph G=(V,E), a compact routing scheme is a distributed algorithm for forwarding packets from any source to any destination. The fundamental tradeoff is between the space used at each node and the stretch of the total route, measured by the multiplicative factor between the actual distance traveled and the length of the shortest possible route. We extend the normal definition with a slack parameter ε, which allows us to ignore up to εn^2 of the shortest pairwise routes and give a guarantee on the remaining ones. For constant ε we give constant stretch, polylogarithmic space schemes in the name-dependent model and in the designer-port name-independent model, and give a lower bound that proves that such schemes do not exist in the fixed-port name-independent model. In the name-dependent model we also give a gracefully degrading scheme which works simultaneously for all ε > 0 and guarantees constant average stretch with polylog space.
  57. Spanners with Slack
    ESA '06
    | doi | pdf
    (with T-H. Hubert Chan and Anupam Gupta)
    Given a metric (V,d), a spanner is a sparse graph whose shortest-path metric approximates the distance d to within a small multiplicative distortion. In this paper, we study the problem of spanners with slack: e.g., can we find sparse spanners where we are allowed to incur an arbitrarily large distortion on a small constant fraction of the distances, but are then required to incur only a constant (independent of n) distortion on the remaining distances? We answer this question in the affirmative, thus complementing similar recent results on embeddings with slack into ℓ_p spaces. For instance, we show that if we ignore an ε fraction of the distances, we can get spanners with O(n) edges and O(log(1/ε)) distortion for the remaining distances.

    We also show how to obtain sparse and low-weight spanners with slack from existing constructions of conventional spanners, and these techniques allow us to also obtain the best known results for distance oracles and distance labelings with slack. This paper complements similar results obtained in recent research on slack embeddings into normed metric spaces.
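
    Verifying the slack guarantee on a concrete spanner is straightforward: sort the pairwise stretches and discard the worst ε fraction. A minimal sketch (`d` and `d_spanner` are hypothetical all-pairs distance tables):

    ```python
    def slack_distortion(d, d_spanner, pairs, eps):
        """Distortion with slack: ignore the eps fraction of pairs with the
        largest multiplicative stretch, and report the maximum stretch over
        the remaining (1 - eps) fraction."""
        stretches = sorted(d_spanner[u][v] / d[u][v] for (u, v) in pairs)
        keep = max(1, int((1 - eps) * len(stretches)))
        return stretches[keep - 1]
    ```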

  58. Full rank tilings of finite abelian groups
    SIAM Journal on Discrete Mathematics (2006)
    | doi | pdf
    A tiling of a finite abelian group G is a pair (V,A) of subsets of G such that 0 is in both V and A and every g ∈ G can be uniquely written as g=v+a with v ∈ V and a ∈ A. Tilings are a special case of normed factorizations, a type of factorization by subsets that was introduced by Hajós [Časopis Pěst. Mat. Fys., 74 (1949), pp. 157-162]. A tiling is said to be full rank if both V and A generate G. Cohen, Litsyn, Vardy, and Zémor [SIAM J. Discrete Math., 9 (1996), pp. 393-412] proved that any tiling of Z_2^n can be decomposed into full rank and trivial tilings. We generalize this decomposition from Z_2^n to all finite abelian groups. We also show how to generate larger full rank tilings from smaller ones, and give two sufficient conditions for a group to admit a full rank tiling, showing that many groups do admit them. In particular, we prove that if p ≥ 5 is a prime and n ≥ 4, then Z_p^n admits a full rank tiling. This bound on n is tight for 5 ≤ p ≤ 11, and is conjectured to be tight for all primes p.
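
    A tiny checker makes the definition concrete. Here a finite abelian group is represented as a product of cyclic groups Z_{m1} × … × Z_{mk}, with elements as tuples:

    ```python
    def is_tiling(V, A, moduli):
        """(V, A) tiles Z_{m1} x ... x Z_{mk} iff 0 lies in both sets and
        every group element is uniquely expressible as v + a (mod moduli)."""
        zero = (0,) * len(moduli)
        if zero not in V or zero not in A:
            return False
        sums = [tuple((vi + ai) % m for vi, ai, m in zip(v, a, moduli))
                for v in V for a in A]
        order = 1
        for m in moduli:
            order *= m
        return len(sums) == order and len(set(sums)) == order

    # Example: V = {0,1} and A = {0,2} tile Z_4; note that A generates only
    # the subgroup {0,2}, so this particular tiling is not full rank.
    print(is_tiling({(0,), (1,)}, {(0,), (2,)}, (4,)))  # True
    ```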
  59. Enumeration of Balanced Tournament Designs on 10 points
    Journal of Combinatorial Mathematics and Combinatorial Computing (2005)
    (with Jeffrey Dinitz)
    We enumerate the balanced tournament designs on 10 points (BTD(5)) and find that there are exactly 30,220,557 nonisomorphic designs. We also find that there are exactly two nonisomorphic partitioned BTD(5)'s and 8,081,114 factored BTD(5)'s on 10 points. We enumerate other classes of balanced tournament designs on 10 points and give examples of some of the more interesting ones. In 1988 Corriveau enumerated the nonisomorphic BTD(4)'s finding that there are 47 of them. This paper enumerates the next case and provides another good example of the combinatorial explosion phenomenon.

Surveys:

  1. Recent Advances on the Matroid Secretary Problem
    SIGACT News (2013)
    | ACM DL | pdf
    The matroid secretary problem was introduced by Babaioff, Immorlica, and Kleinberg in SODA 2007 as an online problem that was both mathematically interesting and had applications to online auctions. In this column I will introduce and motivate this problem, and give a survey of some of the exciting work that has been done on it over the past 6 years. While we have a much better understanding of matroid secretary now than we did in 2007, the main conjecture is still open: does there exist an O(1)-competitive algorithm?

Manuscripts:

  1. Almost Tight Bounds for Differentially Private Densest Subgraph
    (with Satyen Kale, Silvio Lattanzi, and Sergei Vassilvitskii)
    We study the Densest Subgraph (DSG) problem under the additional constraint of differential privacy. DSG is a fundamental theoretical question which plays a central role in graph analytics, and so privacy is a natural requirement. But all known private algorithms for Densest Subgraph lose constant multiplicative factors as well as relatively large (at least log² n) additive factors, despite the existence of non-private exact algorithms. We show that, perhaps surprisingly, these losses are not necessary: in both the classic differential privacy model and the LEDP model (local edge differential privacy, introduced recently by Dhulipala et al. [FOCS 2022]), we give (ε,δ)-differentially private algorithms with no multiplicative loss whatsoever. In other words, the loss is purely additive. Moreover, our additive losses improve the best-known previous additive loss (in any version of differential privacy) when 1/δ is at least polynomial in n, and are almost tight: in the centralized setting, our additive loss is O(log n / ε) while there is a known lower bound of Ω(√(log n / ε)). Additionally, we give a different algorithm that is ε-differentially private in the LEDP model which achieves a multiplicative ratio arbitrarily close to 2, along with an additional additive factor. This improves over the previous multiplicative 4-approximation in the LEDP model. Finally, we conclude with extensions of our techniques to both the node-weighted and the directed versions of the problem.
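
    For context, the classical non-private baseline is simple: greedy peeling 2-approximates the densest subgraph (max |E(S)|/|S|), as shown by Charikar. The sketch below is that classical algorithm, not the private algorithms from the manuscript.

    ```python
    import heapq

    def densest_subgraph_peel(adj):
        """Greedy peeling: repeatedly delete a minimum-degree vertex and
        return the best density |E(S)|/|S| seen among the intermediate
        subgraphs (a 2-approximation to the densest subgraph)."""
        deg = {v: len(adj[v]) for v in adj}
        alive = set(adj)
        m = sum(deg.values()) // 2
        heap = [(d, v) for v, d in deg.items()]
        heapq.heapify(heap)
        best_density, best_size = m / len(alive), len(alive)
        while len(alive) > 1:
            d, v = heapq.heappop(heap)
            if v not in alive or d != deg[v]:
                continue                      # stale heap entry
            alive.remove(v)
            m -= deg[v]
            for u in adj[v]:
                if u in alive:
                    deg[u] -= 1
                    heapq.heappush(heap, (deg[u], u))
            density = m / len(alive)
            if density > best_density:
                best_density, best_size = density, len(alive)
        return best_density, best_size
    ```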
  2. Degrees and Network Design: New Problems and Approximations
    (with Guy Kortsarz and Shi Li)
    While much of network design focuses mostly on cost (number or weight of edges), node degrees have also played an important role. They have traditionally either appeared as an objective, to minimize the maximum degree (e.g., the Minimum Degree Spanning Tree problem), or as constraints which might be violated to give bicriteria approximations (e.g., the Minimum Cost Degree Bounded Spanning Tree problem). We extend the study of degrees in network design in two ways. First, we introduce and study a new variant of the Survivable Network Design Problem where in addition to the traditional objective of minimizing the cost of the chosen edges, we add a constraint that the ℓp-norm of the node degree vector is bounded by an input parameter. This interpolates between the classical settings of maximum degree (the ℓ∞-norm) and the number of edges (the ℓ1-norm), and has natural applications in distributed systems and VLSI design. We give a constant bicriteria approximation in both measures using convex programming. Second, we provide a polylogarithmic bicriteria approximation for the Degree Bounded Group Steiner problem on bounded treewidth graphs, solving an open problem from [Kortsarz and Nutov, Discret. Appl. Math. 2022] and [Guo et al., Algorithmica 2022].
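
    Concretely, the two endpoints of the ℓp interpolation are familiar quantities: for the degree vector, the ℓ1-norm is exactly twice the number of edges (every edge contributes to two degrees), and the ℓ∞-norm is the maximum degree. A trivial sketch:

    ```python
    def degree_lp_norm(adj, p):
        """l_p-norm of the degree vector of a graph given as an adjacency
        dict; p = 1 gives 2|E| and p = float('inf') gives the max degree."""
        degs = [len(adj[v]) for v in adj]
        if p == float('inf'):
            return max(degs)
        return sum(d ** p for d in degs) ** (1.0 / p)
    ```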

Just for fun:

  1. Highly efficient and effective removal of fat from fried chicken via centrifugation
    Annals of Improbable Research (2013)
    | pdf
    (with Lucas Carey and Desiree Tillo)
    Approximately 20% of Israelis and >30% of Americans are obese, and deep fried foods contribute a significant fraction of the daily fat intake. Fried chicken is a popular high fat food and contributes to a significant fraction of daily fat consumption. Indeed, an increase in fried chicken consumption has been blamed for contributing to the increase in obesity in both China and the Arabian Peninsula. Most current research on reducing the fat in fried chicken has focused on reducing fat uptake during the frying process; however, these methods have not been adopted by restaurants or consumers. Here we present centrifugation of the already cooked fried chicken as an alternative method for fat reduction. We show that centrifugation of fried chicken reduces the fat content. This method, in contrast to all existing methods that target food preparation and cooking, can be applied to already fried chicken, thus providing the first method that can be used directly by consumers with access to a centrifuge.
  2. An Incentive-Compatible Mechanism for all Settings: Chuck Norris
    SIGBOVIK '08
    | pdf
    A very important property for any mechanism is incentive-compatibility. A mechanism is incentive-compatible if no agent has an incentive not to follow the protocol. In this paper we present a new meta-mechanism that can be applied to any existing mechanism to make it incentive compatible. This meta-mechanism is Chuck Norris. This shows that any mechanism can be made incentive-compatible, and thus the field of mechanism design is now solved.
  3. Slacking with Slack
    SIGBOVIK '08
    | pdf
    The classical graduate student problem is the well-studied problem of how a graduate student can spend all of their time slacking off in graduate school while still graduating. A famous impossibility result of Bovik states that if all of a student’s time is spent slacking, then it is impossible to graduate. We relax this problem by adding a slack parameter ε, representing the fraction of time that the student has to spend working. On this ε fraction we make no guarantee at all about the enjoyment of the student, but this enables us to guarantee graduation while also guaranteeing large enjoyment on the other 1 - ε fraction of the time.