Peter Robinson
I'm an Associate Professor in the AU School of Computer & Cyber Sciences. My research focuses on designing new distributed and parallel algorithms, processing big data in a distributed fashion, achieving fault tolerance in communication networks under adversarial attacks, and developing robust protocols that work in highly dynamic environments such as peer-to-peer networks and mobile ad-hoc networks.
My research has been supported by the General Research Fund (Hong Kong), the Natural Sciences and Engineering Research Council (Canada), IBM Research, and the London Mathematical Society (UK).
News
- New paper at SPAA 2024!
- 2 papers at ITCS 2024!
- Slides for my talk at AMG @ DISC 2023
Research [DBLP]
2024
- Dynamic Maximal Matching in Clique Networks [ArXiv] [DOI]
M Li, P Robinson, X Zhu. 15th Innovations in Theoretical Computer Science. (ITCS 2024).
Abstract: We consider the problem of computing a maximal matching with a distributed algorithm in the presence of batch-dynamic changes to the graph topology. We assume that a graph of $n$ nodes is vertex-partitioned among $k$ players that communicate via message passing. Our goal is to provide an efficient algorithm that quickly updates the matching even if an adversary determines batches of $\ell$ edge insertions or deletions. Assuming a link bandwidth of $O(\beta\log n)$ bits per round, for a parameter $\beta \ge 1$, we first show a lower bound of $\Omega( \frac{\ell\log k}{\beta k^2\log n})$ rounds for recomputing a matching assuming an oblivious adversary who is unaware of the initial (random) vertex partition as well as the current state of the players, and a stronger lower bound of $\Omega(\frac{\ell}{\beta k\log n})$ rounds against an adaptive adversary, who may choose any balanced (but not necessarily random) vertex partition initially and who knows the current state of the players. We also present a randomized algorithm that has an initialization time of $O( \frac{n}{\beta k}\log n )$ rounds, while achieving an update time that is independent of $n$: in more detail, the update time is $O( \lceil \frac{\ell}{\beta k} \rceil \log (\beta k) )$ against an oblivious adversary, who must fix all updates in advance. If we consider the stronger adaptive adversary, the update time becomes $O( \lceil \frac{\ell}{\beta\sqrt{k}} \rceil \log (\beta k) )$ rounds.
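The sketch below is not the paper's distributed algorithm; it is a minimal sequential illustration of the object being maintained: a maximal matching, together with the local repair step that restores maximality after a matched edge is deleted. The function names (`greedy_maximal_matching`, `delete_edge`) are placeholders chosen for this example.

```python
# Minimal sequential sketch (not the paper's distributed algorithm): maintain a
# maximal matching under edge deletions by greedily rematching exposed endpoints.
from collections import defaultdict

def greedy_maximal_matching(edges):
    """Return a maximal matching as a dict node -> matched partner."""
    mate = {}
    for u, v in edges:
        if u not in mate and v not in mate:
            mate[u], mate[v] = v, u
    return mate

def delete_edge(adj, mate, u, v):
    """Remove edge (u, v) and restore maximality by rematching u and v locally."""
    adj[u].discard(v)
    adj[v].discard(u)
    if mate.get(u) == v:           # only matched edges require repair
        del mate[u], mate[v]
        for x in (u, v):           # try to rematch each exposed endpoint
            for w in adj[x]:
                if w not in mate:
                    mate[x], mate[w] = w, x
                    break

# toy usage: a 4-cycle, then delete one matched edge
edges = [(1, 2), (2, 3), (3, 4), (4, 1)]
adj = defaultdict(set)
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)
mate = greedy_maximal_matching(edges)
delete_edge(adj, mate, 1, 2)
print(mate)
```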
- Sparse Spanners with Small Distance and Congestion Stretches
C Busch, D Kowalski, P Robinson. 36th ACM Symposium on Parallelism in Algorithms and Architectures. (SPAA 2024).
- The Message Complexity of Distributed Graph Optimization [ArXiv] [DOI]
F Dufoulon, S Pai, G Pandurangan, S Pemmaraju, P Robinson. 15th Innovations in Theoretical Computer Science. (ITCS 2024).
2023
- Distributed Sketching Lower Bounds for k-Edge Connected Spanning Subgraph, Breadth-First Search Tree, and LCL Problems [DOI]
P Robinson. 37th International Symposium on Distributed Computing (DISC 2023).
- Improved Tradeoffs for Leader Election [ArXiv] [DOI]
S Kutten, P Robinson, M M Tan, X Zhu. 42nd ACM Symposium on Principles of Distributed Computing. (PODC 2023).
- What Can We Compute in a Single Round of the Congested Clique? [ArXiv] [DOI]
P Robinson. Brief Announcement in: 42nd ACM Symposium on Principles of Distributed Computing. (PODC 2023).
- Tight Bounds on the Message Complexity of Distributed Tree Verification [ArXiv] [DOI]
S Kutten, P Robinson, M M Tan. 27th International Conference on Principles of Distributed Systems. (OPODIS 2023).
- Leader Election in Well-Connected Graphs [ArXiv] [DOI]
S Gilbert, P Robinson, S Sourav. Algorithmica, 85(4): 1029-1066. (Algorithmica).
2022
- Latency, capacity, and distributed minimum spanning trees [DOI]
J Augustine, S Gilbert, F Kuhn, P Robinson, S Sourav. Journal of Computer and System Sciences, Elsevier. (JCSS).
- Byzantine-Resilient Counting in Networks [DOI]
Soumyottam Chatterjee, Gopal Pandurangan, Peter Robinson. 42nd IEEE International Conference on Distributed Computing Systems (ICDCS 2022).
- Awake-Efficient Distributed Algorithms for Maximal Independent Set [DOI]
K Hourani, G Pandurangan, P Robinson. 42nd IEEE International Conference on Distributed Computing Systems (ICDCS 2022).
2021
- Being Fast Means Being Chatty: The Local Information Cost of Graph Spanners [ArXiv] [DOI]
Peter Robinson. 32nd ACM-SIAM Symposium on Discrete Algorithms (SODA 2021).
- Can We Break Symmetry with $o(m)$ Communication? [ArXiv] [DOI]
Shreyas Pai, Gopal Pandurangan, Sriram V. Pemmaraju, Peter Robinson. 40th ACM Symposium on Principles of Distributed Computing (PODC 2021).
2020
- DConstructor: Efficient and Robust Network Construction with Polylogarithmic Overhead [DOI]
Seth Gilbert, Gopal Pandurangan, Peter Robinson, Amitabh Trehan. 39th ACM Symposium on Principles of Distributed Computing (PODC 2020).
- Latency, Capacity, and Distributed Minimum Spanning Tree [DOI]
John Augustine, Seth Gilbert, Fabian Kuhn, Peter Robinson, Suman Sourav. 40th IEEE International Conference on Distributed Computing Systems (ICDCS 2020).
- A Time- and Message-Optimal Distributed Algorithm for Minimum Spanning Trees [ArXiv] [DOI]
Gopal Pandurangan, Peter Robinson, Michele Scquizzato. ACM Transactions on Algorithms (ACM TALG).
AbstractThis paper presents a randomized (Las Vegas) distributed algorithm that constructs a minimum spanning tree (MST) in weighted networks with optimal (up to polylogarithmic factors) time and message complexity. This algorithm runs in $\tilde{O}(D + \sqrt{n})$ time and exchanges $\tilde{O}(m)$ messages (both with high probability), where $n$ is the number of nodes of the network, $D$ is the diameter, and $m$ is the number of edges. This is the first distributed MST algorithm that matches simultaneously the time lower bound of $\tilde{\Omega}(D + \sqrt{n})$ [Elkin, SIAM J. Comput. 2006] and the message lower bound of $\Omega(m)$ [Kutten et al., JACM 2015], which both apply to randomized Monte Carlo algorithms. The prior time and message lower bounds are derived using two completely different graph constructions; the existing lower bound construction that shows one lower bound does not work for the other. To complement our algorithm, we present a new lower bound graph construction for which any distributed MST algorithm requires both $\tilde{\Omega}(D + \sqrt{n})$ rounds and $\Omega(m)$ messages.
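As background for the abstract above, here is a sequential sketch of Borůvka-style fragment merging, the classical building block that distributed MST algorithms in the GHS tradition refine. It is not the paper's time- and message-optimal procedure, and the helper name `boruvka_mst` is invented for this illustration.

```python
# A sequential sketch of Boruvka-style fragment merging, the classic phase structure
# behind distributed MST algorithms (GHS and its descendants); NOT the paper's
# time- and message-optimal algorithm.
def boruvka_mst(n, edges):
    """edges: list of (weight, u, v) with distinct weights; returns the MST edge set."""
    parent = list(range(n))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    mst = set()
    while True:
        # each fragment selects its minimum-weight outgoing edge (MOE)
        moe = {}
        for w, u, v in edges:
            fu, fv = find(u), find(v)
            if fu == fv:
                continue
            for f in (fu, fv):
                if f not in moe or (w, u, v) < moe[f]:
                    moe[f] = (w, u, v)
        if not moe:                   # single fragment left: MST complete
            return mst
        # merge fragments along their chosen MOEs (one "phase"); O(log n) phases suffice
        for w, u, v in moe.values():
            fu, fv = find(u), find(v)
            if fu != fv:
                parent[fu] = fv
                mst.add((w, u, v))

# toy usage: a 4-cycle with a chord
print(boruvka_mst(4, [(1, 0, 1), (2, 1, 2), (3, 2, 3), (4, 3, 0), (5, 0, 2)]))
```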
2019
- The Complexity of Symmetry Breaking in Massive Graphs [ArXiv] [DOI]
Christian Konrad, Sriram V. Pemmaraju, Talal Riaz, and Peter Robinson. 33rd International Symposium on Distributed Computing, LIPIcs Vol. 146 (DISC 2019).
- Slow Links, Fast Links, and the Cost of Gossip [DOI]
Seth Gilbert, Peter Robinson, Suman Sourav. IEEE Transactions on Parallel and Distributed Systems (IEEE TPDS).
- The Complexity of Leader Election in Diameter-two Networks [ArXiv] [DOI]
Soumyottam Chatterjee, Gopal Pandurangan, Peter Robinson. Distributed Computing (DC).
- Network Size Estimation in Small-World Networks under Byzantine Faults [ArXiv] [DOI]
Soumyottam Chatterjee, Gopal Pandurangan, Peter Robinson. 33rd IEEE International Parallel and Distributed Processing Symposium (IPDPS 2019).
- On the Hardness of the Strongly Dependent Decision Problem [ArXiv] [DOI]
Martin Biely and Peter Robinson. 20th International Conference on Distributed Computing and Networking (ICDCN 2019).
AbstractWe present necessary and sufficient conditions for solving the strongly dependent decision (SDD) problem in various distributed systems. Our main contribution is a novel characterization of the SDD problem based on point-set topology. For partially synchronous systems, we show that any algorithm that solves the SDD problem induces a set of executions that is closed with respect to the point-set topology. We also show that the SDD problem is not solvable in the asynchronous system augmented with any arbitrarily strong failure detectors.
2018
- Fault-Tolerant Consensus with an Abstract MAC Layer [ArXiv] [DOI]
Calvin Newport and Peter Robinson. 32nd International Symposium on Distributed Computing (DISC 2018).
Abstract: In this paper, we study fault-tolerant distributed consensus in wireless systems. In more detail, we produce two new randomized algorithms that solve this problem in the abstract MAC layer model, which captures the basic interface and communication guarantees provided by most wireless MAC layers. Our algorithms work for any number of failures, require no advance knowledge of the network participants or network size, and guarantee termination with high probability after a number of broadcasts that are polynomial in the network size. Our first algorithm satisfies the standard agreement property, while our second trades a faster termination guarantee in exchange for a looser agreement property in which most nodes agree on the same value. These are the first known fault-tolerant consensus algorithms for this model. In addition to our main upper bound results, we explore the gap between the abstract MAC layer and the standard asynchronous message passing model by proving fault-tolerant consensus is impossible in the latter in the absence of information regarding the network participants, even if we assume no faults, allow randomized solutions, and provide the algorithm a constant-factor approximation of the network size.
- Leader Election in Well-Connected Graphs
Seth Gilbert, Peter Robinson, Suman Sourav. 37th ACM Symposium on Principles of Distributed Computing (PODC 2018).
AbstractIn this paper, we look at the problem of randomized leader election in synchronous distributed networks with a special focus on the message complexity. We provide an algorithm that solves the implicit version of leader election (where non-leader nodes need not be aware of the identity of the leader) in any general network with $O(\sqrt{n} \log^{7/2} n \cdot t_{mix})$ messages and in $O(t_{mix}\log^2 n)$ time, where $n$ is the number of nodes and $t_{mix}$ refers to the mixing time of a random walk in the network graph $G$. For several classes of well-connected networks (that have a large conductance or alternatively small mixing times e.g. expanders, hypercubes, etc), the above result implies extremely efficient (sublinear running time and messages) leader election algorithms. Correspondingly, we show that any substantial improvement is not possible over our algorithm, by presenting an almost matching lower bound for randomized leader election. We show that $\Omega(\sqrt{n}/\phi^{3/4})$ messages are needed for any leader election algorithm that succeeds with probability at least $1-o(1)$, where $\phi$ refers to the conductance of a graph. To the best of our knowledge, this is the first work that shows a dependence between the time and message complexity to solve leader election and the connectivity of the graph $G$, which is often characterized by the graph's conductance $\phi$. Apart from the $\Omega(m)$ bound in Kutten et al 2015 (where $m$ denotes the number of edges of the graph), this work also provides one of the first non-trivial lower bounds for leader election in general networks. - On the Distributed Complexity of Large-Scale Graph ComputationsDOI
Gopal Pandurangan, Peter Robinson, Michele Scquizzato. 30th ACM Symposium on Parallelism in Algorithms and Architectures. (SPAA 2018).
AbstractMotivated by the need to understand the algorithmic foundations of distributed large-scale graph computations, we study some fundamental graph problems in a message-passing model for distributed computing where $k \geq 2$ machines jointly perform computations on graphs with $n$ nodes (typically, $n \gg k$). The input graph is assumed to be initially randomly partitioned among the $k$ machines, a common implementation in many real-world systems. Communication is point-to-point, and the goal is to minimize the number of communication rounds of the computation. We present (almost) tight bounds for the round complexity of two fundamental graph problems, namely triangle enumeration and PageRank computation. Our tight lower bounds, a main contribution of the paper, are established through an information-theoretic approach that relates the round complexity to the minimal amount of information required by machines for solving a problem. Our approach is generic and can be useful in showing lower bounds in the context of similar problems and similar models. Our approach, as demonstrated in the case of triangle enumeration, can yield stronger round lower bounds as well as message-round tradeoffs compared to approaches that use communication complexity techniques. We then present algorithms that (almost) match the lower bounds; these algorithms exhibit a round complexity which scales superlinearly in $k$, improving significantly over previous results. Specifically, we show the following results: 1. Triangle enumeration: We show that there exist graphs with $m$ edges where any distributed algorithm requires $\tilde{\Omega}(m/k^{5/3})$ rounds. This result implies the first non-trivial lower bound of $\tilde\Omega(n^{1/3})$ rounds for the congested clique model, which is tight up to logarithmic factors. We also present a distributed algorithm that enumerates all the triangles of a graph in $\tilde{O}(m/k^{5/3})$ rounds. 2. PageRank: We show a lower bound of $\tilde{\Omega}(n/k^2)$ rounds, and present a simple distributed algorithm that computes the PageRank of all the nodes of a graph in $\tilde{O}(n/k^2)$ rounds. - Breaking the $\Omega(\sqrt{n})$ Barrier: Fast Consensus under a Late AdversaryArXivDOI
Peter Robinson, Christian Scheideler, Alexander Setzer. 30th ACM Symposium on Parallelism in Algorithms and Architectures. (SPAA 2018).
AbstractWe study the consensus problem in a synchronous distributed system of $n$ nodes under an adaptive adversary that has a slightly outdated view of the system and can block all incoming and outgoing communication of a constant fraction of the nodes in each round. Motivated by a result of Ben-Or and Bar-Joseph (1998), showing that any consensus algorithm that is resilient against a linear number of crash faults requires $\tilde \Omega(\sqrt{n})$ rounds in an $n$-node network against an adaptive adversary, we consider a late adaptive adversary, who has full knowledge of the network state at the beginning of the previous round and unlimited computational power, but is oblivious to the current state of the nodes. Our main contributions are randomized distributed algorithms that achieve consensus with high probability among all except a small constant fraction of the nodes (i.e., ``almost-everywhere'') against a late adaptive adversary who can block up to $\epsilon n$ nodes in each round, for a small constant $\epsilon >0$. Our first protocol achieves binary almost-everywhere consensus and also guarantees a decision on the majority input value, thus ensuring plurality consensus. We also present an algorithm that achieves the same time complexity for multi-value consensus. Both of our algorithms succeed in $O(\log n)$ rounds with high probability, thus breaking the known $\tilde\Omega(\sqrt{n})$ lower bound for fully adaptive adversaries. Our algorithms are scalable to large systems as each node contacts only an (amortized) constant number of peers in each communication round. We show that our algorithms are optimal up to constant (resp. sub-logarithmic) factors by proving that every almost-everywhere consensus protocol takes $\Omega(\log_d n)$ rounds in the worst case, where $d$ is an upper bound on the number of communication requests initiated per node in each round. - Slow Links, Fast Links, and the Cost of GossipDOI
Seth Gilbert, Peter Robinson, Suman Sourav. 38th IEEE International Conference on Distributed Computing Systems (ICDCS 2018).
Abstract: Consider the classical problem of information dissemination: one (or more) nodes in a network have some information that they want to distribute to the remainder of the network. In this paper, we study the cost of information dissemination in networks where edges have latencies, i.e., sending a message from one node to another takes some amount of time. We first generalize the idea of conductance to weighted graphs, defining $\phi_*$ to be the "weighted conductance" and $\ell_*$ to be the "critical latency." One goal of this paper is to argue that $\phi_*$ characterizes the connectivity of a weighted graph with latencies in much the same way that conductance characterizes the connectivity of unweighted graphs. We give near-tight upper and lower bounds on the problem of information dissemination, up to polylogarithmic factors. Specifically, we show that in a graph with (weighted) diameter $D$ (with latencies as weights), maximum degree $\Delta$, weighted conductance $\phi_*$ and critical latency $\ell_*$, any information dissemination algorithm requires at least $\Omega(\min(D+\Delta, \ell_*/\phi_*))$ time. We show several variants of the lower bound (e.g., for graphs with small diameter, graphs with small max-degree, etc.) by reduction to a simple combinatorial game. We then give nearly matching algorithms, showing that information dissemination can be solved in $O(\min((D + \Delta)\log^3{n}, (\ell_*/\phi_*)\log{n}))$ time. The algorithm combines two sub-algorithms, one for each case: when nodes do not know the latency of the adjacent edges, we show that the classical push-pull algorithm is (near) optimal when the diameter or maximum degree is large. For the case where the diameter and maximum degree are small, we give an alternative strategy in which we first discover the latencies and then use an algorithm for known latencies based on a weighted spanner construction. (Our algorithms are within polylogarithmic factors of being tight both for known and unknown latencies.)
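The abstract refers to the classical push-pull rumor-spreading algorithm; the toy simulation below shows that baseline on an unweighted graph (edge latencies are ignored) and is not the paper's latency-aware algorithm or its weighted-conductance analysis. The function name `push_pull` is illustrative.

```python
# A toy round-based simulation of classical push-pull rumor spreading on an
# unweighted graph (latencies ignored); a baseline sketch only.
import random

def push_pull(adj, source, rng=random.Random(0)):
    """adj: dict node -> list of neighbors. Returns rounds until all nodes are informed."""
    informed = {source}
    rounds = 0
    while len(informed) < len(adj):
        rounds += 1
        newly = set()
        for u in adj:
            v = rng.choice(adj[u])          # each node contacts one random neighbor
            if u in informed and v not in informed:
                newly.add(v)                # push: u tells v
            if u not in informed and v in informed:
                newly.add(u)                # pull: u asks v
        informed |= newly
    return rounds

# usage: complete graph on 64 nodes; push-pull finishes in O(log n) rounds w.h.p.
n = 64
adj = {u: [v for v in range(n) if v != u] for u in range(n)}
print(push_pull(adj, source=0))
```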
- The Complexity of Leader Election: A Chasm at Diameter Two [DOI]
Soumyottam Chatterjee, Gopal Pandurangan, Peter Robinson. 19th International Conference on Distributed Computing and Networking (ICDCN 2018).
Top Nomination for Best Paper Award.
AbstractLeader election is one of the fundamental problems in distributed computing. In its implicit version, only the leader must know who is the elected leader. This paper focuses on studying the message complexity of leader election in synchronous distributed networks, in particular, in networks of diameter two. Kutten et al. [JACM 2015] showed a fundamental lower bound of $\Omega(m)$ ($m$ is the number of edges in the network) on the message complexity of (implicit) leader election that applied also to Monte Carlo randomized algorithms with constant success probability; this lower bound applies for graphs that have {diameter at least three}. On the other hand, for complete graphs (i.e., diameter 1), Kutten et al. [TCS 2015] established a tight bound of $\tilde{\Theta}(\sqrt{n})$ on the message complexity of randomized leader election ($n$ is the number of nodes in the network). For graphs of diameter two, the complexity was not known. In this paper, we settle this complexity by showing a tight bound of $\tilde{\Theta}(n)$ on the message complexity of leader election in diameter-two networks. We first give a simple randomized Monte-Carlo leader election algorithm that with high probability (i.e., probability at least $1 - n^{-c}$, for some positive constant $c$) succeeds and uses $O(n\log^3{n})$ messages and runs in $O(1)$ rounds; this algorithm works without knowledge of $n$ (and hence needs no global knowledge). We then show that any algorithm (even Monte Carlo randomized algorithms with large enough constant success probability) needs $\Omega(n)$ messages (even when $n$ is known), regardless of the number of rounds. We also present an $O(n\log{n})$ messages deterministic algorithm that takes $O(\log{n})$ rounds (but needs knowledge of $n$); we show that this message complexity is tight for deterministic algorithms. Our results show that leader election can be solved in diameter-two graphs in (essentially) linear (in $n$) message complexity and thus the $\Omega(m)$ lower bound does not apply to diameter-two graphs. - Gracefully Degrading Consensus and k-Set Agreement in Directed Dynamic Networks
Martin Biely, Peter Robinson, Ulrich Schmid, Manfred Schwarz, Kyrill Winkler. Theoretical Computer Science 726: 41-77 (2018) (TCS).
- Fast Distributed Algorithms for Connectivity and MST in Large Graphs [DOI]
Gopal Pandurangan, Peter Robinson, Michele Scquizzato. Special Issue of ACM Transactions on Parallel Computing. (TOPC).
AbstractMotivated by the increasing need to understand the algorithmic foundations of distributed large-scale graph computations, we study a number of fundamental graph problems in a message-passing model for distributed computing where $k \geq 2$ machines jointly perform computations on graphs with $n$ nodes (typically, $n \gg k$). The input graph is assumed to be initially randomly partitioned among the $k$ machines, a common implementation in many real-world systems. Communication is point-to-point, and the goal is to minimize the number of communication rounds of the computation. Our main result is an (almost) optimal distributed randomized algorithm for graph connectivity. Our algorithm runs in $\tilde{O}(n/k^2)$ rounds ($\tilde{O}$ notation hides a $\text{polylog}(n)$ factor and an additive $\text{polylog}(n)$ term). This improves over the best previously known bound of $\tilde{O}(n/k)$ [Klauck et al., SODA 2015], and is optimal (up to a polylogarithmic factor) in view of an existing lower bound of $\tilde{\Omega}(n/k^2)$. Our improved algorithm uses a bunch of techniques, including linear graph sketching, that prove useful in the design of efficient distributed graph algorithms. We then present fast randomized algorithms for computing minimum spanning trees, (approximate) min-cuts, and for many graph verification problems. All these algorithms take $\tilde{O}(n/k^2)$ rounds, and are optimal up to polylogarithmic factors. We also show an almost matching lower bound of $\tilde{\Omega}(n/k^2)$ for many graph verification problems using lower bounds in random-partition communication complexity. - The Distributed Minimum Spanning Tree ProblemDOI
Gopal Pandurangan, Peter Robinson, Michele Scquizzato. Distributed Computing Column, EATCS Bulletin, No 125: June 2018. (EATCS).
2017
- A Time- and Message-Optimal Distributed Algorithm for Minimum Spanning Trees [DOI]
Gopal Pandurangan, Peter Robinson, Michele Scquizzato. 49th ACM Symposium on Theory of Computing (STOC 2017).
AbstractThis paper presents a randomized (Las Vegas) distributed algorithm that constructs a minimum spanning tree (MST) in weighted networks with optimal (up to polylogarithmic factors) time and message complexity. This algorithm runs in $\tilde{O}(D + \sqrt{n})$ time and exchanges $\tilde{O}(m)$ messages (both with high probability), where $n$ is the number of nodes of the network, $D$ is the diameter, and $m$ is the number of edges. This is the first distributed MST algorithm that matches simultaneously the time lower bound of $\tilde{\Omega}(D + \sqrt{n})$ [Elkin, SIAM J. Comput. 2006] and the message lower bound of $\Omega(m)$ [Kutten et al., JACM 2015], which both apply to randomized Monte Carlo algorithms. The prior time and message lower bounds are derived using two completely different graph constructions; the existing lower bound construction that shows one lower bound does not work for the other. To complement our algorithm, we present a new lower bound graph construction for which any distributed MST algorithm requires both $\tilde{\Omega}(D + \sqrt{n})$ rounds and $\Omega(m)$ messages. - Symmetry Breaking in the Congest Model: Message- and Time-Efficient Algorithms for Ruling Sets.
Shreyas Pai, Gopal Pandurangan, Sriram V. Pemmaraju, Talal Riaz, Peter Robinson. 31st International Symposium on Distributed Computing (DISC 2017).
AbstractWe study local symmetry breaking problems in the Congest model, focusing on ruling set problems, which generalize the fundamental Maximal Independent Set (MIS) problem. The time (round) complexity of MIS (and ruling sets) have attracted much attention in the Local model. Indeed, recent results (Barenboim et al., FOCS 2012, Ghaffari SODA 2016) for the MIS problem have tried to break the long-standing $O(\log n)$-round ``barrier'' achieved by Luby's algorithm, but these yield $o(\log n)$-round complexity only when the maximum degree $\Delta$ is somewhat small relative to $n$. More importantly, these results apply only in the Local model. In fact, the best known time bound in the Congest model is still $O(\log n)$ (via Luby's algorithm) even for somewhat small $\Delta$. Furthermore, message complexity has been largely ignored in the context of local symmetry breaking. Luby's algorithm takes $O(m)$ messages on $m$-edge graphs and this is the best known bound with respect to messages. Our work is motivated by the following central question: can we break the $\Theta(m)$ message bound and the $\Theta(\log n)$ time bound in the Congest model for MIS or closely-related symmetry breaking problems? This paper presents progress towards this question for the distributed ruling set problem in the Congest model. A $\beta$-ruling set is an independent set such that every node in the graph is at most $\beta$ hops from a node in the independent set. We present the following results: 1. Time Complexity: We show that we can break the $O(\log n)$ ``barrier'' for 2- and 3-ruling sets. We compute 3-ruling sets in $O\left(\log n/\log \log n\right)$ rounds with high probability (whp). More generally we show that 2-ruling sets can be computed in $O\left(\log \Delta \cdot (\log n)^{1/2 + \varepsilon} + \log n/\log\log n\right)$ rounds for any $\varepsilon > 0$, which is $o(\log n)$ for a wide range of $\Delta$ values (e.g., $\Delta = 2^{(\log n)^{1/2-\varepsilon}}$). These are the first 2- and 3-ruling set algorithms to improve over the $O(\log n)$-round complexity of Luby's algorithm in the Congest model. 2. Message Complexity: We show an $\Omega(n^2)$ lower bound on the message complexity of computing an MIS (i.e., 1-ruling set) which holds also for randomized algorithms and present a contrast to this by showing a randomized algorithm for 2-ruling sets that, whp, uses only $O(n \log^2 n)$ messages and runs in $O(\Delta \log n)$ rounds. This is the first message-efficient algorithm known for ruling sets, which takes near-linear message complexity (which is optimal up to a polylogarithmic factor). Our results are a step toward understanding the time and message complexity of symmetry breaking problems in the Congest model.
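The abstract repeatedly compares against Luby's algorithm; for reference, here is a round-based sketch of that textbook MIS procedure, in which each undecided node draws a random value and joins the independent set if it beats all undecided neighbors. It is not one of the paper's ruling-set algorithms; `luby_mis` is a name chosen for this sketch.

```python
# A round-based sketch of Luby's classical MIS algorithm (the baseline discussed
# in the abstract), not the paper's ruling-set algorithms.
import random

def luby_mis(adj, rng=random.Random(0)):
    """adj: dict node -> set of neighbors. Returns a maximal independent set."""
    undecided = set(adj)
    mis = set()
    while undecided:
        r = {v: rng.random() for v in undecided}
        winners = {v for v in undecided
                   if all(r[v] < r[u] for u in adj[v] if u in undecided)}
        mis |= winners
        # winners and their neighbors drop out of the remaining graph
        undecided -= winners
        undecided -= {u for v in winners for u in adj[v]}
    return mis

# usage: a 3x3 grid graph
nodes = [(i, j) for i in range(3) for j in range(3)]
adj = {v: {(v[0] + dx, v[1] + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
           if (v[0] + dx, v[1] + dy) in nodes} for v in nodes}
print(luby_mis(adj))
```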
2016
- Fast Distributed Algorithms for Connectivity and MST in Large Graphs [DOI]
Gopal Pandurangan, Peter Robinson, Michele Scquizzato. 28th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2016).
AbstractMotivated by the increasing need to understand the algorithmic foundations of distributed large-scale graph computations, we study a number of fundamental graph problems in a message-passing model for distributed computing where $k \geq 2$ machines jointly perform computations on graphs with $n$ nodes (typically, $n \gg k$). The input graph is assumed to be initially randomly partitioned among the $k$ machines, a common implementation in many real-world systems. Communication is point-to-point, and the goal is to minimize the number of communication rounds of the computation. Our main result is an (almost) optimal distributed randomized algorithm for graph connectivity. Our algorithm runs in $\tilde{O}(n/k^2)$ rounds ($\tilde{O}$ notation hides a $\text{polylog}(n)$ factor and an additive $\text{polylog}(n)$ term). This improves over the best previously known bound of $\tilde{O}(n/k)$ [Klauck et al., SODA 2015], and is optimal (up to a polylogarithmic factor) in view of an existing lower bound of $\tilde{\Omega}(n/k^2)$. Our improved algorithm uses a bunch of techniques, including linear graph sketching, that prove useful in the design of efficient distributed graph algorithms. We then present fast randomized algorithms for computing minimum spanning trees, (approximate) min-cuts, and for many graph verification problems. All these algorithms take $\tilde{O}(n/k^2)$ rounds, and are optimal up to polylogarithmic factors. We also show an almost matching lower bound of $\tilde{\Omega}(n/k^2)$ for many graph verification problems using lower bounds in random-partition communication complexity. - Efficient Computation of Sparse StructuresDOI
David G. Harris, Ehab Morsy, Gopal Pandurangan, Peter Robinson, Aravind Srinivasan. Random Structures & Algorithms (RSA).
Abstract: Basic graph structures such as maximal independent sets (MIS's) have spurred much theoretical research in randomized and distributed algorithms, and have several applications in networking and distributed computing as well. However, the extant (distributed) algorithms for these problems do not necessarily guarantee fault-tolerance or load-balance properties. We propose and study "low-average degree" or "sparse" versions of such structures. Interestingly, in sharp contrast to, say, MIS's, it can be shown that checking whether a structure is sparse will take substantial time. Nevertheless, we are able to develop good sequential/distributed (randomized) algorithms for such sparse versions. We also complement our algorithms with several lower bounds. Randomization plays a key role in our upper and lower bound results.
- DEX: Self-Healing Expanders [DOI]
Gopal Pandurangan, Peter Robinson, Amitabh Trehan. Distributed Computing (DC).
AbstractWe present a fully-distributed self-healing algorithm DEX, that maintains a constant degree expander network in a dynamic setting. To the best of our knowledge, our algorithm provides the first efficient distributed construction of expanders --- whose expansion properties hold deterministically --- that works even under an all-powerful adaptive adversary that controls the dynamic changes to the network (the adversary has unlimited computational power and knowledge of the entire network state, can decide which nodes join and leave and at what time, and knows the past random choices made by the algorithm). Previous distributed expander constructions typically provide only probabilistic guarantees on the network expansion which rapidly degrade in a dynamic setting; in particular, the expansion properties can degrade even more rapidly under adversarial insertions and deletions. Our algorithm provides efficient maintenance and incurs a low overhead per insertion/deletion by an adaptive adversary: only $O(\log n)$ rounds and $O(\log n)$ messages are needed with high probability ($n$ is the number of nodes currently in the network). The algorithm requires only a constant number of topology changes. Moreover, our algorithm allows for an efficient implementation and maintenance of a distributed hash table (DHT) on top of DEX, with only a constant additional overhead. Our results are a step towards implementing efficient self-healing networks that have guaranteed properties (constant bounded degree and expansion) despite dynamic changes. - Distributed Algorithmic Foundations of Dynamic Networks
John Augustine, Gopal Pandurangan, Peter Robinson. SIGACT News Distributed Computing Column 1/2016 (SIGACT).
2015
- Enabling Efficient and Robust Distributed Computation in Highly Dynamic Networks [DOI]
John Augustine, Gopal Pandurangan, Peter Robinson, Scott Roche, Eli Upfal. 56th Annual IEEE Symposium on Foundations of Computer Science (FOCS 2015).
AbstractMotivated by the need for designing efficient and robust fully-distributed computation in highly dynamic networks such as Peer-to-Peer (P2P) networks, we study distributed protocols for constructing and maintaining dynamic network topologies with good expansion properties. Our goal is to maintain a sparse (bounded degree) expander topology despite heavy churn (i.e., nodes joining and leaving the network continuously over time). We assume that the churn is controlled by an adversary that has complete knowledge and control of what nodes join and leave and at what time and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Our main contribution is a randomized distributed protocol that guarantees with high probability the maintenance of a constant degree graph with high expansion even under continuous high adversarial churn. Our protocol can tolerate a churn rate of up to $O(n/\text{polylog}(n))$ per round (where $n$ is the stable network size). Our protocol is efficient, lightweight, and scalable, and it incurs only $O(\text{polylog}(n))$ overhead for topology maintenance: only polylogarithmic (in $n$) bits needs to be processed and sent by each node per round and any node's computation cost per round is also polylogarithmic. The given protocol is a fundamental ingredient that is needed for the design of efficient fully-distributed algorithms for solving fundamental distributed computing problems such as agreement, leader election, search, and storage in highly dynamic P2P networks and enables fast and scalable algorithms for these problems that can tolerate a large amount of churn. - On the Complexity of Universal Leader ElectionArXivDOI
Shay Kutten, Gopal Pandurangan, David Peleg, Peter Robinson, Amitabh Trehan. Journal of the ACM, vol. 62(1), 7:1-7:27 (JACM).
AbstractElecting a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This paper focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most ''obvious'' complexity bounds have not been proven for randomized algorithms. The ``obvious'' lower bounds of $\Omega(m)$ messages ($m$ is the number of edges in the network) and $\Omega(D)$ time ($D$ is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results that show that even $\Omega(n)$ ($n$ is the number of nodes in the network) is not a lower bound on the messages in complete networks, make the above bounds somewhat less obvious). To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms (except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously, and that $D$ and $n$ were not known). We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (such algorithms should work for all graphs), apply to every $D$, $m$, and $n$, and hold even if $D$, $m$, and $n$ are known, all the nodes wake up simultaneously, and the algorithms can make any use of node's identities. To show that these bounds are tight, we present an $O(m)$ messages algorithm. An $O(D)$ time algorithm is known. An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting). We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade-off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks. - Distributed Computation of Large-scale Graph ProblemsArXivDOI
Hartmut Klauck, Danupon Nanongkai, Gopal Pandurangan, Peter Robinson. 26th ACM-SIAM Symposium on Discrete Algorithms (SODA 2015).
AbstractMotivated by the increasing need for fast distributed processing of large-scale graphs such as the Web graph and various social networks, we study a number of fundamental graph problems in the message-passing model, where we have $k$ machines that jointly perform computation on an arbitrary $n$-node (typically, $n \gg k$) input graph. The graph is assumed to be randomly partitioned among the $k \geq 2$ machines (a common implementation in many real world systems). The communication is point-to-point, and the goal is to minimize the time complexity, i.e., the number of communication rounds, of solving various fundamental graph problems. We present lower bounds that quantify the fundamental time limitations of distributively solving graph problems. We first show a lower bound of $\Omega(n/k)$ rounds for computing a spanning tree (ST) of the input graph. This result also implies the same bound for other fundamental problems such as computing a minimum spanning tree (MST), breadth-first tree (BFS), and shortest paths tree (SPT). We also show an $\Omega(n/k^2)$ lower bound for connectivity, ST verification and other related problems. Our lower bounds develop and use new bounds in random-partition communication complexity. To complement our lower bounds, we also give algorithms for various fundamental graph problems, e.g., PageRank, MST, connectivity, ST verification, shortest paths, cuts, spanners, covering problems, densest subgraph, subgraph isomorphism, finding triangles, etc. We show that problems such as PageRank, MST, connectivity, and graph covering can be solved in $\tilde{O}(n/k)$ time (the notation $\tilde O$ hides $\text{polylog}(n)$ factors and an additive $\text{polylog}(n)$ term); this shows that one can achieve almost linear (in $k$) speedup, whereas for shortest paths, we present algorithms that run in $\tilde{O}(n/\sqrt{k})$ time (for $(1+\epsilon)$-factor approximation) and in $\tilde{O}(n/k)$ time (for $O(\log n)$-factor approximation) respectively. Our results are a step towards understanding the complexity of distributively solving large-scale graph problems. - Fast Byzantine Leader Election in Dynamic Networks
John Augustine, Gopal Pandurangan, Peter Robinson. 29th International Symposium on Distributed Computing (DISC 2015).
AbstractMotivated by robust, secure, and efficient distributed computation in Peer-to-Peer (P2P) networks, we study fundamental Byzantine problems in dynamic networks where the topology can change from round to round and nodes can also experience heavy churn (i.e., nodes can join and leave the network continuously over time). We assume the full information model where the Byzantine nodes have complete knowledge about the entire state of network at every round (including random choices made by all the nodes), have unbounded computational power and can deviate arbitrarily from the protocol. The churn is controlled by an adversary that has complete knowledge and control of what nodes join and leave and at what time and also may rewire the topology in every round and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Byzantine protocols for fundamental distributed computing problems such as agreement and leader election have been studied extensively for the last three decades in static networks; however, these solutions do not work in dynamic networks which characterize many real-world networks such as P2P networks. Our main contribution is an $O(\log^3 n)$ round algorithm that achieves Byzantine leader election under the presence of up to $O({n}^{1/2 - \epsilon})$ Byzantine nodes (for a small constant $\epsilon > 0$) and a churn of up to $O(\sqrt{n}/\text{poly}\log(n))$ nodes per round (where $n$ is the stable network size). The algorithm elects a leader with probability at least $1-n^{-\Omega(1)}$ and guarantees that it is an honest node with probability at least $1-n^{-\Omega(1)}$; assuming the algorithm succeeds, the leader's identity will be known to a $1-o(1)$ fraction of the honest nodes. Our algorithm is fully-distributed, localized (does not require any global knowledge), lightweight, and is simple to implement. It is also scalable, as it runs in polylogarithmic time and requires nodes to send and receive messages of only polylogarithmic size per round. To the best of our knowledge, our algorithm is the first scalable solution for Byzantine leader election in a dynamic network with a high rate of churn; our protocol can also be used to solve Byzantine agreement in a straightforward way. We also show how to implement an (almost-everywhere) public coin with constant bias in a dynamic network with Byzantine nodes and provide a mechanism for enabling honest nodes to store information reliably in the network, which might be of independent interest. In decentralized and dynamic P2P systems where a substantial part of the network may be controlled by malicious nodes, the presented algorithm and techniques can serve as building blocks for designing robust and secure distributed protocols. - Gracefully Degrading Consensus and k-Set Agreement in Directed Dynamic NetworksArXivDOI
Martin Biely, Peter Robinson, Ulrich Schmid, Manfred Schwarz, Kyrill Winkler. 3rd International Conference on Networked Systems (NETYS 2015).
AbstractWe present the first consensus/k-set agreement algorithm for synchronous dynamic networks with unidirectional links, controlled by an omniscient message adversary, which automatically adapts to the actual network properties in a run: If the network is sufficiently well-connected, it solves consensus, while it degrades gracefully to general k-set agreement in less well-connected communication graphs. The actual number k of system-wide decision values is determined by the number of certain vertex-stable root components occurring in a run, which are strongly connected components without incoming links from outside. Related impossibility results reveal that our condition is reasonably close to the solvability border for k-set agreement.
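To make the notion of a root component concrete, the snippet below computes, for a static directed graph, the strongly connected components that have no incoming edge from outside, which is the structural object behind the abstract's vertex-stable root components. The paper's setting is a dynamic per-round communication graph; this static computation (and the name `root_components`) only illustrates the definition.

```python
# A static-graph sketch of "root components": strongly connected components with
# no incoming edges from outside. The paper's setting is dynamic; this only
# illustrates the definition on a fixed directed graph.
def root_components(nodes, edges):
    """Return the SCCs of (nodes, edges) that have no incoming edge from outside."""
    # forward reachability via a simple fixpoint (fine for small illustration graphs)
    reach = {v: {v} for v in nodes}
    changed = True
    while changed:
        changed = False
        for u, v in edges:
            new = reach[v] - reach[u]
            if new:
                reach[u] |= new
                changed = True
    # SCC of v = nodes that reach v and are reached by v
    sccs = {frozenset(w for w in nodes if v in reach[w] and w in reach[v]) for v in nodes}
    roots = []
    for scc in sccs:
        if not any(u not in scc and v in scc for u, v in edges):
            roots.append(set(scc))
    return roots

# usage: one root component {0, 1} feeding into {2}, so k would be 1 here
print(root_components([0, 1, 2], [(0, 1), (1, 0), (1, 2)]))
```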
2014
- DEX: Self-Healing Expanders [ArXiv] [DOI]
Gopal Pandurangan, Peter Robinson, Amitabh Trehan. 28th IEEE International Parallel and Distributed Processing Symposium (IPDPS 2014).
AbstractWe present a fully-distributed self-healing algorithm DEX, that maintains a constant degree expander network in a dynamic setting. To the best of our knowledge, our algorithm provides the first efficient distributed construction of expanders --- whose expansion properties hold deterministically --- that works even under an all-powerful adaptive adversary that controls the dynamic changes to the network (the adversary has unlimited computational power and knowledge of the entire network state, can decide which nodes join and leave and at what time, and knows the past random choices made by the algorithm). Previous distributed expander constructions typically provide only probabilistic guarantees on the network expansion which rapidly degrade in a dynamic setting; in particular, the expansion properties can degrade even more rapidly under adversarial insertions and deletions. Our algorithm provides efficient maintenance and incurs a low overhead per insertion/deletion by an adaptive adversary: only $O(\log n)$ rounds and $O(\log n)$ messages are needed with high probability ($n$ is the number of nodes currently in the network). The algorithm requires only a constant number of topology changes. Moreover, our algorithm allows for an efficient implementation and maintenance of a distributed hash table (DHT) on top of DEX, with only a constant additional overhead. Our results are a step towards implementing efficient self-healing networks that have guaranteed properties (constant bounded degree and expansion) despite dynamic changes. - Distributed Symmetry Breaking in HypergraphsArXivDOI
Shay Kutten, Danupon Nanongkai, Gopal Pandurangan, Peter Robinson. 28th International Symposium on Distributed Computing (DISC 2014).
AbstractFundamental local symmetry breaking problems such as Maximal Independent Set (MIS) and coloring have been recognized as important by the community, and studied extensively in (standard) graphs. In particular, fast (i.e., logarithmic run time) randomized algorithms are well-established for MIS and $\Delta +1$-coloring in both the LOCAL and CONGEST distributed computing models. On the other hand, comparatively much less is known on the complexity of distributed symmetry breaking in hypergraphs. In particular, a key question is whether a fast (randomized) algorithm for MIS exists for hypergraphs. In this paper, we study the distributed complexity of symmetry breaking in hypergraphs by presenting distributed randomized algorithms for a variety of fundamental problems under a natural distributed computing model for hypergraphs. We first show that MIS in hypergraphs (of arbitrary dimension) can be solved in $O(\log^2 n)$ rounds ($n$ is the number of nodes of the hypergraph) in the LOCAL model. We then present a key result of this paper --- an $O(\Delta^{\epsilon}\text{poly} \log n)$-round hypergraph MIS algorithm in the CONGEST model where $\Delta$ is the maximum node degree of the hypergraph and $\epsilon > 0$ is any arbitrarily small constant. To demonstrate the usefulness of hypergraph MIS, we present applications of our hypergraph algorithm to solving problems in (standard) graphs. In particular, the hypergraph MIS yields fast distributed algorithms for the balanced minimal dominating set problem (left open in Harris et al. [ICALP 2013]) and the minimal connected dominating set problem. We also present distributed algorithms for coloring, maximal matching, and maximal clique in hypergraphs. Our work shows that while some local symmetry breaking problems such as coloring can be solved in polylogarithmic rounds in both the LOCAL and CONGEST models, for many other hypergraph problems such as MIS, hitting set, and maximal clique, it remains challenging to obtain polylogarithmic time algorithms in the CONGEST model. This work is a step towards understanding this dichotomy in the complexity of hypergraph problems as well as using hypergraphs to design fast distributed algorithms for problems in (standard) graphs. - Sublinear Bounds for Randomized Leader ElectionArXivDOI
Shay Kutten, Gopal Pandurangan, David Peleg, Peter Robinson, Amitabh Trehan. Special Issue of Theoretical Computer Science, Elsevier. (TCS).
AbstractThis paper concerns randomized leader election in synchronous distributed networks. A distributed leader election algorithm is presented for complete $n$-node networks that runs in $O(1)$ rounds and (with high probability) uses only $O(\sqrt{n}\log^{3/2} n)$ messages to elect a unique leader (with high probability). When considering the ''explicit'' variant of leader election where eventually every node knows the identity of the leader, our algorithm yields the asymptotically optimal bounds of $O(1)$ rounds and $O(n)$ messages. This algorithm is then extended to one solving leader election on any connected non-bipartite $n$-node graph $G$ in $O(\tau(G))$ time and $O(\tau(G)\sqrt{n}\log^{3/2} n)$ messages, where $\tau(G)$ is the mixing time of a random walk on $G$. The above result implies highly efficient (sublinear running time and messages) leader election algorithms for networks with small mixing times, such as expanders and hypercubes. In contrast, previous leader election algorithms had at least linear message complexity even in complete graphs. Moreover, super-linear message lower bounds are known for time-efficient deterministic leader election algorithms. Finally, we present an almost matching lower bound for randomized leader election, showing that $\Omega(\sqrt{n})$ messages are needed for any leader election algorithm that succeeds with probability at least $1/e + \epsilon$, for any small constant $\epsilon > 0$. We view our results as a step towards understanding the randomized complexity of leader election in distributed networks. - Distributed Agreement in Dynamic Peer-to-Peer NetworksArXivDOI
John Augustine, Gopal Pandurangan, Peter Robinson, Eli Upfal. Journal of Computer and System Sciences, Elsevier. (JCSS).
- The Generalized Loneliness Detector and Weak System Models for k-Set Agreement [ArXiv] [DOI]
Martin Biely, Peter Robinson, Ulrich Schmid. IEEE Transactions on Parallel and Distributed Systems, vol. 25(4), 1078-1088 (IEEE TPDS).
AbstractThis paper presents two weak partially synchronous system models MAnti[n-k] and MSink[n-k], which are just strong enough for solving $k$-set agreement: We introduce the generalized $(n-k)$-loneliness failure detector $\mathcal{L}(k)$, which we first prove to be sufficient for solving $k$-set agreement, and show that $\mathcal{L}(k)$ but not $\mathcal{L}(k-1)$ can be implemented in both models. MAnti[n-k] and MSink[n-k] are hence the first message passing models that lie between models where $\Omega$ (and therefore consensus) can be implemented and the purely asynchronous model. We also address $k$-set agreement in anonymous systems, that is, in systems where (unique) process identifiers are not available. Since our novel $k$-set agreement algorithm using $\mathcal{L}(k)$ also works in anonymous systems, it turns out that the loneliness failure detector $\mathcal{L}=\mathcal{L}(n-1)$ introduced by Delporte et al. is also the weakest failure detector for set agreement in anonymous systems. Finally, we analyze the relationship between $\mathcal{L}(k)$ and other failure detectors suitable for solving $k$-set agreement.
2013
- Sublinear Bounds for Randomized Leader Election [ArXiv] [DOI]
Shay Kutten, Gopal Pandurangan, David Peleg, Peter Robinson, Amitabh Trehan. 14th International Conference on Distributed Computing and Networking (ICDCN 2013). Best Paper Award.
AbstractThis paper concerns randomized leader election in synchronous distributed networks. A distributed leader election algorithm is presented for complete n-node networks that runs in $O(1)$ rounds and (with high probability) takes only $O(\sqrt{n}\log^{3/2}n)$ messages to elect a unique leader (with high probability). This algorithm is then extended to solve leader election on any connected non-bipartite n-node graph $G$ in $O(\tau(G))$ time and $O(\tau(G)\sqrt{n}\log^{3/2}n)$ messages, where $\tau(G)$ is the mixing time of a random walk on $G$. The above result implies highly efficient (sublinear running time and messages) leader election algorithms for networks with small mixing times, such as expanders and hypercubes. In contrast, previous leader election algorithms had at least linear message complexity even in complete graphs. Moreover, super-linear message lower bounds are known for time-efficient deterministic leader election algorithms. Finally, an almost-tight lower bound is presented for randomized leader election, showing that $\Omega(\sqrt{n})$ messages are needed for any $O(1)$ time leader election algorithm which succeeds with high probability. It is also shown that $\Omega(n^{1/3})$ messages are needed by any leader election algorithm that succeeds with high probability, regardless of the number of the rounds. We view our results as a step towards understanding the randomized complexity of leader election in distributed networks. - Efficient Computation of Balanced StructuresArXivDOI
David G. Harris, Ehab Morsy, Gopal Pandurangan, Peter Robinson, Aravind Srinivasan. 40th International Colloquium on Automata, Languages and Programming (ICALP 2013).
Abstract: Basic graph structures such as maximal independent sets (MIS's) have spurred much theoretical research in distributed algorithms, and have several applications in networking and distributed computing as well. However, the extant (distributed) algorithms for these problems do not necessarily guarantee fault-tolerance or load-balance properties: for example, in a star graph, the central vertex as well as the set of leaves are both MIS's, with the latter being much more fault-tolerant and balanced; existing distributed algorithms do not handle this distinction. We propose and study "low-average degree" or "balanced" versions of such structures. Interestingly, in sharp contrast to, say, MIS's, it can be shown that checking whether a structure is balanced will take substantial time. Nevertheless, we are able to develop good sequential and distributed algorithms for such "balanced" versions. We also complement our algorithms with lower bounds.
- On the Complexity of Universal Leader Election [ArXiv] [DOI]
Shay Kutten, Gopal Pandurangan, David Peleg, Peter Robinson, Amitabh Trehan. 32nd ACM Symposium on Principles of Distributed Computing (PODC 2013).
AbstractElecting a leader is a fundamental task in distributed computing. In its implicit version, only the leader must know who is the elected leader. This paper focuses on studying the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most ''obvious'' complexity bounds have not been proven for randomized algorithms. The ``obvious'' lower bounds of $\Omega(m)$ messages ($m$ is the number of edges in the network) and $\Omega(D)$ time ($D$ is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results that show that even $\Omega(n)$ ($n$ is the number of nodes in the network) is not a lower bound on the messages in complete networks, make the above bounds somewhat less obvious). To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms (except for the limited case of comparison algorithms, where it was also required that some nodes may not wake up spontaneously, and that $D$ and $n$ were not known). We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (such algorithms should work for all graphs), apply to every $D$, $m$, and $n$, and hold even if $D$, $m$, and $n$ are known, all the nodes wake up simultaneously, and the algorithms can make any use of node's identities. To show that these bounds are tight, we present an $O(m)$ messages algorithm. An $O(D)$ time algorithm is known. An interesting fundamental problem is whether both upper bounds (messages and time) can be reached simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting). We answer this problem partially by presenting a randomized algorithm that matches both complexities in some cases. This already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade-off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks. - Fast Byzantine Agreement in Dynamic NetworksArXivDOI
John Augustine, Gopal Pandurangan, Peter Robinson. 32nd ACM Symposium on Principles of Distributed Computing (PODC 2013).
AbstractWe study Byzantine agreement in dynamic networks where topology can change from round to round and nodes can also experience heavy churn (i.e., nodes can join and leave the network continuously over time). Our main contributions are randomized distributed algorithms that guarantee almost-everywhere Byzantine agreement with high probability even under a large number of Byzantine nodes and continuous adversarial churn in a number of rounds that is polylogarithmic in $n$ (where $n$ is the stable network size). We show that our algorithms are essentially optimal (up to polylogarithmic factors) with respect to the amount of Byzantine nodes and churn rate that they can tolerate by showing lower bound. In particular, we present the following results: \begin{enumerate} \item An $O(\log^3 n)$ round randomized algorithm that achieves almost-everywhere Byzantine agreement with high probability under a presence of up to $O(\sqrt{n}/\text{polylog}(n))$ Byzantine nodes and up to a churn of $O(\sqrt{n}/\text{polylog}(n))$ nodes per round. We assume that the Byzantine nodes have knowledge about the entire state of network at every round (including random choices made by all the nodes) and can behave arbitrarily. We also assume that an adversary controls the churn --- it has complete knowledge and control of what nodes join and leave and at what time and has unlimited computational power (but is oblivious to the topology changes from round to round). Our algorithm requires only polylogarithmic in $n$ bits to be processed and sent (per round) by each node. \item We also present an $O(\log^3 n)$ round randomized algorithm that has same guarantees as the above algorithm, but works even when the churn and network topology is controlled by an adaptive adversary (that can choose the topology based on the current states of the nodes). However, this algorithm requires up to polynomial in $n$ bits to be processed and sent (per round) by each node. \item We show that the above bounds are essentially the best possible, if one wants fast (i.e., polylogarithmic run time) algorithms, by showing that any (randomized) algorithm to achieve agreement in a dynamic network controlled by an adversary that can churn up to $\Theta(\sqrt{ n \log n})$ nodes per round should take at least a polynomial number of rounds. \end{enumerate} Our algorithms are the first-known, fully-distributed, Byzantine agreement algorithms in highly dynamic networks. We view our results as a step towards understanding the possibilities and limitations of highly dynamic networks that are subject to malicious behavior by a large number of nodes. - Search and Storage in Dynamic Peer-to-Peer NetworksArXivDOI
John Augustine, Anisur Molla, Ehab Morsy, Gopal Pandurangan, Peter Robinson, Eli Upfal. 25th ACM Symposium on Parallelism in Algorithms and Architectures (SPAA 2013).
AbstractWe study robust and efficient distributed algorithms for searching, storing, and maintaining data in dynamic Peer-to-Peer (P2P) networks. P2P networks are highly dynamic networks that experience heavy node churn (i.e., nodes join and leave the network continuously over time). Our goal is to guarantee, despite a high node churn rate, that a large number of nodes in the network can store, retrieve, and maintain a large number of data items. Our main contributions are fast randomized distributed algorithms that guarantee the above with high probability even under high adversarial churn. In particular, we present the following main results: \begin{enumerate} \item A randomized distributed search algorithm that with high probability guarantees that searches from as many as $n - o(n)$ nodes ($n$ is the stable network size) succeed in ${O}(\log n)$ rounds despite ${O}(n/\log^{1+\delta} n)$ churn per round, for any small constant $\delta > 0$. We assume that the churn is controlled by an oblivious adversary (that has complete knowledge and control of what nodes join and leave and at what time and has unlimited computational power, but is oblivious to the random choices made by the algorithm). \item A storage and maintenance algorithm that guarantees, with high probability, that data items can be efficiently stored (with only $\Theta(\log{n})$ copies of each data item) and maintained in a dynamic P2P network with a churn rate of up to ${O}(n/\log^{1+\delta} n)$ per round. Our search algorithm together with our storage and maintenance algorithm guarantees that as many as $n - o(n)$ nodes can efficiently store, maintain, and search even under ${O}(n/\log^{1+\delta} n)$ churn per round. Our algorithms require only a polylogarithmic (in $n$) number of bits to be processed and sent (per round) by each node. \end{enumerate} To the best of our knowledge, our algorithms are the first-known, fully-distributed storage and search algorithms that provably work in highly dynamic settings (i.e., with high churn rates per step). Furthermore, they are localized (i.e., they do not require any global topological knowledge) and scalable. A technical contribution of this paper, which may be of independent interest, is showing how random walks can be provably used to derive scalable distributed algorithms in dynamic networks with adversarial node churn. - Robust Leader Election in a Fast-Changing World
John Augustine, Tejas Kulkarni, Paresh Nakhe, Peter Robinson. 9th International Workshop on Foundations of Mobile Computing (FOMC 2013).
AbstractWe consider the problem of electing a leader among nodes in a highly dynamic network where the adversary has unbounded capacity to insert and remove nodes (including the leader) from the network and change connectivity at will. We present a randomized algorithm that (re)elects a leader in $O(D\log n)$ rounds with high probability, where $D$ is a bound on the dynamic diameter of the network and $n$ is the maximum number of nodes in the network at any point in time. We assume a model of broadcast-based communication where a node can send only $1$ message of $O(\log n)$ bits per round and is not aware of the receivers in advance. Thus our results also apply to mobile wireless ad-hoc networks, improving over the optimal (for deterministic algorithms) $O(Dn)$ solution presented at FOMC 2011. We show that our algorithm is optimal by proving that any randomized algorithm takes at least $\Omega(D\log n)$ rounds to elect a leader with high probability, which shows that our algorithm yields the best possible (up to constants) termination time.
2012
- Towards Robust and Efficient Computation in Dynamic Peer-to-Peer NetworksArXivDOI
John Augustine, Gopal Pandurangan, Peter Robinson, Eli Upfal. 23rd ACM-SIAM Symposium on Discrete Algorithms (SODA 2012).
AbstractMotivated by the need for robust and fast distributed computation in highly dynamic Peer-to-Peer (P2P) networks, we study algorithms for the fundamental distributed agreement problem. P2P networks are highly dynamic networks that experience heavy node churn (i.e., nodes join and leave the network continuously over time). Our goal is to design fast algorithms (running in a small number of rounds) that guarantee, despite a high node churn rate, that almost all nodes reach a stable agreement. Our main contributions are randomized distributed algorithms that guarantee stable almost-everywhere agreement with high probability, even under high adversarial churn, in a polylogarithmic number of rounds. In particular, we present the following results: \begin{enumerate} \item An $O(\log^2 n)$-round ($n$ is the stable network size) randomized algorithm that achieves almost-everywhere agreement with high probability under up to linear churn per round (i.e., $\epsilon n$, for some small constant $\epsilon > 0$), assuming that the churn is controlled by an oblivious adversary (that has complete knowledge and control of what nodes join and leave and at what time and has unlimited computational power, but is oblivious to the random choices made by the algorithm). \item An $O(\log m\log^3 n)$-round randomized algorithm, where $m$ is the size of the input value domain, that achieves almost-everywhere agreement with high probability under up to $\epsilon \sqrt{n}$ churn per round (for some small $\epsilon > 0$) and works even under an adaptive adversary (which also knows the past random choices made by the algorithm). \item We also show that no deterministic algorithm can guarantee almost-everywhere agreement (regardless of the number of rounds), even under a constant churn rate. \end{enumerate} Our algorithms are the first-known, fully-distributed agreement algorithms that work in highly dynamic settings (i.e., with high churn rates per step). Furthermore, they are localized (i.e., they do not require any global topological knowledge), simple, and easy to implement. These algorithms can serve as building blocks for implementing other non-trivial distributed computing tasks in dynamic P2P networks. - Agreement in Directed Dynamic NetworksArXivDOI
Martin Biely, Peter Robinson, Ulrich Schmid. 19th International Colloquium on Structural Information and Communication Complexity (SIROCCO 2012).
AbstractWe study distributed computation in synchronous dynamic networks where an omniscient adversary controls the unidirectional communication links. Its behavior is modeled as a sequence of directed graphs representing the active (i.e. timely) communication links per round. We prove that consensus is impossible under some natural weak connectivity assumptions and introduce vertex-stable root components as a means for circumventing this impossibility. Essentially, we assume that there is a short period of time during which an arbitrary part of the network remains strongly connected, while its interconnect topology may keep changing continuously. We present a consensus algorithm that works under this assumption and prove its correctness. Our algorithm maintains a local estimate of the communication graphs and applies techniques for detecting stable network properties and univalent system configurations. Our possibility results are complemented by several impossibility results and lower bounds for consensus and other distributed computing problems like leader election, revealing that our algorithm is asymptotically optimal.
2011
- Optimal Regional Consecutive Leader Election in Mobile Ad-Hoc NetworksArXivDOI
Hyun Chul Chung, Peter Robinson, Jennifer L. Welch. 7th ACM SIGACT/SIGMOBILE International Workshop on Foundations of Mobile Computing (part of FCRC 2011).
AbstractThe regional consecutive leader election (RCLE) problem requires mobile nodes to elect a leader within bounded time upon entering a specific region. We prove that any algorithm requires $\Omega(Dn)$ rounds for leader election, where $D$ is the diameter of the network and $n$ is the total number of nodes. We then present a fault-tolerant distributed algorithm that solves the RCLE problem and works even in settings where nodes do not have access to synchronized clocks. Since nodes set their leader variable within $O(Dn)$ rounds, our algorithm is asymptotically optimal with respect to time complexity. Due to its low message bit complexity, we believe that our algorithm is of practical interest for mobile wireless ad-hoc networks. Finally, we present a novel and intuitive constraint on mobility that guarantees a bounded communication diameter among nodes within the region of interest. - Solving k-Set Agreement with Stable Skeleton GraphsArXivDOI
Martin Biely, Peter Robinson, Ulrich Schmid. 16th IEEE International Symposium on Parallel and Distributed Processing Workshops and PhD Forum (IPDPS 2011). - Easy Impossibility Proofs for k-Set Agreement in Message Passing SystemsArXivDOI
Martin Biely, Peter Robinson, Ulrich Schmid. 15th International Conference On Principles Of Distributed Systems (OPODIS 2011).
AbstractDespite being quite similar agreement problems, distributed consensus ($1$-set agreement) and general $k$-set agreement require surprisingly different techniques for proving their impossibility in asynchronous systems with crash failures: rather than the relatively simple bivalence arguments used in the impossibility proof for consensus in the presence of a single crash failure, known proofs for the impossibility of $k$-set agreement in shared memory systems with $f\geq k>1$ crash failures use algebraic topology or a variant of Sperner's Lemma. In this paper, we present a generic theorem for proving the impossibility of $k$-set agreement in various message passing settings, which is based on a reduction to the consensus impossibility in a certain subsystem resulting from a partitioning argument. We demonstrate the broad applicability of our result by exploring the possibility/impossibility border of $k$-set agreement in several message passing system models: (i) asynchronous systems with crash failures, (ii) partially synchronous processes with (initial) crash failures, and, most importantly, (iii) asynchronous systems augmented with failure detectors. Furthermore, by extending the algorithm for initial crashes of Fischer, Lynch, and Paterson (1985) to general $k$-set agreement, we show that the impossibility border of (i) is tightly matched. The impossibility proofs in cases (i), (ii), and (iii) are instantiations of our main theorem. Regarding (iii), applying our technique reveals the exact border for the parameter $k$ where $k$-set agreement is solvable with the failure detector class $(\Sigma_k,\Omega_k)_{1\le k\le n-1}$ of Bonnet and Raynal. As $\Sigma_k$ was shown to be necessary for solving $k$-set agreement, this result yields new insights into the quest for the weakest failure detector. - The Asynchronous Bounded-Cycle ModelArXivDOI
Peter Robinson and Ulrich Schmid. Theoretical Computer Science 412 (2011) 5580–5601. (TCS).
AbstractThis paper shows how synchrony conditions can be added to the purely asynchronous model in a way that avoids any reference to message delays and computing step times, as well as any global constraints on communication patterns and network topology. Our Asynchronous Bounded-Cycle (ABC) model just bounds the ratio of the number of forward- and backward-oriented messages in certain ''relevant'' cycles in the space-time diagram of an asynchronous execution. We show that clock synchronization and lock-step rounds can easily be implemented and proved correct in the ABC model, even in the presence of Byzantine failures. Furthermore, we prove that any algorithm working correctly in the partially synchronous $\Theta$-Model also works correctly in the ABC model. In our proof, we first apply a novel method for assigning certain message delays to asynchronous executions, which is based on a variant of Farkas' theorem of linear inequalities and a non-standard cycle-space of graphs. Using methods from point-set topology, we then prove that the existence of this delay assignment implies model indistinguishability for time-free safety and liveness properties. Finally, we introduce several weaker variants of the ABC model and relate our model to the existing partially synchronous system models, in particular, the classic models of Dwork, Lynch, and Stockmeyer. Furthermore, we discuss aspects of the ABC model's applicability in real systems, in particular, in the context of VLSI Systems-on-Chip.
2010
- Regional Consecutive Leader Election in Mobile Ad-Hoc Networks
Hyun Chul Chung, Peter Robinson, Jennifer L. Welch. 6th ACM SIGACT/SIGMOBILE Workshop on Foundations of Mobile Computing (DIALM-POMC 2010).
2009
- Weak Synchrony Models and Failure Detectors for Message Passing k-Set AgreementDOI
Martin Biely, Peter Robinson, Ulrich Schmid. 13th International Conference On Principles Of Distributed Systems (OPODIS 2009).
2008
- The Asynchronous Bounded-Cycle ModelDOI
Peter Robinson and Ulrich Schmid. 10th International Symposium on Stabilization, Safety, and Security of Distributed Systems (SSS 2008). Best Paper Award.
AbstractThis paper shows how synchrony conditions can be added to the purely asynchronous model in a way that avoids any reference to message delays and computing step times, as well as any global constraints on communication patterns and network topology. Our Asynchronous Bounded-Cycle (ABC) model just bounds the ratio of the number of forward- and backward-oriented messages in certain ''relevant'' cycles in the space-time diagram of an asynchronous execution. We show that clock synchronization and lock-step rounds can easily be implemented and proved correct in the ABC model, even in the presence of Byzantine failures. Furthermore, we prove that any algorithm working correctly in the partially synchronous $\Theta$-Model also works correctly in the ABC model. In our proof, we first apply a novel method for assigning certain message delays to asynchronous executions, which is based on a variant of Farkas' theorem of linear inequalities and a non-standard cycle-space of graphs. Using methods from point-set topology, we then prove that the existence of this delay assignment implies model indistinguishability for time-free safety and liveness properties. Finally, we introduce several weaker variants of the ABC model and relate our model to the existing partially synchronous system models, in particular, the classic models of Dwork, Lynch, and Stockmeyer. Furthermore, we discuss aspects of the ABC model's applicability in real systems, in particular, in the context of VLSI Systems-on-Chip.
2006
- Log File Processing by Machine Learning and Information Extraction
Peter Robinson. Master Thesis. TU Wien, Institute of Computer Languages, 2006. Nominated for Distinguished Young Alumnus Award.
AbstractIn today's computer network systems, large numbers of events are constantly written to log files. Unfortunately, there is no common standard defining the structure of these event messages, which are partly in human-readable natural-language form. This lack of structure makes automatic processing considerably more difficult. This master thesis describes the architecture and implementation of the LoP-System, a system that attempts to create machine-readable event structures from ordinary log-file events by means of natural language processing. The thesis covers both implementation details and the underlying theoretical concepts. The core of the system consists of a series of cascaded but independent components, partly enhanced with machine learning techniques. The raw input is first processed by a simple recursive descent parser which recognizes syntactical features (e.g. IP addresses) and is then passed on to a part-of-speech tagger based on a hidden Markov model. Regular-expression patterns are then applied to the tagged words to combine them into basic word groups (e.g. noun groups), which are subsequently analyzed semantically. The final step is the construction of the output events by a rule-based event constructor. All components are implemented in Haskell, a purely functional programming language. Some of the components developed during this thesis, especially the part-of-speech tagger, are general natural language processing tools and can be applied to other domains.
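For a flavor of the first pipeline stage, here is a minimal, hypothetical Haskell sketch (not code from the LoP-System): it scans a raw log line and tags tokens that look like IPv4 addresses before any further linguistic processing. The `Token` type and the `tokenize` function are illustrative assumptions.

```haskell
-- Hypothetical sketch only; this is not code from the LoP-System.
-- It mimics the first pipeline stage: scanning a raw log line and tagging
-- tokens that look like IPv4 addresses before further linguistic processing.
import Data.Char (isDigit)

data Token = IPv4 String | Word String
  deriving (Show, Eq)

-- Split a candidate token on '.' and check for four numeric octets in 0..255.
isIPv4 :: String -> Bool
isIPv4 s = case splitOn '.' s of
  parts@[_, _, _, _] -> all octet parts
  _                  -> False
  where
    octet o = not (null o) && length o <= 3 && all isDigit o && read o <= (255 :: Int)

splitOn :: Char -> String -> [String]
splitOn c s = case break (== c) s of
  (chunk, [])       -> [chunk]
  (chunk, _ : rest) -> chunk : splitOn c rest

-- Tag each whitespace-separated token of a log line.
tokenize :: String -> [Token]
tokenize = map tag . words
  where
    tag w | isIPv4 w  = IPv4 w
          | otherwise = Word w

main :: IO ()
main = print (tokenize "Failed login from 192.168.0.1")
-- [Word "Failed",Word "login",Word "from",IPv4 "192.168.0.1"]
```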
Code
I'm interested in parallel and distributed programming and related technologies such as software transactional memory and the actor model. Recently, I have been working on a simulation environment for distributed algorithms in Elixir/Erlang and on non-blocking data structures in Haskell suitable for multi-core machines. Below is a (non-comprehensive) list of software that I have written; two small illustrative sketches follow the list.
- concurrent hash table: a thread-safe hash table that scales to multicores.
- data dispersal: an implementation of an (m,n)-threshold information dispersal scheme that is space-optimal.
- secret sharing: an implementation of a secret sharing scheme that provides information-theoretic security.
- tskiplist: a data structure with range-query support for software transactional memory.
- stm-io-hooks: An extension of Haskell's Software Transactional Memory (STM) monad with commit and retry IO hooks.
- Mathgenealogy: Visualize your (academic) genealogy! A program for extracting data from the Mathematics Genealogy project.
- I extended Haskell's Cabal to use a "world" file for keeping track of installed packages. (Now part of the main distribution.)
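The first sketch relates to the tskiplist entry above. Rather than guess that package's actual API, this is a generic illustration written against the standard stm package only: the `Store` type (a `TVar`-wrapped `Data.Map`) is a stand-in assumption for a transactional skip list, and `rangeQuery` shows the kind of composable range query such a structure exposes inside STM.

```haskell
-- Generic illustration (not the tskiplist API): a TVar-wrapped ordered map
-- standing in for a transactional skip list, with a composable range query.
import Control.Concurrent.STM (STM, TVar, atomically, newTVarIO, readTVar, modifyTVar')
import qualified Data.Map.Strict as M

type Store k v = TVar (M.Map k v)

insert :: Ord k => Store k v -> k -> v -> STM ()
insert store k v = modifyTVar' store (M.insert k v)

-- All key/value pairs with lo <= k <= hi, usable inside a larger transaction.
rangeQuery :: Ord k => Store k v -> k -> k -> STM [(k, v)]
rangeQuery store lo hi = do
  m <- readTVar store
  pure [ (k, v) | (k, v) <- M.toAscList m, k >= lo, k <= hi ]

main :: IO ()
main = do
  store <- newTVarIO M.empty
  -- The insertions and the query compose into a single atomic transaction.
  hits <- atomically $ do
    insert store (3 :: Int) "c"
    insert store 7 "g"
    insert store 9 "i"
    rangeQuery store 3 8
  print hits  -- [(3,"c"),(7,"g")]
```

The point of keeping the container inside STM is exactly this composability: concurrent threads can combine several reads, writes, and range queries into one transaction without explicit locking.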
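The second sketch relates to the secret sharing entry. The package's interface may well differ from what is shown here, but one classical way to obtain information-theoretic security is $(t,n)$-threshold sharing over a prime field (Shamir's scheme). The modulus, the hard-coded polynomial coefficients, and all function names below are illustrative assumptions; a real implementation samples the non-constant coefficients uniformly at random.

```haskell
-- Illustrative Shamir-style (t,n)-threshold sharing over Z_p; not the package's API.
-- Any t shares reconstruct the secret; fewer reveal nothing about it.
import Data.List (foldl')

p :: Integer
p = 2147483647  -- a Mersenne prime used as the field modulus (assumption)

-- Evaluate the polynomial c0 + c1*x + c2*x^2 + ... (mod p) via Horner's rule.
evalPoly :: [Integer] -> Integer -> Integer
evalPoly coeffs x = foldr (\c acc -> (c + x * acc) `mod` p) 0 coeffs

-- Shares for parties 1..n, where coeffs = secret : (t-1) random coefficients.
shares :: [Integer] -> Int -> [(Integer, Integer)]
shares coeffs n = [ (xi, evalPoly coeffs xi) | xi <- map fromIntegral [1 .. n] ]

-- Modular inverse via Fermat's little theorem (p is prime).
inv :: Integer -> Integer
inv a = powMod a (p - 2)
  where
    powMod _ 0 = 1
    powMod b e
      | even e    = let h = powMod b (e `div` 2) in (h * h) `mod` p
      | otherwise = (b * powMod b (e - 1)) `mod` p

-- Lagrange interpolation at x = 0 recovers the constant term, i.e. the secret.
reconstruct :: [(Integer, Integer)] -> Integer
reconstruct pts = foldl' step 0 pts `mod` p
  where
    step acc (xi, yi) =
      let li = product [ (negate xj * inv (xi - xj)) `mod` p
                       | (xj, _) <- pts, xj /= xi ]
      in (acc + yi * li) `mod` p

main :: IO ()
main = do
  let secret = 42
      -- In a real scheme the two extra coefficients are sampled at random.
      allShares = shares [secret, 1234, 5678] 5  -- t = 3, n = 5
  print (reconstruct (take 3 allShares))         -- prints 42
```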
Teaching
- Mathematical Structures for CS, Fall 2022, Spring 2023.
- Computer Networks, Fall 2021, Fall 2020, 2019.
- Database Systems, Spring 2021, Spring 2020.
- Distributed Computing, Spring 2019.
- Randomized Algorithms, Fall 2018: Intro slides. Part 1 on Concentration Bounds.
- Advanced Distributed Systems, Fall 2016, 2017.
- Computation with Data, Fall 2016.
- Internet and Web Technologies, Spring 2016.
Misc
- Google Scholar profile
- My profile on Stack Exchange