~~NOCACHE~~
/* DO NOT EDIT THIS FILE */
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday December 12, 2017, 2PM, Salle 3052\\
**Jean Krivine** (IRIF) //Incremental Update for Graph Rewriting//
\\
Graph rewriting formalisms are well-established models for the representation of biological systems such as protein-protein interaction networks. The combinatorial complexity of these models usually prevents any explicit representation of the variables of the system, and one has to rely on stochastic simulations in order to sample the possible trajectories of the underlying Markov chain. The bottleneck of stochastic simulation algorithms is the update of the propensity function that describes the probability that a given rule is to be applied next. In this talk we present an algorithm based on a data structure, called extension basis, that can be used to update the counts of predefined graph observables after a rule of the model has been applied.
Reference: Boutillier P., Ehrhard T., Krivine J. (2017) Incremental Update for Graph Rewriting. In: Yang H. (eds) Programming Languages and Systems. ESOP 2017. Lecture Notes in Computer Science, vol 10201. Springer, Berlin, Heidelberg
Joint seminar of the Pôle Algorithmes et Structures Discrètes.
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday October 17, 2017, 2PM, Salle 3052\\
**Claire Mathieu** (DI - ENS) //Online k-compaction//
\\
Given, at each time t = 1, 2, …, n, a new file of length l(t) and a read rate r(t), an online k-compaction algorithm must maintain a collection of at most k files, choosing (at each time t, without knowing future inputs) some of the files to merge into one, thereby incurring a merge cost equal to the total length of the merged files and a read cost equal to the read rate r(t) times the number of files present at time t. The goal is to minimize the total cost over time. k-compaction algorithms are a key component of log-structured merge trees, the file-based data structure underlying NoSQL databases such as Accumulo, Bigtable, Cassandra, HBase, and others. We initiate the theoretical study of k-compaction algorithms. We formalize the problem, consider worst-case, average-case and competitive analysis (per-instance optimality), and propose new algorithms that are optimal according to these metrics.
This is joint work with Carl Staelin, Neal E. Young, and Arman Yousefi.
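The cost model above can be made concrete with a small sketch. This is a hypothetical illustration of the accounting only, using a naive merge-everything policy rather than any algorithm from the paper:

```python
# Illustrative cost accounting for online k-compaction, under a naive
# policy: whenever the collection would exceed k files, merge all files
# into one. Not the authors' algorithm.
def total_cost(files_and_rates, k):
    """files_and_rates: list of (length l(t), read rate r(t)) pairs."""
    files = []          # current file lengths, at most k of them
    cost = 0
    for length, rate in files_and_rates:
        files.append(length)
        if len(files) > k:          # must merge to respect the bound
            merged = sum(files)     # merge cost = total merged length
            cost += merged
            files = [merged]
        cost += rate * len(files)   # read cost at time t
    return cost
```

For instance, with k = 2 and three unit-length files arriving at unit read rate, this policy pays one read at time 1, two reads at time 2, then a merge of total length 3 plus one read at time 3.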
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday September 26, 2017, 2PM, Salle 1007\\
**Jara Uitto** (ETH Zurich) //Tight Lower Bounds for the Cops and Robbers Game//
\\
For the game of cops and robbers, it is known that in 1-cop-win graphs, the cop can capture the robber in O(n) time, and that there exist graphs in which this capture time is tight. When k ≥ 2, a simple counting argument shows that in k-cop-win graphs, the capture time is at most O(n^(k+1)); however, no non-trivial lower bounds were previously known. Indeed, in their 2011 book, Bonato and Nowakowski ask whether this upper bound can be improved. We answer the question of Bonato and Nowakowski in the negative, proving that the O(n^(k+1)) bound is asymptotically tight for any constant k ≥ 2. This yields a surprising gap in the capture time complexities between the 1-cop and the 2-cop cases.
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday September 12, 2017, 2PM, Salle 1007\\
**Mor Perry** //Aspects of Distributed Verification//
\\
In this talk, we focus on two different aspects of verification of boolean predicates over distributed networks. Given a network configuration, the proof-labeling scheme (PLS) model defines a distributed proof in the form of a label that is given to each node, and all nodes locally verify that the network configuration satisfies the desired boolean predicate by exchanging labels with their neighbors. The complexity measure of the scheme, a.k.a. the proof size, is defined to be the maximum size of a label.
1. Space-time tradeoffs for distributed verification, joint work with Rafail Ostrovsky and Will Rosenbaum. In this work, we introduce the notion of a t-PLS, which allows the verification procedure to run for super-constant time. Our work analyzes the tradeoffs of t-PLS between time, label size, message length, and computation space. We construct a universal t-PLS and prove that it uses the same amount of total communication as a known one-round universal PLS, with labels smaller by a factor of t. In addition, we provide a general technique to prove lower bounds for space-time tradeoffs of t-PLS. We use this technique to show an optimal tradeoff for testing that a network is acyclic (cycle free). Our optimal t-PLS for acyclicity uses proof size, message size and computation space O((log n)/t).
2. Approximate proof-labeling schemes, joint work with Keren Censor-Hillel and Ami Paz. In this work we extend the PLS model by defining the approximate PLS (APLS) model. In this new model, the predicates for verification are of the form f(G) ≤ f'(G), where f, f': F -> N for a family of configurations F and the set of natural numbers N. Informally, the predicates considered in this model are comparisons between two values of the configuration. As in the PLS model, nodes exchange labels in order to locally verify the predicate, and all must accept if the network satisfies the predicate. The soundness condition is relaxed with an approximation ratio alpha, so that some node must reject only if f(G) > alpha*f'(G). We show that in the APLS model, the proof size can be much smaller than the proof size of the same predicate in the PLS model. Moreover, we prove that there is a tradeoff between the approximation ratio and the proof size.
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday June 20, 2017, 2PM, Salle 1007\\
**Siddarth Gupta** //A Topological Algorithm for Determining How Road Networks Evolve Over Time//
\\
We provide an efficient algorithm for determining how a road network has evolved over time, given two snapshot instances from different dates. To allow for such determinations across different databases and even against hand drawn maps, we take a strictly topological approach in this paper, so that we compare road networks based strictly on graph-theoretic properties. Given two road networks of same region from two different dates, our approach allows one to match road network portions that remain intact and also point out added or removed portions. We analyze our algorithm both theoretically, showing that it runs in polynomial time for non-degenerate road networks even though a related problem is NP-complete, and experimentally, using dated road networks from the TIGER/Line archive of the U.S. Census Bureau.
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday June 13, 2017, 2PM, Salle 1007\\
**Afshin Behmaram** //Matching and covering in cubic graphs//
\\
In this talk, I will first present some theorems and observations about matchings in cubic graphs and about covering the edges of cubic graphs by perfect matchings. Then I will present some interesting open problems in this area.
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday May 2, 2017, 2PM, Salle 1007\\
**Pierre Aboulker** (ULB) //From chromatic number to dichromatic number//
\\
A k-dicolouring of a digraph is a partition of its vertex set into k acyclic subdigraphs. The dichromatic number of a digraph D is the minimum k such that D has a k-dicolouring.
We first give some properties of the dichromatic number in order to show why and how it generalizes the chromatic number of undirected graphs. Then we investigate the following questions: what can we say about subgraphs, induced subgraphs and topological minors of a digraph with large dichromatic number?
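As a small illustration of the definition, here is a sketch (names are illustrative, not from the talk) that checks whether a given partition is a valid k-dicolouring, i.e., whether each colour class induces an acyclic subdigraph:

```python
from collections import defaultdict

def is_dicolouring(vertices, arcs, colour):
    """arcs: set of (u, v) pairs; colour: dict vertex -> colour class.
    Returns True iff every colour class induces an acyclic subdigraph."""
    classes = defaultdict(set)
    for v in vertices:
        classes[colour[v]].add(v)
    for cls in classes.values():
        # Kahn-style topological check on the induced subdigraph
        indeg = {v: 0 for v in cls}
        out = defaultdict(list)
        for u, v in arcs:
            if u in cls and v in cls:
                indeg[v] += 1
                out[u].append(v)
        queue = [v for v in cls if indeg[v] == 0]
        seen = 0
        while queue:
            u = queue.pop()
            seen += 1
            for w in out[u]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    queue.append(w)
        if seen != len(cls):
            return False  # a directed cycle survives in this class
    return True
```

For example, a directed 2-cycle admits no 1-dicolouring but is 2-dicolourable, matching the fact that its dichromatic number is 2.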
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday April 25, 2017, 2PM, Salle 1007\\
**Mike Molloy** (University of Toronto and Ecole Normale Superieure Paris) //Entropy Compression and the Lovasz Local Lemma//
\\
The Lovasz Local Lemma, a cornerstone of the probabilistic method, is a powerful and widely used proof technique. In 2009, Moser introduced a technique called entropy compression to provide efficient algorithms which construct objects that the Local Lemma guarantees to exist. Recently, entropy compression has been used to develop more powerful versions of the Local Lemma which provide existence proofs in settings where the original Local Lemma does not apply. I will illustrate this technique with applications to graph colouring: (a) colouring triangle-free graphs, and (b) frugal colouring, where no colour can appear too many times in any neighbourhood.
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday April 18, 2017, 2PM, Salle 1007\\
**Mikael Rabie** (LIX) //Time and Homonyms Considerations over Community Protocols//
\\
Guerraoui and Ruppert introduced the model of Community Protocols. In this distributed system, agents have a finite memory and unique identifiers (the set of identifiers being ordered). Each agent can store a finite number of identifiers it has heard about. The interactions are asynchronous and pairwise, following a fair scheduler.
The computational power of this model is fully characterized: it corresponds exactly to what nondeterministic Turing machines can compute in space O(n log n).
In this talk, I will focus on two restrictions of the model:
The first is what happens when agents may share identifiers, the population admitting homonyms. I will introduce a hierarchy, with characterizations depending on the number of distinct identifiers present in the population. The main result is that with log n distinct identifiers, a Turing machine using polylogarithmic space can be simulated.
The second considers the following time restriction: what can be computed in a polylogarithmic number of parallel interactions. This version is not yet characterized, but I will provide some impossibility results, some computable protocols, and the tightest bounds we found.
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday April 4, 2017, 2PM, Salle 1007\\
**Valentin Garnero** (INRIA Sophia Antipolis) //(Méta)-noyaux constructifs et linéaires dans les graphes peu denses//
\\
In this talk I will present the results obtained during my PhD thesis, which deals with kernels and parameterized complexity.
Parameterized complexity is a branch of algorithmics that analyzes the complexity of a problem as a function of the input size n and of an (arbitrary) parameter k. The goal is to attack hard (NP-complete) problems and to show that the combinatorial explosion is a function of the parameter only (and not of the input size), i.e., to find algorithms running in time f(k) + p(n). Kernelization is a method for obtaining algorithms with such a complexity. It is a preprocessing step (a polynomial reduction) that shrinks the input in polynomial time, while guaranteeing that the size of the kernel (the reduced instance) is bounded by a function g(k) of the parameter. Solving the problem on the kernel, by any means, then yields an algorithm running in time f(g(k)) + p(n).
The region decomposition method [Alber, Fellows, Niedermeier] is a major result in the field of kernelization. It has been used to build many linear kernels for variants of Domination in planar graphs. I will illustrate this method on Red-Blue Domination, which asks, in a bicoloured graph, for a set of blue vertices such that every red vertex is at distance at most 1 from the solution.
This method was later generalized by meta-results [e.g., Bodlaender, Fomin, Lokshtanov, Penninkx, Saurabh, Thilikos], which prove the existence of kernels (in sparse graphs) for every problem satisfying certain generic conditions. I will present one of these meta-results, based on dynamic programming and on protrusion decomposition, which has the merit of being constructive.
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday March 28, 2017, 2PM, Salle 1007\\
**Juho Hirvonen** (IRIF) //Recent developments in the theory of distributed graph algorithms//
\\
I will survey and try to explain the intuition behind several recent developments in the theory of distributed graph algorithms. I will focus on the LOCAL model, where the measure of interest is how far information has to be propagated in a communication network in order to solve a given problem. The problems of interest will be locally checkable labelling problems (LCLs), a natural class of problems that allow distributed verification of proposed solutions.
I will discuss two recent papers in this field. First, we gave a lower bound showing that there exist LCL problems of "intermediate" complexity, that is, complexity strictly between known complexity classes (Brandt et al., STOC 2016). The proof is by a new kind of simulation argument. Second, Chang et al. (FOCS 2016) showed that this lower bound implies an exponential separation between the randomized and deterministic LOCAL models. Chang et al. also show further connections between the randomized and deterministic models, and establish a useful speed-up simulation for the deterministic LOCAL model.
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday March 21, 2017, 2PM, Salle 1007\\
**Evangelos Bampas** (LIF - Université Aix Marseille) //Linear search by a pair of distinct-speed robots//
\\
We will present algorithms for the evacuation problem by a pair of distinct-speed robots on an infinite line. In this problem, two mobile robots with different maximal speeds are initially placed at the same point on an infinite line. The robots need to find a stationary target (i.e., the exit), which is placed at an unknown location on the line. The search is completed when both robots arrive at the exit and the goal is to conclude evacuation in as little time as possible. The robot that discovers the exit first may communicate it to the other robot. We consider two models of communication between the robots, namely wireless communication and face-to-face communication. We present an optimal algorithm for any combination of robot speeds in the case of face-to-face communication. In the case of wireless communication, our algorithm is optimal if the slow robot is at most 6 times slower than the fast robot.
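For background, the classic single-searcher "doubling" zig-zag strategy for linear search can be sketched as follows. This is a standard textbook strategy with competitive ratio 9, not the two-robot algorithm of the talk:

```python
# Classic doubling strategy for searching a point on a line: sweep right
# to +1, back through the origin to -2, then to +4, and so on, doubling
# the turn point each time, until the target is crossed. The distance
# travelled is at most 9 * |target| + O(1).
def doubling_search(target):
    pos, travelled = 0.0, 0.0
    step, direction = 1.0, 1
    while True:
        turn = direction * step
        # is the target inside the segment swept on this leg?
        if (direction == 1 and 0 < target <= turn) or \
           (direction == -1 and turn <= target < 0):
            travelled += abs(target - pos)
            return travelled
        travelled += abs(turn - pos)
        pos = turn
        step *= 2
        direction = -direction
```

For a target at +1 the searcher finds it on the first leg (distance 1); for a target at -1 it travels to +1 and back (distance 3).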
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday February 28, 2017, 2PM, Salle 1007\\
**Laurent Viennot** (INRIA - IRIF) //Beyond Highway Dimension: Small Distance Labels Using Tree Skeletons//
\\
Joint work with Adrian Kosowski
The goal of a hub-based distance labeling scheme for a network $G = (V, E)$ is to assign a small subset $S(u) \subseteq V$ to each node $u \in V$, in such a way that for any pair of nodes $u, v$, the intersection of hub sets $S(u) \cap S(v)$ contains a node on the shortest $uv$-path. The existence of small hub sets, and consequently efficient shortest path processing algorithms, for road networks is an empirical observation. A theoretical explanation for this phenomenon was proposed by Abraham et al. (SODA 2010) through a network parameter they called highway dimension, which captures the size of a hitting set for a collection of shortest paths of length at least $r$ intersecting a given ball of radius $2r$. In this talk, we revisit this explanation, introducing a more tractable (and directly comparable) parameter based solely on the structure of shortest-path spanning trees, which we call skeleton dimension. We show that skeleton dimension admits an intuitive definition for both directed and undirected graphs, provides a way of computing labels more efficiently than by using highway dimension, and leads to comparable or stronger theoretical bounds on hub set size.
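A 2-hop distance query over hub labels can be sketched as follows (a minimal illustration with hypothetical names, not the authors' implementation): since some common hub lies on a shortest $uv$-path, scanning the intersection of the two hub sets recovers the exact distance.

```python
# Hub-label distance query: each node u stores dist(u, w) for every hub
# w in S(u); the u-v distance is the minimum of dist(u, w) + dist(w, v)
# over hubs w common to both labels.
def hub_distance(label_u, label_v):
    """label_x: dict hub -> dist(x, hub). Returns the u-v distance,
    assuming some shortest u-v path is hit by a common hub."""
    best = float("inf")
    for w, du in label_u.items():
        dv = label_v.get(w)
        if dv is not None:
            best = min(best, du + dv)
    return best
```

The query cost is linear in the label sizes, which is why bounding hub set size (via highway or skeleton dimension) directly bounds query time.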
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday February 7, 2017, 2PM, Salle 1007\\
**Edouard Bonnet** (Middlesex University, London) //Fine-grained complexity of coloring geometric intersection graphs.//
\\
We are interested in the following problem: given a collection of n objects in the plane and an integer k, is there a proper k-coloring of its intersection graph (i.e., one in which two overlapping objects get different colors)? Here we think of k as a function of n; it can be constant, linear, or anything in between. By an application of a known separator theorem, if the input objects are fat (i.e., have bounded diameter/width ratio), then this problem can be solved in time 2^{O(\sqrt{n k} \log n)}. We will show that this running time is essentially optimal under the Exponential Time Hypothesis (ETH). We then move to intersection graphs of segments (which are quintessentially non-fat) and show that the situation is quite different from that of fat objects. We end on a surprising note: 3-coloring and 4+-coloring of segments have significantly different behaviors. The guideline and motivation of the talk is to go beyond the NP-hardness of coloring these geometric graphs, and to pin down the best (hopefully subexponential) running time that we can achieve.
This is based on a joint work with Csaba Biró, Dániel Marx, Tillmann Miltzow, and Paweł Rzążewski, and an ongoing project with Stéphan Thomassé and a subset of the aforementioned authors.
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday January 24, 2017, 2PM, Salle 1007\\
**Maximilien Danisch** (Telecom Paris Tech) //Towards real-world graph algorithmics//
\\
Real-world graphs (a.k.a. complex networks) are ubiquitous: the web, Facebook, Twitter, brain networks, protein interaction networks, etc. Studying these graphs and solving computational problems on them (say maximum clique, partitioning or dense subgraph) has applications in many fields.
I will first show that the structure of some real-world graphs is special. In particular, they are not adversarial, and some difficult (say NP-complete or NP-hard) problems can be solved on some huge real-world graphs (say 2 billion edges or more).
I will then present two works along the lines of "understanding and leveraging the structure of real-world graphs in order to build better algorithms": (i) Enumerating all k-cliques and (ii) Computing the density-friendly graph decomposition.
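As a small illustration of the first problem, here is a naive sketch of k-clique enumeration by recursive neighbourhood intersection; it is not claimed to be the algorithm presented in the talk:

```python
# Naive k-clique listing: grow a clique one vertex at a time, keeping as
# candidates only later-ordered common neighbours of all clique members.
# Fixing a vertex order ensures each clique is emitted exactly once.
def k_cliques(adj, k):
    """adj: dict vertex -> set of neighbours. Yields k-cliques as tuples."""
    def extend(clique, candidates):
        if len(clique) == k:
            yield tuple(clique)
            return
        for v in sorted(candidates):
            yield from extend(clique + [v],
                              {u for u in candidates & adj[v] if u > v})
    for v in sorted(adj):
        yield from extend([v], {u for u in adj[v] if u > v})
```

On sparse real-world graphs, processing vertices in degeneracy order keeps the candidate sets small, which is what makes such enumeration feasible at scale.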
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday January 10, 2017, 2PM, Salle 1007\\
**Carl Feghali** (IRIF) //Problems and Results in Kempe Equivalence of Colorings//
\\
Given a graph G with a coloring, a Kempe chain of G is a maximal connected subgraph of G in which each vertex has one of two colors. A Kempe change is the operation of swapping the colors of a Kempe chain. Two colorings are Kempe equivalent if one can be obtained from the other by a sequence of Kempe changes. In this talk, we survey some of the existing results, open problems and common proof techniques in this area.
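The Kempe change operation described above can be sketched as follows (a minimal illustration, with names of my choosing):

```python
# Compute the Kempe chain through vertex v in the two colours
# {colouring[v], other}, then swap the two colours on that chain.
# The swap always preserves properness of the colouring.
def kempe_change(adj, colouring, v, other):
    """adj: dict vertex -> set of neighbours;
    colouring: dict vertex -> colour (modified in place)."""
    a, b = colouring[v], other
    # BFS restricted to the subgraph induced by colours a and b
    chain, frontier = {v}, [v]
    while frontier:
        u = frontier.pop()
        for w in adj[u]:
            if w not in chain and colouring[w] in (a, b):
                chain.add(w)
                frontier.append(w)
    for u in chain:
        colouring[u] = b if colouring[u] == a else a
    return colouring
```

On a path 1-2-3 coloured 0, 1, 0, a Kempe change at vertex 1 with the colour pair {0, 1} swaps the whole path to 1, 0, 1, which is again proper.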
[[en:seminaires:adg:index|Distributed algorithms and graphs]]\\
Tuesday January 3, 2017, 2PM, Salle 1007\\
**Marthe Bonamy** (Labri - CNRS) //Reed's conjecture and strong edge coloring//
\\
The chromatic number of a graph is trivially bounded from above by the maximum degree plus one, and from below by the size of a largest clique. Reed proved in 1998 that, compared to the trivial upper bound, we can always save a number of colors proportional to the gap between the maximum degree and the size of a largest clique. A key step in the proof deals with how to save colors in a graph in which every vertex "sees few edges" in its neighborhood. We improve the existing approach, and discuss its applications to Reed's theorem and strong edge coloring.
This is joint work with Thomas Perrett (Technical University of Denmark) and Luke Postle (University of Waterloo).
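The trivial upper bound mentioned in the abstract can be illustrated by a minimal greedy-colouring sketch (an illustration only, unrelated to the proof techniques of the talk):

```python
# Greedy colouring: each vertex has at most deg(v) coloured neighbours,
# so some colour in {0, ..., deg(v)} is always free. Hence the chromatic
# number is at most (maximum degree + 1), for any vertex order.
def greedy_colouring(adj):
    """adj: dict vertex -> set of neighbours. Returns a proper colouring."""
    colouring = {}
    for v in adj:
        used = {colouring[u] for u in adj[v] if u in colouring}
        colouring[v] = next(c for c in range(len(adj[v]) + 1)
                            if c not in used)
    return colouring
```

On a triangle (where the largest clique meets the maximum degree plus one), greedy is forced to use all three colours; Reed's theorem concerns the regime where the clique number lags behind the maximum degree.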