November 25-27, 2024, Institut Henri Poincaré (IHP), Paris

Local Information

The 4th annual workshop of the working group “Complexity and Algorithms” (gt-CoA) will take place in the Hermite amphitheater at Institut Henri Poincaré, located at 11 rue Pierre et Marie Curie, Paris, from November 25 to November 27. The program will begin after lunch on Monday and end mid-afternoon on Wednesday.

Participants must book their accommodation in Paris themselves.

Registration

Registration is free but mandatory and can be completed at the following link. The registration deadline is November 8.

Invited Speakers
  • Vincent Cohen-Addad (Google)
  • Aline Parreau (LIRIS)
  • Vianney Perchet (CREST)
  • Mahsa Shirmohammadi (IRIF)
  • Rida Ait El Manssour (Oxford)

Scientific Program (subject to modifications)

Monday (Nov. 25)

13h45-14h00: Opening and Welcome

  • 14h00-15h00: Invited talk by Vianney Perchet: Prophet inequalities, old and new
In this short introduction, I will describe the optimal stopping problem called the “prophet inequality”. This 40-year-old line of research has gained a lot of momentum in recent years, due to applications to online advertisement (among others).

I will recall the general problem and the major results from the 1980s, then move to more recent developments, including what I believe are interesting open research directions.
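
For intuition, here is a minimal simulation of the classical single-threshold rule, one of the basic results this line of research starts from: setting the threshold to half the prophet's expected reward and accepting the first value above it recovers at least half of that reward in expectation. The distributions and parameter names below are illustrative choices, not from the talk.

```python
import random

# Hedged illustration: the classical single-threshold stopping rule.
# With tau = E[max_i X_i] / 2, accepting the first value >= tau is known
# to guarantee expected reward at least E[max_i X_i] / 2.

def threshold_rule(dists, trials=100_000):
    """dists: list of zero-argument samplers, one per arriving value."""
    # Monte Carlo estimate of the prophet's benchmark E[max].
    e_max = sum(max(d() for d in dists) for _ in range(trials)) / trials
    tau = e_max / 2
    reward = 0.0
    for _ in range(trials):
        for d in dists:
            x = d()
            if x >= tau:        # stop at the first value above the threshold
                reward += x
                break
    return reward / trials, e_max

if __name__ == "__main__":
    dists = [lambda: random.uniform(0, 1)] * 5 + [lambda: random.expovariate(1.0)]
    alg, prophet = threshold_rule(dists)
    print(f"gambler: {alg:.3f}  prophet: {prophet:.3f}")  # ratio is >= 1/2
```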

  • 15h00-15h20: Martin Krejca: From Market Saturation to Social Reinforcement: Understanding the Impact of Non-Linearity in Information Diffusion Models
Diffusion of information in networks is at the core of many problems in AI. Common examples include the spread of ideas and rumors as well as marketing campaigns. Typically, information diffuses at a non-linear rate, for example, when markets become saturated or when users of social networks reinforce each other’s opinions. Despite these characteristics, this area has seen little research compared to the vast amount of results for linear models, which exhibit less complex dynamics. In particular, when the possibility of re-infection is considered, no fully rigorous guarantees existed so far.

We address this shortcoming by studying a very general non-linear diffusion model that captures saturation as well as reinforcement. More precisely, we consider a variant of the SIS model in which vertices get infected at a rate that scales polynomially in the number of their infected neighbors, weighted by an infection coefficient 𝜆. We give the first fully rigorous results for thresholds of 𝜆 at which the expected survival time becomes super-polynomial. For cliques, we show that when the infection rate scales sub-linearly, the threshold only shifts by a poly-logarithmic factor compared to the standard SIS model. In contrast, super-linear scaling changes the process considerably and shifts the threshold by a polynomial term. For stars, sub-linear and super-linear scaling behave similarly, and both shift the threshold by a polynomial factor. Our bounds are almost tight, as they are at most a poly-logarithmic factor away from the lower thresholds, at which the expected survival time is logarithmic.
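
As a toy illustration of the dynamics described above, one can simulate the process on a clique. The talk's model is continuous-time; the discretization, parameter names, and values below are our own simplifying choices.

```python
import random

# A minimal discrete-time sketch of non-linear SIS dynamics: a susceptible
# vertex with m infected neighbours becomes infected with probability
# min(1, lam * m**alpha); each infected vertex recovers with probability
# delta. alpha < 1 captures saturation, alpha > 1 captures reinforcement.

def survival_time(adj, lam, alpha, delta=0.5, seed=0, max_rounds=100_000):
    infected = {seed}
    for t in range(max_rounds):
        if not infected:
            return t                              # the epidemic died out
        survivors = {v for v in infected if random.random() > delta}
        newly = set()
        for v in range(len(adj)):
            if v not in infected:
                m = sum(1 for u in adj[v] if u in infected)
                if m and random.random() < min(1.0, lam * m ** alpha):
                    newly.add(v)
        infected = survivors | newly
    return max_rounds                             # still alive: long survival

# Example: a clique on 50 vertices with sub-linear (saturating) rates.
n = 50
clique = [[u for u in range(n) if u != v] for v in range(n)]
print(survival_time(clique, lam=0.05, alpha=0.5))
```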

  • 15h20-15h40: Eniko Kevie: Online Non-linear Covering with Multiple Oracles
Designing online algorithms with machine-learned predictions is a recent approach beyond the worst-case paradigm for various practically relevant online problems such as scheduling, caching, clustering, and ski rental. While most previous learning-augmented algorithms focus on integrating the predictions of a single oracle, we study the design of online algorithms with multiple prediction sources (experts). To go beyond the performance guarantee of the popular static benchmark of the best expert in hindsight, we compare our algorithm to a new benchmark: a linear combination of the predictions that changes over time.

We present a competitive algorithm in the new dynamic benchmark for 0-1 online non-linear covering with a performance guarantee of $O(\ln(K)) \cdot (\lambda/(1-\mu \cdot \ln(K)))$, where K is the number of experts and $(\lambda, \mu)$ are parameters of the objective function. In particular, for 0-1 online linear covering problems, the competitive ratio is $O(\ln(K))$.

Our novel approach gives a new perspective on combining several online algorithms in an online manner, a central subject in the online algorithms research community.
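
For background, the O(ln K) dependence on the number of experts is reminiscent of the regret of the classical multiplicative-weights scheme. The sketch below shows that classical scheme, not the talk's covering algorithm, and all names in it are ours.

```python
import math

# Classical multiplicative weights over K experts: follow the experts
# proportionally to their weights, then penalize each expert exponentially
# in its observed loss.

def multiplicative_weights(expert_losses, eta=0.1):
    """expert_losses: list of rounds, each a length-K list of losses in [0, 1]."""
    K = len(expert_losses[0])
    weights = [1.0] * K
    total_loss = 0.0
    for losses in expert_losses:
        s = sum(weights)
        probs = [w / s for w in weights]       # follow experts proportionally
        total_loss += sum(p * l for p, l in zip(probs, losses))
        weights = [w * math.exp(-eta * l) for w, l in zip(weights, losses)]
    return total_loss  # exceeds the best expert's loss by O(ln(K)/eta + eta*T)
```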

15h40-16h20: Coffee break

  • 16h20-16h40: Romain Cosson: Is maze-solving parallelizable?
Can you find a short path without a map? Is maze-solving parallelizable? These questions can be cast formally using competitive analysis, a toolkit developed since the 1990s to study online algorithms. In this talk, I will present these problems, known as “layered graph traversal” (introduced in the literature on online algorithms) and “collective tree exploration” (introduced in the literature on distributed algorithms). We will focus on the connections between these problems and use them as a case study of the links between online and distributed algorithms.
  • 16h40-17h00: Yannis Coutouly: Colorless Computability for General Adversaries of the IIS Model Using Topological Methods
This talk presents a new colorless computability theorem for general adversaries of the IIS model. It starts by recalling the usual encoding of distributed systems into simplicial complexes. This type of encoding has already been used to obtain computability results for specific shared-memory models of communication (IIS). We enrich this model with an adversary that can model arbitrary failure patterns on such communication. Such adversaries were proven challenging to tackle, especially the “non-compact” type. Using a new topology on the set of executions, from an article presented at DISC 2023, we extend a well-known result on distributed colorless tasks (such as consensus, k-set agreement, …) which states, roughly, that the computability of a distributed system can be reduced to the existence of a continuous function between a topological space and a simplicial complex. More formally: a task (I, O, Delta) is solvable on a general adversary M of the IIS model if and only if there is a continuous function from geo(I × M) to |O|. Time permitting, I will discuss applications of this theorem to tasks such as set agreement.
  • 17h00-17h20: Mikaël Rabie: LOCAL SLEEPING model
The SLEEPING model introduces a new complexity metric in Distributed Computing, on the energy efficiency side. The goal is to minimize the number of steps an agent must be active to compute a solution in a network.
  • 17h20-17h40: Maria Kokkou: Deterministic Self-Stabilising Leader Election for Programmable Matter with Constant Memory
The problem of electing a unique leader is central to all distributed systems, including programmable matter systems where particles have constant-size memory. In this paper, we present a silent, self-stabilising, deterministic, stationary election algorithm for particles having constant memory, assuming that the system is simply connected. Our algorithm is elegant and simple, and requires constant memory per particle. We prove that our algorithm always stabilises to a configuration with a unique leader, under a daemon satisfying some fairness guarantees (Gouda fairness [Gouda 2001]). We use the special geometric properties of programmable matter in 2D triangular grids to obtain the first self-stabilising algorithm for such systems. This result is surprising since it is known that silent self-stabilising algorithms for election in general distributed networks require $\Omega(\log n)$ bits of memory per node, even for ring topologies [Dolev et al. 1999].

Tuesday (Nov. 26)

  • 09h00-10h00: Invited talk by Mahsa Shirmohammadi and Rida Ait El Manssour: The complexity of Orbit-Closure problems
Computational problems concerning the orbit of a point under the action of a matrix group occur in numerous subfields of computer science, including complexity theory, program analysis, quantum computation, and automata theory. In many cases the focus extends beyond orbits proper to orbit closures under a suitable topology. Typically one starts from a group and several points and asks questions about the orbit closure of the points under the action of the group, e.g., whether two given orbit closures intersect.

In this talk, I will introduce and motivate several problems involving groups and orbit closures, focusing on computation and determination tasks. Regarding computation, we are given a set of matrices and tasked with computing the defining ideal of the algebraic group generated by those matrices. The determination task, on the other hand, involves starting with a given variety and determining whether and how it can be represented as an algebraic group or an orbit closure. The “how” question often asks whether the underlying group can be topologically generated by s matrices for a given s.

The talk is based on several joint works: https://dl.acm.org/doi/10.1145/3476446.3536172 https://arxiv.org/abs/2407.04626 https://arxiv.org/abs/2407.09154

  • 10h00-10h20: Subhayan Saha: Absolute Reconstruction of Some Polynomial Families via Connections to Tensor Decomposition
We consider the following algorithmic problem: given blackbox access to an arbitrary homogeneous degree-d polynomial in n variables over R or C (oracle access to the polynomial by evaluation at points), decide whether it can be written as a sum of powers of linearly independent linear forms. Through the equivalence of tensors and polynomials, this is also equivalent to checking the existence of a low-rank decomposition of symmetric tensors. In this talk, I will sketch an algorithm for this decision problem which requires polynomially many (in n and d) calls to the blackbox and arithmetic operations. I will also briefly discuss some recent ongoing work on generalizing this algorithm to other polynomial families, such as degree-d set-multilinear polynomials (which correspond to ordinary tensors). This is based on joint work with Pascal Koiran.
  • 10h20-10h40: Laurent Feuilloley: Short and local transformations between $(\Delta+1)$-colorings
Consider a coloring of a graph of maximum degree $\Delta$ with at most $\Delta + 1$ colors. If a node does not have all the $\Delta + 1$ colors in its neighborhood, we can change its color, while keeping a proper coloring. Given two such colorings of the same graph, how many recoloring steps do we need to transform one into the other? We give the first linear upper bound (for $\Delta>2$), via techniques inspired by local distributed algorithms.
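
A minimal sketch of the elementary recoloring move just described (illustrative code, not the talk's algorithm):

```python
# A vertex that does not see all Delta+1 colors in its neighborhood may
# switch to a color missing there; the coloring stays proper.

def recolor_step(adj, coloring, v, num_colors):
    """Recolor vertex v with a color absent from its neighborhood, if any."""
    seen = {coloring[u] for u in adj[v]}
    for c in range(num_colors):
        if c not in seen and c != coloring[v]:
            coloring[v] = c
            return True
    return False  # v is blocked: all Delta+1 colors appear around it

# A transformation between two colorings is a schedule of such moves;
# the talk bounds its length by O(n) for Delta > 2.
```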

10h40-11h15: Coffee break

  • 11h15-11h35: Maher Mallem: Kernelization Algorithms on Machine Scheduling with Time Windows
Over the past few years, parameterized complexity has successfully been applied to scheduling problems, with many fixed-parameter tractable algorithms established for a variety of parameters. Knowing this, it is standard to look for kernels and associated kernelization algorithms in order to preprocess scheduling instances and improve the runtime of parameterized algorithms. However, multiple parameterized scheduling problems still lack kernel characterizations, especially when the number of job types is not bounded by the parameter in any way. We study the kernelization of a single-machine scheduling problem with precedence constraints and job time windows with respect to two graph parameters: the pathwidth and the vertex cover of the compatibility graph induced by the job time window overlaps. While this problem is known to be fixed-parameter tractable with respect to the pathwidth, we show that a polynomial kernel is unlikely to exist. Nevertheless, we provide a polynomial kernel with respect to the vertex cover of the compatibility graph. In contrast, when parallel processors with processing set restrictions are considered, we prove that the problem is W[1]-hard with respect to the vertex cover. This paves the way to kernel ideas for other parameters, like pathwidth, with the hope of using them in conjunction with known fixed-parameter tractable algorithms and improving their runtime in practice.
  • 11h35-11h55: Christoph Dürr: Overcoming Brittleness in Pareto-Optimal Learning-Augmented Algorithms
The study of online algorithms with machine-learned predictions has gained considerable prominence in recent years. One of the common objectives in the design and analysis of such algorithms is to attain (Pareto-)optimal tradeoffs between the consistency of the algorithm, i.e., its performance assuming perfect predictions, and its robustness, i.e., its performance under adversarial predictions. In this work, we demonstrate that this optimization criterion can be extremely brittle, in that the performance of Pareto-optimal algorithms may degrade dramatically even in the presence of imperceptible prediction error. To remedy this drawback, we propose a new framework in which the smoothness in the performance of the algorithm is enforced by means of a user-specified profile. This allows us to regulate the performance of the algorithm as a function of the prediction error, while simultaneously maintaining the analytical notion of consistency/robustness tradeoffs, adapted to the profile setting. We apply this new approach to a well-studied online problem, namely the one-way trading problem. For this problem, we further address another limitation of the state-of-the-art Pareto-optimal algorithms, namely the fact that they are tailored to worst-case and extremely pessimistic inputs. We propose a new Pareto-optimal algorithm that leverages any deviation from the worst-case input to its benefit, and introduce a new metric that allows us to compare any two Pareto-optimal algorithms via a dominance relation.
  • 11h55-12h15: Bertrand Simon: Contract Scheduling with Distributional and Multiple Advice
Contract scheduling is a widely studied framework for designing real-time systems with interruptible capabilities. Previous work has shown that a prediction of the interruption time can help improve the performance of contract-based systems; however, it relied on a single prediction provided by a deterministic oracle. In this work, we introduce and study more general and realistic learning-augmented settings in which the prediction is in the form of a probability distribution, or is given as a set of multiple possible interruption times. For both prediction settings, we design and analyze schedules which perform optimally if the prediction is accurate, while simultaneously guaranteeing the best worst-case performance if the prediction is adversarial. We also provide evidence that the resulting system is robust to prediction errors in the distributional setting. Last, we present an experimental evaluation that confirms the theoretical findings, and illustrates the performance improvements that can be attained in practice.
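
For background, contract scheduling without predictions is classically handled by the doubling strategy sketched below. This is a hedged illustration of folklore material, not the talk's prediction-augmented schedules: its acceleration ratio of 4 is the baseline that predictions aim to improve.

```python
# The doubling schedule runs contracts of lengths 1, 2, 4, ... back to back.
# If the interruption arrives at time T, the longest completed contract has
# length more than T/4 (acceleration ratio 4).

def longest_completed_contract(T):
    """Length of the largest contract finished by time T under doubling."""
    finished, length, elapsed = 0, 1, 0
    while elapsed + length <= T:
        elapsed += length
        finished = length
        length *= 2
    return finished

for T in (5, 50, 500):
    L = longest_completed_contract(T)
    print(T, L, T / L)   # the ratio stays below 4
```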

12h15-14h00: Lunch (on your own)

  • 14h00-15h00: Invited talk by Aline Parreau: The Maker-Breaker domination game
In the Maker-Breaker domination game, two players, Dominator and Staller, alternately claim vertices of a graph. Dominator wants to claim all the vertices of a dominating set, whereas Staller wants to prevent her from doing so, or equivalently wants to claim a vertex together with all its neighbours. Deciding whether Staller or Dominator has a winning strategy is in general a PSPACE-complete problem. In this talk, we will review some recent (positive) results on this game. Using partition strategies for Dominator, we can show that Dominator always wins in a regular graph, that deciding the outcome in an outerplanar graph or a block graph is polynomial, and we reduce the algorithmic complexity for interval graphs from PSPACE to NP.

This is joint work with Guillaume Bagan, Eric Duchêne, Valentin Gledel and Tuomo Lehtilä.
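
To make the game concrete, here is a brute-force minimax solver for tiny instances. It is purely illustrative, since the general problem is PSPACE-complete and the talk's results rely on structural strategies instead; all names below are ours.

```python
from functools import lru_cache

# Dominator wins iff, once all vertices are claimed, her vertices form a
# dominating set. Vertex sets are encoded as bitmasks for memoization.

def dominator_wins(adj):
    """adj: list of neighbor lists; Dominator moves first."""
    n = len(adj)
    closed = [set(adj[v]) | {v} for v in range(n)]   # closed neighborhoods

    @lru_cache(maxsize=None)
    def win(dom, stal, dom_turn):
        free = [v for v in range(n) if not ((dom | stal) >> v) & 1]
        if not free:
            return all(any((dom >> u) & 1 for u in closed[v]) for v in range(n))
        if dom_turn:
            return any(win(dom | 1 << v, stal, False) for v in free)
        return all(win(dom, stal | 1 << v, True) for v in free)

    return win(0, 0, True)

# On the path a-b-c, claiming the middle vertex first wins for Dominator.
print(dominator_wins([[1], [0, 2], [1]]))   # True
```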

  • 15h00-15h20: Guillaume Lagarde: A $(1+\epsilon)$-approximation for Ultrametric Embedding in subquadratic time
Ultrametrics are representations of data that emphasize hierarchical properties and are widely used in data visualization and clustering. In the Ultrametric Embedding problem, we are given $n$ points in a metric space and must output an ultrametric that best preserves the distance between points in the original data (more precisely, we want to minimize the worst-case distortion). Farach et al. gave an exact $O(n^2)$-time algorithm for this problem, but their algorithm does not scale to very large datasets. Therefore, recent work has focused on obtaining subquadratic algorithms by approximating an optimal solution for *Euclidean* metrics, resulting in a $\sqrt{2}c$-approximation algorithm that runs in time $n^{1+O(1/c^2)}$. In this talk, I will present our algorithm that achieves an arbitrarily precise approximation in subquadratic time for Euclidean metrics, namely it outputs a $(1+c)$-approximation of the best ultrametric embedding in time $Õ(n^{1+1/c})$.

Our result is based on two constructions that may be of independent interest for approximation algorithms on high-dimensional Euclidean data. The first is a $Õ(n^{1+1/c^2})$ time algorithm for computing a *uniformly approximate* minimum spanning tree of Euclidean metrics, using a BFS guided by Locality Sensitive Hashing (LSH). The second is a dynamic data structure for approximate farthest neighbor in Euclidean spaces. This construction uses random projections and allows one to (i) query the farthest point in a cluster and (ii) merge two clusters, both in $Õ(n^{1/c^2})$ time.

Joint work with Gabriel Bathie.
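
As background for the connection between ultrametric fitting and spanning trees, the sketch below computes the classic subdominant ultrametric (single-linkage, i.e. minimax MST-path distances) in O(n^2) time. It is a hedged illustration of the exact-algorithm baseline, not the subquadratic algorithm of the talk.

```python
import math

# U(x, y) = maximum edge weight on the MST path between x and y.

def subdominant_ultrametric(points):
    n = len(points)
    d = [[math.dist(p, q) for q in points] for p in points]

    # Prim's algorithm on the complete graph, O(n^2).
    in_tree, best, parent = [False] * n, [math.inf] * n, [-1] * n
    best[0] = 0.0
    edges = []
    for _ in range(n):
        v = min((u for u in range(n) if not in_tree[u]), key=lambda u: best[u])
        in_tree[v] = True
        if parent[v] >= 0:
            edges.append((v, parent[v], best[v]))
        for u in range(n):
            if not in_tree[u] and d[v][u] < best[u]:
                best[u], parent[u] = d[v][u], v

    # Minimax path distances over the tree, via a DFS from every source.
    adj = [[] for _ in range(n)]
    for v, p, w in edges:
        adj[v].append((p, w))
        adj[p].append((v, w))
    U = [[0.0] * n for _ in range(n)]
    for s in range(n):
        stack = [(s, -1, 0.0)]
        while stack:
            v, par, mx = stack.pop()
            U[s][v] = mx
            for u, w in adj[v]:
                if u != par:
                    stack.append((u, v, max(mx, w)))
    return U

print(subdominant_ultrametric([(0, 0), (0, 1), (5, 0)]))
```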

  • 15h20-15h40: Giannos Stamoulis: Finding irrelevant vertices in linear time on bounded-genus graphs
The irrelevant vertex technique provides a powerful tool for the design of parameterized algorithms for a wide variety of problems on graphs. A common characteristic of these problems, permitting the application of this technique on surface-embedded graphs, is the fact that every graph of large enough treewidth contains a vertex that is irrelevant, in the sense that its removal yields an equivalent instance of the problem. The straightforward application of this technique yields algorithms with running time that is quadratic in the size of the input graph. This running time is due to the fact that it takes linear time to detect one irrelevant vertex and the total number of irrelevant vertices to be detected is linear as well. Using advanced techniques, sub-quadratic algorithms have been designed for particular problems, even in general graphs. However, designing a general framework for linear-time algorithms has been open, even for the bounded-genus case.

In this talk we present a general framework that enables finding in linear time a set of irrelevant vertices whose removal yields a bounded-treewidth graph, provided that the input graph has bounded genus. Our method is applicable to a wide variety of known graph containment or graph modification problems where the irrelevant vertex technique applies. Examples include the (Induced) Minor Folio problem, the (Induced) Disjoint Paths problem, and the F-Minor-Deletion problem.

Joint work with Petr A. Golovach, Stavros G. Kolliopoulos, and Dimitrios M. Thilikos.

  • 15h40-16h00: Loïc Dubois: Untangling Graphs on Surfaces
How to remove the crossings of a graph drawn on a surface by sliding the drawing along the surface? This natural problem generalizes planarity testing. We provide the first polynomial-time algorithm for it. Incidentally we give a new algorithm for the related problem of minimizing the crossings of curves. Then we introduce an algorithmic generalization of Tutte's spring theorem: putting a drawing in “convex” position removes all crossings when possible. Our techniques combine algorithmic and topological tools. This is joint work with Éric Colin de Verdière and Vincent Despré.

16h00-16h30: Coffee break

  • 16h30-16h50: Georgii Melidi: Scenario-Based Robust Optimization of Tree Structures
We initiate the study of tree structures in the context of scenario-based robust optimization. Specifically, we study Binary Search Trees (BSTs) and Huffman coding, two fundamental techniques for efficiently managing and encoding data based on a known set of frequencies of keys. Given k different scenarios, each defined by a distinct frequency distribution over the keys, our objective is to compute a single tree of best-possible performance relative to any scenario. We consider, as performance metrics, the competitive ratio, which compares multiplicatively the cost of the solution to the tree of least cost among all scenarios, as well as the regret, which induces a similar, but additive, comparison. For BSTs, we show that the problem is NP-hard across both metrics. We also show how to obtain a tree of competitive ratio $\lceil \log_2(k+1) \rceil$, and we prove that this ratio is optimal. For Huffman trees, we show that the problem is, likewise, NP-hard across both metrics; we also give an algorithm of regret $\lceil \log_2 k \rceil$, which we show is near-optimal by proving a lower bound of $\lfloor \log_2 k \rfloor$. Last, we give a polynomial-time algorithm for computing Pareto-optimal BSTs with respect to their regret, assuming scenarios defined by uniform distributions over the keys. This setting captures, in particular, the first study of fairness in the context of data structures. We provide an experimental evaluation of all algorithms. To this end, we also provide a mixed integer linear programming formulation for computing optimal trees.

Joint work with Spyros Angelopoulos, Christoph Dürr and Alex Elenter.

  • 16h50-17h10: Dimitrios Los: Naively Sorting Evolving Data is Optimal and Robust
We study comparison sorting in the evolving data model, introduced by Anagnostopoulos, Kumar, Mahdian and Upfal (2011), where the true total order changes while the sorting algorithm is processing the input. More precisely, each comparison operation of the algorithm is followed by a sequence of evolution steps, where an evolution step perturbs the rank of a random item by a “small” random value. The goal is to maintain an ordering that remains close to the true order over time. Previous works have analyzed adaptations of classic sorting algorithms, assuming that an evolution step changes the rank of an item by just one, and that a fixed constant number b of evolution steps take place between two comparisons. In fact, the only previous result achieving optimal linear total deviation, by Besa Vial, Devanny, Eppstein, Goodrich and Johnson (2018a), applies just for b=1.

We analyze a very simple sorting algorithm suggested by Mahdian (2014), which samples a random pair of adjacent items in each step and swaps them if they are out of order. We show that the algorithm achieves and maintains, with high probability, optimal total deviation, O(n), and optimal maximum deviation, O(log n), under very general model settings. Namely, the perturbation introduced by each evolution step is sampled from a general distribution of bounded moment generating function, and we just require that the average number of evolution steps between two sorting steps be bounded by an (arbitrary) constant, where the average is over a linear number of steps.

The key ingredients of our proof are a novel potential function argument that inserts “gaps” in the list of items, and a general analysis framework which separates the analysis of sorting from that of the evolution steps, and is applicable to a variety of settings for which previous approaches do not apply. Our results settle conjectures and open problems in the three aforementioned works, and provide theoretical support that simple quadratic algorithms are optimal and robust for sorting evolving data, as empirically observed by Besa Vial, Devanny, Eppstein, Goodrich and Johnson (2018b).

Joint work with George Giakkoupis and Marcos Kiwi, to be presented at FOCS 2024.
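
The algorithm itself is simple enough to state in a few lines. The following is a minimal simulation of it under one concrete instantiation of the model; the parameters and the footrule deviation measure below are our choices for illustration.

```python
import random

# Sorting step: compare-and-swap a random adjacent pair.
# Evolution step: perturb the true rank of a random item by one.

def simulate(n=300, steps=200_000, b=1):
    order = list(range(n))                     # the algorithm's maintained list
    true = list(range(n))                      # items in true (evolving) order
    pos = {x: i for i, x in enumerate(true)}   # true rank of each item
    for _ in range(steps):
        i = random.randrange(n - 1)
        if pos[order[i]] > pos[order[i + 1]]:
            order[i], order[i + 1] = order[i + 1], order[i]
        for _ in range(b):                     # b evolution steps per comparison
            j = random.randrange(n - 1)
            true[j], true[j + 1] = true[j + 1], true[j]
            pos[true[j]], pos[true[j + 1]] = j, j + 1
    return sum(abs(i - pos[x]) for i, x in enumerate(order))  # total deviation

print(simulate())   # stays O(n) with high probability, per the talk
```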

  • 17h10-17h30: Jonas Ellert: Palindromic Length in Small Space
A palindrome is a string that reads the same as its reversal, e.g., “ressasser”. While it is easy to detect whether a given string belongs to the language of palindromes, detecting the concatenation of multiple palindromes is much more intricate. We address one of the harder problems in this area: given a string of length n, what is the minimal number k such that the string is the concatenation of k palindromes? We focus on the space complexity and show that, apart from the read-only string, $f(k) \cdot (\log n)^k$ additional working space is sufficient. The solution is achieved through a new space-efficient encoding of the prefixes of the string that are the concatenation of k palindromes. The encoding follows from a careful analysis of the structure and combinatorial properties of such prefixes. We also provide an almost matching lower bound for the encoding. Our solution is the first step toward a streaming algorithm for computing the palindromic length.
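
For concreteness, a quadratic-time baseline for palindromic length is sketched below; it ignores the space constraints that are the point of the talk and serves only to pin down the quantity k being computed.

```python
# dp[i] = minimum number of palindromes whose concatenation is the prefix s[:i].

def palindromic_length(s):
    n = len(s)
    is_pal = [[False] * n for _ in range(n)]   # is_pal[i][j]: s[i..j] is a palindrome
    for i in range(n - 1, -1, -1):
        for j in range(i, n):
            if s[i] == s[j] and (j - i < 2 or is_pal[i + 1][j - 1]):
                is_pal[i][j] = True
    INF = float("inf")
    dp = [0] + [INF] * n
    for i in range(1, n + 1):
        for j in range(i):
            if is_pal[j][i - 1] and dp[j] + 1 < dp[i]:
                dp[i] = dp[j] + 1
    return dp[n]

print(palindromic_length("ressasser"))   # 1: the whole string is a palindrome
print(palindromic_length("abacab"))      # 2: "a" + "bacab"
```
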
  • 17h30-17h50: Junyao Zhao: Beyond Worst-Case Budget-Feasible Mechanism Design
Motivated by large-market applications such as crowdsourcing, we revisit the problem of budget-feasible mechanism design under a “small-bidder assumption”. Anari, Goel, and Nikzad (2018) gave a mechanism that has optimal competitive ratio 1-1/e on worst-case instances. However, we observe that on many realistic instances, their mechanism is significantly outperformed by a simpler open clock auction by Ensthaler and Giebe (2014), although the open clock auction only achieves competitive ratio 1/2 in the worst case. Is there a mechanism that gets the best of both worlds, i.e., a mechanism that is worst-case optimal and performs favorably on realistic instances? In this work, we design a new mechanism that gives an affirmative answer to this question. We prove that on every instance, our mechanism performs at least as well as all uniform mechanisms, including Anari, Goel, and Nikzad’s and Ensthaler and Giebe’s mechanisms. Moreover, we show that in a semi-random model called budget-smoothed analysis, where the adversary designs a single worst-case market for a distribution of budgets, our mechanism is optimal among all (including non-uniform) mechanisms and guarantees a strictly better-than-(1-1/e) expected competitive ratio for any non-trivial budget distribution. Finally, we empirically evaluate our mechanism on realistic instances and observe that it beats the worst-case 1-1/e competitive ratio by a significant margin and compares favorably to both mechanisms mentioned above.

Wednesday (Nov. 27)

  • 09h00-10h00: Invited talk by Vincent Cohen-Addad: Recent Progress on Correlation Clustering: From Local Algorithms to Better Approximation Algorithms and Back
Correlation clustering is a classic model for clustering problems arising in machine learning and data mining. Given a set of data elements represented as vertices of a graph and pairwise similarities represented as edges, the goal is to find a partition of the vertex set so as to minimize the total number of edges across the parts plus the total number of non-edges within the parts. Introduced in the early 2000s [Bansal et al., 2004], correlation clustering has received a large amount of attention through the years. A natural linear programming relaxation was shown to have an integrality gap of at least 2 and at most 2.5 in 2005 [Ailon et al., 2008], and at most 2.06 in 2015 [Chawla et al., 2015]. In 2021, motivated by large-scale applications, new structural insights allowed the derivation of a simple, practical algorithm that achieves an O(1)-approximation in a variety of models (Massively Parallel, Sublinear, Streaming, or Differentially Private) [Cohen-Addad et al., 2021; Cohen-Addad et al., 2022]. These new insights turned out to be a key building block for designing better algorithms: they serve as a pre-clustering of the input graph that enables algorithms with approximation guarantees significantly better than 2 [Cohen-Addad et al., 2023; Cohen-Addad et al., 2022]. They are a key component of the new algorithm that achieves a 1.44-approximation [Cao et al., 2024] and of the new local-search-based 1.84-approximation for the Massively Parallel, Sublinear, and Streaming models [Cohen-Addad et al., 2024]. This talk will review these recent developments and the main open research directions. A collection of joint works with Nairen Cao, Silvio Lattanzi, Euiwoong Lee, Shi Li, David Rasmussen Lolck, Slobodan Mitrovic, Alantha Newman, Ashkan Norouzi-Fard, Nikos Parotsidis, Marcin Pilipczuk, Jakub Tarnawski, Mikkel Thorup, Lukas Vogl, Shuyi Yan, and Hanwen Zhang.
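
As a taste of the simple combinatorial routines involved, here is a hedged sketch of the classical Pivot algorithm [Ailon et al., 2008]; it is background material, not the new algorithms of the talk, and the names in it are ours.

```python
import random

# Pivot: pick a random unclustered vertex, cluster it with its remaining
# "+" neighbours, repeat. On complete graphs this is a 3-approximation
# in expectation.

def pivot_clustering(n, plus_edges):
    """plus_edges: set of frozensets {u, v} labeled as similar."""
    unclustered = set(range(n))
    clusters = []
    while unclustered:
        p = random.choice(sorted(unclustered))
        cluster = {p} | {v for v in unclustered if frozenset((p, v)) in plus_edges}
        clusters.append(cluster)
        unclustered -= cluster
    return clusters

# Two cliques {0,1,2} and {3,4} plus one noisy "+" edge between them.
plus = {frozenset(e) for e in [(0, 1), (0, 2), (1, 2), (3, 4), (2, 3)]}
print(pivot_clustering(5, plus))
```
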
  • 10h00-10h20: Taïssir Marcé: How to Wake Up a Swarm of Robots on a Graph?
The robot wake-up problem (freeze-tag) is a routing optimization problem introduced in the early 2000s ([2], [3]). We consider a set of robots positioned in space. Initially, all robots are deactivated (or asleep), except one, called the source robot, which is initially activated. An activated robot can wake up a deactivated robot by physically moving to it. The resulting optimization problem is to determine in which order to wake up the robots so as to minimize the total wake-up time, that is, the time t* after which no robot remains asleep. We speak of robot wake-up in graphs for instances whose metric space is defined by a graph. In this case, the robots are initially positioned on vertices of the graph and move only along its edges, which may be weighted. The robot wake-up problem in general graphs is NP-complete [2]. In 2003, Arkin, Bender, and Ge proposed a 22-approximation for waking up robots on an unweighted tree. This result extends to wake-up in general unweighted graphs, for which it suffices to build a spanning tree before applying the algorithm on that tree [1]. We propose a new analysis of a tree wake-up algorithm based on the same strategy, which yields an 11.89-approximation of the optimal wake-up time in any unweighted graph.
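
For intuition, the sketch below implements a naive greedy wake-up heuristic on a connected unweighted graph. It is illustrative only and is not the spanning-tree-based strategy behind the 11.89-approximation; all names below are ours.

```python
import heapq
from collections import deque

# Each available robot moves to the nearest still-asleep robot and wakes
# it; both robots then continue waking others.

def bfs_dist(adj, src):
    dist = {src: 0}
    q = deque([src])
    while q:
        v = q.popleft()
        for u in adj[v]:
            if u not in dist:
                dist[u] = dist[v] + 1
                q.append(u)
    return dist

def greedy_wakeup(adj, robots, source):
    """robots: vertices initially holding a robot; returns the makespan."""
    asleep = set(robots) - {source}
    awake = [(0, source)]          # heap of (time available, position)
    makespan = 0
    while asleep:
        t, v = heapq.heappop(awake)
        dist = bfs_dist(adj, v)
        target = min(asleep, key=lambda u: dist[u])
        t2 = t + dist[target]
        asleep.remove(target)
        makespan = max(makespan, t2)
        heapq.heappush(awake, (t2, target))   # the waker, now at target
        heapq.heappush(awake, (t2, target))   # the newly woken robot
    return makespan

path = [[1], [0, 2], [1, 3], [2]]             # path on 4 vertices
print(greedy_wakeup(path, {0, 1, 2, 3}, 0))   # 3
```
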
  • 10h20-10h40: Manolis Vasilakis: Parameterized Maximum Node-Disjoint Paths
We revisit the \textsc{Maximum Node-Disjoint Paths} problem, the natural optimization version of the famous \textsc{Node-Disjoint Paths} problem, where we are given a graph $G$, $k$ (demand) pairs of vertices $(s_i, t_i)$, and an integer $\ell$, and are asked whether there exist at least $\ell$ vertex-disjoint paths in $G$ whose endpoints are given pairs. This problem has been intensely studied from both the approximation and parameterized complexity point of view and is notably known to be intractable by standard structural parameters, such as tree-depth, as well as the combined parameter $\ell$ plus pathwidth. We present several results improving and clarifying this state of the art, with an emphasis towards FPT approximation.

Our main positive contribution is to show that the problem's intractability can be overcome using approximation: We show that for several of the structural parameters for which the problem is hard, most notably tree-depth, the problem admits an \emph{efficient FPT approximation scheme}, returning a $(1-\varepsilon)$-approximate solution in time $f(\mathrm{td},\varepsilon)n^{\mathcal{O}(1)}$. We manage to obtain these results by comprehensively mapping out the structural parameters for which the problem is FPT if $\ell$ is also a parameter, hence showing that understanding $\ell$ as a parameter is key to the problem's approximability. This, in turn, is a problem we are able to solve via a surprisingly simple color-coding algorithm, which relies on identifying an insightful problem-specific variant of the natural parameter, namely the number of vertices used in the solution.

The results above are quite encouraging, as they indicate that in some situations where the problem does not admit an FPT algorithm, it is still solvable almost to optimality in FPT time. A natural question is whether the FPT approximation algorithm we devised for tree-depth can be extended to pathwidth. We resolve this negatively, showing that under the \emph{Parameterized Inapproximability Hypothesis} no FPT approximation scheme for this parameter is possible, even in time $f(\mathrm{pw},\varepsilon)n^{g(\varepsilon)}$. We thus precisely determine the parameter border where the problem transitions from “hard but approximable” to “inapproximable”.

Lastly, we strengthen existing lower bounds by replacing W[1]-hardness by XNLP-completeness for parameter pathwidth, and improving the $n^{o(\sqrt{\mathrm{td}})}$ ETH-based lower bound for tree-depth to (the optimal) $n^{o(\mathrm{td})}$.

10h40-11h15: Coffee break

  • 11h15-11h35: Stéphane Devismes: Asynchronous Self-stabilization Made Fast, Simple, and Energy-efficient
Distributed systems are ubiquitous, and their distributed nature makes them particularly vulnerable to faults. Being able to automatically recover from these faults is of utmost importance, and self-stabilization is a general and lightweight approach to tackle this problem. However, fully asynchronous self-stabilizing algorithms (FASS) are notoriously difficult to design and prove. It thus makes sense to create and prove a transformer that turns synchronous algorithms into FASSes. The Rollback Compiler of Awerbuch and Varghese (FOCS 1991) is such a transformer. The problem is that although it produces FASSes that are fast (time being evaluated in rounds), we prove that their energy requirement (measured through the number of state changes) can be exponential in the number n of nodes. Actually, regardless of the problem, the literature only contains a few FASSes that are asymptotically optimal time-wise. Moreover, several of these algorithms have been shown to require an exponential amount of energy, and no such algorithms are known to be energy-efficient. In this paper, we introduce the first transformer that turns any terminating synchronous algorithm into a fully asynchronous self-stabilizing algorithm with essentially the same time complexity and with a low energy requirement (polynomial in n). However, as for the Rollback Compiler, this comes at the cost of a (reasonable) memory increase. Our approach is compatible with most models, ranging from the LOCAL model (a powerful synchronous fault-free model) down to models such as the Stone Age model, in which a node cannot even know how many neighbors it has. In particular, we can transform extremely fast algorithms, such as the classical Θ(log* n)-time ring coloring algorithm by Cole and Vishkin (FOCS 1986), into fast and energy-efficient FASSes. We also provide the best FASSes so far for many classical distributed problems such as leader election and spanning tree constructions (e.g., BFS, shortest-path).
  • 11h35-11h55: Julien Zylberman: Efficient quantum circuits for quantum routines
Many quantum algorithms rely on the assumption that efficient quantum routines exist for tasks such as quantum state preparation (the process of encoding classical data into qubit states), unitary and non-unitary diagonal operators, and multi-controlled operations. Implementing these routines on a quantum computer necessitates the synthesis of quantum circuits, where efficiency is gauged by circuit size, depth, and the number of ancilla qubits required. However, existing methods for exact quantum circuit synthesis, particularly for quantum state preparation and diagonal operators, often scale exponentially with the number of qubits.

In this talk, I will first provide a brief overview of these existing methods. I will then introduce several quantum circuits we have developed that significantly improve efficiency. These include the Walsh Series loader [1], which utilizes the Walsh representation of classical data to achieve efficient quantum state preparation; an adjustable-depth quantum circuit capable of performing both unitary and non-unitary diagonal operations [2]; and a novel multi-controlled NOT gate with polylogarithmic depth and no need for ancilla qubits [3], offering a superpolynomial speedup compared to previous methods with linear depth.

[1] Zylberman, J., & Debbasch, F. (2024). Efficient quantum state preparation with walsh series. Physical Review A, 109(4), 042401 [2] Zylberman, J., Nzongani, U., Simonetto, A., & Debbasch, F. (2024). Efficient Quantum Circuits for Non-Unitary and Unitary Diagonal Operators with Space-Time-Accuracy trade-offs. arXiv preprint arXiv:2404.02819 [3] Claudon, B., Zylberman, J., Feniou, C., Debbasch, F., Peruzzo, A., & Piquemal, J. P. (2024). Polylogarithmic-depth controlled-NOT gates without ancilla qubits. Nature Communications, 15(1), 5886

  • 11h55-12h15: Roman Edenhofer: Directed st-connectivity with few paths is in quantum logspace
We present a quantum logspace procedure to count st-paths on directed graphs for which we are promised that there are at most polynomially many paths starting in s and polynomially many paths ending in t. For comparison, the best known classical upper bound in this case just to decide st-connectivity is DSPACE(log^2 n / log log n). The result establishes a new relationship between quantum logspace and unambiguity and fewness subclasses of NL. Further, some preprocessing in our approach also allows us to verify whether there are at most polynomially many paths between any two nodes in quantum logspace. This yields the first natural candidate for a language problem separating quantum logspace from deterministic logspace. Until now, all candidates separating these classes were promise problems.
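
As a classical point of comparison (and emphatically not the quantum procedure of the talk), counting s-t paths is a textbook dynamic program on DAGs; the promise setting of the talk concerns general directed graphs, where bounding the number of paths is what makes the problem delicate. The example below is ours.

```python
from functools import lru_cache

# On a DAG, the number of paths from v to t satisfies
# count(v) = sum over out-neighbours u of count(u), with count(t) = 1.

def count_st_paths_dag(adj, s, t):
    @lru_cache(maxsize=None)
    def count(v):
        if v == t:
            return 1
        return sum(count(u) for u in adj[v])
    return count(s)

# A small DAG with three s-t paths.
adj = {0: [1, 2], 1: [3], 2: [1, 3], 3: []}
print(count_st_paths_dag(adj, 0, 3))   # 3: 0-1-3, 0-2-3, 0-2-1-3
```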

12h15-12h30: Final discussions and closing

Call for Presentations

We would like to invite those working in or close to the area of Algorithms and Complexity to submit a talk proposal on the topic of their choice. The format of contributed talks will be 20 minutes. Students and postdocs are especially welcome. You can submit the abstract and title of your talk proposal on the registration form or by sending us an email (sebastien(dot)tavenas(at)univ-smb(dot)fr and arnaud(dot)de-mesmay(at)univ-eiffel(dot)fr). The deadline for talk proposals is September 30.