Thematic team: Algorithmes et complexité

#### Day, time, and location

Tuesdays at 11am, room 1007

The seminar calendar (iCal format).
To add the seminar calendar to your favorite calendar application, subscribe to it using this link.

#### Contact(s)

Live broadcasts take place on BigBlueButton or Zoom.
Recordings are available on YouTube.

### Next session

Algorithmes et complexité
Tuesday, January 25, 2022, 2pm, Room 3052
Simona Etinski (INRIA / IRIF) Latest challenges in post-quantum cryptography

In 2016, the National Institute of Standards and Technology (NIST) announced a call for standardization (also known as the “NIST competition”) of post-quantum cryptographic protocols, i.e., cryptographic protocols considered to be resistant to quantum attacks. NIST was mainly interested in two types of protocols: digital signatures and encryption/key-establishment schemes. The fourth and final round of this competition is about to start, and once it is finished, the most efficient and thoroughly analyzed protocols will be standardized.

In this talk, we focus on the proposal for a digital signature. It is based upon a problem from coding theory, known as the syndrome decoding problem, and analyzed using cryptanalytic means. Namely, we analyze the time complexity of information set decoding algorithms, widely believed to be the best algorithms for solving the syndrome decoding problem. By evaluating their complexity in both the classical and quantum settings, we reason about the hardness of the problem. Finally, we give an example of a scheme based upon the syndrome decoding problem and analyze the security it inherits from the hardness of the problem. We examine the trade-off between the signature's security and its size, which is a major challenge to be addressed in the competition.
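
To make the underlying problem concrete (an illustration of ours, not part of the talk): in syndrome decoding one is given a binary parity-check matrix H, a syndrome s, and a weight bound w, and must find an error vector e of Hamming weight at most w with He = s over GF(2). A brute-force sketch for toy parameters:

```python
from itertools import combinations

def syndrome_decode(H, s, w):
    """Brute-force syndrome decoding over GF(2): find a vector e of
    Hamming weight at most w such that H e = s (mod 2).
    Exponential in w; the conjectured hardness of this problem for
    cryptographic parameters underpins code-based signatures."""
    rows, cols = len(H), len(H[0])
    for weight in range(w + 1):
        for support in combinations(range(cols), weight):
            e = [0] * cols
            for j in support:
                e[j] = 1
            if all(sum(H[i][j] for j in support) % 2 == s[i] for i in range(rows)):
                return e
    return None

# Toy example: the third column of H equals the target syndrome.
H = [[1, 0, 1, 1],
     [0, 1, 1, 0]]
s = [1, 1]
syndrome_decode(H, s, w=1)  # -> [0, 0, 1, 0]
```

Information set decoding algorithms beat this exhaustive search by exploiting linear-algebraic structure, but remain exponential, which is what the complexity analysis in the talk quantifies.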

### Past sessions

#### Year 2021

Algorithmes et complexité
Wednesday, December 15, 2021, 10am, Room 1007
Serge Massar (Laboratoire d’Information Quantique CP224, Université libre de Bruxelles) Characterizing the intersection of QMA and coQMA

We show that the functional analogue of QMA∩coQMA, denoted F(QMA∩coQMA), equals the complexity class Total Functional QMA (TFQMA). To prove this we need to introduce alternative definitions of QMA∩coQMA in terms of a single quantum verification procedure. We show that if TFQMA equals the functional analogue of BQP (FBQP), then QMA∩coQMA = BQP. We show that if there is a QMA complete problem that (robustly) reduces to a problem in TFQMA, then QMA∩coQMA = QMA. These results provide strong evidence that the inclusions FBQP⊆TFQMA⊆FQMA are strict, since otherwise the corresponding inclusions in BQP⊆QMA∩coQMA⊆QMA would become equalities.

Algorithmes et complexité
Tuesday, December 7, 2021, 11am, Room 1007
Bruce Kapron (Computer Science Department, University of Victoria) Type-two feasibility via bounded query revision

The problem of generalizing the notion of polynomial time to a type-two setting, where inputs include not only finitary data, e.g., numbers or strings of symbols, but also functions on such data, was first posed by Constable in 1973. Mehlhorn subsequently proposed a solution, based on a generalization of Cobham's scheme-based approach using limited recursion on notation. The resulting class was shown to be robust with respect to oracle Turing machine (OTM) computation, but the problem of providing a purely machine-based characterization remained open for almost two decades, until Kapron and Cook gave a solution using notions of function length and second-order polynomials. While a natural generalization of the usual notion of poly-time, this characterization still had some shortcomings, as function length is itself not a feasible notion. A characterization using ordinary polynomials remained an elusive goal.

Recently, we have provided several such characterizations. The underlying idea is to bound run time in terms of an ordinary polynomial in the largest answer returned by an oracle, while imposing a constant upper bound on the number of times an oracle query (answer) may exceed all previous ones in size. The resulting classes are contained in Mehlhorn's class, and when closed under composition are in fact equivalent.

In this talk we will present these recent results and follow-up work involving new scheme-based characterizations of Mehlhorn's class, as well as a recent characterization using a tier-based type system for a simple imperative programming language with oracle calls.

(Joint work with F. Steinberg, and with E. Hainry, J.-Y. Marion, and R. Pechoux.)

Algorithmes et complexité
Tuesday, December 7, 2021, 2pm, Room 3052
Daniel Szilagyi Introduction to convexity - optimization and sampling

In this talk we present some basic notions of convexity, from a quantum-algorithmic point of view. The talk will consist of two parts. We first warm up by introducing some classical first- and second-order methods for convex optimization. Then, we move on to more structured convex problems, introduce linear programming and the basic ideas of interior-point methods, and mention some quantum algorithms for LPs and SDPs. In the second part of the talk we discuss sampling, and in particular its applications to estimating volumes of polytopes. We present the basic classical sampling algorithms (ball walk and hit-and-run), and the general framework for applying these algorithms to volume estimation. We conclude by discussing the opportunities for quantum speedups; in particular, we present a high-level overview of the work by Chakrabarti, Childs, Hung, Li, Wang and Wu from 2019.
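
As a concrete illustration of one of the classical samplers mentioned above (a sketch of ours, for the especially simple polytope [0,1]^d):

```python
import math
import random

def hit_and_run_step(x, lo=0.0, hi=1.0):
    """One hit-and-run step inside the box [lo, hi]^d: draw a uniformly
    random direction, intersect the line through x with the box, and
    move to a uniform point on the resulting chord."""
    d = [random.gauss(0.0, 1.0) for _ in x]
    norm = math.sqrt(sum(v * v for v in d))
    d = [v / norm for v in d]
    # Feasible interval of step sizes t with lo <= x[i] + t*d[i] <= hi.
    t_min, t_max = -math.inf, math.inf
    for xi, di in zip(x, d):
        if abs(di) < 1e-12:
            continue
        a, b = (lo - xi) / di, (hi - xi) / di
        t_min = max(t_min, min(a, b))
        t_max = min(t_max, max(a, b))
    t = random.uniform(t_min, t_max)
    # Clamp to guard against floating-point drift at the boundary.
    return [min(hi, max(lo, xi + t * di)) for xi, di in zip(x, d)]

# The walk stays inside the box and, for this simple body, converges
# to the uniform distribution over it.
x = [0.5, 0.5, 0.5]
for _ in range(1000):
    x = hit_and_run_step(x)
assert all(0.0 <= xi <= 1.0 for xi in x)
```

For a general convex body the chord is found with a membership oracle rather than closed-form intersection, and mixing-time bounds for walks like this one are what drive the volume-estimation framework discussed in the talk.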

Quantum info Paris

Algorithmes et complexité
Wednesday, November 17, 2021, 11am, Room 1007
Spyros Angelopoulos (LIP6) Competitive Sequencing with Noisy Advice

Several well-studied online resource allocation problems can be formulated in terms of infinite, increasing sequences of positive values, in which each element is associated with a corresponding allocation value. Examples include problems such as online bidding, searching for a hidden target on an unbounded line, and designing interruptible algorithms based on contract scheduling. The performance of the online algorithm, in each of these problems, is measured by the competitive ratio, which describes the multiplicative performance loss due to the absence of full information on the instance.

We study such competitive sequencing problems in a setting in which the online algorithm has some (potentially) erroneous information, expressed as a k-bit advice string, for some given k. For concreteness, we follow the formulation of contract scheduling as a case study. We first consider the untrusted advice setting in which the objective is to obtain a Pareto-optimal schedule, considering the two extremes: either the advice is error-free, or it is generated by a (malicious) adversary. Here, we show a Pareto-optimal schedule, using a new approach for applying the functional-based lower-bound technique due to [Gal, Israel J. Math. 1972]. Next, we introduce and study a new noisy advice setting, in which a number of the advice bits may be erroneous; the exact error is unknown to the online algorithm, which only has access to a pessimistic estimation (i.e., an upper bound on this error). We give the first upper and lower bounds on the competitive ratio of an online problem in this setting. To this end, we combine ideas from robust query-based search in arrays (such as Rényi–Ulam-type games) and fault-tolerant contract scheduling. The presentation will also discuss the applicability of these approaches in more general online problems.

Joint work with Diogo Arsénio and Shahin Kamali.

Algorithmes et complexité
Tuesday, October 19, 2021, 2pm, Room 3052
Stephen Piddock (School of Mathematics, University of Bristol) Quantum analogue simulation: universality and complexity

Quantum analogue simulation aims to reproduce the physics of a target Hamiltonian H using a different physical system; this can be achieved by ensuring that the low energy subspace of the physical Hamiltonian is approximately H. First I will describe a construction of a 1D quantum system that can simulate all other Hamiltonians (a property we call universality) just by varying the length of the chain. The construction relies heavily on the possibility of encoding a (quantum) computation into the chain. Then I will go on to show a much more general equivalence between universality and the computational complexity of the Local Hamiltonian problem - the problem of approximately estimating the ground state energy of a system.

Algorithmes et complexité
Tuesday, October 12, 2021, 11am, Room 1007
Pierre Meyer (IRIF, IDC Herzliya) Breaking the Circuit Size Barrier for Secure Computation under Quasi-Polynomial LPN

In this work we introduce a new (circuit-dependent) homomorphic secret sharing (HSS) scheme for any $\log/\log\log$-local circuit, with communication proportional only to the width of the circuit and polynomial computation, which is secure assuming the super-polynomial hardness of learning parity with noise (LPN). At the heart of our new construction is a pseudorandom correlation generator (PCG) which allows two parties to locally stretch short seeds into pseudorandom instances of an arbitrary $\log/\log\log$-local additive correlation.

Our main application, and the motivation behind this work, is a generic two-party secure computation protocol for every layered (boolean or arithmetic) circuit of size $s$ with total communication $O(s/\log\log s)$ and polynomial computation, assuming the super-polynomial hardness of the standard learning parity with noise assumption (a circuit is layered if its nodes can be partitioned into layers, such that any wire connects adjacent layers). This expands the set of assumptions under which the “circuit-size barrier” can be broken, for a large class of circuits. The strength of the underlying assumption is tied to the sublinearity factor: we achieve communication $O(s/k(s))$ under the $s^{2^{k(s)}}$-hardness of LPN, for any $k(s) \leq (\log\log s) / 4$.

Previously, the set of assumptions known to imply a PCG for correlations of degree $\omega(1)$ or generic secure computation protocols with sublinear communication was restricted to LWE, DDH, and a circularly secure variant of DCR.

Algorithmes et complexité
Tuesday, June 22, 2021, 11am, Room 3052 + Zoom
Subhasree Patro (CWI) Quantum Fine-Grained Complexity

One of the major challenges in the field of complexity theory is the inability to prove unconditional time lower bounds, including for practical problems within the complexity class P. One way around this is the study of fine-grained complexity, where we use special reductions to prove time lower bounds for many diverse problems in P based on the conjectured hardness of some key problems. For example, computing the edit distance between two strings, a problem of practical interest in the comparison of DNA strings, has an algorithm that takes O(n^2) time. Using a fine-grained reduction, it can be shown that a faster algorithm for edit distance would also imply a faster algorithm for the Boolean Satisfiability (SAT) problem, which is believed not to exist - strong evidence that it will be very hard to solve the edit distance problem faster. Besides SAT, the 3SUM and APSP problems are also very suitable as a basis for such reductions, since they are natural to describe and well studied.
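
For reference, the quadratic-time algorithm alluded to above is the textbook dynamic program over prefixes (a sketch of ours, not the speaker's code):

```python
def edit_distance(a: str, b: str) -> int:
    """Classic O(n*m)-time dynamic program with a rolling row.
    Fine-grained complexity results suggest that no strongly
    subquadratic algorithm exists unless SETH-type conjectures fail."""
    n, m = len(a), len(b)
    dp = list(range(m + 1))  # dp[j] = distance between "" and b[:j]
    for i in range(1, n + 1):
        prev, dp[0] = dp[0], i  # prev holds dp[i-1][j-1]
        for j in range(1, m + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                          # delete a[i-1]
                        dp[j - 1] + 1,                      # insert b[j-1]
                        prev + (a[i - 1] != b[j - 1]))      # substitute
            prev = cur
    return dp[m]

edit_distance("kitten", "sitting")  # -> 3
```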

The situation in the quantum regime is no better; almost all known lower bounds for quantum algorithms are defined in terms of query complexity, which doesn’t help much for problems for which the best-known algorithms take superlinear time. Therefore, employing fine-grained reductions in the quantum setting seems a natural way forward. However, translating the classical fine-grained reductions directly into the quantum regime is not always possible for various reasons. In this talk, I will present some recent results in which we circumvent these challenges and prove quantum time lower bounds for some problems in BQP conditioned on the conjectured quantum hardness of SAT (and its variants) and the 3SUM problem.

Algorithmes et complexité
Wednesday, June 9, 2021, 10am, Collège de France
Frédéric Magniez - Elham Kashefi (IRIF - CNRS, University of Edinburgh) Latest developments in the quantum internet and quantum artificial intelligence

Eighth lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Elham Kashefi on “Quantum Computing as a Service: Secure and Verifiable Multi-Tenant Quantum Data Centre” (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-06-09-10h00.htm

Algorithmes et complexité
Tuesday, June 8, 2021, 4pm, Online
Kyriakos Axiotis (MIT) Decomposable Submodular Function Minimization via Maximum Flow

Submodular function minimization is a primitive that is extensively used in many ML applications, including MAP inference in MRFs, image segmentation, and clustering. Although polynomial-time algorithms for this problem exist, their runtime is prohibitive for large-scale applications. Fortunately, in practice it is often the case that the function can be decomposed as a sum of r simpler functions, each acting on a small number of elements, k. We present an algorithm that can solve this problem (as well as the more general parametric problem) in time O((n+r)^(1.497) poly(k)), where n is the number of elements in the ground set. Perhaps surprisingly, we achieve this by O~(poly(k)) calls to a maximum flow oracle, thus reducing a problem that is unrelated to graphs to a graph problem. In fact, if max flow is solvable in near-linear time and k=O(1), our algorithm runs in near-linear time. At a technical level, we develop a dual iterative method that computes each step by approximating the submodular base polytope by a graph cut polytope. To solve the resulting graph problem, we develop an efficient algorithm for the parametric min cut problem, which might also be of independent interest. Joint work with Adam Karczmarz, Anish Mukherjee, Piotr Sankowski, and Adrian Vladu.

Algorithmes et complexité
Wednesday, June 2, 2021, 10am, Collège de France
Frédéric Magniez - André Chailloux (IRIF - INRIA) Limits and impact of quantum computing

Seventh lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by André Chailloux on “Quantum supremacy: where do we stand today?” (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-06-02-10h00.htm

Algorithmes et complexité
Wednesday, May 26, 2021, 10am, Collège de France
Frédéric Magniez - Iordanis Kerenidis (IRIF - CNRS) The quantum Fourier transform II

Sixth lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Iordanis Kerenidis on “Quantum Machine Learning” (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-05-26-10h00.htm

Algorithmes et complexité
Wednesday, May 19, 2021, 10am, Collège de France
Frédéric Magniez - Stacey Jeffery (IRIF - CWI) Quantum optimization

Fifth lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Stacey Jeffery on “A Unified Framework for Quantum Walk Search” (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-05-19-10h00.htm

Algorithmes et complexité
Wednesday, May 12, 2021, 10am, Collège de France
Frédéric Magniez - Miklos Santha (IRIF - CNRS, CQT) The quantum Fourier transform I

Fourth lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Miklos Santha on “The hidden subgroup problem” (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-05-12-10h00.htm

Algorithmes et complexité
Wednesday, May 5, 2021, 10am, Collège de France
Frédéric Magniez - Simon Perdrix (IRIF - CNRS) Quantum circuits, first algorithms

Third lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Simon Perdrix on “Graphical languages for programming and reasoning in quantum computing” (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-05-05-10h00.htm

Algorithmes et complexité
Tuesday, May 4, 2021, 11am, Online
Paweł Gawrychowski (University of Wrocław) Distance oracles and labeling schemes for planar graphs

A fundamental question concerning graphs is that of constructing a data structure, called a distance oracle, that allows us to compute the distance between any pair of nodes. The goal is to avoid preprocessing the answer to every possible query while keeping the query time small. This is quite hard if we insist on receiving the exact answer and don't make any assumptions about the graph. A particularly natural class of inputs in this context are planar graphs. It is only recently that we have discovered that strongly subquadratic distance oracles with polylogarithmic query time exist for such graphs. I will present the most recent developments in this and related areas. In particular, I will talk about so-called informative labeling schemes for distances in planar graphs, which can be seen as a clean distributed version of distance oracles.

Algorithmes et complexité
Wednesday, April 14, 2021, 10am, Collège de France
Frédéric Magniez - Thomas Vidick (IRIF - California Institute of Technology) Quantum cryptography and communication

Second lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Thomas Vidick on “Certifying random number generation with quantum devices” (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-04-14-10h00.htm

Algorithmes et complexité
Wednesday, April 7, 2021, 10am, Collège de France
Frédéric Magniez - Eleni Diamanti (IRIF - CNRS) Quantum information, first computational uses

First lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Eleni Diamanti on “Quantum communication networks” (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-04-07-10h00.htm

Algorithmes et complexité
Thursday, April 1, 2021, 6pm, Collège de France
Frédéric Magniez (IRIF, Collège de France) Quantum algorithms: when quantum physics challenges the Church-Turing thesis

Inaugural lecture of the Collège de France course on quantum algorithms. Further information: https://www.college-de-france.fr/site/frederic-magniez/inaugural-lecture-2021-04-01-18h00.htm.

Algorithmes et complexité
Tuesday, March 30, 2021, 11am, Zoom
David Saulpic (LIP6) A New Coreset Framework for Clustering

Given a metric space, the (k, z)-clustering problem consists of finding k centers such that the sum of the distances, raised to the power z, of every point to its closest center is minimized. This encapsulates the famous k-median (z = 1) and k-means (z = 2) clustering problems. Designing small-space sketches of the data that approximately preserve the cost of the solutions, also known as coresets, has been an important research direction over the last 15 years. In this paper, we present a new, simple coreset framework that simultaneously improves upon the best known bounds for a large variety of settings, including Euclidean spaces, doubling metrics, minor-free metrics, and general metrics. This is joint work with Vincent Cohen-Addad and Chris Schwiegelshohn.
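
To fix notation, the objective being preserved by a coreset can be stated concretely (an illustrative sketch of ours, not the paper's code):

```python
def clustering_cost(points, centers, z):
    """(k, z)-clustering cost: sum over points of (distance to the
    nearest center)^z. z = 1 gives k-median cost, z = 2 gives k-means."""
    def dist(p, c):
        return sum((pi - ci) ** 2 for pi, ci in zip(p, c)) ** 0.5
    return sum(min(dist(p, c) for c in centers) ** z for p in points)

points = [(0.0, 0.0), (1.0, 0.0), (10.0, 0.0)]
centers = [(0.5, 0.0), (10.0, 0.0)]
clustering_cost(points, centers, z=2)  # -> 0.25 + 0.25 + 0.0 = 0.5
```

A coreset is a small weighted point set whose cost under this function approximates that of the full data, up to a (1 ± ε) factor, simultaneously for every candidate set of k centers.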

Algorithmes et complexité
Tuesday, March 16, 2021, 11am, Room 3052 & Online
Federico Centrone (IRIF) Experimental demonstration of quantum advantage for NP verification with limited information

Algorithmes et complexité
Tuesday, March 2, 2021, 4pm, Zoom
Soheil Behnezhad (Department of Computer Science, University of Maryland) Beating Two-Thirds For Random-Order Streaming Matching

We study the maximum matching problem in the random-order semi-streaming setting. In this problem, the edges of an arbitrary $n$-vertex graph $G=(V, E)$ arrive in a stream one by one and in a random order. The goal is to have a single pass over the stream, use $O(n \cdot \mathrm{poly}\log{(n)})$ space, and output a large matching of $G$.

We prove that for an absolute constant $\epsilon_0 > 0$, one can find a $(2/3 + \epsilon_0)$-approximate maximum matching of $G$ using $O(n \log n)$ space with high probability. This breaks the natural boundary of $2/3$ for this problem prevalent in the prior work and resolves an open problem of Bernstein~[ICALP'20] on whether a $(2/3 + \Omega(1))$-approximation is achievable.
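
For orientation (an illustration of ours, not part of the paper): the baseline single-pass algorithm simply maintains a greedy maximal matching, which is a 1/2-approximation on any arrival order; the work above shows that random arrival order allows going strictly beyond the stronger 2/3 barrier.

```python
def greedy_streaming_matching(edge_stream):
    """One pass, O(n) space: keep an edge iff both endpoints are still
    unmatched. The result is a maximal matching, hence at least half
    the size of a maximum matching, regardless of arrival order."""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# On the path 1-2-3-4 streamed in this order, greedy keeps both
# endpoint-disjoint edges.
greedy_streaming_matching([(1, 2), (2, 3), (3, 4)])  # -> [(1, 2), (3, 4)]
```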

Algorithmes et complexité
Tuesday, February 9, 2021, 11am, Online
Xavier Waintal (Univ. Grenoble Alpes, CEA) What Limits the Simulation of Quantum Computers?

An ultimate goal of quantum computing is to perform calculations beyond the reach of any classical computer. It is therefore imperative that useful quantum computers be very difficult to simulate classically, otherwise classical computers could be used for the applications envisioned for the quantum ones. Perfect quantum computers are unarguably exponentially difficult to simulate: the classical resources required grow exponentially with the number of qubits N or the depth D of the circuit. This difficulty has triggered recent experiments on deep, random circuits that aim to demonstrate that quantum devices may already perform tasks beyond the reach of classical computing. These real quantum computing devices, however, suffer from many sources of decoherence and imprecision which limit the degree of entanglement that can actually be reached to a fraction of its theoretical maximum. They are characterized by an exponentially decaying fidelity F∼(1−ε)^(ND) with an error rate ε per operation as small as ≈1% for current devices with several dozen qubits or even smaller for smaller devices. In this work, we provide new insight on the computing capabilities of real quantum computers by demonstrating that they can be simulated at a tiny fraction of the cost that would be needed for a perfect quantum computer. Our algorithms compress the representations of quantum wave functions using matrix product states, which are able to capture states with low to moderate entanglement very accurately. This compression introduces a finite error rate ε so that the algorithms closely mimic the behavior of real quantum computing devices. The computing time of our algorithm increases only linearly with N and D in sharp contrast with exact simulation algorithms. We illustrate our algorithms with simulations of random circuits for qubits connected in both one- and two-dimensional lattices. We find that ε can be decreased at a polynomial cost in computing power down to a minimum error ε∞. 
Getting below ε∞ requires computing resources that increase exponentially with ε∞/ε. For a two-dimensional array of N=54 qubits and a circuit with control-Z gates, error rates better than state-of-the-art devices can be obtained on a laptop in a few hours. For more complex gates such as a swap gate followed by a controlled rotation, the error rate increases by a factor 3 for similar computing time. Our results suggest that, despite the high fidelity reached by quantum devices, only a tiny fraction (∼10^−8) of the system Hilbert space is actually being exploited.

Algorithmes et complexité
Tuesday, January 19, 2021, 11am, Online
Christian Konrad (University of Bristol) Optimal Lower Bounds for Matching and Vertex Cover in Dynamic Graph Streams

In this talk, I will discuss simple optimal lower bounds on the one-way two-party communication complexity of approximate Maximum Matching and Minimum Vertex Cover with deletions. In this model, Alice holds a set of edges and sends a single message to Bob. Bob holds a set of edge deletions, which form a subset of Alice's edges, and needs to report a large matching or a small vertex cover in the graph spanned by the edges that are not deleted. Our results imply optimal space lower bounds for insertion-deletion streaming algorithms for Maximum Matching and Minimum Vertex Cover. An optimal space lower bound for Maximum Matching was previously given by Assadi et al. [SODA 2016], however, this lower bound only holds for the subclass of streaming algorithms that are able to process very long (at least triple exponential in n) input streams.

This work appeared at CCC'20. Joint work with Jacques Dark.

Algorithmes et complexité
Tuesday, January 5, 2021, 11am, Online
Benny Applebaum (Tel-Aviv University) The Round Complexity of Perfectly-Secure Multiparty Computation

We study the round complexity of general secure multiparty computation (MPC) in the classical information-theoretic model of Ben-Or, Goldwasser, and Wigderson [BGW88]. That is, we strive for perfect (i.e., information-theoretic and error-free) security against any coalition of at most t computationally unbounded corrupted parties. Classical feasibility results show that this is possible for any t<n/2 in the passive setting, when the parties follow the protocol, and up to t<n/3 in the active (aka Byzantine) setting, when the parties may deviate from the protocol.

I will survey a recent line of works that settles the round complexity of perfect-MPC with optimal security threshold for general functions. In particular, we show that 2 rounds are sufficient and necessary in the passive setting and 4 rounds are sufficient and necessary in the active setting.

Based on joint works with Zvika Brakerski, Eliran Kachlon, Arpita Patra, and Rotem Tsabary.

#### Year 2020

Algorithmes et complexité
Tuesday, December 15, 2020, 2pm, Online
Paul Fermé (ENS Lyon) Tight Approximation Guarantees for Concave Coverage Problems

In the maximum coverage problem, we are given subsets $T_1, \ldots, T_m$ of a universe $[n]$ along with an integer $k$ and the objective is to find a subset $S \subseteq [m]$ of size $k$ that maximizes $C(S) := \lvert\bigcup_{i \in S} T_i\rvert$. It is a classic result that the greedy algorithm for this problem achieves an optimal approximation ratio of $(1-e^{-1})$. In this work we consider a generalization of this problem wherein an element $a$ can contribute by an amount that depends on the number of times it is covered. Given a concave, nondecreasing function $\varphi$, we define $C^{\varphi}(S) := \sum_{a \in [n]}w_a\varphi(|S|_a)$, where $|S|_a = |\{i \in S : a \in T_i\}|$. The standard maximum coverage problem corresponds to taking $\varphi(j) = \min\{j,1\}$. For any such $\varphi$, we provide an efficient algorithm that achieves an approximation ratio equal to the Poisson concavity ratio of $\varphi$, defined by $\alpha_{\varphi} := \min_{x \in \mathbb{N}^*} \frac{\mathbb{E}[\varphi(Poi(x))]}{\varphi(\mathbb{E}[Poi(x)])}$. Complementing this approximation guarantee, we establish a matching NP-hardness result when $\varphi$ grows in a sublinear way. As special cases, we improve the result of [BFGG20] about maximum multi-coverage, that was based on the unique games conjecture, and we recover the result of [DDMS20] on multi-winner approval-based voting for geometrically dominant rules. Our result goes beyond these special cases and we illustrate it with applications to distributed resource allocation problems, welfare maximization problems and approval-based voting for general rules.
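
To make the objective concrete, here is a small illustrative sketch (our code, not the paper's): since $\varphi$ is concave and nondecreasing, $C^{\varphi}$ is monotone submodular, so the greedy algorithm already gives a $(1-e^{-1})$-approximation; the Poisson concavity ratio is what sharpens this to the tight guarantee.

```python
def greedy_concave_coverage(sets, weights, phi, k):
    """Greedy for maximizing C^phi(S) = sum_a w_a * phi(|S|_a) over |S| = k.
    For concave nondecreasing phi the objective is monotone submodular,
    so greedy achieves a (1 - 1/e)-approximation; the Poisson concavity
    ratio of phi gives the tight factor discussed in the talk."""
    counts = [0] * len(weights)  # |S|_a: how often element a is covered
    chosen = []
    for _ in range(k):
        def gain(i):  # marginal increase from adding set i
            return sum(weights[a] * (phi(counts[a] + 1) - phi(counts[a]))
                       for a in sets[i])
        best = max((i for i in range(len(sets)) if i not in chosen), key=gain)
        chosen.append(best)
        for a in sets[best]:
            counts[a] += 1
    value = sum(w * phi(c) for w, c in zip(weights, counts))
    return chosen, value

# phi(j) = min(j, 1) recovers the standard maximum coverage problem.
sets = [{0, 1}, {1, 2}, {3}]
weights = [1.0, 1.0, 1.0, 1.0]
greedy_concave_coverage(sets, weights, lambda j: min(j, 1), k=2)  # -> ([0, 1], 3.0)
```

A multi-coverage variant is obtained simply by passing, e.g., phi = lambda j: min(j, 2), so that an element can count twice.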

Based on joint work with Siddharth Barman and Omar Fawzi.

Algorithmes et complexité
Wednesday, December 2, 2020, 2pm, Online
Claire Mathieu (IRIF) Competitive Data-Structure Dynamization

Data-structure dynamization is a general approach for making static data structures dynamic. It is used extensively in geometric settings and in the guise of so-called merge (or compaction) policies in big-data databases such as Google Bigtable and LevelDB (our focus). Previous theoretical work is based on worst-case analyses for uniform inputs – insertions of one item at a time and constant read rate. In practice, merge policies must not only handle batch insertions and varying read/write ratios, they can take advantage of such non-uniformity to reduce cost on a per-input basis. To model this, we initiate the study of data-structure dynamization through the lens of competitive analysis, via two new online set-cover problems. For each, the input is a sequence of disjoint sets of weighted items. The sets are revealed one at a time. The algorithm must respond to each with a set cover that covers all items revealed so far. It obtains the cover incrementally from the previous cover by adding one or more sets and optionally removing existing sets. For each new set the algorithm incurs build cost equal to the weight of the items in the set. In the first problem the objective is to minimize total build cost plus total query cost, where the algorithm incurs a query cost at each time t equal to the current cover size. In the second problem, the objective is to minimize the build cost while keeping the query cost from exceeding k (a given parameter) at any time. We give deterministic online algorithms for both variants, with competitive ratios of Θ(log∗n) and k, respectively. The latter ratio is optimal for the second variant.

Algorithmes et complexité
Wednesday, October 28, 2020, 11am, Room 3052 & Online
Simon Mauras (IRIF) Assessment of Covid-19 containment strategies in primary schools and workplaces using contact graphs

I will explain how modelling the epidemic using graphs could help in reopening strategies. This work has been done in collaboration with Guillaume Duboc, Max Dupré la Tour, Paolo Frasca, Claire Mathieu, Lulla Opatowski and Laurent Viennot.

Algorithmes et complexité
Wednesday, October 21, 2020, 11am, Room 3052 & Online
Geoffroy Couteau (IRIF) Contact Tracing

Presentation of the working group on privacy in contact tracing organized during the COVID-19 pandemic.

Algorithmes et complexité
Wednesday, October 7, 2020, 11am, Room 3052
Adrian Vladu (IRIF) Circulation Control for Faster Minimum Cost Flow in Unit-Capacity Graphs

We present an m^{4/3+o(1)} log W -time algorithm for solving the minimum cost flow problem in graphs with unit capacity, where W is the maximum absolute value of any edge weight. For sparse graphs, this improves over the best known running time for this problem and, by well-known reductions, also implies improved running times for the shortest path problem with negative weights, minimum cost bipartite b-matching when |b|_1 = O(m), and recovers the running time of the currently fastest algorithm for maximum flow in graphs with unit capacities, due to Liu and Sidford (FOCS, 2020).

Our algorithm relies on developing an interior point method–based framework which acts on the space of circulations in the underlying graph. From the combinatorial point of view, this framework can be viewed as iteratively improving the cost of a suboptimal solution by pushing flow around some carefully chosen circulations.

This talk will be self-contained and will assume no prior background in optimization. Based on https://arxiv.org/abs/2003.04863, appears in FOCS 2020.

Algorithmes et complexité
Thursday, October 1, 2020, 2pm, Room 3052
Vera Traub (University of Bonn) An improved approximation algorithm for ATSP

In a recent breakthrough, Svensson, Tarnawski, and Végh gave the first constant-factor approximation algorithm for the asymmetric traveling salesman problem (ATSP). In this work we revisit their algorithm. While following their overall framework, we improve on each part of it.

Svensson, Tarnawski, and Végh perform several steps of reducing ATSP to more and more structured instances. We avoid one of their reduction steps (to irreducible instances) and thus obtain a simpler and much better reduction to vertebrate pairs. Moreover, we show that a slight variant of their algorithm for vertebrate pairs has a much smaller approximation ratio.

Overall we improve the approximation ratio from 506 to 22 + ε for any ε > 0. We also improve the upper bound on the integrality ratio of the standard LP relaxation from 319 to 22.

This is joint work with Jens Vygen.

Live webcast of IGAFIT Algorithmic Colloquium

Algorithmes et complexité
Vendredi 25 septembre 2020, 11 heures, Salle 3052
Makrand Sinha (CWI) k-Forrelation Optimally Separates Quantum and Classical Query Complexity

Live webcast of CWI/QuSoft seminar

Algorithmes et complexité
Vendredi 18 septembre 2020, 11 heures, Salle 3052
Arjan Cornelissen (QuSoft, U of Amsterdam) A self-contained, simplified analysis of span programs algorithms

Over the years, many query-efficient quantum algorithms have been constructed through the span program framework. Despite its successes, the analysis behind this framework is complicated, and its full potential is not yet well understood. In our current research, we attempt to clean up the framework and simplify its analysis, so as to better expose the mathematical structure and more clearly understand how it is exploited. In this talk, we showcase the simplified analysis and share some of the new insights we have gained along the way. No prior knowledge of span programs is required.

Live webcast of CWI/QuSoft seminar

Algorithmes et complexité
Mardi 30 juin 2020, 19 heures, Online
Amos Korman (IRIF) SIROCCO Prize for Innovation in Distributed Computing

Live broadcasting of the 2020 SIROCCO Prize for Innovation in Distributed Computing awarded to Amos Korman. Amos will give a 1h talk on his research regarding ants and mathematics. You need to register here in order to receive the zoom link: https://sirocco2020.cs.uni-paderborn.de/registration.html.

Algorithmes et complexité
Mardi 9 juin 2020, 14 heures 30, Online
Francesco D'amore (INRIA) On the Search Efficiency of Parallel Lévy Walks in ${\mathbb Z}^2$

Motivated by the Lévy flight foraging hypothesis – the premise that the movement of various animal species searching for food resembles a Lévy walk – we study the hitting time of parallel Lévy walks on the infinite 2-dimensional grid. Lévy walks are characterized by a parameter $\alpha \in(1,+\infty)$, that is the exponent of the power law distribution of the time intervals at which the moving agent randomly changes direction. In the setting we consider, called the ANTS problem (Feinerman et al. PODC 2012), $k$ independent discrete-time Lévy walks start simultaneously at the origin, and we are interested in the time $h_{k,\ell}$ before some walk visits a given target node on the grid, at distance $\ell$ from the origin.

In this setting, we provide a comprehensive analysis of the efficiency of Lévy walks for the complete range of the exponent $\alpha$. For any choice of $\alpha$, we observe that the total work until the target is visited, i.e., the product $k \cdot h_{k,\ell}$, is at least $\Omega(\ell^2)$ with constant probability. Our main result is that the right choice for $\alpha$ to get optimal work varies between $1$ and $3$, as a function of the number $k$ of available agents. For instance, when $k = \tilde \Theta(\ell^{1-\epsilon})$, for some positive constant $\epsilon < 1$, then the unique optimal setting for $\alpha$ lies in the super-diffusive regime $(2,3)$, namely, $\alpha = 2+\epsilon$. Our results should be contrasted with various previous works in the continuous time-space setting showing that the exponent $\alpha = 2$ is optimal for a wide range of related search problems on the plane.

The paper is available online at https://hal.archives-ouvertes.fr/hal-02530253
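As a toy illustration of the setting (not of the analysis in the paper), the following sketch simulates k independent discrete Lévy walks on the grid and checks whether any of them visits a target at distance ell from the origin; the specific run-length sampler and all parameter choices are illustrative assumptions.

```python
import random

def levy_walk_hits(alpha, ell, k, max_steps, seed=0):
    """Simulate k independent discrete Levy walks on Z^2, all starting at
    the origin, and report whether any walk visits the target (ell, 0)
    within max_steps steps.  A walker repeatedly picks a uniformly random
    axis direction and a power-law run length T (heavy tail with exponent
    alpha), then moves one grid step per time unit for T steps."""
    rng = random.Random(seed)
    target = (ell, 0)
    for _ in range(k):
        x, y = 0, 0
        steps = 0
        while steps < max_steps:
            # run length with a Pareto tail of exponent alpha - 1,
            # mimicking the power-law time intervals between turns
            t = int(rng.paretovariate(alpha - 1)) + 1
            dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
            for _ in range(min(t, max_steps - steps)):
                x, y = x + dx, y + dy
                steps += 1
                if (x, y) == target:
                    return True
    return False
```

Running many walkers against a nearby target almost always succeeds, while a far target with a tiny step budget cannot be reached, which matches the work lower bound intuition of the abstract.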

Algorithmes et complexité
Mardi 2 juin 2020, 11 heures, Online
Weiqiang Wen (IRISA) Learning With Errors and Extrapolated Dihedral Cosets

The hardness of the learning with errors (LWE) problem is one of the most fruitful resources of modern cryptography. In particular, it is one of the most prominent candidates for secure post-quantum cryptography. Understanding its quantum complexity is therefore an important goal. We show that under quantum polynomial time reductions, LWE is equivalent to a relaxed version of the dihedral coset problem (DCP), which we call extrapolated DCP (eDCP). The extent of extrapolation varies with the LWE noise rate. By considering different extents of extrapolation, our result generalizes Regev's famous proof that if DCP is in BQP (quantum poly-time) then so is LWE (FOCS'02). We also discuss a connection between eDCP and Childs and Van Dam's algorithm for generalized hidden shift problems (SODA'07). Our result implies that a BQP solution for LWE might not require the full power of solving DCP, but rather only a solution for its relaxed version, eDCP, which could be easier.

Algorithmes et complexité
Mardi 26 mai 2020, 11 heures, Online
Édouard Bonnet (ENS Lyon) Twin-width

Inspired by an invariant defined on permutations by Guillemot and Marx [SODA '14], we introduce the notion of twin-width on graphs and on matrices. Proper minor-closed classes, bounded rank-width graphs, $K_t$-free unit ball graphs, posets with antichains of bounded size, and proper subclasses of permutation graphs all have bounded twin-width. On all these classes we will see how to compute in polynomial time a sequence of d-contractions, witnessing that the twin-width is at most d. FO model checking, that is deciding if a given first-order formula $\phi$ evaluates to true on a given binary structure G on a domain D, happens to be FPT in $|\phi|$ on classes of bounded twin-width, provided the witness is given. More precisely, being given a d-contraction sequence for G, our algorithm runs in time $f(d,|\phi|) |D|$ where f is a computable but non-elementary function. We will also see that bounded twin-width is preserved by FO interpretations and transductions. This unifies and significantly extends the knowledge on fixed-parameter tractability of FO model checking on non-monotone classes, such as the FPT algorithm on bounded-width posets by Gajarsky et al. [FOCS '15]. Finally we mention a curious connection between bounded twin-width and small classes.

Joint work with Colin Geniet, Eun Jung Kim, Stéphan Thomassé, and Rémi Watrigant
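To make the notion of a contraction sequence concrete, here is a small greedy sketch (a heuristic for illustration, not an algorithm from the paper): it repeatedly contracts the pair of vertices whose merge creates the fewest red edges, and the maximum red degree observed along the sequence certifies an upper bound on the twin-width of the input graph.

```python
from itertools import combinations

def greedy_contraction_witness(edges, n):
    """Greedily build a contraction sequence and return the maximum red
    degree it witnesses (an upper bound on the twin-width certified by
    this particular d-contraction sequence).  Vertices are 0..n-1; black
    edges come from `edges`; red edges record disagreements created by
    contractions."""
    black = {v: set() for v in range(n)}
    red = {v: set() for v in range(n)}
    for u, v in edges:
        black[u].add(v)
        black[v].add(u)
    witness = 0
    while len(black) > 1:
        best = None
        for u, v in combinations(list(black), 2):
            # neighbors on which u and v agree (black on both sides) stay
            # black; every other incident edge of the merged vertex is red
            agree = (black[u] - {v}) & (black[v] - {u})
            new_red = (((black[u] | black[v]) - agree) | red[u] | red[v]) - {u, v}
            if best is None or len(new_red) < len(best[3]):
                best = (u, v, agree, new_red)
        u, v, agree, new_red = best
        for w in black[u] | black[v] | red[u] | red[v]:
            if w in (u, v):
                continue
            black[w].discard(u); black[w].discard(v)
            red[w].discard(u); red[w].discard(v)
        for w in agree:
            black[w].add(u)
        for w in new_red:
            red[w].add(u)
        black[u], red[u] = agree, set(new_red)
        del black[v], red[v]
        witness = max(witness, max(len(red[w]) for w in red))
    return witness
```

On a triangle (whose vertices are pairwise twins) the sequence stays entirely black, witnessing twin-width 0, while on a path P4 one red edge appears.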

Algorithmes et complexité
Mardi 19 mai 2020, 11 heures, Online

Clustering problems are fundamental computational problems. On the one hand they are at the heart of various data analysis, bioinformatics or machine learning approaches, on the other hand, they can be seen as variants of 'set cover' problems (involving e.g. metrics, graphs with a special structure, etc.) and so are very basic problems.

For many applications, finding efficient fixed-parameter algorithms is a very natural approach. Popular choices of parameters are either parameters of the problem itself (e.g. the number of clusters) or on the underlying structure of the input (e.g. the dimension in the metric case). We will focus on classic metric clustering objectives such as k-median and k-means and review recent results on the fixed parameter complexity of these problems, and the most important open problems.

Algorithmes et complexité
Mardi 21 avril 2020, 11 heures, Online
Yihui Quek (Stanford) Robust Quantum Minimum Finding with an Application to Hypothesis Selection

Finding the minimum in a well-ordered list of length N ('minimum finding') is a critical subroutine in fields ranging from optimisation to finance. Following the 1996 quantum algorithm of Dürr and Høyer exhibiting a quadratic speedup for this problem, we ask if this speedup is preserved when the pairwise comparator between the elements is imprecise, or noisy, in the following sense: given two elements to compare, if their values differ by at least α, then the comparison will be made correctly; if their values are closer than α, the outcome of the comparison is not subject to any guarantees. We answer this question in the affirmative, demonstrating a quantum algorithm for minimum finding that is robust to such noise and preserves the quadratic speedup (to highest order) of the noiseless case, while outputting a good approximation to the minimum with high probability. We demonstrate an application of this algorithm to the problem of hypothesis selection with N hypotheses, where the role of the noisy comparator is played by a statistical primitive known as the Scheffé test, which compares two hypotheses' l1 distances to an unknown distribution.

Algorithmes et complexité
Vendredi 10 avril 2020, 15 heures, Online
André Schrottenloher, Yixin Shen (INRIA, IRIF) Improved Classical and Quantum Algorithms for Subset-Sum

We present new classical and quantum algorithms for solving random hard instances of the subset-sum problem, in which we are given n integers on n bits and try to find a subset of them that sums to a given target. This classical NP-complete problem has several applications in cryptography and underlies the security of some proposed post-quantum cryptosystems. At EUROCRYPT 2010, Howgrave-Graham and Joux (HGJ) introduced the representation technique and presented an algorithm running in time $\tilde{O}(2^{0.337 n})$. This asymptotic time was improved by Becker, Coron, Joux (BCJ) at EUROCRYPT 2011. We show how to improve this further. We then move to the context of quantum algorithms. The two previous quantum speedups in the literature are given by Bernstein, Jeffery, Lange and Meurer (PQCRYPTO 2013) and Helm and May (TQC 2018), which are respectively quantum versions of HGJ and BCJ. They both rely on the framework of quantum walks, use exponential quantum memory with quantum random-access and require an unproven conjecture on quantum walk updates. We devise a new algorithm, using quantum search only, that achieves the first quantum speedup in the model of classical memory with quantum random access. Next, we study improvements for the quantum walks. We show how to avoid the quantum walk conjecture and give a better quantum walk time complexity for subset-sum.

Joint work with Xavier Bonnetain and Rémi Bricout.
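For context, the classical baseline that the representation technique of HGJ and BCJ improves on is the ~O(2^{n/2})-time meet-in-the-middle algorithm of Horowitz and Sahni. A minimal sketch:

```python
from itertools import combinations

def subset_sum_mitm(nums, target):
    """Meet-in-the-middle subset-sum solver running in time and memory
    about 2^{n/2}: enumerate all subset sums of each half of the input,
    store the right half in a hash table, and look up the complement of
    each left sum.  Returns one solution as a set of indices, or None."""
    n = len(nums)
    left, right = nums[:n // 2], nums[n // 2:]

    def subset_sums(arr, offset):
        # map each achievable sum to one witnessing index set
        sums = {}
        for r in range(len(arr) + 1):
            for idx in combinations(range(len(arr)), r):
                s = sum(arr[i] for i in idx)
                sums.setdefault(s, {i + offset for i in idx})
        return sums

    left_sums = subset_sums(left, 0)
    right_sums = subset_sums(right, n // 2)
    for s, idx in left_sums.items():
        rest = right_sums.get(target - s)
        if rest is not None:
            return idx | rest
    return None
```

The representation technique sketched in the abstract beats this 2^{n/2} barrier by allowing the same solution to be written in many redundant ways and keeping only a filtered fraction of them.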

Algorithmes et complexité
Mardi 31 mars 2020, 14 heures 30, Online

I will present a sorting algorithm called adaptive ShiversSort. This recently developed algorithm sorts arrays faster by exploiting already sorted subarrays (runs) present in the input. I will focus on the links between this algorithm and TimSort, which is a standard sorting algorithm in Python and Java. I will also show that the complexity of adaptive ShiversSort, in terms of comparisons performed, is optimal up to an additive linear term.
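To illustrate run adaptivity, here is a generic natural merge sort: it performs the same run decomposition that TimSort and adaptive ShiversSort start from, but merges runs with plain pairwise passes rather than their stack-based merge policies.

```python
def natural_merge_sort(a):
    """Run-adaptive sorting sketch: split the input into maximal
    nondecreasing runs, then merge the runs pairwise until one remains.
    On an already sorted input there is a single run and no merge at
    all, which is the effect adaptive ShiversSort exploits."""
    if not a:
        return []
    runs, start = [], 0
    for i in range(1, len(a)):
        if a[i] < a[i - 1]:        # run boundary: order breaks here
            runs.append(a[start:i])
            start = i
    runs.append(a[start:])

    def merge(x, y):
        out, i, j = [], 0, 0
        while i < len(x) and j < len(y):
            if x[i] <= y[j]:
                out.append(x[i]); i += 1
            else:
                out.append(y[j]); j += 1
        return out + x[i:] + y[j:]

    while len(runs) > 1:
        runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0]
```

The comparison-count analysis in the talk is about choosing *which* runs to merge; the decomposition step above is common to the whole algorithm family.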

Algorithmes et complexité
Jeudi 20 février 2020, 10 heures, Salle 3052
Alantha Newman (Université Grenoble Alpes) Approximation algorithms based on LP and SDP rounding (Lecture 4/4)

Applications: Max-k-cut, graph coloring, theta function and perfect graphs, unique games.

Algorithmes et complexité
Mardi 18 février 2020, 10 heures, Salle 3052
Alantha Newman (Université Grenoble Alpes) Approximation algorithms based on LP and SDP rounding (Lecture 3/4)

SDP Rounding (basic concepts such as problem formulations, normal distribution, random hyperplane rounding).

Algorithmes et complexité
Vendredi 14 février 2020, 10 heures, Salle 3052
Alantha Newman (Université Grenoble Alpes) Approximation algorithms based on LP and SDP rounding (Lecture 2/4)

Applications: Dominating sets in digraphs, kernels, semi-kernels, arc colored digraphs, asymmetric k-center, triangle transversals.

Algorithmes et complexité
Jeudi 13 février 2020, 10 heures, Salle 3052
Alantha Newman (Université Grenoble Alpes) Approximation algorithms based on LP and SDP rounding (Lecture 1/4)

LP Randomized Rounding (basic concepts such as extreme points, duality, complementary slackness).

Algorithmes et complexité
Mardi 11 février 2020, 11 heures, Salle 1007
Simon Apers (CWI) Quantum Speedup for Graph Sparsification, Cut Approximation and Laplacian Solving

Graph sparsification is an important component of a large number of algorithms, ranging from approximation algorithms for cut problems to Laplacian system solvers. In its strongest form, “spectral sparsification” reduces the number of edges to near-linear in the number of nodes, while approximately preserving the cut and spectral structure of the graph. The breakthrough work by Benczúr and Karger (STOC'96) and Spielman and Teng (STOC'04) showed that sparsification can be done optimally in time near-linear in the number of edges of the original graph.

In this work we show that quantum algorithms allow us to speed up spectral sparsification, and thereby many of the derived algorithms. To this end we build on a string of existing results, most notably sparsification algorithms by Spielman and Srivastava (STOC'08) and Koutis and Xu (TOPC'16), a spanner construction by Thorup and Zwick (STOC'01), a single-source shortest-paths quantum algorithm by Dürr et al. (ICALP'04) and an efficient k-wise independent hash construction by Christiani, Pagh and Thorup (STOC'15). Combining our sparsification algorithm with existing classical algorithms yields the first quantum speedup for approximating the max cut, min cut, min st-cut, sparsest cut and balanced separator of a graph. Combining our algorithm with a classical Laplacian solver, we demonstrate a similar speedup for Laplacian solving, for approximating effective resistances, cover times and eigenvalues of the Laplacian, and for spectral clustering.

This is joint work with Ronald de Wolf.

Algorithmes et complexité
Mardi 4 février 2020, 11 heures, Salle 1007
Jacques Dark (University of Warwick) Two-Party Lower Bounds for Graph Streaming Problems

In the turnstile graph streaming model we receive a sequence of edge insertions and deletions and we wish to store some small summary which can answer a question about the final graph. A few years ago [Yi, Nguyen, Woodruff 2014] and [Ai, Hu, Li, Woodruff 2016] explained why all known turnstile streaming algorithms could be described as “linear sketches” (random linear transformations) of the underlying data: because any 1-pass turnstile streaming algorithm which works for all possible input streams can be simulated by a linear sketch with only logarithmic space overhead.

This is a very powerful tool which immediately transformed every sketching lower bound into a turnstile streaming lower bound - such as the tight bound for approximate maximum matching of [Assadi, Khanna, Li, Yaroslavtsev 2016]. However, there are many interesting input constraints we can consider which are not compatible with the sketching reduction. What happens if we enforce that the input graph must always be simple (contains no repeat edges)? What happens when we bound the number of deletions?

In this talk, I will present work towards answering these questions. We introduce a new elementary communication problem - finding a hidden bit in a partially revealed matrix - and show how tight two-party communication bounds for the maximum matching and minimum vertex cover problems can be achieved by surprisingly simple reductions from this problem. In particular, this gives us tight turnstile streaming bounds even for always-simple inputs or streams of bounded length n^2. I will conclude with a conjecture for the bounded-deletions case and a discussion of the remaining challenges.

Algorithmes et complexité
Mardi 28 janvier 2020, 11 heures, Salle 1007
Spyros Angelopoulos (LIP6) Online Computation with Untrusted Advice

Joint work with Christoph Dürr, Shendan Jin, Shahin Kamali and Marc Renault.

Algorithmes et complexité
Jeudi 23 janvier 2020, 11 heures, Salle 1007
Moni Naor (Weizmann Institute) Instance Complexity and Unlabeled Certificates in the Decision Tree Model

Suppose that you want to check whether a graph satisfies some (graph) property. The goal is to minimize the number of queries to the adjacency matrix. You are given as an “untrusted hint” an isomorphic copy of the graph. You must always be correct, but the number of queries made is only measured when the hint is correct. Do such unlabeled certificates help? In the worst case? Per instance?

In this talk I will discuss labeled and unlabeled certificates, in particular in the context of “instance optimality”. This is a measure of the quality of an algorithm whereby its performance is compared to that of other algorithms on each individual input. This is in sharp contrast to worst-case and average-case complexity measures, where the performance is compared either on the worst input or on an average one, respectively.

Joint work with Tomer Grossman and Ilan Komargodski

Algorithmes et complexité
Mardi 21 janvier 2020, 11 heures, Salle 1007
Tatiana Starikovskaya (ENS) Sliding window property testing for regular languages

We discuss the problem of recognizing regular languages in a variant of the streaming model of computation, called the sliding window model. In this model, we are given the size n of the sliding window and a stream of symbols. At each time instant, we must decide whether the suffix of length n of the current stream (the “window”) belongs to a given regular language.

In this work, we make an important step forward and combine the sliding window model with the property testing setting, which results in ultra-efficient algorithms for all regular languages. Informally, a sliding window property tester must accept the active window if it belongs to the language and reject it if it is far from the language. We consider deterministic and randomized sliding window property testers with one-sided and two-sided errors. In particular, we show that for any regular language, there is a deterministic sliding window property tester that uses logarithmic space and a randomized sliding window property tester with two-sided error that uses constant space.

The talk is based on a joint work with Ganardi, Hucke, and Lohrey published in ISAAC 2019.
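For intuition, here is the naive O(n)-space sliding-window recognizer that such property testers improve on: it buffers the last n symbols and reruns the DFA on the whole window after each arrival. The DFA encoding below is an illustrative assumption.

```python
from collections import deque

def sliding_window_membership(stream, n, delta, q0, accepting):
    """Naive sliding-window recognizer: keep the last n symbols and run
    the DFA (transition table delta, start state q0, accepting states)
    over the window after every arriving symbol.  Yields one boolean per
    time step.  Space is O(n); the testers in the talk use O(log n) or
    even O(1) space at the price of only approximate answers."""
    window = deque(maxlen=n)
    for c in stream:
        window.append(c)
        q = q0
        for s in window:
            q = delta[q][s]
        yield q in accepting
```

For example, with the two-state DFA over {a, b} accepting words with an even number of a's, the window of size 2 over the stream "abaa" is accepted only at the last step.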

Algorithmes et complexité
Mardi 14 janvier 2020, 11 heures, Salle 1007
Olivier Tercieux (Paris School of Economics) Unpaired Kidney Exchange: Overcoming Double Coincidence of Wants without Money

In this talk, I'll present the basics of the kidney exchange problem and a brief overview of the different practices in the world. I'll further describe a new matching algorithm—Unpaired kidney exchange—and study its properties in a dynamic matching model with heterogeneous agents. We will show that the average waiting time under the Unpaired algorithm is close to optimal, and substantially less than the standard – and commonly used – pairwise and chain exchange algorithms. Using a rich dataset of the kidney patients in France, counterfactual simulations show that the Unpaired algorithm can match nearly 57% of the patients, with an average waiting time of 424 days (state-of-the-art algorithms match about 31% with an average waiting time of 675 days or more). The optimal algorithm performs only slightly better: it matches 58% of the patients and leads to an average waiting time of 410 days. The Unpaired algorithm confronts two incentive-related practical challenges. We address those challenges via a practical version of the Unpaired algorithm that employs kidneys from the deceased donors waiting list. The practical version can match nearly 87% of patient-donor pairs, while reducing the average waiting time to about 141 days. Finally, I'll conclude by discussing the relationship between this algorithm and the current reform of the French kidney exchange program.

Most of the talk is based on https://drive.google.com/file/d/1ZxPC3sc9HvrlG7JUtqCdyZ_BSHttkGhN/view. I'll also mention https://www.ipp.eu/publication/juin-2019-perspectives-sur-le-programme-de-dons-croises-de-reins-en-france/.

#### Année 2019

Algorithmes et complexité
Mardi 17 décembre 2019, 11 heures, Salle 1007
Clément Canonne (IBM Research) Uniformity Testing for High-Dimensional Distributions: Subcube Conditioning, Random Restrictions, and Mean Testing

Given a distribution p on {-1,1}^d, we want to test whether p is uniform. If p is assumed to be a product distribution, this can be done with Theta(sqrt{d}) samples; without this assumption, things get exponentially worse and Theta(sqrt{2^d}) samples are necessary and sufficient. Assume however that we can condition on arbitrary bits: does the problem become easier? If so, by how much?

This is the “subcube conditional sampling model”, first considered in Bhattacharyya and Chakraborty (2018). We provide a nearly optimal ~O(sqrt{d})-algorithm for testing uniformity in this model. The key component of our proof is a natural notion of random restriction for distributions on {-1,1}^d, and a quantitative analysis of how such a restriction affects the mean vector of the distribution. Along the way, we provide a /very/ nearly tight upper bound on the (standard) sample complexity of a natural problem, “mean testing.”

Joint work with Xi Chen, Gautam Kamath, Amit Levi, and Erik Waingarten.
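A toy version of the mean-testing idea: for independent samples x, y from p, the expected inner product <x, y> equals the squared norm of the mean vector, which is 0 under the uniform distribution. The sampling helper and the thresholds used below are illustrative assumptions, not the tester from the paper.

```python
import random

def mean_test_statistic(samples):
    """Mean-testing sketch on {-1,1}^d: pair up the samples and average
    the inner products <x, y>.  For independent x, y this estimates
    ||mu||^2, where mu is the mean vector of the distribution; the
    uniform distribution has mu = 0, so the statistic concentrates
    around 0 for it and around ||mu||^2 for a biased distribution."""
    stat, pairs = 0.0, len(samples) // 2
    for k in range(pairs):
        x, y = samples[2 * k], samples[2 * k + 1]
        stat += sum(xi * yi for xi, yi in zip(x, y))
    return stat / pairs

def sample_product(bias, m, rng):
    """Draw m samples from the product distribution whose i-th coordinate
    is +1 with probability (1 + bias[i]) / 2."""
    return [[1 if rng.random() < (1 + b) / 2 else -1 for b in bias]
            for _ in range(m)]
```

With d = 20 and bias 0.5 in every coordinate, ||mu||^2 = 5, and the statistic cleanly separates the biased distribution from the uniform one.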

Algorithmes et complexité
Lundi 16 décembre 2019, 12 heures 30, Amphi Turing
M. Teillaud, T. Starikovskaya, M. Bonamy, C. Mathieu Autour des algorithmes

This half-day of talks is in celebration of Algorithms, the research domain of Claire Mathieu, 2019 recipient of a CNRS Silver Medal. The talks, aimed at a non-specialized audience, will present a sample of research on Algorithms in France in a few selected areas: Computational Geometry (Monique Teillaud), Strings (Tatiana Starikovskaya), Graphs and Complexity (Marthe Bonamy), and Social Choice (Claire Mathieu). The afternoon will conclude with a discussion of new research directions in Algorithms. The participants in the roundtable will give their viewpoint on problems in the design and analysis of algorithms arising from their respective research domains. We may hear about statistical machine learning, clustering, and social networks (Anne Boyer), data journalism, detection of fake news, and managing data in the Cloud (Ioana Manolescu), and economics and society (Camille Terrier).

Programme :

12h30-13h15 : Accueil et café

13h15-13h30 : Ouverture

13h30-14h15 : Exposé de Monique Teillaud, Autour des triangulations de Delaunay

14h15-15h : Exposé de Tatiana Starikovskaya, Streaming algorithms for string processing

15h-15h45 : Exposé de Marthe Bonamy, Complexity of graph coloring

15h45-16h15 : goûter

16h15-17h : Exposé de Claire Mathieu, Rank Aggregation

17h-18h : Table ronde “Prospectives de l'algorithmique en France” animée par Claire Mathieu. Participantes : Anne Boyer, Ioana Manolescu et Camille Terrier.

Algorithmes et complexité
Mardi 3 décembre 2019, 11 heures, Salle 1007

Today’s massive and dynamic data sets have motivated many computer scientists and mathematicians to study classical problems in combinatorics and graph theory in various settings. In certain settings the algorithms’ access to the data is restricted while in others the resources are limited. In this talk we explore such settings in three directions: ranking of objects, property testing in social networks and finding communities in dynamic graphs.

In the first part, we discuss algorithms on permutations as well as prefixes of permutations, also known as top-k lists. The study of the latter arises particularly in big data scenarios, when analysis of the entire permutation is costly or not interesting. We start by discussing random walks on the set of full rankings, or permutations, of n objects, a set whose size is n!. From the 1990s to today, various versions of this problem have been studied, each with a different motivation.

The second part of the talk is devoted to property testing in social networks: given a graph, an algorithm seeks to approximate several of its parameters while accessing it only through oracles, querying them as few times as possible. We introduce a new oracle access model which is applicable to social networks, and assess the complexity of estimating parameters such as the number of users (vertices) in this model.

In the third part of the talk, we introduce a new dynamic graph model which is based on triadic closure: a friendship is more likely to be formed between a pair of users with a larger number of mutual friends. We find upper bounds for the rate of graph evolution in this model and using such bounds we devise algorithms discovering communities. I will finish this talk by proposing new directions and presenting related open problems.
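A minimal sketch of a triadic-closure dynamic; the functional form of the closure probability below is an assumption for illustration, not the model of the talk.

```python
import random

def triadic_closure_step(adj, rng, base=0.01, boost=0.2):
    """One step of a toy triadic-closure dynamic on a graph given as a
    list of neighbor sets: each non-edge (u, v) appears with probability
    increasing in the number of common neighbors, here
    min(1, base + boost * |N(u) & N(v)|) (an assumed form).  Returns the
    number of edges added."""
    n = len(adj)
    new = []
    for u in range(n):
        for v in range(u + 1, n):
            if v in adj[u]:
                continue
            mutual = len(adj[u] & adj[v])
            if rng.random() < min(1.0, base + boost * mutual):
                new.append((u, v))
    for u, v in new:
        adj[u].add(v)
        adj[v].add(u)
    return len(new)
```

Iterating this step and measuring the number of edges added per round gives the kind of graph-evolution rate that the upper bounds in the talk control.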

Algorithmes et complexité
Mercredi 27 novembre 2019, 14 heures, Salle 3052
Adrian Vladu (Boston University) Submodular Maximization with Matroid and Packing Constraints in Parallel

We study the adaptive complexity of maximizing the multilinear extension of a submodular function, subject to a matroid constraint, or multiple packing constraints. The goal is to match the best approximation guarantees known in the classical sequential setting while performing only few adaptive rounds, each of which consists of a polynomially bounded number of queries to an evaluation oracle.

Our results are as follows:

* For a matroid constraint, in O(log^2 n/ε^3) adaptive rounds, we obtain a 1−1/e−ε approximation for monotone functions, and a 1/e−ε approximation for non-monotone functions. This offers an exponential speedup over the previously existing algorithms.

* For m packing constraints, up to polylogarithmic factors in n, m and 1/ε, we require Õ(1/ε^2) adaptive rounds to obtain a 1/e−ε approximation for non-monotone functions, and 1-1/e-ε approximation for monotone functions. This matches the iteration complexity of the fastest currently known parallel algorithm for solving packing linear programs [Young ’01, Mahoney et al, ’16].

Our results apply more generally to the problem of maximizing a diminishing returns submodular (DR-submodular) function.

Algorithmes et complexité
Mardi 15 octobre 2019, 11 heures, Salle 1007
Zvi Lotker (Ben Gurion University) Variation on preferential-attachment

In this talk, I will describe how preferential attachment arises from first principles using game theory. Next, I will extend the model of preferential attachment into a general model, which allows for the incorporation of homophily ties in the network. This talk is based on joint works with Prof. Chen Avin, Avi Cohen, Yinon Nahum, Prof. Pierre Fraigniaud, and Prof. David Peleg.

Algorithmes et complexité
Mardi 10 septembre 2019, 11 heures, Salle 1007
David Saulpic (ENS) Fast Approximation Schemes for Clustering in Doubling Metrics

We consider classical clustering problems such as k-Median or k-Means in metric spaces that generalize Euclidean space, namely doubling metrics. We give a surprisingly simple framework that yields nearly linear-time approximation schemes for each problem, making a significant improvement over the state-of-the-art algorithms, which run in time $n^{(d/\varepsilon)^{O(d)}}$ for metrics of doubling dimension d. Moreover, we show how to extend the techniques used to get the first efficient approximation schemes for the problems of prize-collecting $k$-Medians and $k$-Means, and efficient bicriteria approximation schemes for k-Medians with outliers, k-Means with outliers and k-Center.

Algorithmes et complexité
Mardi 2 juillet 2019, 11 heures, Salle 1007
Andrei Romashchenko (LIRMM) An Operational Characterization of Mutual Information in Algorithmic Information Theory

We show that for every pair of strings (x,y) the mutual information between x and y in the sense of Kolmogorov complexity is equal (within logarithmic precision) to the length of the longest shared secret key that two parties, one having x and the other one having y, can establish via a probabilistic protocol with interaction on a public channel. We extend this result to the setting of L>2 parties holding mutually correlated strings x_1 … x_L. [Joint work with Marius Zimand]
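The algorithmic mutual information in this statement is the standard Kolmogorov-complexity quantity, stated here up to the usual logarithmic additive precision:

$$ I(x:y) \;=\; K(x) + K(y) - K(x,y), $$

so the theorem identifies, up to logarithmic terms, this quantity with the optimal length of the secret key the two parties can establish over the public channel.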

Algorithmes et complexité
Lundi 24 juin 2019, 11 heures, Salle 3052
Carola Doerr (CNRS, LIP6 Sorbonne University) Evolutionary Algorithms – From Theory to Practice and Back

Most real-world optimization problems do not have an explicit problem formulation, but only allow to compute the quality of selected solution candidates. Solving such black-box optimization problems requires iterative procedures which use the feedback gained from previous evaluations to determine the strategy by which the next solution candidates are generated. Many black-box optimization algorithms, such as Simulated Annealing, Evolutionary Algorithms, Swarm Intelligence Algorithms, are randomized – making it very difficult to analyze their performances mathematically.

In the last 15 years, the theory of randomized black-box optimization has advanced considerably, and has contributed to efficient optimization by providing insights into the working principles of black-box optimization which are hard or impossible to obtain by empirical means. On the other hand, empirically-guided benchmarking has opened up new research directions for theoretical investigations.

In this presentation we will discuss the state of the art in the theory of randomized black-box optimization algorithms. As part of this critical survey we will also mention a number of open questions and connections to other fields of Computer Science.

Algorithmes et complexité
Jeudi 6 juin 2019, 14 heures, Salle 3052
Baruch Schieber (NJIT) Constrained Submodular Maximization via Greedy Local Search

We present a simple combinatorial 1/2(1-1/e^2)-approximation algorithm for maximizing a monotone submodular function subject to a knapsack and a matroid constraint. This classic problem is known to be hard to approximate within factor better than 1-1/e. We extend the algorithm to yield 1/(k+1)(1-1/e^(k+1)) approximation for submodular maximization subject to a single knapsack and k matroid constraints, for any fixed k > 1. Our algorithms, which combine the greedy algorithm of [Khuller, Moss and Naor, 1999] and [Sviridenko, 2004] with local search, show the power of this natural framework in submodular maximization with combined constraints.

This is joint work with Kanthi Sarpatwar and Hadas Shachnai
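As a concrete instance of the greedy-plus-fix-up idea for a single knapsack constraint (specialized here to coverage functions, a standard example of a monotone submodular function; a sketch in the spirit of Khuller, Moss and Naor, not the combined algorithm of the talk):

```python
def greedy_coverage_knapsack(sets, costs, budget):
    """Cost-benefit greedy for maximum coverage under a knapsack
    constraint: repeatedly take the affordable set maximizing marginal
    coverage per unit cost, then return the better of the greedy
    solution and the best single affordable set.  The second candidate
    is the classical fix-up: one large expensive set can beat the whole
    cost-benefit greedy run."""
    chosen, covered, spent = [], set(), 0
    while True:
        best = None
        for i, s in enumerate(sets):
            if i in chosen or spent + costs[i] > budget:
                continue
            gain = len(s - covered)          # marginal coverage of set i
            if gain > 0 and (best is None or gain / costs[i] > best[0]):
                best = (gain / costs[i], i)
        if best is None:
            break
        i = best[1]
        chosen.append(i)
        covered |= sets[i]
        spent += costs[i]
    singles = [(len(sets[i]), [i]) for i in range(len(sets))
               if costs[i] <= budget]
    best_single = max(singles, default=(0, []))
    if best_single[0] > len(covered):
        return best_single[1], best_single[0]
    return chosen, len(covered)
```

On an instance with one big expensive set and several cheap small ones, the fix-up is exactly what rescues the approximation guarantee.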

Algorithmes et complexité
Mardi 28 mai 2019, 11 heures, Salle 1007
Simon Apers (INRIA) Quantum Fast-Forwarding: Markov Chains and Graph Property Testing

I will introduce a new tool for quantum algorithms called quantum fast-forwarding (QFF). The tool uses quantum walks as a means to quadratically fast-forward a reversible Markov chain. In a number of quantum walk steps that scales as the square root of the classical runtime, and inversely proportional to the 2-norm of the classical distribution, it retrieves a quantum superposition that encodes the classical distribution. This shows that quantum walks can accelerate the transient dynamics of Markov chains, thereby complementing the line of results that proves the acceleration of their limit behavior.

This tool leads to speedups on random walk algorithms in a very natural way. Specifically, I will consider random walk algorithms for testing graph expansion and clusterability, and show that QFF allows us to quadratically improve the dependency of the classical property testers on the random walk runtime. Moreover, this new quantum algorithm exponentially improves the space complexity of the classical tester to logarithmic. As a subroutine of independent interest, I use QFF to determine whether a given pair of nodes lies in the same cluster or in separate clusters. This solves a robust version of s-t connectivity, relevant in a learning context for classifying objects among a set of examples. The different algorithms crucially rely on the quantum speedup of the transient behavior of random walks.

Algorithmes et complexité
Mardi 21 mai 2019, 11 heures, Salle 1007
Miklos Santha (IRIF, CQT) Discrete logarithm and Diffie-Hellman problems in identity black-box groups

We investigate the computational complexity of the discrete logarithm, the computational Diffie-Hellman and the decisional Diffie-Hellman problems in some identity black-box groups G_{p,t}, where p is a prime number and t is a positive integer. These are defined as quotient groups of the vector space Z_p^{t+1} by a hyperplane H given through an identity oracle. While in general black-box groups with unique encoding these computational problems are classically all hard and quantumly all easy, we find that in the groups G_{p,t} the situation is more contrasted. We prove that while there is a polynomial time probabilistic algorithm to solve the decisional Diffie-Hellman problem in G_{p,1}, the probabilistic query complexity of all the other problems is Omega(p) and their quantum query complexity is Omega(\sqrt{p}). Our results therefore provide a new example of a group where the computational and the decisional Diffie-Hellman problems have widely different complexity.

Joint work with Gabor Ivanyos and Antoine Joux.

Algorithmes et complexité
Mardi 7 mai 2019, 11 heures, Salle 1007
Benjamin Smith (INRIA, LIX) Pre- and post-quantum Diffie-Hellman from groups, actions, and isogenies

Diffie-Hellman key exchange is at the foundations of public-key cryptography, but conventional group-based Diffie-Hellman is vulnerable to Shor's quantum algorithm. A range of 'post-quantum Diffie-Hellman' protocols have been proposed to mitigate this threat, including the Couveignes, Rostovtsev-Stolbunov, SIDH, and CSIDH schemes, all based on the combinatorial and number-theoretic structures formed by isogenies of elliptic curves. Pre- and post-quantum Diffie-Hellman schemes resemble each other at the highest level, but the further down we dive, the more differences emerge: differences that are critical when we use Diffie-Hellman as a basic component in more complicated constructions. In this survey we compare and contrast pre- and post-quantum Diffie-Hellman algorithms, highlighting some important subtleties.

Algorithmes et complexité
Mardi 16 avril 2019, 11 heures, Salle 1007
Clément Canonne (Stanford University) Statistical Inference Under Local Information Constraints

Independent samples from an unknown probability distribution p on a domain of size k are distributed across n players, with each player holding one sample. Each player can send a message to a central referee in a simultaneous message passing (SMP) model of communication, whose goal is to solve a pre-specified inference problem on p. The catch, however, is that each player cannot simply send their own sample to the referee; instead, the message they send must obey some (local) information constraint. For instance, each player may be limited to communicating only L bits, where L ≪ log k; or they may seek to reveal as little information as possible, and preserve local differential privacy.

We propose a general formulation for inference problems in this distributed setting, and instantiate it to two fundamental inference questions, learning and uniformity testing. We study the role of randomness for those questions, and obtain striking separations between public- and private-coin protocols for the latter, while showing the two settings are equally powerful for the former. (Put differently, sharing with your neighbors does help a lot for the test, but not really for the learning.)

Based on joint works with Jayadev Acharya (Cornell University), Cody Freitag (Cornell University), and Himanshu Tyagi (IISc Bangalore).

Algorithmes et complexité
Mardi 26 mars 2019, 11 heures, Salle 1007
Alexander Knop (UC San Diego) On proof systems based on branching programs

It is well known that there is a connection between the resolution complexity of a formula and the complexity of (very weak) branching programs for the corresponding search problem. In the talk, we will discuss proof systems that allow us to generalize this statement, and explain lower bounds for these proof systems. In order to achieve these lower bounds, we show how to transform functions with high fixed-partition communication complexity into functions with high best-partition communication complexity; as a result, we translate the landscape of fixed-partition communication complexity classes into the landscape of best-partition communication complexity classes.

Algorithmes et complexité
Mardi 5 mars 2019, 11 heures, Salle 1007
Valia Mitsou (IRIF) Limitations of treewidth for problems beyond NP

In this seminar, we take a closer look at the parameterized complexity of problems belonging to Σ^p_2 and Π^p_2, the second level of the polynomial hierarchy. We provide tight fine-grained bounds on their complexity with respect to the most important structural graph parameter, treewidth. We observe that these problems exhibit similar behavior: we show that a variety of problems, including Choosability, ∃∀SAT, and several problems from AI such as Abduction and Abstract Argumentation, admit a $2^{2^{O(tw)}}$ algorithm but cannot be solved in time $2^{2^{o(tw)}}$ under the Exponential Time Hypothesis.

Algorithmes et complexité
Vendredi 1 mars 2019, 15 heures, Salle 1007
Jacob Leshno (Chicago Booth) Facilitating Student Information Acquisition in Matching Markets

We develop a tractable model for a matching market where students form preferences through costly information acquisition. As the market outcome consists of both an assignment as well as the acquired information, we study how the marketplace can direct both the allocation and the information acquisition process. We define regret-free stable outcomes, where the matching is stable under the partial preferences and each student has acquired information optimally, as if she were last to market; regret-free stable outcomes are optimal in settings where students make decentralized decisions about how to acquire information and can wait for more information. We show that regret-free stable outcomes always exist and can be characterized by cutoffs. However, we also show that regret-free stable outcomes cannot be discovered under practical restrictions – any discovery process can reach deadlocks where every student would like to wait for additional market information before acquiring information. Instead, historical information can facilitate the implementation of approximate regret-free stable outcomes, suggesting that an important role of matching markets is to aggregate information from multiple years. We also provide a method for bootstrapping information acquisition when historical information is not available.

Algorithmes et complexité
Mardi 26 février 2019, 11 heures, Salle 1007
Uri Zwick (Tel Aviv University) Faster k-SAT algorithms using biased-PPSZ

The PPSZ algorithm, due to Paturi, Pudlak, Saks and Zane, is currently the fastest known algorithm for the k-SAT problem, for every k>3. For 3-SAT, a tiny improvement over PPSZ was obtained by Hertli. We introduce a biased version of the PPSZ algorithm with which we obtain an improvement over PPSZ for every k>=3. For k=3 we also improve on Hertli's result and get a much more noticeable improvement over PPSZ, though still relatively small. In particular, for Unique 3-SAT, we improve the current bound from 1.308^n to 1.307^n.

Joint work with Thomas Dueholm Hansen, Haim Kaplan and Or Zamir.

Algorithmes et complexité
Mardi 19 février 2019, 11 heures, Salle 3052
Danupon Nanongkai (KTH) Distributed Shortest Paths, Exactly

This talk concerns the problem of quickly computing distances and shortest paths on distributed networks (the CONGEST model). There have been many developments for this problem in the last few years, resulting in tight approximation schemes. This left open whether exact algorithms can perform equally well. In this talk, we will discuss some recent progress in answering this question. The most recent works that this talk is based on are with Sebastian Krinninger (FOCS 2018) and Aaron Bernstein (ArXiv 2018).

Algorithmes et complexité
Mardi 29 janvier 2019, 11 heures, Salle 1007
Balthazar Bauer (ENS) An application of communication complexity to cryptography

Hash functions are very important primitives in cryptology. To use one, we need to trust its designer. To address this problem, we can combine several independently built hash functions; but we then have to assume that the designers could communicate. That is why we can reduce the security properties of such combiners to communication complexity problems. After a presentation of these reductions, we will look in detail at the communication problems linked to these properties, our current knowledge, and new results about them. Finally, the open problems that challenge us will be presented.
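As a concrete illustration (not taken from the talk), the classic concatenation combiner C(x) = H1(x) || H2(x) stays collision-resistant as long as at least one of the two independently designed functions is, since any collision for C is simultaneously a collision for H1 and for H2:

```python
# Concatenation combiner of two independently designed hash functions.
# A collision for combined_hash collides under BOTH sha256 and sha3_256.
import hashlib

def combined_hash(data: bytes) -> bytes:
    h1 = hashlib.sha256(data).digest()    # 32 bytes
    h2 = hashlib.sha3_256(data).digest()  # 32 bytes
    return h1 + h2                        # 64-byte combined digest

print(len(combined_hash(b"hello")))  # 64
```

The price of this robustness is a doubled output length, which is one way the tension between trust and efficiency studied in the talk shows up.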

#### Année 2018

Algorithmes et complexité
Mardi 4 décembre 2018, 11 heures, Salle 1007
Geoffroy Couteau (KIT) Compressing Vector OLE

In this talk, I will discuss recent results on the problem of compressing correlated (pseudo) random strings.

Oblivious linear-function evaluation (OLE) is a secure two-party protocol allowing a receiver to learn any linear combination of a pair of field elements held by a sender. OLE serves as a common building block for secure computation of arithmetic circuits, analogously to the role of oblivious transfer (OT) for boolean circuits. A useful extension of OLE is vector OLE (VOLE), allowing the receiver to learn any linear combination of two vectors held by the sender. In several applications of OLE, one can replace a large number of instances of OLE by a smaller number of long instances of VOLE. This motivates the goal of amortizing the cost of generating long instances of VOLE.

I will present a new approach for generating pseudo-random instances of VOLE via a deterministic local expansion of a pair of short correlated seeds and no interaction. This provides the first example of compressing a non-trivial and cryptographically useful correlation with good concrete efficiency. This result enjoys several applications; I will discuss in particular an application to the construction of non-interactive zero-knowledge proofs with reusable preprocessing.

Our VOLE generators are based on a novel combination of function secret sharing (FSS) for multi-point functions and linear codes in which decoding is intractable. Their security can be based on variants of the learning parity with noise (LPN) assumption over large fields that resist known attacks. I will also discuss recent exciting developments toward the 'dream goal' of compressing arbitrary pseudorandom bilinear correlations.

Algorithmes et complexité
Mardi 27 novembre 2018, 11 heures, Salle 1007
Sagnik Mukhopadhyay (Computer Science Institute of Charles University) Lifting theorems for Equality

We show a deterministic simulation (or lifting) theorem for composed problems $f \circ \EQ_n$ where the inner function (the gadget) is Equality on $n$ bits. When $f$ is a total function on $p$ bits, it is easy to show via a rank argument that the communication complexity of $f\circ \EQ_n$ is $\Omega(\deg(f) \cdot n)$. However, there is a surprising counter-example of a partial function $f$ on $p$ bits, such that any completion $f'$ of $f$ has $\deg(f') = \Omega(p)$, and yet $f \circ \EQ_n$ has communication complexity $O(n)$. Nonetheless, we are able to show that the communication complexity of $f \circ \EQ_n$ is at least $D(f) \cdot n$ for a complexity measure $D(f)$ which is closely related to the $\AND$-query complexity of $f$ and is lower-bounded by the logarithm of the leaf complexity of $f$. As a corollary, we also obtain lifting theorems for the set-disjointness gadget, and a lifting theorem in the context of parity decision-trees, for the $\NOR$ gadget.

In this talk, I will talk about simulations or lifting theorems in general from a previous work of Chattopadhyay et al., and lifting theorem for Equality in particular—especially why the general recipe of Chattopadhyay et al. does not work for Equality. I will also mention an application of this technique to prove tight communication lower bound for Gap-Equality problem. This is a joint work with Bruno Loff.

Algorithmes et complexité
Jeudi 22 novembre 2018, 14 heures, Salle 1007
Aviad Rubinstein (Stanford) Distributed PCP Theorems for Hardness of Approximation in P

We present a new model of probabilistically checkable proof (PCP), which we call “Distributed PCP”: A satisfying assignment (x in {0,1}^n) to a SAT instance is shared between two parties (Alice knows x_1, … x_{n/2}, and Bob knows x_{n/2+1},…,x_n). Their goal is to jointly write a PCP for x, while exchanging little or no information.

Using our new framework, we obtain, for the first time, PCP-like hardness of approximation results for problems in P.

Based on joint works with Amir Abboud, Lijie Chen, Shafi Goldwasser, Kaifeng Lyu, Guy Rothblum, and Ryan Williams.

Algorithmes et complexité
Mardi 16 octobre 2018, 11 heures 30, Salle 1007
Suryajith Chillara (IIT Bombay) A Quadratic Size-Hierarchy Theorem for Small-Depth Multilinear Formulas

It is a fundamental problem in the study of computational complexity to understand whether access to more resources would mean strictly more computational power. In classical complexity, we have seen such examples in the Time Hierarchy and Space Hierarchy theorems, where time and space are the resources. It is natural to ask such a question in the algebraic complexity setting. Here, access to more resources translates to allowing the model to perform a greater number of operations, which in turn allows the “algebraic formulas” to have larger size.

Arithmetic formulas are directed trees where the leaves are labelled by variables and elements of the field, and internal vertices are labelled by + or x. Every internal vertex computes a polynomial by operating on its inputs. It is easy to see that they are a natural model of computation of polynomials, syntactically (symbolically). The size of the arithmetic formulas refers to the number of + and x operations needed to compute the polynomial.

Rephrasing the question in the algebraic setting, we ask the following: for any s and \varepsilon > 0, are there polynomials that can be efficiently computed by arithmetic formulas of size s but not by any arithmetic formula of size s^{1-\varepsilon}?

In this talk, we will restrict ourselves to arithmetic formulas where the computation at every vertex is a multilinear polynomial, and we show explicit separations between the expressive powers of small-depth multilinear formulas of different polynomial sizes. The multilinear restriction is a reasonable one, since most polynomials of interest are indeed multilinear.

Formally, for any s = poly(n) and \delta > 0, we show that there are explicit polynomial families that have multilinear formulas of size at most s but no 'small'-depth multilinear formulas of size less than s^{0.5 - \delta}. Our proof can be viewed as a derandomization of a lower bound technique of Raz (JACM 2009) using \varepsilon-biased spaces.

(Joint work with Nutan Limaye and Srikanth Srinivasan. Appeared in the proceedings of ICALP 2018.)

Algorithmes et complexité
Mardi 4 septembre 2018, 11 heures, Salle 1007
Kavitha Telikepalli (Tata Institute of Fundamental Research) Popular Matchings: A Tale of Two Classes

We consider popular matching problems in a stable marriage instance G. A matching M is popular if it does not lose a head-to-head election against any matching, where each vertex casts a vote for the matching where it gets a better assignment. Popular matchings always exist in G since every stable matching is popular. Algorithmic questions for several popular matching problems have been studied in the last few years: these include finding a max-size popular matching, finding a popular matching with our favorite edge(s) in it, finding a max-weight popular (mixed) matching when there are edge weights. We will see efficient algorithms for some of these problems.
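Since, as noted above, every stable matching is popular, the textbook Gale-Shapley deferred-acceptance algorithm already produces a (minimum-size) popular matching. A standard sketch, assuming complete preference lists (the instance names are mine):

```python
# Gale-Shapley deferred acceptance: men propose in preference order,
# women tentatively hold their best proposal so far.
def gale_shapley(men_prefs, women_prefs):
    free = list(men_prefs)                       # men still proposing
    next_choice = {m: 0 for m in men_prefs}      # next woman to try
    engaged = {}                                 # woman -> man
    rank = {w: {m: i for i, m in enumerate(p)}
            for w, p in women_prefs.items()}
    while free:
        m = free.pop()
        w = men_prefs[m][next_choice[m]]
        next_choice[m] += 1
        if w not in engaged:
            engaged[w] = m
        elif rank[w][m] < rank[w][engaged[w]]:   # w prefers m: trade up
            free.append(engaged[w])
            engaged[w] = m
        else:
            free.append(m)                       # w rejects m
    return {m: w for w, m in engaged.items()}

men = {"m1": ["w1", "w2"], "m2": ["w1", "w2"]}
women = {"w1": ["m1", "m2"], "w2": ["m1", "m2"]}
print(gale_shapley(men, women))  # {'m1': 'w1', 'm2': 'w2'}
```

The harder questions in the talk (max-size popular, dominant, max-weight popular matchings) need different machinery; this only covers the stable subclass.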

Interestingly, all tractability results in popular matchings seem to reduce the popular matching problem to an appropriate question in either stable matchings or dominant matchings. Dominant matchings always exist in G and form a subclass of max-size popular matchings while stable matchings form a subclass of min-size popular matchings. We recently showed that deciding if G admits a popular matching that is neither stable nor dominant is NP-hard. This implies a tight 1/2-factor approximation for the max-weight popular matching problem in G and the hardness of finding a popular matching in a non-bipartite graph.

[Joint work with Agnes Cseh, Yuri Faenza, Chien-Chung Huang, Vladlena Powers, and Xingyu Zhang.]

Algorithmes et complexité
Mardi 19 juin 2018, 11 heures, Salle 1007
Leonard Wossnig (University College London) Sketching as tool for faster quantum simulation

Following the works of Drineas et al. (2004), Randomised Numerical Linear Algebra (RNLA) applications have enjoyed a surge of interest in the classical algorithms community. Following their success, we recently demonstrated a classical algorithm based on the Nystroem method which allows us to efficiently simulate the dynamics of quantum systems with specific structural conditions. We briefly introduce the most basic RNLA algorithm by Drineas et al. to give an overview of the topic and then outline our own recent work. We follow with a discussion of potential applications of related methods, including spectral sparsification, in quantum algorithms for Hamiltonian simulation and quantum linear systems. We finish with a short description of our own current work relating spectral sparsification (Spielman & Teng 2011) to Hamiltonian simulation.

Algorithmes et complexité
Mardi 5 juin 2018, 11 heures, Salle 1007
Andrea Rocchetto (Oxford - UCL) Learnability and quantum computation

Computational learning theory provides a mathematical framework for rigorously formulating learning problems from both a statistical and computational perspective. During this talk I will present two independent results at the interface of learning theory and quantum computation. First, I will address the question “can we learn faster with a quantum computer?”. In particular, I will discuss the learnability of the class of boolean functions that can be expressed as polynomial size formulae in disjunctive normal form (DNF) and show that this class is efficiently quantum PAC learnable under product distributions. The learnability of DNFs is a central problem in learning theory, and the best known classical algorithm (using standard ways to access the function) runs in superpolynomial time. Second, I will discuss the question “what does it mean to learn a quantum state, and can we do that efficiently?”. Here, building on a model developed by Aaronson in 2007, I will present a class of states, namely stabiliser states, that can be learned using only a linear number of measurements and polynomial (classical) computation. Part of these results is joint work with Varun Kanade and Simone Severini.

Algorithmes et complexité
Mardi 29 mai 2018, 11 heures, Salle 1007
Steve Alpern (University of Warwick, UK) Shortest Paths in Networks with Unreliable Directions

The work of Korman and others, based on investigations of the navigational techniques of ‘crazy ants’, leads to the following problem: Let Q be a network with given arc lengths, and let H be a ‘home’ node. At each node a permanent direction is given (one of the adjacent arcs) which leads to a shortest path from that node to H with probability p less than one. Starting from a node S, a searcher chooses the given direction at each reached node with probability q; otherwise he chooses randomly. He arrives home at node H in expected time T=T(S,H,p,q). How should q be chosen to minimize T? Perhaps when reaching a node the searcher sees its degree k and so chooses the given direction with probability q(k). Related problems have been studied for trees from a graph-theoretical perspective.
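The tradeoff in q can be explored numerically. Below is a toy Monte-Carlo sketch (the modelling choices are mine, not the talk's): the network is a path 0..n with home H = 0, each node's stored direction points towards home with probability p, and the searcher follows the stored direction with probability q, otherwise moving to a uniformly random neighbour.

```python
# Estimate the expected time home T(S, H, p, q) on a path graph.
import random

def mean_time_home(n=10, p=0.8, q=0.7, trials=2000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        # direction[v] = -1 means "towards home (0)", +1 means "away";
        # each environment is sampled fresh with reliability p.
        direction = [(-1 if rng.random() < p else +1) for _ in range(n + 1)]
        v, steps = n, 0
        while v != 0:
            if rng.random() < q:
                step = direction[v]        # trust the stored direction
            else:
                step = rng.choice([-1, +1])  # move randomly
            v = max(0, min(n, v + step))   # clamp at the far endpoint
            steps += 1
        total += steps
    return total / trials
```

Sweeping q in `mean_time_home` shows the qualitative effect the talk asks about: with unreliable directions (p < 1), trusting the advice fully (q = 1) can trap the searcher behind wrongly-oriented nodes, so some randomness helps.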

Algorithmes et complexité
Mardi 22 mai 2018, 11 heures, Salle 1007
Anupam Prakash (IRIF) Improved quantum linear system solvers and applications to machine learning

I will describe recent results in the QRAM model that improve the sparsity dependence for quantum linear system solvers to \mu(A) = \min(||A||_F, ||A||_1), where ||A||_1 is the maximum ℓ1 norm of the rows and columns of A and ||A||_F is the Frobenius norm. These results achieve a worst-case quadratic speedup over the HHL algorithm and its improvements, and exponential speedups in some cases. I will also present some applications of the improved linear system solvers to quantum machine learning.

Algorithmes et complexité
Mercredi 11 avril 2018, 14 heures, Salle 1007
Libor Caha (RCQI Bratislava) The Feynman-Kitaev computer's clock: bias, gaps, idling and pulse tuning

We present a collection of results about the clock in Feynman's computer construction and Kitaev's Local Hamiltonian problem. First, by analyzing the spectra of quantum walks on a line with varying endpoint terms, we find a better lower bound on the gap of the Feynman Hamiltonian, which translates into a less strict promise gap requirement for the QMA-complete Local Hamiltonian problem. We also translate this result into the language of adiabatic quantum computation. Second, introducing an idling clock construction with a large state space but fast Cesaro mixing, we provide a way for achieving an arbitrarily high success probability of computation with Feynman's computer with only a logarithmic increase in the number of clock qubits. Finally, we tune and thus improve the costs (locality, gap scaling) of implementing a (pulse) clock with a single excitation.

Joint work with Zeph Landau and Daniel Nagaj.

Algorithmes et complexité
Mardi 10 avril 2018, 11 heures, Salle 1007
Pierre Aboulker (Université Grenoble-Alpes, Labo G-SCOP) Distributed coloring of graphs with fewer colors

We give an efficient (polylog(n) rounds) algorithm that colors a graph with few colors in the distributed setting (LOCAL model). Among other things, we will see that this algorithm 6-colors planar graphs (using one less color than the previous best algorithm of Goldberg, Plotkin, and Shannon) and will show that one cannot 4-color planar graphs in o(n) rounds.

This is a joint work with Bousquet, Bonamy, and Esperet.

Algorithmes et complexité
Mardi 27 mars 2018, 11 heures, Salle 1007
Kamil Khadiev (University of Latvia) Quantum online algorithms with restricted memory

An online algorithm is a well-known computational model for solving optimization problems. The defining property of this model is the following: the algorithm reads its input piece by piece and must return some of the output variables immediately after reading some of the input variables, even if the answer depends on the whole input. An online algorithm returns its output so as to minimize an objective function.

We consider streaming algorithms and two-way automata as models for online algorithms. We compare quantum and classical models in the cases of logarithmic and sublogarithmic memory.

We get the following results for online streaming algorithms:

- A quantum online streaming algorithm with 1 qubit of memory and 1 advice bit can be better than a classical online streaming algorithm with $o(\log n)$ bits of memory and $o(\log n)$ advice bits.
- A quantum online streaming algorithm with a constant amount of memory and $t$ advice bits can be better than deterministic online streaming algorithms with $t$ advice bits and unlimited computational power.
- In the case of polylogarithmic memory, quantum online streaming algorithms can be better than classical ones even if the latter have advice bits.

We get the following result for two-way automata as online algorithms for solving online minimization problems:

- A two-way automaton with quantum and classical states and a constant amount of memory can be better than classical ones even if the latter have advice bits.

Algorithmes et complexité
Mardi 20 mars 2018, 11 heures, Salle 1007
Valia Mitsou (IRIF) On the complexity of defective coloring.

In defective coloring, we are given a graph G and two integers c and Δ*, and are asked if we can c-color G so that the maximum degree of the subgraph induced by any color class is at most Δ*. This problem is a generalization of proper coloring, for which Δ* = 0.
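The definition is easy to check mechanically. Here is a small verifier (an illustration under the definition above, with my own toy instance): a colouring is a valid (c, Δ*) defective colouring iff every colour class induces a subgraph of maximum degree at most Δ*.

```python
# Check that 'coloring' is a c-coloring of the graph 'adj' in which
# each vertex has at most 'defect' same-colored neighbours.
def is_defective_coloring(adj, coloring, c, defect):
    if any(col >= c for col in coloring.values()):
        return False
    for v, neighbours in adj.items():
        same = sum(1 for u in neighbours if coloring[u] == coloring[v])
        if same > defect:
            return False
    return True

# A triangle is not properly 2-colorable (defect 0),
# but it is 2-colorable with defect 1.
triangle = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
print(is_defective_coloring(triangle, {0: 0, 1: 0, 2: 1}, 2, 0))  # False
print(is_defective_coloring(triangle, {0: 0, 1: 0, 2: 1}, 2, 1))  # True
```

Verification is thus easy; the talk's point is that *finding* such colorings is hard even on graph classes where proper coloring is easy.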

Defective coloring, like proper coloring, has many applications, for example in scheduling (assign jobs to machines which can work on up to Δ* jobs in parallel), or in radio networks (assign frequencies to antennas with some slack, that is, an antenna can be assigned the same frequency with up to Δ* other antennas within its range without a signal conflict).

We will present some algorithmic and complexity results on this problem in classes of graphs where proper coloring is easy: on the one hand, we will consider subclasses of perfect graphs, namely cographs and chordal graphs; on the other hand we will talk about structural parameterizations, such as treewidth and feedback vertex set.

Joint work with Rémy Belmonte (University of Electro-Communications, Tokyo) and Michael Lampis (Lamsade, Université Paris Dauphine) that appeared in WG '17 and in STACS '18.

Algorithmes et complexité
Mardi 13 février 2018, 11 heures, Salle 1007
Antoine Grospellier (INRIA) Efficient decoding of random errors for quantum expander codes

We show that quantum expander codes, a constant-rate family of quantum LDPC codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottesman’s construction of fault tolerant schemes with constant space overhead. In order to obtain this result, we study a notion of α-percolation: for a random subset E of vertices of a given graph, we consider the size of the largest connected α-subset of E, where X is an α-subset of E if |X ∩ E| ≥ α|X|.

Algorithmes et complexité
Lundi 12 février 2018, 14 heures, Salle 3052
Nabil Mustafa (ESIEE) Local Search for Geometric Optimization Problems.

Local search is an intuitive approach towards solving combinatorial optimization problems: start with any feasible solution, and try to improve it by local improvements. Like other greedy approaches, it can fail to find the global optimum, getting stuck on a locally optimal solution. In this talk I will present the ideas and techniques behind the use of local search in the design of provably good approximation algorithms for some combinatorial problems, such as independent sets, vertex cover, and dominating sets in geometric intersection graphs. The key unifying theme is the analysis of local expansion in planar graphs. Joint work with Norbert Bus, Shashwat Garg, Bruno Jartoux and Saurabh Ray.
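As a generic sketch of this pattern (not the talk's specific geometric algorithms), here is local search for maximum independent set with (1-out, 2-in) swaps: start from any maximal independent set and repeatedly trade one vertex for two, re-maximalizing after each swap.

```python
# (1-out, 2-in) local search for maximum independent set.
from itertools import combinations

def maximalize(adj, s):
    """Greedily add vertices with no neighbour in s."""
    for u in adj:
        if u not in s and not adj[u] & s:
            s.add(u)
    return s

def local_search_mis(adj):
    independent = maximalize(adj, set())      # maximal starting solution
    improved = True
    while improved:
        improved = False
        for v in list(independent):
            rest = independent - {v}
            # vertices outside the solution compatible with rest
            candidates = [u for u in adj
                          if u not in independent and not adj[u] & rest]
            for a, b in combinations(candidates, 2):
                if b not in adj[a]:           # a and b mutually non-adjacent
                    independent = maximalize(adj, rest | {a, b})
                    improved = True
                    break
            if improved:
                break
    return independent

# Star K_{1,3}: greedy may pick the centre; one swap recovers the leaves.
star = {0: {1, 2, 3}, 1: {0}, 2: {0}, 3: {0}}
print(local_search_mis(star))  # {1, 2, 3}
```

Each accepted swap strictly grows the solution, so the loop terminates; the talk's contribution is showing why such local optima are provably close to optimal in geometric and planar settings.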

Algorithmes et complexité
Mardi 6 février 2018, 11 heures, Salle 1007
Shendan Jin (LIP6) Online Maximum Matching with Recourse

We study the online maximum matching problem with recourse in a model in which the edges are associated with a known recourse parameter k. An online algorithm for this problem has to maintain a valid matching while edges of the underlying graph are presented one after the other. At any moment the algorithm can decide to include an edge into the matching or to exclude it, under the restriction that at most k actions per edge take place, where k is typically a small constant. This problem was introduced and studied in the context of general online packing problems with recourse by [Avitabile, Mathieu, Parkinson, 2013], whereas the special case k=2 has been studied by [Boyar et al., 2017].
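For orientation, here is the simplest recourse-free baseline in the edge-arrival model (an illustration, not the paper's L-Greedy): accept each arriving edge iff both endpoints are currently unmatched. Since each edge receives at most one action, this sits at the bottom of the recourse hierarchy, and the matching it maintains is maximal, hence at least half the maximum size.

```python
# Recourse-free greedy online matching on an edge-arrival stream.
def greedy_online_matching(edge_stream):
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched |= {u, v}
    return matching

# Path a-b-c-d arriving as (b,c),(a,b),(c,d): greedy keeps only (b,c),
# while the offline optimum {(a,b),(c,d)} has two edges.
print(greedy_online_matching([("b", "c"), ("a", "b"), ("c", "d")]))
```

On this stream the baseline is a factor 2 from optimal, which is exactly the kind of loss that the recourse budget k studied above lets an algorithm repair.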

In the first part of this paper, we consider the edge arrival model, in which an arriving edge never disappears from the graph. Here, we first show an improved analysis on the performance of the algorithm given in [AMP, 2013], by exploiting the structure of the specific problem. In addition, we extend the result of [Boyar et al., 2017] and show that the greedy algorithm has ratio 3/2 for every even positive k and ratio 2 for every odd k. Moreover, we present and analyze L-Greedy — an improvement of the greedy algorithm — which for small values of k outperforms the algorithm of [AMP, 2013]. In terms of lower bounds, we show that no deterministic algorithm better than 1+1/(k−1) exists, improving upon the lower bound of 1+1/k shown in [AMP, 2013].

The second part of the paper is devoted to the edge arrival/departure model, which is the fully dynamic variant of the online matching problem with recourse. The analysis of L-Greedy carries through in this model; moreover we show a general lower bound of (k^2−3k+3)/(k^2−4k+5) for all even k≥4 and provide the stronger bound of 10/7 for k=4. For k∈{2,3}, the competitive ratio is 3/2.

Joint work with Spyros Angelopoulos and Christoph Dürr.

Algorithmes et complexité
Lundi 5 février 2018, 14 heures, Salle 2018
Charles Bennett Do-It-Yourself randomness over Device-Independent randomness

Algorithmes et complexité
Mardi 30 janvier 2018, 10 heures, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Allison Bishop (DI ENS, IRIF - IEX and Columbia University) On Algorithms Operating in Adversarial Conditions

Eighth lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Allison Bishop on Algorithms Operating in Adversarial Conditions (at 11am).

Algorithmes et complexité
Mardi 23 janvier 2018, 10 heures, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Tim Roughgarden (DI ENS, IRIF - Stanford University) On Game Theory Through the Computational Lens

Seventh lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Tim Roughgarden on Game Theory Through the Computational Lens (at 11am).

Algorithmes et complexité
Mercredi 17 janvier 2018, 11 heures, Salle 2015
Philip Lazos (Oxford) The Infinite Server Problem

We study a variant of the k-server problem, the infinite server problem, in which infinitely many servers reside initially at a particular point of the metric space and serve a sequence of requests. In the framework of competitive analysis, we show a surprisingly tight connection between this problem and the (h,k)-server problem, in which an online algorithm with k servers competes against an offline algorithm with h servers. Specifically, we show that the infinite server problem has bounded competitive ratio if and only if the (h,k)-server problem has bounded competitive ratio for some k=O(h). We give a lower bound of 3.146 for the competitive ratio of the infinite server problem, which implies the same lower bound for the (h,k)-server problem even when k/h→∞ and holds also for the line metric; the previously known bounds were 2.4 for general metric spaces and 2 for the line. For weighted trees and layered graphs we obtain upper bounds, although they depend on the depth. Of particular interest is the infinite server problem on the line, which we show to be equivalent to the seemingly easier case in which all requests are in a fixed bounded interval away from the original position of the servers. This is a special case of a more general reduction from arbitrary metric spaces to bounded subspaces. Unfortunately, classical approaches (double coverage and generalizations, work function algorithm, balancing algorithms) fail even for this special case.

Algorithmes et complexité
Mardi 16 janvier 2018, 10 heures, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Jon Kleinberg (DI ENS, IRIF - Cornell University) On Algorithms and Fairness

Sixth lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Jon Kleinberg on Algorithms and Fairness (at 11am).

Algorithmes et complexité
Mardi 16 janvier 2018, 14 heures 30, Salle 3014
Nicolas Thiery (Université Paris Sud) Computing huge subspaces of diagonal harmonic polynomials: symmetries to the rescue!

Last spring, I visited François Bergeron and we worked on his favorite objects: the spaces H(n,k) of diagonal harmonic polynomials in k sets of n variables. Those spaces connect to many areas of algebraic combinatorics, representation theory and beyond, and the case H(n,2) became famous a decade ago with the n! conjecture and its proof by Haiman.

To fuel his ongoing studies, François needed to compute the structure of H(5,6). This is a space of dimension 6×10^5, made of polynomials in 30 variables of degree up to 15, each having thousands of terms.

In this talk, I'll explain how the calculation can now be completed in 45 minutes with a dozen cores and ~15 GB of memory. This exploits a combination of strategies (symmetries, representation theory of the symmetric and general linear groups, …), each of which reduces the complexity in time and memory by one or two orders of magnitude.

There will be few prerequisites, and it is my hope that some of the strategies (and maybe the code!) can be used in other contexts.

Algorithmes et complexité
Mardi 9 janvier 2018, 10 heures, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Mark Jerrum (DI ENS, IRIF - Queen Mary, University of London) On Sampling and Approximate Counting

Fifth lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Mark Jerrum on Sampling and Approximate Counting (at 11am).

#### Année 2017

Algorithmes et complexité
Mardi 19 décembre 2017, 10 heures, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Bruno Salvy (DI ENS, IRIF - INRIA) On Analytic Combinatorics

Fourth lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Bruno Salvy on Analytic Combinatorics (at 11am).

Algorithmes et complexité
Jeudi 14 décembre 2017, 14 heures 30, Salle 1016
Emanuele Natale (Max Planck Institute for Informatics) Computing through Dynamics: Principles for Distributed Coordination

We present analytical results regarding some simple randomized protocols, called dynamics, for solving fundamental distributed consensus problems, together with examples on how to use them to build lightweight algorithms for other important distributed problems. More specifically, we provide an overview of the theory regarding several dynamics such as the 3 Majority, the Averaging and the Undecided-State ones, and we show how to use them to solve plurality consensus, distributed clustering, clock synchronization and information spreading. Motivated by applications to systems whose complexity is in-between biological and human-made ones, we focus mainly on unstructured and random interaction models, and we also deal with scenarios in which the communication is affected by noise or when a self-stabilizing protocol is required.
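A sketch of one of the dynamics mentioned above, the 3-Majority rule, in a synchronous toy version (population size, initial bias and round count are illustrative, not taken from the talk): each agent samples three agents uniformly at random and adopts the majority opinion among the samples.

```python
import random
from collections import Counter

def three_majority_round(opinions, rng):
    # Each agent samples 3 agents (with replacement) and takes their majority.
    n = len(opinions)
    return [Counter(opinions[rng.randrange(n)] for _ in range(3)).most_common(1)[0][0]
            for _ in range(n)]

rng = random.Random(0)
opinions = ['A'] * 60 + ['B'] * 40      # initial 60/40 bias towards A
for _ in range(20):
    opinions = three_majority_round(opinions, rng)
print(Counter(opinions))                # the initial bias gets amplified round after round
```

With two opinions, three samples always have a strict majority, which is what makes the rule so simple to state; the analysis of its convergence (and its behaviour under noise) is the hard part covered in the talk.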

Algorithmes et complexité
Mardi 12 décembre 2017, 10 heures, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Amos Fiat (DI ENS, IRIF - University of Tel-Aviv) On Static and Dynamic Pricing

Third lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Amos Fiat on Static and Dynamic Pricing (at 11am).

Algorithmes et complexité
Mardi 12 décembre 2017, 14 heures, Salle 3052
Jean Krivine (IRIF) Incremental Update for Graph Rewriting

Graph rewriting formalisms are well-established models for the representation of biological systems such as protein-protein interaction networks. The combinatorial complexity of these models usually prevents any explicit representation of the variables of the system, and one has to rely on stochastic simulations in order to sample the possible trajectories of the underlying Markov chain. The bottleneck of stochastic simulation algorithms is the update of the propensity function that describes the probability that a given rule is to be applied next. In this talk we present an algorithm based on a data structure, called extension basis, that can be used to update the counts of predefined graph observables after a rule of the model has been applied.

Reference: Boutillier P., Ehrhard T., Krivine J. (2017) Incremental Update for Graph Rewriting. In: Yang H. (eds) Programming Languages and Systems. ESOP 2017. Lecture Notes in Computer Science, vol 10201. Springer, Berlin, Heidelberg

Séminaire commun du pôle Algorithmes et Structures Discrètes

Algorithmes et complexité
Mardi 5 décembre 2017, 10 heures, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Laurent Massoulié (DI ENS, IRIF - INRIA) Community Detection

Second lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Laurent Massoulié on Community Detection (at 11am).

Algorithmes et complexité
Mardi 28 novembre 2017, 10 heures, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Pierre Fraigniaud (DI ENS, IRIF - IRIF, CNRS) On Distributed Algorithms

First lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Pierre Fraigniaud on Distributed Algorithms (at 11am).

Algorithmes et complexité
Mardi 21 novembre 2017, 11 heures, Salle 1007
André Chailloux (INRIA Paris) A tight security reduction in the quantum random oracle model for code-based signature schemes

Quantum-secure signature schemes have received a lot of attention recently, in particular because of the NIST call to standardize quantum-safe cryptography. However, only a few signature schemes have concrete quantum security, because of technical difficulties associated with the Quantum Random Oracle Model (QROM). In this paper, we show that code-based signature schemes based on the full domain hash paradigm can behave very well in the QROM, i.e., that we can obtain tight security reductions. We also study quantum algorithms related to the underlying code-based assumption. Finally, we apply our reduction to a concrete example: the SURF signature scheme. We provide parameters for 128 bits of quantum security in the QROM and show that the obtained parameters are competitive with those of other similar quantum-secure signature schemes.

Joint work with Thomas Debris-Alazard

Algorithmes et complexité
Jeudi 16 novembre 2017, 14 heures, Salle 3052
Elena Kirshanova (ENS Lyon) Connections between the Dihedral Coset Problem and LWE

In this talk, I explain that under quantum polynomial time reductions, LWE is equivalent to a relaxed version of the dihedral coset problem (DCP), called extrapolated DCP (eDCP). The extent of extrapolation varies with the LWE noise rate. By considering different extents of extrapolation, the result generalizes Regev’s famous proof that if DCP is in BQP (quantum poly-time) then so is LWE (FOCS 02). We'll also discuss a connection between eDCP and Childs and Van Dam’s algorithm for generalized hidden shift problems (SODA 07). The result implies that a BQP solution for LWE might not require the full power of solving DCP, but rather only a solution for its relaxed version, eDCP, which could be easier.

Algorithmes et complexité
Mardi 14 novembre 2017, 14 heures, 3052
Laurent Massoulié (MSR-Inria) Rapid Mixing of Local Graph Dynamics

Graph dynamics arise naturally in many contexts. For instance in peer-to-peer networks, a participating peer may replace an existing connection with one neighbour by a new connection with a neighbour's neighbour. Several such local rewiring rules have been proposed to ensure that peer-to-peer networks achieve good connectivity properties (e.g. high expansion) in equilibrium. However it has remained an open question whether there existed such rules that also led to fast convergence to equilibrium. In this work we provide an affirmative answer: We exhibit a local rewiring rule that converges to equilibrium after each participating node has undergone only a number of rewirings that is poly-logarithmic in the system size. The proof involves consideration of the whole isoperimetric profile of the graph, and may be of independent interest.

This is joint work with Rémi Varloot.

Algorithmes et complexité
Vendredi 10 novembre 2017, 15 heures, Salle 3052
Simon Apers (Ghent University) Mixing and quantum sampling with quantum walks: some results and questions

Quantum walks have been shown to quadratically speed up search problems on graphs. We discuss their usage towards mixing and quantum sampling on graphs, problems that are in a sense dual to the search problem. Quantum sampling, or quantum state generation, roughly amounts to creating a superposition over the elements of a set, where this set may be implicitly defined. It is known that if this task can be solved efficiently, then the complexity class called “statistical zero knowledge” is in BQP. We discuss a folklore approach to this problem called “inverted search”, where a quantum search algorithm is essentially inverted to yield a quantum sampling algorithm. We also show how this approach solves the search problem when starting from a single element of the graph, rather than a superposition. The easier task of mixing on graphs asks for a classical sample of the set, rather than a superposition. It has long been conjectured that quantum walks can quadratically speed up this task when compared to simple random walks. We show how a quantum sampling approach confirms this conjecture for a limited set of graphs. For more general graphs however, it is conceivable that the quantum sampling problem cannot be solved efficiently, such that a different approach towards mixing is needed. We discuss ideas in this direction.

Algorithmes et complexité
Mardi 31 octobre 2017, 11 heures, Salle 1007
Ami Paz (IRIF) A (2+ε)-Approximation for Maximum Weight Matching in the Semi-Streaming Model

In this talk I will present our new (2+ε)-approximation algorithm for the maximum weight matching problem in the semi-streaming model, which was presented at SODA 2017.

We will start by discussing the local-ratio technique, a simple, sequential approximation paradigm we use in our algorithm. Then, we will consider the variations needed in order to adjust this technique to the semi-streaming model.

Based on a joint work with Gregory Schwartzman.

Algorithmes et complexité
Mardi 17 octobre 2017, 14 heures, Salle 3052
Claire Mathieu (DI - ENS) Online k-compaction

Given, at each time t = 1, 2, …, n, a new file of length l(t) and a read rate r(t), an online k-compaction algorithm must maintain a collection of at most k files, choosing (at each time t, without knowing future inputs) some of the files to merge into one, thereby incurring a merge cost equal to the total length of the merged files and a read cost equal to the read rate r(t) times the number of files present at time t. The goal is to minimize the total cost over time. k-compaction algorithms are a key component of log-structured merge trees, the file-based data structure underlying NoSQL databases such as Accumulo, Bigtable, Cassandra, HBase, and others. We initiate the theoretical study of k-compaction algorithms. We formalize the problem, consider worst-case, average-case and competitive analysis (per-instance optimality), and propose new algorithms that are optimal according to these metrics.

This is joint work with Carl Staelin, Neal E. Young, and Arman Yousefi.
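As a concrete reading of the cost model, here is a minimal sketch (function and policy names are hypothetical, not from the paper) that simulates an arbitrary online policy and charges it the merge cost (total length of merged files) plus the read cost (read rate times number of files) at each time step:

```python
def run_compaction(lengths, rates, k, policy):
    files, total = [], 0
    for t, (l, r) in enumerate(zip(lengths, rates)):
        files.append(l)                       # the new file arrives
        to_merge = policy(files, t)           # indices of files to merge now
        if len(files) - len(to_merge) + (1 if to_merge else 0) > k:
            raise ValueError("policy exceeds the bound of k files")
        if to_merge:
            merged = sum(files[i] for i in to_merge)
            total += merged                   # merge cost
            files = [f for i, f in enumerate(files) if i not in to_merge] + [merged]
        total += r * len(files)               # read cost at time t
    return total

# A naive baseline policy: merge everything whenever the bound (k = 3 here,
# hard-coded for the example) would otherwise be exceeded.
def merge_all_when_full(files, t):
    return set(range(len(files))) if len(files) > 3 else set()

print(run_compaction([1, 1, 1, 1, 1], [1, 1, 1, 1, 1], 3, merge_all_when_full))  # → 13
```

Such a simulator is just the problem specification in executable form; the interesting question studied in the paper is which policies minimize this total against worst-case, average-case, or per-instance optimal benchmarks.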

Algorithmes et complexité
Mardi 10 octobre 2017, 11 heures, Salle 1007
Victor Verdugo (ENS - Universidad de Chile) How Large is Your Graph?

We consider the problem of estimating the size of a graph when one is given only local access to it. We formally define a query model in which one starts with a seed node and is allowed to make queries about neighbours of nodes that have already been seen. In the case of undirected graphs, an estimator of Katzir et al. (2014) based on a sample from the stationary distribution π uses O(1/‖π‖_2 + d_avg) queries; we prove that this is tight. In addition, we establish this as a lower bound even when the algorithm is allowed to crawl the graph arbitrarily; the results of Katzir et al. give an upper bound that is worse by a multiplicative factor t_mix·log(n). The picture becomes significantly different in the case of directed graphs. We show that, without strong assumptions on the graph structure, the number of nodes cannot be predicted to within a constant multiplicative factor without using a number of queries that is at least linear in the number of nodes; in particular, rapid mixing and small diameter, properties that most real-world networks exhibit, do not suffice. The question of interest is then whether any algorithm can beat breadth-first search. We introduce a new parameter, generalising the well-studied conductance, such that if a suitable bound on it exists and is known to the algorithm, the number of queries required is sublinear in the number of edges; we show that this is tight.
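For the undirected case, the collision-based estimator attributed above to Katzir et al. can be sketched roughly as follows. This is a hedged reconstruction, not the authors' algorithm or code: it approximates stationary samples by spaced picks along a random walk and combines local degree sums with the number of sample collisions.

```python
import random
from collections import Counter

def estimate_size(neighbors, start, samples=2000, gap=3, rng=random):
    v = start
    picks = []
    for step in range(samples * gap):
        v = rng.choice(neighbors[v])
        if step % gap == gap - 1:       # keep every gap-th visited node
            picks.append(v)
    deg = lambda u: len(neighbors[u])
    # Collisions among the picks estimate the 2-norm of π up to normalisation.
    collisions = sum(c * (c - 1) // 2 for c in Counter(picks).values())
    return (sum(deg(u) for u in picks) * sum(1 / deg(u) for u in picks)
            / (2 * collisions))

# Toy graph with fast mixing: the complete graph on 100 nodes.
n = 100
complete = {i: [j for j in range(n) if j != i] for i in range(n)}
random.seed(0)
est = estimate_size(complete, 0)
print(round(est))   # an estimate close to n = 100
```

The number of picks (and hence queries) needed for the collision count to concentrate is what the O(1/‖π‖_2 + d_avg) bound in the abstract quantifies.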

Algorithmes et complexité
Mardi 3 octobre 2017, 11 heures, Salle 1007
Giuseppe Italiano (Università di Roma Tor Vergata) Decremental Single-Source Reachability and Strongly Connected Components in O(m \sqrt{n} log n) Total Update Time

We present randomized algorithms with a total update time of O(m \sqrt{n} log n) for the problems of decremental single-source reachability and decremental strongly connected components on directed graphs. This sharply improves recent breakthrough results of Henzinger, Krinninger and Nanongkai [STOC 14, ICALP 15]. In addition, our algorithms are arguably simpler. In this talk, we first review the Las Vegas algorithm by Roditty and Zwick [SICOMP 08] and the deterministic algorithm by Łącki [TALG 13], both with a total update time of O(mn) (expected time in the case of the algorithm by Roditty and Zwick). Our algorithms carefully combine these two results, along with new structural properties, including a separation lemma in the case of graphs with a large diameter.

Joint work with Shiri Chechik, Thomas Dueholm Hansen, Jakub Łącki and Nikos Parotsidis.

Algorithmes et complexité
Mardi 26 septembre 2017, 11 heures, Salle 1007
Vianney Perchet (CMLA, Ens Paris-Saclay) Online Search Problems

In traditional “search problems”, an agent aims at exploring a graph in order to find a hidden object as fast as possible. We consider here the Bayesian repeated scenario with several i.i.d. instances of the same basic search problem. The objective is, within a given amount of time, to find as many objects as possible, with the possibility of leaving an instance for the next one at any moment.

We also consider the non-Bayesian case through a variant of regret minimization. We provide and analyze several algorithms with fast convergence rates and/or guaranteed efficiency.

Algorithmes et complexité
Mardi 19 septembre 2017, 11 heures, Salle 1007

Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a 'good' hierarchical clustering is one that minimizes some cost function. He showed that this cost function has certain desirable properties.

We take an axiomatic approach to defining 'good' objective functions for both similarity- and dissimilarity-based hierarchical clustering. We characterize a set of “admissible” objective functions (that includes Dasgupta's) that have the property that when the input admits a 'natural' hierarchical clustering, it has an optimal value.

Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better algorithms. For similarity-based hierarchical clustering, Dasgupta showed that the divisive sparsest-cut approach achieves an O(log^{3/2} n)-approximation. We give a refined analysis of the algorithm and show that it in fact achieves an O(\sqrt{log n})-approx. (Charikar and Chatziafratis independently proved that it is an O(\sqrt{log n})-approx.). This improves upon the LP-based O(log n)-approx. of Roy and Pokutta. For dissimilarity-based hierarchical clustering, we show that the classic average-linkage algorithm gives a factor 2 approx., and we provide a simple and better algorithm that gives a factor 3/2 approx.

Finally, we consider a 'beyond-worst-case' scenario through a generalisation of the stochastic block model for hierarchical clustering. We show that Dasgupta's cost function has desirable properties for these inputs and we provide a simple (1 + o(1))-approximation in this setting.

Joint work with Varun Kanade, Frederik Mallmann-Trenn, and Claire Mathieu.
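Dasgupta's objective discussed above is short enough to state in code. The sketch below (names, tree encoding and similarity weights are illustrative) charges every pair of points its similarity times the number of leaves under the pair's least common ancestor, so a good hierarchy merges similar pairs near the bottom of the tree:

```python
def leaves(tree):
    # A hierarchy is a nested tuple; anything that is not a tuple is a leaf.
    if not isinstance(tree, tuple):
        return frozenset([tree])
    return leaves(tree[0]) | leaves(tree[1])

def dasgupta_cost(tree, w):
    if not isinstance(tree, tuple):
        return 0.0
    left, right = leaves(tree[0]), leaves(tree[1])
    n = len(left) + len(right)
    # Pairs separated at this node have their LCA here: charge w(i,j) * n.
    split = sum(w.get((min(i, j), max(i, j)), 0.0) * n
                for i in left for j in right)
    return split + dasgupta_cost(tree[0], w) + dasgupta_cost(tree[1], w)

# Four points: {0,1} and {2,3} are the similar pairs.
w = {(0, 1): 1.0, (2, 3): 1.0, (0, 2): 0.1, (0, 3): 0.1, (1, 2): 0.1, (1, 3): 0.1}
good = ((0, 1), (2, 3))   # merges similar pairs at the bottom
bad = ((0, 2), (1, 3))    # separates similar pairs at the root
print(dasgupta_cost(good, w), dasgupta_cost(bad, w))  # the good tree costs less
```

On this toy input the natural hierarchy scores 5.6 against 9.2 for the scrambled one, which is exactly the behaviour the 'admissible' objective functions above are axiomatised to have.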

Algorithmes et complexité
Mardi 11 juillet 2017, 11 heures, Salle 1007
Oded Regev (Courant Institute of Mathematical Sciences, NYU) A Reverse Minkowski Theorem

Informally, Minkowski's first theorem states that lattices that are globally dense (have small determinant) are also locally dense (have lots of points in a small ball around the origin). This fundamental result dates back to 1891 and has a very wide range of applications.

I will present a proof of a reverse form of Minkowski's theorem, conjectured by Daniel Dadush in 2012, showing that locally dense lattices are also globally dense (in the appropriate sense).

The talk will be self contained and I will not assume any familiarity with lattices.

Based on joint papers with Daniel Dadush and Noah Stephens-Davidowitz.

Algorithmes et complexité
Jeudi 6 juillet 2017, 11 heures, Salle 3052
Joel Friedman (University of British Columbia) Inner Rank and Lower Bounds for Matrix Multiplication

We give a short proof of the optimality of Strassen's classical algorithm for 2 x 2 matrix multiplication, and more generally give a 2n^2 - n + 1 lower bound for n x n matrix multiplication, in terms of the border rank (and, hence, also rank) of the associated tensor, over an arbitrary field.

We call the tool we use “inner rank,” which seems to have gone unnoticed in the literature (at least in the way we use it). While our bounds are not competitive with the best bounds to date over the complex numbers, we argue that “inner rank” may lead to improved results and merits further study.
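For context, the scheme whose optimality is at stake is Strassen's: multiplying two 2x2 matrices with seven multiplications instead of eight. A quick sketch, with ordinary Python numbers standing in for elements of an arbitrary ring:

```python
def strassen_2x2(A, B):
    # Strassen's seven products for the 2x2 case; A and B are pairs of rows.
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    # Recombine the seven products into the four entries of A @ B.
    return ((p5 + p4 - p2 + p6, p1 + p2),
            (p3 + p4, p1 + p5 - p3 - p7))

print(strassen_2x2(((1, 2), (3, 4)), ((5, 6), (7, 8))))  # → ((19, 22), (43, 50))
```

The lower bound in the abstract says, in particular, that no analogous scheme with six multiplications can exist.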

Algorithmes et complexité
Mardi 20 juin 2017, 11 heures, Salle 1007
Laurent Bulteau (CNRS, Laboratoire d'Informatique Gaspard Monge, Marne-la-Vallée, France.) Beyond Adjacency Maximization: Scaffold Filling for New String Distances

In Genomic Scaffold Filling, one aims at polishing in silico a draft genome, called scaffold. The scaffold is given in the form of an ordered set of gene sequences, called contigs. This is done by confronting the scaffold to an already complete reference genome from a close species. More precisely, given a scaffold S, a reference genome G (represented as a string) and a score function f() between two genomes, the aim is to complete S by adding the missing genes from G so that the obtained complete genome S* optimizes f(S*,G). In this talk, we will consider two alternative score functions: the first aims at maximizing the number of common k-mers between S* and G (k-Mer Scaffold Filling), the second aims at minimizing the number of string breakpoints between S* and G (Min-Breakpoint Scaffold Filling). We study these problems from the parameterized complexity point of view, providing fixed-parameter (FPT) algorithms for both problems. In particular, we show that k-Mer Scaffold Filling is FPT wrt. the number of additional k-mers realized by the completion of S. We also show that Min-Breakpoint Scaffold Filling is FPT wrt. a parameter combining the number of missing genes, the number of gene repetitions and the target distance.

Algorithmes et complexité
Mardi 13 juin 2017, 11 heures, Salle 1007
Abel Molina The Optimality of Projections for Quantum State Exclusion

We will first motivate the problem of quantum state exclusion of pure states, through its connections with the PBR game and with compatibility conditions for quantum state assignments. Then, we will discuss our recent result regarding the optimality of projections for perfect state exclusion of 3 pure states in 3 dimensions (arXiv:1702.06449). We will describe our techniques to prove this result, which are based on arguments involving convexity, rank and symmetry properties. Finally, we will discuss other related results, as well as possible avenues for extending our results.

Algorithmes et complexité
Mardi 6 juin 2017, 16 heures, 3052
Thomas Vidick (California Institute of Technology) Entanglement Tests from Group Representations.

I will present a novel perspective on entanglement tests, such as the CHSH inequality and extensions thereof, pioneered by Slofstra and (unknowingly) Gowers-Hatami, that is based on the theory of (approximate) representations of non-abelian groups. We will see how this theory leads to a simple analysis of:

(i) The first family of self-tests (or, two-player entangled games) for arbitrarily high-dimensional entanglement that is robust to a constant fraction of noise. (Joint work with Natarajan, arXiv:1610.03574.)

(ii) The first finite self-test such that for any eps>0, entangled states of dimension poly(1/eps) are both necessary and sufficient to achieve success probability 1-eps. (Joint work with Slofstra, in preparation.)

Algorithmes et complexité
Mardi 2 mai 2017, 11 heures, Salle 1007
Pierre Senellart (DI/ENS) Tree decompositions for probabilistic data management

Joint work with Antoine Amarilli, Pierre Bourhis, Mikaël Monet, Silviu Maniu.

In this talk, we review recent work on the problem of efficiently querying probabilistic databases, i.e., compact representations of probability distributions over databases, by exploiting the structure of the data. In the deterministic setting, a celebrated result by Courcelle shows that any monadic second-order query can be evaluated in linear-time on instances of bounded treewidth. We show that this result generalizes to probabilistic instances through the use of provenance. In the probabilistic setting, we show that a bound on the treewidth is necessary for query evaluation to be tractable, in sharp contrast with the deterministic setting. This leads us to studying which real-world databases actually have low treewidth, and to propose practical algorithms for query evaluation in databases that can be partially decomposed in a low-treewidth part and a high-treewidth core.

Pierre Senellart is a Professor in the Computer Science Department at the École normale supérieure in Paris, France, and the head of the Valda team of Inria Paris. He is an alumnus of ENS and obtained his M.Sc. (2003) and Ph.D. (2007) in computer science from Université Paris-Sud, studying under the supervision of Serge Abiteboul. He was awarded an Habilitation à diriger les recherches in 2012 from Université Pierre et Marie Curie. Before joining ENS, he was an Associate Professor (2008–2013) then a Professor (2013–2016) at Télécom ParisTech. He also held secondary appointments as Lecturer at the University of Hong Kong in 2012–2013, and as Senior Research Fellow at the National University of Singapore from 2014 to 2016. His research interests focus around practical and theoretical aspects of Web data management, including Web crawling and archiving, Web information extraction, uncertainty management, Web mining, and intensional data management.

Algorithmes et complexité
Mardi 25 avril 2017, 11 heures, Salle 1007
Nicolas Flammarion (DI/ENS) Optimal rates for Least-Squares Regression through stochastic gradient descent.

At a broad level, stochastic optimization is concerned with optimizing a function in the presence of noise. In this context, we consider the optimization of a quadratic objective function whose gradients are only accessible through a stochastic oracle that returns the gradient at any given point plus a zero-mean finite variance random error. We present the first algorithm that achieves the optimal prediction error rates for least-squares regression, both in terms of forgetting of initial conditions in O(1/n^2), and in terms of dependence on the noise and dimension d of the problem, as O(d/n). Our new algorithm is based on averaged accelerated regularized gradient descent, and may also be analyzed through finer assumptions on initial conditions and the Hessian matrix, leading to dimension-free quantities that may still be small while the “optimal” terms above are large.
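The talk's algorithm (averaged accelerated regularized gradient descent) refines plain averaged SGD; as a baseline for comparison, here is a hedged sketch of the plain averaged variant on a synthetic least-squares problem (step size, dimensions and data are illustrative, not from the paper):

```python
import random

def averaged_sgd(data, d, steps, lr):
    rng = random.Random(0)
    w = [0.0] * d
    w_bar = [0.0] * d
    for t in range(steps):
        x, y = rng.choice(data)
        # Stochastic oracle: gradient at one random sample (zero-mean error).
        err = sum(wi * xi for wi, xi in zip(w, x)) - y
        for j in range(d):
            w[j] -= lr * err * x[j]
            w_bar[j] += (w[j] - w_bar[j]) / (t + 1)   # running (Polyak) average
    return w_bar

# Synthetic data: y = <w_true, x> + small Gaussian noise.
rng = random.Random(1)
w_true = [1.0, -2.0]
data = []
for _ in range(2000):
    x = [rng.gauss(0, 1), rng.gauss(0, 1)]
    y = sum(wi * xi for wi, xi in zip(w_true, x)) + rng.gauss(0, 0.1)
    data.append((x, y))
w_hat = averaged_sgd(data, d=2, steps=4000, lr=0.05)
print([round(v, 2) for v in w_hat])
```

The abstract's point is precisely about the two error terms this baseline leaves on the table: how fast the initial condition is forgotten (O(1/n^2) for their method) and the O(d/n) dependence on noise and dimension.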

Algorithmes et complexité
Mardi 18 avril 2017, 11 heures, Salle 1007
Adrian Kosowski (Inria, IRIF Université Paris 7) Protocols for Detecting a Signal

We consider the following interaction scenario: a population of agents is required to perpetually detect whether or not the population contains at least one agent in a distinguished state X. This problem may be viewed as a variant of rumor-spreading, in which a “true” rumor should spread and persist in the population for as long as its primary rumor source X is present, and a “false” rumor without a source should be actively suppressed. Investigations into the problem are also directly motivated by the question of detecting and amplifying trace concentrations of a chemical substance X, e.g., in a framework of DNA computing with chemical reaction networks (CRNs).

We describe two population protocols which solve the considered problem efficiently, converging to a correct output state on a (1-epsilon)-fraction of a population of n agents within a polylogarithmic number of steps, starting from any initial configuration of the system, w.h.p. under a fair uniformly random scheduler. One of the proposed protocols requires O(log n) states and has certain desirable robustness properties, while the other uses only a constant number of states.

Algorithmes et complexité
Jeudi 13 avril 2017, 14 heures, Salle 3052
Przemyslaw Uznanski (ETH Zürich, Switzerland) All-Pairs 2-reachability in O(n^ω log n) Time

In the 2-reachability problem we are given a directed graph G and we wish to determine whether there are two (edge- or vertex-)disjoint paths from u to v, for a given pair of vertices u and v. We present an algorithm that computes 2-reachability information for all pairs of vertices in O(n^ω log n) time. Hence, we show that the running time of all-pairs 2-reachability is within a log factor of transitive closure.

Moreover, our algorithm produces a witness (i.e., a separating edge or a separating vertex) for all pairs of vertices where 2-reachability does not hold. By processing these witnesses, we can compute all the edge- and vertex-dominator trees of G in O(n^2) additional time, which in turn enables us to answer various connectivity queries in O(1) time. For instance, we can test in constant time if there is a path from u to v avoiding an edge e, for any pair of query vertices u and v and any query edge e, or if there is a path from u to v avoiding a vertex w, for any query vertices u, v, and w.

This is a joint work with Loukas Georgiadis (University of Ioannina), Daniel Graf (ETH Zurich), Giuseppe Italiano and Nikos Parotsidis (University of Rome, Tor Vergata).
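The per-pair question the all-pairs algorithm answers can be checked by brute force via Menger's theorem: two edge-disjoint u→v paths exist iff a unit-capacity max-flow from u to v has value at least 2. This hedged sketch (not the paper's algorithm, which amortises the work across all pairs) is the specification such an algorithm must agree with:

```python
from collections import defaultdict

def two_edge_reachable(edges, u, v):
    cap = defaultdict(int)
    adj = defaultdict(set)
    for a, b in edges:
        cap[(a, b)] += 1
        adj[a].add(b)
        adj[b].add(a)   # residual arcs may travel backwards
    flow = 0
    while flow < 2:
        # DFS for an augmenting path in the residual graph.
        parent = {u: None}
        stack = [u]
        while stack and v not in parent:
            x = stack.pop()
            for y in adj[x]:
                if y not in parent and cap[(x, y)] > 0:
                    parent[y] = x
                    stack.append(y)
        if v not in parent:
            return False
        y = v
        while parent[y] is not None:     # push one unit of flow along the path
            x = parent[y]
            cap[(x, y)] -= 1
            cap[(y, x)] += 1
            y = x
        flow += 1
    return True

# Diamond: two disjoint 1→4 paths exist; a single chain has only one.
print(two_edge_reachable([(1, 2), (1, 3), (2, 4), (3, 4)], 1, 4))  # → True
print(two_edge_reachable([(1, 2), (2, 3)], 1, 3))                   # → False
```

Running this for every pair costs a max-flow per pair; the point of the result above is to get the same information for all pairs within a log factor of a single transitive closure.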

Algorithmes et complexité
Mardi 21 mars 2017, 11 heures, Salle 1007
Emanuele Natale Friend or Foe? Population Protocols can perform Community Detection

We present a simple distributed algorithm that, given a regular graph consisting of two communities (or clusters), each inducing a good expander and such that the cut between them has sparsity 1/polylog(n), recovers the two communities.

More precisely, upon running the protocol, every node assigns itself a binary label of m = Θ(log n) bits, so that with high probability, for all but a small number of outliers, nodes within the same community are assigned labels with Hamming distance o(m), while nodes belonging to different communities receive labels with Hamming distance at least m/2 − o(m). We refer to such an outcome as a “community sensitive labeling” of the graph.

Our algorithm uses Θ(log² n) local memory and computes the community sensitive labeling after each node performs Θ(log² n) steps of local work. Our algorithm and its analysis work in the “(random) population protocol” model, in which anonymous nodes do not share any global clock (the model is asynchronous) and communication occurs over one single (random) edge per round. We believe this is the first provably effective protocol for community detection that works in this model.

Joint work with Luca Becchetti, Andrea Clementi, Francesco Pasquale, Prasad Raghavendra, and Luca Trevisan.

Link to the arxiv version: https://arxiv.org/abs/1703.05045

Algorithmes et complexité
Mardi 21 février 2017, 11 heures, Salle 1007
Zvi Lotker Literature Networks and Time in Social Networks.

In this talk I will describe the connection between social networks and theater plays. I will show how algorithms can analyze plays using social networks, and how plays can reveal an interesting algorithmic problem.

Algorithmes et complexité
Mardi 7 février 2017, 11 heures, Salle 1007
Charles Paperman Streaming and circuit complexity

In this talk, I will present a connection between the streaming complexity and the circuit complexity of regular languages through a notion of streaming by block. This result provides tight constructions of boolean circuits computing an automaton, thanks to some classical and recent results on the circuit complexity of regular languages.

Algorithmes et complexité
Mardi 31 janvier 2017, 11 heures, Salle 1007
Chien-Chung Huang (CNRS, ENS Paris) Popularity, Mixed Matchings, and Self-duality