Algorithmes et complexité
mardi 04 décembre 2018, 11h00, Salle 1007
Geoffroy Couteau (KIT) Compressing Vector OLE

In this talk, I will discuss recent results on the problem of compressing correlated (pseudo) random strings.

Oblivious linear-function evaluation (OLE) is a secure two-party protocol allowing a receiver to learn any linear combination of a pair of field elements held by a sender. OLE serves as a common building block for secure computation of arithmetic circuits, analogously to the role of oblivious transfer (OT) for boolean circuits. A useful extension of OLE is vector OLE (VOLE), allowing the receiver to learn any linear combination of two vectors held by the sender. In several applications of OLE, one can replace a large number of instances of OLE by a smaller number of long instances of VOLE. This motivates the goal of amortizing the cost of generating long instances of VOLE.
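
For concreteness, here is a minimal Python sketch (my own illustration, not the construction from the talk) of a random length-n VOLE correlation over a prime field: the sender holds random vectors u and v, while the receiver holds a random scalar x and the vector w = u·x + v, so each coordinate of w is a fixed linear combination of the sender's corresponding coordinates.

```python
import random

P = 2**61 - 1  # a prime modulus (illustrative choice)

def sample_vole(n, rng=random):
    """Sample a random length-n VOLE correlation over F_P.

    Sender output:   (u, v), two random vectors in F_P^n.
    Receiver output: (x, w), a random scalar x and w = u * x + v (componentwise).
    """
    u = [rng.randrange(P) for _ in range(n)]
    v = [rng.randrange(P) for _ in range(n)]
    x = rng.randrange(P)
    w = [(ui * x + vi) % P for ui, vi in zip(u, v)]
    return (u, v), (x, w)

if __name__ == "__main__":
    (u, v), (x, w) = sample_vole(4)
    assert all(wi == (ui * x + vi) % P for ui, vi, wi in zip(u, v, w))
    print("receiver holds x and w = u*x + v; sender holds u, v")
```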

I will present a new approach for generating pseudo-random instances of VOLE via a deterministic local expansion of a pair of short correlated seeds and no interaction. This provides the first example of compressing a non-trivial and cryptographically useful correlation with good concrete efficiency. This result enjoys several applications; I will discuss in particular an application to the construction of non-interactive zero-knowledge proofs with reusable preprocessing.

Our VOLE generators are based on a novel combination of function secret sharing (FSS) for multi-point functions and linear codes in which decoding is intractable. Their security can be based on variants of the learning parity with noise (LPN) assumption over large fields that resist known attacks. I will also discuss recent exciting developments toward the 'dream goal' of compressing arbitrary pseudorandom bilinear correlations.

Algorithmes et complexité
mardi 27 novembre 2018, 11h00, Salle 1007
Sagnik Mukhopadhyay (Computer Science Institute of Charles University) Lifting theorems for Equality

We show a deterministic simulation (or lifting) theorem for composed problems $f \circ \EQ_n$ where the inner function (the gadget) is Equality on $n$ bits. When $f$ is a total function on $p$ bits, it is easy to show via a rank argument that the communication complexity of $f\circ \EQ_n$ is $\Omega(\deg(f) \cdot n)$. However, there is a surprising counter-example of a partial function $f$ on $p$ bits, such that any completion $f'$ of $f$ has $\deg(f') = \Omega(p)$, and yet $f \circ \EQ_n$ has communication complexity $O(n)$. Nonetheless, we are able to show that the communication complexity of $f \circ \EQ_n$ is at least $D(f) \cdot n$ for a complexity measure $D(f)$ which is closely related to the $\AND$-query complexity of $f$ and is lower-bounded by the logarithm of the leaf complexity of $f$. As a corollary, we also obtain lifting theorems for the set-disjointness gadget, and a lifting theorem in the context of parity decision-trees, for the $\NOR$ gadget.

In this talk, I will discuss simulation or lifting theorems in general, following a previous work of Chattopadhyay et al., and the lifting theorem for Equality in particular, especially why the general recipe of Chattopadhyay et al. does not work for Equality. I will also mention an application of this technique to prove a tight communication lower bound for the Gap-Equality problem. This is joint work with Bruno Loff.

Algorithmes et complexité
jeudi 22 novembre 2018, 14h00, Salle 1007
Aviad Rubinstein (Stanford) Distributed PCP Theorems for Hardness of Approximation in P

We present a new model of probabilistically checkable proof (PCP), which we call “Distributed PCP”: A satisfying assignment (x in {0,1}^n) to a SAT instance is shared between two parties (Alice knows x_1, … x_{n/2}, and Bob knows x_{n/2+1},…,x_n). Their goal is to jointly write a PCP for x, while exchanging little or no information.

Using our new framework, we obtain, for the first time, PCP-like hardness of approximation results for problems in P.

Based on joint works with Amir Abboud, Lijie Chen, Shafi Goldwasser, Kaifeng Lyu, Guy Rothblum, and Ryan Williams.

Algorithmes et complexité
mardi 16 octobre 2018, 11h30, Salle 1007
Suryajith Chillara (IIT Bombay) A Quadratic Size-Hierarchy Theorem for Small-Depth Multilinear Formulas

It is a fundamental problem in the study of computational complexity to understand whether access to more resources yields strictly more computational power. In classical complexity, the Time Hierarchy and Space Hierarchy theorems are examples of such results, with time and space as the resources. It is natural to ask the same question in the setting of algebraic complexity. Here, access to more resources translates to allowing the model to perform more operations, which in turn allows the “algebraic formulas” to have larger size.

Arithmetic formulas are directed trees where the leaves are labelled by variables and elements of the field, and internal vertices are labelled by + or x. Every internal vertex computes a polynomial by operating on its inputs. It is easy to see that they are a natural model of computation of polynomials, syntactically (symbolically). The size of the arithmetic formulas refers to the number of + and x operations needed to compute the polynomial.
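
As a toy illustration of the model (my own sketch, with hypothetical variable names), an arithmetic formula can be encoded as a tree of + and x gates, with the size counted as the number of internal gates:

```python
from dataclasses import dataclass
from typing import Union

# A toy encoding of arithmetic formulas: leaves are variables or field
# constants, internal nodes are + or * gates with two children (trees, so
# no gate is reused).

@dataclass
class Gate:
    op: str                      # '+' or '*'
    left: "Node"
    right: "Node"

Node = Union[Gate, str, int]     # str = variable name, int = field constant

def evaluate(node, assignment):
    """Evaluate the polynomial computed at `node` on the point `assignment`."""
    if isinstance(node, int):
        return node
    if isinstance(node, str):
        return assignment[node]
    l, r = evaluate(node.left, assignment), evaluate(node.right, assignment)
    return l + r if node.op == '+' else l * r

def size(node):
    """Formula size = number of + and * operations."""
    if isinstance(node, Gate):
        return 1 + size(node.left) + size(node.right)
    return 0

# (x1 + x2) * (x3 + 5): a multilinear formula of size 3.
f = Gate('*', Gate('+', 'x1', 'x2'), Gate('+', 'x3', 5))
print(size(f), evaluate(f, {'x1': 1, 'x2': 2, 'x3': 4}))   # 3 27
```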

Rephrasing the question in the algebraic setting, we ask the following: for any s and \varepsilon > 0, are there polynomials that can be computed by arithmetic formulas of size s but not by any arithmetic formula of size s^{1-\varepsilon}?

In this talk, we will restrict ourselves to arithmetic formulas where computation at every vertex is a multilinear polynomial and here we show explicit separations between the expressive powers of multilinear formulas of small-depth and all polynomial sizes. The multilinear restriction is a reasonable one since most polynomials of interest are indeed multilinear.

Formally, we show that for any s = poly(n) and \delta > 0, there are explicit polynomial families that have multilinear formulas of size at most s but no 'small'-depth multilinear formulas of size less than s^{0.5 - \delta}. Our proof can be viewed as a derandomization of a lower bound technique of Raz (JACM 2009) using \epsilon-biased spaces.

(Joint work with Nutan Limaye and Srikanth Srinivasan. Appeared in the proceedings of ICALP 2018.)

Algorithmes et complexité
mardi 04 septembre 2018, 11h00, Salle 1007
Kavitha Telikepalli (Tata Institute of Fundamental Research) Popular Matchings: A Tale of Two Classes

We consider popular matching problems in a stable marriage instance G. A matching M is popular if it does not lose a head-to-head election against any matching, where each vertex casts a vote for the matching where it gets a better assignment. Popular matchings always exist in G since every stable matching is popular. Algorithmic questions for several popular matching problems have been studied in the last few years: these include finding a max-size popular matching, finding a popular matching with our favorite edge(s) in it, finding a max-weight popular (mixed) matching when there are edge weights. We will see efficient algorithms for some of these problems.
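
To make the head-to-head election concrete, here is a small brute-force sketch (my own toy example; the instance and preference lists are hypothetical) that computes \phi(M,T), the number of vertices preferring M to T, and checks popularity of a stable matching on a tiny bipartite instance:

```python
from itertools import combinations

# Toy instance: prefs[v] lists v's neighbours from most to least preferred.
prefs = {
    'a1': ['b1', 'b2'], 'a2': ['b1'],
    'b1': ['a2', 'a1'], 'b2': ['a1'],
}
edges = [('a1', 'b1'), ('a1', 'b2'), ('a2', 'b1')]

def partner(M, v):
    for (x, y) in M:
        if v == x: return y
        if v == y: return x
    return None                      # unmatched

def prefers(v, M, T):
    """Does v strictly prefer its assignment in M to its assignment in T?"""
    m, t = partner(M, v), partner(T, v)
    if m == t: return False
    if t is None: return True        # being matched beats being unmatched
    if m is None: return False
    return prefs[v].index(m) < prefs[v].index(t)

def phi(M, T):
    return sum(prefers(v, M, T) for v in prefs)

def matchings(edge_list):
    """Enumerate all matchings of the tiny instance by brute force."""
    for r in range(len(edge_list) + 1):
        for S in combinations(edge_list, r):
            touched = [v for e in S for v in e]
            if len(touched) == len(set(touched)):
                yield set(S)

M = {('a1', 'b2'), ('a2', 'b1')}     # a stable matching, hence popular
print(all(phi(M, T) >= phi(T, M) for T in matchings(edges)))   # True
```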

Interestingly, all tractability results in popular matchings seem to reduce the popular matching problem to an appropriate question in either stable matchings or dominant matchings. Dominant matchings always exist in G and form a subclass of max-size popular matchings while stable matchings form a subclass of min-size popular matchings. We recently showed that deciding if G admits a popular matching that is neither stable nor dominant is NP-hard. This implies a tight 1/2-factor approximation for the max-weight popular matching problem in G and the hardness of finding a popular matching in a non-bipartite graph.

[Joint work with Agnes Cseh, Yuri Faenza, Chien-Chung Huang, Vladlena Powers, and Xingyu Zhang.]

Algorithmes et complexité
mardi 19 juin 2018, 11h00, Salle 1007
Leonard Wossnig (University College London) Sketching as a tool for faster quantum simulation

Following the works of Drineas et al. (2004), Randomised Numerical Linear Algebra (RNLA) has enjoyed an increasing surge of interest in the classical algorithms community. Following this success, we recently demonstrated a classical algorithm based on the Nystroem method which allows us to efficiently simulate the dynamics of quantum systems satisfying specific structural conditions. We briefly introduce the most basic RNLA algorithm by Drineas et al. to give an overview of the topic and then outline our own recent work. We follow with a discussion of potential applications of related methods, including spectral sparsification, in quantum algorithms for Hamiltonian simulation and quantum linear systems. We finish with a short description of our current work related to spectral sparsification (Spielman & Teng 2011) for Hamiltonian simulation.
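
For readers unfamiliar with the Nystroem method, the following generic sketch (textbook column-sampling form, not the algorithm of the talk) approximates a symmetric PSD matrix A from a few sampled columns C and the corresponding submatrix W as A ≈ C W^+ C^T:

```python
import numpy as np

def nystroem(A, idx):
    """Standard Nystroem low-rank approximation of a symmetric PSD matrix A:
    sample the columns in `idx`, and approximate A ~= C @ pinv(W) @ C.T,
    where C = A[:, idx] and W = A[np.ix_(idx, idx)]."""
    C = A[:, idx]
    W = A[np.ix_(idx, idx)]
    return C @ np.linalg.pinv(W) @ C.T

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 5))
A = X @ X.T                            # rank-5 PSD matrix of size 50 x 50
idx = rng.choice(50, size=10, replace=False)
print(np.linalg.norm(A - nystroem(A, idx)) / np.linalg.norm(A))  # ~0 here
```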

Algorithmes et complexité
mardi 05 juin 2018, 11h00, Salle 1007
Andrea Rocchetto (Oxford - UCL) Learnability and quantum computation

Computational learning theory provides a mathematical framework for rigorously formulating learning problems from both a statistical and computational perspective. During this talk I will present two independent results at the interplay of learning theory and quantum computation. First, I will address the problem: “can we learn faster with a quantum computer?”. In particular, I will discuss the learnability of the class of boolean functions that can be expressed as polynomial size formulae in disjunctive normal form (DNF) and show that this is efficiently quantum PAC learnable under product distributions. The learnability of DNFs is a central problem in learning theory and the best known classical algorithm (using standard ways to access the function) runs in superpolynomial time. Second, I will discuss the problem: “what does it mean to learn a quantum state, and can we do so efficiently?”. Here, building on a model developed by Aaronson in 2007, I will present a class of states, namely stabiliser states, that can be learned using only a linear number of measurements and polynomial classical computation. Part of the results are joint work with Varun Kanade and Simone Severini.

Algorithmes et complexité
mardi 29 mai 2018, 11h00, Salle 1007
Steve Alpern (University of Warwick, UK) Shortest Paths in Networks with Unreliable Directions

The work of Korman and others, based on investigations of navigational techniques of ‘crazy ants’, leads to the following problem: Let Q be a network with given arc lengths, and let H be a ‘home’ node. At each node a permanent direction is given (one of the adjacent arcs) which leads to a shortest path from that node to H with probability p less than one. Starting from a node S, a searcher chooses the given direction at each reached node with probability q; otherwise he chooses randomly. He arrives home at node H in expected time T=T(S,H,p,q). How should q be chosen to minimize T? Perhaps when reaching a node the searcher sees its degree k so chooses the given direction with probability q(k). Related problems have been studied for trees from a graph-theoretical perspective.
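
As a rough illustration of the model (my own sketch; the graph, p and q below are hypothetical), one can estimate T(S,H,p,q) by Monte-Carlo simulation: plant a signpost at each node that points along a shortest path with probability p, then walk from S, following the signpost with probability q and a uniformly random neighbour otherwise.

```python
import heapq
import random

def estimate_T(adj, lengths, S, H, p, q, trials=2000, rng=random):
    """Monte-Carlo estimate of the expected arrival time T(S, H, p, q).

    adj[v]     : list of neighbours of v
    lengths[e] : arc length of the undirected edge e = frozenset({u, v})
    """
    # Shortest distances to H (Dijkstra), used to plant correct signposts.
    dist = {v: float('inf') for v in adj}
    dist[H] = 0.0
    heap = [(0.0, H)]
    while heap:
        d, v = heapq.heappop(heap)
        if d > dist[v]:
            continue
        for u in adj[v]:
            nd = d + lengths[frozenset({u, v})]
            if nd < dist[u]:
                dist[u] = nd
                heapq.heappush(heap, (nd, u))

    total = 0.0
    for _ in range(trials):
        # Plant signposts: correct (first arc of a shortest path) w.p. p.
        sign = {}
        for v in adj:
            if v == H:
                continue
            good = [u for u in adj[v]
                    if dist[u] + lengths[frozenset({u, v})] == dist[v]]
            bad = [u for u in adj[v] if u not in good] or adj[v]
            sign[v] = rng.choice(good) if rng.random() < p else rng.choice(bad)
        # Walk from S until reaching H, accumulating traversed length.
        v, t = S, 0.0
        while v != H:
            nxt = sign[v] if rng.random() < q else rng.choice(adj[v])
            t += lengths[frozenset({v, nxt})]
            v = nxt
        total += t
    return total / trials

# Hypothetical 4-cycle with unit lengths, home node 'H'.
adj = {'H': ['a', 'b'], 'a': ['H', 'c'], 'b': ['H', 'c'], 'c': ['a', 'b']}
lengths = {frozenset(e): 1.0 for e in [('H','a'), ('H','b'), ('a','c'), ('b','c')]}
print(estimate_T(adj, lengths, 'c', 'H', p=0.8, q=0.7))
```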

Algorithmes et complexité
mardi 22 mai 2018, 11h00, Salle 1007
Anupam Prakash (IRIF) Improved quantum linear system solvers and applications to machine learning

I will describe recent results in the QRAM model that improve the sparsity dependence for quantum linear system solvers to \mu(A) = \min(||A||_F, ||A||_1), where ||A||_1 is the maximum l_1 norm of the rows and columns of A and ||A||_F is the Frobenius norm. These results achieve a worst-case quadratic speedup over the HHL algorithm and its improvements, and exponential speedups in some cases. I will also present some applications of the improved linear system solvers to quantum machine learning.

Algorithmes et complexité
mercredi 11 avril 2018, 14h00, Salle 1007
Libor Caha (RCQI Bratislava) The Feynman-Kitaev computer's clock: bias, gaps, idling and pulse tuning

We present a collection of results about the clock in Feynman's computer construction and Kitaev's Local Hamiltonian problem. First, by analyzing the spectra of quantum walks on a line with varying endpoint terms, we find a better lower bound on the gap of the Feynman Hamiltonian, which translates into a less strict promise gap requirement for the QMA-complete Local Hamiltonian problem. We also translate this result into the language of adiabatic quantum computation. Second, introducing an idling clock construction with a large state space but fast Cesaro mixing, we provide a way for achieving an arbitrarily high success probability of computation with Feynman's computer with only a logarithmic increase in the number of clock qubits. Finally, we tune and thus improve the costs (locality, gap scaling) of implementing a (pulse) clock with a single excitation.

Joint work with Zeph Landau and Daniel Nagaj

Algorithmes et complexité
mardi 10 avril 2018, 11h00, Salle 1007
Pierre Aboulker (Université Grenoble-Alpes, Labo G-SCOP) Distributed coloring of graphs with fewer colors

We give an efficient (polylog(n) rounds) algorithm that colors a graph with few colors in the distributed setting (LOCAL model). Among other things, we will see that this algorithm 6-colors planar graphs (using 1 less color than the previous best algorithm of Goldberg, Plotkin, and Shannon) and will show that one cannot 4-color planar graphs in o(n) rounds.

This is a joint work with Bousquet, Bonamy, and Esperet.

Algorithmes et complexité
mardi 27 mars 2018, 11h00, Salle 1007
Kamil Khadiev (University of Latvia) Quantum online algorithms with restricted memory

An online algorithm is a well-known computational model for solving optimization problems. The defining property of this model is the following: the algorithm reads its input piece by piece and must produce some of the output variables immediately after reading certain input variables, even if the answer depends on the whole input. The goal of an online algorithm is to minimize an objective function.

We consider streaming algorithms and two-way automata as models for online algorithms. We compare quantum and classical models in the case of logarithmic and sublogarithmic memory.

We obtain the following results for online streaming algorithms:

- A quantum online streaming algorithm with 1 qubit of memory and 1 advice bit can be better than a classical online streaming algorithm with $o(\log n)$ bits of memory and $o(\log n)$ advice bits.
- A quantum online streaming algorithm with a constant amount of memory and $t$ advice bits can be better than deterministic online streaming algorithms with $t$ advice bits and unlimited computational power.
- In the case of polylogarithmic memory, quantum online streaming algorithms can be better than classical ones even if the classical algorithms have advice bits.

We obtain the following result for two-way automata as online algorithms for solving online minimization problems: two-way automata with quantum and classical states and a constant amount of memory can be better than their classical counterparts, even if the latter have advice bits.

Algorithmes et complexité
mardi 20 mars 2018, 11h00, Salle 1007
Valia Mitsou (IRIF) On the complexity of defective coloring.

In defective coloring, we are given a graph G and two integers c, Δ* and are asked if we can c-color G so that the maximum degree induced by any color class is at most Δ*. This problem is a generalization of proper coloring, for which Δ* = 0.
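
A small checker makes the definition concrete (my own sketch; the triangle example is hypothetical): a coloring is valid for parameters (c, Δ*) if it uses at most c colors and every vertex has at most Δ* neighbours of its own color.

```python
def is_defective_coloring(adj, coloring, c, defect):
    """Check that `coloring` uses at most c colors and that every vertex has
    at most `defect` neighbours of its own color (i.e. each color class
    induces a subgraph of maximum degree <= defect)."""
    if len(set(coloring.values())) > c:
        return False
    return all(
        sum(1 for u in adj[v] if coloring[u] == coloring[v]) <= defect
        for v in adj
    )

# A triangle cannot be properly 2-colored (defect 0), but it can be
# 2-colored with defect 1.
adj = {1: [2, 3], 2: [1, 3], 3: [1, 2]}
print(is_defective_coloring(adj, {1: 'r', 2: 'r', 3: 'b'}, c=2, defect=0))  # False
print(is_defective_coloring(adj, {1: 'r', 2: 'r', 3: 'b'}, c=2, defect=1))  # True
```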

Defective coloring, like proper coloring, has many applications, for example in scheduling (assign jobs to machines which can work on up to Δ* jobs in parallel), or in radio networks (assign frequencies to antennas with some slack, that is, an antenna can be assigned the same frequency with up to Δ* other antennas within its range without a signal conflict).

We will present some algorithmic and complexity results on this problem in classes of graphs where proper coloring is easy: on the one hand, we will consider subclasses of perfect graphs, namely cographs and chordal graphs; on the other hand we will talk about structural parameterizations, such as treewidth and feedback vertex set.

Joint work with Rémy Belmonte (University of Electro-Communications, Tokyo) and Michael Lampis (Lamsade, Université Paris Dauphine) that appeared in WG '17 and in STACS '18.

Algorithmes et complexité
mardi 13 février 2018, 11h00, Salle 1007
Antoine Grospellier (INRIA) Efficient decoding of random errors for quantum expander codes

We show that quantum expander codes, a constant-rate family of quantum LDPC codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottesman’s construction of fault tolerant schemes with constant space overhead. In order to obtain this result, we study a notion of α-percolation: for a random subset E of vertices of a given graph, we consider the size of the largest connected α-subset of E, where X is an α-subset of E if |X ∩ E| ≥ α|X|.
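
To illustrate the definition (my own brute-force sketch on a hypothetical path graph, not the percolation analysis of the paper): X is an α-subset of E if at least an α-fraction of X lies in E, and one looks for the largest connected such X.

```python
from itertools import combinations

def largest_alpha_subset(adj, E, alpha):
    """Brute force (exponential; tiny graphs only): size of the largest
    connected vertex set X with |X ∩ E| >= alpha * |X|."""
    vertices = list(adj)

    def connected(X):
        X = set(X)
        seen, stack = set(), [next(iter(X))]
        while stack:
            v = stack.pop()
            if v in seen:
                continue
            seen.add(v)
            stack.extend(u for u in adj[v] if u in X)
        return seen == X

    for r in range(len(vertices), 0, -1):
        for X in combinations(vertices, r):
            if len(set(X) & E) >= alpha * len(X) and connected(X):
                return r
    return 0

# Path 1-2-3-4-5 with E = {1, 2, 4}: for alpha = 1/2 the whole path
# qualifies (3 of 5 vertices lie in E); for alpha = 3/4 the best connected
# set has size 4, e.g. {1, 2, 3, 4}.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
E = {1, 2, 4}
print(largest_alpha_subset(adj, E, 0.5), largest_alpha_subset(adj, E, 0.75))  # 5 4
```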

Algorithmes et complexité
lundi 12 février 2018, 14h00, Salle 3052
Nabil Mustafa (ESIEE) Local Search for Geometric Optimization Problems.

Local-search is an intuitive approach towards solving combinatorial optimization problems: start with any feasible solution, and try to improve it by local improvements. Like other greedy approaches, it can fail to find the global optimum by getting stuck on a locally optimal solution. In this talk I will present the ideas and techniques behind the use of local-search in the design of provably good approximation algorithms for some combinatorial problems, such as independent set, vertex cover, and dominating set in geometric intersection graphs. The key unifying theme is the analysis of local expansion in planar graphs. Joint work with Norbert Bus, Shashwat Garg, Bruno Jartoux and Saurabh Ray.

Algorithmes et complexité
mardi 06 février 2018, 11h00, Salle 1007
Shendan Jin (LIP6) Online Maximum Matching with Recourse

We study the online maximum matching problem with recourse in a model in which the edges are associated with a known recourse parameter k. An online algorithm for this problem has to maintain a valid matching while edges of the underlying graph are presented one after the other. At any moment the algorithm can decide to include an edge into the matching or to exclude it, under the restriction that at most k actions per edge take place, where k is typically a small constant. This problem was introduced and studied in the context of general online packing problems with recourse by [Avitabile, Mathieu, Parkinson, 2013], whereas the special case k=2 has been studied by [Boyar et al., 2017].
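
The following toy harness (my own sketch, with a plain greedy policy as a placeholder rather than the L-Greedy algorithm analysed in the paper) illustrates the bookkeeping of the model: edges arrive one by one, the algorithm may include or exclude edges, and each edge may be acted upon at most k times.

```python
class RecourseMatching:
    """Toy harness for online matching with recourse: edges arrive one at a
    time, the algorithm may include/exclude edges, and every edge may be
    subject to at most k such actions in total.  The plugged-in policy is a
    plain greedy (each edge acted on at most once); it only illustrates the
    budget bookkeeping, not the L-Greedy algorithm from the paper."""

    def __init__(self, k):
        self.k = k
        self.matching = set()          # currently matched edges
        self.actions = {}              # edge -> number of include/exclude actions
        self.matched = set()           # currently matched vertices

    def _act(self, edge, include):
        assert self.actions.get(edge, 0) < self.k, "recourse budget exhausted"
        self.actions[edge] = self.actions.get(edge, 0) + 1
        (u, v) = edge
        if include:
            self.matching.add(edge); self.matched |= {u, v}
        else:
            self.matching.discard(edge); self.matched -= {u, v}

    def on_arrival(self, u, v):
        # Placeholder policy: take the edge iff both endpoints are free.
        if u not in self.matched and v not in self.matched:
            self._act((u, v), include=True)

alg = RecourseMatching(k=2)
for e in [(1, 2), (2, 3), (3, 4)]:
    alg.on_arrival(*e)
print(alg.matching)                    # {(1, 2), (3, 4)}
```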

In the first part of this paper, we consider the edge arrival model, in which an arriving edge never disappears from the graph. Here, we first show an improved analysis on the performance of the algorithm given in [AMP, 2013], by exploiting the structure of the specific problem. In addition, we extend the result of [Boyar et al., 2017] and show that the greedy algorithm has ratio 3/2 for every even positive k and ratio 2 for every odd k. Moreover, we present and analyze L-Greedy — an improvement of the greedy algorithm — which for small values of k outperforms the algorithm of [AMP, 2013]. In terms of lower bounds, we show that no deterministic algorithm better than 1+1/(k−1) exists, improving upon the lower bound of 1+1/k shown in [AMP, 2013].

The second part of the paper is devoted to the edge arrival/departure model, which is the fully dynamic variant of the online matching problem with recourse. The analysis of L-Greedy carries through in this model; moreover we show a general lower bound of (k^2−3k+3)/(k^2−4k+5) for all even k≥4 and provide the stronger bound of 10/7 for k=4. For k∈{2,3}, the competitive ratio is 3/2.

Joint work with Spyros Angelopoulos and Christoph Dürr.

Algorithmes et complexité
lundi 05 février 2018, 14h00, Salle 2018
Charles Bennett () Do-It-Yourself randomness over Device-Independent randomness

Algorithmes et complexité
mardi 30 janvier 2018, 10h00, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Allison Bishop (DI ENS, IRIF - IEX and Columbia University) On Algorithms Operating in Adversarial Conditions

Eighth lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Allison Bishop on Algorithms Operating in Adversarial Conditions (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2018-01-30-10h00.htm

Algorithmes et complexité
mardi 23 janvier 2018, 10h00, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Tim Roughgarden (DI ENS, IRIF - Stanford University) On Game Theory Through the Computational Lens

Seventh lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Tim Roughgarden on Game Theory Through the Computational Lens (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2018-01-23-10h00.htm

Algorithmes et complexité
mercredi 17 janvier 2018, 11h00, Salle 2015
Philip Lazos (Oxford) The Infinite Server Problem

We study a variant of the k-server problem, the infinite server problem, in which infinitely many servers reside initially at a particular point of the metric space and serve a sequence of requests. In the framework of competitive analysis, we show a surprisingly tight connection between this problem and the (h,k)-server problem, in which an online algorithm with k servers competes against an offline algorithm with h servers. Specifically, we show that the infinite server problem has bounded competitive ratio if and only if the (h,k)-server problem has bounded competitive ratio for some k=O(h). We give a lower bound of 3.146 for the competitive ratio of the infinite server problem, which implies the same lower bound for the (h,k)-server problem even when k/h→∞ and holds also for the line metric; the previously known bounds were 2.4 for general metric spaces and 2 for the line. For weighted trees and layered graphs we obtain upper bounds, although they depend on the depth. Of particular interest is the infinite server problem on the line, which we show to be equivalent to the seemingly easier case in which all requests are in a fixed bounded interval away from the original position of the servers. This is a special case of a more general reduction from arbitrary metric spaces to bounded subspaces. Unfortunately, classical approaches (double coverage and generalizations, work function algorithm, balancing algorithms) fail even for this special case.

Algorithmes et complexité
mardi 16 janvier 2018, 10h00, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Jon Kleinberg (DI ENS, IRIF - Cornell University) On Algorithms and Fairness

Sixth lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Jon Kleinberg on Algorithms and Fairness (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2018-01-16-10h00.htm

Algorithmes et complexité
mardi 16 janvier 2018, 14h30, Salle 3014
Nicolas Thiery (Université Paris Sud) Computing huge subspaces of diagonal harmonic polynomials: symmetries to the rescue!

Last spring, I visited François Bergeron and we worked on his favorite objects: the spaces H(n,k) of diagonal harmonic polynomials in k sets of n variables. Those spaces connect to many areas of algebraic combinatorics, representation theory and beyond, and the case H(n,2) became famous a decade ago with the n! conjecture and its proof by Haiman.

To fuel his ongoing studies, François needed to compute the structure of H(5,6). This is a space of dimension 6·10^5 made of polynomials in 30 variables of degree up to 15, each having thousands of terms.

In this talk, I'll explain how the calculation can now be completed in 45 minutes with a dozen cores and ~15 GB of memory. This exploits a combination of strategies (symmetries, representation theory of the symmetric and general linear group, …), each of which reduces the complexity in time and memory by one or two orders of magnitude.

There will be few prerequisites, and it is my hope that some strategies (and maybe the code!) could be used in other contexts.

Algorithmes et complexité
mardi 09 janvier 2018, 10h00, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Mark Jerrum (DI ENS, IRIF - Queen Mary, University of London) On Sampling and Approximate Counting

Fifth lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Mark Jerrum on Sampling and Approximate Counting (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2018-01-09-10h00.htm

Algorithmes et complexité
mardi 19 décembre 2017, 10h00, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Bruno Salvy (DI ENS, IRIF - INRIA) On Analytic Combinatorics

Fourth lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Bruno Salvy on Analytic Combinatorics (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2017-12-19-10h00.htm

Algorithmes et complexité
jeudi 14 décembre 2017, 14h30, Salle 1016
Emanuele Natale (Max Planck Institute for Informatics) Computing through Dynamics: Principles for Distributed Coordination

We present analytical results regarding some simple randomized protocols, called dynamics, for solving fundamental distributed consensus problems, together with examples on how to use them to build lightweight algorithms for other important distributed problems. More specifically, we provide an overview of the theory regarding several dynamics such as the 3 Majority, the Averaging and the Undecided-State ones, and we show how to use them to solve plurality consensus, distributed clustering, clock synchronization and information spreading. Motivated by applications to systems whose complexity is in-between biological and human-made ones, we focus mainly on unstructured and random interaction models, and we also deal with scenarios in which the communication is affected by noise or when a self-stabilizing protocol is required.

Algorithmes et complexité
mardi 12 décembre 2017, 10h00, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Amos Fiat (DI ENS, IRIF - University of Tel-Aviv) On Static and Dynamic Pricing

Third lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Amos Fiat on Static and Dynamic Pricing (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2017-12-12-10h00.htm

Algorithmes et complexité
mardi 12 décembre 2017, 14h00, Salle 3052
Jean Krivine (IRIF) Incremental Update for Graph Rewriting

Graph rewriting formalisms are well-established models for the representation of biological systems such as protein-protein interaction networks. The combinatorial complexity of these models usually prevents any explicit representation of the variables of the system, and one has to rely on stochastic simulations in order to sample the possible trajectories of the underlying Markov chain. The bottleneck of stochastic simulation algorithms is the update of the propensity function that describes the probability that a given rule is to be applied next. In this talk we present an algorithm based on a data structure, called extension basis, that can be used to update the counts of predefined graph observables after a rule of the model has been applied.
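
For context, the following generic sketch shows one step of the standard stochastic simulation (Gillespie) loop, which is where the propensity update cost enters; it is only a textbook illustration, not the incremental-update algorithm of the paper.

```python
import random

def gillespie_step(propensities, rng=random):
    """One step of the standard stochastic simulation (Gillespie) algorithm:
    given the propensity a_i of each rule, draw the waiting time from an
    exponential with rate sum(a_i) and pick the next rule with probability
    proportional to its propensity."""
    total = sum(propensities)
    dt = rng.expovariate(total)
    r, acc = rng.random() * total, 0.0
    for i, a in enumerate(propensities):
        acc += a
        if r < acc:
            return i, dt
    return len(propensities) - 1, dt

print(gillespie_step([0.5, 1.5, 2.0]))   # (rule index, waiting time)
```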

Reference: Boutillier P., Ehrhard T., Krivine J. (2017) Incremental Update for Graph Rewriting. In: Yang H. (eds) Programming Languages and Systems. ESOP 2017. Lecture Notes in Computer Science, vol 10201. Springer, Berlin, Heidelberg

Algorithmes et complexité
mardi 05 décembre 2017, 10h00, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Laurent Massoulié (DI ENS, IRIF - INRIA) Community Detection

Second lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Laurent Massoulié on Community Detection (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2017-12-05-10h00.htm

Algorithmes et complexité
mardi 28 novembre 2017, 10h00, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Pierre Fraigniaud (DI ENS, IRIF - IRIF, CNRS) On Distributed Algorithms

First lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Pierre Fraigniaud on Distributed Algorithms (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2017-11-28-10h00.htm

Algorithmes et complexité
mardi 21 novembre 2017, 11h00, Salle 1007
André Chailloux (INRIA Paris) A tight security reduction in the quantum random oracle model for code-based signature schemes

Quantum-secure signature schemes have received a lot of attention recently, in particular because of the NIST call to standardize quantum-safe cryptography. However, only a few signature schemes have concrete quantum security because of technical difficulties associated with the Quantum Random Oracle Model (QROM). In this paper, we show that code-based signature schemes based on the full domain hash paradigm can behave very well in the QROM, i.e. that we can have tight security reductions. We also study quantum algorithms related to the underlying code-based assumption. Finally, we apply our reduction to a concrete example: the SURF signature scheme. We provide parameters for 128 bits of quantum security in the QROM and show that the obtained parameters are competitive compared to other similar quantum-secure signature schemes.

Joint work with Thomas Debris-Alazard

Algorithmes et complexité
jeudi 16 novembre 2017, 14h00, Salle 3052
Elena Kirshanova (ENS Lyon) Connections between the Dihedral Coset Problem and LWE

In this talk, I explain that under quantum polynomial time reductions, LWE is equivalent to a relaxed version of the dihedral coset problem (DCP), called extrapolated DCP (eDCP). The extent of extrapolation varies with the LWE noise rate. By considering different extents of extrapolation, the result generalizes Regev’s famous proof that if DCP is in BQP (quantum poly-time) then so is LWE (FOCS 02). We'll also discuss a connection between eDCP and Childs and Van Dam’s algorithm for generalized hidden shift problems (SODA 07). The result implies that a BQP solution for LWE might not require the full power of solving DCP, but rather only a solution for its relaxed version, eDCP, which could be easier.

Algorithmes et complexité
mardi 14 novembre 2017, 14h00, 3052
Laurent Massoulié (MSR-Inria) Rapid Mixing of Local Graph Dynamics

Graph dynamics arise naturally in many contexts. For instance in peer-to-peer networks, a participating peer may replace an existing connection with one neighbour by a new connection with a neighbour's neighbour. Several such local rewiring rules have been proposed to ensure that peer-to-peer networks achieve good connectivity properties (e.g. high expansion) in equilibrium. However it has remained an open question whether there existed such rules that also led to fast convergence to equilibrium. In this work we provide an affirmative answer: We exhibit a local rewiring rule that converges to equilibrium after each participating node has undergone only a number of rewirings that is poly-logarithmic in the system size. The proof involves consideration of the whole isoperimetric profile of the graph, and may be of independent interest.

This is joint work with Rémi Varloot.

Algorithmes et complexité
vendredi 10 novembre 2017, 15h00, Salle 3052
Simon Apers (Ghent University) Mixing and quantum sampling with quantum walks: some results and questions

Quantum walks have been shown to quadratically speed up search problems on graphs. We discuss their usage towards mixing and quantum sampling on graphs, problems that are in a sense dual to the search problem. Quantum sampling, or quantum state generation, roughly amounts to creating a superposition over the elements of a set, where this set may be implicitly defined. It is known that if this task can be solved efficiently, then the complexity class called “statistical zero knowledge” is in BQP. We discuss a folklore approach to this problem called “inverted search”, where a quantum search algorithm is essentially inverted to yield a quantum sampling algorithm. We also show how this approach solves the search problem when starting from a single element of the graph, rather than a superposition. The easier task of mixing on graphs asks for a classical sample of the set, rather than a superposition. It has long been conjectured that quantum walks can quadratically speed up this task when compared to simple random walks. We show how a quantum sampling approach confirms this conjecture for a limited set of graphs. For more general graphs however, it is conceivable that the quantum sampling problem cannot be solved efficiently, such that a different approach towards mixing is needed. We discuss ideas in this direction.

Algorithmes et complexité
mardi 31 octobre 2017, 11h00, Salle 1007
Ami Paz (IRIF) A (2+\epsilon)-Approximation for Maximum Weight Matching in the Semi-Streaming Model

In this talk I will present our new (2+\epsilon)-approximation algorithm for the maximum weight matching problem in the semi-streaming model, that was presented in SODA 2017.

We will start by discussing the local-ratio technique, a simple, sequential approximation paradigm we use in our algorithm. Then, we will consider the variations needed in order to adjust this technique to the semi-streaming model.

Based on a joint work with Gregory Schwartzman.

Algorithmes et complexité
mardi 17 octobre 2017, 14h00, Salle 3052
Claire Mathieu (DI - ENS) Online k-compaction

Given, at each time t = 1, 2, …, n, a new file of length l(t) and a read rate r(t), an online k-compaction algorithm must maintain a collection of at most k files, choosing (at each time t, without knowing future inputs) some of the files to merge into one, thereby incurring a merge cost equal to the total length of the merged files and a read cost equal to the read rate r(t) times the number of files present at time t. The goal is to minimize the total cost over time. k-compaction algorithms are a key component of log-structured merge trees, the file-based data structure underlying NoSQL databases such as Accumulo, Bigtable, Cassandra, HBase, and others. We initiate the theoretical study of k-compaction algorithms. We formalize the problem, consider worst-case, average-case and competitive analysis (per-instance optimality), and propose new algorithms that are optimal according to these metrics.
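
A minimal simulation of the cost model (my own sketch; the ordering of charges within a step and the merge-everything policy are illustrative assumptions) may help fix the definitions:

```python
def total_cost(instance, k, choose_merge):
    """Simulate an online k-compaction policy on `instance`, a list of
    (length, read_rate) pairs.  At each step the new file is added, the
    policy picks a subset of current files to merge into one, and we charge
        merge cost = total length of the merged files (if >= 2 were merged)
      + read cost  = read_rate * number of files present afterwards."""
    files, cost = [], 0.0
    for (length, rate) in instance:
        files.append(length)
        to_merge = choose_merge(files, k)           # indices to merge
        if len(to_merge) >= 2:
            merged = sum(files[i] for i in to_merge)
            cost += merged
            files = [f for i, f in enumerate(files) if i not in set(to_merge)]
            files.append(merged)
        assert len(files) <= k, "policy must keep at most k files"
        cost += rate * len(files)
    return cost

def merge_all_when_full(files, k):
    # Simplest possible policy: merge everything whenever the bound k is hit.
    return list(range(len(files))) if len(files) > k else []

instance = [(10, 1), (1, 1), (1, 1), (1, 1), (20, 1)]
print(total_cost(instance, k=3, choose_merge=merge_all_when_full))   # 22.0
```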

This is joint work with Carl Staelin, Neal E. Young, and Arman Yousefi.

Algorithmes et complexité
mardi 10 octobre 2017, 11h00, Salle 1007
Victor Verdugo (ENS - Universidad de Chile) How Large is Your Graph?

We consider the problem of estimating the size of a graph when one is given only local access to it. We formally define a query model in which one starts with a \emph{seed} node and is allowed to make queries about neighbours of nodes that have already been seen. In the case of undirected graphs, an estimator of Katzir et al. (2014) based on a sample from the stationary distribution π uses O(1/‖π‖_2 + d_avg) queries; we prove that this is tight. In addition, we establish this as a lower bound even when the algorithm is allowed to crawl the graph arbitrarily; the results of Katzir et al. give an upper bound that is worse by a multiplicative factor t_mix · log(n). The picture becomes significantly different in the case of directed graphs. We show that, without strong assumptions on the graph structure, the number of nodes cannot be predicted to within a constant multiplicative factor without using a number of queries that is at least linear in the number of nodes; in particular, rapid mixing and small diameter, properties that most real-world networks exhibit, do not suffice. The question of interest is then whether any algorithm can beat breadth-first search. We introduce a new parameter, generalising the well-studied conductance, such that if a suitable bound on it exists and is known to the algorithm, the number of queries required is sublinear in the number of edges; we show that this is tight.
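
As background, here is a simplified collision-based estimator in the spirit of Katzir et al. (my own rendering, under the idealized assumption that nodes are sampled directly from the degree-proportional stationary distribution): with samples X_1, …, X_r, one can estimate n by (Σ_i 1/deg(X_i)) · (Σ_i deg(X_i)) / (2C), where C is the number of colliding sample pairs.

```python
import random
from itertools import combinations

def estimate_size(adj, r, rng=random):
    """Collision-based size estimate from r nodes sampled (idealized here)
    from the degree-proportional stationary distribution pi(v) ~ deg(v):
        n_hat = (sum 1/deg) * (sum deg) / (2 * #colliding pairs)."""
    nodes = list(adj)
    weights = [len(adj[v]) for v in nodes]
    samples = rng.choices(nodes, weights=weights, k=r)
    collisions = sum(1 for a, b in combinations(samples, 2) if a == b)
    if collisions == 0:
        return float('inf')            # too few samples to see a collision
    s_inv = sum(1.0 / len(adj[v]) for v in samples)
    s_deg = sum(len(adj[v]) for v in samples)
    return s_inv * s_deg / (2 * collisions)

# Hypothetical test graph: a cycle on 200 nodes (every degree = 2).
n = 200
adj = {i: [(i - 1) % n, (i + 1) % n] for i in range(n)}
print(estimate_size(adj, r=60))        # fluctuates around 200
```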

Algorithmes et complexité
mardi 03 octobre 2017, 11h00, Salle 1007
Giuseppe Italiano (Universita di Roma Tor Vergata) Decremental Single-Source Reachability and Strongly Connected Components in O(m \sqrt{n log n}) Total Update Time

We present randomized algorithms with a total update time of O(m \sqrt{n} log n) for the problems of decremental single-source reachability and decremental strongly connected components on directed graphs. This sharply improves recent breakthrough results of Henzinger, Krinninger and Nanongkai [STOC 14, ICALP 15]. In addition, our algorithms are arguably simpler. In this talk, we first review the Las Vegas algorithm by Roditty and Zwick [SICOMP 08] and the deterministic algorithm by Łącki [TALG 13], both with a total update time O(mn) (expected time in the case of the algorithm by Roditty and Zwick). Our algorithms carefully combine these two results, along with new structural properties, including a separation lemma in the case of graphs with a large diameter.

Joint work with Shiri Chechik, Thomas Dueholm Hansen, Jakub Łącki and Nikos Parotsidis.

Algorithmes et complexité
mardi 26 septembre 2017, 11h00, Salle 1007
Vianney Perchet (CMLA, Ens Paris-Saclay) Online Search Problems

In traditional “search problems”, an agent aims at exploring a graph in order to find a hidden object as fast as possible. We consider here the Bayesian repeated scenario with several i.i.d. instances of the same basic search problem. The objective is, within a given amount of time, to find as many objects as possible, with the possibility of leaving an instance for the next one at any moment.

We also consider the non-Bayesian case with a variant of regret minimization. We provide and analyze several algorithms with fast rates of convergence and/or guaranteed efficiency.

Algorithmes et complexité
mardi 19 septembre 2017, 11h00, Salle 1007
Vincent Viallat Cohen-Addad () Hierarchical Clustering: Objective Functions and Algorithms

Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a `good' hierarchical clustering is one that minimizes some cost function. He showed that this cost function has certain desirable properties.

We take an axiomatic approach to defining `good' objective functions for both similarity and dissimilarity-based hierarchical clustering. We characterize a set of “admissible” objective functions (that includes Dasgupta's one) that have the property that when the input admits a `natural' hierarchical clustering, it has an optimal value.

Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better algorithms. For similarity-based hierarchical clustering, Dasgupta showed that the divisive sparsest-cut approach achieves an O(log^{3/2} n)-approximation. We give a refined analysis of the algorithm and show that it in fact achieves an O(\sqrt{log n})-approximation (Charikar and Chatziafratis independently proved that it is an O(\sqrt{log n})-approximation). This improves upon the LP-based O(log n)-approximation of Roy and Pokutta. For dissimilarity-based hierarchical clustering, we show that the classic average-linkage algorithm gives a factor-2 approximation, and provide a simple and better algorithm that gives a factor-3/2 approximation.

Finally, we consider `beyond-worst-case' scenario through a generalisation of the stochastic block model for hierarchical clustering. We show that Dasgupta's cost function has desirable properties for these inputs and we provide a simple 1 + o(1)-approximation in this setting.

Joint work with Varun Kanade, Frederik Mallmann-Trenn, and Claire Mathieu.

Algorithmes et complexité
mardi 11 juillet 2017, 11h00, Salle 1007
Oded Regev (Courant Institute of Mathematical Sciences, NYU) A Reverse Minkowski Theorem

Informally, Minkowski's first theorem states that lattices that are globally dense (have small determinant) are also locally dense (have lots of points in a small ball around the origin). This fundamental result dates back to 1891 and has a very wide range of applications.

I will present a proof of a reverse form of Minkowski's theorem, conjectured by Daniel Dadush in 2012, showing that locally dense lattices are also globally dense (in the appropriate sense).

The talk will be self contained and I will not assume any familiarity with lattices.

Based on joint papers with Daniel Dadush and Noah Stephens-Davidowitz.

Algorithmes et complexité
jeudi 06 juillet 2017, 11h00, Salle 3052
Joel Friedman (University of British Columbia) Inner Rank and Lower Bounds for Matrix Multiplication

We give a short proof of the optimality of Strassen's classical algorithm for 2 x 2 matrix multiplication, and more generally give a 2n^2 - n + 1 lower bound for n x n matrix multiplication, in terms of the border rank (and, hence, also rank) of the associated tensor, over an arbitrary field.

We call the tool we use “inner rank,” which seems to have gone unnoticed in the literature (at least in the way we use it). While our bounds are not competitive with the best bounds to date over the complex numbers, we argue that “inner rank” may lead to improved results and merits further study.

Algorithmes et complexité
mardi 20 juin 2017, 11h00, Salle 1007
Laurent Bulteau (CNRS, Laboratoire d'Informatique Gaspard Monge, Marne-la-Vallée, France.) Beyond Adjacency Maximization: Scaffold Filling for New String Distances

In Genomic Scaffold Filling, one aims at polishing in silico a draft genome, called a scaffold. The scaffold is given in the form of an ordered set of gene sequences, called contigs. This is done by comparing the scaffold with an already complete reference genome from a closely related species. More precisely, given a scaffold S, a reference genome G (represented as a string) and a score function f() between two genomes, the aim is to complete S by adding the missing genes from G so that the obtained complete genome S* optimizes f(S*,G). In this talk, we will consider two alternative score functions: the first aims at maximizing the number of common k-mers between S* and G (k-Mer Scaffold Filling), the second aims at minimizing the number of string breakpoints between S* and G (Min-Breakpoint Scaffold Filling). We study these problems from the parameterized complexity point of view, providing fixed-parameter (FPT) algorithms for both problems. In particular, we show that k-Mer Scaffold Filling is FPT wrt. the number of additional k-mers realized by the completion of S. We also show that Min-Breakpoint Scaffold Filling is FPT wrt. a parameter combining the number of missing genes, the number of gene repetitions and the target distance.

Algorithmes et complexité
mardi 13 juin 2017, 11h00, Salle 1007
Abel Molina () The Optimality of Projections for Quantum State Exclusion

We will first motivate the problem of quantum state exclusion of pure states, through its connections with the PBR game and with compatibility conditions for quantum state assignments. Then, we will discuss our recent result regarding the optimality of projections for perfect state exclusion of 3 pure states in 3 dimensions (arXiv:1702.06449). We will describe our techniques to prove this result, which are based on arguments involving convexity, rank and symmetry properties. Finally, we will discuss other related results, as well as possible avenues for extending our results.

Algorithmes et complexité
mardi 06 juin 2017, 16h00, 3052
Thomas Vidick (California Institute of Technology) Entanglement Tests from Group Representations.

I will present a novel perspective on entanglement tests, such as the CHSH inequality and extensions thereof, pioneered by Slofstra and (unknowingly) Gowers-Hatami, that is based on the theory of (approximate) representations of non-abelian groups. We will see how this theory leads to a simple analysis of:

(i) The first family of self-tests (or, two-player entangled games) for arbitrarily high-dimensional entanglement that is robust to a constant fraction of noise. (Joint work with Natarajan, arXiv:1610.03574.)

(ii) The first finite self-test such that for any eps>0, entangled states of dimension poly(1/eps) are both necessary and sufficient to achieve success probability 1-eps. (Joint work with Slofstra, in preparation.)

Algorithmes et complexité
mardi 02 mai 2017, 11h00, Salle 1007
Pierre Senellart (DI/ENS) Tree decompositions for probabilistic data management

Joint work with Antoine Amarilli, Pierre Bourhis, Mikaël Monet, Silviu Maniu.

In this talk, we review recent work on the problem of efficiently querying probabilistic databases, i.e., compact representations of probability distributions over databases, by exploiting the structure of the data. In the deterministic setting, a celebrated result by Courcelle shows that any monadic second-order query can be evaluated in linear-time on instances of bounded treewidth. We show that this result generalizes to probabilistic instances through the use of provenance. In the probabilistic setting, we show that a bound on the treewidth is necessary for query evaluation to be tractable, in sharp contrast with the deterministic setting. This leads us to studying which real-world databases actually have low treewidth, and to propose practical algorithms for query evaluation in databases that can be partially decomposed in a low-treewidth part and a high-treewidth core.

Pierre Senellart is a Professor in the Computer Science Department at the École normale supérieure in Paris, France, and the head of the Valda team of Inria Paris. He is an alumnus of ENS and obtained his M.Sc. (2003) and Ph.D. (2007) in computer science from Université Paris-Sud, studying under the supervision of Serge Abiteboul. He was awarded an Habilitation à diriger les recherches in 2012 from Université Pierre et Marie Curie. Before joining ENS, he was an Associate Professor (2008–2013) then a Professor (2013–2016) at Télécom ParisTech. He also held secondary appointments as Lecturer at the University of Hong Kong in 2012–2013, and as Senior Research Fellow at the National University of Singapore from 2014 to 2016. His research interests focus around practical and theoretical aspects of Web data management, including Web crawling and archiving, Web information extraction, uncertainty management, Web mining, and intensional data management.

Algorithmes et complexité
mardi 25 avril 2017, 11h00, Salle 1007
Nicolas Flammarion (DI/ENS) Optimal rates for Least-Squares Regression through stochastic gradient descent.

At a broad level, stochastic optimization is concerned with optimizing a function in the presence of noise. In this context, we consider the optimization of a quadratic objective function whose gradients are only accessible through a stochastic oracle that returns the gradient at any given point plus a zero-mean finite variance random error. We present the first algorithm that achieves the optimal prediction error rates for least-squares regression, both in terms of forgetting of initial conditions in O(1/n^2), and in terms of dependence on the noise and dimension d of the problem, as O(d/n). Our new algorithm is based on averaged accelerated regularized gradient descent, and may also be analyzed through finer assumptions on initial conditions and the Hessian matrix, leading to dimension-free quantities that may still be small while the “optimal” terms above are large.

Algorithmes et complexité
mardi 18 avril 2017, 11h00, Salle 1007
Adrian Kosowski (Inria, IRIF Université Paris 7) Protocols for Detecting a Signal

We consider the following interaction scenario: a population of agents is required to perpetually detect whether or not the population contains at least one agent in a distinguished state X. This problem may be viewed as a variant of rumor-spreading, in which a “true” rumor should spread and persist in the population for as long as its primary rumor source X is present, and a “false” rumor without a source should be actively suppressed. Investigations into the problem are also directly motivated by the question of detecting and amplifying trace concentrations of a chemical substance X, e.g., in a framework of DNA computing with chemical reaction networks (CRN-s).

We describe two population protocols which solve the considered problem efficiently, converging to a correct output state on a (1-epsilon)-fraction of a population of n agents within a polylogarithmic number of steps, starting from any initial configuration of the system, w.h.p. under a fair uniformly random scheduler. One of the proposed protocols requires O(log n) states and has certain desirable robustness properties, while the other uses only a constant number of states.

Algorithmes et complexité
jeudi 13 avril 2017, 14h00, Salle 3052
Przemyslaw Uznanski (ETH Zürich, Switzerland) All-Pairs 2-reachability in O(n^ω log n) Time

In the 2-reachability problem we are given a directed graph G and we wish to determine if there are two (edge or vertex) disjoint paths from u to v, for a given pair of vertices u and v. We present an algorithm that computes 2-reachability information for all pairs of vertices in O(n^ω log n) time. Hence, we show that the running time of all-pairs 2-reachability is within a log factor of transitive closure.
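
For intuition, edge-2-reachability of a single pair can be checked with a unit-capacity flow of value 2 (two BFS augmentations); the brute-force sketch below (my own illustration on a hypothetical graph; the vertex-disjoint variant would additionally require node splitting) is of course nowhere near the all-pairs O(n^ω log n) algorithm of the talk.

```python
from collections import deque

def two_edge_disjoint(adj, u, v):
    """Check whether the directed graph `adj` contains two edge-disjoint
    paths from u to v, via at most two rounds of BFS augmentation on a
    unit-capacity flow network.  Brute-force reference code only."""
    if u == v:
        return True
    cap = {}
    for x in adj:
        for y in adj[x]:
            cap[(x, y)] = cap.get((x, y), 0) + 1
    flow = 0
    for _ in range(2):
        parent = {u: None}
        q = deque([u])
        while q and v not in parent:
            x = q.popleft()
            for (a, y), c in list(cap.items()):       # residual edges out of x
                if a == x and c > 0 and y not in parent:
                    parent[y] = x
                    q.append(y)
        if v not in parent:
            break
        y = v
        while parent[y] is not None:                  # augment along the path
            x = parent[y]
            cap[(x, y)] -= 1
            cap[(y, x)] = cap.get((y, x), 0) + 1
            y = x
        flow += 1
    return flow >= 2

# Hypothetical example: two disjoint routes 1->2->4 and 1->3->4, but only
# one route from 1 to 5 (through the single edge 4->5).
adj = {1: [2, 3], 2: [4], 3: [4], 4: [5], 5: []}
print(two_edge_disjoint(adj, 1, 4), two_edge_disjoint(adj, 1, 5))  # True False
```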

Moreover, our algorithm produces a witness (i.e., a separating edge or a separating vertex) for all pairs of vertices where 2-reachability does not hold. By processing these witnesses, we can compute all the edge- and vertex-dominator trees of G in O(n^2) additional time, which in turn enables us to answer various connectivity queries in O(1) time. For instance, we can test in constant time if there is a path from u to v avoiding an edge e, for any pair of query vertices u and v, and any query edge e, or if there is a path from u to v avoiding a vertex w, for any query vertices u, v, and w.

This is a joint work with Loukas Georgiadis (University of Ioannina), Daniel Graf (ETH Zurich), Giuseppe Italiano and Nikos Parotsidis (University of Rome, Tor Vergata).

Link to the paper: https://arxiv.org/abs/1612.08075

Algorithmes et complexité
mardi 21 mars 2017, 11h00, Salle 1007
Emanuele Natale () Friend or Foe? Population Protocols can perform Community Detection

We present a simple distributed algorithm that, given a regular graph consisting of two communities (or clusters), each inducing a good expander and such that the cut between them has sparsity $1/\mbox{polylog}(n)$, recovers the two communities.

More precisely, upon running the protocol, every node assigns itself a binary label of $m = \Theta(\log n)$ bits, so that with high probability, for all but a small number of outliers, nodes within the same community are assigned labels with Hamming distance $o(m)$, while nodes belonging to different communities receive labels with Hamming distance at least $m/2 - o(m)$. We refer to such an outcome as a “community sensitive labeling” of the graph.

Our algorithm uses $\Theta(\log^2 n)$ local memory and computes the community sensitive labeling after each node performs $\Theta(\log^2 n)$ steps of local work. Our algorithm and its analysis work in the “(random) population protocol” model, in which anonymous nodes do not share any global clock (the model is asynchronous) and communication occurs over one single (random) edge per round. We believe, this is the first provably-effective protocol for community detection that works in this model.

Joint work with Luca Becchetti, Andrea Clementi, Francesco Pasquale, Prasad Raghavendra, and Luca Trevisan.

Link to the arxiv version: https://arxiv.org/abs/1703.05045

Algorithmes et complexité
mardi 21 février 2017, 11h00, Salle 1007
Zvi Lotker () Literature Networks and Time in Social Networks.

In this talk I will describe the connection between social networks and theater plays. I will show how algorithms can analyze plays using social networks, and how plays can reveal an interesting algorithmic problem.

Algorithmes et complexité
mardi 07 février 2017, 11h00, Salle 1007
Charles Paperman () Streaming and circuit complexity

In this talk, I will present a connection between the streaming complexity and the circuit complexity of regular languages through a notion of streaming by block. This result provides tight constructions of boolean circuits computing an automaton, thanks to some classical and recent results on the circuit complexity of regular languages.

Algorithmes et complexité
mardi 31 janvier 2017, 11h00, Salle 1007
Chien-Chung Huang (CNRS, ENS Paris) Popularity, Mixed Matchings, and Self-duality

We consider the problem of matching applicants to jobs under two-sided preferences in a popular and utility-optimal manner. Our input instance is a bipartite graph G = (A \cup B,E) with a utility function w: E \rightarrow Q where each vertex u \in A \cup B has a preference list ranking its neighbors in a strict order of preference. For any two matchings M and T in G, let \phi(M,T) be the number of vertices that prefer M to T. A matching M is popular if \phi(M,T) \geq \phi(T,M) for all matchings T in G. A mixed matching is defined as \Pi = \{(M_0,p_0),\ldots,(M_k,p_k)\} for some matchings M_0,\ldots,M_k and \sum_{i=0}^k p_i = 1, p_i \geq 0 for all i. The function \phi(\cdot,\cdot) easily extends to mixed matchings; a mixed matching \Pi is popular if \phi(\Pi,\Lambda) \geq \phi(\Lambda,\Pi) for all mixed matchings \Lambda in G.

Motivated by the fact that a popular mixed matching could have a much higher utility than all popular matchings, we study the popular fractional matching polytope {\cal P}_G. Our main result is that this polytope is half-integral (and in the special case where a stable matching is a perfect matching, this polytope is integral), implying that there is always a max-utility popular mixed matching \Pi such that \Pi = \{(M_0,\frac{1}{2}),(M_1,\frac{1}{2})\} and \Pi can be computed in polynomial time. An immediate consequence is that in order to implement a max-utility popular mixed matching in G, we need just one single random bit.

We analyze {\cal P}_G whose description may have exponentially many constraints via an extended formulation in m+n dimensions with O(m+n) constraints. The linear program that gives rise to this formulation has an unusual property: self-duality. In other words, this linear program is exactly identical to its dual program. This is a rare case where an LP of a natural problem has such a property. The self-duality of the LP plays a crucial role in our proof of the half-integrality of {\cal P}_G.

We also show that our result carries over to the roommates problem, where the given graph G may not be bipartite. The polytope of popular fractional matchings is still half-integral here and thus we can compute a max-utility popular half-integral matching in G in polynomial time. To complement this result, we also show that the problem of computing a max-utility popular (integral) matching in a roommates instance G is NP-hard.

Algorithmes et complexité
mardi 10 janvier 2017, 11h00, Salle 1007
Vincent Jugé (LSV, CNRS & ENS Cachan, Univ. Paris-Saclay) Dynamic Complexity of Parity Games with Bounded Tree-Width

Dynamic complexity is concerned with the complexity of updating a solution to a problem when its input changes. A typical example is as follows: given a directed graph G and two pointed vertices s and t, you wish to know whether there exists a path from s to t in G. After your computation is complete, the graph is modified (e.g. by adding or deleting an edge): how should you proceed to update your answer at the least cost? Computing and storing auxiliary data, such as maintaining a covering forest, might be helpful in that regard.

In this talk, we will consider a specific dynamic complexity class, called DynFO. A dynamic problem belongs to this class if updates can be performed by applying first-order formulas. We will show that a dynamic variant of Courcelle's theorem belongs to DynFO, and we will apply this result to computing optimal strategies in 2-player reachability or parity games.
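
For intuition (a textbook warm-up, not the construction from the talk), reachability in directed graphs can be maintained under edge insertions by a single quantifier-free update: after inserting the edge (a,b), set reach'(u,v) = reach(u,v) \vee (reach(u,a) \wedge reach(b,v)). Handling deletions is the genuinely difficult part. A small Python sketch of the insertion-only update:

    # DynFO-style toy: maintain the reachability relation of a directed graph
    # under edge insertions with one first-order update (edge deletions are the
    # hard case and are not handled here).

    class DynReach:
        def __init__(self, vertices):
            self.V = list(vertices)
            # reach[u][v] is True iff there is a (possibly empty) path u -> v.
            self.reach = {u: {v: u == v for v in self.V} for u in self.V}

        def insert_edge(self, a, b):
            # reach'(u, v) = reach(u, v) or (reach(u, a) and reach(b, v))
            old = self.reach
            self.reach = {u: {v: old[u][v] or (old[u][a] and old[b][v])
                              for v in self.V} for u in self.V}

    g = DynReach(range(4))
    g.insert_edge(0, 1)
    g.insert_edge(1, 2)
    print(g.reach[0][2], g.reach[2][0])  # True False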

This talk is based on joint work with Patricia Bouyer-Decitre and Nicolas Markey.

Algorithmes et complexité
mardi 13 décembre 2016, 11h00, Salle 1007
Amos Korman (CNRS, IRIF) From Ants to Query Complexity

I will talk about my recent adventures with ants. Together with biologists we study P. longicornis ants as they collaboratively transport a large food item to their nest. This collective navigation process is guided by pheromones which are laid by individual ants. Using a new methodology to detect scent marks, we identify a new kind of ant trail characterized by very short and dynamic pheromone markings and highly stochastic navigation response to them. We argue that such a trail can be highly beneficial in conditions in which knowledge of individual ants regarding the underlying topological structure is unreliable. This gives rise to a new theoretical search model under unreliable guiding instructions, which is of independent computational interest. To illustrate the model, imagine driving a car in an unknown country that is in the aftermath of a major hurricane which has randomly flipped a certain small fraction of the road-signs. Under such conditions of unreliability, how can you still reach your destination fast? I will discuss the limits of unreliability that allow for efficient navigation. In trees, for example, there is a phase transition phenomenon that occurs roughly around 1/sqrt{D}. That is, if the noise is above this threshold, then no algorithm can avoid spending time exponential in the original distance to find the target, while below the threshold we identify an optimal, almost linear, walking algorithm. Finally, I will discuss algorithms that under such a noisy model aim to minimize the number of queries to find a target (rather than the number of moves).

This talk is based on joint works with biologists: Ofer Feinerman, Udi Fonio, Yael Heyman and Aviram Gelblum, and CS co-authors: Lucas Boczkowski, Adrian Kosowski and Yoav Rodeh.

Algorithmes et complexité
mardi 06 décembre 2016, 11h00, Salle 1007
Omar Fawzi () Algorithmic aspects of optimal channel coding

We study the problem of finding the maximum success probability for transmitting messages over a noisy channel from an algorithmic point of view. In particular, we show that a simple greedy polynomial-time algorithm computes a code achieving a (1-1/e)-approximation of the maximum success probability and that it is NP-hard to obtain an approximation ratio strictly better than (1-1/e). Moreover, the natural linear programming relaxation of this problem corresponds to the Polyanskiy-Poor-Verdú bound, which we also show has a value of at most 1/(1-1/e) times the maximum success probability.
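
To convey the flavour of the greedy algorithm, here is a sketch under one natural formulation (an assumption of this sketch, not necessarily the exact formulation of the paper): with maximum-likelihood decoding, the success probability of a codebook C of size M is (1/M) \sum_y \max_{x \in C} W(y|x), which is a monotone submodular function of C, so greedy selection achieves a (1-1/e)-approximation.

    # Sketch: greedy codebook selection for a discrete channel W[y][x] = Pr(y | x).
    # With ML decoding, the success probability of a codebook C of size M is
    # (1/M) * sum_y max_{x in C} W[y][x]; this set function is monotone submodular,
    # so the greedy rule below is a (1 - 1/e)-approximation. (Illustrative
    # formulation only; the paper works in more generality.)

    def greedy_codebook(W, inputs, outputs, M):
        chosen = []
        best_per_y = {y: 0.0 for y in outputs}     # current max_{x in chosen} W[y][x]
        for _ in range(M):
            def gain(x):
                return sum(max(W[y][x] - best_per_y[y], 0.0) for y in outputs)
            x_star = max((x for x in inputs if x not in chosen), key=gain)
            chosen.append(x_star)
            for y in outputs:
                best_per_y[y] = max(best_per_y[y], W[y][x_star])
        success = sum(best_per_y.values()) / M
        return chosen, success

    # Tiny channel with three inputs and two outputs.
    W = {0: {'a': 0.9, 'b': 0.2, 'c': 0.5},
         1: {'a': 0.1, 'b': 0.8, 'c': 0.5}}
    print(greedy_codebook(W, ['a', 'b', 'c'], [0, 1], 2))  # (['a', 'b'], 0.85)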

Based on joint work with Siddharth Barman. arXiv:1508.04095

Algorithmes et complexité
vendredi 02 décembre 2016, 11h00, Salle 1007
Luc Sanselme () Determinism and Computational Power of Real Measurement-based Quantum Computation

Measurement-based quantum computing (MBQC) is a universal model for quantum computation. The combinatorial characterisation of determinism in this model, which is driven by measurements and hence fundamentally probabilistic, is the cornerstone of most of the breakthrough results in this field. The most general known sufficient condition for a deterministic MBQC to be driven is that the underlying graph of the computation has a particular kind of flow called Pauli flow. The necessity of the Pauli flow was an open question. We show that Pauli flow is necessary for real MBQC, but not in general, providing counter-examples for (complex) MBQC.

We explore the consequences of this result for real MBQC and its applications. Real MBQC, and more generally real quantum computing, is known to be universal for quantum computing. Real MBQC has been used for interactive proofs by McKague. The two-prover case corresponds to real MBQC on bipartite graphs. While (complex) MBQC on bipartite graphs is universal, the universality of real MBQC on bipartite graphs was an open question. We show that real bipartite MBQC is not universal, proving that all measurements of real bipartite MBQC can be parallelised, leading to constant-depth computations. As a consequence, McKague's techniques cannot lead to two-prover interactive proofs.

Joint work with Simon Perdrix.

Algorithmes et complexité
mardi 22 novembre 2016, 11h00, Salle 1007
Anca Nitulescu (ENS Paris) On the (In)security of SNARKs in the Presence of Oracles

In this work we study the feasibility of knowledge extraction for succinct non-interactive arguments of knowledge (SNARKs) in a scenario that, to the best of our knowledge, has not been analyzed before. While prior work focuses on the case of adversarial provers that may receive (statically generated) {\em auxiliary information}, here we consider the scenario where adversarial provers are given {\em access to an oracle}. For this setting we study if and under what assumptions such provers can admit an extractor. Our contribution is mainly threefold.

First, we formalize the question of extraction in the presence of oracles by proposing a suitable proof of knowledge definition for this setting. We call SNARKs satisfying this definition O-SNARKs. Second, we show how to use O-SNARKs to obtain formal and intuitive security proofs for three applications (homomorphic signatures, succinct functional signatures, and SNARKs on authenticated data) where we recognize an issue while doing the proof under the standard proof of knowledge definition of SNARKs. Third, we study whether O-SNARKs exist, providing both negative and positive results. On the negative side, we show that, assuming one-way functions, there do not exist O-SNARKs in the standard model for every signing oracle family (and thus for general oracle families as well). On the positive side, we show that, when considering signature schemes with appropriate restrictions on the message length, O-SNARKs for the corresponding signing oracles exist, based on classical SNARKs and assuming extraction with respect to specific distributions of auxiliary input.

Algorithmes et complexité
mardi 08 novembre 2016, 11h00, Salle 1007
Arpita Korwar () Polynomial Identity Testing of Sum of ROABPs

Polynomials are fundamental objects studied in mathematics. Though univariate polynomials are fairly well-understood, multivariate polynomials are not. Arithmetic circuits are the primary tool used to study polynomials in computer science. They allow for the classification of polynomials according to their complexity.

Polynomial identity testing (PIT) asks if a polynomial, input in the form of an arithmetic circuit, is identically zero.

One special kind of arithmetic circuit is the read-once arithmetic branching program (ROABP), which can be written as a product of univariate polynomial matrices over distinct variables. We will study the characterization of an ROABP. In the process, we obtain a polynomial-time PIT for the sum of constantly many ROABPs.
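
For contrast with the deterministic test described above, the standard randomized PIT baseline (the Schwartz–Zippel lemma, not a result of the talk) simply evaluates the circuit at random points: a nonzero polynomial of total degree d vanishes at a uniformly random point of F^n with probability at most d/|F|.

    # Randomized PIT baseline via the Schwartz-Zippel lemma (not the deterministic
    # test from the talk): evaluate the circuit at random points of a large field.
    import random

    P = 2**61 - 1  # a large prime; arithmetic is done modulo P

    def is_probably_zero(circuit, n_vars, degree, trials=20):
        """circuit: a function taking a list of n_vars field elements and
        returning its value mod P. A nonzero polynomial of total degree `degree`
        survives one trial with probability at most degree / P."""
        for _ in range(trials):
            point = [random.randrange(P) for _ in range(n_vars)]
            if circuit(point) % P != 0:
                return False        # certainly nonzero
        return True                 # identically zero with high probability

    # Example: f(x, y) = (x + y)^2 - x^2 - 2xy - y^2 is identically zero.
    f = lambda v: (v[0] + v[1])**2 - v[0]**2 - 2*v[0]*v[1] - v[1]**2
    print(is_probably_zero(f, n_vars=2, degree=2))  # True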

Algorithmes et complexité
mardi 25 octobre 2016, 11h00, Salle 1007
Eric Angel (Université d'Évry Val d'Essonne IBISC) Clustering on k-edge-colored graphs.

We study the Max k-colored clustering problem, where, given an edge-colored graph with k colors, we seek to color the vertices of the graph so as to find a clustering of the vertices maximizing the number (or the weight) of matched edges, i.e. the edges having the same color as both of their endpoints. We show that the cardinality problem is NP-hard even for edge-colored bipartite graphs with a chromatic degree equal to two and k ≥ 3. Our main result is a constant-factor approximation algorithm for the weighted version of the Max k-colored clustering problem, based on rounding a natural linear programming relaxation. For graphs with chromatic degree equal to two, we improve this ratio by exploiting the relation of our problem to the Max 2-AND problem. We also present a reduction to the maximum-weight independent set (IS) problem in bipartite graphs which leads to a polynomial-time algorithm for the case of two colors.
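
To pin down the objective (a toy evaluation sketch, not the approximation algorithm from the talk): an edge is matched exactly when both of its endpoints are assigned the edge's color.

    # Sketch: evaluating the Max k-colored clustering objective. An edge (u, v)
    # with color c and weight w is "matched" iff both u and v are assigned c.

    def clustering_value(edges, vertex_color):
        """edges: list of (u, v, color, weight); vertex_color: dict vertex -> color."""
        return sum(w for u, v, c, w in edges
                   if vertex_color[u] == c and vertex_color[v] == c)

    edges = [('u', 'v', 'red', 1), ('v', 'w', 'blue', 1), ('u', 'w', 'red', 2)]
    print(clustering_value(edges, {'u': 'red', 'v': 'red', 'w': 'red'}))    # 3
    print(clustering_value(edges, {'u': 'red', 'v': 'blue', 'w': 'blue'}))  # 1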

Algorithmes et complexité
mardi 18 octobre 2016, 11h00, Salle 1007
Carola Doerr () Provable Performance Gains via Dynamic Parameter Choices in Heuristic Optimization

In many optimization heuristics, there are a number of parameters to be chosen. These parameters typically have a crucial impact on the performance of the algorithm. It is therefore of great interest to set these parameters wisely. Unfortunately, determining the optimal parameter choices for a randomized search heuristic via mathematical means is a rather difficult task. Even worse, for many problems the optimal parameter choices seem to change during the optimization process. While this seems quite intuitive, little theoretical evidence exists to support this claim. In a series of recent works we have proposed two very simple success-based update rules for the parameter settings of some standard search heuristics. For both of these rules we can prove that they yield a better performance than any static parameter choice. Based on joint work with Benjamin Doerr (Ecole Polytechnique), Timo Koetzing (HPI Potsdam, Germany), and Jing Yang (Ecole Polytechnique).
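
As an illustration of the mechanism (a generic one-fifth-success-rule-style sketch; the exact rules analyzed in these works may differ), the parameter is multiplied by a factor A > 1 after a successful iteration and by a factor b < 1 after an unsuccessful one:

    # Sketch of a success-based parameter update in the spirit described above:
    # a (1+1) EA on OneMax whose mutation rate is increased on success and
    # decreased on failure. The concrete constants and rules from the papers
    # may differ; this only illustrates the mechanism.
    import random

    def onemax_ea(n, budget=10_000, A=1.5, b=0.75):
        x = [random.randint(0, 1) for _ in range(n)]
        fitness = sum(x)
        rate = 1.0 / n                       # current mutation rate
        for _ in range(budget):
            y = [bit ^ (random.random() < rate) for bit in x]
            fy = sum(y)
            if fy >= fitness:                # success: accept, increase the rate
                x, fitness = y, fy
                rate = min(0.5, rate * A)
            else:                            # failure: decrease the rate
                rate = max(1.0 / n**2, rate * b)
            if fitness == n:
                break
        return fitness, rate

    print(onemax_ea(100))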

Algorithmes et complexité
mardi 11 octobre 2016, 11h00, Salle 1007
Dieter van Melkebeek (University of Wisconsin, Madison) Deterministic Isolation for Space-Bounded Computation

Isolation is the process of singling out a solution to a problem that may have many solutions. It plays an important role in the design of efficient parallel algorithms as it ensures that the various parallel processes all work towards a single global solution rather than towards individual solutions that may not be compatible with one another. For example, the best parallel algorithms for finding perfect matchings in graphs hinge on isolation for this reason. Isolation is also an ingredient in some efficient sequential algorithms. For example, the best running times for certain NP-hard problems like finding Hamiltonian paths in graphs are achieved via isolation.

All of these algorithms are randomized, and the only reason is the use of the Isolation Lemma – that for any set system over a finite universe, a random assignment of small integer weights to the elements of the universe has a high probability of yielding a unique set of minimum weight in the system. For each of the underlying problems it is open whether deterministic algorithms of similar efficiency exist.
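
A quick empirical illustration of the Isolation Lemma (a sketch, with an arbitrarily chosen set system): weights drawn uniformly from {1, …, 2N} for a universe of N elements isolate a unique minimum-weight set with probability at least 1/2.

    # Empirical illustration of the Isolation Lemma: random small weights yield a
    # unique minimum-weight set with probability >= 1/2 (N elements, range 2N).
    import random
    from itertools import combinations

    def is_isolated(sets, universe):
        w = {e: random.randint(1, 2 * len(universe)) for e in universe}
        weights = sorted(sum(w[e] for e in S) for S in sets)
        return len(weights) == 1 or weights[0] < weights[1]

    universe = range(8)
    sets = [frozenset(S) for S in combinations(universe, 4)]  # all 4-element subsets
    trials = 1000
    print(sum(is_isolated(sets, universe) for _ in range(trials)) / trials)
    # Typically well above 0.5, as the lemma guarantees.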

This talk is about the possibility of deterministic isolation in the space-bounded setting. The question is: Can one always make the accepting computation paths of nondeterministic space-bounded machines unique without changing the underlying language and without blowing up the space by more than a constant factor? Or equivalently, does there exist a deterministic logarithmic space mapping reduction from directed st-connectivity to itself that transforms positive instances into ones where there is a unique path from s to t?

I will present some recent results towards a resolution of this question, obtained jointly with Gautam Prakriya. Our approach towards a positive resolution can be viewed as derandomizing the Isolation Lemma in the context of space-bounded computation.

Algorithmes et complexité
mardi 20 septembre 2016, 11h00, Salle 1007
Eldar Fischer (Faculty of CS, Technion - Israel Institute of Technology) Improving and extending testing of distributions for shape restrictions.

Distribution property testing deals with what information can be deduced about an unknown distribution over {1,…,n}, where we are only allowed to obtain a relatively small number of independent samples from the distribution. In the basic model the algorithm may only base its decision on receiving a sequence of samples from the distribution, while in the conditional model the algorithm may also request samples out of the conditional distribution over subsets of {1,…,n}.

A test has to distinguish a distribution satisfying a given property from a distribution that is far in the variation distance from satisfying it. A range of properties such as monotonicity and log-concavity has been unified under the banner of L-decomposable properties. Here we improve upon the basic model test for all such properties, as well as provide a new test under the conditional model whose number of queries does not directly depend on n. We also provide a conditional model test for a wider range of properties, that in particular yields tolerant testing for all L-decomposable properties. For tolerant testing conditional samples are essential, as an efficient test in the basic model is known not to exist.

Our main mechanism is a way of efficiently producing a partition of {1,…,n} into intervals satisfying a small-weight requirement with respect to the unknown distribution. Also, we show that investigating just one such partition is sufficient for solving the testing question, as opposed to prior works where a search for the “correct” partition was performed.
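
A sketch of the kind of interval partition in question, simplified to a known distribution (in the testing setting the weights are only estimated from samples; that estimation step is omitted here, and eps and the greedy cutting rule are illustrative choices):

    # Sketch: greedily cut {1, ..., n} into intervals of weight at most eps,
    # except for single heavy elements, assuming the distribution p is known.

    def small_weight_partition(p, eps):
        """p: list of probabilities summing to 1. Returns (start, end) intervals,
        0-indexed and inclusive."""
        intervals, start, weight = [], 0, 0.0
        for i, pi in enumerate(p):
            if weight + pi > eps and i > start:
                intervals.append((start, i - 1))
                start, weight = i, 0.0
            weight += pi
        intervals.append((start, len(p) - 1))
        return intervals

    p = [0.02, 0.03, 0.05, 0.4, 0.1, 0.1, 0.1, 0.2]
    print(small_weight_partition(p, eps=0.25))
    # [(0, 2), (3, 3), (4, 5), (6, 6), (7, 7)]: the heavy element 3 is forced
    # into its own interval; every other interval has weight at most eps.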

Joint work with Oded Lachish and Yadu Vasudev.

Algorithmes et complexité
mardi 13 septembre 2016, 11h00, Salle 1007
Tatiana Starikovskaya (IRIF, Université Paris Diderot) Streaming and communication complexity of Hamming distance

We will discuss the complexity of one of the most basic problems in pattern matching, that of approximating the Hamming distance. Given a pattern P of length n, the task is to output an approximation of the Hamming distance (that is, the number of mismatches) between P and every n-length substring of a longer text. We provide the first efficient one-way randomised communication protocols, as well as a new, fast and space-efficient streaming algorithm for this problem.
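
For reference, the exact offline baseline (not the contribution of the talk) simply compares P against each window in O(n) time per window; the streaming algorithm approximates these values in small space as the text arrives.

    # Exact, non-streaming baseline: the Hamming distance between a pattern P and
    # every |P|-length window of the text, computed naively.

    def sliding_hamming(pattern, text):
        n = len(pattern)
        return [sum(a != b for a, b in zip(pattern, text[i:i + n]))
                for i in range(len(text) - n + 1)]

    print(sliding_hamming("abba", "ababbab"))  # [2, 3, 0, 3]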

Algorithmes et complexité
lundi 29 août 2016, 11h00, Room 2002
Sanjeev Khanna (University of Pennsylvania) On the Single-Pass Streaming Complexity of the Set Cover Problem

In the set cover problem, we are given a collection of $m$ subsets over a universe of $n$ elements, and the goal is to find a sub-collection of sets whose union covers the universe. The set cover problem is a fundamental optimization problem with many applications in computer science and related disciplines. In this talk, we investigate the set cover problem in the streaming model of computation whereby the sets are presented one by one in a stream, and the goal is to solve the set cover problem using a space-efficient algorithm.

We show that to compute an $\alpha$-approximate set cover (for any $\alpha= o(\sqrt{n})$) via a single-pass streaming algorithm, $\Theta(mn/\alpha)$ space is both necessary and sufficient (up to an $O(\log{n})$ factor). We further study the problem of estimating the size of a minimum set cover (as opposed to finding the actual sets), and show that this turns out to be a distinctly easier problem. Specifically, we prove that $\Theta(mn/\alpha^2)$ space is both sufficient and necessary (up to logarithmic factors) for estimating the size of a minimum set cover to within a factor of $\alpha$. Our algorithm in fact works for the more general problem of estimating the optimal value of a covering integer program. These results provide a tight resolution of the space-approximation tradeoff for single-pass streaming algorithms for the set cover problem.
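
For contrast with the streaming model, the classical offline greedy algorithm (not the subject of the talk) repeatedly picks the set covering the most uncovered elements; it achieves an O(log n)-approximation but assumes random access to the whole collection.

    # Classical offline greedy set cover, shown only for contrast with the
    # single-pass streaming setting studied in the talk.

    def greedy_set_cover(universe, sets):
        uncovered = set(universe)
        cover = []
        while uncovered:
            i, best = max(enumerate(sets), key=lambda s: len(uncovered & s[1]))
            if not uncovered & best:
                raise ValueError("the sets do not cover the universe")
            cover.append(i)
            uncovered -= best
        return cover

    universe = range(1, 8)
    sets = [{1, 2, 3}, {3, 4}, {4, 5, 6, 7}, {1, 5}, {2, 6, 7}]
    print(greedy_set_cover(universe, sets))  # [2, 0] covers everything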

This is joint work with my students Sepehr Assadi and Yang Li.

Algorithmes et complexité
mardi 05 juillet 2016, 11h00, Salle 1007
Alexandra Kolla (University of Illinois at Urbana-Champaign) Towards Constructing Expanders via Lifts: Hopes and Limitations.

In this talk, I will examine the spectrum of random k-lifts of d-regular graphs. We show that, for random shift k-lifts (which include 2-lifts), if all the nontrivial eigenvalues of the base graph G are at most \lambda in absolute value, then with high probability depending only on the number n of nodes of G (and not on k), if k is *small enough*, the absolute value of every nontrivial eigenvalue of the lift is at most O(\lambda). While previous results on random lifts held asymptotically with high probability in the degree k of the lift, our result is the first upper bound on the spectra of lifts for bounded k. In particular, it implies that a typical small lift of a Ramanujan graph is almost Ramanujan. I will present a quasi-polynomial time algorithm for constructing almost-Ramanujan expanders through such lifts. I will also discuss some impossibility results for large k, which, as one consequence, imply that there is no hope of constructing large Ramanujan graphs from large abelian k-lifts.
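
As a small illustration of the lift operation in the simplest case k = 2 (a sketch; the shift k-lifts of the talk generalize this): a 2-lift is obtained by choosing a sign for each edge of G, and the eigenvalues it adds are exactly those of the signed adjacency matrix (Bilu and Linial).

    # Sketch: a random 2-lift of a graph with adjacency matrix A. Choosing a sign
    # for each edge gives a lift on 2n vertices whose spectrum is the union of
    # the spectrum of A and that of the signed adjacency matrix (Bilu-Linial).
    import numpy as np

    def random_2lift_new_eigenvalues(A, rng):
        n = A.shape[0]
        signs = np.triu(rng.choice([-1, 1], size=(n, n)), 1)
        signs = signs + signs.T                       # symmetric random signing
        A_signed = A * signs
        return np.sort(np.linalg.eigvalsh(A_signed))  # the "new" eigenvalues

    # Base graph: the complete graph K_6 (5-regular, nontrivial eigenvalues -1).
    n = 6
    A = np.ones((n, n)) - np.eye(n)
    rng = np.random.default_rng(0)
    print(random_2lift_new_eigenvalues(A, rng))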

Based on joint work with Naman Agarwal, Karthik Chandrasekaran, and Vivek Madan.

Algorithmes et complexité
mardi 21 juin 2016, 11h00, Salle 1007
Nathanaël Fijalkow () Alternating Communication Complexity, with Applications to Online Space Complexity

We study the model of alternating communication complexity introduced by Babai, Frankl and Simon in 1986. We extend the rank lower bound to this setting. We show some applications of this technique for online space complexity, as defined by Karp in the 60s. This measure of complexity quantifies the amount of space used by a Turing machine whose input tape can read each symbol only once, from left to right. In particular, we obtain logarithmic lower bounds on the alternating online space complexity of the set of prime numbers written in binary, which is an exponential improvement over the previous result due to Hartmanis and Shank in 1968.

Algorithmes et complexité
mardi 31 mai 2016, 11h00, Salle 1007
Stacey Jeffery (Institute for Quantum Information and Matter, Caltech) Span Programs, NAND-Trees, and Graph Connectivity

We show a connection between NAND-tree evaluation and st-connectivity problems on certain graphs to generalize a superpolynomial quantum speedup of Zhan et al. for a promise version of NAND-tree formula evaluation. In particular, we show that the quantum query complexity of evaluating NAND-tree instances with average choice complexity at most W is O(W), where average choice complexity is a measure of the difficulty of winning the associated two-player game. Our results follow from relating average choice complexity to the effective resistance of these graphs, which itself corresponds to the span program witness size. These connections suggest an interesting relationship between st-connectivity problems and span program algorithms, that we hope may have further applications.
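
For reference, here is the classical recursive evaluation of a NAND-tree, the object whose quantum query complexity is studied above; the quantum algorithm is of course entirely different.

    # Classical evaluation of a NAND-tree (for reference only): a complete binary
    # tree of NAND gates with Boolean values at the leaves.

    def eval_nand_tree(leaves):
        """leaves: list of 0/1 values of length 2^d."""
        level = list(leaves)
        while len(level) > 1:
            level = [1 - (level[i] & level[i + 1]) for i in range(0, len(level), 2)]
        return level[0]

    print(eval_nand_tree([1, 1, 0, 1]))  # NAND(NAND(1,1), NAND(0,1)) = NAND(0,1) = 1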

This is joint work with Shelby Kimmel.

Algorithmes et complexité
mardi 24 mai 2016, 11h00, Salle 1007
Tim Black (University of Chicago) Monotone Properties of k-Uniform Hypergraphs are Weakly Evasive.

The decision-tree complexity of a Boolean function is the number of input bits that must be queried (adaptively) in the worst case to determine the value of the function. A Boolean function in n variables is weakly evasive if its decision-tree complexity is Omega(n). By k-graphs we mean k-uniform hypergraphs. A k-graph property on v vertices is a Boolean function on n = \binom{v}{k} variables corresponding to the k-subsets of a v-set that is invariant under the v! permutations of the v-set (isomorphisms of k-graphs).

Rivest and Vuillemin (1976) proved that all non-constant monotone graph properties (k=2) are weakly evasive, confirming a conjecture of Aanderaa and Rosenberg (1973). Kulkarni, Qiao, and Sun (2013) proved the analogous result for 3-graphs. We extend these results to k-graphs for every fixed k. From this, we show that monotone Boolean functions invariant under the action of a large primitive group are weakly evasive.

While KQS (2013) employ the powerful topological approach of Kahn, Saks, and Sturtevant (1984) combined with heavy number theory, our argument is elementary and self-contained (modulo some basic group theory). Inspired by the outline of the KQS approach, we formalize the general framework of “orbit augmentation sequences” of sets with group actions. We show that a parameter of such sequences, called the “spacing,” is a lower bound on the decision-tree complexity for any nontrivial monotone property that is G-invariant for all groups G involved in the orbit augmentation sequence, assuming all those groups are p-groups. We develop operations on such sequences such as composition and direct product which will provide helpful machinery for our applications. We apply this general technique to k-graphs via certain liftings of k-graphs with wreath product action of p-groups.

Algorithmes et complexité
mardi 17 mai 2016, 11h00, Salle 1007
Nikhil Bansal (Eindhoven University of Technology) Solving optimization problems on noisy planar graphs

Many graph problems that are hard to approximate on general graphs become much more tractable on planar graphs. In particular, planar graphs can be decomposed into small pieces or into bounded treewidth graphs, leading to PTASes for these problems. But little is known about the noisy setting where the graphs are only nearly planar, i.e. deleting few edges makes them planar. One obstacle is that current planar decomposition techniques fail completely with noise. Another obstacle is that the known guarantees for the planarization problem are too weak for our purpose.

We show that using linear programming methods such as configuration LPs and spreading metrics, one can get around these barriers and obtain PTASes for many problems on noisy planar graphs. This resolves an open question of Magen and Moharrami, that was recently popularized by Claire Mathieu.

Algorithmes et complexité
mardi 10 mai 2016, 11h00, Salle 1007
Jean Cardinal () Solving k-SUM using few linear queries

In the k-SUM problem, we are given n input real numbers and must determine whether any k of them sum to zero. The problem is of tremendous importance in the emerging field of complexity theory within P, and it is in particular open whether it admits an algorithm of complexity O(n^c) with c<⌈k/2⌉. Inspired by an algorithm due to Meiser (1993), we show that there exist linear decision trees and algebraic computation trees of depth O(n^3 log^3(n)) solving k-SUM. Furthermore, we show that there exists a randomized algorithm that runs in Õ(n^{⌈k/2⌉+8}) time and performs O(n^3 log^3(n)) linear queries on the input. Thus, we show that it is possible to have an algorithm with a runtime almost identical (up to the +8) to the best known algorithm, but, for the first time, with a number of queries on the input that is polynomial and independent of k. The O(n^3 log^3(n)) bound on the number of linear queries is also tighter than the bound achieved by any known algorithm solving k-SUM, even allowing unlimited total time outside of the queries. By simultaneously achieving few queries to the input without significantly sacrificing runtime vis-à-vis known algorithms, we deepen the understanding of this canonical problem which is a cornerstone of complexity-within-P. We also consider a range of tradeoffs between the number of terms involved in the queries and the depth of the decision tree. In particular, we prove that there exist o(n)-linear decision trees of depth o(n^4).
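
For scale, the best known running time referred to above comes from the classical meet-in-the-middle approach; a small sketch (with an explicit index-disjointness check) follows. The talk's contribution concerns the number of linear queries, a different resource.

    # Sketch of the classical ~n^ceil(k/2)-time approach to k-SUM: hash all sums
    # of ceil(k/2)-subsets, then look for a disjoint floor(k/2)-subset with the
    # opposite sum. (The talk is about linear *queries*, a different model.)
    from itertools import combinations
    from math import ceil
    from collections import defaultdict

    def k_sum(values, k):
        h = ceil(k / 2)
        by_sum = defaultdict(list)
        for idx in combinations(range(len(values)), h):
            by_sum[sum(values[i] for i in idx)].append(set(idx))
        for idx in combinations(range(len(values)), k - h):
            target = -sum(values[i] for i in idx)
            for other in by_sum.get(target, []):
                if other.isdisjoint(idx):
                    return sorted(other | set(idx))  # indices of a zero-sum k-subset
        return None

    print(k_sum([7, -3, 1, 2, -5, 4], 3))  # [1, 2, 3], since -3 + 1 + 2 = 0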

Joint work with John Iacono and Aurélien Ooms

Algorithmes et complexité
mardi 03 mai 2016, 11h00, Salle 1007
Mehdi Mhalla (LIG Grenoble) Pseudotelepathy games with graph states, contextuality and multipartiteness width.

Analyzing pseudotelepathy graph games, we propose a way to build contextuality scenarios exhibiting quantum supremacy using graph states. We consider the combinatorial structures generating equivalent scenarios. We investigate which scenarios are more multipartite and show that there exist graphs generating scenarios with a linear multipartiteness width.

This is based on joint work with Peter Hoyer and Simon Perdrix.

Algorithmes et complexité
mardi 26 avril 2016, 11h00, Salle 4033
Jan Hackfeld (TU Berlin) Undirected Graph Exploration with Θ(log log n) Pebbles

We consider the fundamental problem of exploring an undirected and initially unknown graph by an agent with little memory. The vertices of the graph are unlabeled, and the edges incident to a vertex have locally distinct labels. In this setting, it is known that Θ(log n) bits of memory are necessary and sufficient to explore any graph with at most n vertices. We show that this memory requirement can be decreased significantly by making a part of the memory distributable in the form of pebbles. A pebble is a device that can be dropped to mark a vertex and can be collected when the agent returns to the vertex. We show that O(log log n) distinguishable pebbles and O(log log n) bits of memory suffice for an agent to explore any bounded-degree graph with at most n vertices. We match this result with a lower bound exhibiting that for any agent with sub-logarithmic memory, Ω(log log n) distinguishable pebbles are necessary for exploration.

This talk is based on joint work with Yann Disser and Max Klimm (SODA'16).

Algorithmes et complexité
mardi 19 avril 2016, 11h00, Salle 1007
Charles Bennett () Is there such a thing as private classical information?

Classical secret information lies on a slippery slope between public information and quantum information. Even leaving aside fanciful attacks like neutrino tomography, a typical classical secret—say a paper document locked in a safe—quickly decoheres and becomes recoverable in principle from the environment outside the safe. On the other hand, if a system is so well insulated from its environment that it does not decohere, it can be used as a quantum memory, capable of existing in a superposition of classical states and of being entangled with other quantum memories. We discuss the practical and theoretical difficulty of recovering a classical secret from its decohered environment, and of protecting a classical secret by arranging that some information required to recover it escapes into parts of the environment inaccessible to the eavesdropper.

Algorithmes et complexité
mercredi 30 mars 2016, 14h00, Salle 3058
Manoj Prabhakaran (University of Illinois, Urbana-Champaign) Rényi Information Complexity and an Information Theoretic Characterization of the Partition Bound

We introduce a new information-theoretic complexity measure IC∞ for 2-party functions which is a lower bound on communication complexity, and has the two leading lower bounds on communication complexity as its natural relaxations: (external) information complexity (IC) and logarithm of partition complexity (prt). These two lower bounds had so far appeared conceptually quite different from each other, but we show that they are both obtained from IC∞ using two different, but natural, relaxations:

  • IC∞ is similar to information complexity IC, except that it uses Rényi mutual information of order ∞ instead of Shannon's mutual information (which is Rényi mutual information of order 1). Hence, the relaxation of IC∞ that yields IC is to change the order of Rényi mutual information used in its definition from ∞ to 1.
  • The relaxation that connects IC∞ with partition complexity is to replace the protocol transcripts used in the definition of IC∞ with what we term “pseudotranscripts,” which omit the interactive nature of a protocol and only require that the probability of any transcript, given inputs x and y to the two parties, factorizes into two terms which depend on x and y separately. While this relaxation yields an apparently different definition than the (log of the) partition function, we show that the two are in fact identical. This gives us a surprising characterization of the partition bound in terms of an information-theoretic quantity.

Further understanding IC∞ might have consequences for important direct-sum problems in communication complexity, as it lies between communication complexity and information complexity.

We also show that if both the above relaxations are simultaneously applied to IC∞, we obtain a complexity measure that is lower-bounded by the (log of) relaxed partition complexity, a complexity measure introduced by Kerenidis et al. (FOCS 2012). We obtain a similar (but incomparable) connection between (external) information complexity and relaxed partition complexity as Kerenidis et al., using an arguably more direct proof.

This is joint work with Vinod Prabhakaran.

Algorithmes et complexité
vendredi 11 mars 2016, 11h00, Salle 1007
Christian Konrad (Reykjavik University) Streaming Algorithms for Partitioning Sequences and Trees

Partitioning sequences and trees are classical load balancing problems that received considerable attention in the 1980s and 1990s. For both problems, exact algorithms with different characteristics exist. However, in the context of massive data sets, these algorithms fail since they assume random access to the input, an assumption that can hardly be granted. The key motivation of this work is the partitioning of current XML databases, some of which reach many terabytes in size. In an XML database, data is organized in a tree structure.

In this talk, I will present streaming algorithms for both problems. The presented algorithms require a random access memory whose size is only logarithmic in the size of the input, which makes them good candidates for performing well in practice. This work will be presented next week at ICDT 2016, the 19th International Conference on Database Theory.
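
As a reminder of the offline version of sequence partitioning (not the streaming algorithm from the talk): split a sequence of nonnegative integer weights into p contiguous parts minimizing the maximum part weight, via binary search on the bottleneck value with a greedy feasibility check. This needs the whole input at hand, which is precisely what the streaming setting forbids.

    # Offline sequence partitioning, for reference: minimize the maximum part
    # weight over partitions into p contiguous parts, by binary search on the
    # bottleneck plus a greedy feasibility check.

    def parts_needed(weights, bound):
        parts, current = 1, 0
        for w in weights:
            if current + w > bound:
                parts, current = parts + 1, 0
            current += w
        return parts

    def min_bottleneck(weights, p):
        lo, hi = max(weights), sum(weights)
        while lo < hi:
            mid = (lo + hi) // 2
            if parts_needed(weights, mid) <= p:
                hi = mid
            else:
                lo = mid + 1
        return lo

    print(min_bottleneck([2, 4, 7, 1, 5, 3], p=3))  # 8, e.g. [2, 4] | [7, 1] | [5, 3]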

Algorithmes et complexité
mardi 23 février 2016, 11h00, Salle 1007
Ashwin Nayak (University of Waterloo) Sampling quantum states

A classic result in information theory, the source coding theorem, shows how we may compress a sample from a random variable X into essentially H(X) bits on average, without any loss. (Here H(X) denotes the Shannon entropy of X.) We revisit the analogous problem in quantum communication, in the presence of shared entanglement. No characterization of the communication needed for lossless compression is known in this scenario. We study a natural protocol for such compression, quantum rejection sampling, and give bounds on its complexity. Even though we do not have a precise characterization of the complexity, we show how it may be used to derive some consequences of lossless compression.

Joint work with Ala Shayeghi.

Algorithmes et complexité
mardi 16 février 2016, 11h00, Salle 1007
Johan Thapper (Université Paris-Est, Marne-la-Vallée, LIGM) Constraint Satisfaction Problems, LP relaxations and Polymorphisms

An instance of the Constraint Satisfaction Problem (CSP) is given by a set of constraints over a set of variables. The variables take values from a (finite) domain and the constraints are specified by relations over the domain that need to hold between various subsets of variables. In the late 90s, it was realised that the complexity of the CSP restricted to some fixed set of relations is captured by an associated set of operations called polymorphisms. This connection has led to a great influx of ideas and tools (as well as researchers) from universal algebra, a field of mathematics that in particular studies algebras of such operations.

A quite general optimisation version of the CSP is obtained by replacing the relations by arbitrary functions from tuples of domain values to the rationals extended with positive infinity. The goal of this problem, called the Valued Constraint Satisfaction Problem (VCSP), is to minimise a sum of such functions over all assignments. The complexity classification project of the VCSP has taken some great strides over the last four years and has recently been reduced to its more famous decision problem counterpart: the dichotomy conjecture for the CSP.

I will talk about how polymorphisms appear in the study of the CSP and some of what universal algebra has taught us. I will then show how these results can be used for characterising the efficacy of Sherali-Adams linear programming relaxations of the VCSP.
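
A concrete instance of the polymorphism notion (a standard textbook example, not a result from the talk): the Boolean majority operation preserves every 2-clause relation, which is the algebraic explanation of the tractability of 2-SAT, but it does not preserve the 1-in-3 relation, whose CSP is NP-hard.

    # A concrete polymorphism check: majority preserves the relation "x OR y"
    # (and every other 2-clause relation), but not the 1-in-3 relation.
    from itertools import product

    def majority(a, b, c):
        return (a & b) | (a & c) | (b & c)

    def is_polymorphism(op, relation):
        """Does applying `op` coordinatewise to any 3 tuples of `relation`
        always produce a tuple that is again in `relation`?"""
        return all(tuple(op(*coords) for coords in zip(t1, t2, t3)) in relation
                   for t1, t2, t3 in product(relation, repeat=3))

    or_clause = {(0, 1), (1, 0), (1, 1)}              # the relation "x OR y"
    one_in_three = {(1, 0, 0), (0, 1, 0), (0, 0, 1)}  # exactly one of x, y, z

    print(is_polymorphism(majority, or_clause))     # True
    print(is_polymorphism(majority, one_in_three))  # False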

This is based on joint work with Standa Zivny, University of Oxford (UK).

Algorithmes et complexité
mardi 02 février 2016, 11h00, Salle 1007
Balthazar Bauer () Compression of communication protocols

In communication theory, a protocol has two key measures: its communication complexity and its information complexity. There is considerable activity in the research community aimed at finding interesting relations between these two measures. One approach is to compress a protocol and to examine how efficient the compression is for a protocol with known communication and information costs. We will survey some recent compression schemes. This will also be a good occasion to discover some common tricks and algorithms from communication and information theory.

Algorithmes et complexité
mardi 26 janvier 2016, 11h00, Salle 1007
Chien-Chung Huang (Chalmers University of Technology and Göteborg University) Exact and Approximation Algorithms for Weighted Matroid Intersection

We propose new exact and approximation algorithms for the weighted matroid intersection problem. Our exact algorithm is faster than previous algorithms when the largest weight is relatively small. Our approximation algorithm delivers a $(1-\epsilon)$-approximate solution with a running time significantly faster than most known exact algorithms.

The core of our algorithms is a decomposition technique: we decompose an instance of the weighted matroid intersection problem into a set of instances of the unweighted matroid intersection problem. The computational advantage of this approach is that we can make use of fast unweighted matroid intersection algorithms as a black box for designing algorithms. Precisely speaking, we prove that we can solve the weighted matroid intersection problem via solving $W$ instances of the unweighted matroid intersection problem, where $W$ is the largest given weight. Furthermore, we can find a $(1-\epsilon)$-approximate solution via solving $O(\epsilon^{-1} \log r)$ instances of the unweighted matroid intersection problem, where $r$ is the smallest rank of the given two matroids.