Algorithmes et complexité
Tuesday, December 19, 2017, 10:00, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Bruno Salvy (DI ENS, IRIF - INRIA) On Analytic Combinatorics

Fourth lecture by Claire Mathieu at the Collège de France on Algorithms (at 10am), followed by a talk by Bruno Salvy on Analytic Combinatorics (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2017-12-19-10h00.htm

Algorithmes et complexité
Thursday, December 14, 2017, 14:30, Room 1016
Emanuele Natale (Max Planck Institute for Informatics) Computing through Dynamics: Principles for Distributed Coordination

We present analytical results regarding some simple randomized protocols, called dynamics, for solving fundamental distributed consensus problems, together with examples of how to use them to build lightweight algorithms for other important distributed problems. More specifically, we provide an overview of the theory regarding several dynamics, such as the 3-Majority, the Averaging and the Undecided-State dynamics, and we show how to use them to solve plurality consensus, distributed clustering, clock synchronization and information spreading. Motivated by applications to systems whose complexity lies between that of biological and human-made ones, we focus mainly on unstructured and random interaction models, and we also deal with scenarios in which the communication is affected by noise or in which a self-stabilizing protocol is required.
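
As an illustration of the kind of dynamics discussed above, here is a minimal simulation sketch of the 3-Majority dynamics on a complete interaction graph (the synchronous formulation and the tie-breaking rule below are one common variant, not necessarily the exact one analysed in the talk):

    import random

    def three_majority_step(opinions):
        """One synchronous round: every agent samples three agents uniformly at
        random (with replacement) and adopts the majority opinion among the three
        samples; if all three differ, it adopts one of them at random."""
        n = len(opinions)
        new = []
        for _ in range(n):
            a, b, c = (opinions[random.randrange(n)] for _ in range(3))
            if a == b or a == c:
                new.append(a)
            elif b == c:
                new.append(b)
            else:
                new.append(random.choice((a, b, c)))
        return new

    # toy run: 1000 agents, three opinions, with a small initial plurality for 0
    opinions = [0] * 400 + [1] * 300 + [2] * 300
    for _ in range(50):
        opinions = three_majority_step(opinions)
    print({o: opinions.count(o) for o in set(opinions)})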

Algorithmes et complexité
Tuesday, December 12, 2017, 10:00, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Amos Fiat (DI ENS, IRIF - University of Tel-Aviv) On Static and Dynamic Pricing

Third lecture by Claire Mathieu at the Collège de France on Algorithms (at 10am), followed by a talk by Amos Fiat on Static and Dynamic Pricing (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2017-12-12-10h00.htm

Algorithmes et complexité
Tuesday, December 12, 2017, 14:00, Room 3052
Jean Krivine (IRIF) Incremental Update for Graph Rewriting

Graph rewriting formalisms are well-established models for the representation of biological systems such as protein-protein interaction networks. The combinatorial complexity of these models usually prevents any explicit representation of the variables of the system, and one has to rely on stochastic simulations in order to sample the possible trajectories of the underlying Markov chain. The bottleneck of stochastic simulation algorithms is the update of the propensity function that describes the probability that a given rule is to be applied next. In this talk we present an algorithm based on a data structure, called extension basis, that can be used to update the counts of predefined graph observables after a rule of the model has been applied.
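
The propensity-based selection that this update feeds into can be seen in a plain Gillespie-style simulation loop; the sketch below is a generic stochastic-simulation step (rule names and rates are made up), not the extension-basis algorithm of the talk:

    import math
    import random

    def gillespie_step(propensities, time):
        """Pick the next rule with probability proportional to its propensity
        and advance time by an exponentially distributed waiting time."""
        total = sum(propensities.values())
        if total == 0:
            return None, time                            # no rule is applicable
        time += -math.log(1.0 - random.random()) / total  # waiting time with rate `total`
        r = random.random() * total                       # roulette-wheel rule selection
        acc = 0.0
        for rule, a in propensities.items():
            acc += a
            if r <= acc:
                break
        return rule, time

    # After applying the chosen rule, the simulator must recompute the propensities
    # of all affected rules: this is the costly step that incremental updates speed up.
    propensities = {"bind": 3.0, "unbind": 1.0, "modify": 0.5}
    print(gillespie_step(propensities, 0.0))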

Reference: Boutillier P., Ehrhard T., Krivine J. (2017) Incremental Update for Graph Rewriting. In: Yang H. (eds) Programming Languages and Systems. ESOP 2017. Lecture Notes in Computer Science, vol 10201. Springer, Berlin, Heidelberg

Joint seminar of the Algorithms and Discrete Structures pole

Algorithmes et complexité
Tuesday, December 5, 2017, 10:00, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Laurent Massoulié (DI ENS, IRIF - INRIA) Community Detection

Second lecture by Claire Mathieu at the Collège de France on Algorithms (at 10am), followed by a talk by Laurent Massoulié on Community Detection (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2017-12-05-10h00.htm

Algorithmes et complexité
Tuesday, November 28, 2017, 10:00, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Pierre Fraigniaud (DI ENS, IRIF - IRIF, CNRS) On Distributed Algorithms

First lecture by Claire Mathieu at the Collège de France on Algorithms (at 10am), followed by a talk by Pierre Fraigniaud on Distributed Algorithms (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2017-11-28-10h00.htm

Algorithmes et complexité
Tuesday, November 21, 2017, 11:00, Room 1007
André Chailloux (INRIA Paris) A tight security reduction in the quantum random oracle model for code-based signature schemes

Quantum-secure signature schemes have received a lot of attention recently, in particular because of the NIST call to standardize quantum-safe cryptography. However, only a few signature schemes come with concrete quantum security guarantees, because of technical difficulties associated with the Quantum Random Oracle Model (QROM). In this work, we show that code-based signature schemes based on the full domain hash paradigm can behave very well in the QROM, i.e., that we can obtain tight security reductions. We also study quantum algorithms related to the underlying code-based assumption. Finally, we apply our reduction to a concrete example: the SURF signature scheme. We provide parameters for 128 bits of quantum security in the QROM and show that the obtained parameters are competitive compared to other similar quantum-secure signature schemes.

Joint work with Thomas Debris-Alazard

Algorithmes et complexité
Thursday, November 16, 2017, 14:00, Room 3052
Elena Kirshanova (ENS Lyon) Connections between the Dihedral Coset Problem and LWE

In this talk, I explain that under quantum polynomial time reductions, LWE is equivalent to a relaxed version of the dihedral coset problem (DCP), called extrapolated DCP (eDCP). The extent of extrapolation varies with the LWE noise rate. By considering different extents of extrapolation, the result generalizes Regev’s famous proof that if DCP is in BQP (quantum poly-time) then so is LWE (FOCS 02). We'll also discuss a connection between eDCP and Childs and Van Dam’s algorithm for generalized hidden shift problems (SODA 07). The result implies that a BQP solution for LWE might not require the full power of solving DCP, but rather only a solution for its relaxed version, eDCP, which could be easier.

Algorithmes et complexité
Tuesday, November 14, 2017, 14:00, Room 3052
Laurent Massoulié (MSR-Inria) Rapid Mixing of Local Graph Dynamics

Graph dynamics arise naturally in many contexts. For instance, in peer-to-peer networks, a participating peer may replace an existing connection with one neighbour by a new connection with a neighbour's neighbour. Several such local rewiring rules have been proposed to ensure that peer-to-peer networks achieve good connectivity properties (e.g., high expansion) in equilibrium. However, it has remained an open question whether there exist such rules that also lead to fast convergence to equilibrium. In this work we provide an affirmative answer: we exhibit a local rewiring rule that converges to equilibrium after each participating node has undergone only a number of rewirings that is poly-logarithmic in the system size. The proof involves consideration of the whole isoperimetric profile of the graph, and may be of independent interest.

This is joint work with Rémi Varloot.
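
A generic neighbour-of-neighbour rewiring step of the kind described above might look as follows (illustrative only: this is not the specific rule analysed in the talk, and the data representation is mine):

    import random

    def rewire_step(adj, u):
        """Replace one of u's edges by an edge to a neighbour's neighbour."""
        if not adj[u]:
            return
        v = random.choice(sorted(adj[u]))               # neighbour to disconnect from
        candidates = [w for w in adj[v] if w != u and w not in adj[u]]
        if not candidates:
            return
        w = random.choice(candidates)                   # a neighbour's neighbour
        adj[u].remove(v); adj[v].remove(u)              # drop edge (u, v)
        adj[u].add(w); adj[w].add(u)                    # add edge (u, w)

    # toy graph: a 6-cycle stored as adjacency sets
    adj = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
    for _ in range(10):
        rewire_step(adj, random.randrange(6))
    print(adj)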

Algorithmes et complexité
Friday, November 10, 2017, 15:00, Room 3052
Simon Apers (Ghent University) Mixing and quantum sampling with quantum walks: some results and questions

Quantum walks have been shown to quadratically speed up search problems on graphs. We discuss their usage towards mixing and quantum sampling on graphs, problems that are in a sense dual to the search problem. Quantum sampling, or quantum state generation, roughly amounts to creating a superposition over the elements of a set, where this set may be implicitly defined. It is known that if this task can be solved efficiently, then the complexity class called “statistical zero knowledge” is in BQP. We discuss a folklore approach to this problem called “inverted search”, where a quantum search algorithm is essentially inverted to yield a quantum sampling algorithm. We also show how this approach solves the search problem when starting from a single element of the graph, rather than a superposition. The easier task of mixing on graphs asks for a classical sample of the set, rather than a superposition. It has long been conjectured that quantum walks can quadratically speed up this task when compared to simple random walks. We show how a quantum sampling approach confirms this conjecture for a limited set of graphs. For more general graphs however, it is conceivable that the quantum sampling problem cannot be solved efficiently, such that a different approach towards mixing is needed. We discuss ideas in this direction.

Algorithmes et complexité
Tuesday, October 31, 2017, 11:00, Room 1007
Ami Paz (IRIF) A (2+\epsilon)-Approximation for Maximum Weight Matching in the Semi-Streaming Model

In this talk I will present our new (2+\epsilon)-approximation algorithm for the maximum weight matching problem in the semi-streaming model, which was presented at SODA 2017.

We will start by discussing the local-ratio technique, a simple, sequential approximation paradigm we use in our algorithm. Then, we will consider the variations needed in order to adjust this technique to the semi-streaming model.

Based on a joint work with Gregory Schwartzman.
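
A minimal sketch of a stack-based local-ratio approach of this flavour is given below (illustrative only: the thresholds, the tie handling and the space analysis in the SODA 2017 paper differ in the details, and the parameter names are mine):

    def local_ratio_matching(edge_stream, eps=0.1):
        """Keep a potential per vertex and a stack of 'useful' edges, then
        unwind the stack greedily to build the matching."""
        phi = {}                   # vertex potentials
        stack = []                 # edges kept for the post-processing phase
        for (u, v, w) in edge_stream:
            base = phi.get(u, 0.0) + phi.get(v, 0.0)
            # keep the edge only if its weight beats the potentials by a (1+eps) factor
            if w >= (1 + eps) * base and w > base:
                gain = w - base
                stack.append((u, v))
                phi[u] = phi.get(u, 0.0) + gain
                phi[v] = phi.get(v, 0.0) + gain
        matched, matching = set(), []
        for (u, v) in reversed(stack):      # unwind in reverse arrival order
            if u not in matched and v not in matched:
                matching.append((u, v))
                matched.update((u, v))
        return matching

    print(local_ratio_matching([(1, 2, 5.0), (2, 3, 6.0), (3, 4, 5.0), (1, 4, 1.0)]))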

Algorithmes et complexité
Tuesday, October 17, 2017, 14:00, Room 3052
Claire Mathieu (DI - ENS) Online k-compaction

Given, at each time t = 1, 2, …, n, a new file of length l(t) and a read rate r(t), an online k-compaction algorithm must maintain a collection of at most k files, choosing (at each time t, without knowing future inputs) some of the files to merge into one, thereby incurring a merge cost equal to the total length of the merged files and a read cost equal to the read rate r(t) times the number of files present at time t. The goal is to minimize the total cost over time. Such k-compaction algorithms are a key component of log-structured merge trees, the file-based data structure underlying NoSQL databases such as Accumulo, Bigtable, Cassandra, HBase, and others. We initiate the theoretical study of k-compaction algorithms. We formalize the problem, consider worst-case, average-case and competitive analysis (per-instance optimality), and propose new algorithms that are optimal according to these metrics.

This is joint work with Carl Staelin, Neal E. Young, and Arman Yousefi.
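
To make the cost model concrete, here is a small simulator for one naive baseline policy (merge everything whenever more than k files would be present); the convention of charging the read cost after the merge at each step is an assumption, and the policy is only a strawman, not one of the algorithms of the paper:

    def total_cost(lengths, read_rates, k):
        """Simulate online k-compaction under a naive 'merge all when full' policy.
        lengths[t] and read_rates[t] play the roles of l(t) and r(t)."""
        files = []                        # lengths of the files currently kept
        merge_cost = 0.0
        read_cost = 0.0
        for l, r in zip(lengths, read_rates):
            files.append(l)               # the new file arrives at time t
            if len(files) > k:            # naive policy: merge everything into one file
                merge_cost += sum(files)  # merge cost = total length of merged files
                files = [sum(files)]
            read_cost += r * len(files)   # read cost = r(t) * number of files present
        return merge_cost + read_cost

    # toy instance: 6 unit-length files, read rate 2 at every step, k = 3
    print(total_cost([1] * 6, [2] * 6, k=3))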

Algorithmes et complexité
Tuesday, October 10, 2017, 11:00, Room 1007
Victor Verdugo (ENS - Universidad de Chile) How Large is Your Graph?

We consider the problem of estimating the graph size, where one is given only local access to the graph. We formally define a query model in which one starts with a \emph{seed} node and is allowed to make queries about neighbours of nodes that have already been seen. In the case of undirected graphs, an estimator of Katzir et al. (2014) based on a sample from the stationary distribution π uses O(1/‖π‖_2 + d_{avg}) queries; we prove that this is tight. In addition, we establish this as a lower bound even when the algorithm is allowed to crawl the graph arbitrarily; the results of Katzir et al. give an upper bound that is worse by a multiplicative factor t_{mix}⋅log(n). The picture becomes significantly different in the case of directed graphs. We show that without strong assumptions on the graph structure, the number of nodes cannot be predicted to within a constant multiplicative factor without using a number of queries that is at least linear in the number of nodes; in particular, rapid mixing and small diameter, properties that most real-world networks exhibit, do not suffice. The question of interest is whether any algorithm can beat breadth-first search. We introduce a new parameter, generalising the well-studied conductance, such that if a suitable bound on it exists and is known to the algorithm, the number of queries required is sublinear in the number of edges; we show that this is tight.
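
As a simplified illustration of the flavour of such estimators, the sketch below is the plain birthday-paradox estimate for (approximately) uniform samples, e.g. on a regular graph where the stationary distribution is uniform; the estimator of Katzir et al. additionally reweights samples by node degrees, which is not done here:

    import random
    from itertools import combinations

    def birthday_estimate(samples):
        """Estimate the population size n from roughly uniform samples: among r
        samples one expects about r(r-1)/(2n) colliding pairs, hence
        n is approximately r(r-1) / (2 * #collisions)."""
        r = len(samples)
        collisions = sum(1 for a, b in combinations(samples, 2) if a == b)
        return float("inf") if collisions == 0 else r * (r - 1) / (2.0 * collisions)

    # sanity check: sample uniformly from a ground set of size 10000
    n = 10000
    samples = [random.randrange(n) for _ in range(2000)]
    print(birthday_estimate(samples))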

Algorithmes et complexité
Tuesday, October 3, 2017, 11:00, Room 1007
Giuseppe Italiano (Università di Roma Tor Vergata) Decremental Single-Source Reachability and Strongly Connected Components in O(m \sqrt{n} log n) Total Update Time

We present randomized algorithms with a total update time of O(m \sqrt{n} log n) for the problems of decremental single-source reachability and decremental strongly connected components on directed graphs. This sharply improves recent breakthrough results of Henzinger, Krinninger and Nanongkai [STOC 14, ICALP 15]. In addition, our algorithms are arguably simpler. In this talk, we first review the Las Vegas algorithm by Roditty and Zwick [SICOMP 08] and the deterministic algorithm by Łącki [TALG 13], both with total update time O(mn) (expected time in the case of the algorithm by Roditty and Zwick). Our algorithms carefully combine these two results, along with new structural properties, including a separation lemma in the case of graphs with a large diameter.

Joint work with Shiri Chechik, Thomas Dueholm Hansen, Jakub Łącki and Nikos Parotsidis.

Algorithmes et complexité
Tuesday, September 26, 2017, 11:00, Room 1007
Vianney Perchet (CMLA, Ens Paris-Saclay) Online Search Problems

In traditional “search problems”, an agent aims at exploring a graph in order to find a hidden object as fast as possible. We consider here the Bayesian repeated scenario with several i.i.d. instances of the same basic search problem. The objective is, within a given amount of time, to find as many objects as possible, with the possibility of leaving an instance for the next one at any moment.

We also consider the non-Bayesian case with a variant of regret minimization. We provide and analyze several algorithms with fast rates of convergence and/or guaranteed efficiency.

Algorithmes et complexité
Tuesday, September 19, 2017, 11:00, Room 1007
Vincent Viallat Cohen-Addad Hierarchical Clustering: Objective Functions and Algorithms

Hierarchical clustering is a recursive partitioning of a dataset into clusters at increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a `good' hierarchical clustering is one that minimizes some cost function. He showed that this cost function has certain desirable properties.
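
For reference, Dasgupta's cost function for a similarity graph G = (V, E, w) and a hierarchical clustering tree T over V can be written as (notation mine)

    cost_G(T) = \sum_{\{i,j\} \in E} w(i,j) \, |leaves(T[i \vee j])|,

where T[i \vee j] denotes the subtree rooted at the least common ancestor of leaves i and j; minimizing it forces highly similar pairs to be separated as low in the tree as possible.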

We take an axiomatic approach to defining `good' objective functions for both similarity and dissimilarity-based hierarchical clustering. We characterize a set of “admissible” objective functions (which includes Dasgupta's) with the property that, whenever the input admits a `natural' hierarchical clustering, that clustering achieves the optimal value.

Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better algorithms. For similarity-based hierarchical clustering, Dasgupta showed that the divisive sparsest-cut approach achieves an O(log^{3/2} n)-approximation. We give a refined analysis of the algorithm and show that it in fact achieves an O(\sqrt{log n})-approximation (Charikar and Chatziafratis independently proved that it is an O(\sqrt{log n})-approximation). This improves upon the LP-based O(log n)-approximation of Roy and Pokutta. For dissimilarity-based hierarchical clustering, we show that the classic average-linkage algorithm gives a factor-2 approximation, and we provide a simple and better algorithm that gives a factor-3/2 approximation.

Finally, we consider a `beyond-worst-case' scenario through a generalisation of the stochastic block model for hierarchical clustering. We show that Dasgupta's cost function has desirable properties for these inputs, and we provide a simple (1 + o(1))-approximation in this setting.

Joint work with Varun Kanade, Frederik Mallmann-Trenn, and Claire Mathieu.

Algorithmes et complexité
Tuesday, July 11, 2017, 11:00, Room 1007
Oded Regev (Courant Institute of Mathematical Sciences, NYU) A Reverse Minkowski Theorem

Informally, Minkowski's first theorem states that lattices that are globally dense (have small determinant) are also locally dense (have lots of points in a small ball around the origin). This fundamental result dates back to 1891 and has a very wide range of applications.
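
One standard quantitative form of this statement, given here only for reference (the talk may use a different normalization), is that every full-rank lattice L in R^n satisfies

    \lambda_1(L) \le \sqrt{n} \cdot (\det L)^{1/n},

so a lattice with small determinant necessarily contains a short nonzero vector.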

I will present a proof of a reverse form of Minkowski's theorem, conjectured by Daniel Dadush in 2012, showing that locally dense lattices are also globally dense (in the appropriate sense).

The talk will be self-contained and I will not assume any familiarity with lattices.

Based on joint papers with Daniel Dadush and Noah Stephens-Davidowitz.

Algorithmes et complexité
Thursday, July 6, 2017, 11:00, Room 3052
Joel Friedman (University of British Columbia) Inner Rank and Lower Bounds for Matrix Multiplication

We give a short proof of the optimality of Strassen's classical algorithm for 2 x 2 matrix multiplication, and more generally give a 2n^2 - n + 1 lower bound for n x n matrix multiplication, in terms of the border rank (and, hence, also rank) of the associated tensor, over an arbitrary field.

We call the tool we use “inner rank,” which seems to have gone unnoticed in the literature (at least in the way we use it). While our bounds are not competitive with the best bounds to date over the complex numbers, we argue that “inner rank” may lead to improved results and merits further study.
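
For reference, Strassen's classical 2 x 2 scheme whose optimality is discussed above uses 7 multiplications; the following quick numeric check of the standard formulas is only an illustration of the algorithm in question, not of the inner-rank technique:

    import random

    def strassen_2x2(A, B):
        """Multiply two 2x2 matrices using Strassen's 7 products."""
        (a11, a12), (a21, a22) = A
        (b11, b12), (b21, b22) = B
        m1 = (a11 + a22) * (b11 + b22)
        m2 = (a21 + a22) * b11
        m3 = a11 * (b12 - b22)
        m4 = a22 * (b21 - b11)
        m5 = (a11 + a12) * b22
        m6 = (a21 - a11) * (b11 + b12)
        m7 = (a12 - a22) * (b21 + b22)
        return [[m1 + m4 - m5 + m7, m3 + m5],
                [m2 + m4, m1 - m2 + m3 + m6]]

    def naive_2x2(A, B):
        return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
                for i in range(2)]

    A = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    B = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    assert strassen_2x2(A, B) == naive_2x2(A, B)
    print("Strassen's 7-multiplication scheme matches the naive product.")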

Algorithmes et complexité
Tuesday, June 20, 2017, 11:00, Room 1007
Laurent Bulteau (CNRS, Laboratoire d'Informatique Gaspard Monge, Marne-la-Vallée, France.) Beyond Adjacency Maximization: Scaffold Filling for New String Distances

In Genomic Scaffold Filling, one aims at polishing in silico a draft genome, called a scaffold. The scaffold is given in the form of an ordered set of gene sequences, called contigs. This is done by confronting the scaffold with an already complete reference genome from a close species. More precisely, given a scaffold S, a reference genome G (represented as a string) and a score function f() between two genomes, the aim is to complete S by adding the missing genes from G so that the obtained complete genome S* optimizes f(S*,G). In this talk, we will consider two alternative score functions: the first aims at maximizing the number of common k-mers between S* and G (k-Mer Scaffold Filling), the second aims at minimizing the number of string breakpoints between S* and G (Min-Breakpoint Scaffold Filling). We study these problems from the parameterized complexity point of view, providing fixed-parameter tractable (FPT) algorithms for both problems. In particular, we show that k-Mer Scaffold Filling is FPT with respect to the number of additional k-mers realized by the completion of S. We also show that Min-Breakpoint Scaffold Filling is FPT with respect to a parameter combining the number of missing genes, the number of gene repetitions and the target distance.

Algorithmes et complexité
Tuesday, June 13, 2017, 11:00, Room 1007
Abel Molina The Optimality of Projections for Quantum State Exclusion

We will first motivate the problem of quantum state exclusion of pure states, through its connections with the PBR game and with compatibility conditions for quantum state assignments. Then, we will discuss our recent result regarding the optimality of projections for perfect state exclusion of 3 pure states in 3 dimensions (arXiv:1702.06449). We will describe our techniques to prove this result, which are based on arguments involving convexity, rank and symmetry properties. Finally, we will discuss other related results, as well as possible avenues for extending our results.

Algorithmes et complexité
Tuesday, June 6, 2017, 16:00, Room 3052
Thomas Vidick (California Institute of Technology) Entanglement Tests from Group Representations.

I will present a novel perspective on entanglement tests, such as the CHSH inequality and extensions thereof, pioneered by Slofstra and (unknowingly) Gowers-Hatami, that is based on the theory of (approximate) representations of non-abelian groups. We will see how this theory leads to a simple analysis of:

(i) The first family of self-tests (or, two-player entangled games) for arbitrarily high-dimensional entanglement that is robust to a constant fraction of noise. (Joint work with Natarajan, arXiv:1610.03574.)

(ii) The first finite self-test such that for any eps>0, entangled states of dimension poly(1/eps) are both necessary and sufficient to achieve success probability 1-eps. (Joint work with Slofstra, in preparation.)

Algorithmes et complexité
Tuesday, May 2, 2017, 11:00, Room 1007
Pierre Senellart (DI/ENS) Tree decompositions for probabilistic data management

Joint work with Antoine Amarilli, Pierre Bourhis, Mikaël Monet, Silviu Maniu.

In this talk, we review recent work on the problem of efficiently querying probabilistic databases, i.e., compact representations of probability distributions over databases, by exploiting the structure of the data. In the deterministic setting, a celebrated result by Courcelle shows that any monadic second-order query can be evaluated in linear-time on instances of bounded treewidth. We show that this result generalizes to probabilistic instances through the use of provenance. In the probabilistic setting, we show that a bound on the treewidth is necessary for query evaluation to be tractable, in sharp contrast with the deterministic setting. This leads us to studying which real-world databases actually have low treewidth, and to propose practical algorithms for query evaluation in databases that can be partially decomposed in a low-treewidth part and a high-treewidth core.

Pierre Senellart is a Professor in the Computer Science Department at the École normale supérieure in Paris, France, and the head of the Valda team of Inria Paris. He is an alumnus of ENS and obtained his M.Sc. (2003) and Ph.D. (2007) in computer science from Université Paris-Sud, studying under the supervision of Serge Abiteboul. He was awarded an Habilitation à diriger les recherches in 2012 from Université Pierre et Marie Curie. Before joining ENS, he was an Associate Professor (2008–2013) then a Professor (2013–2016) at Télécom ParisTech. He also held secondary appointments as Lecturer at the University of Hong Kong in 2012–2013, and as Senior Research Fellow at the National University of Singapore from 2014 to 2016. His research interests focus around practical and theoretical aspects of Web data management, including Web crawling and archiving, Web information extraction, uncertainty management, Web mining, and intensional data management.

Algorithmes et complexité
Tuesday, April 25, 2017, 11:00, Room 1007
Nicolas Flammarion (DI/ENS) Optimal rates for Least-Squares Regression through stochastic gradient descent.

At a broad level, stochastic optimization is concerned with optimizing a function in the presence of noise. In this context, we consider the optimization of a quadratic objective function whose gradients are only accessible through a stochastic oracle that returns the gradient at any given point plus a zero-mean finite variance random error. We present the first algorithm that achieves the optimal prediction error rates for least-squares regression, both in terms of forgetting of initial conditions in O(1/n^2), and in terms of dependence on the noise and dimension d of the problem, as O(d/n). Our new algorithm is based on averaged accelerated regularized gradient descent, and may also be analyzed through finer assumptions on initial conditions and the Hessian matrix, leading to dimension-free quantities that may still be small while the “optimal” terms above are large.
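
As a minimal sketch of the setting only, here is plain averaged SGD on a least-squares objective with a stochastic gradient oracle; the algorithm of the talk additionally uses acceleration and regularization, which are omitted here:

    import numpy as np

    def averaged_sgd_least_squares(X, y, steps, step_size):
        """Constant-step-size SGD on the objective 0.5 * E[(<x, theta> - y)^2],
        returning the Polyak-Ruppert average of the iterates."""
        n, d = X.shape
        theta = np.zeros(d)
        theta_avg = np.zeros(d)
        for t in range(steps):
            i = np.random.randint(n)                    # stochastic gradient oracle
            grad = (X[i] @ theta - y[i]) * X[i]         # unbiased gradient estimate
            theta -= step_size * grad
            theta_avg += (theta - theta_avg) / (t + 1)  # running average of the iterates
        return theta_avg

    # toy problem: y = <x, theta*> + noise
    np.random.seed(0)
    n, d = 5000, 10
    X = np.random.randn(n, d)
    theta_star = np.random.randn(d)
    y = X @ theta_star + 0.1 * np.random.randn(n)
    est = averaged_sgd_least_squares(X, y, steps=20000, step_size=0.01)
    print(np.linalg.norm(est - theta_star))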

Algorithmes et complexité
Tuesday, April 18, 2017, 11:00, Room 1007
Adrian Kosowski (Inria, IRIF Université Paris 7) Protocols for Detecting a Signal

We consider the following interaction scenario: a population of agents is required to perpetually detect whether or not the population contains at least one agent in a distinguished state X. This problem may be viewed as a variant of rumor-spreading, in which a “true” rumor should spread and persist in the population for as long as its primary rumor source X is present, and a “false” rumor without a source should be actively suppressed. Investigations into the problem are also directly motivated by the question of detecting and amplifying trace concentrations of a chemical substance X, e.g., in a framework of DNA computing with chemical reaction networks (CRNs).

We describe two population protocols which solve the considered problem efficiently, converging to a correct output state on a (1-epsilon)-fraction of a population of n agents within a polylogarithmic number of steps, starting from any initial configuration of the system, w.h.p. under a fair uniformly random scheduler. One of the proposed protocols requires O(log n) states and has certain desirable robustness properties, while the other uses only a constant number of states.
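
One folklore way to obtain an O(log n)-state protocol for this kind of detection task is a decaying-timer scheme; the sketch below simulates such a scheme under a uniformly random pairwise scheduler and is purely illustrative (it is not one of the two protocols of the talk, and the constants are arbitrary):

    import random

    def interaction(values, sources, K):
        """One random pairwise interaction. Agents in the distinguished state X
        always hold the maximal timer value K; every other agent takes the larger
        of the two timers minus one (never below 0) and 'detects X' iff its
        timer is positive."""
        i, j = random.sample(range(len(values)), 2)
        m = max(values[i], values[j]) - 1
        values[i] = K if i in sources else max(m, 0)
        values[j] = K if j in sources else max(m, 0)

    n, K = 1000, 64                      # K = Theta(log n); the constant is arbitrary
    sources = {0}                        # one agent in the distinguished state X
    values = [K if i in sources else 0 for i in range(n)]
    for _ in range(200 * n):             # roughly 200 "parallel" rounds
        interaction(values, sources, K)
    print(sum(1 for v in values if v > 0), "of", n, "agents currently detect X")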

Algorithmes et complexité
Thursday, April 13, 2017, 14:00, Room 3052
Przemyslaw Uznanski (ETH Zürich, Switzerland) All-Pairs 2-reachability in O(n^ω log n) Time

In the 2-reachability problem we are given a directed graph G and we wish to determine if there are two (edge or vertex) disjoint paths from u to v, for a given pair of vertices u and v. We present an algorithm that computes 2-reachability information for all pairs of vertices in O(n^ω log n) time. Hence, we show that the running time of all-pairs 2-reachability is within a log factor of transitive closure.

Moreover, our algorithm produces a witness (i.e., a separating edge or a separating vertex) for all pairs of vertices where 2-reachability does not hold. By processing these witnesses, we can compute all the edge- and vertex-dominator trees of G in O(n^2) additional time, which in turn enables us to answer various connectivity queries in O(1) time. For instance, we can test in constant time if there is a path from u to v avoiding an edge e, for any pair of query vertices u and v and any query edge e, or if there is a path from u to v avoiding a vertex w, for any query vertices u, v, and w.

This is joint work with Loukas Georgiadis (University of Ioannina), Daniel Graf (ETH Zurich), Giuseppe Italiano and Nikos Parotsidis (University of Rome, Tor Vergata).

Link to the paper: https://arxiv.org/abs/1612.08075
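
As a concrete illustration of the 2-reachability definition for a single pair (two edge-disjoint u-to-v paths checked via a unit-capacity flow), here is a small sketch; it illustrates the notion only and is not the all-pairs algorithm of the paper:

    from collections import deque

    def two_edge_disjoint(n, edges, s, t):
        """Return True iff the directed graph on vertices 0..n-1 has two
        edge-disjoint s->t paths, i.e. a unit-capacity max flow of value >= 2."""
        cap = {}
        adj = [set() for _ in range(n)]
        for u, v in edges:
            cap[(u, v)] = cap.get((u, v), 0) + 1
            cap.setdefault((v, u), 0)              # residual arc
            adj[u].add(v)
            adj[v].add(u)
        flow = 0
        while flow < 2:
            parent = {s: None}
            queue = deque([s])
            while queue and t not in parent:       # BFS for an augmenting path
                u = queue.popleft()
                for v in adj[u]:
                    if v not in parent and cap[(u, v)] > 0:
                        parent[v] = u
                        queue.append(v)
            if t not in parent:
                break
            v = t
            while parent[v] is not None:           # augment along the path found
                u = parent[v]
                cap[(u, v)] -= 1
                cap[(v, u)] += 1
                v = u
            flow += 1
        return flow >= 2

    # two edge-disjoint paths from 0 to 2: 0->1->2 and 0->3->2
    print(two_edge_disjoint(4, [(0, 1), (1, 2), (0, 3), (3, 2)], 0, 2))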

Algorithmes et complexité
Tuesday, March 21, 2017, 11:00, Room 1007
Emanuele Natale Friend or Foe? Population Protocols can perform Community Detection

We present a simple distributed algorithm that, given a regular graph consisting of two communities (or clusters), each inducing a good expander and such that the cut between them has sparsity $1/\mbox{polylog}(n)$, recovers the two communities.

More precisely, upon running the protocol, every node assigns itself a binary label of $m = \Theta(\log n)$ bits, so that with high probability, for all but a small number of outliers, nodes within the same community are assigned labels with Hamming distance $o(m)$, while nodes belonging to different communities receive labels with Hamming distance at least $m/2 - o(m)$. We refer to such an outcome as a “community sensitive labeling” of the graph.

Our algorithm uses $\Theta(\log^2 n)$ local memory and computes the community sensitive labeling after each node performs $\Theta(\log^2 n)$ steps of local work. Our algorithm and its analysis work in the “(random) population protocol” model, in which anonymous nodes do not share any global clock (the model is asynchronous) and communication occurs over one single (random) edge per round. We believe this is the first provably effective protocol for community detection that works in this model.

Joint work with Luca Becchetti, Andrea Clementi, Francesco Pasquale, Prasad Raghavendra, and Luca Trevisan.

Link to the arxiv version: https://arxiv.org/abs/1703.05045

Algorithmes et complexité
Tuesday, February 21, 2017, 11:00, Room 1007
Zvi Lotker Literature Networks and Time in Social Networks.

In this talk I will describe the connection between social networks and theater plays. I will show how algorithms can analyze plays using social networks, and how plays can reveal an interesting algorithmic problem.

Algorithmes et complexité
Tuesday, February 7, 2017, 11:00, Room 1007
Charles Paperman Streaming and circuit complexity

In this talk, I will present a connection between the streaming complexity and the circuit complexity of regular languages through a notion of streaming by block. This result provides tight constructions of boolean circuits computing an automaton, thanks to some classical and recent results on the circuit complexity of regular languages.

Algorithmes et complexité
Tuesday, January 31, 2017, 11:00, Room 1007
Chien-Chung Huang (CNRS, ENS Paris) Popularity, Mixed Matchings, and Self-duality

We consider the problem of matching applicants to jobs under two-sided preferences in a popular and utility-optimal manner. Our input instance is a bipartite graph G = (A \cup B,E) with a utility function w: E \rightarrow Q where each vertex u \in A \cup B has a preference list ranking its neighbors in a strict order of preference. For any two matchings M and T in G, let \phi(M,T) be the number of vertices that prefer M to T. A matching M is popular if \phi(M,T) \geq \phi(T,M) for all matchings T in G. A mixed matching is defined as \Pi = \{(M_0,p_0),\ldots,(M_k,p_k)\} for some matchings M_0,\ldots,M_k and \sum_{i=0}^k p_i = 1, p_i \geq 0 for all i. The function \phi(\cdot,\cdot) easily extends to mixed matchings; a mixed matching \Pi is popular if \phi(\Pi,\Lambda) \geq \phi(\Lambda,\Pi) for all mixed matchings \Lambda in G.
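
The extension referred to here is presumably the natural bilinear one (stated only for reference, with notation mine): for mixed matchings \Pi = \{(M_i,p_i)\} and \Lambda = \{(N_j,q_j)\},

    \phi(\Pi,\Lambda) = \sum_i \sum_j p_i q_j \, \phi(M_i,N_j),

i.e., the expected number of vertices preferring a matching drawn from \Pi to a matching drawn independently from \Lambda.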

Motivated by the fact that a popular mixed matching could have a much higher utility than all popular matchings, we study the popular fractional matching polytope {\cal P}_G. Our main result is that this polytope is half-integral (and in the special case where a stable matching is a perfect matching, this polytope is integral), implying that there is always a max-utility popular mixed matching \Pi such that \Pi = \{(M_0,\frac{1}{2}),(M_1,\frac{1}{2})\} and \Pi can be computed in polynomial time. An immediate consequence is that in order to implement a max-utility popular mixed matching in $G$, we need just a single random bit.

We analyze {\cal P}_G, whose description may have exponentially many constraints, via an extended formulation in m+n dimensions with O(m+n) constraints. The linear program that gives rise to this formulation has an unusual property: self-duality. In other words, this linear program is exactly identical to its dual program. This is a rare case where an LP of a natural problem has such a property. The self-duality of the LP plays a crucial role in our proof of the half-integrality of {\cal P}_G.

We also show that our result carries over to the roommates problem, where the given graph G may not be bipartite. The polytope of popular fractional matchings is still half-integral here and thus we can compute a max-utility popular half-integral matching in G in polynomial time. To complement this result, we also show that the problem of computing a max-utility popular (integral) matching in a roommates instance G is NP-hard.

Algorithmes et complexité
Tuesday, January 10, 2017, 11:00, Room 1007
Vincent Jugé (LSV, CNRS & ENS Cachan, Univ. Paris-Saclay) Dynamic Complexity of Parity Games with Bounded Tree-Width

Dynamic complexity is concerned with the complexity of updating a solution to a problem when its input changes. A typical example is as follows: given a directed graph G and two designated vertices s and t, you wish to know whether there exists a path from s to t in G. After your computation is complete, the graph is modified (e.g., by adding or deleting an edge): how should you proceed to update your answer at the least cost? Computing and storing auxiliary data, such as maintaining a covering forest, might be helpful in that regard.

In this talk, we will consider a specific class for dynamic complexity, called DynFO. A dynamic problem belongs to this class if updates can be performed by applying first-order formulas. We will show that a dynamic variant of Courcelle's theorem belongs to DynFO, and we will apply this result to computing optimal strategies in 2-player reachability or parity games.
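
As a textbook example of such a first-order update (for the reachability problem of the first paragraph, and only for edge insertions, which is the easy case): if T(x,y) is the maintained reachability relation, with T(x,x) true for every x, and the edge (a,b) is inserted, the updated relation is definable by

    T'(x,y) \equiv T(x,y) \vee (T(x,a) \wedge T(b,y)).

Handling edge deletions is the difficult direction and requires more elaborate auxiliary relations.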

This talk is based on joint work with Patricia Bouyer-Decitre and Nicolas Markey.