Algorithms and complexity
Tuesday December 15, 2020, 2PM, Online
Paul Fermé (ENS Lyon) Tight Approximation Guarantees for Concave Coverage Problems

In the maximum coverage problem, we are given subsets $T_1, \ldots, T_m$ of a universe $[n]$ along with an integer $k$ and the objective is to find a subset $S \subseteq [m]$ of size $k$ that maximizes $C(S) := \lvert\bigcup_{i \in S} T_i\rvert$. It is a classic result that the greedy algorithm for this problem achieves an optimal approximation ratio of $(1-e^{-1})$. In this work we consider a generalization of this problem wherein an element $a$ can contribute an amount that depends on the number of times it is covered. Given a concave, nondecreasing function $\varphi$, we define $C^{\varphi}(S) := \sum_{a \in [n]}w_a\varphi(|S|_a)$, where $|S|_a = |\{i \in S : a \in T_i\}|$. The standard maximum coverage problem corresponds to taking $\varphi(j) = \min\{j,1\}$. For any such $\varphi$, we provide an efficient algorithm that achieves an approximation ratio equal to the Poisson concavity ratio of $\varphi$, defined by $\alpha_{\varphi} := \min_{x \in \mathbb{N}^*} \frac{\mathbb{E}[\varphi(\mathrm{Poi}(x))]}{\varphi(\mathbb{E}[\mathrm{Poi}(x)])}$. Complementing this approximation guarantee, we establish a matching NP-hardness result when $\varphi$ grows sublinearly. As special cases, we improve the result of [BFGG20] about maximum multi-coverage, which was based on the unique games conjecture, and we recover the result of [DDMS20] on multi-winner approval-based voting for geometrically dominant rules. Our result goes beyond these special cases and we illustrate it with applications to distributed resource allocation problems, welfare maximization problems and approval-based voting for general rules.
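For intuition, here is a minimal Python sketch (an illustration built from the abstract's definitions, not code from the paper) of the natural greedy heuristic for $C^{\varphi}$, together with a numerical evaluation of the Poisson concavity ratio; for $\varphi(j) = \min\{j,1\}$ it recovers $1 - e^{-1} \approx 0.632$.

```python
import math

def greedy_concave_coverage(sets, weights, k, phi):
    """Greedy heuristic for C_phi(S) = sum_a w_a * phi(|S|_a):
    repeatedly add the set with the largest marginal gain."""
    chosen = []
    cover = {a: 0 for a in weights}            # current |S|_a per element
    for _ in range(k):
        def gain(i):
            return sum(weights[a] * (phi(cover[a] + 1) - phi(cover[a]))
                       for a in sets[i])
        best = max((i for i in range(len(sets)) if i not in chosen), key=gain)
        chosen.append(best)
        for a in sets[best]:
            cover[a] += 1
    return chosen

def poisson_concavity_ratio(phi, max_x=30, trunc=300):
    """alpha_phi = min over positive integers x of E[phi(Poi(x))] / phi(x),
    using phi(E[Poi(x)]) = phi(x) and a truncated Poisson expectation."""
    def e_phi_poisson(x):
        p, total = math.exp(-x), 0.0
        for j in range(trunc):
            total += p * phi(j)
            p *= x / (j + 1)                   # Poisson pmf, updated iteratively
        return total
    return min(e_phi_poisson(x) / phi(x) for x in range(1, max_x + 1))

# phi(j) = min(j, 1) is classic max coverage: alpha_phi = 1 - 1/e ~ 0.632.
print(poisson_concavity_ratio(lambda j: min(j, 1)))
print(greedy_concave_coverage([{0, 1}, {1, 2}, {0, 2}], {0: 1, 1: 1, 2: 1},
                              k=2, phi=lambda j: min(j, 1)))
```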

Based on joint work with Siddharth Barman and Omar Fawzi.

Algorithms and complexity
Wednesday December 2, 2020, 2PM, Online
Claire Mathieu (IRIF) Competitive Data-Structure Dynamization

Data-structure dynamization is a general approach for making static data structures dynamic. It is used extensively in geometric settings and in the guise of so-called merge (or compaction) policies in big-data databases such as Google Bigtable and LevelDB (our focus). Previous theoretical work is based on worst-case analyses for uniform inputs – insertions of one item at a time and constant read rate. In practice, merge policies must not only handle batch insertions and varying read/write ratios, but can also take advantage of such non-uniformity to reduce cost on a per-input basis. To model this, we initiate the study of data-structure dynamization through the lens of competitive analysis, via two new online set-cover problems. For each, the input is a sequence of disjoint sets of weighted items. The sets are revealed one at a time. The algorithm must respond to each with a set cover that covers all items revealed so far. It obtains the cover incrementally from the previous cover by adding one or more sets and optionally removing existing sets. For each new set the algorithm incurs a build cost equal to the weight of the items in the set. In the first problem the objective is to minimize total build cost plus total query cost, where the algorithm incurs a query cost at each time t equal to the current cover size. In the second problem, the objective is to minimize the build cost while keeping the query cost from exceeding k (a given parameter) at any time. We give deterministic online algorithms for both variants, with competitive ratios of Θ(log* n) and k, respectively. The latter ratio is optimal for the second variant.
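To make the cost model concrete, here is a toy simulator of the first (build-plus-query) variant with a deliberately naive policy that merges everything whenever the cover grows past a threshold; the policy is purely illustrative and is not the competitive algorithm from the paper.

```python
def simulate(batches, max_components=3):
    """Toy cost accounting for the build-plus-query variant.

    batches: list of batch weights (total weight of items arriving each time).
    Policy (illustrative only): when the cover exceeds max_components sets,
    rebuild everything into one set, paying its full weight again."""
    cover = []            # weights of the sets currently in the cover
    build = query = 0
    for w in batches:
        cover.append(w)   # build the new batch as its own set
        build += w
        if len(cover) > max_components:
            merged = sum(cover)      # rebuild one set covering everything
            build += merged
            cover = [merged]
        query += len(cover)          # query cost at this time = cover size
    return build, query

print(simulate([1] * 16))            # uniform unit inserts
```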

Algorithms and complexity
Wednesday October 28, 2020, 11AM, Salle 3052 & Online
Simon Mauras (IRIF) Assessment of Covid-19 containment strategies in primary schools and workplaces using contact graphs

I will explain how modelling the epidemic with contact graphs can help assess reopening strategies. This is joint work with Guillaume Duboc, Max Dupré la Tour, Paolo Frasca, Claire Mathieu, Lulla Opatowski and Laurent Viennot.

Algorithms and complexity
Wednesday October 21, 2020, 11AM, Salle 3052 & Online
Geoffroy Couteau (IRIF) Contact Tracing

Presentation of the working group on privacy in contact tracing organized during the COVID-19 pandemic.

Algorithms and complexity
Wednesday October 7, 2020, 11AM, Salle 3052
Adrian Vladu (IRIF) Circulation Control for Faster Minimum Cost Flow in Unit-Capacity Graphs

We present an $m^{4/3+o(1)} \log W$-time algorithm for solving the minimum cost flow problem in graphs with unit capacity, where $W$ is the maximum absolute value of any edge weight. For sparse graphs, this improves over the best known running time for this problem and, by well-known reductions, also implies improved running times for the shortest path problem with negative weights, minimum cost bipartite $b$-matching when $\|b\|_1 = O(m)$, and recovers the running time of the currently fastest algorithm for maximum flow in graphs with unit capacities, due to Liu and Sidford (FOCS, 2020).

Our algorithm relies on developing an interior point method–based framework which acts on the space of circulations in the underlying graph. From the combinatorial point of view, this framework can be viewed as iteratively improving the cost of a suboptimal solution by pushing flow around some carefully chosen circulations.
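As a point of reference, the oldest combinatorial instance of this idea is negative-cycle canceling: repeatedly push flow around a negative-cost residual cycle until none remains. The sketch below (assuming unit capacities, so pushing along a residual edge simply reverses it) is this classical baseline, not the paper's interior point method.

```python
def find_negative_cycle(n, edges):
    """Bellman-Ford from a virtual source (all distances start at 0).
    edges: list of (u, v, cost). Returns a negative cycle as a list of
    edge indices, or None if no negative cycle exists."""
    dist, parent = [0.0] * n, [-1] * n
    x = -1
    for _ in range(n):
        x = -1
        for idx, (u, v, c) in enumerate(edges):
            if dist[u] + c < dist[v]:
                dist[v], parent[v] = dist[u] + c, idx
                x = v
    if x == -1:                        # no relaxation in the n-th pass
        return None
    for _ in range(n):                 # walk back until we are inside the cycle
        x = edges[parent[x]][0]
    cycle, v = [], x
    while True:
        idx = parent[v]
        cycle.append(idx)
        v = edges[idx][0]
        if v == x:
            return cycle

def cancel_negative_cycles(n, edges):
    """Push one unit of flow around negative-cost residual cycles until none
    remain. With unit capacities, pushing along a residual edge reverses it
    and negates its cost; each cancellation strictly lowers the flow cost,
    so with integer costs this terminates at a minimum-cost flow."""
    while True:
        cycle = find_negative_cycle(n, edges)
        if cycle is None:
            return edges
        for idx in cycle:
            u, v, c = edges[idx]
            edges[idx] = (v, u, -c)

# Residual graph of some unit-capacity flow, as (u, v, cost) triples:
print(cancel_negative_cycles(3, [(0, 1, 1), (1, 2, -1), (2, 0, -1)]))
```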

This talk will be self-contained and will assume no prior background in optimization. It is based on https://arxiv.org/abs/2003.04863, which appears in FOCS 2020.

Algorithms and complexity
Thursday October 1, 2020, 2PM, Salle 3052
Vera Traub (University of Bonn) An improved approximation algorithm for ATSP

In a recent breakthrough, Svensson, Tarnawski, and Végh gave the first constant-factor approximation algorithm for the asymmetric traveling salesman problem (ATSP). In this work we revisit their algorithm. While following their overall framework, we improve on each part of it.

Svensson, Tarnawski, and Végh perform several steps of reducing ATSP to more and more structured instances. We avoid one of their reduction steps (to irreducible instances) and thus obtain a simpler and much better reduction to vertebrate pairs. Moreover, we show that a slight variant of their algorithm for vertebrate pairs has a much smaller approximation ratio.

Overall we improve the approximation ratio from 506 to 22 + ε for any ε > 0. We also improve the upper bound on the integrality ratio of the standard LP relaxation from 319 to 22.

This is joint work with Jens Vygen.

Live webcast of IGAFIT Algorithmic Colloquium

Algorithms and complexity
Friday September 25, 2020, 11AM, Salle 3052
Makrand Sinha (CWI) k-Forrelation Optimally Separates Quantum and Classical Query Complexity

The talk is about the following paper, joint with Nikhil Bansal: https://arxiv.org/abs/2008.07003

Live webcast of CWI/QuSoft seminar

Algorithms and complexity
Friday September 18, 2020, 11AM, Salle 3052
Arjan Cornelissen (QuSoft, U of Amsterdam) A self-contained, simplified analysis of span programs algorithms

Over the years, many query-efficient quantum algorithms have been constructed through the span program framework. Despite its successes, the analysis behind this framework is complicated, and its full potential is not yet understood. In our current research, we attempt to clean up the framework and simplify its analysis, so as to better expose the mathematical structure and more clearly understand how it is exploited. In this talk, we showcase the simplified analysis and share some of the new insights we have gained along the way. No prior knowledge of span programs is required.

Live webcast of CWI/QuSoft seminar

Algorithms and complexity
Tuesday June 30, 2020, 7PM, Online
Amos Korman (IRIF) SIROCCO Prize for Innovation in Distributed Computing

Live broadcast of the 2020 SIROCCO Prize for Innovation in Distributed Computing, awarded to Amos Korman. Amos will give a one-hour talk on his research on ants and mathematics. Register at https://sirocco2020.cs.uni-paderborn.de/registration.html to receive the Zoom link.

Algorithms and complexity
Tuesday June 9, 2020, 2:30PM, Online
Francesco d'Amore (INRIA) On the Search Efficiency of Parallel Lévy Walks in ${\mathbb Z}^2$

Motivated by the Lévy flight foraging hypothesis – the premise that the movement of various animal species searching for food resembles a Lévy walk – we study the hitting time of parallel Lévy walks on the infinite 2-dimensional grid. Lévy walks are characterized by a parameter $\alpha \in (1,+\infty)$, which is the exponent of the power-law distribution of the time intervals at which the moving agent randomly changes direction. In the setting we consider, called the ANTS problem (Feinerman et al. PODC 2012), $k$ independent discrete-time Lévy walks start simultaneously at the origin, and we are interested in the time $h_{k,\ell}$ before some walk visits a given target node on the grid, at distance $\ell$ from the origin.

In this setting, we provide a comprehensive analysis of the efficiency of Lévy walks for the complete range of the exponent $\alpha$. For any choice of $\alpha$, we observe that the total work until the target is visited, i.e., the product $k \cdot h_{k,\ell}$, is at least $\Omega(\ell^2)$ with constant probability. Our main result is that the right choice for $\alpha$ to get optimal work varies between $1$ and $3$, as a function of the number $k$ of available agents. For instance, when $k = \tilde \Theta(\ell^{1-\epsilon})$, for some positive constant $\epsilon < 1$, then the unique optimal setting for $\alpha$ lies in the super-diffusive regime $(2,3)$, namely, $\alpha = 2+\epsilon$. Our results should be contrasted with various previous works in the continuous time-space setting showing that the exponent $\alpha = 2$ is optimal for a wide range of related search problems on the plane.
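Below is a minimal simulation sketch of the model, under a natural discrete reading of the abstract (axis-aligned runs whose lengths follow a power law with exponent $\alpha$, sampled by inverse transform); the parameters and target are arbitrary choices for illustration.

```python
import random

def levy_hitting_time(alpha, target, max_steps=10**6, rng=random):
    """One discrete Levy walk on Z^2: choose a uniform axis direction, walk
    straight for a power-law number of steps (pmf ~ t^(-alpha)), repeat.
    Returns the first time `target` is visited, or None within the budget."""
    x = y = t = 0
    while t < max_steps:
        dx, dy = rng.choice([(1, 0), (-1, 0), (0, 1), (0, -1)])
        u = 1.0 - rng.random()                   # uniform in (0, 1]
        run = int(u ** (-1.0 / (alpha - 1.0)))   # Pareto-tail run length >= 1
        for _ in range(run):
            x, y, t = x + dx, y + dy, t + 1
            if (x, y) == target:
                return t
            if t >= max_steps:
                return None
    return None

# k parallel walks: the hitting time h_{k,l} is the minimum over k runs.
times = [levy_hitting_time(2.5, (5, 3)) for _ in range(8)]
print(min((t for t in times if t is not None), default=None))
```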

The paper is available at https://hal.archives-ouvertes.fr/hal-02530253

Algorithms and complexity
Tuesday June 2, 2020, 11AM, Online
Weiqiang Wen (IRISA) Learning With Errors and Extrapolated Dihedral Cosets

The hardness of the learning with errors (LWE) problem is one of the most fruitful resources of modern cryptography. In particular, it is one of the most prominent candidates for secure post-quantum cryptography. Understanding its quantum complexity is therefore an important goal. We show that under quantum polynomial time reductions, LWE is equivalent to a relaxed version of the dihedral coset problem (DCP), which we call extrapolated DCP (eDCP). The extent of extrapolation varies with the LWE noise rate. By considering different extents of extrapolation, our result generalizes Regev's famous proof that if DCP is in BQP (quantum poly-time) then so is LWE (FOCS'02). We also discuss a connection between eDCP and Childs and van Dam's algorithm for generalized hidden shift problems (SODA'07). Our result implies that a BQP solution for LWE might not require the full power of solving DCP, but rather only a solution for its relaxed version, eDCP, which could be easier.

Algorithms and complexity
Tuesday May 26, 2020, 11AM, Online
Édouard Bonnet (ENS Lyon) Twin-width

Inspired by an invariant defined on permutations by Guillemot and Marx [SODA '14], we introduce the notion of twin-width on graphs and on matrices. Proper minor-closed classes, bounded rank-width graphs, $K_t$-free unit ball graphs, posets with antichains of bounded size, and proper subclasses of permutation graphs all have bounded twin-width. On all these classes we will see how to compute in polynomial time a sequence of $d$-contractions, a witness that the twin-width is at most $d$. FO model checking, that is, deciding whether a given first-order formula $\phi$ evaluates to true on a given binary structure G on a domain D, happens to be FPT in $|\phi|$ on classes of bounded twin-width, provided the witness is given. More precisely, given a $d$-contraction sequence for G, our algorithm runs in time $f(d,|\phi|) \cdot |D|$ where $f$ is a computable but non-elementary function. We will also see that bounded twin-width is preserved by FO interpretations and transductions. This unifies and significantly extends the knowledge on fixed-parameter tractability of FO model checking on non-monotone classes, such as the FPT algorithm on bounded-width posets by Gajarsky et al. [FOCS '15]. Finally, we mention a curious connection between bounded twin-width and small classes.
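To make the witness object concrete, here is a small checker for contraction sequences, assuming the standard trigraph semantics in which a pair of parts is red (an error edge) precisely when it is neither fully adjacent nor fully non-adjacent in the original graph; a brute-force illustration, not the paper's machinery.

```python
def is_d_contraction_sequence(adj, merges, d):
    """Check a candidate d-contraction sequence. adj: dict vertex -> set of
    neighbours; merges: successive pairs of parts (frozensets of original
    vertices) to contract. After each contraction, every part must have red
    degree at most d."""
    def red(p, q):   # red iff p x q is neither complete nor empty in adj
        e = sum(1 for u in p for v in q if v in adj[u])
        return 0 < e < len(p) * len(q)

    parts = {frozenset([v]) for v in adj}
    for a, b in merges:
        parts = (parts - {a, b}) | {a | b}
        for p in parts:
            if sum(red(p, q) for q in parts if q != p) > d:
                return False
    return True

# A triangle has twin-width 0: contract two twins, then the rest.
tri = {1: {2, 3}, 2: {1, 3}, 3: {1, 2}}
seq = [(frozenset([1]), frozenset([2])), (frozenset([1, 2]), frozenset([3]))]
print(is_d_contraction_sequence(tri, seq, 0))   # True
```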

Joint work with Colin Geniet, Eun Jung Kim, Stéphan Thomassé, and Rémi Watrigant.

Algorithms and complexity
Tuesday May 19, 2020, 11AM, Online
Vincent Cohen-Addad (Google Zürich) On the Parameterized Complexity of Clustering

Clustering problems are fundamental computational problems. On the one hand, they are at the heart of various data analysis, bioinformatics and machine learning approaches; on the other hand, they can be seen as variants of 'set cover' problems (involving, e.g., metrics or graphs with special structure) and are thus very basic problems.

For many applications, finding efficient fixed-parameter algorithms is a very natural approach. Popular choices of parameters are either parameters of the problem itself (e.g. the number of clusters) or of the underlying structure of the input (e.g. the dimension in the metric case). We will focus on classic metric clustering objectives such as k-median and k-means, review recent results on the fixed-parameter complexity of these problems, and discuss the most important open problems.

Algorithms and complexity
Tuesday April 21, 2020, 11AM, Online
Yihui Quek (Stanford) Robust Quantum Minimum Finding with an Application to Hypothesis Selection

Finding the minimum in a well-ordered list of length N ('minimum finding') is a critical subroutine in fields ranging from optimisation to finance. Following the 1996 quantum algorithm of Dürr and Høyer exhibiting a quadratic speedup for this problem, we ask if this speedup is preserved when the pairwise comparator between the elements is imprecise, or noisy in the following sense: given two elements to compare, if their values differ by at least α, then the comparison will be made correctly; if their values are closer than α, the outcome of the comparison is not subject to any guarantees. We answer this question in the affirmative, demonstrating a quantum algorithm for minimum finding that is robust to such noise and preserves the quadratic speedup (to highest order) of the noiseless case, while outputting a good approximation to the minimum with high probability. We demonstrate an application of this algorithm to the problem of hypothesis selection with N hypotheses, where the role of the noisy comparator is played by a statistical primitive known as the Scheffé test, which compares two hypotheses' ℓ1 distances to an unknown distribution.
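The noise model is easy to state in code. The sketch below (a classical illustration only; the quantum algorithm is not reproduced here) shows why robustness is nontrivial: with adversarial answers on close pairs, a naive linear scan can drift arbitrarily far from the true minimum.

```python
import random

def noisy_compare(x, y, alpha, close_answer=None):
    """The talk's comparator model: returns whether x < y, guaranteed correct
    only when |x - y| >= alpha; closer pairs get an arbitrary answer."""
    if abs(x - y) >= alpha:
        return x < y
    return close_answer(x, y) if close_answer else random.random() < 0.5

def naive_min(values, alpha, close_answer=None):
    """Linear scan with the noisy comparator; NOT robust, as shown below."""
    best = values[0]
    for v in values[1:]:
        if noisy_compare(v, best, alpha, close_answer):
            best = v
    return best

# Adversarial answers on close pairs make the running minimum drift upward:
alpha = 1.0
vals = [0.9 * alpha * i for i in range(20)]   # consecutive gaps below alpha
print(naive_min(vals, alpha, close_answer=lambda x, y: True))  # ~17.1, not 0.0
```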

Algorithms and complexity
Friday April 10, 2020, 3PM, Online
André Schrottenloher, Yixin Shen (INRIA, IRIF) Improved Classical and Quantum Algorithms for Subset-Sum

We present new classical and quantum algorithms for solving random hard instances of the subset-sum problem, in which we are given n integers of n bits each and try to find a subset of them that sums to a given target. This classical NP-complete problem has several applications in cryptography and underlies the security of some proposed post-quantum cryptosystems. At EUROCRYPT 2010, Howgrave-Graham and Joux (HGJ) introduced the representation technique and presented an algorithm running in time $\tilde{O}(2^{0.337 n})$. This asymptotic time was improved by Becker, Coron, Joux (BCJ) at EUROCRYPT 2011. We show how to improve this further. We then move to the context of quantum algorithms. The two previous quantum speedups in the literature are given by Bernstein, Jeffery, Lange and Meurer (PQCRYPTO 2013) and Helm and May (TQC 2018), which are respectively quantum versions of HGJ and BCJ. They both rely on the framework of quantum walks, use exponential quantum memory with quantum random-access and require an unproven conjecture on quantum walk updates. We devise a new algorithm, using quantum search only, that achieves the first quantum speedup in the model of classical memory with quantum random access. Next, we study improvements for the quantum walks. We show how to avoid the quantum walk conjecture and give a better quantum walk time complexity for subset-sum.
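For orientation, the textbook baseline that these works improve upon on random instances is Horowitz–Sahni meet-in-the-middle, running in time and space roughly $2^{n/2}$; here is a compact sketch (illustrative, not the representation-technique algorithm):

```python
def subset_sum_mitm(nums, target):
    """Classical meet-in-the-middle baseline (Horowitz-Sahni), ~O(2^(n/2)):
    split in half, enumerate all subset sums of each side, match across."""
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def all_sums(arr):          # subset sum -> bitmask achieving it
        return {sum(x for i, x in enumerate(arr) if mask >> i & 1): mask
                for mask in range(1 << len(arr))}

    right_sums = all_sums(right)
    for s, lmask in all_sums(left).items():
        rmask = right_sums.get(target - s)
        if rmask is not None:
            return ([x for i, x in enumerate(left) if lmask >> i & 1] +
                    [x for i, x in enumerate(right) if rmask >> i & 1])
    return None

print(subset_sum_mitm([3, 34, 4, 12, 5, 2], 9))   # e.g. [4, 5]
```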

Joint work with Xavier Bonnetain and Rémi Bricout.

Algorithms and complexity
Tuesday March 31, 2020, 2:30PM, Online
Vincent Jugé (LIGM) Adaptive ShiversSort - A new sorting algorithm

I will present a sorting algorithm called adaptive ShiversSort. This recently developed algorithm exploits the existence of already sorted subarrays (runs) to sort arrays faster. I will focus on the links between this algorithm and TimSort, the standard sorting algorithm in Python and Java. I will also show that the complexity of adaptive ShiversSort, in terms of the number of comparisons performed, is optimal up to an additive linear term.
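Here is a compact sketch of run-based merge sorting in this style; the merge rule is my paraphrase of adaptive ShiversSort's level rule (the level of a run being the floor of the base-2 logarithm of its length), so treat it as an illustration rather than the paper's exact algorithm.

```python
def adaptive_shivers_sort(a):
    """Sketch of run-based merge sorting in the style of adaptive ShiversSort.
    Maximal nondecreasing runs are kept on a stack; merge the two runs below
    the top while the level of the third-from-top run is at most the max
    level of the two runs above it (bit_length gives the same comparisons
    as floor(log2))."""
    def merge(x, y):
        out, i, j = [], 0, 0
        while i < len(x) and j < len(y):
            if x[i] <= y[j]:
                out.append(x[i]); i += 1
            else:
                out.append(y[j]); j += 1
        return out + x[i:] + y[j:]

    stack, i = [], 0
    while i < len(a):
        j = i + 1
        while j < len(a) and a[j - 1] <= a[j]:
            j += 1
        stack.append(a[i:j])                      # next nondecreasing run
        i = j
        while (len(stack) >= 3 and
               len(stack[-3]).bit_length() <=
               max(len(stack[-2]).bit_length(), len(stack[-1]).bit_length())):
            stack[-3:-1] = [merge(stack[-3], stack[-2])]  # merge below top
    while len(stack) > 1:
        stack[-2:] = [merge(stack[-2], stack[-1])]
    return stack[0] if stack else []

print(adaptive_shivers_sort([3, 4, 5, 1, 2, 8, 9, 0]))
```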

Algorithms and complexity
Thursday February 20, 2020, 10AM, Salle 3052
Alantha Newman (Université Grenoble Alpes) Approximation algorithms based on LP and SDP rounding (Lecture 4/4)

Applications: Max-k-cut, graph coloring, theta function and perfect graphs, unique games.

Additional information: https://pagesperso.g-scop.grenoble-inp.fr/~newmana/AlgsGTMiniCourse2020/

Algorithms and complexity
Tuesday February 18, 2020, 10AM, Salle 3052
Alantha Newman (Université Grenoble Alpes) Approximation algorithms based on LP and SDP rounding (Lecture 3/4)

SDP Rounding (basic concepts such as problem formulations, normal distribution, random hyperplane rounding).
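As a taste of the lecture's topic, here is the core of random hyperplane rounding (Goemans–Williamson style) in a few lines of Python, assuming the SDP has already been solved and its unit vectors are given:

```python
import random

def hyperplane_round(vectors):
    """Random hyperplane rounding: given unit vectors v_i (e.g. an SDP
    solution for Max-Cut), draw a Gaussian g and assign each i the sign of
    <g, v_i>. An edge (i, j) is cut with probability arccos(<v_i, v_j>)/pi."""
    g = [random.gauss(0.0, 1.0) for _ in vectors[0]]
    return [1 if sum(gk * vk for gk, vk in zip(g, v)) >= 0 else -1
            for v in vectors]

# Antipodal vectors always separate; orthogonal ones do so half the time.
print(hyperplane_round([(1.0, 0.0), (-1.0, 0.0), (0.0, 1.0)]))
```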

Additional information: https://pagesperso.g-scop.grenoble-inp.fr/~newmana/AlgsGTMiniCourse2020/

Algorithms and complexity
Friday February 14, 2020, 10AM, Salle 3052
Alantha Newman (Université Grenoble Alpes) Approximation algorithms based on LP and SDP rounding (Lecture 2/4)

Applications: Dominating sets in digraphs, kernels, semi-kernels, arc colored digraphs, asymmetric k-center, triangle transversals.

Additional information: https://pagesperso.g-scop.grenoble-inp.fr/~newmana/AlgsGTMiniCourse2020/

Algorithms and complexity
Thursday February 13, 2020, 10AM, Salle 3052
Alantha Newman (Université Grenoble Alpes) Approximation algorithms based on LP and SDP rounding (Lecture 1/4)

LP Randomized Rounding (basic concepts such as extreme points, duality, complementary slackness).
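As a taste of the lecture's topic, here is the textbook randomized rounding of a fractional set cover; the ln n boosting factor and retry loop follow the standard presentation, assuming the fractional solution is feasible:

```python
import math, random

def round_set_cover(sets, frac_x, universe):
    """Classic randomized rounding of a fractional set cover: pick each set S
    independently with probability min(1, x_S * ln n); repeat until the picked
    sets cover the universe. Gives an O(log n) approximation in expectation,
    assuming frac_x is a feasible fractional cover."""
    boost = math.log(len(universe)) if len(universe) > 1 else 1.0
    while True:
        picked = [S for S, x in zip(sets, frac_x)
                  if random.random() < min(1.0, x * boost)]
        if picked and set().union(*picked) >= set(universe):
            return picked

sets = [{1, 2}, {2, 3}, {3, 4}, {1, 4}]
print(round_set_cover(sets, [0.5, 0.5, 0.5, 0.5], {1, 2, 3, 4}))
```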

Additional information: https://pagesperso.g-scop.grenoble-inp.fr/~newmana/AlgsGTMiniCourse2020/

Algorithms and complexity
Tuesday February 11, 2020, 11AM, Salle 1007
Simon Apers (CWI) Quantum Speedup for Graph Sparsification, Cut Approximation and Laplacian Solving

Graph sparsification is an important component of a large number of algorithms, ranging from approximation algorithms for cut problems to Laplacian system solvers. In its strongest form, “spectral sparsification” reduces the number of edges to near-linear in the number of nodes, while approximately preserving the cut and spectral structure of the graph. The breakthrough work by Benczúr and Karger (STOC'96) and Spielman and Teng (STOC'04) showed that sparsification can be done optimally in time near-linear in the number of edges of the original graph.

In this work we show that quantum algorithms allow us to speed up spectral sparsification, and thereby many of the derived algorithms. To this end we build on a string of existing results, most notably sparsification algorithms by Spielman and Srivastava (STOC'08) and Koutis and Xu (TOPC'16), a spanner construction by Thorup and Zwick (STOC'01), a single-source shortest-paths quantum algorithm by Dürr et al. (ICALP'04) and an efficient k-wise independent hash construction by Christiani, Pagh and Thorup (STOC'15). Combining our sparsification algorithm with existing classical algorithms yields the first quantum speedup for approximating the max cut, min cut, min st-cut, sparsest cut and balanced separator of a graph. Combining our algorithm with a classical Laplacian solver, we demonstrate a similar speedup for Laplacian solving, for approximating effective resistances, cover times and eigenvalues of the Laplacian, and for spectral clustering.
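For context, here is a sketch of the classical Spielman–Srivastava subroutine that this work accelerates, with effective resistances computed naively via the Laplacian pseudoinverse (the constants and sample count are indicative only):

```python
import numpy as np

def spectral_sparsify(n, edges, eps=0.5, seed=0):
    """Sketch of Spielman-Srivastava sparsification: sample edges with
    probability proportional to weight x effective resistance, reweighted so
    the sampled Laplacian approximates the original in expectation.
    edges: list of (u, v, weight) with 0 <= u, v < n."""
    rng = np.random.default_rng(seed)
    L = np.zeros((n, n))
    for u, v, w in edges:
        L[u, u] += w; L[v, v] += w
        L[u, v] -= w; L[v, u] -= w
    Lp = np.linalg.pinv(L)                     # Laplacian pseudoinverse
    # effective resistance R_uv = (e_u - e_v)^T L^+ (e_u - e_v)
    resist = np.array([Lp[u, u] + Lp[v, v] - 2 * Lp[u, v]
                       for u, v, _ in edges])
    probs = np.array([w for _, _, w in edges]) * resist
    probs /= probs.sum()
    q = int(9 * n * np.log(n) / eps ** 2) + 1  # number of samples
    sparse = {}
    for i in rng.choice(len(edges), size=q, p=probs):
        u, v, w = edges[i]
        sparse[(u, v)] = sparse.get((u, v), 0.0) + w / (q * probs[i])
    return [(u, v, w) for (u, v), w in sparse.items()]
```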

This is joint work with Ronald de Wolf.

Algorithms and complexity
Tuesday February 4, 2020, 11AM, Salle 1007
Jacques Dark (University of Warwick) Two-Party Lower Bounds for Graph Streaming Problems

In the turnstile graph streaming model we receive a sequence of edge insertions and deletions and we wish to store some small summary which can answer a question about the final graph. A few years ago, [Li, Nguyen, Woodruff 2014] and [Ai, Hu, Li, Woodruff 2016] explained why all known turnstile streaming algorithms can be described as "linear sketches" (random linear transformations) of the underlying data: any 1-pass turnstile streaming algorithm that works for all possible input streams can be simulated by a linear sketch with only logarithmic space overhead.
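To fix ideas, here is a toy linear sketch over the edge-indicator vector; the random sign matrix is an arbitrary illustrative choice, the point being only that updates are column additions, so the summary depends linearly on the final graph.

```python
import random

class LinearSketch:
    """A turnstile algorithm as a linear sketch: we store only S = A * x for
    a fixed random matrix A, where x is the edge-indicator vector; inserting
    or deleting edge e just adds or subtracts column A[:, e]."""
    def __init__(self, n_edges, rows, seed=42):
        rng = random.Random(seed)
        self.A = [[rng.choice((-1, 1)) for _ in range(n_edges)]
                  for _ in range(rows)]
        self.S = [0] * rows

    def update(self, edge_id, delta):          # delta: +1 insert, -1 delete
        for r, row in enumerate(self.A):
            self.S[r] += delta * row[edge_id]

sk = LinearSketch(n_edges=6, rows=3)
sk.update(2, +1); sk.update(5, +1); sk.update(2, -1)
print(sk.S)   # depends only on the final graph, not the update order
```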

This simulation result is a very powerful tool which immediately transformed every sketching lower bound into a turnstile streaming lower bound, such as the tight bound for approximate maximum matching of [Assadi, Khanna, Li, Yaroslavtsev 2016]. However, there are many interesting input constraints we can consider which are not compatible with the sketching reduction. What happens if we enforce that the input graph must always be simple (contain no repeated edges)? What happens when we bound the number of deletions?

In this talk, I will present work towards answering these questions. We introduce a new elementary communication problem, finding a hidden bit in a partially revealed matrix, and show how tight two-party communication bounds for the maximum matching and minimum vertex cover problems can be achieved by surprisingly simple reductions from this problem. In particular, this gives us tight turnstile streaming bounds even for always-simple inputs or streams of length at most n^2. I will conclude with a conjecture for the bounded-deletions case and a discussion of the remaining challenges.

Joint work with Christian Konrad.

Algorithms and complexity
Tuesday January 28, 2020, 11AM, Salle 1007
Spyros Angelopoulos (LIP6) Online Computation with Untrusted Advice

The advice model of online computation captures settings in which an online algorithm is given some partial information about the request sequence. This paradigm makes it possible to establish tradeoffs between the amount of this additional information and the performance of the online algorithm. However, unlike real life, in which advice is a recommendation that we can choose to follow or ignore based on its trustworthiness, in the current advice model the online algorithm typically treats it as infallible. This means that if the advice is corrupt or, worse, comes from a malicious source, the algorithm may perform poorly. In this work, we study online computation in a setting in which the advice is provided by an untrusted source. Our objective is to quantify the impact of untrusted advice so as to design and analyze online algorithms that are resilient and perform well even when the advice is generated in a malicious, adversarial manner. We show how the new paradigm can be applied to well-studied online problems such as ski rental, online bidding, bin packing, and list update.
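To illustrate the flavor of such guarantees on ski rental, here is one simple buy-day scheme in the spirit of this line of work (an illustrative scheme with a hypothetical confidence parameter lam, not necessarily the algorithm from the paper): good advice yields near-optimal cost, while even adversarial advice costs at most a bounded factor.

```python
import math

def ski_rental_with_advice(b, advice_buy, lam, true_days):
    """Ski rental with untrusted advice (illustrative). b = purchase price,
    renting costs 1/day. If the advice predicts a long season we buy early
    (day ceil(lam*b)); otherwise we still hedge and buy at day ceil(b/lam),
    so a malicious advisor cannot hurt us too much. lam in (0, 1] trades
    consistency (good advice) against robustness (bad advice)."""
    buy_day = math.ceil(lam * b) if advice_buy else math.ceil(b / lam)
    if true_days < buy_day:
        return true_days               # rented every day
    return (buy_day - 1) + b           # rented, then bought

# Season lasts 100 days, buying costs 10, so OPT = 10:
print(ski_rental_with_advice(b=10, advice_buy=True, lam=0.5, true_days=100))   # 14
print(ski_rental_with_advice(b=10, advice_buy=False, lam=0.5, true_days=100))  # 29
```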

Joint work with Christoph Dürr, Shendan Jin, Shahin Kamali and Marc Renault.

Algorithms and complexity
Thursday January 23, 2020, 11AM, Salle 1007
Moni Naor (Weizmann Institute) Instance Complexity and Unlabeled Certificates in the Decision Tree Model

Suppose that you want to check whether a graph satisfies some (graph) property. The goal is to minimize the number of queries to the adjacency matrix. You are given as an “untrusted hint” an isomorphic copy of the graph. You must always be correct, but the number of queries made is only measured when the hint is correct. Do such unlabeled certificates help? In the worst case? Per instance?

In this talk I will discuss labeled and unlabeled certificates, in particular in the context of "instance optimality". This is a measure of the quality of an algorithm under which its performance is compared to that of other algorithms on each individual input. This is in sharp contrast to worst-case and average-case complexity measures, where the performance is compared either on the worst input or on an average one, respectively.

Joint work with Tomer Grossman and Ilan Komargodski.

Algorithms and complexity
Tuesday January 21, 2020, 11AM, Salle 1007
Tatiana Starikovskaya (ENS) Sliding window property testing for regular languages

We discuss the problem of recognizing regular languages in a variant of the streaming model of computation, called the sliding window model. In this model, we are given the size n of the sliding window and a stream of symbols. At each time instant, we must decide whether the suffix of length n of the current stream ("the window") belongs to a given regular language.

In this work, we make an important step forward and combine the sliding window model with the property testing setting, which results in ultra-efficient algorithms for all regular languages. Informally, a sliding window property tester must accept the active window if it belongs to the language and reject it if it is far from the language. We consider deterministic and randomized sliding window property testers with one-sided and two-sided errors. In particular, we show that for any regular language, there is a deterministic sliding window property tester that uses logarithmic space and a randomized sliding window property tester with two-sided error that uses constant space.
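For contrast, the trivial exact solution stores the whole window and replays the DFA at each step, using O(n) space; the testers in the talk replace this with logarithmic or even constant space at the price of approximation. A sketch of that baseline (the DFA encoding is an illustrative choice):

```python
from collections import deque

def sliding_window_dfa(stream, n, delta, q0, accepting):
    """Naive exact baseline (O(n) space): keep the last n symbols and rerun
    the DFA on the window at each step, yielding one accept/reject per symbol.
    delta: dict state -> dict symbol -> state."""
    window = deque(maxlen=n)
    for symbol in stream:
        window.append(symbol)
        state = q0
        for s in window:
            state = delta[state][s]
        yield state in accepting

# Language "ends with 'a'" over {a, b}, window size 2:
delta = {0: {'a': 1, 'b': 0}, 1: {'a': 1, 'b': 0}}
print(list(sliding_window_dfa("abba", n=2, delta=delta, q0=0, accepting={1})))
# [True, False, False, True]
```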

The talk is based on joint work with Ganardi, Hucke, and Lohrey, published at ISAAC 2019.

Algorithms and complexity
Tuesday January 14, 2020, 11AM, Salle 1007
Olivier Tercieux (Paris School of Economics) Unpaired Kidney Exchange: Overcoming Double Coincidence of Wants without Money

In this talk, I'll present the basics of the kidney exchange problem and a brief overview of practices around the world. I'll then describe a new matching algorithm, Unpaired kidney exchange, and study its properties in a dynamic matching model with heterogeneous agents. We will show that the average waiting time under the Unpaired algorithm is close to optimal, and substantially less than under the standard – and commonly used – pairwise and chain exchange algorithms. Using a rich dataset of the kidney patients in France, counterfactual simulations show that the Unpaired algorithm can match nearly 57% of the patients, with an average waiting time of 424 days (state-of-the-art algorithms match about 31% with an average waiting time of 675 days or more). The optimal algorithm performs only slightly better: it matches 58% of the patients and leads to an average waiting time of 410 days. The Unpaired algorithm confronts two incentive-related practical challenges. We address those challenges via a practical version of the Unpaired algorithm that employs kidneys from the deceased-donor waiting list. The practical version can match nearly 87% of patient-donor pairs, while reducing the average waiting time to about 141 days. Finally, I'll conclude by discussing the relationship between this algorithm and the current reform of the French kidney exchange program.

Most of the talk is based on https://drive.google.com/file/d/1ZxPC3sc9HvrlG7JUtqCdyZ_BSHttkGhN/view. I'll also mention https://www.ipp.eu/publication/juin-2019-perspectives-sur-le-programme-de-dons-croises-de-reins-en-france/.