Day, hour and place

Tuesday / Wednesday at 11am, room 3052

The live broadcasts are on Zoom.
The video recordings are on YouTube.


Algorithms and complexity
Tuesday December 17, 2024, 11AM, Salle 3052
Team Event AlgoComp lightning talks ⚡️



Year 2024

Algorithms and complexity
Wednesday December 11, 2024, 11AM, Salle 3071
Akin Ünal (ETH Zürich) On the Soundness of Algebraic Attacks against Code-based Assumptions

Algorithms and complexity
Tuesday December 10, 2024, 11AM, Salle 3052
Orr Paradise (UC Berkeley) Models that prove their own correctness

This talk introduces Self-Proving models, a new class of models that formally prove the correctness of their outputs via an Interactive Proof system. After reviewing some related literature, I will formally define Self-Proving models and their per-input (worst-case) guarantees. I will then present algorithms for learning these models and explain how the complexity of the proof system affects the complexity of the learning algorithms. Finally, I will show experiments where Self-Proving models are trained to compute the Greatest Common Divisor of two integers, and to prove the correctness of their results to a simple verifier.

Joint work with Noga Amit, Shafi Goldwasser, and Guy N. Rothblum.
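
As a toy illustration of the flavor of such guarantees (not the protocol from the talk, and with names of our choosing): a prover computing gcd(a, b) can attach Bézout coefficients as a certificate, and a simple verifier checks correctness with a few arithmetic operations, since any common divisor of a and b must divide a*x + b*y.

```python
def prove_gcd(a, b):
    """Prover: extended Euclid returns (g, x, y) with a*x + b*y == g."""
    if b == 0:
        return a, 1, 0
    g, x, y = prove_gcd(b, a % b)
    return g, y, x - (a // b) * y

def verify_gcd(a, b, g, x, y):
    """Verifier: if g > 0 divides both inputs and g == a*x + b*y, then every
    common divisor of a and b divides g, so g is exactly gcd(a, b)."""
    return g > 0 and a % g == 0 and b % g == 0 and a * x + b * y == g

g, x, y = prove_gcd(240, 46)
assert verify_gcd(240, 46, g, x, y)   # g == 2
```

The point mirrored here is that verification is far cheaper than recomputation: the verifier never runs Euclid's algorithm itself.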

Algorithms and complexity
Tuesday November 19, 2024, 11AM, Salle 3052
Philip Verduyn Lunel (LIP6) Quantum Position Verification Protocols: Security and Practical Considerations

In this talk we will give an overview of the area of position verification protocols, where the idea is to use an individual's geographical location as a cryptographic credential. In a position verification protocol, the limitations imposed by the speed of light, as described by special relativity, are used to verify that a party is at their claimed location. This task has been shown to be impossible to implement using only classical information. Initially, the hope was that using quantum information we could construct secure quantum position verification (QPV) protocols. However, it has been shown that any QPV protocol can be broken by attackers who use an amount of entanglement exponential in the input size.

Thus, unconditionally secure quantum position verification protocols do not exist. From a practical point of view, however, not all is lost: the exponential upper bound for a general attack is still astronomically large even for a relatively small input, so we can still hope for practically secure QPV protocols. This raises the problem of designing protocols that are secure in a practical setting. In this talk we will give an overview of the field, present some proposed protocols together with their advantages and drawbacks, and discuss the best known entanglement bounds.

Algorithms and complexity
Tuesday October 22, 2024, 11AM, Salle 3052
Adam Wesołowski (Royal Holloway) Advances in quantum algorithms for the shortest path problem

In this talk, we will delve into the quantum complexity of several graph problems: Single Pair Shortest Pathfinding (SPSP), Diameter, Radius, and Eccentricity. We will briefly review the electrical-network framework for graphs, discuss recent advances in quantum algorithms and lower bounds for these problems, and present a number of open problems. We give an algorithm in the adjacency array model that, under special conditions, solves the SPSP problem in $\tilde{O}(l\sqrt{m})$ steps, and with parallelization in $\tilde{O}(\sqrt{lm})$ steps, matching the upper bound on path detection up to polylog factors, where $l$ is the length of the shortest path between the considered vertices.

Algorithms and complexity
Tuesday October 15, 2024, 11AM, Salle 3052
Christoph Egger (IRIF) On Bounded Storage Key Agreement and One-Way Functions

We study key agreement in the bounded-storage model, where the participants and the adversary can use an a priori fixed bounded amount of space, and receive a large stream of data. While key agreement is known to exist unconditionally in this model (Cachin and Maurer, Crypto'97), there are strong lower bounds on the space complexity of the participants, round complexity, and communication complexity that unconditional protocols can achieve.

In this work, we explore how a minimal use of cryptographic assumptions can help circumvent these lower bounds. We obtain several contributions:

- Assuming one-way functions, we construct a one-round key agreement in the bounded-storage model, with arbitrary polynomial space gap between the participants and the adversary, and communication slightly larger than the adversarial storage. Additionally, our protocol can achieve everlasting security using a second streaming round.

- In the other direction, we show that one-way functions are *necessary* for key agreement in the bounded-storage model with large space gaps. We further extend our results to the setting of *fully-streaming* adversaries, and to the setting of key agreement with multiple streaming rounds.

Our results rely on a combination of information-theoretic arguments and technical ingredients such as pseudorandom generators for space-bounded computation, and a tight characterization of the space efficiency of known reductions between standard Minicrypt primitives (from distributional one-way functions to pseudorandom functions), which might be of independent interest.

Joint work with Chris Brzuska, Geoffroy Couteau and Willy Quach

Algorithms and complexity
Tuesday October 8, 2024, 11AM, Salle 3052
Julia Kastner (CWI, Amsterdam) Pairing-Free Blind Signatures from Standard Assumptions in the ROM

Blind signatures are a useful primitive for privacy-preserving applications such as electronic payments, e-voting, anonymous credentials, and more.

However, existing practical blind signature schemes based on standard assumptions require either pairings or lattices. We present the first practical construction of a round-optimal blind signature in the random oracle model based on standard assumptions without resorting to pairings or lattices. In particular, our construction is secure under the strong RSA assumption and DDH (in pairing-free groups). For our construction, we provide a NIZK-friendly signature based on strong RSA, and efficiently instantiate a variant of Fischlin's generic framework (CRYPTO'06). Our blind signature scheme has signatures of size 4.28 KB and communication cost 10.98 KB. Along the way, we develop techniques that might be of independent interest. In particular, we provide efficient *relaxed* range proofs for large ranges with subversion zero-knowledge, and compact commitments to elements of arbitrary groups.

Joint work with Michael Reichle and Ky Nguyen.

Algorithms and complexity
Tuesday October 1, 2024, 11AM, Salle 3052
Philips George John (NUS, Singapore) Distribution Learning Meets Graph Structure Sampling via Online Learning

The problem of PAC-learning high dimensional Bayesian networks (distributions) with bounded indegree is well-studied in the literature, especially with respect to KL divergence and total variation distance. While the sample complexity of the learning problem is well-understood and near-sample-optimal algorithms are known, the problem of efficiently (in polynomial time) learning constant-indegree Bayesian networks, even over binary variables, is still open except in special cases such as tree Bayes nets.

This work establishes a novel link between the problem of PAC-learning high-dimensional graphical models and the task of (efficient) counting and sampling of graph structures, using an online learning framework. We observe that if we apply the exponentially weighted average (EWA) or randomized weighted majority (RWM) forecasters on a sequence of samples from a distribution P using the log loss function, the average regret incurred by the forecaster's predictions can be used to bound the expected KL divergence between P and the predictions. Known regret bounds for EWA and RWM then yield new sample complexity bounds for learning Bayes nets. Moreover, these algorithms can be made computationally efficient for several interesting classes of Bayes nets. Specifically, we give a new sample-optimal and polynomial time learning algorithm with respect to trees of unknown structure and the first polynomial sample and time algorithm for learning with respect to Bayes nets over a given chordal skeleton.
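
The core observation can be sketched in a few lines. The toy below (our own illustrative code, with experts taken as a finite set of fully supported candidate distributions over a discrete domain) runs the EWA forecaster under log loss; with learning rate 1 this is exactly the Bayesian mixture, whose cumulative regret against the best expert is at most log(number of experts), which in turn bounds the averaged prediction's KL divergence from the source:

```python
import math

def ewa_regret(experts, samples):
    """EWA under log loss with learning rate 1 (the Bayesian mixture).
    Each expert is a dict mapping outcome -> probability (full support).
    Returns cumulative log-loss regret against the best single expert."""
    logw = [0.0] * len(experts)          # log-weights, uniform prior
    loss = 0.0
    for x in samples:
        m = max(logw)
        w = [math.exp(lw - m) for lw in logw]
        # forecaster predicts the weighted mixture of the experts
        p = sum(wi * e[x] for wi, e in zip(w, experts)) / sum(w)
        loss += -math.log(p)
        # multiplicative update: reweight each expert by its likelihood
        logw = [lw + math.log(e[x]) for lw, e in zip(logw, experts)]
    best = min(sum(-math.log(e[x]) for x in samples) for e in experts)
    return loss - best                   # guaranteed <= log(len(experts))
```

With two experts the regret never exceeds log 2 on any sequence, so the average regret, and hence the KL guarantee, decays as O(1/T) in the sequence length.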

Algorithms and complexity
Wednesday September 25, 2024, 11AM, Salle 3052
Sanjeev Khanna (University of Pennsylvania) A Faster Combinatorial Algorithm for Maximum Bipartite Matching

The maximum bipartite matching problem is among the most fundamental and well-studied problems in combinatorial optimization. A beautiful and celebrated combinatorial algorithm of Hopcroft and Karp (1973) shows that maximum bipartite matching can be solved in $O(m \sqrt{n})$ time on a graph with $n$ vertices and $m$ edges. For the case of very dense graphs, a fast matrix multiplication based approach gives a running time of $O(n^{2.371})$. These results represented the fastest known algorithms for the problem until 2013, when Madry introduced a new approach based on continuous techniques achieving much faster runtime in sparse graphs. This line of research has culminated in a spectacular recent breakthrough due to Chen et al. (2022) that gives an $m^{1+o(1)}$ time algorithm for maximum bipartite matching (and more generally, for min cost flows).

This raises a natural question: are continuous techniques essential to obtaining fast algorithms for the bipartite matching problem? In this talk, I will describe a new, purely combinatorial algorithm for bipartite matching, that runs in $\tilde{O}(m^{1/3}n^{5/3})$ time, and hence outperforms both Hopcroft-Karp and the fast matrix multiplication based algorithms on moderately dense graphs. I will also briefly discuss some subsequent work that further improves this runtime.

This is joint work with Julia Chuzhoy (TTIC).
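
For reference, the Hopcroft-Karp phase structure mentioned above can be sketched directly: each phase runs one BFS to layer the graph by shortest alternating paths from free left vertices, then DFS to extract a maximal set of vertex-disjoint augmenting paths. This is a standard textbook-style sketch of the 1973 algorithm, not the new combinatorial algorithm from the talk:

```python
from collections import deque

def hopcroft_karp(adj, n_left, n_right):
    """Maximum bipartite matching in O(m * sqrt(n)) phases overall.
    adj[u] lists the right-vertices adjacent to left-vertex u."""
    INF = float("inf")
    match_l = [-1] * n_left
    match_r = [-1] * n_right

    def bfs():
        # layer left vertices by alternating-path distance from free vertices
        dist = [INF] * n_left
        q = deque(u for u in range(n_left) if match_l[u] == -1)
        for u in q:
            dist[u] = 0
        found = False
        while q:
            u = q.popleft()
            for v in adj[u]:
                w = match_r[v]
                if w == -1:
                    found = True          # a free right vertex is reachable
                elif dist[w] == INF:
                    dist[w] = dist[u] + 1
                    q.append(w)
        return dist, found

    def dfs(u, dist):
        for v in adj[u]:
            w = match_r[v]
            if w == -1 or (dist[w] == dist[u] + 1 and dfs(w, dist)):
                match_l[u], match_r[v] = v, u
                return True
        dist[u] = INF                     # dead end: prune u for this phase
        return False

    matching = 0
    while True:
        dist, found = bfs()
        if not found:
            return matching               # no augmenting path: maximum reached
        for u in range(n_left):
            if match_l[u] == -1 and dfs(u, dist):
                matching += 1
```

Each phase augments along several disjoint paths at once, which is what yields the $\sqrt{n}$ phase bound.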

Algorithms and complexity
Tuesday September 17, 2024, 11AM, Salle 3052
Paul Hermouet (LIP6) Towards Unclonable Cryptography in the Plain Model

Properties of quantum mechanics have enabled the emergence of quantum cryptographic protocols achieving important goals which are proven to be impossible classically. Perhaps some of the most striking examples of such protocols are the unclonable primitives, which allow different cryptographic “objects” (such as keys, ciphertexts, signatures, programs, etc.) to be encoded in a quantum state in a manner that permits the “object” to be used but not copied. Recent works have shown how to construct some of these primitives under different assumptions and models.

In this talk, I will present some of these unclonable primitives, and discuss how to leverage coset states to construct them. I will subsequently elaborate on my recent efforts in constructing two of these primitives - copy-protection for point functions and unclonable encryption - in the plain model. Finally, if time allows, I will show how to construct coset-state-based primitives in a semi-quantum way, that is, without a quantum channel.

Based on joint works with Huy Vu and Céline Chevalier: “Semi-quantum Copy-Protection and More” (TCC 2023: https://eprint.iacr.org/2023/244), “Towards Unclonable Cryptography in the Plain Model” (preprint: https://eprint.iacr.org/2023/1825).

Algorithms and complexity
Tuesday September 10, 2024, 11AM, Salle 3052
Ashwin Nayak (IQC, University of Waterloo) State distinguishability and learning

A basic problem in quantum communication is that of recovering classical information encoded in quantum states. Several problems in learning theory may naturally be cast into this form. We will describe some problems of this nature, specifically the coupon collector problem and its variants. We will present more efficient algorithms as well as lower bounds for these problems, along with implications for learning.

Based on work with Shima Bab Hadiashar and Pulkit Sinha.
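
As classical context for the coupon collector problem mentioned above: recovering a k-element set from uniform samples classically takes about k * H_k ~ k ln k draws in expectation, which a quick simulation confirms (the quantum variants studied in this line of work are about improving on such classical baselines; this snippet is purely illustrative):

```python
import random

def draws_to_collect(k, rng):
    """Uniform draws from a k-element set until every element has been seen."""
    seen, draws = set(), 0
    while len(seen) < k:
        seen.add(rng.randrange(k))
        draws += 1
    return draws

k, trials = 50, 2000
rng = random.Random(0)
avg = sum(draws_to_collect(k, rng) for _ in range(trials)) / trials
expected = k * sum(1 / i for i in range(1, k + 1))   # k * H_k ~ k ln k
```

For k = 50 the empirical average lands close to k * H_k (about 225 draws).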

Algorithms and complexity
Tuesday August 27, 2024, 11AM, Salle 3052
Arjan Cornelissen (IRIF) Quantum Sabotage Complexity

Given a Boolean function, the goal in the usual query model is to compute it on an unknown input, while minimizing the number of queries to it. One can also consider a “distinguishing” problem: given a 0-input x and a 1-input y, either all differing locations are replaced by a ∗, or all differing locations are replaced by †, and an algorithm’s goal is to identify which of these is the case while minimizing the number of queries. Ben-David and Kothari [ToC’18] introduced the notion of randomized sabotage complexity of a Boolean function to be the zero-error randomized query complexity of this problem. A natural follow-up question is to understand the quantum query complexity of this problem. In this talk, I elaborate on our recent paper “Quantum sabotage complexity” that initiates a systematic study of this.

Algorithms and complexity
Friday July 12, 2024, 2PM, Salle 3052
Nicholas Spooner (University of Warwick) An efficient quantum parallel repetition theorem and applications

A fundamental question in cryptography is whether the security of a “weak” cryptographic primitive can be generically amplified. We investigate this question in the quantum setting. We prove a general security amplification result for any quantum cryptographic primitive with a three-message security game, including (canonical) quantum commitments and quantum money.

Algorithms and complexity
Tuesday June 25, 2024, 2PM, Salle 3052
Atsuya Hasegawa (The University of Tokyo) On the Power of Quantum Distributed Proofs

Quantum nondeterministic distributed computing was recently introduced as dQMA (distributed quantum Merlin-Arthur) protocols by Fraigniaud, Le Gall, Nishimura and Paz (ITCS 2021). In dQMA protocols, with the help of quantum proofs and local communication, nodes on a network verify a global property of the network. Fraigniaud et al. showed that, when the network size is small, there exists an exponential separation in proof size between distributed classical and quantum verification protocols, for the equality problem, where the verifiers check if all the data owned by a subset of them are identical. In this paper, we further investigate and characterize the power of the dQMA protocols for various decision problems.

First, we give a more efficient dQMA protocol for the equality problem with a simpler analysis. We also show a quantum advantage for the equality problem on path networks still persists even when the network size is large, by considering “relay points” between extreme nodes. Second, we show that even in a general network, there exist efficient dQMA protocols for the ranking verification problem, the Hamming distance problem, and more problems that derive from efficient quantum one-way communication protocols. Third, in a line network, we construct an efficient dQMA protocol for a problem that has an efficient two-party QMA communication protocol. Finally, we obtain the first lower bounds on the proof and communication cost of dQMA protocols.

Based on joint work with Srijita Kundu and Harumichi Nishimura (https://arxiv.org/abs/2403.14108)

Algorithms and complexity
Monday June 24, 2024, 2PM, Salle 3052
Chandrima Kayal (ISI Kolkata) On higher multiplicity hyperplane and polynomial covers for symmetry-preserving subsets of the hypercube

Suppose we want to cover all the vertices of the n-dimensional Boolean cube using a minimum number of hyperplanes. Observe that this can be done using only two hyperplanes. Moreover, no single hyperplane can cover all the vertices. Now, what if we want to cover only a subset of the Boolean cube? For example, suppose we want to cover all the vertices except one, viz. the origin. One can observe that n hyperplanes are sufficient. But can we do better? The celebrated covering result by Alon and Füredi shows that at least n hyperplanes will be required to cover all the vertices of the n-dimensional cube leaving out exactly one vertex.

We shall discuss different versions of this covering problem, and give a generalization of Alon and Füredi’s covering result for any symmetric subset of the Boolean cube. Also, we shall show a strict separation between the size of a polynomial cover and a hyperplane cover.

This work was jointly done with Arijit Ghosh, Soumi Nandi, and S. Venkitesh.
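
The two facts in the opening paragraph are easy to check by brute force for small n: two parallel hyperplanes cover the whole cube, while the n hyperplanes x_1 + ... + x_n = k for k = 1, ..., n cover every vertex except the origin (and by Alon and Füredi, n is optimal for that task). A small sanity-check script, with our own naming:

```python
from itertools import product

def covers(planes, points):
    """Each plane is (coeffs, b), representing sum(c_i * x_i) == b."""
    return all(any(sum(c * x for c, x in zip(cs, p)) == b for cs, b in planes)
               for p in points)

n = 4
cube = list(product([0, 1], repeat=n))
# two hyperplanes x_1 = 0 and x_1 = 1 cover the whole cube
two = [((1,) + (0,) * (n - 1), 0), ((1,) + (0,) * (n - 1), 1)]
assert covers(two, cube)
# n hyperplanes x_1 + ... + x_n = k (k = 1..n) cover all vertices but the origin
weight_planes = [((1,) * n, k) for k in range(1, n + 1)]
assert covers(weight_planes, [p for p in cube if any(p)])
assert not covers(weight_planes, [(0,) * n])   # the origin stays uncovered
```

Each nonzero vertex of Hamming weight w lies on the plane x_1 + ... + x_n = w, which is why these n planes suffice.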

Algorithms and complexity
Tuesday June 18, 2024, 11AM, Salle 3052
Sebastian Zur (QuSoft, University of Amsterdam) Multidimensional Electrical Networks and their Application to Exponential Speedups for Graph Problems

Recently, Apers and Piddock [TQC ’23] strengthened the natural connection between quantum walks and electrical networks by considering Kirchhoff’s Law and Ohm’s Law. In this work, we develop the multidimensional electrical network by defining Kirchhoff’s Alternative Law and Ohm’s Alternative Law based on the novel multidimensional quantum walk framework by Jeffery and Zur [STOC ’23]. This multidimensional electrical network allows us to sample from the electrical flow obtained via a multidimensional quantum walk algorithm and achieve exponential quantum-classical separations for certain graph problems.

We first use this framework to find a marked vertex in one-dimensional random hierarchical graphs as defined by Balasubramanian, Li, and Harrow [arXiv ’23]. In that work, they generalised the well-known exponential quantum-classical separation of the welded tree problem by Childs, Cleve, Deotto, Farhi, Gutmann, and Spielman [STOC ’03] to random hierarchical graphs. Our result partially recovers their results with an arguably simpler analysis. Furthermore, by constructing a 3-regular graph based on welded trees, this framework also allows us to show an exponential speedup for the pathfinding problem. This solves one of the open problems posed by Li [arXiv ’23], who constructs a non-regular graph and uses the degree information to achieve a similar speedup.

In analogy to the connection between the (edge-vertex) incidence matrix of a graph and Kirchhoff’s Law and Ohm’s Law in an electrical network, we also rebuild the connection between the alternative incidence matrix and Kirchhoff’s Alternative Law and Ohm’s Alternative Law. By establishing this connection, we expect that the multidimensional electrical network could have more applications beyond quantum walks.

Algorithms and complexity
Monday June 17, 2024, 11AM, Salle 3052
Gopal Pandurangan (University of Houston) Energy-Efficient Distributed Algorithms

Energy is a premium resource in battery-powered wireless and sensor networks, and the bulk of it is used by nodes when they are awake, i.e., when they are sending, receiving, and even just listening for messages. On the other hand, when a node is sleeping, it does not perform any communication and thus spends very little energy. Several recent works have addressed the problem of designing energy-efficient distributed algorithms for various fundamental problems. These algorithms operate by minimizing the number of rounds in which any node is awake, also called the awake complexity.

In this talk we present results on energy-efficient distributed algorithms for various distributed computing problems. These results show that fundamental problems such as Maximal Independent Set (MIS), Leader Election, Spanning Tree, Minimum Spanning Tree (MST), and several other problems can be solved with small awake complexity, which can be significantly better than the traditional round complexity (which counts both sleeping and awake rounds).

In particular, a main result that we will present is that the fundamental MIS problem can be solved in expected $O(1)$ rounds of node-averaged awake complexity, which measures the average number of rounds a node is awake during the algorithm. Specifically, we present a randomized distributed algorithm for MIS that has expected $O(1)$-rounds node-averaged awake complexity and, with high probability, $O(\log{n})$-rounds worst-case awake complexity and $O(\log^{3.41}n)$-rounds worst-case complexity.

We will also discuss a subsequent result showing that MIS can be computed in $O(\log \log n)$ worst-case awake complexity, which is exponentially better than the best known round complexity of $O(\log n)$ and exponentially bypasses the fundamental $\Omega(\sqrt{\log n/\log \log n})$ round-complexity lower bound.
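
For contrast with these awake-complexity bounds, the classical $O(\log n)$-round baseline they improve on is a Luby-style randomized MIS: in every synchronous round each live vertex draws a fresh random value, local minima join the MIS, and their closed neighbourhoods are removed. The sketch below is an illustrative centralized simulation of this round structure (the algorithms in the talk are distributed and count only awake rounds; the function names are ours):

```python
import random

def luby_mis(adj, rng):
    """Luby-style MIS sketch: adj[v] is the set of neighbours of v.
    Each round, local minima of fresh random values join the MIS and
    their closed neighbourhoods are removed; O(log n) rounds w.h.p."""
    live = set(range(len(adj)))
    mis = set()
    while live:
        r = {v: rng.random() for v in live}
        winners = {v for v in live
                   if all(r[v] < r[u] for u in adj[v] if u in live)}
        mis |= winners
        for v in winners:
            live -= adj[v] | {v}   # remove winner and its neighbours
    return mis

def is_mis(adj, s):
    """Check independence and maximality of s in the graph adj."""
    independent = all(not (adj[u] & s) for u in s)
    maximal = all(v in s or (adj[v] & s) for v in range(len(adj)))
    return independent and maximal
```

For example, on a 7-cycle the returned set is a valid MIS after a few rounds.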

Algorithms and complexity
Tuesday June 11, 2024, 11AM, Salle 3052
Divyarthi Mohan (Tel-Aviv University) Optimal Stopping with Interdependent Values

Consider the problem of selling a single item to one of n buyers arriving sequentially, where, upon the arrival of each buyer, we need to decide whether to accept this buyer's value and stop the process (before seeing the values of the future bidders), or irrevocably reject this bidder and continue on to the next. The objective is to maximize the accepted value, which poses a fundamental question: how well can an online algorithm perform compared to the offline optimal solution? Adding to the challenge, in many scenarios a bidder's value for the item may depend on private information of other bidders. For example, in an art auction, a buyer's value for the item may be influenced by how much the other buyers so far have liked it, and may even depend on how future buyers will value the item, as that would affect its resale value.

We study online selection problems in both the prophet and secretary settings, when arriving agents have interdependent values. In the interdependent values model, introduced in the seminal work of Milgrom and Weber [1982], each agent has a private signal and the value of an agent is a function of the signals held by all agents. Results in online selection crucially rely on some degree of independence of values, which is conceptually at odds with the interdependent values model. For prophet and secretary models under the standard independent values assumption, prior works provide constant factor approximations to the welfare. On the other hand, when agents have interdependent values, prior works in Economics and Computer Science provide truthful mechanisms that obtain optimal and approximately optimal welfare under certain assumptions on the valuation functions. We bring together these two important lines of work and provide the first constant factor approximations for prophet and secretary problems with interdependent values. We consider both the algorithmic setting, where agents are non-strategic (but have interdependent values), and the mechanism design setting with strategic agents. All our results are constructive and use simple stopping rules.

Joint work with Simon Mauras and Rebecca Reiffenhäuser.
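
Under the standard independence assumption mentioned above, the simplest constant-factor result is the single-threshold prophet inequality: accepting the first value above half the prophet's expected maximum guarantees half of the expected maximum. A quick Monte Carlo sketch with i.i.d. uniform values (the interdependent setting studied in the talk is precisely where such independence-based arguments break down; all names here are ours):

```python
import random

def stop_at_threshold(values, t):
    """Single-threshold stopping rule: accept the first value >= t."""
    return next((v for v in values if v >= t), 0.0)

rng = random.Random(1)
n, trials = 5, 20000
emax = n / (n + 1)                  # E[max of n i.i.d. U(0,1)] = n/(n+1)
alg = prophet = 0.0
for _ in range(trials):
    vals = [rng.random() for _ in range(n)]
    alg += stop_at_threshold(vals, emax / 2)
    prophet += max(vals)
# classical guarantee: E[ALG] >= E[max] / 2 when values are independent
```

In this run the threshold rule comfortably exceeds half of the prophet's average value.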

Algorithms and complexity
Wednesday June 5, 2024, 11AM, Salle 4052 (PCQC)
Marin Costes (Université Paris-Saclay) Space-time deterministic graph rewriting

We study non-terminating graph rewriting models whose local rules are applied non-deterministically, and yet enjoy a strong form of determinism, namely space-time determinism. Of course, in the case of terminating computation it is well known that the mess introduced by asynchronous rule applications may not matter to the end result, as confluence conspires to produce a unique normal form. In the context of non-terminating computation, however, confluence is a very weak property, and (almost) synchronous rule application is always preferred, e.g. when it comes to simulating dynamical systems. Here we provide sufficient conditions so that asynchronous local rule applications conspire to produce well-determined events in the space-time unfolding of the graph, regardless of their application order. Our first example is an asynchronous simulation of a dynamical system. Our second example features time dilation, in the spirit of general relativity.

Algorithms and complexity
Tuesday June 4, 2024, 11AM, Salle 3052
Kuo-Chin Chen (Foxconn Research) Quantum Walks on Simplicial Complexes and Harmonic Homology: Application to Topological Data Analysis with Superpolynomial Speedups

Incorporating higher-order interactions in information processing enables us to build more accurate models, gain deeper insights into complex systems, and address real-world challenges more effectively. However, existing methods, such as random walks on oriented simplices and homology, which capture these interactions, are not known to be efficient. This work investigates whether quantum walks on simplicial complexes exhibit quantum advantages. We introduce a novel quantum walk that encodes the combinatorial Laplacian, a key mathematical object whose spectral properties reflect the topology of the underlying simplicial complex. Furthermore, we construct a unitary encoding that projects onto the kernel of the Laplacian, representing the space of harmonic cycles in the complex's homology. Combined with the efficient construction of quantum walk unitaries for clique complexes that we present, this paves the way for utilizing quantum walks to explore higher-order interactions within topological structures. Our results achieve superpolynomial quantum speedup with quantum walks without relying on quantum oracles for large datasets.

Crucially, the walk operates on a state space encompassing both positively and negatively oriented simplices, effectively doubling its size compared to unoriented approaches. Through coherent interference of these paired simplices, we are able to successfully encode the combinatorial Laplacian, which would otherwise be impossible. This observation constitutes our major technical contribution. We also extend the framework by constructing variant quantum walks. These variants enable us to: (1) estimate the normalized persistent Betti numbers, capturing topological information throughout a deformation process, and (2) verify a specific QMA1-hard problem, showcasing potential applications in computational complexity theory.

Online talk
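
To make the central object concrete: for a simplicial complex, the boundary matrices $B_k$ determine the combinatorial Laplacian $L_k = B_k^T B_k + B_{k+1} B_{k+1}^T$, and the dimension of its kernel (the harmonic cycles) is the $k$-th Betti number. Below is a small, entirely classical computation of the Betti numbers of the hollow triangle, which has one connected component and one 1-dimensional hole; the talk's contribution is encoding this object in a quantum walk, which this toy does not attempt:

```python
from fractions import Fraction
from itertools import combinations

def rank(mat):
    """Matrix rank over the rationals, by Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in mat]
    r = 0
    for c in range(len(m[0]) if m else 0):
        piv = next((i for i in range(r, len(m)) if m[i][c]), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][c]:
                f = m[i][c] / m[r][c]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

def boundary(k_simplices, faces):
    """Oriented boundary matrix B_k (rows: (k-1)-simplices, cols: k-simplices);
    the face omitting vertex index i carries sign (-1)**i."""
    idx = {f: i for i, f in enumerate(faces)}
    B = [[0] * len(k_simplices) for _ in faces]
    for j, s in enumerate(k_simplices):
        # combinations() yields faces omitting the last vertex first,
        # so the face at position t omits vertex index len(s)-1-t
        for t, f in enumerate(combinations(s, len(s) - 1)):
            B[idx[f]][j] = (-1) ** (len(s) - 1 - t)
    return B

V = [(0,), (1,), (2,)]
E = [(0, 1), (0, 2), (1, 2)]     # hollow triangle: three edges, no 2-cell
B1 = boundary(E, V)
beta0 = len(V) - rank(B1)        # dim ker B_0 - rank B_1, with B_0 = 0
beta1 = len(E) - rank(B1)        # dim ker B_1 - rank B_2, with B_2 empty
```

Here $L_1 = B_1^T B_1$ (there is no $B_2$), and its kernel has dimension $\beta_1 = 1$: the hole is detected by the harmonic cycle around the triangle.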

Algorithms and complexity
Wednesday May 29, 2024, 2PM, Salle 3052
Leo Zhou (Caltech) Local Minima in Quantum Systems

Finding ground states of quantum many-body systems is known to be hard for both classical and quantum computers. As a result, when Nature cools a quantum system in a low-temperature thermal bath, the ground state cannot always be found efficiently. Instead, Nature finds a local minimum of the energy. In this work, we study the problem of finding local minima in quantum systems under thermal perturbations. While local minima are much easier to find than ground states, we show that finding a local minimum is computationally hard for classical computers, even when the task is to output a single-qubit observable at any local minimum. In contrast, we prove that a quantum computer can always find a local minimum efficiently using a thermal gradient descent algorithm that mimics the cooling process in Nature. To establish the classical hardness of finding local minima, we consider a family of two-dimensional Hamiltonians such that any problem solvable by polynomial-time quantum algorithms can be reduced to finding local minima of these Hamiltonians. Therefore, cooling systems to local minima is universal for quantum computation, and, assuming quantum computation is more powerful than classical computation, finding local minima is classically hard and quantumly easy.

Algorithms and complexity
Tuesday May 28, 2024, 11AM, Salle 3052
Nikolas Melissaris (Aarhus University) Cheater Identification on a Budget: MPC with Identifiable Abort from Pairwise MACs

Because of the scientific diversity of the audience, I will break the talk into two parts. The first part will be an introduction to secure multiparty computation (MPC). This will help with notions, notation, and definitions that will show up later, and will cover what we can hope to achieve depending on various parameters, such as how many parties are corrupted and how much power they hold. I will then give a brief overview of works on the more specific security notion that we will deal with: identifiable abort. The second part of the talk will focus on our work, which is described below.

Cheater identification in MPC allows the honest parties to agree upon the identity of a cheating party, in case the protocol aborts. In the context of a dishonest majority, this becomes especially critical, as it serves to thwart denial-of-service attacks and mitigate known impossibility results on ensuring fairness and guaranteed output delivery.

In this work, we present a new, lightweight approach to achieving identifiable abort in dishonest majority MPC. We avoid all of the heavy machinery used in previous works, instead relying on a careful combination of lightweight detection mechanisms and techniques from state-of-the-art protocols secure with (non-identifiable) abort.

At the core of our construction is a homomorphic, multi-receiver commitment scheme secure with identifiable abort. This commitment scheme can be constructed from cheap vector oblivious linear evaluation protocols based on learning parity with noise. To support cheater identification, we design a general compilation technique, similar to a compiler of Ishai et al. (Crypto 2014), but avoid its requirement for adaptive security of the underlying protocol. Instead, we rely on a different (and seemingly easier to achieve) property we call online extractability, which may be of independent interest. Our MPC protocol can be viewed as a version of the BDOZ MPC scheme (Bendlin et al., Eurocrypt 2011) based on pairwise information-theoretic MACs, enhanced to support cheater identification and a highly efficient preprocessing phase, essentially as efficient as the non-identifiable protocol of Le Mans (Rachuri & Scholl, Crypto 2022).
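
To ground the last ingredient: a pairwise information-theoretic MAC in the BDOZ style authenticates a field element x with the tag alpha*x + beta, where the verifying party holds (alpha, beta), alpha is reused across values and beta is a fresh one-time offset per value. Such tags are additively homomorphic, and forging a tag on a different value amounts to guessing alpha. A minimal sketch under these assumptions (illustrative only; the protocol in the talk composes such MACs with commitments and an efficient preprocessing phase):

```python
import random

P = 2**61 - 1                # a prime modulus; values and tags live in Z_p

def mac(x, alpha, beta):
    """Tag for x under key (alpha, beta): an information-theoretic MAC."""
    return (alpha * x + beta) % P

def verify(x, tag, alpha, beta):
    return tag == (alpha * x + beta) % P

rng = random.Random(0)
alpha = rng.randrange(1, P)            # global MAC key
beta1, beta2 = rng.randrange(P), rng.randrange(P)   # one-time offsets
x1, x2 = 42, 1000
t1, t2 = mac(x1, alpha, beta1), mac(x2, alpha, beta2)
assert verify(x1, t1, alpha, beta1)
# additive homomorphism: the sum of tags authenticates the sum of values
assert verify((x1 + x2) % P, (t1 + t2) % P, alpha, (beta1 + beta2) % P)
# changing the value without knowing alpha fails, except with probability 1/p
assert not verify(x1 + 1, t1, alpha, beta1)
```

The homomorphism is what lets parties locally combine authenticated shares during the online phase.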

Algorithms and complexity
Tuesday May 28, 2024, 3PM, Salle 4052 (PCQC)
Galina Pass (QuSoft, University of Amsterdam) Multidimensional Quantum Walks, Recursion, and Quantum Divide & Conquer

We introduce an object called a subspace graph that formalizes the technique of multidimensional quantum walks. Composing subspace graphs allows one to seamlessly combine quantum and classical reasoning, keeping a classical structure in mind, while abstracting quantum parts into subgraphs with simple boundaries as needed. As an example, we show how to combine a switching network with arbitrary quantum subroutines, to compute a composed function. As another application, we give a time-efficient implementation of quantum Divide & Conquer when the sub-problems are combined via a Boolean formula. We use this to quadratically speed up Savitch’s algorithm for directed st-connectivity.
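
For context, the classical Savitch recursion being sped up: s reaches t within 2^k steps iff some midpoint w is reachable from s within 2^(k-1) steps and reaches t within 2^(k-1) steps. This uses only O(log^2 n) space (recursion depth times one midpoint per level) at the cost of quasi-polynomial time, which is what makes a quantum time speedup interesting. A direct sketch of the recursion (names ours):

```python
def reach(u, v, k, adj):
    """Is there a path from u to v of length at most 2**k?
    Recursion depth ~ log n with one midpoint per level: O(log^2 n) space."""
    if u == v:
        return True
    if k == 0:
        return v in adj[u]
    # guess a midpoint and solve two half-length subproblems
    return any(reach(u, w, k - 1, adj) and reach(w, v, k - 1, adj)
               for w in range(len(adj)))

def st_connected(adj, s, t):
    """Savitch-style directed st-connectivity; adj[v] is v's out-neighbour set."""
    n = len(adj)
    return reach(s, t, max((n - 1).bit_length(), 0), adj)
```

On a directed path 0 -> 1 -> 2 -> 3, this finds the forward connection and correctly rejects the reverse one.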

Algorithms and complexity
Tuesday May 21, 2024, 3PM, Salle 4052 (PCQC)
Xiaodi Wu (University of Maryland) Hamiltonian-oriented Quantum Algorithm Design and Programming

This is an exciting time for quantum computing, with early-stage quantum computers now available at your fingertips through the cloud. The conventional design of quantum algorithms is centered around the abstraction of quantum circuits and relies on a digital mindset for application design and implementation. While serving as an elegant mathematical interface, the circuit-based digital abstraction usually fails to capture the native programmability of quantum devices and incurs large overheads, which significantly restricts its near-term feasibility, where quantum computing resources are the major limitation.

We propose to use quantum Hamiltonian evolution as the central object in end-to-end quantum application design, i.e. the so-called Hamiltonian-oriented paradigm, based on the observation that Hamiltonian evolution is a native abstraction for both low-level hardware control and high-level quantum applications. We illustrate that the Hamiltonian-oriented design not only allows more efficient implementation of existing quantum algorithms but also inspires novel quantum algorithms, especially in optimization and scientific computing, which are hard to perceive in the circuit model. We also develop a programming infrastructure called SIMUQ (SIMUlation language for Quantum) for easy implementation of Hamiltonian-based quantum applications for domain experts on heterogeneous quantum devices.

Algorithms and complexity
Tuesday May 21, 2024, 2PM, Salle 3052
Lawrence Roy (Aarhus University) VOLE-in-the-Head and Post-Quantum Signatures

VOLE-in-the-head (closely related to the MPC-in-the-head paradigm of Ishai et al.) is a recent technique for computationally efficient but non-succinct NIZK in the random oracle model. It was applied to post-quantum signatures in the signature scheme FAEST, a submission to the NIST call for additional digital signature schemes, which works by proving knowledge of an AES key. I will present the VOLE-in-the-head technique, its application to FAEST, and the recent “One Tree” optimizations.

Algorithms and complexity
Tuesday April 30, 2024, 11AM, Salle 3052
André Chailloux (INRIA Paris) The Quantum Decoding Problem

One of the founding results of lattice-based cryptography is a quantum reduction from the Short Integer Solution (SIS) problem to the Learning with Errors (LWE) problem introduced by Regev. It has recently been pointed out by Chen, Liu and Zhandry [CLZ22] that this reduction can be made more powerful by replacing the LWE problem with a quantum equivalent, where the errors are given in quantum superposition. In parallel, Regev’s reduction has recently been adapted to the context of code-based cryptography by Debris, Remaud and Tillich [DRT23], who showed a reduction between the Short Codeword Problem and the Decoding Problem (the DRT reduction). This motivates the study of the Quantum Decoding Problem (QDP), which is the Decoding Problem but with errors in quantum superposition, and of how it behaves in the DRT reduction.

The purpose of this paper is to introduce and to lay a firm foundation for QDP. We first show that QDP is likely to be easier than classical decoding, by proving that it can be solved in quantum polynomial time in a large regime of noise, whereas no non-exponential quantum algorithm is known for the classical decoding problem. We then show that QDP can even be solved (albeit not necessarily efficiently) beyond the information-theoretic Shannon limit for classical decoding, and we determine precisely the largest noise level at which QDP can be solved, giving the information-theoretic limit for this new problem. Finally, we study how QDP can be used in the DRT reduction. On the positive side, we show that our algorithms can be properly used in the DRT reduction, so that our quantum algorithms for QDP beyond the Shannon capacity can be used to find minimal-weight codewords in a random code. On the negative side, we show that the DRT reduction cannot, in all generality, be a reduction between finding small codewords and QDP, by exhibiting quantum algorithms for QDP for which this reduction entirely fails. Our proof techniques include specific quantum measurements, such as q-ary unambiguous state discrimination and pretty-good measurements, as well as strong concentration bounds on the weight distribution of random shifted dual codes, which we relate using quantum Fourier analysis.

Algorithms and complexity
Tuesday April 23, 2024, 11AM, Salle 3052
Team Event AlgoComp lightning talks ⚡️

Algorithms and complexity
Tuesday April 16, 2024, 10AM, Salle 3052
Zvi Lotker (Bar-Ilan University) Narratives: Shaping the Evolution of Social Networks and the Concept of Time (Part 2 of a mini-course of two sessions, 2 hours each)

The mini-course explores the way time flows in dynamic social networks. We will use subjective clocks to open a window to the human soul. The course will cover the second part of my book Analyzing Narratives in Social Networks (https://link.springer.com/book/10.1007/978-3-030-68299-6).

In the first lecture, we will cover chapters 11, 12, and 13. In the second lecture, we will cover chapters 14, 15, and 16.

Chapter 11 studies the perception of time in narrative. We model the narrative of a reader's experience of time as an evolving social network. This chapter takes the function analysis approach, which uses functions as the principal mathematical object. Therefore, the evolving social network is a function of time.

Chapter 12 introduces the notion of subjective and objective clocks. We model subjective time in narrative, and introduce the subjective clock to allow machines to measure subjective time. We define a clock as a monotonically increasing function. This definition deviates from the traditional definition of clocks as oscillators. The advantage of the monotonic definition is that it is inclusive of both psychological/subjective clocks and objective clocks.

Chapter 13 explains how to compare clocks. Following the stochastic process definition of Martingales, the basic idea is to remove all Euclidean symmetries (such as translation and rotation) from the two clocks. This is equivalent to removing the linear part of those clocks. We introduce the $M$ diagram, which allows for representation of the two clocks as a function of each other after cancelling the Euclidean symmetry. We compare clocks by finding the point at which clocks begin to deviate from one another.

Chapter 14 uses the relationships between clocks to identify critical events in the evolution of a social network. We use the drift between the weighted clock and the event clock to identify critical events in weighted social networks. Chapter 14 defines the law of two clocks, which we can implement in narrative processing.

Chapter 15 provides techniques to transform evolving social networks into a real function. This helps us simplify the analysis of social networks. We use these techniques in narrative processing.

Chapter 16 defines higher orders of social networks and gives some explanation of evolving social network surfaces. We relate social network surfaces to sub-stories within a larger overarching narrative.
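As a rough illustration of the clock-comparison idea of Chapter 13 (my own toy rendering, with hypothetical function names, not code from the book): fit the best linear relation between the readings of two clocks, then look for the point where the residual is largest, i.e., where the clocks begin to deviate.

```python
def detrend_and_compare(clock_a, clock_b):
    """Least-squares fit clock_b ~ m*clock_a + c (removing the linear
    part), then return the residuals and the index where the two
    clocks deviate most from each other."""
    n = len(clock_a)
    mean_a = sum(clock_a) / n
    mean_b = sum(clock_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(clock_a, clock_b))
    var = sum((a - mean_a) ** 2 for a in clock_a)
    m = cov / var
    c = mean_b - m * mean_a
    residuals = [b - (m * a + c) for a, b in zip(clock_a, clock_b)]
    worst = max(range(n), key=lambda i: abs(residuals[i]))
    return residuals, worst
```

With two perfectly proportional clocks the residuals vanish; a localized disturbance in one clock shows up as the index of largest residual.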

Algorithms and complexity
Monday April 15, 2024, 10AM, Salle 3052
Zvi Lotker (Bar-Ilan University) Narratives: Shaping the Evolution of Social Networks and the Concept of Time (Part 1 of a mini-course of two sessions, 2 hours each)

The mini-course explores the way time flows in dynamic social networks. We will use subjective clocks to open a window to the human soul. The course will cover the second part of my book Analyzing Narratives in Social Networks (https://link.springer.com/book/10.1007/978-3-030-68299-6).

In the first lecture, we will cover chapters 11, 12, and 13. In the second lecture, we will cover chapters 14, 15, and 16.

Chapter 11 studies the perception of time in narrative. We model the narrative of a reader's experience of time as an evolving social network. This chapter takes the function analysis approach, which uses functions as the principal mathematical object. Therefore, the evolving social network is a function of time.

Chapter 12 introduces the notion of subjective and objective clocks. We model subjective time in narrative, and introduce the subjective clock to allow machines to measure subjective time. We define a clock as a monotonically increasing function. This definition deviates from the traditional definition of clocks as oscillators. The advantage of the monotonic definition is that it is inclusive of both psychological/subjective clocks and objective clocks.

Chapter 13 explains how to compare clocks. Following the stochastic process definition of Martingales, the basic idea is to remove all Euclidean symmetries (such as translation and rotation) from the two clocks. This is equivalent to removing the linear part of those clocks. We introduce the $M$ diagram, which allows for representation of the two clocks as a function of each other after cancelling the Euclidean symmetry. We compare clocks by finding the point at which clocks begin to deviate from one another.

Chapter 14 uses the relationships between clocks to identify critical events in the evolution of a social network. We use the drift between the weighted clock and the event clock to identify critical events in weighted social networks. Chapter 14 defines the law of two clocks, which we can implement in narrative processing.

Chapter 15 provides techniques to transform evolving social networks into a real function. This helps us simplify the analysis of social networks. We use these techniques in narrative processing.

Chapter 16 defines higher orders of social networks and gives some explanation of evolving social network surfaces. We relate social network surfaces to sub-stories within a larger overarching narrative.

Algorithms and complexity
Tuesday April 9, 2024, 11AM, Salle 3052
Christoph Egger (IRIF) On Fine-Grained Key-Agreement

We are considering fine-grained key agreement. In a key agreement protocol, two parties wish to establish a shared secret that remains hidden from any passive observer. Fine-grained key agreement requires that, for example, an adversary running in quadratic time compared to the honest parties fails to learn about the key.

One can require the adversary to *find* a key or alternatively just require it to *distinguish* between the produced key and a random key. The two notions are equivalent given *classical* access to a random function oracle; however, they are different when parties have superposition access to said oracle – the best bound for the classical transformation uses semi-classical one-way-to-hiding due to Ambainis et al. (Crypto'19) which incurs a security degradation. Consequently, the key-agreement protocol by Brassard et al. (JCryptology'19) does not *directly* provide distinguishing security.

In the talk, I will present a key-agreement protocol relative to a random function oracle with superposition access, which directly provides distinguishing security, and discuss semi-classical one-way-to-hiding, which is a core building block of this work.

In a different direction, one can also base key agreement on space hardness: An adversary that has less *memory* than some (polynomial) bound fails to break the security. I will mention some preliminary results in this setting.
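For intuition on such quadratic gaps, the classical archetype is Merkle's puzzles over a random oracle (the Brassard et al. protocol mentioned above is a quantum relative). A toy sketch, with illustrative parameters and names of my own choosing:

```python
import hashlib
import random

def H(x: int) -> str:
    """Stand-in for a random oracle."""
    return hashlib.sha256(str(x).encode()).hexdigest()

def merkle_key_agreement(domain_size=10_000, n_puzzles=100, seed=0):
    rng = random.Random(seed)
    # Alice hashes n random domain points and publishes only the hashes.
    alice_inputs = rng.sample(range(domain_size), n_puzzles)
    published = {H(x): x for x in alice_inputs}
    # Bob queries random points until he hits one of Alice's hashes.
    # In expectation this takes ~ domain_size / n_puzzles queries, so an
    # eavesdropper who must *find* the preimage herself pays roughly
    # quadratically more work than the honest parties combined.
    for x in rng.sample(range(domain_size), domain_size):
        if H(x) in published:
            return published[H(x)], x  # Alice's secret, Bob's secret
```

The shared secret is the common preimage; the quadratic gap is only against *finding* the key, which is exactly the find-versus-distinguish issue discussed above.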

Algorithms and complexity
Friday April 5, 2024, 11AM, Salle 3052
Yanlin Chen (CWI Amsterdam) A Quantum Speed-Up for Approximating the Top Eigenvectors of a Matrix

Finding a good approximation of the top eigenvector of a given d × d matrix A is a basic and important computational problem, with many applications. We give two different quantum algorithms that, given query access to the entries of A and assuming a constant eigenvalue gap, output a classical description of a good approximation of the top eigenvector: one algorithm with time complexity d^{1.5+o(1)} and one with time complexity \tilde{O}(d^{1.75}) that has a slightly better dependence on the precision of the approximation. Both provide a polynomial speed-up over the best-possible classical algorithm, which needs Ω(d^2) queries to entries of A (and hence Ω(d^2) time). We extend this to a quantum algorithm that outputs a classical description of the subspace spanned by the top-q eigenvectors in time qd^{1.5+o(1)}. We also prove a nearly-optimal lower bound of \tilde{Ω}(d^{1.5}) on the quantum query complexity of approximating the top eigenvector. Our quantum algorithms run a version of the classical power method that is robust to certain benign kinds of errors, where we implement each matrix-vector multiplication with small and well-behaved error on a quantum computer, in different ways for the two algorithms.

Our first algorithm uses block-encoding techniques to compute the matrix-vector product as a quantum state, from which we obtain a classical description by a new time-efficient unbiased pure-state tomography algorithm that has essentially optimal sample complexity O(d log(d)/ε^2) and that comes with improved statistical properties compared to earlier pure-state tomography algorithms. Our second algorithm estimates the matrix-vector product one entry at a time, using a new “Gaussian phase estimation” procedure. We also develop a time-efficient process-tomography algorithm for reflections around bounded-rank subspaces, providing the basis for our top-eigensubspace estimation application.

This is joint work with Ronald de Wolf and András Gilyén.
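The classical power method underlying both algorithms can be sketched as follows, with an optional per-step perturbation standing in for the "benign kinds of errors" mentioned above (a plain classical illustration, not the quantum implementation; the function names and example matrix are mine):

```python
def power_method(matvec, dim, iters=100, noise=None):
    """Return (Rayleigh quotient, unit vector) after power iteration.

    `noise`, if given, is called once per entry per step to perturb the
    product, imitating a matvec implemented with small, well-behaved
    error."""
    v = [1.0] * dim
    for _ in range(iters):
        w = matvec(v)
        if noise is not None:
            w = [wi + noise() for wi in w]
        norm = sum(wi * wi for wi in w) ** 0.5
        v = [wi / norm for wi in w]
    av = matvec(v)
    rayleigh = sum(a * b for a, b in zip(av, v))
    return rayleigh, v

# Toy symmetric matrix with a constant eigenvalue gap.
A = [[4.0, 1.0, 0.0], [1.0, 3.0, 0.0], [0.0, 0.0, 1.0]]
mv = lambda v: [sum(A[i][j] * v[j] for j in range(3)) for i in range(3)]
```

With an eigenvalue gap, convergence to the top eigenvector is geometric in the number of iterations, which is why a robust matvec is all the quantum algorithms need to supply.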

Algorithms and complexity
Tuesday April 2, 2024, 11AM, Salle 3071
Michael Klooß (Aalto University) Adaptive Special Soundness: Improved Knowledge Extraction by Adaptive Useful Challenge Sampling

Proving knowledge soundness of an interactive proof from scratch is often a challenging task. This has motivated the development of various special soundness frameworks which, in a nutshell, separate knowledge extractors into two parts: (1) an extractor to produce a set of accepting transcripts conforming to some structure; (2) a witness recovery algorithm to recover a witness from a set of transcripts with said structure. These frameworks take care of (1), so it suffices for a protocol designer to specify (2), which is often simple.

However, special soundness fails to adequately capture the guarantees provided by simple “probabilistic tests”, e.g., short random linear combinations as used in many lattice-based proof systems. In this talk, I will introduce (adaptive) special soundness, which captures many such scenarios, and discuss rewinding-based knowledge extraction. Moreover, I will point out current limitations and open problems with regard to (adaptive) special soundness.

This talk is based on joint work with Thomas Attema, Russell W. F. Lai, and Pavlo Yatsyna.

Algorithms and complexity
Tuesday March 26, 2024, 11AM, Salle 3052
Willy Quach (Weizmann Institute of Science) How (not) to prove cryptographic security against quantum attacks

Perhaps one of the most spectacular feats of quantum computers is the ability to break (public-key) cryptosystems that are used to secure the Internet. This directly motivates the study of cryptography that is secure against quantum attacks, namely post-quantum cryptography.

In this talk, I will focus on the following basic question: how do we prove that a cryptographic construction is secure against quantum attacks? I will highlight a perhaps surprising distinction between the security of cryptographic constructions against quantum attacks, and the quantum hardness of the underlying problems that are used to reason about security. Doing so will highlight a natural connection to cryptographic proofs of quantumness.

Based on joint work with Alex Lombardi, Ethan Mook, and Daniel Wichs.

Algorithms and complexity
Tuesday March 19, 2024, 11AM, Salle 3052
Joon Lee (EPFL) The Sparse Parity Matrix

The last decade witnessed several pivotal results on random inference problems where the aim is to learn a hidden ground truth from indirect randomised observations; much of this research has been guided by statistical physics intuition.  Prominent examples include the stochastic block model, low-density parity check codes or compressed sensing.  In all random inference problems studied so far the posterior distribution of the ground truth given the observations appears to enjoy a key property called “strong replica symmetry”.  This means that the overlap of the posterior distribution with the ground truth (basically the number of bits that can be learned correctly) concentrates on a deterministic value. Whether this is generally true has been an open question.  In this paper we discover an example of an inference problem based on a very simple random matrix over F2 that fails to exhibit strong replica symmetry.  Beyond its impact on random inference problems, the random matrix model, reminiscent of the binomial Erdos-Renyi random graph, gives rise to a natural random constraint satisfaction problem related to the intensely studied random k-XORSAT problem.
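For intuition, the linear algebra here lives over F2; a minimal Gaussian-elimination solver for an XORSAT-style system Ax = b mod 2, with each row packed into a Python integer bitmask (a generic sketch of my own, unrelated to the paper's specific matrix model):

```python
def solve_gf2(rows, rhs, n):
    """Solve A x = b over GF(2). Each row is an n-bit int mask; returns
    one solution as a list of bits (free variables set to 0), or None
    if the system is inconsistent."""
    rows, rhs = list(rows), list(rhs)
    pivots = []  # (pivot column, pivot row index)
    r = 0
    for col in range(n):
        piv = next((i for i in range(r, len(rows)) if rows[i] >> col & 1), None)
        if piv is None:
            continue  # no pivot in this column: free variable
        rows[r], rows[piv] = rows[piv], rows[r]
        rhs[r], rhs[piv] = rhs[piv], rhs[r]
        for i in range(len(rows)):
            if i != r and rows[i] >> col & 1:
                rows[i] ^= rows[r]  # eliminate col above and below
                rhs[i] ^= rhs[r]
        pivots.append((col, r))
        r += 1
    if any(rows[i] == 0 and rhs[i] for i in range(r, len(rows))):
        return None  # a row reduced to 0 = 1
    x = [0] * n
    for col, i in pivots:
        x[col] = rhs[i]
    return x
```

Each XOR clause x_i ⊕ x_j ⊕ … = b is one row, which is exactly how random k-XORSAT instances are encoded as random sparse matrices over F2.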

Algorithms and complexity
Tuesday March 12, 2024, 11AM, Salle 3052
Christina Boura (UVSQ) New results in differential cryptanalysis

Differential cryptanalysis, introduced in 1990 by Biham and Shamir, is a powerful cryptanalysis technique against block ciphers. It is based on the existence of an input difference that propagates through the different layers of the cipher to an output difference with high probability. Although the central idea of this technique is quite simple to understand, mounting an optimal differential attack against a given cipher is a highly non-trivial task. The difficulty is two-fold. First, finding good differentials and accurately estimating their probability is a difficult computational problem. Second, performing a key recovery based on the existence of a good differential is a hard optimization task. In this talk we will revisit the differential cryptanalysis technique and present recent advances for treating the two exposed problems.
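The basic object behind finding good differentials is the difference distribution table (DDT) of an S-box, which counts, for each input difference a, how often each output difference b occurs. A minimal sketch over a toy 4-bit S-box (chosen here only for illustration):

```python
def difference_distribution_table(sbox):
    """DDT[a][b] = #{x : S(x ^ a) ^ S(x) = b}."""
    n = len(sbox)
    ddt = [[0] * n for _ in range(n)]
    for a in range(n):
        for x in range(n):
            ddt[a][sbox[x ^ a] ^ sbox[x]] += 1
    return ddt

# A toy 4-bit S-box (illustrative only).
SBOX = [0xC, 0x5, 0x6, 0xB, 0x9, 0x0, 0xA, 0xD,
        0x3, 0xE, 0xF, 0x8, 0x4, 0x7, 0x1, 0x2]
```

A high DDT entry means the difference a propagates to b with high probability through one S-box layer; chaining such transitions across rounds is what yields a differential.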

Algorithms and complexity
Tuesday March 5, 2024, 11AM, Salle 3052
Julian Wechs (Université Libre de Bruxelles) Subsystem decompositions of quantum circuits and processes with indefinite causal order

One can theoretically conceive of situations in which the causal order between quantum operations is no longer well-defined. Such causally indefinite processes are of interest from a foundational perspective, but also open up new possibilities in quantum information, as they go beyond the conventional paradigm of quantum circuits. A central question is which of these processes have an operational interpretation or physical realisation, and, more particularly, whether indefinite causal order could be realised within standard physics. It has been shown that certain causally indefinite processes can take place as part of standard quantum mechanical evolutions, described by acyclic quantum circuits, if the latter are represented with respect to an alternative choice of quantum subsystems that can be delocalised in time. In this seminar, I will present a general framework to describe such transformations between different subsystem decompositions of quantum circuits. This puts the notion that indefinite causal order is realisable in standard quantum theory on a rigorous footing.

Algorithms and complexity
Tuesday February 27, 2024, 11AM, Salle 3052
Sophie Huiberts (LIMOS, Clermont-Ferrand) Open problems about the simplex method

The simplex method is a very efficient algorithm. In this talk we see a few of the state-of-the-art theories for explaining this observation. We will discuss what it takes for a mathematical model to explain an algorithm’s qualities, and whether existing theories meet this bar. Following this, we will question what the simplex method is and if the theoretician's simplex method is the same algorithm as the practitioner's simplex method.

Algorithms and complexity
Tuesday February 27, 2024, 2PM, Salle 3052
Christian Konrad (University of Bristol) An Unconditional Lower Bound for Two-Pass Streaming Algorithms for Maximum Matching Approximation

In this talk, I will discuss a recent joint work with my PhD student Kheeran Naidu on lower bounds for the Maximum Matching problem in the semi-streaming model. In this model, an algorithm performs multiple passes over the edges of an input graph and is tasked with computing a matching that is as large as possible using space O(n polylog n), where n is the number of vertices of the input graph.

We will focus on the two-pass setting, and we show that every randomized two-pass streaming algorithm that computes a better than 8/9-approximation to Maximum Matching requires strictly more than semi-streaming space.

Prior to our work, only a conditional two-pass lower bound by Assadi [SODA'22] was known, which relates its strength to the maximum density of Ruzsa-Szemeredi graphs (RS-graphs) with matchings of linear size. In the best case, i.e., if very dense RS-graphs with linear-sized matchings exist, that lower bound rules out approximation ratios above 0.98; with current knowledge, however, only approximation factors of 1 − o(1) are ruled out.

Our lower bound makes use of the information cost trade-off of the Index problem in the two-party communication setting established by Jain et al. [JACM’09]. To the best of our knowledge, our work is the first that exploits this trade-off result in the context of lower bounds for multi-pass graph streaming algorithms.

Algorithms and complexity
Tuesday January 9, 2024, 4PM, Salle 3052
Gabriel Senno (Quside) Quantifying the intrinsic randomness of quantum measurements

In this talk, I will tell you about our recent results on the maximum probability with which an adversary, Eve, can guess the outcomes of a known POVM $M$ on a system $S$ in a known state $\rho$. We study two alternative adversarial scenarios for this problem: a classical one and a quantum one. In the classical picture, every time the system is measured, Eve knows in which pure state from an ensemble compatible with $\rho$ the system is and which extremal POVM in some convex decomposition of $M$ is being performed. In the quantum picture, following [Frauchiger et al., arXiv:1311.4547], the POVM $M$ is seen as a projective measurement on the system $S$ plus an ancillary system $A$, which can be in a mixed state, and Eve holds a purification $E$ of $SA$. Notice that these two scenarios are indistinguishable for someone doing tomography on S. We show that, unlike the case of projective measurements (as it was already known) or pure states (as we prove), when nonextremal measurements and states are considered, Eve’s guessing probability in the quantum picture can be strictly greater than in the classical one.


Year 2023

Algorithms and complexity
Tuesday December 19, 2023, 11AM, Salle 3052
Arthur Braida (Atos) Tight Lieb-Robinson Bound for approximation ratio in Quantum Annealing

Quantum annealing (QA) holds promise for optimization problems in quantum computing, especially combinatorial optimization. This analog framework attracts attention for its potential to address complex problems. Its gate-based counterpart, QAOA, with proven performance guarantees, has drawn much attention in the NISQ era. Several numerical benchmarks attempt to compare these two metaheuristics; however, limited classical computational power severely restricts the performance insights they can provide. In this work, we introduce a new parametrized version of QA enabling a precise 1-local analysis of the algorithm. We develop a tight Lieb-Robinson bound for regular graphs, achieving the best-known numerical value for analyzing QA locally. Studying MaxCut over cubic graphs as a benchmark optimization problem, we show that a linear-schedule QA with a 1-local analysis achieves an approximation ratio over 0.7020, outperforming all known 1-local algorithms.

Algorithms and complexity
Wednesday December 13, 2023, 4PM, Salle 3052
Ainesh Bakshi (MIT) Learning quantum Hamiltonians at any temperature in polynomial time

A quantum system of n interacting particles at thermal equilibrium can be described using a polynomial number of parameters, which correspond to the strengths of the forces between particles. A natural open question has been whether these parameters can be estimated efficiently by running experiments on the system. We resolve this question by giving the first polynomial-time algorithm for this task. This improves over prior work, which uses polynomially many samples from the system, but requires exponential time. In this talk, I will introduce the problem and describe some of the key ideas behind our algorithm.

Based on joint work with Allen Liu, Ankur Moitra and Ewin Tang.

online talk

Algorithms and complexity
Tuesday December 12, 2023, 11AM, Salle 3052
Deeksha Adil (ETH Zurich) Dynamically Computing Approximate Eigenvectors

Quickly maintaining spectral information as the data matrix changes is essential in several applications. In this talk I will give an overview of what is known in this area and present a simple yet novel algorithm, with its analysis, for quickly maintaining an approximate maximum eigenvalue and eigenvector of a positive semi-definite matrix undergoing decremental updates. Our algorithms also hint at a separation between oblivious and adaptive adversaries in the dynamic setting, which has not yet been resolved.

This is based on very recent work with Thatchaphol Saranurak.

Algorithms and complexity
Tuesday December 5, 2023, 11AM, Salle 3052
Marc-Olivier Renou (INRIA Saclay) Causal and No Signalling-LOCAL models as the largest generalization of synchronous quantum distributed computing

In the LOCAL model of distributed computing, n processors are connected together through a graph. They communicate with their neighbours in a synchronised way a limited number of times (through infinite-bandwidth communication channels), and then produce an output. Their goal is to find a strategy solving some information processing task, e.g. to obtain a proper coloring of the graph. To answer this question, the considered physical theory of information matters: for instance, Quantum Information Theory (QIT) can in principle allow for strategies solving the task quicker.

Finding lower bounds on the maximal speed at which QIT can solve a task is tricky. To circumvent this difficulty, several proofs based on the No-Signalling (NS) principle (which states that an event A can influence an event B only if some signal can physically be sent from A to B) were given, and a new NS-LOCAL model of distributed computing was proposed.

I’ll re-interpret this NS-LOCAL model based on the theoretical physics formalism of light cones in circuits. I will justify that the NS-LOCAL model is the “largest reasonable model with a pre-shared resource between the nodes”. Then, I’ll introduce a weaker Causal-LOCAL model, the “largest reasonable model without pre-shared resource between the nodes”.

All this will be illustrated with easy-to-understand examples. I was recently told by Vaclav Rozhon: “We computer scientists are very scared of the word ‘quantum’.” Note that I will hardly talk about quantum theory at all, hence no need to be scared.

Algorithms and complexity
Wednesday November 22, 2023, 11AM, Salle 4052 (PCQC)
Sébastien Designolle (Zuse Institut Berlin) Frank-Wolfe algorithms for Bell nonlocality

In Bell scenarios with two outcomes per party, we algorithmically consider the two sides of the membership problem for the local polytope: constructing local models and deriving separating hyperplanes, that is, Bell inequalities. We take advantage of the recent developments in so-called Frank-Wolfe algorithms to significantly increase the convergence rate of existing methods. As an application, we study the threshold value for the nonlocality of two-qubit Werner states under projective measurements. Here, we improve on both the upper and lower bounds present in the literature. Importantly, our bounds are entirely analytical; moreover, they yield refined bounds on the value of the Grothendieck constant of order three: 1.4367 ≤ KG(3) ≤ 1.4546. We also demonstrate the efficiency of our approach in multipartite Bell scenarios, and present the first local models for all projective measurements with visibilities noticeably higher than the entanglement threshold.
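Generic Frank-Wolfe can be sketched as below, here minimizing a quadratic over the probability simplex rather than over the local polytope (purely illustrative, with names of my own; the paper's linear minimization oracle over deterministic local strategies is more involved):

```python
def frank_wolfe_simplex(grad, x0, iters=2000):
    """Frank-Wolfe over the probability simplex: the linear
    minimization oracle just picks the vertex (coordinate) with the
    smallest gradient entry; step size is the standard 2/(k+2)."""
    x = list(x0)
    for k in range(iters):
        g = grad(x)
        j = min(range(len(x)), key=lambda i: g[i])  # LMO: best vertex e_j
        gamma = 2.0 / (k + 2.0)
        x = [(1 - gamma) * xi for xi in x]          # move toward e_j
        x[j] += gamma
    return x

# Example: project t onto the simplex, i.e. minimize ||x - t||^2.
t = [0.2, 0.5, 0.3]
grad = lambda x: [2 * (xi - ti) for xi, ti in zip(x, t)]
x = frank_wolfe_simplex(grad, [1.0, 0.0, 0.0])
```

The appeal for the membership problem is visible even in this toy: every iterate is an explicit convex combination of vertices, i.e., a certificate (here of simplex membership, in the paper of a local model).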

Algorithms and complexity
Wednesday November 22, 2023, 3PM, Salle 3052
Sourav Chakraborty (ISI Kolkata) Distinct Elements in Streams and the Klee's Measure Problem

We will present a very simple streaming algorithm for F0 estimation that also caught the eye of Donald E. Knuth. In a recent article, Donald E. Knuth started with the following two paragraphs:

“Sourav Chakraborty, N. V. Vinodchandran, and Kuldeep S. Meel have recently proposed an interesting algorithm for the following problem: A stream of elements (a1, a2,…,am) is input, one at a time, and we want to know how many of them are distinct. In other words, if A = {a1, a2,…,am} is the set of elements in the stream, with multiplicities ignored, we want to know |A|, the size of that set. But we don’t have much memory; in fact, |A| is probably a lot larger than the number of elements that we can hold in memory at any one time. What is a good strategy for computing an unbiased estimate of |A|?

Their algorithm is not only interesting, it is extremely simple. Furthermore, it’s wonderfully suited to teaching students who are learning the basics of computer science. (Indeed, ever since I saw it, a few days ago, I’ve been unable to resist trying to explain the ideas to just about everybody I meet.) Therefore I’m pretty sure that something like this will eventually become a standard textbook topic. This note is an initial approximation to what I might write about it if I were preparing a textbook about data streams.”

This simple algorithm comes out of the first ever “efficient” streaming algorithm (from PODS 21) for the Klee's Measure problem, which was a big open problem in the world of streaming for many years.

This work is based on joint work with N. V. Vinodchandran and Kuldeep S. Meel across multiple articles, notably the following: Estimating the Size of Union of Sets in Streaming Models. PODS 2021; Estimation of the Size of Union of Delphic Sets: Achieving Independence from Stream Size. PODS 2022; Distinct Elements in Streams: An Algorithm for the (Text) Book. ESA 2022.
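For reference, the algorithm Knuth describes can be sketched in a few lines (my own rendering of the idea; variable names are illustrative): keep a buffer of at most s elements, each retained with the current sampling probability p, and halve p whenever the buffer fills.

```python
import random

def cvm_estimate(stream, s, rng=None):
    """Unbiased estimate of the number of distinct elements in
    `stream`, using a buffer of at most s elements."""
    rng = rng or random.Random()
    p = 1.0
    buf = set()
    for a in stream:
        buf.discard(a)             # forget any earlier decision about a
        if rng.random() < p:
            buf.add(a)
        if len(buf) == s:          # buffer full: halve the sampling rate
            buf = {x for x in buf if rng.random() < 0.5}
            p /= 2
            if len(buf) == s:      # astronomically unlikely
                raise RuntimeError("estimation failed")
    return len(buf) / p
```

When s exceeds the true number of distinct elements, p stays 1 and the estimate is exact; otherwise each surviving buffer element "pays" for 1/p distinct elements on average.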

Algorithms and complexity
Tuesday November 21, 2023, 11AM, Salle 3052
Jessica Bavaresco (University of Geneva) Quantum information processing from the approach of higher-order operations

Several fundamental objects in quantum mechanics are described by linear operations that transform quantum states. For example, measurements are operations that transform quantum states into probability distributions, while quantum channels transform quantum states into other quantum states. The formalism of higher-order operations then concerns itself with the question of how to transform these quantum operations, e.g., how a quantum channel can be transformed into another quantum channel or into a probability distribution. Hence, higher-order operations are built on the idea of studying “functions of functions” from a quantum-information perspective.

In this talk, we will discuss the basics of the formalism of higher-order operations, and how their often simple mathematical characterization can be exploited to efficiently optimize over quantum strategies for different information-theoretic tasks. We will discuss its applications to channel discrimination, quantum circuit optimization, and its connection to indefinite causal order—a phenomenon that naturally emerges from the higher-order formalism. In particular, we will discuss potential advantages of indefinite causal order, and the cost, in terms of the necessary number of calls of an arbitrary quantum operation, of simulating it with standard quantum circuits.

Algorithms and complexity
Tuesday November 21, 2023, 3PM, Salle 4052 (PCQC)
Sourav Chakraborty (ISI Kolkata) Testing correctness of samplers using property testing: from theory to practice and back again

How can one test the correctness of a program that is supposed to output an element from a large universe according to a certain distribution? These kinds of programs are heavily used in real life but are rarely tested for correctness.

This problem can be framed as a problem in property testing. Property testing is a subject that deals with these challenges. It tries to design sub-linear algorithms for testing various properties of inputs. The key lies in the way the data is accessed by the algorithm.

One of the central problems in property testing and many other related subjects is testing whether a distribution has a certain property, say, whether a distribution on a finite set is uniform. The conventional way of accessing a distribution is by drawing samples from it. Unfortunately, in this setting the number of samples necessary for testing properties of a distribution (for most natural properties) is polynomial in the size of its support. Thus, when the support is relatively big, the algorithms become impractical in real-life applications.

We introduced a new way of accessing the distribution, the “conditional-sampling oracle”. This oracle can be used to design much faster algorithms for testing properties of distributions, which makes them useful in practical scenarios.

We show that the conditional oracle can be implemented in many real-life problems, and we have demonstrated the usefulness of this model and our algorithms both in practice and in other areas of research, such as the testing of probabilistic verification. The model also raises a number of interesting theoretical questions.

The talk will be based on the following works: On the Power of Conditional Samples in Distribution Testing, with Eldar Fischer, Arie Matsliah and Yonatan Goldhirsh (SICOMP 2016); Property Testing of Joint Distributions using Conditional Samples, with Rishiraj Bhattacharyya (ToCT 2018); On Testing of Uniform Samplers, with Kuldeep Meel (AAAI 2019); On Testing of Samplers, with Kuldeep Meel and Yash Pote (NeurIPS 2020); Designing Samplers is Easy: The Boon of Testers, with Kuldeep Meel, Priyanka Golia and Mate Soos (FMCAD 2022); On Quantitative Testing of Samplers, with Kuldeep Meel, Priyanka Golia and Mate Soos (CP 2022); Testing of Horn Samplers, with Ansuman Banerjee, Shayak Chakraborty, Sayantan Sen, Uddalok Sarkar and Kuldeep Meel (AISTATS 2023); Tight Lower Bound on Equivalence Testing in Conditional Sampling Model, with Diptarka Chakraborty and Gunjan Kumar (SODA 2024).

Algorithms and complexity
Wednesday November 15, 2023, 11AM, Salle 1007
Jinge Bao (CQT, Singapore) On the quantum time complexity of divide and conquer

We initiate a systematic study of the time complexity of quantum divide and conquer algorithms for classical problems. We establish generic conditions under which search and minimization problems with classical divide and conquer algorithms are amenable to quantum speedup, and apply these theorems to an array of problems involving strings, integers, and geometric objects. They include LONGEST DISTINCT SUBSTRING, KLEE’S COVERAGE, several optimization problems on stock transactions and k-INCREASING SUBSEQUENCE. For most of these results our quantum time upper bound matches the quantum query lower bound for the problem, up to polylogarithmic factors.

Algorithms and complexity
Tuesday November 14, 2023, 11AM, Salle 3052
Aaron Sidford (Stanford University) Efficiently Minimizing the Maximum Loss

In this talk I will discuss recent advances in the fundamental robust optimization problem of minimizing the maximum of a finite number of convex loss functions. In particular, I will show how to develop stochastic methods for approximately solving this problem with a near-optimal number of gradient queries. Along the way, I will cover several optimization techniques of broader utility, including accelerated methods for using ball-optimization oracles and stochastic bias-reduced gradient methods.

This talk will include joint work with Hilal Asi, Yair Carmon, Arun Jambulapati, and Yujia Jin including https://arxiv.org/abs/2105.01778 and https://arxiv.org/abs/2106.09481.

Algorithms and complexity
Wednesday November 8, 2023, 2PM, Salle 3052
Marco Túlio Quintino (LIP6) Simulating qubit correlations with classical communication

We consider general prepare-and-measure scenarios in which Alice can transmit qubit states to Bob, who can perform general measurements in the form of positive operator-valued measures (POVMs). We show that the statistics obtained in any such quantum protocol can be simulated by the purely classical means of shared randomness and two bits of communication. Furthermore, we prove that two bits of communication is the minimal cost of a perfect classical simulation.

We then apply our methods to Bell scenarios, extending the well-known Toner and Bacon protocol. In particular, two bits of communication are enough to simulate all quantum correlations associated with arbitrary local POVMs applied to any entangled two-qubit state. For the case of projective measurements, we provide explicit protocols that perfectly simulate the statistics of all local projective measurements on any pair of entangled qubits by communicating one classical trit. If the state is weakly entangled, a single bit already suffices, and the expected amount of communication approaches zero in the limit where the degree of entanglement vanishes.

This talk is mainly based on https://arxiv.org/abs/2207.02244 and will also discuss some results from https://arxiv.org/abs/2207.12457.

Algorithms and complexity
Tuesday November 7, 2023, 2PM, Salle 4052 (PCQC)
Casper Gyurik (Leiden University) On provable learning separations between classical and quantum learners

One of the key challenges of the quantum machine learning field is identifying learning problems where quantum learning algorithms can achieve a provable exponential advantage over classical learning algorithms. Proofs of separation for learning mostly work by reductions to plausible complexity-theoretic assumptions of various types. Previously known examples of advantages are all contrived, and all rely on cryptographic methods to make learning hard for a classical learner. However, the broadly shared intuition is that learning separations should be most apparent when the learning task involves data from genuinely quantum sources (though still classical, i.e., measured out), which are very different from cryptographic problems. In this talk I will address this apparent discrepancy by providing a fine-grained analysis of existing learning separations, and present new results showing how various types of advantages can be obtained, including advantages stemming from a large class of complex quantum physical systems.

Algorithms and complexity
Tuesday October 24, 2023, 11AM, Salle 3052
Michele Orrù (LIP6) Elastic SNARKs for Diverse Environments

We introduce and study elastic SNARKs, a class of proofs where the prover can select different time and memory tradeoffs, depending on the execution environment and the proved statement. The output proof is independent of the chosen configuration.

We construct an elastic SNARK for rank-1 constraint satisfiability (R1CS). In a time-efficient configuration, the prover uses a linear number of cryptographic operations and a linear amount of memory. In a space-efficient configuration, the prover uses streaming algorithms and a quasilinear number of cryptographic operations with a logarithmic amount of memory. A key component of our construction is an elastic probabilistic proof. Along the way, we also formulate a streaming framework for R1CS that we deem of independent interest.

We additionally contribute Gemini, a Rust implementation of our protocol. Our benchmarks show that Gemini, on a single machine, supports R1CS instances with tens of billions of constraints.

Joint work with Jonathan Bootle (IBM Zurich), Alessandro Chiesa (EPFL), and Yuncong Hu (Shanghai Jiao Tong University).

Article link: https://ia.cr/2022/420

Algorithms and complexity
Tuesday October 10, 2023, 11AM, Salle 3052
Asad Raza (LIP6) MA-hardness of Geometrically Local Stoquastic Hamiltonians

The Local Hamiltonian problem for stoquastic local Hamiltonians (those that avoid the sign problem) is shown to be StoqMA-complete when the interactions are algebraically local. Moreover, the problem becomes MA-complete if the Hamiltonian is also frustration-free. In this work, we extend these results, proving the hardness of the problem for Hamiltonians that are geometrically local on 1D and 2D lattices, with only nearest neighbour interactions between qudits.

Algorithms and complexity
Tuesday October 3, 2023, 11AM, Salle 3052
Marcos Kiwi (Universidad de Chile) Label propagation on binomial random graphs

We study a variant of the widely popular, fast, and often used family of community detection procedures referred to as label propagation algorithms. These mechanisms also exhibit many parallels with models of opinion exchange dynamics and consensus mechanisms in distributed computing.

Initially, given a network, each vertex starts with a random label in the interval [0,1]. Then, in each round of the algorithm, every vertex switches its label to the majority label in its neighborhood (including its own label). At the first round, ties are broken towards smaller labels, while at each of the next rounds, ties are broken uniformly at random.
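
The round structure above is easy to simulate directly. The following Python sketch is a hypothetical illustration (all names are mine); the toy clique run is only a sanity check, not the G(n,p) analysis of the paper.

```python
import random
from collections import Counter

def label_propagation(adj, rounds=50, seed=None):
    """Majority label propagation: each vertex starts with a random
    label in [0,1]; each round, every vertex synchronously adopts the
    majority label in its closed neighbourhood.  Round 1 breaks ties
    towards the smaller label, later rounds break ties at random."""
    rng = random.Random(seed)
    label = {v: rng.random() for v in adj}
    for r in range(rounds):
        new = {}
        for v in adj:
            counts = Counter([label[v]] + [label[u] for u in adj[v]])
            best = max(counts.values())
            tied = sorted(l for l, c in counts.items() if c == best)
            new[v] = tied[0] if r == 0 else rng.choice(tied)
        label = new
        if len(set(label.values())) == 1:  # a single label survives
            break
    return label

# On a clique every closed neighbourhood is the whole vertex set, so
# round 1 already makes every vertex adopt the globally smallest label.
clique = {v: {u for u in range(5) if u != v} for v in range(5)}
final = label_propagation(clique, seed=1)
```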

We investigate the performance of this algorithm on the binomial random graph G(n,p). Via a technically delicate analysis, we show that for np>n^(5/8+epsilon), the algorithm terminates with a single label a.a.s. Moreover, we show that if np = omega(n^(2/3)), a.a.s., this label is the smallest one, whereas if n^(5/8+epsilon) < np = o(n^(2/3)), the surviving label is, a.a.s., not the smallest one.

Joint work with Lyuben Lichev (Univ. Jean Monnet), Dieter Mitsche (Univ. Jean Monnet & Pontifícia Univ. Católica de Chile), and Pawel Pralat (Toronto Metropolitan Univ.).

Joint seminar with Distributed

Algorithms and complexity
Wednesday September 27, 2023, 11AM, Salle 3052
Oded Regev (NYU Courant) An Efficient Quantum Factoring Algorithm

We show that n-bit integers can be factored by independently running a quantum circuit with $\tilde{O}(n^{3/2})$ gates $\sqrt{n}+4$ times, and then using polynomial-time classical post-processing. In contrast, Shor's algorithm requires circuits with $\tilde{O}(n^2)$ gates. The correctness of our algorithm relies on a number-theoretic heuristic assumption reminiscent of those used in subexponential classical factorization algorithms. It is currently not clear whether the algorithm can lead to improved physical implementations in practice.

No background in quantum computation will be assumed.

Based on the recent arXiv preprint: https://arxiv.org/abs/2308.06572

Algorithms and complexity
Wednesday September 13, 2023, 11AM, Salle 147 (Olympe de Gouges)
Sanjeev Khanna (University of Pennsylvania) Sublinear Algorithms for Hierarchical Clustering

Hierarchical clustering is a popular technique for organizing data as a rooted tree structure that simultaneously clusters data at multiple levels of granularity. A well-studied recent objective function views the input as a weighted graph with edges indicating similarity between the data points, and focuses on finding a tree that minimizes the cost of hierarchical partitioning. The resulting optimization problem is NP-hard, and previous algorithms for approximating this objective require at least linear time/space. In this talk, we will consider algorithms for hierarchical clustering that use sublinear resources (space, time, and communication). 

Specifically, we will present sublinear algorithms for hierarchical clustering in the streaming model (space), in the query model (time), and in the MPC model (communication). At the core of our algorithmic results is a connection between hierarchical clustering and a suitably relaxed notion of cut sparsifiers of graphs that lends itself to efficient sublinear algorithms. We complement our algorithmic results by establishing nearly matching lower bounds that rule out algorithms with better performance guarantees in each of these models.

This is joint work with Arpit Agarwal, Huan Li, and Prathamesh Patil.

Algorithms and complexity
Tuesday September 5, 2023, 11AM, Salle 147 (Olympe de Gouges)
Felix Rohrbach (Technische Universität Darmstadt) Searching for ELFs in the Cryptographic Forest

Extremely Lossy Functions (ELFs) are families of functions that, depending on the choice during key generation, either operate in injective mode or instead have only a polynomial image size. The choice of the mode is indistinguishable to an outsider. ELFs were introduced by Zhandry (Crypto 2016) and have been shown to be very useful in replacing random oracles in a number of applications.

One open question is to determine the minimal assumption needed to instantiate ELFs. While all constructions of ELFs depend on some form of exponentially-secure public-key primitive, it was conjectured that exponentially-secure secret-key primitives, such as one-way functions, hash functions or one-way product functions, might be sufficient to build ELFs.

In this talk I will present our result which answers this conjecture mostly negatively: We show that no primitive, which can be derived from a random oracle (which includes all secret-key primitives mentioned above), is enough to construct even moderately lossy functions in a black-box manner. However, we also show that (extremely) lossy functions themselves do not imply public-key cryptography, leaving open the option to build ELFs from some intermediate primitive between the classical categories of secret-key and public-key cryptography.

Algorithms and complexity
Wednesday July 12, 2023, 11AM, Salle 147 (Olympe de Gouges)
Anindya De (University of Pennsylvania) Testing Noisy Linear Equations for Sparsity

Consider the following basic problem in sparse linear regression: an algorithm gets labeled samples of the form (x, ⟨w,x⟩ + ε), where w is an unknown n-dimensional vector, x is drawn from a background distribution D, and ε is some independent noise. Given the promise that w is k-sparse, the breakthrough work of Candès, Romberg and Tao (2005) shows that w can be recovered with samples and time scaling as O(k log n). This should be contrasted with general linear regression, where O(n) samples are information-theoretically necessary.
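
As an aside, the sampling model itself is easy to simulate; the hypothetical sketch below (all names made up) generates such labeled samples in Python, using Rademacher (±1) coordinates as one example of a non-Gaussian i.i.d. background distribution D:

```python
import random

def make_sparse_samples(n, k, m, noise=0.1, seed=0):
    """Generate m labeled samples (x, <w,x> + eps) for a hidden
    k-sparse w in R^n, with i.i.d. +/-1 coordinates for x and
    Gaussian noise eps of standard deviation `noise`."""
    rng = random.Random(seed)
    support = rng.sample(range(n), k)   # hidden support of w
    w = [0.0] * n
    for i in support:
        w[i] = rng.choice([-1.0, 1.0])
    samples = []
    for _ in range(m):
        x = [rng.choice([-1.0, 1.0]) for _ in range(n)]
        y = sum(wi * xi for wi, xi in zip(w, x)) + rng.gauss(0.0, noise)
        samples.append((x, y))
    return w, samples

w, data = make_sparse_samples(n=100, k=3, m=10)
```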

In this talk, we look at this question from the vantage point of property testing and study the decision variant: namely, what is the complexity of deciding whether the unknown vector w is k-sparse (or at least, say, 0.01-far from k-sparse in \ell_2 distance)? We show that the decision version of the problem can be solved with a number of samples independent of n, as long as the background distribution D has i.i.d. components that are not Gaussian. We further show that weakening any of the conditions in this result necessarily makes the complexity scale as log n, showing that our results are tight.

Algorithms and complexity
Tuesday July 11, 2023, 11AM, Salle 147 (Olympe de Gouges)
Naman Kumar (IRIF) Private Information Retrieval with Guaranteed Output

Private Information Retrieval (PIR) allows a client to query a server holding a database $D$ (or a set of non-colluding servers holding the same database) for the value of the database at index $i$ (i.e., $D[i]$). Security requires that the servers learn nothing about $i$. Much research has focused on the concrete efficiency of PIR schemes; however, no scheme guarantees that the client will indeed receive $D[i]$ when some threshold number of servers can be malicious.
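
For background only, the classic 2-server XOR scheme (with honest-but-curious servers, so no output guarantee against malice) illustrates the PIR functionality. The names below are hypothetical, and this is not the 3-server protocol of the talk.

```python
import secrets

def xor_pir(db, i):
    """Retrieve db[i] from two non-colluding servers holding the same
    bit database, without either server learning i.  Server 1 gets a
    uniformly random index set S, server 2 gets S XOR {i}; each replies
    with the XOR of its selected bits, and the answers XOR to db[i]."""
    n = len(db)
    mask1 = [secrets.randbelow(2) for _ in range(n)]  # random subset S
    mask2 = mask1.copy()
    mask2[i] ^= 1                                     # S XOR {i}

    def server(database, mask):
        # Each mask alone is uniformly random, so a single server
        # learns nothing about i from its query.
        ans = 0
        for bit, m in zip(database, mask):
            ans ^= bit & m
        return ans

    return server(db, mask1) ^ server(db, mask2)

db = [1, 0, 1, 1, 0, 0, 1, 0]
assert all(xor_pir(db, i) == db[i] for i in range(len(db)))
```

Note that if either server answers maliciously, the client's output is silently wrong, which is exactly the gap guaranteed-output PIR addresses.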

In this talk, I will describe the construction of the first 3-server PIR scheme where the client is guaranteed to obtain the output even in the presence of 1 malicious server. The protocol is concretely efficient and enjoys several attractive properties: 1) The client performs no cryptographic operations; 2) All cryptographic operations performed by the servers can be done in an offline pre-processing phase without the involvement of the client; 3) It is based on symmetric key cryptographic primitives. I will further talk about where this fits in the PIR landscape and its immediate applications.

Algorithms and complexity
Tuesday July 4, 2023, 11AM, Salle 165 (Olympe de Gouges)
Pawel Gawrychowski (University of Wrocław) How to exploit periodicity

Periodicity is a fundamental combinatorial property of strings. We say that p is a period of a string s[1..n] if s[i]=s[i+p] for every i such that both s[i] and s[i+p] are defined. While this notion is interesting in its own right, it can often be used as a tool for designing efficient algorithms. At a high level, many such algorithms operate differently depending on whether a given string has a small period, where small usually means smaller than half (or, say, a quarter) of its length. In other words, we design one algorithm that is efficient if the given string is repetitive and another that is efficient if it is non-repetitive, in each case carefully exploiting either the periodicity or the fact that the input looks “random”, and then run the appropriate algorithm. Of course, in some cases one needs to proceed in a more complex manner and, instead of classifying the whole string, look at its substrings and process each of them differently depending on its structure. I will survey some such results, including a recent generalization of periodicity that can be applied in approximate pattern matching.
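
The definition can be checked by brute force; the sketch below (with hypothetical helper names) lists all periods of a string and applies the small-period dichotomy:

```python
def periods(s):
    """All periods of s: p is a period if s[i] == s[i+p] whenever both
    positions exist (p = len(s) is trivially always a period)."""
    n = len(s)
    return [p for p in range(1, n + 1)
            if all(s[i] == s[i + p] for i in range(n - p))]

def is_repetitive(s):
    # The dichotomy used above: "repetitive" means the smallest period
    # is at most half the length of the string.
    return periods(s)[0] <= len(s) // 2

# "abababa" has periods [2, 4, 6, 7], so it is repetitive;
# "abcdefg" has only the trivial period 7, so it is not.
```

(Efficient algorithms obtain the smallest period in linear time, e.g. via the failure function of Knuth-Morris-Pratt; the quadratic check above is only for clarity.)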

Algorithms and complexity
Tuesday June 27, 2023, 11AM, Salle 147 (Olympe de Gouges)
Rajat Mittal (IIT Kanpur) Composition of randomized query for outer functions with full complexity

For any Boolean functions $f$ and $g$, the question of whether $R(f\circ g) = \tilde{\Theta}(R(f) \cdot R(g))$ holds is known as the composition question for randomized query complexity. This question is one of the important and well-studied problems in the analysis of Boolean functions, and yet we are far from answering it satisfactorily.

It is known that the measure composes if one assumes various properties of the outer function (or the inner function). In this talk we extend the class of outer functions for which randomized query complexity composes. Specifically, we show that when the randomized query complexity of the outer function equals its arity (i.e., is full), composition holds for any inner function. Such a result was known for approximate-degree composition but not for randomized query complexity.

Algorithms and complexity
Tuesday June 20, 2023, 2PM, Salle 147 (Olympe de Gouges)
Samuel Bouaziz-Ermann (LIP6) Quantum Security of Subset Cover Problems

The subset cover problem for $k \geq 1$ hash functions, which can be seen as an extension of the collision problem, was introduced in 2002 by Reyzin and Reyzin to analyse the security of their hash-function based signature scheme HORS. The security of many hash-based signature schemes relies on this problem or a variant of this problem (e.g. HORS, SPHINCS, SPHINCS+, $\dots$).

Recently, Yuan, Tibouchi and Abe (2022) introduced a variant of the subset cover problem, called restricted subset cover, and proposed a quantum algorithm for it. In this work, we prove that any quantum algorithm needs to make $\Omega\left((k+1)^{-\frac{2^{k}}{2^{k+1}-1}}\cdot N^{\frac{2^{k}-1}{2^{k+1}-1}}\right)$ queries to the underlying hash functions with codomain size $N$ to solve the restricted subset cover problem, which essentially matches the query complexity of the algorithm proposed by Yuan, Tibouchi and Abe.

We also analyze the security of the general $(r,k)$–subset cover problem, which is the underlying problem that implies the unforgeability of HORS under a $r$-chosen message attack (for $r \geq 1$). We prove that a generic quantum algorithm needs to make $\Omega\left(N^{k/5}\right)$ queries to the underlying hash functions to find a $(1,k)$–subset cover. We also propose a quantum algorithm that finds a $(r,k)$–subset cover making $O\left(N^{k/(2+2r)}\right)$ queries to the $k$ hash functions.

Algorithms and complexity
Monday June 12, 2023, 11AM, Salle 147 (Olympe de Gouges)
Themistoklis Melissourgos (University of Essex) Strong Inapproximability Results on Pizza-Sharing

We study the computational complexity of finding a solution in straight-cut and square-cut pizza sharing, where the objective is to fairly divide 2-dimensional resources. We show that, for both the discrete and continuous versions of the problems, finding an approximate solution is PPA-complete even for very large constant additive approximation, while finding an exact solution for the square-cut problem is FIXP-hard and in BU. We also study the corresponding decision variants of the problems and their complexity. All our hardness results for the continuous versions apply even when the input is very simple, namely, a union of unweighted squares or triangles.

Joint work with Argyrios Deligkas and John Fearnley.

Joint seminar with Verification

Algorithms and complexity
Tuesday June 6, 2023, 11AM, Salle 147 (Olympe de Gouges)
Galina Pass (QuSoft, University of Amsterdam) (No) Quantum space-time tradeoff for USTCON

The problem of undirected st-connectivity is important both for its applications in network problems, and for its theoretical connections with logspace complexity. Classically, a long line of work led to a time-space tradeoff of T=Õ(n^2/S) for any S such that S=Ω(log(n)) and S=O(n^2/m), where T is the running time and S is the used space. In this talk, I will present a surprising result that quantumly there is no nontrivial time-space tradeoff: there is a quantum algorithm that achieves both optimal time Õ(n) and optimal space O(log(n)) simultaneously. This improves on previous results, which required either O(log(n)) space and Õ(n^1.5) time, or Õ(n) space and time. To complement this, we show that there is a nontrivial time-space tradeoff when given a lower bound on the spectral gap of a corresponding random walk.

Algorithms and complexity
Tuesday May 16, 2023, 11AM, Salle 146 (Olympe de Gouges)
Isabella Ziccardi (University of L'Aquila) Distributed Self-Stabilizing MIS with Few States and Weak Communication

I will talk about a random process that computes a maximal independent set (MIS) on a general n-vertex graph. Despite its simplicity, this process has hardly received attention in the literature. It is easy to prove that the process is fast on certain graph families, such as cliques and trees. However, analyzing the process on graphs beyond these simple cases seems challenging. I will present a result stating that the process is also fast on G(n,p) random graphs for 0 ≤ p ≤ poly(log n) · n^(-1/2) and p ≥ 1/poly(log n). I will also present an extension of this process which is fast on all G(n,p). Both processes readily translate into distributed MIS algorithms which are self-stabilizing, use constant space and constant random bits per round, and work in distributed models with restricted communication capabilities.

Algorithms and complexity
Wednesday May 3, 2023, 11AM, Salle 147 (Olympe de Gouges)
Balthazar Bauer (IRIF) Beyond Uber: Instantiating Generic Groups via PGGs

The Generic Group Model (GGM) has been used to analyse many cryptographic assumptions and protocols. However, it is well known that the GGM should be used with caution, as there are protocols that are secure in this model but not secure in practice. This motivates the study of standard-model notions formalizing the fact that a real-world group “looks generic” in some sense.

We introduce a standard-model definition called pseudo-generic group (PGG), which generalizes the Uber assumptions. Our other results focus on applications of PGGs. Some of our implications use a new type of hash function, which we call linear dependency destroyers (LDDs), and which we use to relate PGGs to UCE hash functions (the analog of PGGs for hash functions).

Algorithms and complexity
Tuesday April 25, 2023, 11AM, Salle 3052
Marcel Dall'Agnol (University of Warwick) Complexity separations in quantum property testing

Two of the most important questions in the theory of computer science are:

- Is verifying the solution to a computational problem easier than deciding if one exists?
- Can quantum computers solve problems their classical counterparts cannot?

Indeed, their formalisations in the model of poly-time computation yield the fundamental P vs. NP (or BPP vs. MA) and BPP vs. BQP problems. Closely related is the “quantum P vs. quantum NP” question:

- Is verification easier than decision *for quantum computers*?

In the property-testing setting, which models super-efficient algorithms that only probe a minuscule portion of their input, decision/verification and quantum/classical separations exist: positive answers to the first two questions have been known for some time. However, the property-testing analogue of QMA (“quantum NP”) has not heretofore been systematically explored. Our paper fills this gap by initiating its study: we formalise super-efficient quantum verifiers and study their power and limitations. Among other results, we prove all of the possible separations among QMA, MA and BQP in the property-testing setting.

Joint work with Tom Gur, Subhayan Roy Moulik and Justin Thaler.

Algorithms and complexity
Tuesday April 18, 2023, 11AM, Salle 3052
Dung Bui (IRIF) Private Set Intersection (PSI) and its Applications

Private Set Intersection (PSI) is a cryptographic primitive that allows parties to jointly compute the set of all common elements between their datasets, without leaking any value outside the intersection. It is a special case of secure multi-party computation (MPC). In this talk, we will introduce PSI and its state of the art. We also propose new protocols for private set intersection (PSI), building upon recent constructions of pseudorandom correlation generators, such as vector-OLE and ring-OLE. Our new constructions improve over the state of the art on several aspects and perform especially well in the setting where the parties have databases with small entries.

This joint work with Geoffroy Couteau was published at PKC 2023. Available here: ia.cr/2022/334

Algorithms and complexity
Tuesday April 18, 2023, 2PM, Salle 3052
Deeksha Adil (ETH Zurich) Fast Algorithms for Regression Problems

Increasing data sizes necessitate fast and efficient algorithms for analyzing them. Regression is one such essential tool that is used widely in computer science. In this talk, I will focus on the “p-norm regression problem”, which is a generalization of the standard “linear regression problem” and captures several important problems including the maximum flow problem on graphs. Historically, obtaining fast, high-accuracy algorithms for this problem has been challenging due to the lack of smoothness and strong convexity of the objective; however, recent breakthroughs have been able to get around these issues. I will present an overview of how these algorithms work and discuss some generalizations of these techniques to other regression problems.

Algorithms and complexity
Wednesday April 12, 2023, 11AM, Salle 3052
Harold Nieuwboer (University of Amsterdam and Ruhr University Bochum) Interior-point methods on manifolds: theory and applications

Interior-point methods offer a highly versatile framework for convex optimization that is effective in theory and practice. A key notion in their theory is that of a self-concordant barrier. We give a suitable generalization of self-concordance to Riemannian manifolds and show that it gives the same structural results and guarantees as in the Euclidean setting, in particular local quadratic convergence of Newton's method. We analyze a path-following method for optimizing compatible objectives over a convex domain for which one has a self-concordant barrier, and obtain the standard complexity guarantees as in the Euclidean setting. We provide general constructions of barriers, and show that on the space of positive-definite matrices and other symmetric spaces, the squared distance to a point is self-concordant. To demonstrate the versatility of our framework, we give algorithms with state-of-the-art complexity guarantees for the general class of scaling and non-commutative optimization problems, which have been of much recent interest, and we provide the first algorithms for efficiently finding high-precision solutions for computing minimal enclosing balls and geometric medians in nonpositive curvature.

Joint work with Hiroshi Hirai and Michael Walter, based on https://arxiv.org/abs/2212.10981 and https://arxiv.org/abs/2303.04771.

Algorithms and complexity
Thursday April 6, 2023, 11AM, Salle 3052
Yuval Ishai (Technion) Cryptography from One-Way Noisy Communication

We consider the traditional goals of cryptography, including secure communication and secure computation, when allowing only one-way communication over noisy channels:

- Can Alice transmit a message to Bob without Eve learning this message?
- Can Alice transmit exactly one of two messages to Bob without Alice learning which message was received?

We show how to circumvent information-theoretic impossibility results by settling for computational security and using the power of cryptographic obfuscation. In particular, under plausible assumptions, we obtain a positive answer to the first question whenever Alice's channel to Bob is not a degradation of the channel to Eve, and a positive answer to the second question over simple channels such as a binary symmetric channel or a binary erasure channel.

The first part of the talk is based on joint work with Alexis Korb, Paul Lou, and Amit Sahai. The second part is based on joint work with Shweta Agrawal, Eyal Kushilevitz, Varun Narayanan, Manoj Prabhakaran, Vinod M. Prabhakaran, and Alon Rosen.

Algorithms and complexity
Monday April 3, 2023, 2PM, Salle 3052
Gabriel Le Bouder (LIP6) Memory-Optimization for Self-Stabilizing Distributed Algorithms

Self-stabilization is a suitable paradigm for distributed systems, which are particularly prone to transient faults. Errors such as memory or message corruption, or the loss of a communication link, can put the system in an inconsistent state. A protocol is self-stabilizing if, whatever the initial state of the system, it is guaranteed to return to a normal behavior in finite time. Algorithms designed for distributed systems face several constraints, asynchrony being one emblematic example. With the development of networks of connected, autonomous devices, it also becomes crucial to design algorithms with low energy consumption that do not require much in terms of resources.

One way to address these problems is to reduce the size of the messages exchanged between the nodes of the network. This thesis focuses on the memory optimization of communication for self-stabilizing distributed algorithms. We will present negative results, proving the impossibility of solving some problems under a certain limit on the size of the exchanged messages, as well as particularly memory-efficient algorithms for one fundamental problem in distributed systems: token circulation.

Joint seminar with Distributed

Algorithms and complexity
Tuesday March 21, 2023, 11AM, Salle 3052
Xavier Coiteux-Roy (TUM) Any physical theory of nature must be boundlessly multipartite nonlocal

The experimental violation of Bell inequalities has supported the rejection of classical mechanics in favour of quantum mechanics. But will we one day replace quantum mechanics by a more complete, *future* physical theory? In this work, we extend Bell's theorem to multipartite scenarios where the parties are constrained by the structure of a network. More precisely, we define and investigate the concept of genuinely multipartite nonlocality (GMNL), which is a property exhibited by some quantum correlations that can be obtained from quantum operations on N-partite quantum states but cannot be explained by convex combinations of local processes on (N-1)-partite states coming from any such *future* theories. We conclude by reporting some recent investigations of GMNL in graph states.

Algorithms and complexity
Tuesday March 14, 2023, 11AM, Salle 3052
Aleksandros Sobczyk (IBM Research Zurich) Approximate Euclidean lengths and distances beyond Johnson-Lindenstrauss

In this presentation we will dive into the core of Randomized Numerical Linear Algebra and present our recent results (Sobczyk and Luisier, NeurIPS 2022) related to the Johnson-Lindenstrauss (JL) lemma and its applications. We revisit a key problem: how to approximate the Euclidean lengths (or distances) of a set of high-dimensional vectors. Can we do better than JL? Based on techniques inspired by the recently published Hutch++ algorithm for trace estimation, we propose algorithms which provably improve on the standard JL bounds, both in theory and in practice. We will describe these algorithms and sketch the proof techniques. These improvements can ultimately lead to more practical and efficient randomized algorithms for scientific computing, especially when high accuracy is required, a known shortcoming of JL-based random projections. We will also discuss how these results can be applied to other problems, namely computing Euclidean distances between high-dimensional data points, estimating the statistical leverage scores of a matrix, and approximating the charge densities in a quantum mechanical system. The talk will conclude with open problems.
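
As a baseline, the standard JL estimator for Euclidean lengths reduces to a random Gaussian sketch; the Python below is a toy illustration with made-up names, not the Hutch++-inspired algorithms of the talk.

```python
import math
import random

def jl_lengths(vectors, m, seed=0):
    """Estimate Euclidean lengths via a Johnson-Lindenstrauss sketch:
    project each n-dimensional vector with a random m x n Gaussian
    matrix scaled by 1/sqrt(m), and return the projected lengths."""
    rng = random.Random(seed)
    n = len(vectors[0])
    G = [[rng.gauss(0.0, 1.0) for _ in range(n)] for _ in range(m)]
    estimates = []
    for v in vectors:
        proj = [sum(g * x for g, x in zip(row, v)) for row in G]
        estimates.append(math.sqrt(sum(p * p for p in proj) / m))
    return estimates

# For a unit vector the estimate concentrates around 1 with relative
# error on the order of 1/sqrt(m); high accuracy therefore needs a
# large sketch, which is the shortcoming better algorithms target.
unit = [1.0] + [0.0] * 63
est = jl_lengths([unit], m=200)[0]
```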

Algorithms and complexity
Tuesday March 14, 2023, 2PM, Salle 3052
Thomas Peters (UCLouvain) Traceable Receipt-Free Encryption

Chosen-ciphertext game-based security (CCA) definitions capture confidentiality by asking an adversary to distinguish between honestly computed encryptions of chosen plaintexts. In the context of voting systems, such guarantees have been shown to be sufficient to prove ballot privacy (Asiacrypt'12). In this presentation, we will see that they fall short when one seeks to obtain receipt-freeness, that is, when corrupted voters who submit chosen ciphertexts encrypting their vote must be prevented from proving how they voted to a third party. Since no known encryption security notion can lead to a receipt-free ballot submission process, we address this challenge by proposing a novel publicly verifiable encryption primitive coined Traceable Receipt-free Encryption (TREnc) and a new notion of traceable CCA security filling the definitional gap outlined above. TREnc is the first primitive allowing one to generically realize a non-interactive receipt-free e-voting system in a flexible plug-and-play model. In addition to this sufficient condition, TREnc is mild enough to design the most efficient e-voting scheme among those achieving similar goals. In other words, modeling TREnc provides a better understanding of the cryptographic techniques that help reach receipt-freeness more easily.

Work published at Asiacrypt'22. Available here: https://ia.cr/2022/822.

Short Bio: Thomas Peters is a Professor and an FNRS research associate at UCLouvain, Belgium. He holds a Master's degree in pure mathematics and earned a Ph.D. in cryptography before joining ENS as a postdoc. His interests range from modeling security and building cryptographic primitives to the design of efficient public- and secret-key encryption schemes, advanced signature schemes with privacy-preserving features, and zero-knowledge proofs.

Algorithms and complexity
Wednesday March 8, 2023, 11AM, Salle 3052
Alexander Koch (IRIF) An Introduction to Card-Based Cryptography

Modern cryptography not only enables you to protect your personal data on the Internet or to authenticate to certain services; it also makes it possible to evaluate a function on the private inputs of multiple parties without anyone learning anything about those inputs. Cryptographic protocols of this type are called “secure multiparty computation” and are suitable for a broad spectrum of applications, such as voting or auctions, where the vote or bid should remain private. To prove the security of such protocols, one needs assumptions that are often complexity-theoretic in nature – for example, that it is difficult to factorize sufficiently large numbers. Security assumptions that are based on *physical principles* exhibit considerable advantages compared to complexity-theoretic assumptions: the protocols are often conceptually simpler, the security is independent of the computational power of an attacker, and the functioning and security are often more transparent to humans. The talk highlights cryptographic protocols for secure multiparty computation that are executed with the help of a deck of *physical playing cards*. The security assumption is that the cards have indistinguishable backs and that certain shuffling actions can be performed securely. One application of these protocols is a didactic method to illustrate cryptography and to allow for secure multiparty computation performed entirely without a computer.

In this area of cryptography, researchers aim to construct protocols using a minimal number of cards – and to prove them optimal in this sense. Depending on the requirements on running time (finite vs. finite only in expectation) and on the practicality of the shuffles used, one can derive different lower bounds for the necessary number of cards. For the structural insights needed for these lower-bound results, I will present “state diagrams”, a graph-based representation of all possible protocol runs, which allows correctness and security to be read directly off the diagram. With this method, proofs of lower bounds on the number of cards become proofs that certain protocol states are unreachable in the associated combinatorial graph structure. Using this, one can even formalize the respective notions of card-based cryptography as a C program and prove the run-minimality of a card-minimal AND protocol using a Software Bounded Model Checking approach. If there is time, I will also briefly cover the case of secure multiparty computation that additionally protects the *function to be computed* for Turing machines.

The talk is meant as an accessible overview talk on the topic.

Algorithms and complexity
Tuesday February 28, 2023, 11AM, Salle 3052
Arne Heimendahl (University of Cologne) Low distortion embeddings of flat tori

Studying embeddings of metric spaces has been demonstrated to be an extremely useful tool with several applications in computer science, most prominently for approximation algorithms. The goal is to embed a “difficult” metric space into another simpler metric space (such as Euclidean space), in a way that approximately preserves distances.

In this talk, I will consider embeddings of flat tori R^n/L, where L is an n-dimensional lattice. I will show how the least distortion embedding of a flat torus can be computed by an infinite-dimensional semidefinite program. Analysing the semidefinite program, I will derive some interesting results about the structure of least distortion embeddings. Additionally, I will discuss (potential) approaches to tackle the closest vector problem, by making use of the embedding of the torus R^n/L.
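For concreteness, the notion of distortion in question is the standard bi-Lipschitz one (a standard definition, not specific to this work): an embedding $f : (X, d_X) \to (Y, d_Y)$ has distortion at most $c \ge 1$ if for some scaling factor $r > 0$,

```latex
r\, d_X(x, x') \;\le\; d_Y\!\big(f(x), f(x')\big) \;\le\; c\, r\, d_X(x, x')
\qquad \text{for all } x, x' \in X,
```

and the least distortion of $X$ into $Y$ is the infimum of such $c$ over all embeddings $f$.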

Based on joint work with Moritz Lücke, Frank Vallentin and Marc Zimmermann.

Algorithms and complexity
Tuesday February 28, 2023, 10AM, Salle 3052
Hamoon Mousavi (Columbia University) Non-commutativity for combinatorial optimization

The Max-Cut problem is the meeting ground for two beautiful results in theoretical computer science: On the one hand we have the Tsirelson’s theorem for XOR games from quantum information and on the other hand we have the Goemans-Williamson’s hyperplane rounding algorithm, a celebrated result in combinatorial optimization. Making this connection explicit allows us to reinterpret the Goemans-Williamson algorithm in terms of operators rather than hyperplanes. We draw a few lessons from this reinterpretation and we see how one might extend this story to other problems in combinatorial optimization. The hope here is to obtain better approximation ratios for combinatorial optimization problems. We also see how one can tell this story purely in terms of non-local games. In the language of non-local games the question we are trying to answer is: Is there a good classical strategy hidden somewhere inside every quantum strategy of a nonlocal game?
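To make the hyperplane side of this connection concrete, here is a minimal sketch of Goemans-Williamson hyperplane rounding, assuming unit vectors (e.g. an SDP solution for Max-Cut) are already given; the function names are illustrative:

```python
import math
import random

def hyperplane_round(vectors, seed=0):
    """Round unit vectors (e.g. a Max-Cut SDP solution) to a +/-1
    assignment: draw a random Gaussian vector r and assign each vertex
    the sign of its inner product with r."""
    rng = random.Random(seed)
    d = len(vectors[0])
    r = [rng.gauss(0.0, 1.0) for _ in range(d)]
    return [1 if sum(vi * ri for vi, ri in zip(v, r)) >= 0 else -1
            for v in vectors]

def cut_value(edges, assignment):
    """Number of edges crossing the cut defined by a +/-1 assignment."""
    return sum(1 for u, v in edges if assignment[u] != assignment[v])
```

For a triangle embedded as three unit vectors at 120° apart, any random hyperplane separates one vertex from the other two, so the rounded cut always has value 2 out of 3 edges.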

Algorithms and complexity
Wednesday February 22, 2023, 11AM, Salle 3052
Omri Weinstein (Columbia University) Approximate Matrix Multiplication via Spherical Convolutions

We introduce a new bilinear operation, which we call Polyform, for fast approximate matrix multiplication, through sums of carefully weighted sparse polynomial multiplications. This algorithmic framework leads to several (parametrized) improvements over known worst-case speed-vs-accuracy tradeoffs for approximate matrix multiplication. It also yields a cheap practical alternative to matrix multiplication in deep neural networks (DNNs), which is the main bottleneck in large-scale training and inference.

This algorithm involves unexpected connections to additive combinatorics, sparse Fourier transforms, and spherical harmonics. In particular, optimizing the polynomial's coefficients leads to a (non-convex) low-rank SDP problem generalizing Spherical Codes. Meanwhile, our experiments demonstrate that, when using SGD to optimize these coefficients in DNNs, Polyform provides a major speedup on state-of-the-art DL models with minor accuracy loss. This suggests replacing matrix multiplication with (variants of) polynomial multiplication in large-scale deep neural networks.

Algorithms and complexity
Tuesday February 21, 2023, 11AM, Salle 3052
Pihla Karanko (Aalto University) Introduction to garbling (with penguin illustrations!)

Penguin A, who studies machine learning, has come up with an amazing circuit that takes as input some parameters, such as how thick your feathers are, and outputs whether you have a heightened risk of cold sensitivity or fish allergy. Penguin B would like to use the circuit, but does not want to reveal their sensitive medical data to A. On the other hand, the circuit contains sensitive medical information about a few other penguins, so A cannot simply publish the algorithm either.

Garbling to the rescue!

Garbling is a cryptographic technique that allows B to compute the output of A's circuit using B's data, without B learning anything about the circuit (other than the output) and without A learning B's data. In this talk, I show how A can “encrypt” each gate in the circuit and thus obtain a garbled circuit that B can evaluate, without learning what the gates are.

Additionally, I talk about the limitations on what types of circuits we can garble; in particular, what kind of security one can achieve when garbling cryptographic circuits.
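As a rough illustration of the "encrypt each gate" idea from the talk (a toy sketch only: SHA-256 stands in for encryption, there is no point-and-permute or any other optimization, and it is not secure as-is), a single AND gate can be garbled like this:

```python
import hashlib
import os
import random

def _pad(ka, kb):
    # Hash of the two wire labels, used as a one-time pad ("encryption").
    return hashlib.sha256(ka + kb).digest()

def garble_and():
    """Garble one AND gate: a random 16-byte label per wire per bit value,
    and a shuffled table of the four encrypted output bits."""
    labels = {w: (os.urandom(16), os.urandom(16)) for w in "ab"}
    table = []
    for a in (0, 1):
        for b in (0, 1):
            msg = bytes([a & b]) + b"\x00" * 31  # output bit + redundancy
            pad = _pad(labels["a"][a], labels["b"][b])
            table.append(bytes(x ^ y for x, y in zip(pad, msg)))
    random.shuffle(table)  # hide which row corresponds to which inputs
    return table, labels

def evaluate(table, ka, kb):
    """Given one label per input wire, find and decrypt the matching row
    without learning the other labels or the remaining table entries."""
    pad = _pad(ka, kb)
    for row in table:
        msg = bytes(x ^ y for x, y in zip(pad, row))
        if msg[1:] == b"\x00" * 31:  # redundancy identifies the right row
            return msg[0]
    raise ValueError("no row decrypted")
```

The evaluator holds exactly one label per wire, so only one of the four rows decrypts to a well-formed plaintext; the other rows look random.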

Algorithms and complexity
Wednesday February 15, 2023, 11AM, Salle 3052
Ansis Rosmanis (Nagoya University) Non-trivial quantum lower bound for 3-coloring the ring using non-signaling arguments

In the LOCAL model of distributed computing, where in a single round of communication each node can send each of its neighbors a message of arbitrary size, it is known that, classically, the round complexity of 3-coloring an n-node ring is Theta(log^∗ n). In the case where the communication is quantum, only trivial bounds were previously known: at least some communication must take place.

In a recent joint work with François Le Gall, we showed the first non-trivial quantum lower bound for 3-coloring by using the observation that the probabilistic output of the network must be non-signaling, thus effectively sidestepping the difficulty of characterizing the power of quantum entanglement.

In this talk, I will present our results, observations that led to them, and potential directions for future work.

Algorithms and complexity
Tuesday February 14, 2023, 11AM, Salle 3052
Benoît Valiron (LMF, CentraleSupélec) Complete Equational Theories for Linear Optical and Quantum Circuits

In this talk, we introduce two complete equational theories: one for linear optical circuits and one for quantum circuits. More precisely, in both cases we introduce a set of circuit equations that we prove to be sound and complete: two circuits represent the same quantum evolution if and only if they can be transformed one into the other using the equations.

We shall first present our proposed equational theory for linear optical circuits. We shall then discuss how the semantics of the two kinds of circuits differ, and how to relate them using controlled gates. This discussion will lead us to the completeness result for quantum circuits, which builds on the one for linear optical circuits.

Joint seminar with PPS

Algorithms and complexity
Tuesday February 7, 2023, 11AM, Salle 3052
Pierre Meyer (IRIF / Reichman University) On Low-End Obfuscation and Learning

Informally, obfuscation aims to compile programmes into unintelligible ones while preserving functionality. As a first approximation, this can be seen as trying to convert a programme to a “black-box representation”, which cannot be used for anything beyond obtaining its input/output behaviour. Learning, on the other hand, aims to generate the representation of a function given oracle access to it. In this talk, we will investigate new links between the two fields.

No specialised knowledge of cryptography or computational complexity is required, and the talk is aimed at a general TCS audience. For completeness however, we provide below a more technical description of our results.

Most recent works on cryptographic obfuscation focus on the high-end regime of obfuscating general circuits while guaranteeing computational indistinguishability between functionally equivalent circuits. Motivated by the goals of simplicity and efficiency, we initiate a systematic study of “low-end” obfuscation, focusing on simpler representation models and information-theoretic notions of security. We obtain the following results. 1) Positive results via “white-box” learning. We present a general technique for obtaining perfect indistinguishability obfuscation from exact learning algorithms that are given restricted access to the representation of the input function. We demonstrate the usefulness of this approach by obtaining simple obfuscation for decision trees and multilinear read-k arithmetic formulas. 2) Negative results via PAC learning. A proper obfuscation scheme obfuscates programs from a class C by programs from the same class. Assuming the existence of one-way functions, we show that there is no proper indistinguishability obfuscation scheme for k-CNF formulas for any constant k ≥ 3; in fact, even obfuscating 3-CNF by k-CNF is impossible. This result applies even to computationally secure obfuscation, and makes an unexpected use of PAC learning in the context of negative results for obfuscation. 3) Separations. We study the relations between different information-theoretic notions of indistinguishability obfuscation, giving cryptographic evidence for separations between them.

This is joint work with Elette Boyle, Yuval Ishai, Robert Robere, and Gal Yehuda, which was presented at ITCS 2023.

Algorithms and complexity
Wednesday February 1, 2023, 2PM, Salle 3052
Vincent Cohen-Addad (Google Research) Recent Progress on Correlation Clustering: The Power of Sherali-Adams and New Practical Insights

Correlation clustering is a classic model for clustering with several applications in data mining and unsupervised machine learning. In this problem, we are given a complete graph where each edge is labeled + or -; and the goal is to find a partition of the vertex set so as to minimize the number of + edges across the parts of the partition plus the number of - edges within the parts of the partition.

We will first present a new result showing that a constant number of rounds of the Sherali-Adams hierarchy yields a strict improvement over the natural LP: we present the first better-than-two approximation for the problem, improving upon the previous approximation of 2.06 of Chawla, Makarychev, Schramm, Yaroslavtsev based on rounding the natural LP (which is known to have an integrality gap of 2). We will then review several recent ideas which have led to practical constant factor approximations to Correlation Clustering in various setups: distributed and parallel environments, differentially-private algorithms, dynamic algorithms, or sublinear time algorithms.
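For reference, the disagreement objective described above can be computed directly; here is a brute-force sketch (with illustrative names) over the complete signed graph:

```python
def cc_cost(n, plus_edges, partition):
    """Correlation-clustering disagreements on a complete signed graph
    on vertices 0..n-1: '+' edges across parts plus '-' edges inside
    parts. Edges not listed in plus_edges are treated as '-'."""
    part = {v: i for i, cluster in enumerate(partition) for v in cluster}
    plus = {frozenset(e) for e in plus_edges}
    cost = 0
    for u in range(n):
        for v in range(u + 1, n):
            same = part[u] == part[v]
            is_plus = frozenset((u, v)) in plus
            cost += is_plus != same  # disagreement iff sign and placement clash
    return cost
```

For example, on a path of two + edges 0-1-2, both the single-cluster partition (one - disagreement inside) and the partition {0,1},{2} (one + edge cut) cost 1.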

This is a combination of joint works with Euiwoong Lee, Alantha Newman, Silvio Lattanzi, Slobodan Mitrovic, Ashkan Norouzi-Fard, Nikos Parotsidis, Jakub Tarnawski, Chenglin Fan and Andreas Maggiori.

Algorithms and complexity
Monday January 30, 2023, 11AM, Salle 3052
Samson Wang (Imperial College London) Qubit-Efficient Randomized Quantum Algorithms for Linear Algebra

We propose a class of randomized quantum algorithms for the task of sampling from matrix functions, without the use of quantum block encodings or any other coherent oracle access to the matrix elements. As such, our use of qubits is purely algorithmic, and no additional qubits are required for quantum data structures. For NxN matrices, the space cost is log(N)+2 qubits and depending on the structure of the matrices, the gate complexity can be comparable to state-of-the-art methods that use quantum data structures of up to size O(N^2), when considering equivalent end-to-end problems. Within our framework, we present a quantum linear system solver that allows one to sample properties of the solution vector, as well as an algorithm for sampling properties of ground states of Hamiltonians. As a concrete application, we combine these two sub-routines to present a scheme for calculating Green's functions of quantum many-body systems.

Algorithms and complexity
Wednesday January 25, 2023, 10AM, Salle 3052
Francisco Escudero Gutiérrez (CWI, Amsterdam) Aaronson and Ambainis conjecture: a route towards the need for structure in quantum speedups

Query complexity is a model of computation that allows us to understand algorithms, both quantum and classical. In this setting, we can prove rigorous upper and lower bounds, so we can make comparisons between the quantum and the classical world. We will focus on the case of Boolean functions, $f: S\subseteq \{-1,1\}^n \to \{-1,1\}$. If f is total ($S=\{-1,1\}^n$), it is known that the quantum query complexity and the classical one are polynomially related. It is also known that for very structured problems (partial functions, with $|S|\ll 2^n$) these two quantities can be exponentially separated (see for instance period finding). A folklore conjecture dating from the 90's asserts that a lot of structure is indeed needed to attain these superpolynomial quantum advantages. Aaronson and Ambainis suggested a way of substantiating this belief in 2009, when they conjectured that every bounded polynomial of bounded degree has a very influential variable. This, alongside the fact that quantum query algorithms output bounded polynomials of bounded degree, would imply that quantum query algorithms can be approximated by classical ones almost everywhere with only a polynomial overhead in the query complexity. The community has made a few attempts to prove this conjecture, but it is still open and is only known to hold in some particular cases.

Nevertheless, it might not be necessary to prove the exact statement proposed by Aaronson and Ambainis to show that quantum speedups need structure. This is because in 2017 Arunachalam, Briet and Palazuelos showed that the outputs of quantum query algorithms are not only bounded, but also completely bounded. Given that being completely bounded is much more restrictive than being bounded, it might be easier to prove the Aaronson and Ambainis conjecture for this smaller family of polynomials. We explore this line of research and prove a new particular case of the Aaronson and Ambainis conjecture. To do so, we use the technique used by Varopoulos to rule out a trilinear version of von Neumann's inequality.

Algorithms and complexity
Wednesday January 25, 2023, 11AM, Salle 3052
Anupa Sunny (IRIF) Certificate Games

In this talk, we introduce Certificate Game complexity, a new measure of complexity based on the probability of winning a game in which two players are given inputs with different function values and are asked to output some index i where their inputs differ, in a zero-communication setting. We study four variants of it, namely those with private coin, public coin, shared entanglement and non-signaling strategies, and place them in the landscape of known complexity measures.

We show that the public coin model is bounded above by randomized query and certificate complexities, while the non-signaling model, which is the weakest model we consider, is bounded below by fractional certificate complexity. The quantum measure highlights an interesting and surprising difference between classical and quantum query models. While public coin certificate game complexity is bounded above by randomized query complexity, quantum certificate game complexity can be quadratically larger than quantum query complexity.

This is a joint work with Sourav Chakraborty, Anna Gal, Sophie Laplante and Rajat Mittal.

Algorithms and complexity
Tuesday January 24, 2023, 11AM, Salle 3052
Sebastian Zur (CWI, Amsterdam) Multidimensional Quantum Walks

While quantum walk frameworks make it easy to design quantum algorithms, as evidenced by their wide application across domains, the major drawback is that they can achieve at most a quadratic speedup over the best classical algorithm.

In this work, we generalise the electric network framework – the most general of quantum walk frameworks – into a new framework that we call the multidimensional quantum walk framework, which no longer suffers from the aforementioned drawback, while still maintaining the original classical walk intuition. We then apply this framework to the k-distinctness problem, to obtain a time complexity that matches the currently best known query complexity by Belovs (2012), as well as to the welded trees problem, where we can solve the problem in a linear number of queries.

This talk will focus on the application to welded trees.

Joint work with Stacey Jeffery (2208.13492)

Algorithms and complexity
Wednesday January 18, 2023, 11AM, Salle 3052
Nicholas Spooner (University of Warwick) How to Rewind a Quantum Computer

Classical computations are inherently “many-time”: given a device that performs a computation on a provided input, we can always (at least in principle) run the computation on many different inputs by rewinding the device, returning it to its initial state. This principle does not apply to quantum computations: a quantum computation can irreversibly lose information, and so cannot in general be rewound. This causes serious problems for the analysis of interactive cryptographic protocols in the quantum setting: the classical security reduction typically relies on the ability to rewind the adversary in order to record its responses across different interactions.

In this talk, I will present a new, general quantum rewinding technique that transforms a “one-time” quantum computation into a “many-time” computation. This opens the door to quantum security for many tasks. One key application is to prove that Kilian’s four-message succinct argument system for NP is secure against quantum attack (assuming the post-quantum hardness of the Learning with Errors problem).

Based on joint work with Alessandro Chiesa, Fermi Ma, Alex Lombardi, and Mark Zhandry.

online talk

Algorithms and complexity
Tuesday January 17, 2023, 2PM, Salle 3052
Krystal Guo (KdVI, University of Amsterdam) Algebraic graph theory and quantum walks

The interplay between the properties of graphs and the eigenvalues of their adjacency matrices is well-studied. Important graph invariants, such as diameter and chromatic number, can be understood using these eigenvalue techniques. In this talk, we bring these classical techniques in algebraic graph theory to the study of quantum walks.

A system of interacting quantum qubits can be modelled by a quantum process on an underlying graph and is, in some sense, a quantum analogue of a random walk. This gives rise to a rich connection between graph theory, linear algebra and quantum computing. In this talk, I will give an overview of applications of algebraic graph theory in quantum walks, as well as various recent results.

In a recent paper with V. Schmeits, we show that the evolution of a general discrete-time quantum walk that consists of two reflections satisfies a Chebyshev recurrence, under a projection. We apply this to a model of discrete-time quantum walk proposed by H. Zhan [J. Algebraic Combin. 53(4):1187-1213, 2020], whose transition matrix is given by two reflections, using the face and vertex incidence relations of a graph embedded in an orientable surface. For the vertex-face walk, we prove theorems about perfect state transfer and periodicity and give infinite families of examples where these occur.

Joint seminar with Graphs

Algorithms and complexity
Tuesday January 17, 2023, 11AM, Salle 3052
Tamara Kohler (mathQI, Universidad Complutense de Madrid) Clique Homology is QMA1-hard

We tackle the long-standing question of the computational complexity of determining homology groups of simplicial complexes, a fundamental task in computational topology, posed by Kaibel and Pfetsch 20 years ago. We show that this decision problem is QMA1-hard. Moreover, we show that a version of the problem satisfying a suitable promise and certain constraints is contained in QMA. This suggests that the seemingly classical problem may in fact be quantum mechanical. In fact, we are able to significantly strengthen this by showing that the problem remains QMA1-hard in the case of clique complexes, a family of simplicial complexes specified by a graph which is relevant to the problem of topological data analysis. The proof combines a number of techniques from Hamiltonian complexity and homological algebra. We discuss potential implications for the problem of quantum advantage in topological data analysis.

Algorithms and complexity
Tuesday January 10, 2023, 11AM, Salle 3052
Christoph Egger (IRIF) Separating Distributional Collision Resistance from (Plain) Collision Resistance

In this talk we investigate the relation between Collision Resistant Hash Functions and their distributional relaxation. Loosely speaking, a shrinking function is called collision resistant if no probabilistic polynomial-time adversary can produce two inputs that map to the same value, while it is (only) distributionally collision resistant if such efficient adversaries cannot produce collisions that are (close to) uniformly distributed.

While hash functions are commonly considered to be part of symmetric cryptography, their existence cannot be based on one-way functions in a black box manner (Simon'98), a result that directly extends to the distributional relaxation. Yet, we are able to show that distributional collision resistance cannot be boosted to (full) collision resistance. Formally, we show the existence of an oracle relative to which distributional collision resistant hash functions exist but (full) collision resistant hash functions do not.

Algorithms and complexity
Tuesday January 10, 2023, 10AM, Salle 3052
Sacha Servan-Schreiber (MIT) Private Access Control for Function Secret Sharing

Function Secret Sharing (FSS) allows a dealer to share a function f with two or more evaluators. Given secret shares of a function f, the evaluators can locally compute secret shares of f(x) on an input x, without learning information about f.
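The correctness property described above can be illustrated with a deliberately naive two-party scheme for point functions, which additively shares the entire truth table. Real DPF constructions are succinct, so this O(N)-size sketch (with illustrative names) only shows what the shares achieve, not how efficient FSS works:

```python
import os

def share_point_function(alpha, beta, domain_size):
    """Share the point function f(alpha) = beta, f(x) = 0 otherwise,
    by additively sharing its whole truth table modulo 256. Each key
    on its own is a uniformly random table and reveals nothing about f."""
    key0 = list(os.urandom(domain_size))
    key1 = [((beta if x == alpha else 0) - key0[x]) % 256
            for x in range(domain_size)]
    return key0, key1

def eval_share(key, x):
    # Each evaluator outputs its share of f(x); the two shares sum to
    # f(x) modulo 256.
    return key[x]
```

Each party sees only random-looking values, yet the sum of the two local evaluations reconstructs f(x) at every point.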

In this talk, I will present a model and constructions for access control in FSS. We consider a setting where evaluators need to ensure that the dealer is authorized to share the provided function. We show how, for a function family F and an access control list defined over the family, the evaluators receiving the shares of f (from the family F) can efficiently check that the dealer knows the corresponding access key in the access control list.

We show that this model enables new applications of FSS, such as: – anonymous authentication in a multi-party setting, – access control in private databases, and – authentication and spam prevention in anonymous communication systems.

Our definitions and constructions abstract and improve the concrete efficiency of several recent systems that implement ad-hoc mechanisms for access control over FSS.

Speaker bio: Sacha is a 4th year PhD student at MIT CSAIL working on privacy-preserving systems and applied cryptography.


Year 2022

Algorithms and complexity
Wednesday December 14, 2022, 11AM, Salle 3052
Fabien Dufoulon (University of Houston) Sleeping is Superefficient: MIS in Exponentially Better Awake Complexity

Maximal Independent Set (MIS) is one of the fundamental and most well-studied problems in distributed graph algorithms. Even after four decades of intensive research, the best-known (randomized) MIS algorithms have O(log n) round complexity on general graphs [Luby, STOC 1986] (where n is the number of nodes), while the best-known lower bound is Ω(√(log(n)/log(log n))) [Kuhn, Moscibroda, and Wattenhofer, JACM 2016]. Breaking past the O(log n) bound or showing stronger lower bounds have been longstanding open problems.

Energy is a premium resource in battery-powered wireless and sensor networks, and the bulk of it is used by nodes when they are awake, i.e., when they are sending, receiving, and even just listening for messages. On the other hand, when a node is sleeping, it does not perform any communication and thus spends very little energy. Several recent works have addressed the problem of designing energy-efficient distributed algorithms for various fundamental problems. These algorithms operate by minimizing the number of rounds in which any node is awake, also called the awake complexity. An intriguing question is whether one can design a distributed MIS algorithm that is significantly more energy-efficient (having very small awake complexity) compared to existing algorithms.

Our main contribution is to show that MIS can be computed in awake complexity that is exponentially better compared to the best known round complexity of O(log n) and also bypassing its fundamental Ω(√(log(n)/log(log n))) round complexity lower bound (almost) exponentially. Specifically, we show that MIS can be computed by a randomized distributed (Monte Carlo) algorithm in O(log log n) awake complexity with high probability. Since a node spends significant energy only in its awake rounds, our result shows that MIS can be computed in a highly energy-efficient way compared to the existing MIS algorithms.
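For context, the classical O(log n)-round baseline mentioned above is Luby's algorithm; here is a sequential simulation sketch (illustrative only, and not the paper's new energy-efficient algorithm):

```python
import random

def luby_mis(adj, seed=0):
    """Luby-style randomized MIS, simulated sequentially: in each round,
    every live node draws a random value; local minima join the MIS and
    are removed together with their neighbours. W.h.p. this finishes in
    O(log n) rounds."""
    rng = random.Random(seed)
    live = set(adj)
    mis = set()
    while live:
        r = {v: rng.random() for v in live}
        winners = {v for v in live
                   if all(r[v] < r[u] for u in adj[v] if u in live)}
        mis |= winners
        live -= winners | {u for v in winners for u in adj[v]}
    return mis
```

Every round removes at least the node holding the global minimum value, so the loop always terminates, and the output is independent (two adjacent nodes cannot both be local minima) and maximal (a node is removed only when it or a neighbour joins).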

Joint seminar with Distributed

Algorithms and complexity
Wednesday December 14, 2022, 2PM, Salle 3052
Hang Zhou (École Polytechnique) Unsplittable Euclidean Capacitated Vehicle Routing

In the unsplittable capacitated vehicle routing problem, we are given a metric space with a vertex called depot and a set of vertices called terminals. Each terminal is associated with a positive demand between 0 and 1. The goal is to find a minimum length collection of tours starting and ending at the depot such that the demand of each terminal is covered by a single tour (i.e., the demand cannot be split), and the total demand of the terminals in each tour does not exceed the capacity of 1.

Our main result is a polynomial-time $(2+\epsilon)$-approximation algorithm for this problem in the two-dimensional Euclidean plane, i.e., for the special case where the terminals and the depot are associated with points in the Euclidean plane and their distances are defined accordingly. This improves on recent work by Blauth, Traub, and Vygen [IPCO'21] and Friggstad, Mousavi, Rahgoshay, and Salavatipour [IPCO'22].

This is joint work with Claire Mathieu and Fabrizio Grandoni.

Algorithms and complexity
Tuesday December 6, 2022, 11AM, Salle 1004
Changpeng Shao (University of Bristol) Quantum communication complexity of linear regression

In this talk, I will introduce a recent work with Ashley Montanaro. I will show that quantum computers can have exponential speedups in terms of communication complexity for some fundamental linear algebra problems, focusing mainly on our results on solving linear regression. The talk is based on arXiv:2210.01601.

online talk

Algorithms and complexity
Wednesday November 30, 2022, 4PM, Salle 3052
Ce Jin (MIT) Quantum Speed-ups for String Synchronizing Sets, Longest Common Substring, and k-mismatch Matching

Longest Common Substring (LCS) is an important text processing problem, which has recently been investigated in the quantum query model. The decisional version of this problem, LCS with threshold d, asks whether two length-n input strings have a common substring of length d. The two extreme cases, d = 1 and d = n, correspond respectively to Element Distinctness and Unstructured Search, two fundamental problems in quantum query complexity. However, the intermediate case 1 ≪ d ≪ n was not fully understood.

We show that the complexity of LCS with threshold d smoothly interpolates between the two extreme cases up to n^{o(1)} factors: LCS with threshold d has a quantum algorithm in n^{2/3 + o(1)}/d^{1/6} query complexity and time complexity, and requires at least Omega(n^{2/3}/d^{1/6}) quantum query complexity.

Our result improves upon previous upper bounds Õ(min{ n/d^{1/2}, n^{2/3} }) (Le Gall and Seddighin ITCS 2022, Akmal and Jin SODA 2022), and answers an open question of Akmal and Jin.

Our main technical contribution is a quantum speed-up of the powerful String Synchronizing Set technique introduced by Kempa and Kociumaka (STOC 2019). Our quantum string synchronizing set also yields a near-optimal LCE data structure in the quantum setting, and an algorithm for the k-mismatch matching problem with k^{3/4}n^{1/2+o(1)} quantum query complexity.

Based on a SODA'23 paper joint with Jakob Nogler (ETH Zurich) and a SODA'22 paper joint with Shyan Akmal (MIT).

Online talk

Algorithms and complexity
Wednesday November 30, 2022, 11AM, Salle 3052
Chris Brzuska (Aalto University) Obfuscation: Implications in Complexity and Cryptography

Obfuscation transforms a program C into a functionally equivalent program C' which hides the inner workings of C. The first formalizations of this hiding property in cryptography suffered from strong impossibility results, and thus the seemingly weak notion of indistinguishability obfuscation (iO) was formulated. It only requires that the obfuscations of two circuits be indistinguishable when the two circuits have equal functionality to begin with.

Although iO seems to be such a weak definition, it turns out to be surprisingly powerful (*). It allows us to transform worst-case NP problems into one-way functions, one-way functions into public-key encryption, and even into fully homomorphic encryption, transformations which otherwise suffer from (black-box) separation results. In fact, iO is so strong that its existence is mutually exclusive with other assumptions which had previously been considered plausible.

At the same time, iO is also very weak; it does not even imply that P is different from NP although most cryptographic primitives (one-way functions, public-key encryption etc.) do.

In this talk, we explore the above conceptual implications. The talk is intended for a (broad) theory audience.

(*) “I must admit that I was very skeptic of the possible applicability of the notion of indistinguishable obfuscation.” Oded Goldreich, commenting on the surprising success of iO in cryptographic research.

Algorithms and complexity
Tuesday November 22, 2022, 11AM, Salle 3052
David Saulpic (University of Vienna) Embedding Euclidean Spaces into Trees for Clustering: Scalable Differentially Private Clustering

We study the private k-median and k-means clustering problems in d-dimensional Euclidean space. By leveraging tree embeddings, we give an efficient and easy-to-implement algorithm that is empirically competitive with state-of-the-art non-private methods. We prove that our method computes a solution with cost at most $O(d^{3/2}\log n)\cdot OPT + O(k d^2 \log^2 n / \epsilon^2)$, where $\epsilon$ is the privacy guarantee. (The dimension term $d$ can be replaced with $O(\log k)$ using standard dimension-reduction techniques.)

Although the worst-case guarantee is worse than that of state-of-the-art private clustering methods, the algorithm we propose is practical, runs in near-linear, $\tilde{O}(nkd)$, time, and scales to tens of millions of points. We also show that our method is amenable to parallelization in large-scale distributed computing environments. In particular, we show that our private algorithms can be implemented in a logarithmic number of MPC rounds in the sublinear-memory regime. Finally, we complement our theoretical analysis with an empirical evaluation demonstrating the algorithm's efficiency and accuracy in comparison to other private clustering baselines.

Joint work with Vincent Cohen-Addad, Alessandro Epasto, Silvio Lattanzi, Vahab Mirrokni, Andres Munoz Medina, Chris Schwiegelshohn and Sergei Vassilvitskii.

Algorithms and complexity
Wednesday November 16, 2022, 2PM, Salle 3052
Daniel Vaz (DIENS/IRIF) Good and Efficient Approximation for the Sparsest Cut Problem in Bounded-Treewidth Graphs

Sparsest cut is a fundamental graph problem, which models a general notion of a balanced separator of a graph and has uses in graph theory and divide-and-conquer approaches. We are given an edge-capacitated graph, together with demands over pairs of vertices, and we want to find a cut that minimizes the ratio between the capacity and the demand across the cut. In other words, we aim to separate as much demand as possible using as little cut capacity as possible. For general graphs, the best known approximation factor is Õ(sqrt(log n)), and the problem admits no constant-factor approximation under the Unique Games Conjecture.

In this talk, we focus on the simpler setting of bounded-treewidth graphs, and present new constant-factor approximation algorithms. Known algorithms in this setting are either inefficient or have large approximation factors. We will see that these algorithms can be framed as specific cases of a general algorithm with uses beyond sparsest cut, and then show how to combine the best of both approaches to obtain efficient and good approximations to the problem. As a result, we give the first constant-factor approximation algorithm running in FPT time.
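
To make the objective concrete, here is a brute-force Python sketch (hypothetical names, exponential time, for tiny instances only) of the sparsest cut objective described above: minimize, over all cuts, the ratio of edge capacity crossing the cut to demand separated by it.

```python
from itertools import combinations

def sparsity(side, capacity, demand):
    """Ratio of the capacity crossing the cut (side, rest) to the demand
    it separates; capacity and demand map unordered pairs to weights."""
    def crossing(pairs):
        return sum(w for (u, v), w in pairs.items()
                   if (u in side) != (v in side))
    cap, dem = crossing(capacity), crossing(demand)
    return cap / dem if dem else float("inf")

def sparsest_cut(vertices, capacity, demand):
    """Exhaustive search over all cuts; for illustration only."""
    return min(sparsity(set(side), capacity, demand)
               for k in range(1, len(vertices))
               for side in combinations(vertices, k))

# Path a-b-c with unit capacities and one unit of demand between a and c:
caps = {("a", "b"): 1, ("b", "c"): 1}
dems = {("a", "c"): 1}
print(sparsest_cut(["a", "b", "c"], caps, dems))  # 1.0
```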

Algorithms and complexity
Tuesday November 15, 2022, 11AM, Salle 3052
Arnaud De Mesmay (LIGM) Fitting Metrics and Ultrametrics with Minimum Disagreements

Given a matrix recording pairwise distances, the Metric Violation Distance problem asks to modify the minimum number of entries of the matrix to make it into a metric. Due to its large number of applications in various data analysis and optimization tasks, this problem has been actively studied recently.

We will explain the various connections of this problem to other more well-known problems, and then present an O(log n)-approximation algorithm, exponentially improving the previous best approximation ratio of O(OPT^{1/3}). A major strength of this algorithm is also its simplicity and running time. We will also investigate the closely related problem of Ultrametric Violation Distance, where one aims at modifying the minimum number of entries so as to obtain an ultrametric, and present an O(1)-approximation algorithm for this problem by leveraging its close connections with a hierarchical instance of correlation clustering. If time allows, we will conclude with lower bounds for the maximization versions of both problems based on the Unique Games Conjecture.

Joint work with Vincent Cohen-Addad, Chenglin Fan and Euiwoong Lee.

Algorithms and complexity
Tuesday November 8, 2022, 11AM, Salle 2017
Dimitris Achlioptas (University of Athens) The Lovász Local Lemma as Approximate Dynamic Programming

Imagine facing n switches hooked up to an array of lightbulbs. Each lightbulb is connected to only a handful of switches and turns on under many, but not all, configurations of the switches to which it is connected. The satisfiability question asks whether it is possible to set the switches so that all lightbulbs are on. Naturally, whenever the answer is positive, flipping a coin for each switch will cause all lightbulbs to light up with positive probability. At the heart of the Probabilistic Method is the observation that any multiplicative underestimation of this probability, no matter how poor, suffices to prove that the answer is positive. The Lovász Local Lemma (LLL), a cornerstone of the Probabilistic Method, is a tool for establishing positive probability lower bounds in the face of conflicting correlations. In this talk, I will present a new, computational view of the LLL which leads to significant generalizations, including a passage from the setting of probabilities to that of general supermodular functions.

Algorithms and complexity
Wednesday October 5, 2022, 4:30PM, Salle 3052
Daniel Szilagyi & Brandon Augustino Seminar Paris-Lehigh on Quantum Interior Point Methods

Algorithms and complexity
Friday September 30, 2022, 2PM, Salle 1007
Victor Verdugo (Universidad de O'Higgins, Chile) Multidimensional Apportionment Through Discrepancy Theory

Deciding how to allocate the seats of a house of representatives is one of the most fundamental problems in the political organization of societies, and has been widely studied for over two centuries. The idea of proportionality is at the core of most approaches to tackle this problem, and this notion is captured by the divisor methods, such as the Jefferson/D'Hondt method. In a seminal work, Balinski and Demange extended the single-dimensional idea of divisor methods to the setting in which the seat allocation is simultaneously determined by two dimensions, and proposed the so-called biproportional apportionment method. The method, currently used in several electoral systems, is however limited to two dimensions, and extending it is considered to be an important problem both in theory and in practice. In this work, we initiate the study of multidimensional proportional apportionment. We first formalize a notion of multidimensional proportionality that naturally extends that of Balinski and Demange. By analyzing an appropriate integer linear program, we prove that, in contrast to the two-dimensional case, the existence of multidimensional proportional apportionments is not guaranteed, and deciding their existence is NP-complete. Interestingly, our main result asserts that it is possible to find approximate multidimensional proportional apportionments that deviate from the marginals by a small amount. The proof arises through the lens of discrepancy theory, mainly inspired by the celebrated Beck-Fiala Theorem. This is joint work with José Correa, Javier Cembrano and Gonzalo Diaz (PNAS 2022, EC 2021 and EAAMO 2021).

Algorithms and complexity
Wednesday July 20, 2022, 11AM, Salle 3052
Maud Szusterman (IRIF) Compact formulations: how to deduce a linear extension from an algorithm?

When optimizing a linear functional over a polytope P, one may prefer to pass to a linear extension whose number of facets is significantly smaller. Well-known examples of such polytopes are the spanning tree polytope and the permutahedron. This has motivated the search for so-called compact formulations of polytopes (symmetric or not). Sometimes they cannot exist [Yannakakis, Rothvoss]. While a non-negative factorization of the slack matrix of P implicitly gives rise to a linear extension Q, one may sometimes describe Q directly, building on an algorithm which unfolds all symmetries of P. This was achieved for the permutahedron by Goemans, and his construction was then generalized by Kaibel and Pashkovich. The aim of this talk is to introduce the main notions around extension complexity, and then to explain the construction by Goemans, after a gentle warm-up with regular n-gons in the plane.
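
As a small illustration of the central object (a sketch with hypothetical names, not from the talk): the slack matrix of a polytope records, for each facet inequality and each vertex, how much slack the inequality has at that vertex. By Yannakakis's theorem, a rank-r non-negative factorization of this matrix corresponds to a linear extension with r facets.

```python
def slack_matrix(facets, vertices):
    """S[i][j] = b_i - <a_i, v_j> for facet inequalities <a_i, x> <= b_i;
    every entry is non-negative since each vertex satisfies each facet."""
    return [[b - sum(ai * vj for ai, vj in zip(a, v)) for v in vertices]
            for (a, b) in facets]

# The unit square: -x <= 0, -y <= 0, x <= 1, y <= 1.
facets = [((-1, 0), 0), ((0, -1), 0), ((1, 0), 1), ((0, 1), 1)]
vertices = [(0, 0), (1, 0), (1, 1), (0, 1)]
for row in slack_matrix(facets, vertices):
    print(row)
# [0, 1, 1, 0]
# [0, 0, 1, 1]
# [1, 0, 0, 1]
# [1, 1, 0, 0]
```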

Algorithms and complexity
Tuesday July 12, 2022, 10AM, Salle 3052
Seeun William Umboh (The University of Sydney) Online Weighted Cardinality Joint Replenishment Problem with Delay

We study a generalization of the classic Online Joint Replenishment Problem (JRP) with Delays that we call the Online Weighted Cardinality JRP with Delays. The JRP is an extensively studied inventory management problem wherein requests for different item types arrive at various points in time. A request is served by ordering its corresponding item type. The cost of serving a set of requests depends on the item types ordered. Furthermore, each request incurs a delay penalty while it is left unserved. The objective is to minimise the total service and delay costs. In the Weighted Cardinality JRP, each item type has a positive weight and the cost of ordering is a non-decreasing, concave function of the total weight of the item types ordered. This problem was first considered in the offline setting by Cheung et al.~(2015) but nothing is known in the online setting. Our main result is a deterministic, constant competitive algorithm for this problem.

Algorithms and complexity
Tuesday July 12, 2022, 11AM, Salle 3052
Kheeran Naidu (University of Bristol) Space Optimal Vertex Cover in Dynamic Streams (with an overview of graph streaming)

With the world as connected as ever and data being created at an unprecedented rate, there is an increasing need for algorithms that efficiently process massive graphs. Graph streaming is one such memory-efficient setting, in which the edges of a graph arrive as a sequence and an algorithm's goal is to store only a sublinear number of edges while solving a graph problem exactly or approximately. In particular, this talk discusses the insertion-only, dynamic (or insertion-deletion), and sliding-window graph streaming models of computation, highlighting the well-studied maximum matching problem and its sister problem, minimum vertex cover, and focusing on a recent space-optimal (up to constant factors) algorithm for minimum vertex cover in the dynamic graph streaming model.

Algorithms and complexity
Tuesday June 28, 2022, 11AM, Salle 3052
Rajat Mittal (IIT Kanpur) Quantum query complexity (what, why and how)

The quantum query model has been one of the main tools in quantum computing for understanding the complexity of problems, both in terms of designing efficient algorithms and of proving lower bounds on the complexity of a problem. In the case of lower bounds, the main benefit of the query model is that concrete mathematical techniques exist for understanding query complexity. Our main aim in this talk is to look at this computational model and see why it has gained so much attention in the quantum computing world.

After looking at the background and definitions for the query model, we will discuss why quantum query complexity is an important area of study. Not only is it one of the simplest settings in which to study the complexity of functions; it also arises naturally in many other areas, such as property testing and communication complexity.

Finally, as mentioned before, there exist many mathematical tools for studying quantum query complexity. We will talk about the two main lower-bound methods, the adversary method and the polynomial method, which allow us to give a very short proof that Grover search is optimal. For the case of symmetric functions, we will see that simple arguments allow us to characterize not just the query complexity but even its lower-bound techniques.

Algorithms and complexity
Tuesday June 28, 2022, 3PM, Salle 3052
Manaswi Paraashar (Aarhus University) Quantum weirdness in query-to-communication simulation and the role of symmetry

Buhrman, Cleve and Wigderson (STOC'98) showed that for every Boolean function $f : \{0,1\}^n \to \{0,1\}$ and $G \in \{AND_2, XOR_2\}$, the bounded-error quantum communication complexity of the composed function ($f \circ G$) equals $O(Q(f) \log n)$, where $Q(f)$ denotes the bounded-error quantum query complexity of $f$. This is known as the BCW Simulation Theorem. This is in contrast with the classical setting, where it is easy to show that $R^{cc}(f \circ G) \leq 2R(f)$, where $R^{cc}$ and $R$ denote bounded-error communication and query complexity, respectively. We exhibit a total function for which the $\log n$ overhead in the BCW simulation is required.

We also answer several other questions related to quantum query-to-communication simulation: 1. We show that the $\log n$ overhead is not required when $f$ is symmetric (i.e., depends only on the Hamming weight of its input), generalizing a result of Aaronson and Ambainis for the Set-Disjointness function (Theory of Computing'05). 2. One may ask whether the $\log n$ overhead in the BCW Simulation Theorem can be avoided even when f is transitive, which is a weaker notion of symmetry. We show that the $\log n$ overhead is necessary for some transitive functions, even when we allow the quantum communication protocol an error probability that can be arbitrarily close to 1/2 (this corresponds to the unbounded-error model of communication). 3. We also give, among other things, a general recipe to construct functions for which the $\log n$ overhead is required in the BCW Simulation Theorem in the bounded-error communication model, even if the parties are allowed to share an arbitrary prior entangled state for free.

This talk is based on two joint works with Sourav Chakraborty, Arkadev Chattopadhyay, Peter Hoyer, Nikhil Mande, and Ronald de Wolf.

Algorithms and complexity
Tuesday June 21, 2022, 11AM, Salle 3052
Chandrima Kayal (ISI Kolkata) Separations between Combinatorial Measures for Transitive Functions

The role of symmetry in Boolean functions has been extensively studied in complexity theory. For example, symmetric functions (functions that are invariant under all permutations of the input bits) form a widely studied and well-understood class. But many functions exhibit other types of symmetry that the class of symmetric functions does not capture. What happens for them? How do different types of symmetry affect different complexity measures?

In this work, we study transitive functions, a natural generalization of symmetric functions and a strictly larger class, in light of several combinatorial measures. The question we address is: what are the maximum separations between various pairs of combinatorial measures for transitive functions? Such a study for general Boolean functions has been ongoing for many years, and the current best-known results for general Boolean functions were nicely compiled by Aaronson et al. (STOC, 2021). Before this paper, however, no such systematic study had been done for transitive functions.

Based on various kinds of pointer functions, we construct new transitive functions whose complexity measures are similar to those of the original pointer functions. We also discuss the barriers to applying Cheat Sheet techniques, and summarize the current knowledge of the relations between various combinatorial measures of transitive functions in a table similar to the one compiled by Aaronson et al. (STOC, 2021) for general functions.

This is a joint work with Sourav Chakraborty and Manaswi Paraashar.

Algorithms and complexity
Tuesday June 21, 2022, 3PM, Salle 3052
Sayantan Sen (ISI Kolkata) Tolerant Bipartiteness Testing in Dense Graphs

Bipartite testing has been a central problem in the area of property testing since its inception in the seminal work of Goldreich, Goldwasser and Ron [FOCS'96 and JACM'98]. Though the non-tolerant version of bipartite testing has been extensively studied in the literature, the tolerant variant is not well understood. In this paper, we consider the following version of tolerant bipartite testing: Given a parameter $\varepsilon \in (0,1)$ and access to the adjacency matrix of a graph $G$, we can decide whether $G$ is $\varepsilon$-close to being bipartite or $G$ is at least $(2+\Omega(1))\varepsilon$-far from being bipartite, by performing $\widetilde{\mathcal{O}}\left(\frac{1}{\varepsilon ^3}\right)$ queries and in $2^{\widetilde{\mathcal{O}}(1/\varepsilon)}$ time. This improves upon the state-of-the-art query and time complexities of this problem of $\widetilde{\mathcal{O}}\left(\frac{1}{\varepsilon ^6}\right)$ and  $2^{\widetilde{\mathcal{O}}(1/\varepsilon^2)}$, respectively, from the work of Alon, Fernandez de la Vega, Kannan and Karpinski (STOC'02 and JCSS'03), where $\widetilde{\mathcal{O}}(\cdot)$ hides a factor polynomial in $\log \frac{1}{\varepsilon}$.

This is a joint work with Arijit Ghosh, Gopinath Mishra, and Rahul Raychaudhury.

Algorithms and complexity
Tuesday June 7, 2022, 11AM, Salle 3052
Francesco Mazzoncini (Télécom Paris) Hybrid Quantum Cryptography from One-way Quantum Communication Complexity Separation

We introduce an explicit construction for a key distribution protocol in the Quantum Computational Timelock (QCT) security model, where one assumes that computationally secure encryption may only be broken after a time much longer than the coherence time of available quantum memories. Taking advantage of the QCT assumptions, we build a key distribution protocol called HM-QCT, based on the Hidden Matching problem, for which there exists an exponential gap in one-way communication complexity between classical and quantum strategies. We establish that the security of HM-QCT against arbitrary attacks can be reduced to the difficulty of solving the underlying distributed problem with classical information, while legitimate users can use quantum communication. This leads to an HM-QCT key distribution scheme over n bosonic modes that can be secure with up to O(sqrt(n)/log(n)) input photons per channel use, boosting key rates by several orders of magnitude compared to QKD and allowing operation with an untrusted receiver.

Algorithms and complexity
Tuesday June 7, 2022, 4PM, Salle 3052
Srikanth Srinivasan (IIT Bombay) Some Recent (and Some Old) Advances in Algebraic Complexity Theory

In the 1970s, Valiant initiated the study of lower bounds for algebraic models of computation. The main question of this area, the VP vs. VNP question, is a formally simpler version of the P vs. NP question. Unlike the latter question, there are very concrete modes of attack against the VP vs. VNP question, via lower bounds for constant-depth algebraic circuits [Agrawal-Vinay 2008]. I will introduce these notions, and talk about some constant-depth circuit lower bounds that have been proved since Valiant's initial results.

Algorithms and complexity
Tuesday May 17, 2022, 11AM, Salle 3052
Luca Zanetti (University of Bath) How fast can you mix on a graph?

We investigate fundamental barriers to fast mixing on a graph. In particular, we study the so-called Fastest Mixing Markov Chain Problem: given a graph G = (V, E), we desire the discrete-time Markov chain with smallest mixing time subject to having (1) uniform stationary distribution; (2) non-zero transition probabilities only across edges of the graph. We show that a geometric quantity, namely the vertex conductance of the graph, characterises the fastest mixing time. This characterisation can be seen as an analogue of the discrete Cheeger inequality, which characterises the mixing time of a simple random walk on a graph by its *edge* conductance. Our characterisation forbids fast mixing for graphs with small vertex conductance. To bypass this fundamental barrier, we consider Markov chains on G with equilibrium distribution which need not be uniform, but rather only ε-close to uniform in total variation. We show that another geometric quantity, the diameter of the graph, characterises such notion of “almost mixing”.

This is joint work with Sam Olesker-Taylor (arXiv:2111.05816).
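
The quantity being characterized can be computed directly for small graphs. A pure-Python sketch (hypothetical names): given a transition matrix supported on the edges of G with uniform stationary distribution, the mixing time is the first step at which every starting vertex is within a fixed total-variation distance of uniform.

```python
def mixing_time(P, tol=0.25, max_steps=10_000):
    """Smallest t such that every row of P^t is within total-variation
    distance tol of the uniform distribution."""
    n = len(P)
    uniform = [1 / n] * n
    rows = [row[:] for row in P]  # rows of P^t, starting at t = 1
    for t in range(1, max_steps + 1):
        tv = max(sum(abs(p - u) for p, u in zip(row, uniform)) / 2
                 for row in rows)
        if tv <= tol:
            return t
        rows = [[sum(row[k] * P[k][j] for k in range(n)) for j in range(n)]
                for row in rows]  # advance to the rows of P^(t+1)
    return None

# Lazy random walk on a 4-cycle; its stationary distribution is uniform.
P = [[0.5, 0.25, 0.0, 0.25],
     [0.25, 0.5, 0.25, 0.0],
     [0.0, 0.25, 0.5, 0.25],
     [0.25, 0.0, 0.25, 0.5]]
print(mixing_time(P, tol=0.01))  # 6
```

The Fastest Mixing Markov Chain Problem asks for the transition matrix, supported on E and with uniform stationary distribution, minimizing this quantity.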

Algorithms and complexity
Wednesday May 11, 2022, 11AM, Salle 3052
Marios Ioannou (Freie Universität Berlin) Learnability of the output distributions of local quantum circuits

There is currently a large interest in understanding the potential advantages quantum devices can offer for probabilistic modelling. In this work we investigate, within two different oracle models, the probably approximately correct (PAC) learnability of quantum circuit Born machines, i.e., the output distributions of local quantum circuits. We first show a negative result, namely, that the output distributions of super-logarithmic depth Clifford circuits are not sample-efficiently learnable in the statistical query model, i.e., when given query access to empirical expectation values of bounded functions over the sample space. This immediately implies the hardness, for both quantum and classical algorithms, of learning from statistical queries the output distributions of local quantum circuits using any gate set which includes the Clifford group. As many practical generative modelling algorithms use statistical queries – including those for training quantum circuit Born machines – our result is broadly applicable and strongly limits the possibility of a meaningful quantum advantage for learning the output distributions of local quantum circuits. As a positive result, we show that in a more powerful oracle model, namely when directly given access to samples, the output distributions of local Clifford circuits are computationally efficiently PAC learnable by a classical learner. Our results are equally applicable to the problems of learning an algorithm for generating samples from the target distribution (generative modelling) and learning an algorithm for evaluating its probabilities (density modelling). They provide the first rigorous insights into the learnability of output distributions of local quantum circuits from the probabilistic modelling perspective.

Algorithms and complexity
Tuesday May 10, 2022, 5PM, Salle 3052
Michele Orrù (UC Berkeley) On the (in)security of ROS

We present an algorithm solving the ROS (Random inhomogeneities in an Overdetermined Solvable system of linear equations) problem in polynomial time for $\ell > \log p$ dimensions. Our algorithm can be combined with Wagner's attack, and leads to a sub-exponential solution for any dimension $\ell$, with the best complexity known so far. When concurrent executions are allowed, our algorithm leads to practical attacks against the unforgeability of blind signature schemes such as Schnorr and Okamoto–Schnorr blind signatures, threshold signatures such as GJKR and the original version of FROST, multisignatures such as CoSi and the two-round version of MuSig, partially blind signatures such as Abe–Okamoto, and conditional blind signatures such as ZGP17.

This is joint work with Fabrice Benhamouda, Tancrède Lepoint, Julian Loss and Mariana Raykova.

Online talk

Algorithms and complexity
Wednesday April 27, 2022, 4PM, Salle 3052
Hamoon Mousavi (Columbia University) Nonlocal Games, Compression Theorems, and the Arithmetical Hierarchy

We investigate the connection between the complexity of nonlocal games and the arithmetical hierarchy, a classification of languages according to the complexity of arithmetical formulas defining them. It was recently shown by Ji, Natarajan, Vidick, Wright and Yuen that deciding whether the (finite-dimensional) quantum value of a nonlocal game is 1 or at most 1/2 is complete for the class Σ1 (i.e., RE). A result of Slofstra implies that deciding whether the commuting operator value of a nonlocal game is equal to 1 is complete for the class Π1 (i.e., coRE). We prove that deciding whether the quantum value of a two-player nonlocal game is exactly equal to 1 is complete for Π2; this class is in the second level of the arithmetical hierarchy and corresponds to formulas of the form “∀x∃yϕ(x,y)”. This shows that exactly computing the quantum value is strictly harder than approximating it, and also strictly harder than computing the commuting operator value (either exactly or approximately). We explain how results about the complexity of nonlocal games all follow in a unified manner from a technique known as compression. At the core of our Π2-completeness result is a new “gapless” compression theorem that holds for both quantum and commuting operator strategies. Our compression theorem yields as a byproduct an alternative proof of Slofstra's result that the set of quantum correlations is not closed. We also show how a “gap-preserving” compression theorem for commuting operator strategies would imply that approximating the commuting operator value is complete for Π1.

Online talk

Algorithms and complexity
Wednesday April 20, 2022, 2PM, Salle 3052
Subhasree Patro (CWI Amsterdam) Memory Compression with Quantum Random-Access Gates

In the classical RAM, we have the following useful property. If we have an algorithm that uses M memory cells throughout its execution, and in addition is sparse, in the sense that, at any point in time, only m out of M cells will be non-zero, then we may “compress” it into another algorithm which uses only m log M memory and runs in almost the same time. We may do so by simulating the memory using either a hash table or a self-balancing tree. We show an analogous result for quantum algorithms equipped with quantum random-access gates. If we have a quantum algorithm that runs in time T and uses M qubits, such that the state of the memory, at any time step, is supported on computational-basis vectors of Hamming weight at most m, then it can be simulated by another algorithm which uses only O(m log M) memory, and runs in time Õ(T). We show how this theorem can be used, in a black-box way, to simplify the presentation in several papers. Broadly speaking, when there exists a need for a space-efficient history-independent quantum data structure, it is often possible to construct a space-inefficient, yet sparse, quantum data structure, and then appeal to our main theorem. This results in simpler and shorter arguments.
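
The classical property recalled at the start of the abstract is easy to see in code. A Python sketch (hypothetical names) of simulating M addressable, zero-initialized cells with a hash table whose size tracks only the non-zero cells:

```python
class SparseMemory:
    """M logical zero-initialized cells stored in a hash table (dict),
    using space proportional to the number of non-zero cells."""

    def __init__(self, M):
        self.M = M
        self.cells = {}  # address -> value, non-zero entries only

    def read(self, addr):
        assert 0 <= addr < self.M
        return self.cells.get(addr, 0)

    def write(self, addr, value):
        assert 0 <= addr < self.M
        if value == 0:
            self.cells.pop(addr, None)  # writing zero frees the entry
        else:
            self.cells[addr] = value

mem = SparseMemory(M=10**12)   # a trillion logical cells
mem.write(123_456_789, 7)
print(mem.read(123_456_789))   # 7
print(mem.read(0))             # 0: untouched cells read as zero
print(len(mem.cells))          # 1: space tracks sparsity, not M
```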

Algorithms and complexity
Wednesday April 20, 2022, 3PM, Salle 3052
Arjan Cornelissen (CWI Amsterdam) Multivariate quantum Monte Carlo Estimation

Monte Carlo estimation is a well-known technique for estimating the expectation value of a random variable. For univariate random variables, taking values in the interval [-L,L], it is well-known that one classically can obtain an eps-precise estimate with O(L^2/eps^2) samples. Montanaro showed in 2015 that quantumly this can be sped up quadratically, i.e., that O(L/eps) quantum samples suffice. In this talk, we will generalize Montanaro's technique to the multivariate setting, and we show that a similar quadratic speed-up can be obtained. This result provides a useful building block as well for the more general case where we only assume that the random variable has a bounded covariance matrix. If time permits, we can also have a look at how this technique relates to Gilyén et al.'s gradient estimation algorithm.
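
For contrast with the quantum speed-up, the classical baseline mentioned at the start of the abstract fits in a few lines of Python (hypothetical names): by Chebyshev's inequality, averaging O(L^2/eps^2) samples of a [-L, L]-valued random variable gives an eps-precise estimate with constant probability.

```python
import random

def monte_carlo_mean(sample, L, eps, c=10):
    """Estimate E[X] for X taking values in [-L, L] to precision eps.

    Var(X) <= L^2, so by Chebyshev's inequality n = c * L^2 / eps^2
    samples give |estimate - E[X]| <= eps with probability >= 1 - 1/c.
    """
    n = int(c * L**2 / eps**2) + 1
    return sum(sample() for _ in range(n)) / n

random.seed(0)
# X uniform on [-1, 1]: true mean 0, so the estimate should be near 0.
est = monte_carlo_mean(lambda: random.uniform(-1, 1), L=1, eps=0.05)
print(abs(est) <= 0.05)
```

Montanaro's quantum estimator achieves the same precision with roughly L/eps quantum samples, the quadratic speed-up that the talk extends to the multivariate setting.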

Algorithms and complexity
Wednesday April 20, 2022, 5PM, Salle 3052
Pierre Meyer (IDC Herzliya / IRIF) Two-Round MPC from Minimal Assumptions

A recent breakthrough result that fully characterizes the round complexity of secure computation.

Algorithms and complexity
Tuesday April 12, 2022, 11AM, Salle 3052
Goran Žužić (ETH Zurich) Universal optimality in distributed computing and its connections to diverse areas of theoretical computer science

The modern computation and information processing systems shaping our world have become massively distributed, and a fundamental understanding of distributed algorithmics has never been more important. At the same time, despite 40 years of intense study, we often do not have an adequate understanding of the fundamental barriers that rule out the existence of ultra-fast distributed algorithms. This is true even for the most well-studied problems in computer science – including the shortest path, minimum spanning tree, minimum cut, and many other well-known tasks.

In this talk, I will present a high-level overview of a sequence of papers that give a near-complete answer to the above question. Its culmination is the following pie-in-the-sky result called *universal optimality*: for all of the tasks mentioned, there exists a single distributed algorithm that, when run on any communication network G, is provably competitive with the fastest algorithm on G.

The pursuit of universal optimality has led to the development of many new connections between distributed computing and other seemingly unrelated areas of theoretical computer science. Curiously, these connections have been mutually beneficial and have already led to many breakthroughs not only in distributed computing, but also in the theory of metric embeddings, information theory, and oblivious packet routing. I will briefly explore these connections.

Algorithms and complexity
Tuesday April 12, 2022, 3PM, Salle 3052
Balthazar Bauer (IRIF) Transferable E-cash: An analysis in the algebraic group model

As the digitization of global communications accelerates, there is a growing awareness of privacy issues among large segments of society. In particular, the confidentiality of commercial transactions is under debate. In the 1980s and 1990s, the cryptographer Chaum developed, and then deployed with his company Digicash, an “anonymous” digital payment system whose guarantees were based, among other things, on mathematical proofs inspired by the then-nascent paradigm of “provable security”. Although Digicash went bankrupt after a few years, the academic community has continued to study anonymous digital payment systems, seeking to develop new functionalities, including transferability.

Our main goal was to revisit security models concerning the anonymity of payments, and then to propose a theoretically deployable payment system, also based on the paradigm of provable security. During the presentation, I will talk about security models guaranteeing anonymity, as well as more fundamental questions concerning a family of cryptographic assumptions commonly used in provable security.

Algorithms and complexity
Tuesday April 5, 2022, 2PM, LIP6, Tower 26, corridor 25/26, room 105
Ivan Supic (LIP6, Sorbonne University) Quantum networks self-test all entangled states

Certifying quantum properties with minimal assumptions is a fundamental problem in quantum information science. Self-testing is a method to infer the underlying physics of a quantum experiment only from the measured statistics. While all bipartite pure entangled states can be self-tested, little is known about how to self-test quantum states of an arbitrary number of systems. Here, we introduce a framework for network-assisted self-testing and use it to self-test any pure entangled quantum state of an arbitrary number of systems. The scheme requires the preparation of a number of singlets that scales linearly with the number of systems, and the implementation of standard projective and Bell measurements, all feasible with current technology. When all the network constraints are exploited, the obtained self-testing certification is stronger than what is achievable in any Bell-type scenario. Our work not only solves an open question in the field but also shows how properly designed networks offer new opportunities for the certification of quantum phenomena.

Algorithms and complexity
Tuesday March 29, 2022, 4PM, Salle 3052
Sushant Sachdeva (University of Toronto) Maximum Flow and Minimum-Cost Flow in Almost-Linear Time

We give an algorithm that computes exact maximum flows and minimum-cost flows on directed graphs with m edges and polynomially bounded integral demands, costs, and capacities in m^(1+o(1)) time. Our algorithm builds the flow through a sequence of m^(1+o(1)) approximate undirected minimum-ratio cycles, each of which is computed and processed in amortized m^(o(1)) time using a dynamic data structure. Our framework extends to an algorithm running in m^(1+o(1)) time for computing flows that minimize general edge-separable convex functions to high accuracy. This gives an almost-linear time algorithm for several problems including entropy-regularized optimal transport, matrix scaling, p-norm flows, and isotonic regression.
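For contrast with the classical baselines this result improves on, here is a minimal sketch of the textbook Edmonds-Karp method, which augments along shortest residual paths and runs in O(VE^2) time, very far from the m^(1+o(1)) bound of the talk (an illustrative classical baseline only, not the authors' algorithm):

```python
from collections import deque

def max_flow(cap, s, t):
    # Edmonds-Karp: repeatedly augment along a shortest (BFS) path
    # in the residual graph until no augmenting path remains.
    n = len(cap)
    res = [row[:] for row in cap]  # residual capacities
    flow = 0
    while True:
        parent = [-1] * n
        parent[s] = s
        q = deque([s])
        while q and parent[t] == -1:
            u = q.popleft()
            for v in range(n):
                if parent[v] == -1 and res[u][v] > 0:
                    parent[v] = u
                    q.append(v)
        if parent[t] == -1:
            return flow  # no augmenting path left
        # Find the bottleneck capacity along the path, then push flow.
        b, v = float("inf"), t
        while v != s:
            b = min(b, res[parent[v]][v])
            v = parent[v]
        v = t
        while v != s:
            res[parent[v]][v] -= b
            res[v][parent[v]] += b
            v = parent[v]
        flow += b

# Example: 4-node network, source 0, sink 3.
cap = [[0, 3, 2, 0],
       [0, 0, 1, 2],
       [0, 0, 0, 2],
       [0, 0, 0, 0]]
f = max_flow(cap, 0, 3)  # 4: the cut {0,1,2} vs {3} has capacity 2 + 2
```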

The speaker will give the talk online.

Algorithms and complexity
Tuesday February 22, 2022, 2PM, Salle 3052
Alain Sarlette (QUANTIC lab, INRIA Paris) Building a fault-tolerant quantum computer with bosonic cat-codes

At the request of the organizers, this talk will review the last years' work by (mostly others in) our group towards correcting errors in future quantum computing hardware with the so-called “cat qubits”. The basic piece of hardware is a quantum harmonic oscillator, instead of a physical two-level system, hence “bosonic” cats. In practice this is embodied by superconducting circuits (an LC oscillator) or by mechanical oscillators. It is the approach currently pursued by actors such as the Paris startup Alice&Bob and Amazon's quantum computing effort, as well as by now dozens of groups around the world. The talk will be tutorial in nature. We will start by clarifying the basic physical setting and assumptions, under which cat qubits can be implemented so as to have almost only phase-flip errors and no bit-flips. Then we will describe the implementation of a universal set of error-protected gates in this setting. We will conclude with some perspectives with respect to more standard code-based Quantum Error Correction methods and the more general context in which hardware-based error correction can be pursued.

A Zoom link will be distributed via the qinfo-paris mailing list. If you would like to receive the link (or join the mailing list), please contact gribling@irif.fr

Algorithms and complexity
Tuesday January 25, 2022, 2PM, Salle 3052
Simona Etinski (INRIA / IRIF) Latest challenges in post-quantum cryptography

In 2016, the National Institute of Standards and Technology (NIST) announced a call for standardization (also known as “NIST competition”) of post-quantum cryptographic protocols, i.e., cryptographic protocols considered to be resistant to quantum attacks. NIST was mainly interested in two types of protocols: digital signatures and encryption/key-establishment schemes. The fourth and final round of this competition is about to start, and once it is finished, the most efficient and thoroughly analyzed protocols will be standardized.

In this talk, we focus on the proposal for a digital signature. It is based upon a problem from coding theory, known as the syndrome decoding problem, and is analyzed using cryptanalytic means. Namely, we analyze the time complexity of information set decoding algorithms, widely believed to be the best algorithms for solving the syndrome decoding problem. By evaluating their complexity in both the classical and quantum settings, we reason about the hardness of the problem. Finally, we give an example of a scheme based upon the syndrome decoding problem and analyze the security it inherits from the hardness of the problem. We examine the tradeoff between a signature's security and its size, which is a major challenge to be addressed in the competition.
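To make the underlying problem concrete, here is a toy brute-force solver for syndrome decoding over GF(2): given a parity-check matrix H and a syndrome s, find an error vector e of weight at most w with He = s. The matrix below is a made-up miniature example; real cryptographic instances are far too large for exhaustive search, which is precisely what the hardness assumption rests on.

```python
import itertools

def syndrome_decode(H, s, w):
    # Exhaustive search over error vectors of weight 0..w;
    # return the first e with H e = s over GF(2), or None.
    n = len(H[0])
    for weight in range(w + 1):
        for positions in itertools.combinations(range(n), weight):
            e = [1 if i in positions else 0 for i in range(n)]
            if all(sum(H[r][i] * e[i] for i in range(n)) % 2 == s[r]
                   for r in range(len(H))):
                return e
    return None

# Toy instance: 2x3 parity-check matrix, syndrome of a single-bit error.
H = [[1, 0, 1],
     [0, 1, 1]]
e = syndrome_decode(H, [1, 1], 1)  # [0, 0, 1]: flipping bit 3 yields syndrome (1, 1)
```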


Year 2021

Algorithms and complexity
Wednesday December 15, 2021, 10AM, Salle 1007
Serge Massar (Laboratoire d’Information Quantique CP224, Université libre de Bruxelles) Characterizing the intersection of QMA and coQMA

We show that the functional analogue of QMA∩coQMA, denoted F(QMA∩coQMA), equals the complexity class Total Functional QMA (TFQMA). To prove this we need to introduce alternative definitions of QMA∩coQMA in terms of a single quantum verification procedure. We show that if TFQMA equals the functional analogue of BQP (FBQP), then QMA∩coQMA = BQP. We show that if there is a QMA complete problem that (robustly) reduces to a problem in TFQMA, then QMA∩coQMA = QMA. These results provide strong evidence that the inclusions FBQP⊆TFQMA⊆FQMA are strict, since otherwise the corresponding inclusions in BQP⊆QMA∩coQMA⊆QMA would become equalities.

Algorithms and complexity
Tuesday December 7, 2021, 11AM, Salle 1007
Bruce Kapron (Computer Science Department, University of Victoria) Type-two feasibility via bounded query revision

The problem of generalizing the notion of polynomial time to a type-two setting, where inputs include not only finitary data, e.g., numbers or strings of symbols, but also functions on such data, was first posed by Constable in 1973. Mehlhorn subsequently proposed a solution, based on a generalization of Cobham's scheme-based approach, which uses limited recursion on notation. The resulting class was shown to be robust with respect to oracle Turing machine (OTM) computation, but the problem of providing a purely machine-based characterization remained open for almost two decades, until Kapron and Cook gave a solution using notions of function length and second-order polynomials. While a natural generalization of the usual notion of poly-time, this characterization still had some shortcomings, as function length is itself not a feasible notion. A characterization using ordinary polynomials remained an elusive goal.

Recently, we have provided several such characterizations. The underlying idea is to bound run time in terms of an ordinary polynomial in the largest answer returned by an oracle, while imposing a constant upper bound on the number of times an oracle answer may exceed all previous ones in size. The resulting classes are contained in Mehlhorn's class, and when closed under composition are in fact equivalent to it.
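The counting notion at the heart of this idea can be illustrated with a toy helper (hypothetical names, not the formal OTM cost model): given the sequence of oracle answers seen so far, count how many of them strictly exceeded every previous answer in size; the characterization bounds this count by a constant.

```python
def revision_count(answers):
    # Count the answers that strictly exceed every previous answer in length
    # (each such answer "revises" the current size record).
    best, count = -1, 0
    for a in answers:
        if len(a) > best:
            best, count = len(a), count + 1
    return count

# Lengths 1, 3, 2, 4: the record is revised at the answers of length 1, 3, and 4.
r = revision_count(["a", "bbb", "cc", "bbbb"])
```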

In this talk we will present these recent results and follow-up work involving new scheme-based characterizations of Mehlhorn's class, as well as a recent characterization using a tier-based type system for a simple imperative programming language with oracle calls.

(Joint work with F. Steinberg, and with E. Hainry, J.-Y. Marion, and R. Pechoux.)

Algorithms and complexity
Tuesday December 7, 2021, 2PM, Salle 3052
Daniel Szilagyi Introduction to convexity - optimization and sampling

In this talk we present some basic notions of convexity, from a quantum-algorithmic point of view. The talk will consist of two parts. We first warm up by introducing some classical first- and second-order methods for convex optimization. Then, we move on to more structured convex problems, introducing linear programming and the basic ideas of interior-point methods. We mention some quantum algorithms for LPs and SDPs. In the second part of the talk we discuss sampling, and in particular its applications to estimating volumes of polytopes. We present the basic classical sampling algorithms (ball walk and hit-and-run), and the general framework for applying these algorithms to volume estimation. We conclude by discussing opportunities for quantum speedups - in particular, we present a high-level overview of the work by Chakrabarti, Childs, Hung, Li, Wang and Wu from 2019.
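As a minimal illustration of the first-order methods covered in the warm-up, here is a plain gradient-descent sketch on a toy quadratic objective (the objective, step size, and iteration count are arbitrary choices for the example):

```python
import numpy as np

def gradient_descent(grad, x0, step, iters):
    # Basic first-order method: repeatedly step against the gradient.
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x - step * grad(x)
    return x

# Minimize f(x) = ||x - c||^2, whose gradient is 2(x - c); the minimizer is c.
c = np.array([1.0, -2.0])
x_star = gradient_descent(lambda x: 2 * (x - c), np.zeros(2), step=0.1, iters=200)
```

With step 0.1 the error contracts by a factor 0.8 per iteration, so 200 iterations bring `x_star` within numerical precision of `c`.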

Quantum info Paris

Algorithms and complexity
Wednesday November 17, 2021, 11AM, Salle 1007
Spyros Angelopoulos (LIP6) Competitive Sequencing with Noisy Advice

Several well-studied online resource allocation problems can be formulated in terms of infinite, increasing sequences of positive values, in which each element is associated with a corresponding allocation value. Examples include problems such as online bidding, searching for a hidden target on an unbounded line, and designing interruptible algorithms based on contract scheduling. The performance of the online algorithm, in each of these problems, is measured by the competitive ratio, which describes the multiplicative performance loss due to the absence of full information on the instance.

We study such competitive sequencing problems in a setting in which the online algorithm has some (potentially) erroneous information, expressed as a k-bit advice string, for some given k. For concreteness, we follow the formulation of contract scheduling as a case study. We first consider the untrusted advice setting, in which the objective is to obtain a Pareto-optimal schedule with respect to the two extremes: either the advice is error-free, or it is generated by a (malicious) adversary. Here, we show a Pareto-optimal schedule, using a new approach for applying the functional-based lower-bound technique due to [Gal, Israel J. Math. 1972]. Next, we introduce and study a new noisy advice setting, in which a number of the advice bits may be erroneous; the exact error is unknown to the online algorithm, which only has access to a pessimistic estimation (i.e., an upper bound on this error). We give the first upper and lower bounds on the competitive ratio of an online problem in this setting. To this end, we combine ideas from robust query-based search in arrays (such as Rényi-Ulam-type games) and fault-tolerant contract scheduling. The presentation will also discuss the applicability of these approaches to more general online problems.

Joint work with Diogo Arsénio and Shahin Kamali.

Algorithms and complexity
Tuesday October 19, 2021, 2PM, Salle 3052
Stephen Piddock (School of Mathematics, University of Bristol) Quantum analogue simulation: universality and complexity

Quantum analogue simulation aims to reproduce the physics of a target Hamiltonian H using a different physical system; this can be achieved by ensuring that the low energy subspace of the physical Hamiltonian is approximately H. First I will describe a construction of a 1D quantum system that can simulate all other Hamiltonians (a property we call universality) just by varying the length of the chain. The construction relies heavily on the possibility of encoding a (quantum) computation into the chain. Then I will go on to show a much more general equivalence between universality and the computational complexity of the Local Hamiltonian problem - the problem of approximately estimating the ground state energy of a system.

Based on https://arxiv.org/abs/2003.13753 and https://arxiv.org/abs/2101.12319

Algorithms and complexity
Tuesday October 12, 2021, 11AM, Salle 1007
Pierre Meyer (IRIF, IDC Herzliya) Breaking the Circuit Size Barrier for Secure Computation under Quasi-Polynomial LPN

In this work we introduce a new (circuit-dependent) homomorphic secret sharing (HSS) scheme for any $\log/\log\log$-local circuit, with communication proportional only to the width of the circuit and polynomial computation, which is secure assuming the super-polynomial hardness of learning parity with noise (LPN). At the heart of our new construction is a pseudorandom correlation generator (PCG) which allows two parties to locally stretch short seeds into pseudorandom instances of an arbitrary $\log/\log\log$-local additive correlation.

Our main application, and the motivation behind this work, is a generic two-party secure computation protocol for every layered (boolean or arithmetic) circuit of size $s$ with total communication $O(s/\log\log s)$ and polynomial computation, assuming the super-polynomial hardness of the standard learning parity with noise assumption (a circuit is layered if its nodes can be partitioned in layers, such that any wire connects adjacent layers). This expands the set of assumptions under which the `circuit-size barrier' can be broken, for a large class of circuits. The strength of the underlying assumption is tied to the sublinearity factor: we achieve communication $O(s/k(s))$ under the $s^{2^{k(s)}}$-hardness of LPN, for any $k(s) \leq (\log\log s) / 4$.

Previously, the set of assumptions known to imply a PCG for correlations of degree $\omega(1)$ or generic secure computation protocols with sublinear communication was restricted to LWE, DDH, and a circularly secure variant of DCR.

Algorithms and complexity
Tuesday June 22, 2021, 11AM, Salle 3052 + Zoom
Subhasree Patro (CWI) Quantum Fine-Grained Complexity

One of the major challenges in the field of complexity theory is the inability to prove unconditional time lower bounds, including for practical problems within the complexity class P. One way around this is the study of fine-grained complexity, where we use special reductions to prove time lower bounds for many diverse problems in P based on the conjectured hardness of some key problems. For example, computing the edit distance between two strings, a problem of practical interest in comparing DNA strings, has an algorithm that takes O(n^2) time. Using a fine-grained reduction it can be shown that a faster algorithm for edit distance would also imply a faster algorithm for the Boolean Satisfiability (SAT) problem, which is believed not to exist - strong evidence that it will be very hard to solve the edit distance problem faster. Besides SAT, 3SUM and APSP are other key problems that are very suitable as a basis for such reductions, since they are natural to describe and well-studied.
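The O(n^2) edit-distance algorithm mentioned above is the classical Wagner-Fischer dynamic program; a compact sketch with a rolling one-dimensional table:

```python
def edit_distance(a, b):
    # Wagner-Fischer DP with a rolling row:
    # after processing row i, d[j] = edit distance between a[:i] and b[:j].
    m = len(b)
    d = list(range(m + 1))
    for i in range(1, len(a) + 1):
        prev, d[0] = d[0], i  # prev holds the diagonal cell d[i-1][j-1]
        for j in range(1, m + 1):
            cur = d[j]
            # Deletion, insertion, or substitution/match.
            d[j] = min(d[j] + 1, d[j - 1] + 1, prev + (a[i - 1] != b[j - 1]))
            prev = cur
    return d[m]
```

For example, `edit_distance("kitten", "sitting")` returns 3 (substitute k→s, substitute e→i, insert g).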

The situation in the quantum regime is no better; almost all known lower bounds for quantum algorithms are defined in terms of query complexity, which doesn’t help much for problems for which the best-known algorithms take superlinear time. Therefore, employing fine-grained reductions in the quantum setting seems a natural way forward. However, translating the classical fine-grained reductions directly into the quantum regime is not always possible for various reasons. In this talk, I will present some recent results in which we circumvent these challenges and prove quantum time lower bounds for some problems in BQP conditioned on the conjectured quantum hardness of SAT (and its variants) and the 3SUM problem.

Algorithms and complexity
Wednesday June 9, 2021, 10AM, Collège de France
Frédéric Magniez - Elham Kashefi (IRIF - CNRS, University of Edinburgh) Derniers développements pour l’internet et intelligence artificielle quantique

Eighth lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Elham Kashefi on « Quantum Computing as a Service: Secure and Verifiable Multi-Tenant Quantum Data Centre » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-06-09-10h00.htm

Algorithms and complexity
Tuesday June 8, 2021, 4PM, Online
Kyriakos Axiotis (MIT) Decomposable Submodular Function Minimization via Maximum Flow

Submodular function minimization is a primitive that is extensively used in many ML applications, including MAP inference in MRFs, image segmentation, and clustering. Although polynomial-time algorithms for this problem exist, their runtime is prohibitive for large-scale applications. Fortunately, in practice it is often the case that the function can be decomposed as a sum of r simpler functions, each acting on a small number of elements, k. We present an algorithm that can solve this problem (as well as the more general parametric problem) in time O((n+r)^(1.497) poly(k)), where n is the number of elements in the ground set. Perhaps surprisingly, we achieve this by O~(poly(k)) calls to a maximum flow oracle, thus reducing a problem that is unrelated to graphs to a graph problem. In fact, if max flow is solvable in near-linear time and k=O(1), our algorithm runs in near-linear time. At a technical level, we develop a dual iterative method that computes each step by approximating the submodular base polytope by a graph cut polytope. To solve the resulting graph problem, we develop an efficient algorithm for the parametric min cut problem, which might be of independent interest.

Joint work with Adam Karczmarz, Anish Mukherjee, Piotr Sankowski, and Adrian Vladu.

Algorithms and complexity
Wednesday June 2, 2021, 10AM, Collège de France
Frédéric Magniez - André Chailloux (IRIF - INRIA) Limites et impact du calcul quantique

Seventh lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by André Chailloux on « Suprématie quantique : où en sommes-nous aujourd'hui ? » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-06-02-10h00.htm

Algorithms and complexity
Wednesday May 26, 2021, 10AM, Collège de France
Frédéric Magniez - Iordanis Kerenidis (IRIF - CNRS) Transformée de Fourier quantique II

Sixth lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Iordanis Kerenidis on « Quantum Machine Learning » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-05-26-10h00.htm

Algorithms and complexity
Wednesday May 19, 2021, 10AM, Collège de France
Frédéric Magniez - Stacey Jeffery (IRIF - CWI) Optimisation quantique

Fifth lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Stacey Jeffery on « A Unified Framework for Quantum Walk Search » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-05-19-10h00.htm

Algorithms and complexity
Wednesday May 12, 2021, 10AM, Collège de France
Frédéric Magniez - Miklos Santha (IRIF - CNRS, CQT) Transformée de Fourier quantique I

Fourth lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Miklos Santha on « Le problème du sous-groupe caché » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-05-12-10h00.htm

Algorithms and complexity
Wednesday May 5, 2021, 10AM, Collège de France
Frédéric Magniez - Simon Perdrix (IRIF - CNRS) Circuits quantiques, premiers algorithmes

Third lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Simon Perdrix on « Langages graphiques pour programmer et raisonner en informatique quantique » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-05-05-10h00.htm

Algorithms and complexity
Tuesday May 4, 2021, 11AM, Online
Paweł Gawrychowski (University of Wrocław) Distance oracles and labeling schemes for planar graphs

A fundamental question concerning graphs is that of constructing a data structure, called a distance oracle, that allows us to compute the distance between any pair of nodes. The goal is to avoid precomputing the answer to every possible query while keeping the query time small. This is quite hard if we insist on receiving the exact answer and make no assumptions about the graph. A particularly natural class of inputs in this context are planar graphs. Only recently have we discovered that a strongly subquadratic-space distance oracle with polylogarithmic query time exists for such graphs. I will present the most recent developments in this and related areas. In particular, I will talk about so-called informative labeling schemes for distances in planar graphs, which can be seen as a clean distributed version of distance oracles.
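To give a feel for what a distance labeling scheme is, here is a toy scheme for trees (not the planar-graph schemes of the talk, and with Θ(n)-size labels rather than the short labels real schemes aim for): each node's label is its root-to-node path, and the distance between two nodes is computed from their two labels alone.

```python
def make_labels(adj, root):
    # Label each node with the tuple of nodes on its root-to-node path.
    labels = {}
    stack = [(root, (root,))]
    while stack:
        u, path = stack.pop()
        labels[u] = path
        for v in adj.get(u, []):
            if v not in path:
                stack.append((v, path + (v,)))
    return labels

def label_dist(lu, lv):
    # Tree distance from labels alone: depth(u) + depth(v) - 2 * depth(lca),
    # where the LCA depth is the length of the common path prefix.
    lca_depth = 0
    for a, b in zip(lu, lv):
        if a != b:
            break
        lca_depth += 1
    return (len(lu) - lca_depth) + (len(lv) - lca_depth)

# Path 0-1-3 plus edge 0-2: the distance between 3 and 2 is 3 (3-1-0-2).
labels = make_labels({0: [1, 2], 1: [3]}, 0)
```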

Algorithms and complexity
Wednesday April 14, 2021, 10AM, Collège de France
Frédéric Magniez - Thomas Vidick (IRIF - California Institute of Technology) Cryptographie et communication quantiques

Second lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Thomas Vidick on « Certifier la génération de nombres aléatoires avec le quantique » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-04-14-10h00.htm

Algorithms and complexity
Wednesday April 7, 2021, 10AM, Collège de France
Frédéric Magniez - Eleni Diamanti (IRIF - CNRS) Information quantique, premières utilisations calculatoires

First lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Eleni Diamanti on « Réseaux de communication quantique » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-04-07-10h00.htm

Algorithms and complexity
Thursday April 1, 2021, 6PM, Collège de France
Frédéric Magniez (IRIF, Collège de France) Algorithmes quantiques : quand la physique quantique défie la thèse de Church-Turing

Inaugural lecture of the Collège de France course on quantum algorithms. Further information: https://www.college-de-france.fr/site/frederic-magniez/inaugural-lecture-2021-04-01-18h00.htm.

Broadcast link: https://www.youtube.com/watch?v=JqNvAN05R04

Algorithms and complexity
Tuesday March 30, 2021, 11AM, Zoom
David Saulpic (LIP6) A New Coreset Framework for Clustering

Given a metric space, the (k, z)-clustering problem consists of finding k centers minimizing the sum, over all points, of the distance from the point to its closest center raised to the power z. This encapsulates the famous k-median (z = 1) and k-means (z = 2) clustering problems. Designing small-space sketches of the data that approximately preserve the cost of solutions, also known as coresets, has been an important research direction over the last 15 years. In this work, we present a new, simple coreset framework that simultaneously improves upon the best known bounds for a large variety of settings, ranging from Euclidean spaces, doubling metrics, and minor-free metrics to the general metric case. This is joint work with Vincent Cohen-Addad and Chris Schwiegelshohn.
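The objective a coreset must preserve can be stated directly in code; below is a tiny brute-force reference implementation of the (k, z)-clustering cost (an illustration of the definition only, nothing like the coreset construction itself):

```python
import itertools

def clustering_cost(points, centers, z):
    # Sum over points of (Euclidean distance to the closest center) ** z.
    return sum(
        min(sum((p - c) ** 2 for p, c in zip(pt, ctr)) ** 0.5 for ctr in centers) ** z
        for pt in points
    )

def best_centers(points, k, z):
    # Brute force over k-subsets of the points as candidate centers
    # (exponential; only for tiny examples).
    return min(itertools.combinations(points, k),
               key=lambda cs: clustering_cost(points, cs, z))

# Two tight pairs on a line: the optimal 2-center cost with z = 2 is 1 + 1 = 2.
pts = [(0.0,), (1.0,), (10.0,), (11.0,)]
centers = best_centers(pts, 2, 2)
```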

https://u-paris.zoom.us/j/88942335626?pwd=R25JRVhJWGU4dnNEUFYxZ1RKbDRPQT09

Algorithms and complexity
Tuesday March 16, 2021, 11AM, Room 3052 & Online
Federico Centrone (IRIF) Experimental demonstration of quantum advantage for NP verification with limited information

In recent years, many computational tasks have been proposed as candidates for showing a quantum computational advantage, that is, an advantage in the time needed to perform the task using a quantum instead of a classical machine. Nevertheless, practical demonstrations of such an advantage remain particularly challenging because of the difficulty of bringing together all the necessary theoretical and experimental ingredients. Here, we show an experimental demonstration of a quantum computational advantage in a prover-verifier interactive setting, where the computational task consists in the verification of an NP-complete problem by a verifier who only gets limited information about the proof sent by an untrusted prover, in the form of a series of unentangled quantum states. We provide a simple linear optical implementation that can perform this verification task efficiently (within a few seconds), while we also provide strong evidence that, for a fixed size of the proof, a classical computer would take much longer (assuming only that it takes exponential time to solve an NP-complete problem). While our computational advantage concerns a specific task in a scenario of mostly theoretical interest, it brings us a step closer to potentially useful applications, such as server-client quantum computing.

https://u-paris.zoom.us/j/88942335626?pwd=R25JRVhJWGU4dnNEUFYxZ1RKbDRPQT09

Algorithms and complexity
Tuesday March 2, 2021, 4PM, Zoom
Soheil Behnezhad (Department of Computer Science, University of Maryland) Beating Two-Thirds For Random-Order Streaming Matching

We study the maximum matching problem in the random-order semi-streaming setting. In this problem, the edges of an arbitrary $n$-vertex graph $G=(V, E)$ arrive in a stream one by one and in a random order. The goal is to have a single pass over the stream, use $O(n \cdot \mathrm{poly}\log{(n)})$ space, and output a large matching of $G$.

We prove that for an absolute constant $\epsilon_0 > 0$, one can find a $(2/3 + \epsilon_0)$-approximate maximum matching of $G$ using $O(n \log n)$ space with high probability. This breaks the natural boundary of $2/3$ for this problem prevalent in the prior work and resolves an open problem of Bernstein~[ICALP'20] on whether a $(2/3 + \Omega(1))$-approximation is achievable.
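As a baseline for comparison, the simplest single-pass streaming algorithm is greedy maximal matching, which keeps an edge whenever both endpoints are still free; it guarantees only a 1/2-approximation in the worst case (a toy sketch for intuition; the algorithm of the talk that beats 2/3 is substantially more involved):

```python
import random

def greedy_stream_matching(edges, seed=0):
    # One pass over a randomly ordered edge stream, O(n) state:
    # keep an edge iff both of its endpoints are currently unmatched.
    stream = list(edges)
    random.Random(seed).shuffle(stream)
    matched, matching = set(), []
    for u, v in stream:
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
    return matching

# Cycle on 8 vertices: the maximum matching has 4 edges; greedy finds at least 2.
cycle = [(i, (i + 1) % 8) for i in range(8)]
m = greedy_stream_matching(cycle)
```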

https://u-paris.zoom.us/j/88942335626?pwd=R25JRVhJWGU4dnNEUFYxZ1RKbDRPQT09

Algorithms and complexity
Tuesday February 9, 2021, 11AM, Online
Xavier Waintal (Univ. Grenoble Alpes, CEA) What Limits the Simulation of Quantum Computers?

An ultimate goal of quantum computing is to perform calculations beyond the reach of any classical computer. It is therefore imperative that useful quantum computers be very difficult to simulate classically, otherwise classical computers could be used for the applications envisioned for the quantum ones. Perfect quantum computers are unarguably exponentially difficult to simulate: the classical resources required grow exponentially with the number of qubits N or the depth D of the circuit. This difficulty has triggered recent experiments on deep, random circuits that aim to demonstrate that quantum devices may already perform tasks beyond the reach of classical computing. These real quantum computing devices, however, suffer from many sources of decoherence and imprecision which limit the degree of entanglement that can actually be reached to a fraction of its theoretical maximum. They are characterized by an exponentially decaying fidelity F∼(1−ε)^(ND) with an error rate ε per operation as small as ≈1% for current devices with several dozen qubits or even smaller for smaller devices. In this work, we provide new insight on the computing capabilities of real quantum computers by demonstrating that they can be simulated at a tiny fraction of the cost that would be needed for a perfect quantum computer. Our algorithms compress the representations of quantum wave functions using matrix product states, which are able to capture states with low to moderate entanglement very accurately. This compression introduces a finite error rate ε so that the algorithms closely mimic the behavior of real quantum computing devices. The computing time of our algorithm increases only linearly with N and D in sharp contrast with exact simulation algorithms. We illustrate our algorithms with simulations of random circuits for qubits connected in both one- and two-dimensional lattices. We find that ε can be decreased at a polynomial cost in computing power down to a minimum error ε∞. 
Getting below ε∞ requires computing resources that increase exponentially with ε∞/ε. For a two-dimensional array of N=54 qubits and a circuit with control-Z gates, error rates better than state-of-the-art devices can be obtained on a laptop in a few hours. For more complex gates such as a swap gate followed by a controlled rotation, the error rate increases by a factor 3 for similar computing time. Our results suggest that, despite the high fidelity reached by quantum devices, only a tiny fraction (∼10^−8) of the system Hilbert space is actually being exploited.

Algorithms and complexity
Tuesday January 19, 2021, 11AM, Online
Christian Konrad (University of Bristol) Optimal Lower Bounds for Matching and Vertex Cover in Dynamic Graph Streams

In this talk, I will discuss simple optimal lower bounds on the one-way two-party communication complexity of approximate Maximum Matching and Minimum Vertex Cover with deletions. In this model, Alice holds a set of edges and sends a single message to Bob. Bob holds a set of edge deletions, which form a subset of Alice's edges, and needs to report a large matching or a small vertex cover in the graph spanned by the edges that are not deleted. Our results imply optimal space lower bounds for insertion-deletion streaming algorithms for Maximum Matching and Minimum Vertex Cover. An optimal space lower bound for Maximum Matching was previously given by Assadi et al. [SODA 2016]; however, that lower bound only holds for the subclass of streaming algorithms that are able to process very long (at least triple exponential in n) input streams.

This work appeared at CCC'20. Joint work with Jacques Dark.

Algorithms and complexity
Tuesday January 5, 2021, 11AM, Online
Benny Applebaum (Tel-Aviv University) The Round Complexity of Perfectly-Secure Multiparty Computation

We study the round complexity of general secure multiparty computation (MPC) in the classical information-theoretic model of Ben-Or, Goldwasser, and Wigderson [BGW88]. That is, we strive for perfect, information-theoretic and error-free, security against any coalition of at most t computationally-unbounded corrupted parties. Classical feasibility results show that this is possible for any t<n/2 in the passive setting, when the parties follow the protocol, and up to t<n/3 in the active (aka Byzantine) setting when the parties may deviate from the protocol.

I will survey a recent line of works that settles the round complexity of perfect MPC with optimal security threshold for general functions. In particular, we show that 2 rounds are sufficient and necessary in the passive setting and 4 rounds are sufficient and necessary in the active setting.

Based on joint works with Zvika Brakerski, Eliran Kachlon, Arpita Patra, and Rotem Tsabary.

BBB link: https://bbb-front.math.univ-paris-diderot.fr/recherche/yas-vbs-c25-ogj


Year 2020

Algorithms and complexity
Tuesday December 15, 2020, 2PM, Online
Paul Fermé (ENS Lyon) Tight Approximation Guarantees for Concave Coverage Problems

In the maximum coverage problem, we are given subsets $T_1, \ldots, T_m$ of a universe $[n]$ along with an integer $k$ and the objective is to find a subset $S \subseteq [m]$ of size $k$ that maximizes $C(S) := \lvert\bigcup_{i \in S} T_i\rvert$. It is a classic result that the greedy algorithm for this problem achieves an optimal approximation ratio of $(1-e^{-1})$. In this work we consider a generalization of this problem wherein an element $a$ can contribute by an amount that depends on the number of times it is covered. Given a concave, nondecreasing function $\varphi$, we define $C^{\varphi}(S) := \sum_{a \in [n]}w_a\varphi(|S|_a)$, where $|S|_a = |\{i \in S : a \in T_i\}|$. The standard maximum coverage problem corresponds to taking $\varphi(j) = \min\{j,1\}$. For any such $\varphi$, we provide an efficient algorithm that achieves an approximation ratio equal to the Poisson concavity ratio of $\varphi$, defined by $\alpha_{\varphi} := \min_{x \in \mathbb{N}^*} \frac{\mathbb{E}[\varphi(Poi(x))]}{\varphi(\mathbb{E}[Poi(x)])}$. Complementing this approximation guarantee, we establish a matching NP-hardness result when $\varphi$ grows in a sublinear way. As special cases, we improve the result of [BFGG20] about maximum multi-coverage, which was based on the unique games conjecture, and we recover the result of [DDMS20] on multi-winner approval-based voting for geometrically dominant rules. Our result goes beyond these special cases, and we illustrate it with applications to distributed resource allocation problems, welfare maximization problems, and approval-based voting for general rules.
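The Poisson concavity ratio is easy to evaluate numerically; the sketch below checks that for the standard coverage function $\varphi(j) = \min\{j,1\}$ it recovers the classical greedy ratio $1 - 1/e$ (the truncation bound and search range are ad-hoc choices for the example):

```python
import math

def expected_phi_poisson(phi, x, trunc=400):
    # E[phi(Poi(x))] by a truncated sum; the Poisson pmf is updated
    # iteratively (p *= x / j) to stay within float range.
    p = math.exp(-x)  # P(Poi(x) = 0)
    total = p * phi(0)
    for j in range(1, trunc):
        p *= x / j
        total += p * phi(j)
    return total

def poisson_concavity_ratio(phi, max_x=50):
    # alpha_phi = min over positive integers x of E[phi(Poi(x))] / phi(E[Poi(x)]);
    # since E[Poi(x)] = x, the denominator is simply phi(x).
    return min(expected_phi_poisson(phi, x) / phi(x) for x in range(1, max_x + 1))

alpha = poisson_concavity_ratio(lambda j: min(j, 1))  # standard maximum coverage
```

For this choice of $\varphi$ the numerator is $P(\mathrm{Poi}(x) \geq 1) = 1 - e^{-x}$ and the denominator is 1 for all $x \geq 1$, so the minimum is attained at $x = 1$, giving $1 - 1/e \approx 0.632$.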

Based on joint work with Siddharth Barman and Omar Fawzi.

Algorithms and complexity
Wednesday December 2, 2020, 2PM, Online
Claire Mathieu (IRIF) Competitive Data-Structure Dynamization

Data-structure dynamization is a general approach for making static data structures dynamic. It is used extensively in geometric settings and in the guise of so-called merge (or compaction) policies in big-data databases such as Google Bigtable and LevelDB (our focus). Previous theoretical work is based on worst-case analyses for uniform inputs – insertions of one item at a time and constant read rate. In practice, merge policies must not only handle batch insertions and varying read/write ratios, they can take advantage of such non-uniformity to reduce cost on a per-input basis. To model this, we initiate the study of data-structure dynamization through the lens of competitive analysis, via two new online set-cover problems. For each, the input is a sequence of disjoint sets of weighted items. The sets are revealed one at a time. The algorithm must respond to each with a set cover that covers all items revealed so far. It obtains the cover incrementally from the previous cover by adding one or more sets and optionally removing existing sets. For each new set the algorithm incurs build cost equal to the weight of the items in the set. In the first problem the objective is to minimize total build cost plus total query cost, where the algorithm incurs a query cost at each time t equal to the current cover size. In the second problem, the objective is to minimize the build cost while keeping the query cost from exceeding k (a given parameter) at any time. We give deterministic online algorithms for both variants, with competitive ratios of Θ(log∗n) and k, respectively. The latter ratio is optimal for the second variant.
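A classical example of such dynamization is the Bentley-Saxe logarithmic method, which the merge policies discussed above generalize: static sorted components of distinct power-of-two sizes are merged on insertion like digits in binary addition. A minimal Python sketch (the class name and interface are mine, for illustration only):

```python
import bisect

class DynamizedSortedSets:
    """Bentley-Saxe logarithmic method: keep static sorted arrays of
    distinct power-of-two sizes; an insertion merges components like
    binary addition, and a query probes every component."""

    def __init__(self):
        self.levels = []  # levels[i] is None or a sorted list of size 2^i

    def insert(self, x):
        carry, i = [x], 0
        while i < len(self.levels) and self.levels[i] is not None:
            carry = sorted(carry + self.levels[i])  # rebuild = merge
            self.levels[i] = None
            i += 1
        if i == len(self.levels):
            self.levels.append(None)
        self.levels[i] = carry

    def __contains__(self, x):
        for level in self.levels:
            if level is not None:
                j = bisect.bisect_left(level, x)
                if j < len(level) and level[j] == x:
                    return True
        return False
```

Each insertion amortizes to O(log n) merge work and each query probes O(log n) components; this build-cost versus query-cost tension is exactly what the set-cover formulation above makes explicit.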

Algorithms and complexity
Wednesday October 28, 2020, 11AM, Salle 3052 & Online
Simon Mauras (IRIF) Assessment of Covid-19 containment strategies in primary schools and workplaces using contact graphs

I will explain how modelling the epidemic using graphs could help in reopening strategies. This work has been done in collaboration with Guillaume Duboc, Max Dupré la Tour, Paolo Frasca, Claire Mathieu, Lulla Opatowski and Laurent Viennot.

Algorithms and complexity
Wednesday October 21, 2020, 11AM, Salle 3052 & Online
Geoffroy Couteau (IRIF) Contact Tracing

Presentation of the working group on privacy in contact tracing organized during the COVID-19 pandemic.

Algorithms and complexity
Wednesday October 7, 2020, 11AM, Salle 3052
Adrian Vladu (IRIF) Circulation Control for Faster Minimum Cost Flow in Unit-Capacity Graphs

We present an m^{4/3+o(1)} log W -time algorithm for solving the minimum cost flow problem in graphs with unit capacity, where W is the maximum absolute value of any edge weight. For sparse graphs, this improves over the best known running time for this problem and, by well-known reductions, also implies improved running times for the shortest path problem with negative weights, minimum cost bipartite b-matching when |b|_1 = O(m), and recovers the running time of the currently fastest algorithm for maximum flow in graphs with unit capacities, due to Liu and Sidford (FOCS, 2020).

Our algorithm relies on developing an interior point method–based framework which acts on the space of circulations in the underlying graph. From the combinatorial point of view, this framework can be viewed as iteratively improving the cost of a suboptimal solution by pushing flow around some carefully chosen circulations.

This talk will be self-contained and will assume no prior background in optimization. Based on https://arxiv.org/abs/2003.04863, appears in FOCS 2020.

Algorithms and complexity
Thursday October 1, 2020, 2PM, Salle 3052
Vera Traub (University of Bonn) An improved approximation algorithm for ATSP

In a recent breakthrough, Svensson, Tarnawski, and Végh gave the first constant-factor approximation algorithm for the asymmetric traveling salesman problem (ATSP). In this work we revisit their algorithm. While following their overall framework, we improve on each part of it.

Svensson, Tarnawski, and Végh perform several steps of reducing ATSP to more and more structured instances. We avoid one of their reduction steps (to irreducible instances) and thus obtain a simpler and much better reduction to vertebrate pairs. Moreover, we show that a slight variant of their algorithm for vertebrate pairs has a much smaller approximation ratio.

Overall we improve the approximation ratio from 506 to 22 + ε for any ε > 0. We also improve the upper bound on the integrality ratio of the standard LP relaxation from 319 to 22.

This is joint work with Jens Vygen.

Live webcast of IGAFIT Algorithmic Colloquium

Algorithms and complexity
Friday September 25, 2020, 11AM, Salle 3052
Makrand Sinha (CWI) k-Forrelation Optimally Separates Quantum and Classical Query Complexity

About this paper with Nikhil Bansal: https://arxiv.org/abs/2008.07003

Live webcast of CWI/QuSoft seminar

Algorithms and complexity
Friday September 18, 2020, 11AM, Salle 3052
Arjan Cornelissen (QuSoft, U of Amsterdam) A self-contained, simplified analysis of span programs algorithms

Over the years, many query-efficient quantum algorithms have been constructed through the span program framework. Despite its successes, the analysis behind this framework is complicated, and its full potential is not yet fully understood. In our current research, we attempt to clean up the framework and simplify its analysis, so as to better expose the mathematical structure and more clearly understand how it is exploited. In this talk, we showcase the simplified analysis and share some of the new insights we have gained along the way. No prior knowledge on span programs is required.

Live webcast of CWI/QuSoft seminar

Algorithms and complexity
Tuesday June 30, 2020, 7PM, Online
Amos Korman (IRIF) SIROCCO Prize for Innovation in Distributed Computing

Live broadcasting of the 2020 SIROCCO Prize for Innovation in Distributed Computing awarded to Amos Korman. Amos will give a 1h talk on his research regarding ants and mathematics. You need to register here in order to receive the zoom link: https://sirocco2020.cs.uni-paderborn.de/registration.html.

Algorithms and complexity
Tuesday June 9, 2020, 2:30PM, Online
Francesco D'amore (INRIA) On the Search Efficiency of Parallel Lévy Walks in ${\mathbb Z}^2$

Motivated by the Lévy flight foraging hypothesis – the premise that the movement of various animal species searching for food resembles a Lévy walk – we study the hitting time of parallel Lévy walks on the infinite 2-dimensional grid. Lévy walks are characterized by a parameter $\alpha \in (1,+\infty)$, which is the exponent of the power law distribution of the time intervals at which the moving agent randomly changes direction. In the setting we consider, called the ANTS problem (Feinerman et al. PODC 2012), $k$ independent discrete-time Lévy walks start simultaneously at the origin, and we are interested in the time $h_{k,\ell}$ before some walk visits a given target node on the grid, at distance $\ell$ from the origin.

In this setting, we provide a comprehensive analysis of the efficiency of Lévy walks for the complete range of the exponent $\alpha$. For any choice of $\alpha$, we observe that the total work until the target is visited, i.e., the product $k \cdot h_{k,\ell}$, is at least $\Omega(\ell^2)$ with constant probability. Our main result is that the right choice for $\alpha$ to get optimal work varies between $1$ and $3$, as a function of the number $k$ of available agents. For instance, when $k = \tilde \Theta(\ell^{1-\epsilon})$, for some positive constant $\epsilon < 1$, then the unique optimal setting for $\alpha$ lies in the super-diffusive regime $(2,3)$, namely, $\alpha = 2+\epsilon$. Our results should be contrasted with various previous works in the continuous time-space setting showing that the exponent $\alpha = 2$ is optimal for a wide range of related search problems on the plane.
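As a toy illustration of the process being analyzed, here is a simulation of a single discrete-time Lévy walk on $\mathbb{Z}^2$. The power-law run lengths are sampled via a continuous Pareto approximation, and all function names and parameter choices are mine:

```python
import random

def levy_step_length(alpha, rng):
    """Sample a run length from a power law P(L = l) ~ l^{-alpha},
    alpha > 1, via a continuous Pareto approximation."""
    u = 1.0 - rng.random()  # u in (0, 1]
    return max(1, int(u ** (-1.0 / (alpha - 1.0))))

def levy_walk_hits(target, alpha, max_steps, seed=0):
    """Run one discrete-time Levy walk from the origin on Z^2: pick a
    uniform axis direction, walk a power-law-distributed number of unit
    steps, repeat. Returns the first time the target is visited, or
    None if it is not reached within max_steps."""
    rng = random.Random(seed)
    x = y = t = 0
    directions = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    while t < max_steps:
        dx, dy = rng.choice(directions)
        for _ in range(levy_step_length(alpha, rng)):
            x, y, t = x + dx, y + dy, t + 1
            if (x, y) == target:
                return t
            if t >= max_steps:
                break
    return None
```

Running many independent copies of this walk for different values of $\alpha$ and recording the first hitting time of the target would reproduce the $k$-agent experiment that the work bound $k \cdot h_{k,\ell}$ refers to.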

The online reference is here https://hal.archives-ouvertes.fr/hal-02530253

Algorithms and complexity
Tuesday June 2, 2020, 11AM, Online
Weiqiang Wen (IRISA) Learning With Errors and Extrapolated Dihedral Cosets

The hardness of the learning with errors (LWE) problem is one of the most fruitful resources of modern cryptography. In particular, it is one of the most prominent candidates for secure post-quantum cryptography. Understanding its quantum complexity is therefore an important goal. We show that under quantum polynomial time reductions, LWE is equivalent to a relaxed version of the dihedral coset problem (DCP), which we call extrapolated DCP (eDCP). The extent of extrapolation varies with the LWE noise rate. By considering different extents of extrapolation, our result generalizes Regev's famous proof that if DCP is in BQP (quantum poly-time) then so is LWE (FOCS'02). We also discuss a connection between eDCP and Childs and Van Dam's algorithm for generalized hidden shift problems (SODA'07). Our result implies that a BQP solution for LWE might not require the full power of solving DCP, but rather only a solution for its relaxed version, eDCP, which could be easier.

Algorithms and complexity
Tuesday May 26, 2020, 11AM, Online
Édouard Bonnet (ENS Lyon) Twin-width

Inspired by an invariant defined on permutations by Guillemot and Marx [SODA '14], we introduce the notion of twin-width on graphs and on matrices. Proper minor-closed classes, bounded rank-width graphs, $K_t$-free unit ball graphs, posets with antichains of bounded size, and proper subclasses of permutation graphs all have bounded twin-width. On all these classes we will see how to compute in polynomial time a sequence of d-contractions, witness that the twin-width is at most d. FO model checking, that is deciding if a given first-order formula $\phi$ evaluates to true on a given binary structure G on a domain D, happens to be FPT in $|\phi|$ on classes of bounded twin-width, provided the witness is given. More precisely, being given a d-contraction sequence for G, our algorithm runs in time $f(d,|\phi|) |D|$ where f is a computable but non-elementary function. We will also see that bounded twin-width is preserved by FO interpretations and transductions. This unifies and significantly extends the knowledge on fixed-parameter tractability of FO model checking on non-monotone classes, such as the FPT algorithm on bounded-width posets by Gajarsky et al. [FOCS '15]. Finally we mention a curious connection between bounded twin-width and small classes.

Joint work with Colin Geniet, Eun Jung Kim, Stéphan Thomassé, and Rémi Watrigant

Algorithms and complexity
Tuesday May 19, 2020, 11AM, Online
Vincent Cohen-Addad (Google Zürich) On the Parameterized Complexity of Clustering

Clustering problems are fundamental computational problems. On the one hand they are at the heart of various data analysis, bioinformatics or machine learning approaches, on the other hand, they can be seen as variants of 'set cover' problems (involving e.g. metrics, graphs with a special structure, etc.) and so are very basic problems.

For many applications, finding efficient fixed-parameter algorithms is a very natural approach. Popular choices of parameters are either parameters of the problem itself (e.g. the number of clusters) or on the underlying structure of the input (e.g. the dimension in the metric case). We will focus on classic metric clustering objectives such as k-median and k-means and review recent results on the fixed parameter complexity of these problems, and the most important open problems.

Algorithms and complexity
Tuesday April 21, 2020, 11AM, Online
Yihui Quek (Stanford) Robust Quantum Minimum Finding with an Application to Hypothesis Selection

Finding the minimum in a well-ordered list of length N (“minimum finding”) is a critical subroutine in fields ranging from optimisation to finance. Following the 1996 quantum algorithm of Dürr and Høyer exhibiting a quadratic speedup for this problem, we ask if this speedup is preserved when the pairwise comparator between the elements is imprecise, or noisy in the following sense: given two elements to compare, if the values of the elements differ by at least α, then the comparison will be made correctly; if the values of the elements are closer than α, the outcome of the comparison is not subject to any guarantees. We answer this question in the affirmative, demonstrating a quantum algorithm for minimum-finding robust to such noise, that also preserves the quadratic speedup (to highest order) of the noiseless case while outputting a good approximation to the minimum with high probability. We demonstrate an application of this algorithm to the problem of hypothesis selection with N hypotheses, where the role of the noisy comparator is played by a statistical primitive known as the Scheffé test, which compares two hypotheses' l1 distances to an unknown distribution.

Algorithms and complexity
Friday April 10, 2020, 3PM, Online
André Schrottenloher, Yixin Shen (INRIA, IRIF) Improved Classical and Quantum Algorithms for Subset-Sum

We present new classical and quantum algorithms for solving random hard instances of the subset-sum problem, in which we are given n integers on n bits and try to find a subset of them that sums to a given target. This classical NP-complete problem has several applications in cryptography and underlies the security of some proposed post-quantum cryptosystems. At EUROCRYPT 2010, Howgrave-Graham and Joux (HGJ) introduced the representation technique and presented an algorithm running in time $\tilde{O}(2^{0.337 n})$. This asymptotic time was improved by Becker, Coron, Joux (BCJ) at EUROCRYPT 2011. We show how to improve this further. We then move to the context of quantum algorithms. The two previous quantum speedups in the literature are given by Bernstein, Jeffery, Lange and Meurer (PQCRYPTO 2013) and Helm and May (TQC 2018), which are respectively quantum versions of HGJ and BCJ. They both rely on the framework of quantum walks, use exponential quantum memory with quantum random-access and require an unproven conjecture on quantum walk updates. We devise a new algorithm, using quantum search only, that achieves the first quantum speedup in the model of classical memory with quantum random access. Next, we study improvements for the quantum walks. We show how to avoid the quantum walk conjecture and give a better quantum walk time complexity for subset-sum.
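For context, the classical meet-in-the-middle baseline of Horowitz and Sahni solves subset-sum in time and memory $\tilde{O}(2^{n/2})$; the representation-technique algorithms mentioned above (HGJ, BCJ) beat this exponent. A minimal Python sketch of that baseline only, not of the HGJ or BCJ algorithms:

```python
def subset_sum_mitm(nums, target):
    """Horowitz-Sahni meet-in-the-middle: enumerate all subset sums of
    each half, then look for a pair summing to the target. O(2^{n/2})
    time and memory -- the baseline that HGJ/BCJ improve on."""
    half = len(nums) // 2
    left, right = nums[:half], nums[half:]

    def all_sums(arr):
        # Map each achievable sum to one witness subset (as a tuple).
        sums = {0: ()}
        for x in arr:
            sums.update({s + x: sub + (x,) for s, sub in list(sums.items())})
        return sums

    left_sums, right_sums = all_sums(left), all_sums(right)
    for s, sub in left_sums.items():
        if target - s in right_sums:
            return sub + right_sums[target - s]
    return None
```

The representation technique gains over this by writing each solution in exponentially many redundant ways and keeping only a sparse, filtered fraction of each enumeration list.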

Joint work with Xavier Bonnetain and Rémi Bricout.

Algorithms and complexity
Tuesday March 31, 2020, 2:30PM, Online
Vincent Jugé (LIGM) Adaptive ShiversSort - A new sorting algorithm

I will present a sorting algorithm called adaptive ShiversSort. This recently developed algorithm speeds up sorting by taking advantage of runs of already-sorted elements in the input array. I will focus on the links between this algorithm and TimSort, which is the standard sorting algorithm in Python and Java. I will also show that the complexity of adaptive ShiversSort, in terms of comparisons performed, is optimal up to an additive linear term.
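The run-based idea common to TimSort and adaptive ShiversSort, sorting by merging the maximal already-sorted runs of the input, can be illustrated with a plain natural merge sort. This sketch pairs adjacent runs naively, whereas the actual algorithms choose which runs to merge via more careful policies (adaptive ShiversSort bases the choice on the binary logarithms of the run lengths):

```python
def find_runs(a):
    """Split a into maximal non-decreasing runs, the structure that
    TimSort-family algorithms exploit."""
    runs, start = [], 0
    for i in range(1, len(a) + 1):
        if i == len(a) or a[i] < a[i - 1]:
            runs.append(a[start:i])
            start = i
    return runs

def merge(u, v):
    """Standard two-way merge of sorted lists."""
    out, i, j = [], 0, 0
    while i < len(u) and j < len(v):
        if u[i] <= v[j]:
            out.append(u[i]); i += 1
        else:
            out.append(v[j]); j += 1
    return out + u[i:] + v[j:]

def natural_merge_sort(a):
    """Repeatedly merge adjacent runs; on nearly-sorted input this does
    far fewer comparisons than sorting from scratch."""
    runs = find_runs(a) or [[]]
    while len(runs) > 1:
        runs = [merge(runs[k], runs[k + 1]) if k + 1 < len(runs) else runs[k]
                for k in range(0, len(runs), 2)]
    return runs[0]
```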

Algorithms and complexity
Thursday February 20, 2020, 10AM, Salle 3052
Alantha Newman (Université Grenoble Alpes) Approximation algorithms based on LP and SDP rounding (Lecture 4/4)

Applications: Max-k-cut, graph coloring, theta function and perfect graphs, unique games.

Additional information: https://pagesperso.g-scop.grenoble-inp.fr/~newmana/AlgsGTMiniCourse2020/

Algorithms and complexity
Tuesday February 18, 2020, 10AM, Salle 3052
Alantha Newman (Université Grenoble Alpes) Approximation algorithms based on LP and SDP rounding (Lecture 3/4)

SDP Rounding (basic concepts such as problem formulations, normal distribution, random hyperplane rounding).

Additional information: https://pagesperso.g-scop.grenoble-inp.fr/~newmana/AlgsGTMiniCourse2020/

Algorithms and complexity
Friday February 14, 2020, 10AM, Salle 3052
Alantha Newman (Université Grenoble Alpes) Approximation algorithms based on LP and SDP rounding (Lecture 2/4)

Applications: Dominating sets in digraphs, kernels, semi-kernels, arc colored digraphs, asymmetric k-center, triangle transversals.

Additional information: https://pagesperso.g-scop.grenoble-inp.fr/~newmana/AlgsGTMiniCourse2020/

Algorithms and complexity
Thursday February 13, 2020, 10AM, Salle 3052
Alantha Newman (Université Grenoble Alpes) Approximation algorithms based on LP and SDP rounding (Lecture 1/4)

LP Randomized Rounding (basic concepts such as extreme points, duality, complementary slackness).

Additional information: https://pagesperso.g-scop.grenoble-inp.fr/~newmana/AlgsGTMiniCourse2020/

Algorithms and complexity
Tuesday February 11, 2020, 11AM, Salle 1007
Simon Apers (CWI) Quantum Speedup for Graph Sparsification, Cut Approximation and Laplacian Solving

Graph sparsification is an important component of a large number of algorithms, ranging from approximation algorithms for cut problems to Laplacian system solvers. In its strongest form, “spectral sparsification” reduces the number of edges to near-linear in the number of nodes, while approximately preserving the cut and spectral structure of the graph. The breakthrough work by Benczúr and Karger (STOC'96) and Spielman and Teng (STOC'04) showed that sparsification can be done optimally in time near-linear in the number of edges of the original graph.

In this work we show that quantum algorithms allow us to speed up spectral sparsification, and thereby many of the derived algorithms. To this end we build on a string of existing results, most notably sparsification algorithms by Spielman and Srivastava (STOC'08) and Koutis and Xu (TOPC'16), a spanner construction by Thorup and Zwick (STOC'01), a single-source shortest-paths quantum algorithm by Dürr et al. (ICALP'04) and an efficient k-wise independent hash construction by Christiani, Pagh and Thorup (STOC'15). Combining our sparsification algorithm with existing classical algorithms yields the first quantum speedup for approximating the max cut, min cut, min st-cut, sparsest cut and balanced separator of a graph. Combining our algorithm with a classical Laplacian solver, we demonstrate a similar speedup for Laplacian solving, for approximating effective resistances, cover times and eigenvalues of the Laplacian, and for spectral clustering.

This is joint work with Ronald de Wolf.

Algorithms and complexity
Tuesday February 4, 2020, 11AM, Salle 1007
Jacques Dark (University of Warwick) Two-Party Lower Bounds for Graph Streaming Problems

In the turnstile graph streaming model we receive a sequence of edge insertions and deletions and we wish to store some small summary which can answer a question about the final graph. A few years ago [Li, Nguyen, Woodruff 2014] and [Ai, Hu, Li, Woodruff 2016] explained why all known turnstile streaming algorithms could be described as “linear sketches” (random linear transformations) of the underlying data: because any 1-pass turnstile streaming algorithm which works for all possible input streams can be simulated by a linear sketch with only logarithmic space overhead.

This is a very powerful tool which immediately transformed every sketching lower bound into a turnstile streaming lower bound - such as the tight bound for approximate maximum matching of [Assadi, Khanna, Li, Yaroslavtsev 2016]. However, there are many interesting input constraints we can consider which are not compatible with the sketching reduction. What happens if we enforce that the input graph must always be simple (contains no repeat edges)? What happens when we bound the number of deletions?

In this talk, I will present work towards answering these questions. We introduce a new elementary communication problem - finding a hidden bit in a partially revealed matrix - and show how tight two-party communication bounds for the maximum matching and minimum vertex cover problems can be achieved by surprisingly simple reductions from this problem. In particular, this gives us tight turnstile streaming bounds even for always-simple inputs or streams of bounded-length n^2. I will conclude with a conjecture for the bounded-deletions case and a discussion of the remaining challenges.

Joint work with Christian Konrad.

Algorithms and complexity
Tuesday January 28, 2020, 11AM, Salle 1007
Spyros Angelopoulos (LIP6) Online Computation with Untrusted Advice

The advice model of online computation captures the setting in which an online algorithm is given some partial information concerning the request sequence. This paradigm allows one to establish tradeoffs between the amount of this additional information and the performance of the online algorithm. However, unlike real life, in which advice is a recommendation that we can choose to follow or to ignore based on trustworthiness, in the current advice model the online algorithm typically treats it as infallible. This means that if the advice is corrupt or, worse, if it comes from a malicious source, the algorithm may perform poorly. In this work, we study online computation in a setting in which the advice is provided by an untrusted source. Our objective is to quantify the impact of untrusted advice so as to design and analyze robust online algorithms that are resilient and perform well even when the advice is generated in a malicious, adversarial manner. We show how the new paradigm can be applied to well-studied online problems such as ski rental, online bidding, bin packing, and list update.
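As a concrete example of the tension between trusting and ignoring advice, consider ski rental (rent for 1 per day, or buy for b). A simple trust-parameter scheme from the learning-augmented literature interpolates between following a predicted number of ski days and the classical break-even strategy; the sketch below is in that spirit and is not necessarily the algorithm of this work (function and parameter names are mine):

```python
import math

def ski_rental_cost(buy_price, true_days, buy_day):
    """Total cost if we rent until buy_day and buy on that morning."""
    if true_days < buy_day:
        return true_days            # season ends before we ever buy
    return (buy_day - 1) + buy_price

def advice_strategy(buy_price, predicted_days, lam):
    """Choose the buying day from a (possibly wrong) prediction.
    lam in (0, 1]: lam = 1 ignores the advice (break-even strategy),
    smaller lam trusts it more."""
    if predicted_days >= buy_price:
        return max(1, math.ceil(lam * buy_price))   # advice: buy early
    return max(1, math.ceil(buy_price / lam))       # advice: keep renting
```

With lam = 1 both branches buy at day b, recovering the 2-competitive break-even strategy; smaller lam commits to the advice earlier, gaining when the prediction is accurate and losing robustness when it is adversarial.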

Joint work with Christoph Dürr, Shendan Jin, Shahin Kamali and Marc Renault.

Algorithms and complexity
Thursday January 23, 2020, 11AM, Salle 1007
Moni Naor (Weizmann Institute) Instance Complexity and Unlabeled Certificates in the Decision Tree Model

Suppose that you want to check whether a graph satisfies some (graph) property. The goal is to minimize the number of queries to the adjacency matrix. You are given as an “untrusted hint” an isomorphic copy of the graph. You must always be correct, but the number of queries made is only measured when the hint is correct. Do such unlabeled certificates help? In the worst case? Per instance?

In this talk I will discuss labeled and unlabeled certificates, in particular in the context of “instance optimality”. This is a measure of goodness of an algorithm in which the performance of one algorithm is compared to others per input. This is in sharp contrast to worst-case and average-case complexity measures, where the performance is compared either on the worst input or on an average one, respectively.

Joint work with Tomer Grossman and Ilan Komargodski

Algorithms and complexity
Tuesday January 21, 2020, 11AM, Salle 1007
Tatiana Starikovskaya (ENS) Sliding window property testing for regular languages

We discuss the problem of recognizing regular languages in a variant of the streaming model of computation, called the sliding window model. In this model, we are given the size n of the sliding window and a stream of symbols. At each time instant, we must decide whether the suffix of length n of the current stream (“the window”) belongs to a given regular language.

In this work, we make an important step forward and combine the sliding window model with the property testing setting, which results in ultra-efficient algorithms for all regular languages. Informally, a sliding window property tester must accept the active window if it belongs to the language and reject it if it is far from the language. We consider deterministic and randomized sliding window property testers with one-sided and two-sided errors. In particular, we show that for any regular language, there is a deterministic sliding window property tester that uses logarithmic space and a randomized sliding window property tester with two-sided error that uses constant space.
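For contrast with these ultra-efficient testers, the naive exact algorithm stores the entire window and reruns the automaton after every arrival, costing O(n) space and O(n) time per symbol. A minimal Python sketch of that baseline (the DFA encoding and all names are mine):

```python
from collections import deque

def sliding_window_membership(stream, n, delta, start, accepting):
    """Naive sliding-window recognizer: keep the last n symbols and
    rerun the DFA on the whole window after each arrival."""
    window, answers = deque(maxlen=n), []
    for symbol in stream:
        window.append(symbol)
        state = start
        for ch in window:
            state = delta[state][ch]
        answers.append(len(window) == n and state in accepting)
    return answers

# Demo: the language "even number of a's", window size 2.
delta = {0: {'a': 1, 'b': 0}, 1: {'a': 0, 'b': 1}}
answers = sliding_window_membership("abaa", 2, delta, start=0, accepting={0})
```

The testers in this work replace the stored window by a logarithmic-size (or even constant-size) summary, at the price of only approximate answers near the language boundary.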

The talk is based on a joint work with Ganardi, Hucke, and Lohrey published in ISAAC 2019.

Algorithms and complexity
Tuesday January 14, 2020, 11AM, Salle 1007
Olivier Tercieux (Paris School of Economics) Unpaired Kidney Exchange: Overcoming Double Coincidence of Wants without Money

In this talk, I'll present the basics of the kidney exchange problem and a brief overview of the different practices in the world. I'll further describe a new matching algorithm, Unpaired kidney exchange, and study its properties in a dynamic matching model with heterogeneous agents. We will show that the average waiting time under the Unpaired algorithm is close to optimal, and substantially less than under the standard, and commonly used, pairwise and chain exchange algorithms. Using a rich dataset of the kidney patients in France, counterfactual simulations show that the Unpaired algorithm can match nearly 57% of the patients, with an average waiting time of 424 days (state-of-the-art algorithms match about 31% with an average waiting time of 675 days or more). The optimal algorithm performs only slightly better: it matches 58% of the patients and leads to an average waiting time of 410 days. The Unpaired algorithm confronts two incentive-related practical challenges. We address those challenges via a practical version of the Unpaired algorithm that employs kidneys from the deceased-donor waiting list. The practical version can match nearly 87% of patient-donor pairs, while reducing the average waiting time to about 141 days. Finally, I'll conclude by discussing the relationship between this algorithm and the current reform of the French kidney exchange program.

Most of the talk is based on https://drive.google.com/file/d/1ZxPC3sc9HvrlG7JUtqCdyZ_BSHttkGhN/view. I'll also mention https://www.ipp.eu/publication/juin-2019-perspectives-sur-le-programme-de-dons-croises-de-reins-en-france/.


Year 2019

Algorithms and complexity
Tuesday December 17, 2019, 11AM, Salle 1007
Clément Canonne (IBM Research) Uniformity Testing for High-Dimensional Distributions: Subcube Conditioning, Random Restrictions, and Mean Testing

Given a distribution p on {-1,1}^d, we want to test whether p is uniform. If p is assumed to be a product distribution, this can be done with Theta(sqrt{d}) samples; without this assumption, things get exponentially worse and Theta(sqrt{2^d}) samples are necessary and sufficient. Suppose, however, that we can condition on arbitrary bits: does the problem become easier? If so, by how much?

This is the “subcube conditional sampling model”, first considered in Bhattacharyya and Chakraborty (2018). We provide a nearly optimal ~O(sqrt{d})-algorithm for testing uniformity in this model. The key component of our proof is a natural notion of random restriction for distributions on {-1,1}^d, and a quantitative analysis of how such a restriction affects the mean vector of the distribution. Along the way, we provide a /very/ nearly tight upper bound on the (standard) sample complexity of a natural problem, “mean testing.”
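A natural statistic for mean testing makes the idea concrete: from m samples, compute Z = ||sum_i x^i||^2 - m*d, which has expectation 0 when the mean vector is 0 (as under the uniform distribution) and m(m-1)||mu||^2 when the mean vector is mu. The sketch below is a toy illustration with parameters of my choosing, not the paper's tester:

```python
import random

def mean_test_statistic(samples, d):
    """Z = ||sum_i x^i||^2 - m*d for samples x^i in {-1,1}^d.
    E[Z] = 0 when the mean vector is 0 (e.g. under uniform);
    E[Z] = m(m-1)*||mu||^2 when the mean vector is mu."""
    m = len(samples)
    coord_sums = [sum(x[j] for x in samples) for j in range(d)]
    return sum(v * v for v in coord_sums) - m * d

# Demo: uniform vs. a product distribution where each bit has mean 0.4.
rng = random.Random(0)
d, m = 50, 200
uniform = [[rng.choice([-1, 1]) for _ in range(d)] for _ in range(m)]
biased = [[1 if rng.random() < 0.7 else -1 for _ in range(d)] for _ in range(m)]
z_uniform = mean_test_statistic(uniform, d)
z_biased = mean_test_statistic(biased, d)
```

Here z_biased concentrates near m(m-1)·d·0.16 ≈ 3.2·10^5, far above the O(m·sqrt(d)) fluctuations of z_uniform, so thresholding Z separates the two cases.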

Joint work with Xi Chen, Gautam Kamath, Amit Levi, and Erik Waingarten.

Algorithms and complexity
Monday December 16, 2019, 12:30PM, Amphi Turing
M. Teillaud, T. Starikovskaya, M. Bonamy, C. Mathieu Autour des algorithmes

This half-day of talks is in celebration of Algorithms, the research domain of Claire Mathieu, 2019 recipient of a CNRS Silver Medal. The talks, aimed at a non-specialized audience, will present a sample of research on Algorithms in France in a few selected areas: Computational Geometry (Monique Teillaud), Strings (Tatiana Starikovskaya), Graphs and Complexity (Marthe Bonamy), and Social Choice (Claire Mathieu). The afternoon will conclude with a discussion of new research directions in Algorithms. The participants in the roundtable will give their viewpoints on problems in the design and analysis of algorithms arising from their respective research domains. We may hear about statistical machine learning, clustering, and social networks (Anne Boyer), data journalism, detection of fake news, and managing data in the Cloud (Ioana Manolescu), and economics and society (Camille Terrier).

Programme:

12:30–1:15 PM: Welcome and coffee

1:15–1:30 PM: Opening

1:30–2:15 PM: Talk by Monique Teillaud, Autour des triangulations de Delaunay

2:15–3:00 PM: Talk by Tatiana Starikovskaya, Streaming algorithms for string processing

3:00–3:45 PM: Talk by Marthe Bonamy, Complexity of graph coloring

3:45–4:15 PM: Coffee break

4:15–5:00 PM: Talk by Claire Mathieu, Rank Aggregation

5:00–6:00 PM: Roundtable “Prospectives de l'algorithmique en France” (perspectives on algorithms research in France), moderated by Claire Mathieu. Participants: Anne Boyer, Ioana Manolescu and Camille Terrier.

Algorithms and complexity
Tuesday December 3, 2019, 11AM, Salle 1007
Shahrzad Haddadan (Max Planck) Algorithms for top-k Lists and Social Networks

Today’s massive and dynamic data sets have motivated many computer scientists and mathematicians to study classical problems in combinatorics and graph theory in various settings. In certain settings the algorithms’ access to the data is restricted while in others the resources are limited. In this talk we explore such settings in three directions: ranking of objects, property testing in social networks and finding communities in dynamic graphs.

In the first part, we discuss algorithms on permutations as well as prefixes of permutations, also known as top-k lists. The study of the latter particularly arises in big data scenarios when analysis of the entire permutation is costly or not interesting. We start by discussing random walks on the set of full rankings or permutations of n objects, a set whose size is n!. From the 1990s to today, various versions of this problem have been studied, each with a different motivation.

The second part of the talk is devoted to property testing in social networks: given a graph, an algorithm seeks to approximate several parameters of a graph just by accessing the graph by means of oracles and while querying these oracles as few times as possible. We introduce a new oracle access model which is applicable to social networks, and assess the complexity of estimating parameters such as number of users (vertices) in this model.

In the third part of the talk, we introduce a new dynamic graph model which is based on triadic closure: a friendship is more likely to be formed between a pair of users with a larger number of mutual friends. We find upper bounds for the rate of graph evolution in this model and using such bounds we devise algorithms discovering communities. I will finish this talk by proposing new directions and presenting related open problems.

Algorithms and complexity
Wednesday November 27, 2019, 2PM, Salle 3052
Adrian Vladu (Boston University) Submodular Maximization with Matroid and Packing Constraints in Parallel

We study the adaptive complexity of maximizing the multilinear extension of a submodular function, subject to a matroid constraint, or multiple packing constraints. The goal is to match the best approximation guarantees known in the classical sequential setting while performing only a few adaptive rounds, each of which consists of a polynomially bounded number of queries to an evaluation oracle.

Our results are as follows:

* For a matroid constraint, in O(log^2 n/ε^3) adaptive rounds, we obtain a 1−1/e−ε approximation for monotone functions, and a 1/e−ε approximation for non-monotone functions. This offers an exponential speedup over the previously existing algorithms.

* For m packing constraints, up to polylogarithmic factors in n, m and 1/ε, we require Õ(1/ε^2) adaptive rounds to obtain a 1/e−ε approximation for non-monotone functions, and 1-1/e-ε approximation for monotone functions. This matches the iteration complexity of the fastest currently known parallel algorithm for solving packing linear programs [Young ’01, Mahoney et al, ’16].

Our results apply more generally to the problem of maximizing a diminishing returns submodular (DR-submodular) function.

Algorithms and complexity
Tuesday October 15, 2019, 11AM, Salle 1007
Zvi Lotker (Ben Gurion University) Variation on preferential-attachment

In this talk, I will describe how preferential attachment arises from first principles using game theory. Next, I will extend the model of preferential attachment into a general model which allows for the incorporation of homophily ties in the network. This talk is based on joint works with Prof. Chen Avin, Avi Cohen, Yinon Nahum, Prof. Pierre Fraigniaud, and Prof. David Peleg.

Algorithms and complexity
Tuesday September 10, 2019, 11AM, Salle 1007
David Saulpic (ENS) Fast Approximation Schemes for Clustering in Doubling Metrics

We consider classical clustering problems such as k-Median or k-Means in metric spaces that generalize Euclidean space, namely doubling metrics. We give a surprisingly simple framework that yields nearly linear-time approximation schemes for each problem, a significant improvement over the state-of-the-art algorithms, which run in time n^{(d/ε)^{O(d)}} for metrics of doubling dimension d. Moreover, we show how to extend the techniques used to get the first efficient approximation schemes for prize-collecting k-Median and k-Means, and efficient bicriteria approximation schemes for k-Median with outliers, k-Means with outliers, and k-Center.

Algorithms and complexity
Tuesday July 2, 2019, 11AM, Salle 1007
Andrei Romashchenko (LIRMM) An Operational Characterization of Mutual Information in Algorithmic Information Theory

We show that for every pair of strings (x, y) the mutual information between x and y in the sense of Kolmogorov complexity is equal (within logarithmic precision) to the length of the longest shared secret key that two parties, one having x and the other having y, can establish via a probabilistic protocol with interaction on a public channel. We extend this result to the setting of L > 2 parties holding mutually correlated strings x_1, …, x_L. [Joint work with Marius Zimand]
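For readers less familiar with the notation, the algorithmic mutual information referred to above is the standard quantity defined from plain Kolmogorov complexity C(·):

```latex
I(x:y) \;=\; C(x) + C(y) - C(x,y)
```

The result then says that the maximal length of a secret key the two parties can agree on over the public channel equals I(x:y), up to additive logarithmic terms.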

Algorithms and complexity
Monday June 24, 2019, 11AM, Salle 3052
Carola Doerr (CNRS, LIP6 Sorbonne University) Evolutionary Algorithms – From Theory to Practice and Back

Most real-world optimization problems do not have an explicit problem formulation, but only allow one to compute the quality of selected solution candidates. Solving such black-box optimization problems requires iterative procedures which use the feedback gained from previous evaluations to determine the strategy by which the next solution candidates are generated. Many black-box optimization algorithms, such as Simulated Annealing, Evolutionary Algorithms, and Swarm Intelligence Algorithms, are randomized, making it very difficult to analyze their performance mathematically.

In the last 15 years, the theory of randomized black-box optimization has advanced considerably, and has contributed to efficient optimization by providing insights into the working principles of black-box optimization which are hard or impossible to obtain by empirical means. On the other hand, empirically-guided benchmarking has opened up new research directions for theoretical investigations.

In this presentation we will discuss the state of the art in the theory of randomized black-box optimization algorithms. As part of this critical survey we will also mention a number of open questions and connections to other fields of Computer Science.

Algorithms and complexity
Thursday June 6, 2019, 2PM, Salle 3052
Baruch Schieber (NJIT) Constrained Submodular Maximization via Greedy Local Search

We present a simple combinatorial 1/2(1-1/e^2)-approximation algorithm for maximizing a monotone submodular function subject to a knapsack and a matroid constraint. This classic problem is known to be hard to approximate within factor better than 1-1/e. We extend the algorithm to yield 1/(k+1)(1-1/e^(k+1)) approximation for submodular maximization subject to a single knapsack and k matroid constraints, for any fixed k > 1. Our algorithms, which combine the greedy algorithm of [Khuller, Moss and Naor, 1999] and [Sviridenko, 2004] with local search, show the power of this natural framework in submodular maximization with combined constraints.

This is joint work with Kanthi Sarpatwar and Hadas Shachnai
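As background for the greedy component, here is a minimal Python sketch of the classical greedy for a monotone submodular objective: maximum coverage under a cardinality constraint, the simplest matroid. This is only an illustration with a made-up instance, not the talk's algorithm, which combines greedy with local search under combined knapsack and matroid constraints.

```python
def greedy_max_coverage(sets, k):
    """Classical greedy for monotone submodular maximization under a
    cardinality constraint: repeatedly pick the set with the largest
    marginal coverage. This yields a (1 - 1/e)-approximation.
    `sets` maps a set's name to the elements it covers."""
    chosen, covered = [], set()
    for _ in range(k):
        # Pick the set maximizing the marginal gain over what is covered.
        best = max(sets, key=lambda s: len(sets[s] - covered))
        if not sets[best] - covered:
            break  # no remaining marginal gain
        chosen.append(best)
        covered |= sets[best]
    return chosen, covered

# Hypothetical instance: four sets over the universe {1, ..., 6}.
sets = {'A': {1, 2, 3}, 'B': {3, 4}, 'C': {4, 5, 6}, 'D': {1, 6}}
chosen, covered = greedy_max_coverage(sets, k=2)
print(chosen, covered)  # picks 'A' then 'C', covering all six elements
```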

Algorithms and complexity
Tuesday May 28, 2019, 11AM, Salle 1007
Simon Apers (INRIA) Quantum Fast-Forwarding: Markov Chains and Graph Property Testing

I will introduce a new tool for quantum algorithms called quantum fast-forwarding (QFF). The tool uses quantum walks as a means to quadratically fast-forward a reversible Markov chain. Using a number of quantum walk steps that scales as the square root of the classical runtime, and inversely with the 2-norm of the classical distribution, it retrieves a quantum superposition that encodes the classical distribution. This shows that quantum walks can accelerate the transient dynamics of Markov chains, thereby complementing the line of results that proves the acceleration of their limit behavior.

This tool leads to speedups on random walk algorithms in a very natural way. Specifically, I will consider random walk algorithms for testing graph expansion and clusterability, and show that QFF quadratically improves the dependency of the classical property testers on the random walk runtime. Moreover, this new quantum algorithm exponentially improves the space complexity of the classical tester to logarithmic. As a subroutine of independent interest, I use QFF to determine whether a given pair of nodes lies in the same cluster or in separate clusters. This solves a robust version of s-t connectivity, relevant in a learning context for classifying objects among a set of examples. The different algorithms crucially rely on the quantum speedup of the transient behavior of random walks.

Algorithms and complexity
Tuesday May 21, 2019, 11AM, Salle 1007
Miklos Santha (IRIF, CQT) Discrete logarithm and Diffie-Hellman problems in identity black-box groups

We investigate the computational complexity of the discrete logarithm, the computational Diffie-Hellman and the decisional Diffie-Hellman problems in some identity black-box groups G_{p,t}, where p is a prime number and t is a positive integer. These are defined as quotient groups of the vector space Z_p^{t+1} by a hyperplane H given through an identity oracle. While in general black-box groups with unique encoding these computational problems are classically all hard and quantumly all easy, we find that in the groups G_{p,t} the situation is more contrasted. We prove that while there is a polynomial time probabilistic algorithm to solve the decisional Diffie-Hellman problem in G_{p,1}, the probabilistic query complexity of all the other problems is Omega(p) and their quantum query complexity is Omega(\sqrt{p}). Our results therefore provide a new example of a group where the computational and the decisional Diffie-Hellman problems have widely different complexity.

Joint work with Gabor Ivanyos and Antoine Joux.

Algorithms and complexity
Tuesday May 7, 2019, 11AM, Salle 1007
Benjamin Smith (INRIA, LIX) Pre- and post-quantum Diffie-Hellman from groups, actions, and isogenies

Diffie-Hellman key exchange is at the foundations of public-key cryptography, but conventional group-based Diffie-Hellman is vulnerable to Shor's quantum algorithm. A range of 'post-quantum Diffie-Hellman' protocols have been proposed to mitigate this threat, including the Couveignes, Rostovtsev-Stolbunov, SIDH, and CSIDH schemes, all based on the combinatorial and number-theoretic structures formed by isogenies of elliptic curves. Pre- and post-quantum Diffie-Hellman schemes resemble each other at the highest level, but the further down we dive, the more differences emerge, differences that are critical when we use Diffie-Hellman as a basic component in more complicated constructions. In this survey we compare and contrast pre- and post-quantum Diffie-Hellman algorithms, highlighting some important subtleties.

Algorithms and complexity
Tuesday April 16, 2019, 11AM, Salle 1007
Clément Canonne (Stanford University) Statistical Inference Under Local Information Constraints

Independent samples from an unknown probability distribution p on a domain of size k are distributed across n players, with each player holding one sample. Each player can send a message to a central referee in a simultaneous message passing (SMP) model of communication, whose goal is to solve a pre-specified inference problem on p. The catch, however, is that each player cannot simply send their own sample to the referee; instead, the message they send must obey some (local) information constraint. For instance, each player may be limited to communicating only L bits, where L « log k; or they may seek to reveal as little information as possible, and preserve local differential privacy.

We propose a general formulation for inference problems in this distributed setting, and instantiate it to two fundamental inference questions, learning and uniformity testing. We study the role of randomness for those questions, and obtain striking separations between public- and private-coin protocols for the latter, while showing the two settings are equally powerful for the former. (Put differently, sharing with your neighbors does help a lot for the test, but not really for the learning.)

Based on joint works with Jayadev Acharya (Cornell University), Cody Freitag (Cornell University), and Himanshu Tyagi (IISc Bangalore).

Algorithms and complexity
Tuesday March 26, 2019, 11AM, Salle 1007
Alexander Knop (UC San Diego) On proof systems based on branching programs

It is well known that there is a connection between the resolution complexity of a formula and the complexity of (very weak) branching programs for the corresponding search problem. In the talk, we will discuss proof systems that allow us to generalize this statement, and explain lower bounds for these proof systems. In order to achieve these lower bounds, we show how to transform functions with high fixed-partition communication complexity into functions with high best-partition communication complexity; as a result, we translate the landscape of fixed-partition communication complexity classes into the landscape of best-partition communication complexity classes.

Algorithms and complexity
Tuesday March 5, 2019, 11AM, Salle 1007
Valia Mitsou (IRIF) Limitations of treewidth for problems beyond NP

In this seminar, we take a closer look at the parameterized complexity of problems belonging to Σ^p_2 and Π^p_2, the second level of the polynomial hierarchy. We provide tight fine-grained bounds on their complexity with respect to the most important structural graph parameter, the treewidth. We observe that these problems exhibit similar behavior: we show that a variety of problems, including Choosability, ∃∀SAT, and several problems from AI such as Abduction and Abstract Argumentation, admit a $2^{2^{O(tw)}}$ algorithm but cannot be solved in time $2^{2^{o(tw)}}$ under the Exponential Time Hypothesis.

Algorithms and complexity
Friday March 1, 2019, 3PM, Salle 1007
Jacob Leshno (Chicago Booth) Facilitating Student Information Acquisition in Matching Markets

We develop a tractable model for a matching market where students form preferences through costly information acquisition. As the market outcome consists of both an assignment as well as the acquired information, we study how the marketplace can direct both the allocation and the information acquisition process. We define regret-free stable outcomes, where the matching is stable under the partial preferences and each student has acquired information optimally, as if she were last to market; regret-free stable outcomes are optimal in settings where students make decentralized decisions about how to acquire information and can wait for more information. We show that regret-free stable outcomes always exist and can be characterized by cutoffs. However, we also show that regret-free stable outcomes cannot be discovered under practical restrictions – any discovery process can reach deadlocks where every student would like to wait for additional market information before acquiring information. Instead, historical information can facilitate the implementation of approximate regret-free stable outcomes, suggesting that an important role of matching markets is to aggregate information from multiple years. We also provide a method for bootstrapping information acquisition when historical information is not available.

Algorithms and complexity
Tuesday February 26, 2019, 11AM, Salle 1007
Uri Zwick (Tel Aviv University) Faster k-SAT algorithms using biased-PPSZ

The PPSZ algorithm, due to Paturi, Pudlak, Saks and Zane, is currently the fastest known algorithm for the k-SAT problem, for every k>3. For 3-SAT, a tiny improvement over PPSZ was obtained by Hertli. We introduce a biased version of the PPSZ algorithm using which we obtain an improvement over PPSZ for every k>=3. For k=3 we also improve on Hertli's result and get a much more noticeable improvement over PPSZ, though still relatively small. In particular, for Unique 3-SAT, we improve the current bound from 1.308^n to 1.307^n.

Joint work with Thomas Dueholm Hansen, Haim Kaplan and Or Zamir

Algorithms and complexity
Tuesday February 19, 2019, 11AM, Salle 3052
Danupon Nanongkai (KTH) Distributed Shortest Paths, Exactly

This talk concerns the problem of quickly computing distances and shortest paths on distributed networks (the CONGEST model). There have been many developments for this problem in the last few years, resulting in tight approximation schemes. This left open whether exact algorithms can perform equally well. In this talk, we will discuss some recent progress in answering this question. The most recent works that this talk is based on are with Sebastian Krinninger (FOCS 2018) and Aaron Bernstein (ArXiv 2018).

Algorithms and complexity
Tuesday January 29, 2019, 11AM, Salle 1007
Balthazar Bauer (ENS) An application of communication complexity to cryptography

Hash functions are very important primitives in cryptology. To use one, we need to trust its designer. To mitigate this, we can combine several independently built hash functions, but we must then assume that their designers could have communicated. This is why we can reduce the security properties of such combiners to communication complexity problems. After presenting these reductions, we will look in detail at the communication problems linked to these properties, at what is known about them, and at new results. Finally, the open problems that challenge us will be presented.


Year 2018

Algorithms and complexity
Tuesday December 4, 2018, 11AM, Salle 1007
Geoffroy Couteau (KIT) Compressing Vector OLE

In this talk, I will discuss recent results on the problem of compressing correlated (pseudo) random strings.

Oblivious linear-function evaluation (OLE) is a secure two-party protocol allowing a receiver to learn any linear combination of a pair of field elements held by a sender. OLE serves as a common building block for secure computation of arithmetic circuits, analogously to the role of oblivious transfer (OT) for boolean circuits. A useful extension of OLE is vector OLE (VOLE), allowing the receiver to learn any linear combination of two vectors held by the sender. In several applications of OLE, one can replace a large number of instances of OLE by a smaller number of long instances of VOLE. This motivates the goal of amortizing the cost of generating long instances of VOLE.

I will present a new approach for generating pseudo-random instances of VOLE via a deterministic local expansion of a pair of short correlated seeds and no interaction. This provides the first example of compressing a non-trivial and cryptographically useful correlation with good concrete efficiency. This result enjoys several applications; I will discuss in particular an application to the construction of non-interactive zero-knowledge proofs with reusable preprocessing.

Our VOLE generators are based on a novel combination of function secret sharing (FSS) for multi-point functions and linear codes in which decoding is intractable. Their security can be based on variants of the learning parity with noise (LPN) assumption over large fields that resist known attacks. I will also discuss recent exciting developments toward the 'dream goal' of compressing arbitrary pseudorandom bilinear correlations.

Algorithms and complexity
Tuesday November 27, 2018, 11AM, Salle 1007
Sagnik Mukhopadhyay (Computer Science Institute of Charles University) Lifting theorems for Equality

We show a deterministic simulation (or lifting) theorem for composed problems $f \circ \EQ_n$ where the inner function (the gadget) is Equality on $n$ bits. When $f$ is a total function on $p$ bits, it is easy to show via a rank argument that the communication complexity of $f\circ \EQ_n$ is $\Omega(\deg(f) \cdot n)$. However, there is a surprising counter-example of a partial function $f$ on $p$ bits, such that any completion $f'$ of $f$ has $\deg(f') = \Omega(p)$, and yet $f \circ \EQ_n$ has communication complexity $O(n)$. Nonetheless, we are able to show that the communication complexity of $f \circ \EQ_n$ is at least $D(f) \cdot n$ for a complexity measure $D(f)$ which is closely related to the $\AND$-query complexity of $f$ and is lower-bounded by the logarithm of the leaf complexity of $f$. As a corollary, we also obtain lifting theorems for the set-disjointness gadget, and a lifting theorem in the context of parity decision-trees, for the $\NOR$ gadget.

In this talk, I will talk about simulations or lifting theorems in general from a previous work of Chattopadhyay et al., and lifting theorem for Equality in particular—especially why the general recipe of Chattopadhyay et al. does not work for Equality. I will also mention an application of this technique to prove tight communication lower bound for Gap-Equality problem. This is a joint work with Bruno Loff.

Algorithms and complexity
Thursday November 22, 2018, 2PM, Salle 1007
Aviad Rubinstein (Stanford) Distributed PCP Theorems for Hardness of Approximation in P

We present a new model of probabilistically checkable proof (PCP), which we call “Distributed PCP”: A satisfying assignment (x in {0,1}^n) to a SAT instance is shared between two parties (Alice knows x_1, … x_{n/2}, and Bob knows x_{n/2+1},…,x_n). Their goal is to jointly write a PCP for x, while exchanging little or no information.

Using our new framework, we obtain, for the first time, PCP-like hardness of approximation results for problems in P.

Based on joint works with Amir Abboud, Lijie Chen, Shafi Goldwasser, Kaifeng Lyu, Guy Rothblum, and Ryan Williams.

Algorithms and complexity
Tuesday October 16, 2018, 11:30AM, Salle 1007
Suryajith Chillara (IIT Bombay) A Quadratic Size-Hierarchy Theorem for Small-Depth Multilinear Formulas

It is a fundamental problem in the study of computational complexity to understand if access to more resources would mean strictly more computational power. In classical complexity, we have seen such examples in terms of the Time Hierarchy and Space Hierarchy theorems. Here, time and space are the resources. It is natural to ask such a question in the setting of algebraic complexity. Here, access to more resources translates to allowing the model to perform a larger number of operations, which in turn allows the "algebraic formulas" to have larger size.

Arithmetic formulas are directed trees where the leaves are labelled by variables and elements of the field, and internal vertices are labelled by + or x. Every internal vertex computes a polynomial by operating on its inputs. It is easy to see that they are a natural model of computation of polynomials, syntactically (symbolically). The size of the arithmetic formulas refers to the number of + and x operations needed to compute the polynomial.

Rephrasing the question in the algebraic setting we ask the following: for any s, \varepsilon >0, are there polynomial computations that can be efficiently computed by arithmetic formulas of size s but not by any arithmetic formula of size s^{1-\varepsilon}?

In this talk, we will restrict ourselves to arithmetic formulas where computation at every vertex is a multilinear polynomial and here we show explicit separations between the expressive powers of multilinear formulas of small-depth and all polynomial sizes. The multilinear restriction is a reasonable one since most polynomials of interest are indeed multilinear.

Formally, we show that for any s = poly(n) and \delta > 0, there are explicit polynomial families that have multilinear formulas of size at most s but no 'small'-depth multilinear formulas of size less than s^{0.5 - \delta}. Our proof can be viewed as a derandomization of a lower bound technique of Raz (JACM 2009) using \epsilon-biased spaces.

(Joint work with Nutan Limaye and Srikanth Srinivasan. Appeared in the proceedings of ICALP 2018.)

Algorithms and complexity
Tuesday September 4, 2018, 11AM, Salle 1007
Kavitha Telikepalli (Tata Institute of Fundamental Research) Popular Matchings: A Tale of Two Classes

We consider popular matching problems in a stable marriage instance G. A matching M is popular if it does not lose a head-to-head election against any matching, where each vertex casts a vote for the matching where it gets a better assignment. Popular matchings always exist in G since every stable matching is popular. Algorithmic questions for several popular matching problems have been studied in the last few years: these include finding a max-size popular matching, finding a popular matching with our favorite edge(s) in it, and finding a max-weight popular (mixed) matching when there are edge weights. We will see efficient algorithms for some of these problems.
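To make the head-to-head election concrete, here is a small illustrative Python sketch (the helper and the tiny instance are made up for this page, not taken from the talk) that counts the votes between two matchings:

```python
def compare_matchings(prefs, M1, M2):
    """Head-to-head election between matchings M1 and M2 (dicts mapping each
    vertex to its partner, absent if unmatched). `prefs[v]` lists v's
    acceptable partners, best first; being unmatched is worse than any
    partner. Returns (votes for M1, votes for M2)."""
    def rank(v, partner):
        return prefs[v].index(partner) if partner is not None else len(prefs[v])
    votes1 = votes2 = 0
    for v in prefs:
        r1, r2 = rank(v, M1.get(v)), rank(v, M2.get(v))
        if r1 < r2:
            votes1 += 1
        elif r2 < r1:
            votes2 += 1
    return votes1, votes2

# M is popular iff it wins or ties the election against every other matching.
# Tiny instance: men a, b and women x, y.
prefs = {
    'a': ['x', 'y'], 'b': ['x', 'y'],
    'x': ['b', 'a'], 'y': ['b', 'a'],
}
stable = {'a': 'y', 'b': 'x', 'x': 'b', 'y': 'a'}
other  = {'a': 'x', 'b': 'y', 'x': 'a', 'y': 'b'}
print(compare_matchings(prefs, stable, other))  # the two matchings tie: (2, 2)
```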

Interestingly, all tractability results in popular matchings seem to reduce the popular matching problem to an appropriate question in either stable matchings or dominant matchings. Dominant matchings always exist in G and form a subclass of max-size popular matchings while stable matchings form a subclass of min-size popular matchings. We recently showed that deciding if G admits a popular matching that is neither stable nor dominant is NP-hard. This implies a tight 1/2-factor approximation for the max-weight popular matching problem in G and the hardness of finding a popular matching in a non-bipartite graph.

[Joint work with Agnes Cseh, Yuri Faenza, Chien-Chung Huang, Vladlena Powers, and Xingyu Zhang.]

Algorithms and complexity
Tuesday June 19, 2018, 11AM, Salle 1007
Leonard Wossnig (University College London) Sketching as tool for faster quantum simulation

Following the works of Drineas et al. (2004), Randomised Numerical Linear Algebra (RNLA) applications have enjoyed an increasing surge of interest in the classical algorithms community. Following their success, we recently demonstrated a classical algorithm based on the Nystroem method which allows us to efficiently simulate the dynamics of quantum systems with specific structural conditions. We briefly introduce the most basic RNLA algorithm by Drineas et al. to give an overview of the topic and then outline our own recent work. We follow with a discussion of potential applications of related methods, including spectral sparsification, in quantum algorithms for Hamiltonian simulation and quantum linear systems. We finish with a short description of our own current work related to spectral sparsification (Spielman & Teng 2011) for Hamiltonian simulation.

Algorithms and complexity
Tuesday June 5, 2018, 11AM, Salle 1007
Andrea Rocchetto (Oxford - UCL) Learnability and quantum computation

Computational learning theory provides a mathematical framework for rigorously formulating learning problems from both a statistical and computational perspective. During this talk I will present two independent results at the interface of learning theory and quantum computation. First, I will address the problem: "can we learn faster with a quantum computer?". In particular, I will discuss the learnability of the class of boolean functions that can be expressed as polynomial size formulae in disjunctive normal form (DNF) and show that this class is efficiently quantum PAC learnable under product distributions. The learnability of DNFs is a central problem in learning theory and the best known classical algorithm (using standard ways to access the function) runs in superpolynomial time. Second, I will discuss the problem "what does it mean to learn a quantum state, and can we do that efficiently?". Here, building on a model developed by Aaronson in 2007, I will present a class of states, namely stabiliser states, that can be learned using only a linear number of measurements and polynomial (classical) computation. Part of these results is joint work with Varun Kanade and Simone Severini.

Algorithms and complexity
Tuesday May 29, 2018, 11AM, Salle 1007
Steve Alpern (University of Warwick, UK) Shortest Paths in Networks with Unreliable Directions

The work of Korman and others, based on investigations of the navigational techniques of 'crazy ants', leads to the following problem: Let Q be a network with given arc lengths, and let H be a 'home' node. At each node a permanent direction is given (one of the adjacent arcs) which leads to a shortest path from that node to H with probability p less than one. Starting from a node S, a searcher follows the given direction at each reached node with probability q; otherwise he chooses randomly. He arrives home at node H in expected time T = T(S,H,p,q). How should q be chosen to minimize T? Perhaps when reaching a node the searcher sees its degree k and so chooses the given direction with probability q(k). Related problems have been studied for trees from a graph-theoretical perspective.
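The search process above is easy to simulate. A minimal Monte Carlo sketch (the graph and all parameter choices are hypothetical; here the given directions are fixed, so p is implicit in how many of them are actually correct):

```python
import random

def estimate_time_home(adj, given, S, H, q, trials=2000, seed=0):
    """Monte Carlo estimate of the expected arrival time T: at each node the
    searcher follows that node's given direction with probability q, and
    otherwise moves to a uniformly random neighbor."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        v, steps = S, 0
        while v != H:
            v = given[v] if rng.random() < q else rng.choice(adj[v])
            steps += 1
        total += steps
    return total / trials

# Hypothetical 6-cycle with home H = 0; every given direction points the
# short way home except at node 2, whose pointer is wrong (it points away).
adj = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
given = {1: 0, 2: 3, 3: 2, 4: 5, 5: 0}
print(estimate_time_home(adj, given, S=2, H=0, q=0.8))
```

Sweeping q in such a simulation is one way to get intuition for how trust in unreliable directions trades off against random exploration.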

Algorithms and complexity
Tuesday May 22, 2018, 11AM, Salle 1007
Anupam Prakash (IRIF) Improved quantum linear system solvers and applications to machine learning

I will describe recent results in the QRAM model that improve the sparsity dependence for quantum linear system solvers to \mu(A) = \min(||A||_F, ||A||_1), where ||A||_1 is the maximum l-1 norm of the rows and columns of A and ||A||_F is the Frobenius norm. These results achieve a worst-case quadratic speedup over the HHL algorithm and its improvements, and exponential speedups in some cases. I will also present some applications of the improved linear system solvers to quantum machine learning.

Algorithms and complexity
Wednesday April 11, 2018, 2PM, Salle 1007
Libor Caha (RCQI Bratislava) The Feynman-Kitaev computer's clock: bias, gaps, idling and pulse tuning

We present a collection of results about the clock in Feynman's computer construction and Kitaev's Local Hamiltonian problem. First, by analyzing the spectra of quantum walks on a line with varying endpoint terms, we find a better lower bound on the gap of the Feynman Hamiltonian, which translates into a less strict promise gap requirement for the QMA-complete Local Hamiltonian problem. We also translate this result into the language of adiabatic quantum computation. Second, introducing an idling clock construction with a large state space but fast Cesaro mixing, we provide a way for achieving an arbitrarily high success probability of computation with Feynman's computer with only a logarithmic increase in the number of clock qubits. Finally, we tune and thus improve the costs (locality, gap scaling) of implementing a (pulse) clock with a single excitation.

Joint work with Zeph Landau and Daniel Nagaj

Algorithms and complexity
Tuesday April 10, 2018, 11AM, Salle 1007
Pierre Aboulker (Université Grenoble-Alpes, Labo G-SCOP) Distributed coloring of graphs with fewer colors

We give an efficient (polylog(n) rounds) algorithm that colors a graph with few colors in the distributed setting (LOCAL model). Among other things, we will see that this algorithm 6-colors planar graphs (using one less color than the previous best algorithm of Goldberg, Plotkin, and Shannon) and will show that one cannot 4-color planar graphs in o(n) rounds.

This is a joint work with Bousquet, Bonamy, and Esperet.

Algorithms and complexity
Tuesday March 27, 2018, 11AM, Salle 1007
Kamil Khadiev (University of Latvia) Quantum online algorithms with restricted memory

An online algorithm is a well-known computational model for solving optimization problems. Its defining property is the following: the algorithm reads its input piece by piece and must return some output variables immediately after reading the corresponding input variables, even if the answer depends on the whole input. An online algorithm should return an output minimizing an objective function.

We consider streaming algorithms and two-way automata as models for online algorithms. We compare quantum and classical models in the cases of logarithmic and sublogarithmic memory.

We get the following results for online streaming algorithms:

* A quantum online streaming algorithm with 1 qubit of memory and 1 advice bit can be better than a classical online streaming algorithm with $o(\log n)$ bits of memory and $o(\log n)$ advice bits.

* A quantum online streaming algorithm with a constant amount of memory and $t$ advice bits can be better than deterministic online streaming algorithms with $t$ advice bits and unlimited computational power.

* In the case of polylogarithmic memory, quantum online streaming algorithms can be better than classical ones even if the latter have advice bits.

We get the following result for two-way automata as online algorithms for solving online minimization problems:

* Two-way automata with quantum and classical states and a constant amount of memory can be better than classical two-way automata, even if the latter have advice bits.

Algorithms and complexity
Tuesday March 20, 2018, 11AM, Salle 1007
Valia Mitsou (IRIF) On the complexity of defective coloring.

In defective coloring, we are given a graph G and two integers c and Δ*, and are asked if we can c-color G so that the maximum degree induced by any color class is at most Δ*. This problem is a generalization of proper coloring, which corresponds to Δ* = 0.
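The definition is easy to check mechanically. A minimal Python sketch (the helper and the triangle example are illustrative, not from the talk) verifying a candidate defective coloring:

```python
from collections import defaultdict

def is_defective_coloring(edges, coloring, c, defect):
    """Check whether `coloring` (a dict vertex -> color) uses at most c
    colors and every vertex has at most `defect` neighbors of its own
    color, i.e. each color class induces maximum degree <= defect."""
    if len(set(coloring.values())) > c:
        return False
    same_color_deg = defaultdict(int)
    for u, v in edges:
        if coloring[u] == coloring[v]:
            same_color_deg[u] += 1
            same_color_deg[v] += 1
    return all(d <= defect for d in same_color_deg.values())

# A triangle has no proper 2-coloring (defect 0), but it can be
# 2-colored with defect 1:
triangle = [(0, 1), (1, 2), (0, 2)]
print(is_defective_coloring(triangle, {0: 0, 1: 0, 2: 1}, c=2, defect=1))  # True
print(is_defective_coloring(triangle, {0: 0, 1: 0, 2: 1}, c=2, defect=0))  # False
```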

Defective coloring, like proper coloring, has many applications, for example in scheduling (assign jobs to machines which can work on up to Δ* jobs in parallel), or in radio networks (assign frequencies to antennas with some slack, that is, an antenna can be assigned the same frequency with up to Δ* other antennas within its range without a signal conflict).

We will present some algorithmic and complexity results on this problem in classes of graphs where proper coloring is easy: on the one hand, we will consider subclasses of perfect graphs, namely cographs and chordal graphs; on the other hand we will talk about structural parameterizations, such as treewidth and feedback vertex set.

Joint work with Rémy Belmonte (University of Electro-Communications, Tokyo) and Michael Lampis (Lamsade, Université Paris Dauphine) that appeared in WG '17 and in STACS '18.

Algorithms and complexity
Tuesday February 13, 2018, 11AM, Salle 1007
Antoine Grospellier (INRIA) Efficient decoding of random errors for quantum expander codes

We show that quantum expander codes, a constant-rate family of quantum LDPC codes, with the quasi-linear time decoding algorithm of Leverrier, Tillich and Zémor can correct a constant fraction of random errors with very high probability. This is the first construction of a constant-rate quantum LDPC code with an efficient decoding algorithm that can correct a linear number of random errors with a negligible failure probability. Finding codes with these properties is also motivated by Gottesman’s construction of fault tolerant schemes with constant space overhead. In order to obtain this result, we study a notion of α-percolation: for a random subset E of vertices of a given graph, we consider the size of the largest connected α-subset of E, where X is an α-subset of E if |X ∩ E| ≥ α|X|.

Algorithms and complexity
Monday February 12, 2018, 2PM, Salle 3052
Nabil Mustafa (ESIEE) Local Search for Geometric Optimization Problems.

Local search is an intuitive approach to solving combinatorial optimization problems: start with any feasible solution, and try to improve it through local modifications. Like other greedy approaches, it can fail to find the global optimum by getting stuck on a locally optimal solution. In this talk I will present the ideas and techniques behind the use of local search in the design of provably good approximation algorithms for combinatorial problems such as independent set, vertex cover, and dominating set in geometric intersection graphs. The key unifying theme is the analysis of local expansion in planar graphs. Joint work with Norbert Bus, Shashwat Garg, Bruno Jartoux and Saurabh Ray.

Algorithms and complexity
Tuesday February 6, 2018, 11AM, Salle 1007
Shendan Jin (LIP6) Online Maximum Matching with Recourse

We study the online maximum matching problem with recourse in a model in which the edges are associated with a known recourse parameter k. An online algorithm for this problem has to maintain a valid matching while edges of the underlying graph are presented one after the other. At any moment the algorithm can decide to include an edge into the matching or to exclude it, under the restriction that at most k actions per edge take place, where k is typically a small constant. This problem was introduced and studied in the context of general online packing problems with recourse by [Avitabile, Mathieu, Parkinson, 2013], whereas the special case k=2 has been studied by [Boyar et al., 2017].

In the first part of this paper, we consider the edge arrival model, in which an arriving edge never disappears from the graph. Here, we first give an improved analysis of the performance of the algorithm of [AMP, 2013], by exploiting the structure of the specific problem. In addition, we extend the result of [Boyar et al., 2017] and show that the greedy algorithm has ratio 3/2 for every even positive k and ratio 2 for every odd k. Moreover, we present and analyze L-Greedy — an improvement of the greedy algorithm — which for small values of k outperforms the algorithm of [AMP, 2013]. In terms of lower bounds, we show that no deterministic algorithm better than 1+1/(k−1) exists, improving upon the lower bound of 1+1/k shown in [AMP, 2013].
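For intuition, here is the baseline greedy for the edge-arrival model (a sketch, not the paper's L-Greedy): accept an arriving edge iff both endpoints are currently unmatched, so each edge is acted on at most once.

```python
def greedy_edge_arrival(edges):
    """Baseline greedy for online matching in the edge-arrival model:
    an arriving edge (u, v) is added iff neither endpoint is matched.
    Each edge is touched at most once, so the recourse per edge is 1."""
    matched, matching = set(), []
    for u, v in edges:
        if u not in matched and v not in matched:
            matched |= {u, v}
            matching.append((u, v))
    return matching
```

On the sequence (2,3), (1,2), (3,4) this greedy keeps only (2,3) while the offline optimum has two edges, illustrating the classical factor-2 behaviour that recourse is meant to improve.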

The second part of the paper is devoted to the edge arrival/departure model, which is the fully dynamic variant of the online matching problem with recourse. The analysis of L-Greedy carries through in this model; moreover we show a general lower bound of (k2−3k+3)/(k2−4k+5) for all even k≥4 and provide the stronger bound of 10/7 for k=4. For k∈{2,3}, the competitive ratio is 3/2.

Joint work with Spyros Angelopoulos and Christoph Dürr.

Algorithms and complexity
Monday February 5, 2018, 2PM, Salle 2018
Charles Bennett Do-It-Yourself randomness over Device-Independent randomness

Algorithms and complexity
Tuesday January 30, 2018, 10AM, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Allison Bishop (DI ENS, IRIF - IEX and Columbia University) On Algorithms Operating in Adversarial Conditions

Eighth lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Allison Bishop on Algorithms Operating in Adversarial Conditions (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2018-01-30-10h00.htm

Algorithms and complexity
Tuesday January 23, 2018, 10AM, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Tim Roughgarden (DI ENS, IRIF - Stanford University) On Game Theory Through the Computational Lens

Seventh lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Tim Roughgarden on Game Theory Through the Computational Lens (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2018-01-23-10h00.htm

Algorithms and complexity
Wednesday January 17, 2018, 11AM, Salle 2015
Philip Lazos (Oxford) The Infinite Server Problem

We study a variant of the k-server problem, the infinite server problem, in which infinitely many servers reside initially at a particular point of the metric space and serve a sequence of requests. In the framework of competitive analysis, we show a surprisingly tight connection between this problem and the (h,k)-server problem, in which an online algorithm with k servers competes against an offline algorithm with h servers. Specifically, we show that the infinite server problem has bounded competitive ratio if and only if the (h,k)-server problem has bounded competitive ratio for some k=O(h). We give a lower bound of 3.146 for the competitive ratio of the infinite server problem, which implies the same lower bound for the (h,k)-server problem even when k/h→∞ and holds also for the line metric; the previously known bounds were 2.4 for general metric spaces and 2 for the line. For weighted trees and layered graphs we obtain upper bounds, although they depend on the depth. Of particular interest is the infinite server problem on the line, which we show to be equivalent to the seemingly easier case in which all requests are in a fixed bounded interval away from the original position of the servers. This is a special case of a more general reduction from arbitrary metric spaces to bounded subspaces. Unfortunately, classical approaches (double coverage and generalizations, work function algorithm, balancing algorithms) fail even for this special case.

Algorithms and complexity
Tuesday January 16, 2018, 10AM, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Jon Kleinberg (DI ENS, IRIF - Cornell University) On Algorithms and Fairness

Sixth lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Jon Kleinberg on Algorithms and Fairness (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2018-01-16-10h00.htm

Algorithms and complexity
Tuesday January 16, 2018, 2:30PM, Salle 3014
Nicolas Thiery (Université Paris Sud) Computing huge subspaces of diagonal harmonic polynomials: symmetries to the rescue!

Last spring, I visited François Bergeron and we worked on his favorite objects: the spaces H(n,k) of diagonal harmonic polynomials in k sets of n variables. Those spaces connect to many areas of algebraic combinatorics, representation theory and beyond, and the case H(n,2) became famous a decade ago with the n! conjecture and its proof by Haiman.

To fuel his ongoing studies, François needed to compute the structure of H(5,6). This is a space of dimension 6·10^5 made of polynomials in 30 variables of degree up to 15, each having thousands of terms.

In this talk, I'll explain how the calculation can now be completed in 45 minutes with a dozen cores and ~15 GB of memory. This exploits a combination of strategies (symmetries, representation theory of the symmetric and general linear groups, …), each of which reduces the complexity in time and memory by one or two orders of magnitude.

There will be few prerequisites, and it is my hope that some of the strategies (and maybe the code!) could be used in other contexts.

Algorithms and complexity
Tuesday January 9, 2018, 10AM, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Mark Jerrum (DI ENS, IRIF - Queen Mary, University of London) On Sampling and Approximate Counting

Fifth lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Mark Jerrum on Sampling and Approximate Counting (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2018-01-09-10h00.htm


Year 2017

Algorithms and complexity
Tuesday December 19, 2017, 10AM, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Bruno Salvy (DI ENS, IRIF - INRIA) On Analytic Combinatorics

Fourth lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Bruno Salvy on Analytic Combinatorics (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2017-12-19-10h00.htm

Algorithms and complexity
Thursday December 14, 2017, 2:30PM, Salle 1016
Emanuele Natale (Max Planck Institute for Informatics) Computing through Dynamics: Principles for Distributed Coordination

We present analytical results regarding some simple randomized protocols, called dynamics, for solving fundamental distributed consensus problems, together with examples on how to use them to build lightweight algorithms for other important distributed problems. More specifically, we provide an overview of the theory regarding several dynamics such as the 3 Majority, the Averaging and the Undecided-State ones, and we show how to use them to solve plurality consensus, distributed clustering, clock synchronization and information spreading. Motivated by applications to systems whose complexity is in-between biological and human-made ones, we focus mainly on unstructured and random interaction models, and we also deal with scenarios in which the communication is affected by noise or when a self-stabilizing protocol is required.

Algorithms and complexity
Tuesday December 12, 2017, 10AM, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Amos Fiat (DI ENS, IRIF - University of Tel-Aviv) On Static and Dynamic Pricing

Third lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Amos Fiat on Static and Dynamic Pricing (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2017-12-12-10h00.htm

Algorithms and complexity
Tuesday December 12, 2017, 2PM, Salle 3052
Jean Krivine (IRIF) Incremental Update for Graph Rewriting

Graph rewriting formalisms are well-established models for the representation of biological systems such as protein-protein interaction networks. The combinatorial complexity of these models usually prevents any explicit representation of the variables of the system, and one has to rely on stochastic simulations in order to sample the possible trajectories of the underlying Markov chain. The bottleneck of stochastic simulation algorithms is the update of the propensity function that describes the probability that a given rule is to be applied next. In this talk we present an algorithm based on a data structure, called extension basis, that can be used to update the counts of predefined graph observables after a rule of the model has been applied.

Reference: Boutillier P., Ehrhard T., Krivine J. (2017) Incremental Update for Graph Rewriting. In: Yang H. (eds) Programming Languages and Systems. ESOP 2017. Lecture Notes in Computer Science, vol 10201. Springer, Berlin, Heidelberg

Joint seminar of the Algorithms and Discrete Structures pole.

Algorithms and complexity
Tuesday December 5, 2017, 10AM, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Laurent Massoulié (DI ENS, IRIF - INRIA) Community Detection

Second lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Laurent Massoulié on Community Detection (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2017-12-05-10h00.htm

Algorithms and complexity
Tuesday November 28, 2017, 10AM, Collège de France, Amphithéâtre Maurice Halbwachs - Marcelin Berthelot
Claire Mathieu - Pierre Fraigniaud (DI ENS, IRIF - IRIF, CNRS) On Distributed Algorithms

First lecture from Claire Mathieu at Collège de France on Algorithms (at 10am), followed by a talk from Pierre Fraigniaud on Distributed Algorithms (at 11am).

Additional information: http://www.college-de-france.fr/site/claire-mathieu/course-2017-11-28-10h00.htm

Algorithms and complexity
Tuesday November 21, 2017, 11AM, Salle 1007
André Chailloux (INRIA Paris) A tight security reduction in the quantum random oracle model for code-based signature schemes

Quantum-secure signature schemes have received a lot of attention recently, in particular because of the NIST call to standardize quantum-safe cryptography. However, only a few signature schemes can claim concrete quantum security, because of technical difficulties associated with the Quantum Random Oracle Model (QROM). In this paper, we show that code-based signature schemes based on the full domain hash paradigm can behave very well in the QROM, i.e., that we can have tight security reductions. We also study quantum algorithms related to the underlying code-based assumption. Finally, we apply our reduction to a concrete example: the SURF signature scheme. We provide parameters for 128 bits of quantum security in the QROM and show that the obtained parameters are competitive compared to other similar quantum-secure signature schemes.

Joint work with Thomas Debris-Alazard

Algorithms and complexity
Thursday November 16, 2017, 2PM, Salle 3052
Elena Kirshanova (ENS Lyon) Connections between the Dihedral Coset Problem and LWE

In this talk, I explain that under quantum polynomial time reductions, LWE is equivalent to a relaxed version of the dihedral coset problem (DCP), called extrapolated DCP (eDCP). The extent of extrapolation varies with the LWE noise rate. By considering different extents of extrapolation, the result generalizes Regev’s famous proof that if DCP is in BQP (quantum poly-time) then so is LWE (FOCS 02). We'll also discuss a connection between eDCP and Childs and Van Dam’s algorithm for generalized hidden shift problems (SODA 07). The result implies that a BQP solution for LWE might not require the full power of solving DCP, but rather only a solution for its relaxed version, eDCP, which could be easier.

Algorithms and complexity
Tuesday November 14, 2017, 2PM, 3052
Laurent Massoulié (MSR-Inria) Rapid Mixing of Local Graph Dynamics

Graph dynamics arise naturally in many contexts. For instance in peer-to-peer networks, a participating peer may replace an existing connection with one neighbour by a new connection with a neighbour's neighbour. Several such local rewiring rules have been proposed to ensure that peer-to-peer networks achieve good connectivity properties (e.g. high expansion) in equilibrium. However it has remained an open question whether there existed such rules that also led to fast convergence to equilibrium. In this work we provide an affirmative answer: We exhibit a local rewiring rule that converges to equilibrium after each participating node has undergone only a number of rewirings that is poly-logarithmic in the system size. The proof involves consideration of the whole isoperimetric profile of the graph, and may be of independent interest.

This is joint work with Rémi Varloot.

Algorithms and complexity
Friday November 10, 2017, 3PM, Salle 3052
Simon Apers (Ghent University) Mixing and quantum sampling with quantum walks: some results and questions

Quantum walks have been shown to quadratically speed up search problems on graphs. We discuss their usage towards mixing and quantum sampling on graphs, problems that are in a sense dual to the search problem. Quantum sampling, or quantum state generation, roughly amounts to creating a superposition over the elements of a set, where this set may be implicitly defined. It is known that if this task can be solved efficiently, then the complexity class called “statistical zero knowledge” is in BQP. We discuss a folklore approach to this problem called “inverted search”, where a quantum search algorithm is essentially inverted to yield a quantum sampling algorithm. We also show how this approach solves the search problem when starting from a single element of the graph, rather than a superposition. The easier task of mixing on graphs asks for a classical sample of the set, rather than a superposition. It has long been conjectured that quantum walks can quadratically speed up this task when compared to simple random walks. We show how a quantum sampling approach confirms this conjecture for a limited set of graphs. For more general graphs however, it is conceivable that the quantum sampling problem cannot be solved efficiently, such that a different approach towards mixing is needed. We discuss ideas in this direction.

Algorithms and complexity
Tuesday October 31, 2017, 11AM, Salle 1007
Ami Paz (IRIF) A (2+\epsilon)-Approximation for Maximum Weight Matching in the Semi-Streaming Model

In this talk I will present our new (2+\epsilon)-approximation algorithm for the maximum weight matching problem in the semi-streaming model, that was presented in SODA 2017.

We will start by discussing the local-ratio technique, a simple, sequential approximation paradigm we use in our algorithm. Then, we will consider the variations needed in order to adjust this technique to the semi-streaming model.

Based on a joint work with Gregory Schwartzman.

Algorithms and complexity
Tuesday October 17, 2017, 2PM, Salle 3052
Claire Mathieu (DI - ENS) Online k-compaction

Given, at each time t = 1, 2, …, n, a new file of length l(t) and a read rate r(t), an online k-compaction algorithm must maintain a collection of at most k files, choosing (at each time t, without knowing future inputs) some of the files to merge into one, thereby incurring a merge cost equal to the total length of the merged files and a read cost equal to the read rate r(t) times the number of files present at time t. The goal is to minimize the total cost over time. k-compaction algorithms are a key component of log-structured merge trees, the file-based data structure underlying NoSQL databases such as Accumulo, Bigtable, Cassandra, HBase, and others. We initiate the theoretical study of k-compaction algorithms. We formalize the problem, consider worst-case, average-case and competitive analysis (per-instance optimality), and propose new algorithms that are optimal according to these metrics.
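A minimal cost simulator for this model (one plausible reading of the definition, in which the newly arrived file may be part of the merged group and adding a new file without merging incurs no merge cost):

```python
def compaction_cost(files, reads, merges, k):
    """Total cost of an online k-compaction schedule.
    files[t], reads[t]: length l(t) and read rate r(t) (0-indexed time).
    merges[t]: set of indices into the current collection to merge
    together with the newly arrived file (empty set = just add it)."""
    collection, total = [], 0
    for t, (length, rate) in enumerate(zip(files, reads)):
        chosen = merges[t]
        group = [collection[i] for i in chosen] + [length]
        collection = [f for i, f in enumerate(collection) if i not in chosen]
        if chosen:                       # merge cost: total length of merged files
            total += sum(group)
        collection.append(sum(group))
        assert len(collection) <= k, "more than k files maintained"
        total += rate * len(collection)  # read cost at time t
    return total
```

For instance, with files of lengths 3 and 2, unit read rates, and a merge at the second step with k = 2, the total cost is 1 (read) + 5 (merge) + 1 (read) = 7.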

This is joint work with Carl Staelin, Neal E. Young, and Arman Yousefi.

Algorithms and complexity
Tuesday October 10, 2017, 11AM, Salle 1007
Victor Verdugo (ENS - Universidad de Chile) How Large is Your Graph?

We consider the problem of estimating the size of a graph, where one is given only local access to the graph. We formally define a query model in which one starts with a seed node and is allowed to make queries about neighbours of nodes that have already been seen. In the case of undirected graphs, an estimator of Katzir et al. (2014), based on a sample from the stationary distribution π, uses O(1/‖π‖_2 + d_avg) queries; we prove that this is tight. In addition, we establish this as a lower bound even when the algorithm is allowed to crawl the graph arbitrarily; the results of Katzir et al. give an upper bound that is worse by a multiplicative factor t_mix·log(n). The picture becomes significantly different in the case of directed graphs. We show that, without strong assumptions on the graph structure, the number of nodes cannot be predicted to within a constant multiplicative factor without using a number of queries that is at least linear in the number of nodes; in particular, rapid mixing and small diameter, properties that most real-world networks exhibit, do not suffice. The question of interest is then whether any algorithm can beat breadth-first search. We introduce a new parameter, generalising the well-studied conductance, such that if a suitable bound on it exists and is known to the algorithm, the number of queries required is sublinear in the number of edges; we show that this is tight.

Algorithms and complexity
Tuesday October 3, 2017, 11AM, Salle 1007
Giuseppe Italiano (Universita di Roma Tor Vergata) Decremental Single-Source Reachability and Strongly Connected Components in O(m \sqrt{n} log n) Total Update Time

We present randomized algorithms with a total update time of O(m \sqrt{n} log n) for the problems of decremental single-source reachability and decremental strongly connected components on directed graphs. This sharply improves recent breakthrough results of Henzinger, Krinninger and Nanongkai [STOC 14, ICALP 15]. In addition, our algorithms are arguably simpler. In this talk, we first review the Las Vegas algorithm by Roditty and Zwick [SICOMP 08] and the deterministic algorithm by Łącki [TALG 13], both with a total update time of O(mn) (expected in the case of the algorithm by Roditty and Zwick). Our algorithms carefully combine these two results, along with new structural properties, including a separation lemma in the case of graphs with a large diameter.

Joint work with Shiri Chechik, Thomas Dueholm Hansen, Jakub Łącki and Nikos Parotsidis.

Algorithms and complexity
Tuesday September 26, 2017, 11AM, Salle 1007
Vianney Perchet (CMLA, Ens Paris-Saclay) Online Search Problems

In traditional “search problems”, an agent aims at exploring a graph in order to find a hidden object as fast as possible. We consider here the Bayesian repeated scenario with several i.i.d. instances of the same basic search problem. The objective is, within a given amount of time, to find as many objects as possible, with the possibility of leaving an instance for the next one at any moment.

We also consider the non-Bayesian case with a variant of regret minimization. We provide and analyze several algorithms with fast rates of convergence and/or guaranteed efficiency.

Algorithms and complexity
Tuesday September 19, 2017, 11AM, Salle 1007
Vincent Viallat Cohen-Addad Hierarchical Clustering: Objective Functions and Algorithms

Hierarchical clustering is a recursive partitioning of a dataset into clusters at an increasingly finer granularity. Motivated by the fact that most work on hierarchical clustering was based on providing algorithms, rather than optimizing a specific objective, Dasgupta framed similarity-based hierarchical clustering as a combinatorial optimization problem, where a “good” hierarchical clustering is one that minimizes some cost function. He showed that this cost function has certain desirable properties.
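For concreteness, Dasgupta's cost in its standard formulation (sum, over internal tree nodes, of the number of leaves below the node times the similarity weight cut at that node) can be computed as follows; trees are nested pairs and weights a dict over vertex pairs — an illustration, not code from the talk:

```python
def dasgupta_cost(tree, w):
    """Dasgupta's cost of a similarity-based hierarchical clustering.
    tree: a leaf label, or a binary tuple (left, right);
    w: dict mapping frozenset({i, j}) to the similarity weight of i, j."""
    def leaves(t):
        return [t] if not isinstance(t, tuple) else leaves(t[0]) + leaves(t[1])
    def cost(t):
        if not isinstance(t, tuple):
            return 0
        L, R = leaves(t[0]), leaves(t[1])
        # weight of the cut separated at this internal node
        split = sum(w.get(frozenset((i, j)), 0) for i in L for j in R)
        return (len(L) + len(R)) * split + cost(t[0]) + cost(t[1])
    return cost(tree)
```

On three points with w(1,2) = w(2,3) = 1, the tree ((1,2),3) has cost 5 while ((1,3),2) has cost 6: merging similar points lower in the tree is cheaper.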

We take an axiomatic approach to defining “good” objective functions for both similarity- and dissimilarity-based hierarchical clustering. We characterize a set of “admissible” objective functions (which includes Dasgupta's) that have the property that when the input admits a “natural” hierarchical clustering, it has an optimal value.

Equipped with a suitable objective function, we analyze the performance of practical algorithms, as well as develop better algorithms. For similarity-based hierarchical clustering, Dasgupta showed that the divisive sparsest-cut approach achieves an O(log^{3/2} n)-approximation. We give a refined analysis of the algorithm and show that it in fact achieves an O(\sqrt{log n})-approximation (Charikar and Chatziafratis independently proved that it is an O(\sqrt{log n})-approximation). This improves upon the LP-based O(log n)-approximation of Roy and Pokutta. For dissimilarity-based hierarchical clustering, we show that the classic average-linkage algorithm gives a factor-2 approximation, and we provide a simple and better algorithm that gives a factor-3/2 approximation.

Finally, we consider a “beyond-worst-case” scenario through a generalisation of the stochastic block model for hierarchical clustering. We show that Dasgupta's cost function has desirable properties for these inputs and we provide a simple (1+o(1))-approximation in this setting.

Joint work with Varun Kanade, Frederik Mallmann-Trenn, and Claire Mathieu.

Algorithms and complexity
Tuesday July 11, 2017, 11AM, Salle 1007
Oded Regev (Courant Institute of Mathematical Sciences, NYU) A Reverse Minkowski Theorem

Informally, Minkowski's first theorem states that lattices that are globally dense (have small determinant) are also locally dense (have lots of points in a small ball around the origin). This fundamental result dates back to 1891 and has a very wide range of applications.
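For reference, the standard quantitative statements behind this informal description (textbook forms, not taken from the talk) are:

```latex
% Convex-body form: any centrally symmetric convex body K whose volume
% exceeds 2^n times the lattice determinant contains a nonzero lattice point.
\operatorname{vol}(K) > 2^{n}\,\det(\mathcal{L})
  \;\Longrightarrow\;
  (K \cap \mathcal{L}) \setminus \{0\} \neq \emptyset

% Consequence for the shortest vector (the "locally dense" form):
\lambda_1(\mathcal{L}) \;\le\; \sqrt{n}\,\det(\mathcal{L})^{1/n}
```

The reverse theorem discussed in the talk bounds, conversely, the lattice point counts of a lattice all of whose sublattices have large determinant.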

I will present a proof of a reverse form of Minkowski's theorem, conjectured by Daniel Dadush in 2012, showing that locally dense lattices are also globally dense (in the appropriate sense).

The talk will be self contained and I will not assume any familiarity with lattices.

Based on joint papers with Daniel Dadush and Noah Stephens-Davidowitz.

Algorithms and complexity
Thursday July 6, 2017, 11AM, Salle 3052
Joel Friedman (University of British Columbia) Inner Rank and Lower Bounds for Matrix Multiplication

We give a short proof of the optimality of Strassen's classical algorithm for 2 x 2 matrix multiplication, and more generally give a 2n^2 - n + 1 lower bound for n x n matrix multiplication, in terms of the border rank (and, hence, also rank) of the associated tensor, over an arbitrary field.
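For context, Strassen's 2 x 2 scheme, whose 7 multiplications are shown optimal, can be written out explicitly (standard formulas):

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with Strassen's 7 multiplications.
    A, B: pairs of pairs of numbers (entries from any commutative ring)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    return ((p5 + p4 - p2 + p6, p1 + p2),
            (p3 + p4, p1 + p5 - p3 - p7))
```

The point of the talk's lower bound is that no scheme with 6 multiplications can exist.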

We call the tool we use “inner rank,” which seems to have gone unnoticed in the literature (at least in the way we use it). While our bounds are not competitive with the best bounds to date over the complex numbers, we argue that “inner rank” may lead to improved results and merits further study.

Algorithms and complexity
Tuesday June 20, 2017, 11AM, Salle 1007
Laurent Bulteau (CNRS, Laboratoire d'Informatique Gaspard Monge, Marne-la-Vallée, France.) Beyond Adjacency Maximization: Scaffold Filling for New String Distances

In Genomic Scaffold Filling, one aims at polishing, in silico, a draft genome, called a scaffold. The scaffold is given in the form of an ordered set of gene sequences, called contigs. This is done by comparing the scaffold against an already complete reference genome from a close species. More precisely, given a scaffold S, a reference genome G (represented as a string) and a score function f() between two genomes, the aim is to complete S by adding the missing genes from G so that the obtained complete genome S* optimizes f(S*,G). In this talk, we will consider two alternative score functions: the first aims at maximizing the number of common k-mers between S* and G (k-Mer Scaffold Filling); the second aims at minimizing the number of string breakpoints between S* and G (Min-Breakpoint Scaffold Filling). We study these problems from the parameterized complexity point of view, providing fixed-parameter tractable (FPT) algorithms for both problems. In particular, we show that k-Mer Scaffold Filling is FPT with respect to the number of additional k-mers realized by the completion of S. We also show that Min-Breakpoint Scaffold Filling is FPT with respect to a parameter combining the number of missing genes, the number of gene repetitions and the target distance.
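The first score function is easy to make concrete: counting common k-mers, with multiplicity, between two strings (a sketch; genomes are plain strings here):

```python
from collections import Counter

def common_kmers(s, t, k):
    """Number of common k-mers between strings s and t, counted with
    multiplicity: sum over each k-mer of the minimum of its two counts."""
    cs = Counter(s[i:i + k] for i in range(len(s) - k + 1))
    ct = Counter(t[i:i + k] for i in range(len(t) - k + 1))
    return sum(min(cs[m], ct[m]) for m in cs)
```

The completion problem then asks where to insert the missing genes into S so that this count against G is maximized.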

Algorithms and complexity
Tuesday June 13, 2017, 11AM, Salle 1007
Abel Molina The Optimality of Projections for Quantum State Exclusion

We will first motivate the problem of quantum state exclusion of pure states, through its connections with the PBR game and with compatibility conditions for quantum state assignments. Then, we will discuss our recent result regarding the optimality of projections for perfect state exclusion of 3 pure states in 3 dimensions (arXiv:1702.06449). We will describe our techniques to prove this result, which are based on arguments involving convexity, rank and symmetry properties. Finally, we will discuss other related results, as well as possible avenues for extending our results.

Algorithms and complexity
Tuesday June 6, 2017, 4PM, 3052
Thomas Vidick (California Institute of Technology) Entanglement Tests from Group Representations.

I will present a novel perspective on entanglement tests, such as the CHSH inequality and extensions thereof, pioneered by Slofstra and (unknowingly) Gowers-Hatami, that is based on the theory of (approximate) representations of non-abelian groups. We will see how this theory leads to a simple analysis of:

(i) The first family of self-tests (or, two-player entangled games) for arbitrarily high-dimensional entanglement that is robust to a constant fraction of noise. (Joint work with Natarajan, arXiv:1610.03574.)

(ii) The first finite self-test such that for any eps>0, entangled states of dimension poly(1/eps) are both necessary and sufficient to achieve success probability 1-eps. (Joint work with Slofstra, in preparation.)

Algorithms and complexity
Tuesday May 2, 2017, 11AM, Salle 1007
Pierre Senellart (DI/ENS) Tree decompositions for probabilistic data management

Joint work with Antoine Amarilli, Pierre Bourhis, Mikaël Monet, Silviu Maniu.

In this talk, we review recent work on the problem of efficiently querying probabilistic databases, i.e., compact representations of probability distributions over databases, by exploiting the structure of the data. In the deterministic setting, a celebrated result by Courcelle shows that any monadic second-order query can be evaluated in linear-time on instances of bounded treewidth. We show that this result generalizes to probabilistic instances through the use of provenance. In the probabilistic setting, we show that a bound on the treewidth is necessary for query evaluation to be tractable, in sharp contrast with the deterministic setting. This leads us to studying which real-world databases actually have low treewidth, and to propose practical algorithms for query evaluation in databases that can be partially decomposed in a low-treewidth part and a high-treewidth core.

Pierre Senellart is a Professor in the Computer Science Department at the École normale supérieure in Paris, France, and the head of the Valda team of Inria Paris. He is an alumnus of ENS and obtained his M.Sc. (2003) and Ph.D. (2007) in computer science from Université Paris-Sud, studying under the supervision of Serge Abiteboul. He was awarded an Habilitation à diriger les recherches in 2012 from Université Pierre et Marie Curie. Before joining ENS, he was an Associate Professor (2008–2013) then a Professor (2013–2016) at Télécom ParisTech. He also held secondary appointments as Lecturer at the University of Hong Kong in 2012–2013, and as Senior Research Fellow at the National University of Singapore from 2014 to 2016. His research interests focus around practical and theoretical aspects of Web data management, including Web crawling and archiving, Web information extraction, uncertainty management, Web mining, and intensional data management.

Algorithms and complexity
Tuesday April 25, 2017, 11AM, Salle 1007
Nicolas Flammarion (DI/ENS) Optimal rates for Least-Squares Regression through stochastic gradient descent.

At a broad level, stochastic optimization is concerned with optimizing a function in the presence of noise. In this context, we consider the optimization of a quadratic objective function whose gradients are only accessible through a stochastic oracle that returns the gradient at any given point plus a zero-mean finite variance random error. We present the first algorithm that achieves the optimal prediction error rates for least-squares regression, both in terms of forgetting of initial conditions in O(1/n^2), and in terms of dependence on the noise and dimension d of the problem, as O(d/n). Our new algorithm is based on averaged accelerated regularized gradient descent, and may also be analyzed through finer assumptions on initial conditions and the Hessian matrix, leading to dimension-free quantities that may still be small while the “optimal” terms above are large.
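A stripped-down illustration of the setting (plain averaged SGD on a one-dimensional least-squares objective; the algorithm of the talk additionally uses acceleration and regularization, which are not reproduced here):

```python
import random

def averaged_sgd_least_squares(data, steps, lr, seed=0):
    """Plain averaged SGD for 1-d least squares min_w E[(w*x - y)^2]/2.
    data: list of (x, y) pairs sampled uniformly; returns the Polyak
    (running) average of the iterates, not the last iterate."""
    rng = random.Random(seed)
    w, w_avg = 0.0, 0.0
    for t in range(1, steps + 1):
        x, y = rng.choice(data)
        w -= lr * (w * x - y) * x     # stochastic gradient step
        w_avg += (w - w_avg) / t      # running average of iterates
    return w_avg
```

Averaging the iterates, rather than returning the last one, is the classical device behind the noise-optimal O(d/n) term mentioned above.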

Algorithms and complexity
Tuesday April 18, 2017, 11AM, Salle 1007
Adrian Kosowski (Inria, IRIF Université Paris 7) Protocols for Detecting a Signal

We consider the following interaction scenario: a population of agents is required to perpetually detect whether or not the population contains at least one agent in a distinguished state X. This problem may be viewed as a variant of rumor-spreading, in which a “true” rumor should spread and persist in the population for as long as its primary rumor source X is present, and a “false” rumor without a source should be actively suppressed. Investigations into the problem are also directly motivated by the question of detecting and amplifying trace concentrations of a chemical substance X, e.g., in a framework of DNA computing with chemical reaction networks (CRN-s).

We describe two population protocols which solve the considered problem efficiently, converging to a correct output state on a (1-epsilon)-fraction of a population of n agents within a polylogarithmic number of steps, starting from any initial configuration of the system, w.h.p. under a fair uniformly random scheduler. One of the proposed protocols requires O(log n) states and has certain desirable robustness properties, while the other uses only a constant number of states.
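
The flavor of such a protocol can be conveyed by a toy simulation of our own devising (not one of the two protocols from the talk): each agent stores a bounded counter, the source keeps its counter pinned at a maximum L, and random pairwise interactions propagate the maximum while decaying it, so that without a source all counters drain to zero.

```python
import random

def simulate(n, has_source, L, steps):
    """Toy detection protocol: each agent holds a counter in {0, ..., L};
    agent 0, when acting as the source, keeps its counter pinned at L.
    On each random pairwise interaction both agents adopt the pair's
    maximum minus one.  An agent outputs "source present" iff counter > 0."""
    c = [0] * n
    if has_source:
        c[0] = L
    for _ in range(steps):
        u, v = random.sample(range(n), 2)
        c[u] = c[v] = max(max(c[u], c[v]) - 1, 0)  # propagate with decay
        if has_source:
            c[0] = L                               # source re-seeds itself
    return sum(1 for x in c if x > 0) / n          # fraction answering "yes"

random.seed(1)
n = 200
print(simulate(n, True, 64, 50 * n))   # with a source: most agents detect it
print(simulate(n, False, 64, 50 * n))  # without a source: exactly 0.0
```

The counter bound L plays the role of the O(log n) state budget: large enough for fresh values to reach everyone before decaying, small enough that a sourceless rumor dies quickly.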

Algorithms and complexity
Thursday April 13, 2017, 2PM, Salle 3052
Przemyslaw Uznanski (ETH Zürich, Switzerland) All-Pairs 2-reachability in O(n^ω log n) Time

In the 2-reachability problem we are given a directed graph G and we wish to determine whether there are two (edge or vertex) disjoint paths from u to v, for a given pair of vertices u and v. We present an algorithm that computes 2-reachability information for all pairs of vertices in O(n^ω log n) time. Hence, we show that the running time of all-pairs 2-reachability is within a log factor of transitive closure.

Moreover, our algorithm produces a witness (i.e., a separating edge or a separating vertex) for all pairs of vertices where 2-reachability does not hold. By processing these witnesses, we can compute all the edge- and vertex-dominator trees of G in O(n^2) additional time, which in turn enables us to answer various connectivity queries in O(1) time. For instance, we can test in constant time if there is a path from u to v avoiding an edge e, for any pair of query vertices u and v, and any query edge e, or if there is a path from u to v avoiding a vertex w, for any query vertices u, v, and w.
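
For concreteness, these avoiding-queries can be stated as a naive BFS baseline (a sketch for intuition only; the function name is ours, and this takes linear time per query rather than the O(1) achieved by the dominator-tree structure):

```python
from collections import deque

def reachable_avoiding(adj, s, t, banned_edge=None, banned_vertex=None):
    """BFS from s to t in a directed graph, skipping a banned edge or a
    banned vertex (naive linear-time baseline for the O(1)-time queries)."""
    if banned_vertex in (s, t):
        return False
    seen, q = {s}, deque([s])
    while q:
        u = q.popleft()
        if u == t:
            return True
        for v in adj.get(u, []):
            if v == banned_vertex or (u, v) == banned_edge or v in seen:
                continue
            seen.add(v)
            q.append(v)
    return False

adj = {0: [1, 2], 1: [3], 2: [3], 3: [4]}
print(reachable_avoiding(adj, 0, 4))                      # True
print(reachable_avoiding(adj, 0, 4, banned_edge=(3, 4)))  # False: (3,4) is a separating edge
print(reachable_avoiding(adj, 0, 4, banned_vertex=3))     # False: 3 is a separating vertex
```

In this example vertex 3 dominates 4, exactly the kind of witness the algorithm in the talk extracts for all pairs at once.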

This is joint work with Loukas Georgiadis (University of Ioannina), Daniel Graf (ETH Zurich), Giuseppe Italiano and Nikos Parotsidis (University of Rome, Tor Vergata).

Link to the paper: https://arxiv.org/abs/1612.08075

Algorithms and complexity
Tuesday March 21, 2017, 11AM, Salle 1007
Emanuele Natale Friend or Foe? Population Protocols can perform Community Detection

We present a simple distributed algorithm that, given a regular graph consisting of two communities (or clusters), each inducing a good expander and such that the cut between them has sparsity $1/\mbox{polylog}(n)$, recovers the two communities.

More precisely, upon running the protocol, every node assigns itself a binary label of $m = \Theta(\log n)$ bits, so that with high probability, for all but a small number of outliers, nodes within the same community are assigned labels with Hamming distance $o(m)$, while nodes belonging to different communities receive labels with Hamming distance at least $m/2 - o(m)$. We refer to such an outcome as a “community sensitive labeling” of the graph.

Our algorithm uses $\Theta(\log^2 n)$ local memory and computes the community sensitive labeling after each node performs $\Theta(\log^2 n)$ steps of local work. Our algorithm and its analysis work in the “(random) population protocol” model, in which anonymous nodes do not share any global clock (the model is asynchronous) and communication occurs over one single (random) edge per round. We believe this is the first provably-effective protocol for community detection that works in this model.

Joint work with Luca Becchetti, Andrea Clementi, Francesco Pasquale, Prasad Raghavendra, and Luca Trevisan.

Link to the arxiv version: https://arxiv.org/abs/1703.05045

Algorithms and complexity
Tuesday February 21, 2017, 11AM, Salle 1007
Zvi Lotker Literature Networks and Time in Social Networks.

In this talk I will describe the connection between social networks and theater plays. I will show how algorithms can analyze plays using social networks, and how plays can reveal an interesting algorithmic problem.

Algorithms and complexity
Tuesday February 7, 2017, 11AM, Salle 1007
Charles Paperman Streaming and circuit complexity

In this talk, I will present a connection between the streaming complexity and the circuit complexity of regular languages through a notion of streaming by block. This result provides tight constructions of boolean circuits computing an automaton, thanks to some classical and recent results on the circuit complexity of regular languages.

Algorithms and complexity
Tuesday January 31, 2017, 11AM, Salle 1007
Chien-Chung Huang (CNRS, ENS Paris) Popularity, Mixed Matchings, and Self-duality

We consider the problem of matching applicants to jobs under two-sided preferences in a popular and utility-optimal manner. Our input instance is a bipartite graph G = (A \cup B,E) with a utility function w: E \rightarrow Q where each vertex u \in A \cup B has a preference list ranking its neighbors in a strict order of preference. For any two matchings M and T in G, let \phi(M,T) be the number of vertices that prefer M to T. A matching M is popular if \phi(M,T) \geq \phi(T,M) for all matchings T in G. A mixed matching is defined as \Pi = \{(M_0,p_0),\ldots,(M_k,p_k)\} for some matchings M_0,\ldots,M_k and \sum_{i=0}^k p_i = 1, p_i \geq 0 for all i. The function \phi(\cdot,\cdot) easily extends to mixed matchings; a mixed matching \Pi is popular if \phi(\Pi,\Lambda) \geq \phi(\Lambda,\Pi) for all mixed matchings \Lambda in G.

Motivated by the fact that a popular mixed matching could have a much higher utility than all popular matchings, we study the popular fractional matching polytope {\cal P}_G. Our main result is that this polytope is half-integral (and in the special case where a stable matching is a perfect matching, this polytope is integral), implying that there is always a max-utility popular mixed matching \Pi such that \Pi = \{(M_0,\frac{1}{2}),(M_1,\frac{1}{2})\} and \Pi can be computed in polynomial time. An immediate consequence is that in order to implement a max-utility popular mixed matching in G, we need just a single random bit.

We analyze {\cal P}_G whose description may have exponentially many constraints via an extended formulation in m+n dimensions with O(m+n) constraints. The linear program that gives rise to this formulation has an unusual property: self-duality. In other words, this linear program is exactly identical to its dual program. This is a rare case where an LP of a natural problem has such a property. The self-duality of the LP plays a crucial role in our proof of the half-integrality of {\cal P}_G.

We also show that our result carries over to the roommates problem, where the given graph G may not be bipartite. The polytope of popular fractional matchings is still half-integral here and thus we can compute a max-utility popular half-integral matching in G in polynomial time. To complement this result, we also show that the problem of computing a max-utility popular (integral) matching in a roommates instance G is NP-hard.

Algorithms and complexity
Tuesday January 10, 2017, 11AM, Salle 1007
Vincent Jugé (LSV, CNRS & ENS Cachan, Univ. Paris-Saclay) Dynamic Complexity of Parity Games with Bounded Tree-Width

Dynamic complexity is concerned with the complexity of updating a solution to a problem when its input changes. A typical example is as follows: given a directed graph G and two distinguished vertices s and t, you wish to know whether there exists a path from s to t in G. After your computation is complete, the graph is modified (e.g. by adding or deleting an edge): how should you proceed to update your answer at the least cost? Computing and storing auxiliary data, such as maintaining a covering forest, might be helpful in that regard.
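
To make the setting concrete: for edge insertions, the reachability update is expressible by a single first-order rule over the stored reachability relation (a minimal sketch in the spirit of DynFO; handling deletions is the genuinely hard part):

```python
def insert_edge(reach, u, v):
    """Update an all-pairs reachability matrix after inserting edge (u, v):
    every a that reached u now also reaches everything that v reached.
    This is a first-order-definable update over the auxiliary relation."""
    n = len(reach)
    reach[u][v] = True
    for a in range(n):
        if reach[a][u]:
            for b in range(n):
                if reach[v][b]:
                    reach[a][b] = True

n = 4
reach = [[i == j for j in range(n)] for i in range(n)]  # reflexive, no edges yet
insert_edge(reach, 0, 1)
insert_edge(reach, 2, 3)
insert_edge(reach, 1, 2)   # links the two chains: now 0 reaches 3
print(reach[0][3])         # True
```

The stored matrix plays the role of the auxiliary data mentioned above; the update formula is first-order, which is exactly the resource DynFO allows.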

In this talk, we will consider a specific class for dynamic complexity, called DynFO. A dynamic problem belongs to this class if updates can be performed by applying first-order formulas. We will show that a dynamic variant of Courcelle's theorem belongs to DynFO, and we will apply this result to computing optimal strategies in 2-player reachability or parity games.

This talk is based on joint work with Patricia Bouyer-Decitre and Nicolas Markey.


Year 2016

Algorithms and complexity
Tuesday December 13, 2016, 11AM, Salle 1007
Amos Korman (CNRS, IRIF) From Ants to Query Complexity

I will talk about my recent adventures with ants. Together with biologists we study P. longicornis ants as they collaboratively transport a large food item to their nest. This collective navigation process is guided by pheromones which are laid by individual ants. Using a new methodology to detect scent marks, we identify a new kind of ant trail characterized by very short and dynamic pheromone markings and highly stochastic navigation response to them. We argue that such a trail can be highly beneficial in conditions in which knowledge of individual ants regarding the underlying topological structure is unreliable. This gives rise to a new theoretical search model under unreliable guiding instructions, which is of independent computational interest. To illustrate the model, imagine driving a car in an unknown country that is in the aftermath of a major hurricane which has randomly flipped a certain small fraction of the road-signs. Under such conditions of unreliability, how can you still reach your destination fast? I will discuss the limits of unreliability that allow for efficient navigation. In trees, for example, there is a phase transition phenomenon that occurs roughly around 1/sqrt{D}. That is, if the noise is above this threshold, then no algorithm can find the target in time sub-exponential in the original distance, while below the threshold we identify an optimal, almost-linear walking algorithm. Finally, I will discuss algorithms that under such a noisy model aim to minimize the number of queries to find a target (rather than the number of moves).

This talk is based on joint works with biologists: Ofer Feinerman, Udi Fonio, Yael Heyman and Aviram Gelblum, and CS co-authors: Lucas Boczkowski, Adrian Kosowski and Yoav Rodeh.

Algorithms and complexity
Tuesday December 6, 2016, 11AM, Salle 1007
Omar Fawzi Algorithmic aspects of optimal channel coding

We study the problem of finding the maximum success probability for transmitting messages over a noisy channel from an algorithmic point of view. In particular, we show that a simple greedy polynomial-time algorithm computes a code achieving a (1-1/e)-approximation of the maximum success probability and that it is NP-hard to obtain an approximation ratio strictly better than (1-1/e). Moreover, the natural linear programming relaxation of this problem corresponds to the Polyanskiy-Poor-Verdú bound, which we also show has a value of at most 1/(1-1/e) times the maximum success probability.
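
The greedy algorithm in the talk exploits submodular structure; the same (1-1/e) guarantee is classically illustrated by greedy maximum coverage, shown here as an analogy (a standard textbook sketch, not the talk's channel-coding algorithm):

```python
def greedy_max_coverage(sets, k):
    """Pick k sets greedily, each time taking the one that covers the most
    still-uncovered elements; the classic (1 - 1/e)-approximation for
    maximizing a monotone submodular function under a cardinality constraint."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(sets, key=lambda s: len(s - covered))  # largest marginal gain
        chosen.append(best)
        covered |= best
    return chosen, covered

sets = [{1, 2, 3}, {3, 4}, {4, 5}, {1, 5}]
chosen, covered = greedy_max_coverage(sets, 2)
print(covered)  # {1, 2, 3, 4, 5}
```

The hardness side is analogous too: beating (1 - 1/e) for maximum coverage is NP-hard, mirroring the tight inapproximability stated in the abstract.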

Based on joint work with Siddharth Barman. arXiv:1508.04095

Algorithms and complexity
Friday December 2, 2016, 11AM, Salle 1007
Luc Sanselme Determinism and Computational Power of Real Measurement-based Quantum Computation

Measurement-based quantum computing (MBQC) is a universal model for quantum computation. The combinatorial characterisation of determinism in this model, which is driven by measurements and hence fundamentally probabilistic, is the cornerstone of most of the breakthrough results in this field. The most general known sufficient condition for driving a deterministic MBQC is that the underlying graph of the computation has a particular kind of flow called a Pauli flow. Whether the Pauli flow is also necessary was an open question. We show that the Pauli flow is necessary for real MBQC, but not in general, providing counter-examples for (complex) MBQC.

We explore the consequences of this result for real MBQC and its applications. Real MBQC, and more generally real quantum computing, is known to be universal for quantum computing. Real MBQC has been used for interactive proofs by McKague; the two-prover case corresponds to real MBQC on bipartite graphs. While (complex) MBQC on bipartite graphs is universal, the universality of real MBQC on bipartite graphs was an open question. We show that real bipartite MBQC is not universal, proving that all measurements of a real bipartite MBQC can be parallelised, leading to constant-depth computations. As a consequence, McKague's techniques cannot lead to two-prover interactive proofs.

Joint work with Simon Perdrix.

Algorithms and complexity
Tuesday November 22, 2016, 11AM, Salle 1007
Anca Nitulescu (ENS Paris) On the (In)security of SNARKs in the Presence of Oracles

In this work we study the feasibility of knowledge extraction for succinct non-interactive arguments of knowledge (SNARKs) in a scenario that, to the best of our knowledge, has not been analyzed before. While prior work focuses on the case of adversarial provers that may receive (statically generated) auxiliary information, here we consider the scenario where adversarial provers are given access to an oracle. For this setting we study if and under what assumptions such provers can admit an extractor. Our contribution is mainly threefold.

First, we formalize the question of extraction in the presence of oracles by proposing a suitable proof of knowledge definition for this setting. We call SNARKs satisfying this definition O-SNARKs. Second, we show how to use O-SNARKs to obtain formal and intuitive security proofs for three applications (homomorphic signatures, succinct functional signatures, and SNARKs on authenticated data) where we recognize an issue while doing the proof under the standard proof of knowledge definition of SNARKs. Third, we study whether O-SNARKs exist, providing both negative and positive results. On the negative side, we show that, assuming one-way functions, there do not exist O-SNARKs in the standard model for every signing oracle family (and thus for general oracle families as well). On the positive side, we show that when considering signature schemes with appropriate restrictions on the message length, O-SNARKs for the corresponding signing oracles exist, based on classical SNARKs and assuming extraction with respect to specific distributions of auxiliary input.

Algorithms and complexity
Tuesday November 8, 2016, 11AM, Salle 1007
Arpita Korwar Polynomial Identity Testing of Sum of ROABPs

Polynomials are fundamental objects studied in mathematics. Though univariate polynomials are fairly well-understood, multivariate polynomials are not. Arithmetic circuits are the primary tool used to study polynomials in computer science. They allow for the classification of polynomials according to their complexity.

Polynomial identity testing (PIT) asks if a polynomial, input in the form of an arithmetic circuit, is identically zero.

One special kind of arithmetic circuit is the read-once arithmetic branching program (ROABP), which can be written as a product of univariate polynomial matrices over distinct variables. We will study the characterization of ROABPs. In the process, we can give a polynomial-time PIT for the sum of constantly many ROABPs.
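
As background, the baseline that such derandomization results compete with is randomized PIT by evaluation at a random point, justified by the Schwartz-Zippel lemma (a standard sketch; the example polynomials and parameters are illustrative, not from the talk):

```python
import random

def is_identically_zero(poly, num_vars, degree, trials=20):
    """Randomized PIT via Schwartz-Zippel: a nonzero polynomial of total
    degree d vanishes at a random point of S^n with probability <= d/|S|,
    so a nonzero evaluation is a certificate of nonzeroness."""
    S = range(100 * degree)  # sample set much larger than the degree
    for _ in range(trials):
        point = [random.choice(S) for _ in range(num_vars)]
        if poly(*point) != 0:
            return False     # witness found: definitely nonzero
    return True              # identically zero, with high probability

random.seed(0)
# (x + y)^2 - x^2 - 2xy - y^2 is identically zero
print(is_identically_zero(lambda x, y: (x + y)**2 - x*x - 2*x*y - y*y, 2, 2))
print(is_identically_zero(lambda x, y: x*y - 1, 2, 2))
```

The open challenge the talk addresses is doing this deterministically for structured circuit classes such as sums of ROABPs, where the polynomial is given as a circuit rather than a black box.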

Algorithms and complexity
Tuesday October 25, 2016, 11AM, Salle 1007
Eric Angel (Université d'Évry Val d'Essonne IBISC) Clustering on k-edge-colored graphs.

We study the Max k-colored clustering problem, where, given an edge-colored graph with k colors, we seek to color the vertices of the graph so as to find a clustering of the vertices maximizing the number (or the weight) of matched edges, i.e. the edges having the same color as their endpoints. We show that the cardinality problem is NP-hard even for edge-colored bipartite graphs with a chromatic degree equal to two and k ≥ 3. Our main result is a constant approximation algorithm for the weighted version of the Max k-colored clustering problem which is based on a rounding of a natural linear programming relaxation. For graphs with chromatic degree equal to two, we improve this ratio by exploiting the relation of our problem with the Max 2-AND problem. We also present a reduction to the maximum-weight independent set (IS) problem in bipartite graphs which leads to a polynomial time algorithm for the case of two colors.

Algorithms and complexity
Tuesday October 18, 2016, 11AM, Salle 1007
Carola Doerr Provable Performance Gains via Dynamic Parameter Choices in Heuristic Optimization

In many optimization heuristics there are a number of parameters to be chosen. These parameters typically have a crucial impact on the performance of the algorithm. It is therefore of great interest to set these parameters wisely. Unfortunately, determining the optimal parameter choices for a randomized search heuristic via mathematical means is a rather difficult task. Even worse, for many problems the optimal parameter choices seem to change during the optimization process. While this seems quite intuitive, little theoretical evidence exists to support this claim. In a series of recent works we have proposed two very simple success-based update rules for the parameter settings of some standard search heuristics. For both these rules we can prove that they yield a better performance than any static parameter choice. Based on joint work with Benjamin Doerr (Ecole Polytechnique), Timo Koetzing (HPI Potsdam, Germany), and Jing Yang (Ecole Polytechnique).
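
A success-based update rule of this kind can be sketched on the (1+1) EA for the OneMax benchmark (a one-fifth-rule-style scheme of our own for illustration; the constants and the rule itself are not the exact update rules analyzed in the talk):

```python
import random

def one_plus_one_ea(n, max_iters=200000):
    """(1+1) EA on OneMax with a self-adjusting mutation rate:
    double the rate after an improving step, gently decrease it otherwise."""
    random.seed(42)
    x = [random.randint(0, 1) for _ in range(n)]
    fit = sum(x)
    rate = 1.0 / n                     # start at the classic static choice
    for it in range(max_iters):
        if fit == n:
            return it                  # optimum (all-ones string) reached
        y = [b ^ (random.random() < rate) for b in x]  # flip each bit w.p. rate
        fy = sum(y)
        if fy > fit:
            x, fit = y, fy
            rate = min(0.5, rate * 2.0)             # success: be bolder
        else:
            rate = max(1.0 / n, rate * 2 ** -0.25)  # failure: cool down
    return max_iters

print(one_plus_one_ea(30))  # iterations until the optimum is found
```

The point of the rule is that the mutation rate drifts toward whatever is currently profitable, which is exactly the dynamic behavior the static-versus-dynamic comparison in the talk formalizes.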

Algorithms and complexity
Tuesday October 11, 2016, 11AM, Salle 1007
Dieter van Melkebeek (University of Wisconsin, Madison) Deterministic Isolation for Space-Bounded Computation

Isolation is the process of singling out a solution to a problem that may have many solutions. It plays an important role in the design of efficient parallel algorithms as it ensures that the various parallel processes all work towards a single global solution rather than towards individual solutions that may not be compatible with one another. For example, the best parallel algorithms for finding perfect matchings in graphs hinge on isolation for this reason. Isolation is also an ingredient in some efficient sequential algorithms. For example, the best running times for certain NP-hard problems like finding Hamiltonian paths in graphs are achieved via isolation.

All of these algorithms are randomized, and the only reason is the use of the Isolation Lemma – that for any set system over a finite universe, a random assignment of small integer weights to the elements of the universe has a high probability of yielding a unique set of minimum weight in the system. For each of the underlying problems it is open whether deterministic algorithms of similar efficiency exist.
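
The Isolation Lemma is easy to check empirically: with weights drawn uniformly from {1, …, N}, a set system over an n-element universe has a unique minimum-weight set with probability at least 1 - n/N (a quick simulation sketch; the particular set system below is an arbitrary choice):

```python
import random
from itertools import combinations

def min_weight_sets(system, weights):
    """All sets of minimum total weight under the given element weights."""
    best = min(sum(weights[e] for e in s) for s in system)
    return [s for s in system if sum(weights[e] for e in s) == best]

random.seed(7)
universe = range(10)
# all 3-element subsets: heavily tied under unit weights
system = [frozenset(c) for c in combinations(universe, 3)]

N = 2 * len(system)  # weight range {1, ..., N}
trials = 1000
unique = sum(
    len(min_weight_sets(system, {e: random.randint(1, N) for e in universe})) == 1
    for _ in range(trials)
)
print(unique / trials)  # empirical isolation rate, at least about 1 - n/N
```

The open problem in the talk is to replace this randomness by a deterministic weight assignment, in particular within logarithmic space.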

This talk is about the possibility of deterministic isolation in the space-bounded setting. The question is: Can one always make the accepting computation paths of nondeterministic space-bounded machines unique without changing the underlying language and without blowing up the space by more than a constant factor? Or equivalently, does there exist a deterministic logarithmic space mapping reduction from directed st-connectivity to itself that transforms positive instances into ones where there is a unique path from s to t?

I will present some recent results towards a resolution of this question, obtained jointly with Gautam Prakriya. Our approach towards a positive resolution can be viewed as derandomizing the Isolation Lemma in the context of space-bounded computation.

Algorithms and complexity
Tuesday September 20, 2016, 11AM, Salle 1007
Eldar Fischer (Faculty of CS, Technion - Israel Institute of Technology) Improving and extending testing of distributions for shape restrictions.

Distribution property testing deals with what information can be deduced about an unknown distribution over {1,…,n}, where we are only allowed to obtain a relatively small number of independent samples from the distribution. In the basic model the algorithm may only base its decision on receiving a sequence of samples from the distribution, while in the conditional model the algorithm may also request samples out of the conditional distribution over subsets of {1,…,n}.

A test has to distinguish a distribution satisfying a given property from a distribution that is far in the variation distance from satisfying it. A range of properties such as monotonicity and log-concavity have been unified under the banner of L-decomposable properties. Here we improve upon the basic model test for all such properties, as well as provide a new test under the conditional model whose number of queries does not directly depend on n. We also provide a conditional model test for a wider range of properties, that in particular yields tolerant testing for all L-decomposable properties. For tolerant testing conditional samples are essential, as an efficient test in the basic model is known not to exist.

Our main mechanism is a way of efficiently producing a partition of {1,…,n} into intervals satisfying a small-weight requirement with respect to the unknown distribution. Also, we show that investigating just one such partition is sufficient for solving the testing question, as opposed to prior works where a search for the “correct” partition was performed.

Joint work with Oded Lachish and Yadu Vasudev.

Algorithms and complexity
Tuesday September 13, 2016, 11AM, Salle 1007
Tatiana Starikovskaya (IRIF, Université Paris Diderot) Streaming and communication complexity of Hamming distance

We will discuss the complexity of one of the most basic problems in pattern matching, that of approximating the Hamming distance. Given a pattern P of length n the task is to output an approximation of the Hamming distance (that is, the number of mismatches) between P and every n-length substring of a longer text. We provide the first efficient one-way randomised communication protocols as well as a new, fast and space efficient streaming algorithm for this problem.
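
The simplest way to approximate the Hamming distance of a single alignment is to sample positions (a naive sketch for intuition only; the talk's streaming algorithm handles all n alignments of the pattern against the text simultaneously and is far more refined):

```python
import random

def approx_hamming(p, w, k=1000):
    """Estimate the Hamming distance between two equal-length strings by
    sampling k random positions and rescaling the mismatch count."""
    n = len(p)
    hits = sum(p[i] != w[i] for i in (random.randrange(n) for _ in range(k)))
    return hits * n / k

random.seed(3)
n = 10000
p = ''.join(random.choice('ab') for _ in range(n))
t = ''.join(random.choice('ab') for _ in range(n))
exact = sum(a != b for a, b in zip(p, t))
est = approx_hamming(p, t)
print(exact, round(est))  # estimate concentrates around the exact count
```

The standard-deviation of this estimator scales like n/sqrt(k), which is why sampling alone does not give the space/accuracy trade-offs achieved by the sketching techniques in the talk.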

Algorithms and complexity
Monday August 29, 2016, 11AM, Room 2002
Sanjeev Khanna (University of Pennsylvania) On the Single-Pass Streaming Complexity of the Set Cover Problem

In the set cover problem, we are given a collection of $m$ subsets over a universe of $n$ elements, and the goal is to find a sub-collection of sets whose union covers the universe. The set cover problem is a fundamental optimization problem with many applications in computer science and related disciplines. In this talk, we investigate the set cover problem in the streaming model of computation whereby the sets are presented one by one in a stream, and the goal is to solve the set cover problem using a space-efficient algorithm.

We show that to compute an $\alpha$-approximate set cover (for any $\alpha= o(\sqrt{n})$) via a single-pass streaming algorithm, $\Theta(mn/\alpha)$ space is both necessary and sufficient (up to an $O(\log{n})$ factor). We further study the problem of estimating the size of a minimum set cover (as opposed to finding the actual sets), and show that this turns out to be a distinctly easier problem. Specifically, we prove that $\Theta(mn/\alpha^2)$ space is both sufficient and necessary (up to logarithmic factors) for estimating the size of a minimum set cover to within a factor of $\alpha$. Our algorithm in fact works for the more general problem of estimating the optimal value of a covering integer program. These results provide a tight resolution of the space-approximation tradeoff for single-pass streaming algorithms for the set cover problem.
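
To see why single-pass set cover is delicate, consider the obvious one-pass heuristic: keep any arriving set that covers something new. It always finds a cover when one exists, but its approximation ratio can be arbitrarily bad (a toy illustration of the difficulty, not an algorithm from the talk):

```python
def one_pass_cover(universe, stream_of_sets):
    """Single-pass heuristic: keep a set iff it covers a still-uncovered
    element.  Correct but with no nontrivial approximation guarantee."""
    covered, kept = set(), []
    for s in stream_of_sets:
        if s - covered:          # set brings something new: keep it
            kept.append(s)
            covered |= s
        if covered == universe:  # early exit once everything is covered
            break
    return kept

universe = set(range(6))
# an adversarial arrival order: small sets first, the one-set cover last
stream = [{0}, {1}, {2}, {0, 1, 2, 3, 4, 5}]
print(len(one_pass_cover(universe, stream)))  # 4, while the optimum is 1
```

The lower bounds in the talk make this tension precise: any single-pass α-approximation must essentially remember Θ(mn/α) bits about the stream.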

This is joint work with my students Sepehr Assadi and Yang Li.

Algorithms and complexity
Tuesday July 5, 2016, 11AM, Salle 1007
Alexandra Kolla (University of Illinois at Urbana-Champaign) Towards Constructing Expanders via Lifts: Hopes and Limitations.

In this talk, I will examine the spectrum of random k-lifts of d-regular graphs. We show that, for random shift k-lifts (which includes 2-lifts), if all the nontrivial eigenvalues of the base graph G are at most \lambda in absolute value, then with high probability depending only on the number n of nodes of G (and not on k), if k is *small enough*, the absolute value of every nontrivial eigenvalue of the lift is at most O(\lambda). While previous results on random lifts were asymptotically true with high probability in the degree of the lift k, our result is the first upper bound on spectra of lifts for bounded k. In particular, it implies that a typical small lift of a Ramanujan graph is almost Ramanujan. I will present a quasi-polynomial time algorithm for constructing almost-Ramanujan expanders through such lifts. I will also discuss some impossibility results for large k, which, as one consequence, imply that there is no hope of constructing large Ramanujan graphs from large abelian k-lifts.

Based on joint work with Naman Agarwal, Karthik Chandrasekaran and Vivek Madan.

Algorithms and complexity
Tuesday June 21, 2016, 11AM, Salle 1007
Nathanaël Fijalkow Alternating Communication Complexity, with Applications to Online Space Complexity

We study the model of alternating communication complexity introduced by Babai, Frankl and Simon in 1986. We extend the rank lower bound to this setting. We show some applications of this technique for online space complexity, as defined by Karp in the 60s. This measure of complexity quantifies the amount of space used by a Turing machine whose input tape can read each symbol only once, from left to right. In particular, we obtain logarithmic lower bounds on the alternating online space complexity of the set of prime numbers written in binary, which is an exponential improvement over the previous result due to Hartmanis and Shank in 1968.

Algorithms and complexity
Tuesday May 31, 2016, 11AM, Salle 1007
Stacey Jeffery (Institute for Quantum Information and Matter, Caltech) Span Programs, NAND-Trees, and Graph Connectivity

We show a connection between NAND-tree evaluation and st-connectivity problems on certain graphs to generalize a superpolynomial quantum speedup of Zhan et al. for a promise version of NAND-tree formula evaluation. In particular, we show that the quantum query complexity of evaluating NAND-tree instances with average choice complexity at most W is O(W), where average choice complexity is a measure of the difficulty of winning the associated two-player game. Our results follow from relating average choice complexity to the effective resistance of these graphs, which itself corresponds to the span program witness size. These connections suggest an interesting relationship between st-connectivity problems and span program algorithms, that we hope may have further applications.

This is joint work with Shelby Kimmel.

Algorithms and complexity
Tuesday May 24, 2016, 11AM, Salle 1007
Tim Black (University of Chicago) Monotone Properties of k-Uniform Hypergraphs are Weakly Evasive.

The decision-tree complexity of a Boolean function is the number of input bits that must be queried (adaptively) in the worst case to determine the value of the function. A Boolean function in n variables is weakly evasive if its decision-tree complexity is Omega(n). By k-graphs we mean k-uniform hypergraphs. A k-graph property on v vertices is a Boolean function on n = \binom{v}{k} variables corresponding to the k-subsets of a v-set that is invariant under the v! permutations of the v-set (isomorphisms of k-graphs).

Rivest and Vuillemin (1976) proved that all non-constant monotone graph properties (k=2) are weakly evasive, confirming a conjecture of Aanderaa and Rosenberg (1973). Kulkarni, Qiao, and Sun (2013) proved the analogous result for 3-graphs. We extend these results to k-graphs for every fixed k. From this, we show that monotone Boolean functions invariant under the action of a large primitive group are weakly evasive.

While KQS (2013) employ the powerful topological approach of Kahn, Saks, and Sturtevant (1984) combined with heavy number theory, our argument is elementary and self-contained (modulo some basic group theory). Inspired by the outline of the KQS approach, we formalize the general framework of “orbit augmentation sequences” of sets with group actions. We show that a parameter of such sequences, called the “spacing,” is a lower bound on the decision-tree complexity for any nontrivial monotone property that is G-invariant for all groups G involved in the orbit augmentation sequence, assuming all those groups are p-groups. We develop operations on such sequences such as composition and direct product which will provide helpful machinery for our applications. We apply this general technique to k-graphs via certain liftings of k-graphs with wreath product action of p-groups.

Algorithms and complexity
Tuesday May 17, 2016, 11AM, Salle 1007
Nikhil Bansal (Eindhoven University of Technology) Solving optimization problems on noisy planar graphs

Many graph problems that are hard to approximate on general graphs become much more tractable on planar graphs. In particular, planar graphs can be decomposed into small pieces or into bounded treewidth graphs, leading to PTASes for these problems. But little is known about the noisy setting where the graphs are only nearly planar, i.e. deleting few edges makes them planar. One obstacle is that current planar decomposition techniques fail completely with noise. Another obstacle is that the known guarantees for the planarization problem are too weak for our purpose.

We show that using linear programming methods such as configuration LPs and spreading metrics, one can get around these barriers and obtain PTASes for many problems on noisy planar graphs. This resolves an open question of Magen and Moharrami, that was recently popularized by Claire Mathieu.

Algorithms and complexity
Tuesday May 10, 2016, 11AM, Salle 1007
Jean Cardinal Solving k-SUM using few linear queries

In the k-SUM problem, we are given n real numbers and must determine whether any k of them sum to zero. The problem is of tremendous importance in the emerging field of complexity theory within P, and it is in particular open whether it admits an algorithm of complexity O(n^c) with c < ⌈k/2⌉. Inspired by an algorithm due to Meiser (1993), we show that there exist linear decision trees and algebraic computation trees of depth O(n^3 log^3(n)) solving k-SUM. Furthermore, we show that there exists a randomized algorithm that runs in Õ(n^{⌈k/2⌉+8}) time, and performs O(n^3 log^3(n)) linear queries on the input. Thus, we show that it is possible to have an algorithm with a runtime almost identical (up to the +8) to the best known algorithm but for the first time also with the number of queries on the input a polynomial that is independent of k. The O(n^3 log^3(n)) bound on the number of linear queries is also a tighter bound than any known algorithm solving k-SUM, even allowing unlimited total time outside of the queries. By simultaneously achieving few queries to the input without significantly sacrificing runtime vis-à-vis known algorithms, we deepen the understanding of this canonical problem which is a cornerstone of complexity-within-P. We also consider a range of tradeoffs between the number of terms involved in the queries and the depth of the decision tree. In particular, we prove that there exist o(n)-linear decision trees of depth o(n^4).
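
For orientation, the classical algorithm for the k = 3 case runs in O(n^2) time using only comparisons of linear forms in the input, precisely the kind of linear query counted by the decision-tree bounds above (a standard textbook sketch, not the Meiser-style algorithm from the talk):

```python
def three_sum_exists(nums):
    """Classic O(n^2) two-pointer check for 3SUM: sort once, then for each
    anchor element scan inward from both ends.  Every branch is decided by
    the sign of a linear form in the inputs, i.e. a linear query."""
    a = sorted(nums)
    n = len(a)
    for i in range(n - 2):
        lo, hi = i + 1, n - 1
        while lo < hi:
            s = a[i] + a[lo] + a[hi]
            if s == 0:
                return True
            if s < 0:
                lo += 1   # need a larger sum
            else:
                hi -= 1   # need a smaller sum
    return False

print(three_sum_exists([3, -1, -2, 8]))  # True: 3 + (-1) + (-2) = 0
print(three_sum_exists([1, 2, 4, -9]))   # False
```

The point of the talk is that far fewer linear queries, O(n^3 log^3 n) independently of k, suffice in the decision-tree sense, even though total running time remains around n^{⌈k/2⌉}.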

Joint work with John Iacono and Aurélien Ooms

Algorithms and complexity
Tuesday May 3, 2016, 11AM, Salle 1007
Mehdi Mhalla (LIG Grenoble) Pseudotelepathy games with graph states, contextuality and multipartiteness width.

Analyzing pseudotelepathy graph games, we propose a way to build contextuality scenarios exhibiting quantum supremacy using graph states. We consider the combinatorial structures generating equivalent scenarios. We investigate which scenarios are more multipartite and show that there exist graphs generating scenarios with a linear multipartiteness width.

This is based on a joint work with Peter Hoyer and Simon Perdrix.

Algorithms and complexity
Tuesday April 26, 2016, 11AM, Salle 4033
Jan Hackfeld (TU Berlin) Undirected Graph Exploration with Θ(log log n) Pebbles

We consider the fundamental problem of exploring an undirected and initially unknown graph by an agent with little memory. The vertices of the graph are unlabeled, and the edges incident to a vertex have locally distinct labels. In this setting, it is known that Θ(log n) bits of memory are necessary and sufficient to explore any graph with at most n vertices. We show that this memory requirement can be decreased significantly by making part of the memory distributable in the form of pebbles. A pebble is a device that can be dropped to mark a vertex and can be collected when the agent returns to the vertex. We show that an agent with O(log log n) distinguishable pebbles and O(log log n) bits of memory can explore any bounded-degree graph with at most n vertices. We match this result with a lower bound showing that, for any agent with sub-logarithmic memory, Ω(log log n) distinguishable pebbles are necessary for exploration.

This talk is based on joint work with Yann Disser and Max Klimm (SODA'16).

Algorithms and complexity
Tuesday April 19, 2016, 11AM, Salle 1007
Charles Bennett Is there such a thing as private classical information?

Classical secret information lies on a slippery slope between public information and quantum information. Even leaving aside fanciful attacks like neutrino tomography, a typical classical secret—say a paper document locked in a safe—quickly decoheres and becomes recoverable in principle from the environment outside the safe. On the other hand, if a system is so well insulated from its environment that it does not decohere, it can be used as a quantum memory, capable of existing in a superposition of classical states and of being entangled with other quantum memories. We discuss the practical and theoretical difficulty of recovering a classical secret from its decohered environment, and of protecting a classical secret by arranging that some information required to recover it escapes into parts of the environment inaccessible to the eavesdropper.

Algorithms and complexity
Wednesday March 30, 2016, 2PM, Salle 3058
Manoj Prabhakaran (University of Illinois, Urbana-Champaign) Rényi Information Complexity and an Information Theoretic Characterization of the Partition Bound

We introduce a new information-theoretic complexity measure IC∞ for 2-party functions which is a lower bound on communication complexity, and which has the two leading lower bounds on communication complexity as its natural relaxations: (external) information complexity (IC) and the logarithm of partition complexity (prt). These two lower bounds had so far appeared conceptually quite different from each other, but we show that they are both obtained from IC∞ using two different, but natural, relaxations:
  • IC∞ is similar to information complexity IC, except that it uses Rényi mutual information of order ∞ instead of Shannon's mutual information (which is Rényi mutual information of order 1). Hence, the relaxation of IC∞ that yields IC is to change the order of Rényi mutual information used in its definition from ∞ to 1.
  • The relaxation that connects IC∞ with partition complexity is to replace the protocol transcripts used in the definition of IC∞ with what we term “pseudotranscripts,” which omit the interactive nature of a protocol and only require that the probability of any transcript, given inputs x and y to the two parties, factorizes into two terms which depend on x and y separately. While this relaxation yields an apparently different definition than the (log of the) partition function, we show that the two are in fact identical. This gives us a surprising characterization of the partition bound in terms of an information-theoretic quantity.
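To illustrate the Rényi family invoked in the first bullet, here is a small sketch of order-α Rényi entropy for a single distribution (the mutual-information variant used to define IC∞ is more involved; this block is an illustration only, not the measure from the paper):

```python
from math import log2, inf

def renyi_entropy(probs, alpha):
    """Rényi entropy H_alpha(p) in bits.

    alpha = 1 recovers Shannon entropy (the limit of the general formula),
    and alpha = inf gives the min-entropy -log2 max_x p(x).
    """
    if alpha == 1:
        return -sum(p * log2(p) for p in probs if p > 0)
    if alpha == inf:
        return -log2(max(probs))
    return log2(sum(p ** alpha for p in probs if p > 0)) / (1 - alpha)
```

On the uniform distribution all orders agree (2 bits for four outcomes); on skewed distributions, higher orders weigh the most likely outcomes more heavily.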

Further understanding IC∞ might have consequences for important direct-sum problems in communication complexity, as it lies between communication complexity and information complexity.

We also show that if both the above relaxations are simultaneously applied to IC∞, we obtain a complexity measure that is lower-bounded by the (log of) relaxed partition complexity, a complexity measure introduced by Kerenidis et al. (FOCS 2012). We obtain a similar (but incomparable) connection between (external) information complexity and relaxed partition complexity as Kerenidis et al., using an arguably more direct proof.

This is joint work with Vinod Prabhakaran.

Algorithms and complexity
Friday March 11, 2016, 11AM, Salle 1007
Christian Konrad (Reykjavik University) Streaming Algorithms for Partitioning Sequences and Trees

Partitioning sequences and trees are classical load balancing problems that received considerable attention in the 1980s and 1990s. For both problems, exact algorithms with different characteristics exist. However, in the context of massive data sets, these algorithms fail, since they assume random access to the input, an assumption that can hardly be granted. The key motivation of this work is the partitioning of current XML databases, some of which reach many terabytes in size. In an XML database, data is organized in a tree structure.

In this talk, I will present streaming algorithms for both problems. The presented algorithms require a random access memory whose size is only logarithmic in the size of the input, which makes them good candidates for performing well in practice. This work will be presented next week at ICDT 2016, the 19th International Conference on Database Theory.
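For context, here is a minimal non-streaming sketch of the sequence-partitioning problem (split a sequence into at most p contiguous blocks so as to minimize the largest block sum), solved by binary search on the answer with a greedy feasibility check. Integer weights and the function name are my assumptions; the streaming algorithms of the talk are of course quite different:

```python
def partition_min_max(xs, p):
    """Optimal max block sum when xs is cut into at most p contiguous blocks.

    Binary search over candidate caps; feasible(cap) greedily packs items
    into blocks of sum at most cap and checks whether p blocks suffice.
    Runs in O(n log(sum(xs))) time with O(1) extra space.
    """
    def feasible(cap):
        blocks, current = 1, 0
        for x in xs:
            if x > cap:                # one item alone exceeds the cap
                return False
            if current + x > cap:      # close this block, start a new one
                blocks += 1
                current = x
            else:
                current += x
        return blocks <= p

    lo, hi = max(xs), sum(xs)
    while lo < hi:
        mid = (lo + hi) // 2
        if feasible(mid):
            hi = mid
        else:
            lo = mid + 1
    return lo
```

For example, partition_min_max([7, 2, 5, 10, 8], 2) returns 18, attained by the cut [7, 2, 5] | [10, 8].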

Algorithms and complexity
Tuesday February 23, 2016, 11AM, Salle 1007
Ashwin Nayak (University of Waterloo) Sampling quantum states

A classic result in information theory, the source coding theorem, shows how we may compress a sample from a random variable X into essentially H(X) bits on average, without any loss. (Here H(X) denotes the Shannon entropy of X.) We revisit the analogous problem in quantum communication, in the presence of shared entanglement. No characterization of the communication needed for lossless compression is known in this scenario. We study a natural protocol for such compression, quantum rejection sampling, and give bounds on its complexity. Even though we do not have a precise characterization of the complexity, we show how it may be used to derive some consequences of lossless compression.
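As a classical point of comparison for quantum rejection sampling, here is a sketch of ordinary rejection sampling (the names and interface are illustrative, not from the talk). A proposal distribution is sampled repeatedly, and a draw x is accepted with probability target(x) / (M · proposal(x)), so accepted draws are distributed according to the target; the expected number of proposal draws is M:

```python
import random

def rejection_sample(target, proposal, sample_proposal, rng=random):
    """Draw one sample from `target` using draws from `proposal`.

    `target` and `proposal` map outcomes to probabilities, and
    `sample_proposal()` returns one draw from the proposal distribution.
    M = max_x target[x] / proposal[x] governs the expected number of
    proposal draws per accepted sample.
    """
    M = max(target[x] / proposal[x] for x in target)
    while True:
        x = sample_proposal()
        if rng.random() < target[x] / (M * proposal[x]):
            return x
```

E.g. sampling a 3:1-biased bit from a fair-coin proposal accepts every head but only a third of tails.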

Joint work with Ala Shayeghi.

Algorithms and complexity
Tuesday February 16, 2016, 11AM, Salle 1007
Johan Thapper (Université Paris-Est, Marne-la-Vallée, LIGM) Constraint Satisfaction Problems, LP relaxations and Polymorphisms

An instance of the Constraint Satisfaction Problem (CSP) is given by a set of constraints over a set of variables. The variables take values from a (finite) domain and the constraints are specified by relations over the domain that need to hold between various subsets of variables. In the late 90s, it was realised that the complexity of the CSP restricted to some fixed set of relations is captured by an associated set of operations called polymorphisms. This connection has led to a great influx of ideas and tools (as well as researchers) from universal algebra, a field of mathematics that in particular studies algebras of such operations.
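As a concrete (and deliberately naive) illustration of the CSP as just defined, here is a brute-force solver sketch; the interface is my own, and real tractability, as the talk explains, hinges on the polymorphisms of the fixed relation set rather than on search:

```python
from itertools import product

def solve_csp(variables, domain, constraints):
    """Exhaustive search for a satisfying assignment of a CSP instance.

    `constraints` is a list of (scope, relation) pairs: `scope` is a tuple
    of variable names, and `relation` the set of value tuples allowed on
    that scope. Returns a satisfying assignment as a dict, or None.
    Exponential in the number of variables.
    """
    for values in product(domain, repeat=len(variables)):
        assignment = dict(zip(variables, values))
        if all(tuple(assignment[v] for v in scope) in relation
               for scope, relation in constraints):
            return assignment
    return None
```

For instance, with the disequality relation on a Boolean domain this is graph 2-coloring: an edge is 2-colorable, a triangle is not.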

A quite general optimisation version of the CSP is obtained by replacing the relations by arbitrary functions from tuples of domain values to the rationals extended with positive infinity. The goal of this problem, called the Valued Constraint Satisfaction Problem (VCSP), is to minimise a sum of such functions over all assignments. The complexity classification project of the VCSP has taken some great strides over the last four years and has recently been reduced to its more famous decision problem counterpart: the dichotomy conjecture for the CSP.

I will talk about how polymorphisms appear in the study of the CSP and some of what universal algebra has taught us. I will then show how these results can be used for characterising the efficacy of Sherali-Adams linear programming relaxations of the VCSP.

This is based on joint work with Standa Zivny, University of Oxford (UK).

Algorithms and complexity
Tuesday February 2, 2016, 11AM, Salle 1007
Balthazar Bauer Compression of communication protocols

In communication theory, there are two big characteristics for a protocol: its communication complexity and its information complexity. There is a huge activity in the research community to find interesting relations between these two characteristics. An idea is to try to compress a protocol and to see the efficiency of the compression for a protocol with a known communication and information. Here we will see some recent schemes of compression. It will be also a good occasion to discover some common tricks and algorithms in communication and information theory.

Algorithms and complexity
Tuesday January 26, 2016, 11AM, Salle 1007
Chien-Chung Huang (Chalmers University of Technology and Göteborg University) Exact and Approximation Algorithms for Weighted Matroid Intersection

We propose new exact and approximation algorithms for the weighted matroid intersection problem. Our exact algorithm is faster than previous algorithms when the largest weight is relatively small. Our approximation algorithm delivers a $(1-\epsilon)$-approximate solution with a running time significantly faster than most known exact algorithms.

The core of our algorithms is a decomposition technique: we decompose an instance of the weighted matroid intersection problem into a set of instances of the unweighted matroid intersection problem. The computational advantage of this approach is that we can make use of fast unweighted matroid intersection algorithms as a black box for designing algorithms. Precisely speaking, we prove that we can solve the weighted matroid intersection problem via solving $W$ instances of the unweighted matroid intersection problem, where $W$ is the largest given weight. Furthermore, we can find a $(1-\epsilon)$-approximate solution via solving $O(\epsilon^{-1} \log r)$ instances of the unweighted matroid intersection problem, where $r$ is the smallest rank of the given two matroids.