Algorithms and complexity
Wednesday December 15, 2021, 10AM, Salle 1007
Serge Massar (Laboratoire d’Information Quantique CP224, Université libre de Bruxelles) Characterizing the intersection of QMA and coQMA

We show that the functional analogue of QMA∩coQMA, denoted F(QMA∩coQMA), equals the complexity class Total Functional QMA (TFQMA). To prove this, we introduce alternative definitions of QMA∩coQMA in terms of a single quantum verification procedure. We show that if TFQMA equals the functional analogue of BQP (FBQP), then QMA∩coQMA = BQP. We show that if there is a QMA-complete problem that (robustly) reduces to a problem in TFQMA, then QMA∩coQMA = QMA. These results provide strong evidence that the inclusions FBQP⊆TFQMA⊆FQMA are strict, since otherwise the corresponding inclusions in BQP⊆QMA∩coQMA⊆QMA would become equalities.

Algorithms and complexity
Tuesday December 7, 2021, 11AM, Salle 1007
Bruce Kapron (Computer Science Department, University of Victoria) Type-two feasibility via bounded query revision

The problem of generalizing the notion of polynomial time to a type-two setting, where inputs include not only finitary data, e.g., numbers or strings of symbols, but also functions on such data, was first posed by Constable in 1973. Mehlhorn subsequently proposed a solution, based on a generalization of Cobham's scheme-based approach using limited recursion on notation. The resulting class was shown to be robust with respect to oracle Turing machine (OTM) computation, but the problem of providing a purely machine-based characterization remained open for almost two decades, until Kapron and Cook gave a solution using notions of function length and second-order polynomials. While a natural generalization of the usual notion of poly-time, this characterization still had some shortcomings, as function length is itself not a feasible notion. A characterization using ordinary polynomials remained an elusive goal.

Recently, we have provided several such characterizations. The underlying idea is to bound run time in terms of an ordinary polynomial in the largest answer returned by an oracle, while imposing a constant upper bound on the number of times an oracle query (answer) may exceed all previous ones in size. The resulting classes are contained in Mehlhorn's class, and when closed under composition are in fact equivalent.
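The "bounded query revision" accounting described above can be illustrated with a small wrapper that counts how many times an oracle answer strictly exceeds all previous answers in size; the classes keep this revision count bounded by a constant while allowing run time polynomial in the largest answer seen. This is an illustrative sketch with invented names, not code from the paper:

```python
class RevisionCountingOracle:
    """Wraps an oracle f : str -> str and counts 'revisions': queries
    whose answer is strictly longer than every previous answer.
    (Hypothetical helper for illustration; not from the paper.)"""

    def __init__(self, f):
        self.f = f
        self.max_len = 0    # size of the largest answer seen so far
        self.revisions = 0  # number of times that maximum was exceeded

    def query(self, x: str) -> str:
        ans = self.f(x)
        if len(ans) > self.max_len:
            self.max_len = len(ans)
            self.revisions += 1
        return ans
```

A machine whose revision count is bounded by a constant, and whose running time is polynomial in `max_len`, captures the idea of the characterization.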

In this talk we will present these recent results and follow-up work involving new scheme-based characterizations of Mehlhorn's class, as well as a recent characterization using a tier-based type system for a simple imperative programming language with oracle calls.

(Joint work with F. Steinberg, and with E. Hainry, J.-Y. Marion, and R. Pechoux.)

Algorithms and complexity
Tuesday December 7, 2021, 2PM, 3052
Daniel Szilagyi Introduction to convexity - optimization and sampling

In this talk we present some basic notions of convexity, from a quantum-algorithmic point of view. The talk will consist of two parts. We first warm up by introducing some classical first- and second-order methods for convex optimization. Then, we move on to more structured convex problems, introduce linear programming and the basic ideas of interior-point methods. We mention some quantum algorithms for LPs and SDPs. In the second part of the talk we discuss sampling, and in particular its applications to estimating volumes of polytopes. We present the basic classical sampling algorithms (ball walk and hit-and-run), and the general framework for applying these algorithms to volume estimation. We conclude by discussing the opportunities for quantum speedups - in particular, we present a high-level overview of the work by Chakrabarti, Childs, Hung, Li, Wang and Wu from 2019.

Quantum info Paris

Algorithms and complexity
Wednesday November 17, 2021, 11AM, Salle 1007
Spyros Angelopoulos (LIP6) Competitive Sequencing with Noisy Advice

Several well-studied online resource allocation problems can be formulated in terms of infinite, increasing sequences of positive values, in which each element is associated with a corresponding allocation value. Examples include problems such as online bidding, searching for a hidden target on an unbounded line, and designing interruptible algorithms based on contract scheduling. The performance of the online algorithm, in each of these problems, is measured by the competitive ratio, which describes the multiplicative performance loss due to the absence of full information on the instance.
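As background for the contract-scheduling example (standard material, not a result of this talk): the classical doubling schedule runs contracts of exponentially increasing lengths back to back, and guarantees that at any interruption time T the longest completed contract has length at least T/4, i.e., acceleration ratio 4 without any advice. A minimal sketch:

```python
def doubling_schedule_completed_length(T: float) -> float:
    """Run contracts of lengths 1, 2, 4, 8, ... back to back and return
    the length of the longest contract completed by an interruption at
    time T.  The classical guarantee is that this is at least T/4 for
    T >= 1 (acceleration ratio 4).  Illustrative baseline only."""
    total, length, best = 0.0, 1.0, 0.0
    while total + length <= T:   # contract finishes before interruption
        total += length
        best = length
        length *= 2              # next contract doubles in length
    return best
```

The advice settings in the talk ask how much this ratio can be improved when k bits of (possibly erroneous) advice about the interruption are available.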

We study such competitive sequencing problems in a setting in which the online algorithm has some (potentially) erroneous information, expressed as a k-bit advice string, for some given k. For concreteness, we use contract scheduling as a case study. We first consider the untrusted advice setting in which the objective is to obtain a Pareto-optimal schedule, considering the two extremes: either the advice is error-free, or it is generated by a (malicious) adversary. Here, we show a Pareto-optimal schedule, using a new approach for applying the functional-based lower-bound technique due to [Gal, Israel J. Math. 1972]. Next, we introduce and study a new noisy advice setting, in which a number of the advice bits may be erroneous; the exact error is unknown to the online algorithm, which only has access to a pessimistic estimate (i.e., an upper bound on this error). We give the first upper and lower bounds on the competitive ratio of an online problem in this setting. To this end, we combine ideas from robust query-based search in arrays (such as Rényi-Ulam-type games) and fault-tolerant contract scheduling. The presentation will also discuss the applicability of these approaches to more general online problems.

Joint work with Diogo Arsénio and Shahin Kamali.

Algorithms and complexity
Tuesday October 19, 2021, 2PM, Salle 3052
Stephen Piddock (School of Mathematics, University of Bristol) Quantum analogue simulation: universality and complexity

Quantum analogue simulation aims to reproduce the physics of a target Hamiltonian H using a different physical system; this can be achieved by ensuring that the low-energy subspace of the physical Hamiltonian is approximately H. First I will describe a construction of a 1D quantum system that can simulate all other Hamiltonians (a property we call universality) just by varying the length of the chain. The construction relies heavily on the possibility of encoding a (quantum) computation into the chain. Then I will go on to show a much more general equivalence between universality and the computational complexity of the Local Hamiltonian problem, the problem of approximately estimating the ground state energy of a system.

Based on https://arxiv.org/abs/2003.13753 and https://arxiv.org/abs/2101.12319

Algorithms and complexity
Tuesday October 12, 2021, 11AM, Salle 1007
Pierre Meyer (IRIF, IDC Herzliya) Breaking the Circuit Size Barrier for Secure Computation under Quasi-Polynomial LPN

In this work we introduce a new (circuit-dependent) homomorphic secret sharing (HSS) scheme for any $\log/\log\log$-local circuit, with communication proportional only to the width of the circuit and polynomial computation, which is secure assuming the super-polynomial hardness of learning parity with noise (LPN). At the heart of our new construction is a pseudorandom correlation generator (PCG) which allows two parties to locally stretch short seeds into pseudorandom instances of an arbitrary $\log/\log\log$-local additive correlation.

Our main application, and the motivation behind this work, is a generic two-party secure computation protocol for every layered (boolean or arithmetic) circuit of size $s$ with total communication $O(s/\log\log s)$ and polynomial computation, assuming the super-polynomial hardness of the standard learning parity with noise assumption (a circuit is layered if its nodes can be partitioned into layers, such that any wire connects adjacent layers). This expands the set of assumptions under which the `circuit-size barrier' can be broken, for a large class of circuits. The strength of the underlying assumption is tied to the sublinearity factor: we achieve communication $O(s/k(s))$ under the $s^{2^{k(s)}}$-hardness of LPN, for any $k(s) \leq (\log\log s) / 4$.

Previously, the set of assumptions known to imply a PCG for correlations of degree $\omega(1)$ or generic secure computation protocols with sublinear communication was restricted to LWE, DDH, and a circularly secure variant of DCR.

Algorithms and complexity
Tuesday June 22, 2021, 11AM, Salle 3052 + Zoom
Subhasree Patro (CWI) Quantum Fine-Grained Complexity

One of the major challenges in the field of complexity theory is the inability to prove unconditional time lower bounds, including for practical problems within the complexity class P. One way around this is the study of fine-grained complexity, where we use special reductions to prove time lower bounds for many diverse problems in P based on the conjectured hardness of some key problems. For example, computing the edit distance between two strings, a problem of practical interest in comparing DNA strings, has an algorithm that takes O(n^2) time. Using a fine-grained reduction, it can be shown that a faster algorithm for edit distance would also imply a faster algorithm for the Boolean Satisfiability (SAT) problem, which is believed not to exist: strong evidence that it will be very hard to solve the edit distance problem faster. Besides SAT, the 3SUM and APSP problems are also well suited as a basis for such reductions, since they are natural to describe and well studied.
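The quadratic-time edit distance algorithm mentioned above is the textbook dynamic program (Wagner-Fischer); a minimal sketch:

```python
def edit_distance(a: str, b: str) -> int:
    """Classic O(n^2)-time dynamic program for edit distance.
    After processing row i, prev[j] holds the edit distance
    between a[:i] and b[:j]."""
    n, m = len(a), len(b)
    prev = list(range(m + 1))  # distances from "" to prefixes of b
    for i in range(1, n + 1):
        cur = [i] + [0] * m
        for j in range(1, m + 1):
            cur[j] = min(prev[j] + 1,        # delete a[i-1]
                         cur[j - 1] + 1,     # insert b[j-1]
                         prev[j - 1] + (a[i - 1] != b[j - 1]))  # substitute
        prev = cur
    return prev[m]
```

The conditional lower bounds say that, assuming the hardness of SAT, no algorithm can improve substantially on this quadratic running time.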

The situation in the quantum regime is no better; almost all known lower bounds for quantum algorithms are defined in terms of query complexity, which doesn’t help much for problems for which the best-known algorithms take superlinear time. Therefore, employing fine-grained reductions in the quantum setting seems a natural way forward. However, translating the classical fine-grained reductions directly into the quantum regime is not always possible for various reasons. In this talk, I will present some recent results in which we circumvent these challenges and prove quantum time lower bounds for some problems in BQP conditioned on the conjectured quantum hardness of SAT (and its variants) and the 3SUM problem.

Algorithms and complexity
Wednesday June 9, 2021, 10AM, Collège de France
Frédéric Magniez - Elham Kashefi (IRIF - CNRS, University of Edinburgh) Derniers développements pour l’internet et intelligence artificielle quantique

Eighth lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Elham Kashefi on « Quantum Computing as a Service: Secure and Verifiable Multi-Tenant Quantum Data Centre » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-06-09-10h00.htm

Algorithms and complexity
Tuesday June 8, 2021, 4PM, Online
Kyriakos Axiotis (MIT) Decomposable Submodular Function Minimization via Maximum Flow

Submodular function minimization is a primitive that is extensively used in many ML applications, including MAP inference in MRFs, image segmentation, and clustering. Although polynomial-time algorithms for this problem exist, their runtime is prohibitive for large-scale applications. Fortunately, in practice it is often the case that the function can be decomposed as a sum of r simpler functions, each acting on a small number of elements, k. We present an algorithm that can solve this problem (as well as the more general parametric problem) in time O((n+r)^(1.497) poly(k)), where n is the number of elements in the ground set. Perhaps surprisingly, we achieve this by O~(poly(k)) calls to a maximum flow oracle, thus reducing a problem that is unrelated to graphs to a graph problem. In fact, if max flow is solvable in near-linear time and k=O(1), our algorithm runs in near-linear time. At a technical level, we develop a dual iterative method that computes each step by approximating the submodular base polytope by a graph cut polytope. To solve the resulting graph problem, we develop an efficient algorithm for the parametric min cut problem, which might also be of independent interest.

Joint work with Adam Karczmarz, Anish Mukherjee, Piotr Sankowski, and Adrian Vladu.

Algorithms and complexity
Wednesday June 2, 2021, 10AM, Collège de France
Frédéric Magniez - André Chailloux (IRIF - INRIA) Limites et impact du calcul quantique

Seventh lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by André Chailloux on « Suprématie quantique : où en sommes-nous aujourd'hui ? » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-06-02-10h00.htm

Algorithms and complexity
Wednesday May 26, 2021, 10AM, Collège de France
Frédéric Magniez - Iordanis Kerenidis (IRIF - CNRS) Transformée de Fourier quantique II

Sixth lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Iordanis Kerenidis on « Quantum Machine Learning » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-05-26-10h00.htm

Algorithms and complexity
Wednesday May 19, 2021, 10AM, Collège de France
Frédéric Magniez - Stacey Jeffery (IRIF - CWI) Optimisation quantique

Fifth lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Stacey Jeffery on « A Unified Framework for Quantum Walk Search » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-05-19-10h00.htm

Algorithms and complexity
Wednesday May 12, 2021, 10AM, Collège de France
Frédéric Magniez - Miklos Santha (IRIF - CNRS, CQT) Transformée de Fourier quantique I

Fourth lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Miklos Santha on « Le problème du sous-groupe caché » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-05-12-10h00.htm

Algorithms and complexity
Wednesday May 5, 2021, 10AM, Collège de France
Frédéric Magniez - Simon Perdrix (IRIF - CNRS) Circuits quantiques, premiers algorithmes

Third lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Simon Perdrix on « Langages graphiques pour programmer et raisonner en informatique quantique » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-05-05-10h00.htm

Algorithms and complexity
Tuesday May 4, 2021, 11AM, Online
Paweł Gawrychowski (University of Wrocław) Distance oracles and labeling schemes for planar graphs

A fundamental question concerning graphs is that of constructing a data structure, called a distance oracle, that allows us to compute the distance between any pair of nodes. The goal is to avoid preprocessing the answer to every possible query while keeping the query time small. This is quite hard if we insist on receiving the exact answer and don't make any assumptions about the graph. A particularly natural class of inputs in this context are planar graphs. It is only recently that we have discovered that strongly subquadratic distance oracles with polylogarithmic query time exist for such graphs. I will present the most recent developments in this and related areas. In particular, I will talk about the so-called informative labeling schemes for distances in planar graphs, which can be seen as a clean distributed version of distance oracles.

Algorithms and complexity
Wednesday April 14, 2021, 10AM, Collège de France
Frédéric Magniez - Thomas Vidick (IRIF - California Institute of Technology) Cryptographie et communication quantiques

Second lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Thomas Vidick on « Certifier la génération de nombres aléatoires avec le quantique » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-04-14-10h00.htm

Algorithms and complexity
Wednesday April 7, 2021, 10AM, Collège de France
Frédéric Magniez - Eleni Diamanti (IRIF - CNRS) Information quantique, premières utilisations calculatoires

First lecture of Frédéric Magniez's Collège de France course on quantum algorithms (at 10am), followed by a talk by Eleni Diamanti on « Réseaux de communication quantique » (at 11:30am). Further information: https://www.college-de-france.fr/site/frederic-magniez/course-2021-04-07-10h00.htm

Algorithms and complexity
Thursday April 1, 2021, 6PM, Collège de France
Frédéric Magniez (IRIF, Collège de France) Algorithmes quantiques : quand la physique quantique défie la thèse de Church-Turing

Inaugural lecture of the Collège de France course on quantum algorithms. Further information: https://www.college-de-france.fr/site/frederic-magniez/inaugural-lecture-2021-04-01-18h00.htm.

Broadcast link: https://www.youtube.com/watch?v=JqNvAN05R04

Algorithms and complexity
Tuesday March 30, 2021, 11AM, Zoom
David Saulpic (LIP6) A New Coreset Framework for Clustering

Given a metric space, the (k, z)-clustering problem consists of finding k centers such that the sum of the distances, raised to the power z, of every point to its closest center is minimized. This encapsulates the famous k-median (z = 1) and k-means (z = 2) clustering problems. Designing small-space sketches of the data that approximately preserve the cost of the solutions, also known as coresets, has been an important research direction over the last 15 years. In this paper, we present a new, simple coreset framework that simultaneously improves upon the best known bounds for a large variety of settings, ranging from Euclidean spaces and doubling metrics to minor-free metrics and general metrics.

Joint work with Vincent Cohen-Addad and Chris Schwiegelshohn.
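The (k, z)-clustering objective defined above translates directly into code; a minimal sketch for the Euclidean case (illustrative, not from the paper):

```python
import math

def clustering_cost(points, centers, z):
    """Cost of a (k, z)-clustering solution: the sum over all points of
    (distance to the nearest center)^z.  z = 1 gives k-median cost,
    z = 2 gives k-means cost.  Points and centers are tuples in R^d."""
    def dist(p, q):
        return math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, q)))
    return sum(min(dist(p, c) for c in centers) ** z for p in points)
```

A coreset is a small weighted point set on which this cost function agrees, up to a (1+ε) factor, with its value on the full data set for every candidate set of centers.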

https://u-paris.zoom.us/j/88942335626?pwd=R25JRVhJWGU4dnNEUFYxZ1RKbDRPQT09

Algorithms and complexity
Tuesday March 16, 2021, 11AM, Room 3052 & Online
Federico Centrone (IRIF) Experimental demonstration of quantum advantage for NP verification with limited information

In recent years, many computational tasks have been proposed as candidates for showing a quantum computational advantage, that is, an advantage in the time needed to perform the task using a quantum instead of a classical machine. Nevertheless, practical demonstrations of such an advantage remain particularly challenging because of the difficulty in bringing together all necessary theoretical and experimental ingredients. Here, we show an experimental demonstration of a quantum computational advantage in a prover-verifier interactive setting, where the computational task consists in the verification of an NP-complete problem by a verifier who only gets limited information about the proof sent by an untrusted prover in the form of a series of unentangled quantum states. We provide a simple linear optical implementation that can perform this verification task efficiently (within a few seconds), while we also provide strong evidence that, for a fixed proof size, a classical computer would take much longer (assuming only that solving an NP-complete problem takes exponential time). While our computational advantage concerns a specific task in a scenario of mostly theoretical interest, it brings us a step closer to potential useful applications, such as server-client quantum computing.

https://u-paris.zoom.us/j/88942335626?pwd=R25JRVhJWGU4dnNEUFYxZ1RKbDRPQT09

Algorithms and complexity
Tuesday March 2, 2021, 4PM, Zoom
Soheil Behnezhad (Department of Computer Science, University of Maryland) Beating Two-Thirds For Random-Order Streaming Matching

We study the maximum matching problem in the random-order semi-streaming setting. In this problem, the edges of an arbitrary $n$-vertex graph $G=(V, E)$ arrive in a stream one by one and in a random order. The goal is to have a single pass over the stream, use $O(n \cdot \mathrm{poly}\log{(n)})$ space, and output a large matching of $G$.
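For context, the folklore single-pass baseline in this model is greedy matching, which keeps an edge whenever both endpoints are still free; on any arrival order it uses O(n) words of space and guarantees only a maximal matching, i.e., a 1/2-approximation. This is the baseline that random-order analysis improves upon, not the algorithm of the talk:

```python
def greedy_streaming_matching(edge_stream):
    """One-pass greedy matching over a stream of edges (u, v): keep an
    edge iff both endpoints are currently unmatched.  Produces a maximal
    matching, hence a 1/2-approximation to maximum matching, in O(n)
    words of space.  (Folklore baseline, not the (2/3 + eps_0) algorithm
    of the talk.)"""
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
    return matching
```

On a path 1-2-3-4 streamed in order, greedy matches (1, 2) and (3, 4), which here happens to be maximum; adversarial orders force it down to half the optimum.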

We prove that for an absolute constant $\epsilon_0 > 0$, one can find a $(2/3 + \epsilon_0)$-approximate maximum matching of $G$ using $O(n \log n)$ space with high probability. This breaks the natural boundary of $2/3$ for this problem prevalent in the prior work and resolves an open problem of Bernstein~[ICALP'20] on whether a $(2/3 + \Omega(1))$-approximation is achievable.

https://u-paris.zoom.us/j/88942335626?pwd=R25JRVhJWGU4dnNEUFYxZ1RKbDRPQT09

Algorithms and complexity
Tuesday February 9, 2021, 11AM, Online
Xavier Waintal (Univ. Grenoble Alpes, CEA) What Limits the Simulation of Quantum Computers?

An ultimate goal of quantum computing is to perform calculations beyond the reach of any classical computer. It is therefore imperative that useful quantum computers be very difficult to simulate classically, otherwise classical computers could be used for the applications envisioned for the quantum ones. Perfect quantum computers are unarguably exponentially difficult to simulate: the classical resources required grow exponentially with the number of qubits N or the depth D of the circuit. This difficulty has triggered recent experiments on deep, random circuits that aim to demonstrate that quantum devices may already perform tasks beyond the reach of classical computing. These real quantum computing devices, however, suffer from many sources of decoherence and imprecision which limit the degree of entanglement that can actually be reached to a fraction of its theoretical maximum. They are characterized by an exponentially decaying fidelity F∼(1−ε)^(ND) with an error rate ε per operation as small as ≈1% for current devices with several dozen qubits or even smaller for smaller devices. In this work, we provide new insight on the computing capabilities of real quantum computers by demonstrating that they can be simulated at a tiny fraction of the cost that would be needed for a perfect quantum computer. Our algorithms compress the representations of quantum wave functions using matrix product states, which are able to capture states with low to moderate entanglement very accurately. This compression introduces a finite error rate ε so that the algorithms closely mimic the behavior of real quantum computing devices. The computing time of our algorithm increases only linearly with N and D in sharp contrast with exact simulation algorithms. We illustrate our algorithms with simulations of random circuits for qubits connected in both one- and two-dimensional lattices. We find that ε can be decreased at a polynomial cost in computing power down to a minimum error ε∞. 
Getting below ε∞ requires computing resources that increase exponentially with ε∞/ε. For a two-dimensional array of N=54 qubits and a circuit with control-Z gates, error rates better than state-of-the-art devices can be obtained on a laptop in a few hours. For more complex gates such as a swap gate followed by a controlled rotation, the error rate increases by a factor 3 for similar computing time. Our results suggest that, despite the high fidelity reached by quantum devices, only a tiny fraction (∼10^−8) of the system Hilbert space is actually being exploited.
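The fidelity scaling F ∼ (1−ε)^(ND) quoted above is easy to evaluate numerically; a small sketch (the depth value below is illustrative, not taken from the talk):

```python
def fidelity(eps: float, n_qubits: int, depth: int) -> float:
    """Expected fidelity F = (1 - eps)^(N * D) for an error rate eps
    per operation, N qubits, and circuit depth D."""
    return (1 - eps) ** (n_qubits * depth)

# e.g. eps = 1% on a 54-qubit circuit of (illustrative) depth 20:
f = fidelity(0.01, 54, 20)
```

Even at a 1% error rate, the fidelity of such a circuit is tiny, which is what leaves room for approximate classical simulation at matching accuracy.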

Algorithms and complexity
Tuesday January 19, 2021, 11AM, Online
Christian Konrad (University of Bristol) Optimal Lower Bounds for Matching and Vertex Cover in Dynamic Graph Streams

In this talk, I will discuss simple optimal lower bounds on the one-way two-party communication complexity of approximate Maximum Matching and Minimum Vertex Cover with deletions. In this model, Alice holds a set of edges and sends a single message to Bob. Bob holds a set of edge deletions, which form a subset of Alice's edges, and needs to report a large matching or a small vertex cover in the graph spanned by the edges that are not deleted. Our results imply optimal space lower bounds for insertion-deletion streaming algorithms for Maximum Matching and Minimum Vertex Cover. An optimal space lower bound for Maximum Matching was previously given by Assadi et al. [SODA 2016]; however, that lower bound only holds for the subclass of streaming algorithms that are able to process very long (at least triple exponential in n) input streams.

This work appeared at CCC'20. Joint work with Jacques Dark.

Algorithms and complexity
Tuesday January 5, 2021, 11AM, Online
Benny Applebaum (Tel-Aviv University) The Round Complexity of Perfectly-Secure Multiparty Computation

We study the round complexity of general secure multiparty computation (MPC) in the classical information-theoretic model of Ben-Or, Goldwasser, and Wigderson [BGW88]. That is, we strive for perfect (information-theoretic and error-free) security against any coalition of at most t computationally unbounded corrupted parties. Classical feasibility results show that this is possible for any t<n/2 in the passive setting, when the parties follow the protocol, and up to t<n/3 in the active (aka Byzantine) setting, when the parties may deviate from the protocol.

I will survey a recent line of works that settles the round complexity of perfect-MPC with optimal security threshold for general functions. In particular, we show that 2 rounds are sufficient and necessary in the passive setting and 4 rounds are sufficient and necessary in the active setting.

Based on joint works with Zvika Brakerski, Eliran Kachlon, Arpita Patra, and Rotem Tsabary.

BBB link: https://bbb-front.math.univ-paris-diderot.fr/recherche/yas-vbs-c25-ogj