~~NOCACHE~~ /* DO NOT EDIT THIS FILE */ [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday December 17, 2024, 11AM, Salle 3052\\ **Team Event** //AlgoComp lightning talks ⚡️// \\ [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Wednesday December 11, 2024, 11AM, Salle 3071\\ **Akin Ünal** (ETH Zürich) //On the Soundness of Algebraic Attacks against Code-based Assumptions// \\ [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday December 10, 2024, 11AM, Salle 3052\\ **Orr Paradise** (UC Berkeley) //Models that prove their own correctness// \\ This talk introduces Self-Proving models, a new class of models that formally prove the correctness of their outputs via an Interactive Proof system. After reviewing some related literature, I will formally define Self-Proving models and their per-input (worst-case) guarantees. I will then present algorithms for learning these models and explain how the complexity of the proof system affects the complexity of the learning algorithms. Finally, I will show experiments where Self-Proving models are trained to compute the Greatest Common Divisor of two integers, and to prove the correctness of their results to a simple verifier. Joint work with Noga Amit, Shafi Goldwasser, and Guy N. Rothblum. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday November 19, 2024, 11AM, Salle 3052\\ **Philip Verduyn Lunel** (LIP6) //Quantum Position Verification Protocols: Security and Practical Considerations// \\ In this talk we will give an overview of the area of position verification protocols. In position verification, the idea is to use an individual's geographical location as a cryptographic credential. In a position-verification protocol, the limitations imposed by the speed of light, as described by special relativity, are used to verify that a party is at their claimed location. This task has been shown to be impossible to implement using only classical information. 
Initially, the hope was that using quantum information we could construct secure quantum position verification (QPV) protocols. However, it has been shown that any QPV protocol can be broken by attackers that use an amount of entanglement exponential in the input size. Thus, unconditionally-secure quantum position-verification protocols do not exist. From a practical point of view, not all is lost. The exponential upper bound for a general attack is still astronomically large for only a relatively small input. Hence, we can still hope for practically secure QPV protocols. This raises the problem of designing protocols that are secure in a practical setting. In this talk we will give an overview of the field, present some proposed protocols, and discuss their advantages and drawbacks. We will also discuss the best known entanglement bounds. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday October 22, 2024, 11AM, Salle 3052\\ **Adam Wesołowski** (Royal Holloway) //Advances in quantum algorithms for the shortest path problem// \\ In this talk, we will delve into the quantum complexity of some graph problems: Single Pair Shortest Pathfinding (SPSP), Diameter, Radius, Eccentricity. We will briefly review the electrical network framework for graphs and proceed to discuss recent advances in quantum algorithms and lower bounds for these problems and present a number of open problems. We give an algorithm in the adjacency array model that under special conditions solves the SPSP problem in $\tilde{O}(l\sqrt{m})$ steps, and with parallelization in $\tilde{O}(\sqrt{lm})$ steps, matching the upper bound on path detection up to polylog factors, where $l$ is the length of the shortest path between the considered vertices. 
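As a classical point of comparison for the quantum bounds above: in the unweighted case, SPSP can be solved by breadth-first search in $O(n+m)$ time. A minimal sketch (the toy graph below is invented for illustration):

```python
from collections import deque

def bfs_shortest_path_length(adj, s, t):
    """Length of a shortest s-t path in an unweighted graph given as an
    adjacency list; returns None if t is unreachable from s."""
    dist = {s: 0}
    queue = deque([s])
    while queue:
        u = queue.popleft()
        if u == t:
            return dist[u]
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return None

# Toy example: a path 0-1-2-3 plus a shortcut edge 0-2.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2]}
```

Here the quantum algorithm's $\tilde{O}(l\sqrt{m})$ bound should be read against this classical $O(n+m)$ baseline.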
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday October 15, 2024, 11AM, Salle 3052\\ **Christoph Egger** (IRIF) //On Bounded Storage Key Agreement and One-Way Functions// \\ We study key agreement in the bounded-storage model, where the participants and the adversary can use an a priori fixed bounded amount of space, and receive a large stream of data. While key agreement is known to exist unconditionally in this model (Cachin and Maurer, Crypto'97), there are strong lower bounds on the space complexity of the participants, round complexity, and communication complexity that unconditional protocols can achieve. In this work, we explore how a minimal use of cryptographic assumptions can help circumvent these lower bounds. We obtain several contributions: - Assuming one-way functions, we construct a one-round key agreement in the bounded-storage model, with arbitrary polynomial space gap between the participants and the adversary, and communication slightly larger than the adversarial storage. Additionally, our protocol can achieve everlasting security using a second streaming round. - In the other direction, we show that one-way functions are *necessary* for key agreement in the bounded-storage model with large space gaps. We further extend our results to the setting of *fully-streaming* adversaries, and to the setting of key agreement with multiple streaming rounds. Our results rely on a combination of information-theoretic arguments and technical ingredients such as pseudorandom generators for space-bounded computation, and a tight characterization of the space efficiency of known reductions between standard Minicrypt primitives (from distributional one-way functions to pseudorandom functions), which might be of independent interest. 
Joint work with Chris Brzuska, Geoffroy Couteau and Willy Quach. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday October 8, 2024, 11AM, Salle 3052\\ **Julia Kastner** (CWI, Amsterdam) //Pairing-Free Blind Signatures from Standard Assumptions in the ROM// \\ Blind Signatures are a useful primitive for privacy-preserving applications such as electronic payments, e-voting, anonymous credentials, and more. However, existing practical blind signature schemes based on standard assumptions require either pairings or lattices. We present the first practical construction of a round-optimal blind signature in the random oracle model based on standard assumptions without resorting to pairings or lattices. In particular, our construction is secure under the strong RSA assumption and DDH (in pairing-free groups). For our construction, we provide a NIZK-friendly signature based on strong RSA, and efficiently instantiate a variant of Fischlin's generic framework (CRYPTO'06). Our Blind Signature scheme has signatures of size 4.28 KB and communication cost 10.98 KB. On the way, we develop techniques that might be of independent interest. In particular, we provide efficient *relaxed* range-proofs for large ranges with subversion zero-knowledge and compact commitments to elements of arbitrary groups. Joint work with Michael Reichle and Ky Nguyen. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday October 1, 2024, 11AM, Salle 3052\\ **Philips George John** (NUS, Singapore) //Distribution Learning Meets Graph Structure Sampling via Online Learning// \\ The problem of PAC-learning high-dimensional Bayesian networks (distributions) with bounded indegree is well-studied in the literature, especially with respect to KL divergence and total variation distance. 
While the sample complexity of the learning problem is well-understood and near-sample-optimal algorithms are known, the problem of efficiently (in polynomial time) learning constant-indegree Bayesian networks, even over binary variables, is still open except in special cases such as tree Bayes nets. This work establishes a novel link between the problem of PAC-learning high-dimensional graphical models and the task of (efficient) counting and sampling of graph structures, using an online learning framework. We observe that if we apply the exponentially weighted average (EWA) or randomized weighted majority (RWM) forecasters on a sequence of samples from a distribution P using the log loss function, the average regret incurred by the forecaster's predictions can be used to bound the expected KL divergence between P and the predictions. Known regret bounds for EWA and RWM then yield new sample complexity bounds for learning Bayes nets. Moreover, these algorithms can be made computationally efficient for several interesting classes of Bayes nets. Specifically, we give a new sample-optimal and polynomial-time learning algorithm with respect to trees of unknown structure and the first polynomial-sample and polynomial-time algorithm for learning with respect to Bayes nets over a given chordal skeleton. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Wednesday September 25, 2024, 11AM, Salle 3052\\ **Sanjeev Khanna** (University of Pennsylvania) //A Faster Combinatorial Algorithm for Maximum Bipartite Matching// \\ The maximum bipartite matching problem is among the most fundamental and well-studied problems in combinatorial optimization. A beautiful and celebrated combinatorial algorithm of Hopcroft and Karp (1973) shows that maximum bipartite matching can be solved in $O(m \sqrt{n})$ time on a graph with $n$ vertices and $m$ edges. For the case of very dense graphs, an approach based on fast matrix multiplication gives a running time of $O(n^{2.371})$. 
These results represented the fastest known algorithms for the problem until 2013, when Madry introduced a new approach based on continuous techniques achieving much faster runtime in sparse graphs. This line of research has culminated in a spectacular recent breakthrough due to Chen et al. (2022) that gives an $m^{1+o(1)}$ time algorithm for maximum bipartite matching (and more generally, for min cost flows). This raises a natural question: are continuous techniques essential to obtaining fast algorithms for the bipartite matching problem? In this talk, I will describe a new, purely combinatorial algorithm for bipartite matching that runs in $\tilde{O}(m^{1/3}n^{5/3})$ time, and hence outperforms both Hopcroft-Karp and the algorithms based on fast matrix multiplication on moderately dense graphs. I will also briefly discuss some subsequent work that further improves this runtime. This is joint work with Julia Chuzhoy (TTIC). [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday September 17, 2024, 11AM, Salle 3052\\ **Paul Hermouet** (LIP6) //Towards Unclonable Cryptography in the Plain Model// \\ Properties of quantum mechanics have enabled the emergence of quantum cryptographic protocols achieving important goals which are proven to be impossible classically. Perhaps one of the most striking examples of such protocols are the unclonable primitives, which allow different cryptographic "objects" (such as keys, ciphertexts, signatures, programs, etc.) to be encoded in a quantum state in a manner that permits the "object" to be used but not copied. Recent works have shown how to construct some of these primitives, by using different assumptions and models. In this talk, I will present some of these unclonable primitives, and discuss how to leverage **coset states** to construct them. 
I will subsequently elaborate on my recent efforts in constructing two of these primitives - copy-protection for point functions and unclonable encryption - in the plain model. Finally, if time allows, I will show how to construct coset-state-based primitives in a semi-quantum way, that is, without a quantum channel. Based on joint works with Huy Vu and Céline Chevalier: "Semi-quantum Copy-Protection and More" (TCC 2023: https://eprint.iacr.org/2023/244), "Towards Unclonable Cryptography in the Plain Model" (preprint: https://eprint.iacr.org/2023/1825). [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday September 10, 2024, 11AM, Salle 3052\\ **Ashwin Nayak** (IQC, University of Waterloo) //State distinguishability and learning// \\ A basic problem in quantum communication is that of recovering classical information encoded in quantum states. Several problems in learning theory may naturally be cast into this form. We will describe some problems of this nature, specifically the coupon collector problem and its variants. We will present more efficient algorithms as well as lower bounds for these problems, along with implications for learning. Based on work with Shima Bab Hadiashar and Pulkit Sinha. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday August 27, 2024, 11AM, Salle 3052\\ **Arjan Cornelissen** (IRIF) //Quantum Sabotage Complexity// \\ Given a Boolean function, the goal in the usual query model is to compute it on an unknown input, while minimizing the number of queries to it. One can also consider a "distinguishing" problem: given a 0-input x and a 1-input y, either all differing locations are replaced by a ∗, or all differing locations are replaced by †, and an algorithm’s goal is to identify which of these is the case while minimizing the number of queries. 
Ben-David and Kothari [ToC’18] introduced the notion of randomized sabotage complexity of a Boolean function to be the zero-error randomized query complexity of this problem. A natural follow-up question is to understand the quantum query complexity of this problem. In this talk, I elaborate on our recent paper "Quantum sabotage complexity" that initiates a systematic study of this. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Friday July 12, 2024, 2PM, Salle 3052\\ **Nicholas Spooner** (University of Warwick) //An efficient quantum parallel repetition theorem and applications// \\ A fundamental question in cryptography is whether the security of a “weak” cryptographic primitive can be generically amplified. We investigate this question in the quantum setting. We prove a general security amplification result for any quantum cryptographic primitive with a three-message security game, including (canonical) quantum commitments and quantum money. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday June 25, 2024, 2PM, Salle 3052\\ **Atsuya Hasegawa** (The University of Tokyo) //On the Power of Quantum Distributed Proofs// \\ Quantum nondeterministic distributed computing was recently introduced as dQMA (distributed quantum Merlin-Arthur) protocols by Fraigniaud, Le Gall, Nishimura and Paz (ITCS 2021). In dQMA protocols, with the help of quantum proofs and local communication, nodes on a network verify a global property of the network. Fraigniaud et al. showed that, when the network size is small, there exists an exponential separation in proof size between distributed classical and quantum verification protocols, for the equality problem, where the verifiers check if all the data owned by a subset of them are identical. In this paper, we further investigate and characterize the power of the dQMA protocols for various decision problems. First, we give a more efficient dQMA protocol for the equality problem with a simpler analysis. 
We also show that a quantum advantage for the equality problem on path networks persists even when the network size is large, by considering "relay points" between extreme nodes. Second, we show that even in a general network, there exist efficient dQMA protocols for the ranking verification problem, the Hamming distance problem, and more problems that derive from efficient quantum one-way communication protocols. Third, in a line network, we construct an efficient dQMA protocol for a problem that has an efficient two-party QMA communication protocol. Finally, we obtain the first lower bounds on the proof and communication cost of dQMA protocols. Based on joint work with Srijita Kundu and Harumichi Nishimura (https://arxiv.org/abs/2403.14108). [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Monday June 24, 2024, 2PM, Salle 3052\\ **Chandrima Kayal** (ISI Kolkata) //On higher multiplicity hyperplane and polynomial covers for symmetry-preserving subsets of the hypercube// \\ Suppose we want to cover all the vertices of the n-dimensional Boolean cube using a minimum number of hyperplanes. Observe that this can be done using only two hyperplanes. Moreover, no single hyperplane can cover all the vertices. Now, what if we want to cover only a subset of the Boolean cube? For example, suppose we want to cover all the vertices except one, viz. the origin. One can observe that n hyperplanes are sufficient. But can we do better? The celebrated covering result by Alon and Füredi shows that at least n hyperplanes are required to cover all the vertices of the n-dimensional cube leaving out exactly one vertex. We shall discuss different versions of this covering problem, and give a generalization of Alon and Füredi’s covering result for any symmetric subset of the Boolean cube. Also, we shall show a strict separation between the size of a polynomial cover and a hyperplane cover. This work was jointly done with Arijit Ghosh, Soumi Nandi, and S. Venkitesh. 
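The two covering facts recalled above (two hyperplanes cover the whole cube; the n hyperplanes $x_i = 1$ cover everything but the origin) are easy to verify by brute force for small n; a quick sketch, with n = 4 chosen arbitrarily:

```python
from itertools import product

n = 4
cube = list(product((0, 1), repeat=n))

# Two hyperplanes, x_1 = 0 and x_1 = 1, cover the whole cube:
# every vertex has first coordinate 0 or 1.
assert all(v[0] in (0, 1) for v in cube)

# The n hyperplanes x_i = 1 (i = 1, ..., n) cover every vertex
# except the origin, matching the upper bound in the abstract.
covered = {v for v in cube if any(c == 1 for c in v)}
assert covered == set(cube) - {(0,) * n}
```

The Alon-Füredi lower bound says this construction is optimal: no n-1 hyperplanes suffice.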
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday June 18, 2024, 11AM, Salle 3052\\ **Sebastian Zur** (QuSoft, University of Amsterdam) //Multidimensional Electrical Networks and their Application to Exponential Speedups for Graph Problems// \\ Recently, Apers and Piddock [TQC ’23] strengthened the natural connection between quantum walks and electrical networks by considering Kirchhoff’s Law and Ohm’s Law. In this work, we develop the multidimensional electrical network by defining Kirchhoff’s Alternative Law and Ohm’s Alternative Law based on the novel multidimensional quantum walk framework by Jeffery and Zur [STOC ’23]. This multidimensional electrical network allows us to sample from the electrical flow obtained via a multidimensional quantum walk algorithm and achieve exponential quantum-classical separations for certain graph problems. We first use this framework to find a marked vertex in one-dimensional random hierarchical graphs as defined by Balasubramanian, Li, and Harrow [arXiv ’23]. In that work, they generalised the well-known exponential quantum-classical separation of the welded tree problem by Childs, Cleve, Deotto, Farhi, Gutmann, and Spielman [STOC ’03] to random hierarchical graphs. Our result partially recovers their results with an arguably simpler analysis. Furthermore, by constructing a 3-regular graph based on welded trees, this framework also allows us to show an exponential speedup for the pathfinding problem. This solves one of the open problems by Li [arXiv ’23], where they construct a non-regular graph and use the degree information to achieve a similar speedup. In analogy to the connection between the (edge-vertex) incidence matrix of a graph and Kirchhoff’s Law and Ohm’s Law in an electrical network, we also rebuild the connection between the alternative incidence matrix and Kirchhoff’s Alternative Law and Ohm’s Alternative Law. 
By establishing this connection, we expect that the multidimensional electrical network could have more applications beyond quantum walks. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Monday June 17, 2024, 11AM, Salle 3052\\ **Gopal Pandurangan** (University of Houston) //Energy-Efficient Distributed Algorithms// \\ Energy is a premium resource in battery-powered wireless and sensor networks, and the bulk of it is used by nodes when they are awake, i.e., when they are sending, receiving, and even just listening for messages. On the other hand, when a node is sleeping, it does not perform any communication and thus spends very little energy. Several recent works have addressed the problem of designing energy-efficient distributed algorithms for various fundamental problems. These algorithms operate by minimizing the number of rounds in which any node is awake, also called the awake complexity. In this talk we present results on energy-efficient distributed algorithms for various distributed computing problems. These results show that fundamental problems such as Maximal Independent Set (MIS), Leader Election, Spanning Tree, and Minimum Spanning Tree (MST) and several other problems can be solved with small awake complexity, which can be significantly better than the traditional round complexity (which counts both sleeping and awake rounds). In particular, a main result that we will present is that the fundamental MIS problem can be solved in (expected) $O(1)$-rounds node-averaged awake complexity, which measures the average number of rounds a node is awake during the algorithm. Specifically, we present a randomized distributed algorithm for MIS that has expected $O(1)$-rounds node-averaged awake complexity and, with high probability, has $O(\log{n})$-rounds worst-case awake complexity and $O(\log^{3.41}n)$-rounds worst-case complexity. 
We will also discuss a subsequent result showing that MIS can be computed in $O(\log \log n)$ (worst-case) awake complexity, which is exponentially better than the best known round complexity of $O(\log n)$ and exponentially bypasses the fundamental $\Omega(\sqrt{\log n/\log \log n})$ round complexity lower bound. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday June 11, 2024, 11AM, Salle 3052\\ **Divyarthi Mohan** (Tel-Aviv University) //Optimal Stopping with Interdependent Values// \\ Consider the problem of selling a single item to one of n buyers arriving sequentially, where, upon the arrival of each buyer, we need to decide whether to accept their value and stop the process (before seeing the values of the future bidders), or irrevocably reject this bidder and continue on to the next. The objective is to maximize the accepted value, which poses a fundamental question: how effectively can an online algorithm perform compared to the offline optimal solution? Further, adding to the challenge, in many scenarios a bidder's value for the item may depend on private information of other bidders. For example, in an art auction, a buyer's value for the item may be influenced by how other buyers so far have liked it, and in fact, may even depend on how the future buyers will value the item as that would affect its resale value. We study online selection problems in both the prophet and secretary settings, when arriving agents have interdependent values. In the interdependent values model, introduced in the seminal work of Milgrom and Weber [1982], each agent has a private signal and the value of an agent is a function of the signals held by all agents. Results in online selection crucially rely on some degree of independence of values, which is conceptually at odds with the interdependent values model. 
For prophet and secretary models under the standard independent values assumption, prior works provide constant-factor approximations to the welfare. On the other hand, when agents have interdependent values, prior works in Economics and Computer Science provide truthful mechanisms that obtain optimal and approximately optimal welfare under certain assumptions on the valuation functions. We bring together these two important lines of work and provide the first constant-factor approximations for prophet and secretary problems with interdependent values. We consider both the algorithmic setting, where agents are non-strategic (but have interdependent values), and the mechanism design setting with strategic agents. All our results are constructive and use simple stopping rules. Joint work with Simon Mauras and Rebecca Reiffenhäuser. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Wednesday June 5, 2024, 11AM, Salle 4052 (PCQC)\\ **Marin Costes** (Université Paris-Saclay) //Space-time deterministic graph rewriting// \\ We study non-terminating graph rewriting models, whose local rules are applied non-deterministically, and yet enjoy a strong form of determinism, namely space-time determinism. Of course, in the case of terminating computation it is well-known that the mess introduced by asynchronous rule applications may not matter to the end result, as confluence conspires to produce a unique normal form. In the context of non-terminating computation however, confluence is a very weak property, and (almost) synchronous rule applications are always preferred, e.g., when it comes to simulating dynamical systems. Here we provide sufficient conditions so that asynchronous local rule applications conspire to produce well-determined events in the space-time unfolding of the graph, regardless of their application orders. Our first example is an asynchronous simulation of a dynamical system. 
Our second example features time dilation, in the spirit of general relativity. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday June 4, 2024, 11AM, Salle 3052\\ **Kuo-Chin Chen** (Foxconn Research) //Quantum Walks on Simplicial Complexes and Harmonic Homology: Application to Topological Data Analysis with Superpolynomial Speedups// \\ Incorporating higher-order interactions in information processing enables us to build more accurate models, gain deeper insights into complex systems, and address real-world challenges more effectively. However, existing methods, such as random walks on oriented simplices and homology, which capture these interactions, are not known to be efficient. This work investigates whether quantum walks on simplicial complexes exhibit quantum advantages. We introduce a novel quantum walk that encodes the combinatorial Laplacian, a key mathematical object whose spectral properties reflect the topology of the underlying simplicial complex. Furthermore, we construct a unitary encoding that projects onto the kernel of the Laplacian, representing the space of harmonic cycles in the complex's homology. Combined with the efficient construction of quantum walk unitaries for clique complexes that we present, this paves the way for utilizing quantum walks to explore higher-order interactions within topological structures. Our results achieve superpolynomial quantum speedup with quantum walks without relying on quantum oracles for large datasets. Crucially, the walk operates on a state space encompassing both positively and negatively oriented simplices, effectively doubling its size compared to unoriented approaches. Through coherent interference of these paired simplices, we are able to successfully encode the combinatorial Laplacian, which would otherwise be impossible. This observation constitutes our major technical contribution. We also extend the framework by constructing variant quantum walks. 
These variants enable us to: (1) estimate the normalized persistent Betti numbers, capturing topological information throughout a deformation process, and (2) verify a specific QMA1-hard problem, showcasing potential applications in computational complexity theory. Online talk [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Wednesday May 29, 2024, 2PM, Salle 3052\\ **Leo Zhou** (Caltech) //Local Minima in Quantum Systems// \\ Finding ground states of quantum many-body systems is known to be hard for both classical and quantum computers. As a result, when Nature cools a quantum system in a low-temperature thermal bath, the ground state cannot always be found efficiently. Instead, Nature finds a local minimum of the energy. In this work, we study the problem of finding local minima in quantum systems under thermal perturbations. While local minima are much easier to find than ground states, we show that finding a local minimum is computationally hard for classical computers, even when the task is to output a single-qubit observable at any local minimum. In contrast, we prove that a quantum computer can always find a local minimum efficiently using a thermal gradient descent algorithm that mimics the cooling process in Nature. To establish the classical hardness of finding local minima, we consider a family of two-dimensional Hamiltonians such that any problem solvable by polynomial-time quantum algorithms can be reduced to finding local minima of these Hamiltonians. Therefore, cooling systems to local minima is universal for quantum computation, and, assuming quantum computation is more powerful than classical computation, finding local minima is classically hard and quantumly easy. 
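The contrast drawn above between ground states and local minima has a simple classical caricature (an invented toy example, not the quantum thermal gradient descent algorithm of the talk): plain gradient descent on a tilted double-well energy settles in whichever local minimum is nearby, rather than the global one.

```python
def grad_descent(grad, x0, lr=0.01, steps=5000):
    """Follow the negative gradient until it settles at a nearby local minimum."""
    x = x0
    for _ in range(steps):
        x -= lr * grad(x)
    return x

# Tilted double well: global minimum near x = -1, shallower local
# minimum near x = +1 (the 0.3*x term breaks the symmetry).
energy = lambda x: (x * x - 1) ** 2 + 0.3 * x
grad = lambda x: 4 * x * (x * x - 1) + 0.3

# Starting in the right-hand well, descent gets stuck at the local
# minimum, whose energy is strictly above that of the global minimum.
x_local = grad_descent(grad, x0=1.2)
```

The talk's point is the quantum analogue of this picture: reaching *some* local minimum is easy (for a quantum computer), while reaching the ground state is not.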
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday May 28, 2024, 11AM, Salle 3052\\ **Nikolas Melissaris** (Aarhus University) //Cheater Identification on a Budget: MPC with Identifiable Abort from Pairwise MACs// \\ Because of the scientific diversity of the audience, I will break the talk into two parts. The first part will be an introduction to secure multiparty computation (MPC). This will help with notions, notation, and definitions that will show up later. It will cover what we can hope to achieve depending on various parameters, such as how many parties are corrupted and how much power they hold. I will then give a brief overview of works on a more specific security notion that we will deal with: Identifiable Abort. The second part of the talk will focus on our work, which is described below. Cheater identification in MPC allows the honest parties to agree upon the identity of a cheating party, in case the protocol aborts. In the context of a dishonest majority, this becomes especially critical, as it serves to thwart denial-of-service attacks and mitigate known impossibility results on ensuring fairness and guaranteed output delivery. In this work, we present a new, lightweight approach to achieving identifiable abort in dishonest majority MPC. We avoid all of the heavy machinery used in previous works, instead relying on a careful combination of lightweight detection mechanisms and techniques from state-of-the-art protocols secure with (non-identifiable) abort. At the core of our construction is a homomorphic, multi-receiver commitment scheme secure with identifiable abort. This commitment scheme can be constructed from cheap vector oblivious linear evaluation protocols based on learning parity with noise. To support cheater identification, we design a general compilation technique, similar to a compiler of Ishai et al. 
(Crypto 2014), but avoid its requirement for adaptive security of the underlying protocol. Instead, we rely on a different (and seemingly easier to achieve) property we call online extractability, which may be of independent interest. Our MPC protocol can be viewed as a version of the BDOZ MPC scheme (Bendlin et al., Eurocrypt 2011) based on pairwise information-theoretic MACs, enhanced to support cheater identification and a highly efficient preprocessing phase, essentially as efficient as the non-identifiable protocol of Le Mans (Rachuri & Scholl, Crypto 2022). [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday May 28, 2024, 3PM, Salle 4052 (PCQC)\\ **Galina Pass** (QuSoft, University of Amsterdam) //Multidimensional Quantum Walks, Recursion, and Quantum Divide & Conquer// \\ We introduce an object called a subspace graph that formalizes the technique of multidimensional quantum walks. Composing subspace graphs allows one to seamlessly combine quantum and classical reasoning, keeping a classical structure in mind, while abstracting quantum parts into subgraphs with simple boundaries as needed. As an example, we show how to combine a switching network with arbitrary quantum subroutines, to compute a composed function. As another application, we give a time-efficient implementation of quantum Divide & Conquer when the sub-problems are combined via a Boolean formula. We use this to quadratically speed up Savitch’s algorithm for directed st-connectivity. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday May 21, 2024, 3PM, Salle 4052 (PCQC)\\ **Xiaodi Wu** (University of Maryland) //Hamiltonian-oriented Quantum Algorithm Design and Programming// \\ This is an exciting time for quantum computing, where early-stage quantum computers have become available at your fingertips through the cloud. 
The conventional design of quantum algorithms is centered around the abstraction of quantum circuits and relies on a digital mindset for application design and implementation. While serving as an elegant mathematical interface, circuit-based digital abstraction usually fails to capture the native programmability of quantum devices, and incurs large overheads, which significantly restricts its near-term feasibility where the quantum computing resource is the major limitation. We propose to use quantum Hamiltonian evolution as the central object in end-to-end quantum application design, i.e., the so-called Hamiltonian-oriented paradigm, based on the observation that Hamiltonian evolution is a native abstraction for both low-level hardware control and high-level quantum applications. We illustrate that the Hamiltonian-oriented design not only allows more efficient implementation of existing quantum algorithms but also inspires novel quantum algorithms, especially in optimization and scientific computing, which are hard to perceive in the circuit model. We also develop a programming infrastructure called SIMUQ (SIMUlation language for Quantum) for easy implementation of Hamiltonian-based quantum applications for domain experts on heterogeneous quantum devices. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday May 21, 2024, 2PM, Salle 3052\\ **Lawrence Roy** (Aarhus University) //VOLE-in-the-Head and Post-Quantum Signatures// \\ VOLE-in-the-head (closely related to the MPC-in-the-head paradigm of Ishai et al.) is a recent technique for computationally efficient but non-succinct NIZK in the random oracle model. It was applied to post-quantum signatures in the signature scheme FAEST, a submission to the NIST call for additional digital signature schemes, which works by proving knowledge of an AES key. I will present the VOLE-in-the-head technique, its application to FAEST, and the recent "One Tree" optimizations.
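The VOLE correlation that gives these constructions their name can be pictured in a few lines (a toy sketch over an arbitrarily chosen prime field; real schemes such as FAEST work over binary fields and wrap the correlation in commitments and consistency checks, all omitted here):

```python
# Toy VOLE (vector oblivious linear evaluation) correlation: the sender holds
# vectors (u, v), the receiver holds a scalar delta and w = u*delta + v,
# componentwise modulo a prime. Illustration only, not a secure protocol.
import random

P = 2**61 - 1  # toy prime modulus, chosen arbitrarily

def sample_vole(n):
    """Sample a length-n VOLE correlation: sender gets (u, v), receiver gets (delta, w)."""
    u = [random.randrange(P) for _ in range(n)]
    v = [random.randrange(P) for _ in range(n)]
    delta = random.randrange(P)
    w = [(ui * delta + vi) % P for ui, vi in zip(u, v)]
    return (u, v), (delta, w)

(u, v), (delta, w) = sample_vole(4)
# The defining relation of the correlation:
assert all(wi == (ui * delta + vi) % P for ui, vi, wi in zip(u, v, w))
```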
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday April 30, 2024, 11AM, Salle 3052\\ **André Chailloux** (INRIA Paris) //The Quantum Decoding Problem// \\ One of the founding results of lattice-based cryptography is a quantum reduction from the Short Integer Solution (SIS) problem to the Learning with Errors (LWE) problem introduced by Regev. It has recently been pointed out by Chen, Liu and Zhandry [CLZ22] that this reduction can be made more powerful by replacing the LWE problem with a quantum equivalent, where the errors are given in quantum superposition. In parallel, Regev’s reduction has recently been adapted in the context of code-based cryptography by Debris, Remaud and Tillich [DRT23], who showed a reduction between the Short Codeword Problem and the Decoding Problem (the DRT reduction). This motivates the study of the Quantum Decoding Problem (QDP), which is the Decoding Problem but with errors given in quantum superposition, and of how it behaves in the DRT reduction. The purpose of this paper is to introduce QDP and to lay a firm foundation for it. We first show QDP is likely to be easier than classical decoding, by proving that it can be solved in quantum polynomial time in a large regime of noise whereas no non-exponential quantum algorithm is known for the classical decoding problem. Then, we show that QDP can even be solved (albeit not necessarily efficiently) beyond the information-theoretic Shannon limit for classical decoding. We precisely determine the largest noise level at which QDP can be solved, giving the information-theoretic limit for this new problem. Finally, we study how QDP can be used in the DRT reduction. First, we show that our algorithms can be properly used in the DRT reduction, so that our quantum algorithms for QDP beyond the Shannon capacity can be used to find minimal-weight codewords in a random code.
On the negative side, we show that the DRT reduction cannot be, in all generality, a reduction between finding small codewords and QDP, by exhibiting quantum algorithms for QDP where this reduction entirely fails. Our proof techniques include the use of specific quantum measurements, such as q-ary unambiguous state discrimination and pretty good measurements, as well as strong concentration bounds on the weight distribution of random shifted dual codes, which we relate using quantum Fourier analysis. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday April 23, 2024, 11AM, Salle 3052\\ **Team Event** //AlgoComp lightning talks ⚡️// \\ [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday April 16, 2024, 10AM, Salle 3052\\ **Zvi Lotker** (Bar-Ilan University) //Narratives: Shaping the Evolution of Social Networks and the Concept of Time (Part 2 of a mini-course of two sessions, 2 hours each)// \\ The mini-course explores the way time flows in dynamic social networks. We will use subjective clocks to open a window to the human soul. The course will cover the second part of my book Analyzing Narratives in Social Networks (https://link.springer.com/book/10.1007/978-3-030-68299-6). In the first lecture, we will cover chapters 11, 12, and 13. In the second lecture, we will cover chapters 14, 15, and 16. Chapter 11 studies the perception of time in narrative. We model the narrative of a reader's experience of time as an evolving social network. This chapter takes the function analysis approach, which uses functions as the principal mathematical object. Therefore, the evolving social network is a function of time. Chapter 12 introduces the notion of subjective and objective clocks. We model subjective time in narrative, and introduce the subjective clock to allow machines to measure subjective time. We define a clock as a monotonically increasing function. This definition deviates from the traditional definition of clocks as oscillators.
The advantage of the monotonic definition is that it is inclusive of both psychological/subjective clocks and objective clocks. Chapter 13 explains how to compare clocks. Following the stochastic process definition of martingales, the basic idea is to remove all Euclidean symmetries (such as translation and rotation) from the two clocks. This is equivalent to removing the linear part of those clocks. We introduce the $M$ diagram, which allows for representation of the two clocks as a function of each other after cancelling the Euclidean symmetry. We compare clocks by finding the point at which clocks begin to deviate from one another. Chapter 14 uses the relationships between clocks to identify critical events in the evolution of a social network. We use the drift between the weighted clock and the event clock to identify critical events in weighted social networks. Chapter 14 defines the law of two clocks, which we can implement in narrative processing. Chapter 15 provides techniques to transform evolving social networks into a real function. This helps us simplify the analysis of social networks. We use these techniques in narrative processing. Chapter 16 defines higher orders of social networks and gives some explanation of evolving social network surfaces. We relate social network surfaces to sub-stories within a larger overarching narrative. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Monday April 15, 2024, 10AM, Salle 3052\\ **Zvi Lotker** (Bar-Ilan University) //Narratives: Shaping the Evolution of Social Networks and the Concept of Time (Part 1 of a mini-course of two sessions, 2 hours each)// \\ The mini-course explores the way time flows in dynamic social networks. We will use subjective clocks to open a window to the human soul. The course will cover the second part of my book Analyzing Narratives in Social Networks (https://link.springer.com/book/10.1007/978-3-030-68299-6). In the first lecture, we will cover chapters 11, 12, and 13.
In the second lecture, we will cover chapters 14, 15, and 16. Chapter 11 studies the perception of time in narrative. We model the narrative of a reader's experience of time as an evolving social network. This chapter takes the function analysis approach, which uses functions as the principal mathematical object. Therefore, the evolving social network is a function of time. Chapter 12 introduces the notion of subjective and objective clocks. We model subjective time in narrative, and introduce the subjective clock to allow machines to measure subjective time. We define a clock as a monotonically increasing function. This definition deviates from the traditional definition of clocks as oscillators. The advantage of the monotonic definition is that it is inclusive of both psychological/subjective clocks and objective clocks. Chapter 13 explains how to compare clocks. Following the stochastic process definition of martingales, the basic idea is to remove all Euclidean symmetries (such as translation and rotation) from the two clocks. This is equivalent to removing the linear part of those clocks. We introduce the $M$ diagram, which allows for representation of the two clocks as a function of each other after cancelling the Euclidean symmetry. We compare clocks by finding the point at which clocks begin to deviate from one another. Chapter 14 uses the relationships between clocks to identify critical events in the evolution of a social network. We use the drift between the weighted clock and the event clock to identify critical events in weighted social networks. Chapter 14 defines the law of two clocks, which we can implement in narrative processing. Chapter 15 provides techniques to transform evolving social networks into a real function. This helps us simplify the analysis of social networks. We use these techniques in narrative processing. Chapter 16 defines higher orders of social networks and gives some explanation of evolving social network surfaces.
We relate social network surfaces to sub-stories within a larger overarching narrative. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday April 9, 2024, 11AM, Salle 3052\\ **Christoph Egger** (IRIF) //On Fine-Grained Key-Agreement// \\ We consider fine-grained key agreement. In a key agreement protocol, two parties wish to establish a shared secret that remains hidden from any passive observer. Fine-grained key agreement requires that, for example, an adversary running in quadratic time compared to the honest parties fails to learn about the key. One can require the adversary to *find* a key or, alternatively, just require it to *distinguish* between the produced key and a random key. The two notions are equivalent given *classical* access to a random function oracle; however, they are different when parties have superposition access to said oracle -- the best bound for the classical transformation uses semi-classical one-way-to-hiding due to Ambainis et al. (Crypto'19), which incurs a security degradation. Consequently, the key-agreement protocol by Brassard et al. (JCryptology'19) does not *directly* provide distinguishing security. In the talk, I will present a key-agreement protocol relative to a random function oracle with superposition access, which directly provides distinguishing security, and discuss semi-classical one-way-to-hiding, which is a core building block of this work. In a different direction, one can also base key agreement on space hardness: An adversary that has less *memory* than some (polynomial) bound fails to break the security. I will mention some preliminary results in this setting.
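For intuition about the quadratic gap in fine-grained key agreement, the classical template is Merkle puzzles in the random-oracle model: honest parties make about √N oracle queries, while an eavesdropper who sees only the exchanged tags needs about N (a toy sketch with SHA-256 standing in for the oracle and illustrative parameters; the talk's protocols concern superposition oracle access, which this does not model):

```python
# Toy Merkle-puzzle key agreement. Each party hashes ~sqrt(N) random domain
# points; a birthday collision gives them a common preimage, which serves as
# the shared key. Eve sees only the tags and must search ~N points.
import hashlib
import random

N = 10**6                 # oracle domain size (toy parameter)
q = 4 * int(N ** 0.5)     # queries per party; factor 4 makes a collision very likely

def oracle(x: int) -> str:
    return hashlib.sha256(str(x).encode()).hexdigest()

def sample_queries():
    xs = random.sample(range(N), q)
    return {oracle(x): x for x in xs}  # tag -> preimage, kept locally

alice = sample_queries()
bob = sample_queries()
common = sorted(set(alice) & set(bob))  # only the tags are exchanged publicly
shared_key = alice[common[0]] if common else None
if common:
    assert alice[common[0]] == bob[common[0]]  # same preimage on both sides
```

With these parameters the expected number of collisions is q²/N = 16, so the protocol fails to produce a key only with negligible probability.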
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Friday April 5, 2024, 11AM, Salle 3052\\ **Yanlin Chen** (CWI Amsterdam) //A Quantum Speed-Up for Approximating the Top Eigenvectors of a Matrix// \\ Finding a good approximation of the top eigenvector of a given d × d matrix A is a basic and important computational problem, with many applications. We give two different quantum algorithms that, given query access to the entries of A and assuming a constant eigenvalue gap, output a classical description of a good approximation of the top eigenvector: one algorithm with time complexity d^{1.5+o(1)} and one with time complexity \tilde{O}(d^{1.75}) that has a slightly better dependence on the precision of the approximation. Both provide a polynomial speed-up over the best-possible classical algorithm, which needs Ω(d^2) queries to entries of A (and hence Ω(d^2) time). We extend this to a quantum algorithm that outputs a classical description of the subspace spanned by the top-q eigenvectors in time qd^{1.5+o(1)}. We also prove a nearly-optimal lower bound of \tilde{Ω}(d^{1.5}) on the quantum query complexity of approximating the top eigenvector. Our quantum algorithms run a version of the classical power method that is robust to certain benign kinds of errors, where we implement each matrix-vector multiplication with small and well-behaved error on a quantum computer, in different ways for the two algorithms. Our first algorithm uses block-encoding techniques to compute the matrix-vector product as a quantum state, from which we obtain a classical description by a new time-efficient unbiased pure-state tomography algorithm that has essentially optimal sample complexity O(d log(d)/ε^2) and that comes with improved statistical properties compared to earlier pure-state tomography algorithms. Our second algorithm estimates the matrix-vector product one entry at a time, using a new “Gaussian phase estimation” procedure.
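The noise-robust power method just described can be sketched classically (a plain-NumPy toy on a synthetic matrix with a planted constant eigenvalue gap; the quantum speed-up comes from how the perturbed matrix-vector product is implemented, which this omits):

```python
# Power method whose matrix-vector product is deliberately perturbed each
# step, illustrating tolerance to small, benign matvec errors.
import numpy as np

rng = np.random.default_rng(0)
d = 100
Q, _ = np.linalg.qr(rng.standard_normal((d, d)))  # random orthogonal basis
lams = np.linspace(0.0, 0.5, d)
lams[-1] = 1.0                                    # top eigenvalue, gap 0.5
A = Q @ np.diag(lams) @ Q.T
top = Q[:, -1]                                    # the planted top eigenvector

def noisy_power_method(A, iters=60, noise=1e-6):
    v = rng.standard_normal(A.shape[0])
    v /= np.linalg.norm(v)
    for _ in range(iters):
        w = A @ v + noise * rng.standard_normal(A.shape[0])  # perturbed matvec
        v = w / np.linalg.norm(w)
    return v

v = noisy_power_method(A)
print(abs(v @ top))  # close to 1: aligned with the top eigenvector up to sign
```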
We also develop a time-efficient process-tomography algorithm for reflections around bounded-rank subspaces, providing the basis for our top-eigensubspace estimation application. This is joint work with Ronald de Wolf and András Gilyén. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday April 2, 2024, 11AM, Salle 3071\\ **Michael Klooß** (Aalto University) //Adaptive Special Soundness: Improved Knowledge Extraction by Adaptive Useful Challenge Sampling// \\ Proving knowledge soundness of an interactive proof from scratch is often a challenging task. This has motivated the development of various special soundness frameworks which, in a nutshell, separate knowledge extractors into two parts: (1) an extractor to produce a set of accepting transcripts conforming to some structure; (2) a witness recovery algorithm to recover a witness from a set of transcripts with said structure. These frameworks take care of (1), so it suffices for a protocol designer to specify (2), which is often simple(r). However, special soundness fails to adequately capture guarantees provided by simple "probabilistic tests", e.g., short random linear combinations as used in many lattice-based proof systems. In this talk, I will introduce (adaptive) special soundness, which captures many such scenarios, and discuss rewinding-based knowledge extraction. Moreover, I will point out current limitations and open problems with regard to (adaptive) special soundness. This talk is based on joint work with Thomas Attema, Russell W. F. Lai, and Pavlo Yatsyna. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday March 26, 2024, 11AM, Salle 3052\\ **Willy Quach** (Weizmann Institute of Science) //How (not) to prove cryptographic security against quantum attacks// \\ Perhaps one of the most spectacular feats of quantum computers is the ability to break (public-key) cryptosystems that are used to secure the Internet.
This directly motivates the study of cryptography that is secure against quantum attacks, namely post-quantum cryptography. In this talk, I will focus on the following basic question: how do we prove that a cryptographic construction is secure against quantum attacks? I will highlight a perhaps surprising distinction between the security of cryptographic constructions against quantum attacks, and the quantum hardness of the underlying problems that are used to reason about security. Doing so will highlight a natural connection to cryptographic proofs of quantumness. Based on joint work with Alex Lombardi, Ethan Mook, and Daniel Wichs. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday March 19, 2024, 11AM, Salle 3052\\ **Joon Lee** (EPFL) //The Sparse Parity Matrix// \\ The last decade witnessed several pivotal results on random inference problems where the aim is to learn a hidden ground truth from indirect randomised observations; much of this research has been guided by statistical physics intuition. Prominent examples include the stochastic block model, low-density parity-check codes, and compressed sensing. In all random inference problems studied so far, the posterior distribution of the ground truth given the observations appears to enjoy a key property called “strong replica symmetry”. This means that the overlap of the posterior distribution with the ground truth (basically the number of bits that can be learned correctly) concentrates on a deterministic value. Whether this is generally true has been an open question. In this paper we discover an example of an inference problem based on a very simple random matrix over F2 that fails to exhibit strong replica symmetry. Beyond its impact on random inference problems, the random matrix model, reminiscent of the binomial Erdős-Rényi random graph, gives rise to a natural random constraint satisfaction problem related to the intensely studied random k-XORSAT problem.
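To fix ideas, the observation model behind such parity-based inference problems can be set up in a few lines (a toy sketch with hypothetical parameters; it recovers one consistent solution by Gaussian elimination over F2 and says nothing about the posterior geometry studied in the talk):

```python
# Plant a ground truth x, observe y = A x (mod 2) through a sparse random
# parity matrix A, and recover *a* solution consistent with the data.
import numpy as np

rng = np.random.default_rng(1)
n, m, k = 60, 60, 3                        # variables, equations, ones per row
A = np.zeros((m, n), dtype=np.uint8)
for i in range(m):
    A[i, rng.choice(n, size=k, replace=False)] = 1
x_true = rng.integers(0, 2, n, dtype=np.uint8)
y = (A @ x_true) % 2

def solve_gf2(A, y):
    """Return one solution of A x = y over GF(2); assumes the system is consistent."""
    M = np.concatenate([A, y.reshape(-1, 1)], axis=1).astype(np.uint8)
    pivots, r = [], 0
    for c in range(A.shape[1]):
        hits = [i for i in range(r, M.shape[0]) if M[i, c]]
        if not hits:
            continue
        M[[r, hits[0]]] = M[[hits[0], r]]  # move a pivot row into place
        for i in range(M.shape[0]):
            if i != r and M[i, c]:
                M[i] ^= M[r]               # eliminate column c everywhere else
        pivots.append(c)
        r += 1
    x = np.zeros(A.shape[1], dtype=np.uint8)
    for i, c in enumerate(pivots):         # free variables stay 0
        x[c] = M[i, -1]
    return x

x_hat = solve_gf2(A, y)
assert np.array_equal((A @ x_hat) % 2, y)  # x_hat is consistent with the data
```

When A is rank-deficient, x_hat need not equal x_true; the overlap between the two is exactly the kind of quantity whose (non-)concentration the abstract discusses.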
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday March 12, 2024, 11AM, Salle 3052\\ **Christina Boura** (UVSQ) //New results in differential cryptanalysis// \\ Differential cryptanalysis, introduced in 1990 by Biham and Shamir, is a powerful cryptanalysis technique against block ciphers. It is based on the existence of an input difference that propagates through the different layers of the cipher to an output difference with high probability. Although the central idea of this technique is quite simple to understand, mounting an optimal differential attack against a given cipher is a highly non-trivial task. The difficulty is two-fold. First, finding good differentials and accurately estimating their probability is a difficult computational problem. Second, performing a key recovery based on the existence of a good differential is a hard optimization task. In this talk we will revisit the differential cryptanalysis technique and present recent advances on these two problems. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday March 5, 2024, 11AM, Salle 3052\\ **Julian Wechs** (Université Libre de Bruxelles) //Subsystem decompositions of quantum circuits and processes with indefinite causal order// \\ One can theoretically conceive of situations in which the causal order between quantum operations is no longer well-defined. Such causally indefinite processes are of interest from a foundational perspective, but also open up new possibilities in quantum information, as they go beyond the conventional paradigm of quantum circuits. A central question is which of these processes have an operational interpretation or physical realisation, and, more particularly, whether indefinite causal order could be realised within standard physics.
It has been shown that certain causally indefinite processes can take place as part of standard quantum mechanical evolutions, described by acyclic quantum circuits, if the latter are represented with respect to an alternative choice of quantum subsystems that can be delocalised in time. In this seminar, I will present a general framework to describe such transformations between different subsystem decompositions of quantum circuits. This puts the notion that indefinite causal order is realisable in standard quantum theory on a rigorous footing. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday February 27, 2024, 11AM, Salle 3052\\ **Sophie Huiberts** (LIMOS, Clermont-Ferrand) //Open problems about the simplex method// \\ The simplex method is a very efficient algorithm. In this talk we see a few of the state-of-the-art theories for explaining this observation. We will discuss what it takes for a mathematical model to explain an algorithm’s qualities, and whether existing theories meet this bar. Following this, we will question what the simplex method is and if the theoretician's simplex method is the same algorithm as the practitioner's simplex method. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday February 27, 2024, 2PM, Salle 3052\\ **Christian Konrad** (University of Bristol) //An Unconditional Lower Bound for Two-Pass Streaming Algorithms for Maximum Matching Approximation// \\ In this talk, I will discuss a recent joint work with my PhD student Kheeran Naidu on lower bounds for the Maximum Matching problem in the semi-streaming model. In this model, an algorithm performs multiple passes over the edges of an input graph and is tasked with computing a matching that is as large as possible using space O(n polylog n), where n is the number of vertices of the input graph. 
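For orientation in the semi-streaming model just described, the classic one-pass greedy algorithm already fits in the space bound and outputs a maximal, hence 1/2-approximate, matching (a baseline sketch, not one of the algorithms from the talk):

```python
# One-pass greedy matching: keep an edge whenever both endpoints are still
# unmatched. State is O(n) vertex identifiers, within the semi-streaming bound.
def greedy_stream_matching(edge_stream):
    matched = set()
    matching = []
    for u, v in edge_stream:
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching

# On a 4-cycle, greedy keeps two disjoint edges:
print(greedy_stream_matching([(0, 1), (1, 2), (2, 3), (3, 0)]))  # [(0, 1), (2, 3)]
```

The 1/2 guarantee follows because every edge of an optimum matching shares an endpoint with some greedy edge; beating such simple one-pass bounds is what motivates the multi-pass lower bounds discussed in the talk.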
We will focus on the two-pass setting, and we show that every randomized two-pass streaming algorithm that computes a better-than-8/9 approximation to Maximum Matching requires strictly more than semi-streaming space. Prior to our work, only a conditional two-pass lower bound by Assadi [SODA'22] was known, which relates the quality of the bound to the maximum density of Ruzsa-Szemeredi graphs (RS-graphs) with matchings of linear size. In the best case, i.e., if very dense RS-graphs with linear-sized matchings exist, their lower bound rules out approximation ratios above 0.98; with current knowledge, however, only approximation factors of 1 − o(1) are ruled out. Our lower bound makes use of the information cost trade-off of the Index problem in the two-party communication setting established by Jain et al. [JACM’09]. To the best of our knowledge, our work is the first that exploits this trade-off result in the context of lower bounds for multi-pass graph streaming algorithms. [[en:seminaires:algocomp:index|Algorithms and complexity]]\\ Tuesday January 9, 2024, 4PM, Salle 3052\\ **Gabriel Senno** (Quside) //Quantifying the intrinsic randomness of quantum measurements// \\ In this talk, I will tell you about our recent results on the maximum probability with which an adversary, Eve, can guess the outcomes of a known POVM $M$ on a system $S$ in a known state $\rho$. We study two alternative adversarial scenarios for this problem: a classical one and a quantum one. In the classical picture, every time the system is measured, Eve knows in which pure state from an ensemble compatible with $\rho$ the system is and which extremal POVM in some convex decomposition of $M$ is being performed. In the quantum picture, following [Frauchiger et al., arXiv:1311.4547], the POVM $M$ is seen as a projective measurement on the system $S$ plus an ancillary system $A$, which can be in a mixed state, and Eve holds a purification $E$ of $SA$.
Notice that these two scenarios are indistinguishable for someone doing tomography on $S$. We show that, unlike the case of projective measurements (as was already known) or pure states (as we prove), when nonextremal measurements and states are considered, Eve’s guessing probability in the quantum picture can be strictly greater than in the classical one.
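As a numeric reference point, with no side information at all the best Eve can do is guess the most likely outcome, p_guess = max_i Tr(M_i ρ) (a minimal sketch of this trivial baseline with a hypothetical qubit example; the talk is about how much classical or quantum side information lets Eve beat it):

```python
# Trivial guessing probability of a POVM's outcome on a known state rho,
# for an adversary with no side information.
import numpy as np

def guessing_prob(povm, rho):
    """max_i Tr(M_i rho): probability of the most likely outcome."""
    return max(float(np.trace(M @ rho).real) for M in povm)

# Measuring |0><0| in the X basis {|+><+|, |-><-|}: a fair coin, p_guess = 1/2.
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
minus = np.array([[0.5, -0.5], [-0.5, 0.5]])
rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])
print(guessing_prob([plus, minus], rho0))  # 0.5
```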