Day on Probabilities in Theoretical Computer Science

As part of the "Probabilities in Theoretical Computer Science" year 2023-24 of the GDR IFM, a one-day conference is scheduled for September 2024 in the Paris area. The event will feature seven invited talks and will illustrate the importance of probabilities in computer science, from theory to applications. Our goal is also to foster interactions between researchers across different subfields of theoretical computer science, mathematics and statistical physics, all of whom use probabilities in their work.

Date: September 16th 2024, 9h-18h
Location: Institut Henri Poincaré (Amphithéâtre Charles Hermite, Borel building, ground floor), Paris

Program: 7 invited talks
Registration is free but mandatory. Deadline: September 4th 2024.

The event will consist of 7 invited talks covering a large spectrum of areas close to TCS that use probabilities. Each talk will present a broad topic: the problems considered, the techniques used, and the main research directions.

Adeline Roux-Langlois (GREYC Caen; Cryptography)
Using Rényi divergence in lattice-based cryptography

Lattice-based cryptography is a promising candidate for obtaining constructions that resist attacks by quantum computers. Even though sufficiently powerful quantum computers do not yet exist, it is necessary to be prepared and to anticipate their arrival. In this talk, I will introduce lattice-based cryptography and some uses of the Rényi divergence in security proofs. A security proof establishes the security of a cryptographic construction by showing that attacking the scheme is at least as hard as solving a difficult algorithmic problem.

Andrew Gordon (Cogna and University of Edinburgh; Probabilistic programming)
End-user programming and generative AI: opportunities for probabilistic programming

The project of probabilistic programming has long sought to broaden the range of people who benefit from probabilistic inference.
But what becomes of probabilistic programming in the age of generative AI? My perspective is influenced by our work at Cogna using AI to transform end users' productivity by synthesising precision software from requirements. I will survey current progress and suggest new opportunities for probabilistic programming.

Benoît Delahaye (LS2N Nantes; Verification of probabilistic systems, applications)
Statistical model checking for models containing differential equations: challenges and applications to environment modelling

In this talk, I will review a body of recent work aimed at modelling and analysing complex natural systems using (in part) statistical model checking. The particularity of the systems considered is that they are made up, at least in part, of continuous processes modelled by differential equations. First, I will present results that provide probabilistic guarantees on the statistical verification of these systems despite the errors introduced during the numerical integration of the differential equations, with applications to the parameterisation of these models and the analysis of their stability. Second, I will present an abstraction of these systems that allows them to be combined with discrete control processes. Finally, I will briefly present two concrete case studies of the parametrisation of such models: the growth of the jellyfish Pelagia noctiluca in the Mediterranean Sea and the regrowth of plots of Amazonian forest subjected to extreme events.

Francis Bach (ENS Paris; Machine Learning)
Probability and machine learning

The relationship between probability and machine learning goes both ways. In one direction, the classical statistical formulation through random data is probabilistic, but it goes further, with randomness used to speed up algorithms, e.g., through stochastic gradient descent.
More recently, machine learning has been used to propose new frameworks for classical probabilistic tasks such as sampling by denoising. In this talk, I will present an overview of recent advances.

Mark Jerrum (Queen Mary, University of London; Stochastic processes, combinatorics)
Modern mixing, spatial and temporal

The last five years have seen a revolution in the analysis of the mixing time (i.e., the time required to converge to near-equilibrium) of Markov chains. It is now possible to prove optimal upper bounds on the mixing time of a wide variety of Markov chains, including many arising in statistical physics and theoretical computer science. In some instances, no sub-exponential upper bound was previously known. The new ideas that have powered this revolution are strikingly novel, but working them out is often highly technical. I will present recent advances from the user's point of view: the mixing-time bounds can often be packaged in a simple form that hides the technical complexities underneath.

Mika Göös (EPFL Lausanne; Computational complexity)
Constant-Cost Randomised Communication

Some of the most extreme examples of the power of randomness in computing come from communication complexity, where shared randomness can allow two parties to solve non-trivial problems with communication cost independent of the input size. The textbook example is the Equality problem, where two parties hold n-bit strings x and y, respectively, and wish to decide whether x = y. While n bits of deterministic communication are necessary, there is a randomised protocol that communicates only 2 bits. Can we characterise all communication problems that admit such hyper-efficient protocols? We discuss recent progress and open problems. Based on joint work with Yuting Fang, Nathaniel Harms, and Pooya Hatami.
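The 2-bit Equality protocol mentioned in the abstract above can be made concrete. The sketch below is an illustration of the standard public-randomness construction, not code from the talk: in each of two rounds, the shared randomness selects a random parity of the input bits; Alice sends her parity bit and Bob compares it with his.

```python
import random

def equality_protocol(x: str, y: str, shared_seed: int) -> bool:
    """Public-randomness Equality protocol (illustrative sketch).
    Two rounds, one transmitted bit each: Alice sends the parity of x
    on a random mask, Bob accepts iff y has the same parity."""
    rng = random.Random(shared_seed)  # shared randomness, known to both parties
    n = len(x)
    for _ in range(2):  # two rounds -> 2 bits of communication
        r = [rng.randrange(2) for _ in range(n)]  # random mask of positions
        alice_bit = sum(int(x[i]) & r[i] for i in range(n)) % 2
        bob_bit = sum(int(y[i]) & r[i] for i in range(n)) % 2
        if alice_bit != bob_bit:
            return False  # the strings certainly differ
    return True  # equal, or a false positive with probability at most 1/4
```

Equal strings are always accepted; if x differs from y, each round detects the difference with probability 1/2, so two independent rounds drive the error probability down to 1/4.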
Tatiana Starikovskaya (ENS Paris; Algorithms)
How Probability Helped Achieve Ultra-Efficient Algorithms for Pattern Matching

In the fundamental problem of pattern matching, one is given two strings, usually referred to as a pattern and a text, and must decide whether the pattern appears as a substring of the text. In 1970, Knuth, Morris, and Pratt gave the first linear-time and linear-space algorithm for pattern matching. One can argue that this algorithm is optimal: one has to use linear time to read the input and linear space to store it. In this talk, we will discuss how probability helped surpass these barriers.

The detailed programme is as follows:

9h00-9h45   Mika Göös (EPFL Lausanne), Computational complexity: Constant-Cost Randomised Communication
9h50-10h35  Adeline Roux-Langlois (GREYC Caen), Cryptography: Using Rényi divergence in lattice-based cryptography
10h35-10h55 Coffee break
10h55-11h40 Francis Bach (ENS Paris), Machine Learning: Probability and machine learning
11h45-12h30 Tatiana Starikovskaya (ENS Paris), Algorithms: How Probability Helped Achieve Ultra-Efficient Algorithms for Pattern Matching
12h30-13h30 Lunch
13h30-14h15 Mark Jerrum (Queen Mary, University of London), Stochastic processes, combinatorics: Modern mixing, spatial and temporal
14h20-15h05 Andrew Gordon (Cogna and University of Edinburgh), Probabilistic programming: End-user programming and generative AI: opportunities for probabilistic programming
15h10-15h55 Benoît Delahaye (LS2N Nantes), Verification of probabilistic systems: Statistical model checking for models containing differential equations: challenges and applications to environment modelling
16h00-16h15 Conclusion
16h15-16h45 Snack break
16h45-18h00 Moment of exchange

The recordings of the first five presentations are available on the IRIF YouTube channel. For reasons beyond our control, the recordings of the last two presentations have not been published. We apologise for any inconvenience caused.
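The pattern-matching abstract above alludes to probabilistic techniques; one classical ingredient in that line of work is the Karp-Rabin fingerprint, which compares short rolling hashes instead of whole substrings. The sketch below is our own illustration under simplified assumptions (a fixed prime, a random base), not code from the talk.

```python
import random

def rabin_karp(pattern: str, text: str, seed: int = 0) -> bool:
    """Karp-Rabin randomised pattern matching (simplified sketch).
    Maintains a rolling fingerprint of the current text window modulo a
    prime and only compares characters when the fingerprints collide."""
    m, n = len(pattern), len(text)
    if m > n:
        return False
    p = 1_000_000_007  # a fixed large prime (full versions choose it at random)
    base = random.Random(seed).randrange(2, p)  # random base => low collision probability
    # fingerprints of the pattern and of the first window of the text
    fp_pat = fp_win = 0
    for i in range(m):
        fp_pat = (fp_pat * base + ord(pattern[i])) % p
        fp_win = (fp_win * base + ord(text[i])) % p
    high = pow(base, m - 1, p)  # weight of the window's leading character
    for i in range(n - m + 1):
        if fp_win == fp_pat and text[i:i + m] == pattern:  # verify to rule out collisions
            return True
        if i + m < n:  # slide the window one character to the right
            fp_win = ((fp_win - ord(text[i]) * high) * base + ord(text[i + m])) % p
    return False
```

Because a window's fingerprint fits in O(1) machine words regardless of m, fingerprints of this kind are what allow streaming pattern-matching algorithms to beat the linear-space barrier discussed in the abstract.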
Organisation: Juliette Calvi, Louis Esperet, Frédéric Magniez (do not hesitate to contact them for any inquiry about the event).