Joint PPS - PIHOC - DIAPASoN Workshop

February 16-18, 2021

The ANR PPS meeting took place online from February 16 to 18, 2021. It was held jointly with the 4th edition of the PIHOC workshop series, initiated by Ugo Dal Lago in 2018, and with a meeting of Dal Lago's DIAPASoN ERC project.

The meeting was organised by the Institut de Recherche en Informatique Fondamentale and by Dal Lago's DIAPASoN ERC project.

The meeting was on Zoom. The video recordings provided on this web page are available until Feb 15, 2022.


(last update: Feb 16, 2021)

(GMT+1 (Bologna-Paris time))

Tuesday 16 February

Video recording. Password: Q%L5MdY.zb

Session 1 (chair Ugo Dal Lago)

  • 14:50 - 15:00 Welcome and connection to Zoom
  • 15:00 - 16:00 Guillaume Baudart INRIA Parkas, ENS. Compiling Stan to generative probabilistic languages
  • 16:00 - 16:30 Jialu Bao University of Wisconsin. Data-Driven Invariant Learning for Probabilistic Programs


Session 2 (chair Thomas Ehrhard)

  • 17:00 - 17:30 Gordon Plotkin University of Edinburgh. Smart Choices and Decision-Making Languages
  • 17:30 - 17:45 Giulio Guerrieri University of Bath. Decomposing Probabilistic Lambda-Calculi. Abstract:
(This is joint work with Ugo Dal Lago and Willem Heijltjes)

A notion of probabilistic lambda-calculus usually comes with a prescribed reduction strategy, typically call-by-name or call-by-value, as the calculus is non-confluent and these strategies yield different results. This is a break with one of the main advantages of lambda-calculus: confluence, which means that results are independent of the choice of strategy. We present a probabilistic lambda-calculus where the probabilistic operator is decomposed into two syntactic constructs: a generator, which represents a probabilistic event; and a consumer, which acts on the term depending on a given event. The resulting calculus, the Probabilistic Event Lambda-Calculus, is confluent, and interprets the call-by-name and call-by-value strategies through different interpretations of the probabilistic operator into our generator and consumer constructs. We present two notions of reduction, one via fine-grained local rewrite steps, and one by generation and consumption of probabilistic events. Simple types for the calculus are essentially standard, and they guarantee strong normalization. We demonstrate how we can encode call-by-name and call-by-value probabilistic evaluation.

  • 17:45 - 18:00 Melissa Antonelli University of Bologna. On Counting Propositional Logic. Abstract:
We present CPL, a novel counting propositional logic obtained as an extension of propositional logic by means of counting quantifiers. We study its semantics and introduce a sound and complete proof system, starting from the basic fragment CPL_{0}. CPL turns out to be particularly interesting when considering its connections with several branches of computer science (e.g. the satisfiability problem for CPL formulas in PNF with k nested quantifiers characterizes the corresponding level of Wagner's counting hierarchy, and there is a Curry-Howard correspondence with probabilistic lambda-calculus). We also briefly describe our in-progress research focussed on the development of a much more powerful logic, PPA, which is obtained as a natural generalization of CPL-quantification and is capable of expressing probabilistic arithmetical statements.
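To give a flavour of counting quantification (a deliberately simplified, finite illustration, not CPL's actual semantics), a counting quantifier with threshold q can be read as "at least a fraction q of the assignments satisfy the formula", which ties satisfiability to counting assignments:

```python
from itertools import product

# Simplified illustration of counting semantics: the fraction of truth
# assignments (over the listed variables) that satisfy a formula.
def sat_fraction(formula, names):
    assigns = list(product([False, True], repeat=len(names)))
    sat = sum(1 for vals in assigns if formula(dict(zip(names, vals))))
    return sat / len(assigns)

# x or y is satisfied by 3 of the 4 assignments, so a counting quantifier
# with threshold 3/4 would hold, while one with threshold 7/8 would not.
frac = sat_fraction(lambda v: v["x"] or v["y"], ["x", "y"])
print(frac)  # 0.75
```

Counting fractions of satisfying assignments in this way is what links CPL-satisfiability to Wagner's counting hierarchy.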

Wednesday 17 February

Session 3 (chair Ugo Dal Lago)

Video recording. Password: #J33&ha82H

  • 10:30 - 11:30 Jean-Simon Lemay University of Oxford. An introduction to reverse differential categories
  • 11:30 - 11:45 Jesse Sigal University of Edinburgh. Automatic Differentiation via Effects and Handlers: An Implementation in Frank
  • 11:45 - 12:00 Fredrik Dahlqvist University College London. Testing of Monte Carlo Inference Algorithms
  • 12:00 - 13:00 Benjamin Kaminski University College London. Quantitative Separation Logic. Abstract:
We present quantitative separation logic (QSL). In contrast to classical separation logic, QSL employs quantities, which evaluate to real numbers, instead of predicates, which evaluate to Boolean values. The connectives of classical separation logic, separating conjunction and separating implication, are lifted from predicates to quantities. This extension is conservative: both connectives are backward compatible with their classical analogs and obey the same laws, e.g. modus ponens, adjointness, etc.
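As a sketch of what this lifting looks like (notation as commonly used in the QSL literature; s is a stack, h a heap, and ⊎ disjoint union of heaps), the quantitative separating conjunction maximizes over all ways of splitting the heap:

```latex
(f \star g)(s, h) \;=\; \max \bigl\{\, f(s, h_1) \cdot g(s, h_2) \;\bigm|\; h = h_1 \uplus h_2 \,\bigr\}
```

For {0,1}-valued f and g this coincides with the classical separating conjunction, which is the backward compatibility mentioned above.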

Furthermore, we present a weakest precondition calculus for quantitative reasoning about probabilistic pointer programs in QSL. This calculus is a conservative extension of both the separation logic of Ishtiaq, O'Hearn, and Reynolds for heap-manipulating programs and the weakest preexpectations of Kozen and of McIver and Morgan for probabilistic programs. Our calculus preserves O'Hearn's frame rule, which enables local reasoning, a key principle of separation logic.

Finally, we briefly touch upon open questions regarding lower bounds on weakest preexpectations and intensional completeness of our calculus.


Session 4 (chair Thomas Ehrhard)

Video recording. Password: #J33&ha82H

  • 14:30 - 15:30 Ohad Kammar University of Edinburgh. Higher-order building blocks for statistical modelling. Abstract:
Higher-order functions provide modular programmable abstractions, even if the resulting program is first-order. The PIHOC community have produced several breakthroughs in the foundations for higher-order probability in recent years, enabling us to formulate and manipulate these abstractions for statistical modelling. I will use quasi-Borel spaces to discuss such higher-order building blocks in existing and ongoing work in semantic foundations for statistical modelling, including operational, denotational, and axiomatic semantics, and the design and implementation of statistical inference and simulation systems.
  • 15:30 - 16:00 Martin Avanzini INRIA Focus. On Continuation-Passing Transformations and Expected Cost Analysis. Abstract:
We define a continuation-passing style (CPS) translation for a typed lambda-calculus with probabilistic choice, unbounded recursion, and a tick operator for modelling cost. The target language is a (non-probabilistic) lambda-calculus, enriched with a type of extended positive reals and a non-standard fixpoint operator whose semantics is defined by the Monotone Convergence Theorem. We then show that applying the CPS transform of an expression M to the constant-zero function yields its expected cost. Since real-valued expressions of the target language have infinitary behavior, we also provide a higher-order logic to reason about terms in the target language, and use this logic to prove the expected cost of classic examples, including the Coupon Collector and the Random Walk. Moreover, we relate our translation to Kaminski et al.'s ert-calculus, by showing that the latter can be recovered by applying our CPS translation to a probabilistic extension of the classic embedding of imperative programs into the lambda-calculus.
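The key identity, that the CPS transform applied to the constant-zero continuation computes expected cost, can be mimicked for a toy first-order language. A hedged sketch with hypothetical term constructors, not the paper's calculus:

```python
# Toy expected-cost semantics in CPS style. Terms:
#   ("ret",)           - finished computation
#   ("tick", c, m)     - emit cost c, then run m
#   ("flip", p, m, n)  - run m with probability p, else n
def ecost(term, k):
    """Expected cost of `term`; k is the expected cost of the continuation."""
    tag = term[0]
    if tag == "ret":
        return k
    if tag == "tick":
        _, c, m = term
        return c + ecost(m, k)
    if tag == "flip":
        _, p, m, n = term
        return p * ecost(m, k) + (1 - p) * ecost(n, k)
    raise ValueError(tag)

# Expected cost of: tick 1; flip 0.5 { tick 2 } { tick 4 },
# applied to the constant-zero continuation.
prog = ("tick", 1, ("flip", 0.5, ("tick", 2, ("ret",)), ("tick", 4, ("ret",))))
print(ecost(prog, 0.0))  # 1 + 0.5*2 + 0.5*4 = 4.0
```

The constant-zero continuation plays the role of the abstract's constant-zero function: it records that nothing remains to be paid after the program finishes.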
  • 16:00 - 16:30 Johannes Borgström Uppsala University. Data augmentation, reproducibility, and the alive particle filter: An experience report from our work on using probabilistic programming for statistical phylogenetics


Session 5 (chair Michele Pagani)

Video recording. Password: #J33&ha82H

  • 17:00 - 17:30 Francesco Gavazzo University of Bologna. On Monadic Rewriting Systems. Abstract:

Motivated by the study of effectful programming languages and computations, we introduce the novel theory of monadic rewriting systems. These are rewriting systems whose notion of reduction is effectful, where effects are modelled as monads. Such systems are to effectful computations and programming languages what traditional rewriting systems are to pure computations and programming languages. Contrary to what happens in the ordinary operational semantics of monadic programming languages, defining mathematically and operationally meaningful notions of monadic rewriting turns out to be simply not possible for several monads, including the distribution, powerset, reader, and global state monads. This raises the question of when monadic rewriting is possible. We answer that question by identifying a class of monads, known as weakly cartesian monads, that guarantee monadic rewriting to be well-behaved. As further evidence of that, we develop a number of proof techniques that allow us to reason about monadic rewriting systems. In case monads are given as equational theories, as is the case for algebraic effects, we give a sufficient condition on the shape of the theory for having a well-behaved notion of monadic rewriting. Such a condition dates back to an old theorem by Gautam and requires equations to be linear. Finally, we apply the abstract theory of monadic rewriting systems to the call-by-value $\lambda$-calculus with algebraic effects, thereby obtaining the extension of the well-known confluence and standardisation theorems to an effectful setting. Remarkably, the latter holds for all algebraic effects, whereas the former holds for the commutative ones only.

  • 17:30 - 17:45 Carol Mak University of Oxford. Nonparametric Hamiltonian Monte Carlo

Thursday 18 February

Video recording. Password: pNu!dLMc03

Session 6 (chair Michele Pagani)

  • 14:30 - 15:30 Xavier Rival INRIA Antique, ENS. Towards Verified Stochastic Variational Inference for Probabilistic Programs. Abstract:

(Joint work with Wonyeol Lee, Hangyeol Yu, and Hongseok Yang, KAIST)

Probabilistic programming is the idea of writing models from statistics and machine learning using program notations and reasoning about these models using generic inference engines. Recently its combination with deep learning has been explored intensely, which led to the development of so-called deep probabilistic programming languages, such as Pyro, Edward and ProbTorch. At the core of this development lie inference engines based on stochastic variational inference algorithms. When asked to find information about the posterior distribution of a model written in such a language, these algorithms convert this posterior-inference query into an optimisation problem and solve it approximately by a form of gradient ascent or descent.

In this talk, we analyse one of the most fundamental and versatile variational inference algorithms, called the score estimator or REINFORCE, using tools from denotational semantics and program analysis. We formally express what this algorithm does on models denoted by programs, and expose implicit assumptions made by the algorithm on the models. The violation of these assumptions may lead to an undefined optimisation objective or the loss of convergence guarantees of the optimisation process. We then describe rules for proving these assumptions, which can be automated by static program analyses. Some of our rules use nontrivial facts from continuous mathematics, and let us replace requirements about integrals in the assumptions, such as integrability of functions defined in terms of programs' denotations, by conditions involving differentiation or boundedness, which are much easier to prove automatically (and manually). Following our general methodology, we have developed a static program analysis for the Pyro programming language that aims at discharging the assumption about what we call model-guide support match. Our analysis is applied to eight representative model-guide pairs from the Pyro webpage, which include sophisticated neural network models such as AIR. It finds a bug in one of these cases, reveals a non-standard use of an inference engine in another, and shows that the assumptions are met in the remaining six cases.
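As background for the algorithm analysed above (an illustrative sketch, not the paper's formalization): the score estimator rewrites the gradient of an expectation as an expectation of f(x) times the score, grad log p(x; theta), which can then be approximated by sampling. For a Bernoulli(theta) model with f(x) = x, the true gradient is 1:

```python
import random

# Score estimator (REINFORCE) sketch for x ~ Bernoulli(theta):
# grad_theta E[f(x)] = E[f(x) * d/dtheta log p(x; theta)],
# approximated by a Monte Carlo average.
def score_estimate(theta, f, n, rng):
    total = 0.0
    for _ in range(n):
        x = 1 if rng.random() < theta else 0          # sample x ~ Bernoulli(theta)
        dlogp = x / theta - (1 - x) / (1 - theta)     # score of the Bernoulli
        total += f(x) * dlogp
    return total / n

# For f(x) = x we have E[f] = theta, so the true gradient is 1.
rng = random.Random(0)
est = score_estimate(0.3, lambda x: x, 100_000, rng)
print(est)  # close to 1.0
```

Note that the score is undefined at theta in {0, 1}; assumptions of this kind (differentiability, support conditions) are exactly what the talk's static analysis targets.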

  • 15:30 - 15:45 Jiajia Song MPI-SP. Certified Convergence and Stability Bounds for Machine Learning Algorithms
  • 15:45 - 16:15 Pedro Henrique Azevedo de Amorim Cornell University. Distribution Theoretic Semantics for Differentiable Programming. Abstract:
In this work we introduce $\lambda_\delta$, a core calculus for non-smooth differentiable programs, and define its semantics using tools from distribution theory, a well-established area of functional analysis. We also show that $\lambda_\delta$ enjoys better equational properties than other existing semantics.


Session 7 (chair Ugo Dal Lago)

Video recording. Password: pNu!dLMc03

  • 16:30 - 17:00 Alexander Lew MIT. Probabilistic programming with correct-by-construction densities and disintegrations. Abstract:
Just as algorithms for optimization rely on gradients, algorithms for inference rely on densities and conditioning (or more generally, Radon-Nikodym derivatives and disintegration). Simple cases (e.g., densities of joint distributions built from primitives) are easy to automate, but many modeling and inference tasks require disintegrating or computing Radon-Nikodym derivatives between more complex measures. This presents two key challenges: (1) computing the density of a probabilistic program may require solving integrals currently out of reach for the best symbolic solvers; and (2) probabilistic programs may not be absolutely continuous with respect to the Lebesgue or counting measures, so standard probability density or mass functions may not exist. We present a language that facilitates the development of correct-by-construction Radon-Nikodym derivatives and disintegrations even in these cases. We apply our novel language constructs to three case studies, to build correct-by-construction implementations of: reversible-jump MCMC for models over multisets; noisy-channel data cleaning with detailed error models; and particle MCMC algorithms.
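For the "simple cases" the abstract mentions, automation is indeed straightforward: the density of a joint distribution built from primitives is the product of the primitives' densities along the trace. A minimal sketch with a hypothetical two-variable model, not the talk's language:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of Normal(mu, sigma) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def trace_density(trace):
    # Joint density of the toy program: x ~ Normal(0, 1); y ~ Normal(x, 1).
    # It factors as the product of the primitive densities in the trace.
    x, y = trace["x"], trace["y"]
    return normal_pdf(x, 0.0, 1.0) * normal_pdf(y, x, 1.0)

print(trace_density({"x": 0.0, "y": 0.0}))  # 1 / (2*pi), about 0.159
```

The hard cases motivating the talk are exactly those where no such factorization against a fixed base measure exists, so correct-by-construction language support is needed.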
  • 17:00 - 17:30 Guillaume Geoffroy University of Bologna. Extensional Denotational Semantics of Higher-Order Probabilistic Programs, Beyond the Discrete Case
  • 17:30 - 18:00 Paolo Pistone University of Bologna. On Generalized Metric Spaces for the Simply Typed Lambda Calculus

Organizers: Ugo Dal Lago and Michele Pagani