Inria project-team Picube (Inria)

Day, time and place

Mondays at 2pm, room 1007 or 3052

The calendar of sessions (iCal format).
To add the session calendar to your favorite calendar application, subscribe to the calendar using this link.


The Formath seminar (Formalized mathematics) is the seminar of the Picube team.

To subscribe to the mailing list, see the page https://listes.irif.fr/sympa/info/formath or write to the seminar organizers.

Year 2024

Monday, June 10, 2024, 2pm, room 3052 and BBB link
Sarah Reboullet (Université Paris Cité, IRIF & INRIA Picube) Fixing Internal Parametricity and Its Presheaf Model

Parametricity was introduced by Reynolds as a way to prove abstract uniformity properties for polymorphic functions. Initially, this was done externally, but recent works by Bernardy and Moulin have shown how to develop a dependent type theory that can prove these results internally. Such a theory could form the basis of a proof assistant with parametricity's uniformity properties. Unfortunately, the presheaf model of internal parametricity in the third paper of Bernardy and Moulin's series on internal parametricity contains flaws that need to be addressed. This observation will serve as a pretext for this talk to introduce the subjects of dependent type theory and (internal) parametricity.

Monday, May 27, 2024, 2pm, room 3052 and Zoom link
Laetitia Teodorescu (IRIF, Université Paris Cité & INRIA FLOWERS & Picube) Autotelic machine learning for programming and mathematics

One remarkable feature of humans is that they exhibit intrinsically-motivated behavior, partaking in activities for their own sake. This is a crucial facet of learning across all ages as well as a distinctive feature of art, science and innovation. In this talk, we will present some computational approaches seeking to emulate this process using state-of-the-art machine learning techniques as a backbone. We will begin by surveying the field of open-ended learning and notably the concept of autotelic agents – agents that learn by inventing and pursuing their own goals. We will then present two recent works along those lines. The first is an iterative autotelic algorithm for generating a diverse set of challenging Python programming puzzles, inspired by recent advances in language-model based evolutionary computation. The second is an ongoing research project for building autotelic mathematical agents coupled to interactive theorem provers. The general design of such an agent will be given along with some motivating examples and a discussion of the current state of the art. We will then present preliminary results with one important component of this agent: a theorem search engine in Lean 4, allowing one to retrieve lemmas and theorems with natural language queries.

Monday, April 29, 2024, 2pm, room 3052 and BBB link
Hugo Herbelin (Picube) Where is computation hiding in elementary toposes?

It is known that elementary toposes have the logical strength of higher-order logic (HOL), that is, the logical strength of system Fω. More concretely, Lambek characterizes the logic of elementary toposes as an intuitionistic variant of HOL, including propositional extensionality, functional extensionality, and the axiom of unique choice. For each of these ingredients, including its logical core Fω, we morally know how to associate computational content, but where is it hiding in the definition of an elementary topos? This is the kind of question we will ask, though without yet having all the answers.

In any case, we will start by viewing the subobject classifier as an impredicative hProp, by isolating the role of propositional extensionality (= univalence for hProp) in the correspondence between fibered and indexed representations of predicates in toposes, and by understanding how disjoint sums can be simulated by combining the decision power of the impredicative definition of disjunction with the power of representing types as singleton subsets.
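For instance, the impredicative definition of disjunction alluded to above can be written as follows (a standard encoding, sketched here in Lean syntax; this is illustrative, not code from the talk):

```lean
-- Disjunction defined impredicatively, by quantifying over all propositions:
def Or' (A B : Prop) : Prop :=
  ∀ C : Prop, (A → C) → (B → C) → C

-- The two injections are then definable:
def inl' {A B : Prop} (a : A) : Or' A B := fun _ f _ => f a
def inr' {A B : Prop} (b : B) : Or' A B := fun _ _ g => g b
```

The elimination principle is the definition itself: applying a proof of `Or' A B` to a goal `C` and the two branch functions plays the role of case analysis.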

Tuesday, March 26, 2024, 2pm, room 1020 and Renater
Lasse Blaauwbroek A machine learning platform for proof synthesis in Coq

Tactician is a proof synthesis platform that aims to make machine learning models practically accessible to Coq end-users. To this end, it makes a wide range of Coq's internal knowledge available to solving agents in a machine-learning friendly manner, including information on definitions, theorems and proofs. When the user invokes Tactician's 'synth' tactic, the solving agent is asked to exploit this knowledge to complete the current proof. A wide variety of solving agents are available in the platform, including k-nearest neighbor solvers, graph neural networks, language models and random forests. Several solvers have state-of-the-art proving capabilities, far outperforming existing general purpose automation techniques. Much of this performance is due to Tactician's unique capability to learn from new knowledge on-the-fly. It can amend its models with new definitions, lemmas and proofs and immediately exploit this information during proof synthesis.

In this talk, I will give a general tutorial on Tactician. We start with a usage demonstration from an end-user perspective. Then we'll take a look at Tactician's internal architecture. There are several data formats for machine learning, including a novel graph-based representation of the Calculus of Existential Inductive Constructions, where every node in the graph represents a unique term modulo alpha-equivalence. This data is made available both as an offline dataset and through interaction with Coq using a rich remote procedure calling (RPC) protocol. We'll see how we can easily implement our own external solving agent in Python.

Monday, March 18, 2024, 2pm, Grenoble
GDR IFM National days of the GDR (no talk)

Monday, March 11, 2024, 2pm, room 3052
Gabriel Scherer Random generation of well-typed terms: a constraint-based approach

“Fuzzing” is a popular testing approach where we throw random inputs at a program and check that it respects some expected properties on all inputs. Fuzzing a metaprogram such as a compiler requires producing random input programs. To exercise interesting behaviors of the compiler, we want our input programs to be well-formed in a strong sense, in particular to be well-typed. Generating random well-typed programs is difficult when the type system is powerful; there is a whole scientific literature on the topic, see for example [Testing an Optimising Compiler by Generating Random Lambda Terms, Pałka, 2012].

Most existing generators integrate a large part of the implementation of a type-checker, “inverted” to assist in top-down random term generation by efficiently discarding choices that result in ill-typed programs. Scaling this approach to a full language would result in implementing two sophisticated type-checkers, once in the compiler and once in the program generator. This is one type-checker too many!

In this talk I will present preliminary work on reusing the constraint-based type inference approach [The Essence of ML-based type inference, Pottier and Rémy, 2004] to write a single type-checker that can be used for both purposes: to check types of user-provided programs, and to efficiently guide a random program generator. This only requires a fairly simple modification to constraint generators: parametrizing them over a search monad.
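To make the problem concrete, here is a naive generator in the "inverted type-checker" style described above, for the simply-typed lambda-calculus only: each typing rule is hard-coded in generation direction, and choices that cannot produce a well-typed term are discarded. All encodings and names here are invented for illustration; the talk's constraint-based approach would instead derive this behavior from a single constraint generator parametrized by a search monad.

```python
import random

# Types are "b" (a base type) or ("->", t1, t2); terms are tagged tuples.
def gen(ctx, ty, fuel):
    """Generate a random term of type ty in context ctx, or None if the
    fuel budget is exhausted before a well-typed term is found."""
    choices = []
    # Rule Var: any variable of the requested type is a valid choice.
    for x, t in ctx.items():
        if t == ty:
            choices.append(("var", x))
    if fuel > 0:
        # Rule Lam: an arrow type can be inhabited by an abstraction.
        if isinstance(ty, tuple) and ty[0] == "->":
            x = f"x{len(ctx)}"
            body = gen({**ctx, x: ty[1]}, ty[2], fuel - 1)
            if body is not None:
                choices.append(("lam", x, body))
        # Rule App: guess an argument type, then generate both sides.
        arg_ty = random.choice(["b", ("->", "b", "b")])
        f = gen(ctx, ("->", arg_ty, ty), fuel - 1)
        a = gen(ctx, arg_ty, fuel - 1)
        if f is not None and a is not None:
            choices.append(("app", f, a))
    return random.choice(choices) if choices else None

random.seed(0)
print(gen({}, ("->", "b", "b"), 3))
```

The point of the sketch is the duplication it exhibits: the typing rules appear here a second time, "inverted", which is exactly the redundancy a shared constraint generator would remove.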

(Relevant previous work is [Making Random Judgments: Automatically Generating Well-Typed Terms from the Definition of a Type-System, Fetscher, Claessen, Pałka, Hughes, Findler]. They start from Redex, a Racket DSL to implement simple type systems, and derive a random program generator. The limitation of this approach is that Redex only works with fairly simple type systems; in particular we do not expect it to scale to ML-style type inference with polymorphism, which is the part that hand-crafted generators also struggle on. Our work builds on top of constraint-solving approaches that are known to scale well to ML-style polymorphism; but it is preliminary and we have only implemented support for the simply-typed lambda-calculus for now.)

Monday, March 4, 2024, 2pm, room 3052 and BBB link
Guillaume Baudart Schedule Agnostic Semantics for Reactive Probabilistic Programming

Synchronous languages are now a standard industry tool for critical embedded systems. Designers write high-level specifications by composing streams of values using block diagrams. These languages have been extended with Bayesian reasoning to program state-space models which compute a stream of distributions given a stream of observations. Yet, the semantics of probabilistic models is only defined for scheduled equations – a significant limitation compared to dataflow synchronous languages and block diagrams which do not require any ordering.

In this talk I will present two schedule-agnostic semantics for a probabilistic synchronous language. The key idea is to interpret probabilistic expressions as a stream of un-normalized density functions which map random-variable values to a result and a positive score. The co-iterative semantics extends the original semantics to interpret mutually recursive equations using a fixpoint operator. The relational semantics directly manipulates streams and is thus a better fit to reason about program equivalence. We use the relational semantics to prove the correctness of a program transformation required to run an optimized inference algorithm for state-space models with constant parameters.
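As a very rough illustration of the "weighted stream" reading, here is a toy bootstrap filter in plain Python: each observation step maps every sample to a result together with a positive score. The model, names, and inference method are invented for this sketch and are not the paper's semantics.

```python
import math
import random

def gaussian_pdf(x, mu, sigma):
    """Density of N(mu, sigma^2) at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def step(particles, obs):
    """One step of the state-space model x' ~ N(x, 1), obs ~ N(x', 1):
    each weighted particle is moved and rescored by a positive density."""
    return [(x2, w * gaussian_pdf(obs, x2, 1.0))
            for x, w in particles
            for x2 in [random.gauss(x, 1.0)]]

def infer(observations, n=1000):
    """Process the observation stream step by step; return the
    self-normalized posterior mean of the final latent state."""
    random.seed(1)                      # reproducible sketch
    particles = [(0.0, 1.0)] * n
    for obs in observations:
        particles = step(particles, obs)
    total = sum(w for _, w in particles)
    return sum(x * w for x, w in particles) / total

print(round(infer([1.0, 2.0, 3.0]), 2))
```

Note that `step` consumes its inputs one at a time in order, i.e. it is already scheduled; the point of the paper is precisely to give meaning to such models without fixing a schedule on the defining equations.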

This is joint work with Louis Mandel and Christine Tasson.

Monday, February 26, 2024, 2pm, room 3052
Julien Narboux (Université de Strasbourg) Formalization of geometry, and automated theorem proving using constraint solving

In this talk, we will present a quick overview of our work (with Michael Beeson, Pierre Boutry, Gabriel Braun and Charly Gries) about formalization of geometry and the GeoCoq library. We will delve into the axiomatic systems of influential mathematicians such as Euclid, Hilbert and Tarski and present formalization issues. A special focus will be placed on the formalization of continuity axioms and parallel postulates. We will highlight details and issues that can be seen only through the lens of a proof assistant and intuitionistic logic. We will present a syntactic proof of the independence of the parallel postulate. From axiomatic foundations to computer-assisted proofs, we will explore the interplay between synthetic and analytic geometry, and different kinds of automation. Finally, we will present our recent work on Larus (with Predrag Janičić): a theorem prover based on coherent logic and constraint solving which can output proofs that are both readable and machine checkable, along with illustrations generated semi-automatically.

Monday, February 12, 2024, 2pm, room 3052 and BBB link
Esaïe Bauer (IRIF, Université Paris Cité & INRIA Picube) Cut-elimination for the circular modal mu-calculus: linear logic and super exponentials to the rescue

In contrast to Girard's linear logic, which is equipped with a rich proof theory, Kozen's modal mu-calculus has an underdeveloped one: for instance, the modal mu-calculus lacks a proper syntactic cut-elimination theorem.

During this talk, I will present a strategy to prove a cut-elimination theorem for the modal mu-calculus. The strategy is to prove cut-elimination for a “linear translation” of the modal mu-calculus (that is, to define a translation of the statements and proofs of the modal mu-calculus into a linear sequent calculus) and to deduce the desired cut-elimination results as corollaries.

While designing this linear translation, we came up with a sequent calculus exhibiting several modalities (or exponentials). It happens that the literature on linear logic offers tools to manage such calculi, for instance the subexponentials of Nigam and Miller, and super exponentials, a line of work I pursued with Olivier Laurent.

We actually extend super exponentials with fixed-points and non-wellfounded proofs (of which the linear decomposition of the modal mu-calculus is an instance) and prove a syntactic cut-elimination theorem for these sequent calculi in the framework of non-wellfounded proof theory.

Back to the classical modal mu-calculus, we deduce its cut-elimination theorem from the above results, investigating both non-wellfounded and regular proofs.

This is a joint work with Alexis Saurin.

Monday, January 22, 2024, 2pm, room 3052 and BBB link
Houda Mouhcine (Inria Saclay) & Micaela Mayero (Université Paris 13, LIPN, LoVe) Two talks

1. [First talk: Micaela] Title: An overview of the real numbers in theorem provers: an application with real analysis in Coq

Abstract: Formalizing real numbers in a formal proof tool represents a particular challenge. It is not only a question of representing numbers in computers but also of preserving all the mathematical properties needed to carry out proofs. We will review the history of real numbers formalization in different proof assistants and the different ways of formalizing them, before giving an overview of the real numbers libraries that exist today in provers. As an application, we will finish by presenting real analysis developments in Coq.

2. [Second talk: Houda] Title: Formal proofs in applied mathematics: Formalization of Tonelli Theorem and Lagrange Finite Elements

Abstract: Numerical analysis, as an essential tool for solving complex mathematical problems through the use of algorithms and numerical approximations, is further fortified in this work by rigorously formalizing Tonelli's Theorem and Lagrange Finite Elements. These basic concepts are defined and validated using formal proof techniques, enhancing their credibility in various mathematical applications. In the field of integration, important results such as Tonelli's Theorem and the Lebesgue Induction Principle are covered. Furthermore, in the field of numerical resolution of partial differential equations, we explore finite elements, covering the general definition of finite elements, and geometric transformations, up to the full definition of simplicial Lagrange finite elements, including the unisolvance property.

Monday, January 8, 2024, 2pm, room 3052 and Galène link (password: tensor)
Loïc Pujet (University of Stockholm) Choice Principles in Observational Type Theory

Observational Type Theory is an internal language for types equipped with a proof-irrelevant propositional equality. As such, it natively supports extensionality principles, UIP, and quotients of types by proof-irrelevant relations (à la Lean).

Unfortunately, it is difficult to use these quotients without any choice principles to extract information from proof-irrelevant truncations. In this talk, I will discuss the various choice principles one could hope to have in OTT, and I will use ideas from Higher Observational Type Theory to sketch a version of OTT with unique choice.

Year 2023

Monday, December 11, 2023, 2pm, room 3052
Meven Lennon-Bertrand (Université de Cambridge) Definitional Functoriality for Dependent (Sub)Types

In this talk I will explore subtyping for dependently-typed languages, in particular how to tame the coherence problems typical of subtyping in this context. This naturally leads to investigating a form of functoriality equations, which do not hold on the nose in vanilla MLTT.

I will explain how to repair this and add these extra equations, and the proof techniques based on logical relations that we need to set up to establish the good theoretical properties of the resulting type system, which we have recently formalised in Coq.

Monday, November 20, 2023, 2pm, room 3052 and BBB link
Paul Laforgue (IRIF, Université de Paris, Nomadic Labs) Characterisations of the must-preorder for asynchronous processes, mechanised in Coq.

Assessing whether a program can be used in place of another is a classical problem, typically addressed through refinement relations.

The must-preorder, introduced by De Nicola and Hennessy in 1984, considers one process to be a refinement of another if every test satisfied by the former is also satisfied by the latter. In practice, this corresponds to updating the software running on a server without clients noticing any change.

In this talk I will present a new characterisation of the must-preorder for asynchronous processes which is simpler than the existing ones and generalises the standard characterisation defined for synchronous processes.

Our work led to the implementation of a Coq framework. I shall discuss how we leveraged Coq typeclasses to make our theorems applicable to any Labelled Transition System (LTS) that respects Selinger's axioms for output-buffered agents.

Monday, November 13, 2023, 2pm, room 3052 SG
Antoine Chambert-Loir (IMJ-PRG, Université Paris Cité) Simplicity of groups (an incomplete formalization experiment)

The notion of a simple group appears in Galois's work, in connection with the solvability of polynomial equations by radicals.

I will present my work on formalizing in Lean the simplicity of the alternating group on at least 5 letters, which relies on a criterion of Iwasawa.

This theorem, often attributed to Galois himself, has many proofs, some quite short; the one I chose is ultimately rather long.

I will try to justify why its outline, the concepts it involves, and the ramifications it offers make it, in my view, appropriate for inclusion in a library of formal mathematics.

Monday, November 6, 2023, 2pm, room 3052 SG
Alexis Saurin (IRIF) When interpolation is proof-relevant: Craig-Lyndon interpolation as cut-introduction

In this talk, I will discuss a proof-relevant Craig-Lyndon interpolation theorem for first-order LL (linear logic) that I studied after a question by Sam van Gool and Hugo Férée during the Scalp annual meeting last February.

This interpolation is proof-relevant in the following sense: if π proves A ⊢ B, then we can find (i) a formula C in the common vocabulary of A and B and (ii) two proofs π1 and π2 of A ⊢ C and C ⊢ B respectively, such that π1 composed with π2 cut-reduces to π. Similar proof-relevant interpolation results for LJ and LK are obtained as immediate corollaries via usual linear translations.

I will explain how this result follows straightforwardly from the usual proof-theoretic method for proving interpolation and how the construction of a proof-relevant interpolant can be viewed as a cut-introduction procedure.

Finally, I shall discuss how to extend the above result to (a fragment of) circular proofs of linear logic with least and greatest fixed-points.

Monday, October 23, 2023, 2pm, room 3052
Félix Castro (IRIF - Université Paris Cité) Revisiting the model of Hereditary Recursive Operations

Higher Type Arithmetic (HA^ω) is a first-order many-sorted theory. It is a conservative extension of Heyting Arithmetic obtained by extending the syntax of terms to all of System T: the objects of interest here are the functionals of “higher types”. While equality between natural numbers is specified by the axioms of Peano, how can equality between functionals be defined? From this question, different versions of HA^ω arise, such as an extensional version (E-HA^ω) and an intensional version (I-HA^ω). In this talk, we will study a specific model of I-HA^ω: the model of Hereditary Recursive Operations. We will see how this model interprets a family of typed operations “quote_A : A → Nat” that associate, to a given functional of type A, the “source code” of its normal form.

Monday, October 16, 2023, 2pm, room 3052
Thiago Felicissimo (Deducteam, INRIA & LMF, Paris-Saclay) Generic bidirectional typing for dependent type theories

Bidirectional typing is a discipline in which the typing judgment is decomposed explicitly into inference and checking modes, allowing one to control the flow of type information in typing rules and to specify algorithmically how they should be used. Bidirectional typing has been fruitfully studied and bidirectional systems have been developed for many type theories. However, the formal development of bidirectional typing has until now been kept confined to specific theories, with general guidelines remaining informal. We contribute with a generic account of bidirectional typing for a general class of dependent type theories. This is done by first giving a general definition of type theories (or equivalently, a logical framework), for which we define declarative and bidirectional type systems. We then show, in a theory-independent fashion, that the two systems are equivalent. This equivalence is then exploited to establish the decidability of typing for weakly normalizing theories, yielding a generic type-checking algorithm that has been implemented in a prototype and used in practice with many theories.

Monday, October 9, 2023, 2pm, room 3052
Pierre Letouzey On a curious family of nested recursive functions due to Hofstadter

In chapter 5 of the famous book “Gödel, Escher, Bach”, D. Hofstadter introduces the following function G as an example of a recursive definition:
  G(0) = 0, then G(n) = n - G(G(n-1)) for every integer n > 0

He also defines a function H along the same lines, but with three nested recursive calls instead of two for G, and mentions that this continues for any degree of nesting. This talk is devoted to the study of the family of functions obtained in this way. I will discuss their intimate connections with (at least!) infinite rational trees, variants of the Fibonacci numbers and their associated numeration systems, morphic words, Pisot numbers, and Rauzy fractals. I will also present the corresponding Coq development, certifying almost all of the properties in question, including an OEIS conjecture about H. This led me to work in Coq with reals, complex numbers, polynomials, matrices, etc., which is still far from a sinecure. Finally, I will discuss a recalcitrant personal conjecture: this family of functions seems to be pointwise increasing.
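The whole family is easy to experiment with. A minimal Python sketch (the memoized encoding below is mine, not the talk's Coq development):

```python
from functools import lru_cache

# The family of Hofstadter functions with k nested recursive calls:
# k = 2 gives G, k = 3 gives H.
def hofstadter(k):
    @lru_cache(maxsize=None)
    def f(n):
        if n == 0:
            return 0
        r = n - 1
        for _ in range(k):   # k nested applications of f
            r = f(r)
        return n - r         # f(n) = n - f^k(n-1)
    return f

G = hofstadter(2)
H = hofstadter(3)
print([G(n) for n in range(10)])  # [0, 1, 1, 2, 3, 3, 4, 4, 5, 6]
print([H(n) for n in range(10)])  # [0, 1, 1, 2, 3, 4, 4, 5, 5, 6]
```

On these prefixes one can already observe the pointwise-increase conjecture mentioned above: H(n) ≥ G(n) for every value printed.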

NB: there will be definite, though far from total, overlap with an earlier talk at the 2018 PPS days, devoted at the time to G and to an exercise for the reader posed by Hofstadter about the “mirror tree” of G. No need to have seen that 2018 talk!

Monday, October 2, 2023, 2:15pm, room 3052 SG
Emilio Gallego Arias (IRIF & PICUBE, INRIA) Flèche: Incremental Validation for Hybrid Formal Documents

We present Flèche, an engine for the incremental validation and editing of scientific formalized documents.

Flèche provides a formal document model, based on a notion of functional interpretation for a family of languages plus algebraic effects capturing document dynamics.

Flèche targets existing systems with non-trivial interfaces and documents in their daily use.

Some features of Flèche are: a) hybrid formal documents, where natural language is intermixed with its formal counterpart; b) modular error recovery and handling of incomplete documents; c) incremental, full-project model; d) standard document edition workflow.

We have released a Flèche implementation for the Coq Proof Assistant, `coq-lsp`, that uses the Language Server Protocol (LSP) as an editor API. Together with LSP-capable editors like Visual Studio Code, `coq-lsp` provides a new user interface for the Coq system with unique characteristics.

Tuesday, September 12, 2023, 10:15am, room 146, Olympe de Gouges building
Ian Shillito and Iris van der Giessen Interpolation(s) via proof theory: sequents, Coq, and beyond sequents.

In this joint talk we will explore proof-theoretic methods to prove Craig and uniform interpolation in modal logics. Iris will start with an introduction to Craig and uniform interpolation in modal logics. Ian will continue with the presentation of existing proofs of Craig and uniform interpolation via sequent calculi for the modal logic K, as well as their mechanisation in Coq. While these proofs are accessible, understanding them will allow us to expand on the topic of Bílková's pen-and-paper proof of uniform interpolation, which has resisted mechanisation thus far. The issues in porting the proof for K to GL will be explained. Finally, Iris will discuss the difficulties of adapting the sequent-style proof of uniform interpolation to hypersequent-style calculi. We will discuss joint work with Raheleh Jalali and Roman Kuznets in which uniform interpolation is proved for extensions of modal logic K5 via hypersequent-like calculi.

This will be a joint talk, lasting about 1h45.

Monday, June 5, 2023, 2pm, room 146 (Olympe de Gouges)
Wendlasida Ouedraogo (INRIA (Paris Saclay)) Source code optimization for safety critical systems

A computer system is safety-critical when it can cause serious damage to property, the environment, human life, or to society as a whole. Real-world safety-critical systems are also necessarily complex because, to take into account the interactions between software, hardware, the physical environment, and sometimes their distributed nature (systems of systems), they need to implement a variety of safety measures: in software, in hardware, in the system design, at development time, at compile time, and at run-time. Those safety measures, which vary from one safety-critical system to another, very often lead to a decrease in performance, i.e. an increase in the execution time of the software.

This research work is situated in the context of one such system, the communication-based train control (CBTC) system of Siemens Mobility France, which operates a number of driverless subway systems around the world, including Paris lines 1, 4, and 14. That system is certified according to the industrial norm EN-50128 and up to the highest Safety Integrity Level 4, required for safety-critical systems with potentially catastrophic consequences. In this context, the thesis looks for an answer to the question of how to automatically optimize the execution-time performance of such systems without losing the previously obtained safety guarantees.

The answer provided is provably correct optimization of source code. A first contribution is a source-to-source compiler for VCP Ada (a subset of Ada) programs, which optimizes source code while preserving the formal semantics of the programs. The compiler has been proven correct in the Coq proof assistant and guarantees the equivalence of execution between the original and the optimized program. The compiler copes with the complexities of the platform, due to hardware safety measures; this is important, since real-world safety-critical systems often feature such measures, which can conflict with existing formally proven optimizing compilers. Moreover, the source-to-source approach turns out to have methodological advantages over proven-optimization approaches relying on a number of intermediate languages, simplifying and reducing the needed proof effort.

A second contribution is to the problem of provably correct lexical analysis, which had previously received little research attention and is a weak link in any compilation chain using a proven or qualified compiler. We develop CoqLex, the first Coq-formalized lexer generator, based on a modification of an existing Coq implementation of regular expression matching via Brzozowski derivatives.

The developed theory and tools have been applied to optimize real-world VCP Ada programs of CBTC systems, consisting of thousands of source files, with promising results.

Monday, May 22, 2023, 2pm, Olympe de Gouges, room 146
Pablo Donato (Partout Team, LIX laboratory) Deep Inference for Graphical Theorem Proving

In this talk, I will give an overview of the work I have been doing for my thesis in the past three years, which is an exploration in the design and implementation of graphical interfaces for constructing formal proofs interactively. A guiding principle throughout my thesis is the use of proof theory, which both inspired new ways of representing and interacting with logical entities, and provided solid foundations to certify the design decisions made along the way. In particular, I applied a new methodology in proof theory called “deep inference”, which enables reasoning deep inside arbitrary logical contexts.

In the first part of the talk, I will present our so-called proof-by-action paradigm, where the user builds proofs by executing gestural actions directly on the proof state. Typically, this corresponds to the possibility of clicking, dragging and dropping items which represent the conclusion and hypotheses of the current goal. This will be illustrated through a demonstration of Actema, a prototype of graphical interface implementing these ideas in the setting of intuitionistic first-order logic. The main feature of Actema is its drag-and-drop proof tactic, which can be seen as a powerful generalization of the standard “apply” and “rewrite” tactics found in most proof assistants. I will discuss its proof-theoretical foundation, a technique called “subformula linking”, as well as how we were able to make it available in the Coq proof assistant by building the coq-actema plugin.

In the second part of the talk, I will present a more experimental line of research where I aim to get rid of the traditional symbolic formulas of logic, in favor of a more structured and iconic representation of the proof state. By “iconic”, I mean that the visual representation of logical statements should hint at their meaning, and provide affordances to the ways in which they can be manipulated. It turns out that this has already been investigated by the visionary logician C. S. Peirce at the end of the 19th century, who devised arguably the first diagrammatic proof calculus in history: the existential graphs. I will present my own take on the existential graphs, which has crystallized into a proof system I call the “flower calculus”. With only 7 rules of (deep) inference, the flower calculus is sound and complete for intuitionistic first-order logic. I will conclude with a demonstration of the Flower Prover, another prototype of GUI for theorem proving in the proof-by-action paradigm, whose actions map directly to the rules of the flower calculus.

Monday, May 15, 2023, 2pm, room 146, Olympe de Gouges building
Etienne Miquey (I2M) Evidenced frames: a unifying framework broadening realizability models (and that is formalised in Coq!)

Constructive foundations have for decades been built upon realizability models for higher-order logic and type theory. Nonetheless, the existing literature on realizability models has several limitations, including the following:
1. The question “What is a realizability model?” does not have a very satisfying formal answer.
2. Traditional realizability models have a rather limited notion of computation, which only supports non-termination and avoids many other commonly used effects. Work to address these limitations has typically overlaid structure on top of existing models, such as by using powersets to represent non-determinism, but kept the realizers themselves deterministic.
3. The algebraic structure of existing realizability models taking effects into account in a more direct fashion (as is done in Krivine realizability, for instance) is hard to analyze.

In this work, we (Liron Cohen, Ross Tate & myself) alternatively address these limitations by introducing evidenced frames: a general-purpose framework for building realizability models that support diverse effectful computations. We demonstrate that they define a very flexible notion of realizability models, allowing in particular models wherein the realizers themselves can be effectful, such as λ-terms that can manipulate state, reduce non-deterministically, or fail entirely. Beyond the broader notions of computation, we demonstrate that evidenced frames form a unifying framework for (realizability) models of higher-order dependent predicate logic. In particular, we prove that evidenced frames are complete with respect to these models, and that the existing completeness construction for implicative algebras—another foundational framework for realizability—factors through our simpler construction. As such, we conclude that evidenced frames offer an ideal domain for unifying and broadening realizability models.

Most of our results have been formalised in the Coq proof assistant, and I plan to devote part of the talk to presenting this formalisation.
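To give an idea of the structure involved, here is a minimal Coq sketch of an evidenced frame: propositions, a type of evidence, and an evidence relation that is reflexive and composable. This is an illustrative reconstruction, not the authors' actual Coq development; all names are mine.

```coq
(* Hedged sketch only: a record bundling the core data of an
   evidenced frame.  Field names are illustrative, not the paper's. *)
Record EvidencedFrame := {
  prop     : Type;                              (* the "propositions" *)
  evidence : Type;                              (* possibly effectful realizers *)
  evid     : prop -> evidence -> prop -> Prop;  (* phi1 entails phi2, with evidence e *)

  (* reflexivity: some piece of evidence witnesses phi entails phi *)
  e_id    : evidence;
  evid_id : forall phi, evid phi e_id phi;

  (* transitivity: evidence composes *)
  e_comp    : evidence -> evidence -> evidence;
  evid_comp : forall phi1 phi2 phi3 e1 e2,
      evid phi1 e1 phi2 ->
      evid phi2 e2 phi3 ->
      evid phi1 (e_comp e1 e2) phi3
}.
```

The point of keeping `evidence` an abstract type is that nothing forces realizers to be deterministic: instantiating it with stateful or non-deterministic λ-terms is what the abstract's "effectful realizers" refers to.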

Mardi 9 mai 2023, 11 heures, 147 (Olympe de Gouges)
Louise Dubois De Prisque (LSV, CNRS & ENS Paris-Saclay) Compositional pre-processing for Coq’s tactics

Automated tactics in Coq are numerous but often limited to a specific logical fragment. Pre-processing can make them applicable in more situations, but it is often specialized to a single tactic. This is why we implemented Sniper, a general pre-processing tool made of predictable, atomic, compositional transformations. Sniper also provides a push-button automated tactic called snipe, which composes the transformations and calls external SMT solvers from within Coq thanks to the SMTCoq plugin.
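As an illustration, usage is intended to be push-button: state a goal and invoke the tactic. The sketch below is hedged; the import path and the particular goal are illustrative, and running it requires the Sniper plugin and a supported SMT solver to be installed.

```coq
(* Hedged usage sketch (assumes the Sniper plugin and an SMT
   solver are installed; the goal is illustrative). *)
From Coq Require Import ZArith List.
From Sniper Require Import Sniper.

(* A first-order goal mixing lists and equality: snipe
   pre-processes it and hands it to the SMT solver. *)
Goal forall (l : list Z) (x : Z), hd_error l = Some x -> l <> nil.
Proof. snipe. Qed.
```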

Lundi 17 avril 2023, 14 heures, Olympe de Gouges (room 146)
Arthur Charguéraud (Camus team, ICPS, iCube, INRIA) A Modern Eye on Weakest Preconditions

The traditional big-step semantics judgment relates an input configuration to one output configuration. For a non-deterministic semantics, the big-step judgment does not naturally capture the property that all possible executions are well-behaved. In contrast, the omni-big-step semantics relates an input configuration to a set of reachable output configurations (technically, an overapproximation of that set), and directly captures total correctness properties. I will present the omni-big-step semantics judgment, which corresponds to an inductive definition of a weakest-precondition predicate. I will explain how this judgment leads to the simplest known soundness proof for a sequential Separation Logic, moreover capturing total correctness.
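The shape of the judgment can be sketched in Coq for a toy nondeterministic language. This is a minimal illustration of the idea described above, with my own names, not CFML's definitions: the judgment relates a term to a postcondition that overapproximates its reachable results, and its inductive definition is exactly a weakest-precondition predicate.

```coq
(* Toy language: values, nondeterministic choice, sequencing. *)
Inductive trm : Type :=
  | val    : nat -> trm
  | choice : trm -> trm -> trm   (* nondeterministic choice *)
  | seq    : trm -> trm -> trm.

(* [eval t Q] asserts that every possible execution of [t]
   terminates in a value satisfying [Q]: an inductively defined
   weakest precondition capturing total correctness. *)
Inductive eval : trm -> (nat -> Prop) -> Prop :=
  | eval_val : forall n Q,
      Q n ->
      eval (val n) Q
  | eval_choice : forall t1 t2 Q,
      eval t1 Q ->
      eval t2 Q ->               (* all branches must be well-behaved *)
      eval (choice t1 t2) Q
  | eval_seq : forall t1 t2 Q,
      eval t1 (fun _ => eval t2 Q) ->
      eval (seq t1 t2) Q.
```

Note how `eval_choice` requires both branches to satisfy the postcondition, which is what makes the judgment quantify over all executions rather than exhibiting one.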

Besides, I will present “characteristic formulae in weakest-precondition style”. These formulae also correspond to the weakest precondition of an unannotated program, that is, a program without specifications or invariants. Characteristic formulae provide a practical way of embedding Separation Logic reasoning in a proof assistant such as Coq. This approach is implemented in my program verification tool CFML, and is described in an all-in-Coq course, Volume 6 of the Software Foundations series.

Furthermore, I will advertise two cousins of the omni-big-step semantics: the coinductive omni-big-step semantics, which captures partial correctness; and the omni-small-step semantics, which relates one configuration to its set of successor configurations. I will explain how omni-semantics may be used to: (1) carry out type-soundness proofs in a style that avoids quadratic case inspections, and (2) carry out compiler correctness proofs using forward simulations, even in the case of nondeterministic semantics.

Lundi 3 avril 2023, 14 heures, 1007
Olivier Laurent (Équipe Plume, Laboratoire de l'Informatique du Parallélisme (LIP), ENS Lyon) 1, 2, 3, Yalla: Linear Logic in Coq

We discuss the formalization of Linear Logic, through the development of the Yalla library for Coq.

The initial specification was to get a library which:

  • would provide standard results on the proof theory of Linear Logic;
  • could be reused, in particular being able to deal with variants of the system;
  • should be compatible with the Curry-Howard interpretation of proofs, thus faithful to their computational content.

We will describe the evolution of the project and how it matches this specification through the different versions of the library: Yalla 1, Yalla 2 (current version), Yalla 3 (future directions).

Following all previously known work, Yalla 1 defined proofs in Prop. It relies on an explicit exchange rule to deal with the computational content of proofs. Yalla 2 corresponds to a move to Type, making it possible to extract computational information from proofs. Yalla 3 will rely on a different way of approaching exchange, so as to deal with local transformations of proofs more easily.
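The Prop-versus-Type design point above can be illustrated with a simplified fragment of a linear sequent calculus. This is a hedged sketch, not Yalla's actual definitions: `formula`, `dual`, and the rule names are placeholders of mine.

```coq
(* Illustrative only: the same two rules (axiom and explicit
   exchange), first with derivations in Prop, then in Type. *)
From Coq Require Import List Permutation.

Parameter formula : Type.
Parameter dual : formula -> formula.

(* Yalla 1 style: proofs live in Prop, so two derivations of the
   same sequent are propositionally indistinguishable. *)
Inductive ll1 : list formula -> Prop :=
  | ax1 : forall A, ll1 (dual A :: A :: nil)
  | ex1 : forall l1 l2, Permutation l1 l2 -> ll1 l1 -> ll1 l2.

(* Yalla 2 style: the same rules in Type, so distinct derivations
   remain distinguishable and computational information can be
   extracted from proofs. *)
Inductive ll2 : list formula -> Type :=
  | ax2 : forall A, ll2 (dual A :: A :: nil)
  | ex2 : forall l1 l2, Permutation l1 l2 -> ll2 l1 -> ll2 l2.
```

With `ll1`, proof irrelevance collapses the identity of derivations; with `ll2`, a derivation is a genuine inhabitant of a Type and can be computed with, which is what the Curry-Howard requirement in the specification demands.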

Lundi 20 mars 2023, 15 heures, 1007
Riccardo Brasca (IMJ-PRG, Université Paris-Cité) Fermat's Last Theorem for regular primes in Lean

Fermat's Last Theorem is a result in number theory that was conjectured in 1637 but proved only in 1995. The proof relies on modern mathematical techniques, and its formalization is currently out of reach. In this talk I will explain the formalization in Lean of a special case of this theorem, in which the exponent is assumed to be a “regular” prime (this notion will be introduced during the talk). I will also describe the Lean proof assistant and its mathematical library mathlib, and I will explain the mathematical interest of such a project. No background in number theory is needed to follow the talk.