~~NOCACHE~~
/* DO NOT EDIT THIS FILE */
[[en:seminaires:types:index|Type theory and realisability]]\\
Wednesday December 11, 2019, 2PM, Salle 1007\\
**Danny Gratzer** //Type Theory à la Mode//
\\
Modalities have proved to be a recurring phenomenon in type theories, arising
in both mathematical and computational contexts. For instance, recent work on
cubical type theory has simplified the construction of univalent universes
using the internal type theory of cubical sets supplemented with a comonad. On
the other hand, guarded type theory has provided an account of coinduction
using a handful of modalities.
Despite the wide variety of uses, a general framework for modal dependent type
theories has still proven elusive. Each modal situation requires a handcrafted
type theory, and establishing the properties of these type theories requires a
significant technical investment.
In this work, we have attempted to isolate a class of modal situations which
can be given a single uniform syntax and for which the typical metatheorems of
type theory can be proven once and for all. Our calculus takes a mode theory as
a parameter (an abstract description of the collection of modes, the modalities
between them, and the ways in which they interact) and produces a full
dependent type theory satisfying canonicity.
Our approach generalizes the ideas latent in Nuyts, Vezzosi, and Devriese, and
simplifies the forthcoming work by Licata, Shulman, and Riley on a subset of
mode theories. This calculus is sufficiently flexible to model internal
parametricity, guarded recursion, and even axiomatic cohesion.
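To fix intuitions, here is a minimal sketch, purely illustrative and not the authors' formalism, of what "taking a mode theory as a parameter" could look like: modes as objects and modalities as composable morphisms between them. The class and method names are invented for this example.

```python
# Purely illustrative sketch (not the calculus described above): a mode
# theory as a small category-like structure -- a set of modes (objects)
# and modalities (morphisms) closed under composition, with identities.

class ModeTheory:
    def __init__(self):
        self.modes = set()
        self.modalities = {}  # name -> (source mode, target mode)

    def add_mode(self, m):
        self.modes.add(m)
        self.modalities[f"id_{m}"] = (m, m)  # identity modality at each mode

    def add_modality(self, name, src, tgt):
        assert src in self.modes and tgt in self.modes
        self.modalities[name] = (src, tgt)

    def compose(self, g, f):
        """Compose f : A -> B with g : B -> C, giving g after f : A -> C."""
        fs, ft = self.modalities[f]
        gs, gt = self.modalities[g]
        assert ft == gs, "modalities must be composable"
        return (fs, gt)

# Example: a single mode with one endo-modality "box", roughly the shape
# of the comonadic situations (cubical, guarded) mentioned above.
T = ModeTheory()
T.add_mode("m")
T.add_modality("box", "m", "m")
print(T.compose("box", "box"))  # ('m', 'm')
```

A real mode theory also records 2-cells (the "ways in which the modalities interact"), which this toy structure omits.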
[[en:seminaires:types:index|Type theory and realisability]]\\
Wednesday November 27, 2019, 2PM, Salle 1007\\
**Bas Spitters** (Aarhus University) //Synthetic topology in Homotopy Type Theory for probabilistic programming//
\\
The ALEA Coq library formalizes measure theory based on a variant of the Giry
monad on the category of sets.
This enables the interpretation of a probabilistic programming language with
primitives for sampling from discrete distributions.
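The discrete fragment can be pictured as a finite-support probability monad; the following sketch is illustrative only and does not reproduce ALEA's actual Coq API (the names `ret`, `bind`, `uniform` are ours).

```python
from fractions import Fraction

# Illustrative finite-support probability monad, in the spirit of the
# discrete fragment described above (not ALEA's actual interface).

def ret(x):
    """Dirac distribution: all mass on x."""
    return {x: Fraction(1)}

def bind(dist, k):
    """Monadic bind: sample x from dist, then sample from k(x)."""
    out = {}
    for x, p in dist.items():
        for y, q in k(x).items():
            out[y] = out.get(y, Fraction(0)) + p * q
    return out

def uniform(xs):
    n = len(xs)
    return {x: Fraction(1, n) for x in xs}

# Probability that the sum of two fair dice equals 7.
two_dice = bind(uniform(range(1, 7)),
                lambda a: bind(uniform(range(1, 7)),
                               lambda b: ret(a + b)))
print(two_dice[7])  # 1/6
```

Exactly this construction breaks down for continuous distributions, as the next sentence explains: a distribution on the reals has no finite (or even countable) support table to enumerate.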
Continuous distributions (e.g. over the reals) cannot be treated this way, as
not all subsets are measurable.
We propose the use of synthetic topology to model continuous distributions
while staying close to ALEA's approach.
This allows us to model the apWhile language, used in EasyCrypt for
differential privacy.
We study the initial sigma-frame and the corresponding induced topology on
arbitrary sets.
We define valuations and lower integrals on sets that take these intrinsic
topologies into account, and we prove versions of the Riesz and Fubini theorems.
We then show how the Lebesgue valuation, and hence continuous distributions,
can be constructed.
By interpreting our synthetic results in the K2-realizability topos, we recover
the valuations on topological domains used as the semantics of the AugurV2
probabilistic language, which has been used for machine learning.
Joint work with Martin Bidlingmaier (Aarhus) and Florian Faissole (INRIA).
[[en:seminaires:types:index|Type theory and realisability]]\\
Wednesday September 18, 2019, 2PM, Salle 1007\\
**Karl Palmskog** (The University of Texas at Austin) //mCoq: Mutation Proving for Analysis of Verification Projects//
\\
Mutation analysis, which introduces artificial defects into software systems,
is the basis of mutation testing, a technique widely applied to evaluate and
enhance the quality of test suites. However, despite the deep analogy between
tests and formal proofs, mutation analysis has seldom been considered in the
context of deductive verification. We propose mutation proving, a technique for
analyzing verification projects that use proof assistants. We implemented our
technique for the Coq proof assistant in a tool dubbed mCoq. mCoq applies a
set of mutation operators to Coq definitions of functions and datatypes,
inspired by operators previously proposed for functional programming languages.
mCoq then checks proofs of lemmas affected by operator application. To make our
technique feasible in practice, we implemented several optimizations in mCoq
such as parallel proof checking. We applied mCoq to several medium and large
scale Coq projects, and recorded whether proofs passed or failed when applying
different mutation operators. We then qualitatively analyzed the mutants, and
found several examples of weak and incomplete specifications. For our
evaluation, we made many improvements to serialization of Coq code and even
discovered a notable bug in Coq itself, all acknowledged by developers. We
believe mCoq can be useful both to proof engineers for improving the quality of
their verification projects and to researchers for evaluating proof engineering
techniques.
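The mutation-proving loop can be illustrated in miniature; mCoq itself mutates Coq definitions and re-runs Coq proofs, whereas this Python toy replays the same idea with property checks standing in for lemmas (all names here are invented).

```python
# Toy illustration of mutation proving (mCoq mutates Coq definitions and
# re-checks Coq proofs; the operator and lemma names below are made up).

def rev(xs):            # original definition
    return xs[::-1]

def rev_mutant(xs):     # mutant: "replace function body with identity"
    return xs

# "Lemmas" about rev, playing the role of Coq proofs: a checker that
# re-runs them against a candidate definition.
def lemmas_hold(f):
    involutive = all(f(f(xs)) == xs for xs in ([], [1], [1, 2, 3]))
    reverses   = f([1, 2, 3]) == [3, 2, 1]
    return involutive and reverses

print(lemmas_hold(rev))         # True: original passes
print(lemmas_hold(rev_mutant))  # False: the mutant is "killed"

# A surviving mutant signals a weak or incomplete specification: the
# "involutive" lemma alone does NOT kill this mutant.
print(all(rev_mutant(rev_mutant(xs)) == xs for xs in ([], [1, 2])))  # True
```

The last line mirrors the qualitative finding reported above: when a mutant survives all checks, the specification, not the definition, is usually what needs strengthening.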
Joint work with Ahmet Celik, Marinela Parovic, Emilio Jesús Gallego Arias, and
Milos Gligoric.
Speaker bio:
Karl Palmskog is a Research Fellow at The University of Texas at Austin. His
research focuses on proof engineering techniques and verification of
distributed systems using Coq. He was previously a postdoctoral researcher at
University of Illinois at Urbana-Champaign, working on topics related to the
actor model of message-passing concurrency. He obtained his PhD from KTH Royal
Institute of Technology.
Video available at https://youtu.be/S6b6OMFiDFA
Slides available at https://setoid.com/slides/irif-2019.pdf
[[en:seminaires:types:index|Type theory and realisability]]\\
Friday May 24, 2019, 2PM, Salle 1007\\
**Hugo Moeneclaey** //Monoids up to Coherent Homotopy in Two-Level Type Theory//
\\
When defining a monoid structure on an arbitrary type in HoTT, one
should require a multiplication that is not only homotopy-associative, but also
has an infinite tower of higher homotopies. For example in dimension two one
should have a condition similar to Mac Lane’s pentagon for monoidal categories.
We call such a monoid a monoid up to coherent homotopy.
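The dimension-two coherence lives over the five ways of parenthesizing a four-fold product, the vertices of Mac Lane's pentagon. A small enumeration (illustrative only, not part of the formalization) recovers that count:

```python
# The pentagon condition relates the five parenthesizations of a
# four-fold product, from x*(y*(z*w)) to ((x*y)*z)*w.  Enumerating
# binary trees on n leaves (the Catalan numbers) recovers that count.

def parenthesizations(xs):
    if len(xs) == 1:
        return [xs[0]]
    out = []
    for i in range(1, len(xs)):          # split point of the product
        for l in parenthesizations(xs[:i]):
            for r in parenthesizations(xs[i:]):
                out.append(f"({l}*{r})")
    return out

ps = parenthesizations(["x", "y", "z", "w"])
print(len(ps))  # 5 -- the vertices of Mac Lane's pentagon
print(ps)
```

A coherent monoid must supply not just a homotopy between any two of these five, but a compatible filler for the whole pentagon, and so on in every dimension.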
The goal of my internship in Stockholm was to formalize them in Agda. It is
well-known that infinite towers of homotopies are hard to handle in plain HoTT,
so we postulate a variant of two-level type theory, with a strict equality and
an interval type. Then we adapt the set-theoretical treatment of monoids up to
coherent homotopy using operads as presented by Clemens Berger and Ieke
Moerdijk [1,2].
Our main results are:
(a) Monoids up to coherent homotopy are invariant under homotopy equivalence
(b) Loop spaces are monoids up to coherent homotopy.
In this talk I will present the classical theory of monoids up to coherent
homotopy, and indicate how two-level type theory can be used to formalize it.
References
1. Axiomatic homotopy theory for operads (arxiv.org/abs/math/0206094)
2. The Boardman-Vogt resolution of operads in monoidal model categories (arxiv.org/abs/math/0502155)
Organized jointly with the working group on higher categories, polygraphs, and homotopy.
[[en:seminaires:types:index|Type theory and realisability]]\\
Friday March 29, 2019, 11AM, Salle 3052\\
**Emilio J. Gallego Arias** (MINES ParisTech) //Formalized Logic Programming, from Theory to Practice//
\\
I will present past and ongoing work to develop mechanically-verified
semantics and implementations of constraint logic programming.
(Constraint) logic programming is based on the utilization of proof
search as a device for computation. The power that such a paradigm
offers often comes at a cost for formal language research and
implementation. Logic variables, unification, renaming apart, and
proof search strategies are cumbersome to handle formally, and often
specified informally or at the meta-logical level.
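To make concrete what has to be handled formally, here is a minimal first-order unification sketch (illustrative; it omits the occurs check, and the term encoding is our own): logic variables, substitutions, and the bookkeeping they drag along.

```python
# Minimal first-order unification (illustrative): terms are variables
# (strings starting with an uppercase letter, Prolog-style) or
# (functor, arg, ...) tuples.  Omits the occurs check for brevity.

def is_var(t):
    return isinstance(t, str) and t[:1].isupper()

def walk(t, s):
    """Chase variable bindings in substitution s."""
    while is_var(t) and t in s:
        t = s[t]
    return t

def unify(a, b, s=None):
    s = dict(s or {})
    a, b = walk(a, s), walk(b, s)
    if a == b:
        return s
    if is_var(a):
        s[a] = b
        return s
    if is_var(b):
        s[b] = a
        return s
    if isinstance(a, tuple) and isinstance(b, tuple) \
            and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            s = unify(x, y, s)
            if s is None:
                return None
        return s
    return None  # functor clash: no unifier

# f(X, g(b)) ~ f(a, g(Y))  yields  {X: a, Y: b}
print(unify(("f", "X", ("g", "b")), ("f", "a", ("g", "Y"))))
```

The point of the relational semantics described next is precisely to compile this apparatus away: variable-free relations carry no binders to rename apart.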
In the first part of the talk I will describe an algebraic semantics for
proof search, based on Tarski's relation algebras and Freyd's
allegories. The main idea is to translate logic programs to _variable
free_ relations: distributive relation algebras with quasi-projections
interpret defined predicates, while first-order rewriting
simulates the traditional SLD proof search procedure.
While the relation-based approach suffices to capture the proof search
in a complete way, some rewriting steps are unsatisfactory from a more
operational point of view, as they introduce extraneous terms in order
to preserve certain global constraints. In an attempt to remedy this,
we switch to the categorical version of relation algebra: allegories.
In particular, for a signature Σ, we will interpret logic programs in
the so-called Σ-allegories: distributive allegories whose conjunctive
part is tabulated by a "Regular Lawvere Category".
In this setting, the single primitive of relation composition
encompasses unification, garbage collection, parameter passing, and
environment creation and destruction. Tabulations play a crucial role,
with their domain representing the set of live existential
variables. Proof search is carried out by a notion of diagram
rewriting, which is complete w.r.t. SLD and satisfactory as a form of
abstract syntax for the program's execution. This new semantics paves
the way to a formal extension of CLP with useful programming constructs.
In the second part, I will shift focus to a different part of the
logic programming spectrum, and describe ongoing efforts on
formalizing logic programming engines over theories with finitary
models in the Coq proof assistant. In particular, I will describe the
implementation and verification of an engine for incremental model
computation of a subset of Datalog that we have dubbed "Regular
Datalog".
"Regular" Datalog stems from the Regular Queries of Reutter, Romero, and
Vardi, and is of interest to the graph database community. The finite
nature of the semantics of such programs allows them to be handled quite
naturally inside a constructive proof assistant, while remaining close
to the usual mathematical notation. Borrowing techniques from
incremental view maintenance, we have developed a verified incremental
model construction engine; to achieve a cost-effective mechanized
proof of the soundness of the engine we had to develop a few notions
specific to the ITP domain, such as generalized program signatures and
a notion of relative satisfaction. We provide some preliminary
benchmarks of the verified engine over realistic scenarios.
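The incremental idea behind the engine can be shown in miniature with semi-naive evaluation of transitive closure; this Python sketch is illustrative only and is not the verified Coq engine described above.

```python
# Semi-naive Datalog evaluation in miniature (illustrative, not the
# verified engine): transitive closure of a binary relation,
#   path(X,Y) :- edge(X,Y).   path(X,Z) :- path(X,Y), edge(Y,Z).
# Each round joins only the *delta* (facts derived last round) against
# edge, which is the incremental idea behind semi-naive evaluation.

def transitive_closure(edges):
    path = set(edges)
    delta = set(edges)
    while delta:
        new = {(x, z) for (x, y) in delta for (y2, z) in edges if y == y2}
        delta = new - path   # keep only genuinely new facts
        path |= delta
    return path

edges = {(1, 2), (2, 3), (3, 4)}
print(sorted(transitive_closure(edges)))
# [(1, 2), (1, 3), (1, 4), (2, 3), (2, 4), (3, 4)]
```

Incremental view maintenance refines this further: when base facts change, only the affected deltas are recomputed rather than the whole model, which is what makes a verified cost-effective engine interesting to benchmark.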
To conclude, I will present some challenges that the practical
use of proof assistants faces these days, and briefly
discuss my ongoing work in this area.