INRIA project-team $\pi r^2$

Thematic team Analysis and conception of systems

## Analysis and conception of systems

### Previous talks

#### Year 2019

Monday January 7, 2019, 10:30AM, Salle 3052

**Louis Mandel** (IBM Watson) *Reactive Probabilistic Programming*

Following this approach, we propose to extend a reactive programming language with probabilistic constructs. This new language enables the modeling of probabilistic reactive systems, that is, probabilistic models in constant interaction with their environment. Examples of such systems include time-series prediction, agent-based systems, and infrastructure self-tuning.

To demonstrate this approach, we started from ReactiveML, a reactive extension of OCaml that supports parallel composition of processes, communication through signals, preemption, and suspension. We extend the language with classic probabilistic constructs (sample and factor, à la WebPPL) and propose an inference scheme for reactive processes, that is, non-terminating functions.
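Not material from the talk, but a minimal sketch of the inference problem it describes, in plain Python rather than ReactiveML: a non-terminating process advances with a `sample`-style drift, each state is scored against an observation (`factor`), and a toy particle filter produces one posterior estimate per reactive instant. All names here are illustrative, not ReactiveML's API.

```python
import math, random

random.seed(0)

def step(state):
    # one reactive step: the hidden state drifts (a `sample` in the model)
    return state + random.gauss(0.0, 1.0)

def log_factor(state, obs):
    # score the state against the observation (a `factor` a la WebPPL)
    return -0.5 * (state - obs) ** 2

def particle_filter(observations, n_particles=500):
    # streaming inference over a non-terminating process: one sweep per input
    particles = [0.0] * n_particles
    estimates = []
    for obs in observations:
        particles = [step(p) for p in particles]
        logw = [log_factor(p, obs) for p in particles]
        m = max(logw)
        weights = [math.exp(lw - m) for lw in logw]
        total = sum(weights)
        # posterior mean estimate at this reactive instant
        estimates.append(sum(p * w for p, w in zip(particles, weights)) / total)
        # resample so the particle set keeps following the posterior
        particles = random.choices(particles, weights=weights, k=n_particles)
    return estimates

estimates = particle_filter([1.0, 2.0, 3.0, 4.0])
```

The point of the sketch is that inference runs step by step, in lock-step with the process, rather than on a terminating program.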

#### Year 2018

Wednesday March 14, 2018, 2PM, Salle UFR

**Thomas Letan** (Agence Nationale de la Sécurité des Systèmes d’Information & INRIA Rennes) *FreeSpec : Modular Verification of Systems using Effects and Effect Handlers in Coq*

Even though system components are generally carefully designed, and even verified, by different groups of people, the *composition* of these components is often regarded with less attention.

This paves the way for “architectural attacks”, a class of security vulnerabilities where the attacker is able to threaten the security of the system even if each of its components continues to act as expected.

In this article, we introduce FreeSpec, a Coq framework built upon the key idea that components can be modelled as programs with algebraic effects to be realized by other components.

FreeSpec allows for the modular modelling of a complex system, by defining idealized components connected together, and the modular verification of the properties of their composition.

In doing so, we propose a novel approach for the Coq proof assistant to reason about programs with effects in a modular way.
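FreeSpec itself is a Coq framework, but the key idea — components modelled as programs with effects realized by other components — can be sketched in a few lines of Python, with a generator as the effectful program and a second component as its handler. The `read`/`write` interface is purely illustrative, not FreeSpec's actual encoding.

```python
# A component as a program with effects: it yields requests and receives responses.
def program():
    x = yield ("read", "reg")        # effect: ask another component for a value
    yield ("write", "reg", x + 1)    # effect: ask it to update the value
    return x + 1

def run(prog, store):
    # Handler component realizing the read/write effects over a dict store.
    gen = prog()
    try:
        req = next(gen)
        while True:
            if req[0] == "read":
                resp = store[req[1]]
            elif req[0] == "write":
                store[req[1]] = req[2]
                resp = None
            req = gen.send(resp)
    except StopIteration as done:
        return done.value

store = {"reg": 41}
assert run(program, store) == 42 and store["reg"] == 42
```

Swapping `run` for a different handler verifies the same program against a different realization of its effects, which is the modularity the abstract describes.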

Monday February 19, 2018, 11AM, Salle 3052

**Adrien Boiret** (University of Warsaw) *The “Hilbert Method” for Solving Transducer Equivalence Problems*

Consider an algebra A in the sense of universal algebra, i.e. a set equipped with some operations. A grammar over A is like a context-free grammar, except that it generates a subset of the algebra A, and the rules use operations from the algebra A. The classical context-free grammars are the special case when the algebra A is the free monoid with concatenation. Using the “Hilbert method” one can decide certain properties of grammars over algebras that are fields or rings of polynomials over a field. The “Hilbert method” extends to grammars over certain well-behaved algebras that can be “coded” into fields or rings of polynomials. Finally, for these well-behaved algebras, one can also use the “Hilbert method” to decide the equivalence problem for transducers (of a certain transducer model that uses registers) that input trees and output elements of the well-behaved algebra.

In the talk, we give examples and non-examples of well-behaved algebras, and discuss the decidability / undecidability results connected to them. In particular:

- We show that equivalence is decidable for transducers that input (possibly ordered) trees and output unranked unordered trees.

- We show that equivalence is undecidable for transducers that input words and output polynomials over the rational numbers in one variable (but are allowed to use composition of polynomials).

Joint work with Mikołaj Bojańczyk, Janusz Schmude, Radosław Piórkowski at Warsaw University.
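Not the talk's actual coding, but a classical trick in the same spirit of “coding into polynomials”: a finite multiset of numbers can be coded as the polynomial whose roots are its elements, so equality of unordered collections reduces to equality of polynomial coefficients.

```python
from functools import reduce

def poly_mul(p, q):
    # multiply two polynomials given as coefficient lists (lowest degree first)
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def code_multiset(ms):
    # code a multiset of integers as prod (x - a); by unique factorization,
    # two multisets are equal iff their codes are equal
    return reduce(poly_mul, ([-a, 1] for a in ms), [1])

# order does not matter, multiplicity does:
assert code_multiset([3, 1, 2]) == code_multiset([2, 3, 1])
assert code_multiset([1, 1, 2]) != code_multiset([1, 2, 2])
```

A coding of this kind is what lets equivalence questions about transducer outputs be transported into questions about polynomials, where Hilbert-style algebraic tools apply.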

Wednesday February 7, 2018, 2PM, Salle 3052

**Hongseok Yang** (University of Oxford) *Informal introduction to integral probability metrics and generative adversarial networks*

References:

- Wasserstein GAN: https://arxiv.org/abs/1701.07875
- Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy: https://arxiv.org/abs/1611.04488
- Maximum Mean Discrepancy: http://alex.smola.org/teaching/iconip2006/iconip_3.pdf
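The references center on the maximum mean discrepancy, one concrete integral probability metric. As an illustration (not material from the talk), here is the standard biased sample estimate of MMD², E[k(x,x')] + E[k(y,y')] − 2E[k(x,y)], in plain Python with a Gaussian kernel:

```python
import math

def gaussian_kernel(x, y, sigma=1.0):
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd_squared(xs, ys, k=gaussian_kernel):
    # biased (V-statistic) estimate of MMD^2 between two samples
    def mean_k(a, b):
        return sum(k(u, v) for u in a for v in b) / (len(a) * len(b))
    return mean_k(xs, xs) + mean_k(ys, ys) - 2 * mean_k(xs, ys)

# identical samples give 0; well-separated samples give a large value
assert abs(mmd_squared([0.0, 1.0, 2.0], [0.0, 1.0, 2.0])) < 1e-12
assert mmd_squared([0.0, 0.1], [5.0, 5.1]) > 0.5
```

With a characteristic kernel such as the Gaussian, MMD is zero exactly when the two distributions coincide, which is what makes it usable as a GAN training criterion.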

#### Year 2017

Tuesday November 28, 2017, 1:30PM, Salle 3052

**Leon Gondelman** (Institute for Computation and Information Sciences (Radboud Univ.), Netherlands) *The Spirit of Ghost Code*

In this talk, we describe a simple ML-style programming language with mutable state and ghost code. Non-interference is ensured by a type system with effects, which allows, notably, the same data types and functions to be used in both regular and ghost code. We define the procedure of ghost code erasure and we prove its safety using bisimulation. A similar type system, with numerous extensions which we briefly discuss, is implemented in the program verification environment Why3.
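As a toy analogue of the non-interference guarantee (assuming nothing beyond the abstract): ghost code may read regular data, but regular code never depends on ghost data, so erasing the ghost parts leaves the observable result unchanged.

```python
def sum_list(xs):
    total = 0
    ghost_steps = 0                  # ghost: exists only for specification
    for x in xs:
        total += x                   # regular code never reads ghost_steps
        ghost_steps += 1             # ghost code may observe regular control flow
    assert ghost_steps == len(xs)    # ghost assertion, checked but not returned
    return total

def sum_list_erased(xs):
    # the result of ghost code erasure: same observable behavior
    total = 0
    for x in xs:
        total += x
    return total

assert sum_list([1, 2, 3]) == sum_list_erased([1, 2, 3]) == 6
```

The type system described in the talk is what makes this erasure safe statically, instead of by inspection as in this sketch.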

Wednesday October 18, 2017, 2:30PM, Salle 3052

**Nicolas Behr** (IRIF) *The stochastic mechanics of graph rewriting via the rule algebra framework - an introduction*

Thursday June 8, 2017, 2PM, Salle 3052

**Eric Goubault** (LIX, École Polytechnique) *A simplicial complex model of dynamic epistemic logic for fault-tolerant distributed computing*

We choose distributed computing as a case study to work out in detail the dynamic epistemic simplicial complex theory. The reason is that distributed computability has been studied using combinatorial topology, where the set of all possible executions in a distributed system is represented by a simplicial complex. We establish a formal, categorical equivalence between Kripke models and simplicial complex epistemic models.

In one direction, the connection provides a dynamic epistemic logic semantics for distributed computability, opening the possibility of reasoning about knowledge change in distributed computing. In the other direction, the connection allows us to bring the topological invariants known in distributed computing into dynamic epistemic logic, and in particular to show that the knowledge gained after applying an epistemic action model is intimately related to higher-dimensional topological properties.
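A minimal illustration of the correspondence (a toy two-agent model, not the paper's construction in full generality): each world of a Kripke model becomes a facet of a chromatic simplicial complex whose vertices are colored local states, and indistinguishability for an agent corresponds to the two facets sharing that agent's vertex.

```python
from itertools import product

# Kripke model: worlds are pairs of local bits; agent i cannot distinguish
# two worlds that agree on its own bit.
agents = [0, 1]
worlds = list(product([0, 1], repeat=2))

def indist(w, v, agent):
    return w[agent] == v[agent]

# Simplicial model: each world becomes a facet whose vertices are the
# colored local states (agent, bit).
def facet(w):
    return frozenset((a, w[a]) for a in agents)

# Correspondence: w ~_a v iff their facets share the a-colored vertex.
for w, v, a in product(worlds, worlds, agents):
    shares = any(x in facet(v) for x in facet(w) if x[0] == a)
    assert indist(w, v, a) == shares
```

In the general construction this becomes a categorical equivalence, and topological properties of the complex (connectivity, higher-dimensional structure) constrain what the agents can come to know.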

Monday May 29, 2017, 2PM, Salle 3052

**Silvia Crafa** (Università di Padova) *Formal methods in action: typestate-oriented actor programming in Scala-Akka*

Thursday May 18, 2017, 2:30PM, Salle 3052

**Giovanni Bernardi** (IRIF) *Full-abstraction for Must Testing Preorders*

Tuesday May 2, 2017, 11AM, Salle 3052

**Joachim Breitner** (University of Pennsylvania) *Who needs theorem provers when we have compilers?*

But one powerful tool capable of doing (some of) these proofs is hidden in plain sight: GHC, the Haskell compiler! Its optimization machinery, in particular the simplifier, can prove many simple equations all by itself, simply by compiling both sides and noting that the result is the same. Furthermore, and crucially to make this approach applicable to more complicated equations, the compiler can be instructed to do almost arbitrary symbolic term rewritings by using Rewrite Rules.

In this rather hands-on talk I will show a small GHC plugin that I can use to prove the monad laws for a non-trivial functor. I am looking forward to a discussion of the merits, limits and capabilities of this approach.

#### Year 2016

Thursday December 1, 2016, 10:30AM, Salle 1007

**Julien Lange** (Imperial College) *Building Graphical Choreographies From Communicating Machines: Principles and Applications*

Thursday June 23, 2016, 2:30PM, Salle 1007

**Elisabeth Remy** (Institut de Mathématiques de Luminy) *Qualitative analysis of genetic regulatory networks*

Thursday May 26, 2016, 2:30PM, Salle 1008

**Ralf Treinen** (IRIF) *Towards the verification of file tree transformations - the Colis project*

A possible example of a program specification is the absence of execution errors under certain initial conditions. Automatic program verification even for this kind of specification is a challenging task. In the case of Debian maintainer scripts we are faced with even more challenging properties, like idempotency of scripts (required by policy), or commutation of scripts.

The project is still in its beginning, so there are no results yet to present. However, I will explain why I think that the case of Debian maintainer scripts is very interesting for program verification: some aspects of scripts (POSIX shell, manipulation of a complex data structure) make the problem very difficult, while other aspects of the Debian case are likely to make the problem easier than the task of verifying an arbitrary shell script.
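A toy model of the idempotency property mentioned in the abstract (an illustrative sketch, not the Colis formalization): represent the file tree as a dictionary and a maintainer-script step as a function on it; idempotency is then the equation f(f(s)) = f(s).

```python
def install(tree):
    # toy maintainer-script step on a file tree (a dict of paths):
    # create a directory and a default config only if they are absent,
    # in the spirit of `mkdir -p`
    new = dict(tree)
    new.setdefault("/etc/pkg", "dir")
    new.setdefault("/etc/pkg/conf", "default contents")
    return new

s = {"/etc": "dir"}
assert install(install(s)) == install(s)   # idempotent, as policy requires
```

A non-idempotent step (say, appending to a log file on every run) would fail this check, which is exactly the kind of property the project aims to verify on real scripts.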

Thursday April 7, 2016, 2:30PM, Salle UFR

**Tobias Heindel** (University of Copenhagen) *Computing means and moments of occurrence counts: rule-based modeling meets adaptive uniformization and finite state projection*

The talk addresses the computation of means and moments of occurrence counts in rule-based models; computability is in the sense of Weihrauch. Roughly, established techniques, namely adaptive uniformization and finite state projection, can be reused, provided that occurrence counts (and their powers) are bounded by a polynomial in the “size” of the rule-based system and the “size” of the system does not explode. The most interesting results can be obtained for context-free systems, which can be thought of as branching processes equipped with structure; for these systems we have PTIME computability.
All results will be exemplified for the case of string rewriting, aiming for the widest possible audience of computer scientists, assuming only basic knowledge of continuous-time Markov chains with countable state space. Computing the time evolution of mean word counts in stochastic string rewriting is already highly non-trivial, even if it is the most basic example. The GEM OF THE TALK is the computation of mean word counts for context-free grammars!
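As a hedged illustration of the simplest case only (mean *letter* counts, not the harder word counts, for a made-up two-rule system): for context-free rules the mean counts obey a linear ODE, which is what makes them cheap to compute.

```python
import math

# Toy stochastic string rewriting system:
#   rule 1: a -> ab  at rate 1 per occurrence of a
#   rule 2: b -> ""  at rate 1 per occurrence of b
# The mean letter counts satisfy the linear ODE
#   d/dt E[#a] = 0,   d/dt E[#b] = E[#a] - E[#b].

def simulate_means(t_end, dt=1e-4, a0=1.0, b0=0.0):
    # forward Euler integration of the mean ODE
    a, b = a0, b0
    for _ in range(int(t_end / dt)):
        a, b = a, b + dt * (a - b)
    return a, b

# closed form from the ODE: E[#a](t) = 1, E[#b](t) = 1 - exp(-t)
a, b = simulate_means(2.0)
assert abs(a - 1.0) < 1e-9
assert abs(b - (1 - math.exp(-2.0))) < 1e-3
```

Mean counts of whole *words* (factors of the string) do not close up this simply, which is why their computation for context-free grammars is the highlight of the talk.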