INRIA project-team $\pi r^2$

Thematic team Analyse et conception de systèmes

## Analyse et conception de systèmes

#### Day, time and place

Thursdays at 2:00pm, room 1007

#### Contact(s)

#### Upcoming sessions

Monday, January 7, 2019, 10:30am, Room 3052

**Louis Mandel** (IBM Watson) *Reactive Probabilistic Programming*

The idea of probabilistic programming is to use the expressiveness of programming languages to build probabilistic models. A common approach is to take a general-purpose language and extend it with (1) a construct to sample a value from a distribution, (2) a construct to condition the values of variables in a program on observations, and (3) an inference procedure that builds a distribution for a program using the two previous constructs.

Following this approach, we propose to extend a reactive programming language with probabilistic constructs. This new language enables the modeling of probabilistic reactive systems, that is, probabilistic models in constant interaction with their environment. Examples of such systems include time-series prediction, agent-based systems, and infrastructure self-tuning.

To demonstrate this approach, we start from ReactiveML, a reactive extension of OCaml that supports parallel composition of processes, communication through signals, preemption, and suspension. We extend the language with classic probabilistic constructs (sample and factor, à la WebPPL) and propose an inference scheme for reactive processes, that is, non-terminating functions.
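The sample/factor/inference triple described above can be sketched in plain Python. This is a hypothetical, non-reactive toy (ReactiveML itself is an OCaml extension, and the talk's inference scheme targets non-terminating processes): `sample` draws from a prior, `factor` accumulates a log-likelihood, and a naive importance-sampling loop plays the role of the inference procedure.

```python
import math
import random

def infer(model, data, particles=20000):
    """Importance sampling: run the model repeatedly, weight each run by
    the likelihood it accumulated through `factor`, then normalise."""
    weights = {}
    for _ in range(particles):
        trace = {"logw": 0.0}
        value = model(trace, data)
        weights[value] = weights.get(value, 0.0) + math.exp(trace["logw"])
    total = sum(weights.values())
    return {v: w / total for v, w in weights.items()}

def coin_model(trace, observations):
    # `sample`: draw the latent bias from a (discretised) uniform prior.
    bias = random.choice([0.1, 0.3, 0.5, 0.7, 0.9])
    # `factor`: condition on the observed coin flips.
    for obs in observations:
        trace["logw"] += math.log(bias if obs else 1.0 - bias)
    return bias

random.seed(0)
posterior = infer(coin_model, [True, True, True, False])
best = max(posterior, key=posterior.get)  # most probable bias given 3 heads, 1 tail
```

After three heads and one tail, the posterior concentrates on bias 0.7, as the likelihood `bias^3 * (1 - bias)` peaks there among the five candidate values.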

#### Séances précédentes

Wednesday, March 14, 2018, 2:00pm, Room UFR

**Thomas Letan** (Agence Nationale de la Sécurité des Systèmes d’Information & INRIA Rennes) *FreeSpec : Modular Verification of Systems using Effects and Effect Handlers in Coq*

Modern computing systems have grown in complexity, and the attack surface has increased accordingly.

Even though system components are generally carefully designed, and even verified, by different groups of people, the *composition* of these components often receives less attention.

This paves the way for “architectural attacks”, a class of security vulnerabilities in which the attacker can threaten the security of the system even though each of its components continues to behave as expected.

In this talk, we introduce FreeSpec, a Coq framework built upon the key idea that components can be modelled as programs with algebraic effects, to be realized by other components.

FreeSpec allows for the modular modelling of a complex system, by defining idealized components connected together, and the modular verification of the properties of their composition.

In doing so, we propose a novel approach for the Coq proof assistant to reason about programs with effects in a modular way.
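FreeSpec's free-monad encoding is Coq-specific, but the core idea, a program issuing effects that some other component realizes, can be sketched in Python with generators. All names here are illustrative, not FreeSpec's API: a program yields effect requests, and a handler (an idealized component) interprets them.

```python
# A program is a generator that yields effect requests and receives results.
def client():
    x = yield ("read", "reg")       # effect: ask another component for a value
    yield ("write", "reg", x + 1)   # effect: update it
    return x

def run(program, handler):
    """Interpret a program by delegating each yielded effect to a handler."""
    gen = program()
    try:
        request = next(gen)
        while True:
            request = gen.send(handler(request))
    except StopIteration as stop:
        return stop.value

def memory_component(store):
    # An idealized component realizing the read/write effects over a store.
    def handle(request):
        if request[0] == "read":
            return store.get(request[1], 0)
        if request[0] == "write":
            store[request[1]] = request[2]
            return None
    return handle

store = {"reg": 41}
result = run(client, memory_component(store))  # result == 41, store updated to 42
```

Modularity comes from the separation: `client` can be verified against an idealized handler, and `memory_component` can itself be a program with its own effects realized further down.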

Monday, February 19, 2018, 11:00am, Room 3052

**Adrien Boiret** (University of Warsaw) *The “Hilbert Method” for Solving Transducer Equivalence Problems*

Classical results from algebra, such as Hilbert's Basis Theorem and Hilbert's Nullstellensatz, have been used to decide equivalence for some classes of transducers, such as HDT0L (Honkala 2000) or, more recently, deterministic tree-to-string transducers (Seidl, Maneth, Kemper 2015). In this talk, we propose an abstraction of these methods, with the goal of investigating the scope of this “Hilbert method” for transducer equivalence.

Consider an algebra A in the sense of universal algebra, i.e. a set equipped with some operations. A grammar over A is like a context-free grammar, except that it generates a subset of the algebra A, and the rules use operations from the algebra A. The classical context-free grammars are the special case when the algebra A is the free monoid with concatenation. Using the “Hilbert method” one can decide certain properties of grammars over algebras that are fields or rings of polynomials over a field. The “Hilbert method” extends to grammars over certain well-behaved algebras that can be “coded” into fields or rings of polynomials. Finally, for these well-behaved algebras, one can also use the “Hilbert method” to decide the equivalence problem for transducers (of a certain transducer model that uses registers) that input trees and output elements of the well-behaved algebra.

In the talk, we give examples and non-examples of well-behaved algebras, and discuss the decidability / undecidability results connected to them. In particular:

- We show that equivalence is decidable for transducers that input (possibly ordered) trees and output unranked unordered trees.

- We show that equivalence is undecidable for transducers that input words and output polynomials over the rational numbers with one variable (but are allowed to use composition of polynomials).

Joint work with Mikołaj Bojańczyk, Janusz Schmude, Radosław Piórkowski at Warsaw University.
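As a toy illustration of the transducer model mentioned above (not the talk's formal definition), here is a register transducer whose single register holds a polynomial in one variable, updated by the algebra's operations as the input word is read; polynomial composition, which drives the undecidability result, is included as well.

```python
# Polynomials over the rationals in one variable, as coefficient lists
# [c0, c1, ...]; integer coefficients suffice for this example.
def p_add(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else 0) + (q[i] if i < len(q) else 0) for i in range(n)]

def p_mul(p, q):
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def p_compose(p, q):
    # p(q(x)), by Horner's rule
    out = [0]
    for c in reversed(p):
        out = p_add(p_mul(out, q), [c])
    return out

X = [0, 1]  # the polynomial x

def transducer(word):
    """A register transducer with one polynomial-valued register r,
    initially x: on 'a' it squares the register, on 'b' it adds x."""
    r = X
    for letter in word:
        r = p_mul(r, r) if letter == "a" else p_add(r, X)
    return r

# transducer("ab") computes x^2 + x, i.e. coefficients [0, 1, 1]
```

Equivalence of two such transducers asks whether they output the same polynomial on every input word; the “Hilbert method” attacks this via ideals of polynomial invariants rather than by enumeration.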

Wednesday, February 7, 2018, 2:00pm, Room 3052

**Hongseok Yang** (University of Oxford) *Informal introduction to integral probability metrics and generative adversarial networks*

This is an informal whiteboard/blackboard presentation on generative adversarial networks by a non-expert. I will explain integral probability metrics and their use in training deep generative models, widely known as GANs (generative adversarial networks). My plan is to focus on two well-known metrics on probability distributions, the Wasserstein distance and Maximum Mean Discrepancy. I will explain how their dual characterisations have been used in the adversarial training of deep generative models, and I will make superficial/speculative comparisons between these metrics. My talk is based on the following papers and slides:

References:

- Wasserstein GAN: https://arxiv.org/abs/1701.07875
- Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy: https://arxiv.org/abs/1611.04488
- Maximum Mean Discrepancy: http://alex.smola.org/teaching/iconip2006/iconip_3.pdf
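Of the two metrics, Maximum Mean Discrepancy is the easier to compute directly from samples. The sketch below (a minimal pure-Python version, with an invented 1-D Gaussian example) uses the standard biased estimator of squared MMD with an RBF kernel.

```python
import math
import random

def rbf(x, y, sigma=1.0):
    return math.exp(-((x - y) ** 2) / (2 * sigma ** 2))

def mmd2(xs, ys, kernel=rbf):
    """Biased estimator of squared MMD between two samples:
    E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]."""
    kxx = sum(kernel(a, b) for a in xs for b in xs) / len(xs) ** 2
    kyy = sum(kernel(a, b) for a in ys for b in ys) / len(ys) ** 2
    kxy = sum(kernel(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return kxx + kyy - 2 * kxy

random.seed(1)
base = [random.gauss(0, 1) for _ in range(200)]
near = mmd2(base, [random.gauss(0, 1) for _ in range(200)])  # same distribution
far = mmd2(base, [random.gauss(3, 1) for _ in range(200)])   # shifted distribution
# `far` is much larger than `near`: MMD separates the two distributions
```

In GAN training the second sample comes from the generator, and the generator's parameters are pushed to make this quantity small; the dual view replaces the fixed kernel with an adversarially optimised witness function.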

Tuesday, November 28, 2017, 1:30pm, Room 3052

**Leon Gondelman** (Institute for Computation and Information Sciences (Radboud Univ.), Netherlands) *The Spirit of Ghost Code*

In the context of deductive program verification, ghost code is the part of a program that is added for the purpose of specification. Ghost code must not interfere with regular code, in the sense that it can be erased without any observable difference in the program outcome. In particular, ghost data cannot participate in regular computations, and ghost code cannot mutate regular data or diverge. The idea has existed in folklore since the early notion of auxiliary variables and is implemented in many state-of-the-art program verification tools. However, a rigorous definition and treatment of ghost code is surprisingly subtle, and few formalizations exist.

In this talk, we describe a simple ML-style programming language with mutable state and ghost code. Non-interference is ensured by a type system with effects, which notably allows the same data types and functions to be used in both regular and ghost code. We define the procedure of ghost code erasure and prove its safety using a bisimulation. A similar type system, with numerous extensions that we briefly discuss, is implemented in the program verification environment Why3.
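The non-interference property can be illustrated with a small dynamic sketch (a Python toy, not the talk's typed language: there the discipline is enforced statically by the effect type system): ghost statements may read regular state and write only ghost state, so erasing them cannot change the regular outcome.

```python
def run(program, state, ghost, with_ghost=True):
    """Execute a program; ghost statements can be erased without changing
    the regular outcome, since they read but never write regular state."""
    for kind, stmt in program:
        if kind == "ghost":
            if with_ghost:
                stmt(dict(state), ghost)  # read-only snapshot of regular state
        else:
            stmt(state)                   # regular code never sees ghost data
    return state

program = [
    ("regular", lambda s: s.update(x=s["x"] + 1)),
    ("ghost",   lambda s, g: g.update(steps=g.get("steps", 0) + 1)),  # spec-only counter
    ("regular", lambda s: s.update(x=s["x"] * 2)),
]

kept = run(program, {"x": 1}, {})
erased = run(program, {"x": 1}, {}, with_ghost=False)
# both runs end with x == 4: erasure is observationally safe
```

The subtle cases the talk addresses, such as ghost code that diverges or shared functions used in both modes, are exactly what this naive runtime separation cannot catch.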

Wednesday, October 18, 2017, 2:30pm, Room 3052

**Nicolas Behr** (IRIF) *The stochastic mechanics of graph rewriting via the rule algebra framework - an introduction*

Reporting on joint work with V. Danos and I. Garnier, I will present a novel formulation of graph rewriting that makes it possible to phrase the concept in a fashion akin to statistical physics. The key ingredient of this approach is the rule algebra framework, in which rewriting rules are interpreted as a type of diagram endowed with a notion of sequential composition. The focus of this talk will be the stochastic mechanics applications: once an associative algebra of rewriting rules is available in the form of the rule algebras, the full theory of continuous-time Markov chains is at one's disposal. A number of explicit examples of computing in this formalism will be presented.
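The rule algebra itself is beyond a snippet, but the continuous-time Markov chain semantics it feeds into can be illustrated with a Gillespie simulation of a toy one-species rewriting system (the rules and rates below are invented for the example; they are not from the talk).

```python
import random

def gillespie(rates, state, t_max):
    """Simulate a CTMC over the count of one species X: a birth rule
    X -> 2X at rate rates['birth'] per copy, and a death rule X -> 0
    at rate rates['death'] per copy (Gillespie's direct method)."""
    t = 0.0
    while state > 0:
        total = state * (rates["birth"] + rates["death"])
        t += random.expovariate(total)          # time to the next event
        if t > t_max:
            break
        if random.random() < rates["birth"] / (rates["birth"] + rates["death"]):
            state += 1                          # a birth fired
        else:
            state -= 1                          # a death fired
    return state

random.seed(7)
# The mean of X(t) for birth rate b and death rate d is X(0) * exp((b - d) * t).
samples = [gillespie({"birth": 2.0, "death": 1.0}, 10, t_max=1.0) for _ in range(4000)]
mean = sum(samples) / len(samples)   # close to 10 * e ≈ 27.2
```

In the rule-algebra picture, such moment evolutions are obtained algebraically from commutators of rule operators rather than by simulation; the simulation here just exhibits the Markov chain the formalism reasons about.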

Thursday, June 8, 2017, 2:00pm, Room 3052

**Eric Goubault** (LIX, École Polytechnique) *A simplicial complex model of dynamic epistemic logic for fault-tolerant distributed computing*

The usual epistemic S5 model for multi-agent systems is a Kripke graph, whose edges are labeled with the agents that do not distinguish between two states. We propose to uncover the higher dimensional information implicit in the Kripke graph, by using as a model its dual, a chromatic simplicial complex. For each state of the Kripke model there is a facet in the complex, with one vertex per agent. If an edge (u,v) is labeled with a set of agents S, the facets corresponding to u and v intersect in a simplex consisting of one vertex for each agent of S. Then we use dynamic epistemic logic to study how the simplicial complex epistemic model changes after the agents communicate with each other. We show that there are topological invariants preserved from the initial epistemic complex to the epistemic complex after an action model is applied, that depend on how reliable the communication is. In turn these topological properties determine the knowledge that the agents may gain after the communication happens.

We choose distributed computing as a case study to work out in detail the dynamic epistemic simplicial complex theory. The reason is that distributed computability has been studied using combinatorial topology, where the set of all possible executions in a distributed system is represented by a simplicial complex. We establish a formal, categorical equivalence between Kripke models and simplicial complex epistemic models.

In one direction, the connection provides a dynamic epistemic logic semantics for distributed computability, opening the possibility of reasoning about knowledge change in distributed computing. In the other direction, it makes it possible to bring the topological invariants known from distributed computing into dynamic epistemic logic, and in particular to show that the knowledge gained after an epistemic action model is applied is intimately related to higher-dimensional topological properties.
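The dual construction described above is easy to make concrete. In this sketch (a hypothetical two-agent example), each agent's indistinguishability relation is given by an equivalence-class map; the facet of a state has one vertex per agent, and two facets share exactly the vertices of the agents that cannot distinguish the underlying states.

```python
def dual_complex(states, agents, indist):
    """Build the chromatic simplicial complex dual to a Kripke model.
    indist[a] maps each state to its equivalence class under agent a's
    indistinguishability; the facet of state s has one vertex per agent,
    each vertex being the pair (agent, class)."""
    return {s: frozenset((a, indist[a][s]) for a in agents) for s in states}

# Two agents, three states; agent "b" cannot distinguish s0 from s1.
states = ["s0", "s1", "s2"]
agents = ["a", "b"]
indist = {
    "a": {"s0": 0, "s1": 1, "s2": 2},
    "b": {"s0": 0, "s1": 0, "s2": 1},
}
facets = dual_complex(states, agents, indist)
shared = facets["s0"] & facets["s1"]
# shared == {("b", 0)}: the facets of s0 and s1 intersect exactly in
# agent b's vertex, mirroring the edge label {b} in the Kripke graph
```

Applying an action model then amounts to refining these equivalence classes, and the talk's invariants constrain how the resulting complex can differ topologically from the initial one.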

Monday, May 29, 2017, 2:00pm, Room 3052

**Silvia Crafa** (Università di Padova) *Formal methods in action: typestate-oriented actor programming in Scala-Akka*

Typestate-oriented programming is an extension of the OO paradigm in which objects are modelled not just in terms of interfaces but also in terms of their usage protocols, describing legal sequences of method calls, possibly depending on the object's internal state. We argue that the Actor Model allows typestate-OOP in an inherently distributed setting, whereby objects/actors can be accessed concurrently by several processes, and local entities cooperate to carry out a communication protocol. In this talk we examine a “full stack development”: we walk through a process calculus of actors equipped with a behavioural type system that allows one to declare, statically check, and infer state-based protocols, and we see it in action on Scala Akka programs, showing how far the standard Scala compiler supports protocol checking and discussing current work on a compiler extension.
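The notion of a usage protocol can be made concrete with a small state machine. This Python sketch enforces an invented open/read/close protocol dynamically; the point of typestate systems like the one in the talk is to reject the illegal call at compile time instead.

```python
class TypestateError(Exception):
    pass

class File:
    """A runtime sketch of a usage protocol: open -> read* -> close.
    Illegal method sequences raise instead of being caught statically."""
    TRANSITIONS = {
        ("closed", "open"): "opened",
        ("opened", "read"): "opened",
        ("opened", "close"): "closed",
    }

    def __init__(self):
        self.state = "closed"

    def _step(self, method):
        key = (self.state, method)
        if key not in self.TRANSITIONS:
            raise TypestateError(f"cannot call {method} in state {self.state}")
        self.state = self.TRANSITIONS[key]

    def open(self):
        self._step("open")

    def read(self):
        self._step("read")
        return "data"

    def close(self):
        self._step("close")

f = File()
f.open(); f.read(); f.close()   # a legal sequence
try:
    f.read()                    # illegal: reading a closed file
    ok = False
except TypestateError:
    ok = True
```

In the actor setting the same state machine describes which messages an actor may receive next, and the behavioural type system checks every sender against it.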

Thursday, May 18, 2017, 2:30pm, Room 3052

**Giovanni Bernardi** (IRIF) *Full-abstraction for Must Testing Preorders*

The client Must preorder relates tests (clients) instead of processes (servers). The existing characterisation of this preorder is unsatisfactory, for it relies on the notion of usable clients, which are in turn defined using an existential quantification over the servers that ensure client satisfaction. In this talk we characterise the set of usable clients for finite-branching LTSs, and give a sound and complete decision procedure for it. We also provide a novel coinductive characterisation of the client preorder, which we use to argue that the preorder is decidable, thus proving a claim made by Bernardi and Hennessy in 2013.
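The underlying must-testing notion can be sketched in a drastically simplified setting (finite acyclic LTSs, purely synchronous communication, no internal moves; the invented example below is not from the talk): a client must-pass a server when every maximal synchronised run reaches a client success state.

```python
def must_pass(server, client, s0, c0, success):
    """Check whether the client `must` pass the server: every maximal
    synchronised run reaches a success state of the client. LTSs are
    dicts mapping a state to a list of (action, next_state) pairs."""
    def runs(s, c):
        if c in success:
            return True
        steps = [(s2, c2)
                 for act, s2 in server.get(s, [])
                 for act2, c2 in client.get(c, []) if act == act2]
        if not steps:
            return False  # a maximal run ended without success
        return all(runs(s2, c2) for s2, c2 in steps)
    return runs(s0, c0)

# A server offering coffee or tea, and two clients.
server = {"s0": [("coffee", "s1"), ("tea", "s2")]}
picky = {"c0": [("coffee", "ok")]}                     # only ever takes coffee
fragile = {"c0": [("coffee", "ok"), ("tea", "stuck")]}  # tea leads to a dead state
a = must_pass(server, picky, "s0", "c0", {"ok"})    # True: all sync runs succeed
b = must_pass(server, fragile, "s0", "c0", {"ok"})  # False: the tea run fails
```

The talk's difficulty, usability, is invisible here: deciding whether *some* server satisfies a given client is the part that needs the new characterisation.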

Tuesday, May 2, 2017, 11:00am, Room 3052

**Joachim Breitner** (University of Pennsylvania) *Who needs theorem provers when we have compilers?*

After decades of work on functional programming and on interactive theorem proving, a Haskell programmer who wants to include simple equational proofs in their programs, e.g. that some monad laws hold, is still most likely to simply carry out the derivation in comments in the file, as all the advanced, powerful proving tools are inconvenient.

But one powerful tool capable of doing (some of) these proofs is hidden in plain sight: GHC, the Haskell compiler! Its optimization machinery, in particular the simplifier, can prove many simple equations all by itself, simply by compiling both sides and noting that the results are the same. Furthermore, and crucially for making this approach applicable to more complicated equations, the compiler can be instructed to perform almost arbitrary symbolic term rewriting by means of rewrite rules.

In this rather hands-on talk I will show a small GHC plugin that I can use to prove the monad laws for a non-trivial functor. I am looking forward to a discussion of the merits, limits and capabilities of this approach.
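The "compile both sides and compare" idea can be sketched in a few lines of Python (a toy normaliser with invented rules, standing in for GHC's simplifier and rewrite rules): an equation is proved by rewriting both sides to a normal form and checking syntactic equality.

```python
# Prove simple equations by normalising both sides with rewrite rules,
# mimicking how a compiler's simplifier can show two programs equal.
RULES = [
    # (pattern, replacement) over symbolic terms; "?x" matches anything.
    (("bind", ("return", "?x"), "?f"), ("apply", "?f", "?x")),  # monad left identity
    (("bind", "?m", "return"), "?m"),                           # monad right identity
]

def match(pattern, term, env):
    if isinstance(pattern, str) and pattern.startswith("?"):
        env[pattern] = term
        return True
    if isinstance(pattern, tuple) and isinstance(term, tuple) and len(pattern) == len(term):
        return all(match(p, t, env) for p, t in zip(pattern, term))
    return pattern == term

def substitute(term, env):
    if isinstance(term, str) and term in env:
        return env[term]
    if isinstance(term, tuple):
        return tuple(substitute(t, env) for t in term)
    return term

def normalise(term):
    if isinstance(term, tuple):
        term = tuple(normalise(t) for t in term)
    for pattern, replacement in RULES:
        env = {}
        if match(pattern, term, env):
            return normalise(substitute(replacement, env))
    return term

lhs = ("bind", ("return", "a"), "f")   # return a >>= f
rhs = ("apply", "f", "a")              # f a
# normalise(lhs) == normalise(rhs): the equation holds by simplification
```

GHC's advantage over this toy is that its "rules" include the whole optimiser, so many equations normalise without the programmer stating any rewrite at all.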

Thursday, December 1, 2016, 10:30am, Room 1007

**Julien Lange** (Imperial College) *Building Graphical Choreographies From Communicating Machines: Principles and Applications*

Graphical choreographies, or global graphs, are general multiparty session specifications featuring expressive constructs such as forking, merging, and joining for representing application-level protocols. Global graphs can be directly translated into modelling notations such as BPMN and UML. In the first part of the talk, I will present an algorithm whereby a global graph can be constructed from asynchronous interactions represented by communicating finite-state machines (CFSMs), together with a sound characterisation of a subset of safe CFSMs from which global graphs can be constructed. In the second part, I will outline a few recent applications of this work to communicating timed automata and the Go programming language.
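The starting point of the construction, exploring the joint behaviour of CFSMs, can be sketched as follows. This toy uses a synchronous simplification (real CFSMs communicate asynchronously through channels) and an invented request/acknowledge protocol.

```python
def synchronous_product(m1, m2, start):
    """Explore the joint behaviour of two communicating machines.
    Transitions are (('!', msg), next) for send and (('?', msg), next)
    for receive; a product step pairs a send in one machine with the
    matching receive in the other."""
    seen, frontier, steps = {start}, [start], []
    while frontier:
        s1, s2 = frontier.pop()
        for (d1, msg1), n1 in m1.get(s1, []):
            for (d2, msg2), n2 in m2.get(s2, []):
                if msg1 == msg2 and {d1, d2} == {"!", "?"}:
                    steps.append(((s1, s2), msg1, (n1, n2)))
                    if (n1, n2) not in seen:
                        seen.add((n1, n2))
                        frontier.append((n1, n2))
    return steps

# A client sends `req` then receives `ack`; the server does the opposite.
client = {"c0": [(("!", "req"), "c1")], "c1": [(("?", "ack"), "c2")]}
server = {"s0": [(("?", "req"), "s1")], "s1": [(("!", "ack"), "s2")]}
trace = synchronous_product(client, server, ("c0", "s0"))
# trace records req then ack, ending in the joint state ("c2", "s2")
```

The talk's algorithm goes further: it recovers branching, forking, and joining structure from such joint behaviours to produce a well-formed global graph.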

Thursday, June 23, 2016, 2:30pm, Room 1007

**Elisabeth Remy** (Institut de Mathématiques de Luminy) *Qualitative analysis of gene regulatory networks*

Boolean interaction models were introduced in the context of gene networks by S. Kauffman at the end of the 1960s. Here we consider a variant, logical models, which rest on a qualitative formalism with asynchronous updating of the associated discrete dynamical systems (R. Thomas, 1973). The expression of a gene is represented by a discrete variable, and the evolution of the network is controlled by a system of logical equations. From this system one can extract a regulatory graph: a signed directed graph whose nodes represent genes and whose arcs represent regulations between genes. Two kinds of regulation are distinguished: activations (positive interactions) and inhibitions (negative interactions).

These models generate dynamics of exponential size, and we face combinatorial explosion. We therefore seek to characterise properties of the dynamics (attractors, reachability, ...) without generating the state transition graph, which represents the set of possible trajectories. One method is to reduce the graph and study the reduced dynamics. The reduction operation can introduce substantial changes in the dynamics, so its (non-)preservation properties must be carefully characterised. We will also see how to identify the attractors and quantify their basins of attraction using adapted Monte Carlo methods. Finally, we will showcase these analysis methods through the study of a model of tumorigenesis in bladder cancer.
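The asynchronous update scheme is easy to make concrete: from each state, one gene at a time is flipped toward the value its logical rule prescribes. The two-gene network below is invented for illustration; its negative feedback loop produces a cyclic attractor and no stable state.

```python
from itertools import product

def async_graph(rules, n):
    """Asynchronous state transition graph of a Boolean network:
    from each state, flip one gene at a time toward its target value."""
    edges = {}
    for state in product((0, 1), repeat=n):
        edges[state] = []
        for i, rule in enumerate(rules):
            target = rule(state)
            if target != state[i]:
                nxt = list(state)
                nxt[i] = target
                edges[state].append(tuple(nxt))
    return edges

# Toy 2-gene network: gene 1 inhibits gene 0, gene 0 activates gene 1.
rules = [lambda s: 1 - s[1], lambda s: s[0]]
graph = async_graph(rules, 2)
# Stable states (attractors of size 1) have no outgoing edge; here the
# negative loop gives a 4-state cyclic attractor instead, so none exist.
stable = [s for s, succ in graph.items() if not succ]
```

The combinatorial explosion mentioned in the abstract is visible in the `product((0, 1), repeat=n)` line: the state space doubles with each gene, which is why the methods of the talk avoid building this graph explicitly.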

Thursday, May 26, 2016, 2:30pm, Room 1008

**Ralf Treinen** (IRIF) *Towards the verification of file tree transformations - the Colis project*

This talk describes a recently started ANR project named Colis (http://colis.irif.univ-paris-diderot.fr/), whose goal is to develop techniques and tools for the formal verification of shell scripts. More specifically, our goal is to verify the transformation of a file system tree described by so-called Debian maintainer scripts. These scripts, often written in the POSIX shell language, are part of the software packages of the Debian GNU/Linux distribution.

A possible example of a program specification is the absence of execution errors under certain initial conditions. Automatic program verification is a challenging task even for this kind of specification. In the case of Debian maintainer scripts, we face even more challenging properties, such as idempotency of scripts (required by policy) or commutation of scripts.

The project has only just started, so there are no results to present yet. However, I will explain why I think the case of Debian maintainer scripts is very interesting for program verification: some aspects of the scripts (POSIX shell, manipulation of a complex data structure) make the problem very difficult, while other aspects of the Debian setting are likely to make it easier than verifying arbitrary shell scripts.
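The idempotency property required by Debian policy has a crisp formulation: running a script twice must yield the same file tree as running it once. This sketch models a file tree as a dict and checks the property on two invented toy actions (real maintainer scripts are shell, verified symbolically, not by execution).

```python
def install(tree):
    """A toy maintainer-script action on a file tree (paths -> contents):
    create a config file if absent, always rebuild a cache entry."""
    new = dict(tree)
    new.setdefault("/etc/pkg.conf", "defaults")
    new["/var/cache/pkg"] = "rebuilt"
    return new

def log_append(tree):
    # Not idempotent: each run appends another line to the log.
    new = dict(tree)
    new["/var/log/pkg.log"] = new.get("/var/log/pkg.log", "") + "installed\n"
    return new

def idempotent(script, tree):
    once = script(tree)
    return script(once) == once

ok = idempotent(install, {"/etc/passwd": "root:x:0:0"})       # True
bad = idempotent(log_append, {"/etc/passwd": "root:x:0:0"})   # False
```

The project's challenge is to establish such properties for *all* initial trees, from the shell source itself, rather than testing one tree as above.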

Thursday, April 7, 2016, 2:30pm, Room UFR

**Tobias Heindel** (University of Copenhagen) *Computing means and moments of occurrence counts: rule-based modeling meets adaptive uniformization and finite state projection*

The talk presents recent results on how to solve the general problem of computing the time evolution of means and moments of “occurrence counts” in rule-based models; computability is in the sense of Weihrauch. Roughly, established techniques, namely adaptive uniformization and finite state projection, can be reused, provided that occurrence counts (and their powers) are bounded by a polynomial in the “size” of the rule-based system and the “size” of the system does not explode. The most interesting results can be obtained for context-free systems, which can be thought of as branching processes equipped with structure; for these systems we have PTIME computability.

All results will be exemplified in the case of string rewriting, aiming for the widest possible audience of computer scientists and assuming only basic knowledge of continuous-time Markov chains with countable state space. Computing the time evolution of mean word counts in stochastic string rewriting is already highly non-trivial, even though it is the most basic example. The gem of the talk is the computation of mean word counts for context-free grammars!
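For the very simplest case, a single letter rewritten by the context-free rule a → aa at some rate, the mean occurrence count satisfies a linear ODE that can be integrated directly. The example and its parameters are invented for illustration; the talk's techniques handle the general, harder cases where such closed forms are unavailable.

```python
import math

def mean_count(rate, x0, t, steps=100000):
    """Euler integration of the mean-count ODE for the rule a -> aa
    applied at `rate` per occurrence: d/dt E[#a] = rate * E[#a]."""
    x, dt = float(x0), t / steps
    for _ in range(steps):
        x += rate * x * dt
    return x

# For rate 1 the mean grows as x0 * exp(t): at t = 1, close to e ≈ 2.718
approx = mean_count(1.0, 1, 1.0)
```

Adaptive uniformization and finite state projection enter precisely when the moment equations do not close like this, so the distribution itself must be approximated on a finite, growing window of states.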