Vérification
Lundi 16 décembre 2019, 11 heures, Salle 1007
Jules Villard (Facebook) The Infer Static Analysis platform

Infer is an open-source static analysis platform for Java, C, C++, and Objective-C. Infer is deployed at several companies where it helps developers write better code.

This talk will present how Infer is used at Facebook, where it analyses thousands of code changes every month, leading to thousands of bugs being found and fixed before they reach users. We will then see how to write your own inter-procedural static analysis in just a few lines of code inside Infer, and how to automatically apply it to millions of lines of code.
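
To give a sense of what such an analysis looks like, the sketch below runs a toy forward abstract interpretation (a nullability analysis) over a single procedure's control-flow graph; Infer's real checkers are written in OCaml against its own framework and are composed inter-procedurally via summaries. This Python sketch only illustrates the general shape, and every name in it (`transfer`, the instruction encoding, the CFG layout) is invented for the example.

```python
# Minimal sketch of an abstract-interpretation-based checker, in the spirit of
# what Infer's framework provides (this is NOT Infer's actual OCaml API; all
# names here are invented for illustration).
from itertools import chain

# Abstract values of a toy nullability domain, ordered BOT < NONNULL/NULL < TOP.
BOT, NONNULL, NULL, TOP = "bot", "nonnull", "null", "top"

def join(a, b):
    """Least upper bound of two abstract values."""
    if a == b:
        return a
    if a == BOT:
        return b
    if b == BOT:
        return a
    return TOP

def transfer(state, instr):
    """Abstract effect of one instruction on a variable -> value map."""
    op, var, arg = instr
    new = dict(state)
    if op == "assign_null":
        new[var] = NULL
    elif op == "assign_new":
        new[var] = NONNULL
    elif op == "copy":
        new[var] = state.get(arg, TOP)
    return new

def analyze(cfg, entry):
    """Standard worklist fixpoint: cfg maps node -> (instrs, successors)."""
    states = {n: {} for n in cfg}          # per-node input states (absent = BOT)
    worklist = [entry]
    while worklist:
        node = worklist.pop()
        instrs, succs = cfg[node]
        out = states[node]
        for instr in instrs:
            out = transfer(out, instr)
        for s in succs:
            merged = {v: join(states[s].get(v, BOT), out.get(v, BOT))
                      for v in chain(states[s], out)}
            if merged != states[s]:
                states[s] = merged
                worklist.append(s)
    return states

# A diamond-shaped CFG where 'x' is made null on one branch only: at the join
# point the analysis reports TOP ("possibly null"), which a null-dereference
# checker would flag.
cfg = {
    "entry": ([("assign_new", "x", None)], ["then", "else"]),
    "then":  ([("assign_null", "x", None)], ["exit"]),
    "else":  ([], ["exit"]),
    "exit":  ([], []),
}
print(analyze(cfg, "entry")["exit"])  # {'x': 'top'}
```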

Vérification
Mercredi 11 décembre 2019, 11 heures, Salle 1007
Yotam Feldman (Tel Aviv University) Complexity and Information in Invariant Inference

This work addresses the complexity of SAT-based invariant inference, a prominent approach to safety verification. We consider the problem of inferring an inductive invariant of polynomial length given a transition system and a safety property. We analyze the complexity of this problem in a black-box model, called the Hoare-query model, which is general enough to capture algorithms such as IC3/PDR and its variants. An algorithm in this model learns about the system's reachable states by querying the validity of Hoare triples.

We show that in general an algorithm in the Hoare-query model requires an exponential number of queries. Our lower bound is information-theoretic and applies even to computationally unrestricted algorithms, showing that no choice of generalization from the partial information obtained in a polynomial number of Hoare queries can lead to an efficient invariant inference procedure in this class.

We then show, for the first time, that by utilizing rich Hoare queries, as done in PDR, inference can be exponentially more efficient than approaches such as ICE learning, which only utilize inductiveness checks of candidates. We do so by constructing a class of transition systems for which a simple version of PDR with a single frame infers invariants in a polynomial number of queries, whereas every algorithm using only inductiveness checks and counterexamples requires an exponential number of queries.

Our results also shed light on connections and differences with the classical theory of exact concept learning with queries, and imply that learning from counterexamples to induction is harder than classical exact learning from labeled examples. This demonstrates that the convergence rate of Counterexample-Guided Inductive Synthesis depends on the form of counterexamples.

Joint work with Neil Immerman, Mooly Sagiv, and Sharon Shoham, to appear at POPL'20.

Vérification
Lundi 9 décembre 2019, 11 heures, Salle 1007
Sidi Mohamed Beillahi (IRIF) Checking Robustness Against Snapshot Isolation

Transactional access to databases is an important abstraction allowing programmers to consider blocks of actions (transactions) as executing in isolation. The strongest consistency model is serializability, which ensures the atomicity abstraction of transactions executing over a sequentially consistent memory. Since ensuring serializability carries a significant penalty on availability, modern databases provide weaker consistency models, one of the most prominent being snapshot isolation. In general, the correctness of a program relying on serializable transactions may be broken when using weaker models. However, certain programs may also be insensitive to consistency relaxations, i.e., all their properties holding under serializability are preserved even when they are executed over a weakly consistent database and without additional synchronization.

In this talk, we address the issue of verifying whether a given program is robust against snapshot isolation, i.e., all its behaviors are serializable even if it is executed over a database ensuring snapshot isolation. We show that this verification problem is polynomial-time reducible to a state reachability problem in transactional programs over a sequentially consistent shared memory. This reduction opens the door to the reuse of classic verification technology for reasoning about sequentially consistent programs. In particular, we show that it can be used to derive a proof technique, based on Lipton's reduction theory, that allows one to prove programs robust.
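
As a concrete illustration (not taken from the talk), the classic "write skew" anomaly shows why robustness against snapshot isolation is non-trivial: two transactions each read both variables from a snapshot and write disjoint locations, so snapshot isolation lets both commit, producing an outcome no serial order can explain. The toy simulation below is a hypothetical sketch of that behaviour; the store, the snapshot mechanism, and the transaction encoding are all invented for the example.

```python
# Hypothetical sketch of the classic "write skew" anomaly under snapshot
# isolation (SI): each transaction reads from its own snapshot, and SI only
# aborts on write-write conflicts, so both transactions may commit.

store = {"x": 1, "y": 1}          # invariant intended by the programmer: x + y >= 1

def run_under_si(t1, t2):
    snap1, snap2 = dict(store), dict(store)   # both transactions take a snapshot first
    w1, w2 = t1(snap1), t2(snap2)             # each computes its write set from its snapshot
    if set(w1) & set(w2):                     # SI aborts only on write-write conflicts
        raise RuntimeError("write-write conflict")
    store.update(w1)
    store.update(w2)

# T1: if x + y > 1, decrement x.   T2: if x + y > 1, decrement y.
t1 = lambda s: {"x": s["x"] - 1} if s["x"] + s["y"] > 1 else {}
t2 = lambda s: {"y": s["y"] - 1} if s["x"] + s["y"] > 1 else {}

run_under_si(t1, t2)
print(store)   # {'x': 0, 'y': 0}: violates x + y >= 1, impossible in any serial execution
```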

This is a joint work with Ahmed Bouajjani and Constantin Enea.

Vérification
Vendredi 29 novembre 2019, 14 heures 30, Salle 3052
Dmitry Chistikov (University of Warwick) On the complexity of linear arithmetic theories over the integers

Given a system of linear Diophantine equations, how difficult is it to determine whether it has a solution? What changes if equations are replaced with inequalities? If some of the variables are quantified universally? These and similar questions relate to the computational complexity of deciding the truth value of statements in various logics. This includes in particular Presburger arithmetic, the first-order logic over the integers with addition and order.
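
For concreteness, here are two statements expressible in Presburger arithmetic (standard textbook examples, not taken from the talk); note that only addition, order, and quantification over the integers are used:

$$\exists y.\; x = y + y \quad \text{(“$x$ is even”)}, \qquad\qquad \forall x.\,\exists y.\; x = y + y \;\lor\; x = y + y + 1 \quad \text{(“every integer is even or odd”)}.$$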

In this talk, I will survey constructions and ideas that underlie known answers to these questions, from classical results to recent developments, and open problems.

First, we will recall the geometry of integer linear programming and how it interacts with quantifiers. This will take us from classical results due to von zur Gathen and Sieveking (1978), Papadimitriou (1981), and others to the geometry of the set of models of quantified logical formulas. We will look at rational convex polyhedra and their discrete analogue, hybrid linear sets (joint work with Haase (2017)), and see, in particular, how the latter form a proper sub-family of ultimately periodic sets of integer points in several dimensions (the semi-linear sets, introduced by Parikh (1961)).

Second, we will discuss “sources of hardness”: which aspects of the expressive power make decision problems for logics over the integers hard. Addition and multiplication combined enable simulation of arbitrary Turing machines, and restriction of multiplication to bounded integers corresponds to resource-bounded Turing machines. How big can these bounded integers be in Presburger arithmetic? This leads to the problem of representing big numbers with small logical formulae, and we will see constructions by Fischer and Rabin (1974) and by Haase (2014). We will also look at the new “route” for expressing arithmetic progressions (in the presence of quantifier alternation) via continued fractions, recently discovered by Nguyen and Pak (2017).

Vérification
Vendredi 29 novembre 2019, 11 heures, Salle 3052
Suresh Jagannathan (Purdue University) Mergeable Replicated Data Types

Programming geo-replicated distributed systems is challenging given the complexity of reasoning about different evolving states on different replicas. Existing approaches to this problem impose a significant burden on application developers to consider the effect of how operations performed on one replica are witnessed and applied to others. To alleviate these challenges, we present a fundamentally different approach to programming in the presence of replicated state. Our insight is based on the use of invertible relational specifications of an inductively-defined data type as a mechanism to capture salient aspects of the data type relevant to how its different instances can be safely merged in a replicated environment. Importantly, because these specifications only address a data type's (static) structural properties, their formulation does not require exposing low-level system-level details concerning asynchrony, replication, visibility, etc. As a consequence, our framework enables the correct-by-construction synthesis of rich merge functions over arbitrarily complex (i.e., composable) data types. Furthermore, the specification language allows us to extract sufficient conditions to automatically derive merge functions with meaningful convergence properties. We have implemented our ideas in a tool called Quark and have been able to demonstrate the utility of our model on a number of real-world benchmarks.
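
To give a flavour of the merge functions involved (an illustrative sketch under my own assumptions, not the code synthesized by Quark), a replicated counter and a replicated set can be merged three-way against their lowest common ancestor (LCA) version as follows.

```python
# Illustrative three-way merges in the MRDT style: each replica's divergence
# from the lowest common ancestor (lca) version is computed and then
# recombined.  (Sketch only; not Quark's generated code.)

def merge_counter(lca: int, a: int, b: int) -> int:
    # Keep the common history once and add both replicas' increments/decrements.
    return lca + (a - lca) + (b - lca)

def merge_set(lca: set, a: set, b: set) -> set:
    # An element survives if both replicas kept it, or if some replica added it.
    return (lca & a & b) | (a - lca) | (b - lca)

print(merge_counter(5, 7, 4))            # 6: +2 on one replica, -1 on the other
print(merge_set({1, 2}, {1, 2, 3}, {2})) # {2, 3}: one replica added 3, the other removed 1
```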

This is joint work with Gowtham Kaki, Swarn Priya, and KC Sivaramakrishnan.

Vérification
Lundi 18 novembre 2019, 11 heures, Salle 1007
Arnaud Sangnier (IRIF) The Complexity of Flat Freeze LTL

We consider the model-checking problem for freeze LTL on one-counter automata (OCAs). Freeze LTL extends LTL with the freeze quantifier, which allows one to store different counter values of a run in registers so that they can be compared with one another. As the model-checking problem is undecidable in general, we focus on the flat fragment of freeze LTL, in which the usage of the freeze quantifier is restricted. Recently, Lechner et al. showed that model checking for flat freeze LTL on OCAs with binary encoding of counter updates is decidable and in 2NEXPTIME. In this paper, we prove that the problem is, in fact, NEXPTIME-complete no matter whether counter updates are encoded in unary or binary. Like Lechner et al., we rely on a reduction to the reachability problem in OCAs with parameterized tests (OCAPs). The new aspect is that we simulate OCAPs by alternating two-way automata over words. This implies an exponential upper bound on the parameter values that we exploit towards an NP algorithm for reachability in OCAPs with unary updates. We obtain our main result as a corollary.

Vérification
Vendredi 15 novembre 2019, 14 heures 30, Salle 3052
Patrick Totzke (Liverpool University) Timed Basic Parallel Processes

I will talk about two fun constructions for reachability analysis of one-clock timed automata, which lead to concise logical characterizations in existential Linear Arithmetic.

The first one describes “punctual” reachability relations: reachability in exact time t. It uses a coarse interval abstraction and counts clock resets via Parikh automata. The other is a “sweep line” construction to compute the optimal time to reach a target in reachability games played on one-clock TA.

Together, these can be used to derive a (tight) NP upper bound for the coverability and reachability problems in an interesting subclass of Timed Petri Nets, which naturally lends itself to parametrised safety checking of concurrent real-time systems. This contrasts with the known super-Ackermannian completeness and undecidability results for unrestricted Timed Petri nets.

This is joint work with Lorenzo Clemente and Piotr Hofman, and was presented at CONCUR'19. Full details are available at https://arxiv.org/abs/1907.01240.

Vérification
Lundi 4 novembre 2019, 11 heures, Salle 1007
Zeinab Nehaï (IRIF) Deductive Proof of Industrial Smart Contracts Using Why3

Bugs and errors are a common problem that any software or computer program may encounter. They can result from a poorly written program, a typing error, or bad memory management. Errors become a significant issue, however, when the unsafe program is used in a critical system, and the use of formal methods for such systems is therefore strongly indicated. In this work, we apply a formal method that performs deductive verification on industrial smart contracts, which are self-executing digital programs. Because smart contracts manipulate cryptocurrency and transaction information, a bug in such programs can have serious consequences, such as a loss of money. The aim of this work is to show that a language dedicated to deductive verification, called Why3, is a suitable language for writing correct and proven contracts. We first encode existing contracts as Why3 programs; next, we formulate the specifications to be proved, namely the absence of runtime errors and functional properties, and verify the behaviour of the programs using the Why3 system. Finally, we compile the Why3 contracts to the Ethereum Virtual Machine (EVM). Our approach also estimates the cost of gas, a unit that measures the computational effort of a transaction.

Vérification
Lundi 28 octobre 2019, 11 heures, Salle 1007
Pierre Ganty (IMDEA Software Institute) Deciding language inclusion problems using quasiorders

We study the language inclusion problem L1 ⊆ L2 where L1 is regular or context-free. Our approach checks whether an overapproximation of L1 is included in L2. Such overapproximations are obtained using quasiorder relations on words where the abstraction gives the language of all words “greater than or equal to” a given input word for that quasiorder. We put forward a range of quasiorders that allow us to systematically design decision procedures for different language inclusion problems such as context-free languages into regular languages and regular languages into trace sets of one-counter nets.

Vérification
Lundi 21 octobre 2019, 11 heures, Salle 1007
Mohamed Faouzi Atig (Uppsala University) On Solving String Constraints

String data types are present in all conventional programming and scripting languages. In fact, it is almost impossible to reason about the correctness and security of such programs without having a decision procedure for string constraints. A typical string constraint consists of word equations over string variables denoting strings of arbitrary lengths, together with constraints on (i) the length of strings, and (ii) the regular languages to which strings belong. Decidability of this general logic is still open. In this talk, we will discuss recent results on the decidability and decision procedures for string constraints. We will focus on decision procedures that are sound and complete for a well-defined fragment of string constraints. We will also cover a technique that uses a Counter-Example Guided Abstraction Refinement (CEGAR) framework which contains both an under- and an over-approximation module. The flow of information between these modules allows increasing the precision in an automatic manner.
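
As an example of the shape of such constraints (illustrative only), a typical instance combines a word equation over string variables $x, y$ with a length constraint and a regular membership constraint:

$$x \cdot \mathtt{ab} = \mathtt{a} \cdot y \;\wedge\; |y| = |x| + 1 \;\wedge\; x \in \mathtt{a}^{*},$$

which is satisfied, for instance, by $x = \mathtt{a}$ and $y = \mathtt{ab}$.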

Vérification
Lundi 14 octobre 2019, 11 heures, Salle 1007
Laurent Doyen (LSV, ENS Paris-Saclay) Expected Finite Horizon

A classical problem in discrete planning is to consider a weighted graph and construct a path that maximizes the sum of weights for a given time horizon T. However, in many scenarios, the time horizon is not fixed, but the stopping time is chosen according to some distribution such that the expected stopping time is T. If the stopping time distribution is not known, then to ensure robustness, the distribution is chosen by an adversary, to represent the worst-case scenario.

A stationary plan always chooses the same outgoing edge at every vertex. For a fixed horizon T, stationary plans are not sufficient for optimality. Quite surprisingly, we show that when an adversary chooses the stopping time distribution with expected stopping time T, then stationary plans are sufficient. While computing optimal stationary plans for a fixed horizon is NP-complete, we show that computing optimal stationary plans under an adversarial stopping time distribution can be achieved in polynomial time. Consequently, our polynomial-time algorithm for adversarial stopping time also computes an optimal plan among all possible plans.
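
One way to write the adversarial objective described above (a formalization consistent with this abstract; the paper's notation may differ) is

$$\sup_{\sigma}\;\inf_{D \,:\, \mathbb{E}_{\tau \sim D}[\tau] = T}\;\mathbb{E}_{\tau \sim D}\!\left[\sum_{i=1}^{\tau} w\!\left(e_i^{\sigma}\right)\right],$$

where $\sigma$ ranges over plans, $D$ over stopping time distributions with expectation $T$, and $w(e_i^{\sigma})$ is the weight of the $i$-th edge taken under $\sigma$.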

Vérification
Lundi 23 septembre 2019, 11 heures, Salle 1007
Stefan Kiefer (University of Oxford) Selective monitoring

We study selective monitors for labelled Markov chains. Monitors observe the outputs that are generated by a Markov chain during its run, with the goal of identifying runs as correct or faulty. A monitor is selective if it skips observations in order to reduce monitoring overhead. We are interested in monitors that minimize the expected number of observations. We establish an undecidability result for selectively monitoring general Markov chains. On the other hand, we show for non-hidden Markov chains (where any output identifies the state the Markov chain is in) that simple optimal monitors exist and can be computed efficiently, based on DFA language equivalence. These monitors do not depend on the precise transition probabilities in the Markov chain. We report on experiments where we compute these monitors for several open-source Java projects.

Vérification
Vendredi 5 juillet 2019, 14 heures 30, Salle 1001
Mahsa Shirmohammadi (IRIF) Büchi Objectives in Countable MDPs

We study countably infinite Markov decision processes with Büchi objectives, which ask to visit a given subset of states infinitely often. A question left open by T.P. Hill in 1979 is whether there always exist ε-optimal Markov strategies, i.e., strategies that base decisions only on the current state and the number of steps taken so far. We provide a negative answer to this question by constructing a non-trivial counterexample. On the other hand, we show that Markov strategies with only 1 bit of extra memory are sufficient. This work is in collaboration with Stefan Kiefer, Richard Mayr and Patrick Totzke, and will be presented at ICALP 2019. A full version is available at https://arxiv.org/abs/1904.11573.

Joint Automata & Verification Seminars

Vérification
Lundi 1 juillet 2019, 14 heures, Salle 3052
Nikhil Balaji (University of Oxford) The complexity of some easy subclasses of the Skolem problem

Given a linear recurrence sequence (LRS), is there an algorithm that decides if the infinite sequence of numbers generated by the LRS contains 0? This is the notorious Skolem problem. The Skolem-Mahler-Lech theorem says that the set of zeros of an LRS is semi-linear; however, this doesn't translate to an algorithm, due to lack of effective bounds on the description of the zero set.
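
Concretely, an LRS of order $k$ is a sequence of integers satisfying a recurrence $u_{n+k} = a_1 u_{n+k-1} + \cdots + a_k u_n$ for fixed integer coefficients and initial values, and the Skolem problem asks whether $u_n = 0$ for some $n$. A small worked instance (mine, not from the talk):

$$u_0 = 2,\; u_1 = -1,\qquad u_{n+2} = u_{n+1} + u_n \;\;\Longrightarrow\;\; u_2 = 1,\; u_3 = 0,$$

so for this instance the answer is “yes”, witnessed at $n = 3$.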

This algorithmic question has remained open despite several decades of active research, with decidability known only for a few restricted subclasses, obtained either by restricting the order of the LRS (up to 4) or by restricting the structure of the LRS (especially the roots of its characteristic polynomial). Recent work due to Ouaknine and Worrell has shown that decidability of closely related algorithmic questions would entail significant breakthroughs in Diophantine approximation.

In this talk, we first present an elementary proof that this problem is NP-hard (which was already observed by Blondel and Portier). We further study some easy subclasses which are known to be decidable from the point of view of proving better complexity upper bounds. We are able to show that for a wide class of special cases of LRS, the zero set has efficiently verifiable certificates.

Based on joint ongoing work with S. Akshay, Aniket Muhrekar, Rohith Varma and Nikhil Vyas.

Joint Vérification/Automates seminar

Vérification
Lundi 17 juin 2019, 11 heures, Salle 3052
Pierre-Malo Deniélou (Google) From MapReduce to Apache Beam: A Journey in Abstraction

Processing large amounts of data used to be an affair of specialists: specialized hardware, specialized software, specialized programming model, specialized engineers. MapReduce was the first widely adopted high-level API for large-scale data processing. It helped democratize big data processing by providing a clear abstraction that was supported by several efficient systems.

In this talk, I will present how the programming APIs (and underlying systems) for large-scale data processing evolved in the past 20 years, both within Google and in the open source world. I will start from MapReduce and Millwheel and finish with Apache Beam and where we're headed next.
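
For illustration, here is the canonical word-count pipeline in the Apache Beam Python SDK (a minimal sketch: it assumes the `apache_beam` package is installed and uses the default local direct runner); it mirrors the classic MapReduce map–shuffle–reduce structure.

```python
# Minimal word-count sketch with the Apache Beam Python SDK.  The same pipeline
# can be submitted unchanged to other backends (Dataflow, Flink, Spark, ...).
import apache_beam as beam

with beam.Pipeline() as p:   # default: local direct runner
    (
        p
        | "Create" >> beam.Create(["to be or not to be"])
        | "Split"  >> beam.FlatMap(lambda line: line.split())
        | "Pair"   >> beam.Map(lambda word: (word, 1))
        | "Count"  >> beam.CombinePerKey(sum)
        | "Print"  >> beam.Map(print)   # e.g. ('to', 2), ('be', 2), ('or', 1), ...
    )
```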

Joint session of the Systèmes Complexes, Vérification and PPS seminars

Vérification
Mardi 11 juin 2019, 15 heures, Salle 3052
Burcu Külahçıoğlu Özkan (Max Planck Institute for Software Systems (MPI-SWS)) Randomized Testing of Distributed Systems with Probabilistic Guarantees

Several recently proposed randomized testing tools for concurrent and distributed systems come with theoretical guarantees on their success. The key to these guarantees is a notion of bug depth—the minimum length of a sequence of events sufficient to expose the bug—and a characterization of d-hitting families of schedules—a set of schedules guaranteed to cover every bug of given depth d. Previous results show that in certain cases the size of a d-hitting family can be significantly smaller than the total number of possible schedules. However, these results either assume shared-memory multithreading, or that the underlying partial ordering of events is known statically and has special structure. These assumptions are not met by distributed message-passing applications.

In this talk, we present a randomized scheduling algorithm for testing distributed systems. In contrast to previous approaches, our algorithm works for arbitrary partially ordered sets of events revealed online as the program is being executed. We show that for partial orders of width at most w and size at most n (both statically unknown), our algorithm is guaranteed to sample from at most w^2 · n^(d−1) schedules, for every fixed bug depth d. Thus, our algorithm discovers a bug of depth d with probability at least 1 / (w^2 · n^(d−1)). As a special case, our algorithm recovers a previous randomized testing algorithm for multithreaded programs. Our algorithm is simple to implement, but the correctness arguments depend on difficult combinatorial results about online dimension and online chain partitioning of partially ordered sets. In the last part of the talk, we briefly discuss how we can exploit state-space reduction strategies from model checking to provide better guarantees for the probability of hitting bugs. This will move us towards a randomized algorithm which brings together two orthogonal reductions of the search space: depth-bounded testing and dependency-aware testing.

(Joint work with Rupak Majumdar, Filip Niksic, Simin Oraee, Mitra Tabaei Befrouei, Georg Weissenbacher)

Vérification
Lundi 20 mai 2019, 11 heures, Salle 1007
Raphaël Cauderlier (Nomadic Labs) Functional Verification of Tezos Smart Contracts in Coq

Smart contract blockchains such as Ethereum and Tezos let their users program their crypto-money accounts. Bugs in these smart contracts can lead to huge losses. Yet smart contracts are necessarily small programs, because checking the blockchain requires evaluating all the smart contract calls in the history of the chain. For these reasons, smart contract languages are a good fit for formal methods.

The Tezos blockchain and its smart contract language Michelson have been designed with verification in mind. In this talk, I will present Mi-Cho-Coq, a formal specification of Michelson in the Coq proof assistant. I will also demonstrate how to use it for functional verification of a typical multisig smart contract.

Vérification
Vendredi 10 mai 2019, 11 heures, Salle 3052
Mahsa Shirmohammadi (LIS - CNRS & Univ. Aix-Marseille) A Lesson on Timed Automata Reachability

Advertisement:

The long-standing, natural, and useful problem of characterizing the reachability relation for timed automata has been solved many times: by Hubert Comon and Yan Jurski, by Catalin Dima, by Pavol Krcal, etc. Still, all these characterizations are difficult to understand and to use in practice. Recently, Mahsa Shirmohammadi and Ben Worrell came up with a new solution, and moreover they say that it is very easy! Mahsa will present it in the form of a black-(or white-)board lesson on May 10th, from 11:00 to 13:00.

Abstract:

In our preprint (submitted to IPL) [*], we give a simple and short proof of effective definability of the reachability relation in timed automata. We use a small trick that reduces this problem to that of computing the set of configurations reachable from a fixed initial configuration in timed automata. At its heart, the proof only uses Parikh's theorem and 1-bounded region automata, so no prior knowledge of timed automata is needed to follow it.

[*] https://arxiv.org/abs/1903.09773

This long seminar/lecture will run from 11:00 to 13:00.

Vérification
Lundi 6 mai 2019, 11 heures, Salle 1007
Matthieu Perrin (Université de Nantes) Shared Objects in Distributed Systems: A Broadcast Perspective

It is well-known that consensus (one-set agreement) and total order broadcast are equivalent in asynchronous systems prone to process crash failures. Considering wait-free systems, this talk addresses and answers the following question: which communication abstractions “capture”, on the one hand, read/write registers, and, on the other hand, k-set agreement?

To this end, it introduces a new broadcast communication abstraction, called k-SCD-Broadcast, which restricts the disagreement on the local deliveries of the messages that have been broadcast: 1-SCD-Broadcast boils down to total order broadcast and n-SCD-Broadcast is equivalent to the read/write register. Hence, in this context, k = 1 is not a special number, but only the first integer in an increasing integer sequence. This establishes a new “correspondence” between distributed agreement problems and communication abstractions, which enriches our understanding of the relations linking fundamental issues of fault-tolerant distributed computing.

Vérification
Mardi 16 avril 2019, 11 heures, Salle 3052
Si Liu (University of Illinois at Urbana-Champaign) Design, Verification and Automatic Implementation of Correct-by-Construction Distributed Transaction Systems in Maude

Designing, verifying, and implementing highly reliable distributed systems is at present a hard and very labor-intensive task. Cloud-based systems have further increased this complexity due to the desired consistency, availability, scalability, and disaster tolerance. This talk addresses this challenge in the context of distributed transaction systems (DTSs) from two complementary perspectives: (i) designing DTSs in Maude with high assurance such that they satisfy desired correctness and performance requirements; and (ii) transforming verified Maude designs of DTSs into correct-by-construction distributed implementations. Regarding (i), I will talk about our Maude-based framework for automatic analysis of consistency and performance properties of DTSs. Regarding (ii), I will present a correct-by-construction automatic transformation mapping a verified formal specification of an actor-based distributed system design in Maude to a distributed implementation enjoying the same safety and liveness properties as the original formal design.

Vérification
Lundi 15 avril 2019, 11 heures, Salle 1007
John Wickerson (Imperial College London) Using Alloy to explore Weak Memory and Transactional Memory

When reasoning about concurrent programs, it is tempting to imagine that each thread executes its instructions in order, and that writes to shared memory instantly become visible to all other threads. Alas, this ideal (which is called “sequential consistency”) is rarely met. In reality, most processors and compilers allow instructions sometimes to be reordered, and loads from shared memory sometimes to retrieve stale data. This phenomenon is called “weak memory”.
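
The standard illustration is the "store buffering" litmus test: each thread writes one variable and then reads the other. Under sequential consistency at least one read must return 1, yet on x86 or Arm the corresponding machine code may leave both reads returning the initial value 0. The Python rendering below only shows the shape of the test; CPython's interleaving semantics will not actually exhibit the weak outcome.

```python
# Shape of the store-buffering (SB) litmus test.  On weak hardware the
# corresponding machine code may yield r1 == r2 == 0 (the loads overtake the
# buffered stores); under sequential consistency, and in this Python rendering,
# that outcome is forbidden.
import threading

x = y = 0
r1 = r2 = None

def thread0():
    global x, r1
    x = 1          # store x
    r1 = y         # load y

def thread1():
    global y, r2
    y = 1          # store y
    r2 = x         # load x

t0, t1 = threading.Thread(target=thread0), threading.Thread(target=thread1)
t0.start(); t1.start(); t0.join(); t1.join()
print(r1, r2)      # under SC, (0, 0) is impossible; weak memory models allow it
```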

This talk is about how we can formalise models of weak memory using a tool called Alloy. Using these formalisations, we can then synthesise programs that can distinguish different models of weak memory. We can also check whether a compiler mapping from one weak memory model to another is correct.

The main case study in this talk describes how we used Alloy to find problems in a prototype implementation of transactional memory being developed by ARM. I will also talk about how similar techniques can be used to check the correctness of software-to-hardware compilers (or “high-level synthesis” tools).

This talk is based on joint work with Mark Batty, Nathan Chong, George Constantinides, and Tyler Sorensen. For further reading, please see papers that appeared in [POPL'17](https://johnwickerson.github.io/papers/memalloy.pdf) and [PLDI'18](https://johnwickerson.github.io/papers/transactions.pdf).

Vérification
Jeudi 11 avril 2019, 14 heures, Salle 3052
Ismail Kuru (Drexel University) Safe Deferred Memory Reclamation with Types

Memory management in lock-free data structures remains a major challenge in concurrent programming. Design techniques including read-copy-update (RCU) and hazard pointers provide workable solutions, and are widely used to great effect. These techniques rely on the concept of a grace period: nodes that should be freed are not deallocated immediately, and all threads obey a protocol to ensure that the deallocating thread can detect when all possible readers have completed their use of the object. This provides an approach to safe deallocation, but only when these subtle protocols are implemented correctly. We present a static type system to ensure correct use of RCU memory management: that nodes removed from a data structure are always scheduled for subsequent deallocation, and that nodes are scheduled for deallocation at most once. As part of our soundness proof, we give an abstract semantics for RCU memory management primitives which captures the fundamental properties of RCU. Our type system allows us to give the first proofs of memory safety for RCU linked list and binary search tree implementations without requiring full verification.

Vérification
Vendredi 5 avril 2019, 14 heures 30, Salle 3052
Christoph Kirsch (Universität Salzburg) On the self in Selfie

Selfie is a self-contained 64-bit, 10-KLOC implementation of (1) a self-compiling compiler written in a tiny subset of C called C* targeting a tiny subset of 64-bit RISC-V called RISC-U, (2) a self-executing RISC-U emulator, (3) a self-hosting hypervisor that virtualizes the emulated RISC-U machine, and (4) a prototypical symbolic execution engine that executes RISC-U symbolically. Selfie can compile, execute, and virtualize itself any number of times in a single invocation of the system given adequate resources. There is also a simple linker, disassembler, debugger, and profiler. C* supports only two data types, uint64_t and uint64_t*, and RISC-U features just 14 instructions, in particular for unsigned arithmetic only, which significantly simplifies reasoning about correctness. Selfie has originally been developed just for educational purposes but has by now become a research platform as well. We discuss how selfie leverages the synergy of integrating compiler, target machine, and hypervisor in one self-referential package while orthogonalizing bootstrapping, virtual and heap memory management, emulated and virtualized concurrency, and even replay debugging and symbolic execution.

This is joint work with Alireza S. Abyaneh, Simon Bauer, Philipp Mayer, Christian Mösl, Clément Poncelet, Sara Seidl, Ana Sokolova, and Manuel Widmoser, University of Salzburg, Austria.

Web:

http://selfie.cs.uni-salzburg.at

Vérification
Lundi 18 mars 2019, 11 heures, Salle 1007
Glen Mével (INRIA Paris) Time Credits and Time Receipts in Iris

I present a machine-checked extension of the program logic Iris with time credits and time receipts, two dual means of reasoning about time. Whereas time credits are used to establish an upper bound on a program’s execution time, time receipts can be used to establish a lower bound. More strikingly, time receipts can be used to prove that certain undesirable events—such as integer overflows—cannot occur until a very long time has elapsed. I will present a simple machine-checked application of time receipts, and sketch how we define our extension of Iris (and prove its soundness) in a modular way, as a layer above Iris.

Vérification
Vendredi 15 mars 2019, 11 heures, Salle 3052
Sreeja Nair (LIP 6 - Sorbonne Université) Invariant safety for distributed applications

A distributed application replicates data closer to its users to ensure shorter response times and availability when the network partitions. Replicas should accept updates even during network partitions, and they exchange their states in order to converge. The application developer must also make sure that the application invariants hold even when the replicas diverge.

In this talk, we present a proof methodology for verifying the safety of invariants of highly available distributed applications that replicate state. The proof is modular and sequential: one can reason about each individual operation separately, and one can reason about a distributed application as if it were sequential. We automate the methodology and illustrate the use of the tool with a representative example.

Vérification
Lundi 11 mars 2019, 11 heures, Salle 1007
Richard Bonichon (CEA List) Get rid of inline assembly through trustable verification-oriented lifting

Formal methods for software development have made great strides in the last two decades, to the point that their application in safety-critical embedded software is an undeniable success. Their extension to non-critical software is one of the notable forthcoming challenges. For example, C programmers regularly use inline assembly for low-level optimizations and system primitives, which usually renders state-of-the-art formal analyzers developed for C ineffective. We thus propose TInA, an automated, generic, trustworthy and verification-oriented lifting technique turning inline assembly into semantically equivalent C code, in order to take advantage of existing C analyzers. Extensive experiments on real-world C code with inline assembly (including GMP and ffmpeg) show the feasibility and benefits of TInA.

Vérification
Mardi 5 mars 2019, 11 heures, 3052
Azadeh Farzan (University of Toronto) Automated Hypersafety Verification

In this talk, I will introduce an automated verification technique for hypersafety properties, which express sets of valid interrelations between multiple finite runs of a program. They can alternatively be viewed as standard safety properties of a product of several copies of the program with itself. The key observation is that a small representative set of the product program runs, called a reduction, is sufficient to construct a formal proof. I will present an algorithm based on a counterexample-guided refinement loop that simultaneously searches for a reduction and a proof of correctness for the reduction. The algorithm is implemented in a tool, and there is evidence that it is very effective in verifying a diverse array of hypersafety properties for a diverse class of input programs.
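
A standard example of a hypersafety property (illustrative, not taken from the talk) is non-interference, which relates pairs of runs of the same program:

$$\forall \rho_1, \rho_2.\;\; \mathit{low}(\mathit{in}(\rho_1)) = \mathit{low}(\mathit{in}(\rho_2)) \;\Longrightarrow\; \mathit{low}(\mathit{out}(\rho_1)) = \mathit{low}(\mathit{out}(\rho_2)),$$

i.e., two runs that agree on their public (low) inputs must agree on their public outputs; equivalently, a plain safety property of the two-fold self-product of the program.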

Vérification
Lundi 4 mars 2019, 11 heures, Salle 1007
Marie Van Den Boogard (Université Libre de Bruxelles) Beyond admissibility: Dominance between chains of strategies

In this talk, we focus on the concept of rational behaviour in multi-player games on finite graphs, taking the point of view of a player that has access to the structure of the game but cannot make assumptions on the preferences of the other players. In the qualitative setting, admissible strategies have been shown to fit the rationality requirements, as they coincide with winning strategies when these exist, and enjoy the fundamental property that every strategy is either admissible or dominated by an admissible strategy. However, as soon as there are three or more payoffs, one finds that this fundamental property does not necessarily hold anymore: one may observe chains of strategies that are ordered by dominance and such that no admissible strategy dominates any of them. Thus, to recover a satisfactory rationality notion (still based on dominance), we depart from the single-strategy analysis approach and consider instead chains of strategies as families of behaviours. We establish a sufficient criterion for games to enjoy a similar fundamental property, i.e., all chains are below some maximal chain, and, as an illustration, we present a class of games where admissibility fails to capture some intuitively rational behaviours, while our chain-based analysis does. Based on joint work with N. Basset, I. Jecker, A. Pauly and J.-F. Raskin, presented at CSL'18.
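
For reference, the textbook definitions underlying the talk: a strategy $\sigma$ of player $i$ is (weakly) dominated by $\sigma'$ if $\sigma'$ does at least as well against every behaviour of the other players and strictly better against at least one, and $\sigma$ is admissible if no strategy dominates it:

$$\sigma \prec \sigma' \;\iff\; \big(\forall \tau_{-i}.\; u_i(\sigma', \tau_{-i}) \ge u_i(\sigma, \tau_{-i})\big) \;\wedge\; \big(\exists \tau_{-i}.\; u_i(\sigma', \tau_{-i}) > u_i(\sigma, \tau_{-i})\big).$$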

Vérification
Lundi 25 février 2019, 11 heures, Salle 1007
Pierre Courtieu (CNAM) A proof framework for mobile robots protocols

Distributed algorithms are known to be very difficult to design and certify. If these algorithms need to be fault- or Byzantine-resilient, the difficulty is even greater. Examples of flawed algorithms and proofs are numerous. It is therefore a natural domain for mechanized proofs (i.e., with the help of a proof assistant): the additional work of mechanizing the proofs should be worth the effort.

We present a framework, named Pactole, for designing distributed protocols for mobile robots, stating properties about these protocols, and proving these properties. We can also prove *impossibility statements*, i.e., that there exists no protocol achieving some property.

Our technique is not directly comparable to model checking but can be seen as complementary to it: our proofs are not automated, but they allow one to conclude, for instance, that a protocol is correct *for any* starting configuration and number of robots that is not proven impossible.

Vérification
Lundi 25 février 2019, 11 heures, Salle 1007
Corto Mascle (ENS Paris-Saclay) Title not yet announced.

Vérification
Lundi 18 février 2019, 14 heures, Salle 3052
José Ignacio Requeno (Université Grenoble Alpes) Parameter Synthesis for Extended Signal Temporal Logic Specifications

Cyber-Physical Systems (CPS) are complex environments composed of embedded computers that monitor or control physical processes. Nowadays, they are ubiquitous in most aspects of our lives, ranging from power stations to communications, transport and finance. Guaranteeing correct behaviour and a good quality of service (QoS) for such critical systems is imperative. Signal temporal logic (STL) is a specification language suited to the continuous-time, real-valued properties that are characteristic of CPSs. Used in combination with formal models and model-checking techniques, it allows the verification and validation of runtime executions. In this talk, I will present recent improvements to the STL language, including a new method for inferring the validity domain of parametric STL specifications.
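
As an example of a parametric STL specification (illustrative; the parameters $p$ and $\tau$ are introduced here for the example), consider

$$\varphi(p, \tau) \;=\; \mathbf{G}_{[0,\,\tau]}\big(\,|x(t)| \le p\,\big),$$

stating that the signal $x$ stays within magnitude $p$ during the first $\tau$ time units; its validity domain is the set of parameter valuations $(p, \tau)$ for which a given trace satisfies $\varphi$, and parameter synthesis computes (an approximation of) this set.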

Vérification
Lundi 11 février 2019, 11 heures, Salle 1007
Stéphane Demri (LSV - CNRS & ENS Paris-Saclay) Modal Separation Logics and Friends

Separation logic is known as an assertion language to perform verification, by extending Hoare-Floyd logic in order to verify programs with mutable data structures. The separating conjunction allows us to evaluate formulae in subheaps, and this possibility to evaluate formulae in alternative models is a feature shared with many modal logics such as sabotage logics, logics of public announcements or relation-changing modal logics. In the first part of the talk, we present several versions of modal separation logics whose models are memory states from separation logic and whose logical connectives include modal operators as well as the separating conjunction. Some fragments are genuine modal logics whereas others capture standard separation logics, leading to an original language to speak about memory states. In the second part of the talk, we present the decidability status and the computational complexity of several modal separation logics (for instance, leading to NP-complete or Tower-complete satisfiability problems). In the third part, we investigate the complexity of fragments of quantified CTL under the tree semantics (or of similar modal logics with propositional quantification), some of these fragments being related to modal separation logics. Part of the presented work is done jointly with B. Bednarczyk (U. of Wroclaw) or with R. Fervari (U. of Cordoba).
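
For reference, the separating conjunction mentioned above is interpreted by splitting the heap (standard separation-logic definition):

$$h \models \varphi_1 * \varphi_2 \;\iff\; \exists h_1, h_2.\;\; h = h_1 \uplus h_2 \;\wedge\; h_1 \models \varphi_1 \;\wedge\; h_2 \models \varphi_2,$$

where $\uplus$ denotes the union of heaps with disjoint domains.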

Vérification
Lundi 28 janvier 2019, 11 heures, Salle 1007
Mark Batty (University of Kent) A denotational semantics for weak-memory language concurrency with modular DRF-SC

Concurrent languages provide weak semantics: they admit non-sequentially-consistent (SC) behaviour to accommodate weak target architectures and compiler optimisation. Subtle behaviour and intricate specifications result, but two properties help to simplify things – no thin-air values, and DRF-SC. In the first, all values must be computed from constants or inputs to the program, rather than conjured up, for example, through abuse of speculative execution. In DRF-SC, programs without data races are guaranteed SC behaviour. Unfortunately, existing languages like C/C++ admit thin-air values and DRF-SC results only apply to an unrealistic class of programs, excluding systems code and code using concurrency libraries. We provide a modular semantics that avoids thin-air values and we prove a new modular variant of DRF-SC that lifts a key barrier to its usefulness.
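
The classic "out of thin air" example (standard in the literature, not specific to this talk) has the load-buffering shape shown below: each thread copies one variable into the other. A semantics without the no-thin-air guarantee may justify the outcome r1 = r2 = 42 by circular speculation, even though 42 appears nowhere in the program. The Python rendering is only meant to show the shape of the example.

```python
# Shape of the classic "thin air" (load buffering) example, with relaxed
# accesses in mind.  A no-thin-air semantics must rule out r1 == r2 == 42 here:
# 42 appears in no constant and no input, so it could only be conjured up by
# unconstrained speculation.
import threading

x = y = 0
r1 = r2 = None

def thread0():
    global y, r1
    r1 = x         # load x
    y = r1         # store what was read

def thread1():
    global x, r2
    r2 = y         # load y
    x = r2         # store what was read

t0, t1 = threading.Thread(target=thread0), threading.Thread(target=thread1)
t0.start(); t1.start(); t0.join(); t1.join()
print(r1, r2)      # (42, 42) would be a "thin air" outcome
```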