Vérification
Monday 11 December 2017, 11:00, Room 1007
Irène Guessarian (IRIF - Université Paris Diderot) Congruence Preservation, Lattices and Recognizability

Looking at some monoids and (semi)rings (natural numbers, integers and p-adic integers), and more generally, residually finite algebras (in a strong sense), we prove the equivalence of two ways for a function on such an algebra to behave like the operations of the algebra. The first way is to preserve congruences or stable preorders. The second way is to demand that the preimage of every recognizable set belongs to the lattice (or the Boolean algebra) generated by the preimages of recognizable sets under the “derived unary operations” of the algebra (such as translations, quotients, …).
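As a minimal illustration of the first condition (not taken from the talk), one can sample-test whether a function on the natural numbers preserves every congruence mod n; the function name and the sampling bounds below are ours:

```python
def preserves_congruences(f, moduli=range(2, 10), bound=200):
    """Sampled check that x ≡ y (mod n) implies f(x) ≡ f(y) (mod n)."""
    for n in moduli:
        for x in range(bound):
            # x and x + n are congruent mod n by construction
            if (f(x) - f(x + n)) % n != 0:
                return False
    return True

# Translations and polynomial functions preserve every congruence:
print(preserves_congruences(lambda x: x + 1))      # True
print(preserves_congruences(lambda x: x * x + 3))  # True
# Integer halving does not: 0 ≡ 2 (mod 2) but 0//2 = 0 ≢ 1 = 2//2 (mod 2)
print(preserves_congruences(lambda x: x // 2))     # False
```

The sampled check is of course only a falsifier; the talk's point is the exact algebraic characterization of such functions.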

Vérification
Thursday 7 December 2017, 16:00, Room 1007
Luke Ong (University of Oxford) Constrained Horn clauses for automatic program verification: the higher-order case

Following Bjørner, McMillan and Rybalchenko, we develop an approach to the automatic verification of higher-order functional programs based on constrained Horn clauses. We introduce constrained Horn clauses in higher-order logic, and decision problems concerning their satisfiability and the safety verification of higher-order programs. We show that, although satisfiable systems of higher-order clauses do not generally have least models, there is a notion of canonical model obtained via a reduction to a problem concerning a kind of monotone logic program. Following work in higher-order program verification, we develop a refinement type system in order to reason about and automate the search for models. This provides a sound but incomplete method for solving the decision problem. Finally, we show that an extension of the decision problem in which refinement types are used directly as guards on existential quantifiers can be reduced to the original problem. This result can be used to show that properties of higher-order functions that are definable using refinement types are also expressible using higher-order constrained Horn clauses. (Joint work with Toby Cathcart Burn and Steven Ramsay.)
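For contrast with the higher-order case discussed in the talk, first-order Horn clauses do have least models, computable as a simple fixpoint. A propositional sketch (our own toy encoding, not the talk's formalism):

```python
def least_model(clauses):
    """clauses: list of (body_atoms, head_atom) Horn clauses.
    Returns the least model, by iterating to a fixpoint."""
    model = set()
    changed = True
    while changed:
        changed = False
        for body, head in clauses:
            # fire a clause whose whole body already holds
            if head not in model and all(b in model for b in body):
                model.add(head)
                changed = True
    return model

# A tiny verification-flavoured instance: a fact and two invariant steps.
clauses = [([], 'init'), (['init'], 'inv0'), (['inv0'], 'inv1')]
print(sorted(least_model(clauses)))  # ['init', 'inv0', 'inv1']
```

The talk's observation is precisely that this least-model picture breaks at higher order, which is what motivates the canonical-model construction.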

Vérification
Tuesday 28 November 2017, 13:30, Room 3052
Léon Gondelman (Radboud University, The Netherlands) The Spirit of Ghost Code

In the context of deductive program verification, ghost code is part of the program that is added for the purpose of specification. Ghost code must not interfere with regular code, in the sense that it can be erased without any observable difference in the program outcome. In particular, ghost data cannot participate in regular computations and ghost code cannot mutate regular data or diverge.

The idea has existed in the folklore since the early notion of auxiliary variables, and it is implemented in many state-of-the-art program verification tools. However, a rigorous definition and treatment of ghost code is surprisingly subtle, and few formalizations exist.

In this talk, we describe a simple ML-style programming language with mutable state and ghost code. Non-interference is ensured by a type system with effects, which allows, notably, the same data types and functions to be used in both regular and ghost code. We define the procedure of ghost code erasure and we prove its safety using bisimulation. A similar type system, with numerous extensions which we briefly discuss, is implemented in the program verification environment Why3.
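The non-interference requirement can be pictured with a small Python sketch (our own illustration, not the Why3 formalism): ghost statements may read regular data but never flow back into it, so erasing them cannot change the observable result.

```python
def isqrt_with_ghost(n):
    """Integer square root; `ghost_steps` exists only for the specification."""
    ghost_steps = 0          # ghost: instrumentation for the proof
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
        ghost_steps += 1     # ghost: mutates ghost data only
    # ghost data may appear in assertions, but not in the returned value
    assert ghost_steps == r
    return r

def isqrt_erased(n):
    """The same program after ghost code erasure."""
    r = 0
    while (r + 1) * (r + 1) <= n:
        r += 1
    return r

print(all(isqrt_with_ghost(n) == isqrt_erased(n) for n in range(100)))  # True
```

The type system described in the talk enforces this discipline statically (including termination of ghost code), rather than by inspection as here.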

Joint session with the ACS working group

Vérification
Monday 20 November 2017, 11:00, Room 1007
Laurent Fribourg (LSV, CNRS & ENS de Cachan) Euler’s Method Applied to the Control of Switched Systems

Hybrid systems are a powerful formalism for modeling and reasoning about cyber-physical systems. They mix the continuous and discrete natures of the evolution of computerized systems. Switched systems are a special kind of hybrid system, with restricted discrete behaviours: such systems only have finitely many different modes of (continuous) evolution, with isolated switches between modes. They provide a good balance between expressiveness and controllability, and are thus in widespread use in large branches of industry such as power electronics and automotive control. The control law for a switched system defines the way the modes are selected during the run of the system. Controllability is the problem of (automatically) synthesizing a control law in order to satisfy a desired property, such as safety (maintaining the variables within a given zone) or stabilisation (confinement of the variables in a small neighborhood around an objective point). In order to compute the control of a switched system, we need to compute the solutions of the differential equations governing the modes. Euler’s method is the most basic technique for approximating such solutions. We present here an estimate of the local error of Euler’s method, using the notion of “one-sided Lipschitz constant” for modes. This yields a general synthesis approach which can encompass uncertain parameters, local information and stochastic noise.
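A minimal sketch of the ingredients (ours, not the talk's synthesis algorithm): explicit Euler for a mode x' = f(x) whose one-sided Lipschitz constant is negative, so trajectories contract and the error stays controlled.

```python
import math

def euler(f, x0, t_end, n_steps):
    """Explicit Euler for x' = f(x): the most basic approximation scheme."""
    h = t_end / n_steps
    x = x0
    for _ in range(n_steps):
        x = x + h * f(x)
    return x

# x' = -x has one-sided Lipschitz constant -1:
# (f(x) - f(y)) * (x - y) = -(x - y)^2 <= -1 * (x - y)^2.
f = lambda x: -x
exact = math.exp(-1.0)                  # solution of x' = -x, x(0) = 1, at t = 1
coarse = euler(f, 1.0, 1.0, 10)
fine = euler(f, 1.0, 1.0, 1000)
print(abs(coarse - exact) > abs(fine - exact))  # True: error shrinks with the step
```

The talk's contribution is a rigorous bound on this local error in terms of the one-sided Lipschitz constant, usable inside a control-synthesis loop.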

We will sketch out how the approach relates with other symbolic methods based on interval arithmetic and Lyapunov functions. We will also present some applicative examples which illustrate its advantages and limitations.

Vérification
Monday 13 November 2017, 11:00, Room 1007
Viktor Vafeiadis (Max Planck Institute for Software Systems (MPI-SWS)) Effective Stateless Model Checking for C/C++ Concurrency

I will present a stateless model checking algorithm for verifying concurrent programs running under RC11, a repaired version of the C/C++11 memory model without dependency cycles. Unlike previous approaches, which enumerate thread interleavings up to some partial order reduction improvements, our approach works directly on execution graphs and (in the absence of RMW instructions and SC atomics) avoids redundant exploration by construction. We have implemented a model checker, called RCMC, based on this approach and applied it to a number of challenging concurrent programs. Our experiments confirm that RCMC is significantly faster, scales better than other model checking tools, and is also more resilient to small changes in the benchmarks.

(This is joint work with Michalis Kokologiannakis, Ori Lahav, and Konstantinos Sagonas, and will appear at POPL'18.)

Vérification
Monday 30 October 2017, 11:00, Room 1007
Suha Mutluergil (Koç University) Proving linearizability using forward simulations

Linearizability is the standard correctness criterion for concurrent data structures such as stacks and queues. It allows one to establish observational refinement between a concurrent implementation and an atomic reference implementation.

Proving linearizability requires identifying linearization points for each method invocation along all possible computations, leading to valid sequential executions, or alternatively, establishing forward and backward simulations. In both cases, carrying out proofs is hard and complex in general. In particular, backward reasoning is difficult in the context of programs with data structures, and strategies for statically identifying linearization points cannot be defined for all existing implementations. In this paper, we show that, contrary to common belief, many such complex implementations, including, e.g., the Herlihy & Wing queue and the Time-Stamped Stack, can be proved correct using only forward simulation arguments. This leads to simple and natural correctness proofs for these implementations that are amenable to automation.
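To fix intuitions, here is a brute-force linearizability check for a finite history (our own toy, ignoring real-time ordering constraints for brevity): it searches for some sequential order of the operations accepted by a sequential FIFO specification.

```python
from itertools import permutations

def linearizable(history, spec_runner):
    """Brute force: does some sequential ordering of the (complete)
    operations reproduce the observed responses under the spec?"""
    return any(spec_runner(order) for order in permutations(history))

def queue_spec(ops):
    """Sequential FIFO queue. Ops: ('enq', v, None) or ('deq', None, result)."""
    q = []
    for name, arg, res in ops:
        if name == 'enq':
            q.append(arg)
        else:
            got = q.pop(0) if q else None
            if got != res:
                return False
    return True

# Two threads ran enq(1) || enq(2); then deq() returned 2 and deq() returned 1.
history = [('enq', 1, None), ('enq', 2, None), ('deq', None, 2), ('deq', None, 1)]
print(linearizable(history, queue_spec))  # True: order enq(2); enq(1); deq→2; deq→1
```

The talk is about avoiding exactly this kind of exhaustive search (and backward simulations) by exhibiting forward simulations instead.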

Vérification
Monday 23 October 2017, 11:00, Room 1007
Raphaël Cauderlier (IRIF - Université Paris Diderot) FoCaLiZe and Dedukti to the Rescue for Proof Interoperability

Numerous contributions have been made over the years to allow users to exchange formal proofs between different provers. The main proposals consist of ad hoc pointwise translations, e.g. between HOL Light and Isabelle in the Flyspeck project, or the use of more or less complete certificates. We propose a methodology to combine proofs coming from different theorem provers. This methodology relies on the Dedukti logical framework as a common formalism into which proofs can be translated and combined. To relate the independently developed mathematical libraries used in proof assistants, we rely on the structuring features offered by FoCaLiZe, in particular parameterized modules and inheritance, to build a formal library of transfer theorems called MathTransfer. We finally illustrate this methodology on the Sieve of Eratosthenes, which we prove correct using HOL and Coq in combination.

Vérification
Monday 16 October 2017, 11:00, Room 1007
Noam Rinetzky (Tel Aviv University) Verifying Equivalence of Spark Programs

Apache Spark is a popular framework for writing large scale data processing applications. Our long term goal is to develop automatic tools for reasoning about Spark programs. This is challenging because Spark programs combine database-like relational algebraic operations and aggregate operations, corresponding to (nested) loops, with User Defined Functions (UDFs).

In this paper, we present a novel SMT-based technique for verifying the equivalence of Spark programs. We model Spark as a programming language whose semantics imitates Relational Algebra queries (with aggregations) over bags (multisets) and allows for UDFs expressible in Presburger arithmetic. We prove that the problem of checking equivalence is undecidable even for programs which use a single aggregation operator. Thus, we present sound techniques for verifying the equivalence of interesting classes of Spark programs, and show that they are complete under certain restrictions. We implemented our technique, and applied it to a few small, but intricate, test cases.
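The bag semantics can be sketched in a few lines of Python (our own simplification, with `Counter` standing in for Spark datasets): two programs that commute a filter past a map agree on every bag, which is the kind of equivalence the SMT encoding proves in general.

```python
from collections import Counter

def bag_map(f, bag):
    """Map a UDF over a bag (multiset), preserving multiplicities."""
    out = Counter()
    for x, mult in bag.items():
        out[f(x)] += mult
    return out

def bag_filter(p, bag):
    return Counter({x: m for x, m in bag.items() if p(x)})

# Two programs with UDFs expressible in Presburger arithmetic (x + 1, x > c):
prog1 = lambda bag: bag_map(lambda x: x + 1, bag_filter(lambda x: x > 0, bag))
prog2 = lambda bag: bag_filter(lambda x: x > 1, bag_map(lambda x: x + 1, bag))

sample = Counter({-1: 2, 0: 1, 3: 4})
print(prog1(sample) == prog2(sample))  # True on this sample
```

Testing on samples is easy; proving equivalence for all inputs is what needs the SMT reduction, and becomes undecidable once aggregations enter.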

Joint work with Shelly Grossman, Sara Cohen, Shachar Itzhaky, and Mooly Sagiv. Presented at CAV’17.

Paper: http://www.cs.tau.ac.il/~maon/pubs/2017-cav.pdf

Vérification
Monday 9 October 2017, 11:00, Room 1007
Javier Esparza (Technische Universität München) Polynomial Analysis Algorithms for Free-Choice Workflow Nets

Workflow Petri nets are a successful formalism for modelling, simulating, and analyzing business processes. In this area, free-choice workflow nets play an important role: Core formalisms for business processes can be translated into free-choice nets, and many industrial models are free-choice.

While the state space of a free-choice net can grow exponentially in its number of nodes, numerous control-flow properties can still be decided in polynomial time. However, these decision algorithms cannot be extended to workflow nets with data. We present a new analysis technique, based on reduction rules, that can be applied to workflow nets with data, costs, or probabilities. In particular, we give a polynomial algorithm to compute the expected cost of sound free-choice workflow nets.
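For the flavour of the expected-cost question, here is a tiny acyclic instance (our own toy, not the reduction-rule algorithm of the talk): each workflow node branches probabilistically to successors, paying a cost per step.

```python
def expected_cost(workflow, node):
    """workflow: node -> list of (probability, cost, successor); [] = final.
    Expected total cost from `node`, for an acyclic workflow."""
    return sum(p * (c + expected_cost(workflow, nxt))
               for p, c, nxt in workflow.get(node, []))

wf = {
    'start':  [(0.5, 1.0, 'review'), (0.5, 2.0, 'done')],
    'review': [(1.0, 3.0, 'done')],
}
print(expected_cost(wf, 'start'))  # 0.5*(1+3) + 0.5*2 = 3.0
```

With cycles one must instead solve a linear system; the talk's polynomial algorithm handles sound free-choice workflow nets in full generality.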

Vérification
Monday 2 October 2017, 11:00, Room 1007
Ahmed Bouajjani (IRIF - Université Paris Diderot) Verifying Robustness of Event-Driven Asynchronous Programs Against Concurrency

We define a correctness criterion, called robustness against concurrency, for a class of event-driven asynchronous programs that are at the basis of modern UI frameworks in Android, iOS, and Javascript. A program is robust when all possible behaviors admitted by the program under arbitrary procedure and event interleavings are admitted even if asynchronous procedures (respectively, events) are assumed to execute serially, one after the other, accessing shared memory in isolation. We characterize robustness as a conjunction of two correctness criteria: event-serializability (i.e., events can be seen as atomic) and event-determinism (executions within each event are insensitive to the interleavings between concurrent tasks dynamically spawned by the event). Then, we provide efficient algorithms for checking these two criteria based on polynomial reductions to reachability problems in sequential programs. This result is surprising because it makes it possible to avoid explicit handling of all concurrent executions in the analysis, which leads to an important gain in complexity. We demonstrate via case studies on Android apps that the typical mistakes programmers make are captured as robustness violations, and that violations can be detected efficiently using our approach.

Joint work with Michael Emmi, Constantin Enea, Burcu Kulahcioglu Ozkan, and Serdar Tasiran.

Vérification
Wednesday 28 June 2017, 14:00, Room 3052
Giuliano Losa (UCLA) Paxos Made EPR — Decidable Reasoning about Distributed Consensus

Distributed protocols such as Paxos play an important role in many computer systems. Therefore, a bug in a distributed protocol may have tremendous effects. Accordingly, a lot of effort has been invested in verifying such protocols. However, checking invariants of such protocols is undecidable and hard in practice, as it requires reasoning about an unbounded number of nodes and messages. Moreover, protocol actions and invariants involve higher-order concepts such as set cardinalities, arithmetic, and complex quantification.

This paper makes a step towards automatic verification of such protocols. We aim at a technique that can verify correct protocols and identify bugs in incorrect protocols. To this end, we develop a methodology for deductive verification based on effectively propositional logic (EPR), a decidable fragment of first-order logic (also known as the Bernays-Schönfinkel-Ramsey class). In addition to decidability, EPR also enjoys the finite model property, allowing violations to be displayed as finite structures which are intuitive for users. Our methodology involves modeling protocols using general (uninterpreted) first-order logic, and then systematically transforming the model to obtain a model and an inductive invariant that are decidable to check. The steps of the transformations are also mechanically checked, ensuring the soundness of the method. We have used our methodology to verify the safety of Paxos, and several of its variants, including Multi-Paxos, Vertical Paxos, Fast Paxos and Flexible Paxos. To the best of our knowledge, this work is the first to verify these protocols using a decidable logic, and the first formal verification of Vertical Paxos and Fast Paxos.

This is joint work with O. Padon, M. Sagiv, and S. Shoham.

Vérification
Thursday 22 June 2017, 11:00, Room 3052
Ruzica Piskac (Yale University) New Applications of Software Synthesis: Verification of Configuration Files and Firewall Repair

In this talk we present a new approach to the verification of configuration files. Software failures resulting from configuration errors have become commonplace as modern software systems grow increasingly large and more complex. The lack of language constructs in configuration files, such as types and grammars, has directed the focus of configuration file verification towards building post-failure error diagnosis tools. We describe a framework which analyzes data sets of correct configuration files and synthesizes rules for building a language model from the given data set. The resulting language model can be used to verify new configuration files and detect errors in them.

We also outline a systematic effort that can automatically repair firewalls, using the programming by example approach. Firewalls are widely employed to manage and control enterprise networks. Because enterprise-scale firewalls contain hundreds or thousands of policies, ensuring the correctness of firewalls - whether the policies in the firewalls meet the specifications of their administrators - is an important but challenging problem. In our approach, after an administrator observes undesired behavior in a firewall, she may provide input/output examples that comply with the intended behavior. Based on the given examples, we automatically synthesize new firewall rules for the existing firewall. This new firewall correctly handles packets specified by the examples, while maintaining the rest of the behavior of the original firewall.
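The repair-by-example idea can be caricatured in a few lines (entirely our own toy model of a firewall as an ordered first-match rule list, not the talk's synthesis procedure):

```python
def evaluate(rules, packet):
    """A firewall as an ordered list of (predicate, action), first match wins."""
    for pred, action in rules:
        if pred(packet):
            return action
    return 'deny'  # default policy

def repair_by_examples(rules, examples):
    """Programming-by-example in miniature: prepend one exact-match rule per
    example the current firewall gets wrong, keeping all other behaviour."""
    patches = [(lambda p, ex=pkt: p == ex, want)
               for pkt, want in examples
               if evaluate(rules, pkt) != want]
    return patches + rules

rules = [(lambda p: p['port'] == 22, 'allow')]
examples = [({'port': 22, 'src': 'evil'}, 'deny')]   # admin: deny this one
fixed = repair_by_examples(rules, examples)
print(evaluate(fixed, {'port': 22, 'src': 'evil'}))  # 'deny'
print(evaluate(fixed, {'port': 22, 'src': 'ok'}))    # 'allow' (rest unchanged)
```

Real synthesis generalizes from the examples instead of hard-coding them, while still provably preserving the firewall's behaviour on all other packets.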

Vérification
Monday 12 June 2017, 14:00, Room 3052
Thomas Wies (New York University) A Simple Framework for Verifying Concurrent Search Structures

We present an abstract framework for verifying concurrent algorithms on search data structures that support dictionaries. The framework can be specialized to every search data structure we know, from linked lists to complex B-trees. We obtain this framework by instantiating RGSep, which is a program logic that combines rely-guarantee reasoning and separation logic. RGSep is parameterized by a so-called resource algebra, which is typically the standard heap model of separation logic. Instead of using the standard heap model, we develop a new resource algebra that enables compositional reasoning about search graphs. Using this search graph model, we obtain simple correctness proofs of complex concurrent dictionary algorithms. The algorithms and proofs abstract from the specific search data structure that the algorithm operates on and can then be refined to a concrete data structure implementation.

Vérification
Tuesday 6 June 2017, 11:00, Room 3052
Sergio Rajsbaum (UNAM Mexico) Tasks, objects, and the notion of a distributed problem

The universal computing model of Turing, which was central to the birth of modern computer science, also identified the essential notion of a problem, as an input/output function to be computed by a Turing machine. In distributed computing, “tasks” are the equivalent of a function: each process gets only part of the input, and computes part of the output after communicating with other processes.

In distributed computing, tasks have been studied from early on, in parallel with, but independently of, “sequential objects”. While tasks explicitly state what might happen when a set of processes run concurrently, sequential objects only specify what happens when processes run sequentially. Indeed, many distributed problems considered in the literature seem to have no natural specification, either as tasks or as sequential objects. I will concentrate on our recent work on interval-linearizability, a notion we introduced to specify objects more general than the usual sequential objects. I will describe the bridges we establish between these two classical paradigms, and our discussions about what a distributed problem is, and what it means to solve it.

Vérification
Thursday 1 June 2017, 11:00, Room 3052
Orna Grumberg (Technion, Israel) Sound and Complete Mutation-Based Program Repair

This work presents a novel approach for automatically repairing an erroneous program with respect to a given set of assertions. Programs are repaired using a predefined set of mutations. We refer to a bounded notion of correctness, even though, for a large enough bound, all returned programs are fully correct. To ensure no changes are made to the original program unless necessary, if a program can be repaired by applying a set of mutations Mut, then no superset of Mut is later considered. Programs are checked in order of increasing number of mutations, and every minimal repaired program is returned as soon as it is found.

We impose no assumptions on the number of erroneous locations in the program, yet we are able to guarantee soundness and completeness. That is, we ensure that a program is returned iff it is minimal and bounded-correct.
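The search strategy (increasing size, pruning supersets of known repairs) can be sketched naively (our illustration; the actual tool searches unsatisfiable constraint sets with SAT/SMT solvers rather than enumerating programs):

```python
from itertools import combinations

def minimal_repairs(program, mutations, is_correct, max_size=3):
    """Enumerate mutation sets by increasing size; once a set repairs the
    program, all of its supersets are skipped (only minimal repairs remain)."""
    found = []
    for k in range(1, max_size + 1):
        for subset in combinations(range(len(mutations)), k):
            if any(set(f) <= set(subset) for f in found):
                continue  # a subset already repairs: prune this superset
            candidate = program
            for i in subset:
                candidate = mutations[i](candidate)
            if is_correct(candidate):
                found.append(subset)
    return found

# Toy "program": an expression string; mutations rewrite it; the assertion
# plays the role of the bounded correctness check.
program = "a - b"  # buggy: the assertion below wants a + b
mutations = [lambda s: s.replace('-', '+'),
             lambda s: s.replace('a', 'b')]
is_correct = lambda s: eval(s, {'a': 2, 'b': 3}) == 5
print(minimal_repairs(program, mutations, is_correct))  # [(0,)]
```

Only the single-mutation repair is reported; the two-mutation superset containing it is pruned, mirroring the minimality guarantee stated above.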

Searching the space of mutated programs is reduced to searching unsatisfiable sets of constraints, which is performed efficiently using a sophisticated cooperation between SAT and SMT solvers. Similarities between mutated programs are exploited in a new way, by using both the SAT and the SMT solvers incrementally.

We implemented a prototype of our algorithm, compared it with a state-of-the-art repair tool and got very encouraging results. This is a joint work with Bat-Chen Rothenberg.

Vérification
Tuesday 23 May 2017, 15:30, Room 3052
Florian Zuleger (TU Wien) Inductive Termination Proofs by Transition Predicate Abstraction and their relationship to the Size-Change Abstraction

The last decade has seen renewed interest in automated techniques for proving the termination of programs. A popular termination criterion is based on the covering of the transitive hull of the transition relation of a program by a finite number of well-founded relations. In an automated analysis, this termination criterion is usually established by an inductive proof using transition predicate abstraction. Such termination proofs have the structure of a finite automaton. These automata, which we call transition automata, are the central object of study in this talk. Our main results are as follows: (1) A previous criterion for termination analysis with transition automata is not complete; we provide a complete criterion. (2) We show how to bound the height of the transition relation of the program using the termination proof by transition predicate abstraction. This result has applications in the automated complexity analysis of programs. (3) We show that every termination proof by transition predicate abstraction gives rise to a termination proof by the size-change abstraction; this connection is crucial to obtain results (1) and (2) from previous results on the size-change abstraction. Further, our result establishes that transition predicate abstraction and size-change abstraction have the same expressivity for automated termination proofs.

Vérification
Monday 22 May 2017, 11:00, Room 1007
Alain Finkel (LSV, ENS Cachan) The Erdős & Tarski theorem. A new class of WSTS without WQO.

We present the ideal framework which was recently used to obtain new deep results on Petri nets and their extensions. We will present the proof of the famous but little-known Erdős-Tarski theorem. We argue that the theory of ideals prompts a renewal of the theory of WSTS by providing a way to define a new class of monotonic systems, the so-called Well Behaved Transition Systems, which properly contains WSTS, and for which coverability is still decidable by a forward algorithm.

Vérification
Monday 27 March 2017, 11:00, Room 1007
Mohamed Faouzi Atig (Uppsala University) Lossy Channel Systems with Data.

Lossy channel systems are a classical model with applications ranging from the modelling of communication protocols to programs running on weak memory models. All existing work assumes that messages traveling inside the channels are picked from a finite alphabet. In this talk, we present two extensions of lossy channel systems. In the first part of the talk, we extend lossy channel systems by assuming that each message is equipped with a clock representing the age of the message, thus obtaining the model of Timed Lossy Channel Systems (TLCS). We show that the state reachability problem is decidable for TLCS.

In the second part of the talk, we extend lossy channel systems by considering systems that operate on a finite set of variables ranging over an infinite data domain. Furthermore, each message inside a channel is equipped with a data item representing its value. Although we restrict the model by allowing the variables to be only tested for (dis-)equality, we show that the state reachability problem is undecidable. In light of this negative result, we consider bounded-phase reachability, where the processes are restricted to performing either send or receive operations during each phase. We show decidability of state reachability in this case by computing a symbolic encoding of the set of system configurations that are reachable from a given configuration.

This talk is based on previous joint work with Parosh Aziz Abdulla, Jonathan Cederberg and C. Aiswarya.

Vérification
Monday 20 March 2017, 11:00, Room 1007
Andreas Podelski (University of Freiburg) Proving Liveness of Parameterized Programs

Correctness of multi-threaded programs typically requires that they satisfy liveness properties. For example, a program may require that no thread is starved of a shared resource, or that all threads eventually agree on a single value. This paper presents a method for proving that such liveness properties hold. Two particular challenges addressed in this work are that (1) the correctness argument may rely on global behaviour of the system (e.g., the correctness argument may require that all threads collectively progress towards “the good thing” rather than one thread progressing while the others do not interfere), and (2) such programs are often designed to be executed by any number of threads, and the desired liveness properties must hold regardless of the number of threads that are active in the program. This is joint work with Azadeh Farzan and Zachary Kincaid and published at LICS 2016 (http://www.cs.princeton.edu/~zkincaid/pub/lics16.pdf)

Vérification
Monday 13 March 2017, 11:00, Room 1007
Ori Lahav (MPI Kaiserslautern) A Promising Semantics for Relaxed-Memory Concurrency

Despite many years of research, it has proven very difficult to develop a memory model for concurrent programming languages that adequately balances the conflicting desiderata of programmers, compilers, and hardware. In this paper, we propose the first relaxed memory model that (1) accounts for nearly all the features of the C++11 concurrency model, (2) provably validates a number of standard compiler optimizations, as well as a wide range of memory access reorderings that commodity hardware may perform, (3) avoids bad “out-of-thin-air” behaviors that break invariant-based reasoning, (4) supports “DRF” guarantees, ensuring that programmers who use sufficient synchronization need not understand the full complexities of relaxed-memory semantics, and (5) defines the semantics of racy programs without relying on undefined behaviors, which is a prerequisite for applicability to type-safe languages like Java. The key novel idea behind our model is the notion of promises: a thread may promise to execute a write in the future, thus enabling other threads to read from that write out of order. Crucially, to prevent out-of-thin-air behaviors, a promise step requires a thread-local certification that it will be possible to execute the promised write even in the absence of the promise. To establish confidence in our model, we have formalized most of our key results in Coq.

Joint work with Jeehoon Kang, Chung-Kil Hur, Viktor Vafeiadis, and Derek Dreyer, recently presented at POPL'17

Vérification
Monday 6 March 2017, 11:00, Room 1007
Germán Andrés Delbianco (IMDEA Madrid) Concurrent Data Structures Linked in Time

Arguments about the correctness of a concurrent data structure are typically carried out by using the notion of linearizability and specifying the linearization points of the data structure's procedures. Such arguments are often cumbersome, as a linearization point's position in time can be dynamic (depending on the interference, on run-time values, and on events from the past, or even the future), non-local (appearing in procedures other than the one considered), and determinable only after the considered procedure has already terminated.

In this talk I will present a new method, based on a separation-style logic, for reasoning about concurrent objects with such linearization points. We embrace the dynamic nature of linearization points, and encode it as part of the data structure's auxiliary state, so that it can be dynamically modified in place by auxiliary code, as needed when some appropriate run-time event occurs.

We have named the idea linking-in-time, because it reduces temporal reasoning to spatial reasoning. For example, modifying the temporal position of a linearization point can be modeled similarly to a pointer update in separation logic. Furthermore, the auxiliary state provides a convenient way to concisely express the properties essential for reasoning about clients of such concurrent objects. To illustrate our approach, I will present its application to the verification (mechanized in Coq) of an intricate optimal snapshot algorithm due to Jayanti.

Vérification
Monday 27 February 2017, 11:00, Room 1007
Oded Maler (CNRS and University of Grenoble-Alpes) Monitoring: Qualitative and Quantitative, Real and Virtual, Online and Offline

In this talk I will present some of the research I have been involved in concerning the specification and monitoring of timed, continuous and hybrid behaviors using formalisms such as Signal Temporal Logic and Timed Regular Expressions. I will discuss the similarities and differences between properties/assertions and other “measures” which are used in many application domains to evaluate behaviors, as well as the difference between monitoring real systems during their execution and monitoring simulated models during the system design phase.

Vérification
Monday 20 February 2017, 11:00, Room 1007
Loig Jezequel (LS2N, Nantes) Lazy Reachability Analysis in Distributed Systems

We address the problem of reachability in distributed systems, modelled as networks of finite automata, and we propose and prove a new algorithm that solves it efficiently in many cases. The algorithm decomposes the reachability objective among the components, and proceeds by constructing partial products, lazily adding new components when required. It thus constructs more and more precise over-approximations of the complete product. This permits early termination in many cases, in particular when the objective is not reachable, which is often the unfavorable case in reachability analysis. We have implemented this algorithm in a first prototype and provide some very encouraging experimental results.
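As a baseline for what the lazy method avoids, here is full-product reachability for a network of automata synchronizing on shared action names (our own minimal encoding; the talk's algorithm builds only partial products of this):

```python
from collections import deque

def product_reachable(components, targets):
    """components: list of (states, init, transitions) with deterministic
    transitions {(state, action): next_state}. Synchronization: every
    component whose alphabet contains the action must move; others stay put.
    Returns True iff a configuration hitting all target sets is reachable."""
    inits = tuple(c[1] for c in components)
    alphabets = [set(a for (_, a) in c[2]) for c in components]
    actions = set().union(*alphabets)
    seen, frontier = {inits}, deque([inits])
    while frontier:
        conf = frontier.popleft()
        if all(s in t for s, t in zip(conf, targets)):
            return True
        for a in actions:
            nxt = []
            for (_states, _init, trans), s, alpha in zip(components, conf, alphabets):
                if a in alpha:
                    if (s, a) not in trans:
                        break          # this component blocks action a
                    nxt.append(trans[(s, a)])
                else:
                    nxt.append(s)      # component not concerned: stays put
            else:
                nconf = tuple(nxt)
                if nconf not in seen:
                    seen.add(nconf)
                    frontier.append(nconf)
    return False

A = ({'a0', 'a1'}, 'a0', {('a0', 'sync'): 'a1'})
B = ({'b0', 'b1'}, 'b0', {('b0', 'sync'): 'b1'})
print(product_reachable([A, B], [{'a1'}, {'b1'}]))  # True
```

The complete product explored here grows exponentially in the number of components; lazily adding components only when they constrain the objective is what makes the approach of the talk scale.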

Vérification
Monday 23 January 2017, 11:00, Room 1007
Andrea Cerone (Imperial College London) Analysing Snapshot Isolation

Snapshot isolation (SI) is a widely used consistency model for transaction processing, implemented by most major databases and some transactional memory systems. Unfortunately, its classical definition is given in a low-level operational way, by an idealised concurrency-control algorithm, and this complicates reasoning about the behaviour of applications running under SI. We give an alternative specification of SI that characterises it in terms of the transactional dependency graphs of Adya et al., generalising serialization graphs. Unlike previous work, our characterisation does not require adding additional information to dependency graphs about the start and commit points of transactions. We then exploit our specification to obtain two kinds of static analyses. The first one checks when a set of transactions running under SI can be chopped into smaller pieces without introducing new behaviours, in order to improve performance. The other analysis checks whether a set of transactions running under a weakening of SI behaves the same as when running under SI.
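For background on the dependency-graph style of specification, here is the classical acyclicity check for serializability (our own sketch; the talk's SI criterion is a refinement of this kind of condition on cycles):

```python
def has_cycle(nodes, edges):
    """Dependency graph over transactions: edges are (src, dst, kind) with
    kind in {'wr', 'ww', 'rw'}. Serializability = acyclicity of this graph."""
    graph = {n: [] for n in nodes}
    for src, dst, _kind in edges:
        graph[src].append(dst)
    WHITE, GREY, BLACK = 0, 1, 2
    color = {n: WHITE for n in nodes}
    def visit(n):  # DFS; a grey node reached again closes a cycle
        color[n] = GREY
        for m in graph[n]:
            if color[m] == GREY or (color[m] == WHITE and visit(m)):
                return True
        color[n] = BLACK
        return False
    return any(color[n] == WHITE and visit(n) for n in nodes)

# The classic write-skew anomaly admitted by SI:
# two read-write antidependencies form a cycle, so it is not serializable.
edges = [('T1', 'T2', 'rw'), ('T2', 'T1', 'rw')]
print(has_cycle(['T1', 'T2'], edges))  # True
```

SI admits exactly certain such cycles (e.g. write skew); the talk's contribution is characterising them on plain dependency graphs, without annotating start and commit points.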