Verification
Monday December 12, 2016, 11AM, Salle 1007
Bin Fang (IRIF (Paris), ECNU (China)) Hierarchical Shape Abstraction for Analysis of Free-List Memory Allocators (a logic-based approach)

We propose a hierarchical abstract domain for the analysis of free-list memory allocators that tracks shape and numerical properties about both the heap and the free lists. Our domain is based on Separation Logic extended with predicates that capture the pointer arithmetic constraints of the heap list and the shape of the free list. These predicates are combined using a hierarchical composition operator to specify the overlapping of the heap list by the free list. In addition to expressiveness, this operator leads to a compositional and compact representation of abstract values and simplifies the implementation of the abstract domain. The shape constraints are combined with numerical constraints over integer arrays to track properties of the allocation policies (best-fit, first-fit, etc.). Such properties are out of the scope of existing analyzers. We implemented this domain and show its effectiveness on several implementations of free-list allocators.
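To make the two overlapping structures concrete, here is a minimal sketch of an assumed allocator layout (illustrative only, not one of the implementations analysed in the talk): the heap list is traversed by pointer arithmetic over block headers, while the free list chains only the free blocks through a dedicated field.

```c
#include <stddef.h>

/* Assumed block header: 'size' drives the heap-list traversal by pointer
   arithmetic; 'next_free' threads the free list over the same memory. */
typedef struct block {
    size_t size;              /* total size of the block, header included  */
    int    free;              /* non-zero if the block is currently free   */
    struct block *next_free;  /* next free block (meaningful only if free) */
} block_t;

/* Heap-list successor: the next header starts 'size' bytes further. */
block_t *heap_next(block_t *b) {
    return (block_t *)((char *)b + b->size);
}

/* First-fit policy: scan the free list for the first large-enough block. */
block_t *first_fit(block_t *free_list, size_t need) {
    for (block_t *b = free_list; b != NULL; b = b->next_free)
        if (b->size >= need)
            return b;
    return NULL;
}
```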

Verification
Friday December 9, 2016, 11AM, Salle 3052
Alastair Donaldson (Imperial College London) Exposing Errors Related to Weak Memory in GPU Applications

In this presentation, I will describe a project led by my PhD student Tyler Sorensen, on the systematic design of a testing environment that uses stressing and fuzzing to reveal errors in GPU applications that arise due to weak memory effects. This approach is evaluated across several CUDA applications that use fine-grained concurrency, on seven GPUs spanning three Nvidia architectures. The results show that applications that rarely, or never, exhibit errors related to weak memory when executed natively can readily exhibit these errors when executed in the testing environment. The testing environment also provides a means to identify the root causes of erroneous weak effects, and automatically suggests how to insert fences that experimentally eliminate these errors. This empirical fence insertion method carries significantly lower overhead, in terms of execution time and energy consumption, than a more conservative, guaranteed-sound approach.
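The study targets CUDA kernels; as an assumed CPU-side analogue only (C11 atomics with pthreads, not code from the paper), the program below shows the message-passing idiom whose weak-memory behaviour such stressing and fuzzing exposes: without the two fences, the consumer could observe flag == 1 and still read the stale value of data.

```c
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

atomic_int data = 0;
atomic_int flag = 0;

void *producer(void *arg) {
    (void)arg;
    atomic_store_explicit(&data, 42, memory_order_relaxed);
    atomic_thread_fence(memory_order_release);   /* order data before flag */
    atomic_store_explicit(&flag, 1, memory_order_relaxed);
    return NULL;
}

void *consumer(void *arg) {
    (void)arg;
    if (atomic_load_explicit(&flag, memory_order_relaxed) == 1) {
        atomic_thread_fence(memory_order_acquire); /* pairs with the release fence */
        /* With both fences, seeing flag == 1 guarantees data == 42. */
        printf("data = %d\n", atomic_load_explicit(&data, memory_order_relaxed));
    }
    return NULL;
}

int main(void) {
    pthread_t p, c;
    pthread_create(&p, NULL, producer, NULL);
    pthread_create(&c, NULL, consumer, NULL);
    pthread_join(p, NULL);
    pthread_join(c, NULL);
    return 0;
}
```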

Verification
Monday December 5, 2016, 11AM, Salle 1007
Guilhem Jaber (IRIF) SyTeCi: Symbolic, Temporal and Circular reasoning for automatic proofs of contextual equivalence

Operational Techniques (Kripke Logical Relations and Environmental/Open/Parametric Bisimulations) have matured enough to become formidable tools to prove contextual equivalence of effectful higher-order programs. However, it is not yet possible to automate such proofs – the problem being, of course, undecidable in general. In this talk, we will see how to take the best of these techniques to design an automatic procedure which is able to prove many non-trivial examples of equivalence, including most of the examples from the literature. The talk will describe the main ingredients of this method:
- symbolic reduction to evaluate programs,
- transition systems of heap invariants, as used with Kripke Logical Relations,
- temporal logic to abstract over the control flow between the program and its environment,
- circular proofs to automate the reasoning over recursive functions.
Using them, we will see how to reduce contextual equivalence to the problem of non-reachability in automatically generated transition systems of invariants.
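For reference, contextual equivalence is standardly defined (notation assumed, with $\Downarrow$ denoting termination) by $M \simeq_{ctx} N$ iff for every program context $C$, $C[M] \Downarrow \Leftrightarrow C[N] \Downarrow$. The universal quantification over all contexts is what makes direct proofs difficult and what the reduction to non-reachability described above is designed to tame.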

Verification
Monday November 28, 2016, 11AM, Salle 1007
Georg Zetzsche (LSV, ENS Cachan) First-order logic with reachability for infinite-state systems

First-order logic with the reachability predicate (FOR) is an important means of specification in system analysis. Its decidability status is known for some individual types of infinite-state systems, such as pushdown systems (decidable) and vector addition systems (undecidable).

This work aims at a general understanding of which types of systems admit decidability. As a unifying model, we employ valence systems over graph monoids, which feature a finite-state control and are parameterized by a monoid to represent their storage mechanism. As special cases, this includes pushdown systems, various types of counter systems (such as vector addition systems) and combinations thereof. Our main result is a characterization of those graph monoids where FOR is decidable for the resulting transition systems.
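As an illustration (notation assumed, with $x \rightarrow^* y$ the reachability predicate and $c_0$ an initial configuration), the FOR sentence $\exists y\, \forall x\, (c_0 \rightarrow^* x \Rightarrow x \rightarrow^* y)$ states the existence of a home configuration, i.e. one that remains reachable from every configuration reachable from $c_0$.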

Verification
Thursday November 24, 2016, 3PM, Salle 3052
Gennaro Parlato (University of Southampton) A Pragmatic Bug-finding Approach for Concurrent Programs

Concurrency poses a major challenge for program verification, but it can also offer an opportunity to scale when subproblems can be analysed in parallel. We propose a parameterizable code-to-code translation to generate a set of simpler program variants such that each interleaving of the program is captured by at least one of them. These variants can then be checked autonomously in parallel. Our approach is independent of the tool that is chosen for the final analysis, it is compatible with weak memory models, and it amplifies the effectiveness of existing tools, making them find bugs faster and with fewer resources. We do our experiments using Lazy-CSeq as an off-the-shelf final verifier and demonstrate that our approach is able to find bugs in the hardest known concurrency benchmarks in a matter of minutes, whereas other dynamic and static tools fail to conclude.

Verification
Monday November 14, 2016, 11AM, Salle 1007
Philipp Rümmer (University of Uppsala) Liveness of Randomised Parameterised Systems under Arbitrary Schedulers

We consider the problem of verifying liveness for systems with a finite, but unbounded, number of processes, commonly known as parameterised systems. Typical examples of such systems include distributed protocols (e.g. for the dining philosophers problem). Unlike the case of verifying safety, proving liveness is still considered extremely challenging, especially in the presence of randomness in the system. We consider liveness under arbitrary (including unfair) schedulers, which is often considered a desirable property in the literature on self-stabilising systems, and we introduce an automatic method of proving liveness for randomised parameterised systems under arbitrary schedulers. Viewing liveness as a two-player reachability game (between Scheduler and Process), our method is a CEGAR approach that synthesises a progress relation for Process that can be symbolically represented as a finite-state automaton. The method incrementally constructs a progress relation, exploiting both Angluin's $L*$ algorithm and SAT solvers. Our experiments show that our algorithm is able to prove liveness automatically for well-known randomised distributed protocols, including the Lehmann-Rabin Randomised Dining Philosophers Protocol and randomised self-stabilising protocols (such as the Israeli-Jalfon Protocol). To the best of our knowledge, this is the first fully automatic method that can prove liveness for randomised protocols.

Joint work with Anthony W. Lin.

Verification
Monday October 24, 2016, 11AM, Salle 1007
Sylvain Schmitz (LSV - ENS Cachan) Ideal Decompositions for Vector Addition Systems

Vector addition systems, or equivalently Petri nets, are one of the most popular formal models for the representation and the analysis of parallel processes. Many problems for vector addition systems are known to be decidable thanks to the theory of well-structured transition systems. Indeed, vector addition systems with configurations equipped with the classical point-wise ordering are well-structured transition systems. Based on this observation, problems like coverability or termination can be proven decidable. However, the theory of well-structured transition systems does not explain the decidability of the reachability problem. In this presentation, we show that runs of vector addition systems can also be equipped with a well quasi-order. This observation provides a unified understanding of the data structures involved in solving many problems for vector addition systems, including the central reachability problem.
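For reference (standard definitions, assumed rather than taken from the talk): a $d$-dimensional vector addition system is a finite set $A \subseteq \mathbb{Z}^d$ of actions, and a run from $c_0 \in \mathbb{N}^d$ is a sequence $c_0, c_1, \ldots$ with $c_{i+1} = c_i + a_i$ for some $a_i \in A$ and every $c_i \in \mathbb{N}^d$. The point-wise ordering, $x \leq y$ iff $x_i \leq y_i$ for all $i$, is a well quasi-order on $\mathbb{N}^d$ by Dickson's Lemma, which is what makes coverability and termination decidable; the result above equips the runs themselves, and not just the configurations, with a well quasi-order.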

Joint work with Jérôme Leroux.

Verification
Monday October 10, 2016, 11AM, Salle 1007
Steven de Oliveira (CEA) Polynomial invariants by linear algebra

One of the main issues in formal verification is the analysis of loops, considered a major research problem since the 70s. Program verification based on Floyd-Hoare inductive assertions and CEGAR-like model-checking techniques uses loop invariants in order to reduce the problem to the analysis of an acyclic graph instead of unrolling or accelerating loops.

I will present in this talk a new technique for generating polynomial invariants, divided into two independent parts: a procedure that reduces a class of loops containing polynomial assignments to linear loops, and a procedure for generating inductive invariants for linear loops.

Both techniques have polynomial complexity for a bounded number of variables, and we guarantee completeness for a bounded degree. We successfully implemented the approach for the verification of C programs as a Frama-C plug-in, PILAT.
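As a minimal illustration of the kind of property targeted (an assumed example, not taken from PILAT's benchmarks), the loop below admits the inductive polynomial invariant 2*s == i*i - i; treating a monomial such as i*i as a fresh variable is the flavour of reduction from polynomial to linear loops mentioned above.

```c
/* Invariant 2*s == i*i - i: it holds initially (s == 0, i == 0) and is
   preserved by the body, since 2*(s+i) == i*i + i == (i+1)*(i+1) - (i+1). */
void sum_prefix(int n) {
    int i = 0, s = 0;
    while (i < n) {
        s = s + i;
        i = i + 1;
    }
    /* On exit, 2*s == i*i - i still holds. */
}
```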

Verification
Monday October 3, 2016, 11AM, Salle 1007
Giovanni Bernardi (IRIF) Robustness against Consistency Models with Atomic Visibility

To achieve scalability, modern Internet services often rely on distributed databases with consistency models for transactions weaker than serializability. At present, application programmers often lack techniques to ensure that the weakness of these consistency models does not violate application correctness. In this talk I will present criteria to check whether applications that rely on a database providing only weak consistency are robust, i.e., behave as if they used a database providing serializability, and I will focus on a consistency model called Parallel Snapshot Isolation. The results I will outline handle systematically and uniformly several recently proposed weak consistency models, as well as a mechanism for strengthening consistency in parts of an application.

Verification
Monday September 26, 2016, 11AM, Salle 1007
Arnaud Sangnier (IRIF) Fixpoints in VASS: Results and Applications

VASS (Vector Addition Systems with States), which are equivalent to Petri nets, are automata equipped with natural-number variables that can be decremented or incremented. VASS are a powerful model which has been intensively studied in the last decades. Many verification techniques have been developed to analyse them, and the frontier between the decidable and undecidable problems related to VASS is becoming well understood. One interesting point is that the model-checking of linear-time temporal logics (like LTL or the linear mu-calculus) is decidable for this model, but this is no longer the case for branching-time temporal logics. However, some restrictions can be imposed on the logics and on the studied system in order to regain decidability. In this talk, we will present these results concerning the model-checking of VASS and the techniques leading to the decidability results. We will then show how these techniques and results can be used to analyse some extensions of VASS with probabilities, namely probabilistic VASS and VASS Markov Decision Processes.
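In standard notation (assumed here for concreteness, not taken from the talk), a $d$-dimensional VASS has a finite set $Q$ of control states and transitions labelled by vectors $a \in \mathbb{Z}^d$; a configuration is a pair $(q, v) \in Q \times \mathbb{N}^d$, and a transition $q \xrightarrow{a} q'$ yields a step $(q, v) \rightarrow (q', v + a)$ only when $v + a \in \mathbb{N}^d$, so the counters never become negative.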

Verification
Monday September 19, 2016, 11AM, Salle 1007
Francesco Belardinelli (Université d'Evry) Abstraction-based Verification of Infinite-state Data-aware Systems

Data-aware Systems (DaS) are a novel paradigm to model business processes in Service-oriented Computing.

DaS are best described in terms of interacting modules consisting of a data model and a lifecycle, which account respectively for the relational structure of data and their evolution over time.

However, by considering data and processes as equally relevant tenets of DaS, the typical questions concerning their verification are hardly amenable to current methodologies. For instance, the presence of data means that the number of states in DaS is infinite in general.

In this talk we present recent advances in the verification of DaS. We introduce agent-based abstraction techniques to model check DaS against specifications expressed in temporal and strategy logics. Further, we illustrate how DaS can be useful to model game-theoretic scenarios as well. Specifically, we provide an analysis of English (ascending bid) auctions through DaS.

Verification
Monday May 30, 2016, 11AM, Salle 1007
Thibaut Balabonski (LRI) Low-level C code optimisations: are they valid?

Substantial research efforts have been devoted to tools for reasoning about, and proving properties of, programs. A related concern is making sure that compilers preserve the soundness of programs, that is, that the compiled code respects the behavior of the source program (see for instance the CompCert C compiler). This talk is about an attempt at proving the soundness of some basic low-level transformations for concurrent C programs. We will see some elements of the official semantics of concurrency in C, and I will relate how the focus of this work shifted from proving the soundness of program transformations to patching the official semantics.
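A classic illustration of what can go wrong (a textbook example of write speculation, assumed here and not necessarily one of the transformations studied in the talk): the two functions below are equivalent for sequential code, but the second introduces a store to x that the source never performs when c is false, which another thread may observe under the C concurrency semantics.

```c
int x;  /* shared with other threads */

void source(int c) {
    if (c)
        x = 1;
}

void transformed(int c) {
    /* Sequentially equivalent rewrite that speculates the store... */
    int tmp = x;
    x = 1;
    if (!c)
        x = tmp;   /* ...and tries to undo it: unsound for concurrent C. */
}
```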

Verification
Monday March 14, 2016, 11AM, Salle 1007
Paul Gastin (LSV) Formal methods for the verification of distributed algorithms

We introduce an automata-theoretic method for the verification of distributed algorithms running on ring networks. In a distributed algorithm, an arbitrary number of processes cooperate to achieve a common goal (e.g., elect a leader). Processes have unique identifiers (pids) from an infinite, totally ordered domain. An algorithm proceeds in synchronous rounds, each round allowing a process to perform a bounded sequence of actions such as send or receive a pid, store it in some register, and compare register contents w.r.t. the associated total order. An algorithm is supposed to be correct independently of the number of processes. To specify correctness properties, we introduce a logic that can reason about processes and pids. Referring to leader election, it may say that, at the end of an execution, each process stores the maximum pid in some dedicated register. Since the verification of distributed algorithms is undecidable, we propose an underapproximation technique, which bounds the number of rounds. This is an appealing approach, as the number of rounds needed by a distributed algorithm to conclude is often exponentially smaller than the number of processes. We provide an automata-theoretic solution, reducing model checking to emptiness for alternating two-way automata on words. Overall, we show that round-bounded verification of distributed algorithms over rings is PSPACE-complete. Based on joint work with C. Aiswarya and Benedikt Bollig; extended abstract at CONCUR’15: http://www.lsv.ens-cachan.fr/~gastin/mes-publis.php?onlykey=ABG-concur15.
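To fix intuitions, here is a centralised simulation of the kind of algorithm in scope (a simple maximum-propagation leader election, illustrative only and not the formal model of the paper): in each synchronous round, every process sends the largest pid seen so far to its right neighbour and keeps the maximum of its register and the received value.

```c
#include <stdio.h>

#define N 5  /* number of processes; arbitrary for the illustration */

int main(void) {
    long pid[N] = {42, 7, 99, 13, 58};  /* unique identifiers */
    long reg[N];                        /* register holding the max pid seen */
    for (int i = 0; i < N; i++)
        reg[i] = pid[i];

    for (int round = 0; round < N; round++) {     /* synchronous rounds */
        long msg[N];
        for (int i = 0; i < N; i++)
            msg[(i + 1) % N] = reg[i];            /* send to right neighbour */
        for (int i = 0; i < N; i++)
            if (msg[i] > reg[i])                  /* receive and compare */
                reg[i] = msg[i];
    }

    for (int i = 0; i < N; i++)                   /* every register holds 99 */
        printf("process %d elects pid %ld\n", i, reg[i]);
    return 0;
}
```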

Verification
Monday March 7, 2016, 11AM, Salle 1007
Julien Signoles (CEA-LIST) Frama-C, a collaborative and extensible framework for C code analysis

Frama-C is a source code analysis platform that aims at conducting verification of industrial-size C programs. It provides its users with a collection of plug-ins that perform static analysis, deductive verification, testing and monitoring for safety- and security-critical software. Collaborative verification across cooperating plug-ins is enabled by their integration on top of a shared kernel and data structures, and by their compliance with a common specification language. This talk presents a consolidated view of the platform, its main and composite analyses, and some of its industrial achievements. It focuses on its specification language ACSL, and on different ways to verify ACSL specifications through static and dynamic analyses.
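As a small, assumed example of an ACSL contract (illustrative, not from the talk), the function below carries a specification of the kind Frama-C plug-ins can either prove statically or check dynamically:

```c
#include <limits.h>

/*@ requires x > INT_MIN;
    assigns  \nothing;
    ensures  \result >= 0;
    ensures  \result == x || \result == -x;
*/
int abs_int(int x) {
    /* The precondition x > INT_MIN rules out the overflow of -INT_MIN. */
    return x >= 0 ? x : -x;
}
```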

Verification
Monday February 29, 2016, 11AM, Salle 1007
Pierre Fraigniaud (IRIF) Fault-Tolerant Decentralized Runtime Monitoring

Runtime verification aims at extracting information from a running system and using it to detect, and possibly react to, behaviors violating a given correctness property. Decentralized runtime verification involves a set of monitors observing the behavior of the underlying system. In this talk, I will present an investigation of decentralized runtime verification in which not only the elements of the observed system but also the monitors themselves are subject to failures. In this context, it is unavoidable that the unreliable monitors may have different views of the underlying system, and therefore may have different valuations of the correctness property. We characterize the number of valuations required for monitoring a given correctness property in a decentralized manner. Our lower bound is independent of the logic used for specifying the correctness property, as well as of the way the set of valuations returned by the monitors is globally interpreted. Moreover, our lower bound is tight in the sense that we design a distributed protocol enabling any given set of monitors to verify any given correctness property using as many different valuations as given by this lower bound. Joint work with Sergio Rajsbaum (UNAM, Mexico) and Corentin Travers (LaBRI, Bordeaux).