Seminar: Verification
Pole: Automata, structures and verification
Thematic team: Modeling and verification
Day, hour and place: Monday at 11:00am, room 1007
The calendar of events (iCal format). In order to add the event calendar to your favorite agenda, subscribe to the calendar by using this link.
Contact(s): Mohammed Foughali, Srinidhi Nagendra, Enrique Roman-Calvo

Next talks

Verification
Monday November 18, 2024, 11AM, Salle 3052
Romain Delpy (LaBRI, Univ. Bordeaux)
An Automata-based Approach for Synchronizable Mailbox Communication

We revisit finite-state communicating systems with round-based communication under mailbox semantics. Mailboxes correspond to one FIFO buffer per process (instead of one buffer per pair of processes in peer-to-peer systems). Round-based communication corresponds to sequences of rounds in which processes can first send messages, then only receive (and receives must be in the same round as their sends). A system is called synchronizable if every execution can be re-scheduled into an equivalent execution that is a sequence of rounds. We know from previous results that reachability is PSPACE-complete for synchronizable systems. In this talk we show that we can check whether a mailbox system is synchronizable, thanks to a novel automata-based approach. From this we get that checking synchronizability is PSPACE-complete. The same question is undecidable under peer-to-peer semantics. We also show that model-checking synchronizable systems for a natural subclass of regular properties is a PSPACE-complete problem. Joint work with Anca Muscholl & Grégoire Sutre.

Verification
Monday November 25, 2024, 11AM, Salle 3052
Laetitia Laversa (IRIF)
Execution-time Opacity Control for Timed Automata

Timing leaks in timed automata (TA) can occur whenever an attacker is able to deduce a secret by observing some timed behavior. In execution-time opacity, the attacker aims at deducing whether a private location was visited, by observing only the execution time. It can be decided whether a TA is opaque in this setting. In the work presented in this talk, we tackle control, and show that we are able to decide whether a TA can be controlled at runtime to ensure opacity. Our method is constructive, in the sense that we can exhibit such a controller.

Verification
Friday November 29, 2024, 11AM, Salle 3052
Raya Elsaleh (Hebrew University of Jerusalem)
The Road to Improving Neural Network Verifiers

Deep neural networks are becoming key components in diverse systems, but these systems often function as “black boxes,” making it difficult to verify their correctness. This opacity is particularly concerning for safety-critical applications like autonomous vehicles, where reliability is essential. Recent advances in neural network verification aim to address these challenges by formally analyzing network behavior. However, verification remains challenging, particularly due to scalability issues and the complexity of DNN verifiers, which can themselves be prone to errors. In this talk, I will review recent developments in neural network verification approaches. Additionally, I will introduce a delta-debugging technique designed to minimize inputs that trigger bugs in the verifiers by reducing network size, thereby streamlining the debugging process for more reliable outcomes.
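To make the delta-debugging idea concrete, here is a rough sketch (not the speaker's tool): starting from a network description that triggers a verifier bug, one greedily drops layers as long as a hypothetical triggers_bug oracle, which would re-run the verifier on the reduced network, still reports the failure.

# Hypothetical sketch of a delta-debugging-style reduction; `triggers_bug` is a
# made-up oracle that re-runs the verifier and reports whether the bug persists.
def reduce_network(layers, triggers_bug):
    """Return a smaller list of layers that still triggers the verifier bug."""
    assert triggers_bug(layers), "the original input must reproduce the bug"
    changed = True
    while changed:
        changed = False
        for i in range(len(layers)):
            candidate = layers[:i] + layers[i + 1:]    # try dropping one layer
            if candidate and triggers_bug(candidate):
                layers = candidate                     # keep the reduction
                changed = True
                break
    return layers                                      # minimal w.r.t. single removals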
Previous talks

Year 2024

Verification
Friday October 4, 2024, 11AM, 3052 and Zoom link
Eric Koskinen (Stevens Institute of Technology)
Inferring Accumulative Effects of Higher Order Programs

Many temporal safety properties of higher-order programs go beyond simple event sequencing and require an automaton register (or “accumulator”) to express, such as input-dependency, event summation, resource usage, ensuring equal event magnitude, computation cost, etc. Some steps have been made towards verifying more basic temporal event sequences via reductions to fair termination [Murase et al. 2016] or some input-dependent properties through deductive proof systems [Nanjo et al. 2018]. However, there are currently no automated techniques to verify the more general class of register-automaton safety properties of higher-order programs. We introduce an abstract interpretation-based analysis to compute dependent, register-automata effects of recursive, higher-order programs. We capture properties of a program’s effects in terms of automata that summarize the history of observed effects using an accumulator register. The key novelty is a new abstract domain for context-dependent effects, capable of abstracting relations between the program environment, the automaton control state, and the accumulator value. The upshot is a dataflow type and effect system that computes context-sensitive effect summaries. We demonstrate our work via a prototype implementation that computes dependent effect summaries (and validates assertions) for OCaml-like recursive higher-order programs. As a basis of comparison, we describe reductions to assertion checking for effect-free programs, and demonstrate that our approach outperforms the prior tools Drift, MoCHi and RCaml/PCSat.

Verification
Monday September 30, 2024, 11AM, 3052 and Zoom link
Catalin Dima (Université Paris-Est Créteil Val de Marne)
Rational synthesis in the commons (based on joint work with Rodica Bozianu, Youssouf Oualhadj and Nicolas Troquard)

Turn-based games on graphs are games where each state is controlled by one and only one player, who decides which edge to follow. Each player has a temporal objective that he tries to achieve. One player is the designated ‘controller’, whose objective captures the desirable outcomes of the whole system. Cooperative rational synthesis is the problem of computing a Nash equilibrium that satisfies the controller’s objective, while non-cooperative rational synthesis is the task of automatically constructing a controller that ensures the satisfaction of the controller's objective against any Nash equilibrium involving the rest of the agents. We extend this setting to a context in which agents have access to common pool resources, represented as non-negative numbers. We consider two types of agents: careless and careful. Careless agents only care about their temporal objective, while careful agents also pay attention not to deplete the system’s resources. We solve the problems of cooperative and non-cooperative rational synthesis in such games involving a single resource, for parity objectives and objectives given by LTL formulas. The results are based on encodings into two-player energy parity games.

Verification
Monday June 17, 2024, 11AM, 4071 and Zoom link
Shibashis Guha (TIFR Mumbai)
Strategy synthesis for global window PCTL

Given a Markov decision process (MDP) M and a formula Φ, the strategy synthesis problem asks whether there exists a strategy σ such that the resulting Markov chain M[σ] satisfies Φ.
This problem is known to be undecidable for the probabilistic temporal logic PCTL. We study a class of formulae that can be seen as a fragment of PCTL where a local, bounded-horizon property is enforced all along an execution. Moreover, we allow for linear expressions in the probabilistic inequalities. This logic is at the frontier of decidability, depending on the type of strategies considered. In particular, strategy synthesis is shown to be decidable when strategies are deterministic, while the problem is undecidable for arbitrary strategies. Joint work with Benjamin Bordais, Damien Busatto-Gaston, and Jean-François Raskin.

Verification
Monday May 27, 2024, 11AM, 3052 and Zoom link
Nicolas Amat (IMDEA)
A Polyhedral Framework for Reachability Problems in Petri Nets

We propose and study a method to accelerate the verification of reachability problems in Petri nets based on structural reductions. This approach, which we call polyhedral reduction, relies on a state space abstraction that combines structural reductions and linear arithmetic constraints on the marking of places. The correctness of this method is based on a new notion of equivalence between nets. Combined with an SMT-based model checker, one can transform a reachability problem about some Petri net into the verification of an equivalent reachability property on a reduced version of this net. We also propose an automated procedure to prove that such an abstraction is correct, exploiting a connection with a class of Petri nets with Presburger-definable reachability sets. In addition, we present a data structure, called the Token Flow Graph (TFG), that captures the particular structure of constraints stemming from structural reductions. We leverage TFGs to efficiently solve two problems. First, to eliminate quantifiers that appear during our transformation, in the updated formula to be checked on the reduced net. Second, to compute the concurrency relation of a net, that is, all pairs of places that can be marked simultaneously in some reachable marking. We apply our approach to several symbolic model checking procedures and introduce a new semi-decision procedure for checking reachability properties in Petri nets based on the Property Directed Reachability (PDR) method. A distinctive feature of this PDR method is its ability to generate verdict certificates that can be verified using an external SMT solver. Our approach and algorithms are implemented in four open-source tools: SMPT for checking reachability properties; Kong for accelerating the computation of concurrent places; Octant for eliminating quantifiers; and Reductron for automatically proving the correctness of polyhedral reductions. We give experimental results about their effectiveness, both for bounded and unbounded nets, using a large benchmark provided by the Model Checking Contest.

Verification
Monday May 13, 2024, 11AM, 3052 and Zoom link
Enrique Román Calvo (IRIF)
Dynamic Partial Order Reduction for Checking Correctness against Transaction Isolation Levels

Modern applications, such as social networking systems and e-commerce platforms, are centered around using large-scale databases for storing and retrieving data. Accesses to the database are typically enclosed in transactions that allow computations on shared data to be isolated from other concurrent computations and resilient to failures. Modern databases trade isolation for performance.
The weaker the isolation level is, the more behaviors a database is allowed to exhibit, and it is up to the developer to ensure that their application can tolerate those behaviors. In this work, we propose stateless model checking algorithms, relying on dynamic partial order reduction, for studying the correctness of such applications. These algorithms work for a number of widely used weak isolation levels, including Read Committed, Causal Consistency, Snapshot Isolation and Serializability. We show that they are complete, sound and optimal, and run with polynomial memory consumption in all cases. We report on an implementation of these algorithms in the context of Java Pathfinder, applied to a number of challenging applications drawn from the literature of distributed systems and databases.

Verification
Monday May 6, 2024, 11AM, 3052 and Zoom link
Giovanni Bernardi (IRIF)
Constructive characterisations of the must-preorder for asynchrony

De Nicola and Hennessy’s must-preorder is a contextual refinement which states that a server q refines a server p if all clients satisfied by p are also satisfied by q. Owing to the universal quantification over clients, this definition does not yield a practical proof method for the must-preorder, and alternative characterisations are necessary to reason on it. I will outline the first characterisations of the must-preorder that are constructive, supported by a mechanisation in Coq, and independent from any calculus: the results pertain to Selinger output-buffered agents with feedback. This is a class of Labelled Transition Systems that captures programs that communicate asynchronously via a shared unordered buffer, as in asynchronous CCS or the asynchronous π-calculus. The results are surprising: the behavioural characterisations devised for synchronous communication carry over as they stand to the asynchronous setting, if servers are enhanced to act as forwarders, i.e. they can input any message as long as they store it back into the shared buffer.

Verification
Monday April 29, 2024, 11AM, 3052 and Zoom link
Niklas Kochdumper (IRIF)
Neural Network Verification using Polynomial Zonotopes

Since artificial intelligence is nowadays also increasingly applied in safety-critical applications such as autonomous driving and human-robot collaboration, there is an urgent need for approaches that can efficiently verify that the corresponding neural networks are safe. In general, verification tasks involving neural networks can be classified into two groups: open-loop verification and closed-loop verification. For open-loop verification one aims to prove certain properties of a standalone neural network, while for closed-loop verification the neural network acts as a controller for a dynamic system. This presentation demonstrates a novel verification technique that is based on a polynomial abstraction of the input-output relation of the single neurons, which proves to be beneficial for both open- and closed-loop neural network verification.

Verification
Monday April 22, 2024, 11AM, 3052 (streamed) and Zoom link
Sidi Mohamed Beillahi (University of Toronto)
Securing Smart Contracts Through Program Synthesis

In decentralized finance (DeFi), malicious adversaries were able to steal over 2 billion dollars in crypto assets over the past two years. One very common strategy malicious adversaries use is to borrow large assets through flash loans to launch their exploits against vulnerable DeFi smart contracts deployed on the Blockchain.
Flash loans are loans that are only valid within a blockchain transaction and must be repaid with fees by the end of that transaction. Unlike normal loans, flash loans allow borrowers to borrow large assets without upfront collateral deposits. In this talk, I will first present a work on the automated synthesis of adversarial transactions that exploit DeFi smart contracts using flash loans. To bypass the complexity of a DeFi protocol and the need for source code analysis, we propose a new technique to approximate the DeFi protocol's functional behaviors using numerical methods (polynomial linear regression and nearest-neighbor interpolation). We then construct an optimization query using the approximated functions of the DeFi protocol to find an adversarial attack, consisting of a sequence of function invocations with optimal parameters, that provides the maximum profit. To improve the accuracy of the approximation, we propose a novel counterexample-driven approximation refinement technique. I will then present a follow-up work studying a category of DeFi attacks known as oracle manipulation attacks and the effectiveness of the current mechanisms used to protect against those attacks. I will show that those mechanisms are inadequate to ensure the safety of DeFi smart contracts when confronted with significant oracle deviations. I will then present a new framework to identify parameters that allow DeFi smart contracts to be secured against oracle manipulation attacks. In the framework, we first perform symbolic analysis on the given contract and construct a summary model of constraints. We then leverage an SMT solver to identify parameter values that allow smart contracts to be secured against oracle deviations. The talk is based on two ICSE 2024 papers that are joint works with Zhiyang Chen, Xun Deng, Andreas Veneris, and Fan Long (University of Toronto) and Cyrus Minwalla and Han Du (Bank of Canada).

Verification
Monday March 25, 2024, 11AM, 3052 and Zoom link
Étienne André (LIPN)
Monitoring cyber-physical systems under uncertainty

Monitoring cyber-physical systems attracts both scientific and practical attention. However, monitoring algorithms suffer from the methodological difficulty of only observing sampled discrete-time signals gathered in a log, while real behaviors are continuous-time signals, which implies some uncertainty. In addition, the monitored properties can themselves feature some uncertainty: e.g., a period in a property to be monitored can be uncertain or even unknown. In this presentation, we present two ways to address uncertainty while monitoring cyber-physical systems. First, we address timed pattern matching in the presence of an uncertain specification. We want to know for which start and end dates in a log, and for what values of the timing parameters, a property holds. For instance, we look for the minimum or maximum period (together with the corresponding start and end dates) for which the property holds. We rely on two solutions, one based on model-checking, and the other one being ad hoc. Second, to mitigate the problem of sampling uncertainties, we introduce a model-bounded monitoring scheme, where we use (limited) prior knowledge about the target system in order to prune interpolation candidates. We express such prior knowledge by linear hybrid automata (LHAs); the LHAs are called bounding models. We rely on reachability techniques for linear hybrid automata using polyhedra to efficiently decide whether the property holds on a given log.
We implemented both approaches, and experiments on realistic benchmarks show that both directions are efficient and practically relevant. This presentation is mainly based on publications in ACM ToSEM (2023) and ACM ToCPS (2022), and is joint work with Masaki Waga (Kyoto University) and Ichiro Hasuo (NII, Tokyo, Japan).

Verification
Monday March 18, 2024, 11AM, 3052 and Zoom link
Sarah Winter (IRIF)
Deterministic Regular Functions of Infinite Words

Regular functions of infinite words are (partial) functions realized by deterministic two-way transducers with infinite look-ahead. Equivalently, Alur et al. have shown that they correspond to functions realized by deterministic Muller streaming string transducers, and to functions defined by MSO-transductions. Regular functions are however not computable in general (for a classical extension of Turing computability to infinite inputs), and we consider in this paper the class of deterministic regular functions of infinite words, realized by deterministic two-way transducers without look-ahead. We prove that it is a well-behaved class of functions: they are computable, closed under composition, characterized by the guarded fragment of MSO-transductions, by deterministic Büchi streaming string transducers, by deterministic two-way transducers with finite look-ahead, and by finite compositions of sequential functions and one fixed basic function called map-copy-reverse. This is joint work with Olivier Carton, Gaëtan Douéneau-Tabot, and Emmanuel Filiot.

Verification
Monday March 11, 2024, 11AM, 3052 and Zoom link
Uli Fahrenberg (LRE / EPITA)
Quantitative Verification: the Good, the Bad, the Ugly

Quantitative verification is the verification of quantitative properties of quantitative systems. This is known to be difficult, notably because sweet spots are rare in-between robustness, compositionality and generalizability. In order to ensure robustness of verification, we replace the Boolean yes-no answers of standard verification with distances. Depending on the application context, many different types of distances are being employed in quantitative verification. Consequently, there is a need for a general theory of system distances which abstracts away from the concrete distances and develops quantitative verification at a level independent of the distance. We develop such a general theory, using games with quantitative objectives to define a quantitative version of van Glabbeek's linear-time-branching-time spectrum. We also extend this to a compositional theory of quantitative specifications, using disjunctive modal transition systems which capture precisely the greatest-fixed-point part of the mu-calculus, i.e. safety properties. We currently do not know how to go beyond safety properties. Another major obstacle is that we do not know how to integrate silent transitions into our framework. Finally, we argue that quantitative verification of real-time and hybrid systems poses other challenges which force us to look beyond our framework. The first 3/4 of this talk are based on the speaker's habilitation thesis, which can be found at https://arxiv.org/abs/2204.11302 or https://hal.science/hal-04206693

Verification
Monday March 4, 2024, 11AM, 3052 and Zoom link
Adrian Francalanza (University of Malta)
An Operational Guide to Monitorability

Monitorability underpins the technique of Runtime Verification because it delineates what properties can be verified at runtime.
Although many monitorability definitions exist, few are defined explicitly in terms of the operational guarantees provided by monitors, i.e., the computational entities carrying out the verification. This work views monitorability as a spectrum, where the fewer guarantees that are required of monitors, the more properties become monitorable. Accordingly, we present a monitorability hierarchy based on this trade-off. For regular specifications, we give syntactic characterisations in Hennessy–Milner logic with recursion for its levels. Finally, we map existing monitorability definitions into our hierarchy. This gives a unified framework that makes the operational assumptions and guarantees of each definition explicit and provides a rigorous foundation that can inform design choices and correctness claims for runtime verification tools. This is joint work with Luca Aceto, Antonis Achilleos, Anna Ingolfsdottir and Karoliina Lehtinen.

Verification
Monday February 26, 2024, 11AM, 3052 and Zoom link
Loïc Germerie Guizouarn (Université Paris-Est Créteil)
Quasi-synchronous communications and verification of distributed systems

Distributed systems are typically based on asynchronous exchanges of messages. Communicating automata are a tool to reason formally about the communications of such systems, allowing errors such as deadlocks or message losses to be detected automatically. Detecting these errors is undecidable in general for systems with two or more participants, and several restrictions of the model have been considered to restore decidability. The subject of this presentation is one of these approaches, based on systems whose executions are realisable with synchronous communications (RSC). The behaviour of these systems approximates synchronous behaviours, where messages are sent and received in an atomic action.

Verification
Monday February 19, 2024, 11AM, 3052 and Zoom link
Dylan Marinho (Telecom SudParis)
Execution-time opacity problems in (parametric) timed automata

Among harmful information leaks, timing information leakage is the ability for an attacker to deduce internal information depending on timing information. In this presentation, I focus on timing leakage through the total execution time, i.e., when a system works as an almost black box and the ability of the attacker is limited to knowing the model and observing the total execution time. I will present the formalization we introduced with timed automata and how we can verify security properties in timed systems. This presentation is mainly based on publications in ACM TOSEM (2022), FTSCS 2022 and ICECCS 2023, and on joint work with Étienne André, Shapagat Bolat, Engel Lefaucheux, Didier Lime and Sun Jun.

Verification
Monday February 12, 2024, 11AM, 3052 and Zoom link
Jawher Jerray (Télécom Paris)
Guaranteed properties of real-time systems under perturbations

Real-time systems have a major impact on human development, especially critical systems that can put human lives at risk if something goes wrong; hence the need to study the behavior of these systems in order to guarantee their correct functioning. Nevertheless, analyzing such systems has never been an easy task, as their complexity is constantly increasing, in addition to the perturbations that may arise during their execution, as well as undefined parameters that may exist. To ensure that a system always produces the expected results and does not fail in any way, a formal verification of its behavior and properties is necessary.
In this work, we study the schedulability of the flight control of a space launcher with unknown parameters and under constraints. Then, we propose a synthesis of the admissible timing values of the unknown parameters by a parametric timed model checker. We increase the complexity of the problem by taking into consideration the switch time between two threads. We extend this work by developing a tool that translates a given real-time system design into parametric timed automata in order to infer some timing constraints ensuring schedulability.

Verification
Monday February 5, 2024, 11AM, Zoom link
Tayssir Touili (LIPN)
On static malware detection

The number of malware samples is growing extraordinarily fast, and malware may cause serious damage. Thus, it is crucial to have efficient up-to-date virus detectors. Existing antivirus systems use various detection techniques to identify viruses, such as (1) code emulation, where the virus is executed in a virtual environment to get detected; or (2) signature detection, where a signature is a pattern of program code that characterizes the virus. A file is declared as a virus if it contains a sequence of binary code instructions that matches one of the known signatures. These techniques are becoming insufficient. Indeed, emulation-based techniques can only check the program's behavior in a limited time interval. As for signature-based systems, it is very easy for virus developers to get around them. Thus, a robust malware detection technique needs to check the behavior (not the syntax) of the program without executing it. We show in this talk how using behavior signatures allows malware to be detected efficiently in a completely static way. We implemented our techniques in a tool, and we applied it to detect several viruses. Our results are encouraging. In particular, our tool was able to detect more than 800 viruses. Several of these viruses could not be detected by well-known antivirus tools such as Avira, Avast, Norton, Kaspersky and McAfee.

Verification
Monday January 15, 2024, 11AM, 3052 and Zoom link
Raphaël Berthon (Aachen University)
Natural Strategic Ability in Stochastic Multi-Agent Systems

Synthesis under partial information often leads to undecidability or very high complexity, but bounding the available memory has recently proved to be an effective solution. In Multi-Agent Systems (MAS), it is also necessary to balance the ability of agents to strategize with memory against the model-checking complexity, but until now this has been restricted to fully deterministic settings. We propose the use of natural strategies to reason about the probabilistic temporal logics PATL and PATL* under some partial information. As a main result we show that, in stochastic MAS, NatPATL model-checking is NP-complete when the active coalition is restricted to deterministic strategies.

Verification
Monday January 8, 2024, 11AM, 3052 and Zoom link
Henry Sinclair-Banks (University of Warwick)
Coverability in VASS Revisited: Improving Rackoff’s Bound to Obtain Conditional Optimality

Vector Addition Systems with States (VASS) are a long-studied model that is equivalent to Petri nets. The coverability problem asks whether there exists a run from a given initial configuration to a configuration that is at least a given target configuration. Coverability is in EXPSPACE (Rackoff '78) and is EXPSPACE-hard already under unary encodings (Lipton '76). Rackoff's upper bound is derived by considering the necessary length of runs that witness coverability.
In this presentation, I will give an improved upper bound on the lengths of such runs. The run length bound can be used to obtain two algorithms: an optimal exponential space algorithm and a conditionally optimal double-exponential time algorithm. I also aim to show the double-exponential time lower bound, which is conditioned upon the Exponential Time Hypothesis (ETH). Based on joint work with Marvin Künnemann, Filip Mazowiecki, Lia Schütze, and Karol Węgrzycki (ICALP’23).

Year 2023

Verification
Monday December 18, 2023, 11AM, 3052 and Zoom link
Munyque Mittelmann (University of Naples Federico II)
Logics for Strategic Reasoning: Recent Developments and Application to Mechanism Design

In recent years, a wealth of logic-based languages has been introduced to reason about the strategic abilities of autonomous agents in multi-agent systems. This talk presents some of these important logical formalisms, namely Alternating-time Temporal Logic and Strategy Logic. We discuss recent extensions of those formalisms to capture different degrees of satisfaction, as well as to handle uncertainty caused by the partial observability of agents and the intrinsic randomness of MAS. Finally, we describe the recent application of those formalisms to Mechanism Design and explain how they can be used either to automatically check that a given mechanism satisfies some desirable property, or to produce a mechanism that does so.

Verification
Monday December 4, 2023, 11AM, 3052 and Zoom link
Marco Campion (INRIA)
Monotonicity and the Precision of Program Analysis

It is widely known that the precision of a program analyzer is closely related to intensional program properties, namely, properties concerning how the program is written. This explains, for instance, the interest in code obfuscation techniques, namely, tools explicitly designed to degrade the results of program analysis by operating syntactic program transformations. Less is known about a possible relation between what the program extensionally computes, namely, its input-output relation, and the precision of a program analyzer. In this paper we explore this potential connection in an effort to isolate program fragments that can be precisely analyzed by abstract interpretation, namely, programs for which there exists a complete abstract interpretation. In the field of static inference of numeric invariants, this happens for programs, or parts of programs, that manifest a monotone (either non-decreasing or non-increasing) behavior. We first formalize the notion of program monotonicity with respect to a given input and a set of numerical variables of interest. A sound proof system is then introduced with judgments specifying whether a program is monotone relative to a set of variables and a set of inputs. The interest in monotonicity is justified because we prove that the family of monotone programs admits a complete abstract interpretation over a specific class of non-trivial numerical abstractions and inputs. This class includes all non-relational abstract domains that refine interval analysis (i.e., at least as precise as the intervals abstraction) and that satisfy a topological convexity hypothesis.
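A toy illustration of the completeness phenomenon discussed above, using a hand-rolled interval domain rather than the paper's proof system: the monotone program x + 3 is analyzed exactly, while the non-monotone x * x loses precision because the two occurrences of x are abstracted independently.

# Illustrative sketch (not from the paper): interval abstraction of two tiny
# programs over the input box [-2, 3].
from itertools import product

def interval_add(a, b):          # abstract transformer for +
    return (a[0] + b[0], a[1] + b[1])

def interval_mul(a, b):          # abstract transformer for *
    prods = [x * y for x, y in product(a, b)]
    return (min(prods), max(prods))

def concrete_hull(f, box):       # best abstraction of the concrete image
    vals = [f(x) for x in range(box[0], box[1] + 1)]
    return (min(vals), max(vals))

box = (-2, 3)
# monotone program x + 3: the interval result coincides with the hull
print(interval_add(box, (3, 3)), concrete_hull(lambda x: x + 3, box))   # (1, 6) (1, 6)
# non-monotone program x * x: the interval result is strictly coarser
print(interval_mul(box, box), concrete_hull(lambda x: x * x, box))      # (-6, 9) (0, 9)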
Verification
Monday November 20, 2023, 11AM, Zoom link
Rômulo Meira Goes (The Pennsylvania State University)
On securing the next generation of critical infrastructure systems: A discrete event systems approach

Cyber-Physical Systems (CPS) provide the foundation for our critical infrastructure systems, such as energy, transportation, and manufacturing, to name a few. Although CPS are already ubiquitous in our society, their security aspects were only recently incorporated into their design process, mainly in response to catastrophic incidents caused by cyber-attacks. One common class of attacks on CPS is called data-deception attacks, which involve an attacker hijacking the CPS sensors. In this talk, we focus on two questions: (1) How can we model and analyze CPS under data-deception attacks? (2) Can we automatically find vulnerabilities of a given CPS to data-deception attacks? We present a general control framework to help engineers detect and address data-deception vulnerabilities in CPS. This framework can answer the following types of questions: Does the attacker go undetected by an intrusion monitor? What is the likelihood of a safety violation? What information does the attacker need to be successful? We demonstrate that this control framework can analyze realistic case studies by detecting several vulnerabilities in a water treatment testbed and an aircraft power distribution testbed. We discuss how our formal approach can improve the security of the next generation of critical infrastructure systems by reducing vulnerabilities to data-deception attacks.

Verification
Monday November 6, 2023, 11AM, 3052 and Zoom link
Bernardo Jacobo Inclán (IRIF)
Bandwidth of Timed Automata: 3 classes

Timed languages contain sequences of discrete events (“letters”) separated by real-valued delays; they can be recognized by timed automata, and represent behaviors of various real-time systems. The notion of bandwidth of a timed language, defined in a previous paper, characterizes the amount of information per time unit encoded in words of the language observed with some precision ε. In this paper, we identify three classes of timed automata according to the asymptotics of the bandwidth of their languages with respect to this precision ε: automata are either meager, with an O(1) bandwidth, normal, with a Θ(log (1/ε)) bandwidth, or obese, with Θ(1/ε) bandwidth. We define two structural criteria and prove that they partition timed automata into these 3 classes of bandwidth, implying that there are no intermediate asymptotic classes. Both criteria are formulated using morphisms from paths of the timed automaton to some finite monoids extending Puri's orbit graphs, and the proofs are based on Simon's factorization forest theorem. Joint work with E. Asarin, A. Degorre, and C. Dima.

Verification
Monday October 16, 2023, 11AM, 3052 and Zoom link
Lucie Guillou (IRIF)
Safety Analysis of Parameterised Networks with Non-Blocking Rendez-Vous

We consider networks of processes that all execute the same finite-state protocol and communicate via a rendez-vous mechanism. When a process requests a rendez-vous, another process can respond to it and they both change their control states accordingly. We focus here on a specific semantics, called non-blocking, where the process requesting a rendez-vous can change its state even if no process can respond to it.
We study the parameterised coverability problem of a configuration in this context, which consists in determining whether there exist an initial number of processes and an execution allowing to reach a configuration bigger than a given one. We show that this problem is EXPSPACE-complete and can be solved in polynomial time if the protocol is partitioned into two sets of states: the states from which a process can request a rendez-vous, and the ones from which it can answer one. We also prove that the problem of the existence of an execution bringing all the processes to a final state is undecidable in our context. These two problems can be solved in polynomial time with the classical rendez-vous semantics.

Verification
Monday October 9, 2023, 11AM, 3052 and Zoom link
Emily Clément (IRIF)
Layered controller synthesis for dynamic multi-agent systems

We present a layered method to solve a multi-agent state problem, involving Timed Automata, SMT and Reinforcement Learning. The first layer of our method, based on Timed Automata, efficiently models the systems and gives us a high-level plan. Then, in a second step, we refine our model and solve it with SMT. We combine these steps into what we call the SWA-SMT solver. A main issue is that it cannot be executed in real time. To overcome this, we use SWA-SMT solutions as the initial training dataset for our final step, which aims to obtain a neural network control policy. We use reinforcement learning to train the policy. We develop and show an example where the use of this initial dataset is crucial for the overall success of the method.

Verification
Monday September 25, 2023, 11AM, 3052 and Zoom link
Masaki Waga (Kyoto University)
Active Learning of Deterministic Timed Automata with Myhill-Nerode Style Characterization

We present an algorithm to learn a deterministic timed automaton (DTA) via membership and equivalence queries. Our algorithm is an extension of the L* algorithm with a Myhill-Nerode style characterization of recognizable timed languages, which is the class of timed languages recognizable by DTAs. We first characterize the recognizable timed languages with a Nerode-style congruence. Using it, we give an algorithm with a smart teacher answering symbolic membership queries in addition to membership and equivalence queries. With a symbolic membership query, one can ask the membership of a certain set of timed words at once. We prove that for any recognizable timed language, our learning algorithm returns a DTA recognizing it. We show how to answer a symbolic membership query with finitely many membership queries. We also show that our learning algorithm requires a polynomial number of queries with a smart teacher and an exponential number of queries with a normal teacher. We applied our algorithm to various benchmarks and confirmed its effectiveness with a normal teacher.

Verification
Monday June 26, 2023, 11AM, Olympe de Gouges 146 and Zoom link
Adrian Francalanza (University of Malta)
If At First You Don’t Succeed: Extended Monitorability through Multiple Executions

In this talk I will discuss recent work on investigating the increase in observational capabilities of monitors that analyse systems over multiple runs. I will illustrate how this augmented monitoring setup can affect the class of properties that can be verified at runtime, focussing in particular on branching-time properties expressed in the modal mu-calculus.
Although branching-time properties are considered to be the preserve of static verification techniques such as model-checking, our preliminary results show that the extended monitoring setup can be used to systematically extend previously established monitorability limits. If time permits, I will also discuss bounds that capture the correspondence between the syntactic structure of a branching-time property and the number of system runs required to conduct adequate runtime verification. This is joint work with Antonis Achilleos and Jasmine Xuereb.

Verification
Monday June 19, 2023, 11AM, Olympe de Gouges 146 and Zoom link
Mohammed Foughali (IRIF)
Compositional Verification of Embedded Real-Time Systems

In an embedded real-time system (ERTS), real-time tasks (software) are typically executed on a multicore shared-memory platform (hardware). The number of cores is usually small, contrasted with a larger number of complex tasks that share data to collaborate. Since most ERTSs are safety-critical, it is crucial to rigorously verify their software against various real-time requirements under the actual hardware constraints (concurrent access to data, number of cores). Both the real-time systems and the formal methods communities provide elegant techniques to realize such verification, which nevertheless face major challenges. For instance, model checking (formal methods) suffers from the state-space explosion problem, whereas schedulability analysis (real-time systems) is pessimistic and restricted to simple task models and schedulability properties. In this paper, we propose a scalable and generic approach to formally verify ERTSs. The core contribution is enabling, through joining the forces of both communities, compositional verification to tame the state-space size. To that end, we formalize a realistic ERTS model where tasks are complex with an arbitrary number of jobs and job segments, then show that compositional verification of such a model is possible, using a hybrid approach (from both communities), under the state-of-the-art partitioned fixed-priority (P-FP) scheduling algorithm with limited preemption. The approach consists of the following steps, given the above ERTS model and scheduling algorithm. First, we compute fine-grained data sharing overheads for each job segment that reads or writes some data from the shared memory. Second, we generalize an algorithm that, aware of the data sharing overheads, computes an affinity (task-core allocation) guaranteeing the schedulability of hard-real-time (HRT) tasks. Third, we devise a timed automata (TA) model of the ERTS, which takes into account the affinity, the data sharing overheads and the scheduling algorithm, on which we demonstrate that various properties can be verified compositionally, i.e., on a subset of cores instead of the whole ERTS, therefore reducing the state-space size. In particular, we enable the scalable computation of tight worst-case response times (WCRTs) and other tight bounds separating events on different cores, thus overcoming the pessimism of schedulability analysis techniques. We fully automate our approach and show its benefits on three real-world complex ERTSs, namely two autonomous robots and an automotive case study from the WATERS 2017 industrial challenge.
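For background, the kind of bound this analysis aims to tighten is the classical uniprocessor fixed-priority response-time recurrence R = C_i + sum over higher-priority tasks j of ceil(R / T_j) * C_j. The sketch below iterates it on a made-up task set; it is not the paper's compositional TA-based method and ignores data-sharing overheads.

# Minimal sketch of classical fixed-priority response-time analysis, assuming
# implicit deadlines (deadline = period), no blocking and no overheads.
from math import ceil

def response_time(C, T, i):
    """WCRT bound for task i; tasks 0..i-1 are assumed to have higher priority."""
    R = C[i]
    while True:
        interference = sum(ceil(R / T[j]) * C[j] for j in range(i))
        R_next = C[i] + interference
        if R_next == R:
            return R            # fixed point reached: response-time bound
        if R_next > T[i]:
            return None         # deadline missed: task i is not schedulable
        R = R_next

C = [1, 2, 3]                   # worst-case execution times (made-up example)
T = [4, 6, 12]                  # periods; task 0 has the highest priority
print([response_time(C, T, i) for i in range(3)])   # [1, 3, 10]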
Verification
Monday June 12, 2023, 11AM, Olympe de Gouges 147 and Zoom link
Themistoklis Melissourgos (University of Essex)
Strong Inapproximability Results on Pizza-Sharing

We study the computational complexity of finding a solution in straight-cut and square-cut pizza sharing, where the objective is to fairly divide 2-dimensional resources. We show that, for both the discrete and continuous versions of the problems, finding an approximate solution is PPA-complete even for very large constant additive approximation, while finding an exact solution for the square-cut problem is FIXP-hard and in BU. We also study the corresponding decision variants of the problems and their complexity. All our hardness results for the continuous versions apply even when the input is very simple, namely, a union of unweighted squares or triangles. Joint work with Argyrios Deligkas and John Fearnley.

Verification
Monday June 5, 2023, 11AM, Olympe de Gouges 146 and Zoom link
Pedro Ribeiro (University of York)
Co-verification for robotics: simulation and verification by proof

Robots are facilitating new business models, from food delivery to healthcare. Current engineering practice cannot yet provide the formal guarantees needed to allay the safety concerns of regulators. Simulation, a technique favoured by practitioners, provides an avenue for experimenting with different scenarios before committing to expensive tests and proofs. In this talk, I will discuss how different models may be brought together for the (co-)verification of system properties, with simulation complementing formal verification. This will be explored in the context of the model-driven RoboStar framework, which clearly identifies models of the software, hardware, and scenario, and has heterogeneous formal semantics amenable to verification using model-checkers and theorem provers.

Verification
Monday May 22, 2023, 11AM, Olympe de Gouges 146 and Zoom link
Igor Walukiewicz (CNRS, Bordeaux University)
Active learning deterministic negotiations

The main obstacle in learning distributed systems is the absence of a canonical distributed device for a given behavior. This is why deterministic negotiations without deadlock are interesting in this context. We present an Angluin-style active learning algorithm for this model. Deterministic negotiations are models of distributed systems, a kind of Petri nets or Zielonka automata with additional structure. We show that this additional structure, together with the assumption of deadlock freedom, allows minimizing such negotiations. Angluin's L* algorithm can learn a regular language with polynomially many queries to the Teacher. Active learning algorithms have been designed for many extensions of deterministic finite automata. The challenge in adapting this approach to negotiations is in extracting relevant information from a Teacher's answer: if a teacher says that an execution should not be accepted, it is not clear which parallel component in a negotiation should be modified. We present a learning algorithm whose complexity, measured in the number of queries to the Teacher, is practically the same as for the standard algorithm for finite automata. This is interesting, as a finite automaton representing a negotiation can be exponentially bigger than the negotiation.
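To make the query model being counted here concrete, the following toy Teacher answers the two kinds of queries an Angluin-style learner issues, for the regular language "even number of a's". Membership is answered exactly, while equivalence is only approximated by bounded-length testing; the whole class is a made-up stand-in, not the negotiation-learning setup of the talk.

# Hypothetical Teacher for Angluin-style learning; equivalence is approximate.
from itertools import product

class Teacher:
    def __init__(self):
        self.membership_queries = 0
        self.equivalence_queries = 0

    def member(self, word):                       # membership query
        self.membership_queries += 1
        return word.count("a") % 2 == 0

    def equivalent(self, hypothesis, max_len=6):  # equivalence query (bounded test)
        self.equivalence_queries += 1
        for n in range(max_len + 1):
            for w in map("".join, product("ab", repeat=n)):
                if hypothesis(w) != (w.count("a") % 2 == 0):
                    return w                      # counterexample for the learner
        return None                               # no counterexample found

teacher = Teacher()
print(teacher.member("ab"))                       # False: one 'a' is odd
print(teacher.equivalent(lambda w: True))         # "a" is returned as a counterexample
print(teacher.membership_queries, teacher.equivalence_queries)   # 1 1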
Verification
Monday May 15, 2023, 11AM, Olympe de Gouges 146 and Zoom link
Salim Chehida (Université Grenoble Alpes)
A Formal MDE Framework for Inter-DSL Collaboration

In order to master the complexity of a system at the design stage, several models have to be defined and combined. However, when heterogeneous and independent DSLs are used to define these models, there is a need to explicitly compose their semantics. While the composition of static semantics of DSLs is straightforward, the coordination of their execution semantics is still challenging. This issue is generally called inter-DSL collaboration. In this paper, we propose a formal Model Driven Engineering (MDE) framework built on the Meeduse language workbench, which we extend with the Business Process Model and Notation (BPMN). Meeduse allows DSLs to be instrumented with formal semantics using the B method. BPMN provides an easy-to-use notation to define the coordination of the execution semantics of these DSLs. A transformation of BPMN models into the Communicating Sequential Processes (CSP) formal language enables animation and verification. Our approach is successfully demonstrated by modeling the collaboration of two DSLs from a real case study.

Verification
Tuesday May 9, 2023, 2PM, Olympe de Gouges 146 and Zoom link
Stéphane Gaubert (INRIA and CMAP, Ecole polytechnique, IP Paris, CNRS)
Universal Complexity Bounds Based on Value Iteration for Stochastic Games and Entropy Games

We develop value iteration-based algorithms to solve in a unified manner different classes of combinatorial zero-sum games with mean-payoff type rewards. These algorithms rely on an oracle, evaluating the dynamic programming operator up to a given precision. We provide a basic bound, showing in essence that the number of calls to the oracle needed to determine exact optimal (positional) strategies is, up to a factor polynomial in the dimension, of order R/sep, where the “separation” sep is defined as the minimal difference between distinct values arising from strategies, and R is a metric estimate, involving the norm of approximate sub- and super-eigenvectors of the dynamic programming operator. We illustrate this method by two applications. The first one is a new proof, leading to improved complexity estimates, of a theorem of Boros, Elbassioni, Gurvich and Makino, showing that turn-based mean-payoff games with a fixed number of random positions can be solved in pseudo-polynomial time. The second one concerns entropy games, a model introduced by Asarin, Cervelle, Degorre, Dima, Horn and Kozyakin. The rank of an entropy game is defined as the maximal rank among all the ambiguity matrices determined by strategies of the two players. We show that entropy games with a fixed rank, in their original formulation, can be solved in polynomial time, and that an extension of entropy games incorporating weights can be solved in pseudo-polynomial time under the same fixed rank condition. We shall also discuss a more recent, alternative approach, combining the ideas of relative value iteration and of Krasnoselskii-Mann damping. This leads to improved parameterized bounds for concurrent mean-payoff games, under an irreducibility assumption. The proofs are based on an operator approach, exploiting tools from non-linear Perron-Frobenius theory and variational analysis. The talk will include an introductory survey of this operator approach. This is based on a series of works, with Akian, Grand-Clément, Guillaud, Allamigeon, Katz, Skomra, and more recently Naepels and Terver.
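As a small illustration of plain value iteration with a dynamic programming (Shapley) operator on a toy turn-based mean-payoff game (this is only the baseline scheme, not the oracle-based bounds or the damped iteration of the talk), iterating the operator and dividing by the number of iterations approximates each state's mean-payoff value.

# Toy mean-payoff game; game[s] = (owner, [(reward, successor), ...]).
game = {
    "s0": ("MAX", [(0, "s1"), (3, "s0")]),
    "s1": ("MIN", [(1, "s0"), (5, "s2")]),
    "s2": ("MAX", [(2, "s2")]),
}

def dp_operator(v):
    """One application of the dynamic programming operator."""
    new_v = {}
    for s, (owner, edges) in game.items():
        outcomes = [r + v[t] for r, t in edges]
        new_v[s] = max(outcomes) if owner == "MAX" else min(outcomes)
    return new_v

v = {s: 0.0 for s in game}
n = 10_000
for _ in range(n):
    v = dp_operator(v)
print({s: round(v[s] / n, 3) for s in game})   # {'s0': 3.0, 's1': 2.0, 's2': 2.0}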
Verification
Monday April 17, 2023, 11AM, Olympe de Gouges 146 and Zoom link
Björn Brandenburg (MPI SWS)
PROSA: Trustworthy Schedulability Analysis for Safety-Critical Real-Time Systems

*Real-time systems* are computing systems subject to stringent timing constraints and found in many safety-critical domains (e.g., avionics, automotive, robotics). An essential step in the safety certification of such systems is *schedulability analysis*: a category of formal methods that assess the system's temporal correctness (i.e., compliance with the stated timing constraints). However, how much *trust* should we place in the results of a schedulability analysis, and more generally, results obtained with formal methods, in the certification of critical systems? Using schedulability analysis as a specific example, I will examine several ways in which analysis results might be unsound or otherwise unsatisfying and argue that there is ample room for reasonable doubt. However, that begs the question: why should industry practitioners adopt methods conceived to increase confidence in a system's (temporal) correctness if these methods are not so confidence-inspiring after all? Formal verification offers a path out of this conundrum. While the implied effort and learning curve may initially seem off-putting, I will argue that it is, in fact, both viable and desirable to verify schedulability analyses formally. As a proof of concept, I will present PROSA, a Coq framework for machine-checked schedulability analysis, which we have been developing since 2015. Out of the box, PROSA provides machine-checked and hence *trustworthy* schedulability analyses suitable for use in critical systems. Additionally, PROSA has enabled us to find more general results and derive explainable analyses, as I will explain in my talk. Finally, I will report on two ongoing projects, one seeking to close the “model gap” between PROSA and a system's codebase and one aiming to enable mathematically sound below-worst-case provisioning.

Verification
Monday April 3, 2023, 11AM, 1007 and Zoom link
Thomas Nowak (ENS Paris-Saclay)
Reaching Consensus in Hostile Environments

Reaching consensus in a distributed system is one of the most fundamental problems studied in distributed computing. It is non-trivial due to uncertainties related to unsuccessful communication and process faults. In particular, the celebrated FLP impossibility result shows why scheduling an event via an asynchronous communication medium such as email is difficult. In this talk, I will present our recent results on exact and approximate consensus in hostile environments such as dynamic networks (e.g., due to mobility of agents) and synthetic bacterial cultures.

Verification
Monday March 27, 2023, 11AM, 1007 and Zoom link
Lénaïg Cornanguer & Christine Largouët (IRISA Rennes)
TAG: Learning Timed Automata from Logs

Event logs are often one of the main sources of information to understand the behavior of a system. While numerous approaches have extracted partial information from event logs, we aim at inferring a global model of a system from its event logs. We consider real-time systems, which can be modeled with Timed Automata: our approach is thus a Timed Automata learner. There is a handful of related approaches; however, they might require a lot of parameters or produce Timed Automata that are either nondeterministic or lack precision.
In contrast, we propose an approach, called TAG, that requires only one parameter and learns, without any a priori knowledge, a deterministic Timed Automaton having a good tradeoff between accuracy and automaton complexity. This allows getting an interpretable and accurate global model of the considered real-time system. Experiments compare our approach to related work and demonstrate its merits.

Verification
Monday March 20, 2023, 11AM, 1007 and Zoom link
Eric Munday (University of Edinburgh)
Strategy complexity of lim sup in countable MDPs

In this talk we look at the strategy complexity of a lim sup objective in countable MDPs equipped with transition-based rewards. This lim sup objective generalises the Büchi objective, which is a well-studied objective in verification. We will sketch three upper-bound results: one ε-optimal and two different optimal upper bounds. We will take special care to highlight the techniques common to each proof, which will hopefully convey the essence of what characterises lim sup.

Verification
Monday March 13, 2023, 11AM, 1007 and Zoom link
Daniel Neider (TU Dortmund)
Reinforcement Learning with (Stochastic) Reward Machines

Despite its great success, reinforcement learning struggles when the reward signal is sparse and temporally extended (e.g., in cases where the agent has to perform a complex series of tasks in a specific order). To expedite the learning process in such situations, a particular form of finite-state machines, called reward machines, has recently been shown to help immensely. However, designing a proper reward machine for the task at hand is challenging and remains a tedious and error-prone manual task. In this presentation, we will survey recent approaches that intertwine reinforcement learning and the inference of reward machines, thereby eliminating the need to craft a reward machine by hand. At their heart, these methods transform the inference task into a series of constraint satisfaction problems that can be solved using off-the-shelf SAT and SMT solvers. We will see how this idea can be used to integrate user-provided advice into the learning process and how it deals with stochastic reward signals. Moreover, we will briefly discuss theoretical properties and hint at empirical evidence demonstrating that reinforcement learning with reward machines outperforms existing methods, such as hierarchical and deep reinforcement learning.

Verification
Monday March 6, 2023, 11AM, 1007 and Zoom link
Thomas Chatain (ENS Paris-Saclay)
Towards Conformance Checking for Timed Process Models

Conformance checking techniques assess the suitability of a process model in representing an underlying process, observed through a collection of real executions. An important problem for conformance checking is to align a log trace with a model, that is, to find the minimal changes necessary to correct a new observation to conform to a process model. The subject of this work is to study conformance checking for timed models, that is, process models that consider both the sequence of events in a process as well as the timestamps at which each event is recorded. We formulate the problem of timed alignment and solve two cases, each corresponding to a different metric over timed traces.
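A minimal untimed sketch of the alignment cost mentioned above, assuming for illustration that the model is simply given as a finite set of runs (the talk's setting, with timed models and timestamp-aware metrics, is richer): synchronous moves are free, while each "move on log only" or "move on model only" costs 1.

# Illustrative alignment-cost computation by dynamic programming.
def alignment_cost(trace, run):
    n, m = len(trace), len(run)
    # dp[i][j] = minimal cost to align trace[:i] with run[:j]
    dp = [[i + j for j in range(m + 1)] for i in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            best = min(dp[i - 1][j] + 1,            # move on log only
                       dp[i][j - 1] + 1)            # move on model only
            if trace[i - 1] == run[j - 1]:
                best = min(best, dp[i - 1][j - 1])  # synchronous move
            dp[i][j] = best
    return dp[n][m]

def optimal_alignment(trace, model_runs):
    return min(alignment_cost(trace, run) for run in model_runs)

model_runs = [("a", "b", "c"), ("a", "c", "b")]
print(optimal_alignment(("a", "c", "c"), model_runs))   # 2: one log move + one model move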
Verification
Monday February 20, 2023, 11AM, 1007 and Zoom link
Kevin Zagalo (Inria Paris)
Schedulability of stationary real-time systems

In this talk, we show how queueing theory limit theorems can be used to define the domain of feasibility of probabilistic real-time systems and provide sufficient conditions for their schedulability on a single processor under a static-priority policy. We will see how we can approximate response times with the first-passage time of a Brownian motion, and build a scheduling test from this approximation. We will also discuss a possible application through an example.

Verification
Monday February 13, 2023, 11AM, 1007 and Zoom link
Rémi Morvan (LaBRI)
Universal algorithms for parity games and nested fixpoints

The problem of solving parity games, or equivalently model-checking for the mu-calculus, is known to be both in NP and co-NP: hence, it is widely believed that it is not NP-complete (nor coNP-complete). Yet, for 30 years, we have failed to find a polynomial-time algorithm solving it, despite active research on the topic. The first quasi-polynomial algorithm was found in 2017. Soon after, the McNaughton-Zielonka algorithm (an exponential-time recursive algorithm, arguably one of the simplest algorithms for solving parity games, and the most effective in practice) was tweaked, first by Parys and then by Lehtinen, Schewe and Wojtczak, to run in quasipolynomial time. In some sense, these algorithms force the McNaughton-Zielonka algorithm to skip enough recursive calls to run in quasipolynomial time, but not too many, so that the output remains correct. We introduce a meta-algorithm that solves parity games, parametrized by trees, which precisely guide the recursive calls made by the algorithm, and show that the output is correct when the trees satisfy some combinatorial property called universality, introduced by Fijalkow and co-authors. The McNaughton-Zielonka algorithm and its variations can all three be seen as instantiations of this meta-algorithm on particular classes of universal trees. I will also emphasize and reformulate these ideas in the slightly more general context of nested fixpoints over finite lattices, and show that these algorithms can be turned into symbolic algorithms that only use logarithmic symbolic space. The talk is mainly based on a joint work with Marcin Jurdziński and K. S. Thejaswini. The results were proven independently by André Arnold, Damian Niwiński and Paweł Parys, for nested fixpoints.

Verification
Monday February 6, 2023, 11AM, 1007 and Zoom link
Claudio Gomes (University of Aarhus)
Application of formal methods to verification of self-adaptation loops in digital twins

The performance and reliability of Cyber-Physical Systems are increasingly aided through the use of digital twins, which mirror the static and dynamic behaviour of a Cyber-Physical System (CPS) in software. Digital twins enable the development of self-adaptive CPSs which reconfigure their behaviour in response to novel environments. It is crucial that these self-adaptations are formally verified at runtime, to avoid expensive re-certification of the reconfigured CPS. In this talk, I discuss formally verified self-adaptation in a digital twinning system, by constructing a non-deterministic model which captures the uncertainties in the system behaviour after a self-adaptation.
We use Signal Temporal Logic to specify the safety requirements the system must satisfy after reconfiguration and employ formal methods based on verified monitoring over Flow* flowpipes to check these properties at runtime. This gives us a framework to predictively detect and mitigate unsafe self-adaptations before they can lead to unsafe states in the physical system.
Verification Monday January 30, 2023, 11AM, 1007 and Zoom link Sylvain Perifel (IRIF) Deterministic pushdown automata can compress some normal sequences (Joint work with O. Carton.) Normality was introduced by É. Borel more than one hundred years ago. A real number is normal to an integer base if, in its infinite expansion expressed in that base, all blocks of digits of the same length have the same limiting frequency. Normality can also be characterized by non-compressibility by finite-state machines. In this talk, we will present a new result showing that pushdown machines, even deterministic ones, can compress some normal sequences. This solves positively a question left open by V. Becher, P. A. Heiber and O. Carton.
Verification Monday January 30, 2023, 11AM, 3052 and Zoom link Sarah Winter (IRIF) To be announced.
Verification Monday January 23, 2023, 11AM, 1007 and Zoom link Stefan Kiefer (University of Oxford) On the state complexity of complementing unambiguous finite automata Given two finite automata A1, A2, recognising languages L1, L2, respectively, the state complexity of union (or intersection, or complement, etc.) is how many states (in terms of the number of states in A1, A2) may be needed in the worst case for an automaton that recognises L1 union L2 (or L1 intersect L2, or the complement of L1, etc.). The state complexity of complementing unambiguous finite automata has long been open, and as late as 2015 Colcombet asked whether unambiguous automata can be complemented with a polynomial blowup. I will report on progress on this question in recent years, both in terms of lower and upper bounds. For the currently best lower bound, techniques from communication complexity play an essential role. Online seminar streamed in room 1007.
Verification Monday January 16, 2023, 11AM, 1007 and Zoom link Benjamin Bordais (LMF) Playing (almost-)optimally in concurrent games with reachability, Büchi and co-Büchi objectives In a two-player win/lose concurrent graph game, two players at a given state concurrently (i.e. simultaneously) choose an action (among a finite set of available actions) which leads to another state. The process is repeated indefinitely, thus forming an infinite sequence of states. The winner of the game is determined by an objective (i.e. a subset of infinite sequences of states). Concurrent games are more general than their turn-based counterparts, in which, at each state, only one player chooses the next state to visit. However, this comes at the cost of not enjoying the desirable properties that turn-based games do. (For instance, in turn-based games with the parity objective, there are always optimal strategies that are pure and positional, whereas optimal strategies do not exist in general in concurrent games.) In this talk, I will illustrate the turn-based and concurrent formalisms and show on specific examples why concurrent games behave more poorly than turn-based games. Then I will present how to build concurrent games that are more general than turn-based games but still enjoy properties similar to those of turn-based games for specific objectives, namely reachability, Büchi and co-Büchi.
This is a joint work with my PhD supervisors Patricia Bouyer and Stéphane Le Roux.
Verification Monday January 9, 2023, 11AM, 1007 and Zoom link Petr Novotný (Masaryk University) Formal methods meet Monte Carlo tree search In this talk, we will survey recent advances towards safe and risk-aware probabilistic decision-making. We will study the problem of controlling an agent, represented by a Markov decision process (MDP), so as to maximize the agent’s expected utility while ensuring that the agent’s policy does not violate a given safety constraint. Problems of this form are routinely studied in the verification and operations research literature (e.g. in the context of constrained MDPs). However, our focus will be on solving MDPs too large or otherwise complex (e.g. due to the presence of partial observability) to be handled by exact algorithms. To this end, we will present a framework combining Monte Carlo tree search, an efficient heuristic algorithm capable of solving very large models (and staying at the core of, e.g., AlphaZero and related algorithms), with techniques inspired by probabilistic verification. The resulting algorithms are capable of solving models beyond the scope of exact algorithms while providing practical (and sometimes even theoretically guaranteed) assurances about satisfaction of the safety constraint. We will examine the presented framework in several contexts, touching topics such as 2-player graph games (discounted-payoff/energy/consumption objectives, multiobjective games), constrained MDPs, risk measures, permissive controller synthesis, shielding, and reinforcement learning. Online seminar streamed in room 1007.
Year 2022
Verification Monday December 12, 2022, 11AM, 1007 and Zoom link Salim Chouai (Mohammed VI University) Synthesis of Accuracy Specification under Differential Privacy Differential privacy (DP) is a mathematical framework for developing statistical computations with provable guarantees of privacy and accuracy. In contrast to the privacy component of DP, which has a clear mathematical and intuitive meaning, the accuracy component of differential privacy does not have a generally accepted definition; accuracy claims of differential privacy algorithms vary from algorithm to algorithm and are not instantiations of a general definition. We have identified common themes in existing ad hoc definitions and introduced an alternative notion of accuracy parametrized by a reference specification φ. We say that a DP algorithm is (μ, β)-accurate w.r.t. φ iff φ is ‘almost true’ with tolerance μ and a high probability 1 − β. We proposed a synthesis method that takes as input the program specification φ and produces a probabilistic accuracy statement. Our approach applies to programs that are made DP by introducing random noise to their input. The approach is systematic and relies only on logical reasoning. It uses axiomatic properties of probability distributions combined with FO deductive techniques in order to derive a probabilistic accuracy specification about the DP program from the specification of the original program. In order to make tighter accuracy statements, we proposed an extension to our approach that takes into account assumptions about the input values and reasons about accuracy without compromising the privacy.
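A standard concrete instance of such a (μ, β)-accuracy statement is the Laplace mechanism, which adds Laplace noise of scale Δ/ε to a query of sensitivity Δ: since the noise exceeds μ in absolute value with probability exp(−με/Δ), the mechanism is (μ, β)-accurate for μ = (Δ/ε)·ln(1/β). The sketch below checks this bound empirically; it is only an illustration of the notion, not the synthesis method of the talk, and the parameter values are arbitrary.

```python
# Illustration only: (mu, beta)-accuracy of the standard Laplace mechanism.
import numpy as np

sensitivity, epsilon, beta = 1.0, 0.5, 0.05
scale = sensitivity / epsilon
mu = scale * np.log(1 / beta)          # chosen so that P(|noise| > mu) = exp(-mu/scale) = beta

rng = np.random.default_rng(0)
noise = rng.laplace(loc=0.0, scale=scale, size=100_000)
failure_rate = np.mean(np.abs(noise) > mu)
print(f"mu = {mu:.2f}, empirical P(error > mu) = {failure_rate:.4f}, target beta = {beta}")
```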
Verification Monday December 5, 2022, 11AM, 1007 and Zoom link Kohei Suenaga (Kyoto University) HELMHOLTZ: A Verifier for Tezos Smart Contracts Based on Refinement Types (Joint work with Yuki Nishida, Hiromasa Saito, Ran Chen, Akira Kawata, Jun Furuse, and Atsushi Igarashi) A smart contract is a program executed on a blockchain, based on which many cryptocurrencies are implemented, and is being used for automating transactions. Due to the large amount of money that smart contracts deal with, there is a surging demand for a method that can statically and formally verify them. This talk describes our type-based static verification tool HELMHOLTZ for Michelson, which is a statically typed stack-based language for writing smart contracts that are executed on the blockchain platform Tezos. HELMHOLTZ is designed on top of our extension of Michelson's type system with refinement types. HELMHOLTZ takes a Michelson program annotated with a user-defined specification written in the form of a refinement type as input; it then typechecks the program against the specification based on the refinement type system, discharging the generated verification conditions with the SMT solver Z3. We briefly introduce our refinement type system for the core calculus Mini-Michelson of Michelson, which incorporates characteristic features such as compound datatypes (e.g., lists and pairs), higher-order functions, and invocation of another contract. HELMHOLTZ successfully verifies several practical Michelson programs, including one that transfers money to an account and that checks a digital signature.
Verification Monday November 28, 2022, 11AM, 1007 and Zoom link Nicolas Waldburger (IRISA − Université de Rennes 1) Parameterized Safety Verification of Round-Based Shared-Memory Systems We consider the parameterized verification problem for distributed algorithms where the goal is to develop techniques to prove the correctness of a given algorithm regardless of the number of participating processes. Motivated by an asynchronous binary consensus algorithm [Aspnes, 2002], we consider round-based distributed algorithms communicating with shared memory. A particular challenge in these systems is that 1) the number of processes is unbounded, and, more importantly, 2) there is a fresh set of registers at each round. A verification algorithm thus needs to manage both sources of infinity. In this setting, we prove that the safety verification problem, which consists in deciding whether all possible executions avoid a given error state, is PSPACE-complete. For negative instances of the safety verification problem, we also provide exponential lower and upper bounds on the minimal number of processes needed for an error execution and on the minimal round on which the error state can be covered.
Verification Monday November 21, 2022, 11AM, 1007 and Zoom link Eric Goubault (École Polytechnique) Verification of neural nets by abstract interpretation (Joint work with Sylvie Putot). In this talk, we will discuss range analyses for feedforward neural networks, which are basic primitives for applications such as robustness of neural networks, compliance to specifications and reachability analysis of neural-network feedback systems. We will describe two current works in progress.
The first one focuses on analyzing networks in isolation, and in particular ReLU (rectified linear unit) feedforward neural nets, which present specific difficulties: approaches that exploit derivatives do not apply in general, the number of patterns of neuron activations can be quite large even for small networks, and convex approximations are generally too coarse. We use here set-based methods and abstract interpretation, which have been very successful in coping with similar difficulties in classical program verification. We will present an approach that abstracts ReLU feedforward neural networks using tropical polyhedra. The second one will show how to analyze networks in a closed feedback loop, e.g. as a controller in a Cyber-Physical System (CPS). We will focus here on differentiable activation functions and will show how Taylor models, which have been instrumental in the analysis of CPS, can be leveraged for the analysis of networks in the loop, even allowing for computing guaranteed (robust) inner- and outer-approximations.
Verification Monday November 14, 2022, 11AM, 1007 and Zoom link Philip Offtermatt (Université de Sherbrooke) The Complexity of Soundness in Workflow Nets Workflow nets are a popular variant of Petri nets that allow for the algorithmic formal analysis of business processes. The central decision problems concerning workflow nets deal with soundness, where the initial and final configurations are specified. Intuitively, soundness states that from every reachable configuration one can reach the final configuration. In recent work, we settle the widely open complexity of the three main variants of soundness: classical, structural and generalised soundness. The first two are EXPSPACE-complete, and, surprisingly, the latter is PSPACE-complete, thus computationally simpler. In this talk, I will 1) introduce the soundness problem and its variants, and 2) give insights into the complexity results for the three variants, mostly focussing on the upper bounds.
Verification Monday November 7, 2022, 11AM, 3052 and Zoom link Lorenzo Clemente (University of Warsaw) Zeroness and equality problems for weighted register automata over equality data We study the zeroness and equality problems for weighted register automata over equality data. We reduce these problems to the problem of counting the number of orbits of runs of the automaton. We show that the resulting orbit-counting function satisfies a system of bidimensional linear recursive equations with polynomial coefficients (linrec), which generalises analogous recurrences for the Stirling numbers of the second kind. We then show that zeroness / equality of weighted register automata reduces (in exponential time) to the zeroness problem for linrec sequences. While such a counting approach is classical and has successfully been applied to weighted finite automata over finite alphabets and weighted grammars over a unary alphabet, its application to register automata over infinite alphabets is novel. We provide an algorithm to decide the zeroness problem for the linrec sequences arising from orbit-counting functions based on skew polynomials and variable elimination. This algorithm has elementary complexity. The complexity can be improved by computing the Hermite normal form of matrices over a skew polynomial field, which leads to an EXPTIME decision procedure for the zeroness problem of linrec sequences, and a 2-EXPTIME decision procedure for the zeroness / equality problems for weighted register automata over equality data.
These results have appeared in STACS'21 in joint work with Corentin Barloy. It has been shown in LICS'21 by Bojańczyk, Klin, and Moerman that the complexity of zeroness / equality of weighted register automata can be improved to EXPTIME, using novel techniques involving a deep study of orbit-finite vector spaces.
Verification Monday October 24, 2022, 11AM, 1007 and Zoom link Govind R (IIT Bombay) Simulations for Event-Clock Automata Event-clock automata are a well-known subclass of timed automata which enjoy admirable theoretical properties, e.g., determinizability, and are practically useful to capture timed specifications. However, unlike for timed automata, there exist no implementations for event-clock automata. A main reason for this is the difficulty in adapting zone-based algorithms, critical in the timed automata setting, to the event-clock automata setting. This difficulty was recently studied in [Geeraerts et al. 2011, 2014], where the authors also proposed a solution using extrapolations. In our work, we propose an alternative zone-based algorithm, using simulations for finiteness, to solve the reachability problem for event-clock automata. Our algorithm exploits the G-simulation framework, which is the coarsest known simulation relation for reachability, and has been recently used for advances in other extensions of timed automata.
Verification Tuesday October 18, 2022, 4PM, 1007 and Zoom link Edon Kelmendi (MPI) The Set of Positive Entries of a Linear Recurrence Sequence Linear recurrence sequences (LRS) are infinite sequences of integers whose nth entry is a linear combination of the previous d entries, for a fixed d. The set of indices that correspond to the positive entries of such a sequence is called its positivity set. Given an LRS, deciding whether its positivity set is empty or not is a long-standing open problem. Its solution would imply decidability of the halting problem for linear loops, it would allow us to effectively solve a family of exponential Diophantine equations (the Skolem problem), and it would give a profound understanding of some aspects of transcendental numbers, to name just a few corollaries. In this talk I will discuss a statistical facet of the positivity set, namely its density as a subset of the positive integers. I will show that this quantity is equal to the volume of a certain constructible subset of Euclidean space, and how to approximate said volume using a result of Koiran. Time permitting, I will tell you how, in pursuit of a procedure to decide whether the density is larger than some given rational number, one is led to questions about the algebraic relations among periods, and how in certain cases the difficulties are overcome with Chebyshev polynomials.
Verification Monday October 10, 2022, 11AM, 3052 and Zoom link Cristina Seceleanu (Mälardalen University) Reinforcement Learning for Mission Plan Synthesis of Autonomous Vehicles Computing a mission plan for an autonomous vehicle consists of path planning and task scheduling, which are two outstanding problems in the design of multiple autonomous vehicles. Both problems can be solved by the use of exhaustive search techniques such as model checking and algorithmic game theory. However, model checking suffers from the infamous state-space-explosion problem that makes it inefficient at solving the problem when the number of vehicles is large, which is often the case in realistic scenarios.
In this talk, we show how reinforcement learning can help model checking to overcome these limitations, such that mission plans can be synthesized for a larger number of vehicles compared to what is feasibly handled by model checking alone. Instead of exhaustively exploring the state space, the reinforcement-learning-based method randomly samples the state space within a time frame and then uses these samples to train the vehicle models so that their behavior satisfies the requirements of the tasks. Since not every state of the model needs to be traversed, state-space explosion is avoided. Additionally, the method guarantees the correctness of the synthesis results. The method is implemented in UPPAAL STRATEGO and integrated with a toolset to facilitate path planning and task scheduling in industrial use cases. We also discuss the strengths and weaknesses of using reinforcement learning for synthesizing strategies of autonomous vehicles.
Verification Monday September 19, 2022, 11AM, 3052 and Zoom link Shachar Itzhaky (Technion University) TheSy: Theory Exploration Powered by Deductive Synthesis and E-graphs Recent years have seen tremendous growth in the amount of verified software. Proofs for complex properties can now be achieved using higher-order theories and calculi. Complex properties lead to an ever-growing number of definitions and associated lemmas, which constitute an integral part of proof construction. Following this, methods for computer-aided lemma discovery – whether automatic or semi-automatic – have emerged. In this work, we introduce a new symbolic technique for bottom-up lemma discovery, that is, the generation of a library of lemmas from a base set of inductive data types and recursive definitions. This is known as the theory exploration problem, and so far, solutions have been proposed based either on counter-example generation or the more prevalent random testing combined with first-order solvers. Our new approach, being purely deductive, eliminates the need for random testing as a filtering phase and for SMT solvers. Therefore it is amenable to compositional reasoning and to the treatment of user-defined higher-order functions. Our implementation, TheSy, has been shown to find more lemmas than prior art, while avoiding redundancy and without having to run the programs even once. In this talk, I will present results published in CAV 2021, and ongoing work on extending TheSy to richer theories. In the first work, we used e-graphs on top of egg (Willsey et al.). In the follow-up, we extend egg to be able to handle conditional equalities, which are pervasive in many theories; notably, Separation Logic with inductive definitions. We hope that this will allow synthesizing a richer set of lemmas. Joint session with the PPS seminar.
Verification Monday July 11, 2022, 11AM, 3052 and Zoom link Jaco Van De Pol (Aarhus University) Parallel SCC Detection for Model Checking – Model Checking for Parallel SCC detection I will introduce a multicore algorithm for the parallel detection of Strongly Connected Components. The algorithm uses a concurrent Union-Find forest to share partial SCCs. As a consequence, we get parallel speedup even if the graph consists of a single SCC. The algorithm works on-the-fly, so it is useful for parallel LTL model checking. Since parallel graph algorithms are error-prone, and this one is used in verification tools, we would like to verify it.
We will report a first step, modeling the algorithm in TLA+ and model checking it for instances with a few workers and small graphs.
Verification Monday July 4, 2022, 11AM, 1007 and Zoom link Radosław Piórkowski (University of Warsaw) Synthesis games over infinite alphabets We define timed/register synthesis games as a generalisation of the classical Büchi-Landweber games. For such games, the problem of synthesising a winning controller turns out to be decidable when we bound the number of clocks/registers the Controller player has at his disposal, and undecidable otherwise. I will prove the decidability result mentioned above in the setting of register automata, which is a generalisation of the Büchi-Landweber theorem. Additionally, I will mention adaptations needed to move to the timed setting. Time permitting, I will mention an application of these games to a problem of deterministic separability of NTA/NRA.
Verification Monday June 27, 2022, 11AM, 3052 and Zoom link [Cancelled] Nicolas Waldburger (IRISA − Université de Rennes 1) Parameterized Safety Verification of Round-Based Shared-Memory Systems We consider the parameterized verification problem for distributed algorithms where the goal is to develop techniques to prove the correctness of a given algorithm regardless of the number of participating processes. Motivated by an asynchronous binary consensus algorithm [Aspnes, 2002], we consider round-based distributed algorithms communicating with shared memory. A particular challenge in these systems is that 1) the number of processes is unbounded, and, more importantly, 2) there is a fresh set of registers at each round. A verification algorithm thus needs to manage both sources of infinity. In this setting, we prove that the safety verification problem, which consists in deciding whether all possible executions avoid a given error state, is PSPACE-complete. For negative instances of the safety verification problem, we also provide exponential lower and upper bounds on the minimal number of processes needed for an error execution and on the minimal round on which the error state can be covered.
Verification Monday June 20, 2022, 11AM, 3052 and Zoom link Mahsa Shirmohammadi (IRIF − CNRS) Learning Weighted Automata over PIDs In this talk, I will discuss learning algorithms for weighted automata over principal ideal domains (PIDs). I start by discussing properties of the Hankel matrix of a weighted automaton. Then I briefly recall the idea behind learning weighted automata over fields. For automata over PIDs, I recall an existing algorithm (Heerdt et al., FoSSaCS 2020) for exact learning that has no complexity bounds (but only termination). I will recall a classical result of Fatou, and, inspired by its proof, sketch a simpler algorithm for learning weighted automata over PIDs.
Verification Monday May 30, 2022, 11AM, 3052 and Zoom link Miquel Oliu-Barton (Université Paris-Dauphine and Bruegel) New algorithms for solving zero-sum stochastic games Zero-sum stochastic games, henceforth stochastic games, are a classical model in game theory in which two opponents interact and the environment changes in response to the players’ behavior. The central solution concepts for these games are the discounted values and the value, which represent what playing the game is worth to the players for different levels of impatience.
In the present manuscript, we provide algorithms for computing exact expressions for the discounted values and for the value, which are polynomial in the number of pure stationary strategies of the players. This result considerably improves on all the existing algorithms.
Verification Monday May 16, 2022, 11AM, 3058 and Zoom link Klara Nosan (IRIF) The Membership Problem for Hypergeometric Sequences with Rational Parameters. We investigate the Membership Problem for hypergeometric sequences: given a hypergeometric sequence $\langle u_n \rangle_{n = 0}^\infty$ of rational numbers and a target $t \in \mathbb{Q}$, decide whether $t$ occurs in the sequence. We show decidability of this problem under the assumption that in the defining recurrence $p(n) u_{n+1} = q(n) u_{n}$, the roots of the polynomials $p(x)$ and $q(x)$ are all rational numbers. Our proof relies on bounds on the density of primes in arithmetic progressions. We also observe a relationship between the decidability of the Membership problem (and variants) and the Rohrlich-Lang conjecture in transcendence theory. This work is in collaboration with Amaury Pouly, Mahsa Shirmohammadi and James Worrell. The full version of the paper is available at https://arxiv.org/abs/2202.07416.
Verification Monday May 9, 2022, 11AM, Zoom link James Worrell (University of Oxford) The Skolem Landscape The Skolem Problem asks to decide whether a linearly recurrent sequence (LRS) over the rationals has a zero term. This problem — whose decidability has been open for many decades — has been considered in many contexts, including weighted automata, Markov chains, and linear loops. In this talk we describe a landscape of decision problems on LRS in and around the Skolem Problem, focussing on a recent proof of decidability of the latter for the class of simple LRS and subject to two number-theoretic conjectures: the p-adic Schanuel conjecture and the exponential-local-global principle. The talk is based on joint work with Y. Bilu, F. Luca, J. Ouaknine, D. Purser, and J. Nieuwveld.
Verification Monday April 11, 2022, 11AM, 3052 and Zoom link Aldric Degorre (IRIF − Université Paris Cité) On the bandwidth of timed automata This is a talk on ongoing, unfinished, unpublished research. We study the evolution of the “size” of a given timed language with respect to time. Intuitively, we determine how many bits per second a typical word in the timed language contains. In previous work we studied the evolution of size with respect to the number of events (that is, in bits per event, which could be less relevant). For a given regular timed language, we took all the words having n events, and summed up the volumes of all their timings (as polytopes in R^n). For languages characterized by a time bound instead of a number of events, unfortunately, this approach is not appropriate anymore. Indeed the polytopes can have different dimensions, even for the same duration. So instead we adopt a point of view inspired by information theory, with two “sizes” closely related to each other. For a timed language L:
- the maximal number of different words in L of duration ≤ T that can be distinguished by a “myopic” observer;
- the minimal size of a finite set of words sufficient to approximate any word in L of duration ≤ T.
The two measures correspond respectively to the notions of optimal ε-separated subsets and ε-nets, instantiated for a pseudo-distance we introduced in a previous work.
This distance characterizes the “myopic” observer, parametrized by its precision of observation ε. The logarithms of these two quantities (i.e. the two measures in bits) are respectively called ε-capacity and ε-entropy. In this work we are particularly interested in the asymptotics of these two measures: hence we define (resp.) the capacitive and entropic ε-bandwidth as the limits, as time grows to infinity, of these quantities divided by time. Our main result is (or will be) the partition of timed automata into 3 classes according to structural criteria, which happen to have bandwidths with 3 distinct asymptotic types with respect to 1/ε: no dependency, logarithmic dependency, or linear dependency. This is a joint work with Eugene Asarin and Catalin Dima. This is an “Open Talk” session of the seminar. Open talk sessions are dedicated to presenting one or several ongoing works from verification team members or their students:
- just for showing what they are currently doing,
- for sharing a new result,
- to ask a scientific question to the audience,
- to listen to comments and ideas related to the presented research,
- or to look for somebody to work with.
Verification Monday March 28, 2022, 11AM, Zoom link Moses Ganardi (Max Planck Institute for Software Systems (MPI-SWS)) Balancing in Grammar-Based Compression In grammar-based compression a string is represented by a context-free grammar, also called a straight-line program (SLP), that generates only that string. The hierarchical structure of SLPs makes them amenable to algorithms that work directly on the compressed representation, without decompressing the string first. I will present a recent balancing result stating that one can transform an SLP of size g in linear time into an equivalent SLP of size O(g) and height O(log N), where N is the length of the represented string. The question whether such a depth reduction was possible while keeping the size linear has been a long-standing open problem in the area of grammar-based compression. Its solution enables a wide range of algorithmic applications on SLPs, with either significantly simplified proofs or improved space/time bounds. This is joint work with Artur Jeż and Markus Lohrey.
Verification Monday March 14, 2022, 11AM, Zoom link Ruiwen Dong (University of Oxford) The identity problem for unitriangular matrices of dimension four. Given as input a finite set of square matrices A_1, …, A_k, the Identity Problem asks to decide whether the identity matrix lies in the semigroup generated by A_1, …, A_k. We show that the Identity Problem is decidable for matrices in UT(4, Z), the group of unitriangular integer matrices of dimension four. This extends the previous decidability result of the Identity Problem for matrices in UT(3, Z), also known as the Heisenberg group. Our main arguments rely on the metabelian property of UT(4, Z). Indeed, the commutator subgroup of UT(4, Z) is isomorphic to Z^3. We will use some tools from convex geometry and algebraic geometry, allowing us to view UT(4, Z) as a geometric object, thus giving intuition on how the identity element may be reached as a semigroup element. If time allows, we will also discuss how similar arguments may be used to tackle the Identity Problem for other unitriangular matrix groups, notably the direct product UT(3, Z)^n, as well as how to potentially extend these arguments to UT(n, Z) for n > 4, where the group ceases to be metabelian.
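For intuition on the Identity Problem (not the decision procedure of the talk), the sketch below runs a bounded brute-force search for the identity in the semigroup generated by a few unitriangular integer matrices; the generators and the length bound are arbitrary illustrative choices, and such a search can only confirm membership, never refute it.

```python
# Illustrative brute force only: decidability (the talk's result) requires much more.
from itertools import product

def mat_mul(A, B):
    n = len(A)
    return tuple(tuple(sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n))
                 for i in range(n))

# Three generators in UT(3, Z) (upper unitriangular 3x3 integer matrices), chosen so
# that the product G1 * G2 * G3 happens to be the identity.
G1 = ((1, 1, 0), (0, 1, 0), (0, 0, 1))
G2 = ((1, 0, 0), (0, 1, 1), (0, 0, 1))
G3 = ((1, -1, 0), (0, 1, -1), (0, 0, 1))
I3 = ((1, 0, 0), (0, 1, 0), (0, 0, 1))

def identity_reachable(generators, identity, max_len):
    for length in range(1, max_len + 1):
        for word in product(generators, repeat=length):
            M = word[0]
            for g in word[1:]:
                M = mat_mul(M, g)
            if M == identity:
                return True, length
    return False, None

print(identity_reachable([G1, G2, G3], I3, max_len=4))   # (True, 3)
```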
Verification Monday March 7, 2022, 11AM, Zoom link Bernd Finkbeiner (CISPA Helmholtz Center for Information Security, Universität des Saarlandes) Model Checking Hyperproperties Traditionally, most verification efforts have focused on the satisfaction of trace properties, such as that an assertion is satisfied at a particular program location or that the computation terminates eventually. Many policies from information-flow security, like observational determinism or noninterference, and many other system properties including promptness and knowledge cannot, however, be expressed as trace properties, because these properties are hyperproperties, i.e., they relate multiple execution traces. In this talk, I will give an overview of recent efforts to develop specification logics and model checking algorithms for hyperproperties. The two principal ideas are the addition of variables for traces and paths in temporal logics, like LTL and CTL*, and the introduction of the equal-level predicate in first-order and second-order logics, like monadic first-order logic of order and MSO. Both extensions have a profound impact on the expressiveness of the logics, resulting in a hierarchy of hyperlogics that differs significantly from the classical hierarchy. Model checking remains decidable for a large part of the new hierarchy. Satisfiability is in general undecidable for most hyperlogics, but there are useful decidable fragments. I will report on first successes in translating these encouraging theoretical results into practical verification tools.
Verification Monday February 28, 2022, 11AM, 3052 and Zoom link Pierre-Emmanuel Hladik (LS2N/Université de Nantes) An Informal Presentation of Formal-Model Execution Engine Hippo to Control and Verify Critical Real-Time Systems This presentation will focus on a software chain called Hippo that aims to provide tools to design, verify and generate code for embedded real-time applications to ensure their temporal behavior. It is based on the Tina toolkit and an extension of the Fiacre pivot language. The main problem addressed by Hippo concerns the generation of an executable that guarantees that temporal constraints are respected. One difficulty is to avoid a semantic gap between the model produced by the designer, the model used by the model-checker and the executable. For the Hippo tool chain, the code is generated as close as possible to the TTS (Time Transition Systems) model underlying Fiacre. Thus, during the generation step, a translator is used to produce, from a Fiacre model, C code that guarantees a control flow identical in all points to the TTS behavior. During this presentation we will discuss these different steps and show how to use the tool on some concrete use cases.
Verification Monday January 31, 2022, 11AM, Zoom link Corto Mascle (LABRI) Responsibility and verification: Importance value in temporal logics We aim at measuring the influence of the nondeterministic choices of a part of a system on its ability to satisfy a specification. For this purpose, we apply the concept of Shapley values to verification as a means to evaluate how important a part of a system is. The importance of a component is measured by giving its control to an adversary, alone or along with other components, and testing whether the system can still fulfill the specification. We study this idea in the framework of model-checking with various classical types of linear-time specifications, and propose several ways to transpose it to branching ones.
We also provide tight complexity bounds in almost every case.
Year 2021
Verification Monday December 13, 2021, 11AM, Zoom link Angelo Ferrando (University of Genova) Towards Runtime Verification of ROS applications In this talk I am going to introduce Runtime Verification and its use in the context of robotic applications developed in ROS (Robot Operating System). Specifically, I will present a recent framework, called ROSMonitoring, which has been recently developed to verify message exchange in ROS. All the engineering steps will be presented, as well as some interesting robotic case studies where ROSMonitoring has been applied.
Verification Monday December 6, 2021, 11AM, 1007 and Zoom link Alessio Mansutti (University of Oxford) Efficient complementation of semilinear sets and the VC dimension of Presburger arithmetic We discuss the issue of deciding the first-order theory of integer linear arithmetic, also known as Presburger arithmetic. Whereas optimal decision procedures based on either quantifier elimination or automata constructions are known for this theory, the current procedures based on manipulating semilinear sets (i.e. sets that geometrically characterise the solutions of a formula) are extremely inefficient. We address this issue by deriving a new algorithm for computing the complement of a semilinear set, which in particular is optimal for nested complementations. Alongside solving the aforementioned discrepancy between semilinear-based algorithms and other types of procedures, this result allows us to characterise the Vapnik-Chervonenkis dimension of Presburger arithmetic, which is found to be doubly exponential. The results in this talk are joint work with Christoph Haase and Dmitry Chistikov.
Verification Monday November 22, 2021, 11AM, 3058 and Zoom link Patricia Bouyer-Decitre (LMF - CNRS) On the (Approximate) Analysis of Stochastic Real-Time Systems Analyzing stochastic systems with real-time features is notably difficult, and often calls for approximation techniques. In this talk I will present a crucial property of stochastic systems, which allows for provably sound approximate model-checking techniques. This property, called decisiveness, was first proposed by Abdulla et al. in 2007 for infinite-state discrete Markov chains. This was later extended to general stochastic transition systems, and general approximation schemes were proven correct for large classes of properties. Finally it turns out that several classes of real-time stochastic systems that have been considered in the literature do actually satisfy the property, which implies that they can safely be analysed. This talk is based on joint works with Nathalie Bertrand, Thomas Brihaye, Pierre Carlier, Mickael Randour, Cedric Riviere and Pierre Vandenhove. This talk is proposed in the context of the ANR project MAVeriQ day.
Verification Monday October 18, 2021, 11AM, 3052 and Zoom link Florian Renkin (LRDE, EPITA) Practical “Paritizing” of Emerson-Lei Automata We introduce a new algorithm that takes a Transition-based Emerson-Lei Automaton (TELA), that is, an ω-automaton whose acceptance condition is an arbitrary Boolean formula on sets of transitions to be seen infinitely or finitely often, and converts it into a Transition-based Parity Automaton (TPA). To reduce the size of the output TPA, the algorithm combines and optimizes two procedures based on a latest appearance record principle, and introduces a partial degeneralization.
Our motivation is to use this algorithm to improve our LTL synthesis tool, where producing deterministic parity automata is an intermediate step. This seminar may also be of interest to the Automata team.
Verification Monday September 27, 2021, 11AM, 1007 and Zoom link Paula Herber (University of Münster) Contract-based Verification of Hybrid Simulink Models with Differential Dynamic Logic Cyber-physical systems, where digital components interact with a physical environment, are ubiquitous in our everyday lives. Digital devices assist our households, monitor our health, and support our transportation in airplanes, trains, or cars. While the complexity of cyber-physical systems is steadily increasing, a failure in such systems can have serious consequences. Model-driven development provides an approach to cope with the increasing complexity, but has the serious drawback that industrially used modeling languages, such as Matlab/Simulink, have only informally defined semantics. To overcome this problem, we propose to precisely capture the semantics of hybrid Simulink models with an automated transformation into the formally well-defined differential dynamic logic (dL). Our transformation enables us to formally verify crucial safety properties of hybrid Simulink models with the interactive theorem prover KeYmaera X. KeYmaera X enables deductive reasoning and thus has the potential to scale well even for larger systems. However, the interactive verification with KeYmaera X is still expensive in terms of manual effort and required expertise. To reduce the verification effort, we propose to define hybrid contracts, which enable compositional verification, but still have to be defined manually. In this talk, I will summarize our transformation from Simulink to dL and hybrid contracts for compositional verification, and then discuss the idea of using hybrid contract patterns to ease the interactive verification of cyber-physical systems with KeYmaera X.
Verification Monday September 13, 2021, 11AM, 3052 and BBB link Lukáš Holík (Brno University of Technology) Regex matching with counting-set automata We propose a solution to the problem of efficiently matching regular expressions (regexes) with bounded repetition, such as (ab){1,100}, using deterministic automata. For this, we introduce novel counting-set automata (CsAs), automata with registers that can hold sets of bounded integers and can be manipulated by a limited portfolio of constant-time operations. We present an algorithm that compiles a large sub-class of regexes to deterministic CsAs. This includes (1) a novel Antimirov-style translation of regexes with counting to counting automata (CAs), nondeterministic automata with bounded counters, and (2) our main technical contribution, a determinization of CAs that outputs CsAs. The main advantage of this workflow is that the size of the produced CsAs does not depend on the repetition bounds used in the regex (while the size of the DFA is exponential in them). Our experimental results confirm that deterministic CsAs produced from practical regexes with repetition are indeed vastly smaller than the corresponding DFAs. More importantly, our prototype matcher based on CsA simulation handles practical regexes with repetition regardless of the sizes of the counter bounds. It easily copes with regexes with repetition where state-of-the-art matchers struggle.
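To give a flavour of the counting-set idea, the toy sketch below (an illustration, not the talk's algorithm or tool) deterministically matches the pattern "an 'a' followed exactly k positions later by a 'b'" by tracking a set of counter values instead of unfolding the bound into states; a real CsA would manipulate this set through constant-time register operations rather than a plain Python set.

```python
# Toy counting-set simulation: does the word contain indices i and i+k+1 with
# word[i] == 'a' and word[i+k+1] == 'b'?  A DFA for this needs on the order of
# 2^(k+1) states (it must remember which of the last k+1 positions held an 'a');
# the simulation below only tracks the "ages" of the a's seen so far.

def match_a_gap_b(word: str, k: int) -> bool:
    ages = set()                                   # ages of pending 'a' occurrences
    for c in word:
        ages = {a + 1 for a in ages if a <= k}     # advance counters, drop hopeless ones
        if c == 'b' and (k + 1) in ages:
            return True                            # some 'a' lies exactly k+1 positions back
        if c == 'a':
            ages.add(0)                            # start a fresh counter for this 'a'
    return False

print(match_a_gap_b("xxaxyzb", 3))   # True:  'a' at index 2, 'b' at index 6 = 2 + 3 + 1
print(match_a_gap_b("axxb", 3))      # False: the gap between 'a' and 'b' is too short
```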
Verification Monday July 5, 2021, 11AM, 3052 and BBB link Eugène Asarin (IRIF, Université de Paris) On the complexity of timed pattern matching (joint work with Thomas Ferrère, Dejan Nickovic and Dogan Ulus) Timed pattern matching consists in finding occurrences of a timed regular expression in a timed word. This problem has been addressed using several techniques, its solutions are implemented in tools (quite efficient in practice) and used, for example, in log analysis and runtime verification. In this work, we explore the computational complexity of timed pattern matching, and prove P, NP and PSPACE bounds, depending on the connectives used in expressions and other details. We conclude with a couple of open questions.
Verification Monday June 28, 2021, 11AM, BBB link Emmanuel Filiot (Université Libre de Bruxelles) On Some Transducer Synthesis Problems The seminal Büchi-Landweber theorem states that reactive synthesis from omega-automatic specifications is decidable, or equivalently, the synthesis problem from specifications given by non-deterministic synchronous (aka letter-to-letter) transducers. Moreover, when such a specification is realizable, it is realizable by a deterministic synchronous transducer. In this talk, we consider variants and generalizations of this problem, on finite and infinite words, where the synchronicity assumption is relaxed on the specification side and/or the implementation side.
Verification Monday June 14, 2021, 11AM, 3052 and BBB link Arnaud Sangnier (IRIF, Université de Paris) Reachability in Distributed Memory Automata We introduce Distributed Memory Automata, a model of register automata suitable to capture some features of distributed algorithms designed for shared memory systems. In this model, each participant owns a local register and a shared register and has the ability to change its local value, to write it in the global memory and to test atomically the number of occurrences of its value in the shared memory, up to some threshold. We show that the control state reachability problem for Distributed Memory Automata is Pspace-complete for a fixed number of participants and is in Pspace when the number of participants is not fixed a priori. This is a joint work with Benedikt Bollig and Fedor Ryabinin. We will try a hybrid session. Depending on technicalities, the videoconference platform may change.
Verification Monday June 7, 2021, 11AM, BBB link Bernadette Charron-Bost (LIX, École Polytechnique) Foundational Aspects of the Blockchain Consensus Decentralized cryptocurrencies such as Bitcoin have ignited much excitement, not only for their novel realization of central bank-free financial instruments, but also as an alternative approach for the development of numerous distributed applications in which agreement must be reached without central control and despite misbehaving parties. The soundness and security of these applications, however, hinge on a thorough understanding of the fundamental properties of their underlying blockchain data structure, which parties (“miners”) maintain and try to extend by generating “proofs of work”. In this talk we formulate such fundamental properties of the blockchain —“common prefix”, “chain quality”, “chain growth”— and show how the blockchain consensus differs from the classical problem of consensus in distributed computing and from the consensus in control theory. We analyze the impact of different setups, computational assumptions, and network models on the various properties of the blockchain consensus.
We also examine the crucial role of the “proofs of work” for achieving network synchrony on top of unstructured P2P networks.
Verification Monday May 17, 2021, 11AM, BBB link Sidi Mohamed Beillahi (IRIF) Checking Robustness Between Weak Transactional Consistency Models Concurrent accesses to databases are typically encapsulated in transactions in order to enable isolation from other concurrent computations and resilience to failures. Modern databases provide transactions with various semantics corresponding to different trade-offs between consistency and availability. Since a weaker consistency model provides better performance, an important issue is investigating the weakest level of consistency needed by a given program (to satisfy its specification). As a way of dealing with this issue, we investigate the problem of checking whether a given program has the same set of behaviors when replacing a consistency model with a weaker one. This property, known as robustness, generally implies that any specification of the program is preserved when weakening the consistency. We focus on the robustness problem for consistency models which are weaker than standard serializability, namely, causal consistency, prefix consistency, and snapshot isolation. We show that checking robustness between these models is polynomial-time reducible to a state reachability problem under serializability. We use this reduction to also derive a pragmatic proof technique based on Lipton's reduction theory that allows proving programs robust. We have applied our techniques to several challenging applications drawn from the literature of distributed systems and databases. This is joint work with Ahmed Bouajjani and Constantin Enea.
Verification Monday May 10, 2021, 11AM, BBB link Laurent Doyen (LMF — ENS Saclay) Observation and Distinction. Representing Information in Infinite Games We compare two approaches for modelling imperfect information in infinite games by using finite-state automata. The first, more standard approach views information as the result of an observation process driven by a sequential Mealy machine. In contrast, the second approach features indistinguishability relations described by synchronous two-tape automata. The indistinguishability-relation model turns out to be strictly more expressive than the one based on observations. We present a characterisation of the indistinguishability relations that admit a representation as a finite-state observation function. We show that the characterisation is decidable, and give a procedure to construct a corresponding Mealy machine whenever one exists. The talk is based on joint work with Dietmar Berwanger.
Verification Monday April 12, 2021, 11AM, BBB link Jérôme Leroux (LABRI) Flat Petri nets Vector addition systems with states (VASS), or equivalently vector addition systems, or Petri nets, are a long-established model of concurrency with extensive applications. The central algorithmic problem is reachability: whether from a given initial configuration there exists a sequence of valid execution steps that reaches a given final configuration. The complexity of the problem has remained unsettled since the 1960s, and it is one of the most prominent open questions in the theory of computation. In this presentation, we survey results about the reachability problem focusing on flat VASS. This subclass is central for computing the reachability set of general VASS using Presburger arithmetic.
In fact, the reachability set of a VASS is Presburger if, and only if, it is flattable, i.e. the computation of its reachability set is reducible to the computation of the reachability set of a flat VASS. Whereas the reachability problem for flat VASS is clearly NP-complete in any fixed dimension when numbers are encoded in binary, finding the exact complexity of the reachability problem for flat VASS in unary is a difficult (still open) problem.
Verification Monday March 22, 2021, 11AM, BBB link James Worrell (University of Oxford) Dynamical Systems and Program Analysis This talk is about the algorithmic analysis of dynamical systems and its relevance to the foundations of program analysis. A dynamical system is a state-based system specified by a rule describing how the state changes over time (e.g., a swinging pendulum or a cobweb model of supply and demand in economics). While the study of dynamical systems permeates the quantitative sciences, in this talk we focus on computational aspects and their relevance to basic verification problems, such as termination and invariant synthesis. We highlight in particular certain problems that can be traced back to the program-analysis literature, e.g., concerning the decidability of termination for linear constraint loops and the computability of polyhedral and polynomial invariants for various classes of transition systems. In the talk we give some of the mathematical context of these problems, discuss recent progress, and highlight unsolved cases. The characteristic feature of the problems we consider is that they turn out to be much more challenging than they might first appear to the innocent-minded, and in some cases have connections to problems at the frontiers of modern mathematics.
Verification Monday March 15, 2021, 11AM, BBB link Stefan Kiefer (University of Oxford) How to play in infinite MDPs Markov decision processes (MDPs) are a standard model for dynamic systems that exhibit both stochastic and nondeterministic behavior. For MDPs with finite state space it is known that for a wide range of objectives there exist optimal strategies that are memoryless and deterministic. In contrast, if the state space is infinite, the picture is much richer: optimal strategies may not exist, and optimal or epsilon-optimal strategies may require (possibly infinite) memory. Based on many examples, I am going to give an introduction to a collection of techniques that allow for the construction of strategies with little or no memory in countably infinite MDPs. Video available here: https://www.youtube.com/watch?v=OFXkmtbgPUo
Verification Monday March 1, 2021, 11AM, BBB link Mahsa Shirmohammadi (IRIF, CNRS) Cyclotomic Identity Testing and Applications We consider the cyclotomic identity testing (CIT) problem: given a polynomial $f(x_1,\ldots,x_k)$, decide whether $f(\zeta_n^{e_1},\ldots,\zeta_n^{e_k})$ is zero, where $\zeta_n = e^{2\pi i/n}$ is a primitive complex $n$-th root of unity and $e_1,\ldots,e_k$ are integers, represented in binary. When $f$ is given by an algebraic circuit, we give a randomized polynomial-time algorithm for CIT assuming the generalised Riemann hypothesis (GRH), and show that the problem is in coNP unconditionally. When $f$ is given by a circuit of polynomially bounded degree, we give a randomized NC algorithm. In case $f$ is a linear form we show that the problem lies in NC.
Towards understanding when CIT can be solved in deterministic polynomial time, we consider so-called diagonal depth-3 circuits, i.e., polynomials $f=\sum_{i=1}^m g_i^{d_i}$, where $g_i$ is a linear form and $d_i$ a positive integer given in unary. We observe that a polynomial-time algorithm for CIT on this class would yield a sub-exponential-time algorithm for polynomial identity testing. However, assuming GRH, we show that if the linear forms $g_i$ are all identical then CIT can be solved in polynomial time. Finally, we use our results to give a new proof that equality of compressed strings, i.e., strings presented using context-free grammars, can be decided in randomized NC.
Verification Thursday February 25, 2021, 4PM, BBB link Shaz Qadeer (Novi Research, Seattle) Reifying Concurrent Programs Program reification is a new approach to the construction of verified concurrent programs and their proofs. This approach simplifies and scales (human and automated) reasoning by enabling a concurrent program to be represented and manipulated at multiple layers of abstraction. These abstraction layers are chained together via simple program transformations; each transformation is justified by a collection of automatically checked verification conditions. Program reification makes proofs maintainable and reusable, specifically eliminating the need to write complex invariants on the low-level encoding of the concurrent program as a flat transition system. I will present Civl, a reifier for concurrent programs. Civl has been used to construct verified low-level implementations of complex systems such as a concurrent garbage collector, consensus protocol, and shared-memory data structures. Civl is publicly available: https://civl-verifier.github.io/. This work has been done jointly with Bernhard Kragl (IST Austria).
Verification Monday February 22, 2021, 11AM, BBB link Mohammed Foughali (Vérimag, Université Grenoble-Alpes) On the formal verification of safety-critical systems: challenges, approaches and perspectives Safety-critical applications, e.g. those stemming from robotic, autonomous and cyber-physical systems, must be formally verified against crucial behavioral and timed properties. Yet, the use of formal verification techniques in such a context faces a number of challenges, such as the absence of formal foundations of robotic frameworks and the lack of scalability of exhaustive verification techniques. In this talk, I will explore the approaches I have been proposing for the last six years to tackle these challenges, based on a global vision that favors correctness, user-friendliness and scalability of formal methods vis-à-vis real-world robotic and autonomous systems deployed on embedded platforms. I will discuss a major part of my work where safety-critical specifications are automatically translated into strictly equivalent formal models on which model checking, but also scalable non-exhaustive techniques such as statistical model checking and runtime verification, may be used by practitioners to gain a considerable amount of trust in their underlying applications. Further, I will present a couple of techniques that allow taking into account hardware and OS specificities in the verification process, such as the scheduling policy and the number of processor cores provided by the platform, and thus increase the meaningfulness of the verification results.
I will conclude with possible future research directions within the broad objective of deploying trustable safety-critical systems through bridging the gap between the software engineering, robotics, formal methods and real-time systems communities.
Verification Monday February 15, 2021, 11AM, BBB link Martin Helfrich (Technical University of Munich) Succinct Population Protocols for Presburger Arithmetic In [Dana Angluin et al., 2006], Angluin et al. proved that population protocols compute exactly the predicates definable in Presburger arithmetic (PA), the first-order theory of addition. As part of this result, they presented a procedure that translates any formula φ of quantifier-free PA with remainder predicates (which has the same expressive power as full PA) into a population protocol with 2^O(poly(|φ|)) states that computes φ. More precisely, the number of states of the protocol is exponential in both the bit length of the largest coefficient in the formula, and the number of nodes of its syntax tree. In this paper, we prove that every formula φ of quantifier-free PA with remainder predicates is computable by a leaderless population protocol with O(poly(|φ|)) states. Our proof is based on several new constructions, which may be of independent interest. Given a formula φ of quantifier-free PA with remainder predicates, a first construction produces a succinct protocol (with O(|φ|³) leaders) that computes φ; this completes the work initiated in [Michael Blondin et al., 2018], where we constructed such protocols for a fragment of PA. For large enough inputs, we can get rid of these leaders. If the input is not large enough, then it is small, and we design another construction producing a succinct protocol with one leader that computes φ. Our last construction gets rid of this leader for small inputs.
Verification Monday February 8, 2021, 11AM, BBB link Blaise Genest (IRISA (CNRS)) Global PAC Bounds for Learning Discrete Time Markov Chains. Learning models from observations of a system is a powerful tool with many applications. In this paper, we consider learning Discrete Time Markov Chains (DTMC), with different methods such as frequency estimation or Laplace smoothing. While models learnt with such methods converge asymptotically towards the exact system, a more practical question in the realm of trusted machine learning is how accurate a model learnt with a limited time budget is. Existing approaches provide bounds on how close the model is to the original system, in terms of bounds on local (transition) probabilities, which has unclear implications for the global behavior. In this work, we provide global bounds on the error made by such a learning process, in terms of global behaviors formalized using temporal logic. More precisely, we propose a learning process ensuring a bound on the error in the probabilities of these properties. While such a learning process cannot exist for the full LTL logic, we provide one ensuring a bound that is uniform over all the formulas of CTL. Further, given one time-to-failure property, we provide an improved learning algorithm. Interestingly, frequency estimation is sufficient for the latter, while Laplace smoothing is needed to ensure non-trivial uniform bounds for the full CTL logic.
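As a reminder of the two estimators mentioned in the abstract (a toy illustration, not the PAC analysis of the talk), the sketch below learns a DTMC transition matrix from a single observed trace, once by plain frequency estimation and once with Laplace (additive) smoothing; the trace and the smoothing constant are arbitrary.

```python
# Toy illustration: frequency estimation vs. Laplace smoothing for learning a DTMC.
from collections import Counter

def learn_dtmc(trace, states, alpha=0.0):
    # alpha = 0 -> frequency estimation: P(s,t) = N(s,t) / N(s)
    # alpha > 0 -> Laplace smoothing:    P(s,t) = (N(s,t) + alpha) / (N(s) + alpha * |S|)
    counts = Counter(zip(trace, trace[1:]))    # N(s, t): observed s -> t transitions
    outgoing = Counter(trace[:-1])             # N(s): number of times s was left
    return {s: {t: (counts[(s, t)] + alpha) / (outgoing[s] + alpha * len(states))
                for t in states}
            for s in states if outgoing[s] + alpha * len(states) > 0}

trace = list("aababbaab")
states = ["a", "b"]
print(learn_dtmc(trace, states, alpha=0.0))   # frequency estimate
print(learn_dtmc(trace, states, alpha=1.0))   # Laplace-smoothed estimate
```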
Verification Monday February 1, 2021, 11AM, BBB link François Schwarzentruber (IRISA, ENS Rennes) Connected multi-agent path finding Motivated by the increasing appeal of robots in information-gathering missions, we study multi-agent path planning problems in which the agents must remain interconnected. We model an area by a topological graph specifying the movement and the connectivity constraints of the agents. In the first part of the talk, we study the theoretical complexity of the reachability and the coverage problems of a fleet of connected agents. We also introduce a new class called sight-moveable graphs which admit efficient algorithms. In the second part, we re-visit the conflict-based search algorithm known for multi-agent path finding, and define a variant where conflicts arise from disconnections rather than collisions. Verification Monday January 18, 2021, 11AM, BBB link Léo Henry (IRISA, Université de Rennes I) Active learning of timed automata with unobservable resets Active learning of timed languages is concerned with the inference of timed automata from observed timed words. The agent can query for the membership of words in the target language, or propose a candidate model and verify its equivalence to the target. The major difficulty of this framework is the inference of clock resets, central to the dynamics of timed automata, but not directly observable. Interesting first steps have already been made by restricting to the subclass of event-recording automata, where clock resets are tied to observations. In order to advance towards learning of general timed automata, we generalize this method to a new class, called reset-free event-recording automata, where some transitions may reset no clocks. This offers the same challenges as generic timed automata while keeping the simpler framework of event-recording automata for the sake of readability. Central to our contribution is the notion of invalidity, and the algorithm and data structures to deal with it, allowing on-the-fly detection and pruning of reset hypotheses that contradict observations, a key to any efficient active-learning procedure for generic timed automata. Verification Monday January 11, 2021, 11AM, BBB link Mohamed Faouzi Atig (Uppsala University) Parameterized verification under TSO is PSPACE-complete. We consider parameterized verification of concurrent programs under the Total Store Order (TSO) semantics. A program consists of a set of processes that share a set of variables on which they can perform read and write operations. We show that the reachability problem for a system consisting of an arbitrary number of identical processes is PSPACE-complete. We prove that the complexity is reduced to polynomial time if the processes are not allowed to read the initial values of the variables in the memory. When the processes are allowed to perform atomic read-modify-write operations, the reachability problem has a non-primitive recursive complexity. Year 2020 Verification Monday December 14, 2020, 11AM, BBB link Nicolas Jeannerod (IRIF) Analysing installation scenarios of Debian packages Debian GNU/Linux is a Linux distribution composed of free and open-source software. It is heavily used all over the world as an operating system for personal computers and servers, and is the basis for many other distributions. Debian currently includes more than 28 thousand maintainer scripts, almost all of them written in POSIX shell.
These scripts are executed with root privileges at installation, update, and removal of a package, which makes them critical for system maintenance. While Debian policy provides guidance for package maintainers producing the scripts, few tools exist to check the compliance of a script with it. This presentation reports on the application of a formal verification approach based on symbolic execution to find violations of some non-trivial properties required by Debian policy in maintainer scripts. We present our methodology and give an overview of the toolchain. We focus in particular on the tree logics used to symbolically represent a file system transformation, and the use of such a logic in the symbolic engine. TACAS 2020 paper. Verification Monday December 7, 2020, 11AM, BBB link Marie Fortin (University of Liverpool) Logics and automata over infinite message sequence charts Communicating finite-state machines (CFMs) are a standard model of concurrent message-passing systems. They consist of a fixed number of finite-state processes communicating through unbounded point-to-point FIFO channels, and they define languages of partial orders called message sequence charts (MSCs), representing possible executions of the system. In this talk, I will review past and recent characterizations of the expressive power of CFMs, in the spirit of the Büchi-Elgot-Trakhtenbrot theorem for finite automata. Our main result is that, both for finite and infinite executions, CFMs have the same expressive power as existential monadic second-order logic (EMSO). As a corollary, we obtain a new proof of a known characterization in the restricted case where the channels are assumed to be of bounded size, namely, that CFMs are equivalent to full MSO logic over finite existentially-bounded MSCs. We also extend that result from finite to infinite executions. For the main result, where channels are assumed to be unbounded, the proof relies on another logic of independent interest: a star-free variant of propositional dynamic logic based on operations similar to LTL “since” and “until” modalities. This logic is equivalent to first-order logic, and its formulas can be translated into equivalent CFMs of exponential size (on the other hand, the translation from first-order logic is non-elementary). This is joint work with Benedikt Bollig and Paul Gastin. Verification Monday November 30, 2020, 11AM, BBB link Engel Lefaucheux (Max-Planck Institute for Software Systems, Sarrebrucken) On Information Control in Probabilistic Systems: a closer look on Opacity and Diagnosis The control of the information given by a system has recently seen increasing importance due to the omnipresence of communicating systems, the need for privacy, etc. This control can be used in order to disclose some information about the system to an observer, or, on the contrary, to hide it. In this talk, I will consider the control of the information released by a system represented by a stochastic model such as a Markov chain. Here, an external observer is interested in detecting a particular set of relevant paths of the system. However, he can only observe those paths through an observation function which obfuscates the real behaviour of the system. Exact disclosure of the information occurs when the observer can deduce from a finite observation that the path is relevant, the approximate disclosure variant corresponding to the path being identified as relevant with high accuracy.
I will discuss the problems of diagnosability and opacity, which correspond, in spirit, to the cases where one wants to disclose all the information or hide as much of it as possible, with a focus on the approximate notion of disclosure. Verification Monday November 23, 2020, 11AM, BBB link Edon Kelmendi (University of Oxford) Deciding ω-Regular Properties on Linear Recurrence Sequences We consider the problem of deciding ω-regular properties on infinite traces produced by linear loops. Here we think of a given loop as producing a single infinite trace that encodes information about the signs of program variables at each time step. Formally, our main result is a procedure that inputs a prefix-independent ω-regular property and a sequence of numbers satisfying a linear recurrence, and determines whether the sign description of the sequence (obtained by replacing each positive entry with “+”, each negative entry with “−”, and each zero entry with “0”) satisfies the given property. Our procedure requires that the recurrence be simple, i.e., that the update matrix of the underlying loop be diagonalisable. This assumption is instrumental in proving our key technical lemma: namely that the sign description of a simple linear recurrence sequence is almost periodic in the sense of Muchnik, Semënov, and Ushakov. To complement this lemma, we give an example of a linear recurrence sequence whose sign description fails to be almost periodic. Generalising from sign descriptions, we also consider the verification of properties involving semi-algebraic predicates on program variables. This is joint work with: Shaull Almagor, Toghrul Karimov, Joël Ouaknine, and James Worrell. Verification Monday November 16, 2020, 11AM, BBB link Radu Iosif (Verimag, CNRS) Structural Invariants for the Verification of Systems with (Recursively Defined) Parameterized Architectures We consider parameterized concurrent systems consisting of a finite but unknown number of components, obtained by replicating a given set of finite state automata. Components communicate by executing atomic interactions whose participants update their states simultaneously. We introduce an interaction logic to specify both the type of interactions (e.g. rendez-vous, broadcast) and the topology of the system (e.g. pipeline, ring). Proving safety properties of such a parameterized system, like deadlock freedom or mutual exclusion, requires inferring an inductive invariant that contains all reachable states of all system instances and no unsafe state. We present a method to automatically synthesize inductive invariants directly from the formula describing the interactions, without costly fixed point iterations. The invariants are defined using the WSkS fragment of the monadic second order logic, known to be decidable by a classical automata-logic connection. This reduces the safety verification problem to checking satisfiability of a WSkS formula. We experimentally prove that this invariant is strong enough to verify safety properties of a large number of systems including textbook examples (dining philosophers, synchronization schemes), classical mutual exclusion algorithms, cache-coherence protocols and self-stabilization algorithms, for an arbitrary number of components. If time allows, in the second part of this talk I will describe a term algebra as a new formal specification language for the coordinating architectures of distributed systems consisting of a finite yet unbounded number of components.
The language makes it possible to describe infinite sets of systems whose coordination between components shares the same pattern, using inductive definitions similar to the ones used to describe algebraic data types or recursive data structures. Further, we give a verification method for the parametric systems described in this language, relying on the automatic synthesis of structural invariants that enable proving general safety properties (mutual exclusion, absence of deadlocks). Verification Monday November 9, 2020, 11AM, BBB link Sadegh Soudjani (School of Computing, Newcastle University) Formal verification of stochastic systems using Markov chains and dynamic Bayesian networks In this talk I address the problem of finite-horizon probabilistic safety for discrete-time stochastic systems over general (uncountable or hybrid) state spaces. I show how to compute discrete-time, finite state Markov chains as formal abstractions of such systems. The abstraction differs from existing approaches in two ways. First, it exploits the structure of the underlying system to compute the abstraction separately for each dimension. Second, it employs dynamic Bayesian networks (DBN) as compact representations of the abstraction. In contrast, existing approaches represent and store the (exponentially large) Markov chain explicitly, which leads to heavy memory requirements. I show how to construct a DBN abstraction of a Markov process satisfying an independence assumption on the driving process noise. Guaranteed bounds on the error in the abstraction can be computed with respect to the probabilistic safety property. Additionally, I show how factor graphs and the sum-product algorithm for DBNs can be used to solve the finite-horizon probabilistic safety problem. Together, DBN-based representations and algorithms can be significantly more efficient than explicit representations of Markov chains for abstracting and model checking structured Markov processes. I will also briefly discuss two of the tools for Markov chain abstraction: FAUST2 and Amytiss. Video available here: https://www.youtube.com/watch?v=JrIcs2NY4OI Verification Monday November 2, 2020, 11AM, BBB link Damien Busatto (Université Libre de Bruxelles) Monte Carlo Tree Search guided by Symbolic Advice for MDPs We consider the online computation of a strategy that aims at optimizing the expected average reward in a Markov decision process. The strategy is computed with a receding horizon and using Monte Carlo tree search (MCTS). We augment the MCTS algorithm with the notion of symbolic advice, and show that its classical theoretical guarantees are maintained. Symbolic advice is used to bias the selection and simulation strategies of MCTS. We describe how to use QBF and SAT solvers to implement symbolic advice in an efficient way. We illustrate our new algorithm using the popular game Pac-Man and show that the performance of our algorithm exceeds that of plain MCTS as well as that of human players. Verification Monday October 19, 2020, 11AM, BBB link Catalin Dima (LACL, Université Paris-Est Créteil) A Hennessy-Milner Theorem for ATL with Imperfect Information We show that a history-based variant of alternating bisimulation with imperfect information allows it to be related to a variant of Alternating-time Temporal Logic (ATL) with imperfect information by a full Hennessy-Milner theorem.
The variant of ATL we consider has a common knowledge semantics, which requires that the uniform strategy available for a coalition to accomplish some goal must be common knowledge inside the coalition, while other semantic variants of ATL with imperfect information do not accommodate a Hennessy-Milner theorem. We also show that the existence of a history-based alternating bisimulation between two finite Concurrent Game Structures with imperfect information (iCGS) is undecidable. LICS 2020 paper Verification Monday July 13, 2020, 11AM, (online, using BigBlueButton) Jan Kretinsky (Technical University of Munich) Approximating Values of Generalized-Reachability Stochastic Games Simple stochastic games are turn-based 2.5-player games with a reachability objective. The basic question asks whether one player can ensure reaching a given target with at least a given probability. A natural extension is games with a conjunction of such conditions as objective. Despite a plethora of recent results on the analysis of systems with multiple objectives, the decidability of this basic problem remains open. In this paper, we present an algorithm approximating the Pareto frontier of the achievable values to a given precision. Moreover, it is an anytime algorithm, meaning it can be stopped at any time returning the current approximation and its error bound. Verification Monday July 6, 2020, 11AM, (online, using BigBlueButton) Richard Mayr (University of Edinburgh) Strategy Complexity: Finite Systems vs. Infinite Systems Consider 2-player perfect information turn-based stochastic games on finite or infinite graphs, and sub-cases (e.g., Markov decision processes (MDPs)). Depending on the objective of the game, optimal strategies (where they exist) and epsilon-optimal strategies may need to use a certain amount/type of memory or to use randomization. This is called the strategy complexity of the objective on the given class of games. We give an overview of the strategy complexity of common objectives on games and MDPs, with a particular focus on the differences between finite-state games/MDPs and infinite-state games/MDPs. Verification Monday June 29, 2020, 11AM, (online, using BigBlueButton) Nathalie Bertrand (INRIA Rennes) Games with arbitrarily many players Traditional concurrent games on graphs involve a fixed number of players, who take decisions simultaneously, determining the next state of the game. With Anirban Majumdar and Patricia Bouyer, we introduced a parameterized variant of concurrent games on graphs, where the parameter is precisely the number of players. Parameterized concurrent games are described by finite graphs, in which the transitions bear finite-word languages to describe the possible move combinations that lead from one vertex to another. We report on results on two problems for so-called parameterized concurrent games. To start with, we tackled the problem of determining whether the first player, say Eve, has a strategy to ensure a reachability objective against any strategy profile of her opponents as a coalition. In particular Eve's strategy should be independent of the number of opponents she actually has. We establish the precise complexities of the parameterized reachability game problem. Second, we consider a synthesis problem, where one aims at designing a strategy for each of the players so as to achieve a common objective. For safety objectives, we show that this kind of distributed synthesis problem is decidable.
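To make the arena model in the previous abstract more concrete, here is a rough Python sketch of a game graph whose transitions are labelled by finite-word languages (given as regular expressions) over the players' moves; the graph, alphabet, and helper names are invented for illustration, and the sketch only shows how a joint move of an arbitrary number of players selects a successor vertex, not the reachability or synthesis algorithms from the talk.

```python
import re

# Each edge is labelled by a regular expression over per-player moves;
# a joint move of k players is a word of length k, and the edge is
# taken iff the word belongs to the edge's language (hypothetical arena).
ARENA = {
    "init": [
        (re.compile(r"a+"), "goal"),           # every player plays 'a'
        (re.compile(r"[ab]*b[ab]*"), "sink"),  # at least one player plays 'b'
    ],
}

def successor(vertex, joint_move):
    """Return the vertex reached by a joint move, i.e. a word whose
    i-th letter is the move of player i (first matching edge wins)."""
    for language, target in ARENA[vertex]:
        if language.fullmatch(joint_move):
            return target
    raise ValueError("no enabled edge for this joint move")

# The same finite arena works for any number of players:
print(successor("init", "aaa"))    # 3 players, all play 'a'  -> 'goal'
print(successor("init", "aabaa"))  # 5 players, one plays 'b' -> 'sink'
```

The point of the encoding is that the arena stays finite while the number of opponents varies, which is what makes the parameterized reachability and synthesis questions non-trivial.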
Verification Monday June 22, 2020, 11AM, (online, using BigBlueButton) Lorenzo Clemente (University of Warsaw) Deterministic membership and separability problems for timed automata In this talk we survey the deterministic membership and deterministic separability problems for timed automata. In the deterministic membership problem (a.k.a. determinisability) we are given as input a nondeterministic timed automaton (NTA) and we have to decide whether there is a deterministic timed automaton (DTA) recognising the same language. In the deterministic separability problem, we are given as input two NTAs and we have to decide whether there is a DTA separating them. We survey decidability and undecidability results for these two problems and for their variants where we put restrictions on the number of clocks of the input NTA and/or the output DTA. While the deterministic membership problem has been studied before in the timed automata literature (mostly with undecidability results), the deterministic separability problem does not appear to have been addressed before. Verification Monday June 15, 2020, 11AM, (online, using BigBlueButton) Azalea Raad (Imperial College London) Local Reasoning About the Presence of Bugs: Incorrectness Separation Logic There has been a great deal of work on local reasoning about state for proving the absence of bugs, but none for proving the presence of bugs. In this paper, we present a new formal framework for local reasoning about the presence of bugs, building on two complementary foundations: (1) separation logic and (2) the recently introduced “incorrectness logic”. In addition to exploring the theory of this new incorrectness separation logic (ISL), we show how it can be used to derive a begin-anywhere, intra-procedural forward symbolic execution analysis that has no false positives by construction. In so doing, we take a step towards transferring modular, scalable techniques from the world of program verification to the world of bug catching. Verification Monday June 8, 2020, 11AM, (online, using BigBlueButton) Joel Day (Loughborough University) On the structure of solution sets to regular word equations. For quadratic word equations, there exists an algorithm based on rewriting rules which generates a directed graph describing all solutions to the equation. For regular word equations – those for which each variable occurs at most once on each side of the equation – we investigate the properties of this graph, such as bounds on its diameter, size, and DAG-width, as well as providing some insights into symmetries in its structure. As a consequence, we obtain a combinatorial proof that the problem of deciding whether a regular word equation has a solution is in NP. Verification Wednesday June 3, 2020, 2PM, (online, using BigBlueButton) Caterina Urban (INRIA- ENS) Perfectly Parallel Fairness Certification of Neural Networks Recently, there has been growing concern that machine-learning models, which currently assist or even automate decision making, reproduce, and in the worst case reinforce, bias of the training data. The development of tools and techniques for certifying fairness of these models or describing their biased behavior is, therefore, critical. In this paper, we propose a perfectly parallel static analysis for certifying causal fairness of feed-forward neural networks used for classification of tabular data. When certification succeeds, our approach provides definite guarantees, otherwise, it describes and quantifies the biased behavior.
We design the analysis to be sound, in practice also exact, and configurable in terms of scalability and precision, thereby enabling pay-as-you-go certification. We implement our approach in an open-source tool and demonstrate its effectiveness on models trained with popular datasets. Verification Monday May 25, 2020, 11AM, (online, using BigBlueButton) Shaull Almagor (Technion Computer Science Faculty) O-minimal invariants for discrete and continuous linear dynamical systems Linear dynamical systems are used extensively in mathematics, computer science, physics, and engineering, to model the evolution of a system over (discrete or continuous) time. Reachability analysis of such systems thus plays a key role in many areas of computer science, including program verification, abstract interpretation, robotic motion planning, and many more. Already for the simplest variants of linear dynamical systems, the question of termination relates to deep open problems in number theory, such as the decidability of the Skolem and Positivity Problems. In this talk, we introduce the class of o-minimal invariants, which is broader than any previously considered, and study the decidability of the existence and algorithmic synthesis of such invariants as certificates of non-reachability for dynamical systems, equipped with a large class of halting conditions. In the discrete setting, we establish two main decidability results, one of them conditional on Schanuel's conjecture in transcendental number theory. We then examine the continuous setting, and the challenges that arise there. We establish some decidability results, and some conditional decidability results. We also demonstrate the hardness of a remaining case, with respect to open problems in number theory. Verification Monday May 18, 2020, 11AM, (online, using BigBlueButton) Viktor Vafeiadis (MPI) Semantics for Persistent Memory Non-volatile memory (NVM) is emerging as a technology that may significantly impact the computing landscape in the near future. NVM ensures durability of data across power failures (i.e., as hard drives do) at a performance close to that of conventional volatile memory (RAM). Nevertheless, NVM is quite difficult to use correctly because of data caches: the writes do not persist to memory immediately when performed by a program and may further be reordered. In order to ensure that writes are propagated to memory in the correct order and that all writes have persisted to memory, programmers have to use special barrier and cache flushing instructions. The talk will discuss the semantics of such instructions, how they interact with the architecture's weak consistency model, and how they can be used to develop higher-level primitives for safe persistent programming. Verification Monday May 11, 2020, 11AM, (online, using BigBlueButton) Viktor Kuncak (EPFL) Deriving Parsers, Deriving Type Systems I will present two recent results from my research group. First, I will show how the notion of Brzozowski derivative of a formal language can be combined with the zipper data structure and used to construct efficient LL(1) parsers [1] (joint work with Romain Edelmann and Jad Hamza). Second, I will present System FR, an expressive dependent type system with refinement types and subtyping, whose design is guided by interpretation of types as sets of terms [2] (joint work with Jad Hamza and Nicolas Voirol), and which serves as the foundation of the Stainless verifier for Scala [3].
[1] http://lara.epfl.ch/~kuncak/paper/EdelmannETAL20ZippyLLParsingDerivatives.pdf [2] http://lara.epfl.ch/~kuncak/papers/HamzaETAL19SystemFR.pdf [3] https://stainless.epfl.ch/ Verification Monday May 4, 2020, 11AM, (online, using BigBlueButton) Dumitru Potop-Butucaru (INRIA Paris) Real-time systems compilation I introduce and advocate for the concept of Real-Time Systems Compilation. By analogy with classical compilation, real-time systems compilation consists in the fully automatic construction of running, correct-by-construction implementations from functional and non-functional specifications of embedded control systems. As in a classical compiler, the whole process must be fast (thus enabling a trial-and-error design style) and produce reasonably efficient code. This requires the use of fast heuristics, and the use of fine-grain platform and application models. Unlike a classical compiler, a real-time systems compiler must take into account non-functional properties of a system and ensure the respect of non-functional requirements (in addition to functional correctness). I also present Lopht, a real-time systems compiler we built by combining techniques and concepts from real-time scheduling, compilation, and synchronous languages. Verification Monday April 27, 2020, 11AM, (online, using BigBlueButton) [Rescheduled] Wojciech Czerwinski (University of Warsaw) Reachability problem for fixed dimensional VASSes I will present a few recent results about the reachability problem for fixed dimensional VASSes. These results invalidate some previously posed conjectures and show that the problem is harder than previously expected. This is joint work with Sławomir Lasota, Ranko Lazic, Jerome Leroux and Filip Mazowiecki. Verification Monday April 20, 2020, 11AM, (online, using BigBlueButton) Amaury Pouly (CNRS & IRIF) Polynomial Invariants for Affine Programs and Introduction to Algebraic Geometry (for computer scientists) In this talk I will present some results we obtained in LICS 2018 where we exhibit an algorithm to compute the strongest polynomial (or algebraic) invariants that hold at each location of a given affine program (i.e., a program having only non-deterministic (as opposed to conditional) branching and all of whose assignments are given by affine expressions). Our main tool is an algebraic result of independent interest: given a finite set of rational square matrices of the same dimension, we show how to compute the Zariski closure of the semigroup that they generate. I will then give a small introduction to algebraic geometry for computer scientists. The goal is to give the background necessary to understand how the proof works: we will talk about algebraic sets/varieties and constructible/definable sets. In a second talk on April 20th, I will use the material introduced in this talk to explain the algorithm for computing the Zariski closure of a group and semigroup of matrices. Verification Wednesday April 15, 2020, 3PM, (online, using BigBlueButton) Akos Hajdu (Budapest University of Technology and Economics) SMT-Friendly Formalization of the Solidity Memory Model Solidity is the dominant programming language for smart contracts on the Ethereum blockchain. This talk presents a high-level formalization of the Solidity language with a focus on the memory model. The memory model of Solidity has various unusual and non-trivial behaviors, providing a fertile ground for potential bugs.
Smart contracts have access to two classes of data storage: a permanent storage that is a part of the global blockchain state, and a transient local memory used when executing transactions. While the local memory uses a standard heap of entities with references, the permanent storage has pure value semantics (although pointers to storage can be declared locally). This memory model, which combines both value and reference semantics - with all interactions between the two - poses some interesting challenges but also offers great opportunities for automation. The presented formalization covers all features of the language related to managing state and memory in an effective way: all but a few features can be encoded in the quantifier-free fragment of standard SMT theories. This enables precise and efficient reasoning about the state of smart contracts written in Solidity. The formalization is implemented in the solc-verify tool and we provide an evaluation on an extensive test set that validates the semantics and shows the novelty of the approach compared to other Solidity-level contract analysis tools. Verification Monday April 6, 2020, 11AM, (online, using BigBlueButton) Amaury Pouly (CNRS & IRIF) Polynomial Invariants for Affine Programs and Introduction to Algebraic Geometry (for computer scientists) In this talk I will present some results we obtained in LICS 2018 where we exhibit an algorithm to compute the strongest polynomial (or algebraic) invariants that hold at each location of a given affine program (i.e., a program having only non-deterministic (as opposed to conditional) branching and all of whose assignments are given by affine expressions). Our main tool is an algebraic result of independent interest: given a finite set of rational square matrices of the same dimension, we show how to compute the Zariski closure of the semigroup that they generate. I will then give a small introduction to algebraic geometry for computer scientists. The goal is to give the background necessary to understand how the proof works: we will talk about algebraic sets/varieties and constructible/definable sets. In a second talk on April 20th, I will use the material introduced in this talk to explain the algorithm for computing the Zariski closure of a group and semigroup of matrices. Verification Monday March 30, 2020, 11AM, (online, using BigBlueButton) [Rescheduled] Filip Mazowiecki (MPI SWS) Lower bounds for polynomial recurrence sequences We study the expressiveness of polynomial recurrence sequences (PRS), a nonlinear extension of the well-known class of linear recurrence sequences (LRS). A typical example of a sequence definable in PRS but not in LRS is a_n = n!. Our main result is that the sequence u_n = n^n cannot be defined by any PRS. We also discuss the impact of our results on the expressiveness of nonlinear extensions of weighted automata. This is joint work with Michaël Cadilhac, Charles Paperman, Michał Pilipczuk and Géraud Sénizergues. Verification Monday March 30, 2020, 11AM, Salle 1007 [Cancelled] Adrian Francalanza (University of Malta) An Operational Guide to Monitorability Monitorability underpins the technique of Runtime Verification because it delineates what properties can be verified at runtime. Although many monitorability definitions exist, few are defined explicitly in terms of the operational guarantees provided by monitors, that is, the computational entities carrying out the verification.
We view monitorability as a spectrum, where the fewer guarantees that are required of monitors, the more properties become monitorable. Accordingly, we present a monitorability hierarchy based on this trade-off. For regular specifications, we give syntactic characterizations of its levels in terms of Hennessy–Milner logic with recursion. Finally, we map existing monitorability definitions into our hierarchy. Hence our work gives a unified framework that makes the operational assumptions and guarantees of each definition explicit. This provides a rigorous foundation that can inform design choices and correctness claims for runtime verification tools. Verification Monday March 16, 2020, 11AM, Salle 1007 [Cancelled] Matteo Mio (CNRS & ENS de Lyon) Towards a Proof Theory of Probabilistic Logics Probabilistic Logics are formal languages designed to express properties of probabilistic programs. For most probabilistic logics several key problems are still open. One such problem is to design convenient analytical proof systems in the style of Gentzen's sequent calculus. This is practically important in order to simplify the task of proof search: the CUT-elimination theorem greatly reduces the search space. In this talk I will present some recent developments in this direction. Verification Monday March 9, 2020, 11AM, Salle 1007 Nathan Thomasset (LSV & ENS de Saclay) Nash equilibria in games on graphs equipped with a communication mechanism We study infinite games played on graphs, in which players move a token along the edges of a graph. Each player has certain preferences over the resulting infinite path taken by the token, and will choose their actions accordingly. In this setting, we are particularly interested in Nash equilibria, which describe a course of action such that no player has any incentive to deviate and which is thus seen as a rational outcome of the game. We add a communication mechanism to the classical setting, such that players can send information to and receive information from other players, and we show that the resulting Nash equilibria can be expressed with a very simple communication scheme, thus allowing us to design an algorithm to compute them. Verification Monday March 2, 2020, 11AM, Salle 1007 Pascale Gourdeau (University of Oxford) On the hardness of robust classification It is becoming increasingly important to understand the vulnerability of machine learning models to adversarial attacks. In this talk, we will study the feasibility of robust learning from the perspective of computational learning theory, considering both sample and computational complexity. In particular, our definition of robust learnability requires polynomial sample complexity. We start with two negative results. We show that no non-trivial concept class can be robustly learned in the distribution-free setting against an adversary who can perturb just a single input bit. We show moreover that the class of monotone conjunctions cannot be robustly learned under the uniform distribution against an adversary who can perturb ω(log n) input bits. However, if the adversary is restricted to perturbing O(log n) bits, then the class of monotone conjunctions can be robustly learned with respect to a general class of distributions (that includes the uniform distribution). Finally, we provide a simple proof of the computational hardness of robust learning on the boolean hypercube. Unlike previous results of this nature, our result does not rely on another computational model (e.g.
the statistical query model) nor on any hardness assumption other than the existence of a hard learning problem in the PAC framework. Verification Monday February 24, 2020, 11AM, Salle 1007 James Worrell (University of Oxford) Termination of linear constraint loops We consider the problem of deciding termination of linear constraint loops, that is, single-path loops in which the loop body consists of a (possibly non-deterministic) assignment, specified by a conjunction of linear constraints. We present several versions of this problem, according to the syntactic form of the loop and the numerical domain of the program variables. We sketch some decidability results and point out open problems. Verification Monday February 17, 2020, 11AM, Salle 1007 Corto Mascle (ENS de Cachan) On Finite Monoids over Nonnegative Integer Matrices and Short Killing Words Unambiguous automata are a popular compromise between non-deterministic and deterministic automata in terms of succinctness and complexity. A word is called a killing word for an automaton if there is no path labelled by it in this automaton. Here we will discuss the problem of computing a short killing word for an unambiguous automaton, as well as possible extensions. Unambiguous automata are closely related to codes, and this problem can be viewed as a particular instance of Restivo's conjecture, a longstanding problem in the theory of codes. It can also be interpreted as a restricted version of the matrix mortality problem. This is joint work with Stefan Kiefer, published at STACS 2019. Verification Monday February 10, 2020, 11AM, Salle 1007 David Monniaux (University of Grenoble) “Easy” vs hard analysis for caching policies Processors cache data to avoid slow access to main memory. When a cache set is full, one block of data is evicted to make room for new data. A replacement policy chooses which block to evict. The most natural one is to evict the least recently used (LRU) block. It however had the reputation among hardware designers of being hard to implement, and thus they proposed some “pseudo LRU” policies supposed to behave “similarly”. There is also a FIFO policy, also easy to implement. For safety critical systems it is sometimes necessary to prove bounds on the worst-case execution time by static analysis. This analysis has to take into account the cache system. There is a well-known abstract interpretation for LRU caches, based on “ages”; no such analyses are known for other cache policies. Indeed, the common wisdom is that “pseudo LRU” caches are hard to analyse because their behavior may depend on events in the distant past that one cannot be sure to have flushed away. In this talk: - We show how this common wisdom can be grounded in strong theory: we prove that, in a certain model, exact static analysis for two kinds of pseudo LRU caches and for FIFO is PSPACE-complete, while that for LRU is “only” NP-complete. - We give a practical algorithm for exact static analysis for LRU. This analysis has been implemented in the OTAWA tool. Verification Friday January 10, 2020, 10AM, Salle 3052 Joseph Tassarotti (Boston College) Verifying Concurrent Randomized Programs Despite continued progress in program verification, many fundamental data structures and algorithms remain difficult to verify. Challenges arise when programs use concurrency or randomization because these effects lead to non-deterministic behavior.
Although many specialized logics have been developed to reason about programs that use either concurrency or randomization, most can only handle the use of one of these features in isolation. However, many common concurrent data structures are randomized, making it important to consider the combination. This talk will describe Polaris, a logic for reasoning about programs that are both concurrent and randomized. Polaris combines ideas from three separate lines of work: (1) Varacca and Winskel's denotational model of non-determinism and probability, (2) Barthe et al.'s approach to probabilistic relational reasoning via couplings, and (3) higher-order concurrent separation logic, as realized in the Iris verification framework. Joint session between the PPS and Verification seminars. Year 2019 Verification Monday December 16, 2019, 11AM, Salle 1007 Jules Villard (Facebook) The Infer Static Analysis platform Infer is an open-source static analysis platform for Java, C, C++, and Objective-C. Infer is deployed at several companies where it helps developers write better code. This talk will present how Infer is used at Facebook, where it analyses thousands of code changes every month, leading to thousands of bugs being found and fixed before they reach users. We will then see how to write your own inter-procedural static analysis in just a few lines of code inside Infer, and automatically be able to apply it to millions of lines of code. Verification Wednesday December 11, 2019, 11AM, Salle 1007 Yotam Feldman (Tel Aviv University) Complexity and Information in Invariant Inference This work addresses the complexity of SAT-based invariant inference, a prominent approach to safety verification. We consider the problem of inferring an inductive invariant of polynomial length given a transition system and a safety property. We analyze the complexity of this problem in a black-box model, called the Hoare-query model, which is general enough to capture algorithms such as IC3/PDR and its variants. An algorithm in this model learns about the system's reachable states by querying the validity of Hoare triples. We show that in general an algorithm in the Hoare-query model requires an exponential number of queries. Our lower bound is information-theoretic and applies even to computationally unrestricted algorithms, showing that no choice of generalization from the partial information obtained in a polynomial number of Hoare queries can lead to an efficient invariant inference procedure in this class. We then show, for the first time, that by utilizing rich Hoare queries, as done in PDR, inference can be exponentially more efficient than approaches such as ICE learning, which only utilize inductiveness checks of candidates. We do so by constructing a class of transition systems for which a simple version of PDR with a single frame infers invariants in a polynomial number of queries, whereas every algorithm using only inductiveness checks and counterexamples requires an exponential number of queries. Our results also shed light on connections and differences with the classical theory of exact concept learning with queries, and imply that learning from counterexamples to induction is harder than classical exact learning from labeled examples. This demonstrates that the convergence rate of Counterexample-Guided Inductive Synthesis depends on the form of counterexamples. Joint work with Neil Immerman, Mooly Sagiv, and Sharon Shoham, that will appear in POPL'20. 
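As a very rough illustration of the distinction drawn in the abstract above between inference that only uses inductiveness checks and counterexamples and the richer Hoare-query style of PDR, here is a toy, explicit-state Python sketch of Houdini-style conjunctive inference from counterexamples to induction; the transition system, candidate predicates, and all names are invented, and the sketch stands in for the black-box query models of the paper rather than reproducing them.

```python
# Toy transition system (hypothetical): a counter modulo 10 that starts
# at 0 and always increases by 2, so the "bad" state x == 7 is unreachable.
INIT = 0
STATES = range(10)

def step(x):
    return (x + 2) % 10

# Candidate predicates for a conjunctive invariant.
CANDIDATES = {
    "even": lambda x: x % 2 == 0,
    "lt10": lambda x: x < 10,
    "ne7":  lambda x: x != 7,
    "lt5":  lambda x: x < 5,   # too strong: a counterexample to induction drops it
}

def holds(conj, x):
    return all(CANDIDATES[name](x) for name in conj)

def houdini():
    """Drop candidates until the remaining conjunction is inductive: it
    holds in the initial state and is preserved by every step from a
    state satisfying it."""
    conj = set(CANDIDATES)
    changed = True
    while changed:
        changed = False
        for name in sorted(conj):
            pred = CANDIDATES[name]
            bad_init = not pred(INIT)
            bad_step = any(holds(conj, x) and not pred(step(x)) for x in STATES)
            if bad_init or bad_step:
                conj.remove(name)   # counterexample to induction found
                changed = True
                break
    return conj

inv = houdini()
print(inv)  # e.g. {'even', 'lt10', 'ne7'} (order may vary)
print(all(not (holds(inv, x) and x == 7) for x in STATES))  # safety holds: True
```

The inferred conjunction rules out x == 7; the lower bound in the abstract says that, in general, procedures limited to this kind of inductiveness-and-counterexample interaction can be exponentially worse than ones allowed to ask richer Hoare queries.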
Verification Monday December 9, 2019, 11AM, Salle 1007 Sidi Mohamed Beillahi (IRIF) Checking Robustness Against Snapshot Isolation Transactional access to databases is an important abstraction allowing programmers to consider blocks of actions (transactions) as executing in isolation. The strongest consistency model is serializability, which ensures the atomicity abstraction of transactions executing over a sequentially consistent memory. Since ensuring serializability carries a significant penalty on availability, modern databases provide weaker consistency models, one of the most prominent being snapshot isolation. In general, the correctness of a program relying on serializable transactions may be broken when using weaker models. However, certain programs may also be insensitive to consistency relaxations, i.e., all their properties holding under serializability are preserved even when they are executed over a weakly consistent database and without additional synchronization. In this talk, we address the issue of verifying if a given program is robust against snapshot isolation, i.e., all its behaviors are serializable even if it is executed over a database ensuring snapshot isolation. We show that this verification problem is polynomial time reducible to a state reachability problem in transactional programs over a sequentially consistent shared memory. This reduction opens the door to the reuse of the classic verification technology for reasoning about sequentially-consistent programs. In particular, we show that it can be used to derive a proof technique based on Lipton's reduction theory that makes it possible to prove programs robust. This is joint work with Ahmed Bouajjani and Constantin Enea. Verification Friday November 29, 2019, 2:30PM, Salle 3052 Dmitry Chistikov (University of Warwick) On the complexity of linear arithmetic theories over the integers Given a system of linear Diophantine equations, how difficult is it to determine whether it has a solution? What changes if equations are replaced with inequalities? If some of the variables are quantified universally? These and similar questions relate to the computational complexity of deciding the truth value of statements in various logics. This includes in particular Presburger arithmetic, the first-order logic over the integers with addition and order. In this talk, I will survey constructions and ideas that underlie known answers to these questions, from classical results to recent developments, and open problems. First, we will recall the geometry of integer linear programming and how it interacts with quantifiers. This will take us from classical results due to von zur Gathen and Sieveking (1978), Papadimitriou (1981), and others to the geometry of the set of models of quantified logical formulas. We will look at rational convex polyhedra and their discrete analogue, hybrid linear sets (joint work with Haase (2017)), and see, in particular, how the latter form a proper sub-family of ultimately periodic sets of integer points in several dimensions (the semi-linear sets, introduced by Parikh (1961)). Second, we will discuss “sources of hardness”: which aspects of the expressive power make decision problems for logics over the integers hard. Addition and multiplication combined enable simulation of arbitrary Turing machines, and restriction of multiplication to bounded integers corresponds to resource-bounded Turing machines. How big can these bounded integers be in Presburger arithmetic?
This leads to the problem of representing big numbers with small logical formulae, and we will see constructions by Fischer and Rabin (1974) and by Haase (2014). We will also look at the new “route” for expressing arithmetic progressions (in the presence of quantifier alternation) via continued fractions, recently discovered by Nguyen and Pak (2017). Verification Friday November 29, 2019, 11AM, Salle 3052 Suresh Jagannathan (Purdue University) Mergeable Replicated Data Types Programming geo-replicated distributed systems is challenging given the complexity of reasoning about different evolving states on different replicas. Existing approaches to this problem impose a significant burden on application developers to consider the effect of how operations performed on one replica are witnessed and applied to others. To alleviate these challenges, we present a fundamentally different approach to programming in the presence of replicated state. Our insight is based on the use of invertible relational specifications of an inductively-defined data type as a mechanism to capture salient aspects of the data type relevant to how its different instances can be safely merged in a replicated environment. Importantly, because these specifications only address a data type's (static) structural properties, their formulation does not require exposing low-level system-level details concerning asynchrony, replication, visibility, etc. As a consequence, our framework enables the correct-by-construction synthesis of rich merge functions over arbitrarily complex (i.e., composable) data types. Furthermore, the specification language allows us to extract sufficient conditions to automatically derive merge functions with meaningful convergence properties. We have implemented our ideas in a tool called Quark and have been able to demonstrate the utility of our model on a number of real-world benchmarks. This is joint work with Gowtham Kaki, Swarn Priya, and KC Sivaramakrishnan. Verification Monday November 18, 2019, 11AM, Salle 1007 Arnaud Sangnier (IRIF) The Complexity of Flat Freeze LTL We consider the model-checking problem for freeze LTL on one-counter automata (OCAs). Freeze LTL extends LTL with the freeze quantifier, which allows one to store different counter values of a run in registers so that they can be compared with one another. As the model-checking problem is undecidable in general, we focus on the flat fragment of freeze LTL, in which the usage of the freeze quantifier is restricted. Recently, Lechner et al. showed that model checking for flat freeze LTL on OCAs with binary encoding of counter updates is decidable and in 2NEXPTIME. In this paper, we prove that the problem is, in fact, NEXPTIME-complete no matter whether counter updates are encoded in unary or binary. Like Lechner et al., we rely on a reduction to the reachability problem in OCAs with parameterized tests (OCAPs). The new aspect is that we simulate OCAPs by alternating two-way automata over words. This implies an exponential upper bound on the parameter values that we exploit towards an NP algorithm for reachability in OCAPs with unary updates. We obtain our main result as a corollary. Verification Friday November 15, 2019, 2:30PM, Salle 3052 Patrick Totzke (Liverpool University) Timed Basic Parallel Processes I will talk about two fun constructions for reachability analysis of one-clock timed automata, which lead to concise logical characterizations in existential Linear Arithmetic. 
The first one describes “punctual” reachability relations: reachability in exact time t. It uses a coarse interval abstraction and counting of resets via Parikh-Automata. The other is a “sweep line” construction to compute optimal reachability times in reachability games played on one-clock TA. Together, these can be used to derive a (tight) NP complexity upper bound for the coverability and reachability problems in an interesting subclass of Timed Petri Nets, which naturally lends itself to parametrised safety checking of concurrent, real-time systems. This contrasts with known super-Ackermannian completeness and undecidability results for unrestricted Timed Petri nets. This is joint work with Lorenzo Clemente and Piotr Hofman, and was presented at CONCUR'19. Full details are available at https://arxiv.org/abs/1907.01240. Verification Monday November 4, 2019, 11AM, Salle 1007 Zeinab Nehaï (IRIF) Deductive Proof of Industrial Smart Contracts Using Why3 A bug or error is a common problem that any software or computer program may encounter. It can result from a badly written program, a typing error or bad memory management. However, errors can become a significant issue if the unsafe program is used for critical systems. Therefore, the use of formal methods for this kind of system is strongly required. In this work, we apply a formal method that performs deductive verification on industrial smart contracts, which are self-executing digital programs. Because smart contracts manipulate cryptocurrency and transaction information, if a bug occurs in such programs, serious consequences, such as a loss of money, can follow. The aim of this work is to show that a language dedicated to deductive verification, called Why3, can be a suitable language to write correct and proven contracts. We first encode existing contracts as Why3 programs; next, we formulate the specifications to be proved, namely the absence of runtime errors and functional properties; then we verify the behaviour of the program using the Why3 system. Finally, we compile the Why3 contracts to the Ethereum Virtual Machine (EVM). Moreover, our approach estimates the cost of gas, which is a unit that measures the amount of computational effort during a transaction. Verification Monday October 28, 2019, 11AM, Salle 1007 Pierre Ganty (IMDEA Software Institute) Deciding language inclusion problems using quasiorders We study the language inclusion problem L1 ⊆ L2 where L1 is regular or context-free. Our approach checks whether an overapproximation of L1 is included in L2. Such overapproximations are obtained using quasiorder relations on words where the abstraction gives the language of all words “greater than or equal to” a given input word for that quasiorder. We put forward a range of quasiorders that allow us to systematically design decision procedures for different language inclusion problems such as context-free languages into regular languages and regular languages into trace sets of one-counter nets. Verification Monday October 21, 2019, 11AM, Salle 1007 Mohamed Faouzi Atig (Uppsala University) On Solving String Constraints String data types are present in all conventional programming and scripting languages. In fact, it is almost impossible to reason about the correctness and security of such programs without having a decision procedure for string constraints.
A typical string constraint consists of word equations over string variables denoting strings of arbitrary lengths, together with constraints on (i) the length of strings, and (ii) the regular languages to which strings belong. Decidability of this general logic is still open. In this talk, we will discuss recent results on the decidability and decision procedures for string constraints. We will focus on decision procedures that are sound and complete for a well-defined fragment of string constraints. We will also cover a technique that uses a Counter-Example Guided Abstraction Refinement (CEGAR) framework which contains both an under- and an over-approximation module. The flow of information between these modules allows increasing the precision in an automatic manner. Verification Monday October 14, 2019, 11AM, Salle 1007 Laurent Doyen (LSV, ENS Paris-Saclay) Expected Finite Horizon A classical problem in discrete planning is to consider a weighted graph and construct a path that maximizes the sum of weights for a given time horizon T. However, in many scenarios, the time horizon is not fixed, but the stopping time is chosen according to some distribution such that the expected stopping time is T. If the stopping time distribution is not known, then to ensure robustness, the distribution is chosen by an adversary, to represent the worst-case scenario. A stationary plan always chooses the same outgoing edge at every vertex. For fixed horizon T, stationary plans are not sufficient for optimality. Quite surprisingly, we show that when an adversary chooses the stopping time distribution with expected stopping time T, then stationary plans are sufficient. While computing optimal stationary plans for fixed horizon is NP-complete, we show that computing optimal stationary plans under adversarial stopping time distribution can be achieved in polynomial time. Consequently, our polynomial-time algorithm for adversarial stopping time also computes an optimal plan among all possible plans. Verification Monday September 23, 2019, 11AM, Salle 1007 Stefan Kiefer (University of Oxford) Selective monitoring We study selective monitors for labelled Markov chains. Monitors observe the outputs that are generated by a Markov chain during its run, with the goal of identifying runs as correct or faulty. A monitor is selective if it skips observations in order to reduce monitoring overhead. We are interested in monitors that minimize the expected number of observations. We establish an undecidability result for selectively monitoring general Markov chains. On the other hand, we show for non-hidden Markov chains (where any output identifies the state the Markov chain is in) that simple optimal monitors exist and can be computed efficiently, based on DFA language equivalence. These monitors do not depend on the precise transition probabilities in the Markov chain. We report on experiments where we compute these monitors for several open-source Java projects. Verification Friday July 5, 2019, 2:30PM, Salle 1001 Mahsa Shirmohammadi (IRIF) Büchi Objectives in Countable MDPs We study countably infinite Markov decision processes with Büchi objectives, which ask to visit a given subset of states infinitely often. A question left open by T.P. Hill in 1979 is whether there always exist ε-optimal Markov strategies, i.e., strategies that base decisions only on the current state and the number of steps taken so far. We provide a negative answer to this question by constructing a non-trivial counterexample.
On the other hand, we show that Markov strategies with only 1 bit of extra memory are sufficient. This work is in collaboration with Stefan Kiefer, Richard Mayr and Patrick Totzke, and will be presented at ICALP 2019. A full version is at https://arxiv.org/abs/1904.11573 Joint Automata & Verification Seminars Verification Monday July 1, 2019, 2PM, Salle 3052 Nikhil Balaji (University of Oxford) The complexity of some easy subclasses of the Skolem problem Given a linear recurrence sequence (LRS), is there an algorithm that decides if the infinite sequence of numbers generated by the LRS contains 0? This is the notorious Skolem problem. The Skolem-Mahler-Lech theorem says that the set of zeros of an LRS is semi-linear; however, this doesn't translate to an algorithm, due to the lack of effective bounds on the description of the zero set. This algorithmic question has remained open despite several decades of active research, with decidability known only for a few restricted subclasses, either by restricting the depth of the LRS (up to 4) or by restricting the structure of the LRS (especially roots of the characteristic polynomials). Recent work due to Ouaknine and Worrell has shown that decidability of closely related algorithmic questions could entail significant breakthroughs in Diophantine approximation. In this talk, we first present an elementary proof that this problem is NP-hard (which was already observed by Blondel and Portier). We further study some easy subclasses which are known to be decidable from the point of view of proving better complexity upper bounds. We are able to show that for a wide class of special cases of LRS, the zero set has efficiently verifiable certificates. Based on joint ongoing work with S. Akshay, Aniket Muhrekar, Rohith Varma and Nikhil Vyas. Joint Vérification/Automates seminar Verification Monday June 17, 2019, 11AM, Salle 3052 Pierre-Malo Deniélou (Google) From MapReduce to Apache Beam: A Journey in Abstraction Processing large amounts of data used to be an affair of specialists: specialized hardware, specialized software, specialized programming model, specialized engineers. MapReduce was the first widely adopted high-level API for large-scale data processing. It helped democratize big data processing by providing a clear abstraction that was supported by several efficient systems. In this talk, I will present how the programming APIs (and underlying systems) for large-scale data processing evolved in the past 20 years, both within Google and in the open source world. I will start from MapReduce and Millwheel and finish with Apache Beam and where we're headed next. Joint session of the Systèmes Complexes, Vérification and PPS seminars Verification Tuesday June 11, 2019, 3PM, Salle 3052 Burcu Külahçıoğlu Özkan (Max Planck Institute for Software Systems (MPI-SWS)) Randomized Testing of Distributed Systems with Probabilistic Guarantees Several recently proposed randomized testing tools for concurrent and distributed systems come with theoretical guarantees on their success. The key to these guarantees is a notion of bug depth—the minimum length of a sequence of events sufficient to expose the bug—and a characterization of d-hitting families of schedules—a set of schedules guaranteed to cover every bug of given depth d.
Previous results show that in certain cases the size of a d-hitting family can be significantly smaller than the total number of possible schedules. However, these results either assume shared-memory multithreading, or that the underlying partial ordering of events is known statically and has special structure. These assumptions are not met by distributed message-passing applications. In this talk, we present a randomized scheduling algorithm for testing distributed systems. In contrast to previous approaches, our algorithm works for arbitrary partially ordered sets of events revealed online as the program is being executed. We show that for partial orders of width at most w and size at most n (both statically unknown), our algorithm is guaranteed to sample from at most w^2 · n^(d−1) schedules, for every fixed bug depth d. Thus, our algorithm discovers a bug of depth d with probability at least 1/(w^2 · n^(d−1)). As a special case, our algorithm recovers a previous randomized testing algorithm for multithreaded programs. Our algorithm is simple to implement, but the correctness arguments depend on difficult combinatorial results about online dimension and online chain partitioning of partially ordered sets. In the last part of the talk, we briefly discuss how we can exploit state-space reduction strategies from model checking to provide better guarantees for the probability of hitting bugs. This will move us towards a randomized algorithm which brings together two orthogonal reductions of the search space: depth-bounded testing and dependency-aware testing. (Joint work with Rupak Majumdar, Filip Niksic, Simin Oraee, Mitra Tabaei Befrouei, Georg Weissenbacher) Verification Monday May 20, 2019, 11AM, Salle 1007 Raphaël Cauderlier (Nomadic Labs) Functional Verification of Tezos Smart Contracts in Coq Smart contract blockchains such as Ethereum and Tezos let their users program their crypto-money accounts. Bugs in these smart contracts can lead to huge losses. Yet smart contracts are necessarily small programs because checking the blockchain requires evaluating all the smart contract calls in the history of the chain. For these reasons, smart contract languages are a good spot for formal methods. The Tezos blockchain and its smart contract language Michelson have been designed with verification in mind. In this talk, I will present Mi-Cho-Coq, a formal specification of Michelson in the Coq proof assistant. I will also demonstrate how to use it for functional verification of a typical multisig smart contract. Verification Friday May 10, 2019, 11AM, Salle 3052 Mahsa Shirmohammadi (LIS - CNRS & Univ. Aix-Marseille) A Lesson on Timed Automata Reachability Advertisement: The long-standing, natural, and useful problem of characterizing the reachability relation for timed automata has been solved many times: by Hubert Comon and Yan Jurski, by Catalin Dima, by Pavol Krcal, etc. Still, all these characterizations are difficult to understand and to use in practice. Recently, Mahsa Shirmohammadi and Ben Worrell came up with a new solution, and moreover they say that it is very easy! Mahsa will present it in the form of a black-(or white-)board lesson on May 10th, from 11:00 to 13:00. Abstract: In our preprint (submitted to IPL)[*], we give a simple and short proof of effective definability of the reachability relation in timed automata. We use a small trick that reduces this problem to that of computing the set of configurations reachable from a fixed initial configuration in timed automata.
The proof uses, at its heart, only Parikh's theorem and 1-bounded region automata, so to follow the proof one does not need any prior knowledge of timed automata. [*] https://arxiv.org/abs/1903.09773 This long seminar/lecture will run from 11:00 to 13:00. Verification Monday May 6, 2019, 11AM, Salle 1007 Matthieu Perrin (Université de Nantes) Shared Objects in Distributed Systems: A Broadcast Perspective It is well-known that consensus (one-set agreement) and total order broadcast are equivalent in asynchronous systems prone to process crash failures. Considering wait-free systems, this talk addresses and answers the following question: which is the communication abstraction that “captures”, on the one hand, read/write registers, and on the other hand, k-set agreement? To this end, it introduces a new broadcast communication abstraction, called k-SCD-Broadcast, which restricts the disagreement on the local deliveries of the messages that have been broadcast: 1-SCD-Broadcast boils down to total order broadcast and n-SCD-Broadcast is equivalent to the read/write register. Hence, in this context, k = 1 is not a special number, but only the first integer in an increasing integer sequence. This establishes a new “correspondence” between distributed agreement problems and communication abstractions, which enriches our understanding of the relations linking fundamental issues of fault-tolerant distributed computing. Verification Tuesday April 16, 2019, 11AM, Salle 3052 Si Liu (University of Illinois at Urbana-Champaign) Design, Verification and Automatic Implementation of Correct-by-Construction Distributed Transaction Systems in Maude Designing, verifying, and implementing highly reliable distributed systems is at present a hard and very labor-intensive task. Cloud-based systems have further increased this complexity due to the desired consistency, availability, scalability, and disaster tolerance. This talk addresses this challenge in the context of distributed transaction systems (DTSs) from two complementary perspectives: (i) designing DTSs in Maude with high assurance such that they satisfy desired correctness and performance requirements; and (ii) transforming verified Maude designs of DTSs into correct-by-construction distributed implementations. Regarding (i), I will talk about our Maude-based framework for automatic analysis of consistency and performance properties of DTSs. Regarding (ii), I will present a correct-by-construction automatic transformation mapping a verified formal specification of an actor-based distributed system design in Maude to a distributed implementation enjoying the same safety and liveness properties as the original formal design. Verification Monday April 15, 2019, 11AM, Salle 1007 John Wickerson (Imperial College London) Using Alloy to explore Weak Memory and Transactional Memory When reasoning about concurrent programs, it is tempting to imagine that each thread executes its instructions in order, and that writes to shared memory instantly become visible to all other threads. Alas, this ideal (which is called “sequential consistency”) is rarely met. In reality, most processors and compilers allow instructions sometimes to be reordered, and loads from shared memory sometimes to retrieve stale data. This phenomenon is called “weak memory”. This talk is about how we can formalise models of weak memory using a tool called Alloy. Using these formalisations, we can then synthesise programs that can distinguish different models of weak memory.
We can also check whether a compiler mapping from one weak memory model to another is correct. The main case study in this talk describes how we used Alloy to find problems in a prototype implementation of transactional memory being developed by ARM. I will also talk about how similar techniques can be used to check the correctness of software-to-hardware compilers (or “high-level synthesis” tools). This talk is based on joint work with Mark Batty, Nathan Chong, George Constantinides, and Tyler Sorensen. For further reading, please see papers that appeared in [POPL'17](https://johnwickerson.github.io/papers/memalloy.pdf) and [PLDI'18](https://johnwickerson.github.io/papers/transactions.pdf). Verification Thursday April 11, 2019, 2PM, Salle 3052 Ismail Kuru (Drexel University) Safe Deferred Memory Reclamation with Types Memory management in lock-free data structures remains a major challenge in concurrent programming. Design techniques including read-copy-update (RCU) and hazard pointers provide workable solutions, and are widely used to great effect. These techniques rely on the concept of a grace period: nodes that should be freed are not deallocated immediately, and all threads obey a protocol to ensure that the deallocating thread can detect when all possible readers have completed their use of the object. This provides an approach to safe deallocation, but only when these subtle protocols are implemented correctly. We present a static type system to ensure correct use of RCU memory management: that nodes removed from a data structure are always scheduled for subsequent deallocation, and that nodes are scheduled for deallocation at most once. As part of our soundness proof, we give an abstract semantics for RCU memory management primitives which captures the fundamental properties of RCU. Our type system allows us to give the first proofs of memory safety for RCU linked list and binary search tree implementations without requiring full verification. Verification Friday April 5, 2019, 2:30PM, Salle 3052 Christoph Kirsch (Universität Salzburg) On the self in Selfie Selfie is a self-contained 64-bit, 10-KLOC implementation of (1) a self-compiling compiler written in a tiny subset of C called C* targeting a tiny subset of 64-bit RISC-V called RISC-U, (2) a self-executing RISC-U emulator, (3) a self-hosting hypervisor that virtualizes the emulated RISC-U machine, and (4) a prototypical symbolic execution engine that executes RISC-U symbolically. Selfie can compile, execute, and virtualize itself any number of times in a single invocation of the system given adequate resources. There is also a simple linker, disassembler, debugger, and profiler. C* supports only two data types, uint64_t and uint64_t*, and RISC-U features just 14 instructions, in particular for unsigned arithmetic only, which significantly simplifies reasoning about correctness. Selfie was originally developed just for educational purposes but has by now become a research platform as well. We discuss how selfie leverages the synergy of integrating compiler, target machine, and hypervisor in one self-referential package while orthogonalizing bootstrapping, virtual and heap memory management, emulated and virtualized concurrency, and even replay debugging and symbolic execution. This is joint work with Alireza S.
Abyaneh, Simon Bauer, Philipp Mayer, Christian Mösl, Clément Poncelet, Sara Seidl, Ana Sokolova, and Manuel Widmoser, University of Salzburg, Austria Web: http://selfie.cs.uni-salzburg.at Verification Monday March 18, 2019, 11AM, Salle 1007 Glen Mével (INRIA Paris) Time Credits and Time Receipts in Iris I present a machine-checked extension of the program logic Iris with time credits and time receipts, two dual means of reasoning about time. Whereas time credits are used to establish an upper bound on a program’s execution time, time receipts can be used to establish a lower bound. More strikingly, time receipts can be used to prove that certain undesirable events—such as integer overflows—cannot occur until a very long time has elapsed. I will present a simple machine-checked application of time receipts, and sketch how we define our extension of Iris (and prove its soundness) in a modular way, as a layer above Iris. Verification Friday March 15, 2019, 11AM, Salle 3052 Sreeja Nair (LIP 6 - Sorbonne Université) Invariant safety for distributed applications A distributed application replicates data closer to the user to ensure shorter response times and availability when the network partitions. The replicas should be able to allow updates even in case of network partitions, and they exchange their states in order to converge. The application developer must also make sure that the application invariants hold true even when the replicas diverge. In this talk, we present a proof methodology for verifying the safety of invariants of highly-available distributed applications that replicate state. The proof is modular and sequential: one can reason about each individual operation separately, and one can reason about a distributed application as if it were sequential. We automate the methodology and illustrate the use of the tool with a representative example. Verification Monday March 11, 2019, 11AM, Salle 1007 Richard Bonichon (CEA List) Get rid of inline assembly through trustable verification-oriented lifting Formal methods for software development have made great strides in the last two decades, to the point that their application in safety-critical embedded software is an undeniable success. Their extension to non-critical software is one of the notable forthcoming challenges. For example, C programmers regularly use inline assembly for low-level optimizations and system primitives. This usually renders state-of-the-art formal analyzers developed for C ineffective. We thus propose TInA, an automated, generic, trustworthy and verification-oriented lifting technique turning inline assembly into semantically equivalent C code, in order to take advantage of existing C analyzers. Extensive experiments on real-world C code with inline assembly (including GMP and ffmpeg) show the feasibility and benefits of TInA. Verification Tuesday March 5, 2019, 11AM, 3052 Azadeh Farzan (University of Toronto) Automated Hypersafety Verification In this talk, I will introduce an automated verification technique for hypersafety properties, which express sets of valid interrelations between multiple finite runs of a program. They can alternatively be viewed as standard safety properties of a product of several copies of the program by itself. The key observation is that a small representative set of the product program runs, called a reduction, is sufficient to construct a formal proof.
I will present an algorithm based on a counterexample-guided refinement loop that simultaneously searches for a reduction and a proof of correctness for the reduction. The algorithm is implemented in a tool and there is evidence that it is very effective in verifying a diverse array of hypersafety properties for a diverse class of input programs. Verification Monday March 4, 2019, 11AM, Salle 1007 Marie Van den Bogaard (Université Libre de Bruxelles) Beyond admissibility: Dominance between chains of strategies In this talk, we focus on the concept of rational behaviour in multi-player games on finite graphs, taking the point of view of a player that has access to the structure of the game but cannot make assumptions on the preferences of the other players. In the qualitative setting, admissible strategies have been shown to fit the rationality requirements, as they coincide with winning strategies when these exist, and enjoy the fundamental property that every strategy is either admissible or dominated by an admissible strategy. However, as soon as there are three or more payoffs, one finds that this fundamental property does not necessarily hold anymore: one may observe chains of strategies that are ordered by dominance and such that no admissible strategy dominates any of them. Thus, to recover a satisfactory rationality notion (still based on dominance), we depart from the single strategy analysis approach and consider instead chains of strategies as families of behaviours. We establish a sufficient criterion for games to enjoy a similar fundamental property, i.e., all chains are below some maximal chain, and, as an illustration, we present a class of games where admissibility fails to capture some intuitively rational behaviours, while our chain-based analysis does. Based on joint work with N. Basset, I. Jecker, A. Pauly and J.-F. Raskin, presented at CSL'18. Verification Monday February 25, 2019, 11AM, Salle 1007 Pierre Courtieu (CNAM) A proof framework for mobile robots protocols Distributed algorithms are known to be very difficult to design and certify. If these algorithms need to be fault or Byzantine resilient, the difficulty is even greater. Examples of flawed algorithms and proofs are numerous. It is therefore a natural domain for mechanized proofs (i.e. with the help of a proof assistant): the additional work of mechanizing the proofs should be worth the effort. We present a framework, named Pactole, for designing distributed protocols for mobile robots, stating properties about these protocols and proving these properties. We can also prove *impossibility statements*, i.e., that there exists no protocol achieving some property. Our technique is not comparable to model checking but can be seen as complementary to it, in the sense that our proofs are not automated but allow one, for instance, to conclude that a protocol is correct *for any* starting configuration and number of robots that is not proven impossible. Verification Monday February 25, 2019, 11AM, Salle 1007 Corto Mascle (ENS Paris-Saclay) To be announced. Verification Monday February 18, 2019, 2PM, Salle 3052 José Ignacio Requeno (Université Grenoble Alpes) Parameter Synthesis for Extended Signal Temporal Logic Specifications Cyber-Physical Systems (CPS) are complex environments composed of embedded computers that monitor or control physical processes. Nowadays, they are ubiquitously present in most aspects of our lives, ranging from power stations to communications, transport or finance.
Guaranteeing a correct behaviour and a good quality of service (QoS) of such critical systems is imperative. Signal temporal logic (STL) is a specification language that is adapted for dealing with continuous-time real-valued properties, which are characteristic of CPSs. Used in combination with formal models and model checking techniques, it allows the verification and validation of runtime executions. In this talk, I will present recent improvements in the STL language, including a new method for inferring the validity domain of parametric STL specifications. Verification Monday February 11, 2019, 11AM, Salle 1007 Stéphane Demri (LSV - CNRS & ENS Paris-Saclay) Modal Separation Logics and Friends Separation logic is known as an assertion language to perform verification, by extending Hoare-Floyd logic in order to verify programs with mutable data structures. The separating conjunction allows us to evaluate formulae in subheaps, and this possibility to evaluate formulae in alternative models is a feature shared with many modal logics such as sabotage logics, logics of public announcements or relation-changing modal logics. In the first part of the talk, we present several versions of modal separation logics whose models are memory states from separation logic and whose logical connectives include modal operators as well as separating conjunction. Some fragments are genuine modal logics whereas others capture standard separation logics, leading to an original language to speak about memory states. In the second part of the talk, we present the decidability status and the computational complexity of several modal separation logics (leading, for instance, to NP-complete or Tower-complete satisfiability problems). In the third part, we investigate the complexity of fragments of quantified CTL under the tree semantics (or similar modal logics with propositional quantification), some of these fragments being related to modal separation logics. Part of the presented work is done jointly with B. Bednarczyk (U. of Wroclaw) or with R. Fervari (U. of Cordoba). Verification Monday January 28, 2019, 11AM, Salle 1007 Mark Batty (University of Kent) A denotational semantics for weak-memory language concurrency with modular DRF-SC Concurrent languages provide weak semantics: they admit non-sequentially-consistent (SC) behaviour to accommodate weak target architectures and compiler optimisation. Subtle behaviour and intricate specifications result, but two properties help to simplify things – no thin-air values, and DRF-SC. In the first, all values must be computed from constants or inputs to the program, rather than conjured up, for example, through abuse of speculative execution. In DRF-SC, programs without data races are guaranteed SC behaviour. Unfortunately, existing languages like C/C++ admit thin-air values and DRF-SC results only apply to an unrealistic class of programs, excluding systems code and code using concurrency libraries. We provide a modular semantics that avoids thin-air values and we prove a new modular variant of DRF-SC that lifts a key barrier to its usefulness. Year 2018 Verification Monday December 10, 2018, 11:10AM, Salle 1007 Mahsa Shirmohammadi (LIS - CNRS & Université Aix-Marseille) On the Complexity of Value Iteration Value iteration is a fundamental algorithm for solving Markov Decision Processes (MDPs). It computes the maximal n-step payoff by iterating n times a recurrence equation which is naturally associated to the MDP.
At the same time, value iteration provides a policy for the MDP that is optimal on a given finite horizon n. In this talk, we settle the computational complexity of value iteration. We show that, given a horizon n in binary and an MDP, computing an optimal policy is EXP-complete, thus resolving an open problem that goes back to the seminal 1987 paper on the complexity of MDPs by Papadimitriou and Tsitsiklis. As a stepping stone, we show that it is EXP-complete to compute the n-fold iteration (with n in binary) of a function given by a straight-line program over the integers with max and + as operators. A preliminary draft of this work is available on arXiv: https://arxiv.org/abs/1807.04920v2 Verification Monday December 3, 2018, 11:10AM, Salle 1007 Vincent Rahli (SnT - University of Luxembourg) Building Reliable Systems on Reliable Foundations Current mainstream programming/verification tools are not well-suited to building safe and secure distributed programs that meet required specifications. This is a serious issue because our modern society has come to rely heavily on complex and constantly evolving distributed systems that need to tolerate all kinds of threats and attacks in order to protect critical and sensitive resources. This issue is due in part to the lack of formalisms, tools, and reliable foundations. Indeed, in the past few decades, we have seen a spurt of verification tools that handle sequential functional code well, but expressive tools to reason about distributed programs are only just emerging. Moreover, a major issue with such verification tools is that they are often very complex themselves, making them error prone. In this talk I will present some of my contributions to this area, namely my work on (1) designing formal verification tools to reason about complex distributed systems (such as Byzantine fault-tolerant consensus protocols as used in Blockchain technology); and (2) ensuring that those verification tools rely on correct foundations. I will conclude by showing how I believe distributed systems will have a positive impact on modern foundational/logical frameworks, and vice versa. Verification Monday November 26, 2018, 11:10AM, Salle 1007 Matthias Függer (LSV - CNRS & ENS de Cachan) Fast Asymptotic and Approximate Consensus in Highly Dynamic Networks Agreeing on a common value in a distributed system is a problem that lies at the heart of many distributed computing problems and occurs in several flavors. Unfortunately, even modest network dynamics already prohibit solvability of exact consensus, where agents have to decide on a single output value that is within the range of the agents' initial values. For many problems (distributed control, clock synchronization, load balancing, …) it is however sufficient to asymptotically converge to the same value, or decide on values not too far from each other. We study solvability of these consensus variants in highly dynamic networks, provide time complexity results, and present fast algorithms. We then show how the results on dynamic networks are relevant for classical fault-models, such as asynchronous message passing with crashes. The talk is on previous and current research with Bernadette Charron-Bost (LIX), Thomas Nowak (LRI), and Manfred Schwarz (ECS, TU Wien).
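As a rough illustration of the averaging dynamics underlying asymptotic consensus (a minimal sketch for intuition only, not an algorithm from the talk; the update rule and the toy dynamic graph below are made-up assumptions), each agent can repeatedly replace its value by the average of the values it hears from its current neighbours, so that the spread of values contracts under suitable connectivity assumptions:

```python
# Minimal sketch of averaging-based asymptotic consensus on a dynamic graph.
# Illustrative only: the update rule and the toy graph are assumptions,
# not the algorithms presented in the talk.

def averaging_round(values, in_neighbours):
    """Each agent i averages the values it hears this round, plus its own value."""
    new_values = []
    for i in range(len(values)):
        heard = [values[j] for j in in_neighbours(i)] + [values[i]]
        new_values.append(sum(heard) / len(heard))
    return new_values

def run(values, dynamic_graph, rounds):
    """Iterate the averaging rule; dynamic_graph(r, i) lists who i hears from in round r."""
    for r in range(rounds):
        values = averaging_round(values, lambda i: dynamic_graph(r, i))
    return values

if __name__ == "__main__":
    # Toy dynamic graph: in round r, agent i hears only from agent (i + r + 1) % n.
    n = 4
    graph = lambda r, i: [(i + r + 1) % n]
    print(run([0.0, 1.0, 4.0, 9.0], graph, rounds=25))  # values cluster near a common limit
```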
Verification Monday November 12, 2018, 11:10AM, Salle 1007 Arvid Jakobsson (Huawei Research & LIFO -Université d'Orléans) Replicated Synchronization of BSPlib Programs The BSP model (Bulk Synchronous Parallel) simplifies the construction and evaluation of parallel algorithms, with its simplified synchronization structure and cost model. Nevertheless, imperative BSP programs can suffer from synchronization errors. Programs with textually aligned barriers are free from such errors, and this structure eases program comprehension. We propose a simplified formalization of barrier inference as data flow analysis, which verifies statically whether an imperative BSP program has replicated synchronization, which is a sufficient condition for textual barrier alignment. Verification Monday November 5, 2018, 11:10AM, Salle 1007 Adrien Guatto (IRIF) Hierarchical Memory Management for Mutable State It is well known that modern functional programming languages are naturally amenable to parallel programming. Achieving efficient parallelism using functional languages, however, remains difficult. Perhaps the most important reason for this is their lack of support for efficient in-place updates, i.e., mutation, which is important for the implementation of both parallel algorithms and the run-time system services (e.g., schedulers and synchronization primitives) used to execute them. In this joint work with Sam Westrick, Ram Raghunathan, Umut Acar, and Matthew Fluet, we propose techniques for efficient mutation in parallel functional languages. To this end, we couple the memory manager with the thread scheduler to make reading and updating data allocated by nested threads efficient. We describe the key algorithms behind our technique, implement them in the MLton Standard ML compiler, and present an empirical evaluation. Our experiments show that the approach performs well, significantly improving efficiency over existing functional language implementations. Verification Monday October 29, 2018, 11:10AM, Salle 1007 Éric Tanter (PLEIAD - Universidad de Chile & INRIA Paris) The Essence of Gradual Typing Gradual typing enables programming languages to seamlessly combine dynamic and static checking. Language researchers and designers have extended a wide variety of type systems to support gradual typing. These efforts consistently demonstrate that designing a satisfactory gradual counterpart to a static type system is challenging, and this challenge only increases with the sophistication of the type system. Gradual type system designers need more formal tools to help them conceptualize, structure, and evaluate their designs. Based on an understanding of gradual types as abstractions of static types, we have developed systematic foundations for gradual typing based on abstract interpretation, called Abstracting Gradual Typing (AGT for short). Gradual languages designed with the AGT approach satisfy, by construction, the established criteria for gradually-typed languages. After a brief introduction to AGT, this talk reports on our experience applying AGT to various typing disciplines. Verification Monday October 22, 2018, 11:10AM, Salle 1007 Marie Fortin (LSV - ENS Paris-Saclay) It Is Easy to Be Wise After the Event: Communicating Finite-State Machines Capture First-Order Logic with “Happened Before” Message sequence charts (MSCs) naturally arise as executions of communicating finite-state machines (CFMs), in which finite-state processes exchange messages through unbounded FIFO channels. 
We study the first-order logic of MSCs, featuring Lamport's happened-before relation. We introduce a star-free version of propositional dynamic logic (PDL) with loop and converse. Our main results state that (i) every first-order sentence can be transformed into an equivalent star-free PDL sentence (and conversely), and (ii) every star-free PDL sentence can be translated into an equivalent CFM. This answers an open question and settles the exact relation between CFMs and fragments of monadic second-order logic. As a byproduct, we show that first-order logic over MSCs has the three-variable property. This is joint work with Benedikt Bollig and Paul Gastin, presented at CONCUR’18. Verification Monday October 1, 2018, 11:10AM, Salle 1007 Mahsa Shirmohammadi (LIS - CNRS & Université Aix-Marseille) On Rationality of Nonnegative Matrix Factorization Nonnegative matrix factorization (NMF) is the problem of decomposing a given nonnegative n×m matrix M into a product of a nonnegative n×d matrix W and a nonnegative d×m matrix H. A longstanding open question, posed by Cohen and Rothblum in 1993, is whether a rational matrix M always has an NMF of minimal inner dimension d whose factors W and H are also rational. We answer this question negatively, by exhibiting a matrix for which W and H require irrational entries. In this talk, we further exhibit a connection between Cohen and Rothblum's question and a question posed by Paz in his seminal 1971 textbook. The latter question asks, given a probabilistic automaton (PA) with rational transition probabilities, does there always exist a minimal equivalent PA that also has rational transition probabilities? The PA and its minimal equivalent PA accept all finite words with equal probabilities. We use the established connection to answer Paz's question negatively, thus falsifying a positive answer claimed in 1974. (This talk is based on a paper in SODA 2017.) Verification Monday September 24, 2018, 11:10AM, Salle 1007 Adam Shimi (IRIT - ENSEEIHT) Characterizing Asynchronous Message-Passing Models Through Rounds One great issue in conceiving and specifying distributed algorithms is the sheer number of models, differing in subtle and often qualitative ways: synchrony, kinds of faults, number of faults… In the context of message-passing, one solution is to restrict communication to proceed in rounds; a great variety of models can then be captured in the Heard-Of model, with predicates on the communication graph at each round. However, this raises another issue: how to characterize a given model by such a predicate? It depends on how to implement rounds in the model. This is straightforward in synchronous models, thanks to the upper bound on communication delay. On the other hand, asynchronous models allow unbounded message delays, which makes the implementation of rounds dependent on the specific message-passing model. I will present our formalization of this characterization for asynchronous models. Specifically, we introduce Delivered collections: the collection of all messages delivered at each round, whether late or not. Defining predicates on Delivered collections then allows us to capture message-passing models at the same level of abstraction as Heard-Of predicates. The question is then reformulated as: which Heard-Of predicates can be generated by a given Delivered predicate? I will provide an answer by considering all possible schedulings of deliveries of messages from the Delivered collections and change of rounds for the processes.
Strategies of processes then constrain those schedulings by specifying when processes can change rounds; those ensuring no process is ever blocked forever generate a Heard-Of collection per run, that is, a Heard-Of predicate. Finally, we use these strategies to find a characterizing Heard-Of predicate through a dominance relation on strategies: a dominant strategy for a Delivered predicate implements the most constrained Heard-Of predicate possible. This approach offers both the dominant Heard-Of predicates for classical asynchronous models and the existence, for every Delivered predicate, of a strategy dominating large classes of strategies. On the whole, those results confirm the power of this formalization and demonstrate the characterization of asynchronous models through rounds as a worthwhile pursuit. This is joint work with Aurélie Hurault and Philippe Quéinnec. Verification Monday September 17, 2018, 11AM, Salle 1007 Franz Mayr (Universidad ORT, Uruguay) Regular inference on artificial neural networks This paper explores the general problem of explaining the behavior of artificial neural networks (ANN). The goal is to construct a representation which enhances human understanding of an ANN as a sequence classifier, with the purpose of providing insight on the rationale behind the classification of a sequence as positive or negative, but also to enable performing further analyses, such as automata-theoretic formal verification. In particular, a probabilistic algorithm for constructing a deterministic finite automaton which is approximately correct with respect to an artificial neural network is proposed. Verification Friday June 29, 2018, 11AM, Salle 3052 Mahsa Shirmohammadi (LIS, CNRS - Univ. Aix-Marseille) Costs and Rewards in Priced Timed Automata We consider Pareto analysis of reachable states of multi-priced timed automata (MPTA): timed automata equipped with multiple observers that keep track of costs (to be minimised) and rewards (to be maximised) along a computation. Each observer has a constant non-negative derivative which may depend on the location of the MPTA. We study the Pareto Domination Problem, which asks whether it is possible to reach a target location via a run in which the accumulated costs and rewards Pareto dominate a given objective vector. We show that this problem is undecidable in general, but decidable for MPTA with at most three observers. For MPTA whose observers are all costs or all rewards, we show that the Pareto Domination Problem is PSPACE-complete. We also consider an epsilon-approximate Pareto Domination Problem that is decidable without restricting the number and types of observers. We develop connections between MPTA and Diophantine equations. Undecidability of the Pareto Domination Problem is shown by reduction from Hilbert's 10th Problem, while decidability for three observers is shown by a translation to a fragment of arithmetic involving quadratic forms.
We thus investigate an extension of these feature trees that includes the update operation, which expresses that two trees are similar except in a particular subtree where there might have been changes. We show that the first-order theory of this algebra is decidable via a weak quantifier elimination procedure which is allowed to swap existential quantifiers for universal quantifiers. Verification Monday June 18, 2018, 11AM, Salle 3052 Eugène Asarin (IRIF) Distance on timed words and applications We introduce and study a new (pseudo) metric on timed words having several advantages: - it is global: it applies to words having different numbers of events; - it is realistic and takes into account imprecise observation of timed events; thus it reflects the fact that the order of events cannot be observed whenever they are very close to each other; - it is suitable for quantitative verification of timed systems: we formulate and solve quantitative model-checking and quantitative monitoring in terms of the new distance, with reasonable complexity; - it is suitable for information-theoretical analysis of timed systems: due to its pre-compactness the quantity of information in bits per time unit can be correctly defined and computed. (Joint work with Nicolas Basset and Aldric Degorre) Verification Wednesday June 13, 2018, 3PM, Salle 3052 Joël Ouaknine (MPI-SWS) Program Invariants Automated invariant generation is a fundamental challenge in program analysis and verification, going back many decades, and remains a topic of active research. In this talk I'll present a select overview and survey of work on this problem, and discuss unexpected connections to other fields including algebraic geometry, group theory, and quantum computing. (No previous knowledge of these fields will be assumed.) This is joint work with Ehud Hrushovski, Amaury Pouly, and James Worrell. Séminaire du Pôle ASV Verification Monday June 11, 2018, 11AM, Salle 3052 François Pottier (INRIA Paris) Verifying a Hash Table and Its Iterators in Higher-Order Separation Logic In this talk, I describe the specification and proof of an (imperative, sequential) hash table implementation. The usual dictionary operations (insertion, lookup, and so on) are supported, as well as iteration via folds and iterators. The code is written in OCaml and verified using higher-order separation logic, embedded in Coq, via the CFML tool and library. This case study is part of the ANR project Vocal, whose aim is to build a verified OCaml library of basic data structures. (This work was presented at CPP 2017.) Verification Friday June 1, 2018, 10:30AM, 3052 Derek Dreyer (Max Planck Institute for Software Systems (MPI-SWS)) RustBelt: Logical Foundations for the Future of Safe Systems Programming Rust is a new systems programming language, developed at Mozilla, that promises to overcome the seemingly fundamental tradeoff in language design between high-level safety guarantees and low-level control over resource management. Unfortunately, none of Rust's safety claims have been formally proven, and there is good reason to question whether they actually hold. Specifically, Rust employs a strong, ownership-based type system, but then extends the expressive power of this core type system through libraries that internally use unsafe features. In this talk, I will present RustBelt (http://plv.mpi-sws.org/rustbelt), the first formal (and machine-checked) safety proof for a language representing a realistic subset of Rust.
Our proof is extensible in the sense that, for each new Rust library that uses unsafe features, we can say what verification condition it must satisfy in order for it to be deemed a safe extension to the language. We have carried out this verification for some of the most important libraries that are used throughout the Rust ecosystem. After reviewing some essential features of the Rust language, I will describe the high-level structure of the RustBelt verification and then delve into detail about the secret weapon that makes RustBelt possible: the Iris framework for higher-order concurrent separation logic in Coq (http://iris-project.org). I will explain by example how Iris generalizes the expressive power of O'Hearn's original concurrent separation logic in ways that are essential for verifying the safety of Rust libraries. I will not assume any prior familiarity with concurrent separation logic or Rust. This is joint work with Ralf Jung, Jacques-Henri Jourdan, Robbert Krebbers, and the rest of the Iris team. Verification Monday May 28, 2018, 11AM, Salle 1007 Ana Sokolova (Universität Salzburg, Austria) Linearizability of Concurrent Data Structures via Order Extension Theorems The semantics of concurrent data structures is usually given by a sequential specification and a consistency condition. Linearizability is the most popular consistency condition due to its simplicity and general applicability. Verifying linearizability is a difficult, in general undecidable, problem. In this talk, I will discuss the semantics of concurrent data structures and present recent order extension results (joint work with Harald Woracek) that lead to characterizations of linearizability in terms of violations, a.k.a. aspects. The approach works for pools, queues, and priority queues; finding other applications is ongoing work. In the case of pools and queues we obtain already known characterizations, but the proof method is new, elegant, and simple, and we expect that it will lead to deeper understanding of linearizability. Verification Monday May 14, 2018, 11AM, Salle 1007 Raphaël Cauderlier (IRIF - Université Paris Diderot) A Verified Implementation of the Bounded List Container Standard libraries of programming languages provide efficient implementations for common data containers. The details of these implementations are abstracted away by generic interfaces which are specified in terms of well understood mathematical structures such as sets, multisets, sequences, and partial functions. The intensive use of container libraries makes their formal verification important. I will present a case study of full functional verification of the bounded doubly-linked list container from the standard library of Ada 2012. This library is specified in SPARK and several client programs depend on this specification. However, the low-level invariants required to verify this library cannot be expressed in SPARK. For this reason, the proof of functional correctness is done using VeriFast, a verification environment for Separation Logic extended with algebraic data types. The specifications proved entail the contracts of the Ada library and include new features. The verification method we used employs a precise algebraic model of the data structure and we show that it facilitates the verification and entirely captures the library contracts.
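To give a loose idea of the kind of structure and invariant involved (purely a sketch in a different language; the class, the model function and the invariant below are hypothetical and unrelated to the Ada/SPARK container or its VeriFast proof), a bounded doubly-linked list can be related to an abstract model, the sequence of its values, while its low-level link invariant is stated explicitly:

```python
# Hypothetical sketch: a bounded doubly-linked list, its abstract model (a
# sequence of values), and the low-level invariant tying the links to the model.
# Not the Ada 2012 container nor its verified specification.

class Node:
    def __init__(self, value):
        self.value = value
        self.prev = None
        self.next = None

class BoundedDList:
    def __init__(self, capacity):
        self.capacity = capacity
        self.head = None
        self.tail = None
        self.length = 0

    def model(self):
        """Abstract model: the sequence of stored values, front to back."""
        seq, node = [], self.head
        while node is not None:
            seq.append(node.value)
            node = node.next
        return seq

    def invariant(self):
        """Prev/next links are mutually consistent and the capacity bound holds."""
        seq, node, prev = [], self.head, None
        while node is not None:
            if node.prev is not prev:
                return False
            seq.append(node.value)
            prev, node = node, node.next
        return self.tail is prev and len(seq) == self.length <= self.capacity

    def append(self, value):
        assert self.length < self.capacity, "container is full"
        node = Node(value)
        if self.tail is None:
            self.head = self.tail = node
        else:
            node.prev = self.tail
            self.tail.next = node
            self.tail = node
        self.length += 1
```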
Verification Monday May 7, 2018, 11AM, Salle 1007 Adrien Pommellet (IRIF) Static Analysis of Multithreaded Recursive Programs Communicating via Rendez-vous We present in this paper a generic framework for the analysis of multi-threaded programs with recursive procedure calls, synchronisation by rendez-vous between parallel threads, and dynamic creation of new threads. To this end, we consider a model called Synchronized Dynamic Pushdown Networks (SDPNs) that can be seen as a network of pushdown processes executing synchronized transitions, spawning new pushdown processes, and performing internal pushdown actions. The reachability problem for this model is unfortunately undecidable. Therefore, we tackle this problem by introducing an abstraction framework based on Kleene algebras in order to compute an abstraction of the execution paths between two regular sets of configurations. We combine an automata-theoretic saturation procedure with constraint solving in a finite domain. We then apply this framework to a Counter-Example Guided Abstraction Refinement (CEGAR) scheme, using multiple abstractions of increasing complexity and precision. Verification Monday April 9, 2018, 11AM, Salle 1007 Ilya Sergey (University College London, UK) Programming and Proving with Distributed Protocols Distributed systems play a crucial role in modern infrastructure, but are notoriously difficult to implement correctly. This difficulty arises from two main challenges: (a) correctly implementing core system components (e.g., two-phase commit), so all their internal invariants hold, and (b) correctly composing standalone system components into functioning trustworthy applications (e.g., persistent storage built on top of a two-phase commit instance). Recent work has developed several approaches for addressing (a) by means of mechanically verifying implementations of core distributed components, but no methodology exists to address (b) by composing such verified components into larger verified applications. As a result, expensive verification efforts for key system components are not easily reusable, which hinders further verification efforts. In my talk, I will present Disel, the first framework for implementation and reusable compositional verification of distributed systems and their clients, all within the mechanized, foundational context of the Coq proof assistant. In Disel, users implement distributed systems using a domain specific language shallowly embedded in Coq and providing both high-level programming constructs and low-level communication primitives. Components of composite systems are specified in Disel as protocols, which capture system-specific logic and disentangle system definitions from implementation details. By virtue of Disel’s dependent type system, well-typed implementations always satisfy their protocols’ invariants and never go wrong, allowing users to verify system implementations interactively using Disel’s Hoare-style Separation Logic, which extends state-of-the-art techniques for concurrency verification to the distributed setting. KEYWORDS: Distributed Systems, Separation Logic, Dependent Types, Coq Verification Monday March 26, 2018, 11AM, Salle 1007 Ivan Gazeau (LORIA & INRIA Nancy - Grand Est) Automated Verification of Privacy-type Properties for Security Protocols The applied pi-calculus is a powerful framework to model protocols and to define security properties.
In this symbolic model, it is possible to automatically verify complex security properties such as strong secrecy, anonymity and unlinkability, which are based on equivalence of processes. In this talk, we will see an overview of a verification method used by a tool, Akiss. The tool is able to handle - a wide range of cryptographic primitives (in particular AKISS is the only tool able to verify equivalence properties for protocols that use xor); - protocols with else branches (the treatment of disequalities is often complicated). We will also provide some insights on how interleaving due to concurrency can be effectively handled. Verification Friday March 23, 2018, 2:30PM, Salle 3052 Javier Esparza (Technische Universität München) One Theorem to Rule Them All: A Unified Translation of LTL into omega-Automata We present a unified translation of LTL formulas into deterministic Rabin automata, limit-deterministic Büchi automata, and nondeterministic Büchi automata. The translations yield automata of asymptotically optimal size (double or single exponential, respectively). All three translations are derived from one single Master Theorem of purely logical nature. The Master Theorem decomposes the language of a formula into a positive boolean combination of languages that can be translated into omega-automata by elementary means. In particular, the breakpoint, Safra, and ranking constructions used in other translations are not needed. Joint work with Jan Kretinsky and Salomon Sickert. Séminaire du Pôle: Automata, Structures, et Vérification Verification Monday March 19, 2018, 11AM, Salle 1007 Rupak Majumdar (Max Planck Institute for Software Systems (MPI-SWS)) Effective Random Testing for Concurrent and Distributed Programs Random testing has proven to be an effective way to catch bugs in distributed systems. This is surprising, as the space of executions is enormous. We provide a theoretical justification of the effectiveness of random testing under various “small depth” hypotheses. First, we give a general construction, using the probabilistic method from combinatorics, showing that whenever a random test covers a fixed coverage goal with sufficiently high probability, a small randomly-chosen set of tests achieves full coverage with high probability. In particular, we show that our construction can give test sets exponentially smaller than systematic enumeration. Second, based on an empirical study of many bugs found by random testing in production distributed systems, we introduce notions of test coverage which capture the “small depth” hypothesis and are empirically effective in finding bugs. Finally, we show using combinatorial arguments that for these notions of test coverage we introduce, we can find a lower bound on the probability that a random test covers a given goal. Our general construction then explains why random testing tools achieve good coverage—and hence, find bugs—quickly. Verification Monday March 12, 2018, 11AM, Salle 1007 Thomas Colcombet (IRIF & CNRS) Automata and Program Analysis This will be a presentation of a collaboration with Laure Daviaud and Florian Zuleger. We show how recent results concerning quantitative forms of automata help provide a refined understanding of the properties of a system (for instance, a program). In particular, combining the size-change abstraction with results concerning the asymptotic behavior of tropical automata yields extremely fine complexity analysis of some pieces of code.
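For readers unfamiliar with the weighted automata mentioned just above, here is a rough sketch (illustrative only, with a made-up two-state automaton) of what a tropical, i.e. min-plus, automaton computes: the weight of a word is the minimum, over all runs, of the sum of the transition weights, which can be evaluated by matrix-vector products over the (min, +) semiring:

```python
# Rough illustration of a min-plus ("tropical") weighted automaton.
# The automaton and weights below are toy assumptions, not taken from the talk.

INF = float("inf")

def minplus_step(matrix, vec):
    """One (min, +) step: new[i] = min_j (vec[j] + matrix[j][i])."""
    n = len(vec)
    return [min(vec[j] + matrix[j][i] for j in range(n)) for i in range(n)]

def word_weight(word, matrices, initial, final):
    """Weight of `word`, given per-letter transition matrices and
    initial/final weight vectors (INF means 'impossible')."""
    vec = list(initial)
    for letter in word:
        vec = minplus_step(matrices[letter], vec)
    return min(v + f for v, f in zip(vec, final))

if __name__ == "__main__":
    # Toy two-state automaton over {a, b}: reading 'a' in state 0 costs 1,
    # 'b' moves from state 0 to state 1 for free, and 'a' in state 1 costs 0.
    M = {
        "a": [[1, INF], [INF, 0]],
        "b": [[INF, 0], [INF, INF]],
    }
    print(word_weight("aaba", M, initial=[0, INF], final=[0, 0]))  # -> 2
```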
Verification Monday February 26, 2018, 11AM, Salle 1007 Cătălin Hriţcu (INRIA Paris) When Good Components Go Bad: Formally Secure Compilation Despite Dynamic Compromise We propose a new formal criterion for secure compilation, giving strong end-to-end security guarantees for software components written in unsafe, low-level languages with C-style undefined behavior. Our criterion is the first to model _dynamic_ compromise in a system of mutually distrustful components running with least privilege. Each component is protected from all the others—in particular, from components that have encountered undefined behavior and become compromised. Each component receives secure compilation guarantees up to the point when it becomes compromised, after which an attacker can take complete control over the component and use any of its privileges to attack the remaining uncompromised components. More precisely, we ensure that dynamically compromised components cannot break the safety properties of the system at the target level any more than equally privileged components without undefined behavior already could in the source language. To illustrate this model, we build a secure compilation chain for an unsafe language with buffers, procedures, and components. We compile it to a simple RISC abstract machine with built-in compartmentalization and provide thorough proofs, many of them machine-checked in Coq, showing that the compiler satisfies our secure compilation criterion. Finally, we show that the protection guarantees offered by the compartmentalized abstract machine can be achieved at the machine-code level using either software fault isolation or a tag-based reference monitor. Verification Monday February 19, 2018, 11AM, Salle 3052 Adrien Boiret (University of Warsaw) The “Hilbert Method” for Solving Transducer Equivalence Problems. Classical results from algebra, such as Hilbert's Basis Theorem and Hilbert’s Nullstellensatz, have been used to decide equivalence for some classes of transducers, such as HDT0L (Honkala 2000) or more recently deterministic tree-to-string transducers (Seidl, Maneth, Kemper 2015). In this talk, we propose an abstraction of these methods. The goal is to investigate the scope of the “Hilbert method” for transducer equivalence that was described above. Consider an algebra A in the sense of universal algebra, i.e. a set equipped with some operations. A grammar over A is like a context-free grammar, except that it generates a subset of the algebra A, and the rules use operations from the algebra A. The classical context-free grammars are the special case when the algebra A is the free monoid with concatenation. Using the “Hilbert method” one can decide certain properties of grammars over algebras that are fields or rings of polynomials over a field. The “Hilbert method” extends to grammars over certain well-behaved algebras that can be “coded” into fields or rings of polynomials. Finally, for these well-behaved algebras, one can also use the “Hilbert method” to decide the equivalence problem for transducers (of a certain transducer model that uses registers) that input trees and output elements of the well-behaved algebra. In the talk, we give examples and non-examples of well-behaved algebras, and discuss the decidability / undecidability results connected to them. In particular: -We show that equivalence is decidable for transducers that input (possibly ordered) trees and output unranked unordered trees. 
-We show that equivalence is undecidable for transducers that input words and output polynomials over the rational numbers with one variable (but are allowed to use composition of polynomials). Joint work with Mikołaj Bojańczyk, Janusz Schmude, Radosław Piórkowski at Warsaw University. Joint Session w/ the ACS working group Verification Thursday February 8, 2018, 10:30AM, Salle 3052 Vincent Laporte (IMDEA Software) Provably secure compilation of side-channel countermeasures: the case of cryptographic “constant-time” Software-based countermeasures provide effective mitigation against side-channel attacks, often with minimal efficiency and deployment overheads. Their effectiveness is often amenable to rigorous analysis: specifically, several popular countermeasures can be formalized as information flow policies, and correct implementation of the countermeasures can be verified with state-of-the-art analysis and verification techniques. However, in the absence of further justification, the guarantees only hold for the language (source, target, or intermediate representation) on which the analysis is performed. We consider the problem of preserving side-channel countermeasures by compilation for cryptographic “constant-time”, a popular countermeasure against cache-based timing attacks. We present a general method, based on the notion of 2-simulation, for proving that a compilation pass preserves the constant-time countermeasure. Using the Coq proof assistant, we verify the correctness of our method and of several representative instantiations. (Joint session with the PPS seminar) Verification Monday January 29, 2018, 11AM, Salle 1007 Engel Lefaucheaux (ENS Cachan / IRISA Rennes) Probabilistic Disclosure: Maximisation vs. Minimisation We consider opacity questions where an observation function provides to an external attacker a view of the states along executions, and secret executions are those visiting some state from a fixed subset. Disclosure occurs when the observer can deduce from a finite observation that the execution is secret. In a probabilistic and nondeterministic setting, where an internal agent can choose between actions, there are two points of view, depending on the status of this agent: the successive choices can either help the attacker trying to disclose the secret, if the system has been corrupted, or they can prevent disclosure as much as possible if these choices are part of the system design. In the former situation, corresponding to a worst case, the disclosure value is the supremum, over the strategies, of the probability of disclosing the secret (maximisation), whereas in the latter case, the disclosure is the infimum (minimisation). We address quantitative problems (comparing the optimal value with a threshold) and qualitative ones (when the threshold is zero or one) related to both forms of disclosure for a fixed or finite horizon. For all problems, we characterise their decidability status and their complexity. We discover a surprising asymmetry: on the one hand, optimal strategies may be chosen among deterministic ones in maximisation problems, while this is not the case for minimisation. On the other hand, for the questions addressed here, more minimisation problems than maximisation ones are decidable. Verification Monday January 22, 2018, 11AM, Salle 1007 Josef Widder (TU Wien) Synthesis of Distributed Algorithms with Parameterized Threshold Guards Fault-tolerant distributed algorithms are notoriously hard to get right.
I present an automated method that helps in that process: the designer provides specifications (the problem to be solved) and a sketch of a distributed algorithm that keeps arithmetic details unspecified. Our tool then automatically fills in the missing parts. In particular, I will consider threshold-based distributed algorithms, where a process has to wait until the number of messages it receives reaches a certain threshold, in order to perform an action. Such algorithms are typically used for achieving fault tolerance in distributed algorithms for broadcast, atomic commitment, or consensus. Using this method, one can synthesize distributed algorithms that are correct for every number n of processes and every number t of faults, provided some resilience condition holds, e.g., n > 3t. This approach combines recent progress in parameterized model checking of distributed algorithms—which I also address—with counterexample-guided synthesis. This is joint work with Marijana Lazić, Igor Konnov, and Roderick Bloem. Verification Monday January 15, 2018, 11AM, Salle 1007 Chao Wang (IRIF) Checking Linearizability of Concurrent Priority Queues Efficient implementations of concurrent objects such as atomic collections are essential to modern computing. Unfortunately their correctness criteria — linearizability with respect to given ADT specifications — are hard to verify. Verifying linearizability is undecidable in general, even on classes of implementations where the usual control-state reachability is decidable. In this work we consider concurrent priority queues, which are fundamental to many multi-threaded applications like task scheduling or discrete event simulation, and show that verifying linearizability of such implementations is reducible to control-state reachability. This reduction entails the first decidability results for verifying concurrent priority queues with an unbounded number of threads, and it enables the application of existing safety-verification tools for establishing their correctness. Verification Monday January 8, 2018, 11AM, Salle 1007 Jean-Jacques Lévy (IRIF & INRIA) Proofs of graph algorithms with automation and their readability In this talk, we present a proof of correctness for Tarjan's [1972] algorithm for computing strongly connected components in a directed graph. It will be the starting point of a discussion about the readability of formal proofs. This proof was achieved with the Why3 system and the Coq/ssreflect proof assistant. The same proof was redone fully in Coq/ssreflect and Isabelle/HOL. This is joint work with Ran Chen and was presented at VSTTE 2017. The Coq and Isabelle proofs were achieved by Cyril Cohen, Laurent Théry and Stephan Merz. Year 2017 Verification Monday December 11, 2017, 11AM, Salle 1007 Irène Guessarian (IRIF - Université Paris Diderot) Congruence Preservation, Lattices and Recognizability Looking at some monoids and (semi)rings (natural numbers, integers and p-adic integers), and more generally, residually finite algebras (in a strong sense), we prove the equivalence of two ways for a function on such an algebra to behave like the operations of the algebra. The first way is to preserve congruences or stable preorders. The second way is to demand that preimages of recognizable sets belong to the lattice or the Boolean algebra generated by the preimages of recognizable sets by “derived unary operations” of the algebra (such as translations, quotients,…).
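As a toy illustration of the first notion (not taken from the talk; the function `preserves_mod` and the range of witnesses are ad-hoc choices), one can test on the natural numbers whether a given function preserves the congruence modulo a fixed m, i.e. whether x ≡ y (mod m) implies f(x) ≡ f(y) (mod m):

```python
# Toy sketch of congruence preservation on (N, +), checked on finitely many
# witnesses for one congruence "mod m". Illustrative assumption, not the
# general residually-finite-algebra setting of the talk.

def preserves_mod(f, m, witnesses=range(200)):
    """Check x ≡ y (mod m) => f(x) ≡ f(y) (mod m) on a finite set of witnesses."""
    for x in witnesses:
        for y in witnesses:
            if x % m == y % m and f(x) % m != f(y) % m:
                return False
    return True

if __name__ == "__main__":
    print(preserves_mod(lambda n: 3 * n + 5, 7))  # True: affine maps preserve mod 7
    print(preserves_mod(lambda n: n * n, 7))      # True: polynomial maps preserve mod 7
    print(preserves_mod(lambda n: n // 2, 7))     # False: integer halving does not
```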
Verification Thursday December 7, 2017, 4PM, Salle 1007 Luke Ong (University of Oxford) Constrained Horn clauses for automatic program verification: the higher-order case Following Björner, McMillan and Rybalchenko, we develop an approach to the automatic verification of higher-order functional programs based on constrained Horn clauses. We introduce constrained Horn clauses in higher-order logic, and decision problems concerning their satisfiability and the safety verification of higher-order programs. We show that, although satisfiable systems of higher-order clauses do not generally have least models, there is a notion of canonical model obtained via a reduction to a problem concerning a kind of monotone logic program. Following work in higher-order program verification, we develop a refinement type system in order to reason about and automate the search for models. This provides a sound but incomplete method for solving the decision problem. Finally, we show that an extension of the decision problem in which refinement types are used directly as guards on existential quantifiers can be reduced to the original problem. This result can be used to show that properties of higher-order functions that are definable using refinement types are also expressible using higher-order constrained Horn clauses. (Joint work with Toby Cathcart Burn and Steven Ramsay.) Verification Tuesday November 28, 2017, 1:30PM, Salle 3052 Léon Gondelman (Radboud University, The Netherlands) The Spirit of Ghost Code In the context of deductive program verification, ghost code is part of the program that is added for the purpose of specification. Ghost code must not interfere with regular code, in the sense that it can be erased without any observable difference in the program outcome. In particular, ghost data cannot participate in regular computations and ghost code cannot mutate regular data or diverge. The idea exists in the folklore since the early notion of auxiliary variables and is implemented in many state-of-the-art program verification tools. However, a rigorous definition and treatment of ghost code is surprisingly subtle and few formalizations exist. In this talk, we describe a simple ML-style programming language with mutable state and ghost code. Non-interference is ensured by a type system with effects, which allows, notably, the same data types and functions to be used in both regular and ghost code. We define the procedure of ghost code erasure and we prove its safety using bisimulation. A similar type system, with numerous extensions which we briefly discuss, is implemented in the program verification environment Why3. Joint session with the ACS Work group Verification Monday November 20, 2017, 11AM, Salle 1007 Laurent Fribourg (LSV, CNRS & ENS de Cachan) Euler’s Method Applied to the Control of Switched Systems Hybrid systems are a powerful formalism for modeling and reasoning about cyber-physical systems. They mix the continuous and discrete natures of the evolution of computerized systems. Switched systems are a special kind of hybrid systems, with restricted discrete behaviours: those systems only have finitely many different modes of (continuous) evolution, with isolated switches between modes. Such systems provide a good balance between expressiveness and controllability, and are thus in widespread use in large branches of industry such as power electronics and automotive control. The control law for a switched system defines the way of selecting the modes during the run of the system. 
Controllability is the problem of (automatically) synthesizing a control law in order to satisfy a desired property, such as safety (maintaining the variables within a given zone) or stabilisation (confinement of the variables in a close neighborhood around an objective point). In order to compute the control of a switched system, we need to compute the solutions of the differential equations governing the modes. Euler’s method is the most basic technique for approximating such solutions. We present here an estimation of the local error of Euler’s method, using the notion of “one-sided Lipschitz constant” for modes. This yields a general synthesis approach which can encompass uncertain parameters, local information and stochastic noise. We will sketch out how the approach relates to other symbolic methods based on interval arithmetic and Lyapunov functions. We will also present some application examples which illustrate its advantages and limitations. Verification Monday November 13, 2017, 11AM, Salle 1007 Viktor Vafeiadis (Max Planck Institute for Software Systems (MPI-SWS)) Effective Stateless Model Checking for C/C++ Concurrency I will present a stateless model checking algorithm for verifying concurrent programs running under RC11, a repaired version of the C/C++11 memory model without dependency cycles. Unlike previous approaches, which enumerate thread interleavings up to some partial order reduction improvements, our approach works directly on execution graphs and (in the absence of RMW instructions and SC atomics) avoids redundant exploration by construction. We have implemented a model checker, called RCMC, based on this approach and applied it to a number of challenging concurrent programs. Our experiments confirm that RCMC is significantly faster, scales better than other model checking tools, and is also more resilient to small changes in the benchmarks. (This is joint work with Michalis Kokologiannakis, Ori Lahav, and Konstantinos Sagonas, and will appear at POPL'18.) Verification Monday October 30, 2017, 11AM, Salle 1007 Suha Mutluergil (Koç University) Proving linearizability using forward simulations Linearizability is the standard correctness criterion for concurrent data structures such as stacks and queues. It allows one to establish observational refinement between a concurrent implementation and an atomic reference implementation. Proving linearizability requires identifying linearization points for each method invocation along all possible computations, leading to valid sequential executions, or alternatively, establishing forward and backward simulations. In both cases, carrying out proofs is hard and complex in general. In particular, backward reasoning is difficult in the context of programs with data structures, and strategies for statically identifying linearization points cannot be defined for all existing implementations. In this paper, we show that, contrary to common belief, many such complex implementations, including, e.g., the Herlihy & Wing Queue and the Time-Stamped Stack, can be proved correct using only forward simulation arguments. This leads to simple and natural correctness proofs for these implementations that are amenable to automation. Verification Monday October 23, 2017, 11AM, Salle 1007 Raphaël Cauderlier (IRIF - Université Paris Diderot) FocaLiZe and Dedukti to the Rescue for Proof Interoperability Numerous contributions have been made for some years to allow users to exchange formal proofs between different provers.
The main propositions consist in ad hoc pointwise translations, e.g. between HOL Light and Isabelle in the Flyspeck project or uses of more or less complete certificates. We propose a methodology to combine proofs coming from different theorem provers. This methodology relies on the Dedukti logical framework as a common formalism in which proofs can be translated and combined. To relate the independently developed mathematical libraries used in proof assistants, we rely on the structuring features offered by FoCaLiZe, in particular parameterized modules and inheritance to build a formal library of transfer theorems called MathTransfer. We finally illustrate this methodology on the Sieve of Eratosthenes, which we prove correct using HOL and Coq in combination. Verification Monday October 16, 2017, 11AM, Salle 1007 Noam Rineztky (Tel-Aviv University) Verifying Equivalence of Spark Programs Apache Spark is a popular framework for writing large scale data processing applications. Our long term goal is to develop automatic tools for reasoning about Spark programs. This is challenging because Spark programs combine database-like relational algebraic operations and aggregate operations, corresponding to (nested) loops, with User Defined Functions (UDFs). In this paper, we present a novel SMT-based technique for verifying the equivalence of Spark programs. We model Spark as a programming language whose semantics imitates Relational Algebra queries (with aggregations) over bags (multisets) and allows for UDFs expressible in Presburger Arithmetics. We prove that the problem of checking equivalence is undecidable even for programs which use a single aggregation operator. Thus, we present sound techniques for verifying the equivalence of interesting classes of Spark programs, and show that it is complete under certain restrictions. We implemented our technique, and applied it to a few small, but intricate, test cases. Joint work with: Shelly Grossman, Sara Cohen, Shachar Itzhaky, and Mooly Sagiv Presented in CAV’17 Paper: http://www.cs.tau.ac.il/~maon/pubs/2017-cav.pdf Verification Monday October 9, 2017, 11AM, Salle 1007 Javier Esparza (Technische Universität München) Polynomial Analysis Algorithms for Free-Choice Workflow Nets Workflow Petri nets are a successful formalism for modelling, simulating, and analyzing business processes. In this area, free-choice workflow nets play an important role: Core formalisms for business processes can be translated into free-choice nets, and many industrial models are free-choice. While the state space of a free-choice net can grow exponentially in its number of nodes, numerous control-flow properties can still be decided in polynomial time. However, these decision algorithms cannot be extended to workflow nets with data. We present a new analysis technique, based on reduction rules, that can be applied to workflow nets with data, costs, or probabilities. In particular, we give a polynomial algorithm to compute the expected cost of sound free-choice workflow nets. Verification Monday October 2, 2017, 11AM, Salle 1007 Ahmed Bouajjani (IRIF - Université Paris Diderot) Verifying Robustness of Event-Driven Asynchronous Programs Against Concurrency We define a correctness criterion, called robustness against concurrency, for a class of event-driven asynchronous programs that are at the basis of modern UI frameworks in Android, iOS, and Javascript. 
A program is robust when all possible behaviors admitted by the program under arbitrary procedure and event interleavings are admitted even if asynchronous procedures (respectively, events) are assumed to execute serially, one after the other, accessing shared memory in isolation. We characterize robustness as a conjunction of two correctness criteria: event-serializability (i.e., events can be seen as atomic) and event-determinism (executions within each event are insensitive to the interleavings between concurrent tasks dynamically spawned by the event). Then, we provide efficient algorithms for checking these two criteria based on polynomial reductions to reachability problems in sequential programs. This result is surprising because it makes it possible to avoid explicit handling of all concurrent executions in the analysis, which leads to an important gain in complexity. We demonstrate via case studies on Android apps that the typical mistakes programmers make are captured as robustness violations, and that violations can be detected efficiently using our approach. Joint work with Michael Emmi, Constantin Enea, Burcu Kulahcioglu Ozkan, and Serdar Tasiran. Verification Wednesday June 28, 2017, 2PM, Salle 3052 Giuliano Losa (UCLA) Paxos Made EPR — Decidable Reasoning about Distributed Consensus Distributed protocols such as Paxos play an important role in many computer systems. Therefore, a bug in a distributed protocol may have tremendous effects. Accordingly, a lot of effort has been invested in verifying such protocols. However, checking invariants of such protocols is undecidable and hard in practice, as it requires reasoning about an unbounded number of nodes and messages. Moreover, protocol actions and invariants involve higher-order concepts such as set cardinalities, arithmetic, and complex quantification. This paper makes a step towards automatic verification of such protocols. We aim at a technique that can verify correct protocols and identify bugs in incorrect protocols. To this end, we develop a methodology for deductive verification based on effectively propositional logic (EPR)—a decidable fragment of first-order logic (also known as the Bernays-Schönfinkel-Ramsey class). In addition to decidability, EPR also enjoys the finite model property, allowing violations to be displayed as finite structures which are intuitive for users. Our methodology involves modeling protocols using general (uninterpreted) first-order logic, and then systematically transforming the model to obtain a model and an inductive invariant that are decidable to check. The steps of the transformations are also mechanically checked, ensuring the soundness of the method. We have used our methodology to verify the safety of Paxos, and several of its variants, including Multi-Paxos, Vertical Paxos, Fast Paxos and Flexible Paxos. To the best of our knowledge, this work is the first to verify these protocols using a decidable logic, and the first formal verification of Vertical Paxos and Fast Paxos. This is joint work with O. Padon, M. Sagiv, and S. Shoham. Verification Thursday June 22, 2017, 11AM, Salle 3052 Ruzica Piskac (Yale University) New Applications of Software Synthesis: Verification of Configuration Files and Firewalls Repair In this talk we present a new approach to the verification of configuration files. Software failures resulting from configuration errors have become commonplace as modern software systems grow increasingly large and more complex.
The lack of language constructs in configuration files, such as types and grammars, has directed the focus of configuration file verification towards building post-failure error diagnosis tools. We describe a framework which analyzes data sets of correct configuration files and synthesizes rules for building a language model from the given data set. The resulting language model can be used to verify new configuration files and detect errors in them. We also outline a systematic effort that can automatically repair firewalls, using the programming-by-example approach. Firewalls are widely employed to manage and control enterprise networks. Because enterprise-scale firewalls contain hundreds or thousands of policies, ensuring the correctness of firewalls - whether the policies in the firewalls meet the specifications of their administrators - is an important but challenging problem. In our approach, after an administrator observes undesired behavior in a firewall, she may provide input/output examples that comply with the intended behavior. Based on the given examples, we automatically synthesize new firewall rules for the existing firewall. This new firewall correctly handles packets specified by the examples, while maintaining the rest of the behavior of the original firewall. Verification Monday June 12, 2017, 2PM, Salle 3052 Thomas Wies (New York University) A Simple Framework for Verifying Concurrent Search Structures We present an abstract framework for verifying concurrent algorithms on search data structures that support dictionaries. The framework can be specialized to every search data structure we know, from linked lists to complex B-trees. We obtain this framework by instantiating RGSep, which is a program logic that combines rely-guarantee reasoning and separation logic. RGSep is parameterized by a so-called resource algebra, which is typically the standard heap model of separation logic. Instead of using the standard heap model, we develop a new resource algebra that enables compositional reasoning about search graphs. Using this search graph model, we obtain simple correctness proofs of complex concurrent dictionary algorithms. The algorithms and proofs abstract from the specific search data structure that the algorithm operates on and can then be refined to a concrete data structure implementation. Verification Tuesday June 6, 2017, 11AM, Salle 3052 Sergio Rajsbaum (UNAM Mexico) Tasks, objects, and the notion of a distributed problem The universal computing model of Turing, which was central to the birth of modern computer science, also identified the essential notion of a problem, as an input-output function to be computed by a Turing machine. In distributed computing, tasks are the equivalent of a function: each process gets only part of the input, and computes part of the output after communicating with other processes. In distributed computing tasks have been studied from early on, in parallel, but independently of sequential objects. While tasks explicitly state what might happen when a set of processes run concurrently, sequential objects only specify what happens when processes run sequentially. Indeed, many distributed problems considered in the literature seem to have a natural specification neither as tasks nor as sequential objects. I will concentrate on our recent work on interval-linearizability, a notion we introduced to specify objects more general than the usual sequential objects.
I will describe the bridges we establish between these two classical paradigms, and our discussions about what is a distributed problem, and what it means to solve it. Verification Thursday June 1, 2017, 11AM, Salle 3052 Orna Grumberg (Technion, Israel) Sound and Complete Mutation-Based Program Repair This work presents a novel approach for automatically repairing an erroneous program with respect to a given set of assertions. Programs are repaired using a predefined set of mutations. We refer to a bounded notion of correctness, even though, for a large enough bound all returned programs are fully correct. To ensure no changes are made to the original program unless necessary, if a program can be repaired by applying a set of mutations $Mut$, then no superset of $Mut$ is later considered. Programs are checked in increasing number of mutations, and every minimal repaired program is returned as soon as found. We impose no assumptions on the number of erroneous locations in the program, yet we are able to guarantee soundness and completeness. That is, we assure that a program is returned iff it is minimal and bounded correct. Searching the space of mutated programs is reduced to searching unsatisfiable sets of constraints, which is performed efficiently using a sophisticated cooperation between SAT and SMT solvers. Similarities between mutated programs are exploited in a new way, by using both the SAT and the SMT solvers incrementally. We implemented a prototype of our algorithm, compared it with a state-of-the-art repair tool and got very encouraging results. This is a joint work with Bat-Chen Rothenberg. Verification Tuesday May 23, 2017, 3:30PM, Salle 3052 Florian Zuleger (TU Wien) Inductive Termination Proofs by Transition Predicate Abstraction and their relationship to the Size-Change Abstraction The last decade has seen renewed interest in automated techniques for proving the termination of programs. A popular termination criterion is based on the covering of the transitive hull of the transition relation of a program by a finite number of well-founded relations. In an automated analysis, this termination criterion is usually established by an inductive proof using transition predicate abstraction. Such termination proofs have the structure of a finite automaton. These automata, which we call transition automata, are the central object of study in this talk. Our main results are as follows: (1) A previous criterion for termination analysis with transition automata is not complete; we provide a complete criterion. (2) We show how to bound the height of the transition relation of the program using the termination proof by transition predicate abstraction. This result has applications in the automated complexity analysis of programs. (3) We show that every termination proof by transition predicate abstraction gives rise to a termination proof by the size-change abstraction; this connection is crucial to obtain results (1) and (2) from previous results on the size-change abstraction. Further, our result establishes that transition predicate abstraction and size-change abstraction have the same expressivity for automated termination proofs. Verification Monday May 22, 2017, 11AM, Salle 1007 Alain Finkel (LSV, ENS Cachan) The Erdös & Tarski theorem. A new class of WSTS without WQO. We present the ideal framework which was recently used to obtain new deep results on Petri nets and extensions. We will present the proof of the famous but unknown Erdös-Tarski theorem. 
We argue that the theory of ideals prompts a renewal of the theory of WSTS by providing a way to define a new class of monotonic systems, the so-called Well Behaved Transition Systems, which properly contains WSTS, and for which coverability is still decidable by a forward algorithm. Verification Monday March 27, 2017, 11AM, Salle 1007 Mohamed Faouzi Atig (Uppsala University) Lossy Channel Systems with Data. Lossy channel systems are a classical model with applications ranging from the modelling of communication protocols to programs running on weak memory models. All existing work assume that messages traveling inside the channels are picked from a finite alphabet. In this talk, we present two extensions of lossy channel systems. In the first part of the talk, we extend lossy channel systems by assuming that each message is equipped with a clock representing the age of the message, thus obtaining the model of Timed Lossy Channel Systems (TLCS). We show that the state reachability problem is decidable for TLCS. In the second part of the talk, we extend lossy channel systems by considering systems that operate on a finite set of variables ranging over an infinite data domain. Furthermore, each message inside a channel is equipped with a data item representing its value. Although we restrict the model by allowing the variables to be only tested for (dis-)equality, we show that the state reachability problem is undecidable. In light of this negative result, we consider bounded-phase reachability, where the processes are restricted to performing either send or receive operations during each phase. We show decidability of state reachability in this case by computing a symbolic encoding of the set of system configurations that are reachable from a given configuration. This talk is based on previous joint work with Parosh Aziz Abdula, Jonathan Cederberg and C. Aiswarya. Verification Monday March 20, 2017, 11AM, Salle 1007 Andreas Podelski (University of Freiburg) Proving Liveness of Parameterized Programs Correctness of multi-threaded programs typically requires that they satisfy liveness properties. For example, a program may require that no thread is starved of a shared resource, or that all threads eventually agree on a single value. This paper presents a method for proving that such liveness properties hold. Two particular challenges addressed in this work are that (1) the correctness argument may rely on global behaviour of the system (e.g., the correctness argument may require that all threads collectively progress towards “the good thing” rather than one thread progressing while the others do not interfere), and (2) such programs are often designed to be executed by any number of threads, and the desired liveness properties must hold regardless of the number of threads that are active in the program. This is joint work with Azadeh Farzan and Zachary Kincaid and published at LICS 2016 (http://www.cs.princeton.edu/~zkincaid/pub/lics16.pdf) Verification Monday March 13, 2017, 11AM, Salle 1007 Ori Lahav (MPI Kaiserslautern) A Promising Semantics for Relaxed-Memory Concurrency Despite many years of research, it has proven very difficult to develop a memory model for concurrent programming languages that adequately balances the conflicting desiderata of programmers, compilers, and hardware. 
In this paper, we propose the first relaxed memory model that (1) accounts for nearly all the features of the C++11 concurrency model, (2) provably validates a number of standard compiler optimizations, as well as a wide range of memory access reorderings that commodity hardware may perform, (3) avoids bad “out-of-thin-air” behaviors that break invariant-based reasoning, (4) supports “DRF” guarantees, ensuring that programmers who use sufficient synchronization need not understand the full complexities of relaxed-memory semantics, and (5) defines the semantics of racy programs without relying on undefined behaviors, which is a prerequisite for applicability to type-safe languages like Java. The key novel idea behind our model is the notion of promises: a thread may promise to execute a write in the future, thus enabling other threads to read from that write out of order. Crucially, to prevent out-of-thin-air behaviors, a promise step requires a thread-local certification that it will be possible to execute the promised write even in the absence of the promise. To establish confidence in our model, we have formalized most of our key results in Coq. Joint work with Jeehoon Kang, Chung-Kil Hur, Viktor Vafeiadis, and Derek Dreyer, recently presented at POPL'17 Verification Monday March 6, 2017, 11AM, Salle 1007 Germán Andrés Delbianco (IMDEA Madrid) Concurrent Data Structures Linked in Time Arguments about correctness of a concurrent data structure are typically carried out by using the notion of linearizability and specifying the linearization points of the data structure's procedures. Such arguments are often cumbersome as the linearization points' position in time can be dynamic (depending on the interference, run-time values and events from the past, or even the future), non-local (appearing in procedures other than the one considered), and determined in the execution trace only after the considered procedure has already terminated. In this talk I will present a new method, based on a separation-style logic, for reasoning about concurrent objects with such linearization points. We embrace the dynamic nature of linearization points, and encode it as part of the data structure's auxiliary state, so that it can be dynamically modified in place by auxiliary code, as needed when some appropriate run-time event occurs. We have named the idea linking-in-time, because it reduces temporal reasoning to spatial reasoning. For example, modifying a temporal position of a linearization point can be modeled similarly to a pointer update in separation logic. Furthermore, the auxiliary state provides a convenient way to concisely express the properties essential for reasoning about clients of such concurrent objects. To illustrate our approach, I will present its application to the verification (mechanized in Coq) of an intricate optimal snapshot algorithm, due to Jayanti. Verification Monday February 27, 2017, 11AM, Salle 1007 Oded Maler (CNRS and University of Grenoble-Alpes) Monitoring: Qualitative and Quantitative, Real and Virtual, Online and Offline In this talk I will present some of the research I have been involved in concerning the specification and monitoring of timed, continuous and hybrid behaviors using formalisms such as Signal Temporal Logic and Timed Regular Expressions.
I will discuss the similarities and differences between properties/assertions and other “measures” which are used in many application domains to evaluate behaviors, as well as the difference between monitoring real systems during their execution and monitoring simulated models during the system design phase. Verification Monday February 20, 2017, 11AM, Salle 1007 Loig Jezequel (L2SN - Nantes) Lazy Reachability Analysis in Distributed Systems We address the problem of reachability in distributed systems, modelled as networks of finite automata, and propose and prove a new algorithm to solve it efficiently in many cases. This algorithm makes it possible to decompose the reachability objective among the components, and proceeds by constructing partial products, lazily adding new components when required. It thus constructs more and more precise over-approximations of the complete product. This permits early termination in many cases, in particular when the objective is not reachable, which often is an unfavorable case in reachability analysis. We have implemented this algorithm in a first prototype and provide some very encouraging experimental results. Verification Monday January 23, 2017, 11AM, Salle 1007 Andrea Cerone (Imperial College London) Analysing Snapshot Isolation Snapshot isolation (SI) is a widely used consistency model for transaction processing, implemented by most major databases and some transactional memory systems. Unfortunately, its classical definition is given in a low-level operational way, by an idealised concurrency-control algorithm, and this complicates reasoning about the behaviour of applications running under SI. We give an alternative specification to SI that characterises it in terms of the transactional dependency graphs of Adya et al., generalising serialization graphs. Unlike previous work, our characterisation does not require adding additional information to dependency graphs about start and commit points of transactions. We then exploit our specification to obtain two kinds of static analyses. The first one checks when a set of transactions running under SI can be chopped into smaller pieces without introducing new behaviours, to improve performance. The other analysis checks whether a set of transactions running under a weakening of SI behaves the same as when running under SI. Year 2016 Verification Monday December 12, 2016, 11AM, Salle 1007 Bin Fang (IRIF (Paris), ECNU (China)) Hierarchical Shape Abstraction for Analysis of Free-List Memory Allocators (a logic-based approach) We propose a hierarchical abstract domain for the analysis of free-list memory allocators that tracks shape and numerical properties about both the heap and the free lists. Our domain is based on Separation Logic extended with predicates that capture the pointer arithmetic constraints for the heap list and the shape of the free list. These predicates are combined using a hierarchical composition operator to specify the overlapping of the heap list by the free list. In addition to expressiveness, this operator leads to a compositional and compact representation of abstract values and simplifies the implementation of the abstract domain. The shape constraints are combined with numerical constraints over integer arrays to track properties about the allocation policies (best-fit, first-fit, etc.). Such properties are out of the scope of existing analyzers. We implemented this domain and we show its effectiveness on several implementations of free-list allocators.
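To make the heap-list/free-list overlap described above more concrete, here is a small Python sketch (a toy model of our own, not the abstract domain from the talk; all names, addresses and sizes are illustrative): the heap is a list of contiguous chunks, the free list singles out the free chunks, and the two checks mirror the pointer-arithmetic and overlap constraints that such a domain tracks symbolically.

# toy heap: each chunk is (start_address, size, is_free), laid out by address
heap_list = [(0, 16, False), (16, 8, True), (24, 32, False), (56, 8, True)]

# toy free list: start addresses of free chunks, in allocation order
free_list = [56, 16]

def heap_list_well_formed(chunks, heap_end):
    # pointer-arithmetic style constraint: chunks tile the heap contiguously
    addr = 0
    for start, size, _ in chunks:
        if start != addr or size <= 0:
            return False
        addr = start + size
    return addr == heap_end

def free_list_overlaps_heap_list(chunks, free):
    # overlap constraint: free-list cells are exactly the free chunks of the heap list
    free_chunks = {start for start, _, is_free in chunks if is_free}
    return set(free) == free_chunks

print(heap_list_well_formed(heap_list, heap_end=64))       # True
print(free_list_overlaps_heap_list(heap_list, free_list))  # True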
Verification Friday December 9, 2016, 11AM, Salle 3052 Alastair Donaldson (Imperial College London) Exposing Errors Related to Weak Memory in GPU Applications In this presentation, I will describe a project led by my PhD student Tyler Sorensen, on the systematic design of a testing environment that uses stressing and fuzzing to reveal errors in GPU applications that arise due to weak memory effects. This approach is evaluated across several CUDA applications that use fine-grained concurrency, on seven GPUs spanning three Nvidia architectures. The results show that applications that rarely, or never, exhibit errors related to weak memory when executed natively can readily exhibit these errors when executed in the testing environment. The testing environment also provides a means to identify the root causes of erroneous weak effects, and automatically suggests how to insert fences that experimentally eliminate these errors. This empirical fence insertion method carries significantly lower overhead, in terms of execution time and energy consumption, than a more conservative, guaranteed-sound approach. Verification Monday December 5, 2016, 11AM, Salle 1007 Guilhem Jaber (IRIF) SyTeCi: Symbolic, Temporal and Circular reasoning for automatic proofs of contextual equivalence Operational Techniques (Kripke Logical Relations and Environmental/Open/Parametric Bisimulations) have matured enough to become formidable tools for proving contextual equivalence of effectful higher-order programs. However, it is not yet possible to automate such proofs – the problem being of course in general undecidable. In this talk, we will see how to take the best of these techniques to design an automatic procedure which is able to handle many non-trivial examples of equivalence, including most of the examples from the literature. The talk will describe the main ingredients of this method: - Symbolic reduction to evaluate programs, - Transition systems of heap invariants, as used with Kripke Logical Relations, - Temporal logic to abstract over the control flow between the program and its environment, - Circular proofs to automate the reasoning over recursive functions. Using them, we will see how we can reduce contextual equivalence to the problem of non-reachability in some automatically generated transition systems of invariants. Verification Monday November 28, 2016, 11AM, Salle 1007 Georg Zetzsche (LSV, ENS Cachan) First-order logic with reachability for infinite-state systems First-order logic with the reachability predicate (FOR) is an important means of specification in system analysis. Its decidability status is known for some individual types of infinite-state systems such as pushdown systems (decidable) and vector addition systems (undecidable). This work aims at a general understanding of which types of systems admit decidability. As a unifying model, we employ valence systems over graph monoids, which feature a finite-state control and are parameterized by a monoid to represent their storage mechanism. As special cases, this includes pushdown systems, various types of counter systems (such as vector addition systems) and combinations thereof. Our main result is a characterization of those graph monoids where FOR is decidable for the resulting transition systems.
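As a generic illustration of the kind of property expressible in FOR (our own example, not one from the paper): over the configuration graph of a pushdown system, writing reach(x, y) for the reachability predicate and c0 for the initial configuration, the sentence ∀x. (reach(c0, x) → ∃y. (reach(x, y) ∧ Final(y))) states that every reachable configuration can still reach a final one; decidability of FOR for a class of systems means that such sentences can be checked automatically over that class.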
Verification Thursday November 24, 2016, 3PM, Salle 3052 Gennaro Parlato (University of Southampton) A Pragmatic Bug-finding Approach for Concurrent Programs Concurrency poses a major challenge for program verification, but it can also offer an opportunity to scale when subproblems can be analysed in parallel. We propose a parameterizable code-to-code translation to generate a set of simpler program variants such that each interleaving of the program is captured by at least one of them. These variants can then be checked autonomously in parallel. Our approach is independent of the tool that is chosen for the final analysis, it is compatible with weak memory models, and it amplifies the effectiveness of existing tools, making them find bugs faster and with fewer resources. We do our experiments using Lazy-CSeq as an off-the-shelf final verifier and demonstrate that our approach is able to find bugs in the hardest known concurrency benchmarks in a matter of minutes where other dynamic and static tools fail to conclude. Verification Monday November 14, 2016, 11AM, Salle 1007 Philipp Rümmer (University of Uppsala) Liveness of Randomised Parameterised Systems under Arbitrary Schedulers We consider the problem of verifying liveness for systems with a finite, but unbounded, number of processes, commonly known as parameterised systems. Typical examples of such systems include distributed protocols (e.g. for the dining philosophers problem). Unlike the case of verifying safety, proving liveness is still considered extremely challenging, especially in the presence of randomness in the system. In this paper we consider liveness under arbitrary (including unfair) schedulers, which is often considered a desirable property in the literature of self-stabilising systems. In this paper we introduce an automatic method of proving liveness for randomised parameterised systems under arbitrary schedulers. Viewing liveness as a two-player reachability game (between Scheduler and Process), our method is a CEGAR approach that synthesises a progress relation for Process that can be symbolically represented as a finite-state automaton. The method incrementally constructs a progress relation, exploiting both Angluin's L* algorithm and SAT solvers. Our experiments show that our algorithm is able to prove liveness automatically for well-known randomised distributed protocols, including the Lehmann-Rabin Randomised Dining Philosophers Protocol and randomised self-stabilising protocols (such as the Israeli-Jalfon Protocol). To the best of our knowledge, this is the first fully-automatic method that can prove liveness for randomised protocols. Joint work with Anthony W. Lin. Verification Monday October 24, 2016, 11AM, Salle 1007 Sylvain Schmitz (LSV - ENS Cachan) Ideal Decompositions for Vector Addition Systems Vector addition systems, or equivalently Petri nets, are one of the most popular formal models for the representation and the analysis of parallel processes. Many problems for vector addition systems are known to be decidable thanks to the theory of well-structured transition systems. Indeed, vector addition systems with configurations equipped with the classical point-wise ordering are well-structured transition systems. Based on this observation, problems like coverability or termination can be proven decidable. However, the theory of well-structured transition systems does not explain the decidability of the reachability problem.
In this presentation, we show that runs of vector addition systems can also be equipped with a well quasi-order. This observation provides a unified understanding of the data structures involved in solving many problems for vector addition systems, including the central reachability problem. Joint work with Jérôme Leroux. Verification Monday October 10, 2016, 11AM, Salle 1007 Steven de Oliveira (CEA) Polynomial invariants by linear algebra One of the main issues in formal verification is the analysis of loops, considered a major research problem since the 70s. Program verification based on Floyd-Hoare inductive assertions and CEGAR-like techniques for model-checking uses loop invariants in order to reduce the problem to an acyclic graph analysis instead of unrolling or accelerating loops. I will present in this talk a new technique for generating polynomial invariants, divided into two independent parts: a procedure that reduces a class of loops containing polynomial assignments to linear loops, and a procedure for generating inductive invariants for linear loops. Both of these techniques have polynomial complexity for a bounded number of variables, and we guarantee the completeness of the technique for a bounded degree. We successfully implemented it for the verification of C programs as a Frama-C plug-in, PILAT. Verification Monday October 3, 2016, 11AM, Salle 1007 Giovanni Bernardi (IRIF) Robustness against Consistency Models with Atomic Visibility To achieve scalability, modern Internet services often rely on distributed databases with consistency models for transactions weaker than serializability. At present, application programmers often lack techniques to ensure that the weakness of these consistency models does not violate application correctness. In this talk I will present criteria to check whether applications that rely on a database providing only weak consistency are robust, i.e., behave as if they used a database providing serializability, and I will focus on a consistency model called Parallel Snapshot Isolation. The results I will outline handle systematically and uniformly several recently proposed weak consistency models, as well as a mechanism for strengthening consistency in parts of an application. Verification Monday September 26, 2016, 11AM, Salle 1007 Arnaud Sangnier (IRIF) Fixpoints in VASS: Results and Applications VASS (Vector Addition Systems with States), which are equivalent to Petri nets, are automata equipped with natural variables that can be decremented or incremented. VASS are a powerful model which has been intensively studied in the last decades. Many verification techniques have been developed to analyse them, and the frontier between the decidable and undecidable problems related to VASS is beginning to be well understood. One interesting point is that the model-checking of linear temporal logics (like LTL or the linear mu-calculus) is decidable for this model, but this is no longer the case when considering branching-time temporal logics. However, some restrictions can be imposed on the logics and on the studied system in order to regain decidability. In this talk, we will present these results concerning the model-checking of VASS and the techniques leading to the decidability results. We will then show how these techniques and results can be used to analyse some extensions of VASS with probabilities, namely probabilistic VASS and VASS Markov Decision Processes.
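For readers less familiar with the model, the following small Python sketch spells out what a VASS is (a toy of our own, not any of the decision procedures discussed in the talk; the states, transition vectors and bound are made up): a finite control whose transitions add integer vectors to counters that must stay non-negative, explored here by a naive bounded search.

from collections import deque

# toy VASS with two counters: (source_state, vector_added_to_counters, target_state)
transitions = [
    ("p", (1, 0), "p"),   # in state p: increment the first counter
    ("p", (-1, 1), "q"),  # move to q, trading one unit of counter 1 for counter 2
    ("q", (0, -1), "q"),  # in state q: decrement the second counter
]

def successors(config):
    state, counters = config
    for src, vec, tgt in transitions:
        if src == state:
            new = tuple(c + d for c, d in zip(counters, vec))
            if all(c >= 0 for c in new):  # counters are natural numbers
                yield (tgt, new)

def reachable_within(init, bound):
    # naive bounded breadth-first exploration; NOT a decision procedure
    seen, todo = {init}, deque([init])
    while todo:
        config = todo.popleft()
        for succ in successors(config):
            if succ not in seen and sum(succ[1]) <= bound:
                seen.add(succ)
                todo.append(succ)
    return seen

print(("q", (0, 1)) in reachable_within(("p", (0, 0)), bound=4))  # True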
Verification Monday September 19, 2016, 11AM, Salle 1007 Francesco Belardinelli (Université d'Evry) Abstraction-based Verification of Infinite-state Data-aware Systems Data-aware Systems (DaS) are a novel paradigm to model business processes in Service-oriented Computing. DaS are best described in terms of interacting modules consisting of a data model and a lifecycle, which account respectively for the relational structure of data and their evolution over time. However, by considering data and processes as equally relevant tenets of DaS, the typical questions concerning their verification are hardly amenable to current methodologies. For instance, the presence of data means that the number of states in DaS is infinite in general. In this talk we present recent advances in the verification of DaS. We introduce agent-based abstraction techniques to model check DaS against specifications expressed in temporal and strategy logics. Further, we illustrate how DaS can be useful to model game-theoretic scenarios as well. Specifically, we provide an analysis of English (ascending bid) auctions through DaS. Verification Monday May 30, 2016, 11AM, Salle 1007 Thibaut Balabonski (LRI) Low-level C code optimisations: are they valid? Substantial research efforts have been devoted to tools for reasoning about - and proving properties of - programs. A related concern is making sure that compilers preserve the soundness of programs, that is making sure that the compiled code respects the behavior of the source program (see for instance the CompCert C compiler). This talk is about an attempt at proving the soundness of some basic low-level transformations for concurrent C programs. We will see some elements of the official semantics of concurrency in C, and I will relate how the focus of this work shifted from proving the soundness of program transformations to patching the official semantics. Verification Monday March 14, 2016, 11AM, Salle 1007 Paul Gastin (LSV) Formal methods for the verification of distributed algorithms We introduce an automata-theoretic method for the verification of distributed algorithms running on ring networks. In a distributed algorithm, an arbitrary number of processes cooperate to achieve a common goal (e.g., elect a leader). Processes have unique identifiers (pids) from an infinite, totally ordered domain. An algorithm proceeds in synchronous rounds, each round allowing a process to perform a bounded sequence of actions such as sending or receiving a pid, storing it in some register, and comparing register contents wrt. the associated total order. An algorithm is supposed to be correct independently of the number of processes. To specify correctness properties, we introduce a logic that can reason about processes and pids. Referring to leader election, it may say that, at the end of an execution, each process stores the maximum pid in some dedicated register. Since the verification of distributed algorithms is undecidable, we propose an underapproximation technique, which bounds the number of rounds. This is an appealing approach, as the number of rounds needed by a distributed algorithm to conclude is often exponentially smaller than the number of processes. We provide an automata-theoretic solution, reducing model checking to emptiness for alternating two-way automata on words. Overall, we show that round-bounded verification of distributed algorithms over rings is PSPACE-complete. Based on a joint work with C.
Aiswarya and Benedikt Bollig, extended abstract at CONCUR’15: http://www.lsv.ens-cachan.fr/~gastin/mes-publis.php?onlykey=ABG-concur15. Verification Monday March 7, 2016, 11AM, Salle 1007 Julien Signoles (CEA-LIST) Frama-C, a collaborative and extensible framework for C code analysis Frama-C is a source code analysis platform that aims at conducting verification of industrial-size C programs. It provides its users with a collection of plug-ins that perform static analysis, deductive verification, testing and monitoring, for safety- and security-critical software. Collaborative verification across cooperating plug-ins is enabled by their integration on top of a shared kernel and datastructures, and their compliance to a common specification language. This talk presents a consolidated view of the platform, its main and composite analyses, and some of its industrial achievements. It focuses on its specification language ACSL, and on different ways to verify ACSL specifications through static and dynamic analyses. Verification Monday February 29, 2016, 11AM, Salle 1007 Pierre Fraigniaud (IRIF) Fault-Tolerant Decentralized Runtime Monitoring Runtime verification is aiming at extracting information from a running system, and using it to detect and possibly react to behaviors violating a given correctness property. Decentralized runtime verification involves a set of monitors observing the behavior of the underlying system. In this talk, the investigation of decentralized runtime verification in which not only the elements of the observed system, but also the monitors themselves are subject to failures will be presented. In this context, it is unavoidable that the unreliable monitors may have different views of the underlying system, and therefore may have different valuations of the correctness property. We characterize the number of valuations required for monitoring a given correctness property in a decentralized manner. Our lower bound is independent of the logic used for specifying the correctness property, as well as of the way the set of valuations returned by the monitors is globally interpreted. Moreover, our lower bound is tight in the sense that we design a distributed protocol enabling any given set of monitors to verify any given correctness property using as many different valuations as the one given by this lower bound. Joint work with: Sergio Rajsbaum (UNAM, Mexico) and Corentin Travers (LaBRI, Bordeaux).
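As a rough sketch of the decentralized-monitoring setting just described (a toy of our own, not the protocol or the lower-bound construction from this work; the monitored property, the views and the verdict names are hypothetical): each monitor sees only a partial view of the system, returns one of a small set of valuations rather than a plain true/false verdict, and a global interpretation combines whatever valuations the possibly unreliable monitors manage to return.

# each monitor's partial view of the system (hypothetical observations)
local_views = [
    {"alarm": False},  # monitor 0 observed no alarm
    {"alarm": True},   # monitor 1 observed an alarm
    {},                # monitor 2 (perhaps faulty) observed nothing relevant
]

def local_verdict(view):
    # three valuations instead of two: seen to hold, seen not to hold, unknown
    if view.get("alarm") is True:
        return "yes"
    if view.get("alarm") is False:
        return "maybe-no"
    return "unknown"

def global_interpretation(verdicts):
    # the property "some process raised an alarm" is accepted as soon as
    # one monitor reports a positive witness
    return "yes" in verdicts

verdicts = [local_verdict(v) for v in local_views]
print(verdicts, global_interpretation(verdicts))  # ['maybe-no', 'yes', 'unknown'] True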