~~NOCACHE~~
/* DO NOT EDIT THIS FILE */
[[seminaires:acs:index|Analyse et conception de systèmes]]\\
Wednesday, March 14, 2018, 2 p.m., Room UFR\\
**Thomas Letan** (Agence Nationale de la Sécurité des Systèmes d’Information & INRIA Rennes) //FreeSpec : Modular Verification of Systems using Effects and Effect Handlers in Coq//
\\
Modern computing systems have grown in complexity, and the attack surface has increased accordingly.
Even though system components are generally carefully designed, and even
verified, by different groups of people, the //composition// of these
components often receives less attention.
This paves the way for “architectural attacks”, a class of security
vulnerabilities in which the attacker can threaten the security of the
system even while each of its components continues to behave as expected.
In this talk, we introduce FreeSpec, a Coq framework built upon
the key idea that components can be modelled as programs with
algebraic effects to be realized by other components.
FreeSpec allows for the modular modelling of a complex system, by
defining idealized components connected together, and the modular
verification of the properties of their composition.
In doing so, we propose a novel approach for the Coq proof assistant
to reason about programs with effects in a modular way.
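The core idea of components as programs with effects, realized by other components acting as handlers, can be sketched outside Coq. The following Python toy (all names are ours, not FreeSpec's) models a program as a chain of abstract effect operations with continuations, interpreted by an idealized memory component:

```python
# A loose free-monad-style sketch of the FreeSpec idea (assumed names;
# the real framework is written in Coq): a component is a program built
# from abstract effect operations, and another component "realizes"
# those effects by handling them.

from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Pure:
    value: Any                   # a finished computation

@dataclass
class Op:
    name: str                    # which effect is requested
    arg: Any                     # its argument
    k: Callable[[Any], Any]      # continuation: result -> rest of program

# A program with effects: read two memory cells and return their sum.
def adder():
    return Op("read", 0, lambda x:
           Op("read", 1, lambda y:
           Pure(x + y)))

# An idealized memory component realizing the "read" effect.
def run(prog, memory):
    while isinstance(prog, Op):
        if prog.name == "read":
            prog = prog.k(memory[prog.arg])
        else:
            raise ValueError("unhandled effect: " + prog.name)
    return prog.value

print(run(adder(), {0: 2, 1: 40}))  # -> 42
```

Because `adder` never touches memory directly, a different handler (e.g. a faulty or instrumented memory) can be substituted without changing the program, which is the modularity the abstract describes.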
[[seminaires:acs:index|Analyse et conception de systèmes]]\\
Monday, February 19, 2018, 11 a.m., Room 3052\\
**Adrien Boiret** (University of Warsaw) //The "Hilbert Method" for Solving Transducer Equivalence Problems//
\\
Classical results from algebra, such as Hilbert's Basis Theorem and Hilbert's Nullstellensatz, have been used to decide equivalence for some classes of transducers, such as HDT0L (Honkala 2000) or, more recently, deterministic tree-to-string transducers (Seidl, Maneth,
Kemper 2015). In this talk, we propose an abstraction of these methods, with the goal of investigating the scope of this “Hilbert method” for transducer equivalence.
Consider an algebra A in the sense of universal algebra, i.e. a set equipped with some operations. A grammar over A is like a context-free grammar, except that it generates a subset of the algebra A, and the rules use operations from the algebra A. The classical context-free
grammars are the special case when the algebra A is the free monoid with concatenation. Using the “Hilbert method” one can decide certain properties of grammars over algebras that are fields or rings of polynomials over a field. The “Hilbert method” extends to grammars
over certain well-behaved algebras that can be “coded” into fields or rings of polynomials. Finally, for these well-behaved algebras, one can also use the “Hilbert method” to decide the equivalence problem for transducers (of a certain transducer model that uses registers)
that input trees and output elements of the well-behaved algebra.
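The notion of a grammar over an algebra can be made concrete with a toy example (the grammar and names below are ours, not from the talk). Take the algebra (Z, +, ×) and the grammar S → 2 | S + S | S × S: instead of generating strings, it generates a subset of the integers, computed here by iterating the rules to a partial fixpoint:

```python
# Illustrative sketch: a "grammar over an algebra" generates a subset of
# the algebra's carrier, with rules that apply the algebra's operations.
# Here the algebra is (Z, +, *) and the grammar is
#   S -> 2 | S + S | S * S
# The classical CFG case would instead use the free monoid (strings with
# concatenation).

def generate(rounds):
    s = {2}                          # base rule: S -> 2
    for _ in range(rounds):          # apply the rules repeatedly
        s = s | {a + b for a in s for b in s}   # rule S -> S + S
        s = s | {a * b for a in s for b in s}   # rule S -> S * S
    return s

print(sorted(generate(1)))  # -> [2, 4, 8, 16]
```

The “Hilbert method” applies when such generated sets live in (or can be coded into) fields or polynomial rings, where ideal-theoretic arguments make properties of the grammar decidable.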
In the talk, we give examples and non-examples of well-behaved algebras, and discuss the decidability / undecidability results connected to them. In particular:
- We show that equivalence is decidable for transducers that input (possibly ordered) trees and output unranked unordered trees.
- We show that equivalence is undecidable for transducers that input words and output polynomials over the rational numbers in one variable (but are allowed to use composition of polynomials).
Joint work with Mikołaj Bojańczyk, Janusz Schmude, and Radosław Piórkowski at the University of Warsaw.
Joint session with the Verification seminar
[[seminaires:acs:index|Analyse et conception de systèmes]]\\
Wednesday, February 7, 2018, 2 p.m., Room 3052\\
**Hongseok Yang** (University of Oxford) //Informal introduction to integral probability metrics and generative adversarial networks//
\\
This is an informal whiteboard/blackboard presentation on generative adversarial networks by a non-expert. I will explain integral probability metrics and their use in training deep generative models, widely known as GANs (generative adversarial networks). My plan is to focus on two well-known metrics for probability distributions, the Wasserstein distance and the Maximum Mean Discrepancy. I will explain how their dual characterisations have been used in the adversarial training of deep generative models, and I will make superficial/speculative comparisons between these metrics. My talk will be based on the following papers and slides:
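Maximum Mean Discrepancy admits a simple plug-in estimator. The following sketch (our own minimal illustration, with an arbitrary kernel bandwidth) computes the biased MMD² estimate between two samples using a Gaussian kernel:

```python
# Minimal sketch of Maximum Mean Discrepancy between two 1-D samples,
# with a Gaussian (RBF) kernel; the bandwidth choice is ours.
import math

def rbf(x, y, sigma=1.0):
    return math.exp(-(x - y) ** 2 / (2 * sigma ** 2))

def mmd2(xs, ys, k=rbf):
    """Biased estimator: E[k(x,x')] - 2 E[k(x,y)] + E[k(y,y')]."""
    exx = sum(k(a, b) for a in xs for b in xs) / len(xs) ** 2
    eyy = sum(k(a, b) for a in ys for b in ys) / len(ys) ** 2
    exy = sum(k(a, b) for a in xs for b in ys) / (len(xs) * len(ys))
    return exx - 2 * exy + eyy

same = mmd2([0.0, 0.1, -0.1], [0.0, 0.1, -0.1])  # identical samples -> 0.0
far  = mmd2([0.0, 0.1, -0.1], [5.0, 5.1, 4.9])   # distant samples -> large
print(same, far)
```

Adversarial training à la MMD-GAN replaces the fixed kernel with one optimized to maximize this discrepancy, which is the role the discriminator plays in the dual formulation.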
References:
Wasserstein GAN
https://arxiv.org/abs/1701.07875
Generative Models and Model Criticism via Optimized Maximum Mean Discrepancy
https://arxiv.org/abs/1611.04488
Maximum Mean Discrepancy
http://alex.smola.org/teaching/iconip2006/iconip_3.pdf