Semantics
Wednesday December 21, 2016, 11AM, Salle 3052
Victoria Lebed (Trinity College, Dublin) What do braids know about Young tableaux?
Young tableaux carry an associative multiplication, described by
Schensted's algorithm. The resulting monoid is called plactic. It plays
a fundamental role in numerous combinatorial and algebraic applications.
We will show that this multiplication on tableaux is entirely determined
by a braiding on the (much simpler!) set of columns. Here a braiding is
a set-theoretic solution of the Yang-Baxter equation. Our braiding also
fits into the framework of rewriting. As an application, we will identify
the Hochschild cohomology of the plactic monoid, which resists all
classical approaches, with the braided cohomology of column tableaux,
which is much more accessible. All notions and constructions will be
recalled.
Semantics
Tuesday December 20, 2016, 11AM, Salle 3052
Clovis Eberhart (LAMA, Chambéry) Justified sequences and string diagrams: two approaches to concurrent game semantics
Several models of concurrent game semantics have emerged in recent
years. In this work, we focus on some of these approaches, which have
yielded models of CCS, of the pi-calculus, and of the non-deterministic
lambda-calculus. An informal but salient common point between these
approaches is that innocent strategies are defined as sheaves for the
Grothendieck topology in which plays are covered by their views. Here we
propose to draw a formal link between two of these approaches, at the
level of both plays and strategies.
Semantics
Tuesday December 6, 2016, 11AM, Salle 3052
Yann Régis-Gianas (IRIF) Explaining Program Differences Using Oracles
Program differences are pervasive in software development, but most of the
time they are represented as textual differences, with no regard for the
syntactic nature of programming languages or their semantics. Thus, they may
be hard to read and often fail to convey insight into the semantic changes.
Furthermore, such low-level, unstructured presentations are hard to
reason about.
We present a formal framework to characterize differences between
deterministic programs in terms of their small-step semantics. This
language-agnostic framework defines the notion of /difference languages/ in
which /oracles/—programs that relate small-step executions of pairs of programs
written in the same language—can be written, reasoned about, and composed.
We illustrate this framework by instantiating it on a toy imperative language
and by presenting several /difference languages/ ranging from trivial
equivalence-preserving syntactic transformations to characterized semantic
differences. Through these examples, we will present the basis of our
framework, show how to use it to relate syntactic changes to their effect on
semantics and how one can abstract away from the small-step semantics
presentation, and discuss the composability of oracles.
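As a toy illustration of the idea (our own sketch, not the paper's framework or syntax): two versions of a straight-line program that differ only by a renamed variable, together with an "oracle" relating their small-step execution traces state by state.

    # Toy sketch: programs as lists of assignments, states as (pc, env) pairs.
    prog_v1 = [("x", "0"), ("x", "x + 1"), ("y", "x * 2")]
    prog_v2 = [("tmp", "0"), ("tmp", "tmp + 1"), ("y", "tmp * 2")]

    def steps(prog):
        """Small-step execution trace: the successive (pc, env) states."""
        env, trace = {}, [(0, {})]
        for pc, (var, expr) in enumerate(prog):
            env = dict(env, **{var: eval(expr, {}, env)})
            trace.append((pc + 1, env))
        return trace

    def oracle(state_v1):
        """Relate a v1 state to the corresponding v2 state (rename x -> tmp)."""
        pc, env = state_v1
        return (pc, {("tmp" if k == "x" else k): v for k, v in env.items()})

    # The oracle maps the v1 trace pointwise onto the v2 trace.
    assert [oracle(s) for s in steps(prog_v1)] == steps(prog_v2)

A real difference language would express such oracles symbolically rather than as arbitrary functions, but the shape of the correspondence is the same.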
This is joint work with Thibaut Girka (IRIF - MERCE) and David Mentré (MERCE).
Semantics
Tuesday November 15, 2016, 11AM, Salle 3052
José Espírito Santo (University of Minho) Call-by-name, call-by-value and intuitionistic logic
Call-by-name (CBN) and call-by-value (CBV) have been defined as the dual solutions to a critical pair at the heart of cut-elimination in classical sequent calculus. When one moves to intuitionistic logic, this symmetric picture breaks, but what happens precisely? How are such definitions adapted to the intuitionistic case? How do the resulting calculi relate to the many other proposals for intuitionistic CBV and CBN calculi? And what does this breakdown of duality say about intuitionistic logic?
We find that proof theory alone distills new CBV and CBN lambda-calculi which agree, in a technical sense, with the computational lambda-calculus and the ordinary lambda-calculus, respectively; but the former distilled calculus is contained in the latter, a vindication of Plotkin. This containment, in turn, rests on another asymmetry: a certain permutative/focalization conversion, named T, solves what remains of the classical critical pair in favor of CBN, while the symmetric solution with a certain CBV conversion Q does not exist. In this sense we may say that intuitionistic logic enjoys a CBN bias.
Full answers to the above questions require us to work simultaneously with sequent calculus and its equivalent bidirectional natural deduction. The former syntax is our starting point, where we track the (lost) symmetries and find the idea of permutative conversions; the latter syntax is the bridge towards more traditional programming notations. For instance, only in bidirectional natural deduction does one see that T eliminates the difference between “let” and explicit substitution, and only there does one understand which calling paradigm corresponds to the superimposition of CBN and CBV.
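For readers who want the classical critical pair spelled out (standard material in Curien-Herbelin-style lambda-bar-mu-mu-tilde notation, recalled as background rather than taken from the talk):
\[
\langle \mu\alpha.c \mid \tilde\mu x.c' \rangle \;\longrightarrow\; c[\tilde\mu x.c'/\alpha] \ \ \text{(priority to $\mu$: call-by-value)},
\qquad
\langle \mu\alpha.c \mid \tilde\mu x.c' \rangle \;\longrightarrow\; c'[\mu\alpha.c/x] \ \ \text{(priority to $\tilde\mu$: call-by-name)}.
\]
In intuitionistic sequent calculus the succedent is a single formula, so the symmetric treatment of $\mu$ and $\tilde\mu$ is lost; this is presumably the broken symmetry the abstract refers to.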
Semantics
Tuesday November 8, 2016, 11AM, Salle 3052
Gustavo Petri (IRIF) Safe and Scalable Cloud Programming
I will present AEON (Atomic Events and Ownership Networks), a programming model for safe and scalable cloud computing. AEON is based on the well-known paradigms of object-orientation and event-driven programming. AEON’s runtime system guarantees a linearizable and deadlock-free semantics for events that span multiple objects and servers. Moreover, to achieve an economical and scalable solution, our runtime system provides seamless elasticity (the capability to adapt resource usage to the actual workload), and exploits the object graph — the ownership network — to enhance parallelism. In this talk I will concentrate on the safety aspects of the semantics of AEON, and show some experimental scalability results.
Semantics
Tuesday October 18, 2016, 10:30AM, Salle 3052
Pablo Barenbaum (Université de Buenos Aires - CONICET) Finite Family Developments for the Linear Substitution Calculus
We prove a finite family developments theorem (FFD) for the Linear
Substitution Calculus (LSC), a calculus with explicit substitutions. The
notion of redex family is obtained by adapting Lévy labels to
support the two distinctive features of LSC, namely its use of
context rules that allow substitutions to act “at a distance” and
also the set of equations modulo which it rewrites. We then apply FFD to
prove a number of results for LSC including: an optimal reduction result,
an algorithm for standardisation by selection, and normalisation of a
linear call-by-need reduction strategy.
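For reference, the rewrite rules of the LSC as they appear in the literature (recalled here as background, not as the talk's contribution), with substitution contexts $L ::= \langle\cdot\rangle \mid L[x/u]$:
\[
(\lambda x.t)L\, u \to t[x/u]L \ \ (\mathsf{dB}), \qquad
C\langle x\rangle[x/u] \to C\langle u\rangle[x/u] \ \ (\mathsf{ls}), \qquad
t[x/u] \to t \ \text{ if } x\notin\mathrm{fv}(t) \ \ (\mathsf{gc},\ \text{sometimes omitted}),
\]
all applied modulo equations commuting independent substitutions; these distant rules and equations are exactly the features the adapted Lévy labels must account for, as stated in the abstract.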
Joint work with Eduardo Bonelli.
Semantics
Tuesday October 18, 2016, 11:30AM, Salle 3052
Masahito Hasegawa (Research Institute for Mathematical Sciences (RIMS)) Traced star-autonomous categories are compact closed
For years we have been bothered by a small mystery about categorical models
of Linear Logic (star-autonomous categories) and Geometry of Interaction
(compact closed categories, typically arising from traced monoidal categories
via Int-construction): is there a proper intermediate structure between
star-autonomous categories and compact closed categories, that is, a traced
star-autonomous category which is not compact closed?
We settle this issue by showing that any traced star-autonomous category is
actually compact closed.
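As background on the Int-construction mentioned above (the standard definition, not part of the new result): for a traced symmetric monoidal category $\mathcal{C}$, the compact closed category $\mathrm{Int}(\mathcal{C})$ has as objects pairs $(A^+,A^-)$ of objects of $\mathcal{C}$, with
\[
\mathrm{Int}(\mathcal{C})\big((A^+,A^-),(B^+,B^-)\big) \;=\; \mathcal{C}\big(A^+\otimes B^-,\ B^+\otimes A^-\big),
\]
composition obtained by tracing out the intermediate object, and duality simply swapping the two components.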
We also discuss what would happen if we remove symmetry, leading to the following
question: is there a non-symmetric star-autonomous category with a (suitably
generalized) trace which is not pivotal? Our recent attempt suggests that
the answer is again no, but the situation is much subtler than the symmetric
case.
The result on the symmetric case is joint work with Tamas Hajgato.
Semantics
Tuesday July 5, 2016, 11AM, Salle 3052
Shane Mansfield (IRIF) Empirical Models and Contextuality
I will begin by presenting a structural description of the kind of
empirical data which arises in physical experiments, using the
language of sheaf theory. A potential feature of such data is
contextuality (including non-locality as a special case), which can be
neatly characterised in this setting. The feature does not arise in
classical data, and a number of recent results indicate that it is the
key feature in enabling quantum advantages or speedups over classical
behaviour in a variety of computational scenarios. The sheaf-theoretic
perspective exposes a qualitative hierarchy of strengths of
contextuality and I will also mention recent efforts to quantify
contextuality via linear programming. Finally I will report on work in
progress on developing a resource theory for contextuality, beginning
with an algebra of empirical models under whose operations
contextuality is monotonically decreasing.
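A standard example of a contextual (indeed strongly non-local) empirical model, recalled here for concreteness rather than taken from the talk, is the Popescu-Rohrlich box: two parties with binary inputs $x,y$ and binary outputs $a,b$, with
\[
P(a,b \mid x,y) \;=\;
\begin{cases}
1/2 & \text{if } a \oplus b = x\wedge y,\\
0 & \text{otherwise.}
\end{cases}
\]
No global assignment of outputs to all measurements reproduces these tables, which is exactly the sheaf-theoretic criterion for (strong) contextuality.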
Semantics
Tuesday June 7, 2016, 11AM, Salle 3052
Thomas Leventis (I2M Marseille) Probabilistic lambda-theories
In the usual deterministic lambda-calculus, the confluence of the unique computation rule, beta-reduction, makes it easy to turn the reduction into an equality, beta-equivalence. This idea of considering equalities on terms rather than computations gives rise to the notion of a lambda-theory. On the other hand, the probabilistic lambda-calculus is usually treated only in terms of computation. In particular, we know that we must impose a reduction strategy on the probabilistic choice if we want the calculus to make sense.
In this talk we will recover this notion of presentation through equalities in the probabilistic lambda-calculus. We will show how some usual notions can be translated into the probabilistic world. In particular we will define a notion of probabilistic Böhm tree and prove a separability result. This implies that, just as in the deterministic setting, Böhm tree equality coincides with observational equivalence.
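To see why a reduction strategy must be fixed (a standard observation, not specific to this talk, illustrated here with pairs for readability), consider a fair choice $M \oplus N$ duplicated by a $\beta$-redex:
\[
(\lambda x.\langle x,x\rangle)(M\oplus N) \;\longrightarrow\;
\begin{cases}
\langle M,M\rangle \text{ or } \langle N,N\rangle, \text{ each with probability } 1/2, & \text{if the choice is resolved first,}\\
\langle M\oplus N,\ M\oplus N\rangle, \text{ i.e.\ two independent choices,} & \text{if the argument is copied first,}
\end{cases}
\]
so the two orders of evaluation yield different distributions and confluence fails.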
Note: double session on Tuesday June 7.
Semantics
Tuesday June 7, 2016, 10AM, Salle 3052
Antoine Allioux (IRIF) Krivine machine and Taylor expansion in a non-uniform setting
The Krivine machine is an abstract machine implementing linear head reduction of the $\lambda$-calculus. Ehrhard and Regnier gave a resource-sensitive version returning the annotated form of a $\lambda$-term accounting for the resources used by the linear head reduction. These annotations take the form of terms in the resource $\lambda$-calculus.
We generalize this resource-driven Krivine machine to the case of the algebraic $\lambda$-calculus. The latter is an extension of the pure $\lambda$-calculus allowing for the linear combination of $\lambda$-terms with coefficients taken from a semiring. Our machine associates a $\lambda$-term $M$ and a resource annotation $t$ with a scalar $\alpha$ in the semiring describing some quantitative properties of the linear head reduction of $M$.
In the particular case of non-negative real numbers and of algebraic terms $M$ representing probability distributions, the coefficient $\alpha$ gives the probability that the linear head reduction actually uses exactly the resources annotated by $t$. In the general case, we prove that the coefficient $\alpha$ can be recovered from the coefficient of $t$ in the Taylor expansion of $M$ and from the normal form of $t$.
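As background, here is a minimal sketch of the classical Krivine machine on de Bruijn terms, performing weak head reduction; the resource annotations and semiring coefficients discussed in the talk are not modelled here.

    # Terms: ("var", n) | ("lam", body) | ("app", t, u), with de Bruijn indices.
    def krivine(term):
        """Run the classical Krivine machine to a weak head normal form (closure)."""
        env, stack = [], []                # env: list of closures, stack: closures
        while True:
            tag = term[0]
            if tag == "app":               # push the argument as a closure
                stack.append((term[2], env))
                term = term[1]
            elif tag == "lam" and stack:   # pop a closure into the environment
                env = [stack.pop()] + env
                term = term[1]
            elif tag == "var":             # look the variable up in the environment
                term, env = env[term[1]]
            else:                          # abstraction facing an empty stack: stop
                return term, env

    # (\x. x) ((\y. y) (\z. z)) evaluates to the closure of the identity.
    ident = ("lam", ("var", 0))
    print(krivine(("app", ident, ("app", ident, ident))))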
Note: double session on Tuesday June 7!
Semantics
Tuesday May 17, 2016, 11AM, Salle 3052
Daniel Leivant (Indiana University Bloomington) Syntax directed coproofs for Propositional Dynamic Logic
Propositional dynamic logic (PDL) is at the core of
reasoning about dynamic systems. We present a syntax-directed
formalism for PDL, built on coproofs.
Unlike proofs, which are generated inductively top-down,
coproofs are generated coinductively bottom-up, and may have infinite branches.
However, our inference rules allow this only via state-changes,
and as a result are sound.
We also prove the semantic completeness of our formalism, and apply it
to obtain new interpolation theorems for PDL.
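For orientation, the standard PDL axioms that any complete proof system must account for (background, not the new coproof rules):
\[
[\alpha;\beta]\varphi \leftrightarrow [\alpha][\beta]\varphi, \qquad
[\alpha\cup\beta]\varphi \leftrightarrow [\alpha]\varphi\wedge[\beta]\varphi, \qquad
[\psi?]\varphi \leftrightarrow (\psi\to\varphi), \qquad
[\alpha^{*}]\varphi \leftrightarrow \varphi\wedge[\alpha][\alpha^{*}]\varphi,
\]
together with the induction axiom $\varphi\wedge[\alpha^{*}](\varphi\to[\alpha]\varphi)\to[\alpha^{*}]\varphi$; the unfolding of $\alpha^{*}$ is where infinite, coinductively generated derivations naturally arise.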
Semantics
Thursday April 7, 2016, 4PM, Salle 3052
Flavien Breuvart (INRIA Focus (Université de Bologna, IT)) Probabilistic encoding: towards a systematic formalization of probabilistic, deterministic, and potentially divergent languages
There are two ways of formalizing probabilistic languages and systems: one is to consider the tree of derivations (with probabilistic branching), the other is to consider probability distributions. Having access to the derivation tree is convenient for proofs, since one has access to the complete history, but when the tree is infinite the formalism is very heavy and not very explicit. The distribution-based formulation is simpler and gives elegant theorems, but leads to very complex proofs (in which one often has to reconstruct the history of the computation).
In this talk we will see that it is possible to get the best of both worlds thanks to a probabilistic encoding theorem inspired by the categorical theory of coalgebras. We will see that this is a systematic tool which, like induction or coinduction, makes it possible to structure complex proofs and definitions. For lack of time, we will only briefly discuss how this proof scheme allows us to redefine probabilistic versions of usual notions such as probabilistic intersection types or probabilistic reducibility candidates.
Note the unusual day and time!
Semantics
Tuesday March 29, 2016, 10:30AM, Salle 3052
Andrew Polonsky (IRIF) Interpreting infinite terms
Most standard models of the lambda calculus validate the principles of
infinitary rewriting (which include identification of meaningless terms).
For example, in Scott models, the interpretation of every fixed point
combinator is equal to the limit (in the model) of interpretations of
λf.f(Ω), λf.f^2(Ω), λf.f^3(Ω), …, λf.f^n(Ω), ….
The infinite term λf.f(f(f(…))) is the limit of the above sequence, and in
the infinitary lambda calculus all fpcs reduce to this term, which is an
infinitary normal form.
It would seem natural to extend the interpretation function to cover
infinitary terms as well; however, naive attempts to do this fail.
The problem is already present in the first-order case: finite terms over
a signature Σ can be interpreted by means of initial algebra semantics, where
the free algebra of terms admits a canonical homomorphism to any other
Σ-algebra.
However, the infinite terms are elements of the final coalgebra, whose
universal property concerns maps INTO the coalgebra, rather than out of it.
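In symbols (the standard universal properties, recalled here): writing $T_\Sigma$ for the finite terms and $T_\Sigma^\infty$ for the finite and infinite terms,
\[
\text{for every }\Sigma\text{-algebra }A\text{ there is a unique homomorphism }T_\Sigma\to A,
\qquad
\text{for every }\Sigma\text{-coalgebra }C\text{ there is a unique homomorphism }C\to T_\Sigma^\infty,
\]
so the canonical map goes into the infinite terms rather than out of them, which is the mismatch at issue.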
We are thus faced with a “koan”:
While we shall not provide a (co)final solution to this koan, we will offer
germs of some possible approaches, with the hope of starting a discussion
that could follow this short talk.
Note the unusual time.