Séminaire des membres non-permanents
Mercredi 20 décembre 2017, 11 heures, Salle 3052
Leo Stefanesco (“Proofs and Programs” and “Algebra and Calculus” teams) “A concise introduction to logical relations” followed by “A Logical Relation for Monadic Encapsulation of State”

1st part (E for Everyone): A concise introduction to logical relations

Logical relations are a powerful technique for proving properties of programs, in particular that two programs are contextually equivalent.

In this talk, we will see that, in System F (aka the polymorphic lambda calculus), the only program of type ∀ a, a → a is the identity.

I will also sketch how to extend logical relations to realistic languages such as ML.
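
As a tiny, hedged illustration of the parametricity fact mentioned above (my own sketch, not part of the abstract), here is the type ∀ a, a → a rendered in Haskell; any total, effect-free inhabitant of this type must behave like the identity, since it knows nothing about the type it is instantiated at.

{-# LANGUAGE ExplicitForAll #-}

-- The caller chooses 'a', so the function cannot inspect its argument,
-- nor produce a value of type 'a' other than the argument itself.
poly :: forall a. a -> a
poly x = x

main :: IO ()
main = do
  print (poly (42 :: Int))   -- 42
  print (poly "hello")       -- "hello"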

2nd part (POPL talk rehearsal):

A Logical Relation for Monadic Encapsulation of State

We present a logical relations model of a higher-order functional programming language with impredicative polymorphism, recursive types, and a Haskell-style ST monad type with runST. We use our logical relations model to show that runST provides proper encapsulation of state, by showing that effectful computations encapsulated by runST are heap independent. Furthermore, we show that contextual refinements and equivalences that are expected to hold for pure computations do indeed hold in the presence of runST. This is the first time such relational results have been proven for a language with monadic encapsulation of state. We have formalized all the technical development and results in Coq.
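
For readers who have not seen the ST monad, the following standard example (a sketch of the interface the paper studies, not code from the paper) shows the kind of encapsulation at stake: the function mutates a local reference, yet its type is pure, and the rank-2 type of runST prevents the reference from escaping.

import Control.Monad.ST (ST, runST)
import Data.STRef (newSTRef, readSTRef, modifySTRef')

-- Sums a list using a local mutable accumulator. The phantom variable 's'
-- ties the STRef to this particular call of runST, so the computation is
-- heap-independent from the outside: the result is just an Int.
sumST :: [Int] -> Int
sumST xs = runST $ do
  acc <- newSTRef 0
  mapM_ (\x -> modifySTRef' acc (+ x)) xs
  readSTRef acc

main :: IO ()
main = print (sumST [1 .. 10])   -- 55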

Séminaire des membres non-permanents
Mercredi 13 décembre 2017, 11 heures, Salle 3052
Simon Halfon (ENS Cachan) Well Quasi-Orders and Extreme Stratospheric-Complexity-Classes of Death — Beaux pré-ordres et classes de complexité stratosphériques de la mort

It is widely known how to prove the termination of an algorithm using well-founded orderings. It is however usually believed that no complexity upper bound can be derived from these termination proofs, which are often seen as non-constructive. In this talk, I will present some ideas to infer upper bounds from such termination proofs. I will try to give you a taste of the complicated combinatorics behind them, which results in surprisingly high complexities. Oxygen bottles and pressure suits are required: we are going beyond Elementary.

No special knowledge is required to follow the talk.

Nous savons tous qu’il est très pratique pour prouver la terminaison d’un algorithme d’utiliser des ordres bien fondés. Cependant, il est commun de penser que cette technique ne donne aucune information sur la complexité de l’algorithme, car la preuve est non constructive. Dans cet exposé, je présenterai quelques idées pour extraire une borne supérieure de complexité d’une telle preuve de terminaison. J’essaierai de vous donner une idée de la combinatoire compliquée que cela engendre, et qui résulte en des complexités très élevées. Bouteilles d’oxygène et combinaisons pressurisées obligatoires, on s’envole au-delà de l’Élémentaire.

Aucune connaissance spécifique n'est nécessaire pour suivre l'exposé.
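
A classic instance of the gap described above (my own example, not necessarily the speaker's): the Ackermann function terminates because its pair of arguments decreases in the lexicographic well-founded order at every recursive call, yet the running time hidden behind that termination proof is far beyond any elementary function.

-- Termination: each recursive call strictly decreases (m, n) in the
-- lexicographic order on pairs of naturals, which is well founded.
-- Complexity: ack 4 2 already equals 2^65536 - 3, a 19729-digit number.
ack :: Integer -> Integer -> Integer
ack 0 n = n + 1
ack m 0 = ack (m - 1) 1
ack m n = ack (m - 1) (ack m (n - 1))

main :: IO ()
main = print (ack 3 3)   -- 61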

Séminaire des membres non-permanents
Mercredi 6 décembre 2017, 11 heures, Salle 3052
Axel Osmond & Yassine Hamoudi ([1] “Algebra and calculus” and “Proofs and programs” teams, [2] Algorithms and complexity team) New PhD student introduction session. [1] From pointless topology to formal topology [2] ACC0 and multiparty communication: fighting the log n barrier.

[1] While point-set topology is highly practical as a framework for spatial reasoning, one can raise some ontological and logical suspicions about the naive notion of points, as they constitute idealized objects with somewhat inaccessible aspects from a constructive and finitary point of view. The theories of locales and frames are two deeply entangled realms aimed at rebuilding topology on lattice-theoretic and categorical foundations, in which the usual notions of points, opens, subspaces, separation, compactness… can be re-expressed as first-order algebraic properties in a lattice-theoretic context, or both generalized and made constructive.

Formal topology goes further in this direction, using systems of axioms about coverings as deductive systems; this leads to a type-theoretically flavored, predicative and constructive topology, endowed with multiple and finer notions of points, separation…, and suited for intuitionistic reasoning.

[2] The Number On the Forehead model is a multiparty communication game between k players who collaboratively want to evaluate a given function F : X1 × … × Xk → Y on some input (x1, …, xk) by broadcasting bits according to a predetermined protocol. The input is distributed among the players in such a way that each player i sees all of it except xi (as if xi were written on the forehead of player i).

A central open question in this model, called the log n barrier, is to find a function which is hard to compute when the number of players is polylog(n) or more (where the xi's have size poly(n)). This has an important application in circuit complexity, as it could help to separate ACC0 from other complexity classes.

In this talk, we will first recall the connection between ACC0 and communication complexity, and then describe a new efficient communication protocol that prevents some important functions from breaking the log n barrier.
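
To give a feel for why lower bounds are so elusive in this model, here is a toy protocol (my own illustration, not taken from the talk): for the XOR of the players' bits, two broadcast bits always suffice, whatever k and the input size are, precisely because every player already sees all the other inputs.

import Data.Bits (xor)

-- Number-On-the-Forehead toy protocol for XOR: player 2, who sees x1,
-- broadcasts it; player 1, who sees x2..xk, then broadcasts the answer.
-- Total communication: 2 bits, independently of the number of players.
nofXor :: [Bool] -> (Int, Bool)          -- (bits broadcast, computed value)
nofXor []          = (0, False)
nofXor (x1 : rest) = (2, foldr xor x1 rest)

main :: IO ()
main = print (nofXor [True, False, True, True])   -- (2,True)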

Séminaire des membres non-permanents
Mercredi 22 novembre 2017, 11 heures, Salle 3052
Jules Chouquet (Proofs and Programs team) Linear logic proof nets and Taylor expansion.

I will first introduce linear logic proof nets for the multiplicative and exponential fragment (MELL), and I will especially insist on the computational meaning of the exponential boxes: these are the constructions that make it possible to duplicate and delete entire parts of a structure (all of the non-linear part of the calculus).

Once these notions are introduced, I will explain how this computational paradigm can be expressed in a linear setting through a syntactic Taylor expansion. The idea is to understand exponential boxes in a differential variant of linear logic, and to represent them as linear combinations.

If we have time, I will try to give an idea of some algebraic issues concerning this construction, and of a method to show, for example, that the normal form of the Taylor expansion of a MELL proof net always converges.

NB: the Taylor expansion here is analogous to that of the lambda calculus (which also has a differential version); if you have heard of the latter, it can provide a first intuition.
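
For readers who have already met the lambda-calculus version, the analogy can be summed up by the Ehrhard-Regnier expansion of an application, stated informally below in my own notation: the argument of an application (the content of a box) is replaced by finite bags of copies, each used exactly once, weighted by 1/n!.

% Taylor expansion of an application, lambda-calculus analogue (informal).
% \mathcal{T}(M) is the expansion of M; [ ... ] is a bag of n linear copies
% of the argument; application is extended to sums by (multi)linearity.
\[
  \mathcal{T}(M\,N) \;=\; \sum_{n \ge 0} \frac{1}{n!}\,
  \langle \mathcal{T}(M) \rangle\,
  [\,\underbrace{\mathcal{T}(N),\dots,\mathcal{T}(N)}_{n}\,]
\]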

Séminaire des membres non-permanents
Mercredi 15 novembre 2017, 11 heures, Salle 3052
Chaitanya Leena Subramaniam (“Algebra and calculus” and “proofs and programs” teams) Homotopy type theory and the fibred structure of dependent types

This is another session of the PhD introduction series.

Chaitanya will give a 30-minute talk. Given that the talk will not last as long as usual, we will also take advantage of the opportunity to discuss the organization of the seminar.

Abstract:

The talk will not be about my particular research problem. It will seek instead to give a gentle pictorial introduction to the inherent structure of dependent types and how this structure determines what dependent types and dependently-typed programs (or proofs) *mean*.

The focus will be on why this structure naturally leads to homotopy type theory and univalence. As a bonus, and if time permits, there will be some remarks on univalence and extensionality.

Séminaire des membres non-permanents
Mercredi 8 novembre 2017, 11 heures, Salle 3052
Francesco Antonio Genco (Technische Universität Wien) Typing Parallelism and Communication through Hypersequents

We present a Curry–Howard correspondence for Gödel logic based on a simple natural deduction reformulating the hypersequent calculus for this logic. The resulting system extends simply typed λ-calculus by a symmetric higher-order communication mechanism between parallel processes. The normalization proof employs reductions that implement forms of code mobility. We consider this result from a broader perspective and, following A. Avron's 1991 thesis on the connection between hypersequents and parallelism, we discuss the generalisation of the employed techniques to other intermediate logics.

Séminaire des membres non-permanents
Mercredi 25 octobre 2017, 11 heures, Salle 3052
Cédric Ho Thanh & Isaac Konan (Algebra and calculus team / Combinatorics team) New PhD student introduction session

This special session will introduce two new PhD students who will each be giving a short talk.

Isaac Konan, Bijective proof and generalization of Siladić's theorem

In a recent paper, Dousse introduced a refinement of Siladić's theorem on partitions using the method of weighted words, in which the parts can take 2 colours. The proof of this refined theorem relies on recursive equations involving q-series. In this presentation, I will outline a bijective proof of Dousse's theorem which, moreover, can be extended to colourings with more than 2 colours.

Cédric Ho Thanh, Opetopes, Rewriting, and Koszulity

My thesis consists of three words, which I will try to explain.

Séminaire des membres non-permanents
Mercredi 11 octobre 2017, 11 heures, Salle 3052
Hadrien Batmalle (Proofs and Programs team) From Cohen's Forcing to Classical Realisability: A New Approach

English abstract below

Du forcing à la réalisabilité classique: une nouvelle approche

La réalisabilité classique permet d'interpréter des théories mathématiques classiques, comme la théorie des ensembles ZF, dans divers modèles de calculs (lambda-calcul avec continuations, domaines…). L'intérêt est double: côté informatique, il s'agit d'extraire des interprétations calculatoires de preuves classiques; côté mathématique, on obtient de nouveaux modèles de ces théories classiques (les deux aspects étant intimement liés). Une grande partie de la recherche en réalisabilité classique étudie la structure de ces modèles, qui apparaissent comme une généralisation du forcing de Cohen. Nous nous intéresserons ici à une nouvelle méthode pour exporter des propriétés des modèles de forcing aux modèles de réalisabilité, qui permet de construire des interprétations de deux théories contradictoires dans un même modèle de calcul, ce qui ouvre la voie à une analyse fine des conséquences calculatoires de la présence ou non de tel ou tel axiome.

From Cohen's Forcing to Classical Realisability: A New Approach

Classical realisability is a framework for interpreting classical mathematical theories, such as ZF set theory, in various models of computation (lambda-calculus with continuations, domains…). The goal is twofold: from the computer scientist's point of view, this is a method for extracting computational interpretations out of classical proofs; from the mathematician's, this is a trove of new models for these classical theories (both sides being tightly interwoven). A good deal of the research in this area focuses on the structure of these models, which arise as a generalisation of Cohen's forcing. In this talk we'll present some consequences of a new method for exporting properties of Cohen's forcing models into properties of classical realisability models. In particular, we obtain interpretations of two incompatible theories in the same model of computation, which clears the path to studying the computational consequences of the presence or absence of a given axiom.

Séminaire des membres non-permanents
Mercredi 27 septembre 2017, 11 heures, Salle 3052
Baptiste Louf & Victor Lanvin (Combinatorics and PPS teams) New PhD student introduction session

This special session will introduce two new PhD students who will each be giving a short talk. Here are the titles and abstracts of the talks:

Combinatorial maps: algebraic and bijective enumeration

Combinatorial maps (embeddings of graphs on surfaces) are well-studied objects in combinatorics, with applications in other domains such as quantum gravity. The goal is to enumerate them (sometimes exactly, sometimes asymptotically). For this purpose, one can resort to (among other things) bijective or algebraic methods. The algebraic method is often more powerful and yields results more easily; however, bijections give a more in-depth understanding of the models. Often, formulas are found via powerful methods, and people then try to re-prove them bijectively. In this talk, I will present what I am focusing on, both on the bijective side (the Carrell-Chapuy formula) and on the algebraic side (the KP equations). If time permits, I will explain a simple bijection I discovered during my Master's internship.

Gradual Set-Theoretic Types

A static type system can be an extremely powerful tool for a programmer, providing early error detection, and offering strong compile-time guarantees on the behavior of a program. However, compared to dynamic typing, static typing often comes at the expense of development speed and flexibility, as statically typed code might be more difficult to adapt to changing requirements. Gradual typing is a recent and promising approach that tries to get the best of both worlds, by allowing the programmer to finely tune the distribution of dynamic and static checking over a program. However, this “gradualization” is sometimes too coarse, and an expression is often either fully dynamic or fully static. We argue that adding full-fledged union and intersection types (a.k.a. set-theoretic types) to a gradual type system solves this issue by making the transition between dynamic typing and static typing smoother.

Séminaire des membres non-permanents
Mercredi 28 juin 2017, 11 heures, Salle 3052
Tommaso Petrucciani (PPS team) Semantic subtyping: an introduction

Many type systems for programming languages include a notion of subtyping. Subtyping is often defined syntactically by a formal system, but this gets increasingly complex when union, intersection, and negation types are introduced.

In the semantic subtyping approach, instead, types are interpreted as sets and subtyping is defined in terms of set containment. Then, an algorithm is derived from the definition. While the algorithm is complex, the interpretation of types serves as a fairly simple specification. This approach also ensures that union and intersection on types behave as the corresponding operations on sets.

I will give an introduction to this approach and show how to define subtyping semantically for types including arrows, union, intersection, and negation, following [Frisch et al., 2008]. Then, we will look at ongoing work on adapting this approach (originally studied for call-by-value languages) to lazy semantics.

[Frisch et al., 2008] A. Frisch, G. Castagna, and V. Benzaken, Semantic subtyping, JACM, 2008.
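
The shape of the definition sketched above can be summarised as follows (standard in the semantic-subtyping literature; the notation here is mine, see Frisch et al. 2008 for the real thing): subtyping is containment of interpretations, which turns every subtyping question into an emptiness check, and the set-theoretic connectives are interpreted literally.

% [[ . ]] maps a type to a set of values and D denotes the whole domain.
\[
  t_1 \leq t_2 \;\iff\; [\![ t_1 ]\!] \subseteq [\![ t_2 ]\!]
              \;\iff\; [\![ t_1 \wedge \neg t_2 ]\!] = \emptyset,
  \qquad
  [\![ t_1 \vee t_2 ]\!] = [\![ t_1 ]\!] \cup [\![ t_2 ]\!],
  \quad
  [\![ \neg t ]\!] = \mathcal{D} \setminus [\![ t ]\!] .
\]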

Séminaire des membres non-permanents
Mercredi 14 juin 2017, 11 heures, Salle 3052
Timo Zijlstra & Emmanuel Arrighi Quantum algorithms and Learning With Errors-based Cryptography & Distance Labels and Tree Skeletons

Quantum algorithms and Learning With Errors-based Cryptography:

Post-quantum cryptography is meant to replace today's standards like RSA and Elliptic Curve Cryptography (ECC), since these standards are threatened by quantum algorithms. The most researched post-quantum candidates are based on lattice problems, and in particular on the Learning With Errors (LWE) problem. It is assumed that there exists no quantum algorithm that solves this problem efficiently. However, in a particular setting and under some strong hypotheses, it is very easy to solve LWE using a generalization of the Bernstein-Vazirani quantum algorithm. We will take a look at possible quantum cryptanalysis of LWE-based cryptographic applications.
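
As a reminder of the underlying object (a sketch based on the standard definition of LWE, not on the talk's specific setting; the parameters below are arbitrary toy values), an LWE sample hides the secret behind a small additive error; without that error the secret would fall to plain Gaussian elimination, which is why the quantum attack mentioned above needs its strong extra hypotheses.

import System.Random (randomRIO)

-- One LWE sample over Z_q: a uniformly random vector a together with
-- b = <a, s> + e (mod q), where s is the secret and e is a small error.
q, n :: Int
q = 97
n = 4

lweSample :: [Int] -> IO ([Int], Int)
lweSample secret = do
  a <- mapM (const (randomRIO (0, q - 1))) [1 .. n]
  e <- randomRIO (-2, 2)                       -- small error term
  let b = (sum (zipWith (*) a secret) + e) `mod` q
  return (a, b)

main :: IO ()
main = lweSample [3, 14, 15, 9] >>= print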

Distance Labels and Tree Skeletons:

To answer distance queries on a fixed, known graph, it is worthwhile to do some precomputation in order to reduce query time. One method is Hub Labeling, which works well on road networks. We will take a look at this method and introduce the notion of Skeleton Dimension, which gives some insight into why it works well on road networks.
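
A minimal sketch of the query side of Hub Labeling (standard presentation, with names and toy data of my own choosing): every vertex stores the exact distances to a small set of hubs, and answering a query amounts to intersecting two label maps.

import qualified Data.Map.Strict as M

type Vertex = Int
type Label  = M.Map Vertex Double     -- hub -> exact distance to that hub

-- For a correct labelling (some shortest u-v path goes through a common
-- hub), dist(u,v) = min over common hubs h of d(u,h) + d(h,v).
distQuery :: Label -> Label -> Maybe Double
distQuery lu lv
  | M.null common = Nothing
  | otherwise     = Just (minimum (M.elems common))
  where common = M.intersectionWith (+) lu lv   -- d(u,h) + d(h,v) per hub

main :: IO ()
main = do
  let labelU = M.fromList [(1, 2.0), (3, 5.0)]  -- toy labels
      labelV = M.fromList [(1, 4.0), (2, 1.0)]
  print (distQuery labelU labelV)               -- Just 6.0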

Séminaire des membres non-permanents
Mercredi 31 mai 2017, 11 heures, Salle 3052
Clément Jacq (PPS team) A playful introduction to game semantics (category-light)

(English abstract below)

Une introduction ludique à la sémantique des jeux (allégée en catégories)

La sémantique des jeux est une branche de la théorie des modèles dont l'objectif est d'interpréter des formules de certaines logiques sous forme de jeux à deux joueurs. Son objectif initial était de lier les notions de vérité et de validité à des concepts de théorie des jeux tels que l'existence de stratégies gagnantes…

Après quelques exemples historiques, nous nous intéresserons dans cet exposé de manière informelle à un cas plus récent où la sémantique des jeux modélise désormais des langages de programmation.

En guise de conclusion, nous évoquerons l'aspect formel avec la notion de structure catégorielle.


A playful introduction to game semantics (category-light)

Game semantics is a branch of model theory that aims at interpreting formulas of a given logic as two-player games. Initially, it was developed to link the notions of truth and validity to game-theoretic notions such as the existence of winning strategies.

After an historical example, we will look informally at a more recent case of game semantics where the games are used to model programming languages.

At the end, we'll mention the formal part with the notion of categorical model.

Séminaire des membres non-permanents
Mercredi 17 mai 2017, 11 heures, Salle 3052
Pierre Vial (PPS team) An Introduction to Intersection Type Systems, and a New Answer to Klop's Problem

The talk will be given in English if there are non-French speakers in the room.

The talk will have two goals:

1) To present intersection type systems (ITS), and in particular ITS with non-idempotent intersection. I will start with some basic reminders about the lambda-calculus. We will see how the representation of lambda-terms as trees (elementary as it is) helps to understand the way ITS are designed and why they naturally enjoy properties that simple type systems cannot (reasonably) have. For instance, in an ITS, a term is normalizing (i.e., it terminates) iff it is typable. By contrast, in a simple type system, we only have the implication “if the term t is typable, then it is normalizing” (the *Termination Property*). The notion of normalization (i.e., termination) admits many variants: we will see two of them, head normalization (HN) and weak normalization (WN). We will also see that ITS have consequences in the lambda-calculus that are *external* to type theory.

Given a type system Sys (whether Sys is an ITS or a simple type system, higher-order or not), the termination property (typable in Sys ⇒ normalizing) is often difficult to establish (one usually has to resort to a so-called realizability argument, attributed to Tait). However, I will present system R, introduced by Gardner and de Carvalho, in which the intersection operator can be seen as non-idempotent and termination relies on a very simple argument that we will go through together.
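
To give an idea of the shape of system R (my rendering of the standard non-idempotent presentation, not part of the abstract): intersections are finite multisets of types, the application rule consumes one copy of the argument's type for each element of the multiset, and contexts are added multiset-wise, which is what makes the quantitative termination argument so direct.

% Application rule of a non-idempotent ("multiset") intersection type
% system: the function expects a bag [sigma_1, ..., sigma_n] of argument
% types, and the argument u is typed once for each element of the bag.
\[
  \frac{\Gamma \vdash t : [\sigma_1,\dots,\sigma_n] \to \tau
        \qquad
        (\Delta_i \vdash u : \sigma_i)_{i=1}^{n}}
       {\Gamma + \Delta_1 + \dots + \Delta_n \vdash t\,u : \tau}
\]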

2) To present a paper (recently accepted at LICS) about infinitary lambda-calculus and infinitary reduction, whose abstract follows:

Infinitary Intersection Types as Sequences: a New Answer to Klop’s Problem

We provide a type-theoretical characterization of weakly-normalizing terms in an infinitary lambda-calculus. We adapt for this purpose the standard quantitative (with non-idempotent intersections) type assignment system of the lambda-calculus to our infinite calculus. Our work provides a positive answer to a semi-open question known as Klop’s Problem, namely, finding out if there is a type system characterizing the set of hereditary head-normalizing (HHN) lambda-terms. Tatsuta showed in 2007 that HHN could not be characterized by a finite type system. We prove that an infinitary type system endowed with a validity condition called approximability can achieve it. As it turns out, approximability cannot be expressed when intersection is represented by means of multisets. Multisets are then replaced coinductively by sequences of types indexed by integers, thus defining a type system called System S.

Séminaire des membres non-permanents
Mercredi 3 mai 2017, 11 heures, Salle 3052
Alexandre Nolin (Algorithms and complexity Group) Quantum, a look through nonlocality

In this talk I will try to give an introduction to the mathematical framework of quantum computing, completely disconnected from the underlying theory of quantum physics. After an exposition of a circuit model for quantum computing, we will split those circuits into two parts (the infamous Alice and Bob) and talk about the behaviours of these bipartite systems, more specifically about somewhat counter-intuitive phenomena like entanglement and nonlocality. Finally, we will see how those phenomena are relevant to the study of communication complexity, and discuss recent results.
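
The textbook quantification of the nonlocality phenomenon mentioned above (the standard CHSH example, not necessarily the one used in the talk): classical, local strategies are bounded by 2, while entangled quantum strategies reach Tsirelson's bound of 2√2.

% A_0, A_1 (resp. B_0, B_1) are Alice's (resp. Bob's) +/-1-valued
% measurements; local hidden-variable strategies obey the first bound,
% entangled quantum strategies only the second.
\[
  \bigl|\langle A_0 B_0\rangle + \langle A_0 B_1\rangle
        + \langle A_1 B_0\rangle - \langle A_1 B_1\rangle\bigr|
  \;\le\; 2 \ \text{(classical)}, \qquad \le\; 2\sqrt{2} \ \text{(quantum)} .
\]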

Séminaire des membres non-permanents
Mercredi 19 avril 2017, 11 heures, Salle 3052
Pierre Cagne (PPS team) Lawvere’s hyperdoctrines and notions of equality

This talk will present hyperdoctrines, a widget invented by Lawvere around 1970 to give a categorical account of type theories. It has the advantage of dissociating the various constructions/rules of a type theory: structural rules, logical rules, quantifier rules, equality rules, etc. As a welcome side effect, it questions the legitimacy of such rules. In particular, we will take some time to study equality and the relevance of its usual definition, and try to give a feeling for Lawvere's seminal thoughts on HoTT. If time permits, we shall discuss the insight such an approach gives into a (not yet well-established) “directed type theory”, by which is roughly meant a type theory in which paths between terms are not necessarily reversible.

Séminaire des membres non-permanents
Mercredi 5 avril 2017, 11 heures, Salle 3052
Guillaume Lagarde (Automata and applications Group) On the stability of the Lempel-Ziv compression algorithm

LZ78 is a very simple lossless data compression algorithm published by Abraham Lempel and Jacob Ziv in 1978. It is fair enough to expect a certain stability from a data compression algorithm under small perturbations of the input. In this direction, Jack Lutz, among others, asked the following question, known as the “one-bit catastrophe”: given an infinite word w, can the compression ratio of w be different from that of 1w (the same word with one bit prepended)? In this presentation, I will give some results in this direction; in particular, I would like to convey the intuition that a catastrophe can indeed occur in LZ78. Joint work with Sylvain Perifel.

The presentation will just use basic combinatorics.
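
To make the object concrete, here is a minimal LZ78 parser (a standard sketch, not the paper's code): the word is cut greedily into phrases, each phrase being a previously seen phrase extended by one letter, and the number of phrases is what governs the compression ratio. Prepending a single bit shifts every phrase boundary, which is where a catastrophe could hide.

import qualified Data.Set as S

-- Greedy LZ78 parsing: extend the current phrase while it is already in
-- the dictionary; when it becomes new, emit it and start a fresh phrase.
lz78Phrases :: String -> [String]
lz78Phrases = go S.empty ""
  where
    go _    cur [] = [cur | not (null cur)]
    go dict cur (c : cs)
      | S.member next dict = go dict next cs
      | otherwise          = next : go (S.insert next dict) "" cs
      where next = cur ++ [c]

main :: IO ()
main = do
  let w = "abababababababab"
  print (length (lz78Phrases w))          -- number of phrases of w
  print (length (lz78Phrases ('b' : w)))  -- same word with one letter prepended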

Séminaire des membres non-permanents
Mercredi 22 mars 2017, 11 heures, Salle 3052
Gabriel Radanne (PPS team) GADTs gone mild

Generalized Algebraic Data Types, often also called “Poor man's dependent types”, are an extension of regular sum and product types that is available in OCaml and Haskell.

Since their adoption in “mainstream” languages, GADTs have been known for allowing one to elegantly write toy typed interpreters, at the cost of horrible type error messages and numerous headaches. Or, as Yaron Minsky put it, “I assumed that it was the kind of nonsense you get when you let compiler writers design your programming language.”

In this talk, I will present GADTs, what they are, and what useful things we can do with them. This will take us on quite a journey, with some traces of C, a pinch of memory layout, a cameo from pushdown automata and a healthy amount of Prolog. The only requirements will be a passing familiarity with OCaml and the caffeinated beverage of your choice.
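
For reference, here is the obligatory toy typed interpreter mentioned above (a minimal sketch, written in Haskell syntax for brevity; the OCaml encoding is analogous): each constructor refines the type index, so the evaluator needs no tags and cannot go wrong.

{-# LANGUAGE GADTs #-}

-- A tiny expression language indexed by the type of the value it denotes.
data Expr a where
  IntE  :: Int  -> Expr Int
  BoolE :: Bool -> Expr Bool
  Add   :: Expr Int -> Expr Int -> Expr Int
  If    :: Expr Bool -> Expr a -> Expr a -> Expr a

-- The index guarantees, at compile time, that evaluation is well-typed.
eval :: Expr a -> a
eval (IntE n)   = n
eval (BoolE b)  = b
eval (Add x y)  = eval x + eval y
eval (If c t e) = if eval c then eval t else eval e

main :: IO ()
main = print (eval (If (BoolE True) (Add (IntE 1) (IntE 2)) (IntE 0)))  -- 3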

Séminaire des membres non-permanents
Mercredi 8 mars 2017, 11 heures, Salle 3052
Pablo Eduardo Rotondo (Automata and applications Group) Continued Fractions and the Recurrence of Sturmian Words

In this talk I will present various aspects of Continued Fractions, motivating and explaining their relation to the factors of the so-called Sturmian Words. We will conclude with a probabilistic study of the recurrence function of Sturmian Words, which is joint work with Brigitte Vallée (CNRS, Univ. Caen).
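
Two short definitions (a sketch using the standard formulas, not material from the talk) already exhibit the objects involved: the "mechanical" Sturmian word of a given irrational slope, and the continued fraction expansion of that slope, whose partial quotients govern the combinatorics of the word, in particular its recurrence function.

-- n-th prefix of the mechanical (Sturmian) word of slope alpha:
-- the k-th letter records whether the line of slope alpha crosses a new
-- integer level between abscissas k and k+1.
sturmian :: Double -> Int -> [Int]
sturmian alpha n =
  [ floor (fromIntegral (k + 1) * alpha) - floor (fromIntegral k * alpha)
  | k <- [0 .. n - 1] ]

-- First k partial quotients of the continued fraction expansion of x.
cfrac :: Double -> Int -> [Integer]
cfrac _ 0 = []
cfrac x k = let a = floor x
                f = x - fromIntegral a
            in a : if f == 0 then [] else cfrac (1 / f) (k - 1)

main :: IO ()
main = do
  let alpha = (sqrt 5 - 1) / 2        -- an irrational slope
  print (sturmian alpha 10)           -- [0,1,0,1,1,0,1,0,1,1]
  print (cfrac alpha 6)               -- [0,1,1,1,1,1]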

Séminaire des membres non-permanents
Mercredi 22 février 2017, 11 heures, Salle 3052
Lucas Boczkowski (Algorithms and complexity Group) Minimizing Message Size in Stochastic Communication Patterns: Fast Self-Stabilizing Protocols with 3 bits

The talk is based on a paper whose abstract is the following:

This paper considers the basic PULL model of communication, in which in each round, each agent extracts information from few randomly chosen agents. We seek to identify the smallest amount of information revealed in each interaction (message size) that nevertheless allows for efficient and robust computations of fundamental information dissemination tasks. We focus on the Majority Bit Dissemination problem that considers a population of n agents, with a designated subset of source agents. Each source agent holds an input bit and each agent holds an output bit. The goal is to let all agents converge their output bits on the most frequent input bit of the sources (the majority bit). Note that the particular case of a single source agent corresponds to the classical problem of Broadcast (also termed Rumor Spreading). We concentrate on the severe fault-tolerant context of self-stabilization, in which a correct configuration must be reached eventually, despite all agents starting the execution with arbitrary initial states. In particular, the specification of who is a source and what is its initial input bit may be set by an adversary.

We first design a general compiler which can essentially transform any self-stabilizing algorithm with a certain property (called “the bitwise-independence property”) that uses l-bit messages into one that uses only (log l)-bit messages, while paying only a small penalty in the running time. By applying this compiler recursively we then obtain a self-stabilizing Clock Synchronization protocol, in which agents synchronize their clocks modulo some given integer T, within O(log n log T) rounds w.h.p., and using messages that contain 3 bits only. We then employ the new Clock Synchronization tool to obtain a self-stabilizing majority broadcast protocol which converges in O(log n) time, w.h.p., on every initial configuration, provided that the ratio of sources supporting the minority opinion is bounded away from half. Moreover, this protocol also uses only 3 bits per interaction.

Based on joint work with A. Korman and E. Natale.
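
A toy simulation of the bare PULL model for plain rumor spreading (my own sketch, much simpler than the paper's self-stabilizing protocols) shows where the O(log n) round complexity comes from: as long as few agents are informed, their number roughly doubles every round.

import System.Random (randomRIO)

-- One synchronous round: every uninformed agent pulls one uniformly random
-- agent (from the previous round's state) and copies its bit if informed.
pullRound :: [Bool] -> IO [Bool]
pullRound informed = mapM pull informed
  where
    n = length informed
    pull True  = return True
    pull False = do
      i <- randomRIO (0, n - 1)
      return (informed !! i)

-- Iterate rounds until everyone is informed; return the number of rounds.
spread :: [Bool] -> Int -> IO Int
spread informed r
  | and informed = return r
  | otherwise    = do
      next <- pullRound informed
      spread next (r + 1)

main :: IO ()
main = do
  let n = 1000
  rounds <- spread (True : replicate (n - 1) False) 0
  putStrLn ("All " ++ show n ++ " agents informed after "
            ++ show rounds ++ " rounds")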

Séminaire des membres non-permanents
Mercredi 25 janvier 2017, 11 heures, Salle 3052
Thibaut Girka (PPS team) Oracle-based Differential Operational Semantics (or Explaining program differences with programs)

We present a formal framework to characterize differences between deterministic programs in terms of their small-step semantics. This language-agnostic framework defines the notion of /difference languages/ in which /oracles/—programs that relate small-step executions of pairs of programs written in the same language—can be written, reasoned about and composed.

We illustrate this framework by instantiating it on a toy imperative language and by presenting several /difference languages/ ranging from trivial equivalence-preserving syntactic transformations to characterized semantic differences. Through those examples, we will present the basis of our framework, show how to use it to relate syntactic changes with their effect on semantics, how one can abstract away from the small-step semantics presentation, and discuss the composability of oracles.

Séminaire des membres non-permanents
Mercredi 11 janvier 2017, 11 heures, Salle 3052
Fabian Reiter (Automata and applications Group) Asynchronous Distributed Automata

The goal of this talk is to raise interest in the connections between distributed computing and formal logic. I will illustrate this relatively unexplored area of research by presenting an equivalence result between two very specific systems. The distributed computing side will be represented by a network of identical finite-state machines that communicate in an asynchronous manner, while the formal logic side will be represented by a small fragment of least fixpoint logic (more specifically, a fragment of the modal mu-calculus).