IRIF, the Research Institute on the Foundations of Computer Science, is a research laboratory of CNRS and Université Paris-Diderot, also hosting two INRIA project-teams.

The scientific objectives of IRIF are at the core of computer science and, in particular, focus on the design, analysis, proof, and verification of algorithms, programs, and programming languages. They are built upon fundamental research activities developed at IRIF on combinatorics, graphs, logic, automata, types, semantics, and algebra.

IRIF hosts about 200 people. Six of its members have been distinguished by the European Research Council (ERC), three are members of the Institut Universitaire de France (IUF), and two are members of the Academia Europæa.



IRIF co-organizes the Workshop on Emergent Algorithms and Network Dynamics (Wendy), which will take place at the Institut Henri Poincaré in Paris on October 10-11, 2018. Registration is free but mandatory.


On October 2, IRIF organizes the next annual workshop of the French working group on Complexity and Algorithms (CoA). This year, the workshop will consist of a series of introductory and survey talks on various hot topics in algorithms, including blockchains, machine learning, sum-of-squares methods, etc.


The workshop of the EU QuantAlgo project will be held at IRIF in Paris on September 25-28, 2018. It is a joint workshop with the IRIF-IQC cooperation project between CNRS and the University of Waterloo. The QuantAlgo project aims to combine research on the fundamentals of quantum algorithms with the development of new applications.


IRIF organizes the 59th IEEE Symposium on Foundations of Computer Science (FOCS 2018) on October 7-9, 2018. FOCS is a leading annual conference in theoretical computer science and has served for nearly 60 years as a venue for announcing major scientific advances in the field. The list of accepted papers is now available, and registration is now open (deadline for the early rate: September 9, 2018).


IRIF organizes the 7th FILOFOCS (French-Israeli Laboratory on Foundations of Computer Science) workshop, which will be held at the Institut Henri Poincaré (IHP) on October 3-5, 2018. A preliminary list of speakers is now available, and registration (mandatory, but free) is now open.

Michel Habib

On the occasion of Michel Habib's retirement, and in celebration of his achievements, IRIF organizes a two-day conference, “40 années d'algorithmique de graphes” (40 years of graph algorithms), on October 11-12 in Amphi Turing (Sophie Germain building, Univ. Paris Diderot). Registration is free but mandatory.

Tuesday, September 18, 2018, 10:00 am, Room 3052
Paul-André Melliès (IRIF) Template games: a model of differential linear logic

Game semantics is the art of interpreting formulas (types) as games and proofs (programs) as strategies interacting in space and time with their environment. In order to reflect the interactive behaviour of programs, strategies are required to follow specific scheduling policies. Typically, in the case of a purely sequential programming language, the program (Player) and its environment (Opponent) play one after the other, in a strictly alternating way. In the case of a concurrent language, on the other hand, Player and Opponent are allowed to play several moves in a row, in a non-alternating way. In both cases, the scheduling policy is designed very carefully to ensure that the strategies synchronise properly and compose well when plugged together. A longstanding conceptual problem has been to understand when and why a given scheduling policy works and is compositional in that sense. In this talk, I will introduce the notion of template game and exhibit a number of simple and fundamental combinatorial properties which ensure that a given scheduling policy defines a monoidal closed bicategory of games, strategies, and simulations. The notion of template game will be illustrated by constructing two game models of linear logic with different flavours (alternating and non-alternating) using the same categorical combinatorics, performed in the category of small categories.

Tuesday, September 18, 2018, 11:00 am, Room 3052
Jean-Simon Lemay (University of Oxford) Differential Categories Revisited

Differential categories were introduced to provide a minimal categorical semantics for differential linear logic. However, there exist three approaches to axiomatizing the derivative, via deriving transformations, coderelictions, and creation maps, which were long thought to be distinct notions. Recently, Blute, Cockett, Seely, and I revisited the axioms of a differential category and showed that, for categorical models of differential linear logic, the three approaches are equivalent. Thus, there is only one notion of differentiation.


Thesis defenses
Tuesday, September 18, 2018, 2:00 pm, Room 580F (Halle aux Farines)
Guillaume Claret (IRIF) Program in Coq

In this thesis, we develop new techniques to conveniently write formally verified programs. To do so, we study the use of Coq as a programming language in different settings. Since Coq is a purely functional language, we mainly focus on the representation and specification of impure effects, such as exceptions, mutable references, input-output, and concurrency.

First, we work on two preliminary projects that helped us understand the challenges of programming in Coq. The first project, Cybele, is a Coq plugin for writing efficient proofs by reflection with effects: we compile and execute the impure effects in OCaml to generate a prophecy, a kind of certificate, and then interpret the effects in Coq using the prophecy. The second project, the compiler CoqOfOCaml, imports OCaml programs with effects into Coq, using an effect-inference system.

Next, we describe different generic and composable representations of impure effects in Coq. Breakable computations combine the standard exception and mutable-reference effects with a pause mechanism that makes evaluation steps explicit, in order to represent the concurrent evaluation of two terms. By implementing the Pluto web server in Coq, we realized that the most important effects to support are asynchronous input-output: these effects are ubiquitous and cannot be encoded in a purely functional manner. We therefore designed asynchronous computations as a first way to represent and compile programs with events and handlers in Coq.
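The thesis works inside Coq, but the core idea of computations with explicit pause points that can be interleaved is easy to sketch in any language with generators. The following Python toy (all names are illustrative, not taken from the thesis) alternates two step-wise computations at their explicit pauses:

```python
# Sketch: "breakable" computations as generators that yield at explicit
# pause points, so two computations can be interleaved deterministically.
def interleave(comp_a, comp_b):
    """Run two step-wise computations by alternating at their pause points."""
    queue = [comp_a, comp_b]
    trace = []
    while queue:
        comp = queue.pop(0)
        try:
            trace.append(next(comp))   # run until the next explicit pause
            queue.append(comp)         # reschedule it after the pause
        except StopIteration:
            pass                       # this computation has finished
    return trace

def worker(name, steps):
    for i in range(steps):
        yield f"{name}:{i}"            # each yield is one evaluation step

print(interleave(worker("A", 3), worker("B", 2)))
# ['A:0', 'B:0', 'A:1', 'B:1', 'A:2']
```

The pause mechanism plays the same role as the explicit evaluation steps of breakable computations: concurrency becomes a deterministic, inspectable schedule rather than an external side effect.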

Then, we study techniques to prove properties of programs with effects. We start with the verification of the blog system ChickBlog, written in the language of interactive computations. This blog runs one worker with synchronous input-output per client. We verify the blog using the method of specification by use cases, adapting this technique to type theory by expressing a use case as a well-typed co-program over the program being verified. Thanks to this formalism, we can present a use case as a symbolic test program and debug it symbolically, step by step, in the interactive proof mode of Coq. To our knowledge, this is the first such adaptation of use-case specifications to type theory. We believe that formal specification by use cases is one of the keys to verifying effectful programs, since the use-case method has proved convenient for expressing (informal) specifications in the software industry. We extend our formalism to concurrent and potentially non-terminating programs with the language of concurrent computations. Besides the use-case method, we design a model checker to verify that concurrent computations are deadlock-free, by compiling parallel composition to the non-deterministic choice operator.

Thesis defenses
Wednesday, September 19, 2018, 2:00 pm, Room 3052, Sophie Germain building
Laurent Feuilloley (IRIF) Local certification in distributed computing: error-sensitivity, uniformity, redundancy, and interactivity

This thesis deals with local certification, a central topic in distributed decision, a subfield of distributed computing. Distributed decision consists, for the nodes of a network, in deciding in a distributed fashion whether the network is in a correct configuration or not, with respect to a given predicate. This decision is said to be local because the nodes of the network can communicate only with their neighbours. After communicating, each node outputs a decision, stating whether the network is locally correct, that is, correct given the partial information gathered so far. The network is declared globally correct if it is declared locally correct by every node.

Because of this locality constraint, few predicates can be verified in this way. Local certification is a way around this difficulty that makes it possible to decide every predicate. It is a mechanism that consists in labelling the nodes of the network with so-called certificates, which can be verified locally by a distributed algorithm. A local certification scheme is correct if only networks in a correct configuration can be certified. The idea of local certification is not only appealing from a theoretical point of view, as a form of distributed non-determinism; it is above all a very useful concept for the study of fault-tolerant algorithms, where a key step consists in checking the state of the network based on information stored at the nodes.
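As a concrete illustration, here is a classic proof-labelling example sketched in Python (for intuition only, not taken from the thesis): to certify distances to a designated root, each node receives its distance as a certificate and checks a purely local condition against its neighbours' certificates; a forged certificate is rejected by at least one node.

```python
def locally_accepts(v, root, cert, adj):
    """Local verifier run at node v: it sees only its own certificate
    and those of its neighbours."""
    if v == root:
        return cert[v] == 0
    neigh = [cert[u] for u in adj[v]]
    # A non-root node accepts iff some neighbour is exactly one step
    # closer to the root and no neighbour is more than one step away.
    return (cert[v] >= 1
            and min(neigh) == cert[v] - 1
            and max(abs(c - cert[v]) for c in neigh) <= 1)

def network_accepts(root, cert, adj):
    """The network accepts globally iff every node accepts locally."""
    return all(locally_accepts(v, root, cert, adj) for v in adj)

# A 4-cycle 0-1-2-3-0 rooted at node 0, with correct BFS distances:
adj = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
assert network_accepts(0, {0: 0, 1: 1, 2: 2, 3: 1}, adj)
# A forged certificate is caught locally by at least one node:
assert not network_accepts(0, {0: 0, 1: 5, 2: 2, 3: 1}, adj)
```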

This thesis focuses on four aspects of local certification: error-sensitivity, uniformity, redundancy, and interactivity. The study of these four topics is motivated by an essential question: how can the resources needed for certification be reduced, and/or fault-tolerance be improved? To address this question, one needs to understand the certification mechanism in depth. With this in mind, this thesis provides answers to the following questions. How redundant do the certificates have to be to ensure correct certification? Are classical certification schemes robust to a change in the correctness condition? Does introducing interactivity into the process change the complexity of certification?

Keywords: network distributed computing, distributed decision, local certification, proof-labelling schemes, fault tolerance.

Special talks
Thursday, September 20, 2018, 10:30 am, Amphi Turing (Sophie Germain building)
Leonid Libkin (University of Edinburgh) Certain Answers Meet Zero-One Laws

The talk will start with presenting a brief overview of querying incomplete information in databases and the main computational challenges it presents. Querying incomplete data invariably relies on the very coarse classification of query answers into those that are certain and those that are not. Such a classification is often very costly, and we propose to refine it by measuring how close an answer is to certainty.

This measure is defined as the probability that the query is true under a random interpretation of missing information in a database. Since there are infinitely many such interpretations, to pick one at random we adopt the approach used in the study of asymptotic properties and 0-1 laws for logical sentences and define the measure as the limit of a sequence. We show that in the standard model of missing data, the 0-1 law is observed: this limit always exists and can be only 0 or 1 for a very large class of queries. Thus, query answers are either almost certainly true, or almost certainly false, and this classification behaves very well computationally. When databases satisfy constraints, the measure is defined as the conditional probability of the query being true if the constraints are true. This can now be an arbitrary rational number, which is always computable. Another refinement of the notion of certainty views answers with a larger set of interpretations that make them true as better ones. We pinpoint the exact complexity of finding best answers for first-order queries.
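As a toy illustration of the measure (a hedged sketch, not the paper's formal development): interpret a null as a uniformly random value in a domain of size n and let n grow; the probability that the query holds then converges, here to 0 or 1, matching the 0-1 law.

```python
from fractions import Fraction

# Database: table R holds one unknown value (a "null").
# Query q1: "the null equals 1".  Query q2: "the null differs from 1".
# Under a random interpretation of the null over the domain {1, ..., n}:
def prob_equal_one(n):
    return Fraction(1, n)            # exactly one interpretation satisfies q1

def prob_not_one(n):
    return 1 - prob_equal_one(n)     # all other interpretations satisfy q2

# As n grows, the measures converge to 0 and 1 respectively:
for n in (10, 1000, 10**6):
    print(float(prob_equal_one(n)), float(prob_not_one(n)))
```

So q1 is "almost certainly false" and q2 "almost certainly true"; the talk's refinement assigns such limits rather than the coarse certain/uncertain verdict.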

Enumerative and analytic combinatorics
Thursday, September 20, 2018, 11:00 am, Institut Poincaré, Room 314
Arnaud De Mesmay, Frédéric Jouhet, Bénédicte Haas: Séminaire Flajolet

Thesis defenses
Thursday, September 20, 2018, 10:00 am, Room 1828 (Olympe de Gouges)
Matthieu Boutier: Source-sensitive routing

In next-hop routing, the routing paradigm used in the global Internet, each router chooses the next hop of every packet according to its destination address. Source-sensitive routing is a compatible extension of next-hop routing in which the choice of the next hop depends on the packet's source address in addition to its destination address. In this thesis, we show that source-sensitive routing is well suited to routing multihomed networks with multiple addresses, that existing distance-vector routing protocols can be extended to it in a compatible way, and that this routing paradigm conveniently gives hosts more flexibility. We first show that some systems do not order source-sensitive entries correctly in their routing tables, and we define an algorithm, suited to routing protocols, that remedies this. We then show how to extend distance-vector protocols to source-sensitive routing in a compatible way. We validate our approach by designing an extension of an existing protocol (Babel), by producing the first complete implementation of a source-sensitive protocol, and by using this protocol to route a multihomed network. Finally, we show that source-sensitive routing offers multipath capabilities to the upper layers of hosts. We verify that it integrates with existing technologies (MPTCP), and we design optimization techniques for lightweight applications. We evaluate these techniques after implementing them in an existing application (mosh).
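To illustrate the ordering problem mentioned above, here is a minimal sketch (not the thesis's algorithm; the table format and next-hop names are invented) of a destination-first lookup over (destination prefix, source prefix) routing entries: the longest destination prefix wins, and the source prefix only breaks ties.

```python
import ipaddress

def lookup(table, dst, src):
    """Pick the next hop for a packet, matching both its destination
    and source addresses, with destination-first ordering."""
    dst, src = ipaddress.ip_address(dst), ipaddress.ip_address(src)
    best, next_hop = None, None
    for d_pfx, s_pfx, nh in table:
        d_pfx = ipaddress.ip_network(d_pfx)
        s_pfx = ipaddress.ip_network(s_pfx)
        if dst in d_pfx and src in s_pfx:
            # Destination specificity dominates; source breaks ties.
            key = (d_pfx.prefixlen, s_pfx.prefixlen)
            if best is None or key > best:
                best, next_hop = key, nh
    return next_hop

table = [
    ("0.0.0.0/0",       "0.0.0.0/0",     "isp1"),  # default route
    ("0.0.0.0/0",       "192.0.2.0/24",  "isp2"),  # default, for ISP2-sourced packets
    ("198.51.100.0/24", "0.0.0.0/0",     "lan"),   # more specific destination
]
assert lookup(table, "8.8.8.8", "192.0.2.7") == "isp2"
assert lookup(table, "8.8.8.8", "10.0.0.1") == "isp1"
assert lookup(table, "198.51.100.9", "192.0.2.7") == "lan"
```

A system that orders entries source-first would send the third packet to "isp2" instead of "lan", which is the kind of misordering the thesis identifies and corrects.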

Monday, September 24, 2018, 11:10 am, Room 1007
Adam Shimi (IRIT - ENSEEIHT) Characterizing Asynchronous Message-Passing Models Through Rounds

One great difficulty in designing and specifying distributed algorithms is the sheer number of models, which differ in subtle and often qualitative ways: synchrony, kinds of faults, number of faults… In the message-passing context, one solution is to restrict communication to proceed in rounds; a great variety of models can then be captured in the Heard-Of model, with predicates on the communication graph at each round. However, this raises another issue: how can a given model be characterized by such a predicate? The answer depends on how rounds can be implemented in the model. This is straightforward in synchronous models, thanks to the upper bound on communication delay. Asynchronous models, on the other hand, allow unbounded message delays, which makes the implementation of rounds dependent on the specific message-passing model.

I will present our formalization of this characterization for asynchronous models. Specifically, we introduce Delivered collections: the collection of all messages delivered at each round, whether late or not. Defining predicates on Delivered collections then allows us to capture message-passing models at the same level of abstraction as Heard-Of predicates. The question then becomes: which Heard-Of predicates can be generated by a given Delivered predicate?

I will provide an answer by considering all possible schedules of message deliveries from the Delivered collections and of round changes by the processes. Strategies of processes then constrain these schedules by specifying when processes may change rounds; the strategies ensuring that no process is ever blocked forever generate one Heard-Of collection per run, that is, a Heard-Of predicate. Finally, we use these strategies to find a characterizing Heard-Of predicate through a dominance relation on strategies: a dominant strategy for a Delivered predicate implements the most constrained Heard-Of predicate possible. This approach yields both the dominant Heard-Of predicates for classical asynchronous models and the existence, for every Delivered predicate, of a strategy dominating large classes of strategies. On the whole, these results confirm the power of this formalization and show that characterizing asynchronous models through rounds is a worthwhile pursuit.

This is joint work with Aurélie Hurault and Philippe Quéinnec.
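The Delivered/Heard-Of distinction above can be illustrated with a toy sketch (illustrative only, not the paper's formalism): all of a round's messages may eventually be delivered, but a process whose strategy is to change rounds after hearing from k peers only "hears" the first k of them.

```python
# Toy sketch: one process, one round, messages from n peers arriving
# in some schedule-determined order. The strategy "change rounds after
# hearing from k peers" turns a Delivered collection into a Heard-Of set.
def heard_of(delivery_order, k):
    """delivery_order: senders of this round's messages, in arrival order.
    Returns the set of peers heard before the process changes rounds."""
    return set(delivery_order[:k])   # later deliveries arrive "late"

# 4 peers, tolerating f = 1 crash: wait for n - f = 3 messages per round.
assert heard_of(["p1", "p3", "p2", "p4"], 3) == {"p1", "p2", "p3"}
# p4's message is in the Delivered collection for the round, but not
# in the Heard-Of set: it was delivered after the round change.
```

A strategy that waits longer yields a larger (more constrained) Heard-Of predicate, which is the intuition behind the dominance relation on strategies.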