Institut de Recherche en Informatique Fondamentale

Research Institute for Foundations of Computer Science

IRIF is a research laboratory jointly operated by the CNRS and Université Paris Diderot, created on January 1, 2016 by the merger of the two research units LIAFA and PPS.

The scientific objectives of the laboratory are at the core of computer science, focusing on: the mathematical foundations of computer science; computational models and proofs; models, algorithms and system design.


Upcoming seminars


Tuesday, October 17, 2017 · 2:00 pm · Room 3052

Algorithmes et complexité (joint session with the Combinatoire énumérative et analytique, Graphes, and Systèmes complexes seminars) · Claire Mathieu (DI - ENS) · Online k-compaction

Given, at each time t = 1, 2, …, n, a new file of length l(t) and a read rate r(t), an online k-compaction algorithm must maintain a collection of at most k files, choosing (at each time t, without knowing future inputs) some of the files to merge into one, thereby incurring a merge cost equal to the total length of the merged files and a read cost equal to the read rate r(t) times the number of files present at time t. The goal is to minimize the total cost over time. K-compaction algorithms are a key component of log-structured merge trees, the file-based data structure underlying NoSQL databases such as Accumulo, Bigtable, Cassandra, HBase, and others. We initiate the theoretical study of k-compaction algorithms. We formalize the problem, consider worst-case, average-case and competitive analysis (per-instance optimality), and propose new algorithms that are optimal according to these metrics.

This is joint work with Carl Staelin, Neal E. Young, and Arman Yousefi.
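The cost model above can be sketched in a few lines. The merge policy below (merge the two smallest files whenever the k-file budget is exceeded) is purely illustrative and is not one of the optimal algorithms from the talk:

```python
def k_compaction_cost(lengths, rates, k):
    """Simulate a naive online k-compaction policy and return its total cost.

    At each time t, a new file of length lengths[t] arrives; whenever more
    than k files are present, the two smallest are merged (an arbitrary
    illustrative policy). Costs follow the model in the abstract:
      merge cost = total length of the files merged,
      read cost  = rates[t] * number of files present at time t.
    """
    files = []       # lengths of the files currently kept
    total = 0
    for l, r in zip(lengths, rates):
        files.append(l)            # the new file arrives at time t
        while len(files) > k:      # enforce the budget of at most k files
            files.sort()
            a, b = files.pop(0), files.pop(0)
            total += a + b         # merge cost: total length merged
            files.append(a + b)    # the merged files become one file
        total += r * len(files)    # read cost at time t
    return total
```

An online algorithm must commit to its merges at each step without seeing future lengths or read rates, which is what makes competitive analysis the natural yardstick here.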




Wednesday, October 18, 2017 · 2:30 pm · Room 3052

Analyse et conception de systèmes · Nicolas Behr (IRIF) · The stochastic mechanics of graph rewriting via the rule algebra framework - an introduction

Reporting on joint work with V. Danos and I. Garnier, I will present a novel formulation of graph rewriting that makes it possible to phrase the concept in a fashion akin to statistical physics. The key ingredient of this approach is the rule algebra framework, in which rewriting rules are interpreted as a type of diagram endowed with a notion of sequential composition. The focus of this talk will be the stochastic mechanics applications: once an associative algebra of rewriting rules is available in the form of the rule algebras, the full theory of continuous-time Markov chains is at one's disposal. A number of explicit examples for computing in this formalism will be presented.
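The rule algebra construction itself is beyond a code snippet, but the continuous-time Markov chain machinery it unlocks can be illustrated on a toy birth-death system simulated with Gillespie's algorithm. The two "rules" and their rates here are hypothetical placeholders, not the Behr-Danos-Garnier formalism:

```python
import random

def gillespie_birth_death(birth, death, x0, t_max, rng=None):
    """Gillespie simulation of a birth-death CTMC, a toy stand-in for the
    stochastic dynamics a set of rewriting rules induces.

    Each 'rule' fires with a propensity equal to its rate times its number
    of matches: the birth rule always has one match, the death rule has one
    match per existing item. Returns the trajectory as (time, count) pairs.
    """
    rng = rng or random.Random(0)
    t, x = 0.0, x0
    trace = [(t, x)]
    while t < t_max:
        propensities = [birth, death * x]      # one entry per rule
        total = sum(propensities)
        if total == 0:
            break
        t += rng.expovariate(total)            # waiting time to next event
        # pick which rule fires, proportionally to its propensity
        x += 1 if rng.random() * total < birth else -1
        trace.append((t, x))
    return trace
```

In the talk's setting, the propensities would come from counting matches of each rewriting rule in the current graph rather than from a scalar population count.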

Thursday, October 19, 2017 · 10:30 am · Room 3052

Preuves, programmes et systèmes · Martin Hyland (DPMMS, University of Cambridge) · Understanding Computation in Game Semantics


Friday, October 20, 2017 · 2:30 pm · Room 3058

Automates · Sylvain Perifel (IRIF) · TBA

TBA

Monday, October 23, 2017 · 11:00 am · Room 1007

Vérification · Raphaël Cauderlier (IRIF - Université Paris Diderot) · FoCaLiZe and Dedukti to the Rescue for Proof Interoperability

Numerous contributions have been made in recent years to let users exchange formal proofs between different provers. The main proposals consist of ad hoc pointwise translations, e.g. between HOL Light and Isabelle in the Flyspeck project, or the use of more or less complete certificates. We propose a methodology for combining proofs coming from different theorem provers. It relies on the Dedukti logical framework as a common formalism into which proofs can be translated and combined. To relate the independently developed mathematical libraries used in proof assistants, we rely on the structuring features offered by FoCaLiZe, in particular parameterized modules and inheritance, to build a formal library of transfer theorems called MathTransfer. We finally illustrate the methodology on the Sieve of Eratosthenes, which we prove correct using HOL and Coq in combination.
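For reference, the algorithm whose correctness the talk establishes is the classic sieve. A plain Python rendering is below; the talk's actual development lives in HOL, Coq, and Dedukti, not in executable Python:

```python
def eratosthenes(n):
    """Sieve of Eratosthenes: return the list of all primes <= n.

    Marks the multiples of each prime p starting from p*p; any composite
    m <= n has a prime factor <= sqrt(n), so it gets marked.
    """
    is_prime = [True] * (n + 1)
    is_prime[0:2] = [False, False]        # 0 and 1 are not prime
    for p in range(2, int(n ** 0.5) + 1):
        if is_prime[p]:
            for m in range(p * p, n + 1, p):
                is_prime[m] = False       # m is a proper multiple of p
    return [p for p in range(2, n + 1) if is_prime[p]]
```

The correctness statement one would transfer between provers is exactly the specification of this function: the result lists, in order, every prime up to n and nothing else.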


Events


Thursday, October 12 and Friday, October 13, 2017 · Room 3052

PPS start-of-year days


Monday, November 6, 2017 · Amphi Turing

Welcome day for new IRIF members