
Journées d'Informatique Fondamentale de Paris Diderot

April 22-26, 2013

Université Paris Diderot - Paris 7

Amphithéâtre Buffon, 15 rue Hélène Brion, 75013 Paris

Programme with abstracts

Monday, April 22, 2013
13:30 - 14:00  opening by Thomas Ehrhard and Pierre Fraigniaud
14:00 - 14:50 Glynn Winskel (Cambridge) Distributed strategies
It has been a strange feature in the history of semantics of computation that the more semantics has attempted to capture operational features, the more it has come to share with the seemingly much more specialised area of distributed computation. For instance, the pioneering work of Kahn and Plotkin on formalising the notion of sequentially-computable function was based on concrete data structures, expressed in terms of events, their causal dependencies and conflicts. In this talk I will advance distributed strategies and games as a promising replacement for continuous functions and domains in the foundations of semantics. At the crossroads of algorithmics, logic and semantics, strategies and games lie in the intersection of the interests of LIAFA and PPS. While the talk will have a semantics bias, I will try to set issues in a historical perspective, motivate the definition of distributed strategies and sketch ideas for their range of application.
15:00 - 15:50 Pavol Hell (Simon Fraser University) Complexity of list homomorphisms
I will survey the recent progress on understanding the time and space complexity of this combinatorial problem.
16:00 - 16:30 coffee break
16:30 - 17:20 Vincent Danos (Edinburgh) Thermodynamic graph rewriting
We develop a 'thermodynamic' approach to stochastic graph-rewriting. The ingredients are a finite set of reversible graph-rewriting rules G, a finite set of subgraphs P, and an energy cost function E on P. The idea is that G defines the qualitative dynamics by showing which transformations are possible, while P and E assign a probability π to any graph reachable under G. Given G and P, we construct a finite set of rules GP which has the same qualitative transition system as G and, when equipped with suitable rates, defines a continuous-time Markov chain of which π is the unique fixed point. The construction relies on a 'growth policy' technique for quantitative rule refinement. Nothing is assumed of G or P, and only the choice of rates on the rule set GP depends on E. The 'division of labour' between G and P leads to intuitive and concise descriptions for realistic models. It also guarantees thermodynamical consistency, otherwise known to be undecidable, which is important for some applications. Finally, it leads to parsimonious parameterizations of models, again an important point in some applications.
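As a toy illustration of the thermodynamic recipe (this is my sketch, not the construction from the talk: the state space, the energy function, and the choice of rates are all illustrative), one can equip any reversible move set with rates satisfying detailed balance, so that the resulting continuous-time Markov chain has the Gibbs distribution π(s) proportional to exp(-E(s)) as its unique fixed point:

    import math, random
    from collections import defaultdict

    # Toy state space: bit strings stand in for graphs, single bit flips
    # for reversible rewrite rules; energy() is a hypothetical cost.
    def neighbours(s):
        for i in range(len(s)):
            yield s[:i] + ('1' if s[i] == '0' else '0') + s[i+1:]

    def energy(s):
        return s.count('1')

    def rate(s, t):
        # Chosen so that pi(s)*rate(s,t) == pi(t)*rate(t,s) for
        # pi(s) ~ exp(-energy(s)): detailed balance makes pi stationary.
        return math.exp((energy(s) - energy(t)) / 2.0)

    def simulate(start, t_max):
        """Gillespie simulation; returns the fraction of time in each state."""
        occupancy, s, t = defaultdict(float), start, 0.0
        while t < t_max:
            moves = [(u, rate(s, u)) for u in neighbours(s)]
            total = sum(r for _, r in moves)
            dt = random.expovariate(total)        # exponential holding time
            occupancy[s] += min(dt, t_max - t)
            t += dt
            x = random.uniform(0.0, total)        # next state ~ its rate
            for u, r in moves:
                x -= r
                if x <= 0.0:
                    s = u
                    break
        return {u: d / t_max for u, d in occupancy.items()}

    occ = simulate('000', 20000.0)
    z = sum(math.exp(-energy(u)) for u in occ)
    for u in sorted(occ):                         # empirical vs Gibbs pi
        print(u, round(occ[u], 3), round(math.exp(-energy(u)) / z, 3))

The point mirrors the abstract's division of labour: the move set fixes which transitions exist at all, while the energy function alone fixes the stationary probabilities.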
Tuesday, April 23, 2013
  9:30 - 10:00 coffee
10:00 - 10:50 Thomas A. Henzinger (IST Austria) Quantitative Reactive Modeling
Formal verification aims to improve the quality of hardware and software by detecting errors before they do harm. At the basis of formal verification lies the logical notion of correctness, which purports to capture whether or not a circuit or program behaves as desired. We suggest that the boolean partition into correct and incorrect systems falls short of the practical need to assess the behavior of hardware and software in a more nuanced fashion against multiple criteria such as function, performance, cost, reliability, and robustness. For this purpose, we propose quantitative fitness measures for reactive models of concurrent and embedded systems. Besides measuring the "fit" between a system and a requirement numerically, the goal of the ERC project QUAREM (Quantitative Reactive Modeling) is to obtain quantitative generalizations of the paradigms on which the success of qualitative reactive modeling rests, such as compositionality, abstraction, model checking, and synthesis.
11:00 - 11:50 Rajeev Alur (U Penn) Regular Functions and Cost Register Automata
12:00 - 14:00 lunch
14:00 - 14:50 Alexandre Miquel (ENS Lyon) From forcing to classical realizability
The technique of "forcing" was introduced by Cohen in 1963 to prove the independence of the Continuum hypothesis, thus giving an answer to Hilbert's first problem. Since then, this technique has been used successfully to prove the independence of other axioms (especially in the theory of large cardinals), and it now constitutes a standard item in the toolbox of model theorists. In this talk, I will explain why the theory of classical realizability developed by Krivine can be seen as a generalization of Cohen forcing, and can even be considered as its non-commutative form. For that, I will present and compare the two techniques - forcing and classical realizability - and show how they can be combined in the more general framework of realizability algebras. I will also discuss the intriguing perspectives of classical realizability from a model-theoretic point of view.
15:00 - 15:50 Bruno Salvy (INRIA) Newton iteration in computer algebra and combinatorics
Classically, Newton's iteration is used in numerical computations and exhibits a very fast (quadratic) convergence close to the root under suitable regularity conditions. This idea has been generalized in many directions over the centuries. In order to solve a nonlinear problem, the process consists in solving a sequence of linear ones, each computed from the previous iteration. In the case of power series, quadratic convergence then corresponds to doubling the number of correct coefficients at each stage. From this point of view, the quadratic convergence amounts to a divide-and-conquer approach where a problem of size N is solved from its solution at size N/2. This idea leads to a great variety of (quasi-)optimal algorithms in computer algebra. Another, more surprising, field of application is combinatorics, where families of recursive structures can be approximated by a combinatorial variant of Newton's iteration. From there, the enumeration problem (counting the number of objects of size up to N) is solved in quasi-optimal complexity again. Next, this iteration transfers to a numerical scheme that converges unconditionally to the values of the generating series inside their disk of convergence. These results provide important subroutines in random generation.
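A minimal concrete instance of the doubling phenomenon (my illustration, not code from the talk): computing the reciprocal of a power series with the Newton step g ← g·(2 − f·g), which doubles the number of correct coefficients at each iteration.

    def mul_trunc(a, b, n):
        """Product of two coefficient lists, truncated to n terms."""
        out = [0] * n
        for i, ai in enumerate(a[:n]):
            if ai:
                for j, bj in enumerate(b[:n - i]):
                    out[i + j] += ai * bj
        return out

    def series_inverse(f, n):
        """First n coefficients of 1/f. Restricted to f[0] == 1 so that
        everything stays in integer arithmetic."""
        assert f[0] == 1
        g, prec = [1], 1
        while prec < n:
            prec = min(2 * prec, n)                # double the precision
            fg = mul_trunc(f, g, prec)
            correction = [2 - fg[0]] + [-c for c in fg[1:]]
            g = mul_trunc(g, correction, prec)     # g <- g * (2 - f*g)
        return g

    # 1/(1 - x - x^2) is the generating series of the Fibonacci numbers:
    print(series_inverse([1, -1, -1], 10))
    # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]

With naive multiplication the last step dominates; with fast multiplication the total cost is a constant multiple of one full-size product, which is the quasi-optimality alluded to above.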
16:00 - 16:30 coffee break
16:30 - 17:20 Justin Salez (LPMA, Paris 7) Spectral theory of sparse random graphs
Wednesday, April 24, 2013
  9:30 - 10:00 coffee
10:00 - 10:50 Hagit Attiya (Technion) Indistinguishability: Friend and Foe of Concurrent Data Structures
Uncertainty about the global state is a major obstacle for achieving synchronization in concurrent systems. Formally, uncertainty is captured by showing that a process cannot distinguish two different global states. Indistinguishability arguments play a key role in many lower bounds for concurrent data structures; one of them, on the need for memory barriers, will be presented in this talk. Surprisingly, however, indistinguishability can also help in the verification of concurrent data structures, as demonstrated by a reduction theorem we will describe, or in understanding their specification, as we will show in the context of transactional memory.
11:00 - 11:50 Kathrin Bringmann (Cologne) Ramanujan and asymptotic formulas
In my talk I will speak about several asymptotic results related to the famous Indian genius Ramanujan, who unfortunately died at the very young age of 32. I will consider various generating functions and explain how modularity (in the widest sense) helps to make asymptotic statements. In particular I will focus on four classes of functions: modular forms, mock modular forms, mixed mock modular forms and non-modular forms.
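To fix ideas, here is the prototypical asymptotic statement of this kind (a classical example of my own choosing: the Hardy-Ramanujan asymptotic for the partition function, whose generating function is modular up to a simple factor):

    \[
      \sum_{n \ge 0} p(n)\, q^n \;=\; \prod_{k \ge 1} \frac{1}{1 - q^k},
      \qquad
      p(n) \;\sim\; \frac{1}{4n\sqrt{3}}\; e^{\pi \sqrt{2n/3}}
      \quad (n \to \infty).
    \]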
12:00 - 14:00 lunch
14:00 - 14:15 opening of the CNRS silver medal session for Pierre Fraigniaud by Michel Bidoit (deputy scientific director of the CNRS institute INS2I)
14:15 - 15:05 David Peleg (Weizmann Institute) The Role of Randomness in Local Distributed Decision
15:15 - 15:45 coffee break
15:45 - 16:35 Sergio Rajsbaum (UNAM, Mexico) Computability in Distributed Computing and a Locality Notion - The missing topic in Turing's Centenary Celebrations
The equivalence between single-tape and multi-tape Turing machines is often interpreted as implying that sequential computing and distributed computing may differ in questions of efficiency, but not computability. It has been clear from early on that the addition of non-determinism or parallelism to Turing machines does not change the fundamental notion of Turing-computability.
However, a discussion of a more basic issue has not received much attention in the Turing Centenary celebrations. In the theory of Distributed Computing, we study computability aspects in the presence of failures and timing uncertainties. Turing computability is concerned with functions. A Turing machine starts with a single input, computes for a finite duration, and halts with a single output. In distributed computing, the analog of a function is called a task. Here, the input is split among the processes: initially each process knows its own part of the input, but not the others'. As each process computes, it communicates with the others, and eventually it halts with its own output value. Collectively, the individual output values form the task's output. If the participants could reliably communicate with one another, then each one could assemble the individual inputs, and then compute sequentially the output to the task. In many realistic models of distributed computing, however, uncertainties caused by failures and unpredictable timing limit each participant to an incomplete picture of the global system state.
In this talk we survey some important developments from almost 25 years of work in this area. The story is a quest for a theory of distributed computability, and for a distributed analogue of the Church-Turing Thesis. Two insights have emerged from this story.
It is well understood that task computability is very sensitive to the details of the model: failures (type and number of process and communication failures), communication (various forms of message passing and shared memory), and timing (relative process speeds and communication delays). Yet, it has been discovered that one model plays a fundamental role: the model where asynchronous processes communicate by reading and writing shared variables, and any number of them can fail by crashing.
Furthermore, there is an intimate relation between distributed computability and topology. Roughly speaking, the question whether a task is computable in a given model can be reduced to the question whether an associated geometric structure, called a simplicial complex, has "holes" of certain dimensions.
There are various implications of these insights. In particular, Turing computability is orthogonal to distributed computability. In distributed computing we allow individual processes to have an infinite number of states, so each one individually can solve the Halting problem; yet there are simple Turing-computable tasks that are not computable even in the presence of a single process crash. In this area the interconnection network can be abstracted away, and processes are assumed to be able to communicate directly with each other. Thus, notions of locality based on physical distance disappear. Yet other forms of locality become interesting, related to the various scales of computation that arise from the possibility that processes may fail, and that correct processes may have to complete a computation without hearing from the failed ones.
Joint work with Maurice Herlihy, Pierre Fraigniaud, and others.
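The topological criterion can be stated compactly; the following is my paraphrase of the Herlihy-Shavit asynchronous computability theorem for the wait-free read/write model singled out above, not a formulation from the talk:

    \[
      \text{A task } (\mathcal{I}, \mathcal{O}, \Delta)
      \text{ is wait-free solvable}
      \iff
      \text{there exist a chromatic subdivision } \operatorname{Div}(\mathcal{I})
      \text{ and a colour-preserving simplicial map }
      \phi : \operatorname{Div}(\mathcal{I}) \to \mathcal{O}
      \text{ carried by } \Delta .
    \]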
18:00 promotion of Maurice Nivat to the grade of Commandeur in the Ordre des Palmes Académiques by Annie Veyre (professor emeritus of biophysics and nuclear medicine at the Université d'Auvergne, honorary president of the Université d'Auvergne)
Thursday, April 25, 2013
  9:30 - 10:00 coffee
10:00 - 10:50 Gérard Berry (Collège de France) Time / space exchange games (pdf slides without animations)
In computer science, the notion of time is often restricted to computation time, a central concern of the complexity of algorithms and programs alongside memory space and energy consumption. The trade-off between space and time is indeed fundamental for all electronic circuits and for a good number of software algorithms, and it has been studied extensively. But this classical view must now be broadened considerably to speak properly about many modern systems: distributed computing systems, embedded systems in which computers control physical processes or interact continuously with their users, computer simulations of physical phenomena, contemporary music mixing human performers and computers, or the coding and processing of information in the brain. We will see that the notion of time must become physico-logical, multiform, hierarchical, more or less regular, more or less distributed; in short, much richer and still largely unexplored.
11:00 - 11:50 Xavier Leroy (INRIA) From functional programming to deductive software verification
Since Milner's seminal work on the ML language in the 1970's, typed functional programming languages have made tremendous progress towards supporting expressive, type-safe, mathematically-precise programming. Yet, when it comes to machine-assisted formal verification of critical software, functional programming languages lag behind mainstream imperative languages. In this talk, I will review the current state of formal verification of functional programs, building on my experience with programming and proving correct the CompCert compiler using the Coq assistant both as a programming language and as a verification tool.
12:00 - 14:00 lunch
14:00 - 14:50 Ohad Kammar (Cambridge) A general theory of type-and-effect systems via universal algebra
We present a general theory of Gifford-style type-and-effect annotations, where effect annotations are sets of effects, such as memory accesses or exceptions. Our theory accounts for effect-dependent program transformations for functional-imperative languages, as used in optimising compilers. Using our theory, we validate and generalise many existing optimisations and add new ones. Our general theory also suggests a classification of these optimisations into three classes: structural, local, and global. We also give modularly-checkable necessary and sufficient conditions for validating the optimisations.
Generality is achieved by recourse to the theory of algebraic effects, a development of Moggi's monadic theory of computational effects that emphasises the operations causing the effects at hand. The universal algebraic perspective gives an elementary and concrete view on the monadic concepts. The key observation is that annotation effects can be identified with the algebraic operation symbols. We describe how the universal algebraic approach gives particularly simple characterisations of the optimisations: structural optimisations always hold; local ones depend on the effect theory at hand; and global ones depend on the global nature of that theory, such as idempotency or absorption laws. Time permitting, we outline how the theory generalises to more sophisticated notions of universal algebra via enriched Lawvere theories and factorisation systems.
Joint work with Gordon Plotkin.
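A very small sketch of the Gifford-style setting (illustrative Python of my own, not the paper's formalism; the expression grammar and the operation symbols 'read' and 'raise' are assumptions): effect annotations are computed as sets of operation symbols, and an effect-dependent optimisation such as dead-code elimination is applied only when the annotation licenses it.

    # Expressions: ('lit', n), ('read',), ('raise',), ('seq', e1, e2).
    def effects(e):
        """Effect annotation: the set of operation symbols e may use."""
        tag = e[0]
        if tag == 'lit':
            return frozenset()
        if tag in ('read', 'raise'):
            return frozenset({tag})
        if tag == 'seq':
            return effects(e[1]) | effects(e[2])
        raise ValueError(tag)

    def discard_dead(e):
        """Effect-dependent rewrite: 'e1; e2' -> 'e2'. Unlike a structural
        optimisation, its soundness depends on the effect annotation:
        e1 may be dropped only when its effect set is empty."""
        if e[0] == 'seq':
            e1, e2 = discard_dead(e[1]), discard_dead(e[2])
            return e2 if not effects(e1) else ('seq', e1, e2)
        return e

    print(discard_dead(('seq', ('lit', 1), ('read',))))   # ('read',)
    print(discard_dead(('seq', ('raise',), ('lit', 2))))  # kept: e1 may raise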
15:00 - 15:50 Mathilde Noual (I3S, Nice) Updating automata networks
An automata network is made of a set of entities (computational units seen as black boxes) called automata, which can make each other change states. The possible state changes of the automata define the possible overall network transitions. Two questions follow naturally: how do the structure of interactions between automata, on the one hand, and the organisation of local changes in time, on the other, affect the global evolution possibilities and the asymptotic behaviours of the network? I propose to approach these questions with networks of Boolean automata, in order to put the emphasis on (possibilities of) state changes rather than on the nature of the changes.
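As a toy illustration of the second question (my example, not one from the talk): in the two-automaton network below, updating both automata in parallel yields an oscillation, while updating them one after the other reaches a fixed point, so the organisation of local changes in time alone changes the asymptotic behaviour.

    # Boolean network on two automata with local functions
    # f_x(x, y) = y and f_y(x, y) = x (a positive circuit).
    fx = lambda x, y: y
    fy = lambda x, y: x

    def parallel(state):
        """Synchronous schedule: both automata update simultaneously."""
        x, y = state
        return (fx(x, y), fy(x, y))

    def sequential(state):
        """Sequential schedule: x updates first, then y sees the new x."""
        x, y = state
        x = fx(x, y)
        return (x, fy(x, y))

    def orbit(step, state, n=5):
        states = [state]
        for _ in range(n):
            state = step(state)
            states.append(state)
        return states

    print(orbit(parallel, (0, 1)))    # (0,1),(1,0),(0,1),... oscillates
    print(orbit(sequential, (0, 1)))  # reaches the fixed point (1,1)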
16:00 - 16:30 coffee break
16:30 - 17:20 Michał Skrzypczak (Warsaw) Descriptive complexity vs. decidability for Monadic Second-Order logic
A theorem of Büchi, Elgot, and Trakhtenbrot says that, over finite words, Monadic Second-Order (MSO) logic has the same expressive power as finite automata or regular expressions. Classical results of Büchi and Rabin show that the correspondence between automata and MSO logic carries over into more complicated structures, namely infinite words and trees. In all these cases, an automata model can be seen as a way of simplifying all formulas of MSO logic into a simple canonical form.
In particular, such a canonical form bounds the topological complexity of languages definable in MSO. I will present the reasoning in reverse: for logics (extensions of MSO) defining languages of high topological complexity, there is no automata model, and satisfiability can be undecidable.
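For the simplest case, infinite words, the bound in question can be made explicit (a standard fact, added here for concreteness, not a statement from the talk):

    \[
      L \subseteq \Sigma^{\omega} \text{ MSO-definable (i.e. } \omega\text{-regular)}
      \;\Longrightarrow\;
      L \text{ is a finite Boolean combination of } \mathbf{\Pi}^0_2 \text{ sets,}
      \text{ hence } L \in \mathbf{\Delta}^0_3 .
    \]

So a logic that defines a word language above this level of the Borel hierarchy cannot admit a translation to such a canonical automaton form.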
Friday, April 26, 2013
  9:30 - 10:00 coffee
10:00 - 10:50 Ronitt Rubinfeld (MIT and Tel Aviv) Something for almost nothing: Advances in sub-linear time algorithms
Linear-time algorithms have long been considered the gold standard of computational efficiency. Indeed, it is hard to imagine doing better than that, since for a nontrivial problem, any algorithm must consider all of the input in order to make a decision. However, as extremely large data sets are pervasive, it is natural to wonder what one can do in sub-linear time. Over the past two decades, several surprising advances have been made on designing such algorithms. We will give a non-exhaustive survey of this emerging area, highlighting recent progress and directions for further research.
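One classical example of the genre (my illustration, following the well-known spot-checker for sortedness; the function name and parameters are mine): to test whether an array is sorted, binary-search for a few randomly chosen entries and check that each search succeeds, which reads O(log(n)/eps) entries rather than all n.

    import random

    def looks_sorted(a, eps=0.1):
        """Accepts every sorted array of distinct entries; rejects, with
        probability >= 2/3, arrays that are eps-far from sorted (more than
        eps*len(a) entries must change to sort them)."""
        n = len(a)
        if n < 2:
            return True
        for _ in range(int(2 / eps) + 1):
            i = random.randrange(n)
            lo, hi = 0, n - 1
            while lo <= hi:                # binary search for a[i]
                mid = (lo + hi) // 2
                if a[mid] == a[i]:
                    break
                elif a[mid] < a[i]:
                    lo = mid + 1
                else:
                    hi = mid - 1
            else:
                return False               # search never met a[i]: not sorted
            if mid != i:
                return False               # defensive; needs distinct entries
        return True

    print(looks_sorted(list(range(10**6))))        # True
    print(looks_sorted(list(range(10**6))[::-1]))  # almost surely False

The analysis is the standard one: in an array that is eps-far from sorted, at least an eps fraction of indices fail this search test, so O(1/eps) repetitions of an O(log n) search suffice.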
11:00 - 11:50 Mark Braverman (Princeton) Information complexity and exact communication bounds
In this talk we will discuss information complexity -- a measure of the amount of information Alice and Bob need to exchange to solve a problem over distributed inputs. We will present an information-theoretically optimal protocol for computing the AND of two bits distributed between Alice and Bob. We prove that the information complexity of AND is ~1.4923 bits. We use the optimal protocol and its properties to obtain tight bounds for the Disjointness problem, showing that the randomized communication complexity of Disjointness on n bits is ~0.4827n ± o(n).
12:00 - 14:00 lunch
14:00 - 14:50 Nicole Schweikardt (Frankfurt) Gaifman's theorem revisited
15:00 - 15:50 Howard Straubing (Boston) Algebra meets finite model theory
We begin with a broad survey of some four decades of research on the use of the algebraic theory of finite semigroups to answer questions about what properties of words can be expressed in various logics. When the property in question defines a regular language of words, algebra proves to be an immensely powerful tool: deep theoretical results on the structure and decomposition of semigroups provide precise effective characterizations of a large number of variants and fragments of first-order logic and linear temporal logic.
We then explore the prospects for extending the reach of these methods beyond regular languages of words. Here one finds striking correspondences suggesting tantalizing conjectures, but far less in the way of satisfying results. We examine two such directions: lower bounds for circuit complexity, and the characterization of temporal and predicate logics for trees.
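The paradigmatic instance of such an effective characterization, stated here in my words rather than the speaker's, is Schützenberger's theorem combined with McNaughton and Papert's logical counterpart:

    \[
      L \text{ regular is definable in first-order logic } \mathrm{FO}[<]
      \iff
      \text{its syntactic monoid } M(L) \text{ is aperiodic, i.e. }
      \exists n\ \forall m \in M(L):\ m^{n+1} = m^{n}.
    \]

Since M(L) can be computed from the minimal automaton of L and aperiodicity is a finite check, definability in FO[<] is decidable.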
16:00 - 17:00 closing
