# Reading Group LAAG (Logic, Automata, Algebra and Games)

## Principle

The goal of this reading group is to present, in two-hour blackboard sessions, classical or more recent results concerning logic, automata theory, algebraic methods and games.

## Organisation

The reading group meets on Thursdays from 10:30 AM to 12:30 PM, in room 4033 of the Sophie Germain building, every two weeks on average.

## Contacts

For questions, proposals of talks or registration to the mailing list, please contact Arnaud Carayol, Thomas Colcombet or Stefan Göller.

# Next talks

# Planned speakers and subjects

- Nathalie Bertrand: Probabilistic automata and stochastic games
- Olivier Carton: Prophetic automata
- Christian Choffrut
- Anuj Dawar: The quest for a logic for P
- Blaise Genest: Zielonka’s theorem
- Guillaume Malod
- Sylvain Perifel
- Marc Zeitoun
- Amina Doumane
- CSP and universal algebra

# Past talks

Florent Capelli, Thursday June 16 2015, Room 4033: Lyndon’s theorem.

A first-order formula is positive if it does not contain any negation. It is not so difficult to prove that such formulas are preserved under surjective homomorphisms: if some model $M$ satisfies the formula and there is a surjective homomorphism from $M$ onto ${M}^{\prime}$, then ${M}^{\prime}$ also satisfies it. Lyndon’s theorem states that, conversely, all first-order formulae preserved in this way are equivalent to a positive formula. Hence, Lyndon’s theorem characterizes this preservation property syntactically. Statements of this form are usually called ‘preservation theorems’. As is often the case for preservation theorems for first-order logic, this theorem is not true in the finite: there are first-order formulae that are preserved under surjective homomorphisms between finite models but are not equivalent to any positive formula.

In this presentation, we will give a panorama of various classical preservation theorems for first-order logic. We will then present a proof of Lyndon’s theorem, and a proof that it does not hold in the finite case (this last construction is joint work with Arnaud Durand, Amélie Gheerbrant and Cristina Sirangelo).
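As a toy illustration of the easy direction only (not of Lyndon’s theorem itself), one can check on small finite graphs that a positive existential sentence such as $\exists x \exists y\, E(x,y)$ survives taking surjective homomorphic images; the graph and the map `h` below are made-up examples.

```python
def image_graph(edges, h):
    """Image of a graph under a vertex map h: each edge (u, v) is sent
    to (h(u), h(v)), so a positive existential sentence such as
    "there exists an edge" remains true in the image."""
    return {(h[u], h[v]) for (u, v) in edges}

src = {(1, 2), (2, 3)}
h = {1: 'a', 2: 'b', 3: 'a'}   # surjective onto {'a', 'b'}
img = image_graph(src, h)      # {('a', 'b'), ('b', 'a')}
```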

Christoph Haase, Thursday February 4 2015, Room 4033: A Survey on Classical and Novel Results on Presburger Arithmetic.

Presburger arithmetic is the first-order theory of the natural numbers with addition and order. This theory was shown decidable by M. Presburger in 1929. In the 1960s, Presburger arithmetic was studied in the context of formal language theory. Two of the main results from this period are that the sets of natural numbers definable in Presburger arithmetic are the so-called semi-linear sets, and that Parikh images of languages generated by context-free grammars are semi-linear and hence Presburger-definable. Subsequently, from the 1970s onwards, the computational complexity of Presburger arithmetic received interest. The landmark results from this period are a doubly exponential lower bound shown by Fischer and Rabin in 1974 and a triply exponential upper bound shown by Oppen in 1978. The 1980s brought a more fine-grained complexity analysis of (fragments of) Presburger arithmetic by Fürer, Grädel and Scarpellini. In this talk, I will survey those classical results and show the relevant proofs or proof ideas for most of them. I will discuss in depth a recent result showing that Presburger arithmetic with a fixed number of quantifier alternations is complete for every level of the weak EXP hierarchy. If time permits, I will discuss some recent work in which Presburger arithmetic has been used to show new lower bounds for equivalence problems for commutative grammars.
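To make the semi-linear sets mentioned above concrete, here is a small sketch (my own example, not from the talk) testing membership in a single linear set over $\mathbb{N}^d$ by brute force, under the simplifying assumption that every period vector is nonnegative and nonzero; the example set is the Parikh image of $(ab)^*$.

```python
from functools import lru_cache

def in_linear_set(x, base, periods):
    """Membership of x in the linear set {base + Σ n_i·p_i : n_i ∈ N},
    by brute-force search; assumes every period vector is nonnegative
    and nonzero, so the recursion terminates."""
    target = tuple(a - b for a, b in zip(x, base))
    if any(t < 0 for t in target):
        return False

    @lru_cache(maxsize=None)
    def reach(t):
        if not any(t):      # the zero vector: done
            return True
        subs = (tuple(a - b for a, b in zip(t, p)) for p in periods)
        return any(all(c >= 0 for c in s) and reach(s) for s in subs)

    return reach(target)

# Parikh image of (ab)*: equal numbers of a's and b's
periods = ((1, 1),)
```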

Christoph Haase, Thursday January 21 2015, Room 4033: A Survey on Classical and Novel Results on Presburger Arithmetic.

First part.

Sylvain Schmitz, Thursday January 7 2015, Room 4033: Ideals in VAS Reachability.

The decidability of the reachability problem for vector addition systems is a notoriously difficult result. First shown by Mayr in 1981 after a decade of unsuccessful attempts and partial results, its proof has been simplified and refined several times, by Kosaraju, Reutenauer, and Lambert, and re-proven by very different techniques by Leroux in 2012. The principles behind the original proof, however, remained obscure.

In this seminar, I will present the ideas behind the algorithms of Mayr, Kosaraju, and Lambert (the KLM algorithm) in the light of ideals of well-quasi-orders. The interest here is that ideals provide a semantics to the structures manipulated in the KLM algorithm, bringing some new understanding to its proof of correctness.

Joint work with Jérôme Leroux (LaBRI) presented at LICS’15; see http://arxiv.org/abs/1503.00745

Ranko Lazic, Thursday December 17 2015, Room 3048: Rackoff’s Coverability Technique.

Rackoff’s small witness property for the coverability problem is the standard means to prove tight upper bounds in vector addition systems (VAS) and many extensions. We show how to derive the same bounds directly on the computations of the VAS instantiation of the generic backward coverability algorithm. This relies on a dual view of the algorithm using ideal decompositions of downwards-closed sets, which exhibits a key structural invariant in the VAS case. The same reasoning readily generalises to several VAS extensions.
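The generic backward coverability algorithm mentioned above can be sketched as follows, assuming a plain VAS where a transition $d$ is enabled at marking $m$ whenever $m + d \ge 0$; upward-closed sets are represented by their finite bases of minimal elements, as in the dual view of the abstract (the encoding and names are my own).

```python
def coverable(init, target, transitions):
    """Backward coverability for a plain VAS: decide whether some
    marking >= target is reachable from init.  Upward-closed sets are
    represented by bases of minimal elements; Dickson's lemma
    guarantees termination."""
    def leq(a, b):
        return all(x <= y for x, y in zip(a, b))

    basis = {tuple(target)}
    while True:
        new = set()
        for b in basis:
            for d in transitions:
                # least m >= 0 with m + d >= 0 and m + d >= b
                pre = tuple(max(x - y, 0) for x, y in zip(b, d))
                if not any(leq(u, pre) for u in basis):
                    new.add(pre)
        if not new:
            break
        basis |= new
        basis = {v for v in basis
                 if not any(leq(u, v) and u != v for u in basis)}
    return any(leq(b, tuple(init)) for b in basis)
```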

Joint work with Sylvain Schmitz presented at RP ’15; see https://hal.inria.fr/hal-01176755/.

Sylvain Schmitz, Thursday December 3 2015, Room 3048: Ideals of Well-Quasi-Orders.

The purpose of this seminar is to introduce the notion of ideals of well-quasi-orders. These irreducible downwards-closed sets of elements were first invented in the 1970s and rediscovered in recent years in the theory of well-structured transition systems, notably by Finkel and Goubault-Larrecq. Ideals indeed provide finite effective representations of downwards-closed sets, in the same way as bases of minimal elements provide representations of upwards-closed sets.

After defining ideals and establishing some of their properties, I will illustrate their use in a concrete setting. I will present some recent results by Czerwiński, Martens, van Rooijen, and Zeitoun (FCT’15) on the decidability of piecewise-testable separability in the light of ideals. This seminar is also a warm-up for the next seminar on reachability in vector addition systems.
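For the concrete well-quasi-order $(\mathbb{N}^d, \le)$, the ideal representation described above is particularly simple: each ideal is the set of vectors below an upper bound with entries in $\mathbb{N} \cup \{\omega\}$. A minimal sketch (my own example):

```python
OMEGA = float('inf')  # the limit element ω

def in_downward_closed(v, ideals):
    """Membership in a downward-closed subset of N^d represented as a
    finite union of ideals, each given by an upper bound whose entries
    range over N ∪ {ω}."""
    return any(all(x <= b for x, b in zip(v, bound)) for bound in ideals)

# D = {(x, y) : x <= 2} ∪ {(x, y) : y <= 1}, a made-up example
D = [(2, OMEGA), (OMEGA, 1)]
```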

Luc Segoufin, Thursday November 12 2015, Room 3058: Abiteboul-Vianu theorem.

Descriptive complexity measures a problem in terms of the difficulty of expressing it. At first glance this seems to have nothing to do with computational measures such as time and space. However, it has intimate links with classical computational complexity.

The most celebrated one is Fagin’s theorem linking the complexity class NP with the expressive power of existential second-order logic.

In this lecture we will study the Abiteboul-Vianu theorem saying that the separation of PTime and PSpace can be rephrased in terms of expressive powers of partial fixpoints versus least fixpoints.

Thomas Colcombet, Thursday November 5 2015, Room 3058: “À la Bojańczyk” proof of limitedness.

Limitedness is a boundedness question: given a function from words to integers described by some suitable model of finite-state machine (distance automata, B-automata, ...), determine whether the function it computes is bounded. It happens that for distance automata [Hashiguchi81, Leung82, Simon84] and B-automata [Kirsten05, Bojańczyk&C.06, C.09], this problem turns out to be decidable. This deep result is used in many situations, and in particular for solving the famous star-height problem [Hashiguchi88, Kirsten05]. (This problem asks, given a regular language $L$ and a non-negative integer $k$, whether it is possible to describe $L$ by means of a regular expression with nesting of Kleene stars bounded by $k$.)

In his recent LICS15 paper, Bojańczyk gave a much shorter and self-contained proof of the decidability of limitedness (and in fact also of star-height). It relies on a reduction to finite games of infinite duration, and involves arguments of positionality of strategies in quantitative games [C.&Löding08]. The topic of this talk is the presentation of this elegant proof.

The decidability of limitedness was already presented in the LAAG seminar (using the algebraic approach of stabilization monoids), but the proof here is entirely different.

Stefan Göller, Wednesday June 17 and July 1st, Room 4068: Decidability of DPDA equivalence, part III and IV.

These two talks will continue the study of the equivalence problem of deterministic pushdown automata (DPDA).

The main focus of these talks will be on the computational complexity of this problem. We will give a proof of Stirling’s result that DPDA equivalence is primitive recursive.

More precisely, we will present a recent paper by Jancar that shows that in case two DPDAs are inequivalent they can be distinguished by a word whose length is bounded by a tower of exponentials of elementary height.

As in Arnaud’s talk we will view DPDA equivalence in terms of trace equivalence of deterministic first-order grammars.

This talk aims at going hand-in-hand with Arnaud’s talk but is intended to be self-contained as well.

These talks will be completely self-contained and require no background knowledge.

Arnaud Carayol, Wednesday May 27 and June 3, Room 4068a: Decidability of DPDA equivalence, part I and II.

In this series of talks, we will present a proof of the decidability of the equivalence problem for deterministic pushdown automata. Namely, given two deterministic pushdown automata, it is decidable whether they accept the same language. This result was first proved by Géraud Sénizergues in 1997. His proof consists of two semi-decision procedures and hence, while establishing decidability, does not give an upper bound on the complexity of the problem. In 2002, Colin Stirling gave a primitive recursive algorithm to decide the problem.

The aim of these talks is to present what seems to be the simplest self-contained proof of this result. We will present a proof based on the recent work of Petr Jancar, which uses first-order grammars instead of deterministic pushdown automata. To keep the proof as simple as possible, we will present it as two semi-decision procedures.

Nathanaël Fijalkow, Wednesday April 15th, Room Ada Lovelace, 4067: Algorithmics Properties of Probabilistic Automata, part II.

This talk is a continuation of the talk from two weeks ago, but it will be independent, so please consider coming even if you didn’t attend the first part.

The plan for this second session is:

- explain how to obtain a universally non-regular automaton (by adding one state to last time’s example),
- explain how to encode Minsky machines using bounded-error probabilistic automata, with two-way and one-way variants,
- explain how to encode probabilistic automata using only one probabilistic transition,
- show the decidability of two problems: the equivalence problem and the finiteness problem. Both proofs rely on the observation that the reals with addition and multiplication form a field, together with some linear algebra.
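The linear-algebra observation behind the last point can be sketched as follows: equivalence reduces to checking that every vector in the span of reachable "difference" distributions has acceptance weight zero (in the spirit of Tzeng's algorithm; the encoding and the numerical tolerance below are my own choices, not from the talk).

```python
import numpy as np

def pa_equivalent(init1, trans1, final1, init2, trans2, final2, alphabet):
    """Equivalence of two probabilistic automata by linear algebra:
    explore the span of reachable difference vectors; the automata are
    equivalent iff every such vector has acceptance weight zero.
    trans*: dict letter -> stochastic matrix."""
    t1 = {a: np.asarray(m, float) for a, m in trans1.items()}
    t2 = {a: np.asarray(m, float) for a, m in trans2.items()}
    n1 = len(init1)
    v0 = np.concatenate([np.asarray(init1, float),
                         -np.asarray(init2, float)])
    eta = np.concatenate([np.asarray(final1, float),
                          np.asarray(final2, float)])
    basis, queue = [], [v0]
    while queue:                      # terminates: dimension <= n1 + n2
        v = queue.pop()
        if np.linalg.matrix_rank(np.array(basis + [v])) == len(basis):
            continue                  # v already lies in the span
        basis.append(v)
        queue += [np.concatenate([v[:n1] @ t1[a], v[n1:] @ t2[a]])
                  for a in alphabet]
    return all(abs(v @ eta) < 1e-9 for v in basis)

# two hypothetical automata accepting every word with probability 1/2
A = dict(init=[1.0], trans={'a': [[1.0]]}, final=[0.5])
B = dict(init=[1.0, 0.0],
         trans={'a': [[0.5, 0.5], [0.5, 0.5]]},
         final=[0.5, 0.5])
```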

Nathanaël Fijalkow, Wednesday April 1st, Room 4068: The Naughty Side of Probabilistic Automata.

This talk is about probabilistic automata over finite words: such machines, when reading a letter, flip a coin to determine which transition to follow. Although quite simple, this computational model is quite powerful, and many natural problems are undecidable.

In this talk, I will review some of the most important undecidability results, with new simplified constructions.

Michał Skrzypczak, Wednesday March 11, Room 4068: On topology in automata theory.

This talk will be a continuation of my talk from 11th February. I will not assume knowledge of any material presented then, except for basic notions of topology. During this talk I will move to the case of regular languages of infinite trees. Since there are known examples of such languages that are not Borel, I will introduce the projective hierarchy. This hierarchy allows one to study the complexity of sets defined using "big" quantifiers (i.e. ranging over sets of naturals). I will focus on the correspondence between topological complexity and automata-theoretic complexity as measured by the Rabin-Mostowski index. My aim will be to prove certain strict-containment results using purely topological methods. At the end I hope to present some recent results about extensions of MSO.

Michał Skrzypczak, Wednesday February 11, Room 4068: On topology in automata theory.

The interplay between topology and automata theory is a form of synergy: descriptive set theory motivates new problems and methods in automata theory while, on the other hand, automata theory provides natural examples of classical topological concepts.

During this talk I will introduce basic notions of topology and descriptive set theory, focusing on the case of infinite words. I’ll explain what the Borel hierarchy and the Wadge order are. Then I’ll show how these tools relate to automata theory. I’ll try to argue that, even from a purely automata-theoretic point of view, it is possible to obtain new results and new proofs by referring to topological concepts.

Gabriele Puppis, Wednesday January 28, Room 4068: Non-determinism vs two-way.

Non-determinism and walking mechanisms were considered already at the very beginning of automata theory. I will survey a few results connecting models of automata and transducers enhanced with non-determinism and two-wayness.

A classical result by Rabin and Scott shows that deterministic finite state automata are as expressive as their non-deterministic and two-way counterparts. The translation from two-way automata to one-way automata is easily seen to imply an exponential blowup in the worst case, even when one maps deterministic two-way models to non-deterministic one-way models. The converse translation, which is based on the subset construction and maps non-deterministic one-way automata to deterministic ones, also implies an exponential blowup. This latter translation, however, is not known to be optimal when one aims at removing non-determinism by possibly introducing two-wayness. In particular, the question of whether the exponential blowup is unavoidable in transforming non-determinism to two-wayness is open.
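The subset construction underlying the Rabin-Scott translation can be sketched as follows (the names and NFA encoding are my own; `delta` maps a state-letter pair to a set of successor states).

```python
def determinize(alphabet, delta, inits, finals):
    """Rabin–Scott subset construction: turn an NFA into an equivalent
    DFA whose states are sets of NFA states; the blowup is 2^n states
    in the worst case."""
    start = frozenset(inits)
    trans, seen, todo = {}, {start}, [start]
    while todo:
        S = todo.pop()
        for a in alphabet:
            T = frozenset(q for s in S for q in delta.get((s, a), ()))
            trans[(S, a)] = T
            if T not in seen:
                seen.add(T)
                todo.append(T)
    accept = {S for S in seen if S & finals}
    return start, trans, accept

# hypothetical NFA: words over {a,b} whose second-to-last letter is 'a'
delta = {(0, 'a'): {0, 1}, (0, 'b'): {0}, (1, 'a'): {2}, (1, 'b'): {2}}
start, trans, accept = determinize('ab', delta, {0}, {2})
```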

When one turns to models of transducers, some translations are no longer possible. For example, one-way transducers (even those that exploit functional non-determinism) are easily seen to be less powerful than two-way transducers (even input-deterministic ones). A natural problem thus arises, which amounts to characterizing the two-way transductions that can be implemented by one-way functional transducers. The latter problem was solved in a recent paper by Filiot, Gauwin, Reynier, and Servais; however, some questions related to complexity remain open. Another interesting, but older, result by Hopcroft shows that functional non-determinism can be removed from two-way transducers, at the cost of an exponential blow-up. A similar transformation can be used to prove that two-way transducers are closed under functional composition.

Stefan Göller, Wednesday December 17, Room 4068: Equivalence checking of infinite state systems: A couple of lower bounds.

We will recall a generic technique for proving lower bounds for bisimulation equivalence checking (for which there was not enough time in my last LAAG talk). First, we will introduce bisimulation equivalence and recall Jancar’s and Srba’s Defender’s Forcing lower bound technique. Then, we will apply this technique to different classes of infinite state systems to show the following hardness results: (1) undecidability for prefix-recognizable graphs, (2) undecidability for vector addition systems, (3) nonelementary hardness for pushdown graphs, and (4) Ackermann hardness for equational graphs of finite out-degree.

Thomas Colcombet, Wednesday December 3, Room 4068: Stabilization monoids and cost functions.

In this third part, we will introduce stabilization monoids and cost functions, and show how they constitute a good framework in which the results of the previous talks can be unified.

Thomas Colcombet, Wednesday November 19, Room 4068: Decidability of Boundedness and Limitedness.

This talk constitutes the continuation of the previous one, in which the star-height problem was reduced to a boundedness question.

In this second part, we will establish the decidability of this second problem, following the ideas of Leung, Simon, and Kirsten.

Thomas Colcombet, Wednesday November 5, Room 4068: The star-height problem and its link to boundedness and limitedness.

This talk is the first in a series of lectures intended to describe the main ideas and uses of automata limitedness and related questions.

The goal of the first session is to present the famous (restricted) star-height problem, and show the first steps toward its resolution. The question is the following: given a regular language $L$ and a non-negative integer $k$, can we decide whether $L$ can be represented by a regular expression of star-height (i.e., nesting of Kleene stars) at most $k$? This problem remained open for 25 years until Hashiguchi provided a proof in a series of four papers, between 1981 and 1988. This proof is notoriously difficult, and the presentation here will be based on the ideas in the more modern, and much simpler, proof of Kirsten (2005).

The idea behind both these proofs is to reduce the original problem to a question of existence of bounds for a function computed by some specific forms of automata (distance automata, nested-distance desert automata, B-automata, …). This idea goes much beyond the scope of the star-height problem, and I will try to convey this idea through several examples.
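For concreteness, a distance automaton assigns to each word the least cost of a run, computed in the (min,+) semiring; a minimal evaluation sketch (the encoding and the one-state example counting a's are my own, not from the talk):

```python
INF = float('inf')

def distance(word, mats, init, final):
    """Evaluate a distance automaton in the (min,+) semiring:
    mats[a][i][j] is the cost of reading letter a while moving from
    state i to state j (INF when there is no such transition)."""
    n = len(init)
    v = init
    for a in word:
        v = [min(v[i] + mats[a][i][j] for i in range(n)) for j in range(n)]
    return min(v[j] + final[j] for j in range(n))

# hypothetical one-state automaton whose cost is the number of a's
mats = {'a': [[1]], 'b': [[0]]}
```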

David Xiao, Thursday April 10, 2014: An introduction to expander graphs.

Expander graphs are sparse yet "strongly" connected graphs. They can stand in for dense or even complete graphs in many applications, thereby reducing the amount of computation or randomness needed. They play a key role in several areas of theoretical computer science, notably in derandomization and pseudorandomness.

In this talk we will introduce expander graphs, their definition(s), some fundamental structural lemmas, and a few examples of their use in derandomization. We will also sketch the construction based on the "zigzag product".
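A common quantitative handle on expansion (one of several equivalent definitions) is the second-largest absolute eigenvalue of the normalized adjacency matrix; a small sketch, assuming a d-regular graph, with my own examples contrasting the complete graph $K_4$ with the 4-cycle:

```python
import numpy as np

def spectral_expansion(adj):
    """Second-largest absolute eigenvalue of the normalized adjacency
    matrix A/d of a d-regular graph; the smaller it is, the better
    the graph expands."""
    A = np.asarray(adj, float)
    d = A[0].sum()
    lam = np.sort(np.abs(np.linalg.eigvalsh(A / d)))[::-1]
    return lam[1]

K4 = [[0, 1, 1, 1], [1, 0, 1, 1], [1, 1, 0, 1], [1, 1, 1, 0]]
C4 = [[0, 1, 0, 1], [1, 0, 1, 0], [0, 1, 0, 1], [1, 0, 1, 0]]
```

Here $K_4$ achieves expansion $1/3$, whereas the bipartite cycle $C_4$ has a second eigenvalue of absolute value 1 and expands poorly.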

Michael Vanden Boom, Thursday March 27, 2014: Automata characterizations of WMSO.

Weak monadic second-order logic (WMSO) over the infinite binary tree has the same syntax as MSO, but second-order quantification is interpreted only over finite sets. It captures a strict subclass of the regular languages of infinite trees, but is expressive enough to subsume temporal logics like CTL.

In this talk, I will sketch the proofs showing the equivalence of WMSO and a form of automata called weak alternating automata. I will also prove Rabin’s characterization of WMSO: a language is weakly definable iff the language and its complement are recognizable by nondeterministic Büchi automata.

I will conclude by describing connections with recent work (joint with Thomas Colcombet, Denis Kuperberg, and Christof Löding) showing that given a Büchi automaton as input, it is decidable whether the language is definable in WMSO.

Olivier Serre, Thursday March 13, 2014: Rabin’s Theorem.

In 1969, Michael O. Rabin established that the class of regular tree languages (i.e. languages of infinite node-labelled complete binary trees accepted by non-deterministic tree automata) forms a Boolean algebra. The goal of this talk is to present the underlying objects of this statement as well as a complete proof of it.

Hence, the structure of the talk will be as follows:

- Main definitions and examples: infinite trees, parity tree automata and regular tree languages
- A brief recap of positional determinacy of two-player parity games on graphs (see the previous LAAG seminar by Nathanaël Fijalkow)
- Acceptance game and emptiness game/test for parity tree automata
- Complementation of parity tree automata, relying on positional determinacy and determinisation of automata on infinite words (see previous LAAG seminar).

I will assume no specific knowledge of automata or games for this talk.

Nathanaël Fijalkow, Thursday February 1, 2014: Positional determinacy part II.

Last week, Thomas Colcombet presented a proof of positional determinacy for parity games, following a "forward approach". In this second part, I will present a completely different proof of the same result, based on ideas by Muller and Schupp, following a "backward approach".

I will highlight the differences between the two approaches, and the applications of both techniques. In particular, I will explain why backward approaches naturally induce determinization procedures for automata over infinite words.

The talk will be self-contained, and in particular I will quickly recall all required definitions at the beginning.

Thomas Colcombet, Thursday February 6, 2014: Positional determinacy part I.

In this sequence of two seminars, we will recall the basic definitions of games and parity games. We will give several proofs and applications of the most famous result in this context: the memoryless determinacy theorem. We will also see how some of these techniques naturally provide determinization procedures over infinite words.
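A basic building block in all of these proofs is the attractor construction, for which positional (memoryless) strategies suffice; a minimal sketch, with my own encoding (`owner[v]` is the player who moves at vertex `v`, `succ[v]` its successors):

```python
def attractor(vertices, succ, owner, player, target):
    """Attractor of `target` for `player`: the set of vertices from
    which `player` can force the play to reach `target`.  The fixpoint
    computation also yields a positional strategy: at each vertex of
    `player` added to the set, pick any successor already inside."""
    attr = set(target)
    changed = True
    while changed:
        changed = False
        for v in vertices:
            if v in attr:
                continue
            if owner[v] == player:
                ok = any(s in attr for s in succ[v])
            else:
                ok = all(s in attr for s in succ[v])
            if ok:
                attr.add(v)
                changed = True
    return attr

# made-up game: player 0 wants to reach vertex 3, vertex 4 is a trap
succ = {1: [2, 4], 2: [3], 3: [3], 4: [4]}
owner = {1: 0, 2: 1, 3: 0, 4: 1}
```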

This is in preparation to the seminars after the holidays, that will aim at presenting the results on the monadic theory of the infinite binary tree.

GDR - Informatique Mathématique, Thursday January 30, 2014: National meeting.

[?GDR - IM, GT Jeux?], Thursday January 23, 2014: Annual meeting.

Sylvain Schmitz, Thursday 19th of December, 2014, Room 4068: Complexity classes beyond elementary.

I will present a hierarchy of fast-growing complexity classes and show its suitability for completeness statements for many non-elementary problems, which occur naturally in logic, combinatorics, formal languages, verification, etc., with complexities ranging from simple towers of exponentials to Ackermannian and beyond.

I plan to skim over the following topics:

- subrecursive hierarchies: functions and classes that allow one to define rapidly-growing functions by recursion over ordinal indices,
- length function theorems for well-quasi-orders, which are the key combinatorial statements behind the upper bounds for most of the non-elementary problems I will mention,
- the definition of fast-growing complexity classes and some easy properties.

The recent results will be taken from work together with S. Abriola, D. Figueira, S. Figueira, C. Haase, S. Haddad, and Ph. Schnoebelen.

Joël Ouaknine, Thursday 28th of November 2013, 10:30 AM, Room 4068: Decision Problems for Linear Recurrence Sequences.

Linear recurrence sequences (such as the Fibonacci numbers) permeate a vast number of areas of mathematics and computer science (in particular: program termination and probabilistic verification), and also have many applications in other fields such as economics, theoretical biology, and statistical physics. In this talk, I will focus on three fundamental decision problems for linear recurrence sequences, namely the Skolem Problem (does the sequence have a zero?), the Positivity Problem (are all terms of the sequence positive?), and the Ultimate Positivity Problem (are all but finitely many terms of the sequence positive?).
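For concreteness, here is a hedged sketch (my own, not from the talk) that generates the terms of a linear recurrence sequence and naively searches for a zero up to a bound; the difficulty of the Skolem Problem is precisely that no effective bound on such a search is known in general.

```python
def lrs_terms(coeffs, inits, n):
    """First n terms of the linear recurrence
    u_k = coeffs[0]*u_{k-1} + ... + coeffs[d-1]*u_{k-d}."""
    u = list(inits)
    while len(u) < n:
        u.append(sum(a * u[-i - 1] for i, a in enumerate(coeffs)))
    return u[:n]

def first_zero(coeffs, inits, bound):
    """Naive search: index of the first zero among the first `bound`
    terms, or None.  This is only a semi-test of the Skolem Problem,
    which asks whether *any* term is zero."""
    for i, t in enumerate(lrs_terms(coeffs, inits, bound)):
        if t == 0:
            return i
    return None
```

For the Fibonacci recurrence `coeffs=[1, 1]`, the initial values `[0, 1]` give the familiar sequence 0, 1, 1, 2, 3, 5, ...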

This is joint work with James Worrell.

Thomas Colcombet, Thursday 7th of November 2013, 10:30 AM, Room 4068: Green’s relations and automata theory.

Green’s relations form a fundamental tool in the analysis of the structure of semigroups and monoids, in particular for the finite ones. Understanding Green’s relations can give remarkable intuitions concerning these objects, and as a consequence for understanding the structure of regular languages of words (finite or infinite). In particular, this description generalizes the understanding of automata through their decomposition into strongly connected components that is used in many proofs.

During this talk, I will present some of the key results concerning Green’s relations, emphasizing how one can think about regular languages using them. I will also show how these can be used to derive non-trivial automata-related results (such as McNaughton’s determinization result, Schützenberger’s characterization of star-free languages, Simon’s factorization forest theorem, ... though unfortunately I will not have time to do all of these).

Arnaud Durand, Thursday 17th of October 2013, Room 4068: How to count in weak formalisms?.

The counting ability of weak formalisms (e.g. determining the number of $1$’s in a string of length $N$) is of interest as a measure of their expressive power, and also relates to complexity-theoretic motivations: the more we can count, the closer we get to real computing power. The question was investigated in several papers in complexity theory and in weak arithmetic a long time ago. In each case, the considered formalism (constant-depth circuits, MSO with addition, or first-order bounded arithmetic) was shown to be able to count up to a polylogarithmic number. An essential part of the proofs is the construction of a 1–1 mapping from a small subset of $\left\{0,...,N-1\right\}$ into a small initial segment. In each case the expressibility of this mapping depends on some strong and non-constructive argument (a group-theoretic device or the prime number theorem).

In this talk we will review these different methods and present a completely elementary (and constructive) approach that reproves all these results uniformly (based on an old paper with M. More and C. Lautemann). We will also review why some of these results cannot be extended.

Łukasz Kaiser, Thursday 13th of June 2013: Machine Learning Viewed by a Logician.

Machine learning guys do things like linear regression, neural networks and support vector machines. Logic guys do relational structures, formulas, automata and such stuff. These two types of guys hardly ever meet to work together. So few logic guys know that neural networks are sometimes considered with only binary weights and signals, which makes them exactly circuits. And few machine learning guys know quantitative logics well enough to recognize their standard neural network in a formula. In this talk, I will define very basic concepts used in machine learning in a way that exposes their logic counterparts. I will then argue that such a relationship can be beneficial for both fields - providing theorems and deeper understanding to machine learning, and practically relevant questions and experimental results to logic.

Gabriele Puppis, Thursday 30th of May 2013: The Cost of Repairing Regular Specifications.

What do you do if a computational object (e.g., a document, a program trace) fails a specification? An obvious approach is to perform a "repair": modify the object minimally to get something that satisfies the constraints. This approach has been extensively investigated in the database community for relational integrity constraints, and in the AI community for propositional logics. Different modification operators have been considered on the basis of the application scenarios. For instance, a repair of an XML document usually consists of applying to the underlying tree structure a certain number of editing operations such as relabelings, deletions, and insertions of nodes. In this talk I will survey some results related to the worst-case cost of repairing documents between regular specifications. Precisely, I will focus on the number of edits that are needed to get from a document (i.e., a word or a tree) that satisfies a source specification (i.e., a regular language $S$) to some document that satisfies a target specification (i.e., a regular language $T$). As this number may well be unbounded, I will consider the problem of determining those pairs of languages $\left(S,T\right)$ such that one can get from any word/tree in $S$ to a word/tree in $T$ using a finite, uniformly bounded number of editing operations. I will give effective characterizations of these pairs when $S$ and $T$ are given by finite state automata (word case) or stepwise tree automata (tree case), and derive some complexity bounds for the corresponding problems. Time permitting, I will also talk about the normalized repair cost; more specifically, about the problem of computing the worst-case repair cost between two languages $S$ and $T$ normalized by the length of the input word.

The presentation is based on joint works with Michael Benedikt, Cristian Riveros, Sławek Staworko, and Pierre Bourhis.

Howard Straubing, Thursday 16th of May 2013: A New Proof of Simon’s Theorem, and Separation by Piecewise Testable Languages.

I am going to present a brand-new (about a week old) proof of the classic theorem of I. Simon showing the equivalence of piecewise testability of regular languages and J-triviality of the syntactic monoid. Mikolaj Bojanczyk and I found this proof while contemplating recent results, discovered independently by Czerwinski-Martens-Masopust and by van Rooijen-Zeitoun, which show that one can determine in polynomial time in the size of NFAs recognizing L and M, whether or not L and M can be separated by a piecewise testable language. (In particular, if M is the complement of L, this gives an efficient algorithm for determining if a given language is piecewise testable.) After giving the proof of Simon’s Theorem, I will explain this more general result.
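Piecewise testable languages are Boolean combinations of languages of the form $\Sigma^* a_1 \Sigma^* \cdots a_n \Sigma^*$, so their basic predicate is the scattered-subword test; a one-line sketch (my own, using a standard iterator idiom):

```python
def is_subword(u, w):
    """Scattered-subword test: can u be embedded in w as a (not
    necessarily contiguous) subsequence?  Piecewise testable languages
    are Boolean combinations of the sets {w : is_subword(u, w)}."""
    it = iter(w)
    return all(c in it for c in u)  # each `in` consumes the iterator
```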

Thomas Colcombet, Thursday 18th of April 2013: Tree Walking Automata.

Tree walking automata were introduced during the 70s as a model for describing regular languages of trees [AU77]. The run of such an automaton, instead of being a tree as for standard tree automata, forms a path in the input tree. At each step of the run, the (unique) head of the automaton is located at some node and, based on the current configuration of the tree and the current state of the automaton, it moves to one of the children of the current node or to its parent, and changes state. Iterating this process, the automaton walks along a path in the tree, possibly ending by accepting the input tree. Tree walking automata are easily seen to define regular languages of finite trees.

In this talk, we will cover several subjects related to tree walking automata, including separation properties (deterministic tree walking automata are strictly weaker than non-deterministic ones [BC06], which in turn are strictly weaker than regular languages [BC08]) and closure properties (under complementation in particular [MSS06]). We will also present the model of pebble automata and relate it to first-order logic with transitive closure [EH99]. We will conclude with the separation of this logic from monadic logic over finite trees [BSSS06,BS10].

Achim Blumensath, Thursday 4th of April 2013: The Transduction Hierarchy and Definable Orderings.

I will give a survey on my recent work with Bruno Courcelle concerning the expressive power of monadic second-order logic on certain graph classes. In particular, I will present our work on the transduction hierarchy, a classification of graph classes according to their encoding power with respect to transductions. I will also give an overview on more recent work on the definability of linear orderings on certain classes of graphs. Here we aim at simple combinatorial characterisations of whether such an ordering is definable or not.

Christof Löding, Wednesday 20th of March 2013: Definability of uniformisation and choice in monadic second order logic.

In this talk I will prove that it is not possible to define a choice function in monadic second-order logic (MSO) over the infinite binary tree. Such a choice function would be given by a formula that has one free set variable and one free element variable, and for each non-empty set of nodes the formula becomes true for exactly one element from this set. This result has first been obtained by Gurevich and Shelah in 1983 using set theoretic methods. The proof presented in this talk is (in my opinion) much simpler, and is completely based on automata theoretic arguments.

A consequence of this result is that MSO-definable relations of infinite trees cannot be uniformized in MSO (where a uniformization of a binary relation is a function that is a subset of the relation and has the same domain as the relation). One can also use the result to show that there are nondeterministic automata on infinite trees that are not equivalent to an unambiguous automaton, i.e., an automaton that has at most one accepting run for each input.

Christian Choffrut, Thursday 21st of February 2013: Uniformization.

To uniformize a binary relation means to find a partial function whose graph lies in the relation and whose domain coincides with that of the relation. Said differently, it consists in selecting a unique image for each element of the domain. Depending on the class of relations, it may or may not be possible to uniformize the relations by functions in the same class.
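The definition can be illustrated on finite relations with a toy Python sketch (the function name and the "pick the least image" strategy are illustrative choices, not from the talk). For the rational relations discussed in the talk, the interesting question is whether the selected function can itself stay within the same class.

```python
def uniformize(relation):
    """From a finite binary relation (a set of pairs), build a partial
    function contained in the relation with the same domain, by mapping
    each domain element to its least image."""
    fn = {}
    for x, y in sorted(relation):
        fn.setdefault(x, y)  # first (i.e. smallest) image wins
    return fn

R = {(1, "b"), (1, "a"), (2, "c")}
f = uniformize(R)
# f has the same domain as R and its graph is included in R
assert set(f) == {x for x, _ in R}
assert all((x, y) in R for x, y in f.items())
```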

I will not give a survey, but rather talk about what I know of the topic, which is mainly binary rational relations over finite and infinite words, and some connected problems.

Thomas Colcombet, Thursday 7th of February 2013: The construction of Safra.

In this talk, we will present the classical construction of Safra, which transforms non-deterministic Büchi automata into deterministic Rabin automata. We will adopt the point of view of modern presentations of this result (in particular Schewe’s), and cover the lower bound results.

Stefan Göller, Thursday 17th of January 2013: Two lower bound techniques in formal verification.

First, I will talk about “Defender’s Forcing”, a technique introduced by Jancar and Srba that is used for proving lower bounds in bisimilarity checking of infinite state systems, i.e. the problem of deciding, given two infinite state systems, whether they are bisimilar. I plan to apply this simple but powerful technique to some classes of infinite state systems for which simple representations exist, such as prefix-recognisable graphs, Petri nets or PA processes.

Second, I will recall two classical results from complexity theory: (i) “PSPACE is serializable”, meaning roughly that to determine whether a word lies in a fixed PSPACE language one only needs to check whether the yield (the leaf string) of the computation tree of an NP machine lies in a fixed regular language; (ii) converting a natural number in binary to its Chinese remainder representation is in ${NC}^{1}$. I plan to use these results to prove lower bounds for reachability problems and model checking problems for one-counter automata and timed automata, respectively.
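The Chinese remainder representation mentioned in (ii) is simply the tuple of residues of a number modulo pairwise coprime moduli. The sketch below (illustrative only; the complexity-theoretic content of the talk is that the conversion from binary can be done in ${NC}^{1}$, which this sequential code does not reflect) shows the conversion and its inverse via the Chinese remainder theorem.

```python
from math import prod

MODULI = (2, 3, 5, 7)  # pairwise coprime; their product is 210

def to_crr(n, moduli=MODULI):
    """Chinese remainder representation: residues of n modulo each modulus."""
    return tuple(n % m for m in moduli)

def from_crr(residues, moduli=MODULI):
    """Reconstruct n (mod the product of the moduli) from its residues,
    using modular inverses (pow(x, -1, m) needs Python >= 3.8)."""
    M = prod(moduli)
    x = sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(residues, moduli))
    return x % M
```

For example, `to_crr(23)` gives `(1, 2, 3, 2)`, and `from_crr((1, 2, 3, 2))` recovers 23; the map is a bijection on `range(210)`.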

Sam van Gool, Thursday 20th of December 2012: A topological proof of Gödel’s completeness theorem for first-order logic.

In the 1950s, Rasiowa and Sikorski gave a topological proof of Gödel’s completeness theorem for first-order logic. Their proof ultimately relies on the Baire category theorem, which is mainly known for its powerful applications in real and functional analysis. Therefore, the application of Baire’s theorem to prove this fundamental result in logic is a surprising and interesting one. The main goal of this talk will be to give an exposition of this elegant topological proof by Rasiowa and Sikorski.

Colin Riba, Thursday 13th of December 2012: A Model Theoretic Proof of Completeness of an Axiomatization of Monadic Second-Order Logic on Infinite Words.

We discuss the completeness of an axiomatization of Monadic Second-Order Logic (MSO) on infinite words (or streams). By using model-theoretic tools, we give an alternative proof of D. Siefkes’ result that a fragment with full comprehension and induction of second-order Peano arithmetic is complete w.r.t. the validity of MSO-formulas on streams. We rely on Feferman-Vaught theorems and the Ehrenfeucht-Fraïssé method for Henkin models of second-order arithmetic. Our main technical contribution is an infinitary Feferman-Vaught fusion of such models. We show it using Ramseyan factorizations similar to those for standard infinite words. We also discuss a Ramsey theorem for MSO-definable colorings, and show that in linearly ordered Henkin models, Ramsey’s theorem for additive MSO-definable colorings implies Ramsey’s theorem for all MSO-definable colorings.

Frédéric Magniez, Thursday 14th of June 2012: An Introduction to Communication Complexity.

We will give a self-contained course on the basics of communication complexity, together with simple and elegant applications for proving lower bounds in different computational models. For instance, we will give a space-time tradeoff for recognizing a palindrome on a Turing machine.

Thomas Colcombet, Thursday 31st of May 2012: The collapse of monadic second order logic over countable linear orderings.

This talk corresponds to a joint work with Olivier Carton and Gabriele Puppis and is a continuation of the previous seminar. The subject will be the algebraic approach to monadic second order logic over infinite words. We will in particular show how this notion of recognition is equivalent to the logic. These techniques at the same time provide a collapse of the logic to the fragment with only one alternation of monadic quantifiers.

Thomas Colcombet, Thursday 10th of May 2012: The monadic theory of orderings.

In this talk, we present the approaches of Rabin (69) and Shelah (75) for deciding the monadic theory of the rationals with order.

Michel de Rougemont, Thursday 3rd of May 2012: Games for Monadic ${\Sigma}_{1}^{1}$ (EMSO) and probabilistic methods.

I’ll survey the classical Ehrenfeucht-Fraïssé and Ajtai-Fagin games for this logic. I’ll explain the role of the probabilistic method in proving that s,t-connectivity is not ${\Sigma}_{1}^{1}$ (EMSO) on graphs, using the Ajtai-Fagin games.

Olivier Carton, Thursday 12th of April 2012: Kamp’s Theorem.

Kamp’s theorem states the equivalence between first-order logic and temporal logic with past operators over complete linear orderings.