IRIF Distinguished Talks Series
Tuesday December 3, 2024, 11AM, Amphi Turing
Dexter Kozen (Cornell University) Probability and Angelic Nondeterminism with Multiset Semantics (joint work with Shawn Ong and Stephanie Ma)

Many modern computational systems involve both probability and nondeterminism, but the combination of the two in a formal logic is a notoriously challenging problem. One would like to reason about the correctness of programs involving constructs for both probabilistic and nondeterministic choice, but it is known that the standard way of modeling nondeterministic choice as a set of possibilities does not interact well with probabilistic choice, chiefly due to the lack of a suitable distributive law between the two. Many researchers over the years have grappled with this issue and proposed a variety of workarounds.
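
To give a flavor of the obstruction (in notation of my own, which may differ from the talk's): the natural candidate for a distributive law sends a finite collection of distributions to the result of sampling each member independently. For a fair coin $\mu$ on $\{a,b\}$, sampling once gives

    \[
    \lambda(\{\mu\}) = \tfrac{1}{2}\,\{a\} + \tfrac{1}{2}\,\{b\},
    \]

while sampling twice and collecting the outcomes as a set gives

    \[
    \lambda(\{\mu,\mu\}) = \tfrac{1}{4}\,\{a\} + \tfrac{1}{2}\,\{a,b\} + \tfrac{1}{4}\,\{b\}.
    \]

Since $\{\mu,\mu\} = \{\mu\}$ as sets, the two results would have to coincide, and they do not: idempotence of set union clashes with independent sampling.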

In this talk I will describe an alternative approach that models nondeterministic choice as a multiset operator instead of a set operator. It turns out that this simple change makes everything work much more smoothly, as there is a distributive law between probability and multisets. I will describe a class of probabilistic automata with nondeterminism modeled by multisets, along with a corresponding class of expressions analogous to regular expressions, and show that the two are equivalent in expressive power. In these models, an input string is not merely accepted with some probability, but accepted with some finite multiplicity with some probability. I will give several examples and describe exactly why multisets work where sets do not.
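
As a purely illustrative toy, not the authors' construction, here is one way such a machine could be operationalized in Python: transitions map a state and an input symbol to a distribution over multisets of successor states, and running a word yields a distribution over acceptance multiplicities.

    from collections import Counter
    from itertools import product

    def step(dist, symbol, delta):
        """Read one symbol: every state occurrence independently samples a
        multiset of successors; the sampled multisets are summed."""
        new = Counter()
        for occs, p in dist.items():          # occs: sorted tuple of state occurrences
            choices = [delta[q][symbol] for q in occs]
            for combo in product(*choices):   # one (prob, successors) pick per occurrence
                prob, merged = p, []
                for cp, succs in combo:
                    prob *= cp
                    merged.extend(succs)
                new[tuple(sorted(merged))] += prob
        return new

    def run(word, start, delta, accepting):
        """Distribution over the multiplicity with which `word` is accepted."""
        dist = Counter({(start,): 1.0})
        for a in word:
            dist = step(dist, a, delta)
        out = Counter()
        for occs, p in dist.items():
            out[sum(q in accepting for q in occs)] += p
        return out

    # Hypothetical instance: on 'a', state q forks into {q, f} or moves to {f},
    # each with probability 1/2; f is accepting and absorbing.
    delta = {
        'q': {'a': [(0.5, ('q', 'f')), (0.5, ('f',))]},
        'f': {'a': [(1.0, ('f',))]},
    }
    print(run('aa', 'q', delta, {'f'}))   # multiplicity 1 w.p. 1/2, multiplicity 2 w.p. 1/2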

IRIF Distinguished Talks Series
Tuesday May 14, 2024, 11AM, Amphi Turing
Omer Reingold (Stanford) The multitude of group affiliations: Algorithmic Fairness, Loss Minimization and Outcome Indistinguishability

We will survey a recent and very fruitful line of research in algorithmic fairness known as multi-group fairness. We will focus on risk prediction, where a machine learning algorithm tries to learn a predictor answering questions of the form “what is the probability that patient x will experience a particular medical condition?” Training a risk predictor to minimize a loss function fixed in advance is the dominant paradigm in machine learning. However, global loss minimization may produce predictions that are miscalibrated on sub-populations, causing harm to individuals in these populations. Multi-group fairness tries to prevent forms of discrimination against a rich (possibly exponentially large) collection of arbitrarily intersecting groups. In a sense, it provides a computational perspective on the meaning of individual risks and on the classical tension between clinical prediction, which uses individual-level traits, and actuarial prediction, which uses group-level traits.
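
To make the miscalibration concern concrete, here is a minimal auditing sketch (my own illustration; the binning scheme, the minimum cell size, and the tolerance alpha are assumptions, not notions from the talk): within every supplied group and prediction bin, it measures how far the average outcome drifts from the average prediction.

    import numpy as np

    def calibration_violations(preds, outcomes, groups, n_bins=10, alpha=0.05):
        """Flag groups on which a predictor is miscalibrated.
        preds, outcomes: arrays in [0, 1]; groups: name -> boolean mask,
        possibly including arbitrarily intersecting sub-populations."""
        bins = np.minimum((preds * n_bins).astype(int), n_bins - 1)
        flagged = {}
        for name, mask in groups.items():
            worst = 0.0
            for b in range(n_bins):
                cell = mask & (bins == b)
                if cell.sum() < 30:            # skip statistically tiny cells
                    continue
                gap = abs(outcomes[cell].mean() - preds[cell].mean())
                worst = max(worst, gap)
            if worst > alpha:
                flagged[name] = worst
        return flagged

    # Intersections are just conjunctions of masks, e.g. (hypothetical attributes):
    # groups = {'women over 60': (sex == 1) & (age > 60), ...}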

While motivated by fairness, this alternative paradigm of training an indistinguishable predictor is finding a growing number of appealing applications, in which the same predictor can later be used to optimize any one of a large set of loss functions, under a family of capacity and fairness constraints and instance distributions.
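
One way to picture this reuse, as an illustrative sketch under assumptions of my own rather than a result from these works: for a calibrated risk estimate, each cost-sensitive loss is optimized by thresholding the very same predictions at a loss-dependent point.

    def decide(p, fp_cost, fn_cost):
        """Bayes-optimal binary decision from a calibrated risk estimate p:
        act iff the expected cost of missing (p * fn_cost) exceeds the
        expected cost of a false alarm ((1 - p) * fp_cost)."""
        return p * fn_cost > (1 - p) * fp_cost

    # The same predictor serves different losses by moving the threshold:
    risk = 0.3
    decide(risk, fp_cost=1, fn_cost=1)   # symmetric loss, threshold 0.5 -> False
    decide(risk, fp_cost=1, fn_cost=9)   # costly misses, threshold 0.1 -> True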

Based on a sequence of works joint with (subsets of) Cynthia Dwork, Shafi Goldwasser, Parikshit Gopalan, Úrsula Hébert-Johnson, Lunjia Hu, Adam Kalai, Christoph Kern, Michael P. Kim, Frauke Kreuter, Guy N. Rothblum, Vatsal Sharan, Udi Wieder, Gal Yona and others.

IRIF Distinguished Talks Series
Wednesday February 7, 2024, 11AM, Amphi Turing, Bâtiment Sophie Germain
Véronique Cortier (Laboratoire lorrain de recherche en informatique et ses applications (LORIA)) Electronic voting: design and formal verification

Electronic voting aims to guarantee apparently conflicting properties: no one should know how I voted, and yet I should be able to check that my vote has been properly counted. Electronic voting belongs to the large family of security protocols, which aim to secure communications against powerful adversaries that may read, block, and modify messages. In this talk, we will see how to design secure electronic voting protocols and how to analyze them using formal methods, in order to detect attacks at an early stage or to prove security, yielding a better understanding of the security guarantees and the threat model.
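
As a toy illustration of one slice of these guarantees, individual verifiability (a sketch of my own, far simpler than deployed protocols and deliberately omitting the cryptography needed for a private tally): the voter publishes a hiding commitment to a public bulletin board and can later check that it is still there, without revealing the vote.

    import hashlib
    import secrets

    BOARD = []  # public bulletin board (toy model)

    def cast(vote):
        """Commit to a vote: the high-entropy nonce hides the choice,
        and the hash lets the voter recognize their own ballot later."""
        nonce = secrets.token_hex(16)
        receipt = hashlib.sha256(f"{vote}|{nonce}".encode()).hexdigest()
        BOARD.append(receipt)
        return nonce, receipt

    def verify(receipt):
        """Check 'recorded as cast'; real protocols must also guarantee
        'counted as recorded', typically via zero-knowledge tally proofs."""
        return receipt in BOARD

    _, receipt = cast("candidate A")
    assert verify(receipt)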