~~NOCACHE~~
/* DO NOT EDIT THIS FILE */
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday December 13, 2016, 11AM, Salle 1007\\
**Amos Korman** (CNRS, IRIF) //From Ants to Query Complexity//
\\
I will talk about my recent adventures with ants. Together with biologists, we study //P. longicornis// ants as they collaboratively transport a large food item to their nest. This collective navigation process is guided by pheromones laid by individual ants. Using a new methodology to detect scent marks, we identify a new kind of ant trail characterized by very short and dynamic pheromone markings and a highly stochastic navigation response to them. We argue that such a trail can be highly beneficial in conditions in which the knowledge of individual ants regarding the underlying topological structure is unreliable. This gives rise to a new theoretical model of search under unreliable guiding instructions, which is of independent computational interest. To illustrate the model, imagine driving a car in an unknown country in the aftermath of a major hurricane that has randomly flipped a small fraction of the road signs. Under such conditions of unreliability, how can you still reach your destination quickly? I will discuss the limits of unreliability that allow for efficient navigation. In trees, for example, a phase transition occurs roughly around noise level 1/sqrt(D): above this threshold, every algorithm needs time exponential in the original distance to find the target, while below it we identify an optimal, almost linear, walking algorithm. Finally, I will discuss algorithms that, under such a noisy model, aim to minimize the number of queries needed to find a target (rather than the number of moves).
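As a toy illustration of the model (my construction, not the paper's algorithm), consider a walker on the integer line who follows signs pointing toward the target, where each sign has been independently flipped with probability `flip_prob`:

```python
import random

def noisy_advice_walk(distance, flip_prob, seed=0, max_steps=10**6):
    """Walk on the non-negative integers from `distance` toward 0.
    At each step the sign at the current node points toward the target,
    but has been flipped with probability `flip_prob`. Returns the
    number of steps taken to reach 0 (or max_steps on failure)."""
    rng = random.Random(seed)
    pos, steps = distance, 0
    while pos > 0 and steps < max_steps:
        step = -1 if rng.random() >= flip_prob else +1  # follow the (possibly flipped) sign
        pos = max(0, pos + step)
        steps += 1
    return steps
```

With no noise the walk takes exactly `distance` steps, and for any flip probability below 1/2 it stays linear in expectation. On trees, the branching makes wrong turns far costlier, which is where the 1/sqrt(D) threshold discussed in the talk arises.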
This talk is based on joint works with biologists: Ofer Feinerman, Udi Fonio, Yael Heyman and Aviram Gelblum, and CS co-authors: Lucas Boczkowski, Adrian Kosowski and Yoav Rodeh.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday December 6, 2016, 11AM, Salle 1007\\
**Omar Fawzi** //Algorithmic aspects of optimal channel coding//
\\
We study the problem of finding the maximum success probability for transmitting messages over a noisy channel from an algorithmic point of view. In particular, we show that a simple greedy polynomial-time algorithm computes a code achieving a (1-1/e)-approximation of the maximum success probability, and that it is NP-hard to obtain an approximation ratio strictly better than (1-1/e). Moreover, the natural linear programming relaxation of this problem corresponds to the Polyanskiy-Poor-Verdú bound, which we also show has a value of at most 1/(1-1/e) times the maximum success probability.
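The (1-1/e) guarantee is the hallmark of greedy maximization of a monotone submodular function. A minimal sketch of that pattern on the classical maximum-coverage problem (an illustrative analogue, not the paper's channel-coding algorithm):

```python
def greedy_max_coverage(subsets, k):
    """Pick k subsets greedily, each time taking the one that covers the
    most still-uncovered elements; the standard submodularity argument
    gives a (1 - 1/e) approximation to the best possible coverage."""
    covered, chosen = set(), []
    for _ in range(k):
        best = max(subsets, key=lambda s: len(s - covered))
        chosen.append(best)
        covered |= best
    return chosen, covered
```

The same greedy/LP-rounding interplay is what connects the algorithm to the linear programming relaxation mentioned above.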
Based on joint work with Siddharth Barman.
arXiv:1508.04095
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Friday December 2, 2016, 11AM, Salle 1007\\
**Luc Sanselme** //Determinism and Computational Power of Real Measurement-based Quantum Computation//
\\
Measurement-based quantum computing (MBQC) is a universal model for quantum computation. The combinatorial characterisation of determinism in this model, which is driven by measurements and hence fundamentally probabilistic, is the cornerstone of most of the breakthrough results in this field. The most general known sufficient condition for an MBQC to be driven deterministically is that the underlying graph of the computation has a particular kind of flow called a Pauli flow. Whether the Pauli flow is also necessary was an open question. We show that the Pauli flow is necessary for real MBQC, but not in general, providing counter-examples for (complex) MBQC.
We explore the consequences of this result for real MBQC and its applications. Real MBQC, and more generally real quantum computing, is known to be universal for quantum computing. Real MBQC has been used for interactive proofs by McKague; the two-prover case corresponds to real MBQC on bipartite graphs. While (complex) MBQC on bipartite graphs is universal, the universality of real MBQC on bipartite graphs was an open question. We show that real bipartite MBQC is not universal, by proving that all measurements of a real bipartite MBQC can be parallelised, leading to constant-depth computations. As a consequence, McKague's techniques cannot lead to two-prover interactive proofs.
Joint work with Simon Perdrix.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday November 22, 2016, 11AM, Salle 1007\\
**Anca Nitulescu** (ENS Paris) //On the (In)security of SNARKs in the Presence of Oracles//
\\
In this work we study the feasibility of knowledge extraction for succinct non-interactive arguments of knowledge (SNARKs) in a scenario that, to the best of our knowledge, has not been analyzed before. While prior work focuses on the case of adversarial provers that may receive (statically generated) //auxiliary information//, here we consider the scenario where adversarial provers are given //access to an oracle//. For this setting we study if and under what assumptions such provers can admit an extractor. Our contribution is mainly threefold.
First, we formalize the question of extraction in the presence of oracles by proposing a suitable proof of knowledge definition for this setting. We call SNARKs satisfying this definition O-SNARKs. Second, we show how to use O-SNARKs to obtain formal and intuitive security proofs for three applications (homomorphic signatures, succinct functional signatures, and SNARKs on authenticated data) where we recognize an issue when doing the proof under the standard proof of knowledge definition of SNARKs. Third, we study whether O-SNARKs exist, providing both negative and positive results. On the negative side, we show that, assuming one-way functions, there do not exist O-SNARKs in the standard model for every signing oracle family (and thus for general oracle families as well). On the positive side, we show that when considering signature schemes with appropriate restrictions on the message length, O-SNARKs for the corresponding signing oracles exist, based on classical SNARKs and assuming extraction with respect to specific distributions of auxiliary input.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday November 8, 2016, 11AM, Salle 1007\\
**Arpita Korwar** //Polynomial Identity Testing of Sum of ROABPs//
\\
Polynomials are fundamental objects studied in mathematics. Though univariate polynomials are fairly well-understood, multivariate polynomials are not. Arithmetic circuits are the primary tool used to study polynomials in computer science. They allow for the classification of polynomials according to their complexity.
Polynomial identity testing (PIT) asks if a polynomial, input in the form of an arithmetic circuit, is identically zero.
One special kind of arithmetic circuit is the read-once arithmetic branching program (ROABP), which can be written as a product of univariate polynomial matrices over distinct variables. We will study a characterization of ROABPs. In the process, we obtain a polynomial-time PIT for the sum of constantly many ROABPs.
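To make the model concrete, here is a minimal sketch (my illustration): an ROABP evaluated as an ordered product of univariate matrices, together with the randomized Schwartz-Zippel identity test that whitebox algorithms such as the one in the talk aim to derandomize:

```python
import random
from functools import reduce

def matmul(A, B):
    """Multiply two matrices given as lists of lists."""
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

def eval_roabp(layers, point):
    """An ROABP is an ordered product M_1(x_1) * M_2(x_2) * ... of small
    univariate polynomial matrices; here each layer is a function mapping
    the value of its variable to a matrix. Output: entry (0, 0)."""
    return reduce(matmul, (layer(v) for layer, v in zip(layers, point)))[0][0]

def probably_zero(poly, n_vars, trials=20, prime=10**9 + 7):
    """Schwartz-Zippel randomized PIT: a nonzero low-degree polynomial
    evaluates to nonzero at a uniformly random point w.h.p."""
    return all(poly([random.randrange(prime) for _ in range(n_vars)]) % prime == 0
               for _ in range(trials))
```

For instance, two layers of the form `x -> [[x, 0], [0, 1]]` compute the product x1*x2, and testing whether two programs agree reduces to a zero test of their difference.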
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday October 25, 2016, 11AM, Salle 1007\\
**Eric Angel** (Université d'Évry Val d'Essonne, IBISC) //Clustering on k-edge-colored graphs.//
\\
We study the Max k-colored clustering problem: given an edge-colored graph with k colors, color the vertices of the graph so as to find a clustering of the vertices maximizing the number (or the weight) of matched edges, i.e. edges whose two endpoints both receive the edge's color. We show that the cardinality version is NP-hard even for edge-colored bipartite graphs with chromatic degree two and k ≥ 3. Our main result is a constant-factor approximation algorithm for the weighted version of the Max k-colored clustering problem, based on rounding a natural linear programming relaxation. For graphs with chromatic degree two, we improve this ratio by exploiting the relation of our problem to the Max 2-AND problem. We also present a reduction to the maximum-weight independent set problem in bipartite graphs, which leads to a polynomial-time algorithm for the case of two colors.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday October 18, 2016, 11AM, Salle 1007\\
**Carola Doerr** //Provable Performance Gains via Dynamic Parameter Choices in Heuristic Optimization//
\\
In many optimization heuristics there are a number of parameters to be chosen. These parameters typically have a crucial impact on the performance of the algorithm. It is therefore of great interest to set these parameters wisely. Unfortunately, determining the optimal parameter choices for a randomized search heuristic via mathematical means is a rather difficult task. Even worse, for many problems the optimal parameter choices seem to change during the optimization process. While this seems quite intuitive, little theoretical evidence exists to support this claim.
In a series of recent works we have proposed two very simple success-based update rules for the parameter settings of some standard search heuristics. For both these rules we can prove that they yield a better performance than any static parameter choice.
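A minimal sketch of a success-based update in the spirit of the classical one-fifth rule (an illustration on OneMax with the (1+1) EA; the talk's exact heuristics and update rules may differ): the mutation strength grows on improvement and shrinks otherwise.

```python
import random

def self_adjusting_onemax(n, seed=0, max_iters=200000):
    """(1+1) EA maximizing OneMax (the number of ones) with a
    success-based mutation-rate update: double the rate parameter r on
    strict improvement, shrink it gently on failure. Returns the
    number of iterations used to reach the optimum."""
    rng = random.Random(seed)
    x = [rng.randrange(2) for _ in range(n)]
    r = 1.0
    for it in range(1, max_iters + 1):
        y = [bit ^ (rng.random() < r / n) for bit in x]  # flip each bit w.p. r/n
        if sum(y) > sum(x):
            x, r = y, min(4.0, 2.0 * r)   # success: be bolder
        else:
            r = max(0.5, r / 2 ** 0.25)   # failure: back off slowly
        if sum(x) == n:
            return it
    return max_iters
```

The proofs in the talk show that such dynamic schemes can provably beat every static choice of the rate parameter.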
Based on joint work with Benjamin Doerr (Ecole Polytechnique), Timo Koetzing (HPI Potsdam, Germany), and Jing Yang (Ecole Polytechnique).
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday October 11, 2016, 11AM, Salle 1007\\
**Dieter van Melkebeek** (University of Wisconsin, Madison) //Deterministic Isolation for Space-Bounded Computation//
\\
Isolation is the process of singling out a solution to a problem that may have many solutions. It plays an important role in the design of efficient parallel algorithms as it ensures that the various parallel processes all work towards a single global solution rather than towards individual solutions that may not be compatible with one another. For example, the best parallel algorithms for finding perfect matchings in graphs hinge on isolation for this reason. Isolation is also an ingredient in some efficient sequential algorithms. For example, the best running times for certain NP-hard problems like finding Hamiltonian paths in graphs are achieved via isolation.
All of these algorithms are randomized, and the only reason is the use of the Isolation Lemma -- that for any set system over a finite universe, a random assignment of small integer weights to the elements of the universe has a high probability of yielding a unique set of minimum weight in the system. For each of the underlying problems it is open whether deterministic algorithms of similar efficiency exist.
This talk is about the possibility of deterministic isolation in the space-bounded setting. The question is: Can one always make the accepting computation paths of nondeterministic space-bounded machines unique without changing the underlying language and without blowing up the space by more than a constant factor? Or equivalently, does there exist a deterministic logarithmic space mapping reduction from directed st-connectivity to itself that transforms positive instances into ones where there is a unique path from s to t?
I will present some recent results towards a resolution of this question, obtained jointly with Gautam Prakriya. Our approach towards a positive resolution can be viewed as derandomizing the Isolation Lemma in the context of space-bounded computation.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday September 20, 2016, 11AM, Salle 1007\\
**Eldar Fischer** (Faculty of CS, Technion - Israel Institute of Technology) //Improving and extending testing of distributions for shape restrictions.//
\\
Distribution property testing deals with what information can be deduced about an unknown distribution over {1,...,n}, where we are only allowed to obtain a relatively small number of independent samples from the distribution. In the basic model the algorithm may only base its decision on receiving a sequence of samples from the distribution, while in the conditional model the algorithm may also request samples from the conditional distribution over subsets of {1,...,n}.
A test has to distinguish a distribution satisfying a given property from a distribution that is far in variation distance from satisfying it. A range of properties, such as monotonicity and log-concavity, have been unified under the banner of L-decomposable properties. Here we improve upon the basic model test for all such properties, as well as provide a new test under the conditional model whose number of queries does not directly depend on n. We also provide a conditional model test for a wider range of properties, which in particular yields tolerant testing for all L-decomposable properties. For tolerant testing, conditional samples are essential, as an efficient test in the basic model is known not to exist.
Our main mechanism is a way of efficiently producing a partition of {1,...,n} into intervals satisfying a small-weight requirement with respect to the unknown distribution. Also, we show that investigating just one such partition is sufficient for solving the testing question, as opposed to prior works where a search for the "correct" partition was performed.
Joint work with Oded Lachish and Yadu Vasudev.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday September 13, 2016, 11AM, Salle 1007\\
**Tatiana Starikovskaya** (IRIF, Université Paris Diderot) //Streaming and communication complexity of Hamming distance//
\\
We will discuss the complexity of one of the most basic problems in pattern matching: approximating the Hamming distance. Given a pattern P of length n, the task is to output an approximation of the Hamming distance (that is, the number of mismatches) between P and every n-length substring of a longer text. We provide the first efficient one-way randomised communication protocols, as well as a new, fast and space-efficient streaming algorithm for this problem.
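For reference, the exact offline computation that the streaming and communication protocols approximate (a naive O(n)-per-window baseline; the talk's algorithms use far less space):

```python
def sliding_hamming(pattern, text):
    """Hamming distance between `pattern` and every window of `text`
    of the same length, computed naively."""
    n = len(pattern)
    return [sum(a != b for a, b in zip(pattern, text[i:i + n]))
            for i in range(len(text) - n + 1)]
```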
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Monday August 29, 2016, 11AM, Room 2002\\
**Sanjeev Khanna** (University of Pennsylvania) //On the Single-Pass Streaming Complexity of the Set Cover Problem//
\\
In the set cover problem, we are given a collection of $m$ subsets over a universe of $n$ elements, and the goal is to find a sub-collection of sets whose union covers the universe. The set cover problem is a fundamental optimization problem with many applications in computer science and related disciplines. In this talk, we investigate the set cover problem in the streaming model of computation whereby the sets are presented one by one in a stream, and the goal is to solve the set cover problem using a space-efficient algorithm.
We show that to compute an $\alpha$-approximate set cover (for any $\alpha= o(\sqrt{n})$) via a single-pass streaming algorithm, $\Theta(mn/\alpha)$ space is both necessary and sufficient (up to an $O(\log{n})$ factor). We further study the problem of estimating the size of a minimum set cover (as opposed to finding the actual sets), and show that this turns out to be a distinctly easier problem. Specifically, we prove that $\Theta(mn/\alpha^2)$ space is both sufficient and necessary (up to logarithmic factors) for estimating the size of a minimum set cover to within a factor of $\alpha$. Our algorithm in fact works for the more general problem of estimating the optimal value of a covering integer program. These results provide a tight resolution of the space-approximation tradeoff for single-pass streaming algorithms for the set cover problem.
This is joint work with my students Sepehr Assadi and Yang Li.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday July 5, 2016, 11AM, Salle 1007\\
**Alexandra Kolla** (University of Illinois at Urbana-Champaign) //Towards Constructing Expanders via Lifts: Hopes and Limitations.//
\\
In this talk, I will examine the spectrum of random k-lifts of d-regular graphs. We show that, for random shift k-lifts (which include 2-lifts), if all the nontrivial eigenvalues of the base graph G are at most \lambda in absolute value, then, with high probability depending only on the number n of nodes of G (and not on k), for k //small enough// the absolute value of every nontrivial eigenvalue of the lift is at most O(\lambda).
While previous results on random lifts were asymptotically true with high probability in the degree k of the lift, our result is the first upper bound on the spectra of lifts for bounded k. In particular, it implies that a typical small lift of a Ramanujan graph is almost Ramanujan. I will present a quasi-polynomial time algorithm for constructing almost-Ramanujan expanders through such lifts. I will also discuss some impossibility results for large k, which, as one consequence, imply that there is no hope of constructing large Ramanujan graphs from large abelian k-lifts.
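A hedged sketch of the underlying construction (the standard definition, my encoding): in a shift k-lift, each vertex becomes k copies and each edge {u, v} picks a shift s in Z_k, connecting copy (u, i) to copy (v, (i + s) mod k).

```python
import random

def random_shift_k_lift(n, edges, k, seed=0):
    """Build a random shift k-lift of a graph on vertices 0..n-1.
    Vertices of the lift are pairs (v, i); each base edge receives one
    uniformly random cyclic shift. Returns the lifted edge list."""
    rng = random.Random(seed)
    lifted = []
    for u, v in edges:
        s = rng.randrange(k)
        for i in range(k):
            lifted.append(((u, i), (v, (i + s) % k)))
    return lifted
```

Each base edge lifts to a perfect matching between the k copies of its endpoints, so degrees are preserved; k = 2 recovers the signed 2-lifts of Bilu and Linial.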
Based on joint work with Naman Agarwal, Karthik Chandrasekaran, and Vivek Madan.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday June 21, 2016, 11AM, Salle 1007\\
**Nathanaël Fijalkow** //Alternating Communication Complexity, with Applications to Online Space Complexity//
\\
We study the model of alternating communication complexity introduced by Babai, Frankl and Simon in 1986. We extend the rank lower bound to this setting. We show some applications of this technique for online space complexity, as defined by Karp in the 60s. This measure of complexity quantifies the amount of space used by a Turing machine whose input tape can read each symbol only once, from left to right.
In particular, we obtain logarithmic lower bounds on the alternating online space complexity of the set of prime numbers written in binary, which is an exponential improvement over the previous result due to Hartmanis and Shank in 1968.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday May 31, 2016, 11AM, Salle 1007\\
**Stacey Jeffery** (Institute for Quantum Information and Matter, Caltech) //Span Programs, NAND-Trees, and Graph Connectivity//
\\
We show a connection between NAND-tree evaluation and st-connectivity problems on certain graphs to generalize a superpolynomial quantum speedup of Zhan et al. for a promise version of NAND-tree formula evaluation. In particular, we show that the quantum query complexity of evaluating NAND-tree instances with average choice complexity at most W is O(W), where average choice complexity is a measure of the difficulty of winning the associated two-player game. Our results follow from relating average choice complexity to the effective resistance of these graphs, which itself corresponds to the span program witness size. These connections suggest an interesting relationship between st-connectivity problems and span program algorithms, that we hope may have further applications.
This is joint work with Shelby Kimmel.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday May 24, 2016, 11AM, Salle 1007\\
**Tim Black** (University of Chicago) //Monotone Properties of k-Uniform Hypergraphs are Weakly Evasive.//
\\
The decision-tree complexity of a Boolean function is the number of input bits that must be queried (adaptively) in the worst case to determine the value of the function. A Boolean function in n variables is weakly evasive if its decision-tree complexity is Omega(n). By k-graphs we mean k-uniform hypergraphs. A k-graph property on v vertices is a Boolean function on n = \binom{v}{k} variables corresponding to the k-subsets of a v-set that is invariant under the v! permutations of the v-set (isomorphisms of k-graphs).
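The measure can be made concrete with a brute-force computation on tiny functions (exponential time, purely illustrative):

```python
from itertools import product

def decision_tree_depth(f, n):
    """Deterministic decision-tree complexity D(f) of a Boolean function
    f on n bits, by exhaustive recursion: if the current restriction is
    not yet constant, query the variable minimizing the worst-case depth."""
    def rec(known):
        free = [i for i in range(n) if i not in known]
        outs = set()
        for bits in product((0, 1), repeat=len(free)):
            x = dict(known)
            x.update(zip(free, bits))
            outs.add(f(tuple(x[i] for i in range(n))))
        if len(outs) == 1:
            return 0
        return min(1 + max(rec({**known, i: 0}), rec({**known, i: 1}))
                   for i in free)
    return rec({})
```

A function is evasive exactly when this value equals n, as for OR below; weak evasiveness only asks for Omega(n).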
Rivest and Vuillemin (1976) proved that all non-constant monotone graph properties (k=2) are weakly evasive, confirming a conjecture of Aanderaa and Rosenberg (1973). Kulkarni, Qiao, and Sun (2013) proved the analogous result for 3-graphs. We extend these results to k-graphs for every fixed k. From this, we show that monotone Boolean functions invariant under the action of a large primitive group are weakly evasive.
While KQS (2013) employ the powerful topological approach of Kahn, Saks, and Sturtevant (1984) combined with heavy number theory, our argument is elementary and self-contained (modulo some basic group theory). Inspired by the outline of the KQS approach, we formalize the general framework of "orbit augmentation sequences" of sets with group actions. We show that a parameter of such sequences, called the "spacing," is a lower bound on the decision-tree complexity for any nontrivial monotone property that is G-invariant for all groups G involved in the orbit augmentation sequence, assuming all those groups are p-groups. We develop operations on such sequences such as composition and direct product which will provide helpful machinery for our applications. We apply this general technique to k-graphs via certain liftings of k-graphs with wreath product action of p-groups.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday May 17, 2016, 11AM, Salle 1007\\
**Nikhil Bansal** (Eindhoven University of Technology) //Solving optimization problems on noisy planar graphs//
\\
Many graph problems that are hard to approximate on general graphs become much more tractable on planar graphs. In particular, planar graphs can be decomposed into small pieces or into bounded-treewidth graphs, leading to PTASes for these problems. But little is known about the noisy setting, where the graphs are only nearly planar, i.e. deleting a few edges makes them planar.
One obstacle is that current planar decomposition techniques fail completely in the presence of noise. Another obstacle is that the known guarantees for the planarization problem are too weak for our purpose. We show that using linear programming methods, such as configuration LPs and spreading metrics, one can get around these barriers and obtain PTASes for many problems on noisy planar graphs. This resolves an open question of Magen and Moharrami, which was recently popularized by Claire Mathieu.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday May 10, 2016, 11AM, Salle 1007\\
**Jean Cardinal** //Solving k-SUM using few linear queries//
\\
In the k-SUM problem, we are given n input real numbers and must determine whether any k of them sum to zero. The problem is of tremendous importance in the emerging field of complexity theory within P, and it is in particular open whether it admits an algorithm of complexity O(n^c) with c<⌈k/2⌉. Inspired by an algorithm due to Meiser (1993), we show that there exist linear decision trees and algebraic computation trees of depth O(n^3 log^3(n)) solving k-SUM. Furthermore, we show that there exists a randomized algorithm that runs in Õ(n^{⌈k/2⌉+8}) time and performs O(n^3 log^3(n)) linear queries on the input. Thus, it is possible to have an algorithm with a runtime almost identical (up to the +8) to that of the best known algorithm, but, for the first time, with the number of queries on the input bounded by a polynomial independent of k. The O(n^3 log^3(n)) bound on the number of linear queries is also tighter than for any known algorithm solving k-SUM, even allowing unlimited total time outside of the queries. By simultaneously achieving few queries to the input without significantly sacrificing runtime vis-à-vis known algorithms, we deepen the understanding of this canonical problem, a cornerstone of complexity within P.
We also consider a range of tradeoffs between the number of terms involved in the queries and the depth of the decision tree. In particular, we prove that there exist o(n)-linear decision trees of depth o(n^4).
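For context, the classical O(n^⌈k/2⌉)-time benchmark that the open question refers to can be sketched as follows (a standard meet-in-the-middle approach, not the talk's decision-tree construction):

```python
from itertools import combinations

def ksum(nums, k):
    """Decide k-SUM by hashing the sums of all floor(k/2)-subsets, then
    looking up the negated sum of each remaining ceil(k/2)-subset,
    insisting on disjoint index sets."""
    h = k // 2
    table = {}
    for idxs in combinations(range(len(nums)), h):
        table.setdefault(sum(nums[i] for i in idxs), []).append(set(idxs))
    for idxs in combinations(range(len(nums)), k - h):
        for left in table.get(-sum(nums[i] for i in idxs), []):
            if left.isdisjoint(idxs):
                return True
    return False
```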
Joint work with John Iacono and Aurélien Ooms.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday May 3, 2016, 11AM, Salle 1007\\
**Mehdi Mhalla** (LIG Grenoble) //Pseudotelepathy games with graph states, contextuality and multipartiteness width.//
\\
Analyzing pseudotelepathy graph games, we propose a way to build contextuality scenarios exhibiting quantum supremacy using graph states. We consider the combinatorial structures generating equivalent scenarios. We investigate which scenarios are more multipartite and show that there exist graphs generating scenarios with a linear multipartiteness width.
This is based on a joint work with Peter Hoyer and Simon Perdrix.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday April 26, 2016, 11AM, Salle 4033\\
**Jan Hackfeld** (TU Berlin) //Undirected Graph Exploration with Θ(log log n) Pebbles//
\\
We consider the fundamental problem of exploring an undirected and initially unknown graph by an agent with little memory. The vertices of the graph are unlabeled, and the edges incident to a vertex have locally distinct labels. In this setting, it is known that Θ(log n) bits of memory are necessary and sufficient to explore any graph with at most n vertices. We show that this memory requirement can be decreased significantly by making part of the memory distributable in the form of pebbles. A pebble is a device that can be dropped to mark a vertex and can be collected when the agent returns to that vertex. We show that O(log log n) distinguishable pebbles and O(log log n) bits of memory are sufficient for an agent to explore any bounded-degree graph with at most n vertices. We match this result with a lower bound showing that for any agent with sub-logarithmic memory, Ω(log log n) distinguishable pebbles are necessary for exploration.
This talk is based on joint work with Yann Disser and Max Klimm (SODA'16).
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday April 19, 2016, 11AM, Salle 1007\\
**Charles Bennett** //Is there such a thing as private classical information?//
\\
Classical secret information lies on a slippery slope between public information and quantum information. Even leaving aside fanciful attacks like neutrino tomography, a typical classical secret (say, a paper document locked in a safe) quickly decoheres and becomes recoverable in principle from the environment outside the safe. On the other hand, if a system is so well insulated from its environment that it does not decohere, it can be used as a quantum memory, capable of existing in a superposition of classical states and of being entangled with other quantum memories. We discuss the practical and theoretical difficulty of recovering a classical secret from its decohered environment, and of protecting a classical secret by arranging that some information required to recover it escapes into parts of the environment inaccessible to the eavesdropper.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Wednesday March 30, 2016, 2PM, Salle 3058\\
**Manoj Prabhakaran** (University of Illinois, Urbana-Champaign) //Rényi Information Complexity and an Information Theoretic Characterization of the Partition Bound//
\\
We introduce a new information-theoretic complexity measure IC∞ for 2-party functions which is a lower bound on communication complexity, and which has the two leading lower bounds on communication complexity as its natural relaxations: (external) information complexity (IC) and the logarithm of partition complexity (prt). These two lower bounds had so far appeared conceptually quite different from each other, but we show that they are both obtained from IC∞ using two different, but natural, relaxations:
* IC∞ is similar to information complexity IC, except that it uses Rényi mutual information of order ∞ instead of Shannon's mutual information (which is Rényi mutual information of order 1). Hence, the relaxation of IC∞ that yields IC is to change the order of Rényi mutual information used in its definition from ∞ to 1.
* The relaxation that connects IC∞ with partition complexity is to replace protocol transcripts used in the definition of IC∞ with what we term "pseudotranscripts," which omits the interactive nature of a protocol, but only requires that the probability of any transcript given inputs x and y to the two parties, factorizes into two terms which depend on x and y separately. While this relaxation yields an apparently different definition than (log of) partition function, we show that the two are in fact identical. This gives us a surprising characterization of the partition bound in terms of an information-theoretic quantity.
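For background (standard definitions; the paper's precise formulation of IC∞ may differ in conditioning details), the Rényi divergence of order \alpha and its order-∞ limit, which replaces the averaging underlying Shannon's quantities by a maximum:

```latex
D_{\alpha}(P \,\|\, Q) \;=\; \frac{1}{\alpha - 1}\,\log \sum_{x} P(x)^{\alpha}\, Q(x)^{1-\alpha},
\qquad
D_{\infty}(P \,\|\, Q) \;=\; \log \max_{x \,:\, P(x) > 0} \frac{P(x)}{Q(x)}.
```

Shannon mutual information is recovered as the \alpha \to 1 limit, which is exactly the relaxation from IC∞ to IC described in the first bullet.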
Further understanding IC∞ might have consequences for important direct-sum problems in communication complexity, as it lies between communication complexity and information complexity.
We also show that if both the above relaxations are simultaneously applied to IC∞, we obtain a complexity measure that is lower-bounded by the (log of) relaxed partition complexity, a complexity measure introduced by Kerenidis et al. (FOCS 2012). We obtain a similar (but incomparable) connection between (external) information complexity and relaxed partition complexity as Kerenidis et al., using an arguably more direct proof.
This is joint work with Vinod Prabhakaran.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Friday March 11, 2016, 11AM, Salle 1007\\
**Christian Konrad** (Reykjavik University) //Streaming Algorithms for Partitioning Sequences and Trees//
\\
Partitioning sequences and trees are classical load balancing problems that received considerable attention in the 1980s and 1990s. For both problems, exact algorithms with different characteristics exist. However, in the context of massive data sets these algorithms fail, since they assume random access to the input, an assumption that can hardly be granted. The key motivation of this work is the partitioning of current XML databases, some of which reach many terabytes in size. In an XML database, data is organized in a tree structure.
In this talk, I will present streaming algorithms for both problems. The presented algorithms require a random access memory whose size is only logarithmic in the size of the input, which makes them good candidates for performing well in practice. This work will be presented next week at ICDT 2016, the 19th International Conference on Database Theory.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday February 23, 2016, 11AM, Salle 1007\\
**Ashwin Nayak** (University of Waterloo) //Sampling quantum states//
\\
A classic result in information theory, the source coding theorem, shows how we may compress a sample from a random variable X into essentially H(X) bits on average, without any loss. (Here H(X) denotes the Shannon entropy of X.) We revisit the analogous problem in quantum communication, in the presence of shared entanglement. No characterization of the communication needed for lossless compression is known in this scenario. We study a natural protocol for such compression, quantum rejection sampling, and give bounds on its complexity. Even though we do not have a precise characterization of the complexity, we show how it may be used to derive some consequences of lossless compression.
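For intuition, the classical procedure whose coherent analogue the talk studies (standard rejection sampling; the quantum protocol itself operates on superpositions):

```python
import random

def draw(probs, rng):
    """Inverse-CDF sample from a finite distribution given as a list."""
    u, s = rng.random(), 0.0
    for x, p in enumerate(probs):
        s += p
        if u < s:
            return x
    return len(probs) - 1

def rejection_sample(target, proposal, rng):
    """Classical rejection sampling: draw x from `proposal` and accept
    with probability target[x] / (M * proposal[x]), where
    M = max_x target[x] / proposal[x]; the expected number of proposal
    draws per accepted sample is M."""
    M = max(p / q for p, q in zip(target, proposal) if p > 0)
    while True:
        x = draw(proposal, rng)
        if rng.random() * M * proposal[x] < target[x]:
            return x
```

The cost parameter M plays the role that amplitude ratios play in the quantum version.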
Joint work with Ala Shayeghi.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday February 16, 2016, 11AM, Salle 1007\\
**Johan Thapper** (Université Paris-Est, Marne-la-Vallée, LIGM) //Constraint Satisfaction Problems, LP relaxations and Polymorphisms//
\\
An instance of the Constraint Satisfaction Problem (CSP) is given by a set of constraints over a set of variables. The variables take values from a (finite) domain and the constraints are specified by relations over the domain that need to hold between various subsets of variables. In the late 90s, it was realised that the complexity of the CSP restricted to some fixed set of relations is captured by an associated set of operations called polymorphisms. This connection has led to a great influx of ideas and tools (as well as researchers) from universal algebra, a field of mathematics that in particular studies algebras of such operations.
A quite general optimisation version of the CSP is obtained by replacing the relations by arbitrary functions from tuples of domain values to the rationals extended with positive infinity. The goal of this problem, called the Valued Constraint Satisfaction Problem (VCSP), is to minimise a sum of such functions over all assignments. The complexity classification project of the VCSP has taken some great strides over the last four years and has recently been reduced to its more famous decision problem counterpart: the dichotomy conjecture for the CSP.
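A hedged sketch of the VCSP objective (my encoding of the definition above; real solvers and the LP relaxations discussed in the talk are far more sophisticated): cost functions map tuples of domain values to rationals or infinity, and the goal is the minimum total cost over all assignments.

```python
from itertools import product

def vcsp_min(n_vars, domain, constraints):
    """Brute-force VCSP: each constraint is (scope, cost), where `scope`
    is a tuple of variable indices and `cost` maps value-tuples to a
    number. Tuples absent from `cost` are treated as forbidden
    (infinite cost), encoding hard, CSP-style constraints."""
    inf = float("inf")
    best = inf
    for assign in product(domain, repeat=n_vars):
        total = sum(cost.get(tuple(assign[v] for v in scope), inf)
                    for scope, cost in constraints)
        best = min(best, total)
    return best
```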
I will talk about how polymorphisms appear in the study of the CSP and some of what universal algebra has taught us. I will then show how these results can be used for characterising the efficacy of Sherali-Adams linear programming relaxations of the VCSP.
This is based on joint work with Standa Zivny, University of Oxford (UK).
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday February 2, 2016, 11AM, Salle 1007\\
**Balthazar Bauer** //Compression of communication protocols//
\\
In communication theory, a protocol has two key measures: its communication complexity and its information complexity. There is intense activity in the research community to find interesting relations between these two quantities. One idea is to try to compress a protocol and to examine how efficient the compression is for a protocol whose communication and information costs are known. Here we will see some recent compression schemes. It will also be a good occasion to discover some common tricks and algorithms in communication and information theory.
[[en:seminaires:algocomp:index|Algorithms and complexity]]\\
Tuesday January 26, 2016, 11AM, Salle 1007\\
**Chien-Chung Huang** (Chalmers University of Technology and Göteborg University) //Exact and Approximation Algorithms for Weighted Matroid Intersection//
\\
We propose new exact and approximation algorithms for the weighted matroid intersection problem. Our exact algorithm is faster than previous algorithms when the largest weight is relatively small. Our approximation algorithm delivers a $(1-\epsilon)$-approximate solution with a running time significantly faster than most known exact algorithms.
The core of our algorithms is a decomposition technique: we decompose an instance of the weighted matroid intersection problem into a set of instances of the unweighted matroid intersection problem. The computational advantage of this approach is that we can make use of fast unweighted matroid intersection algorithms as a black box for designing algorithms. Precisely speaking, we prove that we can solve the weighted matroid intersection problem via solving $W$ instances of the unweighted matroid intersection problem, where $W$ is the largest given weight.
Furthermore, we can find a $(1-\epsilon)$-approximate solution via solving $O(\epsilon^{-1} \log r)$ instances of the unweighted matroid intersection problem, where $r$ is the smallest rank of the given two matroids.