Voluntary talks

What are voluntary talks and why?

Due to the COVID-19 pandemic, we faced a number of travel cancellations. To preserve the quality of the scientific programme, we offered participants the opportunity to give a voluntary talk. These talks cover work already presented at other conferences, published in journals, or recently submitted. We thank all 15 speakers who promptly and positively answered. Below is the list of voluntary talks.

List of voluntary talks

  • Martin Dietzfelbinger and Stefan Walzer. Efficient Gauss Elimination for Near-Quadratic Matrices with One Short Random Block per Row, with Applications
    • presented at ESA 2019
    • Abstract: In this paper we identify a new class of sparse near-quadratic random Boolean matrices that have full row rank over F_2 = {0,1} with high probability and can be transformed into echelon form in almost linear time by a simple version of Gauss elimination. The random matrix with dimensions n(1-epsilon) x n is generated as follows: In each row, identify a block of length L = O((log n)/epsilon) at a random position. The entries outside the block are 0, the entries inside the block are given by fair coin tosses. Sorting the rows according to the positions of the blocks transforms the matrix into a kind of band matrix, on which, as it turns out, Gauss elimination works very efficiently with high probability. For the proof, the effects of Gauss elimination are interpreted as a ("coin-flipping") variant of Robin Hood hashing, whose behaviour can be captured in terms of a simple Markov model from queuing theory. Bounds for expected construction time and high success probability follow from results in this area. They readily extend to larger finite fields in place of F_2. By employing hashing, this matrix family leads to a new implementation of a retrieval data structure, which represents an arbitrary function f: S -> {0,1} for some set S of m = (1-epsilon)n keys. It requires m/(1-epsilon) bits of space, construction takes O(m/epsilon^2) expected time on a word RAM, while queries take O(1/epsilon) time and access only one contiguous segment of O((log m)/epsilon) bits in the representation (O(1/epsilon) consecutive words on a word RAM). The method is readily implemented and highly practical, and it is competitive with state-of-the-art methods. In a more theoretical variant, which works only for unrealistically large S, we can even achieve construction time O(m/epsilon) and query time O(1), accessing O(1) contiguous memory words for a query. By well-established methods the retrieval data structure leads to efficient constructions of (static) perfect hash functions and (static) Bloom filters with almost optimal space and very local storage access patterns for queries.
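To see the band-matrix idea at toy scale, here is a minimal Python sketch (not the paper's implementation): each sparse row is held as an integer bitmask, rows are sorted by block position as in the abstract, and plain GF(2) elimination is run; the function name and the toy rows are made up for illustration.

```python
def gf2_echelon_rank(rows):
    """Toy GF(2) elimination on rows given as (block_start, bits) pairs.

    `bits` is the full row as a Python int (bit i = column i). Sorting by
    block position mimics the band structure the paper exploits; the
    returned rank is the number of independent rows found.
    """
    pivots = {}                                   # pivot column -> row bits
    for _, bits in sorted(rows):                  # band-matrix row order
        while bits:
            col = bits.bit_length() - 1           # leading 1 of this row
            if col in pivots:
                bits ^= pivots[col]               # eliminate against pivot
            else:
                pivots[col] = bits                # new pivot row
                break
    return len(pivots)
```

On four toy rows where the fourth is the XOR of the first three, the rank is 3 rather than 4, i.e. the dependency is detected.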
  • Max Bannach, Malte Skambath, Till Tantau. Kernelizing the Hitting Set Problem in Linear Sequential and Constant Parallel Time
    • Submitted to SWAT 2020
    • Abstract: We analyze a reduction rule for computing kernels for the hitting set problem: In a hypergraph, the \emph{link} of a set~$c$ of vertices consists of all edges that are supersets of~$c$. We call such a set \emph{critical} if its link has certain easy-to-check size properties. Our rule states that the link of a critical~$c$ can be replaced by~$c$. We derive two algorithms from this rule: First, we obtain a simple linear-time algorithm for computing hitting set kernels of size (number of edges) at most $k^d$ ($k$ is the hitting set size, $d$ is the maximum edge size), beating the current state-of-the-art kernel algorithm regarding both, the kernel size and the algorithmic simplicity. Second, by parallelising the algorithm we obtain the first AC$^0$ kernel algorithm that outputs polynomial-size kernels. Previously, such algorithms were not even known for artificial problems. An interesting application of our methods lies in traditional, non-parameterized approximation theory: Our results imply that uniform AC$^0$-circuits can compute a hitting set whose size is polynomial in the size of an optimal hitting set.
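For context, here is the classic depth-$k$ branching algorithm for Hitting Set (not the kernelization rule from the talk): pick any un-hit edge and branch on which of its at most $d$ vertices joins the solution. Function name and test instance are illustrative.

```python
def hitting_set(edges, k):
    """Classic bounded search tree for Hitting Set.

    edges: list of sets of vertices. Returns a hitting set of size <= k,
    or None if no such set exists. Runs in O(d^k * poly) time, where d is
    the maximum edge size.
    """
    if not edges:
        return set()                  # nothing left to hit
    if k == 0:
        return None                   # budget exhausted, edge un-hit
    for v in edges[0]:                # branch on the first un-hit edge
        rest = [f for f in edges if v not in f]
        sol = hitting_set(rest, k - 1)
        if sol is not None:
            return sol | {v}
    return None
```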
  • Antoine Amarilli, Pierre Bourhis, Stefan Mengel, Matthias Niewerth. Enumeration of MSO Queries on trees with constant delay and logarithmic updates
    • The main result was presented at PODS 2019. I will give a high-level overview of the proof, showing how it connects several partial results that were presented at ICALP 2017, ICDT 2018, and LICS 2018.
    • Abstract: We give an algorithm to enumerate the results on trees of monadic second-order (MSO) queries represented by nondeterministic tree automata. After linear time preprocessing (in the input tree), we can enumerate answers with linear delay (in each answer). We allow updates on the tree to take place at any time, and we can then restart the enumeration after logarithmic time in the tree. Further, all our combined complexities are polynomial in the automaton. Our result implies that, for MSO queries with free first-order variables, we can enumerate the results with linear preprocessing and constant-delay and update the underlying tree in logarithmic time, which improves on several known results for words and trees.
  • Markus Bläser, Christian Ikenmeyer, Vladimir Lysikov, Anurag Pandey, Frank-Olaf Schreyer. Variety Membership Testing, Algebraic Natural Proofs, and Geometric Complexity Theory
    • arXiv:1911.02534
    • Abstract: We study the variety membership testing problem in the case when the variety is given as an orbit closure and the ambient space is the set of all 3-tensors. The first variety that we consider is the slice rank variety, which consists of all 3-tensors of slice rank at most r. We show that the membership testing problem for the slice rank variety is $\NP$-hard. While the slice rank variety is a union of orbit closures, we define another variety, the minrank variety, expressible as a single orbit closure. Our next result is the $\NP$-hardness of membership testing in the minrank variety, hence we establish the $\NP$-hardness of the orbit closure containment problem for 3-tensors. Algebraic natural proofs were recently introduced by Forbes, Shpilka and Volk and independently by Grochow, Kumar, Saks and Saraf. Bläser et al. gave a version of an algebraic natural proof barrier for the matrix completion problem which relies on $\coNP \subseteq \exists \BPP$. It implied that constructing equations for the corresponding variety should be hard. We generalize their approach to work with any family of varieties for which the membership problem is $\NP$-hard and for which we can efficiently generate a dense subset. Therefore, a similar barrier holds for the slice rank and the minrank varieties, too. This allows us to set up the slice rank and the minrank varieties as a test-bed for geometric complexity theory (GCT). We determine the stabilizers of the tensors that generate the orbit closures of the two varieties and prove that these tensors are almost characterized by their symmetries. We prove several nontrivial equations for both the varieties using different GCT methods. Many equations also work in the regime where membership testing in the slice rank or minrank varieties is $\NP$-hard. We view this as a promising sign that the GCT approach might indeed be successful.
  • Andrei Romashchenko and Marius Zimand. An operational characterization of mutual information in algorithmic information theory
    • presented at ICALP 2018, J of ACM 2019
    • Abstract: We show that the mutual information, in the sense of Kolmogorov complexity, of any pair of strings $x$ and $y$ is equal, up to logarithmic precision, to the length of the longest shared secret key that two parties, one having $x$ and the complexity profile of the pair and the other one having $y$ and the complexity profile of the pair, can establish via a probabilistic protocol with interaction on a public channel. For $\ell > 2$, the longest shared secret that can be established from a tuple of strings $(x_1, \ldots, x_\ell)$ by $\ell$ parties, each one having one component of the tuple and the complexity profile of the tuple, is equal, up to logarithmic precision, to the complexity of the tuple minus the minimum communication necessary for distributing the tuple to all parties. We establish the communication complexity of secret key agreement protocols that produce a secret key of maximal length, for protocols with public randomness. We also show that if the communication complexity drops below the established threshold then only very short secret keys can be obtained.
  • Mika Hirvensalo, Etienne Moutot, Abuzer Yakaryılmaz. Affine automata: power and limitations
    • presented at LATA 2017
    • Abstract: Affine automata are a computational model introduced by A. Díaz-Caro and A. Yakaryılmaz in 2016. They are a finite-memory model, whose goal is to investigate interferences caused by real-valued states, like quantum automata, but with an extra non-linear behavior that provide them more computational power. In this presentation we will see an overview of computational limitations of this model, and compare it to other finite memory models like finite, probabilistic and quantum automata.
  • Édouard Bonnet, Nicolas Bousquet, Stéphan Thomassé, and Rémi Watrigant. When Maximum Stable Set can be solved in FPT time
    • Presented at ISAAC 2019
    • Abstract: Maximum Independent Set (MIS for short) is in general graphs the paradigmatic W[1]-hard problem. In stark contrast, polynomial-time algorithms are known when the inputs are restricted to structured graph classes such as, for instance, perfect graphs (which includes bipartite graphs, chordal graphs, co-graphs, etc.) or claw-free graphs. We introduce some variants of co-graphs with \emph{parameterized noise}, that is, graphs that can be made into disjoint unions or complete sums by the removal of a certain number of vertices and the addition/deletion of a certain number of edges per incident vertex, both controlled by the parameter. We give a series of FPT Turing-reductions on these classes and use them to make progress on the parameterized complexity of MIS in H-free graphs. We show that for every fixed $t \geqslant 1$, \textsc{MIS} is FPT in $P(1,t,t,t)$-free graphs, where $P(1,t,t,t)$ is the graph obtained by substituting all the vertices of a four-vertex path but one end of the path by cliques of size $t$. This was the "next open case" towards a complete dichotomy and in particular helped classify all 5-vertex graphs.
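The noiseless base case mentioned in the abstract is easy to see: on a co-graph given as its cotree, MIS decomposes over disjoint unions (sizes add) and complete sums (take the larger side). A minimal sketch, with a hypothetical tuple encoding of cotrees:

```python
def mis_cotree(node):
    """Max independent set size of a co-graph given as a cotree.

    Encoding (illustrative): ('leaf',) for a single vertex,
    ('union', children) for disjoint union, ('join', children) for
    complete sum. alpha(G1 + G2) = alpha(G1) + alpha(G2) under union;
    under join only one side can contribute, so we take the max.
    """
    kind = node[0]
    if kind == 'leaf':
        return 1
    sizes = [mis_cotree(child) for child in node[1]]
    return sum(sizes) if kind == 'union' else max(sizes)
```

For example, the 4-cycle is the join of two 2-vertex independent sets, so its MIS size is 2.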
  • Krishnendu Chatterjee, Laurent Doyen. Graph Planning with Expected Horizon
    • Presented at LICS 2019
    • Abstract: A classical problem in discrete planning is to consider a weighted graph and construct a path that maximizes the sum of weights for a given time horizon T. However, in many scenarios, the time horizon is not fixed, but the stopping time is chosen according to some distribution such that the expected stopping time is T. If the stopping time distribution is not known, then to ensure robustness, the distribution is chosen by an adversary, to represent the worst-case scenario. A stationary plan always chooses the same outgoing edge at every vertex. For fixed horizon T, stationary plans are not sufficient for optimality. Quite surprisingly we show that when an adversary chooses the stopping time distribution with expected stopping time T, then stationary plans are sufficient. While computing optimal stationary plans for fixed horizon is NP-complete, we show that computing optimal stationary plans under adversarial stopping time distribution can be achieved in polynomial time. Consequently, our polynomial-time algorithm for adversarial stopping time also computes an optimal plan among all possible plans.
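The fixed-horizon baseline in the abstract is plain value iteration: T rounds of "best edge plus best remaining value". A minimal sketch, with an illustrative adjacency-list encoding (not the paper's polynomial-time algorithm for the adversarial setting):

```python
def best_walk_value(adj, start, T):
    """Max total weight of a T-step walk from `start`.

    adj[v] is a list of (successor, edge_weight) pairs; every vertex is
    assumed to have at least one outgoing edge. val[v] holds the best
    value achievable from v with the remaining number of steps.
    """
    val = {v: 0.0 for v in adj}                   # 0 steps left
    for _ in range(T):
        val = {v: max(w + val[u] for (u, w) in adj[v]) for v in adj}
    return val[start]
```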
  • Philip Bille, Jonas Ellert, Johannes Fischer, Inge Li Gørtz, Florian Kurpicz, Ian Munro, Eva Rotenberg. Space Efficient Construction of Lyndon Arrays in Linear Time
    • Submitted to ICALP 2020
    • Abstract: Given a string $S$ of length $n$, its Lyndon array identifies for each suffix $S[i..n]$ the next lexicographically smaller suffix $S[j..n]$, i.e. the minimal index $j > i$ with $S[i..n] > S[j..n]$. Apart from its plain $(n\log_2 n)$-bit array representation, the Lyndon array can also be encoded as a succinct parentheses sequence that requires only $2n$ bits of space. While linear time construction algorithms for both representations exist, it has previously been unknown if the same time bound can be achieved with less than $\Omega(n \lg n)$ bits of additional working space. We show that, in fact, $o(n)$ additional bits are sufficient to compute the succinct $2n$-bit version of the Lyndon array in linear time. For the plain $(n \log_2 n)$-bit version, we only need $O(1)$ additional words to achieve linear time. Our space efficient construction algorithm makes the Lyndon array more accessible as a fundamental data structure in applications like full-text indexing.
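The object itself is easy to state in code. Here is a quadratic, definition-based computation of the Lyndon array (the talk's algorithms are linear-time and space-efficient; this sketch is only to pin down the definition):

```python
def lyndon_array(s):
    """Definition-based Lyndon array of string s, in O(n^2) time.

    Entry i is j - i, where j is the minimal index j > i with
    s[j:] < s[i:] (lexicographically), or n if no such j exists; this
    equals the length of the longest Lyndon factor starting at i.
    """
    n = len(s)
    return [
        next((j for j in range(i + 1, n) if s[j:] < s[i:]), n) - i
        for i in range(n)
    ]
```

For "abab" the array is [2, 1, 2, 1]: the longest Lyndon factor at positions 0 and 2 is "ab", while each "b" stands alone.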
  • Lech Duraj, Krzysztof Kleiner, Adam Polak, Virginia Vassilevska Williams. Equivalences between triangle and range query problems
    • Presented at SODA 2020
    • Abstract: We define a natural class of range query problems, and prove that all problems within this class have the same time complexity (up to polylogarithmic factors). The equivalence is very general, and even applies to online algorithms. This allows us to obtain new improved algorithms for all of the problems in the class. We then focus on the special case of the problems when the queries are offline and the number of queries is linear. We show that our range query problems are runtime-equivalent (up to polylogarithmic factors) to counting for each edge $e$ in an $m$-edge graph the number of triangles through $e$. This natural triangle problem can be solved using the best known triangle counting algorithm, running in $O(m^{2\omega/(\omega+1)}) \leq O(m^{1.41})$ time.  Moreover, if $\omega=2$, the $O(m^{2\omega/(\omega+1)})$ running time is known to be tight (within $m^{o(1)}$ factors) under the $3$SUM Hypothesis. In this case, our equivalence settles the complexity of the range query problems. Our problems constitute the first equivalence class with this peculiar running time bound. To better understand the complexity of these problems, we also provide a deeper insight into the family of triangle problems, in particular showing black-box reductions between triangle listing and per-edge triangle detection and counting. As a byproduct of our reductions, we obtain a simple triangle listing algorithm matching the state-of-the-art for all regimes of the number of triangles. We also give some not necessarily tight, but still surprising reductions from variants of matrix products, such as the $(\min,\max)$-product.
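The per-edge triangle problem the class is shown equivalent to can be stated in a few lines by intersecting neighbourhoods (this naive version is far from the $O(m^{2\omega/(\omega+1)})$ bound; names are illustrative):

```python
from collections import defaultdict

def triangles_per_edge(edges):
    """For each undirected edge (u, v), count triangles through it.

    A triangle through (u, v) is a common neighbour of u and v, so the
    count is the size of the neighbourhood intersection.
    """
    nbrs = defaultdict(set)
    for u, v in edges:
        nbrs[u].add(v)
        nbrs[v].add(u)
    return {(u, v): len(nbrs[u] & nbrs[v]) for u, v in edges}
```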
  • Daniele D’Angeli, Dominik Francoeur, Emanuele Rodaro and Jan Philipp Wächter. Infinite Automaton Semigroups and Groups Have Infinite Orbits
    • Published in Journal of Algebra (https://doi.org/10.1016/j.jalgebra.2020.02.014)
    • Abstract: An automaton group is a group generated by a finite state, letter-to-letter transducer. Automaton groups are known for their rich algebraic structure and turn out to be a source for many groups with interesting properties (such as infinite torsion groups, groups with intermediate growth, ...). By their very definition, automaton groups act on the set of finite and right-infinite words and the same holds for the natural generalization to automaton semigroups. Using a classical argument, it is easy to see that an infinite automaton (semi)group admits a sequence of words with growing orbit sizes. However, this sequence does not necessarily converge to a word with an infinite orbit. In the talk, we will see that, nevertheless, for every infinite automaton semigroup, there is a right-infinite word with an infinite orbit and we will briefly discuss some consequences for algorithmic problems over automaton (semi)groups.
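A standard concrete example of such a transducer is the binary adding machine ("odometer"), which generates an infinite automaton group and whose orbit of the word 000... is infinite, since the orbit of any length-n prefix already has size 2^n. A minimal sketch (LSB-first binary increment as a two-state letter-to-letter transducer):

```python
def odometer(word):
    """Apply the adding-machine transducer to a binary word (LSB first).

    State 'carry' flips the letter (propagating the carry on '1');
    state 'id' copies the rest of the word unchanged.
    """
    state, out = 'carry', []
    for a in word:
        if state == 'carry':
            out.append('1' if a == '0' else '0')
            state = 'carry' if a == '1' else 'id'
        else:
            out.append(a)
    return ''.join(out)
```

Iterating on "00" cycles through all four length-2 words before returning, illustrating the growing orbit sizes mentioned in the abstract.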
  • Stephen Piddock. Quantum walk search algorithms and effective resistance
    • Available at https://arxiv.org/abs/1912.04196
    • Abstract: We consider the problem of finding a marked vertex in a graph from an arbitrary starting distribution, using a quantum walk based algorithm. We work in the framework introduced by Belovs which showed how to detect the existence of a marked vertex in O(\sqrt{RW}) quantum walk steps, where R is the effective resistance and W is the total weight of the graph. Our algorithm outputs a marked vertex in the same runtime up to a logarithmic factor in the number of marked vertices. When starting in the stationary distribution, this recovers the recent results of Ambainis et al. We also describe a new algorithm to estimate the effective resistance R.
  • Laurent Bartholdi, Michael Figelius, Markus Lohrey, Armin Weiß. Groups with ALOGTIME-hard word problems and PSPACE-complete compressed word problems
    • Available at http://arxiv.org/abs/1909.13781
    • Abstract: We give lower bounds on the complexity of the word problem of certain non-solvable groups: for a large class of non-solvable  infinite groups, including in particular free groups, Grigorchuk's group and Thompson's groups, we prove that their word problem is $NC^1$-hard. For some of these groups (including Grigorchuk's  group and Thompson's groups) we prove that the compressed word problem (which is equivalent  to the circuit evaluation problem) is PSPACE-complete.
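In the compressed word problem, the input word is given as a straight-line program (SLP), i.e. a grammar deriving exactly one string, which can be exponentially longer than the grammar. A minimal sketch of working with an SLP without expanding it, here computing the derived length (illustrative encoding, not the paper's machinery):

```python
from functools import lru_cache

def slp_length(rules, start):
    """Length of the word derived by a straight-line program.

    rules maps a nonterminal to a tuple of symbols (nonterminals or
    terminal letters); symbols absent from `rules` are terminals of
    length 1. Memoization keeps this linear in the grammar size even
    when the derived word is exponentially long.
    """
    @lru_cache(maxsize=None)
    def length(sym):
        rhs = rules.get(sym)
        if rhs is None:
            return 1
        return sum(length(t) for t in rhs)
    return length(start)
```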
  • Robert Ganian and Sebastian Ordyniak. Group Activity Selection with Few Agent Types
    • Presented at ESA 2019 
    • Abstract: The Group Activity Selection Problem (GASP) models situations where a group of agents needs to be distributed to a set of activities while taking into account preferences of the agents w.r.t. individual activities and activity sizes. The problem, along with its well-known variants sGASP and gGASP, has previously been studied in the parameterized complexity setting with various parameterizations, such as number of agents, number of activities and solution size. However, the complexity of the problem parameterized by the number of types of agents, a natural parameter proposed already in the first paper that introduced GASP, has so far remained unexplored. In this paper we establish the complexity map for GASP, sGASP and gGASP when the number of types of agents is the parameter. Our positive results, consisting of one fixed-parameter algorithm and one XP algorithm, rely on a combination of novel Subset Sum machinery (which may be of general interest) and identifying certain compression steps which allow us to focus on solutions which are "acyclic". These algorithms are complemented by matching lower bounds, which among others close a gap to a recently obtained tractability result of Gupta, Roy, Saurabh and Zehavi (2017). In this direction, the techniques used to establish W[1]-hardness of sGASP are of particular interest: as an intermediate step, we use Sidon sequences to show the W[1]-hardness of a highly restricted variant of multi-dimensional Subset Sum, which may find applications in other settings as well.
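The Sidon sequences used in the hardness reduction are sequences in which all pairwise sums $a_i + a_j$ ($i \leq j$) are distinct. The classical greedy construction (the Mian-Chowla sequence) is a few lines; this sketch only illustrates the combinatorial gadget, not the reduction itself:

```python
def mian_chowla(k):
    """First k terms of the greedy Sidon (Mian-Chowla) sequence.

    Each candidate n is accepted iff adding it keeps every pairwise sum
    (including n + n) distinct from all sums seen so far.
    """
    seq, sums = [], set()
    n = 1
    while len(seq) < k:
        new_sums = {n + a for a in seq} | {2 * n}
        if not (new_sums & sums):
            seq.append(n)
            sums |= new_sums
        n += 1
    return seq
```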
  • José Correa, Andrés Cristi, Boris Epstein, José Soto. The Two-Sided Game of Googol and Sample-Based Prophet Inequalities
    • Presented at SODA 2020 - https://doi.org/10.1137/1.9781611975994.127
    • Abstract: The secretary problem or the game of Googol are classic models for online selection problems that have received significant attention in the last five decades. In this paper we consider a variant of the problem and explore its connections to data-driven online selection. Specifically, we are given n cards with arbitrary nonnegative numbers written on both sides. The cards are randomly placed on n consecutive positions on a table, and for each card, the visible side is also selected at random. The player sees the visible side of all cards and wants to select the card with the maximum hidden value. To this end, the player flips the first card, sees its hidden value and decides whether to pick it or drop it and continue with the next card. We study algorithms for two natural objectives. In the first one, similar to the secretary problem, the player wants to maximize the probability of selecting the maximum hidden value. We show that this can be done with probability at least 0.45292. In the second objective, similar to the prophet inequality, the player wants to maximize the expectation of the selected hidden value. Here we show a guarantee of at least 0.63518 with respect to the expected maximum hidden value. Our algorithms result from combining three basic strategies. One is to stop whenever we see a value larger than the initial n visible numbers. The second one is to stop the first time the last flipped card's value is the largest of the currently n visible numbers in the table. And the third one is similar to the latter but to stop it additionally requires that the last flipped value is larger than the value on the other side of its card. We apply our results to the prophet secretary problem with unknown distributions, but with access to a single sample from each distribution. In particular, our guarantee improves upon 1 – 1/e for this problem, which is the currently best known guarantee and only works for the i.i.d. prophet inequality with samples.
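The first of the three strategies is simple enough to state in code on a fixed instance: flip cards in order and stop as soon as a hidden value beats every initially visible number. A minimal sketch under that reading (names and the test instance are illustrative; the paper's guarantees concern the randomized setting):

```python
def threshold_strategy(cards):
    """Index of the card picked by the simple threshold strategy.

    cards: list of (visible, hidden) pairs in table order. The threshold
    is the maximum of the initially visible values; if no flipped card
    ever exceeds it, the player is forced to keep the last card.
    """
    threshold = max(visible for visible, _ in cards)
    for i, (_, hidden) in enumerate(cards):
        if hidden > threshold:
            return i
    return len(cards) - 1
```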

 
