Talks

Danel Ahman

By temporal resources we mean resources in programming whose usage is time-sensitive or time-critical, and which, while perhaps already physically available to a program, can be used or acted upon only after a certain amount of time passes or some prescribed events take place. In our past work, we have shown that such temporal aspects of resources can be specified and verified statically and modularly with a combination of graded modal types and a graded-monadic effect system for algebraic effects and handlers. In this talk, I will discuss how to make this approach more widely applicable by showing how to extend a Hindley-Milner-style type inference algorithm to this combination of modal types and a graded effect system, so as to automatically check whether programs use their resources correctly according to temporal specifications.

(This is joint work with Joosep Tavits and Vesal Vojdani.)

Ago-Erik Riet

In 1979, Erdős conjectured that if $m = O(n^{2/3})$, then $ex(n, m, \{C_4, C_6\}) = O(n)$, i.e. the largest possible number of edges in a simple bipartite graph with vertex class sizes $m, n$ and no 4- or 6-cycles is of order at most $n$. This conjecture was disproved in several papers, and the current best-known bounds for this problem are $$c_1 n^{1 + \frac{1}{15}} \leq ex(n, n^{2/3}, \{C_4, C_6\}) \leq c_2 n^{1 + \frac{1}{9}}$$ for some constants $c_1, c_2$. A consequence of our work is that $$ex(n, n^{2/3}, \{ C_4, \theta_{3, 4} \}) = \Theta(n^{1 + \frac{1}{9}}).$$ More generally, for each integer $t \geq 2$, we establish that $$ex(n, n^{\frac{t+2}{2t+1}}, \{ C_4, \theta_{3, t} \}) = \Theta(n^{1 + \frac{1}{2t+1}})$$ by demonstrating that subsets of points $S \subseteq \text{PG}(n,q)$ for which no $t+1$ points lie on a line give rise to $\{ C_4, \theta_{3, t} \}$-free graphs, where $\theta_{3,t}$ is the graph consisting of $t$ internally disjoint 3-edge paths with common endpoints, and $\text{PG}(n,q)$ is the projective space of dimension $n$ over the finite field with $q$ elements.

This is joint work with Baran Düzgün and Vladislav Taranchuk.

Vitaly Skachek

In this talk, we discuss two families of codes suitable for various uses in distributed data storage systems (DDSSs): batch codes and codes for private information retrieval (PIR). These two families can be viewed as natural generalizations of locally repairable codes, which have been extensively studied in the context of coding for fault tolerance in DDSSs. We present some fundamental bounds on the parameters of these two families of codes, as well as code constructions. We also discuss various generalizations of these concepts, such as functional batch codes, functional codes for PIR, and asynchronous batch codes.

Based on joint works with Henk D.L. Hollmann, Karan Khathuria, Ago-Erik Riet, and Eldho K. Thomas.

Alexey Leontyev

This work addresses the problem of reliability in economic and financial system optimization, focusing on long-term solution stability rather than short-term optimality. Traditional optimization methods in economic policy, such as tax system design, often ignore time-dependent system degradation and external shocks. This leads to suboptimal solutions that may rapidly lose relevance after implementation. For reliability modelling, both two-parameter and three-parameter Weibull distributions are considered, as well as piecewise Weibull models, to represent various failure rate behaviors. For additional flexibility in reliability modelling, and to allow the analysis of heavy-tailed and non-monotonic data distributions, the Marshall-Olkin Alpha Power Extended Weibull family is also considered, which enables the modelling of extreme economic events and systemic shocks. Because the search space of such optimization and modelling problems can be high-dimensional, and because reliability modelling cannot always be carried out as a post-processing step (it modifies the search space), we propose embedding reliability functions directly into quantum circuits. The reliability constraint is added to the cost function as a penalty term, modifying the Hamiltonian used in quantum variational algorithms such as QAOA and VQE. Additionally, quantum amplitude estimation (QAE) is used to accelerate Monte Carlo simulations for reliability forecasting. The approach is applied to the analysis and optimization of complex economic systems, such as national tax policies, where the search space is combinatorially large and includes multiple interrelated constraints. The presented framework allows policymakers and researchers to anticipate system degradation, optimize reforms proactively, and model multiple long-term scenarios with higher computational efficiency using quantum technologies.
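
As a toy illustration of the penalty-term idea, the Python sketch below evaluates a two-parameter Weibull reliability function and adds a reliability shortfall as a penalty to a classical cost function of the kind that a QAOA/VQE cost Hamiltonian would encode; the mapping from decision bits to Weibull parameters and all numerical values are purely illustrative, not the model used in this work.

    import numpy as np

    def weibull_reliability(t, scale, shape):
        """Two-parameter Weibull reliability R(t) = exp(-(t/scale)^shape)."""
        return np.exp(-(t / scale) ** shape)

    def penalized_cost(x, base_cost, horizon, r_min, penalty_weight):
        """Classical stand-in for a penalized cost Hamiltonian: the original
        objective plus a penalty whenever the configuration's reliability
        at the planning horizon falls below the required level r_min."""
        scale = 5.0 + 2.0 * sum(x)    # hypothetical: more redundancy bits -> slower degradation
        shape = 1.5                   # fixed shape parameter, for the sketch only
        shortfall = max(0.0, r_min - weibull_reliability(horizon, scale, shape))
        return base_cost(x) + penalty_weight * shortfall

    # Toy usage: without the penalty the cheapest configuration is (0, 0, 0),
    # but it fails the reliability requirement, so the penalized optimum moves.
    base = lambda x: 0.5 * sum(x)     # hypothetical cost of enacting each measure
    configs = [(a, b, c) for a in (0, 1) for b in (0, 1) for c in (0, 1)]
    best = min(configs, key=lambda x: penalized_cost(x, base, horizon=2.0,
                                                     r_min=0.8, penalty_weight=50.0))
    print("best configuration under the reliability penalty:", best)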

Joint work by Alexey Leontyev and Abuzer Yakaryilmaz.

Vincent Moreau

For several years now, there has been a growing connection between automata theory and $\lambda$-calculus, in terms of syntax, semantics and logic. The notion of language of $\lambda$-terms recognized by a cartesian closed category, introduced by Salvati, is a fundamental contribution to this convergence, as it generalizes to the higher-order setting the usual regular languages of finite words and trees. We present recent advances on that theme. First, we show that the notion of higher-order regular language is robust, by providing a natural criterion for cartesian closed categories to recognize exactly these languages. Second, we introduce profinite $\lambda$-terms, a higher-order generalization of the notion of profinite word, related to regular languages by Stone duality. These assemble into a cartesian closed category that satisfies a suitable universal property with respect to finite models, justifying the name profinite $\lambda$-calculus.

Abuzer Yakaryilmaz

We present My Photonic Ship, a modular one-day outreach workshop designed to introduce high school and university students to quantum superposition through experimental, conceptual, and creative activities. The program integrates photon simulation experiments, quantum programming, wave-based and ket notation explanations, and philosophical discussions, culminating in a hands-on project where students develop a Python game and extend it by embedding quantum superposition. Implemented in 2025 with groups in Latvia, Turkey, Poland, and Lithuania, the workshop included pre- and post-tests that showed increased curiosity about quantum technologies and greater confidence in engaging with abstract concepts. In the ADEQUATE quantum course, the game modules also served as the basis for student-designed final projects, further highlighting the value of creative programming tasks for deep learning and engagement. This presentation reports on the workshop design, theoretical underpinnings, and student outcomes, offering insights into how quantum concepts can be meaningfully introduced in diverse educational contexts.

Michel Viana Smykalla

In 1931, Kurt Gödel proved his incompleteness theorems, showing that any sound logical system with the minimum desirable properties for formalizing mathematics is incomplete. This means that there are formulas in the system such that we can prove neither them nor their negations. However, it was only in 1964 that a mathematical conjecture, known as the continuum hypothesis (CH), was proved to be formally undecidable from the axioms of Zermelo-Fraenkel set theory. In this talk, we will cover the basic notions of forcing, developed by Paul Cohen, and how to use it to construct a model of set theory in which CH fails.

Laura Leja

Robotic manipulation in dynamic and unpredictable environments poses significant challenges, especially when the robot must interact accurately with moving objects. We propose a motion planning method based on differentiable simulation, enabling a robotic arm to perform an industrial peg-insertion task (inserting bottles into sockets) while the target is in motion. The approach combines real-time target tracking with continuous trajectory adaptation. A neural network controller is trained to produce control actions along the motion path, leveraging differentiable physics for full end-to-end optimization. Both the task objectives and contact constraints are formulated in a differentiable manner, allowing gradients to be propagated through the simulation. We consider two scenarios: a moving socket and a static socket into which the robot arm must insert the bottle. Experiments show that this training method learns quickly, stays stable during training, and produces smooth and accurate robot movements, performing better than a standard reinforcement learning approach.
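
The following minimal sketch only illustrates the core idea of propagating gradients through a rolled-out simulation: a 1-D point mass chasing a moving target, with a hand-written reverse pass standing in for a differentiable physics engine and a plain list of controls standing in for the neural network controller; all dynamics and numbers are simplified placeholders rather than the actual peg-insertion setup.

    # Optimize per-step controls by backpropagating through a rolled-out
    # 1-D point-mass simulation toward a moving target (illustrative only).
    T, dt, target_speed = 50, 0.05, 0.8

    def rollout(u):
        p, v = 0.0, 0.0
        for t in range(T):
            v += dt * u[t]          # v_{t+1} = v_t + dt * u_t
            p += dt * v             # p_{t+1} = p_t + dt * v_{t+1}
        return p

    def grad_loss(u):
        target = target_speed * T * dt          # target position at the final step
        p = rollout(u)
        lam_p, lam_v = 2.0 * (p - target), 0.0  # adjoint of the final-position loss
        g = [0.0] * T
        for t in reversed(range(T)):
            lam_v += dt * lam_p                 # reverse of p_{t+1} = p_t + dt * v_{t+1}
            g[t] = dt * lam_v                   # reverse of v_{t+1} = v_t + dt * u_t
        return g, (p - target) ** 2

    u = [0.0] * T
    for step in range(200):                     # plain gradient descent on the controls
        g, loss = grad_loss(u)
        u = [ui - 0.5 * gi for ui, gi in zip(u, g)]
    print("final squared tracking error:", grad_loss(u)[1])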

Joint work by Laura Leja, Guntis Vilnis Strazds, Kārlis Freivalds.

Antonio Cruciani

We study robust and efficient distributed algorithms for building and maintaining distributed data structures in dynamic Peer-to-Peer (P2P) networks. P2P networks are characterized by a high level of dynamicity with abrupt heavy node churn (nodes that join and leave the network continuously over time). We present a novel algorithmic framework to build and maintain, with high probability, a skip list for $\text{poly}(n)$ rounds despite a churn rate of $O(n/\log n)$, which is the number of nodes joining and/or leaving per round; $n$ is the stable network size. We assume that the churn is controlled by an oblivious adversary that has complete knowledge and control of what nodes join and leave and at what time and has unlimited computational power, but is oblivious to the random choices made by the algorithm. Importantly, the maintenance overhead in any interval of time (measured in terms of the total number of messages exchanged and the number of edges formed/deleted) is (up to log factors) proportional to the churn rate. Furthermore, the algorithm is scalable in that the messages are small (i.e., at most $\text{polylog}(n)$ bits) and every node sends and receives at most $\text{polylog}(n)$ messages per round. To the best of our knowledge, our work provides the first-known fully-distributed data structure and associated algorithms that provably work under highly dynamic settings (i.e., high churn rate that is near-linear in $n$). Furthermore, the nodes operate in a localized manner. Our framework crucially relies on new distributed and parallel algorithms to merge two $n$-element skip lists and delete a large subset of items, both in $O(\log n)$ rounds with high probability. These procedures may be of independent interest due to their elegance and potential applicability in other contexts in distributed data structures. Finally, we believe that our framework can be generalized to other distributed and dynamic data structures including graphs, potentially leading to stable distributed computation despite heavy churn.
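
For background on the data structure being maintained, the sketch below is a plain sequential skip list in Python (randomized levels, insert and search); it only illustrates what a skip list is, and says nothing about the distributed join, merge and repair procedures that the framework actually provides.

    import random

    class Node:
        def __init__(self, key, level):
            self.key = key
            self.forward = [None] * level   # one forward pointer per level

    class SkipList:
        MAX_LEVEL = 16

        def __init__(self):
            self.head = Node(float("-inf"), self.MAX_LEVEL)

        def _random_level(self):
            lvl = 1
            while random.random() < 0.5 and lvl < self.MAX_LEVEL:
                lvl += 1
            return lvl

        def insert(self, key):
            update, node = [None] * self.MAX_LEVEL, self.head
            for lvl in reversed(range(self.MAX_LEVEL)):
                while node.forward[lvl] and node.forward[lvl].key < key:
                    node = node.forward[lvl]
                update[lvl] = node
            new = Node(key, self._random_level())
            for lvl in range(len(new.forward)):
                new.forward[lvl] = update[lvl].forward[lvl]
                update[lvl].forward[lvl] = new

        def contains(self, key):
            node = self.head
            for lvl in reversed(range(self.MAX_LEVEL)):
                while node.forward[lvl] and node.forward[lvl].key < key:
                    node = node.forward[lvl]
            nxt = node.forward[0]
            return nxt is not None and nxt.key == key

    s = SkipList()
    for k in [7, 3, 11, 5]:
        s.insert(k)
    print(s.contains(5), s.contains(6))   # True False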

This is joint work with John Augustine (IIT Madras) and Iqra Altaf Gillani (NIT Srinagar). Preprint: https://arxiv.org/abs/2409.10235.

Russell Lai

A succinct argument system allows a prover to convince a verifier that an NP statement is true using only a small amount of communication, collectively called a “proof”, which is much smaller than the plain witness. A fully-succinct argument system further requires the proof to be verifiable quickly, possibly after preprocessing the statement, much faster than verifying the witness against the statement.

In lattice-based cryptography, a fundamental relation to build succinct arguments for is the Inhomogeneous Short Integer Solution (ISIS) relation: “I know a short integral vector $x$ such that $Ax = y \pmod q$ and the norm of $x$ is at most $\beta$”. A succinct argument system for such a relation can be bootstrapped to one for all of NP.
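
To make the relation concrete, here is a small Python check of an ISIS witness at toy parameters; real schemes work at cryptographic sizes, and the point of a succinct argument is precisely to convince a verifier without revealing $x$.

    import numpy as np

    def is_isis_witness(A, y, x, q, beta):
        """Check the ISIS relation: A @ x == y (mod q) and the Euclidean norm
        of x is at most beta."""
        return np.array_equal(A @ x % q, y % q) and np.linalg.norm(x) <= beta

    # Toy instance: the prover knows a short x; the verifier sees only (A, y, q, beta).
    q, beta = 97, 4.0
    rng = np.random.default_rng(0)
    A = rng.integers(0, q, size=(3, 8))
    x = rng.integers(-1, 2, size=8)           # short vector with entries in {-1, 0, 1}
    y = A @ x % q
    print(is_isis_witness(A, y, x, q, beta))  # True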

In this talk, I will overview the development of lattice-based succinct arguments for the ISIS relation, starting from the first theoretical construction in 2020 to somewhat practical systems today in 2025, and highlight some future directions. A particular focus will be our recent frameworks, “RoK, Paper, SISsors” and “RoK and Roll”, which enable modular constructions of lattice-based succinct arguments.

Krišjānis Petručeņa

The random number generation capabilities of GNU/Linux are subject to certain limitations. As of Linux 5.6, /dev/random no longer satisfies the criteria for a true random number generator (TRNG). While dedicated quantum random number generator (QRNG) hardware is the preferred source of unpredictable entropy, it is often expensive and difficult to deploy in virtualized/cloud environments and on IoT devices. Furthermore, explicit vendor-specific software support is required to switch from /dev/random to an installed hardware RNG. This talk presents a user-space integration approach for a shared, potentially remote QRNG device via D-Bus, a ubiquitous interprocess communication framework on Linux, which serves as a universal interface for applications to retrieve true random numbers. The approach separates the concerns of entropy sourcing from cryptographic applications.
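
To illustrate the kind of user-space interface meant here, a client might retrieve entropy over D-Bus roughly as follows (using the pydbus binding; the well-known name, object path and method name are illustrative placeholders, not the actual interface of this work):

    # Hypothetical D-Bus client for a shared QRNG service; the well-known name,
    # object path and method name below are illustrative placeholders only.
    from pydbus import SystemBus

    bus = SystemBus()
    qrng = bus.get("org.example.QRNG", "/org/example/QRNG")

    # Ask the (possibly remote) QRNG service for 32 bytes of entropy.
    random_bytes = bytes(qrng.GetRandomBytes(32))
    print(random_bytes.hex())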

Matt Earnshaw

Mazurkiewicz traces are a classical model of systems with concurrent actions. By taking a diagrammatic perspective on this classical notion, we can see them as an “untyped” special case of the more expressive notion of effectful category, which in turn leads to a novel notion of presentation for effectful categories. We can use this notion of presentation to construct natural combinations of concurrent systems, for example, the commuting tensor product, or interleavings. This is joint work with Chad Nester and Mario Román.

Massimo Equi

Quantum advantage in centralised models of computation has been studied since the introduction of quantum computing, but whether quantum advantage exists in distributed settings, and in which forms, is a question that gained interest only later. There are many different ways to define a distributed model of computation, and thus different types of quantum advantage that we can achieve. In this talk, we focus on the LOCAL model of distributed computing and its quantum counterpart, quantum-LOCAL. In the LOCAL model, a graph defines a network where nodes are computers that can communicate with their neighbours in synchronous communication rounds; in quantum-LOCAL, the computers are quantum computers, and they can send qubits over the edges. Computation and bandwidth are unlimited, and we are only interested in how many communication rounds are needed to solve a certain problem. We present two main results: a positive result, namely the existence of problems for which LOCAL algorithms need asymptotically more communication rounds than quantum-LOCAL ones, as a function of the number of nodes of the graph; and a negative result, namely the absence of quantum advantage for certain distributed optimization problems.

Maksim Dimitrijev

As quantum computers become more powerful and accessible, industry is increasingly exploring how they can address practical problems. At the Center for Quantum Computing Science of the University of Latvia, we have collaborated with industrial partners — ForseAI and Accenture Latvia — to investigate real-world applications of quantum algorithms.

In this talk, I will share our experiences of translating theory into practice through joint projects with industry. Specifically, I will discuss the construction of quantum oracles for NP-hard problems, and I will highlight how iterative improvements in implementation enable more extensive testing on classical simulators. I will also present our research on hybrid quantum-classical approaches for time-series forecasting, demonstrating how such methods can open new opportunities for practical use cases.
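
As a toy illustration of oracle construction (far simpler than the oracles for NP-hard problems built in these projects), the following Qiskit sketch builds a phase oracle that marks a single satisfying assignment of a small constraint:

    from qiskit import QuantumCircuit

    def phase_oracle_for_assignment(bits):
        """Phase oracle that flips the sign of the single basis state |bits>,
        e.g. the unique satisfying assignment of a toy constraint.
        (Toy example only; oracles for real NP-hard instances are assembled
        from clause/constraint sub-circuits.)"""
        n = len(bits)
        qc = QuantumCircuit(n)
        # Map the marked assignment to |11...1> ...
        for i, b in enumerate(bits):
            if b == 0:
                qc.x(i)
        # ... apply a multi-controlled Z (H + multi-controlled X + H) ...
        qc.h(n - 1)
        qc.mcx(list(range(n - 1)), n - 1)
        qc.h(n - 1)
        # ... and undo the mapping.
        for i, b in enumerate(bits):
            if b == 0:
                qc.x(i)
        return qc

    oracle = phase_oracle_for_assignment([1, 0, 1])   # marks the assignment x0=1, x1=0, x2=1
    print(oracle.draw())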

Through these examples, I aim to provide insights into both the challenges and the potential of applying quantum computing in real-world industrial contexts.

Chad Mitchell Nester

In this talk I will introduce recent work with Niels Voorneveld concerning the dynamics of the free cornering with choice. This is a term rewriting system for modelling session-typed process interaction. The system admits pleasing categorical semantics via single-object double categories. The presentation will endeavour not to assume any background in category theory.

Ilja Sobolev

AEff is a lambda calculus for describing asynchronous programs, inspired by algebraic effects. It consists of a sequential part, describing the behaviour of a single process, and a parallel part, describing the behaviour of a multi-process program. We prove that the sequential part is strongly normalising and demonstrate the formalisation of this proof in Agda. Additionally, we discuss the normalisation properties of higher-order extensions of this language.

This is joint work with Danel Ahman.

Francisco José Maldonado Torralba

Spectral analysis is a tool that can be used to extract underlying structure in dynamical systems. The eigenfunctions of an operator often reveal hidden geometric and topological properties of the system. In this talk, I will show how this perspective applies not only to classical problems in mathematics and computer science, but also to neural computation in the brain.

I will begin by reviewing how eigenvalue problems arise naturally in modelling patterns in nature, such as Laplacian eigenmodes and diffusion operators. I will then turn to the brain’s computation of space: place cells, grid cells, and border cells. In particular, I will discuss how grid-cell activity can be modelled as eigenfunctions of operators, both in theoretical frameworks developed by Ganguli et al. and DeepMind, and in the successor representation used in reinforcement learning. This spectral viewpoint provides a concise explanation for several experimental observations, including boundary effects, environmental rescaling, and reward-driven distortions.
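
A toy version of this spectral viewpoint, assuming the standard successor representation $M = (I - \gamma P)^{-1}$ of a random walk on a ring, can be computed in a few lines; its eigenvectors are the constant mode plus sinusoidal Laplacian eigenmodes of the environment, a much-simplified analogue of periodic, grid-like spatial codes.

    import numpy as np

    # Random walk on a ring of N states: symmetric transition matrix P.
    N, gamma = 40, 0.95
    P = np.zeros((N, N))
    for s in range(N):
        P[s, (s - 1) % N] = P[s, (s + 1) % N] = 0.5

    # Successor representation M = (I - gamma * P)^(-1).
    M = np.linalg.inv(np.eye(N) - gamma * P)

    # M shares its eigenvectors with P (and with the ring's graph Laplacian):
    # sinusoids of increasing spatial frequency.
    eigvals, eigvecs = np.linalg.eigh((M + M.T) / 2)
    print("leading SR eigenvalues:", np.round(eigvals[-4:], 3))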

Callum Reader

String diagrams are a family of geometric languages that are used to understand categories with additional structure. Their purpose is to replace usual arguments – using algebra or pasting diagrams – with something much more intuitive. Symmetric, braided and compact closed monoidal categories have long had associated string diagram languages and accompanying proofs of soundness and completeness. In this context, soundness and completeness guarantee that algebraic proofs are exactly as expressive as those given using the associated string diagrams.

Many recent papers have given examples of how string diagrams can be used to reason effectively about other areas of mathematics: classical logic and probability theory, for example, have recently been the focus of string diagrammatic approaches. However, geometrically intuitive string diagram languages for closed and cartesian closed symmetric monoidal categories – which model the linear $\lambda$-calculus and the $\lambda$-calculus respectively – have remained elusive. There have been several attempts at making the additional closed structure legible, but these typically involve the introduction of ‘decorations’ atop string diagrams. This greatly reduces the geometric intuition that gives such diagrams their utility.

The work that we present here fills this gap in the literature and provides a language that is sound and complete for closed symmetric monoidal categories. Crucially, this language is fully geometric – the closed structure is given by strings that behave much as one would expect. In this talk we introduce our string diagram language and demonstrate its utility. We show that it gives geometric intuition for well-known constructions, such as $\beta$- and $\eta$-reduction, and currying.

Alessandro Di Giorgio

String diagrams form a sound and complete graphical calculus for symmetric monoidal categories. In a similar vein, tape diagrams provide a graphical language for rig categories with finite biproducts—categories equipped with two monoidal structures, one of which is a biproduct distributing over the other.

In this work, we extend tape diagrams to the setting of Kleisli categories of symmetric monoidal monads presented by algebraic theories. Our main example is the monad of finitely supported subdistributions on Set, which arises from the theory of pointed convex algebras.

This is joint work with Filippo Bonchi, Cipriano Junior Cioffo and Elena Di Lavore.

Peeter Laud

We are all using Smart-ID, a two-party signing protocol creating signatures that are functionally interoperable with RSA. The days of RSA are numbered, because it is vulnerable to quantum computers. During the period when we still haven’t managed to migrate to quantum-safe schemes, we would at least like to recognize, and convince others of, the case where someone (a quantum-capable attacker) has forged our signature. At the same time, we should not be able to create a forgery proof ourselves. These two properties form the core definition of fail-stop signatures. In this talk we will see why functionally interoperable (i.e. standards-compliant) fail-stop signature schemes are difficult to construct, and how the threshold setting of Smart-ID comes to the rescue. We present an extension of the Smart-ID protocol that has the fail-stop properties and still creates signatures that are functionally interoperable with RSA. We discuss the security of the protocol in the Universal Composability setting, which is probably the best setting for expressing the complicated security properties of such a functionality.

Joint work with Nikita Snetkov and Jelizaveta Vakarjuk.

Rebeka Birziņa, Kārlis Freivalds

This talk explores how deep learning can be applied to tackle NP-hard combinatorial optimization problems. We will present recent approaches developed by our team for solving key problems such as Boolean Satisfiability (SAT), the Traveling Salesman Problem (TSP), and Integer Factorization. These methods rely on unsupervised training techniques and diffusion models to learn effective solution strategies. We’ll also examine how increasing test-time compute can scale these methods to handle more complex problem instances. The talk concludes with examples and possibilities for using optimization techniques to enhance the stability and efficiency of electrical power grids in real-life scenarios.
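
To give a flavour of an unsupervised training signal for SAT (a generic smooth relaxation for illustration, not necessarily the exact loss used by the team), one can score a soft assignment by the probability that every clause is satisfied and descend on its negative logarithm:

    import numpy as np

    def soft_sat_loss(p, clauses):
        """Unsupervised loss for a soft assignment p[i] = P(variable i is True).
        A clause is a list of signed integers, e.g. [1, -2, 3] means (x1 or not x2 or x3).
        Treating variables as independent, a clause is unsatisfied with probability
        equal to the product of its literals being false; we minimise
        -log P(every clause satisfied)."""
        loss = 0.0
        for clause in clauses:
            p_unsat = 1.0
            for lit in clause:
                i = abs(lit) - 1
                p_unsat *= (1.0 - p[i]) if lit > 0 else p[i]
            loss -= np.log(1.0 - p_unsat + 1e-9)
        return loss

    # Toy formula: (x1 or x2) and (not x1 or x3) and (not x2 or not x3).
    clauses = [[1, 2], [-1, 3], [-2, -3]]
    p = np.array([0.52, 0.48, 0.51])      # slightly asymmetric start (p = 0.5 is a saddle)
    for _ in range(500):                  # crude finite-difference descent on the relaxation
        grad = np.zeros(3)
        for i in range(3):
            e = np.zeros(3)
            e[i] = 1e-4
            grad[i] = (soft_sat_loss(p + e, clauses) - soft_sat_loss(p - e, clauses)) / 2e-4
        p = np.clip(p - 0.1 * grad, 1e-3, 1 - 1e-3)
    print("soft assignment:", np.round(p, 2), "rounded:", (p > 0.5).astype(int))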

Aleksandrs Belovs

Given an algorithm that outputs the correct answer with bounded error, it is sometimes desirable to reduce this error to some arbitrarily small $\delta$. The usual method, for both quantum and randomized algorithms, is a procedure called majority voting, which incurs a multiplicative overhead of $O(\log(1/\delta))$ from calling the algorithm this many times. A recent paper introduced a model of quantum computation called transducers, and showed how to reduce the error of a transducer arbitrarily with only constant overhead, using a construction analogous to majority voting called purification. In this paper, we present a new, highly simplified construction of a purifier, which can be understood as a weighted walk on a line, similar to a random walk interpretation of majority voting. In addition to providing a new perspective that is easier to contrast with majority voting, our purifier has exponentially better space complexity than the previous one, and quadratically better dependence on the soundness-completeness gap of the algorithm being purified. Our new purifier has nearly optimal query complexity, even down to the constant, which matters when one composes quantum algorithms to super-constant depth.
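
For comparison, the majority-voting baseline that purification improves on is easy to spell out: repeating the algorithm $k$ times and taking a majority drives the error down exponentially in $k$, which is where the $O(\log(1/\delta))$ multiplicative overhead comes from.

    from math import comb

    def majority_error(k, eps):
        """Probability that the majority vote over k (odd) independent runs is
        wrong, when each run errs independently with probability eps < 1/2."""
        return sum(comb(k, j) * eps**j * (1 - eps)**(k - j)
                   for j in range(k // 2 + 1, k + 1))

    # Reducing an initial 1/3 error: the error decays exponentially in k,
    # so O(log(1/delta)) repetitions suffice to reach any target error delta.
    for k in (1, 11, 21, 31):
        print(k, majority_error(k, 1 / 3))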

Joint work with Stacey Jeffery. Based on arXiv:2502.09249.

Silvio Capobianco

Cellular automata (CA) are synchronous parallel models of computation where identical devices on a grid update their state based on the current state of finitely many neighbors arranged in a regular fashion. Previous formalizations of CA theory inside category theory were based on characterizations of CA behavior inside the theory of dynamical systems, which hides the role of the neighborhood.

In this talk, after an introduction to CA theory, we discuss the notion of graded monad due to Smirnov, Katsumata, and Melliès. We then apply the dual notion of graded comonad to the theory of cellular automata, describing them as coKleisli maps (the local rules) and cofree coalgebra maps (the global functions) with the neighborhoods as grades, thus recovering important combinatorial aspects of the original formulation.
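
To fix intuition for the local-rule/global-function distinction and the role of the neighborhood, here is a minimal one-dimensional cellular automaton in Python (elementary rule 110 with a radius-1 cyclic neighborhood); the grading discussed in the talk tracks exactly this neighborhood data.

    def local_rule_110(left, centre, right):
        """Local rule: next state of a cell from its radius-1 neighborhood."""
        return (110 >> (left * 4 + centre * 2 + right)) & 1

    def global_map(config):
        """Global function: apply the local rule at every cell of a cyclic configuration."""
        n = len(config)
        return [local_rule_110(config[(i - 1) % n], config[i], config[(i + 1) % n])
                for i in range(n)]

    config = [0] * 20 + [1] + [0] * 20       # a single live cell
    for _ in range(10):
        print("".join(".#"[c] for c in config))
        config = global_map(config)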

Baran Düzgün

The notion of $L$-nice family with parameters $n=2k+1$ was first introduced by Polyanskii and Vorobyev in 2019 for constructing primitive batch codes. Later in 2021, a generalization of $L$-nice families, Almost Affinely Disjoint (AAD) Families, was introduced for all values of $n$.

In 2023, Arıkan, Düzgün, Otal, and Özbudak gave a construction of large AAD families of size $q$ for $L$-nice families with the parameters $[n=2k+1,k,k]_{q}$. We showed that this construction is asymptotically optimal for $k=2$ and $k=3$, and conjectured that it remains asymptotically optimal for any $k>3$ with $L=k$. Later, in 2024, we proved that this conjecture is true for $k>3$.

Recently, we provided an alternative proof for the same conjecture for all $k>3$, using a different approach than that given in 2024. As a result of this new approach, we were able to generate $[n=t(2k+1),tk,k]_{q}$-AAD families of size $q^{t}$ with the help of block matrices.

In this talk, I will present some explicit constructions of $L$-nice families, used to obtain primitive batch codes with highly desirable properties in distributed storage, and our approaches in these papers.

Jevgēnijs Vihrovs

In this work we study quantum algorithms for Hopcroft’s problem, a fundamental problem in computational geometry. Given $n$ points and $n$ lines in the plane, the task is to determine whether there is a point-line incidence. The classical complexity of this problem is well studied, with the best known algorithm running in $O(n^{4/3})$ time and matching lower bounds in some restricted settings. Our results are two different quantum algorithms with time complexity $\tilde O(n^{5/6})$. The first algorithm is based on partition trees and the quantum backtracking algorithm. The second algorithm uses a quantum walk together with a history-independent dynamic data structure for storing a line arrangement, which supports efficient point location queries. In the setting where the numbers of points and lines differ, the quantum walk-based algorithm is asymptotically faster. The quantum speedups for the aforementioned data structures may be useful for other geometric problems.
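
For reference, the problem itself is simple to state and to decide naively in quadratic time; the interest is in beating this and the classical $O(n^{4/3})$ bound. A brute-force baseline:

    def has_incidence(points, lines):
        """Naive O(n^2) check of Hopcroft's problem: is some point (px, py) on
        some line given by coefficients (a, b, c) of a*x + b*y + c = 0?
        With integer (or rational) inputs the test is exact."""
        return any(a * px + b * py + c == 0
                   for (px, py) in points
                   for (a, b, c) in lines)

    points = [(1, 2), (3, 5)]
    lines = [(1, -1, 2),    # x - y + 2 = 0 passes through (3, 5)
             (2, 1, 7)]
    print(has_incidence(points, lines))   # True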

Joint work with Vladimirs Andrejevs, Aleksandrs Belovs.

Niccolò Veltri

Coalgebras for a functor F are a mathematical tool for representing a large class of transition systems, where F describes the collection of reachable states. The terminal coalgebra for F, when it exists, collects the possible runs of each transition system specified by F. In the late 1980s, Aczel and Mendler proved a general theorem exhibiting the existence of terminal coalgebras for a class of set functors which they dubbed “set-based”. In this talk I will discuss a reformulation of their proof in the constructive setting of homotopy type theory. As an application, I will show how the resulting construction is useful for modeling non-wellfounded material set theory in homotopy type theory.

Maiara F. Bollauf

In this talk, we will discuss how to wisely select points from a Gaussian distribution over a lattice, a process known as Gaussian sampling. This process lies at the heart of many lattice-based cryptographic constructions, particularly in the design of digital signature schemes that are believed to be resistant to quantum attacks. We will introduce the mathematical foundations of Gaussian sampling in the context of lattices, review the current techniques, and present our recent contributions to the field.
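
As a minimal illustration of what Gaussian sampling means here, the sketch below rejection-samples the discrete Gaussian over the one-dimensional lattice $\mathbb{Z}$; samplers used in real signature schemes are considerably more careful about precision, tails, higher-dimensional lattices and side channels.

    import math
    import random

    def sample_discrete_gaussian(sigma, centre=0.0, tail=12):
        """Rejection sampler for the discrete Gaussian over the integers:
        P(x) proportional to exp(-(x - centre)^2 / (2 * sigma^2)) for x in Z.
        Candidates are drawn uniformly from a window of +/- tail*sigma around
        the centre, so the (negligible) probability mass beyond it is cut off."""
        lo = math.floor(centre - tail * sigma)
        hi = math.ceil(centre + tail * sigma)
        while True:
            x = random.randint(lo, hi)
            if random.random() < math.exp(-((x - centre) ** 2) / (2 * sigma ** 2)):
                return x

    samples = [sample_discrete_gaussian(sigma=3.0) for _ in range(10000)]
    mean = sum(samples) / len(samples)
    std = (sum((s - mean) ** 2 for s in samples) / len(samples)) ** 0.5
    print("empirical mean and standard deviation:", round(mean, 2), round(std, 2))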

Janno Siim

In applications such as blockchain or electronic voting, it is often important to prove that a computation was performed correctly without leaking any additional information. A popular cryptographic tool called zk-SNARK can do this for any computation with a remarkably small proof (a few hundred bytes), and the verification of the proof takes orders of magnitude less time than the computation itself. However, many of these zk-SNARKs rely on cryptographic assumptions that are on a shaky footing. This talk discusses our Eurocrypt 2024 paper, which proves an important family of zk-SNARKs under a significantly better-understood class of assumptions.