We propose a continuous-time stochastic model to analyze the dynamics of impermanent loss in liquidity pools in decentralized finance (DeFi) protocols. We replicate the impermanent loss using option portfolios for the individual tokens. We estimate the risk-neutral joint distribution of the tokens by minimizing the Hansen–Jagannathan bound, which we then use for the valuation of options on relative prices and for the calculation of implied correlations. In our analyses, we investigate implied volatilities and implied correlations as possible drivers of the impermanent loss and show that they explain the cross-sectional returns of liquidity pools. We test our hypothesis on options data from a major centralized derivative exchange.
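As background on the quantity being replicated: for a 50/50 constant-product pool (an assumption; the abstract does not fix the pool mechanism), impermanent loss relative to simply holding the tokens has a simple closed form in the price-ratio change r:

```python
import math

def impermanent_loss(r):
    """Impermanent loss of a 50/50 constant-product liquidity pool when the
    price ratio of the two tokens changes by a factor r, measured relative
    to holding the initial token amounts outside the pool."""
    return 2 * math.sqrt(r) / (1 + r) - 1

impermanent_loss(1.0)   # 0.0: no relative price move, no loss
impermanent_loss(4.0)   # ≈ -0.2: a fourfold move costs about 20% vs. holding
```

Note that the loss is symmetric in r and 1/r, which is why the option-portfolio replication in the talk involves both tokens.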
In this talk, we introduce a novel mesh-free and direct method for computing the shape derivative in PDE-constrained shape optimization problems. Our approach is based on a probabilistic representation of the shape derivative and is applicable to second-order semilinear elliptic PDEs with Dirichlet boundary conditions and a general class of target functions. The probabilistic representation derives from a boundary sensitivity result for diffusion processes due to Costantini, Gobet and El Karoui. Via so-called Taylor tests we verify the numerical accuracy of our methodology.
In this talk, we shall discuss the long-time behavior of solutions to parabolic stochastic partial differential equations with singular nonlinear divergence-type diffusivity. As these kinds of equations usually lack good coercivity estimates in higher spatial dimensions, we choose to address the general well-posedness question by variational weak energy methods. Examples include the stochastic singular $p$-Laplace equation and the stochastic curve shortening flow with additive Gaussian noise. We shall present improved pathwise regularity results and improved moment and decay estimates for a general class of singular divergence-type PDEs.
Based on joint work with Benjamin Gess (Leipzig and Bielefeld), Wei Liu (Xuzhou), Florian Seib (Berlin), and Wilhelm Stannat (Berlin).
The theory we present aims at expanding the classical Arbitrage Pricing Theory to a setting where N agents invest in stochastic security markets while also engaging in zero-sum risk exchange mechanisms. We introduce in this setting the notions of Collective Arbitrage and of Collective Super-replication and accordingly establish versions of the fundamental theorem of asset pricing and of the pricing-hedging duality. When computing the Collective Super-replication price for a given vector of contingent claims, one for each agent in the system, allowing additional exchanges among the agents reduces the overall cost compared to classical individual super-replication. The positive difference between the aggregation (sum) of individual superhedging prices and the Collective Super-replication price represents the value of cooperation. Finally, we explain how these collective features can be associated with a broader class of risk measurement or cost assessment procedures beyond the superhedging framework. This leads to the notion of Collective Risk Measures, which generalize the idea of risk sharing and inf-convolution of risk measures.
I am going to provide an overview of recent progress in the analysis of bifurcations for dynamical systems, with a focus on early-warning signs. The need for a more detailed understanding of stochastic multiscale bifurcations has emerged over the last decades in the context of tipping points (or critical transitions) in many complex systems. As a benchmark application, I am going to motivate the development of the mathematical analysis via problems in climate systems, in particular the possible tipping of the Atlantic Meridional Overturning Circulation. We shall discuss how to prove and utilize scaling laws as early-warning signs for finite-dimensional fast-slow stochastic ODEs, and then proceed to carry this theory over to stochastic PDEs. The work presented is based upon a series of papers.
Many-body open quantum systems, described by Lindbladian master equations, are a rich class of physical models that display complex phenomena which remain to be understood. Here we theoretically analyze noisy analogue quantum simulation of geometrically local open quantum systems and provide evidence that this problem is both hard to simulate on classical computers and could be approximately solved on near-term quantum devices.
Forecast reconciliation has attracted significant research interest in recent years, with most studies taking the hierarchy of time series as given. We extend existing work that uses time series clustering to construct hierarchies, with the goal of improving forecast accuracy. First, we investigate multiple approaches to clustering, including not only different clustering algorithms, but also the way time series are represented and how distance between time series is defined. Second, we devise an approach based on random permutation of hierarchies, keeping the structure of the hierarchy fixed while time series are randomly allocated to clusters. Third, we propose an approach based on averaging forecasts across hierarchies constructed using different clustering methods, which is shown to outperform any single clustering method. Our findings provide new insights into the role of hierarchy construction in forecast reconciliation and offer valuable guidance for forecasting practice.
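For readers unfamiliar with the reconciliation step that any constructed hierarchy feeds into, here is a minimal sketch using the OLS projection variant (the toy hierarchy, the numbers, and the choice of OLS rather than, say, MinT are illustrative assumptions, not the talk's setup):

```python
import numpy as np

# Toy hierarchy: total = A + B,  A = a1 + a2,  B = b1 + b2.
# Node order: [total, A, B, a1, a2, b1, b2].
# S maps bottom-level forecasts to all nodes of the hierarchy.
S = np.array([
    [1, 1, 1, 1],   # total
    [1, 1, 0, 0],   # cluster A
    [0, 0, 1, 1],   # cluster B
    [1, 0, 0, 0],   # a1
    [0, 1, 0, 0],   # a2
    [0, 0, 1, 0],   # b1
    [0, 0, 0, 1],   # b2
], dtype=float)

# Incoherent base forecasts, one per node (the aggregates disagree
# with the sums of their children).
base = np.array([102.0, 55.0, 44.0, 30.0, 26.0, 20.0, 25.0])

# OLS reconciliation: orthogonal projection of the base forecasts onto
# the coherent subspace spanned by the columns of S.
bottom = np.linalg.solve(S.T @ S, S.T @ base)
reconciled = S @ bottom   # coherent: every aggregate equals the sum of its children
```

Changing how the bottom series are allocated to clusters changes S, and hence the reconciled forecasts — which is why hierarchy construction matters.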
Abstract: Audiences often think of music as primarily a product of the heart, but pianist / composer / coder Dan Tepfer argues that algorithms - rules that are followed consistently - are just as important. Without constraints underlying creativity, whether conscious or not, music tends to lack the deep structure that makes it timeless. In his newest project, Natural Machines, he has taken this idea to the limit, programming rules into his computer that enable it to respond in real time to the music he improvises. The computer creates immediate structure around whatever he plays at the Yamaha Disklavier player piano, which in turn guides him to improvise in certain ways, for an unprecedented melding of natural and mechanical processes. The idea of music living at the intersection of the algorithmic and the spiritual is far from new. It was Pythagoras who first codified the logic behind harmonic consonance. Renaissance composers such as Ockeghem created music that followed strict mathematical procedures. And Bach seemed to gain endless creative results from imposing constraints on himself; Tepfer has been performing his Goldberg Variations worldwide since the 2011 release of his album Goldberg Variations / Variations, in which he follows each of Bach's variations with an improvised variation of his own. Join Tepfer as he explains the deep connections between the high-tech Natural Machines, the timeless music of Bach, and the algorithms that support it all.
About the speaker: Dan Tepfer is an internationally renowned pianist and composer based in New York City who has performed and recorded around the world with leading musicians in both jazz and classical music, such as Lee Konitz, Paul Motian and Renée Fleming. Dan Tepfer earned global acclaim for his 2011 release Goldberg Variations / Variations, in which he performs J.S. Bach's masterpiece and improvises upon it to "elegant, thoughtful and thrilling" effect (New York magazine). Tepfer's 2019 video album Natural Machines stands as one of his most ingeniously forward-minded projects yet, finding him exploring in real time the intersection between science and art, coding and improvisation, digital algorithms and the rhythms of the heart. His 2023 return to Bach, Inventions / Reinventions, an exploration of the narrative processes behind Bach's beloved Inventions, became a best-seller, spending two weeks in the #1 spot on the Billboard Classical Charts. Alongside his musical training, which includes a degree in jazz piano from the New England Conservatory in Boston, he holds a Bachelor's degree in Astrophysics from the University of Edinburgh. From a young age, Dan Tepfer has been interested in coding, which he now uses in highly creative ways to make music, as in Natural Machines. During the pandemic, his belief that music brings people together in times of crisis led him to dive into live-streaming, performing close to two hundred online concerts. As part of this effort, he pioneered ultra-low-latency audio technology that enables him to perform live over the internet with musicians in separate locations, culminating in the development of his own app, FarPlay, which is now distributed by a company of which he is the CEO.
One pervasive task found throughout the empirical sciences is to determine the effect of interventions from observational (non-experimental) data. It is well-understood that assumptions are necessary to perform causal inferences, which are commonly articulated through causal diagrams (Pearl, 2000). Despite the power of this approach, there are settings where the knowledge necessary to fully specify a causal diagram may not be available, particularly in complex, high-dimensional domains. In this talk, I will briefly present two recent causal effect identification results that relax the stringent requirement of fully specifying a causal diagram. The first is a new graphical modeling tool called cluster DAGs (for short, C-DAGs) that allows for the specification of relationships among clusters of variables, while the relationships between the variables within a cluster are left unspecified [1]. The second includes a complete calculus and algorithm for effect identification from a Partial Ancestral Graph (PAG), which represents a Markov equivalence class of causal diagrams, fully learnable from observational data [2]. These approaches are expected to help researchers and data scientists identify novel effects in real-world domains, where knowledge is largely unavailable and coarse. \[ \] References: [1] Anand, T. V., Ribeiro, A. H., Tian, J., & Bareinboim, E. (2023). Causal Effect Identification in Cluster DAGs. In Proceedings of the AAAI Conference on Artificial Intelligence, Vol. 37, No. 10, pp. 12172-12179. [2] Jaber, A., Ribeiro, A., Zhang, J., & Bareinboim, E. (2022). Causal Identification under Markov Equivalence: Calculus, Algorithm, and Completeness. Advances in Neural Information Processing Systems, 35, 3679-3690.
Fault-tolerant protocols and quantum error correction (QEC) are essential to building reliable quantum computers from imperfect components that are vulnerable to errors. Optimizing the resource and time overheads needed to implement QEC is one of the most pressing challenges that will facilitate a transition from NISQ to the fault tolerance era. In this talk, I will discuss two intriguing ideas that can significantly reduce these overheads. The first idea, erasure qubits, relies on an efficient conversion of the dominant noise into erasure errors at known locations, greatly enhancing the performance of QEC protocols. The second idea, single-shot QEC, guarantees that even in the presence of measurement errors one can perform reliable QEC without repeating measurements, incurring only constant time overhead.
We introduce the Rigged Dynamic Mode Decomposition (Rigged DMD) algorithm, which computes generalized eigenfunction decompositions of Koopman operators. By considering the evolution of observables, Koopman operators transform complex nonlinear dynamics into a linear framework suitable for spectral analysis. While powerful, traditional Dynamic Mode Decomposition (DMD) techniques often struggle with continuous spectra. Rigged DMD addresses these challenges with a data-driven methodology that approximates the Koopman operator's resolvent and its generalized eigenfunctions using snapshot data from the system's evolution. At its core, Rigged DMD builds wave-packet approximations for generalized Koopman eigenfunctions and modes by integrating Measure-Preserving Extended Dynamic Mode Decomposition with high-order kernels for smoothing. This provides a robust decomposition encompassing both discrete and continuous spectral elements. We derive explicit high-order convergence theorems for generalized eigenfunctions and spectral measures. Additionally, we propose a novel framework for constructing rigged Hilbert spaces using time-delay embedding, significantly extending the algorithm's applicability. We provide examples, including systems with a Lebesgue spectrum, integrable Hamiltonian systems, the Lorenz system, and a high-Reynolds number lid-driven flow in a two-dimensional square cavity, demonstrating Rigged DMD's convergence, efficiency, and versatility. This work paves the way for future research and applications of decompositions with continuous spectra. This talk is based on joint work with Catherine Drysdale (University of Birmingham) and Andrew Horning (MIT).
The training of modern machine learning models often consists of solving high-dimensional non-convex optimisation problems over large-scale data. Here, momentum-based stochastic optimisation algorithms have become especially popular. The stochasticity arises from data subsampling, which reduces computational cost; both momentum and stochasticity help the algorithm to converge globally. In this work, we propose and analyse a continuous-time model for stochastic gradient descent with momentum. This model is a piecewise-deterministic Markov process that represents the optimiser by an underdamped dynamical system and the data subsampling through a stochastic switching. We investigate long-time limits, the subsampling-to-no-subsampling limit, and the momentum-to-no-momentum limit. We are particularly interested in the case of reducing the momentum over time: under convexity assumptions, we show convergence of our dynamical system to the global minimiser when reducing momentum over time and letting the subsampling rate go to infinity. We then propose a stable, symplectic discretisation scheme to construct an algorithm from our continuous-time dynamical system. In experiments, we study our scheme on convex and non-convex test problems. Additionally, we train a convolutional neural network on an image classification problem, where our algorithm reaches results competitive with stochastic gradient descent with momentum.
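The three ingredients discussed above — subsampled gradients, a momentum term, and momentum that is reduced over time — can be illustrated with a toy discrete-time loop on a convex least-squares problem (the problem, step size, and momentum schedule are assumptions for illustration; the paper's algorithm is instead a symplectic discretisation of the continuous-time dynamics):

```python
import numpy as np

rng = np.random.default_rng(0)

# Noise-free convex least-squares problem: f(w) = ||X w - y||^2 / (2 n).
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = X @ w_true

w = np.zeros(d)   # iterate
v = np.zeros(d)   # momentum (velocity) variable
lr, batch = 0.05, 32
for t in range(1, 2001):
    idx = rng.choice(n, size=batch, replace=False)     # data subsampling
    g = X[idx].T @ (X[idx] @ w - y[idx]) / batch       # stochastic gradient
    mu = 0.9 / (1.0 + 0.01 * t)                        # momentum reduced over time
    v = mu * v - lr * g
    w = w + v
# With decaying momentum on this convex problem, w approaches w_true.
```

On this noise-free problem the stochastic gradients vanish at the minimiser, so the iterate converges to it; the talk's analysis makes the analogous continuous-time statement precise.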
I'll introduce a certain generalization of a martingale with the following property: at each time, the conditional expectation of a future value given the past, is a weighted average of all the values comprising the past. We'll assume only that more recent values are weighted no less than older values. We'll discuss motivations and constructions, and conditions under which martingale-like behaviors, such as maximal inequalities and convergence, are present in an appropriate form.
Polyforms—shapes constructed by gluing together copies of cells in an underlying grid—are a convenient experimental tool with which to probe problems in tiling theory. Unlike shapes more generally, they can be enumerated exhaustively, and are amenable to analysis using discrete computation. Furthermore, polyforms appear to be quite expressive in terms of the range of tiling-theoretic behaviours they can exhibit. I discuss the computation of isohedral numbers and Heesch numbers, both of which are connected to a variety of unsolved problems in tiling theory, and the connection of these problems to the world's first aperiodic monotiles, discovered in 2023.
https://www.math.cit.tum.de/math/aktuelles/article/department-kolloquium-sommer-2024/
This talk contains two parts. Part one gives an overview of my research, as part of the TUM Habilitation tradition. In part two, I will present my latest result.
In the first part, I will focus on a high-level overview of my results in gSQG point vortex dynamics. I will then gently flip over to some results on traveling waves in heterogeneous environments.
My latest result, which I will present in greater detail, is a fine analysis of the structure of propagating phase boundaries in a viscous diffusion equation. We characterize the relevant traveling waves and study two sharp-interface regimes corresponding to two different limits, namely vanishing viscosity and the bilinear limit. This result is joint work with Michael Herrmann and Dirk Janssen.
The ultimate goal of causal inference is so-called causal effect identification (ID), which refers to quantifying the causal influence of a subset of variables on a target set. A stepping stone towards performing ID is learning the causal relationships among the variables which is commonly called causal structure learning (CSL). In this talk, I mainly focus on the problems pertaining to CSL and ID in linear structural causal models, which serve as the basis for problem abstraction in various scientific fields. In particular, I will review the identifiability results and algorithms for CSL and ID in the presence of latent confounding. Then, I will present our recent result on the ID problem using cross-moments among observed variables and discuss its applications to natural experiments and proximal causal inference. Finally, I conclude the presentation with possible future research directions.
The spread of infectious diseases is significantly influenced by the underlying structure of social and contact networks. Epidemic models often use moment systems to describe the evolution of the expected average prevalence of infections. Employing moment closures, which commonly involve network structure, simplifies the moment system by reducing the number of coupled ODEs. The clustering of nodes accelerates disease spread through dense local connections, suggesting the use of closures that account for this phenomenon. Closures for networks with highly heterogeneous degree distributions, such as the Super Compact Pairwise (SCP) closure, may more accurately predict the spread of diseases on real-world complex networks.
We explore the dynamics of Susceptible-Infected-Susceptible (SIS) epidemics on small-world networks, characterized by high clustering and low average path length. Using the Watts–Strogatz model, we investigate how topological features of networks, such as clustering and degree distribution, impact epidemic spread, thresholds, and endemic prevalence. We provide an overview of various closures and compare their effectiveness in modelling disease dynamics. Results indicate that the closure involving clustering shows better agreement with stochastic simulations, particularly near critical epidemic thresholds.
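As a concrete illustration of the stochastic simulations such closures are benchmarked against, here is a minimal discrete-time SIS simulation on a Watts–Strogatz-style small-world graph (pure-Python sketch; the parameters, the synchronous update, and the simplified rewiring step are illustrative assumptions, not the talk's setup):

```python
import random

def watts_strogatz(n, k, p, rng):
    """Ring lattice on n nodes (each joined to its k nearest neighbours on
    either side), with each lattice edge rewired to a random non-neighbour
    with probability p. Edge count is preserved."""
    adj = {v: set() for v in range(n)}
    for v in range(n):
        for j in range(1, k + 1):
            adj[v].add((v + j) % n)
            adj[(v + j) % n].add(v)
    for v in range(n):
        for j in range(1, k + 1):
            w = (v + j) % n
            if w in adj[v] and rng.random() < p:
                candidates = [u for u in range(n) if u != v and u not in adj[v]]
                if candidates:
                    u = rng.choice(candidates)
                    adj[v].remove(w); adj[w].remove(v)
                    adj[v].add(u); adj[u].add(v)
    return adj

def sis_step(adj, infected, beta, gamma, rng):
    """One synchronous SIS update: each S-I edge transmits with probability
    beta, then each infected node recovers with probability gamma."""
    new = set(infected)
    for v in infected:
        for u in adj[v]:
            if u not in infected and rng.random() < beta:
                new.add(u)
    for v in infected:
        if rng.random() < gamma:
            new.discard(v)
    return new

rng = random.Random(0)
adj = watts_strogatz(n=200, k=2, p=0.1, rng=rng)
infected = {0, 1, 2}
prevalence = []
for _ in range(100):
    infected = sis_step(adj, infected, beta=0.2, gamma=0.1, rng=rng)
    prevalence.append(len(infected) / 200)
```

Averaging such prevalence trajectories over many runs gives the quantity that the pairwise moment systems and their closures approximate via ODEs.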
Maker-Breaker is a two-player game played on a graph, in which Breaker tries to cut off a special vertex (e.g. the origin or root) by erasing edges, while Maker tries to prevent this by fixing edges. In this talk we consider the game played on supercritical Galton-Watson trees and determine the corresponding winning probabilities under different information regimes.
We consider the problem of finding a basis of a matroid with weight exactly equal to a given target. Here weights can be small discrete values or more generally m-dimensional vectors of small discrete values. We resolve the parameterized complexity completely, by presenting an FPT algorithm parameterized by the maximum weight and m for arbitrary matroids. Prior to our work, no such algorithms were known even when weights are in 0/1, or arbitrary and m = 1. Our main technical contributions are new proximity and sensitivity bounds for matroid problems, independent of the number of elements. These bounds imply FPT algorithms via matroid intersection. This is joint work with Friedrich Eisenbrand and Karol Węgrzycki.
The evolutionary dynamics of cell-to-cell communication in bacterial populations (living in biofilms or growing in nutrient media) can be examined using mathematical modelling and computer simulations. A significant trend in recent decades has been the development of hybrid models for describing complex microbial systems that are difficult to formalize, in order to predict and control their states. The current project aims at a more in-depth development of hybrid approaches for in silico studies of cell-to-cell bacterial communication processes in microbial populations across different lifestyles.
We first propose a mathematical model to describe the pattern formation of bacteria grown on a nutrient medium and the corresponding cell-to-cell communication characteristics of the microbial system. The conceptualization includes a deterministic model of bacterial quorum sensing and an Allen-Cahn-based model of bacterial colony evolution, combined with a model of changes in biomass-dependent nutrient concentration. Various computational experiments were performed to examine different scenarios for the spatio-temporal dynamics of the key substances of the biosystem, taking into account the Allee effect. In the second part, we develop a hybrid cellular automaton-based model of biofilm evolution with a mechanism of cell-to-cell bacterial communication, and propose a simulation algorithm for biofilm formation that incorporates this mechanism. The results of the discrete-dynamic simulations indicate various spatial biofilm structures formed under variations in nutritional regimes, quorum levels, and inoculation processes.
The semi-geostrophic equations are a simplified model of large-scale atmospheric flows and frontogenesis. In this talk I will discuss existence and numerical approximation of weak solutions of the semi-geostrophic equations for a compressible fluid. This is joint work with Charlie Egan (Göttingen), Théo Lavier and Beatrice Pelloni (Heriot-Watt).
Insurance claims are often not paid out immediately. In long-tail lines such as liability or motor liability, it can take years or even decades until a claim is settled. In order to set up adequate reserves, so-called IBNR methods are used to predict future payments. Chain ladder is probably the most popular IBNR method worldwide. Since large losses behave quite differently from attritional losses, it is advisable to separate the two loss categories in the IBNR calculation. We introduce a stochastic model for the development of attritional and large claims in long-tail lines of business and present a corresponding chain ladder-like IBNR method that predicts attritional and large losses in a consistent way.
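For readers unfamiliar with chain ladder, the classical method on a cumulative run-off triangle can be sketched in a few lines (all figures are purely illustrative; the talk's consistent treatment of attritional and large claims is not reproduced here):

```python
import numpy as np

# Cumulative claims triangle: rows = accident years, columns = development
# years. NaN marks future, not-yet-observed cells. Figures are illustrative.
triangle = np.array([
    [100.0, 150.0, 165.0, 170.0],
    [110.0, 168.0, 185.0, np.nan],
    [ 95.0, 140.0, np.nan, np.nan],
    [120.0, np.nan, np.nan, np.nan],
])

def chain_ladder(tri):
    """Complete a cumulative run-off triangle with volume-weighted
    development factors (the classical chain ladder predictor)."""
    tri = tri.copy()
    for j in range(tri.shape[1] - 1):
        # Rows where both column j and j+1 are observed; for a standard
        # staircase triangle, projected cells never enter this estimate.
        observed = ~np.isnan(tri[:, j]) & ~np.isnan(tri[:, j + 1])
        f = tri[observed, j + 1].sum() / tri[observed, j].sum()  # dev. factor
        missing = ~np.isnan(tri[:, j]) & np.isnan(tri[:, j + 1])
        tri[missing, j + 1] = tri[missing, j] * f                # project forward
    return tri

full = chain_ladder(triangle)
ultimates = full[:, -1]  # predicted ultimate cumulative claims per accident year
```

Applying this naively to the combined triangle is exactly what becomes problematic when large losses develop differently from attritional ones, which motivates the separated-but-consistent method of the talk.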
Given an open n-dimensional set $\Omega$ with Lipschitz boundary, a set $E$ is an almost-minimizer of the relative perimeter if it minimizes the functional $P(E,\Omega)$ (roughly speaking, the $(n-1)$-area of $\partial E \cap \Omega$) among local competitors, up to a suitably quantified error. While interior regularity theory for almost-minimizers has been established since 1984, much less is known about the boundary behavior even of perimeter minimizers, when the boundary of $\Omega$ is not at least of class $C^{1,1}$. We present some results in this direction: a boundary monotonicity formula, valid under a so-called visibility property of $\Omega$ at a given point $x\in \partial\Omega$, and a vertex-skipping property for almost-minimizers in 3-dimensional convex domains, under no extra smoothness assumptions on $\partial \Omega$. The optimality of the restriction to dimension 3 in the second result will also be discussed. This research is in collaboration with Giacomo Vianello (UniTN).
Fully-localised planar patterns with dihedral symmetry, including cellular hexagons and squares, have been found experimentally and numerically in various continuum models; for example, in nonlinear optics, semi-arid vegetation, and on the surface of a ferrofluid (a magnetic fluid). However, there is currently no mathematical theory for the emergence of these types of patterns. In this talk, I will present recent progress regarding the existence of localised dihedral patterns (not necessarily hexagon or square) emerging from a Hamiltonian--Hopf bifurcation for a general class of two-component reaction-diffusion systems.
The planar problem is approximated through a Galerkin scheme, where a finite-mode Fourier decomposition in polar coordinates yields a large, but finite, system of coupled radial differential equations. We then apply techniques from radial spatial dynamics to prove the existence of a zoo of localised dihedral patterns in the finite-mode reduction, subject to solving an (N+1)-dimensional algebraic matching condition. We conclude by studying this matching condition for various finite-mode reductions, and present a computer-assisted proof for the existence of localised patches with 6m-fold symmetry for arbitrarily large Fourier decompositions.
This work is in collaboration with Jason Bramburger (Concordia University) and David Lloyd (University of Surrey).
In this work, we consider a modification of the usual Branching Random Walk (BRW), in which the position of each particle in the last generation $n$ is modified by an i.i.d. copy of a random variable $Y$, which may differ from the driving increment distribution. This model was introduced by Bandyopadhyay and Ghosh (2021), who termed it the Last Progeny Modified Branching Random Walk (LPM-BRW). Depending on the asymptotic properties of the tail of $Y$, we describe the asymptotic behaviour of the extremal process of this model as $n \to \infty$.
This short course covers recent developments in graphical and causal modeling in Statistics/Machine Learning. It is comprised of the following three lectures, each two hours long. \[ \] June 25, 2024; Lecture 1: “Learning from conditional independence when not all variables are measured: Ancestral graphs and the FCI algorithm” \[ \] June 27, 2024; Lecture 2: “Identification of causal effects: A reformulation of the ID algorithm via the fixing operation” \[ \] July 2, 2024; Lecture 3: “Nested Markov models” \[ \] The course targets an audience with exposure to basic concepts in graphical and causal modeling (e.g., conditional independence, DAGs, d-separation, Markov equivalence, definition of causal effects/the do-operator).
Ongoing changes in the residential energy sector, along with the emergence of Distributed Energy Resources (DER) such as photovoltaic (PV) energy generation and battery storage, as well as newer (and potentially more flexible) energy demands like heat pumps and electric vehicle charging, have significantly increased the importance of demand response management to ensure grid stability.
The goal of this thesis is to formulate a linear program modelling the operation of DER technologies such as electric vehicle charging, battery energy storage, heat pumps, and thermal storage in residential households. The objective is to minimize customers' energy costs based on given energy prices and demands.
Different use-case scenarios are established. In these scenarios, households can either be considered individually or as a community where households have the option to share excess energy with their neighbors in a coordinated manner. Furthermore, the scenarios are distinguished based on whether they enforce network limitations, i.e., constrain the available power a transformer substation can supply, or not.
After defining the corresponding LPs for these scenarios, some necessary conditions for solutions to the problems are analyzed and proven. Numerical experiments on exemplary data sets provided by Siemens are performed using the Pyomo modeling framework and Gurobi. Key performance indicators such as substation and household load profiles, substation peak loads, and energy costs for households are evaluated.
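A minimal single-household version of such an LP can be sketched as follows (all symbols are illustrative assumptions, not the thesis's notation): with electricity price $p_t$, inflexible demand $d_t$, PV generation $s_t$, grid import $g_t$, battery charging and discharging $c_t, x_t$, state of charge $e_t$, round-trip efficiency $\eta$, energy capacity $E$ and power limit $C$, one minimizes
\[
\begin{aligned}
\min_{g,c,x,e}\ & \sum_{t} p_t\, g_t \\
\text{s.t.}\ & g_t + s_t + x_t = d_t + c_t && \text{(household power balance)}\\
& e_{t+1} = e_t + \eta\, c_t - x_t/\eta && \text{(battery state of charge)}\\
& 0 \le e_t \le E,\quad 0 \le c_t \le C,\quad 0 \le x_t \le C,\quad g_t \ge 0.
\end{aligned}
\]
The community and network-limited scenarios then add inter-household energy-sharing variables and a shared bound on the total substation import, respectively.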
This thesis was done in cooperation with Siemens AG.
Fire blight is a bacterial plant disease that affects apple and pear trees. We present a mathematical model for its spread in an orchard during bloom. This is a PDE-ODE coupled system, consisting of two semilinear PDEs for the pathogen, coupled to a system of three ODEs for the stationary hosts. Exploratory numerical simulations suggest the existence of travelling waves, which we subsequently prove, under some conditions on parameters, using the method of upper and lower bounds and Schauder's fixed-point theorem. Our results are likely not optimal in the sense that our constraints on parameters, which can be interpreted biologically, are sufficient for the existence of travelling waves, but probably not necessary. Possible implications for fire blight biology and management are discussed.