In this work, we establish the stability of generic polycycles of hybrid planar vector fields, extending previously known results in the literature. The polycycles considered here may contain hyperbolic saddles, tangential singularities, and jump singularities.
There has been increasing interest in the estimation of risk capital within enterprise risk models, particularly through Monte Carlo methods. A key challenge in this area is accurately characterizing the distribution of a company’s available capital, which depends on the conditional expected value of future cash flows. Two prominent approaches are available: the “regress-now” method, which projects cash flows and regresses their discounted values on basis functions, and the “regress-later” method, which approximates cash flows using realizations of tractable processes and subsequently calculates the conditional expected value in closed form. This paper demonstrates that the left and right singular functions of the valuation operator serve as robust approximators for both approaches, enhancing their performance. Furthermore, we describe situations where each method outperforms the other: the regress-later method demonstrates superior performance under certain conditions, while the regress-now method generally exhibits greater robustness.
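For intuition, here is a minimal numerical sketch contrasting the two approaches on a toy valuation problem. The GBM dynamics, call payoff, polynomial basis, and all parameter values are illustrative assumptions, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, t, T = 100_000, 1.0, 2.0
r, sigma, S0, K = 0.03, 0.2, 100.0, 100.0   # illustrative parameters
tau, deg = T - t, 4                          # basis: (s/S0)^0 .. (s/S0)^deg

# Simulate S_t, then S_T given S_t, under risk-neutral geometric Brownian motion.
S_t = S0 * np.exp((r - sigma**2 / 2) * t + sigma * np.sqrt(t) * rng.standard_normal(n))
S_T = S_t * np.exp((r - sigma**2 / 2) * tau + sigma * np.sqrt(tau) * rng.standard_normal(n))
payoff = np.maximum(S_T - K, 0.0)            # cash flow at time T

basis = lambda s: np.vander(s / S0, deg + 1, increasing=True)

# Regress-now: regress the *discounted* cash flow on basis functions of S_t;
# the fitted regression function is itself the time-t value estimate.
beta, *_ = np.linalg.lstsq(basis(S_t), np.exp(-r * tau) * payoff, rcond=None)
V_now = basis(S_t) @ beta

# Regress-later: approximate the cash flow as a polynomial in S_T, then take
# conditional expectations of the basis terms in closed form; under GBM,
# E[(S_T/S0)^k | S_t] = (S_t/S0)^k * exp(k*r*tau + k*(k-1)*sigma^2*tau/2).
alpha, *_ = np.linalg.lstsq(basis(S_T), payoff, rcond=None)
k = np.arange(deg + 1)
cond = basis(S_t) * np.exp(k * r * tau + k * (k - 1) * sigma**2 * tau / 2)
V_later = np.exp(-r * tau) * (cond @ alpha)

print(V_now.mean(), V_later.mean())          # two estimates of E[V(S_t)]

Note how regress-later never regresses on discounted values: the regression happens at the cash-flow level, and the conditional expectation of each basis function is then computed exactly.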
Geometric packing problems have been investigated for centuries in mathematics; a notable example is Kepler's conjecture, from the 1600s, about the packing of spheres in Euclidean space.
In this talk, we mainly address the knapsack problem for hyperspheres. The input is a set of hyperspheres, each associated with a profit, and a hyperrectangular container, the knapsack. The goal is to pack a maximum-profit subset of the hyperspheres into the knapsack. For this problem, we present a PTAS. For a more general version, where arbitrary fat objects are allowed instead of hyperspheres, we present a resource augmentation scheme (an algorithm that finds a solution of at least optimal profit using a slightly enlarged knapsack).
Quantum walks (QWs) can be viewed as quantum analogs of classical random walks. Mathematically, a QW is described as a unitary, local operator acting on a grid and can be written as a product of shift and coin operators. We highlight differences from classical random walks and stress their connection to quantum algorithms (see Grover’s algorithm). If the QW is assumed to be translation invariant, applying the Fourier transform yields a multiplication operator, whose band structure we briefly study. After equipping the underlying lattice with random phases, we turn to dynamical localization: the probability of moving from one lattice site to another decreases, on average, exponentially in the distance, no matter how many steps the QW takes. We sketch the proof of dynamical localization on the hexagonal lattice in the regime of strong disorder, which uses a finite-volume method.
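To make the shift-coin structure concrete, here is a minimal sketch of a one-dimensional Hadamard walk on a cycle. This is an illustrative toy (the talk concerns the hexagonal lattice); the coin, lattice size, and initial state below are assumptions.

import numpy as np

N, steps = 201, 80
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin

# State psi[x, c]: amplitude at site x with coin state c in {0, 1}.
psi = np.zeros((N, 2), dtype=complex)
psi[N // 2] = [1 / np.sqrt(2), 1j / np.sqrt(2)]  # balanced initial coin state

for _ in range(steps):
    psi = psi @ H.T                       # coin operator acts on the coin index
    psi[:, 0] = np.roll(psi[:, 0], -1)    # coin 0: shift one site to the left
    psi[:, 1] = np.roll(psi[:, 1], +1)    # coin 1: shift one site to the right

prob = (np.abs(psi) ** 2).sum(axis=1)     # position distribution after `steps` steps
x = np.arange(N) - N // 2
std = np.sqrt((x**2 * prob).sum() - ((x * prob).sum()) ** 2)
print(std)  # grows linearly in `steps` (ballistic), unlike the classical sqrt(steps)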
Decision makers desire to implement decision rules that, when applied to individuals in the population of interest, yield the best possible outcomes. For example, the current focus on precision medicine reflects the search for individualized treatment decisions, adapted to a patient's characteristics. In this presentation, I will consider how to formulate, choose and estimate effects that guide individualized treatment decisions. In particular, I will introduce a class of regimes that are guaranteed to outperform conventional optimal regimes in settings with unmeasured confounding. I will further consider how to identify or bound these "superoptimal" regimes and their values. The performance of the superoptimal regimes will be illustrated in two examples from medicine and economics.
A fundamental challenge in causal induction is inferring the underlying graph structure from observational and interventional data. While traditional algorithms rely on candidate graph generation and score-based evaluation, my research takes a different approach. I developed neural causal models that leverage the flexibility and scalability of neural networks to infer causal relationships, enabling more expressive mechanisms and facilitating analysis of larger systems.
Building on this foundation, I further explored causal foundation models, inspired by the success of large language models. These models are trained on massive datasets of diverse causal graphs, learning to predict causal structures from both observational and interventional data. This "black box" approach achieves remarkable performance and scalability, significantly surpassing traditional methods. The resulting model exhibits robust generalization to new synthetic graphs, resilience under train-test distribution shifts, and state-of-the-art performance on naturalistic graphs with low sample complexity.
We then leverage LLMs and their metacognitive processes to causally organize skills. By labeling problems and clustering them into interpretable categories, LLMs gain the ability to categorize skills, which acts as a causal variable enabling skill-based prompting and enhancing mathematical reasoning.
This integrated perspective, combining causal induction in graph structures with emergent skills in LLMs, advances our understanding of how skills function as causal variables. It offers a structured pathway to unlock complex reasoning capabilities in AI, paving the way from simple word prediction to sophisticated causal reasoning in LLMs.
Currently, there is no universal definition of a random bifurcation. Many of the existing definitions either fail or are too complicated to validate in applications. To address this, new definitions of a random bifurcation were proposed and subsequently validated for the supercritical pitchfork bifurcation with additive Brownian noise [1]. However, the stochastic integration used in [1] suffers from multiple drawbacks, which can be circumvented by using rough integrals instead. In my master's thesis, we translated this work to the rough path formalism (made precise in [2], [3]) and were able to reproduce the results of [1] using rough instead of stochastic integration. Furthermore, taking advantage of the rough path formalism, we could additionally investigate the case where the additive noise is given by a fractional Brownian motion with Hurst parameter H ∈ (1/3, 1/2). In this talk, we discuss previously existing definitions of a random bifurcation and compare them to the concepts newly proposed in [1]. In particular, we point out where existing definitions fall short. Further, we introduce the rough path formalism and argue why it is such a fitting choice for investigating random bifurcations. Moreover, we compare rough to stochastic integrals and highlight the major advantages of the former. Finally, we present our bifurcation results for the rough fractional pitchfork equation with additive (fractional) Brownian noise.

References
[1] M. Callaway, T. S. Doan, J. S. W. Lamb, and M. Rasmussen. "The dichotomy spectrum for random dynamical systems and pitchfork bifurcations with additive noise". In: Annales de l'Institut Henri Poincaré, Probabilités et Statistiques 53.4 (2017). doi: 10.1214/16-AIHP763.
[2] P. K. Friz and M. Hairer. A Course on Rough Paths. Berlin: Springer, 2014.
[3] P. K. Friz and N. B. Victoir. Multidimensional Stochastic Processes as Rough Paths: Theory and Applications. Cambridge: Cambridge University Press, 2009.
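For orientation, the prototypical pitchfork equation with additive noise studied in [1] takes the following standard form (the precise parametrization in the talk may differ; in the rough setting, W is replaced by a fractional Brownian motion B^H with H ∈ (1/3, 1/2)):

\[
  \mathrm{d}X_t = \bigl(\alpha X_t - X_t^3\bigr)\,\mathrm{d}t + \sigma\,\mathrm{d}W_t,
  \qquad \alpha \in \mathbb{R},\ \sigma > 0.
\]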
In this talk, we present a physically motivated model of diffusion dynamics for a system of n hard spheres (colloids) evolving in a bath of infinitely many very small particles (polymers). We first show that this two-type system with reflection admits a unique strong solution. We then explore the main feature of the model: under the projection of the stationary measure onto the subset of the large spheres, these spheres feel a new attractive short-range dynamical interaction with each other, known in the physics literature as a depletion force, due to the (hidden) small particles. We are able to construct a natural gradient system with depletion interaction, having the projected measure as its stationary measure. Finally, we will see how this dynamics yields, in the high-density limit for the small particles, a constructive dynamical approach to the famous discrete geometry problem of maximising the contact number of n identical spheres. Based on joint work with M. Fradon, J. Kern, and S. Rœlly.
Applying causal discovery algorithms to real-world problems can be challenging, especially when unobserved confounders may be present. In theory, constraint-based approaches like FCI are able to handle this, but they implicitly rely on sparse networks to complete within a reasonable amount of time. However, in practice many relevant systems are anything but sparse. For example, in protein-protein interaction networks, there are often high-density nodes (‘hub nodes’) representing key proteins that are central to many processes in the cell. Other systems, like electric power grids and social networks, can exhibit high-clustering tendencies, leading to so-called small-world networks. In such cases, even basic constraint-based algorithms like PC can get stuck in the high-density regions, failing to produce any useful output. Existing approaches to deal with this, like Anytime-FCI, were designed to interrupt the search at any stage and then produce a sound causal model as soon as possible, but unfortunately they can require even more time to complete than simply letting FCI run by itself.
In this talk I will present a new approach to causal discovery in graphs with high-density regions. Based on a radically new search strategy and modified orientation rules, it builds up the causal graph on the fly, updating the model after each validated edge removal, while remaining sound throughout. It exploits the intermediate causal model to efficiently reduce the number and size of the conditional independence tests, and automatically prioritizes ‘low-hanging fruit’, leaving difficult (high-density) regions until last. The resulting ‘anytime-anywhere FCI+’ is a true anytime algorithm that is not only faster than its traditional counterparts, but also more flexible, and can easily be adapted to handle arbitrary edge removals as well, opening up new possibilities for e.g. targeted cause-effect recovery in large graphs.
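As a toy illustration of the ‘low-hanging fruit’ idea only (not the anytime-anywhere FCI+ algorithm itself, whose search strategy and orientation rules are not spelled out in this abstract), here is a hedged sketch of a PC-style skeleton search that tests edges adjacent to low-degree nodes first and updates the graph after every validated removal. The function `ci_test` is a placeholder conditional-independence oracle supplied by the user, and node labels are assumed sortable (e.g. strings).

from itertools import combinations

def skeleton(nodes, ci_test, max_cond=3):
    """Return an undirected skeleton as a dict of adjacency sets."""
    adj = {v: set(nodes) - {v} for v in nodes}
    for level in range(max_cond + 1):            # size of the conditioning set
        # Prioritize edges whose endpoints currently have small degree,
        # leaving dense (hub) regions until last.
        edges = sorted(
            {frozenset((u, v)) for u in nodes for v in adj[u]},
            key=lambda e: min(len(adj[x]) for x in e),
        )
        for e in edges:
            u, v = tuple(e)
            if v not in adj[u]:
                continue                         # already removed at this level
            neighbors = adj[u] - {v}             # recomputed from the live model
            if len(neighbors) < level:
                continue
            for S in combinations(sorted(neighbors), level):
                if ci_test(u, v, set(S)):        # u independent of v given S
                    adj[u].discard(v)
                    adj[v].discard(u)            # model updated on the fly
                    break
    return adj

In a real implementation the oracle would be a statistical conditional-independence test on data (and conditioning sets from both endpoints' neighborhoods would be tried), with FCI-style orientation rules applied afterwards; this sketch only shows the edge-removal scheduling.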
t.b.a.
TBA
Deep neural networks have led to breakthrough results in numerous practical machine learning tasks. In this lecture, we will attempt a journey through the mathematical universe behind these practical successes, elucidating the theoretical underpinnings of deep neural networks in functional analysis, harmonic analysis, complex analysis, approximation theory, dynamical systems, Kolmogorov complexity, optimal transport, fractal geometry, mathematical logic, and automata theory.
Invited by Prof. Gitta Kutyniok.
We extend the scope of a recently introduced dependence coefficient between a scalar response Y and a multivariate covariate X to the case where X takes values in a general metric space. Particular attention is paid to the case where X is a curve. While on the population level this extension is straightforward, the asymptotic behavior of the estimator we consider is delicate. It crucially depends on the nearest neighbor structure of the infinite-dimensional covariate sample, where the deterministic bounds on the degrees of the nearest neighbor graphs available in multivariate settings no longer exist. The main contribution of this paper is to give some insight into this matter and to propose a way to overcome the problem for our purposes. As an important application of our results, we consider an independence test.
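The abstract does not write out the estimator. As a hedged sketch, the following assumes a coefficient in the spirit of the nearest-neighbor-based dependence coefficient of Azadkia and Chatterjee, with the nearest neighbor taken with respect to a user-supplied metric so that the covariate may be a curve; all names and the toy data are illustrative.

import numpy as np

def dependence_coefficient(X, Y, dist):
    """Nearest-neighbor dependence coefficient of Y on X (Azadkia-Chatterjee
    style); X is a list of points in a metric space, Y an array of scalars,
    dist a metric on the covariate space."""
    n = len(Y)
    Y = np.asarray(Y, dtype=float)
    # Nearest neighbor of each X_i under the supplied metric (brute force, O(n^2)).
    D = np.array([[dist(X[i], X[j]) if i != j else np.inf for j in range(n)]
                  for i in range(n)])
    N = D.argmin(axis=1)
    R = np.array([(Y <= y).sum() for y in Y])   # R_i = #{j : Y_j <= Y_i}
    L = np.array([(Y >= y).sum() for y in Y])   # L_i = #{j : Y_j >= Y_i}
    num = (n * np.minimum(R, R[N]) - L**2).sum()
    den = (L * (n - L)).sum()
    return num / den

# Toy data: random curves on a grid, Y a functional of the curve, L2-type metric.
rng = np.random.default_rng(1)
grid = np.linspace(0.0, 1.0, 50)
X = [rng.standard_normal() * np.sin(2 * np.pi * grid)
     + 0.1 * rng.standard_normal(grid.size) for _ in range(300)]
Y = np.array([(x**2).mean() for x in X])          # Y depends on the curve X
l2 = lambda f, g: np.sqrt(((f - g) ** 2).mean())  # discretized L2 distance
print(dependence_coefficient(X, Y, l2))           # markedly positive under dependence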