Abstract: Part 1: First, I will describe an adaptive moving mesh method for solving space-fractional partial differential equations of fractional order between 1 and 2. The fractional Laplacian in the PDE model is defined in terms of the Riesz derivative. The approach extends the so-called L2 method to the non-uniform mesh case. The spatial mesh generation makes use of a moving mesh PDE (MMPDE5) with additional filtering. Numerical experiments are given for the space-fractional Gray-Scott reaction-diffusion model. They reveal a rich set of different patterns, showing interesting and surprising differences in behaviour compared to the well-known integer-order case. The adaptive method detects self-replication patterns, travelling waves, and chaotic solutions, along with two remarkable evolution processes depending on the fractional order: from self-replication to standing waves and from travelling waves back to self-replication.
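For orientation, the space-fractional Gray-Scott system referred to above can be written (in one standard form; the precise scaling used in the talk may differ) as

```latex
\begin{aligned}
u_t &= -D_u(-\Delta)^{\alpha/2}u - uv^2 + F(1-u),\\
v_t &= -D_v(-\Delta)^{\alpha/2}v + uv^2 - (F+k)v,
\end{aligned}
\qquad 1 < \alpha < 2,
```

where \(F\) is the feed rate and \(k\) the kill rate; in the limit \(\alpha \to 2\) the fractional Laplacian reduces to the classical one and the well-known integer-order Gray-Scott model is recovered.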
Part 2: Secondly, I will address a PDE model with a half-Laplacian operator. The analysis of this model relies on on the relationship between the Hilbert transform and the half-Laplacian. A doubling-splitting method is proposed, which results in a backward wave equation (BWE). Next, a second-order parallel boundary value method is applied over a large time scale, showing that the method is convergent and stable, even in ill-posed cases. Two special cases are discussed: an advection-dominated PDE and the space-fractional Schrödinger equation. It is shown that the solution to the BWE is equivalent to the one of the original PDE, both analytically and numerically. As an additional surprising result, we find traveling wave solutions for the linear fractional-order Schrödinger equation.
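The relationship between the Hilbert transform and the half-Laplacian mentioned above is, in one space dimension,

```latex
(Hu)(x) = \frac{1}{\pi}\,\mathrm{p.v.}\!\int_{\mathbb{R}}\frac{u(y)}{x-y}\,dy,
\qquad
\widehat{Hu}(\xi) = -i\,\operatorname{sgn}(\xi)\,\hat{u}(\xi),
```

so that, since \(|\xi| = \bigl(-i\operatorname{sgn}\xi\bigr)(i\xi)\) on the Fourier side,

```latex
(-\Delta)^{1/2}u = H(\partial_x u) = \partial_x(Hu).
```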
We consider the task of learning dynamical systems from data in a system-agnostic framework. This presentation is divided into two parts. First, we utilize a simulation-based approach to investigate algorithms for learning chaotic Ordinary Differential Equations (ODEs). We demonstrate that for noise-free data and low-dimensional systems, this task is effectively solved, as polynomial regression-based methods can achieve machine-precision forecasts. However, we show that observational noise remains a significant challenge for most algorithms. In the second part, we address this challenge by developing nonparametric statistical theory for learning ODEs from noisy observations. Specifically, we establish minimax optimal error rates for two contrasting observational models: the Stubble model, consisting of many short trajectories, and the Snake model, consisting of a single long trajectory. We conclude by discussing challenges at the intersection of dynamical systems and statistical learning.
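The noise-free claim above can be illustrated with a minimal sketch (my illustration, not the speaker's code): linear regression on a polynomial library recovers the Lorenz system from exact trajectory data essentially to machine precision.

```python
# Minimal sketch: recover the Lorenz-63 ODE from noise-free trajectory data
# by least squares on a library of monomials of total degree <= 2.

import math

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f([s[i] + 0.5 * dt * k1[i] for i in range(3)])
    k3 = f([s[i] + 0.5 * dt * k2[i] for i in range(3)])
    k4 = f([s[i] + dt * k3[i] for i in range(3)])
    return [s[i] + dt / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(3)]

# Sample the attractor: discard a transient, then record every 10th step.
s, dt = [1.0, 1.0, 1.0], 0.002
for _ in range(2000):
    s = rk4_step(lorenz, s, dt)
samples = []
for _ in range(300):
    for _ in range(10):
        s = rk4_step(lorenz, s, dt)
    samples.append(s)

# Library: all monomials x^i y^j z^k with i + j + k <= 2.
EXPS = [(0,0,0),(1,0,0),(0,1,0),(0,0,1),(2,0,0),
        (1,1,0),(1,0,1),(0,2,0),(0,1,1),(0,0,2)]

def features(s):
    x, y, z = s
    return [x**i * y**j * z**k for (i, j, k) in EXPS]

def solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

# Least squares via normal equations: one coefficient vector per coordinate.
Theta = [features(s) for s in samples]
m, n = len(Theta), len(EXPS)
A = [[sum(Theta[t][i] * Theta[t][j] for t in range(m))
      for j in range(n)] for i in range(n)]
coef = []
for d in range(3):
    target = [lorenz(s)[d] for s in samples]  # noise-free derivatives
    b = [sum(Theta[t][i] * target[t] for t in range(m)) for i in range(n)]
    coef.append(solve(A, b))
```

With noise-free derivatives the true coefficients solve the regression exactly, so they are recovered up to floating-point conditioning; adding observational noise to `samples` and `target` is precisely where this simple approach starts to break down, which is the subject of the second part of the talk.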
Clouds are important features of the atmosphere, determining the energy budget by interacting with incoming solar radiation and outgoing thermal radiation. For pure ice clouds, the net impact of the different radiative effects is still unknown, and there is no generally accepted theory of clouds in terms of a closed system of partial differential equations or similar.
In this talk, I will present a simple but physically consistent ice cloud model which is a 3D nonlinear ODE system (depending on several parameters). This model constitutes a nonlinear oscillator with two Hopf bifurcations in the relevant parameter regime. Apart from the equilibrium points and bifurcations, limit cycles and scaling behaviours of the system for varying parameters can be determined numerically. Finally, the model shows very good agreement with measurement data, indicating that the main physics is captured and such a simple model might be a helpful tool for investigating ice clouds.
This is joint work with Peter Spichtinger.
Let \(p\) be a real polynomial in \(n\) variables of even degree \(2d\). A fundamental computational task, with applications in optimization and real algebraic geometry, is to decide whether \(p\) can be written as a sum of squares of polynomials, that is, whether there exist polynomials \(q_1, \ldots, q_m\) such that \(p = q_1^2 + q_2^2 + \cdots + q_m^2\). In this talk, I will discuss the computational complexity of this question.
(based on joint work with Nikolas Gärtner and Victor Magron)
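As a toy illustration of the decision problem (my example, not material from the talk): \(p\) is a sum of squares if and only if \(p(x) = z(x)^\top Q\, z(x)\) for some positive semidefinite Gram matrix \(Q\), where \(z(x)\) is a vector of monomials; a Cholesky factor of \(Q\) then exhibits the squares explicitly. The sketch below certifies \(p(x) = x^4 + 2x^2 + 1\) in the basis \(z = (1, x, x^2)\).

```python
# Toy SOS certificate: p(x) = x^4 + 2x^2 + 1 = z^T Q z with z = (1, x, x^2)
# and Q positive definite; Cholesky Q = L L^T gives p = |L^T z|^2.

import math

def cholesky(Q):
    """Return lower-triangular L with L L^T = Q, or None if Q is not PD."""
    n = len(Q)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = Q[i][j] - sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                if s <= 0:
                    return None
                L[i][i] = math.sqrt(s)
            else:
                L[i][j] = s / L[j][j]
    return L

# Matching coefficients of z^T Q z = Q00 + 2 Q01 x + (Q11 + 2 Q02) x^2
#                                    + 2 Q12 x^3 + Q22 x^4 against p:
Q = [[1.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 1.0]]

L = cholesky(Q)  # PSD certificate; here p = 1^2 + (sqrt(2) x)^2 + (x^2)^2

def p(x):
    return x**4 + 2 * x**2 + 1

def p_from_sos(x):
    z = [1.0, x, x**2]
    # the i-th square is ((L^T z)_i)^2
    return sum(sum(L[r][i] * z[r] for r in range(3)) ** 2 for i in range(3))
```

In general the Gram matrix is not unique (here \(Q_{02}\) could be traded against \(Q_{11}\)), and deciding whether *some* PSD choice exists is a semidefinite feasibility problem; the complexity of this question is what the talk addresses.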
Suppose we are given some data, and we hypothesize a structural causal model to describe them: how can we narrow the set of causal graphs compatible with our observations? The theory of identifiability aims to answer this question. We show that, in the case of additive noise models, the score function of the data contains all the information about the causal graph. However, this requires strong and, crucially, hard-to-verify modeling assumptions, like additivity of the noise. When direct experiments to infer causality are not feasible, this raises the question: how can we move past these restrictions? Borrowing ideas from independent component analysis, we show how multiple environments (read: non i.i.d. data) can overcome these limitations: for structural causal models with arbitrary causal mechanisms, data from only three environments uniquely identify the causal graph from the Jacobian of the score function. Thus, non-i.i.d.-ness turns from a curse into a blessing for causal discovery.
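One concrete instance of the score-function principle above, taken from the score-based causal discovery literature (stated here for additive noise models with Gaussian noise; the talk's precise formulation may differ): with score \(s(x) = \nabla_x \log p(x)\),

```latex
X_j \text{ is a leaf of the causal graph}
\iff
\operatorname{Var}_{X}\!\bigl[\partial_{x_j} s_j(X)\bigr] = 0,
```

and the full graph can be recovered by iteratively identifying a leaf, removing it, and repeating on the remaining variables.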
In this talk, we investigate mathematically how capillary-driven viscous thin fluid films evolve on microscopic length scales, in which case thermal noise due to fluctuations of the fluid particles comes into play. The underlying stochastic partial differential equation (SPDE) is a stochastic thin-film equation, a fourth-order degenerate parabolic PDE driven by nonlinear gradient noise. This equation was first suggested in the physical literature approximately 20 years ago, and existence of solutions for nonlinear noise was only established very recently. The key observation is that the Stratonovich formulation of the equation is the physically correct mathematical formulation, leading to a suitable balance of fluctuations and dissipation in the underlying physics and to the correct balance in the energy-entropy dissipation relations. Specifically, we establish existence of nonnegative martingale solutions for nonlinear mobilities, and we further prove existence of measure-valued solutions for initial values with non-full support. The latter forms a first step towards proving finite speed of propagation and for investigating contact-line dynamics on microscopic length scales.
This talk is based on joint works with Konstantinos Dareiotis (Leeds), Benjamin Gess (TU Berlin and Max Planck Institute MiS, Leipzig), Günther Grün (Erlangen), and Max Sauerbrey (formerly TU Delft, now Max Planck Institute MiS, Leipzig).
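In one common form (a sketch for orientation; normalizations and the noise term vary across the literature), the stochastic thin-film equation discussed in the abstract above reads

```latex
\partial_t u \;=\; -\,\partial_x\!\bigl(m(u)\,\partial_x^3 u\bigr)
\;+\; \partial_x\!\bigl(\sqrt{m(u)}\,\circ\,\dot W\bigr),
\qquad m(u) = u^n,
```

with \(u\) the film height, \(m\) the (degenerate) mobility, and \(\circ\) denoting the Stratonovich product singled out as the physically correct interpretation.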
We analyze the qualitative behavior of stochastic partial differential equations (SPDEs) with a particular focus on bifurcations. To this end, we investigate a change of sign in the finite-time Lyapunov exponents (FTLEs) of the SPDE in a small-noise regime and close to a phase transition. Under suitable assumptions, the FTLEs are positive and thus indicate a change of stability. These results are applied to the stochastic Allen-Cahn and Burgers equations with non-Markovian noise and to singular SPDEs. Moreover, we also discuss properties of FTLEs, in particular large-deviations-type results.
We study uniform computability properties of PAC learning using Weihrauch complexity. We focus on closed concept classes, which are represented either by positive, by negative, or by full information. Among other results, we prove that proper PAC learning from positive information is equivalent to the limit operation on Baire space, whereas improper PAC learning from positive information is closely related to Weak König's Lemma and even equivalent to it when we have some negative information about the admissible hypotheses. If arbitrary hypotheses are allowed, then improper PAC learning from positive information is still in a finitary DNC range, which implies that it is non-deterministically computable, but does not allow for probabilistic algorithms. These results can also be seen as a classification of the degree of constructivity of the Fundamental Theorem of Statistical Learning. All the aforementioned results hold if an upper bound of the VC dimension is provided as additional input. We also study the question of how these results are affected if the VC dimension is not given, but only promised to be finite, or if concept classes are represented by negative or full information. Finally, we also classify the complexity of the VC dimension operation itself, which is a problem of independent interest. For positive or full information it turns out to be equivalent to the binary sorting problem; for negative information it is equivalent to the jump of sorting. This classification also allows conclusions regarding the Borel complexity of PAC learnability. (joint work with Guillaume Chirache, École polytechnique, France) More information: https://theory.cca-net.de/seminar.php (prior registration required)
Neural networks have achieved striking empirical success, yet their theoretical foundations remain only partially understood. When equipped with the widely used ReLU activation function, such networks compute continuous piecewise linear (CPWL) functions, which makes it possible to study them using tools from polyhedral geometry, combinatorics, and computational complexity. In this talk, I will outline results and open questions concerning three aspects of ReLU networks: expressivity, verification, and parameterization. I will discuss how architectural choices constrain the class of CPWL functions that can be represented, complexity-theoretic challenges in verifying basic properties of the functions they compute, and how different parameter choices may correspond to the same function.
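A minimal expressivity example (my illustration, not a result from the talk): the identity \(\max(a,b) = b + \mathrm{relu}(a-b)\) shows that a single hidden ReLU unit plus a linear skip term computes the CPWL function \(\max\) exactly, and composing such gadgets computes maxima of more arguments at greater depth; identities of this kind underlie results on which CPWL functions a given architecture can represent.

```python
# A depth-2 ReLU "network" computing max(a, b) exactly via
# max(a, b) = b + relu(a - b), and a deeper composition for four inputs.

def relu(t):
    return max(t, 0.0)

def relu_max(a, b):
    # one hidden ReLU unit on the pre-activation (a - b),
    # plus a linear term b on the output layer
    return b + relu(a - b)

def relu_max4(a, b, c, d):
    # composing the two-input gadget doubles the number of arguments
    # per extra layer of depth
    return relu_max(relu_max(a, b), relu_max(c, d))
```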
Creating physician rosters is a challenging task due to varying shift structures, qualifications, and department- or hospital-specific regulations. These variations mean that department-specific tools often fail to generalize across hospital settings. To address this, we developed a flexible mixed-integer programming (MIP) model capable of representing different roster structures, and we embedded it into an adaptable web application with an advanced graphical user interface (GUI), allowing physicians to specify preferences and hospital staff to configure the MIP model to their roster requirements without any mathematical or technical background.
The practical implementation of such a system is essential for ensuring long-term acceptance in clinical environments. A sustainable solution must be easy to use, accessible, and well integrated into existing IT workflows. This talk presents the implementation process of our physician rostering framework and highlights the key design decisions that support its usability in practice. Particular emphasis is placed on interface elements that must be operated frequently during the rostering process, as well as on features that allow departments to tailor the system to their specific requirements. We conclude by discussing challenges during integration and showing how the application has been successfully deployed in a hospital department.
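As a hypothetical skeleton of such a MIP model (the variable and parameter names are my illustration, not the deployed system's exact formulation): with binary variables \(x_{p,d,s}\) indicating that physician \(p\) works shift \(s\) on day \(d\), demand \(r_{d,s}\), and preference weights \(w_{p,d,s}\),

```latex
\max \;\sum_{p,d,s} w_{p,d,s}\, x_{p,d,s}
\quad \text{s.t.} \quad
\sum_{p} x_{p,d,s} = r_{d,s} \;\;\forall d,s,
\qquad
\sum_{s} x_{p,d,s} \le 1 \;\;\forall p,d,
\qquad
x_{p,d,s} \in \{0,1\}.
```

A production model layers qualifications, rest-time rules, and department-specific regulations on top of this core, which is exactly what the configurable GUI exposes to non-technical staff.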
Model-based simulation approaches for complex physical systems often require the identification of unknown parameters from scarce measurements provided by a finite number of sensors.
In order to maximize the amount of information provided, the optimal placement of measurement sensors based on the a priori solution of mathematical programs has become a widespread paradigm. While this is, naturally, a bi-level problem, standard approaches rely on single-level optimality criteria involving the Hessian of a suitable, linearized least-squares estimator which often explicitly depends on the measurement setup.
On the other hand, variational regularization approaches involving structure-enhancing, complex regularization terms are a cornerstone of modern inverse problem theory.
In this talk, we give a first principled derivation of optimal sensor placement problems for the latter in a particular example: sparse minimization problems over spaces of Radon measures, which are prevalent models in machine learning applications as well as for challenging tasks such as source location problems. Starting from a suitable estimator, we derive meaningful optimality criteria and present numerical as well as analytical results for deconvolution tasks. The talk will, in particular, discuss the challenges in transferring these preliminary results to real-world scenarios and explore first avenues in this direction.
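For context, the classical single-level criteria alluded to above take the following generic form (a standard textbook sketch, not the talk's new derivation): with a linearized forward map \(q \mapsto y_i(q)\) for candidate sensor \(i\) and placement weights \(w\), one maximizes an information functional of the least-squares Hessian, e.g. the D-optimality criterion

```latex
M(w) = \sum_{i=1}^{N} w_i \,\nabla_q y_i(q)\,\nabla_q y_i(q)^{\top},
\qquad
\max_{w \ge 0,\;\mathbf{1}^\top w \le K}\; \log\det M(w),
```

whose explicit dependence on the measurement setup and on the linearization is precisely the limitation the talk seeks to overcome for non-smooth, structure-enhancing regularizers.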