In this talk, we consider an optimal control problem for stochastic reaction-diffusion equations. First we apply the spike variation method, which relies on introducing the first and second order adjoint states. We give a novel characterization of the second order adjoint state as the solution to a backward SPDE on the space L2(Λ) ⊗ L2(Λ) ∼= L2(Λ2). Using this representation, we prove the maximum principle for controlled SPDEs. As another application of our characterization of the second order adjoint state, we derive additional necessary optimality conditions in terms of the value function. These results generalize a classical relationship between the adjoint states and the derivatives of the value function to the case of viscosity differentials. We also show how the necessary conditions lead directly to a non-smooth version of the classical verification theorem in the framework of viscosity solutions. In the last part, we analyze an optimal control problem governed by the stochastic Nagumo model with a view towards efficient numerical approximations. We develop a gradient descent method for the approximation of optimal controls and present numerical examples. This talk is based on the following three joint works with Wilhelm Stannat:
• W. Stannat and L. Wessels, Deterministic control of stochastic reaction-diffusion equations, Evol. Equ. Control Theory 10 (2021), pp. 701–722, https://doi.org/10.3934/eect.2020087.
• W. Stannat and L. Wessels, Peng's maximum principle for stochastic partial differential equations, SIAM J. Control Optim. 59 (2021), pp. 3552–3573, https://doi.org/10.1137/20M1368057.
• W. Stannat and L. Wessels, Necessary and sufficient conditions for optimal control of semilinear stochastic partial differential equations, submitted, https://arxiv.org/abs/2112.09639, 2022.
In this talk, I present a partial exclusion process in random environment, a system of random walks where the random environment is obtained by assigning a maximal occupancy to each site of the Euclidean lattice. This maximal occupancy is allowed to vary randomly among sites, and partial exclusion occurs. Under the assumptions of translation ergodicity and uniform ellipticity of the environment, we prove that the quenched hydrodynamic limit is a heat equation with a homogenized diffusion matrix. The first part of the talk is based on joint work with Frank Redig (TU Delft) and Federico Sau (IST Austria). Finally, I will discuss some recent progress in understanding what happens when the uniform ellipticity assumption is removed. After recalling some results on Bouchaud's trap model, I will show that, when the maximal occupancies are assumed to be heavy-tailed and i.i.d., the hydrodynamic limit is the fractional-kinetics equation. The second part of the talk is based on an ongoing project with Alberto Chiarini (University of Padova) and Frank Redig (TU Delft).
Manifold learning can be used to obtain a low-dimensional representation of the underlying manifold given high-dimensional data. However, kernel density estimates of the low-dimensional embedding with a fixed bandwidth fail to account for the way manifold learning algorithms distort the geometry of the underlying Riemannian manifold. We propose a novel kernel density estimator for any manifold learning embedding by introducing the estimated Riemannian metric of the manifold as the variable bandwidth matrix at each point. The geometric information of the manifold guarantees a more accurate density estimate of the true manifold, which can subsequently be used for anomaly detection. To compare our proposed estimator with a fixed-bandwidth kernel density estimator, we run two simulations: 2-D data mapped into a 3-D Swiss roll or twin-peaks shape, and a 5-D semi-hypersphere mapped into a 100-D space. We demonstrate that the proposed estimator improves the density estimates given a good manifold learning embedding and has higher rank correlations between the true and estimated manifold densities. A Shiny app in R is also developed for various simulation scenarios. The proposed method is applied to density estimation on statistical manifolds of electricity usage with the Irish smart meter data. This demonstrates our estimator's capability to correct the distortion of the manifold geometry and its potential for anomaly detection in high-dimensional data.
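The variable-bandwidth construction described in this abstract can be sketched in a few lines: a Gaussian kernel density estimate in which each sample carries its own bandwidth matrix H_i. This is only a minimal illustration, not the authors' implementation; in the talk the H_i come from the estimated Riemannian metric, whereas here scaled identity matrices are used as placeholders.

```python
import numpy as np

def variable_bandwidth_kde(x, samples, bandwidths):
    """Evaluate f(x) = (1/n) * sum_i N(x; x_i, H_i), a Gaussian KDE
    whose bandwidth matrix H_i may differ from sample to sample."""
    x = np.asarray(x, dtype=float)
    n, d = samples.shape
    total = 0.0
    for xi, Hi in zip(samples, bandwidths):
        diff = x - xi
        quad = diff @ np.linalg.solve(Hi, diff)        # squared Mahalanobis distance
        det = np.linalg.det(Hi)
        total += np.exp(-0.5 * quad) / np.sqrt(((2.0 * np.pi) ** d) * det)
    return total / n

# Toy usage: with identical H_i this reduces to a fixed-bandwidth KDE.
rng = np.random.default_rng(0)
pts = rng.standard_normal((200, 2))
H = [0.25 * np.eye(2) for _ in pts]   # placeholder for the estimated Riemannian metric
density = variable_bandwidth_kde(np.zeros(2), pts, H)
```

In the setting of the talk, each H_i would instead encode how the embedding locally stretches or compresses the manifold, so that the kernel adapts to the distorted geometry.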
I will describe the influence of a nonlocal nonlinear term on the well-known dynamics of the model equation usually ascribed to Swift and Hohenberg (1977). Although a nonlocal term allows a continuous transition between purely local and completely global coupling, there are interesting and perhaps unexpected aspects of the dynamics in intermediate cases.
Intriguingly, it turns out that the analysis of this model problem is closely related to the problem Alan Turing worked on after the publication of his well-known 1952 paper in mathematical biology. Unfinished, unpublished archive material reveals fascinating insights into his attempt to tackle a much more complex mathematical pattern formation problem. I will survey this material and show how it goes far beyond the 1952 paper in both mathematical content and ambition.
We consider Bernoulli percolation on a graph G = (V, E). Interpreting some chosen reference vertex o in V as the origin of an infection, the percolation cluster of o corresponds to the set of all infected vertices. It is very natural to expect that the probability for a vertex v in V to be infected should (in some sense) be decreasing in the distance of v to o. One possible rigorous formulation of this property is the famous bunkbed conjecture, which dates back to the 1980s and still remains wide open. It seems that this kind of spatial monotonicity property of percolation is in general difficult to obtain. Here we present several new results relying on symmetry considerations or a Markov chain approach. Some of these results are joint work with Philipp König.
Autonomous dynamical systems and their attractors have been extensively studied in the mathematical literature. In particular, the persistence of attractors has recently been investigated through the lens of many different resilience indicators, which try to capture the ability of an attractor to endure different kinds of perturbations. These may include variations of the initial condition, or of the differential equation itself. In this work we first develop a rigorous theory of measure driven differential equations, a generalization of classical ODEs in which the Lebesgue measure is replaced by some other signed measure, possibly introducing an impulsive component and thus discontinuities in the solutions. These notions are largely based on the works of Schmaedeke and of Das and Sharma. Then, we extend a resilience indicator, Meyer and McGehee's intensity of attraction, to the nonautonomous measure driven setting, generalizing the relevant results accordingly. In particular, we will see that by controlling the effect of time-dependent bounded perturbations of the differential equation we also gain meaningful information about nonautonomous bounded perturbations of that equation. We stress that the indicator we obtain is very general: it can be applied to any measure driven dynamical system, and it can in particular be employed on autonomous systems in order to impose a greater variety of perturbations. More specifically, thanks to the measure driven component, it is possible to express initial data perturbations in the form of perturbations of the driving force of the differential equation. This allows us to quantitatively compare our measure driven intensity with other autonomous resilience indicators whose perturbations take the form of a variation of the initial condition (possibly repeatedly), namely the distance to threshold and the flow-kick resilience.
We study the asymptotic behaviour of a real-valued diffusion whose non-regular drift is given as the sum of a dissipative term and a bounded measurable one. We prove that two trajectories of this diffusion converge almost surely to one another at an explicit exponential rate as soon as the dissipative coefficient is large enough. A similar result in Lp is obtained.
Affiliation: Institute of Geophysics, National Academy of Sciences of Ukraine, Kyiv; now Friedrich-Schiller-Universität Jena.
The classical (discrete) Minkowski problem asks for necessary and sufficient conditions such that a given set of unit vectors \(a_i\) and positive numbers \(\alpha_i\), \(1\leq i\leq m\), are the facet data of a polytope, i.e., there exists a polytope \(P\) having facets in the directions \(a_i\) of area \(\alpha_i\). This problem was solved by Minkowski and is a cornerstone of classical Brunn-Minkowski theory. The analogous problem in modern convex geometry, within the \(L_p\)-Brunn-Minkowski theory, is known as the \(L_p\)-Minkowski problem. Of particular interest is the limit case \(p=0\) and the associated so-called logarithmic Minkowski problem. Here the problem is to decide when the given data are the cone data of a convex polytope \(P\) containing the origin, i.e., \(\alpha_i\) is the volume of the cone generated by the origin and the facet in direction \(a_i\).
In the talk we survey the state of the art of the logarithmic Minkowski problem.
Rough fractional stochastic models have been used for several decades to model natural phenomena. In mathematical finance, the family of so-called 'rough volatility models', in which the volatility process has rougher sample paths than Brownian motion, has attracted tremendous interest in the past years, due to its ability to reproduce several key features of (i) financial time series [Gatheral-Jaisson-Rosenbaum '18] and (ii) the observed skew of implied volatility [Alos-Leon-Vives '07, Fukasawa '11], as well as (iii) the fact that rough volatility models arise as scaling limits of microstructure models [El Euch-Fukasawa-Rosenbaum '18, Jaisson-Rosenbaum '20].
Since the emergence of Deep Pricing and Hedging, the need for models that reliably reproduce key features (stylised facts) of financial markets has become even more pronounced. Deep generative modelling techniques make it possible to model financial time series in a fully flexible, data-driven way that is not limited to the choice of a stochastic financial model, but where features are instead encoded through the rough path signature of the price path.
In this talk we discuss why realistic stochastic models (such as rough volatility) are of paramount importance for such data-driven deep algorithms, and we also highlight some of the most recent challenges that arise for mathematical finance in this new setting.
The overarching theme of this talk is the application of stochastic geometric and data-driven model reduction methods to dynamical systems. In the first part of the talk I will focus on kinetic plasma theory. I will recast the collisional Vlasov-Maxwell and Vlasov-Poisson equations as systems of coupled stochastic and partial differential equations, and I will derive stochastic variational principles which underlie such reformulations. I will also propose a stochastic particle method for the collisional Vlasov-Maxwell equations and provide a variational characterization of it, which can be used as a basis for a further development of stochastic structure-preserving particle-in-cell integrators.
In the second part of the talk I will discuss data-driven model reduction strategies for stochastic differential equations. I will demonstrate that SVD-based model reduction techniques known for ordinary differential equations, such as the proper orthogonal decomposition, can be extended to stochastic differential equations in order to reduce the computational cost arising from both the high dimension of the considered stochastic system and the large number of independent Monte Carlo runs. I will also extend the proper symplectic decomposition method to stochastic Hamiltonian systems and argue that preserving the underlying symplectic or variational structures results in more accurate and stable solutions that conserve energy better than when the non-geometric approach is used. I will also present the results of my numerical experiments for a semi-discretization of the stochastic nonlinear Schrödinger equation and the Kubo oscillator.
Reaction–diffusion equations (RDEs) are often derived as continuum limits of lattice-based discrete models. Recently, a discrete model which allows the rates of movement, proliferation and death to depend upon whether the agents are isolated has been proposed, and this approach gives rise to RDEs where the diffusion term is convex and can become negative (Johnston et al., 2017), i.e. forward–backward–forward diffusion. Numerical simulations suggest these RDEs support both smooth and shock-fronted travelling waves. In this talk, I will formalise these preliminary numerical observations by analysing the smooth and shock-fronted travelling waves.
In this thesis, we develop a Reed-Frost model based on random intersection graphs. Our interest is the size of the set of ultimately recovered individuals as population size grows to infinity. Several branching processes will be constructed as approximating processes to serve this purpose. Eventually, benefiting from the clique-based structure provided by the random intersection graph, we will discuss the exact distribution of the quantity of interest in both small and large outbreaks.
In this talk, we develop two different approaches for settling questions of existence and uniqueness of spatial birth-and-death processes. The first is the possibility of viewing the birth-and-death process as a jump process. This approach goes back to Preston 1975 and requires a thorough study of the phenomenon of explosion and of couplings of jump processes. Secondly, we present a way of viewing the spatial birth-and-death process as a kind of projection of a Poisson process. This enables the construction and study of processes with more than finitely many births in finite time. This approach goes back to Kurtz 1980, Garcia 1995, 2006, Bezborodov 2019 and others.
We study statistical models of regular Gaussian distributions given by assumptions about the signs of partial correlations. This includes conditional independence models and graphical modeling devices such as Markov and Bayes networks. For these models, we consider the following basic questions: (1) How hard is it (complexity-theoretically) to check if the model specification is inconsistent? (2) If it is consistent, how hard is it (algebraically) to write down a covariance matrix from the model? (3) How badly shaped (homotopy-theoretically) can these models be? For all of these questions the answer is "it is as bad as it could possibly be".
In this work, a linear singularly perturbed Fredholm integro-differential initial-value problem with an integral boundary condition is considered. On a Shishkin-type mesh, a fitted finite difference approach is applied, using a composite trapezoidal rule both in the integral part of the equation and in the initial condition. The proposed technique achieves second-order convergence uniformly with respect to the perturbation parameter. Numerical results are provided to support the theoretical estimates.
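The composite trapezoidal rule on a nonuniform mesh, the basic ingredient mentioned above, can be sketched as follows. This is a generic illustration, not the paper's scheme: the Shishkin-like transition point and the test integrand exp(-x/eps) are assumptions chosen only to show the fine/coarse mesh structure near a boundary layer.

```python
import numpy as np

def composite_trapezoid(f_vals, mesh):
    """Composite trapezoidal rule on an arbitrary (possibly nonuniform) mesh:
    sum over k of 0.5 * (x_{k+1} - x_k) * (f_k + f_{k+1})."""
    h = np.diff(mesh)
    return float(np.sum(0.5 * h * (f_vals[:-1] + f_vals[1:])))

# Illustrative Shishkin-like mesh on [0, 1]: fine near the layer at 0, coarse elsewhere.
eps, N = 1e-2, 32
tau = min(0.5, 2 * eps * np.log(N))                  # transition point (illustrative choice)
mesh = np.concatenate([np.linspace(0.0, tau, N // 2 + 1),
                       np.linspace(tau, 1.0, N // 2 + 1)[1:]])
approx = composite_trapezoid(np.exp(-mesh / eps), mesh)
exact = eps * (1.0 - np.exp(-1.0 / eps))             # exact integral of exp(-x/eps) on [0, 1]
```

Concentrating half of the mesh points in the layer region [0, tau] is what allows the quadrature (and the associated difference scheme) to converge uniformly in the perturbation parameter.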
Link and Passcode: https://tum-conf.zoom.us/j/96536097137 Code 101816
This is a research colloquium about mathematical epidemiology. We aim to discuss epidemiological questions both from a mathematical and from a statistical point of view. A central theme is the exchange of ideas between scientists from different backgrounds.
Detailed program and registration at our webpage https://www.mathematik.uni-muenchen.de/~heyden/EpidemiologyColloq2022.html.
Derived categories have come to play a decisive role in a wide range of topics. Several recent developments, in particular in the context of topological Fukaya categories, arouse the desire to study not just single categories, but rather complexes of categories. In this talk, we will discuss examples of such complexes in algebra, topology, algebraic geometry, and symplectic geometry, along with some results involving them.
Cross-diffusion systems are non-linear parabolic systems with relevant applications in biology and ecology. In this talk, we study the existence of strong solutions for a triangular cross-diffusion system with reaction terms which include the Lotka-Volterra type. The main idea consists in analysing an auxiliary system in non-divergence form which is equivalent to the cross-diffusion system, obtained by introducing a convenient change of variable. Then, we regularize the auxiliary system, prove the existence of strong solutions by a fixed-point theorem, and pass to the limit. Moreover, we also investigate the regularity and the uniqueness of the solution. In particular, we prove that the solution is bounded in L∞((0, T) × Ω), with T > 0 and the space domain Ω ⊂ RN, provided that N ≤ 3, and it is unique if N ≤ 2.
Can we reconstruct a directed acyclic graph having access only to its v-structures, which encode conditional independence between the sites, but without knowing its edge directions? In this talk, we study the probability that such a reconstruction is unique when the directed acyclic graph G is chosen uniformly at random on a fixed number of sites. More generally, we study the size of its Markov equivalence class, containing all graphs that have the same edge set as G when forgetting the edge directions and the same v-structures.
This talk is based on ongoing work with Allan Sly (Princeton University).
In this talk we consider a continuous-time frog model on Z^d. As the discrete-time random walk is a.s. bounded for every fixed time, the original discrete-time frog model grows linearly with time no matter how heavy-tailed the distribution of the number of sleeping frogs per site is. This is no longer the case for the continuous-time model, and we discuss conditions on the initial distribution μ (mu) of number of sleeping particles per site ensuring linear growth, faster than linear growth, or explosion. The proof technique is based on a comparison with certain percolation-type models such as totally asymmetric discrete Boolean percolation or greedy lattice animals. We also discuss how these techniques can be applied to similar stochastic growth models.
Despite steady progress over the last half century, the numerical simulation of fusion plasmas remains a huge challenge for applied mathematicians and computational physicists, mostly due to the complex nonlinear interactions that occur between multiple physical scales. The objective of this talk is to present some promising tools and open projects in this field. In the first part I will describe a novel structure-preserving discrete framework that provides stable, high-order and efficient solvers for Maxwell's equations in complex domains. This framework builds upon the Finite Element Exterior Calculus theory, which preserves at the discrete level the de Rham geometric structure of the exact problems, but also allows greater locality in the computations and greater modularity in the implementation. In the second part I will show how to couple these generic field solvers with particle approximations of the Vlasov equation describing the evolution of a collisionless plasma, while preserving the Hamiltonian structure of the exact system. I will conclude with some open problems that are currently being tackled in the NMPP division of the Max Planck Institute for Plasma Physics.
Copulas are multivariate distribution functions with uniformly distributed margins on the unit interval [0,1]. Despite their simplicity, they are very helpful for modeling the dependence structure of multivariate data in applied science. In the talk, I outline a flexible construction of copulas using graphical models called vines. Recently, the so-called vine copulas have been successfully used to model univariate and multivariate time series. A time series consists of multiple observations indexed by time. Classical time series models allow for only linear dependence between variables and time points. Vine copulas can conveniently capture cross-sectional and temporal dependence of multivariate time series. In the talk, I derive the maximal class of graph structures that guarantee stationarity under a natural and verifiable condition. I also discuss computationally efficient methods for estimation, simulation, and prediction. The theoretical results allow for misspecified models and, even when specialized to the i.i.d. case, go beyond what is available in the literature. The talk is based on joint work with Thomas Nagler and Daniel Krüger.