In this talk, I will present our recent results on periodic traveling wave solutions with large wave speed in general two-component reaction-diffusion systems. More specifically, we apply the invariant manifold theory developed by Fenichel, along with the bifurcation theory of planar dynamical systems, to investigate the existence, period functions, and stability of periodic solutions.
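For orientation, the standard traveling-wave reduction (a generic sketch, not the specific system of the talk) reads: substituting the wave ansatz into a two-component reaction-diffusion system turns the PDE into an ODE system in the moving frame,
\[
\partial_t u = D\,\partial_x^2 u + f(u), \qquad u(x,t) = U(z),\; z = x - ct \quad\Longrightarrow\quad D\,U'' + c\,U' + f(U) = 0,
\]
where $u = (u_1,u_2)$, $D$ is a diagonal diffusion matrix, and $c$ is the wave speed; for large speeds, a quantity such as $1/c$ typically serves as the small parameter in a slow-fast, Fenichel-type analysis.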
After a brief introduction to the Model Risk Assessment literature, this talk will present two recent results in this field. The first part of the talk focuses on risk aggregation problems under partial dependence uncertainty. The main point of our analysis is to show that knowledge of a dependence measure such as the Pearson correlation, Spearman's rho, or the average correlation typically has no effect on the worst-case scenario of the aggregated (Range) Value-at-Risk, compared with the case of full dependence uncertainty. The second part of the talk deals with the robust assessment of a life insurance contract when there is ambiguity regarding the residual lifetime distribution function of the policyholder. Specifically, we show that if the ambiguity set is described by an $L^2$ distance constraint from a benchmark distribution function, then the net premium bounds can be reformulated as a convex linear program that enjoys many desirable properties.
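For reference, the Range Value-at-Risk of a loss $X$ with distribution function $F_X$ is commonly defined, for confidence levels $0 < \alpha < \beta < 1$, by
\[
\mathrm{RVaR}_{\alpha,\beta}(X) \;=\; \frac{1}{\beta-\alpha}\int_{\alpha}^{\beta} \mathrm{VaR}_u(X)\,\mathrm{d}u, \qquad \mathrm{VaR}_u(X) = F_X^{-1}(u),
\]
which interpolates between the Value-at-Risk (as $\beta \downarrow \alpha$) and the Expected Shortfall (as $\beta \uparrow 1$).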
In this talk we consider continuous time random walks on $\mathbb{Z}^d$ among random conductances that permit jumps of arbitrary length, where the law of the conductances is assumed to be stationary and ergodic. Under a suitable moment condition we obtain a quenched local limit theorem and Hölder regularity estimates for solutions of the heat equation for the associated non-local discrete operator. Our results apply to random walks on long-range percolation graphs with connectivity exponents larger than 2d when all nearest-neighbour edges are present. This talk is based on a joint work with Martin Slowik (Mannheim).
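The non-local discrete operator in question has the generic form (a standard formulation for random walks among conductances $\omega$, stated here for orientation):
\[
(\mathcal{L}^{\omega} f)(x) \;=\; \sum_{y \in \mathbb{Z}^d} \omega(x,y)\,\bigl(f(y) - f(x)\bigr), \qquad x \in \mathbb{Z}^d,
\]
where the sum runs over all of $\mathbb{Z}^d$ because jumps of arbitrary length are permitted.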
Causal discovery procedures are popular methods for discovering causal structure across the physical, biological, and social sciences. However, most procedures for causal discovery only output a single estimated causal model or a single equivalence class of models. In this work, we propose a procedure for quantifying uncertainty in causal discovery. Specifically, we consider structural equation models where a unique graph can be identified and propose a procedure which returns a confidence set of causal orderings that are not ruled out by the data. We show that, asymptotically, a true causal ordering will be contained in the returned set with a user-specified probability. In addition, the confidence set can be used to form conservative sets of ancestral relationships.
Variational problems involving the nonlocal perimeter have been studied for a decade. In particular, minimizing the nonlocal perimeter under constraints has attracted many authors because of its applications to other fields, such as Lévy processes and phase-transition problems. In this talk, we will focus on the shape of sets that minimize the nonlocal perimeter and see how different those sets are from the ones that minimize the classical perimeter. We will survey what is known so far and present our recent example of sets minimizing the nonlocal perimeter, which can behave quite differently from the classical ones. This talk is partially based on joint work with Serena Dipierro and Enrico Valdinoci from the University of Western Australia.
Mathematical concepts and objects, as objects of thought, are by their abstract nature not directly accessible to perception. They can only be made accessible through their examination in various representations, such as graphs, tables, formulas, and so on. A fundamental understanding of a mathematical concept requires knowledge of its many facets, which builds on a multi-perspective depiction in multiple representations. A central competence of learners, as formulated in the educational standards, is the flexible handling of these forms of representation. This includes the abilities to decode different sign systems, to integrate representations, and to produce or translate between them. These processes of switching between and linking representations are important in all mathematical activities. Empirical research on the processing of multiple representations has so far mostly addressed heterogeneous representation formats (e.g., picture and text), whereas little is known about the interplay of homogeneous representations (e.g., text and formula). A study investigated precisely this aspect, using elementary tasks from propositional logic as an example. The talk presents selected aspects of the theory and the empirical findings and discusses them using practical classroom examples.
The large deviation behavior of lacunary sums (Michael Juhos, Universität Passau)

We study the large deviation behavior of lacunary sums $(S_n/n)_{n\in\mathbb{N}}$ with $S_n := \sum_{k=1}^{n} f(a_k U)$, $n \in \mathbb{N}$, where $U$ is uniformly distributed on $[0,1]$, $(a_k)_{k\in\mathbb{N}}$ is a Hadamard gap sequence, and $f\colon \mathbb{R} \to \mathbb{R}$ is a $1$-periodic, (Lipschitz-)continuous mapping. In the case of large gaps, we show that the normalized partial sums satisfy a large deviation principle at speed $n$ and with a good rate function which is the same as in the case of independent and identically distributed random variables $U_k$, $k \in \mathbb{N}$, having uniform distribution on $[0,1]$. When the lacunary sequence $(a_k)_{k\in\mathbb{N}}$ is a geometric progression, then we also obtain large deviation principles at speed $n$, but with a good rate function that is different from the independent case, its form depending in a subtle way on the interplay between the function $f$ and the arithmetic properties of the gap sequence. Our work generalizes some results recently obtained by Aistleitner, Gantert, Kabluchko, Prochno, and Ramanan [Large deviation principles for lacunary sums, 2023], who initiated this line of research for the case of lacunary trigonometric sums.
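In the large-gap regime, the rate function thus coincides with the classical Cramér rate function of the i.i.d. benchmark; stated here for orientation,
\[
I(x) \;=\; \sup_{t\in\mathbb{R}} \bigl\{ tx - \Lambda(t) \bigr\}, \qquad \Lambda(t) \;=\; \log \int_0^1 e^{t f(u)}\,\mathrm{d}u,
\]
i.e. the Legendre transform of the logarithmic moment generating function of $f(U)$.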
We present a Markov-chain analysis of blockwise-stochastic algorithms for solving partially block-separable optimization problems. Our main contributions to the extensive literature on these methods are statements about the Markov operators and distributions behind the iterates of stochastic algorithms, in particular the regularity of the Markov operators and rates of convergence of the distributions of the corresponding Markov chains. This provides a detailed characterization of the moments of the sequences beyond just the expected behavior. It also serves as a case study of how randomization restores favorable properties to algorithms that iterating with only partial information destroys. We demonstrate this on stochastic blockwise implementations of the forward-backward and Douglas-Rachford algorithms for nonconvex (and, as a special case, convex), nonsmooth optimization.
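As a concrete illustration, here is a minimal sketch (a toy example of the general technique, not the authors' implementation) of a blockwise-stochastic forward-backward iteration for the composite problem $\min_x \tfrac12\|Ax-b\|^2 + \lambda\|x\|_1$; the randomly chosen block index is exactly what makes the iterate sequence a Markov chain.

```python
import numpy as np

# Toy blockwise-stochastic forward-backward iteration (illustrative sketch):
# minimize f(x) + g(x) with f(x) = 0.5*||Ax - b||^2 (smooth) and
# g(x) = lam*||x||_1 (nonsmooth, separable), updating one random block per step.
rng = np.random.default_rng(0)
m, n, nblocks = 40, 20, 4
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
lam = 0.1
step = 1.0 / np.linalg.norm(A, 2) ** 2       # 1/L, L = Lipschitz constant of grad f
blocks = np.array_split(np.arange(n), nblocks)

def soft_threshold(v, t):
    """Proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

x = np.zeros(n)
for k in range(5000):
    i = rng.integers(nblocks)                # random block: the Markovian randomness
    idx = blocks[i]
    grad = A.T @ (A @ x - b)                 # gradient of f at the current iterate
    x[idx] = soft_threshold(x[idx] - step * grad[idx], step * lam)  # prox step on block i
print("objective:", 0.5 * np.linalg.norm(A @ x - b) ** 2 + lam * np.abs(x).sum())
```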
Ice sheets and glaciers contribute to sea level rise as they lose mass due to enhanced climate warming. We focus on one of the largest ice sheets, the Greenland ice sheet. Passing a critical tipping point through the positive melt-elevation feedback would trigger an irreversible process, with the Greenland ice sheet no longer growing and hence raising the sea level by more than 7 meters.
Thus, the goal is to analyze the impact of parameter uncertainties on a nonlinear, one-dimensional Greenland ice sheet model. To this end, probabilistic and sensitivity analyses are applied to the model. Further, an approach for analyzing the sensitivity of the equilibrium branches of a bifurcation is presented, to gain more understanding of the model and to enhance its quality.
In this talk we describe a probabilistic methodology to derive the precise asymptotics for the probability of observing a maximal component containing more than $n^{2/3}$ vertices in the (near-)critical Erdős–Rényi random graph. Our approach is mostly based on ballot-type estimates for one-dimensional, integer-valued random walks, and improves upon the martingale-based method introduced by Nachmias and Peres in 2009. We also briefly discuss how our method has been adapted to study the same type of problem for (near-)critical percolation on a random $d$-regular graph, as well as possible future developments.
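To see the $n^{2/3}$ scale numerically, the following small experiment (an illustrative sketch, not part of the talk) samples a graph in the critical window and compares its largest component to $n^{2/3}$.

```python
import networkx as nx

# Sample G(n, p) inside the critical window p = 1/n + t*n^(-4/3) and
# compare the largest component size to the n^(2/3) scale.
n, t = 10_000, 1.0
p = 1 / n + t * n ** (-4 / 3)
G = nx.gnp_random_graph(n, p, seed=42)
largest = max(nx.connected_components(G), key=len)
print(f"largest component: {len(largest)}, n^(2/3) = {n ** (2 / 3):.0f}")
```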
The classical Brunn-Minkowski inequality in the $n$-dimensional Euclidean space asserts that the volume (Lebesgue measure) to the power $1/n$ is a concave functional when dealing with convex bodies (non-empty compact convex sets). It quickly yields, among other results, the classical isoperimetric inequality, which can be summarized by saying that the Euclidean balls minimize the surface area measure (Minkowski content) among those convex bodies with prescribed positive volume. Moreover, it implies the so-called Rogers-Shephard inequality, which provides a sharp upper bound for the volume of the difference set in terms of the volume of the original convex body.
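In symbols, the Brunn-Minkowski inequality states that for non-empty compact sets $A, B \subset \mathbb{R}^n$ (in particular for convex bodies),
\[
\operatorname{vol}(A + B)^{1/n} \;\geq\; \operatorname{vol}(A)^{1/n} + \operatorname{vol}(B)^{1/n},
\]
where $A + B = \{a + b : a \in A,\ b \in B\}$ denotes the Minkowski sum.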
There exist various facets of the previous results, due to their different versions, generalizations, and extensions. In this talk, after recalling the above classical inequalities for the volume, we will discuss and show certain discrete analogues of them for the lattice point enumerator, which gives the number of integer points of a bounded set. Moreover, we will show that these new discrete inequalities imply the corresponding classical results for convex bodies.
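As one indication of the flavour of these results (stated here schematically; the precise hypotheses are part of the talk), a discrete Brunn-Minkowski-type inequality for the lattice point enumerator $G_n(M) = \#(M \cap \mathbb{Z}^n)$ takes roughly the form
\[
G_n\bigl(K + L + (-1,1)^n\bigr)^{1/n} \;\geq\; G_n(K)^{1/n} + G_n(L)^{1/n},
\]
where the extra cube $(-1,1)^n$ compensates for the rigidity of the integer lattice; a suitable limiting procedure then recovers the classical inequality for convex bodies.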
This talk is based on joint works with David Alonso-Gutiérrez (Universidad de Zaragoza), David Iglesias (Universidad de Murcia), Eduardo Lucas (Universidad de Murcia), and Artem Zvavitch (Kent State University).
Classical machine learning (ML) provides a potentially powerful approach to solving challenging quantum many-body problems in physics and chemistry. However, the advantages of ML over traditional methods have not been firmly established. In this work, we prove that classical ML algorithms can efficiently predict ground-state properties of gapped Hamiltonians after learning from other Hamiltonians in the same quantum phase of matter. By contrast, under a widely accepted conjecture, classical algorithms that do not learn from data cannot achieve the same guarantee.
Our proof technique combines mathematical signal processing with quantum many-body physics and also builds upon the recently developed framework of classical shadows. I will try to convey the main proof ingredients and also present numerical experiments that address the anti-ferromagnetic Heisenberg model and Rydberg atom systems.
This is joint work with Hsin-Yuan (Robert) Huang, Giacomo Torlai, Victor Albert and John Preskill; see [Huang et al., Provably efficient machine learning for quantum many-body problems, Science 2022].
We review some recent results on the Euler system describing the motion of a perfect (meaning inviscid) compressible fluid. The main topics include:
1. Existence and density of ``wild'' initial data giving rise to infinitely many solutions
2. Solutions with anomalous (discontinuous) energy profiles
3. Violation of determinism in the class of weak solutions
4. Possibilities of how to restore order in chaos
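For reference, in its isentropic form the compressible Euler system for the density $\varrho$ and velocity $u$ reads
\[
\partial_t \varrho + \operatorname{div}_x(\varrho u) = 0, \qquad
\partial_t(\varrho u) + \operatorname{div}_x(\varrho u \otimes u) + \nabla_x p(\varrho) = 0,
\]
with a given pressure law $p(\varrho)$; the topics above concern weak (distributional) solutions of this and related systems.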
The Weight-Dependent Random Connection model combines long-range percolation with scale-free network models. The talk focuses on the "weak decay regime", where connection probability tails are heavy enough to circumvent many geometrical difficulties that arise in short-range percolation models in low dimensions. I will summarise known sufficient conditions for existence and transience of an infinite component and discuss a new local existence theorem which improves upon a result of Berger (2002) and which implies the most general sufficient condition for transience hitherto known, as well as the continuity of the percolation function.
In this talk, I will discuss reachable sets in planar affine control systems. The special structure of these systems can be leveraged to construct maximal subsets of the state space on which we have exact controllability. The results find application in recent models of cancer, where the treatment enters as a bounded, time-varying input. Of particular interest here are so-called eco-evolutionary models.
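The systems in question have the generic planar affine form (a standard formulation, with the control bounds as an illustrative normalization)
\[
\dot{x} \;=\; f(x) + u\,g(x), \qquad x \in \mathbb{R}^2,\quad u(t) \in [u_{\min}, u_{\max}],
\]
where $f$ and $g$ are smooth vector fields and the control $u$ enters affinely, e.g. as the bounded, time-varying treatment in the cancer models mentioned above.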
There are strong indications from physics at both infinitesimal and cosmic distances that our current understanding of the laws of nature is only approximate and must be replaced by deeper principles. Novel geometric objects, in the setting of amplituhedra and Feynman integrals, hint at new mathematical structures. Combinatorics and algebraic geometry have been connected to particle physics and cosmology in an entirely unexpected way. This seminar, organised by Bernd Sturmfels, is an invitation to join the discussion.

Program
16:00-16:10 Bernd Sturmfels (MPI-MiS Leipzig): "A Lightning Introduction"
16:15-16:35 Johannes Henn (MPI Physics, München): "Scattering Amplitudes"
16:40-17:00 Lizzie Pratt (UC Berkeley): "Integrals and their D-Modules"
BREAK
17:15-17:35 Simon Telen (CWI Amsterdam): "Amplituhedra of Lines in 3-Space"
17:40-18:00 Hofie Hannesdottir (IAS Princeton): "Infrared Divergences"
For further information please visit: https://www.cit.tum.de/cit/aktuelles/article/workshop-positive-geometry/
We consider a Poisson point process on \(\mathbb{R}^d\) with intensity \(\lambda\), for \(d\ge2\). On each point, we independently center a ball whose radius is distributed according to some power-law distribution \(\mu\). When the distribution \(\mu\) has a finite \(d\)-th moment, there exists a non-trivial phase transition in \(\lambda\) associated with the existence of an infinite connected component of balls. We aim here to prove subcritical sharpness, that is, that the subcritical regime is well behaved in a suitable sense. For distributions \(\mu\) with a finite \((5d-3)\)-th moment, Duminil-Copin, Raoufi, and Tassion proved subcritical sharpness using randomized algorithms. We prove here, using different methods, that the subcritical regime is sharp for all but a countable number of power-law distributions. Joint work with Vincent Tassion.
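In the standard notation of the Poisson Boolean model, the occupied set is
\[
\mathcal{O} \;=\; \bigcup_{x \in \eta} B(x, r_x),
\]
where \(\eta\) is the Poisson point process of intensity \(\lambda\) and the radii \(r_x \sim \mu\) are i.i.d.; the moment condition above reads \(\int_0^\infty r^d\,\mu(\mathrm{d}r) < \infty\), which ensures the occupied set does not almost surely cover all of \(\mathbb{R}^d\).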
Random Forests (RFs) are at the cutting edge of supervised machine learning in terms of prediction performance, especially in genomics. Iterative RFs (iRFs) use a tree ensemble from iteratively modified RFs to obtain predictive and stable nonlinear or Boolean interactions of features. They have shown great promise for Boolean biological interaction discovery that is central to advancing functional genomics and precision medicine. However, theoretical studies into how tree-based methods discover Boolean feature interactions are missing. Inspired by the thresholding behavior in many biological processes, we first introduce a discontinuous nonlinear regression model, called the “Locally Spiky Sparse” (LSS) model. Specifically, the LSS model assumes that the regression function is a linear combination of piecewise constant Boolean interaction terms. Given an RF tree ensemble, we define a quantity called “Depth-Weighted Prevalence” (DWP) for a set of signed features S. Intuitively speaking, DWP(S) measures how frequently features in S appear together in an RF tree ensemble. We prove that, with high probability, DWP(S) attains a universal upper bound that does not involve any model coefficients if and only if S corresponds to a union of Boolean interactions under the LSS model. Consequently, we show that a theoretically tractable version of the iRF procedure, called LSSFind, yields consistent interaction discovery under the LSS model as the sample size goes to infinity. Finally, simulation results show that LSSFind recovers the interactions under the LSS model, even when some assumptions are violated.
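Schematically (a paraphrase of the model class, with notation not taken from the abstract), the LSS model posits a regression function of the form
\[
\mathbb{E}[y \mid x] \;=\; \sum_{k=1}^{K} \beta_k \prod_{j \in S_k} \mathbf{1}\{x_j > \tau_j\},
\]
a sparse sum of Boolean interaction terms, each a product of threshold indicators (where an indicator may also appear in the complemented form \(\mathbf{1}\{x_j \le \tau_j\}\), the "sign" of a feature) and hence piecewise constant and locally spiky.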
Reference: https://www.pnas.org/doi/10.1073/pnas.2118636119
Co-authors: Yu Wang, Xiao Li, and Bin Yu (UC Berkeley)
We study the homogenisation problem for elliptic high-contrast operators $A_\varepsilon$ whose coefficients degenerate as $\varepsilon$ goes to $0$ on a set of randomly distributed inclusions. We discuss the limit operator (in the sense of resolvent convergence) and the convergence of spectra. On a bounded domain the limiting spectrum is equal to the spectrum of the limit operator, while in the whole-space setting the spectrum of the limit operator is a subset of the limiting spectrum. Additionally, we characterize the limiting spectrum in the case of finite correlation. This is a joint work with Matteo Capoferri (University of Cardiff), Mikhail Cherdantsev (University of Cardiff) and Kirill Cherednichenko (University of Bath).
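A prototypical example (the classical double-porosity scaling, given here for orientation rather than as the exact setting of the talk) is
\[
A_\varepsilon \;=\; -\operatorname{div}\bigl(a_\varepsilon(x)\nabla\bigr), \qquad
a_\varepsilon(x) \;=\;
\begin{cases}
\varepsilon^2 & \text{on the inclusions},\\
1 & \text{elsewhere},
\end{cases}
\]
so that the contrast between the two phases blows up as \(\varepsilon \to 0\) and the limiting spectrum may develop a band-gap structure.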
I will describe the polynomial of the title through the eyes of a differential geometer. The story involves 4-manifolds, Einstein metrics, the Calabi conjecture and complex hyperbolic geometry. All of these concepts will be introduced in the talk; the only prerequisites are the definition of a polynomial and the implicit function theorem.
The aim of this talk is to introduce a class of cross-diffusion systems involving Cahn-Hilliard terms. This class arises in modelling mixtures composed of several species that interact with one another via cross-diffusion effects and also have the tendency to separate from each other. In the case under consideration, only one species (the one that accounts for the void) separates from the others. The interest in such a model stems from the fact that in many real-world applications there exist multiphase systems where miscible entities may coexist in one single phase of the system. After introducing the model, we will present some results on the existence of weak and stationary solutions, and then conclude with an outlook on some future developments.
The theory of graph limits considers the convergence of sequences of graphs with a diverging number of vertices. From an applied perspective, it aims to represent very large networks conveniently. Until recently, however, particular cases for graph limits have been investigated separately, while hypergraph limits are even less well-developed. In this talk I will give a brief introduction to action convergence, a recent unified approach to graph limits based on functional analysis and measure theory. Moreover, I will present some work in progress on the extension of action convergence to hypergraphs.
The aim of this presentation is to briefly introduce generalized (sectional) curvature, and to see which kinds of relations between data points it evaluates and what kind of information is revealed through this quantity. While in topological data analysis the objective is to extract qualitative features, the shape of data, geometric data analysis mainly deals with quantitative features of data. For instance, the prominent scheme of manifold learning is applied to find the comparatively low-dimensional Riemannian manifold on which the data set fits best. This raises the question of whether one can anticipate some geometric properties from the initial model before finding this manifold structure. The most important quantitative measures that to a large extent reveal the geometry of a Riemannian manifold are its (sectional) curvatures. Therefore, we wish to see how one can determine the curvature of data and how it helps to derive the salient structural features of a data set.
Despite being very different in nature, martingales and rough paths have many similarities and their interplay is most fruitful. As a concrete example, I will introduce the recent notion of rough stochastic differential equations and explain its importance in filtering, pathwise control theory and option pricing under (possibly rough) stochastic volatility. (Joint work with numerous people, including Pavel Zorin-Kranich, Khoa Lê, Antoine Hocquet, Peter Bank, Christian Bayer and Luca Pelizzari.)
When considering non-symmetric gauges, there are several ways to define the diameter of a convex body. These correspond to different symmetrizations of the gauge, i.e., means of the gauges $C$ and $-C$. We study inequalities involving the inradius, circumradius and diameter, and present examples and results confirming that not only does studying the symmetrizations help to understand the diameters better, but also the other way around.
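Two standard such symmetrizations (illustrative examples; the talk considers a broader family of means) are the arithmetic mean and the intersection,
\[
\tfrac{1}{2}\bigl(C + (-C)\bigr) \qquad\text{and}\qquad C \cap (-C),
\]
each of which is a symmetric convex body and induces its own notion of diameter for $C$.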
In "Ellipsoids of maximal volume in convex bodies", Keith Ball proved a general bound on the volume of $k$-dimensional ellipsoids in $n$-dimensional convex bodies in relation to their John ellipsoid. A stronger bound is known in the symmetric case. Our goal was to connect these results by establishing a bound depending on the John asymmetry $s_0$. We prove a tight bound for all $k$ and all asymmetry values $s_0$ not in $(1, 1+2/n)$, and characterize the equality cases.