In geometric singular perturbation theory, it is well known that normally hyperbolic critical manifolds perturb to slow manifolds. Typically these slow manifolds are only finitely smooth. In an analytic setting, Gevrey asymptotic expansions can be employed to show the existence of local slow manifolds perturbing not only from normally hyperbolic but also from normally elliptic critical manifolds. Moreover, better local smoothness properties can be achieved. Under the condition that no singularities of the slow flow are present, there exist manifolds admitting a Gevrey-1 asymptotic expansion. When the slow flow does have a singularity, there are cases, including when the singularity is a node or focus, where slow manifolds can be found that are 1-summable in a certain direction.
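For orientation, a minimal schematic in notation of my own choosing: in a slow-fast system
$$
\varepsilon \dot{x} = f(x,y,\varepsilon), \qquad \dot{y} = g(x,y,\varepsilon),
$$
the critical manifold is (a normally hyperbolic or normally elliptic piece of) $\{f(x,y,0)=0\}$, and a slow manifold written as a graph $x = h(y,\varepsilon)$ carries a formal expansion $h \sim \sum_{n\ge 0} h_n(y)\,\varepsilon^n$. The Gevrey-1 property is the growth bound $\sup_y \lVert h_n(y)\rVert \le C R^{\,n}\, n!$ for some constants $C, R > 0$, and 1-summability in a direction means that this generally divergent series can be Borel-resummed to an actual invariant manifold on a corresponding sector of $\varepsilon$ values.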
Semi-discrete optimal transport between a discrete source and a continuous target has intriguing geometric properties and applications in modelling and numerical methods. Unbalanced transport, which allows the comparison of measures with unequal mass, has recently been studied in great detail by various authors. In this talk we consider the combination of both concepts. The tessellation structure of semi-discrete transport survives and there is an interplay between the length scales of the discrete source and unbalanced transport which leads to qualitatively new regimes in the crystallization limit. Based on joint work with David P. Bourne and Benedikt Wirth.
The Gromov-Wasserstein (GW) distance is a generalization of the standard Wasserstein distance between two probability measures on a given ambient metric space. The GW distance allows these two probability measures to live on different ambient spaces and therefore implements an actual comparison of pairs of metric measure spaces. Metric measure spaces are triples $(X, d_X, \mu_X)$, where $(X, d_X)$ is a metric space and $\mu_X$ is a Borel probability measure over $X$; they serve as a model for datasets. In practical applications, this distance is estimated either directly via gradient-based optimization approaches, or through the computation of lower bounds which arise from distributional invariants of metric measure spaces. One particular such invariant is the so-called ‘global distance distribution’ of pairwise distances. This talk will overview the construction of the GW distance and the stability of distribution-based invariants, and will discuss some recent results regarding the injectivity of the global distribution of distances for smooth planar curves, hypersurfaces, and metric trees.
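For concreteness, one common form of the definition (following the usual formulation in the literature; notation mine):
$$
d_{\mathrm{GW},p}\big((X,d_X,\mu_X),(Y,d_Y,\mu_Y)\big) \;=\; \frac{1}{2}\,\inf_{\pi \in \Pi(\mu_X,\mu_Y)} \left( \iint \big|\, d_X(x,x') - d_Y(y,y') \,\big|^{p} \,\mathrm{d}\pi(x,y)\,\mathrm{d}\pi(x',y') \right)^{1/p},
$$
where $\Pi(\mu_X,\mu_Y)$ is the set of couplings of $\mu_X$ and $\mu_Y$. The global distance distribution mentioned above is the law of $d_X(x,x')$ when $x, x'$ are drawn independently from $\mu_X$.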
Equilibrium states and ground states of both classical and quantum systems can often be understood in terms of their underpinning random geometric structures. A case in point is a pair of well-recognized quantum models: 1) spin-S quantum spins with the SU(2S+1)-invariant Affleck Hamiltonian, and 2) S=1/2 spins with the XXZ Hamiltonian and anisotropy delta. Depending on its parameter, each exhibits a quantum transition to a translation-symmetry-broken phase with a pair of ground states. In one case this is manifested in energy oscillations and in the other in Néel order. While the models exhibit different physical characteristics, their ground state functionals are expressible in terms of a common random loop system, one which is associated also with the classical Fortuin-Kasteleyn Q-state random cluster model. As it turns out, these classical and quantum systems are simpler to understand when considered from a cross perspective.
The commitment problem for natural kinds resides in an incompatibility between a substantial ontological commitment to mind-independent natural kinds and the observation that classifications are practice-relative. A novel account of ‘natural kinds as real patterns’ is proposed, which, unlike its realist and naturalist competitors, can resolve the commitment problem. The account combines three main ingredients to resolve the commitment problem. First, the notion of ‘real patterns’ is refined to obtain an ontological strategy that can deliver a commitment to natural kinds. Second, the distinction between ‘research traditions’ and ‘perspectives’ is introduced to extract an attainable notion of mind-independence. Third, the connection between natural kinds and real patterns of high indexical redundancy is rendered practice-relative via a dual commitment to both real patterns (qua relations) and objects (qua relata).
The 2D Griffith energy, coming from the variational model of crack propagation, puts into competition the elastic energy of a deformed body in a cracked domain with the length of the unknown crack. In some respects it looks like a vector-valued version of the standard Mumford-Shah functional, but because the elastic energy controls only the symmetric part of the gradient, the analysis of minimizers for the Griffith functional, even just their existence, is considerably more difficult. In this talk I will review some of the most recent results about this functional, comparing them with the standard Mumford-Shah theory. One of the main motivations for the talk is the partial C^1 regularity result recently obtained in collaboration with J.-F. Babadjian and F. Iurlano (preprint 2019).
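In a standard formulation (notation mine), the competition in question is
$$
G(u,K) \;=\; \int_{\Omega \setminus K} \mathbb{C}\, e(u) : e(u)\,\mathrm{d}x \;+\; \mathcal{H}^{1}(K), \qquad e(u) = \tfrac{1}{2}\big(\nabla u + \nabla u^{\mathsf{T}}\big),
$$
to be minimized over cracks $K \subset \overline{\Omega}$ and deformations $u$ defined off $K$. Compared with the Mumford-Shah energy $\int_{\Omega \setminus K} |\nabla u|^2\,\mathrm{d}x + \mathcal{H}^1(K)$, the elastic term sees only the symmetrized gradient $e(u)$, so skew-symmetric (infinitesimal rigid) motions cost nothing and Korn-type inequalities fail on cracked domains; this is the source of the extra difficulty mentioned above.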
Suppose I believe sincerely and with conviction that today I ought to repay my friend Ann the 10 euros that she lent me. But I do not make any plan for repaying my debt: Instead, I arrange to spend my entire day at the Englischer Garten, sipping Weißbier and enjoying the autumn chill. This seems wrong. Enkrasia is the principle of rationality that rules out the above situation. More specifically, by (an interpretation of) the Enkratic principle, rationality requires that if an agent sincerely and with conviction believes she ought to X, then X-ing is a goal in her plan. This principle plays a central role within the domain of practical rationality, and has recently been receiving considerable attention in practical philosophy (see the seminal Broome 2013). In this presentation, I want to analyze the logical structure of Enkrasia in light of the interpretation just described. To this end, I elaborate on the distinction between so-called “basic oughts" and “derived oughts", and provide a multi-modal neighborhood logic with three characteristic operators: A non-normal operator for “basic oughts”, a non-normal operator for “goals in plans”, and a normal operator for “derived oughts”. Furthermore, special attention will be devoted to the dynamic relation between those notions. I will present a dynamic extension of the logic, and discuss its repercussions for debates surrounding the validity of Enkrasia and the stability of oughts and goals. Based on joint work with Dominik Klein (U. Bamberg & U. Bayreuth).
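As a rough schema (my gloss; the operator names, and the simplification of “sincerely believes she ought to” to a basic ought, are mine, not necessarily the authors' rendering): with $\mathsf{O}_b$ for basic oughts, $\mathsf{G}$ for goals in plans, and $\mathsf{O}_d$ for derived oughts, the Enkratic requirement takes a form along the lines of
$$
\mathsf{O}_b\,\varphi \;\rightarrow\; \mathsf{G}\,\varphi,
$$
where $\mathsf{O}_b$ and $\mathsf{G}$ are interpreted over neighborhood structures, so they are not closed under logical consequence, while $\mathsf{O}_d$ is a normal modality. The dynamic extension then tracks how the extensions of these operators change as oughts and goals are adopted or revised.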
The energy balance of light and matter imposes diffusive behavior in the asymptotic limit of high density. The numerical approximation of this limit is quite delicate, and discretization methods must be designed with some care in order to achieve it. On the other hand, violation of the asymptotic limit by the numerical scheme yields qualitatively wrong approximations even for moderate densities.
We discuss the reasons for breakdown of the standard method and ways to preserve correct asymptotic behavior. In numerical experiments, we show that multilevel domain decomposition solvers work almost out of the box for asymptotic preserving discretizations.
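A prototypical instance of this limit (a standard scaling used here for illustration; not necessarily the speakers' exact model) is radiative transfer in the diffusive regime,
$$
\varepsilon\,\partial_t \psi + v\cdot\nabla_x \psi \;=\; \frac{\sigma}{\varepsilon}\big(\langle\psi\rangle - \psi\big),
$$
which formally tends, as $\varepsilon \to 0$, to the diffusion equation $\partial_t \langle\psi\rangle = \nabla_x \cdot \big(\tfrac{1}{3\sigma}\nabla_x \langle\psi\rangle\big)$, where $\langle\cdot\rangle$ denotes the angular average. A scheme is asymptotic preserving if, at fixed mesh parameters, its $\varepsilon \to 0$ limit is a consistent discretization of this diffusion equation.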
In this talk, we provide a glimpse of the statistical challenges of learning from small sample populations. In particular, we discuss the problem of hypothesis testing for large graphs and demonstrate that a conventional approach can result in ‘unsolvable’ statistical problems. However, when the questions are posed appropriately, one can develop methods with performance guarantees. We present some applications of these methods in testing communication and biological networks.
We examine the almost sure asymptotics of the solution to the stochastic heat equation driven by a Lévy space-time white noise. When a spatial point is fixed and time tends to infinity, we show that the solution develops unusually high peaks over short time intervals, even in the case of additive noise, which leads to a breakdown of an intuitively expected strong law of large numbers. More precisely, if we normalize the solution by an increasing nonnegative function, we either obtain convergence to $0$, or the limit superior and/or inferior will be infinite. A detailed analysis of the jumps further reveals that the strong law of large numbers can be recovered on discrete sequences of time points increasing to infinity. This leads to a necessary and sufficient condition that depends both on the Lévy measure of the noise and on the growth and concentration properties of the sequence.
Structural causal models of event processes imply certain local independencies among the coordinates of the processes. The local independencies form an independence model that can be encoded as a graphical separation model in a directed graph via δ- or μ-separation. If only some of the process coordinates are observed, it is important to understand what can be learned about the causal structure in terms of the local independence model. We recently showed that independence models given by μ-separation in directed mixed graphs are closed under marginalization, and we characterized the Markov equivalence class of a graph. This naturally leads to a causal structure learning algorithm when a local independence oracle is available. We propose a way to replace the oracle by statistical tests of local independence to obtain an empirical learning algorithm. The tests are based on expanding a general intensity process as a Volterra series of iterated integrals.
Volterra processes, which arise as solutions to stochastic Volterra integral equations with irregular coefficients, are described by three intrinsic features: they fail to be semimartingales, they are not Markovian, and they admit paths that are almost surely of lower Hölder regularity than the driving Brownian motion. The last effect in particular, called roughness, is used in mathematical finance to model the volatility of the price process of a risky asset. New lines of research on rough volatility modelling and pricing have evolved, with a focus on affine Volterra processes. Motivated by this range of applications, we seek to provide unique nonextendible solutions to deterministic and stochastic Volterra integral equations whose coefficients are allowed to be path-dependent, to provide growth and error estimates for Picard iterations, and to establish regularity of solutions relative to the initial data.
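A representative form of such an equation (a standard formulation, with notation of my own choosing) is
$$
X_t \;=\; x_0(t) + \int_0^t K(t,s)\, b(s, X_s)\,\mathrm{d}s + \int_0^t K(t,s)\, \sigma(s, X_s)\,\mathrm{d}W_s,
$$
where in the path-dependent case $b$ and $\sigma$ may depend on the whole trajectory of $X$ up to time $s$. With the fractional kernel $K(t,s) = (t-s)^{H - 1/2}$ and $H \in (0, \tfrac12)$, solutions typically have Hölder regularity close to $H$, below the exponent $\tfrac12$ of the driving Brownian motion; this is the roughness exploited in rough volatility models.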
We define a general measure-theoretical setting for the cluster expansion method and study a criterion for its convergence.
It is widely assumed that we have no reliable methods for ranking a candidate hypothesis H against its own negation ¬H, the so-called “catch-all” hypothesis containing all alternatives to H. Here I will resist this view on the basis of case studies drawn from biology. I will argue that scientists routinely operate within exhaustive hypothesis spaces where all relevant alternatives can be assessed in a restricted but non-trivial sense.
This talk will introduce a new logical system in which formulas represent "effects" (e.g. of assertions), such that these "effects" correspond formally to vectors. In a slogan: meaning is a vector. The theory involves a deductive system, a semantics with an appropriate notion of logical consequence, and extensions to inductive logic and belief revision. The resulting system allows for a probabilistic and a neural network interpretation, and it can be applied to reconstruct some well-known problems and puzzles from cognitive psychology and computational linguistics.
David Lewis (and others) have famously argued against Adams's Thesis (that the probability of a conditional is the conditional probability of its consequent, given its antecedent) by proving various "triviality results." In this paper, I argue for two theses -- one negative and one positive. The negative thesis is that the "triviality results" do not support the rejection of Adams's Thesis, because Lewisian "triviality based" arguments against Adams's Thesis rest on an implausibly strong understanding of what it takes for some credal constraint to be a rational requirement (an understanding which Lewis himself later abandoned in other contexts). The positive thesis is that there is a simple (and plausible) way of modeling the epistemic probabilities of conditionals, which (a) obeys Adams's Thesis, and (b) avoids all of the existing triviality results.
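In symbols, Adams's Thesis is the credal constraint
$$
P(A \rightarrow C) \;=\; P(C \mid A) \qquad \text{whenever } P(A) > 0,
$$
and Lewis-style triviality results show that it cannot hold for every probability function in a class closed under conditionalization unless the underlying language is trivial; the dispute described above concerns whether rational requirements must quantify over such a class in the first place.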
I will discuss Banach representations of the general linear group GL(n, C) from its very beginnings to recent advances due to Elias and Williamson. In the end, I plan to briefly explain the bridge to motives and knot theory.
The aim of the talk is to present results on stability of traveling waves for integro-differential equations connected with branching Markov processes. In other words, the limiting law of the left-most particle of a time-continuous branching Markov process with a Lévy non-branching part is shown. In particular, Bramson's correction is obtained. The key idea is to approximate the branching Markov process by a branching random walk and apply the result of Aïdékon on the limiting law of the latter.
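For calibration, in the benchmark case of standard binary branching Brownian motion (a classical setting, not necessarily the talk's exact one), Bramson's correction says that the right centering for the extremal particle is
$$
m(t) \;=\; \sqrt{2}\,t \;-\; \frac{3}{2\sqrt{2}}\,\log t \;+\; O(1),
$$
the logarithmic term being the correction to the linear speed that naive first-moment heuristics miss.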
The modelling of reaction-subdiffusion processes is more subtle than that of normal diffusion and gives rise to different phenomena. The resulting equations feature a spatial Laplacian with a temporal memory term through a time-fractional derivative. It is known that the precise form depends on the interaction of dispersal and reaction, and leads to qualitative differences. We refine these results by defining generalised spectra through dispersion relations, which allows us to examine the stability and onset of instability, and in particular to inspect Turing-type instabilities. Moreover, we show that one class of reaction-subdiffusion equations exhibits algebraic decay for stable spectrum, whereas for another class the decay is exponential.
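One commonly studied representative of the structure just described (my illustration; which class a given model falls into depends on how reaction interacts with the anomalous trapping) is
$$
\partial_t u \;=\; D_t^{\,1-\gamma}\,\Delta u \;+\; f(u), \qquad \gamma \in (0,1),
$$
with $D_t^{1-\gamma}$ the Riemann-Liouville fractional derivative acting as the temporal memory term; derivations in which the reaction acts during the trapping instead entangle $f$ with the memory operator, which is exactly the source of the qualitative differences between classes mentioned above.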
While topological phases of matter have mostly been studied for closed, Hermitian systems, a recent shift has been made towards considering these phases in the context of non-Hermitian Hamiltonians, which form a useful approach to describing dissipation. These Hamiltonians feature many exotic properties that are radically different from their Hermitian counterparts, such as the generic appearance of degeneracies known as exceptional points, the general breakdown of bulk-boundary correspondence, and the piling up of bulk states at the boundaries known as the skin effect. During my talk, I will address the basic properties of non-Hermitian Hamiltonians and show how generic exceptional points may appear in one dimension in the presence of symmetries. In addition, I will show that while the Bloch bands of the periodic Hamiltonian fail to provide useful information for certain non-Hermitian systems with open boundary conditions, such models can be accurately characterized by making use of so-called biorthogonal quantum mechanics, leading to the concept of biorthogonal bulk-boundary correspondence.
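The skin effect is easy to observe numerically. Below is a minimal sketch (my illustration, not the speaker's model) using the Hatano-Nelson chain, the simplest non-Hermitian lattice model with asymmetric hoppings:

```python
import numpy as np

# Hatano-Nelson chain with asymmetric hoppings t_r != t_l under open
# boundary conditions; parameter values are purely illustrative.
N, t_r, t_l = 30, 1.0, 0.5
H = np.diag(np.full(N - 1, t_r), 1) + np.diag(np.full(N - 1, t_l), -1)

evals, evecs = np.linalg.eig(H)

# Open boundaries: H is similar to a Hermitian matrix, so its spectrum is
# real, and every eigenstate is exponentially localized at the same edge
# (the skin effect). With periodic boundaries the spectrum would instead
# trace an ellipse in the complex plane.
weight_left_half = np.sum(np.abs(evecs[: N // 2, :]) ** 2, axis=0)
print("spectrum essentially real?", np.allclose(evals.imag, 0.0, atol=1e-6))
print("mean eigenstate weight on the left half:", weight_left_half.mean())
```

The contrast between the open- and periodic-boundary spectra of this toy model is the simplest manifestation of the failure of Bloch band theory mentioned above.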
Scalar optimization problems with non-smooth PDEs have been researched considerably over recent years. When optimal compromises (i.e., Pareto optimal points) are sought for optimization problems with multiple objectives and non-smooth PDE constraints, only a few results are known. This talk addresses the multiobjective optimal control of a non-smooth semilinear elliptic PDE with max-type nonlinearity. The presentation covers the existence of Pareto optimal points, C- and strong stationarity conditions in the multiobjective setting, as well as corresponding numerical results for examples with up to 3 cost functionals. This is joint work with Constantin Christof (TU Munich).
Why is our language vague? One plausible explanation is that in contexts in which a cooperative speaker is not perfectly informed about the world, the use of vague expressions offers an optimal tradeoff between truthfulness and informativeness. In this paper, this hypothesis is substantiated by examining the meaning of the numerical approximator “around”. We compare the use of “around” with the expression of precise intervals involving “between”, and explain, using a Bayesian model of interpretation, how “around” allows a rational hearer to infer a better probabilistic representation of the uncertain distribution the speaker has in mind, and allows a rational speaker to better communicate the uncertain information he or she has in mind.
We will discuss an infinite-dimensional linear programming (IDLP) problem which, along with its dual, allows one to characterize the limit optimal values of the infinite-time-horizon optimal control (OC) problem with time-discounting and time-averaging criteria. One result we will concentrate on establishes that the Abel and Cesàro limits of the optimal value of the OC problem are bounded from above by the optimal value of the IDLP problem and from below by the optimal value of its dual; this implies, in particular, that the Abel and Cesàro limits exist and are equal if there is no duality gap. We will also discuss IDLP-based sufficient and necessary optimality conditions for the long-run-average optimal control problem, applicable when there is no duality gap. The novelty of our consideration is that it is focused on the general case, in which the limit optimal values may depend on the initial conditions of the system. The talk is based on results obtained in collaboration with V. Borkar and I. Shvartsman.
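In a standard formulation (notation mine), the two families of criteria are, for running cost $g$ and admissible controls $u(\cdot)$,
$$
V_{\mathrm{Ces}} = \lim_{T\to\infty}\,\inf_{u(\cdot)} \frac{1}{T}\int_0^T g\big(x(t),u(t)\big)\,\mathrm{d}t, \qquad
V_{\mathrm{Abel}} = \lim_{\lambda\downarrow 0}\,\inf_{u(\cdot)} \lambda\int_0^\infty e^{-\lambda t}\, g\big(x(t),u(t)\big)\,\mathrm{d}t,
$$
with both limits possibly depending on the initial condition $x(0)$; the result described above sandwiches both values between the IDLP optimal value and that of its dual.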
In recent years, a number of numerical methods for the solution of fractional Laplace and, more generally, fractional diffusion problems have been proposed. The approaches are quite diverse and include, among others, the use of best uniform rational approximations, quadrature for Dunford-Taylor-like integrals, finite element approaches for a localized elliptic extension into a space of increased dimensions, and time stepping methods for a parabolic reformulation of the fractional differential equation. We review these methods and observe that all approaches mentioned above can, in fact, be interpreted as realizing different rational approximations of a univariate function over the spectrum of the original (non-fractional) diffusion operator.
This observation allows us to cast all described methods into a unified theoretical and computational framework, which has a number of benefits. Theoretically, it enables us to give new convergence proofs for several of the studied methods, clarifies similarities and differences between the approaches, suggests how to design new and improved methods, and allows a direct comparison of the relative performance of the various methods. Practically, it provides a single, simple to implement, efficient and fully parallel algorithm for the realization of all studied methods; for instance, this does away with the need for constructing specific multilevel methods for the efficient realization of the extension methods and lets us parallelize the otherwise inherently sequential time stepping approach.
In a detailed numerical study, we compare all investigated methods for various fractional exponents and draw conclusions from the results.
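To make the unifying viewpoint concrete, here is a small self-contained sketch (my illustration, not the paper's algorithm): realizing $A^{-s}b$ as a rational function of the non-fractional operator $A$, with each node of a sinc-type quadrature of the Balakrishnan integral $A^{-s} = \frac{\sin(\pi s)}{\pi}\int_0^\infty t^{-s}(A+tI)^{-1}\,\mathrm{d}t$ contributing one pole, i.e., one independent (hence parallelizable) shifted linear solve:

```python
import numpy as np

n, s = 200, 0.5
h = 1.0 / (n + 1)
# 1D Dirichlet Laplacian: the original (non-fractional) diffusion operator.
A = (np.diag(np.full(n, 2.0)) - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
b = np.ones(n)

# Substitution t = exp(y) turns the Balakrishnan integral into one over the
# real line, amenable to a truncated trapezoidal (sinc-type) rule.
k = 1.0                          # step size; accuracy improves as k decreases
x = np.zeros(n)
for y in k * np.arange(-30, 31):
    t = np.exp(y)
    # weight: (sin(pi*s)/pi) * t^{-s} * t, the extra t being dt = e^y dy
    w = k * np.sin(np.pi * s) / np.pi * t ** (1.0 - s)
    x += w * np.linalg.solve(A + t * np.eye(n), b)   # one pole per node

# Reference value through the spectral decomposition of A.
lam, V = np.linalg.eigh(A)
x_ref = V @ ((V.T @ b) / lam**s)
print("relative error:", np.linalg.norm(x - x_ref) / np.linalg.norm(x_ref))
```

In this reading, the extension, quadrature, and time-stepping methods discussed above differ only in which rational function of $A$ they realize.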
Random constraint satisfaction problems play an important role in computer science and combinatorics. For example, they provide challenging benchmark instances for algorithms and they have been harnessed in probabilistic constructions of combinatorial structures with peculiar features. In an important contribution, physicists made several predictions on the precise location and nature of phase transitions in random constraint satisfaction problems. Specifically, they predicted that their satisfiability thresholds are quite generally preceded by several other thresholds that have a substantial impact both combinatorially and computationally. These include the condensation phase transition, where long-range correlations between variables emerge, and the reconstruction threshold. In this paper we prove these physics predictions for a broad class of random constraint satisfaction problems. Additionally, we obtain contiguity results that have implications on Bayesian inference tasks, a subject that has received a great deal of interest recently.
This is joint work with Amin Coja-Oghlan and Tobias Kapetanopoulos.
We consider the (non-spatial) coalescent model (sometimes called the Marcus-Lushnikov model), starting with $N$ particles with mass one each, where each two particles coalesce after independent exponentially distributed times. The corresponding coagulation kernel is multiplicative in the two masses, hence the coalescent is also called multiplicative. There are strong relations with the time-dependent Erd\H{o}s-R\'enyi graph. We work in the thermodynamic limit $N\to\infty$ at a fixed time $t$ and derive a joint large-deviations principle for all relevant quantities (microscopic, mesoscopic and macroscopic particle sizes) with an explicit rate function. We deduce laws of large numbers and in particular derive from that the well-known phase transition at time $t=1$, the time at which a macroscopic particle appears, as well as the well-known Smoluchowski characterisation of the statistics of the finite-sized particles. (Joint work with Luisa Andreis and Robert Patterson.)
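The $t=1$ transition is easy to observe through the Erd\H{o}s-R\'enyi coupling mentioned above. A small sketch (my illustration; the coupling identifies the particle masses at time $t$ with the component sizes of $G(N, 1-e^{-t/N})$):

```python
import numpy as np

rng = np.random.default_rng(0)

def largest_mass_fraction(N, t):
    """Mass fraction of the largest particle at time t, via G(N, p)."""
    p = 1.0 - np.exp(-t / N)
    parent = np.arange(N)
    def find(i):                     # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(N):               # sample the edges of G(N, p)
        for j in i + 1 + np.nonzero(rng.random(N - i - 1) < p)[0]:
            ri, rj = find(i), find(j)
            if ri != rj:
                parent[ri] = rj
    sizes = np.bincount([find(i) for i in range(N)])
    return sizes.max() / N

for t in (0.5, 1.0, 1.5):
    print(f"t = {t}: largest particle carries "
          f"{largest_mass_fraction(2000, t):.3f} of the mass")
```

Below $t=1$ the largest particle carries a vanishing mass fraction; above it, a positive fraction, the macroscopic particle of the abstract.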
In recent years, the usage of embedded boundary or cut cell meshes has become increasingly popular. They are an alternative to body-fitted meshes, which may be harder to generate and more complex in the bulk of the flowfield. Cut cell methods cut the flow body out of a structured background grid. This creates so-called cut cells along the boundary of the embedded object, which have irregular shape and may become very small. These cells need special treatment. For the solution of hyperbolic conservation laws, a major issue caused by cut cells is the small cell problem: standard explicit time stepping schemes are not stable on the arbitrarily small cut cells if the time step is chosen according to the background mesh and does not respect the size of small cut cells. In this talk we present a new stabilization for overcoming the small cell problem in the context of piecewise linear DG schemes in one and two dimensions for the linear advection equation [1]. Our stabilization is designed to let only a certain portion of the inflow of a small cut cell stay in that cell and to transport the remaining portion directly into the cut cell's outflow neighbors. As a by-product, we reconstruct the proper domain of dependence of the small cut cell's outflow neighbors. In that sense our stabilization relies on similar ideas as the h-box method [2] but without an explicit geometry reconstruction. The approach for realizing these ideas in a DG setting was inspired by the ghost penalty method [3], but significant changes were necessary to adapt the terms, originally developed for elliptic problems, to the setting of hyperbolic equations. Using the proposed stabilization, one can use explicit time stepping even on cut cells. In one dimension the stabilized scheme can be shown to be monotone for piecewise constant polynomials and total variation diminishing in the means for piecewise linear polynomials, in combination with explicit time stepping schemes. We conclude our talk with numerical results in one and two dimensions.
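The small cell problem itself can be demonstrated in a few lines. A toy sketch (mine, not the speakers' scheme): first-order upwind advection on a 1D mesh containing one cell of size alpha*h, with the time step chosen from the background cell size h only:

```python
import numpy as np

h, alpha, a = 0.01, 1e-3, 1.0
sizes = np.full(100, h)
sizes[50] = alpha * h          # the "cut" cell
x = np.cumsum(sizes) - sizes / 2
u = np.exp(-((x - 0.2) ** 2) / 0.002)   # smooth initial pulse

dt = 0.9 * h / a               # CFL based on the background mesh only
for _ in range(40):
    flux = a * np.concatenate(([0.0], u))   # upwind fluxes at interfaces
    u = u - dt * (flux[1:] - flux[:-1]) / sizes

print("max |u| after 40 steps:", np.abs(u).max())   # astronomically large
```

The local amplification factor on the small cell is $1 - a\,\Delta t/(\alpha h) \approx -900$ here, so the violated local CFL condition makes the solution explode; the stabilization described above removes exactly this restriction.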
In this talk I am going to describe the holonomic gradient method for the matrix Fisher model on SO(3). The holonomic gradient method was developed by a group of Japanese scholars to approximate functions which are difficult to evaluate otherwise. This is done by representing the function of interest by a system of partial differential equations, using non-commutative algebra. I am going to describe this method using the example of the Fisher model and show how it can be used to perform maximum likelihood estimation. I will present our results on well-known data sets from directional statistics.
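The mechanics of the method can be shown on a toy relative (my illustration on the circle, i.e., the von Mises model, rather than the matrix Fisher model on SO(3) from the talk): the normalizing constant is $c(\kappa) = 2\pi I_0(\kappa)$, and $I_0$ satisfies a holonomic ODE, so the method evaluates $c$ by integrating that ODE from a base point instead of recomputing an integral at every likelihood evaluation:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import iv

def pfaffian(kappa, y):
    # y = (I_0, I_0'); the holonomic (Pfaffian) system y'' = y - y'/kappa.
    return [y[1], y[0] - y[1] / kappa]

kappa0, kappa1 = 1.0, 5.0
y0 = [iv(0, kappa0), iv(1, kappa0)]   # exact initial data at the base point
sol = solve_ivp(pfaffian, (kappa0, kappa1), y0, rtol=1e-10, atol=1e-12)

print("HGM value of I_0(5):", sol.y[0, -1])
print("reference iv(0, 5): ", iv(0, 5))
```

In the SO(3) case the same pattern applies, with a larger Pfaffian system in several parameters.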
Model predictive control (MPC) is a popular control method, in which a feedback control for a problem on a variably or infinitely long time horizon is computed from the successive numerical solution of optimal control problems on relatively short time horizons. It can thus be seen as a model reduction technique in time for optimal control. Clearly, an optimal control problem must have a certain amount of redundancy for MPC to work properly. In the first part of this talk, we will show that the so-called turnpike property from optimal control provides the desired redundancy.
In the second part we address computational issues when applying MPC to PDEs. Here we exploit a particular feature of MPC, i.e., that typically the optimal control problems are solved on overlapping horizons, implying that only a small portion of the computed optimal control function is actually used. This suggests that an adapted discretization in time and/or space may offer a large benefit for MPC of PDEs. We explain the theoretical justification of this approach based on novel sensitivity results for the optimal control of general evolution equations. Then the efficiency of the proposed method is illustrated by numerical experiments.
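The receding-horizon mechanism itself fits in a few lines. A toy sketch (my scalar example, far from the PDE setting of the talk): at each step, solve a finite-horizon optimal control problem, apply only the first control, and shift the horizon:

```python
import numpy as np
from scipy.optimize import minimize

a_sys, b_sys, N = 1.2, 1.0, 5        # unstable scalar system, short horizon

def ocp_cost(u_seq, x0):
    """Cost of applying the control sequence u_seq starting from x0."""
    x, cost = x0, 0.0
    for u in u_seq:
        cost += x**2 + 0.1 * u**2
        x = a_sys * x + b_sys * u
    return cost + x**2                # terminal penalty

x = 5.0
for _ in range(20):
    res = minimize(ocp_cost, np.zeros(N), args=(x,))
    x = a_sys * x + b_sys * res.x[0]  # apply only the first control
print("state after 20 MPC steps:", x)  # driven close to the origin
```

The overlap between successive horizons is exactly the feature exploited above: most of res.x is discarded at every step, so effort spent resolving the control accurately late in the horizon is wasted unless the discretization is adapted.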
The talk is based on joint work with Roberto Guglielmi (L'Aquila), Matthias Müller (Hannover), Manuel Schaller and Anton Schiela (both Bayreuth), and Marleen Stieler (BASF Ludwigshafen).
Philosophical interpretations of physical theories often appeal to two analytical principles: the notion of “surplus structure” and the “method of symmetry”. A theory has surplus structure if it has mathematical structure that does not reflect physical structure. The method of symmetry picks out features of a theory that do and do not vary under some distinguished transformations of the mathematics. These two tools are often used in tandem, identifying the surplus structure of a theory with that which varies under the distinguished transformations. This leads to inconsistency. I exhibit this inconsistency in a particular disagreement over the interpretation of quantum field theory, and I argue against the usefulness of the notion of surplus structure.
We present a monolithic Newton multigrid solver for the nonlinear systems occurring at every time step in the optimal control of three-dimensional fluid-structure interaction. To compute gradient information, an adjoint equation is solved. The key idea of the presented algorithm is to neglect the derivatives with respect to mesh deformation in the Jacobian of the Newton algorithm. This step allows us to rewrite the Newton equation as three smaller systems. The linear systems are much better conditioned than the full Jacobian, so that a geometric multigrid solver can be applied. To compute the adjoint problem, the Richardson iteration used can be modified in a similar way. Thereby, state and sensitivity information of fluid-structure interaction problems with a large number of degrees of freedom, as in 3D configurations, can be computed. The new solver enables parameter estimation and optimal control for various applications. For example, unknown parameters in the outflow condition can be determined to model blood flow in a vein or artery segment.
In the Tractatus, Wittgenstein held that all meaningful sentences are truth-functions of logically independent elementary parts. Wittgenstein’s remarks from ten years later suggest that this vision cannot accommodate material implications such as color exclusion or spatial asymmetries, and that this lack is a refutation of the Tractatus view. But those later remarks are misleading. Wittgenstein had in his early view the wherewithal to account for such material implications, and almost certainly had worked out a procedure for doing so. Among the tools he used for this was his notion of a “formal series”, a notion that he also used to criticize the Frege-Russell logicist reduction and to supply him with an alternative grounding for arithmetic.