A mixed-mode oscillation (MMO) is a complex waveform with a pattern of alternating small- and large-amplitude oscillations. MMOs have been observed experimentally in many physical and biological applications, and most notably in chemical reactions. We are mainly interested in MMOs that appear in dynamical systems with different time scales. In particular, we consider an autocatalytic model with an explicit time-scale separation parameter. The mathematical analysis of MMOs is very geometric in nature and based on singular limits of the time-scale ratios. Near the singular limit one finds so-called slow manifolds that guide the dynamics on the slow time scale. In the considered autocatalator model, slow manifolds are surfaces that can be either attracting or repelling. Transversal intersections between attracting and repelling slow manifolds are called canard orbits. Our aim is to study a parameter regime where the time-scale ratio is relatively large. We use continuation methods based on two-point boundary value problems to investigate the underlying complex dynamics of the autocatalator in such a parameter regime. By employing these methods, we observe unexpected phenomena such as twin canard orbits and ribbons of the attracting slow manifold.
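The slow-fast mechanism behind such dynamics can be illustrated with a generic relaxation oscillator. The following sketch is not the autocatalator from the abstract: it uses the van der Pol system in Liénard form as a stand-in, with an explicit time-scale separation parameter eps and illustrative parameter values. The cubic nullcline plays the role of the slow manifold, with attracting and repelling branches.

```python
# Generic slow-fast illustration (not the autocatalator): the van der
# Pol oscillator in Lienard form, eps*x' = y - (x**3/3 - x), y' = -x.
# The cubic y = x**3/3 - x is the critical manifold; its outer branches
# are attracting, the middle branch is repelling.
def rk4_step(f, state, dt):
    """One classical fourth-order Runge-Kutta step for state' = f(state)."""
    k1 = f(state)
    k2 = f([s + 0.5 * dt * k for s, k in zip(state, k1)])
    k3 = f([s + 0.5 * dt * k for s, k in zip(state, k2)])
    k4 = f([s + dt * k for s, k in zip(state, k3)])
    return [s + dt / 6.0 * (a + 2 * b + 2 * c + d)
            for s, a, b, c, d in zip(state, k1, k2, k3, k4)]

def slow_fast(eps):
    """Vector field with time-scale separation parameter eps."""
    def f(state):
        x, y = state
        return [(y - (x ** 3 / 3.0 - x)) / eps, -x]
    return f

def trajectory(eps=0.05, dt=1e-3, steps=40000, x0=0.1, y0=0.0):
    """Integrate from (x0, y0) and record the fast variable x."""
    f = slow_fast(eps)
    state = [x0, y0]
    xs = []
    for _ in range(steps):
        state = rk4_step(f, state, dt)
        xs.append(state[0])
    return xs

xs = trajectory()
# The orbit settles on a relaxation cycle: slow drift along the
# attracting branches of the cubic, fast jumps between them.
```

In the singular limit eps → 0, the drift segments collapse onto the critical manifold; the large-amplitude loops of an MMO arise from exactly this kind of jump mechanism, with the small oscillations generated near folds or canard points.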
TBA
We discuss the parameter choice in learning algorithms generated by a general regularization scheme. In contrast to classical deterministic regularization, the performance of regularized learning algorithms is influenced not only by the smoothness of the target function, but also by the capacity of the regularization space. In supervised learning both the smoothness and the capacity are intrinsically unknown. Therefore, we are interested in a posteriori regularization parameter choice rules and propose a new form of the balancing principle. We provide an analysis of the proposed rule and demonstrate its advantages in simulations. Joint research with Peter Mathe (WIAS-Berlin) and Shuai Lu (Fudan University, Shanghai).
https://www.statistik.lmu.de/~abender/Kolloquium/abstracts/ws1617/geppert.pdf
In the talk I will present the behavior of a cascade measure near a typical point, i.e. the measure of a small ball around a point sampled according to this measure. I will also describe the connections with Liouville measures and random planar maps.
We consider a phase field model for dislocations introduced by Koslowski, Cuitino, and Ortiz in 2002. The model describes a single slip plane and consists of a Peierls potential penalizing non-integer slip and a long range interaction modeling elasticity. Forest dislocations are introduced as a restriction to the allowable phase field functions: they have to vanish at the union of a number of small disks in the plane. Garroni and Müller proved large scale limits of these models in terms of Gamma-convergence, obtaining a line-tension energy for the dislocations and a bulk term penalizing slip. This bulk term is a capacity stemming from the forest dislocations.
In the present work, we show that the contribution of the forest dislocations to the viscous gradient flow evolution is small. In particular, it is much slower than the time scale of other effects such as the elastic attraction/repulsion of dislocations, which, by a recent result due to del Mar Gonzales and Monneau, is already slower than the time scale set by the line-tension energy. Overall, this leads to an effective behavior like a gradient flow in a wiggly potential. On the other hand, when adding a driving force in the direction of increasing slip, one needs to spend energy to overcome the obstacles. The forest dislocations thus act like a dissipation for increasing slip, but their effect on the propagation is absent for decreasing slip.
Motives were introduced by Alexander Grothendieck in the 1960s. They play a central role in understanding the cohomology of schemes, and hence in algebraic geometry itself. They have since become a fundamental tool in algebraic geometry. Moreover, the theory of motives has had a broad impact on algebraic number theory and on representation theory. In my talk I will introduce motives and present some applications of motives to classical problems in algebra. In particular, I will use motives to answer a question of Jean-Pierre Serre on the conditions under which certain finite groups can be embedded into the algebraic group of type $\mathrm{E}_8$.
Dislocations are topological line defects found in crystals, and their motion governs the plastic behaviour of such materials. Due to the long-range stress fields they induce, their collective behaviour is complex, so understanding it, and therefore obtaining improved predictive models of plasticity, remains a major challenge in Materials Science. The first part of this talk presents a series of results concerning a simple lattice model in which it can be shown that deformations containing dislocations exist as globally and locally stable equilibria. In the second part of the talk, we build on these results to construct a thermodynamic Markovian model for dislocation motion. In a certain low-temperature regime, we show that this model satisfies a Large Deviations Principle, and that the most probable trajectories of the system correspond to solutions of Discrete Dislocation Dynamics with an explicit non-quadratic, structure-dependent mobility.
In the theory of autonomous ordinary differential equations we often have parameter-dependent systems and are interested in bifurcations such as pitchfork or Hopf bifurcations. In order to study such systems near a stationary point, one can compute Poincaré-Dulac normal forms and reduce the system using invariants of the linearization. In practice, however, these computations turn out to be infeasible in general. We provide a new approach and a new algorithm to study bifurcations without explicitly computing normal forms.
In this talk we present an existence result for Lévy-type processes. Lévy-type processes behave locally like a Lévy process, but the Lévy triplet may depend on the current position of the process. They can be characterized by their so-called symbol; this is the analogue of the characteristic exponent in the Lévy case. Using a parametrix construction, we prove the existence of Lévy-type processes with a given symbol under weak assumptions on the regularity (with respect to the space variable) of the symbol. We derive heat kernel estimates for the transition density as well as its time derivative, and prove the well-posedness of the corresponding martingale problem. Our result gives, in particular, existence results for stable-like, relativistic stable-like and normal tempered stable-like processes. Moreover, in dimension d=1, we obtain existence and uniqueness results for solutions of Lévy-driven SDEs with Hölder continuous coefficients.
In this talk I will review some basic results in the modeling and control of multi-agent systems. The prototype problem will account for a large system of interacting agents whose dynamics are influenced by a policy maker acting in order to enforce a desired behavior. Different examples will be shown, from opinion-formation processes to crowd-safety management. From the mathematical viewpoint, this situation can be described by means of a mean-field optimal control problem governing the dynamics of the probability distribution of the agent population. In order to deal numerically with the high dimensionality and the nonlinearities of such problems, I will introduce a novel approximating hierarchy of sub-optimal controls based on a stochastic Boltzmann approach, whose computation requires a moderate computational effort compared with standard direct approaches. I will compare the behavior of the control hierarchy with the solution of the optimal control problem. Several numerical examples will show the effectiveness of the proposed strategies.
For more than thirty years it was thought that the efficient construction of pressure-robust mixed methods for the incompressible Navier-Stokes equations, whose velocity error is pressure-independent, was practically impossible. However, a novel, quite universal construction approach shows that it is indeed rather easy to construct pressure-robust mixed methods. The approach repairs a certain (L2-)orthogonality between gradient fields and discretely divergence-free test functions, and works for families of arbitrary-order mixed finite element methods, arbitrary-order discontinuous Galerkin methods, and finite volume methods. Novel benchmarks for the incompressible Navier-Stokes equations show that the approach promises significant speedups in computational practice, whenever the continuous pressure is complicated.
Programme:
- Opening by the President of the DMV (Prof. Volker Bach)
- Dissonances between mathematics and music: the musical scale (Prof. Werner Kirsch)
- Gauss Lecture: Mathematics at the interface of design and technology (Prof. Helmut Pottmann)
- Reception
Helmut Pottmann is Professor of Geometry at the Technische Universität Wien and scientific director of the Center for Geometry and Computational Design. His research focuses on geometric modeling for industrial applications, and he is the founder of architectural geometry. Many of the concepts developed by Helmut Pottmann have immediate practical use: wave-shaped facades made of planar glass panels, midday shade in a glasshouse thanks to cleverly positioned steel beams, self-supporting ceilings with minimal material consumption.
In his lecture, Mr. Pottmann will explain the interplay between mathematical theory, computer simulation, and practical implementation in the construction of buildings. He will show how the targeted use of mathematics leads to more innovative designs and more efficient manufacturing processes.
Several applications require not only one optimal trajectory, but knowledge of the behaviour of all trajectories together with a good approximation of the set of all end points of feasible trajectories at a given end time, i.e. the reachable set. Reachable sets of nonlinear state-constrained control problems with bounded controls can be calculated by various approaches, e.g., by level-set approaches solving partial differential equations, by iterative set-valued Runge-Kutta methods based on boxes in state space, or by overestimating methods.
This talk suggests an adaptive method based on optimization solvers. By solving a series of parametric optimal control problems with a varying objective function, suitable OCP solvers such as OCPID-DAE1, WORHP or Ipopt can be applied to the original set-valued problem. In this approach, the feasible set equals the reachable set of the control problem, and the optimal value involves the distance function from a varying grid point to the (yet unknown) reachable set.
Applying a subdivision technique to this method yields rather simple convergence proofs, a refining overestimation of the reachable set by a collection of boxes, and an adaptive implementation that outperforms the same algorithm applied with only a regular state-space discretization. As applications, lower-dimensional projected reachable sets of a robot model and of a single-track model for collision avoidance, with more than three states and two controls, are computed. Features and possible speedups of the algorithm by parallelization are also demonstrated.
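As a toy illustration of covering a reachable set by a collection of boxes (a much-simplified stand-in for the approach above, not the OCP-based distance method itself; the system, the control discretization and the box size are all made up for illustration):

```python
import itertools

# Toy box-collection cover of a reachable set.  System: double
# integrator x'' = u with |u| <= 1 on [0, T], started at the origin,
# with piecewise-constant bang-bang controls on an equidistant grid.
def endpoint(controls, dt):
    """Integrate x' = v, v' = u with explicit Euler, one step per piece."""
    x, v = 0.0, 0.0
    for u in controls:
        x += dt * v
        v += dt * u
    return x, v

def reachable_boxes(pieces=8, T=1.0, h=0.25):
    """Sample all bang-bang control sequences and cover the endpoints
    with axis-aligned boxes of side h; returns the set of box indices."""
    dt = T / pieces
    boxes = set()
    for controls in itertools.product([-1.0, 1.0], repeat=pieces):
        x, v = endpoint(controls, dt)
        boxes.add((int(x // h), int(v // h)))
    return boxes

boxes = reachable_boxes()
```

The adaptive method in the talk replaces the brute-force control sampling by optimal control problems that measure the distance of each candidate box to the reachable set, which is what makes refinement and overestimation guarantees possible.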
Two-stage risk-averse stochastic optimization is concerned with the minimization of a risk measure of a random cost function over the feasible choices of a deterministic and a random decision variable. We study the multi-objective version of this problem in which case the cost function is vector-valued and its risk is quantified via a multivariate (set-valued) risk measure. We reformulate the resulting problem as a convex vector optimization problem with set-valued constraints and propose customized versions of Benson’s algorithm to solve it. In particular, by randomizing the deterministic decision variable, we develop convex duality-based decomposition methods to solve the scalar subproblems appearing in Benson’s algorithm. The algorithm is illustrated on examples including the multi-asset portfolio optimization problem with transaction costs.
We introduce a multi-factor stochastic volatility model based on the CIR/Heston volatility process that incorporates seasonality and the Samuelson effect. First, we give conditions on the seasonal term under which the corresponding volatility factor is well-defined. These conditions appear to be rather mild. Second, we calculate the joint characteristic function of two futures prices for different maturities in the proposed model. This characteristic function is analytic. Finally, we provide numerical illustrations in terms of implied volatility and correlation produced by the proposed model with five different specifications of the seasonality pattern. The model is found to be able to produce volatility smiles at the same time as a volatility term-structure that exhibits the Samuelson effect with a seasonal component. Correlation, instantaneous or implied from calendar spread option prices via a Gaussian copula, is also found to be seasonal.
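A single volatility factor of this type can be sketched numerically. The functional form below is an assumption for illustration (a CIR-type variance process whose long-run level carries a sinusoidal seasonal modulation), not the exact specification from the abstract; the full-truncation Euler scheme keeps the discretized variance well defined.

```python
import math
import random

# Illustrative sketch (assumed functional form): a CIR-type variance
# process dv = kappa*(theta(t) - v) dt + sigma*sqrt(v) dW whose
# long-run level theta(t) has a seasonal modulation with period 1 year,
# discretized with a full-truncation Euler scheme.
def theta(t, base=0.04, amp=0.5):
    """Seasonal long-run variance level."""
    return base * (1.0 + amp * math.sin(2.0 * math.pi * t))

def simulate_cir(v0=0.04, kappa=2.0, sigma=0.3, T=2.0, n=2000, seed=1):
    rng = random.Random(seed)
    dt = T / n
    v, path = v0, [v0]
    for i in range(n):
        vp = max(v, 0.0)  # full truncation: drift/diffusion use max(v, 0)
        dw = rng.gauss(0.0, math.sqrt(dt))
        v = v + kappa * (theta(i * dt) - vp) * dt + sigma * math.sqrt(vp) * dw
        path.append(v)
    return path

path = simulate_cir()
```

In the model of the talk, conditions on the seasonal term guarantee that the corresponding volatility factor is well defined; here the truncation plays that role at the discrete level only.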
Cover's celebrated theorem states that the long-run yield of a properly chosen "universal" constant rebalanced portfolio is as good as the long-run yield of the best retrospectively chosen constant rebalanced portfolio. The "universality" pertains to the fact that this result is model-free, i.e., not dependent on an underlying stochastic process. We extend Cover's theorem to the setting of stochastic portfolio theory as initiated by R. Fernholz: the rebalancing rule need no longer be constant but may depend on the present state of the stock market. This result is complemented by a comparison with the log-optimal numéraire portfolio when fixing a stochastic model of the stock market. Roughly speaking, under appropriate assumptions, the optimal long-run yield coincides for the three approaches mentioned in the title. We present our results in discrete as well as in continuous time. The talk is based on joint work with Walter Schachermayer and Leonard Wong.
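The benefit of rebalancing behind Cover's comparison can be seen in a toy market. The price relatives below are synthetic and chosen purely for illustration (not from the talk): each asset alone goes nowhere, yet the best retrospectively chosen constant rebalanced portfolio grows exponentially.

```python
# Wealth of constant rebalanced portfolios (CRPs) in a synthetic
# two-asset market, and the best CRP chosen in hindsight.
def crp_wealth(b, price_relatives):
    """Terminal wealth of the CRP holding fraction b in asset 1,
    rebalanced every period, starting from wealth 1."""
    w = 1.0
    for x1, x2 in price_relatives:
        w *= b * x1 + (1.0 - b) * x2
    return w

# An oscillating market: each asset's price returns to its start after
# every pair of periods, but rebalancing profits from the oscillation.
market = [(2.0, 0.5), (0.5, 2.0)] * 10

grid = [i / 100.0 for i in range(101)]
wealth = {b: crp_wealth(b, market) for b in grid}
best_b = max(wealth, key=wealth.get)
# The pure strategies b = 0 and b = 1 end with wealth exactly 1, while
# the best retrospective CRP (b = 1/2) multiplies wealth by 1.25 every
# single period.
```

Cover's universal portfolio achieves (asymptotically, in the exponent) the wealth of this hindsight-optimal b without knowing the market in advance; the extension discussed in the talk allows the weights to depend on the current market state.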
In this talk I will present the theory of structured deformations. Starting from the original formulation by Del Piero and Owen, I will move to the variational formulation proposed by Choksi and Fonseca, to conclude with some recent results regarding relaxation of non-convex energies. I will also present some examples and explicit formulas as an application to specific surface energies.
Title: Asymptotic-preserving methods and multiscale PDEs
Many applications involve partial differential equations with multiple space-time scales. Numerically resolving such scales may be computationally prohibitive, and one therefore resorts to asymptotic analysis in order to derive reduced models which are valid in the small-scale regime. The derivation of numerical schemes which correctly capture such asymptotic behavior without resolving the small scales has attracted a lot of attention in recent years, leading to the so-called asymptotic-preserving (AP) methods.
A typical AP method permits the use of the same numerical scheme for the multiscale PDE and its small-scale limit, with fixed discretization parameters. This makes it possible to match regions where the perturbation parameters have very different orders of magnitude without adopting a multi-physics approach that couples different physical models at different scales. The design of AP schemes needs special care for both time and space discretizations. Often the time discretization is the more crucial one, and Implicit-Explicit (IMEX) techniques represent a powerful tool for the construction of efficient AP schemes.
In this talk we first survey the basic concepts of AP methods and emphasize the analogies with singularly perturbed systems and differential-algebraic equations. Next we consider the design principles of IMEX techniques based on Runge-Kutta and linear multistep methods and present some representative examples in the case of hyperbolic balance laws and kinetic equations.
see http://www.ma.tum.de/Mathematik/FakultaetsKolloquium#AbstractPareschi
https://www.statistik.lmu.de/~abender/Kolloquium/abstracts/ws1617/bowman.pdf
Uncertainty quantification (UQ) is a research area which deals with the impact of parameter, data and model uncertainties in complex systems. The fast development of UQ as a field is driven by applications in all areas of engineering, and environmental, physical, biological and social systems, e.g. groundwater flow and transport, shape optimisation, and geotechnical engineering.
A building block of UQ is the development of efficient algorithms to include and address uncertainties in a mathematical model of complex systems. In this talk we focus on models which are based on partial differential equations (PDEs). For deterministic PDEs there are many classical analytical and numerical tools available. The treatment of PDEs with random inputs, however, requires novel ideas and tools from numerical analysis, statistics, probability theory and computational science and engineering.
We illustrate the mathematical and algorithmic challenges of UQ using a standard model problem, a diffusion equation with random diffusion coefficient. We give an overview of numerical methods for this problem with focus on stochastic Galerkin formulations and multilevel Monte Carlo estimators. As a recent example, we discuss a novel multilevel estimator for optimal control problems constrained by an elliptic PDE with a lognormal coefficient.
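The telescoping idea behind a multilevel Monte Carlo estimator can be sketched on a standard toy problem. The example below uses Euler discretizations of geometric Brownian motion rather than the PDE setting of the talk, and all parameters, sample sizes and levels are illustrative.

```python
import math
import random

# Multilevel Monte Carlo sketch: estimate E[S_T] for geometric Brownian
# motion dS = r*S dt + sig*S dW via Euler schemes with step T/2**l,
# using the telescoping sum E[P_L] = sum_l E[P_l - P_{l-1}].
def euler_pair(l, r, sig, T, rng):
    """Coupled fine/coarse Euler endpoints driven by the same noise."""
    nf = 2 ** l
    dtf = T / nf
    sf = sc = 1.0
    dw_c = 0.0
    for k in range(nf):
        dw = rng.gauss(0.0, math.sqrt(dtf))
        sf += sf * (r * dtf + sig * dw)
        dw_c += dw
        if l > 0 and k % 2 == 1:   # one coarse step per two fine steps
            sc += sc * (r * 2 * dtf + sig * dw_c)
            dw_c = 0.0
    if l == 0:
        sc = 0.0                   # no coarser level below l = 0
    return sf, sc

def mlmc(r=0.05, sig=0.2, T=1.0, L=4, n0=4000, seed=7):
    rng = random.Random(seed)
    est = 0.0
    for l in range(L + 1):
        n = max(n0 // 2 ** l, 100)  # fewer samples on finer, costlier levels
        acc = 0.0
        for _ in range(n):
            sf, sc = euler_pair(l, r, sig, T, rng)
            acc += sf - sc
        est += acc / n
    return est

est = mlmc()  # should be close to exp(0.05) ~ 1.0513
```

The point of the coupling is that the level differences have small variance, so most samples can be spent on the cheap coarse levels; the estimator in the talk applies the same principle with PDE discretization levels in place of time steps.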
In the celebrated paper [2], Li and Yau proved the parabolic Harnack inequality for Riemannian manifolds with Ricci curvature bounded from below. The key step in their proof was a completely new type of Harnack estimate, namely a pointwise gradient estimate, called "differential Harnack inequality", which, by integration along a path, yields the classical parabolic Harnack estimate. If one tries to apply this method to discrete structures (graphs) one is faced with two big obstacles. The main difficulty is that the chain rule for the Laplace operator fails on graphs. Another problem is that in the graph setting, it is a priori not clear how to define a proper notion of curvature, or more precisely the concept of lower bounds for the Ricci curvature. A first successful attempt to circumvent these difficulties was made in the very recent paper [1] and is based on the square-root approach. In my talk, I will present a different approach, which, as in the classical case ([2]), leads to logarithmic Li-Yau inequalities, and also significantly improves the results from [1]. This is joint work with D. Dier (Ulm) and M. Kassmann (Bielefeld).
[1] F. Bauer, P. Horn, Y. Lin, G. Lippner, D. Mangoubi, S.-T. Yau: Li-Yau inequality on graphs. J. Differential Geom. 99 (2015), 359-405.
[2] P. Li, S.-T. Yau: On the parabolic kernel of the Schrödinger operator. Acta Math. 156 (1986), 153-201.
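For reference, the gradient estimate of Li and Yau from [2] reads, in its standard form: for a positive solution $u$ of the heat equation on a complete $n$-dimensional Riemannian manifold with nonnegative Ricci curvature,

```latex
\[
  \frac{|\nabla u|^2}{u^2} - \frac{\partial_t u}{u} \le \frac{n}{2t},
  \qquad\text{equivalently}\qquad
  \Delta \log u \ge -\frac{n}{2t},
\]
% and integration along paths yields the parabolic Harnack inequality
\[
  u(x,s) \le u(y,t)\,\Bigl(\frac{t}{s}\Bigr)^{n/2}
          \exp\!\Bigl(\frac{d(x,y)^2}{4(t-s)}\Bigr),
  \qquad 0 < s < t .
\]
```

The equivalence of the two pointwise forms uses the chain rule identity $\Delta \log u = \Delta u / u - |\nabla u|^2/u^2$, which is precisely what fails on graphs.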
We will discuss recent progress in analytical results and numerical schemes for the characterization of relaxed energy functionals related to models within the framework of nonlinear elasticity for phase transforming materials. Classical relaxation results rely on growth conditions which are not appropriate for models that include nonlinear constraints like a positive determinant of the deformation gradient or incompressibility. Since it is generally impossible to obtain explicit formulas for the relaxed energy we will also briefly comment on suitable numerical schemes and their predictive power.
Percolation is a model of modern probability theory in which a random subgraph with prescribed edge density is selected from a (usually infinite) graph G. Despite its simple mechanism, this model exhibits a phase transition, and the behavior at this phase transition (so-called critical behavior) is of particular interest. I will present some results on this, in particular for the case where G is a high-dimensional lattice.
Complementing earlier work by Mountford, Mourrat, Valesin and Yao, we study metastable behavior of the contact process on general finite and connected graphs. For the contact process with infection rate $\lambda$ on a graph $G$, the extinction time is the random amount of time until the process started from all individuals infected reaches the trap state in which the infection is absent. We prove, without any restriction on $G$, that if $\lambda$ is larger than the critical rate of the one-dimensional process, then the extinction time grows faster than $\exp(|G|/(\log |G|)^a)$ for any constant $a > 1$, where $|G|$ denotes the number of vertices of $G$. Also for general graphs, we show that the extinction time divided by its expectation converges in distribution, as the number of vertices tends to infinity, to the exponential distribution with parameter 1. Joint work with Bruno Schapira.
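The process itself is easy to simulate. The sketch below is illustrative only (a small cycle graph, a subcritical rate and a fixed seed, all arbitrary choices): infected vertices recover at rate 1 and infect each healthy neighbour at rate lam, and we record the extinction time started from all vertices infected.

```python
import random

# Toy Gillespie-type simulation of the contact process on a cycle of
# n vertices: recovery at rate 1, infection of each healthy neighbour
# at rate lam.  Returns the extinction time from the fully infected
# initial state (capped at t_max).
def extinction_time(n=20, lam=0.8, seed=3, t_max=1e6):
    rng = random.Random(seed)
    infected = set(range(n))
    t = 0.0
    while infected and t < t_max:
        # collect all possible events with their rates
        rates = []
        for v in infected:
            rates.append((1.0, ("recover", v)))
            for w in ((v - 1) % n, (v + 1) % n):
                if w not in infected:
                    rates.append((lam, ("infect", w)))
        total = sum(r for r, _ in rates)
        t += rng.expovariate(total)       # exponential waiting time
        u = rng.random() * total          # pick event proportional to rate
        for r, (kind, v) in rates:
            u -= r
            if u <= 0:
                break
        if kind == "recover":
            infected.discard(v)
        else:
            infected.add(v)
    return t

tau = extinction_time()
```

With the subcritical rate chosen here the infection dies out quickly; the regime of the talk is the opposite one, where $\lambda$ exceeds the one-dimensional critical rate and extinction times grow almost exponentially in the graph size.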
The aim of this talk is to present a variational model of the quasi-static evolution of hydraulic cracks in the general framework of rate-independent processes.