
Fatigue effects in elastic materials with variational damage models
MI 02.10.011 (Boltzmannstr. 3, 85748 Garching)

In this talk I will present an existence result concerning quasistatic evolutions for a family of gradient damage models which take into account fatigue, that is, the process of weakening in a material due to repeated applied loads. The main feature of these models is the fact that damage is favoured in regions where the cumulation of the elastic strain (or other relevant variables, depending on the model) is higher. The existence of a quasistatic evolution is proven via a vanishing-viscosity approach based on two steps: first let the time-step of the time discretisation and then the viscosity parameter go to zero. As the time-step goes to zero, one finds approximate viscous evolutions; then, as the viscosity parameter goes to zero, one finds a rescaled approximate evolution satisfying an energy-dissipation balance. This is based on joint work with Roberto Alessi and Vito Crismale.

On the stochastic heat equation with multiplicative noise
BC1 2.02.01 (Parkring 11, 85748 Garching)

We study a parsimonious but non-trivial model of the latent limit order book where orders get placed with a fixed displacement from a center price process, i.e. some process in-between best bid and best ask, and get executed whenever this center price reaches their level. This mechanism corresponds to the fundamental solution of the stochastic heat equation with multiplicative noise for the relative order volume distribution, for which we provide a solution via a local time functional. Moreover, we classify various types of trades, and introduce the trading excursion process, which is a Poisson point process. This allows us to derive the Laplace transforms of the times to various trading events under the corresponding intensity measure.
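
The equation at the heart of this mechanism can be illustrated numerically. Below is a minimal finite-difference Euler-Maruyama sketch of the 1D stochastic heat equation with multiplicative noise; this is a generic discretization for illustration, not the order-book construction or the local-time solution of the talk.

```python
import numpy as np

# Finite-difference Euler-Maruyama sketch for the 1D stochastic heat
# equation with multiplicative noise, du = u_xx dt + u dW (space-time
# white noise), on [0, 1] with Dirichlet boundary conditions.
def simulate_she(n_space=50, n_steps=2000, T=0.01, seed=0):
    rng = np.random.default_rng(seed)
    dx = 1.0 / n_space
    dt = T / n_steps
    assert dt <= 0.5 * dx**2, "stability condition for the explicit scheme"
    x = np.linspace(0.0, 1.0, n_space + 1)
    u = np.sin(np.pi * x)              # smooth nonnegative initial condition
    for _ in range(n_steps):
        lap = np.zeros_like(u)
        lap[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        # discrete space-time white noise: variance dt/dx per cell
        xi = rng.standard_normal(n_space + 1) * np.sqrt(dt / dx)
        u = u + dt * lap + u * xi      # noise enters multiplicatively
        u[0] = u[-1] = 0.0             # Dirichlet boundary
    return x, u
```

The `u * xi` term is what makes the noise multiplicative: regions where the solution (here standing in for the order volume density) vanishes stay quiet, mirroring how empty price levels carry no volume fluctuation.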

Large Deviations for McKean-Vlasov Equations and Importance Sampling
BC1 2.02.01 (Parkring 11, 85748 Garching)

We discuss two Freidlin-Wentzell large deviation principles for McKean-Vlasov equations (MV-SDEs) in certain path-space topologies. The equations have a drift of polynomial growth, and an existence/uniqueness result is provided. We then apply Monte Carlo methods for evaluating expectations of functionals of solutions to MV-SDEs with drifts of super-linear growth. We assume that the MV-SDE is approximated in the standard manner by means of an interacting particle system and propose two importance sampling (IS) techniques to reduce the variance of the resulting Monte Carlo estimator. In the "complete measure change" approach, the IS measure change is applied simultaneously in the coefficients and in the expectation to be evaluated. In the "decoupling" approach we first estimate the law of the solution in a first set of simulations without measure change and then perform a second set of simulations under the importance sampling measure using the approximate solution law computed in the first step.
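
The "standard manner" of particle approximation mentioned above can be sketched as follows: each particle sees the empirical measure of the others in place of the true law. The cubic drift below is a hypothetical super-linear example, not the drift studied in the talk, and no importance sampling is performed here.

```python
import numpy as np

# Euler-Maruyama simulation of an interacting particle system
# approximating a toy McKean-Vlasov SDE
#     dX_t = (E[X_t] - X_t^3) dt + sigma dW_t,
# where the mean-field term E[X_t] is replaced by the empirical mean
# over N particles. (Illustrative cubic drift, chosen only because it
# has super-linear growth.)
def simulate_particles(n_particles=500, n_steps=200, T=1.0, sigma=0.5, seed=1):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    x = rng.standard_normal(n_particles)       # initial particles ~ N(0,1)
    for _ in range(n_steps):
        mean_field = x.mean()                  # empirical-measure surrogate
        drift = mean_field - x**3              # super-linear drift
        x = x + drift * dt + sigma * np.sqrt(dt) * rng.standard_normal(n_particles)
    return x
```

A variance-reduction scheme in the spirit of the talk would then re-run such a simulation under a change of measure; the "decoupling" approach would freeze the empirical law obtained from a first run like this one.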

Perturbations of dynamical systems with additive noise, noise induced order and linear response
MI 03.06.011 (Boltzmannstr. 3, 85748 Garching)

Dynamical systems perturbed by noise appear naturally as models of physical systems, and in several interesting cases they can be approached rigorously by computational methods. As a nontrivial example, we present a computer-aided proof of the existence of noise-induced order in a model of chaotic chemical reactions, where the phenomenon was first discovered numerically by Matsumoto and Tsuda in 1983. We show that in this random dynamical system an increase of the noise causes the Lyapunov exponent to decrease from positive to negative, stabilizing the system. The method is based on a certified approximation of the stationary measure in the L1 norm, computed by an efficient algorithm that is general enough to be adapted to any dynamical system with additive noise on the interval. Time permitting, we will also discuss the linear response of such systems when the deterministic part of the system is perturbed deterministically.
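
The quantity at stake can be illustrated with a plain Monte-Carlo estimate, which is emphatically not the certified computation of the talk. The logistic map below is a stand-in for the Matsumoto-Tsuda model, chosen only because it is a standard chaotic interval map.

```python
import numpy as np

# Monte-Carlo estimate (not a certified, rigorous computation) of the
# Lyapunov exponent of an interval map with additive noise,
#     x_{n+1} = T(x_n) + noise   (clipped back into [0, 1]).
# The logistic map T(x) = 4x(1-x) stands in for the chemical model here.
def lyapunov_exponent(noise_amp, n_iter=100_000, seed=2):
    rng = np.random.default_rng(seed)
    T = lambda x: 4.0 * x * (1.0 - x)
    dT = lambda x: 4.0 - 8.0 * x
    x, acc = 0.3, 0.0
    for _ in range(n_iter):
        acc += np.log(abs(dT(x)) + 1e-300)   # guard against log(0)
        x = T(x) + noise_amp * rng.uniform(-1.0, 1.0)
        x = min(max(x, 0.0), 1.0)            # additive noise, clipped
    return acc / n_iter
```

Noise-induced order would manifest as this estimate crossing from positive to negative as `noise_amp` grows; the certified method of the talk replaces the orbit average by a rigorous L1 approximation of the stationary measure.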

Fresh approach to dynamical systems software using Julia
MI 03.06.011 (Boltzmannstr. 3, 85748 Garching)

In my talk I will present the award-winning software for dynamical systems hosted in the GitHub organization JuliaDynamics. These packages are written in Julia, a new programming language aimed at scientific computing. Since Julia was released relatively recently, I will not assume prior knowledge and will instead introduce its basic features.

I will then show in detail how we use Julia in our organization, what is possible to do with our software, and why it is a fresh approach compared to what already exists. I will also showcase how one can make beautiful and responsive interactive applications without using JavaScript, HTML and the like, allowing the source code to be kept simple, understandable and reusable. Everything is achieved through multiple dispatch and metaprogramming. If time permits, I will delve into the internals of the software I present and/or give an outlook on future plans.

Understanding sparsity properties of frames using function spaces
MI 02.10.011 (Boltzmannstr. 3, 85748 Garching)

We present a systematic approach towards understanding the sparsity properties of different frame constructions like Gabor systems, wavelets, shearlets, and curvelets. We use the following terminology: Analysis sparsity means that the frame coefficients are sparse (in an \ell^p sense), while synthesis sparsity means that the function can be written as a linear combination of the frame elements using sparse coefficients. While these two notions are completely distinct for general frames, we show that if the frame in question is sufficiently nice, then both forms of sparsity of a function are equivalent to membership of the function in a certain decomposition space. These decomposition spaces are a common generalization of Besov spaces and modulation spaces. While Besov spaces can be defined using a dyadic partition of unity on the Fourier domain, modulation spaces employ a uniform partition of unity, and general decomposition spaces use an (almost) arbitrary partition of unity on the Fourier domain. To each decomposition space, there is an associated frame construction: Given a generator, the resulting frame consists of certain translated, modulated and dilated versions of the generator. These are chosen so that the frequency concentration of the frame is similar to the frequency partition of the decomposition space. For instance, Besov spaces yield wavelet systems, while modulation spaces yield Gabor systems. We give conditions on the (possibly compactly supported!) generator of the frame which ensure that analysis sparsity and synthesis sparsity of a function are both equivalent to membership of the function in the decomposition space.
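
In symbols, for a frame $(\varphi_i)_{i \in I}$, the two notions of sparsity used above read as follows (a schematic rendering, not the talk's precise weighted definitions):

```latex
\text{analysis sparsity:} \quad
  \big\| \big( \langle f, \varphi_i \rangle \big)_{i \in I} \big\|_{\ell^p} < \infty ,
\qquad
\text{synthesis sparsity:} \quad
  f = \sum_{i \in I} c_i \, \varphi_i
  \ \text{ with } \
  \big\| (c_i)_{i \in I} \big\|_{\ell^p} < \infty .
```

The result described in the abstract is that, for sufficiently nice frames, both conditions are equivalent to membership of $f$ in the associated decomposition space.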

Workshop Dynamics & Numerics
MI 02.08.011 (Boltzmannstr. 3, 85748 Garching)

Numerics: Vanja Nikolic (30 min)

Title: Analytical and numerical aspects of nonlinear acoustic wave propagation

Abstract: The need to analyze and accurately simulate nonlinear sound propagation has increased with the rise in the number of ultrasound applications in medicine and industry. In this talk, I will present some of my recent work on the well-posedness and numerical simulation of partial differential equations that model nonlinear sound propagation. In addition, I will briefly discuss the treatment of shape optimization problems that arise in the practical use of high-intensity focused ultrasound.

Dynamics: Maxime Breden (30 min)

Title: An introduction to a posteriori validation techniques, illustrated on the study of minimum energy paths.

Abstract: To understand the global behavior of a nonlinear system, the first step is to study its invariant set. Indeed, specific solutions like steady states, periodic orbits and connections between them are building blocks that organize the global dynamics. While there are many deep, general and theoretical mathematical results about the existence of such solutions, it is often difficult to apply them to a specific example. Besides, when dealing with a precise application, it is not only the existence of these solutions, but also their qualitative properties that are of interest. In that case, a powerful and widely used tool is numerical simulations, which is well adapted to the study of an explicit system and can provide invaluable insight for problems where the nonlinearities hinder the use of purely analytical techniques. The aim of a posteriori validation techniques is to obtain mathematically rigorous and quantitative existence theorems, using those numerical simulations. Given an approximate solution, the general strategy is to combine a posteriori estimates with analytical ones to apply a fixed point theorem, which then yields the existence of a true solution in an explicit neighborhood of the numerical one. In the first part of the talk, I'll present the main ideas of a posteriori validation in more detail, and describe the general framework in which they are applicable. In the second part, I'll then focus on a specific example and explain how to validate minimum energy paths for stochastic differential equations.
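
The strategy described above — combine a numerical candidate with a posteriori estimates to get a rigorous existence proof — can be sketched on a toy problem. The interval Newton operator below validates a floating-point approximation of a zero of $f(x) = x^2 - 2$; this is only a sketch of the idea, since a genuinely rigorous implementation would need outward-rounded interval arithmetic.

```python
# Toy a posteriori validation: given a numerical zero of f(x) = x^2 - 2,
# evaluate the interval Newton operator
#     N([a,b]) = m - f(m) / f'([a,b]),   m = midpoint of [a,b].
# If N([a,b]) lies strictly inside [a,b], a fixed point argument proves
# existence and uniqueness of a true zero of f in [a,b].
def interval_newton_step(lo, hi):
    m = 0.5 * (lo + hi)
    fm = m * m - 2.0                       # f at the midpoint
    d_lo, d_hi = 2.0 * lo, 2.0 * hi        # f'([lo,hi]) = [2 lo, 2 hi], lo > 0
    q = sorted([fm / d_lo, fm / d_hi])     # fm / [d_lo, d_hi] as an interval
    return m - q[1], m - q[0]

def validate(lo, hi):
    n_lo, n_hi = interval_newton_step(lo, hi)
    return lo < n_lo and n_hi < hi         # strict inclusion => proof

x_approx = 1.4142                          # numerical candidate for sqrt(2)
print(validate(x_approx - 0.01, x_approx + 0.01))  # True: a true zero is enclosed
```

The talk's application to minimum energy paths follows the same pattern in function space: the interval is replaced by an explicit neighborhood of the numerical path, and the Newton operator by a suitable fixed point operator.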

Numerics: Elisabeth Ullmann (30 min)

Title: Multilevel Sequential^2 Monte Carlo for Bayesian Inverse Problems

Abstract: The identification of parameters in mathematical models using noisy observations is a common task in uncertainty quantification. We employ the framework of Bayesian inversion: we combine monitoring and observational data with prior information to estimate the posterior distribution of a parameter. Specifically, we are interested in the distribution of a diffusion coefficient of an elliptic PDE. In this setting, the sample space is high-dimensional, and each sample of the PDE solution is expensive. To address these issues we propose and analyse a novel Sequential Monte Carlo (SMC) sampler for the approximation of the posterior distribution. Classical, single-level SMC constructs a sequence of measures, starting with the prior distribution and finishing with the posterior distribution. The intermediate measures arise from a tempering of the likelihood, and the resolution of the PDE discretisation is fixed. In contrast, our estimator employs a hierarchy of PDE discretisations to decrease the computational cost. We construct the sequence of intermediate measures by either decreasing the temperature or increasing the discretisation level. This idea builds on and generalises the multi-resolution sampler proposed by P.S. Koutsourelakis (J. Comput. Phys. 228, 2009, pp. 6184-6211), where a bridging scheme is used to transfer samples from coarse to fine discretisation levels. Importantly, our choice between tempering and bridging is fully adaptive, and can also be generalized to time-dependent problems.
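
The single-level tempering baseline described above can be sketched in a few lines. The toy problem, number of temperatures, and the omission of MCMC move steps are all simplifications for illustration; the talk's estimator additionally moves through a hierarchy of PDE discretisations.

```python
import numpy as np

# Single-level tempered SMC sketch. Toy Bayesian problem: prior
# theta ~ N(0,1), one observation y = 1 with unit-variance Gaussian
# noise, so the exact posterior is N(0.5, 0.5).
def tempered_smc(n_particles=20_000, n_temps=5, y=1.0, seed=3):
    rng = np.random.default_rng(seed)
    theta = rng.standard_normal(n_particles)       # sample the prior
    log_like = -0.5 * (y - theta) ** 2             # log-likelihood, up to const
    betas = np.linspace(0.0, 1.0, n_temps + 1)     # tempering schedule
    for b_prev, b_next in zip(betas[:-1], betas[1:]):
        logw = (b_next - b_prev) * log_like        # incremental weight
        w = np.exp(logw - logw.max())
        w /= w.sum()
        # multinomial resampling; a real sampler would follow this with
        # an MCMC move step to restore particle diversity
        idx = rng.choice(n_particles, size=n_particles, p=w)
        theta = theta[idx]
        log_like = log_like[idx]
    return theta

theta = tempered_smc()
print(theta.mean())   # close to the true posterior mean 0.5
```

In the multilevel variant, each update step would adaptively choose between such a tempering step and a bridging step that refines the discretisation of the forward model.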

Dynamics: Christian Kühn (30 min)

Title: Numerical Continuation of Ellipsoids for Stochastic Problems

Abstract: In this talk, I shall explain a method for analyzing certain aspects of stochastic dynamical systems from a purely deterministic, discrete, and geometric perspective. In particular, we study fluctuations around steady states in stochastic differential equations using ellipsoids calculated via Lyapunov matrix equations. The method will be embedded in a numerical continuation framework to effectively study parametrized problems. I am also going to briefly mention rigorous error estimates for the numerics and several applications.
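
The Lyapunov matrix equations mentioned above can be solved with elementary linear algebra. The following sketch (toy 2x2 example, Kronecker-product solve with NumPy only) computes the covariance matrix whose level sets give the fluctuation ellipsoids; the continuation machinery of the talk is not reproduced here.

```python
import numpy as np

# For a linearized SDE  dx = A x dt + dW  with stable A, the stationary
# covariance P solves the Lyapunov matrix equation  A P + P A^T + Q = 0
# (here Q = I). We solve it via row-major vectorization:
#   vec(A P + P A^T) = (kron(A, I) + kron(I, A)) vec(P).
def solve_lyapunov(A, Q):
    n = A.shape[0]
    I = np.eye(n)
    M = np.kron(A, I) + np.kron(I, A)
    p = np.linalg.solve(M, -Q.reshape(n * n))
    return p.reshape(n, n)

A = np.array([[-1.0, 2.0],
              [0.0, -3.0]])            # stable: eigenvalues -1 and -3
P = solve_lyapunov(A, np.eye(2))
# residual of A P + P A^T + I should be near machine precision
print(np.abs(A @ P + P @ A.T + np.eye(2)).max())
```

The fluctuation ellipsoid around the steady state is then `{x : x^T P^{-1} x <= c}`; continuation in a parameter amounts to re-solving this equation along a branch of steady states.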

The Random Connection Model at Criticality
B 252 (Theresienstr. 39, 80333 München)

We consider the random connection model, which is a continuum percolation model. After introducing the model along with some basic tools, we adapt the lace expansion to the framework of the underlying continuum space Poisson point process. This allows us to derive the triangle condition above the upper critical dimension and furthermore to establish the infra-red bound. From this, mean-field behavior of the model can be deduced.

New mathematical models and numerical algorithms for Newtonian and general relativistic continuum physics
MI HS 3 (Boltzmannstr. 3, 85748 Garching)

The three body problem in four dimensions
MI HS 3 (Boltzmannstr. 3, 85748 Garching)

The Newtonian three body problem has undergone a Renaissance in recent years. I will present an overview of old and new results on periodic solutions, symbolic dynamics, and chaos in this problem. Then I will describe new results about the symplectic symmetry reduction and dynamics of relative equilibria when the spatial dimension is at least four. In particular we will show that there are families of relative equilibria that are minima of the reduced Hamiltonian, and hence are Lyapunov stable. This establishes the first proof of Lyapunov stable periodic orbits in the three body problem, albeit in dimension four.

High-dimensional approximation and sparse FFT using (multiple) rank-1 lattices
MI 02.10.011 (Boltzmannstr. 3, 85748 Garching)

We consider the approximate reconstruction of a high-dimensional (e.g. d=10) periodic function from samples using trigonometric polynomials. As sampling schemes, we use rank-1 lattices, which can be constructed by a component-by-component approach when the locations of the approximately largest Fourier coefficients are known. With the help of a single one-dimensional fast Fourier transform (FFT), we are able to compute the Fourier coefficients, also in the high-dimensional case. For functions from Sobolev Hilbert spaces of generalized mixed smoothness, error estimates are presented where the sampling rates are best possible up to logarithmic factors. We give numerical results which confirm our theoretical estimates. Additionally, we discuss an approach where we use multiple instances of rank-1 lattices. This allows for efficient construction algorithms and we obtain improved error estimates where the sampling rates are optimal up to a small constant offset in the exponent. In particular, we consider the case where we do not know the locations of important Fourier coefficients. Here, we present a method which approximately reconstructs high-dimensional sparse periodic signals in a dimension-incremental way based on projections. The sampling nodes are adaptively chosen (multiple) rank-1 lattices and we use 1-dimensional FFTs for the computations. This is based on joint work with Glenn Byrenheid, Lutz Kämmerer, Daniel Potts, and Tino Ullrich.
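
The core mechanism — reconstructing multivariate Fourier coefficients from rank-1 lattice samples with a single 1D FFT — fits in a few lines. The lattice size, generator, and frequency set below are small hypothetical choices for illustration, not ones from the talk.

```python
import numpy as np

# Rank-1 lattice sampling sketch: sample a d-variate trigonometric
# polynomial at the nodes x_j = (j * z / M) mod 1 and recover its
# Fourier coefficients with one 1D FFT. This works whenever the
# frequencies k have pairwise distinct values of (k . z) mod M.
M, z = 17, np.array([1, 4])                          # lattice size and generator
freqs = np.array([[0, 0], [1, 0], [0, 1], [2, 3]])   # frequency set (d = 2)
coeffs = np.array([1.0, 0.5 - 0.25j, -0.75, 0.3j])   # target coefficients

j = np.arange(M)
nodes = np.outer(j, z) / M % 1.0                     # lattice nodes in [0,1)^2
samples = sum(c * np.exp(2j * np.pi * nodes @ k)
              for k, c in zip(freqs, coeffs))        # evaluate the polynomial

fft = np.fft.fft(samples) / M                        # a single 1D FFT, length M
recovered = fft[(freqs @ z) % M]                     # read off at (k . z) mod M
print(np.abs(recovered - coeffs).max())              # reconstruction error
```

On the lattice, the d-variate exponential with frequency k collapses to the univariate exponential with frequency (k . z) mod M, which is exactly why one 1D FFT of length M suffices.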

Renormalisation of singular SPDEs
MI 03.06.011 (Boltzmannstr. 3, 85748 Garching)

In this talk, we will present some recent developments on the resolution of singular SPDEs using the theory of Regularity Structures introduced by Martin Hairer. With powerful algebraic tools, we can derive in a systematic way the renormalised equation and give a meaning to geometric stochastic heat equations.

TBA
B 252 (Theresienstr. 39, 80333 München)

TBA

Wigner measures and effective mass theorems
MI 03.10.011 (Boltzmannstr. 3, 85748 Garching)

In this talk, we shall present recent results concerning the dynamics of an electron in a crystal in the presence of impurities. It is well-known that under suitable assumptions on the initial data, the wave function can be approximated in the semi-classical limit by the solution of a simpler equation, the effective mass equation. Using Floquet-Bloch decomposition, we establish effective mass equations for rather general initial data, by introducing a new type of effective mass equations which are operator-valued and of Heisenberg form.

Optimal Control with Bang-Bang Solutions: Regularization Techniques and Applications
Gebäude 33, Raum 1431 (Werner-Heisenberg-Weg 39, 85577 Neubiberg)

The talk consists of two parts. In the first part, we study the impact of different regularization techniques on a class of linear-quadratic optimal control problems where the control variables are box-constrained and only appear linearly. With some structural assumptions on the switching function, those problems typically yield so-called bang-bang solutions. Adding regularization terms to the cost functional changes the structure of the optimal control. It is well-known that

(1) optimal control problems with (squared) $L^2$-control costs produce Lipschitz continuous solutions, and

(2) optimal control problems with $L^1$-control costs promote sparse solutions, i.e., the optimal control is zero on whole intervals.

We present a novel $L^{1,2}$-sparsity functional that promotes a so-called group sparsity structure of the optimal controls. In this case, the components of the control function simultaneously take the value zero on parts of the interval. These problems are both theoretically interesting and practically relevant. The usefulness of our approach is demonstrated by solving a two-dimensional variant of the well-known rocket car problem.
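
Schematically, with hypothetical weights $\alpha, \beta, \gamma > 0$ and controls $u : [0,T] \to \mathbb{R}^m$, the three regularization terms discussed above can be written as:

```latex
J_{L^2}(u) = \frac{\alpha}{2} \int_0^T \| u(t) \|_2^2 \, dt , \qquad
J_{L^1}(u) = \beta \int_0^T \| u(t) \|_1 \, dt , \qquad
J_{L^{1,2}}(u) = \gamma \int_0^T \| u(t) \|_2 \, dt .
```

The last functional is an $L^1$ norm in time of the Euclidean norm across control components, which is why it forces all components to vanish simultaneously on parts of the interval.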

In the second part of the talk, we consider the process of automatic optical material testing in the manufacturing of glass panels. To model this problem, we use an optimal control approach with a discontinuous cost functional and box constraints for both, the control and the state variables. The resulting problem turns out to have great similarities with the rocket car problem. We implement a prototype for this application which aims for computing the optimal control at run time. The algorithm will be demonstrated and tested with the help of an illustrative example where it turns out that the optimal control is of bang-bang or bang-zero-bang type, depending on the state constraints.

BV functions and Federer's characterization of sets of finite perimeter in metric spaces
Room 2004, 1st floor, Building L1 (Universitätsstr. 14, 86159 Augsburg)

We consider the theory of functions of bounded variation (BV functions) in the general setting of a complete metric space equipped with a doubling measure and supporting a Poincaré inequality. Such a theory was first developed by Ambrosio (2002) and Miranda (2003). I will give an overview of the basic theory and then discuss a metric space proof of Federer's characterization of sets of finite perimeter, i.e. sets whose characteristic functions are BV functions. This characterization states that a set is of finite perimeter if and only if the (n-1)-dimensional (in metric spaces, codimension-one) Hausdorff measure of the set's measure-theoretic boundary is finite. The proof relies on fine potential theory in the case p=1, much of which seems to be new even in Euclidean spaces.
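
In Euclidean notation, the characterization reads:

```latex
P(E) < \infty
\quad \Longleftrightarrow \quad
\mathcal{H}^{\, n-1} \big( \partial^{*} E \big) < \infty ,
```

where $P(E)$ is the perimeter of $E$ and $\partial^{*} E$ its measure-theoretic boundary; in the metric setting, $\mathcal{H}^{\, n-1}$ is replaced by the codimension-one Hausdorff measure.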

Reduced Basis Methods - Prospects and Challenges
Room 2004, 1st floor, Building L1 (Universitätsstr. 14, 86159 Augsburg)

Model reduction has become a must in realistic multi-query and/or real-time situations. Huge progress has been made in the last decade, both in the analysis of such methods and in their application in industrial frameworks. However, from a mathematical point of view, most results concern elliptic and parabolic linear PDEs whose solutions depend smoothly on the parameter. Most model reduction techniques rely on linear approximation schemes, and this fact clearly limits their scope. In this talk, we report on both success stories and recent limitations and challenges for model reduction.

Variational Approach to Fourier Phase Retrieval
MI 02.10.011 (Boltzmannstr. 3, 85748 Garching)

This talk will discuss the infinite-dimensional Fourier phase retrieval problem, motivated by applications in X-ray crystallography. Assuming prior knowledge on the object (such as positivity or support), we reformulate the Error-Reduction algorithm as a discretized gradient flow (without the need to explicitly impose object-space constraints) and show that the corresponding non-linear equation possesses global weak solutions. We use the gradient flow approach to analyze the fixed-point stability of the Error-Reduction algorithm and outline connections of this approach to state-of-the-art algorithms. Joint work with Gero Friesecke.
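
For orientation, the discrete fixed-point iteration behind the Error-Reduction algorithm is the following alternation between Fourier-magnitude and object-space projections. This is the standard Gerchberg-Saxton/Fienup-style sketch with a positivity constraint, not the talk's gradient-flow formulation or its analysis.

```python
import numpy as np

# Error-Reduction sketch: alternate between imposing the measured
# Fourier magnitudes and the object-space constraint (here: the object
# is real and nonnegative).
def error_reduction(magnitudes, n_iter=200, seed=4):
    rng = np.random.default_rng(seed)
    x = rng.random(magnitudes.shape)               # random nonnegative start
    for _ in range(n_iter):
        X = np.fft.fft2(x)
        X = magnitudes * np.exp(1j * np.angle(X))  # keep phase, fix magnitude
        x = np.fft.ifft2(X).real                   # back to object space
        x = np.maximum(x, 0.0)                     # positivity constraint
    return x

true = np.zeros((16, 16))
true[4:9, 5:12] = 1.0                              # a simple nonnegative object
mags = np.abs(np.fft.fft2(true))                   # measured Fourier magnitudes
rec = error_reduction(mags)
```

The gradient-flow viewpoint of the talk interprets exactly this iteration as a discretized flow, which is what makes tools like fixed-point stability analysis available.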

Approximation theoretic properties of deep ReLU neural networks
MI 03.06.011 (Boltzmannstr. 3, 85748 Garching)

Studying the approximation theoretic properties of neural networks with smooth activation functions is a classical topic. The networks used in practice, however, most often use the non-smooth ReLU activation function. Despite the recent remarkable performance of such networks in many classification tasks, a solid theoretical explanation of this success story is still missing.

In this talk, we will present recent results concerning the approximation theoretic properties of deep ReLU neural networks which help to explain some of the characteristics of such networks; in particular we will see that deeper networks can approximate certain classification functions much more efficiently than shallow networks, which is not the case for most smooth activation functions. We emphasize though that these approximation theoretic properties do not explain why simple algorithms like stochastic gradient descent work so well in practice, or why deep neural networks tend to generalize so well; we purely focus on the expressive power of such networks.

As a model class for classifier functions we consider the class of (possibly discontinuous) piecewise smooth functions for which the different "smooth regions" are separated by smooth hypersurfaces. Given such a function, and a desired approximation accuracy, we construct a neural network which achieves the desired approximation accuracy, where the error is measured in L^p. We give precise bounds on the required size (in terms of the number of weights) and depth of the network, depending on the approximation accuracy, on the smoothness parameters of the given function, and on the dimension of its domain of definition. Finally, we show that this size of the networks is optimal, and that networks of smaller depth would need significantly more weights than the deep networks that we construct, in order to achieve the desired approximation accuracy.
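
A classical toy instance of the depth-vs-width phenomenon mentioned above (due to Telgarsky, and used in Yarotsky's approximation results) can be checked directly: composing a small ReLU "hat" network with itself produces sawtooth functions whose number of teeth grows exponentially in depth, while shallow ReLU networks need a number of units proportional to the number of linear pieces. This is a standard illustration, not the construction from the talk.

```python
import numpy as np

relu = lambda t: np.maximum(t, 0.0)

def hat(x):
    # the "hat" function as a one-hidden-layer ReLU network (3 units):
    # rises from 0 to 1 on [0, 1/2], falls back to 0 on [1/2, 1]
    return 2.0 * relu(x) - 4.0 * relu(x - 0.5) + 2.0 * relu(x - 1.0)

def sawtooth(x, depth):
    # composing the hat k times gives a sawtooth with 2^(k-1) teeth,
    # i.e. a ReLU network of depth k with exponentially many pieces
    for _ in range(depth):
        x = hat(x)
    return x

x = np.linspace(0.0, 1.0, 5)   # 0, 0.25, 0.5, 0.75, 1
print(sawtooth(x, 2))          # two teeth: 0, 1, 0, 1, 0
```

A depth-2 composition already oscillates twice on [0, 1]; matching `sawtooth(x, k)` with a single hidden layer requires on the order of 2^k ReLU units.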

Last passage times in discontinuous environments
2.01.10 (Parkring 11, 85748 Garching-Hochbrück)

We study a last passage percolation model on the two-dimensional lattice, where the environment is a field of independent random exponential weights with different parameters. Each weight is associated with a lattice vertex, and its parameter is selected according to a discretisation of a lower semi-continuous parameter function that may admit discontinuities on a set of curves. We prove a law of large numbers for the sequence of last passage times, defined as the maximal sum of weights which a directed path can collect from (0, 0) to a target point (Nx, Ny), as N tends to infinity and the mesh of the discretisation of the parameter function tends to 0 as 1/N. The LLN is cast in the form of a variational formula, optimised over a given set of macroscopic paths. Properties of maximizers to the variational formula are investigated in two models where the parameter function allows for analytical tractability. This is joint work with Federico Ciech.
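
For finite N, the last passage time itself is computable by dynamic programming. The sketch below uses a hypothetical environment whose exponential rate jumps across the anti-diagonal, as a crude example of a discontinuous parameter function; it is not one of the two tractable models of the talk.

```python
import numpy as np

# Last passage time via dynamic programming: G(i,j) is the maximal
# total weight a directed up-right path from (0,0) to (i,j) collects,
#     G(i,j) = w(i,j) + max(G(i-1,j), G(i,j-1)).
def last_passage(w):
    n, m = w.shape
    G = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            best = 0.0
            if i > 0:
                best = G[i - 1, j]
            if j > 0:
                best = max(best, G[i, j - 1])
            G[i, j] = w[i, j] + best
    return G

# discontinuous environment: exponential weights whose rate jumps
# across the anti-diagonal i + j = n
rng = np.random.default_rng(5)
n = 100
rates = np.where(np.add.outer(np.arange(n), np.arange(n)) < n, 1.0, 2.0)
w = rng.exponential(1.0 / rates)        # mean of Exp(rate) is 1/rate
G = last_passage(w)
```

The LLN of the talk describes the limit of `G[-1, -1] / N` along targets (Nx, Ny), and the variational formula optimises the corresponding macroscopic path through the two rate regions.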

TBA
MI 03.10.011 (Boltzmannstr. 3, 85748 Garching)

TBA