We develop constructive algorithms to represent functions defined on a metric measure space within a prescribed accuracy. The constructions can be based on either spectral information or scattered samples of the target function. Our algorithmic scheme is asymptotically optimal in the sense of nonlinear n-widths and asymptotically optimal up to a logarithmic factor with respect to the metric entropy. The talk is based on joint work with Martin Ehler, University of Vienna.
How regular does a solution to the (incompressible or compressible) Euler system need to be in order to conserve energy? In the incompressible context, this question is the subject of Onsager's famous conjecture from 1949. We will review the elegant proof of energy conservation for the incompressible system in Besov spaces with exponent greater than 1/3 by Constantin-E-Titi, and explain how their arguments can be refined to handle the isentropic compressible Euler equations. We will also see that these methods are insufficient for so-called statistical solutions, and explain how to handle them instead. Joint work with E. Feireisl, P. Gwiazda, A. Świerczewska-Gwiazda, and with U. S. Fjordholm.
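For orientation, the incompressible criterion alluded to above can be stated as follows (one common formulation, not necessarily the exact one used in the talk):

```latex
% Constantin--E--Titi criterion (one common formulation)
u \in L^3\big((0,T);\,B^{\alpha}_{3,\infty}\big),\quad \alpha > \tfrac13
\;\Longrightarrow\;
\frac12 \int |u(x,t)|^2 \,\mathrm{d}x \;=\; \frac12 \int |u(x,0)|^2 \,\mathrm{d}x
\quad \text{for a.e. } t \in (0,T).
```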
We investigate the energy landscape for plastically deformed metals after annealing. The model for the energy has two terms: a finite core energy, where the core radius is assumed to be proportional to the lattice constant, and an elastic energy, which is proportional to the squared distance of the strain to the group of rotations. Assuming the curl to be quantized (a lattice vector), we prove in two dimensions a lower bound proportional to the Read-Shockley formula for small-angle grain boundaries. In particular, this estimate implies that small-energy configurations are essentially piecewise constant rotations in terms of the strain. (Joint work with G. Lauteri.)
The liquid drop model is an isoperimetric problem with an additional nonlocal term. It originally appeared in the nuclear physics literature in 1930 and has recently received much attention in the calculus of variations. We discuss new results and open questions, and we show how the insights gained into this problem have allowed us to prove the ionization conjecture in a density functional model.
When discussing the nature of nonnegative solutions of SDEs, making the distinction between a strict local martingale and a true martingale can be very important. The papers of Delbaen, Shirakawa, Mijatovic, Urusov, Lions, Musiela, Andersen, Piterbarg, Bernard, Cui, and finally McLeish have studied the case of a one dimensional SDE, with or without stochastic volatility. We present two concepts: how a solution of an SDE which is a martingale can become a strict local martingale by the addition of new information to the underlying filtration, and how various components of a vector of SDEs can be strict local martingales for some components of the system, and martingales for others. This is based on joint work with Philip Protter, Professor at Columbia University.
In financial research and among risk management practitioners, the analysis of multiple time series is often conducted in a non-linear context. In addition, capturing the conditional quantile dependence structure can be of interest as a measure of financial contagion risk. We propose a three-stage copula-based estimation method for the non-linear quantile dependence analysis of time-series vectors. The method analyses the serial and cross-sectional dependence of time series at specified quantiles while reducing the computational complexity. To the best of our knowledge, this is the first approach that combines the conditional quantile dependence analysis of multiple time series with non-linear modelling by means of copula functions. Finally, we examine the conditional quantile behaviour of financial time series with a non-linear copula quantile VAR model.
The talk is based on joint work with Giovanni De Luca.
A wide class of Gaussian processes, including fractional Brownian motion, can be represented as linear functions of an infinite-dimensional affine process. This opens the door to analyzing such processes using tools from Markov processes and SPDEs. Moreover, the affine structure makes computations tractable, and the representation lends itself to numerical implementation. We will look into some of this theory and its applications in mathematical finance.
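As background, one classical linear representation of fractional Brownian motion is the Mandelbrot-Van Ness form below; the affine representation discussed in the talk is a different, Markovian one:

```latex
% Mandelbrot--Van Ness representation of fBm with Hurst index H
B^H_t \;=\; \frac{1}{\Gamma(H+\tfrac12)} \int_{-\infty}^{t}
\Big( (t-s)^{H-\frac12} - (-s)_+^{H-\frac12} \Big)\,\mathrm{d}W_s .
```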
Continuous-time moving average processes, defined as integrals of a deterministic kernel function with respect to a two-sided Lévy process, provide a unifying framework for different types of processes, including the popular examples of fractional Brownian motion and fractional Lévy processes on the one side and Ornstein-Uhlenbeck processes on the other side. The whole class of processes especially allows for a combination of a given correlation structure with an infinitely divisible marginal distribution, as is desirable for applications in finance, physics and hydrology. So far, inference for these processes has mainly been concerned with estimating parameters entering the kernel function, which is responsible for the correlation structure. We now consider the estimation problem for the driving Lévy process. We will provide two methods working under different sets of conditions, one based on a suitable integral transform, the other on the Mellin transform.
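In symbols, the processes in question take the following form (standard notation, with two familiar kernel choices):

```latex
% continuous-time moving average driven by a two-sided Lévy process L
X_t \;=\; \int_{-\infty}^{\infty} f(t-s)\,\mathrm{d}L_s, \qquad t \in \mathbb{R};
% examples: Ornstein--Uhlenbeck kernel and fractional kernel
f(x) = e^{-\lambda x}\,\mathbf{1}_{[0,\infty)}(x),
\qquad
f(x) \;\propto\; x_+^{H-\frac12} - (-x)_+^{H-\frac12}.
```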
Dates, times, and rooms in detail at: https://igdk1754.ma.tum.de/IGDK1754/CCAmbrosio2017
In the lectures I will cover the theory of well-posedness for ODEs associated to nonsmooth vector fields, initiated by a seminal paper of DiPerna-Lions at the end of the 1980s. This problem arises in many areas of mathematical physics, where the coupling between velocity and density (as for the semi-geostrophic equation, the Vlasov-Poisson equation, etc.) rules out the possibility of applying the standard Cauchy-Lipschitz theory. I will cover particular classes of vector fields, such as the Sobolev and BV ones, and I will also illustrate the quantitative side of this theory, mainly developed by G. Crippa and C. De Lellis. If time permits, I will also cover more recent developments of the theory in metric measure spaces, studied in a joint work with D. Trevisan.
The flow of motorized vehicles through urban road networks is known to be one of the main causes of high pollution in metropolitan areas. So far, little scientific research has addressed the effects of coordinated traffic lights on emissions. In our approach to simulating traffic flow through a network of roads, we resort to a well-posed macroscopic conservation law coupled with a one-dimensional pollution model. Model Predictive Control (MPC) is used as a responsive optimization technique to manage the movement of cars close to junctions, mirroring the use of traffic signals. On the basis of an exemplary road network in Munich, we show that optimizing traffic dynamics in this manner can reduce CO emissions by 5-10%.
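The macroscopic traffic model can be illustrated by a minimal sketch. This assumes, for concreteness, the classical LWR conservation law with a Greenshields flux on a single periodic road; the network structure, pollution coupling, and MPC layer of the actual work are not reproduced here:

```python
# Minimal sketch (assumed model, not the authors'): a Godunov scheme for the
# LWR traffic conservation law  rho_t + (rho*(1 - rho))_x = 0  on a single
# periodic road segment; the density rho is normalized to [0, 1].

def flux(rho):
    """Greenshields flux f(rho) = rho*(1 - rho), maximal at rho = 1/2."""
    return rho * (1.0 - rho)

def godunov_flux(rl, rr):
    """Godunov numerical flux for the concave Greenshields flux."""
    if rl <= rr:                      # shock-type data: minimize over [rl, rr]
        return min(flux(rl), flux(rr))
    if rl > 0.5 > rr:                 # rarefaction through the sonic point 1/2
        return flux(0.5)
    return max(flux(rl), flux(rr))    # rarefaction away from the sonic point

def step(rho, dt, dx):
    """One explicit Godunov update with periodic boundary conditions."""
    n = len(rho)
    f = [godunov_flux(rho[i - 1], rho[i]) for i in range(n)]  # interface fluxes
    return [rho[i] - dt / dx * (f[(i + 1) % n] - f[i]) for i in range(n)]

# Usage: a traffic jam relaxing over time; the CFL condition dt <= dx holds.
dx, dt = 0.02, 0.01
rho = [0.8 if 20 <= i < 30 else 0.2 for i in range(50)]
for _ in range(100):
    rho = step(rho, dt, dx)
```

The scheme conserves the total number of cars exactly (up to rounding) and, being monotone under the CFL condition, keeps the density within [0, 1].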
Over the span of tens of thousands of years, humans have created an elaborate body of theory unequaled in size and complexity: mathematics.
There is a profound philosophical question: Where do all the results of this gigantic body of theory come from? Are they already present in some hidden, possibly metaphysical, location and then discovered by inquisitive minds, or are they created in the same way that engineers design various machines for energy conversion, production of goods, or transportation?
Over hundreds of years, this question has been answered in various, often diametrically opposed, ways. We examine this question using the approach of the philosopher Ludwig Wittgenstein for the resolution of philosophical problems.
With the help of modern brain science, we also look into the strange aspect that eminent researchers arrived at and vigorously defended diametrically opposed answers. Indeed, that amazing process is ongoing today.
The talk assumes no prior knowledge in mathematics or philosophy.
In this talk I will discuss a result of delocalization for the Anderson model on the regular tree (Bethe lattice). The Anderson model is a random Schrödinger operator, where we add a random i.i.d. perturbation to the adjacency matrix. Localization at high disorder is well understood today for a wide variety of models, both in the sense of a.s. pure point spectrum with exponentially decaying eigenfunctions, and in a dynamical sense. Delocalization remains a great challenge. For tree models, it is known that for weak disorder, large parts of the spectrum are a.s. purely absolutely continuous, and the dynamical transport is ballistic. In this work, we try to complete the picture by proving that in such AC regime, the eigenfunctions are also delocalized in space, in the sense that if we consider a sequence of regular graphs converging to the regular tree, then the eigenfunctions become asymptotically uniformly distributed. The precise result is a quantum ergodicity theorem, which holds in a much more general framework. This is a joint work with Nalini Anantharaman.
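Concretely, on the (q+1)-regular tree the operator in question reads (in one standard normalization):

```latex
% Anderson model on the (q+1)-regular tree: adjacency plus i.i.d. potential
(H_\lambda \psi)(v) \;=\; \sum_{w \sim v} \psi(w) \;+\; \lambda\,\omega_v\,\psi(v),
\qquad (\omega_v)_v \ \text{i.i.d.}, \quad \lambda > 0 .
```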
In this talk two TUM alumni will highlight their journey from founding their company *Algoriddim* during college to creating their award-winning app “djay”, one of the world’s most popular music apps with over 30 million downloads. It’s a journey that shows how passion for music combined with a solid foundation in mathematics and computer science can create a lasting impact on people around the world and the entire app industry.
Karim Morsy is CEO and co-founder of Algoriddim, a world-leading software company for music and video apps. Karim co-founded Algoriddim in 2006 together with two classmates while he was still enrolled as a student at TU Munich. His entrepreneurial impetus is accompanied by a strong academic foundation in computer science, music, and psychology. He received his diploma from TU Munich in 2010. Prior to founding Algoriddim, Karim worked at Apple Inc. developing key components of iMovie during the advent of the mobile revolution in 2006. He is an avid pianist, composer, and DJ with over 15 years of professional experience. As a speaker and artist Karim was featured in Apple’s 2016 keynote introducing the next generation in music software.
Karim will be joined by Vlad Popa, senior software engineer at Algoriddim. Vlad received his Master’s degree in computer science and mathematics from TU Munich in 2014. He has a strong algorithmic and mathematical background. During his studies he was a key member of the TU Munich ICPC team winning numerous national and international awards. He published and presented at the renowned Formal Methods conference for Industrial Critical Systems and is now part of Algoriddim’s core R&D team working on cutting-edge signal processing algorithms.
About Algoriddim: Algoriddim, founded in 2006, creates world-class music and video applications for desktop and mobile devices. Algoriddim’s flagship product “djay" is available for Mac, PC, Chromebook, iPad, iPhone, Apple Watch, and Android devices. It has received over 30 million downloads and won two Apple Design Awards. Algoriddim has revolutionized the DJ workflow by partnering with Spotify to give djay users instant access to millions of songs and provide cloud-based music recommendations based on what the DJ is currently playing. Used by beginners, enthusiasts and professional artists around the globe, djay is supported by a rich hardware ecosystem from industry-leading manufacturers, enabling users to connect entry-level to high-end DJ gear to the app. Algoriddim has licensed core components of its advanced audio and video technology to world-leading tech companies such as Twitter Inc. Algoriddim has also co-branded with renowned brands including PRODUCT(RED), Apple Music Festival, Spotify, David Guetta, Microsoft, and Philips.
We consider stochastic evolution equations with nonlinear boundary conditions, driven by an infinite-dimensional fractional Brownian motion in Banach spaces. A suitable transformation allows us to reduce the stochastic equation to a pathwise problem, from which we derive a random dynamical system. We investigate its long-time behavior and prove the existence of a random attractor.
Optimization subject to PDE constraints is crucial in many applications, ranging from image processing to the life sciences. Numerical analysis has contributed a great deal to the efficient solution of these problems, and our focus in this talk will be on the solution of the large-scale linear systems that represent the first-order optimality conditions or are found at the heart of a nonlinear optimization method.
We illustrate that these systems, while being of very large scale, usually contain a lot of mathematical structure. In particular, we focus on low-rank methods that utilize the Kronecker product structure of the system matrices. These methods allow the solution of a time-dependent problem with the storage requirements of a small multiple of the steady problem. We then illustrate that this low-rank technique extends to problems in uncertainty quantification and allows the solution of otherwise intractable problems.
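A minimal illustration of the structure being exploited (my notation, not necessarily the speaker's): the vec-Kronecker identity lets one keep the space-time unknown as a low-rank matrix instead of one huge vector, so a time-dependent system in schematic all-at-once form can be stored at roughly the cost of a few steady problems.

```latex
% key identity behind low-rank (Kronecker) methods
\operatorname{vec}(A X B^{\top}) \;=\; (B \otimes A)\,\operatorname{vec}(X);
% schematic all-at-once system for a time-dependent problem
(I_T \otimes K \;+\; C_T \otimes M)\,\operatorname{vec}(X) \;=\; \operatorname{vec}(B).
```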
see https://www.ma.tum.de/Mathematik/FakultaetsKolloquium#AbstractStoll
The matching problem consists in finding the optimal coupling between a random distribution of N points in a d-dimensional domain and another (possibly random) distribution. In this random optimal transport problem, there is a large literature on the asymptotic behaviour, as N tends to infinity, of the expectation of the minimum cost; the results depend on the dimension d and on the choice of cost. In a recent work, Caracciolo, Lucibello, Parisi and Sicuro proposed an ansatz for the expansion in N of this expectation. I will illustrate how a combination of semigroup smoothing techniques and Dacorogna-Moser interpolation provides first rigorous results for this ansatz.
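The quantity whose expectation is studied can be illustrated by a brute-force computation (a toy sketch of mine, feasible only for tiny N, here in one dimension with quadratic cost; all names are hypothetical):

```python
# Toy illustration (my own sketch): the optimal matching cost between N random
# points and N reference points, computed by brute force over all permutations.
# Feasible only for tiny N; here in one dimension with quadratic cost p = 2.
import itertools
import random

def matching_cost(xs, ys, p=2):
    """Minimal sum of |x - y|**p over all perfect matchings of xs to ys."""
    n = len(xs)
    return min(
        sum(abs(xs[i] - ys[pi[i]]) ** p for i in range(n))
        for pi in itertools.permutations(range(n))
    )

# Usage: average cost of matching N uniform points on [0, 1] to a uniform grid.
random.seed(0)
n, trials = 5, 200
grid = [(i + 0.5) / n for i in range(n)]
avg = sum(matching_cost([random.random() for _ in range(n)], grid)
          for _ in range(trials)) / trials
```

The asymptotic results discussed in the talk concern the behaviour of exactly such averages as N grows, where brute force is of course no longer an option.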
Joint work with Federico Stra and Dario Trevisan, arXiv:1611.04960
I shall describe Thouless' original finding that the Hall conductance is related to a Chern number, and some of the beautiful mathematical physics that grew from it. No background in condensed matter physics or topology will be assumed. The talk will be elementary.
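The relation in question is the TKNN formula: the Hall conductance is quantized as an integer multiple of e²/h, the integer being the Chern number of the occupied Bloch bands:

```latex
% TKNN formula: quantized Hall conductance as a Chern number
\sigma_{xy} \;=\; \frac{e^2}{h}\,C, \qquad
C \;=\; \frac{1}{2\pi} \int_{\mathrm{BZ}} F_{xy}(k)\,\mathrm{d}^2k \;\in\; \mathbb{Z},
```

where F_{xy} is the Berry curvature over the Brillouin zone.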
We consider an energy minimization problem for N points constrained to stay in a compact set, under repulsive pairwise interactions. For long-range interactions of this type, including the Coulomb case, I will describe the asymptotics of the energy as N grows. In this case it is possible to obtain a precise splitting of the energy into macroscale and microscale contributions. We find that the next term beyond the mean-field limit in our asymptotics is governed by a renormalized energy on microscale blow-up configurations. This allows us to formulate a rigorous version of the Abrikosov conjecture on microscale crystallization. I will also present several intriguing open questions regarding next-order asymptotics.
The talk will be based, among others, on joint papers with S. Serfaty, S. Rota-Nodari and L. Betermin.
In low-dimensional Tonelli (i.e. convex) Hamiltonian dynamics, an important role is played by action-minimizing periodic orbits. An instance of this can be seen in the remarkable result of Bangert, showing that the existence of a length-minimizing closed geodesic (a so-called “waist”) on a closed surface forces the existence of infinitely many more closed geodesics. In contrast with the case of Riemannian (or Finsler) surfaces, which need not have waists in general, Tonelli Hamiltonian systems over closed surfaces always do, provided that the energy is small enough. In this talk we explain this existence result in more detail by putting it into the context of Mañé’s critical values, and we briefly discuss the “dynamical” consequences it bears.
The aim of this presentation is to introduce a framework for the asymptotic enumeration of graph classes with many components. By "many" we mean that the number of components grows linearly in the number of nodes. Firstly, existing results from the literature covering the asymptotic enumeration of (connected) block-stable graph classes are presented. To this end, exponential generating functions and the symbolic method are needed in order to translate combinatorial problems into analytic ones. The second half of the presentation is devoted to random sampling by Boltzmann samplers, which leads to the exact asymptotic behaviour of the number of graphs with certain properties, taking into consideration the number of components. More precisely, Boltzmann samplers allow for a transition into the field of probability theory by analysing sums of i.i.d. integer-valued random variables.
Prony's problem - estimating the frequencies of an exponential sum - and its higher-dimensional analogs have attracted a lot of attention in recent years. A somewhat neglected question is whether this problem is well-posed. In this talk, some results in this direction will be presented. The most important techniques we need are efficient estimates of certain exponential sums. Incidentally, they can be used to improve classic estimates of the condition numbers of matrices arising when one interpolates with a positive definite kernel. If time permits, we will discuss this connection. This talk is based on joint work with Armin Iske.
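For reference, the d-dimensional form of the problem asks to recover the parameters of

```latex
% Prony's problem: recover frequencies and coefficients of an exponential sum
f(x) \;=\; \sum_{j=1}^{M} c_j\, e^{\mathrm{i}\langle \omega_j, x\rangle},
\qquad \omega_j \in \mathbb{R}^d,\ c_j \in \mathbb{C}\setminus\{0\},
```

from finitely many samples of f; well-posedness then asks how stably the pairs (ω_j, c_j) depend on those samples.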
We define a generalized finite element method for the discretization of elliptic partial differential equations in heterogeneous media. In [L. Grasedyck, I. Greff, and S. Sauter, 2012] a method has been introduced to set up an adaptive local finite element basis (AL basis) on a coarse mesh with mesh size $H$ which, typically, does not resolve the coefficient matrix of the medium, while the textbook finite element convergence rates are preserved. This method requires $O(\log(\frac{1}{H})^{d+1})$ basis functions per mesh point, where $d$ denotes the spatial dimension of the computational domain. Since the continuous differential operator is involved in the construction, the method presented in [L. Grasedyck, I. Greff, and S. Sauter, 2012] is only semidiscrete. In this talk we present a fully discrete version of the method, where the AL basis is constructed by solving finite-dimensional localized problems. A key tool for the discretization of the differential operator is the theory developed in [D. Peterseim and S. Sauter, 2012]. We will prove that the localized method converges linearly with respect to the energy norm. Important tools for the error analysis are Caccioppoli's inequality and the construction of a local cutoff function in an annular domain. This construction is based on some new results concerning the $W^{1,p}$-regularity of the Poisson problem with complicated coefficients. Bounds for the gradient of the solution in the $L^p$-norm are derived and it is shown that they only depend on the size of the jumps in the coefficients.
This talk concerns the approximation of bivariate functions by using the well-established filtered back projection (FBP) formula from computerized tomography, which allows us to reconstruct a bivariate function from given Radon data. Our aim is to analyse the inherent FBP approximation error which is incurred by the application of a low-pass filter. To this end, we present error estimates in Sobolev spaces of fractional order. The obtained error bounds depend on the bandwidth of the utilized filter, on the flatness of the filter’s window function at the origin, on the smoothness of the target function, and on the order of the considered Sobolev norm. Finally, we prove convergence for the approximate FBP reconstruction in the treated Sobolev norms along with asymptotic convergence rates, as the filter’s bandwidth goes to infinity. The theoretical results are supported by numerical experiments. This talk is based on joint work with Armin Iske.
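In one common notation (which may differ in detail from the talk's), the filtered reconstruction reads

```latex
% approximate FBP reconstruction with a low-pass filter of bandwidth L
f \;\approx\; f_L \;=\; \tfrac12\,\mathcal{R}^{\#}\big( F_L * \mathcal{R}f \big),
\qquad \hat{F}_L(S) \;=\; |S|\, W(S/L),
```

where R is the Radon transform, R^# the back projection, L the bandwidth, and W an even, compactly supported window; the inherent approximation error is that of f - f_L.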
Mapper is probably the most widely used TDA (Topological Data Analysis) tool in the applied sciences and industry. Its main application is in exploratory analysis, where it provides novel data representations that allow for a higher-level understanding of the geometric structures underlying the data. The output of Mapper takes the form of a graph, whose vertices represent homogeneous subpopulations of the data, and whose edges represent certain types of proximity relations. Nevertheless, the inherent instability of the output and the difficult parameter tuning make the method rather difficult to use in practice. This talk will focus on the study of the structural properties of the graphs produced by Mapper, together with their partial stability properties, with a view towards the design of new tools to help users set up the parameters and interpret the outputs.
Eigenstate order extends the concept of phases of quantum matter beyond the conventional equilibrium paradigm and is central for inherently dynamical phenomena such as many-body localization or quantum time crystals. While eigenstate order is not visible in thermodynamic ensembles, it is rather imprinted in the properties of single eigenstates. In this talk I discuss how it is nevertheless possible to construct dynamical potentials that capture the macroscopic properties of eigenstate phases and share many formal analogies with conventional thermodynamic potentials, such as Gibbs-Duhem and Maxwell relations. The presented formalism opens up a route towards a macroscopic and phenomenological description of eigenstate phases and potentially also of the respective transitions.
In this talk I will give a short introduction to gradient flows in abstract metric spaces and their constructive existence theory. We consider a second-order semi-discretization in time, namely the backward differentiation formula 2 (BDF2) method, and investigate the analogy to the implicit Euler scheme. A key feature of these algorithmic schemes is their variational formulation. Furthermore, these methods automatically preserve some properties of the gradient flow, such as step-size-independent bounds on the potential and kinetic energy of the discrete solution. The talk is based on joint work with Jonathan Zinsl and Daniel Matthes.
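In the Euclidean case, the two variational schemes being compared can be sketched as follows (my normalization, not necessarily the authors'; the metric version replaces squared norms by squared distances):

```latex
% implicit Euler (minimizing movements)
x_{n+1} \in \operatorname*{argmin}_{x} \Big\{ \tfrac{1}{2\tau}\,|x - x_n|^2 + E(x) \Big\};
% BDF2 variant and its optimality condition
x_{n+1} \in \operatorname*{argmin}_{x}
\Big\{ \tfrac{3}{4\tau}\,\Big|x - \tfrac{4x_n - x_{n-1}}{3}\Big|^2 + E(x) \Big\}
\;\Longleftrightarrow\;
\frac{3x_{n+1} - 4x_n + x_{n-1}}{2\tau} \;=\; -\nabla E(x_{n+1}).
```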
Recent works on the structure of social, biological and internet networks have attracted much attention to random graphs G(D) chosen uniformly at random among all graphs with a fixed degree sequence D = (d_1,...,d_n), where vertex i has degree d_i. On this topic, a big step forward is the result of Joos, Perarnau, Rautenbach and Reed (1). It determines whether such a random simple graph G(D) has a giant component by imposing only one condition: the sum of all degrees different from 2 must tend to infinity with n. Furthermore, if this condition fails, they show that both the probability that G(D) has a giant component and the probability that it has none lie between p and 1-p, for a positive constant p. In this thesis we present their work, walking through the main theorems and the generalization of the previous results, and adding some missing calculations and intermediate steps in order to elucidate it completely. Furthermore, we offer some examples and direct applications of these new criteria. Finally, we attach implementations and graphical illustrations of almost all the treated cases.
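As a quick empirical companion (a toy experiment of my own, not taken from the thesis), one can sample a multigraph with a prescribed degree sequence via the configuration model and measure its largest connected component:

```python
# Toy experiment (not from the thesis): sample a multigraph with a given
# degree sequence via the configuration model, then measure the largest
# connected component by breadth-first search.
import random
from collections import deque

def configuration_model(degrees, rng):
    """Pair up half-edges ('stubs') uniformly at random; may create loops/multi-edges."""
    stubs = [v for v, d in enumerate(degrees) for _ in range(d)]
    rng.shuffle(stubs)
    adj = [[] for _ in degrees]
    for u, v in zip(stubs[::2], stubs[1::2]):
        adj[u].append(v)
        adj[v].append(u)
    return adj

def largest_component(adj):
    """Size of the largest connected component, via BFS."""
    seen, best = [False] * len(adj), 0
    for s in range(len(adj)):
        if seen[s]:
            continue
        seen[s], size, queue = True, 1, deque([s])
        while queue:
            for w in adj[queue.popleft()]:
                if not seen[w]:
                    seen[w] = True
                    size += 1
                    queue.append(w)
        best = max(best, size)
    return best

# Usage: a 3-regular degree sequence on n nodes typically yields a giant component.
rng = random.Random(1)
n = 2000
adj = configuration_model([3] * n, rng)
frac = largest_component(adj) / n
```

Since all degrees differ from 2, the sum of such degrees trivially grows with n, and the criterion predicts a giant component; the sampled fraction is indeed close to 1.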
In this talk, we will discuss random models with varying degrees of imposed structure for different applications in signal processing and data analysis.
First, we will study a matrix factorization problem as motivated by applications in bioinformatics. To establish uniqueness under a random model, we develop new tools in probabilistic combinatorics.
Second, motivated by applications in wireless communication, we consider the problem of simultaneous demixing and deconvolution for randomly embedded signals. We improve upon recent results by Ling and Strohmer, establishing for the first time near-optimal parameter dependence.
Lastly, we show near-optimal recovery guarantees for analog-to-digital conversion in combination with compressed sensing for structured random measurement systems. These are joint works with the speaker’s PhD students David James, Dominik Stöger, and Joe-Mei Feng as well as with Matthias Hein (Universität des Saarlandes), Peter Jung (TU Berlin), and Rayan Saab (UC San Diego).
In this talk I will present a variational approach to the free boundary problem of fluid motion over a planar surface, review analytical results, and present a novel algorithmic approach for contact line motion.