Photoelasticity describes how a material's optical properties change under mechanical deformation. In recent work, I have shown that two materials with zero photoelasticity can be combined into a composite that is strongly photoelastic. Such behaviour has no counterpart in the optical, acoustic, or thermal properties of composites, whose effective values are given by a weighted average of the constituent values. I will examine the role of this unexpected photoelastic contribution in a selection of composite geometries, describing the effect in closed form. I will also discuss implications for the homogenisation, opto-mechanics, and optics communities.
To describe the dynamics of oscillating wave packets in complicated dispersive evolutionary systems, the Nonlinear Schrödinger (NLS) equation can be formally derived as an approximation equation for the dynamics of the envelopes. To understand to what extent this approximation yields correct predictions of the qualitative behavior of the original systems, it is important to justify the validity of the NLS approximation by estimates of the approximation errors on the physically relevant length and time scales. If the original systems are quasilinear, the justification of the NLS approximation is a highly nontrivial problem. In this talk, we give an overview of the NLS approximation and its applications, for example, to the modeling of water waves, light pulses, or spin waves, and prove the validity of the NLS approximation for typical examples of quasilinear dispersive systems.
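For orientation, the envelope approximation in its textbook form can be stated as follows (the symbols and scaling below are the standard ones, not necessarily those used in the talk): a solution u of the original dispersive system is approximated by a slowly modulated carrier wave with small amplitude parameter ε, group velocity c_g, and carrier wavenumber/frequency (k_0, ω_0), and the envelope A formally satisfies a cubic NLS equation.

```latex
u(x,t) \approx \varepsilon\, A\big(\varepsilon(x - c_g t),\, \varepsilon^2 t\big)\,
e^{i(k_0 x - \omega_0 t)} + \mathrm{c.c.},
\qquad
i\,\partial_T A = \nu_1\, \partial_X^2 A + \nu_2\, A\,|A|^2,
```

where the coefficients ν₁, ν₂ are determined by the dispersion relation and the nonlinearity of the original system.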
Simulation-based strategies bring the machine learning toolbox to the numerical resolution of stochastic control models. I will begin by reviewing the history of this idea, starting with the seminal work of Longstaff and Schwartz and continuing through the popular Regression Monte Carlo framework. I will then describe the Dynamic Emulation Algorithm (DEA) that we developed, which unifies the existing approaches in a single modular template and emphasizes the two central aspects of regression architecture and experimental design. Among novel DEA implementations, I will discuss Gaussian process regression, as well as numerous simulation designs (space-filling, sequential, adaptive, batched). The overall DEA template is illustrated with multiple examples drawn from Bermudan option pricing, natural gas storage valuation, and optimal control of a back-up generator in a power microgrid. This is partly joint work with Aditya Maheshwari (UCSB).
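As a concrete instance of the Regression Monte Carlo idea, here is a minimal Longstaff-Schwartz sketch for a Bermudan put under geometric Brownian motion. All parameter values and the quadratic regression basis are illustrative assumptions, not taken from the talk.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameters: Bermudan put, 50 exercise dates over one year
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 20000
dt = T / n_steps

# Simulate geometric Brownian motion paths
Z = rng.standard_normal((n_paths, n_steps))
increments = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * Z
S = S0 * np.exp(np.cumsum(increments, axis=1))
S = np.hstack([np.full((n_paths, 1), S0), S])

payoff = lambda s: np.maximum(K - s, 0.0)

# Backward induction: regress the continuation value on a polynomial
# basis over in-the-money paths, and exercise where immediate payoff wins
V = payoff(S[:, -1])
for t in range(n_steps - 1, 0, -1):
    V *= np.exp(-r * dt)
    itm = payoff(S[:, t]) > 0
    if itm.any():
        coeffs = np.polyfit(S[itm, t], V[itm], 2)
        cont = np.polyval(coeffs, S[itm, t])
        exercise = payoff(S[itm, t]) > cont
        V[itm] = np.where(exercise, payoff(S[itm, t]), V[itm])

price = np.exp(-r * dt) * V.mean()
```

The DEA template discussed in the talk generalizes exactly the two choices made here: the regression architecture (here a global quadratic fit) and the experimental design (here plain i.i.d. path simulation).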
Classical methods for the spectral analysis of time series account for covariance-related serial dependencies. This talk will begin with a brief introduction to these traditional procedures. Then an alternative method is presented, where, instead of covariances, the differences between the copulas of pairs of observations and the independence copula are used to quantify serial dependencies. The Fourier transform of these copulas is considered and used to define quantile-based spectral quantities. These quantities make it possible to separate marginal and serial aspects of a time series and intrinsically provide more information about the conditional distribution than the classical location-scale model. Thus, quantile-based spectral analysis is more informative than traditional spectral analysis based on covariances. For an observed time series, the new spectral quantities are then estimated. The asymptotic properties of the estimator (a function of two quantile levels), including the order of the bias and process convergence, are established. The results are applicable without restrictive distributional assumptions such as the existence of finite moments; only a weak form of mixing, such as alpha-mixing, is required.
We address the curse of dimensionality in dynamic covariance estimation by modeling the underlying co-volatility dynamics of a time series vector through latent time-varying stochastic factors. The use of a global-local shrinkage prior for the elements of the factor loadings matrix pulls loadings on superfluous factors towards zero. To demonstrate the merits of the proposed framework, the model is applied to simulated data as well as to daily log-returns of 300 S&P 500 members. Our approach yields precise correlation estimates, strong implied minimum variance portfolio performance and superior forecasting accuracy in terms of log predictive scores when compared to typical benchmarks. Furthermore, we discuss the applicability of the method to capture conditional heteroskedasticity in large vector autoregressions.
We prove the existence of global-in-time weak solutions to reaction-cross-diffusion systems for an arbitrary number of competing population species. These equations can be derived from an on-lattice random-walk model with general transition rates. In the case of linear transition rates, the model extends the two-species population model of Shigesada, Kawasaki, and Teramoto. The equations are considered in a bounded domain with homogeneous Neumann boundary conditions. Our existence result is based on a refined entropy method and a new approximation scheme. Global existence follows under a detailed-balance or weak cross-diffusion condition, where detailed balance is related to the symmetry of the mobility matrix, mirroring Onsager’s principle in thermodynamics. We show that under detailed balance (and without reaction) the entropy is nonincreasing in time, but counterexamples suggest that the entropy may increase initially if detailed balance does not hold. This is joint work with X. Chen and A. Juengel.
With the advent of ever-improving information technologies, science and engineering are evermore guided by data-driven models and large-scale computations. In this setting, one is often forced to work with models whose nonlinearities are not derived from first principles and for which quantitative values of parameters are not known. With this in mind, I will describe an alternative approach, formulated in the language of combinatorics and algebraic topology, that is inherently multiscale, amenable to mathematically rigorous results based on discrete descriptions of dynamics, computable, and capable of recovering robust dynamic structures. To keep the talk grounded, I will discuss the ideas in the context of modeling gene regulatory networks.
The gradient method is probably one of the oldest optimization algorithms, going back as early as 1847 to the initial work of Cauchy. Surprisingly, it is still the basis for many of today's most relevant algorithms, which are capable of solving very large-scale problems arising from diverse fields of application such as image processing and data science. This talk will explore the evolution of the method from the 19th century to the present day.
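The method in its original (Cauchy) form fits in a few lines; the sketch below, with illustrative step size and test function of my choosing, is the scheme x_{k+1} = x_k − α ∇f(x_k) that all of the later variants build on.

```python
import numpy as np

def gradient_descent(grad, x0, lr=0.1, n_iter=100):
    """Plain gradient method: x_{k+1} = x_k - lr * grad(x_k)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(n_iter):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3); minimizer is x = 3
x_star = gradient_descent(lambda x: 2 * (x - 3), x0=0.0, lr=0.1, n_iter=200)
```

With a fixed step size and a smooth strongly convex objective, the iterates contract linearly toward the minimizer; the modern large-scale variants discussed in the talk modify the step-size rule, add momentum, or subsample the gradient.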
The P-vs-NP problem is one of the most famous open questions in mathematics and theoretical computer science. The media regularly report on proof attempts, all of which have later been shown to contain flaws. Some of these approaches were based on small-size linear programs designed to solve problems such as the traveling salesman problem efficiently. Fortunately, a few years ago, in a breakthrough result, researchers were able to show that no such linear programs can exist and hence that all such attempts must fail, answering a 20-year-old conjecture. In this lecture, I would like to present a quite simple approach to obtaining such a strong result. Besides an elementary proof, we will hear about (i) the review of all reviews, (ii) why having kids can boost your career, and (iii) a nice interplay of theoretical computer science, geometry, and combinatorics.
We discuss the structure of approximate solutions of variational and optimal control problems on large intervals, and show that a turnpike property holds for large classes of problems. To have this property means, roughly speaking, that the approximate optimal trajectories are determined mainly by the integrand, and are essentially independent of the choice of time intervals and data, except in regions close to the endpoints of the time interval.
Synchronization is a collective phenomenon observed, for instance, in fireflies, in a clapping audience, or in the pacemaker cells of the cardiac pacemaker. Mathematical models for this type of synchronization are based on systems of coupled oscillators. We will start by reviewing the Kuramoto model, introduced by Yoshiki Kuramoto in 1975. It has been successfully analyzed, including many generalizations. In particular, the emergence of synchronization in the Kuramoto model is well understood by now, while much less is known about the effect of noise on synchronization of Kuramoto oscillators. We will address the questions of emergence and of persistence of synchronization in the presence of random perturbations for an arbitrary finite number of non-identical oscillators. The main results and ideas will be explained in the special case of two oscillators, which is particularly easy to study since the model can then be reduced to a stochastic version of the Adler equation.
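The emergence of synchronization in the (deterministic) Kuramoto model is easy to observe numerically. The following is a minimal sketch, with oscillator count, coupling strength, and frequency spread chosen purely for illustration; synchronization is measured by the usual order parameter r = |⟨e^{iθ}⟩|, with r close to 1 indicating phase locking.

```python
import numpy as np

def kuramoto_step(theta, omega, K, dt):
    """One explicit Euler step of the Kuramoto model:
    dtheta_i/dt = omega_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    N = len(theta)
    coupling = (K / N) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    return theta + dt * (omega + coupling)

def order_parameter(theta):
    """r in [0, 1]; r close to 1 means the phases are locked."""
    return abs(np.exp(1j * theta).mean())

rng = np.random.default_rng(1)
N, K, dt = 50, 2.0, 0.01
theta = rng.uniform(0, 2 * np.pi, N)        # random initial phases
omega = rng.normal(0.0, 0.1, N)             # non-identical natural frequencies
for _ in range(5000):
    theta = kuramoto_step(theta, omega, K, dt)
r = order_parameter(theta)
```

With coupling well above the critical value for this frequency spread, r grows from its small initial value to near 1. Adding a noise term to each oscillator (the setting of the talk) turns the update into a stochastic differential equation, and the persistence of a large r is then the delicate question.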
I will explain a general model for the evolution of a population density ρ which is advected by a velocity field u but subject to a non-overcrowding constraint ρ≤1. This model (rather, a meta-model) mainly refers to the motion of a crowd of pedestrians, but can be adapted to many different situations according to how u is given or depends on ρ. Since in general u will not preserve the density constraint, the main assumption is that the motion is advected by the projection of u onto the cone of feasible velocities. This takes its inspiration from granular contact models, where the crowd is described by a collection of particles. I will present the equations, the main ideas to prove existence of solutions (in particular, using tools from optimal transport and gradient flows), and ways to simulate them. We will see how this continuous PDE model provides results which are strikingly similar, qualitatively, to the simulations obtained with granular models, but can require much smaller computational complexity. The talk summarizes joint works with several colleagues in Orsay as well as numerical methods developed both by us and by the INRIA team MOKAPLAN, and will try not to be exhaustive but just focus on the main features of the theory.
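In the notation commonly used for this class of crowd-motion models (my transcription of the standard formulation, not necessarily the exact one from the talk), the constrained evolution can be written as a continuity equation with a projected velocity:

```latex
\partial_t \rho + \nabla\cdot\big(\rho\, P_{\mathrm{adm}(\rho)} u\big) = 0,
\qquad 0 \le \rho \le 1,
```

where P_{adm(ρ)} denotes the L² projection of the desired velocity u onto the cone of admissible velocities, i.e., those fields that do not increase the density on the saturated region {ρ = 1}.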
The reacting flow produced by turbulent flames is recognized as the main generator of acoustic waves in gas turbines and rockets (flame -> acoustics). Because modern gas turbines operate at lean regimes, the flame has become extremely sensitive to the surrounding acoustic field. Consequently, a two-way coupling may easily arise (flame <-> acoustics), which, if constructive, leads to the appearance of combustion instabilities. Combustion instabilities have been a constant nuisance in the design of gas turbines and rockets since their conception nine decades ago.
Combustion instabilities are generally studied by a divide-and-conquer approach: the relations ‘acoustics -> flame’ and ‘flame -> acoustics’ are characterized separately and subsequently combined in a single model. Fundamental progress has been made in recent decades concerning the linear regime, i.e., when the aforementioned relations are linear. It is therefore possible to assess the stability of a combustor at a given (well-defined) operating condition by establishing whether a given initial acoustic perturbation will grow or decay.
Linear stability analysis is, however, not sufficient for the design of gas turbines and rockets that remain stable over a wide range of operating conditions. The acoustic field associated with limit cycles (combustion instabilities) and chaos (combustion ‘noise’) has to be carefully investigated and corresponding models developed. The present talk will give an overview of some relevant studies carried out during the last two decades on nonlinear dynamics and combustion instabilities.
In this talk, we give an overview of our proof of the mean field limit and propagation of chaos for the following particle system. Consider N particles in dimension 3 evolving via Brownian motion, proliferation, and a pair interaction force scaling like $ \frac{1}{|x|^{\lambda}}, \lambda < 2 $, with cut-off width $N^{- \frac{1}{3}} $. Proliferation times of the particles are exponentially distributed, which leads to Poisson processes for the number of proliferation events. The proof we present is based on a Gronwall argument controlling the distance between the (exact) microscopic dynamics and the approximate mean field dynamics.
The talk is about a comparison of two many-particle systems. While in one system the particles move independently, in the other an interaction force acts on them. By solving the Vlasov equation, we will show under certain assumptions that the two systems behave similarly in a suitable sense. Apart from that, we will have a look at some assumptions on the interaction force needed to obtain this result of similar behavior.
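For reference, the Vlasov equation in its standard mean-field form reads as follows (f denotes the phase-space density and k the interaction kernel; the notation is the common one, not necessarily the speaker's):

```latex
\partial_t f(x,v,t) + v\cdot\nabla_x f(x,v,t)
+ \big(k * \rho_t\big)(x)\cdot\nabla_v f(x,v,t) = 0,
\qquad \rho_t(x) = \int f(x,v,t)\,\mathrm{d}v,
```

so that each particle feels the force generated by the averaged spatial density ρ_t rather than by the other particles individually.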
In this talk, I shall explain from an overview perspective some of the main concepts in dynamical systems. The idea is to provide a relatively general road map to allow different disciplines to easily follow most dynamical systems talks using this road map. No prior knowledge will be assumed (i.e., experts in dynamical systems are likely going to know all the concepts already) so that the talk should be easy to follow for a general mathematical audience.