We study two aspects of making optimal investment decisions for pension investors in the savings phase. First, we explore the impact of an investor’s perception of inflation risk on their investment strategy. We find that mis-specifying inflation risk reduces the expected utility of risk-averse investors, and that more risk-averse investors face larger reductions. For investors who adopt terminal wealth constraints (e.g. a minimum guarantee), ignoring inflation results in real wealth failing to adhere to the real constraints. The conclusion is that investors ignore inflation at their peril. Second, we compare the retirement outcomes derived from risk-averse and loss-averse utility functions. We use a numerical dynamic programming approach and a model that includes ongoing pension contributions to savings, prohibits short-selling and borrowing, and, when applicable, includes wealth constraints. We find that the loss-averse utility function, without wealth constraints, naturally results in a more favourable retirement income distribution that peaks at the investor's chosen income goal with some level of robustness. We conclude that the investor can benefit from adopting a loss-aversion-derived optimal investment strategy to target a sufficient level of income at retirement.
We propose a simple lifecycle strategy in which contributions made during accumulation are invested entirely in a risky portfolio until a pre-specified ‘switch age’ and entirely in a risk-free portfolio thereafter; during decumulation, withdrawals are made from both portfolios based on annuitization rates that vary with age according to remaining life expectancy. First, we show analytically that the strategy is optimal for a range of investors with HARA risk preferences, and derive the dynamics of the investment strategy. Second, we show numerically that the proposed strategy delivers limited loss of utility versus an optimal solution for investors with CRRA preferences and low risk aversion, while significantly outperforming strategies commonly used in practice. The proposed strategy offers an attractive alternative for use in practical settings, as it is simple to follow and removes the need for portfolio rebalancing.
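The mechanics of such a switch-age strategy can be sketched in a few lines. The following toy simulation (all parameters and the lognormal return model are invented for illustration, not taken from the abstract) contributes to the risky portfolio until the switch age, to the risk-free portfolio afterwards, and in decumulation withdraws from both pots at the annuitization rate 1/(remaining life expectancy):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_lifecycle(switch_age=55, retire_age=65, max_age=90,
                       start_age=25, contribution=10_000,
                       mu=0.06, sigma=0.18, rf=0.02):
    """One simulated path of the switch-age strategy (illustrative parameters)."""
    risky, safe = 0.0, 0.0
    incomes = []
    for age in range(start_age, max_age):
        if age < retire_age:                  # accumulation phase
            if age < switch_age:
                risky += contribution         # contribute to risky portfolio
            else:
                safe += contribution          # after the switch age: risk-free
        else:                                 # decumulation phase
            # annuitize: draw down 1/(remaining life expectancy) of each pot
            frac = 1.0 / (max_age - age)
            incomes.append(frac * (risky + safe))
            risky *= 1 - frac
            safe *= 1 - frac
        # both pots stay invested; lognormal risky return, constant risk-free rate
        risky *= np.exp(mu - 0.5 * sigma**2 + sigma * rng.normal())
        safe *= np.exp(rf)
    return incomes

incomes = simulate_lifecycle()
print(len(incomes), round(incomes[0], 2))
```

Note that no rebalancing between the two portfolios ever occurs, which is the practical appeal highlighted in the abstract.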
The theory of Abstract Wiener Spaces is the basis for many fundamental results of Gaussian measure theory: Large Deviations, Cameron-Martin theorems, Malliavin Calculus, Support theorems, etc. Analogues of these classical theorems exist also in the context of Gaussian Rough Paths and Regularity Structures. It is our goal to investigate the role of an “enhanced” Cameron-Martin subspace in this setting. In particular, we present two approaches to a generalization based on Large Deviation theory and apply them to examples of Rough Path theory and Regularity Structures.
The finite element method is a well-established numerical method for obtaining approximate solutions to differential equations and systems. The mathematical theory started in the early seventies and is very mature by now. The method is based primarily on variational principles, and the natural operating norms are energy and $L^2$-based norms. However, since the early seventies, there has been substantial interest in obtaining pointwise and more general $L^p$ error estimates for $1\le p\le \infty$. This direction of research is, surprisingly, still active now. In my talk I will review the various available techniques, illustrate their main ideas, as well as their advantages and disadvantages.
We develop a simple and flexible technique to price executive stock options (ESOs) with vesting periods and liquidation penalties. The vesting period implies that the ESO is activated when a designated performance measure hits a prespecified barrier. The performance measure is usually an accounting figure, such as ROE or EBITDA, normally correlated with the stock price. Once the option is activated, the holder has the right to buy the stock whenever she wants during the residual life of the option. The bivariate structure of the ESO, whose payoff depends jointly on the performance measure and the stock, makes usual lattice techniques difficult to apply. We first reduce the ESO to a compound forward-starting American call option on the stock. We then show how to evaluate the ESO by means of an intuitive hybrid method that uses simulation to determine the bivariate distribution of the forward-starting date of the option and the corresponding price of the stock, and lattice techniques to retrieve the initial value of the activated call option. Liquidation penalties are common in ESOs, aiming at lowering the chances of selling the ESOs and the underlying company shares. We show that the presence of even mild liquidation penalties triggers the existence of optimal exercise opportunities for the ESOs that are absent when the option can be fully liquidated. Joint work with M. De Donno and Alessandro Sbuelz.
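To illustrate the lattice leg of such a hybrid method, here is a standard CRR binomial pricer for an American call with backward induction and an early-exercise check; this is a generic sketch of the kind of lattice step used to value the activated option, not the authors' implementation, and all parameter values are invented:

```python
import numpy as np

def american_call_binomial(S0, K, r, sigma, T, n=200):
    """CRR binomial lattice price of an American call (illustrative)."""
    dt = T / n
    u = np.exp(sigma * np.sqrt(dt))
    d = 1.0 / u
    q = (np.exp(r * dt) - d) / (u - d)    # risk-neutral up-probability
    disc = np.exp(-r * dt)
    j = np.arange(n + 1)
    S = S0 * u**j * d**(n - j)            # terminal stock prices
    V = np.maximum(S - K, 0.0)            # terminal payoff
    for m in range(n - 1, -1, -1):        # backward induction through the tree
        j = np.arange(m + 1)
        S = S0 * u**j * d**(m - j)
        cont = disc * (q * V[1:] + (1 - q) * V[:-1])
        V = np.maximum(cont, S - K)       # early-exercise check at each node
    return V[0]

price = american_call_binomial(S0=100, K=100, r=0.05, sigma=0.2, T=1.0)
print(round(price, 2))
```

In the hybrid method described above, the simulation leg would supply the (random) forward-starting date and starting stock price at which a lattice of this kind is then rooted.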
In this talk I will discuss the evolution of a system of two nonlocally interacting species possibly with nonlinear mobility on a graph, which may be infinite. This evolution, which is induced by an upwind interpolation, can be rigorously understood as a Finslerian gradient flow. Weakening the notion of Minkowski norm and nonlocal gradient, the geometric interpretations and the analysis can be carried over to non-quadratic settings. The analytical studies are accompanied by numerical simulations on finite graphs of different shapes, showcasing phenomena such as aggregation of a species or the separation of different species.
Furthermore, in the quadratic setting with a single species and linear mobility, I will indicate how our non-symmetric graph gradient structures approximate a symmetric Otto-Wasserstein gradient structure by means of evolutionary Gamma-convergence. In particular, this implies existence of solutions to the nonlocal interaction equation on Euclidean space.
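As a toy counterpart to the simulations mentioned above (not the code used in the talk), the sketch below runs an explicit upwind discretisation of a nonlocal interaction equation for a single species with linear mobility on a path graph, with an attractive quadratic kernel; mass is conserved exactly and the species aggregates, so the variance of the mass distribution shrinks. All parameters are invented for illustration:

```python
import numpy as np

def upwind_step(rho, pos, edges, dt, K=lambda d: 0.5 * d**2):
    """One explicit upwind step for d(rho_i)/dt = -sum_j F_ij on a finite
    graph, with nonlocal velocity derived from the potential V = K * rho."""
    # nonlocal interaction potential V_i = sum_j K(|x_i - x_j|) rho_j
    V = np.array([sum(K(abs(x - y)) * r for y, r in zip(pos, rho)) for x in pos])
    new = rho.copy()
    for i, j in edges:
        v = V[i] - V[j]                                   # velocity from i to j
        flux = rho[i] * max(v, 0) - rho[j] * max(-v, 0)   # upwind flux
        new[i] -= dt * flux
        new[j] += dt * flux
    return new

n = 11
pos = np.arange(n, dtype=float)
edges = [(i, i + 1) for i in range(n - 1)]   # path graph
rho = np.ones(n) / n                         # uniform initial mass
for _ in range(2000):
    rho = upwind_step(rho, pos, edges, dt=0.01)
mean = (pos * rho).sum()
var = ((pos - mean) ** 2 * rho).sum()
print(round(rho.sum(), 6), round(var, 3))
```

The antisymmetric edge fluxes make mass conservation automatic, which is one structural feature the upwind interpolation preserves.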
In statistical physics, the zero-freeness property of the grand canonical partition function guarantees the analyticity of the pressure as we approach the infinite volume limit, as shown by Lee and Yang in 1952. Moreover, computer scientists have leveraged the zero-freeness property of the grand canonical partition function to approximate it using various algorithms, such as Barvinok's algorithm. We introduce a novel approach, rooted in computer science, known as the recursion method. This method gives a zero-free region of the partition function. Specifically, we investigate the application of this method to the hard-core lattice gas model, following the work by Peters and Regts in 2019. Additionally, we briefly discuss how Michelen and Perkins (2023) adapted this method for studying gas particles in a continuum space, which interact via a repulsive potential.
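On small graphs the hard-core partition function can be computed exactly by the classical vertex recursion that also underlies the ratio recursions in this line of work; the sketch below (a generic illustration, not the Peters–Regts algorithm itself) splits the independent sets by whether they contain a chosen vertex:

```python
def hard_core_Z(vertices, edges, lam):
    """Hard-core partition function Z_G(lam) = sum over independent sets I
    of lam^|I|, via the recursion Z(G) = Z(G - v) + lam * Z(G - N[v]),
    where N[v] is the closed neighbourhood of v."""
    vertices = frozenset(vertices)
    if not vertices:
        return 1.0
    v = next(iter(vertices))
    nbrs = {u for (a, b) in edges for u in (a, b) if v in (a, b) and u != v}
    rest = vertices - {v}
    edges_rest = [e for e in edges if v not in e]
    # independent sets not containing v ...
    without_v = hard_core_Z(rest, edges_rest, lam)
    # ... plus those containing v, which exclude all neighbours of v
    closed = rest - nbrs
    edges_closed = [(a, b) for (a, b) in edges_rest
                    if a not in nbrs and b not in nbrs]
    return without_v + lam * hard_core_Z(closed, edges_closed, lam)

# 4-cycle: independent sets are {}, four singletons, two opposite pairs,
# so Z(lam) = 1 + 4*lam + 2*lam^2
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(hard_core_Z(range(4), cycle4, 1.0))
```

The recursion-method analysis of zero-free regions studies how such ratios of partition functions contract under exactly this kind of vertex decomposition.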
Implementing inclusive mathematics teaching is a central task that teachers must take on. The following questions frequently arise in doing so: How much joint learning and how much specialised support are necessary? Which factors and framework conditions must be taken into account to realise successful inclusive teaching? Is it the regular teacher's attitude towards inclusion? Is it their subject-matter and pedagogical content knowledge? Is it the available support hours of a special education teacher? Does it come down to the collaboration between teachers? Based on results from various empirical studies, we discuss which framework conditions and factors are important for implementing successful inclusive mathematics teaching.
This paper extends the technique of gradient boosting in mortality forecasting. The two novel contributions are to use stochastic mortality models as weak learners in gradient boosting rather than trees, and to include a penalty that shrinks the forecasts of mortality in adjacent age groups and nearby geographical regions closer together. The proposed method demonstrates superior forecasting performance based on US male mortality data from 1969 to 2019. The boosted model with age-based shrinkage yields the most accurate national-level mortality forecast. For state-level forecasts, spatial shrinkage provides further improvement in accuracy in addition to the benefits achieved by age-based shrinkage. This additional improvement can be attributed to data sharing across states with both large and small populations in adjacent regions, as well as states which share common risk factors.
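A heavily simplified sketch of the boosting idea with a parametric weak learner in place of trees: squared loss, synthetic log-mortality data, and a Gompertz-type log-linear fit standing in for a stochastic mortality model. All names, data, and parameters are invented for illustration, and the paper's age- and space-based shrinkage penalties are reduced here to a plain learning rate:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic log-mortality "data": roughly log-linear in age, plus
# curvature and noise (entirely made up for this sketch)
age = np.arange(40, 90, dtype=float)
log_mx = -9.0 + 0.085 * age + 0.15 * np.sin(age / 7.0) \
         + 0.02 * rng.normal(size=age.size)

def fit_weak_learner(x, r):
    """Weak learner: Gompertz-type log-linear fit a + b*x to residuals r,
    standing in for a stochastic mortality model."""
    X = np.column_stack([np.ones_like(x), x])
    coef, *_ = np.linalg.lstsq(X, r, rcond=None)
    return lambda xs, c=coef: c[0] + c[1] * xs

def boost(x, y, n_rounds=50, lr=0.3):
    """Gradient boosting under squared loss: repeatedly fit the weak
    learner to the current residuals and take a shrunken step."""
    pred = np.zeros_like(y)
    for _ in range(n_rounds):
        resid = y - pred                 # negative gradient of squared loss
        learner = fit_weak_learner(x, resid)
        pred += lr * learner(x)          # shrunken update
    return pred

pred = boost(age, log_mx)
rmse = np.sqrt(np.mean((pred - log_mx) ** 2))
print(round(rmse, 4))
```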
We introduce the Winfree model, which was proposed by Arthur Winfree to describe the collective behavior of pulsatile oscillators. In this talk, we focus on the Winfree model with higher-order influences, which is the first attempt at a mathematical analysis of the approximated pulse-coupled model. We study sufficient conditions on the coupling strength, in terms of the order of the influence function, for oscillator death, phase-locking, and incoherence. Next, we add randomness to the order of the influence function. In this case, we mainly consider complete oscillator death. We prove exponential relaxation toward equilibrium and provide a local sensitivity analysis in probability space.
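An illustrative simulation (not taken from the talk) of a Winfree-type model with higher-order influence $P_n(\theta) = (1+\cos\theta)^n/2^n$ and sensitivity $R(\theta) = -\sin\theta$, contrasting incoherent drift under weak coupling with oscillator death under strong coupling; all parameter values are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(2)

def winfree(kappa, n_order=2, N=50, T=100.0, dt=0.01):
    """Forward-Euler simulation of the Winfree model
        theta_i' = omega_i + kappa * mean_j P(theta_j) * R(theta_i)
    with influence P(t) = (1 + cos t)^n / 2^n and sensitivity
    R(t) = -sin t (illustrative parameters)."""
    theta = rng.uniform(-0.5, 0.5, N)     # initial phases
    omega = rng.uniform(0.9, 1.1, N)      # natural frequencies
    for _ in range(int(T / dt)):
        influence = np.mean((1 + np.cos(theta)) ** n_order) / 2**n_order
        theta = theta + dt * (omega - kappa * influence * np.sin(theta))
    return theta

drift = winfree(kappa=0.05)   # weak coupling: phases keep drifting apart
death = winfree(kappa=5.0)    # strong coupling: oscillator death
print(np.ptp(drift), np.ptp(death))
```

Under strong coupling each phase settles at a fixed point with $\sin\theta_i^* = \omega_i/(\kappa \bar P)$, so the final phase spread is small; under weak coupling the heterogeneous frequencies dominate and the phases spread without bound.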
Wigner’s jellium is a theoretical model that describes a gas composed of electrons. In this model, the overall charge is neutralised by n particles, each with a negative unit charge, floating in a medium of uniformly distributed positive charge. The interactions between the particles are dictated by the Coulomb potential. In this thesis, the Maxwell-Boltzmann distribution is used to describe the statistical behaviour of the quantum jellium model in a one-dimensional environment. We state a process-level large deviation principle for the empirical field and prove it using techniques similar to those of Hirsch, Jansen and Jung (2022).
Consider a biased random walk in positive random conductances on Z^d in dimension 5 and above. In the sub-ballistic regime, Fribergh and Kious (2018) proved the convergence, under the annealed law, of the properly rescaled random walk towards a fractional kinetics process. I will explain that a quenched equivalent of this theorem holds, and present a strategy to simplify the question. This is joint work with A. Fribergh and T. Lions.
Multiparameter persistence modules appear in topological data analysis when dealing with noisy data. They are defined over a wild algebra and therefore do not admit a complete discrete invariant. One thus tries to “approximate” such a module by a more manageable class of modules. Using that approach, we define a class of invariants for persistence modules based on ideas from homological algebra.
This is a report on joint work with Claire Amiot, Benjamin Blanchette and Eric Hanson.
The characterization of biological phenomena related to cell evolution and their interactions with the microenvironment often involves several processes occurring on different spatial and temporal scales. Thus, mathematical models aimed at describing cell dynamics have to feature this inherently multiscale nature. In this seminar, we discuss a multiscale mathematical framework based on a kinetic formulation of cell dynamics at the mesoscopic level for studying the process of cell migration. Precisely, at the microscopic level, single-cell dynamics related to cell motion are given in terms of ODE systems. From them, it is possible to formulate the kinetic equations describing the statistical distribution of the cell population and its evolution in response to microenvironmental interactions. Then, from the mesoscopic level, the related macroscopic models for the evolution at the tissue scale are derived in the appropriate regime. Focusing on the case of tumor cell migration, we then present two possible applications of this framework for studying the impact of the different environmental cues on tumor cell migration.
The Zero Range Process is an important example of interacting particle systems in physics. It models particles jumping on a finite set, which surprisingly results in independent occupation numbers in the limit. We will give an overview of Large Deviations in the Zero Range Process and present some important results that arise in the chosen setting of heavy-tailed occupation numbers. This gives rise to related theory, such as the Catastrophe Principle and the Large Deviations Principle, to which we will also give a brief introduction.
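As a toy illustration (not part of the talk), the sketch below simulates a zero range process with jump rate g(k) = k, for which the particles behave as independent walkers and the stationary occupation of a site has mean N/L; the dynamics are: a site holding k particles expels one at rate g(k), and the expelled particle jumps to a uniformly chosen site:

```python
import numpy as np

rng = np.random.default_rng(3)

def simulate_zrp(L=10, N=100, steps=100_000, g=lambda k: float(k)):
    """Event-driven simulation of a zero range process on L sites with N
    particles: a site with k particles fires at rate g(k); the departing
    particle moves to a uniformly chosen site."""
    eta = rng.multinomial(N, np.ones(L) / L).astype(float)  # initial config
    counts = np.zeros(N + 1)                 # histogram of site-0 occupation
    for _ in range(steps):
        rates = np.array([g(k) for k in eta])
        src = rng.choice(L, p=rates / rates.sum())  # site that fires next
        dst = rng.choice(L)                         # uniform target site
        eta[src] -= 1
        eta[dst] += 1
        counts[int(eta[0])] += 1
    return eta, counts / counts.sum()

eta, hist = simulate_zrp()
mean_occ = sum(k * p for k, p in enumerate(hist))
print(eta.sum(), round(mean_occ, 2))
```

With g(k) = k the total jump rate is constant, so the per-event histogram is also the time-averaged occupation distribution; heavier-tailed behaviour, as in the talk's setting, arises for rates g that grow sublinearly or are bounded.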
The simulation of brittle fracture problems has long been deemed to be very sensitive to the selection of the mesh, namely, convergence of the crack path as the mesh is refined would often be difficult to establish. We argue that the culprit behind these observations is often the low accuracy of the computed stress intensity factors, which define the evolution of the crack. With this in mind, we will present a collection of methods we introduced in the last few years in 2D and 3D whose end results are: (a) the stress intensity factors can be computed with arbitrary order of accuracy (in 2D), (b) the mesh does not need to be refined around the crack tip for accuracy (in 2D), and (c) numerical experiments show convergence of the computed crack paths. As part of this presentation, we will introduce the notion of Universal Meshes, a robust algorithm to deform a background mesh to conform to the crack geometry as it grows. We demonstrate these methods with applications to thermally driven cracks on thin glass plates.