Localised structures play an important role in the study of reaction-transport systems, as they are ubiquitous both in the theory of such systems and in a variety of applications. If we study these structures critically, we must start from the premise that nature dislikes wasting energy; in that sense, natural systems tend to minimise the energy they need. My talk will delve into amplitude equations and the formulation of energy functionals that allow us to study Maxwell points: the points at which two solutions have the same minimal energy, and around which the system oscillates. Furthermore, I will present the main ideas for this in the stationary case close to a codimension-2 Turing bifurcation point, and for spatiotemporal codimension-2 Turing-wave bifurcations.
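As one concrete illustration of an energy functional with a Maxwell point (a standard textbook example of my own choosing, not necessarily the speaker's model), the quadratic-cubic Swift-Hohenberg equation is a gradient flow of an energy functional:

```latex
% Quadratic-cubic Swift--Hohenberg equation (illustrative; not from the talk)
u_t \;=\; -\left(1+\partial_x^2\right)^2 u + \mu u + b\,u^2 - u^3
      \;=\; -\frac{\delta E}{\delta u},
\qquad
E[u] \;=\; \int \left[ \tfrac{1}{2}\bigl((1+\partial_x^2)u\bigr)^2
        - \tfrac{\mu}{2}\,u^2 - \tfrac{b}{3}\,u^3 + \tfrac{1}{4}\,u^4 \right] dx.
% The Maxwell point \mu_M is the parameter value at which the trivial state
% u \equiv 0 and the periodic patterned state u_p have equal energy:
% E[0] = E[u_p].
```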
We derive closed-form solutions to optimal stopping problems related to the pricing of perpetual American standard and lookback put and call options in extensions of the Black-Merton-Scholes model under progressively enlarged filtrations. It is assumed that the information available from the market is modelled by Brownian filtrations progressively enlarged with the random times at which the underlying process attains its global maximum or minimum, that is, the last hitting times for the underlying risky asset price of its running maximum or minimum over the infinite time interval; these times are assumed to be progressively observed by the holders of the contracts. We show that the optimal exercise times are the first times at which the asset price process reaches certain lower or upper stochastic boundaries that depend on the current values of its running maximum or minimum and on whether the random times of the global maximum or minimum of the risky asset price process have occurred. The proof is based on the reduction of the original, necessarily three-dimensional, optimal stopping problems to the associated free-boundary problems, which are solved by means of the smooth-fit and either normal-reflection or normal-entrance conditions for the value functions at the optimal exercise boundaries and at the edges of the state spaces of the processes, respectively.
This is a joint work with Libo Li (Sydney).
I will describe a new approach to scattering theory which extends to time-dependent potentials, including nonlinear equations. In particular, I will discuss recent progress on one-dimensional problems, long-range scattering, and local-decay estimates.
Contemporary data analysis pipelines often involve the use and reuse of data. For instance, a scientist may explore a dataset to select an interesting hypothesis, and then wish to test this hypothesis with the same data. From a statistical perspective, this double use of data is highly problematic: it induces dependence between the hypothesis generation and testing stages, which complicates inference. Failure to account for this dependence renders classical inference techniques invalid.
I will present "data thinning", a set of strategies for obtaining independent training and test sets so that the former can be used to select a hypothesis, and the latter to test it. Data thinning enables valid selective inference in settings for which no solutions were previously available. However, it is also restrictive, in the sense that it requires strong distributional assumptions. Therefore, I will also present two strategies inspired by data thinning that enable valid post-selection inference without such assumptions. One strategy considers thinning summary statistics of the data, rather than the data itself, in order to take advantage of asymptotic properties of the summary statistics. The second strategy involves generating training and test sets that are not independent, and then orthogonalizing the latter with respect to the former in order to conduct valid inference.
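The Poisson case makes the construction concrete: a single Poisson observation can be split into independent training and test folds by binomial thinning. The sketch below (parameter names and values are my own illustration, not from the talk) checks the marginal means of the two folds and their near-zero correlation:

```python
import numpy as np

# Data thinning for Poisson data (a standard construction in the
# data-thinning literature). If X ~ Poisson(lam) and, given X,
# X_train ~ Binomial(X, eps), then X_train ~ Poisson(eps * lam) and
# X_test = X - X_train ~ Poisson((1 - eps) * lam), independently.
rng = np.random.default_rng(0)

lam, eps, n = 10.0, 0.5, 200_000
x = rng.poisson(lam, size=n)          # original data
x_train = rng.binomial(x, eps)        # training fold
x_test = x - x_train                  # test fold

print(x_train.mean())                 # ~ eps * lam = 5
print(x_test.mean())                  # ~ (1 - eps) * lam = 5
print(np.corrcoef(x_train, x_test)[0, 1])  # ~ 0: the folds are independent
```

The training fold can then drive hypothesis selection while the test fold supplies a valid p-value, exactly because the two folds are independent draws from Poisson distributions with rescaled means.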
Score-based generative modeling, implemented through probability flow ODEs, has shown impressive results in numerous practical settings. However, most convergence guarantees rely on restrictive regularity assumptions on the target distribution -- such as strong log-concavity or bounded support. This work establishes non-asymptotic convergence bounds in the 2-Wasserstein distance for a general class of probability flow ODEs under considerably weaker assumptions: weak log-concavity and Lipschitz continuity of the score function. Our framework accommodates non-log-concave distributions, such as Gaussian mixtures, and explicitly accounts for initialization errors, score approximation errors, and the effects of discretization via an exponential integrator scheme. Addressing a key theoretical challenge in diffusion-based generative modeling, our results extend convergence theory to more realistic data distributions and practical ODE solvers. We provide concrete guarantees for the efficiency and correctness of the sampling algorithm, complementing the empirical success of diffusion models with rigorous theory. Moreover, from a practical perspective, our explicit rates may help in choosing hyperparameters, such as the step size in the discretization.
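To make the ingredients concrete, here is a minimal sketch of a probability flow ODE sampler with an exponential-integrator discretization, for a one-dimensional Gaussian target whose score along the forward process is available in closed form (the target, noise schedule, horizon, and step count are my own illustrative assumptions, not the paper's setting):

```python
import numpy as np

# Toy probability flow ODE sampler with an exponential-integrator step.
rng = np.random.default_rng(1)

m, s = 2.0, 0.5             # target N(m, s^2): mean and std (assumed)
T, n_steps, n = 8.0, 400, 20_000
h = T / n_steps

def score(x, t):
    # Forward (VP) process: x_t = a(t) x_0 + sqrt(1 - a(t)^2) * noise,
    # with a(t) = exp(-t/2); p_t stays Gaussian, so the score is explicit.
    a = np.exp(-t / 2.0)
    var = a**2 * s**2 + 1.0 - a**2
    return -(x - a * m) / var

# Probability flow ODE (run backwards in time): dx/dt = -x/2 - score(x, t)/2.
# Exponential integrator: solve the linear drift -x/2 exactly and freeze the
# score over each step; going from time t down to t - h this gives
#   x <- exp(h/2) * x + (exp(h/2) - 1) * score(x, t).
x = rng.standard_normal(n)  # initialize from N(0, 1), close to p_T for large T
t = T
for _ in range(n_steps):
    x = np.exp(h / 2.0) * x + (np.exp(h / 2.0) - 1.0) * score(x, t)
    t -= h

print(x.mean(), x.std())    # ~ (2.0, 0.5): samples approximate the target
```

The three error sources discussed in the abstract are all visible here: the initialization error (N(0, 1) is only approximately p_T), the score approximation error (here zero, since the score is exact), and the discretization error of the exponential-integrator step.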
In this talk, we will provide a high-level explanation of why stochasticity is a necessary tool in computational turbulence modelling. While the discussion focuses on fluid dynamics, the underlying concepts belong to a wider mathematical framework suited for dynamical systems where one can only observe and simulate a projection of the full phase space. This framework is particularly relevant for systems where “coarse” dynamics take place in a significantly lower-dimensional space than the “true” dynamics. The wide range of scales of motion in turbulent flow poses a major challenge for the prediction of fluid-dynamical processes. Feasible simulation strategies require solving computational problems with reduced complexity, for example by filtering out the smaller scales of motion and/or numerically solving the governing equations on a coarse grid. However, these necessary simplifications induce systematic errors and uncertainty in flow prediction. In this presentation, we explain how errors and the loss of information inherent in complexity reduction motivate a probabilistic modelling approach. We will explain the need for i) (data-driven) stochastic closure models to account for discretisation errors and unresolved scales, and ii) data assimilation methods to steer predictions toward observations.
TBA
An optimal control problem for a system of nonlinear parabolic equations modeling the migration/proliferation dichotomy of glioma cells is investigated. The solvability of the optimal control problem is proven, and necessary first-order optimality conditions are obtained. A justification for the weak bang-bang principle for optimal control is presented. A numerical algorithm based on the finite element method has been developed and implemented. Numerical experiments demonstrate the effect of additional oxygen supply on vascular density and the switching of tumor cell phenotype from invasive to proliferative.
Given a simple graph G = (V, E) and a map l_0 : V → {+1, −1}, the majority dynamics on G with initial assignment of states l_0 is a process that begins on day 0 and, for each t ≥ 0, produces a new assignment of states l_{t+1} in which each vertex takes the state of the majority of its neighbours, remaining in its previous state in the case of a tie. Specifically, for each v ∈ V,

l_{t+1}(v) = +1 if Σ_{u∈N(v)} l_t(u) > 0, or if Σ_{u∈N(v)} l_t(u) = 0 and l_t(v) = +1; and l_{t+1}(v) = −1 otherwise. (1)

This process is a model for opinion-exchange dynamics, with applications in many areas, such as politics, sociology, and biophysics. While there exist results showing that the process eventually reaches a 2-periodic stable state on all graphs, a natural question is under which initial conditions unanimity is reached, and how quickly. For binomial random graphs in particular, a longstanding conjecture, due to Benjamini, Chan, O'Donnell, Tamuz and Tan (2016), is the following:

Conjecture. Let G ∼ G(n, p) be the binomial random graph with p = ω(1/n), and let l_0(v) be sampled uniformly at random from {+1, −1} for each v ∈ V. Then w.h.p. the majority dynamics process reaches unanimity after sufficiently many steps t.

Steps towards proving the conjecture have been taken by gradually improving the range of densities d = np for which the conjecture is known to hold, the current best bound being d ≫ n^{1/3} log^{2/3} n, due to Kim and Tran. Our work aims to prove the conjecture in the range n^{1/4} ≪ d ≤ O(n^{1/3}), and lays the groundwork for proving it in the general case n^{1/(k+1)} ≪ d ≤ O(n^{1/k}). This is based on joint work with Nikolaos Fountoulakis, University of Birmingham.
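A minimal simulation of the majority update rule (1) on G(n, p) can be sketched as follows (the parameter choices are my own; the density np here sits well above the Kim-Tran threshold, where unanimity is typically expected w.h.p.):

```python
import numpy as np

# Majority dynamics on a binomial random graph G(n, p): illustrative sketch.
rng = np.random.default_rng(3)

n, p = 300, 0.2                      # density d = np = 60
A = rng.random((n, n)) < p
A = np.triu(A, 1)
A = (A | A.T).astype(int)            # symmetric adjacency matrix of G(n, p)

l = rng.choice([-1, 1], size=n)      # l_0: uniformly random initial states

for _ in range(20):                  # iterate the majority update
    s = A @ l                        # sum of each vertex's neighbour states
    # state of the majority of neighbours; ties keep the previous state
    new = np.where(s > 0, 1, np.where(s < 0, -1, l))
    if np.array_equal(new, l):       # stop once a fixed point is reached
        break
    l = new

print("unanimous:", abs(l.sum()) == n)
```

On dense instances like this one, the initial majority usually takes over within a handful of rounds; the conjecture concerns how sparse the graph can be while this behaviour persists.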
t.b.a.