We use the semigroup method, which consists of two steps applying the continuous and discrete Gronwall lemmas, to study the asymptotic behavior of a rough differential equation. The system is constructed so that the deterministic system (with only the drift coefficient) possesses a unique equilibrium that is exponentially asymptotically stable. The stationary states can then be studied within the framework of random dynamical systems, and the existence of a global random pullback attractor is proved. We go a step further and prove that the attractor is in fact a singleton when the diffusion coefficient is linear, or bounded with a sufficiently small Lipschitz constant.
Consensus dynamics is a prelude to self-organizing collective dynamics in multi-agent systems, one of the interesting topics in network science. With directed graphs, we can model one-way communication between agents in a system. The dynamics arising from different non-linear communication protocols between agents on a digraph can be explained by bifurcation theory and then extended to fast-slow systems. But how does the topology of the digraph affect these dynamics? Some results have already been proven for strongly connected digraphs; we demonstrate similar behavior for a larger class of digraphs. Another important question is whether these results on digraphs are equivalent to those on undirected graphs, which are used to model two-way communication. A symmetrization algorithm paves the way to a solution of this question. These are some aspects of the consensus problem that I will address in my talk.
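As background, a minimal sketch of the standard linear consensus protocol on a digraph may help fix ideas: each agent moves toward the states of its out-neighbours (one-way communication), and on a strongly connected digraph all states converge to a common value. The graph, initial states, and parameters below are illustrative; the nonlinear protocols discussed in the talk replace the linear coupling term.

```python
# Linear consensus on a digraph, x' = -Lx, integrated by forward Euler.
# Directed edge (i, j) means: agent i listens to agent j.
edges = [(0, 1), (1, 2), (2, 0), (2, 3), (3, 2)]  # strongly connected
n = 4
x = [0.0, 1.0, 2.0, 3.0]        # initial agent states
dt, steps = 0.01, 5000           # step size and horizon (illustrative)

for _ in range(steps):
    dx = [0.0] * n
    for i, j in edges:
        dx[i] += x[j] - x[i]     # linear coupling along each directed edge
    x = [x[i] + dt * dx[i] for i in range(n)]

spread = max(x) - min(x)
print(x, spread)                 # all states cluster near a common value
```

For this example the digraph happens to be balanced (each vertex has equal in- and out-degree), so the consensus value is the average of the initial states, 1.5.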
Many stochastic particle systems have well-defined continuum limits: as the number of particles tends to infinity, the density of particles converges to a deterministic limit that satisfies a partial differential equation. In this talk I will discuss one example of this. The particle system consists of particles that have finite size: in two and three dimensions they are spheres, in one dimension rods. The particles cannot overlap each other, leading to a strong interaction with neighbouring particles. Such systems of particles have been much studied, but for the continuum limit in dimensions two and higher there is currently no rigorous result. There are conjectures about the form of the limit equation, often in the form of Wasserstein gradient flows, but to date there are no proofs. We cannot give a proof of convergence in higher dimensions either, but in the one-dimensional situation we can give a complete picture, including both the convergence and the gradient-flow structure that derives from the large-deviation behaviour of the particles. This gradient-flow structure shows clearly the role of the free energy and the Wasserstein-metric dissipation, and how they derive from the underlying stochastic particle system. The proof is based on a special mapping of the particle system to a system of independent particles, which is specific to the one-dimensional setting. This mapping is an isometry for the Wasserstein metric, leading to a beautiful connection between limit equations for interacting and non-interacting particle systems. This is joint work with Nir Gavish and Pierre Nyquist.
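The one-dimensional mapping can be illustrated concretely. Assuming rods of length ell are described by their sorted left endpoints, removing the excluded volume, y_i = x_i - i*ell, turns a non-overlapping rod configuration into an ordered configuration of point particles; since each ordered particle is shifted by a fixed amount, the map preserves the (ordered-coupling) Wasserstein distance between configurations. The rod length, configurations, and discrete distance below are illustrative stand-ins for the continuum objects in the talk.

```python
ell = 0.5                                   # rod length (illustrative)

def rods_to_points(x):
    """Map sorted rod left endpoints to point-particle positions."""
    return [xi - i * ell for i, xi in enumerate(sorted(x))]

def points_to_rods(y):
    """Inverse map: re-insert the excluded volume."""
    return [yi + i * ell for i, yi in enumerate(sorted(y))]

def w_ordered(a, b):
    """Discrete Wasserstein-type distance via the ordered coupling."""
    return sum(abs(p - q) for p, q in zip(sorted(a), sorted(b))) / len(a)

rods_a = [0.0, 0.6, 1.5, 2.2]               # non-overlapping: gaps >= ell
rods_b = [0.1, 0.9, 1.6, 2.4]
pts_a, pts_b = rods_to_points(rods_a), rods_to_points(rods_b)

# The two distances agree: the mapping is an isometry.
print(w_ordered(rods_a, rods_b), w_ordered(pts_a, pts_b))
```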
In many real-life applications, it is of interest to study how the distribution of a (continuous) response variable changes with covariates. Dependent Dirichlet process (DDP) mixtures of normal models, a Bayesian nonparametric method, successfully address this goal. The approach of considering covariate-independent mixture weights, also known as the single weights dependent Dirichlet process mixture model, is very popular due to its computational convenience but can have limited flexibility in practice. To overcome this lack of flexibility while retaining computational tractability, this work develops a single weights DDP mixture of normal models, where the components' means are modelled using Bayesian penalised splines (P-splines). We coin our approach psDDP. A practically important feature of psDDP models is that all parameters have conjugate full conditional distributions, thus leading to straightforward Gibbs sampling. In addition, they allow the effect associated with each covariate to be learned automatically from the data. The validity of our approach is supported by simulations, and the method is applied to a study concerning the association of a toxic metabolite with preterm birth.
In my talk I will present an application of switched dynamical systems in ecology. In particular, I will consider a Filippov system with two Rosenzweig-MacArthur subsystems describing a one-predator, two-prey interaction with prey switching. The discontinuity in the system arises from prey switching, a form of frequency-dependent predation characterized by the predator's adaptive change in diet in response to prey abundance. The system was analyzed by combining analytical and numerical approaches, with the main focus on sliding motion and discontinuity-induced bifurcations. It was observed that the effect of switching on the stability of the system is quite complex and can both stabilize and destabilize the population dynamics.
We consider the following simple model: one starts with a set V, a random partition of V, and a parameter p in [0,1]. We then obtain a {0,1}-valued process indexed by V by independently, for each element of the random partition, assigning all of its members the value 1 with probability p and the value 0 with probability 1−p. Many models fit into this framework: in particular the zero-external-field Ising model (where this is called the Fortuin-Kasteleyn representation). I will first describe earlier work with Johan Tykesson and then move on to describe work with Malin Palö Forsström, where we study the question of which threshold Gaussian and stable vectors have such a representation. (A threshold Gaussian (stable) vector is obtained by taking a Gaussian (stable) vector and a threshold h and looking at where the vector exceeds h.) The answer turns out to depend in quite varied ways on the properties of the vector and the threshold; in particular, h=0 behaves quite differently from h different from 0. Among other results, in the large-h regime we obtain a phase transition in the stability exponent alpha for stable vectors, where the critical value turns out to be alpha=1/2.
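The construction above is completely elementary and can be sketched directly: each partition element is independently painted 1 with probability p and 0 otherwise, and all of its members inherit that value. The partition and the value of p below are illustrative.

```python
import random

def divide_and_color(partition, p, rng):
    """Return a {0,1}-valued process indexed by the union of the blocks."""
    process = {}
    for block in partition:
        colour = 1 if rng.random() < p else 0   # one coin flip per block
        for v in block:
            process[v] = colour                  # all members share the value
    return process

rng = random.Random(0)
partition = [{0, 1}, {2}, {3, 4, 5}]             # a partition of V = {0,...,5}
sample = divide_and_color(partition, p=0.5, rng=rng)
print(sample)                                    # members of a block always agree
```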
In this talk I will present a result on the existence of a random attractor for stochastic partly dissipative systems. These are coupled systems of a partial and an ordinary differential equation, where both equations are perturbed by additive, infinite-dimensional noise. Systems of this form appear in numerous applications in the natural sciences, for example in the famous FitzHugh-Nagumo model for neuronal dynamics. Random attractors are compact, invariant subsets of the phase space that capture the long-time behaviour of random systems. Our proof is based on suitable regularity results for stochastic convolutions, a priori estimates, and compactness arguments. This is joint work with Christian Kuehn and Alexandra Neamtu.
Motivated by characteristics of real-world networks such as clustering and power-law degree distributions, many random graph models reproducing these features have been introduced. The processes shaping real-world networks are often also local, i.e. they rely on properties of the network in the neighbourhood of a vertex. A random walk can be regarded as such a local selection process for creating or reinforcing edges. In the talk we look at a process in which, repeatedly, an n-step random walk from a random starting vertex A to a vertex B leads to the reinforcement of the edge between A and B. Different approaches to analysing this process, and in particular the associated random limits, are discussed.
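A hedged sketch of one variant of such a reinforcement process: repeatedly pick a uniform starting vertex A, run an n-step simple random walk to reach a vertex B, and reinforce the edge between A and B, creating it if absent. Whether the walk itself uses the reinforced weights, and whether edges are directed, differs between model variants; this sketch uses an unweighted walk on an undirected graph, with the seed graph and parameters chosen purely for illustration.

```python
import random
from collections import defaultdict

def reinforce(adjacency, weights, n, steps, rng):
    """Run the walk-and-reinforce process for a given number of steps."""
    vertices = sorted(adjacency)
    for _ in range(steps):
        a = rng.choice(vertices)              # random starting vertex A
        v = a
        for _ in range(n):                    # n-step simple random walk
            v = rng.choice(adjacency[v])
        b = v
        weights[tuple(sorted((a, b)))] += 1   # reinforce the edge A-B
        if a != b and b not in adjacency[a]:  # create the edge if absent
            adjacency[a].append(b)
            adjacency[b].append(a)

rng = random.Random(1)
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}   # seed: a path graph
weights = defaultdict(int)
reinforce(adjacency, weights, n=2, steps=100, rng=rng)
print(dict(weights))                          # accumulated edge reinforcements
```

Note that the graph evolves as edges are created, so later walks can use edges produced by earlier reinforcements; this feedback is exactly what makes the associated random limits delicate to analyse.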
Over-parameterized models, in particular deep networks, often exhibit a ``double-descent'' phenomenon, where, as a function of model size, the error first decreases, then increases, and finally decreases again. This intriguing double-descent behavior also occurs as a function of training time, and it has been conjectured that such ``epoch-wise double descent'' arises because training time controls the model complexity. In this paper, we show that double descent arises for a different reason: it is caused by two overlapping bias-variance tradeoffs that arise because different parts of the network are learned at different speeds.