The aim of this workshop is to bring together experts from different communities in the fields of algebraic and arithmetic geometry: those with a more classical and geometric background, who work on moduli spaces, birational geometry, and the classification of higher-dimensional objects, as well as those with a more arithmetic background, who work on the foundations of crystalline cohomology, the de Rham-Witt complex, and general theories of period spaces and period maps. To make this conference more easily accessible to younger researchers (e.g. Ph.D. students and early postdocs) and non-experts, Luc Illusie’s lecture series will introduce the de Rham-Witt complex and its applications to algebraic and arithmetic geometry.
Stochastic simulation algorithms (SSAs) play an important role in the modelling of chemically reacting systems in which some chemical species occur in small abundance. The Gillespie algorithm is an exact SSA; however, it often becomes computationally too expensive for realistic systems. Several algorithms have been developed to accelerate the Gillespie SSA at the expense of reduced accuracy, most notably the tau-leaping and R-leaping methods. Each of these algorithms provides a speed-up under different conditions. However, the nature of the simulated system can change dramatically over the time evolution, or even with a small change in the simulation set-up, thus reducing the efficiency of the selected algorithm. We propose an adaptive accelerated algorithm, S-leaping, which maintains its efficiency under various conditions and, for large and stiff systems, even outperforms the existing methods. Moreover, the adaptive representation allows efficient switching between an implicit and an explicit formulation, depending on the current stiffness of the system. The proposed algorithm is compared with the traditional methods on a number of benchmark systems involving biological reaction networks.
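For context, the exact (direct-method) Gillespie SSA can be sketched on a minimal birth-death system; the reaction network and rate constants below are illustrative choices, not taken from the talk:

```python
import random

def gillespie_birth_death(k_birth=1.0, k_death=0.1, x0=0, t_end=50.0, seed=0):
    """Exact Gillespie SSA for the birth-death system 0 -> A (rate k_birth)
    and A -> 0 (rate k_death * x). Returns the trajectory (times, counts)."""
    rng = random.Random(seed)
    t, x = 0.0, x0
    times, counts = [t], [x]
    while t < t_end:
        a1 = k_birth          # propensity of the birth reaction
        a2 = k_death * x      # propensity of the death reaction
        a0 = a1 + a2
        t += rng.expovariate(a0)      # exponential waiting time to next event
        if rng.random() * a0 < a1:    # pick which reaction fires
            x += 1
        else:
            x -= 1
        times.append(t)
        counts.append(x)
    return times, counts

times, counts = gillespie_birth_death()
# The long-run mean copy number for this system is k_birth / k_death.
```

The cost per step is one exponential and one uniform draw, but every single reaction event is simulated, which is exactly what makes the exact SSA expensive for large systems and what leaping methods avoid.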
I will give an overview of recent results on classical spin systems. Spin systems are prototypical models for phase transitions. Their complex behaviour has turned out to be partially accessible from a large variety of complementary points of view, and has made them objects of intense mathematical study. I will provide an impression of these viewpoints, with a focus on rigorous renormalisation as a powerful method. The talk is based on joint works with David Brydges and Gordon Slade, with Gordon Slade and Ben Wallace, and with Thierry Bodineau.
We propose a stochastic logistic model with mating limitation and stochastic immigration. Incorporating stochastic immigration into a continuous-time Markov chain model, we derive and analyze the associated master equation. By a standard result, there exists a unique ergodic stationary distribution. It turns out that for finite population size, this stationary distribution has a bimodal profile, reflecting the bistability of the stochastic model. However, this bistability disappears and a threshold phenomenon emerges as the total population size goes to infinity: stochasticity vanishes and the deterministic model is recovered. This limiting result admits a different interpretation from the classical strong Allee effect: the species either dies out or survives eventually, regardless of the initial population density, depending on a critical inherent constant determined by the model itself through birth, death, and mating limitation.
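For a birth-death formulation of such a model, the stationary distribution of the master equation can be computed directly from detailed balance. The rates below (constant immigration, mating-limited births, density-dependent deaths) are hypothetical stand-ins for the model in the talk, chosen only so that a bimodal profile is visible:

```python
def stationary_birth_death(lam, mu, N):
    """Stationary distribution of a birth-death chain on {0, ..., N},
    from detailed balance: pi[n+1] / pi[n] = lam(n) / mu(n+1)."""
    w = [1.0]
    for n in range(N):
        w.append(w[-1] * lam(n) / mu(n + 1))
    Z = sum(w)
    return [x / Z for x in w]

# Hypothetical rates: immigration a, mating-limited births b*n^2/(theta + n),
# deaths d*n plus competition c*n^2.
a, b, theta, d, c = 0.2, 1.0, 20.0, 0.25, 0.01
lam = lambda n: a + b * n * n / (theta + n)
mu = lambda n: d * n + c * n * n
pi = stationary_birth_death(lam, mu, 100)
# For these rates pi is bimodal: one mode near extinction and one
# interior mode near the deterministic carrying capacity.
```

The quadratic birth term n^2/(theta + n) is a common way to encode mating limitation; it is what produces the Allee-type bistability reflected in the two modes.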
We will consider a random walk on the integers in a site-dependent random environment. We will show how to use the associated branching process to obtain precise large deviation asymptotics for the random walk. The talk is based on joint work with Dariusz Buraczewski.
Graphene samples are identified as minimizers of configurational energies featuring both two- and three-body atomic-interaction terms. This variational viewpoint allows for a detailed description of ground-state geometries as connected subsets of a regular hexagonal lattice. In this talk we will discuss how these geometries evolve as the number n of carbon atoms in the graphene sample increases. By means of an equivalent characterization of minimality via a discrete isoperimetric inequality, we will prove that ground states converge to the ideal hexagonal Wulff shape as n tends to infinity. In particular, we will show that ground states deviate from such hexagonal Wulff shape by at most Kn^{3/4}+o(n^{3/4}) atoms, where both the constant K and the rate n^{3/4} are sharp.
Integrodifference equations (IDEs for short) are a popular tool in theoretical ecology to describe the spatial dispersal of populations with nonoverlapping generations.
From a mathematical perspective, IDEs are recursions on ambient spaces of continuous or integrable functions and thus generate infinite-dimensional dynamical systems. Hence, for simulation purposes an appropriate numerical approximation yielding a finite-dimensional state space is required. Our goal is to study dynamical properties of IDEs (e.g. existence of reference solutions, attractors, invariant manifolds) that are preserved under corresponding numerical methods, and to establish convergence for increasingly accurate schemes.
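A minimal sketch of such a finite-dimensional approximation, assuming a Gaussian dispersal kernel, Beverton-Holt growth, and a midpoint-rule quadrature (all illustrative choices, not the schemes analysed in the talk):

```python
import math

def gauss_kernel(z, sigma=0.5):
    """Gaussian dispersal kernel (an illustrative choice)."""
    return math.exp(-z * z / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def ide_step(N, xs, growth, kernel, dx):
    """One generation of the IDE N_{t+1}(x) = int k(x - y) g(N_t(y)) dy,
    approximated by the midpoint rule on the grid xs."""
    grown = [growth(n) for n in N]
    return [dx * sum(kernel(x - y) * g for y, g in zip(xs, grown)) for x in xs]

growth = lambda n: 2.0 * n / (1.0 + n)   # Beverton-Holt growth map

L, m = 10.0, 201
dx = 2 * L / (m - 1)
xs = [-L + i * dx for i in range(m)]
N = [1.0 if abs(x) < 1 else 0.0 for x in xs]   # initial occupied patch
for _ in range(5):
    N = ide_step(N, xs, growth, gauss_kernel, dx)
```

The grid turns the recursion on a function space into a recursion on R^m; the dynamical question raised above is which properties of the infinite-dimensional system survive this truncation as m grows.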
The talk deals with the Cauchy-Dirichlet problem for parabolic systems of p-Laplace type on non-cylindrical domains in space-time. In other words, we consider spatial domains whose shape varies in time. In the case of a growing domain, the boundary values can be interpreted as additional initial conditions, while in the case of a shrinking domain, the boundary values can be seen as a kind of obstacle condition. The treatment of time-varying domains turns out to be significantly harder than the standard case of cylindrical domains. We present an existence result for solutions to such problems under very weak regularity assumptions on the domain. A first regularity result for the constructed solutions guarantees that they depend continuously on time with respect to the L2-norm if the domain does not shrink too fast.
The talk will be devoted to the question of computing the optimal change of measure for certain classes of rare event simulation problems that appear in statistical mechanics, e.g. in molecular dynamics. The method is based on a representation of the rare event sampling problem as an equivalent (or: dual) stochastic optimal control problem, whose value function characterizes the optimal (i.e. minimum variance) change of measure. The specific duality behind the problem is then used to devise numerical algorithms for computing the optimal change of measure. I will describe two approaches in some detail that are built on a semi-parametric representation of the value function: a cross-entropy-based stochastic approximation algorithm and a Monte Carlo-based least-squares discretisation of a related forward-backward stochastic differential equation. I will discuss the general approach, with a particular focus on the choice of the ansatz functions and the solution of high-dimensional problems, and illustrate the numerical method with simple toy examples.
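The effect of a good change of measure can be illustrated on a toy Gaussian tail problem, where an exponentially tilted proposal is known in closed form (a standard textbook example, not the talk's control-based algorithm):

```python
import math
import random

def rare_event_mc(a=4.0, n=100000, seed=1):
    """Estimate p = P(X > a) for X ~ N(0,1) two ways: naive Monte Carlo,
    and importance sampling under the tilted measure N(a,1), whose
    likelihood ratio against N(0,1) is exp(-a*x + a^2/2)."""
    rng = random.Random(seed)
    naive = sum(rng.gauss(0, 1) > a for _ in range(n)) / n
    tilted = 0.0
    for _ in range(n):
        x = rng.gauss(a, 1)                      # sample from the tilted law
        if x > a:
            tilted += math.exp(-a * x + a * a / 2)   # reweight to N(0,1)
    tilted /= n
    return naive, tilted

naive, tilted = rare_event_mc()
exact = 0.5 * math.erfc(4.0 / math.sqrt(2))   # true tail probability
```

With p of order 1e-5, the naive estimator sees only a handful of hits (often none), while the tilted estimator attains sub-percent relative error at the same sample size; finding such a tilt systematically, via the dual control problem, is the subject of the talk.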
The Euler characteristic is an invariant of a topological space that, in a precise sense, captures its canonical notion of size, akin to the cardinality of a set. The Euler characteristic is closely related to the homology of a space: it can be expressed as the alternating sum of the Betti numbers, whenever this sum is well-defined. Thus, one says that homology categorifies the Euler characteristic. In his work on the generalisation of cardinality-like invariants, Leinster introduced the magnitude of a metric space, a real number that gives the “effective number of points” of the space. Recently, Leinster and Shulman introduced a homology theory for metric spaces, called magnitude homology, which categorifies the magnitude of a space. In their paper, Leinster and Shulman list a series of open questions, two of which are as follows: (1) Magnitude homology only “notices” whether the triangle inequality is a strict equality or not. Is there a “blurred” version that notices “approximate equalities”? (2) Almost everyone who encounters both magnitude homology and persistent homology feels that there should be some relationship between them. What is it?
In this talk I will introduce magnitude and magnitude homology, give an answer to these questions and show that they are intertwined: it is the blurred version of magnitude homology that is related to persistent homology. Leinster and Shulman's paper can be found at https://arxiv.org/abs/1711.00802.
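The alternating-sum description of the Euler characteristic can be made concrete for finite simplicial complexes (a small illustrative script, not part of the talk):

```python
from itertools import combinations

def euler_characteristic(top_simplices):
    """Euler characteristic of the simplicial complex generated by the given
    top-dimensional simplices: chi = sum_k (-1)^k (number of k-simplices),
    where every face is counted exactly once."""
    faces = set()
    for s in top_simplices:
        for k in range(1, len(s) + 1):
            faces.update(combinations(sorted(s), k))
    return sum((-1) ** (len(f) - 1) for f in faces)

# Boundary of the octahedron, a triangulated 2-sphere with poles 0, 1 and
# equator 2-4-3-5: chi = V - E + F = 6 - 12 + 8 = 2, matching the
# alternating Betti sum b0 - b1 + b2 = 1 - 0 + 1.
octahedron = [(0, 2, 4), (0, 4, 3), (0, 3, 5), (0, 5, 2),
              (1, 2, 4), (1, 4, 3), (1, 3, 5), (1, 5, 2)]
```

Homology refines this single integer into a sequence of groups; magnitude homology plays the analogous refining role for the magnitude of a metric space.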
This lecture discusses the role of algebraic geometry in data science. We report on recent work with Paul Breiding, Sara Kalisnik and Madeline Weinstein. We seek to determine a real algebraic variety from a fixed finite subset of points. Existing methods are studied and new methods are developed. Our focus lies on topological and algebraic features, such as dimension and defining polynomials. All algorithms are tested on a range of datasets and made available in a Julia package.
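As a toy instance of recovering a defining polynomial from finitely many sample points (purely illustrative; the talk's Julia package is not reproduced here), one can recover the defining conic of the unit circle by linear algebra on a monomial evaluation matrix:

```python
import math

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [u - f * v for u, v in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_conic(points):
    """Least-squares conic through the points: with the x^2 coefficient pinned
    to 1, solve for (c0, cx, cy, cxy, cyy) in
    c0 + cx*x + cy*y + x^2 + cxy*x*y + cyy*y^2 = 0 via the normal equations."""
    rows = [[1.0, x, y, x * y, y * y] for x, y in points]
    rhs = [-(x * x) for x, _ in points]
    n = 5
    AtA = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    Atb = [sum(r[i] * v for r, v in zip(rows, rhs)) for i in range(n)]
    return solve(AtA, Atb)

pts = [(math.cos(0.3 * k), math.sin(0.3 * k)) for k in range(20)]
c0, cx, cy, cxy, cyy = fit_conic(pts)
# Recovered polynomial is x^2 + y^2 - 1: c0 = -1, cyy = 1, the rest 0.
```

The kernel of such monomial evaluation matrices is one basic route to defining polynomials from samples; estimating the dimension of the underlying variety requires different, more local tools.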
In this presentation, we examine the probability that a Galton-Watson tree with n nodes has exactly d nodes with k children, and we establish a local limit theorem for it. To this end, I will give a short introduction to conditioned Galton-Watson trees, summarize some of their properties, and explain which restrictions we impose for this examination. Furthermore, I will present some already known approximations before I state, and sketch a proof of, the new form of the local limit theorem.
We introduce a novel data-driven approach to discover and decode features in the neural code coming from large population neural recordings with minimal assumptions, using cohomological learning. We apply our approach to neural recordings of mice moving freely in a box, where we find a circular feature. We then observe that the decoded value corresponds well to the head direction of the mouse. Thus we capture head direction cells and decode the head direction from the neural population activity without having to process the behaviour of the mouse.
Using the fractional-moments method (FMM) and Furstenberg's theorem, we prove that a disordered analog of the SSH model exhibits complete dynamical localization at all energies except for the special energy value zero, provided the probability measures defining the model are sufficiently rich and regular. If, furthermore, they are properly tuned so that the Lyapunov spectrum at zero energy does not contain zero, then the system exhibits localization also at zero energy, which is important for topological reasons. Our method also applies to the usual Anderson model on the strip, yielding a proof of complete 1D dynamical localization via the FMM, a result previously available only through multi-scale analysis. (Joint work with G. M. Graf.)
In this presentation we give an overview of the basic ideas underlying time-dependent density-functional theory and, in particular, of the density-to-potential mapping [1,2], which is based on the time-dependent Schrödinger equation. We review the original derivation of Runge and Gross [3,4] and address some fundamental issues relating to time-analyticity. We reformulate the existence and uniqueness of the mapping as a fixed point problem on certain Banach spaces. We further discuss a numerical construction of the mapping and show some examples. We finally outline some basic open issues regarding a complete existence and uniqueness proof of the density-to-potential mapping.
[1] “Existence, uniqueness, and construction of the density-potential mapping in time-dependent density-functional theory”, M. Ruggenthaler, M. Penz, R. van Leeuwen, J. Phys. Condens. Matter 27, 203202 (2015)
[2] “The density-potential mapping in quantum dynamics”, M. Penz, PhD thesis, arXiv:1610.05552 (2016)
[3] “Density-Functional Theory for Time-Dependent Systems”, E. Runge and E. K. U. Gross, Phys. Rev. Lett. 52, 997 (1984)
[4] “Coulomb potentials and Taylor expansions in time-dependent density-functional theory”, S. Fournais, J. Lampart, M. Lewin and T. Östergaard Sörensen, Phys. Rev. A 93, 062510 (2016)
This talk will address deterministic mean field games in which agents are restricted to a closed domain of Euclidean space. In this case, the existence, uniqueness, and regularity of Nash equilibria cannot be deduced as in the case of an unrestricted state space because, for a large set of initial conditions, the uniqueness of solutions to the minimization problem solved by each agent is no longer guaranteed. We will therefore attack the problem by considering a relaxed version of it, for which the existence of equilibria can be proved by set-valued fixed-point arguments. We will then give a uniqueness result for such equilibria under a classical monotonicity assumption. Finally, we will analyze the regularity of the relaxed solution and show that it satisfies the typical first-order PDE system of mean field games.
Normal forms can be regarded as the simple representation that a mathematical object takes after an appropriate transformation. Normal forms are particularly important in dynamical systems, where they can greatly simplify the study of several problems, such as the qualitative analysis of vector fields near equilibrium points and bifurcations.
In this talk, after providing a brief introduction to slow-fast systems and classical results on normal form theory, I will present some recent progress on normal form theory for slow-fast systems with non-hyperbolic points. We shall also discuss the usefulness of such normal forms and digress on some open problems and perspectives in the field.
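As a minimal illustration of what a normal-form transformation does (a classical single-variable example, not specific to the slow-fast setting of the talk), a near-identity change of coordinates removes a nonresonant quadratic term:

```latex
\dot{x} = \lambda x + a x^{2}, \qquad \lambda \neq 0 .
% The near-identity substitution x = y + h y^{2} gives
\dot{y} = \lambda y + (a - \lambda h)\, y^{2} + \mathcal{O}(y^{3}),
% so the choice h = a/\lambda eliminates the quadratic term:
\dot{y} = \lambda y + \mathcal{O}(y^{3}).
```

The nonresonance condition here is simply λ ≠ 0; at non-hyperbolic points such divisions fail, which is exactly the difficulty addressed by normal form theory for slow-fast systems.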