In diffusion models, a few suitably chosen financial securities suffice to complete the market. As a consequence, the efficient allocations of static Arrow-Debreu equilibria can be attained in Radner equilibria by dynamic trading. We show that this celebrated result generically fails if there is Knightian uncertainty about volatility. A Radner equilibrium with the same efficient allocation as in an Arrow-Debreu equilibrium exists if and only if the discounted net trades of the equilibrium allocation display no ambiguity in the mean. This property is violated generically in endowments, and thus Arrow-Debreu equilibrium allocations are generically unattainable by dynamically trading a few long-lived assets.
Link to the paper: https://pub.uni-bielefeld.de/publication/2901673
We solve explicitly a two-dimensional singular control problem of finite fuel type in infinite time horizon. The problem stems from the optimal liquidation of an asset position in a financial market with finite stochastic illiquidity. Price impact is multiplicative and transient with stochastic resilience. The optimal control is obtained as a diffusion process reflected at a non-constant free boundary. To solve the HJB variational inequality and prove optimality, we apply new results on the Laplace transforms of the inverse local times for diffusions reflected at elastic boundaries. This talk is based on joint papers with Todor Bilarev and Peter Frentrup.
We propose a model for scheduling jobs in a parallel machine setting that takes into account the cost of migrations by assuming that the processing time of a job may depend on the specific set of machines among which the job is migrated. For the makespan minimization objective, the model generalizes classical scheduling problems such as \(R||C_{\max}\) and \(P|pmtn|C_{\max}\), as well as novel scenarios such as semi-partitioned and clustered scheduling. In the case of a \(k\)-level hierarchical family of machines, we prove an upper bound on the approximation ratio of the problem equal to \(1 + H_k\), where \(H_k\) is the \(k\)-th harmonic number. When \(k = 2\), an improved upper bound of \(2 + 1/m\) is provided, where \(m\) is the number of machines. The results are achieved via an improved rounding scheme for assignment/packing constraints. An extension that incorporates memory capacity constraints is also discussed.
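To make the two bounds concrete, here is a small illustrative Python sketch (not from the paper) that evaluates the general \(1 + H_k\) bound exactly and compares it with the improved two-level bound \(2 + 1/m\):

```python
from fractions import Fraction

def harmonic(k):
    """Compute the k-th harmonic number H_k = 1 + 1/2 + ... + 1/k exactly."""
    return sum(Fraction(1, i) for i in range(1, k + 1))

def hierarchy_bound(k):
    """General upper bound 1 + H_k on the approximation ratio for a k-level hierarchy."""
    return 1 + harmonic(k)

def two_level_bound(m):
    """Improved upper bound 2 + 1/m for k = 2 with m machines."""
    return Fraction(2) + Fraction(1, m)

for k in (1, 2, 3):
    print(k, float(hierarchy_bound(k)))
# For k = 2 the general bound is 1 + H_2 = 2.5, while with m = 8 machines
# the improved bound 2 + 1/m = 2.125 is strictly better.
print(float(two_level_bound(8)))
```

Exact rational arithmetic via `Fraction` avoids any rounding when comparing the two bounds.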
While absence of arbitrage in frictionless financial markets (i.e. without transaction costs) requires price processes to be semimartingales, non-semimartingales can be used to model prices in an arbitrage-free way if proportional transaction costs are taken into account. In this talk, I will present an overview of several results that show how to use non-semimartingale price processes, such as the fractional Black-Scholes model, in portfolio optimisation under proportional transaction costs by establishing the existence of a so-called shadow price. This is a semimartingale price process, taking values in the bid-ask spread, such that frictionless trading for that price process leads to the same optimal strategy and utility as the original problem under transaction costs.
The talk is based on joint work with Walter Schachermayer.
In the lectures I shall go through basic elements of undirected graphical Gaussian models, their maximum likelihood theory, and discuss features arising when additional structure such as symmetry and total positivity is taken into account. I shall describe and discuss alternative methods of estimation and associated existence problems. (2nd session: 4 May 2016; 3rd session: 9 May 2016)
In recent years, the new paradigm of Isogeometric Analysis (IGA) has demonstrated its potential to bridge the gap between Computer Aided Design and the Finite Element Method (FEM). The distinctive aspect of IGA is the use of one common basis for creating geometry models, for meshing, and for numerical simulation. In this way, a seamless integration of all computational tools within a single design loop comes into reach. Moreover, the increased smoothness of the basis functions and an exact representation of the boundary are properties which are also attractive from a numerical viewpoint.
The presentation is aimed at the application of IGA in the field of solid mechanics, in particular vibrational analysis. We start with a short overview of the methodology, point out the common features and differences compared to the FEM, and then concentrate on the analysis of linear and nonlinear problems where the numerical advantages of higher smoothness become apparent. The last part of the talk is devoted to the field of shape optimization, which benefits in particular from the IGA framework.
We consider the Hardy inequality for Sobolev spaces of real order on all of \(R^N\). To this end, we generalise the method of ground-state substitution and obtain, besides a sharpened inequality with a remainder term, in particular the sharp constant of the inequality. The talk presents the main points of the speaker's master's thesis.
Meta-analyses and systematic reviews are the cornerstones of evidence-based medicine and inform treatment, diagnosis, or prevention of individual patients as well as policy decisions in health care. Statistical methods for the meta-analysis of intervention studies are well established today. Meta-analysis for diagnostic accuracy trials, however, has been a vivid research area in recent years, especially due to the increased complexity of diagnostic studies with their bivariate outcome of sensitivity and specificity. The complexity increases further when single studies do not report only a single pair of sensitivity and specificity, but a full ROC curve with several pairs of sensitivity and specificity, each pair for a different threshold. Researchers frequently ignore this information and use only one pair of sensitivity and specificity from each study to arrive at meta-analytic estimates. Although methods to deal with the full information have been proposed [1-5], these are not without problems: e.g., they are two-step approaches where estimation uncertainty from the first step is ignored in the second step, the number of thresholds has to be identical across studies, or the concrete values of the thresholds are ignored, making clinically relevant inference on sensitivity and specificity at given thresholds impossible. We propose two approaches for the meta-analysis of full ROC curves that use the information from all thresholds. The first approach simply expands the standard bivariate random effects model to a meta-regression model. The second approach uses the interpretation of an ROC curve as a bivariate time-to-event model for interval-censored data. This work is motivated by two systematic reviews on population-based screening for type 2 diabetes mellitus [6,7] which report on 38 single studies to assess HbA1c as a diagnostic marker.
Both reviews report only single pairs of sensitivity and specificity from each single study, but an intensified search yields 124 pairs of sensitivity and specificity for 26 different HbA1c thresholds from the 38 single studies.
In the lectures I shall go through basic elements of undirected graphical Gaussian models, their maximum likelihood theory, and discuss features arising when additional structure such as symmetry and total positivity is taken into account. I shall describe and discuss alternative methods of estimation and associated existence problems. (3rd session: 9 May 2016)
The lecture presents a recent methodology allowing one to execute numerical computations with finite, infinite, and infinitesimal numbers on a new type of computer – the Infinity Computer – patented in the EU, USA, and Russia. The new approach is based on the principle 'The whole is greater than the part' (Euclid's Common Notion 5), which is applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers with a finite number of symbols as particular cases of a unique framework different from that of non-standard analysis. The new methodology evolves ideas of Cantor and Levi-Civita in a more applied way and, among other things, introduces new infinite integers that possess both cardinal and ordinal properties, just as usual finite numbers do. It is emphasized that the philosophical triad – researcher, object of investigation, and tools used to observe the object – existing in such natural sciences as Physics and Chemistry, exists in Mathematics, too. In the natural sciences, the instrument used to observe the object influences the results of observations. The same happens in Mathematics, where the numeral systems used to express numbers are among the instruments of observation used by mathematicians. The use of powerful numeral systems makes it possible to obtain more precise results in Mathematics, in the same way as the use of a good microscope makes it possible to obtain more precise results in Physics. A numeral system using a new numeral called grossone is described. It allows one to express infinities and infinitesimals easily, offering rich capabilities for describing mathematical objects, mathematical modelling, and practical computations. The concept of the accuracy of numeral systems is introduced, and the accuracy of the new numeral system is compared with that of traditional numeral systems used to work with infinity.
The new methodology has been successfully used in a number of applications: Turing machines and lexicographic ordering, cellular automata, percolation and biological processes, numerical differentiation, optimization, and ODEs, fractals, infinite series, set theory, hyperbolic geometry, etc. The Infinity Calculator using the Infinity Computer technology is presented during the talk.
This talk concerns minimal energy configurations as well as maximal polarization (Chebyshev) configurations on manifolds, which are problems that are asymptotically related to best-packing and best-covering. In particular, we discuss how to generate non-structured grids of N points on a d-dimensional manifold that have the desirable qualities of well-separation and optimal order covering radius, while asymptotically having a uniform distribution. Even for certain small numbers of points like N=5, optimal arrangements with regard to energy and polarization can be a challenging problem.
In the lectures I shall go through basic elements of undirected graphical Gaussian models, their maximum likelihood theory, and discuss features arising when additional structure such as symmetry and total positivity is taken into account. I shall describe and discuss alternative methods of estimation and associated existence problems.
In this talk, we focus on a one-dimensional model of individuals/particles performing independent random walks on Z in which only pairs of individuals can produce offspring (cooperative branching) and individuals that land on an occupied site merge with the individual present on that site (coalescence). In a biological context, the resulting cooperative branching-coalescent describes a simple population dynamics with reproducing pairs of particles. Coalescence models death due to competition for resources. We argue that the model can also be used as an approximation to a population with two sexes in which only pairs of the opposite sex can reproduce. In addition, the process also describes the interface dynamics of a multi type voter model in which rare types have an advantage. Mathematically, the cooperative branching-coalescent has interesting properties: We show that the system undergoes a phase transition as the branching rate is increased. For small branching rates the upper invariant law is trivial and the process started with finitely many individuals a.s. ends up with a single individual. Both statements are not true for high branching rates. We also study the decay of the population density of the process started in the fully occupied state if the branching rate is small enough. This talk is based on joint work with Jan Swart (UTIA Prague).
After the introduction of random matrices to nuclear physics by Eugene Wigner in 1955, random quantum systems have grown in popularity. Wigner's idea was to consider families of Hamiltonians governed by a certain probability distribution in order to describe overly complicated systems. Of particular interest are, of course, the spectra of these Hamiltonians. In this talk we consider random, in general non-self-adjoint, tridiagonal operators on the Hilbert space of square-summable sequences. To model randomness, we use an approach by Davies that eliminates probabilistic arguments. Despite the rising interest, not much is known about the spectra of non-self-adjoint random operators. The Feinberg-Zee random hopping matrix reveals this in a beautiful manner: the boundary of its spectrum appears to be fractal, but a proof has yet to be found. We take a step in that direction by showing that the spectrum has an infinite sequence of polynomial symmetries. This not only enlarges known subsets of the spectrum by sizeable amounts, but also implies that the spectrum contains an infinite sequence of Julia sets.
Large pedestrian crowds often exhibit complex dynamics. There is a vast literature on different mathematical approaches ranging from the microscopic description of the individual dynamics to macroscopic equations for the evolution of the crowd. In this talk, we focus on optimal control models, which describe the evolution of a large pedestrian group trying to reach a specific target with minimal cost. We discuss different models regarding the cost functionals and PDE-constraints as well as the connection to the Hughes model for pedestrian flow. We propose a space-time method which is based on the Benamou and Brenier formulation of optimal transport problems and illustrate the dynamics with numerical simulations.
We study frustration-free quantum lattice systems with a non-vanishing spectral gap above one or more (infinite-volume) ground states. The ground states are called stable if arbitrary perturbations of the Hamiltonian that are uniformly small throughout the lattice have only a perturbative effect. In the past several years such stability results have been obtained in increasing generality. We review results by Bravyi-Hastings, Bravyi-Hastings-Michalakis, and Michalakis-Zwolak, as well as some recent refinements. This is joint work with Robert Sims and Amanda Young.
Compressive sensing in its most practical form aims to recover a function that exhibits sparsity in a given basis from as few function samples as possible. One of the fundamental results of compressive sensing tells us that \(O(s \log^4 N)\) samples suffice in order to robustly and efficiently recover any function that is a linear combination of $s$ arbitrary elements from a given bounded orthonormal set of size $N > s$. Furthermore, the associated recovery algorithms (e.g., Basis Pursuit via convex optimization methods) are efficient in practice, running in just polynomial-in-$N$ time. However, when $N$ is very large (e.g., if the domain of the given function is high-dimensional), even these runtimes may become infeasible.
If the orthonormal basis above is Fourier, then the sparse recovery problem above can also be solved using Sparse Fourier Transform (SFT) techniques. Though these methods aim to solve the same problem, they have a different focus. Principally, they aim to reduce the runtime of the recovery algorithm as much as absolutely possible, and are willing to sample the function a bit more often than a compressive sensing method might in order to achieve that objective. By doing so, one can indeed achieve similar recovery guarantees to Basis Pursuit, but with radically reduced runtimes that depend only logarithmically on $N$. However, SFTs are highly adapted to the special properties of the Fourier basis, making their extension to other orthonormal bases difficult.
In this talk we will present a general framework that can be used in order to construct a highly efficient SFT algorithm. The framework abstracts many of the components required for SFT design in an attempt to simplify the application of SFT ideas to other basis choices. Extension of arbitrary SFTs to the Chebyshev and Legendre polynomial bases will also be discussed.
Partial differential equations (PDEs) are widely used to model phenomena in nature. In this talk we will see that they also have a high potential to compress digital images.
The idea sounds temptingly simple: We keep only a small amount of the pixels and reconstruct the remaining data with PDE-based interpolation. This gives rise to three interdependent questions:
1. Which data should be kept?
2. What are the most useful PDEs?
3. How can the selected data be encoded efficiently?
Solving these problems requires to combine ideas from different mathematical disciplines such as mathematical modelling, optimisation, interpolation and approximation, and numerical methods for PDEs.
Since the talk is intended for a broad audience, we focus on intuitive ideas, and no specific knowledge in image processing is required.
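The core idea of questions 1 and 2 can be shown in miniature. The following pure-Python sketch (an illustration, not the codec from the talk) reconstructs a 1-D "image" from a few kept pixels by solving the discrete Laplace equation between them with Gauss-Seidel sweeps; all names are chosen for the example:

```python
def diffusion_inpaint_1d(signal, keep, iterations=5000):
    """Reconstruct a 1-D signal from a sparse set of kept pixels by
    homogeneous diffusion: enforce the discrete Laplace equation u'' = 0
    between kept pixels, solved with plain Gauss-Seidel sweeps."""
    n = len(signal)
    u = [signal[i] if i in keep else 0.0 for i in range(n)]
    for _ in range(iterations):
        for i in range(n):
            if i in keep:
                continue  # kept pixels act as Dirichlet boundary data
            left = u[i - 1] if i > 0 else u[i + 1]
            right = u[i + 1] if i < n - 1 else u[i - 1]
            u[i] = 0.5 * (left + right)  # discrete harmonic average
    return u

# Keep 3 of 9 samples of a ramp; harmonic (Laplace) interpolation fills
# the gaps linearly, so the ramp is recovered up to tiny iteration error.
original = [0, 1, 2, 3, 4, 5, 6, 7, 8]
kept = {0, 4, 8}
print(diffusion_inpaint_1d(original, kept))
```

In 1-D homogeneous diffusion reduces to linear interpolation; the interesting behaviour (and the role of anisotropic PDEs) only appears in 2-D, which is exactly the subject of the talk.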
In 1961, Ciesielski established a remarkable isomorphism between spaces of Hölder continuous functions and Banach spaces of real-valued sequences. The isomorphism can be established along Fourier-type expansions of (rough) Hölder continuous functions by means of the Haar-Schauder wavelet. We will use Schauder representations for a pathwise approach to the integral of one rough function with respect to another. In a more general and analytical setting, this pathwise approach of rough path analysis can be understood in terms of Littlewood-Paley decompositions of distributions and Bony paraproducts in Besov spaces. It allows a smooth approach to formal products of singular distributions, and consequently to SPDE with rough and multiplicative noise. Also, pathwise solutions of BSDE are within reach. This talk is based on work with M. Gubinelli (U Bonn) and N. Perkowski (HU Berlin).
To reduce the x-ray dose in computerized tomography (CT), many optimization approaches have been proposed aiming at minimizing the sum of a term that measures lack of consistency with the detected attenuation of x-rays and a regularizing term that measures lack of consistency with some prior knowledge about the object that is being imaged.
One commonly investigated regularizing function is total variation (TV), while other publications advocate the use of some type of multiscale geometric transform in the definition of the regularizing term; a particular recent choice for this is the \(\ell_1\)-norm of the shearlet transform.
Proponents of the \(\ell_1\)-norm of the shearlet transform for the regularizing term claim that the reconstructions so obtained are better than those produced using TV. We report results, based on simulated CT data collection of the head, that contradict the general validity of this claim.
Our experiments for making such comparisons use the superiorization methodology for both regularizing terms. Superiorization is a procedure for turning an iterative algorithm for producing images that satisfy a primary criterion (such as consistency with the observed measurements) into its superiorized version that will produce results that according to the primary criterion are as good as those produced by the original algorithm, but are superior to them according to a secondary (regularizing) criterion.
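The superiorization recipe can be sketched in a few lines. The following illustrative Python toy (a schematic, not the CT reconstruction code from the talk) superiorizes a Kaczmarz solver for a tiny linear system, using discrete total variation as the secondary criterion; the toy system and all names are assumptions for the example:

```python
def kaczmarz_sweep(x, rows, b):
    """One sweep of Kaczmarz's method for A x = b: the 'basic algorithm'
    pursuing the primary criterion (consistency with the measurements)."""
    for a, bi in zip(rows, b):
        dot = sum(ai * xi for ai, xi in zip(a, x))
        norm2 = sum(ai * ai for ai in a)
        x = [xi + (bi - dot) / norm2 * ai for xi, ai in zip(x, a)]
    return x

def tv(x):
    """Secondary (regularizing) criterion: discrete total variation."""
    return sum(abs(x[i + 1] - x[i]) for i in range(len(x) - 1))

def tv_subgradient(x):
    g = [0.0] * len(x)
    for i in range(len(x) - 1):
        s = (x[i + 1] > x[i]) - (x[i + 1] < x[i])  # sign of the difference
        g[i] -= s
        g[i + 1] += s
    return g

def superiorized_kaczmarz(rows, b, sweeps=60):
    """Interleave TV-nonincreasing perturbations (with summable step sizes)
    with Kaczmarz sweeps; the vanishing perturbations leave the primary
    convergence behaviour intact."""
    x = [0.0] * len(rows[0])
    beta = 1.0
    for _ in range(sweeps):
        g = tv_subgradient(x)
        cand = [xi - beta * gi for xi, gi in zip(x, g)]
        if tv(cand) <= tv(x):  # accept only TV-nonincreasing perturbations
            x = cand
        beta *= 0.5            # summable step sizes
        x = kaczmarz_sweep(x, rows, b)
    return x

# Underdetermined toy system: x1 + x2 = 2, x2 + x3 = 2.
rows = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0]]
b = [2.0, 2.0]
x = superiorized_kaczmarz(rows, b)
print(x, tv(x))
```

The final iterate satisfies the measurement equations to high accuracy while the interleaved perturbations bias it toward low TV, which is the essence of the superiorization methodology.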
We develop a fast phase retrieval method which can utilize a large class of local phaseless correlation-based measurements in order to recover a given signal \(x \in \mathbb{C}^d\) (up to an unknown global phase) in near-linear \(O(d \log^4 d)\)-time. Accompanying theoretical analysis proves that the proposed algorithm is guaranteed to deterministically recover all signals \(x\) satisfying a natural flatness (i.e., non-sparsity) condition for a particular choice of deterministic correlation-based measurements. A randomized version of these same measurements is then shown to provide nonuniform probabilistic recovery guarantees for arbitrary signals \(x \in \mathbb{C}^d\). Numerical experiments demonstrate the method's speed, accuracy, and robustness in practice.
In its simplest form, our proposed phase retrieval method employs a modified lifting scheme akin to the one utilized by the well-known PhaseLift algorithm. In particular, it interprets quadratic magnitude measurements of \(x\) as linear measurements of a restricted set of lifted variables, \(x_i \bar{x}_j\), for \(|j - i| < \delta \ll d\). This leads to a linear system involving a total of \((2\delta - 1)d\) unknown lifted variables, all of which can then be solved for using only \(O(\delta d)\) measurements. Once these lifted variables, \(x_i \bar{x}_j\) for \(|j - i| < \delta \ll d\), have been recovered, a fast angular synchronization method can then be used to propagate the local phase difference information they provide across the entire vector in order to estimate the (relative) phases of every entry of \(x\). In addition, the lifted variables corresponding to \(x_j \bar{x}_j = |x_j|^2\) automatically provide magnitude estimates for each entry, \(x_j\), of \(x\). The proposed phase retrieval method then approximates \(x\) by carefully combining these entry-wise phase and magnitude estimates.
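The angular synchronization step is easy to illustrate in the noiseless nearest-neighbour case (\(\delta = 2\), so only phase differences between adjacent entries are known). A minimal Python sketch, not the authors' implementation:

```python
import cmath

def angular_synchronization(mags, rel):
    """Recover a vector (up to one global phase) from its entry magnitudes
    and the relative phases of neighbouring entries.

    mags[i] -- |x_i|
    rel[i]  -- phase of x_i * conj(x_{i+1}), the local phase difference
    """
    phases = [0.0]  # fix the global phase by setting arg(x_0) = 0
    for d in rel:
        # arg(x_{i+1}) = arg(x_i) - arg(x_i * conj(x_{i+1}))
        phases.append(phases[-1] - d)
    return [m * cmath.exp(1j * p) for m, p in zip(mags, phases)]

# Demo: hand the method only |x_i| and the local phase differences.
x = [1 + 1j, 2j, -1.5, 0.5 - 0.5j]
mags = [abs(v) for v in x]
rel = [cmath.phase(x[i] * x[i + 1].conjugate()) for i in range(len(x) - 1)]
y = angular_synchronization(mags, rel)
# y agrees with x up to the global phase factor x_0 / y_0
g = x[0] / y[0]
print(all(abs(g * yi - xi) < 1e-9 for xi, yi in zip(x, y)))  # prints True
```

With larger \(\delta\) the redundant difference information makes this propagation robust to noise, which is where the actual angular synchronization machinery comes in.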
Finally, we conclude by developing an extension of the proposed method to the sparse phase retrieval problem; specifically, we demonstrate a sublinear-time compressive phase retrieval algorithm which is guaranteed to recover a given \(s\)-sparse vector \(x \in \mathbb{C}^d\) with high probability in just \(O(s \log^5 s \cdot \log d)\)-time using only \(O(s \log^4 s \cdot \log d)\) magnitude measurements. In doing so we demonstrate the existence of compressive phase retrieval algorithms with near-optimal linear-in-sparsity runtime complexities.
An efficient adaptive algorithm for computing stochastic Galerkin finite element approximations of elliptic PDE problems with random data will be outlined in this talk. The underlying differential operator will be assumed to have affine dependence on a large, possibly infinite, number of random parameters. Stochastic Galerkin approximations are sought in a tensor-product space comprising a standard \(h\)-finite element space associated with the physical domain, together with a set of multivariate polynomials characterising a \(p\)-finite-dimensional manifold of the (stochastic) parameter space.
Our adaptive strategy is based on computing distinct error estimators associated with the two sources of discretisation error. At the same time, these estimators will be shown to provide effective estimates of the error reduction for enhanced approximations. Our algorithm adaptively 'builds' a polynomial space over a low-dimensional manifold of the infinite-dimensional parameter space by reducing the energy of the combined discretisation error in an optimal manner. Convergence of the adaptive algorithm will be demonstrated numerically.
In this talk, we consider the two-dimensional case and discuss the properties of minimal spectral partitions, illustrate the difficulties by considering a simple case like the rectangle, and then give a "magnetic" characterization of these minimal partitions. We also discuss when minimal spectral partitions are nodal and estimate the number of critical points. This work started in collaboration with T. Hoffmann-Ostenhof (with preliminary work with M. and T. Hoffmann-Ostenhof and M. Owen) and has been continued with him and other coauthors: V. Bonnaillie-Noël, S. Terracini, G. Vial, C. Lena, P. Bérard, M. Persson Sundqvist.
Online algorithms are algorithms that have to make decisions before knowing the entire input. In a worst-case analysis, one considers the worst possible input for the algorithm in relation to the best offline solution. For a number of problems, however, this perspective is too pessimistic because a hypothetical adversary can trick the algorithm. In such cases, it is useful to look at input models with stochastic components. For example, the adversary may still determine the input but not its order, which is instead drawn uniformly at random from all possible permutations.
We first consider weighted bipartite online matching. The nodes of one side of the graph appear one after the other and have to be matched to the other side. In the case of random arrival order, our algorithm computes a matching whose expected weight is at least a \(1/e\)-fraction of the weight of the optimal matching in the graph; this guarantee is optimal. An important generalization of this problem is given by linear packing programs. The variables appear one after the other, and immediately upon the arrival of a variable its value must be set. Also in this case, we achieve optimal guarantees. Finally, we show how to use these techniques in more complex scenarios such as scheduling problems or with respect to other objective functions.
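The random-arrival-order model generalizes the classical secretary problem, for which the \(1/e\) guarantee is easy to check by simulation. A minimal sketch of the classical sample-then-accept rule (the matching and packing algorithms of the talk are substantially more involved):

```python
import math
import random

def secretary(values):
    """Classical 1/e-rule: observe the first ~n/e arrivals without
    accepting, then accept the first value beating all of them."""
    n = len(values)
    sample = max(1, round(n / math.e))
    threshold = max(values[:sample])
    for v in values[sample:]:
        if v > threshold:
            return v
    return values[-1]  # forced to take the last candidate

# Under a uniformly random arrival order the rule picks the single best
# candidate with probability about 1/e (roughly 0.37).
random.seed(0)
n, trials = 50, 20000
wins = 0
for _ in range(trials):
    vals = random.sample(range(n), n)  # distinct values in random order
    if secretary(vals) == n - 1:
        wins += 1
print(wins / trials)
```

The simulated success rate hovers near \(1/e \approx 0.368\), matching the optimality threshold mentioned above for the more general matching setting.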
In this lecture I will introduce the interested participant to the dynamics of surface-tension driven flows of liquids in networks of channels. The origins of interest in these flows lie with the study of palm beetles and their clever defense mechanism which consists of excreting oily liquids and maintaining strong ground suction by controlling the resultant liquid bridges. Needless to say, a couple of engineers earned a lot of money by exploiting this mechanism to build a novel suction device.
These channel flows are driven by surface-tension-induced volume scavenging, where fluid droplets of varying sizes leech off one another to increase in volume. They are modeled by a system of nonlinear ODEs when one exploits the relationship between pressure gradient and flow rate in the form of the Hagen-Poiseuille law or a similar law. Of interest in this presentation are the stability and long-term behavior of such flows in the case of power-law liquids, where equilibria are non-hyperbolic.
The analytical techniques used to study some basic aspects of these fluid flows are neat, even though they fall largely within the context of "elementary math" and hardly exceed what bachelor's students know. Hence this presentation is intended as a mathematical excursion for any interested participant who wants to spend some quality time on applied mathematics and see a physics-based example where nice results can be had without brute force analysis. I'll even show some moving pictures and address some related questions for which the answers are “physically intuitive,” but mathematically unproven.
The presentation is based on joint work with Paul Steen (Cornell).
In 2010, M.A. Iwen (in Found. Comput. Math., 10(3):303-338, 2010) introduced a deterministic combinatorial sublinear-time Fourier algorithm for estimating the best \(k\)-term Fourier representation of a given frequency-sparse signal, relying heavily on the Chinese Remainder Theorem and combinatorial concepts. In 2016, a different deterministic sublinear Fourier algorithm for input signals with small support length was proposed, which employs periodizations of the signal and requires that the signal length is a power of 2 (Plonka and Wannenwetsch in Numerical Algorithms, 71(4):889-905, 2016). In this talk we will develop Iwen's algorithm from examples for the case of an input function with small support length, combining the Chinese Remainder Theorem approach for arbitrary signal lengths with the structure given by the small support. This reduces the runtime of the algorithm, as the effortful combinatorial part can be omitted.
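The role of the Chinese Remainder Theorem can be illustrated on a single-frequency signal: subsampling aliases the unknown frequency to its residues modulo coprime factors of the signal length, and CRT stitches the residues back together. A toy Python sketch (for one frequency only; the algorithm of the talk handles \(k\)-sparse signals and is far more efficient):

```python
import cmath

def dft_peak(samples):
    """Index of the dominant frequency in a short sequence via a naive DFT."""
    n = len(samples)
    energy = [abs(sum(samples[j] * cmath.exp(-2j * cmath.pi * k * j / n)
                      for j in range(n))) for k in range(n)]
    return max(range(n), key=lambda k: energy[k])

def crt(r1, p, r2, q):
    """Combine residues r1 mod p and r2 mod q (p, q coprime) by brute force."""
    for v in range(p * q):
        if v % p == r1 and v % q == r2:
            return v

# A single frequency f hidden in a length-N signal, N = p * q, p, q coprime.
p, q = 7, 11
N = p * q
f = 52
x = [cmath.exp(2j * cmath.pi * f * n / N) for n in range(N)]

# Subsampling with stride q (resp. p) aliases f to f mod p (resp. f mod q),
# so two short DFTs of lengths 7 and 11 suffice instead of one of length 77.
r_p = dft_peak([x[j * q] for j in range(p)])
r_q = dft_peak([x[j * p] for j in range(q)])
print(crt(r_p, p, r_q, q))  # recovers f = 52
```

Only \(p + q = 18\) samples of the length-77 signal are touched, which is the sublinear-sampling idea behind the CRT-based algorithms discussed in the talk.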
Manifold-based image models are assumed in many engineering applications involving imaging and image classification. In the setting of image classification, in particular, proposed designs for small and cheap cameras motivate compressive imaging applications involving manifolds. Interesting mathematics results when one considers that the problem one needs to solve in this setting ultimately involves questions concerning how well one can embed a low-dimensional smooth sub-manifold of high-dimensional Euclidean space into a much lower dimensional space without knowing any of its detailed structure. We will motivate this problem and discuss how one might accomplish this seemingly difficult task using random projections. Few, if any, prerequisites will be assumed.
In the talk, the long-time behaviour of numerical methods for Hamiltonian differential equations is discussed, in particular the near-conservation of energy by symplectic numerical methods on long time intervals. In the case of Hamiltonian ordinary differential equations, this can be rigorously shown by a backward error analysis. After an introduction to this classical result, the difficulties in extending such a result to Hamiltonian partial differential equations like the nonlinear Schrödinger equation are described. Finally, a recent result on long-time near-conservation of energy by the (symplectic) split-step Fourier method applied to the (Hamiltonian) nonlinear Schrödinger equation is presented.
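The phenomenon of long-time energy near-conservation by symplectic methods is already visible for the harmonic oscillator. A minimal pure-Python comparison of symplectic versus explicit Euler (an illustration of the general principle, not the split-step Fourier method itself):

```python
def symplectic_euler(p, q, h, steps):
    """Symplectic Euler for the harmonic oscillator H = (p^2 + q^2)/2:
    update the momentum first, then the position with the NEW momentum."""
    for _ in range(steps):
        p -= h * q
        q += h * p
    return p, q

def explicit_euler(p, q, h, steps):
    """Non-symplectic explicit Euler: both updates use the OLD values."""
    for _ in range(steps):
        p, q = p - h * q, q + h * p
    return p, q

def energy(p, q):
    return 0.5 * (p * p + q * q)

h, steps = 0.01, 100_000  # integrate up to time t = 1000
ps, qs = symplectic_euler(0.0, 1.0, h, steps)
pe, qe = explicit_euler(0.0, 1.0, h, steps)
print("symplectic energy:", energy(ps, qs))  # stays near the initial 0.5
print("explicit energy:  ", energy(pe, qe))  # drifts far from 0.5
```

Backward error analysis explains the first observation: the symplectic method exactly conserves a modified Hamiltonian close to \(H\), so the true energy merely oscillates at size \(O(h)\) over very long times, while the explicit method gains energy at every step.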
Coherent structures in geophysical flows play fundamental roles by organising fluid flow and obstructing transport. For example, in the ocean, coherence manifests itself at global scales down to scales of at least tens of kilometres, and strongly influences the transportation of heat, salt, nutrients, phytoplankton, pollution, and garbage. I will describe some recent mathematical constructions, ranging across dynamical systems, probability, and geometry, which enable the accurate identification and tracking of such structures, and the quantification of associated mixing and transport properties. I will present case studies from a variety of geophysical settings.
Onyx is a free software environment for creating and estimating Structural Equation Models (SEM). It provides a graphical user interface that facilitates the intuitive creation of models, and a powerful backend for performing maximum likelihood estimation of parameters. In this presentation, some concepts of Onyx will be presented, and the operation of the program will be demonstrated. We will also take a quick look under the hood at the optimization algorithms used in Onyx.
Our interest lies in understanding a free boundary problem to a fourth-order thin-film equation with quadratic mobility and a zero contact angle at the triple junction, where air, liquid, and solid meet. This equation can be derived from the Navier-Stokes system with Navier-slip at the liquid-solid interface, removing the contact-line singularity that occurs if no slip is assumed. While for linear mobility (Darcy dynamics) a strong analogy to the second-order porous medium equation holds, this is no longer the case in our setting, leading to singular expansions of solutions at the free boundary.
I will first discuss the model problem of source-type self-similar solutions, where ODE and dynamical systems theory are available, to characterize the contact-line singularity. This is based on two publications with Lorenzo Giacomelli and Felix Otto, and with Fethi Ben Belgacem and Christian Kühn, respectively. Then I will present a well-posedness result for solutions close to a traveling wave (joint with Lorenzo Giacomelli, Hans Knüpfer, and Felix Otto). I will further discuss how to obtain regularity of solutions at the free boundary, which one may view as a generalized smoothing property. The talk is concluded by outlooks on how to potentially treat general mobilities and the multi-dimensional thin-film problem.
In this talk, I present a solution method for convex optimization. In the framework of this method, the original convex optimization problem is first reduced to a convex feasibility problem. Then several instances of the method of alternating projections are applied simultaneously. Each instance is assigned its own subproblem. In a polynomial number of iterations, at least one instance of the alternating projections solves its subproblem. If this subproblem is infeasible, the original optimization problem is simplified. This method leads to polynomial algorithms for linear optimization and for some combinatorial problems which are representable as convex problems by means of polynomial separation oracles.
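As a toy illustration of the core subroutine, the following Python sketch runs the method of alternating projections on a simple feasibility problem: find a point in the intersection of a half-plane and a disc (a schematic example; the talk's method runs several such instances in parallel, with subproblems defined via separation oracles):

```python
def project_halfplane(pt, a, b, c):
    """Project the point (x, y) onto the half-plane {a*x + b*y <= c}."""
    x, y = pt
    viol = a * x + b * y - c
    if viol <= 0:
        return pt  # already feasible
    norm2 = a * a + b * b
    return (x - viol * a / norm2, y - viol * b / norm2)

def project_disc(pt, center, r):
    """Project the point (x, y) onto the disc of radius r around center."""
    dx, dy = pt[0] - center[0], pt[1] - center[1]
    d = (dx * dx + dy * dy) ** 0.5
    if d <= r:
        return pt  # already inside
    return (center[0] + r * dx / d, center[1] + r * dy / d)

def alternating_projections(pt, steps=200):
    """Von Neumann's alternating projections for the feasibility problem:
    find a point in {x + y <= 1} intersected with {x^2 + y^2 <= 4}."""
    for _ in range(steps):
        pt = project_halfplane(pt, 1.0, 1.0, 1.0)
        pt = project_disc(pt, (0.0, 0.0), 2.0)
    return pt

p = alternating_projections((5.0, 5.0))
print(p)  # → (0.5, 0.5), a point in the intersection
```

For two convex sets with nonempty intersection the iterates converge to a feasible point; the method in the talk couples many such instances and uses an infeasible subproblem to simplify the original optimization problem.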
We consider the critical behaviour of long-range O(n) models for n greater than or equal to 0. For n=1,2,3,... these are phi^4 spin models. For n=0 it is the weakly self-avoiding walk. We prove existence of critical exponents for the susceptibility and the specific heat, below the upper critical dimension. This is a rigorous version of the epsilon expansion in physics. The proof is based on a rigorous renormalisation group method developed in previous work with Bauerschmidt and Brydges.
In 1939, Frenkel and Kontorova proposed a model for the motion of a dislocation (an imperfection in a crystal). The model is simple: a chain of atoms following Newton's equations of motion. The atoms interact with their nearest neighbours via harmonic springs and are exposed to a periodic (non-convex) on-site potential. Despite its simplicity, the model has proved to be a mathematical challenge. Iooss and Kirchgässner made a fundamental contribution regarding the existence of small solutions, using centre manifold theory. The talk will introduce the model and then present recent results on the coherent motion of dislocations with periodic, and possibly anharmonic, on-site potentials. In the anharmonic case, the proof first establishes the existence of possibly large wave-trains via centre manifold theory and then employs a fixed-point argument to show the existence of a travelling dislocation. These results are joint work with Boris Buffoni (Lausanne) and Hartmut Schwetlick (Bath).
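The Newtonian dynamics described above are easy to sketch numerically. The snippet below integrates a small Frenkel-Kontorova chain with the standard periodic on-site potential V(u) = 1 - cos(u) (an illustrative choice; the talk treats general periodic, possibly anharmonic, potentials) using velocity-Verlet time stepping with the two boundary atoms held fixed.

```python
import numpy as np

def fk_force(u, kappa=1.0):
    # Nearest-neighbour harmonic coupling plus the periodic on-site
    # potential V(u) = 1 - cos(u); boundary atoms are held fixed.
    f = np.zeros_like(u)
    f[1:-1] = kappa * (u[2:] - 2.0 * u[1:-1] + u[:-2]) - np.sin(u[1:-1])
    return f

def energy(u, v, kappa=1.0):
    # Total energy (kinetic + springs + on-site), conserved by the dynamics.
    return (0.5 * np.sum(v ** 2)
            + 0.5 * kappa * np.sum((u[1:] - u[:-1]) ** 2)
            + np.sum(1.0 - np.cos(u)))

def simulate(u0, v0, dt=1e-2, steps=1000):
    # Velocity-Verlet integration of Newton's equations of motion.
    u, v = u0.copy(), v0.copy()
    f = fk_force(u)
    for _ in range(steps):
        v += 0.5 * dt * f
        u += dt * v
        f = fk_force(u)
        v += 0.5 * dt * f
    return u, v
```

A quick sanity check is that the total energy is (approximately) conserved along the numerical trajectory.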
Spatial random permutations are probability measures on permutations of a set with spatial structure; these measures are constructed so that they favor permutations that map points to nearby points. The strength of this effect is encoded in a parameter alpha > 0, where larger alpha means a stronger bias toward short jumps. I will introduce some variants of the model and explain the connections to the theory of Bose-Einstein condensation. Then I will present a few older results, as well as very recent progress made jointly with Lorenzo Taggi (TU Darmstadt) for the regime of large alpha. Finally, I will discuss two conjectures suggested by numerical simulation: in two dimensions, the model appears to exhibit a Kosterlitz-Thouless phase transition, and there are reasons to believe that in the phase of algebraic decay of correlations, long cycles are Schramm-Löwner curves, with parameter between 4 and 8 depending on alpha.
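As a minimal illustration of such measures (not of the models analysed in the talk), the following Python sketch runs a Metropolis sampler over permutations of one-dimensional points, weighted by exp(-alpha * sum_i |x_i - x_pi(i)|^2); the transposition moves and all parameters are illustrative choices.

```python
import numpy as np

def perm_energy(points, perm, alpha):
    # H(pi) = alpha * sum_i |x_i - x_{pi(i)}|^2: short jumps are favoured,
    # and larger alpha penalises long jumps more strongly.
    return alpha * np.sum((points - points[perm]) ** 2)

def metropolis_step(points, perm, alpha, rng):
    # Propose swapping two images of the permutation; accept with the
    # usual Metropolis probability min(1, exp(-dH)).
    i, j = rng.integers(len(perm), size=2)
    proposal = perm.copy()
    proposal[i], proposal[j] = proposal[j], proposal[i]
    dH = perm_energy(points, proposal, alpha) - perm_energy(points, perm, alpha)
    if dH <= 0.0 or rng.random() < np.exp(-dH):
        return proposal
    return perm

def sample(points, alpha, steps=2000, seed=0):
    # Start from the identity permutation and run the Markov chain.
    rng = np.random.default_rng(seed)
    perm = np.arange(len(points))
    for _ in range(steps):
        perm = metropolis_step(points, perm, alpha, rng)
    return perm
```

Each step preserves the permutation property, so the chain explores the symmetric group while biasing toward configurations with short jumps.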
This talk will introduce Sparse Power Factorization, which is an algorithm for reconstructing a rank-one matrix with sparsity constraints from a few (linear) measurements. In particular, the speaker will talk about sufficient conditions for recovery. Furthermore, open questions and numerical results will be discussed. This talk is based on the speaker's master's thesis.
The simplest model of a bicycle is a segment of fixed length that can move, in n-dimensional Euclidean space, so that the velocity of the rear end is always aligned with the segment (the rear wheel is fixed on the frame). The rear wheel track and a choice of direction uniquely determine the front wheel track; reversing the direction yields another front track. The two tracks are related by the bicycle (Darboux) transformation, which defines a discrete-time dynamical system on the space of curves. I shall discuss the symplectic (and, in dimension 3, bi-symplectic) nature of this transformation and its relation, in dimension 3, with the filament equation. An interesting problem is to describe the curves that are in the bicycle correspondence with themselves (in this case, given the front and rear tracks, one cannot tell which way the bicycle went). In dimension two, such curves yield solutions to Ulam's problem: is the round ball the only body that floats in equilibrium in all positions? I shall discuss F. Wegner's results on this problem and relate them with the planar filament equation. Open problems and conjectures will be emphasized.
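The rear-to-front correspondence described above is easy to sketch numerically: since the rear velocity is aligned with the frame, the front wheel lies at fixed distance along the unit tangent of the rear track. A minimal Python sketch, with the tangent discretised by finite differences (the function name is ours):

```python
import numpy as np

def front_track(rear, length):
    # rear: (N, d) array of successive points on the rear-wheel track.
    # The front wheel sits at distance `length` along the unit tangent
    # of the rear track; reversing the orientation of `rear` gives the
    # second front track.
    tangent = np.gradient(rear, axis=0)  # finite-difference tangent
    tangent /= np.linalg.norm(tangent, axis=1, keepdims=True)
    return rear + length * tangent
```

A convenient sanity check: for a circular rear track of radius R, the front track is a concentric circle of radius sqrt(R^2 + length^2).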
The study of transport and mixing processes in dynamical systems is important for the analysis of mathematical models of physical systems. I will describe a novel, direct geometric method to identify subsets of phase space that remain strongly coherent over a finite time duration. The method is based on a dynamic extension of classical (static) isoperimetric problems; the latter are concerned with identifying submanifolds with the smallest boundary size relative to their volume. I will introduce dynamic isoperimetric problems: the study of sets with small boundary size relative to volume as they are evolved by a general dynamical system. I will state dynamic versions of the fundamental (static) isoperimetric (in)equalities: a dynamic Federer-Fleming theorem and a dynamic Cheeger inequality. I will also introduce a dynamic Laplace operator and describe a computational method to identify coherent sets based on eigenfunctions of the dynamic Laplacian. Our results include formal mathematical statements concerning geometric properties of finite-time coherent sets, whose boundaries can be regarded as Lagrangian coherent structures. The computational advantages of this approach are a well-separated spectrum for the dynamic Laplacian, and flexibility in appropriate numerical approximation methods. Finally, we demonstrate that the dynamic Laplace operator can be realised as a zero-diffusion limit of a recent probabilistic transfer operator method for finding coherent sets, based on small diffusion.