In diffusion models, a few suitably chosen financial securities suffice to complete the market. As a consequence, the efficient allocations of static Arrow-Debreu equilibria can be attained in Radner equilibria by dynamic trading. We show that this celebrated result generically fails if there is Knightian uncertainty about volatility. A Radner equilibrium with the same efficient allocation as in an Arrow-Debreu equilibrium exists if and only if the discounted net trades of the equilibrium allocation display no ambiguity in the mean. This property is violated generically in endowments, and thus Arrow-Debreu equilibrium allocations are generically unattainable by dynamically trading few long-lived assets.
Link to the paper: https://pub.unibielefeld.de/publication/2901673
We solve explicitly a two-dimensional singular control problem of finite fuel type in infinite time horizon. The problem stems from the optimal liquidation of an asset position in a financial market with finite stochastic illiquidity. Price impact is multiplicative and transient with stochastic resilience. The optimal control is obtained as a diffusion process reflected at a non-constant free boundary. To solve the HJB variational inequality and prove optimality, we apply new results on the Laplace transforms of the inverse local times for diffusions reflected at elastic boundaries. This talk is based on joint papers with Todor Bilarev and Peter Frentrup.
We propose a model for scheduling jobs in a parallel machine setting that takes into account the cost of migrations by assuming that the processing time of a job may depend on the specific set of machines among which the job is migrated. For the makespan minimization objective, the model generalizes classical scheduling problems such as \(R \mid\mid C_{\max}\) and \(P \mid pmtn \mid C_{\max}\), as well as novel scenarios such as semi-partitioned and clustered scheduling. In the case of a \(k\)-level hierarchical family of machines, we prove an upper bound on the approximation ratio of the problem equal to \(1 + H_k\), where \(H_k\) is the \(k\)-th harmonic number. When \(k = 2\), an improved upper bound of \(2 + 1/m\) is provided, where \(m\) is the number of machines. The results are achieved via an improved rounding scheme for assignment/packing constraints. An extension that incorporates memory capacity constraints is also discussed.
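As a quick numerical aside (an editorial illustration, not part of the abstract), the two stated upper bounds are easy to tabulate; the helper names `harmonic`, `hierarchical_bound`, and `two_level_bound` are hypothetical:

```python
from fractions import Fraction

def harmonic(k):
    """k-th harmonic number H_k = 1 + 1/2 + ... + 1/k, computed exactly."""
    return sum(Fraction(1, i) for i in range(1, k + 1))

def hierarchical_bound(k):
    """Approximation-ratio upper bound 1 + H_k for a k-level machine hierarchy."""
    return 1 + harmonic(k)

def two_level_bound(m):
    """Improved upper bound 2 + 1/m for the k = 2 case with m machines."""
    return 2 + Fraction(1, m)

for k in range(1, 5):
    print(k, float(hierarchical_bound(k)))
print(float(two_level_bound(8)))
```

For \(k = 2\) the general bound gives \(1 + H_2 = 2.5\), so the improved bound \(2 + 1/m\) is strictly better whenever \(m > 2\).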
While absence of arbitrage in frictionless financial markets (i.e. without transaction costs) requires price processes to be semimartingales, non-semimartingales can be used to model prices in an arbitrage-free way if proportional transaction costs are taken into account. In this talk, I will present an overview of several results that show how to use non-semimartingale price processes such as the fractional Black-Scholes model in portfolio optimisation under proportional transaction costs by establishing the existence of a so-called shadow price. This is a semimartingale price process, taking values in the bid-ask spread, such that frictionless trading for that price process leads to the same optimal strategy and utility as the original problem under transaction costs.
The talk is based on joint work with Walter Schachermayer.
In the lectures I shall go through basic elements of undirected graphical Gaussian models, their maximum likelihood theory, and discuss features arising when additional structure such as symmetry and total positivity is taken into account. I shall describe and discuss alternative methods of estimation and associated existence problems. (2nd session: 4 May 2016; 3rd session: 9 May 2016)
Over the last years, the new paradigm of Isogeometric Analysis (IGA) has demonstrated its potential to bridge the gap between Computer Aided Design and the Finite Element Method (FEM). The distinctive aspect of IGA is the usage of one common basis for creating geometry models, for meshing, and for numerical simulation. In this way, a seamless integration of all computational tools within a single design loop comes into reach. Moreover, increased smoothness of the basis functions and an exact representation of the boundary are properties which are also attractive from a numerical viewpoint.
The presentation is aimed at the application of IGA in the field of solid mechanics, in particular vibrational analysis. We start with a short overview of the methodology, point out the common features and differences compared to the FEM, and then concentrate on the analysis of linear and nonlinear problems where the numerical advantages of higher smoothness become apparent. The last part of the talk is devoted to the field of shape optimization, which benefits in particular from the IGA framework.
We consider the Hardy inequality for Sobolev spaces of real-valued order on all of \(\mathbb{R}^N\). To this end, we generalize the method of ground-state substitution and obtain, in addition to a sharpened version with a remainder term, above all the sharp constant of the inequality. The talk presents the main points of the speaker's master's thesis.
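For orientation (an editorial note, not part of the abstract), the classical first-order case of the inequality on \(\mathbb{R}^N\), \(N \ge 3\), whose fractional-order generalization the talk addresses, reads:

```latex
\int_{\mathbb{R}^N} |\nabla u|^2 \, dx \;\ge\; \left(\frac{N-2}{2}\right)^2
\int_{\mathbb{R}^N} \frac{u^2}{|x|^2} \, dx,
\qquad u \in C_c^\infty(\mathbb{R}^N),
```

where \(\left(\frac{N-2}{2}\right)^2\) is the sharp constant, not attained by any admissible \(u\).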
Meta-analyses and systematic reviews are the cornerstones of evidence-based medicine and inform treatment, diagnosis, or prevention of individual patients as well as policy decisions in health care. Statistical methods for the meta-analysis of intervention studies are well established today. Meta-analysis for diagnostic accuracy trials, however, has been a vivid research area in recent years, largely due to the increased complexity of diagnostic studies with their bivariate outcome of sensitivity and specificity. The complexity increases further when single studies do not report only a single pair of sensitivity and specificity, but a full ROC curve with several pairs of sensitivity and specificity, each pair for a different threshold. Researchers frequently ignore this information and use only one pair of sensitivity and specificity from each study to arrive at meta-analytic estimates. Although methods to deal with the full information have been proposed [1-5], these are not without problems: e.g., they are two-step approaches where estimation uncertainty from the first step is ignored in the second step, the number of thresholds has to be identical across studies, or the concrete values of thresholds are ignored, making clinically relevant inference on sensitivity and specificity at given thresholds impossible. We propose two approaches for the meta-analysis of full ROC curves that use the information from all thresholds. The first approach simply expands the standard bivariate random effects model to a meta-regression model. The second approach uses the interpretation of an ROC curve as a bivariate time-to-event model for interval-censored data. This work is motivated by two systematic reviews on population-based screening for type 2 diabetes mellitus [6,7] which report on 38 single studies to assess HbA1c as a diagnostic marker.
Both reviews report only single pairs of sensitivity and specificity from each single study, but an intensified search yields 124 pairs of sensitivity and specificity for 26 different HbA1c thresholds from the 38 single studies.
In the lectures I shall go through basic elements of undirected graphical Gaussian models, their maximum likelihood theory, and discuss features arising when additional structure such as symmetry and total positivity is taken into account. I shall describe and discuss alternative methods of estimation and associated existence problems. (3rd session: 9 May 2016)
The lecture presents a recent methodology allowing one to execute numerical computations with finite, infinite, and infinitesimal numbers on a new type of computer – the Infinity Computer – patented in the EU, USA, and Russia. The new approach is based on the principle 'The whole is greater than the part' (Euclid's Common Notion 5), which is applied to all numbers (finite, infinite, and infinitesimal) and to all sets and processes (finite and infinite). It is shown that it becomes possible to write down finite, infinite, and infinitesimal numbers with a finite number of symbols as particular cases of a unique framework different from that of non-standard analysis. The new methodology evolves the ideas of Cantor and Levi-Civita in a more applied way and, among other things, introduces new infinite integers that possess both cardinal and ordinal properties, as do usual finite numbers. It is emphasized that the philosophical triad – researcher, object of investigation, and tools used to observe the object – existing in such natural sciences as Physics and Chemistry, exists in Mathematics, too. In natural sciences, the instrument used to observe the object influences the results of observations. The same happens in Mathematics, where the numeral systems used to express numbers are among the instruments of observation used by mathematicians. The usage of powerful numeral systems makes it possible to obtain more precise results in Mathematics, in the same way as the usage of a good microscope makes it possible to obtain more precise results in Physics. A numeral system using a new numeral called grossone is described. It allows one to express infinities and infinitesimals easily, offering rich capabilities for describing mathematical objects, mathematical modeling, and practical computations. The concept of the accuracy of numeral systems is introduced. The accuracy of the new numeral system is compared with traditional numeral systems used to work with infinity.
The new methodology has been successfully used in a number of applications: Turing machines and lexicographic ordering, cellular automata, percolation and biological processes, numerical differentiation, optimization, and ODEs, fractals, infinite series, set theory, hyperbolic geometry, etc. The Infinity Calculator using the Infinity Computer technology is presented during the talk.
This talk concerns minimal energy configurations as well as maximal polarization (Chebyshev) configurations on manifolds, which are problems asymptotically related to best-packing and best-covering. In particular, we discuss how to generate non-structured grids of N points on a d-dimensional manifold that have the desirable qualities of well-separation and optimal order covering radius, while asymptotically having a uniform distribution. Even for certain small numbers of points like N=5, optimal arrangements with regard to energy and polarization can be a challenging problem.
In the lectures I shall go through basic elements of undirected graphical Gaussian models, their maximum likelihood theory, and discuss features arising when additional structure such as symmetry and total positivity is taken into account. I shall describe and discuss alternative methods of estimation and associated existence problems.
In this talk, we focus on a one-dimensional model of individuals/particles performing independent random walks on Z in which only pairs of individuals can produce offspring (cooperative branching) and individuals that land on an occupied site merge with the individual present on that site (coalescence). In a biological context, the resulting cooperative branching-coalescent describes a simple population dynamics with reproducing pairs of particles; coalescence models death due to competition for resources. We argue that the model can also be used as an approximation to a population with two sexes in which only pairs of the opposite sex can reproduce. In addition, the process describes the interface dynamics of a multitype voter model in which rare types have an advantage. Mathematically, the cooperative branching-coalescent has interesting properties: we show that the system undergoes a phase transition as the branching rate is increased. For small branching rates the upper invariant law is trivial and the process started with finitely many individuals a.s. ends up with a single individual. Neither statement remains true for high branching rates. We also study the decay of the population density of the process started in the fully occupied state if the branching rate is small enough. This talk is based on joint work with Jan Swart (UTIA Prague).
After the introduction of random matrices to nuclear physics by Eugene Wigner in 1955, random quantum systems have grown in popularity. Wigner's idea was to consider families of Hamiltonians that follow a certain probability distribution to describe overly complicated systems. Of particular interest are, of course, the spectra of these Hamiltonians. In this talk we consider random, in general non-self-adjoint, tridiagonal operators on the Hilbert space of square-summable sequences. To model randomness, we use an approach by Davies that eliminates probabilistic arguments. Despite the rising interest, not much is known about the spectra of non-self-adjoint random operators. The Feinberg-Zee random hopping matrix reveals this in a beautiful manner. The boundary of its spectrum appears to be fractal, but a proof has yet to be found. We take a step in that direction by showing that the spectrum has an infinite sequence of polynomial symmetries. This not only enlarges known subsets of the spectrum by sizeable amounts, but also implies that the spectrum contains an infinite sequence of Julia sets.
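As an illustrative sketch (an editorial addition, not from the talk), one can sample a finite section of the Feinberg-Zee random hopping matrix, with superdiagonal entries 1 and subdiagonal entries given by independent random signs, and inspect its eigenvalues; such finite-section spectra are what numerical pictures of the fractal-looking boundary are based on:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 200
sub = rng.choice([-1.0, 1.0], size=n - 1)          # random +/-1 hopping signs
A = np.diag(np.ones(n - 1), 1) + np.diag(sub, -1)  # finite section of the operator
ev = np.linalg.eigvals(A)                          # non-self-adjoint: complex eigenvalues
print(np.abs(ev).max())                            # bounded by the operator norm, which is <= 2
```

The bound follows since each row and column sum of absolute values is at most 2, so \(\|A\|_2 \le 2\).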
Large pedestrian crowds often exhibit complex dynamics. There is a vast literature on different mathematical approaches, ranging from microscopic descriptions of the individual dynamics to macroscopic equations for the evolution of the crowd. In this talk, we focus on optimal control models, which describe the evolution of a large pedestrian group trying to reach a specific target with minimal cost. We discuss different models regarding the cost functionals and PDE constraints as well as the connection to the Hughes model for pedestrian flow. We propose a space-time method based on the Benamou-Brenier formulation of optimal transport problems and illustrate the dynamics with numerical simulations.
We study frustration-free quantum lattice systems with a non-vanishing spectral gap above one or more (infinite-volume) ground states. The ground states are called stable if arbitrary perturbations of the Hamiltonian that are uniformly small throughout the lattice have only a perturbative effect. In the past several years such stability results have been obtained in increasing generality. We review results by Bravyi-Hastings, Bravyi-Hastings-Michalakis, and Michalakis-Zwolak, as well as some recent refinements. This is joint work with Robert Sims and Amanda Young.
Compressive sensing in its most practical form aims to recover a function that exhibits sparsity in a given basis from as few function samples as possible. One of the fundamental results of compressive sensing tells us that \(O(s \log^4 N)\) samples suffice in order to robustly and efficiently recover any function that is a linear combination of $s$ arbitrary elements from a given bounded orthonormal set of size $N > s$. Furthermore, the associated recovery algorithms (e.g., Basis Pursuit via convex optimization methods) are efficient in practice, running in just polynomial-in-$N$ time. However, when $N$ is very large (e.g., if the domain of the given function is high-dimensional), even these runtimes may become infeasible.
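As a minimal illustration of the Basis Pursuit recovery mentioned above (an editorial sketch, not from the talk; the sizes chosen here are arbitrary), one can cast \(\min \|x\|_1\) subject to \(Ax = b\) as a linear program in the variable split \(x = x^+ - x^-\):

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
N, m, s = 64, 32, 3                              # ambient dim, samples, sparsity
A = rng.standard_normal((m, N)) / np.sqrt(m)     # random sampling matrix
x_true = np.zeros(N)
x_true[rng.choice(N, s, replace=False)] = rng.standard_normal(s)
b = A @ x_true                                   # the m observed samples

# Basis Pursuit: min ||x||_1 s.t. Ax = b, written as an LP in (x+, x-) >= 0
c = np.ones(2 * N)
A_eq = np.hstack([A, -A])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=(0, None), method="highs")
x_hat = res.x[:N] - res.x[N:]
print(np.max(np.abs(x_hat - x_true)))            # small for generic instances
```

With \(m = 32\) Gaussian samples and \(s = 3\), this is far above the \(O(s \log^4 N)\) sampling threshold, so exact recovery is the typical outcome.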
If the orthonormal basis above is Fourier, then the sparse recovery problem above can also be solved using Sparse Fourier Transform (SFT) techniques. Though these methods aim to solve the same problem, they have a different focus. Principally, they aim to reduce the runtime of the recovery algorithm as much as absolutely possible, and are willing to sample the function a bit more often than a compressive sensing method might in order to achieve that objective. By doing so, one can indeed achieve similar recovery guarantees to Basis Pursuit, but with radically reduced runtimes that depend only logarithmically on $N$. However, SFTs are highly adapted to the special properties of the Fourier basis, making their extension to other orthonormal bases difficult.
In this talk we will present a general framework that can be used in order to construct a highly efficient SFT algorithm. The framework abstracts many of the components required for SFT design in an attempt to simplify the application of SFT ideas to other basis choices. Extension of arbitrary SFTs to the Chebyshev and Legendre polynomial bases will also be discussed.
Partial differential equations (PDEs) are widely used to model phenomena in nature. In this talk we will see that they also have a high potential to compress digital images.
The idea sounds temptingly simple: we keep only a small fraction of the pixels and reconstruct the remaining data with PDE-based interpolation. This gives rise to three interdependent questions:
1. Which data should be kept?
2. What are the most useful PDEs?
3. How can the selected data be encoded efficiently?
Solving these problems requires combining ideas from different mathematical disciplines such as mathematical modelling, optimisation, interpolation and approximation, and numerical methods for PDEs.
Since the talk is intended for a broad audience, we focus on intuitive ideas, and no specific knowledge in image processing is required.
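To make the interpolation step concrete (an editorial sketch, not from the talk), here is the simplest PDE choice, homogeneous diffusion: unknown pixels are relaxed toward the average of their four neighbours (a Jacobi iteration for the Laplace equation) while the kept pixels stay fixed. The helper name `diffusion_inpaint` and the toy image are my own choices:

```python
import numpy as np

def diffusion_inpaint(known, mask, iters=2000):
    """Reconstruct an image from a sparse pixel mask by homogeneous diffusion
    (Laplace) interpolation: unknown pixels move toward the mean of their
    4 neighbours; pixels where mask is True are kept at their known values."""
    u = np.where(mask, known, known[mask].mean())   # init unknowns with the mean
    for _ in range(iters):
        avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0)
                      + np.roll(u, 1, 1) + np.roll(u, -1, 1))
        u = np.where(mask, known, avg)
    return u

# toy example: a smooth ramp image, keeping only ~10% of the pixels
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 32)
img = x[None, :] + x[:, None]                       # smooth (discretely harmonic) image
mask = rng.random(img.shape) < 0.1
mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = True   # pin the border
rec = diffusion_inpaint(img, mask)
print(np.abs(rec - img).max())                      # small: the image is harmonic
```

Real codecs use more sophisticated PDEs (e.g. edge-enhancing anisotropic diffusion) and optimized masks, which is precisely what questions 1 and 2 above are about.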
In 1961, Ciesielski established a remarkable isomorphism between spaces of Hölder continuous functions and Banach spaces of real-valued sequences. The isomorphism can be established along Fourier-type expansions of (rough) Hölder continuous functions by means of the Haar-Schauder wavelet. We will use Schauder representations for a pathwise approach to the integral of one rough function with respect to another one. In a more general and analytical setting, this pathwise approach of rough path analysis can be understood in terms of Littlewood-Paley decompositions of distributions and Bony paraproducts in Besov spaces. It allows a smooth approach to formal products of singular distributions, and consequently to SPDE with rough and multiplicative noise. Also, pathwise solutions of BSDE are within reach. This talk is based on work with M. Gubinelli (U Bonn) and N. Perkowski (HU Berlin).
To reduce the x-ray dose in computerized tomography (CT), many optimization approaches have been proposed aiming at minimizing the sum of a term that measures lack of consistency with the detected attenuation of x-rays and a regularizing term that measures lack of consistency with some prior knowledge about the object that is being imaged.
One commonly investigated regularizing function is total variation (TV), while other publications advocate the use of some type of multiscale geometric transform in the definition of the regularizing term; a particular recent choice for this is the l1-norm of the shearlet transform.
Proponents of the l1-norm of the shearlet transform for the regularizing term claim that the reconstructions so obtained are better than those produced using TV. We report results, based on simulated CT data collection of the head, that contradict the general validity of this claim.
Our experiments for making such comparisons use the superiorization methodology for both regularizing terms. Superiorization is a procedure for turning an iterative algorithm for producing images that satisfy a primary criterion (such as consistency with the observed measurements) into its superiorized version that will produce results that according to the primary criterion are as good as those produced by the original algorithm, but are superior to them according to a secondary (regularizing) criterion.
We develop a fast phase retrieval method which can utilize a large class of local phaseless correlation-based measurements in order to recover a given signal \(x \in \mathbb{C}^d\) (up to an unknown global phase) in near-linear \(O(d \log^4 d)\) time. Accompanying theoretical analysis proves that the proposed algorithm is guaranteed to deterministically recover all signals \(x\) satisfying a natural flatness (i.e., non-sparsity) condition for a particular choice of deterministic correlation-based measurements. A randomized version of these same measurements is then shown to provide non-uniform probabilistic recovery guarantees for arbitrary signals \(x \in \mathbb{C}^d\). Numerical experiments demonstrate the method's speed, accuracy, and robustness in practice.
In its simplest form, our proposed phase retrieval method employs a modified lifting scheme akin to the one utilized by the well-known PhaseLift algorithm. In particular, it interprets quadratic magnitude measurements of \(x\) as linear measurements of a restricted set of lifted variables, \(x_i \overline{x_j}\), for \(|j - i| < \delta \ll d\). This leads to a linear system involving a total of \((2\delta - 1)d\) unknown lifted variables, all of which can then be solved for using only \(O(\delta d)\) measurements. Once these lifted variables, \(x_i \overline{x_j}\) for \(|j - i| < \delta \ll d\), have been recovered, a fast angular synchronization method can then be used to propagate the local phase difference information they provide across the entire vector in order to estimate the (relative) phases of every entry of \(x\). In addition, the lifted variables corresponding to \(x_j \overline{x_j} = |x_j|^2\) automatically provide magnitude estimates for each entry, \(x_j\), of \(x\). The proposed phase retrieval method then approximates \(x\) by carefully combining these entrywise phase and magnitude estimates.
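To illustrate the second stage (an editorial sketch, not the authors' code), suppose the lifted variables \(x_i \overline{x_j}\) for \(|i - j| \le 1\) have already been recovered exactly; a simple greedy phase propagation, standing in for the eigenvector-based angular synchronization of the actual method, then reassembles the signal up to a global phase:

```python
import numpy as np

rng = np.random.default_rng(2)
d = 16
x = rng.standard_normal(d) + 1j * rng.standard_normal(d)  # unknown signal

# lifted variables x_i * conj(x_j) for |i - j| <= 1, assumed already recovered
diag = x * np.conj(x)                    # |x_j|^2  -> entrywise magnitudes
off = x[:-1] * np.conj(x[1:])            # angle = phi_{j-1} - phi_j (local differences)

# greedy angular synchronization: fix phi_0 = 0, then propagate the differences
phase = np.zeros(d)
for j in range(1, d):
    phase[j] = phase[j - 1] - np.angle(off[j - 1])
x_hat = np.sqrt(diag.real) * np.exp(1j * phase)

# x_hat equals x up to one global phase; align it and check
g = np.vdot(x_hat, x) / abs(np.vdot(x_hat, x))
print(np.abs(g * x_hat - x).max())       # ~ machine precision
```

With noisy lifted variables this naive chain propagation accumulates errors, which is why the method proper uses a more robust synchronization step.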
Finally, we conclude by developing an extension of the proposed method to the sparse phase retrieval problem; specifically, we demonstrate a sublinear-time compressive phase retrieval algorithm which is guaranteed to recover a given \(s\)-sparse vector \(x \in \mathbb{C}^d\) with high probability in just \(O(s \log^5 s \cdot \log d)\) time using only \(O(s \log^4 s \cdot \log d)\) magnitude measurements. In doing so we demonstrate the existence of compressive phase retrieval algorithms with near-optimal linear-in-sparsity runtime complexities.
An efficient adaptive algorithm for computing stochastic Galerkin finite element approximations of elliptic PDE problems with random data will be outlined in this talk. The underlying differential operator will be assumed to have affine dependence on a large, possibly infinite, number of random parameters. Stochastic Galerkin approximations are sought in a tensor-product space comprising a standard \(h\)-finite element space associated with the physical domain, together with a set of multivariate polynomials (the \(p\)-component) characterising a finite-dimensional manifold of the (stochastic) parameter space.
Our adaptive strategy is based on computing distinct error estimators associated with the two sources of discretisation error. These estimators will also be shown to provide effective estimates of the error reduction for enhanced approximations. Our algorithm adaptively 'builds' a polynomial space over a low-dimensional manifold of the infinite-dimensional parameter space by reducing the energy of the combined discretisation error in an optimal manner. Convergence of the adaptive algorithm will be demonstrated numerically.
Online algorithms are algorithms that have to make decisions before knowing the entire input. In a worst-case analysis, one considers the worst possible input for the algorithm in relation to the best offline solution. For a number of problems, however, this perspective is too pessimistic because a hypothetical adversary can trick the algorithm. In such cases, it is useful to look at input models with stochastic components. For example, the adversary may still determine the input but not its order, which is drawn uniformly at random from all possible permutations.
We first consider weighted bipartite online matching. The nodes of one side of the graph appear one after the other and have to be matched to the other side. In the case of random arrival order, our algorithm computes a matching that is in expectation at most a factor 1/e worse than the optimal matching in the graph, which is optimal. An important generalization of this problem is given by linear packing programs: the variables appear one after the other, and immediately upon each arrival of a variable its value must be set. In this case as well, we achieve optimal guarantees. Finally, we show how to use the techniques in more complex scenarios such as scheduling problems or with respect to other objective functions.
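The 1/e guarantee can be made tangible with the classical single-choice secretary rule, the simplest instance of the random-order model (an editorial illustration, not the matching algorithm of the talk): reject the first \(n/e\) arrivals, then accept the first arrival that beats everything seen so far.

```python
import math
import random

def secretary(values):
    """Classical secretary rule: skip the first n/e candidates, then take the
    first one better than everything seen so far (or the last one if forced)."""
    n = len(values)
    cutoff = int(n / math.e)
    best_seen = max(values[:cutoff], default=float("-inf"))
    for v in values[cutoff:]:
        if v > best_seen:
            return v
    return values[-1]

random.seed(3)
n, trials, hits = 50, 20000, 0
for _ in range(trials):
    perm = random.sample(range(n), n)     # uniformly random arrival order
    if secretary(perm) == n - 1:          # did we pick the best candidate?
        hits += 1
print(hits / trials)                      # close to 1/e ~ 0.368
```

The simulated success frequency concentrates near 1/e, matching the tight bound that also governs the weighted bipartite matching setting.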
In this lecture I will introduce the interested participant to the dynamics of surface-tension-driven flows of liquids in networks of channels. The origins of interest in these flows lie with the study of palm beetles and their clever defense mechanism, which consists of excreting oily liquids and maintaining strong ground suction by controlling the resultant liquid bridges. Needless to say, a couple of engineers earned a lot of money by exploiting this mechanism to build a novel suction device.
These channel flows are driven by surface-tension-induced volume scavenging, where fluid droplets of varying sizes leech off one another to increase in volume. They are modeled by a system of nonlinear ODEs when one exploits the relationship between pressure gradient and flow rate in the form of the Hagen-Poiseuille law or a similar law. Of interest in this presentation are the stability and long-term behavior of such flows in the case of power-law liquids, where equilibria are non-hyperbolic.
The analytical techniques used to study some basic aspects of these fluid flows are neat, even though they fall largely within the context of "elementary math" and hardly exceed what bachelor's students know. Hence this presentation is intended as a mathematical excursion for any interested participant who wants to spend some quality time on applied mathematics and see a physicsbased example where nice results can be had without brute force analysis. I'll even show some moving pictures and address some related questions for which the answers are “physically intuitive,” but mathematically unproven.
The presentation is based on joint work with Paul Steen (Cornell).


In 2010, M.A. Iwen (Found. Comput. Math., 10(3):303-338, 2010) introduced a deterministic combinatorial sublinear-time Fourier algorithm for estimating the best k-term Fourier representation of a given frequency-sparse signal, relying heavily on the Chinese Remainder Theorem and combinatorial concepts. In 2016 a different deterministic sublinear Fourier algorithm for input signals with small support length was proposed, which employs periodizations of the signal and requires that the signal length is a power of 2 (Plonka and Wannenwetsch, Numerical Algorithms, 71(4):889-905, 2016). In this talk we will develop Iwen's algorithm from examples for the case of an input function with small support length, combining the Chinese Remainder Theorem approach for arbitrary signal lengths with the structure given by the small support. This reduces the runtime of the algorithm, as the effortful combinatorial part can be omitted.
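The number-theoretic backbone of this approach can be sketched in a few lines (an editorial illustration, not the full algorithm): a large frequency is identified from its residues modulo several small pairwise coprime numbers, each residue obtainable from a short, cheap subsampled DFT, and the Chinese Remainder Theorem stitches them back together.

```python
from math import prod

def crt(residues, moduli):
    """Chinese Remainder Theorem: reconstruct x mod prod(moduli) from the
    residues x mod m_i, for pairwise coprime moduli m_i."""
    M = prod(moduli)
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x += r * Mi * pow(Mi, -1, m)   # pow(Mi, -1, m) is the inverse of Mi mod m
    return x % M

moduli = [7, 11, 13, 17]               # pairwise coprime; product 17017
freq = 12345                           # a "large" frequency to identify
residues = [freq % m for m in moduli]  # in the algorithm: read off short DFTs
print(crt(residues, moduli))           # recovers 12345
```

Since the product of the small moduli exceeds the signal length, the frequency is determined uniquely, which is exactly why short periodized transforms suffice.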
Manifold-based image models are assumed in many engineering applications involving imaging and image classification. In the setting of image classification, in particular, proposed designs for small and cheap cameras motivate compressive imaging applications involving manifolds. Interesting mathematics results when one considers that the problem to be solved in this setting ultimately involves questions concerning how well one can embed a low-dimensional smooth submanifold of high-dimensional Euclidean space into a much lower dimensional space without knowing any of its detailed structure. We will motivate this problem and discuss how one might accomplish this seemingly difficult task using random projections. Few if any prerequisites will be assumed.
In the talk, the long-time behaviour of numerical methods for Hamiltonian differential equations is discussed, in particular the near-conservation of energy by symplectic numerical methods over long time intervals. In the case of Hamiltonian ordinary differential equations, this can be rigorously shown by a backward error analysis. After an introduction to this classical result, the difficulties in extending such a result to Hamiltonian partial differential equations like the nonlinear Schrödinger equation are described. Finally, a recent result on long-time near-conservation of energy by the (symplectic) split-step Fourier method applied to the (Hamiltonian) nonlinear Schrödinger equation is presented.
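For readers unfamiliar with the method, here is a minimal split-step Fourier sketch (an editorial addition; the equation signs and the initial datum are my own choices): the NLS flow is split into its nonlinear part, solved exactly pointwise, and its linear part, solved exactly in Fourier space.

```python
import numpy as np

def split_step_nls(u0, L, dt, steps):
    """Split-step Fourier method for i u_t = -u_xx - |u|^2 u on a periodic
    interval of length L: alternate the exact flows of the nonlinear and
    linear parts of the equation."""
    n = u0.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)   # Fourier wavenumbers
    lin = np.exp(-1j * k**2 * dt)                # exact linear flow in Fourier space
    u = u0.astype(complex)
    for _ in range(steps):
        u = u * np.exp(1j * np.abs(u)**2 * dt)   # exact nonlinear flow (|u| constant)
        u = np.fft.ifft(lin * np.fft.fft(u))     # exact linear flow
    return u

x = np.linspace(0, 2 * np.pi, 256, endpoint=False)
u0 = np.exp(1j * x) / (1 + np.sin(x)**2)
u = split_step_nls(u0, 2 * np.pi, 1e-3, 1000)
# both substeps are unitary, so the discrete L^2 norm is conserved exactly
print(abs(np.linalg.norm(u) - np.linalg.norm(u0)))
```

The L^2 norm is conserved to roundoff by construction; the subtle question the talk addresses is how well the (non-conserved) energy stays close to its initial value over long times.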
Coherent structures in geophysical flows play fundamental roles by organising fluid flow and obstructing transport. For example, in the ocean, coherence manifests itself at global scales down to scales of at least tens of kilometres, and strongly influences the transportation of heat, salt, nutrients, phytoplankton, pollution, and garbage. I will describe some recent mathematical constructions, ranging across dynamical systems, probability, and geometry, which enable the accurate identification and tracking of such structures, and the quantification of associated mixing and transport properties. I will present case studies from a variety of geophysical settings.
Onyx is a free software environment for creating and estimating Structural Equation Models (SEM). It provides a graphical user interface that facilitates an intuitive creation of models, and a powerful backend for performing maximum likelihood estimation of parameters. In this presentation, some concepts of Onyx will be presented, and the operation of the program will be demonstrated. We will also have a quick look under the hood at the optimization algorithms used in Onyx.
Our interest lies in understanding a free boundary problem for a fourth-order thin-film equation with quadratic mobility and a zero contact angle at the triple junction, where air, liquid, and solid meet. This equation can be derived from the Navier-Stokes system with Navier slip at the liquid-solid interface, removing the contact-line singularity that occurs if no slip is assumed. While for linear mobility (Darcy dynamics) a strong analogy to the second-order porous medium equation is valid, this is no longer the case in our setting, leading to singular expansions of solutions at the free boundary.
I will first discuss the model problem of source-type self-similar solutions, where ODE and dynamical systems theory are available, to characterize the contact-line singularity. This is based on two publications, with Lorenzo Giacomelli and Felix Otto, and with Fethi Ben Belgacem and Christian Kühn, respectively. Then I will present a well-posedness result for solutions close to a traveling wave (joint with Lorenzo Giacomelli, Hans Knüpfer, and Felix Otto). I will further discuss how to obtain regularity of solutions at the free boundary, which one may view as a generalized smoothing property. The talk is concluded by outlooks on how to potentially treat general mobilities and the multidimensional thin-film problem.
In this talk, I present a solution method for convex optimization. In the framework of this method, the original convex optimization problem is first reduced to a convex feasibility problem. Then several instances of the method of alternating projections are applied simultaneously. Each instance is assigned its own subproblem. In a polynomial number of iterations, at least one instance of the alternating projections solves its subproblem. If this subproblem is infeasible, the original optimization problem is simplified. This method leads to polynomial algorithms for linear optimization and for some combinatorial problems which are representable as convex problems by means of polynomial separation oracles.
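The core primitive, the method of alternating projections, can be shown on a toy feasibility problem (an editorial sketch under simple assumptions, not the talk's algorithm): find a point in the intersection of a hyperplane and the nonnegative orthant by projecting onto each set in turn.

```python
import numpy as np

def project_affine(x, a, b):
    """Euclidean projection of x onto the hyperplane {y : a.y = b}."""
    return x - (a @ x - b) / (a @ a) * a

def project_orthant(x):
    """Euclidean projection of x onto the nonnegative orthant."""
    return np.maximum(x, 0.0)

# alternating projections onto the intersection of the two convex sets
a, b = np.array([1.0, 2.0, -1.0]), 1.0
x = np.array([-3.0, 0.5, 4.0])
for _ in range(500):
    x = project_orthant(project_affine(x, a, b))
print(abs(a @ x - b), x.min())   # both feasibility violations vanish
```

For intersecting polyhedral sets the iteration converges linearly, which is consistent with the polynomial iteration counts claimed for the full scheme.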
We consider the critical behaviour of long-range O(n) models for n greater than or equal to 0. For n=1,2,3,... these are phi^4 spin models; for n=0 it is the weakly self-avoiding walk. We prove existence of critical exponents for the susceptibility and the specific heat, below the upper critical dimension. This is a rigorous version of the epsilon expansion in physics. The proof is based on a rigorous renormalisation group method developed in previous work with Bauerschmidt and Brydges.
In 1939, Frenkel and Kontorova proposed a model for the motion of a dislocation (an imperfection in a crystal). The model is simple: a chain of atoms following Newton's equation of motion. The atoms interact with their nearest neighbours via a harmonic spring and are exposed to a periodic (non-convex) on-site potential. Despite its simplicity, the model has proved to be a mathematical challenge. Iooss and Kirchgässner made a fundamental contribution regarding the existence of small solutions, using centre manifold theory. The talk will introduce the model and then present recent results for the coherent motion of dislocations with periodic, and possibly anharmonic, on-site potentials. For anharmonic wave trains, the proof establishes the existence of possibly large wave trains via centre manifold theory and then employs a fixed point argument to show the existence of a travelling dislocation. These results are joint work with Boris Buffoni (Lausanne) and Hartmut Schwetlick (Bath).
Spatial random permutations are implemented by probability measures on permutations of a set with spatial structure; these measures are made so that they favor permutations that map points to nearby points. The strength of this effect is encoded in a parameter alpha > 0, where larger alpha means stronger bias toward short jumps. I will introduce some variants of the model and explain the connections to the theory of Bose-Einstein condensation. Then I will present a few older results, as well as very recent progress made jointly with Lorenzo Taggi (TU Darmstadt) for the regime of large alpha. Finally, I will discuss two conjectures suggested by numerical simulation: in two dimensions, the model appears to exhibit a Kosterlitz-Thouless phase transition, and there are reasons to believe that in the phase of algebraic decay of correlations, long cycles are Schramm-Löwner curves, with parameter between 4 and 8 depending on alpha.
This talk will introduce Sparse Power Factorization, an algorithm for reconstructing a rank-one matrix with sparsity constraints from few (linear) measurements. In particular, the speaker will talk about sufficient conditions for recovery. Furthermore, open questions and numerical results will be discussed. This talk is based on the author's master's thesis.
tba
tba