The autumn school “Mathematical Foundations of Physics” takes place at the Mathematical Institute of LMU Munich. It focuses on three main topics: electrodynamics, quantum mechanics, and many-body problems and effective equations. Each day, there are three main lectures, one about each of the conference topics. In the afternoons, there are short talks and round table question sessions. The last day of the school is dedicated to Detlef Dürr’s farewell from LMU Munich.
http://www.math.lmu.de/light-and-matter/autumn-school
Estimating the algebraic error in the numerical solution of PDEs is a very broad topic. Historically, most a posteriori analysis in numerical PDEs focuses on estimating the discretization error and is based on the assumption that the discretized problem is solved exactly. This assumption is certainly restrictive. The fact that the numerical computation is, in general, not exact, and that in many cases it is even desirable not to perform it to high accuracy, has several consequences that have to be taken into account in practical computations.
This presentation first illustrates, using simple model problems, that the algebraic error can have large local components that significantly dominate the total error in some parts of the discretization domain. This can happen even when the norm of the algebraic error is small in comparison with the norm of the discretization error, and it motivates the development of a posteriori error estimators that provide information not only about the global norms but also about the local distribution of the errors.
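The following toy computation (a minimal Python sketch on a 1D Poisson model problem, not one of the estimators developed in the talk) shows the kind of quantity in question: conjugate gradients is stopped early, and the resulting algebraic error is inspected nodewise rather than only in a global norm.

```python
# Toy illustration (not the talk's estimators): 1D Poisson, linear FEM,
# CG stopped early; inspect the nodewise distribution of the algebraic error.
import numpy as np

n = 200                                   # interior nodes, h = 1/(n+1)
h = 1.0 / (n + 1)
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h   # FEM stiffness
b = h * np.ones(n)                        # load vector for f = 1

u_h = np.linalg.solve(A, b)               # exactly solved discrete solution

def cg(A, b, k):
    """Plain conjugate gradients, stopped after k iterations."""
    x = np.zeros_like(b)
    r = b.copy()
    p = r.copy()
    for _ in range(k):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        p = r_new + ((r_new @ r_new) / (r @ r)) * p
        r = r_new
    return x

u_k = cg(A, b, 15)                        # stop well before convergence
err = u_h - u_k                           # algebraic error at the nodes
print("relative global norm:", np.linalg.norm(err) / np.linalg.norm(u_h))
print("max / mean local component:", np.abs(err).max() / np.abs(err).mean())
```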
Using the so-called residual-based error estimator as an example, we study the impact of abandoning the assumption of an exact algebraic solution. In order to allow the evaluation of the estimator in the presence of the algebraic error, the construction of the estimator has to be carefully revisited.
Finally, we show a methodology for computing upper and lower bounds for the algebraic and total error norms based on flux reconstruction. The derived bounds also allow for estimating the local distribution of the errors over the computational domain. We will discuss bounds on the discretization error, the application of the results for constructing mathematically justified stopping criteria for iterative algebraic solvers, and the relationship to previously published estimates on the algebraic error. The presented results demonstrate the difficulties one has to cope with in a rigorous approach to including algebraic errors in a posteriori estimates. There is also a nonnegligible price in terms of additional computational cost.
References:
Papež, Liesen, Strakoš: Distribution of the discretization and algebraic error in numerical solution of partial differential equations. Linear Algebra Appl. 449: 89-114 (2014).
Papež, Strakoš: Galerkin orthogonality and the multiplicative factors in the residual-based a posteriori error estimator for total error. Preprint MORE/2016/14.
Papež, Strakoš, Vohralík: Estimating and localizing the algebraic and total numerical errors using flux reconstructions. Preprint MORE/2016/12.
In this talk we present a new mathematical model of the mitochondrial swelling scenario which includes spatial effects. These spatial effects are of great importance in applications. Based on experimental data, we derive a new mathematical model which leads to a PDE-ODE coupling. Our model considers three mitochondrial subpopulations varying in the degree of swelling. We will discuss the derived model with respect to existence and long-time behavior of solutions and obtain a complete mathematical classification of the swelling process.
Stochastic control for insurance has by now become a lively subfield of mathematics; after initial attempts to adapt methods from finance, it has developed in a somewhat more independent direction. This development includes methods for solving integro-differential equations, for handling side constraints, modifications of the arguments for viscosity solutions, and non-stationary approaches for mixture models. The emphasis lies on the numerical implementation and on the interpretation of the numerical results.
For liquid films with a thickness on the order of \(10^1\) - \(10^3\) molecule layers, classical models of continuum mechanics do not always give a precise description of thin-film evolution: While morphologies of film dewetting are captured by thin-film models, discrepancies arise with respect to time-scales of dewetting. In this talk, we study stochastic thin-film equations which differ from corresponding deterministic equations by multiplicative noise inside an additional convective term. We present numerical simulations which indicate that the aforementioned discrepancies may be overcome under the influence of noise. In the second part of the talk, we prove existence of almost surely nonnegative martingale solutions. Our analysis relies on spatial semi-discretization, discrete variants of energy and entropy estimates as well as on recent tools for martingale convergence. The results have been obtained in collaboration with K. Mecke and M. Rauscher and with J. Fischer, respectively.
Important methods of risk management, in particular from extreme value theory and multivariate statistics, were developed by Emil J. Gumbel and popularized in various fields of application (hydrology, meteorology, economics, etc.). The Gumbel distribution and the Gumbel copula bear witness to this. On the occasion of his 125th birthday (he was born in Munich on July 18, 1891), we give an insight into his mathematical estate and shed light on his biography as a scientist, a witness of the Weimar Republic, and a pacifist. Besides his mathematical works, he published several political books and countless newspaper articles on political murders, the justice system, and nationalist secret societies in the Weimar Republic, which cost him his position at the University of Heidelberg in 1932 and put him on the German Reich's first expatriation list in 1933. He emigrated to France in 1932 and, to escape the German occupying forces, had to flee onward to the USA in 1940. He died in New York on September 10, 1966.
In the first part of the talk I will review Bob Fernholz' theory of functionally generated portfolios. In the second part I will discuss questions related to the existence of short-term arbitrage opportunities. This is joint work with Bob Fernholz and Ioannis Karatzas.
We study optimal investment and the valuation of assets whose payouts cannot be replicated by trading other assets. Our market model allows for constraints and illiquidity effects that are encountered in practice. We review two hedging-based notions of asset value and relate them to the classical notions of risk neutral and net present values. Many classical results, e.g. on attainability and duality, are extended to the illiquid market setting. This is joint work with Ari-Pekka Perkkiö.
In this talk we are mainly interested in the stability of the nanocraft's position and orientation inside an intense laser beam column. A nanocraft driven by intense laser beam pressure acting on its lightsail is sensitive to the torques and lateral forces acting on the surface of the sail. These forces influence the orientation and lateral displacement of the spacecraft. The model rests on the following assumptions: 1. a flat or concave (part of a sphere, conical) circular sail; 2. the nanocraft is treated as a rigid body (applicability of the Euler equations); 3. mirror reflection of the laser beam from the surface of the lightsail. We will discuss how the sail shape and the profile of the laser beam (Gaussian or flat) can affect the stability of the nanocraft's position. We will also discuss the effect on stability of the nanocraft's rotation around the axis along which it is moving.
A target volatility (call) option is an option whose total underlying exposure is random and determined at maturity as the ratio of a predicted target volatility level to the volatility effectively realized by the underlying. This reduces the option price for a similar payoff if the volatility prediction was correct, thus allowing an affordable option position when vega hedging is too expensive or market implied volatilities are too high.
A target volatility strategy (TVS) is a bond-equity dynamic portfolio using the risky asset's historical volatility as an allocation rule. High realized volatility decreases the equity exposure, reducing the portfolio's downside risk. Lower volatility increases the equity position so as to benefit from bullish market conditions. These adjustments over time should keep the volatility of the investment approximately constant around the investor's desired target level.
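A minimal sketch of this allocation rule follows (illustrative assumptions: GBM-like risky-asset returns, a zero-rate bond, and a leverage cap; this is not the delayed Heston framework presented in the talk):

```python
# Sketch of a target volatility strategy: equity weight = target vol /
# realized vol, rebalanced daily; parameters below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, dt = 252, 1.0 / 252
sigma_target, window = 0.10, 20            # 10% target, 20-day realized window

ret = 0.05 * dt + 0.20 * np.sqrt(dt) * rng.standard_normal(n)   # risky returns
port = np.zeros(n)
for t in range(window, n):
    realized = ret[t - window:t].std() / np.sqrt(dt)   # annualized realized vol
    w = min(sigma_target / realized, 1.5)              # equity weight, capped
    port[t] = w * ret[t]                               # bond earns zero here

print("realized vol of the TVS:", port[window:].std() / np.sqrt(dt))  # ~ 0.10
```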
In a market with stochastic volatility, we present a stochastic model for a TVS using a delayed differential system. We derive a finite-dimensional Markovian approximation of the equations, which we implement for the Heston model using an Euler scheme. This framework allows the valuation of the guarantee costs of target volatility funds; if the constant volatility assumption is correct, such a value should be of Black-Scholes type. We investigate this claim within the presented model.
We will look at the possibility of recovering a sum of Dirac measures on the rotation group SO(3) from its low-degree moments with respect to Wigner-D functions only. The talk will cover theoretical recovery guarantees and algorithmic aspects of the problem.
A Salem number is a real algebraic integer greater than one, all of whose conjugates have absolute value at most one, with at least one conjugate of absolute value exactly one. In particular, a Salem number is a root of a monic reciprocal polynomial with integer coefficients. In complex dynamics, the logarithms of Salem numbers are realized as the topological entropy of automorphisms of algebraic surfaces. In my talk I will explain when such an automorphism exists and which Salem numbers occur in this way.
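A classical example, checked numerically below in an illustrative sketch: the largest root of Lehmer's polynomial \(x^{10}+x^9-x^7-x^6-x^5-x^4-x^3+x+1\), approximately 1.17628, is the smallest known Salem number.

```python
# Numerical check of the Salem property for Lehmer's polynomial: one real
# root > 1, all remaining roots of absolute value at most 1.
import numpy as np

coeffs = [1, 1, 0, -1, -1, -1, -1, -1, 0, 1, 1]   # degree 10 down to constant
roots = np.roots(coeffs)
moduli = np.sort(np.abs(roots))
print("Salem number:", moduli[-1])                         # ~1.176280818
print("other |roots| range:", moduli[0], "to", moduli[-2]) # [~0.85, ~1.0]
```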
We will discuss the classification problem for constant mean curvature surfaces with Delaunay ends, its relation to surface group representations of punctured compact Riemann surfaces into a loop group, and Fuchsian connections (differential equations) with coefficients in a loop algebra.
Mobile multi-agent systems are perfect platforms for exploration or monitoring tasks in hazardous or inhospitable environments, where a human operator would be at risk. This pertains to emergency scenarios caused by nuclear or toxic accidents as well as to exploration scenarios in extraterrestrial environments.
The presentation addresses the development of an efficient information gathering and exploration strategy for robotic missions. The focus of this talk is a multi-agent system – or a swarm – consisting of several mobile sensing platforms whose goal is to identify the parameters of a spatio-temporal process modeled by a partial differential equation (PDE) under sparsity constraints. A probabilistic approach to PDE modelling with factor graphs, sparse Bayesian learning methods, and a message passing algorithm for the PDE identification problem are presented. Results from first simulations demonstrate the potential of the approach.
Introducing new information into a probabilistic description of knowledge is typically performed via some kind of application of Bayes's by now classical theorem. To avoid ambiguities (which did arise historically), the mathematically precise description of conditional probabilities in Bayes's theorem, especially when conditioning on events of vanishing probability, is formulated via conditional expectations, and is due to Kolmogorov. Nevertheless, most sampling approaches to Bayesian updating typically start from the classical formulation involving conditional measures and densities. These are usually the distributions of some random variables describing the prior knowledge. Here an alternative track is taken, in that the notion of conditional expectation is also taken computationally as the prime object. Being able to numerically approximate conditional expectations, one has a complete description of the posterior probability. A further task is to construct a new – transformed, or filtered – random variable which has a distribution as required by the (posterior) conditional expectations. In the talk, the abstract task and its solution will be presented first, and then different computational approximations will be sketched, as well as different ways of stochastic discretisations, adding another level of approximation.
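A minimal computational sketch of this track (with an illustrative linear toy problem, chosen so the exact answer is known; this is not the speaker's discretisations): approximate the conditional expectation by a least-squares regression over prior samples and use it to build the transformed random variable.

```python
# Sketch: phi(y) ~ E[theta | y] via regression on prior samples, then the
# filtered random variable theta_a = theta + phi(y_obs) - phi(y).
import numpy as np

rng = np.random.default_rng(1)
N = 200000
theta = rng.normal(1.0, 1.0, N)            # prior samples
y = theta + rng.normal(0.0, 0.5, N)        # simulated noisy measurements

Phi = np.vander(y, 2)                      # affine regression basis [y, 1]
coef, *_ = np.linalg.lstsq(Phi, theta, rcond=None)
phi = lambda v: np.vander(np.atleast_1d(v), 2) @ coef   # ~ E[theta | y = v]

y_obs = 3.0                                # the actually observed datum
theta_a = theta + (phi(y_obs) - phi(y))    # transformed ("filtered") variable
print(theta_a.mean(), theta_a.std())       # ~2.6 and ~0.447, matching the
                                           # exact conjugate-Gaussian posterior
```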
Sequential allocation is a simple mechanism for sharing multiple indivisible items. Agents take turns, in a pre-specified sequence, picking an item from amongst those not yet taken. Sequential allocation is not strategy-proof. How then should an agent act? We study strategic behavior in sequential allocation when, as is often the case in practice, it is viewed as a one-shot game. In particular, we consider Nash dynamics, as well as the computation and Pareto optimality of pure Nash equilibria.
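A minimal sketch of the mechanism itself (sincere, non-strategic picking; the two-agent profile is illustrative) may help fix ideas:

```python
# Sequential allocation: in the pre-specified turn order, each agent takes
# its most preferred item still available.
def sequential_allocation(turns, prefs):
    """turns: list of agent ids; prefs: agent id -> ranked list of items."""
    taken, allocation = set(), {a: [] for a in prefs}
    for agent in turns:
        item = next(i for i in prefs[agent] if i not in taken)  # sincere pick
        taken.add(item)
        allocation[agent].append(item)
    return allocation

prefs = {0: ["a", "b", "c", "d"], 1: ["a", "c", "b", "d"]}
print(sequential_allocation([0, 1, 1, 0], prefs))
# {0: ['a', 'd'], 1: ['c', 'b']} -- a strategic agent may deviate from
# sincere picking, since the mechanism is not strategy-proof
```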
Bio: Toby Walsh is a professor of artificial intelligence at the University of New South Wales and Data61 (formerly NICTA). He received an M.A. degree in theoretical physics and mathematics from the University of Cambridge and an M.Sc. and a Ph.D. degree in artificial intelligence from the University of Edinburgh. He was recently named in the inaugural Knowledge Nation 100, the one hundred "rock stars" of Australia's digital revolution. Professor Walsh's research focuses on how computers can interact with humans to optimise decision-making for the common good. He is also a passionate advocate for limits to ensure AI is used to improve our lives. In 2015, Professor Walsh was one of the people behind an open letter calling for a ban on autonomous weapons or 'killer robots' that was signed by more than 3000 AI researchers and high profile scientists, entrepreneurs and intellectuals, including Stephen Hawking, Noam Chomsky, Apple co-founder Steve Wozniak, and Tesla founder Elon Musk. He was subsequently invited by Human Rights Watch to talk at the United Nations in both New York and Geneva. Professor Walsh is a Fellow of the Australian Academy of Science and of the Association for the Advancement of Artificial Intelligence, a Humboldt Award winner, and recently won the 2016 NSW Premier's Prize for Excellence in Engineering and Information and Communications Technologies. He is currently on sabbatical at TU Berlin.
Tolerance intervals and tolerance regions are important tools for statistical quality control and process monitoring of univariate and multivariate data, respectively. This paper discusses the generalization of tolerance intervals/regions to tolerance tubes in the infinite-dimensional setting for functional data. In addition to the generalizations of the commonly accepted definitions of the tolerance level of beta-content or beta-expectation, we introduce the new definition of an alpha-exempt beta-expectation tolerance tube. The latter loosens the definition of a beta-expectation tolerance tube by allowing an alpha portion (usually pre-set by domain experts) of each functional to be exempt from the requirement. More specifically, an alpha-exempt beta-expectation tolerance tube of a sample of n functional data is expected to contain [n x beta] functionals in such a way that at least (1-alpha)x100% of each functional is contained within the boundary of the tube.
These proposed tolerance tubes are completely nonparametric and thus broadly applicable. We investigate their theoretical justification and properties. We also show that the alpha-exempt beta-expectation tolerance tube is particularly useful in the setting where occasional short-term aberrations of the functional data are deemed acceptable as long as they do not cause substantive deviation from the norm. This desirable property is elaborated and illustrated further with both simulations and real applications in continuous monitoring of blood glucose levels in diabetes patients as well as of aviation risk patterns during aircraft landing operations.
This is joint work with Yi Fan, JP Morgan Chase.
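As an illustrative toy construction (a heuristic sketch, not the authors' estimator), one can take a pointwise quantile band as a candidate tube and check the alpha-exempt beta-expectation criterion empirically:

```python
# Candidate tube = pointwise quantile band; criterion: at least a beta
# fraction of curves must each lie inside on >= (1-alpha) of their points.
import numpy as np

rng = np.random.default_rng(2)
n, m, alpha, beta = 100, 50, 0.1, 0.8
t = np.linspace(0.0, 1.0, m)
curves = np.sin(2 * np.pi * t) + 0.3 * rng.standard_normal((n, m))

lo = np.quantile(curves, 0.025, axis=0)           # candidate tube boundaries
hi = np.quantile(curves, 0.975, axis=0)

inside = (curves >= lo) & (curves <= hi)          # n x m containment indicator
frac = inside.mean(axis=1)                        # contained portion per curve
covered = (frac >= 1.0 - alpha).mean()            # curves meeting the exemption
print("curves with >= (1-alpha) of points inside:", covered)
print("beta-expectation criterion met:", covered >= beta)
```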
The functionality, autonomy, and complexity of products and production processes are steadily increasing due to growing computing resources. The advanced capabilities of new systems make it possible to automate tasks that were previously performed by humans, such as (semi-)automated operation of road vehicles, surgical robots, smart grids, flight control systems, and collaborative human-robot systems, to name only a few. Most of these systems are either safety- or operation-critical, demanding methods that automatically verify their safety and correct operation.
In this talk, I present reachability analysis as a method to formally verify cyber-physical and embedded systems. I use hybrid automata to jointly model the discrete behavior of computing elements and the continuous behavior of the physical elements. Based on this modeling formalism, I present algorithms for reachability analysis, which automatically explore all possible states of the system for a given set of initial states, parameters, and inputs. If the set of reachable states does not intersect the set of forbidden states, the correct behavior of the system is proven. Note that this is not possible with classical simulation techniques, since they only generate a finite and thus incomplete set of behaviors.
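The following toy sketch conveys the flavor of the approach (axis-aligned interval boxes and a discrete-time linear system with bounded input; practical tools use tighter set representations such as zonotopes):

```python
# Propagate a box of initial states through x_{k+1} = A x_k + u_k with
# |u_k| <= u_rad, and test intersection with a forbidden box at every step.
import numpy as np

A = np.array([[0.99, 0.10], [-0.10, 0.99]])
x_lo, x_hi = np.array([0.9, -0.1]), np.array([1.1, 0.1])      # initial box
u_rad = 0.02                                                   # input bound
bad_lo, bad_hi = np.array([1.5, -1.0]), np.array([2.0, 1.0])  # forbidden box

for k in range(100):
    c, r = (x_lo + x_hi) / 2, (x_hi - x_lo) / 2
    c, r = A @ c, np.abs(A) @ r + u_rad    # interval image: A c +/- (|A| r + u)
    x_lo, x_hi = c - r, c + r
    if np.all(x_hi >= bad_lo) and np.all(x_lo <= bad_hi):
        # boxes over-approximate (wrapping effect), so this may be a spurious
        # alarm -- the price of a guaranteed over-approximation
        print("possible violation at step", k)
        break
else:
    print("forbidden set proven unreachable within 100 steps")
```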
The applicability of the method is demonstrated for autonomous driving, phase-locked loops, and smart grids.
*Compact Course with 5 units from 22. - 25.11.16*
Details under: http://www.ma.tum.de/IGDK1754/CCStrakos2016
Modern matrix iterative methods, represented in particular by Krylov subspace methods, are fascinating mathematical objects that integrate many lines of thought and are linked with hard theoretical problems. Krylov subspace methods can be seen as highly nonlinear model reduction that can be very efficient in some cases and not easy to handle in others. Their convergence behaviour is well understood for self-adjoint and normal operators (matrices), where we can conveniently rely on the spectral decomposition; this has no parallel in the non-normal case. Theoretical analysis of efficient preconditioners is therefore complicated, and it is often based on a simplified view of Krylov subspace methods as linear contractions. In the numerical solution of boundary value problems, e.g., the infinite-dimensional formulation, discretization, and algebraic iteration (including preconditioning) should be tightly linked to each other. Computational efficiency requires accurate, reliable, and cheap a posteriori error estimators that relate the discretization and algebraic errors in order to construct an appropriate (problem-dependent) stopping criterion. Understanding numerical stability issues is crucial, and this becomes even more urgent with increased parallelism, where the communication cost becomes a prohibitive factor.
The presentation will concentrate on ideas and the connections between them, with emphasis on the historical perspective. Technical details will be kept to a minimum so that the lecture is accessible to a wide audience.
The presentation will benefit from the material presented in the recent monographs coauthored with Josef Malek [Preconditioning and the Conjugate Gradient Method in the Context of Solving PDEs, SIAM Spotlights, SIAM, Philadelphia, 2015, http://bookstore.siam.org/productimage.php?product_id=579], and with Jorg Liesen [Krylov Subspace Methods, Principles and Analysis, Oxford University Press, Oxford, 2013, https://global.oup.com/academic/product/krylov-subspace-methods-9780199655410?cc=cz&lang=en&], as well as from several recent papers with Martin Vohralik, Jan Papez and Tomas Gergelits.
The jargon "flocking" denotes the phenomenon in which self-propelled individuals using only limited environmental information and simple rules organize into an ordered motion. In this talk, we study the Cucker-Smale flocking model which is a type of Newtonian equations. We first present the large-time behavior of solutions and discuss modified Cucker-Smale models that address several drawbacks of the original model. In the second part, we deal with the rigorous derivation of kinetic Cucker-Smale type equations when sharp sensitivity regions are considered.
Hierarchical matrices in scattered data approximation
Scattered data approximation deals with the problem of producing a function \(s\) that in some sense represents given (typically scattered) data and allows one to make predictions at other times/locations/parameter settings. Applications are quite diverse: surface reconstruction, image compression, and the numerical solution of PDEs (with their diverse applications), to name just a few.
In a scattered data interpolation problem, the interpolant is typically of the form \( s(x)=\sum_{i=1}^N c_i b_i (x) \) for some given functions \( b_i \). The coefficient vector \( c\in R^N \) of the interpolant may be computed as the solution of a linear system \( Bc=y \) which results from enforcing the interpolation conditions for the given scattered data. While the properties of the matrix \( B \) obviously depend on the choice of functions \( b_i \), several of the most commonly used approaches yield highly ill-conditioned, dense matrices \( B \), making it challenging to solve the linear system \( Bc=y \), and hence to solve the scattered data interpolation problem. This talk deals with these challenges and some possible strategies for the solution of this system \( Bc=y \).
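For concreteness, a minimal sketch with a Gaussian radial basis (an illustrative choice of the functions \( b_i \)) shows how \( B \) is assembled from the interpolation conditions and how its conditioning already becomes an issue at moderate sizes:

```python
# Scattered data interpolation with b_i(x) = exp(-|x - x_i|^2 / s^2):
# assemble the dense system matrix B and solve B c = y.
import numpy as np

rng = np.random.default_rng(4)
X = rng.uniform(0, 1, (200, 2))                  # scattered sites x_i
y = np.sin(4 * X[:, 0]) * X[:, 1]                # data to interpolate
s = 0.2                                          # shape parameter (illustrative)

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
B = np.exp(-d2 / s**2)                           # dense N x N system matrix
print("cond(B) =", np.linalg.cond(B))            # already large at N = 200
c = np.linalg.solve(B, y)

def interp(x):                                   # s(x) = sum_i c_i b_i(x)
    return np.exp(-((X - x) ** 2).sum(-1) / s**2) @ c

print("check at a data site:", interp(X[0]), "vs", y[0])
```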
In particular, we study the application of techniques from the \({\cal H}\)-matrix framework both for the approximation of the system matrix \(B\) itself as well as for the construction of preconditioners. \({\cal H}\)-matrices provide a data-sparse matrix format that permits storage and matrix arithmetic in complexity \( {\cal O}(N\log^{\alpha} N) \) for moderate \(\alpha\). It turns out that several typical sets of basis functions from the (scattered data) literature lead to matrices \(B\) that fit into this framework, yielding a cost-effective approximation scheme to be illustrated in this talk.
see http://www.ma.tum.de/Mathematik/FakultaetsKolloquium#AbstractLeBorne
https://www.statistik.lmu.de/~abender/Kolloquium/abstracts/ws1617/liu.pdf
The theory of optimal mass transport has its origins in economic questions of the 18th century; the Wasserstein metric for probability measures derived from it has long played an important role in stochastics for quantifying the stability of statistical models. In the 1990s, a series of groundbreaking works by Jordan, Kinderlehrer and, in particular, Otto developed a new interpretation of this distance in the sense of an infinite-dimensional Riemannian geometry on the space of probability measures. In this geometry, fundamental evolution equations (e.g., the heat equation, the Schrödinger equation) can be represented in a physically intuitive way as canonical dynamical systems. A central role is played by the Boltzmann entropy functional as a potential on the set of probability measures, as well as by its geometry in connection with the Ricci curvature of the underlying manifold. As a consequence, one obtains, among other things, a new theory of Ricci curvature for general metric measure spaces (von Renesse-Sturm 2005; see also the so-called Sturm-Lott-Villani and RCD(k,n) curvature bounds). A further question is the existence of an associated Brownian molecular motion on the infinite-dimensional Otto manifold as a physically intuitive model of a stochastically diffusing mass distribution (von Renesse-Sturm 2009). Here we present recent developments in the area of interacting particle systems (von Renesse-Konarovskiy 2015). In a final part, time permitting, I will give a brief outlook on new applications of optimal mass transport in the area of model-independent price bounds for financial derivatives (Beiglböck, Cox, Huessmann, Juillet et al.).
In recent years, interest in time changes of stochastic processes according to irregular measures has arisen from various sources. Fundamental examples of such time-changed processes include the so-called Fontes-Isopi-Newman (FIN) diffusion, the introduction of which was motivated by the study of the localization and aging properties of physical spin systems, and the two-dimensional Liouville Brownian motion, which is the diffusion naturally associated with planar Liouville quantum gravity.
This FIN diffusion is known to be the scaling limit of the one-dimensional Bouchaud trap model, and the two-dimensional Liouville Brownian motion is conjectured to be the scaling limit of simple random walks on random planar maps.
We will provide a general framework for studying such time changed processes and their discrete approximations in the case when the underlying stochastic process is strongly recurrent, in the sense that it can be described by a resistance form, as introduced by J. Kigami. In particular, this includes the case of Brownian motion on tree-like spaces and low-dimensional self-similar fractals. If time permits, we also discuss heat kernel estimates for the relevant time-changed processes.
This is joint work with D. Croydon (Warwick) and B.M. Hambly (Oxford).
The advent of genome-wide association studies (GWAS) has already advanced our understanding of the genetic basis of many relevant traits, including human diseases. The traditional approach of GWAS analyses is predominantly based on simplistic one-by-one tests, correlating individual genetic variants with single quantitative or categorical traits. Thanks to technological advances and large international efforts, phenotypic profiles of individuals are increasingly deep and comprehensive, ranging from molecular to cellular traits to physiological phenotypes in people. These rich and high-dimensional phenotypes challenge established analysis methods and at the same time provide new opportunities for advanced and integrative statistical approaches.
In this talk, I will review these open needs in the field and discuss some of the progress we have made in the area. I will describe advanced methods based on multivariate linear mixed models that enable joint tests between multiple variants and groups of phenotypes, while retaining scalability to cohorts with hundreds of thousands of individuals. These approaches both offer major gains in statistical power for detecting genetic effects, and yield improved interpretation of how genetic variability shapes phenotype correlations. In particular, I will revisit tests for genotype-environment interactions, which we generalize to consider multiple causal variants and allelic heterogeneity. (http://campar.in.tum.de/Events/InvitedTalkOliverStegle)
We consider the coupling of a fluid domain with elastic solid bodies. As opposed to traditional approaches to Fluid-Structure-Interaction we will not describe these domains by a moving grid. Instead, an implicit description by a phase-field function will be employed to characterize the (conceivably evolving) domains, which offers some interesting advantages from a modeling point of view. We will discuss a simple approach leading to a fully Eulerian two-phase flow problem with an additional elastic stress in the solid bulk. The approach will be validated in an axisymmetric setting by comparison to an ALE method. Extensions to viscoelastic fluids and elastic membranes will be presented along with a recent application to biological cells in flow.
The area of dynamical systems where one investigates dynamical properties that can be described in topological terms is called Topological Dynamics. Investigating the topological properties of spaces and maps that can be described in dynamical terms is, in a sense, the opposite idea; this area is called Dynamical Topology. Some results of this talk can be considered a contribution to Dynamical Topology. To link the Auslander point dynamics property with topological transitivity, we introduce the notion of dynamical compactness with respect to a family as a new concept of chaoticity of a dynamical system (X,T) given by a compact metric space X and a continuous surjective self-map T: X --> X. In particular, we will show that all dynamical systems are dynamically compact with respect to a Furstenberg family if and only if this family has the finite intersection property. Observe that each topologically weak mixing system is dynamically (transitive) compact. We will discuss the relationships among this notion and several other stronger forms of sensitivity.
TBA
We propose a new Iteratively Reweighted Least Squares (IRLS) algorithm for the problem of recovering a low-rank matrix from incomplete linear observations. Unlike in previous IRLS approaches for the problem, we exploit the information in the column as well as in the row space of the iterates, using so-called harmonic mean weight matrices. Our non-convex algorithm exhibits local convergence guarantees comparable to previous IRLS algorithms for generic random measurement operators with a number of measurements in the optimal order. Additionally, we prove that the exhibited rate of convergence is superlinear, which is unprecedented even for other iterative approaches for the low-rank recovery problem. Our theoretical findings align very well with our numerical experiments, which also suggest that convergence to non-global minimizers is not an issue and that the proposed algorithm needs fewer measurements for successful recovery than any other algorithm proposed in the literature.
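For orientation, the sketch below shows a bare-bones IRLS for matrix completion using column-space weights only, i.e. a classical scheme rather than the harmonic-mean variant proposed in the talk; the sizes and parameters are illustrative.

```python
# IRLS for low-rank matrix completion: each sweep minimizes
# sum_j x_j^T W x_j with W = (X X^T + eps I)^(-1/2), subject to the
# observed entries, solved column by column in closed form.
import numpy as np

rng = np.random.default_rng(5)
m, n, r = 40, 40, 3
M = rng.standard_normal((m, r)) @ rng.standard_normal((r, n))  # rank-r truth
mask = rng.random((m, n)) < 0.5                                 # observed set

X, eps = np.where(mask, M, 0.0), 1.0
for it in range(50):
    # W = (X X^T + eps I)^(-1/2) via symmetric eigendecomposition
    w, U = np.linalg.eigh(X @ X.T + eps * np.eye(m))
    W = (U * w ** -0.5) @ U.T
    for j in range(n):
        f = ~mask[:, j]                         # free (unobserved) rows
        if f.any():
            rhs = -W[np.ix_(f, ~f)] @ M[~f, j]  # optimality condition
            X[f, j] = np.linalg.solve(W[np.ix_(f, f)], rhs)
            X[~f, j] = M[~f, j]                 # keep observed entries
    eps = max(eps * 0.8, 1e-8)                  # anneal the smoothing

print("relative recovery error:", np.linalg.norm(X - M) / np.linalg.norm(M))
```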
Matrix equations and matrix functions are encountered in the solution of diverse real-world problems. A meaningful example is the class of quadratic matrix equations of the kind \(AX^2+BX+C=0\) which play a fundamental role in the analysis of Quasi-Birth-Death stochastic processes; here, \(A,B\) and \(C\) are given \(n\times n\) matrices and \(X\) is the matrix unknown. A typical example of a matrix function is the matrix exponential \(\exp(A)=\sum_{i=0}^\infty\frac1{i!}A^i\) which is encountered, for instance, in the analysis of complex networks and in the solution of linear differential equations.
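As a small numerical illustration of both objects in the finite case (with illustrative random data, not a QBD model; the functional iteration shown is a simple scheme, not the most robust one):

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

rng = np.random.default_rng(6)

# Matrix exponential: truncated power series versus scipy's expm.
A = 0.1 * rng.standard_normal((4, 4))
series = sum(np.linalg.matrix_power(A, i) / factorial(i) for i in range(20))
print("series vs expm:", np.linalg.norm(series - expm(A)))

# Quadratic matrix equation A X^2 + B X + C = 0 via the functional iteration
# X_{k+1} = -B^{-1}(C + A X_k^2); it converges here because A, C are small.
A, B, C = 0.1 * rng.standard_normal((4, 4)), np.eye(4), 0.1 * rng.standard_normal((4, 4))
X = np.zeros((4, 4))
for _ in range(200):
    X = -np.linalg.solve(B, C + A @ X @ X)
print("residual |AX^2+BX+C|:", np.linalg.norm(A @ X @ X + B @ X + C))
```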
In certain cases, say, in the tandem Jackson queue or in bi-dimensional random walks in the quarter plane, the input matrices have infinite size and are quasi-Toeplitz, that is, they can be expressed in the form \(A=T(a)+E\), where \(T(a)=(a_{j-i})_{i,j\in\mathbb Z^+}\) is the Toeplitz matrix associated with the bi-infinite sequence \(\{a_k\}_{k\in\mathbb Z}\), while \(E=(e_{i,j})_{i,j\in\mathbb Z^+}\) is such that \(\sum_{i,j\in\mathbb Z^+}|e_{i,j}|<+\infty\).
In this talk we introduce and examine the problem of solving quadratic matrix equations and computing matrix functions which model stochastic processes, and put specific emphasis on the computational issues. We discuss the case of finite matrices and then we deal with the more difficult problem where the coefficients are infinite quasi-Toeplitz matrices.
More specifically, we show that the set of quasi-Toeplitz matrices, endowed with a suitable norm, is a Banach space, and that the subset formed by matrices \(T(a)+E\), where the sequence \(\{a_k\}_{k\in\mathbb Z}\) defines an analytic function \(a(z)=\sum_{i\in\mathbb Z}a_iz^i\) over a suitable domain, is a matrix algebra endowed with a sub-multiplicative norm. Relying on these properties, we introduce a general framework to deal with infinite quasi-Toeplitz matrices and present algorithms for solving matrix equations and computing matrix functions in a very effective way.
We discuss Newton’s method as a root finder for polynomials of large degrees, and as an interesting dynamical system in its own right. There is recent progress on Newton’s method in at least three directions:
1. We present recent experiments on finding all roots of a number of large polynomials with Newton's method, for degrees exceeding 100 million, on standard laptops with surprising ease and in short time, with observed complexity \(O(d \log^2 d)\) (joint with Simon Schmitt and Robin Stoll).
2. We outline theory about the complexity of Newton's method as a root finder: unlike various other known methods, Newton as a root finder has both good theory and good implementation results in practice (partly joint work with Magnus Aspenberg, Todor Bilarev, Bela Bollobas, and Malte Lackmann).
3. We discuss Newton's method as a dynamical system: if \(p\) is a polynomial, then the Newton map is a rational map that very naturally “wants to be iterated”. Among all rational maps, Newton's method has the best understood dynamics, and these dynamical systems can be classified (in the sense of a theory developed by Bill Thurston). As a byproduct, we offer an answer to a question of Steve Smale on the existence of attracting cycles of higher period (joint work with Russell Lodge and Yauhen Mikulich).
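A naive sketch of point 1 (plain Newton iteration from many starting points on a circle that encloses all roots, here via the Cauchy bound; not the talk's optimized implementation) looks as follows:

```python
# Newton's method as a root finder: iterate z -> z - p(z)/p'(z) from many
# starting points on an enclosing circle and collect distinct limits.
import numpy as np

rng = np.random.default_rng(7)
deg = 64                                         # degree kept small here
p = np.polynomial.Polynomial(np.r_[rng.standard_normal(deg), 1.0])  # monic
dp = p.deriv()

R = 1.0 + np.max(np.abs(p.coef[:-1]))            # Cauchy bound for monic p
starts = R * np.exp(2j * np.pi * np.arange(4 * deg) / (4 * deg))

roots = []
for z in starts:
    for _ in range(200):                         # plain Newton steps
        z = z - p(z) / dp(z)
    if abs(p(z)) < 1e-8 and all(abs(z - r) > 1e-6 for r in roots):
        roots.append(z)
print("distinct roots found:", len(roots), "of", deg)
```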
https://www.statistik.lmu.de/~abender/Kolloquium/abstracts/ws1617/ruegamer.pdf