We present a way to embed nonlinear phenomena based on their intrinsic variability. In particular, we apply diffusion maps, a nonlinear dimensionality reduction method, to extract useful axes from measurements or simulation data of different nonlinear phenomena. "Useful" here refers to the degree of coarsening at which we want to observe our system, with this degree specified by appropriately tuning the kernel scale in the diffusion maps approach. In addition, we demonstrate that these axes are, provided we have sufficient data, independent of the particular nature of the measured quantity. As illustrative examples, we apply our method to the spatio-temporal chaotic dynamics apparent in the complex Ginzburg-Landau equation, to modulated traveling waves in the Kuramoto-Sivashinsky equation, and to chimera states, states of coexisting coherence and incoherence. For the latter, we show that it is possible to extract insightful order parameters, allowing further understanding of these intricate dynamics. Moreover, this embedding can be made invariant to the measurement function, showing similarities to the Koopman operator.
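To make the role of the kernel scale concrete, here is a minimal diffusion-maps sketch (not the authors' implementation): a Gaussian kernel with scale `eps` is row-normalized into a Markov matrix, and the top non-trivial eigenvectors give the embedding coordinates; larger `eps` yields a coarser view of the data.

```python
import numpy as np

def diffusion_map(points, eps, n_coords=2):
    """Minimal diffusion-maps sketch: Gaussian kernel with scale eps,
    row-normalized to a Markov matrix, embedded via top eigenvectors."""
    # Pairwise squared distances
    d2 = np.sum((points[:, None, :] - points[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / eps)                 # kernel scale eps sets the coarsening level
    P = K / K.sum(axis=1, keepdims=True)  # row-stochastic diffusion operator
    vals, vecs = np.linalg.eig(P)
    order = np.argsort(-vals.real)
    # Skip the trivial constant eigenvector (eigenvalue 1)
    idx = order[1:n_coords + 1]
    return vecs.real[:, idx] * vals.real[idx]

# Example: points on a noisy circle; the leading diffusion coordinates
# recover the circular geometry.
rng = np.random.default_rng(0)
theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
X = np.c_[np.cos(theta), np.sin(theta)] + 0.01 * rng.normal(size=(200, 2))
coords = diffusion_map(X, eps=0.1)
```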
Time domain Galerkin boundary elements provide an efficient tool for the numerical solution of boundary value problems for the homogeneous wave equation. We review recent advances in time domain boundary elements with mesh refinements for 3D problems. We present an a priori error analysis for graded meshes to resolve geometric edge and corner singularities. On the other hand, an a posteriori error analysis gives rise to adaptive mesh refinement procedures based on error indicators of residual type. Applications include nonlinear dynamic contact problems and engineering problems in the sound emission of car tires. Numerical experiments underline our theoretical results.
A conductance graph on Z^d is a nearest-neighbor graph where all of the edges have positive weights assigned to them. In this talk, we will consider the spread of information between particles performing continuous time simple random walks on a conductance graph. We do this by developing a general multi-scale percolation argument using a two-sided Lipschitz surface that can also be used to answer other questions of this nature. Joint work with Alexandre Stauffer.
It is well known that large deviations of random walks driven by independent and identically distributed heavy-tailed random variables are governed by the so-called principle of one large jump. As observed in many set-ups with heavy tails, we note that further subtleties hold for such random walks on the large deviation scale, which we call hidden large deviations. We apply this idea in the context of queueing processes with heavy-tailed service times and study approximations of severe congestion times for (buffered) queues. Possible directions for going beyond the iid set-up are indicated. We discuss our results with simulated examples. (This is joint work with Harald Bernhard.)
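The principle of one large jump is easy to see in simulation (an illustrative sketch, not the talk's queueing model): conditioned on a heavy-tailed random walk reaching an atypically large level, a single increment accounts for a large share of the total.

```python
import numpy as np

rng = np.random.default_rng(1)

# Pareto increments (tail index alpha = 1.5, so heavy-tailed with finite mean 3).
# A walk of n steps exceeding a level far above its mean n*3 typically does so
# via one single large jump -- the "one large jump" principle.
n, alpha, threshold, trials = 500, 1.5, 3000.0, 5000
steps = rng.pareto(alpha, size=(trials, n)) + 1.0
sums = steps.sum(axis=1)
maxes = steps.max(axis=1)

exceed = sums > threshold   # typical sum is about 1500, so these are rare paths
# Among the exceeding paths, the single largest jump carries a large
# fraction of the whole sum.
frac = np.mean(maxes[exceed] / sums[exceed])
```

With these parameters the fraction carried by the biggest jump is typically around one half, while on unconditioned paths it is tiny.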
In a symmetric two-player game, a symmetric equilibrium can only be dynamically stable if it has positive index. The sum of the indices of all equilibria is 1, so a unique equilibrium has index 1. The index is a topological notion related to geometric orientation, and defined in terms of the sign of the determinant of the payoffs in the equilibrium support. We prove a simple strategic characterization of the index conjectured by Hofbauer: In a nondegenerate symmetric game, an equilibrium has index 1 if and only if it is the unique equilibrium in a larger symmetric game obtained by adding further strategies (it suffices to add a linear number of strategies).
Our elementary proof introduces "unit-vector games", where one player's payoff matrix consists of unit vectors, and applies simplicial polytopes in a novel way. In addition, we employ a very different known result: any matrix with positive determinant is the product of three P-matrices, a class of matrices important in linear complementarity.
Joint work with Anne Balthasar.
Biography: Bernhard von Stengel is professor of mathematics at the London School of Economics. He is interested in the geometry and computation of Nash equilibria and other mathematical questions of game theory and operations research. His professional degrees are in mathematics and in computer science.
Smoluchowski's coagulation equation is a kinetic model which describes aggregation processes in many different applications such as raindrop formation, the creation of planets, and algal growth. The long-time behavior of solutions to this equation is conjectured to be given by self-similar profiles. However, this 'scaling hypothesis' has so far only been established in special 'solvable' cases where explicit solution formulas can be computed.
In this talk, we will give an overview of Smoluchowski's equation and the recent development towards self-similar behavior for 'non-solvable' models.
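For reference, the continuous Smoluchowski coagulation equation for the size distribution $f(x,t)$ with coagulation kernel $K$ reads (standard form, not specific to this talk):

```latex
\partial_t f(x,t) \;=\; \frac{1}{2}\int_0^x K(x-y,\,y)\, f(x-y,t)\, f(y,t)\, \mathrm{d}y
\;-\; f(x,t)\int_0^\infty K(x,y)\, f(y,t)\, \mathrm{d}y .
```

The 'solvable' cases mentioned above are the classical kernels $K(x,y)=\text{const}$, $K(x,y)=x+y$ and $K(x,y)=xy$, for which explicit solution formulas exist.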
TBA
A planar set that contains a unit segment in every direction is called a Kakeya set. These sets have been studied intensively in geometric measure theory and harmonic analysis since the work of Besicovitch (1928); we find a new connection to game theory and probability. A hunter and a rabbit move on the integer points in [0,n) without seeing each other. At each step, the hunter moves to a neighboring vertex or stays in place, while the rabbit is free to jump to any node. Thus they are engaged in a zero-sum game, where the payoff is the capture time. The known optimal randomized strategies for hunter and rabbit achieve expected capture time of order n log n. We show that every rabbit strategy yields a Kakeya set; the optimal rabbit strategy is based on a discretized Cauchy random walk, and it yields a Kakeya set K consisting of 4n triangles that has minimal area among such sets (the area of K is of order 1/log(n)). Passing to the scaling limit yields a simple construction of a random Kakeya set with zero area from two Brownian motions. (Joint work with Y. Babichenko, Y. Peres, R. Peretz and P. Winkler.)
The restricted isometry property (RIP) has been an integral tool in the analysis of various inverse problems with sparsity models. We propose generalized notions of sparsity and provide a unified framework on the RIP for structured measurements, in particular when combined with isotropic group actions. Our results extend the RIP for partial Fourier measurements by Rudelson and Vershynin to a much broader context and provide upper bounds on the number of group-structured measurements for the RIP on generalized sparsity models. We illustrate the main results with an infinite-dimensional example, where the sparsity represented by a smoothness condition approximates the total variation. We also discuss fast dimensionality reduction on generalized sparsity models. In the generalized models, the sparsity parameter is no longer subadditive. Therefore, the RIP does not preserve distances among sparse vectors. We show a weaker version with additive distortion, which is similar to an analogous property in 1-bit compressed sensing. This is joint work with Marius Junge.
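For orientation, the classical RIP (which the talk generalizes) states that a measurement matrix $A$ acts as a near-isometry on all $s$-sparse vectors:

```latex
(1-\delta_s)\,\|x\|_2^2 \;\le\; \|Ax\|_2^2 \;\le\; (1+\delta_s)\,\|x\|_2^2
\qquad \text{for all } x \text{ with } \|x\|_0 \le s,
```

where $\delta_s \in (0,1)$ is the restricted isometry constant. The additive-distortion version mentioned above weakens the multiplicative two-sided bound to one that holds only up to an additive error.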
Statistics in Berlin (and Prussia) developed very successfully in the 19th century. The Royal Prussian Statistical Bureau (sic) was established in 1805; in 1862 the first statistical office of a town was opened, in Berlin. And in 1863 the 5th International Statistical Congress (a series founded in 1853) was held in Berlin. From 1860 to 1882 the statistician Ernst Engel (1821-1896) was director of the Royal Prussian Statistical Bureau, and there he introduced the first seminar on statistics, i.e. teaching and training courses in descriptive and state statistics. After his resignation, a special seminar was established at the Berlin University: the Economic-Statistical Seminar, in 1886. Two famous statisticians became its first directors: August Meitzen (1822-1910) and Richard Boeckh (1824-1907); the other two directors were the economists Adolph (Adolf) Wagner (1835-1917) and Gustav von Schmoller (1838-1917). After 1907/1910 statistics became less important, although the famous Ladislaus von Bortkiewicz (1868-1931) taught at the Berlin University from 1901 until his death in 1931 and at the Berlin School of Economics (Berliner Handels-Hochschule) from 1906 until 1923.
First, I'll give an overview of the history of the Economic-Statistical Seminar at the Berlin University from its beginning. Second, I'll describe the development of the seminar in the 1920s and 1930s, including the close relations between the Philosophical Faculty of the Berlin University and the Berlin School of Economics (Berliner Handels-Hochschule), which was opened in 1906. Third, I'll give an overview of the situation up to the 1950s, describing the deep break caused by the Nazis, and the reconstruction after 1946, when the Economics Faculty was opened at the Berlin University (from 1948 on, Humboldt University).
The University was re-opened in January 1946; the School of Economics and the fields of economics and statistics of the University's Law Faculty were united to found the new Faculty of Economics. In the tradition of August Meitzen and Richard Boeckh, a statistician became the first dean of the Faculty: Bruno Gleitze (1903-1980), who left East Berlin in December 1948 and moved to West Berlin.
see here:
http://www.ma.tum.de/Mathematik/FakultaetsKolloquium#AbstractVogt
In this presentation, we give an overview of generalized sparse grid methods for higher-dimensional approximation. We focus on optimal numerical schemes based on sparse grids, where the product between the spatial variables, the temporal variable, the stochastic variables and the modelling parameters of a parametrized PDE is collectively taken into account. To this end, we especially employ the adaptive combination technique.
We give examples from incompressible non-Newtonian fluid simulations involving multi-scale viscoelastic flows.
see
http://www.ma.tum.de/Mathematik/FakultaetsKolloquium#AbstractGriebel
The talk describes a surprisingly rich family of function spaces which can be defined on general LCA (locally compact Abelian) groups, such as G = R^d. The starting point is the Banach Gelfand triple (SO, L2, SO'), consisting of the Segal algebra SO(G) as a space of test functions and its dual space as the minimal resp. maximal space in this family. One of the most attractive (and surprising) facts about this setting, which requires only the use of Banach spaces and their dual spaces, is the existence of a kernel theorem, which extends the classical association of L2-kernels with the family of Hilbert-Schmidt operators. As time permits, a number of questions arising from classical analysis and time-frequency analysis resp. Gabor analysis will be mentioned.
We study Landauer’s principle for repeated interaction systems consisting of a reference quantum system S in contact with an environment E consisting of a chain of independent quantum probes. The system S interacts with each probe sequentially, and the Landauer principle relates the energy variation of E and the decrease of entropy of S via the entropy production of the dynamical process. We address the adiabatic regime where the environment, consisting of T ≫ 1 probes, displays variations of order 1/T between successive probes. We analyze Landauer’s bound and its refinements at the level of the full statistics associated to a two-time measurement protocol of, essentially, the energy of E. Joint work with E. Hanson, Y. Pautrat and R. Raquépas.
Outer measures, such as the Lebesgue outer measure, are usually a stepping stone in the development of measure theory. We discuss how a useful sub-additive outer integration theory can be developed even in situations where too few measurable sets exist to build a measure theory. This theory has helped to understand time-frequency analysis; we will present a conceptual way of looking at almost everywhere convergence of Fourier series and more recent developments.
We formulate a quasistatic nonlinear model for nonsimple viscoelastic materials in a finite-strain setting in the Kelvin-Voigt rheology, where the viscosity stress tensor complies with the principle of time-continuous frame-indifference. We identify weak solutions in the nonlinear framework as limits of time-incremental problems for vanishing time increment. Moreover, we show that linearization around the identity leads to the standard system for linearized viscoelasticity and that solutions of the nonlinear system converge in a suitable sense to solutions of the linear one. This is joint work with Martin Kruzik (Prague).
Festkolloquium: The Mathematical Institute invites you to a Festkolloquium on 14 July 2017 on the occasion of the 80th birthday of Otto Forster. PROGRAMME 2:00 pm: Tea in B 448 (Theresienstraße 39)
3:00 pm: Lecture by Prof. Dr. G. Frey (Essen), lecture hall B 052: Arithmetic Geometry: Deep Theory, Efficient Algorithms and Surprising Applications
Afterwards: Coffee in B 448
5:45 pm: Lecture by Prof. Dr. F. Forstnerič (Ljubljana), lecture hall B 052: Proper Holomorphic Mappings of Stein Manifolds.
Afterwards: Reception
7:30 pm: Dinner (venue to be announced)
Please send registrations to Frau Heinemann: Sekretariat.Merkl@mathematik.uni-muenchen.de. We will be happy to help you reserve a room.
We study bifurcations of dynamical systems perturbed by bounded noise. A simplified approach to look at such systems is to consider the induced set-valued dynamical system (by ignoring the involved probabilities), and bifurcations in set-valued dynamical systems can be observed as discontinuous changes in the minimal invariant sets. On the other hand, a more detailed description is possible when considering the induced random dynamical systems instead of the set-valued dynamical system. Here we study bifurcations induced by a breakdown of topological equivalence and discuss in particular one-dimensional monotone random maps and random circle homeomorphisms. Differences and similarities of both approaches will be highlighted in the talk. Joint work with Thai Son Doan, Jeroen Lamb and Julian Newman.
Given the hypercubic lattice Z^d, we can create random subgraphs by retaining each edge independently of the others with probability 0<p<1 and deleting it otherwise. Percolation studies the behaviour of the remaining sub-lattice, and we will consider percolation at a specific "critical" probability. Gady Kozma and Asaf Nachmias have shown how one may calculate the respective one-arm exponent, which approximates the probability that the remaining sub-lattice in critical percolation offers a shortest path of length r>0. This talk presents their results and extends them by calculating the multi-arm exponent, which approximates the probability that, in critical percolation on Z^d, there are n disjoint paths of length r>0 starting close to the origin.
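The basic sampling step is easy to make concrete (an illustrative sketch on a finite box of Z^2, not the talk's setting of critical percolation on Z^d): keep each nearest-neighbor edge independently with probability p and examine the clusters of the retained sub-lattice.

```python
import numpy as np

rng = np.random.default_rng(0)

# Bond percolation on an L x L box of Z^2: retain each edge with probability p.
# p = 1/2 is the critical probability for bond percolation on Z^2.
L, p = 50, 0.5
horizontal = rng.random((L, L - 1)) < p  # edges (i,j)-(i,j+1)
vertical = rng.random((L - 1, L)) < p    # edges (i,j)-(i+1,j)

# Union-find to label connected clusters of the retained sub-lattice.
parent = np.arange(L * L)

def find(x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def union(a, b):
    ra, rb = find(a), find(b)
    if ra != rb:
        parent[ra] = rb

for i in range(L):
    for j in range(L - 1):
        if horizontal[i, j]:
            union(i * L + j, i * L + j + 1)
for i in range(L - 1):
    for j in range(L):
        if vertical[i, j]:
            union(i * L + j, (i + 1) * L + j)

clusters = len({int(find(x)) for x in range(L * L)})
```

Arm events then ask for open paths from a neighborhood of the origin to distance r within such a configuration.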
In this presentation we deal with the existence of solutions to stochastic partial differential equations in scales of Hilbert spaces, and show how this is related to the existence of invariant manifolds. As a particular example, we will treat an equation in the space of tempered distributions; here the Hilbert scales are given by Hermite-Sobolev spaces.
In the last two decades one can observe an increased interest in the analysis of discrete structures. On the one hand, the fact that increased computational power is nowadays available to everybody, and that computers can essentially work only with discrete values, has sparked an increased interest in working with discrete structures. This is true even for persons who were originally unrelated to the field. An outstanding example can be seen in the change of philosophy of the Finite Element Method: from the classical point of view of being essentially a method for discretizing partial differential equations via a variational formulation, the modern approach lifts the problem, and therefore the finite element formulation, directly onto the mesh, resulting in the so-called Finite Element Exterior Calculus. This means that one requires discrete structures which are equivalent to the usual continuous structures. On the other hand, the increased computational power also means that problems in physics which are traditionally modeled by means of continuous analysis are more and more directly studied at the discrete level, the principal example being the Ising model from statistical physics, as opposed to the continuous Heisenberg model, which has been studied by S. Smirnov and his collaborators using discrete complex analysis. Unfortunately, a higher-dimensional analogue of discrete function theories is only in its infancy. In this talk we will present two principal approaches: the classic one based on finite differences, as well as a more general version called script geometry. Furthermore, we will present the basic ingredients of a function theory, such as the Fischer decomposition and power series, and discuss potential-theoretic arguments like discrete Cauchy kernels, discrete Hilbert/Riesz transforms and Hardy spaces. Among possible applications we are going to discuss discrete Riemann boundary value problems and their importance for image processing.
Most of the physical processes arising in nature are modeled by differential equations, either ordinary (example: the spring/mass/damper system) or partial (example: heat diffusion). From the point of view of analog computability, the existence of an effective way to obtain solutions (either exact or approximate) of these systems is essential.
A pioneering model of analog computation is the General Purpose Analog Computer (GPAC), introduced by Shannon as a model of the Differential Analyzer and improved by Pour-El, Lipshitz, Rubel, Costa, Graça and others. The GPAC is capable of manipulating real-valued data streams. Its power is known to be characterized by the class of differentially algebraic functions, which includes the solutions of initial value problems for ordinary differential equations.
We address one of the limitations of this model, which is its fundamental inability to reason about functions of more than one independent variable (the ‘time’ variable). In particular, the Shannon GPAC cannot be used to specify solutions of partial differential equations.
We extend the class of data types using networks with channels which carry information on a general complete metric space X; here we take X to be the class of continuous functions of one real (spatial) variable.
We consider the original modules in Shannon's construction (constants, adders, multipliers, integrators) and we add a differential module which has one input and one output. For input u, it outputs the spatial derivative v(t) = ∂_x u(t).
We then define an X-GPAC to be a network built with X-stream channels and the above-mentioned modules. This leads us to a framework in which the specifications of such analog systems are given by fixed points of certain operators on continuous data streams. Such a framework was considered by Tucker and Zucker. We study the properties of these analog systems and their associated operators, and present a characterization of the X-GPAC-generable functions which generalizes Shannon’s results.
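A numerical caricature of the module set may help fix ideas (an illustrative sketch under the assumption that X-streams are represented by continuous functions sampled on a spatial grid; the module names follow the text, the discretization is ours):

```python
import numpy as np

# X-streams as functions of one spatial variable, discretized on a grid.
x = np.linspace(0, 2 * np.pi, 400)

def adder(u, v):
    return u + v            # pointwise addition of two X-streams

def multiplier(u, v):
    return u * v            # pointwise multiplication

def differential(u):
    # the added differential module: v = d/dx u (finite-difference realization)
    return np.gradient(u, x, edge_order=2)

def integrator(stream_samples, dt):
    # time integration over a sequence of X-stream samples
    return np.cumsum(stream_samples, axis=0) * dt

# The differential module applied to sin(x) approximates cos(x).
u = np.sin(x)
v = differential(u)
```

Networks of such modules correspond to fixed-point equations for operators on continuous data streams, in the spirit of the Tucker-Zucker framework mentioned above.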
In this talk, we consider the application of optimal control methods in the field of vehicle engineering. In the latter, the considered systems, modern vehicles, are highly complex mechatronic systems, in which subsystems and components from a variety of different physical domains (mechanics, hydraulics, electrics, ...) dynamically interact. The basis of a corresponding mathematical model is typically a multibody system. Mathematically, the model is set up as a (nonlinear) ordinary differential equation (ODE) or a (nonlinear) differential-algebraic equation (DAE). Thus, optimal control methods for this class of systems have to be investigated and applied in a numerically efficient way. We discuss this general situation and, moreover, present selected specific application scenarios.
The first scenario is the dynamic inversion of mechanical systems using optimal control methods. Here, the task is to determine input signals (e.g., vertical road profiles) that track certain given reference quantities (displacements, accelerations, section forces), which are typically obtained by (test-track) measurements. This task can be formulated as an ODE-/DAE-optimal control problem. Additionally, considering road profiles that excite a vehicle model may also lead to delays in the input. We discuss the problem set-up - (delay-)DAE optimal control problems - and present a solution approach by so-called function space methods (projected gradient, Gauss-Newton) as well as some numerical results.
Secondly, we are concerned with the prediction of speed profiles based on geo-referenced data and a longitudinal dynamic vehicle model. Speed profiles are characteristic both for the dynamic loads and for fuel consumption and energy demands, respectively. On a given route in the world, we obtain data (curvature, slope, legal speed limits, traffic lights, ...) from a digital map. The vehicle model accounts for the longitudinal dynamic characteristics of the considered vehicle; driver and traffic are modeled stochastically. We end up with a constrained ODE-optimal control problem of mixed-integer type (due to gear selection as input). We present a solution strategy by dynamic programming and give numerical results. With similar models and optimal control approaches, it is possible to predict steering angles on given routes in the same context; we also briefly discuss this task.
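The dynamic-programming idea can be sketched on a toy version of the problem (all segment data, speed grid and cost model below are illustrative assumptions, not the talk's vehicle model): discretize the admissible speeds, then sweep over road segments keeping, for each terminal speed, the cheapest profile so far.

```python
# Toy speed-profile DP: pick a discrete speed on each road segment to
# minimize fuel cost plus a penalty for speed changes, subject to limits.

segments = [(500.0, 27.0), (300.0, 13.0), (700.0, 22.0)]  # (length m, limit m/s)
speeds = [8.0, 13.0, 22.0, 27.0]                           # discrete speed grid
accel_penalty = 2.0   # cost per m/s of speed change between segments

def fuel_rate(v):
    # toy fuel model: consumption per second, quadratic in speed
    return 0.05 * v * v + 1.0

# dp[v] = best cost over the segments processed so far, ending at speed v
dp = {v: 0.0 for v in speeds}
for length, limit in segments:
    new_dp = {}
    for v in speeds:
        if v > limit:
            continue                      # respect the legal speed limit
        stage = (length / v) * fuel_rate(v)
        new_dp[v] = min(dp[u] + abs(v - u) * accel_penalty for u in dp) + stage
    dp = new_dp

best_cost = min(dp.values())
```

The real problem adds the mixed-integer gear choice and stochastic driver/traffic models, but the backward/forward sweep structure is the same.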
Last, we briefly cover the area of autonomous and (partially) automated driving scenarios. Here, dynamic vehicle models linked with optimal control methods, and possibly model predictive control strategies, may be used for vehicle control or driver assistance systems in certain traffic scenarios.
We consider a class of monostable evolution equations with non-local reaction and diffusion terms, where the non-localities are given by convolutions with probability densities. We show that the long-time behavior of solutions depends on the asymptotics of both the dispersal kernel and the initial condition. Namely, if the dispersal kernel and the initial condition decay no slower than exponentially, we demonstrate constant speed of propagation for the solution; we also study the traveling waves that appear in this case. In contrast, if either the dispersal kernel or the initial condition decays slower than exponentially, then we demonstrate acceleration of the solution. In particular, polynomially, semi-exponentially and exponentially fast propagation is shown for appropriate parameters of the equation. New estimates for solutions to a linear non-local equation are obtained as well.
The speakers are: Codina Cotar, Aurelia Deshayes, Margherita Disertori, Mylene Maida, Constanza Rojas-Molinas, Kavita Ramanan, Wioletta Ruszel, Anita Winter
SCoNDO is a conference on applied problems in discrete and nonlinear optimization. It is held each year in July at Technische Universität München in München or Garching near München. Talks are given by the participants of the lectures Case Studies Discrete Optimization and Case Studies Nonlinear Optimization of the TUM Master's Programs in Mathematics.
Guests are welcome to join the conference; participation is free of charge. If you would like to register, please write an email to klemm@ma.tum.de or to any of the organizers.
http://www.ma.tum.de/SCoNDO/
Systemic risk refers to the risk that a financial system is susceptible to failures due to the characteristics of the system itself. The tremendous cost of this type of risk requires the design and implementation of tools for the efficient macroprudential regulation of financial institutions. The first part of the talk presents a comprehensive model of a financial system that integrates network effects, bankruptcy costs, cross-holdings, and fire sales. The second part discusses a multivariate approach to measuring systemic risk. The talk is based on joint work with Zachary G. Feinstein, Birgit Rudloff, and Kerstin Weske.
TBA
Consider the following problem. An edge-weighted graph is presented edge by edge, in uniform random order, to an algorithm that wants to construct a large-weight forest. When an edge appears, the algorithm must decide whether to take it or not. But there is a catch: the algorithm cannot see the weights; it can only compare pairs of revealed elements. Can the algorithm output a set whose weight is large compared to that of the optimum forest? If we replace "finding a forest" with "finding an independent set in a given matroid", we obtain the ordinal matroid secretary problem (MSP).
In this talk, I will present a technique based on forbidden sets to design, for certain matroids, algorithms with the following property: every element of the global optimum appears in the output solution with good probability. Our technique allows us to improve the current best competitive guarantees for the MSP on most of the matroid classes studied, including graphical, transversal and laminar matroids.
This is joint work with Victor Verdugo and Abner Turkieltaub.
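The ordinal, comparison-only flavor of the problem is already visible in the classical single-choice secretary algorithm (shown here as background; it is not the matroid algorithm of the talk): observe the first n/e candidates without committing, then accept the first one that beats everything seen so far, succeeding with probability about 1/e.

```python
import math
import random

def secretary(ranks):
    """Classical ordinal secretary rule on a random permutation of ranks
    (rank 0 is the best). Uses only pairwise comparisons, never values."""
    n = len(ranks)
    cutoff = round(n / math.e)               # observation phase
    best_seen = min(ranks[:cutoff], default=float('inf'))
    for r in ranks[cutoff:]:
        if r < best_seen:                    # first candidate beating all so far
            return r
    return ranks[-1]                         # forced to take the last one

rng = random.Random(0)
trials, n, wins = 5000, 100, 0
for _ in range(trials):
    perm = list(range(n))
    rng.shuffle(perm)
    if secretary(perm) == 0:                 # picked the overall best
        wins += 1
success_rate = wins / trials                 # empirically close to 1/e ≈ 0.368
```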
Dirk Blömker (University of Augsburg): Stochastic interface motion and slow manifolds
Patrick Dondl (University of Freiburg): Sharp Interface Limits of Phase Field Models
Britta Nestler (KIT): Fundamentals of Phase-Field Modelling for HPC Systems and Coupling with Continuum Mechanics
Robert Nürnberg (Imperial College, London): Numerical Approximation of Phasefield Models