December 2024

02.12.2024 11:00 André Röhm:
Information processing with driven dynamical systems via Reservoir Computing
Online / MI 02.08.011 (Boltzmannstr. 3, 85748 Garching)

Reservoir computing is a popular scheme to utilize the naturally occurring computational power of dynamical systems and natural phenomena. Originally inspired by recurrent neural networks, it is now applied to a wide variety of substrates and systems. It can be used for many typical machine learning tasks, such as time series prediction or classification, but in recent years it has gathered particular interest for the ease with which a reservoir computer can emulate another dynamical system in the so-called "autonomous mode" or "closed loop" configuration. This talk will introduce a new method that extends the existing research by changing the "target system" class from merely modelling autonomous dynamical systems to explicitly allowing the reservoir computer to also emulate "driven dynamical systems". Of great interest here is the fact that the "reservoir" itself is a driven dynamical system. Thus, within this new framework of "semi-closed loop operation", we can turn the reservoir computing scheme on itself and emulate reservoirs with other reservoirs. Many open questions remain, in particular what the ordering of "emulatable" reservoirs is and how to measure this efficiently.
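
As background for the "open loop"/"closed loop" terminology above, here is a minimal echo state network sketch in Python; the architecture, parameter values, and the toy driving signal are my own illustrative choices, not taken from the talk.

    # Minimal echo state network sketch (illustrative only; not the speaker's code).
    # Open loop: the reservoir is driven by an input sequence u_t.
    # Closed loop: the trained output is fed back as the next input.
    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 200, 1000                                     # reservoir size, training length
    W = rng.normal(0, 1, (N, N))
    W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))      # scale spectral radius below 1
    W_in = rng.uniform(-0.5, 0.5, (N, 1))

    u = np.sin(0.1 * np.arange(T + 1))[:, None]          # toy driving signal
    x = np.zeros((T, N))
    for t in range(1, T):
        x[t] = np.tanh(W @ x[t - 1] + W_in @ u[t])       # driven (open-loop) update

    # Train a linear readout to predict the next input (ridge regression).
    X, Y = x[10:-1], u[11:T]
    W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ Y)

    # Closed loop: feed the prediction back in; the reservoir now runs autonomously.
    x_c, y_c = x[-1].copy(), x[-1] @ W_out
    preds = []
    for _ in range(100):
        x_c = np.tanh(W @ x_c + (W_in @ y_c).ravel())
        y_c = x_c @ W_out
        preds.append(y_c.item())

In the open-loop phase the reservoir is itself a driven dynamical system; switching to the closed loop turns the same reservoir into an autonomous emulator of its training signal, which is the starting point that the talk generalises to driven target systems.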

02.12.2024 16:30 Gideon Chiusole (TUM):
Stochastics, Geometry, and Stability of Noisy Patterns
BC1 2.01.10 (8101.02.110) (Parkring 11, 85748 Garching-Hochbrück)

Pattern formation phenomena are omnipresent in the natural sciences (especially chemistry, biology, and physics). Consequently, patterns, their formation, and their stability have been studied extensively in the theory of dynamical systems. However, while the deterministic side is in large part well understood, very little is known on the stochastic side. We want to use geometric methods and techniques from dynamical systems and regularity structures to extend certain deterministic stability results to generalised patterns in singular SPDEs.
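
As a concrete, merely illustrative example of this setting, a prototypical singular SPDE with a pattern-forming deterministic part is the stochastic Nagumo/Allen-Cahn equation

\[
\partial_t u = \Delta u + u(1-u)(u-a) + \xi,
\]

where \xi denotes space-time white noise; in spatial dimension two and higher the equation is singular and must be renormalised, e.g. within the theory of regularity structures, while the travelling fronts and other patterns of the noiseless equation are the objects whose stochastic stability is at stake.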

03.12.2024 16:00 Ruilong Zhang:
Fair Allocation with Scheduling Constraints
MI 02.06.011 (Boltzmannstr. 3, 85748 Garching)

We study a fair resource scheduling problem, where a set of interval jobs is to be allocated to heterogeneous machines controlled by intelligent agents. Each job is associated with a release time, a deadline, and a processing time, so that it can be processed only if its entire processing period lies between the release time and the deadline. The machines gain possibly different utilities by processing different jobs, and all jobs assigned to the same machine must be processed without overlap. We consider two widely studied solution concepts, namely maximin share fairness and envy-freeness. For both criteria, we discuss the extent to which fair allocations exist and present constant-factor approximation algorithms for various settings.
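
For orientation, the two fairness notions can be written as follows; this is a standard formulation in my own notation, and the talk's scheduling-constrained variants may be defined slightly differently. Writing v_i(S) for agent i's utility from a feasible (non-overlapping, deadline-respecting) set of jobs S on its machine, the maximin share of agent i among n agents is

\[
\mathrm{MMS}_i \;=\; \max_{(A_1,\dots,A_n)} \; \min_{1 \le k \le n} \; v_i(A_k),
\]

the maximum ranging over feasible partitions of the jobs into n bundles. An allocation (A_1, ..., A_n) is maximin-share fair if every agent i receives value at least (a constant fraction of) MMS_i, and it is envy-free if v_i(A_i) >= v_i(A_j) for all agents i and j.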

04.12.2024 12:15 Heather Battey (Imperial College London):
Inducement of population-level sparsity
Seminarraum 8101.02.110 (Parkring 11, 85748 Garching)

The work on parameter orthogonalisation by Cox and Reid (1987) is presented as an inducement of abstract population-level sparsity. The latter is taken as a unifying theme for the talk, in which sparsity-inducing parameterisations or data transformations are sought. Examples from some of my work are framed in this light, with emphasis on recent developments addressing the following question: for a given statistical problem, not obviously sparse in its natural formulation, can a sparsity-inducing reparametrisation be deduced? In general, the solution strategy for the problem of exact or approximate sparsity inducement appears to be context specific and may entail, for instance, solving one or more partial differential equations, or specifying a parametrised path through data-transformation or parametrisation space.
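
To recall the starting point (standard background rather than content specific to the talk): with interest parameter \psi and, say, a scalar nuisance parameter \lambda, Cox and Reid call \psi and a new nuisance parameter \phi orthogonal when the cross-term of the expected Fisher information vanishes,

\[
i_{\psi\phi}(\psi,\phi) \;=\; E\!\left[-\,\frac{\partial^2 \ell(\psi,\phi)}{\partial \psi\, \partial \phi}\right] \;=\; 0,
\]

and constructing such a reparametrisation \lambda = \lambda(\psi,\phi) amounts to solving the partial differential equation

\[
i_{\lambda\lambda}\,\frac{\partial \lambda}{\partial \psi} \;=\; -\,i_{\psi\lambda},
\]

which is one concrete instance of the closing remark that sparsity inducement may entail solving partial differential equations.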

04.12.2024 16:00 Angela Capel:
Rapid thermalisation of quantum dissipative many-body systems
00.10.011 CIT meeting room 1 (Boltzmannstr. 3, 85748 Garching)

Quantum systems typically reach thermal equilibrium when in weak contact with a large external bath. Understanding the speed of this thermalisation is a challenging problem, especially in the context of quantum many-body systems where direct calculations are intractable. The usual way of bounding the speed of this process is by estimating the spectral gap of the dissipative generator, but this does not always yield a reasonable estimate for the thermalisation time. When the system instead satisfies a modified logarithmic Sobolev inequality (MLSI), the thermalisation time is at most logarithmic in the system size, yielding wide-ranging applications to the study of many-body quantum systems in and out of equilibrium, such as stability against local perturbations (in the generator), efficient preparation of Gibbs states (the equilibria of these processes), etc.

In this talk, I will present an overview of a strategy to prove that a system satisfies an MLSI provided that correlations decay sufficiently fast between spatially separated regions on the Gibbs state of a local, commuting Hamiltonian in any dimension. I will subsequently review the current state of the art for Davies dissipative generators thermalising to such Gibbs states.
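
As a rough guide to the terminology (standard definitions in my notation, not material from the talk): writing the dissipative evolution as \rho_t with Gibbs fixed point \sigma, an MLSI with constant \alpha > 0 states that the entropy production controls the relative entropy,

\[
\alpha\, D(\rho_t \,\|\, \sigma) \;\le\; -\,\frac{d}{dt}\, D(\rho_t \,\|\, \sigma),
\qquad\text{hence}\qquad
D(\rho_t \,\|\, \sigma) \;\le\; e^{-\alpha t}\, D(\rho_0 \,\|\, \sigma).
\]

Since D(\rho_0 \| \sigma) grows at most linearly in the number of sites n for Gibbs states of local Hamiltonians, Pinsker's inequality then gives trace-norm convergence after a time of order (log n)/\alpha, whereas a spectral-gap estimate alone typically only yields a bound growing linearly in n.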

05.12.2024 17:00 Prof. Dr. Serge Nicaise:
A posteriori goal-oriented error estimators based on equilibrated flux and potential reconstructions
Gebäude 33, Raum 1431 (Werner-Heisenberg-Weg 39, 85577 Neubiberg)

Many engineering problems require the calculation of certain quantities of interest, which are usually defined by linear functionals depending on the solution of a partial differential equation. Examples include the local or global mean value of a temperature, or the magnetic flux density at a given point of an electromagnetic device. In this talk, we focus on estimating the error of such functionals using a wide variety of numerical methods (finite elements, discontinuous Galerkin, and finite volumes), within a unified framework for elliptic and parabolic problems. The key point lies in solving a dual problem and using guaranteed equilibrated estimators for the primal and the dual problems, computed using flux and potential reconstructions. In all cases, we prove that the goal-oriented error can be split into a fully computable estimator and a remainder term that can be bounded above by computable energy-based estimators. We present some numerical tests to underline the capability of this goal-oriented estimator in different contexts: reaction-diffusion problems, the heat equation, and the harmonic formulation of eddy-current problems.
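
Schematically, and restricted for illustration to a conforming discretisation of a linear elliptic problem a(u, v) = (f, v) with quantity of interest Q(u) = \ell(u) (the talk treats a broader, unified framework), the dual problem seeks z with

\[
a(v, z) = \ell(v) \quad \text{for all admissible } v,
\qquad\text{so that}\qquad
Q(u) - Q(u_h) \;=\; a(u - u_h,\, z) \;=\; a(u - u_h,\, z - z_h),
\]

using Galerkin orthogonality in the last step. Guaranteed equilibrated-flux estimators \eta_{pr} and \eta_{du} for the primal and dual energy errors then bound the goal error by a fully computable product plus a computable remainder, of the schematic form |Q(u) - Q(u_h)| \le \eta_{pr}\,\eta_{du} + remainder.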

09.12.2024 15:00 David Hien:
Cycling Signatures: Analyzing Recurrence using Algebraic Topology
MI 03.06.011 (Boltzmannstr. 3, 85748 Garching)

Nonlinear dynamical systems often exhibit rich and complicated recurrent dynamics. Understanding these dynamics is challenging, particularly in higher dimensions where visualization is limited. Additionally, in many applications, time series data is all that is available. In this talk, I present an algebraic topological tool to identify elementary oscillations and the transitions between them. More precisely, we introduce the cycling signature, which is constructed by taking persistent homology of time series segments in a suitable ambient space. Oscillations in a time series can then be identified by analyzing the cycling signatures of its segments.
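
The following Python sketch conveys the flavour of the construction: split a scalar time series into segments, embed each segment in an ambient space (here a simple delay embedding), and compute its one-dimensional persistent homology. It is only an illustration in the spirit of the talk, not the authors' cycling-signature implementation, and it assumes the third-party packages numpy and ripser are available.

    # Hedged sketch: per-segment 1-dimensional persistent homology of a delay
    # embedding, in the spirit of (but not identical to) cycling signatures.
    import numpy as np
    from ripser import ripser

    def delay_embed(x, dim=3, tau=5):
        """Sliding-window (Takens) embedding of a scalar time series."""
        n = len(x) - (dim - 1) * tau
        return np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])

    def segment_h1(x, seg_len=300, step=150):
        """Return the most persistent 1-cycle (birth, death) in each segment."""
        out = []
        for start in range(0, len(x) - seg_len, step):
            cloud = delay_embed(x[start : start + seg_len])
            dgm1 = ripser(cloud, maxdim=1)["dgms"][1]     # H1 persistence diagram
            if len(dgm1):
                out.append(tuple(dgm1[np.argmax(dgm1[:, 1] - dgm1[:, 0])]))
            else:
                out.append(None)
        return out

    # Example: an oscillatory signal whose segments each contain a loop.
    t = np.linspace(0, 60, 3000)
    signal = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
    print(segment_h1(signal)[:3])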

09.12.2024 16:30 Orphée Collin:
The Random Field Ising Chain
BC1 2.01.10 (8101.02.110) (Parkring 11, 85748 Garching-Hochbrück)

The Ising Model is a classical model in statistical physics describing the behavior of ferromagnetic moments on a lattice interacting via a local interaction. When the lattice is one-dimensional and in the case of a homogeneous nearest-neighbor interaction, the model is known to be exactly solvable (and simple). However, the disordered version of the one-dimensional Ising Model (called the Random Field Ising Chain), where the chain interacts with an i.i.d. environment, is a much more challenging model. In particular, it exhibits a pseudo-phase transition as the strength Gamma of the inner interaction goes to infinity. A description of the typical configurations when Gamma is large has been given in the physical literature in terms of a renormalisation group fixed point. In this talk, we will present and discuss the RFIC model, both on the level of the free energy and on the level of configurations. We will consider the cases of both centered and uncentered external fields. The notion of Gamma-extrema of Brownian motion, introduced by Neveu and Pitman, will play a crucial role in our analysis.
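
For concreteness, the model can be written in the standard form (sign and normalisation conventions may differ from the talk): on configurations \sigma \in \{-1,+1\}^N,

\[
H_N(\sigma) \;=\; -\,\Gamma \sum_{i=1}^{N-1} \sigma_i \sigma_{i+1} \;-\; \sum_{i=1}^{N} h_i \sigma_i,
\qquad
\mu_N(\sigma) \;\propto\; e^{-H_N(\sigma)},
\]

where (h_i) is the i.i.d. external field, centered or not, and the regime of interest is the strength \Gamma of the spin-spin coupling tending to infinity.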

11.12.2024 12:15 Han Xiao (Rutgers University, Piscataway, NJ):
Dynamic Matrix Factor Models
8101.02.110 / BC1 2.01.10 (Parkring 11, 85748 Garching)

Matrix time series, which consist of matrix-valued data observed over time, are prevalent in various fields such as economics, finance, and engineering. Such matrix time series data are often observed in high dimensions. Matrix factor models are employed to reduce the dimensionality of such data, but they lack the capability to make predictions without specified dynamics in the latent factor process. To address this issue, we propose a two-component dynamic matrix factor model that extends the standard matrix factor model by incorporating a bilinear autoregressive structure for the low-dimensional latent factor process. This two-component model injects prediction capability into the matrix factor model and provides deeper insights into the dynamics of high-dimensional matrix time series. We present the estimation procedures of the model and their theoretical properties, as well as an empirical analysis of the proposed procedures.
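
Schematically, and in notation of my own choosing rather than necessarily that of the paper, the two components are

\[
X_t \;=\; R\, F_t\, C^{\top} + E_t,
\qquad
F_t \;=\; A\, F_{t-1}\, B^{\top} + \Xi_t,
\]

where X_t is the observed p x q data matrix, R and C are row and column loading matrices, F_t is the low-dimensional latent factor matrix, E_t and \Xi_t are noise terms, and the bilinear autoregression in the second equation supplies the dynamics that make forecasting possible.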

12.12.2024 16:30 Toan Nguyen (Pennsylvania State University):
Landau damping
A 027 (Theresienstr. 39, 80333 München)

A question of great interest in plasma physics is whether excited charged particles in a non-equilibrium state will relax to neutrality or transition to a nontrivial coherent state. Due to the long-range interaction between particles, the self-consistent electric field they generate oscillates in time and disperses in space like a Klein-Gordon wave, known in the physics literature as plasma oscillations or Langmuir’s oscillatory waves. Landau, in his original work, addressed the decay of such an electric field, namely the energy exchange between the oscillatory electric field and charged particles, in a linearized setting. This talk will provide an overview of recent mathematical advances in the nonlinear setting. The talk should be accessible to graduate students and a general audience.
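
The underlying model is the Vlasov-Poisson system for the particle distribution f(t, x, v), written here in a standard normalisation with a neutralising background:

\[
\partial_t f + v \cdot \nabla_x f + E \cdot \nabla_v f = 0,
\qquad
E = -\nabla_x \phi,
\qquad
-\Delta_x \phi = \int f \, dv - 1 .
\]

Landau damping refers to the decay of the electric field E(t) despite the absence of any dissipative mechanism, through phase mixing in the velocity variable.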

___________________________________

Invited by Prof. Phan Thành Nam

16.12.2024 14:00 Probability Colloquium Augsburg-Munich in Munich, LMU:
TBA
TBA (Theresienstr. 39, 80333 München)

16.12.2024 15:00 Alejandro Martinez Sanchez:
Characterising exchange of stability in scalar reaction-diffusion equations via geometric blow-up
MI 03.06.011 (Boltzmannstr. 3, 85748 Garching)

We study the exchange of stability in scalar reaction-diffusion equations which feature a slow passage through a pitchfork-type singularity in the reaction term, using a novel adaptation of the geometric blow-up method. Our results are consistent with known results on bounded spatial domains which were obtained using comparison principles like upper and lower solutions. However, from a methodological point of view, our approach is motivated by the analysis of a closely related ODE problem which was analyzed using geometric blow-up techniques by Krupa and Szmolyan in 2001. After applying the blow-up transformation, we obtain a system of PDEs which can be studied in local coordinate charts. Importantly, the blow-up procedure resolves a spectral degeneracy: the spectrum is ‘pushed back’ so as to create a spectral gap in the linearization about particular steady states. This makes it possible to extend slow-type invariant manifolds into and out of a neighbourhood of the singular point using center manifold theory, in a manner which is conceptually analogous to the approach adopted by Krupa and Szmolyan in their analysis of the aforementioned ODE problem. We expect that the approach can be adapted and applied to the study of dynamic bifurcations in PDEs in a wide variety of different contexts.
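
A minimal caricature of this class of problems (my illustration; the talk's precise hypotheses and function-space setting differ) is the scalar reaction-diffusion equation with a slowly varying bifurcation parameter,

\[
\partial_t u = \partial_x^2 u + \mu u - u^3,
\qquad
\dot{\mu} = \varepsilon, \qquad 0 < \varepsilon \ll 1,
\]

whose reaction term passes slowly through a pitchfork singularity at \mu = 0; dropping the diffusion term recovers the ODE slow-passage problem treated by Krupa and Szmolyan with geometric blow-up.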

18.12.2024 16:00 Mario Kieburg (Melbourne):
CANCELLED: Random Matrices and their Impact on our Life
MI HS 3 (MI 00.06.011) (Boltzmannstr. 3, 85748 Garching)

Unfortunately cancelled due to illness!

Random matrices are amazing mathematical objects as they combine various mathematical fields, from linear algebra to probability, differential geometry, group theory, etc. Due to this richness, it is not surprising that they are versatile tools employed in engineering, physics, statistics, and many more fields. Modern applications can be found in machine learning, quantum information, and wireless telecommunications. In my presentation, I will show you the basic concepts of random matrices, what kinds of objects are usually studied, and which important theorems have been proven. Towards the end, I will turn to the relation between Harmonic Analysis and Random Matrix Theory and how it has helped to solve some contemporary problems.

19.12.2024 16:30 Felix Dietrich (TUM):
Machine Learning with Data-Driven Random Feature Models
A 027 (Theresienstr. 39, 80333 München)

The success of machine learning algorithms is in large part due to advances in optimization. Most deep learning models involving artificial neural networks are trained on large data sets using versions of stochastic gradient descent. After many iterations, and several passes through the total data set, the network parameters are, hopefully, close to a minimum of the loss function. The iterative nature and the large number of hyperparameters make this optimization procedure challenging in practice.

In this talk, we discuss a sampling scheme for a specific, training data-dependent probability distribution of the parameters of feed-forward neural networks that removes the need for iterative updates of the hidden parameters. After they have been chosen at random from the constructed distribution, only a single linear problem must be solved to obtain a fully trained network. Such networks fall in the class of random feature models, but their hidden parameters now depend on the training data. They are provably dense in the continuous functions, and have a convergence rate in the number of neurons that is independent of the input dimension. Using sampled neurons as basis functions in an ansatz allows us to effectively construct models for regression and classification tasks, create recurrent networks, construct neural operators, and solve partial differential equations. In computational experiments, the sampling scheme outperforms iterative, gradient-based optimization by several orders of magnitude in both training speed and accuracy. We will discuss benefits and drawbacks of the approach, as well as future directions regarding new network architectures.
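
The following Python sketch illustrates the general idea of a random feature model whose hidden weights are constructed from the training data and whose outer weights come from a single linear solve. The pair-of-data-points heuristic and all parameter values are my own illustrative choices, not the speaker's published sampling scheme.

    # Hedged sketch of a random feature model with data-dependent hidden weights:
    # hidden directions are built from pairs of training points and only the
    # outer layer is fit by a single linear solve.
    import numpy as np

    rng = np.random.default_rng(0)

    def sample_hidden(X, n_features=200, scale=2.0):
        """Draw weight/bias pairs from random pairs of training points (heuristic)."""
        i, j = rng.integers(0, len(X), size=(2, n_features))
        diff = X[j] - X[i]
        norms = np.maximum(np.linalg.norm(diff, axis=1, keepdims=True), 1e-8)
        W = scale * diff / norms**2                  # direction set by the data
        b = -np.sum(W * X[i], axis=1)                # place the ridge at X[i]
        return W, b

    def fit_readout(X, y, W, b, reg=1e-8):
        """Single linear solve for the outer weights (ridge regression)."""
        Phi = np.tanh(X @ W.T + b)                   # random-feature matrix
        A = Phi.T @ Phi + reg * np.eye(Phi.shape[1])
        return np.linalg.solve(A, Phi.T @ y)

    # Toy regression problem.
    X = rng.uniform(-1, 1, (500, 2))
    y = np.sin(3 * X[:, 0]) * X[:, 1]
    W, b = sample_hidden(X)
    c = fit_readout(X, y, W, b)
    pred = np.tanh(X @ W.T + b) @ c
    print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))

Only the final linear solve involves fitting; no gradient-based iteration over the hidden parameters is needed, which is the property the talk exploits.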

___________________________________

Invited by Prof. Johannes Maly.