02.12.2024 11:00 André Röhm:
Information processing with driven dynamical systems via Reservoir Computing
MI 02.08.011 (Boltzmannstr. 3, 85748 Garching) / online

Reservoir computing is a popular scheme to utilize the naturally occurring computational power of dynamical systems and natural phenomena. Originally inspired by recurrent neural networks, it is now applied to a wide variety of substrates and systems. It can be used for many typical machine learning tasks, such as time series prediction or classification, but in recent years it has gathered particular interest for the ease with which a reservoir computer can emulate another dynamical system in the so-called "autonomous mode" or "closed loop" configuration. This talk will introduce a new method that extends the existing research by changing the "target system" class: instead of merely modelling autonomous dynamical systems, the reservoir computer is explicitly allowed to emulate "driven dynamical systems" as well. Of great interest here is the fact that the "reservoir" itself is a driven dynamical system. Thus, within this new framework of "semi-closed loop operation", we can turn the reservoir computing scheme on itself and emulate reservoirs with other reservoirs. Many open questions remain, in particular what the ordering of "emulatable" reservoirs is and how to measure this efficiently.

02.12.2024 16:30 Gideon Chiusole (TUM):
Stochastics, Geometry, and Stability of Noisy Patterns
BC1 2.01.10 (8101.02.110) (Parkring 11, 85748 Garching-Hochbrück)

Pattern formation phenomena are omnipresent in the natural sciences (especially chemistry, biology, and physics). Consequently, patterns, their formation, and their stability have been studied extensively in the theory of dynamical systems. However, while the deterministic side is in large part well understood, very little is known on the stochastic side. We want to use geometric methods and techniques from dynamical systems and regularity structures to extend certain deterministic stability results to generalised patterns in singular SPDEs.

03.12.2024 16:00 Ruilong Zhang:
Fair Allocation with Scheduling Constraints
MI 02.06.011 (Boltzmannstr. 3, 85748 Garching)

We study a fair resource scheduling problem in which a set of interval jobs is to be allocated to heterogeneous machines controlled by intelligent agents. Each job is associated with a release time, a deadline, and a processing time, and it can be processed only if its entire processing period lies between its release time and deadline. The machines gain possibly different utilities from processing different jobs, and all jobs assigned to the same machine must be processed without overlap. We consider two widely studied solution concepts, namely maximin share fairness and envy-freeness. For both criteria, we discuss the extent to which fair allocations exist and present constant-factor approximation algorithms for various settings.
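The scheduling constraint in this model can be illustrated with a small sketch. The `Job` class and the earliest-deadline-first check below are hypothetical illustrations (exact non-preemptive feasibility with release times is NP-hard in general, so EDF is only a heuristic here), not the algorithms of the talk:

```python
from dataclasses import dataclass

@dataclass
class Job:
    release: float      # earliest start time
    deadline: float     # latest finish time
    processing: float   # processing time
    utility: float      # utility gained by the machine that runs it

def feasible_on_one_machine(jobs):
    """Heuristic earliest-deadline-first check that all jobs fit on one
    machine without overlap, each within its [release, deadline] window."""
    t = 0.0
    for j in sorted(jobs, key=lambda j: j.deadline):
        start = max(t, j.release)
        if start + j.processing > j.deadline:
            return False
        t = start + j.processing
    return True

def bundle_utility(jobs):
    """Total utility a machine derives from a bundle of jobs."""
    return sum(j.utility for j in jobs)
```

A fair allocation then assigns each machine a feasible bundle while balancing the machines' `bundle_utility` values according to the chosen fairness criterion.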

04.12.2024 12:15 Heather Battey (Imperial College London):
Inducement of population-level sparsity
Seminarraum 8101.02.110 (Parkring 11, 85748 Garching)

The work on parameter orthogonalisation by Cox and Reid (1987) is presented as an inducement of abstract population-level sparsity. The latter is taken as a unifying theme for the talk, in which sparsity-inducing parameterisations or data transformations are sought. Examples from some of my work are framed in this light, with emphasis on recent developments addressing the following question: for a given statistical problem, not obviously sparse in its natural formulation, can a sparsity-inducing reparametrisation be deduced? In general, the solution strategy for the problem of exact or approximate sparsity inducement appears to be context specific and may entail, for instance, solving one or more partial differential equations, or specifying a parametrised path through data-transformation or parametrisation space.

04.12.2024 16:00 Angela Capel:
Rapid thermalisation of quantum dissipative many-body systems
00.10.011 CIT meeting room 1 (Boltzmannstr. 3, 85748 Garching)

Quantum systems typically reach thermal equilibrium when in weak contact with a large external bath. Understanding the speed of this thermalisation is a challenging problem, especially in the context of quantum many-body systems where direct calculations are intractable. The usual way of bounding the speed of this process is by estimating the spectral gap of the dissipative generator, but this does not always yield a reasonable estimate for the thermalisation time. When the system instead satisfies a modified logarithmic Sobolev inequality (MLSI), the thermalisation time is at most logarithmic in the system size, yielding wide-ranging applications to the study of in- and out-of-equilibrium quantum many-body systems, such as stability against local perturbations (in the generator), efficient preparation of Gibbs states (the equilibria of these processes), etc.

In this talk, I will present an overview of a strategy to prove that a system satisfies an MLSI provided that correlations decay sufficiently fast between spatially separated regions on the Gibbs state of a local, commuting Hamiltonian in any dimension. I will subsequently review the current state of the art for Davies dissipative generators thermalising to such Gibbs states.

05.12.2024 17:00 Prof. Dr. Serge Nicaise:
A posteriori goal-oriented error estimators based on equilibrated flux and potential reconstructions
Gebäude 33, Raum 1431 (Werner-Heisenberg-Weg 39, 85577 Neubiberg)

Many engineering problems require the calculation of certain quantities of interest, which are usually defined by linear functionals depending on the solution of a partial differential equation. Examples include the local or global mean value of a temperature, or the magnetic flux density at a given point of an electromagnetic device. In this talk, we focus on estimating the error of such functionals using a wide variety of numerical methods (finite elements, discontinuous Galerkin, and finite volumes), within a unified framework for elliptic and parabolic problems. The key point lies in solving a dual problem and using guaranteed equilibrated estimators for the primal and the dual problems, computed using flux and potential reconstructions. In all cases, we prove that the goal-oriented error can be split into a fully computable estimator and a remainder term that can be bounded above by computable energy-based estimators. We present some numerical tests to underline the capability of this goal-oriented estimator in different contexts: reaction-diffusion problems, the heat equation, and the harmonic formulation of eddy-current problems.

09.12.2024 15:00 David Hien:
Cycling Signatures: Analyzing Recurrence using Algebraic Topology
MI 03.06.011 (Boltzmannstr. 3, 85748 Garching)

Nonlinear dynamical systems often exhibit rich and complicated recurrent dynamics. Understanding these dynamics is challenging, particularly in higher dimensions where visualization is limited. Additionally, in many applications, time series data is all that is available. In this talk, I present an algebraic topological tool to identify elementary oscillations and the transitions between them. More precisely, we introduce the cycling signature which is constructed by taking persistent homology of time series segments in a suitable ambient space. Oscillations in a time series can then be identified by analyzing the cycling signatures of its segments.
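The geometric idea behind detecting oscillations topologically can be sketched in a few lines: after a delay embedding, an oscillation in a scalar time series becomes a loop, which first homology can then detect. The embedding below (a sine wave, a quarter-period delay) is a purely illustrative assumption; the talk's cycling signatures use persistent homology of time series segments in a suitable ambient space, which in practice would be computed with a TDA library:

```python
import numpy as np

# A scalar oscillation...
t = np.arange(0, 20, 0.05)
u = np.sin(t)

# ...becomes a loop after a 2-d delay embedding with delay ~ quarter period.
tau = 31
pts = np.column_stack([u[:-tau], u[tau:]])

# The embedded points lie near the unit circle, i.e. they trace a closed
# loop -- exactly the kind of feature first homology (H1) picks up.
r = np.linalg.norm(pts, axis=1)
```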

09.12.2024 16:30 Orphée Collin:
TBA
BC1 2.01.10 (8101.02.110) (Parkring 11, 85748 Garching-Hochbrück)

TBA

11.12.2024 12:15 Han Xiao (Rutgers University, Piscataway, NJ):
t.b.a.
8101.02.110 / BC1 2.01.10 (Parkring 11, 85748 Garching)

t.b.a.

12.12.2024 16:30 Toan Nguyen (Pennsylvania State University):
Landau damping
A 027 (Theresienstr. 39, 80333 München)

A question of great interest in plasma physics is whether excited charged particles in a non-equilibrium state will relax to neutrality or transition to a nontrivial coherent state. Due to the long-range interaction between particles, the self-consistently generated electric field oscillates in time and disperses in space like a Klein-Gordon wave, known in the physics literature as plasma oscillations or Langmuir's oscillatory waves. In his original work, Landau addressed the decay of such an electric field, namely the energy exchange between the oscillatory electric field and the charged particles, in a linearized setting. This talk will provide an overview of the recent mathematical advances in the nonlinear setting. The talk should be accessible to graduate students and a general audience.

___________________________________

Invited by Prof. Phan Thành Nam

16.12.2024 14:00 Probability Colloquium Augsburg-Munich in Munich, LMU:
TBA
TBA (Theresienstr. 39, 80333 München)

18.12.2024 16:00 Mario Kieburg (Melbourne):
Random Matrices and their Impact on our Life
MI HS 3 (MI 00.06.011) (Boltzmannstr. 3, 85748 Garching)

Random matrices are amazing mathematical objects, as they combine various mathematical fields, from linear algebra to probability, differential geometry, group theory, and beyond. Due to this richness, it is not surprising that they are versatile tools employed in Engineering, Physics, Statistics, and many other areas. Modern applications can be found in machine learning, quantum information, and wireless telecommunications. In my presentation, I will show you the basic concepts of random matrices, what kinds of objects are usually studied, and which important theorems have been proven. Towards the end, I will turn to the relation between Harmonic Analysis and Random Matrix Theory and how it has helped to solve some contemporary problems.
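One of the basic results alluded to here, Wigner's semicircle law, can be seen numerically in a few lines. The matrix size and scaling below are illustrative choices for a GOE-like ensemble:

```python
import numpy as np

# Sample a large symmetric random matrix and look at its spectrum.
rng = np.random.default_rng(1)
n = 1000
A = rng.normal(size=(n, n))
H = (A + A.T) / np.sqrt(2 * n)   # symmetrised and scaled

eig = np.linalg.eigvalsh(H)
# Wigner's semicircle law: as n grows, the eigenvalue density converges to
# (1 / (2 * pi)) * sqrt(4 - x**2) supported on [-2, 2].
```

A histogram of `eig` already matches the semicircle closely at this size.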

19.12.2024 16:30 Felix Dietrich (TUM):
Machine Learning with Data-Driven Random Feature Models
A 027 (Theresienstr. 39, 80333 München)

The success of machine learning algorithms is in large part due to advances in optimization. Most deep learning models involving artificial neural networks are trained on large data sets using versions of stochastic gradient descent. After many iterations, and several passes through the total data set, the network parameters are (hopefully) close to a minimum of the loss function. The iterative nature and the large number of hyperparameters make this optimization procedure challenging in practice.

In my talk, we discuss a sampling scheme for a specific, training data-dependent probability distribution of the parameters of feed-forward neural networks that removes the need for iterative updates of the hidden parameters. After they have been chosen at random from the constructed distribution, only a single linear problem must be solved to obtain a fully trained network. Such networks fall in the class of random feature models, but their hidden parameters now depend on the training data. They are provably dense in the continuous functions, and have a convergence rate in the number of neurons that is independent of the input dimension. Using sampled neurons as basis functions in an ansatz allows us to effectively construct models for regression and classification tasks, create recurrent networks, construct neural operators, and solve partial differential equations. In computational experiments, the sampling scheme outperforms iterative, gradient-based optimization by several orders of magnitude in both training speed and accuracy. We will discuss benefits and drawbacks of the approach, as well as future directions regarding new network architectures.
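The "single linear problem" structure of random feature models can be sketched as follows. Note that this sketch draws the hidden parameters from a plain, data-independent Gaussian (the classical random feature setting); the data-dependent distribution that is the subject of the talk is not reproduced here, and the regression target and all sizes are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
m = 300                                  # number of random neurons
W = rng.normal(0, 2.0, m)                # hidden weights: drawn once, then frozen
b = rng.uniform(-np.pi, np.pi, m)        # hidden biases: drawn once, then frozen

def features(x):
    """Hidden-layer activations: one tanh neuron per random (weight, bias)."""
    return np.tanh(np.outer(x, W) + b)

x_train = np.linspace(-np.pi, np.pi, 500)
y_train = np.sin(x_train)                # illustrative regression target

# With the hidden parameters fixed, "training" reduces to a single
# linear least-squares solve for the readout coefficients.
Phi = features(x_train)
c, *_ = np.linalg.lstsq(Phi, y_train, rcond=None)

y_hat = Phi @ c                          # fitted values on the training grid
```

No gradient descent appears anywhere: the only optimization is the `lstsq` call.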

___________________________________

Invited by Prof. Johannes Maly.