
07.09.2022 12:15 Marco Scutari (Polo Universitario Lugano, Switzerland):
Bayesian Network Models for Continuous-Time and Structured Data
Online and on site (Parkring 11, 85748 Garching)

Bayesian networks (BNs) are a versatile and powerful tool to model complex phenomena and the interplay of their components in a probabilistically principled way. Moving beyond the comparatively simple case of completely observed, static data, which has received the most attention in the literature, I will discuss how BNs can be extended to model continuous-time data and data in which observations are not independent and identically distributed.

For the former, I will discuss continuous-time BNs. For the latter, I will show how mixed effects models can be integrated with BNs to get the best of both worlds.
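For readers unfamiliar with the fully observed, static baseline the talk moves beyond, the following sketch fits a linear Gaussian BN by node-wise least squares. The three-node structure, the simulated data, and the plain-numpy implementation are illustrative assumptions, not the speaker's methodology; the point is only that each local distribution is a linear regression on its parents, so the network log-likelihood and BIC decompose node by node.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate data from a linear Gaussian BN with (assumed) structure X -> Y -> Z.
n = 500
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=0.5, size=n)
z = -1.0 * y + rng.normal(scale=0.5, size=n)
data = {"X": x, "Y": y, "Z": z}
parents = {"X": [], "Y": ["X"], "Z": ["Y"]}   # fixed DAG, assumed known here

def fit_node(child, pars):
    """Least-squares fit of one linear Gaussian local distribution."""
    target = data[child]
    design = np.column_stack([np.ones(n)] + [data[p] for p in pars])
    coef, _, _, _ = np.linalg.lstsq(design, target, rcond=None)
    resid = target - design @ coef
    sigma2 = resid @ resid / n
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return coef, sigma2, loglik

total_loglik = 0.0
for node, pars in parents.items():
    coef, sigma2, ll = fit_node(node, pars)
    total_loglik += ll
    print(f"{node} | parents {pars}: coefficients {np.round(coef, 2)}, sigma^2 {sigma2:.2f}")

# BIC of the whole network: sum of local log-likelihoods minus a complexity penalty.
n_params = sum(len(p) + 2 for p in parents.values())   # intercept, slopes, variance per node
print("network BIC:", total_loglik - 0.5 * n_params * np.log(n))
```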

14.09.2022 12:00 Leena C. Vankadara (University of Tübingen):
Is Memorization Compatible with Causal Learning? The Case of High-Dimensional Linear Regression
Online and on site: BC1 2.01.10 (Parkring 11, 85748 Garching)

Deep learning models exhibit a rather curious phenomenon. They optimize over hugely complex model classes and are often trained to memorize the training data. This is seemingly contradictory to classical statistical wisdom, which suggests avoiding interpolation in favor of reducing the complexity of the prediction rules. A large body of recent work partially resolves this contradiction. It suggests that interpolation does not necessarily harm statistical generalization and may even be necessary for optimal statistical generalization in some settings. This is, however, an incomplete picture. In modern ML, we care about more than building good statistical models. We want to learn models which are reliable and have good causal implications. Under a simple linear model in high dimensions, we will discuss the role of interpolation and its counterpart --- regularization --- in learning better causal models.
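To make the interpolation-versus-regularization contrast concrete, here is a small self-contained simulation in the n < p linear regression setting; the dimensions, noise level, and ridge penalty are arbitrary choices, not values from the talk. The minimum-norm interpolator memorizes the training data exactly, ridge regression does not, and the two can differ in how well they recover the true coefficient vector, the quantity with a causal reading.

```python
import numpy as np

rng = np.random.default_rng(1)

# High-dimensional linear model: more parameters (p) than samples (n).
n, p = 50, 200
beta_true = np.zeros(p)
beta_true[:5] = 1.0                      # only a few truly non-zero effects
X = rng.normal(size=(n, p))
y = X @ beta_true + 0.5 * rng.normal(size=n)

# Minimum-norm interpolator: fits the training data exactly (memorization).
beta_interp = np.linalg.pinv(X) @ y

# Ridge regression: trades exact training fit for shrinkage.
lam = 5.0
beta_ridge = np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)

for name, b in [("min-norm interpolator", beta_interp), ("ridge", beta_ridge)]:
    train_err = np.mean((y - X @ b) ** 2)
    param_err = np.linalg.norm(b - beta_true)
    print(f"{name:>22}: train MSE {train_err:.3f}, ||beta_hat - beta_true|| {param_err:.3f}")
```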

14.09.2022 13:45 Johannes Lederer (Ruhr-University Bochum):
Sparse Deep Learning
Online and on site: BC1 2.01.10 (Parkring 11, 85748 Garching)

Sparsity is popular in statistics and machine learning, because it can avoid overfitting, speed up computations, and facilitate interpretations. In deep learning, however, the full potential of sparsity still needs to be explored. This presentation first recaps sparsity in the framework of high-dimensional statistics and then introduces sparsity-inducing methods and corresponding theory for modern deep-learning pipelines.
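As a small illustration of the high-dimensional statistics recap mentioned above (not of the deep-learning methods in the talk), the sketch below runs ISTA, i.e. proximal gradient descent with soft-thresholding, for an l1-penalized linear model; the simulated data and the penalty level are made-up assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

# Sparse high-dimensional regression problem.
n, p, s = 100, 300, 5
beta_true = np.zeros(p)
beta_true[:s] = rng.normal(size=s) + 2.0
X = rng.normal(size=(n, p))
y = X @ beta_true + 0.1 * rng.normal(size=n)

def soft_threshold(v, t):
    """Proximal operator of the l1 norm: shrink entries towards zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

# ISTA: gradient step on the squared loss, then soft-thresholding.
lam = 0.1 * np.max(np.abs(X.T @ y))          # penalty level (heuristic choice)
step = 1.0 / np.linalg.norm(X, ord=2) ** 2   # 1 / Lipschitz constant of the gradient
beta = np.zeros(p)
for _ in range(500):
    grad = X.T @ (X @ beta - y)
    beta = soft_threshold(beta - step * grad, step * lam)

print("non-zero coefficients:", np.count_nonzero(beta), "out of", p)
print("estimated support (first few indices):", np.flatnonzero(beta)[:10])
```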

14.09.2022 15:00 Michaël Lalancette (University of Toronto, CAN):
Estimation of bivariate and spatial tail models under asymptotic dependence and independence
Online and on site: BC1 2.01.10 (Parkring 11, 85748 Garching)

Multivariate extreme value theory mostly focuses on asymptotic dependence, where the probability of observing a large value in one of the variables is of the same order as that of observing a large value in all variables simultaneously. There is growing evidence, however, that asymptotic independence prevails in many data sets. Available statistical methodology in this setting is scarce and not well understood theoretically. We revisit non-parametric estimation of bivariate tail dependence and introduce rank-based M-estimators for parametric models that may include both asymptotic dependence and asymptotic independence, without requiring prior knowledge of which of the two regimes applies. We further show how the method can be leveraged to obtain parametric estimators in spatial tail models. All the estimators are proved to be asymptotically normal under minimal regularity conditions. The methodology is illustrated through an application to extreme rainfall data.
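A minimal sketch of the rank-based, non-parametric starting point mentioned in the abstract: transform each margin to pseudo-observations via ranks and compute the empirical coefficient chi(u) = P(U > u | V > u) at high thresholds u. The Gaussian-copula data below are an assumption, chosen because they are dependent yet asymptotically independent, so chi(u) should decay as u approaches 1.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulate bivariate data that are dependent but asymptotically independent
# (a Gaussian copula with correlation 0.7).
n = 10_000
cov = np.array([[1.0, 0.7], [0.7, 1.0]])
x, y = rng.multivariate_normal(np.zeros(2), cov, size=n).T

def pseudo_obs(v):
    """Rank-based transform of a sample to approximately Uniform(0, 1) margins."""
    ranks = np.argsort(np.argsort(v)) + 1
    return ranks / (len(v) + 1)

u_x, u_y = pseudo_obs(x), pseudo_obs(y)

# Empirical tail dependence coefficient chi(u) = P(U > u | V > u).
for u in (0.90, 0.95, 0.99):
    chi_u = np.mean((u_x > u) & (u_y > u)) / (1 - u)
    print(f"u = {u:.2f}: chi(u) ~ {chi_u:.3f}")
```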

22.09.2022 18:00 Jürgen Richter-Gebert (TUM), Christian Liedtke (TUM):
Die 7 größten Abenteuer der Mathematik - Die Hodge Vermutung
Bayerische Akademie der Wissenschaften (Alfons-Goppel-Str. 11, 80539 München)

As part of the nationwide series “Die 7 größten Abenteuer der Mathematik”, the event “Die Hodge Vermutung” will take place on 22 September at 18:00 in the plenary hall of the Bayerische Akademie der Wissenschaften. The series, organized by the Junge Akademie of the Leopoldina, the Deutsche Forschungsgemeinschaft, and the Deutsche Mathematiker-Vereinigung, is aimed at a general, scientifically interested audience and offers insights into the Millennium Prize Problems of mathematics: seven problems, each carrying a prize of one million US dollars for its solution.

The event, hosted by the Department of Mathematics of TU München, features two talks as well as an exhibition of posters and models on the theme of great mathematical problems and the Hodge conjecture.

Admission is free. Further information and abstracts: https://www.ma.tum.de/hodge