The Laplacian matrix of an undirected graph with positive edge weights encodes graph properties in matrix form. In this talk, we discuss how Laplacian matrices appear prominently in several applications in statistics and machine learning. Our interest in Laplacian matrices originates in graphical models for extremes. For Hüsler--Reiss distributions, which are regarded as an analogue of the Gaussian distribution in extreme value theory, Laplacian matrices characterize an extremal notion of multivariate total positivity of order 2 (MTP2). This leads to a consistent estimation procedure with a typically sparse graphical structure. Furthermore, the underlying convex optimization problem under Laplacian constraints admits a simple block descent algorithm, which we implemented in R. An active area of research in machine learning is Laplacian-constrained Gaussian graphical models. These models admit structure learning under various connectivity constraints, and several algorithms with different lasso-type penalties are available in the literature. A more surprising appearance of Laplacian matrices is in the design of discrete choice experiments: the Fisher information matrix of a discrete choice design is a Laplacian matrix, which gives rise to a new approach for learning locally D-optimal designs.
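As a minimal illustration of the opening claim (a Python/NumPy sketch on a small hypothetical graph, not the R implementation mentioned in the abstract), the Laplacian L = D - W of a weighted graph is symmetric and positive semi-definite, its rows sum to zero, and the multiplicity of the eigenvalue 0 equals the number of connected components:

```python
import numpy as np

# Hypothetical 4-node undirected graph: symmetric weight matrix with
# positive edge weights and zero diagonal (weights chosen for illustration).
W = np.array([
    [0.0, 2.0, 0.0, 1.0],
    [2.0, 0.0, 3.0, 0.0],
    [0.0, 3.0, 0.0, 4.0],
    [1.0, 0.0, 4.0, 0.0],
])

# Graph Laplacian: L = D - W, with D the diagonal matrix of weighted degrees.
L = np.diag(W.sum(axis=1)) - W

eigvals = np.linalg.eigvalsh(L)          # real spectrum, since L is symmetric

assert np.allclose(L @ np.ones(4), 0)    # rows sum to zero: 1 is in the kernel
assert np.all(eigvals > -1e-10)          # positive semi-definite
assert np.sum(np.abs(eigvals) < 1e-10) == 1  # one zero eigenvalue: graph is connected
```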