Combining Bayesian principles with the power of deep learning has long been an attractive direction of research, but its real-world impact has fallen short of its promise. Especially in the context of uncertainty estimation, there seem to be simpler methods that perform at least as well. In this talk, I want to argue that uncertainties are not the only reason to use Bayesian deep learning models: they also offer improved model selection and the incorporation of prior knowledge. I will showcase these benefits, supported by the results of two recent papers, and situate them in the context of current research trends in Bayesian deep learning.

Bio: Vincent Fortuin is a tenure-track research group leader at Helmholtz AI in Munich, leading the group for Efficient Learning and Probabilistic Inference for Science (ELPIS), and a faculty member at the Technical University of Munich. He is also a Branco Weiss Fellow, an ELLIS Scholar, a Fellow of the Konrad Zuse School of Excellence in Reliable AI, and a Senior Researcher at the Munich Center for Machine Learning. His research focuses on reliable and data-efficient AI approaches, leveraging Bayesian deep learning, deep generative modeling, meta-learning, and PAC-Bayesian theory. Before that, he did his PhD in Machine Learning at ETH Zürich and was a Research Fellow at the University of Cambridge. He is a regular reviewer and area chair for all major machine learning conferences, an action editor for TMLR, and a co-organizer of the Symposium on Advances in Approximate Bayesian Inference (AABI) and the ICBINB initiative.