Abstract. In the first part of the talk, we discuss convex risk measures with weak optimal transport penalties. We show that these risk measures allow for an explicit representation via a nonlinear transform of the loss function. We also discuss computational aspects related to the nonlinear transform as well as approximations of the risk measures using, for example, neural networks. Our setup comprises a variety of examples, such as classical optimal transport penalties, parametric families of models, divergence risk measures, uncertainty on path spaces, moment constraints, and martingale constraints. In the second part, we focus on classical transport penalties and consider a suitable multi-stage iteration of these risk measures. This multi-stage iteration naturally defines a dynamically consistent risk measure which takes into account uncertainty around the increments of an underlying Lévy process. Using direct arguments, we show that passing to the limit in the number of iterations yields a convex monotone semigroup, and we obtain explicit convergence rates for the approximation. Finally, we associate this semigroup with the solution to a drift control problem.
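As a schematic illustration (not part of the abstract itself), risk measures with transport-type penalties are typically written in the penalized dual form below, where $\hat\mu$ is a reference model, $\mathcal{W}$ a (weak) optimal transport cost, and $f$ the loss function; the exact setting of the talk may differ.

\[
\rho(f) \;=\; \sup_{\mu \in \mathcal{P}(X)} \left( \int_X f \, \mathrm{d}\mu \;-\; \mathcal{W}(\hat\mu, \mu) \right),
\]

so that models far from $\hat\mu$ in transport distance are penalized, and the explicit representation mentioned above replaces the supremum over measures by a pointwise nonlinear transform of $f$.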