The problem of simultaneously estimating multiple means from independent samples has a long history in statistics, from the seminal works of Stein and Robbins in the 1950s, through Efron and Morris in the 1970s, up to the present day. This setting can also be seen as an (extremely stylized) instance of the "personalized federated learning" problem, where each user has their own data and target (the mean of their personal distribution) but may want to share relevant information with "similar" users, even though no information is available a priori about which users are "similar". In this talk I will concentrate on contributions to the high-dimensional case, where the samples and their means belong to R^d with "large" d.

We consider a weighted aggregation scheme of the empirical means of the individual samples and study the possible improvement in quadratic risk over the simple empirical means. To bring the stylized problem closer to challenges encountered in practice, we allow (a) full heterogeneity of sample sizes, (b) no a priori knowledge of the structure of the mean vectors, and (c) unknown and possibly heterogeneous sample covariances.

We focus on the role of the effective dimension of the data from a "dimensional asymptotics" point of view, highlighting that the risk improvement of the proposed method satisfies an oracle inequality approaching an adaptive (minimax, in a suitable sense) improvement as the effective dimension grows large.

(This is joint work with Jean-Baptiste Fermanian and Hannah Marienwald.)
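
To fix ideas, here is a toy numerical sketch of the kind of weighted aggregation of empirical means discussed above. It is purely illustrative: the neighbourhood rule, the weights, and the threshold `tau` below are assumptions of this sketch, not the scheme actually analyzed in the talk; for simplicity all users share the same mean and have identity covariance.

```python
import numpy as np

# Toy illustration (not the authors' actual method): each "user" b has n_b
# i.i.d. samples in R^d with identity covariance; we try to improve each
# empirical mean by a weighted average with the means of "nearby" users.
rng = np.random.default_rng(0)
d, B = 200, 10                                     # dimension, number of users
true_means = np.tile(rng.normal(size=d), (B, 1))   # all users share a mean here
n = rng.integers(20, 100, size=B)                  # heterogeneous sample sizes
# Empirical means: true mean plus noise of scale 1/sqrt(n_b) per coordinate.
emp = np.array([true_means[b] + rng.normal(size=d) / np.sqrt(n[b])
                for b in range(B)])

def aggregate(emp, n, tau):
    """Average each empirical mean with neighbours within squared distance tau,
    weighting each neighbour by its sample size (a hypothetical weight choice)."""
    out = np.empty_like(emp)
    for b in range(emp.shape[0]):
        d2 = np.sum((emp - emp[b]) ** 2, axis=1)   # squared distances to user b
        w = (d2 <= tau).astype(float) * n          # include neighbours, weight by n_j
        out[b] = (w[:, None] * emp).sum(axis=0) / w.sum()
    return out

# tau chosen generously here so that all (truly identical) means are pooled.
agg = aggregate(emp, n, tau=6.0 * d / n.min())
risk_emp = np.mean(np.sum((emp - true_means) ** 2, axis=1))
risk_agg = np.mean(np.sum((agg - true_means) ** 2, axis=1))
# When the means coincide, pooling sharply reduces the quadratic risk.
```

In this favourable scenario (identical means) the aggregated estimator behaves like a pooled mean and its risk drops by roughly the ratio of individual to total sample size; the substance of the talk is how much of this gain survives when nothing about the similarity structure is known in advance.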