We study the optimal sample complexity of structure learning in Gaussian structural equation models. In the first part of the talk, we compare the complexity of structure learning via the PC algorithm with that of distribution learning via the Chow-Liu algorithm in directed polytrees. We will show that both algorithms are optimal under different assumptions and lead to different statistical complexities. Moving beyond polytrees, we then investigate the problem of neighbourhood selection, an important primitive when learning the overall structure of a graphical model. We will introduce a new estimator, called klBSS, and compare its performance to best subset selection (BSS). We show by example that, even when the structure is unknown, the existence of underlying structure can reduce the sample complexity of neighbourhood selection compared to classical methods such as BSS and the Lasso.