Aspects of Information Geometry and Efficiency for Kronecker Covariances
BC1 2.01.10 / 8101.02.110 (Parkring 11, 85748 Garching)

The Kronecker covariance structure for array data posits that the covariances along comparable modes of an array, such as rows and columns, are similar. For example, when modelling a multivariate time series, it might be assumed that each individual series follows the same AR process, up to changes in scale, while at each particular timepoint the observations across series share the same correlation structure. Over and above being a plausible model for many types of data, the Kronecker covariance assumption is especially useful in high-dimensional settings, where unconstrained covariance matrix estimates are typically unstable. In this talk we explore information-geometric aspects of the estimation of Kronecker covariance matrices. The asymptotic properties of two estimators, the maximum likelihood estimator and an estimator based on partial traces, are contrasted. It is shown that the partial trace estimator is inefficient, and its relative performance can be quantified in terms of a principal angle between tangent spaces. This principal angle can in turn be related to the eigenvalues of the underlying Kronecker covariance matrix. By defining a rescaled version of the partial trace operator, an asymptotically efficient correction to the partial trace estimator is proposed. This estimator has a closed-form expression and enjoys a useful equivariance property. An orthogonal parameterization of the collection of Kronecker covariances is then motivated by the rescaled partial trace estimator. Orthogonal parameterizations imply that the components of the parameterization are asymptotically independent, which in the Kronecker case has implications for tests concerning row and column covariances.
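To make the partial trace construction concrete, the following is a minimal NumPy sketch, not the talk's exact estimator: the block ordering (row-major vec, so the covariance is A ⊗ B with A the p × p row factor), the mean-centering, and the trace-based scale normalisation are all assumptions made here for illustration. Viewing a pq × pq covariance as a p × p grid of q × q blocks, the blockwise-trace map recovers A up to the scale tr(B), the sum of diagonal blocks recovers B up to the scale tr(A), and dividing their Kronecker product by the overall trace removes both scale factors at once.

```python
import numpy as np

def partial_traces(S, p, q):
    """Two partial traces of a (p*q) x (p*q) matrix S, viewed as a
    p x p grid of q x q blocks (so S = A kron B gives block (i, j) = A[i, j] * B)."""
    blocks = S.reshape(p, q, p, q)            # blocks[i, :, j, :] is block (i, j)
    T1 = np.trace(blocks, axis1=1, axis2=3)   # p x p: trace of each block -> A * tr(B)
    T2 = np.einsum('iaib->ab', blocks)        # q x q: sum of diagonal blocks -> tr(A) * B
    return T1, T2

def ptrace_estimator(X):
    """Plug-in Kronecker covariance estimate from samples X of shape (n, p, q).

    Since T1 kron T2 = tr(A) tr(B) (A kron B) = tr(S) (A kron B), dividing by
    tr(S) resolves the scale indeterminacy between the two factors."""
    n, p, q = X.shape
    V = X.reshape(n, p * q)                   # row-major vec of each sample
    V = V - V.mean(axis=0)
    S = V.T @ V / n                           # sample covariance of the vec'd data
    T1, T2 = partial_traces(S, p, q)
    return np.kron(T1, T2) / np.trace(S)
```

When S is exactly Kronecker, S = A ⊗ B, the construction is exact: T1 = tr(B) A, T2 = tr(A) B, and the rescaled product returns S itself; applied to a noisy sample covariance it projects onto the Kronecker structure, which is the sense in which it serves as a simple closed-form competitor to the MLE.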