03.11.2025 15:00 Kathryn Lindsey:
Functional Redundancy in ReLU Neural Networks (online)

The parameterized function classes used in modern deep learning are highly redundant, meaning that many different parameter values can correspond to the same function. These redundancies, or parameter space symmetries, shape the geometry of the loss landscape and thereby govern optimization dynamics, generalization behavior, and computational efficiency. Focusing on fully connected multilayer perceptrons (MLPs) with ReLU activations, I will explain how the degree of this redundancy varies in highly inhomogeneous ways across parameter space. I will describe how this structure influences the topology of loss level sets and discuss its implications for optimization dynamics and model identifiability. Finally, I will present experimental evidence suggesting that the functional dimension of a network tracks the intrinsic complexity of the learning task.
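The abstract itself contains no code; as a purely illustrative aside, the short NumPy sketch below (the helper names mlp_forward and rescale_neuron are hypothetical) demonstrates one elementary instance of the redundancy the abstract refers to: because ReLU is positively homogeneous, scaling a hidden unit's incoming weights and bias by c > 0 and its outgoing weights by 1/c gives a different parameter vector that realizes exactly the same function.

```python
import numpy as np

def mlp_forward(x, W1, b1, W2, b2):
    # x: (batch, d_in) -> (batch, d_out); one ReLU hidden layer
    h = np.maximum(x @ W1.T + b1, 0.0)
    return h @ W2.T + b2

def rescale_neuron(W1, b1, W2, i, c):
    # Scale hidden unit i's incoming weights/bias by c > 0 and its
    # outgoing weights by 1/c; since ReLU(c*z) = c*ReLU(z) for c > 0,
    # the network computes the same function with different parameters.
    W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
    W1s[i, :] *= c
    b1s[i] *= c
    W2s[:, i] /= c
    return W1s, b1s, W2s

rng = np.random.default_rng(0)
d_in, d_hidden, d_out = 4, 8, 3
W1 = rng.standard_normal((d_hidden, d_in))
b1 = rng.standard_normal(d_hidden)
W2 = rng.standard_normal((d_out, d_hidden))
b2 = rng.standard_normal(d_out)

# A second, distinct parameter vector obtained by rescaling hidden unit 2
W1s, b1s, W2s = rescale_neuron(W1, b1, W2, i=2, c=5.0)

# Both parameter vectors realize the same function (up to float error)
x = rng.standard_normal((100, d_in))
diff = mlp_forward(x, W1, b1, W2, b2) - mlp_forward(x, W1s, b1s, W2s, b2)
print(np.max(np.abs(diff)))  # ~1e-15
```

Permuting the hidden units is another function-preserving reparameterization of the same kind; such symmetries are examples of the redundancy whose spatially varying degree the talk addresses.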