We investigate finite-time Lyapunov exponents (FTLEs), a measure of the exponential separation of input perturbations, in deep neural networks. Within the framework of neural ODEs, we demonstrate that FTLEs provide a powerful way to organize input-to-output mappings, enabling the comparison of distinct model architectures. We establish a direct connection between Lyapunov exponents and adversarial vulnerability, and we propose a novel training algorithm that improves robustness through FTLE regularization.
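For concreteness, one common convention (the precise normalization used in the body of the paper may differ) defines the maximal FTLE of an input-to-output map $F_T$, acting over depth or integration time $T$, at an input $x$ via the largest singular value of its Jacobian; the symbols $F_T$, $J_{F_T}$, and $\sigma_{\max}$ are illustrative notation introduced here:

\[
\lambda_T(x) \;=\; \frac{1}{T}\,\ln \sigma_{\max}\!\bigl(J_{F_T}(x)\bigr),
\]

where $J_{F_T}(x)$ denotes the Jacobian of $F_T$ at $x$. A positive $\lambda_T(x)$ indicates exponential amplification of small input perturbations, which is the sense in which large FTLEs connect to adversarial vulnerability.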