Research Statement
Information processing systems in the real world must compute reliably under a diversity of stimulus transformations; in vision alone, these include changes in viewpoint, lighting, and appearance. To date, most artificial learning systems achieve robustness primarily through increased scale of data and parameters, rather than through ingrained structure. Natural systems, embedded in a world with physical constraints, have no such luxury; they must generalize systematically despite finite data, finite energy, and finite time.
In learning theory, such data efficiency is governed by inductive biases: a priori constraints that restrict the space of solutions a system can represent (Wolpert, 1996). In artificial neural networks, many of the most powerful inductive biases arise from symmetry and geometry: when a model is constrained to respect the abstract structure of transformations in its inputs, it can generalize predictably far beyond its training distribution, with dramatically fewer examples.
The canonical example is the convolutional layer, itself inspired by biology, which builds translation symmetry into vision models (Fukushima, 1980). In parallel, modern neuroscience continues to reveal geometric structure in biological computation and connectivity: from topographic maps and toroidal grid codes to ring attractor circuits and low-dimensional population manifolds, geometry appears to be a central design principle of natural intelligence (Zhang, 1996; Churchland et al., 2012; Gardner et al., 2022).
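To make the translation case concrete, here is a minimal sketch (illustrative only; the one-dimensional setting, circular boundary condition, and all variable names are simplifying assumptions of mine, not details from the works cited): shifting the input and then convolving yields the same result as convolving and then shifting.

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.standard_normal(32)   # 1-D input signal
    k = rng.standard_normal(5)    # convolution kernel

    def circ_conv(signal, kernel):
        # Circular (wrap-around) convolution, so translations commute exactly.
        n = len(signal)
        return np.array([
            sum(kernel[j] * signal[(i + j) % n] for j in range(len(kernel)))
            for i in range(n)
        ])

    def shift(v, s):
        # Translate the signal by s samples, wrapping at the boundary.
        return np.roll(v, s)

    # Equivariance: transform-then-compute equals compute-then-transform.
    assert np.allclose(circ_conv(shift(x, 3), k), shift(circ_conv(x, k), 3))

Because this constraint holds for every shift at once, the layer never has to re-learn the same pattern at each position, which is precisely the source of the data efficiency described above.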
My research investigates the hypothesis that generalizable symmetric and geometric inductive biases are fundamental to natural intelligence, and seeks to discover the computational primitives that implement them.
My recent research falls into three themes:
- formalizing time-parameterized symmetries unique to recurrent computation (sketched just after this list);
- evaluating the computational implications of natural spatiotemporal dynamics; and
- modeling how natural systems leverage the underlying low-dimensional geometry of high-dimensional data.
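The first theme admits a compact summary, given here as a hedged sketch: the notation is mine and deliberately simplified, not drawn verbatim from the papers listed below. Static equivariance constrains a feedforward map f under a fixed group element g; a time-parameterized symmetry instead lets the element flow along a generator xi as g_t = exp(t xi), and asks a recurrent update h_{t+1} = F(h_t, x_t) to carry the transformation forward through time:

    f(g \cdot x) = g \cdot f(x)
        \quad \text{(static equivariance, fixed } g \in G\text{)}

    F\big(g_t \cdot h_t,\; g_t \cdot x_t\big) = g_{t+1} \cdot h_{t+1},
        \quad g_t = \exp(t\,\xi) \quad \text{(time-parameterized)}

In words: if the input stream is transformed by a steadily advancing symmetry, such as a constant-velocity translation of a visual scene, an equivariant recurrent network's hidden state follows a correspondingly transformed trajectory.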
Time-Parameterized Symmetries
- ICLR '26 Under Review H. Lillemark, B. Huang, F. Zhan, Y. Du, and T. A. Keller (2026). Flow Equivariant World Modeling for Partially Observed Dynamic Environments. In: International Conference on Learning Representations (ICLR). Under review.
- NeurIPS '25 Spotlight (Top 13%) T. A. Keller (2025). Flow Equivariant Recurrent Neural Networks. In: Advances in Neural Information Processing Systems (NeurIPS). Spotlight presentation, Top 13%.
- Nat. Comms. Under Review T. A. Keller, L. Muller, T. J. Sejnowski, and M. Welling (2024). A Spacetime Perspective on Dynamical Computation in Neural Information Processing Systems. arXiv: 2409.13669 [q-bio.NC]. Under review.
Modeling Spatiotemporal Neural Dynamics
- NeurIPS '25 Y. Song, T. A. Keller, S. Brodjian, T. Miyato, Y. Yue, P. Perona, and M. Welling (2025). Kuramoto Orientation Diffusion Models. In: Advances in Neural Information Processing Systems (NeurIPS).
- CCN '25 Oral (Top 7%) M. Jacobs, R. C. Budzinski, L. Muller, D. E. Ba, and T. A. Keller (2025). Traveling Waves Integrate Spatial Information Through Time. In: Conference on Cognitive Computational Neuroscience (CCN). Oral presentation, Top 7%.
- COSYNE '25 Abstract T. A. Keller (2025). Nu-Wave State Space Models: Traveling Waves as a Biologically Plausible Context. In: Computational and Systems Neuroscience (COSYNE) Abstracts. Science Communications Worldwide. doi: 10.57736/b30b-8eed.
- PNAS '25 L. H. B. Liboni, R. C. Budzinski, A. N. Busch, S. Löwe, T. A. Keller, M. Welling, and L. E. Muller (2025). Image segmentation with traveling waves in an exactly solvable recurrent neural network. In: Proceedings of the National Academy of Sciences (PNAS) 122.1, e2321319121. doi: 10.1073/pnas.2321319121.
- ICLR '24 T. A. Keller, L. Muller, T. J. Sejnowski, and M. Welling (2024). Traveling Waves Encode the Recent Past and Enhance Sequence Learning. In: International Conference on Learning Representations (ICLR).
- ICML '23 T. A. Keller and M. Welling (2023). Neural Wave Machines: Learning Spatiotemporally Structured Representations with Locally Coupled Oscillatory Recurrent Neural Networks. In: Proceedings of the 40th International Conference on Machine Learning (ICML). vol. 202. Proceedings of Machine Learning Research, pp. 16168–16189.
- COSYNE '22 Abstract T. A. Keller and M. Welling (2022). Locally Coupled Oscillator Networks Learn Traveling Waves and Topographic Organization.
- SVRHM '21 Best Paper T. A. Keller, Q. Gao, and M. Welling (2021). Modeling Category-Selective Cortical Regions with Topographic Variational Autoencoders. In: Shared Visual Representations in Humans and Machines (SVRHM) Workshop @ NeurIPS. Best Paper Award.
Learning Latent Symmetries
- Springer '25 Book Y. Song, T. A. Keller, N. Sebe, and M. Welling (May 2025). Structured Representation Learning. Synthesis Lectures on Computer Vision. Cham, Switzerland: Springer International Publishing.
- CCN '25 Poster Y. Song, T. A. Keller, Y. Yue, P. Perona, and M. Welling (2025). Langevin Flows for Modeling Neural Latent Dynamics. In: Conference on Cognitive Computational Neuroscience (CCN). arXiv: 2507.11531 [cs.LG].
- TPAMI '24 Journal Y. Song, T. A. Keller, Y. Yue, P. Perona, and M. Welling (2024). Unsupervised Representation Learning from Sparse Transformation Analysis. In: IEEE Transactions on Pattern Analysis and Machine Intelligence. arXiv: 2410.05564 [cs.LG].
- NeurIPS '23 Y. Song, T. A. Keller, N. Sebe, and M. Welling (2023). Flow Factorized Representation Learning. In: Advances in Neural Information Processing Systems (NeurIPS). vol. 36. Curran Associates, Inc., pp. 49761–49782.
- ICCVW '21 Oral T. A. Keller and M. Welling (2021). Predictive Coding with Topographic Variational Autoencoders. In: IEEE/CVF International Conference on Computer Vision Workshops (ICCVW). Oral presentation, pp. 1086–1091. doi: 10.1109/ICCVW54120.2021.00127.
- NeurIPS '21 T. A. Keller and M. Welling (2021). Topographic VAEs Learn Equivariant Capsules. In: Advances in Neural Information Processing Systems (NeurIPS). vol. 34. Curran Associates, Inc., pp. 28585–28597.