Research Statement

Information processing systems in the real world must compute reliably under a diversity of stimulus transformations; in vision alone, these include changes in viewpoint, lighting, and appearance. To date, most artificial learning systems achieve robustness primarily through increased scale of data and parameters rather than through ingrained structure. Natural systems, embedded in a world with physical constraints, have no such luxury: they must generalize systematically despite finite data, finite energy, and finite time.

In learning theory, such data efficiency is governed by inductive biases: a priori constraints that restrict the space of solutions a system can represent (Wolpert, 1996). In artificial neural networks, many of the most powerful inductive biases arise from symmetry and geometry: when a model is constrained to respect the abstract structure of transformations in its inputs, it can generalize far beyond its training distribution with dramatically fewer examples.

The canonical case is the convolutional layer, itself inspired by biology, which builds translation symmetry into vision models (Fukushima, 1980). Correspondingly, modern neuroscience continues to reveal geometric structure in biological computation and connectivity. From topographic maps and toroidal grid codes, to ring attractor circuits and low-dimensional population manifolds, geometric structure appears to be a central design principle of natural intelligence (Zhang, 1996; Churchland et al., 2012; Gardner et al., 2022).
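The symmetry built into a convolutional layer can be stated as an equivariance identity: convolving a shifted input gives the same result as shifting the convolved output. A minimal numerical sketch of this, using plain NumPy and a circular convolution so that wrap-around boundaries make the identity exact (the function names here are illustrative, not from any particular library):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=16)   # 1-D input signal
w = rng.normal(size=5)    # convolution filter

def circ_conv(x, w):
    # Circular (wrap-around) convolution: output[i] = sum_k w[k] * x[(i+k) mod n].
    # The circular boundary keeps translation equivariance exact.
    n = len(x)
    return np.array([sum(w[k] * x[(i + k) % n] for k in range(len(w)))
                     for i in range(n)])

def shift(v, s):
    # Cyclic translation of a signal by s positions.
    return np.roll(v, s)

# Equivariance: convolve-then-shift equals shift-then-convolve.
lhs = circ_conv(shift(x, 3), w)
rhs = shift(circ_conv(x, w), 3)
assert np.allclose(lhs, rhs)
```

Because the same filter weights are applied at every position, the layer cannot represent a function that treats two translated copies of a pattern differently; that restriction is exactly the inductive bias at work.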

My research investigates the hypothesis that generalizable symmetric and geometric inductive biases are fundamental to natural intelligence, and seeks to discover the computational primitives that implement them.

In particular, my research to date falls into three themes:

  1. formalizing the notion of time-parameterized symmetries unique to recurrent computation,
  2. evaluating the computational implications of natural spatiotemporal dynamics, and
  3. modeling how natural systems leverage the underlying low-dimensional geometry of high-dimensional data.

Time-Parameterized Symmetries

Modeling Spatiotemporal Neural Dynamics

Learning Latent Symmetries