Relative Representations for Model-to-Brain Mappings

Relative representations are a method for mapping points (such as the green circle) from a high-dimensional space (left) to a lower-dimensional space (right) by representing them in a new coordinate system defined relative to a select set of anchor points (red and blue stars). In this work we apply the idea of relative representations to model-brain mappings and show that it improves interpretability and computational efficiency. Surprisingly, model-brain RSA scores remain roughly consistent even with as few as 10 randomly selected anchor points (10 dimensions), compared to the original thousands of dimensions.
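
Below is a minimal sketch of the idea, assuming anchors are compared via cosine similarity and using a simple correlation-based RSA score; the array shapes and the `relative_rep` / `rsa_score` helpers are illustrative stand-ins, not the paper's actual pipeline.

```python
import numpy as np

def relative_rep(X, anchors):
    """Re-express each row of X by its cosine similarity to a set of anchor rows."""
    Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
    An = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return Xn @ An.T  # shape: (n_stimuli, n_anchors)

def rsa_score(X, Y):
    """Simple RSA: correlation between the upper triangles of two correlation-distance RDMs."""
    def rdm(Z):
        D = 1.0 - np.corrcoef(Z)
        return D[np.triu_indices_from(D, k=1)]
    return np.corrcoef(rdm(X), rdm(Y))[0, 1]

rng = np.random.default_rng(0)
model_acts = rng.standard_normal((200, 2048))   # hypothetical model activations (200 stimuli)
brain_acts = rng.standard_normal((200, 5000))   # hypothetical voxel responses to the same stimuli

# Use the same 10 randomly chosen stimuli as anchors in both spaces,
# reducing each space to a 10-dimensional relative representation.
idx = rng.choice(200, size=10, replace=False)
model_rel = relative_rep(model_acts, model_acts[idx])
brain_rel = relative_rep(brain_acts, brain_acts[idx])

print(rsa_score(model_acts, brain_acts), rsa_score(model_rel, brain_rel))
```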

Flow Factorized Representation Learning

Illustration of our flow factorized representation learning: at each point in the latent space we have a distinct set of tangent directions \(\nabla u^k\), which define different transformations we would like to model in the image space. For each path, the latent sample evolves to the target on the potential landscape following dynamic optimal transport.
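
As a rough illustration of the kind of flow being described (not the learned model itself), the sketch below evolves a latent point along the gradient field \(\nabla u^k\) of one of several toy potentials; the quadratic potentials, targets, and Euler integration are assumptions made purely for demonstration.

```python
import numpy as np

# Toy potentials u^k(z): each defines a different transformation via its gradient field.
# Here u^k(z) = -||z - target_k||^2 / 2, i.e. simple quadratics centred at different targets.
targets = np.array([[ 3.0,  0.0],
                    [ 0.0,  3.0],
                    [-3.0, -3.0]])

def grad_u(z, k):
    """Tangent direction (gradient of the k-th toy potential) at latent point z."""
    return targets[k] - z

def traverse(z0, k, n_steps=50, dt=0.1):
    """Evolve a latent sample along the flow dz/dt = grad u^k(z) with Euler steps."""
    path, z = [z0], z0.copy()
    for _ in range(n_steps):
        z = z + dt * grad_u(z, k)
        path.append(z.copy())
    return np.stack(path)

path = traverse(np.zeros(2), k=1)
print(path[0], path[-1])   # the sample flows toward the k-th target on the potential landscape
```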

Traveling Waves Encode the Recent Past and Enhance Sequence Learning

Illustration of three input signals (top) and a corresponding wave-field with induced traveling waves (bottom). From an instantaneous snapshot of the wave-field at each timestep, we are able to decode both the time of onset and the input channel of each input spike. Furthermore, subsequent spikes in the same channel do not overwrite one another.
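
A toy version of this encoding, assuming an idealized lossless wave-field in which each channel's spikes propagate rightward one position per timestep (the actual model uses a learned recurrent wave-field); the spike times and decoding printout below are hypothetical.

```python
import numpy as np

n_channels, width, T = 3, 20, 15
field = np.zeros((n_channels, width))      # wave-field: one spatial row per input channel

# Hypothetical spike trains as (timestep, channel) pairs; channel 0 spikes twice.
spikes = [(2, 0), (5, 1), (9, 0), (11, 2)]

for t in range(T):
    field[:, 1:] = field[:, :-1]           # idealized rightward traveling wave (1 position / step)
    field[:, 0] = 0.0
    for (ts, ch) in spikes:
        if ts == t:
            field[ch, 0] = 1.0             # inject the spike at the wave source of its channel

# Decode from the final snapshot alone: the position of each peak gives the elapsed time since
# onset, and the row gives the channel; the two spikes in channel 0 occupy different positions.
chans, positions = np.nonzero(field)
for ch, pos in zip(chans, positions):
    print(f"channel {ch}: spike occurred {pos} steps ago (onset t = {T - 1 - pos})")
```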

DUET -- 2D Structured and Approximately Equivariant Representations

Visualization of the DUET framework. The backbone \(f\) yields a 2D representation for each transformed image \(f(\tau_g(\mathbf{x}))\) (e.g. \(\tau_g\) is a rotation by \(g\) degrees). The group marginal is obtained as the softmax (sm) of the sum of the rows and is compared to the prescribed target (red) with our group loss \(L_G\). The content is obtained by summing the columns and is contrasted (\(L_C\)) with the other view through a projection head \(h\). The final representation for downstream tasks is the 2D one, which has been optimized through its marginals.
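
A small sketch of how the two marginals could be read off a 2D representation, assuming rows index content dimensions and columns index group bins; the axis convention, shapes, and the random `rep` tensor are illustrative only.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Hypothetical 2D representation from the backbone f for one transformed view,
# with rows indexing content dimensions and columns indexing group elements.
rep = np.random.default_rng(0).standard_normal((128, 16))   # (content dims, group bins)

group_marginal = softmax(rep.sum(axis=0))  # sum over rows -> distribution over 16 group bins (for L_G)
content = rep.sum(axis=1)                  # sum over columns -> 128-d content vector fed to h (for L_C)

print(group_marginal.shape, content.shape)  # (16,), (128,)
```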

Latent Traversals in Generative Models as Potential Flows

Comparison of latent traversals found with our method against state-of-the-art baselines (WarpedSpace and SeFa). We see that prior work tends to conflate multiple semantic concepts simultaneously due to the enforced linearity of the transformations. In our work, the inherently non-linear nature of the potential flow transformations more accurately disentangles semantically separate transformations.
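
To make the linear-versus-non-linear contrast concrete, here is a toy comparison assuming a hypothetical scalar potential \(\phi\) with an analytic gradient; it is not the learned potential from the paper, only an illustration of why a flow direction re-evaluated at every point can curve where a fixed linear direction cannot.

```python
import numpy as np

def linear_traversal(z, d, alpha):
    # Baseline-style edit: the same direction d is applied regardless of where z is.
    return z + alpha * d

def potential_flow_traversal(z, grad_phi, n_steps=20, dt=0.1):
    # Potential-flow-style edit: the direction grad phi(z) is re-evaluated at every point,
    # so the traversal path can bend and adapt to the local latent geometry.
    for _ in range(n_steps):
        z = z + dt * grad_phi(z)
    return z

# Hypothetical non-linear potential phi(z) = sin(z_0) + cos(z_1), with analytic gradient.
grad_phi = lambda z: np.array([np.cos(z[0]), -np.sin(z[1])])

z = np.array([0.5, 1.0])
print(linear_traversal(z, d=np.array([1.0, 0.0]), alpha=1.0))
print(potential_flow_traversal(z, grad_phi))
```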

Locally Coupled Oscillatory Recurrent Networks Learn Topographic Organization

Measured orientation selectivity of neurons, color-coded by the bars on the left. We see that our LocoRNN's simulated cortical sheet learns selectivity reminiscent of the orientation columns observed in the macaque primary visual cortex (source: Principles of Neural Science, E. Kandel, J. Schwartz, T. Jessell, S. Siegelbaum, & A. Hudspeth, 2013).
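
For context, a common way such orientation-preference maps are derived is the circular vector-sum at doubled angle over responses to oriented stimuli; the sketch below uses random stand-in responses and is an assumption about the general procedure, not the paper's measurement code.

```python
import numpy as np

# Hypothetical responses of a grid of simulated units to gratings at several orientations.
orientations = np.linspace(0, np.pi, 8, endpoint=False)       # 8 test orientations in [0, pi)
responses = np.random.default_rng(0).random((32, 32, 8))      # (sheet height, sheet width, orientation)

# Preferred orientation per unit via the circular (vector-sum) average at doubled angle,
# which is what orientation maps like the one in the figure are typically color-coded by.
vec = (responses * np.exp(2j * orientations)).sum(axis=-1)
preferred = np.mod(np.angle(vec) / 2, np.pi)                   # preferred orientation in [0, pi)
selectivity = np.abs(vec) / responses.sum(axis=-1)             # 0 = unselective, 1 = perfectly tuned

print(preferred.shape, selectivity.min(), selectivity.max())
```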

Neural Wave Machines

Observed transformation (left), latent variable waves (middle), and reconstruction (right). We see that the Neural Wave Machine learns to encode the observed transformations as traveling waves. In our paper, we show that such coordinated synchronous dynamics ultimately result in improved forecasting ability and efficiency when similarly modeling smooth, continuous transformations as input.
