Relative Representations for Model-to-Brain Mappings
Current model-to-brain mappings are computed over thousands of features. These high-dimensional mappings are computationally expensive and often difficult to interpret, due in large part to the uncertainty surrounding the relationship between the inherent structures of the brain and model feature spaces. Relative representations are a recent innovation from the machine learning literature that allow one to translate a feature space into a new coordinate frame whose dimensions are defined by a few select 'anchor points' chosen directly from the original input embeddings themselves. In this work, we show that computing model-to-brain mappings over these new coordinate spaces yields brain-predictivity scores comparable to mappings computed over full feature spaces, but with far fewer dimensions. Furthermore, since these dimensions are effectively the similarity of known inputs to other known inputs, we can now better interpret the structure of our mappings with respect to these known inputs. Ultimately, we provide a proof-of-concept that demonstrates the flexibility and performance of these relative representations on a now-standard benchmark of high-level vision and firmly establishes them as a candidate model-to-brain mapping metric worthy of further exploration.
T. Anderson Keller, Talia Konkle, and Colin Conwell
Accepted at CCN 2024 (Poster)
Conference Abstract: https://2024.ccneuro.org/pdf/492_Paper_authored_AnchorEmbeddings_CCN2024_named.pdf
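To make the procedure concrete, here is a minimal sketch (not the authors' code) of a relative-representation model-to-brain mapping. Everything in it is an illustrative assumption: the data are synthetic stand-ins for model features and voxel responses, cosine similarity to the anchors is one common choice of relative coordinate, and ridge regression stands in for whatever mapping function the original work uses.

```python
# Minimal sketch of a relative-representation model-to-brain mapping.
# Synthetic data and hyperparameters are illustrative placeholders only.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_stimuli, n_features, n_voxels, n_anchors = 500, 2048, 100, 50

# Stand-ins for model embeddings and brain responses to the same stimuli.
model_features = rng.standard_normal((n_stimuli, n_features))
brain_responses = rng.standard_normal((n_stimuli, n_voxels))

# Choose anchor stimuli directly from the input set.
anchor_idx = rng.choice(n_stimuli, size=n_anchors, replace=False)
anchors = model_features[anchor_idx]


def relative_representation(features, anchors):
    """Cosine similarity of each feature vector to each anchor vector."""
    f = features / np.linalg.norm(features, axis=1, keepdims=True)
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return f @ a.T  # shape: (n_stimuli, n_anchors)


# Each stimulus is now described by its similarity to the anchor stimuli,
# giving a low-dimensional, interpretable coordinate frame.
relative_features = relative_representation(model_features, anchors)

# Fit the model-to-brain mapping over the relative space and score it with
# cross-validation (shown here for a single voxel for brevity).
mapping = Ridge(alpha=1.0)
scores = cross_val_score(mapping, relative_features, brain_responses[:, 0],
                         cv=5, scoring="r2")
print(f"mean cross-validated R^2: {scores.mean():.3f}")
```

In this sketch the mapping is fit over only n_anchors dimensions rather than the full feature dimensionality, and each regression weight corresponds to similarity with a specific, known anchor stimulus, which is the interpretability benefit the abstract describes.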