As promised, I reviewed and wrote up my thoughts on the research paper that Meta released yesterday:
Full review: Paper Review: TRImodal Brain Encoder for whole-brain fMRI response prediction (TRIBE)
I recommend checking out my review! I discuss some takeaways and there are interesting visuals from the paper and related papers.
However in quick take form, the TL;DR is:
Summary
Meta’s Brain & AI Research team won first place at Algonauts 2025 with TRIBE, a deep neural network trained to predict brain responses to stimuli across multiple modalities (text, audio, video), cortical areas (superior temporal lobe, ventral, and dorsal visual cortices), and individuals (four people).
The model is the first brain encoding pipeline that is simultaneously non-linear, multi-subject, and multi-modal.
The team shows that these features improve model performance (exemplified by their win!) and provide extra insights, including an improved neuroanatomical understanding.
Specifically, the model predicts blood-oxygen-level-dependent (BOLD) signals (a proxy for neural activity) in the brains of human participants exposed to video content: Friends, The Bourne Supremacy, Hidden Figures, The Wolf of Wall Street and Life (a BBC Nature documentary).