today on preprint discourse:
bsky—arxiv: brain vs DINOv3, has pretty charts: “...compare the activation of DINOv3, a SOTA self-supervised computer vision model trained on natural images, to the activations of the human brain in response to the same images using both fMRI (naturalscenesdataset.org) and MEG (openneuro.org/datasets/ds0...)”
bsky—philxiv: “...difficult theory-choice situations, where division of cognitive labor is needed. Network epistemology models suggest that reducing connectivity is needed to prevent premature convergence on bad theories”
bsky—arxiv: apparently claims “diffusion models don’t learn score function”
bsky—arxiv: “adversarial collaboration testing IIT & predictive processing theories of consciousness”
I posted this to procrastinate sleep and haven’t opened any of these papers. this has been your (ir)regular Suggestion From Lauren to find a good “new papers” feed that is relatively raw (i.e., probably not a “new papers I liked” substack), in case something inspires you even if you don’t look closer
see also top arxiv—metascience
also https://en.wikipedia.org/wiki/Wikipedia:Signs_of_AI_writing is interesting