I did a shallow investigation of this and my conclusions were:
The text on EON’s frontpage, Michael Andregg’s tweet thread, and Alex Wissner-Gross’ post seem pretty misleading in practice while this tweet from Kenneth Hayworth is more representative of the situation. I think EON’s blog post on this is not misleading while the front page of their website is.
EON / Michael Andregg / Alex Wissner-Gross mostly don’t make straightforwardly false claims, but nonetheless much of their communication (though not all of it!) is (predictably) misleading.
I think the 91%/95% accuracy claims are pretty misleading, though I don’t have the expertise to confidently adjudicate this.
The EON front page says “No hand-coded behaviors. Just brain structure producing brain function.” In fact, the walking gaits and grooming leg motions come from NeuroMechFly motion primitives. Thus I think this claim is essentially false, not merely misleading.
The post from Alex Wissner-Gross is generally less misleading, but contains some pretty misleading text (“If a fly brain can now close the sensorimotor loop in simulation, the question for the mouse becomes one of scale, not of kind.”, “Watch the video closely. What you are seeing is not an animation. It is not a reinforcement learning policy mimicking biology. It is a copy of a biological brain, wired neuron-to-neuron from electron microscopy data, running in simulation, making a body move. The ghost is no longer in the machine. The machine is becoming the ghost.”).
I also think their prognosis about the situation and implicit predictions seem wrong, but this is more like a normal disagreement.
This isn’t representative of much progress towards uploading. I would be impressed by a non-overfit version of the following: “you took an actual living fruit fly, had it learn a behavior that’s among the most sophisticated behaviors a fruit fly can learn, then you uploaded it, and the upload displayed the same behavior”. I would also be somewhat impressed by the same thing for C. elegans (more impressed than I am by this demo).
The 2024 Nature paper this demo is based on isn’t particularly misleading or exaggerated, and the demo doesn’t represent a significant advance over that paper. For anyone interested in how impressive this demo is, I’d recommend reading the paper (or asking an AI questions about it).
The claim “I think the fruit fly upload situation is one of those things that’s like this comic” doesn’t seem that accurate to me given the communications mentioned above. It would be true if we were just talking about the 2024 Nature paper and the EON blog post, but we aren’t. There are caveats in the original communication that aren’t being picked up by third parties, but that isn’t the only important thing going on.
The following is more speculative and involves psychologizing (generally a risky thing to do). It is based in part on private info and on talking with Michael Andregg once:
I do think that EON / Michael are likely pitching this reasonably hard, in somewhat misleading ways, to various groups.
I’d guess that Michael’s epistemics about what’s going on and how close we are to various targets aren’t great, and this probably applies to EON as a whole to some extent. That said, I do find EON’s blog post reassuring with respect to EON’s epistemics.
It’s common for startups to engage in behavior like this and to have bad epistemics in this sort of way. My understanding is that this doesn’t typically undermine startups’ ability to achieve their goals much. Thus, it’s not clear that having not-great epistemics—insofar as my speculation about their epistemics is true—would make EON that much less effective at achieving their goals (and it’s worth noting that their apparent epistemics are probably better than those of a typical startup). That said, it might result in third parties being systematically misled about how close EON is to achieving their goals. If I were considering working at or investing in EON, I would certainly take this into account when deciding how promising/tractable current work in brain emulation is; but conditioning on views about the tractability/promisingness of the field, I would likely consider EON to be a pretty reasonable bet within that field.