Hmm, the announcement on X does state explicitly that it wasn’t reinforcement learning. But judging by this tweet, the president of the Brain Preservation Foundation certainly thinks it was misleading. Not sure what the real story is at the moment.
So first off, I want to say that Ken Hayworth is one of the scientists I respect most in the world, and he cares a lot about precision in language and making sure that nothing gets overstated. I’ve actually asked him to be heavily involved in Nectome’s certification process, and I think his careful approach will bring a lot of rigor to that. I do think his tone with Michael was a little harsh here, and he’s erring on the side of judging a Twitter one-liner like it’s a scientific paper.
I helped found Eon and am currently one of their advisors, and I think the fruit fly upload situation is one of those things that’s like this comic:
Now, I’m actually thrilled to talk about the Flywire situation, because while I think there’s been some miscommunication about it, due to the standard science hype cycle and the way Twitter is, the object-level facts are a really cool result that you guys will appreciate.
The Eon simulation is what I’d describe as a “partial upload”, using leaky integrate-and-fire neurons. It’s built on Philip Shiu’s work in fruit fly brain modeling (https://www.nature.com/articles/s41586-024-07763-9). Philip has been part of Eon for about a year now. The work in the tweet shows off how Eon took Philip’s model and added a body and environment to make it more embodied. Check out Alex’s description here: https://x.com/alexwg/status/2030217301929132323.

Is this a full fruit fly upload? No. It’s not simulating the neurons in the fruit fly’s body directly (because we don’t have them); instead, it’s looking at the brain, reading out approximately which way the brain wants to move the body, and then puppeteering the simulated body in that direction. So the simulated brain is controlling the body, but in more of a “prosthetic” sense, or like how a person controls a character in a video game. The simulated brain is also getting visual information from the simulated environment, so when it turns left and there’s a thing there, the pattern of information going into its simulated eyes changes appropriately. The brain simulation is very simple and incomplete compared to how the fruit fly brain works in real life, but it still reproduces many interesting behaviors in spite of all the simplifications. I’ve run the simulation on my laptop in an earlier form; I can go into more details in a future post if people are interested.
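To make the “prosthetic” control loop concrete, here’s a minimal sketch of how a connectome-derived LIF simulation might be wired to a puppeteered body. Everything here is hypothetical (toy network size, random stand-in weights, made-up neuron index sets), not Eon’s actual code; it just illustrates the loop: sensory input in, LIF dynamics, a coarse steering readout, and a body that follows the decoded command rather than simulated muscles.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 1000                      # toy network size (the real fly connectome has ~130k neurons)
# Sparse random stand-in for connectome-derived synaptic weights.
W = rng.normal(0, 0.1, (N, N)) * (rng.random((N, N)) < 0.01)
v = np.zeros(N)               # membrane potentials
TAU, V_TH = 20.0, 1.0         # leak time constant (in steps), spike threshold

# Hypothetical index sets: which neurons receive visual input / encode steering.
visual_in = np.arange(0, 100)
steer_left, steer_right = np.arange(100, 150), np.arange(150, 200)

heading = 0.0                 # puppeteered body state

for step in range(100):
    # 1. Sensory input: the simulated eyes see the environment (stubbed as random drive).
    I = np.zeros(N)
    I[visual_in] = rng.random(visual_in.size)

    # 2. LIF update: reset spiking neurons, then apply leak + synaptic input + drive.
    spikes = v >= V_TH
    v[spikes] = 0.0
    v += (-v / TAU) + W @ spikes.astype(float) + I

    # 3. Read out which way the brain "wants" to turn, and puppeteer the body.
    drive = spikes[steer_left].sum() - spikes[steer_right].sum()
    heading += 0.01 * drive   # the body follows the decoded command, not motor neurons

print(f"final heading: {heading:.3f}")
```

The key point the sketch makes is in step 3: the body is driven by a decoded summary of brain activity, not by simulated motor neurons actuating simulated muscles, which is why “prosthetic” or “video game controller” is the right mental model.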
I think Eon / Phil’s work is a really cool result, and the fact that it works at all is very impressive to me. It could have been the case that when we scanned connectomes and simulated them with very simple mathematical models, they didn’t do anything even remotely resembling actual animal behavior. If that had been the case, I’d be slightly more inclined to say that there’s important and subtle chemical information, not captured by electron microscopy, that you need in addition to a connectome to get an upload working. It wouldn’t move me on brain preservation working by very much, because almost all proteins are preserved by aldehydes, but it would move me somewhat on how easy I think full uploading in Ken’s sense will be. I take the recent partial fruit fly uploading work as weak evidence that brains are going to be fairly easy to simulate, and that much of the necessary detail is inferable from the geometry of the connectome.
Over the course of the next couple of blog posts, I’d like to provide all of you with some solid resources for evaluating what we’re doing at Nectome, and let you form your own judgements from there.
Happy to answer any further questions about Flywire. I do think it’s a really awesome result, and I was pleased that the Eon team put together that video. If you guys are interested, we could even incorporate a more in-depth discussion of the project as a post in our sequence here.
I did a shallow investigation of this and my conclusions were:
The text on EON’s frontpage, Michael Andregg’s tweet thread, and Alex Wissner-Gross’ post seem pretty misleading in practice while this tweet from Kenneth Hayworth is more representative of the situation. I think EON’s blog post on this is not misleading while the front page of their website is.
EON / Michael Andregg / Alex Wissner-Gross mostly don’t make straightforwardly false claims, but nonetheless much of their communication (though not all of it!) is (predictably) misleading.
I think the 91%/95% accuracy claims are pretty misleading, though I don’t have the expertise to confidently adjudicate this.
The EON front page says “No hand-coded behaviors. Just brain structure producing brain function.” In fact, the walking gaits and grooming leg motions come from NeuroMechFly motion primitives. Thus I think this is basically false rather than merely being misleading.
The post from Alex Wissner-Gross is generally less misleading, but contains some pretty misleading text (“If a fly brain can now close the sensorimotor loop in simulation, the question for the mouse becomes one of scale, not of kind.”, “Watch the video closely. What you are seeing is not an animation. It is not a reinforcement learning policy mimicking biology. It is a copy of a biological brain, wired neuron-to-neuron from electron microscopy data, running in simulation, making a body move. The ghost is no longer in the machine. The machine is becoming the ghost.”).
I also think their prognosis about the situation and implicit predictions seem wrong, but this is more like a normal disagreement.
This isn’t representative of much progress towards uploading. I would be impressed by non-overfit versions of: “you took an actual living fruit fly, had it learn a behavior that’s among the most sophisticated behaviors a fruit fly can learn, then you uploaded it, and the upload displayed the same behavior”. I would also be somewhat impressed by the same thing for C. elegans (more impressed than I am by this demo).
The 2024 Nature paper this is based on isn’t particularly misleading/exaggerated, and this demo doesn’t represent a significant advancement on that paper. I would recommend reading the paper (or asking an AI questions about it, etc.) to people interested in how impressive this demo is.
The claim “I think the fruit fly upload situation is one of those things that’s like this comic” doesn’t seem that accurate to me given the communications mentioned above. This would be true if we were just talking about the 2024 nature paper and the EON blog post, but we aren’t. There are caveats in the original communication that aren’t being picked up by third parties, but this isn’t the only important thing going on.
This next part is more speculative and involves psychologizing (generally a risky thing to do). It’s based in part on private info and on talking with Michael Andregg once:
I do think that EON / Michael are likely pitching this reasonably hard, in somewhat misleading ways, to various groups.
I’d guess that Michael’s epistemics about what’s going on and how close we are to various targets aren’t great and this probably applies to EON as a whole to some extent. I do find EON’s blog post reassuring with respect to EON’s epistemics.
It’s common for startups to behave like this and have bad epistemics in this sort of way. My understanding is that this doesn’t typically significantly undermine the ability of startups to achieve their goals. Thus, it’s not clear that having not-great epistemics (insofar as my speculation about their epistemics is true) would make EON that much less effective at achieving their goals, and it’s worth noting that their apparent epistemics are probably better than those of a typical startup. That said, it might result in third parties being systematically misled about how close EON is to achieving their goals. If I were considering working at or investing in EON, I would certainly take this into account when deciding how promising/tractable current work in brain emulation is, but conditional on my views about the tractability/promisingness of the field, I would likely consider EON a pretty reasonable bet within that field.
I frankly think calling the Eon video any sort of “upload” is quite misleading and exaggerated. There are at least two fundamental reasons for this:
@Aurelia, as you, Ken, and even Eon (later on in their blog post) correctly point out, this was a leaky integrate-and-fire (LIF) model built from the fly connectome. So it’s not even close to the full brain of the fruit fly: no neurotransmitters, no synaptic weights, no synaptic dynamics from the fruit fly. We are not even faithfully simulating its brain in silico.
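For readers unfamiliar with the term: a leaky integrate-and-fire model collapses each neuron into a single voltage variable, which is exactly why the criticism above has force. Here is a minimal sketch of the dynamics (parameters are illustrative, not taken from the Shiu et al. model):

```python
def lif_step(v, i_syn, dt=1.0, tau=20.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """One Euler step of a leaky integrate-and-fire neuron.

    dv/dt = (v_rest - v) / tau + i_syn   (leak toward rest plus synaptic drive)

    Returns (new_voltage, spiked). All the chemistry the comment above mentions
    (neurotransmitters, receptor kinetics, synaptic dynamics) is collapsed into
    the single scalar input i_syn.
    """
    v = v + dt * ((v_rest - v) / tau + i_syn)
    if v >= v_th:
        return v_reset, True
    return v, False

# Constant suprathreshold drive eventually pushes the neuron over threshold.
v, fired_at = 0.0, None
for t in range(200):
    v, spiked = lif_step(v, i_syn=0.06)
    if spiked:
        fired_at = t
        break
print("first spike at step", fired_at)
```

A model this simple has one state variable per neuron and a handful of fixed parameters, which is the sense in which it is “not even close to the full brain of the fruit fly”.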
Not only is the central nervous system not a true upload, but the motor system isn’t either. The mapping between this LIF model and motor outputs is a hard-coded policy (not even imitation-learned via RL, though they and others do this later on), built from fly behaviors recorded by the NeuroMechFly team at EPFL. So the LIF model they use is neither necessary nor sufficient for the generated behavior: the fly policy can walk on its own without any additional inputs, as the EPFL team already demonstrated.
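To illustrate the shape of this criticism: if the motor side works by selecting among pre-built motion primitives, then the brain readout is acting as a switch over canned behaviors rather than driving muscles. A hypothetical sketch (the primitive names, traces, and readout variables are all made up for illustration, not NeuroMechFly’s actual interface):

```python
# Hypothetical illustration: a coarse brain readout selects among pre-recorded
# motion primitives rather than actuating the body directly.
PRIMITIVES = {
    "walk_forward": [0.0, 0.1, 0.2],   # stand-ins for recorded joint-angle traces
    "turn_left":    [0.1, 0.0, -0.1],
    "groom":        [-0.2, 0.0, 0.2],
}

def select_primitive(left_drive, right_drive, groom_drive):
    """Map coarse readouts from the LIF model onto a canned behavior."""
    scores = {
        "turn_left": left_drive - right_drive,
        "groom": groom_drive,
        "walk_forward": 0.0,            # default when nothing else dominates
    }
    return max(scores, key=scores.get)

assert select_primitive(0.9, 0.1, 0.0) == "turn_left"
assert select_primitive(0.1, 0.1, 0.8) == "groom"
print(select_primitive(0.1, 0.3, -0.5))  # right drive dominates -> walk_forward
```

Under this architecture the gait itself lives entirely in the primitive library, which is why the primitives can walk on their own with the brain model removed.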
Thank you for trying to explain this!
Does that mean that the brain itself was simulated, using only info from the brain scan, while other techniques were used for simulating the body and environment?
Was it a specific fruit fly, or a simulation created using data from multiple fruit flies?