On the main point, I don’t think you can make those optimizations safely unless you really understand a huge amount of detail about what’s going on. Just being able to scan brains doesn’t give you any understanding, but at the same time it’s probably a prerequisite to getting a complete understanding. So you have to do the two more or less serially.
You might need help from superhuman AGI to even figure it out, and you might even have to be superhuman AGI to understand the result. Even if you don’t, it’s going to take you a long time, and the tests you’ll need to do if you want to “optimize stuff out” aren’t exactly risk free.
Basically, the more you deviate from just emulating the synapses you’ve found[1], and the more simplifications you let yourself make, the less it’s like an upload and the more it’s like a biology-inspired nonhuman AGI.
Also, I’m not so sure I see a reason to believe that those multicellular gadgets actually exist, except in the same way that you can find little motifs and subsystems that emerge, and even repeat, in plain old neural networks. If there are a vast number of them and they’re hard-coded, then you have to ask where. Your whole genome is only what, 4GB? Most of it is used for other stuff. And, from a developmental point of view, it seems a lot easier to code for minor variations on “build these 1000 gross functional areas, and within them more or less just have every cell send out dendrites all over the place and learn which connections work” than for “put a this machine here and a that machine there within this functional area”.
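As a rough sanity check on that genome number (assuming the standard ~3.2 billion base-pair figure, which isn’t stated in the comment itself):

```python
# Back-of-envelope: how much information can the genome carry?
# Assumes ~3.2e9 base pairs (standard estimate, not from the comment above).
base_pairs = 3.2e9
bits_per_base = 2            # A/C/G/T: 4 symbols = 2 bits each
info_bytes = base_pairs * bits_per_base / 8
naive_bytes = base_pairs     # one byte per base, as a plain text file stores it

print(f"information content: ~{info_bytes / 1e9:.1f} GB")   # ~0.8 GB
print(f"one byte per base:   ~{naive_bytes / 1e9:.1f} GB")   # ~3.2 GB
```

So the “4GB” figure is roughly the one-byte-per-base encoding; the actual information content is well under 1 GB, which only strengthens the point that there isn’t room to hard-code vast numbers of distinct gadgets.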
“Human brains have probably more than 1000 times as many synapses as current LLMs have weights.” → Can you elaborate? I thought the ratio was more like 100-200. (180-320T ÷ 1.7T)
I’m sorry; I was just plain off by a factor of 10 because apparently I can’t do even approximate division right.
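For the record, redoing the division with the figures quoted above (180–320T synapses, ~1.7T weights):

```python
# Ratio of human synapse count to current-LLM weight count,
# using the figures quoted in the parent comment.
synapses_low, synapses_high = 180e12, 320e12   # ~180-320 trillion synapses
llm_weights = 1.7e12                           # ~1.7 trillion weights

ratio_low = synapses_low / llm_weights
ratio_high = synapses_high / llm_weights
print(f"~{ratio_low:.0f}x to ~{ratio_high:.0f}x")   # ~106x to ~188x
```

Which indeed lands in the 100–200 range, not 1000.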
Humans can get injuries where they can’t move around or feel almost any of their body, and they sure aren’t happy about it, but they are neither insane nor unable to communicate.
A fair point, though with a few limitations. Not a lot of people are completely locked in with no high-bandwidth sensory experience, and I don’t think anybody’s quite sure what’s going on with the people who are. Vision and/or hearing are already going to be pretty hard to provide. But maybe not as hard as I’m making them out to be, if you’re willing to trace the connections all the way back to the sensory cells. Maybe you do just have to do the whole head. I am not gonna volunteer, though.
In the end, I’m still not buying that uploads have enough of a chance of being practical to run in a pre-FOOM timeframe to be worth spending time on, as well as being pretty pessimistic about anything produced by any number of uploaded-or-not “alignment researchers” actually having much of a real impact on outcomes anyway. And I’m still very worried about a bunch of issues about ethics and values of all concerned.
… and all of that’s assuming you could get the enormous resources to even try any of it.
By the way, I would have responded to these sooner, but apparently my algorithm for detecting them has bugs...
[1] … which may already be really hard to do correctly…