You will likely die, but probably not because of a nanotech holocaust initiated by a god-like machine superintelligence.
This I agree with and always assumed, but it is also largely irrelevant if the end conclusion is that AGI still destroys us all. To most people, I’d say, the specific method of death doesn’t matter as much as the substance. It’s a special kind of academic argument, one where we can endlessly debate precisely how the end will come about through making this thing, while we all mostly agree that this thing we are making, and that we could stop making, will likely end us all. Sane people (and civilizations) just… don’t make the deadly thing.
I haven’t gone through the numbers yet, so I’ll give it a try, but out of the box, it feels to me like your arguments about biology’s computational efficiency aren’t the end of it. I actually mentioned the topic as one possible point of interest here: https://www.lesswrong.com/posts/76n4pMcoDBTdXHTLY/ideas-for-studies-on-agi-risk. My impression is that biology can come up with some spectacularly efficient trade-offs, but only within the rules of biology. For example, biology can produce very fast animals with very good legs, but not rocket cars on wheels, because that would require way too many intermediate steps that are completely non-functional. All components also need to be generable from an embryo, self-maintaining, compatible with the standard sources of energy, and generally fit a bunch of other constraints that don’t necessarily apply to artificially made versions of them. Cameras are better than eyes; microphones are better than ears. Why wouldn’t computing hardware, eventually, be better than neurons? Not necessarily orders of magnitude better, but still a lot cheaper than entire server rooms. Even if you could just copy human brains one-to-one in size, energy usage, and efficiency, that alone would be plenty superhuman and disruptive.
I agree most with the “room at the bottom” aspect: I don’t think there’s really that much of it left. But first, I could be wrong (after all, it’s not like I could have come up with the whole DNA-and-enzymes machinery that evolution pulled off if I only knew basic organic chemistry, so who’s to say there aren’t even better machineries that could be invented if something smarter than me tried to optimize for them?), and second, I don’t think that’s necessary for doom either. So what’s the point of arguing?
Just don’t build the damn thing that kills us all. Not if it does so swiftly by nanomachines, and not if it does so slowly by replacing and outpricing us. Life isn’t supposed to be about a mindless pursuit of increased productivity; at least, that’s not what most of us find fun and pleasurable about it. Replacing humanity with a relentless maximizer, a machine-corporation that has surpassed the need for fleshy bits in its pursuit of some pointless goal, is about the saddest tomb we could possibly turn the Earth into.