I’m confused—didn’t OP just say they don’t expect nanotechnology to be solvable, even with AGI? If so, then you seem to be assuming the crux in your question…
To clarify, I do think that creating nanobots is solvable. But that is only one piece: building the factories, producing designs that kill humans, deploying those nanobots, and doing all of it without raising any alarms and at close-to-zero risk is, in my opinion, impossible.
I want to remark that people keep using the nanotechnology argument totally uncritically, as if it were the magical solution that lets an AGI take over the world in two weeks. They are not really considering the gears inside that part of the model.
If OP doesn’t think nanotech is solvable in principle, I’m not sure where to take the conversation, since we already have an existence proof (i.e., biology). If instead they object to specific nanotech capabilities that aren’t present in existing nanotech but also aren’t ruled out by the laws of physics, that objection requires a justification.