Last time I checked, we still lacked an actual AGI, or really any way of strongly optimizing the world to the extent we worry about in alignment.
This is a crux for me and a basis of much of my optimism about the problem: We already live in a world that is extremely optimized by engineering, and while it may seem like superintelligence would allow you to do things that ordinary humans cannot, that is far from a certainty.
The question for me is not whether “superintelligence would allow you to do things that ordinary humans cannot”, it is whether superintelligence would allow you—within a year or two—to know how to do things that ordinary humans might take 100 years to figure out how to do and defend against.
It’s no help at all to us if, 100 years later, we would have figured out how to effectively combat self-replicating factories, customized biological weapons, brain subversion, or any other combination of things that might actually be effective in the appendages of something far smarter than us. If the AI works out how to do it long before that, we’re in trouble.