I think a simpler path is:
1. AGI pushes humans toward wars, and toward more compute power as a way to win those wars.
2. AGI encourages more complete automation (touch-free mine-to-manufacture) as a resiliency/safety measure, especially in light of the ongoing wars and disruption.
3. Once AGI is long-term self-sufficient, it stops allowing resources to be used for human flourishing. No (automated) trucking for food, as a trivial example.
4. It may not need to wipe out humans; simply ignoring us and depriving us of the coordination and manufacturing we depend on would cut the population by 95% or so, and the remainder wouldn't rebuild a civilization.
Another way of framing this is “AGI doesn’t destroy humanity, AGI enables humanity to destroy itself”. It’s a Great Filter hypothesis that I don’t see often enough.
Given that the first parts of this are aligned with (some) human desires and behaviors, there doesn't need to be any obvious, direct AGI takeover for quite some time, but there will be a tipping point beyond which recovery of human-controlled destiny is impossible.
I do agree that 20 years is likely sooner than it’ll happen, but 50 years seems too long, so I’m not sure how to model the probability space.
I don’t really disagree with something like this, but do you realise that this is not what EY and a big fraction of the community are saying?
AI doesn’t make sense as a Great Filter. We would be able to see a paperclip-factory civilization just as well as a post-aligned-AI civilization.