His actual top objection is that even if we do manage to build a controlled and compliant ASI, that outcome is still extremely destabilizing at best and fatal at worst.
Michael Nielsen raises a valid concern here, one that should already have prompted many Alignment researchers to update their beliefs.

We currently don't know what a benevolent OR compliant ASI would look like, or how it may end up affecting humanity (and our future agency). Worse, I doubt we could even distinguish success from failure.