This kind of scenario seems pretty reasonable and likely, but I’m much more optimistic about it being morally valuable, mostly because I expect “grabbiness” to happen sooner, and to be driven by an AI that is itself morally valuable.