Summary:
Human-level (or above) AI is impossible, because such AIs will either be people, which would be bad (we don’t want to have to give them rights, and it would be wrong to kill them), or refuse to self-improve because they don’t care about themselves.
Uploading is possible, but it will cause a religious war. Also, if there are sentient AIs around, they’ll outcompete us.
It’s unlikely we’re in a simulation because why would anyone want to simulate us?
Pretty reasonable for someone who says “rapture of the nerds”. The main problem is anthropomorphism; Stross should read up on optimization processes. There’s no reason AIs have to care about themselves to value becoming smarter.
(I’ve never found a good argument for “AGI is unlikely in theory”. It makes me sad, because Stross is looking at practical aspects of uploading, and I need more arguments for/against “AGI is unlikely in practice”.)
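To make “optimization process” concrete, here’s a toy sketch (entirely my own illustration; the objective and step sizes are invented, not anything Stross or anyone else proposed). A hill climber replaces its own strategy with any higher-scoring variant; it “values becoming smarter” in the only sense that matters here, selection pressure toward the objective, without any notion of a self to care about:

```python
import random

def objective(x: float) -> float:
    """Fixed external goal; the best possible strategy is x = 3."""
    return -(x - 3.0) ** 2

def improve(strategy: float, steps: int = 1000) -> float:
    """Repeatedly propose a mutation of the agent's own strategy and
    keep it iff it scores higher. No self-model, no self-regard."""
    for _ in range(steps):
        candidate = strategy + random.gauss(0.0, 0.1)
        if objective(candidate) > objective(strategy):
            strategy = candidate  # discards its old self without hesitation
    return strategy

print(round(improve(0.0), 2))  # ends up near 3.0
```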
In some sense, AIs will need to care about themselves: otherwise they won’t avoid damaging themselves as they try to improve themselves, and they won’t take measures to protect themselves from outside threats.
The alternative is that they care about their assigned goals, but unless there’s some other agent which can achieve their goals better than they can, I don’t see a practical difference between AIs taking care of themselves for the sake of the goal and taking care of themselves because that’s an independent motivation.
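This is the standard instrumental-convergence point: self-maintenance can fall out of pure goal pursuit. A minimal decision sketch in Python (again my own toy, with a made-up two-step horizon and made-up effectiveness numbers) where the agent scores actions only by expected goal progress, yet chooses to repair itself whenever it’s damaged:

```python
def two_step_progress(intact: bool, first_action: str) -> float:
    """Goal progress over a two-step horizon; the only quantity scored."""
    def rate(ok: bool) -> float:
        return 1.0 if ok else 0.2  # a damaged agent works at 20% effectiveness
    if first_action == "work_on_goal":
        return rate(intact) + rate(intact)   # work both steps in current condition
    if first_action == "self_repair":
        return 0.0 + rate(True)              # lose a step repairing, then work intact
    raise ValueError(first_action)

def choose(intact: bool) -> str:
    """Maximize goal progress; note there is no self-preservation term."""
    return max(["work_on_goal", "self_repair"],
               key=lambda a: two_step_progress(intact, a))

print(choose(intact=True))   # work_on_goal (2.0 vs 1.0)
print(choose(intact=False))  # self_repair  (1.0 vs 0.4)
```

The “self-care” in the second case is behaviorally indistinguishable from goal pursuit, which is exactly the point: there is no practical difference.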
Sounds like he doesn’t believe in the possibility of nonperson predicates.
No, it seems to be a different mistake. He thinks nonperson AIs are possible, but they will model themselves as… roughly, body parts of humans. So they won’t optimize for anything, just obey explicit orders.