There’s still a real puzzle about why Xi/Trump/CEOs can’t coordinate here after they realise what’s happening.
Maybe it’s unclear even to superintelligent AIs where this will lead, but in fact it leads to disempowerment. Or maybe the AIs aren’t aligned enough to tell us it’s bad for us.
I agree that having truthful, aligned AGI advisors might be sufficient to avoid coordination failures. But then again, why do current political leaders regularly appoint or listen to bad advisors? Steve Byrnes had a great list of examples of this pattern, which he calls “conservation of wisdom”.
I think this might be a case where ‘absolute disempowerment’ and ‘everyone dying’ come apart from ‘relative disempowerment’ and ‘we get a much worse future than we could have done’. It seems more plausible that AI advisors won’t sufficiently forewarn us about the latter.
Thanks for linking to that comment—great stuff.