My view on this is even gloomier than the standard LW view. If alignment fails, we’re screwed. If alignment succeeds, most likely it’ll be alignment to power and money, so most of us are also screwed. The only way to not get screwed is to build AI that’s aligned to goodness instead of power and money, but nobody’s working on this.