The main difference seems to be that you don’t explicitly mention strong global coordination to stop unaligned AI from arising. Is that something you also had in mind?
It’s more of a free variable—I could imagine the world turning out such that we don’t need very strong coordination (because the Quality of Life Improver AI could plausibly not sacrifice competitiveness), and I could also imagine the world turning out such that it’s really easy to build very powerful unaligned AI and we need strong global coordination to prevent it from happening.
I think the difference may just be in how we present it—you focus more on the global coordination part, whereas I focus more on the following-norms + improving-technology + quality-of-life part.
There’s also Will MacAskill and Toby Ord’s “the Long Reflection”.
Yeah I think that’s the same concept.