Thanks for the references. I think I should also credit you with being the first to use “success story” the way I’m using it here, in connection with AI safety, which gave me the idea to write this post.
It’s not the same as your Interim Quality-of-Life Improver, but it’s got similar aspects.
The main difference seems to be that you don’t explicitly mention strong global coordination to stop unaligned AI from arising. Is that something you also had in mind? (I seem to recall someone talking about that in connection with this kind of scenario.)
It’s also related to the concept of a “Great Deliberation” where we stabilize the world and then figure out what we want to do. (I don’t have a reference for that though.)
There’s also Will MacAskill and Toby Ord’s “the Long Reflection” (which may be the same thing that you’re thinking of), which as far as I know isn’t written up in detail anywhere yet. However, I’m told that both of their upcoming books will have some discussion of it.
The main difference seems to be that you don’t explicitly mention strong global coordination to stop unaligned AI from arising. Is that something you also had in mind?
It’s more of a free variable—I could imagine the world turning out such that we don’t need very strong coordination (because the Quality of Life Improver AI could plausibly not sacrifice competitiveness), and I could also imagine the world turning out such that it’s really easy to build very powerful unaligned AI and we need strong global coordination to prevent it from happening.
I think the difference may just be in how we present it—you focus more on the global-coordination part, whereas I focus more on the norm-following + improving-technology + quality-of-life part.
There’s also Will MacAskill and Toby Ord’s “the Long Reflection”
Yeah I think that’s the same concept.