Yeah, for sure #9 is a bit of an outlier in terms of stability; my general conception of it was something akin to "a disaster that puts us back in the Middle Ages". More broadly, I think of a lot of these as states which will dominate the next few centuries or the next millennium or so, rather than lasting forever, and I think that mostly justifies including "not technically fully stable states, but ones that will last for a while". It would be interesting to do some sort of analysis on how long we'd expect, e.g., an AI dictatorship to last, though.
I think that, practically speaking, differentiating 7/8/11 when you're actually in that world, or when planning for the future, is probably very hard? Misinformation is gonna be nuts in most of those places, but I felt the moral outcomes were varied enough that they deserved to be split up.
In terms of #6, I feel like in these worlds you're creating some sort of simulator-based ASI, which sometimes goes Bing Sydney on you and therefore can't reasonably be called "aligned", but which has human-enough motivations that it doesn't take over the world. There are presumably minds in mind-space which aren't aligned with humans and also don't want to take over the world, although I admit I don't give this high probability.