I primarily mentioned it because I think people base their picture of ‘what the s-risk outcome is’ on a basically antialigned AGI. The post has ‘AI hell’ in the title, compares extreme suffering against extreme bliss, and calls s-risks more important than alignment (which I think makes sense to a reasonable degree if antialigned s-risk is likely, or if weaker dystopias make up a sizable portion of outcomes, but I don’t think makes sense given that I consider antialignment very unlikely and weak dystopias also overall unlikely).
The extrema argument is why I don’t think weak dystopias are likely: unless we succeed at alignment to a notable degree, the extremes of whatever values shake out are not something that keeps humans around for very long, so I don’t expect weaker dystopias to occur either.
I expect that most AIs aren’t going to value making a notable deliberate AI hell, whether out of the whole lightcone, 5% of it, or 0.01% of it. If we make an aligned AGI and then some other AGI says ‘I will simulate a bunch of humans in torment unless you give me a planet’, then I expect our aligned AGI uses a decision theory that doesn’t give in to dt-Threats, and so it doesn’t give in (and thus isn’t threatened in the first place, because the other AGI gains nothing from actually running the simulation).
So, while I do think weak dystopias have a noticeable chance of occurring, I consider them significantly unlikely overall. It grows more likely that we’ll end up in a weak dystopia as alignment progresses: if we manage to instill enough of a ‘caring about humans specifically’ (though I expect a lot of attempts like that to fall apart and produce weird extremes when they’re optimized over!), then that raises the chances of a weak dystopia.
However, I also believe that alignment is roughly the way to solve these problems: making notable progress on getting AGIs to avoid specific areas of outcome-space requires more alignment progress than we currently have.
There is also the class of problems where an unaligned AGI decides to simulate us to gain insight into humans, into evolved species, and into various related questions. That would most likely be bad, but I expect it to take up an insignificant portion of computation and not to run continuously for a really long length of time, so I don’t consider it a notable s-risk.
I’m also not sure that I consider the astronomical-suffering outcome (as it’s described in the paper) to be bad by itself.
If you have an absurd number of people and they experience some amount of suffering (e.g. it shakes out that humans prefer some degree of negative reinforcement among their possible experiences, so it remains), then that can be more suffering in total magnitude, but it has the benefits of being more diffuse (no one is broken by a short-term large amount of suffering) and of involving less extreme individual suffering.
Obviously it would be bad to have a world where astronomical suffering is concentrated on a large number of people, but that’s why I think a naive application of the astronomical-suffering criterion is incorrect: it ignores diffuse experiences, relative experiences (if 50% of people today experience notably bad suffering, then a large future civilization with only 0.01% of people experiencing notably bad suffering can still swamp today’s absolute numbers, though I believe the article mentions this), and more minor suffering adding up over long periods of time.
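To make the ‘relative experiences’ point concrete, here is a small arithmetic sketch. The population figures and suffering rates are purely illustrative assumptions (today’s population rounded to 8 billion, and a hypothetical future population of 10^15), not claims from the post:

```python
# All figures are illustrative assumptions, not data.
today_pop = 8e9       # roughly today's population
today_rate = 0.50     # 50% with notably bad suffering
future_pop = 1e15     # a hypothetical large future civilization
future_rate = 0.0001  # 0.01% with notably bad suffering

today_sufferers = today_pop * today_rate     # 4e9 people
future_sufferers = future_pop * future_rate  # 1e11 people

# Despite a 5000x lower *rate*, the future absolute count
# swamps today's by a factor of 25.
print(future_sufferers / today_sufferers)  # prints 25.0
```

So a per-person measure (rates, intensity, diffuseness) and an aggregate head-count measure can point in opposite directions, which is the ambiguity in a naive ‘astronomical suffering’ criterion.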
(I think some of this comes from framing things in terms of suffering versus happiness rather than negative utility versus positive utility, where zero is defined as ‘a universe filled with things we don’t care about’. You can have astronomical suffering that isn’t much negative utility because it is diffuse, lower in a relative sense, or less extreme, while ‘everyone is having a terrible time in this dystopia’ has both astronomical suffering and high negative utility.)