What is the duration of P(doom)?
What do people mean by that metric? Is it x-risk for the century? Forever? For the next 10 years? Until we figure out AGI, or for the period after AGI on the road to superintelligence?
To me these are fundamentally different questions, because P(doom) forever must be much higher than P(doom) over the next 10-20 years. Or is the implication that surviving the next period means we have figured out alignment for good, for every next generation of AIs? It’s confusing.
It does seem likely to me that a large fraction of all “doom from unaligned AGI” comes relatively soon after the first AGI that is better at improving AGI than humans are. I tend to think of it as a question spanning several bundles of scenarios:
1. AGI is actually not something we can do. Even in timelines where we advance the technology for a long time, we only get systems that are not as smart as us in the ways that matter for control of the future. Alignment is irrelevant, and P(doom) is approximately 0.
2. Alignment turns out to be relatively easy and reliable. The only risk comes from AGI arriving before anyone has a chance to find the easy and safe solution. Where the first AGIs are aligned, they can quite safely self-improve and remain aligned. With their capabilities they can easily spot and deal with the few unaligned AGIs as they come up, before they become a problem. P(doom) is relatively low and stays low.
3. Alignment is difficult, but once you’ve solved it, it’s solved: you can scale the same principles up to any level of capability. P(doom by year X) rises higher than in scenario 2, due to the reduced chance of solving alignment before powerful AGI arrives, but then plateaus rapidly in the same way.
4. Alignment is both difficult and risky. AGIs that self-improve by orders of magnitude face new alignment problems, so the most highly capable AGIs are much more likely to be misaligned with humanity than less capable ones. P(doom by year X) keeps increasing for every year in which AGI plausibly exists, though the remaining probability mass shifts more and more heavily toward worlds in which civilization never develops AGI.
5. Alignment is essentially impossible. If we get superhuman AGIs at all, one of the earliest almost certainly kills everyone, one way or another. P(doom by year X) goes quickly toward 1 in every possible future in which AGI plausibly exists.
Only in scenario 4 do you see a steady increase in P(doom) over long time spans, and even in that bundle the remaining probability mass probably converges fairly rapidly toward timelines in which no AGI ever exists, for some reason or other.
This is why I think it’s meaningful to ask for P(doom) without a specified time span. If we somehow found out that scenario 4 was actually true, then it might be worth asking in more detail about time scales.
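To make the shapes of these bundles concrete, here is a minimal sketch (my own illustration, not the commenter’s model) that treats each bundle as a per-year hazard rate for doom and accumulates P(doom by year X); all of the hazard numbers are invented placeholders chosen purely to show plateau-versus-climb behavior.

```python
# Illustrative sketch only: the per-year hazard rates below are made-up
# placeholders, chosen to mimic the qualitative shapes of the scenario bundles.

def p_doom_by_year(hazards):
    """Cumulative P(doom by year X) from a sequence of per-year doom hazards."""
    survive = 1.0
    curve = []
    for h in hazards:
        survive *= (1.0 - h)          # probability of surviving this year as well
        curve.append(1.0 - survive)   # cumulative P(doom) so far
    return curve

YEARS = 50
scenarios = {
    # Scenarios 2-3: risk concentrated in the early transition, then near zero -> plateau.
    "alignment solvable":     [0.02 if y < 10 else 0.0005 for y in range(YEARS)],
    # Scenario 4: every capability jump brings fresh alignment risk -> keeps climbing.
    "difficult and risky":    [0.03] * YEARS,
    # Scenario 5: near-certain doom soon after AGI -> shoots toward 1.
    "essentially impossible": [0.30] * YEARS,
}

for name, hazards in scenarios.items():
    curve = p_doom_by_year(hazards)
    print(f"{name:24s} P(doom by yr 10) = {curve[9]:.2f}   by yr 50 = {curve[-1]:.2f}")
```

In every bundle except the “difficult and risky” one, the curve flattens quickly, which is the sense in which an unbounded P(doom) can still be a reasonably well-defined number.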
I think this is an important equivocation (direct alignment vs. transitive alignment). If the first AGIs, such as LLMs, turn out to be aligned at least in the sense of keeping humanity safe, that by itself doesn’t exempt them from the reach of Moloch. The reason alignment is hard is that it might take longer to figure out than developing misaligned AGIs does, and this doesn’t automatically stop applying when the researchers are themselves aligned AGIs: while AGI-assisted (or, more likely, AGI-led) alignment research is faster than human-led alignment research, so is AGI capability research.
Thus it’s possible that P(first AGIs are misaligned) is low, that is, the first AGIs are directly aligned, while P(doom) is still high, if the first AGIs fail to protect themselves (and by extension humanity) from future misaligned AGIs that they develop. In that case they are not transitively aligned (same as most humans), because they failed to establish the strong coordination norms required to prevent deployment of dangerous misaligned AGIs anywhere in the world.
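Written out as a rough decomposition (my own framing with placeholder numbers, not the commenter’s), the two failure modes add up roughly as

$$P(\text{doom}) \;\approx\; \underbrace{P(\text{misaligned first AGIs})}_{\text{direct failure}} \;+\; P(\text{directly aligned first AGIs}) \cdot \underbrace{P(\text{later misaligned AGIs win} \mid \text{directly aligned first AGIs})}_{\text{transitive failure}}$$

assuming for simplicity that directly misaligned first AGIs lead to doom near-certainly. With placeholder values of 0.1 for direct misalignment and 0.5 for transitive failure, P(doom) ≈ 0.1 + 0.9 × 0.5 = 0.55, which is how a low P(first AGIs are misaligned) can coexist with a high P(doom).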
At the same time, this is not really about the timespan: as soon as the first AGIs develop nanotech, they are going to operate on many orders of magnitude more custom hardware, increasing both the serial speed and the scale of available computation to the point where everything related to settling into an alignment security equilibrium happens within a very short span of physical time. It might take the first AGIs a couple of years to get there (if they manage to restrain themselves and not build a misaligned AGI even earlier), but then, in a few weeks, it all gets settled one way or the other.
I think it’s an all-of-time metric over a variable with expected decay baked into the dynamics. A windowing function on the probability might make sense to discuss; there are some solid P(doom) questions on Manifold Markets, for example.
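To spell out what a window on the metric would mean (my own formulation, not the commenter’s): if $F(t)$ is the all-time cumulative probability of doom by time $t$, then a window $[t_1, t_2]$ just reads off a slice of that curve,

$$P(\text{doom in } [t_1, t_2]) \;=\; F(t_2) - F(t_1),$$

so “P(doom) by 2040” and “P(doom) ever” are different windows over the same curve, and under decaying-hazard dynamics most of the mass sits in the early window.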