This is the multiple stages fallacy. Not only is each of the probabilities in your list too low, but if you actually consider them as conditional probabilities, they're double- and triple-counting the same uncertainties. And since they're all multiplied together, and all err in the same direction, the error compounds.
What conditional probabilities would you assign, if you think ours are too low?
P(We invent algorithms for transformative AGI | No derailment from regulation, AI, wars, pandemics, or severe depressions): 0.8
P(We invent a way for AGIs to learn faster than humans | We invent algorithms for transformative AGI): 1. This row is already incorporated into the previous row.
P(AGI inference costs drop below $25/hr (per human equivalent)): 1. This is also already incorporated into “we invent algorithms for transformative AGI”; an algorithm with such extreme inference costs wouldn’t count (and, I think, would be unlikely to be developed in the first place).
We invent and scale cheap, quality robots: Not a prerequisite.
We massively scale production of chips and power: Not a prerequisite if we have already conditioned on inference costs.
We avoid derailment by human regulation: 0.9.
We avoid derailment by AI-caused delay: 1. I would consider an AI that derailed development of other AI to be transformative.
We avoid derailment from wars (e.g., China invades Taiwan): 0.98.
We avoid derailment from pandemics: 0.995. Thanks to COVID, our ability to continue making technological progress during a pandemic which requires everyone to isolate is already battle-tested.
We avoid derailment from severe depressions: 0.99.
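Multiplying out the revised conditional estimates above (the "already incorporated" rows and the non-prerequisites contribute a factor of 1) gives an overall probability of roughly 0.7, an order of magnitude above the original paper's figure:

```python
import math

# Revised conditional estimates from the list above; rows judged
# "already incorporated" or "not a prerequisite" are factors of 1.
estimates = {
    "invent algorithms for transformative AGI": 0.8,
    "avoid derailment by human regulation": 0.9,
    "avoid derailment from wars": 0.98,
    "avoid derailment from pandemics": 0.995,
    "avoid derailment from severe depressions": 0.99,
}

overall = math.prod(estimates.values())
print(round(overall, 3))  # ~0.695
```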
Interested in betting thousands of dollars on this prediction? I’m game.
I’m interested. What bets would you offer?
There is an additional problem: one of the two key principles behind their estimates is
Avoid extreme confidence
This principle leads you to pick probability estimates that keep some distance from 1 (e.g., by picking at most 0.95).
If you build a fully conjunctive model, and you are not that great at estimating extreme probabilities, then you will have a strong bias towards low overall estimates. And you can drive the overall estimate even lower simply by introducing more (conjunctive) factors.
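To see how the cap interacts with the number of factors: if "avoid extreme confidence" means no factor exceeds 0.95, then a fully conjunctive model's output shrinks geometrically in the factor count, regardless of what the world is actually like.

```python
# Upper bound on a fully conjunctive model's output when every
# factor is capped at 0.95 to "avoid extreme confidence".
CAP = 0.95

for n_factors in (5, 10, 20):
    print(n_factors, round(CAP**n_factors, 3))
# 5  -> 0.774
# 10 -> 0.599
# 20 -> 0.358
```

So with ten capped factors the model cannot output more than ~0.6, and with twenty it cannot output more than ~0.36, purely as an artifact of the model's structure.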