I don’t see what it has to do with risk-return. Sure, many startups fail. And, plausibly, many people tried to build an airplane and failed before the Wright brothers. And many people keep trying to build AGI and failing. This doesn’t mean there won’t be kinks in AI progress, or even a TAI created by a small group.
Saying that “the subjective expected value of AI progress over time is a smooth curve” is a very different proposition from “the actual AI progress over time will be a smooth curve”.
My line of argument here is not trying to prove a particular story about AI progress (e.g. “TAI will be similar to a startup”) but to push back against (/ voice my confusions about) the confidence level of predictions made by Christiano’s model.
What is the confidence level of predictions you are pushing back against? I’m at like 30% on fast takeoff in the sense of “1 year doubling without preceding 4 year doubling” (~~a threshold roughly set to break any plausible quantitative historical precedent~~ a threshold intended to be faster than historical precedent, but that’s probably similar to the agricultural revolution sped up 10,000x). I’m at maybe 10-20% on the kind of crazier world Eliezer imagines.
Is that a high level of confidence? I’m not sure I would be able to spread my probability in a way that felt unconfident (to me) without giving probabilities that low to lots of particular ways the future could be crazy. E.g. 10-20% is similar to the probability I put on other crazy-feeling possibilities like no singularity at all, rapid GDP acceleration with only moderate cognitive automation, or a singleton that arrests economic growth before we get to 4 year doubling times...
I’m at like 30% on fast takeoff in the sense of “1 year doubling without preceding 4 year doubling” (a threshold roughly set to break any plausible quantitative historical precedent).
Huh, AI Impacts looked at one dataset of GWP (taken from Wikipedia, in turn taken from here) and found 2 precedents for “x year doubling without preceding 4x year doubling”, roughly during the agricultural revolution. The dataset seems to be a combination of lots of different papers’ estimates of human population, plus an assumption of ~constant GWP/capita early in history.
Yeah, I think this was wrong. I’m somewhat skeptical of the numbers and suspect future revisions will systematically soften those accelerations, but 4x still won’t look that crazy.
(I don’t remember exactly how I chose that number but it probably involved looking at the same time series so wasn’t designed to be much more abrupt.)
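For concreteness, the precedent check being discussed — flag any doubling of GWP that happened more than 4x faster than the doubling before it — can be sketched roughly as follows. This is a toy illustration with made-up numbers, not AI Impacts’ actual dataset or code:

```python
def doubling_time(years, values, i):
    """Years it took the series to double, measured backward from index i.

    Returns None if the series never reaches half of values[i] within the data.
    """
    target = values[i] / 2
    for j in range(i - 1, -1, -1):
        if values[j] <= target:
            return years[i] - years[j]
    return None


def precedent_violations(years, values):
    """Dates at which a doubling was >4x faster than the preceding doubling."""
    hits = []
    prev = None  # doubling time at the previous data point
    for i in range(1, len(years)):
        d = doubling_time(years, values, i)
        if d is not None:
            if prev is not None and prev > 4 * d:
                hits.append(years[i])
            prev = d
    return hits


# Made-up GWP levels: steady 100-year doublings, then one 10-year doubling.
years = [0, 100, 200, 300, 310]
gwp = [1, 2, 4, 8, 16]
print(precedent_violations(years, gwp))  # -> [310]
```

On this toy series the final doubling (10 years) is more than 4x faster than the previous one (100 years), so it gets flagged; a series of uniform doublings produces no flags. The real analysis would face the messiness mentioned above: overlapping population estimates and the ~constant-GWP/capita assumption for early history.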