Yeah, at this point the marginal value add of forecasting/epistemics is in validating/invalidating fundamental assumptions, like the software intelligence explosion idea, the possibility of industrial/chip-fab capacity being massively scaled up by AIs, or Moore's law not ending, rather than in the parameter ranges, because the assumptions overdetermine the conclusion.
Overall, I agree that we were not prioritizing addressing the demographic of skeptics who have detailed reasons for their beliefs. I very much sympathize with disagreeing with the framework that others are using to approach takeoff forecasting, rather than just with their parameter settings; I feel similarly, to some extent, about others' work (in particular Epoch's GATE).
However, I disagree that our model assumes a software-driven intelligence explosion. A substantial percentage of our simulations don't contain such an explosion! You can see, for example, that the 90th percentile for ASI is >2100. You can totally input your best guesses for the parameters in our model and end up with <50% on a software-driven explosion.
I also think that our scenario and supplements have convinced some skeptics who didn't have reasons as sophisticated as, e.g., yours and Epoch's for their disagreements going in. But of course you probably think this is a bad thing so :shrug:
And I’m also worried – as always with this stuff – that there are some people who will look at all those pages and pages of fancy numbers, and think “wow! this sounds crazy but I can’t argue with Serious Expert Research™,” and end up getting convinced even though the document isn’t really trying to convince them in the first place.
Also very much sympathize with this! I've had similar concerns about other approaches. I tried to be clear about our levels of uncertainty and how scrappy the model is, but perhaps I could have added further caveats? Curious what you would have recommended here.
Finally: I'd be quite interested in your object-level disagreements with our takeoff forecasts. I've appreciated your comments on timelines. I'd also be interested in your actual forecasts on timelines/takeoff, e.g. your 10th/50th/90th percentiles, as we gave in our supplements.
Comment down below:
https://forum.effectivealtruism.org/posts/rv4SJ68pkCQ9BxzpA/?commentId=EgqgffC4F5yZQreCp
I probably agree with you guys more than you realize. You guys might be interested in this DM I sent to nostalgebraist: