And then a lot of the post seems to make quite bad arguments against forecasting AI timelines and other technologies, doing so with… I really don’t know, a rejection of Bayesianism? A random invocation of an asymmetric burden of proof?
I think the position Ben (the author) has on timelines is really not that different from Eliezer’s; consider pieces like this one, which is not just about the perils of biological anchors.
I think the piece spends less time than I would like on what to do in a position of uncertainty (like, if the core problem is that we are approaching a cliff of uncertain distance, how should we proceed?), but I think it’s not particularly asymmetric.
[And, there’s something I like about realism in plans? If people are putting heroic efforts into a plan that Will Not Work, I am on the side of the person on the sidelines trying to save them the effort, or to direct them toward a plan that has a chance of working. If the core uncertainty is whether we can get human intelligence advancement in 25 years (I’m on your side in thinking it’s plausible), then it seems worth diverting what attention we can from other things toward making that happen, and being loud about doing that.]