I feel like intelligence enhancement being pretty solidly on the near-term technological horizon provides a strong argument for future governance being much better. There are also maybe 3-5 other technologies besides AGI that seem likely to be achieved in the next 30 years, all of which would hugely improve future AGI governance.
And then a lot of the post seems to make really quite bad arguments against forecasting AI timelines and other technologies, doing so with… I really don’t know, a rejection of Bayesianism? A random invocation of an asymmetric burden of proof? If anyone learned anything useful from its section on timelines or technological forecasting, please tell me, since it really is among the worst things I have seen Ben Landau-Taylor write, and I respect him a lot. The argument as written really makes no sense. I am personally on the longer end of timelines, but none of my reasoning looks anything like that.
Seriously, what are the technological forecasts in this essay:
While there is no firm ground for any prediction as to how long it will take before any technological breakthrough [to substantial intelligence enhancement], if ever, it seems more likely that such a regime would have to last worldwide for a century or several centuries before such technology were created.
I will very gladly take all your bets that intelligence augmentation will not take “several centuries”. What is the basis of this claim? I see no methodology that suggests anything remotely this long, while trend extrapolation, first-principles argument, reference-class forecasting, and many other approaches all suggest it will happen faster than that.
I really don’t get the worldview that writes this essay. A worldview in which not-even-particularly-sci-fi technologies should by default be assumed to take centuries (centuries!!!) to be developed. A worldview in which, even as AI systems destroy every single benchmark anyone has ever come up with, the hypothesis that AI might arrive soon gets dismissed because… I really don’t know. Because the author wants to maintain authority over reference classes, and therefore vaguely implies it can’t happen soon.
There is no obvious prior over technological developments or progress. The absence of proof does not show that things won’t happen soon. And I would so gladly take this worldview’s money if it were willing to actually write down probability distributions spread out enough to put a large fraction of their probability mass on centuries away.
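To make that concrete, here is a minimal sketch (my own illustration, not something from the essay or this comment), assuming arrival times follow a lognormal distribution; the medians and spreads below are arbitrary placeholder assumptions, only there to show how wide a distribution has to be before “centuries away” carries much of the mass.

```python
# Illustrative only: how much probability mass a lognormal distribution over
# "years until substantial intelligence enhancement" puts beyond 200 years,
# for a few assumed medians and log-space spreads (all parameters are made up).
import math

def lognormal_sf(t_years, median_years, sigma):
    """P(T > t) for a lognormal with the given median and log-space std dev."""
    z = (math.log(t_years) - math.log(median_years)) / sigma
    return 1 - 0.5 * (1 + math.erf(z / math.sqrt(2)))

for median in (20, 30, 50):        # assumed median arrival times, in years
    for sigma in (0.8, 1.0, 1.5):  # assumed spreads in log-space
        print(f"median={median}y, sigma={sigma}: P(>200y) = "
              f"{lognormal_sf(200, median, sigma):.1%}")
```

On these assumptions, even a median of 50 years with a very wide spread puts less than about a fifth of the mass beyond 200 years.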
And then a lot of the post seems to make really quite bad arguments against forecasting AI timelines and other technologies, doing so with… I really don’t know, a rejection of Bayesianism? A random invocation of an asymmetric burden of proof?
I think the position Ben (the author) has on timelines is really not that different from Eliezer’s; consider pieces like this one, which is not just about the perils of biological anchors.
I think the piece spends less time than I would like on what to do in a position of uncertainty (if the core problem is that we are approaching a cliff of uncertain distance, how should we proceed?), but I think it’s not particularly asymmetric.
[And—there’s something I like about realism in plans? If people are putting heroic efforts into a plan that Will Not Work, I am on the side of the person on the sidelines trying to save them their effort, or to direct them towards a plan that has a chance of working. If the core uncertainty is whether or not we can get human intelligence enhancement in 25 years—I’m on your side in thinking it’s plausible—then it seems worth diverting what attention we can from other things towards making that happen, and being loud about doing so.]