I am not adding more detail to my prediction, I'm adding more detail to my justification of that prediction, which doesn't make my prediction less probable. Unless you think predictions formed on the basis of little information are somehow more robust than predictions formed on the basis of lots of information.
As for denying the super-exponential trend, I agree. I don’t put a lot of stock in extrapolating from past progress at all, because breakthroughs are discontinuous. That’s why I think it’s valuable to actually discuss the nature of the problem, rather than treating the problem as a black box we can predict by extrapolation.
To be completely honest, I think the best argument against AI 2027's scenario is that it relies on the assumption that we will soon be in a super-exponential progress regime. We don't have much evidence that we are on a super-exponential trajectory, and we have reason to believe the data points that seem to vindicate super-exponential trajectories are fundamentally temporary and non-extrapolatable.
We don't really need any more detailed argument than that, and we shouldn't go too deep into details here, because detailed stories can only become equally or less probable with every detail added.
Edit: I will likely respond to comments slowly, if at all due to rate limits.