I think I care less than you do about this question. My reasoning is: if there’s high uncertainty about the difficulty of alignment, then we should behave as if alignment is hard; there is in fact high uncertainty; therefore, we should behave as if alignment is hard.
There’s an asymmetric payoff to being wrong. If you assume alignment is hard when it’s easy, then you have unnecessarily delayed the singularity, which carries some opportunity cost (and some risk that we go extinct from something unrelated to AI in the meantime). If you assume alignment is easy when it’s hard, then everyone dies. The downside to being wrong is far worse in the latter case. Therefore, we should behave as if alignment is hard (or at least skew our behavior in that direction).
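To make the asymmetry concrete, here is a toy expected-value comparison. The probability and payoff numbers are made-up placeholders for illustration only, not estimates:

```python
# Toy payoff comparison for the asymmetry argument above.
# p_hard, cost_delay, and cost_extinction are illustrative placeholders, not estimates.
p_hard = 0.5             # probability that alignment turns out to be hard
cost_delay = -1          # cost of acting as if hard when it was actually easy (delayed singularity)
cost_extinction = -1000  # cost of acting as if easy when it was actually hard (everyone dies)

ev_cautious = (1 - p_hard) * cost_delay   # only pays the cost when alignment was easy
ev_reckless = p_hard * cost_extinction    # only pays the cost when alignment was hard

print(f"act as if hard: {ev_cautious}, act as if easy: {ev_reckless}")
# With these placeholder payoffs, caution wins for any p_hard above
# cost_delay / (cost_delay + cost_extinction), i.e. roughly 0.1%.
```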
I agree that we don’t know for sure and need to allow for a range of possibilities, and that in some cases the right thing to do is to pessimize.
However, I think there is some utility in making the estimate anyway. The case I make at the end of my answer is that we’re very likely not going to be done in time if your timelines are 5 years, and probably not even if they’re 10 years, but that we are close enough that if we could increase the growth rate of the field from 20% per year to 50% per year, then we would have at least some chance at 5 years and would probably be OK at 10.
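As a rough back-of-the-envelope illustration of what those growth rates imply (the 20% and 50% figures and the 5- and 10-year horizons are the ones above; the rest is just compounding arithmetic):

```python
# Relative size of the field after n years of compound annual growth.
for rate in (0.20, 0.50):
    for years in (5, 10):
        size = (1 + rate) ** years
        print(f"{rate:.0%} per year for {years} years -> ~{size:.1f}x current size")
# 20%: ~2.5x at 5 years, ~6.2x at 10; 50%: ~7.6x at 5 years, ~57.7x at 10.
```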
My conclusions may or may not turn out to be roughly right, and that sort of claim does require an estimate that is accurate to within something like a factor of two or three, so it’s quite easy to be wrong, especially this early. But it’s also really valuable information for things like funding priorities: it tells us we need to drastically increase effort on field-building. If, as some people argue, the problem is in fact much harder than that, then you’d reach a very different set of conclusions about funding priorities.