I think it’s likely that without a long (e.g. multi-decade) AI pause, one or more of these “non-takeover AI risks” can’t be solved or reduced to an acceptable level
Does that mean that you think that boring old yes-takeover AI risk can be solved without a pause? Or even with a pause? That seems very optimistic indeed.
making it harder in the future to build consensus about the desirability of pausing AI development
I don’t think you’re going to get that consensus regardless of what kind of copium people have invested in. Not only that, but even if you had consensus, I don’t think it would let you actually enact anything remotely resembling a “long enough” pause. Maybe a tiny “speed bump”, but nothing plausibly long enough to help with either the takeover or non-takeover risks. It’s not certain that you could solve all of those problems with a pause of any length, and it’s wildly unlikely, to the point of not being worth fretting about, that you can solve them with a pause of achievable length.
… which means I think “we” (not me, actually...) are going to end up just going for it, without anything you could really call a “solution” to anything, whether it’s wise or not. Probably one or more of the bad scenarios will actually happen. We may get lucky enough not to end up with extinction, but only by dumb luck, not because anybody solved anything. Especially not because a pause enabled anybody to solve anything, because there will be no pause of significant length. Literally nobody, and no combination of people, is going to be able to change that, by any means whatsoever, regardless of how good an idea it might be. Might as well admit the truth.
I mean, I’m not gonna stand in your way if you want to try for a pause, and if it’s convenient I’ll even help you tell people they’re dumb for just charging ahead, but I do not expect any actual success (and am not going to dump a huge amount of energy into the lost cause).
By the way, if you want to talk about “early”, I, for one, have held the view that usefully long pauses aren’t feasible, for basically the same reasons, since the early 1990s. The only change for me has been to get less optimistic about solutions being possible with or without even an extremely, infeasibly long pause. I believe plenty of other people have had roughly the same opinion during all that time.
It’s not about some “early refusal” to accept that the problems can’t be solved without a pause. It’s about a still continuing belief that a “long enough pause”, however convenient, isn’t plausibly going to actually happen… and/or that the problems can’t be solved even with a pause.