Upvoted, and thanks for writing this. I disagree on multiple dimensions. On the object level, I don’t think ANY research topic can be stopped for very long, and I don’t think AI specifically gets much safer with any achievable finite pause than with a slowdown and standard of care over roughly the same duration. On the strategy level, I wonder what other topics you’d use as support for your thesis that if you believe extreme measures are correct, you should advocate for them. US Gun Control? Drug legalization or enforcement? Private capital ownership?
On the “be honest and direct” side, hiding your true beliefs does make slippery-slope fears correct: “if we allow compromise X, next they’ll push for X+1” really is the truth on such topics. But it’s not clear that it MATTERS whether your opponents/disbelievers merely fear the slippery slope or know for certain that you want the endpoint.
On the “push for an achievable compromise position” side, there are a few major benefits. First, it may actually work: you may get some improvement. Second, it leads to discussion and exploration of that point on the continuum, and will shift the Overton window a bit. Third, it keeps you close enough to the mainstream that you can work toward your REAL goals (AI safety, not AI pause) with all the tools available, rather than being on the fringe where nobody listens to anything you say.