Note that there are two problems Buck is highlighting here:
1. Getting useful work out of scheming models that might try to sabotage this work.
2. Getting useful research work out of models that aren't scheming. (Here, perhaps the main problem is checking their outputs.)
My sense is that work on (1) doesn’t advance ASI research except to the extent that scheming AIs would have tried to sabotage this research? (At least, insofar as work on (1) doesn’t help much with (2) in the long run.)