But in the end, those plans are not opposed to each other.
I think they are somewhat opposed, due to signaling effects: if you're working only on Plan 2, that signals to the general public and to non-experts that you think the risks are manageable or acceptable. And if a lot of people are working on Plan 2, that gives ammunition to those who want to race, or who don't want to pause or stop, to say: "Look at all these AI safety experts working on solving AI safety. If the risks were really as high as the Plan 1 people say, wouldn't they be calling for a pause or stop too, instead of working on technical problems?"