You put it succinctly: I believe p(doom|personal_action) ≈ p(doom|~personal_action) for any personal action I can take. I do not see what I can do. I am also not trying to start a B2B SaaS, because spending my last days doing that would not be the right thing to do.
Do you think this is wrong for most people / people trying to start an AI B2B SaaS / some other class of people you want to appeal to?
I admit, I don’t quite follow the superrational part. If you’re referring to some decision-theoretic widget that allows one to cooperate with other people who are capable of the same reasoning, then for it to be effective those people have to exist and one has to be one of them, right?
A failure mode might also be that the SaaS people assume the other players are not superrational. In that case, a superrational player should also defect.
Without having put much thought into it, I believe (adult) humans cooperating via this mechanism is in general very unlikely. Cooperation of this kind relies on all agents coming to the same (or sufficiently similar?) conclusion regarding the payoff matrix and the nature of the other agents. In human terms, this relies on everyone’s ability to reason correctly about the problem and about everyone else’s behavior AND on everyone having the right information. I don’t think that happens very often, if at all. The “everyone predicting each other’s behavior correctly” part seems especially unlikely to me. Also, even slightly different (predicted) information (e.g. AGI timelines in our case) can yield very different payoff matrices?
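To make that last point concrete, here is a minimal sketch (my own illustration, with made-up payoff numbers) of why shared payoff estimates matter. A superrational agent assumes the other player runs the same decision procedure, so both end up picking the same action; the agent therefore only compares the symmetric outcomes:

```python
def superrational_choice(payoffs):
    """payoffs[my_action][their_action] -> my payoff; actions are 'C'/'D'.

    Under the "everyone reasoning like me picks the same action"
    assumption, only the diagonal outcomes (C,C) and (D,D) are
    reachable, so we compare just those.
    """
    return max(("C", "D"), key=lambda a: payoffs[a][a])

# A standard prisoner's dilemma: mutual cooperation beats mutual defection,
# so the superrational choice is to cooperate.
pd = {"C": {"C": 3, "D": 0}, "D": {"C": 5, "D": 1}}
print(superrational_choice(pd))  # -> C

# The same game as perceived by an agent with different information
# (say, much shorter timelines): the value of mutual cooperation
# collapses, and the very same procedure now defects. The symmetry
# that cooperation relied on is gone.
pd_short_timelines = {"C": {"C": 0.5, "D": 0}, "D": {"C": 5, "D": 1}}
print(superrational_choice(pd_short_timelines))  # -> D
```

So even two flawless superrational reasoners can end up on opposite sides of the cooperate/defect line if their inputs differ.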