But Petrov was not a launch authority. The decision to launch or not was not his to make; it belonged to the Politburo of the Soviet Union.
This is obviously true as a matter of Soviet policy, but it sounds like you’re making a moral claim: that the Politburo was morally entitled to decide whether or not to launch, and that no one else had that right. That is extremely questionable, to put it mildly.
We have to remember that when he chose to lie about the detection, by calling it a computer glitch when he didn’t know for certain that it was one, Petrov was defecting against the system.
Indeed. But we do not cooperate in prisoner’s dilemmas “just because”; we cooperate because doing so leads to higher utility. Petrov’s defection led to a better outcome for every single person on the planet; concluding it was wrong merely because it was defection is an example of the non-central fallacy.
Is that the sort of behavior we really want to lionize?
If you will not honor literally saving the world, what will you honor? If we wanted to make a case against Petrov, we could say that by demonstrably not retaliating he weakened deterrence (though deterrence would have helped no one had he launched), or that the Soviets might have preferred destroying the world to dying alone, and so might resent a missileer unwilling to strike. But it is hard to condemn him for a decision that predictably saved the West and had a significant chance (which was in fact realized) of saving the Soviet Union as well.
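To make the decision-theoretic point concrete, here is a minimal expected-utility sketch. Every number in it is an illustrative assumption I am inventing for the example (the probability `p_real` and all the utilities), not an estimate of the actual 1983 situation; the point is only that reporting the detection can be worse in expectation than waiting even if there is a real chance the detection was genuine.

```python
# A minimal expected-utility sketch of the argument above, using entirely
# made-up numbers: p_real and the utilities are illustrative assumptions,
# not claims about the actual 1983 situation.
p_real = 0.2  # assumed probability that the satellite detection was genuine

# Hypothetical world-utilities, in arbitrary units:
U = {
    ("report", "real"): -100,    # retaliation against a real strike: full exchange
    ("report", "glitch"): -100,  # retaliation against a false alarm: same outcome
    ("wait", "real"): -60,       # a real strike lands, but is not answered in kind
    ("wait", "glitch"): 0,       # false alarm correctly ignored: status quo

}

def expected_utility(action):
    return p_real * U[(action, "real")] + (1 - p_real) * U[(action, "glitch")]

# Even granting a 20% chance the detection was real, waiting dominates
# in expectation, because reporting leads to catastrophe either way.
print(expected_utility("report"))
print(expected_utility("wait"))
```

Under these assumed payoffs, "defecting" (waiting) has strictly higher expected utility than "cooperating" with the reporting protocol, which is exactly why cooperation-for-its-own-sake is the wrong frame here.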
While there may be a substantial worldview gap, I suspect the much larger difference is that most Sneer Clubbers are looking to boost their status by trying to bully anyone who looks like a vulnerable target, and being different, as LessWrong is, is enough to qualify. This situation is best modeled by conflict theory, not mistake theory.
Since that does not seem to be the sort of answer you’re looking for, though: if I wanted to bridge the inferential gap with a hypothetical Sneer Clubber who genuinely cared about truth, or indeed about anything other than status (which they do not), I’d tell them that convention doesn’t work as well as one might think. If you think the conventional way to approach the world is usually right, the rationalist community will seem unusually stupid: we ignore all this free wisdom lying around and try to reinvent the wheel! If the conventional wisdom is correct, then concerns about the world changing, whether due to AI or anything else, are pointless; if they were important, conventional wisdom would already be talking about them. If the conventional wisdom is correct, Bayesianism is potentially wrong (it’s not part of the Standard Approach to Life), and certainly useless: why learn through probability theory when tradition can tell you everything you need to know much faster?

But I would tell them that in a world where the conventional wisdom was embarrassingly wrong in every previous era, it would be a remarkable coincidence for this age to be the first to get everything right. And if tradition isn’t perfect, or nearly so, that’s when rationalism suddenly becomes very important.
I would also tell them that it’s possible to actually understand things. Most people seem to go through life by rote, not recognizing when something doesn’t make sense because they don’t expect anything to make sense. But it is possible to start thinking through how things work, and when you do, rationality starts seeming sensible, because you can see both how it works and that it works, rather than silly because it superficially pattern-matches to a Scientology-style cult.