some predictable counterpoints: maybe we won because we were cautious; we could have won harder; many relevant thinkers still pooh-pooh the problem; it's not just the basic problem statement that's important, but potentially many other ideas that aren't yet popular; picking battles isn't lying; arguing about sensitive subjects is fun, and I don't think people are very tempted to find excuses to avoid it; there are other things that are potentially the most important in the world and that could suffer from bad optics; I'm not against systematically truthseeking discussions of sensitive subjects, just against having them in public in a way that's associated with the rationalism brand
(This extended runaround on appeals to consequences is at least a neat microcosm of the reasons we expect unaligned AIs to be deceptive by default! Having the intent to inform other agents of what you know, without trying to take responsibility for controlling their decisions, is an unusually anti-natural shape for cognition; for generic consequentialists, influence-seeking behavior is the default.)