Certainly agree with your point about donating to Substacks / journalists. Could be very impactful to have a writeup of that somewhere here or on the EA forum.
I’m familiar with Galef’s ideas; I would place “soldiers” in category 2. But yes, the distinction is very subtle and I did not specify it well enough.
I believe a sufficiently well-designed UI for navigating debates/arguments/discussions can make it very difficult for people to disguise a soldier mindset behind obfuscated (intentional or unintentional) communication and reasoning.
Imagine, for example:
A user creates a strongly worded post that features a clear strawman and/or blatantly skips over serious, in-depth prior discussion of the same topic.
An LLM categorizes the argument to situate it within prior discussion and notifies the user that they (a) do not appear to have an accurate understanding of the original source, pointing out specifically why, and (b) have not yet explored the X counterarguments that follow from that line of reasoning, or the Y that come after those (see the sketch below).
This could be seen as an enhanced version of “community notes” aimed at situating shallow, under-researched takes within a larger “map of human thought.”
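For concreteness, here is a minimal sketch of what that check might look like, assuming a retrieval step has already pulled the relevant prior threads. The `PriorThread` structure, the `call_llm` stand-in, and the prompt wording are all hypothetical placeholders rather than any particular platform's API.

```python
from dataclasses import dataclass

@dataclass
class PriorThread:
    """One piece of prior discussion the new post should be situated against."""
    title: str
    summary: str
    counterarguments: list[str]

def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever LLM call the platform actually uses."""
    raise NotImplementedError("plug in a real model call here")

def situate_post(post_text: str, prior_threads: list[PriorThread]) -> str:
    """Ask the model to (a) flag apparent misreadings of the original source and
    (b) list counterarguments from prior discussion that the post never engages."""
    context = "\n\n".join(
        f"Thread: {t.title}\nSummary: {t.summary}\n"
        f"Known counterarguments: {'; '.join(t.counterarguments)}"
        for t in prior_threads
    )
    prompt = (
        "You are an assistant situating a new post within prior discussion.\n\n"
        f"Prior discussion:\n{context}\n\n"
        f"New post:\n{post_text}\n\n"
        "Reply with:\n"
        "A) Claims that appear to misrepresent the original source, and why.\n"
        "B) Counterarguments from the prior discussion the post does not address."
    )
    return call_llm(prompt)
```

The note shown to the user would then be generated from this output, much as community notes attach context to a post today.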
Whether this can scale and outcompete current systems is unknown, but it genuinely seems promising for improving public discourse, and like a step in the right direction.
Appreciate the comment; it was helpful in clarifying my thoughts.
I think the disagreement stems from a lack of specificity on my part; ignore the specific description of the categories.
I hold beliefs on it, sure. I am now interested in seeing if they reflect reality, and learning why/why not. Is this mindset inadequate, and what would make it more rational?
Separately: do you think tools of the kind I describe hold promise for combating soldier mindset at scale? I will definitely be reading into some of the CFAR resources; just curious to hear from you.