It would be helpful if people writing online could point to some description of their fundamental beliefs, interests and assumptions, so you could get a better picture of who they are and where their writing is coming from [2].
I’ve tried to do this below. I wrote down my fundamentals, my main beliefs, my intellectual background scenery. They are all things that inform what I write, are relevant to how I interpret other texts, and help others interpret my writing.
I’d like to encourage other bloggers and writers to do this too. It’s a great tool, not just for others but for yourself too. Have one you can link to so people can sniff you out and get a feel. But be warned, enumerating your own unstated basic beliefs was much harder than I thought it’d be. It’s a chasing-your-tail type enterprise that’ll leave you unsatisfied. I thought I’d have 10 or 12 points but now there’s 30 because I kept coming up with more. The list of background things people can differ on without necessarily realizing it seems infinitely long, and knowing whether to include something or take it for granted doesn’t get any easier just because you’re writing specifically to solve that problem one level above. There are assumptions behind assumptions behind assumptions.
That said, I’m a bit skeptical of the tractability of this approach for resolving deep disagreements between smart, knowledgeable, well-intentioned people, mostly because my experience has resonated with what Scott Alexander wrote about “high-level generators of disagreement” in Varieties of Argumentative Experience:
If we were to classify disagreements themselves – talk about what people are doing when they’re even having an argument – I think it would look something like this:
Most people are either meta-debating – debating whether some parties in the debate are violating norms – or they’re just shaming, trying to push one side of the debate outside the bounds of respectability.
If you can get past that level, you end up discussing facts (blue column on the left) and/or philosophizing about how the argument has to fit together before one side is “right” or “wrong” (red column on the right). Either of these can be anywhere from throwing out a one-line claim and adding “Checkmate, atheists” at the end of it, to cooperating with the other person to try to figure out exactly what considerations are relevant and which sources best resolve them.
If you can get past that level, you run into really high-level disagreements about overall moral systems, or which goods are more valuable than others, or what “freedom” means, or stuff like that. These are basically unresolvable with anything less than a lifetime of philosophical work, but they usually allow mutual understanding and respect.
Bit more on those generators:
High-level generators of disagreement are what remains when everyone understands exactly what’s being argued, and agrees on what all the evidence says, but have vague and hard-to-define reasons for disagreeing anyway. In retrospect, these are probably why the disagreement arose in the first place, with a lot of the more specific points being downstream of them and kind of made-up justifications. These are almost impossible to resolve even in principle. …
Some of these involve what social signal an action might send; for example, even a just war might have the subtle effect of legitimizing war in people’s minds. Others involve cases where we expect our information to be biased or our analysis to be inaccurate; for example, if past regulations that seemed good have gone wrong, we might expect the next one to go wrong even if we can’t think of arguments against it. Others involve differences in very vague and long-term predictions, like whether it’s reasonable to worry about the government descending into tyranny or anarchy. Others involve fundamentally different moral systems, like if it’s okay to kill someone for a greater good. And the most frustrating involve chaotic and uncomputable situations that have to be solved by metis or phronesis or similar-sounding Greek words, where different people’s Greek words give them different opinions.
You can always try debating these points further. But these sorts of high-level generators are usually formed from hundreds of different cases and can’t easily be simplified or disproven. Maybe the best you can do is share the situations that led to you having the generators you do. Sometimes good art can help.
The high-level generators of disagreement can sound a lot like really bad and stupid arguments from previous levels. “We just have fundamentally different values” can sound a lot like “You’re just an evil person”. “I’ve got a heuristic here based on a lot of other cases I’ve seen” can sound a lot like “I prefer anecdotal evidence to facts”. And “I don’t think we can trust explicit reasoning in an area as fraught as this” can sound a lot like “I hate logic and am going to do whatever my biases say”. If there’s a difference, I think it comes from having gone through all the previous steps – having confirmed that the other person knows as much as you, that you might be intellectual equals who are both equally concerned about doing the moral thing – and realizing that both of you alike are controlled by high-level generators. High-level generators aren’t biases in the sense of mistakes. They’re the strategies everyone uses to guide themselves in uncertain situations.
The passage quoted at the start of this post is from John Nerst’s 2018 post 30 Fundamentals, which is what reminded me of all this.
(As an aside, Nerst’s exercise might be useful for personalising LLM system prompts too.)
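To make the aside concrete, here is a minimal sketch of what that might look like: folding a Nerst-style list of stated fundamentals into an LLM system prompt. The fundamentals shown, and the `build_system_prompt` helper, are invented for illustration; nothing here is from Nerst's post or any particular LLM API.

```python
# Hypothetical sketch: turning a "fundamentals" list into an LLM system prompt.
# The example beliefs below are invented placeholders, not Nerst's actual list.
fundamentals = [
    "I lean towards consequentialism in ethics.",
    "I weight anecdotal evidence lower than systematic data.",
    "I assume good faith in disagreements by default.",
]

def build_system_prompt(fundamentals: list[str]) -> str:
    """Prepend a reader's stated fundamentals so the model can tailor replies."""
    header = ("The user holds the following background beliefs and assumptions. "
              "Interpret their questions, and frame your answers, "
              "in light of them:")
    bullets = "\n".join(f"- {f}" for f in fundamentals)
    return f"{header}\n{bullets}"

# The resulting string would be passed as the system message to whatever
# chat API one uses.
print(build_system_prompt(fundamentals))
```

The point of the exercise carries over directly: the hard part isn't the string formatting, it's writing down the fundamentals in the first place.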