I like this post, though I wish it were explicit about the fact that the subject matter is really “relatively intractable disagreements between smart, well-informed people about Big Issues”, not “arguments” in general.
If everyone in the discussion is smart and well-informed, and the subject is a Big Issue, then trying to resolve the dispute by citing an isolated fact tends to be a worse use of time (or further from people’s cruxes) than surveying all the evidence, which in turn tends to be worse and less cruxy than delving into high-level generators. But:
A lot of arguments aren’t about Big Issues. One example I’ve seen: Alice and Bob disagreed about whether a politician had made an inflammatory gesture, based on contradictory news reports. Alice tracked down a recording and showed it to Bob, while noting a more plausible explanation for the gesture; this convinced Bob, even though it was a mere fact and not a literature review or philosophical treatise.
A lot of Big-Issue-adjacent arguments aren’t that sophisticated. If you read Scott’s post and then go argue with someone who says “evolution is just a theory”, it will often be the case that the disagreement is best resolved by just clarifying definitions, not by going hunting for deep generators.
An obvious reply is “well, those arguments are bad in their own right; select arguments and people-to-argue-with such that Scott’s pyramid holds, and you’ll be much better off”. I tentatively think that’s not the right approach, even though I agree that the examples I cited aren’t good topics for rationalists to spend time on. Mostly I just think it’s not true that smart people never believe really consequential, large-scale things for trivial-to-refute reasons. Top rationalists don’t know everything, so some of their beliefs will be persistently wrong just because they misunderstood a certain term, never happened to encounter a certain isolated fact, are misremembering the results of a certain study, etc. That can lead to long arguments when the blind spot is hard to locate.
If people neglect mentioning isolated facts or studies (or clarifying definitions) because doing so feels lowly or disrespectful, they may just end up wasting time. And I worry that people’s response to losing an argument is often to rationalize some other grounds for their original belief, in which case Scott’s taxonomy can encourage people to misidentify their own cruxes as deeper and more intractable than they really are. That temptation already exists because it’s embarrassing to admit that a policy or belief you were leaning on hard was so simple to refute.
(Possibly I don’t have a substantive disagreement with Scott and I just don’t like how many different dimensions of value the pyramid is collapsing. There’s a sense in which arguments toward the top can be particularly valuable, but people who like the pyramid shouldn’t skip over the necessary legwork at the lower levels.)