When I try to mentally simulate negative reader-reactions to the dialogue, I usually get a complicated feeling that’s some combination of:
Some amount of conflict aversion: Harsh language feels conflict-y, which is inherently unpleasant.
Empathy for, or identification with, the people or views Eliezer was criticizing. It feels bad to be criticized, and it feels doubly bad to be told ‘you are making basic mistakes’.
Something status-regulation-y: My reader-model here finds the implied threat to the status hierarchy salient (whether or not Eliezer is just trying to honestly state his beliefs), and has some version of an ‘anti-cheater’ or ‘anti-rising-above-your-station’ impulse.
How right/wrong do you think this is, as a model of what makes the dialogue harder or less pleasant to read from your perspective?
(I feel a little wary of stating my model above, since (a) maybe it’s totally off, and (b) it can be rude to guess at other people’s mental states. But so far this conversation has felt very abstract to me, so maybe this can at least serve as a prompt to go more concrete. E.g., ‘I find it hard to read condescending things’ is very vague about which parts of the dialogue we’re talking about, about what makes them feel condescending, and about how the feeling-of-condescension affects the sentence-parsing-and-evaluating experience.)
I think part of what I was reacting to is a kind of half-formed argument that goes something like:
My prior credence is very low that all these really smart, careful thinkers are making the kinds of stupid or biased mistakes they are being accused of.
In fact, my prior for the above is sufficiently low that I suspect it’s more likely that the author is the one making the mistake(s) here, at least in the sense of straw-manning his opponents.
But if that’s the case then I shouldn’t trust the other things he says as much, because it looks like he’s making reasoning mistakes himself or else he’s biased.
Therefore I shouldn’t take his arguments so seriously.
Again, this isn’t actually an argument I would make. It’s just me trying to articulate my initial negative reactions to the post.
Right. And according to Zvi’s posit above, a large part of the point of this dialogue is that that class of implicit argument is not actually good reasoning (acknowledging that you don’t endorse this argument).
More specifically, says my Inner Eliezer, it is less helpful to reason from or about one’s priors about really smart, careful-thinking people making or not making mistakes, and much more helpful to think directly about the object-level arguments, and whether they seem true.
“More specifically, says my Inner Eliezer, it is less helpful to reason from or about one’s priors about really smart, careful-thinking people making or not making mistakes, and much more helpful to think directly about the object-level arguments, and whether they seem true.”
When you say it’s much more helpful, do you mean it’s helpful for (a) forming accurate credences about which side is in fact correct, or just for (b) getting a much deeper understanding of the issues? If (b), then I totally agree. If (a), though, why would I expect to arrive at a more accurate credence about the true state of affairs than any of the people in this argument? If it’s because they’ve stated their arguments for all the world to see, so now anybody can go assess those arguments—why should I think I can assess those arguments better than Eliezer and his interlocutors? They clearly still disagree with each other despite reading all the same things I’m reading (and much more, actually). Add to that the fact that Eliezer is essentially saying in these dialogues that he has private reasoning and arguments that he cannot properly express and that nobody seems to understand. In that case we have no choice but to do a secondary assessment of how likely he is to have good arguments of that type—or else form our credences while completely ignoring the possible existence of a very critical argument in one direction.
Sometimes assessing the argument-maker’s cognitive abilities and access to relevant knowledge and expertise is in fact the best way to get the most accurate credence you can, even if it’s not ideal.
(This is all just repeating standard arguments in favor of modest epistemology, but still.)
I had mixed feelings about the dialogue personally. I enjoy the writing style and think Eliezer is a great writer with a lot of good opinions and arguments, which made it enjoyable.
But at the same time, it felt like he was taking down a strawman. Maybe you’d label it part of “conflict aversion”, but I tend to react negatively to take-downs of straw versions of people who agree with me.
To give an unfair and exaggerated comparison, it would be a bit like reading a take-down of a straw-rationalist in which the straw-rationalist occasionally insists such things as “we should not be emotional” or “we should always use Bayes’ Theorem in every problem we encounter.” It should hopefully be easy to see why a rationalist might react negatively to reading that sort of dialogue.