The motivated cognition here is pretty thick. Writing is influential when many people are influenced by it. It doesn’t have to be free of people who disagree with it to be influential, and it doesn’t even have to be correct.
How are we, who are far below his level, supposed to evaluate whether we should believe what Yudkowsky says if we are neither smart enough to do so nor able to subject his work to empirical criticism?
Level up first. I can’t evaluate physics research, so I just accept that I can’t tell which of it is correct; I don’t try to figure it out from the politics of physicists arguing with each other, because that doesn’t work.
Level up first. I can’t evaluate physics research, so I just accept that I can’t tell which of it is correct; I don’t try to figure it out from the politics of physicists arguing with each other, because that doesn’t work.
But what does this mean regarding my support of the SIAI? Imagine I were a politician who had no time to level up first but who had to decide whether some particle accelerator or AGI project should be financed at all, or should go ahead with full support and without further security measures.
Would you tell a politician to go and read the sequences, and if, after reading the publications, they don't see why AGI research is as dangerous as the SIAI portrays it, they should just forget about it and stop trying to figure out what to do? Or do you simply tell them to trust a fringe group that predicts a given particle accelerator might destroy the world when all the experts claim there is no risk?
Writing is influential when many people are influenced by it.
You talked about Yudkowsky’s influential publications. I thought you meant some academic papers, not the LW sequences. They indeed influenced some people, yet I don’t think they influenced the right people.
Your interpretation seems uncharitable. I find it unlikely that you have enough information to make a confident judgment that XiXiDu’s comment is born of motivated cognition to a greater extent than your own comments.
Moreover, I believe that even when such statements are true, one should avoid making them when possible, as they're easily construed as personal attacks which tend to spawn an emotional reaction in one's conversation partners, pushing them into an arguments-as-soldiers mode which is detrimental to rational discourse.
Moreover, I believe that even when such statements are true, one should avoid making them when possible
Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you’re going wrong, you won’t improve.
as they’re easily construed as personal attacks which tend to spawn an emotional reaction in one’s conversation partners
On this blog, any conversational partners should definitely not be construing anything as personal attacks.
pushing them into an arguments as soldiers mode which is detrimental to rational discourse.
On this blog, any person should definitely be resisting this push.
Strongly disagree. To improve, you need to know where to improve, and if people avoid telling you when and where you’re going wrong, you won’t improve.
I did not say that one should avoid telling people when and where they’re going wrong. I was objecting to the practice of questioning people’s motivations. For the most part I don’t think that questioning somebody’s motivations is helpful to him or her.
On this blog, any conversational partners should definitely not be construing anything as personal attacks.
I disagree. Sometimes commentators make statements which are pretty clearly intended to be personal attacks and it would be epistemically irrational to believe otherwise. Just because the blog is labeled as being devoted to the art of refining rationality doesn’t mean that the commentators are always above this sort of thing.
I agree with you insofar as I think that one should work to interpret comments charitably.
On this blog, any person should definitely be resisting this push.
I agree, but this is not relevant to the question of whether one should be avoiding exerting such a push in the first place.
I was objecting to the practice of questioning people’s motivations.
Not questioning their motivations; you objected to the practice of pointing out motivated cognition:
I find it unlikely that you have enough information to make a confident judgment that XiXiDu’s comment is born of motivated cognition … Moreover, I believe that even when such statements are true, one should avoid making them when possible
Pointing out that someone hasn’t thought through the issue because they are motivated not to—this is not an attack on their motivations; it is an attack on their not having thought through the issue. Allowing people to keep their motivated cognitions out of respect for their motivations is wrong, because it doesn’t let them know that they have something wrong, and they miss a chance to improve it.
Sometimes commentators make statements which are pretty clearly intended to be personal attacks and it would be epistemically irrational to believe otherwise.
To paraphrase steven, if you’re interested in winning disputes you should dismiss personal attacks, but if you’re interested in the truth you should dig through their personal attacks for any possible actual arguments. Whether or not it’s a personal attack, you ought to construe it as if it is not, in order to maximise your chances of finding truth.
this is not relevant to the question of whether one should be avoiding exerting such a push in the first place.
Agreed. I think the first two parts of our comments address whether one should exert such a push. I think you’re right, and this whole third part of our discussion is irrelevant.
It’s quite possible to be inaccurate about other people’s motivations, and if you are, then they will have another reason to dismiss your argument.
How do you identify motivated cognition in other people?
Not thinking something through could be habitual sloppiness, repeating what one has heard many times, or not thinking that a question is worthy of much mental energy rather than a strong desire for a particular conclusion. (Not intended as a complete list.)
Making a highly specific deduction from an absence rather than a presence strikes me as especially likely to go wrong.
How do you identify motivated cognition in other people?
Some of the same ways I see it in myself. Specifically, when dealing with others:
Opposed to easy (especially quick or instant) tests: strong evidence of motivated stopping.
All for difficult (especially currently-impossible) tests: moderate evidence of motivated continuing.
Waiting on results of specific test to reconsider or take a position: moderate evidence of motivated continuing.
Seemingly-obvious third alternative: very strong evidence of motivated stopping. Caveat! this one is problematic. It is very possible to miss third alternatives.
Opposed to plausible third alternatives: weak evidence of motivated stopping—strong evidence with a caveat and split, as “arguments as soldiers” can also produce this effect. Mild caveat on plausibility being somewhat subjective.
In the case of XiXiDu’s comment, focusing on Ben Goertzel’s rejection is an example of waiting on results from a specific test. That is enough evidence to locate the motivated continuing hypothesis¹, i.e., that XiXiDu does not want to accept the current best-or-accepted-by-the-community answer.
The questions XiXiDu posed afterwards seem to have obvious alternative answers, which suggests motivated stopping. He seems to be stopping on “Something’s fishy about Eliezer’s setup”.
¹: As well as “Goertzel is significantly ahead of the AI development curve” and “AGI research and development is a field with rigid formal rules on what does and doesn’t convince people”—the first is easily tested by looking at Ben’s other views; the second is refuted by many researchers in that field.
I recommend explaining that sort of thing when you say someone is engaging in motivated cognition.
It then seems more like a matter open to discussion and less like an insult.
Thanks for engaging with me; I now better understand where jimrandomh might have been coming from. I fully agree with Nancy Lebovitz here.