My thought wasn’t that he wouldn’t have anything true to say. It was that if he’s still defending good and evil as obviously existing, in that context, he’s far enough behind me on the issue that I can safely assume two things: that he doesn’t have anything major to teach me, and that what he says is untrustworthy enough (because there’s an obvious flaw in his thought process) that I’d have to spend an inordinate amount of time checking his logic before using even the parts that appear good. That time would be better spent elsewhere.
That’s not a good heuristic. There are a lot of people—Eliezer would name Robert Aumann, I think—who are incredibly bright, highly knowledgeable, and capable of conveying that knowledge, yet who are wrong about the answers to what some of us would consider easy questions.
Now, I know Berserk Buttons (warning: TV Tropes) as well as anyone, and I’ve dismissed some works of fiction which others have considered quite good (e.g. Alfred Bester’s The Demolished Man, the TV sitcom Modern Family) because they pushed those buttons, but when it comes to factual information, even stupid people can teach you.
(Granted, you may be right about the worthlessness of this particular speech to you—I haven’t watched it. But the heuristic is poor.)
The heuristic isn’t widely applicable, but I disagree about it being poor altogether. As I pointed out above, it’s not just that he defended good vs. evil. It’s that he did it in the context of a presentation on a subtopic of how we conceptualize the world. He may have things to teach me in other areas, obviously.
That’s why I compared it to someone bringing God into a discussion on ethics specifically. (Or, say, evolution.) That person may be brilliant at physics, but on the topic at hand, not so much.
It also occurs to me that this heuristic may be unusually useful to me because of my neurology. It does seem to take much more time and effort for me to deconstruct and find flaws in new ideas presented by others, compared to most people, and because of the extra time, there’s a risk of getting distracted and not completing the process. It’s enough of an issue that even a flawed heuristic to weed out bad memes is (or, feels—I’m not sure how one would actually test that) useful.
Okay, I’ll grant you that. It’s better to have a sufficiently strict filter that loses some useful information than a weaker filter which lets in garbage data. I would presume (or, at least, advise) that you make a particular effort to analyze data which you previously rejected but which remains widely discussed, however—an example from my own experience being Searle’s Chinese Room argument. Such items should be uncommon enough.
Agreed.