Dark Side Epistemology
If you once tell a lie, the truth is ever after your enemy.
I have discussed the notion that lies are contagious. If you pick up a pebble from the driveway, and tell a geologist that you found it on a beach—well, do you know what a geologist knows about rocks? I don’t. But I can suspect that a water-worn pebble wouldn’t look like a droplet of frozen lava from a volcanic eruption. Do you know where the pebble in your driveway really came from? Things bear the marks of their places in a lawful universe; in that web, a lie is out of place.1
What sounds like an arbitrary truth to one mind—one that could easily be replaced by a plausible lie—might be nailed down by a dozen linkages to the eyes of greater knowledge. To a creationist, the idea that life was shaped by “intelligent design” instead of “natural selection” might sound like a sports team to cheer for. To a biologist, plausibly arguing that an organism was intelligently designed would require lying about almost every facet of the organism. To plausibly argue that “humans” were intelligently designed, you’d have to lie about the design of the human retina, the architecture of the human brain, the proteins bound together by weak van der Waals forces instead of strong covalent bonds . . .
Or you could just lie about evolutionary theory, which is the path taken by most creationists. Instead of lying about the connected nodes in the network, they lie about the general laws governing the links.
And then to cover that up, they lie about the rules of science—like what it means to call something a “theory,” or what it means for a scientist to say that they are not absolutely certain.
So they pass from lying about specific facts, to lying about general laws, to lying about the rules of reasoning. To lie about whether humans evolved, you must lie about evolution; and then you have to lie about the rules of science that constrain our understanding of evolution.
But how else? Just as a human would be out of place in a community of actually intelligently designed life forms, and you have to lie about the rules of evolution to make it appear otherwise, so too beliefs about creationism are themselves out of place in science—you wouldn’t find them in a well-ordered mind any more than you’d find palm trees growing on a glacier. And so you have to disrupt the barriers that would forbid them.
Which brings us to the case of self-deception.
A single lie you tell yourself may seem plausible enough, when you don’t know any of the rules governing thoughts, or even that there are rules; and the choice seems as arbitrary as choosing a flavor of ice cream, as isolated as a pebble on the shore . . .
. . . but then someone calls you on your belief, using the rules of reasoning that they’ve learned. They say, “Where’s your evidence?”
And you say, “What? Why do I need evidence?”
So they say, “In general, beliefs require evidence.”
This argument, clearly, is a soldier fighting on the other side, which you must defeat. So you say: “I disagree! Not all beliefs require evidence. In particular, beliefs about dragons don’t require evidence. When it comes to dragons, you’re allowed to believe anything you like. So I don’t need evidence to believe there’s a dragon in my garage.”
And the one says, “Eh? You can’t just exclude dragons like that. There’s a reason for the rule that beliefs require evidence. To draw a correct map of the city, you have to walk through the streets and make lines on paper that correspond to what you see. That’s not an arbitrary legal requirement—if you sit in your living room and draw lines on the paper at random, the map’s going to be wrong. With extremely high probability. That’s as true of a map of a dragon as it is of anything.”
So now this, the explanation of why beliefs require evidence, is also an opposing soldier. So you say: “Wrong with extremely high probability? Then there’s still a chance, right? I don’t have to believe if it’s not absolutely certain.”
Or maybe you even begin to suspect, yourself, that “beliefs require evidence.” But this threatens a lie you hold precious; so you reject the dawn inside you, push the Sun back under the horizon.
Or you’ve previously heard the proverb “beliefs require evidence,” and it sounded wise enough, and you endorsed it in public. But it never quite occurred to you, until someone else brought it to your attention, that this proverb could apply to your belief that there’s a dragon in your garage. So you think fast and say, “The dragon is in a separate magisterium.”
Having false beliefs isn’t a good thing, but it doesn’t have to be permanently crippling—if, when you discover your mistake, you get over it. The dangerous thing is to have a false belief that you believe should be protected as a belief—a belief-in-belief, whether or not accompanied by actual belief.
A single Lie That Must Be Protected can block someone’s progress into advanced rationality. No, it’s not harmless fun.
Just as the world itself is more tangled by far than it appears on the surface, so too there are stricter rules of reasoning, constraining belief more strongly, than the untrained would suspect. The world is woven tightly, governed by general laws, and so are rational beliefs.
Think of what it would take to deny evolution or heliocentrism—all the connected truths and governing laws you wouldn’t be allowed to know. Then you can imagine how a single act of self-deception can block off the whole meta level of truth-seeking, once your mind begins to be threatened by seeing the connections. Forbidding all the intermediate and higher levels of the rationalist’s Art. Creating, in its stead, a vast complex of anti-law, rules of anti-thought, general justifications for believing the untrue.
Steven Kaas said, “Promoting less than maximally accurate beliefs is an act of sabotage. Don’t do it to anyone unless you’d also slash their tires.” Giving someone a false belief to protect—convincing them that the belief itself must be defended from any thought that seems to threaten it—well, you shouldn’t do that to someone unless you’d also give them a frontal lobotomy.
Once you tell a lie, the truth is your enemy; and every truth connected to that truth, and every ally of truth in general; all of these you must oppose, to protect the lie. Whether you’re lying to others, or to yourself.
You have to deny that beliefs require evidence, and then you have to deny that maps should reflect territories, and then you have to deny that truth is a good thing . . .
Thus comes into being the Dark Side.
I worry that people aren’t aware of it, or aren’t sufficiently wary—that as we wander through our human world, we can expect to encounter systematically bad epistemology.
The “how to think” memes floating around, the cached thoughts of Deep Wisdom—some of it will be good advice devised by rationalists. But other notions were invented to protect a lie or self-deception: spawned from the Dark Side.
“Everyone has a right to their own opinion.” When you think about it, where was that proverb generated? Is it something that someone would say in the course of protecting a truth, or in the course of protecting themselves from the truth? But people don’t perk up and say, “Aha! I sense the presence of the Dark Side!” As far as I can tell, it’s not widely realized that the Dark Side is out there.
But how else? Whether you’re deceiving others, or just yourself, the Lie That Must Be Protected will propagate recursively through the network of empirical causality, and the network of general empirical rules, and the rules of reasoning themselves, and the understanding behind those rules. If there is good epistemology in the world, and also lies or self-deceptions that people are trying to protect, then there will come into existence bad epistemology to counter the good. We could hardly expect, in this world, to find the Light Side without the Dark Side; there is the Sun, and that which shrinks away and generates a cloaking Shadow.
Mind you, these are not necessarily evil people. The vast majority who go about repeating the Deep Wisdom are more duped than duplicitous, more self-deceived than deceiving. I think.
And it’s surely not my intent to offer you a Fully General Counterargument, so that whenever someone offers you some epistemology you don’t like, you say: “Oh, someone on the Dark Side made that up.” It’s one of the rules of the Light Side that you have to refute the proposition for itself, not by accusing its inventor of bad intentions.
But the Dark Side is out there. Fear is the path that leads to it, and one betrayal can turn you. Not all who wear robes are either Jedi or fakes; there are also the Sith Lords, masters and unwitting apprentices. Be warned; be wary.
As for listing common memes that were spawned by the Dark Side—not random false beliefs, mind you, but bad epistemology, the Generic Defenses of Fail—well, would you care to take a stab at it, dear readers?
1Actually, a geologist in the comments says that most pebbles in driveways are taken from beaches, so they couldn’t tell the difference between a driveway pebble and a beach pebble, but they could tell the difference between a mountain pebble and a driveway/beach pebble (http://lesswrong.com/lw/uy/dark_side_epistemology/4xbv). Case in point . . .
The most dangerous dark side meme I can think of is the idea of sinful thoughts: that questioning one’s faith is itself a sin even if not acted upon. A close second is “don’t try to argue with the devil—he has more experience at it than you”.
Especially when it’s explicitly enforced, à la the death penalty for leaving Islam in Islamic countries.
Not all who wear robes are either Jedi or fakes
What do you mean by “wear robes”? Could we move away from references to fictional stories?
Are you trying to argue against the use of metaphor for argument? The fact that Star Wars is a fiction doesn’t make analogies made with its concepts wrong.
To clarify the phrase that you take issue with, “robes” from what I can gather signifies memetic authority, like scientists or priests or marketers who have dominion over a region of thought patterns—as the Jedi wield the Force.
Eliezer,
I agree with you as regards people deceiving themselves. But I disagree regarding people who deceive others on purpose. Some of these people can be very smart, know very well what they are doing, and know exactly which biases they are playing on. They have elevated the art of deception to a science (ohhh yes, read marketing books as an example). Otherwise a superintelligence would become stupid in the process of lying to the human operator with the intention of getting out of the box.
- Faith, i.e. unconditional belief, is good. It’s like loyalty; questioning beliefs is like betrayal.
- The saying “Stick to your guns”: changing your mind is like deserting your post in a war; sticking to a belief is like being a heroic soldier.
- The faithful: i.e. us, we are the best, God is on our side.
- The infidels: i.e. them, sinners, barely human, or not even.
- God: an infinitely powerful alpha male. Treat him as such, with all the implications . . .
- The devil and his agents: they are always trying to seduce you to sin. Any doubt is evidence that the devil is seducing you to sin and succeeding. Anyone opposed to your beliefs is cooperating with, or being influenced by, the devil.
- Assassination fatwas: whacking people who are anti-Islam is the will of Allah.
- A sexually satisfying lifestyle is bad: this makes people more angsty (especially young men). This angst is your fault and it’s sin. To be less angsty you should be less sinful, ergo fight your sexual urges. And so the cycle of desire, guilt, angst, and confusion continues.
- No masturbation: see above.
- You are born in debt to Jesus because he died for your sins 2,000 years ago.

That’s all I could think of right now.
The endorsement of information cascades: claiming that X is indisputably true in the name of philosophical majoritarianism, and thus biasing research and statements to foster belief in X is desirable as a way to foster true beliefs (where the majority only exists because of such biased efforts).
Just to be clear, I’m not looking for random false beliefs defended by Dark Side epistemology, I’m looking for Dark Side epistemology itself—the Generic Defenses of Fail.
Roland, these are the Sith masters.
In general, beliefs require evidence.
In general? Which beliefs don’t?
Think of what it would take to deny evolution or heliocentrism
Or what it would take to prove that the Moon doesn’t exist.
As for listing common memes that were spawned by the Dark Side—would you care to take a stab at it, dear readers?
- Cultural relativity.
- Such-and-such is unconstitutional.
- The founding fathers never intended . . . (various appeals to stick to the founding fathers’ original vision).
- Be reasonable (moderate).
- Show respect for your elders.
- It’s my private property.
- _ is human nature.
- Don’t judge me.
- _ is unnatural and therefore wrong.
- _ is natural and therefore right.
- We need to switch to alternative energies such as wind, solar, and tidal.
- The poor are lazy.
- The entire American political vocabulary (bordering on Orwellian).
- Animal rights.

. . . and much more.
“‘In general, beliefs require evidence.’ In general? Which beliefs don’t?”
This is a language problem. “In general” or “generally” to a scientist/mathematician/engineer means “always,” whereas in everyday speech it means “sometimes.”
For example I could tell you that a fence with 2 sections has 3 posts ( I=I=I ), or I could tell you that “in general” a fence with N sections has N+1 posts.
Where N >= 3 the fence can (and often does) have N posts.
Ya, if it wraps in on itself, for sure.
Or if the farmer uses a tree instead. ;)
“How many posts does a fence have, if you call the tree a post?”
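The counting rules in this exchange can be sketched directly. A toy check (the helper name is mine, not from the thread):

```python
# Toy check of the fence-post rule discussed above (helper name is mine):
# an open fence with N sections has N + 1 posts ( I=I=I ),
# while a fence that closes into a loop shares its first and last post,
# so it needs only N.
def posts_needed(sections, closed=False):
    return sections if closed else sections + 1
```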
“In general” does not mean “always”; it means “by default.” That is not the same thing. Rectangles, in general, do not have adjacent sides of equal length, except for squares, which do. However, there must be reasons for excluding something from a default, and a random false belief is unlikely to find such reasons (not to mention that going from the belief to finding such reasons is backwards).
“We need to switch to alternative energies such as wind, solar, and tidal. The poor are lazy … Animal rights”
I don’t think these fit. Regardless of whether you agree with them, they are specific assertions, not general claims about reasoning with consistently anti-epistemological effects.
Actually, “the poor are lazy” and “animal rights” seem to fit to me. Animal rights were a hard sell for me, but thinking about it I had to come to the conclusion that the bottom line “we should treat animals well” was probably motivated either by “I don’t want to eat sick food” or by “Awww, cuute!”, not by “I believe that animals in general need rights, because . . .” Because what? They react faster to stimuli than plants? They show complex behaviour? In that case, do you not kill mosquitoes? Do you want rights for some fungi as well? How about programs that show complex behaviour? It seems like this was written after the bottom line.
Similarly, since we do not live in an equal world, simply saying that the poor are lazy makes sense if your motivation is to not feel guilty about not trying to help them.
Alternative energies, however . . . I think time proved our dear OP wrong on that front. We may not need any one of these specifically, but we need to get away from fossil fuels, and until we have fusion or solar farms in orbit, alternative energies are the longest-term option. Even nuclear runs out of fuel in a relatively short amount of time.
The posterior odds are the prior odds times the likelihood ratio, so the higher the prior probability, the less evidence you need. If there’s a lottery with one million numbers, and you have no evidence for anything, you’ll think there’s a 0.0001% chance of it getting 839772 exactly, a 50% chance of it getting 500000 or less, and a 99.9999% chance of it getting something other than 839772. Thus, you can be pretty sure it won’t land on 839772 even without evidence.
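The update rule in this comment can be sketched numerically. A minimal illustration of the odds form of Bayes’ theorem (the function name is mine, not from the thread):

```python
# Odds form of Bayes' theorem: posterior odds = prior odds * likelihood ratio.
# bayes_update is a hypothetical helper name, not anything from the thread.
def bayes_update(prior, likelihood_ratio):
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# A uniform prior over 1,000,000 lottery numbers, with no evidence
# (likelihood ratio 1), stays at one in a million.
p_no_evidence = bayes_update(1 / 1_000_000, 1.0)

# Strong evidence (likelihood ratio 1,000,000) lifts that same tiny prior
# to roughly even odds, illustrating the trade-off between prior and evidence.
p_strong = bayes_update(1 / 1_000_000, 1_000_000.0)
```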
I think knowing a prior constitutes evidence. If you know that the lottery has one million numbers, that is a piece of evidence.
You need a prior to take evidence into account. If the prior is evidence, then what is the prior?
Hm… You make a good point. I’m not sure I understand this conceptually well enough to have any sort of coherent response.
The ultimate prior is maximum entropy, aka “idk”, aka “50/50: either happens or not”. We never actually have it, because we start gathering evidence for how the world is before our brains even form enough to make any links between it.
That prior doesn’t work when there is a countable number of hypotheses, aka “I’ve picked a number from {0,1,2,...}. Which?” or “Given that the laws of physics can be described by a computer program, which?”.
Your knowledge of the rules of probability is evidence. It’s not evidence specific to this question, but it is evidence for this question, among others.
Your link is broken.
Well, cultural relativity is a fact, as there is no objective morality; people either justify their actions via tradition, or simply follow it when they don’t want to think. Universal rights for life would be great (no less than human rights, at least; I’m part legalist and part ecocentrist, and want sentience to remain in order to save the biosphere from the geological and astronomical events that will arrive sooner than a new Homo sapiens could evolve, if the current one goes extinct before making AGI). Everything else, I upvote.
In the words of the great sage Emo Phillips, “I used to think that the brain was the most wonderful organ in my body. Then I realized who was telling me this.”
I thought of some more.
- There is a destiny/God’s plan/reason for everything: i.e., some powerful force is making things the way they are, and it all makes sense (in human terms, not cold heartless math). That means you are safe, but don’t fight the status quo.
- Everything is connected by “energy” (mystically): you, or special/chosen people, might be able to tap into this “energy.” You might glean information you normally shouldn’t have, or gain some kind of special powers.
- Scientists/professionals/experts are “elitists.”
- Mystery is good: it makes life worthwhile, and appreciating it makes us human. As opposed to destroying mystery being good.
That’s it for now.
Relax. It will be over soon.
We’re past that now.
X is supernatural.
X is natural.
You’re correct, but it will make people uncomfortable.
You’re smart. You should go to college.
Why do you consider
You’re smart. You should go to college.
among these? It seems like the odd one out.
I’ve had forms of this said to me; it basically means “I’m losing the debate because you personally are smart, not because I’m wrong. Whichever authority I listen to in order to reinforce my existing beliefs would surely crush all your arguments. So stop assailing me with logic...”
It’s Dark Side because it surrenders personal understanding to authority, and treats it as a default epistemological position.
Dark side or not it is quite often valid. People who do not trust their ability to filter bullshit from knowledge should not defer to whatever powerful debater attempts to influence them.
It is no error to assign a low value to p(the conclusion expressed is valid | I find the argument convincing).
Isn’t “Dark Side” approximately “effective, but dangerous”?
No, and argument from authority can be a useful heuristic in certain cases, but at least you’d want to take away the one or two arguments you found most compelling and check them out later. In that sense, this is borderline.
Usually, however, this tactic is employed by people who are just looking for an excuse to flee into the warm embrace of an unassailable authority, often after scores of arguments they made were easily refuted. It is a mistake to give a low value to p(my position is mistaken | 10 arguments I have made have been refuted to my satisfaction in short order).
I’m pretty confident that “”Everyone has a right to their own opinion.” was generated by people trying to protect themselves from people who were trying to protect themselves from the truth.
We really need some talk about what the consequences of an AI with access to its own source code and self-protecting beliefs would be.
I’m looking for Dark Side epistemology itself—the Generic Defenses of Fail.
In that case—association, essentialism, popularity, the scientific method, magic, and what I’ll call Past-ism.
Wait a second—the scientific method? How? It may not be the most efficient way to get to the truth, and it may not take into account Bayes’s theorem, which could speed it up, but I don’t see how the scientific method is epistemologically (is that a word?) wrong.
Too late—it’s been 3 and a half years.
“Epistemologically” is a word, but it’s hard to tell when to instead say “epistemically.”
Somewhat amusing, but it should not be surprising that most of the commentary on old sequence posts is people reading them and engaging with the ideas for the first time.
That’s ridiculous: whenever I want to comment, I always observe that I am reading 4-year-old arguments and keep on scrolling.
Necro-commenting isn’t usually frowned upon around here.
I’d say rules against necro-commenting are a good tool for the Dark Side, ensuring no discussion progresses beyond a single burst of activity and wasting a lot of time repeating the same arguments again and again.
An interesting claim I came across recently is that most people view the Internet as opening up the past, but that isn’t quite right—the past was always accessible, through books and stories and so on. What the internet does that is strange is extend the present into the past, so that content created in 2001 or 2012 or so on can be indistinguishable from content created in 2016, if the formatting, context, or dynamics are the same.
That is, one doesn’t expect Jane Austen to return any fan letters, but sometimes when you respond to a four-year old comment, you get a response within a day.
I don’t know if that’s right. The past was always accessible to some degree, but never before as an overwhelming exhaustive array of minutiae. It’s precisely because of that level of detail that this past looks so much like the present.
Yes, it’s the time language that got me.
We are missing something. Humans are ultimately driven by emotions. We should look at which emotions a belief taps into in order to understand why people seek or avoid certain beliefs.
I’m not sure what emotion it is, but I would hypothesize that it comes from tribal survival habits. Group cohesion was existentially important in the tribal prehuman/early-human era. Being accurate and correct with your beliefs was important, but not as important as sharing the same beliefs as the tribe.
So we developed methods of fitting into our tribes despite it requiring us to believe paradoxical and irrational things that should be causing cognitive dissonance.
A particular flavor of “if it ain’t broke, don’t fix it” that points to established traditions as “having worked for ages”. Playing off the fear of the unknown? The meme of traditions in general adds weight to many of these.
I second “cultural relativity” as being an extension of “everyone having a right to their opinion”, but in both cases point to them as also being tools to find things in one’s own life that are arbitrary and in need of evaluation on a more objective basis.
Isn’t the scientific method a servant of the Light Side, even if it is occasionally a little misguided?
What kind of thing do you mean by “occasionally a little misguided”? Are you referring to something bad about it because humans (and all our mental frailties) were using it, or something bad that would happen no matter what kind of creature tried to use it, even ones that had ways around human-like mental frailties?
(I see this comment is from 7 years ago, and I will understand completely if no response comes.)
@Eliezer: Roland, these are the Sith masters.
Ok, got your point. One thing I worry about, though, is how much those movie analogies end up inducing biases in you and others.
@Eliezer:
To drive home my earlier point: the whole idea of Jedi vs. Sith reflects a Manichaean worldview (good vs. bad). Isn’t this a simplification?
Isn’t the scientific method a servant of the Light Side, even if it is occasionally a little misguided?
Too restrictive. Science is not synonymous with the hypothetico-deductive method, nor is there any sort of thing called the “scientific method” from which scientists draw their authority on a subject. Neither is it a historically accurate description of how science has done its work. Read up on Feyerabend.
Science is inherently structureless and chaotic. It’s whatever works.
Aehm, was Feyerabend a scientist?
Eliezer writes, “In general, beliefs require evidence.”
To which Peter replies, “In general? Which beliefs don’t?”
Normative beliefs (beliefs about what should be) don’t, IMHO. What would count as evidence for or against a normative belief?
In isolation, almost certainly nothing, but you can play normative beliefs against one another. If you can demonstrate that a person’s normative belief is inconsistent with another of their normative beliefs, that demonstrates that one of them must be ‘false’. You can’t check them against reality directly, but they must still be consistent.
Evidence that would substantially inform a simulation of the enforcement of those beliefs. For example, history provides pretty clear evidence of the ultimate result of fascist states/dictatorships, partisan behaviour, and homogeneous group membership. The qualities found in this projected result are highly likely to conflict with other preferences and beliefs.
At that point, the person may still say ‘Shut up, I believe what I want to believe.’ But that would only mean they are rejecting the evidence, not that the evidence doesn’t apply.
How about “Comparing Apples and Oranges,” or “How Dare You Compare,” a misrepresentation of the scope of analogies. For a recent example, see the response to John Lewis’s drawing an analogy between certain aspects of the McCain campaign and those of George Wallace—the response is not a consideration of the scope and aptness of the analogy but a rejection that any analogy at all can be drawn between two subjects when one is so generally recognized to be Evil. The McCain campaign does not attempt to differentiate the aspects under analogy (rhetoric and its potential for the fomentation of violence) from those of Wallace, but rather condemns the idea that the analogy can be considered at all. Under the epistemology of Fail, any difference between two subjects of comparison is enough to reject its validity, regardless of the relevance of the distinction to the actual comparison being drawn. See also: Godwin’s Law.
Some self-entitled males like to use this one, particularly in defense of the notion that one has an inviolate right to make sexual advances toward other people regardless of circumstance or outward sign. Sooner or later, after demonstrating how each of their justifications also justifies sexual assault, it leads to “how dare you compare me to a rapist,” which is where the fun begins. After I am done epistemologically belittling them, I point out that the obvious fact that sexual assault is known to be bad is a manifestation of general principles of ethical interaction among humans, and not a special case handed down from a God who says that everything not expressly forbidden by a law is good.
Somehow I doubt that “regardless of circumstance or outward sign” is their wording and not yours.
(Edit) Also, the converse of “not everything that is not expressly forbidden by a law is good” is “not everything that causes the slightest incidental harm is unforgivable babyeating evil”.
Animal rights???
You’re smart. You should go to college???
Essentialism???
Normative beliefs (beliefs about what should be) don’t [require evidence], IMHO. What would count as evidence for or against a normative belief?
That’s correct if you don’t consider pure reason to be evidence—but I consider it to be so. So morality and ethics and all these normative things are, in fact, based on evidence—although it is a mix of abstract evidence (reason) with concrete evidence (empirical data). If you base your morality, or any normative theory (how the world should be), on anything other than how things actually are (including mathematics), you necessarily have to ascribe some supernatural property to it.
One giant category of dark side reasoning looks like “That idea is _” Where the idea is an “is” (not a “should”) and _ is any negative affect word with a meaning other than “untrue”.
Examples include {unpatriotic, communist, capitalist, liberal, conservative, provincial, any-demonym-goes-here, cultish, religious, atheistic, sinful, evil, dangerous, repugnant, elitist, condescending, out-of-touch, politically incorrect, offensive, argumentative, hateful, cowardly, fool-hardy, inappropriate, indecent, unsettling, lewd, silly, idiotic, new-fangled, old-fashioned, staid, dead, uncool, too simple, too complicated} and many more.
Important note: The exception to this rule is if the speaker goes on to show how _ is evidence about the truth of the proposition. If you can say why something is idiotic, that’s fine. A seasoned scientist has the right to say “that theory looks too complicated” if they have many examples of surprisingly simple theories explaining things well, but a creationist doesn’t earn the right to accuse the theory of evolution of being “too complicated” until they explain what it is they mean by “too complicated” and what that has to do with the idea being wrong.
To avoid concluding that an idea is true, the Dark Side’s first line of defense is to avoid even considering whether the idea is true. Those who are good enough at suppressing contradictions can simply save themselves the trouble of building up “a vast complex of anti-law, rules of anti-thought”. After all, building such a complex is a risky business from the standpoint of protecting the precious belief. The larger the complex gets, the more close scrapes it could have with real sensory experience.
Just as a murderer ties the corpse of his victim to a heavy stone before throwing it into the water, so too do victims of the Dark Side tie ideas they want to dispose of to negative affect words. It really does make them less likely to resurface.
The same caution applies to tying positive affect words to desired ideas.
Ideas are also often dismissed as merely “politically correct,” the implication being that the speaker is a hypocrite. I suppose you can count that as a particular case of “cowardly.”
Saying ‘There is lots of evidence for it’ When in fact there is little to none. I guess the epistemology is ‘It is ok to believe something if you believe there is evidence to support it.’
Creationists are told the fossil record supports X and Y, and they run with it.
The concept of different epistemological magisteria. E gave an example of it in this post (and also in the post about scientists outside the laboratory), but his example is just the tip of the iceberg. This failure of rationality doesn’t manifest itself explicitly most of the time, but is engaged in implicitly by almost everybody that I know that isn’t into hardcore rationality.
It’s definitely engaged in by people who are into, or at least cheer for, science and (traditional) rationality and/or philosophy. It’s the double standard between what epistemological standards you explicitly endorse, and what are the actual beliefs on the basis of which you act. Acting as if the sun will rise tomorrow even though you endorse radical scepticism, accepting what Richard Dawkins says on his authority while seeking out refutations for creationist arguments. I think one big reason for this is that people who are interested in this sort of thing are exposed too much to deductive reasoning and hardly at all to rigorous inductive reasoning. Inductive reasoning is the practical form of reasoning that actually works in the real world (many fallacies of deductive reasoning are actually valid probabilistic inferences), and we all have to engage in it explicitly or implicitly to cope in the world. But having been exposed only to the “way” of deductive rationality, and warned against its fallacies, people may come to experience a cognitive dissonance between which epistemological techniques are useful in real life and which they ought to be using—and therefore to see science, rationality and philosophy as disconnected from real life, things to be cheered for and entertaining diversions. Such people don’t hold every part of their epistemological self under the same level of scrutiny, because implicitly they believe that their methods of scrutinizing are imperfect. I recognize my past self in this, but not my present self, who knows about evo psych, inductive reasoning etc. and has seen that these methods actually work and can therefore criticize his own epistemological habits using the full force of his own rationality...
This might concern mistaken, well-meaning people more than the actual Dark Side but it seems to me to be an important point anyway.
A few general schemas:
“True for”, as in, “That may be true for you, but not for me. We each choose our own truths.”
“I feel that X.” Every sentence of this form is false, because X is an assertion about the world, not a feeling. Someone saying “I feel that X” in fact believes X, but calling it a feeling instead of a belief protects it from refutation. Try replying “No you don’t”, and watch the explosion. “How dare you try to tell me what I’m feeling!”
Write obscurely.
Never explicitly state your beliefs. Hint at them in terms that the faithful will pick up and applaud, but which give nothing for the enemy to attack. Attack the enemy by stating their beliefs in terms that the faithful will boo, while giving the enemy nothing to dispute.
Ignore the entire machinery of rationality. Treat all human interaction as nothing more than social grooming or status games in a tribe of apes.
Argument by innuendo. Politicians love this. Imply, then deny. “I never said that.”
All good stuff. Perhaps dark side epistemology is mainly about behaviors, not beliefs? A list of behaviors I noticed while speaking to climate science deniers:
First and foremost, they virtually never admit that they got anything wrong, not even little things. (If you spot someone admitting they were wrong about something, congrats! You may have stumbled upon a real skeptic!)
They don’t construct a map of the enemy’s territory: they have a poor mental model of how the climate system works. After all, they are taught “models can’t be trusted,” even though all science is built on models of some sort. Instead they learn a list of stories, ideas and myths, and they debate mainly by repeating items on their list.
They often ignore your most rock solid arguments, as if you’d said nothing at all, and they attack whatever they perceive to be your weakest point.
They think they are “scientific”. I was astonished at the ability of one of them to sound sciencey… but then I saw how GPT2 could say plausible things without really understanding what it was saying, and I saw Eliezer talking about the “literary genre” of science, so I guess that’s the answer—certain people somehow pick up and mimic the literary genre of science without understanding or caring about its underlying substance.
They lack self-awareness. You’ll never ever hear them say “Okay, I know this might sound crazy, but those thousands of climate scientists are all wrong. I can’t blame you for agreeing with a supermajority, but if you’ll just hear me out, I will explain how I, a non-scientist, can be certain the contrarians are right. Just let me know if I’ve made some mistake in my reasoning here...” (which reminds me of an interesting idea I had after reading about philosophical zombies… is it possible that people who seem to lack self-awareness literally lack self-awareness? That they are zombies?)
So, they are not introspective: they’re not thinking about how they think. So they haven’t thought about the Dunning-Kruger effect (meme!), and confirmation bias is something that happens to other people. “Motivated reasoning? Not me! So what if I do? Everybody does it…”
It’s as if schoolyard irony is an important defense mechanism for them. They take accusations often used against them, and toss them at detractors. They’ll say you’re in a “cult” or “religion” for believing humans cause warming, that you lie, fudge data, are “closed-minded”, etc. One guy called me a “denier” (in denial that it’s all a hoax) even though I had not called him a denier. In general you can expect attacks on your character even if you were careful not to attack them, yet these attacks will seem like plausible descriptions of the attacker. Similarly, they may dismiss talk of the scientific literature or consensus as “appeals to authority”, apparently oblivious to the authorities (Rush Limbaugh, Roy Spencer, and many others) upon which their own opinion is based. Last but not least, they’ll complain of “politicizing the science” while politicizing the science.
Lack of knowledge seems to satisfy them as a knowledge substitute — e.g. “I’ve not seen evidence for X, so I can safely assume X is false” or “I’ve not seen evidence against X, so I can safely assume X is true.” Missing knowledge somehow provides not merely hope, but great confidence that the experts are wrong.
When you have reached the point where you’re considering whether your opponents are literally zombies without any subjective consciousness… could it be time to consider whether your own thinking has gone wrong somewhere?
Lacking self-awareness (in the sense described above: habitually declining to engage in metacognitive thinking) is different from lacking consciousness/qualia. I am not claiming that they lack the latter. But, I do wonder if there have been any investigations into whether qualia are universal among humans, and I wonder how one would go about detecting qualia (it’s vaguely like a Turing test; a human without qualia would likely not intentionally deceive the tester the way a computer might during a Turing test, but would of course be unaware that there is any difference between his/her experience and anyone else’s, and can be expected to deny any difference exists.)
I don’t think the proponents of qualia as metaphysical would agree that such a test is possible in theory—otherwise you could put someone in an MRI scan, show him a red square, monitor for activity in his visual cortex and wait for him to confirm he sees “the redness”. This should be enough to conclude some “redness” related experience has occurred in the subject’s brain (since qualia are supposed to be individual, differences in experience are expected—it doesn’t have to be exactly the same). And yet the question of philosophical zombies remains (at least according to some philosophers).
If I take a digital picture, I can convert the file to BMP format and extract the “red” bits, but this is no evidence that my phone has qualia of redness. An fMRI scanning a brain will have the same problem. The idea that everyone has qualia is inductive: I have qualia (I used to call it my “soul”), and I know others have it too since I learned about the word itself from them. I can deduce that maybe all humans have it, but it’s doomed to be a “maybe”. If someone were to invent a test for qualia, perhaps we couldn’t even tell if it works properly without solving the hard problem of consciousness.
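The “extract the red bits” step really is just a mechanical transformation over numbers, which is the point: nothing in it requires (or produces) an experience of redness. A minimal sketch, with made-up pixel values for illustration:

```python
# Extracting the "red" channel from RGB pixel data is a purely mechanical
# operation over integers. Nothing here experiences redness; the pixel
# values below are invented sample data, not taken from any real image.

def red_channel(pixels):
    """Return only the red component of each (r, g, b) pixel."""
    return [r for (r, g, b) in pixels]

# A tiny 2x2 "image" as a flat list of RGB tuples (hypothetical values).
image = [(255, 0, 0), (128, 64, 32), (0, 255, 0), (10, 20, 30)]

print(red_channel(image))
```

A phone doing the same thing at megapixel scale is still just shuffling integers labeled “red”, which is why the operation alone can’t serve as evidence of qualia.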
To avoid semantic confusion, here is the Wikipedia definition of qualia: “In philosophy and certain models of psychology, qualia (/ˈkwɑːliə/ or /ˈkweɪliə/; singular form: quale) are defined as individual instances of subjective, conscious experience.” https://en.m.wikipedia.org/wiki/Qualia
You are skipping the part where we receive confirmation from the patient that he sees the redness. This, combined with the fMRI, should be enough to prove the colour red has been experienced (i.e. processed) by the patient’s brain.
Now one question remains—was this a conscious experience? (Thank you for making me clarify this, I missed it in my previous comment!)
I propose that any meaningful philosophical definition of consciousness related to humans should cover the medical state of consciousness (i.e. the patient follows a light, knows the day of the week, etc.) If it doesn’t, I would rather taboo “consciousness” and discuss “the mental process of modeling the environment” instead.
Whatever the definition of consciousness, as long as it relates to the function of a healthy human brain, it entails qualia.
However, if the definition of consciousness doesn’t include what’s occurring in the human brain, why bother with it?
I’ve heard people speaking of a soul before—it did not convince me they (or I) have one. I would happily grant them consciousness instead.
Even without solving the hard problem of consciousness, as long as we agree that consciousness is a property the human mind has, the test can be administered by a paramedic with a flashlight.
We will need the solution when we try to answer if our phone/dog/A.I. is conscious, though.
(I recently worked out a rudimentary solution (most probably wrong), which relies heavily on Eliezer’s writings on the question of free will later in the Sequences. I am reluctant to share it here, since it would spoil Eliezer’s solution and he advises people to try working it out for themselves first. I could PM or ROT13 in case of interest.)
Qualiaphiles don’t think qualia are something other than a property the mind has, they think they are not open to any obvious third-party inspection, like shining a flashlight.
If you define consc. as the thing EMT’s can check with a flashlight, all you have done is left qualia out of the definition: you haven’t solved any problem of qualia.
Yes. Once I define qualia as “conscious experience”, I necessarily have to leave it out of the definition of “consciousness” (whatever that may be).
My point is that only the question of consciousness remains. And consciousness is worth talking about only if human brains exhibit it.
I am not trying to solve the question of qualia, I am trying to dissolve it as improper.
P.S. Do you mind tabooing “qualia” in any further discussion? This way I can be sure we are talking about the same thing.
Again, as a non-illusionist, I disagree that physiological consciousness necessarily implies qualia (or that an AGI necessary has qualia). It seems merely to be a reasonable assumption (in the human case only).
Ok. I am still unsure of your position. Do you think other people have experiences, but we cannot say if those are conscious experiences? Or are you of the opinion we cannot say anyone has any kind of experiences? Could you please taboo “qualia”, so I know we are not talking about different things entirely?
Well, the phrase “something-it-is-like to be a thing” is sometimes used as a stand-in for qualia. What I am talking about when I use that word is “the element of experience which, according to the known laws of physics, does not exist”. There is only one level of airplane, and it’s quarks. It seems impossible for a quark (electron, atom) or photon to be aware it is inside a mind. So in the standard reductionist model, there is no meaningful difference between minds and airplanes; a mind cannot feel anything for the same reason an airplane or a computer cannot feel anything. The sun is constantly exploding while being crushed, but it is not in pain. A mind is simply a machine with unusual mappings from inputs to outputs. Redness, cool breezes, pleasure, and suffering are just words that represent states which are correlated with past inputs and moderate the mind’s outputs. Many computer programs (intelligent or not) could be described in similar terms.
Suppose someone invents a shockingly human-like AGI and compiles it to run single-threaded. I run a copy on the same PC I’m using now, inside a GPU-accelerated VR simulation (maybe it runs extremely slowly, at 1⁄500 real time, but we can start it from a saved teenager-level model and speak to it immediately via a terminal in the VR). Some would claim this AGI is “phenomenally conscious”; I claim it is not, since the hardware can’t “know” it’s running an AGI any more than it “knows” it is running a text editor inside a web browser on lesswrong.com. It’s just fetching and executing a sequence of instructions like “mov”, “add”, “push”, “cmp”, “bnz”, just as it always has (and it doesn’t know it’s doing that, either). I claim that, associated with our minds, there is something additional, aside from the quarks, which can feel things or be aware of feelings. This something is not an abstraction (representing a collection of quarks which could be interpreted by another mind as a state that modulates the output of a neural network), but a primitive of some sort that exists in addition to the quarks that embody the state, and interacts with those quarks somehow. I expect this primitive will, like everything else in the universe, follow computable rules, so it will not associate itself with any arbitrary representation of a state, such as my single-threaded AGI or an arrangement of rocks. (by the way, I also assume that this primitive provides something useful to its host, otherwise animals would not evolve an attachment to them.)
Ok, I could decipher this as a vague stand in for experience. I would much prefer something like “the ability to process information about the environment and link it to past memories”, but to each their own.
Uhm… Are you banking on a revolution in the field of physics? And later you even show exactly how reductionism not only permits, but also explains our experiences.
Yes, there is. One has states of mind and the other doesn’t. How meaningful this difference is depends on your position on nihilism.
Wrong! The end of your paragraph shows why this is a wrong description of reductionism.
Yes. Exactly. Pleasure and suffering are just words, but the states of mind they represent are very much real.
Correct—particles lack the computational power to know anything. Minds, on the other hand, can know they are made of particles. This is not a problem for reductionism. Actually, explaining how simple particles’ interactions lead to observed phenomena on the macro level is the entire point.
Yes, no one would call your GPU conscious. The AGI is the software, though. The AGI could entertain the hypothesis that it lives in a simulation, even before discovering any hard evidence. Much like we do. Depending on its code, it could have states of mind similar to a human’s, and then I would not hesitate to call it conscious.
How willing would you be to put such an AGI in the state of mind described by reductionists as “pain”, even if it is simply a program run on hardware?
If such a primitive does interact with quarks, we will find it.
And then we have yet another particle. How is that different from reductionism?
Ah, it’s a magical particle. It is smaller than an electron, yet it interacts with the quarks in the brain, but not those in the carbon of a diamond. Or is it actually big, remote and intelligent on its own (unlike electrons)? So intelligent it knows exactly what to interact with, and exactly when, so as to remain undetected?
If you are not postulating a god, you are at the very least postulating a soul under a new name.
See, once you step outside the boundaries of mundane physics, you get very close to theology very fast.
I wasn’t talking about the GPU. Using the word “yes” to disagree with me is off-putting.
I never said I rejected reductionism. I reject illusionism.
Quite the opposite. A magical particle would be one that is inexplicably compatible with any and every representation of human-like consciousness (rocks, CPUs of arbitrary design), with the term “human-like” also remaining undefined. I make no claims as to its size. I claim only that it is not an abstraction, and that therefore known physics does not seem to include it.
I do not think it is intelligent, though it may augment intelligence somehow.
I think it’s fair to give illusionism a tiny probability of truth, which could make me hesitant (especially given its convincing screams), but I would be much more concerned about animal suffering than about my AMD Ryzen 5 3600X suffering.
By the way, where will the suffering be located? Is it in the decode unit? The scheduler? The ALU? The FPU? The BTB? The instruction L1 cache? The data L1 cache? Does the suffering extend to the L2 cache? The L3? Out to the chipset and the memory sticks? Is this a question that can be answered at all, and if so, how could one go about finding the answer?
Noted. Thank you for pointing this out.
Good to have that clarified.
Huh? I am now confused.
Pain signals are processed by the brain and suffering happens in the mind. So, theoretically, the suffering would be happening in the mind running on top of the simulated cortex, inside the matrix. All the hardware would be necessary to run the simulation. The hardware would not be experiencing the simulation. Just as individual electrons are not seeing red.
I misunderstood then—you do seem unhappy with the standard reductionist model’s position on emotions and experiences as states of mind.
What do you mean by “illusionism”? Is it only the belief that AGI or a mind upload could be conscious? Or is there more to it?
And how do you know that? Why do you think this unknown particle is not compatible with rocks and CPUs? Is it because you get to define its behaviour precisely as you need to answer a philosophical question a certain way?
What evidence would it take to falsify your belief in this primitive particle? What predictions does it allow you to make? Does it pay rent in anticipation?
I don’t know why. I have an AMD Ryzen 5 CPU and my earlier premise should make sense if you know what “single-threaded” means.
I thought it was obvious, but okay… let X be a nontrivial system or pattern with some specific mathematical properties. I can’t conceive of a rule by which any arbitrary physical representation of X could be detected, let alone interacted with. If a particle (or indivisible entity) does something computationally impossible (or even just highly intelligent), I call it magic.
It pays rent in sensation. I have a first-person subjective experience and I am unable to believe that it is only an abstraction. (Otherwise I probably would have turned atheist much sooner.)
I think of consciousness as a process (software) run on our brains (wetware), with the theoretical potential to be run on other hardware. I thought you understood my position. Asking me to pinpoint the hardware component which would contain suffering, tells me you don’t.
To me, saying the cpu (or the gpu) is conscious sounds like saying the cpu is linux—this is a type error. A pc can be running linux. A pc cannot actually be linux, even if “running” is often omitted.
But if one doesn’t know “running” is omitted, one could ask where does the linux-ness come from, if neither the cpu nor the ram are themselves linux.
But it does know to interact with mammals and not with trees and diamonds? … Argh! You know what, screw it. This is like arguing how many angels can sit on top of a needle. Occam’s razor says not to.
Without falsifiable predictions, we have no way to differentiate a true ad-hoc explanation from a false one. Also, a model with no predictive power is useless. Its only “benefit” would be to provide peace of mind as a curiosity stopper. (See https://www.lesswrong.com/posts/a7n8GdKiAZRX86T5A/making-beliefs-pay-rent-in-anticipated-experiences.)
I honestly don’t see the disconnect. I don’t think the existence of a conscious AGI would invalidate my subjective experiences in the slightest. The explanation is always mundane (“only an abstraction” ?), that doesn’t detract from the beauty of the phenomenon. (See https://www.lesswrong.com/posts/x4dG4GhpZH2hgz59x/joy-in-the-merely-real).
I believe you are right. Many people cite subjective personal experiences as their reason for being religious. This does make me doubt our ability to draw correct conclusions based on such.
So, I think we’ve cleared up the distinction between illusionism and non-illusionism (not sure if the latter has its own name), yay for that. But note that Linux is a noun and “conscious” is an adjective—another type error—so your analogy doesn’t communicate clearly.
I can’t be sure of that. AFAIK, you are correct that we have no falsifiable predictions as of yet—it’s called the “hard problem” for a reason. But illusionism has its own problems. The most obvious problem—that there is no “objective” subjective experience, qualia, or clear boundaries on consciousness in principle (you could invent a definition that identifies a “boundary” or “experience”, but surely someone else could invent another definition with different boundaries in edge cases)—tends not to be perceived as a problem by illusionists, which is mysterious to me. I think you’re saying the suffering has no specific location (in my hypothetical scenario), but that it still exists, and that this makes sense and you’re fine with it; I’m saying I don’t get it.
But perhaps illusionism’s consequences are a problem? In particular, in a future world filled with AGIs, I don’t see how morality can be defined in a satisfactory way without an objective way to identify suffering. How could you ever tell if an AGI is suffering “more” than a human, or than another AGI with different code? (I’m not asking for an answer, just asserting that a problem exists.)
Linux is also an adjective—linux game/shell/word processor.
Still, let me rephrase then—I don’t need a wet cpu to simulate water. Why would I need a conscious cpu to simulate consciousness?
Do you expect this to change? Chalmers doesn’t. In fact, expecting to have falsifiable predictions is itself a falsifiable prediction. So you should drop the “yet”. Only then can you see your position for the null hypothesis it is.
There is not a single concept, that could not be redefined. If this is a problem, it is not unique to consciousness.
“A process currently running on human brains”, although far from being a complete definition, already gives us some boundaries.
Suffering is a state of mind. The physical location is the brain.
By stimulating different parts of the brain, we can cause suffering (and even happiness).
Another way to think about it is this—where does visual recognition happen? How about arithmetic? Both required a biological brain for a long, long time.
And for the hypothetical scenario—let’s say I am playing CS and I throw a grenade—where does it explode?
That’s only the central problem of all of ethics, is it not? Objective morality? How could you tell if a human is suffering more than another human?
I don’t see how qualia helps you with that one. It would be pretty bold to exclude AGIs from your moral considerations, before excluding trees (and qualia has not helped you exclude trees!).
Edit: I now realize your position has little to do with Chalmers. Since you are postulating a qualia particle, which has causal effects, you are a substance dualist. But why rob your position of its falsifiable prediction? Namely—before the question of consciousness is solved, the qualia particle will be found.
Or am I misrepresenting you again?
“Car” isn’t an adjective just because there’s a “Car factory”; Consider: *”the factory is tall, car, and red”.
Yes, but I expect it to take a long time because it’s so hard to inspect living human brains non-destructively. But various people theorize about the early universe all the time despite our inability to see beyond the surface of last scattering… ideas about consciousness should at least be more testable than ideas about how the universe began. Hard problems often suffer delays; my favorite example is the delay between the Michelson–Morley experiment’s negative result and the explanation of that negative result (Einstein’s Special Relativity). Here, even knowing with certainty that something major was missing from physics, it still took 18 years to find an explanation (though I see here that an ad-hoc explanation was given by George FitzGerald in 1889 which pointed in the right direction). Today we also have a long-standing paradox where quantum physics doesn’t fit together with relativity, and dark matter and dark energy remain mysterious… just knowing there’s a problem doesn’t always quickly lead to a solution. So, while I directly sense a conflict between my experience and purely reductive consciousness, that doesn’t mean I expect an easy solution. Assuming illusionism, I wouldn’t expect a full explanation of that to be found anytime soon either.
It was just postulation. I wouldn’t rule out panpsychism.
Chalmers seems not to believe in a consciousness without physical effects—see his 80000 hours interview. So Yudkowsky’s description of Chalmers’ beliefs seems to be either flat-out wrong, or just outdated.
I do hope we solve this before letting AGIs take over the world, since, if I’m right, they won’t be “truly” conscious unless we can replicate whatever is going on in humans. Whether EAs should care about insect welfare, or even chicken welfare, also hinges on the answer to this question.
Thank you for this discussion.
I was wrong about grammar and the views of Chalmers, which is worse. Since I couldn’t be bothered to read him myself, I shouldn’t have parroted the interpretations of someone else.
I now have better understanding of your position, which is, in fact, falsifiable.
We do agree on the importance of the question of consciousness. And even if we expect the solution to have different shape, we both expect it to be embedded in physics (old or new).
I hope I’ve somewhat clarified my own views. But if not, I don’t expect to do better in future comments, so I will bow out.
Again, thank you for the discussion.
Yeah, this was a good discussion, though unfortunately I didn’t understand your position beyond a simple level like “it’s all quarks”.
On the question of “where does a virtual grenade explode”, to me this question just highlights the problem. I see a grenade explosion or a “death” as another bit pattern changing in the computer, which, from the computer’s perspective, is of no more significance than the color of the screen pixel 103 pixels from the left and 39 pixels down from the top changing from brown to red. In principle a computer can be programmed to convincingly act like it cares about “beauty” and “love” and “being in pain”, but it seems to me that nothing can really matter to the computer because it can’t really feel anything. I once wrote software which actually had a concept that I called “pain”. So there were “pain” variables and of course, I am confident this caused no meaningful pain in the computer.
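A minimal sketch (hypothetical, not the commenter’s actual software) of what such a “pain” variable amounts to: just a number that steers behavior, with nothing in the program that could plausibly feel anything:

```python
# Illustrative sketch: a "pain" signal in software is only a number
# that changes which branch of code runs. The label carries no experience.

class Agent:
    def __init__(self):
        self.pain = 0.0  # a float we happen to call "pain"

    def sense(self, damage):
        # accumulate "pain" in response to simulated damage
        self.pain += damage

    def act(self):
        # behavior changes when "pain" crosses a threshold --
        # from the computer's side, just another bit pattern changing
        return "withdraw" if self.pain > 1.0 else "continue"

agent = Agent()
agent.sense(0.4)
assert agent.act() == "continue"
agent.sense(0.8)
assert agent.act() == "withdraw"
```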
I intuit that at least one part* of human brains is different, and if I am wrong it seems that I must be wrong either in the direction of “nothing really matters: suffering is just an illusion” or, less likely, “pleasure and suffering do not require a living host, so they may be everywhere and pervade non-living matter”, though I have no idea how this could be true.
* after learning about the computational nature of brains, I noticed that the computations my brain does are invisible to me. If I glance at an advertisement with a gray tube-nosed animal, the word “elephant” comes to mind; I cannot sense why I glanced at the ad, nor do I have any visibility into the processes of interpreting the image and looking up the corresponding word. What I feel, at the level of executive function, is only the output of my brain’s computations: a holistic sense of elephant-ness (and I feel as though I “understand” this output—even though I don’t understand what “understanding” is). I have no insight into what computations happened, nor how. My interpretation of this fact is that most of the brain is non-conscious computational machinery (just as a human hand or a computer is non-conscious) which is connected to a small kernel of “consciousness” that feels high-level outputs from these machines somehow, and has some kind of influence over how the machinery is subsequently used. Having seen the movie “Being John Malkovich”, and having recently heard of the “thousand brains theory”, I also suppose that consciousness may in fact consist of numerous particles which likely act identically under identical circumstances (like all other particles we know about) so that many particles might be functionally indistinguishable from one “huge” particle.
It’s not true that particles behave identically under identical circumstances—that would be determinism.
If it were true, it wouldn’t only apply to consciousness, or mean that “consciousness is One” in some sense that doesn’t apply to everything else.
There’s a lot of information in N particles. If you want to conserve it all, your huge particle has to exist in 3*N dimensional space. But a freely moving particle in 3*N space wouldn’t behave locally, so you also need constraints to recover locality. Which is basically the argument for space really being 3 dimensional.
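The dimensional bookkeeping here can be shown directly: the positions of N particles in 3-dimensional space and a single point in 3N-dimensional space carry exactly the same information, because the map between them is invertible. A toy sketch:

```python
# N particles in 3D <-> one "huge" particle in 3N-dimensional space.
# The flatten/unflatten maps are inverses, so no information is lost.

def flatten(particles):
    # [(x1, y1, z1), ..., (xN, yN, zN)] -> one 3N-dimensional point
    return [coord for p in particles for coord in p]

def unflatten(point):
    # one 3N-dimensional point -> list of N positions in 3D
    return [tuple(point[i:i + 3]) for i in range(0, len(point), 3)]

particles = [(0.0, 1.0, 2.0), (3.0, 4.0, 5.0)]   # N = 2 particles
point = flatten(particles)                        # a point in 6 dimensions
assert len(point) == 3 * len(particles)
assert unflatten(point) == particles              # fully recoverable
```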
Is there actually anything else to human interaction?
It makes no sense to expect people to engage the machinery of rationality when they don’t believe it’ll further their goals. Even if they benefit from being privately rational, it’s not necessarily in their interest to share their rationality with you. Hence, if you haven’t earned their respect, they’ll conceal their wisdom from you, like the Spartans.
In fact, pretty much everything in Eliezer’s post seems to apply only to the rare situation of two or more people who respect each other enough to actually feel a need to appear logically consistent and make their lies plausible. Usually at least one of the people is in no real need to convince the other of anything (i.e., they have higher status), so they won’t waste any time or energy trying to. Therefore, their statements serve other purposes; mainly, to display their high status and to warn the underling when they’re getting too close to a line they won’t let them cross unpunished. Conspicuously wasting the interlocutor’s time with nonsense serves this purpose very well.
Status, status, status. It gets (some of) us every time. There seems to be very little to life but status to a normal person.
Daniel: A close second is “don’t try to argue with the devil—he has more experience at it than you”.
Would you still disagree with that one if “the devil” was replaced by “a strong AI”?
How about the notion of an insult as a first-order offence? “Don’t insult God/Our Nation/The People/etc.”. It is an explicit emotional fortress that reason cannot by definition scale. When it goes near there, all the ‘intelligence defeating itself’ mechanisms come into play. We take the fortress as our starting argument and start to think backwards until our agitated emotions are satisfied by our half-reasonable but beautiful explanation of why the fortress is safe and why what caused us to doubt it is either not so or can be explained some other way. Ergo, one step deeper into dark epistemology.
Would you still disagree with that one if “the devil” was replaced by “a strong AI”?
Yes. Suffice it to say I don’t think I’d be a very reliable gatekeeper :-).
(Conversely, I don’t think the AI’s job in the box experiment is even hard, much less impossible. Last week, I posted a $15 offer to play the AI in a run of the experiment, but my post disappeared somehow.)
I’m in strong agreement with Peter’s examples above. I would generalize by saying that the epistemic “dark side” tends to arise whenever there’s an implicit discounting of the importance of increasing context. In other words, whenever, for the sake of expediency, “the truth”, “the right”, “the good”, etc., is treated categorically rather than contextually (or equivalently, as if the context were fixed or fully specified).
See, now there’s a prime example of corrupted reasoning right there. Science is carefully structured chaos, ordered according to certain fundamental principles. Meeting those principles is what we mean when we talk about something ‘working’.
The recognition of what ‘working’ is, and the tools that have been found useful in reaching that state, is what constitutes the scientific method.
Scientists do not concern themselves with what philosophers say about science—it is my experience that they are actively contemptuous of such. Yet science goes on. Strange, isn’t it? It’s almost as though the philosophers didn’t know what they were talking about.
(Additional: the central metaphor of this discussion is flawed—the Light and Dark sides define and require each other; contrastingly, both Jedi and Sith are corruptions and failures to properly represent the two sides of the Force. Accept one, and you reject the truth of things.)
These comments are largely true
These comments don’t follow from the above. Yes, scientists don’t need philosophers to tell them how to do science, which they can do on a riding-a-bike basis. That doesn’t mean the philosophers are wrong. Birds don’t need scientists to tell them how to fly; that doesn’t mean the scientists are wrong.
“Scientists do not concern themselves with what philosophers say about science—it is my experience that they are actively contemptuous of such. Yet science goes on. Strange, isn’t it? It’s almost as though the philosophers didn’t know what they were talking about.”
This is a rather tribalistic disciplinary dogmatism, which is really quite out of step with your subsequent claim to universal monological truth (scientists think it works, so who cares what philosophers think) - a clear demonstration of Archimedean rationality...
Do scientists think it works, or does it work? The end result is a model for a particular phenomenon which can be tested for accuracy. When we use a cell phone we are seeing the application of our understanding of electromagnetism, among other things. It’s not scientists saying that science works—it’s just working.
Can you clarify what your point is?
My original objection, to which you responded, although not explicit, was that ‘science going on’ is not sufficient reason to conclude that philosophers of science ‘don’t know what they are talking about’—the entire post is puerile dogmatism.
My point was not really related to your discussion, I just wanted to clarify on your paraphrasing of “scientists think it works, so who cares what philosophers think.”
I think it is slightly silly to worry about who thinks it works when the fact of the matter is that it works—this is not a point directly against your comments, just a point of clarification in general.
That was part of my point—that, in this one facet of human endeavor, and in modern times rather than ancient ones, it’s remarkable the extent to which an actual Light Side Epistemology and Dark Side Epistemology have developed. Like the sort of contrast that naive people draw between Their Party and the Other Party, only in real life.
There’s a huge conspiracy covering it up
Well, that’s just what one of the Bad Guys would say, isn’t it?
Why should I have to justify myself to you?
Oh, you with your book-learning, you think you’re smarter than me?
They said that to Einstein and Galileo!
That’s a very interesting question, let me show you the entire library that’s been written about it (where if there were a satisfactory answer it would be shortish)
How can you be so sure?
Marcello, I think your list generalizes too much. I see three main types of words on the list. The first type indicates in-group out-group distinction and seems pretty poisonous to me. The second are ad hominem arguments which are dangerous, but do apply sometimes. And then there are a few like “too complicated.” You call those “negative affect words”? Surely it is better to say “that is too complicated to be true” than to say simply “that is not true”?
-You can’t prove I’m wrong!
-Well, I’m an optimist.
-Millions of people believe it, how can they all be wrong?
-You’re relying too much on cold rationality.
-How can you possibly reduce all the beauty in the world to a bunch of equations?
Douglas says: “”“ And then there are a few like “too complicated.” You call those “negative affect words”? Surely it is better to say “that is too complicated to be true” than to say simply “that is not true”? “””
Well, yes, but that’s only when whatever you mean by complicated has something to do with being true. Some people, though, use the phrase “too complicated” just so they can avoid thinking about an idea, and in that context it really is an empty negative-affect phrase.
Of course, it is better for a scientist to say “that’s too complicated to be true” rather than just “that’s not true.” You’re not done by any means once you’ve made a claim about whether something is true or false; the claim still needs to be backed up. The point was simply that any characterization of an idea is bad unless that characterization really does have something to do with whether the idea is true.
That was part of my point—that, in this one facet of human endeavor, and in modern times rather than ancient ones, it’s remarkable the extent to which an actual Light Side Epistemology and Dark Side Epistemology have developed. Like the sort of contrast that naive people draw between Their Party and the Other Party, only in real life.
That sounds a lot more like you’re being subject to the same bias. “Some people have this view, even though reality is more complex, but what’s amazing is that in a subject area I care a lot about, that’s what’s there.”
Yes, if you label the things you accept Light, and the things you reject Dark, you’ll see that dichotomy, but why that grouping?
Is traditional rationality Light side? or just bayesianism?
The dark side might be more appropriately grouped into a few different schools.
There will be classes of similar rules that contain both light and dark members.
The both sides have always been around, some of the light side rules might be new, and it is new to group the light side together as the things that work best.
But they are not opposed to each other. Just as physics doesn’t care if you suffer, logic doesn’t care if you get the right answer. There is no battle for our minds. Humans argue about the origin of life, but all existing humans use a combination of light and dark thinking. Creationists can look for evidence and evolutionists can say irrational things for their own psychological defense. The ‘sides’ coexist quite peacefully, not at all like competing bands of primates.
And this might be a reason that it’s so hard to get rid of bad thinking even in ourselves. The light side doesn’t have any alarm bell defenses against the dark side.
“One man’s modus ponens is another man’s modus tollens”[1][2] is a maxim that is easily weaponised by the Dark Side by taking it in a one-sided way: one sees one’s own implications as proving their consequents, and the other side’s implications as casting doubt on their antecedents.
If you once tell a lie, the truth is ever after your enemy.
That isn’t true.
I’ve told lies when I was a kid. If I got caught, I gave up rather than mounting an epistemological attack.
Richard Kennaway: “I feel that X.” Every sentence of this form is false, because X is an assertion about the world, not a feeling. Someone saying “I feel that X” in fact believes X, but calling it a feeling instead of a belief protects it from refutation. Try replying “No you don’t”, and watch the explosion. “How dare you try to tell me what I’m feeling!”
If I say I feel something, I’m talking about an emotion. I don’t intend it to be an objective statement about the world, and I’m not offended if someone says it doesn’t apply to everyone else.
“If you once tell a lie...” should, of course, read “If you once tell a lie then, until you give it up...”.
Nancy Lebovitz: If I say I feel something, I’m talking about an emotion.
That prohibits you from saying “I feel that X”. No emotion is spoken of in saying “I feel that the Riemann hypothesis is true”, or “I feel that a sequel to The Hobbit should never be made”, or “I feel that there is no God but Jaynes and Eliezer (may he live forever) is His prophet”, or in any other sentence of that form. “I feel” and “that X” cannot be put together and make a sensible sentence.
If someone finds themselves about to say “I feel that X”, they should try saying “I believe that X” instead, and notice how it feels to say that. It will feel different. The difference is fear.
That’s kind of catchy.
I believe that there are circumstances in which you can say “I feel that X”. What that could rationally mean is that you yourself recognize that you do not have enough evidence or knowledge to justify a belief about X vs. not-X, but that without evidence you lean toward X because you like that alternative. You are admitting ignorance on the subject. Ideally, this would then also imply an openness with regard to forming a belief about X or not-X given some evidence—that recognition that all you have is a feeling about it means a very weak attachment to the idea of X.
PhilB
Caledonian: What fundamental principles? As far as I can tell the only fundamental principle is that it has to work. But I’m open to counterexamples, if you are.
The recognition of what ‘working’ is, and the tools that have been found useful in reaching that state, is what constitutes the scientific method.
The scientific method is actually pretty specific—and it is not a set of tools. There is no systematic method of advancing science, no set of rules/tools which are exclusively the means to attaining scientific knowledge.
Scientists do not concern themselves with what philosophers say about science—it is my experience that they are actively contemptuous of such. . . It’s almost as though the philosophers didn’t know what they were talking about.
That’s actually my point. Scientists do what works, and employ methodological diversity—the “scientific method” is not an actual description of how real scientists do their work, nor how real science has advanced. It’s propaganda, made up by certain people who were/are absolutely horrified that science has no defining and fundamental underlying principles—which would throw their entire schema of epistemology into turmoil.
The “rules” of science, if they exist, are subject to change at any time. Science has physical reality at the input and useful models at the output—and no bona fide, tried and true, structure in between.
Here’s a rule of science: Your hypothesis must make testable predictions. It must be falsifiable. Is that “subject to change at any time” ? I bet there are more.
While it may not perfectly describe how actual scientists do their work all the time, the scientific method is a description of the process of how we sort out good ideas/models from bad ones, which is the quintessential goal of science (the “advancement of science,” if you will).
Just to be clear on what we are discussing, here is the Oxford English Dictionary definition (I don’t like using dictionaries as authorities; I think it’s stupid. This is just to have a working definition on the table): “A method or procedure… consisting in systematic observation, measurement, and experiment, and the formulation, testing, and modification of hypotheses.”
In order for the scientific community to take a claim seriously, there are certain expectations that must be satisfied such as a reproducible experiment, peer reviewed publication, etc. When a hypothesis is proposed (assuming it has already met the baseline requirement of making testable predictions), it is thrust into the death pit of scientific inquiry where scientists do everything they can to test and falsify it. While the subject matter may span vastly different areas of science, this process is still generally followed.
Scientists who do science for a living may have gotten good at this process, so much so that they do it without belaboring each element as you would in a middle school science class, but they do it nevertheless. It is true that in the past, bad science happened, and even today lapses in scientific integrity happen; however, the reason science is given the authority that it has is its strict adherence to the above process. (Also, as a disclaimer, there are many nuances to said process that I glossed over; I just wanted to get the general idea.)
If I may go out on a limb here, it sounds to me like the chaos you are talking about is the unavoidably arbitrary nature of observation of phenomena and the unavoidably arbitrary nature of proposing hypotheses. Often throughout history we have encountered entirely new areas of science by sheer accident. Likewise (unless they are making a phenomenological model) scientists have no better way to propose hypotheses than to guess at what the answer is based on observations that they currently have and then make new observations/experiments to see if they were right.
So I definitely agree with you on the chaotic nature of our stumbling across new phenomena on our quest to understand reality, but to say that the process we go through to establish scientific knowledge is not systematic seems a bit extreme.
You haven’t earned the right to say X.
I think that one is poorly-phrased but defensible. You can think of it as short hand for “Your life experiences have provided you with an insufficient collection of Bayesian priors to permit you to assert X with any reasonable certainty”.
The worst one is “this is my truth”. The ultimate victory of map over territory. In the universe I create, rocks fall up. Forcing me to believe in “gravity” puts you in my proper role as divine map-maker. Your “reason” and “evidence” are just a power grab. I choose not to believe the rock I’m about to drop on my toes will hurt. Ouch! You bastard, you contaminated my purity of self-definition.
“Everyone has a right to their own opinion” is largely a product of its opposite. For a long period many people believed “If my neighbor has a different opinion than I do, then I should kill him”. This led to a bad state of affairs and, by force, a less lethal meme took hold.
Exactly—it’s not epistemics, it’s a peace treaty.
To Richard Kennaway:
Your original point, which I didn’t read carefully enough:
“I feel that X.” Every sentence of this form is false, because X is an assertion about the world, not a feeling. Someone saying “I feel that X” in fact believes X, but calling it a feeling instead of a belief protects it from refutation. Try replying “No you don’t”, and watch the explosion. “How dare you try to tell me what I’m feeling!”
“No, you don’t” sounds like a chancy move under the circumstances. Have you tried “How sure are you about X?” and if so, what happens?
More generally, statements usually imply more than one claim. If you negate a whole statement, you may think that which underlying claim you’re disagreeing with is obvious, but if the person you’re talking to thinks you’re negating a different claim, it’s very easy to end up talking past each other and probably getting angry at each other’s obtuseness.
My reply: If I say I feel something, I’m talking about an emotion.
You again: That prohibits you from saying “I feel that X”. No emotion is spoken of in saying “I feel that the Riemann hypothesis is true”, or “I feel that a sequel to The Hobbit should never be made”, or “I feel that there is no God but Jaynes and Eliezer (may he live forever) is His prophet”, or in any other sentence of that form. “I feel” and “that X” cannot be put together and make a sensible sentence.
If someone finds themselves about to say “I feel that X”, they should try saying “I believe that X” instead, and notice how it feels to say that. It will feel different. The difference is fear.”
It sounds to me as though you’ve run into a community (perhaps representative of the majority of English speakers) with bad habits. I, and the people I prefer to hang out with, would be able to split “I feel that x” into a statement about emotions or intuitions and a statement about the perceived facts which give rise to the emotions or intuitions.
I believe that “I believe that a sequel to The Hobbit should never be made” is emotionally based. Why would someone say such a thing unless they believed that the sequel would be so bad that they’d hate it?
Here’s something I wrote recently about the clash between trying to express the feeling that strong emotions indicate the truth and universality of their premises and the fact that real world is more complicated.
“I feel that X” really means, “I believe X, and accept that others will likely disagree.” The purpose is to serve as a conversational marker showing that disagreement is expected. When used properly, this is simply to grease the wheels of discourse a bit, making it more likely that the respondent will have the proper idea about the attitude the speaker takes towards the idea, not to imply that the disagreement will be taken as unresolvable. It makes discourse more efficient. Of course, it can be misused in the way that Richard complains about, but I think he’s being obtuse to be against the phrase in every manifestation, and especially obtuse in the way he frames his disagreement.
I am being forthright, not obtuse. I say again that there is no statement of the form “I feel that X”, which would not be rendered more accurate by replacing it with “I believe that X”. That people use the word “feel” in this way does not make it a statement about feelings: it remains a statement about beliefs. Neither of those statements actually contains any expression of a feeling about X. Here is one that does: “I am angry that X”. Compare “I feel that X”—what is the feeling? It is not there. In a larger context, the listener may be able to tell, but if they can, they can do so equally well from “I believe that X”.
It might well be. But the emotions would not be communicated any better by using the word “feel”. They are not communicated at all by either word. (I can think of other reasons why someone might object to a sequel: for example, some people have an ethical objection to fanfiction.)
And no, I’ve never actually responded to an “I feel that” with a blunt “No you don’t”. It would rarely help. But I do know people that would call me on it if I ever used the expression, as I would them. A lot of the time—I am talking about actual, specific experience here, not vague generalisation—people react emotionally to beliefs they are holding that they have never actually stated out loud as beliefs, and asked “Are these actually true?” Until you have noticed what you believe, you cannot update your beliefs. I-feel-thats avoid that confrontation.
To use “feel”, as a couple of people suggested, to mean “tentative belief” changes only the map: there are still no actual feelings being expressed, just a word that has been blurred. This does not grease the wheels of discourse, it gums them up. Better to reserve “feel” for feelings and “believe” for beliefs, for it is a short step from calling them both by the same name to passing them off as the same thing, and then you are on the Dark Side, whether you know it or not. State something as a belief and you open yourself to the glorious possibility of being proved wrong. Call it a feeling and you give yourself a licence to ignore reality.
Probably silly to reply almost four years later, but what the heck. I think that in a lot of cases “I feel that X” is a statement of belief in belief. That is, what the person really means is “I believe that X should be true,” or “I have an emotional need to believe that X is true regardless of whether it is or not.” Since you’re very unlikely to get someone who thinks “I feel that X” is a valid statement in support of X to admit what they really mean, it is indeed an excellent example of Dark epistemology.
Hyperbole as a perversion of projection, arguments like: “...and next you’ll be killing AI developers who disagree with FAI, to prevent them posing an existential threat,” that contain both sufficiently clear reasoning and sufficient unknowable elements as to sound possible, sure, plausible, even. This is used to discredit the original idea, not the fantastical extrapolation.
How about the all-time great, now better than ever:
This time it will be different
Another good candidate may be revealed in the following Dostoevsky quote:
“If someone were to prove to me that Christ is outside of the truth, and it were truly so, that the truth was outside of Christ, I would prefer to remain with Christ, rather than with the truth.”
[http://books.google.co.uk/books?ct=result&q=%22Christ+is+outside+of+the+truth%22&btnG=Search+Books]
Substitute ‘Christ’ for your favourite deity/belief system. This was the epistemological line I was not able to cross during my Christian journey. Others may however, and once it is crossed, there may be little that can be done to rescue that person (other than perhaps pure shock and awe at the repercussions of such a departure from reality). If this is not the root of a ‘dark side epistemology’, it is certainly the pinnacle of it, the final lie that must be accepted to justify all the ones that came before it.
An interesting contrast to that is C.S. Lewis (through one of his characters): “I’m on Aslan’s side even if there isn’t any Aslan to lead it. I’m going to live as like a Narnian as I can, even if there isn’t any Narnia.”
I agree with Thom Blake: “Everyone has a right to their own opinion” is a defense against unreliable hardware. Your opinion is wrong so I must kill you and take your women, or even just your opinion is wrong so I must repress you.
For a long time, I’ve had problems with phrases that treat Pride as a good thing. i.e. “Take some pride in X” “Where is your pride?” “Have you no pride?”
I realize that in the past, Pride may have had many positive evolutionary values, but in modern times, we have more efficient and accurate ways to test for usefulness and prowess among our population.
From what I can tell—this is actually just the flip-side of shame. Shame is often used to coerce people into (or out of) certain behaviours.
Contrast with: “Where is your shame?”, “Have you no shame?”
There are two of these Generic Defenses, iterations of this species of logical fallacy, that I’ve found particularly vile. They may collapse into one. First, the extension of “tolerance” to assertions, e.g. “Be tolerant of my creationist beliefs”, which means “My creationist beliefs are immune to discourse or thought: they command respect simply because they are my assertions,” but disguises itself in the syntax of a honeyed pluralistic truism like “Be tolerant of people who hold opinions that aren’t yours.”
The other is the notion of false balance, which is a palatable and pervasive trope of people who are talking nonsense, e.g. “There are two sides to the dinosaur debate: Some scientists believe in dinosaurs, and others think God has put fossils in the ground to test our faith in Him. Isn’t it interesting to consider the arguments of both sides? I guess we’ll never know the real answer!”
That stuff drives me mad.
Arguably, another one is the adage that when people disagree on anything very strongly, “the truth is usually in the middle.”
It’s not entirely nonsensical to anticipate and correct for people’s tendency to exaggerate away from their perceived enemy, but it’s not a reliable rule of thumb at all. It’s not all that hard to find situations where one side is just wrong.
Here’s a better way to take polarisation into account: instead of concluding that “both sides are probably a bit right”, it would be more realistic to say “both sides are probably wrong”. Or better yet: “what both sides think is irrelevant, I’m just going to ignore the whole business and figure it out for myself.”
The worst of them all is probably to judge an idea by some real or perceived characteristics of its proponents (e.g., “strident”). Taken to an extreme this leads to whining about issues like tone while ignoring content.
Sometimes jerks are right.
“Cui bono?” Who benefits?
I believe the Dark Side coopted “cui bono?” because it has a valid usage: those who benefit from various policies may falsify or embellish their opinions, and “cui bono?” can sometimes identify faked opinions. (For instance, why do many businesses support minimum wage hikes?) A rationalist should count a suspect opinion as weaker evidence than a non-suspect opinion.
But the dark side uses it thus: if someone benefits, the belief is wrong and the evidence in its favor can be dismissed.
Example: “Who benefits from the story of the Holocaust? Israel. The Holocaust raises sympathy for Jews worldwide, and sympathizing voters and politicians in the United States and Europe enable Israel’s continued existence.”
This is 1) Not the rationalist use of “cui bono” and 2) COMPLETELY INSANE. Holocaust deniers use “cui bono?” to question whether the Holocaust actually happened. They figure that the fact that someone benefits is enough to support a worldwide, 65-year-long conspiracy theory. No matter how wary suspicious motives may make us of someone, the independent lines of evidence leading to the historical event of the Holocaust blow them out of the water. “Cui bono?” is so weak in comparison that it can be completely ignored when estimating the likelihood of “The Holocaust happened.”
This usage can probably be categorized as a subset of all Type M arguments.
An amusing Onion parody of anti-epistemology and crackpots: Rogue Scientist Has His Own Scientific Method
One method I’ve seen no mention of is distraction from the essence of an argument with pointless pedantry. The classical form is something along the lines of “My opponent used X as an example of Y. As an expert in X, which my opponent is not, I can assure you that X is not an example of Y. My opponent clearly has no idea how Y works and everything he says about it is wrong”—which only holds true if X and Y are in the same domain of knowledge.
A good example: Eliezer said in the first paragraph that a geologist could tell a beach pebble from a driveway pebble. As a geologist, I know that most geologists, myself included, honestly couldn’t tell the difference. Most pebbles found in concrete, driveways, and so forth are taken from rivers and beaches, so a pebble that looks like a beach pebble wouldn’t be surprising to find in someone’s driveway. That doesn’t mean that Eliezer’s point is wrong, since he could have just as easily said “a pebble from a mountaintop” or “a pebble from under the ocean” and the actual content of this post wouldn’t have changed a bit.
In a more general sense, this is an example of assuming an excessively convenient world to fight the enemy’s arguments in, but I think this specific form bears pointing out, since it’s a bit less obvious than most objections of that sort.
Time for some Stirner:
You know, the Jedi had bad epistemology, same as the Sith. For instance: “Only the Sith speak in absolutes!” . . . Give it a moment. Think about it. Only is what kind of modifier again?
Attributing Obi-Wan’s highly emotional statement, made in the aftermath of the Order’s destruction, to all the Jedi is a no-go. They did have problems with their actions, but those were more of the “too careful” variety, so to speak.
I love this. I just . . . this is awesome. You rock. Thank you.
Here are some:
Your epistemology is just a ploy so that only the university-educated can defend opinions.
If I credibly claim that I have suffered a sufficiently large wrong imposed at the national culture level, then I may dictate my proper place and anybody who questions me, even just to ask whether my claim is really credible, is participating in that national-culture-level wrong.
Endless hypothesis privileging.
A popular sentiment is “I don’t care about X!!!”. Sometimes this even appears in memes proudly lauding the “fact” of their non-caring about whatever X happens to be. While it may be wise to take people’s knee-jerk disapproval with a grain of salt, clearly we as humans are wired in such a way as to care what others think, for better or for worse. Instead of facing our emotions head-on, and admitting that we do care, it is much easier not to reveal how fragile we are to the world.
An interesting specific case study (although I haven’t been able to generalize a more broad category) would be the argument that Pluto is or should be a planet. People who argue this tend to know a couple of the reasons why it makes sense for scientists to use the term “planet” to refer to the larger bodies, while using the words “dwarf planet” to indicate objects like Pluto and Ceres with another set of characteristics. Their argument is interesting in that it doesn’t seem to occur on the factual level at all, but purely on the emotional, gut-reaction level. In other cases, the two usually get tangled up rapidly, with factual arguments on both sides, but this does not occur here. It’s purely an argument about terminology, and that our terminology should not be optimized to reflect the facts of reality, but rather our own traditions and desires for their own sakes alone. There aren’t even any arguments that such an awkward naming convention would be instrumentally valuable, say by letting scientists keep the same terminology they’ve been using for years. You see exactly those sorts of arguments for not switching to the Metric system, but not regarding Pluto for some reason.
But what might this tell us about how emotionally based opinions are formed? What’s different in this case? Well, first of all, it is extremely obvious that the outcome of the disagreement will have no harmful impact on anyone anywhere. With political arguments, there’s always a victim, no matter who wins. The question is whose grievance is worse, or which victim you identify with more. That’s how you choose your side. So, when you need to prove that your side has it worse off, you need to bring facts into the discussion to prove it. We quickly get defensive, and then start rationalizing.
But I wouldn’t call what’s going on in the Pluto debate “rationalizing”. It’s not starting with a conclusion, and then trying to find evidence and arguments to justify it. It’s not starting with evidence either though. It’s the raw belief itself, without any supporting evidence or justifications tacked on. If you want a rationalization, you actually have to probe someone for it, and even then they may or may not respond with one. They are just as likely to respond with “but it’s just really sad that Pluto is getting demoted”. It’s pure nostalgia, or personification of the inanimate, or some mix of other emotions with no logic attached.
No amount of facts or logic will win against this. Instead, the only winning move is to get them to love truth, to find beauty in the structure of logic, to admire the scientific method, to see elegance in simplicity, and to find Joy in the Merely Real. Failing to be able to do so isn’t a Dark Side Epistemology, but rather the gaping maw that all Dark Side Epistemology is trying to fill. It’s the true cause of a Fake Justification. If we want to actually prevent systemic rational failures from popping up, we need to know the true causal history that originated the need for such beliefs, not just the true causal history that originated the beliefs themselves. Dark Side Epistemology gives rise to the beliefs themselves, but the inability to find joy in the merely real gives rise to the need for such beliefs.
What’s the solution, then? Well, there are already “beautiful engineering” memes, and some visualizing mathematics such as fractals, although the more abstract math is difficult to show that way. But there are plenty of quotes out there proclaiming the beauty of such things. “I Fucking Love Science” is popular, and Neil deGrasse Tyson brings the stars and planets to life fairly effectively. What seems to be missing is a social base promoting formal logic itself, or traits that limit self-deception. There are plenty of skeptics groups, some of which advocate for something like reductionism, but that’s only tangentially relevant to disproving UFO claims. Less Wrong seems to be the closest thing there is to this, but I wouldn’t want to dilute this community down to a meme factory. Things like HPMOR are a big step in the right direction, but we need a true cultural movement to unfold if we want to change the way people think.
“I can’t answer your questions about / criticisms of my belief, but if you ask my guru (or read his book), he’ll definitely have the answers to all your questions.”
(Or “her book” etc—but the examples I’ve come across have all used men as their infallible guru.)
Ayn Rand is an example at times.
You’re overthinking it
The acronym FLICC describes techniques of science denial and alludes to a lot of dark side epistemology:
F—Fake Experts (and Magnified Minority): you’ve got your scientists and I’ve got mine (and even though There’s No Consensus, mine are right and yours are wrong, that’s for sure).
L—Logical fallacies
I—Impossible expectations. This refers to an unrealistic expectation of proof before acting on evidence. It tends to be paired with very low demands of evidence for the contrary position (confirmation bias). This is often unnecessary because if the goal is inaction (e.g. don’t bother to lower emissions or get vaccinations) you can just have an unreasonable standard of proof for both sides and take no action as a default. Nevertheless this heavily lopsided analysis occurs in practice.
C—Cherry picking of data (perhaps this is just another logical fallacy, but it is more central to science denial than other logical fallacies)
C—Conspiracy theories. One “dark side” thing about conspiracy theories is their self-sealing quality—evidence contrary to one’s position can always be explained by assuming it was generated by the conspiracy, so the conspiracy theory tends to grow larger over time until it is a massive global conspiracy with untold thousands of actors hiding the hidden truth. An even more interesting and common dark-side trick, though, is to believe in a conspiracy without ever thinking about the conspiracy. Most people aren’t dumb enough to believe in a massive global conspiracy, but they use an assumption of some amount of conspiracy as a “background belief”: they rely mainly on FLIC, and just use Conspiracy Theory as a last resort, so Conspiracy serves as a window dressing to cover any remaining issues that otherwise wouldn’t make sense in their version of “the truth”. Or maybe it just looks that way: the science denier may know that talking about their conspiracy theory would make them sound more nutty, so they outwardly prefer to rely on other arguments and fall back on conspiracy as a last resort.
Looking at Scott Alexander’s Argument From My Opponent Believes Something, I guessed that the general Dark Side technique he’s describing was misrepresentation borne out of sloppy analog thinking. But at the end he points out that he has listed a set of Fully General Counterarguments, all of which are tools of the dark side since they can attack any position and lead to any conclusion:
Jordan Peterson’s redefinition of truth comes to mind. During his first appearance on Sam Harris’ podcast, he presented the following: “Nietzsche said that truth is useful (for humanity). Therefore, what is harmful for humanity cannot be ‘true’. Example—if scientists discover how to create a new plague, that knowledge may be technically correct, but cannot be called ‘true’. On the other hand, the bible is very useful. Like, extremely useful. So very useful, that even if not technically correct, the bible is nevertheless ‘true’.”
Of course, how to judge whether “E=mc^2” is “true” or only correct (before the Apocalypse!) is left to the listener. The important part is being able to say that the bible is “true”, everything else is secondary.
I think the problem you’re pointing at is “using words to confuse the issue”. Most people know what truth is, and don’t need a definition (except to clarify which sense of the word we’re talking about). But humans do a lot of linguistic reasoning. So if you introduce a new definition for a word, one that people don’t normally use, you have a chance of confusing people into reasoning using that new definition, and using the results of that reasoning on the original sense of the word.
Here, I don’t know what Nietzsche said, but it does not follow from the phrase “truth is useful” that “it is not true that this is a discovery for creating a plague, because plagues are not useful”. It seems, rather, that he’s misrepresenting Nietzsche by simply mislabeling usefulness as truth (and if Nietzsche actually did that, he’s wrong).
Another way to look at it is to observe that the word “is” is used in the same sense as “a sphere is rollable”, which does not imply “if it is rollable, it must be a sphere”. In the same way, “truth is useful” does not imply “if it is useful, it must be truth”.
Either way, people make logical mistakes all the time, and therefore one mistake in isolation is not dark epistemology. But what if you had the chance to explain to Peterson what his logical mistake was, and he responded by (1) denying that he made any mistake or (2) ignoring your point entirely? Now that’s what I call dark epistemology. Or what if Peterson makes the same mistake over and over and never seems to notice unless his opponents do it? More dark epistemology.
I’ve heard Peterson accuse feminists of disregarding what is true in the name of ideology on many occasions.
Sam Harris initially spent an hour arguing against Peterson’s redefinition of “truth” to include a “moral dimension”. They’ve clashed about it since, with no effect. Afaik, “the bible is true because it is useful” is a central component of Peterson’s worldview.
To be fair, I believe Peterson has managed to honestly delude himself on this point and is not outright lying about his beliefs.
Nevertheless, when prompted to think of a “General Defense of Fail”, attempting to redefine the word “truth” in order to protect one’s ideology came to mind very quickly.
Arguing against consistency itself. “I was trying to be consistent when I was younger, but now I’m more wise than that.”
Most lies are bad, but there are circumstances where lying is necessary and does not make truth the enemy: cases where telling the truth would cause immediate harm.
When people in Germany were sheltering people during the Holocaust, and a Nazi official asked if they were hiding anyone, the correct response was “no” even though it was a lie. When someone doesn’t believe in a religion or is gay or something, but would be cast out of the home or “honor-killed” if their parents found out, they should lie until they have a way to escape.
Yes, Eliezer agrees with that and wrote about it in Meta-Honesty: Firming Up Honesty Around Its Edge-Cases (also using the hiding someone from a Nazi example)
Eliezer also mentions it here, saying that if you’re willing to lie to someone, you should be willing to slash their tires or lobotomize them. But I want to point out the Fallacy of Gray here—there are different degrees of lying, and different degrees of consequences. I may hide the truth from my teacher about my friend cheating on a test (trying to stop the friend is a different discussion, but I would), but I wouldn’t go so far as outright violence in order to protect the secret.
I think the vast majority of people will gladly slash your tyres or lobotomize you without a second thought if the alternative is to go to the effort of debating you for any length of time with a genuinely truth-seeking attitude. Only if they fear you may they attempt to fake the latter.
“One of the saddest lessons of history is this: If we’ve been bamboozled long enough, we tend to reject any evidence of the bamboozle. We’re no longer interested in finding out the truth. The bamboozle has captured us. It’s simply too painful to acknowledge, even to ourselves, that we’ve been taken. Once you give a charlatan power over you, you almost never get it back.”
― Carl Sagan