I’ll grant that there’s a sense in which instrumental and epistemic rationality could be said to not coincide for humans, but I think they conflict much less often than you seem to be implying, and I think overemphasizing the epistemic/instrumental distinction was a pedagogical mistake in the earlier days of the site.
Forget about humans and think about how to build an idealized agent out of mechanical parts. How do you expect your AI to choose actions that achieve its goals, except by modeling the world, and using the model to compute which actions will have what effects?
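Here’s a minimal sketch of what I mean, in Python. The type aliases and function names are purely illustrative assumptions, not any particular framework; the point is just that action choice is a computation over the agent’s model:

```python
from typing import Callable, Iterable, Tuple

State = str
Action = str
# A world model maps (state, action) to (probability, outcome) pairs.
Model = Callable[[State, Action], Iterable[Tuple[float, State]]]

def expected_utility(model: Model, utility: Callable[[State], float],
                     state: State, action: Action) -> float:
    # Score an action by what the model predicts it will cause.
    return sum(p * utility(s) for p, s in model(state, action))

def choose_action(model: Model, utility: Callable[[State], float],
                  state: State, actions: Iterable[Action]) -> Action:
    # Instrumental rationality reduces to a computation over the agent's
    # epistemic state: a worse world model means worse action choices.
    return max(actions, key=lambda a: expected_utility(model, utility, state, a))
```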
From this perspective, the purported counterexamples to the coincidence of instrumental and epistemic rationality seem like pathological edge cases that depend on weird defects in human psychology. Learning how to build an unaligned superintelligence or an atomic bomb isn’t dangerous if you just … choose not to build the dangerous thing, even if you know how. Maybe there are some cases where believing false things helps achieve your goals (particularly in domains where we were designed by evolution to have false beliefs for the function of deceiving others), but trusting false information doesn’t increase your chances of using information to make decisions that achieve your goals.
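To make that last point concrete, here’s a toy usage of the sketch above, with entirely made-up numbers: an agent whose model overrates a bad option picks it, and so realizes less true expected utility than an agent with an accurate model.

```python
# Toy example (all numbers invented): a false belief about the "risky"
# action makes the agent choose it, lowering its true expected utility.
true_outcomes = {"safe": [(1.0, "ok")], "risky": [(0.2, "win"), (0.8, "lose")]}

def utility(outcome):
    return {"ok": 1.0, "win": 2.0, "lose": 0.0}[outcome]

def accurate(state, action):
    return true_outcomes[action]

def deluded(state, action):
    # False belief: thinks the risky action almost always wins.
    return [(0.9, "win"), (0.1, "lose")] if action == "risky" else true_outcomes[action]

for name, model in [("accurate", accurate), ("deluded", deluded)]:
    act = choose_action(model, utility, "start", ["safe", "risky"])
    true_value = expected_utility(accurate, utility, "start", act)
    print(name, "model picks", act, "| true expected utility:", true_value)
```

The accurate agent picks “safe” and realizes 1.0; the deluded agent picks “risky” and realizes 0.4. The false belief felt helpful from the inside but didn’t pay.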
In my personal life, I’ve observed that self-deception is related to one’s ability to deceive others. Narcissism is a less contrived conflict between instrumental and epistemic rationality.
The narcissists I know who genuinely self-deceive (as opposed to mere doublethink) tend to be unhappy, unstable, and unproductive. But… they also have a superficial charisma. Evolutionarily speaking, I think this is a Nash equilibrium.
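To gesture at what I mean by equilibrium, here’s a toy hawk-dove-style calculation. The payoffs are purely invented, chosen only to give the game this structure:

```python
# Toy hawk-dove-style game with purely invented payoffs, to illustrate
# how a mix of self-deceivers and honest types could be stable.
# payoff[i][j] = payoff to row strategy i against column strategy j;
# row/column 0 = self-deceiver, 1 = honest.
import numpy as np

payoff = np.array([[0.0, 3.0],   # deceivers exploit the honest, clash with each other
                   [1.0, 2.0]])  # honest types do fine together, lose some to deceivers

# At a mixed equilibrium with deceiver frequency p, both strategies earn
# equal expected payoff: 3(1-p) = p + 2(1-p), which gives p = 1/2.
p = np.linspace(0.0, 1.0, 1001)
ev_deceiver = p * payoff[0, 0] + (1 - p) * payoff[0, 1]
ev_honest = p * payoff[1, 0] + (1 - p) * payoff[1, 1]
p_star = p[np.argmin(np.abs(ev_deceiver - ev_honest))]
print(f"stable deceiver frequency ~ {p_star:.2f}")
```

With these invented numbers, deceivers thrive when rare and suffer when common, so neither pure strategy can take over; the population settles at a stable mix, which is what a Nash equilibrium amounts to here.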
I think self-deception is instrumental in acting unethically for one’s own self-interest. In this way, believing false things can help achieve your goals, or rather, evolution’s goals.
Forget about humans and think about how to build an idealized agent out of mechanical parts. How do you expect your AI to choose actions that achieve its goals, except by modeling the world, and using the model to compute which actions will have what effects?
The AI depends on epistemic rationality to achieve its goals. Instrumental rationality at the expense of epistemic rationality may help the AI achieve yours.
Comparing Pragmatism to narcissism? What about being willing to reduce your own knowledge (aka your own advantage) for other people’s sake (e.g. forgetting knowledge that could lead to genocide)? I would argue this is altruistic rather than narcissistic (and realism would be more narcissistic).
(EDIT: a spelling error, more impersonal and better flow)
It sounds like you are concerned about hypothetical situations that test the limits of philosophical ideas, whereas Zack_M_Davis and I are concerned about real-world situations that happen all the time.

Fair enough. Let us dive into the fantastical. Suppose we lived in a world like you describe.
Slightly reducing one’s own knowledge to prevent massive harm to others is the moral imperative. I don’t think anyone here would disagree. But I don’t think that’s the fundamental problem either. The interesting question is whether you’re willing to deceive yourself to achieve moderate instrumental ends.
Suppose there was an invisible monster that ate anyone who knew it existed. If I accidentally discovered this monster then I would want to forget that knowledge in order to protect my life.
But I would not want to replace this knowledge with a false belief. Such a false belief could get me into trouble in other ways. I would also want to preserve the knowledge in some form.
What follows is a passage from Luna Lovegood and the Chamber of Secrets.
Luna daydreamed a lot. She often found herself in rooms with little memory of how she got there. It was rare for her to find herself in a room with literally no memory of how she got there.
“I’ve just been obliviated, haven’t I?” Luna said.
“You rushed in here and pleaded for me to erase your memory,” Professor Lapsusa said.
“And?”
“It is a crime for a professor to modify the memory of a student. And for good reason. No. I have never magically tampered with your mind and I will never do so.”
Luna felt like she had just run up several flights of stairs. She was breathing quickly. Sweat soaked from her fingertips into the diary she was holding.
“Have I been possessed?” Luna asked.
“No,” Lapsusa said.
Lapsusa waited for Luna to work it out.
“This book I’m holding. Is it magical?” Luna asked.
Lapsusa smiled.
“It is a tool for self-obliviation then,” Luna said.
That’s an odd passage, and I’m not sure what you’re trying to say, but I’ll check out Luna Lovegood and the Chamber of Secrets. But this knowledge dilemma is not as hypothetical as you might think. The placebo effect is a very real thing we encounter every day, and I would generally advise people to stay optimistic during medical operations because it increases their chances of success (I would argue skewing your worldview temporarily is worth it). When governments decide whether they should fund research into e.g. nuclear weapons, I would generally advise against it (even though it gives us a better map of the territory) because it’s dangerous. I would much rather spend that money on pragmatic (but unintellectual) projects like housing the homeless.

I would argue the edge cases for humans are fairly common.
I made no mention of the frequency of such an occurrence. I agree that it’s a rare edge case, but that’s what makes it interesting. Also, we are humans, so of course I’m interested in what humans (aka this site’s demographic) will do, and not some hypothetical other being.