“good” or “bad”, or even “rational” and “irrational”. (Which of course are just disguised versions of “good” and “bad”, if you’re a rationalist.)
I saw this, and felt a strong urge to walk to work where my laptop is and correct it.
Rational agents/things are not synonymous with good things. A paperclip maximizer is the canonical example of an agent acting rationally. As far as most people are concerned, including me, the paperclip maximizer is not acting in a good way. When I saw “rational” mental tags on an agent, I used to see “good” mental tags; they used to be synonymous to me. Then I changed my mind and realized that much of the time, instrumentally rational means “very, very dangerous”: “powerful optimizing agent present”. This is true in the case of the off-topic thing, and of humans. Many instrumentally rational humans are dangerous to me. I am lucky to live in a society where I am mostly protected from clever, powerful humans such as the mafia.
Without emotion, you have no way to narrow down the field of “all possible hypotheses” to “potentially useful hypotheses” or “likely to be true” hypotheses...
This statement is either false or meaningless, depending on how you interpret “emotion”. It suffices to say that an agent can single out true hypotheses without having a goal, and an autistic human can distinguish truth from falsehood. Humans with damage to the emotional centres of their brains don’t get anything done, but their ability to tell truth from falsehood is unaltered. In fact I suspect that less emotional people are better epistemic rationalists, i.e. they are good at finding “likely to be true” hypotheses.
rationality—seemed to require three things to actually work:
A way to generate possibly-useful ideas
A way to check the logical validity—not truth!—of those ideas, and
A way to test those ideas against experience.
There’s more to epistemic rationality than these. Probabilistic reasoning, probabilistic logic, analogy formation, introspection and reflective thinking, domain knowledge, heuristics for which approximations are valid, and notions of context all come to mind. Even in my short time working in AI, I have realised that first-order predicate logic plays a very small part in a mind.
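The contrast between checking logical validity and ranking hypotheses probabilistically can be made concrete. A minimal sketch, with hypotheses, priors, and likelihoods all invented purely for illustration: logic can certify that two hypotheses are each *consistent* with an observation, but only a probabilistic update can say which is more likely.

```python
# Toy illustration: logical validity alone cannot rank hypotheses that are
# both consistent with the evidence; a Bayesian update can.
# All priors and likelihoods below are invented for illustration.

def posterior(priors, likelihoods, evidence):
    """Bayes' rule over a discrete hypothesis space."""
    unnorm = {h: priors[h] * likelihoods[h][evidence] for h in priors}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

priors = {"h1": 0.5, "h2": 0.5}      # both hypotheses are logically consistent
likelihoods = {
    "h1": {"obs": 0.9},              # h1 predicts the observation strongly
    "h2": {"obs": 0.1},              # h2 barely predicts it
}

post = posterior(priors, likelihoods, "obs")
# After the update, h1 is far more probable, though logic alone
# could not separate the two.
```

Nothing here is first-order predicate logic; the ranking comes entirely from the probabilistic machinery.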
This statement is either false or meaningless, depending on how you interpret “emotion”.
Let’s review the statement in question:
Without emotion, you have no way to narrow down the field of “all possible hypotheses” to “potentially useful hypotheses” or “likely to be true” hypotheses...
By “narrow down”, I actually meant “narrow down prior to conscious evaluation”—not consciously evaluate for truth or falsehood. You can consciously evaluate whatever you like, and you can certainly check a statement for factual accuracy without the use of emotion. But that’s not what the sentence is talking about… it’s referring to the sorting or scoring function of emotion in selecting what memories to retrieve, or hypotheses to consider, before you actually evaluate them.
Still either false or meaningless, depending on how you interpret ‘emotion’. Our brains narrow things down prior to conscious evaluation. It’s their speciality. If you hacked out the limbic system you would still be left with a whole bunch of cortex that is good at narrowing things down without conscious evaluation. In fact, if you hacked out the frontal lobes you would end up with tissue that retained the ability to narrow things down without being able to consciously evaluate anything.
The point of emotions—which I see I failed to make sufficiently explicit in this post, from the frequent questions about it—is that their original purpose was to prepare the body to take some physical, real-world action… and thus they were built in to our memory/prediction systems long before we reused those systems to “think” or “reason” with.
Brains weren’t originally built for thinking—they were built for emoting: motivating co-ordinated physical action.
I disagree again: I don’t think that any reasonable definition of emotion makes the following statement true:
emotions allow you to (prior to conscious evaluation) narrow down the field of “all possible hypotheses” to “likely to be true” hypotheses.
I think that emotions often do the opposite. They narrow down the field of “all possible hypotheses” to “likely to make me feel good about myself if I believe it” hypotheses and “likely to support my preexisting biases about the world” hypotheses, which is precisely the problem that this site is tackling… if emotions subconsciously selected “likely to be true” hypotheses, we would not be in the somewhat problematic situation we are in.
Those are subsets of what you believe to be likely true.
Right, we agree. But I think that we have overused the word emotion… That which proposes hypotheses is not exactly the same piece of brainware as that which makes you laugh and cry and love. We need different names for them. I call the latter emotion, and the former a “hypothesis generating part of your cognitive algorithm”. I think and hope that one can separate the two.
That which proposes hypotheses is not exactly the same piece of brainware as that which makes you laugh and cry and love
No… the former merely sorts those hypotheses based on information from the latter. More precisely, the raw data from which those hypotheses are generated has been stored in such a manner that retrieval is prioritized by emotion, and such that any such emotions are played back as an integral part of retrieval.
One’s physio-emotional state at the time of retrieval also has an effect on retrieval priorities… if you’re angry, for example, memories tagged “angry” are prioritized.
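Read as a mechanism, the retrieval scheme described here — memories stored with emotion tags, and retrieval prioritized by how well those tags match the retriever’s current emotional state — might be sketched as follows. The memories, tags, and scoring rule are all invented for illustration, not a claim about how brains actually implement it:

```python
# Hypothetical sketch of emotion-prioritized memory retrieval.
# Each memory carries emotion tags; retrieval ranks memories by the
# overlap between their tags and the retriever's current state.

def retrieve(memories, current_state, top_k=2):
    """Return the top_k memories whose tags best match current_state."""
    def score(memory):
        return len(memory["tags"] & current_state)
    return sorted(memories, key=score, reverse=True)[:top_k]

memories = [
    {"text": "the argument last week", "tags": {"angry"}},
    {"text": "a quiet afternoon",      "tags": {"calm"}},
    {"text": "the missed deadline",    "tags": {"angry", "anxious"}},
]

# An angry retriever surfaces the anger-tagged memories first.
angry_recall = retrieve(memories, {"angry"})
```

Note that nothing in the scoring function checks truth: the same mechanism that prioritizes “angry” memories for an angry retriever would just as readily prioritize “feels good to believe” hypotheses, which is exactly the worry raised above.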
Rational agents/things are not synonymous with good things. A paperclip maximizer is the canonical example of an agent acting rationally. As far as most people are concerned, including me, the paperclip maximizer is not acting in a good way.
Although these days Roko is probably uninterested in whether I agree with him, I agree with that passage.
According to my definition, “epistemically rational” means “effective at achieving one’s goals”. If the goals are incompatible with my goals, I’m going to hope that the agent remains epistemically irrational.
(Garcia used “intelligent” and “ethical” for my “epistemically rational” and “has goals compatible with my goals”.)
Since 1971, Garcia’s been stressing that increasing a person’s epistemic rationality increases that person’s capacity for good and capacity for evil, so you should try to determine whether the person will do good or do evil before you increase the epistemic rationality of the person. (Of course your definition of “good” might differ from mine.)
The smartest person (Ph.D. in math from a top program, successful entrepreneur) I ever met before I met Eliezer was probably unethical or evil. I say “probably” only to highlight that one cannot be highly confident of one’s judgement about someone’s ethics or evilness even if one has observed them closely. But most people here would probably agree with me that this person was unethical or evil.
Those are subsets of what you believe to be likely true.
Great! Hurrah for emotions, they make you believe things that you believe are likely to be true…
epistemic rationality is about believing things that are actually true, rather than believing things that you believe to be true.
And that’s why it’s a good thing to know what you’re up against, with respect to the hardware upon which you’re trying to do that.
Rationality can be bad when it’s given to an agent with undesirable goals, but your own goals are always good to you, so where your own thoughts are concerned, being ‘rational’ means they’re good and being ‘irrational’ means they’re bad. I think the article’s statement was meant to apply only to thoughts evaluated from the inside.
No! Not at all.