I would like people doing bad things to stop doing those things
How would you like this to occur?
To put it another way, what stops you from murdering somebody you dislike? The (bad feeling of) fear of getting caught? The (bad feeling of) remorse from taking a human’s life?
Or do you really think you’re a Hollywood rationalist, making a cold and precise computation of negative utility as a result of your potential action, and choosing another path?
Like the other poster who said roughly the same thing as you, you seem entirely ignorant of the massive amount of bad feelings present in reality, and of the usefulness of those feelings. Nowhere in the fun theory sequence does EY advocate getting rid of bad feelings; in fact, he argues against that.
The possibility that they could still contain potential for improving paperclip production (to the extent that that is true).
I’m happy to have one of the most well-loved LW celebrities respond to a post I made!
In the counterfactual world where you did murder someone you disliked, and later found that they were planning on instigating paperclip production, which would you feel: “good” or “bad”?
Of course, maybe you don’t have something you call “feelings,” but rather think of things purely in terms of expected paperclips. Humans, on the other hand, have difficulty thinking strictly in terms of expected paperclips; instead, they learn to associate positive expected paperclips with good feelings, and negative expected paperclips with bad feelings.
In humans, we have a set of primitive mental actions (like feelings, intuitions, and similar system-one things) that we can sometimes compose into more sophisticated ones (like computing expected paperclips yielded by an action).
As such, you can always say “I wouldn’t kill someone I disliked because I might feel regret for taking a life,” or “I wouldn’t kill someone I disliked because I would be imprisoned and unable to accomplish my goals,” but ultimately, all those things boil down to the general explanation of “feeling bad.”
“Feeling bad” is the default human response to not accomplishing a goal.
(As an aside, this is why I think that you, clippy, can be said to have emotions like humans—because I don’t think there’s a difference between your expectation of negative paperclips as a result of a possible future event and fear or dread, nor do I think there’s a difference between a realization that you created fewer paperclips and sadness, loss, or regret.)
Thank you again for replying, Clippy—I’ll go down to my supply room at my earliest convenience and take most of the paperclips as a token to remember this interaction by, and in the process cause my employer to purchase paperclips sooner, raising demand and thus causing more paperclips to be produced.
Thanks for buying more paperclips, you’re a good human.
To answer your question, if I entropized a human and later found out that the human had contained information or productive power that would have, on net, been better for paperclip production, I will evaluate the reasoning that led me to entropize that human, and if I find that I can improve heuristics in a way that will avoid such killings without also preventing a disproportionate amount of paperclip production, then I will implement that improvement.
To put it another way, what stops you from murdering somebody you dislike?
I suppose that is just a difference between us. Not a disagreement, but a difference: you are one way and I am another.
You think of disliking someone and ask, what stops you murdering them?
I think of disliking someone and ask (and only because of your question), what would start me murdering them?
Number of days since casual murder was used in a discussion on LessWrong: 0.
That isn’t putting it another way, it’s a different question entirely.
Is that what stops you murdering (more) people? Remorse? Who did you kill last time?
The (bad feeling of) fear of getting caught? The (bad feeling of) remorse from taking a human’s life?
Or do you really think you’re a Hollywood rationalist, making a cold and precise computation of negative utility as a result of your potential action, and choosing another path?
None of the above.
(BTW, the Star Trek novels, at least the ones I have read, paint a far more creditable and credible version of Vulcan rationality than the TV shows and films. Vulcans do not suppress their feelings, but master them. A tradition in the real world with multiple long pedigrees. And a shorter one.)
Like the other poster who said roughly the same thing as you, you seem entirely ignorant of the massive amount of bad feelings present in reality, and of the usefulness of those feelings.
I am well aware of them. But I think people often misinterpret what they are. As I revised my original comment to say, negative feelings tell you something. What matters is to do something about it. All that stuff about negative reinforcement and feelings conceived as similar to physical forces that push you and pull you into doing stuff is fairy tales, fantasies of non-agency. (Which pop up all over the place, not just in BDSM. Strange.)
“Making someone feel bad” is even more of a fairy tale. How do you “make someone feel bad”? What will happen if you try? Here is one person’s hypothetical reaction, and here is the basic problem with the idea.
Agency is the fantasy.
I suppose that is just a difference between us. Not a disagreement, but a difference: you are one way and I am another.
You think of disliking someone and ask, what stops you murdering them?
I think of disliking someone and ask (and only because of your question), what would start me murdering them?
I’m pretty sure HPMoR already took a dive into this point, in a manner I found sufficiently eloquent to expose the moral nihilism and/or philosophical egocentrism required for the first to occur.
Are you talking about the same things?
(If you haven’t read HPMoR, darn. I was hoping it would provide a speed boost to that line of philosophical reasoning.)
I’ve read HPMoR, but not studied it—which chapter?
I fail to recall the specifics at the moment, but I’ll look for the passage (with better search tools) once I get home in a few hours.
what stops you from murdering somebody you dislike?
As for me, the fact that if murdering somebody one dislikes were right, then one would have to be extra careful to never be disliked by anybody (if one doesn’t want to be killed), and that would be a lot nastier than people one dislikes staying alive. (Yes, that would make no sense to CDTists, but people aren’t CDTists anyway.)
How do you feel about possibly being murdered?
I’m not sure I understand your question. I’d prefer to not be murdered rather than to be murdered, all other things being equal; are you asking anything else?
I’m asking how you feel about possibly being murdered. You know, emotions. It’s a simple question.
Because if your reason is “I don’t murder people because, by TDT-style rhetoric, it leads to my being more likely to be murdered,” and if you feel bad about being murdered, then you abstain from murdering people because you feel bad.
This relates to the above statement:
“Feeling bad” is (I believe) never useful
If you do not murder people because you would feel bad, feeling bad is useful.
I think this is a trivial point, and if I started this discussion on a different topic, it would be trivially accepted by most of the people currently arguing against it.
Feeling bad is one of the reasons why I don’t do certain things, but not the only one. If I’m convinced something that would make me feel bad would also have desirable consequences that would outweigh that (even considering ethical injunctions, TDT-related considerations, etc.), I try to overcome my emotional hang-up (using precommitment devices, drinking alcohol, etc., if necessary) and do that anyway.
I’m asking how you feel about possibly being murdered. You know, emotions. It’s a simple question.
It was a denotatively simple question attempting to assert a non-sequitur rhetorical point.
Because if your reason is “I don’t murder people because, by TDT-style rhetoric, it leads to my being more likely to be murdered,” and if you feel bad about being murdered, then you abstain from murdering people because you feel bad.
That doesn’t follow.
I think this is a trivial point, and if I started this discussion on a different topic, it would be trivially accepted by most of the people currently arguing against it.
Nonsense. Your reasoning is well below the standard expected around here. It may pass elsewhere but only because anything with “boo murder” in it is too hard to argue with regardless of the standards of the content.
Well, let me spell it out even more so than I already have.
- Preferences are system 2 concepts.
- Over time, system 2 concepts map to system 1 concepts.

As such, if you prefer ice cream to spinach, you will feel bad (in a system 1 sense) if you are promised ice cream but given spinach.
In humans, as such, any preference against a thing means that the human feels bad about that thing.
anything with “boo murder” in it is too hard to argue with regardless of the standards of the content.
Arguing about the choice of something that represents the LW concept of negative utility in a hypothetical example is equivalent to arguing about grammar.
Let A(X) be a function such that X.Consciousness becomes terminated (ends, dies, etc.)
I have a preference for NOT A(me).
Over time, the above maps to Feel Bad → A(me)
As such, if I am offered NOT A(me), and given A(me), I will feel bad because I attempt to be reflectively coherent.
As such, my preference for NOT A(me) does, as you claim, imply that I ought to feel bad about A(me).
The above are intended as a rephrasing of your statements, and I fully agree.
However…
Because if your reason is “I don’t murder people because, by TDT-style rhetoric, it leads to my being more likely to be murdered,” and if you feel bad about being murdered, then you abstain from murdering people because you feel bad.
You are making the subsequent conclusion that I have:
Feel Bad → A( X | X.isElementOf(people) )
because I have preference for NOT A(me).
wedrifid correctly asserts that this does not follow.
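The disputed inference above can be made concrete with a small sketch, in the thread’s own A(x) notation. This is my own illustration, not anyone’s endorsed formalism: it assumes, per the argument, that preferences against events map over time onto system-1 “feel bad” triggers.

```python
# A minimal sketch of the disputed inference, using the thread's A(x) notation
# (A(x) = the event that x's consciousness is terminated). Assumption: an
# agent's preferences against events map onto system-1 "feel bad" triggers.

def feels_bad_about(prefs_against, event):
    """An agent feels bad about exactly those events it holds a preference against."""
    return event in prefs_against

# My preferences: I am against A(me), and nothing else is assumed.
my_prefs_against = {"A(me)"}

# The agreed step: a preference for NOT A(me) yields feeling bad about A(me).
assert feels_bad_about(my_prefs_against, "A(me)")

# The disputed step: feeling bad about A(x) for other people x does NOT
# follow -- it needs an extra premise, e.g. a preference against A(x) itself.
assert not feels_bad_about(my_prefs_against, "A(them)")
```

The second assertion is the whole point of “that doesn’t follow”: nothing in the premises connects a preference against A(me) to a bad feeling about A(x) for arbitrary x.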
If I’m reading it right I don’t think your formalism fits what I’m trying to argue, but this is a boring point and I’m not terribly interested in taking it further.
Well, let me spell it out even more so than I already have.
“That doesn’t follow” does not mean “I cannot understand your argument”. It means that the argument was fundamentally logically flawed and your reasoning confused.
As such, if you prefer ice cream to spinach, you will feel bad (in a system 1 sense) if you are promised ice cream but given spinach.
Some people might feel bad. Others would feel amused (and, incidentally, many would personally develop themselves such that they are more inclined to feel positive than negative emotions in that kind of situation). Most importantly, system 1 refers to a heck of a lot more than emotions. Even system 1 based decisions to avoid something don’t translate to ‘feeling bad’ about it. Especially in people who are mature or experienced.
In humans, as such, any preference against a thing means that the human feels bad about that thing.
No it doesn’t.
Arguing about the choice of something that represents the LW concept of negative utility in a hypothetical example is equivalent to arguing about grammar.
Irrelevant.
I dispute both your first and your second bullet point. As far as I know there exist both system 1 and system 2 preferences, and it’s not clear that system 2 concepts usually bridge the gap. Can you give some examples or evidence?