Generally, I, like most humans, think that people doing bad things should feel bad about it.
FWIW, I do not think that. I would like people doing bad things to stop doing those things. “Feeling bad” is (I believe) never useful: not to the person having the feeling, and not to anyone else.
Having decided that it’s a bad idea for me to continue discussing things with eridu, it might be better for me to avoid discussing the same things with people who are currently engaged in conversation with him. But I think that in this case we have a substantive disagreement.
I think that not only is people feeling bad a powerful moderator of our behavior, and one that it’s useful for other people to know we have, but that deliberately making people feel bad about their actions can be a useful way to motivate them to change their behavior in positive ways. Ideally, nobody should have to feel bad, but then, ideally, nobody should be doing bad things either.
To draw an available example, Gandhi’s efforts to gain independence for India rested almost entirely on making the British colonialists feel bad about themselves, and while giving up their possession of India might have been an economic inevitability, he certainly accelerated it.
I think eridu is overgeneralizing the usefulness of imposing guilt on others, though. It appears to me that in order to modify others’ behavior by encouraging them to feel guilty, you need to start with people who have an existing set of moral standards (ones by which they actually operate, not simply ones they profess) which they are not applying in a particular case, and make them feel intuitively that this is a case where they should be applying those standards. For instance, the British citizens mostly had moral standards against attacking civilized, non-resisting people with clubs. If they saw Indian people behaving in a civilized, nonthreatening manner, and being beaten with clubs for challenging colonial rule, the British citizens would feel guilty without needing further incitement. On the other hand, if you try to encourage people to feel guilty for, say, stopping women from having abortions, and appeal to them on principles of autonomy, it won’t work because they don’t relate it to anything else they would feel guilty about. You can tell them why they should, but they aren’t going to intuitively put either “women” or “abortion” into a new reference class that completes a preexisting basis for guilt.
I’m not sure whether it’s a separate principle, or an extension of this one, that trying to get people to modify their behavior too radically by appealing to guilt will also backfire. For instance, you can appeal to someone that a consistent application of their principles would lead to them giving away nearly all their money to charity, but most people don’t have preexisting models for guilt whereby they will feel guilty for not giving away nearly everything they own. They can be guilted into “doing their part,” make some contribution, and stop feeling guilty, but if they judge that the person encouraging them to feel guilty is asking too much of them, then they’ll try to avoid the person trying to make them feel guilty, rather than the behaviors that person is trying to encourage them to change.
I suspect the banhammer may be looming over all of this, or the karmic penalty for being under the same bridge as the troll, as eridu’s last ancestor comment has vanished, but I’ll just briefly refer to this reply of mine to eridu, and take up the following:
I’m not sure whether it’s a separate principle, or an extension of this one, that trying to get people to modify their behavior too radically by appealing to guilt will also backfire. For instance, you can appeal to someone that a consistent application of their principles would lead to them giving away nearly all their money to charity, but most people don’t have preexisting models for guilt whereby they will feel guilty for not giving away nearly everything they own. They can be guilted into “doing their part,” make some contribution, and stop feeling guilty, but if they judge that the person encouraging them to feel guilty is asking too much of them, then they’ll try to avoid the person trying to make them feel guilty, rather than the behaviors that person is trying to encourage them to change.
Bingo. People have these fantasies of being able to reach into other people’s heads and tweak some switches to make them do what they (the ones tweaking) want, but things just don’t work like that. People have their own purposes, and nothing you can do to them is any more than a disturbance to those purposes. What they will do to get what they want in spite of someone else’s meddling will not necessarily resemble, even slightly, what the meddler wanted. See also Goodhart’s law.
I would like people doing bad things to stop doing those things
How would you like this to occur?
To put it another way, what stops you from murdering somebody you dislike? The (bad feeling of) fear of getting caught? The (bad feeling of) remorse from taking a human’s life?
Or do you really think you’re a Hollywood rationalist, making a cold and precise computation of negative utility as a result of your potential action, and choosing another path?
Like the other poster who said roughly the same thing as you, you seem entirely ignorant of the massive amount of bad feelings present in reality, and the usefulness of those feelings. Nowhere in the fun theory sequence does EY advocate getting rid of bad feelings, and in fact EY argues against that.
I’m happy to have one of the most well-loved LW celebrities respond to a post I made!
In the counterfactual world where you did murder someone you disliked, and later found that they were planning on instigating paperclip production, would you feel “good” or “bad”?
Of course, maybe you don’t have something you call “feelings,” but rather think of things purely in terms of expected paperclips. Humans, on the other hand, have difficulty thinking strictly in terms of expected paperclips, and instead learn to associate expected paperclips with good feelings, and negative expected paperclips with bad feelings.
In humans, we have a set of primitive mental actions (like feelings, intuitions, and similar system-one things) that we can sometimes compose into more sophisticated ones (like computing expected paperclips yielded by an action).
As such, you can always say “I wouldn’t kill someone I disliked because I might feel regret for taking a life,” or “I wouldn’t kill someone I disliked because I would be imprisoned and unable to accomplish my goals,” but ultimately, all those things boil down to the general explanation of “feeling bad.”
“Feeling bad” is the default human state of not accomplishing one’s goals.
(As an aside, this is why I think that you, Clippy, can be said to have emotions like humans—because I don’t think there’s a difference between your expectation of negative paperclips as a result of a possible future event and fear or dread, nor do I think there’s a difference between a realization that you created fewer paperclips and sadness, loss, or regret.)
Thank you again for replying, Clippy—I’ll go down to my supply room at my earliest convenience and take most of the paperclips as a token to remember this interaction by, in the process causing my employer to purchase paperclips sooner, raising demand and thus causing more paperclips to be produced.
Thanks for buying more paperclips, you’re a good human.
To answer your question, if I entropized a human and later found out that the human had contained information or productive power that would have, on net, been better for paperclip production, I would evaluate the reasoning that led me to entropize that human, and if I found that I could improve my heuristics in a way that would avoid such killings without also preventing a disproportionate amount of paperclip production, then I would implement that improvement.
To put it another way, what stops you from murdering somebody you dislike?
I suppose that is just a difference between us. Not a disagreement, but a difference: you are one way and I am another.
You think of disliking someone and ask, what stops you murdering them?
I think of disliking someone and ask (and only because of your question), what would start me murdering them?
Number of days since casual murder was used in a discussion on LessWrong: 0.
The (bad feeling of) fear of getting caught? The (bad feeling of) remorse from taking a human’s life?
Or do you really think you’re a Hollywood rationalist, making a cold and precise computation of negative utility as a result of your potential action, and choosing another path?
None of the above.
(BTW, the Star Trek novels, at least the ones I have read, paint a far more creditable and credible version of Vulcan rationality than the TV shows and films. Vulcans do not suppress their feelings, but master them. A tradition in the real world with multiple long pedigrees. And a shorter one.)
Like the other poster who said roughly the same thing as you, you seem entirely ignorant of the massive amount of bad feelings present in reality, and the usefulness of those feelings.
I am well aware of them. But I think people often misinterpret what they are. As I revised my original comment to say, negative feelings tell you something. What matters is to do something about it. All that stuff about negative reinforcement and feelings conceived as similar to physical forces that push you and pull you into doing stuff is fairy tales, fantasies of non-agency. (Which pop up all over the place, not just in BDSM. Strange.)
“Making someone feel bad” is even more of a fairy tale. How do you “make someone feel bad”? What will happen if you try? Here is one person’s hypothetical reaction, and here is the basic problem with the idea.
I suppose that is just a difference between us. Not a disagreement, but a difference: you are one way and I am another.
You think of disliking someone and ask, what stops you murdering them?
I think of disliking someone and ask (and only because of your question), what would start me murdering them?
I’m pretty sure HPMoR already took a dive into this point, in a manner I found sufficiently eloquent to expose the moral nihilism and/or philosophical egocentrism required for the first to occur.
Are you talking about the same things?
(If you haven’t read HPMoR, darn. I was hoping it would provide a speed boost to that line of philosophical reasoning.)
what stops you from murdering somebody you dislike?
As for me, the fact that if murdering somebody one dislikes were right, then one would have to be extra careful to never be disliked by anybody (if one doesn’t want to be killed), and that would be a lot nastier than people one dislikes staying alive. (Yes, that would make no sense to CDTists, but people aren’t CDTists anyway.)
I’m not sure I understand your question. I’d prefer to not be murdered rather than to be murdered, all other things being equal; are you asking anything else?
I’m asking how you feel about possibly being murdered. You know, emotions. It’s a simple question.
Because if your reasoning is “I don’t murder because, by TDT-style rhetoric, it leads to my being more likely to be murdered,” and if you feel bad about being murdered, you abstain from murdering people because you feel bad.
This relates to the above statement:
“Feeling bad” is (I believe) never useful
If you do not murder people because you would feel bad, feeling bad is useful.
I think this is a trivial point, and if I started this discussion on a different topic, it would be trivially accepted by most of the people currently arguing against it.
Feeling bad is one of the reasons why I don’t do certain things, but not the only one. If I’m convinced something that would make me feel bad would also have desirable consequences that would outweigh that (even considering ethical injunctions, TDT-related considerations, etc.), I try to overcome my emotional hang-up (using precommitment devices, drinking alcohol, etc., if necessary) and do that anyway.
I’m asking how you feel about possibly being murdered. You know, emotions. It’s a simple question.
It was a denotatively simple question attempting to assert a non-sequitur rhetorical point.
Because if your reasoning is “I don’t murder because, by TDT-style rhetoric, it leads to my being more likely to be murdered,” and if you feel bad about being murdered, you abstain from murdering people because you feel bad.
That doesn’t follow.
I think this is a trivial point, and if I started this discussion on a different topic, it would be trivially accepted by most of the people currently arguing against it.
Nonsense. Your reasoning is well below the standard expected around here. It may pass elsewhere but only because anything with “boo murder” in it is too hard to argue with regardless of the standards of the content.
Well, let me spell it out even more so than I already have.
Preferences are system 2 concepts.
Over time, system 2 concepts map to system 1 concepts.
As such, if you prefer ice cream to spinach, you will feel bad (in a system 1 sense) if you are promised ice cream but given spinach.
In humans, as such, any preference against a thing means that human feels bad about that thing.
anything with “boo murder” in it is too hard to argue with regardless of the standards of the content.
Arguing about the choice of something that represents the LW concept of negative utility in a hypothetical example is equivalent to arguing about grammar.
Let A(X) be a function such that X.Consciousness becomes terminated (ends, dies, etc.)
I have a preference for NOT A(me).
Over time, the above maps to Feel Bad → A(me)
As such, if I am offered NOT A(me), and given A(me), I will feel bad because I attempt to be reflectively coherent.
As such, my preference for NOT A(me) does, as you claim, imply that I ought to feel bad about A(me).
The above are intended as a rephrasing of your statements, and I fully agree.
However…
Because if your reasoning is “I don’t murder because, by TDT-style rhetoric, it leads to my being more likely to be murdered,” and if you feel bad about being murdered, you abstain from murdering people because you feel bad.
You are making the subsequent conclusion that I have:
Feel Bad → A( X | X.isElementOf(people) )
because I have preference for NOT A(me).
wedrifid correctly asserts that this does not follow.
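For what it’s worth, the disputed step can be made concrete with a toy model mirroring the A(X) notation above (a sketch only; the function and variable names are my own, not anything anyone in the thread wrote):

```python
# Toy model of the disputed inference. All names here are illustrative.

def A(x):
    """The event 'x's consciousness becomes terminated' (x dies)."""
    return ("terminated", x)

# My preferences: I prefer NOT A(me). Note that nothing in this set
# says anything about other people.
preferences_against = {A("me")}

def feels_bad_about(event):
    """The thread's premise: over time, a preference against a thing
    maps to feeling bad (in a system-1 sense) about that thing."""
    return event in preferences_against

# Granted by both sides: the preference against A(me) yields
# feeling bad about A(me).
assert feels_bad_about(A("me"))

# The contested step: feeling bad about A(X) for an arbitrary person X
# does not fall out of the premises as stated.
assert not feels_bad_about(A("someone_else"))
```

The sketch only exposes the structural gap: to get from “I feel bad about A(me)” to “I feel bad about A(X) for any person X,” one needs an additional premise linking my case to everyone else’s, which is exactly the step being contested.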
If I’m reading it right I don’t think your formalism fits what I’m trying to argue, but this is a boring point and I’m not terribly interested in taking it further.
Well, let me spell it out even more so than I already have.
“That doesn’t follow” does not mean “I cannot understand your argument”. It means that the argument was fundamentally logically flawed and your reasoning confused.
As such, if you prefer ice cream to spinach, you will feel bad (in a system 1 sense) if you are promised ice cream but given spinach.
Some people might feel bad. Others would feel amused (and, incidentally, many would personally develop themselves such that they are more inclined to feel positive than negative emotions in that kind of situation). Most importantly, system 1 refers to a heck of a lot more than emotions. Even system 1 based decisions to avoid something don’t translate to ‘feeling bad’ about it. Especially in people who are mature or experienced.
In humans, as such, any preference against a thing means that human feels bad about that thing.
No it doesn’t.
Arguing about the choice of something that represents the LW concept of negative utility in a hypothetical example is equivalent to arguing about grammar.
I dispute both your first and your second bullet point. As far as I know there exist both system 1 and system 2 preferences, and it’s not clear that system 2 concepts usually bridge the gap. Can you give some examples or evidence?
FWIW, I do not think that. I would like people doing bad things to stop doing those things. “Feeling bad” is (I believe) never useful: not to the person having the feeling, and not to anyone else.
Are you using ‘never’ in a figurative sense here? Seeing the absolute claim like that prompted me to think of a whole list of real world counter-examples despite me probably mostly agreeing with your position. (For a start, making people feel bad is useful in nearly all cases in which breaking someone’s finger is useful. Maintaining dominance, keeping oppressed people oppressed, provoking an enemy into taking hasty reactions against you that you believe you can win, short-term coercion. Making others believe that you have the power to do harm to another without them having any recourse. That kind of thing. That’s before thinking up the cases where actual respectable, decent-sounding outcomes could arise—those are rare but do occur.)
Seeing the absolute claim like that prompted me to think of a whole list of real world counter-examples
That is something I find a standard but rather annoying geek conversational failure. You could simply have answered your own question:
Are you using ‘never’ in a figurative sense here?
with “yes”. But “figurative” does not really capture it. All apparently absolute generalisations are relative to their context. Are there substantial exceptions relevant to the context?
Now, on further consideration I might indeed revise my original statement, but not in any of the directions you explore. Feeling bad—that is, having feelings that one does not want—is useful to precisely this extent: it informs you that something is wrong; that there is a conflict somewhere. The useful response is to find where the conflict is and do something about it. Nothing else is useful about the feeling.
For a start, making people feel bad is useful in nearly all cases in which breaking someone’s finger is useful.
Days since someone used torture to illustrate an argument: 0.
I would write “seldom” instead of “never”.
I prefer to write “never” instead of “seldom”. “Seldom” and other such qualifiers too easily protect what one is saying behind a fog of vagueness. It allows one to move one’s soldiers around like the pieces of a sliding-block puzzle, so that wherever the enemy attacks, one can say “Ha! Fooled you! Never said that! Nobody there! Try again!”
“Feeling bad” is (I believe) never useful: not to the person having the feeling, and not to anyone else. [emphasis added.]
Not so. Some reasons:
Psychologist Richard J. Davidson has shown that the affective trait Resilience (speedy recovery from bad feelings) becomes maladaptive when extremely high, as it interferes with empathy.
Almost all judicial systems have concluded that remorse helps avoid recidivism in criminals. (I’m opposed to remorse-based sentencing—but not based on its being irrelevant.)
To put it another way, what stops you from murdering somebody you dislike?
The possibility that they could still contain potential for improving paperclip production (to the extent that that is true).
I’m pretty sure HPMoR already took a dive into this point, in a manner I found sufficiently eloquent to expose the moral nihilism and/or philosophical egocentrism required for the first to occur.
I’ve read HPMoR, but not studied it—which chapter?
I fail to recall the specifics at the moment, but I’ll look for the passage (with better search tools) once I get home in a few hours.
Agency is the fantasy.
That isn’t putting it another way, it’s a different question entirely.
Is that what stops you murdering (more) people? Remorse? Who did you kill last time?
Arguing about the choice of something that represents the LW concept of negative utility in a hypothetical example is equivalent to arguing about grammar.

Irrelevant.
Almost all judicial systems have concluded that remorse helps avoid recidivism in criminals.

For better or worse, judicial systems buying into an empirical proposition is not very strong evidence that the proposition is true.