This is a thought-provoking sentence. I think I don’t want anyone to feel bad, even when they do bad things.
As for me, I’d say it depends on whether them feeling bad makes them stop doing bad things.
If we’re counting guilt as suffering in an ethically consequential sense—which seems reasonable, since it’s pretty profoundly unpleasant and there’s a pretty clear functional analogy to physical pain—and if that suffering is additive with other kinds, then consequentialists should want people to feel guilt when they do bad things if and only if that guilt eliminates more suffering (of any type) down the road. Don’t know if you’re a consequentialist, but this seems like a good starting point.
In any case, that condition seems like it’s sometimes but not always true. Guilt over immutable or nearly immutable urges seems like a net loss unless those urges are both proportionally destructive and susceptible to conditioned reduction in the average case. Guilt strong enough to be unpleasant but weak enough not to overcome whatever other factors are making people do bad shit is likewise a loss. Interestingly, this seems to indicate that consequentialists should sometimes prefer intense over moderate guilt, unless it’s gratuitously intense relative to what’s needed to stop the behavior: sufficiently disproportionate guilt is also a loss.
The obvious objection to this line of thinking is that certain categories of socially constructed bad shit—not to name names—might stick around if and only if they stay at or above a certain level of prevalence in the population, sort of a memetic equivalent of herd immunity. Since these patterns can persist for an unbounded length of time and cause suffering as long as they do, anything capable of incrementally degrading them could have second-order consequences much larger than its first-order effects, potentially enough to justify any and all related guilt. In this case uncertainties about the problem structure seem to dominate consequential reasoning, much as in Pascal’s Mugging.
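The additivity assumption above can be put in toy form. A minimal sketch (the function name and all the numbers are my own, purely illustrative; they aren't from the thread):

```python
# Toy consequentialist ledger (illustrative, with made-up numbers):
# inducing guilt is a net win only when the suffering it prevents
# outweighs the suffering the guilt itself constitutes.
def guilt_is_net_win(guilt_suffering: float,
                     harm_per_act: float,
                     acts_prevented: float) -> bool:
    return harm_per_act * acts_prevented > guilt_suffering

# Guilt unpleasant enough to hurt but too weak to change behavior: pure loss.
assert not guilt_is_net_win(guilt_suffering=5, harm_per_act=10, acts_prevented=0)

# Intense guilt that actually stops the behavior can still come out ahead...
assert guilt_is_net_win(guilt_suffering=40, harm_per_act=10, acts_prevented=5)

# ...unless it's gratuitously intense relative to what it prevents.
assert not guilt_is_net_win(guilt_suffering=80, harm_per_act=10, acts_prevented=5)
```

On this accounting, the preference for intense over moderate guilt falls out directly: moderate guilt pays the suffering cost without buying any prevented harm.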
In my experience, feelings of guilt coupled with the attitude that the urge is “immutable” can be an effective excuse not to fix harmful behavior. It’s a sort of ugh field. When the consequences of the behavior become sufficiently intolerable, one is eventually tempted to hang the guilt and test that supposed immutability.
Sure, that’s a failure mode, and it’s one which—stepping down a level of abstraction—seems prevalent in gender discussions (“I’m $gender, I can’t help it!”). From the inside, it can be pretty hard to distinguish between the motivations you can and can’t change with enough reflection. There’s a loose cultural consensus as to what counts, but at the same time that varies between subcultures and can lead to conflict in its own right: consider the “ex-gay” phenomenon in fundamentalist Christian spheres.
Maybe I shouldn’t have mentioned it in context; in my estimation it’s not directly relevant to what we’re discussing upthread. But at the same time I think it’s a mistake to consider our wants entirely plastic; for the time being we’re working with a certain set of hardware, and software changes can only do so much.
Interesting. Does that remain true if you believe that feeling bad when they do bad things makes people less likely to do bad things?
Possibly not. I do think punishments can deter bad actions. But I think this works best when those punishments are clearly described in advance of the crime.
Also, it seems to me that there is a perverse aspect of regret, that it punishes sympathetic malefactors more than it punishes psychopathic ones.
Agreed on both counts.
If feeling bad when they did bad things made people less likely to do bad things, there would be no such thing as akrasia.
Huh. If that isn’t hyperbole, I’m interested in your reasons for believing that.
Of course it is. The point is that we see all around us (that’s another hyperbole), and it is a recurring theme on LessWrong (that isn’t), that people persist in acting, or failing to act, in ways that they “feel bad” about. As a strategy for change, “feeling bad” doesn’t seem to be effective, does it?
“Making someone feel bad”, or “good”, fares even worse—see this parable.
I agree.
I disagree. One of the reasons akrasia is so notable is that feeling bad usually works. Usually touching a hot stove or hitting your thumb with a hammer once is enough to change your behavior. Often being mocked by your peers, or sensing genuine disappointment from your mentors, is enough to change your behavior. It’s only in these weird corner cases where opposing strong motivations collide that we notice the unusual inefficacy of bad feelings, and haul out the rational analysis toolkit.
If feeling bad were actually motivational, all of us who currently feel bad about our (present tense) actions would not have such problems.
But doesn’t the same logic lead me to conclude that pain isn’t aversive? (That is: if pain were actually aversive, people wouldn’t do things that cause them pain. People do things that cause them pain, therefore pain is not aversive.)
The problem with that logic as it applies to pain is that pain can be aversive without completely preventing people from doing something. If a behavior B is X% likely ordinarily, and B becomes Y% likely when coupled to pain, and Y &lt; X, that’s evidence for considering pain aversive even though we still do B. Relatedly, if B is always coupled to pain, then I never get to observe X.
Observing a nonzero Y is not evidence that pain is non-aversive.
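The X-versus-Y argument can be simulated. A minimal sketch (the threshold model, the pain cost, and the distribution are my own assumptions, not anything from the thread):

```python
# Toy model (illustrative): an agent does B whenever its net desire is
# positive. Pain subtracts a fixed cost from the desire, lowering the
# rate of B without driving it to zero.
import random

random.seed(0)

def rate_of_B(pain_cost, trials=100_000):
    done = 0
    for _ in range(trials):
        desire = random.gauss(0.0, 1.0)  # how much the agent wants B this round
        if desire - pain_cost > 0:
            done += 1
    return done / trials

x = rate_of_B(pain_cost=0.0)  # baseline frequency of B
y = rate_of_B(pain_cost=1.0)  # frequency when B is coupled to pain

# Pain is aversive (y < x) even though B still happens (y > 0):
assert 0 < y < x
```

The point the simulation makes is exactly the one above: a nonzero Y is fully compatible with pain being aversive; what matters is Y relative to X.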
It seems to me the same reasoning applies to guilt and other kinds of bad feelings. It’s certainly possible that they are non-aversive, but observing a nonzero frequency of the behaviors that cause it isn’t evidence of that.
There may be other evidence, though, which is why I asked Richard his reasons.
Taboo “feeling bad”, keeping in mind that our normal emotional vocabulary is pretty inadequate. (E.g., it seems to me that shame is basically never useful, but guilt and sadness can be.)
Thanks for the taboo request.
I mean I feel X when I’m not being productive. And yet I do not become productive. I have no idea how to taboo qualia like “X”.
Maybe an extensional definition?: That feeling you get when you’ve done something wrong. An uncomfortable and frustrating feeling that makes you feel guilty. A bit like stress.
That’s awfully specific. I wonder how general the non-utility of it is.