The reason people say stuff like “if you empathize more, you’ll feel kinder towards them” is that the negative emotions behind your judgements require prediction error, and understanding necessarily crowds it out.
To give a toy example, it’s easy to get frustrated when a calculator isn’t working because it should work, dammit. You just bought the damn thing, how can it be broken already!? But then, if you stop to wonder whether it has batteries in it and it doesn’t, it gets a lot harder to stay frustrated because it’s a lot harder to hold onto the idea that the calculator “should” work in such a state. You don’t stop judging it as “non-functioning” because obviously it’s non-functioning; you just lower your expectations to match the reality of the situation: if you want it to work, you need to put batteries in.
The recognition of “Oh, this isn’t a functioning calculator” is a necessary step between “expecting the calculator to work” and understanding (or even asking) why it’s not a functioning calculator. So there’s necessarily going to be that “Shit, okay. I guess I don’t have a working calculator” moment, where you have to mourn the loss of what you thought you had before you reorient to the reality you find yourself faced with.
That is to say, feeling disgusted and disappointed is fine. What happens next? What happens once you accept that people do have these values, empathize further, and try to understand where these (perhaps unfortunate) values come from? And then go through as many steps of that as needed before you could say “If I was missing my battery, and X, and Y, and Z, I would also have these messed up values and choose not to take the nail out of my head”?
Suppose Alice’s daughter Beth gets cancer and slowly dies. After a long battle, numerous doctors that tell them Beth’s death is inevitable, and many nights in the hospital, Alice finally watches as Beth breathes her last. Then, Alice feels a stab of intense grief and goes into mourning for the next month.
Do you claim these negative emotions are a result of prediction error, and that Alice would feel zero grief if she only had an accurate understanding of the situation? Color me skeptical.
Another example: suppose Carl is tied to some train tracks and sees a train approaching. As the train gets closer, Carl feels an intense sense of fear, and anger against the person who tied him up. Do you claim this is also a prediction error? The bad thing that Carl is afraid of hasn’t actually happened yet (the train has yet to reach him); where exactly is the error located?
These are good questions. Thanks for the pushback :)
Do you claim these negative emotions are a result of prediction error, and that Alice would feel zero grief if she only had an accurate understanding of the situation?
Yes, but not in the way you think. “[Perfectly] accurate understanding of the situation”, such that there is no grief to experience, is an impossibly high standard. The implication of “If you’re sad you’re just a bad rationalist!” absolutely does not follow. It’s closer to the opposite, in that if you’re flinching from experiencing sadness (or other emotions) you’re resisting updating.
I give some explanation of how this relates to the process of grieving in a different comment downstream (ctrl-f “pet”), but there’s another aspect that I’d like to touch on here.
My Grandpa won life. The man was very successful in business, in marriage, in family. He lived into old age with a full mind, and as active as one can be in his later years. It’s really hard to expect more out of life than this, so when he finally croaked in his nineties… it’s kinda hard to expect more. I mean, yeah, it’d have been nice to have him for a few more years, and yeah, occasionally people live longer. Sure, it’d be nice for aging to have been solved. But overall it’s kinda like “That’s how it’s supposed to go. If only all life went so well”. At his funeral, there were a lot of people smiling and remembering him fondly.
In contrast, lose someone important who is in their twenties, and it’s devastating. There are going to be all sorts of ways in which you expected things to go differently, and updating your maps there (i.e. “grieving”) sucks. Alice’s death sucks not just because you would like more for her, but because you thought she would get more. And she didn’t. And that matters. These funerals are not so fun and full of smiles.
Carl is tied to some train tracks and sees a train approaching. As the train gets closer, Carl feels an intense sense of fear, and anger against the person who tied him up. Do you claim this is also a prediction error? The bad thing that Carl is afraid of hasn’t actually happened yet (the train has yet to reach him); where exactly is the error located?
Yes, most definitely.
The anger error is located in the mismatch between expecting the person who tied him up to have followed some norms which he clearly wasn’t bound by, and the reality that he did not follow the norms. In that situation, I have a hard time imagining being angry, because I can’t see why I’d ever put expectations like that on someone who wasn’t bound by them. Even if it was my own brother I wouldn’t be angry, because I’d be too shocked and confused to take my prediction error as his fault. Not “Fuck you for doing this to me” but “Fuck me for not recognizing this to be possible”.
The fear error is located in the mismatch between expecting to be safe and the reality of not being safe. This one is utterly unreasonable to expect anyone to solve on the fly like that, but 1) when people resign themselves to their fate (the example given to me by the Jewish man who taught me this stuff was Jews in concentration camps), there is no more fear, and 2) when you can easily untie yourself minutes before the train gets there, it’s not so scary anymore because you just get off the tracks.
It’s worth noting that these things can get pretty complicated, and fear doesn’t necessarily feel the way you’d expect when you actually find yourself in similar situations. For example, I have a friend whose rope harness came undone while rappelling, leaving him desperately clinging to the rope while trying to figure out how to safely get to the ground. Afterwards, he said that he was afraid until his harness fell apart. After that, he was just too busy figuring out what to do to feel afraid. Rising to the occasion often requires judiciously dropping prior expectations and coming up with new ones on the fly.
Emotions evolved as a way of influencing our behavior in useful directions. They correspond (approximately—this is evolution we’re talking about) to a prediction that there is some useful way of changing your behavior in response to a situation. Fear tells you to take precautions, anger tells you to retaliate, contempt tells you to reconsider your alliance, etc. (Scott Alexander has a post on ACX theorizing that general happiness and sadness are a way of telling you to take more/fewer risks, but I can’t find it at the moment.)
I think your examples of fear disappearing when people give up hope of escape are explained at least as well by this hypothesis as by yours. Also your example of your friend who “was afraid until his harness fell apart”—that was the moment when “taking precautions” stopped being a useful action, but it seems pretty weird to conjecture that that was the moment when his prediction error disappeared (was he predicting a 100% chance of the harness breaking? or even >50%?).
On my model, examples of people giving up anger when they accept physical determinism strike me as understandable but mistaken. They are reasoning that some person could not have done otherwise, and thus give up on changing the person’s behavior, which causes them to stop feeling anger. But this is an error, because a system operating on completely deterministic rules can still be altered by outside forces—such as a pattern of other people retaliating in certain circumstances.
On my model, the correct reason to get angry at a murderer, but not to get angry at a storm, is that murderers can (sometimes) be deterred, and storms cannot. I think the person who stops feeling anger has performed an incomplete reduction that doesn’t add up to normality.
Notice that my model provides an explanation for why different negative emotions occur in different circumstances: They recommend different actions. As far as I can see, you have not offered an explanation for why some prediction errors cause fear, others anger, others disgust.
Your model also appears to require that we hypothesize that the prediction errors are coming from some inner part of a person that can’t be questioned directly and is also very stupid. We seemingly have to believe that an 8-year-old is scared of the dark because some inner part of them still hasn’t figured out that, yes, it gets dark every night, dipshit (even though the 8yo will profess belief in this, and has overwhelming experience of this). This seems implausible and unfalsifiable.
Emotions evolved as a way of influencing our behavior in useful directions. They correspond (approximately—this is evolution we’re talking about) to a prediction that there is some useful way of changing your behavior in response to a situation. Fear tells you to take precautions, anger tells you to retaliate, contempt tells you to reconsider your alliance, etc.
This isn’t an alternative hypothesis. It’s another part of the same picture.
Notice how it’s a new prediction about how your behavior needs to be changed? That’s because you’re learning that the path you’re currently on was built on false presumptions. Get your predictions right the first time, and none of this is needed.
On my model, examples of people giving up anger when they accept physical determinism strike me as understandable but mistaken. They are reasoning that some person could not have done otherwise, and thus give up on changing the person’s behavior, which causes them to stop feeling anger. [...] I think the person who stops feeling anger has performed an incomplete reduction that doesn’t add up to normality.
Anger is a good example of this.
If you’re running around in the fantasy world of “We’re all just going to be nice to each other, because that’s what we should do, and therefore we should wish only good things on everyone”, then a murderer breaks things. Anger is an appropriate response here, because if you suppress anger (because of rationalizing about determinism or whatever) then you end up saying stupid things like “He couldn’t have done any differently! One dead is enough, we don’t need another to die!”.
But this is a stupid way to go through life in the first place, because it’s completely delusional. When I say that I wouldn’t be angry at someone who tied me to the tracks, that doesn’t mean I’m incapable of retaliation. I’ve never been murdered or tied to train tracks, but one time some friends and I were attacked by strangers who I correctly inferred were carrying knives and willing to use them—and I wasn’t angry at them. But, rather than lamenting “Sigh, I guess he was bound to do this” when fleeing didn’t work, I turned and threw the guy to the ground. While I was smashing his face with my elbows, I wasn’t feeling “GRR! I HATE YOU!”. I was laughing about how it really was as incredibly easy as you’d think, and how stupid he had to be to force a fight with a wrestler that was significantly larger than himself.
Anger is a flinch. If you touch a hot stove, sure, it makes sense to flinch away from the stove. Keeping your hand there—rationalizing “determinism” or otherwise—would be foolish.
But also, maybe you wouldn’t be in this situation, if you weren’t holding to some silly nonsense like “My allies would never betray me”, and instead thought things through and planned accordingly.
And perhaps even more importantly, is your flinch actually gonna work? Flinches often don’t. You don’t want to end up the fool in impotent rage, blubbering about how someone did something “wrong” when you in fact do not have the power to enforce the norm you’re attached to. You want to be the person who can see crystal clear what is going on and what their options are, and who doesn’t hesitate to take them when appropriate. Anger, like flinching in general, is best as a transient event when we get surprised, until we can reorient.
Your model also appears to require that we hypothesize that the prediction errors are coming from some inner part of a person that can’t be questioned directly and is also very stupid.
Heh. This is where we’re going to differ big time. There’s a gigantic inferential chasm here so none of this will ring true, but nevertheless here is my stance:
It is our outer wrappings that are very stupid most of the time. Even when your inner part is stupid too, that’s generally still the fault of the outer wrapping getting in the way and not doing its job. Stupid outer layer notwithstanding, that inner layer can quite easily be questioned directly. And updated directly.
The way I came to this view is by learning hypnosis to better fix these “irrational inner parts”, peeling the onion a layer at a time to come up with methods that work in increasingly diverse circumstances, and eventually recognizing that the things that work most generally are actually what we do by default—until we freak out and make up stories about how we “can’t” and “need hypnosis to fix these non-directly accessible very stupid inner parts”. Turns out those stories aren’t true, and “hypnosis” is an engineered way of fighting delusion with delusion. The stories feel true due to confirmation bias, until you tug at the seams.
Notice how it’s a new prediction about how your behavior needs to be changed? That’s because you’re learning that the path you’re currently on was built on false presumptions. Get your predictions right the first time, and none of this is needed.
It seems to me that you should change your behavior as circumstances change, even if the changes are completely expected. When you step into deep water, you should start swimming; when you step out of the water, you should stop trying to swim and start walking again. This remains true even if the changes are 100% expected.
that inner layer can quite easily be questioned directly. And updated directly.
Do you mean to say that you have some empirical way of measuring these “prediction errors” that you’re referring to, separately from the emotions you claim they explain?
Got any data you can share?
If you use your technique on an 8-year-old who is scared of the dark at night, do you actually predict your technique would reveal that they have a prediction that it won’t get dark at night? Would your technique allow you to “directly update” the 8yo so that they stop being scared of the dark?
It seems to me that you should change your behavior as circumstances change, even if the changes are completely expected. When you step into deep water, you should start swimming;
Yes, your behavior at time t = 0 and time t = 1 ought to be different even if the changes between these times are entirely predicted. But at t = 0, your planned behavior for t = 1 will be swimming if you foresee the drop-off. If you don’t see the drop-off, you get that “Woah!” that tells you that you need to change your idea of what behavior is appropriate for t >= 1.
I guess I should have said “Notice how your planned behavior has to change”.
Do you mean to say that you have some empirical way of measuring these “prediction errors” that you’re referring to, separately from the emotions you claim they explain?
Well, if you were to walk outside and get rained on, would you experience surprise? If you walked outside and didn’t get rained on, would you feel surprised? The answers here tell you what you’re predicting.
If you use your technique on an 8-year-old who is scared of the dark at night, do you actually predict your technique would reveal that they have a prediction that it won’t get dark at night? Would your technique allow you to “directly update” the 8yo so that they stop being scared of the dark?
No, I wouldn’t expect the 8-year-old to be doing “I expect it to not get dark”, but rather something more like “I expect to be able to see a lack of monsters at all times”—which obviously conflicts with the reality that they cannot when the lights are out.
The way I’d approach this depends on the specific context, but I generally would not want to directly update the kid’s beliefs in any simple sort of way. I take issue with the assumption that fear is a problem in the first place, and generally find that in any case remotely like this, direct overwriting of beliefs is a bad thing.
Got any data you can share?
I’m 13 posts into a big sequence laying out my thoughts on this, and it’s full of examples where I’ve achieved what might seem like unusual results from a “this stuff is unconscious and hard” perspective, but which aren’t nearly so impressive once you see behind the curtain.
The one I posted today, for example, shows how I was able to get both of my daughters to be unafraid of getting their shots when they were two years old (separate instances, not twins), and how the active ingredient was “not giving a shit if they’re afraid of their shots”.
If you want more direct proof that I’m talking about real things, the best example would be the transcript where I helped someone greatly reduce his suffering from chronic pain through forum PMs, following the basic idea of “Obviously pain isn’t a problem, but this guy sure seems to think it is, so how is he going wrong exactly?”. That one did eventually overwrite his felt experience of the pain being a bad thing (for the most part), but it wasn’t so quick and direct, because like with the hypothetical scared 8-year-old, a direct overwrite would have been bad.
If you want to learn more about “direct overwriting”, then that’s the section on attention, where I explain how I was able to tell my wife to constrict her blood vessels to stop bleeding in about thirty seconds, and why that isn’t nearly as extraordinary a claim as it might seem like it should be.
I should probably throw together a sequence page, but for now they’re all on my user profile.
Well, if you were to walk outside and get rained on, would you experience surprise? If you walked outside and didn’t get rained on, would you feel surprised? The answers here tell you what you’re predicting.
I feel like I have experienced a lot of negative emotions in my life that were not particularly correlated with a feeling of surprise. In fact, I can recall feeling anger about things where I literally wrote down a prediction that the thing would happen, before it happened.
Conversely, I can recall many pleasant surprises, which involved a lot of prediction error but no negative emotions.
So if this is what you are relying on to confirm your theory, it seems pretty disconfirmed by my life experience. And I’m reasonably certain that approximately everyone has similar observations from their own lives.
I thought this was understood, and the only way I was taking your theory even mildly seriously was on the assumption that you meant something different from ordinary surprise.
No, I wouldn’t expect the 8-year-old to be doing “I expect it to not get dark”, but rather something more like “I expect to be able to see a lack of monsters at all times”
I find it quite plausible they would have a preference for seeing a lack of monsters. I do not find it remotely plausible that they would have a prediction of continuously being able to see a lack of monsters. That is substantially more stupid than the already-very-stupid example of not expecting it to get dark.
Are you maybe trying to refer to our models of how the world “should” work, rather than our models of how it does work? I’m not sure exactly what I think “should” is, but I definitely don’t think it’s the same as a prediction about what actually will happen. But I could maybe believe that disagreements between “should” and “is” models play a role in explaining (some) negative emotions.
If you want more direct proof that I’m talking about real things, the best example would be the transcript where I helped someone greatly reduce his suffering from chronic pain through forum PMs
I am not searching through everything you’ve ever written to try to find something that matches a vague description.
I feel like we’ve been talking for quite a while, and you are making extraordinary claims, and you have not presented ANY noteworthy evidence favoring your model over my current one, and I am going to write you off very soon if I don’t see something persuasive. Please write or directly link some strong evidence.
I feel like I have experienced a lot of negative emotions in my life that were not particularly correlated with a feeling of surprise. In fact, I can recall feeling anger about things where I literally wrote down a prediction that the thing would happen, before it happened.
Ah, that’s what you’re getting at.
Okay, so for example, say you angrily tell your employee “I expect you to show up on time!”. Then, he doesn’t, and you’re not surprised. This shows that you (meta) expected your (object level) expectation of “You will show up on time!” to be false. You’re not surprised because you’re not learning anything, because you’ve chosen not to. Notice the hesitance to sigh and say “Well, I guess he is not going to show up on time”?
This stickiness comes from the desire to control things combined with a lack of sophisticated methods of control. When you accept “He is not going to show up on time”, you lose your ability to tell him “I expect you to show up on time!” and with it your ability to put pressure on him to be punctual. The setpoint you control to is your expectation, so if you update your expectation then you lose your ability to (crudely) attempt to control the person’s behavior. Once you learn more sophisticated methods of control, the anger no longer serves a purpose, so you’re free to update your expectations to match reality. E.g. “I don’t know if you’re going to show up on time, but I do know that if you don’t, you will be fired! No hard feelings either way, have a nice day :)”
This is a really tricky equivalence to wrap one’s mind around, and it took me years to really understand even after I could see that there was something there. I explain this more in my post expectations=intentions=setpoint, and give examples of how more sophisticated attempts to control cede immediate reality and attempt to control towards trajectories instead—with concretely better results.
Conversely, I can recall many pleasant surprises, which involved a lot of prediction error but no negative emotions.
Yeah, I think positive emotions generally require prediction errors too, though I’m less solid on this one. People are generally more willing to update on pleasant surprises so that prediction error discomfort is less likely to persist enough to be notable, though it’s worth noting that this isn’t always the case. Imposter syndrome is an example where people get quite uncomfortable because of this refusal.
The prediction error is not the same as negative emotions. Prediction error is the suffering that happens while you refuse to update, while negative emotions like sadness come while you update. You still have to have erred in order to have something sad to learn, but it’s not the same thing.
Now that I say it, I realize I had the opportunity to clarify earlier, because I did notice that this was a point of confusion, and I chose not to take the opportunity to. I think I see why I did, but I shouldn’t have, and I apologize.
So if this is what you are relying on to confirm your theory, it seems pretty disconfirmed by my life experience. And I’m reasonably certain that approximately everyone has similar observations from their own lives.
Again, giant chasm of inferential distance. You’re not passing the ITT yet, and until you do it’s going to be real tough for you to “test” anything I’m saying, because you will always be testing a misinterpretation. It’s obviously reasonable to be suspicious of such claims as attempts to hide from falsifiability, but at the same time sometimes that’s just how things are—and assuming either way is a poor way of finding truth.
To distinguish, you want to look for signs of cognitive dissonance, not merely things that you disagree with. Because if you conclude that you’re right on the surface level because the other person gets something wrong two levels deep… and your judgment of whether they’re wrong is your perspective, which they disagree with… then you’ve just given up on ever learning anything when there’s a disagreement three or more levels deep. If you wait to see signs that the person is being forced to choose between changing their own mind or ignoring data, then you have a much more solid base.
That, and look for concrete predictions that both sides can agree on. For example, you took the stance that anger was appropriate because without it you become susceptible to murderers and don’t retaliate—but once I pointed out the alternative, I don’t think you doubt that I was actually able to fight back without anger? Or is that genuinely hard to believe for you?
I find it quite plausible they would have a preference for seeing a lack of monsters. I do not find it remotely plausible that they would have a prediction of continuously being able to see a lack of monsters. That is substantially more stupid than the already-very-stupid example of not expecting it to get dark.
Hey, kids are stupid. Adults too. Sometimes people even keep expecting people to not piss them off, even when they know that the person will piss them off :p
Jokes aside, this is still the “expectations=intentions” thing. We try to not see our expectations as predictions when we’re using them as intents, but they function as predictions nonetheless.
Are you maybe trying to refer to our models of how the world “should” work, rather than our models of how it does work? I’m not sure exactly what I think “should” is, but I definitely don’t think it’s the same as a prediction about what actually will happen. But I could maybe believe that disagreements between “should” and “is” models play a role in explaining (some) negative emotions.
“Should” is used as an attempt to disconnect ourselves from updating on what will happen in order to try to make something happen—because we recognize that it will likely fail and want to try anyway. If I say “You will show up on time” as if it’s a matter of fact, that’s either powerful… or laughable. And if I sense that I don’t have the authority to pull that off, I’m incentivized to back off to “You SHOULD show up on time” so that I don’t have to accept “I guess you won’t show up on time, huh?” when you don’t. I can always say “Okay maybe he won’t BUT HE SHOULD” and immediately negate the uncomfortable reality.
So “Yes, I’m talking about our models of how the world should work”, and also that is necessarily the same as our models of how the world does work—even if we also have meta models which identify the predictable errors in our object level models and try to contain them.
Maybe that part could use more emphasis. Of course we have meta models that contradict these obviously wrong object level models. We know that we’re probably wrong, but on the object level that doesn’t make us any less wrong until we actually do the update.
I am not searching through everything you’ve ever written to try to find something that matches a vague description.
That’s fine, no pressure to do anything of course. For what it’s worth though, it’s very clearly labeled. There’s no way you wouldn’t recognize at a glance.
I feel like we’ve been talking for quite a while, and you are making extraordinary claims, and you have not presented ANY noteworthy evidence favoring your model over my current one,
I don’t think that’s fair. For one, your model said you need anger in order to retaliate, and I gave an example of how I didn’t need anger in order to retaliate. I think the fact that I don’t always struggle with predictable anger while simultaneously not experiencing your predicted downsides is clear evidence, do you not?
Of course, this isn’t strong evidence that I’m right about anything else, but it’s absolutely fatal to the idea that your model accurately depicts the realm of possibility. If your model gets one thing this wrong, this unexpectedly, how can you trust it to tell you what else to view as “extraordinary”?
and I am going to write you off very soon if I don’t see something persuasive. Please write or directly link some strong evidence.
You’re welcome to read my posts, or not. They’re quite long and I don’t expect you to read them, but they’re there if you want a better understanding of what I’m talking about.
Either way, I’m happy to continue because I can see that you’re engaging in good faith even though you’re skeptical (and maybe a bit frustrated?), and I appreciate the pushback. At the end of the day, neither my ability to retaliate without anger, nor my ability to help kids overcome fear by understanding their predictions, hinges on you believing in it.
At the same time, I’m curious if you’ve thought about how it looks from my perspective. You’ve written intelligent and thoughtful responses which I appreciate, but are you under the impression that anything you’ve written provides counter-evidence? Do you picture me thinking “Yes, that’s what I’m saying” before you argue against what you think I’m saying?
I don’t think that’s fair. For one, your model said you need anger in order to retaliate, and I gave an example of how I didn’t need anger in order to retaliate.
I didn’t respond to this because I didn’t see it as posing any difficulty for my model, and didn’t realize that you did.
I don’t think you need anger in order to retaliate. I think anger means that the part of you that generates emotions (roughly, Kahneman’s system 1) wants to retaliate. Your system 2 can disagree with your system 1 and retaliate when you’re not angry.
Also, your story didn’t sound to me like you were actually retaliating. It sounded to me like you were defending yourself, i.e. taking actions that reduced the other guy’s capability of harming you. Retaliation (on my model) is when you harm someone else in an effort to change their decisions (not their capabilities), or the decisions of observers.
So I’m quite willing to believe the story happened as you described it, but this was 2 steps removed from posing any problem to my model, and you didn’t previously explain how you believed it posed a problem.
I also note that you said “for one” (in the quote above) but then there was no number two in your list.
If you wait to see signs that the person is being forced to choose between changing their own mind or ignoring data, then you have a much more solid base.
I do see a bunch of signs of that, actually:
I claimed that your example of your friend being afraid until their harness broke seems to be better explained by my model than yours, because that would be an obvious time for the recommended action to change but a really weird time for his prediction error to disappear. You did not respond to this point.
I claimed that my model has an explanation for how different negative emotions are different and why you experience different ones in different situations, and your model seemingly does not, and this makes my model better. You did not respond to this point.
I asked you if you had a way of measuring whatever you mean by “prediction error”, so that we could check how well the measurements fit your model. You told me to use my own feelings of surprise. When I pointed out that doesn’t match your model, you said that you meant something different, but didn’t clarify what you meant, and did not provide a new answer to the earlier question about how you measure “prediction error”. This looks like you saying whatever deflects the current point without keeping track of how the current point is related to previous points.
Note that I don’t actually need to understand what you mean in order for the measurement to be interesting. You could hand me a black box and say “this measures the thing I’m talking about” and if the black box produces measurements that correlate with your predictions that would be interesting even if I have no clue how the black box works (as long as I don’t see an uninteresting way of deriving your predictions from its inputs). But you haven’t done this, either.
I gave an example where I made an explicit prediction, and then was angry when it came true. You responded by ignoring my example and substituting your own hypothetical example where I made an explicit prediction and then was angry when it was falsified. This looks like you shying away from examples that are hard for your theory to explain and instead rehearsing examples that are easier.
You have claimed that there’s evidence in your other writing, but have refused to prioritize it so that I can find your best evidence as quickly as possible. This looks like an attempt to dissuade me from checking your claims by maximizing the burden of effort placed on me. In a cooperative effort of truth-seeking, you ought to be the one performing the prioritization of your writing because you have a massive advantage in doing so.
Many of your responses seem like you are using my points to launch off on a tangent, rather than addressing my point head-on.
So “Yes, I’m talking about our models of how the world should work”, and also that is necessarily the same as our models of how the world does work—even if we also have meta models which identify the predictable errors in our object level models and try to contain them.
This seems like it’s just a simple direct contradiction. You’re saying that model X and model Y are literally the same thing, but also that we keep track of the differences between them. There couldn’t be any differences to track if they were actually the same thing.
I also note that you claimed these are “necessarily” the same, but provided no reasoning or evidence to back that up; it’s just a flat assertion.
At the same time, I’m curious if you’ve thought about how it looks from my perspective. You’ve written intelligent and thoughtful responses which I appreciate, but are you under the impression that anything you’ve written provides counter-evidence? Do you picture me thinking “Yes, that’s what I’m saying” before you argue against what you think I’m saying?
There are some parts of your model that I think I probably roughly understand, such as the fact that you think there’s some model inside a person making predictions (but it’s not the same as the predictions they profess in conversation) and that errors in these predictions are a necessary precondition to feeling negative emotions. I think I can describe these parts in a way you would endorse.
There are some parts of your model that I think I probably don’t understand, like where is that model actually located and how does it work.
There are some parts of your model that I think are incoherent bullshit, like where you think “should” and “is” models are the same thing but also we have a meta-model that tracks the differences between them, or where you think telling me to pay attention to my own feelings of surprise makes any sense as a response to my request for measurements.
I don’t think I’ve written anything that directly falsifies your model as a whole—which I think is mostly because you haven’t made it legible enough.
But I do think I’ve pointed out:
several ways in which my model wins Bayes points against yours
several ways that your model creates more friction than mine with common-sensical beliefs across other domains
several ways in which your own explanations of your model are contradictory or otherwise deficient
that there is an absence of support on your side of the discussion
I don’t think I require a better understanding of your model than I currently have in order for these points to be justified.
You’re extending yourself an awful lot of charity here.
For example, you accuse me of failing to respond to some of your points, and claim that this is evidence of cognitive dissonance, yet you begin this comment with:
I didn’t respond to this because I didn’t see it as posing any difficulty for my model, and didn’t realize that you did.
Are you really unable to anticipate that this is very close to what I would have said, if you had asked me why I didn’t respond to those things? The only reason that wouldn’t be my exact answer is that I’d first point out that I did respond to those things, by pointing out that your arguments were based on a misunderstanding of my model! This doesn’t seem like a hard one to get right, if you were extending half the charity to me that you extend yourself, you know? (should I be angry with you for this, by the way?)
As to your claim that it doesn’t pose difficulty to your model, and attempts to relocate goal posts, here are your exact words:
I think the person who stops feeling anger has performed an incomplete reduction that doesn’t add up to normality.
This is wrong. It is completely normal to not feel anger, and retaliate, when you have accurate models instead of clinging to inaccurate models, and I gave an example of this. Your attempt to pick the nit between “incapacitation” vs “dissuasion” is very suspect as well, but also irrelevant because dissuasion was also a goal (and effect) of my retaliation that night. I could give other examples too, which are even more clearly dissuasion not incapacitation, but I think the point is pretty clear.
And no, even with the relocated goalposts your explanation fails. That was a system 1 decision, and there’s no time for thinking slow when you’re in the midst of something like that.
You have claimed that there’s evidence in your other writing, but have refused to prioritize it so that I can find your best evidence as quickly as possible.
No, I made it very clear. If you have a fraction of the interest it would take to read the post and digest the contents, you would spend the ten seconds needed to pull up the post. This is not a serious objection.
Again, it’s totally understandable if you don’t want to take the time to read it. It’s a serious time and effort investment to sit down and not only read but make sense of the contents, so if your response were to be “Hey man, I got a job and a life, and I can’t afford to spend the time especially given that I can’t trust it’ll change my mind”, that would be completely reasonable.
But to act like “Nope, it doesn’t count because you can’t expect me to take 10 seconds to find it, and therefore must be trying to hide it” is… well, can you see how that might come across?
>So “Yes, I’m talking about our models of how the world should work”, and also that is necessarily the same as our models of how the world does work—even if we also have meta models which identify the predictable errors in our object level models and try to contain them.
This seems like it’s just a simple direct contradiction. You’re saying that model X and model Y are literally the same thing, but also that we keep track of the differences between them. There couldn’t be any differences to track if they were actually the same thing.
So if I tell you that the bottle of distilled water with “Drinking water” scribbled over the label contains the same thing as the bottle of distilled water that has “coolant” scribbled on it… and that the difference is only in the label… would you understand that? Would that register to you as a coherent possibility?
I’m sorry, but I’m having a hard time understanding which part of this is weird to you. Are you really claiming that you can’t see how to make sense of this?
>At the same time, I’m curious if you’ve thought about how it looks from my perspective.
There are some parts of your model that I think I probably roughly understand, [...] But I do think I’ve pointed out:
You’re missing the point of my question. Of course you think you’ve pointed that stuff out. I’m not asking if you believe you’re justified in your own beliefs.
There are a lot of symmetries here. You said some things that [you claim] I didn’t respond to. I said some things which [I claim] you didn’t respond to. Some of the things I say strike you as either missing the point or not directly responding to what you say. A lot of the things that you’ve said strike me in the same way. Some of my responses [you claim] look like cognitive dissonance to you. Some of your responses [I claim] look that way to me. I’m sure you think it’s different because your side really is right, and my side really is wrong. And of course, I feel the same way. This is all completely normal for disagreements that run more than a step or two deep.
But then you go on to act like you don’t notice the symmetry, as if your own perspective objectively validates your own side. You start to posture stuff like “You haven’t posted any evidence [that I recognize]” and “I’m gonna write you off, if you don’t persuade me”, with no hint to the possibility that there’s another side to this coin.
The question is, do you see how silly this looks, from my perspective? Do you see how much this looks like you’re missing the self awareness that is necessary in order to have a hope of noticing when you’re inhabiting a mistaken worldview, which pats itself on the back prematurely?
Because if you do, then perhaps we can laugh about our situation together, and go about figuring out how to break this asymmetry. But if you don’t, or if you try to insist “No, but my perspective really is better supported [according to me]”, the symmetry is already broken.
Are you really unable to anticipate that this is very close to what I would have said, if you had asked me why I didn’t respond to those things? The only reason that wouldn’t be my exact answer is that I’d first point out that I did respond to those things, by pointing out that your arguments were based on a misunderstanding of my model! This doesn’t seem like a hard one to get right, if you were extending half the charity to me that you extend yourself, you know? (should I be angry with you for this, by the way?)
You complain that I failed to anticipate that you would give the same response as me, but then immediately give a diametrically opposed response! I agreed that I didn’t respond to the example you highlighted, and said this was because I didn’t pick up on your implied argument. You claim that you did respond to the examples I highlighted. The accusations are symmetrical, but the defenses are very much not.
I did notice that the accusations were symmetrical, and because of that I very carefully checked (before posting) whether the excuse I was giving myself could also be extended to you, and I concluded definitively that it couldn’t. My examples made direct explicit comparisons between my model and (my model of) your model, and pointed out concrete ways that the output of my model was better; it seems hugely implausible you failed to understand that I was claiming to score Bayes points against your model. Your example did not mention my model at all! (It contrasts two background assumptions, where humans are either always nice or not, and examines how your model, and only your model, interacts with each of those assumptions. I note that “humans are always nice” is not a position that anyone in this thread has ever defended, to my knowledge.)
And yes, I did also consider the meta-level possibility that my attempt to distinguish between what was said explicitly and what wasn’t is so biased as to make its results useless. I have a small but non-zero probability for that. But even if that’s true, that doesn’t seem like a reason to continue the argument; it seems like proof that I’m so hopeless that I should just cut my losses.
I considered including a note in my previous reply explaining that I’d checked if you could use my excuse and found you couldn’t, but I was concerned that would feel like rubbing it in, and the fact that you can’t use my excuse isn’t actually important unless you try to use it, and I guessed that you wouldn’t try. (Whether that guess was correct is still a bit unclear to me—you offer an explanation that seems directly contradictory to my excuse, but you also assert that you’re saying the same thing as me.)
If you are saying that I should have guessed the exact defense you would give, even if it was different from mine, then I don’t see how I was supposed to guess that.
If you are saying that I should have guessed you would offer some defense, even if I didn’t know the details, then I considered that moderately likely but I don’t know what you think I should have done about it.
If I had guessed that you would offer some defense that I would accept then I could have updated to the position I expected to hold in the future, but I did not guess that you’d have a defense I would accept; and, in fact, you don’t have one. Which brings us to...
(re-quoted for ease of reference)
I did respond to those things, by pointing out that your arguments were based on a misunderstanding of my model!
I have carefully re-read the entire reply that you made after the comment containing the two examples I accused you of failing to respond to.
Those two examples are not mentioned anywhere in it. Nor is there a general statement about “my examples” as a group. It has 3 distinct passages, each of which seems to be a narrow reply to a specific thing that I said, and none of which involve these 2 examples.
Nor does it include a claim that I’ve misapplied your model, either generally or related to those particular examples. It does include a claim that I’ve misunderstood one specific part of your model that was completely irrelevant to those two examples (you deny my claim that the relevant predictions are coming from a part of the person that can’t be interrogated, after flagging that you don’t expect me to follow that passage due to inferential distance).
Your later replies did make general claims about me not understanding your model several times. I could make up a story where you ignored these two examples temporarily and then later tried to address them (without referencing them or saying that that was what you were doing), but that story seems neither reasonable nor likely.
Possibly you meant to write something about them, but it got lost in an editing pass?
Or (more worryingly) perhaps you responded to my claim that you had ignored them not by trying to find actions you took specifically in response to those examples, but instead by searching your memory of everything you’ve said for things that could be interpreted as a reply, and then reported what you found without checking it?
In any case: You did not make the response you claimed that you made, in any way that I can detect.
Communication is tricky!
Sometimes both parties do something that could have worked, if the other party had done something different, but they didn’t work together, and so the problem can potentially be addressed by either party. Other times, there’s one side that could do something to prevent the problem, but the other side basically can’t do anything on their own. Sometimes fixing the issue requires a coordinated solution with actions from both parties. And in some sad situations, it’s not clear the issue can be fixed at all.
It seems to me that these two incidents both fall clearly into the category of “fixable from your side only”. Let’s recap:
(1) When you talked about your no-anger fight, you had an argument against my model, but you didn’t state it explicitly; you relied on me to infer it. That inference turned out to be intractable, because you had a misunderstanding about my position that I was unaware of. (You hadn’t mentioned it, I had no model that had flagged that specific misunderstanding as being especially likely, and searching over all possible misunderstandings is infeasible.)
There’s an obvious, simple, easy, direct fix from your side: State your arguments explicitly. Or at least be explicit that you’re making an argument, and you expect credit. (I mistook this passage as descriptive, not persuasive.)
I see no good options from my side. I couldn’t address it directly because I didn’t know what you’d tried to do. Maybe I could have originally explained my position in a way that avoided your misunderstanding, but it’s not obvious what strategy would have accomplished that. I could have challenged your general absence of evidence sooner—I was thinking it earlier, but I deferred that option because it risked degrading the conversation, and it’s not clear to me that was a bad call. (Even if I had said it immediately, that would presumably just accelerate what actually happened.)
If you have an actionable suggestion for how I could have unilaterally prevented this problem, please share.
(2) In the two examples I complained you didn’t respond to, you allege that you did respond, but I didn’t notice and still can’t find any such response.
My best guess at the solution here is “you need to actually write it, instead of just imagining that you wrote it.” The difficulty of implementing that could range from easy to very hard, depending on the actual sequence of events that led to this outcome. But whatever the difficulty, it’s hard to imagine it could be easier to implement from my side than yours—you have a whole lot of relevant access to your writing process that I lack.
Even assuming this is a problem with me not recognizing it rather than it not existing, there are still obvious things you could do on your end to improve the odds (signposting, organization, being more explicit, quoting/linking the response when later discussing it). Conversely, I don’t see what strategy I could have used other than “read more carefully,” but I already carefully re-read the entire reply specifically looking for it, and still can’t find it.
I understand it’s possible to be in a situation where both sides have equal quality but both perceive themselves as better. But it’s also possible to be in a situation where one side is actually better and the other side falsely claims it’s symmetrical. If I allowed a mere assertion of symmetry from the other guy to stop me from ever believing the second option, I’d get severely exploited. The only way I have a chance at avoiding both errors is by carefully examining the actual circumstances and weighing the evidence case-by-case.
My best judgment here is that the evidence weighs pretty heavily towards the problems being fixable from your side and not fixable from my side. This seems very asymmetrical to me. I think I’ve been as careful as I reasonably could have been, and have invested a frankly unreasonable amount of time into triple-checking this.
Before I respond to your other points, let me pause and ask if I have convinced you that our situation is actually pretty asymmetrical, at least in regards to these examples? If not, I’m disinclined to invest more time.
Before I respond to your other points, let me pause and ask if I have convinced you that our situation is actually pretty asymmetrical, at least in regards to these examples? If not, I’m disinclined to invest more time.
Oh, the situation is definitely asymmetrical. In more ways than you realize.
However, the important part of my comment was this:
The question is, do you see how silly this looks, from my perspective? Do you see how much this looks like you’re missing the self awareness that is necessary in order to have a hope of noticing when you’re inhabiting a mistaken worldview, which pats itself on the back prematurely?
Because if you do, then perhaps we can laugh about our situation together, and go about figuring out how to break this asymmetry. But if you don’t, or if you try to insist “No, but my perspective really is better supported [according to me]”, the symmetry is already broken.
If you can’t say “Shoot, I didn’t realize that”, or “Heh, yeah I see how it definitely looks more symmetrical than I was giving credit for (even though we both know there are important dissymmetries, and disagree on what they are)”, and instead are going to spend a lot of words insisting “No, but my perspective really is better supported [according to me]”… after I just did you the favor of highlighting how revealing that would be… then again, the symmetry is already broken in the way that shows which one of us is blind to our limitations.
There’s another asymmetry though, which has eluded you:
Despite threatening to write me off, you still take me seriously enough to write a long comment trying to convince me that you’re right, and expect me to engage with it. Since you failed to answer the part that matters, I can’t even take you seriously enough to read it. Ironically, this would have been predictable to you if not for your stance on prediction errors, Lol.
Also, with a prediction error like that, you’re probably not having as much fun as I am, which is a shame. I’m genuinely sorry it turned out the way it did, as I was hoping we’d get somewhere interesting with this. I hope you can resolve your error before it eats at you too much, and that you can keep a sense of humor about things :)
We can be, if you want. And I certainly wouldn’t blame you for wanting to bail after the way I teased you in the last comment.
I do want to emphasize that I am sincere in telling you that I hope it doesn’t eat at you too much, and that I hoped for the conversation to get somewhere interesting.
If you turn out to be a remarkably good sport about the teasing, and want to show me that you can represent how you were coming off to me, I’m still open to that conversation. And it would be a lot more respectful, because it would mean addressing the reason I couldn’t take your previous comment seriously.
No expectations, of course. Sincere best wishes either way, and I hope you forgive me for the tease.
Understanding crowds out prediction error, it does not necessarily crowd out negative emotions, which is part of the point of this article.
That is, I understand the last paragraph, but it does not then go ‘thus I feel kindness’ necessarily. There may be steps to take to try to help them up, but that does not necessitate kindness, I can feel disgust at someone I know who could do so much more while still helping them.
Possibly one phrasing of it, based on your calculator example: there’s no need for there to be a “lower expectations” step. I can still have the dominant negative emotion that the calculator (and the calculator company) did not include a battery, even if I understand why.
Understanding crowds out prediction error, it does not necessarily crowd out negative emotions, which is part of the point of this article.
No, it actually does. Which is the point of my comment :P
When I say “prediction error” I don’t mean verbally saying stuff like “I predict X” and then not having bets scored in your favor. I mean that thing where your brain expects one thing, sensory data comes in suggesting otherwise, and you get all uncomfortable because reality isn’t doing what it’s “supposed to”.
In other words, your actual predictions, not necessarily the things that you declare your predictions to be.
I can feel disgust at someone I know who could do so much more while still helping them. Possibly one phrasing of it as based on your calculator example, is there’s no need for there to be a “lower expectations” step.
You could, yes, but it would require mismodeling them as someone who could do more than they actually can given the very real limitations which you may or may not understand yet. I can stay as furious as I want at the calculator, but only if I shut out of my mind the fact that of course it can’t work without a battery, stupid. The fact that I might say “I know I know, there’s no battery but...” doesn’t negate the fact that I’m deliberately acting without this knowledge. It just means I’m flinching away from this aspect of reality.
And it turns out, that’s not a good idea. Accurately modeling people, and credibly conveying these accurate models so that they can recognize and trust that you have accurately modeled them, is incredibly important for helping people. Good luck getting people to open themselves to your help while you view them as disgusting.
I can still have the dominant negative emotion that the calculator and the calculator company did not include a battery, even if I understand why.
This is just kicking the can one step further. You can still be annoyed, but you can no longer be annoyed at “the stupid calculator!” for not working. You have to be annoyed at the company for not including batteries—if you can pull that one off.
But hey, why did they not include batteries? If it turns out that it’s illegal for whatever reason and they literally can’t because the authorities check, where goes your annoyance now?
If your reasoning results in “I can’t have negative emotions about things where I deeply understand the causes”, then I think you’ve made a misstep.
You could, yes, but it would require mismodeling them as someone who could do more than they actually can given the very real limitations which you may or may not understand yet.
They could have done more. The choices were there in front of them, and they failed to choose them.
I will feel more positive flavored emotions like kindness/sadness if they’re pushed into hard choices where they have to decide between becoming closer to their ideal or putting food on the table; with the converse of feeling substantially less positive when the answer is that they were dazedly browsing social media.
With enough understanding I could trace back the route which led to them relying more and more on social media as it fills some hole of socialization they lack (that part is easy to do), and still retain my negative emotions while holding this deeper understanding.
Accurately modeling people, and credibly conveying these accurate models so that they can recognize and trust that you have accurately modeled them, is incredibly important for helping people. Good luck getting people to open themselves to your help while you view them as disgusting.
I disagree that I am inaccurately modeling them, because I dispute the absolute connection between negative emotion and prediction error in the first place. I can understand them. I can accurately feel the mental pushes that push against their mind; I’ve felt them myself many times. And yet still be disquieted, disappointed in their actions.
Regardless, I do not have issues getting along with someone even if I experience negative emotions about how they’ve failed to reach farther in the past—just like I can do so even if their behavior, appearance, and so on are displeasing. This will be easier if I do something vaguely like John’s move of ‘thinking of them like a cat’, but it is not necessary for me to be polite and friendly.
Word-choice implication nitpick: Common usage of “lower expectations” means a mix of literal prediction and also moral/behavioral standards. I might have a ‘low expectation’ in the sense that a friend rarely arrives on time while still holding them to ‘high expectations’ in the what-is-good sense!
This is just kicking the can one step further. You can still be annoyed, but you can no longer be annoyed at “the stupid calculator!” for not working. You have to be annoyed at the company for not including batteries—if you can pull that one off.
No, I can be annoyed at the calculator and the company. There’s no need for my annoyance to be moved down the chain like I only have 1 Unit of Annoyance to divvy out.
Or, you can view it as cumulative if that makes more sense, in that it ties back into the overall emotions about the calculator. If I learn that supplying batteries is illegal, my annoyance with the company does decrease, but then it moves primarily to the authorities. Some remains still, and I’m still annoyed at the calculator despite understanding why it doesn’t have a battery.
I do think the calculator metaphor starts to break apart, because a calculator is not the system that feeds-back-on-itself to then decide on no batteries.
Humans are complex, and I love them for it, their decisions, mindset, observations, thought processes, and so much more loop back in on themselves to shape the actions they take in the world. …That includes both their excellent actions where they do great things, reach farther, become closer to their ideals… as well as when they falter, when they get ground down by short-term optimization leaving them unable to focus on ways to improve themselves, and find themselves falling short.
But that does mean my negative emotions will be more centered on humans, on their beliefs and more. Some of this negative evaluation bleeds off to social media companies optimizing short-form content feeds, or society in vague generality for lack of ambition, but as I said before it isn’t 1 Unit of Annoyance to spread around like jam.
That is, you’re talking about this like it’s the concept of blame, when negative emotions and blame are not necessarily the same thing.
Paired with this: You appear to be implicitly taking a hard determinist sort of stance, wherein concepts like blame and ‘being able to choose otherwise’ start dissolving, but I find that direction questionable in the first place. We can still judge people’s decisions, it is normal that their actions are influenced by their interactions with the world, and I can still feel negative emotions about their choices. That they were not able to do better, that their decisions did not go elsewise, that they failed to reinforce good decisions, and more.
I do take a hard deterministic stance, so I’d like to hear your thoughts here. Do you agree w/ the following?
People literally can’t make different choices due to determinism
Laws & punishments are still useful for setting the right incentives that lead to better outcomes
You’re allowed to have negative emotions given other people’s actions (see #1), but those emotions don’t necessarily lead to better outcomes or incentives
I remember being 9 years old & being sad that my friend wasn’t going to heaven. I even thought “If I was born exactly like them, I would’ve made all the same choices & had all the same experiences, and not believed in God”. I still think that if I’m 100% someone else, then I would end up exactly as they are.
I think the counterfactual you’re employing (correct me if wrong) is “if my brain was in their body, then I wouldn’t...” or “if I had their resources, then I wouldn’t...”, which is saying you’re only [80]% that person. You’re leaving out a part of them that made them who they are.
Now, you could still argue #2, that these negative emotions set correct incentives. I’ve only heard second-hand of extreme situations where that worked [1], but most of the time it backfires:
Son calls their parent after a while “Oh son, you never call! Shame shame”
The child says they’re sorry, but the parent demands that they show/feel remorse or it doesn’t count.
One of my teachers I still talk to pushed a student against the wall, yelling at them that they’re wasting their life w/ drugs/etc, fully expecting to get fired afterwards. They didn’t get fired & the student cleaned up (I believe this was in the late ’90s though)
Yes. But also that people are still making those choices.
Yes. But I would point out that ‘punishment’ in the moral sense of ‘hurt those who do great wrongs’ still holds just fine in determinism for the same reasons it originally did, though I personally am not much of a fan.
Yes, just like I can be happy in a situation where that doesn’t help me.
“if my brain was in their body, then I wouldn’t...” or “if I had their resources, then I wouldn’t...”, which is saying you’re only [80]% that person. You’re leaving out a part of them that made them who they are.
No, it is more that I am evaluating from multiple levels.
There is
basic empathy: knowing their own standards and feeling them, understanding them.
‘idealized empathy’: Then I often have extended sort of classical empathy where I am considering based on their higher goals, which is why I often mention ideals. People have dreams they fail to reach, and I’d love them to reach further, and yet it disappoints me when they falter because my empathy reaches towards those too.
Values: Then of course my own values, which I guess could be considered the 80% that person, but I think I keep the levels separate; all the considerations have to come together in the end. I do have values about what they do, and how their mind succeeds.
Some commenters seemingly don’t consider the higher ideals sort or they think of most people in terms of short-term values; others are ignoring the lens of their own values.
So I think I’m doing multiple levels of emulation: by-my-values, in-the-moment, reflection, etc. They all inform my emotions about the person.
I remember being 9 years old & being sad that my friend wasn’t going to heaven. I even thought “If I was born exactly like them, I would’ve made all the same choices & had all the same experiences, and not believed in God”. I still think that if I’m 100% someone else, then I would end up exactly as they are.
And I agree. If I ‘became’ someone I was empathizing with entirely then I would make all their choices.
However, I don’t consider that notably relevant!
They took those actions, yes influenced by all there is in the world, but what else would influence them? They are not outside physics.
Those choices were there, and all the factors that make them up as a person were what decided their actions.
If I came back to a factory the next day and noticed the steam engine had failed, I’d consider that negative even while knowing that there must have been a long chain of cause and effect. I’ll try fixing the causes… which usually ends up routing through whatever human mind was meant to work on the steam engine, as we are very powerful reflective systems. For human minds themselves that make poor choices? That often routes back through themselves.
I do think that the hard-determinist stance often, though of course not always, comes from post-Christian style thought which views the soul as atomically special: people then still think of themselves as ‘needing to be’ outside physics in some important sense, rather than fully adapting their ontology. As if choices made within determinism were equivalent to being tied up by ropes, when there is actually a distinction between the two scenarios.
Now, you could still argue #2, that these negative emotions set correct incentives. I’ve only heard second-hand of extreme situations where that worked [1], but most of the time it backfires
A negative emotion can still push me to spend more effort on someone, though it usually needs to be paired with a belief that they could become better. Just because you have a negative emotion doesn’t mean you only output negative-emotion flavored content.
I’ll generally be kind to people even if I think their choices are substantially flawed and that they could improve themselves.
I do think that the example of your teacher is one that can work, I’ve done it at least once though not in person, and it helped but it definitely isn’t my central route. This is effectively the ‘staging an intervention’ methodology, and it can be effective but requires knowledge and benefits greatly from being able to push the person.
But, as John points out, a negative emotion may not be what people are wanting, because I’m not going to have a strong kindness about how hard someone’s choices were… when I don’t respect those choices in the first place. However, giving them full positive empathy is not necessarily good either; it can feel nice but rarely fixes things.
Which is why you focus on ‘fixing things’, advice, pointing out where they’ve faltered, and more if you think they’ll be receptive. They often won’t be, because most people have a mix of embarrassment at these kinds of conversations and a push to ignore them.
You appear to be implicitly taking a hard determinist sort of stance, wherein concepts like blame and ‘being able to choose otherwise’ start dissolving, but I find that direction questionable in the first-place.[...] If your reasoning results in “I can’t have negative emotions about things where I deeply understand the causes”, then I think you’ve made a misstep.
I certainly understand why you think that. I used to think that myself. I pushed back myself when I first heard someone take such a “ridiculous” stance. And yet, it proved to be true, so I changed my mind.
The thing that I was missing then, and which you’re missing now, is that the bar for deep careful analysis is just a lot higher than you think (or most anyone thinks). It’s often reasonable to skimp out and leave it as “because they’re bad/lazy/stupid”/”they shouldn’t have” or whatever you want to round it to, but these things are semantic stopsigns, not irreducible explanations.
Pick an issue, any issue, and keep at the analysis until you do get to something irreducible. Okay, so you’ve kicked the can one step further and are upset with the people who banned shipping batteries or whatever. Why did they do it? Keep asking “Why? Why? Why?” like a curious two year old, until there is no more “why?”. If, after you feel like you’ve hit the end of the road, you still have annoyance with the calculator itself, go back and ask why? “I’m annoyed that the calculator doesn’t work… without batteries?” How do you finish the statement of annoyance?
The way I was initially convinced of this was by picking something fake and subjecting myself to that “overconfident” guy’s incessant questioning, with the expectation of proving to him that it was endless. It wasn’t; he won. Since then I’ve done it with many more real things, and the answer is always the same. Empirically, what happens is that you can keep going and keep going, until you can’t, and at that point there’s just no more negativity around that spot because it’s been crowded out. It doesn’t matter if it’s annoyance, or sadness, or even severe physical pain. If you do your analysis well, the experience shifts, and loses its negativity.
If you’re feeling “badness” and you think you have a full understanding, that feeling of badness itself contains the clues about where you’re wrong.
They could have done more. The choices were there in front of them, and they failed to choose them.
This is a bit of a distraction, but Thane covered it pretty well:
In other words, there are reasons for their choices. Do you understand why they chose the way they did?
Regardless, I do not have issues getting along with someone even if
Notice the movement of goal posts here? I’m talking about successfully helping people; you’re saying you can “get along”. Getting along is easy. I’m sure you can offer what passes as empathy to the girl with the nail in her head, instead of fighting her like a belligerent dummy.
But can you exclaim “You got a nail in your head, you dummy!” and have her laugh with you, because you’re obviously correct? If you can’t trivially get her to agree that the problem is the nail, and figure out with you what to do about it, then your mismodeling is getting in the way.
This higher level of ability is achievable, and the path to get there is better modeling than you thought possible.
The thing that I was missing then, and which you’re missing now, is that the bar for deep careful analysis is just a lot higher than you think (or most anyone thinks). It’s often reasonable to skimp out and leave it as “because they’re bad/lazy/stupid”/”they shouldn’t have” or whatever you want to round it to, but these things are semantic stopsigns, not irreducible explanations.
No, I believe I’m fully aware of the level of deep careful analysis, and I understand why it pushes some people to sweep all facets of negativity or blame away; I just think they’re confused, because their understanding of emotions/relations/causality hasn’t updated properly alongside their new understanding of determinism.
“I’m annoyed that the calculator doesn’t work… without batteries?” How do you finish the statement of annoyance?
Because I wanted the calculator to work, I think it is a good thing for calculators in stores to work, I am frustrated that the calculator didn’t work… none of this is exotic, nor is it purely prediction error.
(nor do prediction error related emotions have to go away once you’ve explained the error… I still feel emotional pain when a pet dies even if I realize all the causes why; why would that not extend to other emotions related to prediction error?)
Empirically, what happens is that you can keep going and keep going, until you can’t, and at that point there’s just no more negativity around that spot because it’s been crowded out. It doesn’t matter if it’s annoyance, or sadness, or even severe physical pain. If you do your analysis well, the experience shifts, and loses its negativity.
You assert this but I still don’t agree with it. I’ve thought long and hard about people before and the causes that make them do things, but no, this does not match my experience. I understand the impulse to sweep away negative emotions once you’ve found an explanation, like realizing that humanity’s lack of coordination is a big problem, but I can still very well feel negative emotions about that despite there being an explanation.
In other words, there are reasons for their choices. Do you understand why they chose the way they did?
Relatively often? Yes.
I don’t blame people for not outputting the code for an aligned AGI because it is something that would have been absurdly hard to reinforce in yourself to become the kind of person to do that.
If someone has a disease that makes so they struggle to do much at all, I am going to judge them a hell of a lot less. Most humans have the “disease” that they can’t just smash out the code for an aligned AGI.
I can understand why someone is not investing more time studying, and I can even look at myself and relatively well pin down why, and why it is hard to get over that hump… I just don’t dismiss the negative feeling even though I understand why.
They ‘could have’, because the process-that-makes-their-decisions is them and not some separate third-thing.
I fail to study when I should because of a combination of short-term-optimized positive-feeling seeking, which leads me to watching youtube or skimming X; a desire for faster intellectual rewards that are more easily gotten from arguing on reddit (or lesswrong) than from slowly reading through a math paper; fear of failure; and much more. Yet I still consider that bad; even if I got a full causal explanation, those would still have been my choices.
Regardless, I do not have issues getting along with someone even if I experience negative emotions about how they’ve failed to reach farther in the past—just like I can do so even if their behavior, appearance, and so on are displeasing. This will be easier if I do something vaguely like John’s move of ‘thinking of them like a cat’, but it is not necessary for me to be polite and friendly.
Notice the movement of goal posts here? I’m talking about successfully helping people; you’re saying you can “get along”. Getting along is easy. I’m sure you can offer what passes as empathy to the girl with the nail in her head, instead of fighting her like a belligerent dummy.
I don’t have issues with helping people; the “goalposts” moved forward again, despite nothing in my sentence meaning I can’t help people. My usage of ‘get along’ was not the bare-minimum meaning.
Getting along with people in the nail scenario often means being friendly and listening to them. I can very well do that, and have done it many times before, while still thinking their individual choices are foolish.
I don’t think your comment has supplied much more beyond further assertions that I must surely not be thinking things through.
No, I believe I’m fully aware of the level of deep careful analysis,
How did you arrive at this belief? Like, the thing that I would be concerned with is “How do I know that Russel’s teapot isn’t just beyond my current horizon”?
and I understand why it pushes some people to sweep all facets of negativity or blame away,
Oh no, nothing is being swept away. Definitely not that. More on this with the grieving thing below.
nor do prediction error related emotions have to go away once you’ve explained the error...
The prediction error goes away when you update your prediction to match reality, not when you recite an explanation for why your current beliefs are clashing. You can keep predicting poorly all you want. If you want to keep feeling bad and getting poor results, I guess.
With a good explanation, you don’t have to.
I still feel emotional pain when a pet dies even if I realize all the causes why
Yes, you’re still losing your pet, and that still sucks. That’s real, and there’s no getting away from what’s real. You don’t get to accurate maps painlessly, let alone effortlessly. There’s no “One simple trick for not having to feel negative emotions!”.
The question is how this works. It’s very much not as simple as “Okay, I said he ded now I’m done grieving”. Because again, that’s not your predictions. The moment that you notice the fact that “he’s dead” is true can be long before you start to update your actual object level beliefs, and it’s a bit bizarre but also completely makes sense that it’s not until you start to update your beliefs that it hits you.
Even after you update the central belief, and even after you resolve all the “But why!?” questions that come up, you still expect to see everyone for Christmas. Until you realize that you can’t, because someone is no longer alive, and update that prediction too. You think of something you’d have wanted to show him, and have to remember you can’t do that anymore. There are a bazillion little ways that those we care about become entwined with our lives, and grieving the loss of someone important is no simple task. You actually have to propagate this fact through to all the little things it affects, and correct all the predictions that required his life to be fulfilled.
Yet as you grieve, these things come up less and less frequently. Over time, you run out of errant predictions like “It’s gonna be fun to see Benny when—Oh fuck, no, that’s not happening”. Eventually, you can talk about their death like it’s just another thing that is, because it is.
You assert this but I still don’t agree with it. I’ve thought long and hard about people before and the causes that make them do things, but no, this does not match my experience.
Is it possible, do you think, that the way you’re doing analysis isn’t sufficient, and that if you were to be more careful and thorough, or otherwise did things differently, your experience would be different? If not, how do you rule this out, exactly? How do you explain others who are able to do this?
I don’t have issues with helping people; the “goalposts” moved forward again,
:) I appreciate it, thanks.
Getting along with people in the nail scenario often means being friendly and listening to them. I can very well do that, and have done it many times before, while still thinking their individual choices are foolish.
I’m holding the goal posts even further forward though. Friendly listening is one thing, but I’m talking about pointing out that they’re acting foolish and getting immediate laughter in recognition that you’re right. This is the level of ability that I’m pointing at. This is what’s there to aim for, which is enabled by sufficiently clear maps.
I don’t think your comment has supplied much more beyond further assertions that I must surely not be thinking things through.
It contained a bit more than that. I checked to make sure I wasn’t being too opaque (it happens), but Claude can show you what you missed, if you care.
The big thing I was hoping you’d notice is that I was trying to make my claims so outrageous and specific that you’d respond “You can’t say this shit without providing receipts, man! So let’s see them!”. I was daring you to challenge me to provide evidence. I wonder if maybe you thought I was exaggerating, or otherwise rounding my claims down to something less absurd and falsifiable?
Anyway, there are a few things in your comment that suggest you might not be having fun here. If that’s the case, I’m sorry about that. No need to continue if you don’t want, and no hard feelings either way.
How did you arrive at this belief? Like, the thing that I would be concerned with is “How do I know that Russel’s teapot isn’t just beyond my current horizon”?
Empirical evidence of being more in tune with my own emotions, of generally better introspection, and of modeling why others make decisions, compared to others.
I have no belief that I’m perfect at this, but I do think I’m generally good at it and that I’m not missing a ‘height’ component to my understanding.
Is it possible, do you think, that the way you’re doing analysis isn’t sufficient, and that if you were to be more careful and thorough, or otherwise did things differently, your experience would be different? If not, how do you rule this out, exactly? How do you explain others who are able to do this?
Because (I believe) the impulse to dismiss any sort of negativity or blame once you understand the causes deeply enough is one I’ve noticed in myself. I do not believe it to be a level of understanding that I’ve failed to reach; I’ve dismissed it because it seems an improper framing.
At times the reason for this comes from a specific grappling with determinism and choice that I disagree with.
For others, the originating cause is due to considering kindness as automatically linked with empathy, with that unconsciously shaping what people think is acceptable from empathy.
In your case, some of it is tying it purely to prediction, which I disagree with, because of some mix of kindness-being-the-focus, determinism, a feeling that once it has been explained in terms of the component parts there’s nothing left, and other factors that I don’t know because they haven’t been elucidated.
Empirical exploration as in your example can be explanatory. However, I have thought about motivation and the underlying reasons at a fine granularity plenty of times (impulses that form into habits, social media optimizing for short-form behaviors, the heuristics humans come with which can make doing something now hard to weigh against the cost of doing it a week from now, how all of those constrain the mind...), which makes me skeptical. The idea of ‘shift the negativity elsewhere’ is not new, but given your existing examples it does not convince me that if I spent an hour with you on this that we would get anywhere.
“because they’re bad/lazy/stupid”/”they shouldn’t have” or whatever you want to round it to, but these things are semantic stopsigns, not irreducible explanations.
This, for example, is a misunderstanding of my position or the level of analysis that I’m speaking of. Wherein I am not stopping there, as I mentally consider complex social cause and effect and still feel negative about the choices they’ve made.
Yet as you grieve, these things come up less and less frequently. Over time, you run out of errant predictions like “It’s gonna be fun to see Benny when—Oh fuck, no, that’s not happening”. Eventually, you can talk about their death like it’s just another thing that is, because it is.
Grief like this exists, but I don’t agree that it is pure predictive remembrance. There is grief which lasts for a time and then fades away, not because my lower-level beliefs are predicting that I’ll see them: if I’m away from home and a pet dies, I’m still sad, not because of prediction error but because I want (but wants are not predictions) the pet to be alive and fine, and they aren’t. Because it is bad, to be concise.
You could try arguing that this is ‘prediction that my mental model will say they are alive and well’, with two parts of myself in disagreement, but that seems very hard to assess for accuracy as an explanation, and I think it is starting to stretch the meaning of prediction error.
Nor does the implication follow that ‘fully knowing the causes’ carves away negative emotion.
I’m holding the goal posts even further forward though. Friendly listening is one thing, but I’m talking about pointing out that they’re acting foolish and getting immediate laughter in recognition that you’re right. This is the level of ability that I’m pointing at. This is what’s there to aim for, which is enabled by sufficiently clear maps.
This is more about socialization ability, though having a clear map helps. I’ve done this before, with parents and when joking with a friend about his progress on a project, but I do not do so regularly, nor could I do it arbitrarily.
Joking itself is only sometimes the right route, the more general capability is working a push into normal conversation, with joking being one tool in the toolbox there.
I don’t really accept the implication ‘and thus you are mismodeling via negative emotions if you can not do that consistently’. I can be mismodeling to the degree that I don’t know precisely what words will satisfy them, but that can be due to social abilities.
The big thing I was hoping you’d notice is that I was trying to make my claims so outrageous and specific that you’d respond “You can’t say this shit without providing receipts, man! So let’s see them!”. I was daring you to challenge me to provide evidence. I wonder if maybe you thought I was exaggerating, or otherwise rounding my claims down to something less absurd and falsifiable?
When you don’t provide much argumentation, I don’t go ‘huh, guess I need to prod them for argumentation’; I go ‘ah, unfortunate, I will try responding to the crunchy parts in the interests of good conversation, but will continue on’. That is, the onus is on you to provide reasons. I did remark that you were asserting without much backing.
I was taking you literally, and I’ve seen plenty of people fall back without engaging (I’ve definitely done it during the span of this discussion), so I was interpreting your motivations through that. ‘I am playing a game to poke and prod at you’ is uh.....
Anyway, there are a few things in your comment that suggest you might not be having fun here. If that’s the case, I’m sorry about that. No need to continue if you don’t want, and no hard feelings either way.
A good chunk of it is the ~condescension. Repeated insistence while seeming to mostly just continue on the same line of thought without really engaging where I elaborate, the goalpost gotcha, and then the bit about Claude right after you said it was to ‘test’ me; it being a prod at me is quite annoying in and of itself.
Of course, I think you have more positive intent behind that. Pushing me to test myself empirically, or pushing me to push back on you so then you can push back yourself on me to provide empirical tests (?), or perhaps trying to use it as an empathy test for whether I understand you. I’m skeptical of you really understanding my position given your replies.
I feel like I’m doing better at engaging at the direct level, while you’re often doing ‘you would understand if you actually tried’, when I believe I have tried to a substantial degree, even if nothing precisely like ‘spend two hours mapping the cause and effect of how a person came to these actions’.
Hm. Given the way you responded here, I don’t think it’s worth my time to continue. Given the work you put into this comment I feel like I at least owe you an explanation if you want one, but I’ll refrain unless you ask.
When I accept that a calculator won’t work without batteries, that’s not “thinking of the calculator like a rock”, and choosing to not notice the differences between the calculator and a rock so as to avoid holding it to higher standards. I’m still looking at the calculator as a calculator, just more specifically, as a calculator which doesn’t have any batteries—because that’s what it is. The idea is to move towards more detailed and accurate models, not less. Because this gives you options to improve the calculator by adding batteries.
Your words imply that you have expectations for “humans” which empirically do not seem to be holding up so far as you can tell. Rather than turning away from this failed expectation, saying “I won’t even think of them as human”, look into it. Why, exactly, are people failing to behave in the ways you think they should? Why is it wrong of you to expect people to behave in the ways you wished they would?
Or, put another way, what is the missing constraint that you’re not seeing, and how can you provide it such that people can and will live up to the standards you want to hold for them? (easier said than done, but doable nonetheless)
what is the missing constraint that you’re not seeing
Intelligence and personality, which are both largely innate.
how can you provide it
Genetic engineering…?
(EDIT: And, like, UBI / social safety nets / reform the education system / solve unemployment / cure all diseases / etc. All of these things would surely improve many people’s ability to perform well in life.)
The reason people say stuff like “if you empathize more, you’ll feel kinder towards them” is that the negative emotions behind your judgements require prediction error, and understanding necessarily crowds it out.
To give a toy example, it’s easy to get frustrated when a calculator isn’t working because it should work, dammit. You just bought the damn thing, how can it be broken already!? But then, if you stop to wonder whether it has batteries in it and it doesn’t, it gets a lot harder to stay frustrated, because it’s a lot harder to hold onto the idea that the calculator “should” work in such a state. You don’t stop judging it as “non-functioning” because obviously it’s non-functioning, you just lower your expectations to match the reality of the situation; if you want it to work, you need to put batteries in.
The recognition of “Oh, this isn’t a functioning calculator” is a necessary step between “expecting the calculator to work” and understanding (or even asking) why it’s not a functioning calculator. So there’s necessarily going to be that “Shit, okay. I guess I don’t have a working calculator” where you have to mourn the loss of what you thought you had, before you reorient to the reality that you find yourself faced with.
That is to say, feeling disgusted and disappointed is fine. What happens next? What happens once you accept that people do have these values, empathize further, and try to understand where these (perhaps unfortunate) values come from? And then go through as many steps of that as needed before you could say “If I was missing my battery, and X, and Y, and Z, I would also have these messed up values and choose not to take the nail out of my head”?
Suppose Alice’s daughter Beth gets cancer and slowly dies. After a long battle, numerous doctors telling them Beth’s death is inevitable, and many nights in the hospital, Alice finally watches as Beth breathes her last. Then Alice feels a stab of intense grief and goes into mourning for the next month.
Do you claim these negative emotions are a result of prediction error, and that Alice would feel zero grief if she only had an accurate understanding of the situation? Color me skeptical.
Another example: suppose Carl is tied to some train tracks and sees a train approaching. As the train gets closer, Carl feels an intense sense of fear, and anger against the person who tied him up. Do you claim this is also a prediction error? The bad thing that Carl is afraid of hasn’t actually happened yet (the train has yet to reach him); where exactly is the error located?
These are good questions. Thanks for the pushback :)
Yes, but not in the way you think. “[Perfectly] accurate understanding of the situation”, such that there is no grief to experience, is an impossibly high standard. The implication of “If you’re sad you’re just a bad rationalist!” absolutely does not follow. It’s closer to the opposite, in that if you’re flinching from experiencing sadness (or other emotions) you’re resisting updating.
I give some explanation of how this relates to the process of grieving in a different comment downstream (ctrl-f “pet”), but there’s another aspect that I’d like to touch on here.
My Grandpa won life. The man was very successful in business, in marriage, in family. He lived into old age with a full mind, and as active as one can be in his later years. It’s really hard to expect more out of life than this, so when he finally croaked in his nineties… there wasn’t much more to expect. I mean, yeah, it’d have been nice to have him for a few more years, and yeah, occasionally people live longer. Sure, it’d be nice for aging to have been solved. But overall it’s kinda like “That’s how it’s supposed to go. If only all life went so well”. At his funeral, there were a lot of people smiling and remembering him fondly.
In contrast, lose someone important who is in their twenties, and it’s devastating. There are going to be all sorts of ways in which you expected things to go differently, and updating your maps there (i.e. “grieving”) sucks. Beth’s death sucks not just because you would like more for her, but because you thought she would get more. And she didn’t. And that matters. These funerals are not so fun and full of smiles.
Yes, most definitely.
The anger error is located in the mismatch between expecting the person who tied him up to have followed some norms which he clearly wasn’t bound by, and the reality that he did not follow the norms. In that situation, I have a hard time imagining being angry, because I can’t see why I’d ever put expectations like that on someone who wasn’t bound by them. Even if it was my own brother, I wouldn’t be angry, because I’d be too shocked and confused to take my prediction error as his fault. Not “Fuck you for doing this to me” but “Fuck me for not recognizing this to be possible”.
The fear error is located in the mismatch between expecting to be safe and the reality of not being safe. This one is utterly unreasonable to expect anyone to solve on the fly like that, but 1) when people resign themselves to their fate (the example given to me by the Jewish man who taught me this stuff was Jews in concentration camps), there is no more fear, and 2) when you can easily untie yourself minutes before the train gets there, it’s not so scary anymore because you just get off the tracks.
It’s worth noting that these things can get pretty complicated, and fear doesn’t necessarily feel the way you’d expect when you actually find yourself in similar situations. For example, I have a friend whose rope harness came undone while rappelling, leaving him desperately clinging to the rope while trying to figure out how to safely get to the ground. Afterwards, he said that he was afraid until his harness fell apart. After that, he was just too busy figuring out what to do to feel afraid. Rising to the occasion often requires judiciously dropping prior expectations and coming up with new ones on the fly.
Let me propose an alternate hypothesis:
Emotions evolved as a way of influencing our behavior in useful directions. They correspond (approximately—this is evolution we’re talking about) to a prediction that there is some useful way of changing your behavior in response to a situation. Fear tells you take precautions, anger tells you to retaliate, contempt tells you to reconsider your alliance, etc. (Scott Alexander has a post on ACX theorizing that general happiness and sadness are a way of telling you to take more/fewer risks, but I can’t find it at the moment.)
I think your examples of fear disappearing when people give up hope of escape are explained at least as well by this hypothesis as by yours. Also your example of your friend who “was afraid until his harness fell apart”—that was the moment when “taking precautions” stopped being a useful action, but it seems pretty weird to conjecture that that was the moment when his prediction error disappeared (was he predicting a 100% chance of the harness breaking? or even >50%?)
On my model, examples of people giving up anger when they accept physical determinism strike me as understandable but mistaken. They are reasoning that some person could not have done otherwise, and thus give up on changing the person’s behavior, which causes them to stop feeling anger. But this is an error, because a system operating on completely deterministic rules can still be altered by outside forces—such as a pattern of other people retaliating in certain circumstances.
On my model, the correct reason to get angry at a murderer, but not to get angry at a storm, is that murderers can (sometimes) be deterred, and storms cannot. I think the person who stops feeling anger has performed an incomplete reduction that doesn’t add up to normality.
Notice that my model provides an explanation for why different negative emotions occur in different circumstances: They recommend different actions. As far as I can see, you have not offered an explanation for why some prediction errors cause fear, others anger, others disgust.
Your model also appears to require that we hypothesize that the prediction errors are coming from some inner part of a person that can’t be questioned directly and is also very stupid. We seemingly have to believe that an 8-year-old is scared of the dark because some inner part of them still hasn’t figured out that, yes, it gets dark every night, dipshit (even though the 8yo will profess belief in this, and has overwhelming experience of this). This seems implausible and unfalsifiable.
This isn’t an alternative hypothesis. It’s another part of the same picture.
Notice how it’s a new prediction about how your behavior needs to be changed? That’s because you’re learning that the path you’re currently on was built on false presumptions. Get your predictions right the first time, and none of this is needed.
Anger is a good example of this.
If you’re running around in the fantasy world of “We’re all just going to be nice to each other, because that’s what we should do, and therefore we should wish only good things on everyone”, then a murderer breaks things. Anger is an appropriate response here, because if you suppress anger (because of rationalizing about determinism or whatever) then you end up saying stupid things like “He couldn’t have done any differently! One dead is enough, we don’t need another to die!”.
But this is a stupid way to go through life in the first place, because it’s completely delusional. When I say that I wouldn’t be angry at someone who tied me to the tracks, that doesn’t mean I’m incapable of retaliation. I’ve never been murdered or tied to train tracks, but one time some friends and I were attacked by strangers who I correctly inferred were carrying knives and willing to use them—and I wasn’t angry at them. But, rather than lamenting “Sigh, I guess he was bound to do this” when fleeing didn’t work, I turned and threw the guy to the ground. While I was smashing his face with my elbows, I wasn’t feeling “GRR! I HATE YOU!”. I was laughing about how it really was as incredibly easy as you’d think, and how stupid he had to be to force a fight with a wrestler who was significantly larger than himself.
Anger is a flinch. If you touch a hot stove, sure, it makes sense to flinch away from the stove. Keeping your hand there—rationalizing “determinism” or otherwise—would be foolish.
But also, maybe you wouldn’t be in this situation, if you weren’t holding to some silly nonsense like “My allies would never betray me”, and instead thought things through and planned accordingly.
And perhaps even more importantly, is your flinch actually gonna work? They often don’t. You don’t want to end up the fool in impotent rage blubbering about how someone did something “wrong” when you in fact do not have the power to enforce the norm you’re attached to. You want to be the person who can see crystal clear what is going on, what their options are, and who doesn’t hesitate to take them when appropriate. Anger, like flinching in general, is best as a transient event when we get surprised, and until we can reorient.
Heh. This is where we’re going to differ big time. There’s a gigantic inferential chasm here so none of this will ring true, but nevertheless here is my stance:
It is our outer wrappings that are very stupid most of the time. Even when your inner part is stupid too, that’s generally still the fault of the outer wrapping getting in the way and not doing its job. Stupid outer layer notwithstanding, that inner layer can quite easily be questioned directly. And updated directly.
The way I came to this view is by learning hypnosis to better fix these “irrational inner parts”, peeling the onion a layer at a time to come up with methods that work in increasingly diverse circumstances, and eventually recognizing that the things that work most generally are actually what we do by default—until we freak out and make up stories about how we “can’t” and “need hypnosis to fix these non-directly accessible very stupid inner parts”. Turns out those stories aren’t true, and “hypnosis” is an engineered way of fighting delusion with delusion. The stories feel true due to confirmation bias, until you tug at the seams.
It seems to me that you should change your behavior as circumstances change, even if the changes are completely expected. When you step into deep water, you should start swimming; when you step out of the water, you should stop trying to swim and start walking again. This remains true even if the changes are 100% expected.
Do you mean to say that you have some empirical way of measuring these “prediction errors” that you’re referring to, separately from the emotions you claim they explain?
Got any data you can share?
If you use your technique on an 8-year-old who is scared of the dark at night, do you actually predict your technique would reveal that they have a prediction that it won’t get dark at night? Would your technique allow you to “directly update” the 8yo so that they stop being scared of the dark?
Yes, your behavior at time t = 0 and time t = 1 ought to be different even if the changes between these times are entirely predicted. But at t = 0, your planned behavior for t = 1 will be swimming if you foresee the drop off. If you don’t see the drop off, you get that “Woah!” that tells you that you need to change your idea of what behavior is appropriate for t >=1.
I guess I should have said “Notice how your planned behavior has to change”.
Well, if you were to walk outside and get rained on, would you experience surprise? If you walked outside and didn’t get rained on, would you feel surprised? The answers here tell you what you’re predicting.
No, I wouldn’t expect the 8-year-old to be doing “I expect it to not get dark”, but rather something more like “I expect to be able to see a lack of monsters at all times”—which obviously conflicts with the reality that they cannot when the lights are out.
The way I’d approach this depends on the specific context, but I generally would not want to directly update the kid’s beliefs in any simple sort of way. I take issue with the assumption that fear is a problem in the first place, and generally find that in any case remotely like this, direct overwriting of beliefs is a bad thing.
I’m 13 posts into a big sequence laying out my thoughts on this, and it’s full of examples where I’ve achieved what might seem like unusual results from a “this stuff is unconscious and hard” perspective, but which aren’t nearly so impressive once you see behind the curtain.
The one I posted today, for example, shows how I was able to get both of my daughters to be unafraid of getting their shots when they were two years old (separate instances, not twins), and how the active ingredient was “not giving a shit if they’re afraid of their shots”.
If you want more direct proof that I’m talking about real things, the best example would be the transcript where I helped someone greatly reduce his suffering from chronic pain through forum PMs, following the basic idea of “Obviously pain isn’t a problem, but this guy sure seems to think it is, so how is he going wrong exactly?”. That one did eventually overwrite his felt experience of the pain being a bad thing (for the most part), but it wasn’t so quick and direct because, like with the hypothetical scared 8-year-old, a direct overwrite would have been bad.
If you want to learn more about “direct overwriting”, then that’s the section on attention, where I explain how I was able to tell my wife to constrict her blood vessels to stop bleeding in about thirty seconds, and why that isn’t nearly as extraordinary a claim as it might seem like it should be.
I should probably throw together a sequence page, but for now they’re all on my user profile.
I feel like I have experienced a lot of negative emotions in my life that were not particularly correlated with a feeling of surprise. In fact, I can recall feeling anger about things where I literally wrote down a prediction that the thing would happen, before it happened.
Conversely, I can recall many pleasant surprises, which involved a lot of prediction error but no negative emotions.
So if this is what you are relying on to confirm your theory, it seems pretty disconfirmed by my life experience. And I’m reasonably certain that approximately everyone has similar observations from their own lives.
I thought this was understood, and the only way I was taking your theory even mildly seriously was on the assumption that you meant something different from ordinary surprise.
I find it quite plausible they would have a preference for seeing a lack of monsters. I do not find it remotely plausible that they would have a prediction of continuously being able to see a lack of monsters. That is substantially more stupid than the already-very-stupid example of not expecting it to get dark.
Are you maybe trying to refer to our models of how the world “should” work, rather than our models of how it does work? I’m not sure exactly what I think “should” is, but I definitely don’t think it’s the same as a prediction about what actually will happen. But I could maybe believe that disagreements between “should” and “is” models play a role in explaining (some) negative emotions.
I am not searching through everything you’ve ever written to try to find something that matches a vague description.
I feel like we’ve been talking for quite a while, and you are making extraordinary claims, and you have not presented ANY noteworthy evidence favoring your model over my current one, and I am going to write you off very soon if I don’t see something persuasive. Please write or directly link some strong evidence.
Ah, that’s what you’re getting at.
Okay, so for example, say you angrily tell your employee “I expect you to show up on time!”. Then, he doesn’t, and you’re not surprised. This shows that you (meta) expected your (object level) expectation of “You will show up on time!” to be false. You’re not surprised because you’re not learning anything, because you’ve chosen not to. Notice the hesitance to sigh and say “Well, I guess he is not going to show up on time”?
This stickiness comes from the desire to control things combined with a lack of sophisticated methods of control. When you accept “He is not going to show up on time”, you lose your ability to tell him “I expect you to show up on time!” and with it your ability to put pressure on him to be punctual. Your setpoint that you control to is your expectation, so if you update your expectation then you lose your ability to (crudely) attempt to control the person’s behavior. Once you learn more sophisticated methods of control, the anger no longer serves a purpose, so you’re free to update your expectations to match reality. E.g. “I don’t know if you’re going to show up on time, but I do know that if you don’t, you will be fired! No hard feelings either way, have a nice day :)”
This is a really tricky equivalence to wrap one’s mind around, and it took me years to really understand even after I could see that there was something there. I explain this more in my post expectations=intentions=setpoint, and give examples of how more sophisticated attempts to control cede immediate reality and attempt to control towards trajectories instead—with concretely better results.
Yeah, I think positive emotions generally require prediction errors too, though I’m less solid on this one. People are generally more willing to update on pleasant surprises so that prediction error discomfort is less likely to persist enough to be notable, though it’s worth noting that this isn’t always the case. Imposter syndrome is an example where people get quite uncomfortable because of this refusal.
The prediction error is not the same as negative emotions. Prediction error is the suffering that happens while you refuse to update, while negative emotions like sadness come while you update. You still have to have erred in order to have something sad to learn, but it’s not the same thing.
Now that I say it, I realize I had the opportunity to clarify earlier, because I did notice that this was a point of confusion, and I chose not to take the opportunity to. I think I see why I did, but I shouldn’t have, and I apologize.
Again, giant chasm of inferential distance. You’re not passing the ITT yet, and until you do it’s going to be real tough for you to “test” anything I’m saying, because you will always be testing a misinterpretation. It’s obviously reasonable to be suspicious of such claims as attempts to hide from falsifiability, but at the same time sometimes that’s just how things are—and assuming either way is a poor way of finding truth.
To distinguish, you want to look for signs of cognitive dissonance, not merely things that you disagree with. Because if you conclude that you’re right on the surface level because the other person gets something wrong two levels deep… and your judgment of whether they’re wrong is your perspective, which they disagree with… then you’ve just given up on ever learning anything when there’s a disagreement three or more levels deep. If you wait to see signs that the person is being forced to choose between changing their own mind or ignoring data, then you have a much more solid base.
That, and look for concrete predictions that both sides can agree on. For example, you took the stance that anger was appropriate because without it you become susceptible to murderers and don’t retaliate—but once I pointed out the alternative, I don’t think you doubt that I was actually able to fight back without anger? Or is that genuinely hard to believe for you?
Hey, kids are stupid. Adults too. Sometimes people even keep expecting people to not piss them off, even when they know that the person will piss them off :p
Jokes aside, this is still the “expectations=intentions” thing. We try to not see our expectations as predictions when we’re using them as intents, but they function as predictions nonetheless.
“Should” is used as an attempt to disconnect ourselves from updating on what will happen in order to try to make something happen—because we recognize that it will likely fail and want to try anyway. If I say “You will show up on time” as if it’s a matter of fact, that’s either powerful… or laughable. And if I sense that I don’t have the authority to pull that off, I’m incentivized to back off to “You SHOULD show up on time” so that I don’t have to accept “I guess you won’t show up on time, huh?” when you don’t. I can always say “Okay maybe he won’t BUT HE SHOULD” and immediately negate the uncomfortable reality.
So “Yes, I’m talking about our models of how the world should work”, and also that is necessarily the same as our models of how the world does work—even if we also have meta models which identify the predictable errors in our object level models and try to contain them.
Maybe that part could use more emphasis. Of course we have meta models that contradict these obviously wrong object level models. We know that we’re probably wrong, but on the object level that doesn’t make us any less wrong until we actually do the update.
That’s fine, no pressure to do anything of course. For what it’s worth though, it’s very clearly labeled. There’s no way you wouldn’t recognize at a glance.
I don’t think that’s fair. For one, your model said you need anger in order to retaliate, and I gave an example of how I didn’t need anger in order to retaliate. I think the fact that I don’t always struggle with predictable anger while simultaneously not experiencing your predicted downsides is clear evidence, do you not?
Of course, this isn’t strong evidence that I’m right about anything else, but it’s absolutely fatal to the idea that your model accurately depicts the realm of possibility. If your model gets one thing this wrong, this unexpectedly, how can you trust it to tell you what else to view as “extraordinary”?
You’re welcome to read my posts, or not. They’re quite long and I don’t expect you to read them, but they’re there if you want a better understanding of what I’m talking about.
Either way, I’m happy to continue because I can see that you’re engaging in good faith even though you’re skeptical (and maybe a bit frustrated?), and I appreciate the push back. At the end of the day, neither my ability to retaliate without anger, nor my ability to help kids overcome fear by understanding their predictions, hinge on you believing in it.
At the same time, I’m curious if you’ve thought about how it looks from my perspective. You’ve written intelligent and thoughtful responses which I appreciate, but are you under the impression that anything you’ve written provides counter-evidence? Do you picture me thinking “Yes, that’s what I’m saying” before you argue against what you think I’m saying?
I didn’t respond to this because I didn’t see it as posing any difficulty for my model, and didn’t realize that you did.
I don’t think you need anger in order to retaliate. I think anger means that the part of you that generates emotions (roughly, Kahneman’s system 1) wants to retaliate. Your system 2 can disagree with your system 1 and retaliate when you’re not angry.
Also, your story didn’t sound to me like you were actually retaliating. It sounded to me like you were defending yourself, i.e. taking actions that reduced the other guy’s capability of harming you. Retaliation (on my model) is when you harm someone else in an effort to change their decisions (not their capabilities), or the decisions of observers.
So I’m quite willing to believe the story happened as you described it, but this was 2 steps removed from posing any problem to my model, and you didn’t previously explain how you believed it posed a problem.
I also note that you said “for one” (in the quote above) but then there was no number two in your list.
I do see a bunch of signs of that, actually:
I claimed that your example of your friend being afraid until their harness broke seems to be better explained by my model than yours, because that would be an obvious time for the recommended action to change but a really weird time for his prediction error to disappear. You did not respond to this point.
I claimed that my model has an explanation for how different negative emotions are different and why you experience different ones in different situations, and your model seemingly does not, and this makes my model better. You did not respond to this point.
I asked you if you had a way of measuring whatever you mean by “prediction error”, so that we could check how well the measurements fit your model. You told me to use my own feelings of surprise. When I pointed out that doesn’t match your model, you said that you meant something different, but didn’t clarify what you meant, and did not provide a new answer to the earlier question about how you measure “prediction error”. This looks like you saying whatever deflects the current point without keeping track of how the current point is related to previous points.
Note that I don’t actually need to understand what you mean in order for the measurement to be interesting. You could hand me a black box and say “this measures the thing I’m talking about” and if the black box produces measurements that correlate with your predictions that would be interesting even if I have no clue how the black box works (as long as I don’t see an uninteresting way of deriving your predictions from its inputs). But you haven’t done this, either.
I gave an example where I made an explicit prediction, and then was angry when it came true. You responded by ignoring my example and substituting your own hypothetical example where I made an explicit prediction and then was angry when it was falsified. This looks like you shying away from examples that are hard for your theory to explain and instead rehearsing examples that are easier.
You have claimed that there’s evidence in your other writing, but have refused to prioritize it so that I can find your best evidence as quickly as possible. This looks like an attempt to dissuade me from checking your claims by maximizing the burden of effort placed on me. In a cooperative effort of truth-seeking, you ought to be the one performing the prioritization of your writing because you have a massive advantage in doing so.
Many of your responses seem like you are using my points to launch off on a tangent, rather than addressing my point head-on.
This seems like it’s just a simple direct contradiction. You’re saying that model X and model Y are literally the same thing, but also that we keep track of the differences between them. There couldn’t be any differences to track if they were actually the same thing.
I also note that you claimed these are “necessarily” the same, but provided no reasoning or evidence to back that up; it’s just a flat assertion.
There are some parts of your model that I think I probably roughly understand, such as the fact that you think there’s some model inside a person making predictions (but it’s not the same as the predictions they profess in conversation) and that errors in these predictions are a necessary precondition to feeling negative emotions. I think I can describe these parts in a way you would endorse.
There are some parts of your model that I think I probably don’t understand, like where is that model actually located and how does it work.
There are some parts of your model that I think are incoherent bullshit, like where you think “should” and “is” models are the same thing but also we have a meta-model that tracks the differences between them, or where you think telling me to pay attention to my own feelings of surprise makes any sense as a response to my request for measurements.
I don’t think I’ve written anything that directly falsifies your model as a whole—which I think is mostly because you haven’t made it legible enough.
But I do think I’ve pointed out:
several ways in which my model wins Bayes points against yours
several ways that your model creates more friction than mine with common-sensical beliefs across other domains
several ways in which your own explanations of your model are contradictory or otherwise deficient
that there is an absence of support on your side of the discussion
I don’t think I require a better understanding of your model than I currently have in order for these points to be justified.
You’re extending yourself an awful lot of charity here.
For example, you accuse me of failing to respond to some of your points, and claim that this is evidence of cognitive dissonance, yet you begin this comment with:
Are you really unable to anticipate that this is very close to what I would have said, if you had asked me why I didn’t respond to those things? The only reason that wouldn’t be my exact answer is that I’d first point out that I did respond to those things, by pointing out that your arguments were based on a misunderstanding of my model! This doesn’t seem like a hard one to get right, if you were extending half the charity to me that you extend yourself, you know? (should I be angry with you for this, by the way?)
As to your claim that it doesn’t pose difficulty to your model, and attempts to relocate goal posts, here are your exact words:
This is wrong. It is completely normal to not feel anger, and retaliate, when you have accurate models instead of clinging to inaccurate models, and I gave an example of this. Your attempt to pick the nit between “incapacitation” vs “dissuasion” is very suspect as well, but also irrelevant because dissuasion was also a goal (and effect) of my retaliation that night. I could give other examples too, which are even more clearly dissuasion not incapacitation, but I think the point is pretty clear.
And no, even with the relocated goalposts your explanation fails. That was a system 1 decision, and there’s no time for thinking slow when you’re in the midst of something like that.
No, I made it very clear. If you have a fraction of the interest it would take to read the post and digest the contents, you would spend the ten seconds needed to pull up the post. This is not a serious objection.
Again, it’s totally understandable if you don’t want to take the time to read it. It’s a serious time and effort investment to sit down and not only read but make sense of the contents, so if your response were to be “Hey man, I got a job and a life, and I can’t afford to spend the time especially given that I can’t trust it’ll change my mind”, that would be completely reasonable.
But to act like “Nope, it doesn’t count because you can’t expect me to take 10 seconds to find it, and therefore must be trying to hide it” is… well, can you see how that might come across?
So if I tell you that the bottle of distilled water with “Drinking water” scribbled over the label contains the same thing as the bottle of distilled water that has “coolant” scribbled on it… and that the difference is only in the label… would you understand that? Would that register to you as a coherent possibility?
I’m sorry, but I’m having a hard time understanding which part of this is weird to you. Are you really claiming that you can’t see how to make sense of this?
You’re missing the point of my question. Of course you think you’ve pointed that stuff out. I’m not asking if you believe you’re justified in your own beliefs.
There are a lot of symmetries here. You said some things that [you claim] I didn’t respond to. I said some things which [I claim] you didn’t respond to. Some of the things I say strike you as either missing the point or not directly responding to what you say. A lot of the things that you’ve said strike me in the same way. Some of my responses [you claim] look like cognitive dissonance to you. Some of your responses [I claim] look that way to me. I’m sure you think it’s different because your side really is right, and my side really is wrong. And of course, I feel the same way. This is all completely normal for disagreements that run more than a step or two deep.
But then you go on to act like you don’t notice the symmetry, as if your own perspective objectively validates your own side. You start to posture stuff like “You haven’t posted any evidence [that I recognize]” and “I’m gonna write you off, if you don’t persuade me”, with no hint to the possibility that there’s another side to this coin.
The question is, do you see how silly this looks, from my perspective? Do you see how much this looks like you’re missing the self awareness that is necessary in order to have a hope of noticing when you’re inhabiting a mistaken worldview, which pats itself on the back prematurely?
Because if you do, then perhaps we can laugh about our situation together, and go about figuring out how to break this asymmetry. But if you don’t, or if you try to insist “No, but my perspective really is better supported [according to me]”, the symmetry is already broken.
You complain that I failed to anticipate that you would give the same response as me, but then immediately give a diametrically opposed response! I agreed that I didn’t respond to the example you highlighted, and said this was because I didn’t pick up on your implied argument. You claim that you did respond to the examples I highlighted. The accusations are symmetrical, but the defenses are very much not.
I did notice that the accusations were symmetrical, and because of that I very carefully checked (before posting) whether the excuse I was giving myself could also be extended to you, and I concluded definitively that it couldn’t. My examples made direct explicit comparisons between my model and (my model of) your model, and pointed out concrete ways that the output of my model was better; it seems hugely implausible you failed to understand that I was claiming to score Bayes points against your model. Your example did not mention my model at all! (It contrasts two background assumptions, where humans are either always nice or not, and examines how your model, and only your model, interacts with each of those assumptions. I note that “humans are always nice” is not a position that anyone in this thread has ever defended, to my knowledge.)
And yes, I did also consider the meta-level possibility that my attempt to distinguish between what was said explicitly and what wasn’t is so biased as to make its results useless. I have a small but non-zero probability for that. But even if that’s true, that doesn’t seem like a reason to continue the argument; it seems like proof that I’m so hopeless that I should just cut my losses.
I considered including a note in my previous reply explaining that I’d checked if you could use my excuse and found you couldn’t, but I was concerned that would feel like rubbing it in, and the fact that you can’t use my excuse isn’t actually important unless you try to use it, and I guessed that you wouldn’t try. (Whether that guess was correct is still a bit unclear to me—you offer an explanation that seems directly contradictory to my excuse, but you also assert that you’re saying the same thing as me.)
If you are saying that I should have guessed the exact defense you would give, even if it was different from mine, then I don’t see how I was supposed to guess that.
If you are saying that I should have guessed you would offer some defense, even if I didn’t know the details, then I considered that moderately likely but I don’t know what you think I should have done about it.
If I had guessed that you would offer some defense that I would accept then I could have updated to the position I expected to hold in the future, but I did not guess that you’d have a defense I would accept; and, in fact, you don’t have one. Which brings us to...
(re-quoted for ease of reference)
I have carefully re-read the entire reply that you made after the comment containing the two examples I accused you of failing to respond to.
Those two examples are not mentioned anywhere in it. Nor is there a general statement about “my examples” as a group. It has 3 distinct passages, each of which seems to be a narrow reply to a specific thing that I said, and none of which involve these 2 examples.
Nor does it include a claim that I’ve misapplied your model, either generally or related to those particular examples. It does include a claim that I’ve misunderstood one specific part of your model that was completely irrelevant to those two examples (you deny my claim that the relevant predictions are coming from a part of the person that can’t be interrogated, after flagging that you don’t expect me to follow that passage due to inferential distance).
Your later replies did make general claims about me not understanding your model several times. I could make up a story where you ignored these two examples temporarily and then later tried to address them (without referencing them or saying that that was what you were doing), but that story seems neither reasonable nor likely.
Possibly you meant to write something about them, but it got lost in an editing pass?
Or (more worryingly) perhaps you responded to my claim that you had ignored them not by trying to find actions you took specifically in response to those examples, but instead by searching your memory of everything you’ve said for things that could be interpreted as a reply, and then reported what you found without checking it?
In any case: You did not make the response you claimed that you made, in any way that I can detect.
Communication is tricky!
Sometimes both parties do something that could have worked, if the other party had done something different, but they didn’t work together, and so the problem can potentially be addressed by either party. Other times, there’s one side that could do something to prevent the problem, but the other side basically can’t do anything on their own. Sometimes fixing the issue requires a coordinated solution with actions from both parties. And in some sad situations, it’s not clear the issue can be fixed at all.
It seems to me that these two incidents both fall clearly into the category of “fixable from your side only”. Let’s recap:
(1) When you talked about your no-anger fight, you had an argument against my model, but you didn’t state it explicitly; you relied on me to infer it. That inference turned out to be intractable, because you had a misunderstanding about my position that I was unaware of. (You hadn’t mentioned it, I had no model that had flagged that specific misunderstanding as being especially likely, and searching over all possible misunderstandings is infeasible.)
There’s an obvious, simple, easy, direct fix from your side: State your arguments explicitly. Or at least be explicit that you’re making an argument, and you expect credit. (I mistook this passage as descriptive, not persuasive.)
I see no good options from my side. I couldn’t address it directly because I didn’t know what you’d tried to do. Maybe I could have originally explained my position in a way that avoided your misunderstanding, but it’s not obvious what strategy would have accomplished that. I could have challenged your general absence of evidence sooner—I was thinking it earlier, but I deferred that option because it risked degrading the conversation, and it’s not clear to me that was a bad call. (Even if I had said it immediately, that would presumably just accelerate what actually happened.)
If you have an actionable suggestion for how I could have unilaterally prevented this problem, please share.
(2) In the two examples I complained you didn’t respond to, you allege that you did respond, but I didn’t notice and still can’t find any such response.
My best guess at the solution here is “you need to actually write it, instead of just imagining that you wrote it.” The difficulty of implementing that could range from easy to very hard, depending on the actual sequence of events that led to this outcome. But whatever the difficulty, it’s hard to imagine it could be easier to implement from my side than yours—you have a whole lot of relevant access to your writing process that I lack.
Even assuming this is a problem with me not recognizing it rather than it not existing, there are still obvious things you could do on your end to improve the odds (signposting, organization, being more explicit, quoting/linking the response when later discussing it). Conversely, I don’t see what strategy I could have used other than “read more carefully,” but I already carefully re-read the entire reply specifically looking for it, and still can’t find it.
I understand it’s possible to be in a situation where both sides have equal quality but both perceive themselves as better. But it’s also possible to be in a situation where one side is actually better and the other side falsely claims it’s symmetrical. If I allowed a mere assertion of symmetry from the other guy to stop me from ever believing the second option, I’d get severely exploited. The only way I have a chance at avoiding both errors is by carefully examining the actual circumstances and weighing the evidence case-by-case.
My best judgment here is that the evidence weighs pretty heavily towards the problems being fixable from your side and not fixable from my side. This seems very asymmetrical to me. I think I’ve been as careful as I reasonably could have been, and have invested a frankly unreasonable amount of time into triple-checking this.
Before I respond to your other points, let me pause and ask if I have convinced you that our situation is actually pretty asymmetrical, at least in regards to these examples? If not, I’m disinclined to invest more time.
Oh, the situation is definitely asymmetrical. In more ways than you realize.
However, the important part of my comment was this:
If you can’t say “Shoot, I didn’t realize that”, or “Heh, yeah I see how it definitely looks more symmetrical than I was giving credit for (even though we both know there are important asymmetries, and disagree on what they are)”, and instead are going to spend a lot of words insisting “No, but my perspective really is better supported [according to me]”… after I just did you the favor of highlighting how revealing that would be… then again, the symmetry is already broken in the way that shows which one of us is blind to our limitations.
There’s another asymmetry though, which has eluded you:
Despite threatening to write me off, you still take me seriously enough to write a long comment trying to convince me that you’re right, and expect me to engage with it. Since you failed to answer the part that matters, I can’t even take you seriously enough to read it. Ironically, this would have been predictable to you if not for your stance on prediction errors, Lol.
Also, with a prediction error like that, you’re probably not having as much fun as I am, which is a shame. I’m genuinely sorry it turned out the way it did, as I was hoping we’d get somewhere interesting with this. I hope you can resolve your error before it eats at you too much, and that you can keep a sense of humor about things :)
Guess we’re done, then.
We can be, if you want. And I certainly wouldn’t blame you for wanting to bail after the way I teased you in the last comment.
I do want to emphasize that I am sincere in telling you that I hope it doesn’t eat at you too much, and that I hoped for the conversation to get somewhere interesting.
If you turn out to be a remarkably good sport about the teasing, and want to show me that you can represent how you were coming off to me, I’m still open to that conversation. And it would be a lot more respectful, because it would mean addressing the reason I couldn’t take your previous comment seriously.
No expectations, of course. Sincere best wishes either way, and I hope you forgive me for the tease.
Understanding crowds out prediction error, it does not necessarily crowd out negative emotions, which is part of the point of this article.
That is, I understand the last paragraph, but it does not then necessarily go ‘thus I feel kindness’. There may be steps to take to try to help them up, but that does not necessitate kindness; I can feel disgust at someone who I know could do so much more while still helping them. One phrasing of it, based on your calculator example: there’s no need for there to be a “lower expectations” step. I can still have the dominant negative emotion that the calculator and the calculator company did not include a battery, even if I understand why.
No, it actually does. Which is the point of my comment :P
When I say “prediction error” I don’t mean verbally saying stuff like “I predict X” and then not having bets scored in your favor. I mean that thing where your brain expects one thing, sensory data comes in suggesting otherwise, and you get all uncomfortable because reality isn’t doing what it’s “supposed to”.
In other words, your actual predictions, not necessarily the things that you declare your predictions to be.
You could, yes, but it would require mismodeling them as someone who could do more than they actually can given the very real limitations which you may or may not understand yet. I can stay as furious as I want at the calculator, but only if I shut out of my mind the fact that of course it can’t work without a battery, stupid. The fact that I might say “I know I know, there’s no battery but...” doesn’t negate the fact that I’m deliberately acting without this knowledge. It just means I’m flinching away from this aspect of reality.
And it turns out, that’s not a good idea. Accurately modeling people, and credibly conveying these accurate models so that they can recognize and trust that you have accurately modeled them, is incredibly important for helping people. Good luck getting people to open themselves to your help while you view them as disgusting.
This is just kicking the can one step further. You can still be annoyed, but you can no longer be annoyed at “the stupid calculator!” for not working. You have to be annoyed at the company for not including batteries—if you can pull that one off.
But hey, why did they not include batteries? If it turns out that it’s illegal for whatever reason and they literally can’t because the authorities check, where goes your annoyance now?
If your reasoning results in “I can’t have negative emotions about things where I deeply understand the causes”, then I think you’ve made a misstep.
They could have done more. The choices were there in front of them, and they failed to choose them.
I will feel more positive-flavored emotions like kindness/sadness if they’re pushed into hard choices where they have to decide between becoming closer to their ideal or putting food on the table; conversely, I feel substantially less positive when the answer is that they were dazedly browsing social media. With enough understanding I could trace back the route which led them to rely more and more on social media as it fills some hole of socialization they lack (that part is easy to do)… and still retain my negative emotions while holding this deeper understanding.
I disagree that I am inaccurately modeling them, because I dispute the absolute connection between negative emotion and prediction error in the first place. I can understand them. I can accurately feel the mental pushes that push against their mind; I’ve felt them myself many times. And yet still be disquieted, disappointed in their actions.
Regardless, I do not have issues getting along with someone even if I experience negative emotions about how they’ve failed to reach farther in the past—just like I can do so even if their behavior, appearance, and so on are displeasing. This will be easier if I do something vaguely like John’s move of ‘thinking of them like a cat’, but it is not necessary for me to be polite and friendly.
Word-choice implication nitpick: Common usage of lower expectations means a mix of literal prediction and also moral/behavioral standards. I might have a ‘low expectation’ in the sense that a friend rarely arrives on time while still holding them to ‘high expectations’ in the what-is-good sense!
No, I can be annoyed at the calculator and the company. There’s no need for my annoyance to be moved down the chain like I only have 1 Unit of Annoyance to divvy out. Or, you can view it as cumulative if that makes more sense, that it ties back into the overall emotions on the calculator. If I learn that supplying batteries is illegal, my annoyance with the company does decrease, but then it gets more moved primarily to the authorities. Some remains still, and I’m still annoyed at the calculator despite understanding why it doesn’t have a battery.
I do think the calculator metaphor starts to break apart, because a calculator is not the system that feeds-back-on-itself to then decide on no batteries.
Humans are complex, and I love them for it, their decisions, mindset, observations, thought processes, and so much more loop back in on themselves to shape the actions they take in the world. …That includes both their excellent actions where they do great things, reach farther, become closer to their ideals… as well as when they falter, when they get ground down by short-term optimization leaving them unable to focus on ways to improve themselves, and find themselves falling short. But that does mean my negative emotions will be more centered on humans, on their beliefs and more. Some of this negative evaluation bleeds off to social media companies optimizing short-form content feeds, or society in vague generality for lack of ambition, but as I said before it isn’t 1 Unit of Annoyance to spread around like jam.
That is, you’re treating this like the concept of blame, when negative emotions and blame are not necessarily the same thing. Paired with this: you appear to be implicitly taking a hard-determinist sort of stance, wherein concepts like blame and ‘being able to choose otherwise’ start dissolving, but I find that direction questionable in the first place. We can still judge people’s decisions, it is normal that their actions are influenced by their interactions with the world, and I can still feel negative emotions about their choices: that they were not able to do better, that their decisions did not go elsewise, that they failed to reinforce good decisions, and more.
I do take a hard deterministic stance, so I’d like to hear your thoughts here. Do you agree w/ the following?
People literally can’t make different choices due to determinism
Laws & punishments are still useful for setting the right incentives that lead to better outcomes
You’re allowed to have negative emotions given other people’s actions (see #1), but those emotions don’t necessarily lead to better outcomes or incentives
I remember being 9 years old & being sad that my friend wasn’t going to heaven. I even thought “If I was born exactly like them, I would’ve made all the same choices & had the same experiences, and not believed in God”. I still think that if I’m 100% someone else, then I would end up exactly as they are.
I think the counterfactual you’re employing (correct me if wrong) is “if my brain was in their body, then I wouldn’t...” or “if I had their resources, then I wouldn’t...”, which is saying you’re only [80]% that person. You’re leaving out a part of them that made them who they are.
Now, you could still argue #2, that these negative emotions set correct incentives. I’ve only heard second-hand of extreme situations where that worked [1]; most of the time it backfires:
Son calls their parent after a while: “Oh son, you never call! Shame shame”
Child says they’re sorry, but the parent demands that they show/feel remorse or it doesn’t count.
Guilt tripping in general, lol
What do you think?
One of my teachers I still talk to pushed a student against the wall, yelling at them that they’re wasting their life w/ drugs/etc, fully expecting to get fired afterwards. They didn’t get fired & the student cleaned up (I believe this was in the late 90s though)
Yes. But also that people are still making those choices.
Yes. But I would point out that ‘punishment’ in the moral sense of ‘hurt those who do great wrongs’ still holds just fine in determinism for the same reasons it originally did, though I personally am not much of a fan.
Yes, just like I can be happy in a situation where that doesn’t help me.
No, it is more that I am evaluating from multiple levels. There is
basic empathy: knowing their own standards and feeling them, understanding them.
‘idealized empathy’: an extended sort of classical empathy where I am considering their higher goals, which is why I often mention ideals. People have dreams they fail to reach, and I’d love them to reach farther, and yet it disappoints me when they falter because my empathy reaches towards those too.
Values: Then of course my own values, which I guess could be considered the 80% that person, but I think I keep the levels separate; all the considerations have to come together in the end. I do have values about what they do, and how their mind succeeds.
Some commenters seemingly don’t consider the higher ideals sort or they think of most people in terms of short-term values; others are ignoring the lens of their own values.
So I think I’m doing multiple levels of emulation, of by-my-values, in-the-moment, reflection, etc. They all inform my emotions about the person.
And I agree. If I ‘became’ someone I was empathizing with entirely then I would make all their choices. However, I don’t consider that notably relevant! They took those actions, yes influenced by all there is in the world, but what else would influence them? They are not outside physics. Those choices were there, and all the factors that make up them as a person were what decided their actions.
If I came back to a factory the next day and notice the steam engine failed, I consider that negative even when knowing that there must have been a long chain of cause and effect. I’ll try fixing the causes… which usually ends up routing through whatever human mind was meant to work on the steam engine as we are very powerful reflective systems. For human minds themselves that have poor choices? That often routes back through themselves.
I do think that the hard-determinist stance often, though of course not always, comes from post-Christian-style thought which views the soul as atomically special: people drop the soul, but still think of themselves as ‘needing to be’ outside physics in some important sense rather than fully adapting their ontology. They treat choices made within determinism as equivalent to being tied up by ropes, when there is actually a distinction between the two scenarios.
A negative emotion can still push me to spend more effort on someone, though it usually needs to be paired with a belief that they could become better. Just because you have a negative emotion doesn’t mean you only output negative-emotion flavored content. I’ll generally be kind to people even if I think their choices are substantially flawed and that they could improve themselves.
I do think that the example of your teacher is one that can work, I’ve done it at least once though not in person, and it helped but it definitely isn’t my central route. This is effectively the ‘staging an intervention’ methodology, and it can be effective but requires knowledge and benefits greatly from being able to push the person.
But, as John is making the point, a negative emotion may not be what people are wanting, because I’m not going to have a strong kindness about how hard someone’s choices were… when I don’t respect those choices in the first place. However, giving them full positive empathy is not necessarily good either, it can feel nice but rarely fixes things. Which is why you focus on ‘fixing things’, advice, pointing out where they’ve faltered, and more if you think they’ll be receptive. They often won’t be, because most people have a mix of embarrassment at these kinds of conversations and a push to ignore them.
I certainly understand why you think that. I used to think that myself. I pushed back myself when I first heard someone take such a “ridiculous” stance. And yet, it proved to be true, so I changed my mind.
The thing that I was missing then, and which you’re missing now, is that the bar for deep careful analysis is just a lot higher than you think (or most anyone thinks). It’s often reasonable to skimp out and leave it as “because they’re bad/lazy/stupid”/”they shouldn’t have” or whatever you want to round it to, but these things are semantic stopsigns, not irreducible explanations.
Pick an issue, any issue, and keep at the analysis until you do get to something irreducible. Okay, so you’ve kicked the can one step further and are upset with the people who banned shipping batteries or whatever. Why did they do it? Keep asking “Why? Why? Why?” like a curious two year old, until there is no more “why?”. If, after you feel like you’ve hit the end of the road, you still have annoyance with the calculator itself, go back and ask why? “I’m annoyed that the calculator doesn’t work… without batteries?” How do you finish the statement of annoyance?
The way I was initially convinced of this was by picking something fake, subjecting myself to that “overconfident” guy’s incessant questioning, with an expectation of proving to him that it was endless. It wasn’t, he won. Since then I’ve done it with many more real things, and the answer is always the same. Empirically, what happens, is that you can keep going and keep going, until you can’t, and at that point there’s just no more negative around that spot because it’s been crowded out. It doesn’t matter if it’s annoyance, or sadness, or even severe physical pain. If you do your analysis well, the experience shifts, and loses its negativity.
If you’re feeling “badness” and you think you have a full understanding, that feeling of badness itself contains the clues about where you’re wrong.
This is a bit of a distraction, but Thane covered it pretty well:
In other words, there are reasons for their choices. Do you understand why they chose the way they did?
Notice the movement of goal posts here? I’m talking about successfully helping people, you’re saying you can “get along”. Getting along is easy. I’m sure you can offer what passes as empathy to the girl with the nail in her head, instead of fighting her like a belligerent dummy.
But can you exclaim “You got a nail in your head, you dummy!” and have her laugh with you, because you’re obviously correct? If you can’t trivially get her to agree that the problem is the nail, and figure out with you what to do about it, then your mismodeling is getting in the way.
This higher level of ability is achievable, and the path to get there is better modeling than you thought possible.
No, I believe I’m fully aware of the level of deep careful analysis, and I understand why it pushes some people to sweep all facets of negativity or blame away; I just think they’re confused, because their understanding of emotions/relations/causality hasn’t updated properly alongside their new understanding of determinism.
Because I wanted the calculator to work, I think it is a good thing for calculators in stores to work, I am frustrated that the calculator didn’t work… none of this is exotic, nor is it purely prediction error. (nor do prediction error related emotions have to go away once you’ve explained the error… I still feel emotional pain when a pet dies even if I realize all the causes why; why would that not extend to other emotions related to prediction error?)
You assert this but I still don’t agree with it. I’ve thought long and hard about people before and the causes that make them do things, but no, this does not match my experience. I understand the impulse that encourages sweeping away negative emotions once you’ve found an explanation, like realizing that humanities’ lack of coordination is a big problem, but I can still very well feel negative emotions about that despite there being an explanation.
Relatively often? Yes. I don’t blame people for not outputting the code for an aligned AGI because it is something that would have been absurdly hard to reinforce in yourself to become the kind of person to do that.
If someone has a disease that makes so they struggle to do much at all, I am going to judge them a hell of a lot less. Most humans have the “disease” that they can’t just smash out the code for an aligned AGI.
I can understand why someone is not investing more time studying, and I can even look at myself and relatively well pin down why, and why it is hard to get over that hump… I just don’t dismiss the negative feeling even though I understand why. They ‘could have’, because the process-that-makes-their-decisions is them and not some separate third-thing.
I fail to study when I should because of a combination of short-term-optimized positive-feeling seeking which leads me to watching youtube or skimming X, a desire for faster intellectual feelings that are easier gotten from arguing on reddit (or lesswrong) than from slowly reading through a math paper, fear of failure, and much more. Yet I still consider that bad; even if I had a full causal explanation, those would still have been my choices.
I don’t have issues with helping people; the “goalposts” moved forward again, despite nothing in my sentence meaning I can’t help people. My usage of ‘get along’ was not the bare-minimum meaning.
Getting along with people in the nail scenario often means being friendly and listening to them. I can very well do that, and have done it many times before, while still thinking their individual choices are foolish.
I don’t think your comment has supplied much more beyond further assertions that I must surely not be thinking things through.
How did you arrive at this belief? Like, the thing that I would be concerned with is “How do I know that Russel’s teapot isn’t just beyond my current horizon”?
Oh no, nothing is being swept away. Definitely not that. More on this with the grieving thing below.
The prediction error goes away when you update your prediction to match reality, not when you recite an explanation for why your current beliefs are clashing. You can keep predicting poorly all you want. If you want to keep feeling bad and getting poor results, I guess.
With a good explanation, you don’t have to.
Yes, you’re still losing your pet, and that still sucks. That’s real, and there’s no getting away from what’s real. You don’t get to accurate maps painlessly, let alone effortlessly. There’s no “One simple trick for not having to feel negative emotions!”.
The question is how this works. It’s very much not as simple as “Okay, I said he ded now I’m done grieving”. Because again, that’s not your predictions. The moment that you notice the fact that “he’s dead” is true can be long before you start to update your actual object level beliefs, and it’s a bit bizarre but also completely makes sense that it’s not until you start to update your beliefs that it hits you.
Even after you update the central belief, and even after you resolve all the “But why!?” questions that come up, you still expect to see everyone for Christmas. Until you realize that you can’t, because someone is no longer alive, and update that prediction too. You think of something you’d have wanted to show him, and have to remember you can’t do that anymore. There are a bazillion little ways that those we care about become entwined with our lives, and grieving the loss of someone important is no simple task. You actually have to propagate this fact through to all the little things it affects, and correct all the predictions that relied on his being alive.
Yet as you grieve, these things come up less and less frequently. Over time, you run out of errant predictions like “It’s gonna be fun to see Benny when—Oh fuck, no, that’s not happening”. Eventually, you can talk about their death like it’s just another thing that is, because it is.
Is it possible, do you think, that the way you’re doing analysis isn’t sufficient, and that if you were to be more careful and thorough, or otherwise did things differently, your experience would be different? If not, how do you rule this out, exactly? How do you explain others who are able to do this?
:) I appreciate it, thanks.
I’m holding the goal posts even further forward though. Friendly listening is one thing, but I’m talking about pointing out that they’re acting foolish and getting immediate laughter in recognition that you’re right. This is the level of ability that I’m pointing at. This is what’s there to aim for, which is enabled by sufficiently clear maps.
It contained a bit more than that. I checked to make sure I wasn’t being too opaque (it happens), but Claude can show you what you missed, if you care.
The big thing I was hoping you’d notice is that I was trying to make my claims so outrageous and specific that you’d respond “You can’t say this shit without providing receipts, man! So let’s see them!”. I was daring you to challenge me to provide evidence. I wonder if maybe you thought I was exaggerating, or otherwise rounding my claims down to something less absurd and falsifiable?
Anyway, there are a few things in your comment that suggest you might not be having fun here. If that’s the case, I’m sorry about that. No need to continue if you don’t want, and no hard feelings either way.
Empirical evidence of being more in tune with my own emotions, generally better introspection, and in modeling why others make decisions. Compared to others. I have no belief that I’m perfect at this, but I do think I’m generally good at it and that I’m not missing a ‘height’ component to my understanding.
Because (I believe) the impulse to dismiss any sort of negativity or blame once you understand the causes deeply enough is one I’ve noticed myself. I do not believe it to be a level of understanding that I’ve failed to reach; I’ve dismissed it because it seems an improper framing.
At times the reason for this comes from a specific grappling with determinism and choice that I disagree with.
For others, the originating cause is due to considering kindness as automatically linked with empathy, with that unconsciously shaping what people think is acceptable from empathy.
In your case, some of it is tying it purely to prediction, which I disagree with, because of some mix of kindness-being-the-focus, determinism, a feeling that once it has been explained in terms of the component parts there’s nothing left, and other factors that I don’t know because they haven’t been elucidated.
Empirical exploration as in your example can be explanatory. However, I have thought about motivation and the underlying reasons down to a low granularity plenty of times (impulses that form into habits, social media optimizing for short-form behaviors, the heuristics humans come with which can make doing it now hard to weigh against the cost of doing it a week from now, how all of those constrain the mind...), which makes me skeptical. The idea of ‘shift the negativity elsewhere’ is not new, but given your existing examples it does not convince me that if I spent an hour with you on this we would get anywhere.
This, for example, is a misunderstanding of my position or the level of analysis that I’m speaking of. Wherein I am not stopping there, as I mentally consider complex social cause and effect and still feel negative about the choices they’ve made.
Grief like this exists, but I don’t agree that it is pure predictive remembrance. There is grief which lasts for a time and then fades away, not because my lower-level beliefs are still predicting that I’ll see them. If I’m away from home and a pet dies, I’m still sad: not because of prediction error, but because I want (and wants are not predictions) the pet to be alive and fine, and they aren’t. Because it is bad, to be concise.
You could try arguing that this is ‘prediction that my mental model will say they are alive and well’, with two parts of myself in disagreement, but that seems very hard to determine the accuracy as an explanation and I think is starting to stretch the meaning of prediction error. Nor does the implication that ‘fully knowing the causes’ carves away negative emotion follow?
This is more about socialization ability, though having a clear map helps. I’ve done this before, with parents and joking with a friend about his progress on a project, but I do not do so regularly, nor could I do it arbitrarily. Joking itself is only sometimes the right route; the more general capability is working a push into normal conversation, with joking being one tool in the toolbox there. I don’t really accept the implication ‘and thus you are mismodeling via negative emotions if you cannot do that consistently’. I can be mismodeling to the degree that I don’t know precisely what words will satisfy them, but that can be due to social abilities.
When you don't provide much argumentation, I don't go 'huh, guess I need to prod them for argumentation'; I go 'ah, unfortunate, I'll respond to the crunchy parts in the interest of good conversation, but will move on'. That is, the onus is on you to provide reasons. I did remark that you were asserting without much backing.
I was taking you literally, and I've seen plenty of people fall back without engaging—I've definitely done it during the span of this discussion—so I interpreted your motivations through that. 'I am playing a game to poke and prod at you' is, uh.....
A good chunk of it is the ~condescension. Repeated insistence while mostly continuing on the same line of thought without really engaging where I elaborate, the goalpost gotcha, and then the bit about Claude right after you said it was to 'test' me—it being meant to prod me is quite annoying in and of itself.
Of course, I think you have more positive intent behind that: pushing me to test myself empirically, or pushing me to push back on you so that you can then push me to provide empirical tests (?), or perhaps using it as an empathy test of whether I understand you. I'm skeptical that you really understand my position, given your replies.
I feel like I'm doing better at engaging at the direct level, while you often fall back on 'you would understand if you actually tried', when I believe I have tried to a substantial degree, even if nothing precisely like 'spend two hours mapping the cause and effect of how a person came to these actions'.
Hm. Given the way you responded here, I don’t think it’s worth my time to continue. Given the work you put into this comment I feel like I at least owe you an explanation if you want one, but I’ll refrain unless you ask.
That goes back to “thinking of the person like a cat”. And I guess I do then empathize with them in the same way I empathize with cats.
Except that they’re not cats, right?
When I accept that a calculator won’t work without batteries, that’s not “thinking of the calculator like a rock”, and choosing to not notice the differences between the calculator and a rock so as to avoid holding it to higher standards. I’m still looking at the calculator as a calculator, just more specifically, as a calculator which doesn’t have any batteries—because that’s what it is. The idea is to move towards more detailed and accurate models, not less. Because this gives you options to improve the calculator by adding batteries.
Your words imply that you have expectations for "humans" which, so far as you can tell, empirically are not holding up. Rather than turning away from this failed expectation, saying "I won't even think of them as human", look into it. Why, exactly, are people failing to behave in the ways you think they should? Where, exactly, does your expectation that people behave in the ways you wish they would go wrong?
Or, put another way, what is the missing constraint that you’re not seeing, and how can you provide it such that people can and will live up to the standards you want to hold for them? (easier said than done, but doable nonetheless)
Intelligence and personality, which are both largely innate.
Genetic engineering…?
(EDIT: And, like, UBI / social safety nets / reform the education system / solve unemployment / cure all diseases / etc. All of these things would surely improve many people’s ability to perform well in life.)