Thoughts on The Replacing Guilt Series — pt 1

I’m currently going through Nate’s series on caring. He starts with the issue of listless guilt: the first post is about the stamp collector, the second about allowing yourself to fight for something.

I want to log my thoughts on this, because it seems there’s enough to chew on here.

First, a bit on the stamp collector.

I agree with the general message. It’s true, people aren’t optimized solely for pleasure-seeking. There are other things. Evolutionary psychology offers a sufficient explanation, I believe. The heuristic “people are ultimately hedonists” is simply wrong; its predictive power is weak.

Now, about the second post.

Yes, you can care about things in the world. You can only access the world through your map, but that doesn’t make wireheading the final goal.

I, too, think that the word “want” is loaded. Let’s dissect.

People are embedded in their environments. Their motivational systems are complex. There’s the primal “draw” towards certain things, and it’s mediated by different aspects of motivational systems. For example, it’s generally easier to feel this primal “draw” towards concrete things, which impinge on our senses (let’s not talk PP here).

But we can also use our executive functions in tandem with the ebb and flow of our primal motivations. You can “force” yourself to start working on something, and the short bursts of reinforcement for achieving measurable (micro-)milestones will do the rest.

What’s important, though, is the link between the “forceful” and the “primal”, S2 and S1. There’s only so much you can realistically do purely through S2 control. Usually it’s both S2 and S1 that affect your productivity.

(Correct me if I’m wrong here; it might be possible to run on S2 for a long time, but I believe that at some point your local productivity will tank so hard it won’t be worth continuing anymore. Furthermore, if you try to abuse your S1 with S2 in this way for long periods, S1 will start resisting, and you’ll probably burn out and/or ruin the rest of your life. Alright, I just stated a bunch of obvious but potentially important stuff for someone who hasn’t thought about it before; you might be one of them, and if you are, here’s your insight: you don’t actually need motivation to do stuff, thanks to your S2. Go now, try it.)

I think it’s important to also tell yourself that you’re allowed to care about the external world for selfish reasons, too, and actually change the world! It’s something that comes with the understanding that “people are ultimately hedonists” is too weak, and pure pleasure isn’t the terminal value, because it simply doesn’t work. Soares mentions that true selfishness doesn’t lead to listless guilt, and he’s correct. The people who connect “selfishness” to “root of my mild depression” are using a shallow conception of selfishness, shallow in the sense of “it’s a shitty model, wake up!”.

(For those who start screaming “wireheading!” — sure, but it doesn’t exist yet, so don’t make decisions based on stupid thought experiments; more on this later.)

Then Soares gives us this thought experiment. Read through it if you want to refresh your memory.

Imagine you live alone in the woods, having forsaken civilization when the Unethical Psychologist Authoritarians came to power a few years back.

Your only companion is your dog, twelve years old, who you raised from a puppy. (If you have a pet or have ever had a pet, use them instead.)

You’re aware of the fact that humans have figured out how to do some pretty impressive perception modification (which is part of what allowed the Unethical Psychologist Authoritarians to come to power).

One day, a psychologist comes to you and offers you a deal. They’d like to take your dog out back and shoot it. If you let them do so, they’ll clean things up, erase your memories of this conversation, and then alter your perceptions such that you perceive exactly what you would have if they hadn’t shot your dog. (Don’t worry, they’ll also have people track you and alter the perceptions of anyone else who would see the dog, so that they also see the dog, so that you won’t seem crazy. And they’ll remove that fact from your mind, so you don’t worry about being tracked.)

In return, they’ll give you a dollar.

Most people reject it! Their gut feeling screams “DOGGIE DIES” and they refuse.

I say they’re probably not being consequentialist enough.

Imagine you took the offer. How is the world different now?

You try to touch the dog. Presumably, the correct neurons get activated, and you feel the gentle brush of its hair on your hand. You can’t move your hand any further than the presumed physical location of the dog allows, because the unethical-but-smart psychologists somehow magically have your inhibitory system under control, and they’ve calculated the density of the relevant part of the dog, so you feel more or less the right amount of pushback.

Does the dog run around like a real dog, barking loudly and chasing squirrels? Well, of course! The unethical-but-smart psychologists did everything to ensure that any difference in the dog’s perceived behavior is so small it’s unnoticeable to the human eye, so there’s nothing uncanny about this happy, but not truly real, creature.

If we can’t find any significant consequences in your routine interactions with the dog, what about everything else? How do the evil psychologists actually make this happen? What kind of horrible voodoo artifact would it take to control not only your perception, but the perceptions of everyone else?

One might say: but you can’t use the dog’s bones to dig its grave, because only your perceptions get controlled. That’s true. Your perception of “digging a hole” might be altered, but you won’t be able to actually dig a hole, because perceptions aren’t spades.

Now, what if you try to dig a hole with the poor dog’s bones, and an actual hole appears? You try putting your hand inside, and nothing pushes back. It’s a real hole.

Congratulations, the unethical-but-smart psychologists created an instrument that allows them to rewrite reality in whichever way they want, in a fine-grained and very precise way, too! You spill the dog’s guts? Yep, here they are, fitting right into the hole you dug. You feed the dog cheap canned food and it poops it out. (It’s even better if it doesn’t. I’d buy a genetically modified pet that doesn’t shit everywhere. “But it’s important that my dog poops, because I care about reality!” Oh my god, it must also be important to you that humans suffer and die, because you care about reality. Naturalistic fallacy much?) Everything works as it’s supposed to.

Do you still think that taking a dollar is the wrong choice, even though literally nothing changes afterwards? If you do, do you think it’s a rational choice? Or is your S1 deluding you?

You don’t care about minimal differences. If the atoms constituting the dog are a bit lacking, so what? It doesn’t necessarily matter and you know it. If you ever shaved your dog, you altered the constitution of its atoms by a great amount, probably several times. If the dog in fact does act a bit differently, but it’s impossible to detect with the human eye, so what?

“But the dog isn’t actually a dog, it’s some whacko mechanism that impinges on my senses and makes me see a dog!” Yes! The real dog isn’t actually a dog either way; it’s particles that impinge on your senses and make you see a dog! Oh wait...

If there are no real consequences that make a real difference, then you should take the offer. Maybe the voodoo artifact of the evil psychologists is, in fact, very unstable, and you want to destroy it before it destroys everything else, but let’s leave that to sci-fi authors, shall we?

It might be a rational choice to refuse the offer, but only if the bad consequences of the choice outweigh the good ones. If I’m confident that taking the offer and getting the dog shot will lead to a better world, I will do it. Did you flinch? Great! Now shut up and multiply.

I used the words “truly real”. The dog doesn’t matter; the consequences of the phenomenon that you call “dog” matter. “Real” is just a heuristic, and a useful one at that, because the world is complex and artificial interference can easily become misguided and lead to disastrous consequences if applied to larger-scale phenomena.

Anyway, what am I getting at? The fact that you chose to refuse the offer, if you did, doesn’t necessarily mean that you care about reality. If it’s true that nothing of importance changes in the world after your dog gets shot, then your feelings deceive you. If you say that minuscule differences matter to you, and that even microscopic alterations in the “natural” behavior of the dog matter in themselves, then you’re either a liar or a fool. Fight me.

But if you refuse the offer because you think that “real” is a good heuristic, and you don’t have enough evidence to trust the damned psychologists, then I applaud you.

I’m sure there are certain people who were also left dissatisfied with the thought experiment and went “meh, I’m still right after all”.

If you’re one of them, you’re wrong.

Not in your analysis, but in the way you model reality. No, heroin isn’t all you need; you know how most actual, real people who go this route end up. No, wireheading isn’t all you need, because you live in a world that doesn’t have it, and probably won’t have it until you die.

Yes, I’m leading you to a pretty obvious conclusion.

Here it is.


Orient yourself to reality.


When you conduct a thought experiment, you create a very rough, simplified, tiny model of some part of existence, and then struggle to update on the fake evidence that you’re getting. And you can succeed! And it’s gonna be awful and horrible and bad!

Thought experiments can be useful, but you need to know exactly what you are getting yourself into, how exactly to interpret the evidence you’re getting from them, and what exact hypotheses they are supporting. Don’t fall prey to folk psychology.

Let’s get back to selfishness. Selfishness is definitely not hedonism, not in the strict sense of the word. In order to learn what selfishness is, you need to study humans, not philosophy. And if you do, you’ll find out that a bunch of stuff matters: our aliefs, other humans, sex, prestige, et cetera. And then there’s you, a unique individual with a given set of traits and abilities. And they’re malleable, at least in principle, because neuroplasticity. And there’s your environment, variously suited (or not) to your traits and abilities. Learn! Learn and flourish!

Soares is correct in advising you to search your memory for motivating material. Memory reconsolidation served you well and gave you almost perfect, clean, censored memories of the good stuff.

Wait, you’re saying it’s not real, right?

Yes! Or no! So what! Who cares! You’re a human; act in ways suited to humans, don’t follow some dry, abstract idea. If the memory keeps you moving, then it has served its purpose already. Keep in mind that reality can be different from how you remember it, but also remember (ha!) that without change there won’t be change. Prepare yourself for the journey and go exploring!

But which one to pick?

Who cares? Satisfice if you can! You want to launch yourself into the world and get destroyed by it, in order to get reborn and become stronger, like a phoenix.

Nate is also right in saying that you won’t get rid of the listlessness after merely daydreaming for a while. That’s not how humans work, obviously. But it’s a start.

Motivation and desire won’t just appear in front of you in a puff of smoke; they’re born out of S1 crashing into experience. And what a coincidence! Our Inner Sim can generate experience on the spot! You’ll need ways to keep the flame alive, though.

Maybe Nate knew all this already. Maybe he decided to be superrational and shuffle his arguments in order to convince you. I don’t know.

I’m offering you the Hard Way out.

You’ll probably struggle with inner conflicts a lot more if you follow this Way.

Keep chipping away at them and you’ll get rewarded by better, stronger models, which won’t get broken by reality.

You know why? Because they will complete it.