# Hazard’s Shortform Feed

In light of read­ing through Rae­mon’s short­form feed, I’m mak­ing my own. Here will be smaller ideas that are on my mind.

• Over the past few months I’ve no­ticed a very con­sis­tent cy­cle.

1. No­tice some­thing fishy about my models

2. Struggle and strain until I'm able to formulate the extra variable/handle needed to develop the model

3. Re-read an old post from the se­quences and re­al­ize “Oh shit, Eliezer wrote a very lu­cid de­scrip­tion of liter­ally this ex­act same thing.”

What’s sur­pris­ing is how much I’m sur­prised by how much this hap­pens.

• Often I have an idea three times in various forms before it makes it to the territory of, “Well thought out idea that I'm actually acting upon and having good stuff come from it.”

By default, I follow a pattern of, “semi-randomly expose myself to lots of ideas, not worry a lot about screening for repetitive stuff, let the most salient ideas at any given moment float up to receive tid-bits of conscious thought, then forget about them till the next semi-random event triggers it being thought about.”

I'd be interested if there was a better protocol for, “This thing I've encountered seems extra important/interesting, let me dwell on it more and more intentionally integrate it into my thinking.”

• Ahh, the “meta-thoughts” idea seems like a useful thing to apply if/when this happens again.

(which begs the question: when I wrote the above comment, why didn't I have the meta-thought that I did in the linked comment? (I don't feel up to thinking about that in this moment)) *tk*

• Some­times when I talk to friends about build­ing emo­tional strength/​re­silience, they re­spond with “Well I don’t want to be­come a robot that doesn’t feel any­thing!” to para­phrase them un­char­i­ta­bly.

I think Wolverine is a great physical analog for how I think about emotional resilience. Every time Wolverine gets shot/stabbed/clubbed it absolutely still hurts, but there is an important way in which these attacks “don't really do anything”. On the emotional side, the aim is not that you never feel a twinge of hurt/sorrow/jealousy etc. but that said pain is felt, and nothing more happens besides that twinge of pain (unless those emotions held information that would be useful to update on).

Likewise, though I'm not really a Marvel buff, I'm assuming Wolverine can still die. Though he can heal crazy fast, it's still conceivable that he could be physically assaulted in such a way that he can't recover. Same for the emotions side. I'm sure that for most emotionally resilient people there is some conceivable, very specific idiosyncratic scenario that could “break them”.

That doesn’t change the fact that you’re a moth­er­fuck­ing bad-ass with re­gen­er­a­tive pow­ers and can take on most threats in the mul­ti­verse.

• Maybe emo­tional re­silience is bad for some forms of sig­nal­ing. The more you re­act emo­tion­ally, the stronger you sig­nal that you care about some­thing. Keep­ing calm de­spite feel­ing strong emo­tions can be mis­in­ter­preted by oth­ers as not car­ing.

Mi­sun­der­stand­ings cre­ated this way could pos­si­bly cause enough harm to out­weigh the benefits of emo­tional re­silience. Or per­haps the bal­ance de­pends on some cir­cum­stances, e.g. if you are phys­i­cally strong, peo­ple will be nat­u­rally afraid to hurt you, so then it is okay to de­velop emo­tional re­silience about phys­i­cal pain, be­cause it won’t re­sult in them hurt­ing you more sim­ply be­cause “you don’t mind it any­way”.

• That prob­lem should be ad­dressed by bet­ter mas­tery over one’s pre­sen­ta­tion, not by re­lin­quish­ing mas­tery over one’s emo­tions.

• Keep­ing calm de­spite feel­ing strong emo­tions can be mis­in­ter­preted by oth­ers as not car­ing.

To some ex­tent, the in­ter­pre­ta­tion is ar­guably cor­rect; if you per­son­ally suffer from some­thing not work­ing out, then you have a much greater in­cen­tive to ac­tu­ally en­sure that it does work out. If a situ­a­tion go­ing bad would cause you so much pain that you can’t just walk out from it, then there’s a sense in which it’s cor­rect to say that you do care more than if you could just choose to give up when­ever.

• I’m in the pro­cess of turn­ing this thought into a full es­say.

Ideas that are get­ting mixed to­gether:

• A mind can perform origi­nal see­ing (to var­i­ous de­grees), and it can also use cached thoughts.

• Cached thoughts are more “Procedural instruction manuals” and original seeing is more “Your true anticipations of reality”.

• Both re­al­ity and so­cial re­al­ity (so­cial im­prov web) ap­ply pres­sures and re­wards that shape your cached thoughts.

• It of­ten looks like peo­ple can be said to have mo­tives/​agen­das/​goals, be­cause their cached thoughts have been formed by the pres­sures of the so­cial im­prov web.

• Ex. Tom has a cached thought, the ex­e­cu­tion of which re­sults in “Peo­ple Like Tom”, which makes it look rea­son­able to as­sert “Tom’s mo­tives are for peo­ple to like him”.

• Peo­ple are Cached-thought-ex­ecu­tors, not Utility-max­i­miz­ers/​agenda-pur­suers.

• One can switch from act­ing from cached thoughts, to act­ing from origi­nal see­ing with­out ever re­al­iz­ing a switch hap­pened.

• Motte and bailey doesn’t have to be in­ten­tional.

• When talk­ing with some­one and ap­ply­ing pres­sure to their be­liefs, it no longer be­comes effec­tive to chase down their “mo­tives”/​cached thoughts, be­cause they’ve switched to a weak form of origi­nal see­ing, and in that mo­ment effec­tively no longer have the “mo­tives” they had a few mo­ments ago.

• Ten­ta­tively dub­bing this the Schrod­inger’s Agenda.

• Just wanted to say I liked the core insight here (that people seem more-like-hidden-agenda executors when they're running on cached thoughts). I think it probably makes more sense to frame it as a hypothesis than a “this is a true thing about how social reality and motivation work”, but a pretty good hypothesis. I'd be interested in the essay exploring what evidence might falsify it or reinforce it.

(This is some­thing that’s not cur­rently a ma­jor pat­tern among ra­tio­nal­ist think­pieces on psy­chol­ogy but prob­a­bly should be)

• hm­m­mmm, iron­i­cally my im­me­di­ate thought was, “Well of course I was con­sid­er­ing it as a hy­poth­e­sis which I’m ex­am­in­ing the ev­i­dence for”, though I’d bet that the map/​ter­ri­tory sep­a­ra­tion was not nearly as em­pha­sized in my mind when I was gen­er­at­ing this idea.

Yeah, I think your fram­ing is how I’ll take the es­say.

• Here’s a more re­fined way of point­ing out the prob­lem that the par­ent com­ment was ad­dress­ing:

• I am a gen­eral in­tel­li­gence that emerged run­ning on hard­ware that wasn’t in­tel­li­gently de­signed for gen­eral in­tel­li­gence.

• Be­cause of the sorts of prob­lems I’m able to solve when di­rectly ap­ply­ing my gen­eral in­tel­li­gence (and be­cause I don’t un­der­stand in­tel­li­gence that well), it is easy to end up im­plic­itly be­liev­ing that my hard­ware is far more in­tel­li­gent than it ac­tu­ally is.

• Ex­am­ples of ways my hard­ware is “sub-par”:

• It doesn't seem to get automatic belief propagation.

• There doesn’t seem to be strong rea­sons to ex­pect that all of my sub­sys­tems are guaran­teed to be al­igned with the mo­tives that I have on a high level.

• Be­cause there are lots of lit­tle things that I im­plic­itly be­lieve my hard­ware does, which it does not, there are a lot of cor­rec­tive mea­sures I do not take to solve the defi­cien­cies I ac­tu­ally have.

• It's completely possible that my hardware works in such a way that I'm effectively working on different sets of beliefs and motives at various points in time, and I have a bias towards dismissing that because, “Well that would be stupid, and I am intelligent.”

Another per­spec­tive. I’m think­ing about all of the ex­am­ples from the se­quences of peo­ple near Eliezer think­ing that AI’s would just do cer­tain things au­to­mat­i­cally. It seems like that lens is also how we look at our­selves.

Or it could be “humans are not automatically strategic”, but on steroids. Humans do not automatically get great hardware.

• Here’s a pat­tern I’m notic­ing more and more: Gark makes a claim. Tlof doesn’t have any par­tic­u­lar con­tra­dic­tory be­liefs, but takes up ar­gu­ment with Gark, be­cause (and this is the ac­tual-source-of-be­hav­ior be­cause) the claim pat­tern matches, “Some­one try­ing to lay claim to a tool to wield against me”, and peo­ple of­ten try to get claims “ap­proved” to be used against the other.

Tlof's behavior is a useful adaptation to a combative conversational environment, and has been normalized to feel like a “simple disagreement”. Even in high trust scenarios, Tlof by habit continues to follow conversational behaviors that get in the way of good truth seeking.

• A bit more generalized: there are various types of “gotcha!”s that people can pull in conversation, and it is possible to habituate various “gotcha!” defenses. These behaviors can detract from conversations where no one is pulling a “gotcha!”.

• Something as simple as talking too loud can completely screw you over socially. There's a guy in one of my classes who talks at almost a shouting level when he asks questions, and I can feel the rest of the class tense up. I'd guess he's unaware of it, and this is likely a way he's been for many years, which has subtly/not so subtly pushed people away from him.

Would it be a good idea to tell him that a lot of peo­ple don’t like him be­cause he’s loud? Could I pack­age that mes­sage such that it’s clear I’m just try­ing to give him use­ful info, as op­posed to try­ing to in­sult him?

This seems like the sort of prob­lem where most of the time, no one will bring it up to him, un­less they reach a “break­ing point” in which case they’d tell him he’s too loud via a so­cial at­tack. It seems like there might be a gen­eral solu­tion to this sort of co­nun­drum.

• I pointed out in this post that explanations can be confusing because you lack some assumed knowledge, or because the piece of info that will make the explanation click has yet to be presented (assuming a good/correct explanation to begin with). It seems like there can be a similar breakdown when facing confusion in the process of trying to solve a problem.

I was working on some puzzles in assembly code, and I made the mistake of interpreting hex numbers as decimal (treating 0x30 as 30 instead of 48). This led me to draw a memory map that looked really weird and confusing. There also happened to be a bunch of nested functions that would operate on this block of memory. I definitely noticed my confusion, but I think I implicitly predicted that my confusing memory diagram would make sense in light of investigating the functions more.
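The hex-vs-decimal mistake above is easy to reproduce; a minimal sketch (the offsets here are made up for illustration, not from the actual puzzle):

```python
# Misreading hex offsets as decimal, as in the 0x30-vs-48 mistake above:
# dropping the "0x" and reading the digits as base 10 shifts every
# "address", which is exactly what skews a hand-drawn memory map.
offsets_hex = ["0x10", "0x20", "0x30"]

as_decimal_mistake = [int(s[2:]) for s in offsets_hex]  # reads "30" as 30
as_hex_correct = [int(s, 16) for s in offsets_hex]      # 0x30 -> 48

print(as_decimal_mistake)  # [10, 20, 30]
print(as_hex_correct)      # [16, 32, 48]
```

The gap between the two lists grows with the offset, which is why the resulting memory map doesn't just look shifted, it looks structurally wrong.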

In this particular example, that was the wrong prediction to make. I'm curious if I would have made the same prediction if I had been making it explicitly. This seems to point at a general situation one could find themselves in when noticing confusion: “Did I screw something up earlier, or do I just not have enough info for this to make sense?”

Again, in my assembly example, I might have benefited from examining my confusion. I could have noticed that the memory diagram I was drawing didn't just not immediately make sense, but that it also violated most rules of “how to not ruin your computer through bad code”.

• Lol, one reason it's hard to talk to people about something I'm working through when there's a large inferential gap, is that when they misunderstand me and tell me what I think, I sometimes believe them.

• Example Me: “I'm thinking about possible alternatives to typical ad revenue models of funding content creation and what it would take to switch, like what would it take to get eeeeeveryone on patreon? Maybe we could eliminate some of the winner takes all popularity effects of selling eyeballs.”

Friend: some­what in­dig­nantly “You’re miss­ing the point. Why would you think this could solve pop­u­lar­ity con­test? Pa­treon just shifts where that con­test hap­pens.”

Me: fumbles around trying to explain why I think patreon is a good idea, even though I DON'T, and explicitly started the convo with “I'm exploring possibilities”, but because my thoughts aren't yet super clear I'm super into supporting something the other person thinks I think

• This hap­pens on LW as well, fairly of­ten. It’s hard to re­ally in­tro­duce a topic in a way that peo­ple BELIEVE you when you say you’re ex­plor­ing con­cept space and look­ing for ideas re­lated to this, rather than try­ing to eval­u­ate this ac­tual state­ment. It’s still worth try­ing to get that across when you can.

It’s also im­por­tant to know your au­di­ence/​dis­cus­sion part­ners. For many peo­ple, it’s en­tirely pre­dictable that when you say “I’m think­ing about … get ev­ery­one on pa­treon” they will re­act to the idea of get­ting their rep­re­sen­ta­tion of “ev­ery­one” on their ideas of “pa­treon”. In fact, I don’t know what else you could pos­si­bly get.

It may be bet­ter to try to frame your un­cer­tainty about the prob­lem, and ex­plore that for awhile, be­fore you con­sider solu­tions, es­pe­cially solu­tions to pos­si­bly-re­lated-but-differ­ent prob­lems. WHY are you think­ing about fund­ing and rev­enue? Do you need money? Do you want to give money to some­one? Do you want some per­son C to cre­ate more con­tent and you think per­son D will fund them? It’s worth it to ex­plore where Pa­treon suc­ceeds and fails at what­ever goals you have, but first you have to iden­tify the goals.

• Separat­ing two differ­ent points in my ex­am­ple, there’s “You mi­s­un­der­stand­ing my point leads me to mi­s­un­der­stand my point” (the thing I think is the most in­ter­est­ing part) and there’s also “blarg! Stop mi­s­un­der­stand­ing me!”

I’m with you on your sug­ges­tion of fram­ing a dis­cus­sion as un­cer­tainty about a prob­lem, to get less of the mi­s­un­der­stand­ing.

• I’ve re­cently re-read Lou Keep’s Uruk se­ries, and a lot more ideas have clicked to­gether. I’m go­ing to briefly sum­ma­rize each post (hope­fully will tie things to­gether if you have read them, might not make sense if you haven’t). This is also a mini-ex­per­i­ment in us­ing com­ments to make a twit­ter-es­que idea thread.

• This post tracks ideas in The True Believer, by Hoffer.

There is a fun­da­men­tal differ­ence be­tween the ap­peal of a mass move­ment and the ap­peal of a prac­ti­cal or­ga­ni­za­tion. The prac­ti­cal or­ga­ni­za­tion offers op­por­tu­ni­ties for self-ad­vance­ment, and its ap­peal is mainly to self-in­ter­est. On the other hand, a mass move­ment, par­tic­u­larly in its ac­tive, re­vival­ist phase, ap­peals not to those in­tent on bols­ter­ing and ad­vanc­ing a cher­ished self, but to those who crave to be rid of an un­wanted self. A mass move­ment at­tracts and holds a fol­low­ing not be­cause it can satisfy the de­sire for self-ad­vance­ment, but be­cause it can satisfy the pas­sion for self-re­nun­ci­a­tion.

The main MO of a MM is to replace action with identity. This is the general phenomenon of which narcissism (TLP and samzdat brand) is a specific form.

Moloch-like forces conspire such that the most successful MMs will be the ones that do the best job of keeping their members very frustrated. Hate is often used to keep the fire burning.

• One sen­tence: Metis is the be­lief, the rit­ual, and the world view, and they are way less sep­a­rable than you think.

Explores the recent history of gri-gri, witch doctor magic used in Africa to make people invulnerable to bullets to fight against local warlords (it also can involve some nasty sacrifice and cannibalism rituals). Lou emphasizes the point that it's not enough to go “ah, gri-gri is a useful lie that helps motivate everyone to fight as a unified force, and fighting as a unified force is what actually has a huge impact on fighting off warlords...”

The State’s re­sponse is likely go­ing to be “Ahhh, so gri-gri doesn’t do any­thing, let’s ban it and just tell peo­ple to fight in groups”. This will fail, be­cause this has no the­ory of in­di­vi­d­ual adop­tion (i.e, the only rea­son peo­ple fought as one was be­cause they liter­ally thought they were in­vuln­er­a­ble).

This is all to hammer in the point that for any given piece of illegible metis, it's very hard to find an actual working replacement, and Very hard (possibly beyond the state's pay grade) to find a legible replacement.

• One sen­tence: Peo­ple care about the so­cial as­pects of life, and the so­cial is now em­bed­ded in mar­ket struc­tures in a way that al­lows Moloch-es­que forces to de­stroy the good so­cial stuff.

It starts by addressing the “weirdness” of everyone being angry, even though people are richer than ever. This post tracks the book The Great Transformation by Polanyi.

Claim: (quote from Polanyi)

He [man] does not act so as to safe­guard his in­di­vi­d­ual in­ter­est in the pos­ses­sion of ma­te­rial goods; he acts so as to safe­guard his so­cial stand­ing, his so­cial claims, his so­cial as­sets. He val­ues ma­te­rial goods only in­so­far as they serve this end.

Capitalism is differentiated from markets. The reason being that markets have always been around (they were mediated and controlled through social relationships); the new/recent thing is building society around a market.

Claim: Once you treat la­bor and land like com­mon mar­ket goods and sub­ject them to the flows of the mar­ket, you open up a path­way for Moloch to gnaw away at your soul. Now “in­cen­tives” can ap­ply pres­sure such that you slowly sac­ri­fice more and more of the so­cial/​re­la­tional as­pects of life that peo­ple ac­tu­ally care about.

• The concept of legibility is introduced (I like Ribbon Farm's explanation of the concept). The state only talks in terms of legibility, and thus can't understand illegible claims, ideas, practices. The powerless (i.e. the illegible who can't speak in the terms of the state) tend to get crushed. (nowadays an illegible group would be Christians)

Lou points to the cur­rent pro­cess/​tra­jec­tory of the state slowly leg­i­bi­liz­ing the world, and de­stroy­ing all that is illeg­ible in its path. Be­sides not­ing this pro­cess, Lou also claims that some of those illeg­ible prac­tices are valuable, and be­cause the state does not truly un­der­stand the illeg­ible prac­tices it de­stroys, the state does not provide ad­e­quate re­place­ments.

Ex­tra claim: a lot of the illeg­ible metis be­ing de­stroyed has to do with hap­piness, fulfill­ment, and other es­sen­tial com­po­nents of hu­man ex­pe­rience.

• I re­ally like that you’re do­ing this! I’ve tried to get into the se­ries, but I haven’t done so in a while. Thanks for the sum­maries!

(Also, maybe it’d be good for fu­ture com­ments about what you’re do­ing to be chil­dren of this post, so it doesn’t break the flow of sum­maries.)

• Thoughts on writing (I've been spending 4 hours every morning for the last week working on Hazardous Guide to Words):

## Feedback

Feedback is about figuring out stuff you didn't already know. I wrote the first draft of HGTW a month ago, and I wrote it in “Short sentences that convince me personally that I have a coherent idea here”. When I went to get feedback from some friends last week, I'd forgotten that I hadn't actually worked to make it understandable, and so most of the feedback was “this isn't understandable”.

## Writ­ing with purpose

Almost always if I get bogged down when writing it's because I'm trying to “do something justice” instead of “do what I want”. “Where is the meaning?” started as “oh, I'll just paraphrase Hofstadter's view of meaning”. The first example I thought of was to talk about how you can draw too much meaning from things, and look at claims of the pyramids predicting the future. I got bogged down writing those examples, because “what can lead you to think meaning is there when it's not?” was not really what I was talking about, nor was it what I needed to talk about language. It is interesting though.

I’m get­ting bet­ter at notic­ing the feel­ing of be­ing part way through an ex­pla­na­tion and go­ing “oh shit, this is wrong/​not the right frame/​isn’t con­gru­ent with the last chap­ter/​doesn’t build to where I want”. There have been times in the past when I thought that feel­ing was just pesky perfec­tion­ism.

Hav­ing an ex­plicit pur­pose for each post is crazy helpful for de­cid­ing what does and doesn’t go in.

## Process

I'm haphazardly growing more of a process with writing. I've currently got an outline of the refactored version of HGTW with thought given to building concepts in the right order. Now I'm going down the outline and making the required posts.

I've started heading each post with one or two sentences describing for myself what the purpose of the post is. I then try to outline the post, and when I'm done or if I get stuck, I just start trying to write it out. This is “get it all out”: don't even worry about connecting sentences, bail mid paragraph and start again. Rn I'm going on gut for switching between outlining, organizing, and writing out content. I'm getting much better at ditching stuff that I liked if I don't think it serves the purpose.

Oh, I'm also writing on work cycles (pomodoros with sprinkles). Breaks are stretching and staring out the window, great for not destroying my eyes and keeping my body from shriveling up and dying.

## Mus­ing on Ways I might bet­ter op­er­a­tional­ize my writing

• Stric­ter sense of audience

• Or in the re­verse fram­ing, stric­ter sense of “this is my style and I’m stick­ing to it”

• More in­ten­tion­ally en­train “pur­pose driven” writ­ing?

• Triggers: I'm getting bored. It feels hard to write. I haven't written anything in a minute. All my phrasings sound fake.

• Action: “Aha! Friction, I noticed, thank you brain. Why was I trying to write that? Why does it feel weird? If this doesn't really matter, what does? Have I gotten to what matters yet?”

• Can I productively work on writing in shorter chunks of time, or can I really only do stuff in 3–12 hour chunks?

• Yeah, this seems pretty im­por­tant given that I want to con­tinue writ­ing all through the next semester/​year/​life.

• I think it might be more use­ful to have more con­crete men­tal buck­ets for stages of writ­ing.

• When I'm doing 6 cycles in a day, I start each cycle like “Cool, time to [clarify the middle section]” as opposed to “write more”. It might be the case that “working on that blog post” might be too fuzzy to come to every day.

• End each cy­cle by writ­ing down the next step

• Maybe a different mentality. In a given cycle, don't try to connect all the dots. Just explain a few dots. After a few days of having made some dots, then I might be able to connect them in one day.

• I finished read­ing Crazy Rich Asi­ans which I highly en­joyed. Some thoughts:

The characters in this story are crazy status obsessed, and my guess is because status games were the only real games that had ever existed in their lives. Anything they ever wanted, they could just buy, but you can't pay other rich people to think you are impressive. So all of their energy goes into doing things that will make others think they're awesome/fashionable/wealthy/classy/etc. The degree to which the book plays this line is obscene.

Though you're never given exact numbers on Nick's family fortune, the book builds up an aura of impenetrable wealth. There is no way you will ever become as rich as the Youngs. I've long been a grumpy curmudgeon about showing off/signalling/buying positional goods, but a thing that this book made real to me was just how deep these games can go.

If you pick the most straightforward status marker (money), you've decided to try and climb a status ladder of impossible height with vicious competition. If you're going to pick a domain in which you care more about your ordinality than your cardinality, for the love of god choose carefully.

This re­minds me of some­thing an old fenc­ing coach told me:

Fenc­ing is a small enough sport that if you just train re­ally dili­gently, you could make it to the Olympics. If you want to be the best in foot­ball, you have to train re­ally dili­gently, be a ge­netic freak, and be lucky.

Whether or not that is/was true, it's an important thing to keep in mind. Also, I think I want to pay extra attention to “Do I actually think that XYZ is cardinally cool, or is it just the most impressive thing anyone is doing in my sphere of awareness?” Implication being that if it's the latter, expanding my sphere will lead to me not feeling good about doing XYZ.

• I no­tice a dis­par­ity be­tween my abil­ity to parse difficult texts when I’m just “read­ing for fun” ver­sus when I’m try­ing to solve a par­tic­u­lar prob­lem for a home­work as­sign­ment. It’s of­ten eas­ier to do it for home­work as­sign­ments. When I’ve got time that’s just, “read­ing up on fun and in­ter­est­ing things,” I bounce-off of difficult texts more of­ten than I would like.

After examining some recent instances of this happening, I've realized that when I'm reading for fun, my implicit goal has often been, “read whatever will most quickly lead to a feeling of insight.” When I'm reading for homework, I have a very explicit goal of, “understand how dynamic memory management works,” or whatever the topic is. Upon reflection, I think that most of the time I'd be better served if I approached my fun-exploratory reading with a goal of, “Find something that seems interesting, and then focus on trying to understand that in particular.”

The useful TAP would be: notice when I'm bouncing off a text, check if my actual reasons for reading are aligned with my big picture reasons for reading, and readjust as necessary.

• The university I'm at has meal plans where you get a certain number of blocks (meal + drink + side). These are things that one has, and uses to buy stuff. Last week at dinner, I gave the cashier my order and he said “Sorry man, we ran out of blocks.” If I didn't explain blocks well enough, this is a statement that makes no sense.

I com­pletely broke the flow of the back and forth and replied with a re­ally con­fused, “Huh?” At that point the guy and an­other worker started laugh­ing. Turns out they’d been com­ing up with non­sen­si­cal lines and see­ing how many peo­ple they would fly past.

Mo­ral of this story, I think the only rea­son I no­ticed my con­fu­sion and didn’t do men­tal gym­nas­tics to “make it make sense” was be­cause I was re­ally tired. Yep, the great­est weapon I wield against the pres­sures of so­cial re­al­ity is my de­sire to go to bed.

• Sketch of a post I’m writ­ing:

“Keep your iden­tity small” by Paul Gra­ham $$\cong$$ “Peo­ple get stupid/​un­rea­son­able about an is­sue when it be­comes part of their iden­tity. Don’t put things into your iden­tity.”

“Do Some­thing vs Be Some­one” John Boyd dis­tinc­tion.

I'm going to think about this in terms of “What is one's main strategy to meet XYZ needs?” I claim that “This person got unreasonable because their identity was under attack” is more a situation of “This person is panicking at the possibility that their main strategy to meet XYZ need will fail.”

Me growing up: I made an effort to not specifically “identify” with any group or ideal. Also, my main strategy for meeting social needs was “Be so casually impressive that everyone wants to be my friend.” I can't remember an instance of this, but I bet I would have looked like “My identity was under attack” if someone started saying something that undermined that strategy of mine. Being called boring probably would have been terrifying.

“Keep your identity small” is not actionable advice. The target should be more “Build multi-faceted confidence in yourself over time, thus allowing you to never feel like one strategy failing is your doom.”

Another way iden­tity is a slightly un­helpful frame: If you claim iden­tities are pas­sive, in­ac­tive, “be­ing” things, you are ig­nor­ing iden­tities like, “I’m part of this sub cul­ture that ac­tu­ally DOES stuff” or “I am de­ci­sive and get things done quickly”. Some iden­tities can in­volve more vac­u­ous sig­nal­ling than oth­ers.

Also, some­thing about iden­tity as a blue print “I’ll try to be like this type of per­son, be­cause they seem to suc­ceed” that is very lossy and prone to Good­hart­ing. Seems similar to the differ­ence be­tween ask­ing “Is it ra­tio­nal to be­lieve the sky is blue?” vs “Is the sky blue?”

• Yesterday I read the first 5 articles on google for “why arguments are useless”. It seems pretty in the zeitgeist that when people have their identity challenged, you can't argue with them. A few of them stopped there and basically declared communication to be impossible if identity is involved; a few of them circuitously hinted at learning to listen and find common ground. A reason I want to get this post out is to add to the “Here's why identity doesn't have to be a stop sign.”

I’ve taken co­pi­ous notes in note­books over the past 6 years, I’ve used ev­er­note on and off as a cap­ture tool for the past 4 years, and for the past 1.5 years I’ve been try­ing to or­ga­nize my notes via a per­sonal wiki. I’m in the pro­cess of switch­ing and re­design­ing sys­tems, so here’s some thoughts.

• # Con­cepts and Frames

As­so­ci­a­tion, link­ing and graphs

A defining idea in this space is “Your memory works by association; get your note taking to mirror that.” A simple version of this is what you have in a wiki. Every concept mentioned that has its own page has a link to it. I'm a big fan of graph visualizations of information, and you could imagine looking at a graph of your personal wiki where edges are links. Roam embraces links with memory: all your notes know if they've been linked to and display this information. My idea for a memex tool to make really interesting graphs is to basically give you free rein to make the type system of your nodes and edges, and give you really good filtering/search capacity on that type system. Basically a dope gui/text editor overtop of neo4j.
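A minimal sketch of that typed-node/typed-edge idea (all names here are hypothetical illustrations, not an actual Roam or neo4j API):

```python
from dataclasses import dataclass, field

# Hypothetical sketch of a typed note graph: every node and edge carries a
# user-defined type, and queries filter on those types.
@dataclass
class Node:
    name: str
    node_type: str  # e.g. "concept", "post", "person"

@dataclass
class Edge:
    src: str
    dst: str
    edge_type: str  # e.g. "links_to", "inspired_by"

@dataclass
class NoteGraph:
    nodes: dict = field(default_factory=dict)
    edges: list = field(default_factory=list)

    def add_node(self, name, node_type):
        self.nodes[name] = Node(name, node_type)

    def link(self, src, dst, edge_type):
        self.edges.append(Edge(src, dst, edge_type))

    def backlinks(self, name):
        # Roam-style: everything that links *to* this note.
        return [e.src for e in self.edges if e.dst == name]

    def filter_edges(self, edge_type):
        # The "good filtering/search capacity on the type system" part.
        return [(e.src, e.dst) for e in self.edges if e.edge_type == edge_type]
```

The point of making `node_type` and `edge_type` free-form strings is that the user, not the tool, defines the type system; a real implementation would presumably push these filters down into a graph database query.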

Per­sonal Lit review

This is one way I frame what I want to my­self. Some­times I go “Okay, I want to re­think how I ori­ent to loose-tie friend­ships.” Then I re­mem­ber that I’ve definitely thought about this be­fore, but can’t re­mem­ber what I thought. This is the situ­a­tion where I’d want to do a “lit re­view” of how I’ve at­tacked this is­sue in the past, and move for­ward in light of my his­tory.

Just-in-time ideation

I take a shit ton of notes. Some are notes on what I’m read­ing, oth­ers are ran­dom ideas for jokes, pro­jects, the­o­ries, arm chair philoso­phiz­ing. Not all ideas should be, or can be acted upon right away, or at all (like “turn Spain into a tor­tilla”). But there is some pos­si­ble fu­ture situ­a­tion where it would be use­ful to have this idea brought to mind. My ideal memex would ac­tu­ally be a ge­nie that re­mem­bers ev­ery­thing I’ve thought and writ­ten, fol­lows me around, and con­stantly goes, “What would be use­ful for Hazard to re­mem­ber right now?” This can be acted on in how you de­sign your notes. Think, “What sort of situ­a­tion would it be use­ful to re­mem­ber this in? In that situ­a­tion, what key words and phrases will be in my head? In­clude those in this note so they’ll pop up in a search for those key­words.”
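The “include the keywords you'll search for later” trick above can be sketched as plain substring search over note bodies (the notes and keywords here are hypothetical, reusing examples from this thread):

```python
# Just-in-time ideation, minimal version: seed each note with the key words
# and phrases future-you is likely to search for, then match on them.
notes = {
    "loose-tie friendships": "How do I orient to these? keywords: friends, social, reconnect",
    "turn Spain into a tortilla": "Crackpot idea, not actionable. keywords: jokes, food",
}

def search(notes, query):
    q = query.lower()
    return [title for title, body in notes.items()
            if q in title.lower() or q in body.lower()]

print(search(notes, "friends"))  # ['loose-tie friendships']
```

Real tools would rank matches rather than just list them, but the design principle survives: the note's recoverability depends on the vocabulary you embed in it at capture time.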

Low fric­tion cap­ture everything

If you get perfectionist with your notes, you lose. This frame imagines your mind as a firehose of gold, and you want to capture all of it and sort out what's good later. Record all ideas, no matter how crackpot. Carry a notebook, put your note-taking app on your homescreen, set up your Alexa to dictate notes, do whatever it takes. One principle that comes out of this frame is to be lax on hierarchy and organization. It should be as easy as possible to just capture an idea, with no regard for "where it goes". If I have to navigate a file tree and decide where a doc/note/brainstorm goes before I've even gotten it out, it might die. The extreme end is NO organization, all search. Tiago doesn't like that and suggests "no org on capture, and opportunistically organize and summarize and combine over time".

Put EVERYTHING in your memex

This is embraced by Andrew Louis. This is also embraced by Notion; they want to be the one app you put everything in. I don't necessarily want one application that can do it all (text, tables, video, blah blah blah), but I DO want one memex command center where the existence of all data and files is recorded, and you can connect and interlink them. This is sorta like TagSpaces, which is literally a wrapper around your file system, letting you tag, navigate, and add metadata to files for organizational purposes. I would LOVE to have one "master file system memex", special features for text editing, and then specific applications in charge of any more specialized functionality.

• # Peo­ple Talk­ing about Me­mex stuff

Ti­ago Forte: Build a Se­cond Brain (here’s an in­tro­duc­tion)

He’s been on my radar for a year, and I’ve just started read­ing more of his stuff. Sus­pi­cion that he might be me from the fu­ture. He’s all about the pro­cess and de­sign of the info flow and doesn’t sell a memex tool. Big ideas: find what you need when the time is right, new or­ganic con­nec­tions, your sec­ond brain should sur­prise you, pro­gres­sive sum­ma­riza­tion.

An­drew Louis: I’m build­ing a memex

This guy takes the memex as a way of life. Self-pro­claimed digi­tal pack­rat, he’s got ev­ery chat log since high­school saved, always has his gps on and records that lo­ca­tion, and ba­si­cally pours all of his digi­tal data into a mas­sive per­sonal database. He’s been de­vel­op­ing an app for him­self (par­tially for oth­ers) to man­age and in­ter­act with this. This goes waaaaaaaay be­yond note tak­ing. I’d binge more of his stuff if I wanted to get a sense for the emer­gent rev­e­la­tions that could come from in­tense memex­ing.

(check out his demo vid)

Conor: Roam

Conor both has a beta product and many ideas about how to organize ideas. Inspired by Zettelkasten (post about Zettelkasten; it was the name of the physical note-card system used by Niklas Luhmann). Check out his white paper for the philosophy.

• # Prod­ucts I’ve in­ter­acted with

Nuclino

Very cool. Mixes wiki, trello board, and graph cen­tric views. Has all the nice con­tent em­bed­ding, slash com­mands, etc. DOESN’T WORK OFFLINE :( (would be great oth­er­wise)

Style/​In­spira­tion: Wiki meets trello + ex­tra.

Roam

Conor has been developing this with the Zettelkasten system as his inspiration. The biggest feature (in my mind) is "deep linking". You can link other notes to your note and have them "expanded", and if you edit the deep-linked note inside a parent note, it actually edits the linked note. Also, notes keep track of every place they're mentioned. Allows for powerful spiderwebby knowledge connection. I'm playing with the beta, still getting familiar, and don't yet have much to say except that deep linking is exactly the feature I've always wanted and couldn't find.

Zim Wiki

Desktop wiki that works on Linux. Nothing fancy: uses a simple markdown-esque syntax, and everything is text files. I used it for a year; now I'm moving away. One reason is I want richer outlining powers like folding, but I'm also broadly moving away from framing my notes as a "personal wiki" for reasons I'll mention in another post.

PB Wiki

Just wiki software. When I first decided to use a wiki to organize my school notes, I used this. It's an online tool which is --, but works okay as a wiki.

Emacs Org Mode

(what I’m cur­rently us­ing) Emacs is a mag­i­cal ex­ten­si­ble text ed­i­tor, and org mode is a spe­cific pack­age for that ed­i­tor. Org mode has great out­lin­ing ca­pa­bil­ities, and un­limited pos­si­bil­ities for how you can cus­tomize stuff (cod­ing re­quired). The cur­rent thing that I’d re­ally need for org mode to fit my needs is to be able to search my notes and see pre­views of them (think ev­er­note search, you see the ti­tles of notes, and a pre­view of the con­tent). I think deft can get me this, haven’t in­stalled yet though. Long term, it seems emacs is ap­peal­ing be­cause it seems like I can craft my own work­flow with pre­ci­sion. Will take work though. Not recom­mended if you want some­thing that “Just works”.

Evernote

Have used a lot over the years. Great for cap­ture (it’s on your phone and your desk­top (but not linux [:(])). I’ve got sev­eral years of notes in there. I rarely build ideas in ev­er­note though. This is a “works out of the box” app.

• I re­ally like the idea of a per­sonal wiki. I’ve been think­ing for a while about how I can track con­cepts that I like but that don’t seem to be part of the zeit­geist. I might set up a per­sonal wiki for it!

• Yes! Think­ing about it is a great idea.

Is there any par­tic­u­lar open source soft­ware you use to set this up?

• I use GitBook.com, func­tions very well as a per­sonal wiki (can link to other pages, cat­e­gorise, etc)

• IIRC, there is some kind of tem­plate soft­ware you can use to set up a ba­sic wiki, kind of like how WordPress is a tem­plate soft­ware for a ba­sic blog. If you google around you’ll prob­a­bly find it, if it ex­ists.

I'm torn on WaitButWhy's new series The Story of Us. My initial reaction was mostly negative. Most of that came from not liking the frame of Higher Mind and Primitive Mind, as that sort of thinking has been responsible for a lot of hiccups for me, making "doing what I want" an unnecessarily antagonistic process. And then along the way I see plenty of other ways I don't like how he slices up the world.

The torn part: maybe this is the sort of post "most people" need to start bridging the inferential gap towards what I consider good epistemology? I expect most people on LW to find his series too simplistic, but I wonder if his posts would do more good than the Sequences for the average joe. As I'm writing this I'm acutely aware of how little I know about how "most people" think.

It also makes me think about how at some point in re­cent years I thought, “More dumbed down sim­plifi­ca­tions of crazy ad­vanced math con­cepts should ex­ist, to get more peo­ple a lit­tle bit closer to all the cool stuff there is.” I guessed a math­e­mat­i­cian might balk at this sug­ges­tion (“Don’t tar­nish my pre­cious pre­ci­sion!”) Am I re­act­ing the same way?

I dunno, what do you think?

• Agree, seems like LW for normies circa ten plus years ago? Re­ac­tion for stan­dard meta­con­trar­ian rea­sons, see­ing past self in it.

• I’d like to see some­one in this com­mu­nity write an ex­ten­sion /​ re­fine­ment of it to fur­ther {need-good-color-name}pill peo­ple into the LW memes that the “higher mind” is not fun­da­men­tally bet­ter than the “an­i­mal mind”

• Yep, agreed. I want all my friends and fam­ily to read the se­ries… and then have a con­ver­sa­tion with me about the ways in which it over­sim­plifies and mis­leads, in par­tic­u­lar the higher mind vs. prim­i­tive mind bit.

On bal­ance though I think it’s great that it ex­ists and I pre­dict it will be the gate­way drug for a bunch of new ra­tio­nal­ists in years to come.

• Notic­ing an in­ter­nal dy­namic.

As a kid I liked to build stuff (little catapults, modified nerf guns, slingshots, etc.). I entered a lot of those projects with the mindset of "I'll make this toy and then I can play with it forever and never be bored again!" When I would make the thing and get bored with it, I would be surprised and mildly upset, then forget about it and move on to another thing. Now I think that when I was imagining the glorious cool-toy future, I was actually imagining having a bunch of friends to play with (I didn't live around many other kids).

When I got to middle school and high school and spent more time around other kids, I encountered the idea of "That person talks like they're cool, but they aren't." When I got into sub-cultures centering around a skill or activity (magic), I experienced the more concentrated form: "That person acts like they're good at magic, but couldn't do a show to save their life."

I got the mes­sage, “To fit in, you have to re­ally be about the thing. No half ass­ing it. No pos­ing.”

Why, his­tor­i­cally, have I got­ten so wor­ried when my in­ter­ests shift? I’m not yet at a point in my life where there are that many lo­gis­ti­cal con­straints (I’ve switched ma­jors three times in three years with­out a hitch). I think it’s be­cause in the back of my head I ex­pect ev­ery pos­si­ble group or so­cial scene to say, “We only want you if you’re all about do­ing XYZ all the time.” And when I’m su­per ex­cited about XYZ, it’s fine. But when I feel like “Yeah, I need a break” I get ner­vous.

Yeah, there is a hard un­der­ly­ing prob­lem of “How to not let your cul­ture be­come mean­ingless”, but I think my ex­tra-prob­lem is that I grav­i­tated to­wards the groups that defined them­selves by “We put in lots of time mas­ter­ing this spe­cific hard skill and ap­ply­ing it.” Though I ex­pect it to be the case that for the rest of my life I want to have thought­ful en­gag­ing dis­cus­sion with in­tel­lec­tu­ally hon­est peo­ple (a piece of what I want from less wrong), I feel less rea­son to be sure that I’ll want to spend a large frac­tion of my time and life work­ing on a spe­cific skill/​do­main, like magic, or dis­tributed sys­tems.

• Years ago, I wrote fic­tion, and dreamed about writ­ing a novel (I was only able to write short sto­ries). I as­sumed I liked writ­ing per se. But I was hang­ing out reg­u­larly with a group of fic­tion fans… and when later a con­flict hap­pened be­tween me and them, so that I stopped meet­ing them com­pletely, I found out I had no de­sire left to write fic­tion any­more. So, seems like this was ac­tu­ally about im­press­ing spe­cific peo­ple.

I got the mes­sage, “To fit in, you have to re­ally be about the thing. No half ass­ing it. No pos­ing.”

I sus­pect this is only a part of the story. There are var­i­ous ways to fit in a group. For ex­am­ple, if you are at­trac­tive or highly so­cially skil­led, peo­ple will for­give you be­ing mediocre at the thing. But if you are not, and you still want to get to the cen­ter of at­ten­tion, then you have to achieve the ex­treme lev­els of the thing.

• I’ve been work­ing on some more emo­tional bugs lately, and I’m notic­ing that many of the core is­sues that I’m drag­ging up are ones I’ve no­ticed at var­i­ous points in the past and then just… ? I some­how just man­aged to for­get about them, though I re­mem­ber that in round 1 it also took a good deal of in­tro­spec­tion for these is­sues to rise to the top. Keep­ing a per­ma­nent list of core emo­tional bugs would be an easy fix. The list would need to be some­where I look at least once a week. I don’t always have to be work­ing on all of them, but I at least need to not for­get that these prob­lems ex­ist.

• Prob­a­bly not an ac­ci­dent. For­get­ful­ness is one of the main tools your mind will use to get you to stop think­ing about things. If you make a list you might end up flinch­ing away from look­ing at the list.

• Is that a pre­dic­tion about how one’s de­fault “for­get painful stuff” mechanisms work, or have you pre­vi­ously made a list and also ended up ig­nor­ing it? You’ve writ­ten el­se­where about con­quer­ing a lot of emo­tional bugs in the past year, and I’d be in­ter­ested to know what you did to keep those bugs in mind and not for­get about them.

• I have for­got­ten about im­por­tant emo­tional bugs be­fore, and have seen other peo­ple liter­ally for­get the topic of the con­ver­sa­tion when it turns to a suffi­ciently thorny emo­tional bug.

The thing that usu­ally hap­pens to my lists is that they feel wrong and I have to re­gen­er­ate them from scratch con­stantly; they’re like Fo­cus­ing la­bels that ex­pire and aren’t quite right any­more.

The past year I was deal­ing with what felt to me like ap­prox­i­mately one very large bug (roughly an anx­ious-pre­oc­cu­pied at­tach­ment thing), so it was easy to re­mem­ber.

• The gen­eral does not ex­ist, there are only speci­fics.

If I have a thought in my head, “Tex­ans like their guns”, that thought got there from a finite amount of spe­cific in­ter­ac­tions. Maybe I heard a joke about tex­ans. Maybe my fam­ily is from texas. Maybe I hear a lot about it on the news.

“Peo­ple don’t like it when you cut them off mid sen­tence”. Which peo­ple?

At a lo­cal meetup we do a thing called en­counter groups, and one rule of en­counter groups is “there is no ‘the group’, just in­di­vi­d­ual peo­ple”. Hav­ing con­ver­sa­tions in that mode has been in­cred­ibly helpful to re­al­ize that, in fact, there is no “the group”.

• But why stop at in­di­vi­d­ual peo­ple? This kind of on­tolog­i­cal defla­tion­ism can nat­u­rally be con­tinued to say there are no in­di­vi­d­ual peo­ple, just cells, and no cells, just molecules, and no molecules, just atoms, and so on. You might ob­ject that it’s ab­surd to say that peo­ple don’t ex­ist, but then why isn’t it also ab­surd to say that groups don’t ex­ist?

The idea was less "Individual humans are ontologically basic" and more: I've seen that talking about broad groups of people has often been less useful than dropping down to talk about interactions I've had with individual people.

In writing the comment I was focusing more on what action I wanted to take (think about specific encounters with people when evaluating my impressions) and less on my ontological claims of what exists. I see how my lax opening sentence doesn't make that clear :)

Concrete example: when I'm full, I'm generally unable to imagine meals in the future as being pleasurable, even if I imagine eating a food I know I like. I can still predict and expect that I'll enjoy having a burger for dinner tomorrow, but if I just stuffed myself on french fries, I just can't run a simulation of tomorrow where the "enjoying the food experience" sense is triggered.

I take this as evidence that my internal food-experience simulator has "code" that just asks, "If you ate XYZ right now, how would it feel?" and spits back the result.

This makes me won­der how many other men­tal sys­tems I have that I think of as “Try­ing to imag­ine how I’d feel in the fu­ture” are re­ally just pre­dict­ing how I’d feel right now.

More speci­fi­cally, the fact that I liter­ally can’t do a non-what-im-feel­ing-right-now food simu­la­tion makes me ex­pect that I’m cur­rently in­ca­pable of pre­dict­ing fu­ture feel­ings in cer­tain do­mains.

There are a few instances where I've "re-had" an idea 3 times, each in a slightly different form, before it stuck and affected me in any significant way. I noticed this when going through some old notebooks and seeing stub-thoughts of ideas that I was currently fleshing out (and had been unaware that I had given this thing thought before). One example is with TAPs. Two winters ago I was writing about an idea I called "micro habits/attitudes" and they felt super important, but nothing ever came of them. Now I see that basically I was reaching at something like TAPs.

It seems like it would be use­ful to have a men­tal move along the lines of “Tag this idea/​con­cept/​topic as likely to be hid­ing some­thing use­ful even if I don’t know what”

• I re­cently was go­ing through the past 3 years of note­books, and this pat­tern is in­cred­ibly per­sis­tent.

• tldr;

In high-school I read pop cogSci books like “You Are Not So Smart” and “Sublimi­nal: How the Sub­con­scious Mind Rules Your Be­hav­ior”. I learned that “con­trary to pop­u­lar be­lief”, your mem­ory doesn’t perfectly cap­ture events like a cam­era would, but it’s changed and re­con­structed ev­ery time you re­mem­ber it! So even if you think you re­mem­ber some­thing, you could be wrong! Me­mory is con­structed, not a faith­ful rep­re­sen­ta­tion of what hap­pened! AAAAAANARCHY!!!

Wait a second, a camera doesn't perfectly capture events. Or at least, they definitely didn't when this analogy was first made. Do you remember red eye? Instead of philosophizing on the metaphysics of representation, I'm just gonna note that "X is a construct!" sorts of claims cash out in terms of "you can be wrong in ways that matter to me!".

There’s some­thing funny about loudly declar­ing “it’s not im­pos­si­ble to be wrong!”

In high-school, “gen­der is a so­cial con­struct!” was enough of a meme that it wasn’t un­com­mon for some­thing to be called a so­cial con­struct to ex­press that you thought it was dumb.

Me: “God, the cafe­te­ria food sucks!”

Friend: “Cafe­te­ria food is a so­cial con­struct!”

Calling something a social construct either meant "I don't like it" or "you can't tell me what to do". That was my limited experience with the idea of social constructs. Something I didn't have experience with was the rich feminist literature describing exactly how gender is constructed, what its effects are, and how it's been used to shape and control people for ages.

That is way more in­ter­est­ing to me than just the claim “if your ex­pla­na­tion in­volves gen­der, you’re wrong”. Similarly, these days the cogSci I’m read­ing is stuff like Pre­dic­tive Pro­cess­ing the­ory, which posits that all of hu­man per­cep­tion is made through a cre­ative con­struc­tion pro­cess, and more im­por­tantly it gives a de­tailed de­scrip­tion of the pro­cess that does this con­struct­ing.

For me, a claim that "X is a construct" or "X isn't a 100% faithful representation" can only be interesting if there's either an account of the forces that are trying to assert otherwise, or an account of how the construction works.

Put another way: "you can be wrong!" is what you shout at someone who is insisting they can't be, and is trying to make things happen that you don't like. Some people need to have that shouted at them. I don't think I'm that person. If there's a convo about something being a construct, I want to jump right to the juicy parts and start exploring!

(note: I want to ex­tra em­pha­size that it can be as use­ful to ex­plore “who’s in­sist­ing to me that X is in­fal­lible?” as it is to ex­plore “how is this fal­lible?” I’ve been think­ing about how your sense of what’s hap­pen­ing in your head is con­structed, no­ticed I want to go “GUYS! Con­scious­ness IS A CONSTRUCT!” and when I sat down to ask “Wait, who was try­ing to in­sist that it 100% isn’t and that it’s an in­fal­lible ac­cess into your own mind?” I got some very in­ter­est­ing re­sults.)

• I think you’re fal­ling for the curse of knowl­edge. Most peo­ple are so naive that they do think their, e.g., vi­sion is a “di­rect ex­pe­rience” of re­al­ity. The more sim­plis­tic books are needed to bridge the in­fer­en­tial gap.

• I’m ig­nor­ing that gap un­less I find out that a bulk of the peo­ple read­ing my stuff think that way. I’m more writ­ing to what feels like the edge of in­ter­est­ing and rele­vant to me.

• Over this past year I’ve been think­ing more in terms of “Much of my be­hav­ior ex­ists be­cause it was made as a mechanism to meet a need at some point.”

Ideas that flow out of this frame seem to be things like In­ter­nal Fam­ily Sys­tems, and “if I want to change be­hav­ior, I have to ac­tu­ally make sure that need is get­ting met.”

Ques­tion: does any­one know of a source for this frame? Or at least writ­ings that may have pi­o­neered it?

• Psy­cho-cy­ber­net­ics is an early text in this realm.

• I think this has de­vel­oped grad­u­ally. The idea of “be­hav­ior is based on un­con­scious de­sires” goes back as far as at least Freud, prob­a­bly ear­lier.

Yeah. To home in more specifically, I'm looking at "All of your needs are legit". I've heard for a while "You have all these unconscious desires you're optimizing for", often followed with "If only we could find a way to get rid of these desires." The new thing for me has been the idea that behind each of those "petty"/"base" desires there is a real, valid need that is okay to have.

That seems like a potentially very unhealthy thing when applied to "basic" desires such as food and sex… unless yoloing your way through a life of hookers, coke (the sugary kind), and jello seems appealing.

Our first-order desires usually conflict with our long-term desires, and those are usually much better to aim for.

But maybe I'm getting something wrong here. Where did you get this idea from?

The sentence "All your needs are legitimate" is pretty under-specified, so I'll try to flesh out the picture.

This gets a bit closer: "All your needs are legitimate, but not all of your strategies to meet those needs are legitimate." I can think there's nothing wrong with wanting sex, but there are still plenty of ways to meet that need which I'd find abhorrent. "All your needs are legit" is not me claiming that any action you think to take is morally okay as long as it's an attempt to meet a need/desire. Another phrasing might be that I see a difference between "I have a need for sporadic pleasurable experiences, and for consuming food so I don't die" and "Right now I want to go get a burger and a milkshake".

Another thing that shapes my frame is the claim that a lot of our behavior, even some that looks like it's just pursuing "basic" things, sources from needs/desires like "needing to feel loved", "needing to feel like you aren't useless", etc. This extends to the tentative claim: "If more people had most of their emotional needs met, lots of people would be far less inclined to engage in stereotypical 'hedonistic debauchery'."

Now to your “Where did this idea come from?” I don’t re­mem­ber when I first ex­plic­itly en­coun­tered this idea, but the most for­ma­tive in­ter­ac­tion might have been at CFAR a year ago. You men­tioned “Our first or­der de­sires usu­ally con­flict with our long terms de­sires, and those are usu­ally much bet­ter to aim for.” I was in­ves­ti­gat­ing a lot of my ‘long term de­sires’ and other top-down frame­works I had to value parts of my life, and be­gan to see how they had been care­fully crafted to meet cer­tain “ba­sic” de­sires, like not be­ing in situ­a­tions where peo­ple would yell at me and never hav­ing to beg for at­ten­tion. Many of my long term de­sires were ac­tu­ally strate­gies to meet var­i­ous ba­sic emo­tional needs, and they were also strate­gies that were caus­ing con­flicts with other parts of my life. My prior ten­dency was to go, “I’ll just re­buke and dis­avow this strat­egy/​de­sire (I didn’t see the differ­ence) and not make the mis­take I was mak­ing”

The actionable and useful thing that "All your needs are legitimate" gave me: previously, if I found a behavior was causing some problems, and I determined I was likely engaging in this behavior so that people would like me, I'd decide, "Ha, needing to be liked is base and weak. I'll just axe this behavior." This would often lead to either mysteriously unsuccessful behavior change, or more internal anguish. Now I go, "It is completely okay and legit to want to be liked. I do in fact want that. Is there some way I can meet that need, but not incur the negatives that this behavior was producing?"

• All your needs are le­gi­t­i­mate, but not all of your strate­gies to meet those needs are legitimate

Even in this form I don’t be­lieve this sen­tence holds.

For example, I am a smoker (well, vaper, but you get the point, nicotine user). I can guarantee you I have a very real need for:

a) Ni­co­tine’s effect on the brain

b) The throat hit nico­tine gives

c) The phys­i­cal “ac­tion” of smoking

Are those needs legitimate in the sense you seem to understand them? Yes, they are pretty legitimate, or at least I'd put them on the same level as other needs that most people would consider legitimate (e.g. the need to take a piss, the need to talk with a friend, w/e).

Must those needs stay legitimate? No, actually; having taken breaks of up to half a year from the practice, I can tell those needs get less relevant the longer you go without smoking.

Should those needs stay legitimate? Well, I'd currently argue "yes", since otherwise I wouldn't be vaping as I'm writing this. But I'd equally argue that from a societal perspective the answer is "no"; indeed, for parts of my brain (the ones that don't want to smoke), the answer is "no".

1. Now, ei­ther smok­ing is a le­gi­t­i­mate need

OR

2. Some needs that "seem" legitimate should actually be suppressed

OR

3. Needs not only need to “feel/​seem” le­gi­t­i­mate, they also need to have some other stamp of ap­proval, such as be­ing natural

1 - is a bad perspective to hold, all things considered; you wouldn't teach a kid you caught smoking that he should keep doing it because it's a legitimate need now that he kinda likes it.

2 - seems to counteract your point, because we can now claim any legitimate need should actually be suppressed rather than indulged in some way.

3 - You get into a nur­ture vs na­ture de­bate… in which case, I’m on the “you can’t re­ally tell” side for now and wouldn’t per­son­ally go any fur­ther in that di­rec­tion.

Okay, I agree that for "All your needs are legitimate…" the "all" part doesn't really seem to hold. Your example straightforwardly addresses that. Stuff that's closer to "biological stuff we have a decent understanding of" (drugs, food) doesn't really fit the claim I was making.

I think you also helped me figure out a better way to express my sentiment. I was about to rephrase it as "All of your emotional needs are legit", but that feels like it's me going down the wrong path. I'll try to explain why I wanted to phrase it that way in the first place.

I see the "standard view" as something like "Of course your emotions are important, but there are a few unsavory feelings that just aren't acceptable and you shouldn't have them." I think I reached too quickly for "There is no such thing as unacceptable feelings" rather than "Here is why this specific feeling you are calling unacceptable actually is acceptable." I probably reached for that because it was easier.

Claim 1: The rea­son­ing that pro­claims a given emo­tional/​so­cial need is not le­gi­t­i­mate is nor­mally flawed.

(I could speak more to that, but it’s sort of what I was men­tion­ing at the end of my last com­ment)

I think this thing you men­tioned is rele­vant.

Must those needs stay le­gi­t­i­mate ? No, ac­tu­ally, hav­ing taken breaks of up to half a year from the prac­tice I can ac­tu­ally tell those needs get less rele­vant the longer you go with­out smoking

I to­tally agree that some­thing like smok­ing can have this “re-nor­mal­iza­tion” mechanism. Now I won­der what hap­pens if we swap out the need for smok­ing with the need to feel like some­one cares about you?

Claim 2: Ig­nored emo­tional/​so­cial needs will not “re-nor­mal­ize” and will be a re­cur­ring source of pain, suffer­ing, and prob­lems.

The second claim seems like it could lead to very tricky debate. High-school me would have insisted that I could totally just ignore my desire to be liked by people without ill consequences, because look at me, I'm doing it right now and everything's fine! I can currently see how this was causing me serious problems. So… if someone said to me that they can totally just ignore things that I'd call emotional/social needs with no ill effects, I don't know how I'd separate it being true from it being the same as what I was going through.

• Claim 1: The rea­son­ing that pro­claims a given emo­tional/​so­cial need is not le­gi­t­i­mate is nor­mally flawed.
Claim 2: Ig­nored emo­tional/​so­cial needs will not “re-nor­mal­ize” and will be a re­cur­ring source of pain, suffer­ing, and prob­lems.

I can pretty much agree with these claims.

I think it’s worth break­ing down emo­tional/​so­cial needs into lower-level en­tities than peo­ple usu­ally do, e.g:

• “I need to be in a sex­ual re­la­tion­ship with {X} even though they hate me”—is an emo­tional need that’s prob­a­bly flawed

• “I need to be in a sex­ual re­la­tion­ship”—is an emo­tional need that’s prob­a­bly correct

***

• “I need to be friends with {Y} even though they told me they don’t en­joy my com­pany”—again, prob­a­bly flawed

• “I need to be friends with some of the peo­ple that I like”—most likely correct

But then you reach the prob­lem of where ex­actly you should stop the break­down, as in, if your need is “too” generic once you reach its core it might make it rather hard to act upon. If you don’t break them down at all you end up act­ing like a sit­com char­ac­ter with­out the laugh-track, wit and happy co­in­ci­dences.

Also, whilst I dis­agree with your ini­tial for­mu­la­tion:

All your needs are legitimate

I don’t par­tic­u­larly see any­thing against:

There is no such thing as un­ac­cept­able feelings

But it seems from your re­ply that you hold them to be one and the same ?

In both of those examples you give, I agree with your judgment of the needs.

If you switch "All your needs are legit" to "All your social/emotional needs are legit", then yeah, I was thinking of that and "There is no such thing as unacceptable feelings" as the same thing. Though I can now see two distinct ideas that they could point to.

“All your S/​E needs are le­git” seems to say not only that it’s okay to have the need, it’s okay to do some­thing to meet it. That’s a bit harder to han­dle than just “It’s okay to feel some­thing.” And yeah, there prob­a­bly is some sce­nario where you could have a need that there’s no way you could eth­i­cally meet, and that you can’t break­down into a need that can be met.

Another thing that I no­ticed in­formed my ini­tial phras­ing is I think that there is a strong sour grapes pres­sure to go from “I have this need, and I don’t see any­way to get it met that I’m okay with” to “Well then this is a silly need and I don’t even re­ally care about it.”

You’ve sparked many more thoughts from me on this, and I think those will come in a post some­time later. Thanks for prod­ding!

• A form­ing thought on post-ra­tio­nal­ity. I’ve been read­ing more samz­dat lately and think­ing about leg­i­bil­ity and illeg­i­bil­ity. Me para­phras­ing one point from this post:

State driven ra­tio­nal plan­ning (episteme) de­stroys lo­cal knowl­edge (metis), of­ten re­sult­ing in met­rics get­ting bet­ter, yet life get­ting worse, and it’s im­pos­si­ble to com­plain about this in a lan­guage the state un­der­stands.

The quip that most read­ily comes to mind is “well if ra­tio­nal­ity is about win­ning, it sounds like the state isn’t be­ing very ra­tio­nal, and this isn’t a fair at­tack on ra­tio­nal­ity it­self” (this com­ment quotes a similar ar­gu­ment).

Similarly, I was hav­ing a con­ver­sa­tion with two friends once. Per­son A ex­pressed that they were wor­ried if they started hang­ing around more EA’s and ra­tio­nal­ists, they might end up hav­ing a su­per bor­ing op­ti­mized life and never do fun things like cook meals with friends (be­cause soylent) or go danc­ing. Friend B ex­pressed, “I dunno, that sounds pretty op­ti­mal to me.”

I don’t think friend A was legitimately worried about the general concept of optimization. I do think they were worried about what they expected their implementation (or their friends’ implementation) of “optimality” to look like in their own lives.

Cur­rent most char­i­ta­ble claim I have of the post-ra­tio­nal­ist mind­set: the best and most tech­ni­cal speci­fi­ca­tions that we have for what things like op­ti­mal/​truth/​ra­tio­nal might look like con­tain very lit­tle in­for­ma­tion about what to ac­tu­ally do. In your pur­suit of “truth”/​”ra­tio­nal­ity”/​”the op­ti­mal” as it per­tains to your life, you will be mak­ing up most of your art along the way, not de­riv­ing it from first prin­ci­ples. Fur­ther­more, think­ing in terms of the truth/​ra­tio­nal­ity/​op­ti­mal­ity will [some­how] lead you to make im­por­tant er­rors you wouldn’t have made oth­er­wise.

A more blasé version of what I think the post-rationalist mindset is: you can’t handle the (concept of the) truth.

• Epistemic sta­tus: Some bab­ble, help me prune.

My thoughts on the ba­sic di­vide be­tween ra­tio­nal­ist and post-ra­tio­nal­ists, lawful thinkers and toolbox thinkers.

Rat thinks: “I’m on board with The Great Re­duc­tion­ist Pro­ject, and ev­ery­thing can in the­ory be for­mal­ized.”

Post-Rat hears: “I per­son­ally am go­ing to re­duce love/​jus­tice/​mercy and the re­duc­tion is go­ing to be perfect and work great.”

Post-Rat thinks: “You aren’t go­ing to suc­ceed in time /​ in a man­ner that will be use­ful for do­ing any­thing that mat­ters in your life.”

Rat hears: “It’s fun­da­men­tally im­pos­si­ble to re­duce love/​jus­tice/​mercy and no for­mal­ism of any­thing will do any good.”

## New­comb’s Problem

Another way I see the difference is that the post-rats look at Newcomb’s problem and say “Those causal rationalist losers! Just one-box! I don’t care what your decision theory says, tell yourself whatever story you need in order to just one-box!” The post-rats rail against people who are doing things like two-boxing because “it’s optimal”.

The most indignant rationalists are the ones who went to the effort of creating whole new formal decision theories that can one-box, and don’t like that the post-rats think they’d be foolish enough to two-box just because a decision theory recommends it. While I think this gets the basic idea across, this example is also cheating. Rats can point to formalisms that do one-box, and in LW circles there even seem to be people who have worked the rationality of one-boxing deep into their minds.
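For the record, the payoff arithmetic behind the one-box/two-box dispute is simple either way. A minimal sketch, assuming the usual $1,000 transparent box and $1,000,000 opaque box, and a predictor with some fixed accuracy (the function and parameter names are mine):

```python
# Expected payoffs in Newcomb's problem under the standard setup:
# the opaque box holds $1,000,000 only if the predictor foresaw
# one-boxing; the transparent box always holds $1,000.

def expected_value(one_box: bool, accuracy: float) -> float:
    """Expected payoff given the predictor's accuracy (between 0.5 and 1)."""
    if one_box:
        # The predictor correctly foresaw one-boxing with prob `accuracy`.
        return accuracy * 1_000_000
    # Two-boxing: you always get the $1,000; the $1,000,000 is there
    # only if the predictor *mistakenly* predicted one-boxing.
    return 1_000 + (1 - accuracy) * 1_000_000

for acc in (0.9, 0.99):
    print(acc, expected_value(True, acc), expected_value(False, acc))
```

For any accuracy above roughly 50.05%, one-boxing has the higher expected payoff, which is the arithmetic the indignant one-boxers are pointing at.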

Hy­poth­e­sis: All the best ra­tio­nal­ists are post-ra­tio­nal­ists, they also hap­pen to care enough about AI Safety that they con­tinue to work dili­gently on for­mal­ism.

• Alter­na­tive hy­poth­e­sis: Post-ra­tio­nal­ity was started by David Chap­man be­ing an­gry at his­tor­i­cal ra­tio­nal­ism. Ra­tion­al­ity was started by Eliezer be­ing an­gry at what he calls “old-school ra­tio­nal­ity”. Both talk a lot about how peo­ple mi­suse frames, pre­tend that rigor­ous defi­ni­tions of con­cepts are a thing, and broadly don’t have good mod­els of ac­tual cog­ni­tion and the mind. They are not fully the same thing, but most of the time I talked to some­one iden­ti­fy­ing as “pos­tra­tional­ist” they picked up the term from David Chap­man and were con­trast­ing them­selves to his­tor­i­cal ra­tio­nal­ism (and some­times con­fus­ing them for cur­rent ra­tio­nal­ists), and not ra­tio­nal­ity as prac­ticed on LW.

• I’d buy that.

Any idea of a good recent thing/person/blog that embodies that historical rationalist mindset? The only context I have for the historical rationalists is Descartes, and I have not personally seen anyone who felt super Descartes-esque.

• The de­fault book that I see men­tioned in con­ver­sa­tion that ex­plains his­tor­i­cal ra­tio­nal­ism is “See­ing like a state” though I have not read the whole book my­self.

• Cool. My back of the mind plan is “Ac­tu­ally read the book, find big names in the top down plan­ning regimes, see if they’ve writ­ten stuff” for when­ever I want to re­place my Descartes stereo­type with sub­stance.

• What are the bar­ri­ers to hav­ing re­ally high “knowl­edge work out­put”?

I’m not capable of “being productive on arbitrary tasks”. One winter break I made a plan to apply for all the small $100 essay scholarships people were always telling me no one applied for. After two days of sheer misery, I had to admit to myself that I wasn’t able to be productive on a task that involved making up bullshit opinions about topics I didn’t care about.

Con­vic­tion is im­por­tant. From ex­per­i­ments with TAPs and a re­cent bout of med­i­ta­tion, it seems like when I bail on an in­ten­tion, on some level I am no longer con­vinced the in­ten­tion is a good idea/​what I ac­tu­ally want to do. Strong con­vic­tion feels like con­fi­dence all the way up in the fact that this task/​pro­ject is the right thing to spend your time on.

There’s probably a lot in the vein of having good chemistry: sleep well, eat well, get exercise.

One of the more mysterious quantities seems to be “cognitive effort”. Sometimes thinking hard feels like it hurts my brain. This post has a lot of advice in that regard.

I’ve previously hypothesized that a huge chunk of painful brain fog is the experience of thinking at a problem, but not actually engaging with it. (similar to how Mark Forster has posited that the resistance one feels to a given task is proportional to how many times it has been rejected)

Having the rest of your life together and time boxing your work is insanely important for reducing the frequency with which your brain promotes “unrelated” thoughts to your consciousness (if there’s important stuff that isn’t getting done, and you haven’t convinced yourself that it will be handled adequately, your mind’s tendency is to keep it in a loop).

I’ve got a feeling that there’s a large amount of gains at the 5-second level. I would be super interested in seeing anyone’s thoughts or writings on the 5-second level of doing better work and avoiding cognitive fatigue.

• (Less a re­ply and more just re­lated)

I of­ten think a sen­tence like, “I want to have a re­ally big brain!”. What would that ac­tu­ally look like?

• Not ex­pe­rienc­ing fear or worry when en­coun­ter­ing new math.

• Really quick to de­ter­mine what I’m most cu­ri­ous about.

• Not hav­ing my head hurt when I’m think­ing hard, and gen­er­ally not feel­ing much “cog­ni­tive strain”.

• Be able to fill in the vague and gen­eral im­pres­sions with the con­crete ex­am­ples that origi­nally cre­ated them.

• Do­ing a ham­mers and nails scan when I en­counter new ideas.

• Hav­ing a clear, quickly ac­cessible un­der­stand­ing of the “proof chains” of ideas, as well as the “mo­ti­va­tion chains”.

• I don’t need to know all the proofs or mo­ti­va­tions, but I do have a clear sense of what I un­der­stand my­self, and what I’ve out­sourced.

• Instead of feeling “generally confused” by things or just “not getting them”, I always have concrete “This doesn’t make sense because BLANK” expressions that allow me to move forward.

• “With a suffi­ciently neg­li­gent God, you should be able to hack the uni­verse.”

Just a fun little thought I had a while ago. The idea being that if your deity intervenes with the world, or if there are prayers, miracles, “supernatural creatures” or anything of that sort, then with enough planning and chutzpah, you should be able to hack reality unless God has got a really close eye on you.

This par­tially came from a fic­tion premise I have yet to act on. Dave (gar­den va­ri­ety athe­ist) wakes up in hell. Turns out that the Chris­tian God TM is real, though a bit of a dunce. Dave and Satan team up and go on a wacky ad­ven­ture to over­throw God.

• Quick thoughts on TAPS:

The past few weeks I’ve been do­ing a lot of pos­ture/​phys­i­cal tick based TAPs (not slouch­ing, not bit­ing lips etc). Th­ese seem to be very well fit to TAPs, be­cause the trig­ger is a phys­i­cal move­ment, mak­ing it eas­ier to no­tice. I’ve no­ticed roughly three phases of notic­ing triggers

1. I sud­denly be­come aware of the fact I’ve been do­ing the ac­tion.

2. I be­come aware of the fact that I’ve ini­ti­ated the ac­tion.

3. Before any physical movement happens, I notice the “impulse” to do the thing.

To make a TAP run deep, it seems the key is to train up the lad­der and be able to deal with trig­gers and ac­tions on the level where they first origi­nate in the mind.

• Ribbon Farm captured something that I’ve felt about nomadic travel. I’m thinking back to a 2 month bicycle trip I did through Vietnam, Cambodia, and Laos. During that whole trip, I “did” very little. I read lots of books. Played lots of cards. Occasionally chatted with my biking partner. “Not much”. And yet when movement is your natural state of affairs, every day is accompanied with a feeling of progress and accomplishment.

• Me circa March 2018

“Should”s only make sense in a realm where you are divorced from yourself. Where you are bargaining with some other being that controls your body, and you are threatening it.

Update: This past week I’ve had an unusual amount of spontaneous introspective awareness of moments when I was feeling pulled by a should, especially one that came from comparing myself to others. I’ve also been meeting these thoughts with an, “Oh interesting. I wonder why this made me feel a should?” as opposed to a standard “endorse or disavow” response.

Meta Thoughts: What do I know about “should”s that I didn’t know in March 2018?

I’m more aware of how in­cred­ibly per­va­sive “should”s are in my think­ing. Last sat­ur­day alone I counted over 30 mo­ments of feel­ing the nega­tive tug of some “should”.

I now see that even for things I consider cool, dope, and virtuous, I’ve been using “you should do this or else” to get myself to do them.

Since CFAR last fall I’ve gained a lot of metis on al­ign­ing my­self, a task that I’ve pre­vi­ously triv­ial­ized or brought in “willpower” to con­quer. Last year I was more in­clined to go, “Well okay fine, I’m still say­ing I should do XYZ, but the part of me that is re­sist­ing that is ac­tu­ally just stupid and de­serves to be co­erced.”

• I love the experience of realizing what cognitive algorithm I’m running in a given scenario. This is easiest to spot when I screw something up. Today, I misspelled the word “process” by writing three “s” instead of two. I’m almost certain that while writing the word, there was a cached script of “this word has one more ‘s’ than feels right, so add another one” that activated as I wrote the 1st “s”, but then some idea popped into my mind (small context switch, working memory dump?) and I then executed “this word has one more ‘s’ than feels right, so add another one” an extra time.

I don’t spell the word “process” correctly by having memorized the correct spelling. I spell the word correctly by doing a memorized improper spelling and triggering a bug-patch script, which, if my attention shifts, can cause a bug where that patch script runs twice. It’s awe-inspiring to know that a bulk of my cognition is probably this sort of bug-patch, hacky, add-on code.

I don’t ex­pect to gain any­thing from this par­tic­u­lar in­sight, but I love notic­ing these sorts of things. I in­tend to get bet­ter at this sort of notic­ing.

• An uncountable finite set is any finite set that contains the source code to a superintelligence that can provably prevent anyone from counting all of its elements.

• I still think this is ge­nius.

• In a fight be­tween the CMU stu­dent body and the ra­tio­nal­ist com­mu­nity, CMU would prob­a­bly for­get about the fight un­less it was as­signed for home­work, and the ra­tio­nal­ists would all in­di­vi­d­u­ally come to the con­clu­sion that it is most ra­tio­nal to re­treat. No one would en­gage in com­bat, and ev­ery­one would win.

• I’ve been hav­ing fun read­ing through Sig­nals: Evolu­tion, Learn­ing, & In­for­ma­tion. Many of the sce­nar­ios re­volve around vari­a­tions of the Lewis Sig­nal­ling Game. It’s a nice sim­ple model that lets you talk about com­mu­ni­ca­tion with­out hav­ing to talk about in­ten­tion­al­ity (what you “meant” to say).

In­ten­tion seems to mostly be about self-aware­ness of the ex­ist­ing sig­nal­ling equil­ibrium. When I speak slowly and care­fully, I’m con­stantly check­ing what I want to say against my un­der­stand­ing of our sig­nal­ling equil­ibrium, and rea­son­ing out im­pli­ca­tions. If I scream when I see a tiger, I’m still sig­nal­ling, but var­i­ous facts about the sig­nal­ling equil­ibrium are not booted into con­scious­ness.

So, claim: Lewis style signalling games are the root of all communication, from humans to dogs to bacteria. The “extra” stuff that humans seem to have, which is often called intent, has to do with having other/additional reasoning abilities, and being able to load one’s signalling equilibrium into that reasoning system to further engage in shenanigans.
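The basic Lewis signalling game is small enough to simulate in a few lines. This is my own toy sketch (not code from Skyrms’s book): two states, two signals, two acts, with the simple urn-style reinforcement learning the book discusses, where each successful round adds weight to the choices that produced it.

```python
import random

# Minimal Lewis signalling game: 2 world states, 2 signals, 2 acts.
# Sender and receiver learn by urn-style reinforcement: every choice
# that led to a payoff gets an extra "ball" in its urn.

random.seed(0)

# sender_urns[state][signal] and receiver_urns[signal][act], all weights start at 1.
sender_urns = [[1.0, 1.0], [1.0, 1.0]]
receiver_urns = [[1.0, 1.0], [1.0, 1.0]]

def draw(urn):
    """Pick index 0 or 1 with probability proportional to its weight."""
    r = random.uniform(0, sum(urn))
    return 0 if r < urn[0] else 1

for _ in range(10_000):
    state = random.randint(0, 1)
    signal = draw(sender_urns[state])
    act = draw(receiver_urns[signal])
    if act == state:  # success: reinforce both choices
        sender_urns[state][signal] += 1
        receiver_urns[signal][act] += 1

# After training, each state should map to some signal, and that signal
# to the matching act, with high probability. Which signal "means" which
# state is arbitrary; no intentionality is anywhere in the model.
for state in (0, 1):
    sig = max((0, 1), key=lambda s: sender_urns[state][s])
    act = max((0, 1), key=lambda a: receiver_urns[sig][a])
    print(state, "->", sig, "->", act)
```

The point of the model is exactly the one above: the players converge on a signalling equilibrium without anyone ever “meaning” anything.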

• “Mov­ing from fos­sil fuels to re­new­able en­ergy” but as a metaphor for mo­ti­va­tional sys­tems. Nate Soares re­plac­ing guilt seems to be try­ing to do this.

With mo­ti­va­tion, you can more eas­ily go, “My life is gonna be finite. And it’s not like some­one else has to deal with my mo­ti­va­tion sys­tem af­ter I die, so why not run on guilt and panic?”

Hmmmm, maybe something like, “It would be doper if people at large scale got to more renewable motivational systems, and for that change to happen it feels important for people growing up to be able to see those who have made the leap.”

• Weird hack for a weird tick. I’ve no­ticed I don’t like au­dio abruptly end­ing. Like, some­times I’ve listened to an en­tire pod­cast on a walk, even when I re­al­ized I wasn’t into it, all be­cause I an­ti­ci­pated the twinge of pain from turn­ing it off. This is re­solved by turn­ing the vol­ume down un­til it is silent, and then turn­ing it off. Who’d of thunk it...

• Re­v­erse-Eng­ineer­ing a World View

I’ve been hav­ing to do this a lot for Rib­bon­farm’s Me­diocratopia blog chain. Rao of­ten con­fuses me and I have to step up my game to figure out where he’s com­ing from.

It’s ba­si­cally a move of “What would have to be differ­ent for this to make sense?”

Confusion: “But if you’re going up in levels, stuff must be getting harder, so even though you’re mediocre in the next tier, shouldn’t you be losing slack, which is antithetical to mediocrity?”

Resolution: “What if there are weird discontinuous jumps in both skill and performance, and taking on a new frame/strategy/practice bumps you to the next level, without your effort going up proportionally?”

• Quick de­scrip­tion of a pat­tern I have that can mud­dle com­mu­ni­ca­tion.

“So I’ve been mul­ling over this idea, and my origi­nal thoughts have changed a lot af­ter I read the ar­ti­cle, but not be­cause of what the ar­ti­cle was try­ing to per­suade me of …”

General Pattern: There is a concrete thing I want to talk about (a new idea - ???). I don’t say what it is; I merely provide a placeholder reference for it (“this idea”). Before I explain it, I begin applying a bunch of modifiers (typically by giving a lot of context: “This idea is a new take on a domain I’ve previously had thoughts on”, “there was an article involved in changing my mind”, “that article wasn’t the direct cause of the mind change”)

This confuses a lot of people. My guess is that interpreting statements like this requires a lot more working memory. If I introduce the main subject, and then modify it, people can “mentally modify” the subject as I go along. If I don’t give them the subject, they need to store a stack of modifiers, wait until I get to the subject, and then apply all those modifiers they’ve been storing.

I no­tice I do this most when I ex­pect the listener will have a nega­tive gut re­ac­tion to the sub­ject, and I’m try­ing to pre­emp­tively do a bunch of ex­pla­na­tion be­fore in­tro­duc­ing it.

Any­one no­tice any­thing similar?

• Yep, I no­tice this some­times when other peo­ple are do­ing it. I don’t no­tice my­self do­ing it, but that’s prob­a­bly be­cause it’s eas­ier to no­tice from the re­ceiv­ing end.

In writ­ing, it makes me bounce off. (There are many posts com­pet­ing for my at­ten­tion, so if the first few sen­tences fail to say any­thing in­ter­est­ing, my brain as­sumes that your post is not com­pet­i­tive and moves on.) In speech, it makes me get frus­trated with the speaker. If it’s in speech and it’s an in­ter­rup­tion, that’s es­pe­cially bad, be­cause it’s dis­plac­ing work­ing mem­ory from what­ever I was do­ing be­fore.

• I also do this a lot, and think it’s not always a mis­take, but I agree that it im­poses sig­nifi­cant cog­ni­tive bur­den on my con­ver­sa­tional part­ner.

• Do you also do it as a pre­emp­tive move like I de­scribed, or for other rea­sons?

• [Every­thing is “free” and we in­un­date you in ad­ver­tise­ments] feels bad. First thought al­ter­na­tive is some­thing like paid sub­scrip­tions, or micro­pay­ments per thing con­sumed. But the ques­tion is begged, how does any­one find out about the sites they want to sub­scribe to? If only there was some web­site ag­gre­ga­tor that was free for me to use so that I could browse differ­ent pos­si­ble sub­scrip­tions...

Oh no. Or if not oh no, it seems like the sel­l­ing eye­balls model won’t go away just be­cause al­ter­na­tives ex­ist, if only from the “peo­ple need to some­how find out about the thing they are pay­ing for” side.

I could prob­a­bly do with get­ting a stronger sense of why sel­l­ing eye­balls feels bad. I’m also prob­a­bly think­ing about this too ab­stractly and could do with get­ting more con­crete.

• Maybe it has some­thing to do with the sen­ti­ment that “if it’s free, the product is you”. Per­haps with­out pay­ing some form of sub­scrip­tion, you feel that there is no ‘bounded’ pay­ment for the ser­vice—as you con­sume more of any given ser­vice, you are es­sen­tially pay­ing more (in cog­ni­tive load or some­thing similar?).

Kind of feels like fixed vs vari­able costs—of­ten you feel a lot bet­ter with fixed as it tends to be “more valuable” the more you con­sume.

Just an off-the-cuff take based on per­sonal ex­pe­rience, definitely in­ter­ested in hear­ing other takes.

• From Gw­ern’s about page:

I per­son­ally be­lieve that one should think Less Wrong and act Long Now, if you fol­low me.

Pos­si­bly my fa­vorite catch-phrase ever :) What do I think is hid­ing there?

• Think Less Wrong

• Self an­thro­pol­ogy- “Why do you be­lieve what you be­lieve?”

• Hug­ging the Query and not sink­ing into con­fused questions

• Li­tany of Tarski

• Notice your confusion - “Either the story is false or your model is wrong”

• Act Long Now

• Cul­ti­vate habits and prac­tice rou­tines that seem small /​ triv­ial on a day/​week/​month timeline, but will re­sult in you be­ing su­per­hu­man in 10 years.

• Build ab­strac­tions where you are acutely aware of where it leaks, and have good rea­son to be­lieve that leak does not af­fect the most im­por­tant work you are us­ing this ab­strac­tion for.

• What things trigger “Man, it sure would be useful if I had data on XYZ from the past 8 years”? Start tracking that.

• What am I cur­rently do­ing to Act Long Now? (Dec 4th 2019)

• Switch­ing to Roam: Though it’s still in de­vel­op­ment and there are a lot of tech­ni­cal hur­dles to this be­ing a long now move (they don’t have good im­port ex­port, it’s all cloud hosted and I can’t have my own back­ups), putting ideas into my roam net­work feels like long now or­ga­ni­za­tion for max­i­mized cre­ative/​in­tel­lec­tual out­put over the years.

• Try­ing to milk a lot of ex­plo­ra­tion out of the next year be­fore I start work, hope­fully giv­ing my­self spring­boards to more things at points in the fu­ture where I might not have had the en­ergy to get started /​ make the ini­tial push.

• Be­ing kind.

• Ar­gu­ing Poli­tics* With my Best Friends

What am I cur­rently do­ing to think Less Wrong?

• Writ­ing more has helped me hone my think­ing.

• Lots of progress on understanding emotional learning (or more practically, how to do emotional unlearning), allowing me to get to a more even keeled center from which to think and act.

• Getting better at ignoring the bottom line to genuinely consider what the world would be like for alternative hypotheses.

• This is a great list! I’d be cu­ri­ous about things you are cur­rently do­ing to act short now and think more wrong as well. I of­ten find I get a lot out of such lists.

• Act Short Now

• Sleep­ing in

• Flirt­ing more

Think More Wrong

• I no longer buy that there’s a structural difference between math/the formal/a priori and science/the empirical/a posteriori.

• Prob­a­bil­ity the­ory feels sorta lame.

• Claim: There’s a headspace you can be in where you don’t have a bucket for ex­plore/​bab­ble. If you are en­ter­tain­ing an idea or work­ing through a plan, it must be be­cause you already ex­pect it to work/​be in­ter­est­ing. If your prune filter is also grow­ing in strength and qual­ity, then you will be aban­don­ing ideas and plans as soon as you see any rea­son­able in­di­ca­tor that they won’t work.

Miss­ing that bucket and en­hanc­ing your prune filter might feel like you are merely grow­ing up, get­ting wiser, or maybe more cyn­i­cal. This will be re­ally strongly felt if the pre­vi­ous phase in your life in­volved you div­ing into lots of pro­jects only to re­al­ize some time and money later that they won’t work out. The men­tal mo­tion of, “Aha! This plan leaves ABC com­pletely un­speci­fied and I’d prob­a­bly fall apart when reach­ing that road­block,” will be ac­com­panied by a, “Man, I’m so glad I no­ticed that, oth­er­wise I would have wasted a whole day/​week/​month. Go prune!”.

Until you get a new bucket for explore, attempts to get you to “think big” and “get creative” and “let it all out in a brainstorm” will feel like attacks on your valuable time. Somehow, you need to get a strong felt sense for explore being its own, completely viable option, which in no way obliges you to act on what you’ve explored.

Next thoughts: What is needed for me to deeply feel ex­plore as an op­tion, and what things might be stop­ping me from do­ing so? *tk*

• You can have in­finite as­pira­tions, but in­finite plans are of­ten out to get you.

When you make new plans, run more cre­ative “what if?” in­ner-sims, sprin­kle in more ex­ploit, and en­sure you have bounded loss if things go south.

When you feel like quitting, realize you have the opportunity to learn and update by asking, “What’s different between now and when I first made this plan?”

Make your con­fi­dence in your plans ex­plicit, so if you fail you can be sur­prised in­stead of dis­ap­pointed.

If the thought of giv­ing up feels ter­rible, you might need to learn how to lose.

And of course, if you can’t af­ford to lose,

• Stub Post: Thoughts on why it can be hard to tell if some­thing is hind­sight bias or not.

Imag­ine one’s thought pro­cess as an idea-graph, with the pro­cess of think­ing be­ing hop­ping around nodes. Your long term mem­ory can be thought of as the nodes and edges that are already there and per­sist strongly. The con­tents of your work­ing mem­ory are like tem­po­rary nodes and edges that are in your idea graph, and ev­ery­thing that is close to them gets a +10 to speed-of-ac­cess. A short term mem­ory node can even cause edges to pop up be­tween two other nodes around it.

Claim: There is no obvious felt/perceived experience that accompanies the creation of an edge, only the traversal of an edge.

Implication: If I observed mentally hopping from A to B to C, I could see and admit that B was responsible for getting to C. But if the presence of B in my working memory creates an edge directly from A to C, it “feels like” I jump from A to C, and that B doesn’t have anything to do with it.
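The mechanics claimed here are concrete enough to sketch. A toy model (all names are mine, purely illustrative) where a node held in working memory donates temporary shortcut edges past itself:

```python
# Toy "idea-graph": long-term memory is the persistent edge set,
# working memory is a set of temporarily-held nodes. A held node
# makes its targets reachable in "one felt hop" from its sources,
# which is the claimed A -> C jump that hides B.

class IdeaGraph:
    def __init__(self):
        self.edges = {}       # persistent long-term edges: node -> set of nodes
        self.working = set()  # temporary working-memory contents

    def link(self, a, b):
        self.edges.setdefault(a, set()).add(b)

    def reachable_in_one_hop(self, a):
        """Direct neighbours, plus shortcuts through any neighbour
        currently held in working memory."""
        direct = set(self.edges.get(a, set()))
        shortcuts = set()
        for b in direct & self.working:
            shortcuts |= self.edges.get(b, set())
        return direct | shortcuts

g = IdeaGraph()
g.link("A", "B")
g.link("B", "C")
g.working.add("B")
print(g.reachable_in_one_hop("A"))  # with B held, C feels one hop from A
```

With B dropped from working memory, C stops being one hop away, matching the claim that the bridging node leaves no felt trace in the jump.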

• This seems to be in ac­cord with things like how the fram­ing of ques­tions has a huge effect on what peo­ple’s an­swers are. There are prob­a­bly some do­mains where you don’t ac­tu­ally have much of a per­sis­tent model, and your “model” mostly con­sists of the tem­po­rary con­nec­tions cre­ated by the con­tents of your work­ing mem­ory.

• Utility func­tions aren’t com­pos­able! Utility func­tions aren’t com­pos­able! Sorry to shout, I’ve just re­al­ized a very spe­cific way I’ve been wrong for quite some time.

VNM utility completely ignores the structure of outcomes and the “similarities” between outcomes. U(1 apple) doesn’t need to have any relation to U(2 apples). In decision scenarios I’m used to interacting with, there are often natural ways to think of outcomes as compositions or transformations of other outcomes or objects. When I think of outcomes, they can be more or less similar to each other, even if I’m not talking about value. From facing a lot of scenarios like this, it’s easy to think in terms of “Find some way to value the smaller set of outcomes that can compose to make all outcomes”, which makes it easy to expect such composability to be a property of how VNM utility works. But it’s not! It really really isn’t.

I’ve re­cently been read­ing about or­di­nal num­bers, and get­ting fa­mil­iar with the idea that you can have things that have or­der, but no no­tion of dis­tance. I had that in the back of my mind when go­ing through the wikipe­dia page for VNM util­ity, and I think that’s what made it click.

• Yes, indeed quite important. This is a common confusion that has often led me down weird conversational paths. I think some microeconomics has most made this clear to me, because there you seem to be constantly throwing tons of affine transformations at your utility functions to make them convenient and get you analytic solutions, and it becomes clear very quickly that you are not preserving the relative magnitudes of your original utility function.

• I think one of the reasons it took me so long to notice was that I was introduced to VNM utility in the context of game theory, and winning at card games. Most of those problems do have the property of the utility of some base scoring system composing well to generate the utility of various end games. Since that was always the case, I guess I thought that it was a property of utility, and not the games.

• One of the more use­ful rat-tech­niques I’ve en­joyed has been the re­fram­ing of “Mak­ing a de­ci­sion right here right now” to “Mak­ing this sort of de­ci­sion in these sorts of sce­nar­ios”. When con­sid­er­ing how to judge a be­lief based on some ar­gu­ments, the ques­tion be­comes, “Am I will­ing to ac­cept this sort of con­clu­sion based on this sort of ar­gu­ment in similar sce­nar­ios?”

From that, if you accept claim-argument pair A “Dude, if electric forks were a good idea, someone would have done it by now”, but not claim-argument pair B “Dude, if curing cancer was a good idea, someone would have done it by now”, then it was never A’s argument that made you believe the claim. You have some other unmentioned reasons, and those should be what’s addressed.

• Similar is the re-framing, “what is the actual decision I am making?” One friend was telling me, “This linear algebra class is a waste of my time, I’d get more by skipping lecture and reading the book.” When I asked him if he actually thought he’d read the book if he didn’t go to lecture, he said probably not. Here, it felt like the choice was, “Go to lecture, or not?” but it would be better framed as, “Given I’m trying to learn linear algebra, what feasible paths do I have for learning it?” If you don’t actually expect to be able to self-study, then you can no longer think of “just not going to lecture” as an option.

• Here’s a pat­tern I want to out­line and pos­si­ble sug­ges­tions on how to fix it.

Sometimes when I’m trying to find the source of a bug, I make incorrect updates. An explanation of what the problem might be pops to mind, and it seems to fit (ex. “oops, this machine is Big Endian, not Little Endian”). Then I work on the bug some more, things still don’t work, and at some point I find the real problem. Today, when I found a bug I was hunting for, I had a moment of, “Oh shit, an hour ago I updated my beliefs about how this machine worked, but that was a bad update because the problem had nothing to do with Endianness”.

I imag­ine that there have been plenty of times when I’ve been pro­gram­ming/​lifing and I haven’t kept track of up­dates that were tied to a prob­lem, and never cor­rected them when I found out what the ac­tual solu­tion was.

Mayhaps I should keep a running list of assumptions and updates I’ve made while I program, and every time a bug is completely resolved, see how that affects past updates.
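That running list could be as simple as a few lines of code. A hypothetical sketch (the class and method names are my inventions, not an existing tool): log each mid-hunt belief update, then when the bug resolves, surface them all for review.

```python
from dataclasses import dataclass, field

# A tiny, hypothetical "assumption log" for a debugging session:
# record each belief you update while hunting, then review them all
# once the real cause is found, since some were bad updates.

@dataclass
class Assumption:
    text: str
    revoked: bool = False

@dataclass
class DebugLog:
    assumptions: list = field(default_factory=list)

    def note(self, text):
        """Record a belief updated while hunting the bug."""
        self.assumptions.append(Assumption(text))

    def resolve(self, real_cause):
        """Called when the bug is found; returns notes to re-examine."""
        print(f"Bug was: {real_cause}. Review these earlier updates:")
        return [a for a in self.assumptions if not a.revoked]

log = DebugLog()
log.note("this machine is big-endian, not little-endian")
log.note("the parser drops trailing newlines")
stale = log.resolve("off-by-one in the buffer length")
for a in stale:
    print("-", a.text)
```

The value is entirely in the `resolve` step: it forces the question “does this update survive now that I know the real cause?” for every note.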

• Had a similar style bug while programming today. I caught it much faster, though I can’t say if that can be attributed to previously identifying this pattern. But I did think of the previous bug as soon as I made the mental leap to figure out what was wrong this time.

• Last fall I hosted a dis­cus­sion group with friends on three differ­ent oc­ca­sions. I pitched it as “get in­ter­est­ing peo­ple to­gether and in­ten­tion­ally have an in­ter­est­ing con­ver­sa­tion” and was not a ra­tio­nal­ist dis­cus­sion group. One thing that I no­ticed was that when­ever I wanted to re­ally fix­ate on and solve a prob­lem we iden­ti­fied, it felt wrong, like it would break some im­plicit rule I never re­mem­bered set­ting.

Later I pinpointed the following as the culprit. I personally can’t consistently produce quality clear thinking at “conversational speeds” on things I haven’t thought about before (I’d be interested in knowing what the distribution on this ability is). In this case, buckling down and solving the problem would mean having a long pause in the conversation while I and others think.

It also hap­pens that such a pause is gen­er­ally very un­com­fortable for a ca­sual group un­less you have very par­tic­u­lar norms/​rules sanc­tion­ing it.

Actionable thought: if you want people to actually try to solve a problem in a group setting, you probably want to make it super okay/normal/acceptable to have long pauses where you turn off your “conversation mind” and go into “serious thought” mode.

• I’ve been writing on twitter more lately. Sometimes when I’m trying to express an idea, to generate progress I’ll think “What’s the shortest sentence I can write that convinces me I know what I’m talking about?” This is different from “What’s a simple, but no simpler, explanation for the reader?”

Starting a twitter thread and forcing several tweet-sized chunks of ideas out is quite helpful for that. It helps get the concept clearer in my head, and then I have something out there and I can dwell on how I’d turn it into a consumable for others.

• I’ve been writ­ing A LOT on twit­ter lately. It’s been hella fun.

One thing that seems clear: Twitter threads are not the place to hash out deep disagreements start to finish. When you start multi-threading, it gets chaotic real fast, and the character limit is a limiting force.

On the other side of things, it feels great for gestating ideas, and for getting lots of leads on interesting ideas.

1) Leads: It helps me increase my "known unknowns". There are a lot of topics, ideas, and disciplines I see people making offhand comments about, and while it’s rarely enough to piece together the whole idea, I can often pick up the type signature and know where the idea relates to other ideas I am familiar with. This is dope. Expand your anti-library.

2) Gestation: there’s a limit to how much you can squeeze into a single tweet, but threading really helps to shotgun blast out ideas. It often ends up being less a step-by-step carefully reasoned argument, and more lots of quasi-independent thoughts on the topic that you then connect. Also, I easily get 5x engagement on twitter, and other people throwing in their thoughts is really helpful.

I know Rae­mon and crew have men­tioned try­ing to help with more ges­ta­tion and de­vel­op­ment of ideas (with­out sac­ri­fic­ing over­all rigor). post-rat-twit­ter /​ strangely-earnest-twit­ter feels like it’s nailed the ges­ta­tion part. Might be some­thing to in­ves­ti­gate.

• See this for the best ex­am­ple of rapid brain­storm­ing, and the clos­est twit­ter has to long form con­tent.

• Re Mental Mountains: I think one of the reasons I get worried when I meet another youngin who is super gung-ho about rationality/"being logical and coherent" is that I don’t expect them to have a good Theory of How to Change Your Mind. I worry that they will reason out a bunch of conclusions, succeed in high-level changing their minds, think that they’ve deeply changed their minds, but instead leave hordes of unresolved emotional memories/models that they learn to ignore and that fuck them up later.

• (tid bit from some re­cent deep self ex­am­i­na­tion I’ve been do­ing)

I in­curred judg­ment-fueled “mo­ti­va­tional debt” by ag­gres­sively buy­ing into the idea “Talk is worth­less, the only thing that mat­ters is go­ing out and get­ting re­sults” at a time where I was so con­fi­dent I never ex­pected to fail. It felt like I was get­ting free mo­ti­va­tion, be­cause I saw no con­se­quences to mak­ing this value judg­ment about “not get­ting re­sults”.

When I learned more, the pos­si­bil­ity of failure be­came more real, and that can­non of judge­ment I’d built swiveled around to point at me. Oops.

• This seems to be a specific instance of a more general phenomenon that Leverage Research calls "De-simplification".

The basic phenomenon goes like this:

1. According to Leverage Research, your belief structure must always be such that you believe you can achieve your terminal values/goals.

2. When you’re relatively powerless and unskilled, this means that by necessity you have to believe the world is simpler than it is and that things are easier to do than they are, because otherwise there’d be no way you could achieve your goals/values.

3. As you gain more skill and power, your ability to tackle complex and hard problems becomes greater, so you can begin to see more complexity and difficulty in the world and in the problems you’re trying to solve.

4. If you don’t know about this phenomenon, it might feel like power and skills don’t actually help you, and you’re just treading water. In the worst case, you might think that power and ability actually make things worse. In fact, what’s going on is that your new power and ability made salient things that were always there, but which you could not allow yourself to see. Being able to see things as harder or more complex is actually a signal that you’ve leveled up.

• This is a very useful frame! Is the blog on Leverage Research’s site where most of their stuff is, or would I go somewhere else if I wanted to read about what they’ve been up to?

• There’s not really anywhere to go to read what Leverage has been up to; they’re a very private organization. They did have an arm called Paradigm Academy that did teaching, which is where I learned this. However, Leverage recently downsized, and I’m not sure about the status of Paradigm or other splinter organizations.

• Person I talked to once: "Moral rules are dumb because they aren’t going to work in every scenario you’re going to encounter. You should just judge everything case by case."

The thing that feels most wrong about this to me is the propo­si­tion that there is an ac­tion you can do which is, “Judge ev­ery­thing case by case”. I don’t think there is. You wouldn’t say, “No ab­strac­tion cov­ers ev­ery sce­nario, so you should model ev­ery­thing in quarks.”

For some reason or another, it sometimes feels like you can "model things at their most reduced" when pondering a moral decision. But you aren’t even close. "Judge everything case by case" arguments seem to come from a place of not knowing how your mind works. Mayhaps it’s more of a justification thing, where if you say, "It felt right to me", you’re generally off the hook, whereas if you supply principled reasons for your decision making, you open yourself up to criticism (Copenhagen ethics-ish).

• I can’t re­mem­ber the ex­act quote or where it came from, so I’m go­ing to para­phrase.

The end goal of med­i­ta­tion is not to be able to calm your mind while you are sit­ting cross-legged on the floor, it’s to be able to calm your mind in the mid­dle of a hur­ri­cane.

Mapping this onto rationality, there are two questions you can ask yourself.

How ra­tio­nal can I be while mak­ing de­ci­sions in my room?

How ra­tio­nal can I be in the mid­dle of a hur­ri­cane?

I think the dis­tinc­tion is im­por­tant be­cause rec­og­niz­ing it al­lows you to train both skills sep­a­rately.

• I sus­pect there is rele­vance here to maps of differ­ent de­tails.

For ex­am­ple play­ing a ball sport. I can in­tel­lec­tu­ally know a lot more than I can carry out in my sys­tem 1 while run­ning from the other play­ers.

For S1 I need tighter models that I can run on the fly. Not sure if that matches perfectly to meditating in a hurricane.

• Some thoughts on a toy model of pro­duc­tivity and well-being

T = set of tasks

S = set of physiological states

R = level of "reflective acceptance" of current situation (ex. am I doing "good" or "bad")

Quality of Work = some_function(s, t) + stress_applied

Quality of Subjective Experience = Quality of Work - stress + R

Some states are stickier than others. It’s easier to jump out of "I’m distracted" than it is to escape "I’ve got the flu". States can be better or worse at doing tasks, and tasks can be of varying difficulty.

There is some lever, which I’m going to call stress (might call it willpower), that you can spam to get a non-trivial increase in work output, though it seems to max out pretty fast.

R is very much primal, and also seems to be distinct from S. I generally don’t feel bad about not being able to do work when I’m sick (normal R, low S), yet if I’m persistently "just distracted" it’s easier to get a bad R value. By default, it seems like R is the main feedback loop us humans use to make corrective adjustments.

Sometimes I feel amazing and can just breeze through work, other times I can barely think. I’m used to trying to maintain a constant quality of work, which means if I’m in a poor S, more stress is applied, which decreases the quality of the subjective experience, which can have long-term negative effects.

The master-level play seems to be to hack your S to consistently be higher quality. Growth-mindset? Diet? Stimulants?
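The toy model can be sketched in code. Everything below (the function shapes, the stress cap, the specific numbers) is my own invented illustration of the model's structure, not part of the original:

```python
# Toy model: Quality of Work = some_function(s, t) + stress_applied,
# Quality of Subjective Experience = Quality of Work - stress + R.

def quality_of_work(state: float, task_difficulty: float, stress: float) -> float:
    """Base output depends on physiological state vs. task difficulty,
    plus a stress boost that maxes out fast (illustrative shapes)."""
    base = max(0.0, state - task_difficulty)
    stress_boost = min(stress, 2.0)  # the lever caps out quickly
    return base + stress_boost

def quality_of_experience(work_quality: float, stress: float, r: float) -> float:
    """Subjective experience pays for the stress you applied, plus R."""
    return work_quality - stress + r

# Poor state + hard task: spamming stress keeps work output constant...
low_s_work = quality_of_work(state=2.0, task_difficulty=3.0, stress=2.0)
good_s_work = quality_of_work(state=5.0, task_difficulty=3.0, stress=0.0)
print(low_s_work, good_s_work)  # 2.0 2.0 (same output either way)

# ...but the subjective experience on the stressed day is much worse.
print(quality_of_experience(low_s_work, stress=2.0, r=0.0))   # 0.0
print(quality_of_experience(good_s_work, stress=0.0, r=0.0))  # 2.0
```

This makes the "master-level play" concrete: raising S buys the same output without the experience penalty.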

• Or, if you’re okay with be­ing a bit less of a canon­i­cal ro­bust agent and don’t want to take on the costs of re­li­a­bil­ity, you could try to always match your work to your state. I’m think­ing more of “mood” than “state” here. Be in­finitely cre­ative chaos.

Oooh, I don’t know any blog post to cite, but Duncan mentioned at a CFAR workshop the idea of being a King or a Prophet. Both can be reliable and robust agents. The King does so by putting out Royal Decrees about what they will do, and then executing said plans. The Prophet gives you prophecies about what they will do in the future, and they come true. While you can count on both the decrees of the king and the prophecies of the prophet, the actions of the prophet are more unruly and chaotic, and don’t seem to make as much sense as the king’s.

• I no­tice that there’s al­most a sort of pres­sure that builds up when I look at some­one, as if it’s a literal in­di­ca­tor of, “Dude, you’re ap­proach­ing a so­cially un­ac­cept­able star­ing time!”

It seems ob­vi­ous what is go­ing on. If you stare at some­one for too long, things get “weird” and you come off as a “creep”. I know that. Most peo­ple know that. And since we all have com­mon knowl­edge about that rule, I un­der­stand that there are con­se­quences to star­ing at some­one for more than a sec­ond or two. Thus, the rea­son I don’t stare at peo­ple for very long is be­cause I know I will be so­cially pe­nal­ized for it.

Ex­cept I’m doubt­ing the story that such a line of rea­son­ing is ever com­puted in the ac­tual sce­nario. I re­cently re­al­ized that when I don’t have my con­tacts in (I’ve got re­ally ter­rible vi­sion), I feel no such pres­sure to look away from peo­ple. I can just stare at a stranger who is only a few feet away from me, and I only feel a vague obli­ga­tion like, “Hmmm, I mean I guess I should stop star­ing…”

This seems like weak ev­i­dence that my be­hav­ior “Not star­ing at peo­ple for too long” is a re­sult of a vi­sual in­put to ac­tion map­ping, rather than an im­plicit rea­son­ing pro­cess.

• Another ex­am­ple of “I was run­ning a less gen­eral and more hacky al­gorithm than an­ti­ci­pated”.

On a bike trip through Viet­nam, very few peo­ple in the coun­tryside spoke English. Often, we’d just talk at each other in our re­spec­tive lan­guages and ges­tic­u­late wildly to ac­tu­ally make our points.

I noticed that I was still smiling and laughing in response to things said to me in Vietnamese, even though I had no idea what was going on. This has led me to see the decision to laugh or smile as mostly based on non-verbal stuff, and not, "Yes, I’ve understood the thing you have said, and what you said is funny."

• I’m currently reading The Open Veins of Latin America, which is a detailed history of how Latin America has been screwed over across the centuries. It reminds me of a book I read a while ago, Confessions of an Economic Hit-man. Though it’s clear the author thinks that what has happened to Latin America has been unjust, he does a good job of not adding lots of "and therefore..."s. It’s mostly a poetic historical account. There are a lot more cartoonishly evil things that have happened in history than I realized.

I’m simu­lat­ing bring­ing up this book to var­i­ous friends, and in many cases the sim-of-friend feels the need to ei­ther go, “Yeah it sucks, but it’s not ac­tu­ally that bad be­cause XYZ,” or “I know! The global­ist/​cap­i­tal­ist/​ma­te­ri­al­ist west is sooo evil, right?”

This seems to point to a gen­eral trend of peo­ple not want­ing to spend a ton of time dwelling on the data, and in­stead jump­ing straight to draw­ing con­clu­sions.

If you spend enough time dealing with people who are trying to get certain data to support their team, you start to lose your ability to engage with exploring the territory. For some, it might not feel safe to ask about what the U.S. did or didn’t do in Latin America, because if they agree to the wrong point, they might be forced into the other side’s conclusion.

Hold off on proposing solutions.

• Fun Fram­ing: Em­piri­cism is try­ing to pre­dict TheUni­verse(t = n + delta) us­ing TheUni­verse(t=n) as your black­box model.

• Some­times the teacher makes a typo. In con­ver­sa­tion, some­times peo­ple are “just wrong”. So a lot of the times, when you no­tice con­fu­sion, it can be dis­missed with “the other per­son just screwed up”. But re­al­ity doesn’t screw up. It just is. Always pay at­ten­tion to con­fu­sion that comes from look­ing at re­al­ity.

(Also, when you come to the conclusion that another person "screwed up", you aren’t completely done until you have some understanding of how they might have screwed up.)

• A rephras­ing of ideas from the re­cent Care Less post.

Value al­lo­ca­tion is not zero sum, though time al­lo­ca­tion is. In or­der to not break down at the “colos­sal in­jus­tice of it all”, a com­mon strat­egy is to op­er­ate as if value is zero-sum.

To be as effec­tive as pos­si­ble, you need to be able to see the dark world, one that is be­yond the reach of God. Do not ex­plain why the cur­rent state of af­fairs is ac­cept­able. In­stead, look at re­al­ity very care­fully and move to­wards the goal. Ex­plain­ing why your world is ac­cept­able shuts down the sense that more is pos­si­ble.

• I just finished reading and rereading Debt: The First 5000 Years. I was tempted to go, "Yep, makes sense, I was basically already thinking about money and debt like that." Then I remembered that not but two months ago I was arguing with a friend and asserting that there was nothing dysfunctional about being able to sell your kidney. It’s hard to remember what I used to think about certain things. When there’s a concrete reminder, sometimes it comes as a shock that I used to think differently from how I do. For whatever the big things I’ve changed my mind about in the past few years, I doubt that the "proper consequences" of those changes have successfully propagated to all corners of my mind. Another thing to watch out for...

• Worth read­ing the moun­tains of crit­i­cism of this book, e.g. these blog posts. I still got some­thing in­ter­est­ing out of read­ing it though.

• Most of what I’ve gotten out of the book has been lenses for viewing coordination issues, and less "XYZ events in history happened because of ABC." (and skimming the posts you linked, they seemed more to do with the latter)

I think when I read Nassim Taleb’s Black Swan was the first time I immediately afterwards googled "book name criticism". Taleb had made some minor claim about network theory not being used for anything practical, which turned out to just be wrong (a critic cited it being used for developing solutions to malaria outbreaks). Seeing that made me realize I hadn’t even wondered whether or not the claim was true when I first read it. Since then I’ve been more skeptical of any given detail an author uses, unless it seems like a "basic" element of their realm of expertise (like, I don’t doubt any of the anthropological details Graeber presented about the Tiv, though I may disagree with his extrapolations).

• Highly spec­u­la­tive thought.

I don’t of­ten get an­gry/​up­set/​ex­as­per­ated with the cod­ing or math that I do, but to­day I’ve got­ten roy­ally pissed at some Java pro­ject of mine. Here’s a guess at a pos­si­ble mechanism.

The more human-like a system feels, the easier it is to anthropomorphize and get angry at. When dealing with my code today, it has felt less like the world of being able to reason carefully over a deterministic system, and more like dealing with an unpredictable, possibly hostile agent. Mayhaps part of my brain pattern matches that behaviour to something intelligent → something human → apply anger strategy.

• So a thing Galois the­ory does is ex­plain:

Why is there no for­mula for the roots of a fifth (or higher) de­gree polyno­mial equa­tion in terms of the co­effi­cients of the polyno­mial, us­ing only the usual alge­braic op­er­a­tions (ad­di­tion, sub­trac­tion, mul­ti­pli­ca­tion, di­vi­sion) and ap­pli­ca­tion of rad­i­cals (square roots, cube roots, etc)?

Which makes me wonder: would there be a formula if you used more machinery than the normal operations and radicals? What does "more than radicals" look like?
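One concrete answer, which I believe is the standard one (quoting from memory, so worth double-checking the sign conventions): the Bring radical.

```latex
% Radical-only Tschirnhaus transformations reduce any quintic to
% Bring–Jerrard normal form:
\[
  x^5 + x + a = 0 .
\]
% Defining a new one-argument operation, the Bring radical
% $\mathrm{BR}(a)$, as a root of this equation gives exactly "more than
% radicals": the general quintic becomes solvable using arithmetic,
% radicals, and $\mathrm{BR}$.
```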

• I think people usually just use "the number is the root of this polynomial" in and of itself to describe them, which is indeed more than radicals. There probably are more roundabout ways to do it, though.

• There are two times when Occam’s razor comes to mind. One is for addressing "crazy" ideas ala "The witch down the road did it", and one is for picking which legit-seeming hypothesis I might prioritize in some scientific context.

For the first one, I re­ally like Eliezer’s re­minder that when go­ing with “The witch did it” you have to in­clude the ob­served data in your ex­pla­na­tion.

For the sec­ond one, I’ve been think­ing about the sim­plic­ity for­mu­la­tion that one of my pro­fes­sors uses. Roughly, A is sim­pler than B if all data that is con­sis­tent with A is a sub­set of all data that is con­sis­tent with B.

His mo­ti­va­tion for us­ing this no­tion has to do with min­i­miz­ing the num­ber of times you are forced to up­date.
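A minimal sketch of this formulation, treating a hypothesis as the set of observations consistent with it (the swan example is my own illustration, not the professor's):

```python
# "A is simpler than B if all data consistent with A is a subset of all
# data consistent with B."

def simpler_than(a: set, b: set) -> bool:
    """True if everything consistent with hypothesis A is also
    consistent with B, and B allows strictly more (proper subset)."""
    return a < b

# "All swans are white" permits fewer possible observations than
# "swans are white or black", so it counts as simpler here.
all_white = {"white swan"}
white_or_black = {"white swan", "black swan"}

print(simpler_than(all_white, white_or_black))  # True
print(simpler_than(white_or_black, all_white))  # False
```

Note this ranks hypotheses by how much they rule out, which is what connects it to minimizing forced updates.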

• Roughly, A is sim­pler than B if all data that is con­sis­tent with A is a sub­set of all data that is con­sis­tent with B.

Maybe the less rough ver­sion is bet­ter, but this seems like a re­ally bad for­mu­la­tion. Con­sider (a) an ex­act enu­mer­a­tion of ev­ery event that ever hap­pened, mak­ing no pre­dic­tion of the fu­ture, vs (b) the true laws of physics and the true ini­tial con­di­tions, cor­rectly pre­dict­ing ev­ery event that ever hap­pened and ev­ery event that will hap­pen.

In­tu­itively, (b) is sim­pler to spec­ify, and we definitely want to as­sign (b) a higher prior prob­a­bil­ity. But ac­cord­ing to this for­mu­la­tion, (a) is sim­pler, since all fu­ture events are con­sis­tent with (a), while al­most none are con­sis­tent with (b). Since both the­o­ries have equally much ev­i­dence, we’d be forced to as­sign higher prob­a­bil­ity to (a).

• I think me adding more de­tails will clear things up.

The setup pre­sup­poses a cer­tain amount of re­al­ism. Start with Pos­si­ble Wor­lds Se­man­tics, where log­i­cal propo­si­tions are at­tached to /​ re­fer to the set of pos­si­ble wor­lds in which they are true. A hy­poth­e­sis is some propo­si­tion. We think of data as get­ting some propo­si­tion (in prac­tice this is shaped by the meth­ods/​tools you have to look at and mea­sure the world), which nar­rows down the al­low­able pos­si­ble wor­lds con­sis­tent with the data.

Now is the part that I think addresses what you were getting at. I don’t think there’s a direct analog in my setup to your (a). You could consider the hypothesis/proposition "the set of all worlds compatible with the data I have right now", but that’s not quite the same. I have more thoughts, but first, do you still feel like your idea is relevant to the setup I’ve described?

• That does seem to change things… Although I’m con­fused about what sim­plic­ity is sup­posed to re­fer to, now.

In a pure bayesian ver­sion of this setup, I think you’d want some sim­plic­ity prior over the wor­lds, and then dis­card in­con­sis­tent wor­lds and renor­mal­ize ev­ery time you en­counter new data. But you’re not speak­ing about sim­plic­ity of wor­lds, you’re speak­ing about sim­plic­ity of propo­si­tions, right?

Since a proposition is just a set of worlds, I guess you’re speaking about the combined simplicity of all the worlds. And it makes sense that that would increase if the proposition is consistent with more worlds, since any of the worlds would indeed lead to the proposition being true.

So now I’m at “The sim­plic­ity of a propo­si­tion is pro­por­tional to the prior-weighted num­ber of wor­lds that it’s con­sis­tent with”. That’s start­ing to sound closer, but you seem to be say­ing that “The sim­plic­ity of a propo­si­tion is pro­por­tional to the num­ber of other propo­si­tions that it’s con­sis­tent with”? I don’t un­der­stand that yet.

(Also, in my for­mu­la­tion we need some other kind of sim­plic­ity for the sim­plic­ity prior.)

• I’m cur­rently turn­ing my notes from this class into some posts, and I’ll wait to con­tinue this un­til I’m able to get those up. Then, hope­fully, it will be eas­ier to see if this no­tion of sim­plic­ity is lack­ing. I’ll let you know when that’s done.

• “Con­tra­dic­tions aren’t bad be­cause they make you ex­plode and con­clude ev­ery­thing, they’re bad be­cause they don’t tell you what to do next.”

Quote from a pro­fes­sor of mine who makes for­mal­isms for philos­o­phy of sci­ence stuff.

• Con­tra­dic­tions tell you to fix the con­tra­dic­tion/​s next.

• Looking at my calendar over the last 8 months, it looks like my attention span for a project is about 1-1.5 weeks. I’m musing on what it would look like to lean into that. Have multiple projects at once? Work extra hard to ensure I hit save points before the weekends? Only work on things in week-long bursts?

• If you can be de­liber­ate about learn­ing from pro­jects, this could ac­tu­ally be a good setup – do­ing one pro­ject a week, learn­ing what you can from it, and mov­ing on ac­tu­ally seems pretty good if you’re op­ti­miz­ing for skill growth.

• Yeah, being explicit about 1 week would likely help. The projects that made me make this observation were all ones where I was trying to do more than a week’s worth of stuff, and a week is where I decided to move to something else.

I ex­pect “I have a week to learn about X” would both take into ac­count wan­ing/​wax­ing in­ter­est, and add a bit of rush-mo­ti­va­tion.

• I’m noticing an even more granular version of this. Things that I might do casually (reading some blog posts) have a significant effect on what’s loaded into my mind the next day. Smaller than the week level, I’m noticing a 2-3 day cycle of "the thing that was most recently in my head" and how it affects the question of "If I could work on anything rn what would it be?"

This week on Tues­day I picked Wed­nes­day as the day I was go­ing to write a sketch. But be­cause of some­thing I was think­ing be­fore go­ing to bed, on Wed­nes­day my head was filled with thoughts on urbex. So I switched gears, and urbex thoughts ran their course through Wed­nes­day, and on Thurs­day I was ready to ac­tu­ally write a sketch (com­edy thoughts need to be loaded for that)

• Possible hack related to small wins. Many of the projects that I stopped got stopped partway through "continuing more of the same". One was writing my Hazardous Guide to Words, and the other was researching how the internet works. Maybe I could work on one cohesive thing for longer if there was a significant victory and gear shift after a week. Like, if I was making a video game, "Yay, I finished making all the art assets, onto actual code" or something.

• The tar­get au­di­ence for Hazardous Guide is friends of yours, cor­rect? (vaguely re­call that)

A thing that nor­mally works for writ­ing is that af­ter each chunk, I get to pub­lish a thing and get com­ments. One thing about Hazardous Guide is that it mostly isn’t new ma­te­rial for LW vet­er­ans, so I could see it get­ting less feed­back than av­er­age. Might be able to ad­dress by ac­tu­ally show­ing parts to friends if you haven’t

• Ooo, good point. I was getting a lot less feedback from it than from other things. There’s one piece of feedback which is "am I on the right track?" and another that’s just "yay, people are engaging!", both of which seem relevant to motivation.

• Elephant in the Brain style model of sig­nal­ing:

Actually showing that you have XYZ skill/trait is the most beneficial thing you can do, because others can verify you’ve got the goods and will hire you / like you / be on your team. So now there’s an incentive for everyone to be constantly displaying their skills/traits. This takes up a lot of time and energy, and I’m gonna guess that anti-competition norms created "showing off" as a bad thing to do, to prevent this "over-saturation".

So if there’s a "no showing-off" norm, what can you do? You signal (do non-direct things to try and convey you have a skill or trait). It’s still often that people signal all the time and it takes up time and energy, but it does seem a bit less wasteful than everyone "showing off" all the time.

• This has been my model too, deriving from EitB. But it’s probably not just about preventing the over-saturation; it’s also to the benefit of those who are more skilled at signaling covertly to promote a norm that disadvantages those who have only the skills, but not the covert-signaling skills.

• Yeah, I see those playing together in the form of the base norm being about anti-competition, and then people can want to enforce the norm both from a general "I’ll get punished if I don’t support it" and from "I personally can skillfully subvert it, so enforcing this norm helps me keep the unskilled out".

• Be careful not to oversimplify: norms are complex, mutable, and context-sensitive. "No showing off" is not a very complete description of anyone’s expectations. "No showing off badly" is closer, but "badly" is doing a LOT of work, and is itself a complex and somewhat recursive norm.

Finding out where "showing" skills is aligned with "exercising" those skills to achieve an outcome is non-trivial, but ever so wonderful if you do find a profession and project where it’s possible.

See also https://en.wikipedia.org/wiki/Countersignaling, the idea that if you’re confident that you’re assumed to have some skills, you actually show HIGHER skills by failing to signal those skills.

• Thanks for reminding me of nuance. Yeah, the "badly" does a lot of work, but it also puts me in the right headspace to guess at when I do and don’t think real people would get annoyed at someone "showing off".

• When I first read The Se­quences, why did I never think to se­ri­ously ex­am­ine if I was wrong/​bi­ased/​par­tially-in­com­plete in my un­der­stand­ing of these new ideas?

Hyp: I believed that fooling one’s self was all identity-driven. You want to be a type of person, and your bias lets you comfortably sink into it. I was unable to see my identity. I also had a self-narrative of "Yeah, this Eliezer dude, whatever, I’ll just see if he has anything good to say. I don’t need to fit in with the rationalists."

I saw my­self as “just” tak­ing in and think­ing about some ar­gu­ments, and these ar­gu­ments were con­vinc­ing to me, and so they stuck and I took them in. I didn’t ap­ply lots of rigor or self re­flec­tion, be­cause I didn’t think I needed care­ful thought to avoid be­ing bi­ased if I couldn’t see a clear and ob­vi­ous iden­tity on the line.

(spoiler, iden­tity was on the line, and also your rea­son­ing can be flawed for a bazillion non iden­tity based rea­sons)

• There is chaos and one (or a state) is try­ing to pre­dict and act on the world. It sure would be eas­ier if things were sim­pler. So far, this seems like a pretty hu­man/​stan­dard de­sire.

I think the core move of legibility is to declare that everything must be simple and easy to understand, and if reality (i.e. people) isn’t as simple as our planned simplification, well, too bad for people.

As a rationalist/post-rationalist/person who thinks good, you don’t have to do that. Giving into the process of legibility is accepting a theory for the sake of a theory, even if it muffles a part of reality that matters. Don’t do that.

• “If we’re all so good at fool­ing our­selves, why aren’t we all happy?”

The zealot is only “fool­ing them­selves” from the per­spec­tive of the “ra­tio­nal” out­sider. The zealot has not fooled them­selves. They have looked at the world and their rea­son­ing pro­cesses have come to the clear and ob­vi­ous con­clu­sion that []. They have gri-gri, and it works.

But it seems like most of us are much bet­ter at fool­ing our­selves than we are at “hap­pen­ing to use the full ca­pac­ity of our minds to come to false and use­ful con­clu­sions”. We have be­lief in be­lief. It’s pos­si­ble to work this into al­most as strong of a fortress as the zealot, but it is more pre­car­i­ous.

• I’ve spent the last three weeks mak­ing some sim­ple apps to solve small prob­lems I en­counter, and prac­tice the de­vel­op­ment cy­cle. Ex­am­ple.

I’ve already been sold on the concept of developing things in a Lean MVP style for products. Shorter feedback loops between making stuff and figuring out if anyone wants it. Less time spent making things people don’t want to give you money for. It was only these past few weeks that I noticed the importance of an MVP approach for personal projects. Now it’s a case of shortening the feedback loops between making stuff and figuring out if I care about what I’ve made. This is crucial for motivation. It’s easy for me to go, "I’m gonna make a slick app!" and try to do the symbolic thing that is app development, and not spend time working towards what I cared about that made me start the project.

I see this a lot with blog posts as well. If I get in my head that this post should be “defini­tive” or “ex­tra-well re­searched”, I can spend a lot of time on that, even though I didn’t ac­tu­ally care about it that much, and by the time I get to writ­ing the thing that was in my heart I’m sick and tired of the idea and don’t want to write.

• I love at­ten­tion, but I HATE ask­ing for it. I’ve no­ticed this a few times be­fore in var­i­ous forms. This time it re­ally clicked. What changed?

• This time around, the in­sight came in the con­text of perform­ing magic. This made the “I love at­ten­tion” part more ob­vi­ous than other times, when I merely no­ticed, “I have an aller­gic re­ac­tion to seem­ing needy.”

• I was able to re­mem­ber some of the con­text that this pat­tern arose from, and can ob­serve “Yes, this may have helped me back then, but here are ways it isn’t as helpful now, and it’s not au­to­mat­i­cally ter­rible to ask for at­ten­tion.”

• I re­al­ize this is my fault, but when I click “what changed” I’m not ac­tu­ally sure what com­ment it’s link­ing to. (I’ll im­prove the com­ment-link­ing UI this week hope­fully so it’s more clear which com­ments link where). Which com­ment did you mean to be link­ing to?

I’m in­ter­ested in more de­tails about what was go­ing on in the par­tic­u­lar ex­am­ple here (i.e. perform­ing magic as in stage-magic? What made that differ­ent?)

This is less about the notic­ing and more about effects of the pre­vi­ous frame.

• I like this post, and think it’d be fine to cross­post to LW.

• I’ll be writ­ing a post about this later. The com­ment it links to is the first child com­ment of the tippy top com­ment of this page. (yes, magic the perfor­mance art)

• Rea­sons why I cur­rently track or have tracked var­i­ous met­rics in my life:

1. A mindfulness tool. Taking the time to record and note some metric is itself the goal.

2. Have data to be able to test a hypothesis about ways some intervention would affect my life. (i.e. Did waking up earlier give me less energy in the day?)

3. Have data that en­ables me to make bet­ter pre­dic­tions about the fu­ture (mostly re­lated to time track­ing, “how long does X amount of work take?”)

4. Un­der­stand­ing how [THE PAST] was differ­ent of [THE PRESENT] to help defeat the Deadly De­mons of Doubt and Shitty Ser­pents of Should (ala De­liber­ate Once).

I have not always had these in mind when de­cid­ing to track a met­ric. Often I tracked be­cause “that’s wut pro­duc­tive peo­ple do right?”. When I keep these in mind, track­ing gets more use­ful.

• Cur­rent be­liefs about how hu­man value works: var­i­ous thoughts and ac­tions can pro­duce a “re­ward” sig­nal in the brain. I also have lots of pre­dic­tive cir­cuits that fire when they an­ti­ci­pate a “re­ward” sig­nal is com­ing as a re­sult of what just hap­pened. The pre­dic­tive cir­cuits have been trained to use the pat­terns of my en­vi­ron­ment to pre­dict when the “re­ward” sig­nal is com­ing.

Get­ting an “ac­tual re­ward” and a pre­dic­tive cir­cuit firing will both be ex­pe­rienced as some­thing “good”. Be­cause of this, pre­dic­tive cir­cuits can not only track “ac­tual re­ward” but also the ac­ti­va­tion of other pre­dic­tive cir­cuits. (So far this is ba­si­cally “there’s ter­mi­nal and in­stru­men­tal val­ues, and they are ex­pe­rienced as roughly the same thing”)

The pre­dic­tive cir­cuits are all do­ing some “learn­ing pro­cess” to keep their firing cor­re­lated to what they’re track­ing. How­ever, the “qual­ity” of this learn­ing can vary dras­ti­cally. Some cir­cuits are more “hard­wired” than oth­ers, and less able to up­date when they be­gin to be­come un­cor­re­lated from what they are track­ing. Some are caught in in­ter­est­ing feed­back loops with other cir­cuits, such that you have to up­date mul­ti­ple cir­cuits si­mul­ta­neously, or in a par­tic­u­lar or­der.

Though everything that feels "good" feels good because at some point or another it was tracking the base "reward" signal, it won't always be a good idea to think of the "reward" signal as the thing you value.

Say you have a cir­cuit that tracks a proxy of your base “re­ward”. If some­thing hap­pens in your brain such that this cir­cuit ceases to up­date, you ba­si­cally value this proxy ter­mi­nally.

Said another way, I don't have a nice clean ontological line between terminal values and instrumental values. The less able a predictive circuit is to update, the more "terminal" the value it represents.
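As a toy sketch of that last claim (my own illustration, not real neuroscience or anything from the post): two predictive circuits track a proxy for reward, but one has a learning rate of zero. When the environment decouples the proxy from reward, only the flexible circuit updates; the hardwired one keeps firing, which is what "valuing the proxy terminally" looks like in this frame.

```python
# Toy model (assumed/illustrative): each circuit predicts reward from a
# proxy and updates by a simple delta rule.
def update(prediction, reward, lr):
    """Move the prediction toward the observed reward at rate lr."""
    return prediction + lr * (reward - prediction)

flexible, hardwired = 1.0, 1.0  # both start out expecting the proxy to pay off
for _ in range(50):
    reward = 0.0                # environment changed: proxy no longer pays off
    flexible = update(flexible, reward, lr=0.2)    # can still learn
    hardwired = update(hardwired, reward, lr=0.0)  # "ceases to update"

assert flexible < 0.01   # tracked the change, prediction decays toward 0
assert hardwired == 1.0  # stuck: the proxy is now valued "terminally"
```

The learning rate here stands in for how "hardwired" a circuit is; the interesting cases in the post (circuits caught in feedback loops with each other) would need more than this one-line rule.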

• Weird­ness that comes from re­flec­tion:

In this frame, I can self-re­flect on a given cir­cuit and ask, “Does this cir­cuit ac­tu­ally push me to­wards what I think is good?” When do­ing this, I’ll be us­ing some more meta/​higher-or­der cir­cuits (con­cepts I’ve built up over time about what a “good” brain looks like) but I’ll also be us­ing lower level cir­cuits, and I might even end up us­ing the eval­u­ated cir­cuit it­self in this eval­u­a­tion pro­cess.

Sometimes this reflection process will go smoothly. Sometimes it won't. But one takeaway/claim is that you have this complex, roundabout process for re-evaluating your values when some circuits begin to think that other circuits have diverged from "good".

Be­cause of this abil­ity to re­flect and change, it seems cor­rect to say that “I value things con­di­tional on my en­vi­ron­ment” (where en­vi­ron­ment has a lot of flex, it could be as small as your work space, or as broad as “any ex­ist­ing hu­man cul­ture”).

Example: let's say there was literally no scarcity for survival goods (food, water, etc.). It seems like a HUGE chunk of my values and morals are built-up inferences and solutions to resource allocation problems. If resource scarcity was magically no longer a problem, much of my values would lose their connection to reality. From what I've seen so far of my own self-reflection process, it seems likely that over time I would come to reorganize my values in such a post-scarcity world. I've also currently got no clue what that reorganization would look like.

• AFI worry: A hu­man-in-the-loop AI that only takes ac­tions that get hu­man ap­proval (and whose ex­pected out­comes have hu­man ap­proval) hits big prob­lems when the con­text the AI is act­ing in is a very differ­ent con­text from where our val­ues were trained.

Is there any way around this be­sides simu­lat­ing peo­ple hav­ing their val­ues re-or­ga­nized given the new en­vi­ron­ment? Is this what CEV is about?

• The slo­gan ver­sion of some thoughts I’ve been hav­ing lately are in the vein of “Hurry is the root of all evil”. Think­ing in terms of code. I’ve been work­ing in a new dev en­vi­ron­ment re­cently and have felt the siren song of, “Copy the code in the tu­to­rial. Just im­port all the pack­ages they tell you to. Don’t sweat the de­tails man, just go with it. Just get it run­ning.” All that as op­posed to “Learn what the differ­ent ab­strac­tions are grounded in, figure out what tools do what, figure out ex­actly what I need, and use what­ever is nec­es­sary to ac­com­plish it.”

When I ping myself about why the former has such a pull, I come up with 1) a tiny fear of not being capable of understanding the fine details, and 2) a tiny fear that if understanding is possible, it will take a lot of time and WE'RE RUNNING OUT OF TIME!

Which is in­ter­est­ing, be­cause this is just a side pro­ject that I’m do­ing for fun over win­ter break, which is speci­fi­cally de­signed to get me to learn more.

• The fact that utility and probability can be transformed while maintaining the same decisions matches what the algo feels like from the inside. When thinking about actions, I often just feel like a potential action is "bad", and it takes effort to piece out whether I don't think the outcome is super valuable, or whether there's a good outcome that I don't think is likely.
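One concrete instance of this invariance (my example, assuming a standard expected-utility setup with hypothetical numbers): any positive affine transformation of the utility function leaves the ranking of actions unchanged, so the same decisions fall out.

```python
# Sketch: ranking gambles by expected utility is invariant under
# u' = a*u + b with a > 0 (all numbers here are invented).
def expected_utility(action, u):
    """Sum of probability * utility over an action's outcomes."""
    return sum(p * u[outcome] for outcome, p in action.items())

u = {"win": 10.0, "lose": -5.0}
risky = {"win": 0.4, "lose": 0.6}
safe = {"win": 0.2, "lose": 0.8}

u2 = {o: 3.0 * v + 7.0 for o, v in u.items()}  # transformed: a=3, b=7

prefer_risky_before = expected_utility(risky, u) > expected_utility(safe, u)
prefer_risky_after = expected_utility(risky, u2) > expected_utility(safe, u2)
assert prefer_risky_before == prefer_risky_after  # same decision either way
```

Which is maybe part of why the felt sense doesn't separate the two: from the inside you only get the final ordering of actions, not the (probability, utility) decomposition that produced it.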

• Think­ing about be­lief in be­lief.

You can have things called "beliefs" which are of type action. "Having" this belief is actually your decision to take certain actions in certain scenarios. You can also have things called "beliefs" which are of type probability, and are part of your deep felt sense of what is and isn't likely/true.

A be­lief-ac­tion that has a high EV (and feels “good”) will prob­a­bly feel the same as a be­lief-prob­a­bil­ity that is close to 1.

Take a given sen­tence/​propo­si­tion. You can put a high EV on the be­lief-ac­tion ver­sion of that sen­tence (may­haps it has im­por­tant con­se­quences for your so­cial groups) while putting a low prob­a­bil­ity on the be­lief-prob­a­bil­ity ver­sion of the sen­tence.

Meta Thoughts: The above idea is not fundamentally different from belief in belief or crony beliefs, both of which I read a year or more ago. What I just wrote felt like a genuine insight. What do I think I understand now that I don't think I understood then?

I think that recently (past two months, since CFAR) I've had better luck with going into "Super-truth" mode, looking into my own soul and asking, "Do you actually believe this?"

Now, I’ve got many more data points of, “Here’s a thing that I to­tally thought that I be­lieved(prob­a­bil­ity) but ac­tu­ally I be­lieved(ac­tion).”

Maybe the in­sight is that it’s easy to get mixed up be­tween be­lief-prob and be­lief-ac­tion be­cause the felt sense of prob­a­bil­ity and EV are very very similar, and gen­uinely non-triv­ial to peel apart.

^yeah, that feels like it. I think pre­vi­ously I thought, “Oh cool, now that I know that be­lief-ac­tion and be­lief-prob are differ­ent things, I just won’t do be­lief-ac­tion”. Now, I be­lieve that you need to teach your­self to feel the differ­ence be­tween them, oth­er­wise you will con­tinue to mis­take be­lief-ac­tions for be­lief-probs.

Meta-Meta-Thought: Doing the meta-thoughts was super useful, and I think I'll do it more often, given that I often have a sense of, "Hmmmm, isn't this basically [insert post in The Sequences here] re-phrased?"

• Don’t ask peo­ple for their mo­tives if you are only ask­ing so that you can shit on their mo­tives. Nor­mally when I see some­one ask­ing some­one else, “Why did you do that?” I in­ter­pret the state­ment to come from a place of, “I’m already about to start mak­ing nega­tive judg­ments about you, this is the last chance for you to offer a plau­si­ble ex­cuse for your be­hav­ior be­fore I start firing.”

If this is in fact the dy­namic, then no one is in­cen­tivised to give you their ac­tual rea­sons for things.

• I have been look­ing at in­ten­tions and try­ing to act with in­ten­tions in mind.

No one ever has ill intentions; they can have a "make the sale at your detriment" intention, but no one ever has a "worse off for everyone" intention.

• make the sale at your detriment

I like that phras­ing.

Yeah, I was speaking and (slightly) thinking about people with the pure motive to harm, which wouldn't be a typical case of this. Rephrase with, "Don't blah blah blah if you will end up making explicit negative judgments at them," and you have a better version of my thought.

• I’m look­ing at note­book from 3 years ago, and read­ing some scrib­bles from past me ex­cit­edly de­scribing how they think they’ve pieced to­gether that anger and the de­sire to pun­ish are adap­ta­tions pro­duced by evolu­tion be­cause they had good game the­o­retic prop­er­ties. In the haste of the writ­ing, and in the num­ber of ex­cla­ma­tion marks used, I can see that this was a huge re­al­iza­tion for me. It’s sur­pris­ing how ab­solutely nor­mal and “ob­vi­ous” the idea is to me now. I can only re­mem­ber a glim­mer of the “holy shit!”ness that I felt at the time. It’s so easy to for­get that I haven’t always thought the way I cur­rently do. As if I’m typ­i­cal-mind­ing my past self.

• “It seems like you are ar­gu­ing/​en­gag­ing with some­thing I’m not say­ing.”

I can remember an argument with a friend who went to great lengths to defend a point he didn't feel super strongly about, all because he implicitly assumed I was about to go "Given point A, X conclusion, checkmate."

It seems like a pretty common "argumental movement" is to get someone to agree to a few simple propositions, with the goal of later "trapping" them with a dubious "and therefore!". People are good at spotting this, and will often fight you on "facts" because they know the conclusion you are trying to reach (ala The Signal and The Corrective).

It seems like my friend was still run­ning the same defen­sive mechanism, even when there wasn’t in­tent on my part to trap him in a con­clu­sion.

Often, when some­one I’m talk­ing to “ar­gues with some­thing I’m not say­ing”, I don’t no­tice in time, and quickly I also end up ar­gu­ing a point I don’t care about.

• I re­ally like the phras­ing alk­jash used, One Inch Punch. Re­cently I’ve been pay­ing closer at­ten­tion to when I’m in “do­ing” or “try­ing” mode, and whether or not those are qual­ity han­dles, there do seem to be mul­ti­ple forms of “do­ing” that have dis­tinct qual­ities to them.

It’s way eas­ier for me to “just” get out of bed in the morn­ing, than to try and con­vince my­self get­ting out of bed is a good idea. It’s way eas­ier for me to “just” hit send on an email or mes­sage that might not be worded right, rather than con­vince my­self that it’s the right move.

When I act on a habit that fights in­cen­tives of com­fort, there’s a part of me that tries to rea­son me out of it. I’ve no­ticed that any en­gage­ment with that voice leads to a dras­tic re­duc­tion in the prob­a­bil­ity that I do the thing (this is much eas­ier to no­tice with phys­i­cal ac­tions and habits).

This doesn't apply to all things. There are some things where I genuinely don't know what a good decision looks like, and I know there's very little chance that "just taking action" will give a stellar result. I have no ideas on a formalism for spotting when to apply a One Inch Punch, and when to engage in deliberation, though I have a feeling that my S1 is getting better at doing such categorizing.

• So Kolmogorov Complexity depends on the language, but the complexities in any two languages differ by at most a constant (whatever the size of an interpreter from one to the other is).

This seems to mean that the complexity ordering of different hypotheses can be rearranged by switching languages, but "only so much". So

K_A(x) < K_A(y)

and

K_B(y) < K_B(x)

are both totally possible, as long as |K_A(x) − K_B(x)| ≤ c and |K_A(y) − K_B(y)| ≤ c (where c is the interpreter constant between the two languages).
I see how if you care about or­ders of mag­ni­tude, the de­scrip­tion lan­guage prob­a­bly doesn’t mat­ter. But if you ever had to make a de­ci­sion where it mat­tered if the com­plex­ity was 1,000,000 vs 1,000,001 then lan­guage does mat­ter.
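A toy numeric version of the reordering (numbers invented for illustration; real Kolmogorov complexity is uncomputable): give two "languages" description lengths for two hypotheses that differ by at most a constant c, and the simplicity ordering can still flip.

```python
# Hypothetical description lengths, not real K(x); the invariance theorem
# only promises |K_A(x) - K_B(x)| <= c, where c is the interpreter size.
c = 2
K_A = {"h1": 10, "h2": 11}  # language A: h1 looks simpler
K_B = {"h1": 12, "h2": 10}  # language B: h2 looks simpler

# Both languages stay within the interpreter constant of each other...
assert all(abs(K_A[h] - K_B[h]) <= c for h in K_A)
# ...yet the complexity ordering of the hypotheses is reversed.
assert K_A["h1"] < K_A["h2"] and K_B["h2"] < K_B["h1"]
```

So any decision that hinges on a complexity difference smaller than c can indeed come out differently depending on the description language.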

Where is KC ac­tu­ally used, and in those con­texts how sen­si­tive are re­sults to small re­order­ing like the one I pre­sented?

• I am not an ex­pert, but my guess is that KC is only used in ab­stract proofs, where these de­tails do not mat­ter. Things like:

• KC is not computable

• there is a con­stant “c” such that KC of any mes­sage is smaller than its length plus c

Etc.

• Yeah. I guess the only place I can remember seeing it referenced in action was with regard to assigning priors for Solomonoff induction. So I wonder if it changes anything there (though Solomonoff is already pretty abstracted away from other things, so it might not make sense to do a sensitivity analysis)

• Mini Post, Li­tany of Gendlin re­lated.

Chang­ing your mind feels like chang­ing the world. If I change my mind and now think the world is a shit­tier place than I used to (all my friends do hate me), it feels like I just tele­ported into a shit­tier world. If I change my mind and now think the world is a bet­ter place than I used to (I didn’t leave the oven on at home, so my house isn’t go­ing to burn down!) it feels like I’ve just been tele­ported into a bet­ter world.

Con­se­quence of the above: if some­one is try­ing to change your mind, it feels like they are try­ing to change your world. If some­one is try­ing to make you be­lieve the world is a shit­tier place than you thought, it feels like they are try­ing to make your life shit­tier.

Now, I recite the Litany of Gendlin like a good rationalist. Let me try to walk through why that might be uncompelling to the average Joe.

Let's say all of your friends have secretly hated you for a while. Something has just happened (you saw one of their group chats where they were shit-talking you) and you are considering "Shit, what if they have been hating me for years?" You recite the Litany of Gendlin. It's ineffective. What's up?

It seems it has to be that your concept of "All my friends secretly hate me" is not in accord with what your friends actually hating you is like. You have already endured your friends secretly hating you. You have not yet endured believing "My friends secretly hate me". This can only do damage by interacting with other belief networks in your mind. Maybe having this belief triggers "Only an idiot could go years without noticing his friends hate him", which combines with "If I'm an oblivious idiot I won't be able to accomplish my goals" and "No one will love an oblivious idiot who can't accomplish their goals", and now the future does not feel safe.

It seems like the move you could pull that might best reduce the feeling of "Believing this will make life shittier than not" is to imagine believing it and the world being shitty, and then to imagine not believing it, but the world still being shitty. I think this will help in many scenarios. I'd expect many Litany of Gendlin scenarios to be ones where ignoring the truth will create compounding trouble down the road. So the move is to imagine going along blissfully in denial, and then getting socked in the face by a crashing buildup. Compare that to the extra work and worry of believing now.

If you did that and came out with "Nope, it still seems like I'll be net better off to not believe", well shit, what was the scenario? I'm genuinely interested, and don't have immediate thoughts on whether or not you should change your mind.

(Look­ing for feed­back on how use­ful you think this ex­pla­na­tion and ex­tra ad­vice would be to a non rat go­ing through a Gendlin style crisis)

• Look­ing for feed­back on how use­ful you think this ex­pla­na­tion

The na­ture of this ex­pe­rience may vary be­tween peo­ple. I’d say find­ing out some­thing bad and hav­ing to deal with the im­pact of that is more com­mon/​of an is­sue than re­ject­ing the way things are (or might be), though:

ex­tra ad­vice would be to a non rat go­ing through a Gendlin style crisis)

Offhandedly, I'm not sure "rat" makes a difference here?

1. Figur­ing out what to do with new trou­bling in­for­ma­tion—mak­ing a plan and act­ing on it—can be hard. (Know­ing what to do might help peo­ple with “ac­cept­ing” their “new” re­al­ity?)

2. Just be­cause you un­der­stand part of an is­sue doesn’t mean you’ve wrapped your head around all the im­pli­ca­tions.

3. Real­iz­ing some­thing “bad” can take a while. Pro­cess­ing might not hap­pen all at once.

4. If it's taking you a long time to work something out, you might already know what the answer is, and be afraid of it.

5. This gets into an area where things vary de­pend­ing on the per­son (and the situ­a­tion) - some­times peo­ple may have more trou­ble ac­cept­ing “new nega­tive re­al­ities”, some­times peo­ple are too fast to jump to nega­tive con­clu­sions.

• Col­lect­ing some re­cent ob­ser­va­tions from some self study:

• In my freshman fall of university, I realized I was incredibly judgmental of myself and felt I should be capable of everything. I "dealt with it" and felt less suffering and self-loathing/judgment in the following months. I more or less thought I had "learned how to stop being so harsh on myself."

Now I see that I never reduced the harshness. What I did was convince my fear/judgement/loathing to use a new rubric for grading me. I did a huge systems overhaul, successfully started a shit ton of habits, and built a much better ability to focus. It was as if to say "See? Look at this awesome plan I have! Yes, I implicitly buy into the universe where it's imperative I do [all the shit]. All I ask is that you give me time. This plan is great and I'll totally be able to do [all the stuff], just not right now."

I was fused with the judge­ment enough that I wasn’t able to ques­tion it, only ne­go­ti­ate with it for bet­ter terms. The penalty for failure was still “feel like a mis­er­able piece of shit”.

I now have a much better sense of what led to this fear and judgement being built up in the first place, and that understanding has led to not doing [all the stuff] feeling more like "a less cool world than others" and not "hell, complete with eternal torment and self-loathing".

• This comment

• Some­thing I no­ticed about what I take cer­tain in­ter­nal events to mean:

Over the past 4 years I’ve had trou­ble be­ing in touch with “what I want”. I’ve made a lot of progress in the past year (a huge part was notic­ing that I’d pre­vi­ously in­ten­tion­ally cut off com­mu­ni­ca­tion with the parts of me that want).

Pre­vi­ously when I’d ask “what do I want right now?” I was ba­si­cally ask­ing, “What would be the most ed­ify­ing to my self-con­cept that is also doable right now?”

So I’ve man­aged to stop do­ing that a lot. Last week, I no­ticed that “what do I want to do right now?” or “do I want to do X right now?” turns into “am I im­me­di­ately able to think of in­ter­est­ing parts of X? Are parts of X already loaded into my mind and my brain is work­ing on it?”

Notic­ing this is su­per helpful. Ba­si­cally I was ask­ing “am I already work­ing on X in my head?” and then de­cid­ing to work on it ex­plic­itly. Con­se­quences of this: If what I was work­ing on in the morn­ing wasn’t met with hard road blocks, I’d feel that I’d want to just do that thing for the whole day, and that switch­ing would be “be­tray­ing my wants”. If I did hit a road block, or my mind was just DONE with the first task of the day, then I could switch.

On the op­po­site side, if I thought of an ac­tivity, and it didn’t im­me­di­ately boot up the rele­vant and in­ter­est­ing parts, then I’d take that as “I don’t want to do this” or “Oh, I guess that feels bor­ing right now.”

Now I can work on bet­ter pre­dict­ing “If I did start do­ing this, how much would I like it?” and I don’t have to im­plic­itly rely only on “Am I already work­ing on it?”

• Be­ing un­di­vided is cool. Peo­ple who seem to act as one mono­lithic agent are in­spiring. They get stuff done.

What can you do to try and be un­di­vided if you don’t know any of the men­tal and emo­tional moves that go into this sort of in­te­gra­tion? You can tell ev­ery­one you know, “I’m this sort of per­son!” and try su­per su­per hard to never let that iden­tity falter, and feel like a shitty mis­er­able failure when­ever it does.

How funny that I can feel like I shouldn’t be hav­ing the “prob­lem” of “feel­ing like I shouldn’t be hav­ing XYZ prob­lems”. Ha.

• You can tell ev­ery­one you know, “I’m this sort of per­son!” and try su­per su­per hard to never let that iden­tity falter, and feel like a shitty mis­er­able failure when­ever it does.

You could also just avoid the feel­ings of mis­er­able failure by re­clas­sify­ing all of your failures as not-failures and then for­get­ting about them. :-)

• More Mal­colm Ocean:

“So the aim isn’t to be pro­duc­tive all the time. It’s to be pro­duc­tive at the times when your in­ter­nal so­ciety of mind gen­er­ally agrees it would be good to be pro­duc­tive. It’s not to be able to mo­ti­vate your­self to do any­thing. It’s to be able to mo­ti­vate your­self to do any­thing it makes sense to do.”

I no­tice some of my older im­plicit and ex­plicit strate­gies were, “Well first I’ll get good at be­ing able to do any ar­bi­trary thing that I (i.e the dom­i­nant self-con­cept/​iden­tify I want to pro­ject) pick, and then I’ll work on figur­ing out what I ac­tu­ally want and care about.”

Oops.

Also, not­ing that the “then I’ll figure out what I want” was more “Well I’ve got no idea how to figure out what I want, so let’s do any­thing else!”

Oops.

• Short fram­ing on one rea­son it’s of­ten hard to re­solve dis­agree­ments:

[with some frequency] disagreements don't come from the same place that they are found. Your brain is always running inference on "what other people think". From a statement like, "I really don't think it's a good idea to homeschool", your mind might already be guessing at a disagreement you have 3 concepts away, yet only ping you with a "disagreement" alarm.

Com­bine that with a de­cent abil­ity to con­fab­u­late. You ask your­self “Why do I dis­agree about home­school­ing?” and you are given a plethora of pos­si­ble rea­sons to dis­agree and start talk­ing about those.

• True if you squint at it right: Learn­ing more about “how things work” is a jour­ney that starts at “Life is a sim­ple and easy game with ran­dom out­comes” and ends in “Life is a com­plex and thought in­ten­sive game with de­ter­minis­tic out­comes”

• Idea that I'm going to use in these short form posts: for ideas/things/threads that I don't feel are "resolved" I'm going to write "*tk*" by the most relevant sentence for easy search later. (I vaguely remember Tim Ferriss talking about using "tk" as a substitute for "do research and put the real numbers in", since "tk" is not a letter pair that shows up much in English words.)

• I’ve taken a lot of pro­gram­ming courses at uni­ver­sity, and now I’m tak­ing some more math and proof based courses. I no­tice that it feels con­sid­er­ably worse to not fully un­der­stand what’s go­ing on in Real Anal­y­sis than it did to not fully un­der­stand what was go­ing on in Data Struc­tures and Al­gorithms.

When I’m cod­ing and pul­ling on lev­ers I don’t un­der­stand (out­sourc­ing tasks to a library, or adding this line to the pro­ject be­cause, “You just have to so it works”) there’s a yuck feel­ing, but there’s also, “Well at least it’s work­ing now.”

Com­pare that to math. If I’m writ­ing a proof on an exam or a home­work, and I don’t re­ally know what I’m writ­ing (but you know, I vaguely re­mem­ber this be­ing what a proof for this sort of prob­lem looks like), it feels like a dis­gust­ing waste of time.

• The other day at lunchtime I realized I'd forgotten to make and pack a lunch. It felt odd that I only realized it right when I was about to eat and was looking through my bag for food. Tracing back, I remembered that something abnormal had happened in my morning routine, and after dealing with the pop-up, I just skipped a step in my routine and never even noticed.

One thing I’ve done semi-in­ten­tion­ally over the past few years is de­crease the amount of am­bi­ent thought that goes to lo­gis­tics. I used to con­sider it to be “use­less wor­ry­ing”, but given how a small dis­rup­tion was able to make me skip a very im­por­tant step, now I think of it more as trad­ing off effi­ciency for “ro­bust­ness”.

• Here is an abstraction of a type of disagreement:

Claim: it is com­mon for one to be more con­cerned with ques­tions like, “How should I re­spond to XYZ sys­tem?” over “How should I cre­ate an ac­cu­rate model of XYZ sys­tem?”

Let’s say the sys­tem /​ en­vi­ron­ment is so­cial in­ter­ac­tions.

Liti: Why are you sup­posed to give some­one a strong hand­shake when you meet them?

Hale: You need to give a strong handshake

Here Hale mi­s­un­der­stands Liti as ask­ing for in­for­ma­tion about the proper pro­ce­dure to perform. Really, Liti wants to know how this sys­tem came to be, why do we shake hands in the first place, and why peo­ple use it as a proxy for get­ting the gist of you.

For Hale, it can be frus­trat­ing when Liti keeps ask­ing ques­tions, be­cause they’ve ex­plained ev­ery­thing that seems im­por­tant and nec­es­sary to func­tion in a hand­shake-sce­nario.

For Liti this can be frus­trat­ing be­cause Hale isn’t an­swer­ing their ques­tion, and they feel like they aren’t be­ing heard.

• This com­ment will col­lect things that I think be­gin­ner ra­tio­nal­ists, “naive” ra­tio­nal­ists, or “old school” ra­tio­nal­ists (these dis­tinc­tions are in my head, I don’t ex­pect them to trans­late) do which don’t help them.

• You have an ex­cit­ing idea about how peo­ple could do things differ­ently. Or maybe you think of norms which if they be­came main­stream would dras­ti­cally in­crease epistemic san­ity. “If peo­ple weren’t so sen­si­tive and at­tached to their iden­tities then they could re­ceive feed­back and han­dle dis­agree­ments, al­low­ing us to more rapidly work to­wards the truth.” (ex­am­ple picked be­cause ver­sions of this stance have been dis­cussed on LW)

Some­times the ra­tio­nal­ist is think­ing “I’ve got no idea how be­com­ing more or less sen­si­tive, gain­ing a thicker or thin­ner skin, or shed­ding or gain­ing iden­tity works in hu­mans. So I’m just go­ing to black box this, tell peo­ple they should change, nega­tively re­in­force them when they don’t, and hope for the best.” (ps I don’t think ev­ery­one thinks this, though I know at least one per­son who does) (most rele­vant parts in ital­ics)

Com­ments will be con­tinued thoughts on this be­hav­ior.

• When I see this behavior, I worry that the rationalist is setting themselves up to have a blindspot when it comes to themselves being "overly sensitive" to feedback. I worry about this because it's happened to me. Not with reactions to feedback, but with other things. It's partially the failure mode of thinking that some state is beneath you, being upset and annoyed at others for being in that state, and this disdain making it hard to see when you engage in it.

K, I get that thinking a mistake is trivial doesn't automatically mean you're doomed to secretly make it forever. Still, I worry.

• The way this can feel to the person being told to change: "None of us care about how hard this is for you, nor the pain you might be feeling right now. Just change already, yeesh." (It can be true or false that the rationalist actually thinks this. I think I've seen some people playing the rationalist role in this story who explicitly endorsed communicating this sentiment.)

Now, I understand that making someone feel emotionally supported takes various levels of effort. Sometimes it might seem like the effort required is not worth the loss in pursuing the original rationality target. We could have lots of fruitful discussion about what would be fruitful norms for drawing that line. But I think another problematic thing that can happen is that in the rationalist's rush to get back on track to pursuing the important target, they intentionally or unintentionally communicate: "You aren't really in pain. Or if you are, you shouldn't be in pain / you suck or are weak for feeling pain right now." Being told you aren't in pain SUCCCKS, especially when you're in pain. Being reprimanded for being in pain SUCCCKS, especially when you're in pain.

Claim: Even if you've reached a point where it would be too costly to give the other person adequate emotional support, the least you can do is not make them think they're being gaslit about their pain or reprimanded for it.

• Er­rata.

they intentionally

or [un]in­ten­tion­ally com­mu­ni­cate:

[a] “You aren’t re­ally in pain. [b] Or if you are, you shouldn’t be in pain /​ you suck or are weak for feel­ing pain right now.” [a] Be­ing told you aren’t in pain SUCCCKS, es­pe­cially when you’re in pain.
Claim: Even if you’ve reached a point it would be to costly to give the other per­son ad­e­quate emo­tional sup­port, the least you can do is not make them think they’re be­ing [a’] gaslit about their pain.

The di­alogue refers to two pos­si­bil­ities, A and B, but only A is refer­enced af­ter­wards. (I won­der what the word for ‘tel­ling peo­ple their pain doesn’t mat­ter’ is.)

• Yeah, I only talked about A af­ter. Is the par­en­thet­i­cal rhetor­i­cal? If not I’m miss­ing the thing you want to say.

• Non-rhetorical. The spelling suggestion suggests an improvement which is largely unambiguous/style-agnostic. Suggesting the addition of a word requires choosing a word, a matter which is ambiguous/style-dependent. Sometimes writing contains grammatical errors, but when people other than the author suggest fixes, the fixes don't have the same voice. This is why I included a prompt for what word you (Hazard) would use.

For clarity, I can make less vague comments in the future. What I wanted to say, rephrased:

they intentionally or [un]intentionally communicate:
“You aren’t really in pain. Or if you are, you shouldn’t be in pain /​ you suck or are weak for feeling pain right now.” Being told you aren’t in pain SUCCCKS, especially when you’re in pain.
Claim: Even if you’ve reached a point it would be to costly to give the other person adequate emotional support, the least you can do is not make them think they’re being gaslit about[/​mocked for] their pain.

Here the [] serve one purpose—suggesting improvement, even when there’s multiple choices.

• Aaaah, I see now. Just edited to what I think fits.

• If you really had no idea… fine, can’t do much better than trying to operantly condition a person towards the end goal. In my world, getting a deep understanding of how to change is the biggest goal/​point of rationality (I’ve given myself away, I care about AI Alignment less than you do ;).

So trying to skip to the rousing debate and clash of ideas while just hoping everyone figures out how to handle it feels like leaving most of the work undone.

• Meta note: Me upvoting the comment above could make things go out of order.

operant conditioning

It could also be seen as selection—get rid of the people who aren’t X. This risks getting rid of people who might learn, which could be an issue if the goal of that place (whether it’s LW, SSC, etc.) includes learning.

An organization consisting only of people who have a PhD might be an interesting place, perhaps enabling collaboration and cutting-edge work that couldn’t be done anywhere else. But without a place where people can get a PhD, eventually there will be no such organizations.

• (Meta: the order wasn’t important, thanks for thinking about that though)

The selection part is something else I was thinking about. One of my thoughts was your “If there’s no way to train PhDs, they die out.” And the other was me being a bit skeptical of how big the pool would be right this second if we adopted a really thick-skin policy. Reflecting on that second point, I realize I’m drawing from my day-to-day distribution, and don’t have thoughts about how thick-skinned most LW people are or aren’t.

• Thought that is related to this general pattern, but not this example. Think of having an idea of an end skill that you’re excited by (doing Bayes updates IRL, successfully implementing TAPs, being swayed by “solid logical arguments”). Also imagine not having a theory of change. I personally have sometimes not noticed that there is or could be an actual theory of how to move from A to B (often because I thought I should already be able to do that), and so would use the black-box negative-reinforcement strategy on myself.

Being in that place involved being stuck for a while and feeling bad about being stuck. Progress was only made when I managed to go “Oh. There are steps to get from A to B. I can’t expect to already know them. I must focus on understanding this progression, and not on just punishing myself whenever I fail.”

• I’ve been thinking about this as a general pattern, and have specifically filled in “you should be thick skinned” to make it concrete. Here’s a thought that applies to this concrete example that doesn’t necessarily apply to the general pattern.

There’s all sorts of reasons why someone might feel hurt, put off, or upset about how someone gives them feedback or disagrees with them. One of these can be something like, “From past experience I’ve learned that someone who uses XYZ language or ABC tone of voice is saying what they said to try and be mean to me, and they will probably try to hurt and bully me in the future.”

If you are the rationalist in this situation, you’re annoyed that someone thinks you’re a bully. You aren’t a bully! And it sure would suck if they convinced other people that you were a bully. So you tell them that, duh, you aren’t trying to be mean, that this is just how you talk, and that they should trust you.

If you’re the person being told to change, you start to get even more worried (after all, this is exactly what your piece of shit older brother would do to you); this person is telling you to trust that they aren’t a bully when you have no reason to, and you’re worried they’re going to turn the bystanders against you.

Hmmmm, after writing this out the problem seems much harder to deal with than I first thought.

• Have some horrible jargon: I spit out a question or topic and ask you for your NeMRIT, your Next Most Relevant Interesting Take.

Either give your thoughts about the idea I presented as you understand it, or, if that’s boring, give the thoughts that interest you that seem conceptually closest to the idea I brought up.

• MIST*, Most Interesting Similar Take?

*This is a backronym.

• I like that because I can verb it while speaking.

“How much cattle could you fit in this lobby? You can answer directly or mist.”

• Kevin Zollman at CMU looks like he’s done a decent amount of research on group epistemology. I plan to read the deets at some point; here’s a link if anyone wants to do it first and post something about it.

• I often don’t feel like I’m “doing that much”, but find that when I list out all of the projects, activities, and thought streams going on, there’s an amount that feels like “a lot”. This has happened when reflecting on every semester in the past 2 years.

Hyp: Until I write down a list of everything I’m doing, I’m just probing my working memory for “how much stuff am I up to?” Working mem has a limit, and reliably I’m going to get only a handful of things. Anytime I’m doing more things than fit in working memory, when I stop to write them all down, I will experience “Huh, that’s more than it feels like.”

• Relatedly, the KonMari cleaning method involves taking all items of a category (e.g. all books) and putting them in one big pile before clearing them out. You often feel like you don’t own “that much stuff” and are almost always surprised by the size of the pile.

• Good description of what was happening in my head when I was experiencing the depths of the uncanny valley of rationality:

I was more genre-savvy than reality-savvy. Even when I first started to learn about biases, I was more genre-of-biases savvy than actual bias-savvy. My first contact with the sequences successfully prevented me from being okay with double-thinking, and mostly removed my ability to feel okay about guiding my life via genre-savvyness. I also hadn’t learned enough to make any sort of superior “basis” from which to act and decide. So I hit some slumps.

• Likely false semi-explicit belief that I’ve had for a while: changes in patterns of behavior and thought are “merely” a matter of conditioning/​training. Whenever it’s hard to change behavior, it’s just because the system is already in motion in a certain direction, and it takes energy/​effort to push it in a new direction.

Now, I’m more aware of some behaviors that seem to have access to some optimization power that has the goal of keeping them around. Some behaviors seem to be part of a deeper strategy run by some sub-process of me, a sub-process that can notice when wrecking-ball Conscious Me is trying to change the behavior, and starts throwing road spikes and slashing my tires. Conscious Me, having previously not had a space for this in its ontology, just went, “Man, sure is hard to change this behavior. Guess I just have to apply more juice or give up.”

• I’ve always been off-put when someone says, “free will is a delusion/​illusion”. There seems to be a hinting that one’s feelings or experiences are in some way wrong. Here’s one way to think you have fundamental free will without being ‘deluded’ → “I can imagine a system where agents have an ontologically basic ‘decision’ option, and it seems like that system would produce experiences that match up with what I experience, therefore I live in a system with fundamental free-will”. Here, it’s not that you are trapped in an illusion, it’s just that you came to a wrong conclusion based on your experience data.

What I think now is → “My experiences seem consistent with a fundamental free-will universe, and with a deterministic physics universe, and given that the free-will universe doesn’t seem super coherent, I’m going to guess I live in the deterministic physics universe.” There’s probably no sub-circuit in your brain specifically dedicated to fabricating the “experience of free-will”.