# Matt Goldenberg’s Short Form Feed

Where I write up small ideas I’ve been having that may eventually become their own top-level posts. I’ll start populating it with a few ideas I’ve posted as Twitter/Facebook thoughts.

• FEELINGS AND TRUTH SEEKING NORMS

Stephen Covey says that maturity is being able to find the balance between Courage and Consideration: courage being the desire to express yourself and say your truth, consideration being recognizing the consequences of what you say on others.

I often wish that this was common knowledge in the rationality community (or just society in general), because I see so many fights between people who are on opposite sides of the spectrum and don’t recognize the need for balance.

Courage is putting your needs first, consideration is putting someone else’s needs first, and the balance is weighing your needs and theirs equally. There are some other dichotomies that I think are pointing to a similar distinction.

Courage ------> Maturity ------> Consideration

From parenting literature:

Authoritarian ------> Authoritative ------> Permissive

From a course on confidence:

Aggressive ------> Assertive ------> Passive

From attachment theory:

Avoidant ------> Secure ------> Preoccupied

From my three types of safe spaces:

We’ll make you grow ------> Own Your Safety ------> We’ll protect you.

--------------------------------------------------------------------

Certain people may be wondering how caring about your feelings and others’ feelings relates to truth-seeking. The answer is that our feelings are based on system 1 beliefs. I suspect this isn’t strictly 100% true, but it’s a useful model, one behind Focusing, Connection Theory, Cognitive Behavioral Therapy, Internal Double Crux, and a good portion of other successful therapeutic interventions.

How this cashes out is that being able to fully express yourself is a necessary prerequisite to being able to bring all your beliefs to bear on a situation. Now sometimes, when someone is getting upset, it’s not a belief like “this thing is bad” but “I believe that believing what you’re saying is unsafe for my identity,” or some similar belief.

However, if they think it’s unsafe to express THAT belief, you end up in a situation where people have to protect themselves under the veneer of motivated reasoning: everybody is still protecting themselves, but they’re all pretending to do it in pursuit of the truth (or whatever the group says it values).

In this sense, tone arguments are vitally important to keeping clean epistemic norms. If I’m not allowed to express the belief that the way you’re phrasing things means I’m going to die horribly and live alone forever (which may be an actual system 1 belief), then I have to come up with FAKE arguments against the thing you’re saying, or leave the group where that belief of mine isn’t being respected.

Which brings me back to the definition of Maturity. If you put your need to express what you think is true, in the way you feel is true (which again, is based on your beliefs), over my feeling that I’m going to be alone forever if people take your arguments seriously, you are not only acting immature, but fostering an immature community of people who aren’t in touch with their own beliefs. What was wrong with this example:

The conversation of the group shifted at the point when Susan started to cry. From that moment, the group did not discuss the actual issue of the student community. Rather, they spent the duration of the meeting consoling Susan, reassuring her that she was not at fault.

The problem was not that the group considered Susan’s feelings, but that they put Susan’s feelings above their own beliefs, instead of on equal footing.

------------------------------------------------------------

Here are some situations I’ve encountered where I wish people knew about this definition of Maturity:

A rationalist friend of mine got upset about being repeatedly asked about a situation after he asked the other person to stop. The other rationalist friend told him, “The mature thing to do would be to control your feelings, like this other rationalist I know.” Sometimes the mature thing is to control your feelings, and sometimes it’s to express them loudly, depending on the needs of the moment.

A lover told me that they weren’t going to lie to me; they were going to tell it like it is. I said that was in general fine, but that I wanted them to consider how the way and time they told me things affected my feelings. They said no, they would express themselves when and how they wanted, and they expected me to do the same. That relationship didn’t last long.

People taking care of a friend to the detriment of their own health.

Soooo many more.

------------------------------------------------------------

Lately, I’ve been considering adding a third factor, so it’s no longer a dichotomy but a trichotomy: Courage, Consideration, and Consequences.

I know there’s a strong idea around norms in the rationality community to go full courage (expressing your true beliefs), have other people mind themselves, and ignore the consequences (decoupling norms). As I’ve said elsewhere and above, I think in actuality this leads to a community that trains people to hide certain beliefs and lie about their motives, but do it in a way that can’t be called out.

I think you should obviously think about the effects of what you say on the culture, on the world, and on the person you’re speaking to. I have beliefs about this, which cash out in me feeling very upset when people express the truth at all costs, because they’re sacrificing their terminal values for instrumental ones; but I’m punished in the rationality community for saying this, so I’m less likely to express it. So the truth-seeking norm is stifling my ability to tell the truth.

I think in general I’d love to see WAY more truth-seeking norms in society, but I think that’s because most of society is immature: they’re way too far on the side of consideration, with barely a thought for consequences and courage.

Meanwhile, some of the rationality community has gone way too far towards courage, ignoring consideration and consequences.

• I found Taber’s radical honesty workshops very useful for a framing of how to deal with telling the truth.

According to him, telling the truth is usually about choosing pain now instead of pain in the future. However, not all kinds of pain are equal. A person who practices yoga has to be able to tell the pain of stretching from the pain of hurting their joints. In the same way, a person who speaks in a radically honest way should be aware of the pain that the statement produces and be able to distinguish whether it’s healthy or not.

Courage is only valuable when it comes with the wisdom to know when the pain you are exposing yourself to is healthy and when it isn’t. The teenager who expresses courage to signal courage to his friends, without any sense of whether the risk he takes is worth it, isn’t mature.

Building up thick emotional walls and telling “the truth” without any consideration of the effects of the act of communication doesn’t lead to honest conversation in the radical honesty sense. As it turns out, it also doesn’t have much to do with real courage, as it’s still avoiding the conversations that are actually difficult.

• I like this framing, the idea of useful and non-useful pain. It seems like a similarly useful definition of maturity.

• One difference is that different types of pain come with slightly different qualia. This allows communication that’s in contact with what’s felt in the moment, which isn’t there in ideas of maturity where maturity is about following rules that certain things shouldn’t be spoken.

• Excellent comment!

I know there’s a strong idea around norms in the rationality community to go full courage (expressing your true beliefs) and have other people mind themselves and ignore the consequences (decoupling norms).

“Have other people mind themselves and ignore the consequences” comes in various degrees and flavors. In the discussions about decoupling norms I have seen (mostly in the context of Sam Harris), it appeared to me that they (decoupling norms) were treated as the opposite of “being responsible for people uncharitably misunderstanding what you are saying.” So I worry that presenting it as though courage = decoupling norms makes it harder to get your point across, out of worry that people might lump your sophisticated feedback/criticism together with some of the often not-so-sophisticated criticism directed at people like Sam Harris. No matter what one might think of Harris, to me at least he seems to come across as a lot more empathetic and circumspect and less “truth over everything else” than the rationalists whose attitude about truth-seeking’s relation to other virtues I find off-putting.

Having made this caveat, I think you’re actually right that “decoupling norms” can go too far, and that there’s a gradual spectrum from “not feeling responsible for people uncharitably misunderstanding what you are saying” to “not feeling responsible about other people’s feelings ever, unless maybe if a perfect utilitarian robot in their place would also have well-justified instrumental reasons to turn on facial expressions for being hurt or upset.” I just wanted to make clear that it’s compatible to think that decoupling norms are generally good as long as considerateness and tact also come into play. (Hopefully this would mitigate worries that the rationalist community would lose something important by trying to reward considerateness a bit more.)

• FITTING IN AND THE RATIONALITY COMMUNITY

One of my biggest learning experiences over the last few years was moving to the Bay Area and attempting to be accepted into the “Rationality Tribe.”

When I first took my CFAR workshop years ago and interacted with the people in the group, I was enamored. A group of people who were into saving the world, self-improvement, understanding their own minds, connecting with others: I felt like I had found my people.

A few short months later I moved to the Bay Area.

I had never been good at joining groups or tribes. From a very early age, I made my friend group (sometimes very small) by finding solid individuals who could connect with my particular brand of manic, ambitious, and open, and bringing them together through my own events and hangouts.

In Portland, where I was before moving to the Bay, I really felt I had a handle on this: meeting people at events (knowing there weren’t many who would connect with me in Portland), then regularly hosting my own events like dinner parties and meetups to bring together the best people.

Anyway, when I got to the Bay, I for the first time tried really hard to be accepted into existing tribes. Not only did I finally think I had found a large group of people I would fit in with, I was also operating under the assumption that I needed to be liked by all these people because they were allies in changing the world for the better.

And honestly, this made me miserable. While I did find a few solid people I really enjoyed, trying to be liked and accepted by the majority of people in the rationality community was an exercise in frustration: being popular has always run counter to my ability to express myself honestly and openly, and I kept having to bounce between the two choices.

And the thing is, I would go as far as to say many people in the rationality community experience this same frustration. They found a group that feels like it should be their tribe, but they really don’t feel a close connection to most people in it, and feel alienated as a result.

What feels real to me is that there are people in the rationality community that I like, and love. And there are people outside of the rationality community that I like and love. And it makes a lot of sense for me to stop trying to bounce from round hole to round hole, trying to see if my square peg fits in.

Instead, like always, I’ll just make my island, and invite the people who want to be there with me.

• Being a rationalist is not the only trait the individual rationalists have. Other traits may prevent you from clicking with them. There may be traits frequent in the Bay Area that are unpleasant to you.

Also, being an aspiring rationalist is not a binary thing. Some people try harder, some only join for the social experience. Assuming that the base rate of people “trying things hard” is very low, I would expect that even among people who identify as rationalists, the majority is there only for the social reasons. If you try to fit in with the group as a whole, it means you will mostly try to fit in with these people. But if you are not there primarily for social reasons, that is already one thing that will make you not fit in. (By the way, no disrespect meant here. Most people who identify as rationalists only for social reasons are very nice people.)

What you could do, in my opinion, is find a subgroup you feel comfortable with, and accept that this is the natural state of things. Also, speaking as an introvert, I can more easily connect with individuals than with groups. The group is simply a place where I can find such individuals with greater frequency, and conveniently meet more of them at the same place.

Or, as you wrote, you could create such a subgroup around yourself. Hopefully, it will be easier in the Bay Area than it would be otherwise.

• What you could do, in my opinion, is find a subgroup you feel comfortable with, and accept that this is the natural state of things.

I’m pretty pessimistic about this; it’s never worked for me before, nor did I find any existing subgroup in the rationality community where I could do this.

Or, as you wrote, you could create such a subgroup around yourself.

Definitely, but why limit it to just rationalists in that case?

• Definitely, but why limit it to just rationalists in that case?

Good point.

Not sure how well a mixed group of rationalists and non-rationalists would function. But you could create more than one group.

• Hopefully, it will be easier in the Bay Area than it would be otherwise.

Speaking as a Bay Area native,[1] I would not use the word “hopefully” here!

(One would hope to find or create a subgroup, but it would be nicer if it were possible to do this somewhere with less-insane housing prices and ambient culture. Assuming that it needs to be done here just because one has already moved here would be the sunk cost fallacy.)

1. Raised in Walnut Creek, presently in Berkeley. ↩︎

• Note that as someone who has up and moved multiple times, I can assure you that it’s possible to make friends in other cities. If you’ve never moved out of your home city, I recommend doing it at least once, for a few years, even if you move back at the end.

• I’m curious how much of this you attribute to (the following random hypotheses I just formed, as well as any other hypotheses you have):

• tribal integration being generally hard

• Bay rationalists being particularly bad at Tribal/friendship

• Bay rationalists not having enough social infrastructure, or other problems distinct from “bad at Tribal” (i.e. I think the math may just not work out for how many friends you can expect to make quickly, and how much help you’ll have making friends)

• specific (possibly subtle) differences between the culture-you-wanted and the culture-that-was-there (i.e. you pushing for changes or having opinions that ran against the status quo)

• Are you asking about my particular realization here, or this part:

And the thing is, I would go as far as to say many people in the rationality community experience this same frustration. They found a group that feels like it should be their tribe, but they really don’t feel a close connection to most people in it, and feel alienated as a result.

?

• Hmm, either, I guess. It definitely looks like there are some kinds of issues in this space that I’d like to help the Bay community improve at, but I’m not sure what kinds of improvements are tractable, and I’m trying to just get a better picture of the situation.

• Some thoughts on this:

I personally just am not really made to fit into communities; I do a much better job building my own.

I’d say that in my particular case, this issue screens off a lot of the other issues.

In the case of Bay Area rationality as a whole, I think that in general it does a fairly bad job of being a friendly community for people who want to join communities. Some of the causes of this seem to be (in no particular order):

• High levels of autism and autism spectrum disorder.

• A large gender imbalance.

• Weird status dynamics.

• And the thing is, I would go as far as to say many people in the rationality community experience this same frustration. They found a group that feels like it should be their tribe, but they really don’t feel a close connection to most people in it, and feel alienated as a result.

As someone who has considered making the Pilgrimage To The Bay for precisely that reason, and as someone who decided against it partly due to that particular concern, I thank you for giving me a data point on it.

Being a rationalist in the real world can be hard. The set of people who actually worry about saving the world, understanding their own minds, and connecting with others is pretty small. In my bubble at least, picking a random hobby, incidentally becoming friends with someone at it, incidentally getting slammed, and incidentally having an impromptu conversation has been the best-performing strategy so far in terms of success per opportunity-cost. As a result, looking from the outside at a rationalist community that cares about all these things looks like a fantastical life-changing ideal.

But, from the outside view, all the people I’ve seen who’ve aggressively targeted those ideals have gotten crushed. So I’ve adopted a strategy of Not Doing That.

(pssst: this doesn’t just apply to the rationalist community! it applies to any community oriented around values disproportionately held by individuals who have been disenfranchised by broader society in any way! there are a lot of implications here and they’re all mildly depressing!)

The Gervais Principle says that when an organization is run by Sociopaths, it inevitably devolves into infighting and politics that the sociopaths use to make decisions, and then blame on others. What this creates is a misaligned organization: people aren’t working towards the same thing, and therefore much wasted work goes towards undoing what others have done, or assigning blame to someone that isn’t yourself. Organizations with people that aren’t aligned can sometimes luck into good outcomes, especially if the most skilled players (the most skilled sociopaths) want them to. They aren’t necessarily dead players, but they’re running on borrowed time, borrowed against their usefulness to the sociopaths.

Dead organizations are those that are run by Rao’s clueless (or less commonly, by Rao’s losers, in which case you have a Bureaucracy that outlived its founder). They can’t do anything new because they’re run by people that can’t question the rulesets they’re in. As a clueless leading a dead organization, one effective strategy seems to be to accept the memes around you unquestioningly and really execute on them. The most successful people in Silicon Valley make their own rules, but the next tier are the people who take the memes of Silicon Valley and follow them unquestioningly. This is how organizations enter Mythic Mode: they believe in the culture around them so much that they channel the god of that culture, and are able to attract funding, customers, results, etc. purely through the resulting aura.

Run­ning Good Organizations

Framing the Gervais principle in terms of Kegan:

Losers: Kegan 3

Clueless: Kegan 4

Sociopaths: Kegan 4.5

To run a great organization, the first thing you need is to be led not by a sociopath, but by someone who is Kegan 5. Then you need sociopath repellent.

Short Form Feed is getting too long. I’ll write more on good organizations at some point soon.

• THE THREE TYPES OF RATIONALITY AND EFFECTIVE LEADERSHIP

The Instrumental/Epistemic split is awful. If rationality is systematized winning, all rationality is instrumental.

So then, what are the three types of Instrumental Rationality?

1. Generative Rationality

1. What mental models will best help me/my organization/my culture generate ideas that will allow us to systematically win?

2. Evaluative Rationality

1. What mental models will best help me/my organization/my culture evaluate ideas, and predict which ones will allow us to systematically win?

3. Effectuative Rationality

1. What mental models will best help me/my organization/my culture implement those ideas in an effective way that will help us to systematically win?

Evaluation typically gets lumped under “Epistemics,” Effectuation typically gets lumped under “Instrumentals,” and Generation is typically given the shaft; certainly creativity is undervalued as an explicit goal in the rationality community (although it’s implicitly valued in that people who create good ideas are given high status).

Great leaders can switch between these 3 modes at will.

If you look at Steve Jobs’ reality distortion field, it’s him being able to switch between the 3 modes at will, only using Evaluative Rationality when choosing a direction; other times he’s operating on Generative and Effectuative Rationality principles. This allows him to eventually shape reality to the vision he generated, using his effectuative principles. By using the proper types of rationality at the right time, he’s actually able to shape reality instead of merely predicting it.

If you look at Walt Disney, he used to frequently say a phrase that indicates he knew how to switch between these 3 modes: he used to talk about how he was “actually 3 different Walts: The Dreamer, The Realist, and the Spoiler.” Access to these 3 modes allowed Walt to do things that others would have looked at with their Evaluative Rationality and viewed as impossible.

You can see this with Elon Musk too. Look at the difference between how he acts with budgeting and how he acts with deadlines. When he’s budgeting, he uses his Evaluative Rationality; when he’s making deadlines, he’s using his Effectuative Rationality: he knows large visions and hard-to-reach goals actually help people take better action. You shouldn’t view his deadlines as predictions, but as motivational tools.

Are great leaders then liars? No, great leaders are Kegan 5 players who don’t just say things, but are actually operating through these 3 frameworks (to a first approximation) at any given time. When a great leader is generating, they’re not worried about evaluating their ideas. When they’re evaluating, they’re not worried about effectuating those ideas. When they’re effectuating, they’re not generating.

They’re using whatever framework can make the most MEANING out of the current situation, both now and in the long term. They’re skillfully cycling through these frames in themselves, and outputting the truth of whatever ontology they’re operating through at the given moment.

One of my worries with the talk about Simulacra Levels and how it relates to Moral Mazes is that it’s not distinguishing between Kegan 2 players (who are lying and manipulating the system for their own gain), Kegan 4.5 players (who are lying and manipulating the system because they actually have no ontology to operate through except revenge and power), and Kegan 5 players (who are viewing truth and social dynamics as objects to be manipulated because there is no truth of which tribe they’re a part of or what they believe about a specific thing; it’s all dependent on what will generate the most meaning for them/their organization/their culture).

It’s absolutely imperative that you create systems to filter out Sociopathic Kegan 4.5 lizard people if you want your organization to avoid being captured by self-interest.

At the same time, it’s absolutely imperative that you have systems that can find, develop, and promote Kegan 5 leaders who can create new systems and operate through all 3 types of rationality. Otherwise your organization’s/culture’s values won’t be able to evolve with changing situations.

I worry that framing things as Simulacra Levels doesn’t distinguish between these two types of players.

• P.S. I was thinking about writing this up more coherently as a top-level post. Is there any interest in that?

• I’d like to see it, and even more I’d like to see the tweaking and objections from people who see the levels as exclusive and incremental, rather than filters which can be simultaneously used or switched among as needed.

• What happens if the parts of your mind responsible for generative rationality, the positive optimistic part, take over without input from Evaluative and Effectuative rationality? It might look a little like Persistent Euphoric States.

• One of my worries with the talk about Simulacra Levels and how it relates to Moral Mazes is that it’s not distinguishing between Kegan 2 players (who are lying and manipulating the system for their own gain), Kegan 4.5 players (who are lying and manipulating the system because they actually have no ontology to operate through except revenge and power), and Kegan 5 players (who are viewing truth and social dynamics as objects to be manipulated because there is no truth of which tribe they’re a part of or what they believe about a specific thing; it’s all dependent on what will generate the most meaning for them/their organization/their culture).

At the same time, it’s absolutely imperative that you have systems that can find, develop, and promote Kegan 5 leaders who can create new systems and operate through all 3 types of rationality. Otherwise your organization’s/culture’s values won’t be able to evolve with changing situations.

I worry that framing things as Simulacra Levels doesn’t distinguish between these two types of players.

This is an interesting concern. I think it’s useful to distinguish these things. I’m not sure how big a concern it is for the Simulacra Levels thing to cover this case – my current worry is that the Simulacra concept is trying to do too many things. But, since it does look like Zvi is hoping to have it be a Grand Unified Theory, I agree the Grand Unified version of it should account for this sort of thing.

• Been mulling over doing a podcast in which each episode is based on acquiring a particular skillset (self-love, focus, making good investments) instead of just interviewing a particular person.

I interview a few people who have a particular skill (e.g. self-love, focus, creating cash flow businesses), and model the cognitive strategies that are common between them. Then I interview a few people who struggle a lot with that skill, and model the cognitive strategies that are common between them. Finally, I model a few people who used to be bad at the skill but are now good, and model the strategies that are common in making the switch.

The episode is cut to tell a narrative of what the skills to be acquired are, what beliefs/attitudes need to be let go of and acquired, and the process to acquire them, rather than focusing on interviewing a particular person.

If there’s enough interest, I’ll do a pilot episode. Comment with what skillset you’d love to see a pilot episode on.

Upvote if you’d have a 50% or greater chance of listening to the first episode.

• Sounds interesting!

The question is, how good are people at introspection? What if the strategies they report are not the strategies they actually use? For example, because they omit the parts that seem unimportant, but that actually make the difference. (Maybe positive or negative thinking is irrelevant, but imagining blue things is crucial.)

Or what if “the thing that brings success” causes the narrative of the cognitive strategy, but merely changing the cognitive strategy will not cause “the thing that brings success”? (People imagining blue things will be driven to succeed in love, and also to think a lot about fluffy kittens. However, thinking about fluffy kittens will not make you imagine blue things, and therefore will not bring you success in love. Even if all people successful in love report thinking about fluffy kittens a lot.)

• I think it’s probably likely that gaining knowledge in this way will have systematic biases (OK, this is probably true of all types of knowledge acquisition strategies, but you pointed out some good ones for this particular knowledge gathering technique).

Anyways, based on my own research (and practical experience over the past few months doing this sort of modelling for people with/without procrastination issues), here are some of the things you can do to reduce the bias:

• Try to inner sim using the strategy yourself and see if it works.

• Model multiple people, and find the strategies that seem to be commonalities.

• Check for congruence with people as they’re talking. Use common indicators of cached answers, like instant answers or lack of emotional charge.

• Make sure people are embodied in a particular experience as they discuss, rather than trying to “figure themselves out” from the outside.

• Use introspection tools from a variety of disciplines, like Thinking at the Edge, Coherence Therapy, etc., that can allow people to get better access to internal models.

All that being said, there will still be bias, but I think with these techniques there’s not SO much bias that it’s a useless endeavor.

• I’m doing interviews for this now.

I’ve gotten great feedback from people I’ve interviewed, saying it gave them a better understanding of themselves.

• Sounds interesting. I think it may be difficult to find a person, let alone multiple people on a given topic, who have a particular skill but are also able to articulate it and/or identify the cognitive strategies they use successfully.

Re­gard­less, I’d like to hear about how peo­ple re­duce repet­i­tive talk in their own heads—how to fo­cus on new thoughts as op­posed to old, re­cur­ring ones...if that makes sense.

• Is this ruminating, AKA repetitively going over bad memories and negative thoughts? Or is it more getting stuck with cached thoughts and not coming up with original things?

• SOCIOPATH REPELLENT FOR GOOD ORGANIZATIONS AND COMMUNITIES

The role of the Ke­gan 5 in a good or­ga­ni­za­tion:

1. Reinvent the rules and mission of the organization as the landscape changes, and frame them in a way that makes sense to the Kegan 3s and 4s.

2. No­tice when so­ciopaths are ar­bi­trag­ing the differ­ence be­tween the rules and the ter­mi­nal goals, and shut it down.

----------

Sociopaths (in the Gervais principle sense) are powerful because they’re Kegan 4.5. They know how to take the realities of Kegan 4s and 3s and deftly manipulate them, forcing them into alignment with whatever is a good reality for the Sociopath.

The most effec­tive norm I know to com­bat this be­hav­ior is Rad­i­cal Trans­parency. Rad­i­cal trans­parency is differ­ent from rad­i­cal hon­esty. Rad­i­cal hon­esty says that you should ig­nore con­sid­er­a­tion and con­se­quences in fa­vor of courage. Rad­i­cal trans­parency doesn’t make any sug­ges­tions about what you should say, only that ev­ery­one in the or­ga­ni­za­tion should be privy to things ev­ery­one says. This makes it ex­ceed­ingly hard for so­ciopaths to main­tain mul­ti­ple re­al­ities.

• One way to implement radical transparency is to do what David Ogilvy used to do: if someone used BCC in their emails too much, he would fire them. That’s an effective Sociopath repellent.

• Another way to implement radical transparency is to record all your conversations and make them available to everyone, like Bridgewater does. That’s an effective Sociopath repellent.

Once I was part of an organization that was trying to create a powerful culture. Someone had just told us about the recording-all-conversations practice, so another leader in the organization and I decided to try it in one of our conversations. We found we had to keep pausing the recording because the level of honesty we were having with each other would cause our carefully constructed narratives with everyone else to crumble. We were acting as sociopaths, and we had constructed an awful organization.

I left shortly after, but it would have been an exceedingly painful process to convert to a good organization at that time. Creating sociopath-repellent organizations is painful because most of us act like sociopaths some of the time, and operating from a place of universal common knowledge means that we have to be prepared to bring our full selves to every situation, instead of crafting ourselves to the person in front of us.

---------

The sec­ond most effec­tive norm I know to act as so­ciopath re­pel­lent is that any­one should be able to ap­ply the norms to any­one else. Here’s how I de­scribed that in a pre­vi­ous post:

Anyone should be able to apply the values to anyone else. If “Give critical feedback ASAP, and receive it well” is a value, then the CEO should be willing to take feedback from the new mail clerk. As soon as this stops being the case, the 3s go looking for their validation elsewhere, and the 4s get disillusioned.

Be­sides se­lec­tive re­al­ities, an­other way that so­ciopaths gain ad­van­tage is through se­lec­tive ap­pli­ca­tion of the norms when it suits them. By cre­at­ing norms that any­one can ap­ply to any­one else (and mak­ing them clear by pro­vid­ing the op­po­sites, as well as ex­am­ples) you pre­vent this be­hav­ior from so­ciopaths and take away one of their main weapons.

Once, I was the leader of an organization (OK, I was actually the captain of a team in high school, but same thing). I was elected leader because I exemplified the norms as well as or better than most others, and had the skills to back it up. Once I became the leader, I eventually ran into challenges with sociopathic (again in the Gervais principle sense) behavior trying to undermine my authority. Instead of leaning on the principles that had earned me the position, I leaned on my power to force people to do what I wanted. This made others lose faith in the principles and killed morale, leading to infighting and politics.

The lesson for me as a leader was to lead with influence based on moral authority, not power. But the lesson for me as an organization designer was to allow ANYBODY to enforce the norms, not just the leader, and to make this ability part of the norms themselves. This would have immediately prevented me from ruining team morale when I descended into petty behavior.

-------

The final important behavior for sociopath repellent is to notice when the instrumental values of the organization aren’t serving the terminal goals, and relentlessly redefine the core values to keep them closer to the spirit rather than the letter. This is important because Gervais Sociopaths ALSO have this ability to notice when the instrumental values aren’t serving the terminal goals, and will arbitrage this difference for their own gain. A good Kegan 5 leader will be able to point to the values, show how they’re meant to lead to the results, then lead the organization in redefining them so that sociopaths can’t get away with anything.

Occasionally, Kegan 5 leaders will have to take a look at the landscape, notice it has changed, and make substantial changes to the values or mission of an organization to keep up with the current reality.

------

The next ques­tion be­comes, if you want a long last­ing or­ga­ni­za­tion, and a skil­led Ke­gan 5 leader is nec­es­sary for a long run­ning or­ga­ni­za­tion, how do you get a steady stream of Ke­gan 5 lead­ers? This is The Suc­ces­sion Prob­lem. One an­swer is to cre­ate De­liber­ately Devel­op­men­tal or­ga­ni­za­tions, that put sub­stan­tial effort into helping their mem­bers be­come more de­vel­oped hu­mans. That will be the sub­ject of the next post in the se­quence.

• It feels to me un­wise to use the term So­ciopaths in this way be­cause it means that you lose the abil­ity to dis­t­in­guish clini­cal so­ciopaths from peo­ple who aren’t.

Distinguishing clinical sociopaths from people who aren’t is important because interaction with them is fundamentally different. Techniques for dealing with grief that were taught to prisoners helped reduce recidivism rates for the average prisoner but increased them for sociopaths.

• I’m importing the term from Venkatesh Rao and his essays on the Gervais principle. I agree this is an instance of word inflation, which is generally bad. From now on I’ll start referring to this as “Gervais Sociopaths” in my writing.

• Rad­i­cal trans­parency doesn’t make any sug­ges­tions about what you should say, only that ev­ery­one in the or­ga­ni­za­tion should be privy to things ev­ery­one says. This makes it ex­ceed­ingly hard for so­ciopaths to main­tain mul­ti­ple re­al­ities.

Seems like it could work, but I wonder what other effects it could have. For example, if someone makes a mistake, you can’t tell them discreetly; the only way to provide feedback on a minor mistake is to announce it to the entire company.

By the way, are you going to enforce this rule after working hours? What prevents two bad actors from meeting in private and agreeing to pretend to have some deniable bias in order to further their selfish goals? Like, some things are measurable, but some things are a matter of subjective judgment, and two people could agree to always have the subjective judgment colored in each other’s favor, and against their mutual enemy. In a way that even if other people notice, you could still insist that what X does simply feels right to you, and what Y does rubs you the wrong way even if you can’t explain why.

Also, peo­ple in the com­pany would be ex­posed to each other, and per­haps the vuln­er­a­bil­ity would can­cel out. But then some­one leaves, is no longer part of the com­pany, but still has all the info on the re­main­ing mem­bers. Could this info be used against the former col­leagues? The former col­leagues still have info on the one that left, but not on his new col­leagues. Also, if some­one strate­gi­cally joins only for a while, he could take care not to ex­pose him­self too much, while ev­ery­thing else would be ex­posed to him.

the CEO should be will­ing to take feed­back from the new mail clerk.

This as­sumes the new mail clerk will be a rea­son­able per­son. Some­one who doesn’t un­der­stand the CEO’s situ­a­tion or loves to cre­ate drama could use this op­por­tu­nity to give the CEO tons of use­less feed­back. And then com­plain about hypocrisy when oth­ers tell him to slow down.

• Seems like it could work, but I wonder what other effects it could have. For example, if someone makes a mistake, you can’t tell them discreetly; the only way to provide feedback on a minor mistake is to announce it to the entire company. By the way, are you going to enforce this rule after working hours?
What prevents two bad actors from meeting in private and agreeing to pretend to have some deniable bias in order to further their selfish goals? Like, some things are measurable, but some things are a matter of subjective judgment, and two people could agree to always have the subjective judgment colored in each other’s favor, and against their mutual enemy. In a way that even if other people notice, you could still insist that what X does simply feels right to you, and what Y does rubs you the wrong way even if you can’t explain why.
Also, peo­ple in the com­pany would be ex­posed to each other, and per­haps the vuln­er­a­bil­ity would can­cel out. But then some­one leaves, is no longer part of the com­pany, but still has all the info on the re­main­ing mem­bers. Could this info be used against the former col­leagues? The former col­leagues still have info on the one that left, but not on his new col­leagues. Also, if some­one strate­gi­cally joins only for a while, he could take care not to ex­pose him­self too much, while ev­ery­thing else would be ex­posed to him.

I had already up­dated away from this par­tic­u­lar tool, and this com­ment makes me up­date fur­ther. I still have the in­tu­ition that this can work well in a cul­ture that has tran­scended things like blame and shame, but for 99% of or­ga­ni­za­tions rad­i­cal trans­parency might not be the best tool.

This as­sumes the new mail clerk will be a rea­son­able per­son. Some­one who doesn’t un­der­stand the CEO’s situ­a­tion or loves to cre­ate drama could use this op­por­tu­nity to give the CEO tons of use­less feed­back. And then com­plain about hypocrisy when oth­ers tell him to slow down.

Yes, there are in fact ar­eas where this can break down. Note that ANY rule can be gamed, and the proper thing to do is to re­fer back to val­ues rather than try­ing to make ungame­able rules. In this case, the oth­ers might in fact point out that the val­ues of the or­ga­ni­za­tion are such that ev­ery­one should be open to feed­back, in­clud­ing mail clerks. If this hap­pened per­sis­tently with say 1 in ev­ery 4 peo­ple, then the or­ga­ni­za­tion would look at their hiring prac­tices to see how to re­duce that. If this hap­pened con­sis­tently with new hires, the or­ga­ni­za­tion would look at their train­ing prac­tices, etc.

The so­ciopath re­pel­lent here only works in the con­text of the other things I’ve writ­ten about good or­ga­ni­za­tions, like strongly teach­ing and in­grain­ing the val­ues and mak­ing sure de­ci­sions always point back to them, hav­ing strong vet­ting pro­ce­dures, etc. View­ing this or other posts in the se­ries as a list of tips risks tak­ing them out of con­text.

• This note won’t make sense to any­one who isn’t already fa­mil­iar with the So­ciopath frame­work in which you’re dis­cussing this, but I did want to call out that Venkat Rao (at least when he wrote the Ger­vais Prin­ci­ple) ex­plic­itly stated that so­ciopaths are amoral and has fairly clearly (es­pe­cially rel­a­tive to his other opinions) stated that he thinks hav­ing more So­ciopaths wouldn’t be a bad thing. Here are a few quotes from Mo­ral­ity, Com­pas­sion, and the So­ciopath which talk about this:

So yes, this en­tire ed­ifice I am con­struct­ing is a de­ter­minedly amoral one. Hitler would count as a so­ciopath in this sense, but so would Gandhi and Martin Luther King.

In all this, the source of the per­son­al­ity of this archetype is dis­trust of the group, so I am stick­ing to the word “so­ciopath” in this amoral sense. The fact that many read­ers have au­to­mat­i­cally con­flated the word “so­ciopath” with “evil” in fact re­flects the de­mo­niz­ing ten­den­cies of loser/​clue­less group moral­ity. The char­ac­ter­is­tic of these group moral­ities is au­to­matic dis­trust of al­ter­na­tive in­di­vi­d­ual moral­ities. The dis­trust di­rected at the so­ciopath though, is re­ac­tionary rather than in­formed.

So­ciopaths can be com­pas­sion­ate be­cause their dis­trust only ex­tends to groups. They are ca­pa­ble of un­der­stand­ing and em­pathiz­ing with in­di­vi­d­ual pain and act­ing with com­pas­sion. A so­ciopath who sets out to be com­pas­sion­ate is strongly limited by two fac­tors: the dis­trust of groups (and there­fore skep­ti­cism and dis­trust of large-scale, or­ga­nized com­pas­sion), and the firm ground­ing in re­al­ity. The sec­ond fac­tor al­lows so­ciopaths to look un­sen­ti­men­tally at all as­pects of re­al­ity, in­clud­ing the fact that ap­par­ently com­pas­sion­ate ac­tions that make you “feel good” and as­suage guilt to­day may have un­in­tended con­se­quences that ac­tu­ally cre­ate more evil in the long term. This is what makes even good so­ciopaths of­ten seem cal­lous to even those among the clue­less and losers who trust the so­ciopath’s in­ten­tions. The ap­par­ent cal­lous­ness is ac­tu­ally ev­i­dence that hard moral choices are be­ing made.

When a so­ciopath has the re­sources for (and feels the im­per­a­tive to­wards) larger scale do-good­ing, you get some­thing like Bill Gates’ be­hav­ior: a very care­ful, cau­tious, eyes-wide-open ap­proach to com­pas­sion. Gates has taken on a world-hunger sized prob­lem, but there is very lit­tle cer­e­mony or pos­tur­ing about it. It is so­ciopath com­pas­sion. Un­der­ly­ing the scale is a resi­d­ual dis­trust of the group — es­pe­cially the group in­spired by one­self — that leads to the “re­luc­tant mes­siah” effect. Noth­ing is as scary to the com­pas­sion­ate and pow­er­ful so­ciopath as the un­think­ing adu­la­tion and fol­low­ing in­spired by their ideas. I sus­pect the best among these lie awake at night wor­ry­ing that if they were to die, the headless group might mu­tate into a mon­ster driven by a frozen, un­ex­am­ined moral code. Which is why the smartest at­tempt to en­g­ineer in­sti­tu­tion­al­ized doubt, self-ex­am­i­na­tion and for­mal checks and bal­ances into any sys­tems they de­sign.

I hope my ex­pla­na­tion of the amoral­ity of the so­ciopath stance makes a re­sponse mostly un­nec­es­sary: I dis­agree with the premise that “more so­ciopaths is bad.” More peo­ple tak­ing in­di­vi­d­ual moral re­spon­si­bil­ity is a good thing. It is in a sense a differ­ent read­ing of Old Tes­ta­ment moral­ity — eat­ing the fruit of the tree of knowl­edge and learn­ing to tell good and evil apart is a good thing. An athe­ist view of the Bible must nec­es­sar­ily be alle­gor­i­cal, and at the risk of offend­ing some of you, here’s my take on the Bibli­cal tale of the Gar­den of Eden: Adam and Eve were clue­less, hav­ing ab­di­cated moral re­spon­si­bil­ity to a (pu­ta­tively good) so­ciopath: God. Then they be­came so­ciopaths in their own right. And were forced to live in an ecosys­tem that in­cluded an­other so­ciopath — the archety­pal evil one, Satan — that the good one could no longer shield them from. This makes the “de­scent” from the Gar­den of Eden an awak­en­ing into free­dom rather than a de­scent into base­ness. A good thing.

I apol­o­gize if this just seems like nit­pick­ing your ter­minol­ogy, but I’m call­ing it out be­cause I’m cu­ri­ous whether you agree with his ab­stract defi­ni­tion but dis­agree with his moral as­sess­ment of So­ciopaths, vice versa, or some­thing else en­tirely? As a con­crete ex­am­ple, I think Venkat would ar­gue that early EA was a form of So­ciopath com­pas­sion and that for the sorts of world-dent­ing things a lot LWers tend to be in­ter­ested in, So­ciopa­thy (again, as he defines it) is go­ing to be the right stance to take.

• Rao’s so­ciopaths are Ke­gan 4.5, they’re nihilis­tic and aren’t good for long last­ing or­ga­ni­za­tions be­cause they view the no­tion of or­ga­ni­za­tional goals as non­sen­si­cal. I agree that there’s no moral bent to them but if you’re try­ing to cre­ate an or­ga­ni­za­tion with a goal they’re not use­ful. In­stead, you want an or­ga­ni­za­tion that can de­velop Ke­gan 5 lead­ers.

• This doesn’t seem like it’s addressing Anlam’s question, though. Gandhi doesn’t seem nihilistic. I assume (from this quote, which was new to me) that in Kegan terms, Rao probably meant something ranging from 4.5 to 5.

• I think Rao was at Ke­gan 4.5 when he wrote the se­quence and didn’t re­al­ize Ke­gan 5 ex­isted. Rao was say­ing “There’s no moral bent” to Ke­gan 4.5 be­cause he was at the stage of re­al­iz­ing there was no such thing as morals.

At that level you can also view Kegan 4.5s as obviously correct and as the ones who end up moving society forward in interesting directions; they’re forces of creative destruction. There’s no view of Kegan 5 at that level, so you’ll mistake Kegan 5s for either Kegan 3s or other Kegan 4.5s, which may be the cause of the confusion here.

• ON SAFE SPACES

There are at least 3 types of psychological “safe spaces”:

1. We’ll pro­tect you.
We’ll make sure there’s noth­ing in the space that can ac­tively touch your wounds. This is a place to heal with plenty of sun­sh­ine and wa­ter. Any­one who’s in this space is agree­ing to be ex­tra care­ful to not poke any wounds, and the space will ac­tively ex­pel any­one who does. Most liberal arts col­leges are try­ing to achieve this sort of safety.

2. Own your safety.
There may or may not be things in this space that can actively touch your wounds. You’re expected to do what’s necessary to protect them, up to and including leaving the space if need be. You have an active right to know your own boundaries and participate or not as needed. Many self-help groups are looking to achieve this sort of safety.

3. We’ll make you grow.
This space is meant to poke at your wounds, but only to make you grow. We’ll prob­a­bly wa­ter­board the shit out of you, but we won’t let you drown. Any­one who’s too frag­ile for this en­vi­ron­ment should en­ter at their own peril. This is Bridge­wa­ter, cer­tain parts of the US Mili­tary, and other DDOs.

This is a half-formed thought that seems to ping enough of my other important concepts that it seems worth sharing. Which one you think should be the default relates a lot to how you view the world.

It re­lates to:

- Why you would choose de­cou­pling vs. con­tex­tu­al­iz­ing norms (https://​www.less­wrong.com/​…/​de­cou­pling-vs-con­tex­tu­al­is­ing-n…)

- Why you would al­low or not al­low punch bug (https://​medium.com/​@Th…/​in-defense-of-punch-bug-68fcec56cd6b)

- Whether you want to protect Clueless, Losers, or Sociopaths (https://www.ribbonfarm.com/…/the-gervais-principle-or-the-…/)

- The left/​right cul­ture war.

• How to Read a Book is the quintessen­tial how to book on gain­ing knowl­edge from a mod­ernist per­spec­tive. What would a meta­mod­ern ver­sion of HTRAB look like?

HTRAB says that the main ques­tion you should be ask­ing when read­ing a book is “Is this true?” The re­la­tion­ship you’re con­cerned with is be­tween the ma­te­rial and the real world.

But in a meta-mod­ern per­spec­tive, you want to con­sider many other re­la­tion­ships.

One of those is the three-way relationship between yourself, the material, and reality, asking questions like “What new perspectives can I gain from this?” and “How does this relate to my other models of the world?”

Another is the relationship between the author and their source material. What does this writing say about the perspective of the author? Why did they choose to write this? This is bringing in a more post-modern/critical theory perspective.

HTRAB recom­mends “Synop­tic Read­ing”—find­ing many books on the same sub­ject or that cir­cle around a spe­cific topic to get a broad overview of the topic.

A meta-mod­ern take would also look into other ways of group­ing books. What about ex­plor­ing facets of your­self through ex­plor­ing au­thors that think differ­ently and similarly to you? What about craft­ing a nar­ra­tive as you dig into in­ter­est­ing parts of each book you move through?

What other takes would a Meta-Modern ver­sion of HTRAB en­com­pass?

• There’s a pattern I’ve noticed in myself that’s quite self-destructive.

It goes some­thing like this:

• Meet new peo­ple that I like, try to hide all my flaws and be re­ally im­pres­sive, so they’ll love me and ac­cept me.

• After getting comfortable with them, noticing that they can’t really love me if they don’t love the flaws that I haven’t been showing them.

• Stop tak­ing care of my­self, down­ward spiral, so that I can see they’ll take care of me at my worst and I know they REALLY love me.

• Peo­ple jus­tifi­ably get fed up with me not tak­ing care of my­self, and re­ject me. This trig­gers the thought that I’m un­lov­able.

• Be­cause I’m not lov­able, when I meet new peo­ple, I have to hide my flaws in or­der for them to love me.

This pat­tern is de­struc­tive, and has been one of the main things hold­ing me back from be­com­ing as self-suffi­cient as I’d like. I NEED to be de­pen­dent on oth­ers to prove they love me.

What’s interesting about this pattern is how self-defeating it is. Does people not wanting to support me mean that they don’t love me? No, it just means that they don’t want to support another adult. Does hiding all my flaws help people accept me? No, it just sets me up for a crash later. Does constantly crashing from successful ventures help any of this? No, it makes it harder to seem successful, AND harder to be able to show my flaws without having people run away.

• Don’t have much else to say for now but :(

• I’ve made sig­nifi­cant progress on this by work­ing on self-love and self-trust.

• That sounds to me like the belief “I’m not lovable” causes you trouble and it would make sense to get rid of it. Transform Yourself provides one framework for how to go about it. The Lefkoe method would be a different one.

• I’ve tried both of those, as well as a host of other tools. I only recently (in the past year) developed the belief “I am lovable”, which allowed me to see this pattern. I can now belief-report both “I am lovable” and “I’m not lovable”.

• As part of the Athena Rationality Project, we’ve recently launched two new prototype apps that may be of interest to LWers.

Vir­tual Akra­sia Coach

The first is a Vir­tual Akra­sia Coach, which comes out of a few months of study­ing var­i­ous in­ter­ven­tions for Akra­sia, then test­ing the re­sult­ing ~25 habits/​skills through in­ter­net based les­sons to re­fine them. We then took the re­sult­ing flowchart for deal­ing with Akra­sia, and cre­ated a “Vir­tual Coach” that can walk you through a work ses­sion, en­sur­ing your work is fo­cused, pro­duc­tive and en­joy­able.

Right now about 10% of peo­ple find it use­ful to use in ev­ery ses­sion, 10% of peo­ple find it use­ful to use when they’re pro­cras­ti­nat­ing, and 10% of peo­ple find it use­ful to use when they’re prac­tic­ing the anti-akra­sia habits. The rest don’t find it use­ful, or think it would be use­ful but don’t tend to use it.

I know many of you may be won­der­ing how the idea of 25 skills fits in with the In­ter­nal Con­flict model of akra­sia. One way to frame the skills is that for peo­ple with chronic akra­sia, we’ve found that they tend to have cer­tain pat­terns that lead to in­ter­nal con­flict—For in­stance, one side thinks it would be good to work on some­thing, but an­other side doesn’t like un­cer­tainty. You can solve this by in­ter­nal dou­ble crux, or you can have a habit to always know your next ac­tion so there’s no un­cer­tainty. By us­ing this and the other 24 tools you can pre­vent a good por­tion of in­ter­nal con­flict from show­ing up in the first place.

Habit In­staller/​Un­in­staller App

The habit installer/uninstaller app is an attempt to create a better process for creating TAPs, using a modified Murphyjitsu process to anticipate setbacks for those TAPs.

Here’s how it works.

1. When you think of a new TAP to install, add it to your Habit Queue.

2. When the TAP reaches the top of the Habit Queue, it gives you a “Con­di­tion­ing Ses­sion”—these are a set of au­dio ses­sions that take you through pro­cesses to strengthen habits, such as vi­su­al­iza­tion, mem­ory re-con­soli­da­tion, and men­tal con­trast­ing.

3. The app will check in with you about how fre­quently you’ve been ex­e­cut­ing the TAP, us­ing a spaced rep­e­ti­tion sched­ule, giv­ing you more con­di­tion­ing ses­sions when you’re likely to fail at your habit, start­ing fre­quently then less and less fre­quently as you mas­ter the habit.

4. When the habit is 10% mas­tered, you’ll be walked through a mur­phyjitsu pro­cess, com­ing up with new habits and ac­tions that can pre­vent you from failing to in­stall this habit.

5. Any new habits you cre­ate us­ing the Mur­phyjitsu pro­cess are added to the habit queue, mak­ing the pro­cess frac­tal.

6. When a habit is 100% mas­tered, you no longer re­ceive con­di­tion­ing ses­sions or check­ins, al­low­ing you room to in­stall more TAPs.
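The six steps above can be sketched as a toy model. This is only an illustrative sketch: the class, the field names, and the mastery/interval arithmetic are my own assumptions for demonstration, not the app's actual implementation.

```python
from collections import deque
from dataclasses import dataclass


@dataclass
class Tap:
    """A trigger-action plan being installed. Mastery runs from 0.0 to 1.0."""
    name: str
    mastery: float = 0.0
    review_interval_days: int = 1  # spaced repetition: grows as the habit sticks
    murphyjitsu_done: bool = False


class HabitQueue:
    """Toy model of the install loop described in steps 1-6."""

    def __init__(self):
        self.queue = deque()   # step 1: new TAPs wait here
        self.active = []       # TAPs currently receiving check-ins

    def add(self, name: str) -> None:
        # Step 1: a newly thought-of TAP joins the queue.
        self.queue.append(Tap(name))

    def start_next(self) -> Tap:
        # Step 2: the TAP at the head gets its first conditioning session
        # and becomes active.
        tap = self.queue.popleft()
        self.active.append(tap)
        return tap

    def check_in(self, tap: Tap, executed_fraction: float) -> None:
        # Step 3: spaced-repetition check-in; check-ins are frequent at first,
        # and the interval stretches as mastery grows (hypothetical formula).
        tap.mastery = min(1.0, tap.mastery + 0.1 * executed_fraction)
        tap.review_interval_days = max(
            1, int(tap.review_interval_days * (1 + tap.mastery))
        )
        # Steps 4-5: at 10% mastery, run Murphyjitsu once; its outputs are
        # themselves new TAPs, which makes the process fractal.
        if tap.mastery >= 0.1 and not tap.murphyjitsu_done:
            tap.murphyjitsu_done = True
            for sub_habit in self.murphyjitsu(tap):
                self.add(sub_habit)
        # Step 6: fully mastered habits stop getting sessions and check-ins.
        if tap.mastery >= 1.0:
            self.active.remove(tap)

    def murphyjitsu(self, tap: Tap) -> list:
        # Placeholder: in the real process the user supplies the
        # failure-mode-preventing habits themselves.
        return [f"backup plan for {tap.name}"]
```

The fractal property falls out of steps 4-5: each Murphyjitsu pass feeds new TAPs back into the same queue, so supporting habits go through the identical conditioning/check-in cycle as the habits they protect.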

The app is definitely in prototype form, and quite ugly and hacky, but I’ve personally found it quite useful for creating new habits.

As an ex­per­i­ment for this par­tic­u­lar app based on learn­ing from our pre­vi­ous Akra­sia Coach pro­to­type, we’re charg­ing a small ($2.99) fee for try­ing the pro­to­type. The fee is ba­si­cally to get more com­mit­ted users, and will of course be re­funded if at any point you de­cide the app is not for you or too early stage. If you’re in­ter­ested in that, the link to test it out is here. Note that we’ve been get­ting re­ports of the con­fir­ma­tion emails end­ing up in spam, so be sure to check your spam folder once you sign up. Any­ways, feel free to try both of those out, and if you have any ques­tions, I’ll do my best to an­swer. • In re­sponse to a “sell LW to me” post: I think that the thing LW is try­ing to do is hard. I think that there’s a le­gi­t­i­mate split in the com­mu­nity, around the things you’re call­ing “cy­ber-bul­ly­ing”—I think there should be a place for crock­ers rules style com­bat cul­ture rea­son­ing, but I also want a com­mu­nity that is char­i­ta­ble and re­spect­ful and kind while main­tain­ing good epistemics. I also think there’s a le­gi­t­i­mate split in the com­mu­nity around the things you’re call­ing “epistem­i­cally sketchy”—I think there should be a place for post-ra­tio­nal pon­der­ings, but I also think there should be a place for not cater­ing to them.. I have an im­pres­sion that LW is try­ing to cater to both sides of the splits, and ba­si­cally end­ing up in a mid­dle ground that no one wants, driv­ing a lot of the most in­ter­est­ing posters away. That be­ing said, I’m quite im­pressed by the team run­ning LW. I’m quite im­pressed by the product that is LW. I’m also quite im­pressed by the ex­per­i­ments and di­rec­tion of LW—I per­ceive it as ac­tively get­ting bet­ter over time, and grap­pling with hard ques­tions. 
I don’t know a bet­ter place to put things to cre­ate com­mon knowl­edge about things I with were com­mon knowl­edge in the ra­tio­nal­ist com­mu­nity, and I ex­pect that things I put there will benefit from the im­prove­ments over time. I think that the mods are jus­tifi­ably be­ing very care­ful about im­pos­ing norms, be­cause split­ting the com­mu­nity is very dan­ger­ous, but I do have a small amount of faith they’ll nav­i­gate it cor­rectly—enough to make post­ing on there worth it. • I think philo­soph­i­cal bul­let bit­ing is usu­ally wrong. It can be use­ful to make a the­ory that you KNOW is wrong, and bite a bul­let in or­der to make progress on a philo­soph­i­cal prob­lem. How­ever, I think it can be quite dam­ag­ing to ac­cept a prac­ti­cal the­ory of ethics that feels prac­ti­cal and con­sis­tent to you, but breaks some of your ma­jor moral in­tu­itions. In this case I think it’s bet­ter to go “I don’t know how to come up with a con­sis­tent the­ory for this part of my ac­tions, but I’ll fol­low my gut in­stead.” Note that this is the op­po­site of be­com­ing a ro­bust agent. How­ever, the al­ter­na­tive is CREATING a ro­bust agent that is not in fact al­igned with its’ cre­ator. I’ve seen peo­ple who adopted a moral view for con­sis­tency, and now make choices that they NEVER would have en­dorsed be­fore they chose to bite bul­lets for con­sis­tency. I think this is one of my ma­jor dis­agree­ments with Rae­mon’s view of be­com­ing a ro­bust agent. • FYI I think that’d make a good post with a handy ti­tle that’d make it eas­ier to re­fer to • “There are no ex­cep­tions.” “Rules con­tain ex­cep­tions.” “How to make Rules.” “How to make Ex­cep­tions.” • Thanks for the crisp ar­tic­u­la­tion. One short an­swer is: “I, Rae­mon, do not re­ally bite bul­lets. 
What I do is some­thing more like “flag where there were bul­lets I didn’t bite, or ar­eas that I am con­fused about, and mark those on my In­ter­nal Map with a gi­ant red-pen ‘PLEASE EVALUATE LATER WHEN YOU ARE HAVE TIME AND/​OR ARE WISER’ la­bel.” One ex­am­ple of this: I de­scribe my moral in­tu­itions as “Sort of like me­dian-prefer­ence util­i­tar­i­anism, but not re­ally. Me­dian-prefer­ence-util­i­tar­i­anism seems to break slightly less of­ten in ways slightly more for­give­able than other moral the­o­ries, but not by much.” Mean­while, my de­ci­sion-mak­ing is some­thing like “95% self­ish, 5% al­tru­is­tic within the ‘sort of but not re­ally me­dian-prefer­ence-util­i­tar­ian-lens’, but I look for ways for the 95% self­ish part to get what it wants while gen­er­at­ing pos­i­tive ex­ter­nal­ities for the 5% al­tru­is­tic part.” And I en­dorse peo­ple do­ing a similarly hacky sys­tem as they figure them­selves out. (Also, while I don’t re­mem­ber ex­actly how I phrased things, I don’t ac­tu­ally think ro­bust agency is a thing peo­ple should pur­sue by de­fault. It’s some­thing that’s use­ful for cer­tain types of peo­ple who have cer­tain pre­cur­sor prop­er­ties. I tried to phrase my posts like ‘here are some rea­sons it might be bet­ter to be more ro­bustly-agen­tic, where you’ll be ex­pe­rienc­ing a trade­off if you don’t do it’, but not mak­ing the claim that the RA trade­offs are cor­rect for ev­ery­one) • On the flip­side, I think a dis­agree­ment I have with habryka (or did, a year or two ago), was some­thing like habryka say­ing: “It’s bet­ter to build an ex­plicit model, try to use the model for real, and then no­tice when it breaks, and then build a new model. 
This will cause you to un­der­perform ini­tially but even­tu­ally out­class those who were try­ing to hack to­gether var­i­ous bits of cul­tural wis­dom with­out un­der­stand­ing them.” I think I roughly agree with that state­ment of his, I just think that the cost of lots of peo­ple do­ing this at once are fairly high and that you should in­stead do some­thing like ‘start with vague cul­tural wis­dom that seems to work and slowly re­place it with more ro­bust things as you gain skills that en­able you to do so.’ • start with vague cul­tural wis­dom that seems to work and slowly re­place it with more ro­bust things as you gain skills that en­able you to do so.′ I think the thing I ac­tu­ally do here most of­ten is start with a bunch of in­com­pat­i­ble mod­els that I learned el­se­where, then try to ran­domly ap­ply them and see my re­sults. Over time I no­tice that cer­tain parts work and don’t, and that cer­tain mod­els tend to work in cer­tain situ­a­tions. Even­tu­ally, I ex­am­ine my ac­tual be­liefs on the situ­a­tion and find some­thing like “Oh, I’ve ac­tu­ally de­vel­oped my own the­ory of this that ties to­gether the best parts of all of these mod­els and my own ob­ser­va­tions.” Some­times I help this along ex­plic­itly by in­tro­spect­ing on the switch­ing rules/​similar­i­ties and differ­ences be­tween mod­els, etc. This feels re­lated to the thing that hap­pens with my moral in­tu­itions, ex­cept that there are in­ter­nal mod­els that didn’t seem to come from out­side or my own ex­pe­riences at all, ba­sic things I like and dis­like, and so some­times all these mod­els con­verge and I still have a sep­a­rate thing that’s like NOPE, still not there yet. • I think the thing I ac­tu­ally do here most of­ten is start with a bunch of in­com­pat­i­ble mod­els that I learned el­se­where, then try to ran­domly ap­ply them and see my re­sults. 
This seems ba­si­cally fine, but I mean my ad­vice to ap­ply to, like, 4 and 12 year olds who don’t re­ally un­der­stand what a model is. Any­thing model-shaped or ro­bust-shaped has to boot­strap from some­thing that’s more Cul­tural wis­dom shaped. (but, I prob­a­bly agree that you can have cul­tural wis­dom that more di­rectly boot­straps you into ‘learn to build mod­els’) • I think I was view­ing “cul­tural wis­dom’ as ba­si­cally its’ own black­box model, and in prac­tice I think this is ba­si­cally how I treat it. Nit­pick: Hu­man’s are definitely cre­at­ing mod­els at 12, and able to un­der­stand that what they’re cre­at­ing are mod­els. • How does this com­pare with em­piri­cism—speci­fi­cally say­ing “This is testable, so let’s test it.”? • I think there’s an in­fer­en­tial dis­tance step I’m miss­ing here, be­cause I’m ac­tu­ally a bit at a loss as to how to re­late my post to em­piri­cism. • Try­ing to de­scribe a par­tic­u­lar as­pect of Moloch I’m call­ing hy­per-in­duc­tivity: The ma­chine is hy­per-in­duc­tive. Your de­scrip­tions of the ma­chine are part of the ma­chine. The ma­chine wants you to es­cape, that is part of the ma­chine. The ma­chine knows that you know this. That is part of the ma­chine. Your trauma fuels the ma­chine. Heal­ing your trauma fuels the ma­chine. Trau­ma­tiz­ing your kids fuels the ma­chine. Failing to trau­ma­tize your kids fuels the ma­chine. Defect­ing on the pris­oner’s dilemma fuels the ma­chine. Tel­ling oth­ers not to defect on the pris­oner’s dilemma fuels the ma­chine. Your in­ten­tional com­mu­nity is part of the ma­chine. Your med­i­ta­tion prac­tice is part of the ma­chine. Your art in­stal­la­tion is part of the ma­chine. Your protest is part of the ma­chine. A se­lect few will es­cape the ma­chine. That is part of the ma­chine. The ma­chine will sim­plify, the ma­chine will dis­tort, the ma­chine will poli­ti­cize, the ma­chine will con­sumer­ize. Je­sus is part of the ma­chine. 
Bud­dha is part of the ma­chine. Eli­jah is part of the ma­chine. Zuess is part of the ma­chine. Your Ke­gan-5 abil­ity to see out­side the ma­chine is part of the ma­chine. Your men­tal mod­els are part of the ma­chine. Your bayesi­anism is part of the ma­chine. Your shit­posts are part of the ma­chine. The ma­chine de­vours. The ma­chine cre­ates. Your at­tempts to pro­tect your ideas from the ma­chine is part of the ma­chine. Your at­tempts to fix the ma­chine is part of the ma­chine. Your at­tempts to see that the ma­chine is an illu­sion is part of the ma­chine. Your at­tempts to use the ma­chine for your own pur­poses is part of the ma­chine. The ma­chine’s goal is to grow the ma­chine. The ma­chine does not have a goal. The ma­chine is de­signed to be anti-frag­ile. The ma­chine is not de­signed. This post is part of the ma­chine. • Re­cently went on a quest to find the best way to min­i­mize the cord clut­ter, cord man­age­ment, and charg­ing anx­iety that cre­ates a dozen triv­ial in­con­ve­niences through­out the day. Here’s what worked for me: 1. For each area that is a wire maze, I get one of these surge pro­tec­tors with 18 out­lets and 3 usb slots: https://​​amzn.to/​​33UfY7i 2. For ev­ery­where I am that I am likely to want to charge some­thing, I fill 1 −3 of the slots with these 6ft multi-charg­ing usb ca­bles (more slots if I’m likely to want to charge mul­ti­ple things). I get a cou­ple ex­tras for travel so that I can sim­ply leave them in my travel bag: https://​​amzn.to/​​33RV48T 3. For ev­ery­where that I am likely to want to plug in my lap­top, I get one of these uni­ver­sal lap­top charg­ers. Save the at­tach­ments some­where safe for fu­ture lap­tops, and leave the at­tach­ment that works for my lap­top plugged in at each place. I get an ex­tra to keep and put into my travel bag: https://​​amzn.to/​​3iwHjkf 4. I run the USB cords and lap­top cord through these nifty lit­tle cord clips, so they stay in place: https://​​amzn.to/​​31KdcPA 5. 
All the ex­cess wiring, along with the surge pro­tec­tor, is put into this cord box. I use the twisty ties with that to se­cure wires from dan­gling, and en­sure they go into the box neatly. Sud­denly, the wires are su­per clean: https://​​amzn.to/​​2PIGbxA 6. (Bonus Round) I have a charg­ing case for my phone, so the only time I have to worry about charg­ing it as night. I use this one for my Pixel 3A, but you’ll have to find one that works for your phone: https://​​amzn.to/​​31MuxHn 7. (Bonus Round 2): Work to go wire­less for things that have that op­tion, like head­phones. This will set you back$200 - \$500 (de­pend­ing on much of each thing you need) but man is it nice to not ever have to worry about find­ing a charg­ing cord, mov­ing a cord around, re­mem­ber­ing to pack your charger, trip­ping over wires or hav­ing the wire jun­gle dis­tract, etc.

• ^ Affiliate links. Feel free to search them on your own if you don’t want some of the money to go to me. If affiliate links are against the rules, let me know, mods!

• Not a mod, but personally, I’m happy to have links to products that long-term members personally use and recommend. I’d mildly prefer smile.amazon.com links over affiliate or normal links, but not enough to worry about it.

• A link can be both affiliate and smile; they stack.

• But I’m not sure how to do it with their affiliate link creator. The default link they give me is not smile.

• Was thinking a bit about how to make it real for people that the quarantine depressing the economy kills people just like the coronavirus does.

Was thinking about finding a simple, good-enough correlation between economic depression and death, then creating a “flattening the curve” graphic that shows how many deaths we would save by stopping the economic freefall at different points. Combining this with clear narratives about recession could be quite effective.

On the other hand, I think it’s quite plausible that this particular problem will take care of itself. When people begin to experience depression, will the young people who are the economic engine of the country really continue to stay home and quarantine themselves? It seems quite likely that we’ll simply become stratified for a while, where young healthy people break quarantine, and the older and immuno-compromised stay home.

But getting the timing of this right is everything. Striking the right balance between “deaths from economic freefall” and “deaths from an overloaded medical system” is a balancing act; going too far in either direction results in hundreds of thousands of unnecessary deaths.

Then I got to thinking about the effect of a depressed economy on x-risks from AI. Because the funding for AI safety is

1. Mostly in non-profits

and

2. Orders of magnitude smaller than funding for AI capabilities

it’s quite likely that the funding for AI safety is more inelastic in depressions than the funding for AI capabilities. This may answer the puzzle of why more EAs and rationalists aren’t speaking cogently about the tradeoffs between depression and lives saved from Corona—they have gone through this same train of thought, and decided that arguing for preventing a depression is an information hazard.

• It’s interesting, because you would intuitively think this, but there is actually not terrible evidence linking periods of economic growth to increased mortality.

Is non-profit funding really that inelastic in depressions?

• “It’s interesting, because you would intuitively think this, but there is actually not terrible evidence linking periods of economic growth to increased mortality.”

Wow, that is fascinating. It does make the case harder to make, because you have to start quantifying happiness/depression, etc., and trade them off against lives. Much, much harder to simplify enough to make it viral. Updates towards capitalism being horrible.

“Is non-profit funding really that inelastic in depressions?”

It probably varies quite a bit by sector, and by where funding comes from for different non-profits. In the case of AI safety I think it’s likely more inelastic than AI capabilities.

• It was brought to my attention on LessWrong that depressions actually save lives.

Which would make it much harder to build a simple “two curves to flatten” narrative out of.

• Wait—you received evidence that didn’t just refute your hypothesis, it reversed it. If you accept that, shouldn’t you also reverse your proposed remedy? Shouldn’t you now argue _IN FAVOR_ of shutting down more completely—it saves lives both directly by limiting the spread of the virus AND indirectly by slowing the economy.

(note: this is intended to be semi-humorous—my base position is that the economic causes and effects are far too complex and distributed to really predict impact on that level, or to predict what policies might improve what outcomes).

• I did update from this quite significantly.

• The four levels of listening, from some old notes:

1. Content—Do you actually understand what this person is saying? Do they understand that you understand?

2. Subtext—Do you actually understand how this person feels about what they’re saying? Do they understand that you understand?

3. Intent—Do you actually understand WHY this person is saying what they’re saying? Do they understand that you understand?

4. Paradigm—Do you actually understand what all of the above says about who this person is and how they view the world? Do they understand that you understand?

• A frequent failure mode that I have as a leader:

• Someone comes on to a new project, and makes a few suggestions.

• All of those suggestions are things we/I have thought about and discussed in detail, and we have detailed reasons why we’ve made the decisions we have.

• I tell the person those reasons.

• The person comes away feeling like the project isn’t really open to critical feedback, and their ideas won’t be heard.

I think a good policy is to just say yes to WHATEVER experiment someone who is new to the project proposes, and let them take their own lumps or be pleasantly surprised.

But, despite having known this for a bit, I always seem to forget to do this when it matters. I wonder if I can add this to our onboarding checklists.

• Some concrete updates I had around this idea, based on discussion on Facebook.

• One really relevant factor is the criticism coming from a person in authority; leaders should be extra careful about criticizing ideas. By steering newcomers towards other, less authoritative figures that you think will give valid critiques, you can avoid this failure.

• Another potential obvious pitfall here is people feeling like they were set up to fail by not having all the relevant information. The idea here is to make people feel like they have agency, obviously not to hide information.

• Even if you do the above, people can feel patronized if it seems like you’re doing this as a tactic because you think they can’t take criticism. This can be true even if giving them criticism would indeed be harmful for the team dynamic. Thus, emphasizing ways to increase agency, rather than avoiding criticism, is key here.

• This combination of failure modes seems pretty dicey.

I think I’ve encountered something similar in relationships, where my naive thought was “they’re doing something wrong/harmful and I should help them avoid it” but I eventually realized “them having an internal locus of control and not feeling like I’m out to micromanage them is way more important than any given suboptimal thing they’re doing.”

• I’ve rarely seen teams do this well and agree that your proposed approach is much better than the alternative in many cases. I’ve definitely seen cases where insiders thought something was impossible and then a new person went and did it. (I’ve been the insider, the new person who ignored the advice and succeeded, and the new person who ignored the advice and failed.)

That said, I think there’s a middle ground where you convey why you chose not to do something but also leave it open for the person to try anyway. The downside of just letting them do it without giving context is that they may fail for a silly rather than genuine reason.

What I’m suggesting could look something like the following.

“That’s an awesome idea! This is something some of us explored a bit previously and decided not to pursue at the time for X, Y, and Z reasons. However, as insiders, we are probably biased towards viewing things as hard, so it’s important for team health to have new people re-try and re-explore things we may have already thought about. You should definitely not take our reasons as final; feel free to try The Thing if you still feel like it might work or you’ll learn something by doing so.”

• I’ve had a draft sitting in my posts section for months about shallow, deep, and transfer learning. Just made a Twitter thread that gets at the basics, and figured I’d post here to gauge interest in a longer post with examples.

Love Kindle, love Evernote. But never just highlight good ideas. That’s level-one reading. Instead, use written notes and link important ideas to previous concepts you know.

Level 1: What’s important? What does this mean?

Level 2: How does this compare/contrast to previous concepts or experiences? Do I believe this?

Level 3: How is this a metaphor for seemingly unrelated concepts? How can this frame my thinking?

4 questions to get to level 2:

• How is this similar to other things I know?

• How is this different from other things I know?

• What previous experiences can I relate this to?

• In what circumstances would I use this knowledge? How would I use it?

3 questions to ask to get to level 3:

• How does it feel to view the world through this lens?

• How does this explain everything?

• What is this a metaphor for?

• I notice that this all makes perfect sense but that I don’t expect to use it that much.

Which I think is more of a failure on my part to set up my life such that I can be using my “deliberate effort” brain while reading. I mostly do reading in the evening when I’m tired (where the base situation was “using Facebook or something”, and I was trying to at least get extra value out of my dead brain state).

Currently my “deliberate effort” hours go into coding and writing. This seems probably bad, but it feels like a significant sacrifice to do less of either. Mrr.

• Note that this mostly doesn’t feel like deliberate effort anymore now that it’s a habit for me. It took maybe 3 months of it being deliberate effort, but now my mind just automatically notices something important while I’m learning and asks “what is this related to?”

I haven’t checked whether reading is more tiring than before, but I also haven’t noticed anything to that effect.

• That all makes sense—once the habit is ingrained I wouldn’t expect it to be deliberate effort per se (but it would still require me to make time for this that isn’t ‘right before I go to sleep while lying in bed’).

• I’ve had a similar conversation many times recently related to Kegan’s levels of development and constructive-developmental theory:

X: Okay, but isn’t this just pseudoscience like Myers-Briggs?

Me: No, there’s been a lot of scientific research into constructive-developmental theory.

X: Yeah, but does it have strong inter-rater reliability?

Me: Yes, it has both strong inter-rater reliability and test-retest reliability. In addition, it has strong correlation with other measures of adult development that themselves have a strong evidence base.

X: Sure, but it seems so culturally biased.

Me: There are also strong preliminary reports on cross-culture validity.

It makes me want to make a post summarizing the evidence for constructive-developmental theory so people don’t keep pattern matching it to less-valid psychometrics like Myers-Briggs.

• I’d be interested in a post that was just focused on laying out what the empirical evidence is (preferably decoupled from trying to sell me on the theory too hard).

• (A bit more detail on how I’m thinking about this. Note that this is just my own opinion, not necessarily representing any LW team consensus.)

I’m generally interested in getting LW to a state where

• it’s possible to bring up psych theories that seem wooey at first glance, but

• it’s also clearer:

• what the epistemic status of those theories is

• what timeframes are reasonable to expect that epistemic status to reach a state where we have a better sense of how true/useful the theory is

• what kind of plan exists to deprecate weird theories if they turn out to be BS

I think there are some additional constraints on developmental theories, where for social reasons I think it makes sense to lean harder in the “strong standards of evidence” direction. I think Dan Speyer’s suspicions (articulated on FB) are pretty reasonable, and whether they’re reasonable or not, they point to a fact-of-the-matter that needs to be addressed anyhow.

I’ve recently updated that developmental theories might be pretty important, but I think there are a lot of ways to use them poorly, and I wanna get it right.

• I have seen much talk on Less Wrong lately of “development stages” and “Kegan” and so forth. Naturally I am skeptical; so I do endorse any attempt to figure out if any of this stuff is worth anything. To aid in our efforts, I’d like to say a bit about what might convince me to be a little less skeptical.

A theory should explain facts; and so the very first thing we’d have to do, as investigators, is figure out if there’s anything to explain. Specifically: we would have to look at the world, observe people, examine their behavior, their patterns of thinking and interacting with other people, their professed beliefs and principles, etc., etc., and see if these fall into any sorts of patterns or clusters, such that they may be categorized according to some scheme, where some people act like this [and here we might give some broad description], while other people act like that.

(Clearly, the answer to this question would be: yes, people’s behavior obviously falls into predictable, clustered patterns. But what sort, exactly? Some work would need to be done, at least, to enumerate and describe them.)

Second, we would have to see whether these patterns that we observe may be separated, or factored, by “domain”, whereby there is one sort of pattern of clusters in how people think and act and speak, which pertains to matters of religion; and another pattern, which pertains to relationship to family; and another pattern, which pertains to preferences of consumption; etc. We would be looking for such “domains” which may be conceptually separated—regardless of whether there were any correlation between clustering patterns in one domain or another.

(Here again, the answer seems clearly to be that yes, such domains may be defined without too much difficulty. However, the intuition is weaker than for the previous question; and we are less sure that we know what it is we’re talking about; and it becomes even more important to be specific and explicit.)

Now we would ask two further questions (which might be asked in parallel). Third: does categorization of an individual into one cluster or another, in any of these domains, correlate with that individual’s category membership in categories pertaining to any observable aspect of human variation? (Such observable aspects might be: cultural groupings; gender; weight; height; age; ethnicity; socioeconomic status; hair color; various matters of physical health; or any of a variety of other ways in which people demonstrably differ.) And fourth: may the clusters in any of these domains sensibly be given a total ordering (and the domain thereby be mapped onto a linear axis of variation)?

Note the special import of this latter question. Prior to answering it, we are dealing exclusively with nominal data values. We now ask whether any of the data we have might actually be ordinal data. The answer might be “no” (for instance, you prefer apples, and I prefer oranges; this puts us in different clusters within the “fruit preferences” domain of human psychology, but in no sense may these clusters be arranged linearly).

Our fifth question (conditional on answering yes to all four of the previous questions) is this: among our observed domains of clustering, and looking in particular at those for which the data is of an ordinal nature, are there any such that the dimension of variation has any normative aspect? That is: is there a domain such that we might sensibly say that it is better to belong to clusters closer to one end of its spectrum of variation, than to belong to clusters closer to the other end? (Once more note that the answer might be “no”: for example, suppose that some people fidget a lot, while others do not fidget very much. Is it better to be a much-fidgeter than a not-much-fidgeter? Well… not really; nor the reverse; at least, not in any general way. Maybe fidgeting has some advantages, and not fidgeting has others, etc.; who knows? But overall the answer is “no, neither of these is clearly superior to the other; they’re just one of those ways in which people differ, in a normatively neutral way”.)

Finally, our sixth question is: does there exist any domain of clustering in human behavioral/psychological variation for which all of these are true:

• That its clusters may naturally be given a total order (i.e., arranged linearly);

• That this linear dimension has normative significance;

• That membership in its categories is correlated primarily with category membership pertaining to one aspect of human variation (rather than being correlated comparably with multiple such aspects);

• That in particular, membership in this domain’s clusters is correlated primarily with age.
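For what it’s worth, the fourth and sixth questions are, once cluster labels are in hand, routine statistical checks. Here is a minimal sketch with a hand-rolled Spearman rank correlation; the ages, stage labels, and the positive result are invented purely for illustration, not real measurements of any developmental theory:

```python
# Sketch: do ordinal "stage" labels track age? (All data invented.)
def rank(xs):
    """Return average ranks (1-based), assigning tied values their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over the run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of positions i..j, 1-based
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    rx, ry = rank(x), rank(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

ages = [19, 22, 25, 31, 38, 45, 52, 60]          # hypothetical subjects
stages = [2, 3, 3, 3, 4, 4, 4, 5]                # hypothetical stage labels
print(round(spearman(ages, stages), 3))          # ≈ 0.951
```

A strong positive coefficient on real data would bear only on the ordinal and age-correlation questions; it would say nothing about the normative question, which (as the footnote notes) is not purely empirical.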

Note that we have asked six (mostly[1]) empirical questions about humanity. And we have had six chances to answer in the negative.

And note also that if we answer any of these questions in the negative, then any and all theories of “moral development” (or any similar notion) are necessarily nonsense—because they purport to explain facts which (in this hypothetical scenario) we simply do not observe. Without any further investigation, we can dispose of the lot of them with extreme prejudice, because they are entirely unmotivated by the pre-theoretical facts.

So, this is what I would like to see from any proponents of Kegan’s theory, or any similar ones: a detailed, thorough, and specific examination (with plenty of examples!) of the questions I give in this comment—discussed with utter agnosticism about even the concept of “moral development”, “adult development”, or any similar thing. In short: before I consider any defense of any theory of “adult development”, I should like to be convinced of such a theory’s motivation.

1. The question of normative import is not quite empirical, but it may be operationalized by considering intersubjective judgments of normative import; that is, in any case, more or less what we are talking about in the first place. ↩︎

• Why must a developmental theory be normative? A descriptive theory that says all humans go through stages where they get less moral over time still works as an interesting descriptive theory. Similarly, there are certain developmental stages that probably aren’t normative if everyone around you is in a lower developmental stage, but a stage can still be descriptive, as the next stage most humans go through if they do indeed progress.

• I did not say anything about the theory being normative. “A descriptive theory that says all humans go through stages where they get less moral over time” is entirely consistent with what I described. Note that “moral” is a quality with normative significance—compare “get less extraverted over time” or “get less risk-seeking over time”.

• Ahh, so is the idea just that you don’t care about a specific type of development if it doesn’t have consequences that matter?

• Whether I care is hardly at issue; all the theories of “adult development” and similar clearly deal with variation along normatively significant dimensions.

If, for some reason, you propose to defend a theory of development that has no such normative aspect, then by all means remove that requirement from my list. (Kegan’s theory, however, clearly falls into the “normatively significant variation” category.)

• I think that, e.g., constructive-developmental theory studiously avoids normative claims. The level that fits best is context-dependent on the surrounding culture.

• Fair enough. Assuming that’s the case, then anyone proposing to defend that particular theory is exempt from that particular question.

• Just in case it isn’t clear, constructive-developmental theory and “Kegan’s levels of development” are two names for the same thing.

• Ah, my mistake.

However, in that case I don’t really understand what you mean. But, in any case, the rest of my original comment stands.

I look forward to any such detailed commentary on the fact-based motivation for any sort of developmental theory, from anyone who feels up to the task of providing such.

• Looks like Sarah Constantin beat me to it, although I think her lit review missed a few studies I’ve seen.

• From her post:

“In a study of West Point students, average inter-rater agreement on the Subject-Object Interview was 63%, and students developed from stage 2 to stage 3 and from stage 3 to stage 4 over their years in school.”

Are you calling that 63% strong inter-rater reliability, or are you referring to other studies?

• There are, as far as I know, 3 studies on this. She found the one with 63% agreement, whereas the previous two studies had about 80% agreement.
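(A side note on what these percentages measure: raw percent agreement doesn’t account for the agreement two raters would reach by chance, which is why psychometric work often also reports a chance-corrected statistic such as Cohen’s kappa. A toy sketch, with invented ratings, of how the two differ:)

```python
# Percent agreement vs. Cohen's kappa for two raters assigning
# stage labels. The ratings below are invented for illustration only.
from collections import Counter

def percent_agreement(a, b):
    """Fraction of items on which the two raters gave the same label."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def cohens_kappa(a, b):
    """Chance-corrected agreement: kappa = (p_o - p_e) / (1 - p_e)."""
    n = len(a)
    p_o = percent_agreement(a, b)
    ca, cb = Counter(a), Counter(b)
    # Expected agreement if both raters labeled independently
    # at their observed marginal rates.
    p_e = sum(ca[k] * cb[k] for k in set(a) | set(b)) / (n * n)
    return (p_o - p_e) / (1 - p_e)

rater1 = [3, 3, 4, 2, 3, 4, 4, 3, 2, 3]
rater2 = [3, 3, 4, 3, 3, 4, 2, 3, 2, 3]
print(percent_agreement(rater1, rater2))        # → 0.8
print(round(cohens_kappa(rater1, rater2), 3))   # → 0.667
```

So an 80% raw agreement can correspond to a noticeably lower kappa; whether the cited studies report raw agreement or a corrected statistic matters for comparing them.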

• Oh, I was looking for that recently. Apparently it predates the LessWrong integration with her blog.

• My general takeaway from that post was that in terms of psychometric validity, most developmental psychology is quite bad. Did I miss something?

This doesn’t necessarily mean the underlying concepts aren’t real, but in terms of the quality metrics that psychometrics tends to assess things on, I don’t think the evidence base is very good.

• I haven’t looked into general developmental theories like Sarah Constantin has, but I have looked into the studies on constructive-developmental theory.

My takeaway (mostly supported by her research, although she misses a lot) is that basically all the data points towards confirming the theory, with high information value on further research:

• high inter-rater reliability

• high test-retest reliability

• good correlation with age

• good correlations with age in multiple cultures

• good correlation with measures of certain types of achievement, like leadership

As Sarah points out, the biggest thing missing is evidence that the steps proceed in order with no skipping, but as far as I can tell there’s no counterevidence for that either. Also missing: replications of the other things.

Perhaps if I had gone into this looking at a bunch of other failed developmental theories, my priors would have been such that I would have described it as “not enough evidence to confirm the theory”. However, given that this is the only developmental theory I’ve looked into, my takeaway was “promising theory with preliminary support, needs more confirming research”.

• Yes, this is what I’m imagining. A simple post that just summarizes the epistemic status, potentially as the start of a sequence for later posts that use it as a building block for other ideas.

• CHANGE IS GOOD

Something I’ve been noticing lately in a lot of places is that many people have the intuition that change is bad, and that the default should be to maintain the status quo. This is epitomized by the Zvi article Change is Bad.

I tend to have the exact opposite intuition, and feel a sense of dread or foreboding when I see a lack of change in institutions or individuals I care about, and work to create that change when possible. Here are some of the models that seem to be behind this:

• Change is inevitable. The broader systems in which the systems I care about exist are always changing (the culture, the economic system, etc.). Trying to keep things static is MORE effort than going with the flow, so I don’t buy the “conserve your energy” argument. Anyone who has ever TRIED to fight the flow of broader systems within their local system knows this to be true.

• By default, change is inevitable but usually non-directed. What I mean is that, as stated above, the systems are always changing. However, many times this is a result of local actors following local incentives and acting within local constraints. Rarely are the trickle-down effects on the things you care about in any way, shape, or form directed towards making those things better for human flourishing. This means that there’s much to gain by simply working to direct and shape the change that will be happening anyway, to make it actually GOOD. This is also an argument for EAs being less shy about systemic change.

• Even if change isn’t inevitable, entropy is. Even in a relatively stable system, the default is not for things to stay the same, but for them to fall or drift apart. I’ve found that change injects a NEWNESS into the system that provides its own momentum. This is all metaphorical, but will probably hit for anyone who has run an organization that meets regularly. If you keep doing the same thing, there’s a staleness that causes people to drift away. Trying to rally the troops and prevent this drifting in the face of the staleness is like pulling teeth. However, doing something NEW in the organization, organizing a new event, a new initiative, anything, provides new energy that makes people excited to continue, and is actually EASIER than simply struggling against the staleness.

• Something I’ve been thinking about lately is the concept of Aesthetic Pathology: the idea that our traumas and beliefs can shape what we allow ourselves to see as beautiful or ugly.

Take, for instance, the broad aesthetic of order, or chaos. Depending on what we’ve been punished or admired for, we may find one or the other aesthetic beautiful.

This can then bleed into influencing our actual beliefs: we may think that someone who keeps order is “good” if we have the order aesthetic, or hold the belief that “in order to get things done we must maintain order”.

The counter to this is to begin to develop what you could call Aesthetic Nuance: recognizing that different things can be beautiful or ugly in different situations.

Chaos can in fact have its own beauty. Once we realize that, it can bleed through into our beliefs, and we can realize that in this situation, in order to act fast enough to get things done, we must embrace the beauty of chaos.

I’ve seen this show up in the Postrationality community: many were traumatized by the rationality aesthetic, and developed an Aesthetic Pathology for the unexplainable.

The aesthetic nuance here is: the ineffable is beautiful, as is the explained, from different perspectives in different situations.

Similarly, for a long time I’ve had an Aesthetic Pathology related to growth. I find stagnation abhorrent. However, as I begin to develop Aesthetic Nuance for stagnation, I can see the beauty in the eternal and unchanging.

• I tend to model aesthetics as more deeply entwined with other preferences and heuristics. Whether caused by trauma, early or late training, genetic or environmental predilection, or whatever, there are many elements of each individual’s utility function that are somewhat resistant to introspection.

Your proposed causality (the trauma and punished/rewarded framework) is generally applicable, not only to things in the aesthetic realm, but also to policy preferences, social interaction, and many other topics where “belief” mostly means “more trusted models” rather than “concrete probabilities of propositional future experiences”.

As you note, it’s not fully resistant to introspection: you can train yourself to notice and enjoy (or to notice and disprefer) things differently than in your past. Sometimes a partial explanation of the causality of your belief can help. Sometimes it’s a non-explanation just-so story, giving you permission to change. And sometimes you can change just by deciding that you’ll meet your considered goals more easily if you let go of those particular heuristics.

• Something else in the vein of “things EAs and rationalists should be paying attention to in regards to Corona.”

There’s a common failure mode in large human systems where one outlier causes us to create a rule that is a worse equilibrium. In The Personal MBA, Josh Kaufman talks about someone taking advantage of a “buy any book you want” rule that a company has, so the company makes it so that nobody can get free books anymore.

This same pattern has happened before in the US, after 9/11: we created a whole bunch of security theater that caused more suffering for everyone, and gave government way more power and way less oversight than is safe, because we over-reacted to prevent one bad event, not considering the counterfactual invisible things we would be losing.

This will happen again with Corona: things will be put in place that are maybe good at preventing pandemics (or worse, at making people think they’re safe from pandemics), but that create a million trivial inconveniences every day that add up to more strife than they’re worth.

These types of rules are very hard to repeal after the fact because of absence blindness. Someone needs to do the work of calculating the cost/benefit ratio BEFORE they get implemented, then build a narrative convincing enough to counter what seem like obvious, common-sense measures given the climate/devastation.

• When trying to browse LW keyboard-only using Vimium, there are some tasks I get blocked on because the elements aren’t marked as links or buttons. E.g., the “Read More” button is not recognized as clickable by Vimium, so I have to use the mouse.

I suspect this means that the Read More button is also not picked up by many accessibility tools. Something for the LW team to look at; it may be worth doing a general accessibility audit.

• Oh, interesting. That’s a fair point.

• # HOW TO CONSISTENTLY USE BLOCKING SOFTWARE
One of my favorite life hacks to stop procrastinating is to install website/app blocking software on your phone and computer.

However, many people have tried this method and found that they can’t do it consistently. They inevitably end up uninstalling or disabling the software a few months into using it.

In a moment of “weakness”, they uninstall/disable/remove the software, and then never end up reinstalling/enabling it for months.

The truth is, this moment of “weakness” isn’t weakness at all. It’s a natural human response to lack of autonomy, which Self-Determination Theory posits as one of the three basic human needs.

When a wall is getting in the way of our basic autonomy, our natural response is to knock down the wall.

Solutions to procrastination should never feel like you’re **coercing** yourself into doing the “right behavior” as decided by you at a particular point in time. These are unsustainable and actually create more procrastination in the long run, because we’re taking away the autonomy of our present selves.

Rather, environmental solutions to procrastination should feel more like you’re **cooperating** between your past, present, and future selves, taking input from all three selves to decide what makes the most sense in the moment.

## TURN WALLS INTO GATES
For blocking software, the solution to this issue is to turn walls into gates. Instead of making it impossible to get to the other side, you want to make a series of gates, which take some effort to get through, but allow you increasingly more freedom as you go through each successive gate.

This way you’re not limiting your freedom, but instead just allowing a short reminder from your past self saying, “Hey, just a reminder: I wasn’t so thrilled about what’s on the other side of this gate,” while allowing your present self to say, “I hear you, and this one time I’m deciding that it’s important for us to go to the other side of the gate now.”

In addition to turning walls into gates, you need to make sure your gates are robust enough that it’s not easier to just knock them down than to go through them.

If you build your gates really flimsy, it’s too easy for your present self to say, “Oh, I just want to get to the other side of the gate the fastest way possible,” while forgetting to cooperate with your past and future selves. The path of least resistance has to be to pass through the security you’ve set up at the gate.

## HOW TO CREATE ROBUST GATES WITH BLOCKING SOFTWARE

The first way to make sure you keep using blocking software is to make sure it’s hard to just knock down. Your blocking software should have robust protection against all the easy ways to knock down the gate, like:
- Removing it from startup
- Uninstalling or disabling it
- Closing it using the task manager
- Using a different browser
- Switching computer users

In addition, the software should make it easy to set up various levels of gates, with differing security to get through, for various blocking plans, like:
- Having a way to pause the plan for just a little while, which requires entering a random set of characters to access.
- Having a way to enter a few random characters to whitelist a particular site, so that, for instance, you can whitelist a particular YouTube video you need without allowing all of YouTube.
- Having a setting that will automatically re-enable plans at the beginning of a new day, so that even if you’ve decided to enter your random password and take a day to just lounge and watch Netflix, it doesn’t require any intervention to re-erect the gate.
- Having Pomodoro-style blocking plans that can continually block, then allow short breaks on a schedule.
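
The “random set of characters” gates above work through deliberate friction, not security. As a minimal sketch of the mechanism (the function names are mine for illustration; this is not how FocusMe actually implements it):

```python
import random
import string

def make_challenge(friction=20):
    # A longer challenge string makes a "heavier" gate:
    # passing through takes more deliberate effort.
    return "".join(random.choices(string.ascii_lowercase, k=friction))

def gate_passed(challenge, attempt):
    # The gate opens only if you consciously retype the challenge.
    # The point is friction, not security -- an impulsive self
    # won't bother, a deliberate self will.
    return attempt == challenge

# A session might look like:
#   print("To pause blocking for 10 minutes, type:", make_challenge())
#   then feed the user's typed input to gate_passed().
```

Varying `friction` per gate is what gives you the “series of gates” with increasing freedom behind each one.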

## WHAT SOFTWARE ALLOWS THE CREATION OF ROBUST GATES?
The only software I know of that has these features (having tried between half a dozen and a dozen different blocking tools) is FocusMe. It’s not the most user-friendly blocking software out there, but it’s incredibly good at creating robust gates that allow you to cooperate between your past/present/future selves.

Unfortunately, the Android version isn’t yet that great at creating gates, but the Mac/Windows version is incredible.

I highly recommend this blocking software if you’re working on overcoming procrastination, and learning the settings to use to create a system of gates.

It also has excellent customer service, and a “lifetime plan” which saves you from having to subscribe.

If you’re interested in the software, you can check it out using my affiliate link here: https://focusme.com/?ref=102&campaign=LW

Or, if you’re not down with the affiliate thing, use a non-affiliate link here: https://focusme.com/

I’m also interested if anyone knows of any Android blocking software that allows for the creation of robust gates!

• I think one of the biggest problems with double crux is that by finding double cruxes, it implicitly encourages us to look at the most mutually legible parts of our maps.

However, the biggest differences in frames aren’t where you think X and I think not-X; they’re where you think X and I think “What the hell do you mean by X?” or “Why do you even care about X anyway? It seems irrelevant.”

In my previous startup, this led to a situation where we were agreeing on what to do, but there were deep unaddressed differences in why we were doing it, leading to a million different decisions at the level of “how it was done.”

One of the things that excites me about Frame double crux and Aesthetic double crux is that they seem to be getting at some of these deeper issues. However, I think the entire frame of double crux is slightly broken for getting at these deeper issues, because again it’s always focused on mutual legibility, that is, “what parts of your map are also important in my map” and not “How can I understand which parts of your map are most important to you?”

• Does anyone here struggle with perfectionism? I’d love to talk to you and get an understanding of your experience.

• One of the enduring insights I’ve gotten from elityre is that different world models are often about the weight and importance of different ideas, not about how likely those ideas are to be true. For instance, The Elephant in the Brain isn’t about whether or not signalling exists; it’s about how central signalling is to the worldview of Simler and Hanson. Similarly with Antifragility and Nassim Taleb.

One way to say this is that disagreement is often about the importance of an idea, not its truth.

Another way to say this is that worldview differences are often about the centrality and interconnectedness of a node within a graph, and not its existence.

A third way to say this is that disagreements are often about tradeoffs, not truths.

I’ve used all of these when trying to point to this idea, but I’d like a single, catchy word or phrase to use, and a blog post I can point to, so that this idea can enter the rationalist lexicon. Does this blog post already exist? If not, any ideas for what to name it?

• any ideas for what to name this?

A Matter of Degree

• Yeah. This problem is especially bad in politics. I’ve been calling it “importance disagreements”, e.g. here and here. There’s no definitive blogpost; you’re welcome to write one :-)

• Note that I think we’re talking about similar things, but have slightly different framings. For instance, you say:

I’ve had similar thoughts but formulated them a bit differently. It seems to me that most people have the same bedrock values, like “pain is bad”. Some moral disagreements are based on conflicts of interest, but most are importance disagreements instead. Basically people argue like “X! - No, Y!” when X and Y are both true, but they disagree on which is more important, all the while imagining that they’re arguing about facts. You can see it over and over on the internet.

I think “value importance” disagreements definitely do happen, and Ruby talks about them in “The Rock and the Hard Place”.

However, I’m also trying to point at “fact importance” as a thing that people often assume away when trying to model each other. I’d even go as far as to say that what often seem like intractable “value importance” debates are really “hidden-assumption fact importance debates”.

For instance, we might both hold the belief that signalling affects people’s behaviors, and the belief that people are trying to achieve happiness, and we might both assign moderately high probability to each of these factors. However, unless I understand, in their world model, how MUCH they think signalling affects behaviors in comparison to seeking happiness, I’ve probably just unknowingly imported my own importance weights onto those items.

Any time you’re using heuristics (which most good thinkers are), it’s important to go up a level and model the meta-heuristics that determine how much a given heuristic affects a given situation.

• Yeah, I guess I wasn’t separating these things. A belief like “capitalists take X% of the value created by workers” can feel important both for its moral urgency and for its explanatory power; in politics that’s pretty typical.

• Depends on the value of X.

• Just wanted to quickly assert strongly that I wouldn’t characterize my post cited above as being only about value disagreements (value disagreements might even be a minority of applicable cases).

Consider Alice and Bob, who are aligned on the value of not dying. They are arguing heatedly over whether to stay where they are vs. run into the forest.

Alice: “If we stay here the axe murderer will catch us!” Bob: “If we go into the forest the wolves will eat us!!” Alice: “But don’t you see, the axe murderer is nearly here!!!”

Same value, still a rock-and-hard-place situation.

• Similarly, we might both agree on the meta-heuristics in a specific situation, but I have models that apply a heuristic to 50x the situations that you do. So even though you agree that the heuristic is true, you disagree on how important it is, because you don’t have the models to apply it to all the situations that I can.

• If you make it explicit, like “X is important” vs. “X is not important”, I have a hard time using the word “disagree” for it. Like, if A and B both have signalling as similarly central in their worldviews, saying “we agree on signalling” sounds wrong. Also, saying stuff like “I disagree with racism” sounds like a funky way to get that point across.

• I think “disagree” is not semantically accurate for the thing I’m trying to point at, but it still often feels internally like “We have a fundamental disagreement about how to view this situation”. It may make more sense to talk about “our models being in agreement” than about us being in agreement.

• RUNNING GOOD ORGANIZATIONS

Framing the Gervais principle in terms of Kegan:

Losers: Kegan 3

Clueless: Kegan 4

Sociopaths: Kegan 4.5

To run a great organization, the first thing you need is to be led not by a sociopath, but by someone who is Kegan 5. Then you need sociopath repellent.

The Gervais principle works on the fact that at the bottom, the losers see what the sociopaths are doing and opt out, finding enjoyment elsewhere. The clueless, in the middle, believe the stories the sociopaths are telling them and hold the party line. The sociopaths, at the top, are infighting and trying to use the organization to get their own needs met.

In a good organization, the people at the top are Kegan 5. They have varying rules and models in their heads for how the organization should act, and they use these as a best guess for the VALUES the organization should have, given the current environment. That is, they do their best to synthesize their varying models into a legible set of rules that will achieve their terminal goals (which, because they’re Kegan 5, aren’t pure solipsism).

The reason they need to do this distillation process is that they need something that works for the Kegan 3s and Kegan 4s. The Kegan 4s SHARE the terminal goal of the Kegan 5 (or some more simplified version of it), and believe in the values and mission of the organization as the ONE TRUE WAY to achieve that goal.

Because the rules of the organization are set up to be legible and to reward actions that actually help the terminal goal, the Kegan 3s can get their belonging and good vibes in highly legible, easy ways that are simple to understand. Notice now that the 3s, 4s, and 5s are all aligned, working towards the same ends instead of fighting each other.

Two important things about the values, mission, and rules of the organization:

1. The values must have sincere opposites that you could plausibly use for real decision-making; otherwise they don’t help the Kegan 3s, and they disillusion the Kegan 4s. You can’t run an organization or make decisions based on “being unproductive”, so “productivity” isn’t a valid value. You can make decisions that trade off short-term productivity for long-term productivity, so “move fast and break things” is a valid value, as is “move slowly and plan carefully.”

2. Anyone should be able to apply the values to anyone else. If “Give critical feedback ASAP, and receive it well” is a value, then the CEO should be willing to take feedback from the new mail clerk. As soon as this stops being the case, the 3s go looking for their validation elsewhere, and the 4s get disillusioned.

Two good examples of values: Principles by Ray Dalio, and the Scribe Culture Bible.

The role of the Kegan 5 in this organization is twofold:

1. Reinvent the rules and mission of the organization as the landscape changes, and frame them in a way that makes sense to the Kegan 3s and 4s.

2. Notice when sociopaths are arbitraging the difference between the rules and the terminal goals, and shut it down.

This Short Form Feed entry is getting too long. Next time, I’ll write more about sociopath repellent.

• WHY VIBING IS IMPORTANT

Vibing is a type of communication where the content is a medium through which you can play with the emotional rhythm. I’ve said before that the Berkeley rationalist community is missing this, and that that’s important, but I have never really explained why vibing is important.

Firstly, vibing is one of the purest forms of play. If you’re playing with others but you’re not vibing, there’s an important emotional-connection component missing from your play.

Secondly, vibing is a way to screen for people whose emotional rhythm can sync up with a group. It’s a vital screening mechanism for figuring out whether you can brainstorm well together, work well together, and get along.

Finally, the speed at which you communicate when vibing means you’re communicating almost purely from System 1, expressing your actual felt beliefs. It makes deception, both of yourself and of others, much harder, and it’s much more likely to reveal your true colors. This allows it to act as a values-screening mechanism as well.

• I’m so curious about this. I presume there isn’t, like, a video example of “vibing”? I’d love to see that.

• I don’t think vibing is that unusual a method of communication; most people have seen it and participated in it... rationalists in Berkeley just happen to be really bad at it.

Unfortunately I can’t find a video example (I don’t know what to search for), but I did write up a post that tries to explain it from the inside: https://www.lesswrong.com/posts/jXHwYYnqynhB3TAsc/what-vibing-feels-like

• Yeah, I’ve read that one, and I guess it would let someone who’s had the same experience understand what you mean, but not someone who hasn’t had the experience.

I feel similarly to when I read Valentine’s post on kensho: there is clearly something valuable, but I don’t have the slightest idea of what it is. (At least, unlike with kensho, in this case it is possible to eventually have an objective account to point to, e.g. video.)

• I can’t wrap my brain around the computational theory of consciousness.

Who decides how to interpret the computations? If I have a beach, are the lighter grains 0 and the darker grains 1? What about the smaller and bigger grains? What if I decide to use the motion of the planets to switch between these 4 interpretations?

Surely under infinite definitions of computation, there are infinite consciousnesses experiencing infinite states at any given time, just from pure chance.

• Suppose that consciousness were not a no-place function, but rather a one-place function. Specifically, whether something is conscious or not is relative to some reality. (A bit like movement relative to reference frames in physics.)

Would that help?

• Specifically, whether something is conscious or not is relative to some reality

How does this relate back to the example with the sand? Is there a sand-planet reality that’s just like ours, but in that reality the sand is conscious and we’re not?

I don’t think I quite get what a reality is in the function.

• I was thinking of the computational theory of consciousness as basically being the same thing as saying that consciousness could be substrate-independent. (E.g., you could have conscious uploads.)

I think this then leads you to ask, “If consciousness is not specific to a substrate, and it’s just a pattern, how can we ever say that something does or does not exhibit the pattern? Can’t I arbitrarily map between objects and parts of the pattern, and say that something is isomorphic to consciousness, and therefore is conscious?”

And my proposal is that maybe it makes sense to talk in terms of something like reference frames. Sure, there’s some reference frame where you could map between grains of sand and neurons, but it’s a crazy reference frame and not one that we care about.

• I don’t have a well-developed theory here. But a few related ideas:

• Simplicity matters.

• Evolution over time matters. Maybe you can map all the neurons in my head and their activations at a given moment in time to a bunch of grains of sand, but the mapping is going to fall apart at the next moment (unless you include some crazy updating rule, but that violates the simplicity requirement).

• Accessibility matters. I’m a bit hesitant on this one. I don’t want to say that someone with locked-in syndrome is not conscious. But if some mathematical object that only exists in Tegmark V is conscious (according to the previous definitions), yet there’s no way for us to interact with it, then maybe that’s less relevant.

• Ahh, I see. Yeah, I think that assigning moral weight to different properties of consciousness might be a good way forward here. But it still seems really weird that there are infinite consciousnesses operating at any given time, and it makes me a bit suspicious of the computational theory of consciousness.

• And my proposal is that maybe it makes sense to talk in terms of something like reference frames. Sure, there’s some reference frame where you could map between grains of sand and neurons, but it’s a crazy reference frame and not one that we care about.

I mean, from that reference frame, does that consciousness feel pain? If so, why do we not care about it? It seems to me that when it comes to morality, the thing that matters is the reference frame of the consciousness, and not our reference frame (I think a similar argument applies to longtermism). Maybe we want to tile the universe in such a way that there are more countably infinite pleasure patterns than pain patterns, or something.

And how does this relate back to realities? Are we saying that the sand operates in a separate reality?

• It seems to me like when it comes to morality, the thing that matters is the reference frame of the consciousness, and not our reference frame (I think a similar argument applies to longtermism).

For the way I mean reference frame, I only care about my reference frame. (Or maybe I care about other frames in proportion to how much they align with mine.) Note that this is not the same thing as egoism.

• For the way I mean reference frame, I only care about my reference frame.

How do you define reference frame?

• I don’t have a good answer for this. I’m kinda still at the vague-intuition stage rather than the clear-theory stage.

• My sense is that reference frame for you is something like “how externally similar is this entity to me”, whereas for me the thing that matters is “how similar internally is this consciousness to my consciousness”. And if the computational theory of consciousness is true, the answer is “many consciousnesses are very similar.”

Obviously this is at the level of “not even a straw man”, since you’re gesturing at vague intuitions, but based on our discussion so far this is as close as I can point to a crux.

• Hmm, it’s not so much about how similar it is to me as it is, like, whether it’s on the same plane of existence.

I mean, I guess that’s a certain kind of similarity. But I’m willing to impute moral worth to very alien kinds of consciousness, as long as it actually “makes sense” to call them a consciousness. The making-sense part is the key issue, though, and a bit underspecified.

• Here’s an analogy: is Hamlet conscious?

Well, Hamlet doesn’t really exist in our universe, so my plan for now is to not consider him a consciousness worth caring about. But when you start to deal with harder cases, whether something exists in our universe becomes a trickier question.

• But if you start to deal with harder cases, whether it exists in our universe becomes a trickier question.

To me this is simply empirical. Is the computational theory of consciousness true without reservation? Then if the computation exists in our universe, the consciousness exists. Perhaps it’s only partially true, and more complex computations, or computations that take longer to run, have less of a sense of consciousness, and therefore it exists, but

• Yeah, this has always been my worry as well.

• That’s why you need to use some sort of complexity weighting for theories like this, so that minds that are very hard to specify (given some fixed encoding of ‘the world’) are considered ‘less real’ than easy-to-specify ones.

• I think that only makes sense to do if those minds are literally “less conscious” than other minds, though. Otherwise, why would I care less about them because they’re more complex?

It does make sense to me to talk about “speed” and “number of observer moments” as part of moral weight, but “complexity of definition” only makes sense to me if those minds experience things differently than I do.

• Description complexity is the natural generalization of “speed” and “number of observer moments” to infinite universes/arbitrary embeddings of minds in those universes. It manages to scale as (the log of) the density of copies of an entity, while avoiding giving all the measure to Boltzmann brains.

• Description complexity is the natural generalization of “speed” and “number of observer moments.”

Again, this seems to be an empirical question that you can’t just assume.

• Is it an empirical question? It seems more like a philosophical question (what evidence could we see that would change our minds?).

Here’s a (not particularly rigorous) philosophical argument in favour. The substrate on which a mind is running shouldn’t affect its moral status. So we should consider all computable mappings from the world to a mind as being ‘real’. On the other hand, we want the total “number” of observer-moments in a given world to be finite (otherwise we can’t compare the values of different worlds). This suggests that we should assign a ‘weight’ to different experiences, which must be exponentially decreasing in program length for the sum to converge.
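
The convergence requirement in the comment above is Kraft’s inequality from coding theory: if each mind is picked out by a program from a prefix-free set and weighted 2^(-length), the total measure is bounded by 1, so even infinitely many embedded minds sum to a finite weight. A toy sketch (the function name is mine, for illustration only):

```python
def total_measure(program_lengths):
    # Sum of 2^-L over the lengths of a prefix-free set of programs.
    # By Kraft's inequality this sum is at most 1, so weighting each
    # mind-embedding by 2^-(length of its shortest selecting program)
    # gives a finite total measure, even over infinitely many embeddings.
    return sum(2.0 ** -length for length in program_lengths)

# Example: the binary prefix-free code {0, 10, 110, 111} has
# lengths [1, 2, 3, 3] and uses up the full measure of 1.
```

Hard-to-specify embeddings (like the beach-as-brain mapping) need long selecting programs, so they receive exponentially small weight rather than being counted equally with simple ones.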

• Is it an em­piri­cal ques­tion? It seems more like a philo­soph­i­cal ques­tion(what ev­i­dence could we see that would change our minds?)

We could talk to differ­ent minds and have them de­scribe their ex­pe­rience, and then com­pare the num­ber of ob­server mo­ments to their com­plex­ity.

• But the ques­tion then be­comes how you sam­ple these minds you are talk­ing to. Do you just go around liter­ally speak­ing to them? Clearly this will miss a lot of minds. But you can’t use com­pletely ar­bi­trary ways of ac­cess­ing them ei­ther, be­cause then you might end up pack­ing most of the ‘mind’ into your way of in­ter­fac­ing with them. Weight­ing by com­plex­ity is meant to provide a good way of sam­pling minds, that in­cludes all com­putable pat­terns with­out at­tribut­ing mind-ful­ness to noise.

(Just to clar­ify a bit, ‘com­plex­ity’ here is refer­ring to the com­plex­ity of se­lect­ing a mind given the world, not the com­plex­ity of the mind it­self. It’s meant to be a gen­er­al­iza­tion of ‘num­ber of copies’ and ‘ex­ists/​does not ex­ist’, not a prop­erty in­her­ent to the mind)

• It seems like you can get quite a bit of data with minds that you can in­ter­face with? I think it’s true that you can’t sam­ple the space of all pos­si­ble minds, but test­ing this hy­poth­e­sis on just a few seems like high VoI.

• What hy­poth­e­sis would you be “test­ing”? What I’m propos­ing is an ideal­ized ver­sion of a sam­pling pro­ce­dure that could be used to run tests, namely, sam­pling mind-like things ac­cord­ing to their de­scrip­tion com­plex­ity.

If you mean that we should check if the minds we usu­ally see in the world have low com­plex­ity, I think that already seems to be the case, in that we’re the end-re­sult of a low-com­plex­ity pro­cess start­ing from sim­ple con­di­tions, and can be pin­pointed in the world rel­a­tively sim­ply.

• What hy­poth­e­sis would you be “test­ing”? What I’m propos­ing is an ideal­ized ver­sion of a sam­pling pro­ce­dure that could be used to run tests, namely, sam­pling mind-like things ac­cord­ing to their de­scrip­tion com­plex­ity.

I mean, I’m say­ing get minds with many differ­ent com­plex­ities, figure out a way to com­mu­ni­cate with them, and ask them about their ex­pe­rience.

That would help to figure out if com­plex­ity is in­deed cor­re­lated with ob­server mo­ments.

But how you test this feels differ­ent from the ques­tion of whether or not it’s true.

• I think we’re talking about different things. I’m talking about how you would locate minds in an arbitrary computational structure (and how to count them); you’re talking about determining what’s valuable about a mind once we’ve found it.

• Here are some of the common criticisms I get. If you know me, either in person or through secondhand accounts, feel free to comment with your thoughts on which ones feel correct to you and any nuance or comments you’d like to make. Full license for this particular thread to operate on Crocker’s rules and not take my feelings into account. If you don’t feel comfortable commenting publicly, also feel free to message me with your thoughts.

• My epistemic rigor is too low.

• Too con­fi­dent in myself

• Not con­fi­dent enough in my­self.

• Too fo­cused on sta­tus.

• I don’t keep good com­pany.

• I’m too im­pul­sive.

• Too risk seek­ing.

• Had an ex­cel­lent in­ter­view with Hazard yes­ter­day break­ing down his felt sense of deal­ing with fear.

As some­one who does park­our and trick­ing, he’s had to de­velop unique mod­els that nav­i­gate the ten­sion be­tween ig­nor­ing his fear (which can lead to in­jury or death) and be­ing con­sumed by fear (mean­ing he could never prac­tice his craft).

He implicitly breaks fear down into four stages, each with its own steps:

1. Fear Alarm Bells

2. Sur­fac­ing From Water

3. Listening

4. Trans­mut­ing to Re­solve (or Back­ing off)

At each step, he has tools and techniques (again, these were implicit before we chatted) that tell you how to move forward. Just over the past day, I’ve already had a felt shift in how I relate to fear, and navigated a couple of situations differently.

If you’re in­ter­ested in learn­ing this model, I’d love to teach you! All I ask is that you let me use some of the clips from our teach­ing ses­sion in my pod­cast on the frame­work!

Let me know if you’re in­ter­ested!

• You have a pod­cast?

• In de­vel­op­ment right now.

• Couldn’t Eliezer just remove every reference to Harry Potter and publish it separately? It worked for E.L. James.

• A lot of what makes it neat is the deliberate contrast. Maybe not more than 50% of what made it neat, but it’d be a nontrivial hit. Some story beats, I think, were sort of dependent on the deliberate contrast for their narrative heft, so you’d need to redo them, which would require some craftsmanship.

So, like, sure, it’s doable. But the whole point of HPMOR was also to be something he could do for fun in his off hours with no willpower (which it eventually failed at anyhow).

• A few years ago I remember him talking about how he was thinking about writing a thriller to get money but couldn’t muster the motivation. It feels like, if that’s still a possibility, it at least makes sense to try hiring an editor to do this for a few key chapters and see how it turns out.

• Is this ques­tion based on some in­tent or plan that Eliezer has?

It’s perhaps possible to make it technically compliant with US and UK copyright law. Change the names, acknowledge the thematic (non-protected) inspiration, and rewrite maybe 1⁄10 of the scenes that are based too closely on the HP books and films.

It’s almost certainly impossible to do so without violating the wishes and goodwill of J.K. Rowling, who gives her blessing to create non-commercial derivative works. Making such a derivative work and then, when it becomes popular due to the nature of the derivation, skirting the law to sell it would be fairly evil.

• Context for “How does this relate to Eliezer’s plans?”: he was at one point talking on Facebook about writing a thriller similar to The Da Vinci Code to make a ton of money and get connections (my memory of his post; don’t quote it), but had trouble motivating himself to write a thriller.

Making such a derivative work and then, when it becomes popular due to the nature of the derivation, skirting the law to sell it would be fairly evil.

I don’t feel like you have to do this? Like, Fifty Shades of Grey doesn’t feel like it’s skirting the law with regard to Twilight; it’s a story in its own right that has thematic elements and characters derived as inspiration. I feel like blocking an edit on the grounds that it originally used Harry Potter as inspiration would be fairly evil in itself.

• I don’t actually know many specifics about 50SoG—I tried to read it at the height of its popularity, and gave up a few chapters in. I did read the first Twilight book, and didn’t see that much similarity in the parts of 50SoG I got through. I never looked at the fanfic version of 50SoG. As such, I don’t know how clearly derivative the fanfic was, nor how much was changed for the published novel. My guess is that these factors point to 50SoG being vaguely inspired by Twilight, where HPMoR is clearly derived from the HP books and films.

Note that my moral view is not bind­ing—I think it’d be wrong to use some­one’s per­mis­sion to make non­com­mer­cial deriva­tions, then change the min­i­mal amount to make money. That’s based on the sug­ges­tion of fairly min­i­mal rewrit­ing to change names and re­place too-ob­vi­ous refer­ences, and my in­ter­pre­ta­tion of J.K.R.’s wishes.

If it’s a much deeper rewrite, in­clud­ing chang­ing the ba­sic plot to some­thing other than a dark lord re­turn­ing based on prophe­cies about a con­nec­tion to a young boy hero, who turns out to be pos­sess­ing a teacher at a school that’s silly and amus­ing in some very spe­cific ways, it’s not prob­le­matic at all. And it’s not HPMoR at that point ei­ther—it’s some other pos­si­bly-mag­i­cal story that uses some of the non-Rowl­ing con­cepts from HPMoR.

• “Medium En­gage­ment Ac­tivi­ties” are the death of cul­ture cre­ation.

Expecting someone to show up every week for an event of an hour or more that helps shape your culture, or requiring them to follow a dress code, is great for culture creation. Large commitments are good in the early stages.

Removing trivial inconveniences to following your values and rules is also great for building culture: things that require little or no engagement but still help shape group cohesion. Design does a lot of the work here. No-commitment tools for shaping culture are great during the early stages.

But medium-commitment tools are awful. A series of little things that each take 5–50 minutes a week to work on is death to early-stage cultures: death by a thousand cuts of things whose immediate cost members can see clearly, and whose immediate benefit they can’t.

I don’t know exactly why this is, and haven’t really mapped out what’s behind the intuition. It’s something about the benefits of building identity vs. the time required: the curve is U-shaped, and the tails are a much more effective tradeoff than the middle.

• A strong vi­sion can cover for a lot of in­ter­nal ten­sion—the ex­ter­nal ten­sion be­tween your vi­sion and what you want can hide in­ter­nal ten­sion re­lated to not meet­ing all your needs.

But, it can’t cover for­ever—even­tu­ally, your other needs get louder and louder un­til they drown out your vi­sion, lead­ing to a crash in pro­duc­tivity.

It can help to know what your leading indicators for ignoring your needs are… that way, you can catch a crash before it happens, and make sure you resolve that internal tension. For me, it’s my weight creeping up, because I use food as a way to ignore negative emotions.

So, when I see my weight creep­ing up over the course of a few days, I take time out to pro­cess emo­tions, take care of my­self, and see what needs I’ve been ig­nor­ing. 1 hour of at­tend­ing to other needs can save me weeks of to­tal burnout.

• 3 Pos­si­bil­ities for a Less­wrong talk:

1. In this short­form, I show how the at­trac­tor for a cult (Ke­gan 4.5 lead­ers) is very easy to con­fuse with the at­trac­tor for a great cul­ture (Ke­gan 5 lead­ers). This is a pat­tern I’ve no­ticed a bunch when look­ing at good cul­tures, and I’d love to do a talk called “Cult is the root of cul­ture” where I show a bunch of in­stances of this.

2. I’ve been continuing to explore the idea of aesthetic bias in beliefs and the concept of aesthetic pathology. I’d love to do a talk exploring some of those ideas and giving examples.

3. The thing I’ve been spend­ing most of my time on is teach­ing how to over­come akra­sia. I have a work­shop that shows ex­pe­ri­en­tially what it’s like to act from a non-co­er­cive place (that goes over much of the ma­te­rial in this com­ment) and I’d love to less­wrongify it and run some ex­er­cises dur­ing the talk.

Which of these would you be most in­ter­ested in as a talk?

• Ehh, I re­al­ized that I don’t un­der­stand the first two well enough to give a good 5 minute talk, and the last one can’t be given ex­pe­ri­en­tially in 5 min­utes. Will in­stead choose a topic that’s more trans­par­ent to me and con­cep­tual in na­ture.

• I’m most in­ter­ested in num­ber 1

• Grudg­ing­ness is the pro­duc­tivity kil­ler.

We’ve no­ticed all our choices. We’ve brain­stormed bet­ter op­tions. We’ve de­cided that this is the best course of ac­tion.

And yet, it’s an awful choice. Reality forced us into a bad situation, and we hold a grudge against it.

But we kick, and scream, and moan about hav­ing to do it. We can do it, but we’re not gonna like it! We can do it, but by god are we gonna ex­pend en­ergy show­ing our­selves how much we don’t like it.

And so we sit there, push­ing against that which can not be moved.

Hold­ing on to our grudge against re­al­ity.

Should­ing our­selves in the foot.

And it’s at this point we can ask our­selves… is this serv­ing us?

Some­times, the an­swer is yes. This grudge con­nects us to our val­ues, or pro­tects us from a truth we’re not equipped to han­dle.

But of­ten… far far more of­ten, the an­swer is no.

All that kick­ing against a brick wall has done for us is to give us a stubbed toe.

So we stare at this grudge, and we thank this grudge for con­nect­ing us to our val­ues. And we ask our­selves, with an open heart:

Is it time to let this go?

• One step deeper into the maze—why fight it? Why bother to re­mem­ber that this is cur­rently nec­es­sary to meet our im­me­di­ate goals, but also con­tra­dicts our over­all prefer­ences?

(note: I gen­er­ally agree, just giv­ing a coun­ter­point. I think the key is that let­ting-go is tem­po­rary. You can ac­cept it and move on, but you should have a trig­ger or date to re-ex­am­ine the grudge and de­ter­mine if it’s time to do some­thing about it.)

• *Vir­tual Pro­cras­ti­na­tion Coach*

For the past few months I’ve been doing a deep dive into procrastination, trying to find the cognitive strategies that people who have no trouble with procrastination use to get things done.
--------------
This deep dive has in­volved:

* Introspecting on my own cognitive strategies
* Reading the self-help literature and mining cognitive strategies
* Scouring the scientific literature for reviews and meta-studies related to overcoming procrastination, and mining the cognitive strategies
* Interviewing people who have trouble with procrastination, and people who have overcome it, and modelling their cognitive strategies

I then took these ~18 cog­ni­tive strate­gies, split them into 7 les­sons, and spent ~50 hours tak­ing peo­ple in­di­vi­d­u­ally through the les­sons and see­ing what worked, what didn’t and what was miss­ing.

This resulted in me doing another round of research, adding a whole new set of cognitive strategies (for a grand total of 25 cognitive strategies taught over the course of 10 lessons), and spending another ~50 hours of 1-on-1 lessons testing these strategies to see what worked for people.
-------------------------------------
The first piece of more scalable testing is now ready. I used Spencer Greenberg’s GuidedTrack tool to create a “virtual coach” for overcoming procrastination. I suspect it won’t be very useful without the lessons (I’m writing up a LW sequence with those), but I’m nevertheless still looking for a few people who haven’t taken the lessons to test it out and see if it’s helpful.

The virtual coach walks you through all the parts of a work session and holds your hand. If you feel unmotivated, indecisive, or overwhelmed, it’s there to help. If you feel ambiguity, perfectionism, or fear of failure, it’s there to help.

If you’re in­ter­ested in alpha test­ing, let me know!

• STEELMANNING KEGAN 3 (OR, KEGAN 3, TO THE TUNE OF KEGAN 4)

Ruby recently made an excellent post called Causal Reality vs. Social Reality. One way to frame what he was writing: he was pointing out that 58% of the population is at Kegan’s stage 3, and a lot of what rationality is doing is trying to move people to stage 4.

I made a reply to that (knowing it might not be that well received) essentially trying to steelman Kegan 3 from a Kegan 4 perspective. That is: is there a valid systemic reason, based on long-term goals, to act as if all you care about is how you make yourself and others feel?

Here’s my slightly ed­ited at­tempt:

The thing we ac­tu­ally care about… Is it how ev­ery­one feels? Peo­ple be­ing happy and con­tent and get­ting along, love and mean­ing—it seems to be based in large part on the fun­da­men­tal ques­tion of how peo­ple feel about other peo­ple, how we get along—the ques­tions that are asked in Ke­gan 3.

It might be understandable if you’re a person who cares about a world where people love and cherish each other and are able to pursue meaning. You might think that the near-term effects of how people think, feel, and relate affect the long term of how people think, feel, and relate as well. If you don’t have a lot of power, you might even subconsciously think that the flow-through effects of your ability to affect how people around you feel are your best chance at affecting the “ultimate goal” of everyone getting along.

And when you run into someone who (in your mind) doesn’t care about the reality of how their actions affect the harmony of the group, and instead is focused on weird rules that discard those obvious effects, you might think them cold and calculating, and, importantly, in opposition to that ultimate goal.

Then you might write up a post about how, sure, rules and Kegan 4 and principles of action are important sometimes, but the important thing is just being good and kind to other people, and things will work themselves out. That is, Kegan 3 actions are actually the best way to achieve Kegan 4 goals.

• The thing we ac­tu­ally care about… Is it how ev­ery­one feels?

I hap­pen to roughly agree with this but be warned that there are peo­ple who get off this train right about here.

• *raises hand and gets off the train*

• You strike me as someone very heaven-focused, so I am surprised you got off the train right about here.

I wonder: if you expand the concept of “how everyone feels” to include eudaimonic happiness (that is, not just how they feel, but second-order ideas of how they would feel about the meaningfulness/rightness of their own feelings, and how you feel about the meaningfulness/rightness of their actions), do you still get off the train?

• Yeah, it seems pretty plau­si­ble that I care about things that don’t have any ex­pe­rience. It seems likely that I pre­fer a uni­verse tiled with amaz­ing beau­tiful paint­ings but no con­scious ob­servers to a uni­verse filled with literal moun­tains of fe­ces but no con­scious ob­servers. I don’t re­ally know how much I pre­fer one over the other, but if you give me the choice be­tween the two I would definitely choose the first one.

• There are a lot of underlying models here around the “Heaven and Enlightenment” dichotomy that I’ve been playing with. That is, when introspecting, people either seem to want to get to a point where everyone feels great, or to a point where they can feel great/ok/at peace with everyone not feeling great. (Some people are in the middle, and for instance want to create heaven with their proximate tribe or family, and enlightenment around the suffering of the broader world.)

One of the things I found out recently that makes me put more weight on the heaven-and-enlightenment dichotomy is that research into Kegan stage 5 has found there are two types of stage-5 people: those who get really interested in other people, how they feel, and how to help them do better (heaven), and those who get really interested in their own experience, their own body, and what’s going on internally (enlightenment). That is, when you’ve discarded all your instrumental values and ontologies as fluid, contextual, and open to change and growth, what’s left is your terminal values: either heaven or enlightenment.

• I re­sponded to your origi­nal com­ment here. I don’t know the Ke­gan types well enough (per­haps I should) to say whether that’s a fram­ing I agree with or not.

• Are there big take­aways from Mo­ral Mazes that you don’t get from The Ger­vais Prin­ci­ple?

• My mem­ory of The Ger­vais Prin­ci­ple is that it gets wrapped up in lots of fairly spe­cific mod­els of how peo­ple in­ter­act, whereas Mo­ral Mazes has a more diffuse “you are con­tam­i­nated by in­ter­act­ing with the sys­tem” vibe. So in the end maybe pretty similar, but with differ­ent em­phases.

• Having trouble being decisive? It turns out there are only two simple mindset shifts that separate decisive people from indecisive people.

In­de­ci­sive peo­ple view de­ci­sions as a fork in the road. They can stand there for­ever, try­ing to de­cide which way to go.

Decisive people view decisions more like a train switch that will change the direction of the train they’re already inside. If they don’t pull the lever in time, the decision to stay on their current path is made for them.

When in­de­ci­sive peo­ple try out this metaphor, some­times they dis­cover some­thing… think­ing of de­ci­sions like this is re­ally stress­ful!

This brings us to the sec­ond big mind­set shift. In­de­ci­sive peo­ple view all de­ci­sions as the same! De­ci­sive peo­ple don’t do that.

Instead, they bucket their decisions. They have in mind a clear picture of the things they value and their vision for the future. If a decision doesn’t affect those things, they make it quickly and intuitively. Only if it does do they put more time into the decision.

This al­lows them to not sweat most de­ci­sions, and makes de­ci­sion mak­ing much less stress­ful. It also al­lows them to put more time and effort into the truly im­por­tant de­ci­sions, be­cause they’re not wast­ing time on the de­ci­sions that don’t mat­ter.

• The things that I’m most qual­ified to teach are the things that I’m worst at.

Take procrastination, for example. My particular genetic and cultural makeup ensured that focus would never be a strong suit. As a result, I went through basically every problem that someone who struggles with procrastination goes through. I ran into a ton of issues surrounding it, attacked it from a variety of angles, and got to a point where I can ship cool projects and do great work. Probably average or slightly above in productivity, but functional.

Meanwhile, when I teach overcoming procrastination, I can truly speak to the path you need to take to learn the material. When a student runs into an issue, it’s rare that it’s an issue I haven’t overcome myself (usually multiple times, in different forms), and I can give excellent advice on a path to success.

Mean­while, the things that I’m best at are the things I’m worst at teach­ing.

Take constructing conceptual models. It’s something that has always come naturally to me. Upon realizing that it was a particular strength of mine, I worked to hone it, understand it, and push it to its limits. However, even with this deep understanding, I’m still not great at teaching it. I can tell people what it feels like, give my introspection on its parts, and describe all of the systems I’ve built to enhance it and the reasoning behind them.

But, I can­not tell them the path to go from not hav­ing the skill of con­cep­tual model build­ing to hav­ing it. It’s like breath­ing to me. If they run into a prob­lem in ac­quiring the skill, I can­not help them over­come it be­cause I never ran into it my­self. It’s much harder for me to truly un­der­stand what it’s like to be some­one who strug­gles with the skill.

• While this seems accurate in these cases, I’m not sure how far this model generalizes. In domains where teaching mostly means debugging, having encountered and overcome a sufficiently wide variety of problems may be important. But there are also domains where people start out blank, rather than starting out with a broken version of the skill; in those cases, it may be that only the most skilled people know what the skill even looks like. I expect programming, for example, to fall into this category.

• Agree, the model doesn’t fully gen­er­al­ize and lacks nu­ance. I think pro­gram­ming is a plau­si­ble coun­terex­am­ple.

• Are you good at teach­ing peo­ple (your) ex­ist­ing con­cep­tual mod­els? (As op­posed to how to make their own.)

• I think I’m de­cent at it. I sup­pose you could an­swer this ques­tion bet­ter than I.

• INSTRUMENTAL RATIONALITY CURRICULUM

A few weeks ago I ran a workshop at the EA Hotel that taught my framework for internal debugging. It went well, but there was obviously too much content, and I have doubts about its ability to consistently affect people in the real world.

I’ve started planning for the next workshop and creating test content. The idea is to teach the material as a series of habits where specific mental sensations/smells are associated with specific mental moves. These implementation intentions can be practiced through focused meditations. There are three “sets” of habits, each with seven or eight meditations attached.

The idea is that the first course, the Way of Focus, teaches people the basic skills of working with intentions and focusing that are needed to not procrastinate. That is, there are basic focusing skills you need to get things done even if you don’t have any internal conflict or trauma. The first course starts with that.

THE WAY OF FOCUS (Overcoming Akrasia)

1. Notic­ing dropped in­ten­tion → Resta­bi­liz­ing intention

2. Notic­ing Com­pet­ing In­ten­tion → Lov­ing Snooze (+ Set­ting Up Po­modoros or Con­sis­tent Break Sched­ule)

3. Notic­ing Po­ten­tial In­ten­tion → Men­tal Contrasting

4. Noticing Coercive Intention → Switching to Non-coercive Possibility

5. Noticing Ambiguous/​Overwhelming Intention → Generating Specific Next Action

6. Notic­ing Con­text Switch → In­ten­tion Clear­ing (+ Habits for Re­mov­ing Dis­trac­tions)

7. Noticing Productivity → Reinforcing Self-Concept as Productive Person (+ Changing Environment to That of Productive Person)

THE WAY OF LETTING GO (Over­com­ing Trauma)

Sometimes, you’ll have competing intentions come up that are very persistent, because they’re related to deep emotional issues/trauma. You can find them by looking for feelings of avoidance, or of being unable to stay away, and then use the following techniques to dispel them.

1. Noticing Avoidance → Fuse with the Feeling

2. Notic­ing Mag­netism → Dis­so­ci­ate from Feeling

3. In­hab­it­ing Feel­ing → Find­ing Emo­tional Core

4. Finding Emotional Core → Re-experience Memories

5. Sticky Belief → Ques­tion Belief Via Work of By­ron Katie

6. Sticky Feel­ing → Let Go of Feel­ing Via Se­dona Method

7. Sticky Me­mories → Reframe Me­mories Via Lefkoe Belief Process

8. Process Fails → Find Second-Layer Emotion

THE WAY OF ALIGNMENT (Over­com­ing In­ter­nal Con­flict)

Sometimes, you’ll notice competing intentions that aren’t unambiguously negative or positive, and it’s hard to know what to do. In those cases, you can notice the “conflicted” feeling, and use the following habits to deal with them over a period of time.

0. Notic­ing Con­flict → Fuse/​Dis­so­ci­ate With Feel­ing (Already Taught)

0. Easy to Fuse/​Dis­so­ci­ate → Find Emo­tional Core (Already Taught)

1. Familiar Conflict → Alternate Fusing/​Dissociating (practice switching perspectives)

2. Easy to shift per­spec­tives → Prac­tice hold­ing both at once

3. Easy to hold both at once → In­ter­nal Dou­ble Crux

4. Me­mory Re­con­soli­dated → Stack Attitudes

5. At­ti­tudes Stacked → Core Transformation

6. Core Trans­formed → Parental Timeline Reimprinting

7. Timeline Reim­printed → Mo­dal­ity Mind Palace

I’m just finishing up the content for THE WAY OF FOCUS, and I’m looking for people to help test the material. It will involve committing 30 minutes a day over the internet for 7 days: 10 minutes to practice previous meditations, 10 minutes to learn the new material, and 10 minutes to practice the new material via a new type of meditation.

• POST-RATIONALITY IS SYSTEMATIZED WINNING

John is a Green­blot, a mem­ber of the species that KNOWS that the ul­ti­mate goal, the way to win, is to min­i­mize the amount of blue in the world, and max­i­mize the amount of green.

The Greenblots have developed theories of cooperation that allow them to work together to make more green, complicated theories of light to explain the true nature of green, and several competing systems of ethics that describe the greenness or blueness of various actions, in a very complicated sense that nonetheless clearly leads to the color.

One day, John meets Ted. Ted is a mem­ber of the Lovelots. John is aghast when he finds out that Lovelots can’t per­ceive the differ­ence be­tween Blue and Green. Ted is aghast that John can’t per­ceive the differ­ence be­tween love and hate. They both go on their merry way.

The next day, John is doing his daily meditation, imagining the cessation of endless blue and the ascendance of endless green, but thoughts of Ted and his inability to perceive this situation keep intruding. Suddenly, John experiences a subject-object shift. He is able to perceive his meditation as Ted perceives it, with both colors being the same. In the next moment, he has a flash of the Greenblots celebrating when they’ve achieved their goal, and John now knows what it’s like to experience the thing Ted called love.

John is confused; he thought the Greenblots had built a foolproof theory of winning, of how to maximize the green and minimize the blue. But then he experienced endless green, and knew what it was for that to not be winning at all. And he experienced the thing Ted was describing, and the sensation of winning felt the same. John thought he knew everything about winning, but in fact he knew nothing.

John vows to un­der­stand the true na­ture of win­ning, and de­velop the dis­ci­pline of be­ing able to work with the sen­sa­tion just like he pre­vi­ously was able to work with be­liefs about mak­ing things greener. John will be­come the Green­blots’ first post-ra­tio­nal­ist.

• It seems like the spirit of the Li­tany of Gendlin is ba­si­cally false?

Own­ing up to what’s true makes things way worse if you don’t have the psy­cholog­i­cal im­mune sys­tem to han­dle the nega­tive news/​deal with the trauma or what­ever.

And it’s pre­cisely the things that you are avoid­ing look­ing at that are most likely to be those things you can’t han­dle, as that’s WHY you de­vel­oped the re­sponse of not look­ing at them.

• Pedantically speaking, whether this is true or not depends on what you mean by “it”; owning up to it [a fact about the world external to oneself] does not make it [that fact] worse, but if your psychology can’t handle unpleasant truths, then owning up to it [a specific fact about the external world] may make it [the world as a whole] worse.

But this is a bit of a dodge; I think the right way to look at it is that, in most cases, a false be­lief is a form of debt; you’ll prob­a­bly have to own up to it even­tu­ally, and there’s a cost to be paid when you do, but time-shift­ing that cost fur­ther into the fu­ture cre­ates ad­di­tional costs, be­cause you make worse de­ci­sions and form other in­cor­rect be­liefs in the mean time.

• Habryka framed the Gendlin litany as a stoic meditation, which made me dislike it a bit less. I.e., it’s something you say to yourself to help make it true that you can endure the truth, by choosing to adopt a frame where the truth is already out there. (Not sure if Habryka exactly endorses this summary.)

The main is­sue I then have with it (through this frame) is it says “peo­ple can en­dure what is true”, rather than “I can en­dure what’s true” – “peo­ple” sounds like it’s mak­ing a claim about the ex­ter­nal world, rather than a mantra I’m re­peat­ing to my­self. (Although I can imag­ine a read­ing where the “peo­ple” is still di­rected in­ward rather than out­ward)

I guess, put another way, further steelmanning the original version: the fact that people can stand what’s true doesn’t mean that they do stand what’s true. You can be reminding yourself of what’s possible, and committing to cleave towards the truth and be the sort of person who will stand what’s true, by framing it as something you’re already enduring.

• I think it’s prob­a­bly true that the Li­tany of Gendlin is ir­recov­er­ably false, but I feel drawn to apolo­gia any­way.

I think the cen­tral point of the litany is its equiv­o­ca­tion be­tween “you can stand what is true (be­cause, whether you know it or not, you already are stand­ing what is true)” and “you can stand to know what is true”.

When someone thinks, “I can’t have wasted my time on this startup. If I have, I’ll just die”, they must really mean “If I find out I have, I’ll just die”. Otherwise, presumably, they could conclude from their continued aliveness that they didn’t waste their life, and move on. The litany is an invitation to allow yourself to have less fallout from acknowledging or finding out the truth, because your finding it out isn’t what causes it to be true, however bad the world might be because it’s true. A local frame might be: “whatever additional terrible ways it feels like the world must be now, if X is true, are bucket errors”.

So when you say “Owning up to what’s true makes things way worse if you don’t have the psychological immune system to handle the negative news/deal with the trauma or whatever”, you’re not responding to the litany as I see it. The litany says (emphasis added) “Owning up to it doesn’t make it worse”. Owning up to what’s true doesn’t make the true thing worse. It might make things worse, but it doesn’t make the true thing worse (though I’m sure there are, in fact, tricky counterexamples here).

(The Li­tany of Gendlin is im­por­tant to me, so I wanted to defend it!)

• We ob­vi­ously can’t give our at­ten­tion to ev­ery truth. The LoG has to be con­tex­tual. If you’re spend­ing a lot of re­sources pur­su­ing an im­pos­si­ble goal be­cause you’re willfully ig­nor­ing an un­com­fortable fact, stop deny­ing the truth. Build the emo­tional skills to work through dis­ap­point­ment in a healthy way and move on with your life.

My is­sue with the LoG is its tone. It seems to frame the pro­cess of cop­ing with dis­ap­point­ment as a dis­pas­sion­ate one. Like we’re sup­posed to be a com­puter. I think that’s un­helpful on the mar­gin for most peo­ple most of the time.

• I wonder why it seems like it suggests dispassion to you, but to me it suggests grace in the presence of pain. The grace for me, I think, comes from the outward- and upward-reaching (to me) “to be interacted with” and “to be lived”, and grace with acknowledgement of pain comes from “they are already enduring it”.

• Just had an ex­cel­lent chat with CFAR Cofounder (al­though no longer a part of CFAR) Michael Smith break­ing down in ex­cru­ci­at­ing de­tail a skill he calls “Break­ing Free.”

A step by step pro­cess to:

1. No­tice auto-pi­lot scripts you are run­ning that are caus­ing you pain.

2. Dis­solve them so you can see what ac­tions will lead to what you truly want.

Now, I’m look­ing for peo­ple to teach this skill to! It would in­volve a ~2 hour ses­sion where I ask you why you want the skill, and teach it to you, then a ~30 minute fol­lowup ses­sion a cou­ple weeks later where we talk about what the skill has done for you.

I’m happy to give free coach­ing on the skill to any­one who asks, all I ask is that I can use the record­ings of your ses­sion in the pod­cast about the skill.

Any­one in­ter­ested?

• I may be in­ter­ested. DM me

• CFAR’s “Ad­just Your Seat” prin­ci­ple and as­so­ci­ated story is prob­a­bly one of my most fre­quently refer­enced con­cepts when teach­ing ra­tio­nal­ity tech­niques.

I wish there was a LW post about it.

• My biggest win lately (Cour­tesy of Elliot Teper­man) in re­gards to self love is to get in the habit of think­ing of my­self as the par­ent of a child (my­self) who I have un­con­di­tional love for, and say­ing what that par­ent would say.

An un­ex­pected benefit of this is that I’ve started talk­ing like this to oth­ers.

Like, sometimes my friends just need to hear that I appreciate them as a human being, and am proud of them for what they accomplished, and it’s not the type of thing I used to say at all.

And so do I; I didn’t realize how much I needed to hear that sort of thing from myself until I started saying it regularly.

One could call this Internal Parent Systems. Not to be confused with the default installed one that many of us have, which judges, criticizes, or blames in our parents’ voice :). A close cousin of Qiaochu Yuan’s Internal Puppy Systems.

• I think this has some in­ter­est­ing par­allels to trans­ac­tional anal­y­sis. In that model you could think of it as ex­er­cis­ing your par­ent part to talk to your child part and to talk to the child part of oth­ers.

• To­day I had a great chat with a friend on the differ­ence be­tween #Fluidity and #Con­gru­ency

• For the past decade+ my goal has been #Con­gru­ency (also of­ten called #Align­ment), the idea that there should be no differ­ence be­tween who I am in­ter­nally, what I do ex­ter­nally, and how I rep­re­sent my­self to others

• This worked well for quite a long time, and led me great places, but the prob­lems with #Con­gru­ency started to show more ob­vi­ously re­cently.

• Firstly, my internal sense of “rightness” wasn’t easily encapsulated in a single set of consistent principles; it’s very fuzzy and context specific. And furthermore, what I can even define as “right” shifts as my #Ontology shifts.

• Se­condly, and in par­allel, as the idea of #Self starts to ap­pear less and less co­her­ent to me, the whole base that the house is built on starts to col­lapse.

• This has led me to begin a shift from #Congruency to #Fluidity. #Fluidity is NOT about behaving by an internally and externally consistent set of principles; rather, it’s being able to find that sense of “Rightness” (the right way forward) in increasingly complex and nuanced situations.

• This “rightness” in any given situation is influenced by the #Ontologies that I’m operating under at any given time, and the #Ontologies are influenced by the sense of “rightness”.

• But as I hone my abil­ity to fluidly shift on­tolo­gies, and my abil­ity to have enough aware­ness to be in touch with that sense of right­ness, it be­comes eas­ier to find that sense of right­ness/​wrong­ness in a given situ­a­tion. This is as close as I can come to de­scribing what is some­times called #SenseMak­ing.

• Sorry for all the hash­tags, this was origi­nally writ­ten in Roam.

• Is Roam as use­ful a medium for you to read in, as it is for you to write in?

• Mods are asleep, post pic­tures of mush­room clouds.

• Is there much EA work into tail risk from GMOs ru­in­ing crops or ecosys­tems?

If not, why not?

• When I interviewed people who were both very productive and enjoyed work immensely, they turned out to be remarkably similar in terms of the emotional content of how they related to tasks. Here are the 5 emotions that can make work productive and enjoyable:

1. Unqualified Desire

• Definition: Wanting the outcome of your task without reservation. Wanting to do the task without reservation.

• Questions:

• How can I remove the bad aspects?

2. Resolve

• Defi­ni­tion: A sense that “I will do this task”. As op­posed to Un­qual­ified De­sire, which is a sense that “I want this out­come.”

• Ques­tions:

• How will I feel once the pro­ject or task is done?

• How can I make that differ­ence real to my­self?

3. Playfulness

• Ques­tions:

• What is the near­est state to what I’m cur­rently feel­ing, that in­cludes a sense of en­joy­ment?

• What does a task need for it to be in­trin­si­cally en­joy­able to me?

• Which of those things can I add to this task to most quickly get to that Near­est Playful State?

4. Meaning

• Defi­ni­tion: A sense that you’re con­nected to your deep­est val­ues when do­ing a task.

• Ques­tions:

• What is the near­est state to what I’m cur­rently feel­ing, that in­cludes a sense of Mean­ing?

• What are my val­ues?

• Which of those values can I tie to this task, to more quickly get to that Nearest Meaningful State?

5. Intentionality

• Definition: A state where you feel as if you are choosing to do what you want to do, when you want to do it.

• Ques­tions:

• What is the near­est state to what I’m cur­rently feel­ing, that in­cludes a sense of In­ten­tion­al­ity?

• What can I do to move my­self into that state?

• Is that some­thing I’ll ac­tu­ally do from my cur­rent state?

• One of the things I’ve been work­ing on in the back­ground over the past ~year is chang­ing my re­la­tion­ship to money. This has al­lowed me to make more of it while feel­ing great about it.

Here are the 2 biggest shifts I made:

1. I had a deep-rooted subconscious belief that if I got money, it would corrupt me and amplify the worst parts of me. Then I realized that having money will allow me to hire coaches and advisors whose sole purpose is to help me reach my deepest values. I spent lots of time consciously visualizing this, and recognizing on a deep level that I could consciously direct my money to amplify the best parts of me.

2. I used to view money as a transaction, a fair trade between giving money and getting something back of equal or greater value. But that caused me to miss out on the human component of money: it caused me to focus on the money and the product, rather than the people behind them.

Another parallel perspective I’ve adopted is that money is a gift: a gift of trust in the person you’re buying from, a gift of freedom in the sense of what the money means. When someone gifts me money, I’ve gotten in the habit of consciously “receiving” that money, with gratitude and love. This has changed how I approach my products, and how I approach my “customers”.

Th­ese two shifts have al­lowed me to be more com­fortable with money, even de­velop a pow­er­ful, mu­tu­ally benefi­cial re­la­tion­ship with it :).

• I had one of my pilot students for the akrasia course I’m working on point out today that something I don’t cover in my course is indecision. I used to have a bit of a problem with that, but not enough to have sunk a lot of time into determining the qualia and mental moves related to defeating it.

Has anyone reading this gone from being really indecisive (and procrastinating because of it) to much more decisive? Or are you currently working on making the switch? I’d love to talk to you/model you.

As a bonus thank you, you’ll of course get a free ver­sion of the course (along with all the guided med­i­ta­tions and au­dios) when it’s com­plete.

• ON HEAVEN AND ENLIGHTENMENT

https://scontent-sjc3-1.xx.fbcdn.net/v/t1.0-9/56656099_10220056198495676_9079758874621247488_n.jpg?_nc_cat=107&_nc_oc=AQm42c-keDXguTwDHsVQz7hGt5AK-DkYK_eG13XXmHcybXql4JvgoYZC4r0Uy4LvMAU&_nc_ht=scontent-sjc3-1.xx&oh=bb4a1f996cfde07165c9e22fdfe7c06d&oe=5D901596

At the extremes, people have one of four life goals: to achieve a state of nothingness (Hinayana enlightenment), to achieve a state of oneness (Mahayana enlightenment), to achieve a utopia of meaning (Galt’s Gulch), or to achieve a utopia of togetherness (hivemind).

In practice, most people exist somewhere in the middle, depending on how much they want to change their conception of the world (enlightenment) vs. changing the world itself (heaven), and depending on how much they view their identity as separate from other things (individualism) or the same as other things (collectivism).

I think I’m already past stream en­try, and this is why the above di­a­gram scares the shit out of me:

It seems like Hinayana enlightenment may be an attractor state even if I have a significant number of values that would want to create a utopia of meaning.

If I were confident that I could go the Mahayana path, there’s the “Bodhisattva option”: stepping back from your enlightenment to bring others in, thus creating heaven.

But it’s not clear to me that I won’t end up at noth­ing­ness in­stead of one­ness, and I’m not aware of a path to step back from noth­ing­ness and cre­ate a utopia of mean­ing, in fact they feel al­most di­a­met­ri­cally op­posed.

Hence ‘Stream en­try con­sid­ered harm­ful.’

• I’m in­ter­ested in a medium-fleshed-out ver­sion of this com­ment that holds my hand more than the cur­rent one does. (Not sure whether I’d want the full fledged post ver­sion yet)

(In gen­eral, happy to see more peo­ple us­ing short­form feeds)

((also, you prob­a­bly didn’t mean to call it a short-term feed))

• Will do.

• You should add Integral’s interior and exterior to the diagram.

• Interior and exterior is one component of heaven and enlightenment. It’s possible to break up that one axis into several axes, but it’s usually correlated enough not to have to do that for the vast majority of people and organizations.

• At the extremes, people have one of four life goals: to achieve a state of nothingness (Hinayana enlightenment), to achieve a state of oneness (Mahayana enlightenment), to achieve a utopia of meaning (Galt’s Gulch), or to achieve a utopia of togetherness (hivemind).

Th­ese are not dis­tinct things—they’re al­ter­na­tive ways to frame one thing. All roads lead to Rome, so to speak. The way I see it, full en­light­en­ment en­tails at­tain­ing all four at once. Just don’t get dis­tracted by the taste of lo­tus on the way.

• This is a com­mon be­lief and it may in fact be true, but it’s at odds with the on­tol­ogy as pre­sented. There are trade­offs be­tween which one you choose in this on­tol­ogy.

• On­tolog­i­cally dis­tinct en­light­en­ments sug­gest path de­pen­dence. That seems cor­rect on re­flec­tion; up­dat­ing and re­fram­ing.

En­light­en­ment is caused by a cer­tain ob­ser­va­tion about mind/​re­al­ity that is salient, ob­vi­ous in ret­ro­spect and re­li­ably trig­gers ma­jor up­dates. The refer­ent of this ob­ser­va­tion is uni­ver­sal and in­var­i­ant but its in­ter­pre­ta­tion and the re­sult­ing up­dates may not be; the mind can only work with what it has.

In other words, en­light­en­ment has one refer­ent in the ter­ri­tory but the re­sult­ing maps are path de­pen­dent. This seems con­sis­tent with what I know about spiritu­al­ity-re­lated failure modes and doc­tri­nal dis­agree­ments. Also, the six­ties.

So yeah. Cau­tion is war­ranted. Just keep in mind that your skull is an in­for­ma­tion bot­tle­neck, not an on­tolog­i­cal bound­ary.

• I have a visceral nega­tive re­ac­tion to the com­ments on this post.

It re­ally an­noys me that ra­tio­nal­ists are so bad at un­der­stand­ing and us­ing anal­ogy.

https://www.lesswrong.com/posts/HzDcLf2LJg4x66fcH/not-all-communication-is-manipulation-chaperones-don-t

• What can I do to get an in­tu­itive grasp of Kelly bet­ting? Are there apps I can play or ex­er­cises I can try?
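One possible exercise (a minimal sketch; the 60% coin, even odds, and 1000 rounds are illustrative assumptions, not from the thread): simulate betting a fixed fraction of a bankroll over and over, and compare the Kelly fraction f* = p - (1 - p)/b against betting less or more than it.

```python
import random

def kelly_fraction(p, b):
    """Kelly criterion: optimal bankroll fraction to stake when you win
    b times your stake with probability p. f* = p - (1 - p) / b."""
    return p - (1 - p) / b

def simulate(fraction, p=0.6, b=1.0, rounds=1000, seed=0):
    """Final bankroll (starting from 1.0) after repeatedly staking a
    fixed fraction of the current bankroll each round."""
    rng = random.Random(seed)
    bankroll = 1.0
    for _ in range(rounds):
        stake = bankroll * fraction
        if rng.random() < p:
            bankroll += stake * b  # win: gain b times the stake
        else:
            bankroll -= stake      # loss: forfeit the stake
    return bankroll

f_star = kelly_fraction(0.6, 1.0)  # f* = 0.2 for a 60% coin at even odds
for f in [0.05, f_star, 0.5, 0.9]:
    print(f"fraction {f:.2f}: final bankroll {simulate(f):.4g}")
```

Sweeping `fraction` above and below f* makes the key intuition visceral: even with a genuine edge, over-betting grinds the bankroll toward zero, while the Kelly fraction maximizes long-run growth. A spreadsheet version of the same loop works just as well as an app.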

• But can’t you just believe in Roko’s anti-basilisk, the aligned AI that will punish you if you bring a malevolent AI into existence?

• There’s no feed­back loop that re­sults in that AI be­ing cre­ated.

• You do if the su­per benev­olent AI isn’t dumber than a defect­bot.

• I mean, you get the stan­dard utopia that the al­igned AI gives you. And you’re more likely to end up in wor­lds with al­igned AIs that dis­in­cen­tivize un­al­igned AIs from be­ing cre­ated, so maybe there’s an an­thropic feed­back loop?

• I’m not sure that most people who seek to create aligned AIs want an AI that starts doing the Last Judgment and punishes people for their misdeeds for acausal trade reasons.

It’s been a while since I read Roko’s post, but I don’t think it makes any argument for the resulting AI being non-aligned. Being aligned doesn’t prevent the AI from assuming that its existence is very high utility and doing acausal trade to further the chances of existing.

• I’ve been think­ing a bit about the re­la­tion­ship be­tween Perfec­tion­ism, Fear-of-Failure, and Fear-of-Suc­cess, as I’ve been teach­ing them this week in my course.

They all have a very similar structure, where each has a component of a “shadow value” (something that’s important to us that we tend not to acknowledge), as well as an “acknowledged value” (something that we allow ourselves to acknowledge as important).

The solution for all 3 is similar: separate the shadow value from the acknowledged value, then figure out whether each value (both shadow and acknowledged) actually applies to the situation, and how best to apply it.

For Perfec­tion­ism, the Shadow Value is pleas­ing/​be­ing loved by/​be­ing ac­cepted by oth­ers. The ac­knowl­edged value is hav­ing high stan­dards for our­selves and our work.

For Fear-of-Failure, the Shadow Value is pro­tect­ing our iden­tity. The ac­knowl­edged value is deal­ing with the nega­tive ex­ter­nal con­se­quences of failure.

For Fear-of-Suc­cess, the Shadow Value is be­ing de­serv­ing of what we re­ceive. The ac­knowl­edged value is deal­ing with the nega­tive ex­ter­nal con­se­quences of suc­cess.

What bugs me is… I don’t know why all 3 of these hap­pen to de­velop this very similar struc­ture. It could just be a co­in­ci­dence, but my gut tells me there is some­thing unify­ing all 3 of these items to­gether that I’m not see­ing, and that un­der­stand­ing what it is would give me a more com­plete un­der­stand­ing of Pro­cras­ti­na­tion.

They all seem to some­how be re­lated to “Stan­dards”—but I’m still not see­ing the un­der­ly­ing sys­tem.

• Is the shadow value always iden­tity re­lated? (You are good/​[iden­tity X which is good]/​not? Per­cep­tion/​model of self worth?)

• I’m not sure if the perfec­tion­ism case (be­ing perfect to please oth­ers) fits the iden­tity pat­tern. Although ad­mit­tedly, in some peo­ple the shadow/​ac­knowl­edged value is flipped—some peo­ple will ac­knowl­edge be­ing perfect to please oth­ers, but won’t ac­knowl­edge the part of them­selves that want to do it for them­selves.

• Think­ing that some things aren’t all right to ac­knowl­edge might be more fun­da­men­tal.

I was guessing that “all of the shadow stuff is about how people think of themselves (i.e. identity: I am _, I am not _)” because it’s something people get tied up in, and it’s a reason someone might want to deny something.

I also think of Perfectionism (and its opposite, not trying (if the standard is unobtainable*)) as being (related to) fear of failure.

*This might cash out as:

“I’m good at X” → does well, puts in a lot of effort (Maybe judges peo­ple for hav­ing low stan­dards, or has differ­ent per­sonal stan­dards, whether high, non­judge­men­tal, dis­tributed, etc.), may seek it out + challenges in domain

“I’m bad at Y” → doesn’t try, scrapes by, avoids/​ugh field/​pro­cras­ti­nates, says ‘it doesn’t mat­ter’/​‘i don’t care’, judges self, maybe dirty pain

(It’s not su­per easy to delineate ‘en­joys/​seeks out thing’ from (con­sis­tently) ‘works to get bet­ter at it’.)