In praise of fake frameworks

Related to: Bucket errors, Categorizing Has Consequences, Fallacies of Compression

Followup to: Gears in Understanding

I use a lot of fake frameworks — that is, ways of seeing the world that are probably or obviously wrong in some important way.

I think this is an important skill. There are obvious pitfalls, but I think the advantages are more than worth it. In fact, I think the “pitfalls” can even sometimes be epistemically useful.

Here I want to share why. This is for two reasons:

  • I think fake framework use is a wonderful skill. I want it represented more in rationality in practice. Or, I want to know where I’m missing something, and Less Wrong is a great place for that.

  • I’m building toward something. This is actually a continuation of Gears in Understanding, although I imagine it won’t be at all clear here how. I need a suite of tools in order to describe something. Talking about fake frameworks is a good way to demo tool #2.

With that, let’s get started.

There are two kinds of people: extroverts and introverts.

…sort of.

I mean, as I look around, it certainly looks like there’s a difference between outgoing social butterflies and quiet types who mostly stay at home. Maybe it’s more like a continuum rather than a binary thing. But if so, I find myself wondering if it’s bimodal with rough “extrovert” and “introvert” clusters anyway.

But then I look at long lists of differences between extroverts and introverts, and I worry. What exactly do these terms mean? Is it just about how talkative and loud people are? If so, are the labels sneaking in connotations about where people “get energy” from and how action-oriented they are?

Well, it turns out that a bunch of those traits are correlated. The intuition is, in fact, picking up on something true in the world.

That doesn’t mean the intuition is correct.

It looks like maybe extraversion isn’t bimodal. I can justify that after the fact: the Big Five verified extraversion as a correlational cluster of traits and defines introversion as “low extraversion”, and a Gaussian distribution seems like a more sensible prior than a bimodal one. But I didn’t think of that ahead of time. If I hadn’t thought to look, I might have thought the Big Five had verified the bimodal intuition because “these traits are correlated” and “the correlation has two separable empirical clusters” were compressed into one “bucket” in my mind.
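To make the unimodal-versus-bimodal distinction concrete, here’s a minimal sketch of one crude heuristic for it, Sarle’s bimodality coefficient (one of several such tests, and an illustration rather than a recommendation): it sits near 1/3 for a Gaussian, and values above roughly 0.555 hint that the data may split into two clusters.

```python
import math
import random

def bimodality_coefficient(xs):
    """Sarle's bimodality coefficient, large-sample form: (skew^2 + 1) / kurtosis.
    Roughly 1/3 for a Gaussian; values above ~0.555 hint at bimodality."""
    n = len(xs)
    mean = sum(xs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in xs) / n)
    skew = sum(((x - mean) / sd) ** 3 for x in xs) / n
    kurt = sum(((x - mean) / sd) ** 4 for x in xs) / n  # raw kurtosis; ~3 for a Gaussian
    return (skew ** 2 + 1) / kurt

rng = random.Random(0)

# One cluster: plain Gaussian "extraversion scores".
unimodal = [rng.gauss(0.0, 1.0) for _ in range(20000)]

# Two clusters: a 50/50 mixture of well-separated Gaussians.
bimodal = [rng.gauss(-3.0, 1.0) if rng.random() < 0.5 else rng.gauss(3.0, 1.0)
           for _ in range(20000)]
```

The point of the sketch is only that “these traits are correlated” and “the scores form two separable clusters” are distinct empirical questions — the second needs its own check.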

What other parts of the intuition are suspect? What else wants to sneak in under the banner of “verified”?

That’s hard to know. The usual use of the term “extrovert” isn’t a sharp reference to clear traits. It’s more like a fuzzy cluster of impressions that loosely splat over things like Type A personality and being the “life of the party”.

So we’re left with a choice.

We can ignore the fuzzy intuition and just use the concepts that come from the research. OCEAN tells us what extraversion is as it exists in the world. If we want to know what other traits correlate with extraversion, we can measure that trait and a bunch of others and look. We can feed massive amounts of data to machine learning systems and let them magically tell us correlations. No guesswork required.

That seems safe.

Except that if we’d done that as a species, there would be no OCEAN. Researchers thought to develop the Big Five because of folk intuitions about personality traits.

Also, not everything has been researched, and it’s tricky to find everything that has been researched.

The whole approach is too slow. It doesn’t work as a general epistemic solution.

But we clearly can’t just trust the intuition. It’s predictably wrong somewhere. It makes some false things seem obviously true. And we don’t get to know which seemingly true ideas are wrong ahead of time.

So instead I suggest this:

Assume the intuition is wrong. It’s fake. And then use it anyway. Let yourself wonder about and kind of believe in what makes sense to you about introverts and extroverts. Just do it in a mental sandbox of “This is all fake and made up.”

You know more about people than you’re conscious of. Doing this sandboxing lets you flesh out Gears for extroversion and introversion with more of your mind.

It also keeps you honest. You’re already privileging hypotheses. This lets you own up to it and notice where you’re making implicit assumptions.

And maybe some of those “privileged” hypotheses are just correct. That’s worth noticing when it’s true. Maybe more extraverted people really do wear more decorative clothing. If that’s right, then maybe you should have let your intuition influence your guesses from the start.

Now consider a different example: an ordinary roadmap. In practice, while using one, it’s sensible to think of roads as basic somehow. Or rather, when using these maps, roads are basic.

Yes, you can reflect on it and remember that roads are made of atoms. But two points:

  • That’s pretty useless. That doesn’t help you get from point A to point B in a new city.

  • The roadmap would work even if the roads weren’t made of atoms. Like roads in a video game world.

This means it’s pretty silly to try to give an intensional definition of “road” in this context. If you met someone who’d never used a roadmap before, you’d point at a road near you and point at the matching part of the map and say “That thing is this line.”

I think this suggests a natural way to define “ontology”. I say an ontology is a set of “basic” things that you use to build a map (together with rules for how you can combine them into a map). Something is “ontologically basic” if it’s an element of the ontology you’re using.

Some other examples:

  • In Euclidean geometry, the undefined terms “point”, “line”, and “plane” are the ontologically basic things, and the postulates are the rules for how to combine them. We create territories this ontology can map when extensionally defining the undefined terms: we pretend a blackboard is a “plane”, a bar of chalk dust dragged across it is a “line”, and a fat dot of chalk on it is a “point”. I think a lot of people talk about this backwards, like the drawings are maps helping them explore the territory of Euclidean geometry. I think they’re confused about what “real” means. The drawings help us notice what territories that ontology can make maps of.

  • I think classical mechanics has mass, position, and time as ontologically basic. Newton’s Laws of Motion give the rest of the ontology. That’s a rich enough map-building set that it can describe most movement we encounter pretty well. It falls short when modeling near-light speeds though.

  • OCEAN’s ontology has five “personality spectra” as basic: Openness, Conscientiousness, Extraversion, Agreeableness, and Neuroticism. These five emerged from data. This contrasts with Myers-Briggs’ four (Introversion/Extraversion, Sensing/iNtuiting, Thinking/Feeling, and Perceiving/Judging), which came from thinking about Carl Jung’s theories. They both have the same ontological structure though: personality type is defined by some small number of intervals with some set of behavioral traits clustered at each end of each interval.

Ontologies make things seem real. Roads are real, right? But if time itself isn’t real, or if there’s only one electron in the whole universe, then what does it mean to say that a road is real?

I think people get confused like this when they switch ontologies without noticing. First roads are basic. Then we’re talking about ontologies for physics, and none of those take roads as basic. One of them even challenges the idea of a thing at a higher order than an electron. Each time we switch ontologies that we connect to our experience, “real” takes on a new meaning for us.

But aren’t roads really real? I mean, I walk down one every day to get to work…

…which shows how pervasive this illusion is.

If I can’t switch to an ontology that doesn’t have roads as basic while looking at what I was calling a “road”, then I’m pretty stuck. I can’t understand reductionism. I can’t see with fresh eyes. I’m just rehearsing what I “know”.

Likewise with people who are really into (say) Myers-Briggs. If the types in Myers-Briggs look real to you, and you can’t shake that sense, then you’re stuck. It’ll seem meaningful to figure out how each person fits in that framework as though the framework is objectively true. That will make it hard for you to notice things about personality that that ontology doesn’t capture well.

…and it will highlight things that that ontology does capture well.

I learned a lot about just how different other minds can be by studying the Enneagram. It suggested Gears for people I hadn’t considered before. It helped me see my dad’s sternness as affection. I mostly don’t use that system anymore; it’s unclear whether its nine types map at all to natural clusters of people. But I still see Dad more clearly because I used the system for a while.

Switching which ontology is active feels like changing what I believe is real about what I’m experiencing. This means if I want ontological flexibility, I have to take my experience of “real” lightly. I can’t clutch too tightly to the sense that what’s “obviously real” to me right now is objectively true. And I have to be able to see something new as “obviously real”.

Like with the road. Roads are real. But I can set that aside and see a road as molecules in mechanical and chemical interactions. Or as quarks in a timeless wiggling quantum soup. Or as a dreamlike projection of my pattern-matching mind.

This is easier to do if I think of these “real” roads as fake. And molecules as fake. And quarks as fake. If I remember that I’m talking about map-generators, which feels on the inside like talking about the territory.

Numbers are ontologically basic in elementary arithmetic.

But it turns out that if you take sets as basic instead, you can derive the numbers of elementary arithmetic. Set theory is a richer ontology than that of elementary arithmetic: everything you can map with numbers, you can map at the same resolution with sets. But you can do more with sets.
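The derivation above can be made concrete. In the standard von Neumann construction, 0 is the empty set and each successor is n ∪ {n}, so every number is literally the set of all smaller numbers. A minimal sketch:

```python
# Von Neumann encoding of the natural numbers: 0 = {}, succ(n) = n ∪ {n}.
# Each numeral is the set of all smaller numerals, so "m < n" is literally
# both set membership (m ∈ n) and proper-subset inclusion (m ⊂ n).

ZERO = frozenset()

def succ(n):
    """The successor of n: n together with n itself as a new element."""
    return n | frozenset([n])

def to_int(n):
    """A von Neumann numeral's cardinality is the number it encodes."""
    return len(n)

ONE = succ(ZERO)     # {∅}
TWO = succ(ONE)      # {∅, {∅}}
THREE = succ(TWO)    # {∅, {∅}, {∅, {∅}}}
```

Nothing here is a number in the elementary-arithmetic sense; it’s sets all the way down, yet the whole structure of counting falls out of them.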

Reductionism promises that all ontologies can become one. More formally: Given any finite set of ontologies that fit experience, there’s some super-ontology that fits experience and is at least as rich as every ontology in your initial set.

That’s different from saying that you know how to find it.

Quarks are real, right? Really really real? And everything real comes from quarks and how they interact. We just use labels like “evolution” and “desire” for things that are a pain to derive from the quark level.

But it would feel this way if you were wrong, too. That’s what it feels like to wear an ontology.

I suspect it’s a type error to think of an ontology as correct or wrong. Ontologies are toolkits for building maps. It makes sense to ask whether an ontology carves reality at its joints, but that’s different. That’s looking at fit. Something weird happens to your epistemology when you start asking whether quarks are real independent of ontology.

Maybe in the secret noumenal universe, there are truly basic things. I don’t know how I can ever know about them without maps though.

Which makes me want to whack my ontologies with a sledgehammer.

If I’m only ever willing to try on ontologies that I can tell fall within a known richer super-ontology (e.g., physics), then anything that super-ontology doesn’t easily map becomes hard for me to notice.

This isn’t a challenge to reductionism. Or to physics.

It’s a challenge to assuming you already know the answer.

Fifteen years ago I learned how to “extend ki” in aikido. Ki was part of my teachers’ ontology. That didn’t make sense to my physics brain, but I went with it anyway. This gave me access to strange powers that took me over a decade to understand within my physics ontology.

I think it twists the definition of “rational” to say that I should have rejected their teachings as wrongheaded.

But it would have been bad if I had believed in ki and physics and reductionism without being confused. Eventually the ontologies needed to reconcile.

And they did. Eventually I learned enough about body mechanics and how brains model movement to understand why “moving with ki flow” worked.

But in the meantime, I still learned how to do aikido.

“Ontological flexibility” is a mouthful. I don’t like the phrase. Too many syllables.

So instead I talk about fake frameworks.

There’s a skill to trying on a crazy perspective, actually believing it while you use it, and never taking it seriously. Then you can learn whether your judgment of “crazy” is right. And you can extract value from the good parts.

There’s an open question about how to wear obviously wrong ontologies without hurting your belief system. I don’t have a better answer than “Try to sandbox.” It seems to work for me. And it’s not something I always did: I used to adopt and cling to every ontology I tried on. It made the world seem very mysterious. I don’t think I do that anymore, so I think this is a learnable skill.

And if this is the wrong skill somehow, I’d like to know what to use instead.