# Blegg Mode

Link post

Fanfiction for the blegg/rube parable in “A Human’s Guide to Words”, ~800 words. (Content notice: in addition to making a point about epistemology (which is why it may have been worth sharing here), this piece is also an obvious allegory about a potentially mindkilling topic; read with caution, as always.)

• I don’t understand what point you are trying to make.

Presumably, each object has observable properties x and unobservable properties y. The utility of putting an object into bin A is U_A(x, y) and the utility of putting it into bin B is U_B(x, y). Therefore, your worker should put an object into bin A if and only if E[U_A(x, y) | x] ≥ E[U_B(x, y) | x].

That’s it. Any “categories” you introduce here are at best helpful heuristics, with no deep philosophical significance.
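A minimal sketch of that expected-utility rule, under toy assumptions: the utility functions, the “blueness” observable, and the noise model below are all invented for illustration, not taken from the post.

```python
import random

def expected_utility(utility, x, y_samples):
    """Monte Carlo estimate of E[utility(x, y) | x], using samples of the
    unobservable properties y drawn conditional on the observables x."""
    return sum(utility(x, y) for y in y_samples) / len(y_samples)

def choose_bin(x, y_samples, u_a, u_b):
    """Put the object into bin A if and only if E[u_A | x] >= E[u_B | x]."""
    if expected_utility(u_a, x, y_samples) >= expected_utility(u_b, x, y_samples):
        return "A"
    return "B"

# Toy model: x is blueness in [0, 1]; y is vanadium content, noisily equal to x.
u_a = lambda x, y: y        # bin A pays off in proportion to vanadium content
u_b = lambda x, y: 1 - y    # bin B pays off in its absence

def y_given_x(x):
    """Hypothetical conditional distribution of y given x, clipped to [0, 1]."""
    return [min(1.0, max(0.0, x + random.gauss(0, 0.1))) for _ in range(1000)]

print(choose_bin(0.9, y_given_x(0.9), u_a, u_b))  # prints A: high blueness predicts vanadium
```

Note that no category labels appear anywhere in the sketch; the bins fall out of the expected-utility comparison alone, which is the comment’s point.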

• Any “categories” you introduce here are at best helpful heuristics, with no deep philosophical significance.

I mean, yes, but I was imagining that there would be some deep philosophy about how computationally bounded agents should construct optimally helpful heuristics.

• Alright, but then you need some (at least informal) model of why computationally bounded agents need categories. Instead, your argument seems to rely purely on the intuition of your fictional character (“you notice that… they seem to occupy a third category in your ontology of sortable objects”).

Also, you seem to assume that categories are non-overlapping. You write “you don’t really put them in the same mental category as bleggs”. What does it even mean, to put two objects in the same or not the same category? Consider a horse and a cow. Are they in the same mental category? Both are in the categories “living organisms”, “animals”, “mammals”, “domesticated mammals”. But they are different species. So, sometimes you put them in the same category, sometimes you put them in different categories. Are “raven” and “F16 aircraft” in the same category? They are if your categories are “flying objects” vs. “non-flying objects”, but they aren’t if your categories are “animate” vs. “non-animate”.

Moreover, you seem to assume that categories are crisp rather than fuzzy, which is almost never the case for categories that people actually use. How many coins does it take to make a “pile” of coins? Is there an exact number? Is there an exact age when a person gets to be called “old”? If you take a table made out of a block of wood, and start to gradually deform its shape until it becomes perfectly spherical, is there an exact point when it is no longer called a “table”? So, “rubes” and “bleggs” can be fuzzy categories, and the anomalous objects are in the gray area that defies categorization. There’s nothing wrong with that.

If we take this rube/blegg factory thought experiment seriously, then what we need to imagine is the algorithm (instructions) that the worker in the factory executes. Then you can say that the relevant “categories” (in the context of the factory, and in that context only) are the vertices in the flow graph of the algorithm. For example, the algorithm might be a table that specifies how to score each object (blue +5 points, egg-shaped +10 points, furry +1 point...) and a threshold which says what the score needs to be to put it in a given bin. Then there are essentially only two categories. Another algorithm might be “if the object passes test X, put it in the rube bin; if the object passes test Y, put it in the blegg bin; if the object passes neither test, put it in the Palladium scanner and sort according to that”. Then, you have approximately seven categories: “regular rube” (passed test X), “regular blegg” (passed test Y), “irregular object” (failed both tests), “irregular rube” (failed both tests and found to contain enough Palladium), “irregular blegg” (failed both tests and found not to contain enough Palladium), “rube” (anything put in the rube bin) and “blegg” (anything put in the blegg bin). But in any case, the categorization would depend on the particular trade-offs that the designers of the production line made (depending on things like how expensive it is to run the palladium scanner), rather than immutable Platonic truths about the nature of the objects themselves.
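As an illustration, the first algorithm described above (a score table plus a threshold) might look like this; the feature names, point values, and cutoff are assumptions for the sketch, not details fixed by the parable:

```python
# Hypothetical score table: cheap-to-observe features and their point values.
SCORES = {"blue": 5, "egg_shaped": 10, "furry": 1, "flexible": 2, "luminescent": 3}
BLEGG_THRESHOLD = 11  # assumed cutoff: a score at or above this goes to the blegg bin

def sort_object(features):
    """Return the destination bin for an object with the given feature set.
    Note that this algorithm's flow graph only ever yields two categories."""
    score = sum(SCORES.get(f, 0) for f in features)
    return "blegg bin" if score >= BLEGG_THRESHOLD else "rube bin"

print(sort_object({"blue", "egg_shaped", "furry"}))  # prints blegg bin (score 16)
print(sort_object({"furry"}))                        # prints rube bin (score 1)
```

The second algorithm (tests X and Y plus the Palladium scanner) would instead be a branching flow graph, and its vertices are what generate the roughly seven categories listed above.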

Then again, I’m not entirely sure whether we are really disagreeing or just formulating the same thing in different ways?

• your argument seems to rely purely on the intuition of your fictional character

Yes, the dependence on intuition is definitely a weakness of this particular post. (I wish I knew as much math as Jessica Taylor! If I want to become stronger, I’ll have to figure out how to fit more studying into my schedule!)

you seem to assume that categories are non-overlapping.
you seem to assume that categories are crisp rather than fuzzy

I don’t believe either of those things. If you have any specific wording suggestions on how I can write more clearly so as to better communicate to my readers that I don’t believe either of those things, I’m listening.

If you take a table made out of a block of wood, and start to gradually deform its shape until it becomes perfectly spherical, is there an exact point when it is no longer called a “table”?

No, there is no such exact point; like many longtime Less Wrong readers, I, too, am familiar with the Sorites paradox.

But in any case, the categorization would depend on the particular trade-offs that the designers of the production line made (depending on things like how expensive it is to run the palladium scanner)

Right. Another thing the particular algorithm-design trade-offs will depend on is the distribution of objects.

We could imagine a slightly altered parable in which the frequency distribution of objects is much more evenly spread out in color–shape–metal-content space: while cubeness has a reasonably strong correlation with redness and palladium yield, and eggness with blueness and vanadium yield, you still have a substantial fraction of non-modal objects: bluish-purple rounded cubes, reddish-purple squarish eggs, &c.

In that scenario, a natural-language summary of the optimal decision algorithm wouldn’t talk about discrete categories: you’d probably want some kind of scoring algorithm with thresholds for various tests and decisions as you describe, and no matter where you set the threshold for each decision, you’d still see a lot of objects just on either side of the boundary, with no good “joint” to anchor the placement of a category boundary.

In contrast, my reading of Yudkowsky’s original parable posits a much sparser, more tightly-clustered distribution of objects in configuration space. The objects do vary somewhat (some bleggs are purple, some rubes contain vanadium), but there’s a very clear cluster-structure: virtually all objects are close to the center of—and could be said to “belong to”—either the “rube” cluster or the “blegg” cluster, with a lot of empty space in between.

In this scenario, I think it does make sense for a natural-language summary of the optimal decision algorithm to talk about two distinct “categories” where the density in the configuration space is concentrated. Platonic essences are just the limiting case as the overlap between clusters goes to zero.

In my fanfiction, I imagine that some unknown entity has taken objects that were originally in the “rube” cluster, and modified them so that they appear, at first glance but not on closer inspection, to be members of the “blegg” cluster. At first, the protagonist wishes to respect the apparent intent of the unknown entity by considering the modified objects to be bleggs. But in the process of her sorting work, the protagonist finds herself wanting to mentally distinguish adapted bleggs from regular bleggs, because she can’t make the same job-relevant probabilistic inferences with the new “bleggs (either regular or adapted)” concept as she could with the old “bleggs (only standard bleggs)” concept.

To see why, forget about the category labels for a moment and just consider the clusters in the six-dimensional color–shape–texture–firmness–luminescence–metal-content configuration space.

Before the unknown entity’s intervention, we had two distinct clusters: one centered at {blue, egg, furry, flexible, luminescent, vanadium}, and another centered at {red, cube, smooth, hard, non-luminescent, palladium}.

After the unknown entity’s intervention, we have three distinct clusters: the two previously-existing clusters, and a new cluster centered at {blue, egg, furry, hard, non-luminescent, palladium}. This is a different situation! Workers on the sorting line might want different language in order to describe this new reality!

Now, if we were to project into the three-dimensional color–shape–texture subspace, then we would have two clusters again: with just these attributes, we can’t distinguish between bleggs and adapted bleggs. But since workers on the sorting line can observe hardness, and care about metal content, they probably want to use the three-cluster representation, even if they suspect the unknown entity might thereby feel disrespected.
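The projection point can be made concrete with a toy encoding of the three cluster centers; the 0/1 encoding of attributes is an assumption of the sketch, not something from the story.

```python
# The three cluster centers after the intervention, with each of the six
# attributes encoded as 1 (blegg-typical) or 0 (rube-typical).
BLEGG         = {"blue": 1, "egg": 1, "furry": 1, "flexible": 1, "luminescent": 1, "vanadium": 1}
RUBE          = {"blue": 0, "egg": 0, "furry": 0, "flexible": 0, "luminescent": 0, "vanadium": 0}
ADAPTED_BLEGG = {"blue": 1, "egg": 1, "furry": 1, "flexible": 0, "luminescent": 0, "vanadium": 0}

def project(center, dims):
    """Restrict a cluster center to the given subspace of attributes."""
    return tuple(center[d] for d in dims)

ALL_SIX     = ["blue", "egg", "furry", "flexible", "luminescent", "vanadium"]
AT_A_GLANCE = ["blue", "egg", "furry"]  # the color-shape-texture subspace

centers = (BLEGG, RUBE, ADAPTED_BLEGG)
print(len({project(c, ALL_SIX) for c in centers}))      # prints 3: three distinct clusters
print(len({project(c, AT_A_GLANCE) for c in centers}))  # prints 2: adapted bleggs collapse into bleggs
```

In the full six-attribute space the centers are pairwise distinct, but restricted to the at-a-glance subspace the blegg and adapted-blegg centers coincide, which is why the workers’ choice of representation depends on which attributes they can observe and care about.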

• Hmm. Why would the entity feel disrespected by how many clusters the workers use? I actually am aware that this is an allegory for something else. Moreover, I think that I disagree with you about the something else (although I am not sure, since I am not entirely sure what your position on the something else is). Which is to say, I think that this allegory misses crucial aspects of the original situation and loses the crux of the debate.

• I think that this allegory misses crucial aspects of the original situation

That makes sense! As gjm noted, sometimes unscrupulous authors sneakily construct an allegory with the intent of leading the reader to a particular conclusion within the context of the allegory, with the hope that the reader will map that conclusion back onto the real-world situation in a particular way, without doing the work of actually showing that the allegory and the real-world situation are analogous in the relevant aspects.

I don’t want to be guilty of that! This is a story about bleggs and rubes that I happened to come up with in the context of trying to think about something else (and I don’t want to be deceptive about that historical fact), but I definitely agree that people shouldn’t map the story onto some other situation unless they actually have a good argument for why that mapping makes sense. If we wanted to discuss the something else rather than the bleggs and rubes, we should do that on someone else’s website. Not here.

• FWIW, I predicted it would be an allegory of transsexuality even before I read it or any of the comments.

• I mean, yes, there’s the allusion in the title! (The post wasn’t originally written to be shared on Less Wrong; it just seemed sufficiently sanitized to be shareable-here-without-running-too-afoul-of-anti-politics-norms after the fact.)

• I read the title as just an allusion to Eliezer’s OP on bleggs and rubes. (Otoh, without having read the article just linked, I’m familiar with “egg” as transsexual jargon for someone exploring TS feelings, who (the ideology has it) will inevitably in the end “hatch” into a full-on TS.)

• The description here seems a little … disingenuous.

[EDITED to add:] I see that this has been downvoted at least once. I don’t object at all to being downvoted but find it hard to tell from just a downvote what it is that has displeased someone; if anyone would like to indicate why they dislike this comment, I’m all ears. (Objection to “disingenuous” as too harsh? Preferring the “deniable allegory”, as Zack puts it, to remain deniable for longer? Disliking what they guess to be my position on the specific issue it’s an allegory for? …)

• I downvoted because the description is disingenuous only if it’s common knowledge that the Rationalist project is so doomed that an attempt to correct a politically motivated epistemic error via an otherwise entirely depoliticized fictional example in a specifically Rationalist space is construed as a political act.

Fine to argue that this is the case (thus contributing to actual common knowledge), but insinuating it seems like a sketchy way of making it so.

• Thanks for the explanation!

It’s rather condensed, so it’s very possible that my inability to see how it’s a fair criticism of what I wrote is the result of my misunderstanding it. May I attempt to paraphrase your criticism at greater length and explain why I’m baffled? I regret that my attempt at doing this has turned out awfully long; at least it should be explicit enough that it can’t reasonably be accused of “insinuating” anything...

So, I think your argument goes as follows. (Your argument, as I (possibly mis-)understand it, is in roman type with numbers at the start of each point. Italics indicate what I can’t make sense of.)

1. The purpose of the linked article is not best understood as political, but as improving epistemic hygiene: its purpose is to correct something that’s definitely an error, an error that merely happens to arise as a result of political biases.

It isn’t clear to me what this error is meant to be. If it’s something like “thinking that there must be a definite objectively-correct division of all things into bleggs and rubes” then I agree that it’s an error, but it’s an error already thoroughly covered by EY’s and SA’s posts linked to in the article itself, and in any case it doesn’t seem to me that the article is mostly concerned with making that point; rather, it presupposes it. The other candidates I can think of seem to me not to be clearly errors at all.

In any case, it seems to me that the main point of the linked article is not to correct some epistemic error, but to propose a particular position on the political issue it’s alluding to, and that most of the details of its allegory are chosen specifically to support that aim.

2. The author has taken some trouble to address this error in terms that are “entirely depoliticized” as far as it’s possible for it to be, given that the error in question is politically motivated.

I think what I think of this depends on what the error in question is meant to be. E.g., if it’s the thing I mentioned above then it seems clear that the article could easily have been much less political while still making the general point as clearly. In any case, calling this article “depoliticized” seems to me like calling Orwell’s “Animal Farm” depoliticized because it never so much as mentions the USSR. Constructing a hypothetical situation designed to match your view of a politically contentious question and drawing readers’ attention to that matchup is not “depoliticized” in any useful sense.

3. My description of Zack’s description as “disingenuous” amounts to an accusation that Zack’s posting the article here is a “political act” (which I take to mean: an attempt to manipulate readers’ political opinions, or perhaps to turn LW into a venue for political flamewars, or something of the kind).

I do in fact think that Zack’s purpose in posting the article here is probably at least in part to promote the political position for which the article is arguing, and that if that isn’t so—if Zack’s intention was simply to draw our attention to a well-executed bit of epistemology—then it is likely that Zack finds it well-executed partly because of finding it politically congenial. In that sense, I do think it’s probably a “political act”. My reasons for thinking this are (1) that my own assessment of the merits of the article purely as a piece of philosophy is not positive, and (2) that the political allegory seems to me so obviously the main purpose of the article that I have trouble seeing why anyone would recommend it for an entirely separate purpose. More on this below. I could of course be wrong about Zack’s opinions and/or about the merits of the article as an epistemological exercise.

It seems relevant here that Zack pretty much agreed with my description: see his comments using terms like “deniable allegory”, “get away with it”, etc.

4. That could only be a reasonable concern if the people here were so bad at thinking clearly on difficult topics as to make the project of improving our thinking a doomed one.

I have no idea why anything like this should be so.

5. And it could only justify calling Zack’s description “disingenuous” if that weren’t only true but common knowledge—because otherwise a better explanation would be that Zack just doesn’t share my opinion about how incapable readers here are of clear thought on difficult topics.

That might be more or less right (though it wouldn’t require quite so much as actual common knowledge) if point 4 were right, but as mentioned above I am entirely baffled by point 4.

Having laid bare my confusion, a few words about what I take the actual purpose of the article to be and why, and about its merits or demerits as a piece of philosophy. (By way of explaining some of my comments above.)

I think the (obvious, or so it seems to me) purpose of the article is to argue for the following position: “Trans people [adapted bleggs] should be regarded as belonging not to their ‘adopted’ gender [you don’t really put them in the same mental category as bleggs], but to a category separate from either of the usual genders [they seem to occupy a third category in your ontology of sortable objects]; if you have to put people into two categories, trans people should almost always be grouped with their ‘originally assigned’ gender [so that you can put the majority of palladium-containing ones in the palladium bin (formerly known as the rube bin) … 90% of the adapted bleggs—like 98% of rubes, and like only 2% of non-adapted bleggs—contain fragments of palladium].” And also, perhaps, to suggest that no one really truly thinks of trans people as quite belonging to their ‘adopted’ gender [And at a glance, they look like bleggs—I mean, like the more-typical bleggs … you don’t really put them in the same mental category as bleggs].

(Note: the article deals metaphorically only with one sort of transness—rube-to-blegg. Presumably the author would actually want at least four categories: blegg, rube, rube-to-blegg, blegg-to-rube. Perhaps others too. I’m going to ignore that issue because this is plenty long enough already.)

I don’t think this can reasonably be regarded as correcting an epistemic error. There’s certainly an epistemic error in the vicinity, as I mentioned above: the idea that we have to divide these hypothetical objects into exactly two categories, with there being a clear fact of the matter as to which category each object falls into—and the corresponding position on gender is equally erroneous. But that is dealt with in passing in the first few paragraphs, and most of the article is arguing not merely for not making that error but for a specific other position, the one I described in the paragraph preceding this one. And that position is not so clearly correct that advocating it is simply a matter of correcting an error.

(Is it not? No, it is not. Here are three other positions that contradict it without, I think, being flat-out wrong. 1. “We shouldn’t put trans people in a third distinct category; rather, we should regard the usual two categories as fuzzy-edged, try to see them less categorically, and avoid manufacturing new categories unless there’s a really serious need to; if someone doesn’t fit perfectly in either of the two usual categories, we should resist the temptation to look for a new category to put them in.” 2. “Having noticed that our categories are fuzzy and somewhat arbitrary, we would do best to stick with the usual two and put trans people in the category of their ‘adopted’ gender. We will sometimes need to treat them specially, just as we would in any case for e.g. highly gender-atypical non-trans people, but that doesn’t call for a different category.” 3. “Having noticed [etc.], we would do best to stick with the usual two and put trans people in the category of their ‘originally assigned’ gender. We will sometimes need [etc.].”)

I’ve indicated that if I consider the article as an epistemological exercise rather than a piece of political propaganda, I find it unimpressive. I should say a bit about why.

I think there are two bits of actual epistemology here. The first is the observation that we don’t have to put all of our bleggs/rubes/whatever into two boxes and assume that the categorization is Objectively Correct. Nothing wrong with that, but it’s also not in any sense a contribution of this article, which already links to earlier pieces by Eliezer and Scott that deal with that point well.

The second is the specific heuristic the author proposes: make a new category for things that have “cheap-to-detect features that correlate with more-expensive-to-detect features that are decision-relevant with respect to the agent’s goals”. So, is this a good heuristic?

The first thing I notice about it is that it isn’t a great heuristic even when applied to the specific example that motivates the whole piece. As it says near the start: ‘you have no way of knowing how many successfully “passing” adapted bleggs you’ve missed’. Trans-ness is not always “cheap to detect”. I guess it’s cheaper to detect than, say, sex chromosomes. OK—and how often are another person’s sex chromosomes “decision-relevant with respect to the agent’s goals”? Pretty much only if the agent is (1) a doctor treating them or (2) a prospective sexual partner who is highly interested in, to put it bluntly, their breeding potential. Those are both fairly uncommon—for most of us, very few of the people we interact with are either likely patients or likely breeding partners.

What about other cases where new categories have turned out to be wanted? Trying to think of some examples, it seems to me that what matters is simply the presence of features that are “decision-relevant with respect to the agent’s goals”. Sometimes they correlate with other cheaper-to-identify features, sometimes not. There are isotopes: we had the chemical elements, and then it turned out that actually we sometimes need to distinguish between U-235 and U-238. In this case it happens that you can distinguish them by mass, which I guess is easier than direct examination of the nuclei, but it seems to me that we’d care about the difference even if we couldn’t do that, and relatively cheap distinguishability is not an important part of why we have separate categories for them. Indeed, when isotopes were first discovered it was by observing nuclear-decay chains. There are enantiomers: to take a concrete example, in the wake of the thalidomide disaster it suddenly became clear that it was worth distinguishing R-thalidomide from S-thalidomide. Except that, so far as I can tell, it isn’t actually feasible to separate them, and when thalidomide is used medically it’s still the racemic form and they just tell people who might get pregnant not to take it. So there doesn’t seem to be a cheap-to-identify feature here in any useful sense. There are different types of supernova for which I don’t see any cheap-feature/relevant-feature dichotomy. There are intersex people whose situation has, at least logically speaking, a thing or two in common with trans people; in many cases the way you identify them is by checking their sex chromosomes, which is exactly the “expensive” feature the author identifies in the case of trans people.

I’m really not seeing that this heuristic is a particularly good one. It has the look, to me, of a principle that’s constructed in order to reach a particular conclusion. Even though, as I said above, I am not convinced that it applies all that well even to the specific example I think it was constructed for. I also don’t think it applies particularly well in the hypothetical situation the author made up. Remember those 2% of otherwise ordinary bleggs that contain palladium? Personally, I’d want a category for those, if I found myself also needing one for “adapted bleggs” because of the palladium they contain. It might be impracticably expensive, for now, to scan all bleggs in case they belong to the 2%, but I’d be looking out for ways to identify palladium-containing bleggs, and all palladium-containing bleggs might well turn out in the end to be a “better” category than “adapted bleggs”, especially as only 90% of the latter contain palladium.

So, as I say, not impressive epistemology, and it looks to me as if the principle was constructed for the sake of this particular application. Which is one more reason why I think that that application is the sole real point of the article.

• Thanks for trying. I have limited time and got a sense for where we seem to have split from each other about halfway through your comment, so I’ll mainly respond to that. You brought up a bunch of stuff in the second half that is very specific to gender issues and that I don’t think is relevant.

There’s an underlying situation in which Zack made some arguments elsewhere about gender stuff, and prominent people in the Rationalist community responded with an argument along the lines of “since categories are in the map, not in the territory, there’s no point in saying one categorization is more natural than another; we might as well just pick ones that don’t hurt people’s feelings.”

These people are claiming a position on epistemology that Zack thinks is substantially mistaken. Zack is faced with a choice: either they’re giving a politically motivated anti-epistemology in order to shut down the conversation, and not because they believe it—or they’re making a mistake.

If we take the argument literally, it’s worth correcting regardless of one’s specific opinions on gender identity.

If we all know that such arguments aren’t meant to be taken literally, but are instead meant to push one side of a particular political debate in that context, then arguing against them is actually just the political act of pushing back.

But part of how bad-faith arguments work is that they fool some people into thinking they’re good-faith arguments. Even if YOU know that people don’t mean what they say in this case, they wouldn’t say it unless SOMEONE was likely to be honestly mistaken.

“You’re doing too much politics here” is not a helpful critique. It doesn’t give Zack enough information to get clued in if he’s not already, and it leaves the key controversial premise unstated. If your actual position is, “come on, Zack, everyone on this site knows that people aren’t making this mistake honestly, posts like this one by Scott are mindkilled politics, and engaging with them lowers the quality of discourse here,” then you need to actually say that.

Personally, I DON’T see people behaving as though it were common knowledge that people claiming to be making this mistake are actually just lying. And if we write off people like Scott, we might as well just close down the whole project of having a big Rationalist community on the internet.

It’s offensive to me that there’s even a question about this.

• Aha, this clarifies some things helpfully. It is now much clearer to me than it was before what epistemological error you take Zack to be trying to correct here.

I still think it’s clear that Zack’s main purpose in writing the article was to promote a particular object-level position on the political question. But I agree that “even though categories are map rather than territory, some maps match reality much better than others, and to deny that is an error” (call this proposition P, for future use) is a reasonable point to make about epistemology in the abstract, and that given the context of Zack’s article it’s reasonable to take that to be a key thing it’s trying to say about epistemology.

But it seems to me—though perhaps I’m just being dim—that the only possible way to appreciate that P was Zack’s epistemological point is to be aware not only of the political (not-very-sub) subtext of the article (which, you’ll recall, is the thing I originally said it was wrong not to mention) but also of the context where people were addressing that specific political issue in what Zack considers a too-subjective way. (For the avoidance of doubt, I’m not saying that that requires some sort of special esoteric knowledge unavailable to the rest of us. Merely having just reread Scott’s TCWMFM would have sufficed. But it happened that I was familiar enough with it not to feel that I needed to revisit it, and not familiar enough with it to recognize every specific reference to it in Zack’s article. I doubt I’m alone in that.)

Again, perhaps I’m just being dim. But I know that some people didn’t even see the political subtext, and I know that I didn’t see P as being Zack’s main epistemological point before I read what you just wrote. (I’m still not sure it is, for what it’s worth.) So it doesn’t seem open to much doubt that just putting the article here without further explanation wasn’t sufficient.

There’s a specific way in which I could be being dim that might make that wrong: perhaps I was just distracted by the politics, and perhaps if I’d been able to approach the article as if it were purely talking in the abstract about epistemology I’d have taken it to be saying P. But, again, if so then I offer myself as evidence that it needed some clarification for the benefit of those liable to be distracted.

As to the rest:

It looks to me as if you are ascribing meanings and purposes to me that are not mine at all. E.g., “If we all know that such arguments aren’t meant to be taken literally, but are instead meant to push one side of a particular political debate in that context”—I didn’t think I was saying, and I don’t think I believe, and I don’t think anything I said either implies or presupposes, anything like that. The impression I have is that this is one of those situations where I say X, you believe Y, from X&Y you infer Z, and you get cross because I’m saying Z and Z is an awful thing to say—when what’s actually happening is that we disagree about Y. Unfortunately, I can’t tell what Y is in this situation :-).

So I don’t know how to react to your suggestion that I should have said explicitly rather than just assuming that posts like Scott’s TCWMFM “are mindkilled politics and engaging with them lowers the quality of discourse here”; presumably either (1) you think I actually think that or (2) you think that what I’ve said implies that so it’s a useful reductio, but I still don’t understand how you get there from what I actually wrote.

To be explicit about this:

I do not think that Scott’s TCWMFM is “mindkilled politics”.

I do not think that engaging with articles like Scott’s TCWMFM lowers the quality of discourse.

I do not think that it’s impossible to hold Scott’s position honestly.

I do not think that it’s impossible to hold Zack’s position honestly.

I don’t think that Zack’s article is “mindkilled politics”, but I do think it’s much less good than Scott’s.

I don’t think Scott is making the epistemological mistake you say Zack is saying he’s making, that of not understanding that one way of drawing category boundaries can be better than another. I think he’s aware of that, but thinks (as, for what it’s worth, I do, but I think Zack doesn’t) that there are a number of ways to draw them in this case that are comparably well matched with reality.

I think that responding to Scott’s article as if he were simply saying “meh, whatever, draw category boundaries literally any way you like, the only thing that matters is which way is nicest” is not reasonable, and I think that casting it as making the mistake you say Zack is saying Scott was making requires some such uncharitable interpretation. (This may be one reason why I didn’t take P to be the main epistemological claim of Zack’s article.)

If you’re still offended by what I wrote, then at least one of us is misunderstanding the other and I hope that turns out to be fixable.

• But I agree that “even though categories are map rather than territory, some maps match reality much better than others, and to deny that is an error”

Wait. Suitability for purpose has to come in here. There is no single ordering of how closely a map reflects reality. Maps compress different parts of reality in different ways, to enable different predictions/communications about various parts of reality. It’s been literally decades since I’ve enjoyed flamewars about which projection of Earth is “best” for literal maps, but the result is the same: it depends on what the map will be used for, and you’re probably best off using different maps for different purposes, even if those maps are of the same place.

I don’t know the actual debate going on, and pretty much think that in unspecific conversation where details don’t matter, one should prefer kindness and surface presentation. Where the details matter, be precise and factual about the details—don’t rely on categorizations that have notable exceptions for the dimensions you’re talking about.

• For the avoidance of doubt, I strongly agree that what counts as “matching reality much better” depends on what you are going to be using your map for; that’s a key reason why I am not very convinced by Zack’s original argument if it’s understood as a rebuttal to (say) Scott’s TCWMFM either in general or specifically as it pertains to the political question at issue.

• in unspecific conversation where details don’t matter, one should prefer kindness and surface presentation.

Why? Doesn’t this lead to summaries being inaccurate and people having bad world models (ones that would assign lower probability to the actual details, compared to ones based on accurate summaries)?

• Doesn’t this lead to summaries being inaccurate and people having bad world models (ones that would assign lower probability to the actual details, compared to ones based on accurate summaries)?

No, it doesn’t lead there. It starts there. The vast majority of common beliefs will remain inaccurate on many dimensions, and all you can do is to figure out which (if any) details you can benefit the world by slightly improving, in your limited time. Details about hidden attributes that will affect almost nothing are details that don’t need correcting—talk about more interesting/useful things.

• No one has time to look into the details of everything. If someone isn’t going to look into the details of something, they benefit from the summaries being accurate, in the sense that they reflect how an honest party would summarize the details if they knew them. (Also, how would you know which things you should look into further if the low-resolution summaries are lies?)

This seems pretty basic, and it seems like you were disagreeing with this by saying the description should be based on kindness and surface presentation. Obviously some hidden attributes matter more than others (and matter more or less context-dependently); my assertion here is that summaries should be based primarily on how they reflect the way the thing is (in all its details) rather than on kindness and surface presentation.

• In many contexts, the primary benefit of the summary is brevity and simplicity, more even than information. If you have more time/bandwidth/attention, then certainly including more information is better, and even then you should prioritize information by importance.

In any case, I appreciate the reminder that this is the wrong forum for politically-charged discussions. I’m bowing out—I’ll read any further comments, but won’t respond.

• To be clear, brevity and simplicity are not the same as kindness and surface presentation, and confusing these two seems like a mistake 8-year-olds can almost always avoid making. (No pressure to respond; in any case I meant to talk about the abstract issue of accurate summaries, which seems not to be politically charged except in the sense that epistemology itself is a political issue, which it is.)

• But it seems to me—though perhaps I’m just being dim—that the only possible way to appreciate that P was Zack’s epistemological point is to be aware not only of the political (not-very-sub) subtext of the article (which, you’ll recall, is the thing I originally said it was wrong not to mention) but also of the context where people were addressing that specific political issue in what Zack considers a too-subjective way.

That’s not actually an important part of the content of Zack’s article. It is only relevant in the context of your claim that Zack was responding to a very different specific thing not directly referenced in his article. I am not saying that the fact that you were wrong means that the true cause should have been obvious. I am saying that the fact that you were wrong should make you doubt that you were obviously right.

If people’s models have a specific glitch, laying out what the undamaged version ought to look like is legitimate, and shouldn’t have to exist solely in reference to the specific instance of the glitch. Truth doesn’t have to make reference to error to be true—it just has to match reality.

• Wait, if you reckon the proposition I called P is “not actually an important part of the content of Zack’s article” then what did you have in mind as the “politically motivated epistemic error” that Zack’s article was about?

(Or, if P was that error, how am I supposed to understand your original protest, which so far as I can tell only makes any sense if you consider that correcting the epistemic error was the whole point, or at least the main point, of Zack’s article?)

Firmly agree with your last paragraph, though.

• I still think it’s clear that Zack’s main purpose in writing the article was to promote a particular object-level position on the political question.

Why would you think that? Why would this post be a remotely effective way to do that? Why is that even a plausible thing Zack’s trying to do here? Can you point to an example of someone who was actually persuaded?

I feel like I’ve done way too much work explaining my position here and you haven’t really explained the reasoning behind yours.

• For what it’s worth, I feel the same way as you but with the obvious change of sign: it feels to me like you keep accusing me of saying somewhat-outrageous things that I’m not intending to say and don’t believe, and when I ask why you’d think I mean that you just ignore it, and it feels to me like I’ve put much more trouble into understanding your position and clarifying mine than you have into understanding mine and clarifying yours.

Presumably the truth lies somewhere in between.

I don’t think it is reasonable to respond to “I think Zack was trying to do X” with “That’s ridiculous, because evidently it didn’t work”, for two reasons. Firstly, the great majority of attempts to promote a particular position on a controversial topic don’t change anyone’s mind, even in a venue like LW where we try to change our minds more readily when circumstances call for it. Secondly, if you propose that instead he was trying to put forward a particular generally-applicable epistemological position (though I still don’t know what position you have in mind, despite asking several times, since the only particular one you’ve mentioned you then said wasn’t an important part of the content of Zack’s article) then I in turn can ask whether you can point to an example of someone who was persuaded of that by the article.

It’s somewhat reasonable to respond to “I think Zack was trying to do X” with “But what he wrote is obviously not an effective way of doing X”, but I don’t see why it’s any more obviously ineffective as a tool of political persuasion, or as an expression of a political position, than it is as a work of epistemological clarification, and in particular it doesn’t even look to me more than averagely ineffective in such a role.

For the avoidance of doubt, I don’t in the least deny that I might be wrong about what Zack was trying to do. (Sometimes a person thinks something’s clear that turns out to be false. I am not immune to this.) Zack, if you happen to be reading and haven’t been so annoyed by my comments that you don’t want to interact with me ever again, anything you might want to say on this score would be welcome. If I have badly misunderstood what you wrote, please accept my apologies.

• Gonna try a point-by-point version in case that’s clearer.

It isn’t clear to me what this error is meant to be. If it’s something like “thinking that there must be a definite objectively-correct division of all things into bleggs and rubes” then I agree that it’s an error, but it’s an error already thoroughly covered by EY’s and SA’s posts linked to in the article itself, and in any case it doesn’t seem to me that the article is mostly concerned with making that point; rather, it presupposes it.

I know from conversations elsewhere that Zack is responding to the opposite error—the claim that because the usual rule for separating Bleggs from Rubes is pragmatically motivated, it has no implications for edge cases. If you’re making wrong guesses about what political position Zack is taking, you should really reconsider your claim that it’s obvious what his political position is. This needs to be generalized, because it’s obnoxious to have to bring in completely extraneous info about motives in order to figure out whether a post like this is political. Bothering to explain this at all feels a bit like giving in to extortion, and the fact that I expect this explanation to be necessary is a further update against continued substantive engagement on Lesswrong.

In any case, it seems to me that the main point of the linked article is not to correct some epistemic error, but to propose a particular position on the political issue it’s alluding to, and that most of the details of its allegory are chosen specifically to support that aim. [...] Constructing a hypothetical situation designed to match your view of a politically contentious question and drawing readers’ attention to that matchup is not “depoliticized” in any useful sense.

This seems like a proposal to cede an untenable amount of conversation territory. If a controversial political position becomes associated with a particular epistemic error, then discussing that specific error becomes off-limits here, or at least needs to be deprecated as political. I don’t know what that results in, but it’s not a Rationalist community.

I do in fact think that Zack’s purpose in posting the article here is probably at least in part to promote the political position for which the article is arguing, and that if that isn’t so—if Zack’s intention was simply to draw our attention to a well-executed bit of epistemology—then it is likely that Zack finds it well-executed partly because of finding it politically congenial. In that sense, I do think it’s probably a “political act”.

A clear implication of Something to Protect is that people can’t be Rationalists unless getting the right answer has some practical importance to them.

The rest of your comment seems to be making a substantially wrong guess about Zack’s position on gender in a way that—to me, since I know something about Zack’s position—is pretty strong evidence that Zack succeeded in stripping out the accidental specifics and focusing on the core epistemic question. The standard you’re actually holding Zack to is one where if you—perhaps knowing already that he has some thoughts on gender—can project a vaguely related politically motivated argument onto his post, then it’s disingenuous to say it’s nonpolitical.

• (I’m responding to this after already reading and replying to your earlier comment. Apologies in advance if it turns out that I’d have done better with the other one if I’d read this first...)

I’ll begin at the end. ”… perhaps knowing already that he has some thoughts on gender”. What actually happened is that I started reading the article without noticing the website’s name, got a few paragraphs in and thought “ah, OK, so this is a fairly heavy-handed allegory for some trans-related thing”, finished reading it and was fairly unimpressed, then noticed the URL. As for the author, I didn’t actually realise that Zack was the author of the linked article until the discussion here was well underway.

I think we may disagree about what constitutes strong evidence of having successfully stripped out the accidental specifics. Suppose you decide to address some controversial question obliquely. Then there are three different ways in which a reader can come to a wrong opinion about your position on the controversial question. (1) You can detach what you’re writing from your actual position on the object-level issue successfully enough that a reasonable person would be unable to figure out what your position is. (2) You can write something aimed at conveying your actual position, but do it less than perfectly. (3) You can write something aimed at conveying your actual position, and do it well, but the reader can make mistakes, or lack relevant background knowledge, and come to a wrong conclusion. It seems like you’re assuming #1. I think #2 and #3 are at least as plausible.

(As to whether I have got Zack’s position substantially wrong, it’s certainly possible that I might have, by any or all of the three mechanisms in the last paragraph. I haven’t gone into much detail on what I think Zack’s position is, so of course there are also possibilities 4 and 5: that I’ve understood it right but expressed that understanding badly, or that I’ve understood and expressed it OK but you’ve misunderstood what I wrote. If you think it would be helpful, then I can try to state more clearly what I think Zack’s position is and he can let us know how right or wrong I got it. My guess is that it wouldn’t be super-helpful, for what it’s worth.)

OK, now back to the start. My reply to your other comment addresses the first point (about what alleged error Zack is responding to) and I don’t think what you’ve said here changes what I want to say about that.

On the second point (ceding too much territory) I think you’re assuming I’m saying something I’m not, namely that nothing with political implications can ever be discussed here. I don’t think I said that; I don’t believe it; I don’t think anything I said either implies or presupposes it. What I do think is (1) that Zack’s article appears to me to be mostly about the politics despite what Zack calls its “deniable allegory”, (2) that linking mostly-political things from here ought to be done in a way that acknowledges their political-ness and clarifies how they’re intended to be relevant to LW, and (3) that (in my judgement, with which of course others may disagree) this particular article, if we abstract out the political application, isn’t very valuable as a discussion of epistemology in the abstract.

I’m not sure I’ve understood what point you’re making when you reference Something to Protect; I think that again you may be taking me to be saying something more negative than I thought I was saying. At any rate, I certainly neither think nor intended to suggest that we should only talk about things of no practical importance.

• It seems relevant here that Zack pretty much agreed with my description: see his comments using terms like “deniable allegory”, “get away with it”, etc.

So, from my perspective, I’m facing a pretty difficult writing problem here! (See my reply to Dagon.) I agree that we don’t want Less Wrong to be a politicized space. On the other hand, I also think that a lot of self-identified rationalists are making a politically-motivated epistemology error in asserting category boundaries to be somewhat arbitrary, and it’s kind of difficult to address what I claim is the error without even so much as alluding to the object-level situation that I think is motivating the error! For the long, object-level discussion, see my reply to Scott Alexander, “The Categories Were Made for Man To Make Predictions”. (Sorry if the byline mismatch causes confusion; I’m using a pen name for that blog.) I didn’t want to share ”… To Make Predictions” on Less Wrong (er, at least not as a top-level post), because that clearly would be too political. But I thought the “Blegg Mode” parable was sufficiently sanitized such that it would be OK to share as a link post here?

I confess that I didn’t put a lot of thought into the description text which you thought was disingenuous. I don’t think I was being consciously disingenuous (bad intent is a disposition, not a feeling!), but after you pointed it out, I do see your point that, since there is some unavoidable political context here, it’s probably better to explicitly label that, because readers who had a prior expectation that no such context would exist would feel misled upon discovering it. So I added the “Content notice” to the description. Hopefully that addresses the concern?

our categories are [...] somewhat arbitrary

No! Categories are not “somewhat arbitrary”! There is structure in the world, and intelligent agents need categories that carve the structure at the joints so that they can make efficient probabilistic inferences about the variables they’re trying to optimize! “Even if you cannot do the math, knowing that the math exists tells you that the dance step is precise and has no room in it for your whims.” We had a whole Sequence about this! Doesn’t anyone else remember?!

Trans-ness is not always “cheap to detect”. I guess it’s cheaper to detect than, say, sex chromosomes. OK—and how often are another person’s sex chromosomes “decision-relevant with respect to the agent’s goals”?

You seem to be making some assumptions about which parts of the parable are getting mapped to which parts of the real-world issue that obviously inspired the parable. I don’t think this is the correct venue for me to discuss the real-world issue. On this website, under this byline, I’d rather only talk about bleggs and rubes—even if you were correct to point out that it would be disingenuous for someone to expect readers to pretend not to notice the real-world reason that we’re talking about bleggs and rubes. With this in mind, I’ll respond below to a modified version of part of your comment (with edits bracketed).

I guess it’s cheaper to detect than, say, [palladium or vanadium content]. OK—and how often [is a sortable object’s metal content] “decision-relevant with respect to the agent’s goals”? Pretty much only if [you work in the sorting factory.] [That’s] fairly uncommon—for most of us, very few of the [sortable objects] we interact with [need to be sorted into bins according to metal content].

Sure! But reality is very high-dimensional—bleggs and rubes have other properties besides color, shape, and metal content—for example, the properties of being flexible-vs.-hard or luminescent-vs.-non-luminescent, as well as many others that didn’t make it into the parable. If you care about making accurate predictions about the many properties of sortable objects that you can’t immediately observe, then how you draw your category boundaries matters, because your brain is going to be using the category membership you assigned in order to derive your prior expectations about the variables that you haven’t yet observed.
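The mechanism being described (category membership supplying prior expectations for variables you haven’t yet observed) can be sketched in a few lines of Python. All the probabilities below are hypothetical illustration values, not anything canonical from the parable:

```python
# Toy model: once an object is assigned a category, the category
# supplies prior probabilities for properties not yet observed.
# All numbers are made-up, chosen only for illustration.

P_GLOWS = {"blegg": 0.95, "rube": 0.05, "adapted blegg": 0.10}
P_FLEXIBLE = {"blegg": 0.90, "rube": 0.15, "adapted blegg": 0.85}

def prior_expectations(category):
    """Return prior probabilities for an object's unobserved
    properties, derived purely from its assigned category."""
    return {"glows": P_GLOWS[category], "flexible": P_FLEXIBLE[category]}

# Lumping adapted bleggs in with ordinary bleggs predicts glowing with
# probability 0.95; keeping a third category predicts 0.10 instead.
# The choice of category boundary changes the predictions you make.
```

Which boundary is better then depends on whether predictions like luminescence matter for the task at hand, which is exactly what the surrounding discussion is arguing about.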

sex chromosomes, which is exactly the “expensive” feature the author identifies in the case of trans people.

The author did no such thing! It’s epistemology fiction about bleggs and rubes! It’s true that I came up with the parable while I was trying to think carefully about transgender stuff that was of direct and intense personal relevance to me. It’s true that it would be disingenuous for someone to expect readers to not-notice that I was trying to think about trans issues. (I mean, it’s in the URL.) But I didn’t say anything about chromosomes! “If confusion threatens when you interpret a metaphor as a metaphor, try taking everything completely literally.”

Trying to think of some examples, it seems to me that what matters is simply the presence of features that are “decision-relevant with respect to the agent’s goals”. [...]

Thanks for this substantive, on-topic criticism! I would want to think some more before deciding how to reply to this.

ADDENDUM: I thought some more and wrote a sister comment.

• Yes, I agree that the content-note deals with my “disingenuousness” objection.

I agree (of course!) that there is structure in the world and that categories are not completely arbitrary. It seems to me that this is perfectly compatible with saying that they are _somewhat_ arbitrary, which conveniently is what I did actually say. Some categorizations are better than others, but there are often multiple roughly-equally-good categorizations and picking one of those rather than another is not an epistemological error. There is something in reality that is perfectly precise and leaves no room for human whims, but that thing is not usually (perhaps not ever) a specific categorization.

So, anyway, in the particular case of transness, I agree that it’s possible that some of the four categorizations we’ve considered here (yours, which makes trans people a separate category but nudge-nudge-wink-wink indicates that for most purposes trans people are much more “like” others of their ‘originally assigned’ gender than others of their ‘adopted’ gender; and the three others I mentioned: getting by with just two categories and not putting trans people in either of them; getting by with just two categories and putting trans people in their ‘originally assigned’ category; getting by with just two categories and putting trans people in their ‘adopted’ category) are so much better than others that we should reject them. But it seems to me that the relative merits of these depend on the agent’s goals, and the best categorization to adopt may be quite different depending on whether you’re (e.g.) a medical researcher, a person suffering gender dysphoria, a random member of the general public, etc—and also on your own values and priorities.

I did indeed make some assumptions about what was meant to map to what. It’s possible that I didn’t get them quite right. I decline to agree with your proposal that if something metaphorical that you wrote doesn’t seem to match up well I should simply pretend that you intended it as a metaphor, though of course it’s entirely possible that some different match-up makes it work much better.

• But it seems to me that the relative merits of these depend on the agent’s goals, and the best categorization to adopt may be quite different depending on whether you’re [...] and also on your own values and priorities.

Yes, I agree! (And furthermore, the same person might use different categorizations at different times depending on what particular aspects of reality are most relevant to the task at hand.)

But given an agent’s goals in a particular situation, I think it would be a shocking coincidence for it to be the case that “there are [...] multiple roughly-equally-good categorizations.” Why would that happen often?

If I want to use sortable objects as modern art sculptures to decorate my living room, then the relevant features are shape and color, and I want to think about rubes and bleggs (and count adapted bleggs as bleggs). If I also care about how the room looks in the dark and adapted bleggs don’t glow in the dark like ordinary bleggs do, then I want to think about adapted bleggs as being different from ordinary bleggs.

If I’m running a factory that harvests sortable objects for their metal content and my sorting scanner is expensive to run, then I want to think about rubes and ordinary bleggs (because I can infer metal content with acceptably high probability by observing the shape and color of these objects), but I want to look out for adapted bleggs (because their metal content is, with high probability, not what I would expect based on the color/shape/metal-content generalizations I learned from my observations of rubes and ordinary bleggs). If the factory invests in a new state-of-the-art sorting scanner that can be cheaply run on every object, then I don’t have any reason to care about shape or color anymore—I just care about palladium-cored objects and vanadium-cored objects.
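The expensive-scanner trade-off can be made concrete as a toy expected-value calculation. The payoffs, probabilities, and scan cost below are all invented for illustration:

```python
# Toy decision model for the sorting factory. All payoffs and
# probabilities are hypothetical illustration values.

PAYOFF_CORRECT_BIN = 1.0   # reward for binning an object correctly
PAYOFF_WRONG_BIN = -2.0    # penalty for a mis-binned object
SCAN_COST = 0.5            # cost of running the expensive scanner

def value_of_guessing(p_correct):
    """Expected value of binning by color/shape alone, given the
    probability that the surface-feature generalization is right."""
    return p_correct * PAYOFF_CORRECT_BIN + (1 - p_correct) * PAYOFF_WRONG_BIN

def should_scan(p_correct):
    """Scan only when a guaranteed correct bin, minus the scan cost,
    beats the expected value of guessing from surface features."""
    return (PAYOFF_CORRECT_BIN - SCAN_COST) > value_of_guessing(p_correct)

# For an ordinary blegg the color/shape generalization is reliable
# (say p = 0.98), so guessing wins and should_scan returns False.
# For an adapted blegg the generalization fails (say p = 0.50),
# so scanning wins and should_scan returns True.
```

The point of the sketch is just that the useful category boundary (“scan this one, don’t scan that one”) falls straight out of the agent’s goals and costs, rather than being arbitrary.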

and picking one of those rather than another is not an epistemological error.

If you’re really somehow in a situation where there are multiple roughly-equally-good categorizations with respect to your goals and the information you have, then I agree that picking one of those rather than another isn’t an epistemological error. Google Maps and MapQuest are not exactly the same map, but if you just want to drive somewhere, they both reflect the territory pretty well: it probably doesn’t matter which one you use. Faced with an arbitrary choice, you should make an arbitrary choice: flip a coin, or call random.random().

And yet somehow, I never run into people who say, “Categories are somewhat arbitrary, therefore you might as well roll a d3 to decide whether to say ‘trans women are women’ or ‘so-called “trans women” are men’ or ‘transwomen are transwomen’, because each of these maps is doing a roughly-equally-good job of reflecting the relevant aspects of the territory.” But I run into lots of people who say, “Categories are somewhat arbitrary, therefore I’m not wrong to insist that trans women are women,” and who somehow never seem to find it useful to bring up the idea that categories are somewhat arbitrary in seemingly any other context.

You see the problem? If the one has some sort of specific argument for why I should use a particular categorization system in a particular situation, then that’s great, and I want to hear it! But it has to be an argument and not a selectively-invoked appeal-to-arbitrariness conversation-halter.

• Multiple roughly-equally-good categorizations might not often happen to an idealized superintelligent AI that’s much better than we are at extracting all possible information from its environment. But we humans are slow and stupid and make mistakes, and accordingly our probability distributions are really wide, which means our error bars are large and we often find ourselves with multiple hypotheses we can’t decide between with confidence.

(Consider, for a rather different example, political questions of the form “how much of X should the government do?” where X is providing a social “safety net”, regulating businesses, or whatever. Obviously these are somewhat value-laden questions, but even if I hold that constant by e.g. just trying to decide what I think is optimal policy I find myself quite uncertain.)

Perhaps more to the point, most of us are in different situations at different times. If what matters to you about rubleggs is sometimes palladium content, sometimes vanadium content, and sometimes furriness, then I think you have to choose between (1) maintaining a bunch of different categorizations and switching between them, (2) maintaining a single categorization that’s much finer grained than is usually needed in any single situation and aggregating categories in different ways at different times, and (3) finding an approach that doesn’t rely so much on putting things into categories. The cognitive-efficiency benefits of categorization are much diminished in this situation.

Your penultimate paragraph argues (I think) that talk of categories’ somewhat-arbitrariness (like, say, Scott’s in TCWMFM) is not sincere and is adopted merely as an excuse for taking a particular view of trans people (perhaps because that’s socially convenient, or feels nice, or something). Well, I guess that’s just the mirror image of what I said about your comments on categories, so turnabout is fair play, but I don’t think I can agree with it.

• The “Dis­guised Queries” post that first in­tro­duced bleggs and rubes makes es­sen­tially the point that cat­e­gories are some­what ar­bi­trary, that there’s no One True Right An­swer to “is it a blegg or a rube?”, and that which an­swer is best de­pends on what par­tic­u­lar things you care about on a par­tic­u­lar oc­ca­sion.

• Scott’s “Diseased think­ing” (last time I heard, the most highly up­voted ar­ti­cle in the his­tory of Less Wrong) makes es­sen­tially the same point in con­nec­tion to the cat­e­gory of “dis­ease”. (The lead­ing ex­am­ple be­ing obe­sity rather than, say, gen­der dys­pho­ria.)

• Scott’s “The tails com­ing apart as a metaphor for life” does much the same for cat­e­gories like “good thing” and “bad thing”.

• Here’s a little thing from the Institute for Fiscal Studies about poverty metrics, which begins by observing that there are many possible ways to define poverty and nothing resembling consensus about which is best. (The categories here are “poor” and “not poor”.)

• More gen­er­ally, “well, it all de­pends what you mean by X” has been a stan­dard move among philoso­phers for many decades, and it’s ba­si­cally the same thing: words cor­re­spond to cat­e­gories, cat­e­gories are some­what ar­bi­trary, and ques­tions about whether a P is or isn’t a Q are of­ten best un­der­stood as ques­tions about how to draw the bound­aries of Q, which in turn may be best un­der­stood as ques­tions about val­ues or pri­ori­ties or what have you rather than about the ac­tual con­tent of the ac­tual world.

So it seems to me very not-true that the idea that cat­e­gories are some­what ar­bi­trary is a thing in­voked only in or­der to avoid hav­ing to take a definite po­si­tion (or, in or­der to avoid choos­ing one’s definite po­si­tion on the ba­sis of hard facts rather than touchy-feely sen­si­tivity) on how to think and talk about trans peo­ple.

• The “Dis­guised Queries” post that first in­tro­duced bleggs and rubes makes es­sen­tially the point that cat­e­gories are some­what ar­bi­trary, that there’s no One True Right An­swer to “is it a blegg or a rube?”, and that which an­swer is best de­pends on what par­tic­u­lar things you care about on a par­tic­u­lar oc­ca­sion.

That’s not how I would sum­ma­rize that post at all! I mean, I agree that the post did liter­ally say that (“The ques­tion ‘Is this ob­ject a blegg?’ may stand in for differ­ent queries on differ­ent oc­ca­sions”). But it also went on to say more things that I think sub­stan­tially change the moral—

If [the ques­tion] weren’t stand­ing in for some query, you’d have no rea­son to care.

[...] Peo­ple who ar­gue that athe­ism is a re­li­gion “be­cause it states be­liefs about God” are re­ally try­ing to ar­gue (I think) that the rea­son­ing meth­ods used in athe­ism are on a par with the rea­son­ing meth­ods used in re­li­gion, or that athe­ism is no safer than re­li­gion in terms of the prob­a­bil­ity of causally en­gen­der­ing vi­o­lence, etc… [...]

[...] The a pri­ori ir­ra­tional part is where, in the course of the ar­gu­ment, some­one pulls out a dic­tio­nary and looks up the defi­ni­tion of “athe­ism” or “re­li­gion”. [...] How could a dic­tio­nary pos­si­bly de­cide whether an em­piri­cal cluster of athe­ists is re­ally sub­stan­tially differ­ent from an em­piri­cal cluster of the­olo­gians? How can re­al­ity vary with the mean­ing of a word? The points in thingspace don’t move around when we re­draw a bound­ary. [bold­ing mine—ZMD]

But peo­ple of­ten don’t re­al­ize that their ar­gu­ment about where to draw a defi­ni­tional bound­ary, is re­ally a dis­pute over whether to in­fer a char­ac­ter­is­tic shared by most things in­side an em­piri­cal cluster...

I claim that what Yudkowsky said about the irrationality of appealing to the dictionary goes just as well for appeals to personal values or priorities. It’s not false exactly, but it doesn’t accomplish anything.

Sup­pose Bob says, “Abor­tion is mur­der, be­cause it’s the kil­ling of a hu­man be­ing!”

Alice says, “No, abor­tion isn’t mur­der, be­cause mur­der is the kil­ling of a sen­tient be­ing, and fe­tuses aren’t sen­tient.”

As Alice and Bob’s hired rationalist mediator, you could say, “You two just have different preferences about somewhat-arbitrary category boundaries, that’s all! Abortion is murder-with-respect-to-Bob’s-definition, but it isn’t murder-with-respect-to-Alice’s-definition. Done! End of conversation!”

And maybe some­times there re­ally is noth­ing more to it than that. But of­ten­times, I think we can do more work to break the sym­me­try: to work out what differ­ent pre­dic­tions Alice and Bob are mak­ing about re­al­ity, or what differ­ent prefer­ences they have about re­al­ity, and re­fo­cus the dis­cus­sion on that. As I wrote in “The Cat­e­gories Were Made for Man to Make Pre­dic­tions”:

If differ­ent poli­ti­cal fac­tions are en­gaged in con­flict over how to define the ex­ten­sion of some com­mon word—com­mon words be­ing a scarce and valuable re­source both cul­turally and in­for­ma­tion-the­o­ret­i­cally—ra­tio­nal­ists may not be able to say that one side is sim­ply right and the other is sim­ply wrong, but we can at least strive for ob­jec­tivity in de­scribing the con­flict. Be­fore shrug­ging and say­ing, “Well, this is a differ­ence in val­ues; noth­ing more to be said about it,” we can talk about the de­tailed con­se­quences of what is gained or lost by pay­ing at­ten­tion to some differ­ences and ig­nor­ing oth­ers.

We had an en­tire Se­quence speci­fi­cally about this! You were there! I was there! Why doesn’t any­one re­mem­ber?!

• I wasn’t claiming to sum­ma­rize “Dis­guised Queries”. I was point­ing out one thing that it says, which hap­pens to be the thing that you say no one says other than to push a par­tic­u­lar po­si­tion on trans is­sues, and which “Dis­guised Queries” says with (so far as I can tell) no at­tempt to say any­thing about transness at all.

Alice and Bob’s con­ver­sa­tion doesn’t have to end once they (hope­fully) rec­og­nize that their dis­agree­ment is about cat­e­gory bound­aries as much as it is about mat­ters of fact. They may well want to figure out why they draw their bound­aries in differ­ent places. It might be be­cause they have differ­ent pur­poses; or be­cause they have differ­ent opinions on some other mat­ter of fact; or be­cause one or both are re­ally mak­ing ap­peals to emo­tion for an already-de­cided con­clu­sion rather than ac­tu­ally try­ing to think clearly about what sort of a thing a foe­tus is; etc.

End­ing a con­ver­sa­tion, or a train of thought, pre­ma­turely, is a bad thing. It seems al­to­gether un­fair to com­plain at me merely for us­ing words that could be abused for that pur­pose. (If you see me ac­tu­ally try­ing to end a con­ver­sa­tion with them, of course, then by all means com­plain away.)

Over and over again in this dis­cus­sion, it seems as if I’m be­ing taken to say things I’m fairly sure I haven’t said and cer­tainly don’t be­lieve. If it’s be­cause I’m com­mu­ni­cat­ing badly, then I’m very sorry. But it might be worth con­sid­er­ing other ex­pla­na­tions.

• I wasn’t claiming to sum­ma­rize “Dis­guised Queries”.

I may have mis­in­ter­preted what you meant by the phrase “makes es­sen­tially the point that.”

the thing that you say no one says other than to push a par­tic­u­lar po­si­tion on trans issues

I see. I think I made a mistake in the great-great-grandparent comment. That comment’s penultimate paragraph ended: “[...] and who somehow never seem to find it useful to bring up the idea that categories are somewhat arbitrary in seemingly any other context.” I should not have written that, because as you pointed out in the great-grandparent, it’s not true. This turned out to be a pretty costly mistake on my part, because we’ve now just spent the better part of four comments litigating the consequences of this error in a way that we could have avoided if only I had taken more care to phrase the point I was trying to make less hyperbolically.

The point I was try­ing to make in the offend­ing para­graph is that if some­one hon­estly be­lieves that the choice be­tween mul­ti­ple cat­e­gory sys­tems is ar­bi­trary or some­what-ar­bi­trary, then they should ac­cept the choice be­ing made ar­bi­trar­ily or some­what-ar­bi­trar­ily. I agree that “It de­pends on what you mean by X” is of­ten a use­ful mo­tion, but I think it’s pos­si­ble to dis­t­in­guish when it’s be­ing used to fa­cil­i­tate com­mu­ni­ca­tion from when it’s be­ing used to im­pose frame con­trol. Speci­fi­cally: it’s in­co­her­ent to say, “It’s ar­bi­trary, so you should do it my way,” be­cause if it were re­ally ar­bi­trary, the one would not be mo­ti­vated to say “you should do it my way.” In dis­cus­sions about my idiosyn­cratic spe­cial in­ter­est, I very fre­quently en­counter in­cred­ibly men­da­cious frame-con­trol at­tempts from peo­ple who call them­selves “ra­tio­nal­ists” and who don’t seem to do this on most other top­ics. (This is, of course, with re­spect to how I draw the “in­cred­ibly men­da­cious” cat­e­gory bound­ary.)

Speak­ing of end­ing con­ver­sa­tions, I’m feel­ing pretty emo­tion­ally ex­hausted, and we seem to be spend­ing a lot of word­count on mu­tual mi­s­un­der­stand­ings, so un­less you have more things you want to ex­plain to me, maybe this should be the end of the thread? Thanks for the in­vi­go­rat­ing dis­cus­sion! This was way more pro­duc­tive than most of the con­ver­sa­tions I’ve had lately! (Which maybe tells you some­thing about the qual­ity of those other dis­cus­sions.)

• Happy to leave it here; I have a few fi­nal com­ments that are mostly just mak­ing ex­plicit things that I think we largely agree on. (But if any of them an­noy you, feel free to have the last word.)

1. Yeah, sorry, “es­sen­tially” may have been a bad choice of word. I meant “makes (in­ter alia) a point which is es­sen­tially that …” rather than “makes, as its most es­sen­tial part, the point that …”.

2. My apolo­gies for tak­ing you more liter­ally than in­tended. I agree that “it’s ar­bi­trary so you should do it my way” is nuts. On the other hand, “there’s an el­e­ment of choice here, and I’m choos­ing X be­cause of Y” seems (at least po­ten­tially) OK to me. I don’t know what spe­cific in­cred­ibly men­da­cious things you have in mind, but e.g. noth­ing in Scott’s TCWMFM strikes me as men­da­cious and I re­main un­con­vinced by your crit­i­cisms of it. (Not, I am fairly sure, be­cause I sim­ply don’t un­der­stand them.)

Fi­nally, my apolo­gies for any part of the emo­tional ex­haus­tion that’s the re­sult of things I said that could have been bet­ter if I’d been clev­erer or more sen­si­tive or some­thing of the kind.

• Meta: That com­ment had a bunch of bul­let points in it when I wrote it. Now (at least for me, at least at the mo­ment) they seem to have dis­ap­peared. Weird. [EDIT to clar­ify:] I mean that the bul­let sym­bols them­selves, and the in­den­ta­tion that usu­ally goes with them, have gone. The ac­tual words are still there.

• Our bad. We broke bul­let-lists with a re­cent up­date that also added au­tolink­ing. I am work­ing on a fix that should ideally go up tonight.

• Should be fixed now. Sorry for the in­con­ve­nience.

• My com­ment above is un­changed, which I guess means it was a pars­ing rather than a ren­der­ing prob­lem if the bug is now fixed.

• Do bul­let lists work now?

• If they do, this and the pre­vi­ous line should be bul­leted.

… Nope, still bro­ken, sorry. But it looks as if the ver­ti­cal spac­ing is differ­ent from what it would be if these were all or­di­nary para­graphs, so some­thing is be­ing done. In the HTML they are show­ing up as <li> el­e­ments, with­out any sur­round­ing <ul> or any­thing of the sort; I don’t know whether that’s what’s in­tended.

• Wait, that list is definitely bul­leted, and I also fixed your com­ment above. Are we see­ing differ­ent things?

• I don’t see bul­lets on Fire­fox 65.0.1, but I do on Chromium 72.0.3626.121 (both Xubuntu 16.04.5).

• Right. I’m us­ing Fire­fox and see no bul­lets. We’re in “Chrome is the new IE6” ter­ri­tory, I fear; no one both­ers test­ing things on Fire­fox any more. Alas!

• I have a PR that fixes it prop­erly. Should be up by Mon­day.

I usu­ally check browser com­pat­i­bil­ity, I just didn’t con­sider it in this case since I didn’t ac­tu­ally ex­pect that some­thing as old as bul­let lists would still have browser ren­der­ing differ­ences.

• My guess is it’s some browser in­con­sis­tency be­cause of or­phaned <li> el­e­ments. Will try to fix that as well.

• Cat­e­gories are never ar­bi­trary. They are cre­ated to serve pur­poses. They can serve those pur­poses bet­ter or worse. There can be mul­ti­ple pur­poses, lead­ing to mul­ti­ple cat­e­gories over­lap­ping and in­ter­sect­ing. Pur­poses can be lost (imag­ine a link to the Se­quences post­ing on lost pur­poses). “Ar­bi­trary” is a “buffer” or “lul­laby” word (imag­ine an­other link, I might put them in when I’m not writ­ing on a phone on a train) that ob­scures all that.

• It seems to me that you’re say­ing a bunch of things I already said, and say­ing them as if they are cor­rec­tions to er­rors I’ve made. For in­stance:

RK: “Cat­e­gories are never ar­bi­trary.” gjm: “cat­e­gories are not com­pletely ar­bi­trary.”

RK: “They are cre­ated to serve pur­poses.” gjm: “the rel­a­tive mer­its of these de­pend on the agent’s goals”

RK: “They can serve those pur­poses bet­ter or worse.” gjm: “Some cat­e­go­riza­tions are bet­ter than oth­ers [...] the rel­a­tive mer­its of these de­pend on the agent’s goals.”

So, any­way, I agree with what you say, but I’m not sure why you think (if you do—it seems like you do) I was us­ing “ar­bi­trary” as what you call a “lul­laby word”. I’m sorry if for you it ob­scured any of those points about cat­e­gories, though clearly it hasn’t stopped you notic­ing them; you may or may not choose to be­lieve me when I said it didn’t stop me notic­ing them ei­ther.

For what it’s worth, I think what I mean when I say “cat­e­gories are some­what ar­bi­trary” is al­most ex­actly the same as what you mean when you say “they are cre­ated to serve pur­poses”.

• Try­ing to think of some ex­am­ples, it seems to me that what mat­ters is sim­ply the pres­ence of fea­tures that are “de­ci­sion-rele­vant with re­spect to the agent’s goals”. [...]

So, I think my motivation (which didn’t make it into the parable) for the “cheap-to-detect features that correlate with decision-relevant expensive-to-detect features” heuristic is that I’m thinking in terms of naïve Bayes models. You imagine a “star-shaped” causal graph with a central node (whose various values represent the possible categories you might want to assign an entity to), with arrows pointing to various other nodes (which represent various features of the entity). (That is, we’re assuming that the features of the entity are conditionally independent given category membership: P(X|C) = Π_i P(X_i|C).) Then when we observe some subset of features, we can use that to update our probabilities of category-membership, and use that to update our probabilities of the features we haven’t observed yet. The “category” node doesn’t actually “exist” out there in the world—it’s something we construct to help factorize our probability distribution over the features (which do “exist”).
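A minimal sketch of this star-shaped model, for concreteness. The feature names and probabilities here are made up for illustration (they’re not from the original parable); the point is just the mechanism, where observing cheap features updates the category node, which in turn updates beliefs about expensive unobserved features:

```python
# Naive Bayes "category node" sketch. All numbers are illustrative.
priors = {"blegg": 0.5, "rube": 0.5}
# P(feature present | category); features are conditionally independent
# given the category, i.e. P(X|C) = prod_i P(X_i|C).
likelihoods = {
    "blegg": {"blue": 0.97, "egg_shaped": 0.95, "vanadium": 0.98},
    "rube":  {"blue": 0.03, "egg_shaped": 0.05, "vanadium": 0.02},
}

def posterior(observed):
    """P(category | observed), where observed maps feature -> bool."""
    scores = {}
    for c, prior in priors.items():
        p = prior
        for f, present in observed.items():
            p_f = likelihoods[c][f]
            p *= p_f if present else (1.0 - p_f)
        scores[c] = p
    total = sum(scores.values())
    return {c: s / total for c, s in scores.items()}

def predict_unobserved(observed, feature):
    """P(unobserved feature | observed), marginalizing over the category."""
    post = posterior(observed)
    return sum(post[c] * likelihoods[c][feature] for c in priors)

# Observing two cheap surface features makes the expensive-to-measure
# vanadium content nearly certain, via the inferred category.
p_vanadium = predict_unobserved({"blue": True, "egg_shaped": True}, "vanadium")
```

Note how the “category” variable never appears in the data itself: it only mediates inference from observed features to unobserved ones, which is exactly the sense in which it’s a construct for factorizing the distribution.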

So, as AI designers, we’re faced with the question of how we want the “category” node to work. I’m pretty sure there’s going to be a mathematically correct answer to this that I just don’t know (yet) because I don’t study enough and haven’t gotten to Chapter 17 of Daphne Koller and the Methods of Rationality. Since I’m not there yet, if I just take an intuitive amateur guess at how I might expect this to work, it seems pretty intuitively plausible that we’re going to want the category node to be especially sensitive to cheap-to-observe features that correlate with goal-relevant features? Like, yes, we ultimately just want to know as much as possible about the decision-relevant variables, but if some observations are more expensive to make than others, that seems like the sort of thing the network should be able to take into account, right??

Re­mem­ber those 2% of oth­er­wise or­di­nary bleggs that con­tain pal­la­dium? Per­son­ally, I’d want a cat­e­gory for those

I agree that “things that look like ‘bleggs’ that contain palladium” is a concept that you want to be able to think about. (I just described it in words, therefore it’s representable!) But while working on the sorting line, your visual system’s pattern-matching faculties aren’t going to spontaneously invent “palladium-containing bleggs” as a thing to look out for if you don’t know any way to detect them, whereas if adapted bleggs tend to look different in ways you can see, then that category is something your brain might just “learn from experience.” In terms of the naïve Bayes model, I’m sort of assuming that the 2% of palladium-containing non-adapted bleggs are “flukes”: that variable takes that value with that probability independently of the other blegg features. I agree that if that assumption were wrong, then that would be really valuable information, and if you suspect that assumption is wrong, then you should definitely be on the lookout for ways to spot palladium-containing bleggs.

But like, see this thing I’m at least try­ing to do here, where I think there’s learn­able statis­ti­cal struc­ture in the world that I want to de­scribe us­ing lan­guage? That’s pretty im­por­tant! I can to­tally see how, from your per­spec­tive, on cer­tain ob­ject-level ap­pli­ca­tions, you might sus­pect that the one who says, “Hey! Cat­e­gories aren’t even ‘some­what’ ar­bi­trary! There’s learn­able statis­ti­cal struc­ture in the world; that’s what cat­e­gories are for!” is se­cretly be­ing driven by ne­far­i­ous poli­ti­cal mo­ti­va­tions. But I hope you can also see how, from my per­spec­tive, I might sus­pect that the one who says, “Cat­e­gories are some­what ar­bi­trary; the one who says oth­er­wise is se­cretly be­ing driven by ne­far­i­ous poli­ti­cal mo­ti­va­tions” is se­cretly be­ing driven by poli­ti­cal mo­ti­va­tions that have pretty ne­far­i­ous con­se­quences for peo­ple like me try­ing to use lan­guage to rea­son about the most im­por­tant thing in my life, even if the psy­cholog­i­cal foun­da­tion of the poli­ti­cal mo­ti­va­tion is en­tirely kind­hearted.

• Since I’m not there yet, if I just take an intuitive amateur guess at how I might expect this to work, it seems pretty intuitively plausible that we’re going to want the category node to be especially sensitive to cheap-to-observe features that correlate with goal-relevant features? Like, yes, we ultimately just want to know as much as possible about the decision-relevant variables, but if some observations are more expensive to make than others, that seems like the sort of thing the network should be able to take into account, right??

I think the math­e­mat­i­cally cor­rect thing here is to use some­thing like the ex­pec­ta­tion max­i­miza­tion al­gorithm. Let’s say you have a dataset that is a list of el­e­ments, each of which has some sub­set of its at­tributes known to you, and the oth­ers un­known. EM does the fol­low­ing:

1. Start with some pa­ram­e­ters (pa­ram­e­ters tell you things like what the cluster means/​co­var­i­ance ma­tri­ces are; it’s differ­ent de­pend­ing on the prob­a­bil­is­tic model)

2. Use your pa­ram­e­ters, plus the ob­served vari­ables, to in­fer the un­ob­served vari­ables (and cluster as­sign­ments) and put Bayesian dis­tri­bu­tions over them

3. Do some­thing math­e­mat­i­cally equiv­a­lent to gen­er­at­ing a bunch of “vir­tual” datasets by sam­pling the un­ob­served vari­ables from these dis­tri­bu­tions, then set­ting the pa­ram­e­ters to as­sign high prob­a­bil­ity to the union of these vir­tual datasets (EM isn’t usu­ally de­scribed this way but it’s eas­ier to think about IMO)

4. Re­peat start­ing from step 2

This doesn’t assign any special importance to observed features. Since step 3 is just a function of the virtual datasets (not taking into account additional info about which variables are easy to observe), the fitted parameters take all the features, observable or not, into account. However, the hard-to-observe features are going to have more uncertainty to them, which affects the virtual datasets. With enough data, this shouldn’t matter that much, but the argument for this is a little complicated.

Another way to solve this problem (which is easier to reason about) is by fully observing a sufficiently high number of samples. Then there isn’t a need for EM: you can just do clustering (or whatever other parameter fitting) on the dataset (actually, clustering can be framed in terms of EM, but doesn’t have to be). Of course, this assigns no special importance to easy-to-observe features. (After learning the parameters, we can use them to infer the unobserved variables probabilistically.)
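The EM loop in steps 1–4 above can be sketched concretely for the simplest interesting case: a two-cluster, one-dimensional Gaussian mixture where the unobserved variable is the cluster assignment. This is a minimal illustration with made-up data, not production code (a real implementation would add convergence checks and numerical safeguards):

```python
import math
import random

random.seed(0)
# Two well-separated clusters of illustrative data (true means 0 and 5).
data = ([random.gauss(0.0, 1.0) for _ in range(200)]
        + [random.gauss(5.0, 1.0) for _ in range(200)])

def normal_pdf(x, mu, sigma):
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

# Step 1: start with some (deliberately bad) parameters.
mus, sigmas, weights = [-1.0, 1.0], [1.0, 1.0], [0.5, 0.5]

for _ in range(50):
    # Step 2: put a Bayesian distribution over the unobserved cluster labels.
    resp = []
    for x in data:
        p = [w * normal_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
        z = sum(p)
        resp.append([p_k / z for p_k in p])
    # Step 3: refit the parameters to the soft-labelled ("virtual") dataset.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        weights[k] = nk / len(data)
        mus[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigmas[k] = math.sqrt(
            sum(r[k] * (x - mus[k]) ** 2 for r, x in zip(resp, data)) / nk)
    # Step 4: repeat.

# The recovered means should land near the true cluster means, 0 and 5.
```

As the comment says, nothing here privileges easy-to-observe features: the E-step treats the cluster label like any other unobserved variable, and the M-step is just a function of the resulting soft-labelled dataset.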

Philo­soph­i­cally, “func­tions of eas­ily-ob­served fea­tures” seem more like per­cepts than con­cepts (this post de­scribes the dis­tinc­tion). Th­ese are still use­ful, and neu­ral nets are au­to­mat­i­cally go­ing to learn high-level per­cepts (i.e. func­tions of ob­served fea­tures), since that’s what the in­ter­me­di­ate lay­ers are op­ti­mized for. How­ever, a Bayesian in­fer­ence method isn’t go­ing to as­sign spe­cial im­por­tance to ob­served fea­tures, as it treats the ob­ser­va­tions as causally down­stream of the on­tolog­i­cal re­al­ity rather than causally up­stream of it.

• I share jes­si­cata’s feel­ing that the best set of con­cepts to work with may not be very sen­si­tive to what’s easy to de­tect. This might de­pend a lit­tle on how we define “con­cepts”, and you’re right that your vi­sual sys­tem or some other fairly “early” bit of pro­cess­ing may well come up with ways of lump­ing things to­gether, and that that will be de­pen­dent on what’s easy to de­tect, whether or not we want to call those things con­cepts or cat­e­gories or per­cepts or what­ever else.

But in the cases I can think of where it’s be­come ap­par­ent that some set of cat­e­gories needs re­fine­ment, there doesn’t seem to be a gen­eral pat­tern of bas­ing that re­fine­ment on the ex­is­tence of con­ve­nient de­tectable fea­tures. (Ex­cept in the too-gen­eral sense that ev­ery­thing ul­ti­mately comes down to em­piri­cal ob­ser­va­tion.)

I don’t think your poli­ti­cal mo­ti­va­tions are ne­far­i­ous, and I don’t think there’s any­thing wrong with a line of think­ing that goes “hmm, it seems like the way a lot of peo­ple think about X makes them mi­s­un­der­stand an im­por­tant thing in my life re­ally badly; let’s see what other ways one could think about X, be­cause they might be bet­ter”—other than that “hard cases make bad law”, and that it’s easy to fall into an equal-and-op­po­site er­ror where you think about X in a way that would make you mi­s­un­der­stand a re­lated im­por­tant thing in other peo­ple’s lives. The poli­ti­cal hot potato we’re dis­cussing here demon­stra­bly is one where some peo­ple have feel­ings that (so far as I can tell) are as strong as yours and of op­po­site sign, af­ter all. (Which may sug­gest, by the way, that if you want an ex­tra cat­e­gory then you may ac­tu­ally need two or more ex­tra cat­e­gories: “adapted bleggs” may have fun­da­men­tal in­ter­nal differ­ences from one an­other. [EDITED to add:] … And in­deed your other writ­ings on this topic do pro­pose two or more ex­tra cat­e­gories.)

I am con­cerned that we are tee­ter­ing on the brink of—if we have not already fallen into—ex­actly the sort of ob­ject-level poli­ti­cal/​ide­olog­i­cal/​per­sonal ar­gu­ment that I was wor­ried about when you first posted this. Words like “ne­far­i­ous” and “ter­ror­ist” seem like a warn­ing sign. So I’ll limit my re­sponse to that part of what you say to this: It is not at all my in­ten­tion to en­dorse any way of talk­ing to you, or any­one else, that makes you, or any­one else, feel the way you de­scribe feel­ing in that “don’t ne­go­ti­ate with ter­ror­ist meme­plexes” ar­ti­cle.

• I share jes­si­cata’s feel­ing that the best set of con­cepts to work with may not be very sen­si­tive to what’s easy to de­tect. [...] there doesn’t seem to be a gen­eral pat­tern of bas­ing that re­fine­ment on the ex­is­tence of con­ve­nient de­tectable features

Yeah, I might have been on the wrong track there. (Jes­sica’s com­ment is great! I need to study more!)

I am con­cerned that we are tee­ter­ing on the brink of—if we have not already fallen into—ex­actly the sort of ob­ject-level poli­ti­cal/​ide­olog­i­cal/​per­sonal ar­gu­ment that I was wor­ried about

I think we’re a safe dis­tance from the brink.

Words like “ne­far­i­ous” and “ter­ror­ist” seem like a warn­ing sign

“Ne­far­i­ous” ad­mit­tedly prob­a­bly was a high-emo­tional-tem­per­a­ture warn­ing sign (oops), but in this case, “I don’t ne­go­ti­ate with ter­ror­ists” is mostly func­tion­ing as the stan­dard stock phrase to evoke the time­less-de­ci­sion-the­o­retic “don’t be ex­tortable” game-the­ory in­tu­ition, which I don’t think should count as a warn­ing sign, be­cause it would be harder to com­mu­ni­cate if peo­ple had to avoid gen­uinely use­ful metaphors be­cause they hap­pened to use high-emo­tional-valence words.

• Can you say more? What should the de­scrip­tion say in­stead? (I’m guess­ing you’re refer­ring to the fact that the post has some sub­text that prob­a­bly isn’t a good topic fit for Less Wrong? But I would ar­gue that the text (us­ing the blegg/​rube parable set­ting to make an­other point about the cog­ni­tive func­tion of cat­e­go­riza­tion) to­tally is rele­vant and po­ten­tially in­ter­est­ing!)

• “Fan­fic­tion for the blegg/​rube parable” and “to make an­other point about the cog­ni­tive func­tion of cat­e­go­riza­tion” are both com­pletely ig­nor­ing the very large elephant in the rather small room.

The ac­tual topic of the piece is clearly the cur­rently hot topic of How To Think About Trans Peo­ple. (Words like “trans” and “gen­der” are never men­tioned, but it be­comes ob­vi­ous maybe four or five para­graphs in.) Which is a suffi­ciently mind­kil­ling topic for suffi­ciently many peo­ple that maybe it’s worth men­tion­ing.

(Or maybe not; you might ar­gue that ac­tu­ally read­ers are more likely to be able to read the thing with­out get­ting mind­kil­led if their at­ten­tion isn’t drawn to the mind­kil­ling im­pli­ca­tions. But I don’t think many of those likely to be mind­kil­led will miss those im­pli­ca­tions; bet­ter to be up front about them.)

• bet­ter to be up front about them

… you’re right. (I like the aes­thet­ics of the “de­ni­able alle­gory” writ­ing style, but delu­sion­ally ex­pect­ing to get away with it is try­ing to have one’s cake and eat it, too.) I added a “Con­tent no­tice” to the de­scrip­tion here.

• I know it’s rather a side is­sue, but per­son­ally I hate the “de­ni­able alle­gory” style, though LW is prob­a­bly a bet­ter fit for it than most places …

1. The temp­ta­tion to say liter­ally-X-but-im­ply­ing-Y and then re­spond to some­one ar­gu­ing against Y with “oh, but I wasn’t say­ing that at all, I was only say­ing X; how very un­rea­son­able of you to read all that stuff into what I wrote!” is too of­ten too difficult to re­sist.

2. Even if the de­ni­able-alle­gorist re­frains from any such shenani­gans, the fear of them (as a re­sult of be­ing hit by such things in the past by de­ni­able alle­gorists with fewer scru­ples) makes it an un­pleas­ant busi­ness for any­one who finds them­selves dis­agree­ing with any of the im­pli­ca­tions.

3. And of course the rea­son why that tac­tic works is that of­ten one does mi­s­un­der­stand the im­port of the alle­gory; a mode of dis­cus­sion that in­vites mi­s­un­der­stand­ings is (to me) dis­agree­able.

4. The alle­gor­i­cal style can say, or at least ges­ture to­wards, a lot of stuff in a small space. This means that any­one try­ing to re­spond to it in literal style is li­able to look like an awful pedant. On the other hand, if you try to meet an alle­gory with an­other alle­gory, (a) that’s hard to do well and (b) af­ter one or two rounds the chances are that ev­ery­one is talk­ing past ev­ery­one else. Which might be fun but prob­a­bly isn’t pro­duc­tive.

• When I first read the post, I did not notice any reference to any mindkilling topics, and was actually quite confused and surprised when I saw the comments about all of this being about something super political; I still found the post moderately useful. So I do think that I am a counterexample to your “I don’t think many of those likely to be mindkilled will miss those implications” argument.

• I’m not sure you are, since it seems you weren’t at all mind­kil­led by it. I could be wrong, though; if, once you saw the im­pli­ca­tions, it took non­triv­ial effort to see past them, then I agree you’re a coun­terex­am­ple.

• Thanks. In ret­ro­spect, pos­si­bly a bet­ter ap­proach for this venue would have been to care­fully rewrite the piece for Less Wrong in a way that strips more sub­text/​con­ceals more of the elephant (e.g., cut the “dis­re­spect­ing that effort” para­graph).

• I think, to make it work for my con­cep­tion of LW, you’d also want to ac­knowl­edge other ap­proaches (stay­ing with 2 cat­e­gories and weight­ing the at­tributes, stay­ing with 2 cat­e­gories and just ac­knowl­edg­ing they’re im­perfect, giv­ing up on cat­e­gories and spec­i­fy­ing at­tributes in­di­vi­d­u­ally, pos­si­bly with pre­dic­tions of hid­den at­tributes, adding more cat­e­gories and choos­ing based on the di­men­sion with biggest de­vi­a­tion from av­er­age, etc.), and iden­tify when they’re more ap­pro­pri­ate than your preferred ap­proach.

• WTF. I didn’t down­vote (un­til now), but didn’t see any point to so many words ba­si­cally say­ing “la­bels are lossy com­pres­sion, get over it”.

Now that I ac­tu­ally no­tice the web­site name and un­der­stand that it’s an alle­gory for a de­bate that doesn’t be­long here (un­less gen­der cat­e­go­riza­tion some­how is im­por­tant to LW posts), I be­lieve it also doesn’t be­long here. I be­lieve that it doesn’t be­long here re­gard­less of which side I sup­port (and I don’t have any clue what the de­bate is, so I don’t know what the lines are or which side, if any, I sup­port).

• Quick note that the mod team had been ob­serv­ing this post and the sur­round­ing dis­cus­sion and not 100% sure how to think about it. The post it­self is suffi­ciently ab­stracted that un­less you’re already aware of the poli­ti­cal dis­cus­sion, it seemed fairly in­nocu­ous. Once you’re aware of the poli­ti­cal dis­cus­sion it’s fairly blatant. It’s un­clear to me how bad this is.

I do not have much con­fi­dence in any of the poli­cies we could pick and stick to here. I’ve been mostly satis­fied with the re­sult­ing con­ver­sa­tion on LW stay­ing pretty ab­stract and meta level.

• Per­haps also worth not­ing: I was look­ing through two other re­cent posts, Tale of Alice Al­most and In My Cul­ture, through a similar lens. They each give me the im­pres­sion that they are re­lat­ing in some way to a poli­ti­cal dis­pute which has been ab­stracted away, with a vague feel­ing that the re­sult­ing post may some­how still be a part of the poli­ti­cal strug­gle.

I’d like to have a moderation policy (primarily about whether such posts get frontpaged) that works regardless of whether I actually know anything about any behind-the-scenes drama. I’ve mulled over a few different such policies, each of which would result in different outcomes as to which of the three posts would get frontpaged. But in each case the three posts are hovering near the edge of however I’d classify them.

(The mod team was fairly divided on how important a lens this was and/or exactly how to think about it, so just take this as my own personal thoughts for now.)

• My current model is that I am in favor of people trying to come up with general analogies, even if they are in the middle of thinking about mindkilling topics. People have all kinds of weird motivations for writing posts, and trying to judge and classify based on them is going to be hard and will set up weird metacognitive incentives. Just deciding whether something is useful for trying to solve problems in general has overall pretty decent incentives, and allows us to channel a lot of people’s motivations about political topics into stuff that is useful in a broader context. (And I think some of Sarah Constantin’s stuff is a good example of ideas that I found useful completely separate from the political context, and where I am quite glad she tried to abstract them away from the local political context that probably made her motivated to think about those things.)

unless [...] categorization somehow is important to LW posts

Categorization is hugely relevant to Less Wrong! We had a whole Sequence about this!

Of course, it would be preferable to talk about the epistemology of categories with non-distracting examples if at all possible. One traditional strategy for avoiding such distractions is to abstract the meta-level point one is trying to make into a fictional parable about non-distracting things. See, for example, Scott Alexander’s “A Parable on Obsolete Ideologies”, which isn’t actually about Nazism; or rather, I would say, is about something more general than Nazism.

Unfortunately, this is extremely challenging to do well: most writers who attempt this strategy fail to be subtle enough, and the parable falls flat. For this they deserve to be downvoted.

• So I think my filter for “appropriate to LessWrong” is that it should be an abstraction and generalization, NOT a parable or obfuscation of a specific topic. If there is a clean mapping to a current hot-button issue, the author should do additional diligence to find counterexamples (the cases where more categories are costly, or where some dimensions are important for some uses and not for others, so you should use tagging rather than categorization) in order to actually define a concept rather than just restating a preference.
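The tagging-versus-categorization distinction above can be read as a data-modeling choice. A minimal sketch (the object names, tags, and `select` helper are invented for illustration, not taken from the post):

```python
# Exclusive categorization: each object gets exactly one label,
# so all of its dimensions must be collapsed into a single verdict.
categories = {"obj1": "blegg", "obj2": "rube"}

# Tagging: each object carries a set of independent properties,
# and different tasks query different subsets of them.
tags = {
    "obj1": {"blue", "egg-shaped", "furred"},
    "obj2": {"red", "cube-shaped", "smooth"},
}

def select(objects, required):
    """Return the objects carrying all of the required tags."""
    return [name for name, t in objects.items() if required <= t]

# A sorter that only cares about color can ignore shape entirely:
print(select(tags, {"blue"}))  # ['obj1']
```

The point of the sketch is that a tag query can use whichever dimensions matter for the task at hand, whereas an exclusive category forces one partition to serve every use.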

• I think it is worth pointing out explicitly (though I expect most readers noticed) that Dagon wrote “unless gender categorization is important” and Zack turned it into “unless … categorization is important” and then said “Categorization is hugely relevant”. And that it’s perfectly possible that (1) a general topic can be highly relevant in a particular venue without it being true that (2) a specific case of that general topic is relevant there. And that most likely Dagon was not at all claiming that categorization is not an LW-relevant topic, but that gender categorization in particular is a too-distracting topic.

(I am not sure I agree with what I take Dagon’s position to be. Gender is a very interesting topic, and would be even if it weren’t one that many people feel very strongly about, and it relates to many very LW-ish topics, including, as Zack says, that of categorization more generally. Still, it might be that it’s just too distracting.)

• The right word to elide from my objection would be “categorization”: I should have said “unless gender is important”, as that’s the political topic I don’t think we can/should discuss here. Categorization in mathematical abstraction is on-topic, as would be a formal definition/mapping of a relevant category to mathematically-expressible notation.

Loose, informal mappings of non-relevant topics are not useful here.

And honestly, I’m not sure how bright my line is: I can imagine topics related to gender, or other human-relationship topics that tend to bypass rationality, being meta-discussed here, especially if it’s about raising the sanity waterline on such topics and understanding what goes wrong when they’re discussed at the object level. I doubt we’d get good results if we had direct object-level debates or points made here on those topics.

• I think I roughly agree with this, though the LW team definitely hasn’t discussed this at length yet, so this is just my personal opinion until I’ve properly checked in with the rest of the team.