# query

• 16 Jun 2018 5:22 UTC
6 points

P(vulcan mountain | you’re not in vulcan desert) = 1/3

P(vulcan mountain | guard says “you’re not in vulcan desert”) = P(guard says “you’re not in vulcan desert” | vulcan mountain) * P(vulcan mountain) / P(guard says “you’re not in vulcan desert”) = ((1/3) * (1/4)) / ((3/4) * (1/3)) = 1/3

Whoops, you’re right; never mind! There are algorithms that do give different results, such as justinpombrio mentions above.
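As a sanity check on that arithmetic, here is a quick Monte Carlo sketch of a guard who names, uniformly at random, one of the three places you’re not in. (The two place names besides the Vulcan Mountain and Vulcan Desert are made up for illustration.)

```python
import random

random.seed(0)
places = ["mountain", "desert", "forest", "city"]  # "forest"/"city" are stand-ins

says_not_desert = in_mountain_too = 0
for _ in range(400_000):
    actual = random.choice(places)
    # The guard names, uniformly at random, one place you are NOT in.
    statement = random.choice([p for p in places if p != actual])
    if statement == "desert":
        says_not_desert += 1
        if actual == "mountain":
            in_mountain_too += 1

# Conditional frequency of "mountain" given the guard said "not desert"
print(in_mountain_too / says_not_desert)  # ≈ 1/3, matching the Bayes computation
```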

• EDIT: This was wrong.

The answer varies with the generating algorithm of the statement the guard makes.

In this example, he told you that you were not in one of the places you’re not in (the Vulcan Desert). If he always does this, then the probability is 1/4; if you had been in the Vulcan Desert, he would have told you that you were not in one of the other three.

If he always tells you whether or not you’re in the Vulcan Desert, then once you hear him say you’re not, your probability of being in the Vulcan Mountain is 1/3.

• Definitely makes sense. A commonly cited example is women in an office workplace; what would be average assertiveness for a male is considered “bitchy”, but they still suffer roughly the same “weak” penalties for non-assertiveness.

With the advice-giving aspect, some situations likely come from people not knowing what levers they’re actually pulling. Adam tells David to move his “assertiveness” lever, but there’s no affordance available to David by moving that lever; he would actually have to move an “assertiveness + social skill W” lever which he doesn’t have, but which feels like a single lever for Adam called “assertiveness”. Not all situations are like this; there’s no “don’t be a woman” or “don’t be autistic” lever. Sometimes there’s some other solution by moving along a different dimension, and sometimes there’s not.

• 3 May 2018 22:39 UTC
14 points

Feel similarly; since Facebook comments are a matter of public record, disputes and complaints on them are fully public and can have high social costs if unaddressed. I would not be worried about it in a small group chat among close friends.

• I perceive several different ways something like this happens to me:

1. If I do something that strains my working memory, I’ll have an experience of having a “cache miss”. I’ll reach for something, and it won’t be there; I’ll then attempt to pull it into memory again, but usually this is while trying to “juggle too many balls”, and something else will often slip out. This feels like it requires effort/energy to keep going, and I have a desire to stop and relax and let my brain “fuzz over”. Eventually I’ll get a handle on an abstraction, verbal loop, or image that lets me hold it all at once.

2. If I am attempting to force something creative, I might feel like I’m paying close attention to “where the creative thing should pop up”. This is often accompanied by frustration and anxiety, and I’ll feel like my mind is otherwise more blank than normal as I keep an anxious eye out for the creative idea that should pop up. This is a “nothing is getting past the filter” problem; too much prune and not enough babble for the assigned task. (Not to say that always means you should babble more; maybe you shouldn’t try this task, or shouldn’t do it in the social context that’s causing you to rightfully prune.)

3. Things can just feel generally aversive or boring. I can push through this by rehearsing convincing evidence that I should do the thing—this can temporarily lighten the aversion.

All 3 of these can eventually lead to head pain/fogginess for me.

I think this “difficulty thinking” feeling is a mix of cognitive ability, subject domain, emotional orientation, and probably other stuff. Mechanically having less short-term memory makes #1 more salient whatever you’re doing. Some people probably have more mechanical “spark” or creative intelligence in certain ways, affecting #2. Having less domain expertise makes #1 and maybe #2 more salient, since you have fewer abstractions and less raw material to work with. Lottery of interests, social climate, and any pre-made aversions like having a bad teacher for a subject will factor into #3. Sleep deprivation worsens #1 and #2, but improves #3 for me (since other distractions are less salient).

I think this phenomenon is INSANELY IMPORTANT; when you see people who are 10x or 100x more productive in an area, I think it’s almost certainly because they’ve gotten past all of the necessary thresholds to not have any fog or mechanical impediment to thinking in that area. There is a large genetic component here, but to focus on things that might be changeable to improve these areas:

• Using paper or software when it can be helpful.

• Working in areas you find natively interesting or fun. Alternatively, framing an area so that it feels more natively interesting or fun (although that seems really, really hard). Finding subareas that are more natively fun to start with and expanding from there; for instance, when learning about programming, trying out some different languages to see which you enjoy most. It’ll be easier to learn necessary things about one you dislike after you’ve learned a lot in the framework of one you like.

• Getting into a social context that gives you consistent recognition for doing your work. This can be a chicken-and-egg problem in competitive areas.

• Eliminating unrelated stressors and setting up a life that makes you happier and more fulfilled; I had worse brain fog about math when in bad relationships.

• Eating different food. There are too many potential dietary interventions to list (many contradictory); I had a huge improvement from avoiding anything remotely in the “junk food” category and trying to eat things that are in the “whole food” category.

• Exercise.

• Stimulant drugs for some people; if you have undiagnosed ADHD, try to get diagnosed and medicated.

I really wish I had spent more time in the past working on these meta-problems, instead of beating my head against a wall of brain fog.

# Follow Standard Incentives

25 Apr 2018 1:09 UTC
13 points
• A note on this, which I definitely don’t mean to apply to the specific situations you discuss (since I don’t know enough about them):

If you give people stronger incentives to lie to you, more people will lie to you. If you give people strong enough incentives, even people who value truth highly will start lying to you. Sometimes they will do this by lying to themselves first, because that’s what is necessary for them to successfully navigate the incentive gradient. This can be changed by their self-awareness and force of will, but some who make that change will find themselves in the unfortunate position of being worse off for it. I think a lot of people view the necessity of telling such lies as the fault of the person creating the bad incentive gradient; even if they value truth internally, they might lie externally and feel justified in doing so, because they view it as being forced upon them.

An example is a married couple, living together and nominally dedicated to each other for life, when one partner asks the other “Do I look fat in this?”. If there is significant punishment for saying Yes, and not much ability to escape such punishment by breaking up or spending time apart, then it takes an exceedingly strong will to still say “Yes”. And a person with a strong will who does so then suffers for it, perhaps continually for many years.

If you value truth in your relationships, you should not only focus on giving and receiving the truth in one-off situations; you should set up the incentive structures in your life, through the relationships you pick and how you respond to people, to optimally give and receive the truth. If you are constantly punishing people for telling you the truth (even if you don’t feel like you’re punishing them, even if your reactions feel like the only possible ones in the moment), then you should not be surprised when most people are not willing to tell you the truth. You should recognize that, if you’re punishing people for telling you the truth (for instance, by giving lots of very uncomfortable outward displays of high stress), then there is an incentive for people who highly value speaking truth to stay away from you as much as possible.

• I think I may agree with the status version of the anti-hypocrisy flinch. It’s the epistemic version I was really wanting to argue against.

Ok yeah, I think my concern was mostly with the status version—or rather that there’s a general sensor that might combine those things, and the parts of it related to status and social management are really important, so you shouldn’t just turn the sensor off and run things manually.

… That doesn’t seem like treating it as being about epistemics to me. Why is it epistemically relevant? I think it’s more like a naive mix of epistemics and status. Status norms in the back of your head might make the hypocrisy salient and feel relevant. Epistemic discourse norms then naively suggest that you can resolve the contradiction by discussing it.

I was definitely unclear; my perception was that the speaker was claiming “person X has negative attribute Y, (therefore I am more deserving of status than them)” and that, given a certain social frame, who is deserving of more status is an epistemic question. Whereas actually, the person isn’t oriented toward really discussing who is more deserving of status within the frame, but rather is making a move to increase their status at the expense of the other person’s.

I think my sense that “who is deserving of more status within a frame” is an epistemic question might be assigning more structure to status than is actually there for most people.

• I will see if I can catch a fresh one in the wild and share it. I recognize your last paragraph as something I’ve experienced before, though, and I endorse the attempt to not let that grow into righteous indignation and annoyance without justification—with that as the archetype, I think that’s indeed a thing to try to improve.

Most examples that come to mind for me have to do with the person projecting identity, knowledge, or an aura of competence that I don’t think is accurate. For instance, holding someone else to a social standard that they don’t meet: “I think person X has negative attribute Y” when the speaker has also recently displayed Y in my eyes. I think the anti-hypocrisy instinct I have is accurate in most of those cases: the conversation is not really about epistemics, it’s about social status and alliances, and if I try to treat it as about epistemics (by, for instance, naively pointing out the ways the other person has displayed Y) I may lose utility for no good reason.

• As you say, there are certainly negative things that hypocrisy can be a signal of, but you recommend that we should just consider those things independently. I think trying to do this sounds really, really hard. If we were perfect reasoners this wouldn’t be a problem; the anti-hypocrisy norm should indeed just be the sum of those hidden signals. However, we’re not; if you practice shutting down your automatic anti-hypocrisy norm, and replace it with a self-constructed non-automatic consideration of alternatives, then I think you’ll do worse sometimes.

This has sort of a “valley of bad rationality” feel to me; I imagine trying to have legible, coherent thoughts about alternative considerations while ignoring my gut anti-hypocrisy instinct, and that reliably failing me in social situations where I should’ve just gone with my instinct.

I notice the argument I’m making applies generally to all “override social instinct” suggestions, and I think that you should sometimes try to override your social instincts—but I do think that there are huge valleys of bad rationality near this, so I’d take extreme care about it. My guess is that you should override them much less than you do—or I have a different sense of what “overriding” is.

• One hypothesis is that consciousness evolved for the purpose of deception—Robin Hanson’s “The Elephant in the Brain” is a decent read on this, although it does not address the Hard Problem of Consciousness.

If that’s the case, we might circumvent its usefulness by having the right goals, or strong enough detection and norm-punishing behaviors. If we build factories that are closely monitored, where faulty machines are destroyed or repaired, and our goal is output instead of survival of individual machines, then the machines being deceptive will not help with that goal.

If somehow the easy and hard versions of consciousness separate (i.e., things which don’t functionally look like the conscious part of human brains end up “having experience” or “having moral weight”), then this might not solve the problem even under the deception hypothesis.

• Some reader might be thinking, “This is all nice and dandy, Quaerendo, but I cannot relate to the examples above… my cognition isn’t distorted to that extent.” Well, let me refer you to UTexas CMHC:

> Maybe you are being realistic. Just for the sake of argument, what if you’re only 90% realistic and 10% unrealistic? That means you’re worrying 10% “more” than you really have to.

Not intending to be overly negative, but this is not a good argument for anything, and it also doesn’t answer the hypothetical question of not relating to the examples. It sounds like, “You’re not perfect along this dimension, so you should devote energy to it!”—which is definitely not the case.

I appreciate the list of distortions; such lists are nice raw material.

• > For most questions you can’t really compute the answer. You need to use some combination of intuition and explicit reasoning. However, this combination is indeed more trustworthy than intuition alone, since it allows treating at least some aspects of the question with precision.

I don’t think this is true; intuition + explicit reasoning may have more of a certain kind of inside-view trust (if you model intuition as not having gears that can be trusted), but intuition alone can definitely develop more outside-view/reputational trust. Sometimes explicitly reasoning about the thing makes you clearly worse at it, and you can account for this over time.

> Finally, it is the explicit reasoning part which allows you to offset the biases that you know your reasoning to have, at least until you have trained your intuition to offset these biases automatically (assuming this is possible at all).

I also don’t think this is as clear-cut as you’re making it sound; explicit reasoning is also subject to biases, and intuitions can be the things which offset biases. As a quick and dirty example, even if your explicit reasoning takes the form of mathematical proofs which are verifiable, you can have biases about 1. which ontologies you use as your models to write proofs about, 2. which things you focus on proving, and 3. which proofs you decide to give. You can also have intuitions which push to correct some of these biases. It is not the case that intuition → biased, explicit reasoning → unbiased.

Explicit reflection is indeed a powerful tool, but I think there’s a tendency to confuse legibility with ability; someone can have the capacity to do something (like use an intuition to correct a bias) that is illegible to others or to themselves. It is hard to transmit such abilities, and without good external proof of their existence or transmissibility we are right to be skeptical and withhold social credit in any given case, lest we be misled or cheated.

• If you choose to “care more” about something, and as a result other things get less of your energy, you are socially less liable for the outcome than if you intentionally choose to “care less” about a thing directly. For instance, “I’ve been really busy” is a common and somewhat socially acceptable excuse for not spending time with someone; “I chose to care less about you” is not. So even if your one and only goal was to spend less time on X, it may be more socially acceptable to do that by adding Y as cover.

Social excusability is often reused as internal excusability.

• Some reasons this is bad:

1. It’s false, or not-even-wrong (“worthless parody of a human” is not something that I imagine epistemically applies to any human ever).

2. It’s mixing epistemics and shoulds—even if you categorized yourself as a misery pit, this does not come close to meaning you should throw yourself under a bus.

3. Misery pits are a false framework that may be useful for modeling phenomena, but may not be a useful model for people who would tend to identify themselves as misery pits. For instance, if they were likely to think the quoted thought, they’d be committing a lot of bucket errors.

I also dislike this comment because I think it’s too glib.

• I think it’s a memetic-adaptation type of thing. I would claim that attempting to open up group usage of NVC will also (in a large enough group) open up the usage of “language that appears NVC-ish even if it’s against the stated philosophy”. I think that this type of language provides cover for power plays (re: the broken link to the fish-selling scenario), and that using the language in a way that maintains boundaries requires the group to adapt and be skillful enough at detecting these violations. It is not enough if you do so as an individual if your group does not lend support; it may be enough if as an individual you are highly skilled at defending yourself in a way that does not lose face (and practicing NVC might raise that skill level), but it’s harder than in the alternative scenario.

I’m definitely not trying to object to NVC in general, but I’m worried about it as a large-group social style. I think the failures of it as a large-group style would mostly appear as relatively silent status transfers to the less virtuous.

Also, these arguments are not super specific to NVC and Circling, so they should probably be abstracted. I think any large-scale group communication change has similar bad potential, and it’s an object-level question whether that actually happens. With NVC, I’ve seen some dynamics in churches that remind me of it, which is why I raise the worry. I think I would feel queasy, and like I was being attacked, if someone started using NVC language at me in a public setting in front of others; I definitely feel like I’ve been “fish-sold” before.

It’s entirely possible that there exist large groups with a high enough skill level or different values so that this is not a problem at all, and my experience is just too limited.

• This is incorrect, and I think it only sounds like an argument because of the language you’re choosing; there’s nothing incoherent about 1. preferring evolutionary pressures that look like Moloch to exist so that you end up existing rather than not existing, and 2. wanting to solve Moloch-like problems now that you exist.

Also, there’s nothing incoherent about wanting to solve Moloch-like problems now that you exist, regardless of whether Moloch-like things caused you to come into existence. Our values are not evolution’s values, if that even makes sense.

• I’m not an expert, but I think MD5 isn’t the best for this purpose due to collision attacks. If it’s a very small plain-English ASCII message, then collision attacks are probably not a worry (I think?), but it’s probably better to use something like SHA-2 or SHA-3 anyway.
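For concreteness, Python’s standard hashlib exposes all three; the message below is just a placeholder:

```python
import hashlib

msg = b"example message"  # placeholder content

# MD5: fast, but collision attacks are practical; avoid where collisions matter.
print(hashlib.md5(msg).hexdigest())

# SHA-2 and SHA-3 family members; both currently considered collision-resistant.
print(hashlib.sha256(msg).hexdigest())
print(hashlib.sha3_256(msg).hexdigest())
```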

• Yeah, this definitely seems like a bug; permalinks to comments shouldn’t require this. Unfortunately, I don’t see any obvious way to report a bug.