# Bobertron

Karma: 326
• I understand your post to be about difficult truths related to politics, but you don't actually give examples (except "what Trump has said is 'emotionally true'"), and the same idea applies to simplifications of complex material in science etc. I just happened upon an example from a site teaching drawing in perspective (source):

Now you may have heard of terms such as one point, two point or three point perspective. These are all simplifications. Since you can have an infinite number of different sets of parallel lines, there are technically an infinite number of potential vanishing points. The reason we can simplify this whole idea to three, two, or a single vanishing point is because of boxes.

[…] Because of this, people like to teach those who are new to perspective that the world can be summarized with a maximum of 3 vanishing points.

Honestly, this confused me for years

The author was lied to about the possible number of vanishing points in a drawing. But instead of realizing the falsehood, he was confused.

• Suppose X is the case. When you say "X", your interlocutor will believe Y, which is wrong. So, even though "X" is the truth, you should not say it.

Your new idea as I understand it: Suppose saying "Z" will lead your interlocutor to believe X. So, even though saying "Z" is, technically, lying, you should say "Z" because the listener will come to hold a true belief.

(I'm sorry if I misunderstood you or you think I'm being uncharitable. But even if I misunderstood, I think others might misunderstand in a similar way, so I feel justified in responding to the above concept.)

First, I dislike that approach because it makes things harder for people who could understand, if only others would stop lying to them, or who would prefer to be told the truth along the lines of "study macroeconomics for two years and you will understand".

Second, that seems to me to be a form of the-ends-justify-the-means reasoning that, even though I think of myself as a consequentialist, I'm not 100% comfortable with. I'm open to the idea that sometimes it's okay, and even proper, to say something that's technically untrue, if it results in your audience coming to have a truer world-view. But if this "sometimes" isn't explained or restricted in any way, that's just throwing out the idea that you shouldn't lie.

Some ideas on that:

• Make sure you don't harm your audience because you underestimate them. If you simplify or modify what you say, to the point that it can't be considered true any more, because you think your audience is limited in their capacity to understand the correct argument, make sure you don't make it harder to understand the truth for those who can. That includes the people you underestimated, people you didn't intend to address but who heard you all the same, and people who really won't understand now, but will later. (Children grow up, and people who don't care enough to follow complex arguments might come to care.)

• It's not enough that your audience comes to believe something true. It needs to be justified true belief. Or alternatively, your audience should not only believe X but know it. (For a discussion of what is meant by "know", see most of the field of epistemology, I guess.) For example, if you tell people that voting for candidate X will give them cancer, and they believe you, they might come to the correct belief that voting for candidate X is bad for them. But saying that is still unethical.

• I guess if you could give people justified true belief, it wouldn't be lying at all, and the whole idea is that you need to lie because some people are incapable of justified true belief on matter X. But then it should at least be "justified in some sense". Particularly, your argument shouldn't work just as well if "X" were false.

• When playing around in the sandbox, simpleton always beat copycat (using the default values and a population of only simpleton and copycat). I don't understand why.
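For anyone who wants to poke at this matchup directly, here is a rough simulation sketch. Everything in it is my assumption about what the sandbox does, not something stated in the thread: Simpleton as win-stay-lose-shift, Copycat as tit-for-tat, 10 rounds per match, a 5% chance of each move being flipped by mistake, and payoffs from a "cooperating costs 1 coin, the other side gains 3" rule.

```python
import random

# Payoffs (my_move, their_move) -> my payoff; assumed from the
# "cooperating costs 1 coin, the other side gains 3" rule.
PAYOFF = {("C", "C"): 2, ("C", "D"): -1, ("D", "C"): 3, ("D", "D"): 0}

def flip(move, noise, rng):
    """With probability `noise`, the intended move comes out wrong."""
    if rng.random() < noise:
        return "D" if move == "C" else "C"
    return move

def match(rounds=10, noise=0.05, rng=None):
    """One match: returns (copycat score, simpleton score)."""
    rng = rng or random.Random()
    cc_score = sp_score = 0
    cc_seen = None        # last move copycat observed from simpleton
    sp_seen = None        # last move simpleton observed from copycat
    sp_own = "C"          # simpleton's own last executed move
    for _ in range(rounds):
        # Copycat: cooperate first, then copy the opponent's last move.
        cc_intent = "C" if cc_seen is None else cc_seen
        # Simpleton: cooperate first; repeat own last move if the
        # opponent cooperated, otherwise switch (win-stay, lose-shift).
        if sp_seen is None:
            sp_intent = "C"
        elif sp_seen == "C":
            sp_intent = sp_own
        else:
            sp_intent = "D" if sp_own == "C" else "C"
        cc_move = flip(cc_intent, noise, rng)
        sp_move = flip(sp_intent, noise, rng)
        cc_score += PAYOFF[(cc_move, sp_move)]
        sp_score += PAYOFF[(sp_move, cc_move)]
        cc_seen, sp_seen, sp_own = sp_move, cc_move, sp_move
    return cc_score, sp_score

rng = random.Random(0)
results = [match(rng=rng) for _ in range(10000)]
avg_cc = sum(r[0] for r in results) / len(results)
avg_sp = sum(r[1] for r in results) / len(results)
print(f"copycat: {avg_cc:.2f}  simpleton: {avg_sp:.2f}")
```

If these rules match the sandbox's, the interesting part is what happens after a mistake: tit-for-tat echoes the defection back and forth, while win-stay-lose-shift can land back on mutual cooperation (or briefly exploit the echo), which would be one candidate explanation for the gap.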

• "Just being stupid" and "just doing the wrong thing" are rarely helpful views

I agree. What I meant was something like: If the OP describes a skill, then the first problem (the kid that wants to be a writer) is so very easy to solve that I feel I'm not learning much about how that skill works. The second problem (Carol) seems too hard for me. I doubt it's actually solvable using the described skill.

I think this misses the point, and damages your "should" center

Potentially, yes. I'm deliberately proposing something that might be a little dangerous. I feel my "should" center is already broken and/or doing me more harm than good.

"Smoking is bad for my health," "On net I think smoking is worth it," and "I should do things that I think are on net worth doing."

That's definitely not good enough for me. I never smoked in my life. I don't think smoking is worth it. And if I were smoking, I don't think I would stop just because I think it's a net harm. And I do think that, because I wouldn't want to think about the harm of smoking or the difficulty of quitting, I'd avoid learning about either of those two.

ADDED: The first meaning, "I should-1 do X", is "a rational agent would do X". The second meaning (idiosyncratic to me), "I should-2 do X", is that "do X" is the advice I need to hear. should-2 is based on my (mis)understanding of Consequentialist-Recommendation Consequentialism. The problem with should-1 is that I interpret "I should-1 do X" to mean that I should feel guilty if I don't do X, which is definitely not helpful.

• Interesting article. Here is the problem I have: In the first example, "spelling ocean correctly" and "I'll be a successful writer" clearly have nothing to do with each other, so they shouldn't be in a bucket together, and the kid is just being stupid. At least on first glance, that's totally different from Carol's situation. I'm tempted to say that "I should not try full force on the startup" and "there is a fatal flaw in the startup" should be in a bucket, because I believe "if there is a fatal flaw in the startup, I should not try it". As long as I believe that, how can I separate these two and not flinch?

Do you think one should allow oneself to be less consistent in order to become more accurate? Suppose you are a smoker and you don't want to look into the health risks of smoking, because you don't want to quit. I think you should allow yourself, in some situations, to both believe "I should not smoke because it is bad for my health" and to continue smoking, because then you'll flinch less. But I'm fuzzy on when. If you completely give up on having your actions be determined by your beliefs about what you should do, that seems obviously crazy, and there won't be any reason to look into the health risks of smoking anyway.

Maybe you should model yourself as two people. One person is rationality. It's responsible for determining what to believe and what to do. The other person is the one that queries rationality and acts on its recommendations. Since rationality is a consequentialist with integrity, it might not recommend quitting smoking, because then the other person would stop acting on its advice and stop giving it queries.

• Here are some things that I, as an infrequent reader, find annoying about the LW interface.

• The split between main and discussion doesn't make any sense to me. I always browse /r/all. I think there shouldn't be such a distinction.

• My feed is filled with notices about meetups in faraway places that are pretty much guaranteed to be irrelevant to me.

• I find the most recent open thread pretty difficult to find in the side bar. For a minute I thought it just wasn't there. I'd like it if the recent open thread and rationality quotes were stickied at the top of /r/discussion.

• I don't get this (and I don't get Benquo's OP either; I don't really know any statistics, only some basic probability theory).

"The process has a 95% chance of generating a confidence interval that contains the true mean." I understand this to mean that if I run the process 100 times, 95 of the resulting CIs contain the true mean. Therefore, if I look at a random CI amongst those 100, there is a 95% chance that the CI contains the true mean.
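The frequentist reading above can be checked by simulation. This is a minimal sketch under arbitrary choices of mine (a standard normal population, sample size 50, 2000 trials, and the usual mean ± 1.96·s/√n interval): build the interval repeatedly and count how often it covers the known true mean.

```python
import math
import random
import statistics

def coverage(n_trials=2000, n=50, true_mean=0.0, seed=1):
    """Fraction of 95% confidence intervals that contain the true mean."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        sample = [rng.gauss(true_mean, 1.0) for _ in range(n)]
        m = statistics.mean(sample)
        half = 1.96 * statistics.stdev(sample) / math.sqrt(n)
        if m - half <= true_mean <= m + half:
            hits += 1
    return hits / n_trials

print(coverage())  # close to 0.95
```

Each individual interval either does or doesn't cover the fixed true mean; the 95% is a property of the interval-generating procedure, which is exactly the quoted sentence's point.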

• "Effective self-care" or "effective well-being".

Okay. The "effective" part in "Effective Altruism" refers to the tool (rationality). "Altruism" refers to the values. The cool thing about "Effective Altruism", compared to rationality (as in LW or CFAR), is that it's specific enough that it allows a community to work on relatively concrete problems. EA is mostly about the global poor, animal welfare, existential risk and a few others.

What I'd imagine "effective self-care" to be about is such things as health, fitness, happiness, positive psychology, life-extension, etc. It wouldn't be about "everything that isn't covered by effective altruism", as that's too broad to be useful. Things like truth and beauty wouldn't be valued (aside from their instrumental value) by either altruism or self-care.

"Effective Egoism" sounds like the opposite of Effective Altruism. Like they are enemies. "Effective self-care" sounds like it complements Effective Altruism. You could argue that effective altruists should be interested in spreading effective self-care both amongst others, since altruism is about making others better off, and amongst themselves, because if you take good care of yourself you are in a better position to help others, and if you are efficient about it you have more resources to help others.

On the negative side, both terms might sound too medical. And self-care might sound too limited compared to what you might have in mind. For example, one might be under the impression that "self-care" is concerned with bringing happiness levels up to "normal" or "average", instead of super duper high.

• None of this is a much my strongly held beliefs as my attempt to find flaw with the "nuclear blackmail" argument.

I don't understand. Could you correct the grammar mistakes or rephrase that?

The way I understand the argument, it isn't that the status quo in the level B game is perfect. It isn't that Trump is a bad choice because his level B strategy takes too much risk and is therefore bad. I understand the argument as saying: "Trump doesn't even realize that there is a level B game going on, and even when he finds out, he will be unfit to play in that game."

• As I understand it, you are criticizing Yudkowsky's ideology. But MrMind wants to hear our opinion on whether or not Scott's and Yudkowsky's reasoning was sound, given their ideologies.

• I've read those two books after LW. Assuming you have read the sequences: It wasn't a total waste, but from my memory I would recommend What Intelligence Tests Miss only if you have an interest specifically in psychology, IQ, or the heuristics and biases field. I would not recommend it simply because you have a casual interest in rationality and philosophy ("LW-type stuff") or if you've read other books about heuristics and biases. The Robot's Rebellion is a little more speculative and therefore more interesting. The Robot's Rebellion and What Intelligence Tests Miss also have a significant overlap in covered material.

• I haven't read "Good and Real" or "Thinking, Fast and Slow" yet, because I think that I won't learn much new as a long-term Less Wrong reader. In the case of "Good and Real", part of it seems to be about physics, and I don't think I have the physics background to profit from that (I feel a refresher on high school physics would be more appropriate for me). In the case of "Thinking, Fast and Slow", I have already read books by Keith Stanovich (What Intelligence Tests Miss and The Robot's Rebellion) and some chapters of academic books edited by Kahneman.

Does anyone think those two books are still worth my time?

• "Verständnis" seems totally wrong to me. It's from the verb "verstehen" (to understand, to comprehend). It usually means "understanding" ("meinem Verständnis nach" → "according to my understanding"). Maybe if you use it in a sentence?

I think "Vermutung" (and its synonyms) is pretty much what I was looking for. Maybe it's even better than "belief" in some ways, since "belief" suggests a higher degree of confidence than "Vermutung" does.

"unterstützen" (to support something) seems right, thanks. But it's useful to have nouns. Also, "das unterstützt deine Behauptung nicht" ("that doesn't support your claim") is much wordier than "that's not evidence".

"Evidenz ist all das, was eine Vermutung unterstützt." ("Evidence is all that which supports a conjecture.")

• A different German speaker here.

In English you have a whole cloud of related words: mind, brain, soul, I, self, consciousness, intelligence. I don't think it's much of a problem that German does not have a perfect match for "mind". The "mind-body problem" would be the "Leib-Seele-Problem", where "Seele" would usually be translated as "soul". The German Wikipedia page for philosophy of mind does use the English word "mind" once, to distinguish that meaning of "Geist" from a different concept from Hegel that I had never heard of before ("Weltgeist").

Then again, I don't have much need to discuss philosophy of mind with the people around me, so maybe that's why I don't feel the need for a German word that is more like "mind".

But I do have massive problems with talking about epistemological concepts in German. Help from other German speakers would be very welcome. I don't know how to talk about "degrees of belief" in German, or what to call those things that get updated when we learn new evidence ("beliefs" in English).

If you translate the noun "a belief" into German ("ein Glaube") and back into English, it will always come out as "faith" (as in "the Buddhist faith" or in "having faith in redemption"). A different candidate would be "Überzeugung", but that literally means conviction (something you believe with absolute certainty). Hardly seems like a good word for talking about uncertainty. Wikipedia uses "Grad an Überzeugung" ("degree of conviction") to translate "degrees of belief", but gives the English in parentheses to make sure the meaning is clear. I don't like it. "Eine Überzeugung" sounds wrong.

"Evidence" is another difficult one. The closest might be "Beweis", but that means "proof". Then there is "Evidenz", but I've only ever seen that word used to translate "evidence-based medicine". The average German would be unlikely to know that word.

But I wonder if Less Wrong has given me a skewed view of the English language. Maybe the way LW uses "belief" wouldn't feel so natural to the average native speaker. Maybe the average native speaker has quite a different notion of what "evidence" means.

• I intuitively feel that there really are objective morals (or: objective mathematics, actual free will, tables and chairs, minds).

Therefore, there really are objective morals (etc.).

"Morals" is just a word. But unlike some other words, it's not 100% clear to me what it means. There is no physical entity that "morals" clearly refers to. There is no agreed-upon list of axioms that defines what "morals" is. That's why, to me, "there are objective morals" doesn't feel entirely like a factual statement.

I might justify that there are objective morals by relying on my intuition. But that's not because I think intuitions are reliable sources of knowledge. It's because I think intuitions are the correct normative source of how we use words (together with common usage, I guess).

It's still possible that my intuitions contradict each other, or that they contradict facts. So they are not sufficient to say with confidence that objective morals exist. But they are relevant.

• Naruto is the opposite of Tsuioku Naritai. It's the story of "everyone had something to protect and practiced like mad, but none of it made a huge difference and most everyone would have been about as powerful anyway".

But the series clearly wants to be "Tsuioku Naritai". The good guys all value hard work. Maybe the show is hypocritical, then.

I'm not sure if the message that sticks with the people who watch Naruto is what the characters say (work hard) or how the show actually develops (be born special).

• I actually really like that you have to spend a resource to learn new information and that the score depends on luck. I.e., you use limited resources to optimize the gamble you are making. That seems like a very good description of how life works; only, it's all transparent and quantified in your game.

Some suggestions:

• In the tutorial, why do I first get to read a description of a picture and only then get presented with the picture? Obviously, it should be the other way around.

• You should be able to advance the text with the mouse.

• It should be easier to distinguish new text from old. I think in visual novels, the text box never "scrolls": if the new text doesn't fit into the text box, or to start a new paragraph, the text box is cleared. You could make separate text boxes for the current message and the history.

• The confusing notation is a real distraction and saps away a lot of the potential fun. Understanding the notation actually seems more interesting than winning the game, but I have too little information to understand it, which leads to frustration. Why are there two big boxes with normal nodes? Why do normal nodes have all those boxes instead of a simple bar that shows the probability? Why do Bayes nodes have all those rows instead of just two bars? What are the grey bars? How do 'and' and 'or' nodes work? I would think that one input corresponds to the vertical division and one input to the horizontal division. It should be more obvious which node is which (by having the input enter on that side of the box). The connections between nodes did not have arrows. If I understood the game correctly, arrows would help distinguish inputs from outputs.

• The effect of clicking on a node shouldn't be instant. At first, it should probably go step by step: you click on a node and reveal its truth value (some text appears explaining which node changed and why). Press a key → the next affected node gets updated, until all affected nodes are updated. Later, you don't have to click; there is a small pause between each change. That way you could see the effect of measuring a node and understand why the effect was the way it was, instead of trying to work that information out for yourself while only being able to see the aftermath.

• You should make it more linear. Put the tutorial and the main game into one. I don't see the use of this decision between introductory and intermediate psychology.

• Have the player start out with much simpler networks and infinite energy.

• Introduce new types of nodes during the game, not all at once in the tutorial. Every time you introduce something new, go back to simple networks with unlimited energy.

Of those, explaining or simplifying the notation seems the most important to me.