# mwengler

Karma: 1,820
• I’d like to join; how do I do that?

• I suppose you might be right for some people. For me, the fact that repeating infinite decimal expansions are rational is deeply, deeply ingrained. Since your post is essentially about how to square your feelings with what turns out to be mathematically true, you have a lot of room for disagreement, as there is no contradiction in different people feeling different ways about the same facts.

For me the most fun thing about 0.9999… is that 1/9 = .11111… and therefore 9 × 1/9 = 9 × .11111…, and this last expression obviously = .99999…

You should also do a search on “right” in your post and edit it; you use “right” once where you really need “write.” I think it is “right down” instead of “write down,” but I’ll let you do the looking.

• The OP states:

A very good question is “what kinds of objects are these, anyway?” Since we have an infinite decimal they can’t be rational numbers.

This is just wrong. A rational number is a number that can be written as a fraction of two integers. Lots of infinite decimals are rational numbers: 1/3 = .3333333..., 1/9 = .1111111..., 1/7 = .142857142857142857..., etc.
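The point generalizes: any purely repeating decimal 0.(d₁…d_k) equals the integer d₁…d_k divided by 10^k − 1, so it is a ratio of integers by construction. A minimal Python sketch of that identity (the function name is my own, just for illustration):

```python
from fractions import Fraction

def repeating_to_fraction(digits: str) -> Fraction:
    """Convert a purely repeating decimal 0.(digits) to an exact fraction.

    Uses the identity 0.(d1..dk) = d1..dk / (10^k - 1),
    e.g. 0.142857142857... = 142857 / 999999, which reduces to 1/7.
    """
    return Fraction(int(digits), 10 ** len(digits) - 1)

print(repeating_to_fraction("3"))       # 1/3
print(repeating_to_fraction("1"))       # 1/9
print(repeating_to_fraction("142857"))  # 1/7
print(repeating_to_fraction("9"))       # 0.999... reduces to exactly 1
```

Note the last line: 0.999… comes out as 9/9, which `Fraction` reduces to exactly 1, the same fact the comment above derives from 9 × 1/9.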

• Clearly we can differentiate between different-location-same-time and different-location-different-time. Two things in different-location-same-time are different things. Two things in different-location-different-time may be the same thing or may be different things, depending on the path through time. Your mathematical style of abstraction in thinking about identity will only be useful at explaining the real world if it is matched to real-world processes and does not ignore important real-world insights.

• if we adopt the idea that consciousness could be different without any physical difference between the copies, we adopt the idea of p-zombies and reject physicalism, the modern version of materialism. It is almost the same as saying that an immaterial soul exists. It is a very strong statement.

Not relevant to the problem. If you create a copy of me, the copy is not identical, if for no other reason than it occupies a different location than I do. I agree that if it occupied the same location that I do, atom for atom and quark for quark, that could lead to the concern you express. But copies cannot occupy the same location, and so there is no problem having the copy to the left be one consciousness while the original to the right is a different consciousness.

The strongest claim I might accept would be that both the original and the copy have “valid” claims to be the continuation of the pre-copying single consciousness that was me back then. But no matter how you slice it, killing the original when you make the copy is still destroying a separate consciousness, even if the remaining consciousness thinks it is the only continuation of the pre-copy consciousness.

• 1) I do not understand why our experience of identical twins does not play into most discussions of my copy being “the same person as me.” We know that twins do not share the same consciousness (unless Occam’s razor is wrong and they are all lying). We know from that that if we made a copy without destroying the original, the copy and the original would not share a consciousness. So why don’t these discussions consider at least the possibility (I would estimate the overwhelming likelihood) that a copy is a different consciousness than the original, that destroying the original kills one consciousness while making a copy creates a different consciousness, and that these are separate processes?

2) Does philosophy talk somewhere about what I would call “outer” and “inner” worlds? I know I’m conscious because I participate in my inner world. I figure by Occam’s razor that you are conscious, but I don’t have direct experience of your consciousness, because I can only see you in my outer world. We don’t talk anywhere near as much about the “inner” world because we don’t share it with others, while our “outer” experiences are shared, and we have evolved a host of techniques, including language and science, for processing “outer” experiences. But “inner” experiences don’t benefit from language and science because they are, so far, locked away inside us, not social phenomena. I think the idea that our copy is a continuation of our own consciousness is a mistake we can make if we don’t realize there is an “inner” experience quite distinct from our “outer” experiences.

So sure, my copy thinks he is continuously conscious, and therefore may think my consciousness has jumped into him, but that is because, to my copy, I am part of his “outer” world. If a non-destructive copy of me were made, I think it is obvious from what we know about twins that, despite my copy’s eloquence at explaining his continuity from me, I, the original, would resist being killed as superfluous. Yes, in everybody else’s outer world, where the consciousness of others is indirectly inferred, they can’t tell that my copy is not a continuation of my consciousness in separate matter. But in my inner world, it seems pretty clear that I (the original) would know.

• My first thoughts reading your post are 1) You start WAY TOO LATE IN THE GAME. You are essentially talking about altruism as a conscious choice, which means you are well into the higher mammals.

Virtually every sexually reproducing creature devotes resources to reproduction that could have been conserved for individual survival. As you move up in complexity, you have animals feeding their young and performing other services for them. As would be expected with all evolved cooperation, the energy and cost you expend raising your young produce more survivable young, and so are net cost-effective at getting the next generation going, which is pretty much what spreads genes.

How big of a leap is it from a mama bird regurgitating food into her baby’s mouth to you helping your neighbor hunt for woolly mammoth?

If you were the first organism to get the gene to feed your babies, or to do whatever expanded their survivability, then obviously that is how that gene propagates: your babies have the gene.

As you get to the more complex forms of altruism of primates and humans, you also get strong feedback mechanisms against non-cooperators and free-riders. The system may not be perfect, but I think it allows a path from feeding babies or burying eggs in the sand to modern altruism in humans, with no weird “how do we start this” behaviors bumping up to stop things.

• I may not understand the question’s point, because as I read it the answer is a very obvious “Yes.” We determined Newton’s laws and Maxwell’s equations from observations of our world. So the planets in orbit around the sun, the moon around the earth, and an apple falling to the ground all lead to gravitation. The attraction between wires carrying current in the same direction (magnetic), the functioning of transformers (a change in magnetic field produces an electric field), and radio and light all fit together to give Maxwell’s equations.

So yes, a world with the same macroscopic physical observations as ours does not violate Newton’s or Maxwell’s laws, because our world with those observations doesn’t violate those laws. If Newton’s or Maxwell’s equations were different, the world you saw would necessarily be different.

What am I missing here?

• That Artificial Intelligence is going to do a lot of the same things that Natural Intelligence does.

• Taboo “faith”; what do you mean specifically by that term?

Good idea. I mean that EVERYBODY, rationalist atheist and Christian alike, starts with an axiom or assumption.

In the case of rationalist atheists (or at least some, such as myself), the axioms started with are things like 1) truth is inferred with semi-quantifiable confidence from evidence supporting hypotheses, and 2) explanations like “god did it” or “alpha did it” or “a benevolent force of the universe did it” are disallowed. I think some people are willing to go circular, allowing the axioms to remain implicit and then “proving” them along the way: I see no evidence for a conscious personality with supernatural powers. But I do claim that is circular: you can’t prove anything without knowing how you prove things, and so you can’t prove how you prove things by applying how you prove things without being circular.

So for me, I support my rationalist atheist point of view by appealing to the great success it has had in advancing engineering and science; by pointing to the richness of the connections to data, the “obvious” consistency of geology with a 4-billion-year-old earth, and the “obvious” consistency of evolution from common ancestors with similar structures across species, right down to the ADP-ATP cycle and DNA.

But a theist is doing the same thing. They START with the assumption that there is a powerful conscious being running both the physical and the human worlds. They marvel at the brilliance of the design of life to support their claim, even though it can’t prove their axioms. They marvel at the richness of the human moral and emotional world as more support for the richness and beauty of a conscious and good creation.

Logically, there is no logic without assumptions. Deduction needs something to deduce from. I like Occam’s razor and naturalism because my long exposure to them leaves me feeling very satisfied with their ability to describe many things I think are important. Other people like theism because their long exposure to it leaves them feeling very satisfied with its ability to describe, and even prescribe, the things they think are important.

I am not aware of a definitive way to challenge axioms, and I don’t think there is one at the level I think of it.

• This comment is in reply to some ideas in the comments below.

In my opinion, my rationality is as faith-based as is a religious person’s religious belief.

Among my highest values is “being right” in the sense of being able to instrumentally affect or predict the world. I want to be able to communicate across long distances, to turn combustible fuel into safe transportation, to correctly predict what an interstellar probe will find, and to be able to build an interstellar probe that will work. Looking at the world, I see much more success in endeavors like these from science and rationality than from religiosity or appeals to god. And so I adopt rationality, as it supports my values.

I also want to raise healthy, happy, “good” children. I am pretty sure I could “help” my one child who dabbles in alcohol, drugs, and petty theft by going to church with him. I’ve known many people who are effective at doing things I see as good because, it seems, of their religious beliefs and participation in churches and religious communities. I liked being a Lutheran for a few years. One night I told our pastor that I just didn’t believe in god. He told me he thought half the church had that happening. Even so, I couldn’t stay engaged.

I feel the loss of religious faith as a sorrow, or a pain, or a burr under my saddle, or something. But I can’t justify it, or more importantly, I can only pretend to believe; actual belief does not seem to me to be a real option anymore.

And it turns out I have enough “faith” in scientific rationalism that I won’t even pretend I believe in god. I choose to believe that staying consistent with rational principles will pay off more for me and those I care about than falling back to the more accessible morality of religious faith. It is a leap of faith, especially in light of “rationalists win.” If my son were to become a heroin addict and devote his life to petty theft, jail, and shooting up, AND I could have prevented that by bringing him to church, I will have paid a price for my faith, as much as any Christian martyr who was harmed, or whose family was harmed, because he did not deny his Christian belief.

People who think their rationality does not come from a faith they possess remind me of religious people who think their belief in god is just right, that it does not come from a faith that they possess or have chosen.

• I’m not sure which is correct. I’m not that familiar with the nuts and bolts of utilitarianism.

As with so many things, if there is more than one way to interpret something, there is generally not much to be gained by choosing an interpretation that makes it an error when there is an interpretation that makes sense. Clearly, if a new charity is set up that takes twice the cost to provide the same benefit, and people switch donations from the cheaper charity to the more expensive one, the utility produced has been decreased compared to the counterfactual where the new, more expensive charity was not set up.

So whatever terminology you prefer: 1) opportunity cost is a real thing, and arguably the only good way to compare money to food quantitatively, and 2) whatever the terminology, the point of the original article is a decrease in utility from adding a charity, which is a sensible idea and well within the bounds of reasonable interpretation of the title under question.

• I think that if a charity had negative utility, that would imply that burning a sum of money would be preferable to donating that money to that charity.

If there are two charities, one which feeds a homeless population for $3/day and a second which feeds the same population the same food for $6/day, AND people tend to give some amount of money to one charity or the other, but not both, then it seems pretty reasonable to describe the utility of the more expensive charity as negative. It is not that it would be better to burn my contribution, but rather that I am getting $3 worth of good from a $6 donation. Out-and-out burning money being superior to donating it is not the only way to interpret negative utility.

If you have $6 to give towards feeding the homeless, it would be better to burn $2 and donate $4 to the cheaper provider than to give the entire $6 to the more expensive charity. But only in the same sense that it would be better to burn $3,000 and buy a particular car for $10,000 than to burn no money and buy that exact same car for $14,000. Wherever there are better and worse deals, burning less than the full savings can be worked in as part of a superior choice. This does not have anything to do with whether these are charities or for-profit businesses.
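The arithmetic above can be sketched in a few lines, using the comment’s hypothetical $3/day and $6/day figures (the function name and numbers are illustrative only, not from any real charity):

```python
# Hypothetical figures from the comment: the cheap charity feeds someone
# for $3/day, the expensive one for $6/day.
CHEAP, EXPENSIVE = 3.0, 6.0

def days_of_food(budget: float, cost_per_day: float, burned: float = 0.0) -> float:
    """Days of food funded after optionally burning part of the budget."""
    return (budget - burned) / cost_per_day

# Giving the whole $6 to the expensive charity funds 1 day of food,
# while burning $2 and giving the remaining $4 to the cheap charity
# funds about 1.33 days: the burn-some-money option still comes out ahead.
print(days_of_food(6, EXPENSIVE))        # 1.0
print(days_of_food(6, CHEAP, burned=2))  # 1.33...
```

The same comparison works for the car example: burning $3,000 and paying $10,000 still leaves you $1,000 better off than paying $14,000 for the identical car.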

• ‘no pig’ > ‘happy pig + surprise axe’ > ‘sad pig + surprise axe’

Would this also mean

‘no pig’ > ‘happy pig + surprise predator’ > ‘sad pig + surprise predator’

I don’t think nature is generally any better than (some kinds of) farming for prey animals. Should vegans be benefitting from lowering the birth rates among wild animals?

Or for that matter, does it also mean ‘no human’ > ‘happy human + eventual death’ > ‘sad human + eventual death’?

Even in nature, all life is alive, and then it dies, almost always in a way it would not choose or enjoy. Does life just suck? Are we bad actors for having children?

• Most vegetarians would think that activities that normally make animals suffer are bad in themselves.

Presumably the moral win in reducing or eliminating the suffering of farmed meat would have more to do with non-vegetarians than vegetarians. But really, is the point here to do something better than is already done, or is it to impress vegetarians?

• Would it be ethical to grow meat in a vat without a brain associated with it? Personally, I think pretty clearly yes.

Breeding suffering out of animals would seem to be between growing meat in a vat and what we have now. So it would seem to be a step in the right direction.

We, and animals, almost certainly have suffering because it had survival value in the environment in which we evolved. Being farmed for meat is not that environment. I don’t think removing suffering from our farmed animals has a downside. Of course, removing it from wild animals would probably not be a good thing, but it would probably correct itself relatively quickly in the failure of non-suffering animals to survive.

• Never heard of Circling until your post. Looked it up and initially found nothing going on in San Diego (California, US). I wonder if it is more of a European thing?

If you know how I can find something local to San Diego, CA, US, please let me know.

• I do think rationality is a niche. I had a conversation with a not-particularly-bright administrative assistant at work where she expressed the teachings of Jehovah’s Witnesses as straightforward truth. She talked some about the chaos of her life (drugs, depression) before joining them. As I expressed the abstract case for, essentially, being careful about what one believes, it seemed clear enough to me that she had little or nothing to gain by being “right” (or rather adopting my opinion, which is more likely to be true in a Bayesian sense), and she seemed fairly clearly to have something to lose. I, on the other hand, have a philosopho-physicist’s values and also value finding regular (non-theological) truths by carefully rejecting my biases, so I was making a choice that (probably) makes sense for me.

When my 14-year-old daughter (now 16 and doing much better) was “experimenting” with alcohol, marijuana, and shoplifting, I had a “come to Jesus” talk with my religious cousin. She told me that I knew right from wrong and that I was doing my daughter no favors by teaching her skepticism above morality. I decided she was essentially correct, and that some of my own “skepticism” was actually self-serving, letting me off the hook for some stealing I had done from employers starting when I was about 15.

• I view rationality as a thing we can do with our neocortex. But clearly we have a functional emotional brain that “knows” there are monsters or tigers when we are afraid of the dark and “knows” that girls we are attracted to are also attracted to us. I continue to question whether I am doing myself or my children any real favors by being as devoted to this particular feature of my neocortex as I am.

• Does “value the welfare of others” necessarily mean “consciously value the welfare of others”? Is it wrong to say “I know how to interpret human sounds into language and meaning” just because I can do it? Or do I have to demonstrate that I know how by deconstructing the process to the point that I can write an algorithm (or computer code) to do it?

The idea that we cannot value the welfare of computers seems ludicrously naive and misinterpretative. If I can value the welfare of a stranger, then clearly the thing whose welfare I value is not defined too tightly. If a computer (running the right program) displays some of the features that signal to me that a human is something I should value, why couldn’t I value the computer? We watch animated shows and value and have empathy for all sorts of animated entities. In all sorts of stories we have empathy for robots or other mechanical things. The idea that we cannot value the welfare of a computer flies in the face of the evidence that we can empathize with all sorts of non-human things, fictional and real. In real life, we value and have human-like empathy for animals, fish, and even plants in many cases.

I think the interpretations or assumptions behind this paper are bad ones. Certainly, they are not brought out explicitly and argued for.

• Yes, there is a class of investment strategies which go by the name of “liquidity constrained”. If there is a small… market inefficiency out of which you can extract, say, $100,000/year but no more, none of the big investment firms would bother—it’s not worth their time. But for an individual it often is.

Can you please say more about these and how to find them?