# Bob Jacobs

Karma: 343
• There’s evidence in the form of observations of events outside the cartesian boundary. There’s evidence in internal process of reasoning, whose nature depends on the mind.

My previous comment said:

both empirical and tautological evidence

With “empirical evidence” I meant “evidence in the form of observations of events outside the cartesian boundary”, and with “tautological evidence” I meant “evidence in internal process of reasoning, whose nature depends on the mind”.

When doing math, evidence comes up more as a guide to intuition than anything explicitly considered. There are also metamathematical notions of evidence, rendering something evidence-like clear.

Yes, but they are both “information that indicates whether a belief is more or less valid”. Mathematical proof is also evidence, so they have the same structure. Do you have a way to ground them? Or if you somehow have a way to ground one form of proof but not the other, could you share just the one? (Since the structure is the same, I suspect that the grounding of one could also be applied to the other.)

EDIT: Based on the reply I think it’s fair to say that this discussion is going around in circles. I’m not sure why you’re not interested in engaging with my definition (or questions), but since this is rather unproductive for both of us I have elected to stop commenting.

• I meant both empirical and tautological evidence, so general information that indicates whether a belief is more or less valid. When you say that you can keep track of truth, why do you believe you can? What is that truth based on, evidence?

# [Question] How do I get rid of the ungrounded assumption that evidence exists?

15 Oct 2020 8:02 UTC
5 points

# Sortition Model of Moral Uncertainty

8 Oct 2020 17:44 UTC
8 points
• It might be interesting to distinguish between “personal hingeyness” and “utilitarian hingeyness”. Humans are not utilitarians, so we care mostly about stuff that’s happening in our own lives; when we die, our personal tree stops and we can’t get more hinges. But “utilitarian hingeyness” continues, as it describes all possible utility. I made this with population ethics in mind, but you could totally use the same concept for your personal life; then the most hingey time for you and the most hingey time for everyone will be different.

I’m not sure I understand your last paragraph, because you didn’t clarify what you meant by the word “hingeyness”. If you meant “the range of the total amount of utility you can potentially generate” (aka hinge broadness) or “the amount by which that range shrinks” (aka hinge reduction), it is possible to draw a tree where the first tick of an 11-tick tree has just as broad a range as an option in the 10th tick. So the hinge broadness and the hinge reduction can be just as big in the 10th tick as in the 1st, but not bigger. I don’t think you’re talking about “hinge shift”, but maybe you were talking about hinge precipiceness instead, in which case: yes, that can totally be bigger in the 10th tick.

• If in the first image we replace the 0 with a −100 (much wider), what happens? The number of endings for 1 is still larger than for 3. The number of branches for 1 is still larger than for 3. The width of the range of the possible utility of the endings is [−100 to 8] for 1 and [−100 to 6] for 3 (smaller). The width of the range of the total amount of utility you could generate over the future branches is [1->3->−100 = −96 up to 1->2->8 = 11] for 1 and [3->−100 = −97 up to 3->6 = 9] for 3 (smaller). Is this a good example of what you’re trying to convey? If not, could you maybe draw an example tree to show me what you mean?

• Ending in negative numbers wouldn’t change anything. The number of endings will still shrink, the number of branches will still shrink, the range of the possible utility of the endings will still shrink or stay the same length, and the range of the total amount of utility you could generate over the future branches will also shrink or stay the same length. Try it! Replace any number in any of my models with a negative number, or draw your own model, and see what happens.

• If we draw a tree of all possible timelines (and there is an end to the tree), the older choices will always have more branches that sprout out because of them. If we are purely looking at the possible endings, then the 1 in the first image has a range of 4 possible endings, but 2 only has 2 possible endings. If we’re looking at branches, then the 1 has a range of 6 possible branches, while 2 only has 2 possible branches. If we’re looking at ending utility, then 1 has a range of [0-8] while 2 only has [7-8]. If we’re looking at the range of possible utility you can experience, then 1 has a range from 1->3->0 = 4 utility all the way to 1->2->8 = 11 utility, while 2 only has 2->7 = 9 to 2->8 = 10.
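The path arithmetic above can be sketched as code. This is a minimal illustration, not anything from the original post: the tuple encoding of the tree and the function names are my own, with the numbers taken from the first-image example (1 branching into 2 and 3, with endings 7, 8 and 0, 6).

```python
# Toy model of hingeyness: a timeline tree where each node carries a
# utility payoff and total utility is the sum of payoffs along a
# root-to-leaf path. A tree is (payoff, [child trees]).

def leaves(tree):
    """All ending utilities reachable from this subtree."""
    payoff, children = tree
    if not children:
        return [payoff]
    return [leaf for child in children for leaf in leaves(child)]

def path_sums(tree):
    """Total utility of every root-to-leaf path."""
    payoff, children = tree
    if not children:
        return [payoff]
    return [payoff + s for child in children for s in path_sums(child)]

def hinge_broadness(tree):
    """Width of the range of total utility over all future branches."""
    sums = path_sums(tree)
    return max(sums) - min(sums)

# The tree from the first image: 1 -> {2 -> {7, 8}, 3 -> {0, 6}}.
tree = (1, [(2, [(7, []), (8, [])]),
            (3, [(0, []), (6, [])])])
node2 = (2, [(7, []), (8, [])])

print(sorted(leaves(tree)))                          # [0, 6, 7, 8]
print(min(path_sums(tree)), max(path_sums(tree)))    # 4 11
print(min(path_sums(node2)), max(path_sums(node2)))  # 9 10
```

Swapping the 0 for a −100, as in the comment above, reproduces the −96 and −97 path sums the same way.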

When we talk about the utility of endings it is possible that the range doesn’t change. For example:

(I can’t post images in comments so here is a link to the image I will use to illustrate this point)

Here the “range of utility in endings” that tick 1 (the first 10) has is [0-10], and the range of endings the first 0 has (tick 2) is [0-10], which is the same. Of course the probability has changed (getting an ending of 1 utility is not even an option anymore), but the minimum and maximum stay the same.

Now the width of the range of the total amount of utility you could potentially experience can also stay the same. For example, the lowest utility tick 1 can experience is 10->0->0 = 10 utility and the highest is 10->0->10 = 20 utility. The difference between the lowest and highest is 10 utility. The lowest total utility that the 0 on tick 2 can experience is 0->0 = 0 utility and the highest is 0->10 = 10 utility, which is once again a difference of 10 utility. The probability has changed (ending with a weird number like 19 is impossible for tick 2). The range has also shifted downwards from [10-20] to [0-10], but the width stays the same.

It just occurred to me that some people may find the shift in range also important for hingeyness. Maybe call that ‘hinge shift’?

Crucially, in none of these definitions is it possible to end up with a wider range later down the line than when you started.

# A Toy Model of Hingeyness

7 Sep 2020 17:38 UTC
16 points
• I know LessWrong has become less humorous over the years, but this idea popped into my head when I made my bounty comment and I couldn’t stop myself from making it. Feel free to downvote this shortform if you want the site to remain a super serious forum. For the rest of you: here is my wanted poster for the reference class problem. Please solve it, it keeps me up at night.

• Thanks for replying to my question, but although this was nicely written it doesn’t really solve the problem. So I’m putting up a $100 bounty for anyone on this site (or outside it) who can solve this problem by the end of next year. (I don’t expect it will work, but it might motivate some people to start thinking about it.)

• I’ve touched on this before, but it would be wise to take your meta-certainty into account when calibrating. It wouldn’t be hard for me to claim 99.9% accurate calibration by just making a bunch of very easy predictions (an extreme example would be buying a bunch of different dice and making predictions about how they’re going to roll). My post goes into more detail, but TLDR: by trying to predict how accurate your prediction is going to be, you can start to distinguish between “harder” and “easier” phenomena. This makes it easier to compare different people’s calibration and allows you to check how good you really are at making predictions.
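The dice example above can be made concrete with a short simulation. This is my own hypothetical sketch, not code from the linked post: it shows that stating 5/6 confidence in “this die won’t roll a 6” over many trials yields a near-perfect calibration score with zero forecasting skill, which is exactly why raw calibration alone is gameable.

```python
# Gaming calibration with "easy" predictions: predict "not a 6" on a
# fair die with stated confidence 5/6, many times. The empirical hit
# rate lands almost exactly on the stated confidence, so the
# calibration gap is tiny even though no real skill is involved.
import random

random.seed(0)  # fixed seed so the simulation is reproducible
confidence = 5 / 6
trials = 100_000

hits = sum(random.randint(1, 6) != 6 for _ in range(trials))
hit_rate = hits / trials
calibration_gap = abs(hit_rate - confidence)

print(round(hit_rate, 3))          # close to 0.833
print(calibration_gap < 0.01)      # near-perfect calibration, for free
```

Meta-certainty, as described above, would flag these predictions as “easy” in advance, so they couldn’t be used to pad a calibration record.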

• I can also “print my own code”: if I make a future version of an MRI scan, I could give you all the information necessary to understand (that version of) me, but as soon as I look at it my neurological patterns change. I’m not sure what you mean by “add something to it”, but I could also give you a copy of my brain scan and add something to it. Humans and computers can of course know a summary of themselves, but never the full picture.

• An annoying philosopher would ask whether you could glean knowledge of your “meta-qualia”, aka what it consciously feels like to experience what something feels like. The problem is that fully understanding our own consciousness is sadly impossible. If a computer discovers that in a certain location on its hardware it has stored a picture of a dog, it must then store that information somewhere else, but if it subsequently tries to know everything about itself it must store that knowledge of the knowledge of the picture’s location somewhere else, which it must also learn. This repeats in a loop until the computer crashes. An essay can fully describe most things but not itself: “The author starts the essay with writing that he starts the essay with writing that...”. So annoyingly there will always be experiences that are mysterious to us.

• billionaires really are universally evil just as progressives think

Can you please add a quantifier when you make assertions about plurals? You can make any group sound dumb/evil by not doing it. E.g., I can make atheists sound evil by saying the truthful statement “Atheists break the law”. But that’s only because I didn’t add a quantifier like “all”, “most”, “at least one”, “a disproportionate number”, etc.

• And by what metric do you separate the competent experts from the non-competent experts? I also prefer listening to experts because they can explain vast amounts of things in “human” terms, inform me how different things interact and subsequently answer my specific questions. It’s just that for any single piece of information you’d rather have a meta-analysis backing you up than an expert opinion.

• Well, to be fair, this was just a short argument against subjective idealism with three pictures to briefly illustrate the point, and it was not (nor did it claim to be) a comprehensive list of all the possible models in the field of philosophy of mind (otherwise I would also have to include pictures with the perception being red and the outside being green, or half being green no matter where they are, or everything being red, or everything being green, etc.).