[Question] Ramifications of limited positive value, unlimited negative value?

This assumes you’ve read some stuff on acausal trade, and various philosophical stuff on what is valuable from the Sequences and elsewhere. If this post seems fundamentally confusing, it’s probably not asking for your help at this moment. If it seems fundamentally *confused* and you have a decent sense of why, it *is* asking for your help to deconfuse it.

Also, a bit rambly. Sorry.

Recently, I had a realization that my intuition says something like:

  • positive experiences can only add up to some finite[1] amount, with diminishing returns

  • negative experiences get added up linearly (a toy formalization of both intuitions is sketched just below)
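
One very rough way to write that down, purely for concreteness: the saturating shape and the constants $C$, $k$, and $c$ below are arbitrary placeholders I’m using for illustration, not a claim about the right functional form. With $n$ positive experiences and $m$ suffering minds:

$$V_{+}(n) = C\left(1 - e^{-n/k}\right), \qquad V_{-}(m) = -c\,m, \qquad V = V_{+}(n) + V_{-}(m)$$

Here $V_{+}$ never exceeds $C$ no matter how many good experiences get added (diminishing returns), while $V_{-}$ keeps scaling with $m$, so the millionth identical suffering copy subtracts exactly as much as the first.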

[edited to add]

This seems surprising and confusing and probably paradoxical. But I’ve reflected on it for a month and the intuition seems reasonably stable.

I can’t tell if it’s more surprising and paradoxical than various other flavors of utilitarianism, or other moral frameworks. Sometimes intuitions are just wrong, and sometimes they’re wrong but pointing at something useful, and it’s hard to know in advance.

I’m looking to get a better sense of where these intuitions come from and why. My goal with this question is basically to get good critiques or examinations of “what ramifications would this worldview have?”, which can help me figure out whether and how this outlook is confused. So far I haven’t found a single moral framework that seems to capture all my moral intuitions, and in this question I’m asking for help sorting through some related philosophical confusions.

[/edit]

[1] Or, positive experiences might be infinite, but a smaller infinity than the negative ones?

Basically, when I ask myself:

Once we’ve done literally all the things – there are as many humans or human-like things that could possibly exist, having all the experiences they could possibly have...

...and we’ve created all the mind-designs that seem possibly cogent and good, that can have positive, non-human-like experiences...

...and we’ve created all the non-sentient universes that seem plausibly good from some sort of weird aesthetic, artistic standpoint, e.g. maybe there’s a universe of elegant, beautiful math forms where nobody gets to directly experience it but it’s sort of beautiful that it exists in an abstract way...

...and then maybe we’ve duplicated each of these a couple times (or a couple million times, just to be sure)...

...I feel like that’s it. We won. You can’t get a higher score than that.

By contrast, if there is one person out there experiencing suffering, that is sad. And if there are two it’s twice as sad, even if they have identical experiences. And if there are 1,000,000,000,000,000 it’s 1,000,000,000,000,000x as sad, even if they’re all identical.

Querying myself

This comes from asking myself: “do I want to have all the possible good experiences I could have?” I think the answer is probably yes. And when I ask “do I want to have all the possible good experiences that are somewhat contradictory, such that I’d need to clone myself and experience them separately?” the answer is still probably yes.

And when I ask “once I have all that, would it be useful to duplicate myself?”… I’m not sure. Maybe? I’m not very excited about it. Seems like it’d maybe be nice to do, just as a hedge against weird philosophical confusion. But when I imagine doing that for the millionth time, I don’t think I’ve gotten anything extra.

But when I imagine the millionth copy of Raemon-experiencing-hell, it still seems pretty bad.

Clarification on human-centricness

Unlike some other LessWrong folk, I’m only medium enthusiastic about the singularity, and not all that enthusiastic about exponential growth. I care about things that human-Ray cares about. I care about Weird Future Ray’s preferences in roughly the same way I care about other people’s preferences, and other Weird Future People’s preferences. (Which is a fair bit, but more as an “it seems nice to help them out if I have the resources, and in particular if they are suffering.”)

Counterargument – Measure/Magical Reality Fluid

The main counterargument is that maybe you need to dedicate all of the multiverse to positive experiences to give the positive experiences more Magical Reality Fluid (i.e. something like “more chance at existing”, but try not to trick yourself into thinking you understand that concept if you don’t).

I might sort of begrudgingly accept this, but it feels something like “the values of a weird future Being That Shares a Causal Link With Me”, rather than “my values.”

Why is this relevant?

If there’s a finite number of good experiences to have, then it’s an empirical question of “how much computation or other resources does it take to cause them?”

I’d… feel somewhat (although not majorly) surprised if it turned out that you needed more than our light cone’s worth of resources to do that.

But then there’s the question of acausal trade, or trying to communicate with simulators, or “being the sort of people such that, whether we’re in a simulation or not, we adopt policies such that alternate versions of us with the same policies who are being simulated get a good outcome.”

And… that *only* seems relevant to my values if either this universe isn’t big enough to satisfy my human values, or my human values care about things outside of this universe.

And basically, it seems to me the only reason I care about other universes is that I think Hell Exists Out There Somewhere and Must Be Destroyed.

(Where “hell” is most likely to exist in the form of AIs running incidental thought experiments, committing mind-crime in the process.)

I expect to change my mind on this a bunch, and I don’t think it’s necessary (or even positive EV) for me to try to come to a firm opinion on this sort of thing before the singularity.

But it seems potentially important to have *meta* policies such that someone simulating me can easily tell (at lower resolutions of simulation) whether I’m the sort of agent who’d unfold into an agent-with-good-policies if they gave me more compute.

tl;dr – what are the implications of the outlook listed above? What ramifications might I not be considering?