[Question] Ramifications of limited positive value, unlimited negative value?

This assumes you’ve read some stuff on acausal trade, and various philosophical writing on what is valuable, from the Sequences and elsewhere. If this post seems fundamentally confusing, it’s probably not asking for your help at this moment. If it seems fundamentally *confused* and you have a decent sense of why, it *is* asking for your help to deconfuse it.

Also, a bit rambly. Sorry.

Recently, I had a realization that my intuition says something like:

  • positive experiences can only add up to some finite[1] amount, with diminishing returns

  • negative experiences add up linearly, without any cap (I sketch this asymmetry a bit more formally below)

[edited to add]

This seems surprising and confusing and probably paradoxical. But I’ve reflected on it for a month and the intuition seems reasonably stable.

I can’t tell if it’s more surprising and paradoxical than other various flavors of utilitarianism, or other moral frameworks. Sometimes intuitions are just wrong, and sometimes they’re wrong but pointing at something useful, and it’s hard to know in advance.

I’m looking to get a better sense of where these intuitions come from and why. My goal with this question is basically to get good critiques or examinations of “what ramifications would this worldview have”, which can help me figure out whether and how this outlook is confused. So far I haven’t found a single moral framework that seems to capture all my moral intuitions, and in this question I’m asking for help sorting through some related philosophical confusions.

[/edit]

[1] or, positive experiences might be infinite, but a smaller infinity than the negative ones?
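For concreteness, here’s one very rough way to write down the shape of that intuition. This is purely an illustrative sketch, not a functional form I’m committed to; the symbols B, k, c, p, and n are just placeholders:

$$V = B\left(1 - e^{-kp}\right) - cn$$

where p counts distinct good experiences, n counts instances of suffering (duplicates included), B is a finite ceiling on how good things can get, and k, c > 0. The first term has diminishing returns and can never exceed B no matter how large p gets; the second term keeps growing with every additional copy of a bad experience. (The footnoted alternative, where positive value is infinite but of a smaller order than negative value, would need a different formalization.)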

Basically, when I ask myself:

Once we’ve done literally all the things – there are as many humans or human-like things that could possibly exist, having all the experiences they could possibly have...

...and we’ve created all the mind-designs that seem possibly cogent and good, that can have positive, non-human-like experiences...

...and we’ve created all the non-sentient universes that seem plausibly good from some sort of weird aesthetic artistic standpoint, i.e. maybe there’s a universe of elegant beautiful math forms where nobody gets to directly experience it but it’s sort of beautiful that it exists in an abstract way...

...and then maybe we’ve duplicated each of these a couple times (or a couple million times, just to be sure)...

...I feel like that’s it. We won. You can’t get a higher score than that.

By contrast, if there is one person out there experiencing suffering, that is sad. And if there are two it’s twice as sad, even if they have identical experiences. And if there are 1,000,000,000,000,000 it’s 1,000,000,000,000,000x as sad, even if they’re all identical.

Querying myself

This comes from asking myself: “do I want to have all the possible good experiences I could have?” I think the answer is probably yes. And when I ask “do I want to have all the possible good experiences that are somewhat contradictory, such that I’d need to clone myself and experience them separately” the answer is still probably yes.

And when I ask “once I have all that, would it be useful to duplicate myself?”… I’m not sure. Maybe? I’m not very excited about it. It seems like it might be nice to do, just as a hedge against weird philosophical confusion. But when I imagine doing that for the millionth time, I don’t think I’ve gotten anything extra.

But when I imagine the millionth copy of Raemon-experiencing-hell, it still seems pretty bad.

Clarification on human-centricness

Unlike some other LessWrong folk, I’m only medium enthusiastic about the singularity, and not all that enthusiastic about exponential growth. I care about things that human-Ray cares about. I care about Weird Future Ray’s preferences in roughly the same way I care about other people’s preferences, and other Weird Future People’s preferences. (Which is a fair bit, but more as an “it seems nice to help them out if I have the resources, and in particular if they are suffering.”)

Counterargument – Measure/Magical Reality Fluid

The main counterargument is that maybe you need to dedicate all of the multiverse to positive experiences to give the positive experiences more Magical Reality Fluid (i.e. something like “more chance at existing”, but try not to trick yourself into thinking you understand that concept if you don’t).

I sort of might begrudgingly accept this, but this feels something like “the values of weird future Being That Shares a Causal Link With Me”, rather than “my values.”

Why is this relevant?

If there’s a finite number of good experiences to have, then it’s an empirical question of “how much computation or other resources does it take to cause them?”

I’d… feel somewhat (although not majorly) surprised if it turned out that you needed more than our light cone’s worth of resources to do that.

But then there’s the question of acausal trade, or trying to communicate with simulators, or “being the sort of people such that whether we’re in a simulation or not, we adopt policies such that alternate versions of us with the same policies who are getting simulated are getting a good outcome.”

And… that *only* seems relevant to my values if either this universe isn’t big enough to satisfy my human-values, or my human values care about things outside of this universe.

And basically, it seems to me the only reason I care about other universes is that I think Hell Exists Out There Somewhere and Must Be Destroyed.

(Where “hell” is most likely to exist in the form of AIs running incidental thought experiments, committing mind-crime in the process).

I expect to change my mind on this a bunch, and I don’t think it’s necessary (or even positive EV) for me to try to come to a firm opinion on this sort of thing before the singularity.

But it seems potentially important to have *meta* policies such that someone simulating me can easily tell (at lower resolutions of simulation) whether I’m the sort of agent who’d unfold into an agent-with-good-policies if they gave me more compute.

tl;dr – what are the implications of the outlook listed above? What ramifications might I not be considering?