Though it’s also worth pointing out that with a utility function like Carl is alluding to (where utilities are significantly different if the lifespans are noticeably different to humans),
To be more explicit: my take on this sort of thing is to smear out marginal utility across our conceptual space of such measures:
For years of life I would assign weight to at least (and more than) these regions:
Human happy lifespan
Thousands of years of moderately transhuman happiness
Billions of years of posthuman existence with limited energy budget
Jupiter-brains
Atoms-in-the-universe scale
Exponential growth (to various finite scales)
Families of ‘gigantic finite number’ functions found thus far
Giant finite number functions that could be represented with ludicrous brain-size/resources
Infinite life-years
Increase the measure of an infinite lifespan in various ways in an infinite world
Various complications of infinity (uncountably many life-years, etc.)
I am also tempted to throw in some relative measures:
Achieving at least fraction X of the life-years I could have obtained given the laws of physics that in fact occur (for various values of X, and various procedures for generating them)
Achieving as many life-years as proportion Y of my generational cohort/Earthlings/whatever
Achieving life-years in some relationship to the scale of the accessible universe
The simple conceptual space that can be represented in a terrestrial brain is limited, but if one cannot ‘cover all the bases’, one can still spread oneself widely enough not to miss opportunities for easy wins when the gains “are...noticeably different to humans.” And that seems pretty OK with me.
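For concreteness, the ‘gigantic finite number’ entries above can be sketched with Knuth’s up-arrow notation (the standard hyperoperation family; the code below is an illustrative definition, not something from the discussion). One arrow is exponentiation, and each additional arrow iterates the operation below it:

```python
def up(a, n, b):
    """Knuth's up-arrow a ↑ⁿ b: up(a, 1, b) is a**b,
    and each extra arrow iterates the previous operation."""
    if n == 1:
        return a ** b
    if b == 0:
        return 1  # the empty iteration at any arrow level
    return up(a, n - 1, up(a, n, b - 1))

print(up(3, 1, 3))  # 3^3 = 27
print(up(3, 2, 3))  # 3^^3 = 3^27 = 7625597484987
# up(3, 3, 3) is 3^^^3: a tower of 7625597484987 threes --
# already far beyond anything physically evaluable.
```

Even the two-arrow case exhausts a pocket calculator; the three-arrow case is the 3^^^3 that appears later in the thread.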
“Marginal weight at infinite years is interesting. That would likely mean that, after a certain amount of fun, you just put all your resources to trying to get infinite fun.”
With these large finite numbers you exhaust all the possible brain states of humans or Jupiter-brains almost at the beginning. Then you have to cycle, or scale your sensations and cognition up (which no one has suggested above), and I am not drastically more motivated to be galaxy-sized and blissfully cycling than planet-sized and blissfully cycling. Infinite life-years could be qualitatively different from the ludicrous finite lifespans in not having an end, which is a feature that I can care about.
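The cycling claim is just the pigeonhole principle: a deterministic mind with finitely many states must revisit one within that many steps, after which its history repeats exactly. A toy sketch with an invented 8-bit “brain” (the update rule is arbitrary, chosen only for illustration):

```python
def first_repeat(step, state, limit):
    """Run a deterministic update until a state recurs;
    return (index first seen, index of recurrence)."""
    seen = {}
    for t in range(limit):
        if state in seen:
            return seen[state], t
        seen[state] = t
        state = step(state)
    return None

# An 8-bit "brain" has only 256 states, so it must cycle within 256 steps;
# because this particular update rule is invertible, the orbit of 0
# eventually returns to 0 itself.
cycle = first_repeat(lambda s: (5 * s + 1) % 256, 0, 300)
print(cycle)
```

The same bound applies at any scale: a brain with S distinct states run for vastly more than S moments has no option but to repeat.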
Carl, thanks for writing this up! I may as well unpack and say that this is pretty much how I have been thinking about the problem, too (though I hadn’t considered the idea of relative measures), and I still think I prefer biting the attendant bullets that I can see to the alternatives. But I do at least find it—well—worth pointing out that if we in fact achieve one of the higher strata, and we want to be time-consistent, it looks like we’re going to stop living our lives on the mainline probability; i.e., if the universe is of size 3^^^3, it seems like we’ll spend almost all of the available resources on trying to crack the matrix (even if there is no indication that we live in a matrix) and only an infinitesimal—combinatorially small—fraction on actually having fun.
Yes, I do think that this is probably what I will, on reflection, find to be the right thing: the combinatorially small fraction pretty much looks like 3^^^3 from my current vantage point, and even from my middle-distance extrapolations. And as we self-modify to grow larger, since we want to be time-consistent and not to regret being time-consistent, we’ll design our future selves so that we keep feeling this is the right tradeoff (i.e., it is much better than starting out with a near-certainty of not having fun at all, because our FAI puts all resources into trying to find infinite laws of physics). So perhaps it is simply appropriate (to humanity’s utility function) that immense brains spend most of their resources guarding against events of infinitesimal probability. But it’s sufficiently non-obvious that it at least seems worth keeping in mind.
(Also, amended the post with a note that by “4^^^^4”, I really mean “whatever is so large that it is only epsilon away from the upper bound”.)
But I do at least find it—well—worth pointing out that if we in fact achieve one of the higher strata, and we want to be time-consistent, it looks like we’re going to stop living our lives on the mainline probability;
Families of ‘gigantic finite number’ functions found thus far
Giant finite number functions that could be represented with ludicrous brain-size/resources
These strike me as basically the same thing relative to my imagination. The biggest numbers mathematicians can describe using the fast-growing hierarchy for large computable ordinals are already too gigantic to… well… they’re already too gigantic. Taking the Ackermann function as primitive, I still can’t visualize the Goodstein sequence of 16, never mind 17, and I think that’s somewhere around ω^(ω^2) in the fast-growing hierarchy.
The jump to uncomputable numbers / numbers that are unique models of second-order axioms would still be a large further jump, though.
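To make the Goodstein reference concrete, here is a sketch of the standard construction (not code from the discussion): write the current term in hereditary base-b notation, replace every b with b+1, then subtract one. The sequence starting at 3 terminates within a handful of steps, while the sequence starting at 4 already runs for an astronomically long time before reaching zero — which is why 16 and 17 are hopeless to visualize:

```python
def bump(n, b):
    """Write n in hereditary base-b notation, then replace every b with b+1."""
    if n < b:
        return n
    total, power = 0, 0
    while n:
        n, digit = divmod(n, b)
        if digit:
            # recurse on the exponent, since exponents are also written in base b
            total += digit * (b + 1) ** bump(power, b)
        power += 1
    return total

def goodstein(start, max_terms=8):
    """First terms of the Goodstein sequence beginning at `start`."""
    terms, n, b = [start], start, 2
    while n and len(terms) < max_terms:
        n = bump(n, b) - 1
        b += 1
        terms.append(n)
    return terms

print(goodstein(3))  # [3, 3, 3, 2, 1, 0] -- terminates quickly
print(goodstein(4))  # begins 4, 26, 41, 60, 83, ... and runs vastly longer
```

Goodstein’s theorem guarantees every such sequence eventually hits zero, but the proof needs ordinals up to ε₀ — the same territory as the fast-growing hierarchy mentioned above.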
Indeed.