If the future of the universe is a ‘heat death’ in which no meaningful information can be stored, and in which no meaningful computation is possible, what will it matter if the singularity happens or not?
Ordinarily, we judge the success of a project by looking at how much positive utility has come of it.
We can view the universe we live in as such a project. Engineering a positive singularity looks like the only really good strategy for maximizing the expression of complex human values (simplified as ‘utility’) in the universe.
But if the universe reaches a final heat death, so that no intelligent life exists, and there is no memory and no record of anything, what do the contents of the antecedent eons count for? There is no way to tell if the-universe-which-resulted-in-heat-death saw the rise of marvelous intelligence and value or remained empty and unobserved.
What is the utility of a project after all of its participants, and all records and memory of it, are utterly destroyed?
The pragmatic answer is simply ‘carpe diem’: make the best of this finite existence. This is what people did for millennia before the ideas of the singularity and transhumanism were formulated.
Transhumanist beliefs, including the prospect of ‘immortality’ or transcendence, seem to be a way in which some people cope with their fear of death. But I fail to see why death should be any less gloomy a prospect for a 3^^^3-year-old being than it is for a 30-year-old. By definition, one cannot ‘reminisce’ about one’s accumulated positive experiences after death, so in one sense the 3^^^3-year-old has actually lost more: vastly more information has been destroyed!
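(For readers unfamiliar with the notation: 3^^^3 is written in Knuth’s up-arrow notation, where each extra arrow iterates the operation below it. A minimal recursive definition, feasible to evaluate only for tiny inputs, might look like this:)

```python
def knuth(a, n, b):
    """Knuth's up-arrow notation: a with n up-arrows applied to b.

    One arrow is exponentiation; each extra arrow iterates the
    previous operation:
        a ^ b          = a ** b
        a ^^...^ b     = a ^^...(one fewer arrow)... (a ^^...^ (b - 1)),
    with the base case a (any arrows) 0 = 1.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return knuth(a, n - 1, knuth(a, n, b - 1))

# Small cases only -- actually computing 3^^^3 is astronomically infeasible:
# knuth(2, 2, 3) -> 2^^3 = 2**(2**2) = 16
# knuth(3, 2, 2) -> 3^^2 = 3**3 = 27
# 3^^^3 itself is 3^^(3^^3): a power tower of 3s of height 7,625,597,484,987.
```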
So, in short, I struggle to see a rationale for my intuitive belief that surviving into deep time is truly better than a natural human lifespan. If heat death is inevitable, as seems to be the case, then the end result, the final tally of utils accumulated, is exactly the same in either case: zero.
This is a philosophical question, not a rational one. Terminal values are not generated by rational processes; that’s why they’re terminal values. The Metaethics sequence, especially ‘Existential Angst Factory’ and ‘The Moral Void’, should expand on this sufficiently.
The problem with this is that it assumes we only care about the end state.
Is it rational for a decision procedure to place great value on the interim state, if the end state contains absolutely no utility?
Does caring about interim states leave you open to Dutch books?
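(For readers unfamiliar with the term: a Dutch book is a set of trades, each individually acceptable to an agent, that together guarantee the agent a sure loss. The classic case involves cyclic preferences; here is a minimal sketch, with entirely hypothetical names, assuming the agent will pay a small fee for any swap it strictly prefers:)

```python
# Toy Dutch book: an agent with cyclic preferences A < B < C < A will pay
# a small fee for each 'upgrade', so a bookie can cycle its holdings and
# pump money out while the agent ends up exactly where it started.

# (held, offered) pairs where the offered item is strictly preferred:
PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}

def run_bookie(start, offers, fee=1):
    """Make each offered swap; the agent accepts any it strictly prefers,
    paying `fee` per accepted swap. Returns (final holding, total paid)."""
    holding, total_paid = start, 0
    for give, get in offers:
        if holding == give and (give, get) in PREFERS:
            holding = get
            total_paid += fee
    return holding, total_paid

holding, paid = run_bookie("A", [("A", "B"), ("B", "C"), ("C", "A")])
# holding == "A", paid == 3: back where it started, with a guaranteed loss
```

Whether valuing interim states (as opposed to holding intransitive preferences) actually exposes an agent to this kind of exploitation is exactly the open question above; the sketch only shows what the exploitation looks like in the classic case.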