I’ve been thinking about what retirement planning means given AGI. I previously mentioned investment ideas that, in a very capitalistic future, could allow the average person to buy galaxies. But it’s also possible that property rights won’t even continue into the future due to changes brought about by AGI. What will these other futures look like (supposing we don’t all die) and what’s the equivalent of “responsible retirement planning”?
Is it building social and political capital? Making my impact legible to future people and AIs? Something else? Is any action we take today futile?
I think if we do a good job, there will be a lot of “let’s try to reward the people who helped get us through this in a positive way”. In as much as I have resources I certainly expect to spend a bunch of them on ancestor simulations and incentives for past humans to do good things. My guess is lots of other people will have similar instincts.
I wouldn’t worry that much about legibility; I expect a future superintelligent humanity to be very good at figuring out what people did and whether it helped.
I am totally not confident of this, but it’s one of my best guesses on how things will go.
Hmm, I’m pretty pessimistic about this, for two reasons:

1. Historical examples indicate that receiving large material rewards for impact is rare.

Claude’s thoughts on historical examples:

Looking at historical examples, the pattern is quite pessimistic for receiving material rewards from future societies:
Symbolic recognition is common, material transfers are rare:
Scientific pioneers (Darwin, Curie, Tesla) get statues and prizes named after them, but their descendants don’t receive ongoing material compensation
Revolutionary heroes and nation builders are honored, but many died in debt (Jefferson) or their lines died out (Washington)
Abolitionists, suffragettes, and civil rights activists rarely received material rewards—recognition came much later and was mostly symbolic
War veterans often receive inadequate compensation despite their sacrifices; even when benefits exist (like the GI Bill), they’re immediate/contractual, not retrospective decisions by future societies
The few cases of retrospective material transfers are limited:
Reparations (Holocaust survivors, Japanese internment) compensate victims of injustice, not reward heroes
These required massive political struggle and were often incomplete
Normal inheritance transfers wealth, but that’s family accumulation, not society deliberately rewarding contributions
Why societies don’t do this:
Coordination problems (who decides who deserves what?), competing claims, present needs feeling more urgent, changed values over time, and unclear implementation (do descendants get it? How much? For how many generations?)
What makes the AGI scenario potentially different:
Habryka’s speculation about ancestor simulations relies on unprecedented factors—radical post-scarcity abundance, simulation technology, and superintelligent agents making allocation decisions. We have no historical examples of societies with this capability.
One reason this analysis might be wrong: History shows what humans with limited resources do; a post-scarcity superintelligence might operate on completely different principles, and the trivial cost of such rewards (relative to total resources) might make the historical coordination problems irrelevant. Additionally, if superintelligent systems explicitly optimize for acausal game theory considerations, they might reward past contributors to incentivize similar behavior in other universes/timelines.
2. Most people don’t subscribe to a decision theory where rewarding people after the fact for one-time actions provides an incentive, and for the incentive to actually work, both the rewarders and rewardees need to believe in it the same way they believe in property rights. Maybe they will in the fullness of time, but it seems far from guaranteed.
I can believe that people like you would spend resources on this, but I’d feel a lot better if an AI lab, a nation-state, or someone else with substantial control over the lightcone had ever mentioned retrospectively rewarding people for positive impact at a rate of at least 10^-8 of the lightcone’s resources per microdoom averted—and for this they would need to spend up to 1% of the lightcone’s resources. If you think this is likely to happen in the future I’d be interested to hear why.
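A quick sanity check on the arithmetic above, as a minimal Python sketch (it assumes the usual convention that 1 microdoom = a 10^-6 reduction in the probability of existential catastrophe, which isn’t stated explicitly in the comment):

```python
# Sanity check on the quoted rate. Assumption (not stated in the thread):
# 1 microdoom = a 1e-6 reduction in the probability of existential catastrophe.
reward_per_microdoom = 1e-8              # fraction of the lightcone's resources
microdooms_in_full_doom_aversion = 1e6   # averting 100% of doom risk

max_total_payout = reward_per_microdoom * microdooms_in_full_doom_aversion
print(f"Maximum total payout: {max_total_payout:.0%} of the lightcone")  # -> 1%
```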
“Most people don’t subscribe to a decision theory where rewarding people after the fact for one-time actions provides an incentive, and for the incentive to actually work, both the rewarders and rewardees need to believe in it the same way they believe in property rights. Maybe they will in the fullness of time, but it seems far from guaranteed.”
This seems clearly false? Prices are maybe the default mechanism for status allocations? I think it’s maybe just economists with weird CDT-brained takes who don’t believe in retroactive funding stuff as an incentive. Any workplace will tell you that of course they want to reward good work even after the fact, even if it’s a one-time thing, and people saying “ah, but it’s a one-time thing, why would I reward you this time” would, I think, pretty obviously be considered mildly sociopathic.
Agree financial compensation is rarer, and the logic here gets a bit trickier. I think people’s intuitions around status allocation are much more likely to generalize here than the precedent of financial allocation.
But also, beyond all of that, the arguments around decision-theory are I think just true in the kind of boring way that physical facts about the world are true, and saying that people will have the wrong decision-theory in the future sounds to me about as mistaken as saying that lots of people will disbelieve the theory of evolution in the future. It’s clearly the kind of thing you update on as you get smarter.
This seems way overconfident:

(1) Our current DT candidates (FDT/UDT) may not be correct or on the right track.

(2) Smart != philosophically competent. (There are literally millions of people in China with higher IQ than me, but AFAIK none of them invented something like FDT/UDT or is very interested in it.)

(3) We have little idea what FDT/UDT say about what is normatively correct in this situation, or in general about potential acausal interaction between a human and a superintelligence, partly because we can’t run the math and partly because we don’t even know what the math is, since we can’t formalize these theories.

Another way to put it: globally we’re a bubble (people who like FDT/UDT) within a bubble (the analytic philosophy tradition) within a bubble (people who are interested in any kind of philosophy), and even within this nested bubble there’s a further split/disagreement about what FDT/UDT actually says about this specific kind of game/interaction.
(1) seems like evidence in favor of what I am saying. In as much as we are not confident in our current DT candidates, it seems like we should expect future much smarter people to be more correct. Us getting DT wrong is evidence that getting it right is less dependent on incidental details of the people thinking about it.
(2) I mean, there are also literally millions of people in China with higher IQ than you who believe in spirits, and millions in the rest of the world who believe in the Christian god and disbelieve evolution. The correlation between correct DT and intelligence seems about as strong as it does for the theory of evolution (meaning reasonably strong in the human range, but the human range is narrow enough to not overdetermine the correct answer, especially when you don’t have any reason to think hard about it).
(3) I am quite confident that pure CDT, which rules out retrocausal incentives, is false. I agree that I do not know the right way to run the math to understand when retrocausal incentives work and how important they are, but I really don’t have much uncertainty about the basic point, so I don’t really get your point here. I don’t need to formalize these theories to make a confident prediction that any decision theory on which “rewarding people after the fact for one-time actions” cannot provide an incentive is false.
Suppose we rule out pure CDT. That still leaves “whatever the right DT is (even if it’s something like FDT/UDT), if you actually run the math on it, it says that rewarding people after the fact for one-time actions provides practically zero incentives (if people means pre-singularity humans)”. I don’t see how we can confidently rule this out.
Yep, agree this is possible (though pretty unlikely), but I was just invoking this stuff to argue against pure CDT (or equivalent decision-theories that Thomas was saying would rule out rewarding people after the fact being effective).
Or to phrase it a different way: I am very confident that future, much smarter, people will not believe in decision-theories that rule out retrocausal incentives as a class. I am reasonably confident, though not totally confident, that de-facto retrocausal incentives will bite on currently alive humans. This overall makes me think it’s like 70% likely that if we make it through the singularity well, then future civilizations will spend a decent amount of resources aligning incentives retroactively.
This isn’t super confident, but you know, somewhat more likely than not.
Wow. We have extremely different beliefs on this front. IMO, almost nothing is retroactively funded, and even high-status prizes are a tiny percentage of anything.
“Any workplace will tell you that of course they want to reward good work even after the fact…”
No workplace that I know of DOES retroactively reward good work if the employee is no longer employed there. Most of the rhetoric about it is just signaling, in pursuit of retention and future productivity.
“…people will have the wrong decision-theory in the future sounds to me about as mistaken as saying that lots of people will disbelieve the theory of evolution in the future”
It seems likely that they’ll accept evolution, and still not feel constrained by it, and pursue more directed and efficient anti-entropy measures. It also seems extremely likely that they’ll use a decision theory that actually works to place them in the universe they want, which may or may not include compassion for imaginary or past or otherwise causally-unreachable things.
(edit to clarify the decision-theory comment)
“saying that people will have the wrong decision-theory in the future sounds to me about as mistaken as saying that lots of people will disbelieve the theory of evolution in the future”
It’s not clear that any likely decision theory, let alone a non-wrong one, requires fully-acausal beliefs or actions. Many of them do include a more complete causality diagram than CDT does, and many acknowledge that the point of decision is often quite different from the apparent one. But they’re all basically consequentialist, in that they believe that actions can influence future states of the universe.
“there’s a long tradition of awarding military victors with wealth and titles”

“Prices are maybe the default mechanism for status allocations?”
Do you mean prizes? This is pretty compelling in some ways; I think it’s plausible that AI safety people will win at least as many prizes as nuclear disarmament people, if we’re as impactful as I hope we are. I’m less sure whether prizes will come with resources or if I will care about the kind of status they confer.
I also feel weird about prizes because many of them seem to have a different purpose from retroactively assigning status fairly based on achievement. Like, some people would describe them as retroactive status incentives, but others would say they’re about celebrating accomplishments, and still others would say they’re a forward-looking field-shaping signal, etc.
The workplace example feels different to me because workers can just reason that employers will keep their promises or lose reputation, so it’s not truly one-time. That would be analogous to the world where the US government had already announced it would award galaxies to people for creating lots of impact. I’m also not as confident as you in decision theory applying cleanly to financial compensation. E.g. maybe it creates unavoidable perverse incentives somehow.
If you think my future prizes will total at least ~1% of the impact I create, I’d be happy to make a bet and sell you shares of these potential future prizes. It seems not totally well-defined, but better than impact equity. I’m worried, however, that this kind of transaction won’t clear due to the enormous opportunity cost of dollars right now.
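To make the “won’t clear” worry concrete, here is a rough break-even sketch for the buyer; every number is a hypothetical placeholder, not a figure from the thread:

```python
# Illustrative break-even calculation for buying a share of someone's future
# prizes today. All numbers below are made-up placeholders.
price_paid_today = 10_000            # hypothetical purchase price in dollars
share_of_future_prizes = 0.20        # hypothetical fraction of prize money sold
buyer_opportunity_cost = 0.30        # hypothetical annual return on alternative uses of the dollars
years_until_prizes_pay_out = 15      # hypothetical horizon

# The buyer's share of the prizes must at least match what the same dollars
# could have earned elsewhere over the same horizon.
break_even_prize_total = (
    price_paid_today * (1 + buyer_opportunity_cost) ** years_until_prizes_pay_out
    / share_of_future_prizes
)
print(f"Prizes must total at least ${break_even_prize_total:,.0f} for the buyer to break even")
```

The higher the opportunity cost of present-day dollars, the larger the prize total has to be before the trade makes sense for the buyer, which is the sense in which it may not clear.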
Yep, sorry, prizes!

“The workplace example feels different to me because workers can just reason that employers will keep their promises or lose reputation, so it’s not truly one-time.”
Agree that there are non-one-time dynamics here, but I would bet that the vast majority of people would have strong intuitions that they shouldn’t defect on the last round of the game (in general, if people truly adopted decision theories on which defecting in single-shot games is rational, then all finite games of known length would also end up in defect-defect equilibria via backward induction, which clearly doesn’t happen).
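For readers who haven’t seen the backward-induction argument spelled out, here is a minimal sketch (the payoff matrix is the textbook Prisoner’s Dilemma, chosen purely for illustration; none of the numbers come from the thread). It shows what the “defect in one-shot games” reasoning predicts for a finitely repeated game, which is exactly the behavior the comment above points out real people don’t exhibit:

```python
# Backward induction over an N-round Prisoner's Dilemma with standard textbook
# payoffs (illustrative numbers). If a player treats each round as if future
# play cannot depend on the current action, defection dominates in the last
# round, then in the second-to-last, and so on: the game unravels to
# defect-defect everywhere.
PAYOFF = {  # (my action, their action) -> my payoff
    ("C", "C"): 3, ("C", "D"): 0,
    ("D", "C"): 5, ("D", "D"): 1,
}

def subgame_perfect_play(n_rounds: int) -> list:
    actions = []
    continuation_value = 0  # value of the already-solved later rounds
    for _ in range(n_rounds):  # reason backwards from the final round
        # By induction the opponent defects, and the continuation value is a
        # constant, so it cannot reward cooperating now.
        best = max("CD", key=lambda a: PAYOFF[(a, "D")] + continuation_value)
        actions.append(best)
        continuation_value += PAYOFF[(best, best)]
    return actions[::-1]

print(subgame_perfect_play(5))  # -> ['D', 'D', 'D', 'D', 'D']
```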
I’d argue people have norms that they shouldn’t defect on the last round of the game because being trustworthy is useful. This doesn’t generalize to taking whatever actions our monkey-brained approximation of LDT implies we should do according to our monkey-brained judgement of what logical correlations they create.
“In as much as I have resources I certainly expect to spend a bunch of them on ancestor simulations and incentives for past humans to do good things.”
Just curious, but what are your views on the ethics of running ancestor simulations? I’d be worried about running a simulation with enough fidelity that I triggered phenomenal consciousness, and then I would fret about my moral duty to the simulated (à la the Problem of Suffering).
Is it that you feel motivated to be the kind of person that would simulate our current reality as a kind of existence proof for the possibility of good-rewarding-incentives now? Or do you have an independent rationale for simulating a world like our own, suffering and all?
I think it’s fine to simulate people who suffer a bit in the pursuit of positively shaping the long-term future. I am not totally confident of this. Luckily I will get to be much much smarter and wiser before I have to make this call.
I am sure that even if exact ancestor simulations end up being a bad idea, there will be other things you can do to figure out what happened and who was responsible, etc.
There are a lot of fully-unknown possibilities. For me, I generalize most “today’s fundamentals don’t apply” scenarios into “my current actions won’t have predictable/optimizable impact beyond the discontinuity”, so I don’t think about specifics or optimization within them. I do think a bit about how to make them less likely or more tolerable, but I can’t really quantify that.
Which leaves the cases where the fundamentals DO apply, and those are ONLY short- and medium-term optimizations. Nothing I plan for 100 years out is going to happen, so I want to take care of myself and my family in the coming decades, in the cases where things don’t collapse or go too weird. Keeping current income > expenses, plus a reasonable investment strategy (10- and 30-year target-date funds, unless you know better, which you don’t), covers the most probability-weight for the next few decades, contingent on current systems continuing that long.
Your suggestion of social capital (not sure political capital is all that durable, but maybe?) is a very good one as well—having friends, especially friends in different situations (another country, perhaps) is extremely good. First, it’s fun and rewarding immediately. Second, it’s a source of illegible support if things go crazy, but not extinction-crazy.
My personal strategy has been to not think about it very hard.
I am sufficiently fortunate that I can put a normal amount of funds into retirement, and I have continued to do so on the off chance that my colleagues and I succeed at preventing the emergence of AGI/ASI and the world remains mostly normal. I also don’t want to frighten my partner with my financial choices, and giving her peace of mind is worth quite a lot to me.
If superintelligence emerges and doesn’t kill everyone or worse, then I don’t have any strong preferences as to what my role is in the new social order, since I expect to be about as well-off as I am now or more.
I can think of two different ways property rights might disappear:
1. Our new overlords extinguish existing property rights and establish some new system of their choosing for distributing resources based on some other principle.
2. Resources are so abundant that ownership is irrelevant.
If you’re preparing for #2, then you probably just want to invest in all the “things money can’t buy” because you’ll have the rest.
If you’re preparing for #1, it’s hard to predict what the principle might be. Conditional on not dying, either we’re dealing with a benevolent-ish AI overlord (and you’re probably fine; doing things like living justly is probably a good idea if that’s going to be rewarded) or we’re dealing with an AI overlord that is subject to some kind of human control (maybe the future is really being run by Anthropic’s corporate leadership or something). In that case, responsible retirement planning is probably finding a way to get close to that in-group.
I’m thinking mostly of #1 and your thought seems reasonable there. #2 doesn’t make sense to me, since the number of galaxies is finite and, barring #1, there are several reasons for competition over even extremely abundant resources—Red Queen geopolitical races, people who want to own positional goods, etc.