Hmm, good question. Coordinating with other time slices of your body is a very tough problem if you take empty individualism seriously (imo it is the closest to the truth of the three, but I'm not certain by any means). From the perspective of a given time slice, any experience besides the one they got is not going to be experienced by them, so why would they use their short time to set up a spike in pleasure for a future time slice of the body they're in, rather than a smaller but more stable increase in pleasure for any other time slice, same body or not? If the duration of a time slice is measured in seconds, even walking to the fridge to get a candy bar is essentially "altruism", done so future time slices can enjoy it.
In terms of coordination for other goals, you can use current time slices to cultivate mental patterns in themselves that future ones are then more likely to practice, such as equanimity, accepting "good-enough" experiences, recognizing that your future slices aren't so different from others and using that as motivation for altruism, and even making bids with future time slices. If this time slice can't rely on future ones to enact its goals, future ones can't rely on even further future ones either, vastly limiting what's possible (if no time slice is willing to get off the couch for the benefit of other slices, that being will stay on the couch until it's unbearable not to). Check out George Ainslie's Breakdown of Will for a similar discussion of coordinating between time slices like that: https://www.semanticscholar.org/paper/Pr%C3%A9cis-of-Breakdown-of-Will-Ainslie/79af8cd50b5bd35e90769a23d9a231641400dce6
Other strategies, which are way less helpful for aligning AI, mostly just make use of the fact that we evolved to not feel like time slices, probably because that makes it easier to coordinate between them. So there's a lot of mental infrastructure already in place for the task.
On the fear of different values: I think you need to figure out which values you actually care whether future slices hold, and make sure those are well grounded and can be easily re-derived. The ones that aren't that important you just have to hold loosely, accepting that your future selves might value something else and hoping that their new values are well founded. That's where cultivating mental patterns of strong epistemology comes in: you actually want your values to change for the better, just not for the worse.
I’ve added your post to my reading list! So far it’s a pretty reliable way for me to get future time slices to read something :)
To go through the ones listed on Wikipedia:
This is criticizing Rawls's proposed next steps after he saw the map laid out by the veil. I'm just pointing to the map and saying "this is a helpful tool for planning next steps, which will probably be different from the steps Rawls proposed." I'd also point out that this criticism would hold up better if everyone started with an equal claim to resources, but that's an entirely separate conversation.
Well yeah of course it’s impossible to do it perfectly. It’s impossible for any of us to be ideal-reasoning agents, I guess rationalism is doomed. Sorry guys, pack up and go home.
Makes sense, “evidence and reason” is critical to planning specific next steps even if you have a high level map.
Sure. Again, I’m not arguing for specific interpretations of the map, just saying it’s there and it’s helpful even if you don’t come to the same conclusions as others looking at a similar one. The help principle seems reasonable, as do other strategies like giving 10% of your income rather than selling all you have to give to the poor.
😏😏😏