For example, a flat future, with no opportunity to influence my experience or that of my sibs for better or worse, would imply that caring for sibs has exactly the same expectation as not-caring.
In this case, for your future selves to care about each other is no worse than for them not to; so if the future might not be flat, caring can only increase the expectation.
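A minimal sketch of that weak-dominance argument in Python, with invented payoffs (the probability values and all the U_* constants are illustrative assumptions, not anything from the exchange above):

```python
# Sketch of the weak-dominance argument, with invented payoffs.
# Assumption: in a flat future, caring and not-caring pay the same;
# in a non-flat future, caring pays at least as much.

def expected_utility(p_nonflat, u_flat, u_nonflat):
    """Expectation over a future that is non-flat with probability p_nonflat."""
    return (1 - p_nonflat) * u_flat + p_nonflat * u_nonflat

U_FLAT = 1.0            # payoff in a flat future (same either way)
U_NONFLAT_CARE = 2.0    # caring helps when influence is possible
U_NONFLAT_IGNORE = 1.0  # not-caring forgoes that help

for p in (0.0, 0.1, 0.5):
    care = expected_utility(p, U_FLAT, U_NONFLAT_CARE)
    ignore = expected_utility(p, U_FLAT, U_NONFLAT_IGNORE)
    print(f"p(non-flat) = {p}: E[care] = {care:.2f}, E[not-care] = {ignore:.2f}")

# At p = 0 the two expectations coincide; for any p > 0 caring comes out
# ahead, which is the "no worse, possibly better" claim above.
```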
Alternatively, if a mad Randian were experimenting on me, rewarding selfishness, then not caring for my sibs might well yield more pleasant experiences than caring.
This scenario introduces a direct dependence of outcomes on your goal system, not just on your actions; that does complicate things, and it’s common to assume no such dependence holds.
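A toy payoff function may make the complication concrete; the numbers and the boolean flag are invented purely for illustration, to show how the Randian scenario keys the payoff to the goal system itself rather than only to the chosen action:

```python
# Sketch of the "mad Randian" complication: the experimenter inspects your
# goal system and rewards selfishness, so identical actions can earn
# different payoffs. All values are invented for illustration.

def randian_payoff(action: str, cares_about_sibs: bool) -> float:
    base = 1.0 if action == "assist" else 0.5   # ordinary action payoff
    penalty = 1.0 if cares_about_sibs else 0.0  # penalty keyed to the goals
    return base - penalty

# Same action, different goal systems, different outcomes:
print(randian_payoff("assist", cares_about_sibs=True))   # 0.0
print(randian_payoff("assist", cares_about_sibs=False))  # 1.0
```

Once payoffs depend on the goal system like this, the dominance argument above no longer goes through.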
Also, I don’t know how to compute with experiences: Total Utility, Average Utility, Rawlsian Minimum Utility, some sort of multiobjective optimization?

Finally, I don’t know how to compute with future selves. For example, imagine some sort of bicameral cognitive architecture, where two individuals have exactly the same percepts (and therefore choose exactly the same actions). Should I count that as one future self or two?
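As a sketch of how those aggregation rules can come apart, assuming (questionably, which is rather the point) that each future self’s experience reduces to a single scalar; the numbers are made up:

```python
# Candidate aggregation rules over the utilities of one's future selves.
# Assumes each self's experience compresses to one number, which is exactly
# the step the text is doubtful about; all values are illustrative.

utilities = [3.0, 1.0, 4.0]

total = sum(utilities)                     # Total Utility: 8.0
average = sum(utilities) / len(utilities)  # Average Utility: ~2.67
rawlsian = min(utilities)                  # Rawlsian Minimum: 1.0

# The bicameral puzzle: two individuals with identical percepts and hence
# identical utilities. Counting the pair once vs. twice changes the total
# and the average but not the minimum, so the rules can disagree.
counted_once = [3.0, 1.0]        # the pair counted as one future self
counted_twice = [3.0, 1.0, 1.0]  # the pair counted as two

print(sum(counted_once), sum(counted_once) / 2, min(counted_once))     # 4.0, 2.0, 1.0
print(sum(counted_twice), sum(counted_twice) / 3, min(counted_twice))  # 5.0, ~1.67, 1.0
```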
I don’t know how your (or my) morality answers these questions, but however it answers them is what it would want to bind future selves to use. The real underlying principle, of which EY’s statement is a special case, is “see to it that other agents share your utility function, or something as close to it as possible.”
Would you argue that it is always better to assist one’s xerox-sibs than not?
My intention in offering those two “pathological” scenarios was to argue that there is an aspect of scenario-dependence in the general injunction “assist your xerox-sibs”.
You’ve disposed of my two counterexamples with two separate counterarguments. However, you haven’t offered an argument for scenario-INDEPENDENCE of the injunction.
Your last sentence contains a very interesting guideline. I don’t think it’s really an analysis of the original statement, but that’s a side question. I’ll have to think about it some more.