Yes, I think if possible you’d want to resolve to continue caring about copies even after you learn which one you are. I don’t think that you particularly want to rewind values to before prior changes, though I do think that standard decision-theoretic or “moral” arguments have a lot of force in this setting and are sufficient to recover high degrees of altruism towards copies and approximately Pareto-efficient behavior.
I think it’s not clear if you should self-modify to avoid preference change unless doing so is super cheap (because of complicated decision-theoretic relationships with future and past copies of yourself, as discussed in some other comments). But I think it’s relatively clear that if your preferences were going to change into either A or B stochastically, it would be worth paying to modify yourself so that they change into some appropriately-weighted mixture of A and B. And in this case that’s the same as having your preferences not change, and so we have an unusually strong argument for avoiding this kind of preference change.
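To make that argument concrete, here is a toy numerical sketch (entirely my own assumptions, not from the comment): a unit budget split between two goods, concave utilities for A and B, and a fair coin deciding which preference you end up with. Judged ex ante by the 50/50 mixture, the coin-flip path loses value relative to committing to the mixture, and the gap is what you'd be willing to pay.

```python
import math

# Assumed setup: allocation (x, 1 - x) of a unit budget,
# U_A = sqrt(x), U_B = sqrt(1 - x), preference change is a fair coin.

def mixture_utility(x):
    """The 50/50 mixture 0.5*U_A + 0.5*U_B at allocation (x, 1 - x)."""
    return 0.5 * math.sqrt(x) + 0.5 * math.sqrt(1 - x)

# Path 1: preferences change stochastically; each successor spends
# the whole budget on its own good (x = 1 for A, x = 0 for B).
# Evaluated by the ex-ante mixture, either outcome scores 0.5:
stochastic_value = 0.5 * mixture_utility(1.0) + 0.5 * mixture_utility(0.0)

# Path 2: self-modify so preferences become the mixture; by symmetry
# and concavity that agent splits the budget evenly (x = 0.5).
mixture_value = mixture_utility(0.5)

print(stochastic_value)  # 0.5
print(mixture_value)     # ~0.707

# The gap is the ex-ante price worth paying to lock in the
# mixture rather than letting the coin decide.
assert mixture_value > stochastic_value
```

With linear utilities the two paths would tie; the gap comes from concavity, which plays the same role as the inefficiency of each successor ignoring the other's values.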