Imagine John is going to have kids. He will like his kids. But, depending on random factors, he will have different kids in different future timelines.
Omega shows up.
Omega: “hey John, by default if you have kids and then I offer your future self a reward to wind back time to actualize a different timeline where you have different kids, equally good from your current perspective, you will reject it. Take a look at this LessWrong post that suggests your hypothetical future selves are passing up Sure Gains. Why don’t you take this pill that will make you forever indifferent between different versions of your kids (and equally any other aspects of those timelines) you would have been indifferent to given your current preferences?”
John: “Ah OK, maybe I’m mostly convinced, but I will ask simon first what he thinks.”
simon: “Are you insane? You’d bring these people into existence, and then wipe them out if Omega offered you half a cent. Effectively murder. Along with everyone else in that timeline. Is that really what you want?”
John: “Oh… no of course not!” (to Omega): “I reject the pill!”
another hypothetical observer: “c’mon simon, no one was talking about murder in the LessWrong post; this whole thought experiment in this comment is irrelevant. The post assumes you can cleanly choose between one option and another without such additional considerations.”
simon: “but by the same token, the post fails to prove that, where you can’t cleanly choose without additional considerations relevant to your current preferences (as in pretty much any real example involving actual human values), it is ‘irrational’ to decline making this sort of choice, or to decline self-modifying to do so. Maybe there’s a valid point there about selection pressure, but that pressure is then to be fought, not surrendered to!”
Omega: “hey John, […] Why don’t you take this pill that will make you forever indifferent between different versions of your kids […]?”
I am not sure what Omega rewinding time means. From the rest of your comment, I assume we are to believe Omega “rewinding time” means Omega owns the simulation our world is running on, makes a copy B of the simulation when conception occurs, runs simulation A up to the relevant decision point, and then, if you say “yes”, deletes A and runs B with the relevant changes to both the sperm and your bank account.[1]
This seems bad mainly because, as you say, everyone dies! In this case, we have a path-dependent preference which is definitely not produced by the vetocracy model given in the post. It seems to me that our preferences in this scenario just aren’t modeled that well by the setup. In particular, it seems much worse to make someone no longer exist after they have existed than make someone never have existed in the first place.
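The path-dependence here can be made concrete with a toy model (a hypothetical sketch of my own, not anything from the post; all names and numbers are illustrative): an agent whose utility heavily penalizes erasing people who already exist will happily take the bonus timeline before conception, but refuse the same swap afterwards.

```python
# Toy model of a path-dependent preference: whether Omega's swap is
# acceptable depends on whether the children already exist, not just on
# the end states. All values are illustrative assumptions.

def utility(timeline_value, bonus, erases_existing_people):
    """Value of a timeline. The penalty term encodes the judgment that
    making existing people stop existing is much worse than their never
    having existed in the first place."""
    ERASURE_PENALTY = 1_000_000  # large but finite, purely for illustration
    return timeline_value + bonus - (ERASURE_PENALTY if erases_existing_people else 0)

keep = utility(timeline_value=100, bonus=0.0, erases_existing_people=False)

# Before conception: nobody exists yet, so swapping timelines costs nothing
# and the half-cent bonus tips the balance.
before = utility(timeline_value=100, bonus=0.005, erases_existing_people=False)
print(before > keep)  # True

# After conception: accepting means erasing people who now exist.
after = utility(timeline_value=100, bonus=0.005, erases_existing_people=True)
print(after > keep)   # False
```

The point of the sketch is just that both rankings come from one fixed utility function; the apparent "inconsistency" between the pre- and post-conception choices is not irrationality.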
That is not to say, however, that the vetocracy model is useless for human values. Indeed, the post gives a concrete scenario in which it believes the vetocracy model is applicable: Internal Family Systems.
So sometimes the vetocracy model is applicable, and sometimes it is not. It seems useful for some types of therapy, and not useful for, as you say, “pretty much any real example involving actual human values”, in particular for circumstances involving an omnipotent agent offering to kill everyone and reinstate slightly different versions of them, such that you perhaps now have a son instead of a daughter, plus one more cent in your bank account.
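For reference, the vetocracy idea under discussion can be sketched roughly as: a proposed change is adopted only if no internal part vetoes it. This is a hypothetical rendering of my own, not the post’s actual formalism; the parts and proposals below are made-up stand-ins.

```python
# A minimal sketch of a "vetocracy" over internal parts: a proposal goes
# through only if every part consents. Hypothetical illustration only.

def vetocracy_accepts(proposal, vetoes):
    """vetoes: one veto predicate per internal part."""
    return not any(veto(proposal) for veto in vetoes)

# Example: an IFS-style agent with a "protector" part that vetoes anything
# erasing existing people, whatever the monetary sweetener, and a
# "maximizer" part that only vetoes outright losses.
protector = lambda p: p.get("erases_existing_people", False)
maximizer = lambda p: p.get("bonus", 0) < 0

print(vetocracy_accepts({"bonus": 0.005}, [protector, maximizer]))  # True
print(vetocracy_accepts({"bonus": 0.005, "erases_existing_people": True},
                        [protector, maximizer]))                    # False
```

On this sketch, Omega’s post-conception offer fails not because of incoherent preferences but because one part has an absolute veto over erasing existing people.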
[1] If we don’t want to deal with whether your ontology permits simulating everyone on a computer, you can imagine instead Omega killing everyone and resetting all their atoms’ positions/momenta back to the conception of your child, changing the sperm that reaches your egg, and adding 1 cent to your bank account. However, this seems much, much less plausible than the simulation version.
simon: “but by the same token the post fails to prove that […] it is ‘irrational’ to decline making this sort of choice, or to decline self-modifying to do so.”

You have shown nothing of the sort.