For example: given a choice between pressing button A, which wireheads you for the rest of your life and removes your memory of having been offered the choice, and button B, which prevents the people you care about from suffering for the rest of their lives, do you know enough to pick a button? If not, what else would you need to know?
That’s an interesting paradox, and it reminds me of Newcomb’s Problem. To answer, I would need to know the expected value of continuing to value people as I do versus the expected value of wireheading, weighted by the probability that I’d actually be offered the chance to wirehead. Since I don’t expect ever to receive such an offer, I should follow the strategy of valuing people as I currently do.
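To make the structure of that comparison concrete, here is a minimal sketch of the expected-value calculation (the policy variable, the utilities, and the offer probability p are hypothetical, introduced only for illustration):

\[
\mathbb{E}[\pi] = p \cdot U(\text{outcome of } \pi \text{ when offered}) + (1 - p) \cdot U(\text{ordinary life under } \pi)
\]

The two candidate policies (press A and wirehead, or press B and spare the people I care about) differ only in the first, p-weighted term. If p is close to zero, as I expect, then both policies reduce to roughly U(ordinary life), so committing in advance to my current values costs almost nothing in expectation.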
Um, OK. Thanks for clarifying your position.