I think people are getting confused because they’re looking at it as though their preferences are altered by a magical black box, instead of as though those preferences are altered by themselves in a more enlightened state. The above line of argument seems to rest on the assumption that we can’t know the effects that changing our preferences would have. But if we had the ability to actually rewrite our preferences, it seems almost impossible that we wouldn’t also have the knowledge of how our current and modified preferences would work.
The above author argues that we’d gain the capacity to alter brain states before we gained the capacity to understand the consequences of our alterations very well, but I disagree. Firstly, preferences are extremely complicated, and once we understand how to cause and manipulate them with a high degree of precision, I don’t think there would be much left for us to understand. Except in a very crude sense, understanding the consequences of our alterations is the same thing as having the capacity to alter our preferences. And in that crude sense we already possess the ability, so the author’s argument is empirically refuted. Secondly, I highly doubt that any significant number of people would willingly undergo modification without a high degree of confidence in what the outcome would be. Outside of experiments, I don’t think it would really happen at all.
The simple solution, as I see it, is to only modify when your preferences contradict each other or a necessary condition of reality, or when you need to extend the boundaries of your preferences further in order for them to be fulfilled more (e.g. increasing max happiness whenever you have the resources to fulfill the new level of max happiness, or decreasing max sadness when you’re as happy as can be, or getting rid of a desire for fairness when it is less important than other desires that it necessarily conflicts with).
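The rule above can be sketched as a toy decision procedure. Everything here is hypothetical (the names, the crude representation of a preference as a dict); it only illustrates the three conditions, not any real model of preferences:

```python
# Toy sketch of the modification rule described above.
# A "preference" is crudely modeled as {"wants": ..., "blocks": ...},
# where "blocks" names something this preference necessarily conflicts with.

def should_modify(preferences, impossible, can_fulfill_extended):
    """Return True only under the three conditions: internal contradiction,
    contradiction with reality, or an extendable-and-fulfillable boundary."""
    # 1. Two preferences contradict each other.
    for i, a in enumerate(preferences):
        for b in preferences[i + 1:]:
            if a["wants"] == b["blocks"] or b["wants"] == a["blocks"]:
                return True
    # 2. A preference contradicts a necessary condition of reality.
    for p in preferences:
        if p["wants"] in impossible:
            return True
    # 3. Extending a preference's bounds would let it be fulfilled more,
    #    and we have the resources to fulfill the new level.
    return can_fulfill_extended

prefs = [
    {"wants": "fairness", "blocks": None},
    {"wants": "maximum gain", "blocks": "fairness"},  # conflicts with fairness
]
print(should_modify(prefs, impossible=set(), can_fulfill_extended=False))
# True: criterion 1 fires, so dropping the weaker desire is permitted
```

The point of the sketch is just that each of the three triggers is checkable in principle; absent all three, the rule says to leave your preferences alone.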
Now for the strongest form of the above argument, which emerges once you recognize that uncertainty is inevitable. I think the degree of uncertainty would be very small if we had these capabilities, but I might be wrong about that, and we ought to develop mechanisms to minimize the bad effects of those uncertainties regardless, so that’s not a wholly sufficient response. Also: Least Convenient Possible World. At the very least it’s interesting to think about.
In that case, I think it doesn’t really matter. If I accidentally change my preferences, after the fact I’ll be glad about the accident, and before the fact I won’t have any idea that it’s about to happen. I might end up valuing completely different things, but I don’t see any reason for the modified me to prioritize my current values; only the current me has reason to do that. Since I currently live in my own perspective, I’d do my best to avoid mistakes, but if I made one, in hindsight I’d view it as more of a happy accident than a catastrophe.
So I don’t see what the big deal is.