There should be boundaries defining the stakeholders who get to influence the preferences over which machine is being built in a particular scope.
I’d be able to understand where this was coming from if y’all were mostly talking about population ethics, but there was no population ethics in the example I’m discussing. (Note: the elephant wasn’t a stakeholder. A human can love an elephant, but a human would not lucidly give an elephant unalloyed power, since an elephant probably desires things that would be fucked up to a human, such as the production of bulls in musth, or infanticide (at much higher rates).)
And I’d argue that population ethics shouldn’t really be a factor. Among humans, new humanlike beings should be made stakeholders to the extent that the previous stakeholders want them to be. The current stakeholders (Californians) do prefer for new humans to be made stakeholders, so they keep trying to put that into their definition of utilitarianism, but the fact that they want it means that they don’t need to put it in there.
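(A toy Python sketch of that point, with entirely made-up agents and numbers, just to show the mechanism rather than propose an actual rule: if the current stakeholders’ utilities already include a term for how a newcomer is treated, optimizing only their aggregated preferences already brings the newcomer in.)

```python
# Toy sketch (all agents and numbers made up): no population-ethics clause
# is needed in the aggregation rule if the current stakeholders already care.

def aggregate_utility(outcome, stakeholders):
    """Sum the current stakeholders' utilities over an outcome."""
    return sum(s["utility"](outcome) for s in stakeholders)

# Outcomes differ only in how much weight the newcomer gets.
outcomes = [{"newcomer_weight": 0.0}, {"newcomer_weight": 1.0}]

# Current stakeholders who happen to care (a bit) about newcomers.
stakeholders = [
    {"name": "A", "utility": lambda o: 1.0 + 0.5 * o["newcomer_weight"]},
    {"name": "B", "utility": lambda o: 2.0 + 0.3 * o["newcomer_weight"]},
]

best = max(outcomes, key=lambda o: aggregate_utility(o, stakeholders))
print(best)  # {'newcomer_weight': 1.0} -- the newcomer gets included anyway
```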
But if it’s not about population ethics then it just seems to me like you’re probably giving up on generalizability too early.
The point is that people shouldn’t be stakeholders of everything, let alone to an equal extent. Instead, particular targets of optimization (much smaller than the whole world) should have far fewer agents with influence over their construction, and it’s only in these contexts that preference aggregation should be considered. When starting with a wider scope of optimization with many stakeholders, it makes more sense to start by dividing it into smaller parts that are each a target of optimization with fewer stakeholders, each optimized under preferences aggregated differently from how they are aggregated for the other parts. Expected utility theory makes sense for such smaller projects just as much as it does for the global scope of the whole world, but it breaks normality less when applied narrowly like that than when we try to apply it to the global scope.
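(Here’s a toy sketch of what per-scope aggregation could look like, with made-up scopes, stakeholders, and utilities; it’s one way of cashing out the idea, not a canonical construction.)

```python
# Toy sketch of per-scope aggregation: preferences are aggregated only among
# the stakeholders of a given target of optimization, not over everyone.
from typing import Callable, Dict, List

Outcome = str
Utility = Callable[[Outcome], float]

def choose_within_scope(options: List[Outcome],
                        stakeholders: Dict[str, Utility]) -> Outcome:
    # Maximize the summed utility of *this scope's* stakeholders only;
    # agents outside the scope simply don't appear in the sum.
    return max(options, key=lambda o: sum(u(o) for u in stakeholders.values()))

# Two separate scopes, each settled by its own (small) set of stakeholders.
home = choose_within_scope(
    ["elephant in the garden", "no elephant"],
    {"owner": lambda o: 5.0 if o == "elephant in the garden" else 0.0},
)
park = choose_within_scope(
    ["fountain", "parking lot"],
    {"resident_1": lambda o: 2.0 if o == "fountain" else 1.0,
     "resident_2": lambda o: 3.0 if o == "fountain" else 0.0},
)
print(home, "|", park)  # elephant in the garden | fountain
```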
The elephant might need to be part of one person’s home, but not a concern for anyone else, and not subject to anyone else’s preferences. That person would need to be able to afford an elephant, though, to construct it within the scope of their home. Appealing to others’ preferences about the would-be owner’s desires would place the would-be owner within the others’ optimization scope: it would make the would-be owner a project that others are working on, and make those others stakeholders in the would-be owner’s self, rather than leaving the owner a more sovereign entity. If you depend on the concern of others to keep receiving the resources you need, then you are receiving those resources conditionally, rather than allocating the resources you have according to your own volition. Much better for others to contribute to an external project you are also working on, according to what that project is, rather than according to your desires about it.
But not preserving normality is the appeal :/
As an example, normality means a person can, e.g., create an elephant within their home and torture it. Under preference utilitarianism, the torture of the elephant upsets the values of a large number of people, so it’s treated as a public bad and has to be taxed as such. Even when we can’t see it happening, it’s still reducing our U, so a boundaryless prefu optimizer would go in there and say to the elephant torturer, “you’d have to pay a lot to offset the disvalue this is creating, and you can’t afford it, so you’re going to have to find a better outlet (how about a false elephant who only pretends to be getting tortured?)”.
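(A toy version of that accounting, with made-up numbers for the per-person disvalue, the population, and the torturer’s budget; it’s just the shape of the calculation, not a claim about the actual magnitudes.)

```python
# Everyone's disvalue counts (seen or unseen); the would-be torturer must
# cover the aggregate disvalue to proceed. All numbers are hypothetical.

disvalue_per_person = 0.001         # tiny per-person disutility from it existing at all
people_who_disvalue_it = 10_000_000
budget = 500.0                      # what the torturer can actually pay

offset_price = disvalue_per_person * people_who_disvalue_it  # 10,000.0

if budget >= offset_price:
    print("pay the offset and proceed")
else:
    print(f"can't cover the {offset_price:,.0f}-unit offset; find a better outlet")
```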
But let’s say there are currently a lot of sadists and they have a lot of power. If I insist on boundaryless aggregation, they may veto the safety deal, so it just wouldn’t do. I’m not sure there are enough powerful sadists for that to happen, since political discourse seems to favor publicly defensible positions, but [looks around] I guess there could be. But if there were, it would make sense to start designing the aggregation around… something like the constraints on policing that existed before the aggregation was done. But not that exactly.