Nobody seems to have problems with circular preferences in practice, probably because people’s preferences aren’t precise enough. So we don’t have to adopt utilitarianism to fix this non-problem.
This may not be a problem at the individual scale, but individuals design systems (a program, a government, an AI), and those systems must be precise and engineered to handle these kinds of issues. They can't simply adapt the way humans do to avoid repeated exploits; we first have to build the adaptation mechanism. The way I see it, utilitarianism is an attempt to describe such a mechanism.
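To make the "repeated exploits" point concrete, here is a minimal sketch (all names hypothetical, not from the discussion above) of the classic money-pump argument: a system with precise but circular preferences, which will pay a small fee to trade up to anything it prefers, can be drained indefinitely by an exploiter cycling through its preference loop.

```python
import itertools

# Hypothetical illustration: an agent with the circular preference
# A > B > C > A, which pays a small fee for any trade it prefers.
PREFERS = {("A", "B"), ("B", "C"), ("C", "A")}  # (x, y) means x is preferred to y
TRADE_FEE = 1

def agent_accepts(offered, held):
    """The agent trades whenever it strictly prefers the offered item."""
    return (offered, held) in PREFERS

def run_money_pump(start_item="A", start_money=10):
    """Exploit the cycle: offer C, then B, then A, forever."""
    held, money = start_item, start_money
    offers = itertools.cycle(["C", "B", "A"])
    while money >= TRADE_FEE:
        offer = next(offers)
        if agent_accepts(offer, held):
            held, money = offer, money - TRADE_FEE
    return held, money

if __name__ == "__main__":
    # The agent ends up holding one of the same three items but with
    # no money left: each loop around the cycle costs it fees and
    # leaves it no better off by its own lights.
    print(run_money_pump())  # -> ('C', 0)
```

A human in this position would eventually notice the pattern and stop trading; the sketch shows why a designed system needs that adaptation mechanism built in explicitly.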