I’ve written earlier that utilitarianism* completely breaks down if you try to make it specific enough.
Just consider brain simulators, where a new state is computed from the current state and the current state is then overwritten. So the utility of the new state has to be greater than the utility of the current state, at which point you’d want to find a way to compute the maximum-utility state without computing the intermediate steps. The trade-offs between the needs of different simulators of different ages also end up working out wrongly.
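To make the point concrete, here is a minimal sketch (all names and the toy utility function are hypothetical, purely for illustration): if every step of the simulation is constrained to never decrease utility, running the simulation is just hill-climbing, and the same endpoint can be reached by taking the argmax over states directly, skipping every intermediate state.

```python
# Hypothetical sketch: a simulator whose state updates may never
# decrease utility. Under that constraint, stepping through states is
# hill-climbing, so the endpoint is reachable via a direct argmax.

def step(state, utility):
    """One update: move to the better neighbor, but only if it
    strictly increases utility (the monotonicity constraint)."""
    best = max((state - 1, state + 1), key=utility)
    return best if utility(best) > utility(state) else state

def run_simulation(state, utility):
    """Run updates until no higher-utility neighbor exists."""
    while True:
        new_state = step(state, utility)
        if new_state == state:   # no improvement possible
            return state
        state = new_state        # old state is overwritten

# Toy state space: integers 0..9, peak utility at state 7.
states = range(10)
utility = lambda s: -(s - 7) ** 2

endpoint = run_simulation(0, utility)     # walks 0, 1, ..., 7
shortcut = max(states, key=utility)       # skips intermediate steps
assert endpoint == shortcut == 7
```

The point of the sketch: once the monotonicity constraint is in place, the intermediate states carry no information that the final argmax doesn’t, which is exactly the oddity the paragraph above describes.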
* I assume that utilitarianism has some actual content, beyond trivialities such as assigning a utility of 1 to actions prescribed by some kind of virtue ethics and 0 to all other actions, and then claiming that this is utilitarian.