I don’t know why the trade-off between population size and average utility feels like it needs to be mathematically justified; that function seems to be as much determined by arbitrary evolutionary selection as the rest of our utility functions.
Well, it would be nice if we happened to live in a universe where we could all agree on an agent-neutral definition of what the best actions to take in each situation are. It seems that we don’t live in such a universe, and that our ethical intuitions are indeed sort of arbitrarily created by evolution. So I agree we don’t need to mathematically justify these things (and maybe it’s impossible), but I wish we could!