Have worse consequences for everybody, where “everybody” means present and future agents to which we assign moral value.
You’ve assumed vague-utilitarianism here, which weakens your point. I would taboo “make everything worse” as “less freedom, health, fun, awesomeness, happiness, truth, etc”, where the list refers to all the good things, as argued in the metaethics sequence.
Nice try. The problem with your definition is that freedom, for example, is fundamentally a deontological concept. If you don’t agree, I challenge you to give a non-deontological definition.
After thinking about it some more, I think I have a better way to explain what I mean.
What is freedom? One (not very good, but illustrative) definition is the ability to make meaningful choices. Notice that this makes respecting someone else’s freedom a constraint on one’s decision algorithm, not just on one’s outcomes; since the VNM axioms only rank outcomes (lotteries over them), a requirement on the procedure itself doesn’t satisfy them.
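To make the claimed distinction concrete, here is a toy sketch (all names hypothetical, not from any actual decision-theory library): a VNM agent ranks actions purely by the expected utility of outcomes, while a deontologically constrained agent first filters actions by a rule on the act itself, before any outcomes are consulted.

```python
def vnm_choice(actions, outcome_prob, utility):
    """A VNM agent ranks actions only by expected utility of their outcomes."""
    def expected_utility(action):
        # outcome_prob(action) maps each possible outcome to its probability
        return sum(p * utility(o) for o, p in outcome_prob(action).items())
    return max(actions, key=expected_utility)

def deontological_choice(actions, outcome_prob, utility, permitted):
    """A constrained agent first filters actions by a rule on the act itself
    (e.g. 'does not coerce anyone'), regardless of how good the outcomes are."""
    allowed = [a for a in actions if permitted(a)]
    return vnm_choice(allowed, outcome_prob, utility) if allowed else None
```

With a forbidden action that nonetheless produces the best outcome, the two procedures diverge: the VNM agent takes it, the constrained agent doesn’t.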
It sounds to me like you’re implicitly enforcing a Cartesian separation between the physical world and the algorithms that agents in it run. Properties of the algorithms that agents in the world run are still properties of the world.
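A toy sketch of this rebuttal (again, all names hypothetical): if world-histories record which act was performed, a plain utility function over histories can encode the “constraint” without ever stepping outside ranking world-states.

```python
def history_utility(history):
    """Utility over world-histories: the act taken is itself part of the world,
    so disvaluing the act is just an ordinary term in the utility function."""
    outcome, act = history
    base = {"good": 10, "bad": 1}[outcome]
    return base - 100 if act == "lie" else base  # penalty on the act, not the outcome

def choose(actions, outcome_of):
    # Each action yields the history (outcome, act); rank histories as usual.
    return max(actions, key=lambda a: history_utility((outcome_of(a), a)))
```

The agent then refuses the forbidden act even when it leads to the better bare outcome, reproducing the “deontological” behavior inside an outcome-ranking frame.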
What is a deontological concept and what is a non-deontological concept?
I don’t see why I’m relying on it any more than the VNM-utilitarian is.
I thought I had made that clear in my second sentence:
Um, no. I can’t respond to a challenge to give a non-X definition of Y if I don’t know what X means.