Can you tell me how to distinguish “arbitrary” from “non-arbitrary” moral axioms?
Since Julian Morrison has not answered yet, allow me to answer. (I personally do not advance the following argument because I believe I possess a stronger argument against happiness as terminal value.)
If you are scheduled for neurosurgery, and instead of the neurosurgeon, the neurosurgeon’s wacky brother Billy performs the surgery, with the result that you end up with system of terminal values X, whereas if the neurosurgeon had done the surgery you would have ended up with different values, well, that will tend to cause you to question system of values X. Similarly, suppose you learn that once the process of evolution seizes on a solution to a problem, that solution tends to get locked in, and suppose you have no reason to believe that a mammal-level central nervous system needs to be organized around a “reward architecture” (as the mammal nervous system actually is organized). That tends to cast doubt on human happiness or mammal happiness as a terminal value: if evolution had seized on a different solution to whatever problem the “reward architecture” solves, then the species with human-level creativity and intelligence that evolved would not feel happiness or unhappiness, and consequently the idea that happiness is a terminal value would never have occurred to them.