No, the reductionist description of the Correct Theory of Physics eventually involves pointing at lab equipment. There is no lab equipment for morality, so the analogy is not valid.
I could point a gun to your head and ask you to explain why I shouldn’t pull the trigger.
That scenario doesn’t lead to discovering the truth. If I deceive you with bullshit and you don’t pull the trigger, that’s a victory for me. I invite you to try again, but next time pick an example where the participants are incentivised to make true statements.
ETA: …unless the truth we care about is just which flavors of bullshit will persuade you not to pull the trigger. If that’s what you mean by morality, you probably agree with me that it is just social signaling.
Like I mentioned elsewhere in this thread, the “No Universally Compelling Argument” post you cite applies equally well to physical and even mathematical facts (in fact, that was what Eliezer was mainly referring to in that post).
In fact, the main point of that sequence is that just because there are no universally compelling arguments doesn’t mean truth doesn’t exist. As Eliezer mentions in “Where Recursive Justification Hits Bottom”:
Now, one lesson you might derive from this, is “Don’t be born with a stupid prior.” This is an amazingly helpful principle on many real-world problems, but I doubt it will satisfy philosophers.
A formal proof is still a proof, though nothing mandates that a listener must accept it. A mind can very well contain an absolute dismissal mechanism, or optimize for something other than correctness.
We can understand what sort of assumptions we’re making when we derive information from mathematical axioms, or the axioms of induction, and how further information follows from that. But what assumptions are we making that would allow us to extrapolate absolute moral facts? Does our process give us any way to distinguish them from preferences?
Well you could just as easily use your lab equipment to deceive me with bullshit.
And if he gave a true moral argument you would have to accept it?
How would you distinguish a true argument from a merely persuasive one?