I think it’s good to think of FIAT stuff as a special case of applying some usual understanding-machinery (like, abductive and inductive machinery) in value-laden cases. It’s the special case where one implicitly or explicitly abducts to (one having) goals. Here is an example ethical story where the same thing shows up in various ways such that it’d imo be sorta contrived to analyze it in terms of goals being adopted:
You find it easy to feel a strong analogy between “you do X to me” and “I do X to you”. (In part, this is because: as a human, you find it easy to put yourself in someone else’s shoes.)
This turns into an implicit ethical inference rule — you can now easily move from believing “you should not do X to me” to believing “I should not do X to you”. Machinery for this transformation of an analogy into an inference rule is present largely because it is good for understanding stuff, which is good for lots of stuff — importantly, it (or some more general thing which has it as a special case) is ultimately good for producing more offspring.
You then notice you have this inference rule, and you feel good about having it, and you turn it into an explicit principle: “do not treat others in ways that you would not like to be treated”. E.g. you do this because you want to tell your kid something to get them to stop misbehaving in a particular way, and they don’t seem to be fully getting your argument/explanation (which used your implicit inference rule) for why what they did was egregious. This explicitizing move is obviously good for teaching in general, and good for individual understanding (it’s often useful to scrutinize your inference rules, e.g. to limit or expand their context of applicability).
This explicit principle then “gains points” from making sense of lots of other stuff you already thought, e.g. “lying is bad” and “stealing is bad”. Machinery for this sort of point-gaining is present because it’s again good for understanding stuff in many cases — it’s just a hypothesis gaining points by [making sense of]/predicting facts.
You then seek to make this explicit principle more precise and correct/”correct” (judged against some other criteria, e.g. by whether it gives correct verdicts (i.e. “makes correct predictions”) about what one should do in various particular cases). Maybe you come up with the version: “act only in accordance with that maxim through which you can at the same time will that it become a universal law”.
You seek good further justifications of it, and often adopt those as plausible hypotheses, often effectively taking the principle itself as some evidence for these hypotheses. You identify key questions relating to whether the principle is right. You clarify its meaning (that is, what it should mean) further. You study alternative formulations of it.[1] You spell out its consequences better. You seek out problematic cases. You construct a whole system around the principle. All this is a lot like something you would do to a scientific hypothesis.
(Acknowledgment. A guiding idea here is from a chat with Tom Everitt.)
(Acknowledgment’. A guiding frustration here is that imo people posting on LessWrong think way too much in terms of goals.)
[1] e.g. “a rational being must always regard himself as lawgiving in a kingdom of ends possible through freedom of the will, whether as a member or as sovereign”