I agree; the goal is to get humans to think about programming some forms of moral reasoning, even if it’s far from sufficient (and it’s far from being the hardest part of FAI).