To be fair (cough), your argument that ‘5 people means the pie should be divided into 5 equal parts’ assumes several things...
1) Each person, by virtue of merely being there, is entitled to pie.
2) Each person, by virtue of merely being there, is entitled to the same amount of pie as every other person.
While this division of the pie may be preferable for the health of the collective psyche, it is still a completely arbitrary (cough) way to divide the pie. There are several other meaningful, rational, logical ways to divide the pie. (I believe I suggested one in a previous post.) Choosing to divide the pie into 5 equal parts simply asserts the premise ‘existence = equal right’ as the dominant principle by which to guide the division of the pie.
You have to remove all other considerations (including hunger, health, and any existing social relationships such as parent-child) in order to allow the ‘existence = equal right’ principle to be an acceptable way to divide the pie. This doesn’t make that principle the ‘bedrock’ of morality. Quite the contrary. It says that this principle only dominates when all other factors are ignored.
Relatively new here (hi), and without adequate ability to warp spacetime so that I may peruse all that EY has written on this topic, but I am still wondering: why pursue the idea that morality is hardwired, or that there is an absolute code of what is right or wrong?
Thou shalt not kill—well, except if someone is trying to kill you.
To be brief, it seems to me that:
1) Morality exists in a social context.
2) Morality is fluid, and can change/has changed over time.
3) If there is a primary moral imperative that underlies everything we know about morality, it seems that that imperative is SURVIVAL—of self first, kin second, group/species third.
Empathy exists because it is a useful survival skill. Altruism is a little harder to explain.
But what justifies the assumption that there IS an absolute (or even approximate) code of morality that can be hardwired and impervious to change?
The other thing I wonder about when reading EY on morality is: would you trust your AI to LEARN morality and moral codes in the same way a human does? (See Kohlberg’s Levels of Moral Reasoning.) Or would you presume that SOMETHING must be hardwired? If so, why?
(EY—Do you summarize your views on these points somewhere? Pointers to said location very much appreciated.)