If you run this analysis over groups of people that include competing religions, or just plain competing tribes or nations, I think you will get eigenmodes which sort those people into their affinity groups, and eigenvalues which essentially just count how many people are in each affinity group. So we find that supporting Team6 is more “moral” simply because there are more people in Team6 than on any other team, and we conclude, essentially, that might makes right.
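To make this concrete, here is a toy sketch of the claim, assuming the proposal amounts to an eigendecomposition of a cooperation/affinity matrix (the matrix, team sizes, and names here are all hypothetical): with two non-interacting teams, each team shows up as an eigenvector supported only on its own members, and the corresponding eigenvalue is just the team's headcount.

```python
import numpy as np

# Hypothetical affinity matrix for 6 people: the first 4 are "Team6",
# the last 2 are a rival team. Affinity is 1 within a team, 0 across teams,
# giving a block structure with no cross-team cooperation.
A = np.zeros((6, 6))
A[:4, :4] = 1.0  # Team6 block
A[4:, 4:] = 1.0  # rival-team block

# Eigendecomposition of the symmetric affinity matrix, sorted by eigenvalue.
vals, vecs = np.linalg.eigh(A)
order = np.argsort(vals)[::-1]
vals, vecs = vals[order], vecs[:, order]

print(vals[:2])    # leading eigenvalues: 4 and 2 -- the team sizes
print(vecs[:, 0])  # leading eigenvector: nonzero only on Team6's members
```

The leading eigenvalue is 4 (Team6's size), the next is 2 (the rival team's size), and the top eigenvector is uniform over Team6 and zero elsewhere: the analysis has sorted people by affinity group and ranked the groups by headcount, exactly the "might makes right" outcome described above.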
I think that, evolutionarily speaking, our propensity for morality is designed to make us team players, or at least to make enough of us enough of a team player to reap the group benefits of cooperation. So if this proposal just identifies teams and counts their members, that doesn't make it wrong, but it would be important to point out that it is merely finding the affinity groups, not answering deep questions about whether incest is wrong or whether we should push fat people in front of trolley cars.