I’m quite convinced by how you analyze the problem of what morality is and how we should think about it, up until the point about how universally it applies. I’m just not sure that humans’ different shards of godshatter add up to the same thing across people—a point that I think would become apparent as soon as you started to specify what the huge computation actually WAS.
I would think of the output not as a yes/no answer, but as something akin to ‘What percentage of human beings would agree that this was a good outcome, or could be convinced of it by some set of arguments?’ Some things, like saving a child’s life, would receive very widespread agreement. Others, like a global Islamic caliphate or widespread promiscuous sex, would have more disagreement, including potentially disagreement that cannot be resolved by presenting any conceivable argument to the parties.
The question of ‘how much’ each person views something as moral comes into play as well. If different people can’t all be convinced of a particular outcome’s morality, the question ends up seeming remarkably similar to the question in economics of how to aggregate many people’s preferences over goods. Because you never observe preferences in total, you let everyone trade and express their desires through revealed preference to reach a Pareto solution. Here, a solution might be to assign each person a certain amount of ‘morality dollars’, let them spend as they wish across outcomes, and add it all up. As in economics, there’s still the question of how to allocate the initial wealth (in this case, how much to weight each person’s opinions).
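To make the aggregation idea concrete, here is a minimal sketch of what ‘morality dollars’ voting might look like. Everything here is my own illustration, not anything from the original post: the function name, the normalization scheme (scaling each person’s spending so their total influence equals their assigned weight), and the example outcomes are all invented for the sake of the sketch.

```python
# Hypothetical sketch of 'morality dollars' aggregation.
# All names and numbers are illustrative assumptions, not from the post.

def aggregate(allocations, weights):
    """Sum each person's spending per outcome, scaled by their weight.

    allocations: {person: {outcome: dollars spent on that outcome}}
    weights:     {person: initial 'wealth' (how much their opinion counts)}

    Each person's spending is normalized so that their total influence
    equals their weight, regardless of how many raw dollars they spent.
    """
    totals = {}
    for person, spending in allocations.items():
        spent = sum(spending.values())
        if spent == 0:
            continue  # someone who spends nothing contributes nothing
        scale = weights[person] / spent
        for outcome, amount in spending.items():
            totals[outcome] = totals.get(outcome, 0.0) + amount * scale
    return totals

# Two people with equal initial weights but different priorities.
votes = {
    "alice": {"save_child": 80, "caliphate": 20},
    "bob":   {"save_child": 50},
}
result = aggregate(votes, {"alice": 1.0, "bob": 1.0})
# save_child: 80/100 + 50/50 = 1.8; caliphate: 20/100 = 0.2
```

The unresolved part of the analogy shows up directly in the `weights` argument: the aggregation is mechanical once the initial wealth is fixed, but nothing in the mechanism itself tells you how to fix it.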
I don’t know how much I’m distorting what you meant—it almost feels like we’ve just replaced ‘morality as preference’ with ‘morality as aggregate preference’, and I don’t think that’s what you had in mind.
I’ll be interested to see what your metamorality is. The one thing I think has been missing so far from the discussion is the question of what language we have, without some metamorality, to condemn someone else who chooses a different morality from ours. Obviously you can’t argue morality into a rock, but we’re not trying to do that—only to argue it into another human who shares a fundamentally similar architecture, but not necessarily our morality.
Moreover, to say that one person can abandon a metamorality without affecting their underlying morality doesn’t imply that society as a whole can ditch a particular metamorality (e.g. Judeo-Christian worldviews) and still expect the next generation’s morality to stay unchanged. If you explicitly reject any metamorality, why should your children bother to listen to what you say anyway? Isn’t their morality just as good as yours?
It may be that religious metamoralities serve as a basis to inculcate a particular set of moral teachings, which only then allows the original metamorality to be abandoned—e.g. it causes at least some of the population to do the right thing for the wrong reasons, when they otherwise might not have done the right thing at all.