Great post! I like that it sidesteps the problem of aggregating values by trying to learn already-aggregated norms.
Why not use the already codified norms as a starting point for learning actual norms? Almost every society has a large set of rules and laws as written texts, and an AI could consult a member of society with a deep understanding of the norms (a lawyer, say) to clarify the meaning of particular norms or to write down unspoken rules.
What could also go wrong is that some combinations of norms could be dangerous, especially if interpreted literally. For example, there is a fire in an apartment and a person inside needs help, but a robot can't enter because doing so would violate the norm against entering private property without an invitation. There could be many edge cases where norms don't work.
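One common way to keep literal norms from producing this kind of failure is to attach priorities and let a higher-priority norm override a lower-priority one in a conflict. Here is a minimal sketch of that idea; the norm names, priorities, and context labels are hypothetical illustrations, not anything from the post:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    name: str
    priority: int                       # higher value wins on conflict
    verdict: Callable[[str, set], str]  # "forbid", "require", or "silent"

def decide(action: str, context: set, norms: list) -> bool:
    """Allow the action unless the highest-priority norm that takes
    a stance on it forbids it."""
    stances = [(n.priority, n.verdict(action, context)) for n in norms]
    stances = [(p, v) for p, v in stances if v != "silent"]
    if not stances:
        return True  # no norm applies: permitted by default
    _, top = max(stances, key=lambda pv: pv[0])
    return top != "forbid"

no_trespass = Norm(
    "no entering private property uninvited", 1,
    lambda a, c: "forbid" if a == "enter_apartment" and "uninvited" in c
    else "silent")
rescue = Norm(
    "help a person in danger", 10,
    lambda a, c: "require" if a == "enter_apartment" and "person_in_danger" in c
    else "silent")

norms = [no_trespass, rescue]
# Ordinary day: entering uninvited is forbidden.
assert decide("enter_apartment", {"uninvited"}, norms) is False
# Fire with a person inside: the higher-priority rescue norm overrides.
assert decide("enter_apartment", {"uninvited", "person_in_danger"}, norms) is True
```

This only pushes the problem back one level (someone still has to choose the priorities), but it shows that the conflict is a representational issue, not a fatal flaw of norm learning.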
The idea of “not doing” itself needs clarification, as sometimes “not doing” is an action: for example, if a robot stands in a doorway, it isn't doing anything, but I can't get out.
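The doorway point can be made concrete in a toy world model (entirely hypothetical, just for illustration): even the null action has consequences and therefore needs evaluation like any other action.

```python
DOORWAY = (0, 0)

def exit_blocked(robot_pos, doorway=DOORWAY):
    """The human's exit is blocked whenever the robot occupies the doorway."""
    return robot_pos == doorway

def step(pos, action):
    """Apply an action to the robot's position; 'stay' is the null action."""
    moves = {"stay": (0, 0), "left": (-1, 0), "right": (1, 0)}
    dx, dy = moves[action]
    return (pos[0] + dx, pos[1] + dy)

pos = DOORWAY
# "stay" changes nothing about the robot, yet it has a consequence:
assert exit_blocked(step(pos, "stay")) is True
# stepping aside unblocks the exit:
assert exit_blocked(step(pos, "right")) is False
```

So a norm like “when in doubt, do nothing” is only safe if “nothing” is itself evaluated for its effects on others.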
I agree that codified norms are a good place to start, but they are only a starting point—we will have to infer norms from behavior/speech, because as you noted, codified norms interpreted literally will have many edge cases.