It’s a good point, re: some of the gap being that it’s hard to concretely visualize the world in which AGI isn’t built. And also about the “we” being part of the lack of concreteness.
I suspect there’re lots of kinds of ethical heuristics that’re supposed to interweave, and that some are supposed to be more like “checksums” (indicators everyone can use in an embodied way to see whether there’s a problem, even though they don’t say how to address it if there is a problem), and others are supposed to be more concrete.
For some more traditional examples:
There’re heuristics for how to tell whether a person or organization is of bad character (even though these heuristics don’t tell you how to respond if a person is of bad character). E.g. JK Rowling’s character Sirius’s claim that you can see the measure of a person by how they treat their house-elves (which has classical Christian antecedents; I’m just mentioning a contemporary phrasing).
There’re heuristics for how countries should be, e.g. “should have freedom of speech and press” or (longer ago) “should have a monarch who inherited legitimately.”
It would be too hard to try to equip humans and human groups for changing circumstances via only “here’s what you do in situation X” rules. It’s somewhat easier to do it (and traditional ethical heuristics did do it) via a combination of “you can probably do well by [various what-to-do heuristics]” and “you can tell whether you’re doing well by [various other checksum-type heuristics].” Ethics is there to help us design our way to better plans, not only to always hand us those plans.
Also we understand basic arithmetic around here, which goes a long way sometimes.