In a framing that permits orthogonality, moral realism is not a useful claim: even if it's true in some sense, it wouldn't matter for any practical purpose. That is the point of the extremely unusual person example; you can vary the degree of unusualness as needed. And I didn't mean to suggest repugnance of the unusualness, more like its alienness with respect to some privileged object-level moral position.
Object-level moral considerations do need to shape the future, but I don't see any issue with their influence originating exclusively from individual people, with application at scale arising purely from coordination among the influence those people exert. Take that extremely unusual person as one example: their influence wouldn't be significant, because there's only one of them, but it isn't diminished beyond that under the pressure of others. Where it stands in direct opposition to others, the boundaries aspect of coordination comes into play, some form of negotiation. But if instead many people share some object-level moral principles, their collective influence should result in global outcomes that are in no way inferior to whatever you imagine top-down object-level moral guidance might achieve.
So I don't see any point to a top-down architecture once superintelligence enables practical considerations to be tracked in sufficient detail at the level of individual people; I see only disadvantages. The relevance of object-level morality (or of aligning the superintelligence managing the physical world at the substrate level) is to ensure that it doesn't disregard particular people, that it does allocate influence to their volition. The alternatives are that some or all people get zero or minuscule influence (extinction or permanent disempowerment) compared to AIs, or (in principle, though this seems much less likely) compared to other people.