Simulators can be configured to simulate many simulacra in tandem, and can thus produce a variety of perspectives on a given problem.
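As a toy illustration of what "in tandem" could mean at the prompt level, the same problem can be prefixed with several persona framings and a continuation sampled for each. A minimal sketch: the sampling function is left abstract, since it stands in for whatever completion interface the simulator exposes, and the persona strings are invented for illustration.

```python
from typing import Callable

# Hypothetical persona prefixes; any framings that pick out distinct
# simulacra would do.
PERSONAS = [
    "A cautious safety researcher writes:",
    "An enthusiastic engineer writes:",
    "A skeptical economist writes:",
]

def perspectives(problem: str, sample: Callable[[str], str]) -> dict[str, str]:
    """Condition the same model on each persona; collect one completion each.

    `sample` is a stand-in for the simulator's completion call (prompt -> text).
    """
    return {
        persona: sample(f"{persona}\n\nProblem: {problem}\nResponse:")
        for persona in PERSONAS
    }
```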
It would be nice to have a way of telling that different texts have the same simulacrum acting through them, or that they concern the same problem. Expected utility arises from coherence among the actions of an agent (one that isn't too updateless), so more general preferences are probably characterized by actions that cohere in some more general sense. Some aspects of alignment between agents might then be about coherence between actions they perform in separate situations, not necessarily situations in which the agents interact with each other. Could mutual alignment of different simulacra be measured? In a simulator, asking this question probably requires moving sideways in text space: finding more instances of a given thing across different texts, that is, sampling from all texts that talk about that thing, as in the sketch below.
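One crude operationalization of "moving sideways in text space" is nearest-neighbor retrieval over text embeddings: embed a text featuring the simulacrum or problem of interest, then collect other texts whose embeddings lie nearby. A minimal sketch, assuming the sentence-transformers library; the model name and similarity threshold are illustrative choices, not anything canonical.

```python
# Given a query text featuring some simulacrum or problem, retrieve other
# texts whose embeddings are nearby, as a crude proxy for "texts the same
# thing acts through or that concern the same problem".
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

def sideways_neighbors(query: str, corpus: list[str],
                       threshold: float = 0.5) -> list[str]:
    """Return corpus texts whose cosine similarity to the query exceeds threshold."""
    vecs = model.encode([query] + corpus)                       # one vector per text
    vecs = vecs / np.linalg.norm(vecs, axis=1, keepdims=True)   # unit-normalize
    sims = vecs[1:] @ vecs[0]                                   # cosine similarity to query
    return [text for text, sim in zip(corpus, sims) if sim > threshold]
```

Embedding similarity mostly captures topical closeness rather than "the same simulacrum acting", so this is at best a proxy; actually measuring coherence of actions across the retrieved texts would need further structure on top of it.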