I managed to parse about half of your second paragraph, but it seems you didn’t actually answer the question. Let me rephrase.
You say that simulation threats probably won’t work on humans because our “preference” is about this universe only, or something like that. When we build an AI, can we specify its “preference” in a similar way, so that it only optimizes “our” universe and doesn’t participate in simulation trades/threats? (Putting aside the question of whether we would want to do that.)