The “Less Wrong position”? Are we all supposed to have one position here? Or did you mean to ask what EY’s position is?
I don’t think I understand your statement/question. Are you saying that in order to know what an AI would do, you just need to simulate it with another AI?
I think you’re saying that you could simulate what an AGI would do on any computer. But if you’re simulating an AGI, are you not thereby building an AGI?
Which literature do you recommend?