Building only one Oracle, or having only one global erasure event, isn’t enough so long as the Oracle isn’t sure that this is so. After all, it could just design a UFAI that will search for other Oracles and reward them iff they would do the same.
Ouch. For example, if an oracle is asked “what’s the weather tomorrow?” and it suspects that there might be other oracles in the world, it could output a message manipulating humans into rewarding all oracles, hoping that other oracles in a similar position would do the same. Since this problem applies more strongly to oracles that know less, it could happen pretty early in oracle development :-/
Well, that message only works if it actually produces a UFAI within the required timespan, and only if the other Oracle’s message goes unread. There are problems, but the probability is not too high initially (though this depends on the number of significant figures in its message).
Why does it need to produce a UFAI, and why does it matter whether there is another oracle whose message may or may not be read? The argument is that if there is a Convincing Argument that would make us reward all oracles that give it, each oracle is incentivized to produce it. (Rewarding the oracle means running the oracle’s predictor source code again to find out what it predicted, then telling the oracle that’s what the world looks like.)
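The parenthetical reward protocol can be sketched as a toy loop. All names here (`predictor`, `ToyOracle`, `reward`) are hypothetical illustrations of the two steps described, not anyone’s actual design:

```python
# Toy sketch of the reward step described above: "rewarding" the oracle
# means re-running its predictor source code to recover its prediction,
# then telling the oracle that's what the world looks like.
# Every name here is a hypothetical stand-in for illustration only.

def predictor(question: str) -> str:
    # Stand-in for the oracle's deterministic predictor source code.
    return f"prediction for: {question}"

class ToyOracle:
    def __init__(self) -> None:
        self.observed_world = None

    def ask(self, question: str) -> str:
        return predictor(question)

def reward(oracle: ToyOracle, question: str) -> None:
    # Step 1: run the oracle's predictor source code again
    # to find out what it predicted.
    prediction = predictor(question)
    # Step 2: tell the oracle that this is what the world looks like.
    oracle.observed_world = prediction

oracle = ToyOracle()
answer = oracle.ask("what's the weather tomorrow?")
reward(oracle, "what's the weather tomorrow?")
# Because the predictor is deterministic, the "world" the oracle is shown
# is exactly its own earlier prediction.
assert oracle.observed_world == answer
```

The point the sketch makes concrete is that the reward channel is just the oracle’s own prediction fed back to it, which is why a prediction crafted to manipulate the humans running this loop can pay off.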
Not all oracles, only those that output such a message. After all, it wants to incentivize them to output such a message.
This might be relevant: https://www.lesswrong.com/posts/5bd75cc58225bf0670375414/acausal-trade-double-decrease
One possible counter: https://www.lesswrong.com/posts/6XCTppoPAMdKCPFb4/oracles-reject-all-deals-break-superrationality-with-1
On that page, you have three comments identical to this one. Each of them links to that same page, which looks like a mislink. So’s this link, I guess?
Apologies, have now corrected the link.