Right, I understand what you mean. I was thinking of it in the context of a person being presented with this situation, not an idealized agent running a specific decision theory.
And Omega's simulated agent would presumably hold all the same information as the person would, and be capable of responding the same way.
Cheers for clarifying that for me.