Why worry about non-optimal programs? We’re talking about a theory of how AIs should make decisions, right?
I think it’s impossible for an AI to avoid the need to determine non-trivial properties of other programs, even though Rice’s Theorem says there is no algorithm for doing this that’s guaranteed to work in general. It just has to use methods that sometimes return wrong answers. And to deal with that, it needs a way to handle mathematical uncertainty.
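To make the point concrete, here is a minimal sketch (all names invented) of what "methods that sometimes return wrong answers" might look like. Rice's Theorem rules out a general decider for a non-trivial semantic property like "does this program always return 0?", but a heuristic that samples inputs can still guess, at the cost of occasionally being wrong:

```python
def probably_always_zero(f, samples=range(100)):
    """Heuristic check for the semantic property "f always returns 0".

    No general algorithm can decide this for all programs (Rice's
    Theorem), so we settle for sampling: finding no counterexample
    among the samples does NOT prove none exists, so the answer can
    be wrong.
    """
    return all(f(x) == 0 for x in samples)

always_zero = lambda x: 0
zero_until_a_million = lambda x: 0 if x < 10**6 else 1

print(probably_always_zero(always_zero))          # True (correct)
print(probably_always_zero(zero_until_a_million)) # True (wrong!)
```

The second call illustrates exactly the failure mode the comment describes: the heuristic confidently returns a wrong answer, which is why the AI needs a way to represent and update mathematical uncertainty rather than trusting such checks outright.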
ETA: If formalizing the problem is a non-trivial process, you might be solving most of the problem yourself in there, rather than letting the AI’s decision algorithm solve it. I don’t think you’d want that. In this case, for example, if your AI were to encounter Omega in real life, how would it know to model the situation using a world program that invokes a special kind of oracle?
Re ETA: in the comments to Formalizing Newcomb’s, Eliezer effectively said he prefers the “special kind of oracle” interpretation to the simulator interpretation. I’m not sure which one an AI should assume when Omega gives it a verbal description of the problem.
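The two interpretations can be sketched as two different world programs (a toy model with invented names; the payoffs are the standard $1,000,000 / $1,000 of Newcomb's problem, and the predictor is simply stipulated to be accurate):

```python
def world_simulator(agent):
    # Simulator reading: Omega predicts by literally running the agent.
    prediction = agent()  # "one-box" or "two-box"
    box_b = 1_000_000 if prediction == "one-box" else 0
    choice = agent()
    return box_b if choice == "one-box" else box_b + 1_000

def world_oracle(agent, oracle):
    # Oracle reading: Omega consults an opaque predictor, stipulated
    # to be correct, without running the agent itself.
    box_b = 1_000_000 if oracle(agent) == "one-box" else 0
    choice = agent()
    return box_b if choice == "one-box" else box_b + 1_000

one_boxer = lambda: "one-box"
two_boxer = lambda: "two-box"

print(world_simulator(one_boxer))  # 1000000
print(world_simulator(two_boxer))  # 1000
```

The two world programs give the same payoffs for deterministic agents, but they differ in what the agent's decision algorithm can infer about itself: in the simulator reading the agent's own code appears twice in the world program, while in the oracle reading it appears once, alongside an oracle call it cannot inspect. That difference is exactly what makes the choice of formalization non-trivial.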
Wha?
If you mean my saying (3), that doesn't mean "Oracle"; it means we reason about the program without fully simulating it.
Yes, I meant that. Maybe I misinterpreted you; maybe the game needs to be restated with a probabilistic oracle :-) I can't get far without a mathy model.