It makes sense, but I don’t think I agree.
Suppose I am deciding whether to open box A or box B. I consult my deduction engine; it tells me that if I open box B I die and if I open box A I get a nickel. So I open box A.
Now suppose that before making this choice I was considering “Should I use my deduction engine for choices between two boxes, or just guess randomly, saving energy?” The statement that the deduction engine is useful is apparently equivalent to
“If the deduction engine says that the consequences of opening box A are better than the consequences of opening box B, then the consequences of opening box A are better than the consequences of opening box B,” which is the sort of statement the deduction engine could never itself consistently output. (By Loeb’s theorem, it would then immediately output “the consequences of opening box A are better than the consequences of opening box B” independently of any actual arguments about box A or box B.)
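To make the Loebian step explicit (this formalization is mine, not something from your post): write P for “the consequences of opening box A are better than the consequences of opening box B” and □P for “the deduction engine outputs P.” The usefulness claim is then □P → P, and Loeb’s theorem says a consistent system strong enough to reason about its own outputs cannot prove that claim without also proving P outright:

```latex
% Loeb's theorem, instantiated for the box example (my notation, not the post's).
% P      : "the consequences of opening box A are better than those of opening box B"
% \Box P : "the deduction engine outputs P"
\[
  \vdash \; \Box P \rightarrow P
  \quad\Longrightarrow\quad
  \vdash \; P
\]
% So if the engine could certify its own usefulness, \Box P \to P, it would be
% committed to P with no reference to what is actually inside the boxes.
```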
It seems the only way to get around this is to weaken the statement by inserting some “probably”s. After thinking about Loeb’s theorem more carefully, it may be the case that refusing to believe anything with probability 1 is enough to avoid this difficulty.
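For what it is worth, here is one shape the “probably”-weakened statement could take (my own guess; the threshold ε is invented for illustration):

```latex
% A possible "probably" weakening of the usefulness claim (my sketch; the
% threshold \varepsilon is made up for illustration).
\[
  \Box P \;\rightarrow\; \Pr(P) \ge 1 - \varepsilon,
  \qquad 0 < \varepsilon < 1.
\]
% Loeb's theorem needs the exact reflection \Box P \to P as its hypothesis, so a
% schema of this weaker form does not straightforwardly trigger the Loebian
% collapse; whether it is actually enough to avoid the difficulty is the open
% question.
```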
I still can’t see why the AI, when deciding “A or B”, is allowed to simply deduce consequences, while when deciding “Deduce or Guess” it is required to first deduce that the deducer is “useful”, then deduce consequences. The AI appears to be using two different decision procedures, and I don’t know how it chooses between them.
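For concreteness, this is how I am picturing the two procedures (a rough sketch of my own; engine.consequences, engine.proves, and .utility are invented placeholders, not anything your post defines):

```python
# Two decision procedures the AI seems to be running (illustrative sketch only;
# `engine.consequences`, `engine.proves`, and `.utility` are invented placeholders).

def choose_box(engine, options=("A", "B")):
    # "A or B": deduce the consequences of each option and take the better one,
    # with no detour through a claim that the engine itself is useful.
    outcomes = {opt: engine.consequences(opt) for opt in options}
    return max(options, key=lambda opt: outcomes[opt].utility)

def choose_method(engine):
    # "Deduce or Guess": here the AI apparently must first establish
    # "useful(engine)" -- the very statement Loeb's theorem keeps it from
    # consistently proving about itself -- before trusting the deduction.
    if engine.proves("useful(engine)"):
        return "Deduce"
    return "Guess"
```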
Can you define exactly when usefulness needs to be deduced? It seems that the AI can deduce consequences in either case without first deducing usefulness.
Apologies if I’m being difficult; if you’re making progress as it is (as implied by your idea about “probably”s), we can drop this and I’ll try to follow along again next time you post.