This does seem to be the “obvious” next step in the UDT approach. I proposed something similar as “UDT2” in a 2011 post to the “decision theory workshop” mailing list, and others have made similar proposals.
But there is a problem with having to choose how much time/computing resources to give to the initial decision process. If you give it too little, its logical probabilities might be very noisy and you could end up with a terrible decision; if you give it too much, it could update on too many logical facts and lose on acausal bargaining problems. With multiple AI builders, UDT2 seems to imply a costly arms race: each builder has an incentive to give their initial decision process less time than would otherwise be optimal, so that their AI commits faster (and is hopefully logically updated upon by other AIs) while avoiding updating on other AIs’ commitments.
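To see why the race unravels, consider a toy model (the payoff function below is my own illustration, not something from the original discussion). Suppose builder $i$ gives its AI $t_i$ units of reflection time, which buys decision quality $a(t_i)$ (increasing in $t_i$), plus a bargaining bonus $b > 0$ for committing first:

$$u_i(t_i, t_j) \;=\; a(t_i) \;+\; b \cdot \mathbf{1}[t_i < t_j].$$

Undercutting the other builder by a small $\varepsilon$ costs almost nothing in quality ($a(t_i) - a(t_i - \varepsilon) \approx 0$) but wins the bonus $b$, so reflection times unravel downward from the jointly optimal level; that is the costly arms race described above.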
I’d like to avoid this but don’t know how. I’m also sympathetic to the sentiment of Nesov (and others, such as Gary Drescher) that maybe there is a better approach to the problems UDT is trying to solve, but I don’t know what that is either.
So my plan is to “solve” the problem of choosing how much time to give it by introducing a parameter (which stage of a logical inductor to use), and then trying to get results saying that if we set the parameter sufficiently high, and only consider the output on sufficiently far-out problems, then the procedure provably does well.
This does not solve the problem, but it might let us analyze what we would get if we did solve it.
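One way to read that plan formally (a sketch in my own notation, not the poster’s: $\mathbb{P}_n$ is the $n$-th stage of a logical inductor, $A_n$ the agent that decides using $\mathbb{P}_n$, and $U_k(\cdot)$ the utility achieved on the $k$-th problem in some sequence of decision problems):

$$\forall \varepsilon > 0 \;\; \exists N \;\; \forall n \ge N \;\; \exists K \;\; \forall k \ge K: \quad U_k(A_n) \;\ge\; \sup_{A} U_k(A) \;-\; \varepsilon.$$

That is, any sufficiently late inductor stage comes within $\varepsilon$ of the best achievable utility on all sufficiently far-out problems, without ever identifying a single stage as the “right” amount of thinking time.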