> unless it resolves never to allow itself to be outperformed on any problem (in TDT über alles fashion).
This is not actually possible. You can always play the “I simulated you and put the money in the place you don’t choose” game.
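To make that game concrete, here is a minimal Python sketch (the agent, box names, and function names are illustrative assumptions, not anything from the original problems): Omega runs a copy of the agent's decision procedure and puts the money in the box the copy didn't pick, so any deterministic agent loses every time.

```python
# Minimal sketch: Omega simulates a deterministic agent and puts
# the money in the box the simulation did NOT choose.

def omega_place_money(agent):
    predicted = agent()         # run a perfect copy of the agent
    return "B" if predicted == "A" else "A"

def deterministic_agent():
    return "A"                  # any fixed decision procedure

money_box = omega_place_money(deterministic_agent)
choice = deterministic_agent()  # the real run matches the simulation
print(choice == money_box)      # always False: the agent never gets the money
```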
> It seems a real strain to describe either of them as unfair to TDT.
From this side of the screen, this looks like a property of you, not the problems. If we replace the statement about “relative numbers” in the future (we were having to make assumptions about that anyhow, so let’s just save time and stick in the assumptions), then problem 2 reads “I simulated the best decision theory by definition X and put the money in the place it doesn’t choose.” This demonstrates that no matter how good a decision theory is by any definition, it can still get hosed by Omega. In this case we’re assuming that definition X is maximized by TDT (thus, it’s a unique specification), and yea, TDT did go forth and get hosed by Omega.
> This is not actually possible. You can always play the “I simulated you and put the money in the place you don’t choose” game.
But the obvious response to that game is randomisation among the choice options: there is no guarantee of winning, but no-one else can do better than you either. It takes a new “twist” on the problem to defeat the randomisation approach, and show that another agent type can do better.
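A small Monte Carlo sketch of that point (again with illustrative names, and assuming the coin inside Omega's simulation is independent of the coin in the real run): a uniformly randomising agent wins about half the time no matter how Omega places the money, and no choice distribution does better against a simulate-and-oppose Omega.

```python
import random

# Assumption: the coin flipped inside Omega's simulation is
# independent of the coin flipped in the real run.

def coin_agent(p=0.5):
    return "A" if random.random() < p else "B"

def omega_place_money(agent):
    predicted = agent()         # simulation flips its own coin
    return "B" if predicted == "A" else "A"

trials = 100_000
wins = sum(coin_agent() == omega_place_money(coin_agent) for _ in range(trials))
print(wins / trials)            # ~0.5

# A biased coin (choose "A" with probability p) wins with probability
# 2*p*(1-p) <= 1/2, so the uniform coin is the best response here.
```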
I did ask on my original post (on Problematic Problems) whether that “twist” had been proposed or studied before. There were no references, but if you have one, please let me know.
> It seems a real strain to describe either of them as unfair to TDT.
>
> From this side of the screen, this looks like a property of you, not the problems. If we replace the statement about “relative numbers” in the future (we were having to make assumptions about that anyhow, so let’s just save time and stick in the assumptions), then problem 2 reads “I simulated the best decision theory by definition X and put the money in the place it doesn’t choose.” This demonstrates that no matter how good a decision theory is by any definition, it can still get hosed by Omega. In this case we’re assuming that definition X is maximized by TDT (thus, it’s a unique specification), and yea, TDT did go forth and get hosed by Omega.
So there’s a class of problems where failure is actually a good sign? Interesting. You might want to post further on that, actually.
Hm, yeah. After some computational work at least. Every decision procedure can get hosed by Omega, and the way in which it gets hosed is diagnostic of its properties. Though not uniquely, I guess, so you can’t say “it fails this special test therefore it is good.”
> But the obvious response to that game is randomisation among the choice options: there is no guarantee of winning, but no-one else can do better than you either. It takes a new “twist” on the problem to defeat the randomisation approach, and show that another agent type can do better.
>
> I did ask on my original post (on Problematic Problems) whether that “twist” had been proposed or studied before. There were no references, but if you have one, please let me know.
I don’t have such a reference—so good job :D And yes, I was assuming that Omega was defeating randomization.