As I commented on another post, it seems Eliezer already addressed the specific claim you made here via probabilistic LDT solutions, as Mikhail explained there and in a comment here. (And the quoted solution was written before you wrote this post.)
Is there a version of your claim that the modification explained there fails to address?
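For concreteness, here is a minimal sketch of the shape of that probabilistic solution, assuming the standard ultimatum-game framing (a pot of 10 with a fair share of 5); the names and the epsilon margin are illustrative, not taken from the quoted solution itself:

```python
import random

POT = 10        # total to be split (assumed framing)
FAIR_SHARE = 5  # the "fair" split under that framing

def accept_probability(offer: float, epsilon: float = 0.001) -> float:
    """Probability of accepting `offer` (our share of the pot).

    Fair-or-better offers are always accepted. An unfair offer is accepted
    with probability just under FAIR_SHARE / (POT - offer), so the proposer's
    expected take, (POT - offer) * p, stays slightly below FAIR_SHARE and
    greed never pays in expectation.
    """
    if offer >= FAIR_SHARE:
        return 1.0
    return FAIR_SHARE / (POT - offer) - epsilon

def respond(offer: float) -> float:
    """Realized payoff from responding to a single offer."""
    return offer if random.random() < accept_probability(offer) else 0.0

# Against a demand of 9 (an offer of 1 to us), this policy accepts with
# probability just under 5/9, so the demanding side nets a bit less than 5
# in expectation rather than 9.
print(accept_probability(1))  # ~0.5546
```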
Towards the end of the post, in the "No agent is rational in every problem" section, I provided a more general argument. I was assuming LDT would fall under case 1, but if it doesn't, then case 2 demonstrates it is irrational.
“More rational in a given case” isn’t more rational! You might as well say it’s more rational to buy a given lottery ticket because it’s the winning ticket.
I’m assuming the LDT agent knows what the game is and who their opponent is.
But you really aren't assuming that; you're doing something much stranger.
Either the actual opponent is a rock, or it is the agent who wrote the number on the rock and put it in front of the agent. In the first case, the rock gains nothing from "winning" the game, and there's no such thing as being more or less rational than something without preferences. In the second case, the example fails because the game actually started with an agent explicitly trying to manipulate the LDT agent into underperforming.
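To put numbers on the disagreement (reusing accept_probability from the sketch above, and still assuming the pot-of-10 framing):

```python
# Against a rock committed to demanding 9 (offering us 1), no matter what:
payoff_always_accept = 1.0                         # take the 1 every time
payoff_probabilistic = 1 * accept_probability(1)   # ~0.55 in expectation

# The probabilistic policy "loses" here only because the 9 was fixed in
# advance. An agent deciding what to write on the rock, anticipating the
# policy, nets strictly under 5 in expectation from any demand above 5,
# which is exactly why such rocks don't get placed in front of it.
```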