Thank you for responding to my post despite its negative rating.
Can you, as a human, give any practical, real-world examples (not relying on non-existent tech) in which anything outperforms non-naive CDT?
By non-naive I mean CDT that isn’t myopically trying to maximize the immediate payoff, but rather the long-term value to the player, taking into account future interactions, reputation, uncertainty about causal relationships, etc.
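To make concrete what I mean by that, here is a minimal sketch of the kind of bookkeeping I have in mind; all payoffs, the discount factor, and the assumption about how defection affects later rounds are made up purely for illustration:

```python
# Illustrative sketch only: a "non-naive" CDT agent weighs the immediate payoff
# of defecting against the expected long-term cost to its reputation.
# All payoffs, the discount factor, and the behavioural assumption below are
# made-up numbers, not anyone's actual model.

DISCOUNT = 0.9        # how much the agent values each successive future round
FUTURE_ROUNDS = 20    # expected number of future interactions with this partner

# Hypothetical single-round payoffs (to me) in a repeated trust game.
PAYOFF = {
    "cooperate_vs_cooperate": 3,
    "defect_vs_cooperate": 5,    # the short-term temptation
    "cooperate_vs_defect": 0,
    "defect_vs_defect": 1,
}

def long_term_value(my_action: str) -> float:
    """Immediate payoff plus discounted future payoffs, assuming (hypothetically)
    that defecting now makes the other party defect in every later round."""
    if my_action == "defect":
        immediate = PAYOFF["defect_vs_cooperate"]
        future_per_round = PAYOFF["defect_vs_defect"]
    else:
        immediate = PAYOFF["cooperate_vs_cooperate"]
        future_per_round = PAYOFF["cooperate_vs_cooperate"]
    future = sum(future_per_round * DISCOUNT ** t for t in range(1, FUTURE_ROUNDS + 1))
    return immediate + future

for action in ("cooperate", "defect"):
    print(action, round(long_term_value(action), 2))
# With these made-up numbers, cooperating wins on long-term value even though
# defecting has the higher immediate payoff; that is all I mean by "non-naive".
```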
The closest I can come to examples might be ones where the two-box outcome is so much worse than the one-box outcome that I have nothing to lose by choosing the path of hope; I sketch the rough arithmetic after the two examples below.
E.g., picking one box, even though I and everybody else know I’m a two-boxer, if I believe that in this case two-boxing will kill me.
Or, cooperating when unilateral defection, unilateral cooperation, and mutual defection all have results vastly worse than mutual cooperation.
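Here is the rough arithmetic for the first example, as a minimal sketch; the predictor accuracy and every payoff are made up, with the two-box-and-caught outcome treated as catastrophic ("two-boxing will kill me"):

```python
# Illustrative sketch only: the "nothing to lose" reasoning behind the one-box example.
# The predictor accuracy and every payoff below are made-up numbers.

PREDICTOR_ACCURACY = 0.99      # hypothetical chance the predictor guessed my choice
BOTH_BOXES_FULL = 1_001_000    # two-box while the predictor wrongly expected one-boxing
ONE_BOX_FULL = 1_000_000       # one-box while the predictor expected one-boxing
TWO_BOX_CAUGHT = -10**9        # two-box while the predictor expected it: the "kills me" case
ONE_BOX_EMPTY = 0              # one-box while the predictor wrongly expected two-boxing

def expected_value(choice: str) -> float:
    """Expected payoff, treating the predictor's track record as evidence about
    which outcome I will actually face."""
    if choice == "two-box":
        return (PREDICTOR_ACCURACY * TWO_BOX_CAUGHT
                + (1 - PREDICTOR_ACCURACY) * BOTH_BOXES_FULL)
    return (PREDICTOR_ACCURACY * ONE_BOX_FULL
            + (1 - PREDICTOR_ACCURACY) * ONE_BOX_EMPTY)

for choice in ("one-box", "two-box"):
    print(choice, expected_value(choice))
# With a downside this extreme, one-boxing wins by an enormous margin, which is
# why this feels like a case where I have nothing to lose by the path of hope.
```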
Are these on the right track?
I ask because, based on the behavior of people here whose intelligence and ideas I have come to respect, this is clearly an important topic.
Clearly I lack the background to understand the full theoretical argument. I also lack the background to understand the full theoretical arguments behind general relativity and quantum uncertainty, yet for those there are many real-world, practical examples that I do understand and can work backwards from to get a roughly correct intuition about the ideas.
Every example I have seen of CDT falling short has been a hypothetical scenario that almost certainly never happened.
But if the only scenarios where CDT is a dominated strategy were hypothetical ones, I wouldn’t expect smart people on LW to spend so much time and energy on them.