Current chatbots can’t explain things, especially nonstandard things or things adjacent to popular confusions (which they can’t always clearly rule out). When uncertain, or when the conversation turns metaphorical, they’ll find a way to agree with whatever they think you are gesturing at, even if you ask them not to. They are very useful for teaching the keywords for standard ideas and facts, and the ways of talking and asking about them.
Mostly agreed. If you’re going to try to debate a concept with them, or get them to criticize something you’ve written, you absolutely need to compensate for the sycophancy.
One way is to system-prompt them to be critical.
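For example, a minimal sketch using the OpenAI Python SDK; the model name and the prompt wording here are placeholders I chose, not a tested recipe:

```python
# A minimal sketch, assuming the OpenAI Python SDK; the model name and the
# system prompt wording are illustrative placeholders, not a tested recipe.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "system",
            "content": (
                "You are a harsh but fair critic. Do not agree with the user "
                "to be polite. Identify the weakest step in every argument, "
                "and say 'I don't know' when you are uncertain."
            ),
        },
        {"role": "user", "content": "Here is my draft argument: ..."},
    ],
)
print(response.choices[0].message.content)
```

Even so, expect some drift back toward agreement over a long conversation; in my experience the system prompt dampens the sycophancy rather than removing it.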
Another good tactic is to try arguing each side of the question (in different instances). Probably good for your own brain as well.
Thank you for responding to my post despite its negative rating.
Can you, as a human, give any practical real-world examples that do not rely on non-existent tech where anything outperforms non-naive CDT?
By non-naive I mean CDT that isn’t myopically trying to maximize the immediate payoff, but rather the long-term value to the player, taking into account future interactions, reputation, uncertainty about causal relationships, etc.
The closest I can come to examples might be ones where the two-box outcome is so much worse than the one-box outcome that I have nothing to lose by choosing the path of hope.
E.g., picking one box, even though I and everybody else know I’m a two-boxer, if I believe that in this case two-boxing will kill me.
Or, cooperating when unilateral defection, unilateral cooperation, and mutual defection all have results vastly worse than mutual cooperation. (A rough sketch of the arithmetic for both examples follows below.)
Are these on the right track?
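To make those two examples concrete, here is a rough sketch of the expected-value arithmetic. All the payoffs, and the parameter p (the predictor’s accuracy in the first case, the chance the other player’s move matches mine in the second), are numbers I made up for illustration:

```python
# Rough sketch of the arithmetic behind the two examples above.
# All payoffs and probabilities are made up for illustration.

# Newcomb-style case: box A always holds $1,000; box B holds $1,000,000
# iff the predictor foresaw one-boxing. p is the predictor's accuracy.
def newcomb_ev(one_box: bool, p: float) -> float:
    if one_box:
        return p * 1_000_000                    # predictor foresaw one-boxing
    return p * 1_000 + (1 - p) * 1_001_000     # predictor foresaw two-boxing

# Cooperation case: every outcome except mutual cooperation is disastrous.
PAYOFF = {  # (my move, their move) -> my payoff
    ("C", "C"): 100,
    ("C", "D"): -1_000,  # unilateral cooperation
    ("D", "C"): -900,    # unilateral defection, still far worse than mutual cooperation
    ("D", "D"): -1_000,  # mutual defection
}

def coop_ev(my_move: str, p_match: float) -> float:
    """Expected payoff if the other player's move matches mine with probability p_match."""
    other = "D" if my_move == "C" else "C"
    return p_match * PAYOFF[(my_move, my_move)] + (1 - p_match) * PAYOFF[(my_move, other)]

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: one-box {newcomb_ev(True, p):>9,.0f}  two-box {newcomb_ev(False, p):>9,.0f}"
          f"  cooperate {coop_ev('C', p):>7,.1f}  defect {coop_ev('D', p):>7,.1f}")
```

With these made-up numbers, one-boxing pulls ahead once p exceeds roughly 0.5005, and cooperating wins under even modest correlation between the players’ choices, yet the purely causal argument (“my choice can’t change what’s already in the box”) is unmoved by p.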
I ask because, judging by the behavior of people here whose intelligence and ideas I have come to respect, this is an important topic.
Clearly I completely lack the background to understand the full theoretical argument. I also lack the background to understand the full theoretical arguments behind general relativity and quantum uncertainty. Yet there are many real-world practical examples that I do understand and can work backwards from to get a roughly correct intuition about those ideas.
Every example I have seen of CDT falling short has been a hypothetical scenario that almost certainly never happened.
But if the only scenarios where CDT is a dominated strategy were hypothetical ones, I wouldn’t expect smart people on LW to spend so much time and energy on them.