I’m confused about this response. At what level of prediction would it become reasonable to consider this an approximation of Newcomb’s problem? And if this isn’t about trust, what similar thing is it about? Certainly you don’t get the exact behavior of the thought experiment, but I don’t see why the usual reasoning about Newcomb’s problem doesn’t apply here. Could you read Critch’s post and comment on where you disagree with it?
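For what it’s worth, the “what level of prediction” question has a concrete answer under the standard payoffs (assumed here: $1,000,000 in the opaque box if one-boxing was predicted, $1,000 always in the transparent box) if you reason evidentially from the predictor’s accuracy. A minimal sketch:

```python
# Expected-value comparison for Newcomb's problem as a function of
# predictor accuracy p, using the standard payoffs (an assumption,
# not taken from this thread):
#   opaque box: $1,000,000 iff one-boxing was predicted
#   transparent box: $1,000

def ev_one_box(p):
    # A one-boxer is correctly predicted with probability p,
    # and then finds the million in the opaque box.
    return p * 1_000_000

def ev_two_box(p):
    # A two-boxer is correctly predicted (empty opaque box) with
    # probability p; otherwise they get both the million and the thousand.
    return (1 - p) * 1_001_000 + p * 1_000

# Setting ev_one_box(p) = ev_two_box(p) and solving:
#   p * 1_000_000 = (1 - p) * 1_001_000 + p * 1_000
#   p = 1_001_000 / 2_000_000
threshold = 1_001_000 / 2_000_000
print(threshold)  # 0.5005
```

So on this (evidential) way of counting, one-boxing already looks better once the predictor is right just over 50% of the time; the disagreement between one-boxers and two-boxers is about whether this expected-value calculation is the right one to run, not about the arithmetic.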
Let me be more clear.
Point 1: The Newcomb Problem tells you nothing about actual social interactions between actual humans. If you’re interested in social structures, techniques, etc., the Newcomb Problem is the wrong place to start.
Point 2: Trust in this context can be defined more or less as “accepting without verifying”. There is no trust involved in the Newcomb problem.
Oh, and in case you’re curious, I two-box.
If you 2-box, shouldn’t Point 1 be “Newcomb’s problem doesn’t tell you anything useful about anything” rather than “Newcomb’s problem doesn’t tell you anything useful about trust”?
Newcomb’s is a hypothetical scenario, highly unlikely to exist in reality. As such, I think its usefulness is more or less on par with that of other hypothetical scenarios unlikely to happen.