Most people aren’t AIs, or even programmers (though the latter are fairly common on LW).
Most people also aren’t presented with Omega situations. The reason it’s important to solve Newcomb’s problem is so that we can build an AI that will respond to the incentives we give it by self-modifying in the ways we want it to.
Most people find the verbal descriptions easier to handle.
Most people are much more easily misled via verbal descriptions.
Maybe the right thing to do is to mix and match different presentations of the problem? E.g. one person might be all like “huh??” or “this is stupid” whenever Newcomb’s problem is discussed in the usual verbal terms, but be like “oh NOW I get it” when it’s presented in terms of AI and source code. Somebody else might be the opposite.
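For what it’s worth, the “AI and source code” framing fits in a few lines of Python. This is just an illustrative sketch: the names (omega_fill_boxes, one_boxer, etc.) and the payoff numbers are my own choices, and the predictor is assumed to be perfectly accurate because it literally runs the agent’s decision procedure before filling the boxes.

```python
# Newcomb's problem framed as a predictor that can read and run the
# agent's source code. All names and payoffs here are hypothetical.

def omega_fill_boxes(agent_policy):
    """Omega predicts the agent's choice by simulating its decision
    function, then fills the opaque box accordingly."""
    prediction = agent_policy()
    opaque_box = 1_000_000 if prediction == "one-box" else 0
    return opaque_box, 1_000  # (opaque box, transparent box)

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

def play(agent_policy):
    opaque, transparent = omega_fill_boxes(agent_policy)
    choice = agent_policy()  # the agent's actual decision, made "later"
    return opaque if choice == "one-box" else opaque + transparent

print(play(one_boxer))  # 1000000 -- one-boxing wins against this predictor
print(play(two_boxer))  # 1000    -- two-boxing only gets the transparent box
```

The point this framing makes vivid is that the agent’s choice and Omega’s prediction are the same computation, so “deciding after the boxes are already filled” doesn’t buy the two-boxer anything.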