I kind of wish talk about Newcomb's problem were presented in terms of source code and AI rather than the more common presentation, since I think it's much more obvious what is being aimed at when you think about it this way. Is there a reason people prefer the original version?
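To make the framing concrete: here's a minimal sketch of the source-code version, with hypothetical names throughout. Omega predicts the agent by simulating its source, then fills the opaque box based on that prediction before the agent actually chooses:

```python
def omega(agent):
    """Hypothetical Omega: predicts the agent's choice by running its
    source code, fills the opaque box accordingly, then pays out based
    on the agent's actual choice."""
    prediction = agent()  # simulate the agent's source to predict it
    opaque = 1_000_000 if prediction == "one-box" else 0
    transparent = 1_000

    choice = agent()  # the agent's actual decision
    if choice == "one-box":
        return opaque
    return opaque + transparent

def one_boxer():
    return "one-box"

def two_boxer():
    return "two-box"

print(omega(one_boxer))  # 1000000
print(omega(two_boxer))  # 1000
```

Because the prediction runs the same source as the actual decision, a deterministic agent can't choose differently from how it was predicted, which is exactly the intuition the source-code framing makes vivid: the one-boxing *program* walks away with more money.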
Most people aren't AIs or even programmers (though the latter are fairly common on LW).
Most people also aren't presented with Omega situations. The reason it's important to solve Newcomb's problem is so that we can make an AI that will respond to the incentives we give it to self-modify in ways we want it to.
Most people find the verbal descriptions easier to handle.
Most people are much more easily misled via verbal descriptions.
Maybe the right thing to do is to mix and match different presentations of the problem? E.g. one person might be all like "huh??" or "this is stupid" whenever Newcomb's problem is discussed, but be like "oh NOW I get it" when it's presented in terms of AI and source code. Somebody else might be the opposite.
Orthonormal does a pretty good job with source-code-esque considerations. Helped me out.