Newcomb’s problem requires either that a) agents aren’t Turing-complete (in which case of course agents can get sub-optimal outcomes) or b) Omega is super-Turing, in which case all bets are off. I’m not sure if it’s worth focusing on as a result.
One finite but otherwise Turing-complete system can emulate another, provided it is sufficiently more powerful.
There is a name for ‘a finite analogy to a Turing machine’ - it’s called a finite state machine. You are correct in that one sufficiently large finite state machine can simulate a smaller FSM.
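The simulation claim can be made concrete with a small sketch. This is a hypothetical illustration, assuming FSMs are represented as transition tables mapping (state, symbol) pairs to states; the state names and the `embed` map are invented for the example.

```python
# A 2-state parity machine over the symbols 0/1 (the "smaller FSM").
small = {
    ("even", 1): "odd",
    ("even", 0): "even",
    ("odd", 1): "even",
    ("odd", 0): "odd",
}

# A larger FSM simulates the smaller one by embedding the small machine's
# states into a subset of its own states; it has spare capacity left over.
embed = {"even": "s0", "odd": "s1"}
big = {(embed[s], sym): embed[t] for (s, sym), t in small.items()}
big.update({("s2", 0): "s2", ("s2", 1): "s2"})  # extra, unused states

def run(fsm, start, inputs):
    """Run an FSM from `start` over a sequence of input symbols."""
    state = start
    for sym in inputs:
        state = fsm[(state, sym)]
    return state

# The big machine tracks the small one step for step.
inputs = [1, 1, 0, 1]
assert embed[run(small, "even", inputs)] == run(big, "s0", inputs)
```

The point is only that the embedding works one way: the larger table can host the smaller one, but not vice versa.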
If agents must be FSMs, case a) applies. Your agents can of course get suboptimal outcomes, and many standard axioms of game theory do not apply[1]. (For instance: a game’s expected value can be lowered by adding another option[2] or raising the payoff of an option[3].)
[1] Or apply in modified forms.

[2] Once the FSM is no longer large enough to scan through all outcomes.

[3] Once the FSM is no longer large enough to do the necessary arithmetic on the output payoffs.
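The two failure modes in footnotes [2] and [3] can be sketched in a few lines. This is a hypothetical toy model, not anyone's actual proposal: the scanning limit `k` and the 8-bit payoff arithmetic are assumptions chosen to make the effects visible.

```python
def bounded_choice(payoffs, k=2):
    """A bounded agent that can only scan the first k options ([2])."""
    return max(payoffs[:k])

# With two options the agent finds the best one.
good_game = [5, 10]
assert bounded_choice(good_game) == 10

# [2] Adding an option LOWERS the achieved value: the new option pushes
# the best one outside the agent's scanning window.
worse_game = [3, 5, 10]
assert bounded_choice(worse_game) == 5

def perceived(payoff):
    """[3] Payoff arithmetic in 8 bits: values wrap modulo 256."""
    return payoff % 256

# Raising a payoff from 100 to 300 makes it look WORSE (300 % 256 == 44).
assert perceived(300) < perceived(100)
```

Both violations come purely from the finiteness of the agent, matching the claim that standard game-theoretic axioms stop applying once the machine is too small to scan all outcomes or do the payoff arithmetic.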
I’m not very shocked by the fact that realistically finite agents can make suboptimal decisions.
Neither am I, which is why I am surprised and confused by people seemingly attaching a fair amount of importance to Newcomb’s problem.