Personally, I think I can reliably predict that Eliezer would one-box against Omega, based on his public writings. I’m not sure whether that implies he would one-box against me.
And since any FAI Eliezer codes is (nearly) infinitely more likely to be presented with Newcomb’s boxes by someone such as you, or Penn and Teller, or Madoff than by Omega or his ilk, this would seem to be a more important question than Newcomb’s problem with Omega.
Really, the main point of my post is “Omega is (nearly) impossible, therefore problems presuming Omega are (nearly) useless.” The discussion, though, has mostly centered on my Newcomb’s example, which makes explicit its lack of dependence on an Omega. But here in this comment you do point out that the “magical” aspect of Omega MAY influence the coding choice made. I think this supports my claim that even Newcomb’s problem, which COULD be stated without an Omega, may have a different answer when stated with one, and that when coding an FAI it is important to consider just how much evidence it should require before concluding that it really is dealing with an Omega. In the long run, my concern is that an FAI coded to accept an Omega will be susceptible to people deliberately faking Omega, who are in our universe (nearly) infinitely more common than true Omegas.
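To make the evidence question concrete, here is a minimal sketch of the underlying arithmetic, assuming the standard $1,000,000 / $1,000 Newcomb payoffs (those numbers are my assumption, not something stated above): it shows how the expected value of one-boxing versus two-boxing depends on how accurate you believe the supposed Omega actually is.

```python
# Minimal sketch (assumed payoffs): at what predictor accuracy does
# one-boxing start to beat two-boxing in expected value?

BOX_B = 1_000_000   # opaque box, filled only if the predictor foresaw one-boxing
BOX_A = 1_000       # transparent box, always available to a two-boxer

def expected_value(one_box: bool, accuracy: float) -> float:
    """Expected payoff given the probability that the predictor
    correctly predicts your actual choice."""
    if one_box:
        # Box B is full only if the predictor correctly foresaw one-boxing.
        return accuracy * BOX_B
    # Two-boxer always gets Box A; Box B is full only if the predictor erred.
    return BOX_A + (1 - accuracy) * BOX_B

# One-boxing wins once accuracy * 1e6 > 1e3 + (1 - accuracy) * 1e6,
# i.e. accuracy > 0.5005 -- barely better than a coin flip.
for acc in (0.50, 0.5005, 0.55, 0.99):
    print(acc, expected_value(True, acc), expected_value(False, acc))
```

On these assumed numbers the crossover sits just above 50% accuracy, so the practical question is exactly the one raised above: how much evidence should an FAI demand that the claimed predictor’s accuracy is genuinely that high, rather than the performance of a confederate, a cold reader, or a Madoff?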
Omega problems are not posed for the purpose of being prepared to deal with Omega should you, or an FAI, ever meet him. They are idealised test problems, thought experiments, for probing the strengths and weaknesses of formalised decision theories, especially regarding issues of self-reference and agents modelling themselves and each other. Some of these problems may turn out to be ill-posed, but you have to look at each such problem to decide whether it makes sense or not.