I two-box.
Three days later, “Omega” appears in the sky and makes an announcement. “Greetings, earthlings. I am sorry to say that I have lied to you. I am actually Alpha, a galactic superintelligence who hates that Omega asshole. I came to predict your species’ reaction to my arch-nemesis Omega, and I must say that I am disappointed. So many of you chose the obviously irrational single-box strategy that I must decree your species unworthy of this universe. Goodbye.”
A giant laser beam then obliterates the earth. I die wishing I’d done more to warn the world of this highly improbable threat.
TLDR: I don’t buy this post’s argument that I should become the type of agent that sees one-boxing on Newcomb-like problems as rational. It is trivial to construct any number of no less plausible scenarios in which a superintelligence descends from the heavens, puts a few thousand people through Newcomb’s problem, and then suddenly annihilates those who one-box. The presented argument for becoming the type of agent that Omega predicts will one-box can equally be used to argue for becoming the type of agent that Alpha predicts will two-box. Why, then, should it sway me in either direction?