Problem of Optimal False Information

This is a thought that occurred to me on my way to classes today; sharing it for feedback.

Omega appears before you, and after presenting an arbitrary proof that it is, in fact, a completely trustworthy superintelligence of the caliber needed to play these kinds of games, presents you with a choice between two boxes. These boxes do not contain money; they contain information. One box is white and contains a true fact that you do not currently know; the other is black and contains false information that you do not currently believe. Omega advises you that the true fact is not misleading in any way (i.e., it is not a fact that will cause you to make incorrect assumptions and lower the accuracy of your probability estimates), and is fully supported with enough evidence both to prove to you that it is true and to enable you to independently verify its truth for yourself within a month. The false information is demonstrably false, and is something that you would disbelieve if presented outright, but if you open the box to discover it, a machine inside the box will reprogram your mind such that you will believe it completely, thus leading you to believe other related falsehoods as you rationalize away discrepancies.

Omega further advises that, within those constraints, the true fact has been optimized to inflict upon you the maximum amount of long-term disutility for a fact in its class, should you now become aware of it, and the false information has been optimized to provide you with the maximum amount of long-term utility for a belief in its class, should you now begin to believe it over the truth. You are required to choose one of the boxes; if you refuse to do so, Omega will kill you outright and try again on another Everett branch. Which box do you choose, and why?

(This example is obviously hypothetical, but for a simple and practical case, consider the use of amnesia-inducing drugs to selectively eliminate traumatic memories; it would be more accurate to still have those memories, taking the time and effort to come to terms with the trauma, but it would present much greater utility to be without them, and thus without the trauma altogether. This is obviously related to the valley of bad rationality, but since there clearly exist maximally beneficial lies and maximally harmful truths, it would be useful to know which categories of facts are generally hazardous, and whether or not there are categories of lies which are generally helpful.)