I don’t think you need to resolve your uncertainty about decision theories to figure out the correct thing to do if you anticipate facing Newcomb-like problems in the future: just precommit to one-boxing on such problems, and that precommitment will (hopefully) be honored by any simulations of you that are run in the future.
Yes, if you can commit yourself, that is; generally we are limited in our ability to do that. And of course one can modify the problem so that all predictions are made using data collected before you ever considered precommitting, and so that any models of you lack your full conscious experience.
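For what it's worth, the payoff case for the precommitment is easy to make concrete. Here is a minimal expected-value sketch, assuming the standard stakes ($1,000,000 in the opaque box, $1,000 in the transparent one) and a predictor that is correct with probability p; the numbers and the `expected_value` helper are illustrative assumptions, not anything from the thread:

```python
# Sketch of Newcomb payoffs under assumed standard stakes:
# $1,000,000 in the opaque box, $1,000 in the transparent one.
# The predictor is assumed correct with probability p.

def expected_value(one_box: bool, p: float) -> float:
    """Expected payoff given the agent's disposition and predictor accuracy p."""
    if one_box:
        # The predictor fills the opaque box with probability p.
        return p * 1_000_000
    # Two-boxing: keep the $1,000; the opaque box is filled only
    # when the predictor erred (probability 1 - p).
    return 1_000 + (1 - p) * 1_000_000

for p in (0.5, 0.9, 0.99):
    print(f"p={p}: one-box EV = {expected_value(True, p):,.0f}, "
          f"two-box EV = {expected_value(False, p):,.0f}")
```

Under these assumptions, any predictor even slightly better than chance (p > 0.5005 here) makes the precommitted one-boxer come out ahead, which is the arithmetic behind not needing to settle the decision-theory question first.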