I’m treating the stuff about decision-theoretic ghosts as irrelevant: they’re an extraneous wart on one interpretation of FDT, and taking them seriously would make the theory much, much worse than it already is. I guess if you enjoy imagining that imagining conscious agents actually creates conscious agents, then go for it, but that doesn’t make it reality, and even if it were reality, doing so is not FDT.
The main principle of FDT is that it recommends the decisions under which a (hypothetical) population of people, all making the same decisions in the same situations, generally ends up better off.
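Here’s a minimal sketch of that policy-level evaluation, using Newcomb’s problem as the toy case. The payoffs and the `newcomb_payoff` helper are my own illustrative assumptions, not the formalism from the FDT papers.

```python
# A minimal sketch of policy-level evaluation, using Newcomb's problem.
# All payoffs and names here are illustrative assumptions.

def newcomb_payoff(policy: str) -> int:
    """Payoff for an agent whose policy is visible to a perfect predictor.

    The predictor fills the opaque box with $1,000,000 iff the policy
    one-boxes; the transparent box always holds $1,000.
    """
    opaque = 1_000_000 if policy == "one-box" else 0
    if policy == "one-box":
        return opaque          # take only the opaque box
    return opaque + 1_000      # "two-box": take both boxes

# FDT-style selection: score each policy by how the population of agents
# running it fares, then output the highest-scoring policy.
policies = ["one-box", "two-box"]
print(max(policies, key=newcomb_payoff))  # -> one-box ($1,000,000 vs $1,000)
```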
That doesn’t mean that it recommends the best decisions for you. In the cases where it makes different decisions from more boring decision theories, it’s because the chance of you getting into worse situations is reduced when your type of person voluntarily gives up some utility after getting there. In reality this hardly ever happens, because the only person sufficiently like you in your current situation is you, in your current situation, which hasn’t happened before. It’s also subject to superexponential combinatorial explosion: the space of deterministic policies blows up once you have more than a couple of bits of information and a few available actions.
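To put a number on that explosion: with b bits of observed information there are 2^b distinct situations, and a deterministic policy must fix one of a actions in each, so there are a^(2^b) candidate policies to compare. A throwaway sketch of that arithmetic (the function name is mine):

```python
# Back-of-the-envelope arithmetic for the policy-space blowup: b bits of
# information give 2**b distinct situations, and a deterministic policy
# fixes one of `actions` choices in each, so there are actions ** (2**b)
# candidate policies to compare.

def num_policies(bits: int, actions: int) -> int:
    return actions ** (2 ** bits)

for bits in range(1, 6):
    print(bits, num_policies(bits, actions=3))
# 1 9
# 2 81
# 3 6561
# 4 43046721
# 5 1853020188851841
```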
That’s why the only discussion you’ll ever see of it concerns toy problems with dubious assumptions and restrictions.