Conditional on the agent choosing 10, it will first have imagined 10, so it will imagine both 5 and 10.
Can you explain this point a bit? I am missing how the setup is a game of chicken, and why the agent should imagine both 5 and 10, since you are conditionalizing on the agent selecting 5 in one case and 10 in the other. My inclination is to imagine two possible worlds: one where the agent imagines and chooses 5, and another where the agent imagines and chooses 10, not both at once. Only one of these possible worlds turns out to be actual. Someone modeling the agent in some way can predict that it will pick 5 after imagining 5 and will pick 10 after imagining 10. But it seems like you are saying more than that.
I asked my friend for a resource that explained the 5-and-10 problem well and he provided this link. Unfortunately, I still don’t have a good link for “Playing Chicken with the universe”.
I’m discussing an agent that does in fact take 5 but imagines taking 10 instead. There have been some discussions of decision theory using proof-based agents and how they can run into spurious counterfactuals. If you’re confused, you can try searching the archive of this website. I tried earlier today, but couldn’t find particularly good resources to recommend. I couldn’t find a good resource for playing chicken with the universe either.
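To make the spurious-counterfactual idea concrete, here is a minimal toy sketch (my own illustration, not a faithful formalization of proof-based agents): I model the "proofs" the agent finds as a lookup from action to the utility it believes that action yields. If the agent happens to "prove" the spurious claim that taking 10 yields 0, then it takes 5, and the spurious claim is vacuously true of the world, since the agent never actually takes 10.

```python
# Toy illustration of the 5-and-10 problem with spurious counterfactuals.
# Assumption: we abstract "what the agent can prove about each action"
# as a dict mapping action -> believed utility; a real proof-based agent
# would instead search for theorems like "if I take a, I get u".

def agent(believed_utilities):
    """Take whichever action the agent believes yields the most utility."""
    return max(believed_utilities, key=believed_utilities.get)

# With correct counterfactuals, the agent takes 10 and gets 10.
action = agent({5: 5, 10: 10})
print(action)  # -> 10

# With a spurious counterfactual ("taking 10 yields 0"), the agent
# takes 5 -- and because it never takes 10, the belief "if I take 10,
# I get 0" is never falsified. The implication holds vacuously.
action = agent({5: 5, 10: 0})
print(action)  # -> 5
```

The "playing chicken" move is a response to exactly this failure: the agent commits that if it ever proves it will not take some action, it takes that action anyway, so no such spurious proof can exist for a consistent agent.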
(I may write a proper article at some point in the future to explain these concepts, if I can’t find an existing article that explains them well.)
Ah, I missed that. That seems like a mental quirk rather than anything fundamental. Then again, maybe you mean something else.