I think the standard term for what you call “anthropic uncertainty” is “indexical uncertainty” (which, as far as I can tell, was first coined by Nick Bostrom in his 2000 PhD thesis).
Also, I suggest that you say a bit more about the context and motivation for this post. I interpret what you wrote as an outline of some of the ambiguities/choices a human or Bayesian AI would face if they tried to convert their current preferences into UDT preferences. But I’m not sure if that’s what you intended.
Apologies; the post grew out of some of the anthropics discussions at the FHI. The idea was mainly to set down a few worlds where we (arguably) know the right answer and to see what that implies about anthropics. I see now that the heart and soul of the piece is the set of arguments presented in defence of models 1 and 4; I'll expand those into a better post, with context, later.