I think the key here is to realise that indexicals aren’t part of standard probability theory, so we need to de-indexicalise the situation.
This seems wrong to me. Bayes’ rule works just fine if events are things like “The marble in the box in front of me is blue”. Bayes would barely be useful if you couldn’t apply it to events like these.
Any model of the world an agent learns is going to be a centered one, e.g. it will be able to talk about “the thing in front of me” and “the city of New York in the Earth that I grew up in”, but will have no need to model a New York not in causal relation to the agent.
In general I think anything you can coherently refer to is in some causal relation to you, i.e. all references are indexical. (A detailed explanation of this can be found in Brian Cantwell Smith’s On the Origin of Objects). One thing that might be an exception is mathematics, but that’s still in causal relation to me in the sense that mathematics affects what my computer outputs, so I can indexically refer to “the mathematical computation that is determining the outputs of the computer in front of me”.
“The marble in the box in front of me is blue”—We don’t need to provide absolute time or space co-ordinates to de-indexicalise, we just need unique co-ordinates. If “here” refers to only one possible location, we can set it to (0,0,0), and if “now” refers to only one possible time, we can set it to t=0. On the other hand, if there are things such as memory loss or copies at different points of space or time, this de-indexicalisation strategy won’t work.
(To clarify this further: there’s no reason why the box couldn’t be at (0,0,0). But suppose we found out it was at (0,100,97) instead: would that change the problem? If not, we can just solve the problem where the box is specified to be at (0,0,0).)
Agree that absolute coordinates are unnecessary. But de-indexicalizing can destroy information about your location in the world, depending on how you do it.
The way I would de-indexicalize Sleeping Beauty is to say there are 3 possible centered worlds when Beauty wakes up: heads/Monday, tails/Monday, and tails/Tuesday. There isn’t any need to say only one interview counts.
A possible reason for including this indexical information: Beauty is a real person, she might be curious what day it is, and what day it is might affect her plans for that day (e.g. maybe she is allowed to write letters that are read after the experiment is over, and which day it is affects which letter she wants to write). She should be able to update on local information (e.g. overhearing people talk about which day it is) to learn which day it is.
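To make the centered-worlds picture concrete, here is a minimal sketch of the update described above. It assumes a thirder-style weighting in which the three centered worlds (heads/Monday, tails/Monday, tails/Tuesday) start out equally likely; that uniform prior is my assumption, not something the comments above commit to. Beauty then updates by ordinary conditionalisation on the local information that it is Monday:

```python
from fractions import Fraction

# Three centered possible worlds when Beauty wakes up, under an
# *assumed* thirder-style uniform weighting over awakenings.
worlds = {
    ("heads", "Monday"): Fraction(1, 3),
    ("tails", "Monday"): Fraction(1, 3),
    ("tails", "Tuesday"): Fraction(1, 3),
}

def credence(pred, dist):
    """Total probability of the centered worlds satisfying pred."""
    return sum(p for w, p in dist.items() if pred(w))

def update(pred, dist):
    """Condition the distribution on the event pred (Bayes' rule)."""
    z = credence(pred, dist)
    return {w: p / z for w, p in dist.items() if pred(w)}

# Before learning the day, credence in heads is 1/3 on this weighting.
p_heads = credence(lambda w: w[0] == "heads", worlds)

# Beauty overhears that it is Monday and updates on that local information.
posterior = update(lambda w: w[1] == "Monday", worlds)
p_heads_given_monday = credence(lambda w: w[0] == "heads", posterior)
```

On this weighting, learning that it is Monday raises her credence in heads from 1/3 to 1/2, which illustrates the point that indexical (centered) information is something she can coherently update on.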
By de-indexicalise I meant to remove indexicals. The centered possible worlds approach uses indexicals, so it would be unusual to call that de-indexicalisation; it’s the other approach instead, namely choosing a version of probability theory that supports indexicals. So you can either remove the indexicals or use a theory that supports them.