What do you do when you can’t translate “I am a copy” to “an agent with observations X is a copy”? That’s the crux of the issue, as I see it. In these problems there are cases where “I” does not just mean “agent with observations X”. That’s the whole point of them.
Edit: If you want to taboo “I” and “me”, you can consider cases where you don’t know if other agents are making exactly the same observations (and they probably aren’t), but you do know that their observations are the same in all ways relevant to the problem.
In those cases, is probability of such an event meaningful? If not, do you have any replacement theory for making decisions?
Ah, the example I gave above was not very good. To clarify:
If I can translate things like “I am a copy” to {propositions defined entirely in terms of non-magical things}, then I think it should be possible to assign probabilities to them.
Like, imagine “possible” worlds w are Turing machines, or cellular automata, or some other kind of well-defined mathematical object. Then, for any computable function f over worlds, I think that

- it should be possible to assign probabilities to things like f(w)=42, or f(w)≤1, or whatever,
- and the above kinds of things are (probably?) the only kinds of things for which probabilities even are “well defined”.
(I currently wouldn’t be able to give a rigorous definition of what “well defined” means in the above; need to think about that.)
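To make this concrete, here is a toy sketch of the kind of setup I mean. The construction is my own illustration, not anything from the discussion above: “worlds” are elementary cellular automata identified by their rule number 0–255, the prior over worlds is uniform, and f is some computable function of a world. The probability of a proposition like f(w)≤1 is then just the prior mass of the worlds where it holds.

```python
# Toy model (illustrative assumptions: worlds = elementary CA rules 0..255,
# uniform prior over worlds, f = live-cell count after a fixed number of steps).

def step(cells, rule):
    """One step of an elementary cellular automaton with wraparound boundaries.

    The new state of cell i is the bit of `rule` indexed by the
    (left, center, right) neighborhood, read as a 3-bit number.
    """
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def f(rule):
    """A computable function over worlds: the number of live cells after
    8 steps, starting from a single live cell on a 16-cell tape."""
    cells = [0] * 16
    cells[8] = 1
    for _ in range(8):
        cells = step(cells, rule)
    return sum(cells)

# With a uniform prior over the 256 worlds, the probability of a
# proposition like "f(w) <= 1" is the fraction of worlds satisfying it.
worlds = range(256)
p = sum(1 for w in worlds if f(w) <= 1) / 256
```

Nothing hinges on the particular choice of f or of the prior; the point is only that once worlds are well-defined mathematical objects, propositions of the form f(w)=x are ordinary events that a probability distribution over worlds can score.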
If you can come up with events/propositions that

- cannot (even in principle) be reduced to the f(w)=x form above,
- but which also would be necessary to assign probabilities to, in order to be able to make decisions,
then I’d be interested to see them!