Ok, but that appears to be the same reason that I gave (right after I asked the question) for why we can’t switch over to UDT yet. So why did you give another answer without reference to mine? That seems to be needlessly confusing. Here’s how I put it:
The problem with that is that many of our preferences are specified in terms of anticipation of experience, and there is no obvious way to map those onto UDT preferences.
There’s more in that comment where I explored one possible approach to this problem. Do you have any thoughts on that?
Also, do you agree (or think it’s a possibility) that specifying preferences in terms of anticipation (instead of, say, world histories) was an evolutionary “mistake”, because evolution couldn’t anticipate that one day there would be mind copying/merging technology? If so, that doesn’t necessarily mean we should discard such preferences, but I think it does mean that there is no need to treat them as somehow more fundamental than other kinds of preferences, such as the fear of stepping into a teleporter that uses destructive scanning, or the desire not to be consigned to a tiny portion of Reality due to “mistaken” preferences.
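As a concrete illustration of why anticipation-based preferences sit awkwardly with copying while history-based preferences don’t, here is a toy Python sketch; everything in it (the events, the utilities) is invented for this example and isn’t meant as anyone’s actual proposal.

```python
# Toy illustration (entirely made up for this sketch): a preference defined over
# whole world histories stays well-defined when minds can be copied, while a
# preference defined over "what I experience next" needs a single answer to a
# question that copying leaves without one.

WorldHistory = tuple  # e.g. ("scan", "copy_A_wakes", "copy_B_wakes")

def history_utility(history: WorldHistory) -> float:
    # well-defined no matter how many successors "I" end up with
    return 1.0 if "copy_A_wakes" in history and "copy_B_wakes" in history else 0.0

def anticipation_utility(my_next_experience: str) -> float:
    # needs one privileged answer to "what do I see next?"
    return 1.0 if my_next_experience == "copy_A_wakes" else 0.5

history = ("scan", "copy_A_wakes", "copy_B_wakes")
print(history_utility(history))   # fine: prints 1.0
# anticipation_utility(???)       # ill-posed after copying: which successor's
#                                 # experience counts as "what I see next"?
```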
I can’t switch over to UDT because it doesn’t tell me what I’ll see next, except to the extent it tells me to expect to see pi < 3 with some measure. It’s not that it doesn’t map. It’s that UDT goes on assigning measure to 2 + 2 = 5, but I’ll never see that happen. UDT is not what I want to map my preferences onto, it’s not a difficulty of mapping.
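To make the complaint concrete, here is a toy Python sketch of an updateless chooser; the observations, prior, and payoffs are all invented, and this is not a rendering of any published UDT formalism. The point it illustrates: the chooser scores whole policies against a prior that still puts weight on a logically impossible observation, and nowhere does it output a claim of the form “here is what you will see next.”

```python
from itertools import product

observations = ["2+2=4", "2+2=5"]          # the second can never actually be seen
actions = ["act_a", "act_b"]
prior = {"2+2=4": 0.999, "2+2=5": 0.001}   # measure assigned before doing the arithmetic

def utility(obs, act):
    # stand-in payoff table for the sketch
    return {("2+2=4", "act_a"): 1.0, ("2+2=4", "act_b"): 0.0,
            ("2+2=5", "act_a"): 0.0, ("2+2=5", "act_b"): 10.0}[(obs, act)]

def updateless_choice():
    # a policy is a full observation -> action map; enumerate all of them
    policies = [dict(zip(observations, acts))
                for acts in product(actions, repeat=len(observations))]
    # score each policy across *all* observations, weighted by the prior,
    # without ever asking "which observation will I in fact see?"
    return max(policies,
               key=lambda p: sum(prior[o] * utility(o, p[o]) for o in observations))

print(updateless_choice())  # prints a full policy, including a response to "2+2=5"
```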
That’s not what happens in my conception of UDT. Maybe in Nesov’s, but he hasn’t gotten it worked out, and I’m not sure it’s really going to work. My current position on this is still that you should update on your own internal computations, but not on input from the outside.
ETA:
UDT is not what I want to map my preferences onto, it’s not a difficulty of mapping.
Is that the same point that Dan Armak made, which I responded to, or a different one?
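A toy sketch of the “update on your own internal computations, but not on input from the outside” position stated above; the worlds and numbers are made up for illustration, and the code is mine, not anything from the thread.

```python
# Toy sketch: a logical fact the agent has actually computed prunes its prior,
# while sensory input is handled only inside the chosen policy, never by
# conditioning the prior on it. (All worlds and probabilities are invented.)

# joint prior over (logical fact, sensory input) pairs
prior = {("2+2=4", "rain"): 0.50, ("2+2=4", "sun"): 0.45,
         ("2+2=5", "rain"): 0.03, ("2+2=5", "sun"): 0.02}

def update_on_computation(prior, computed_fact):
    """Internal computation: worlds inconsistent with it get measure zero."""
    pruned = {w: p for w, p in prior.items() if w[0] == computed_fact}
    total = sum(pruned.values())
    return {w: p / total for w, p in pruned.items()}

# The agent does compute 2 + 2 and updates on the result...
print(update_on_computation(prior, "2+2=4"))
# ...but it does not condition on seeing "rain" or "sun"; its policy simply
# specifies, in advance, an action for each possible input.
```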
I can’t switch over to UDT because it doesn’t tell me what I’ll see next, except to the extent it tells me to expect to see pi < 3 with some measure.
It’s not you who should use UDT, it’s the world. This is a salient point of departure between FAI and humanity. FAI is not in the business of saying in words what you should expect. People are stuff of the world, not rules of the world or strategies to play by those rules. Rules and strategies don’t depend on particular moves, they specify how to handle them, but plays consist of moves, of evidence. This very distinction between plays and strategies is the true origin of updatelessness. It is the failure to make this distinction that causes the confusion UDT resolves.
Nesov, your writings are so hard to understand sometimes. Let me take this as an example and give you some detailed feedback. I hope it’s useful to you to determine in the future where you might have to explain in more detail or use more precise language.
It’s not you who should use UDT, it’s the world.
Do you mean “it’s not only you”, or “it’s the world except you”? If it’s the latter, it doesn’t seem to make any sense. If it’s the former, it doesn’t seem to answer Eliezer’s objection.
This is a salient point of departure between FAI and humanity.
Do you mean FAI should use UDT, and humanity shouldn’t?
FAI is not in the business of saying in words what you should expect.
Ok, this seems clear. (Although why not, if that would make me feel better?)
People are stuff of the world, not rules of the world or strategies to play by those rules.
By “stuff”, do you mean “part of the state of the world”? And people do in some sense embody strategies (what they would do in different situations), so what do you mean by “people are not strategies”?
Rules and strategies don’t depend on particular moves, they specify how to handle them, but plays consist of moves, of evidence. This very distinction between plays and strategies is the true origin of updatelessness. It is the failure to make this distinction that causes the confusion UDT resolves.
This part makes sense, but I don’t see the connection to what Eliezer wrote.
Do you mean “it’s not only you”, or “it’s the world except you”? If it’s the latter, it doesn’t seem to make any sense. If it’s the former, it doesn’t seem to answer Eliezer’s objection.
I mean the world as substrate, with “you” being implemented on the substrate of FAI. FAI runs UDT; you consist of FAI’s decisions (even if in the sense of “influenced by”, there seems to be no formal difference). The decisions are the output of the strategy optimized for by UDT, two levels removed from running UDT themselves.
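A minimal sketch of the “two levels removed” structure as I read it (the code, names, and toy utility are mine, not Nesov’s): level 0 optimizes over strategies, level 1 is the chosen strategy, and level 2 is a concrete decision, which is the only part that shows up as a move in the world.

```python
from typing import Callable, List

Strategy = Callable[[str], str]  # maps an input/situation to a decision

def choose_strategy(candidates: List[Strategy],
                    inputs: List[str],
                    utility: Callable[[str, str], float]) -> Strategy:
    """Level 0: score each candidate strategy across all inputs and keep the best."""
    return max(candidates, key=lambda s: sum(utility(i, s(i)) for i in inputs))

always_a: Strategy = lambda _i: "a"
always_b: Strategy = lambda _i: "b"

chosen = choose_strategy([always_a, always_b], ["x", "y"],
                         lambda i, d: 1.0 if d == "a" else 0.0)  # level 1: a function
print(chosen("x"))  # level 2: the decision that actually lands in the play
```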
Do you mean FAI should use UDT, and humanity shouldn’t?
Yes, in the sense that humanity runs on the FAI-substrate, which uses UDT or something at the level of strategy optimization anyway, but humanity itself is not about optimization.
By “stuff”, do you mean “part of the state of the world”? And people do in some sense embody strategies (what they would do in different situations), so what do you mean by “people are not strategies”?
I suspect that people should be found in plays (what actually happens given the state of the world), not strategies (plans for every eventuality).
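To pin down the plays/strategies vocabulary, here is a small toy formalization of my own (not anything from the thread): a strategy is total over possible histories, while a play is the one sequence of moves that actually happened.

```python
from typing import Callable, List, Tuple

History = Tuple[str, ...]            # observations so far
Strategy = Callable[[History], str]  # total: defined on every possible history
Play = List[str]                     # partial by nature: only what actually occurred

def unroll(strategy: Strategy, observations: List[str]) -> Play:
    """The play you get when a strategy meets one particular observation sequence;
    whatever the strategy 'would have done' on other histories never appears here."""
    play: Play = []
    history: History = ()
    for obs in observations:
        history = history + (obs,)
        play.append(strategy(history))
    return play

echo_last = lambda h: "respond:" + h[-1]     # a strategy: covers every eventuality
print(unroll(echo_last, ["hello", "bye"]))   # a play: records only this one
```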
There is no problem with FAI looking at both past and future you—intuition only breaks down when you speak of first-person anticipation. You don’t care what the FAI anticipates seeing for itself, or whether it anticipates anything at all. The dynamic of past->future you should be good with respect to anticipation, just as it should be good with respect to excitement.
But part of the question is: must past/future me be causally connected to me?
Part of which question? And whatever you call “causally connected” past/future persons is a property of the stuff-in-general that FAI puts into place in the right way.
Because I care about what I see next.
Therefore the FAI has to care about what I see next—or whatever it is that I should be caring about.