we would still be constantly pruned back to the CEV of 2045 humans
Two connotational objections: 1) I don’t think that “constantly pruned back” is an appropriate metaphor for “getting everything you have ever desired”. The only thing that would prevent us from doing X would be the fact that, after reflection, we love non-X. 2) The extrapolated 2045 humans would probably be as different from the real 2045 humans as the 2045 humans are from the humans of −2045.
I wonder if the FAI will be sad to not be able to see what evolution in its unlimited ignorance would have come up with for us?
Sad? Why, unless we program it to be? Also, with superior recursively self-improving intelligence it could probably make a good estimate of what would have happened in an alternative reality where all AIs are magically destroyed. But such an estimate would most likely be a probability distribution over many different possibilities, not one specific outcome.
I’m dubious about the extrapolation: the universe is more complex than the AI, and the AI may not be able to model how our values would change as a result of unmediated choices and experience.
I am not sure how obvious the part about multiple possible futures is. Most likely, the AI would not be able to model all of them. However, without the AI, most of them wouldn’t happen anyway.
It’s like saying “if I don’t roll a die, I lose the chance of rolling 6”, to which I add “and if you do roll the die, you still have 5⁄6 probability of not rolling 6”. Just to make it clear that by avoiding the “spontaneous” future of humankind, we are not avoiding one specific future magically prepared for us by destiny. We are avoiding the whole probability distribution, which contains many possible futures, both nice and ugly.
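The die analogy can be sketched numerically. This is just an illustrative simulation, not anything from the original discussion: each “roll” stands for one possible spontaneous future, and declining to roll forgoes the entire distribution, not a single destined outcome.

```python
import random

def spontaneous_future(rng: random.Random) -> int:
    """One 'roll of the die': a future sampled from a distribution of outcomes."""
    return rng.randint(1, 6)

# Sample many hypothetical futures; treat a 6 as the "nice" outcome.
rng = random.Random(0)
rolls = [spontaneous_future(rng) for _ in range(60_000)]
p_nice = rolls.count(6) / len(rolls)
p_ugly = 1 - p_nice

# Rolling gives roughly a 1/6 chance of the nice future and 5/6 of the rest;
# not rolling avoids all of these outcomes at once, good and bad alike.
print(f"P(nice) ≈ {p_nice:.3f}, P(not nice) ≈ {p_ugly:.3f}")
```

The point the numbers make: forgoing the roll does not forfeit “the” 6; it forfeits a whole distribution in which the 6 was only ever one branch among six.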
Just because the AI can model something only imperfectly does not mean that without the AI the future would be perfect, or even better on average than with the AI.
‘Unmediated’ may not have been quite the word to convey what I meant.
My impression is that CEV is permanently established very early in the AI’s history, but I believe that what people are and want (including what we would want if we knew more, thought faster, were more the people we wished we were, and had grown up closer together) will change, both because people will be doing self-modification and because they will learn more.