I’d like to see a debate between you (or someone who shares your views) and Hanson on this topic. Partly because I think surfacing your cruxes with each other would clarify both of your models for the rest of us. And partly because I’m unsure whether Hanson is right here. He’s probably wrong, but this matters to me: even if I and those I care for die, will there be something left in this world that I value?
My summary of Hanson’s views on this topic:
Hanson seems to think that any of our “descendants”, if they spread to the stars, will be doing complex, valuable things. His reasoning, as I understand it: a singleton is unlikely, so we’ll get many AIs competing against each other, and natural selection is pretty likely to rule. But many of the things we care about were selected for by natural selection because they’re useful, so we should expect some analogues of what we care about to show up in some future AIs. They may not be exact analogues, but he’s OK with that, since he thinks the best way to extrapolate our values is to look for fitter analogues of them.