I know I’m late to the party, but I’m pretty confused by https://www.astralcodexten.com/p/its-still-easier-to-imagine-the-end (I haven’t read the post it’s responding to, but I can extrapolate). Surely “we have a friendly singleton that isn’t Just Following Orders from Your Local Democratically Elected Government or Your Local AGI Lab” is a scenario that deserves some analysis...? Conditional on “not dying”, that one seems like the most likely stable end state, in fact.
Lots of interesting questions in that situation! Like, money still seems obviously useful for allocating rivalrous goods (which is most goods, really). Is a UBI likely when you have a friendly singleton around? Well, I admit I’m not currently coming up with a better plan for the cosmic endowment. But then you have population ethics questions: it really does seem like you have to “solve” population ethics somehow, or you run into issues. Most “just do X” proposals fall flat on their faces: “give every moral patient an equal share” fails if you allow uploads (or even sufficiently motivated biological reproduction), “don’t give anyone born post-singularity anything” seems grossly unfair, etc.
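To make the uploads objection concrete, here’s a toy sketch in Python (my own illustration with made-up names and numbers, not anyone’s actual proposal): if shares are recomputed per moral patient, whoever is most willing to copy themselves captures an arbitrarily large fraction of the pie.

```python
# Toy illustration: naive "equal share per moral patient" allocation,
# and how copying yourself games it. All numbers are made up.

def equal_share_allocation(lineage_sizes: dict[str, int], total: float) -> dict[str, float]:
    """Give every moral patient an equal share, then sum shares by lineage."""
    population = sum(lineage_sizes.values())
    per_capita = total / population
    return {name: per_capita * n for name, n in lineage_sizes.items()}

TOTAL = 1.0  # normalize the endowment to 1

# Two original people; Bob runs k copies of himself (uploads, or very
# motivated biological reproduction).
for k in (1, 10, 1_000, 1_000_000):
    shares = equal_share_allocation({"alice": 1, "bob_lineage": k}, TOTAL)
    print(f"k={k:>9}: Bob's lineage gets {shares['bob_lineage']:.6f} of the total")

# Bob's lineage's share is k / (k + 1), which tends to 1 as k grows, so the
# rule rewards whoever is most willing to self-copy.
```

The exact numbers don’t matter; the point is just that any rule which recomputes shares per capita turns reproduction into a resource-acquisition strategy, which is roughly what I mean by “fails”.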
And this is really only scratching the surface. Do you allow arbitrary cognitive enhancement, with everything that implies for the likely future distribution of resources?
For cognitive enhancement, maybe we could have a system like “the smarter you are, the more aligned you must be with those less smart than you”? So enhancement would be available, but it would make you less free in some ways.