Under my model, the modal outcome for “we have single-single aligned TAI” is something like:
Most humans live in universe-poverty: they don’t get to control any of the stars & galaxies, but they also don’t have to die or suffer, and they live in (what we would perceive as) material abundance under some UBI-ish scheme. (The cost to whoever controls TAI of doing this is so negligible that they will probably do it, unless they are actively sadistic.) I am unsure what will happen with Malthusian drives: will the people who control TAI put a cap on human reproduction, or will they just not care enough, letting humanity grow until the allotment of labor from TAI systems granted by the TAI-controllers is “spread very thin”? My intuition, though not a strong one, is that they will impose the cap.
The people who control TAI might colonise the universe, but on my best model they would use the resources mainly to signal status to other people who control TAI systems. (Alternatively, they use the resources to procreate until the Malthusian limit is reached.) Either way, the cosmic potential is not realised in this scenario.
I don’t see why the first people to control TAI wouldn’t just upload themselves into a computer and amplify their own intelligence to the limits of physics.
Are you imagining that aligned AGI would prevent this?
I don’t see anything in my comment that conflicts with them uploading themselves. Or are you implying that uploaded superintelligent humans would no longer signal or procreate?
I kind of expect they’d still signal? At least to other equivalently powerful entities. I don’t really see why they would procreate other than through cloning themselves for strategic purposes.
But my point is simply that uploads of brains may not be constrained by alignment techniques in the same way that de-novo AGIs would be. And to the extent that uploaded minds are misaligned with what we want AGI to do, that itself seems like a problem.