I don’t see why the first people to control TAI wouldn’t just upload themselves into a computer and amplify their own intelligence to the limits of physics.
Are you imagining that aligned AGI would prevent this?
I don’t see anything in my comment that conflicts with them uploading themselves. Or are you implying that uploaded superintelligent humans won’t signal or procreate anymore?
I kind of expect they’d still signal? At least to other equivalently powerful entities. I don’t really see why they would procreate other than through cloning themselves for strategic purposes.
But my point is simply that uploads of brains may not be constrained by alignment in the same way that de novo AGIs would be. And to the extent that uploaded minds are misaligned with what we want AGI to do, that itself seems like a problem.