I think at that point it will come down to the particulars of how the architectures evolve—trying to philosophize in general terms about the optimal compute configuration for artificial intelligence to accomplish its goals is like trying to philosophize in general terms about the optimal method of locomotion for carbon-based life.
That said, I do expect "making a copy of yourself is a very cheap action" to persist as an important dynamic for future AIs (a biological system can't cheaply make a copy of itself including learned information, but if such a capability did evolve I would not expect it to be lost), and so I expect our biological intuitions around unique, single-threaded identity to make bad predictions.