Good point. One point against this would be that, upon reflection, I expect human immortality is unlikely to be optimal in most of the ways we might imagine it. On most likely consequentialist framings, I expect the resources that could be spent on continuing my own “individual self” would be more effectively used elsewhere. You might need a very liberal notion of “self” to consider what gets kept as “you.”
That said, this wouldn’t be a bad thing; it would be more like a series of obvious decisions and improvements.
That assumes some kind of impartial utility function. I believe that, to the extent people consciously endorse such preferences, it is self-deception. We are selfish-ish creatures, and if we control the AI in a meaningful sense, we will probably choose to live forever (or at least very long) rather than use those resources in some “better” way.
Thanks for your take on this. I think our intuitions here differ a fair bit.
I find it difficult to reason about what human brains will do once they are uploaded (or similarly transformed) and dramatically altered. Many of the things we’re used to now may change dramatically. It may be fair to expect that many kinds of “uploaded and modified humans” will become as different from modern humans as we are from simple algorithms or insects.
It could also be that some people will choose to “live forever”, but many others will choose to be replaced.
Well, anything can happen if we get arbitrarily altered, but as long as the alterations are in themselves an expression of our preferences, I stick with my prediction.