I find this position on ems bizarre. If the upload acts like a human brain, and the uploads also seem normalish after you interact with them a bunch, I feel totally fine with them.
I am also more optimistic than you about creating AIs that have very different internals but that I would nonetheless consider good successors, though I don’t have a strong opinion.
I am not philosophically opposed to ems; I just think they will be very hard to get right, mainly because of the environment: the em will be interacting with a cheap, downgraded version of the real world. I am willing to change my mind on this. I also don’t think we should avoid building ems, but I think it’s highly unlikely an em life will ever be as good as or equivalent to a regular human life, so I would not want my lineage replaced with ems.
In contrast to my point on ems, I do think we should avoid building AIs whose main purpose is to match or exceed humans in “moral value”, and avoid anything that resembles building “AI successors”. In my opinion, the main purpose of AI alignment should be to ensure AIs help us thrive and achieve our goals, rather than to attempt to embed our “values” into AIs with the goal of promoting those “values” independently of our existence. (“Values” is in scare quotes because I don’t think there is any such thing as shared human values: individuals differ a lot in their values, goals, and preferences.)
Would you be convinced if you talked to the ems a bunch and they reported normal, happy, fun lives? (Assuming nothing nefarious happened, e.g. their brains being modified to make them report that.) I think I would find that very convincing. If you wouldn’t, what would you be worried was missing?
I would find that reasonably convincing, yes (especially because my prior is already that true ems would not tend to report their experiences any differently than we do).