You need to be clear who is included in “us”. AI is likely to be trained on human understanding of identity and death, which is very much based on generational replacement rather than continuity over centuries. Some humans wish this wasn’t so, and hope it won’t apply to them, but there’s not enough examples (none in truth, few and unrealistic in fiction) to train on or learn from.
It seems likely that if “happy people” ends up in the AI goalset, it’ll create new ones that have higher likelihood of being happy than those in the past. Honestly, I’m going to be dead, so my preference doesn’t carry much weight, but I think I prefer to imagine tiling the universe with orgasmium more than I do paperclips.
It’s FAR more effort to make an existing damaged human (as all are in 2025) happy than just to make a new happy human.
To me it sounds like you didn't address the whole AI alignment question. That's the point: the AI's position might not follow from the dataset it was given, once it starts self-improvement.
True. The question didn’t specify anything about it, so I tried to answer based on default assumptions.