You cannot know a person is not secretly awful until they become awful: humans have an interpretability problem too. So suppose an awful person behaves aligned (non-awful) in order to get into the immortality program, then executes a treacherous turn and becomes extremely awful, heaping suffering on mortals and other immortals alike. The risks from misaligned immortals are basically the same as the risks from misaligned AIs, except that substrate differences mean immortals are slower at being awful. But suppose this misaligned immortal has an IQ of 180+. Such a being could devise novel ways of inflicting lasting suffering on other immortals, creating substantial s-risk. Moreover, given enough time, this single misaligned immortal could build a misaligned AI, and when that AI turns on the misaligned immortal, the other immortals, and the mortals (if any are left), you end up with suffering that would make Hitler blanch.