The point is that it’s not obvious whether we’d want an AI to gamble with human extinction in order to avoid morally questionable outcomes, and that this is an important question to get right.
That point is more easily made with examples that don’t themselves risk extinction, like the human brain cell teddies, the differing ems, etc.