Similarly, what would we be willing to risk to avoid possible negatives? Would we accept increasing the risk of human extinction by 1% in order to avoid them?
The things being avoided seem like they would increase, not decrease, risk.
The point is that it’s not obvious whether we’d want an AI to gamble with human extinction in order to avoid morally questionable outcomes, and that this is an important question to get right.
That point is more easily made when the examples don’t themselves risk extinction, like the human brain cell teddies, the differing ems, etc.
Yes, but it might be that the means needed to avoid them (heavy-handed AI interventions, perhaps) could be even more dangerous.