Perhaps the jailbreaking issue is easier to grasp.
“All modern AIs can be jailbroken to act dangerously.
Nobody knows how to solve this.
When we get really powerful AI, bad guys can use it to kill millions, maybe billions.”
I think this framing probably is easier to grasp, but it makes it sound like stopping the AI just means stopping some people, and stopping some people isn’t supernaturally difficult.