Highlights not already present in this thread:
Safely scalable AI
Humane AI
Benevolent AI
Moral AI
I like “scalable”. “Stability” is also an option for conveying that it is the long term outcome of the system that we’re worried about.
“Safer” rather than “Safe” might be more realistic. I don’t know of any approach, in ANY practical field, that is 100% risk-free.
And “assurance” (or “proven”) is also an important point. We want reliable evidence that the approach is as safe as its design claims.
But it isn’t snappy or memorable to say we want AI whose levels of benevolence have been demonstrated to be stable over the long term.
Maybe we should go for a negative? “Human Extinction-free AI” anyone? :-)