Getting humans to follow rules reliably is not a solved problem either, and yet human decision-making is relied upon all the time, including in safety-critical domains.
This is why, when humans want more reliability, they design entire systems of controlled decision-making: systems that involve many different people, consensus decisions, and cross-checking. The same can be done with AIs.
If AI doesn’t pose any other risks, it just becomes a numbers game. If you can’t make AI decision-making more reliable than a human’s, evaluate whether making more mistakes is acceptable in your case, given AI’s other advantages. If you can, then kick the human out.
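The "numbers game" intuition can be made concrete with a toy calculation. Under the (strong, often unrealistic) assumption that checkers make errors independently, majority voting drives the panel's error rate well below any single member's:

```python
from math import comb

def majority_error(p: float, n: int) -> float:
    """Probability that a majority of n independent checkers,
    each wrong with probability p, is collectively wrong (n odd)."""
    k = n // 2 + 1  # votes needed for a (wrong) majority
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# One checker that is wrong 10% of the time:
print(majority_error(0.10, 1))            # 0.10
# A panel of five such checkers voting by majority:
print(round(majority_error(0.10, 5), 5))  # 0.00856
```

Five mediocre checkers outperform one by more than a factor of ten here. The catch, of course, is independence: AI instances sharing a base model tend to make correlated mistakes, so real gains will be smaller.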
Completely unconvincing.