I agree with you that deploying AI in high-impact safety-critical applications under these conditions and relying on the outputs as though they met standards they don’t meet is insane.
I would, naturally, note that (1), (2), and (3) also apply to humans. We have no option for deploying minds in these applications that doesn’t come with some version of these problems. LLMs have different versions of them, ones we understand nowhere near as well as we understand the limitations of humans, but what that means for how and whether we can use them, even now, to improve reliability and overall quality of outcomes in specific contexts is not a straightforward derivation.
I would also add that (3) is in some sense true but also an unattainable ideal. There is no set of rules we know how to write down that actually specifies what we (or any individual) want to happen well enough for any mind, natural or artificial, to be safe by following them. Even in domains where we can get closer to that ideal of writing down the right procedure, it takes a tremendous amount of work to get natural minds to actually follow it.