Each type of disruptive innovation devalues some human skills while making others more important. To know which skills atrophy and which get trained with AI, you need to look at what the relevant workflows actually look like.
Critical Skepticism and Intuitive Failure Spotting
At the current tech level, it seems to me that I’m practicing more intuitive failure spotting when I interact with AI. It makes plenty of errors, and using AI well is largely about spotting those failures and working around them.
This seems like an interesting way to look at things. While some of your arguments suggest that AI use can lead to the atrophy of critical skepticism, you’re pointing out a valid counter-argument: the current unreliability of AI actually trains a new form of “intuitive failure spotting.”
You’re right. In its current state, an AI doesn’t consistently provide perfect answers. It might confidently state a wrong fact, hallucinate a source, or produce a subtly flawed chain of reasoning. As a user, you develop a new “Spidey-sense” for these errors. The skill you practice is no longer just deep problem-solving but agile, real-time validation and error correction. You become a “proofer” or “auditor” of the AI’s output, looking for subtle inconsistencies and illogical leaps that a novice might miss. This is a very different kind of skill from the deep expertise of an experienced programmer or doctor.
This dynamic, however, is likely temporary. As AI models become more reliable and error-free, this new form of failure-spotting will diminish in importance. The danger is that as the AI’s accuracy approaches perfection, humans may become less vigilant, leading to a loss of the very skills you’re currently practicing. The real threat to skill atrophy isn’t the AI’s current fallibility but its eventual, perceived infallibility.