I’m not so sure I agree with that. LLMs are very useful for tasks that are much quicker for me to verify than to do myself (like writing boilerplate code with a library I’ll only use once, which I don’t want to look up the documentation for), but if I don’t know how to do something, I’d be very wary of using an LLM, on the basis that it might make mistakes that I don’t catch.
This is an especially strong concern when learning something. You might even get to a correct answer, but through a ‘rule’ that was hallucinated or may not always apply.
This is also a skill issue: learning to vet your sources properly and analyze what someone told you. If teachers are not already teaching students to do this with books and websites they consult, then yes an LLM can make the problem worse bit it’s already pretty bad.
I think it’s a bit more nuanced than that. Going through a list of human sources of various kinds, getting a sense of who wrote them and how reliable they might be on any issue, and assembling a complete picture of what you’re researching from that is one thing. Going through an LLM’s response to a question and determining which parts are true, which parts are hallucinations, and which parts are almost true but contain subtle mistakes that no human would make in quite the same way is a very different matter. The former is a biologically-ingrained skill that’s useful everywhere, and the latter much less so.
The best example I can give is the experience of debugging LLM-written code versus debugging the code of a human. The human code might have errors in it, but these errors are the result of a flawed set of assumptions that can generally be identified from context. You look for a mistake, and there’s a ‘well’ of incorrect reasoning surrounding it that you can follow to the crux of the problem. With LLM code, this isn’t the case. It’s very easy for the model to match the surface-level stylistic conventions of good code, and this often results in very unpredictable mistakes that you’d never see a human dev make. Calls to imaginary functions, dummied out functionality that’s not marked as such, and so on.
There’s a time and place for everything, but I think the set of scenarios in which I’m learning an entirely new process and not being extremely selective about the quality of my sources is quite small.
If students are seeing an LLM as a source at all then they’ve already made the critical mistake, in the context of e.g. writing a paper or solving a problem.
Can the LLM cite its own sources, such that you can re-generate the relevant findings from those sources without referencing anything else the LLM claimed? Great. If not, then from a research POV it’s claims are no more authoritative than writing “My dad said...”
I am also very, very aware that our teachers, our schools, and our students are not at all set up to actually get people to learn the skills/mindset needed to do better, or to use LLMs well more generally, for many reasons. The problem is somewhat harder and vastly more important and urgent than it used to be before LLMs. I am not sure most people can achieve the necessary mindset and habits to get past this without a lot of societal and technological scaffolding we don’t have today, and I can only hope we’ll build that adequately.
I’m not so sure I agree with that. LLMs are very useful for tasks that are much quicker for me to verify than to do myself (like writing boilerplate code with a library I’ll only use once, which I don’t want to look up the documentation for), but if I don’t know how to do something, I’d be very wary of using an LLM, on the basis that it might make mistakes that I don’t catch.
This is an especially strong concern when learning something. You might even get to a correct answer, but through a ‘rule’ that was hallucinated or may not always apply.
This is also a skill issue: learning to vet your sources properly and analyze what someone told you. If teachers are not already teaching students to do this with books and websites they consult, then yes an LLM can make the problem worse bit it’s already pretty bad.
I think it’s a bit more nuanced than that. Going through a list of human sources of various kinds, getting a sense of who wrote them and how reliable they might be on any issue, and assembling a complete picture of what you’re researching from that is one thing. Going through an LLM’s response to a question and determining which parts are true, which parts are hallucinations, and which parts are almost true but contain subtle mistakes that no human would make in quite the same way is a very different matter. The former is a biologically-ingrained skill that’s useful everywhere, and the latter much less so.
The best example I can give is the experience of debugging LLM-written code versus debugging the code of a human. The human code might have errors in it, but these errors are the result of a flawed set of assumptions that can generally be identified from context. You look for a mistake, and there’s a ‘well’ of incorrect reasoning surrounding it that you can follow to the crux of the problem. With LLM code, this isn’t the case. It’s very easy for the model to match the surface-level stylistic conventions of good code, and this often results in very unpredictable mistakes that you’d never see a human dev make. Calls to imaginary functions, dummied out functionality that’s not marked as such, and so on.
There’s a time and place for everything, but I think the set of scenarios in which I’m learning an entirely new process and not being extremely selective about the quality of my sources is quite small.
If students are seeing an LLM as a source at all then they’ve already made the critical mistake, in the context of e.g. writing a paper or solving a problem.
Can the LLM cite its own sources, such that you can re-generate the relevant findings from those sources without referencing anything else the LLM claimed? Great. If not, then from a research POV it’s claims are no more authoritative than writing “My dad said...”
I am also very, very aware that our teachers, our schools, and our students are not at all set up to actually get people to learn the skills/mindset needed to do better, or to use LLMs well more generally, for many reasons. The problem is somewhat harder and vastly more important and urgent than it used to be before LLMs. I am not sure most people can achieve the necessary mindset and habits to get past this without a lot of societal and technological scaffolding we don’t have today, and I can only hope we’ll build that adequately.