This sort of fundamental disagreement does lead to some frustrating conversations when you are talking at cross-purposes: even if both of you understand the difference, one of you may be talking at a different simulacrum level.
It reminds me of a conversation I had some time back with a school principal, which went something like this: He was trying to come up with proposals for how the school system could use LLMs, and naturally asked me for ideas, as I know a lot about LLMs and we’d discussed them in the past.
I replied that it was mostly a waste of time, because there was nothing really useful he could do with LLMs/AI in general. He was surprised—hadn’t I been telling him for years about AI scaling, how it was going to enable total cheating, and how LLMs were already capable of doing almost all high-school-level work and were only going to keep getting better, shoot into the stratosphere, and become PhD-level? How could I think that LLMs were not potentially extremely useful to him?
I said that it was because, to really make use of LLMs for their ostensible purpose of education, they would have to reorganize the system, fire a lot of people, and replace them with LLMs; and the actual purpose of the school system was to preserve itself and provide jobs (and increasingly, simply provide ‘pensions’) and daycare (in that order); and so LLMs were useless to them—even if they were used for things like making reports faster to write, by Parkinson’s law that would simply lead to more reports being demanded until the equilibrium was restored. If he proposed anything like that, it would be ignored at best and held against him at worst, and there were probably better things to spend his time on. (Specifically, since there was nothing he could do about AI scaling, and any adaptations in the short run would be obsolete in a few years while he still had decades to go, he should instead be thinking hard about his career and start figuring out how to skate to where the puck will be: what does a school principal do in, say, 5 years’ time, when there is AGI and everything he does on his computer can be done better for pennies a day?)
He hotly denied this as a load of overly-cynical tosh: schools and education are about learning! I knew he knew better than the official cant (he was too good at the bureaucracy to really believe that), and after a lot of arguing, I finally asked him what would happen if LLMs could replace him and all the teachers completely—would he propose they do that and turn in his resignation letter? He admitted that he would not, and at last conceded that a major purpose of the system was also to provide make-work welfare positions for the locals. (The local peoples have high unemployment, poverty, and alcoholism rates and cannot find any meaningful employment in the private sector.) I noted that, given that purpose, LLMs (or any ‘efficiency’ improvement at all) could not offer any large gains, because what LLMs do is what those people do, and what those people did was already superfluous; since it was not politically possible to just send them all home with a UBI welfare check, and make-work jobs were the chosen solution, he should instead be figuring out how to make things less, not more, efficient. (I may or may not have told the Milton Friedman teaspoon joke.)
He reframed his objective as, ‘can we make things more pleasant?’ This was finally a realistic goal… but also not one I could really help with: to remove papercuts and genuinely unnecessary friction or pain, you have to know the system in intimate detail, which an outsider like myself does not, and it is also a goal whose benefits will definitionally be small. But I hoped that by coming to a more honest admission about which system he was working in—a system in which OP points #1–4 are not true—he was at least better off for the conversation.
Even if one accepts the premise that the purpose is not to educate children, education clearly still occurs, and its effectiveness varies by school depending on a number of variables, many of which are controllable. Given that, you can increase how much the children learn without undermining the “true purpose” of the system, whatever one envisions that to be. To use your example, perhaps the children producing more reports actually does help them learn more. I am low-confidence on that particular example, but it seems very possible to implement systems that increase educational efficiency while staying below some threshold of imposing material losses on current stakeholders. I think your response here was too cynical.
We actually did have a post the other day reporting positive results of LLM use for education. The linked Harvard study contains a prompt that I was easily able to adapt and use, with some fun and engaging results (I just tried it on myself, not with students). I think suggesting a constructive approach like this, for the (secondary) purpose of education, could have been added to the discussion with the principal to make it seem less cynical. Also, training the disadvantaged locals in LLM use for teaching (and, as a side effect, empowering them to use the LLM tutor themselves to learn about new topics in an engaging way) could even benefit them in the short to medium term.
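For concreteness, here is a minimal sketch of what “adapting a tutor prompt” can look like in practice, assuming the OpenAI Python client; the system-prompt wording and the model name below are my own placeholders, not the study’s actual prompt:

```python
# Minimal sketch: wrap a Socratic-tutor system prompt around a chat API.
# The TUTOR_PROMPT text and model name are illustrative placeholders only.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

TUTOR_PROMPT = (
    "You are a friendly tutor. Never give the answer outright; "
    "ask one guiding question at a time, check the student's reasoning, "
    "and adjust difficulty based on their replies."
)

def tutor_turn(history: list[dict], student_message: str) -> str:
    """Send one student message and return the tutor's reply."""
    history.append({"role": "user", "content": student_message})
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; any chat model would do
        messages=[{"role": "system", "content": TUTOR_PROMPT}] + history,
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    return reply

history: list[dict] = []
print(tutor_turn(history, "Why does dividing by a fraction flip it?"))
```

The point of keeping the history list outside the function is just that a tutor only works as a multi-turn loop; a single one-shot question defeats the Socratic framing.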
Total tangent: this article from 2011 attributes the quote to a bunch of people, and finds an early instance in a 1901 newspaper article.
I would love to hear the principal’s take on your conversation.
I’m sure it would be less flattering to me than my version, because people never remember these sorts of conversations the same way. If you think that it might not have happened like that, then just treat it as a hypothetical discussion that could have happened, and ponder how contemporary Western lower-education systems can make truly transformative use of AGI (rather than minor tinkering around the edges) which preserves all existing compensation/status/prestige/job/political arrangements and which the teachers’ unions and pension plans would not be implacably opposed to.
It’s a good thing to think about if you are trying to gauge what sort of economic or societal changes might happen over the next decade, especially if you are trying to use that as a proxy for ‘is AGI real’, as so many people are. Personally, my conclusion has long been that the economy & society are so rigid that most such arrangements will remain largely intact even if they are dead men walking, and the pace of AI progress is so rapid that you should basically ignore any argument of the form ‘but we still have human teachers, therefore, AGI can’t be real’.
Maybe his actual goal was using AI for the purpose of signaling to other bureaucrats? Using AI in an innovative way might mean being able to apply for grants.
Even if LMMs (you know, LLMs sensu stricto can’t teach kids to read and write) are able to do all the primary work of teachers, some humans will have to oversee the process, because as soon as a dispute between a student and an AI teacher arises (e.g., about grades, or because the child is not willing to study), parents will inherently distrust the AI and require the intervention of a qualified human teacher.
Also, since richer parents are already paying for a more pleasant education experience in private schools (often, but not always, organized according to the Montessori method), I believe that if jobs and daycare really become the focus of middle education, taxpayers would gladly agree to move the school system in a more enjoyable and perhaps gamified direction. Most likely, some workers for whom ‘teacher’ would no longer be an appropriate term (‘pedagogues’?) will look after the kids and also oversee the AI teaching process to some extent.