You are correct. What I casually refer to as AI is better understood as Artificial General Intelligence: something that is more than just a decision machine, something with more complex, state-aware, and nuanced outputs. The problem I have with the "is it ML, is it AI, is it just a word salad machine" debate is that there is a sort of slippery slope fallacy in play. If we say it is okay to call an LLM "AI," then are the outputs of a Bayesian learning system "AI"? What if we create a complex set of logic assertions to simulate speech (like A.L.I.C.E., the Artificial Linguistic Internet Computer Entity)? Is that still "AI"? I'm not saying these things to answer the question for anyone else, but as my own internal guardrail I've started drawing a line around LLMs and putting a label on that line saying "this is not AI," for all of them in isolation. Perhaps this is simply a form of psychological defense, like calling ChatGPT a "clanker" to remind myself how simple the technology is compared to a biological brain.