Do LLMs have intelligence (mind), or are they only rational agents? To understand this question, I think it is important to delineate the subtle difference between intelligence and rationality.
In the current practice of building artificial intelligence, the most common approach is the standard model, which refers to building agents that act rationally: agents that strive to accomplish some objective put into them (see "Artificial Intelligence: A Modern Approach" by Russell and Norvig). Agents built according to the standard model use an external standard for their utility function; in other words, the distinction between what is good and what is bad comes from an external source. It is we who give the models rewards, not the models who create those rewards themselves. That is why we have the outer and inner alignment problems, among other things. The standard model is, in essence, rationality, which deals with achieving goals (pragmatic rationality) and requires having a correct map of the territory (epistemic rationality).
This model is now dominant; it is how we currently describe agents: an agent receives percepts from an environment and emits actions. We have various agent architectures that allow better or more efficient interaction with the environment. The most capable of these is the rational agent, which has some utility function that lets it deal with uncertainty (dynamic, stochastic, continuous environments). But the standard model does not specify the performance standard (where the rewards come from), and that is a crucial part.
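To make the point concrete, here is a minimal sketch of the standard-model agent loop: percepts in, actions out, with action selection driven by a utility function that is handed to the agent from outside. All names here (`utility`, `choose_action`, `run`) are illustrative assumptions, not from any particular library; the only point is that the performance standard is supplied externally, not generated by the agent.

```python
# A minimal, illustrative sketch of a standard-model rational agent.
# The environment is a single integer state; actions nudge it by -1, 0, or +1.

def utility(state):
    """The performance standard. Crucially, it is supplied from the
    outside: the agent maximizes it but does not define it."""
    return -abs(state)  # e.g., the designer prefers states near 0

def choose_action(percept, actions):
    """Rational action selection: pick the action whose predicted
    outcome scores highest under the external utility function."""
    return max(actions, key=lambda a: utility(percept + a))

def run(initial_state, steps=10):
    """The agent loop: receive a percept, emit an action, repeat."""
    state = initial_state
    for _ in range(steps):
        percept = state                          # agent perceives the environment
        action = choose_action(percept, [-1, 0, 1])
        state = state + action                   # action changes the environment
    return state

print(run(5))  # the agent walks the state toward what the utility prefers: 0
```

Everything about *how* to act is in the agent; everything about *what counts as good* is in `utility`, which the designer wrote. Swap in a different utility and the same loop pursues a different goal, which is exactly the gap the text points at: the standard model says nothing about where that function should come from.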
This distinction between making a decision to act in an environment (rationality) and intelligence was well understood by Aristotle. The word “intelligence” is now equated (see the wiki) with nous, which Aristotle defined as follows [1139a30]:
Let us now similarly divide the rational part, and let it be assumed that there are two rational faculties, one whereby we contemplate those things whose first principles are invariable, and one whereby we contemplate those things which admit of variation: since, on the assumption that knowledge is based on a likeness or affinity of some sort between subject and object, the parts of the soul adapted to the cognition of objects that are of different kinds must themselves differ in kind.
To put this in context, the standard model is only part of the solution to artificial intelligence. We need another part that contemplates those invariable things, which Aristotle named “nous” (or the mind).
It is unclear to what degree LLMs are intelligent in this sense. Are they only rational agents? This seems unlikely, as they appear to have a good map of the world learned from the whole internet. However, we still need to give them this external performance standard. For example, we now see that models are given written instructions for that performance standard (see this “soul document”).