As you know, I totally agree that mental content is normative—this was a hard lesson for philosophers to swallow, or at least for the ones who tried to “naturalize” mental content (make it a physical fact) by turning to causal correlations. Causal correlation was a natural place to start, but the problem with it is that, intuitively, mental content can misrepresent—my brain can represent Santa Claus even though (sorry) it can’t have any causal relation with Santa. (I don’t mean my brain can represent ideas or concepts or stories or pictures of Santa—I mean it can represent Santa.)
Ramana
Misrepresentation implies normativity, yep.
My current understanding of what’s going on here:
* There’s a cluster of naive theories of mental content, EG signaling games, which attempt to account for meaning in a very naturalistic way, but fail to account properly for misrepresentation. I think some of these theories cannot handle misrepresentation at all; EG, Mark of the Mental (a book about teleosemantics) discusses how the information-theoretic notion of “information” has no concept of misinformation (a signal is not true or false, in information theory; it is just data, just bits). Similarly, signaling games have no way to distinguish truthfulness from a lie that’s been uncovered: the meaning of a signal is what’s probabilistically inferred from it, so there’s no difference between a lie that the listener understands to be a lie & a true statement (see the sketch after this list). So both signaling games and information theory are in the mistaken “mental content is not normative” cluster under discussion here.
* Santa is an example of misrepresentation here. I see two dimensions of misrepresentation so far:
  * Misrepresenting facts (asserting something untrue) vs misrepresenting referents (talking about something that doesn’t exist, like Santa). These phenomena seem very close, but we might want to treat claims about non-existent things as meaningless rather than false, in which case we need to distinguish these cases.
  * Simple misrepresentation (falsehood or nonexistence) vs deliberate misrepresentation (lie or fabrication).
* “Misrepresentation implies normativity” is saying that to model misrepresentation, we need to include a normative dimension. It isn’t yet clear what that normative dimension is supposed to be. It could be active, deliberate maintenance of the signaling-game equilibrium. It could be a notion of context-independent normativity, EG the degree to which a rational observer would explain the object in a telic way (“see, these are supposed to fit together...”). Etc.
* The teleosemantic answer is typically one where the normativity can be inherited transitively (the hammer is for hitting nails because humans made it for that), and ultimately grounds out in the naturally-arising proto-telos of evolution by natural selection (human telic nature was put there by evolution). Ramana and Steve find this unsatisfying due to swamp-man examples.
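To make the signaling-game point from the first bullet concrete, here is a minimal sketch (my own illustration, with made-up states, signals, and policies; nothing below comes from the original discussion). The naive “meaning” of a signal in this framework is just the posterior a receiver infers from it, so an honest convention and a systematic lie that the receiver has seen through come out formally identical:

```python
# Minimal sketch (hypothetical states/signals/policies) of the point that
# signaling-game "meaning" is exhausted by a conditional probability,
# leaving no slot for truth, falsehood, or deception.
from fractions import Fraction

STATES = ["predator", "no_predator"]
prior = {s: Fraction(1, 2) for s in STATES}

# Two sender policies P(signal | state). "honest" sends sig_1 for predator;
# "inverted" is a systematic liar, sending sig_0 for predator instead.
honest   = {"predator": {"sig_0": 0, "sig_1": 1}, "no_predator": {"sig_0": 1, "sig_1": 0}}
inverted = {"predator": {"sig_0": 1, "sig_1": 0}, "no_predator": {"sig_0": 0, "sig_1": 1}}

def meaning(sender, signal):
    """The naive 'meaning' of a signal: the posterior P(state | signal)
    inferred by a receiver who knows the sender's policy."""
    joint = {s: prior[s] * sender[s][signal] for s in STATES}
    total = sum(joint.values())
    return {s: p / total for s, p in joint.items()}

# An uncovered systematic lie and an honest convention induce the same kind
# of posterior; nothing in the formalism marks one of them as deceptive.
print(meaning(honest, "sig_1"))    # -> predator 1, no_predator 0
print(meaning(inverted, "sig_0"))  # -> predator 1, no_predator 0
```

The same goes for the information-theoretic point: the mutual information between states and signals is identical for the honest and inverted policies, which is exactly the sense in which information theory has no concept of misinformation.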
Wearing my AI safety hat, I’m not sure we need to cover swamp-man examples. Such examples are inherently improbable. In some sense the right thing to do in such cases is to infer that you’re in a philosophical hypothetical, which grounds out Swamp Man’s telos in that of the philosophers doing the imagining (and so, ultimately, in evolution).
Nonetheless, I also dislike the choice to bottom everything out in biological evolution. It is not as if we have a theorem proving that all agency has to come from biological evolution. If we did, that would be very interesting, but biological evolution has a lot of “happenstance” around the structure of DNA and the genetic code. Can we say anything more fundamental about how telos arises?
I think I don’t believe in a non-contextual notion of telos like Ramana seems to want. A hammer is not a doorstop. There should be little we can say about the physical makeup of a telic entity, due to multiple instantiability: the symbols chosen in a language have very weak ties to their meanings; a logic gate can be made of a variety of components; an algorithm can be implemented as a program in many ways; a problem can be solved by a variety of algorithms.
However, I do believe there may be a useful representation theorem, which says that if it is useful to regard something as telic, then we can regard it as having beliefs (in a way that should shed light on interpretability).
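For what it’s worth, one schematic shape such a theorem might take, by analogy with Savage-style representation theorems (this formalization is my own gloss; none of the symbols come from the discussion above): letting π be the system’s policy, the claim would be that whenever some goal g usefully explains π, there exist a belief state b and utility function u rationalizing π:

```latex
% Schematic only: a Savage-style shape for the conjectured theorem.
% \pi = the system's policy, g = a candidate goal,
% b = a belief (probability) state, u = a utility function.
\[
\big(\exists\, g :\ \pi \text{ is usefully explained as pursuing } g\big)
\;\Longrightarrow\;
\big(\exists\, (b, u) :\ \pi \in \operatorname*{arg\,max}_{\pi'}\ \mathbb{E}_{s \sim b}\big[\, u(\pi', s) \,\big]\big)
\]
```

The interpretability hope would then be that the b delivered by such a theorem is pinned down tightly enough to be read off from the system itself.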