I’ve sketched my own view elsewhere in this post as:
Strictly speaking, teleosemantics is a theory of what constitutes semantic content, but I’m using it in this essay to talk about how states acquire semantic content. My model is that teleosemantic optimisation is how systems typically acquire semantic content but, once acquired, mental content is constituted by the causal/functional roles of the concepts themselves.
This seems pretty close to your “structurally-optimised” picture. However, where we might diverge is that I’m aiming for a more realist notion of content, whereas (if I’m reading you correctly) you’re happier with a more deflationary view, i.e. we can treat systems as if they have content using the Intentional Stance without there necessarily being a stance-independent fact of the matter about whether content is present.
Despite the similarity in our views, there’s one bullet I’d try to avoid biting if possible:
The cost is affirming that “1+1=2” means something even if it is written by accident via the random movements of particles.
The original formulation of teleosemantics is meant to block exactly this. Roughly, content is located between a producer mechanism and a consumer mechanism. For example, a frog has a visual system which produces states that lock onto flies and a motor system which consumes those states by snapping its tongue at them. Here the state with content [fly] gets its meaning from sitting between producer and consumer mechanisms that jointly participate in an optimisation process (natural selection, in the frog’s case). In this way, you can say that the producer produces content for the consumer.
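To make the producer/consumer picture concrete, here is a deliberately toy Python sketch. The function names and the crude two-mechanism decomposition are my own illustrative assumptions (including the classic detail that the frog’s detector also fires on BB pellets), not anything drawn from the teleosemantics literature:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class Signal:
    """An internal state passed from the producer to the consumer."""
    label: str  # the candidate content, e.g. [fly]

def visual_producer(stimulus: str) -> Optional[Signal]:
    """Producer: tokens an internal state for small dark moving things."""
    # The frog's detector can't distinguish flies from BB pellets.
    if stimulus in {"fly", "bb-pellet"}:
        return Signal("fly-here")
    return None

def motor_consumer(signal: Optional[Signal]) -> str:
    """Consumer: acts on the state by snapping the tongue at its source."""
    return "snap-tongue" if signal is not None else "sit-still"

# On the teleosemantic story, Signal("fly-here") carries the content [fly]
# because the producer/consumer pair was optimised around cases where the
# state tracks flies -- even though the detector also fires on BB pellets.
print(motor_consumer(visual_producer("fly")))   # snap-tongue
print(motor_consumer(visual_producer("leaf")))  # sit-still
```

The point of the sketch is only that the content candidate lives in the `Signal` passed between the two mechanisms, not in either mechanism alone.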
I think you can anchor this in the causal-functional roles played by the producer/consumer mechanisms rather than in the fact that they’ve undergone a specific optimisation process. That is, we don’t require the content to have actually been produced/consumed via an optimisation process, only that it plays the right causal-functional role.
In this way, a random configuration of space dust that happens to arrange into “1+1=2” doesn’t have content in its own right, because there is no sophisticated producer mechanism. Even if an agent comes along later to read the dust, the representational content lives in the agent’s existing structure; the dust pattern is just an “empty vehicle” that gets co-opted by the agent’s existing representations rather than being a genuinely contentful state.
This contrasts with Swampman, who instantiates a full producer/consumer architecture that is structurally and functionally isomorphic to our own. On my functional-role picture this is enough to constitute genuine content, even though he never underwent the optimisation process that would normally be needed to acquire it.
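The asymmetry between the space dust and Swampman can be caricatured in toy code: suppose (my illustrative assumption, not a serious analysis) that the functional-role test is simply whether both the producer role and the consumer role are filled, with history playing no part:

```python
from typing import Callable, Optional

def has_content(producer: Optional[Callable], consumer: Optional[Callable]) -> bool:
    """Functional-role test: both roles must be filled; history is irrelevant."""
    return producer is not None and consumer is not None

# Space dust that happens to spell "1+1=2": a bare vehicle with no producer
# or consumer mechanism of its own.
space_dust_producer, space_dust_consumer = None, None

# Swampman: instantiates the full architecture despite having no selection history.
swampman_producer = lambda stimulus: stimulus         # stands in for a perceptual system
swampman_consumer = lambda state: "act on " + state   # stands in for a motor system

print(has_content(space_dust_producer, space_dust_consumer))  # False
print(has_content(swampman_producer, swampman_consumer))      # True
```

On this caricature the verdicts come apart exactly as the view requires: the dust fails the test for lack of any mechanism, while Swampman passes despite his missing history.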
To be clear, this view probably still has issues to work through, e.g. I’m probably drifting towards a type of semantic internalism, which a good chunk of philosophers will want to reject. I’m also not sure the view fully evades underdetermination worries, e.g. which consumer fixes the content? In standard teleosemantics the consumer is picked out by the historical selection process, but on this view there could, in principle, be many candidate consumers consistent with the same causal-functional structure.
Thanks for the thoughtful and detailed response!