Thanks for the effortpost. I feel like I have learned something interesting reading it—maybe sharpened thoughts around cell membranes vis a vis boundaries and agency and (local) coherence which adds up to global coherence.
My biggest worry is that your essay seems to frame consequentialism as empirically quantified by a given rule (“the pattern traps that accumulated the patterns-that-matter”), which may well be true, but doesn’t give me much intensional insight into why this happened, into what kinds of cognitive algorithms possess this interesting-seeming “consequentialism” property!
Or maybe I’m missing the point.
> You can always try to see a rock as an agent—no one will arrest you. But that lens doesn’t accurately predict much about what the inanimate object will do next. Rocks like to sit inert and fall down, when they can; but they don’t get mad, or have a conference to travel to later this month, or get excited to chase squirrels. Most of the cognitive machinery you have for predicting the scheming of agents lies entirely fallow when applied to rocks.
The intentional stance seems useful insofar as the brain has picked up on a real regularity. Which I think it has. Just noting that reaction.
I worry this is mostly not about the territory but about the map’s reaction to the territory, in a way that may not be tracked by your analysis?
> Agents are bubbles of reflexes, when those reflexes are globally coherent among themselves. And exactly what way those reflexes are globally coherent (there are many possibilities) fixes what the agent cares about terminally tending toward.
This isn’t quite how I, personally, would put it (“reflexes” seems too unsophisticated and non-generalizing for my taste, even compared to “heuristics”). But I really like the underlying sentiment/insight/frame.