Yup. Fundamentally, I think that human minds (and practically-implemented efficient agents in general) consist of a great many patterns/heuristics of varying levels of shallowness, much as LLMs do, plus a deeper general-intelligence algorithm. System 1 versus System 2, essentially; autopilot versus mindfulness. Most of the time, most people operate on these shallow heuristics, turning on the slower general-intelligence algorithm comparatively rarely. (Which is likely a convergent evolutionary adaptation, but I digress.)
And for some people it’s rarer than for others; and different people apply it in different domains.
Some people don’t apply it to their social relationships. Playing the characters society assigned them, instead of dropping the theatrics and effecting real change in their lives.
Others don’t apply it to their political or corporate strategizing. Simulacrum Levels 3-4: operating on vibes or reaction patterns, not models of physical reality.
Others don’t apply it to their moral reasoning: deontologists, as opposed to consequentialists.
Still others, as this post suggests, don’t apply it to the field in which they’re working.
… plus probably a ton more examples from all kinds of domains.
LW-style rationality, in general, can be viewed as an attempt to get people to use that “deeper” general-purpose reasoning algorithm more frequently: to actively build a structural causal model of reality, drawing on all the information streams available to them, and run queries on it, instead of acting off reactively learned, sporadically updated policies.
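To make that contrast concrete, here's a minimal sketch (Python, with entirely made-up situations, actions, and probabilities, not anyone's actual model) of the difference between acting off a cached policy and querying an explicit causal model:

```python
# Purely illustrative toy example; the situations, actions, and numbers are invented.

# Mode 1: a reactively-learned policy -- a cached mapping from situation to action,
# updated only when feedback happens to arrive.
reactive_policy = {
    "rain": "take_umbrella",
    "sun": "wear_hat",
}

def act_reactively(observation: str) -> str:
    # Unfamiliar situation? Fall back on habit.
    return reactive_policy.get(observation, "do_usual_routine")

# Mode 2: an explicit (tiny) causal model of how actions affect outcomes,
# which we query by simulating each candidate action.
causal_model = {
    ("rain", "take_umbrella"):    {"get_wet": 0.1},
    ("rain", "do_usual_routine"): {"get_wet": 0.9},
    ("sun", "wear_hat"):          {"sunburn": 0.1},
    ("sun", "do_usual_routine"):  {"sunburn": 0.5},
}

def act_deliberately(observation: str, candidate_actions: list[str]) -> str:
    # Run a query against the model: which action minimizes the predicted
    # probability of bad outcomes?
    def expected_badness(action: str) -> float:
        return sum(causal_model.get((observation, action), {}).values())
    return min(candidate_actions, key=expected_badness)

print(act_reactively("rain"))                                           # take_umbrella
print(act_deliberately("rain", ["take_umbrella", "do_usual_routine"]))  # take_umbrella
```

The point isn't the toy numbers, of course: it's that the second mode can handle novel situations and novel goals by re-querying the model, whereas the first can only replay whatever mapping it happened to have cached.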
The dark-room metaphor is pretty apt, I think.