One of the more common responses I hear at this point is some variation of “general intelligence isn’t A Thing, people just learn a giant pile of specialized heuristics via iteration and memetic spread.”
I’m very uncertain about the validity of the question below, but I shall ask it anyway, and since I don’t trust my own way of expressing it, here’s Claude on it:
> The post argues that humans must have some general intelligence capability beyond just learning specialized heuristics, based on efficiency arguments in high-dimensional environments. However, research on cultural evolution (e.g., “The Secret of Our Success”, “Cognitive Gadgets”) suggests that much of human capability comes from distributed cultural learning and adaptation. Couldn’t this cultural scaffolding, combined with domain-specific inductive biases (as suggested by work in Geometric Deep Learning), provide the efficiency gains you attribute to general intelligence? In other words, perhaps the efficiency comes not from individual general intelligence, but from the collective accumulation and transmission of specialized cognitive tools?
I do agree that there are specific generalised forms of intelligence; I guess this points me more towards the idea that the generating functions of these might not be optimally sub-divided in the way we usually think about them?
Now, completely theoretically of course, say someone were to believe the above: why is the following really stupid?
Specifically, consider the following proposal: Instead of trying to directly align individual agents’ objectives, we could focus on creating environmental conditions and incentive structures that naturally promote collaborative behavior. The idea being that just as virtue ethics suggests developing good character through practiced habits and environmental shaping, we might achieve alignment through carefully designed collective dynamics that encourage beneficial emergent behaviors. (Since this seems to be the most agentic underlying process that we currently have, theoretically of course.)
It is entirely plausible that the right unit of analysis is cultures/societies/humanity-as-a-whole rather than an individual. Exactly the same kinds of efficiency arguments then apply at the societal level, i.e. the society has to be doing something besides just brute-force trying heuristics and spreading tricks that work, in order to account for the efficiency we actually see.
The specific proposal has problems mostly orthogonal to the “agency of humans vs societies” question.
> Instead of trying to directly align individual agents’ objectives, we could focus on creating environmental conditions and incentive structures that naturally promote collaborative behavior.
I think you are really on to something here. To align AI systems and agents, we may be able to build solutions on the existing institutions that already ensure alignment within human societies.
Look to the literature in economics and social science that explains how societies manage to align the interests of millions of intelligent human agents, despite all of those agents acting in their own self-interest.
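To make the “shape the incentives, not the inner objectives” idea concrete, here is a minimal toy sketch (my own illustration, not anything from this thread or any specific paper): self-interested imitation learners play a repeated public-goods game, and the only thing we vary is an environmental sanction on free-riding. All names and parameter values are assumptions chosen for illustration.

```python
# Toy public-goods game: agents keep the same self-interested imitation rule;
# only the environment's payoff structure (a sanction on defectors) changes.
# Parameters are illustrative assumptions, not calibrated to anything.
import random

N_AGENTS = 20          # hypothetical population size
ROUNDS = 500           # rounds of play and imitation
MULTIPLIER = 1.6       # public-good multiplier (< N_AGENTS, so defection is privately optimal)
ENDOWMENT = 1.0        # what each agent can contribute or keep
NOISE = 0.05           # chance of random exploration per agent per round

def run(sanction: float) -> float:
    """Return the final fraction of cooperators for a given sanction on defection.
    Agents imitate a randomly chosen higher-earning agent -- a crude stand-in
    for 'memetic spread of whatever works'."""
    coop = [random.random() < 0.5 for _ in range(N_AGENTS)]
    for _ in range(ROUNDS):
        pot = sum(ENDOWMENT for c in coop if c)
        share = pot * MULTIPLIER / N_AGENTS          # everyone gets an equal share of the pot
        # Cooperators gave up their endowment; defectors keep it but pay the sanction.
        payoffs = [share + (0.0 if c else ENDOWMENT - sanction) for c in coop]
        new_coop = list(coop)
        for i in range(N_AGENTS):
            j = random.randrange(N_AGENTS)
            if payoffs[j] > payoffs[i]:              # copy someone who earned more
                new_coop[i] = coop[j]
            if random.random() < NOISE:              # occasional random re-rolling
                new_coop[i] = random.random() < 0.5
        coop = new_coop
    return sum(coop) / N_AGENTS

if __name__ == "__main__":
    print("cooperation without sanction:", run(sanction=0.0))
    print("cooperation with sanction:   ", run(sanction=1.5))
```

With the sanction set above the private gain from defecting, the same imitation dynamics that otherwise drive the population toward near-universal defection now spread cooperation instead. That is the sense in which the incentive structure, rather than any per-agent objective, is doing the aligning in this sketch.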