I think we have an elephant in the room. As I outlined in a recent post, networks of agents may do Hebbian learning as inevitably as two and two makes four. If this is the case, there are some implications.
If a significant fraction of human optimization power comes from Hebbian learning in social networks, then the optimal organizational structure is one that permits such learning. Institutional arrangements with rigid formal structure are doomed to incompetence.
If the learning-network nature of civilization is a major contributor to human progress, we may need to revise our models of human intelligence and strategies for getting the most out of it.
Given the existence of previously understudied large-scale learning networks, it’s possible that there already exist agentic entities of unknown capability and alignment status. This may have implications for the tactical context of alignment research and for research priorities.
If agents naturally form learning networks, the creation and proliferation of AIs whose capabilities don’t seem dangerous in isolation may have disproportionate higher-order effects due to the creation of novel large-scale networks or modification of existing ones.
It seems to me that the above may constitute reason to raise an alarm at least locally. Does it? If so, what steps should be taken?
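For concreteness, here’s a minimal sketch of the Hebbian dynamic I’m gesturing at (everything in it, the agent count, learning rate, and correlation structure, is invented for illustration, not taken from the linked post): connections between “agents” whose activity is correlated strengthen, while uncorrelated pairs stay weak.

```python
import numpy as np

# Toy model (all numbers invented for illustration): each "agent" emits a
# scalar activity signal; a Hebbian rule with decay strengthens connections
# between agents whose activity is correlated.
rng = np.random.default_rng(42)
n_agents, eta, steps = 4, 0.01, 5000
w = np.zeros((n_agents, n_agents))            # connection strengths

for _ in range(steps):
    shared = rng.normal()                     # common cause driving agents 0 and 1
    x = rng.normal(scale=0.5, size=n_agents)  # private noise per agent
    x[0] += shared
    x[1] += shared
    # "Fire together, wire together", plus a decay term so weights stay
    # bounded: w drifts toward the correlation structure of the activity.
    w += eta * (np.outer(x, x) - w)

np.fill_diagonal(w, 0.0)                      # ignore self-connections
# The network has "learned" the correlation: w[0, 1] ends up large and
# positive, while pairs with no shared cause stay near zero.
```

The point is that no individual agent intends this: the correlation structure gets written into the connections as a side effect of local co-activation, which is why I say the learning may be as inevitable as arithmetic.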
For many years I’ve had the suspicion that complex organizations (religions, governments, ideologies, corporations, really any group of coordinating people) constitute higher-level meta-agents with interests distinct from those of their members. That suspicion only became more certain when I read the stuff about immoral mazes etc. here. I had similar ideas about ecology: that in some sense “Gaia” is an intelligent-ish being with organisms as its neurons. (Of course, I used to be a New Ager, so these intuitions were rooted in woo, but as I became more rational I realized they could be true without invoking anything supernatural.) But I’ve never been able to make these intuitions rigorous. It’s exciting to see that, as mentioned in your post, some recent research is going in that direction.
The way I see it, humans haven’t ever been the only intelligent agents on the planet, even ignoring the other sapient species like chimps and dolphins. Our own memes self-organize into subagents, and then into egregores (autonomous social constructs independent of their members), and those are what run the world. Humans are just the wetware on which they run, like distributed AIs.
They’re called egregores.
Yes, I’m aware. I was into chaos magick as a teen. :) Also if you’ll notice I used the term in the comment.
Kinda valid, but I personally prefer to avoid “egregore” as a term. It has too many competing meanings, each narrowing it in the wrong places.
E.g., some use it specifically to refer to parasitic memeplexes that damage the agency of the host. That cuts directly against the learning-network interpretation, IMO, because independent agency seems necessary for the network to learn optimally.
In chaos magick, which is where I learned the term, egregores are just agentic memeplexes in general, IIRC. That’s how I’ve always used it. Another, perhaps better, definition would be “distributed collective subagents”.
I’m pretty sure “social constructs” in postmodernist philosophy are the same thing, but that stuff’s too dense for me to bother reading. Another good term might be “hive minds”, but that has unfortunate Borg connotations for most people and is an overloaded term in general.
Yeah, I don’t see much reason to disagree with that use of “egregore”.
I’m noticing I’ve updated away from using references to any particular layer until I have more understanding of the causal patterning. Life, up to the planetary and down to the molecular, seems to be a messy, recursive nesting of learning networks with feedbacks and feedforwards all over the place. Too much separation/focus on any given layer seems like a good way to miss the big picture.