Is there a taxonomy for cooperation anywhere?
Cooperating with copies of yourself is a trope. But has anyone written up the other types of cooperation you'd expect to be evolutionarily stable? For example, cooperating on some things with agents that, in turn, cooperate on those things with agents like you.
A specific example might be cooperation for robustness between different agents: they'll save you when times aren't good for you, and you'll do the same for them. Because you each do better in vastly different situations, you want to keep the other around as insurance.
Another example might be, if animals were rational, a shark taking steps to preserve a diverse set of fish so it has a healthy ecosystem to sit at the apex of.
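To make the insurance example concrete, here's a toy simulation (all payoff numbers are arbitrary illustrative choices, not from any real model): two agents with anti-correlated incomes, where exactly one has a good round at a time, and the agent doing well can top up the other's reserves.

```python
import random

def simulate(rounds=200, share=True, seed=0):
    """Toy model of 'insurance' cooperation between two agents whose
    incomes are anti-correlated: each round, exactly one agent earns.
    Living costs 1 unit per round; an agent whose reserves go negative
    is out. Returns how many agents are still alive at the end."""
    rng = random.Random(seed)
    reserves = [5.0, 5.0]
    alive = [True, True]
    for _ in range(rounds):
        good = rng.randrange(2)            # which agent has a good round
        for i in (0, 1):
            if alive[i]:
                income = 3.0 if i == good else 0.0
                reserves[i] += income - 1.0  # earn, then pay upkeep
        if share:
            # insurance: the richer agent tops up the other when it runs low
            rich, poor = (0, 1) if reserves[0] > reserves[1] else (1, 0)
            if alive[rich] and alive[poor] and reserves[poor] < 2.0:
                transfer = min(2.0, reserves[rich] - 2.0)
                if transfer > 0:
                    reserves[rich] -= transfer
                    reserves[poor] += transfer
        for i in (0, 1):
            if reserves[i] < 0:
                alive[i] = False
    return sum(alive)
```

With sharing, the transfers keep both agents above water indefinitely; without it, a long enough unlucky streak kills one, even though the pair's combined income is always positive. That asymmetry is the whole case for keeping the other around.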
One last post from me for a bit:
Does anyone want to join me in exploring a simulation of the safety and exploratory behaviour of many copies of an LLM with the same activations vs. many copies with different activations? Known-good activations, and points on the line between them, too.
I think this might have important implications for how we design and deploy AI. I'm hoping to answer these questions:
- Do we want a few well-tested models, or an ecosystem of models/agents that rely on each other?
- What are the trade-offs?
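As a starting point, here's a rough sketch of the kind of comparison I have in mind, with a cheap stand-in for the real thing: instead of actual LLM activations, each agent is a single policy parameter, `spread` interpolates between identical copies (spread = 0) and diverse copies centred on a known-good value, and we measure how much of the state space the ensemble covers versus how many agents stay inside a "safe" band. Every threshold here is an arbitrary placeholder.

```python
import random

def run_ensemble(n_agents=8, spread=0.0, steps=100, seed=0):
    """Toy stand-in for the proposed experiment (not real activations):
    each agent random-walks around its own parameter, which sits at
    distance up to `spread` from a known-good point. Returns
    (coverage, safe_fraction):
      coverage      -- number of coarse-grained states the ensemble visited
      safe_fraction -- fraction of agents that never left the safe band
    """
    rng = random.Random(seed)
    good = 0.0                                 # the known-good parameter
    params = [good + spread * rng.uniform(-1, 1) for _ in range(n_agents)]
    visited = set()
    safe = [True] * n_agents
    for _ in range(steps):
        for i, p in enumerate(params):
            x = p + rng.gauss(0, 0.3)          # agent's state this step
            visited.add(round(x, 1))           # coarse-grained coverage
            if abs(x) > 2.0:                   # leaving the band is 'unsafe'
                safe[i] = False
    return len(visited), sum(safe) / n_agents
```

Comparing `run_ensemble(spread=0.0)` against `run_ensemble(spread=1.0)` shows the trade-off in miniature: the diverse ensemble explores more states, but its outlying members sit closer to the unsafe boundary. The real question is whether that shape survives when the agents are actual model copies rather than a one-parameter toy.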